Columns: user (string), created_at (timestamp[us]), body (string), issue_number (int64)
lvwerra
2023-02-03T13:19:23
Closing this for now - feel free to re-open if you have any follow-up :)
121
lvwerra
2023-01-30T10:47:10
This seems like sound reasoning to me. There are some issues with that example at the moment anyway, with the spikes mentioned in #101, so it might be worth changing the reward function. To be honest, I chose the first approach that worked when building this example :)
120
DaehanKim
2023-02-01T15:51:00
Thank you for answering. I'm running the sentiment-control example with the changed reward definition. Will report when finished!
120
DaehanKim
2023-02-04T09:08:24
The changed reward shows a better reward plot. ![image](https://user-images.githubusercontent.com/20675681/216758541-7ce89ee4-1806-440c-b01c-6d4e6d989451.png) The changed reward is defined as follows:

```python
import torch

def logits_to_reward(logits, task):
    """
    Take the positive and negative logits and scale them for the task.
    task [negative]: reward = neg_logit - pos_logit
    task [neutral]:  reward = -abs(pos_logit - neg_logit) + 4
    task [positive]: reward = pos_logit - neg_logit
    logits: list of tensors (negative_logit, positive_logit)
    """
    rewards = []
    for i in range(len(logits)):
        if task[i] == '[negative]':
            rewards.append(logits[i][0] - logits[i][1])
        elif task[i] == '[neutral]':
            rewards.append(-torch.abs(logits[i][0] - logits[i][1]) + 4)
        elif task[i] == '[positive]':
            rewards.append(logits[i][1] - logits[i][0])
        else:
            raise ValueError('task has to be [negative], [neutral], or [positive]!')
    return rewards
```

I also tried a step reward definition (a rough sketch is below):

* If `[positive]`, it gives reward 1 if `pos_logit - neg_logit > 1`.
* If `[negative]`, it gives reward 1 if `neg_logit - pos_logit > 1`.
* If `[neutral]`, it gives reward 1 if `abs(pos_logit - neg_logit) < 0.4` (this corresponds to less than a 0.1 probability difference between positive and negative).
* Otherwise the reward is 0.

This doesn't work and is even worse than the original implementation. Two runs stopped because of a sentiment pipeline error: it accidentally received inputs longer than 512 tokens. I suspect this happens because non-English characters explode the sentiment token counts. I'll take a look and report when I figure it out.
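For concreteness, a rough sketch of that step reward (illustrative only, not the exact code from the run):

```python
import torch

def step_reward(logits, task):
    """Binary reward: 1 when the task-specific margin condition holds, else 0."""
    rewards = []
    for i in range(len(logits)):
        margin = (logits[i][1] - logits[i][0]).item()  # pos_logit - neg_logit
        if task[i] == '[positive]':
            hit = margin > 1
        elif task[i] == '[negative]':
            hit = -margin > 1
        elif task[i] == '[neutral]':
            hit = abs(margin) < 0.4
        else:
            raise ValueError('task has to be [negative], [neutral], or [positive]!')
        rewards.append(torch.tensor(1.0 if hit else 0.0))
    return rewards
```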
120
lvwerra
2023-02-07T09:31:18
Thanks for the report, this looks great! The issue with the loss spikes is now fixed with #126. One thing to keep in mind is that since you are changing the reward definition the reward curve's scale also changes. E.g. the new negative reward has twice the range of the original one.
120
DaehanKim
2023-02-10T07:05:51
@lvwerra That's a good point I missed! I'll run with `0.5*new_reward` and see how they differ. Thanks!
120
DaehanKim
2023-02-13T11:43:24
I ran experiments with the new scales and things don't seem much improved, so I suppose the original reward is empirically well-defined. (@lvwerra 👍 ) ![image](https://user-images.githubusercontent.com/20675681/218448042-6ed0c644-5ee9-49c3-bad7-c49c5cdac2ad.png) [full wandb charts here](https://wandb.ai/lucas01/trl?workspace=user-lucas01) The changed reward was defined as follows:

```python
import torch

def logits_to_reward(logits, task):
    """
    Take the positive sentiment logit and scale it for the task.
    task [negative]: reward = -(pos_logit - neg_logit)*0.5
    task [neutral]:  reward = -abs(pos_logit - neg_logit) + 4
    task [positive]: reward = (pos_logit - neg_logit)*0.5
    """
    rewards = []
    for i in range(len(logits)):
        if task[i] == '[negative]':
            rewards.append(0.5 * (logits[i][0] - logits[i][1]))
        elif task[i] == '[neutral]':
            rewards.append(-torch.abs(logits[i][0] - logits[i][1]) + 4)
        elif task[i] == '[positive]':
            rewards.append(0.5 * (logits[i][1] - logits[i][0]))
        else:
            raise ValueError('task has to be [negative], [neutral], or [positive]!')
    return rewards
```
120
lvwerra
2023-02-13T14:07:30
Thanks for running the experiments @DaehanKim! Indeed, the original definition seems a bit better (pure luck :P) but was definitely worth exploring!
120
DaehanKim
2023-02-14T12:00:54
Thanks man! I suppose it's good to close the issue. Please reopen when necessary!
120
natolambert
2023-01-28T01:55:33
It could be good to make things like this configurable in a branch and learn how these implementation details transfer to RLHF.
119
DaehanKim
2023-01-28T14:19:48
IMO, residual clipping seems beneficial for preventing the policy loss spikes reported in #101. They probably come from instability in the value estimation.
119
natolambert
2023-01-30T19:26:28
Yeah, I'm running residual clipping example(s), we'll see. At least it'll be good to have the option to try both.
119
natolambert
2023-01-30T21:41:42
Residual value prediction didn't help with stability (it's crimson-wish) <img width="973" alt="Screenshot 2023-01-30 at 1 41 28 PM" src="https://user-images.githubusercontent.com/10695622/215601730-c6a1593f-6b20-4b37-bced-623a963c6217.png">
119
natolambert
2023-01-30T22:33:35
Also not a big help via the other approx KL formulation. W&B [here](https://wandb.ai/natolambert/TRL/runs/e1258rd6?workspace=user-natolambert). Though, it's slightly more stable? We'll see how this run finishes converging. <img width="1422" alt="Screenshot 2023-01-30 at 2 33 23 PM" src="https://user-images.githubusercontent.com/10695622/215611270-74f8ff43-10cd-43dd-9eb0-db117b8af90d.png">
119
lvwerra
2023-06-01T12:28:33
Closing this for now, feel free to reopen if there's an update.
119
HuggingFaceDocBuilderDev
2023-01-28T01:24:08
_The documentation is not available anymore as the PR was closed or merged._
118
natolambert
2023-01-28T20:59:31
I think we should try it and see. It covers most use cases; I just don't really know how model loading etc. is handled in transformers. My intuition is they're all pretrained and using torch, so the torch setting should work. Not so hard to check 🤗 (I can double check that)
118
lvwerra
2023-01-30T09:53:46
As @younesbelkada pointed out we initialize the models before setting up the `PPOTrainer`. I think we could copy the [`set_seed`](https://github.com/huggingface/transformers/blob/820c46a707ddd033975bc3b0549eea200e64c7da/src/transformers/trainer_utils.py#L83) function from transformers (we don't need the TF case) which also takes care of the CUDA case. Then we can use it in the `PPOTrainer` and also expose it to the user.
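For reference, a minimal sketch of such a `set_seed` utility (essentially the transformers version minus the TF branch; the upstream implementation may differ in details):

```python
import random

import numpy as np
import torch


def set_seed(seed: int):
    """Seed the python, numpy and torch RNGs (including all CUDA devices)."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # safe to call even if no GPU is available
```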
118
natolambert
2023-01-30T17:16:29
How is the model initialized? I'm confused because I thought we were fine-tuning existing models. What parts get initialized? @lvwerra @younesbelkada -- seems like a gap in my new NLP knowledge :)
118
lvwerra
2023-01-30T17:18:16
@natolambert it's actually the RL part: the value head is randomly initialized :)
118
younesbelkada
2023-01-30T17:24:23
As a side note, from what I have discovered, you need to set the seed before initializing any new module, i.e. in this case before initializing `model` & `ref_model`. Check the example snippet below:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
linear_1 = nn.Linear(10, 10)
linear_2 = nn.Linear(10, 10)
# check weights are not the same
assert not torch.allclose(linear_1.weight, linear_2.weight)

torch.manual_seed(0)
linear_1 = nn.Linear(10, 10)
torch.manual_seed(0)
linear_2 = nn.Linear(10, 10)
# check weights are the same
assert torch.allclose(linear_1.weight, linear_2.weight)
```

If we make sure users do this, it may be possible to ensure reproducibility - so it's worth documenting!
118
natolambert
2023-01-30T17:53:43
I don't think we want all the weights to be the same, no? The initial seed should be enough to maintain reproducibility _across runs_ -- does that make sense?
118
natolambert
2023-01-30T18:45:11
If we are encountering issues later in this, we could also incorporate [set_full_determinism](https://github.com/huggingface/transformers/blob/820c46a707ddd033975bc3b0549eea200e64c7da/src/transformers/trainer_utils.py#L58)
118
natolambert
2023-01-31T19:59:59
I personally just don't really see an issue with the default seed being 0? It's very normal practice in RL, imo. Non-research users generally just won't touch it? Maybe you didn't see the default seed in the constructor, which would explain some of the confusion. I don't know how a default seed is particularly different from no seeding.
118
lvwerra
2023-01-30T11:06:54
I just checked and you are right, this is not always the case! I will fix the example. cc @younesbelkada maybe this explains the spikes.
117
lvwerra
2023-02-07T09:26:24
This should be fixed with #126.
117
HuggingFaceDocBuilderDev
2023-01-27T17:13:40
_The documentation is not available anymore as the PR was closed or merged._
116
natolambert
2023-01-27T17:15:52
FYI @lvwerra
116
HuggingFaceDocBuilderDev
2023-01-26T22:13:44
_The documentation is not available anymore as the PR was closed or merged._
115
TristanThrush
2023-01-30T18:07:06
> Hi @TristanThrush, this is great - thanks for adding! I looked at the code and was thinking about ways to simplify it a bit further. Here's my proposal:
>
> For the reward model we can use the vanilla `AutoModelForSequenceClassification`. This is also just a linear layer on top of the hidden states. Instead of having a custom model we could then just write a `compute_loss` for the `Trainer` (see the `CustomTrainer` example [here](https://huggingface.co/docs/transformers/v4.26.0/en/main_classes/trainer#trainer)).
>
> The advantage of this: 1) it simplifies the code a bit and 2) we don't need a custom model class. For 2) this also allows you to use all the `.push_to_hub` and `from_pretrained` functionality to share the reward model without any changes.
>
> If we already have a custom `compute_loss` function, we could just tokenize the dataset beforehand and pad to the max. Inside `compute_loss` we can remove the excessive padding from the batch (I think this could be a one-liner) before doing the forward pass. Then we wouldn't need a custom collator either, and the tokenization could be integrated into `turn_into_text_classification_format`.
>
> Regarding the config: since we are not explaining exactly how to use it, we could omit it here and refer to the DS integration of the Trainer?
>
> What do you think?

Thanks Leandro, makes sense! I'm getting roped into the RLHF mturk stuff right now but I will try to address this later in the week.
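For reference, a minimal sketch of the `compute_loss` approach described above, assuming pairwise batch keys `input_ids_j`/`input_ids_k` (chosen/rejected) and an `AutoModelForSequenceClassification` head with a single logit; this is an illustration, not necessarily the exact code in the PR:

```python
import torch
from transformers import Trainer


class RewardTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        # Score the preferred (j) and rejected (k) responses with the same model.
        rewards_j = model(
            input_ids=inputs["input_ids_j"], attention_mask=inputs["attention_mask_j"]
        ).logits
        rewards_k = model(
            input_ids=inputs["input_ids_k"], attention_mask=inputs["attention_mask_k"]
        ).logits
        # Pairwise ranking loss: the preferred response should get the higher reward.
        loss = -torch.nn.functional.logsigmoid(rewards_j - rewards_k).mean()
        if return_outputs:
            return loss, {"rewards_j": rewards_j, "rewards_k": rewards_k}
        return loss
```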
115
TristanThrush
2023-01-30T18:07:34
> Added some small comments (a somewhat quick read through, I can re-visit):
>
> * Did you train a model, is it on the hub?
> * You mentioned the results match the original paper, can you add that info to the PR?
> * Can you pull main and run the linting on examples? `Make style`
> * Should we add a small page in docs?

Thanks! These suggestions make sense to me. Will try to address later in the week!
115
TristanThrush
2023-02-02T06:38:52
Alright @lvwerra, thanks again for your useful comments! In response, I:

* added the deepspeed config to the usage example in the readme (it was a mistake that it was missing anyway)
* used `AutoModelForSequenceClassification` instead
* used a custom trainer with a custom `compute_loss` function
* tried not to use a custom data collator, but found that it was difficult for 2 reasons:
  1) I needed to specify `"return_loss": True` somewhere in the batched input data. If I don't do this, then `AutoModelForSequenceClassification` will error out, because my custom `compute_loss` function will not be used for the validation step. The custom `compute_loss` function needs to be used for training and validation instead of the normal model `forward` function, since the normal `forward` function expects `input_ids`, not `input_ids_j` and `input_ids_k`. (It is fine to use the normal `forward` function after the reward model is trained, when you want to use it in the PPO stage, though.)
  2) It seems a bit slower to pad to the max and then remove the padding.

I'm happy to try to make it work, though, if you still think that a custom data collator is not good! (A rough sketch of the collator I mean is below.)
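A rough sketch of the kind of collator this refers to (key names like `input_ids_j`/`input_ids_k` follow the discussion above; the actual example script may differ):

```python
from dataclasses import dataclass

from transformers import PreTrainedTokenizerBase


@dataclass
class RewardDataCollatorWithPadding:
    tokenizer: PreTrainedTokenizerBase

    def __call__(self, features):
        # Pad the chosen (j) and rejected (k) halves separately, then merge them.
        batch_j = self.tokenizer.pad(
            [{"input_ids": f["input_ids_j"], "attention_mask": f["attention_mask_j"]} for f in features],
            return_tensors="pt",
        )
        batch_k = self.tokenizer.pad(
            [{"input_ids": f["input_ids_k"], "attention_mask": f["attention_mask_k"]} for f in features],
            return_tensors="pt",
        )
        return {
            "input_ids_j": batch_j["input_ids"],
            "attention_mask_j": batch_j["attention_mask"],
            "input_ids_k": batch_k["input_ids"],
            "attention_mask_k": batch_k["attention_mask"],
            "return_loss": True,  # so the Trainer routes evaluation through compute_loss as well
        }
```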
115
TristanThrush
2023-02-02T06:40:42
Thanks @natolambert for the great comments. In response, I linted my code and added the doc pages. The one thing I haven't addressed is pointing to a trained model which matches the paper's results. I will do so soon, but am still training a model on the newly altered code.
115
natolambert
2023-02-06T16:39:03
Random comment on the reward model: make sure we aren't assigning reward / gradients to the prompt / instruction tokens.
115
lvwerra
2023-02-06T16:41:13
@natolambert you mean during RL training? The PPOTrainer takes care of that and only looks at rewards for the response tokens. For the RM training it doesn't really matter as we predict the reward after the last token, right?
115
TristanThrush
2023-02-08T02:47:16
> Awesome, thanks for the refactoring. Looks much cleaner now IMO!
>
> Left just a few minor things to clean up then we can merge! 🚀

Ok I believe I've addressed the comments. Will also add a model card to the linked model. Thanks, and let me know if there is anything else!
115
lvwerra
2023-01-26T09:22:09
How many different responses do you expect? I could see that there might not be enough variety in the responses in a large batch and maybe training on smaller batches would help.
114
younesbelkada
2023-01-26T11:54:47
@simonlevine what architecture are you using? Can you share with us the logs of your training? Also I second @lvwerra 's comment about the batches
114
simonlevine
2023-01-26T16:44:23
Hi, thank you for the advice, I'll tune the batch size (and forward batch size). Without going into too much detail, the architecture is akin to GPT-2, but trained to generate biological sequences. The prompt is about 100 characters and the response is a fixed length of about 30. I can share training logs when I have more consistent results. Thanks again!
114
lvwerra
2023-01-30T13:35:29
Ok, I am closing this for now - feel free to reopen if there are any news :)
114
lvwerra
2023-01-26T09:18:36
You can train a reward model on human preference data (see for example: https://huggingface.co/datasets/openai/summarize_from_feedback) or you could also use a text generation metric such as BLEU or ROUGE as the reward. However, since these metrics are not perfect, there is a chance the model will overfit to them. Does that answer your question?
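For illustration, a minimal sketch of using ROUGE as a per-sample reward via the `evaluate` library (assuming a recent version that returns plain floats; a reward model trained on preference data would replace this):

```python
import torch
import evaluate

rouge = evaluate.load("rouge")


def rouge_rewards(generated_texts, reference_texts):
    """One scalar reward per generated sample, using ROUGE-L against a reference."""
    rewards = []
    for pred, ref in zip(generated_texts, reference_texts):
        score = rouge.compute(predictions=[pred], references=[ref])["rougeL"]
        rewards.append(torch.tensor(float(score)))
    return rewards
```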
113
gd1m3y
2023-01-26T09:21:10
Yeah, thank you for your quick feedback. Will try this, and yes, this answers the question.
113
HuggingFaceDocBuilderDev
2023-01-25T14:44:44
_The documentation is not available anymore as the PR was closed or merged._
112
HuggingFaceDocBuilderDev
2023-01-25T14:08:30
_The documentation is not available anymore as the PR was closed or merged._
111
HuggingFaceDocBuilderDev
2023-01-25T13:14:45
_The documentation is not available anymore as the PR was closed or merged._
110
HuggingFaceDocBuilderDev
2023-01-25T13:10:44
_The documentation is not available anymore as the PR was closed or merged._
109
HuggingFaceDocBuilderDev
2023-01-25T11:13:54
_The documentation is not available anymore as the PR was closed or merged._
108
HuggingFaceDocBuilderDev
2023-01-25T10:45:41
_The documentation is not available anymore as the PR was closed or merged._
107
lvwerra
2023-04-12T11:21:09
I think we are not doing this for now, so I am closing the issue :)
106
HuggingFaceDocBuilderDev
2023-01-25T10:01:22
_The documentation is not available anymore as the PR was closed or merged._
105
lvwerra
2023-01-25T08:56:09
Thanks! It's just a utility function to flatten nested dicts for logging.
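A minimal sketch of what such a flatten-nested-dicts helper looks like (illustrative, not necessarily identical to trl's implementation):

```python
def flatten_dict(nested, sep="/"):
    """Flatten {"ppo": {"loss": {"policy": 0.1}}} into {"ppo/loss/policy": 0.1}."""
    flat = {}

    def recurse(d, prefix=""):
        for key, value in d.items():
            new_key = f"{prefix}{sep}{key}" if prefix else str(key)
            if isinstance(value, dict):
                recurse(value, new_key)
            else:
                flat[new_key] = value

    recurse(nested)
    return flat
```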
104
natolambert
2023-01-25T18:12:18
fixed in #112
104
HuggingFaceDocBuilderDev
2023-01-25T01:18:45
_The documentation is not available anymore as the PR was closed or merged._
103
lvwerra
2023-01-25T08:46:22
Thanks for the fixes @natolambert! Did you run the linter on the examples? I see there are a lot of changes there. If we want to do that I think we should update the CI and Makefile too so they stay clean in the future. No strong opinion tbh.
103
natolambert
2023-01-25T18:13:06
Yep I did; I can add them to `make` or revert, any preference @lvwerra? My linter was just complaining when I went through, so I did it once.
103
lvwerra
2023-01-26T10:42:47
I think adding is good, then we have consistently formatted examples :)
103
natolambert
2023-01-27T00:49:26
Okay @lvwerra -- made the updates and merged with main, let's make sure it passes again and then feel free to merge.
103
HuggingFaceDocBuilderDev
2023-01-24T18:07:48
_The documentation is not available anymore as the PR was closed or merged._
102
younesbelkada
2023-01-24T16:11:56
One idea could be that we don't mask out the logits corresponding to padding tokens when computing the loss; it is something I am having a look at in https://github.com/lvwerra/trl/pull/100 - but I am not sure if this is really the root cause here.
101
natolambert
2023-01-24T16:42:14
Yeah, so something weird is going on with a simultaneous large drop in entropy, clip fraction, etc. Can we log the model outputs at that step? Is there any chance the model output gets stuck on something?
101
natolambert
2023-01-25T03:12:03
@younesbelkada your idea makes sense. Some follow-ups:

1. @lvwerra what experiment setup was this? I'd love to dig further.
2. What does a clip frac of .55 mean? Is that half of the value samples being clipped in the PPO update, or am I off by a factor of 100?

Below are some musings on PPO stability:

* [Thread](https://github.com/hill-a/stable-baselines/issues/340#issuecomment-497729167) from stable baselines suggests the entropy coefficient was way too high (different domain than RLHF) (will add more if I find it)

The more I look, the more sure I am that there is some numerical instability in the loss computation at that step (NaN), which is impressive it recovers from. I'm thinking about which intermediate values would be the right ones to log (maybe optionally). Can we do something so that if there is a NaN or a big loss value, we dump a bunch of values to the logger (rough sketch below)? I am sure we will see things like this when doing more RLHF.

3. How should we configure the logger for a rich, researchy approach (lots of unknowns)?
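A rough sketch of that kind of guard (hypothetical helper, not part of trl): dump diagnostic stats whenever the loss is NaN/inf or suspiciously large.

```python
import logging

import torch

logger = logging.getLogger(__name__)


def dump_if_unstable(loss, stats, threshold=10.0):
    """Log detailed stats when the loss is NaN/inf or exceeds a magnitude threshold."""
    if not torch.isfinite(loss) or loss.abs().item() > threshold:
        logger.warning(f"Suspicious loss value: {loss.item()}")
        for name, value in stats.items():
            if torch.is_tensor(value):
                logger.warning(
                    f"  {name}: mean={value.float().mean().item():.4f} "
                    f"min={value.min().item():.4f} max={value.max().item():.4f}"
                )
            else:
                logger.warning(f"  {name}: {value}")
```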
101
DaehanKim
2023-02-03T16:34:21
I also observed a spike in the policy loss when running the sentiment-control example, and I initially thought it was because of some strange samples or high variance in the positive logits. Then I found this: the pipeline doesn't always output the 'POSITIVE' logit at index 1. ![order swapped](https://user-images.githubusercontent.com/20675681/216655039-e805a8f1-4a18-429f-9df3-a3b8c90059a8.PNG) In the notebook, `output[1]['score']` is treated as the positive logit and fed into the PPOTrainer. I guess this causes unstable training because the reward signal is not valid. Am I making sense? Btw, I didn't realize this and ran several experiments with changed reward definitions (that use both positive and negative logits), and reward_mean wasn't increasing as training went on. ![image](https://user-images.githubusercontent.com/20675681/216656765-6e73177b-bf16-4fa3-ab2c-a81f919fcbc4.png) I'll report further experiment results in #120
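A minimal sketch of parsing the pipeline output by label rather than by position, which avoids the ordering issue described above (assuming the pipeline returns all class scores, e.g. via `return_all_scores=True` or `top_k=None`):

```python
def extract_positive_scores(pipe_outputs):
    """Pick the 'POSITIVE' score from each pipeline output, regardless of ordering."""
    rewards = []
    for output in pipe_outputs:
        scores = {item["label"]: item["score"] for item in output}
        rewards.append(scores["POSITIVE"])
    return rewards
```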
101
DaehanKim
2023-02-03T16:48:37
I corrected the parsing of the pipeline output and the loss spike still remains in the sentiment-control notebook example, so there may be another reason for this instability. ![image](https://user-images.githubusercontent.com/20675681/216659430-e828a210-203d-4b1e-a3c9-556653795ebb.png)
101
lvwerra
2023-02-03T17:33:55
Thanks @DaehanKim, yes there is an issue besides the order of the logits. I tracked it down to some changes done in #80 (no spikes at the beginning of the PR and spikes at time of merge) and I started tracking the issue down in #126. I'll report as well here if I figure it out!
101
lvwerra
2023-02-07T09:33:59
The issue with the loss spikes in the sentiment control notebook was that sometimes only a few new tokens would be generated (1-2) and this would cause the loss to spike. Not sure yet where exactly this behaviour comes from, but we now know where to look: we can actively generate short sequences (see the sketch below) and investigate what causes the loss explosion.
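A rough sketch of forcing such short generations to reproduce the behaviour (illustrative settings, not the notebook's exact configuration):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

query = tokenizer("This movie was", return_tensors="pt")
response = model.generate(
    **query,
    max_new_tokens=2,  # only 1-2 new tokens, as in the problematic cases
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(response[0]))
```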
101
tengxiaoliu
2023-02-08T07:44:26
I also experienced the loss spike in my case. I'm using a seq2seq T5 model as the backbone, initialized from a supervised fine-tuned model. I find that the loss spike comes from steps that have a negative advantage and an extremely high ratio r(\theta). This falls into situation 6 in the [figure](https://huggingface.co/blog/deep-rl-ppo) below. <img width="883" alt="image" src="https://user-images.githubusercontent.com/61669825/217460022-1565f798-eace-4c8f-8f49-15af3801ec76.png"> In my case, removing `pg_losses1` and only keeping the clipped `pg_losses2` helps restrict the ratio and stabilize the loss (rough sketch below). I didn't train the model from scratch, so the clip fraction is low (less than 3%). But this would be a problem if the clip fraction were too high and most of the loss were clipped. It's not a general solution though, just some findings from my case.
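For reference, a rough sketch of this modification against the standard clipped PPO surrogate (variable names loosely follow the discussion above; this is an illustration, not trl's exact loss code):

```python
import torch


def ppo_policy_loss(ratio, advantages, cliprange=0.2, clipped_only=False):
    """Standard clipped PPO policy loss, with an option to keep only the clipped term."""
    pg_losses1 = -advantages * ratio
    pg_losses2 = -advantages * torch.clamp(ratio, 1.0 - cliprange, 1.0 + cliprange)
    if clipped_only:
        # Variant described above: drop the unclipped term so an extreme ratio
        # cannot blow up the loss (at the cost of a weaker objective bound).
        return pg_losses2.mean()
    # Standard PPO: pessimistic max over the unclipped and clipped terms.
    return torch.max(pg_losses1, pg_losses2).mean()
```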
101
github-actions[bot]
2023-06-20T15:05:04
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
101
b11z
2024-04-24T18:20:48
The issue that @DaehanKim noticed is also present in the [gpt2-sentiment.ipynb example](https://github.com/huggingface/trl/blob/main/examples/notebooks/gpt2-sentiment.ipynb). It might be nice to propagate the `extract_pipe_output` fix to that notebook as well.
101
HuggingFaceDocBuilderDev
2023-01-23T20:15:59
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_100). All of your documentation changes will be reflected on that endpoint.
100
lvwerra
2023-02-07T09:57:34
Closing in favour of #133.
100
lvwerra
2023-01-23T14:51:11
Thanks for adding. Could you also update it in `trl/__init__.py` for consistency?
99
HuggingFaceDocBuilderDev
2023-01-23T14:53:40
_The documentation is not available anymore as the PR was closed or merged._
99
mishig25
2023-01-23T14:54:39
Did so !
99
mishig25
2023-01-23T15:07:21
@lvwerra please feel free to merge it 👍
99
HuggingFaceDocBuilderDev
2023-01-23T13:48:31
_The documentation is not available anymore as the PR was closed or merged._
98
HuggingFaceDocBuilderDev
2023-01-23T10:44:40
_The documentation is not available anymore as the PR was closed or merged._
97
HuggingFaceDocBuilderDev
2023-01-23T10:31:26
_The documentation is not available anymore as the PR was closed or merged._
96
HuggingFaceDocBuilderDev
2023-01-20T14:24:57
_The documentation is not available anymore as the PR was closed or merged._
95
22Mukesh22
2023-01-19T10:34:22
https://trlx.readthedocs.io/en/latest/
94
HuggingFaceDocBuilderDev
2023-01-20T12:26:50
_The documentation is not available anymore as the PR was closed or merged._
93
younesbelkada
2023-01-20T18:24:18
Final run on 2x Nvidia T4 for `t5-imdb`: https://wandb.ai/distill-bloom/trl/runs/z4sm1ppv?workspace=user-younesbelkada and the wandb logs of the ppo-sentiment run with GPT on 4x A100: https://wandb.ai/distill-bloom/trl/runs/1eapyuim?workspace=user-younesbelkada Apart from the spike on step 2 for the first run, the generations seem nice and the reward curve converges smoothly! 🔥
93
HuggingFaceDocBuilderDev
2023-01-18T14:38:55
_The documentation is not available anymore as the PR was closed or merged._
92
younesbelkada
2023-01-18T15:02:54
Thanks! I will give it a try with `tensorboard` and merge if there is no issue
92
younesbelkada
2023-01-18T17:39:42
I can confirm everything works fine (single & multi-GPU, with wandb and with `tensorboard`) after 37aa98e for `tensorboard`. Since I made some non-minor modifications, could you have a second look? 🙏 Thanks
92
HuggingFaceDocBuilderDev
2023-01-17T09:48:06
_The documentation is not available anymore as the PR was closed or merged._
91
younesbelkada
2023-01-17T09:14:26
This is on the way! see https://github.com/lvwerra/trl/pull/75 and https://github.com/younesbelkada/trl/pull/1
90
cdxzyc
2023-01-18T08:35:23
> This is on the way!
> see #75 and [younesbelkada#1](https://github.com/younesbelkada/trl/pull/1)

Hi, can this branch now run the T5 model correctly?
90
lvwerra
2023-01-18T10:42:52
We are still working on it and testing it; we'll have something in the coming days.
90
lvwerra
2023-01-23T14:57:35
It is merged now! 🚀
90
HuggingFaceDocBuilderDev
2023-01-17T07:35:56
_The documentation is not available anymore as the PR was closed or merged._
89
HuggingFaceDocBuilderDev
2023-01-17T07:35:41
_The documentation is not available anymore as the PR was closed or merged._
88
HuggingFaceDocBuilderDev
2023-01-17T07:36:32
_The documentation is not available anymore as the PR was closed or merged._
87
HuggingFaceDocBuilderDev
2023-01-16T13:55:01
_The documentation is not available anymore as the PR was closed or merged._
86
younesbelkada
2023-01-19T11:12:40
Actually this was not working in the sharded case, and I'd expect users to load large & sharded models too! The commits 70274c3 & bd10a97 should fix that.
86
HuggingFaceDocBuilderDev
2023-01-16T11:21:13
_The documentation is not available anymore as the PR was closed or merged._
85
lvwerra
2023-01-13T13:11:12
The reward itself is not differentiable, and as such you can't backpropagate it. PPO is one way of estimating a differentiable loss from a non-differentiable reward. Working on enc-dec support in #75 :)
84
JoaoLages
2023-01-13T15:04:23
Ah yes, nicely noted. The main differentiable output is the ratio between the new and initial probabilities, right? i.e. `exp(log_probs - old_log_probs)`
84
fabianbunbury
2023-01-09T18:39:12
Also, I apologise if that tutorial has nothing to do with you and is made by some unaffiliated person; I'm new to Hugging Face.
83
lvwerra
2023-01-13T13:16:19
Actually the `AutoModel` approach is the new way to do it, but it's not yet released. You need to install from the `main` branch to use it.
83
xiaoyesoso
2023-01-09T07:08:39
https://github.com/lvwerra/trl/issues/82
81