user: string, lengths 3 to 28
created_at: timestamp[us], 2020-04-01 09:48:12 to 2025-05-27 22:20:31
body: string, lengths 1 to 173k
issue_number: int64, 1 to 3.5k
__index_level_0__: int64, 0 to 10.1k
qgallouedec
2025-03-11T14:37:42
Good point, I'll be happy to receive a PR for this :)
3,049
1,219
shirinyamani
2025-03-28T17:29:31
I've commented on your PR!
3,049
1,220
jamesbraza
2025-03-28T17:42:19
Hi @shirinyamani, thanks for the PR comment, but I think you're misunderstanding here; can you reopen this issue? This issue still stands. https://github.com/Future-House/trl/pull/9 was about resolving https://github.com/huggingface/trl/issues/3018 on a fork, and by happenstance I fixed this issue in that PR too. However, I am not going to open that PR against the actual `trl` repo, as it was too hacky.
3,049
1,221
qgallouedec
2025-03-11T13:40:08
Thanks for fixing it. Can you just apply the suggestion?
3,048
1,222
HuggingFaceDocBuilderDev
2025-03-11T17:25:09
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3048). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,048
1,223
HuggingFaceDocBuilderDev
2025-03-11T13:53:25
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3046). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,046
1,224
qgallouedec
2025-03-10T19:27:22
I see the issue, maybe we should not make the assumption that all prompts are always different. The alternative is to do something like `prompt[::self.num_generations]` WDYT?
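A tiny illustration of the slicing idea (hypothetical variable names; it assumes the batch repeats each prompt `num_generations` times consecutively):

```python
num_generations = 4
# Assumed layout: each prompt is repeated num_generations times, back to back
prompts = ["p0"] * num_generations + ["p1"] * num_generations
unique_prompts = prompts[::num_generations]  # take every num_generations-th entry
print(unique_prompts)  # ['p0', 'p1']
```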
3,045
1,225
shing100
2025-03-11T08:52:14
I have the same issue. After updating trl, training uses far too much VRAM. When SFT training a 7.8B model on 2 nodes (H100*8), it uses a total of 454.08 GiB. Liger-kernel + DeepSpeed ZeRO-3, micro batch size 1, sequence_len 8192. https://github.com/axolotl-ai-cloud/axolotl/issues/2387
3,044
1,226
maoulee
2025-03-11T11:49:38
> I have the same issue.
>
> After updating trl, training uses far too much VRAM.
>
> When SFT training a 7.8B model on 2 nodes (H100*8), it uses a total of 454.08 GiB.
>
> Liger-kernel + DeepSpeed ZeRO-3, micro batch size 1, sequence_len 8192.
>
> [axolotl-ai-cloud/axolotl#2387](https://github.com/axolotl-ai-cloud/axolotl/issues/2387)

Have you solved this problem in trl? I find this code works fine in unsloth, but very slowly.
3,044
1,227
qgallouedec
2025-03-11T17:43:21
Can you provide the full traceback? From this alone it's hard to know where the memory peak is.
3,044
1,228
maoulee
2025-03-13T03:53:15
> Can you provide the full traceback? From this alone it's hard to know where the memory peak is.

I have solved this problem by using a function from unsloth-zoo: it lets vLLM get the LoRA weights instead of moving the full model weights into vLLM, which reduces the VRAM used for model weights. Here is the terminal output:

INFO 03-13 11:20:41 gptq_marlin.py:202] Using MarlinLinearKernel for GPTQMarlinLinearMethod
Loading safetensors checkpoint shards: 0% Completed | 0/1 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00, 7.22it/s]
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00, 7.21it/s]
INFO 03-13 11:20:42 model_runner.py:1115] Loading model weights took 0.4302 GB
INFO 03-13 11:20:42 punica_selector.py:18] Using PunicaWrapperGPU.
INFO 03-13 11:20:55 worker.py:267] Memory profiling takes 12.69 seconds
INFO 03-13 11:20:55 worker.py:267] the current vLLM instance can use total_gpu_memory (39.39GiB) x gpu_memory_utilization (0.20) = 7.88GiB
INFO 03-13 11:20:55 worker.py:267] model weights take 0.43GiB; non_torch_memory takes 0.09GiB; PyTorch activation peak memory takes 1.39GiB; the rest of the memory reserved for KV Cache is 5.97GiB.
INFO 03-13 11:20:55 executor_base.py:110] # CUDA blocks: 32588, # CPU blocks: 21845
INFO 03-13 11:20:55 executor_base.py:115] Maximum concurrency for 2500 tokens per request: 208.56x
INFO 03-13 11:21:14 model_runner.py:1434] Capturing cudagraphs for decoding. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI. If out-of-memory error occurs during cudagraph capture, consider decreasing `gpu_memory_utilization` or switching to eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
Capturing CUDA graph shapes: 100%|████████████████████████████████████████| 35/35 [00:21<00:00, 1.64it/s] INFO 03-13 11:21:36 model_runner.py:1562] Graph capturing finished in 22 secs, took 1.74 GiB INFO 03-13 11:21:36 llm_engine.py:431] init engine (profile, create kv cache, warmup model) took 54.14 seconds {'loss': 0.0, 'grad_norm': 2.7884418964385986, 'learning_rate': 1.0101010101010103e-07, 'rewards/reward_len': -321.578125, 'reward': -321.578125, 'reward_std': 314.4985647201538, 'completion_length': 229.140625, 'kl': 0.0, 'epoch': 0.01} {'loss': -0.0, 'grad_norm': 1.232833981513977, 'learning_rate': 2.0202020202020205e-07, 'rewards/reward_len': -120.75, 'reward': -120.75, 'reward_std': 145.09556579589844, 'completion_length': 79.8125, 'kl': 0.0, 'epoch': 0.02} {'loss': -0.0, 'grad_norm': 1.4564472436904907, 'learning_rate': 3.0303030303030305e-07, 'rewards/reward_len': -170.6875, 'reward': -170.6875, 'reward_std': 182.55911830067635, 'completion_length': 104.9375, 'kl': -5.692243576049805e-06, 'epoch': 0.03} {'loss': -0.0, 'grad_norm': 3.2063918113708496, 'learning_rate': 4.040404040404041e-07, 'rewards/reward_len': -110.671875, 'reward': -110.671875, 'reward_std': 129.3709478378296, 'completion_length': 73.71875, 'kl': -8.501112461090088e-06, 'epoch': 0.04} {'loss': -0.0, 'grad_norm': 1.7419143915176392, 'learning_rate': 5.05050505050505e-07, 'rewards/reward_len': -234.28125, 'reward': -234.28125, 'reward_std': 278.61364382505417, 'completion_length': 128.328125, 'kl': -7.413327693939209e-06, 'epoch': 0.05} {'loss': -0.0, 'grad_norm': 2.447553873062134, 'learning_rate': 6.060606060606061e-07, 'rewards/reward_len': -201.859375, 'reward': -201.859375, 'reward_std': 169.03560876846313, 'completion_length': 157.59375, 'kl': -6.861984729766846e-06, 'epoch': 0.06} {'loss': -0.0, 'grad_norm': 1.1706939935684204, 'learning_rate': 7.070707070707071e-07, 'rewards/reward_len': -75.9375, 'reward': -75.9375, 'reward_std': 133.00669565796852, 'completion_length': 55.546875, 'kl': -6.794929504394531e-06, 'epoch': 0.07} {'loss': -0.0, 'grad_norm': 2.1840455532073975, 'learning_rate': 8.080808080808082e-07, 'rewards/reward_len': -399.328125, 'reward': -399.328125, 'reward_std': 241.75924617052078, 'completion_length': 297.390625, 'kl': -4.477798938751221e-06, 'epoch': 0.08} {'loss': -0.0, 'grad_norm': 2.187257766723633, 'learning_rate': 9.090909090909091e-07, 'rewards/reward_len': -199.421875, 'reward': -199.421875, 'reward_std': 201.53497797250748, 'completion_length': 132.828125, 'kl': -6.563961505889893e-06, 'epoch': 0.09} {'loss': 0.0, 'grad_norm': 1.8141218423843384, 'learning_rate': 1.01010101010101e-06, 'rewards/reward_len': -334.484375, 'reward': -334.484375, 'reward_std': 281.6256628036499, 'completion_length': 225.859375, 'kl': 1.1272728443145752e-05, 'epoch': 0.1} {'loss': 0.0, 'grad_norm': 2.5700647830963135, 'learning_rate': 1.111111111111111e-06, 'rewards/reward_len': -163.3125, 'reward': -163.3125, 'reward_std': 170.36365354061127, 'completion_length': 118.921875, 'kl': 1.0117888450622559e-05, 'epoch': 0.11} {'loss': 0.0, 'grad_norm': 1.258663535118103, 'learning_rate': 1.2121212121212122e-06, 'rewards/reward_len': -317.734375, 'reward': -317.734375, 'reward_std': 255.7184435725212, 'completion_length': 214.5, 'kl': 1.574307680130005e-05, 'epoch': 0.12} {'loss': 0.0, 'grad_norm': 2.4687442779541016, 'learning_rate': 1.3131313131313134e-06, 'rewards/reward_len': -397.640625, 'reward': -397.640625, 'reward_std': 343.2056703567505, 'completion_length': 255.921875, 'kl': 
0.0002644285559654236, 'epoch': 0.13} {'loss': 0.0, 'grad_norm': 2.0361921787261963, 'learning_rate': 1.4141414141414143e-06, 'rewards/reward_len': -61.234375, 'reward': -61.234375, 'reward_std': 134.06728866696358, 'completion_length': 41.28125, 'kl': 0.000720784068107605, 'epoch': 0.14} {'loss': 0.0, 'grad_norm': 2.076171875, 'learning_rate': 1.5151515151515152e-06, 'rewards/reward_len': -68.78125, 'reward': -68.78125, 'reward_std': 85.20245426893234, 'completion_length': 50.203125, 'kl': 0.0004588514566421509, 'epoch': 0.15} {'loss': 0.0, 'grad_norm': 2.653731107711792, 'learning_rate': 1.6161616161616164e-06, 'rewards/reward_len': -244.984375, 'reward': -244.984375, 'reward_std': 229.60207390785217, 'completion_length': 167.515625, 'kl': 0.0006752237677574158, 'epoch': 0.16} {'loss': 0.0, 'grad_norm': 1.4232606887817383, 'learning_rate': 1.7171717171717173e-06, 'rewards/reward_len': -433.109375, 'reward': -433.109375, 'reward_std': 422.8487824201584, 'completion_length': 293.953125, 'kl': 0.0012104883790016174, 'epoch': 0.17} {'loss': 0.0001, 'grad_norm': 1.926514983177185, 'learning_rate': 1.8181818181818183e-06, 'rewards/reward_len': -183.265625, 'reward': -183.265625, 'reward_std': 192.7100260257721, 'completion_length': 130.8125, 'kl': 0.0017363205552101135, 'epoch': 0.18} {'loss': 0.0001, 'grad_norm': 1.6588062047958374, 'learning_rate': 1.9191919191919192e-06, 'rewards/reward_len': -81.53125, 'reward': -81.53125, 'reward_std': 122.57226317375898, 'completion_length': 56.0, 'kl': 0.0016131997108459473, 'epoch': 0.19} {'loss': 0.0001, 'grad_norm': 1.1836130619049072, 'learning_rate': 2.02020202020202e-06, 'rewards/reward_len': -78.203125, 'reward': -78.203125, 'reward_std': 164.13510417938232, 'completion_length': 54.578125, 'kl': 0.0036144256591796875, 'epoch': 0.2} {'loss': 0.0003, 'grad_norm': 1.376534342765808, 'learning_rate': 2.1212121212121216e-06, 'rewards/reward_len': -221.0625, 'reward': -221.0625, 'reward_std': 193.60646617412567, 'completion_length': 159.625, 'kl': 0.006711140275001526, 'epoch': 0.21} {'loss': 0.0004, 'grad_norm': 1.8582404851913452, 'learning_rate': 2.222222222222222e-06, 'rewards/reward_len': -147.4375, 'reward': -147.4375, 'reward_std': 201.59488809108734, 'completion_length': 104.296875, 'kl': 0.01008462905883789, 'epoch': 0.22} {'loss': 0.0004, 'grad_norm': 2.769685745239258, 'learning_rate': 2.3232323232323234e-06, 'rewards/reward_len': -73.296875, 'reward': -73.296875, 'reward_std': 126.35262995958328, 'completion_length': 58.734375, 'kl': 0.010751724243164062, 'epoch': 0.23} {'loss': 0.0004, 'grad_norm': 1.448876976966858, 'learning_rate': 2.4242424242424244e-06, 'rewards/reward_len': -326.9375, 'reward': -326.9375, 'reward_std': 302.11334347724915, 'completion_length': 230.59375, 'kl': 0.00894937664270401, 'epoch': 0.24} {'loss': 0.0003, 'grad_norm': 6.789086818695068, 'learning_rate': 2.5252525252525258e-06, 'rewards/reward_len': -63.015625, 'reward': -63.015625, 'reward_std': 75.98726436495781, 'completion_length': 39.59375, 'kl': 0.00805211067199707, 'epoch': 0.25} {'loss': 0.0004, 'grad_norm': 4.663589000701904, 'learning_rate': 2.6262626262626267e-06, 'rewards/reward_len': -78.328125, 'reward': -78.328125, 'reward_std': 89.43056464195251, 'completion_length': 54.546875, 'kl': 0.01078033447265625, 'epoch': 0.26} 3%|█▋ | 26/990 [21:29<12:05:37, 45.16s/it]
3,044
1,229
kashif
2025-03-13T07:45:55
thanks @abhigoyal1997 having a look now
3,043
1,230
kashif
2025-03-13T08:21:09
@abhigoyal1997 is the issue that the `beta` is not the same as the beta in the paper? Also, note that `F.kl_div` takes inputs q and p to compute KL(p||q), which can cause confusion too.
3,043
1,231
kashif
2025-03-13T08:28:51
The paper has:

![Screenshot 2025-03-13 at 09 23 46](https://github.com/user-attachments/assets/d2747da5-9749-4c21-ad4b-84fb5ecc3ae4)

In TRL it is implemented as:

$$ D_{\mathrm{JSD}(\beta)}(P \| Q) = \beta \, KL\Big(P \Big\| \beta Q + (1-\beta)P\Big) + (1-\beta) \, KL\Big(Q \Big\| \beta Q + (1-\beta)P\Big) $$

You can see that when beta=0, the loss is KL(student || teacher), which is `F.kl_div(teacher, student)` in TRL, and when beta=1, the loss is KL(teacher || student), which is `F.kl_div(student, teacher)`. So there is a difference between the original and the TRL formulation.
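For illustration, a minimal PyTorch sketch of this interpolated divergence (with P = teacher, Q = student); this is a hand-written example, not the actual TRL GKD code:

```python
import math
import torch
import torch.nn.functional as F

def generalized_jsd(student_logits, teacher_logits, beta=0.5):
    # Illustrative sketch of the interpolated divergence written above.
    student_lp = F.log_softmax(student_logits, dim=-1)
    teacher_lp = F.log_softmax(teacher_logits, dim=-1)

    if beta == 0.0:  # KL(student || teacher) == F.kl_div(teacher, student)
        return F.kl_div(teacher_lp, student_lp, reduction="batchmean", log_target=True)
    if beta == 1.0:  # KL(teacher || student) == F.kl_div(student, teacher)
        return F.kl_div(student_lp, teacher_lp, reduction="batchmean", log_target=True)

    # log of the mixture M = beta * Q + (1 - beta) * P, computed in log-space for stability
    mixture_lp = torch.logsumexp(
        torch.stack([student_lp + math.log(beta), teacher_lp + math.log(1.0 - beta)]), dim=0
    )
    kl_teacher = F.kl_div(mixture_lp, teacher_lp, reduction="batchmean", log_target=True)  # KL(P || M)
    kl_student = F.kl_div(mixture_lp, student_lp, reduction="batchmean", log_target=True)  # KL(Q || M)
    return beta * kl_teacher + (1.0 - beta) * kl_student
```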
3,043
1,232
HuggingFaceDocBuilderDev
2025-03-13T09:23:28
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3043). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,043
1,233
abhigoyal1997
2025-03-13T10:51:28
> The paper has:
>
> ![Screenshot 2025-03-13 at 09 23 46](https://github.com/user-attachments/assets/d2747da5-9749-4c21-ad4b-84fb5ecc3ae4)
>
> In TRL it is implemented as:
>
> $$ D_{\mathrm{JSD}(\beta)}(P \| Q) = \beta \, KL\Big(P \Big\| \beta Q + (1-\beta)P\Big) + (1-\beta) \, KL\Big(Q \Big\| \beta Q + (1-\beta)P\Big) $$
>
> You can see that when beta=0, the loss is KL(student || teacher), which is `F.kl_div(teacher, student)` in TRL, and when beta=1, the loss is KL(teacher || student), which is `F.kl_div(student, teacher)`. So there is a difference between the original and the TRL formulation.

Hi Kashif, yes, this was the problem. The mixture distribution was calculated with the wrong weights. Thanks for reviewing and approving!
3,043
1,234
skoshx
2025-03-10T20:45:07
So it turns out this was just a skill issue: `max_length` defaults to `512`, so in `_forward` the outputs get truncated and we get zero output, causing the error. There should probably be an assertion like:

```py
assert config.max_length > config.max_new_tokens, (
    "`max_length` should be higher than `max_new_tokens` or your outputs will get truncated to zero length."
)
```
3,042
1,235
qgallouedec
2025-03-11T17:47:44
How much memory does your system have?
3,039
1,236
qgallouedec
2025-03-11T17:52:48
From the log, it's not clear where this memory peak occurs. Can you try to be even more precise with the looping pattern you made? I'll give it a try myself as well.
3,039
1,237
qgallouedec
2025-03-10T05:39:50
Thanks for reporting. TRL doesn't support Python 3.14. Currently, 3.13 should work but it is not officially supported, see #2593. The max officially supported version is 3.12.
3,038
1,238
debdeepsanyal
2025-03-09T07:00:54
Same issue. The code was working with the GRPOTrainer earlier, but now it throws this RuntimeError.
3,035
1,239
debdeepsanyal
2025-03-09T19:37:36
After some further checking, I think the problem occurs when using `device_map='auto'`. Could someone kindly fix this?
3,035
1,240
stevebell117
2025-03-10T14:29:44
We also have `device_map='auto'`
3,035
1,241
qgallouedec
2025-03-08T18:24:24
I am encountering this issue as well. Any idea how to solve it?
3,034
1,242
dongdongzhaoUP
2025-03-12T13:47:13
Also
3,034
1,243
jenna-russell
2025-03-18T19:14:51
I also am encountering this issue
3,034
1,244
lilakk
2025-03-18T19:18:25
I've been encountering the same issue!
3,034
1,245
Bingogogogogo
2025-03-19T11:35:24
same issue
3,034
1,246
Vanchrn
2025-03-22T02:38:28
same
3,034
1,247
wofeishenling
2025-03-23T11:55:03
same issue
3,034
1,248
naajeehxe
2025-03-24T11:54:01
same here...
3,034
1,249
anakin87
2025-03-30T13:16:27
Related Transformers PR: https://github.com/huggingface/transformers/pull/36162 As a workaround, you can try installing `transformers==4.48.3`.
3,034
1,250
qgallouedec
2025-04-01T18:16:40
Could be related: https://github.com/huggingface/transformers/pull/36729
3,034
1,251
MrZhengXin
2025-04-11T03:01:40
How about setting `cache_implementation='dynamic'` in GRPOConfig? https://github.com/huggingface/trl/blob/d625c5533a6b1c84d3565c8080857f6bb81c538a/trl/trainer/grpo_config.py#L80

```python
training_args = GRPOConfig(
    # ...
    cache_implementation="dynamic",
)

trainer = GRPOTrainer(
    model=model,
    processing_class=tokenizer,
    args=training_args,
    train_dataset=dataset,
    # ...
)
```

This could be an issue with StaticCache: https://github.com/huggingface/transformers/issues/37189
3,034
1,252
singhalarchit
2025-04-14T20:12:06
@MrZhengXin .. this did not solve the issue. I am using Qwen-2.5 7B.
3,034
1,253
qgallouedec
2025-04-15T18:38:56
For context, this error only occurs when generating with transformers. So, to solve this problem and speed up generation at the same time, I recommend using vLLM instead; see the documentation: https://huggingface.co/docs/trl/en/grpo_trainer#speed-up-training-with-vllm-powered-generation
3,034
1,254
TriLoo
2025-04-18T06:25:30
same here
3,034
1,255
PolarisHsu
2025-04-18T07:01:08
same issue
3,034
1,256
ChrisKimZHT
2025-04-19T06:41:49
I'm encountering this issue during DPO training too. Setting `gradient_checkpointing` to `true` or installing `transformers==4.48.3` can temporarily work around it.
3,034
1,257
p1kachu2233
2025-04-21T22:08:25
same
3,034
1,258
harveyaot
2025-04-24T07:15:54
In my case, the default collator used `padding_side='right'` without any dynamic check for whether flash_attn_2 is used. After passing in a customized data collator, the issue was resolved. [Code to update](https://github.com/huggingface/trl/blob/89556c8cbf1a816539167a46cdf285419e057fec/trl/trainer/sft_trainer.py#L131)
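As a rough sketch of what such a customized collator could look like (the model name and the choice of left padding are assumptions for illustration, not the commenter's exact fix):

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Pick the padding side explicitly instead of relying on the default collator
# (assumption: left padding is what avoids the flash_attention_2 issue here).
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
tokenizer.padding_side = "left"
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
# Then pass it to the trainer, e.g. SFTTrainer(..., data_collator=collator)
```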
3,034
1,259
shon-otmazgin-wix
2025-05-19T20:10:33
Setting `use_cache=False` while creating the model solved my issue.
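For illustration, a minimal sketch of that workaround (the model name is just an example):

```python
from transformers import AutoModelForCausalLM

# Disable the KV cache at model creation time (example model name).
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct", use_cache=False)
```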
3,034
1,260
AndreiCComan
2025-03-10T16:23:51
@JinyuanSun I had a similar issue in #2856, which has been fixed. Could you try running the same MRE with the latest changes (i.e., learning rate etc.) I posted there?
3,031
1,261
wyuzh
2025-03-18T12:10:25
Same issue. #2856 is not the same, though, since we want to perform GRPO on an already fine-tuned PeftModel, not perform GRPO together with PEFT.
3,031
1,262
DingZhenChen-code
2025-03-20T12:51:56
Same issue. How can we continue training on a fine-tuned PeftModel whose LoRA module is not merged? Maybe resuming from a checkpoint would help.
3,031
1,263
cliang-huanglab
2025-05-26T05:28:19
Same issue. Have you found a solution?
3,031
1,264
qgallouedec
2025-03-11T14:07:37
That's a good point. That's also what's done in open-instruct: https://github.com/allenai/open-instruct/blob/6d5320539f23a6dd55c892fd35e7e86907569af1/open_instruct/grpo_vllm_thread_ray_gtrl.py#L777C9-L777C37 Ideally, we would like to have some curves to show this gap, so if someone has any, feel free to share.
3,029
1,265
HuggingFaceDocBuilderDev
2025-03-11T15:37:01
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3029). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,029
1,266
deekshaVarshney
2025-03-07T15:19:09
@kashif
3,027
1,267
qgallouedec
2025-03-07T13:48:04
If I summarize your article, the term KL doesn't seem correct to you because the sampling is done under $π_{\mathrm{old}}$ and not with $π_θ$? Note that in practice (and this is the default setting), $μ=1$ (which implies $π_{\mathrm{old}} = π_{\theta}$), so this issue doesn't arise. In the general case, we would need to find a way to perform importance sampling on the KL term—is that your idea?
3,025
1,268
zanghyu
2025-03-07T16:41:07
> If I summarize your article, the term KL doesn't seem correct to you because the sampling is done under $π_{\mathrm{old}}$ and not with $π_θ$? Note that in practice (and this is the default setting), $μ=1$ (which implies $π_{\mathrm{old}} = π_{\theta}$), so this issue doesn't arise. In the general case, we would need to find a way to perform importance sampling on the KL term—is that your idea?

Yes, exactly. So the current implementation of GRPO is just an on-policy version; it does not match the original one in the GRPO paper.
3,025
1,269
qgallouedec
2025-03-07T17:59:36
In the DeepSeek Math paper, they use the same KL term, no?
3,025
1,270
zanghyu
2025-03-07T18:29:18
> In the DeepSeek Math paper, they use the same KL term, no?

I got your point. Yeah, they use the same KL term, while the equation in their paper shows that their samples are from the old policy distribution. So the default implementation in this repo is okay (as it is on-policy), but it is hard to say how to implement an off-policy version, right?
3,025
1,271
qgallouedec
2025-03-07T18:46:34
Maybe with some kind of importance sampling?
3,025
1,272
zanghyu
2025-03-08T02:39:55
> Maybe with some kind of importance sampling?

$$\nabla_\theta\,\mathbb{E}_{\pi_\theta}[\log\pi_\theta - \log\pi_\text{ref}]=\mathbb{E}_{\pi_\theta}\big[(\log\pi_\theta-\log\pi_\text{ref})\cdot \nabla_\theta \log\pi_\theta\big]$$

So we only need to add the log-prob difference between $\log\pi_\theta$ and $\log\pi_\text{ref}$ to the reward function. By doing so, we don't need to re-sample: we can just use the samples from the old policy, and since we add this term to the reward function, it is naturally multiplied by the importance-sampling coefficient, so everything is fine. It's quite simple.

---

The formula doesn't seem to render right...
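A minimal sketch of what folding the KL term into the reward could look like (hypothetical helper and shapes; it assumes per-token log-probs under the sampling policy and the reference model are already available):

```python
import torch

def kl_adjusted_rewards(rewards, logps_policy, logps_ref, kl_coef=0.05):
    # rewards: (batch,) sequence-level rewards
    # logps_policy, logps_ref: (batch, seq_len) per-token log-probs on the sampled tokens
    kl_per_token = logps_policy - logps_ref      # log pi_theta - log pi_ref, estimated on samples
    kl_penalty = kl_per_token.sum(dim=-1)        # sum over the completion
    # Subtract the KL estimate from the reward instead of adding a separate KL loss term.
    return rewards - kl_coef * kl_penalty
```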
3,025
1,273
qgallouedec
2025-03-07T13:17:11
Thanks for reporting. The easiest fix is indeed to turn it off. Another way is to call `LLM.llm_engine.reset_prefix_cache()` (suggested by @hmellor) after the new weights are loaded. If someone wants to try this and it works, a PR would be welcome.
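For illustration, a sketch of where that call would go (assuming `llm` is the `vllm.LLM` instance used for generation and prefix caching is enabled; the model name is just an example):

```python
from vllm import LLM

# Example setup with prefix caching enabled.
llm = LLM(model="Qwen/Qwen2-0.5B-Instruct", enable_prefix_caching=True)

# ... after the updated policy weights are loaded into the engine:
llm.llm_engine.reset_prefix_cache()  # drop cached prefixes computed with the old weights
```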
3,024
1,274
HuggingFaceDocBuilderDev
2025-03-07T11:25:59
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3023). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,023
1,275
qgallouedec
2025-04-10T15:26:41
The most common approach now is to use vLLM, so I'm closing this PR.
3,023
1,276
qgallouedec
2025-03-07T16:25:40
Can you share some code and results?
3,021
1,277
JosephChenHub
2025-04-17T10:42:21
<img width="891" alt="Image" src="https://github.com/user-attachments/assets/88993907-21d6-4e3e-ace6-98889ffffbff" /> We have a similar observation. The above curves show two settings: - GH200: per device batch size=16, gradient accumulation steps = 2, world size=8, num_generations=8 => batch size = 32 - A100: per device batch size=2, gradient accumulation steps = 8, world size=16, num_generations=8 => batch size = 32 I guess this is because the advantage in the loss function ``` per_token_loss1 = coef_1 * advantages.unsqueeze(1) per_token_loss2 = coef_2 * advantages.unsqueeze(1) per_token_loss = -torch.min(per_token_loss1, per_token_loss2) if self.beta != 0.0: per_token_loss = per_token_loss + self.beta * per_token_kl loss = (per_token_loss * completion_mask).sum() / completion_mask.sum() ``` let's say, you have gradient accumulation steps = 2, two batch samples D1, D2, advantages A1, A2 loss ( (D1,D2), (A1, A2) ) != loss(D1, A1) + loss(D2, A2)
3,021
1,278
JosephChenHub
2025-04-17T11:03:33
I noticed that the updated version is `loss = ((per_token_loss * completion_mask).sum(-1) / completion_mask.sum(-1).clamp(min=1.0)).mean()`, so maybe it has been resolved.
3,021
1,279
loxs123
2025-05-02T02:19:39
> I noticed that the updated version is `loss = ((per_token_loss * completion_mask).sum(-1) / completion_mask.sum(-1).clamp(min=1.0)).mean()`
>
> maybe it has been resolved.

```python
if self.loss_type == "grpo":
    loss = ((per_token_loss * completion_mask).sum(-1) / completion_mask.sum(-1).clamp(min=1.0)).mean()
elif self.loss_type == "bnpo":
    loss = (per_token_loss * completion_mask).sum() / completion_mask.sum().clamp(min=1.0)
elif self.loss_type == "dr_grpo":
    loss = (per_token_loss * completion_mask).sum() / (per_token_loss.size(0) * self.max_completion_length)
else:
    raise ValueError(f"Unknown loss type: {self.loss_type}")
```

The complete code is shown above, so the issue you just mentioned likely doesn't exist for `grpo_loss`/`dr_grpo`. However, the `bnpo` (and probably `dapo` as well) loss still seems to have the issue. Additionally, `grpo_loss` and `bnpo_loss` are inherently two different loss calculation methods. I believe that `loss = ((per_token_loss * completion_mask).sum(-1) / completion_mask.sum(-1).clamp(min=1.0)).mean()` is not a correction to the original loss, but rather follows the original GRPO algorithm.
3,021
1,280
AMindToThink
2025-03-06T20:44:01
Here's the problematic part of the documentation: [here](https://huggingface.co/docs/trl/en/sft_trainer#:~:text=dataset%20%3D%20load_dataset(%22lucasmccabe%2Dlmi/CodeAlpaca%2D20k%22%2C%20split%3D%22train%22)
3,019
1,281
CloseChoice
2025-05-05T18:11:35
This is fixed
3,019
1,282
tchang1997
2025-03-10T13:54:09
+1 — As a hack, I've been getting around this by defining new reward functions and setting `reward_weight` to zero (so it still gets logged, but doesn't affect the "actual" reward).
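A sketch of that trick (function names are hypothetical, and it assumes the `reward_weights` field in `GRPOConfig`):

```python
from trl import GRPOConfig, GRPOTrainer

def reward_len(completions, **kwargs):
    # The "actual" reward being optimized.
    return [-abs(20 - len(c)) for c in completions]

def length_metric(completions, **kwargs):
    # Logged alongside the real rewards, but weighted to zero below so it never
    # influences the advantage computation.
    return [float(len(c)) for c in completions]

training_args = GRPOConfig(
    output_dir="Qwen2-0.5B-GRPO",
    reward_weights=[1.0, 0.0],  # zero weight => log-only metric
)
# trainer = GRPOTrainer(
#     model="Qwen/Qwen2-0.5B-Instruct",
#     reward_funcs=[reward_len, length_metric],
#     args=training_args,
#     train_dataset=dataset,  # dataset as in the other examples in this thread
# )
```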
3,018
1,283
qgallouedec
2025-03-06T15:53:58
Let's say you have 8 GPUs; in the limit you can have `per_device_batch_size=1` and `num_generations=8`, and set the number of gradient accumulation steps to any value.

> Currently `per_device_train_batch_size` must be a multiple of `num_generations` which can severely limit how large you can make it before

That's not exactly right: it's `per_device_train_batch_size * num_devices` that must be a multiple of `num_generations`. While I understand the motivation, I think it's not straightforward to implement.
3,017
1,284
JamesBowerXanda
2025-03-06T16:19:59
Ah yes, sorry, I forgot about the number of devices. Though this doesn't change much, right? We just amend my statement to: `num_devices * per_device_train_batch_size * gradient_accumulation_steps` must be a multiple of `num_generations`.

Is it complicated because currently the `prepare_inputs` method does both the generation and the score calculation, and then the inputs are passed straight to the `compute_loss` method by the `Trainer` superclass? I can see how it could cause more issues than it is worth to fiddle with the core pipeline just for one trainer. I just thought I would bring it up because I noticed how much smoother training seemed when I was able to increase the number of generations with smaller models, and this seemed to be the big bottleneck to that.
3,017
1,285
qgallouedec
2025-03-06T18:06:00
> Is it complicated because currently the prepare_inputs method does both the generation and score calculation then the inputs are passed straight to the compute_loss method by the Trainer superclass?

Yes, that's correct.

> I was able to up the number of generations using smaller models and this seemed to be the big bottleneck to that.

You can actually increase the number of generations quite high. For example, if you have 8 GPUs that can each handle 4 generations, you can use up to 32 generations per prompt.
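A quick check of the arithmetic behind that constraint (illustrative values only):

```python
# num_devices * per_device_train_batch_size must be a multiple of num_generations.
num_devices = 8
per_device_train_batch_size = 4
num_generations = 32

global_batch = num_devices * per_device_train_batch_size  # 32 completions per optimizer step
assert global_batch % num_generations == 0, (
    "num_devices * per_device_train_batch_size must be a multiple of num_generations"
)
print(global_batch // num_generations, "unique prompt(s) per step")  # 1
```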
3,017
1,286
JamesBowerXanda
2025-03-07T09:05:00
Ok, I understand, thanks for your prompt responses. Unfortunately, I am most interested in using this on my personal GPU, so I am not using multi-GPU clusters. Thanks for your time; I am happy for the issue to be closed since it is not deemed feasible.
3,017
1,287
qgallouedec
2025-03-07T09:11:09
With 1 GPU, the best you can do is to set `num_generations=per_device_train_batch_size`, and set the `gradient_accumulation_steps` depending on the desired effective batch size. Example:

```
per_device_train_batch_size = 8
num_generations = 8
gradient_accumulation_steps = 16
```

to have an effective batch size of 128.
3,017
1,288
JamesBowerXanda
2025-03-07T09:28:08
I understand this, but it doesn't solve the issue of the loss function being an estimate based on a sample size of 8.

![Image](https://github.com/user-attachments/assets/6ffe84d0-3e0f-42b9-8dd5-50400d16c1b8)

Based on the GRPO loss formulation, the expectation we estimate is conditional on the input prompt, as are the advantage calculations, and just increasing the gradient accumulation to 16 gives us 16 high-variance estimates of the expectation rather than one low-variance estimate. I hope this makes sense.

As I said before, I can see why this is deemed not worth it, since most large-scale use cases can probably afford to just increase the number of GPUs. I had just hoped it would be an easier adjustment that would allow us hobbyists to stick closer to the theory of the paper.
3,017
1,289
qgallouedec
2025-03-07T10:20:17
Then you should increase `num_generations`. By default it's 8, but in the DeepSeek Math paper, they use 64. Of course, you'll probably be limited by compute here if you only have 1 GPU.
3,017
1,290
qgallouedec
2025-03-07T10:25:20
> I had just hoped it would be an easier adjustment

In fact, this is tricky, as it would involve sampling, generating and calculating the advantage for the whole batch, then iterating somehow over the batch. It's not impossible, but it adds an implementation complexity that I don't think is justified. In my experience, playing with a low `num_generations` gives good results.
3,017
1,291
JamesBowerXanda
2025-03-07T11:00:04
Forgive my naivety but would it not be as simple as overiding the `training_step` method for `GRPOTrainer` from the base `Trainer` one which is: ``` def training_step( self, model: nn.Module, inputs: Dict[str, Union[torch.Tensor, Any]], num_items_in_batch=None ) -> torch.Tensor: """ Perform a training step on a batch of inputs. Subclass and override to inject custom behavior. Args: model (`nn.Module`): The model to train. inputs (`Dict[str, Union[torch.Tensor, Any]]`): The inputs and targets of the model. The dictionary will be unpacked before being fed to the model. Most models expect the targets under the argument `labels`. Check your model's documentation for all accepted arguments. Return: `torch.Tensor`: The tensor with training loss on this batch. """ model.train() if hasattr(self.optimizer, "train") and callable(self.optimizer.train): self.optimizer.train() inputs = self._prepare_inputs(inputs) if is_sagemaker_mp_enabled(): loss_mb = smp_forward_backward(model, inputs, self.args.gradient_accumulation_steps) return loss_mb.reduce_mean().detach().to(self.args.device) with self.compute_loss_context_manager(): loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch) del inputs if ( self.args.torch_empty_cache_steps is not None and self.state.global_step % self.args.torch_empty_cache_steps == 0 ): if is_torch_xpu_available(): torch.xpu.empty_cache() elif is_torch_mlu_available(): torch.mlu.empty_cache() elif is_torch_musa_available(): torch.musa.empty_cache() elif is_torch_npu_available(): torch.npu.empty_cache() elif is_torch_mps_available(min_version="2.0"): torch.mps.empty_cache() else: torch.cuda.empty_cache() kwargs = {} # For LOMO optimizers you need to explicitly use the learnign rate if self.args.optim in [OptimizerNames.LOMO, OptimizerNames.ADALOMO]: kwargs["learning_rate"] = self._get_learning_rate() if self.args.n_gpu > 1: loss = loss.mean() # mean() to average on multi-gpu parallel training if self.use_apex: with amp.scale_loss(loss, self.optimizer) as scaled_loss: scaled_loss.backward() else: # Finally we need to normalize the loss for reporting if not self.model_accepts_loss_kwargs and self.compute_loss_func is None: loss = loss / self.args.gradient_accumulation_steps # Turning off loss scaling w.r.t. gradient accumulation when DeepSpeed is enabled # https://github.com/huggingface/transformers/pull/35808 if self.accelerator.distributed_type == DistributedType.DEEPSPEED: kwargs["scale_wrt_gas"] = False self.accelerator.backward(loss, **kwargs) return loss.detach() ``` to somehting like ``` def training_step( self, model: nn.Module, inputs: Dict[str, Union[torch.Tensor, Any]], num_items_in_batch=None ) -> torch.Tensor: """ Perform a training step on a batch of inputs. Subclass and override to inject custom behavior. Args: model (`nn.Module`): The model to train. inputs (`Dict[str, Union[torch.Tensor, Any]]`): The inputs and targets of the model. The dictionary will be unpacked before being fed to the model. Most models expect the targets under the argument `labels`. Check your model's documentation for all accepted arguments. Return: `torch.Tensor`: The tensor with training loss on this batch. 
""" model.train() if hasattr(self.optimizer, "train") and callable(self.optimizer.train): self.optimizer.train() inputs = self._prepare_inputs(inputs) if is_sagemaker_mp_enabled(): loss_mb = smp_forward_backward(model, inputs, self.args.gradient_accumulation_steps) return loss_mb.reduce_mean().detach().to(self.args.device) # CHANGED: Split the inputs into mini-batches mini_batch_size = self.args.per_device_train_batch_size * self.args.n_gpu mini_batch_inputs = [] for i in range(inputs["prompt_ids"].shape[0] // mini_batch_size): mini_batch_inputs.append( { key: value[i * mini_batch_size : (i + 1) * mini_batch_size] for key, value in inputs.items() } ) losses = [] del inputs # CHANGED: Iterate over the mini-batches for loss calculation and gradient backward pass for inputs in mini_batch_inputs: with self.compute_loss_context_manager(): loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch) del inputs if ( self.args.torch_empty_cache_steps is not None and self.state.global_step % self.args.torch_empty_cache_steps == 0 ): if is_torch_xpu_available(): torch.xpu.empty_cache() elif is_torch_mlu_available(): torch.mlu.empty_cache() elif is_torch_musa_available(): torch.musa.empty_cache() elif is_torch_npu_available(): torch.npu.empty_cache() elif is_torch_mps_available(min_version="2.0"): torch.mps.empty_cache() else: torch.cuda.empty_cache() kwargs = {} # For LOMO optimizers you need to explicitly use the learnign rate if self.args.optim in [OptimizerNames.LOMO, OptimizerNames.ADALOMO]: kwargs["learning_rate"] = self._get_learning_rate() if self.args.n_gpu > 1: loss = loss.mean() # mean() to average on multi-gpu parallel training if self.use_apex: with amp.scale_loss(loss, self.optimizer) as scaled_loss: scaled_loss.backward() else: # Finally we need to normalize the loss for reporting if not self.model_accepts_loss_kwargs and self.compute_loss_func is None: loss = loss / self.args.gradient_accumulation_steps # Turning off loss scaling w.r.t. gradient accumulation when DeepSpeed is enabled # https://github.com/huggingface/transformers/pull/35808 if self.accelerator.distributed_type == DistributedType.DEEPSPEED: kwargs["scale_wrt_gas"] = False self.accelerator.backward(loss, **kwargs) # CHANGED: Append the loss to the list so that we can average it later and return the same value as before losses.append(loss.detach()) # CHANGED: Average the losses and return the same value as before loss = torch.mean(torch.tensor(losses)) return loss.detach() ``` I have added comments starting with `# CHANGED:` to all parts I have edited from the trainers method.
3,017
1,292
JamesBowerXanda
2025-03-07T11:04:13
Sorry, I am not trying to be a pain. As I said previously I am happy for you to close this if it is just a no go. Just thought I would offer the suggestion in case it helped.
3,017
1,293
qgallouedec
2025-03-07T11:11:11
It might work, but that's the complexity I want to avoid. Forking the repo might be the best option here. Or subclass `GRPOTrainer` to override the `training_step` method.
3,017
1,294
JamesBowerXanda
2025-03-07T11:17:52
Ok, I am happy to do that. I won't bog you down anymore on this.
3,017
1,295
ingambe
2025-03-16T21:13:30
Actually, being restricted on the minibatch size by the number of trajectories is very limiting. Depending on the problem, if the variance is large or the reward is very sparse, 8 iterations will not cut it.
3,017
1,296
jaeminSon
2025-04-07T06:03:58
If I understand correctly, per_device_train_batch_size is an integer, which means a single GPU should be able to handle a backprop. An H100 has roughly 80GB of memory, and I encountered a GPU OOM with a Qwen2-7B model. If I'm correct, this could be quite a constraint, as bigger models cannot be run.
3,017
1,297
jarrelscy
2025-04-13T23:44:59
Hi @JamesBowerXanda, I ran into something similar and needed a larger generation batch size. I've implemented something which you can run as follows. As mentioned above, I overrode `training_step` within `GRPOTrainer` for this to work.

```
# train_grpo.py
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")

# Define the reward function, which rewards completions that are close to 20 characters
def reward_len(completions, **kwargs):
    return [-abs(20 - len(completion)) for completion in completions]

training_args = GRPOConfig(
    output_dir="Qwen2-0.5B-GRPO",
    logging_steps=10,
    per_device_train_batch_size=16,  # needs to be a multiple of num_generations
    num_generations=8,  # needs to be a multiple of num_generations_chunks
    num_generations_chunks=8,
)
trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```

You can find it [here](https://github.com/huggingface/trl/pull/3288).
3,017
1,298
skoshx
2025-03-06T14:47:19
This is the offending code in `online_dpo_trainer.py`: ```py def _generate(self, model, prompts): eos_token_id = self.processing_class.eos_token_id pad_token_id = self.processing_class.pad_token_id # Apply chat template and tokenize the input. We do this on-the-fly to enable the use of reward models and # policies with different tokenizers / chat templates. inputs = [{"prompt": prompt} for prompt in prompts] inputs = [maybe_apply_chat_template(x, self.processing_class) for x in inputs] inputs = [self.tokenize_row(x, model.config.is_encoder_decoder, self.processing_class) for x in inputs] inputs = self.data_collator(inputs) # Sample 2 completions per prompt of size `max_new_tokens` from the model inputs = self._prepare_inputs(inputs) prompt_ids = inputs["prompt_input_ids"].repeat(2, 1) prompt_mask = inputs["prompt_attention_mask"].repeat(2, 1) with unwrap_model_for_generation( model, self.accelerator, gather_deepspeed3_params=self.args.ds3_gather_for_generation ) as unwrapped_model: output = unwrapped_model.generate( input_ids=prompt_ids, attention_mask=prompt_mask, generation_config=self.generation_config, ) completion_ids = output[:, prompt_ids.size(1) :] completion_ids, completion_mask = truncate_right(completion_ids, eos_token_id, pad_token_id) return prompt_ids, prompt_mask, completion_ids, completion_mask ``` I fixed the error by moving the input tokenization and collation logic inside the `unwrap_model_for_generation` block. ```py with unwrap_model_for_generation( model, self.accelerator, gather_deepspeed3_params=self.args.ds3_gather_for_generation ) as unwrapped_model: # Apply chat template and tokenize the input. We do this on-the-fly to enable the use of reward models and # policies with different tokenizers / chat templates. inputs = [{"prompt": prompt} for prompt in prompts] inputs = [maybe_apply_chat_template(x, self.processing_class) for x in inputs] inputs = [self.tokenize_row(x, model.config.is_encoder_decoder, self.processing_class) for x in inputs] inputs = self.data_collator(inputs) # Sample 2 completions per prompt of size `max_new_tokens` from the model inputs = self._prepare_inputs(inputs) prompt_ids = inputs["prompt_input_ids"].repeat(2, 1) prompt_mask = inputs["prompt_attention_mask"].repeat(2, 1) output = unwrapped_model.generate( input_ids=prompt_ids, attention_mask=prompt_mask, generation_config=self.generation_config, ) ``` That seemed to work, but then I get some quite bad looking error: ``` AttributeError: 'DeepSpeedZeRoOffload' object has no attribute '_register_hooks_recursively' [rank0]: Traceback (most recent call last): [rank0]: File "/mnt/ml-data/crafty/simple/docs_dpo_online_repro.py", line 28, in <module> [rank0]: trainer.train() [rank0]: File "/mnt/ml-data/crafty/simple/repro-venv/lib/python3.11/site-packages/transformers/trainer.py", line 2241, in train [rank0]: return inner_training_loop( [rank0]: ^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/mnt/ml-data/crafty/simple/repro-venv/lib/python3.11/site-packages/transformers/trainer.py", line 2548, in _inner_training_loop [rank0]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/mnt/ml-data/crafty/simple/repro-venv/lib/python3.11/site-packages/trl/trainer/online_dpo_trainer.py", line 538, in training_step [rank0]: prompt_ids, prompt_mask, completion_ids, completion_mask = self._generate(model, prompts) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File 
"/mnt/ml-data/crafty/simple/repro-venv/lib/python3.11/site-packages/trl/trainer/online_dpo_trainer.py", line 482, in _generate [rank0]: with unwrap_model_for_generation( [rank0]: File "/usr/lib/python3.11/contextlib.py", line 144, in __exit__ [rank0]: next(self.gen) [rank0]: File "/mnt/ml-data/crafty/simple/repro-venv/lib/python3.11/site-packages/trl/models/utils.py", line 213, in unwrap_model_for_generation [rank0]: add_hooks(model) [rank0]: File "/mnt/ml-data/crafty/simple/repro-venv/lib/python3.11/site-packages/trl/models/utils.py", line 174, in add_hooks [rank0]: optimizer_offload._register_hooks_recursively(optimizer_offload.module) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: AttributeError: 'DeepSpeedZeRoOffload' object has no attribute '_register_hooks_recursively' ``` So I tried by using a lower DeepSpeed stage (1), and a smaller model so they would fit on one GPU: ```py # train_online_dpo.py from datasets import load_dataset from trl import OnlineDPOConfig, OnlineDPOTrainer, PairRMJudge from transformers import AutoModelForCausalLM, AutoTokenizer from trl import BasePairwiseJudge class DummyPairwiseJudge(BasePairwiseJudge): def judge(self, prompts: list[str], completions: list[list[str]], shuffle_order: bool = True) -> list[int]: return [0 for prompt in prompts] pass pass model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct") tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct") # model = AutoModelForCausalLM.from_pretrained("unsloth/Meta-Llama-3.1-8B-Instruct") # tokenizer = AutoTokenizer.from_pretrained("unsloth/Meta-Llama-3.1-8B-Instruct") # Explicitly defining `ref_model` because of error "ValueError: DeepSpeed ZeRO-3 is enabled and is not compatible with `create_reference_model()`. Please instantiate your reference model directly with `AutoModelForCausalLM.from_pretrained()`." # ref_model = AutoModelForCausalLM.from_pretrained("unsloth/Meta-Llama-3.1-8B-Instruct") ref_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct") train_dataset = load_dataset("trl-lib/ultrafeedback-prompt", split="train") training_args = OnlineDPOConfig(output_dir="Qwen2-0.5B-OnlineDPO", logging_steps=10, bf16=True) trainer = OnlineDPOTrainer( model=model, judge=DummyPairwiseJudge(), args=training_args, processing_class=tokenizer, train_dataset=train_dataset, ref_model=ref_model ) trainer.train() ``` This trains successfully: ```bash {'loss': 0.6932, 'grad_norm': 46.673831939697266, 'learning_rate': 4.823521106875618e-07, 'objective/kl': 0.9423828125, 'objective/entropy': 255.7, 'objective/non_score_reward': -0.09420166015625, 'rewards/chosen': -0.001299285888671875, 'rewards/rejected': -0.00139923095703125, 'rewards/accuracies': 0.45625, 'rewards/margins': 0.000103759765625, 'logps/chosen': -53.7, 'logps/rejected': -56.8, 'val/contain_eos_token': 0.29375, 'beta': 0.09999999999999999, 'epoch': 0.11} 4%|████▋ | 255/7083 [10:28<4:36:33, 2.43s/it] ``` So basically, seems like using DeepSpeed Stage 3 just doesn't work. And it's a shame because even 7B models can't be finetuned without quantization even with A100 80GB GPUs...
3,016
1,299
skoshx
2025-03-06T17:08:01
🎉 Update: Quickly reading through the DeepSpeed codebase gave me the understanding that the `DeepSpeedZeRoOffload` class automatically registers hooks upon instance creation, so I removed the `optimizer_offload._register_hooks_recursively(optimizer_offload.module)` line (`add_hooks` can be disregarded entirely in `trl/models/utils.py`), and now Online DPO works with DeepSpeed ZeRO Stage 3.

The above training with the `unsloth/Meta-Llama-3.1-8B-Instruct` model on a 2xA100 (80GB) node would take about 53 hours to complete:

```
0%| | 6/7083 [03:00<53:37:49, 27.28s/it]
```

I'm happy to open a PR to make these fixes, but would love the input of a maintainer to shed some light on potential problems with these patches, since I haven't worked that long on the TRL repo.
3,016
1,300
qgallouedec
2025-03-06T17:59:42
Is it related to #2963?
3,016
1,301
skoshx
2025-03-06T19:55:55
The second part is related, but that won't fix the original "AttributeError: 'dict' object has no attribute 'is_encoder_decoder'" error. Also, I see that PR was merged, but I'm still not convinced it's even needed to call `self._register_deepspeed_module(self.module)`, like they do in that PR, since it gets called automatically in `__init__`? Am I missing something? [Code line where hooks are automatically set up](https://github.com/deepspeedai/DeepSpeed/blob/c2c81993948fc28385542196c8544fb442017987/deepspeed/runtime/zero/parameter_offload.py#L177)
3,016
1,302
qgallouedec
2025-03-06T08:00:54
Thanks for reporting, how would you fix that?
3,015
1,303
Boltzmachine
2025-03-06T19:48:19
I clamp it for now
3,015
1,304
vagitablebirdcode
2025-03-14T09:55:29
I recommend implementing a `SoftClip` method in PyTorch, similar to the one in TensorFlow Probability, for truncation; its formula is similar to the following:

![Image](https://github.com/user-attachments/assets/bea559be-e38d-4849-aedf-8266a68fd060)

This activation function ensures that the output is smooth over the entire domain, which prevents gradient explosion during backpropagation here.
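For illustration, a minimal softplus-based smooth clip in PyTorch; this is a sketch in the spirit of TFP's `SoftClip`, not its exact formula:

```python
import torch
import torch.nn.functional as F

def soft_clip(x: torch.Tensor, low: float, high: float) -> torch.Tensor:
    # For x << low the output approaches low, for x >> high it approaches high,
    # and in between it is approximately the identity, with smooth gradients throughout.
    return low + F.softplus(x - low) - F.softplus(x - high)

x = torch.linspace(-10, 10, steps=5, requires_grad=True)
y = soft_clip(x, low=-2.0, high=2.0)
y.sum().backward()  # gradients are finite and smooth everywhere, unlike at a hard clamp boundary
```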
3,015
1,305
Alex-HaochenLi
2025-05-18T10:34:56
Hi @Boltzmachine, I met the same issue. I am wondering how you set the clamp value?
3,015
1,306
August-murr
2025-03-06T07:59:50
@qgallouedec I'm going to have to ask you to reproduce, or at least rerun, the code you used to train https://github.com/huggingface/trl/pull/2873#issuecomment-2663793035, so I can clarify whether the problem is on my side (in my script) or in TRL.
3,013
1,307
AndreiCComan
2025-03-06T17:47:02
@August-murr I had a similar issue in #2856 which has been fixed. Could you try to run the same MRE I posted in #2856 and confirm you are facing the same issue?
3,013
1,308
cuiyuhao1996
2025-03-18T02:52:42
I ran into the same problem, even with the latest update.
3,013
1,309
cuiyuhao1996
2025-03-18T02:54:40
Have you solved the problem? :)
3,013
1,310
August-murr
2025-03-18T12:05:25
> Have you solved the problem? :)

@qgallouedec said he was working on it. @qgallouedec any updates?
3,013
1,311
Techie5879
2025-04-05T06:55:13
Current setup: the vLLM model running on GPU 0, and in another notebook, GPU 1 set as the only visible device (for training). This is from https://github.com/huggingface/trl/blob/main/docs/source/speeding_up_training.md

```
training_args = GRPOConfig(
    output_dir="Llama-3.2-1B-GRPO4",
    logging_steps=1,
    save_steps=500,
    learning_rate=5e-7,
    adam_beta1=0.9,
    adam_beta2=0.99,
    weight_decay=0.1,
    warmup_ratio=0.05,
    max_grad_norm=0.1,
    max_steps=10000,
    per_device_train_batch_size=6,
    num_generations=6,
    lr_scheduler_type="cosine",
    push_to_hub=False,
    bf16=True,
    report_to="wandb",
    use_vllm=True,
    max_prompt_length=max_prompt_length,
    max_completion_length=512,
)
```

```
trainer = GRPOTrainer(
    model=MODEL_ID,
    processing_class=tokenizer,
    reward_funcs=[
        xmlcount_reward_func,
        soft_format_reward_func,
        strict_format_reward_func,
        int_reward_func,
        correctness_reward_func,
    ],
    args=training_args,
    train_dataset=dataset,
    # peft_config=lora_config,
)
```

With the PEFT config, training just doesn't seem to work well; without the PEFT config, it works much better and rewards are increasing. trl = 0.16.0, peft = 0.15.1
3,013
1,312
qgallouedec
2025-04-05T17:52:09
We usually use a higher learning rate when using peft. Could you try this?
3,013
1,313
Techie5879
2025-04-05T18:26:44
@qgallouedec I've tried about 2e-5 with Llama 3.2 1B, using LoRA rank 64. Do you recommend something else / going higher?
3,013
1,314
qgallouedec
2025-03-07T16:36:09
This can be considered; have you tried implementing it?
3,010
1,315
radna0
2025-03-07T16:39:35
@qgallouedec I’m still experimenting with LMDeploy for inference, so not yet.
3,010
1,316
HuggingFaceDocBuilderDev
2025-03-11T14:34:17
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3009). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,009
1,317
qgallouedec
2025-03-22T18:19:41
## Benchmark packing

```python
import timeit

import numpy as np
from datasets import Dataset

from trl.data_utils import pack_examples, pack_dataset

# Create a larger dataset with sequence lengths following a gamma distribution
num_samples = 10_000

# Generate sequence lengths following a gamma distribution
seq_lengths = np.random.gamma(shape=5, scale=20, size=num_samples)  # mean will be 100
seq_lengths = np.clip(seq_lengths, 10, None).astype(int)  # Clip to [10, inf)

# Generate input sequences with random lengths based on gamma distribution
examples = {
    "input_ids": [list(range(length)) for length in seq_lengths],
    "attention_mask": [[1] * length for length in seq_lengths],
}
dataset = Dataset.from_dict(examples)

max_length = 128  # Set a fixed packing length

# Benchmark pack_dataset
time_pack_dataset = timeit.timeit(lambda: pack_dataset(dataset, max_length), number=10)

# Benchmark dataset.map with pack_examples
time_pack_examples = timeit.timeit(
    lambda: dataset.map(pack_examples, batched=True, fn_kwargs={"seq_length": max_length}), number=10
)

print(f"pack_dataset time: {time_pack_dataset:.4f} seconds")
print(f"dataset.map(pack_examples) time: {time_pack_examples:.4f} seconds")
```

```
pack_dataset time: 0.0667 seconds
dataset.map(pack_examples) time: 19.3734 seconds
Speedup: 290.46x
```
3,009
1,318