Dataset columns: `user` (string, 3–28 chars), `created_at` (timestamp[us], 2020-04-01 09:48:12 to 2025-07-30 20:59:07), `body` (string, 1–173k chars), `issue_number` (int64, 1–3.81k), `__index_level_0__` (int64, 0–11.8k).
cyr0930
2025-02-28T08:59:55
Ah sorry, it's DPOTrainer.
2,985
8,600
kevinlu1248
2025-03-08T02:36:10
It also seems to not work with DeepSpeed stage 1/2, getting:
```
[rank0]: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
```
2,985
8,601
kevinlu1248
2025-03-08T03:11:04
Found a fix for stages 1 & 2 by explicitly initializing the reference model; it also looks like it gets garbage collected off of VRAM after the initial log-prob computation is complete.
2,985
8,602
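A minimal sketch of what the explicit reference-model initialization mentioned in the comment above might look like; the model name and dataset are placeholders, not taken from the thread.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "Qwen/Qwen2-0.5B-Instruct"  # placeholder model
model = AutoModelForCausalLM.from_pretrained(model_name)
# Build the reference model explicitly instead of letting the trainer create it,
# so it is a real module that DeepSpeed stage 1/2 can place on the right device.
ref_model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")  # placeholder dataset
training_args = DPOConfig(output_dir="dpo-out", per_device_train_batch_size=1)

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,  # explicit reference model
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```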
jamesbraza
2025-02-28T07:05:09
Is there a standard solution for DeepSpeed tests in CI? I think this is the first integration test for DeepSpeed added to the repo. In the future, we can expand it to cover https://github.com/huggingface/trl/pull/2871 and https://github.com/huggingface/trl/pull/2963.
2,984
8,603
qgallouedec
2025-03-24T16:36:18
I can't reproduce, this runs on my side:
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer


def dummy_reward_func(completions, **kwargs):
    return [0.0] * len(completions)


dataset = load_dataset("trl-lib/tldr")
training_args = GRPOConfig(output_dir="2983", num_iterations=3)
trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=dummy_reward_func,
    args=training_args,
    train_dataset=dataset["train"],
)
trainer.train()
```
Maybe try upgrading TRL? If you still get the issue, please provide an MRE.
2,983
8,604
Andcircle
2025-03-24T20:53:19
Thanks @qgallouedec, this was a while back. I saw in the release notes for 0.16.0 that num_iterations was added as a feature. I will try it out again.
2,983
8,605
qgallouedec
2025-03-24T21:30:56
Thanks, yes, sorry for the delay, the number of open issues is overwhelming, I'm trying to catch up.
2,983
8,606
HuggingFaceDocBuilderDev
2025-02-28T10:49:46
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2982). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,982
8,607
jhinpan
2025-02-28T01:36:57
Current plan for adding SGLang as an alternative inference backend:
0. Add a global flag `use_sglang`
1. Init the offline Engine in `def __init__()`
2. Implement `_update_sglang_engine_weights` to work like `_move_model_to_vllm`
3. Receive generation requests in `_prepare_inputs()`
4. Shut down the SGLang engine after speeding up the generation
2,981
8,608
jhinpan
2025-02-28T01:46:48
Current issue summary:
- Successful standalone inference: the SGLang server works correctly as an inference engine when run in a separate terminal.
- Distributed engine initialization failure: when switching to using the SGLang engine within the distributed trainer context, the distributed initialization of the offline SGLang engine fails.
- Logging insights:
  - All processes successfully pass the explicit barriers, and the main process reaches the SGLang engine initialization.
  - However, no log messages are observed indicating that the generation call (`engine.generate()`) is executed or that a response is returned.
- Conclusion: these observations suggest that execution is stalling either inside or before the call to `engine.generate()`. The issue likely lies within SGLang's internal behavior when operating under the distributed trainer context.
2,981
8,609
jhinpan
2025-03-01T02:41:57
It seems the issue is now narrowing down to: `accelerate launch` has some conflicts with SGLang offline engine initialization. Need to make more checks these two days.
- Firstly, check whether Accelerate + SGLang, i.e. without anything from TRL, behaves normally.
- If so, check whether manually creating processes (instead of using Accelerate) + SGLang works; otherwise we may need to either make a minimal sample or check what global state TRL changes.
- If so, check the environment-variable differences between "manual + SGLang" and "Accelerate + SGLang", try removing the differing ones, and see whether it works.
- If so, dig further into that single (or those several) env vars to understand what is happening.
2,981
8,610
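A generic sketch of the environment-variable comparison proposed above; nothing here is TRL- or SGLang-specific, and the file names are arbitrary.

```python
import json
import os
import sys

# Run once under `accelerate launch` and once under a plain/manual launch,
# passing a different tag each time, then diff the two JSON dumps.
tag = sys.argv[1] if len(sys.argv) > 1 else "run"
with open(f"env_{tag}.json", "w") as f:
    json.dump(dict(os.environ), f, indent=2, sort_keys=True)

# Afterwards, for example:
#   a = json.load(open("env_manual.json")); b = json.load(open("env_accelerate.json"))
#   print(sorted(set(a) ^ set(b)))  # env vars present in only one launch mode
```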
zhaochenyang20
2025-03-11T01:52:36
Amazing work, I will review it quickly.
2,981
8,611
qgallouedec
2025-03-12T02:40:12
Is this PR ready for review? There seem to be many files/lines used for dev that aren't cleaned up. Also, can you make sure to run the pre-commit? It seems like you use a custom format config, and it results in many changed lines unrelated to the PR.
2,981
8,612
zhaochenyang20
2025-03-12T02:56:21
@jhinpan Rebase with the main? Then I can ask them to review.
2,981
8,613
jhinpan
2025-03-12T03:11:07
> Is this PR ready for review? There seem to be many files/lines used for dev that aren't cleaned up. Also, can you make sure to run the pre-commit? It seems like you use a custom format config, and it results in many changed lines unrelated to the PR.

I just cleaned up all those dev files and ran the pre-commit. Hope that works. Feel free to let me know whether those testing scripts need to be removed. @qgallouedec @zhaochenyang20
2,981
8,614
zhaochenyang20
2025-03-12T04:21:06
> This branch has conflicts that must be resolved
> Changes can be cleanly merged.

@jhinpan
2,981
8,615
nopepper
2025-02-28T07:57:32
I was wrong, I had `beta=0.0` in all my experiments. Setting `beta=0.001` was enough to prevent the gradient explosion. Perhaps we shouldn't suggest that option in the docs so prominently? ![Image](https://github.com/user-attachments/assets/901a2028-4096-4d58-85d6-faff13573669)
2,980
8,616
qgallouedec
2025-02-28T15:24:36
I suspected that this could produce surprising results on a long run. https://github.com/huggingface/trl/pull/2806#issuecomment-2645941307 Would you recommend adding some sort of warning in the documentation?
2,980
8,617
nopepper
2025-02-28T15:30:41
Sounds good. Perhaps something like this?
```
KL coefficient. If `0.0`, the reference model is not loaded, reducing memory usage and improving training speed, but may be numerically unstable for long training runs.
```
2,980
8,618
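For reference, the KL coefficient discussed in this thread is the `beta` field of `GRPOConfig`; a small sketch of keeping it non-zero (the values here are illustrative):

```python
from trl import GRPOConfig

# beta=0.0 skips loading the reference model (less memory, faster training),
# but as reported above it can be numerically unstable on long runs.
training_args = GRPOConfig(output_dir="grpo-out", beta=0.001)
```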
qgallouedec
2025-02-28T15:56:08
Looks good! Are you willing to open a PR?
2,980
8,619
kenluozhenyu
2025-03-03T14:50:08
Same problem, TRL version: 0.15.1, on Windows 11
2,979
8,620
Tony-yzj
2025-03-19T03:51:08
Same problem, TRL version: 0.15.2, CUDA 12.1, on Windows 11.
2,979
8,621
qgallouedec
2025-02-27T22:16:39
Thanks for the suggestion, do you have any such tutorial in mind?
2,978
8,622
ParagEkbote
2025-02-28T14:23:52
Yes, we could use a model like SmolLM2 and use DPO or ORPO with a custom dataset to demonstrate the integration. WDYT? cc: @qgallouedec
2,978
8,623
ParagEkbote
2025-03-11T18:26:23
Gentle ping cc: @qgallouedec
2,978
8,624
qgallouedec
2025-03-11T18:31:43
Hi, sorry if this is unclear. In fact, this part of the documentation belongs to the community, to share its notebooks. That's why it's called "community tutorials". If you have a notebook to add, we can add it. But I think what you're really looking for is documentation for the various TRL integrations? If so, we have an "integration" section in the doc. It's not finished yet, and we're very open to contributions.
2,978
8,625
qgallouedec
2025-02-27T14:41:06
Try to downgrade vLLM to 0.7.2. See #2952
2,977
8,626
zaddy6
2025-02-27T14:42:40
vLLM 0.7.2 doesn't support the new Phi-4 mini. Any other workaround apart from downgrading?
2,977
8,627
qgallouedec
2025-02-27T14:15:14
Answer here: https://github.com/huggingface/open-r1/issues/239#issuecomment-2646297851 😊
2,976
8,628
L1n111ya
2025-02-27T14:25:03
> Answer here: [huggingface/open-r1#239 (comment)](https://github.com/huggingface/open-r1/issues/239#issuecomment-2646297851) 😊

Thank you for your reply. I know the advantage is 0, but what puzzles me is that, in that case, the loss only has the KL divergence term. Does it not update based on the reward function? How does the reward converge?
2,976
8,629
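A toy illustration of the point discussed above: GRPO's group-normalized advantage is zero whenever all completions in a group receive the same reward, so only the KL term contributes until rewards start to differ. The epsilon value below is illustrative.

```python
import torch

# Rewards for one group of num_generations completions of the same prompt.
rewards = torch.tensor([0.0, 0.0, 0.0, 0.0])  # identical rewards, e.g. all wrong

# Group-normalized advantages as used by GRPO (epsilon avoids division by zero).
advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-4)
print(advantages)  # tensor([0., 0., 0., 0.]) -> no policy-gradient signal, only KL remains

# Once completions in a group get different rewards, the advantages become non-zero
# and the reward starts to shape the policy again.
rewards = torch.tensor([1.0, 0.0, 0.0, 0.0])
advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-4)
print(advantages)  # the first completion is reinforced, the others are discouraged
```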
iamansinha
2025-03-01T14:03:22
This might help: [huggingface/open-r1/issues/239#issuecomment-2692241946](https://github.com/huggingface/open-r1/issues/239#issuecomment-2692241946)
2,976
8,630
L1n111ya
2025-03-02T04:08:38
> This might help: [huggingface/open-r1/issues/239#issuecomment-2692241946](https://github.com/huggingface/open-r1/issues/239#issuecomment-2692241946)

Thank you for your reply, it has resolved my doubts.
2,976
8,631
HuggingFaceDocBuilderDev
2025-02-27T09:15:48
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2975). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,975
8,632
nbasyl
2025-02-27T09:49:52
Thanks for double-checking this!
2,974
8,633
HuggingFaceDocBuilderDev
2025-02-27T11:10:24
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2974). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,974
8,634
canghongjian
2025-02-27T08:09:29
Two H20s are enough to start the training, but the speed is quite slow.
2,972
8,635
Tuziking
2025-02-27T08:42:26
> Two H20s are enough to start the training, but the speed is quite slow.

I use GRPOTrainer to train with two H20s, but I get CUDA out of memory (I tried more H20s and it also failed). What's strange is that during the first step each GPU only uses 40GB, but at the second step it suddenly fills up and causes the OOM error. I don't know if this is a bug. My configuration is as follows:
```python
training_args = GRPOConfig(
    output_dir=output_dir,
    learning_rate=5e-6,
    adam_beta1=0.9,
    adam_beta2=0.99,
    weight_decay=0.1,
    warmup_ratio=0.1,
    lr_scheduler_type='cosine',
    logging_steps=1,
    bf16=True,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=1,
    num_generations=2,
    max_prompt_length=256,
    max_completion_length=512,
    num_train_epochs=1,
    save_steps=100,
    max_grad_norm=0.1,
    log_on_each_node=False,
    # use_vllm=False,
    report_to="wandb",
    vllm_gpu_memory_utilization=0.5,
)
trainer = GRPOTrainer(
    model=model,
    # reward_funcs=xmlcount_reward_func,
    reward_funcs=[
        xmlcount_reward_func,
        soft_format_reward_func,
        # strict_format_reward_func,
        int_reward_func,
        correctness_reward_func,
    ],
    args=training_args,
    train_dataset=dataset,
)
```
2,972
8,636
Fox237
2025-03-14T02:22:35
Same problem here, has it been solved?
2,972
8,637
baibizhe
2025-03-01T15:58:05
No, I've been stuck on this problem for a while. The fact is that TRL currently only supports tensor parallel = 1. You could switch to another framework such as veRL or OpenRLHF; they work smoothly.
2,971
8,638
xz259
2025-03-15T15:17:45
I think the prompts over the gradient_accumulation_steps should be batched together. 7 generations is underutilizing the inference GPU.
2,971
8,639
lyh1028
2025-03-09T03:11:59
I have the same question.
2,970
8,640
qgallouedec
2025-02-27T13:28:40
Thanks @logicaltrojan! I took the opportunity to fix it everywhere. Once the CI is green we can merge :)
2,969
8,641
qgallouedec
2025-02-27T13:28:58
@bot /style
2,969
8,642
github-actions[bot]
2025-02-27T13:29:19
Style fixes have been applied. [View the workflow run here](https://github.com/huggingface/trl/actions/runs/13567545048).
2,969
8,643
HuggingFaceDocBuilderDev
2025-02-27T13:32:47
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2969). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,969
8,644
qgallouedec
2025-02-26T20:44:38
Yes, the solution is just to rename the argument :)
2,968
8,645
ErikKankaTrea
2025-02-27T03:36:14
Thanks!!!
2,968
8,646
HuggingFaceDocBuilderDev
2025-02-26T09:54:36
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2966). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,966
8,647
jojo23333
2025-02-27T04:06:43
Hi @qgallouedec, thanks for your great contribution! In this [line](https://github.com/huggingface/trl/blob/019fc6dbaa03b888f9d5c1845f0f690da8ed310c/trl/trainer/grpo_trainer.py#L752), I wonder why you don't get the logp when sampling but instead do the inference again? vLLM should also support returning the logp values.
2,966
8,648
qgallouedec
2025-02-27T08:41:32
Yes, but we need the gradient, and the logps returned by vLLM are just the values.
2,966
8,649
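A rough sketch of the reasoning above: vLLM returns plain values with no autograd history, so the trainer has to recompute per-token log-probs with the trainable model for gradients to flow. The helper name below is illustrative, not TRL's actual code.

```python
import torch


def get_per_token_logps(model, input_ids, attention_mask):
    # Forward pass with gradients enabled: these log-probs stay in the autograd graph.
    logits = model(input_ids=input_ids, attention_mask=attention_mask).logits[:, :-1]
    log_probs = logits.log_softmax(dim=-1)
    # Keep only the log-prob of each token that was actually generated.
    return torch.gather(log_probs, dim=2, index=input_ids[:, 1:].unsqueeze(-1)).squeeze(-1)

# Anything returned by the vLLM engine, by contrast, is just a detached number,
# so it cannot be used to backpropagate the loss.
```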
zetian1025
2025-03-03T11:07:39
Same issue. The problem seems to be related to DeepSpeed stage 3 (using stage 2 avoids the problem).
2,965
8,650
Sampson1107
2025-03-12T11:56:54
Same issue, same stage 3.
2,965
8,651
yeruoforever
2025-02-26T14:05:44
`TypeError: '>=' not supported between instances of 'list' and 'tuple'`
2,963
8,652
qgallouedec
2025-02-26T16:21:37
Thanks a lot @jamesbraza
2,963
8,653
qgallouedec
2025-02-26T16:24:32
by the way, is this change needed as well? https://github.com/huggingface/trl/pull/2871
2,963
8,654
jamesbraza
2025-02-26T18:55:39
> by the way, is this change needed as well? #2871

Yes, testing it now. Clearly I didn't test this PR previously, as @yeruoforever reported it had a `TypeError`, haha, my bad.
2,963
8,655
jamesbraza
2025-02-27T07:56:10
Hi @qgallouedec, I have completed my validations, and this PR is ready for merge. I had to also pull in:
- https://github.com/huggingface/trl/pull/2871 (thanks for sharing it)
- I hit https://github.com/huggingface/trl/issues/2953, and to fix it, I changed [this line](https://github.com/huggingface/trl/blob/019fc6dbaa03b888f9d5c1845f0f690da8ed310c/trl/trainer/grpo_trainer.py#L752) to `with torch.no_grad()` instead of `with torch.inference_mode()`
2,963
8,656
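For context on that swap: tensors created under `torch.inference_mode()` can never participate in autograd later, while `torch.no_grad()` only disables gradient tracking inside the block. A minimal, self-contained illustration:

```python
import torch

w = torch.randn(3, requires_grad=True)

with torch.no_grad():
    x = torch.randn(3)        # ordinary tensor, just created without grad tracking
(w * x).sum().backward()      # fine: x can participate in a later autograd graph

with torch.inference_mode():
    y = torch.randn(3)        # inference tensor
try:
    (w * y).sum().backward()
except RuntimeError as e:
    print(e)                  # "Inference tensors cannot be saved for backward..."
```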
qgallouedec
2025-02-27T08:51:54
Thanks, but I can't see the above-mentioned changes. Did you push them?
2,963
8,657
jamesbraza
2025-02-27T19:26:59
> Thanks, but I can't see the above-mentioned changes. Did you push them?

The other ones I mentioned weren't related to this PR, so they're not here. They are all here: https://github.com/Future-House/trl/tree/working-grpo-2025-02-27
2,963
8,658
qgallouedec
2025-03-04T15:46:55
Thanks! Just commit the suggestions and we are good to merge. Usually [allowing maintainer to edit the PR](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/allowing-changes-to-a-pull-request-branch-created-from-a-fork) makes things easier for us.
2,963
8,659
jamesbraza
2025-03-04T17:54:28
> Usually [allowing maintainer to edit the PR](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/allowing-changes-to-a-pull-request-branch-created-from-a-fork) makes things easier for us.

Yeah, I always do, but for some reason that checkbox isn't present in this PR's right panel :/

<img width="340" alt="screenshot of right panel" src="https://github.com/user-attachments/assets/63754c9c-f88e-4117-b6f1-5bdaa4935c4d" />

Regardless, the PR is ready for review again.
2,963
8,660
qgallouedec
2025-03-05T13:58:44
Thanks!
2,963
8,661
HuggingFaceDocBuilderDev
2025-03-05T14:03:36
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2963). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,963
8,662
gengzijun
2025-02-26T07:11:19
I'm having the same issue; do you know how to fix this now?
2,962
8,663
nopepper
2025-02-26T13:50:22
I've noticed a similar issue. DeepSpeed runs will eventually fail catastrophically and the reward plummets to 0. DDP doesn't work anymore when using `vLLM` and just hangs on the first step.
2,960
8,664
qgallouedec
2025-02-26T14:14:48
You should set num_processes to 2 if you have 2 GPUs. Also, to get comparable results you must ensure that the effective batch size (num_gpus × per_device_batch_size × grad_accum) is the same. I'm not sure it will solve your problem, let me know.
2,960
8,665
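To make the effective-batch-size arithmetic concrete (the numbers here are just an example):

```python
# effective batch size = num_gpus * per_device_train_batch_size * gradient_accumulation_steps
one_gpu  = 1 * 8 * 8  # e.g. a single-GPU run
two_gpus = 2 * 8 * 4  # halve gradient accumulation when doubling the GPUs
assert one_gpu == two_gpus == 64  # comparable runs see the same effective batch
```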
Kfkcome
2025-02-26T15:29:28
> You should set num_processes to 2 if you have 2 GPUs. Also, to get comparable results you must ensure that the effective batch size (num_gpus × per_device_batch_size × grad_accum) is the same. I'm not sure it will solve your problem, let me know.

If I set num_processes = 1, the first GPU focuses on training and the second GPU focuses on sampling (vLLM). I use the same args and only change `export CUDA_VISIBLE_DEVICES=0` to `export CUDA_VISIBLE_DEVICES=0,2`. I ran another test using the same args and one GPU: the format reward increases very fast, just like in my earlier test....

![Image](https://github.com/user-attachments/assets/5ff946cc-b1a3-4608-9efa-6688c5078039)
2,960
8,666
I-l-l-I
2025-03-12T11:22:54
@Kfkcome Something similar happened to me. There may be a communication problem between the GPUs. I solved it by replacing
```python
llm_model.load_weights(state_dict.items())
```
in `GRPOTrainer._move_model_to_vllm` with
```python
llm_device = next(llm_model.parameters()).device
for name, param in state_dict.items():
    weight = param.to('cpu').to(llm_device)
    llm_model.load_weights(weights=[(name, weight)])
    del weight
```
2,960
8,667
HuggingFaceDocBuilderDev
2025-02-25T13:18:00
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2956). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,956
8,668
qgallouedec
2025-02-25T13:35:22
Thanks for this great work!! It seems very close to GRPO; can you summarize the key differences to make the reviewing a bit easier for me?
2,955
8,669
liziniu
2025-02-25T15:13:06
Hi, ReMax differs from GRPO in two key aspects: baseline estimation and advantage calculation.

## Key Conceptual Differences

**Baseline Estimation:**
- GRPO uses the averaged empirical reward as the baseline value
- ReMax simply uses the reward value of a greedy decoding completion as the baseline

**Advantage Calculation:**
- GRPO calculates the grouped mean and standard deviation for normalization
- ReMax does not require this normalization step

## Implementation Details

The implementation of `remax_config.py` is basically the same as `grpo_config.py`, with modifications primarily in the trainer code's (`remax_trainer.py`) `_generate_and_score_completions` function (lines 690-880):

### Key Modifications

1. **Lines 690-760:** Modified generation to incorporate greedy decoding for baseline estimation
   - Sampling parameters for vLLM/HF generation slightly changed to accommodate this
2. **Lines 760-808:** Minimal changes to existing code
3. **Lines 810-860:** Added calculation of rewards for greedy completions
4. **Lines 860-870:** Direct advantage calculation without additional operations like gathering

### Additional Changes

- **`__init__` method:**
  - Changed vLLM sampling parameters by setting `n = 1`
  - This preserves the function of generating multiple completions, since prompts are repeated in lines 690-760
- **`compute_loss` method (line 951):**
  - Added `dim=1` when calculating the averaged loss
  - Loss is first normalized across timesteps, then across different batches
  - This implementation follows the description in the paper

I also provide an introduction to ReMax in the [docs](https://github.com/huggingface/trl/pull/2955/commits/c0fcdd350103d0f22a17645f44de8cf41719cd06?short_path=15a860f#diff-15a860fec3744a3eecf254dff5060ee5d30d3ec245dbc8dfad7417bcc3bc513b). If you have additional questions, please feel free to let me know.
2,955
8,670
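A toy sketch of the conceptual difference summarized above, not the PR's actual code: GRPO normalizes each group of rewards by its mean and standard deviation, while ReMax subtracts the reward of a greedy-decoded completion and skips normalization. Reward values and the epsilon are illustrative.

```python
import torch

sampled_rewards = torch.tensor([0.8, 0.2, 0.5, 0.9])  # rewards of sampled completions for one prompt
greedy_reward = torch.tensor(0.6)                      # reward of the greedy-decoded completion

# GRPO: group-normalized advantages.
grpo_adv = (sampled_rewards - sampled_rewards.mean()) / (sampled_rewards.std() + 1e-4)

# ReMax: greedy-baseline advantages, no normalization step.
remax_adv = sampled_rewards - greedy_reward

print(grpo_adv, remax_adv)
```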
qgallouedec
2025-02-27T11:07:37
Thanks! Can you try to integrate the very latest changes in GRPO?
2,955
8,671
liziniu
2025-03-03T08:27:12
Hi, I've integrated the latest changes from GRPO. Below is a summary of the updates:
- Lines 736-802: Modified the sampling strategy for ReMax to incorporate greedy decoding, which improves baseline estimation for ReMax.
- Lines 857-906: Added reward calculations for greedy completions. These rewards will be used to compute the advantage function.
- Lines 908-915: Implemented a customized advantage calculation specifically tailored for ReMax.

Let me know if you have any questions or need further details!
2,955
8,672
kashif
2025-03-03T08:31:55
Currently, the ReMax trainer file is a copy of the ReMax config file... is that a mistake?
2,955
8,673
liziniu
2025-03-03T11:44:48
Hi @kashif, thank you for pointing that out! It was my mistake to copy the wrong content. I’ve now fixed it.
2,955
8,674
liziniu
2025-03-12T02:36:54
Hi @qgallouedec Could you please review the code when you have a moment? If you need any additional information to assist with the review, I’d be happy to provide it. Thanks in advance!
2,955
8,675
HuggingFaceDocBuilderDev
2025-02-25T10:14:24
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2954). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,954
8,676
qgallouedec
2025-02-25T10:41:41
Nice! When it works, can you also add a few lines in https://huggingface.co/docs/trl/en/reducing_memory_usage? 🙏
2,954
8,677
kashif
2025-02-25T13:03:41
sure!
2,954
8,678
casper-hansen
2025-03-03T16:47:10
Looks like this PR was parked for now. @kashif did the implementation not work? This is super relevant to me if I am going to use TRL for training long-context reasoners
2,954
8,679
kashif
2025-03-03T16:49:20
@casper-hansen So during training I see a reduction in memory, but the memory jumps back up to the default as soon as the eval step starts, and I am investigating why...
2,954
8,680
casper-hansen
2025-03-03T17:26:24
@kashif The implementation relies on `torch.autograd.graph.saved_tensors_hooks` which would seem to only work if you are computing gradients.
2,954
8,681
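For context, `torch.autograd.graph.saved_tensors_hooks` only intercepts tensors that autograd saves for the backward pass, which is consistent with the memory savings disappearing when no gradients are computed (e.g. during evaluation). A minimal sketch of the mechanism:

```python
import torch


def pack(t):
    # Called when autograd saves a tensor for the backward pass
    # (a real implementation might offload or compress it here).
    print("packing", tuple(t.shape))
    return t


def unpack(t):
    # Called when the saved tensor is needed again during backward.
    return t


x = torch.randn(4, requires_grad=True)
with torch.autograd.graph.saved_tensors_hooks(pack, unpack):
    y = (x * x).sum()  # pack() fires: x is saved for the multiplication's backward
y.backward()           # unpack() fires during the backward pass

# In a gradient-free forward (e.g. evaluation under torch.no_grad()),
# autograd saves nothing, so these hooks never run and no memory is saved.
```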
gycg
2025-02-25T07:06:22
Same issue.
2,953
8,682
Kfkcome
2025-02-26T02:18:40
Maybe TRL GRPO doesn't support `num_generations > 1`?
2,953
8,683
jamesbraza
2025-02-27T01:36:27
https://github.com/huggingface/trl/pull/2565 used `torch.inference_mode` in `trl/trainer/grpo_trainer.py`, and I think https://github.com/huggingface/trl/pull/2899 should have switched it to `torch.no_grad`.
2,953
8,684
jamesbraza
2025-02-27T08:29:12
Hi @willccbb, would you be open to making a more minimal reproducer that resembles the unit tests here: https://github.com/huggingface/trl/blob/main/tests/test_grpo_trainer.py

I am struggling to make a unit test that reproduces this error:
```python
def test_stub(self):
    dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train")
    model = AutoModelForCausalLM.from_pretrained(
        "trl-internal-testing/tiny-Qwen2ForCausalLM-2.5",
        torch_dtype=torch.float32,
        use_cache=False,
    )
    with tempfile.TemporaryDirectory() as tmp_dir:
        training_args = GRPOConfig(
            output_dir=tmp_dir,
            learning_rate=0.1,
            per_device_train_batch_size=3,
            num_generations=3,
            max_completion_length=32,
            num_iterations=4,
            gradient_checkpointing=True,
            report_to="none",
        )
        trainer = GRPOTrainer(
            model=model,
            reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5",
            args=training_args,
            train_dataset=dataset,
        )
        trainer.train()
```
2,953
8,685
jamesbraza
2025-02-27T08:34:59
Fwiw, this was the stack I was seeing, with `deepspeed==0.16.4` and `torch==2.5.1+cu124`:
```none
5: [rank45]: Traceback (most recent call last):
5: [rank45]:   File "/path/to/repo/scripts/train.py", line 268, in <module>
5: [rank45]:     main(script_args, training_args, model_args)
5: [rank45]:   File "/path/to/repo/scripts/train.py", line 254, in main
5: [rank45]:     trainer.train(**train_kw)
5: [rank45]:   File "/path/to/repo/.venv/lib/python3.12/site-packages/transformers/trainer.py", line 2245, in train
5: [rank45]:     return inner_training_loop(
5: [rank45]:   File "/path/to/repo/.venv/lib/python3.12/site-packages/transformers/trainer.py", line 2556, in _inner_training_loop
5: [rank45]:     tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
5: [rank45]:   File "/path/to/repo/.venv/lib/python3.12/site-packages/transformers/trainer.py", line 3706, in training_step
5: [rank45]:     loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
5: [rank45]:   File "/path/to/repo/.venv/lib/python3.12/site-packages/trl/extras/profiling.py", line 33, in wrapper
5: [rank45]:     result = func(self, *args, **kwargs)
5: [rank45]:   File "/path/to/repo/.venv/lib/python3.12/site-packages/trl/trainer/grpo_trainer.py", line 985, in compute_loss
5: [rank45]:     per_token_logps = self._get_per_token_logps(model, input_ids, attention_mask, logits_to_keep)
5: [rank45]:   File "/path/to/repo/.venv/lib/python3.12/site-packages/trl/extras/profiling.py", line 33, in wrapper
5: [rank45]:     result = func(self, *args, **kwargs)
5: [rank45]:   File "/path/to/repo/.venv/lib/python3.12/site-packages/trl/trainer/grpo_trainer.py", line 624, in _get_per_token_logps
5: [rank45]:     logits = model(input_ids=input_ids, attention_mask=attention_mask, logits_to_keep=logits_to_keep + 1).logits
5: [rank45]:   File "/path/to/repo/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
5: [rank45]:     return self._call_impl(*args, **kwargs)
5: [rank45]:   File "/path/to/repo/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
5: [rank45]:     return forward_call(*args, **kwargs)
5: [rank45]:   File "/path/to/repo/.venv/lib/python3.12/site-packages/deepspeed/utils/nvtx.py", line 18, in wrapped_fn
5: [rank45]:     ret_val = func(*args, **kwargs)
5: [rank45]:   File "/path/to/repo/.venv/lib/python3.12/site-packages/deepspeed/runtime/engine.py", line 1987, in forward
5: [rank45]:     loss = self.module(*inputs, **kwargs)
5: [rank45]:   File "/path/to/repo/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
5: [rank45]:     return self._call_impl(*args, **kwargs)
5: [rank45]:   File "/path/to/repo/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1844, in _call_impl
5: [rank45]:     return inner()
5: [rank45]:   File "/path/to/repo/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1790, in inner
5: [rank45]:     result = forward_call(*args, **kwargs)
5: [rank45]:   File "/path/to/repo/.venv/lib/python3.12/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
5: [rank45]:     return func(*args, **kwargs)
5: [rank45]:   File "/path/to/repo/.venv/lib/python3.12/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 855, in forward
5: [rank45]:     outputs = self.model(
5: [rank45]:   File "/path/to/repo/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
5: [rank45]:     return self._call_impl(*args, **kwargs)
5: [rank45]:   File "/path/to/repo/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1844, in _call_impl
5: [rank45]:     return inner()
5: [rank45]:   File "/path/to/repo/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1790, in inner
5: [rank45]:     result = forward_call(*args, **kwargs)
5: [rank45]:   File "/path/to/repo/.venv/lib/python3.12/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 596, in forward
5: [rank45]:     hidden_states = self.norm(hidden_states)
5: [rank45]:   File "/path/to/repo/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
5: [rank45]:     return self._call_impl(*args, **kwargs)
5: [rank45]:   File "/path/to/repo/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1844, in _call_impl
5: [rank45]:     return inner()
5: [rank45]:   File "/path/to/repo/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1790, in inner
5: [rank45]:     result = forward_call(*args, **kwargs)
5: [rank45]:   File "/path/to/repo/.venv/lib/python3.12/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 223, in forward
5: [rank45]:     return self.weight * hidden_states.to(input_dtype)
5: [rank45]: RuntimeError: Inference tensors cannot be saved for backward. To work around you can make a clone to get a normal tensor and use it in autograd.
```
2,953
8,686
qgallouedec
2025-02-27T14:39:01
I cannot reproduce right now. Am I missing something?
```python
import os
import sys

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from trl import GRPOTrainer, GRPOConfig
from datasets import load_dataset

model_name = "Qwen/Qwen2.5-7B-Instruct"
model_kwargs = dict(torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2", use_cache=False)

training_args = GRPOConfig(
    output_dir="2953",
    run_name="GRPO-2953",
    learning_rate=2e-6,
    beta=0.02,
    adam_beta1=0.9,
    adam_beta2=0.99,
    warmup_ratio=0.1,
    lr_scheduler_type="cosine",
    logging_steps=1,
    bf16=True,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,
    num_generations=2,
    num_iterations=2,
    max_prompt_length=1024,
    max_completion_length=512,
    num_train_epochs=1,
    save_strategy="epoch",
    save_only_model=True,
    max_grad_norm=1.0,
    report_to="wandb",
    use_vllm=True,
    vllm_gpu_memory_utilization=0.7,
    vllm_device="cuda:7",
    log_on_each_node=False,
)

# model = AutoLigerKernelForCausalLM.from_pretrained(
model = AutoModelForCausalLM.from_pretrained(model_name, **model_kwargs)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token


def dummy_reward_func(completions, **kwargs):
    return [0.0] * len(completions)


dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train")

trainer = GRPOTrainer(
    model=model,
    processing_class=tokenizer,
    reward_funcs=[dummy_reward_func],
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```
```console
accelerate launch --num-processes 7 --config-file examples/accelerate_configs/deepspeed_zero3.yaml 2953.py
```
- Platform: Linux-5.15.0-1048-aws-x86_64-with-glibc2.31
- Python version: 3.12.9
- TRL version: 0.16.0.dev0+aa18ecf
- PyTorch version: 2.5.1
- CUDA device(s): NVIDIA H100 80GB HBM3, NVIDIA H100 80GB HBM3, NVIDIA H100 80GB HBM3, NVIDIA H100 80GB HBM3, NVIDIA H100 80GB HBM3, NVIDIA H100 80GB HBM3, NVIDIA H100 80GB HBM3, NVIDIA H100 80GB HBM3
- Transformers version: 4.49.0
- Accelerate version: 1.4.0
- Accelerate config: not found
- Datasets version: 3.3.2
- HF Hub version: 0.29.1
- bitsandbytes version: not installed
- DeepSpeed version: 0.16.4
- Diffusers version: not installed
- Liger-Kernel version: 0.5.3
- LLM-Blender version: not installed
- OpenAI version: 1.64.0
- PEFT version: not installed
- vLLM version: 0.7.3
2,953
8,687
HuggingFaceDocBuilderDev
2025-02-24T23:06:54
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2952). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,952
8,688
qgallouedec
2025-02-24T21:46:34
LGTM, once the latest nit recommendations are applied, and CI green, we're good to merge, thanks :)
2,951
8,689
HuggingFaceDocBuilderDev
2025-02-24T21:50:55
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2951). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,951
8,690
qgallouedec
2025-02-24T22:12:28
<img width="1550" alt="Screenshot 2025-02-24 at 23 12 05" src="https://github.com/user-attachments/assets/52118c75-977e-4869-95b5-84784a263f8e" />
2,951
8,691
qgallouedec
2025-02-24T22:15:15
@bot /style
2,951
8,692
qgallouedec
2025-02-24T22:44:11
Failing tests are because liger-kernel introduced a bug in their latest version. We can safely ignore it, I guess they'll do a patch release soon. See https://github.com/linkedin/Liger-Kernel/issues/586
2,951
8,693
glorgao
2025-02-24T19:00:48
After thoroughly reviewing the implementation, I found that the evaluation is conducted in a roll-out manner. Specifically, each question is duplicated `num_generations` times across GPUs, and answers are generated `num_generations` times using different sampling strategies.

Will the `num_generations` responses be identical? No, even in the initial phase, some answers have a `reward_acc` distribution of `[0,0,0,1,1,1,1]`. I'm not certain whether this is the optimal evaluation method, but it seems acceptable. Therefore, I am closing this issue.
2,950
8,694
HuggingFaceDocBuilderDev
2025-02-24T18:15:21
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2949). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,949
8,695
BenasdTW
2025-02-26T01:36:18
After this PR is merged, can I still pass a Liger model to `SFTTrainer`? Do I need to explicitly set `use_liger_kernel=True`?
2,949
8,696
qgallouedec
2025-02-26T08:32:34
Yes, you can still pass a Liger kernel model, but you will have to pass `use_liger_kernel=True`.
2,949
8,697
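A small sketch of what that setup could look like after this PR, assuming an `SFTConfig`-based script; the model and dataset names are placeholders:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

# Let the trainer apply the Liger kernels instead of patching the model yourself.
training_args = SFTConfig(output_dir="sft-out", use_liger_kernel=True)

trainer = SFTTrainer(
    model="Qwen/Qwen2-0.5B",  # placeholder model name
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```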
qgallouedec
2025-02-24T17:17:06
Cool! WDYT of
```python
from rich.console import Console
from rich.panel import Panel
from rich.table import Table
from rich.text import Text


def print_output_sample(prompts: list[str], completions: list[str], step: int) -> None:
    """Print out a sample of model completions."""
    console = Console()
    table = Table(show_header=True, header_style="bold white", expand=True, padding=(0, 1, 1, 0))
    table.add_column("Prompt", style="bright_yellow")
    table.add_column("Completion", style="bright_green")
    for s, p in zip(prompts, completions, strict=True):
        table.add_row(Text(s), Text(p))
    panel = Panel(table, expand=False, title=f"Step {step}", border_style="bold white")
    console.print(panel)


print_output_sample(["Hello, my name is", "The weather is", "I am feeling"], [" John", " sunny", " happy"], 1)
print_output_sample(["Hello, my name is", "The weather is", "I am feeling"], [" John", " sunny", " happy"], 2)
```
```console
╭───────────── Step 1 ─────────────╮
│ ┏━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┓ │
│ ┃Prompt            ┃Completion ┃ │
│ ┃                  ┃           ┃ │
│ ┡━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━┩ │
│ │Hello, my name is │ John      │ │
│ │                  │           │ │
│ │The weather is    │ sunny     │ │
│ │                  │           │ │
│ │I am feeling      │ happy     │ │
│ │                  │           │ │
│ └──────────────────┴───────────┘ │
╰──────────────────────────────────╯
╭───────────── Step 2 ─────────────╮
│ ┏━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┓ │
│ ┃Prompt            ┃Completion ┃ │
│ ┃                  ┃           ┃ │
│ ┡━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━┩ │
│ │Hello, my name is │ John      │ │
│ │                  │           │ │
│ │The weather is    │ sunny     │ │
│ │                  │           │ │
│ │I am feeling      │ happy     │ │
│ │                  │           │ │
│ └──────────────────┴───────────┘ │
╰──────────────────────────────────╯
```
Are you willing to open a PR?
2,948
8,698
nopepper
2025-02-24T18:47:42
@qgallouedec Looks good! I'll open a PR soon 🙂
2,948
8,699