Columns: user (string), created_at (timestamp), body (string), issue_number (int64), __index_level_0__ (int64)
dushyantbehl
2025-03-20T13:11:01
@lewtun we had noticed the same issue of frequent `FileNotFoundError` on `cache` files in our codebase when using `local_main_process_first` while preparing datasets with `map`, but our runs were on 2/4 nodes with 8 GPUs each ... could you give an overview of how to check whether the race condition you refer to here is the one we hit?
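For context, a minimal sketch of the pattern I mean, assuming the Accelerate `local_main_process_first` context manager; the dataset and `tokenize_fn` are placeholders, not our actual codebase:

```python
from accelerate import Accelerator
from datasets import load_dataset

accelerator = Accelerator()
dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder dataset


def tokenize_fn(batch):
    # placeholder preprocessing; the real code tokenizes here
    return {"n_chars": [len(p) for p in batch["prompt"]]}


# On each node, the local main process runs map() first and writes the Arrow
# cache files; the other ranks wait at the end of the block and then reuse the
# cache. A race on those cache files would surface as FileNotFoundError.
with accelerator.local_main_process_first():
    dataset = dataset.map(tokenize_fn, batched=True)
```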
3,106
1,019
fabianlim
2025-03-28T23:21:11
This PR is closed as it is superseded by #3162
3,105
1,020
willccbb
2025-03-20T19:00:08
There are multiple ways to approximate the KL divergence; the GRPO trainer uses the formula from the DeepSeekMath paper, which introduced the algorithm: https://arxiv.org/abs/2402.03300
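For reference, a minimal sketch of that per-token estimator (the "k3" form); the function and tensor names are illustrative, not the exact trl code:

```python
import torch


def approx_kl_k3(per_token_logps: torch.Tensor, ref_per_token_logps: torch.Tensor) -> torch.Tensor:
    # k3 estimator: exp(r) - r - 1 with r = log pi_ref - log pi_theta.
    # It is unbiased for KL(pi_theta || pi_ref), always non-negative,
    # and lower-variance than the naive -r estimator.
    r = ref_per_token_logps - per_token_logps
    return torch.exp(r) - r - 1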
3,104
1,021
dominiquegarmier
2025-03-21T09:16:36
Thanks for clarifying, I was unaware of this estimator.
3,104
1,022
wutaiqiang
2025-05-06T01:17:37
See http://joschu.net/blog/kl-approx.html for more details.
3,104
1,023
tchang1997
2025-03-18T22:26:28
`.merge_adapter()` "loads" the LoRA weights into the base architecture (L678), so any `print(model)` call will look similar to a LoRA-free model. Then, `.unmerge_adapter()` reverses that operation (L701). In short, `trl` does `merge_adapter()` -> modify `state_dict` keys for compatibility -> load weights in vLLM -> `unmerge_adapter()` (to restore the original PeftModel for training).
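A rough sketch of that sequence, for illustration only; the helper name and the exact key-cleanup rules here are assumptions rather than the actual trl code:

```python
def push_merged_lora_weights(model, load_into_vllm):
    """model: a PeftModel with a LoRA adapter; load_into_vllm: hypothetical callback."""
    model.merge_adapter()  # fold the LoRA deltas into the base weights in place
    state_dict = model.state_dict()
    # Strip PEFT-specific prefixes so the keys match the plain base architecture,
    # and drop the adapter tensors themselves.
    state_dict = {
        k.removeprefix("base_model.model.").replace(".base_layer", ""): v
        for k, v in state_dict.items()
        if "lora_" not in k
    }
    load_into_vllm(state_dict)  # e.g. stream (name, weight) pairs to the vLLM worker
    model.unmerge_adapter()  # restore the original PeftModel for training
```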
3,103
1,024
shirinyamani
2025-03-24T18:32:15
Correct @tchang1997, thanks for the explanation!
3,103
1,025
Ishan-Kumar2
2025-05-20T11:16:15
Hi @qgallouedec, sorry for the delay, I had my final exams :) I have made the changes, please take a look!
3,100
1,026
zeyushen-yo
2025-03-31T22:52:47
I ran into this issue as well. Why hasn't it been solved? Shouldn't everyone who uses GRPO hit it?
3,098
1,027
qgallouedec
2025-04-01T01:04:47
I can't reproduce: when I print ```python print(prompts, completions, output_reward_func) ``` after https://github.com/huggingface/trl/blob/e751a16df56e70190fb94bed4a2035eec3303777/trl/trainer/grpo_trainer.py#L822 This is what I get with the following code: ```python from datasets import Dataset from trl import GRPOTrainer, GRPOConfig dataset = Dataset.from_dict( { "prompt": [ "Give me a 1.0 reward.", "Give me a 2.0 reward.", "Give me a 3.0 reward.", ] } ) def dummy_reward(prompts, completions, **kwargs): rewards = [] for prompt, completion in zip(prompts, completions): # exctract the wanted reward from the prompt rewards.append(float(prompt.split(" ")[3])) return rewards trainer = GRPOTrainer( model="Qwen/Qwen2-0.5B", args = GRPOConfig(max_completion_length=2, num_generations=4, report_to="none"), reward_funcs=[dummy_reward], train_dataset=dataset, ) trainer.train() ``` ``` ['Give me a 1.0 reward.', 'Give me a 1.0 reward.', 'Give me a 1.0 reward.', 'Give me a 1.0 reward.', 'Give me a 3.0 reward.', 'Give me a 3.0 reward.', 'Give me a 3.0 reward.', 'Give me a 3.0 reward.'] [' The first', ' Is it', ' Is there', ' I can', ' All of', ' If I', ' For example', ' No time'] [1.0, 1.0, 1.0, 1.0, 3.0, 3.0, 3.0, 3.0] ['Give me a 2.0 reward.', 'Give me a 2.0 reward.', 'Give me a 2.0 reward.', 'Give me a 2.0 reward.', 'Give me a 1.0 reward.', 'Give me a 1.0 reward.', 'Give me a 1.0 reward.', 'Give me a 1.0 reward.'] [' I also', ' I was', ' How can', ' The first', ' The amount', ' (0', ' I can', ' How can'] [2.0, 2.0, 2.0, 2.0, 1.0, 1.0, 1.0, 1.0] ['Give me a 1.0 reward.', 'Give me a 1.0 reward.', 'Give me a 1.0 reward.', 'Give me a 1.0 reward.', 'Give me a 3.0 reward.', 'Give me a 3.0 reward.', 'Give me a 3.0 reward.', 'Give me a 3.0 reward.'] [' I am', ' And a', ' How much', ' Not only', ' I will', ' My first', ' I apologize', ' Good job'] [1.0, 1.0, 1.0, 1.0, 3.0, 3.0, 3.0, 3.0] ``` They seem to match, no?
3,098
1,028
leoyuppieqnew
2025-04-01T01:59:03
> I can't reproduce: when I print `print(prompts, completions, output_reward_func)` after https://github.com/huggingface/trl/blob/e751a16df56e70190fb94bed4a2035eec3303777/trl/trainer/grpo_trainer.py#L822 [...]
>
> They seem to match, no?

I used vLLM as the inference backend and tried 1 GPU for inference / 7 GPUs for training on 0.15.2, and 2 GPUs for inference / 6 GPUs for training on 0.16.0; this bug occurred in both cases. Maybe you should try again using vLLM as the backend with more than 4 GPUs, and the bug will reproduce.
3,098
1,029
qgallouedec
2025-04-01T02:16:55
```python # 3098.py from datasets import Dataset from trl import GRPOTrainer, GRPOConfig dataset = Dataset.from_dict({"prompt": [f"Give me a {float(x)} reward." for x in range(100)]}) def dummy_reward(prompts, completions, **kwargs): rewards = [] for prompt, completion in zip(prompts, completions): # exctract the wanted reward from the prompt rewards.append(float(prompt.split(" ")[3])) return rewards trainer = GRPOTrainer( model="Qwen/Qwen2.5-1.5B", args=GRPOConfig(max_completion_length=2, num_generations=4, report_to="none", use_vllm=True), reward_funcs=[dummy_reward], train_dataset=dataset, ) trainer.train() ``` ``` CUDA_VISIBLE_DEVICES=2,3,4,5,6,7 accelerate launch sandbox/3098.py ``` ``` trl vllm-serve --model Qwen/Qwen2.5-1.5B --tensor_parallel_size 2 ``` TRL version: 0.17.0.dev0 ``` ['Give me a 6.0 reward.', 'Give me a 6.0 reward.', 'Give me a 6.0 reward.', 'Give me a 6.0 reward.', 'Give me a 13.0 reward.', 'Give me a 13.0 reward.', 'Give me a 13.0 reward.', 'Give me a 13.0 reward.'] [' Sure!', ' Can you', ' We’re', ' I will', ' You can', ' Give me', ' Give me', ' I want'] [6.0, 6.0, 6.0, 6.0, 13.0, 13.0, 13.0, 13.0] ['Give me a 22.0 reward.', 'Give me a 22.0 reward.', 'Give me a 22.0 reward.', 'Give me a 22.0 reward.', 'Give me a 7.0 reward.', 'Give me a 7.0 reward.', 'Give me a 7.0 reward.', 'Give me a 7.0 reward.'] [' The story', ' The goal', ' This is', ' 0', ' In a', ' Write a', ' How can', ' 1'] [22.0, 22.0, 22.0, 22.0, 7.0, 7.0, 7.0, 7.0] ['Give me a 28.0 reward.', 'Give me a 28.0 reward.', 'Give me a 28.0 reward.', 'Give me a 28.0 reward.', 'Give me a 31.0 reward.', 'Give me a 31.0 reward.', 'Give me a 31.0 reward.', 'Give me a 31.0 reward.'] [' Can you', ' As a', ' I need', ' The best', ' The goal', " I'm", ' I am', ' We must'] [28.0, 28.0, 28.0, 28.0, 31.0, 31.0, 31.0, 31.0] ['Give me a 30.0 reward.', 'Give me a 30.0 reward.', 'Give me a 30.0 reward.', 'Give me a 30.0 reward.', 'Give me a 21.0 reward.', 'Give me a 21.0 reward.', 'Give me a 21.0 reward.', 'Give me a 21.0 reward.'] [' I love', ' This is', ' You will', ' The following', ' I don', ' To earn', ' If you', ' You are'] [30.0, 30.0, 30.0, 30.0, 21.0, 21.0, 21.0, 21.0] ... ``` they still match
3,098
1,030
qgallouedec
2025-04-01T02:29:37
And if I print this: ```python for prompt, completion in zip(prompts, completions): print(prompt, completion) ``` after https://github.com/huggingface/trl/blob/e751a16df56e70190fb94bed4a2035eec3303777/trl/trainer/grpo_trainer.py#L822 And I run with the same setting the following training: ```python from datasets import Dataset from trl import GRPOTrainer, GRPOConfig dataset = Dataset.from_dict({"prompt": [f"{x} \\times 2 =" for x in range(32)]}) def dummy_reward(prompts, completions, **kwargs): return [0] * len(prompts) trainer = GRPOTrainer( model="Qwen/Qwen2.5-1.5B", args=GRPOConfig(max_completion_length=3, num_generations=4, report_to="none", use_vllm=True, temperature=0.1), reward_funcs=[dummy_reward], train_dataset=dataset, ) trainer.train() ``` I get: ``` 10 \times 2 = 20 10 \times 2 = 20 10 \times 2 = 20 10 \times 2 = 20 16 \times 2 = 32 16 \times 2 = 32 16 \times 2 = 32 16 \times 2 = 32 19 \times 2 = 38 19 \times 2 = 38 19 \times 2 = 38 19 \times 2 = 38 27 \times 2 = 54 27 \times 2 = 54 27 \times 2 = 54 27 \times 2 = 54 9 \times 2 = 18 9 \times 2 = 18 9 \times 2 = 18 9 \times 2 = 18 29 \times 2 = 58 29 \times 2 = 58 ... ``` Which shows that prompts and completions also match.
3,098
1,031
zeyushen-yo
2025-04-01T02:32:01
Since I'm using my school's shared cluster, which doesn't support running `vllm_client` on localhost, I'm using the version from commit 491921c. That version gives the error discussed here. It would be helpful to add vLLM support without relying on localhost, which might solve the issue given that your toy example is working.
3,098
1,032
qgallouedec
2025-04-01T03:05:33
We might do that in the future, see #3162
3,098
1,033
leoyuppieqnew
2025-04-02T02:35:44
> they still match

Please try the code below; it will reproduce the issue:

```
# 3098.py
import re
import numpy as np
from datasets import Dataset, load_dataset
from trl import GRPOTrainer, GRPOConfig

data = {
    "prompt": [f"<|begin▁of▁sentence|><|User|>\nPlease tell me {float(x)} * {float(x)} is\n<|Assistant|>" for x in range(1500)],
    "intent": [float(x) for x in range(1500)]
}

dataset = Dataset.from_dict(data).train_test_split(test_size=0.01)

def dummy_reward(prompts, completions, intent, **kwargs):
    rewards = []
    for prompt, completion, label in zip(prompts, completions, intent):
        # extract the wanted reward from the prompt
        print({"prompt": prompt, "completion": completion, "label": label})
        rewards.append(float(label))
    return rewards


trainer = GRPOTrainer(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
    args=GRPOConfig(
        do_eval=True,
        gradient_accumulation_steps=16,
        gradient_checkpointing=True,
        per_device_train_batch_size=1,
        per_device_eval_batch_size=1,
        num_generations=6,
        use_vllm=True,
        report_to="none"
    ),
    reward_funcs=[dummy_reward],
    train_dataset=dataset['train'],
    eval_dataset=dataset['test']
)
trainer.train()
```

Then its output looks like this:

![Image](https://github.com/user-attachments/assets/1603e90c-9629-4f6e-9562-8a32210bf0c7)

Start commands:

```
CUDA_VISIBLE_DEVICES=0,1 trl vllm-serve --model deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B --tensor_parallel_size 2 --gpu_memory_utilization 0.95 --max_model_len 12000 --enable_prefix_caching true
```

and

```
CUDA_VISIBLE_DEVICES=2,3,4,5,6,7 accelerate launch 3098.py
```
3,098
1,034
leoyuppieqnew
2025-04-08T06:04:40
@qgallouedec Could you please help me check whether I did something wrong or whether this is a bug?
3,098
1,035
binary-husky
2025-03-16T16:31:52
![image](https://github.com/user-attachments/assets/6fdf46d5-e433-4b76-b186-e2ab6bc87ee4)
3,094
1,036
qgallouedec
2025-03-18T04:58:12
@binary-husky thank you very much for this work. It gave us a better understanding of how to achieve this. I wanted to take a more ambitious approach and decided to refactor it further. Since this was more than I could reasonably ask of an external contributor, I took the liberty of committing the changes directly to your branch. I hope that’s okay with you!
3,094
1,037
qgallouedec
2025-03-18T05:53:26
```python # 3094.py from datasets import load_dataset from trl import GRPOTrainer, GRPOConfig dataset = load_dataset("trl-lib/tldr", split="train") # Dummy reward function: count the number of unique characters in the completions def reward_num_unique_chars(completions, **kwargs): return [len(set(c)) for c in completions] training_args = GRPOConfig(output_dir="3094", use_vllm=True, bf16=True, gradient_checkpointing=True, logging_steps=10) trainer = GRPOTrainer( model="Qwen/Qwen2.5-7B", args=training_args, reward_funcs=reward_num_unique_chars, train_dataset=dataset, ) trainer.train() ``` ``` trl vllm-serve --model Qwen/Qwen2.5-7B --tensor_parallel_size 4 ``` ``` CUDA_VISIBLE_DEVICES=4,5,6,7 accelerate launch --config_file examples/accelerate_configs/deepspeed_zero3.yaml 3094.py ```
3,094
1,038
qgallouedec
2025-03-18T23:11:20
### Experiment 1

Number of samples per min (for the same effective batch size of 224)

| Number of GPUs (training/generation) | Before | After |
| ------------------------------------ | ------ | ----- |
| 7/1                                  | todo   | 3.2   |
| 4/4                                  | N/A    | 2.8   |

<img width="1176" alt="Screenshot 2025-03-18 at 16 09 54" src="https://github.com/user-attachments/assets/33d40900-ea0f-44dc-a637-7c9588210e01" />
<img width="778" alt="Screenshot 2025-03-18 at 16 23 06" src="https://github.com/user-attachments/assets/b250596a-18ed-42d9-881d-3c86cf028c9b" />

```python
from datasets import load_dataset
from trl import GRPOTrainer, GRPOConfig

dataset = load_dataset("trl-lib/tldr", split="train")


# Dummy reward function: count the number of unique characters in the completions
def reward_num_unique_chars(completions, **kwargs):
    return [len(set(c)) for c in completions]


training_args = GRPOConfig(
    output_dir="3094-7-1",
    use_vllm=True,
    bf16=True,
    gradient_checkpointing=True,
    logging_steps=10,
    gradient_accumulation_steps=4,
)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-7B",
    args=training_args,
    reward_funcs=reward_num_unique_chars,
    train_dataset=dataset,
)
trainer.train()
```
3,094
1,039
qgallouedec
2025-03-19T00:22:00
## Experiment 2

Number of samples per min (for the same effective batch size of 672)

| Number of GPUs (training/generation) | Before | After |
| ------------------------------------ | ------ | ----- |
| 7/1                                  | ???    | 1.09  |
| 6/2                                  | N/A    | 1.14  |
| 4/4                                  | N/A    | 0.96  |

<img width="1180" alt="Screenshot 2025-03-18 at 17 12 36" src="https://github.com/user-attachments/assets/77f0f389-40e0-4280-859e-5d385bb20002" />
<img width="780" alt="Screenshot 2025-03-18 at 17 12 56" src="https://github.com/user-attachments/assets/afc1935c-c4a7-439e-9bf2-e75f1c0dc66c" />
<img width="287" alt="Screenshot 2025-03-18 at 17 14 35" src="https://github.com/user-attachments/assets/58b586f0-47d8-43f8-a7c2-1a8bf0a8abc5" />
3,094
1,040
HuggingFaceDocBuilderDev
2025-03-19T03:08:20
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3094). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,094
1,041
zaddy6
2025-03-19T16:50:29
@qgallouedec can we already use this? :)
3,094
1,042
qgallouedec
2025-03-19T16:58:20
Yes @zaddy6, but please share any errors, questions, regressions, etc.
3,094
1,043
zaddy6
2025-03-19T17:08:03
```from datasets import load_dataset from trl import GRPOTrainer, GRPOConfig dataset = load_dataset("trl-lib/tldr", split="train") # Dummy reward function: count the number of unique characters in the completions def reward_num_unique_chars(completions, **kwargs): return [len(set(c)) for c in completions] training_args = GRPOConfig(output_dir="3094", use_vllm=True, bf16=True, gradient_checkpointing=True, logging_steps=10) trainer = GRPOTrainer( model="Qwen/Qwen2.5-7B", args=training_args, reward_funcs=reward_num_unique_chars, train_dataset=dataset, ) trainer.train()``` ``` - Platform: Linux-5.15.0-130-generic-x86_64-with-glibc2.35 - Python version: 3.11.11 - TRL version: 0.16.0.dev0 - PyTorch version: 2.5.1 - CUDA device(s): NVIDIA H100 80GB HBM3, NVIDIA H100 80GB HBM3, NVIDIA H100 80GB HBM3, NVIDIA H100 80GB HBM3, NVIDIA H100 80GB HBM3, NVIDIA H100 80GB HBM3, NVIDIA H100 80GB HBM3, NVIDIA H100 80GB HBM3 - Transformers version: 4.50.0.dev0 - Accelerate version: 1.5.2 - Accelerate config: not found - Datasets version: 3.4.1 - HF Hub version: 0.29.3 - bitsandbytes version: 0.45.3 - DeepSpeed version: 0.16.4 - Diffusers version: 0.32.2 - Liger-Kernel version: 0.5.5 - LLM-Blender version: not installed - OpenAI version: 1.66.3 - PEFT version: 0.14.0 - vLLM version: 0.7.2 ``` Getting OOM for the training using `mistralai/Mistral-Nemo-Instruct-2407` @qgallouedec
3,094
1,044
qgallouedec
2025-03-19T17:10:08
Which ZeRO stage do you use?
3,094
1,045
zaddy6
2025-03-19T17:13:20
> Which ZeRO stage do you use? `zero_stage 3` I got it to work but after the first set of generations it crashes ``` orch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: Recomputed values for the following tensors have different metadata than during the forward pass. tensor at position 4: saved metadata: {'shape': torch.Size([5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} tensor at position 6: saved metadata: {'shape': torch.Size([4096, 5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} tensor at position 13: saved metadata: {'shape': torch.Size([1024, 5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} tensor at position 20: saved metadata: {'shape': torch.Size([1024, 5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} tensor at position 43: saved metadata: {'shape': torch.Size([5120, 4096]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} tensor at position 53: saved metadata: {'shape': torch.Size([5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} tensor at position 55: saved metadata: {'shape': torch.Size([14336, 5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} tensor at position 63: saved metadata: {'shape': torch.Size([14336, 5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} tensor at position 72: saved metadata: {'shape': torch.Size([5120, 14336]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} [rank0]: Traceback (most recent call last): [rank0]: File "/workspace/never.py", line 962, in <module> [rank0]: trainer.train() [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/transformers/trainer.py", line 2251, in train [rank0]: return inner_training_loop( [rank0]: ^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/transformers/trainer.py", line 2562, in _inner_training_loop [rank0]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/transformers/trainer.py", line 3770, in training_step [rank0]: self.accelerator.backward(loss, **kwargs) [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/accelerate/accelerator.py", line 2351, in backward [rank0]: self.deepspeed_engine_wrapped.backward(loss, **kwargs) [rank0]: File 
"/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/accelerate/utils/deepspeed.py", line 266, in backward [rank0]: self.engine.backward(loss, **kwargs) [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/deepspeed/utils/nvtx.py", line 18, in wrapped_fn [rank0]: ret_val = func(*args, **kwargs) [rank0]: ^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 2126, in backward [rank0]: self.optimizer.backward(loss, retain_graph=retain_graph) [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/deepspeed/utils/nvtx.py", line 18, in wrapped_fn [rank0]: ret_val = func(*args, **kwargs) [rank0]: ^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/deepspeed/runtime/zero/stage3.py", line 2284, in backward [rank0]: self.loss_scaler.backward(loss.float(), retain_graph=retain_graph) [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 63, in backward [rank0]: scaled_loss.backward(retain_graph=retain_graph) [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/torch/_tensor.py", line 581, in backward [rank0]: torch.autograd.backward( [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/torch/autograd/__init__.py", line 347, in backward [rank0]: _engine_run_backward( [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward [rank0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/torch/autograd/function.py", line 307, in apply [rank0]: return user_fn(self, *args) [rank0]: ^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/torch/amp/autocast_mode.py", line 511, in decorate_bwd [rank0]: return bwd(*args, **kwargs) [rank0]: ^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/deepspeed/runtime/zero/linear.py", line 80, in backward [rank0]: input, weight, bias = ctx.saved_tensors [rank0]: ^^^^^^^^^^^^^^^^^ [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/torch/utils/checkpoint.py", line 1129, in unpack_hook [rank0]: frame.check_recomputed_tensors_match(gid) [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/torch/utils/checkpoint.py", line 903, in check_recomputed_tensors_match [rank0]: raise CheckpointError( [rank0]: torch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: Recomputed values for the following tensors have different metadata than during the forward pass. 
[rank0]: tensor at position 4: [rank0]: saved metadata: {'shape': torch.Size([5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} [rank0]: recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} [rank0]: tensor at position 6: [rank0]: saved metadata: {'shape': torch.Size([4096, 5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} [rank0]: recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} [rank0]: tensor at position 13: [rank0]: saved metadata: {'shape': torch.Size([1024, 5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} [rank0]: recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} [rank0]: tensor at position 20: [rank0]: saved metadata: {'shape': torch.Size([1024, 5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} [rank0]: recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} [rank0]: tensor at position 43: [rank0]: saved metadata: {'shape': torch.Size([5120, 4096]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} [rank0]: recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} [rank0]: tensor at position 53: [rank0]: saved metadata: {'shape': torch.Size([5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} [rank0]: recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} [rank0]: tensor at position 55: [rank0]: saved metadata: {'shape': torch.Size([14336, 5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} [rank0]: recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} [rank0]: tensor at position 63: [rank0]: saved metadata: {'shape': torch.Size([14336, 5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} [rank0]: recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} [rank0]: tensor at position 72: [rank0]: saved metadata: {'shape': torch.Size([5120, 14336]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} [rank0]: recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} ```
3,094
1,046
zaddy6
2025-03-19T17:19:04
On the positive side, I noticed this PR fixed the issue with vLLM generation getting stuck at 0: https://github.com/huggingface/trl/issues/2977#issuecomment-2734866274
3,094
1,047
zaddy6
2025-03-19T17:41:17
> > Which ZeRO stage do you use? > > `zero_stage 3` > > I got it to work but after the first set of generations it crashes > > ``` > orch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: Recomputed values for the following tensors have different metadata than during the forward pass. > tensor at position 4: > saved metadata: {'shape': torch.Size([5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > tensor at position 6: > saved metadata: {'shape': torch.Size([4096, 5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > tensor at position 13: > saved metadata: {'shape': torch.Size([1024, 5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > tensor at position 20: > saved metadata: {'shape': torch.Size([1024, 5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > tensor at position 43: > saved metadata: {'shape': torch.Size([5120, 4096]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > tensor at position 53: > saved metadata: {'shape': torch.Size([5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > tensor at position 55: > saved metadata: {'shape': torch.Size([14336, 5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > tensor at position 63: > saved metadata: {'shape': torch.Size([14336, 5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > tensor at position 72: > saved metadata: {'shape': torch.Size([5120, 14336]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > > [rank0]: Traceback (most recent call last): > [rank0]: File "/workspace/never.py", line 962, in <module> > [rank0]: trainer.train() > [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/transformers/trainer.py", line 2251, in train > [rank0]: return inner_training_loop( > [rank0]: ^^^^^^^^^^^^^^^^^^^^ > [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/transformers/trainer.py", line 2562, in _inner_training_loop > [rank0]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch) > [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/transformers/trainer.py", line 3770, in training_step > [rank0]: self.accelerator.backward(loss, **kwargs) > [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/accelerate/accelerator.py", line 2351, in backward > [rank0]: 
self.deepspeed_engine_wrapped.backward(loss, **kwargs) > [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/accelerate/utils/deepspeed.py", line 266, in backward > [rank0]: self.engine.backward(loss, **kwargs) > [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/deepspeed/utils/nvtx.py", line 18, in wrapped_fn > [rank0]: ret_val = func(*args, **kwargs) > [rank0]: ^^^^^^^^^^^^^^^^^^^^^ > [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 2126, in backward > [rank0]: self.optimizer.backward(loss, retain_graph=retain_graph) > [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/deepspeed/utils/nvtx.py", line 18, in wrapped_fn > [rank0]: ret_val = func(*args, **kwargs) > [rank0]: ^^^^^^^^^^^^^^^^^^^^^ > [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/deepspeed/runtime/zero/stage3.py", line 2284, in backward > [rank0]: self.loss_scaler.backward(loss.float(), retain_graph=retain_graph) > [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 63, in backward > [rank0]: scaled_loss.backward(retain_graph=retain_graph) > [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/torch/_tensor.py", line 581, in backward > [rank0]: torch.autograd.backward( > [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/torch/autograd/__init__.py", line 347, in backward > [rank0]: _engine_run_backward( > [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward > [rank0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass > [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/torch/autograd/function.py", line 307, in apply > [rank0]: return user_fn(self, *args) > [rank0]: ^^^^^^^^^^^^^^^^^^^^ > [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/torch/amp/autocast_mode.py", line 511, in decorate_bwd > [rank0]: return bwd(*args, **kwargs) > [rank0]: ^^^^^^^^^^^^^^^^^^^^ > [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/deepspeed/runtime/zero/linear.py", line 80, in backward > [rank0]: input, weight, bias = ctx.saved_tensors > [rank0]: ^^^^^^^^^^^^^^^^^ > [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/torch/utils/checkpoint.py", line 1129, in unpack_hook > [rank0]: frame.check_recomputed_tensors_match(gid) > [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/torch/utils/checkpoint.py", line 903, in check_recomputed_tensors_match > [rank0]: raise CheckpointError( > [rank0]: torch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: Recomputed values for the following tensors have different metadata than during the forward pass. 
> [rank0]: tensor at position 4: > [rank0]: saved metadata: {'shape': torch.Size([5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > [rank0]: recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > [rank0]: tensor at position 6: > [rank0]: saved metadata: {'shape': torch.Size([4096, 5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > [rank0]: recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > [rank0]: tensor at position 13: > [rank0]: saved metadata: {'shape': torch.Size([1024, 5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > [rank0]: recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > [rank0]: tensor at position 20: > [rank0]: saved metadata: {'shape': torch.Size([1024, 5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > [rank0]: recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > [rank0]: tensor at position 43: > [rank0]: saved metadata: {'shape': torch.Size([5120, 4096]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > [rank0]: recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > [rank0]: tensor at position 53: > [rank0]: saved metadata: {'shape': torch.Size([5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > [rank0]: recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > [rank0]: tensor at position 55: > [rank0]: saved metadata: {'shape': torch.Size([14336, 5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > [rank0]: recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > [rank0]: tensor at position 63: > [rank0]: saved metadata: {'shape': torch.Size([14336, 5120]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > [rank0]: recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > [rank0]: tensor at position 72: > [rank0]: saved metadata: {'shape': torch.Size([5120, 14336]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > [rank0]: recomputed metadata: {'shape': torch.Size([0]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} > ``` I figure this had to do with `gradient_checkpointing_kwargs={"use_reentrant": False}` I removed that but encountered another bug Might be related to this issue from the early days https://github.com/huggingface/trl/issues/2698 ``` File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync return await get_async_backend().run_sync_in_worker_thread( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2470, in run_sync_in_worker_thread return await future ^^^^^^^^^^^^ File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 967, in run result = context.run(func, *args) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 500, in collective_rpc return executor.collective_rpc(method, timeout, args, kwargs) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/vllm/executor/uniproc_executor.py", line 51, in collective_rpc answer = run_method(self.driver_worker, method, args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/vllm/utils.py", line 2220, in run_method return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/trl/scripts/vllm_serve.py", line 111, in update_named_param self.model_runner.model.load_weights(weights=[(name, weight)]) File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/vllm/model_executor/models/llama.py", line 567, in load_weights return loader.load_weights( ^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 235, in load_weights autoloaded_weights = set(self._load_module("", self.module, weights)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 224, in _load_module raise ValueError(msg) ValueError: There is no module or parameter named 'base_model' in LlamaForCausalLM INFO: 127.0.0.1:44824 - "POST /close_communicator/ HTTP/1.1" 200 OK ```
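For reference, the setting mentioned above is the `gradient_checkpointing_kwargs` field inherited from `transformers.TrainingArguments`; a minimal sketch of how it would be passed (the values here are illustrative, and whether `{"use_reentrant": False}` helps or hurts is exactly what is being debugged in this thread):

```python
from trl import GRPOConfig

training_args = GRPOConfig(
    output_dir="3094",
    use_vllm=True,
    bf16=True,
    gradient_checkpointing=True,
    gradient_checkpointing_kwargs={"use_reentrant": False},  # the setting referred to above
    logging_steps=10,
)
```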
3,094
1,048
qgallouedec
2025-03-19T17:42:16
The error doesn't seem to be an OOM, or am I missing something? Also, I've noticed this in your script: ``` model="Qwen/Qwen2.5-7B", ``` do you use `mistralai/Mistral-Nemo-Instruct-2407` or `Qwen/Qwen2.5-7B`?
3,094
1,049
zaddy6
2025-03-19T17:45:21
> do you use `mistralai/Mistral-Nemo-Instruct-2407` or `Qwen/Qwen2.5-7B`?

`mistralai/Mistral-Nemo-Instruct-2407`, but the OOM is resolved now, see my most recent comment. I think the bug is related to https://github.com/huggingface/trl/issues/2698, as it works fine without PEFT.
3,094
1,050
shirinyamani
2025-03-19T17:59:06
> `mistralai/Mistral-Nemo-Instruct-2407`, but the OOM is resolved now, see my most recent comment. I think the bug is related to #2698, as it works fine without PEFT.

Did you also try with the `Lora` config from the issue you linked? I am wondering whether the issue is related to using PEFT generally or to using PEFT with a specific config like LoRA (i.e. PEFT might be fine with other supported configs?).

Reading your reply: when you do not use `peft`, does the training process succeed (using the vLLM remote setup from this PR)? Is it actually faster than before?
3,094
1,051
zaddy6
2025-03-19T18:04:00
> Reading your reply: when you do not use `peft`, does the training process succeed (using the vLLM remote setup from this PR)? Is it actually faster than before?

Yes, without PEFT training seems faster, but it crashes with OOM after a certain number of steps. With PEFT and a LoRA config it fails with errors similar to the linked issue. I can confirm this doesn't work with LoRA:

```
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map=None,
).to("cuda")

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "up_proj", "down_proj", "gate_proj"],
    task_type="CAUSAL_LM",
    lora_dropout=0.05,
)

trainer = GRPOTrainer(
    model=get_peft_model(model, lora_config),
    processing_class=tokenizer,
    reward_funcs=[check_originality_func],
    args=training_args,
    train_dataset=dataset,
)
```
3,094
1,052
qgallouedec
2025-03-19T22:27:55
## Compile vs not compiled It works without further modification with compiled modules: <img width="1263" alt="Screenshot 2025-03-19 at 15 27 33" src="https://github.com/user-attachments/assets/02c69c19-57f2-4b6d-9b6e-9c9911cd8b8b" /> ```python from datasets import load_dataset from trl import GRPOTrainer, GRPOConfig dataset = load_dataset("trl-lib/tldr", split="train") # Dummy reward function: count the number of unique characters in the completions def reward_num_unique_chars(completions, **kwargs): return [len(set(c)) for c in completions] training_args = GRPOConfig( output_dir="3094-compiled", use_vllm=True, bf16=True, gradient_checkpointing=True, logging_steps=10, torch_compile=True, ) trainer = GRPOTrainer( model="Qwen/Qwen2.5-7B", args=training_args, reward_funcs=reward_num_unique_chars, train_dataset=dataset, ) trainer.train() ``` ``` CUDA_VISIBLE_DEVICES=4,5,6,7 accelerate launch --config_file examples/accelerate_configs/deepspeed_zero3.yaml --num_processes 4 sandbox/3094.py ``` ``` trl vllm-serve --model Qwen/Qwen2.5-7B --tensor_parallel_size 4 ```
3,094
1,053
binary-husky
2025-03-20T06:07:34
> `mistralai/Mistral-Nemo-Instruct-2407`, but the OOM is resolved now, see my most recent comment. I think the bug is related to #2698, as it works fine without PEFT.

When using PEFT, some weight names get renamed. The current implementation of GRPO + LoRA handles this weight-name issue like this:

```python
state_dict = unwrapped_model.state_dict()
# Remove base_model and base_layer prefixes
state_dict = {
    k.removeprefix("base_model.model.").replace(".base_layer", ""): v for k, v in state_dict.items()
}
# Remove values with adapter prefix (example: "_lora")
state_dict = {k: v for k, v in state_dict.items() if unwrapped_model.prefix not in k}
# When module to save, remove its prefix and discard the original module
state_dict = {
    k.replace("modules_to_save.default.", ""): v
    for k, v in state_dict.items()
    if "original_module" not in k
}
```
3,094
1,054
maoulee
2025-03-20T07:11:46
> When using PEFT, some weight names get renamed. The current implementation of GRPO + LoRA handles this weight-name issue like this: [...]

Why not just push the LoRA weights to vLLM and load them through a vLLM `LoRARequest`? This would avoid some of the issues LoRA has with vLLM in GRPO. I have used this idea in conjunction with this service and it works well with ZeRO-2. It can also support quantized models with GRPO.
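For what it's worth, a minimal sketch of the idea using vLLM's built-in LoRA support; the adapter path and names are hypothetical, and this is not how the trl vLLM server currently works:

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# Hypothetical adapter directory saved from the training side (e.g. trainer.save_model()).
adapter_path = "outputs/grpo_lora_adapter"

llm = LLM(model="Qwen/Qwen2.5-7B", enable_lora=True)
outputs = llm.generate(
    ["Give me a 1.0 reward."],
    SamplingParams(max_tokens=16),
    # name, integer id, and local path of the adapter to apply for this request
    lora_request=LoRARequest("grpo_adapter", 1, adapter_path),
)
```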
3,094
1,055
zaddy6
2025-03-20T09:36:51
> When using PEFT, some weight names get renamed. The current implementation of GRPO + LoRA handles this weight-name issue like this: [...]

Has this fix been merged?
3,094
1,056
kashif
2025-03-20T09:44:32
@zaddy6 we are looking at PEFT support next
3,094
1,057
zaddy6
2025-03-20T21:02:25
Getting a new error after the most recent commit train.py ``` import torch from datasets import load_dataset from trl import GRPOTrainer, GRPOConfig, get_peft_config from peft import LoraConfig, get_peft_model from transformers import AutoTokenizer, AutoModelForCausalLM dataset = load_dataset("trl-lib/tldr", split="train") # Dummy reward function: count the number of unique characters in the completions def reward_num_unique_chars(completions, **kwargs): return [len(set(c)) for c in completions] model_name = "Qwen/Qwen2.5-7B" lora_config = LoraConfig( r=16, lora_alpha=32, target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "up_proj", "down_proj", "gate_proj"], task_type="CAUSAL_LM", lora_dropout=0.05 ) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2", device_map=None ).to("cuda") tokenizer = AutoTokenizer.from_pretrained(model_name) tokenizer.pad_token = tokenizer.eos_token training_args = GRPOConfig( output_dir="3094-compiled", use_vllm=True, bf16=True, gradient_checkpointing=True, logging_steps=10, torch_compile=True, ) trainer = GRPOTrainer( model=get_peft_model(model, lora_config), processing_class=tokenizer, args=training_args, reward_funcs=reward_num_unique_chars, train_dataset=dataset, ) trainer.train() ``` Error Information ``` RuntimeError: The size of tensor a (0) must match the size of tensor b (3584) at non-singleton dimension 1 [rank1]: Traceback (most recent call last): [rank1]: File "/workspace/trl_test.py", line 51, in <module> [rank1]: trainer.train() [rank1]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/transformers/trainer.py", line 2251, in train [rank1]: return inner_training_loop( [rank1]: ^^^^^^^^^^^^^^^^^^^^ [rank1]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/transformers/trainer.py", line 2562, in _inner_training_loop [rank1]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch) [rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank1]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/transformers/trainer.py", line 3718, in training_step [rank1]: inputs = self._prepare_inputs(inputs) [rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank1]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/trl/extras/profiling.py", line 87, in wrapper [rank1]: return func(self, *args, **kwargs) [rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank1]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/trl/trainer/grpo_trainer.py", line 627, in _prepare_inputs [rank1]: inputs = self._generate_and_score_completions(inputs) [rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank1]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/trl/trainer/grpo_trainer.py", line 657, in _generate_and_score_completions [rank1]: self._move_model_to_vllm() [rank1]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/trl/extras/profiling.py", line 87, in wrapper [rank1]: return func(self, *args, **kwargs) [rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank1]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/trl/trainer/grpo_trainer.py", line 600, in _move_model_to_vllm [rank1]: merged_model = self.model.merge_and_unload() [rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank1]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/peft/tuners/lora/model.py", line 892, in merge_and_unload [rank1]: return self._unload_and_optionally_merge( [rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank1]: File 
"/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/peft/tuners/lora/model.py", line 514, in _unload_and_optionally_merge [rank1]: target.merge(safe_merge=safe_merge, adapter_names=adapter_names) [rank1]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/peft/tuners/lora/layer.py", line 514, in merge [rank1]: base_layer.weight.data += delta_weight [rank1]: RuntimeError: The size of tensor a (0) must match the size of tensor b (3584) at non-singleton dimension 1 [rank0]: Traceback (most recent call last): [rank0]: File "/workspace/trl_test.py", line 51, in <module> [rank0]: trainer.train() [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/transformers/trainer.py", line 2251, in train [rank0]: return inner_training_loop( [rank0]: ^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/transformers/trainer.py", line 2562, in _inner_training_loop [rank0]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/transformers/trainer.py", line 3718, in training_step [rank0]: inputs = self._prepare_inputs(inputs) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/trl/extras/profiling.py", line 87, in wrapper [rank0]: return func(self, *args, **kwargs) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/trl/trainer/grpo_trainer.py", line 627, in _prepare_inputs [rank0]: inputs = self._generate_and_score_completions(inputs) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/trl/trainer/grpo_trainer.py", line 657, in _generate_and_score_completions [rank0]: self._move_model_to_vllm() [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/trl/extras/profiling.py", line 87, in wrapper [rank0]: return func(self, *args, **kwargs) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/trl/trainer/grpo_trainer.py", line 600, in _move_model_to_vllm [rank0]: merged_model = self.model.merge_and_unload() [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/peft/tuners/lora/model.py", line 892, in merge_and_unload [rank0]: return self._unload_and_optionally_merge( [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/peft/tuners/lora/model.py", line 514, in _unload_and_optionally_merge [rank0]: target.merge(safe_merge=safe_merge, adapter_names=adapter_names) [rank0]: File "/opt/conda/envs/unsloth_env/lib/python3.11/site-packages/peft/tuners/lora/layer.py", line 514, in merge [rank0]: base_layer.weight.data += delta_weight [rank0]: RuntimeError: The size of tensor a (0) must match the size of tensor b (3584) at non-singleton dimension 1 ``` using 8xH100 with accelerate launch and deepspeed ``` - Platform: Linux-5.15.0-130-generic-x86_64-with-glibc2.35 - Python version: 3.11.11 - TRL version: 0.16.0.dev0 - PyTorch version: 2.5.1 - CUDA device(s): NVIDIA H100 80GB HBM3, NVIDIA H100 80GB HBM3, NVIDIA H100 80GB HBM3, NVIDIA H100 80GB HBM3, NVIDIA H100 80GB HBM3, NVIDIA H100 80GB HBM3, NVIDIA H100 80GB HBM3, NVIDIA H100 80GB HBM3 - Transformers version: 4.50.0.dev0 - Accelerate version: 1.5.2 - Accelerate config: not 
found - Datasets version: 3.4.1 - HF Hub version: 0.29.3 - bitsandbytes version: 0.45.3 - DeepSpeed version: 0.16.4 - Diffusers version: 0.32.2 - Liger-Kernel version: 0.5.5 - LLM-Blender version: not installed - OpenAI version: 1.66.3 - PEFT version: 0.14.0 - vLLM version: 0.7.2 ```
3,094
1,058
vagitablebirdcode
2025-03-21T03:46:45
How about using a subprocess to start and stop the vLLM service, so that a single program can drive the whole training run?
```python
import signal
import subprocess

# start the server (Popen returns immediately, unlike subprocess.run which blocks until exit)
command = ['trl', 'vllm-serve', '--model', model_name]
llm_process = subprocess.Popen(command)

# stop the server
llm_process.send_signal(signal.SIGINT)
ret_code = llm_process.wait()
```
3,094
1,059
qgallouedec
2025-03-21T03:51:55
Usually, users want to deploy a vLLM server on one node and do the training on another. I don't see how a subprocess can be used in this scenario.
3,094
1,060
binary-husky
2025-03-21T09:28:25
Encountered an issue with PEFT: when using ZeRO-3 + PEFT, `merge_adapter()` works very well at the beginning, but after executing `_get_per_token_logps`, `merge_adapter()` reports an error caused by a LoRA layer's `.weight` becoming empty (probably because ZeRO-3 automatically moves the LoRA weights elsewhere). This problem requires many conditions to reproduce; I'm tracking it with Qwen 32B + PEFT rank 32
```
self._move_model_to_vllm() # -- good
self._move_model_to_vllm() # -- good
self._move_model_to_vllm() # -- good
self._move_model_to_vllm() # -- good
per_token_logps = self._get_per_token_logps(model, input_ids, attention_mask, logits_to_keep)
self._move_model_to_vllm() # -- error
```
![image](https://github.com/user-attachments/assets/5cc1edf6-4c25-4874-8896-dc8f60f3bd38) .venv/lib/python3.11/site-packages/peft/tuners/lora/layer.py
3,094
1,061
binary-husky
2025-03-21T09:33:21
> Encountered an issue with PEFT: when using ZeRO-3 + PEFT, `merge_adapter()` works very well at the beginning, but after executing `_get_per_token_logps`, `merge_adapter()` reports an error caused by a LoRA layer's `.weight` becoming empty (probably because ZeRO-3 automatically moves the LoRA weights elsewhere). > > ![image](https://github.com/user-attachments/assets/5cc1edf6-4c25-4874-8896-dc8f60f3bd38) @zaddy6 maybe this originates from the same cause; we both have unexpected empty tensors
3,094
1,062
kashif
2025-03-21T09:38:50
@binary-husky do you think we should sync the weights, when it's DeepSpeed ZeRO-3, via the `deepspeed.comm.get_rank() == 0` process?
3,094
1,063
kashif
2025-03-21T10:50:13
@binary-husky I think the issue is that the PEFT `unmerge_adapter()` is happening outside of the DeepSpeed context, so let me move all the PEFT logic inside a single `GatheredParameters` context
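In code, the idea is roughly the following (a minimal sketch, not the exact TRL implementation; it assumes a `vllm_client` exposing the `update_named_param` and `reset_prefix_cache` methods used elsewhere in this thread, and gathering every parameter at once can be memory-hungry for very large models):
```python
import deepspeed
from contextlib import nullcontext

def push_merged_weights_to_vllm(model, accelerator, vllm_client, zero_stage_3):
    # Gather the (possibly ZeRO-3 partitioned) parameters so merge/unmerge
    # operate on full tensors instead of empty placeholders.
    gather = deepspeed.zero.GatheredParameters(list(model.parameters())) if zero_stage_3 else nullcontext()
    with gather:
        model.merge_adapter()  # fold the LoRA deltas into the base weights
        for name, param in model.named_parameters():
            # Recover the base-model parameter names expected by vLLM
            name = name.removeprefix("base_model.model.").replace(".base_layer", "")
            if "lora" in name or "original_module" in name:
                continue  # skip adapter-only tensors
            name = name.replace("modules_to_save.default.", "")
            if accelerator.is_main_process:
                vllm_client.update_named_param(name, param.data)
        model.unmerge_adapter()  # restore the trainable adapter
    if accelerator.is_main_process:
        vllm_client.reset_prefix_cache()
```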
3,094
1,064
kashif
2025-03-21T11:01:09
@binary-husky can you kindly check now?
3,094
1,065
zaddy6
2025-03-21T11:11:57
@kashif I tried and it works now, will do a full run and revert back
3,094
1,066
zaddy6
2025-03-21T16:13:34
> @kashif I tried and it works now, will do a full run and revert back <img width="623" alt="image" src="https://github.com/user-attachments/assets/55d0716d-06f2-4373-89d7-e2d91e1395d4" /> LORA Run doesn't learn
3,094
1,067
qgallouedec
2025-03-21T16:35:18
You don't use the same lr, do you?
3,094
1,068
zaddy6
2025-03-21T16:43:13
> You don't use the same lr, do you? Same LR <img width="1868" alt="Screenshot 2025-03-21 at 4 42 48 PM" src="https://github.com/user-attachments/assets/a4cf9e5e-3ff9-4d92-856b-e3ca05749d43" />
3,094
1,069
qgallouedec
2025-03-21T16:46:48
You can't expect the same results then
3,094
1,070
zaddy6
2025-03-21T16:55:03
> You can't expect the same results then But then, interestingly, the same LR converges using the main branch but doesn't for this PR
3,094
1,071
qgallouedec
2025-03-21T16:56:18
> But then, interestingly, the same LR converges using the main branch but doesn't for this PR Which commit do you use?
3,094
1,072
binary-husky
2025-03-21T16:56:30
zero3 + peft + bigbigmodel is more nasty than I expected, I'm writing and testing this solution ```python @profiling_decorator def _move_model_to_vllm_for_zero3_plus_peft_plus_bigbigmodel(self): """ Why this special method is needed for the special (zero3 + peft + very_large_model) combination? Why this combination is very nasty? 1. `model.merge_adapter()` must be executed after `unwrap_model_for_generation` 2. `unwrap_model_for_generation` can cause GPU OOM if model is very large 3. Usually, GPU OOM can be resolve by setting `gather_deepspeed3_params=False` 4. But guess what? `gather_deepspeed3_params=False` cause error for `model.merge_adapter()` Now you see why this is a very nasty problem? 😂 So, we rewrite the `merge_adapter` code, to avoid GPU OOM, we have to merge the adapter module by module. The basic idea is: 1. We first deal with lora weights only 2. Then we deal with whatever params that are left behind """ from peft.tuners.tuners_utils import BaseTunerLayer, onload_layer print(f"Begin transfer") # Update the weights in vLLM. When using DeepSpeed ZeRO Stage 3, we need to gather the parameters before updating the weights. deepspeed_plugin = self.accelerator.state.deepspeed_plugin zero_stage_3 = deepspeed_plugin is not None and deepspeed_plugin.zero_stage == 3 warning = "This special `_move_model_to_vllm` is designed only for zero3 + peft + very_large_model. Only a nasty problem needs a nasty solution like this." if not zero_stage_3 or not is_peft_model(self.model): raise RuntimeError(warning) parameter_to_transfer_map_id_name = {id(param): name for name, param in self.model.named_parameters()} # 1. We first deal with lora weights only, it is very very nasty 😂 for module in self.model.modules(): # This return not only leaf modules, but also the parent module if isinstance(module, BaseTunerLayer): # do not know what this `onload_layer` thing does, but it seems important with onload_layer(module): # get all the parameters of this small module param_list_of_this_small_module = [param for relative_name, param in module.named_parameters()] with deepspeed.zero.GatheredParameters(param_list_of_this_small_module) if zero_stage_3 else nullcontext(): # we must `GatheredParameters` before module.merge module.merge(adapter_names=None) for relative_name, param in module.named_parameters(): param_python_id = id(param) # get the absolute name of the parameter # absolute_name = f"{module.prefix}.{relative_name}" absolute_name = parameter_to_transfer_map_id_name[param_python_id] # f"{module.prefix}.{relative_name}" # one less weight to worry about parameter_to_transfer_map_id_name.pop(param_python_id) # only the main process is responsible for transferring weights if self.accelerator.is_main_process: # When using PEFT, we need to recover the original parameter name and discard some parameters absolute_name = absolute_name.removeprefix("base_model.model.").replace(".base_layer", "") if self.model.prefix in absolute_name: continue # When module to save, remove its prefix and discard the original module if "original_module" in absolute_name: continue absolute_name = absolute_name.replace("modules_to_save.default.", "") # Finally it is time to be transferred. 🌟 print(f"Transferring: {absolute_name}") self.vllm_client.update_named_param(absolute_name, param.data) # and of course, we must unmerge before exit `GatheredParameters` module.unmerge() # 2. 
Then we deal with whatever params that are left behind remaining_param_list = [(name, param) for name, param in self.model.named_parameters() if name in parameter_to_transfer_map_id_name.values()] for name, param in remaining_param_list: with deepspeed.zero.GatheredParameters([param]) if zero_stage_3 else nullcontext(): if self.accelerator.is_main_process: name = name.removeprefix("base_model.model.").replace(".base_layer", "") if self.model.prefix in name: raise RuntimeError("Something must be wrong because we assume lora-related weights are already transferred.") if "original_module" in name: raise RuntimeError("Something must be wrong because we assume lora-related weights are already transferred.") if ("modules_to_save.default." in name): raise RuntimeError("Something must be wrong because we assume lora-related weights are already transferred.") print(f"Transferring: {name}") self.vllm_client.update_named_param(name, param.data) # Reset the prefix cache after updating weights if self.accelerator.is_main_process: self.vllm_client.reset_prefix_cache() ```
3,094
1,073
qgallouedec
2025-03-21T16:58:28
@binary-husky Regarding zero3 + peft + bigbigmodel, I think it's best to work directly in PEFT to enable merge/unmerge with zero3.
3,094
1,074
zaddy6
2025-03-21T17:00:34
> > But then interestingly same LR converges, using main branch but doesnt for this PR > > Which commit do you use? https://github.com/huggingface/trl/pull/3094/commits/657cb21df92182b1503772c4103840a2a09e194b
3,094
1,075
qgallouedec
2025-03-21T17:31:16
@zaddy6 I can't reproduce: ```python import tempfile from datasets import load_dataset from trl import GRPOConfig, GRPOTrainer from peft import LoraConfig # Dummy reward function: count the number of unique characters in the completions def reward_num_unique_chars(completions, **kwargs): return [len(set(c)) for c in completions] dataset = load_dataset("trl-lib/tldr", split="train") with tempfile.TemporaryDirectory() as tmp_dir: training_args = GRPOConfig( output_dir=tmp_dir, learning_rate=1e-4, bf16=True, logging_steps=10, # use_vllm=True, ) trainer = GRPOTrainer( model="Qwen/Qwen2.5-0.5B", reward_funcs=reward_num_unique_chars, args=training_args, train_dataset=dataset, peft_config=LoraConfig(), ) trainer.train() ``` <img width="1211" alt="Screenshot 2025-03-21 at 10 30 57" src="https://github.com/user-attachments/assets/90f4368a-d7e2-4c65-b92b-1c50b39b885d" />
3,094
1,076
zaddy6
2025-03-21T18:01:29
My bad, the issue was on my end, smh <img width="638" alt="image" src="https://github.com/user-attachments/assets/36e35048-8df8-4d36-8a46-0dad0e050d81" /> @qgallouedec It works, and funnily enough it converges even faster for my use case
3,094
1,077
qgallouedec
2025-03-21T18:17:28
Thanks @binary-husky for d759c9c but as I mentioned here > @binary-husky Regarding zero3 + peft + bigbigmodel, I think it's best to work directly in PEFT to enable merge/unmerge with zero3. I think the zero3 + peft + bigbigmodel is a peft issue, and we shouldn't implement this logic in TRL. I'd recommend you open a PR in peft instead
3,094
1,078
qgallouedec
2025-03-21T19:10:23
Curious errors in the CI, which I can't reproduce locally. I'm pretty sure it's because the runners don't have GPUs and the import fails because of that, even if you don't use GPUs. I think we can ignore that for now. I'm merging!
3,094
1,079
binary-husky
2025-03-21T19:16:33
nice, looks much more elegant now ~
3,094
1,080
maoulee
2025-03-22T03:30:59
> @zaddy6 we are looking at the peft support next

I've modified vLLM by patching it to directly accept and load LoRA parameters as separate adapters during the generation process. This bypasses the need to transfer the full model parameters. This adapter-loading approach avoids potential errors associated with merging and unloading PEFT models.
```python
@app.post("/apply_lora/")
def apply_lora(request: ApplyLoraRequest, background_tasks: BackgroundTasks):
    worker = llm.llm_engine.model_executor.driver_worker
    lora_weights = worker.lora_weight
    lora_config = request.lora_config
    if worker.lora_id is None:
        worker.lora_id = 0
    else:
        worker.lora_id = worker.lora_id + 1
    from vllm.lora.request import LoRARequest
    lora_request = LoRARequest(
        lora_name=str(worker.lora_id),
        lora_int_id=worker.lora_id,
        lora_tensors=lora_weights,
        lora_config=lora_config,
    )
    worker.lora_requests = lora_request
    return {"message": f"LoRA applied with ID: {worker.lora_id}", "lora_id": worker.lora_id}
```
This modification has proven effective in my experiments using a ZeRO-2 setup for GRPO training on an R1-32B-INT4 model. ![image](https://github.com/user-attachments/assets/b2c1ba28-be60-4f0d-a9b9-642800eba008) ![image](https://github.com/user-attachments/assets/29594d9c-2b7b-4207-a5a5-9b40f3d15869) Would it be helpful if I uploaded my modified vllm_serve.py, vllm_client.py and vllm_patch.py files? I'm relatively new to code sharing, so I'm not sure of the best way to provide the code.
3,094
1,081
qgallouedec
2025-03-22T03:33:31
Thanks @maoulee, feel free to open a PR so that we can test :)
3,094
1,082
maoulee
2025-03-22T03:48:07
> Thanks @maoulee, feel free to open a PR so that we can test :) OK, I'll open a new PR and update the mentioned files
3,094
1,083
Andcircle
2025-03-25T04:24:45
@binary-husky @qgallouedec Sorry I still haven't make this work, how to make 4 GPU in machine 1 for VLLM the rest 4 and the whole machine 2 for training? as stated here: ------------------------------------------------------------------------------------------------------------ 2 machine | 1 for training, 1 for VLLM | using NCCL to deliver param updates ------------------------------------------------------------------------------------------------------------ --- (1) start MAIN TRAINING script: (on machine 1, all 8 gpus for training) --- CUDA_VISIBLE_DEVICES='0,1,2,3,4,5,6,7' \ accelerate launch --config_file examples/accelerate_configs/deepspeed_zero3.yaml \ --num_processes=8 \ grpo_with_remote_vllm.py \ --model_name_or_path /mnt/data_cpfs/model_cache/modelscope/hub/Qwen/Qwen/Qwen2___5-7B-Instruct/ \ --dataset_name "trl-internal-testing/zen" \ --output_dir './mytests' \ --bf16 \ --use_remote_vllm=True \ --vllm_max_model_len 4096 \ --remote_vllm_num_gpus=1 \ --remote_vllm_ip_port='22.6.225.225:8000' --- (2) start VLLM script (do not run the commandline below, it's only a demo, the true commandline will be `printed` by the MAIN TRAINING script.): (on machine 2, 1 GPU for VLLM) --- >> the commandline will be `printed` by the MAIN TRAINING script.
3,094
1,084
qgallouedec
2025-03-25T04:42:32
Ignore the PR description, it's an old version. Please refer to the docs.
3,094
1,085
Andcircle
2025-03-25T04:49:28
> Ignore the PR description, it's an old version. Please refer to the docs. The docs use SLURM and only show how to use a whole node for vLLM. Can we still do something like: use 4 GPUs on machine 1 for vLLM and the remaining 4 plus the whole of machine 2 for training?
3,094
1,086
binary-husky
2025-03-25T07:40:24
> > Ignore the pr description it's an old version. Please refer to the doc > > the doc use SLURM, it only show how to use the whole node for VLLM, can we still do something like: make 4 GPU in machine 1 for VLLM the rest 4 and the whole machine 2 for training? @Andcircle You can refer to my personal notebook below for training 32B Qwen, it is ugly, not general, but may deliver some basic ideas: ``` # 1. Move the Model to Memory in all node🌟 # ---------------------------- # Install rsync # apt install rsync tmux -y && \ # Clear memory disk # rm -rf /dev/shm/targetmodel && \ # Move the model # rsync -av /path/to/Qwen2___5-32B-Instruct/ /dev/shm/targetmodel # ---------------------------- # 2. Machine 1 [eth0: 22.6.222.80] (Few GPUs) Start vLLM Service (Steps 2 and 3 can be done in any order) # GPU List 🌟 # CUDA_VISIBLE_DEVICES="0,1,2,3" \ # vLLM Serve # trl vllm-serve \ # Model # --model /dev/shm/targetmodel \ # Total GPUs 🌟 # --tensor_parallel_size 4 \ # # --host 0.0.0.0 --port 8000 \ # # --max_model_len 8192 # 3-1. Machine 2 [eth0: 22.8.150.23] (All GPUs) Start Training Host (Steps 2 and 3 can be done in any order) # Change Directory # cd /path/to/openr1 && \ # Virtual Env # source .venv/bin/activate && \ # Clear Terminal # clear && \ # GPU List # CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7" \ # # accelerate launch \ # Multi-Machine Params # --config_file recipes/accelerate_configs/zero3-multi-nodes.yaml \ # Number of Machines # --num_machines=2 \ # Total GPUs # --num_processes=16 \ # Main IP # --main_process_ip="22.8.150.23" \ # Machine Rank # --machine_rank=0 \ # Target Program # src/open_r1/grpo.py \ # Training Params 🌟 # --config recipes/Qwen2.5-32B-Instruct/grpo/learn.yaml \ # VLLM Machine 🌟 # --vllm_server_host 22.6.222.80 # 3-2. Machine 3 [eth0: 22.6.191.91] (All GPUs) Start Training Host (Steps 2 and 3 can be done in any order) # Change Directory # cd /path/to/openr1 && \ # Virtual Env # source .venv/bin/activate && \ # Clear Terminal # clear && \ # GPU List # CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7" \ # # accelerate launch \ # Multi-Machine Params # --config_file recipes/accelerate_configs/zero3-multi-nodes.yaml \ # Number of Machines # --num_machines=2 \ # Total GPUs # --num_processes=16 \ # Main IP # --main_process_ip="22.8.150.23" \ # Machine Rank 🌟 # --machine_rank=1 \ # Target Program # src/open_r1/grpo.py \ # Training Params # --config recipes/Qwen2.5-32B-Instruct/grpo/learn.yaml \ # VLLM Machine # --vllm_server_host 22.6.222.80 ```
3,094
1,087
Andcircle
2025-03-25T16:05:44
@binary-husky awesome! really appreciated!!
3,094
1,088
Andcircle
2025-03-26T03:42:18
@binary-husky I'm trying to use GPU as efficient as possible in your above solution, in machine 1, the 0,1,2,3 used for vllm, then 4,5,6,7 can't be used for training anymore. I'm trying to start 2 vllm, one on 0123 with port 8000, one on 4567 with port 9000 Then machine 2 will call vllm1, machine 3 call vllm2, then I can train 2 variations of model at the same time (I thought) But actually it doesn't work, the vllm client update from machine3 will have error as following: Any hints how should I make this setup work? ``` [rank0]: trainer = GRPOTrainer( [rank0]: ^^^^^^^^^^^^ [rank0]: File "/home/user/.local/lib/python3.11/site-packages/trl/trainer/grpo_trainer.py", line 457, in __init__ [rank0]: self.vllm_client = VLLMClient( [rank0]: ^^^^^^^^^^^ [rank0]: File "/home/user/.local/lib/python3.11/site-packages/trl/extras/vllm_client.py", line 95, in __init__ [rank0]: self.init_communicator() [rank0]: File "/home/user/.local/lib/python3.11/site-packages/trl/extras/vllm_client.py", line 215, in init_communicator [rank0]: self.pynccl_comm = PyNcclCommunicator(pg, device="cuda:0") [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/home/user/.local/lib/python3.11/site-packages/vllm/distributed/device_communicators/pynccl.py", line 99, in __init__ [rank0]: self.comm: ncclComm_t = self.nccl.ncclCommInitRank( [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/home/user/.local/lib/python3.11/site-packages/vllm/distributed/device_communicators/pynccl_wrapper.py", line 277, in ncclCommInitRank [rank0]: self.NCCL_CHECK(self._funcs["ncclCommInitRank"](ctypes.byref(comm), [rank0]: File "/home/user/.local/lib/python3.11/site-packages/vllm/distributed/device_communicators/pynccl_wrapper.py", line 256, in NCCL_CHECK [rank0]: raise RuntimeError(f"NCCL error: {error_str}") [rank0]: RuntimeError: NCCL error: unhandled system error (run with NCCL_DEBUG=INFO for details) ```
3,094
1,089
qgallouedec
2025-03-26T05:05:56
Maybe the easiest is to use 4 machines? (1 node for training, 1 for vLLM)x2
3,094
1,090
jiangix-paper
2025-03-26T16:45:11
@binary-husky Great job. I want to know: if I use containers to start multi-node GRPO, is it the case that I can't simply execute the corresponding commands on each node? Does it look like I have to use Slurm to manage the distributed training?
3,094
1,091
Andcircle
2025-03-26T20:05:45
> Maybe the easiest is to use 4 machines? (1 node for training, 1 for vLLM)x2 4 GPUs are more than enough for vLLM, which means the other 4 are wasted. Unfortunately we have very limited GPU resources; that's why I'm trying to figure this out, hahaha. Thanks anyway
3,094
1,092
binary-husky
2025-03-27T04:51:39
> @binary-husky > > I'm trying to use GPU as efficient as possible > > in your above solution, in machine 1, the 0,1,2,3 used for vllm, then 4,5,6,7 can't be used for training anymore. I'm trying to start 2 vllm, one on 0123 with port 8000, one on 4567 with port 9000 Then machine 2 will call vllm1, machine 3 call vllm2, then I can train 2 variations of model at the same time (I thought) > > But actually it doesn't work, the vllm client update from machine3 will have error as following: > > Any hints how should I make this setup work? 2 vLLM servers? There are two ports you need to consider; you probably forgot the other one. Please check for port conflicts ~ ![image](https://github.com/user-attachments/assets/7ced20b4-1fa5-47db-9890-45fbcf4f26ca)
3,094
1,093
Andcircle
2025-03-27T22:30:10
> 2 vLLM servers? There are two ports you need to consider; you probably forgot the other one. Please check for port conflicts ~ > > ![image](https://github.com/user-attachments/assets/7ced20b4-1fa5-47db-9890-45fbcf4f26ca) Yeah, I set this to a different port through `GRPOConfig`. But the error seems to say that the NCCL weight update only supports one vLLM deployment, I guess
3,094
1,094
binary-husky
2025-03-28T03:08:14
> Yeah, I set this to a different port through `GRPOConfig`. But the error seems to say that the NCCL weight update only supports one vLLM deployment, I guess @Andcircle Sorry, but the group port is not exposed in `GRPOConfig`; you have to change it manually in `grpo_trainer.py`, that 51216 thing.
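A hedged sketch of what distinct ports could look like on the client side; the argument names (`host`, `server_port`, `group_port`) are assumptions to verify against your local `trl/extras/vllm_client.py`, and each server would need to be started with a matching group port as well:
```python
from trl.extras.vllm_client import VLLMClient

# One client per vLLM deployment. `server_port` matches the --port each server was
# started with; `group_port` is the NCCL init port (the hard-coded 51216) that must
# differ between the two trainings so their process groups don't collide.
client_a = VLLMClient(host="22.6.222.80", server_port=8000, group_port=51216)
client_b = VLLMClient(host="22.6.222.80", server_port=9000, group_port=51217)
```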
3,094
1,095
tingkuanpei
2025-03-28T09:58:47
32B model with ZeRO3 and sync_ref_model = true,will raise OOM in SyncRefModelCallback::sync_target_model(). error stack: [rank0]: File "/usr/local/lib/python3.11/site-packages/transformers/trainer.py", line 2611, in _inner_training_loop [rank0]: self.control = self.callback_handler.on_step_end(args, self.state, self.control) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/usr/local/lib/python3.11/site-packages/transformers/trainer_callback.py", line 535, in on_step_end [rank0]: return self.call_event("on_step_end", args, state, control) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/usr/local/lib/python3.11/site-packages/transformers/trainer_callback.py", line 557, in call_event [rank0]: result = getattr(callback, event)( [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/apps/dat/nlp/abc/local_exp_git/isa-trl/trl/trainer/callbacks.py", line 132, in on_step_end [rank0]: self.sync_target_model(model, self.ref_model, args.ref_model_mixup_alpha) [rank0]: File "/apps/dat/nlp/abc/local_exp_git/isa-trl/trl/trainer/callbacks.py", line 118, in sync_target_model [rank0]: with deepspeed.zero.GatheredParameters( [rank0]: File "/usr/local/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 2224, in __enter__ [rank0]: self.params[0].all_gather(param_list=self.params) [rank0]: File "/usr/local/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 1143, in all_gather [rank0]: return self._all_gather(param_list, async_op=async_op, hierarchy=hierarchy) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/usr/local/lib/python3.11/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn [rank0]: ret_val = func(*args, **kwargs) [rank0]: ^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/usr/local/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 1511, in _all_gather [rank0]: self._allgather_params_coalesced(all_gather_nonquantize_list, hierarchy, quantize=False) [rank0]: File "/usr/local/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 1799, in _allgather_params_coalesced [rank0]: flat_tensor = torch.empty(tensor_size, dtype=param_list[0].ds_tensor.dtype, [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 270.00 MiB. GPU 0 has a total capacity of 79.33 GiB of which 112.00 MiB is free. Process 529718 has 79.18 GiB memory in use. Of the allocated memory 77.62 GiB is allocated by PyTorch, and 114.49 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
3,094
1,096
vamshi-rvk
2025-03-29T06:17:02
@binary-husky > > > Ignore the pr description it's an old version. Please refer to the doc > > > > > > the doc use SLURM, it only show how to use the whole node for VLLM, can we still do something like: make 4 GPU in machine 1 for VLLM the rest 4 and the whole machine 2 for training? > > @Andcircle You can refer to my personal notebook below for training 32B Qwen, it is ugly, not general, but may deliver some basic ideas: > > ``` > # 1. Move the Model to Memory in all node🌟 > # ---------------------------- > # Install rsync # apt install rsync tmux -y && \ > # Clear memory disk # rm -rf /dev/shm/targetmodel && \ > # Move the model # rsync -av /path/to/Qwen2___5-32B-Instruct/ /dev/shm/targetmodel > # ---------------------------- > > # 2. Machine 1 [eth0: 22.6.222.80] (Few GPUs) Start vLLM Service (Steps 2 and 3 can be done in any order) > # GPU List 🌟 # CUDA_VISIBLE_DEVICES="0,1,2,3" \ > # vLLM Serve # trl vllm-serve \ > # Model # --model /dev/shm/targetmodel \ > # Total GPUs 🌟 # --tensor_parallel_size 4 \ > # # --host 0.0.0.0 --port 8000 \ > # # --max_model_len 8192 > > # 3-1. Machine 2 [eth0: 22.8.150.23] (All GPUs) Start Training Host (Steps 2 and 3 can be done in any order) > # Change Directory # cd /path/to/openr1 && \ > # Virtual Env # source .venv/bin/activate && \ > # Clear Terminal # clear && \ > # GPU List # CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7" \ > # # accelerate launch \ > # Multi-Machine Params # --config_file recipes/accelerate_configs/zero3-multi-nodes.yaml \ > # Number of Machines # --num_machines=2 \ > # Total GPUs # --num_processes=16 \ > # Main IP # --main_process_ip="22.8.150.23" \ > # Machine Rank # --machine_rank=0 \ > # Target Program # src/open_r1/grpo.py \ > # Training Params 🌟 # --config recipes/Qwen2.5-32B-Instruct/grpo/learn.yaml \ > # VLLM Machine 🌟 # --vllm_server_host 22.6.222.80 > > # 3-2. Machine 3 [eth0: 22.6.191.91] (All GPUs) Start Training Host (Steps 2 and 3 can be done in any order) > # Change Directory # cd /path/to/openr1 && \ > # Virtual Env # source .venv/bin/activate && \ > # Clear Terminal # clear && \ > # GPU List # CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7" \ > # # accelerate launch \ > # Multi-Machine Params # --config_file recipes/accelerate_configs/zero3-multi-nodes.yaml \ > # Number of Machines # --num_machines=2 \ > # Total GPUs # --num_processes=16 \ > # Main IP # --main_process_ip="22.8.150.23" \ > # Machine Rank 🌟 # --machine_rank=1 \ > # Target Program # src/open_r1/grpo.py \ > # Training Params # --config recipes/Qwen2.5-32B-Instruct/grpo/learn.yaml \ > # VLLM Machine # --vllm_server_host 22.6.222.80 > ``` @binary-husky , thanks for this. Im trying to finetune llama 405b and it uses 16h100s (2 nodes) for vLLM and 8 nodes for training. can you provide me a similar commands config which uses 2 nodes for vllms and the rest for training? Thanks in advance.
3,094
1,097
binary-husky
2025-03-31T04:02:45
> 32B model with ZeRO3 and sync_ref_model = true,will raise OOM in SyncRefModelCallback::sync_target_model(). > > error stack: [rank0]: File "/usr/local/lib/python3.11/site-packages/transformers/trainer.py", line 2611, in _inner_training_loop [rank0]: self.control = self.callback_handler.on_step_end(args, self.state, self.control) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/usr/local/lib/python3.11/site-packages/transformers/trainer_callback.py", line 535, in on_step_end [rank0]: return self.call_event("on_step_end", args, state, control) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/usr/local/lib/python3.11/site-packages/transformers/trainer_callback.py", line 557, in call_event [rank0]: result = getattr(callback, event)( [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/apps/dat/nlp/abc/local_exp_git/isa-trl/trl/trainer/callbacks.py", line 132, in on_step_end [rank0]: self.sync_target_model(model, self.ref_model, args.ref_model_mixup_alpha) [rank0]: File "/apps/dat/nlp/abc/local_exp_git/isa-trl/trl/trainer/callbacks.py", line 118, in sync_target_model [rank0]: with deepspeed.zero.GatheredParameters( [rank0]: File "/usr/local/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 2224, in **enter** [rank0]: self.params[0].all_gather(param_list=self.params) [rank0]: File "/usr/local/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 1143, in all_gather [rank0]: return self._all_gather(param_list, async_op=async_op, hierarchy=hierarchy) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/usr/local/lib/python3.11/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn [rank0]: ret_val = func(*args, **kwargs) [rank0]: ^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/usr/local/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 1511, in _all_gather [rank0]: self._allgather_params_coalesced(all_gather_nonquantize_list, hierarchy, quantize=False) [rank0]: File "/usr/local/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 1799, in _allgather_params_coalesced [rank0]: flat_tensor = torch.empty(tensor_size, dtype=param_list[0].ds_tensor.dtype, [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 270.00 MiB. GPU 0 has a total capacity of 79.33 GiB of which 112.00 MiB is free. Process 529718 has 79.18 GiB memory in use. Of the allocated memory 77.62 GiB is allocated by PyTorch, and 114.49 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) use this one as workaround: https://github.com/huggingface/trl/pull/3094#issuecomment-2743938970 @tingkuanpei
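The OOM comes from gathering the full model at once inside `sync_target_model`. A sketch of the general low-memory pattern (not the linked workaround itself, and not the current TRL implementation) gathers one parameter pair at a time:
```python
import deepspeed

def sync_ref_model_low_memory(model, ref_model, alpha: float):
    # Polyak-style update ref <- (1 - alpha) * ref + alpha * model, gathering a single
    # parameter pair at a time so peak memory stays bounded under ZeRO-3.
    for param, ref_param in zip(model.parameters(), ref_model.parameters()):
        with deepspeed.zero.GatheredParameters([param, ref_param], modifier_rank=0):
            if deepspeed.comm.get_rank() == 0:
                ref_param.data.mul_(1.0 - alpha).add_(param.data, alpha=alpha)
```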
3,094
1,098
binary-husky
2025-03-31T04:05:11
@vamshi-rvk sorry, currently I'm unable to allocate that many machines
3,094
1,099
tongtong0613
2025-04-01T12:41:38
> > > Ignore the pr description it's an old version. Please refer to the doc > > > > > > the doc use SLURM, it only show how to use the whole node for VLLM, can we still do something like: make 4 GPU in machine 1 for VLLM the rest 4 and the whole machine 2 for training? > > @Andcircle You can refer to my personal notebook below for training 32B Qwen, it is ugly, not general, but may deliver some basic ideas: > > ``` > # 1. Move the Model to Memory in all node🌟 > # ---------------------------- > # Install rsync # apt install rsync tmux -y && \ > # Clear memory disk # rm -rf /dev/shm/targetmodel && \ > # Move the model # rsync -av /path/to/Qwen2___5-32B-Instruct/ /dev/shm/targetmodel > # ---------------------------- > > # 2. Machine 1 [eth0: 22.6.222.80] (Few GPUs) Start vLLM Service (Steps 2 and 3 can be done in any order) > # GPU List 🌟 # CUDA_VISIBLE_DEVICES="0,1,2,3" \ > # vLLM Serve # trl vllm-serve \ > # Model # --model /dev/shm/targetmodel \ > # Total GPUs 🌟 # --tensor_parallel_size 4 \ > # # --host 0.0.0.0 --port 8000 \ > # # --max_model_len 8192 > > # 3-1. Machine 2 [eth0: 22.8.150.23] (All GPUs) Start Training Host (Steps 2 and 3 can be done in any order) > # Change Directory # cd /path/to/openr1 && \ > # Virtual Env # source .venv/bin/activate && \ > # Clear Terminal # clear && \ > # GPU List # CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7" \ > # # accelerate launch \ > # Multi-Machine Params # --config_file recipes/accelerate_configs/zero3-multi-nodes.yaml \ > # Number of Machines # --num_machines=2 \ > # Total GPUs # --num_processes=16 \ > # Main IP # --main_process_ip="22.8.150.23" \ > # Machine Rank # --machine_rank=0 \ > # Target Program # src/open_r1/grpo.py \ > # Training Params 🌟 # --config recipes/Qwen2.5-32B-Instruct/grpo/learn.yaml \ > # VLLM Machine 🌟 # --vllm_server_host 22.6.222.80 > > # 3-2. Machine 3 [eth0: 22.6.191.91] (All GPUs) Start Training Host (Steps 2 and 3 can be done in any order) > # Change Directory # cd /path/to/openr1 && \ > # Virtual Env # source .venv/bin/activate && \ > # Clear Terminal # clear && \ > # GPU List # CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7" \ > # # accelerate launch \ > # Multi-Machine Params # --config_file recipes/accelerate_configs/zero3-multi-nodes.yaml \ > # Number of Machines # --num_machines=2 \ > # Total GPUs # --num_processes=16 \ > # Main IP # --main_process_ip="22.8.150.23" \ > # Machine Rank 🌟 # --machine_rank=1 \ > # Target Program # src/open_r1/grpo.py \ > # Training Params # --config recipes/Qwen2.5-32B-Instruct/grpo/learn.yaml \ > # VLLM Machine # --vllm_server_host 22.6.222.80 > ``` @binary-husky Hello, referring to your sharing, I used the first four cards of a single H100 to start the VLLM service, while the other two H100s are used for training. However, I encountered the following error. Do you know how to solve this issue? ```shell [Rank12] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=8834, OpType=_ALLGATHER_BASE, NumelIn=1638400, NumelOut=26214400, Timeout(ms)=1800000) ran for 1800055 milliseconds before timing out. ... ```
3,094
1,100
binary-husky
2025-04-14T09:13:39
@tongtong0613 I have seen the `1800055 milliseconds` error before, when I messed up a reward function and made rank 0 compute rewards forever. Then the watchdogs on the other ranks become very unhappy...
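A minimal sketch for diagnosing this: wrap the reward function so each call logs its duration (the wrapper below is an illustration, not part of TRL):
```python
import functools
import time

def timed(reward_fn):
    # A reward call that hangs or runs for many minutes on one rank will eventually
    # trip the NCCL watchdog (the ~1800000 ms timeout above) on the other ranks,
    # so logging per-call duration makes the culprit easy to spot.
    @functools.wraps(reward_fn)
    def wrapper(completions, **kwargs):
        start = time.perf_counter()
        rewards = reward_fn(completions, **kwargs)
        print(f"{reward_fn.__name__} took {time.perf_counter() - start:.1f}s")
        return rewards
    return wrapper

# e.g. pass `timed(reward_num_unique_chars)` to GRPOTrainer instead of the bare function
```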
3,094
1,101
B-Gendron
2025-03-20T10:23:59
Hi @JWQZ, It is still possible to run PPO with LoRA adapters using `trl==0.11.4`. Actually the issue is not related to PEFT; it is related to the fact that, in PPO, the value function needs to be estimated, and this is done with a value head on top of the model. This head is hence part of the model structure, which is why the model class should be something like `AutoModelForCausalLMWithValueHead`. Therefore, what you should do is instantiate the model with this class, as follows:
```py
# example lora config
lora_config = LoraConfig(
    r=16,
    lora_alpha=8,
    target_modules=["q_proj", "v_proj"],
    bias="lora_only",
    lora_dropout=0.1,
)

# instantiate model with lora adapters
model = AutoModelForCausalLMWithValueHead.from_pretrained(
    config.model_name,
    device_map=device,
    peft_config=lora_config,  # this is where peft_config should be specified, not in PPOTrainer
    torch_dtype=torch.bfloat16,
)
```
If you need to plug fine-tuned adapters back in for further training, then simply update the weights of the initialized adapters without changing the model class, as follows:
```py
# load the fine-tuned adapter weights
adapter_model_name = 'path/to/your/adapter/model'
peft_model = PeftModel.from_pretrained(model, adapter_model_name)

# transfer weights using a state dict
lora_state_dict = {k: v for k, v in peft_model.state_dict().items() if "lora" in k}
model.load_state_dict(lora_state_dict, strict=False)

# make these parameters trainable (if desired)
for n, p in model.named_parameters():
    if 'lora' in n:
        p.requires_grad = True
```
Hope this helps!
3,093
1,102
JWQZ
2025-03-20T10:33:01
@B-Gendron Thank you very much for your reply, your approach looks good. Now I am modifying the source code based on trl==0.15.2 to suit my needs. If this doesn't work, I will adopt your approach.
3,093
1,103
qgallouedec
2025-03-15T02:28:26
I understand your question, but I can't see how you plan to combine the methods. For example, how do you combine DPO and GRPO? One is online, the other offline.
3,092
1,104
AMindToThink
2025-03-15T22:14:14
I figure that you can make an outer loop for num_steps. Inside, you could calculate the loss for a batch of GRPO (by taking and checking model responses) and the loss from a batch of DPO (by measuring the probability of offline responses). Add the two losses together with a weighting factor and do a step. L = (1-alpha)Ldpo + (alpha)Lgrpo Schedulers for the weighting factor and for the batch size would allow for expressive balances. I don’t see what the online/offline distinction means for combining trainers. It just means that instead of looping through the data of one, then the other, you instead loop through the data together and combine the loss functions.
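A toy, self-contained sketch of that weighted objective; the two loss functions below are placeholders standing in for the real DPO/GRPO batch losses, not an existing TRL API:
```python
import torch

model = torch.nn.Linear(8, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def dpo_loss_fn(m):   # placeholder for the offline (DPO) loss on a preference batch
    return m(torch.randn(4, 8)).pow(2).mean()

def grpo_loss_fn(m):  # placeholder for the online (GRPO) loss on sampled completions
    return m(torch.randn(4, 8)).abs().mean()

for step in range(10):
    alpha = step / 10  # simple schedule shifting weight from DPO toward GRPO
    loss = (1 - alpha) * dpo_loss_fn(model) + alpha * grpo_loss_fn(model)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```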
3,092
1,105
HuggingFaceDocBuilderDev
2025-03-14T21:20:16
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3091). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,091
1,106
qgallouedec
2025-03-14T21:25:05
No, actually there have been recurrent reports that SFT can't learn to generate EOS. I'm pretty sure #2405 re-introduced the bug reported in #1623
3,091
1,107
skandermoalla
2025-03-15T08:25:34
@qgallouedec I've faced this multiple times. I think it's just because of the (not so good) practice, seen in examples everywhere, of setting the pad token to the eos token. Then the SFT preprocessing masks everything that's a pad token (= eos token), including the real eos token in the chat template.
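A small sketch of the pitfall and one way around it; the `<|pad|>` token is only an illustrative choice, and adding a brand-new token also requires resizing the model embeddings:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B")

# Common but risky pattern: pad == eos, so masking pad tokens in the labels
# also masks the real end-of-sequence token and the model never learns to stop.
# tokenizer.pad_token = tokenizer.eos_token

# Safer: make sure pad and eos are distinct tokens.
if tokenizer.pad_token is None or tokenizer.pad_token == tokenizer.eos_token:
    tokenizer.add_special_tokens({"pad_token": "<|pad|>"})
    # model.resize_token_embeddings(len(tokenizer))  # needed if the token is new
```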
3,091
1,108
skandermoalla
2025-03-15T08:29:04
Personally, I don't think these forced patches are a good design. I understand that you want the Trainers to work out of the box, but users should still make sure they have a chat template that adds an eos properly. If someone doesn't want an eos, they can no longer opt out now. (Same for the DPOTrainer btw, I think it adds an extra eos token somewhere.)
3,091
1,109
HwangYej1
2025-03-26T08:55:53
I got this `TypeError: 'Qwen2TokenizerFast' object is not subscriptable` after changing this code
3,091
1,110
qgallouedec
2025-03-15T02:40:00
4.1.3 is about Process Supervision. Instead of giving a single reward per completion, process supervision provides multiple rewards per completion, one at each step. However, I have concerns about whether the benefits outweigh the added complexity and usability challenges. Some points: - It requires users to adopt a PRM instead of the more common ORM. Since PRMs are less widely available, this shift could be difficult. - Defining a process reward function isn’t straightforward, making implementation more complex for users. - Figure 5 shows a slight advantage of PRMs over ORMs, but only in one of the two evaluations. Given these factors, I’m unsure if the trade-off is justified.
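To make the distinction concrete, the interface difference is essentially about reward shape (illustration only, not an existing TRL API):
```python
# A completion split into reasoning steps
completion_steps = ["Step 1: set up the equation", "Step 2: solve for x", "Answer: x = 4"]

# Outcome reward model (ORM): one scalar for the whole completion
orm_reward = 1.0

# Process reward model (PRM): one scalar per step, so the trainer must spread
# per-step advantages over the corresponding token spans
prm_rewards = [0.7, 0.4, 1.0]
assert len(prm_rewards) == len(completion_steps)
```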
3,090
1,111
tchang1997
2025-03-14T17:28:17
Could you define "doesn't work?" I have training running with PeFT + gradient checkpointing without issues but had to play around with the settings. At a glance, the only major difference I see between our configs is that you might need `gradient_checkpointing_kwargs=dict(reentrant=True)` in your `GRPOConfig`.
3,089
1,112
binary-husky
2025-03-16T15:48:09
you must set `use_reentrant: true`
```yaml
...
gradient_checkpointing_kwargs:
  use_reentrant: true
...
```
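The equivalent when configuring in Python rather than YAML would be something like (other arguments omitted):
```python
from trl import GRPOConfig

training_args = GRPOConfig(
    output_dir="out",
    gradient_checkpointing=True,
    gradient_checkpointing_kwargs={"use_reentrant": True},
)
```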
3,089
1,113
kimihailvfg
2025-03-17T09:52:09
I've tried setting `use_reentrant=true`, it works without peft, but doesn't work with PEFT: `element 0 of tensors does not require grad and does not have a grad_fn`
3,089
1,114
leosmith8004
2025-03-24T14:57:25
OMG, I have the same issue as you. Have you solved it? Thanks for your reply
3,089
1,115
maoulee
2025-03-14T09:37:20
Update to `vllm==0.7.3`
3,085
1,116
YueChenkkk
2025-03-14T11:37:47
This works for me ``` accelerate launch --num_processes 4 --gpu_ids 0,2,3,4,5 --config_file accelerate_configs/deepspeed_zero3.yaml train_grpo.py --vllm_device auto ```
3,085
1,117
qgallouedec
2025-04-01T18:14:50
Fixed in #3091 (also related #3200)
3,083
1,118