user: string (length 3 to 28)
created_at: timestamp[us] (2020-04-01 09:48:12 to 2025-07-30 20:59:07)
body: string (length 1 to 173k)
issue_number: int64 (1 to 3.81k)
__index_level_0__: int64 (0 to 11.8k)
HuggingFaceDocBuilderDev
2025-02-24T15:21:16
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2947). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,947
8,700
HuggingFaceDocBuilderDev
2025-02-24T10:26:55
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2946). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,946
8,701
HuggingFaceDocBuilderDev
2025-02-24T09:19:15
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2945). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,945
8,702
qgallouedec
2025-02-24T09:33:47
> will both packed datasets be cached independently

Yes!

```python
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

dataset = load_dataset("trl-lib/Capybara", split="train[:10%]")

# Processes the dataset
training_args = SFTConfig(output_dir="Qwen/Qwen2.5-0.5B-SFT", max_length=128, packing=True)
trainer = SFTTrainer(
    args=training_args,
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,
)

# Processes the dataset as well
training_args = SFTConfig(output_dir="Qwen/Qwen2.5-0.5B-SFT", max_length=256, packing=True)
trainer = SFTTrainer(
    args=training_args,
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,
)

# Uses the cache!
training_args = SFTConfig(output_dir="Qwen/Qwen2.5-0.5B-SFT", max_length=128, packing=True)
trainer = SFTTrainer(
    args=training_args,
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,
)
```
2,945
8,703
davidhughhenrymack
2025-02-24T18:19:24
To perform batch generation, you must have right-aligned (i.e. left-padded) token strings. Then, by nature of the generate operation, new tokens are appended to those strings, with some generations hitting end-of-stream before others, which causes right padding. It is possible to run a sequential operation to re-pack all of this (which flash_attention_2 would like...), but in practice it isn't necessary unless you have wildly different sequence lengths and little memory. (A minimal sketch follows this entry.)
2,944
8,704
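A minimal, hedged sketch of the left-padded batch generation described above; the model name and prompts are placeholders rather than anything from this thread.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side="left")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

prompts = ["The capital of France is", "2 + 2 ="]
# Left padding right-aligns the prompts so generate() appends new tokens to all of them in lockstep.
inputs = tokenizer(prompts, return_tensors="pt", padding=True)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))
```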
Alex-Songs
2025-03-07T03:39:48
Hello, does GRPO currently support the scenario where batch size > 1, i.e. multiple prompts are involved? In such a case, is left padding required? @davidhughhenrymack
2,944
8,705
davidhughhenrymack
2025-03-07T16:03:22
It does support multiple prompts, so left padding is needed.
2,944
8,706
qgallouedec
2025-02-24T07:37:54
Pi = pi_old at this stage. See the line just above in the algorithm (line 6).
2,943
8,707
helloword12345678
2025-02-24T07:51:56
In my opinion, self.ref_model serves as the old model, because the red-marked mathematical formula involves importance sampling. If you were to use the current model (self.model), importance sampling would not be necessary. Regarding your point, line 6 refers to a periodic update, as mentioned in https://github.com/huggingface/trl/issues/2684; by default, this update is turned off.
2,943
8,708
qgallouedec
2025-02-24T07:56:30
No, pi_old and pi_ref are different. pi_ref is only used for the KL term.
2,943
8,709
helloword12345678
2025-02-24T08:25:19
Thanks! Yes, pi_old is different from pi_ref. But is pi equal to pi_old in the TRL code?
2,943
8,710
qgallouedec
2025-02-24T08:55:28
> Thanks! Yes, pi_old is different from pi_ref. But is pi equal to pi_old in the TRL code?

Indeed, at the generation stage, $\pi_\theta = \pi_\mathrm{old}$, which is why we use $\pi_\theta$. Then you start to optimise, and $\pi_\theta$ starts to differ from $\pi_\mathrm{old}$. But we've saved $\pi_\mathrm{old}(o_i)$ (`old_per_token_logps`), so it's not an issue: we use it for importance sampling. (A small sketch of the ratio follows this entry.)
2,943
8,711
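A minimal sketch of the importance-sampling ratio described above, assuming per-token log-probabilities are already available as tensors; this is illustrative, not the exact TRL implementation.

```python
import torch

def importance_ratio(per_token_logps: torch.Tensor, old_per_token_logps: torch.Tensor) -> torch.Tensor:
    # pi_theta(o_i) / pi_old(o_i), computed in log-space for numerical stability.
    return torch.exp(per_token_logps - old_per_token_logps)

# At generation time pi_theta == pi_old, so the saved log-probs give a ratio of exactly 1.
logps = torch.tensor([-1.2, -0.7, -2.3])
print(importance_ratio(logps, logps))  # tensor([1., 1., 1.])
```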
qgallouedec
2025-02-24T07:43:51
Indeed, GRPO doesn't support IterableDataset. I don't think there is an easy fix. Is it blocking for you?
2,942
8,712
Marsella8
2025-02-26T04:39:04
> Indeed, GRPO doesn't support IterableDataset. I don't think there is an easy fix. Is it blocking for you?

I am currently working on a refinement pipeline, so I need to dynamically change the training data after each epoch. I've tried to get around it by subclassing Dataset and having a mutable Dataset that randomly samples from my desired dataset, though this has not been working. Any advice on how to fix this issue? Thank you
2,942
8,713
jiaweiHu-XDU
2025-02-26T10:17:35
> Indeed, GRPO doesn't support IterableDataset. I don't think there is an easy fix. Is it blocking for you?

Does TRL currently support multi-modal GRPO training? And DPO?
2,942
8,714
nsntiw
2025-03-11T03:51:33
I encountered this problem with iterable datasets; adding `dispatch_batches=False` to GRPOConfig got rid of it. If relevant to anyone, the generators of IterableDatasets need additional handling for batch sizes > 1. This is my implementation (a usage sketch follows this entry):
```python
def my_generator(ds, count=[0], batch_size=4):
    while True:
        i = count[0] // batch_size
        yield {'prompt': ds[i]['prompt'], 'answer': ds[i]['answer']}
        count[0] += 1
```
2,942
8,715
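A hedged usage sketch for a generator-backed streaming dataset like the one above, assuming toy `prompt`/`answer` examples; the names and data are made up, and the batch size should match the trainer's sampling.

```python
from datasets import IterableDataset

# Hypothetical source data; in practice this would come from your own dataset.
examples = [
    {"prompt": "What is 2 + 2?", "answer": "4"},
    {"prompt": "Name a prime number.", "answer": "7"},
]

def repeating_generator(examples, batch_size=4):
    count = 0
    while True:
        # Serve the same example batch_size times, then move on, wrapping around forever.
        i = (count // batch_size) % len(examples)
        yield {"prompt": examples[i]["prompt"], "answer": examples[i]["answer"]}
        count += 1

iterable_ds = IterableDataset.from_generator(repeating_generator, gen_kwargs={"examples": examples})
print(next(iter(iterable_ds)))  # {'prompt': 'What is 2 + 2?', 'answer': '4'}
```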
qgallouedec
2025-02-24T07:49:08
This is not supported; you'll need to fork TRL to have such a feature.
2,941
8,716
HuggingFaceDocBuilderDev
2025-02-23T19:25:39
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2940). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,940
8,717
edbeeching
2025-02-24T08:52:49
@kashif @qgallouedec @lewtun, I think Liger models are now supported natively in transformers if the `use_liger_kernel=True` flag is set. Perhaps we can drop the support for this in the `SFTTrainer` and use the native transformers implementation?
2,940
8,718
kashif
2025-02-24T09:13:40
I think so too... we will need to pin the transformers version, but yes, it should be a better solution.
2,940
8,719
qgallouedec
2025-02-24T11:41:07
Thanks @edbeeching!

> I think so too... we will need to pin the transformers version, but yes, it should be a better solution.

After checking, it seems that `use_liger_kernel` exists for at least 4.46, which is the minimum version in TRL, so we shouldn't need to bump transformers: https://github.com/huggingface/transformers/blob/052e652d6d53c2b26ffde87e039b723949a53493/src/transformers/training_args.py#L1521 (see the sketch after this entry)
2,940
8,720
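For illustration, a minimal sketch of leaning on the native transformers flag mentioned above. It assumes the `liger-kernel` package is installed; the model and dataset names are placeholders.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train[:1%]")

# SFTConfig inherits from transformers.TrainingArguments, so the flag is passed straight through.
training_args = SFTConfig(output_dir="qwen-sft-liger", use_liger_kernel=True)
trainer = SFTTrainer(
    args=training_args,
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,
)
trainer.train()
```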
qgallouedec
2025-02-24T11:43:06
it was introduced in 4.45
2,940
8,721
lewtun
2025-02-24T15:18:41
Thanks for the pointer to `transformers`! Just to confirm, the current PR is fine to merge since the model init is taken care of in this line? https://github.com/huggingface/trl/pull/2940/files#r1967113819 If yes, feel free to merge if I'm offline :)
2,940
8,722
kashif
2025-02-23T18:42:51
@DanFosing which version of TRL are you using?
2,939
8,723
DanFosing
2025-02-23T19:02:31
I experienced this issue with both v0.15.1 and with the alpha version downloaded using: `pip install git+https://github.com/huggingface/trl.git`
2,939
8,724
DanFosing
2025-02-23T19:05:41
Oh, and I forgot to mention: `max_seq_length` didn't seem to work for me for some reason. The warning says it will be deprecated in v0.20.0, but are you sure it wasn't deprecated already? (That's why I added a comment there in the code, but it's not related to the main fix.)
2,939
8,725
kashif
2025-02-24T17:27:26
@DanFosing OK, so kindly remove `max_seq_length` from `sft_config.py` and move the chat-template logic inside the already defined `if not is_processed:` block, where it makes sense, instead of adding a new `if not is_processed:` block.
2,939
8,726
kashif
2025-02-24T17:39:56
we can fix the warning and say: removed in version `0.16.0`
2,939
8,727
HuggingFaceDocBuilderDev
2025-02-24T17:44:15
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2939). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,939
8,728
qgallouedec
2025-02-24T18:58:21
`maybe_apply_chat_template` applies the chat template if needed, hence the "maybe". Are you encountering a bug? If so, what's the traceback? (A usage sketch follows this entry.)
2,939
8,729
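A small usage sketch of `maybe_apply_chat_template`, assuming a conversational example and an instruct tokenizer; plain-string examples should pass through unchanged, which is the behaviour described above.

```python
from transformers import AutoTokenizer
from trl import maybe_apply_chat_template

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

# Conversational example: the chat template is applied.
conversational = {"prompt": [{"role": "user", "content": "What color is the sky?"}]}
print(maybe_apply_chat_template(conversational, tokenizer))

# Already-formatted example: returned as-is, hence the "maybe".
plain = {"prompt": "What color is the sky?"}
print(maybe_apply_chat_template(plain, tokenizer))
```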
kashif
2025-02-24T19:00:50
I don't think there is a bug, but please correct me @DanFosing if I am mistaken; the issue is that it's doing this extra work when it's not needed.
2,939
8,730
qgallouedec
2025-02-24T19:01:05
> Oh, and I forgot to mention: `max_seq_length` didn't seem to work for me for some reason.

WDYM "didn't seem to work"? Same question: is an exception raised? If so, what's the traceback? Have you tried pulling the very latest commits? Could be related to #2947.
2,939
8,731
qgallouedec
2025-02-24T19:03:13
> I don't think there is a bug, but please correct me @DanFosing if I am mistaken; the issue is that it's doing this extra work when it's not needed.

For clarification, the only extra work done is iterating through the dataset, which is usually very fast: https://github.com/huggingface/trl/blob/5c0591319646c171e8ea213d1692058f4bf68ead/trl/data_utils.py#L218-L221
2,939
8,732
qgallouedec
2025-02-24T19:05:34
That being said, I'm ok to add the `if not is_processed:` to avoid extra logging/iteration
2,939
8,733
qgallouedec
2025-02-24T20:34:21
@bot /style
2,939
8,734
github-actions[bot]
2025-02-24T20:34:47
Style fixes have been applied. [View the workflow run here](https://github.com/huggingface/trl/actions/runs/13507301969).
2,939
8,735
HuggingFaceDocBuilderDev
2025-02-23T16:37:47
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2938). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,938
8,736
qgallouedec
2025-02-23T16:14:16
Thanks for reporting. Please provide reference code and your system info.
2,937
8,737
yzhdut
2025-02-24T02:02:38
> Thanks for reporting. Please provide reference code and your system info.

OK, thanks for your reply.

System info:

GPU:

```
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.40.07              Driver Version: 550.40.07      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4090        Off |   00000000:01:00.0 Off |                  Off |
| 44%   79C    P2            434W /  450W |   22731MiB /  24564MiB |    100%      Default |
```

Installed packages (`pip list`):

```
Package Version: accelerate 1.4.0 aiofiles 23.2.1 aiohttp 3.9.1 aiosignal 1.3.1 altair 5.2.0 annotated-types 0.6.0 antlr4-python3-runtime 4.9.3 anyio 4.2.0 async-timeout 4.0.3 attrs 23.1.0 auto_gptq 0.7.1 bitsandbytes 0.45.2 blinker 1.7.0 cachetools 5.3.2 certifi 2023.11.17 charset-normalizer 3.3.2 click 8.1.7 cmake 3.27.9 colorama 0.4.6 contourpy 1.2.0 cpm-kernels 1.0.11 cycler 0.12.1 datasets 3.3.1 deepspeed 0.12.4 dill 0.3.7 docstring-parser 0.15 et-xmlfile 1.1.0 exceptiongroup 1.2.0 fastapi 0.108.0 ffmpy 0.3.1 filelock 3.13.1 fonttools 4.47.0 frozenlist 1.4.1 fsspec 2023.10.0 gekko 1.2.1 gitdb 4.0.11 GitPython 3.1.40 gradio 3.38.0 gradio_client 0.8.0 h11 0.14.0 hjson 3.1.0 httpcore 1.0.2 httpx 0.26.0 huggingface-hub 0.29.0 idna 3.6 importlib-metadata 6.11.0 importlib-resources 6.1.1 jieba 0.42.1 Jinja2 3.1.2 joblib 1.3.2 jsonschema 4.20.0 jsonschema-specifications 2023.11.2 kiwisolver 1.4.5 linkify-it-py 2.0.3 lit 17.0.6 markdown-it-py 2.2.0 MarkupSafe 2.1.3 matplotlib 3.8.2 mdit-py-plugins 0.3.3 mdurl 0.1.2 mpmath 1.3.0 multidict 6.0.4 multiprocess 0.70.15 networkx 3.2.1 ninja 1.11.1.1 nltk 3.8.1 numpy 1.26.2 omegaconf 2.3.0 openpyxl 3.1.2 optimum 1.24.0 orjson 3.9.10 packaging 23.2 pandas 2.1.4 peft 0.10.0 Pillow 10.1.0 pip 25.0.1 protobuf 4.25.1 psutil 5.9.6 py-cpuinfo 9.0.0 pyarrow 19.0.1 pyarrow-hotfix 0.6 pydantic 2.5.2 pydantic_core 2.14.5 pydeck 0.8.1b0 pydub 0.25.1 Pygments 2.17.2 pynvml 11.5.0 pyparsing 3.1.1 pyproject 1.3.1 python-dateutil 2.8.2 python-multipart 0.0.6 pytz 2023.3.post1 PyYAML 6.0.1 referencing 0.32.0 regex 2023.10.3 requests 2.32.3 rich 13.7.0 rouge 1.0.1 rouge-chinese 1.0.3 rpds-py 0.13.2 safetensors 0.5.2 scipy 1.11.4 semantic-version 2.10.0 sentencepiece 0.1.99 setuptools 75.8.0 shellingham 1.5.4 shtab 1.6.5 six 1.16.0 smmap 5.0.1 sniffio 1.3.0 sse-starlette 1.8.2 starlette 0.32.0.post1 streamlit 1.29.0 sympy 1.12 tenacity 8.2.3 tiktoken 0.5.2 tokenizers 0.20.3 toml 0.10.2 tomlkit 0.12.0 toolz 0.12.0 torch 2.0.0+cu118 torchvision 0.15.0+cu118 tornado 6.4 tqdm 4.67.1 transformers 4.46.0 transformers-stream-generator 0.0.4 triton 3.2.0 trl 0.15.1 typer 0.12.3 typing_extensions 4.9.0 tyro 0.6.3 tzdata 2024.2 tzlocal 5.2 uc-micro-py 1.0.3 urllib3 2.1.0 uvicorn 0.25.0 validators 0.22.0 watchdog 3.0.0 websockets 11.0.3 wheel 0.41.2 xxhash 3.4.1 yarl 1.9.4 zipp 3.17.0 zstandard 0.23.0
```

Reference code:

```python
with open('data/RLFTqa_delete_some_answer_is_null.json', 'r') as f:
    data = json.load(f)
random.shuffle(data)
data = data[0:200]

formatted_data = []
for item in data:
    formatted_data.append({
        "prompt": item["question"],
        "reference": "||".join(item.get("answer", []))
    })

n_total = len(formatted_data)
n_train = int(0.7 * n_total)
n_eval = int(0.15 * n_total)
n_test = n_total - n_train - n_eval

train_data = formatted_data[:1]
eval_data = formatted_data[n_train:n_train+n_eval]
test_data = formatted_data[n_train+n_eval:]

dataset_features = Features({
    "prompt": Value("string"),
    "reference": Value("string")
})
train_dataset = Dataset.from_list(train_data, features=dataset_features)
eval_dataset = Dataset.from_list(eval_data, features=dataset_features)
test_dataset = Dataset.from_list(test_data, features=dataset_features)

local_model_path = "./Qwen2.5-7B-Instruct-GPTQ-Int4"
tokenizer = AutoTokenizer.from_pretrained(local_model_path)
tokenizer.pad_token = tokenizer.eos_token

base_model = AutoModelForCausalLM.from_pretrained(
    local_model_path,
    # quantization_config=quant_config,
    device_map="auto",
    trust_remote_code=True,
    torch_dtype=torch.float16
)
base_model.config.gradient_checkpointing = False

lora_config = LoraConfig(
    r=16,  # moderately increase the rank
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # extend the target modules
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
)
model = prepare_model_for_kbit_training(base_model)
model = get_peft_model(model, lora_config)

grpo_config = GRPOConfig(
    num_generations=8,  # number of generations per group
    max_completion_length=512,  # maximum generation length
    gradient_accumulation_steps=2,
    learning_rate=3e-5,
    logging_steps=5,
    logging_first_step=True,
    save_steps=50,
    output_dir="./grpo_checkpoints",
    logging_dir="./grpo_checkpoints/log",
    log_level="info",
    log_completions=True,
    eval_strategy="steps",
    eval_steps=40,
    do_predict=True,
    temperature=0.7,
    gradient_checkpointing=False,
    use_vllm=False
)
model.bfloat16()

trainer = GRPOTrainer(
    model=model,
    args=grpo_config,
    reward_funcs=[reward_func, format_reward_func],  # pass the reward functions directly
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    processing_class=tokenizer,
)
train_output = trainer.predict(train_dataset)
trainer.train()
model.save_pretrained("grpo_lora_adapter")
```

As mentioned above, the length of train_dataset is 1. Before training, I first call trainer.predict(train_dataset). Then I printed the completions of the model inside the library function, located at xxx/lib/python3.10/site-packages/trl/trainer/grpo_trainer.py:

```python
def _prepare_inputs(self, inputs: dict[str, Union[torch.Tensor, Any]]) -> dict[str, Union[torch.Tensor, Any]]:
    device = self.accelerator.device
    prompts = [x["prompt"] for x in inputs]
    prompts_text = [maybe_apply_chat_template(example, self.processing_class)["prompt"] for example in inputs]
    prompt_inputs = self.processing_class(
        prompts_text, return_tensors="pt", padding=True, padding_side="left", add_special_tokens=False
    )
    prompt_inputs = super()._prepare_inputs(prompt_inputs)
    prompt_ids, prompt_mask = prompt_inputs["input_ids"], prompt_inputs["attention_mask"]

    if self.max_prompt_length is not None:
        prompt_ids = prompt_ids[:, -self.max_prompt_length :]
        prompt_mask = prompt_mask[:, -self.max_prompt_length :]

    # Generate completions using either vLLM or regular generation
    if self.args.use_vllm:
        print("Using VLM model")
        # First, have main process load weights if needed
        if self.state.global_step != self._last_loaded_step:
            self._move_model_to_vllm()
            self._last_loaded_step = self.state.global_step

        # Generate completions using vLLM: gather all prompts and use them in a single call in the main process
        all_prompts_text = gather_object(prompts_text)
        if self.accelerator.is_main_process:
            outputs = self.llm.generate(all_prompts_text, sampling_params=self.sampling_params, use_tqdm=False)
            completion_ids = [out.token_ids for completions in outputs for out in completions.outputs]
        else:
            completion_ids = [None] * len(all_prompts_text)
        # Broadcast the completions from the main process to all processes, ensuring each process receives its
        # corresponding slice.
        completion_ids = broadcast_object_list(completion_ids, from_process=0)
        process_slice = slice(
            self.accelerator.process_index * len(prompts),
            (self.accelerator.process_index + 1) * len(prompts),
        )
        completion_ids = completion_ids[process_slice]

        # Pad the completions, and concatenate them with the prompts
        completion_ids = [torch.tensor(ids, device=device) for ids in completion_ids]
        completion_ids = pad(completion_ids, padding_value=self.processing_class.pad_token_id)
        prompt_completion_ids = torch.cat([prompt_ids, completion_ids], dim=1)
    else:
        # Regular generation path
        with unwrap_model_for_generation(self.model, self.accelerator) as unwrapped_model:
            prompt_completion_ids = unwrapped_model.generate(
                prompt_ids, attention_mask=prompt_mask, generation_config=self.generation_config
            )

        # Compute prompt length and extract completion ids
        prompt_length = prompt_ids.size(1)
        prompt_ids = prompt_completion_ids[:, :prompt_length]
        completion_ids = prompt_completion_ids[:, prompt_length:]

    # Mask everything after the first EOS token
    is_eos = completion_ids == self.processing_class.eos_token_id
    eos_idx = torch.full((is_eos.size(0),), is_eos.size(1), dtype=torch.long, device=device)
    eos_idx[is_eos.any(dim=1)] = is_eos.int().argmax(dim=1)[is_eos.any(dim=1)]
    sequence_indices = torch.arange(is_eos.size(1), device=device).expand(is_eos.size(0), -1)
    completion_mask = (sequence_indices <= eos_idx.unsqueeze(1)).int()

    # Concatenate prompt_mask with completion_mask for logit computation
    attention_mask = torch.cat([prompt_mask, completion_mask], dim=1)  # (B*G, P+C)

    logits_to_keep = completion_ids.size(1)  # we only need to compute the logits for the completion tokens

    with torch.inference_mode():
        if self.ref_model is not None:
            ref_per_token_logps = self._get_per_token_logps(
                self.ref_model, prompt_completion_ids, attention_mask, logits_to_keep
            )
        else:
            with self.accelerator.unwrap_model(self.model).disable_adapter():
                ref_per_token_logps = self._get_per_token_logps(
                    self.model, prompt_completion_ids, attention_mask, logits_to_keep
                )

    # Decode the generated completions
    completions_text = self.processing_class.batch_decode(completion_ids, skip_special_tokens=True)
    print("Completions text:", completions_text)
```

As I mentioned, when predicting, the model shows basic problem-solving ability. However, for the same input, during training the completions show significantly reduced model performance, with irrelevant statements, off-task content, and repetition. I also found that the log doesn't seem to be saved together with the checkpoint. Is it possible that it is only saved after training completes?

![Image](https://github.com/user-attachments/assets/d0b506ab-aae3-4aa4-9a1c-526a79a94ff4)
2,937
8,738
Wang-Xiaodong1899
2025-03-12T08:49:56
any update?
2,937
8,739
Wang-Xiaodong1899
2025-03-12T08:57:24
I have also found that when the input prompt is too long (for example, my prompt's input_ids length is 3964), the completion produces strange results, while normal inference with the same model gives normal output. Here I am using the model Qwen2-VL-7B. Strange completions look like `[<output>```python\n```]`, or `'<output>2003:000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000'`. Any suggestions?
2,937
8,740
Wang-Xiaodong1899
2025-03-12T08:57:45
@qgallouedec
2,937
8,741
HuggingFaceDocBuilderDev
2025-02-23T13:17:42
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2936). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,936
8,742
qgallouedec
2025-02-23T16:55:14
Super cool PR @August-murr! I'll check it in detail asap.

> I faced out-of-memory (OOM) issues and couldn't train an agent for more complex tasks.

Can you share your code?

> In the end, we could write a blog post or report to showcase its effectiveness.

Definitely!

> While the results look good, they don't represent a practical use case

Do you have another simple but practical use case?
2,936
8,743
qgallouedec
2025-02-23T17:17:44
Can you add some tests and docs as well?
2,936
8,744
August-murr
2025-02-23T19:50:09
> Can you share your code?

[Kaggle Notebook](https://www.kaggle.com/code/augustmurr/training-agent-to-generate-code). The biggest issue was really Kaggle's 2xT4 having little VRAM. I did try PEFT, but then couldn't use it properly with vLLM, so I decided to do the full model instead.

> Do you have another simple but practical use case?

No, not simpler than that.
2,936
8,745
qgallouedec
2025-02-23T21:22:24
Not sure when you tested it but peft + vllm should be fixed now
2,936
8,746
August-murr
2025-02-25T12:34:15
@qgallouedec I don't get why the tests failed. 8 tests failed with error: `module 'torch' has no attribute 'hip'`
2,936
8,747
qgallouedec
2025-02-28T12:09:49
It's because of liger, merging main should solve this
2,936
8,748
kashif
2025-02-28T12:12:02
they have a fix in the 0.5.4 version
2,936
8,749
qgallouedec
2025-02-28T12:36:15
Actually, I think it's 0.5.4 that contains the bug; that's why we pinned to 0.5.3: https://github.com/huggingface/trl/pull/2952
2,936
8,750
qgallouedec
2025-02-28T18:35:20
Don't bother too much with Windows. If a test fails, you can skip it.
2,936
8,751
August-murr
2025-03-02T09:27:04
@qgallouedec is there anything else needed??
2,936
8,752
August-murr
2025-03-03T18:16:32
Just added parallel code execution using asyncio in https://github.com/huggingface/trl/pull/2936/commits/21973857d6f09868bfbac9c24c8fe5b77129c0b2 to make code generation more scalable (see the sketch after this entry). Right now, we're working with E2B, but if we find that alternatives like CodeSandbox.io, Modal, or Daytona are cheaper or faster, we'll create wrappers for those too.
2,936
8,753
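A generic, hedged sketch of the asyncio pattern mentioned above: many sandboxed code executions launched concurrently instead of sequentially. `run_in_sandbox` is a made-up stand-in for whatever backend (E2B, Modal, etc.) is used, not the actual TRL or E2B API.

```python
import asyncio

async def run_in_sandbox(code: str) -> str:
    # Placeholder: pretend to execute the code remotely and return its output.
    await asyncio.sleep(0.1)
    return f"executed {len(code)} chars"

async def evaluate_completions(completions: list[str]) -> list[str]:
    # Launch all sandbox calls at once; total latency is roughly one call, not N calls.
    return await asyncio.gather(*(run_in_sandbox(c) for c in completions))

if __name__ == "__main__":
    results = asyncio.run(evaluate_completions(["print(1 + 1)", "print('hi')"]))
    print(results)
```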
August-murr
2025-03-04T14:16:49
> Not sure when you tested it but peft + vllm should be fixed now

@qgallouedec Can you share some code for a training run that worked for you? I've been using the example from the [docs](https://huggingface.co/docs/trl/main/en/grpo_trainer#quick-start) plus a LoRA config and `use_vllm=True`, and it has not been working. Not for agents, btw, just training a model with the reward function in the docs.
2,936
8,754
qgallouedec
2025-03-04T14:58:41
Thanks for the feedback, what's the traceback?
2,936
8,755
August-murr
2025-03-04T15:25:34
> Thanks for the feedback, what's the traceback?

There is no error. Everything's running, but the reward doesn't really seem to improve: it is all over the place. [Notebook](https://www.kaggle.com/code/augustmurr/training-model-with-vllm-peft-for-shorter-length) This exact notebook, without the PEFT config, works perfectly, pretty much nailing it with no response (getting the max reward) after just a few steps. So I'm pretty convinced it has something to do with the PEFT and not the reward function, the data, or other hyper-parameters.
2,936
8,756
qgallouedec
2025-03-04T15:59:58
Possibly related? https://github.com/huggingface/trl/pull/2873
2,936
8,757
August-murr
2025-03-04T17:37:32
> Possibly related? #2873

Can you share the code you used in https://github.com/huggingface/trl/pull/2873#issuecomment-2663793035?
2,936
8,758
qgallouedec
2025-02-23T12:11:35
@bot /style
2,935
8,759
HuggingFaceDocBuilderDev
2025-02-23T12:15:44
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2935). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,935
8,760
HuggingFaceDocBuilderDev
2025-02-23T12:04:37
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2934). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,934
8,761
qgallouedec
2025-02-23T13:29:50
@bot /style
2,934
8,762
qgallouedec
2025-02-23T13:32:58
@bot /style
2,934
8,763
qgallouedec
2025-02-23T13:35:00
@bot /style
2,934
8,764
qgallouedec
2025-02-23T16:35:17
@bot /style
2,934
8,765
qgallouedec
2025-02-23T16:36:09
@bot /style
2,934
8,766
github-actions[bot]
2025-02-23T16:36:27
Style fixes have been applied. [View the workflow run here](https://github.com/huggingface/trl/actions/runs/13484908517).
2,934
8,767
sdpkjc
2025-02-25T01:46:31
I've encountered the same issue too: the KL divergence value is abnormal. But I haven't been able to pinpoint the problem yet, so I'm glad to see this. I have a question: why do special tokens appear at all? Doesn't vLLM already skip special tokens when generating token sequences? 🤔
2,933
8,768
glowwormX
2025-03-05T02:34:47
I also encountered a sudden increase in KL. Is there any solution now? @kalomaze
2,933
8,769
qgallouedec
2025-02-23T17:21:27
Thanks, just a minor suggestion :)
2,932
8,770
qgallouedec
2025-02-23T17:21:35
@bot /style
2,932
8,771
cuiyuhao1996
2025-02-24T07:39:04
How was this solved? I also encountered the same problem.
2,931
8,772
qgallouedec
2025-02-24T08:00:12
Try to upgrade trl, it should solve the issue
2,931
8,773
qgallouedec
2025-02-24T08:00:13
Try to upgrade trl, it should solve the issue
2,931
8,774
qgallouedec
2025-02-24T22:24:11
Any reason you closed #2857? (duplicate) For ref, I suggested using histogram: https://github.com/huggingface/trl/pull/2857#issuecomment-2666618701
2,930
8,775
qgallouedec
2025-02-27T10:38:44
@bot /style
2,930
8,776
qgallouedec
2025-02-27T10:39:29
@bot /style
2,930
8,777
github-actions[bot]
2025-02-27T10:39:52
Style fixes have been applied. [View the workflow run here](https://github.com/huggingface/trl/actions/runs/13564466917).
2,930
8,778
HuggingFaceDocBuilderDev
2025-02-22T12:13:19
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2929). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,929
8,779
qgallouedec
2025-02-24T22:25:52
Thanks a lot @ghrua! Huge work! I'll test it asap. It definitely makes sense :)
2,929
8,780
qgallouedec
2025-02-25T11:16:05
Have you tried with vLLM 0.7.3 btw?
2,929
8,781
ghrua
2025-02-25T11:21:49
Hey @qgallouedec! Thanks for reviewing my commit. ~~I am sorry that I haven't tried vLLM 0.7.3. Let me do a quick check. Please wait a moment.~~ I checked that the env for this commit is based on `0.7.3`. Sorry for misremembering.

```
▶ pip list | grep vllm
vllm    0.7.3
```
2,929
8,782
qgallouedec
2025-02-25T12:46:17
So it works with this version? I'm asking because currently, main branch hangs at some point when you use vLLM 0.7.3, and I'm curious to know if your PR solves it
2,929
8,783
ghrua
2025-02-25T13:45:31
> So it works with this version? I'm asking because currently, main branch hangs at some point when you use vLLM 0.7.3, and I'm curious to know if your PR solves it

Yes, my commit works well with vLLM 0.7.3. I re-ran the code to double-check it. Two parts may be helpful:

1. I use a context manager to control the devices that each vLLM instance can access during inference, because vLLM may misuse the devices of the distributed training processes: https://github.com/huggingface/trl/blob/79e9af0c7e179f9be5ee1a0f0e88b3e4ac82cb70/trl/trainer/grpo_trainer.py#L670 (a rough sketch of this idea follows this entry)
2. Three additional patches are used for a smoother initialization of vLLM (though not very elegant 😅): https://github.com/huggingface/trl/blob/79e9af0c7e179f9be5ee1a0f0e88b3e4ac82cb70/trl/trainer/grpo_trainer.py#L474
2,929
8,784
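A minimal, hedged sketch of the device-guard idea from point 1 above: pin the current CUDA device while inference runs so it does not touch the training GPUs. This is illustrative only; the PR's actual context manager in `grpo_trainer.py` may differ.

```python
import contextlib
import torch

@contextlib.contextmanager
def vllm_device_guard(device_index: int):
    # Remember the current device, switch to the vLLM GPU, and restore it afterwards.
    previous = torch.cuda.current_device()
    torch.cuda.set_device(device_index)
    try:
        yield
    finally:
        torch.cuda.set_device(previous)

# Usage (hypothetical): run generation on GPU 7 while training owns GPUs 0-6.
# with vllm_device_guard(7):
#     outputs = llm.generate(prompts, sampling_params=sampling_params)
```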
ghrua
2025-02-27T14:32:39
> Ok now I understand better what you are trying to do.
> Why do you need $K$ processes to control $K$ instances of vLLM? Couldn't you do everything in the main process?

That was my initial plan... I tried to initialise a vllm_list of K vLLM instances in the main process and use a ThreadExecutor to call the inference in parallel. However, it hit lots of errors (I forget the details, but they seemed to be an inconsistency between the q and kv cache). I didn't check the detailed code of vLLM, but those errors seem to come from conflicts over some shared static variables, and I found this issue: https://github.com/vllm-project/vllm/issues/1676. Please let me know if I misunderstood anything. Afterwards, I changed my strategy to initialising K instances in K processes.
2,929
8,785
SeiunSky0131
2025-03-10T02:31:38
> This PR allows users to leverage K GPUs for inference and N − K GPUs for training.

Hi, that's great work! However, when I test this code on my server equipped with 8 NVIDIA A100 80G, with 6 GPUs (GPU 0-5) for training and 2 GPUs (GPU 6-7) for the vLLM engine, I find that GPU 7's utilization is always 0 throughout the training process (see the blue line for GPU 7 in the following figure). <img width="1457" alt="截屏2025-03-10 10 15 57" src="https://github.com/user-attachments/assets/141c8e3a-6720-4d7e-805e-c12f1a2260eb" /> It seems that no generation tasks are allocated to GPU 7. Could you check whether this situation applies to your code?
2,929
8,786
loki369loki
2025-03-11T02:27:47
> > This PR allows users to leverage K GPUs for inference and N − K GPUs for training.
>
> Hi, that's great work! However, when I test this code on my server equipped with 8 NVIDIA A100 80G, with 6 GPUs (GPU 0-5) for training and 2 GPUs (GPU 6-7) for the vLLM engine, I find that GPU 7's utilization is always 0 throughout the training process. It seems that no generation tasks are allocated to GPU 7. Could you check whether this situation applies to your code?

Encountered the same issue: GPU 7 is not functioning properly for vLLM inference. ![vllm_multi_gpu_inference_test](https://github.com/user-attachments/assets/c1c70010-70d2-4f77-97b4-924d3a72b03c)
2,929
8,787
skepsun
2025-03-19T09:07:23
It doesn't seem to solve the problem that vLLM hangs when we want to use large models (for example, 32B) that can only be placed across multiple cards.
2,929
8,788
qgallouedec
2025-03-22T18:41:00
Closed via #3094
2,929
8,789
mehdiataei
2025-02-22T05:57:35
Fixed by downgrading to transformers==4.49.0 from dev.
2,928
8,790
dignfei
2025-02-22T12:22:32
I can train Qwen2.5-3B on a 4090 (24GB).
2,927
8,791
dignfei
2025-02-22T12:22:58
Qwen2.5-7B only needs 2x H20 (80GB).
2,927
8,792
Tuziking
2025-02-22T12:39:57
> Qwen2.5-7B only needs 2x H20 (80GB)

I'm sorry to bother you, but can you share your code with me, or help me find the bug in my code below? I referenced willccbb's code for training.

```python
import re
import torch
from datasets import load_dataset, Dataset
from transformers import AutoTokenizer, AutoModelForCausalLM
from trl import GRPOConfig, GRPOTrainer
from peft import LoraConfig, get_peft_model, TaskType
import wandb
import logging
from scripts.utils.replace_grpo_trainer import trigger

#
logging.basicConfig(
    filename="GRPO-Qwen2.5-7B.log",  # log file name
    level=logging.INFO,  # log level
    format="%(asctime)s - %(message)s",  # log format
    datefmt="%Y-%m-%d %H:%M:%S"
)
logger = logging.getLogger("logger")

# Load and prep dataset
SYSTEM_PROMPT = """
A conversation between User and Assistant. The user asks a question, and the Assistant solves it.
The assistant first thinks about the think process in the mind and then provides the user with the answer.
The think process and answer are enclosed within <think> </think> and <answer> </answer> tags, respectively,
i.e., <think> think process here </think> <answer> answer here </answer>
"""

XML_COT_FORMAT = """
<think>
{think}
</think>
<answer>
{answer}
</answer>
"""

def extract_xml_answer(text: str) -> str:
    answer = text.split("<answer>")[-1]
    answer = answer.split("</answer>")[0]
    return answer.strip()

def extract_hash_answer(text: str) -> str | None:
    if "####" not in text:
        return None
    return text.split("####")[1].strip()

# uncomment middle messages for 1-shot prompting
def get_gsm8k_questions(split = "train") -> Dataset:
    data = load_dataset('dataset/gsm8k', 'main')[split]  # type: ignore
    data = data.map(lambda x: {  # type: ignore
        'prompt': [
            {'role': 'system', 'content': SYSTEM_PROMPT},
            # {'role': 'user', 'content': 'What is the largest single-digit prime number?'},
            # {'role': 'assistant', 'content': XML_COT_FORMAT.format(
            #     think="9 is divisble by 3 and 8 is divisible by 2, but 7 is prime.",
            #     answer="7"
            # )},
            {'role': 'user', 'content': x['question']}
        ],
        'answer': extract_hash_answer(x['answer'])
    })  # type: ignore
    return data  # type: ignore

dataset = get_gsm8k_questions()
# print(dataset[0])

# Reward functions
def correctness_reward_func(prompts, completions, answer, **kwargs) -> list[float]:
    responses = [completion[0]['content'] for completion in completions]
    q = prompts[0][-1]['content']
    extracted_responses = [extract_xml_answer(r) for r in responses]
    print(len(responses), len(extracted_responses), len(answer))
    # for response, extracted_response, _answer in zip(responses, extracted_responses, answer):
    logger.info('-'*20)
    logger.info(f"Question:\n{q}")
    logger.info(f"Answer:\n{answer[0]}")
    logger.info(f"Response:\n{responses[0]}")
    logger.info(f"Extracted:\n{extracted_responses[0]}")
    logger.info(f"Correctness: {1.0 if extracted_responses[0] == answer[0] else 0.0}")
    # wandb.log({"Correctness": 1.0 if extracted_responses[0] == answer[0] else 0.0})
    # print('-'*20, f"Question:\n{q}", f"\nAnswer:\n{answer[0]}", f"\nResponse:\n{responses[0]}", f"\nExtracted:\n{extracted_responses[0]}")
    return [2.0 if r == a else 0.0 for r, a in zip(extracted_responses, answer)]

def int_reward_func(completions, **kwargs) -> list[float]:
    # print("int_reward_func")
    responses = [completion[0]['content'] for completion in completions]
    extracted_responses = [extract_xml_answer(r) for r in responses]
    return [0.5 if r.isdigit() else 0.0 for r in extracted_responses]

# def strict_format_reward_func(completions, **kwargs) -> list[float]:
#     """Reward function that checks if the completion has a specific format."""
#     # print("strict_format_reward_func")
#     pattern = r"^<think>\n.*?\n</think>\n<answer>\n.*?\n</answer>\n$"
#     responses = [completion[0]["content"] for completion in completions]
#     matches = [re.match(pattern, r) for r in responses]
#     return [0.5 if match else 0.0 for match in matches]

def soft_format_reward_func(completions, **kwargs) -> list[float]:
    # print("soft_format_reward_func")
    """Reward function that checks if the completion has a specific format."""
    pattern = r"<think>.*?</think>\s*<answer>.*?</answer>"
    responses = [completion[0]["content"] for completion in completions]
    matches = [re.match(pattern, r) for r in responses]
    return [0.5 if match else 0.0 for match in matches]

def count_xml(text) -> float:
    count = 0.0
    if text.count("<think>\n") == 1:
        count += 0.125
    if text.count("\n</think>\n") == 1:
        count += 0.125
    if text.count("\n<answer>\n") == 1:
        count += 0.125
        count -= len(text.split("\n</answer>\n")[-1])*0.001
    if text.count("\n</answer>") == 1:
        count += 0.125
        count -= (len(text.split("\n</answer>")[-1]) - 1)*0.001
    return count

def xmlcount_reward_func(completions, **kwargs) -> list[float]:
    # print("xmlcount_reward_func")
    contents = [completion[0]["content"] for completion in completions]
    return [count_xml(c) for c in contents]

output_dir = "outputs/Qwen2.5-7B-GRPO"
model_name = "models/Qwen2.5-7B"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

training_args = GRPOConfig(
    output_dir=output_dir,
    learning_rate=5e-6,
    adam_beta1=0.9,
    adam_beta2=0.99,
    weight_decay=0.1,
    warmup_ratio=0.1,
    lr_scheduler_type='cosine',
    logging_steps=1,
    bf16=True,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    num_generations=2,
    max_prompt_length=256,
    max_completion_length=512,
    num_train_epochs=1,
    save_steps=100,
    max_grad_norm=0.1,
    log_on_each_node=False,
    use_vllm=False,
    report_to="wandb"
)

trainer = GRPOTrainer(
    model=model,
    # reward_funcs=xmlcount_reward_func,
    reward_funcs=[
        xmlcount_reward_func,
        soft_format_reward_func,
        # strict_format_reward_func,
        int_reward_func,
        correctness_reward_func,
    ],
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
trainer.save_model(output_dir)
```
2,927
8,793
Tuziking
2025-02-23T09:25:21
I tested the problem with different per_device_train_batch_size and num_generations values. I find that if I use one H20 with `per_device_train_batch_size=4, num_generations=4`, training can continue for some steps before OOM. But if I use 3 x H20 with `per_device_train_batch_size=2, num_generations=6`, OOM occurs earlier. I don't know why training with more H20s makes OOM occur earlier. The error log is as follows:
```
[rank0]:[W223 17:38:57.742650271 reducer.cpp:1400] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
[rank1]:[W223 17:38:57.742968833 reducer.cpp:1400] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
[rank2]:[W223 17:38:57.745997855 reducer.cpp:1400] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters.
(function operator()) Traceback (most recent call last): File "/online1/sc100010/sc100010/qb_project/MARL/trl_GRPO_train.py", line 176, in <module> trainer.train() File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/transformers/trainer.py", line 2241, in train return inner_training_loop( File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/transformers/trainer.py", line 2599, in _inner_training_loop self.optimizer.step() File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/accelerate/optimizer.py", line 178, in step self.optimizer.step(closure) File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/lr_scheduler.py", line 137, in wrapper return func.__get__(opt, opt.__class__)(*args, **kwargs) File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/optimizer.py", line 487, in wrapper out = func(*args, **kwargs) File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/optimizer.py", line 91, in _use_grad ret = func(self, *args, **kwargs) File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/adamw.py", line 220, in step adamw( File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/optimizer.py", line 154, in maybe_fallback return func(*args, **kwargs) File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/adamw.py", line 782, in adamw func( File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/adamw.py", line 606, in _multi_tensor_adamw exp_avg_sq_sqrt = torch._foreach_sqrt(device_exp_avg_sqs) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 130.00 MiB. GPU 0 has a total capacity of 94.99 GiB of which 87.19 MiB is free. Including non-PyTorch memory, this process has 94.90 GiB memory in use. Of the allocated memory 90.98 GiB is allocated by PyTorch, and 2.01 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) [rank0]: Traceback (most recent call last): [rank0]: File "/online1/sc100010/sc100010/qb_project/MARL/trl_GRPO_train.py", line 176, in <module> [rank0]: trainer.train() [rank0]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/transformers/trainer.py", line 2241, in train [rank0]: return inner_training_loop( [rank0]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/transformers/trainer.py", line 2599, in _inner_training_loop [rank0]: self.optimizer.step() [rank0]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/accelerate/optimizer.py", line 178, in step [rank0]: self.optimizer.step(closure) [rank0]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/lr_scheduler.py", line 137, in wrapper [rank0]: return func.__get__(opt, opt.__class__)(*args, **kwargs) [rank0]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/optimizer.py", line 487, in wrapper [rank0]: out = func(*args, **kwargs) [rank0]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/optimizer.py", line 91, in _use_grad [rank0]: ret = func(self, *args, **kwargs) [rank0]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/adamw.py", line 220, in step [rank0]: adamw( [rank0]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/optimizer.py", line 154, in maybe_fallback [rank0]: return func(*args, **kwargs) [rank0]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/adamw.py", line 782, in adamw [rank0]: func( [rank0]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/adamw.py", line 606, in _multi_tensor_adamw [rank0]: exp_avg_sq_sqrt = torch._foreach_sqrt(device_exp_avg_sqs) [rank0]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 130.00 MiB. GPU 0 has a total capacity of 94.99 GiB of which 87.19 MiB is free. Including non-PyTorch memory, this process has 94.90 GiB memory in use. Of the allocated memory 90.98 GiB is allocated by PyTorch, and 2.01 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) [rank1]: Traceback (most recent call last): [rank1]: File "/online1/sc100010/sc100010/qb_project/MARL/trl_GRPO_train.py", line 176, in <module> [rank1]: trainer.train() [rank1]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/transformers/trainer.py", line 2241, in train [rank1]: return inner_training_loop( [rank1]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/transformers/trainer.py", line 2599, in _inner_training_loop [rank1]: self.optimizer.step() [rank1]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/accelerate/optimizer.py", line 178, in step [rank1]: self.optimizer.step(closure) [rank1]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/lr_scheduler.py", line 137, in wrapper [rank1]: return func.__get__(opt, opt.__class__)(*args, **kwargs) [rank1]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/optimizer.py", line 487, in wrapper [rank1]: out = func(*args, **kwargs) [rank1]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/optimizer.py", line 91, in _use_grad [rank1]: ret = func(self, *args, **kwargs) [rank1]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/adamw.py", line 220, in step [rank1]: adamw( [rank1]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/optimizer.py", line 154, in maybe_fallback [rank1]: return func(*args, **kwargs) [rank1]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/adamw.py", line 782, in adamw [rank1]: func( [rank1]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/adamw.py", line 606, in _multi_tensor_adamw [rank1]: exp_avg_sq_sqrt = torch._foreach_sqrt(device_exp_avg_sqs) [rank1]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 130.00 MiB. GPU 1 has a total capacity of 94.99 GiB of which 127.19 MiB is free. Including non-PyTorch memory, this process has 94.86 GiB memory in use. Of the allocated memory 92.27 GiB is allocated by PyTorch, and 682.32 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) [rank2]: Traceback (most recent call last): [rank2]: File "/online1/sc100010/sc100010/qb_project/MARL/trl_GRPO_train.py", line 176, in <module> [rank2]: trainer.train() [rank2]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/transformers/trainer.py", line 2241, in train [rank2]: return inner_training_loop( [rank2]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/transformers/trainer.py", line 2599, in _inner_training_loop [rank2]: self.optimizer.step() [rank2]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/accelerate/optimizer.py", line 178, in step [rank2]: self.optimizer.step(closure) [rank2]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/lr_scheduler.py", line 137, in wrapper [rank2]: return func.__get__(opt, opt.__class__)(*args, **kwargs) [rank2]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/optimizer.py", line 487, in wrapper [rank2]: out = func(*args, **kwargs) [rank2]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/optimizer.py", line 91, in _use_grad [rank2]: ret = func(self, *args, **kwargs) [rank2]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/adamw.py", line 220, in step [rank2]: adamw( [rank2]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/optimizer.py", line 154, in maybe_fallback [rank2]: return func(*args, **kwargs) [rank2]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/adamw.py", line 782, in adamw [rank2]: func( [rank2]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/adamw.py", line 606, in _multi_tensor_adamw [rank2]: exp_avg_sq_sqrt = torch._foreach_sqrt(device_exp_avg_sqs) [rank2]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 130.00 MiB. GPU 2 has a total capacity of 94.99 GiB of which 1.19 MiB is free. Including non-PyTorch memory, this process has 94.98 GiB memory in use. Of the allocated memory 92.43 GiB is allocated by PyTorch, and 703.27 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) wandb: wandb: 🚀 View run outputs/Qwen2.5-7B-GRPO at: https://wandb.ai/bobo1398861921-nus/huggingface/runs/eorh7fyx wandb: Find logs at: ../../../../../../../../online1/sc100010/sc100010/qb_project/MARL/wandb/run-20250223_173847-eorh7fyx/logs W0223 17:39:00.122000 40366 /online1/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 40439 closing signal SIGTERM W0223 17:39:00.125000 40366 /online1/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 40440 closing signal SIGTERM E0223 17:39:00.741000 40366 /online1/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 2 (pid: 40441) of binary: /home/export/base/sc100010/sc100010/.conda/envs/torch/bin/python Traceback (most recent call last): File "/home/export/base/sc100010/sc100010/.conda/envs/torch/bin/accelerate", line 8, in <module> sys.exit(main()) File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 48, in main args.func(args) File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1163, in launch_command multi_gpu_launcher(args) File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/accelerate/commands/launch.py", line 792, in multi_gpu_launcher distrib_run.run(args) File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/distributed/run.py", line 910, in run elastic_launch( File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 138, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ============================================================ trl_GRPO_train.py FAILED ------------------------------------------------------------ Failures: <NO_OTHER_FAILURES> ------------------------------------------------------------ Root Cause (first observed failure): [0]: time : 2025-02-23_17:39:00 host : gpu018 rank : 2 (local_rank: 2) exitcode : 1 (pid: 40441) error_file: <N/A> traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html ============================================================ ```
2,927
8,794
willccbb
2025-02-24T23:58:47
Try setting `vllm_gpu_memory_utilization=0.7` (default is 0.9). GRPO needs room on the inference node to load the model weights so they can be used to update the vLLM engine. 3B will take 6GB, so it fits in the ~8GB available by default, but 7B won't fit unless you increase the headroom. (A config sketch follows this entry.)
2,927
8,795
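A minimal sketch of the headroom adjustment suggested above; the other values and the output directory are placeholders, not a recommended recipe.

```python
from trl import GRPOConfig

training_args = GRPOConfig(
    output_dir="Qwen2.5-7B-GRPO",
    use_vllm=True,
    vllm_gpu_memory_utilization=0.7,  # default is 0.9; lower it so the 7B weight copy fits
    num_generations=8,
    max_completion_length=512,
)
```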
Tuziking
2025-02-27T03:34:52
> Try setting `vllm_gpu_memory_utilization=0.7` (default is 0.9).
>
> GRPO needs room on the inference node to load the model weights so they can be used to update the vLLM engine. 3B will take 6GB, so it fits in the ~8GB available by default, but 7B won't fit unless you increase the headroom.

I'm sorry to bother you, but this doesn't seem to help. It still encounters the OOM error at the second step. What's strange is that during the first step each GPU only uses 40GB, but at the second step it suddenly fills up and causes the OOM error.
2,927
8,796
Fox237
2025-03-14T02:41:19
same problem
2,927
8,797
kashif
2025-02-21T16:47:42
I might need to update the loss on the liger side with respect to the multi-turn PR in TRL's GRPOTrainer.
2,926
8,798
SalmanMohammadi
2025-02-21T17:10:03
> i might need to update the loss on the liger side with respect to the multi-turn PR in TRL's GRPOTrainer Let me know if I can help test!
2,926
8,799