Columns: user (string, 3 to 28 chars) · created_at (timestamp[us], 2020-04-01 09:48:12 to 2025-05-27 22:20:31) · body (string, 1 to 173k chars) · issue_number (int64, 1 to 3.5k) · __index_level_0__ (int64, 0 to 10.1k)
qgallouedec
2025-03-14T12:44:08
Thanks! Does it work if you directly modify `transformers.training_args._VALID_DICT_FIELDS` instead?
3,082
1,119
Tavish9
2025-03-14T12:56:22
Yes, but both `transformers.training_args` and `trl.GRPOConfig` should have their own independent `_VALID_DICT_FIELDS`, as private attributes do. In `GRPOConfig.__post_init__`, it first post-inits its own `_VALID_DICT_FIELDS` and then `transformers.training_args`'s.
3,082
1,120
qgallouedec
2025-03-14T13:52:40
It seems to work:

```python
from transformers.training_args import _VALID_DICT_FIELDS
from trl import GRPOConfig

_VALID_DICT_FIELDS.append("model_init_kwargs")

args = GRPOConfig("output_dir", model_init_kwargs='{"num_labels": 2}')
print(args.model_init_kwargs)  # {"num_labels": 2}
```
3,082
1,121
qgallouedec
2025-03-14T13:59:50
To do this properly, the first step would be to convert `_VALID_DICT_FIELDS` into a class attribute of `TrainingArguments` in transformers. Are you ready to open such a PR in Transformers?

Then we could do:

```python
# in transformers
class TrainingArguments:
    _VALID_DICT_FIELDS = [...]

# in trl
class GRPOConfig(TrainingArguments):
    _VALID_DICT_FIELDS = TrainingArguments._VALID_DICT_FIELDS + ["model_init_kwargs"]
```

which eliminates the need to duplicate the post init.
3,082
1,122
Tavish9
2025-03-14T14:09:34
> To do this properly, the first step would be to convert `_VALID_DICT_FIELDS` into a class attribute of `TrainingArguments` in transformers. Are you ready to open such a PR in Transformers?
>
> Then we could do:
>
> ```python
> # in transformers
> class TrainingArguments:
>     _VALID_DICT_FIELDS = [...]
>
> # in trl
> class GRPOConfig(TrainingArguments):
>     _VALID_DICT_FIELDS = TrainingArguments._VALID_DICT_FIELDS + ["model_init_kwargs"]
> ```
>
> which eliminates the need to duplicate the post init

Yes, that was my initial thought as well. However, considering that `transformers` defines `_VALID_DICT_FIELDS` as semi-private, I decided against submitting a PR to their repository. If we follow the semi-private variable approach, each config should ideally have its own variable, even though this might lead to some code duplication in the `__post_init__` logic. That said, I'm also open to the idea of modifying the semi-private variable in `transformers` to make it a class attribute. However, I'm not sure if the maintainers would be receptive to this change in philosophy. What are your suggestions?
3,082
1,123
qgallouedec
2025-03-14T14:56:57
Yes, I think modifying transformers first is the way to go.
3,082
1,124
Tavish9
2025-03-14T16:42:24
Okay, I'll notify you when the PR is merged. :)
3,082
1,125
Tavish9
2025-04-01T10:54:14
Hi, @qgallouedec, the [PR](https://github.com/huggingface/transformers/pull/36736) in Transformers is merged. 🥳
3,082
1,126
qgallouedec
2025-04-02T05:02:51
I just need to review it carefully and ensure backwards compatibility. I'll do it ASAP.
3,082
1,127
HuggingFaceDocBuilderDev
2025-04-05T05:06:49
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3082). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,082
1,128
Tavish9
2025-04-07T04:02:34
Maybe you need to update the version of transformers and re-run the test?
3,082
1,129
qgallouedec
2025-04-07T04:06:59
So currently this change isn't backward compatible; we need to figure out how to make it backward compatible.
3,082
1,130
Tavish9
2025-04-07T04:41:17
okay, let me try with version checking
3,082
1,131
srinath1510
2025-03-14T01:27:18
Hi, it seems like `tokenizer.eos_token` is a string, which you are passing as the `padding_value`. The `torch.full()` function expects a numeric value for the `padding_value`. I suggest trying with the token id instead: `padding_value = tokenizer.eos_token_id`
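A minimal sketch of the point above, assuming nothing beyond the public `torch.full` and tokenizer APIs ("gpt2" is just an example checkpoint): `torch.full` needs a numeric fill value, so the token id works where the token string does not.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

padding_value = tokenizer.eos_token_id        # an int (50256 for GPT-2)
padded = torch.full((2, 8), padding_value)    # works: numeric fill value
# torch.full((2, 8), tokenizer.eos_token)     # fails: fill value must be a number, not a str
print(padded.shape)
```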
3,080
1,132
sivaganesh07
2025-03-14T17:49:44
Thanks that worked!
3,080
1,133
HuggingFaceDocBuilderDev
2025-03-13T22:28:17
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3079). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,079
1,134
qgallouedec
2025-03-13T22:38:22
What happens if none of the reward functions return a valid reward? We should add a warning, something like:

```python
# If all reward functions return None for a given row, issue a warning
if torch.isnan(rewards_per_func).all(dim=1).any():
    nan_row_idx = torch.isnan(rewards_per_func).all(dim=1).nonzero(as_tuple=True)[0][0]
    row_reward_kwargs = {key: value[nan_row_idx] for key, value in reward_kwargs.items()}
    row_reward_kwargs["prompt"] = prompts[nan_row_idx]
    row_reward_kwargs["completion"] = completions[nan_row_idx]
    warnings.warn(
        f"All reward functions returned None for the following kwargs: {row_reward_kwargs}. "
        "Please ensure that at least one reward function returns a valid reward."
    )
```
3,079
1,135
qgallouedec
2025-03-13T22:39:47
Please also add a unittest for such a case.
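A hedged sketch of what such a unittest might look like, assuming the warning from the snippet above is in place; the model id, config values, and test names are placeholders, not the repo's actual test fixtures.

```python
import unittest

from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer


def reward_none(completions, **kwargs):
    # Every reward function abstains for every completion
    return [None] * len(completions)


class TestAllRewardsNone(unittest.TestCase):
    def test_warns_when_all_rewards_are_none(self):
        dataset = Dataset.from_dict({"prompt": ["Hello", "Hi", "Hey", "Yo"] * 2})
        args = GRPOConfig(
            output_dir="tmp_grpo_none_rewards",
            per_device_train_batch_size=8,
            num_generations=8,
            max_completion_length=8,
            max_steps=1,
            report_to="none",
        )
        trainer = GRPOTrainer(
            model="Qwen/Qwen2.5-0.5B-Instruct",  # placeholder; a tiny test model would also do
            reward_funcs=reward_none,
            args=args,
            train_dataset=dataset,
        )
        with self.assertWarns(UserWarning):
            trainer.train()
```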
3,079
1,136
qgallouedec
2025-03-15T01:04:59
Let's see if the ci passes 🤞
3,079
1,137
qgallouedec
2025-03-15T01:05:24
You need to re-apply the pre-commits
3,079
1,138
shirinyamani
2025-03-15T01:11:24
> You need to re-apply the pre-commits

I did, committed the suggested changes, and pushed now!
3,079
1,139
tchang1997
2025-03-13T23:04:19
What's your exact set of trainer args/training script? I noticed that if `self.beta == 0.0`, KL logging is skipped altogether, though that change may have been after 0.15.2.
3,078
1,140
tchang1997
2025-03-13T23:28:16
Huh, that's odd. Just to confirm, you're seeing all the other metrics, just not KL and loss? Just to make sure this isn't an unsloth thing, maybe try training w/ a tiny mockup dataset w/o unsloth? You might also try explicitly setting `beta` to something non-zero in the `GRPOConfig`. FWIW, I haven't needed to explicitly `wandb.init` in my main script — I just let the trainer take care of that and set `WANDB_PROJECT="unsloth"` in the env, since I had some issues with duplicated runs/metrics not logging where I expected on wandb. But if logging was working before, this is unlikely to be the issue.
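A minimal sketch of the "non-zero `beta`" suggestion, assuming only the public `GRPOConfig` arguments; the values are illustrative, not recommendations.

```python
from trl import GRPOConfig

# With a non-zero KL coefficient the KL term is computed, so the `kl` metric gets logged.
training_args = GRPOConfig(
    output_dir="outputs",
    beta=0.04,          # non-zero -> KL penalty active and logged
    logging_steps=1,
    report_to="wandb",
)
```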
3,078
1,141
SpaceHunterInf
2025-03-13T23:34:49
Here is a screenshot of my wandb workspace. One suspicious thing I noticed: the top left says (6 of 13) but the top right says (1-6 of 6). <img width="1446" alt="Image" src="https://github.com/user-attachments/assets/85013b62-0e8c-41cf-973a-bf041b2738c9" /> I will try to see if the beta thing works. I use wandb.init because I want to associate each run on wandb with my model settings as their wandb name. Thanks
3,078
1,142
SpaceHunterInf
2025-03-14T15:18:28
This is the old comment I had; I just realized I accidentally pasted my wandb API key in the previous edit... Thanks for helping me. I am actually using unsloth + trl GRPO. I can see the cmd output doing well in the wandb log, but not on the wandb workspace. The log itself contains all the things I need.

```
{'loss': 0.0133, 'grad_norm': 2.667192220687866, 'learning_rate': 2.6140692393428204e-10, 'rewards/correctness_reward_func': 0.25, 'rewards/confidence_reward_func': 0.0, 'rewards/int_reward_func': 0.5, 'rewards/soft_format_reward_func': 0.5, 'reward': 1.25, 'reward_std': 1.3363062143325806, 'completion_length': 32.0, 'kl': 0.3319159746170044, 'epoch': 0.92}
```

TLDR, I attached my training config and scripts below.

**Config**

```python
cfg = TrainingConfig(
    dataset_name="ecqa",
    # Model settings
    model_name="Qwen/Qwen2.5-1.5B-Instruct",
    max_seq_length=1024,
    lora_rank=128,
    load_in_4bit=True,
    fast_inference=True,
    gpu_memory_utilization=0.8,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    # Training args
    use_vllm=True,
    learning_rate=5e-6,
    adam_beta1=0.9,
    adam_beta2=0.99,
    weight_decay=0.1,
    warmup_ratio=0.1,
    lr_scheduler_type="cosine",
    optim="adamw_8bit",
    logging_steps=1,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=1,
    num_generations=8,
    max_prompt_length=256,
    max_completion_length=32,
    max_steps=7000,
    save_steps=1400,
    max_grad_norm=0.1,
    report_to="wandb",
    output_dir="outputs",
    wandb_name="ecqa_qwen_hpc",
    # Reward functions
    reward_funcs=[
        "correctness_reward_func",
        "confidence_reward_func",
        "int_reward_func",
        "soft_format_reward_func"
    ]
)
```

**Training Script**

```python
import argparse
import json
import os, sys
from pathlib import Path

import torch
from unsloth import FastLanguageModel, is_bfloat16_supported
from trl import GRPOConfig, GRPOTrainer

from utils.data_utils import get_dataset
from utils.training_utils import (
    correctness_reward_func,
    confidence_reward_func,
    int_reward_func,
    soft_format_reward_func,
    load_config,
    save_config
)


def get_reward_functions(reward_func_names):
    """Map reward function names to actual functions"""
    reward_funcs_map = {
        "correctness_reward_func": correctness_reward_func,
        "confidence_reward_func": confidence_reward_func,
        "int_reward_func": int_reward_func,
        "soft_format_reward_func": soft_format_reward_func
    }
    return [reward_funcs_map[name] for name in reward_func_names]


def train(cfg):
    """Main training function"""
    # Load dataset
    train_dataset, dev_dataset, test_dataset = get_dataset(cfg.dataset_name)

    # Initialize model
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name=cfg.model_name,
        max_seq_length=cfg.max_seq_length,
        load_in_4bit=cfg.load_in_4bit,
        fast_inference=cfg.fast_inference,
        max_lora_rank=cfg.lora_rank,
        gpu_memory_utilization=cfg.gpu_memory_utilization,
    )
    model = FastLanguageModel.get_peft_model(
        model,
        r=cfg.lora_rank,
        target_modules=cfg.target_modules,
        lora_alpha=cfg.lora_rank,
        use_gradient_checkpointing="unsloth",
        random_state=3407,
    )

    # Configure training arguments
    training_args = GRPOConfig(
        use_vllm=cfg.use_vllm,
        learning_rate=cfg.learning_rate,
        adam_beta1=cfg.adam_beta1,
        adam_beta2=cfg.adam_beta2,
        weight_decay=cfg.weight_decay,
        warmup_ratio=cfg.warmup_ratio,
        lr_scheduler_type=cfg.lr_scheduler_type,
        optim=cfg.optim,
        logging_steps=cfg.logging_steps,
        bf16=is_bfloat16_supported(),
        fp16=not is_bfloat16_supported(),
        per_device_train_batch_size=cfg.per_device_train_batch_size,
        gradient_accumulation_steps=cfg.gradient_accumulation_steps,
        num_generations=cfg.num_generations,
        max_prompt_length=cfg.max_prompt_length,
        max_completion_length=cfg.max_completion_length,
        max_steps=cfg.max_steps,
        save_steps=cfg.save_steps,
        max_grad_norm=cfg.max_grad_norm,
        report_to=cfg.report_to,
        output_dir=cfg.output_dir,
    )
    if cfg.num_train_epochs is not None:
        training_args.num_train_epochs = cfg.num_train_epochs

    # Set up reward functions
    reward_funcs = get_reward_functions(cfg.reward_funcs)

    # Initialize trainer
    trainer = GRPOTrainer(
        model=model,
        processing_class=tokenizer,
        reward_funcs=reward_funcs,
        args=training_args,
        train_dataset=train_dataset,
        eval_dataset=dev_dataset,
    )

    # Start training
    trainer.train()


def main():
    parser = argparse.ArgumentParser(description="RL training script")
    parser.add_argument("--config", type=str, default=None, help="Path to config file")
    args = parser.parse_args()

    # Load configuration
    cfg = load_config(args.config)

    if cfg.report_to == 'wandb':
        import wandb
        os.environ['WANDB_API_KEY'] = MYAPI
        wandb.init(project="unsloth", config=cfg.__dict__, name=cfg.wandb_name)

    # Save configuration
    save_config(cfg, cfg.output_dir)

    # Start training
    train(cfg)

    if cfg.report_to == 'wandb':
        wandb.finish()
```
3,078
1,143
SpaceHunterInf
2025-03-14T19:12:01
... Alright, I know the reason why.. I am an absolute idiot. I didn't realize there was something in the regex filter on wandb. Everything is indeed there.
3,078
1,144
HuggingFaceDocBuilderDev
2025-03-13T18:18:04
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3076). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,076
1,145
HuggingFaceDocBuilderDev
2025-03-13T17:21:47
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3075). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,075
1,146
qgallouedec
2025-03-13T17:57:00
Thanks! Have you tried to fine-tune a VLM with the trainer? Do you have results to share?
3,072
1,147
CompN3rd
2025-03-14T08:55:26
Well, actual fine-tuning is still in progress and riddled with OOM issues and the quantization bug referenced in the unittest, and the full training script relies on a private dataset, but I can at least give a bit more information. The training task I am currently looking at is fine-tuning a VLM (the language model part) doing image captioning to maximize a CLIP cosine similarity score. So the reward module looks like this:

```python
@dataclass
class CLIPRewardModelOutput(ModelOutput):
    logits: torch.FloatTensor
    """The reward logits for the Trainer."""


class CLIPRewardModel(CLIPModel):
    """Inherits from CLIPModel (i.e. PreTrainedModel), such that the forward computation gives the rl reward,
    but type-based training logic (accelerator.prepare) is still possible"""

    def forward(self, *, input_ids, attention_mask, pixel_values) -> torch.Tensor:
        """Mainly copy-paste from CLIPModel.forward up to the logit and loss computation"""
        vision_outputs = self.vision_model(
            pixel_values=pixel_values,
            output_attentions=False,
            output_hidden_states=False,
            interpolate_pos_encoding=False,
            return_dict=True,
        )
        text_outputs = self.text_model(
            input_ids=input_ids,
            attention_mask=attention_mask,
            position_ids=None,
            output_attentions=False,
            output_hidden_states=False,
            return_dict=True,
        )

        image_embeds = vision_outputs[1]
        image_embeds = self.visual_projection(image_embeds)

        text_embeds = text_outputs[1]
        text_embeds = self.text_projection(text_embeds)

        # normalized features
        image_embeds = image_embeds / _get_vector_norm(image_embeds)
        text_embeds = text_embeds / _get_vector_norm(text_embeds)

        # cosine similarity of the vectors as reward logits
        return CLIPRewardModelOutput(logits=cosine_similarity(image_embeds, text_embeds, dim=-1).unsqueeze(-1))
```
3,072
1,148
CompN3rd
2025-03-14T08:57:54
Then we have a small modification to the trainer via subclassing (which is why I proposed to split off the relevant code section into its own member function):

```python
class GRPOVlmClipTrainer(GRPOTrainer):
    def _prepare_inputs_for_reward_module(
        self,
        *,
        inputs: dict[str, torch.Tensor | Any],
        reward_processing_class: PreTrainedTokenizerBase,
        prompts: list[str],
        completions: list[str],
        images=None,
    ) -> dict[str, torch.Tensor | Any]:
        # disregard prompts, only prepare completions (captions) and images
        reward_inputs = reward_processing_class(
            images=images,
            text=completions,
            return_tensors="pt",
            padding=True,
            padding_side="right",
            add_special_tokens=True,
            truncation=True,
            max_length=77,
        )
        reward_inputs = super(GRPOTrainer, self)._prepare_inputs(reward_inputs)
        return reward_inputs
```
3,072
1,149
CompN3rd
2025-03-14T09:04:51
Finally this leads to reward curves like this, which seem to indicate that it generally optimizes in the right direction. ![image](https://github.com/user-attachments/assets/12abe047-5d97-4c11-82fe-53e224e83572)
3,072
1,150
MohamedAliRashad
2025-03-16T07:10:33
@CompN3rd If you can give me a simple guide on how to use your PR, I can help you with testing.
3,072
1,151
CompN3rd
2025-03-17T07:45:36
> @CompN3rd If you can give me a simple guide on how to use your PR, I can help you with testing.

Sure, if you want to get started with a semi-realistic example, I'd suggest starting with the setup from the unittest, which should be able to run on a 24 GB GPU (`test_grpo_trainer.py` l.900-987):

```python
@require_flash_attn
@require_bitsandbytes
@require_peft
@require_torch_accelerator
def test_vlm_training(self):
    model_name = "HuggingFaceTB/SmolVLM-Instruct"
    .....
```

The biggest question there is why 8-bit quantization works, but 4-bit quantization breaks the test (or whether that is somehow expected behavior), so any input in that regard would be valuable. Other than that, if you have access to GPUs with more VRAM you could rewrite the test configuration to work without quantization, or you could alternatively replace the model with a smaller one...
3,072
1,152
MohamedAliRashad
2025-03-18T04:11:02
@CompN3rd I have tried this preprocessing function:

```python
def format_data(row):
    base64_image = encode_image(row["image"])
    prompt = "Extract all text from the given image and format it using Markdown syntax. Preserve headings, lists, bold/italic text, and other structural elements. Ensure the output is clean and readable in Markdown format."
    messages = [
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "image": f"data:image/jpeg;base64,{base64_image}",
                },
                {"type": "text", "text": prompt},
            ],
        }
    ]

    # Preparation for inference
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    image_inputs, video_inputs = process_vision_info(messages)
    inputs = processor(
        text=[text],
        images=image_inputs,
        videos=video_inputs,
        padding=True,
        return_tensors="pt",
    )
    return inputs
```

and it gave me `KeyError: 'prompt'` (I am training `Qwen/Qwen2.5-VL-3B-Instruct`).
3,072
1,153
CompN3rd
2025-03-19T10:56:11
@MohamedAliRashad If I understand this correctly, it is probably because the `processor` in your case returns already tokenized `input_ids` and probably `pixel_values` or whatever fields are associated with image/video processing. That is not the type of data the `GRPOTrainer` expects (even before this current PR). In the internal preprocessing function of the trainer, it accesses `prompt` of the input dictionary (and this PR adds `image`, which is expected to be a raw numpy or PIL image, not a base64 string). Then it internally calls the processor and goes from there.

TL;DR: Your data preprocessing probably interferes with the input data preparation done in the trainer class.
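A hedged sketch of the row shape described above (not the PR's exact schema): an untokenized conversational `prompt` plus a raw PIL image under `image`, leaving tokenization and pixel-value extraction to the trainer.

```python
from PIL import Image

example = {
    "prompt": [
        {"role": "user", "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe the image."},
        ]},
    ],
    "image": Image.new("RGB", (224, 224)),  # raw image, not base64 / pre-computed pixel_values
}
```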
3,072
1,154
nph4rd
2025-03-19T14:07:30
@CompN3rd - so how would one preprocess the data or tell the trainer how to process it? For example, as far as I understand, Qwen2.5-VL uses qwen-vl-util's `process_vision_info`. Based on your changes, what would be the best approach to use that during the input preparation?
3,072
1,155
MohamedAliRashad
2025-03-19T20:12:27
@CompN3rd I changed the preprocessing to be closer to what you have in the test file and it worked wonderfully. I did full fine-tuning of Qwen 2.5 VL 3B and it worked on an 80 GB GPU.
3,072
1,156
nph4rd
2025-03-19T20:45:00
@MohamedAliRashad - do you mind sharing the setup/code you used for that?
3,072
1,157
nph4rd
2025-03-20T17:52:18
I just tested the following with [this dummy dataset](agentsea/vqa-test-formatted) using 4 A100 80GB:

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer
import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
import copy

model_id = "Qwen/Qwen2.5-VL-3B-Instruct"

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    use_cache=False,
)
processor = AutoProcessor.from_pretrained(model_id, padding_side="left")

dataset = load_dataset("agentsea/vqa-test-formatted", split="train")
dataset = dataset.remove_columns(["completion"])

def preprocess_vision_info(examples):
    examples_copy = copy.deepcopy(examples)
    batch_size = len(examples["prompt"])
    examples["image"] = []
    for i in range(batch_size):
        prompt_data = examples_copy["prompt"][i]
        image_data = examples_copy["image"][i]
        for message in prompt_data:
            for content in message["content"]:
                if isinstance(content, dict) and content.get("type") == "image":
                    content["image"] = image_data
        processed_images, _ = process_vision_info(prompt_data)
        examples["image"].extend(processed_images)
    return examples

dataset = dataset.with_transform(preprocess_vision_info)

def reward_len(completions, **kwargs):
    return [-abs(20 - len(completion)) for completion in completions]

training_args = GRPOConfig(
    output_dir="Qwen2.5-VL-3B-GRPO",
    logging_steps=1,
    use_vllm=True,
    bf16=True,
    gradient_checkpointing=True,
    per_device_train_batch_size=1,
    num_generations=3,
    max_prompt_length=None,
    vllm_device="cuda:3",
)
trainer = GRPOTrainer(
    model=model,
    processing_class=processor,
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```

However I'm encountering this error:

```
ValueError: Attempted to assign 5185 + 5185 + 5185 = 15555 multimodal tokens to 31107 placeholders
```

Upon further inspection I found that the code works if I make the following change in [this line specifically](https://github.com/CompN3rd/trl/blob/328ef463a776a00d02decc8bf7e5f8cfbe215c03/trl/trainer/grpo_trainer.py#L823), from:

```python
prompt_inputs = self.processing_class(
    text=prompts_text,
    images=images,
    return_tensors="pt",
    padding=True,
    padding_side="left",
    add_special_tokens=False,
)
```

to:

```python
prompt_inputs = self.processing_class(
    text=prompts_text.copy(),  # send a copy instead
    images=images,
    return_tensors="pt",
    padding=True,
    padding_side="left",
    add_special_tokens=False,
)
```

What is happening is that the processor class is mutating the input [here](https://github.com/huggingface/transformers/blob/42c489f2ae738a3b690bb90aab274f02ff024795/src/transformers/models/qwen2_5_vl/processing_qwen2_5_vl.py#L156C21-L156C25). So, vLLM complains because it's receiving the modified `prompts_text`. You can test the side-effect in [this script](https://gist.github.com/nph4rd/f003323ac4c8940f779f44a24b815ff7).

I don't think this is an issue that should be handled by either TRL or vLLM. I think it should be handled at the source, in the processor's code. I can see that the [SmolVLM processor class](https://github.com/huggingface/transformers/blob/main/src/transformers/models/smolvlm/processing_smolvlm.py) doesn't have this kind of side-effect. But Qwen2.5-VL's does, so I do wonder how @MohamedAliRashad made it work. I presume it's because `prompts_text` is a string and not a list when using 1 GPU with `num_generations=1`?
---- EDIT: fwiw - I raised https://github.com/huggingface/transformers/issues/36865 + opened https://github.com/huggingface/transformers/pull/36866
3,072
1,158
MohamedAliRashad
2025-03-25T13:18:04
@nph4rd The error you are seeing is because of your context size limit. Qwen (unlike other models) doesn't use a fixed number of tokens for images of different shapes; the number of tokens changes based on the size of the input image. If I am not mistaken, every `28x28` pixel patch is one token for them. What you need to do is resize your images so they fit within your acceptable context window. Also, I didn't use `process_vision_info` and it worked fine for me, so you may consider removing it and sending the PIL images as they are.
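A minimal sketch of the resizing idea above, assuming only PIL; the 448-pixel cap is an arbitrary example, not a recommended value.

```python
from PIL import Image

def resize_for_context(image: Image.Image, max_side: int = 448) -> Image.Image:
    """Shrink an image so its visual-token count (roughly one per 28x28 patch) stays small."""
    image = image.copy()
    image.thumbnail((max_side, max_side))  # in-place resize, preserves aspect ratio
    return image

small = resize_for_context(Image.new("RGB", (1920, 1080)))
print(small.size)  # (448, 252)
```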
3,072
1,159
CompN3rd
2025-03-25T13:26:41
@qgallouedec Let me know if there are refactoring or api changes necessary to make this ready for merging. Would be happy to make those adjustments.
3,072
1,160
nph4rd
2025-03-25T16:47:30
@MohamedAliRashad / @CompN3rd - thanks for the comments. I don't understand why it would work with the change I shared but not without it, though? 🤔 With that change I didn't have to resize the images for it to work. Another thing I found is that when I set `log_completions=True`, the training was stuck at this line: https://github.com/huggingface/trl/blob/e94b5facd44764d425bdb110784dd86794ef7a05/trl/trainer/grpo_trainer.py#L1026 Specifically, the `gather_object(images)` call was timing out. This might be my image sizes again, but I thought I'd let you know in case you hadn't tested `log_completions`.
3,072
1,161
CompN3rd
2025-03-25T17:02:21
@nph4rd Thanks for testing it out. I concur with @MohamedAliRashad's observations: I could produce such errors mostly by having too small a context window. As for the `log_completions` error, I had a version where not all processes participated in the communication, which obviously failed. So far I have a local test with 2 GPUs, which worked well, as well as a cloud test, but that was only one A40 GPU. Both produced images in Weights & Biases, but I admit I haven't run multi-node tests. ![Screenshot_20250325-175437.png](https://github.com/user-attachments/assets/84ac360d-29a2-4961-b3aa-dc492f6b7a81)
3,072
1,162
sunildkumar
2025-04-02T03:55:13
Eagerly awaiting this (https://github.com/huggingface/trl/issues/2734 - 2 months and counting)! @CompN3rd Let me know if and how I can help. I've been training VLMs with GRPO for a while now, just not on TRL `main`.
3,072
1,163
sunildkumar
2025-04-05T05:18:45
@qgallouedec – I hope you don’t mind the tag. It’s been a couple of weeks since @CompN3rd has engaged with this PR, and I really appreciate the work that’s been done so far. I’d love to help move it forward if that’s appropriate. I’m not entirely sure what the etiquette is in cases like this—would it be okay to open a follow-up PR branching off of this one, or would you recommend waiting longer? Apologies if this is a naive question, and thank you in advance for any guidance.
3,072
1,164
qgallouedec
2025-04-05T05:42:01
Thanks again for your work on this, and sorry for the slow response, be sure we're doing our best. It's a valuable feature and makes a lot of sense to include. That said, it requires thorough review, testing, and documentation before merging, and at the moment we don’t have the capacity to give it the attention it needs. I’ll make sure to revisit it as soon as I can. In the meantime, keeping the PR open is a great idea. It allows the community to test it, report any issues, and benefit from the feature. And to your question — yes, feel free to open a follow-up PR based on this one. That’s totally fine and actually very helpful. No need to wait.
3,072
1,165
sunildkumar
2025-04-05T05:49:52
@qgallouedec - totally understood. Thanks for your advice!
3,072
1,166
Benjoyo
2025-05-03T20:33:43
Anyone actively working on GRPO support for VLMs still? 🙏
3,072
1,167
chaodreaming
2025-05-08T09:30:33
How long will it take to support grpo?
3,072
1,168
chaodreaming
2025-05-08T09:31:19
About how long will it take to support GRPO? A lot of people are very excited about this. Thank you very much for your contribution!
3,072
1,169
mccatec
2025-05-13T03:09:28
<img width="593" alt="image" src="https://github.com/user-attachments/assets/7050a162-f429-4520-8bea-8055a48248cc" /> Noticed this from @qgallouedec 's x 🙏
3,072
1,170
chaodreaming
2025-05-14T08:42:47
> <img alt="image" width="593" src="https://github.com/user-attachments/assets/7050a162-f429-4520-8bea-8055a48248cc">
> Noticed this from @qgallouedec 's x 🙏

Where did you see this news? Hopefully you can give me a link; I'll be keeping an eye on the progress.
3,072
1,171
mccatec
2025-05-14T08:45:59
> > <img alt="image" width="593" src="https://github.com/user-attachments/assets/7050a162-f429-4520-8bea-8055a48248cc">
> > Noticed this from @qgallouedec 's x 🙏
>
> Where did you see this news? Hopefully you can give me a link; I'll be keeping an eye on the progress.

https://x.com/QGallouedec/status/1919806234821026141
3,072
1,172
chaodreaming
2025-05-14T08:53:19
> > > <img alt="image" width="593" src="https://github.com/user-attachments/assets/7050a162-f429-4520-8bea-8055a48248cc">
> > > Noticed this from @qgallouedec 's x 🙏
> >
> > Where did you see this news? Hopefully you can give me a link; I'll be keeping an eye on the progress.
>
> https://x.com/QGallouedec/status/1919806234821026141

Thank you very much, I saw that. He is a very capable person; I'm sure TRL will support multimodal GRPO soon.
3,072
1,173
nph4rd
2025-05-22T00:55:58
hey in case you want to try grpo+vlm while it's not supported here, i wrote this up based on TRL's code and @CompN3rd's PR https://github.com/nph4rd/grpo_vlm
3,072
1,174
chaodreaming
2025-05-24T00:58:14
> hey in case you want to try grpo+vlm while it's not supported here, i wrote this up based on TRL's code and @CompN3rd's PR
>
> https://github.com/nph4rd/grpo_vlm

You should open a PR to become a contributor; it would be a good item for your resume!
3,072
1,175
HuggingFaceDocBuilderDev
2025-03-13T12:37:40
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3070). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,070
1,176
burtenshaw
2025-03-13T12:41:59
@qgallouedec Amazing work. thanks! A few questions:

- Is padding like this necessary for all trainers?
- Do we need to patch other token ids like this?
  ```python
  processor.pad_token_id = processor.tokenizer.pad_token_id
  processor.bos_token_id = processor.tokenizer.bos_token_id
  processor.eos_token_id = processor.tokenizer.eos_token_id
  ```
- Did you see how unsloth is dealing with missing token ids: https://huggingface.co/unsloth/gemma-3-4b-it/commit/90fe72f525abc73ff7283c23e6ceccea5d4273bb . Do you think we should open a PR for changes on the hub repo?
3,070
1,177
qgallouedec
2025-03-13T12:46:12
> Is padding like this necessary for all trainers?

Usually tokenizers have a pad method. Here, the gemma processor doesn't. But maybe we shouldn't use the processor and directly load the tokenizer? Checking.
3,070
1,178
NanoCode012
2025-03-13T12:33:06
Thanks for the approval, @kashif . Would it be possible to trigger the workflow as well?
3,069
1,179
HuggingFaceDocBuilderDev
2025-03-13T12:41:55
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3069). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,069
1,180
NanoCode012
2025-03-23T15:20:33
@kashif , may I ask if this PR may be merged anytime soon?
3,069
1,181
NanoCode012
2025-03-13T09:16:32
This issue also exists for CPOTrainer. Repro:

```
accelerate launch examples/scripts/cpo.py --dataset_name trl-lib/ultrafeedback_binarized --model_name_or_path=gpt2 --per_device_train_batch_size 4 --max_steps 1000 --learning_rate 8e-6 --gradient_accumulation_steps 1 --logging_steps 10 --eval_steps 500 --output_dir="gpt2-aligned-cpo" --warmup_steps 150 --report_to none --bf16 --logging_first_step --no_remove_unused_columns
```

Commenting out the below allows it to run (or calculating `.mean`): https://github.com/huggingface/trl/blob/4871c82b0cd1caae72522182f9171ea069481250/trl/trainer/cpo_trainer.py#L838-L843
3,068
1,182
NanoCode012
2025-03-13T09:20:08
Other trainers that return logits metrics do not have this issue:
- BCO: uses sum of logits
- DPO: uses mean of logits
- KTO: uses sum of logits
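A hedged sketch of the reduction being discussed: log a scalar summary of the logits (mean, as DPO does, or sum) rather than the full tensor. The helper and key names here are illustrative, not the exact variables in `cpo_trainer.py`.

```python
import torch

def logits_metrics(chosen_logits: torch.Tensor, rejected_logits: torch.Tensor) -> dict:
    # Reduce to Python floats so the metrics dict holds scalars, not large tensors
    return {
        "logits/chosen": chosen_logits.detach().mean().cpu().item(),
        "logits/rejected": rejected_logits.detach().mean().cpu().item(),
    }

print(logits_metrics(torch.randn(4, 16, 32000), torch.randn(4, 16, 32000)))
```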
3,068
1,183
qgallouedec
2025-03-13T23:11:17
This may help you #3076
3,067
1,184
qgallouedec
2025-03-13T23:12:34
The answer is yes and no. It's still triangular but the samples can "contaminate". That's a known issue #1230
3,067
1,185
tchang1997
2025-03-13T23:14:28
As per [the tutorial](https://huggingface.co/docs/trl/main/en/grpo_trainer) I use `accelerate launch` and set `--num-processes [N_GPUS]` to do multi-GPU training. You may also need to play with [`deepspeed`](https://huggingface.co/docs/trl/main/en/deepspeed_integration) settings. These can all be `pip install`-ed — note that you may need to run `accelerate config` first to set things up.
3,066
1,186
tjoymeed
2025-03-14T03:22:03
Does it support combined VRAM, i.e. 40 GB x 8 = 320 GB total?
3,066
1,187
tchang1997
2025-03-14T17:24:07
In theory, that's completely dependent on your hardware, not these packages. `accelerate` simply lets you do distributed training across GPUs easily, and `deepspeed` has some flags you can set to make training even more memory-efficient.
3,066
1,188
tjoymeed
2025-03-14T17:31:56
The hardware is not the problem. What flags can I set to use the combined VRAM, 40 GB x 8 = 320 GB total?
3,066
1,189
tchang1997
2025-03-14T18:01:35
Try `accelerate config` — it'll walk you through some prompts to answer questions about your setup, and auto-set those flags. You can rerun that at any time if you need to change things. It'll also make a `deepspeed` config which you can later edit — see [here](https://huggingface.co/docs/accelerate/en/usage_guides/deepspeed) for more info.
3,066
1,190
qgallouedec
2025-04-05T17:04:30
You are probably looking for DeepSpeed ZeRO-3; check our doc: https://huggingface.co/docs/trl/main/deepspeed_integration
3,066
1,191
VProv
2025-03-26T16:33:29
Relevant to this PR too https://github.com/huggingface/trl/pull/2568#issuecomment-2755022960
3,065
1,192
HuggingFaceDocBuilderDev
2025-03-12T11:18:46
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3062). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,062
1,193
iamansinha
2025-03-13T04:54:05
Currently, I think for trl>0.14, `per_device_train_batch_size` means the number of generations per device, not the number of prompts per device. Refer to the illustration given at this [line](https://github.com/huggingface/trl/blob/4871c82b0cd1caae72522182f9171ea069481250/trl/trainer/grpo_trainer.py#L597) in the code comments. So the number of prompts per device is equal to `per_device_train_batch_size / num_generations`. For your example, the minimum `per_device_train_batch_size` should be 2, so that with `num_processes=4` (4 GPUs) and `use_vllm=False`, each GPU generates 2 responses, giving 8 generations in total for one prompt sample. And if you want to generate all 8 generations of one prompt per GPU, you need to set `per_device_train_batch_size` equal to `num_generations`. Similarly, for all generations of `n` prompts per GPU, set `per_device_train_batch_size = n * num_generations`. Hope this helps!
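A minimal sketch of the arithmetic above, assuming only the public `GRPOConfig` arguments; the values are illustrative, and the config itself does not know the GPU count (the trainer checks it at runtime).

```python
from trl import GRPOConfig

# With num_processes=4 (4 GPUs), the effective generation batch is
# 4 * per_device_train_batch_size, which must be a multiple of num_generations.
args = GRPOConfig(
    output_dir="outputs",           # placeholder
    num_generations=8,
    per_device_train_batch_size=2,  # 4 GPUs * 2 = 8 generations -> 1 prompt per step
)
```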
3,061
1,194
YueChenkkk
2025-03-14T11:42:23
I think this constraint ensures all the generations are consumed in a single backward step. Otherwise the buffer mechanism will be way more complicated.
3,061
1,195
tonghuikang
2025-03-23T02:23:38
Does this mean that the size of `n_generations` (which is the G in GRPO) is limited by the number of GPUs you have? I would like to try a huge number for `n_generations` though.
3,061
1,196
qgallouedec
2025-03-23T03:12:02
No, it means that it's limited by num GPUs x per-device batch size.
3,061
1,197
tonghuikang
2025-03-23T05:18:49
Can `per_device_train_batch_size` be a large number not limited by GPU memory size?

I set

```
num_generations=16,
per_device_train_batch_size=16,
```

and the run is OK, but when I set

```
num_generations=32,
per_device_train_batch_size=32,
```

it ran out of memory in the first training step. It seems that I cannot do `num_generations=32` without more GPUs.

```
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 936.00 MiB. GPU 0 has a total capacity of 139.81 GiB of which 684.00 MiB is free. Process 69 has 139.13 GiB memory in use. Of the allocated memory 137.16 GiB is allocated by PyTorch, and 649.90 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```
3,061
1,198
qgallouedec
2025-04-01T15:45:57
> Can `per_device_train_batch_size` be a large number not limited by GPU memory size?

No it can't; the larger the batch size, the larger the memory needed.
3,061
1,199
kashif
2025-03-12T12:11:31
@benyaminjami so with this implementation... what happens when `seq_kd` is False? are we then still doing the GKD loss?
3,058
1,200
qgallouedec
2025-03-12T11:13:49
Can you provide the change that you've made? It's not clear from your explanation
3,057
1,201
vagitablebirdcode
2025-03-12T12:06:15
It is easy, just change the `SamplingParams` as follows:

```python
self.sampling_params = SamplingParams(n=self.args.num_generations)
```

Then it can generate `n` results for every input. Further, I think vLLM can process `per_device_train_batch_size * num_generations * gradient_accumulation_steps` inputs at one time, which can accelerate training in the collection stage.
3,057
1,202
qgallouedec
2025-03-12T12:20:01
How is it different from the current code?
3,057
1,203
vagitablebirdcode
2025-03-12T12:32:49
I am still in the planning phase and haven't made any changes to the relevant code yet, as this improvement will be a large project. I found that the sampling in the `Trainer` from transformers is based on `batch_size` and is collected step by step over the accumulation steps. To generate `per_device_train_batch_size * num_generations * gradient_accumulation_steps` samples at once, we first need to modify the dataset sampler. After the improvement, each iteration should pass in `gradient_accumulation_steps * per_device_train_batch_size` samples and use `llm.generate` to collect the results. Finally, the results should be evenly distributed across devices for the update calculations.
3,057
1,204
qgallouedec
2025-03-12T12:54:39
Sorry, but it's even less clear. What is the suggested change? I still can't see the difference between what you're describing and the current implementation.

> It is easy, just change the `SamplingParams` as follows:
>
> ```python
> self.sampling_params = SamplingParams(n=self.args.num_generations)
> ```
>
> Then it can generate `n` results for every input. Further, I think vLLM can process `per_device_train_batch_size * num_generations * gradient_accumulation_steps` inputs at one time, which can accelerate training in the collection stage.

https://github.com/huggingface/trl/blob/fd9e5a7cabc8b7def9b64042cb147616aa0d1d04/trl/trainer/grpo_trainer.py#L525
3,057
1,205
vagitablebirdcode
2025-03-12T13:04:26
I'm very sorry—I didn't notice the main branch and was only looking at the 0.15.2 branch and a few PR branches. The code you mentioned is not present in those branches. In fact, the code in the main branch has already implemented my idea. Thank you very much for your response! I will go ahead and close this issue.
3,057
1,206
HuggingFaceDocBuilderDev
2025-03-11T23:24:21
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3056). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,056
1,207
Pclanglais
2025-03-11T23:20:28
Same question for a different issue. I'm using a model with special tokens signalling different text parts, and I'm unable to access them without setting `skip_special_tokens=False`.
3,054
1,208
qgallouedec
2025-03-11T23:30:39
Thanks for the suggestion. In fact it's already been suggested in #2728, and I think this solution should actually be avoided: https://github.com/huggingface/trl/pull/2728#issuecomment-2635166424
3,054
1,209
mtoslalibu
2025-03-12T13:43:46
> Thanks for the suggestion. In fact it's already been suggested in [#2728](https://github.com/huggingface/trl/pull/2728), and I think this solution should actually be avoided: [#2728 (comment)](https://github.com/huggingface/trl/pull/2728#issuecomment-2635166424)

Thank you for your response. I will introduce the batch-related parameters (like `max-num-seq`) one by one, then. The motivation is that batch size has a strong impact on inference duration, and tuning it can reduce GRPO training duration.
3,054
1,210
qgallouedec
2025-03-11T16:58:24
@loricxy0707 can you confirm that this fixes your issue?
3,053
1,211
HuggingFaceDocBuilderDev
2025-03-11T17:01:10
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3053). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,053
1,212
HuggingFaceDocBuilderDev
2025-03-11T14:58:02
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3052). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,052
1,213
qgallouedec
2025-03-11T14:36:20
Hi, thanks for the question. Yes, we first generate, then compute the reward and the loss, then the weights are updated.

> From the looks, it feels like the parameter update is blocked until the first two steps are complete. Does that mean the GPUs (with the model weights loaded) remain idle until then?

With vLLM yes; without, these GPUs are used to generate.

> I believe it's the same behavior for both the approaches:
> - gathering the parameters (on a single GPU) from ds3 before generation
> - using a separate GPU with vllm for generation

Not exactly, because without vLLM the weights are gathered on all devices, so all devices generate.
3,050
1,214
yash-malik
2025-03-11T16:29:01
Thanks for the answer! That makes sense!
3,050
1,215
Rocketknight1
2025-03-11T12:20:54
cc @zucchini-nlp @qgallouedec
3,051
1,216
qgallouedec
2025-03-11T13:25:59
This is not high priority, so contributions are very welcome. This issue belongs to TRL, I'll transfer it.
3,051
1,217
SabaPivot
2025-03-13T07:04:02
> This is not high priority, so contributions are very welcome. This issue belongs to TRL, I'll transfer it.

Sure. https://github.com/om-ai-lab/VLM-R1

Team om-ai-lab has implemented a GRPO Trainer for the QWEN-VL series of models. Hope this helps.
3,051
1,218