Columns: user (string), created_at (timestamp[us]), body (string), issue_number (int64)
qgallouedec
2024-10-18T14:21:33
Thanks for pointing this out, #2248 will fix it
2,238
reihig-ut
2024-10-24T05:07:42
Thank you for your PR! I retried the reproduction process on branch `kto-conv-data-support` and got this error:

```
/home/hoge/miniconda3/envs/run_kto/lib/python3.11/site-packages/trl/trainer/kto_trainer.py:479: UserWarning: When using DPODataCollatorWithPadding, you should set `max_length` in the KTOTrainer's init it will be set to `512` by default, but you should do it yourself in the future.
  warnings.warn(
/home/hoge/miniconda3/envs/run_kto/lib/python3.11/site-packages/trl/trainer/kto_trainer.py:489: UserWarning: When using DPODataCollatorWithPadding, you should set `max_prompt_length` in the KTOTrainer's init it will be set to `128` by default, but you should do it yourself in the future.
  warnings.warn(
/home/hoge/miniconda3/envs/run_kto/lib/python3.11/site-packages/trl/trainer/kto_trainer.py:519: UserWarning: When using DPODataCollatorWithPadding, you should set `remove_unused_columns=False` in your KTOConfig we have set it for you, but you should do it yourself in the future.
  warnings.warn(
Traceback (most recent call last):
  File "/home/hoge/project/test/trl/examples/scripts/kto.py", line 97, in <module>
    trainer = KTOTrainer(
              ^^^^^^^^^^^
  File "/home/hoge/miniconda3/envs/run_kto/lib/python3.11/site-packages/trl/trainer/kto_trainer.py", line 721, in __init__
    super().__init__(
TypeError: Trainer.__init__() got an unexpected keyword argument 'processing_class'
```
2,238
benchay1999
2024-10-24T07:47:50
Changing `processing_class` to `tokenizer` worked for me.
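For reference, the change looks like this (a sketch mirroring the DPO diff further down in this dump; the surrounding arguments depend on your script):

```diff
- trainer = KTOTrainer(model=model, args=training_args, processing_class=tokenizer, train_dataset=train_dataset)
+ trainer = KTOTrainer(model=model, args=training_args, tokenizer=tokenizer, train_dataset=train_dataset)
```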
2,238
kashif
2024-10-24T08:44:08
Should be fixed now in `main` with the latest transformers release.
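If you hit this on an older install, upgrading both packages should pick up the fix (plain `pip` commands; installing TRL from source mirrors the advice given elsewhere in these threads):

```
pip install -U transformers
pip install git+https://github.com/huggingface/trl.git
```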
2,238
Mefisto04
2024-10-16T19:16:43
Hey @qgallouedec, please review this and assign me this issue.
2,237
qgallouedec
2024-10-18T17:23:07
Hi, thanks for reporting @Mefisto04. Feel free to open a PR if you can improve it.
2,237
Mefisto04
2024-10-21T19:28:56
Hey @qgallouedec, I have made PR #2249, please review it.
2,237
qgallouedec
2024-10-25T16:04:41
Closed via #2249
2,237
HuggingFaceDocBuilderDev
2024-10-21T09:44:29
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2236). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,236
edbeeching
2024-10-24T06:39:45
Hi @sergiopaniego, thanks for implementing this. Could you run `make precommit` to format the code so the quality tests pass (you may have to `pip install pre-commit`)? We are discussing internally how feasible it is to harmonize this script with the other VLM training scripts; I will let you know when we have a conclusion.
2,236
sergiopaniego
2024-10-30T09:12:56
Updated! Any updates on the harmonization discussion? I’m happy to make any modifications needed! 😊
2,236
qgallouedec
2024-10-18T17:18:01
This operation replaces tokens outside the attention mask with token 0. It has no influence on the model output within the attention mask:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pad_token_id = tokenizer.pad_token_id

input_ids = torch.tensor([[pad_token_id, pad_token_id, 1, 2, 3, 4, 5, pad_token_id]])
attention_mask = input_ids != pad_token_id  # [[False, False, True, True, True, True, True, False]]
position_ids = attention_mask.cumsum(1) - attention_mask.long()  # [[0, 0, 0, 1, 2, 3, 4, 5]]
output_wo_mask_fill = model(input_ids=input_ids, attention_mask=attention_mask, position_ids=position_ids)

input_ids = torch.masked_fill(input_ids, ~attention_mask, 0)  # [[0, 0, 1, 2, 3, 4, 5, 0]]
output_w_mask_fill = model(input_ids=input_ids, attention_mask=attention_mask, position_ids=position_ids)

print(torch.mean(torch.abs(output_wo_mask_fill.logits - output_w_mask_fill.logits), dim=-1))
# [[0.8371, 0.8371, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 3.6457]]
```

This operation is not absolutely necessary, since invalid logits are then masked: https://github.com/huggingface/trl/blob/a67f2143c38d6520be8735463ce715ad5c281db8/trl/trainer/rloo_trainer.py#L413-L415
2,235
Chios-C
2024-10-19T05:46:57
Thanks for your great response.
2,235
HuggingFaceDocBuilderDev
2024-10-15T10:04:10
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2233). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,233
qgallouedec
2024-10-15T08:59:33
Thanks again @DhruvKadam-git. Can you update your branch?
2,232
DhruvKadam-git
2024-10-17T07:36:04
I have updated my branch
2,232
HuggingFaceDocBuilderDev
2024-10-18T17:26:18
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2232). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,232
qgallouedec
2024-10-18T17:37:28
LGTM now!
2,232
HuggingFaceDocBuilderDev
2024-10-15T08:00:00
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2231). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,231
qgallouedec
2024-10-15T05:46:49
Thanks @sergiopaniego !
2,230
kashif
2024-10-15T07:52:08
@wenxindongwork I suspect we will need our own `prediction_step` method, since we use our own data collator instead of the default one. The tests didn't catch this bug because the `eval_steps` in the tests were greater than `max_steps`, so the evaluation never ran...
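For illustration, a hypothetical minimal override (not the actual TRL fix; the class name is made up) could look like this, following the standard `transformers.Trainer` contract for `prediction_step`:

```python
import torch
from transformers import Trainer


class CustomCollatorTrainer(Trainer):
    def prediction_step(self, model, inputs, prediction_loss_only, ignore_keys=None):
        # Evaluate with our own collated inputs instead of assuming the
        # default collator's column layout; return (loss, logits, labels).
        with torch.no_grad():
            loss = self.compute_loss(model, inputs)
        return (loss, None, None)
```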
2,228
qgallouedec
2024-10-14T12:11:16
Using an iterable dataset might be better suited. If the way you update the dataset depends on the results, you'll probably need to set up a callback as well; see the sketch below.
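A minimal sketch of that idea (`make_new_example` is a placeholder, not TRL API):

```python
from datasets import IterableDataset


def make_new_example():
    # Placeholder: produce a fresh training example, possibly depending on
    # the latest training results.
    return {"prompt": "hello", "completion": "hi"}


def sample_generator():
    # Stream examples indefinitely so new samples can be produced on the fly.
    while True:
        yield make_new_example()


train_dataset = IterableDataset.from_generator(sample_generator)
```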
2,227
qgallouedec
2024-10-12T08:56:34
You're referring to the dev version of the doc (`main`) while you have v0.11.3 installed. In general, you should either:

- Use the doc associated with your version: https://huggingface.co/docs/trl/v0.11.3/en/dpo_trainer, or
- Install the dev version: `pip install git+https://github.com/huggingface/trl`

Regarding the example code, if you decide to keep v0.11, then make the following modification:

```diff
- trainer = DPOTrainer(model=model, args=training_args, processing_class=tokenizer, train_dataset=preference_example)
+ trainer = DPOTrainer(model=model, args=training_args, tokenizer=tokenizer, train_dataset=preference_example)
```
2,226
qgallouedec
2024-10-12T08:57:57
Duplicate of #2207 and #2218.
2,226
zhang-tuo-pdf
2024-10-12T16:43:02
Thank you so much! I made the changes based on your suggestions and kept using v0.11. However, I got another error when passing raw text with the explicit prompt format into the DPO trainer. The error says "'dict' object has no attribute 'map'". My code is below:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import (
    DPOConfig,
    DPOScriptArguments,
    DPOTrainer,
    ModelConfig,
    TrlParser,
    get_kbit_device_map,
    get_peft_config,
    get_quantization_config,
)


def dpo_training():
    model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct", cache_dir='/vault/ultraz/llm_models')
    tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct", cache_dir='/vault/ultraz/llm_models')
    preference_example = {
        "prompt": [
            "hello",
            "how are you",
            "What is your name?",
            "What is your name?",
            "Which is the best programming language?",
            "Which is the best programming language?",
            "Which is the best programming language?",
        ],
        "chosen": [
            "hi nice to meet you",
            "I am fine",
            "My name is Mary",
            "My name is Mary",
            "Python",
            "Python",
            "Java",
        ],
        "rejected": [
            "leave me alone",
            "I am not fine",
            "Whats it to you?",
            "I dont have a name",
            "Javascript",
            "C++",
            "C++",
        ],
    }
    training_args = DPOConfig(output_dir="Qwen2-0.5B-DPO", logging_steps=10)
    trainer = DPOTrainer(model=model, args=training_args, tokenizer=tokenizer, train_dataset=preference_example)
    trainer.train()


if __name__ == "__main__":
    dpo_training()
```

And then the error message is below:

```
The following values were not passed to `accelerate launch` and had defaults used instead:
        `--num_processes` was set to a value of `1`
        `--num_machines` was set to a value of `1`
        `--mixed_precision` was set to a value of `'no'`
        `--dynamo_backend` was set to a value of `'no'`
To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
/home/ultraz/.conda/envs/pretext/lib/python3.10/site-packages/trl/trainer/dpo_trainer.py:660: UserWarning: `max_length` is not set in the DPOConfig's init it will default to `512` by default, but you should do it yourself in the future.
  warnings.warn(
/home/ultraz/.conda/envs/pretext/lib/python3.10/site-packages/trl/trainer/dpo_trainer.py:673: UserWarning: `max_prompt_length` is not set in the DPOConfig's init it will default to `128` by default, but you should do it yourself in the future.
  warnings.warn(
/home/ultraz/.conda/envs/pretext/lib/python3.10/site-packages/trl/trainer/dpo_trainer.py:708: UserWarning: When using DPODataCollatorWithPadding, you should set `remove_unused_columns=False` in your TrainingArguments we have set it for you, but you should do it yourself in the future.
  warnings.warn(
Traceback (most recent call last):
  File "/vault/ultraz/DPO_FL/dpo_trainer.py", line 53, in <module>
    dpo_training()
  File "/vault/ultraz/DPO_FL/dpo_trainer.py", line 49, in dpo_training
    trainer = DPOTrainer(model=model, args=training_args, tokenizer=tokenizer, train_dataset=preference_example)
  File "/home/ultraz/.conda/envs/pretext/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py", line 101, in inner_f
    return f(*args, **kwargs)
  File "/home/ultraz/.conda/envs/pretext/lib/python3.10/site-packages/trl/trainer/dpo_trainer.py", line 804, in __init__
    train_dataset = train_dataset.map(
AttributeError: 'dict' object has no attribute 'map'
Traceback (most recent call last):
  File "/home/ultraz/.conda/envs/pretext/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/home/ultraz/.conda/envs/pretext/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 46, in main
    args.func(args)
  File "/home/ultraz/.conda/envs/pretext/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1057, in launch_command
    simple_launcher(args)
  File "/home/ultraz/.conda/envs/pretext/lib/python3.10/site-packages/accelerate/commands/launch.py", line 673, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/ultraz/.conda/envs/pretext/bin/python', 'dpo_trainer.py']' returned non-zero exit status 1.
```
2,226
qgallouedec
2024-10-12T16:48:44
This is another thing; please open a separate issue next time. `preference_example` is a dict, but `train_dataset` is expected to be a `datasets.Dataset`. You need to convert it into a dataset via `datasets.Dataset.from_dict(preference_example)`.
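A minimal sketch of that conversion (using a trimmed-down version of the dict from the previous comment):

```python
from datasets import Dataset

preference_example = {
    "prompt": ["hello"],
    "chosen": ["hi nice to meet you"],
    "rejected": ["leave me alone"],
}
train_dataset = Dataset.from_dict(preference_example)  # now has a .map method
```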
2,226
qgallouedec
2024-10-12T09:20:42
Good catch @Ben-Schneider-code, thanks for reporting. I'd like to take this opportunity to be a little more specific about what our precommit does. Here's a suggestion:

````md
TRL relies on `ruff` for maintaining consistent code formatting across its source files. Before submitting any PR, you should apply automatic style corrections and run code verification checks.

We provide a `precommit` target in the `Makefile` that simplifies this process by running all required checks and optimizations on only the files modified by your PR.

To apply these checks and corrections in one step, use:

```bash
$ make precommit
```

This command runs the following:
- Executes `pre-commit` hooks to automatically fix style issues with `ruff` and other tools.
- Runs additional scripts such as adding copyright information.

If you prefer to apply the style corrections separately or review them individually, the `pre-commit` hook will handle the formatting for the files in question.
````
2,225
Ben-Schneider-code
2024-10-12T21:52:07
@qgallouedec done 👍
2,225
qgallouedec
2024-10-13T07:21:45
Thanks a lot @Ben-Schneider-code!
2,225
Ben-Schneider-code
2024-10-30T02:58:59
Hi @qgallouedec! I updated the branch to align with main. Please review when you get a chance. Also, `make precommit` still doesn't work for me; I had to run the ruff checks manually:

```
pre-commit run --all-files
make: pre-commit: No such file or directory
make: *** [Makefile:18: precommit] Error 127
```
2,224
qgallouedec
2024-10-11T16:09:21
> The trainers on [TRL docs from the website](https://huggingface.co/docs/trl/en/dataset_formats#which-dataset-format-to-use) have links attached, but [the markdown file in the repo](https://github.com/huggingface/trl/blob/main/docs/source/dataset_formats.mdx) didn't contain any of the links. So, I wasn't sure if I should add the [`GKDTrainer`](https://huggingface.co/docs/trl/v0.11.3/en/gkd_trainer#trl.GKDTrainer) docs link to the table, please let me know if I need to add it to this PR.

When you write ```[`GKDTrainer`]```, the link is automatically created, so there is no need to add it.
2,222
qgallouedec
2024-10-11T19:51:04
LGTM, thanks @August-murr!
2,222
qgallouedec
2024-10-11T13:14:33
Looks good!!
2,221
kashif
2024-10-11T10:32:16
Thanks @mst272! Can you also kindly add these options to the docstrings and the documentation of `GKDTrainer`?
2,220
mst272
2024-10-11T15:47:09
Hi @kashif, I've added these to the docstrings and the documentation.
2,220
bjb19
2024-10-11T03:55:32
Updating as I have found the source of the issue:

1) How the dataset is being processed in the example.
2) It looks like there were changes to the constructor class that are in the docs but not in the most recent version.

I might put in a PR this weekend to fix this.
2,218
qgallouedec
2024-10-11T06:09:08
You're probably using an example for the dev version (0.12.0dev) while having the latest released version (0.11.3) installed. Either use the latest version example, or install the dev version.
2,218
qgallouedec
2024-10-11T06:11:28
Duplicate of #2207.
2,218
nivibilla
2024-10-10T22:04:12
Follow-up from #2215.
2,217
kashif
2024-10-11T09:39:22
@nivibilla what are the keys in your dataset? Currently the data collator also checks for a `prompt` key to extract the prompts only: https://github.com/huggingface/trl/blob/main/trl/trainer/utils.py#L265
2,217
nivibilla
2024-10-11T10:13:07
![image](https://github.com/user-attachments/assets/6e1e75c4-ea7d-4265-b268-cb448c80e875)

I just have the prompt column with the name `prompt`.
2,217
kashif
2024-10-24T08:44:56
Any update on this @nivibilla? I suspect it's a data issue.
2,217
nivibilla
2024-10-10T19:42:40
Actually I'm stupid. I figured it out while I was typing the issue. I should be looking at the vocab size, not the tokenizer length. https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/aa8e72537993ba99e69dfaafa59ed015b17504d1/config.json#L26
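A quick way to see the mismatch (a minimal sketch; the key point is that `config.vocab_size` counts the padded embedding rows, which can exceed the number of tokens the tokenizer actually defines):

```python
from transformers import AutoConfig, AutoTokenizer

model_id = "Qwen/Qwen2.5-3B-Instruct"
config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
print(config.vocab_size, len(tokenizer))  # the two numbers can differ
```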
2,215
nivibilla
2024-10-10T19:43:25
Is it worth adding a check in the GKD trainer for this param so this error is more readable for others?
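A hypothetical sketch of such a check (illustrative only, not actual TRL code):

```python
from transformers import AutoConfig


def check_gkd_vocab_compatibility(student_id: str, teacher_id: str) -> None:
    """Fail early with a readable message if student/teacher vocab sizes differ."""
    student_vocab = AutoConfig.from_pretrained(student_id).vocab_size
    teacher_vocab = AutoConfig.from_pretrained(teacher_id).vocab_size
    if student_vocab != teacher_vocab:
        raise ValueError(
            f"GKD requires matching vocab sizes: student has {student_vocab}, "
            f"teacher has {teacher_vocab}."
        )
```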
2,215
nivibilla
2024-10-10T19:46:13
Llama 3.1 70B and Llama 3.2 1B seem to have the same vocab size, so I will test with that. It will probably work.
2,215
qgallouedec
2024-10-10T15:32:44
I think you just need to pass this arg.

```python
from trl import SFTConfig, TrlParser

if __name__ == "__main__":
    parser = TrlParser(SFTConfig)
    training_args = parser.parse_args_and_config()
    print("✅")
```

Fails if you don't specify it:

```
$ python 2213.py
usage: 2213.py [-h] --output_dir OUTPUT_DIR [--overwrite_output_dir [OVERWRITE_OUTPUT_DIR]] [--do_train [DO_TRAIN]] [--do_eval [DO_EVAL]] [--do_predict [DO_PREDICT]]
...
2213.py: error: the following arguments are required: --output_dir
```

Works if you do:

```
$ python 2213.py --output_dir my_output_dir
✅
```
2,213
qgallouedec
2024-10-10T16:16:34
When using a notebook, instead of

```python
parser = TrlParser((AriaSFTScriptArguments, SFTConfig, AriaModelConfig))
sft_script_args, training_args, model_config = parser.parse_args_and_config()
```

use

```python
sft_script_args = AriaSFTScriptArguments()
training_args = SFTConfig(output_dir="./aria_ft")
model_config = AriaModelConfig()
```
2,213
himanshushukla12
2024-10-10T09:57:15
@qgallouedec please consider the PR [check it here](https://github.com/huggingface/trl/compare/main...himanshushukla12:trl:main?expand=1)
2,212
qgallouedec
2024-10-11T09:09:58
This issue occurs before the training starts, right? In my setup everything runs smoothly:

```
$ python examples/scripts/reward_modeling.py \
>     --model_name_or_path Qwen/Qwen2-0.5B-Instruct \
>     --dataset_name trl-lib/ultrafeedback_binarized \
>     --output_dir Qwen2-0.5B-Reward-LoRA \
>     --per_device_train_batch_size 8 \
>     --num_train_epochs 1 \
>     --gradient_checkpointing True \
>     --learning_rate 1.0e-4 \
>     --logging_steps 25 \
>     --eval_strategy steps \
>     --eval_steps 50 \
>     --max_length 2048 \
>     --use_peft \
>     --lora_r 32 \
>     --lora_alpha 16
[2024-10-11 09:08:16,210] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Some weights of Qwen2ForSequenceClassification were not initialized from the model checkpoint at Qwen/Qwen2-0.5B-Instruct and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
/fsx/qgallouedec/trl/examples/scripts/reward_modeling.py:99: UserWarning: You are using a `task_type` that is different than `SEQ_CLS` for PEFT. This will lead to silent bugs Make sure to pass --lora_task_type SEQ_CLS when using this script with PEFT.
  warnings.warn(
wandb: WARNING The `run_name` is currently set to the same value as `TrainingArguments.output_dir`. If this was not intended, please specify a different run name by setting the `TrainingArguments.run_name` parameter.
wandb: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information.
wandb: Currently logged in as: qgallouedec (huggingface). Use `wandb login --relogin` to force relogin
wandb: Tracking run with wandb version 0.18.0
wandb: Run data is saved locally in /fsx/qgallouedec/trl/wandb/run-20241011_090830-zp3efu8k
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run Qwen2-0.5B-Reward-LoRA
wandb: ⭐️ View project at https://wandb.ai/huggingface/huggingface
wandb: 🚀 View run at https://wandb.ai/huggingface/huggingface/runs/zp3efu8k
  0%| | 0/3875 [00:00<?, ?it/s]You're using a Qwen2TokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.11/site-packages/torch/utils/checkpoint.py:1399: FutureWarning: `torch.cpu.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cpu', args...)` instead.
  with device_autocast_ctx, torch.cpu.amp.autocast(**cpu_autocast_kwargs), recompute_context:  # type: ignore[attr-defined]
Could not estimate the number of tokens of the input, floating-point operations will not be computed
{'loss': 0.8947, 'grad_norm': 3.841621160507202, 'learning_rate': 9.935483870967742e-05, 'epoch': 0.01}
  1%|█▍ | 36/3875 [00:54<1:48:03, 1.69s/it]
```

Our system info seems close:

```
- Platform: Linux-5.15.0-1048-aws-x86_64-with-glibc2.31
- Python version: 3.11.9
- PyTorch version: 2.4.1
- CUDA device(s): NVIDIA H100 80GB HBM3, NVIDIA H100 80GB HBM3
- Transformers version: 4.46.0.dev0
- Accelerate version: 1.0.0
- Accelerate config: not found
- Datasets version: 3.0.0
- HF Hub version: 0.24.7
- TRL version: 0.12.0.dev0+45129fc
- bitsandbytes version: 0.41.1
- DeepSpeed version: 0.15.1
- Diffusers version: 0.30.3
- Liger-Kernel version: 0.3.0
- LLM-Blender version: 0.0.2
- OpenAI version: 1.46.0
- PEFT version: 0.13.0
```

You have 2 GPUs, right? Are your 2 GPUs the same?
2,212
himanshushukla12
2024-10-11T09:51:08
> You have 2 GPUs, right? Are your 2 GPUs the same?

Yes, I don't know why this weird thing is happening... 😭😭😭
2,212
qgallouedec
2024-10-10T09:46:10
Currently, SFT supports VLMs; see the examples.
2,211
MonolithFoundation
2024-10-30T03:46:37
How about DPO support for MLLMs? I have some issues modifying it for the latest TRL.
2,211
HuggingFaceDocBuilderDev
2024-10-14T13:44:41
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2209). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,209
qgallouedec
2024-10-15T09:06:02
## Regression report

I ran regression tests to ensure we don't break our DPO.

### Scenarios tested

The following scenarios were assessed for potential impact by recent changes:

- Encoder-decoder model
- Decoder-only
- Precompute ref
- Auxiliary loss
- Vision models

### Dataset Selection

As discussed earlier, the new and old (`main`) implementations are not equivalent in cases involving:

- Merging of prompt and completion leading to token merging
- Truncation needed

To avoid these cases, I used a **conversational** dataset with **short** content: `trl-lib/ultrafeedback_binarized`. I applied the following truncation preprocessing to limit sequence length:

```python
def truncate(example):
    return {
        "prompt": [{"role": "user", "content": example["chosen"][0]["content"][:100]}],
        "chosen": [{"role": "assistant", "content": example["chosen"][1]["content"][:100]}],
        "rejected": [{"role": "assistant", "content": example["rejected"][1]["content"][:100]}],
    }

dataset = dataset.map(truncate, desc="Truncate examples")
```

### Expected Changes

Differences in log probabilities (logps) are expected due to initial miscalculations, as mentioned in my previous post.

## Encoder-decoder

For this one I needed a custom script:

```python
# dpo_encdec.py
import torch
from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from trl import (
    DPOConfig,
    DPOScriptArguments,
    DPOTrainer,
    ModelConfig,
    TrlParser,
    get_kbit_device_map,
    get_peft_config,
    get_quantization_config,
)
from trl.trainer.utils import SIMPLE_CHAT_TEMPLATE

if __name__ == "__main__":
    parser = TrlParser((DPOScriptArguments, DPOConfig, ModelConfig))
    script_args, training_args, model_config = parser.parse_args_and_config()

    torch_dtype = (
        model_config.torch_dtype
        if model_config.torch_dtype in ["auto", None]
        else getattr(torch, model_config.torch_dtype)
    )
    quantization_config = get_quantization_config(model_config)
    model_kwargs = dict(
        revision=model_config.model_revision,
        attn_implementation=model_config.attn_implementation,
        torch_dtype=torch_dtype,
        use_cache=False if training_args.gradient_checkpointing else True,
        device_map=get_kbit_device_map() if quantization_config is not None else None,
        quantization_config=quantization_config,
    )
    model = AutoModelForSeq2SeqLM.from_pretrained(
        model_config.model_name_or_path, trust_remote_code=model_config.trust_remote_code, **model_kwargs
    )
    model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-v1_1-small")
    peft_config = get_peft_config(model_config)
    if peft_config is None:
        ref_model = AutoModelForSeq2SeqLM.from_pretrained(
            model_config.model_name_or_path, trust_remote_code=model_config.trust_remote_code, **model_kwargs
        )
    else:
        ref_model = None
    tokenizer = AutoTokenizer.from_pretrained(
        model_config.model_name_or_path, trust_remote_code=model_config.trust_remote_code
    )
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    if tokenizer.chat_template is None:
        tokenizer.chat_template = SIMPLE_CHAT_TEMPLATE

    dataset = load_dataset(script_args.dataset_name)

    def truncate(example):
        return {
            "prompt": [{"role": "user", "content": example["chosen"][0]["content"][:100]}],
            "chosen": [{"role": "assistant", "content": example["chosen"][1]["content"][:100]}],
            "rejected": [{"role": "assistant", "content": example["rejected"][1]["content"][:100]}],
        }

    dataset = dataset.map(truncate, desc="Truncate examples")

    trainer = DPOTrainer(
        model,
        ref_model,
        args=training_args,
        train_dataset=dataset[script_args.dataset_train_split],
        eval_dataset=dataset[script_args.dataset_test_split],
        processing_class=tokenizer,
        peft_config=peft_config,
    )

    trainer.train()

    metrics = trainer.evaluate()
    trainer.log_metrics("eval", metrics)
    trainer.save_metrics("eval", metrics)

    # Save and push to hub
    trainer.save_model(training_args.output_dir)
    if training_args.push_to_hub:
        trainer.push_to_hub(dataset_name=script_args.dataset_name)
```

```
# 8 GPUs
accelerate launch dpo_encdec.py \
    --dataset_name trl-lib/ultrafeedback_binarized \
    --model_name_or_path google/t5-v1_1-small \
    --learning_rate 5.0e-7 \
    --num_train_epochs 1 \
    --gradient_checkpointing \
    --logging_steps 10 \
    --eval_strategy steps \
    --eval_steps 100 \
    --output_dir t5-v1_1-DPO-main \
    --no_remove_unused_columns
```

<img width="1303" alt="Screenshot 2024-10-16 at 18 06 03" src="https://github.com/user-attachments/assets/9e293e60-f5d9-43ba-aa33-8f294b270fb0">

## Decoder-only

```
# 8 GPUs
accelerate launch examples/scripts/dpo.py \
    --dataset_name trl-lib/ultrafeedback_binarized \
    --model_name_or_path Qwen/Qwen2-0.5B-Instruct \
    --learning_rate 5.0e-7 \
    --num_train_epochs 1 \
    --gradient_checkpointing \
    --logging_steps 10 \
    --eval_strategy steps \
    --eval_steps 100 \
    --output_dir Qwen2-0.5B-DPO-main \
    --no_remove_unused_columns
```

<img width="1303" alt="Screenshot 2024-10-16 at 17 52 25" src="https://github.com/user-attachments/assets/db0e3f6d-57df-4442-894b-5600e4a9cce0">

### Comment

Not sure exactly why the chosen and rejected don't match, but the margin still seems to be very close.

## Precompute reference

```
# 8 GPUs
accelerate launch examples/scripts/dpo.py \
    --dataset_name trl-lib/ultrafeedback_binarized \
    --model_name_or_path Qwen/Qwen2-0.5B-Instruct \
    --learning_rate 5.0e-7 \
    --num_train_epochs 1 \
    --gradient_checkpointing \
    --logging_steps 10 \
    --eval_strategy steps \
    --eval_steps 100 \
    --output_dir Qwen2-0.5B-DPO-main \
    --no_remove_unused_columns \
    --precompute_ref_log_probs
```

<img width="1303" alt="Screenshot 2024-10-16 at 18 29 21" src="https://github.com/user-attachments/assets/3e34789e-3862-4b06-8d5b-a847f061f049">

### Comment

The curves precisely match their corresponding run without `--precompute_ref_log_probs`.

## Auxiliary loss

Modify the example script and add

```python
model.config.output_router_logits = True
```

```
accelerate launch --config_file=examples/accelerate_configs/deepspeed_zero3.yaml examples/scripts/dpo.py \
    --dataset_name trl-lib/ultrafeedback_binarized \
    --model_name_or_path mistralai/Mixtral-8x7B-v0.1 \
    --learning_rate 5.0e-7 \
    --num_train_epochs 1 \
    --gradient_checkpointing \
    --logging_steps 10 \
    --eval_strategy steps \
    --eval_steps 100 \
    --output_dir Qwen2-0.5B-DPO-2209 \
    --gradient_checkpointing \
    --max_length 256 \
    --use_peft \
    --bf16
```

<img width="1195" alt="Screenshot 2024-10-17 at 17 07 18" src="https://github.com/user-attachments/assets/161971bf-99ac-41a6-9847-6c81acaf602a">

### Comment

Not sure if the training helped a lot, but at least you have consistent results between main and #2209. We have a new `aux_loss` plot!

## Vision model
2,209
qgallouedec
2024-10-16T16:42:07
I still have 2 regressions that I'd like to run, but you can already take a look. I'd also like to check the performance difference related to the fix for the "Wrong truncation logic".
2,209
qgallouedec
2024-10-17T08:34:29
Trying to fix the CI. It's annoying because it fails without logs, and I can't reproduce it locally. Sorry for the numerous commits this implies.
2,209
qgallouedec
2024-10-17T10:33:47
> Regarding the difference in the chosen/rejected rewards of your regression tests, have you looked at the impact on downstream evals like IFEval / AlpacaEval / MixEval? I can run those for you if you have the checkpoints handy and then we can be pretty sure it's fine

Nice idea, I'll send you the checkpoints!
2,209
qgallouedec
2024-10-17T20:37:17
@lewtun Here is one:

- https://huggingface.co/qgallouedec/Qwen2.5-7B-DPO-2209
- https://huggingface.co/qgallouedec/Qwen2.5-7B-DPO-main

Trained with

```
accelerate launch --config_file=examples/accelerate_configs/deepspeed_zero2.yaml examples/scripts/dpo.py \
    --dataset_name trl-lib/ultrafeedback_binarized \
    --model_name_or_path Qwen/Qwen2.5-7B-Instruct \
    --learning_rate 5.0e-7 \
    --num_train_epochs 1 \
    --gradient_checkpointing \
    --logging_steps 10 \
    --eval_strategy steps \
    --eval_steps 100 \
    --output_dir Qwen2.5-7B-DPO-2209 \
    --gradient_checkpointing \
    --max_length 512 \
    --use_peft \
    --bf16 \
    --push_to_hub
```

Another data point for the regression test:

<img width="1128" alt="Screenshot 2024-10-18 at 00 10 32" src="https://github.com/user-attachments/assets/157212bd-d82e-4fc5-9d85-cbc628fbcfa0">
2,209
qgallouedec
2024-10-21T09:52:20
## IFEval

The new implementation seems to improve results.

| Model | inst_level_loose_acc | inst_level_strict_acc | prompt_level_loose_acc | prompt_level_strict_acc |
| --- | --- | --- | --- | --- |
| [Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) | 0.7122 | 0.6631 | 0.6026 ± 0.0211 | 0.5416 ± 0.0214 |
| [Qwen2.5-7B-DPO-main](https://huggingface.co/qgallouedec/Qwen2.5-7B-DPO-main) | 0.7182 | 0.6751 | 0.6155 ± 0.0209 | 0.5693 ± 0.0213 |
| [Qwen2.5-7B-DPO-2209](https://huggingface.co/qgallouedec/Qwen2.5-7B-DPO-2209) | 0.7326 | 0.6775 | 0.6303 ± 0.0208 | 0.5656 ± 0.0213 |
2,209
HuggingFaceDocBuilderDev
2024-10-09T15:52:31
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2208). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,208
qgallouedec
2024-10-09T15:56:10
The new curves are significantly smoother compared to before.

![Screenshot 2024-10-09 at 17 52 10](https://github.com/user-attachments/assets/fb441db4-2df4-4f7a-97ac-da37fec6f2d9)

I believe this is due to padding completions without applying a loss mask, causing the loss to be calculated over the entire sequence, including the padding tokens: https://github.com/huggingface/trl/blob/7e5924d17ebf7036f03091d60bde15e2367e7fe6/trl/trainer/dpo_trainer.py#L278-L283
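For context, a minimal sketch (not TRL code) of the usual fix: mask padded positions out of the loss by setting their labels to `-100`, which cross-entropy ignores.

```python
import torch
import torch.nn.functional as F

vocab_size = 32
logits = torch.randn(1, 6, vocab_size)               # (batch, seq, vocab)
labels = torch.tensor([[5, 8, 2, 0, 0, 0]])          # trailing 0s are pad tokens
attention_mask = torch.tensor([[1, 1, 1, 0, 0, 0]])

# Padding positions contribute nothing to the loss once set to -100.
labels = labels.masked_fill(attention_mask == 0, -100)
loss = F.cross_entropy(logits.view(-1, vocab_size), labels.view(-1), ignore_index=-100)
```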
2,208
qgallouedec
2024-10-09T14:09:43
Thanks for reporting. You need to use the dev transformers version:

```
pip install git+https://github.com/huggingface/transformers.git
```
2,207
himanshushukla12
2024-10-10T04:51:18
Now I get this error:

```
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
2,207
qgallouedec
2024-10-10T09:46:59
Probably not related. Can you open another issue for it?
2,207
himanshushukla12
2024-10-10T16:46:36
@qgallouedec Please review my PR, I'm too excited...
2,206
qgallouedec
2024-10-10T17:08:09
Hey, not having any issue with `trl chat` and 2 GPUs. Can you double check?
2,206
himanshushukla12
2024-10-10T17:10:40
> Hey, not having any issue with `trl chat` and 2 GPUs. Can you double check?

I tried with all the latest dependencies but faced this issue.
2,206
qgallouedec
2024-10-10T17:11:41
Can you share your system info? (`trl env`)
2,206
himanshushukla12
2024-10-10T17:13:42
> Can you share your system info? (`trl env`)

- Platform: Linux-6.8.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.10.11
- PyTorch version: 2.4.1
- CUDA device: NVIDIA RTX 6000 Ada Generation
- Transformers version: 4.46.0.dev0
- Accelerate version: 1.0.0
- Accelerate config: not found
- Datasets version: 3.0.1
- HF Hub version: 0.25.2
- TRL version: 0.12.0.dev0
- bitsandbytes version: not installed
- DeepSpeed version: not installed
- Diffusers version: not installed
- Liger-Kernel version: not installed
- LLM-Blender version: not installed
- OpenAI version: not installed
- PEFT version: 0.13.1
2,206
himanshushukla12
2024-10-10T17:21:53
This is the error I got:

```
trl chat --model_name_or_path /home/trl/models/minimal/ppo/checkpoint-157
Traceback (most recent call last):
  File "/home/trl/venvTRL/lib/python3.10/site-packages/transformers/utils/hub.py", line 403, in cached_file
    resolved_file = hf_hub_download(
  File "/home/trl/venvTRL/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py", line 101, in inner_f
    return f(*args, **kwargs)
  File "/home/trl/venvTRL/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 106, in _inner_fn
    validate_repo_id(arg_value)
  File "/home/trl/venvTRL/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 154, in validate_repo_id
    raise HFValidationError(
huggingface_hub.errors.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/home/trl/models/minimal/ppo/checkpoint-157'. Use `repo_type` argument if needed.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/trl/venvTRL/lib/python3.10/site-packages/trl/commands/scripts/chat.py", line 368, in <module>
    chat_cli()
  File "/home/trl/venvTRL/lib/python3.10/site-packages/trl/commands/scripts/chat.py", line 275, in chat_cli
    model, tokenizer = load_model_and_tokenizer(args)
  File "/home/trl/venvTRL/lib/python3.10/site-packages/trl/commands/scripts/chat.py", line 213, in load_model_and_tokenizer
    tokenizer = AutoTokenizer.from_pretrained(
  File "/home/trl/venvTRL/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 854, in from_pretrained
    tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
  File "/home/trl/venvTRL/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 686, in get_tokenizer_config
    resolved_config_file = cached_file(
  File "/home/trl/venvTRL/lib/python3.10/site-packages/transformers/utils/hub.py", line 469, in cached_file
    raise EnvironmentError(
OSError: Incorrect path_or_model_id: '/home/trl/models/minimal/ppo/checkpoint-157'. Please provide either the path to a local folder or the repo_id of a model on the Hub.
[17:14:41] TRL - CHAT failed! See the logs above for further details. cli.py:127
Traceback (most recent call last):
  File "/home/trl/venvTRL/lib/python3.10/site-packages/trl/commands/cli.py", line 118, in chat
    subprocess.run(
  File "/home/z004x2xz/local/python3.10/lib/python3.10/subprocess.py", line 526, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['python', '/home/trl/venvTRL/lib/python3.10/site-packages/trl/commands/scripts/chat.py', '--model_name_or_path', '/home/trl/models/minimal/ppo/checkpoint-157']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

<z004x2xz>: Hello
</home/trl/models/minimal/ppo/checkpoint-157>:
Exception in thread Thread-1 (generate):
Traceback (most recent call last):
  File "/home/z004x2xz/local/python3.10/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/home/z004x2xz/local/python3.10/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/trl/venvTRL/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/trl/venvTRL/lib/python3.10/site-packages/transformers/generation/utils.py", line 2173, in generate
    result = self._sample(
  File "/home/trl/venvTRL/lib/python3.10/site-packages/transformers/generation/utils.py", line 3169, in _sample
    next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
Knowided typ fileouth carryingscope bounds Small Soviet //�Dourunionauses
Knowided typ fileouth carryingscope bounds Small Soviet //�Dourunionauses
Traceback (most recent call last):
  File "/home/trl/venvTRL/bin/trl", line 8, in <module>
    sys.exit(main())
  File "/home/trl/venvTRL/lib/python3.10/site-packages/trl/commands/cli.py", line 137, in main
    chat()
  File "/home/trl/venvTRL/lib/python3.10/site-packages/trl/commands/cli.py", line 118, in chat
    subprocess.run(
  File "/home/z004x2xz/local/python3.10/lib/python3.10/subprocess.py", line 505, in run
    stdout, stderr = process.communicate(input, timeout=timeout)
  File "/home/z004x2xz/local/python3.10/lib/python3.10/subprocess.py", line 1146, in communicate
    self.wait()
<z004x2xz>: describe something about new technologies...?
</home/trl/models/minimal/ppo/checkpoint-157>: dealualy----------------�
Exception in thread Thread-1 (generate):
Traceback (most recent call last):
  File "/home/z004x2xz/local/python3.10/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/home/z004x2xz/local/python3.10/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/trl/venvTRL/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/trl/venvTRL/lib/python3.10/site-packages/transformers/generation/utils.py", line 2173, in generate
    result = self._sample(
  File "/home/trl/venvTRL/lib/python3.10/site-packages/transformers/generation/utils.py", line 3169, in _sample
    next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
dealualy----------------�
Committeeoll animalsironflags southwest southwest384
```
2,206
qgallouedec
2024-10-10T18:05:36
I can't really reproduce it since you're using a local model. Do you get the same error with a remote model?
2,206
himanshushukla12
2024-10-10T18:07:40
> I can't really reproduce it since you're using a local model. Do you get the same error with a remote model?

I tried with local models only, not with cloud models. Have you tested my code in parallel?
2,206
qgallouedec
2024-10-10T18:20:00
I did, and everything works as expected
2,206
himanshushukla12
2024-10-10T18:22:45
> I did, and everything works as expected

Please share your `trl env`; this might help.
2,206
qgallouedec
2024-10-10T20:17:13
- Platform: Linux-5.15.0-1048-aws-x86_64-with-glibc2.31
- Python version: 3.11.9
- PyTorch version: 2.4.1
- CUDA device(s): NVIDIA H100 80GB HBM3, NVIDIA H100 80GB HBM3
- Transformers version: 4.46.0.dev0
- Accelerate version: 0.34.2
- Accelerate config: not found
- Datasets version: 3.0.0
- HF Hub version: 0.24.7
- TRL version: 0.12.0.dev0+b3f93f0
- bitsandbytes version: 0.41.1
- DeepSpeed version: 0.15.1
- Diffusers version: 0.30.3
- Liger-Kernel version: 0.3.0
- LLM-Blender version: 0.0.2
- OpenAI version: 1.46.0
- PEFT version: 0.13.0
2,206
qgallouedec
2024-10-10T20:19:39
```
$ trl chat --model_name_or_path meta-llama/Llama-3.2-1B-Instruct
<quentin_gallouedec>: Hello, what's the closest planet?
<meta-llama/Llama-3.2-1B-Instruct>: The closest planet to Earth is Venus. On average, Venus is about 25 million miles (40 million kilometers) away from our planet. Due to a massive tilt in Venus's axis, it permanently rotates in the opposite direction of its orbit around the Sun, resulting in very high levels of solar radiation and extreme greenhouse gases in its atmosphere.
<quentin_gallouedec>:
```
2,206
himanshushukla12
2024-10-11T04:54:43
I tried:

```
$ trl chat --model_name_or_path meta-llama/Llama-3.2-1B-Instruct
```

This is what I got:

```
  File "/home/z004x2xz/WorkAssignedByMatt/trl/venvTRL/bin/trl", line 8, in <module>
    sys.exit(main())
  File "/home/z004x2xz/WorkAssignedByMatt/trl/venvTRL/lib/python3.10/site-packages/trl/commands/cli.py", line 137, in main
    chat()
  File "/home/z004x2xz/WorkAssignedByMatt/trl/venvTRL/lib/python3.10/site-packages/trl/commands/cli.py", line 118, in chat
    subprocess.run(
  File "/home/z004x2xz/local/python3.10/lib/python3.10/subprocess.py", line 505, in run
    stdout, stderr = process.communicate(input, timeout=timeout)
<z004x2xz>: Hello, what's the closest planet?
<meta-llama/Llama-3.2-1B-Instruct>: scar=scCUSRound himself,…ирpackageerceerseREET Soldiersendersittiittoatto_signatureLaugh// /; !;
Exception in thread Thread-1 (generate):
Traceback (most recent call last):
  File "/home/z004x2xz/local/python3.10/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/home/z004x2xz/local/python3.10/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/z004x2xz/WorkAssignedByMatt/trl/venvTRL/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/z004x2xz/WorkAssignedByMatt/trl/venvTRL/lib/python3.10/site-packages/transformers/generation/utils.py", line 2173, in generate
    result = self._sample(
  File "/home/z004x2xz/WorkAssignedByMatt/trl/venvTRL/lib/python3.10/site-packages/transformers/generation/utils.py", line 3133, in _sample
    outputs = self(**model_inputs, return_dict=True)
  File "/home/z004x2xz/WorkAssignedByMatt/trl/venvTRL/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/z004x2xz/WorkAssignedByMatt/trl/venvTRL/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/z004x2xz/WorkAssignedByMatt/trl/venvTRL/lib/python3.10/site-packages/accelerate/hooks.py", line 170, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/home/z004x2xz/WorkAssignedByMatt/trl/venvTRL/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1187, in forward
    outputs = self.model(
  File "/home/z004x2xz/WorkAssignedByMatt/trl/venvTRL/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/z004x2xz/WorkAssignedByMatt/trl/venvTRL/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/z004x2xz/WorkAssignedByMatt/trl/venvTRL/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 914, in forward
    causal_mask = self._update_causal_mask(
  File "/home/z004x2xz/WorkAssignedByMatt/trl/venvTRL/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1003, in _update_causal_mask
    if AttentionMaskConverter._ignore_causal_mask_sdpa(
  File "/home/z004x2xz/WorkAssignedByMatt/trl/venvTRL/lib/python3.10/site-packages/transformers/modeling_attn_mask_utils.py", line 284, in _ignore_causal_mask_sdpa
    elif not is_tracing and torch.all(attention_mask == 1):
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
scar=scCUSRound himself,…ирpackageerceerseREET Soldiersendersittiittoatto_signatureLaugh// /; !; getResponse/response dad momasLEYesor health outоP acidity as capital Bent Ent ch Cancer immaturelublue yielding of
```

By running like this:

```
CUDA_VISIBLE_DEVICES=0 trl chat --model_name_or_path meta-llama/Llama-3.2-1B-Instruct
<z004x2xz>: Hello, what's the closest planet?
<meta-llama/Llama-3.2-1B-Instruct>: The closest planet to the Sun is Mercury. It's a small, rocky planet with a highly elliptical orbit that takes about 88 Earth days to complete. However, if you're asking about other planets, it would be Venus or Mars. Venus is the second planet from the Sun, and Mars is the third. If you're looking for a specific planet, I can try and help you with that. Can you please provide more context or clarify what you're asking about?
```

and by specifying a single device, inference worked and was very fast.
2,206
himanshushukla12
2024-10-09T12:27:16
The issue occurs when we have multiple GPUs in our system. If we use `CUDA_VISIBLE_DEVICES=0`, it works fine.
2,205
HuggingFaceDocBuilderDev
2024-10-09T07:39:04
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2204). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,204
qgallouedec
2024-10-09T08:19:31
You need to enable evaluation, see [the `transformers.TrainingArguments` doc](https://huggingface.co/docs/transformers/en/main_classes/trainer#transformers.TrainingArguments)
2,203
RonanKMcGovern
2024-10-10T11:25:26
Yes, you have `do_eval` commented out. Also, I believe you need:

```python
eval_strategy="steps",
do_eval=True,
eval_steps=0.1,  # for eval every 10% of steps
```

That said, I ran code today with those parameters and I'm not seeing an eval loss... (unless I manually add one to the trainer).
2,203
qgallouedec
2024-10-10T12:12:17
```python
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

dataset = load_dataset("trl-lib/Capybara")
training_args = SFTConfig(output_dir="Qwen/Qwen2.5-0.5B-SFT", eval_steps=100, eval_strategy="steps", logging_steps=10)
trainer = SFTTrainer(
    args=training_args,
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
```

```
[2024-10-10 12:09:46,499] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
wandb: WARNING The `run_name` is currently set to the same value as `TrainingArguments.output_dir`. If this was not intended, please specify a different run name by setting the `TrainingArguments.run_name` parameter.
wandb: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information.
wandb: Currently logged in as: qgallouedec (huggingface). Use `wandb login --relogin` to force relogin
wandb: Tracking run with wandb version 0.18.0
wandb: Run data is saved locally in /fsx/qgallouedec/trl/wandb/run-20241010_120948-fpkni0nj
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run Qwen/Qwen2.5-0.5B-SFT
wandb: ⭐️ View project at https://wandb.ai/huggingface/huggingface
wandb: 🚀 View run at https://wandb.ai/huggingface/huggingface/runs/fpkni0nj
{'loss': 1.7838, 'grad_norm': 4.98770809173584, 'learning_rate': 4.991565452091768e-05, 'epoch': 0.01}
{'loss': 1.6715, 'grad_norm': 5.814300537109375, 'learning_rate': 4.983130904183536e-05, 'epoch': 0.01}
{'loss': 1.4957, 'grad_norm': 4.802212238311768, 'learning_rate': 4.974696356275304e-05, 'epoch': 0.02}
{'loss': 1.6878, 'grad_norm': 5.192160606384277, 'learning_rate': 4.966261808367072e-05, 'epoch': 0.02}
{'loss': 1.6875, 'grad_norm': 4.301702499389648, 'learning_rate': 4.95782726045884e-05, 'epoch': 0.03}
{'loss': 1.6357, 'grad_norm': 4.644138813018799, 'learning_rate': 4.9493927125506076e-05, 'epoch': 0.03}
{'loss': 1.6582, 'grad_norm': 4.883214473724365, 'learning_rate': 4.9409581646423755e-05, 'epoch': 0.04}
{'loss': 1.4887, 'grad_norm': 6.148635387420654, 'learning_rate': 4.9325236167341433e-05, 'epoch': 0.04}
{'loss': 1.5467, 'grad_norm': 4.263922214508057, 'learning_rate': 4.924089068825911e-05, 'epoch': 0.05}
{'loss': 1.6109, 'grad_norm': 3.8504934310913086, 'learning_rate': 4.915654520917679e-05, 'epoch': 0.05}
{'eval_loss': 1.6436591148376465, 'eval_runtime': 5.7794, 'eval_samples_per_second': 34.606, 'eval_steps_per_second': 4.326, 'epoch': 0.05}
{'loss': 1.4516, 'grad_norm': 4.733430862426758, 'learning_rate': 4.907219973009447e-05, 'epoch': 0.06}
{'loss': 1.6327, 'grad_norm': 5.621842384338379, 'learning_rate': 4.898785425101215e-05, 'epoch': 0.06}
{'loss': 1.7306, 'grad_norm': 4.469166278839111, 'learning_rate': 4.890350877192983e-05, 'epoch': 0.07}
{'loss': 1.5902, 'grad_norm': 4.800514221191406, 'learning_rate': 4.881916329284751e-05, 'epoch': 0.07}
{'loss': 1.7242, 'grad_norm': 4.282039165496826, 'learning_rate': 4.8734817813765186e-05, 'epoch': 0.08}
{'loss': 1.5382, 'grad_norm': 4.403784275054932, 'learning_rate': 4.8650472334682865e-05, 'epoch': 0.08}
{'loss': 1.7028, 'grad_norm': 4.006992816925049, 'learning_rate': 4.8566126855600543e-05, 'epoch': 0.09}
{'loss': 1.6581, 'grad_norm': 3.980820655822754, 'learning_rate': 4.848178137651822e-05, 'epoch': 0.09}
{'loss': 1.6722, 'grad_norm': 4.289447784423828, 'learning_rate': 4.83974358974359e-05, 'epoch': 0.1}
{'loss': 1.7755, 'grad_norm': 4.594780445098877, 'learning_rate': 4.831309041835358e-05, 'epoch': 0.1}
{'eval_loss': 1.6424752473831177, 'eval_runtime': 5.7806, 'eval_samples_per_second': 34.598, 'eval_steps_per_second': 4.325, 'epoch': 0.1}
{'loss': 1.5884, 'grad_norm': 4.834494590759277, 'learning_rate': 4.822874493927126e-05, 'epoch': 0.11}
{'loss': 1.7616, 'grad_norm': 4.97723913192749, 'learning_rate': 4.814439946018894e-05, 'epoch': 0.11}
{'loss': 1.6934, 'grad_norm': 4.84063720703125, 'learning_rate': 4.806005398110662e-05, 'epoch': 0.12}
{'loss': 1.6116, 'grad_norm': 4.2668304443359375, 'learning_rate': 4.797570850202429e-05, 'epoch': 0.12}
...
```
2,203
kashif
2024-10-09T06:16:41
Good point @lidh15. I think I had the logits interpolated, and then while converting everything to log-probs I must have moved it around... I think this is a bug at first glance; let me double-check and report back.
2,202
kashif
2024-10-09T07:47:19
Good catch, yes indeed, mixtures and logs are not commutative!
2,202
HuggingFaceDocBuilderDev
2024-10-08T13:54:49
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2201). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,201
kashif
2024-10-08T14:48:15
Great catch @muupan 🙇🏽
2,201
HuggingFaceDocBuilderDev
2024-10-08T12:43:22
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2200). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,200
qgallouedec
2024-10-08T13:52:46
@muupan in my understanding, we should avoid `getattr(model.config, ...)` anywhere outside the init, right?
2,200
muupan
2024-10-08T14:01:03
I think it is safer to avoid `getattr(model.config, ...)` unless we are 100% sure the model is unwrapped.
2,200
muupan
2024-10-08T14:07:01
> It occurred only with deepspeed and it's pretty hard to test. I agree not to add a test for this one, considering that the author tested with the method described in his initial post.

Actually, even when deepspeed is not used, the original code raises an error: the model is wrapped by `DistributedDataParallel`, which does not have a `config` attribute, so `getattr(model.config, ...)` raises an error.
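A minimal sketch of the usual guard (illustrative only, not the code from the linked PR): unwrap the `.module` attribute that both `DistributedDataParallel` and DeepSpeed engines use, before touching `.config`.

```python
def get_config_attr(model, name, default=None):
    # DDP and DeepSpeed wrap the underlying model in a `.module` attribute.
    unwrapped = getattr(model, "module", model)
    config = getattr(unwrapped, "config", None)
    return getattr(config, name, default) if config is not None else default
```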
2,200
kashif
2024-10-08T14:49:13
cc @claralp for your information
2,200
claralp
2024-10-10T08:47:10
@muupan thanks for finding this! This bug was not present when the feature was developed; it was even tested with Deepspeed at that time. I guess there was some change in common model wrapping for data parallelism, which is why it also appears when using DistributedDataParallel. To cite @qgallouedec:

> I've just checked, we have the same problem with BCO, CPO, KTO and ORPO. Do you mind adding the same fix for those? The codebase is almost the same

Do you want to change it there as well @muupan?
2,200
kashif
2024-10-10T08:49:28
@claralp that is done in another PR https://github.com/huggingface/trl/pull/2201
2,200
qgallouedec
2024-10-08T20:36:38
Can you also update the badges with the following:

```html
<p align="center">
    <a href="https://github.com/huggingface/trl/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/github/license/huggingface/trl.svg?color=blue"></a>
    <a href="https://huggingface.co/docs/trl/index"><img alt="Documentation" src="https://img.shields.io/website?url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftrl%2Findex&down_color=red&down_message=offline&up_color=blue&up_message=online"></a>
    <a href="https://github.com/huggingface/trl/releases"><img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/trl.svg"></a>
</p>
```

It will remove these:

<img width="926" alt="Screenshot 2024-10-08 at 22 35 28" src="https://github.com/user-attachments/assets/d12030be-e32c-4ba0-a153-230025804580">
2,199
HuggingFaceDocBuilderDev
2024-10-08T20:38:12
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2199). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,199
HuggingFaceDocBuilderDev
2024-10-08T09:50:40
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2198). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,198
qgallouedec
2024-10-08T09:15:11
Thanks for this detailed report. I can see that you're using deepspeed. Can you share the version? The suggested solution sounds reasonable.

> I can send a PR.

I'd be happy to review it, thanks a lot!
2,197
qgallouedec
2024-10-08T09:19:52
> It is possible that other trainers have the same issue, but I have not checked.

I've just checked, and we have the same problem with BCO, CPO, KTO and ORPO. Do you mind adding the same fix for those? The codebase is almost the same.
2,197
muupan
2024-10-08T09:23:10
> I can see that you're using deepspeed. Can you share the version?

I use deepspeed==0.15.1. I'll send a PR shortly. If my PR to DPOTrainer is OK, I can address the other trainers as well.
2,197
qgallouedec
2024-10-08T08:28:47
You should use the [transformers.EarlyStoppingCallback](https://huggingface.co/docs/transformers/en/main_classes/callback#transformers.EarlyStoppingCallback)
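A minimal sketch of wiring it up with TRL's `SFTTrainer` (model and dataset choices are just for illustration, borrowed from the SFT example elsewhere in these threads); note the callback requires an eval strategy plus `load_best_model_at_end` and `metric_for_best_model`:

```python
from datasets import load_dataset
from transformers import EarlyStoppingCallback
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara")
training_args = SFTConfig(
    output_dir="out",
    eval_strategy="steps",
    eval_steps=100,
    load_best_model_at_end=True,        # required by EarlyStoppingCallback
    metric_for_best_model="eval_loss",  # metric the callback monitors
)
trainer = SFTTrainer(
    args=training_args,
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()
```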
2,196