user: string (lengths 3–28)
created_at: timestamp[us]
body: string (lengths 1–173k)
issue_number: int64 (1–2.57k)
__index_level_0__: int64 (0–8.05k)
qgallouedec
2024-11-28T18:59:36
Closing in favour of #2411
2,402
400
qgallouedec
2024-12-13T22:51:33
You're right, the documentation is wrong. Would you like to contribute by correcting it?
2,400
401
umbilnm
2024-12-25T10:20:41
Hi! I can correct it. Based on the discussion, it seems we could take one of two approaches: 1) Completely remove this mention from the “Best Practices” section 2) Update the text to clarify that truncation (rather than padding) happens by default. Could you let me know which approach is better?
2,400
402
qgallouedec
2024-12-25T10:49:41
Probably option 2. What do you think?
2,400
403
umbilnm
2024-12-25T11:08:13
Ok, also if `max_seq_length` isn't specified the trainer sets it to `min(1024, tokenizer.model_max_length)` (not 2048), so the revised text may look like this: > SFTTrainer truncates sequences by default to the max_seq_length specified. If max_seq_length is not provided, the trainer sets it to the minimum of tokenizer.model_max_length and 1024. Ensure you verify this setting before training to avoid unintended behavior.
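As an illustration, a minimal sketch of setting it explicitly instead of relying on the default (the output directory and length value are placeholders):

```python
from trl import SFTConfig

# Setting max_seq_length explicitly avoids relying on the
# min(tokenizer.model_max_length, 1024) default described above.
training_args = SFTConfig(output_dir="sft-out", max_seq_length=2048)
```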
2,400
404
HuggingFaceDocBuilderDev
2024-11-26T18:49:29
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2399). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,399
405
HuggingFaceDocBuilderDev
2024-11-26T15:15:36
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2398). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,398
406
HuggingFaceDocBuilderDev
2024-11-26T13:44:04
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2397). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,397
407
qgallouedec
2024-11-26T10:35:31
This script has been renamed [`sft_vlm.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/sft_vlm.py) in https://github.com/huggingface/trl/pull/2120
2,396
408
soumyasj
2024-11-26T15:04:41
Thank you! Closing this issue!
2,396
409
HuggingFaceDocBuilderDev
2024-11-26T10:26:49
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2395). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,395
410
HuggingFaceDocBuilderDev
2024-11-25T17:24:49
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2394). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,394
411
HuggingFaceDocBuilderDev
2024-11-25T15:45:42
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2393). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,393
412
HuggingFaceDocBuilderDev
2024-11-25T14:28:21
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2392). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,392
413
qgallouedec
2024-11-25T09:55:22
You don't need to process the data. The trainer does it for you. The `SFTTrainer` expects a dataset with a column named `"text"` (or `"messages"` for conversational data). Use [trl-internal-testing/zen](https://huggingface.co/datasets/trl-internal-testing/zen) as an example. Example for conversational data:

```python
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

dataset = load_dataset("trl-internal-testing/zen", "conversational_language_modeling", split="train")
training_args = SFTConfig(output_dir="Llama-3.2-1B-Instruct-SFT")
trainer = SFTTrainer(
    args=training_args,
    model="meta-llama/Llama-3.2-1B-Instruct",
    train_dataset=dataset,
)
trainer.train()
```
2,390
414
Humauaca
2024-11-26T17:38:35
So if I loaded the model and tokenizer with the class methods AutoModelForCausalLM.from_pretrained and AutoTokenizer.from_pretrained, respectively, I should pass the tokenizer to the SFTTrainer instance as an argument, right?
2,390
415
qgallouedec
2024-11-26T18:04:29
Yes
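To make that concrete, a minimal sketch (the model name and output directory are placeholders; the keyword was `tokenizer` in TRL releases of this era and was later renamed to `processing_class`):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
dataset = load_dataset("trl-internal-testing/zen", "conversational_language_modeling", split="train")

trainer = SFTTrainer(
    model=model,
    args=SFTConfig(output_dir="Llama-3.2-1B-Instruct-SFT"),
    train_dataset=dataset,
    tokenizer=tokenizer,  # `processing_class` in more recent TRL releases
)
trainer.train()
```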
2,390
416
Alex-Mathai-98
2024-12-24T17:24:35
Hi @qgallouedec - just a simple follow-up question. In the conversation format, does the SFT Trainer mask out the loss for the instructions? Or does it compute the loss for both the instructions and the responses? No one seems to know the answer to this online.
2,390
417
qgallouedec
2024-12-24T17:38:01
Not by default; you need to use `DataCollatorForCompletionOnlyLM`. See https://huggingface.co/docs/trl/sft_trainer#train-on-completions-only
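For illustration, a minimal sketch assuming Qwen-style chat markers (the model name, templates, and output directory are placeholders; `instruction_template` is what extends the masking to every user turn in multi-turn data):

```python
from datasets import Dataset
from transformers import AutoTokenizer
from trl import SFTConfig, SFTTrainer, DataCollatorForCompletionOnlyLM

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
dataset = Dataset.from_dict(
    {"text": ["<|im_start|>user\nHi<|im_end|>\n<|im_start|>assistant\nHello!<|im_end|>\n"]}
)

# Tokens outside the assistant turns are ignored in the loss; instruction_template
# re-enables masking after each new user turn in multi-turn conversations.
collator = DataCollatorForCompletionOnlyLM(
    response_template="<|im_start|>assistant\n",
    instruction_template="<|im_start|>user\n",
    tokenizer=tokenizer,
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    args=SFTConfig(output_dir="sft-completions-only", packing=False),  # packing must stay off with this collator
    train_dataset=dataset,
    data_collator=collator,
)
```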
2,390
418
Alex-Mathai-98
2024-12-24T19:30:17
I see, @qgallouedec. Thank you so much for your quick response :bow:. I just want to make sure I understand your response perfectly. I was talking about https://huggingface.co/docs/trl/sft_trainer#dataset-format-support - so even if we format our data in this **conversations** format, HF will tokenize the entire conversation as a sequence of tokens and then perform **simple language modeling** on the entire text. The link https://huggingface.co/docs/trl/sft_trainer#train-on-completions-only you mentioned in your response seems to be used for a use case that is **NOT** conversations, right? In a multi-turn conversation, there will be **multiple** answers - unlike the example mentioned in https://huggingface.co/docs/trl/sft_trainer#train-on-completions-only. In this case, the masking needs to be done for all intermediate instruction text, right? For example, a two-turn conversation - {user, assistant, user, assistant} - would have {Mask On, Mask Off, Mask On, Mask Off}. I guess I will need to do this with a custom data collator?
2,390
419
HuggingFaceDocBuilderDev
2024-11-24T15:17:22
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2389). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,389
420
coder109
2024-11-27T03:25:48
It is OK if you parse the parameters **after** loading DDPOTrainer(). But I'd like to know what causes these two unrelated functions to affect each other.
2,388
421
coder109
2024-11-27T07:08:02
I PROBABLY know the cause. If you have encountered the same problem, please modify the source code of `DDPOTrainer()` like this:

```python
self.accelerator = Accelerator(
    log_with=self.config.log_with,
    mixed_precision=self.config.mixed_precision,
    project_config=accelerator_project_config,
    # we always accumulate gradients across timesteps; we want config.train.gradient_accumulation_steps to be the
    # number of *samples* we accumulate across, so we need to multiply by the number of training timesteps to get
    # the total number of optimizer steps to accumulate across.
    gradient_accumulation_steps=self.config.train_gradient_accumulation_steps * self.num_train_timesteps,
    **self.config.accelerator_kwargs,
)
# Accelerate MOD BEGIN
self.accelerator.state.deepspeed_plugin.deepspeed_config['train_micro_batch_size_per_gpu'] = 4
# Accelerate MOD END
```

It seems that `DDPOTrainer()` cannot properly load the DeepSpeed config from external JSON files.
2,388
422
qgallouedec
2024-11-26T09:50:17
Thanks for reporting it. Would you like to open a PR to fix it?
2,387
423
dawidm
2024-12-28T12:43:38
@kechunFIVE, @qgallouedec, @dame-cell It seems that the current code is inspired by https://iclr-blogposts.github.io/2024/blog/the-n-implementation-details-of-rlhf-with-ppo/, section *General implementation details*, 4.2. The authors tried to recreate results from early OpenAI work, but they say:

> Note that in a more recent codebase https://github.com/openai/summarize-from-feedback, OpenAI does stop sampling when encountering EOS token ([summarize_from_feedback/utils/experiment_helpers.py#L19](https://github.com/openai/summarize-from-feedback/blob/8af822a428c93432aa80ffbe5b065a8f93895669/summarize_from_feedback/utils/experiment_helpers.py#L19)). However in this work we aim to do a 1:1 replication, so we align the setting that could keep sampling even eos_token is encountered

Reading the `PPOTrainer` implementation, it seems that the `stop_token`/`stop_token_id` arguments should control when to stop generation:

* When `stop_token` is set to `eos`, everything after EOS is padded and therefore ignored in calculations (using masks). In that case, continuing to generate after all sequences have reached EOS just wastes time and resources. I've checked that between stopping on EOS and generating until max length, the loss values are almost exact; minimal differences of the order of 1e-8 are caused by the masked calculations.
* When `stop_token_id` is set, sequences get padded after this token, so generation should stop there, not at the model's EOS.
* The cases where `stop_token != None` and `stop_token != 'eos'` are not supported, so they should probably raise an exception.
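For reference, a minimal sketch of how these config fields are set (the output directory is a placeholder; this assumes the `stop_token`/`stop_token_id` fields on `PPOConfig` described above):

```python
from trl import PPOConfig

# With stop_token="eos", everything generated after the policy's EOS token is
# padded out and masked, so it does not contribute to the PPO loss.
config = PPOConfig(output_dir="ppo-out", stop_token="eos")

# Alternatively, pad after an arbitrary token id instead of the model's EOS:
# config = PPOConfig(output_dir="ppo-out", stop_token_id=some_token_id)
```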
2,387
424
Benjoyo
2025-01-15T20:41:07
The option to control the stop token should be added to all online trainer configs imho
2,387
425
HuggingFaceDocBuilderDev
2024-11-22T18:41:28
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2386). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,386
426
qgallouedec
2024-11-22T18:21:38
Thanks for reporting. It likely comes from the chat template. Can you share it?
2,385
427
qgallouedec
2024-11-22T18:24:26
To further explain the error: we expect a chat template that satisfies

```python
formatted_prompt = tokenizer.apply_chat_template(prompt, add_generation_prompt=True, tokenize=False)
formatted_prompt_completion = tokenizer.apply_chat_template(prompt + completion, tokenize=False)
assert formatted_prompt_completion.startswith(formatted_prompt)
```

Example with Qwen:

```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
>>> prompt = [{"role": "user", "content": "Where is Paris?"}]
>>> completion = [{"role": "assistant", "content": "In France."}]
>>> formatted_prompt = tokenizer.apply_chat_template(prompt, add_generation_prompt=True, tokenize=False)
>>> formatted_prompt_completion = tokenizer.apply_chat_template(prompt + completion, tokenize=False)
>>> formatted_prompt
'<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n<|im_start|>user\nWhere is Paris?<|im_end|>\n<|im_start|>assistant\n'
>>> formatted_prompt_completion
'<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n<|im_start|>user\nWhere is Paris?<|im_end|>\n<|im_start|>assistant\nIn France.<|im_end|>\n<|im_start|>assistant\n'
>>> formatted_prompt_completion.startswith(formatted_prompt)
True
```
2,385
428
qgallouedec
2024-11-22T18:34:03
It may come from here in your example:

```diff
ds = ds.map(
    lambda x: {
        "system": [{"role": "user", "content": x["system"]}],
        "prompt": [{"role": "user", "content": x["prompt"]}],
        "chosen": [{"role": "assistant", "content": x["chosen"]}],
-       "rejected": [{"role": "user", "content": x["rejected"]}],
+       "rejected": [{"role": "assistant", "content": x["rejected"]}],
    }
)
```
2,385
429
MohamedAliRashad
2024-11-22T18:47:59
@qgallouedec I am the stupidest person on earth. Thanks a lot!
2,385
430
HuggingFaceDocBuilderDev
2024-11-22T18:09:42
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2384). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,384
431
qgallouedec
2024-11-24T16:07:10
Hi! Thanks for the suggestion. It could be a great addition. I haven't read the paper in detail yet but what you describe sounds closer to KTO than DPO, doesn't it? Do you have an implementation that already works?
2,383
432
AML14
2024-11-22T12:55:31
Update: DPO doesn't even work with a code completion task (i.e., neither the input nor the output includes FIM special tokens) with the base model. As an example, here is the output generated by `Qwen/Qwen2.5-Coder-0.5B` for the following input:

```java
// Input:
protected RouteBuilder createRouteBuilder() throws Exception {
    return new RouteBuilder() {

// Output:
        @Override
        public void configure() throws Exception {
            from("direct:hello")
                .to("mock:hello");
        }
    };
}<|endoftext|>
```

And here is the output of the same model after having applied DPO with about 3000 instances, where the prompt is the input and the chosen/rejected are correct/wrong completions:

```java
// Input:
protected RouteBuilder createRouteBuilder() throws Exception {
    return new RouteBuilder() {

// Output:
        public void configure() throws Exception {
            <|fim_middle|> <|fim_middle|> <|fim_middle|><|endoftext|>
```

The model is completely broken after applying DPO.
2,382
433
yiyepiaoling0715
2024-11-23T10:18:06
> And here is the output of the same model after having applied DPO with about 3000 instances, where the prompt is the input and the chosen/rejected are correct/wrong completions:

Why wouldn't it work with a code completion task? I also do code completion with RL and get some benefit; maybe it doesn't work in your situation because of your training corpus.
2,382
434
qgallouedec
2024-11-23T16:16:27
Is your dataset public? What do the training curves look like?
2,382
435
qgallouedec
2024-11-23T16:22:27
Can you confirm that your effective batch size is 8?
2,382
436
kashif
2024-11-25T12:09:56
@AML14 can you do a quick experiment where you remove the `EOS` token from the `chosen` and `rejected` keys? The `DPOTrainer` by default adds it to the ends of the chosen and rejected `input_ids`.
2,382
437
kashif
2024-11-26T13:06:07
@AML14 so I tried with your data and `beta=0.9`, and the run is here: https://wandb.ai/krasul/huggingface/runs/wkekg3nb?nw=nwuserkrasul The outputs also look fine to me:

```
>>> input_text = """<|fim_prefix|>def quicksort(arr):
...     if len(arr) <= 1:
...         return arr
...     pivot = arr[len(arr) // 2]
...     <|fim_suffix|>
...     middle = [x for x in arr if x == pivot]
...     right = [x for x in arr if x > pivot]
...     return quicksort(left) + middle + quicksort(right)<|fim_middle|>"""
>>> model_inputs = TOKENIZER([input_text], return_tensors="pt").to(device)
>>> generated_ids = MODEL.generate(model_inputs.input_ids, max_new_tokens=512, do_sample=False)[0]
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:None for open-end generation.
>>> output_text = TOKENIZER.decode(generated_ids[len(model_inputs.input_ids[0]):], skip_special_tokens=True)
>>> print(f"Prompt: {input_text}\n\nGenerated text: {output_text}")
Prompt: <|fim_prefix|>def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    <|fim_suffix|>
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort(left) + middle + quicksort(right)<|fim_middle|>

Generated text: left = [x for x in arr if x < pivot]
```
2,382
438
yiyepiaoling0715
2024-12-05T10:00:01
> Is your dataset public? What do the training curves look like?

FIM tasks, aiXCoder.
2,382
439
kashif
2024-12-05T10:01:28
thanks @yiyepiaoling0715
2,382
440
AML14
2024-12-06T16:45:22
Sorry for the late coming back to the discussion, I have a few comments to make: @qgallouedec > Is your dataset public? How does the training curves look like? I'm sharing the new datasets that I've been playing around with in this response. > Can you confirm that your effective batch size is 8? Out of ignorance, what difference does it make? @kashif I tried with your settings, i.e., removing the EOS token from the chosen and rejected, and setting `beta=0.9`. Indeed, the model seems to be better with most inputs, but it still generates nonsense outputs in some cases. And these are cases that **are part of the training dataset**, so I guess the model should be able to understand them. Here I show an example: ```java // Input: <|fim_prefix|>public static String toString(ByteArrayOutputStream os) { return os<|fim_suffix|> }<|fim_middle|> // Output by the model trained with DPO: .toString() + "\n" + " " + Arrays.toString(toStringArray(os)) + "\n" + " " + toStringArray(os) + "\n" + " " + toStringArray(os) + "\n" + " " + toStringArray(os) + "\n" + " " + toStringArray(os) + "\n" + " " + toStringArray(os) + "\n" + " " + toStringArray(os) + "\n" + " " + toStringArray(os) + "\n" + " " + toStringArray(os) + "\n" + " " + toStringArray(os) + "\n" + " " + toStringArray(os) + "\n" + " " + toStringArray(os) + "\n" + " " + toStringArray(os) + "\n" + " " + toStringArray(os) + "\n" + " " + toStringArray(os) + "\n" + " " + toStringArray(os) + "\n" + " " + toStringArray(os) + "\n" + " " + toStringArray(os) + "\n" + " " + toStringArray(os) + "\n" // Output by the pre-trained model without any fine-tuning nor DPO: .toString(); //return os.toString(StandardCharsets.UTF_8);<|endoftext|> ``` I did some further tests with other datasets and hyperparameters. Seems that hyperparameters have a huge impact here, and the default values do not yield good results at all. I think this should at least be documented, i.e., that default hyperparameters set for DPO may not work at all for other tasks not related to natural language and human-related preference optimization, like fill-in-the-mask code completion. Just in case you guys want to do some experiments with what I'm working on right now, I'm sharing four datasets: a training, evaluation, and test set, and the evaluation set translated to the DPO format. I proceeded as follows: - Fine-tune Qwen2.5-Coder-0.5B on the training set. One epoch was enough as I noticed that performance on the eval set didn't improve much afterwards. - Apply DPO to the fine-tuned model on the DPO eval set. - Compare the performance of the fine-tuned model w.r.t. the model optimized with DPO. When using the following hyperparameters: ```shell accelerate launch --config_file acc_config_1.yaml dpo.py \ --dataset_name ./developer_1/developer_masked_methods_eval_dpo.json \ --model_name_or_path ckpts1/checkpoint-730 \ --learning_rate 5.0e-7 \ --beta 0.9 \ --num_train_epochs 10 \ --per_device_train_batch_size 4 \ --eval_strategy no \ --save_strategy steps \ --save_steps 0.1 \ --output_dir ckpts1_dpo \ --no_remove_unused_columns ``` After just one epoch with DPO, performance on the test set drops by about ~10%. This is what I don't understand. Of course, when applying `beta=0.1`, performance is way worse. After much trial and error, I've found that the following hyperparameters work better: `learning_rate=1.0e-7`, `loss_type=ipo`, i.e., even smaller LR and IPO loss. This way, after one epoch, the performance on the test set goes up from 69% to 71.4%. 
But still, after the second epoch on, performance starts to drop again. **Is all this behavior expected? Is DPO not that well-suited to improve performance of transformer models for tasks like this, involving just code and not natural language?** Scripts and datasets: [datasets.zip](https://github.com/user-attachments/files/18041032/datasets.zip) <details> <summary>finetuning.py</summary> ```python import os import pprint import argparse import torch from transformers import AutoTokenizer, AutoModelForCausalLM, DataCollatorForSeq2Seq from datasets import load_dataset from trl import SFTTrainer, SFTConfig def run_training(args, train_data, model, tokenizer): training_args = SFTConfig( output_dir=args.output_dir, per_device_train_batch_size=args.batch_size, max_seq_length=args.max_source_length + args.max_target_length, num_train_epochs=args.epochs, save_strategy="steps", save_steps=0.1, do_train=True, bf16=True ) trainer = SFTTrainer( model=model, args=training_args, train_dataset=train_data, data_collator=DataCollatorForSeq2Seq(tokenizer, pad_to_multiple_of=8, padding=True) ) trainer.train() def load_tokenize_data(args, tokenizer): # Load dataset dataset = load_dataset("json", data_files=args.train_data, split="train") FIM_SUFFIX = "<|fim_suffix|>" FIM_SUFFIX_CODELLAMA = "<FILL_ME>" FIM_PREFIX_ID = tokenizer("<|fim_prefix|>")["input_ids"] FIM_MIDDLE_ID = tokenizer("<|fim_middle|>")["input_ids"] EOS_ID = tokenizer("<|endoftext|>")["input_ids"] max_source_length = args.max_source_length - 2 # 2 for FIM_PREFIX_ID and FIM_MIDDLE_ID max_target_length = args.max_target_length - 1 # 1 for EOS def preprocess_single(example): source = example[args.source_column] source = source.replace(FIM_SUFFIX_CODELLAMA, FIM_SUFFIX) target = example[args.target_column] source_ids = FIM_PREFIX_ID + tokenizer(source, max_length=args.max_source_length, truncation=True)["input_ids"] + FIM_MIDDLE_ID target_ids = tokenizer(target, max_length=args.max_target_length, truncation=True)["input_ids"] + EOS_ID input_ids = source_ids + target_ids attention_mask = [1] * len(input_ids) labels = [-100] * len(source_ids) + target_ids return { "input_ids": input_ids, "attention_mask": attention_mask, "labels": labels, } train_data = dataset.map( preprocess_single, remove_columns=dataset.column_names, num_proc=args.num_proc, ) return train_data def main(args): argsdict = vars(args) print(pprint.pformat(argsdict)) # Load and set up tokenizer tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name_or_path) tokenizer.add_special_tokens({"pad_token": "<|pad|>"}) tokenizer.padding_side = "left" tokenizer.truncation_side = "right" # Load and tokenize dataset train_data = load_tokenize_data(args, tokenizer) # Load model model = AutoModelForCausalLM.from_pretrained( args.model_name_or_path, torch_dtype="auto", # device_map="auto" ) model.resize_token_embeddings(len(tokenizer)) # Run training run_training(args, train_data, model, tokenizer) if __name__ == "__main__": parser = argparse.ArgumentParser(description="Fine-tune Qwen model") parser.add_argument("--tokenizer_name_or_path", default="Qwen/Qwen2.5-Coder-0.5B", type=str, help="Tokenizer name") parser.add_argument("--model_name_or_path", default="Qwen/Qwen2.5-Coder-0.5B", type=str, help="Model path") parser.add_argument("--output_dir", default="ckpts1", type=str, help="Output directory") parser.add_argument("--train_data", default="./developer_1/developer_masked_methods_train.json", type=str, help="Path to training data") parser.add_argument("--max_source_length", default=2048, 
type=int, help="Maximum sample length") parser.add_argument("--max_target_length", default=256, type=int, help="Maximum target length") parser.add_argument("--source_column", default="parsed_masked_method", type=str, help="Source column") parser.add_argument("--target_column", default="parsed_mask", type=str, help="Target column") parser.add_argument("--batch_size", default=16, type=int, help="Batch size") parser.add_argument("--epochs", default=5, type=int, help="Number of epochs") parser.add_argument("--num_proc", default=64, type=int, help="Number of processes") args = parser.parse_args() os.makedirs(args.output_dir, exist_ok=True) main(args) ``` </details> <details> <summary>inference.py</summary> ```python import logging import os import sys from dataclasses import dataclass, field from typing import Optional from tqdm import tqdm import torch, pandas as pd from datasets import load_dataset from transformers import ( HfArgumentParser, AutoModelForCausalLM, AutoTokenizer, DataCollatorForSeq2Seq ) from accelerate import PartialState from accelerate.utils import gather_object from transformers.utils import check_min_version from transformers.utils.versions import require_version # Will error if the minimal version of Transformers is not installed. Remove at your own risks. check_min_version("4.15.0") require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/summarization/requirements.txt") torch.manual_seed(0) logger = logging.getLogger(__name__) @dataclass class ModelArguments: """ Arguments pertaining to which model/config/tokenizer we are going to fine-tune from. """ model_name_or_path: str = field( metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"} ) tokenizer_name: Optional[str] = field( default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"} ) cache_dir: Optional[str] = field( default=None, metadata={"help": "Where to store the pretrained models downloaded from huggingface.co"}, ) use_fast_tokenizer: bool = field( default=False, metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."}, ) model_revision: str = field( default="main", metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."}, ) use_auth_token: Optional[str] = field( default=None, metadata={ "help": "Will use the token generated when running `transformers-cli login` (necessary to use this script " "with private models)." }, ) from_flax: bool = field( default=False, metadata={ "help": "If true, the model will be loaded from a saved Flax checkpoint." }, ) @dataclass class DataTrainingArguments: """ Arguments pertaining to what data we are going to input our model for training and eval. 
""" dataset_path: Optional[str] = field( default=None, metadata={"help": "Filepath of the dataset to use."} ) dataset_config_name: Optional[str] = field( default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."} ) # dataset_split: Optional[str] = field( # default="test", metadata={"help": "The split of the dataset to use (via the datasets library)."} # ) source_column: Optional[str] = field( default='input', metadata={"help": "The name of the column in the datasets containing the input."}, ) target_column: Optional[str] = field( default='target', metadata={"help": "The name of the column in the datasets containing the target."}, ) overwrite_cache: bool = field( default=False, metadata={"help": "Overwrite the cached training and evaluation sets"} ) preprocessing_num_workers: Optional[int] = field( default=None, metadata={"help": "The number of processes to use for the preprocessing."}, ) max_source_length: Optional[int] = field( default=1024, metadata={ "help": "The maximum total input sequence length after tokenization. Sequences longer " "than this will be truncated, sequences shorter will be padded." }, ) max_target_length: Optional[int] = field( default=128, metadata={ "help": "The maximum total sequence length for target text after tokenization. Sequences longer " "than this will be truncated, sequences shorter will be padded." }, ) num_beams: Optional[int] = field( default=1, metadata={ "help": "Number of beams to use for evaluation. This argument will be passed to ``model.generate``, " "which is used during ``evaluate`` and ``predict``." }, ) batch_size: Optional[int] = field( default=8, metadata={"help": "Batch size used for inference."}, ) output_dir: Optional[str] = field( default=".", metadata={"help": "Output dir."}, ) predictions_filename: Optional[str] = field( default="predictions.txt", metadata={"help": "Name of the file where to store the model predictions."}, ) overwrite_predictions: bool = field( default=False, metadata={"help": "Overwrite the alredy existing predictions files."} ) save_accuracy_filename: Optional[str] = field( default=None, metadata={"help": "Name of the file where to store the model accuracy."}, ) def check_model_accuracy(targets, predictions): assert len(targets) == len(predictions), f"Targets size: {len(targets)} != Predictions size: {len(predictions)}" # compare two sets perfect_predictions = 0 for x,y in zip(targets, predictions): # x = ''.join(x.split()) # To fix double-space issue # y = ''.join(y.split()) if x == y: perfect_predictions += 1 accuracy = perfect_predictions*100.0/len(targets) # print(f"Instances: {len(targets)}\t\tModel Accuracy: {perfect_predictions*100.0/len(targets):.2f}% (pp={perfect_predictions})") return float(accuracy) def save_prediction_stats(filepath: str, inputs: list, targets: list, predictions: list): df = pd.DataFrame({'input': inputs, \ 'target': targets, \ 'prediction': predictions, \ 'correct': [True if p == t else False for p, t in zip(predictions, targets)]}) df.to_csv(filepath, index=False) def main(): parser = HfArgumentParser((ModelArguments, DataTrainingArguments)) model_args, data_args = parser.parse_args_into_dataclasses() # Predictions filepath out_path = os.path.join(data_args.output_dir, data_args.predictions_filename) # # Check if predictions file exists # if not data_args.overwrite_predictions and os.path.exists(out_path): # print(f"Predictions file already exists for checkpoint {out_path}") # return # Setup logging logging.basicConfig( 
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", handlers=[logging.StreamHandler(sys.stdout)], ) # Print the selected dataset path and checkpoints path print("=" * 50) print(f"Dataset path: {data_args.dataset_path}") print(f"Checkpoints path: {model_args.model_name_or_path}") print(f"Predictions path: {out_path}") print("=" * 50) print(f"Loading model {model_args.model_name_or_path} and tokenizer from {model_args.tokenizer_name}...") tokenizer = AutoTokenizer.from_pretrained( model_args.model_name_or_path, ) tokenizer.add_special_tokens({"pad_token": "<|pad|>"}) tokenizer.padding_side = "left" tokenizer.truncation_side = "right" model = AutoModelForCausalLM.from_pretrained( model_args.model_name_or_path, torch_dtype="auto", # device_map="auto" ) model.resize_token_embeddings(len(tokenizer)) print(f"Loading dataset {data_args.dataset_path}") dataset = load_dataset('json', data_files=data_args.dataset_path)["train"] # Select first 90% of the dataset # dataset = dataset.select(range(int(len(dataset) * 0.9))) column_names = dataset.column_names # Get the column names for input/target. source_column = data_args.source_column if source_column not in column_names: raise ValueError( f"--source_column' value '{data_args.source_column}' needs to be one of: {', '.join(column_names)}" ) target_column = data_args.target_column if target_column not in column_names: raise ValueError( f"--target_column' value '{data_args.target_column}' needs to be one of: {', '.join(column_names)}" ) FIM_SUFFIX = "<|fim_suffix|>" FIM_SUFFIX_CODELLAMA = "<FILL_ME>" FIM_PREFIX_ID = tokenizer("<|fim_prefix|>")["input_ids"] FIM_MIDDLE_ID = tokenizer("<|fim_middle|>")["input_ids"] max_source_length = data_args.max_source_length - 2 # 2 for FIM_PREFIX_ID and FIM_MIDDLE_ID def preprocess_single(example): source = example[source_column] source = source.replace(FIM_SUFFIX_CODELLAMA, FIM_SUFFIX) input_ids = FIM_PREFIX_ID + tokenizer(source, max_length=max_source_length, truncation=True)["input_ids"] + FIM_MIDDLE_ID attention_mask = [1] * len(input_ids) return { "input_ids": input_ids, "attention_mask": attention_mask, } predict_dataset = dataset.map( preprocess_single, num_proc=data_args.preprocessing_num_workers, remove_columns=column_names, load_from_cache_file=not data_args.overwrite_cache, desc="Running tokenizer on prediction dataset", ) print(f"Loaded {len(predict_dataset)} samples for prediction") print(f"Example: {predict_dataset[0]}") predict_dataset.set_format(type='torch', columns=['input_ids', 'attention_mask']) # Split predict_dataset into as many processes as we have, and assign each to a process distributed_state = PartialState() device = distributed_state.device gen_kwargs = { "max_new_tokens": data_args.max_target_length, "num_beams": data_args.num_beams, "do_sample": False, } predictions_chunk = [] model.eval().to(device) with distributed_state.split_between_processes(predict_dataset) as predict_dataset_chunk: dataloader = torch.utils.data.DataLoader( predict_dataset_chunk, batch_size=data_args.batch_size, collate_fn=DataCollatorForSeq2Seq(tokenizer, pad_to_multiple_of=8, padding=True) ) print(f"Inferencing...") for i, batch in enumerate(tqdm(dataloader)): batch.to(model.device) out = model.generate(**batch, **gen_kwargs) outputs = tokenizer.batch_decode(out[:, batch['input_ids'].shape[1]:], skip_special_tokens=True) inputs = tokenizer.batch_decode(batch['input_ids'], skip_special_tokens=True) for input1, output in zip(inputs, outputs): print(f"Input: {input1}") 
print(f"Output: {output}") predictions_chunk.extend(outputs) distributed_state.wait_for_everyone() predictions = gather_object(predictions_chunk) if distributed_state.is_main_process: # Export predictions on a separate file print(f"Writing predictions to {out_path}") with open(out_path, 'w', encoding='utf-8', errors='replace') as f: for pred in predictions: f.write(f"{pred}\n") assert len(predictions) == len(predict_dataset) # Export prediction stats df = dataset.to_pandas() inputs = df[data_args.source_column].tolist() targets = df[data_args.target_column].tolist() save_prediction_stats(out_path.replace('.txt', '.csv'), inputs, targets, predictions) # Print accuracy accuracy = check_model_accuracy(targets, predictions) print(f'Model accuracy: {str(accuracy)}') # Store accuracy on a file if data_args.save_accuracy_filename is not None: print(f"Saving accuracy to {data_args.save_accuracy_filename}") with open(data_args.save_accuracy_filename, 'w', encoding='utf-8', errors='replace') as f: f.write(f"{accuracy:.2f}") if __name__ == "__main__": main() ``` </details> <details> <summary>dpo.py</summary> ```python # Copyright 2023 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ # Full training accelerate launch --config_file acc_config_1.yaml dpo.py \ --dataset_name ./developer_1/developer_masked_methods_eval_dpo.json \ --model_name_or_path ckpts1/checkpoint-730 \ --learning_rate 5.0e-7 \ --beta 0.9 \ --num_train_epochs 10 \ --per_device_train_batch_size 4 \ --eval_strategy no \ --save_strategy steps \ --save_steps 0.1 \ --output_dir ckpts1_dpo \ --no_remove_unused_columns """ import torch from datasets import load_dataset from transformers import AutoModelForCausalLM, AutoTokenizer from trl import ( DPOConfig, DPOTrainer, ModelConfig, ScriptArguments, TrlParser, get_kbit_device_map, get_peft_config, get_quantization_config, ) if __name__ == "__main__": parser = TrlParser((ScriptArguments, DPOConfig, ModelConfig)) script_args, training_args, model_config = parser.parse_args_and_config() # Print all arguments and configurations print(script_args) print(training_args) print(model_config) ################ # Model & Tokenizer ################### torch_dtype = ( model_config.torch_dtype if model_config.torch_dtype in ["auto", None] else getattr(torch, model_config.torch_dtype) ) model_kwargs = dict( revision=model_config.model_revision, attn_implementation=model_config.attn_implementation, torch_dtype=torch_dtype, use_cache=False if training_args.gradient_checkpointing else True, # device_map="auto", ) model = AutoModelForCausalLM.from_pretrained( model_config.model_name_or_path, trust_remote_code=model_config.trust_remote_code, **model_kwargs ) ref_model = AutoModelForCausalLM.from_pretrained( model_config.model_name_or_path, trust_remote_code=model_config.trust_remote_code, **model_kwargs ) tokenizer = AutoTokenizer.from_pretrained( model_config.model_name_or_path, trust_remote_code=model_config.trust_remote_code ) tokenizer.add_special_tokens({"pad_token": 
"<|pad|>"}) tokenizer.padding_side = "left" tokenizer.truncation_side = "right" # if tokenizer.pad_token is None: # tokenizer.pad_token = tokenizer.eos_token model.resize_token_embeddings(len(tokenizer)) ref_model.resize_token_embeddings(len(tokenizer)) ################ # Dataset ################ dataset = load_dataset("json", data_files=script_args.dataset_name)["train"] ########## # Training ################ trainer = DPOTrainer( model, ref_model, args=training_args, train_dataset=dataset, processing_class=tokenizer ) trainer.train() if training_args.eval_strategy != "no": metrics = trainer.evaluate() trainer.log_metrics("eval", metrics) trainer.save_metrics("eval", metrics) # Save # trainer.save_model(training_args.output_dir) ``` </details> <details> <summary>run_inference.sh</summary> ```shell #!/bin/bash # Variable related to the evaluation dataset DATASET_TEST="./developer_1/developer_masked_methods_test.json" # Names of the columns containing inputs and outputs to feed the model INPUT_COLNAME="parsed_masked_method" TARGET_COLNAME="parsed_mask" # Name of the TXT file containing the predictions, stored in each checkpoint PREDICTIONS_FILENAME="predictions_test.txt" # Output dir for the predictions if [ -z "$2" ]; then OUTPUT_DIR="$1" else OUTPUT_DIR="$2" fi mkdir -p $OUTPUT_DIR echo "Generating predictions for model" accelerate launch --config_file acc_config_1.yaml inference.py \ --model_name_or_path="$1" \ --source_column="$INPUT_COLNAME" \ --target_column="$TARGET_COLNAME" \ --max_source_length="2048" \ --max_target_length="256" \ --use_fast_tokenizer \ --dataset_path="$DATASET_TEST" \ --batch_size="128" \ --output_dir="$OUTPUT_DIR" \ --predictions_filename="$PREDICTIONS_FILENAME" \ --preprocessing_num_workers="64" ``` </details>
2,382
441
yiyepiaoling0715
2024-12-25T03:48:55
> https://wandb.ai/krasul/huggingface/runs/wkekg3nb?nw=nwuserkrasul

Have you gotten more benefit recently?
2,382
442
HuggingFaceDocBuilderDev
2024-11-21T19:59:22
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2381). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,381
443
HuggingFaceDocBuilderDev
2024-11-21T19:57:55
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2380). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,380
444
qgallouedec
2024-11-28T18:11:23
I had a short discussion with @lvwerra (I don't want to misrepresent what you said, feel free to correct me).

- The main hurdle with the refactoring is that it risks breaking links.
- The idea of not deleting example files so that links aren't dead is a good one.
- In some cases, we could open PRs on repos to update example links.
- Having a duplicate of examples (in `examples/scripts`) and scripts (in `trl/scripts`) would make maintenance more arduous, which is why this solution was not chosen at first.

As a first iteration, we could move only dpo, sft, and chat (current CLI support). This new separation makes even more sense to me considering we're starting to add *true* examples (i.e., not scripts meant to be used at runtime) like in #2409 #1518 #2336 #1647.
2,380
445
qgallouedec
2024-11-21T19:33:35
Thanks!
2,379
446
qgallouedec
2024-11-21T19:35:56
Pushed to hub here https://huggingface.co/datasets/trl-lib/hh-rlhf-helpful-base
2,379
447
HuggingFaceDocBuilderDev
2024-11-21T19:37:26
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2379). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,379
448
ccs96307
2024-11-27T08:30:47
I encountered this issue previously and temporarily worked around it by adjusting the accelerate version to 0.34.2. Here are the versions I used:
- accelerate==0.34.2
- torch==2.5.1
- transformers==4.46.2
- deepspeed==0.15.4
2,377
449
lzy37ld
2024-12-15T19:04:32
Same issue as above.
2,377
450
IQIUM
2024-12-31T06:06:39
@lzy37ld I fixed this problem by tweaking the torch version to 2.4.0; maybe some other older version would have worked too.
- accelerate==0.34.2
- torch==2.4.0
- transformers==4.46.2
- deepspeed==0.15.4
2,377
451
Superskyyy
2025-01-14T22:58:05
This issue still persists; I had to downgrade to make ZeRO-2/3 work.
2,377
452
qgallouedec
2024-11-30T14:09:32
It seems like the issue lies in how the inputs to the metric computation are being handled. Specifically, the metric expects strings (decoded text), but your code is providing tokens (numerical IDs). You can resolve this by decoding both predictions and labels using the tokenizer before passing them to the metric.

```diff
 def compute_metrics(eval_pred):
     logits, labels = eval_pred
     labels = labels.astype(np.uint16)
     predictions = np.argmax(logits, axis=-1).astype(np.uint16)
+    predictions = tokenizer.batch_decode(predictions)
+    labels = tokenizer.batch_decode(labels)
     return metric.compute(predictions=predictions, references=labels)
```
2,376
453
scarafoni
2024-12-02T21:33:49
thank you. it's working now.
2,376
454
scarafoni
2024-12-02T21:34:00
issue resolved
2,376
455
xiaoyuxin1002
2024-12-04T11:07:37
Change line 262 in `ppo_trainer.py` to `self.model = self.model.module.policy  # save only the policy`
2,375
456
Ugadot
2025-01-03T11:27:31
I am getting a similar error when I try to use accelerate training on multiple GPUs. However, when I use normal single-GPU training I don't get this error. It happens when the `null_ref_context` function is called. Any ideas on the source of this error?
2,375
457
qgallouedec
2024-11-20T09:44:57
Please don't use images when referring to code next time. Use a [permalink to code](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-a-permanent-link-to-a-code-snippet).

---

> train_dataset should be in the form of dict after processing the map

No, `map` returns a `Dataset` instance (see the [`datasets.map` documentation](https://huggingface.co/docs/datasets/en/process#map)). Unless you remove these columns (prompt, completion) from the dataset, they remain.
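To make this concrete, a small sketch with made-up columns showing that `map` keeps existing columns unless you drop them explicitly:

```python
from datasets import Dataset

dataset = Dataset.from_dict({"prompt": ["Hi"], "completion": ["Hello!"]})

def merge(example):
    return {"text": example["prompt"] + example["completion"]}

# Without remove_columns, the original columns stay alongside the new one.
print(dataset.map(merge).column_names)  # ['prompt', 'completion', 'text']

# With remove_columns, only the columns returned by `merge` remain.
print(dataset.map(merge, remove_columns=["prompt", "completion"]).column_names)  # ['text']
```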
2,374
458
a7217339
2024-11-20T09:50:36
Thank you for your guidance. As a beginner, I am not yet proficient in grammar. Sorry.
2,374
459
HuggingFaceDocBuilderDev
2024-11-20T09:36:07
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2373). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,373
460
HuggingFaceDocBuilderDev
2024-11-20T08:39:39
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2372). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,372
461
kashif
2024-11-20T08:40:44
thanks @qgallouedec
2,372
462
qgallouedec
2024-11-20T09:11:02
Failing test not related (same as https://github.com/huggingface/trl/pull/2370#issuecomment-2486585773)
2,372
463
qgallouedec
2024-11-20T07:45:08
Thanks for reporting. Please provide more info, like the training arguments etc
2,371
464
HuggingFaceDocBuilderDev
2024-11-19T18:49:28
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2370). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,370
465
qgallouedec
2024-11-19T19:34:33
Failing test not related (fixed in #2373)
2,370
466
lewtun
2024-11-19T12:30:06
Yes I agree this symlink business was not a great choice for the chat CLI. Let's revisit later
2,369
467
qgallouedec
2024-11-19T08:15:12
> Can DPOTrainer support inputting encoded token IDs to customize the calculation of different attention masks

No, and it won't be supported unless we are provided with a good reason to support it.

> excluding the prompt part from loss computation?

Actually, that's how DPO works by default. See https://github.com/huggingface/trl/blob/b80c1a6fb8754c578f7178213e56d780abbe96d5/trl/trainer/dpo_trainer.py#L1089-L1092
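For intuition, a conceptual sketch (not the actual TRL code) of how masking the prompt tokens keeps them out of the sequence log-probability used in the loss:

```python
import torch

# Per-token log-probs for one sequence; the first two positions are prompt tokens.
per_token_logps = torch.tensor([[-0.5, -0.2, -1.0, -0.3]])
completion_mask = torch.tensor([[0.0, 0.0, 1.0, 1.0]])  # 0 = prompt token, 1 = completion token

# Only completion tokens contribute to the sequence log-prob entering the DPO loss.
sequence_logp = (per_token_logps * completion_mask).sum(dim=-1)
print(sequence_logp)  # tensor([-1.3000])
```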
2,368
468
LBJ6666
2024-11-19T08:55:16
@qgallouedec Thank you for your response
2,368
469
gmonair
2024-11-20T12:41:37
I think I found the issue. For posterity, it seems that it was caused by setting torch_dtype to "half" instead of "auto". User error.
2,367
470
qgallouedec
2024-11-19T05:43:32
Please use English only.
2,366
471
qgallouedec
2024-11-20T13:05:00
Probably linked to #2127. Closing as the title is not in English and the question isn't clear enough for us to help you. Feel free to open a clearer issue that complies with our guidelines.
2,366
472
HuggingFaceDocBuilderDev
2024-11-18T16:18:54
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2365). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,365
473
qgallouedec
2024-11-18T12:58:40
Thanks!
2,364
474
HuggingFaceDocBuilderDev
2024-11-18T13:03:00
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2364). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,364
475
August-murr
2024-11-24T15:09:48
To get it to work with `ppo_trainer.train`, one idea is to modify the `get_reward` function used by `PPOTrainer`. Clone the repo and check out the `get_reward` function in `utils.py`: https://github.com/huggingface/trl/blob/672c96546d9cae7a6d0afba381b189bb3cb2e8b5/trl/trainer/utils.py#L1069-L1093. Right now, it uses a `torch.nn.Module` to calculate the reward. You can modify it to use your rule-based reward logic instead. Just make sure the function still outputs a `torch.Tensor` so `PPOTrainer` doesn't break. You might also need to adjust some references in `ppo_config.py` and `ppo_trainer.py`. For example, remove anything that assumes there's a reward model being used, since in your case there won't be one.
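For illustration, a minimal sketch of the kind of rule-based scoring logic such a modified `get_reward` could wrap (the rules and names are made up; this is not a drop-in replacement for the actual function):

```python
import torch

def rule_based_reward(completions: list[str]) -> torch.Tensor:
    """Score a batch of decoded completions with hand-written rules."""
    rewards = []
    for text in completions:
        score = 0.0
        if "```" in text:            # e.g. reward well-formatted code blocks
            score += 1.0
        score -= 0.001 * len(text)   # mild length penalty
        rewards.append(score)
    # PPOTrainer expects rewards as a torch.Tensor, one scalar per sample.
    return torch.tensor(rewards)
```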
2,363
476
kashif
2024-11-21T09:34:50
yes would welcome distillation trainers!
2,361
477
kashif
2024-11-18T08:44:55
Thanks @bartoszzuk, perhaps it's better to set `self.data_collator` to the default one if it is `None`, and then use `self.data_collator` in the data loaders?
2,360
478
kashif
2024-11-18T10:17:24
You might need to run `make precommit` in the root of the TRL repo to fix the styling.
2,360
479
HuggingFaceDocBuilderDev
2024-11-18T10:20:50
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2360). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,360
480
HuggingFaceDocBuilderDev
2024-11-15T14:07:41
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2359). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,359
481
ccs96307
2024-11-14T12:34:24
Hi, it looks like the error arises because the `PPOTrainer` class expects a `value_model` to be defined and passed in, which appears to be required in the current TRL version. The `disable_dropout_in_model` method is likely encountering `NoneType` because `value_model` wasn’t specified, and thus defaults to `None`. Hope this helps!
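As a rough sketch of wiring up the required models (the model name is a placeholder, and the keyword names follow my understanding of the TRL v0.12-era API, so treat them as assumptions that may differ in other versions):

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoModelForSequenceClassification, AutoTokenizer
from trl import PPOConfig, PPOTrainer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_id)
policy = AutoModelForCausalLM.from_pretrained(model_id)
ref_policy = AutoModelForCausalLM.from_pretrained(model_id)
# Reward and value models are sequence classifiers with a single scalar head.
reward_model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=1)
value_model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=1)

# Tiny toy dataset of pre-tokenized prompts, just to keep the example self-contained.
train_dataset = Dataset.from_dict({"input_ids": [tokenizer("Hello")["input_ids"]]})

trainer = PPOTrainer(
    config=PPOConfig(output_dir="ppo-out"),
    processing_class=tokenizer,
    policy=policy,
    ref_policy=ref_policy,
    reward_model=reward_model,
    value_model=value_model,   # leaving this out is what triggers the NoneType error
    train_dataset=train_dataset,
)
trainer.train()
```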
2,357
482
Mrinh212375
2024-11-15T05:33:13
> Hi, it looks like the error arises because the `PPOTrainer` class expects a `value_model` to be defined and passed in, which appears to be required in the current TRL version. The `disable_dropout_in_model` method is likely encountering `NoneType` because `value_model` wasn’t specified, and thus defaults to `None`.
>
> Hope this helps!

Hi, thanks. I have passed `value_model` the same as the policy model; I thought it was optional, so I hadn't passed anything. Anyway, the error is gone. Also, I can call the `ppo_trainer.train()` method directly, right? Unlike the older version, there's no need to write a PPO training loop. Can you please clarify this point?
2,357
483
ccs96307
2024-11-15T13:35:40
Glad to hear the error is resolved! Yes, as far as I know, you can directly call the `ppo_trainer.train()` method without needing to write a training loop.
2,357
484
qgallouedec
2024-11-14T09:05:42
Does "from scratch" means the opposite of "finetuning" for you? Please precise your question
2,356
485
kalocide
2024-11-16T00:47:00
why would you pre-train with RL?
2,356
486
kashif
2024-11-21T10:05:15
Just to debug, can you kindly check whether you get the same issue when you do not pass a validation dataset? Also, can you check what happens when you explicitly pass `num_train_epochs=1` as an option to the `DPOConfig`? Thanks!
2,355
487
Mrinh212375
2024-11-14T07:21:27
Hi, I think we need to create a copy of the policy model using the `create_reference_model()` function, is that right? I'm facing another problem with the new `PPOTrainer()`: according to the documentation we need to pass a **module**, unlike the previous version (HF PreTrainedWrapper). How do I get the HF PreTrainedWrapper models and pass them to `PPOTrainer()` as a module?
2,353
488
ccs96307
2024-11-14T12:55:25
I'm hopeful that https://github.com/huggingface/trl/pull/2344 will address this issue! :raised_hands:
2,353
489
ZhaoningYu1996
2024-11-27T02:22:01
Hi, did you figure this issue out? I am facing the same problem. I am trying to use PPO with PEFT, but the ref_policy cannot be None.
2,353
490
TingchenFu
2024-11-27T02:25:44
I turned to v0.11 as a bypass🤣
2,353
491
ccs96307
2024-11-27T08:44:56
May I ask which version you are using? As far as I know, the current version (0.12.1) does not support `PEFT` for `PPOTrainer` yet. However, if you want to try `PEFT` support, you can consider using `pip install git+https://github.com/huggingface/trl.git` to experiment with it. :muscle:
2,353
492
ZhaoningYu1996
2024-11-27T23:51:31
Thank you for your reply. I am using 0.12.1. I will try pip install git+https://github.com/huggingface/trl.git
2,353
493
ZhaoningYu1996
2024-11-27T23:51:59
> I turned to v0.11 as a bypass🤣 Does 0.11 support PEFT?
2,353
494
qgallouedec
2024-11-23T16:36:38
Thanks for this detailed report. The easiest is probably to remove all columns in `dataset.map`:

```python
dataset.map(..., remove_columns=dataset.column_names)
```

What do you think? Would you like to make a PR to fix this?
2,351
495
HuggingFaceDocBuilderDev
2024-11-11T23:46:54
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2350). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,350
496
qgallouedec
2024-11-29T14:49:31
@kashif slow tests pass 🎉
2,350
497
qgallouedec
2024-11-29T15:07:20
> For the various warnings that are removed due to being non-actionable, should we promote them to `logger.info` or remove this for now in the interest of now spawning a huge mess of text for each trainer?

I assume you are referring to https://github.com/huggingface/trl/pull/2350#discussion_r1863624313? With a few exceptions, I don't think it's worth adding an info log. However, this PR can serve as a collection if we ever want to promote some to `logger.info` in the future, but I think for now we can leave it like that. In cases where I haven't found a satisfactory solution, I'll leave the conversations open on this PR so that they can be easily identified. But again, I think these cases can wait.
2,350
498
HuggingFaceDocBuilderDev
2024-11-11T23:17:46
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2349). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,349
499