Reward Modeling
Overview
TRL supports the Outcome-supervised Reward Modeling (ORM) Trainer for training reward models.
This post-training method was contributed by Younes Belkada.
Quick start
This example demonstrates how to train a reward model using the RewardTrainer from TRL. We train a Qwen3 0.6B model on the UltraFeedback dataset, a large-scale, fine-grained, and diverse preference dataset.
from datasets import load_dataset
from trl import RewardTrainer

# Train a reward model on the UltraFeedback preference dataset
trainer = RewardTrainer(
    model="Qwen/Qwen3-0.6B",
    train_dataset=load_dataset("trl-lib/ultrafeedback_binarized", split="train"),
)
trainer.train()
Expected dataset type and format
RewardTrainer supports preference datasets (with either implicit or explicit prompts) and is compatible with both standard and conversational dataset formats. When provided with a conversational dataset, the trainer automatically applies the chat template to it.
# Standard preference (implicit prompt)
{"chosen": "The sky is blue.",
"rejected": "The sky is green."}
# Conversational preference (implicit prompt)
{"chosen": [{"role": "user", "content": "What color is the sky?"},
{"role": "assistant", "content": "It is blue."}],
"rejected": [{"role": "user", "content": "What color is the sky?"},
{"role": "assistant", "content": "It is green."}]}
# Standard preference (explicit prompt)
{"prompt": "The sky is",
"chosen": " blue.",
"rejected": " green."}
# Conversational preference (explicit prompt)
{"prompt": [{"role": "user", "content": "What color is the sky?"}],
"chosen": [{"role": "assistant", "content": "It is blue."}],
"rejected": [{"role": "assistant", "content": "It is green."}]}
If your dataset is not in one of these formats, you can preprocess it to convert it into the expected format. Here is an example with the lmarena-ai/arena-human-preference-55k dataset:
import json

from datasets import load_dataset

dataset = load_dataset("lmarena-ai/arena-human-preference-55k")

# Filter out ties
dataset = dataset.filter(lambda example: example["winner_tie"] == 0)

# Create 'chosen' and 'rejected' fields based on the winner column
def response_a_b_to_chosen_rejected(example):
    if example["winner_model_a"] == 1:
        example["chosen"] = example["response_a"]
        example["rejected"] = example["response_b"]
    else:
        example["chosen"] = example["response_b"]
        example["rejected"] = example["response_a"]
    return example

dataset = dataset.map(response_a_b_to_chosen_rejected)

# Convert to conversational format
def make_conversation(example):
    prompt = json.loads(example["prompt"])[0]  # '["What color is the sky?"]' -> "What color is the sky?"
    chosen = json.loads(example["chosen"])[0]
    rejected = json.loads(example["rejected"])[0]
    return {
        "chosen": [{"role": "user", "content": prompt}, {"role": "assistant", "content": chosen}],
        "rejected": [{"role": "user", "content": prompt}, {"role": "assistant", "content": rejected}],
    }

dataset = dataset.map(make_conversation)

# Keep only the necessary columns
dataset = dataset.select_columns(["chosen", "rejected"])

print(next(iter(dataset["train"])))
{
"chosen": [
{"role": "user", "content": "Is it morally right to try to have a certain percentage of females on managerial positions?"},
{"role": "assistant", "content": "The question of whether it is morally right to aim for a certain percentage of females..."},
],
"rejected": [
{"role": "user", "content": "Is it morally right to try to have a certain percentage of females on managerial positions?"},
{"role": "assistant", "content": "As an AI, I don't have personal beliefs or opinions. However, ..."},
],
}
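The converted dataset can then be passed directly to the trainer. A minimal sketch, reusing the `dataset` object built above and the quick start model:

from trl import RewardTrainer

trainer = RewardTrainer(
    model="Qwen/Qwen3-0.6B",
    train_dataset=dataset["train"],
)
trainer.train()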
Looking deeper into the training method
Reward Models (RMs) are typically trained using supervised learning on datasets containing pairs of preferred and non-preferred responses. The goal is to learn a function that assigns higher scores to preferred responses, enabling the model to rank outputs based on preferences.
This section breaks down how reward modeling works in practice, covering the key steps: preprocessing and loss computation.
Preprocessing and tokenization
During training, each example is expected to contain a chosen and rejected field. For more details on the expected formats, see Dataset formats - Preference. The RewardTrainer tokenizes each input using the model’s tokenizer. If prompts and completions (chosen and rejected) are provided separately (explicit prompt case), they are concatenated before tokenization.
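As an illustration, here is a minimal sketch of this preprocessing step for the explicit prompt case. The variable names and the exact concatenation shown here are assumptions for clarity; only the resulting `chosen_input_ids` and `rejected_input_ids` fields are part of the documented interface:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")

# Explicit-prompt example: prompt and completions are stored separately
example = {"prompt": "The sky is", "chosen": " blue.", "rejected": " green."}

# Concatenate the prompt with each completion, then tokenize
chosen_input_ids = tokenizer(example["prompt"] + example["chosen"])["input_ids"]
rejected_input_ids = tokenizer(example["prompt"] + example["rejected"])["input_ids"]

# These are the fields the trainer (and DataCollatorForPreference) consumes
processed = {"chosen_input_ids": chosen_input_ids, "rejected_input_ids": rejected_input_ids}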
Computing the loss
Let $x$ be the input sequence (prompt) and $y^+$ and $y^-$ be the chosen and rejected sequences, respectively. Under the Bradley-Terry model (Bradley & Terry, 1952), the probability that $y^+$ is preferred over $y^-$ given a reward function $r$ is

$$p(y^+ \succ y^- \mid x) = \sigma\big(r(x, y^+) - r(x, y^-)\big),$$

where $\sigma$ is the sigmoid function.

The reward model $r_\theta$ is trained to assign higher scores to preferred responses $y^+$ than to non-preferred ones $y^-$. The loss is then defined as the negative log-likelihood of the observed preferences:

$$\mathcal{L}(\theta) = -\mathbb{E}_{(x,\, y^+,\, y^-)}\big[\log \sigma\big(r_\theta(x, y^+) - r_\theta(x, y^-)\big)\big].$$
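In code, this loss reduces to a log-sigmoid of the reward difference. A minimal PyTorch sketch, where `chosen_rewards` and `rejected_rewards` are illustrative names for the scalar scores assigned to each pair (not TRL internals):

import torch
import torch.nn.functional as F

def bradley_terry_loss(chosen_rewards: torch.Tensor, rejected_rewards: torch.Tensor) -> torch.Tensor:
    # Negative log-likelihood of preferring the chosen response:
    # -log sigmoid(r(x, y+) - r(x, y-)), averaged over the batch
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

loss = bradley_terry_loss(torch.tensor([1.2, 0.3]), torch.tensor([0.4, 0.9]))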
The Bradley-Terry model is underdetermined, meaning that adding a constant to all rewards does not change the preference probabilities. To address this, Helping or Herding? Reward Model Ensembles Mitigate but do not Eliminate Reward Hacking proposes adding an auxiliary loss term that encourages the rewards to be centered around zero. This is controlled by the `center_rewards_coefficient` parameter in the RewardConfig. The recommended value is `1e-2`.
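One way to sketch the combined objective (variable names are illustrative and the exact formulation in TRL may differ slightly):

import torch
import torch.nn.functional as F

def reward_loss(chosen_rewards, rejected_rewards, center_rewards_coefficient=1e-2):
    # Bradley-Terry negative log-likelihood
    loss = -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
    # Auxiliary term pushing rewards toward zero mean (Eq. 2 of the paper above)
    loss = loss + center_rewards_coefficient * torch.mean((chosen_rewards + rejected_rewards) ** 2)
    return loss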
Logged metrics
While training and evaluating we record the following reward metrics:
global_step
: The total number of optimizer steps taken so far.epoch
: The current epoch number, based on dataset iteration.num_tokens
: The total number of tokens processed so far.loss
: The average loss over the last logging interval.accuracy
: The proportion of correct predictions (i.e., the model assigned a higher score to the chosen response than to the rejected one) averaged over the last logging interval.min_reward
: The minimum reward score assigned by the model. This value is averaged over the logging interval.mean_reward
: The average reward score assigned by the model over the last logging interval.max_reward
: The maximum reward score assigned by the model. This value is averaged over the logging interval.margin
: The average margin (difference between chosen and rejected rewards) over the last logging interval.learning_rate
: The current learning rate, which may change dynamically if a scheduler is used.grad_norm
: The L2 norm of the gradients, computed before gradient clipping.
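For instance, `accuracy` and `margin` relate to the per-pair rewards roughly as follows (illustrative sketch, not the exact TRL implementation):

import torch

chosen_rewards = torch.tensor([1.2, 0.3, 2.0])
rejected_rewards = torch.tensor([0.4, 0.9, 1.0])

accuracy = (chosen_rewards > rejected_rewards).float().mean()  # fraction of correctly ranked pairs
margin = (chosen_rewards - rejected_rewards).mean()            # average chosen-minus-rejected reward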
Customization
Model initialization
You can directly pass the kwargs of the `from_pretrained()` method to the RewardConfig. For example, if you want to load a model in a different precision, analogous to

model = AutoModelForSequenceClassification.from_pretrained("Qwen/Qwen3-0.6B", dtype=torch.bfloat16)

you can do so by passing the `model_init_kwargs={"dtype": torch.bfloat16}` argument to the RewardConfig.
import torch

from trl import RewardConfig

training_args = RewardConfig(
    model_init_kwargs={"dtype": torch.bfloat16},
)
Note that all keyword arguments of `from_pretrained()` are supported, except for `num_labels`, which is automatically set to 1.
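The config is then passed to the trainer as usual. A minimal sketch reusing the quick start model and dataset:

import torch
from datasets import load_dataset
from trl import RewardConfig, RewardTrainer

training_args = RewardConfig(model_init_kwargs={"dtype": torch.bfloat16})

trainer = RewardTrainer(
    model="Qwen/Qwen3-0.6B",
    args=training_args,
    train_dataset=load_dataset("trl-lib/ultrafeedback_binarized", split="train"),
)
trainer.train()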
Train adapters with PEFT
We support tight integration with 🤗 PEFT library, allowing any user to conveniently train adapters and share them on the Hub, rather than training the entire model.
from datasets import load_dataset
from peft import LoraConfig
from trl import RewardTrainer

dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = RewardTrainer(
    "Qwen/Qwen3-4B",
    train_dataset=dataset,
    # Include the score head in modules_to_save when the base model is not a sequence classification model
    peft_config=LoraConfig(modules_to_save=["score"]),
)
trainer.train()
You can also continue training a `~peft.PeftModel`. For that, load the `PeftModel` outside the RewardTrainer and pass it directly to the trainer, without passing the `peft_config` argument.
from datasets import load_dataset
from peft import AutoPeftModelForSequenceClassification
from trl import RewardTrainer

model = AutoPeftModelForSequenceClassification.from_pretrained("trl-lib/Qwen3-4B-Reward-LoRA", is_trainable=True)
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = RewardTrainer(
    model=model,
    train_dataset=dataset,
)
trainer.train()
When training adapters, you typically use a higher learning rate (≈1e‑3) since only new parameters are being learned.
RewardConfig(learning_rate=1e-3, ...)
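Putting the two together, a sketch of an adapter run with the higher learning rate (same model and dataset as the example above):

from datasets import load_dataset
from peft import LoraConfig
from trl import RewardConfig, RewardTrainer

dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = RewardTrainer(
    "Qwen/Qwen3-4B",
    args=RewardConfig(learning_rate=1e-3),
    train_dataset=dataset,
    peft_config=LoraConfig(modules_to_save=["score"]),  # keep the reward head trainable
)
trainer.train()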
Tool Calling with Reward Modeling
The RewardTrainer fully supports fine-tuning models with tool calling capabilities. In this case, each dataset example should include:
- The conversation messages, including any tool calls (`tool_calls`) and tool responses (`tool` role messages)
- The list of available tools in the `tools` column, typically provided as JSON schemas
For details on the expected dataset structure, see the Dataset Format — Tool Calling section.
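For illustration only, a tool-calling preference sample might look roughly like the following; the field layout here is an assumption sketched from the conventions above, and the linked section is authoritative:

# Illustrative sample only; see "Dataset Format — Tool Calling" for the exact schema
example = {
    "chosen": [
        {"role": "user", "content": "What is the weather in Paris?"},
        {"role": "assistant", "tool_calls": [{"type": "function", "function": {"name": "get_weather", "arguments": {"city": "Paris"}}}]},
        {"role": "tool", "name": "get_weather", "content": "Sunny, 22°C"},
        {"role": "assistant", "content": "It is sunny and 22°C in Paris."},
    ],
    "rejected": [
        {"role": "user", "content": "What is the weather in Paris?"},
        {"role": "assistant", "content": "I cannot check the weather."},
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city",
                "parameters": {"type": "object", "properties": {"city": {"type": "string"}}, "required": ["city"]},
            },
        }
    ],
}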
RewardTrainer
class trl.RewardTrainer
< source >( model: typing.Union[str, transformers.modeling_utils.PreTrainedModel] args: typing.Optional[trl.trainer.reward_config.RewardConfig] = None data_collator: typing.Optional[typing.Callable[[list[typing.Any]], dict[str, typing.Any]]] = None train_dataset: typing.Union[datasets.arrow_dataset.Dataset, datasets.iterable_dataset.IterableDataset, NoneType] = None eval_dataset: typing.Union[datasets.arrow_dataset.Dataset, dict[str, datasets.arrow_dataset.Dataset], NoneType] = None processing_class: typing.Optional[transformers.tokenization_utils_base.PreTrainedTokenizerBase] = None compute_metrics: typing.Optional[typing.Callable[[transformers.trainer_utils.EvalPrediction], dict]] = None callbacks: typing.Optional[list[transformers.trainer_callback.TrainerCallback]] = None optimizers: tuple = (None, None) optimizer_cls_and_kwargs: typing.Optional[tuple[type[torch.optim.optimizer.Optimizer], dict[str, typing.Any]]] = None preprocess_logits_for_metrics: typing.Optional[typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor]] = None peft_config: typing.Optional[ForwardRef('PeftConfig')] = None )
Parameters
- model (`Union[str, PreTrainedModel]`) — Model to be trained. Can be either:
  - A string, being the model id of a pretrained model hosted inside a model repo on huggingface.co, or a path to a directory containing model weights saved using save_pretrained, e.g., `'./my_model_directory/'`. The model is loaded using `AutoModelForSequenceClassification.from_pretrained` with the keyword arguments in `args.model_init_kwargs`.
  - A sequence classification PreTrainedModel object.
- args (RewardConfig, optional) — Configuration for this trainer. If `None`, a default configuration is used.
- data_collator (`DataCollator`, optional) — Function to use to form a batch from a list of elements of the processed `train_dataset` or `eval_dataset`. Will default to DataCollatorForPreference.
- train_dataset (Dataset or IterableDataset) — Dataset to use for training. This trainer supports preference datasets (both implicit and explicit prompt). The format of the samples can be either:
  - Standard: Each sample contains plain text.
  - Conversational: Each sample contains structured messages (e.g., role and content).
  The trainer also supports processed (tokenized) datasets, as long as they contain `chosen_input_ids` and `rejected_input_ids` fields.
- eval_dataset (Dataset, IterableDataset or `dict[str, Union[Dataset, IterableDataset]]`) — Dataset to use for evaluation. It must meet the same requirements as `train_dataset`.
- processing_class (PreTrainedTokenizerBase, optional) — Tokenizer used to process the data. If `None`, the tokenizer is loaded from the model's name with from_pretrained. A padding token, `processing_class.pad_token`, must be set. If the processing class has not set a padding token, `processing_class.eos_token` will be used as the default.
- compute_metrics (`Callable[[EvalPrediction], dict]`, optional) — The function that will be used to compute metrics at evaluation. Must take an EvalPrediction and return a dictionary mapping strings to metric values. When passing a RewardConfig with `batch_eval_metrics` set to `True`, your `compute_metrics` function must take a boolean `compute_result` argument. This will be triggered after the last eval batch to signal that the function needs to calculate and return the global summary statistics rather than accumulating the batch-level statistics.
- callbacks (list of TrainerCallback, optional) — List of callbacks to customize the training loop. Will add those to the list of default callbacks detailed here. If you want to remove one of the default callbacks used, use the remove_callback method.
- optimizers (`tuple[Optional[torch.optim.Optimizer], Optional[torch.optim.lr_scheduler.LambdaLR]]`, optional, defaults to `(None, None)`) — A tuple containing the optimizer and the scheduler to use. Will default to an instance of `AdamW` on your model and a scheduler given by get_linear_schedule_with_warmup controlled by `args`.
- optimizer_cls_and_kwargs (`tuple[Type[torch.optim.Optimizer], Dict[str, Any]]`, optional) — A tuple containing the optimizer class and keyword arguments to use. Overrides `optim` and `optim_args` in `args`. Incompatible with the `optimizers` argument. Unlike `optimizers`, this argument avoids the need to place model parameters on the correct devices before initializing the Trainer.
- preprocess_logits_for_metrics (`Callable[[torch.Tensor, torch.Tensor], torch.Tensor]`, optional) — A function that preprocesses the logits right before caching them at each evaluation step. Must take two tensors, the logits and the labels, and return the logits once processed as desired. The modifications made by this function will be reflected in the predictions received by `compute_metrics`. Note that the labels (second parameter) will be `None` if the dataset does not have them.
- peft_config (`~peft.PeftConfig`, optional) — PEFT configuration used to wrap the model. If `None`, the model is not wrapped. Note that if the loaded model is a causal LM, it's highly recommended to set `modules_to_save=["score"]` in the PEFT configuration to ensure that the reward head is properly trained.
Trainer for Outcome-supervised Reward Models (ORM).
This class is a wrapper around the Trainer class and inherits all of its attributes and methods.
Example:
from trl import RewardTrainer
from datasets import load_dataset
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")
trainer = RewardTrainer(model="Qwen/Qwen2.5-0.5B-Instruct", train_dataset=dataset)
trainer.train()
train
< source >( resume_from_checkpoint: typing.Union[str, bool, NoneType] = None trial: typing.Union[ForwardRef('optuna.Trial'), dict[str, typing.Any], NoneType] = None ignore_keys_for_eval: typing.Optional[list[str]] = None **kwargs: typing.Any )
Parameters
- resume_from_checkpoint (`str` or `bool`, optional) — If a `str`, local path to a saved checkpoint as saved by a previous instance of `Trainer`. If a `bool` and equals `True`, load the last checkpoint in args.output_dir as saved by a previous instance of `Trainer`. If present, training will resume from the model/optimizer/scheduler states loaded here.
- trial (`optuna.Trial` or `dict[str, Any]`, optional) — The trial run or the hyperparameter dictionary for hyperparameter search.
- ignore_keys_for_eval (`list[str]`, optional) — A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions for evaluation during the training.
- kwargs (`dict[str, Any]`, optional) — Additional keyword arguments used to hide deprecated arguments.
Main training entry point.
Will save the model, so you can reload it using `from_pretrained()`. Will only save from the main process.
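For example, to resume training from the most recent checkpoint in `output_dir` (assuming checkpoints were saved by a previous run):

trainer.train(resume_from_checkpoint=True)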
push_to_hub
< source >( commit_message: typing.Optional[str] = 'End of training' blocking: bool = True token: typing.Optional[str] = None revision: typing.Optional[str] = None **kwargs )
Parameters
- commit_message (`str`, optional, defaults to `"End of training"`) — Message to commit while pushing.
- blocking (`bool`, optional, defaults to `True`) — Whether the function should return only when the `git push` has finished.
- token (`str`, optional, defaults to `None`) — Token with write permission to overwrite Trainer's original args.
- revision (`str`, optional) — The git revision to commit from. Defaults to the head of the "main" branch.
- kwargs (`dict[str, Any]`, optional) — Additional keyword arguments passed along to `~Trainer.create_model_card`.
Upload `self.model` and `self.processing_class` to the 🤗 model hub on the repo `self.args.hub_model_id`.
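For example, after training you might push the model with a custom commit message (this assumes Hub-related fields such as `hub_model_id` are configured in the RewardConfig or inferable from `output_dir`):

trainer.push_to_hub(commit_message="Trained reward model with TRL")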
RewardConfig
class trl.RewardConfig
< source >( output_dir: typing.Optional[str] = None overwrite_output_dir: bool = False do_train: bool = False do_eval: bool = False do_predict: bool = False eval_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'no' prediction_loss_only: bool = False per_device_train_batch_size: int = 8 per_device_eval_batch_size: int = 8 per_gpu_train_batch_size: typing.Optional[int] = None per_gpu_eval_batch_size: typing.Optional[int] = None gradient_accumulation_steps: int = 1 eval_accumulation_steps: typing.Optional[int] = None eval_delay: float = 0 torch_empty_cache_steps: typing.Optional[int] = None learning_rate: float = 0.0001 weight_decay: float = 0.0 adam_beta1: float = 0.9 adam_beta2: float = 0.999 adam_epsilon: float = 1e-08 max_grad_norm: float = 1.0 num_train_epochs: float = 3.0 max_steps: int = -1 lr_scheduler_type: typing.Union[transformers.trainer_utils.SchedulerType, str] = 'linear' lr_scheduler_kwargs: typing.Union[dict[str, typing.Any], str] = <factory> warmup_ratio: float = 0.0 warmup_steps: int = 0 log_level: str = 'passive' log_level_replica: str = 'warning' log_on_each_node: bool = True logging_dir: typing.Optional[str] = None logging_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps' logging_first_step: bool = False logging_steps: float = 10 logging_nan_inf_filter: bool = True save_strategy: typing.Union[transformers.trainer_utils.SaveStrategy, str] = 'steps' save_steps: float = 500 save_total_limit: typing.Optional[int] = None save_safetensors: bool = True save_on_each_node: bool = False save_only_model: bool = False restore_callback_states_from_checkpoint: bool = False no_cuda: bool = False use_cpu: bool = False use_mps_device: bool = False seed: int = 42 data_seed: typing.Optional[int] = None jit_mode_eval: bool = False bf16: typing.Optional[bool] = None fp16: bool = False fp16_opt_level: str = 'O1' half_precision_backend: str = 'auto' bf16_full_eval: bool = False fp16_full_eval: bool = False tf32: typing.Optional[bool] = None local_rank: int = -1 ddp_backend: typing.Optional[str] = None tpu_num_cores: typing.Optional[int] = None tpu_metrics_debug: bool = False debug: typing.Union[str, list[transformers.debug_utils.DebugOption]] = '' dataloader_drop_last: bool = False eval_steps: typing.Optional[float] = None dataloader_num_workers: int = 0 dataloader_prefetch_factor: typing.Optional[int] = None past_index: int = -1 run_name: typing.Optional[str] = None disable_tqdm: typing.Optional[bool] = None remove_unused_columns: bool = True label_names: typing.Optional[list[str]] = None load_best_model_at_end: bool = False metric_for_best_model: typing.Optional[str] = None greater_is_better: typing.Optional[bool] = None ignore_data_skip: bool = False fsdp: typing.Union[list[transformers.trainer_utils.FSDPOption], str, NoneType] = None fsdp_min_num_params: int = 0 fsdp_config: typing.Union[dict[str, typing.Any], str, NoneType] = None fsdp_transformer_layer_cls_to_wrap: typing.Optional[str] = None accelerator_config: typing.Union[dict, str, NoneType] = None parallelism_config: typing.Optional[accelerate.parallelism_config.ParallelismConfig] = None deepspeed: typing.Union[dict, str, NoneType] = None label_smoothing_factor: float = 0.0 optim: typing.Union[transformers.training_args.OptimizerNames, str] = 'adamw_torch_fused' optim_args: typing.Optional[str] = None adafactor: bool = False group_by_length: bool = False length_column_name: str = 'length' report_to: typing.Union[NoneType, str, list[str]] = None project: str = 'huggingface' 
trackio_space_id: typing.Optional[str] = 'trackio' ddp_find_unused_parameters: typing.Optional[bool] = None ddp_bucket_cap_mb: typing.Optional[int] = None ddp_broadcast_buffers: typing.Optional[bool] = None dataloader_pin_memory: bool = True dataloader_persistent_workers: bool = False skip_memory_metrics: bool = True use_legacy_prediction_loop: bool = False push_to_hub: bool = False resume_from_checkpoint: typing.Optional[str] = None hub_model_id: typing.Optional[str] = None hub_strategy: typing.Union[transformers.trainer_utils.HubStrategy, str] = 'every_save' hub_token: typing.Optional[str] = None hub_private_repo: typing.Optional[bool] = None hub_always_push: bool = False hub_revision: typing.Optional[str] = None gradient_checkpointing: bool = True gradient_checkpointing_kwargs: typing.Union[dict[str, typing.Any], str, NoneType] = None include_inputs_for_metrics: bool = False include_for_metrics: list = <factory> eval_do_concat_batches: bool = True fp16_backend: str = 'auto' push_to_hub_model_id: typing.Optional[str] = None push_to_hub_organization: typing.Optional[str] = None push_to_hub_token: typing.Optional[str] = None mp_parameters: str = '' auto_find_batch_size: bool = False full_determinism: bool = False torchdynamo: typing.Optional[str] = None ray_scope: typing.Optional[str] = 'last' ddp_timeout: int = 1800 torch_compile: bool = False torch_compile_backend: typing.Optional[str] = None torch_compile_mode: typing.Optional[str] = None include_tokens_per_second: bool = False include_num_input_tokens_seen: typing.Union[str, bool] = False neftune_noise_alpha: typing.Optional[float] = None optim_target_modules: typing.Union[NoneType, str, list[str]] = None batch_eval_metrics: bool = False eval_on_start: bool = False use_liger_kernel: bool = False liger_kernel_config: typing.Optional[dict[str, bool]] = None eval_use_gather_object: bool = False average_tokens_across_devices: bool = True model_init_kwargs: typing.Optional[dict[str, typing.Any]] = None chat_template_path: typing.Optional[str] = None disable_dropout: bool = True dataset_num_proc: typing.Optional[int] = None eos_token: typing.Optional[str] = None pad_token: typing.Optional[str] = None max_length: typing.Optional[int] = 1024 pad_to_multiple_of: typing.Optional[int] = None center_rewards_coefficient: typing.Optional[float] = None activation_offloading: bool = False )
Parameters that control the model
- model_init_kwargs (`dict[str, Any]`, optional) — Keyword arguments for from_pretrained, used when the `model` argument of the RewardTrainer is provided as a string. If you're training a MoE architecture and want to include the load balancing/auxiliary loss as a part of the final loss, remember to set `output_router_logits=True` in this dictionary.
- chat_template_path (`str`, optional) — If specified, sets the model's chat template. This can either be the path to a tokenizer (local directory or Hugging Face Hub model) or a direct path to a Jinja template file. When using a Jinja file, you must ensure that any special tokens referenced in the template are added to the tokenizer and that the model's embedding layer is resized accordingly.
- disable_dropout (`bool`, optional, defaults to `True`) — Whether to disable dropout in the model.
Parameters that control the data preprocessing
- dataset_num_proc (`int`, optional) — Number of processes to use for processing the dataset.
- eos_token (`str`, optional) — Token used to indicate the end of a turn or sequence. If `None`, it defaults to `processing_class.eos_token`.
- pad_token (`str`, optional) — Token used for padding. If `None`, it defaults to `processing_class.pad_token`, or if that is also `None`, it falls back to `processing_class.eos_token`.
- max_length (`int` or `None`, optional, defaults to `1024`) — Maximum length of the tokenized sequence. Samples are filtered out if either the chosen or rejected sequence exceeds this value. If `None`, no filtering is applied.
- pad_to_multiple_of (`int`, optional) — If set, the sequences will be padded to a multiple of this value.
Parameters that control the training
- center_rewards_coefficient (`float`, optional) — Coefficient to incentivize the reward model to output mean-zero rewards (proposed by https://huggingface.co/papers/2312.09244, Eq. 2). Recommended value: `0.01`.
- activation_offloading (`bool`, optional, defaults to `False`) — Whether to offload the activations to the CPU.
Configuration class for the RewardTrainer.
This class includes only the parameters that are specific to Reward training. For a full list of training arguments, please refer to the TrainingArguments documentation. Note that default values in this class may differ from those in TrainingArguments.
Using HfArgumentParser we can turn this class into argparse arguments that can be specified on the command line.
DataCollatorForPreference
class trl.trainer.reward_trainer.DataCollatorForPreference
< source >( pad_token_id: int pad_to_multiple_of: typing.Optional[int] = None return_tensors: str = 'pt' )
Data collator used for preference data. Inputs are dynamically padded to the maximum length of a batch.
This collator expects each example in the input list to be a dictionary containing the `"chosen_input_ids"` and `"rejected_input_ids"` keys. The collator returns a dictionary containing the following keys:

- `"input_ids"`: Tensor of input IDs, padded to the maximum length of the batch. The first half of the batch corresponds to the `"chosen_input_ids"` and the second half to the `"rejected_input_ids"`.
- `"attention_mask"`: Tensor of attention mask, padded to the maximum length of the batch.

Optionally, the examples can contain a `"margin"` key, in which case the returned dictionary will also contain a `"margin"` key with a tensor of margins.
Examples:
>>> from trl.trainer.reward_trainer import DataCollatorForPreference
>>> collator = DataCollatorForPreference(pad_token_id=0)
>>> examples = [
... {"chosen_input_ids": [1, 2, 3], "rejected_input_ids": [4, 5]},
... {"chosen_input_ids": [6, 7], "rejected_input_ids": [8]},
... ]
>>> collator(examples)
{'input_ids': tensor([[1, 2, 3],
[6, 7, 0],
[4, 5, 0],
[8, 0, 0]]),
'attention_mask': tensor([[1, 1, 1],
[1, 1, 0],
[1, 1, 0],
[1, 0, 0]])}
>>> examples = [
... {"chosen_input_ids": [1, 2, 3], "rejected_input_ids": [4, 5], "margin": 0.5},
... {"chosen_input_ids": [6, 7], "rejected_input_ids": [8], "margin": 0.0},
... ]
>>> collator(examples)
{'input_ids': tensor([[1, 2, 3],
[6, 7, 0],
[4, 5, 0],
[8, 0, 0]]),
'attention_mask': tensor([[1, 1, 1],
[1, 1, 0],
[1, 1, 0],
[1, 0, 0]]),
'margin': tensor([0.5, 0.0])}