Reducing Memory Usage
Section under construction. Feel free to contribute!
Truncation
Sequence lengths in the dataset can vary widely. When data is batched, sequences are padded to match the longest one in the batch, which can cause high memory usage, even if most sequences are relatively short.
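To see the effect concretely, here is a small illustration (a standalone PyTorch sketch, not TRL code) of how one long sequence forces the whole batch to be padded to its length:

import torch
from torch.nn.utils.rnn import pad_sequence

# One short and one long sequence in the same batch
short_seq = torch.randint(0, 100, (12,))
long_seq = torch.randint(0, 100, (1024,))

# Padding to the longest sequence yields a 2 x 1024 batch,
# even though one sequence only needs 12 positions
batch = pad_sequence([short_seq, long_seq], batch_first=True)
print(batch.shape)  # torch.Size([2, 1024])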

To reduce memory usage, it’s important to truncate sequences to a reasonable length. While TRL trainers truncate sequences by default, you may want to adjust the default truncation length to better align with your specific use case.
In DPO, truncation is applied first to the prompt and to the completion via the max_prompt_length and max_completion_length parameters. The max_length parameter is then used to truncate the resulting sequence.

To set the truncation parameters, use the following code snippet:
from trl import DPOConfig
training_args = DPOConfig(..., max_prompt_length=..., max_length=...)
You can also use the max_completion_length parameter to truncate the completion, though this is less common since the goal is typically to preserve the completion's full length whenever possible.
from trl import DPOConfig
training_args = DPOConfig(..., max_completion_length=...)
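For SFT, a single parameter controls truncation. As the packing example below also shows, SFTConfig exposes max_length for this (older TRL releases may name this parameter differently, so check the version you are using):

from trl import SFTConfig
training_args = SFTConfig(..., max_length=512)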
Packing
This technique applies only to SFT.
Truncation has several drawbacks:
- Loss of information: Key data at the end of a sequence may be discarded.
- Choosing truncation length: Too short loses data; too long undermines efficiency.
Packing, introduced in Raffel et al., 2020, addresses these issues by grouping sequences instead of truncating. It concatenates and splits dataset sequences into the desired lengths.
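Conceptually, packing concatenates all tokenized sequences into one stream and slices that stream into fixed-length chunks. A minimal sketch of the idea (not TRL's actual implementation):

def pack(sequences, chunk_length):
    # Concatenate every tokenized sequence into one long token stream
    stream = [token for seq in sequences for token in seq]
    # Slice the stream into chunks of exactly chunk_length tokens;
    # this simple sketch drops any trailing remainder
    return [stream[i:i + chunk_length]
            for i in range(0, len(stream) - chunk_length + 1, chunk_length)]

print(pack([[1, 2, 3], [4, 5], [6, 7, 8, 9, 10]], chunk_length=4))
# [[1, 2, 3, 4], [5, 6, 7, 8]]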

Packing eliminates padding, preserves all sequence information, and allows for flexible sequence lengths, making it a more efficient alternative to truncation. To enable packing, use packing=True in the SFTConfig:
from trl import SFTConfig
training_args = SFTConfig(..., packing=True, max_length=512)
Packing may cause batch contamination, where adjacent sequences influence one another. This can be problematic for some applications. For more details, see #1230.
Padding-free
Padding-free batching is an alternative approach for reducing memory usage. In this method, a batch is first sampled and then flattened into a single sequence, avoiding padding. Unlike packing, which can result in incomplete sequences by combining parts of different samples, padding-free batching ensures that all sequences remain complete and intact.
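Conceptually, the batch [[1, 2], [3, 4, 5]] becomes the single row [1, 2, 3, 4, 5], with position ids that restart at each sequence boundary so that a suitable attention kernel can keep the sequences apart. A minimal sketch of the flattening step (an illustration of the general technique, not TRL's actual collator):

import torch

def flatten_batch(sequences):
    # Concatenate all sequences into one row with no padding tokens
    input_ids = torch.cat(sequences).unsqueeze(0)
    # Position ids restart at 0 for each sequence, marking the boundaries
    position_ids = torch.cat([torch.arange(len(seq)) for seq in sequences]).unsqueeze(0)
    return input_ids, position_ids

ids, pos = flatten_batch([torch.tensor([1, 2]), torch.tensor([3, 4, 5])])
print(ids)  # tensor([[1, 2, 3, 4, 5]])
print(pos)  # tensor([[0, 1, 0, 1, 2]])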

It’s highly recommended to use padding-free batching with Flash Attention 2. Otherwise, you may encounter batch contamination issues.
from trl import DPOConfig
training_args = DPOConfig(..., padding_free=True, model_init_kwargs={"attn_implementation": "flash_attention_2"})
Activation offloading
Activation offloading is a memory efficiency technique that reduces GPU VRAM usage by temporarily moving activation tensors to CPU RAM during the forward pass and bringing them back only when needed for the backward pass. This significantly reduces peak memory usage at the cost of slightly increased training time.
To enable activation offloading in your SFT training configuration:
from trl import SFTConfig
training_args = SFTConfig(..., activation_offloading=True)
When using activation offloading with models that use Liger kernels, you must disable Liger cross entropy due to compatibility issues. The issue occurs specifically with use_liger_kernel=True because Liger cross entropy performs in-place operations, which conflict with activation offloading. The default setting (use_liger_kernel=False) works:
# When using activation offloading with a model that uses Liger kernels:
from trl import SFTConfig
training_args = SFTConfig(
activation_offloading=True,
use_liger_kernel=False, # Disable Liger cross entropy
# Other parameters...
)
Under the hood, activation offloading implements PyTorch's saved_tensors_hooks to intercept activations during the forward pass. It intelligently manages which tensors to offload based on size and context, avoiding offloading output tensors, which would be inefficient. For performance optimization, it can optionally use CUDA streams to overlap computation with CPU-GPU transfers.
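For intuition, PyTorch ships a generic version of this mechanism as the torch.autograd.graph.save_on_cpu context manager, built on the same saved_tensors_hooks. A minimal standalone sketch (assuming a CUDA device; this is not TRL's implementation, which adds the size/context heuristics and stream overlap described above):

import torch
from torch.autograd.graph import save_on_cpu

x = torch.randn(1024, 1024, device="cuda", requires_grad=True)
w = torch.randn(1024, 1024, device="cuda", requires_grad=True)

# Tensors saved for backward are moved to pinned CPU memory during
# the forward pass instead of staying in GPU VRAM
with save_on_cpu(pin_memory=True):
    y = (x @ w).relu().sum()

y.backward()  # saved tensors are copied back to the GPU as needed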
Disabling model gathering for generation in online methods
When using DeepSpeed ZeRO-3, model weights are sharded across multiple GPUs. Online methods involve generating completions from the model as part of the training process. During this step, the model weights are temporarily gathered on a single GPU for generation. For very large models, this gathering can lead to out-of-memory (OOM) errors, as described in this issue: #2250.
If you encounter this issue, you can disable the gathering of model weights for generation by setting the following parameter:
from trl import GRPOConfig
training_args = GRPOConfig(..., ds3_gather_for_generation=False)
This adjustment prevents model weights from being gathered, avoiding OOM errors, but it may result in slower generation speeds.