LoRA Without Regret
Recent research from the team at Thinking Machines Lab (Schulman et al., 2025) shows that LoRA can match full fine-tuning performance when configured correctly, while using only ~67% of the compute. These findings are exciting to TRL users because they’re straightforward to implement and can improve model performance on smaller budgets.
This guide provides simple instructions to reproduce the results of the blog post in TRL.
It is recommended to read the blog post before following this guide, or to consult both resources in parallel for best results.
Benefits of LoRA over full fine-tuning
First of all, let’s remind ourselves of the benefits of LoRA over full fine-tuning.
LoRA adds adapter layers on top of the base model, and these adapters contain significantly fewer parameters than the base model itself. This design reduces GPU memory requirements and enables more efficient training. As described in the blog post, this approach was originally thought to involve a performance trade-off, but careful configuration can overcome it and match full fine-tuning performance.
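As a quick illustration of the parameter savings, the following sketch wraps the same base model used in the SFT example below with a LoRA configuration and prints the trainable parameter count. It is a minimal, inspection-only sketch and not part of the training recipe:
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B-Instruct")
peft_model = get_peft_model(
    base_model,
    LoraConfig(r=256, lora_alpha=16, target_modules="all-linear"),
)
# Prints trainable params, total params, and the trainable percentage
peft_model.print_trainable_parameters()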
Examples with TRL
Let’s implement and train LoRA adapters in TRL scripts based on the core findings of the blog post. Afterwards, we’ll revisit each finding in light of the TRL results.
Supervised Fine-Tuning (SFT)
The blog post performs SFT on a range of models and datasets from the Hub, which we can reproduce in TRL.
We can integrate these findings with the TRL Python API like so:
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTTrainer, SFTConfig
dataset = load_dataset("open-thoughts/OpenThoughts-114k", split="train")
peft_config = LoraConfig(r=256, lora_alpha=16, target_modules="all-linear")
training_args = SFTConfig(
learning_rate=2e-4,
per_device_train_batch_size=1,
gradient_accumulation_steps=4,
num_train_epochs=1,
report_to=["trackio"],
)
trainer = SFTTrainer(
model="Qwen/Qwen2.5-3B-Instruct",
train_dataset=dataset,
peft_config=peft_config,
args=training_args,
)
trainer.train()
Once training starts, you can monitor the progress in Trackio, which will log the URL.
Reinforcement Learning (GRPO)
The blog post performs GRPO on a range of models and datasets from the Hub, and once again we can reproduce the results in TRL.
| Model | Dataset |
| --- | --- |
| Llama-3.1-8B-Base | GSM8k |
| Llama-3.1-8B-Base | DeepMath-103K |
| Qwen3-8b-base | DeepMath-103K |
For reinforcement learning, the blog post uses a math reasoning task whose reward we can reproduce in TRL as a Python reward function.
Reward function
from typing import Optional

from latex2sympy2_extended import NormalizationConfig
from math_verify import LatexExtractionConfig, parse, verify


def strip_reasoning_accuracy_reward(
completions: list[list[dict[str, str]]], solution: list[str], **kwargs
) -> list[Optional[float]]:
"""Reward function that strips reasoning tags and checks mathematical accuracy.
This function:
1. Extracts the content from completions
2. Removes <think></think> tags (for reasoning that shouldn't be evaluated)
3. Parses both the gold solution and the predicted answer
4. Uses math_verify to check if they are mathematically equivalent
Args:
completions: List of model completions, each containing a list of messages
solution: List of ground truth solutions
**kwargs: Additional arguments (ignored but required for trainer compatibility)
Returns:
List of rewards where:
- 1.0 if the answer is correct
- 0.0 if the answer is incorrect
- None if the solution is not parseable (skips this example)
"""
contents = [completion[0]["content"] for completion in completions]
rewards = []
for content, sol in zip(contents, solution):
# Strip reasoning tags from completion
while "<think>" in content and "</think>" in content:
start = content.find("<think>")
end = content.find("</think>", start)
if start != -1 and end != -1:
content = content[:start] + content[end + len("</think>") :]
else:
break
# Parse gold solution
gold_parsed = parse(
f"${sol}$",
extraction_config=[
LatexExtractionConfig(
boxed_match_priority=0, try_extract_without_anchor=True
)
],
)
if len(gold_parsed) != 0:
# We require the answer to be provided in correct latex (no malformed operators)
answer_parsed = parse(
content,
extraction_config=[
LatexExtractionConfig(
boxed_match_priority=0,
normalization_config=NormalizationConfig(
basic_latex=True,
units=True,
malformed_operators=False,
nits=False,
boxed=True,
),
try_extract_without_anchor=False,
)
],
extraction_mode="first_match",
)
# Compute binary rewards if verifiable, `None` otherwise to skip this example
try:
reward = float(verify(gold_parsed, answer_parsed))
except Exception as e:
print(
f"verify failed: {e}, answer: {answer_parsed}, gold: {gold_parsed}"
)
reward = None
else:
# If the gold solution is not parseable, we assign `None` to skip this example
reward = None
rewards.append(reward)
return rewards
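To sanity-check the reward function in isolation, you can call it directly on a toy completion. The snippet below is a minimal sketch with made-up inputs (it assumes the `math_verify` and `latex2sympy2_extended` dependencies imported above are installed); when used with the trainer, the `solution` values come from the dataset column of the same name:
completions = [
    [{"role": "assistant", "content": "<think>2 + 2 = 4</think> The answer is $\\boxed{4}$."}],
    [{"role": "assistant", "content": "<think>Guessing.</think> The answer is $\\boxed{5}$."}],
]
solution = ["4", "4"]

# Expected output: [1.0, 0.0] - the first answer matches the gold solution, the second does not
print(strip_reasoning_accuracy_reward(completions, solution=solution))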
We can implement these recommendations with the TRL Python API like so:
from datasets import load_dataset
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer
dataset = load_dataset("HuggingFaceH4/OpenR1-Math-220k-default-verified", split="train")
def strip_reasoning_accuracy_reward(completions, **kwargs):
"""Reward function that strips reasoning and accuracy scores from the model outputs."""
...
peft_config = LoraConfig(
r=1,
lora_alpha=32,
target_modules="all-linear"
)
training_args = GRPOConfig(
learning_rate=5e-5,
per_device_train_batch_size=1,
gradient_accumulation_steps=4,
num_train_epochs=1,
num_generations=8,
generation_batch_size=8,
report_to=["trackio"],
)
trainer = GRPOTrainer(
model="Qwen/Qwen3-0.6B",
reward_funcs=strip_reasoning_accuracy_reward,
args=training_args,
train_dataset=dataset,
peft_config=peft_config,
)
trainer.train()
This snippet stubs out the reward function body to keep the example concise; the full implementation is shown above.
The reinforcement learning script with GRPO is implemented as a custom TRL script that uses the reward function shown above and follows the blog post's best practices for reinforcement learning with LoRA. You can review it at grpo.py.
Key findings in optimizing LoRA
The authors recommend applying LoRA to all weight matrices rather than limiting it to attention layers, as increasing the rank does not compensate for this restriction. In TRL, this can be configured using `--lora_target_modules all-linear` to apply LoRA to all weight matrices.
We were able to reproduce the results of the blog post using TRL and the SmolLM3 model. We trained the model for 500 steps on the OpenR1-Math-220k dataset with the reward function and configuration above. As you can see in the figure below, the LoRA model's average train reward curve matches the full fine-tuning curve.
And most importantly, the LoRA model uses significantly less memory than the full fine-tuning model, as we can see in the figure below.
Here are the parameters we used to train the models above:

| Parameter | LoRA | Full FT |
| --- | --- | --- |
| `--model_name_or_path` | HuggingFaceTB/SmolLM3-3B | HuggingFaceTB/SmolLM3-3B |
| `--dataset_name` | HuggingFaceH4/OpenR1-Math-220k-default-verified | HuggingFaceH4/OpenR1-Math-220k-default-verified |
| `--learning_rate` | 1.0e-5 | 1.0e-6 |
| `--max_prompt_length` | 1024 | 1024 |
| `--max_completion_length` | 4096 | 4096 |
| `--lora_r` | 1 | - |
| `--lora_alpha` | 32 | - |
| `--lora_dropout` | 0.0 | - |
| `--lora_target_modules` | all-linear | - |
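For reference, here is how the LoRA column of the table maps onto the Python API. This is a sketch of the configuration objects only; the dataset, reward function, and GRPOTrainer setup are the same as in the earlier snippet:
from peft import LoraConfig
from trl import GRPOConfig

peft_config = LoraConfig(
    r=1,
    lora_alpha=32,
    lora_dropout=0.0,
    target_modules="all-linear",
)
training_args = GRPOConfig(
    learning_rate=1.0e-5,
    max_prompt_length=1024,
    max_completion_length=4096,
    report_to=["trackio"],
)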
Let’s break down the key findings of the blog post and how we were able to reproduce them.
1. LoRA performs better when applied to all weight matrices
The authors recommend applying LoRA to all weight matrices rather than limiting it to attention layers, as increasing the rank does not compensate for this restriction. Attention-only LoRA underperforms even when using a higher rank to match parameter count. In TRL, this can be configured using `--lora_target_modules all-linear` to apply LoRA to all weight matrices. In Python, we can do this like so:
from peft import LoraConfig
peft_config = LoraConfig(target_modules="all-linear")
2. The adapter needs sufficient capacity to learn from the dataset
The blog post recommends using a sufficient LoRA rank to learn from the dataset. The rank determines the number of trainable parameters in the LoRA adapter, and, as the blog post puts it, "for datasets that exceed LoRA capacity, LoRA underperforms FullFT".
In the TRL script, we can use `--lora_r` to set the rank and adapt it to the task and dataset we're training on. The blog post recommends the following ranks based on the task and dataset size:
Reinforcement learning tasks typically require lower capacity, so smaller LoRA ranks can be used. This is because policy gradient algorithms extract roughly 1 bit of information per episode, demanding minimal parameter capacity.
The blog post defines the ideal dataset size for LoRA to match full fine-tuning as "post-training scale". We can use this to determine the recommended ranks for SFT and RL LoRAs, summarized in the table and the sketch that follows:
| Task Type | Dataset Size | Recommended Rank |
| --- | --- | --- |
| SFT | Post-training scale | 256 |
| RL | Any size | 1-32 |
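As a concrete example, the two regimes map onto LoraConfig as follows. This is a sketch reusing the rank and alpha values from the SFT and GRPO examples earlier in this guide; the exact rank within the RL range is a judgment call:
from peft import LoraConfig

# SFT on a post-training-scale dataset: a high rank gives the adapter enough capacity
sft_peft_config = LoraConfig(r=256, lora_alpha=16, target_modules="all-linear")

# RL with GRPO: policy gradients carry roughly 1 bit of information per episode,
# so a very small rank is sufficient
rl_peft_config = LoraConfig(r=1, lora_alpha=32, target_modules="all-linear")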
3. “FullFT and high-rank LoRAs have similar learning curves”
Counterintuitively, the blog post recommends using a higher learning rate than for full fine-tuning. In the table above, we used 1.0e-5 for LoRA and 1.0e-6 for full fine-tuning. In the TRL script, we can use `--learning_rate` to set the learning rate. The scaling used in LoRA makes the optimal learning rate approximately rank-independent.
4. “In some scenarios, LoRA is less tolerant of large batch sizes than full fine-tuning.”
The blog post recommends using an effective batch size below 32 because the authors found LoRA to be less tolerant of large batch sizes; this could not be mitigated by increasing the LoRA rank. In the TRL script, we can use `--per_device_train_batch_size` and `--gradient_accumulation_steps` to set the effective batch size.
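Putting the learning-rate and batch-size findings together, a minimal sketch of the corresponding GRPOConfig settings (values taken from the table above) looks like this:
from trl import GRPOConfig

training_args = GRPOConfig(
    learning_rate=1.0e-5,            # ~10x the 1.0e-6 used for full fine-tuning in the table
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,   # effective batch size = 1 * 4 = 4, below the recommended 32
)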
Takeaways
Using TRL, you can efficiently implement LoRA adapters to match full fine-tuning performance, applying the core insights (targeting all weight matrices, choosing the right rank, and managing batch size and learning rate) without the heavy compute cost of FullFT.
Citation
@article{schulman2025lora,
title = {{LoRA Without Regret}},
author = {John Schulman and Thinking Machines Lab},
year = 2025,
journal = {Thinking Machines Lab: Connectionism},
doi = {10.64434/tml.20250929},
note = {https://thinkingmachines.ai/blog/lora/}
}