LIMO challenges the conventional wisdom in mathematical reasoning by demonstrating that models can achieve superior performance with far fewer, but higher-quality, training samples:
| Model | AIME24 | MATH500 | Training Samples |
|---|---|---|---|
| LIMO (Ours) | 57.1% | 94.8% | 817 |
| Previous SOTA | 6.5% | 59.2% | 100k+ |
LIMO generalizes across a broad range of benchmarks:

| Benchmark | LIMO | Previous SOTA | Improvement |
|---|---|---|---|
| AIME24 | 57.1% | 6.5% | +50.6% |
| MATH500 | 94.8% | 59.2% | +35.6% |
| AMC23 | 92.0% | 40.6% | +51.4% |
| OlympiadBench | 66.8% | 36.7% | +30.1% |
| CHMath | 75.4% | 11.2% | +64.2% |
| Gaokao | 81.0% | 49.4% | +31.6% |
| Kaoyan | 73.4% | 32.7% | +40.7% |
| GradeSchool | 76.2% | 36.2% | +40.0% |
| Minerva | 44.9% | 47.1% | -2.2% |
| GPQA | 66.7% | 73.3% | -6.6% |
Our LIMO model is available on Hugging Face π€:
| Model | Backbone | Size | Link |
|---|---|---|---|
| LIMO | Qwen2.5-32B-Instruct | 32B | π€ |
We release our datasets through Hugging Face π€:
| Dataset | Description | Size | Link |
|---|---|---|---|
| LIMO | Training set used to train the LIMO model | 817 samples | π€ |
Note: We are gradually releasing additional datasets mentioned in our paper, including those used for comparative experiments, to facilitate reproducibility and further analysis by the research community. Stay tuned!
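To inspect the training data, you can load it with the `datasets` library. A minimal sketch, assuming the dataset is hosted under the `GAIR/LIMO` repo id with a default `train` split (check the dataset card for the exact field names):

```python
from datasets import load_dataset

# Load the 817-sample LIMO training set from the Hugging Face Hub
# (assumed repo id "GAIR/LIMO" and default "train" split)
dataset = load_dataset("GAIR/LIMO", split="train")

print(len(dataset))  # expected: 817
print(dataset[0])    # inspect the fields of one sample
```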
Our model is fine-tuned from Qwen2.5-32B-Instruct and is compatible with most mainstream inference frameworks, such as HF Transformers, vLLM, and TensorRT-LLM.
Using Hugging Face Transformers:

```bash
# Install required packages (accelerate is needed for device_map="auto")
pip install transformers torch accelerate
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Initialize model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "GAIR/LIMO",
    torch_dtype="auto",
    trust_remote_code=True,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("GAIR/LIMO", trust_remote_code=True)

# Prepare input messages (we use the following template and system prompt during training and inference)
messages = [
    {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{}."},
    {"role": "user", "content": "What is the result of 1+1?"}
]

# Format input using the chat template
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Tokenize input
inputs = tokenizer(text, return_tensors="pt").to(model.device)

# Generate response
outputs = model.generate(
    **inputs,
    max_new_tokens=32768,
    temperature=0.7,
    top_p=0.95,
    do_sample=True
)

# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
```
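Because LIMO generates long reasoning chains (note the 32k `max_new_tokens` budget), it can help to stream tokens as they are produced instead of waiting for the full completion. A minimal sketch using the `TextStreamer` utility from `transformers`, reusing `model`, `tokenizer`, and `inputs` from the example above:

```python
from transformers import TextStreamer

# Print decoded tokens to stdout as they are generated,
# skipping the prompt and any special tokens
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    **inputs,
    max_new_tokens=32768,
    temperature=0.7,
    top_p=0.95,
    do_sample=True,
    streamer=streamer,
)
```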
Using vLLM:

```bash
# Install required packages
pip install vllm
```
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

# Initialize the model
llm = LLM(
    model="GAIR/LIMO",
    tensor_parallel_size=4,  # adjust based on available GPUs
    trust_remote_code=True,
    swap_space=60,
    gpu_memory_utilization=0.96,
)

# Prepare input messages (we use the following template and system prompt during training and inference)
messages = [
    {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{}."},
    {"role": "user", "content": "What is the result of 1+1?"}
]

# Set up the tokenizer and format the input with the chat template
tokenizer = AutoTokenizer.from_pretrained("GAIR/LIMO", trust_remote_code=True)
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Configure generation parameters
sampling_params = SamplingParams(
    temperature=0.7,
    max_tokens=32768,
    top_p=0.95,
)

# Generate response
output = llm.generate(text, sampling_params)
print(output[0].outputs[0].text)
```
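Since the system prompt instructs the model to put its final answer inside `\boxed{}`, the answer can be pulled out of the generated text programmatically. The `extract_boxed_answer` helper below is our own illustrative sketch, not part of the LIMO release; it walks braces manually so that nested expressions such as `\boxed{\frac{1}{2}}` are handled:

```python
def extract_boxed_answer(text: str) -> str | None:
    """Return the contents of the last \\boxed{...} in `text`, or None."""
    marker = r"\boxed{"
    start = text.rfind(marker)
    if start == -1:
        return None
    depth, chars = 1, []
    for ch in text[start + len(marker):]:
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return "".join(chars)
        chars.append(ch)
    return None  # unbalanced braces

# Usage example (hypothetical model output)
sample = r"Therefore, the final answer is \boxed{2}."
print(extract_boxed_answer(sample))  # -> 2
```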
This project is licensed under the MIT License - see the LICENSE file for details.
To cite this work:

```bibtex
@misc{ye2025limoreasoning,
      title={LIMO: Less is More for Reasoning},
      author={Yixin Ye and Zhen Huang and Yang Xiao and Ethan Chern and Shijie Xia and Pengfei Liu},
      year={2025},
      eprint={2502.03387},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.03387},
}
```