Aymeric Roucher (m-ric), Leading Agents at Hugging Face 🤗



Less is More for Reasoning (LIMO): a 32B model fine-tuned with 817 examples can beat o1-preview on math reasoning! 🤯

Do we really need o1's huge RL procedure for reasoning to emerge? Apparently not.
Researchers from Shanghai Jiao Tong University just demonstrated that a small set of carefully selected examples, used for plain SFT, can boost math performance in large language models. No huge datasets or RL procedures needed.

Their procedure lets Qwen2.5-32B-Instruct jump from 6.5% to 57% on AIME and from 59% to 95% on MATH, while using only 1% of the training data used in previous approaches.

⚡ The Less-is-More Reasoning Hypothesis:
‣ A minimal set of precise examples that showcase optimal reasoning patterns matters more than sheer quantity
‣ Pre-trained knowledge plus sufficient inference-time computation is enough to level up math skills (a decoding sketch follows this list)
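
To make the inference-time point concrete, here is a hedged decoding sketch: give the model a token budget large enough for a long, self-verifying chain to unfold. The checkpoint id GAIR/LIMO and the prompt are assumptions for illustration, not confirmed by the post; substitute whatever the authors actually released.

```python
# Hedged sketch: decoding with a generous token budget so an extended,
# self-verifying reasoning chain has room to unfold.
# NOTE: "GAIR/LIMO" is an assumed checkpoint id, used for illustration only.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="GAIR/LIMO",   # assumption: the released LIMO checkpoint
    torch_dtype="auto",
    device_map="auto",
)

prompt = "Find all real x such that x^2 - 5x + 6 = 0. Reason step by step and verify your answer."
out = generator(prompt, max_new_tokens=8192, do_sample=False)  # large budget for long chains
print(out[0]["generated_text"])
```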

➡️ Core techniques (see the training sketch after this list):
‣ High-quality reasoning chains with self-verification steps
‣ 817 handpicked problems that encourage deeper reasoning
‣ Enough inference-time computation to allow extended reasoning
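
For readers who want to try the recipe, here is a minimal, hedged training sketch using TRL's SFTTrainer. The dataset id "GAIR/LIMO" and its field names ("question", "solution") are assumptions, and the hyperparameters are illustrative rather than the authors' exact settings; check the paper's released artifacts before running.

```python
# Hedged sketch: plain SFT on a small, curated reasoning set, in the spirit of LIMO.
# Assumptions: dataset id "GAIR/LIMO", fields "question"/"solution"; hyperparameters illustrative.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("GAIR/LIMO", split="train")  # ~817 curated problems (assumed id)

def to_text(example):
    # Pack the problem and its long, self-verifying reasoning chain into one training string.
    return {"text": example["question"] + "\n\n" + example["solution"]}

dataset = dataset.map(to_text, remove_columns=dataset.column_names)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-32B-Instruct",   # the base model reported in the post
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="limo-sft",
        max_seq_length=16384,             # long reasoning chains need long context
        num_train_epochs=15,              # tiny dataset, so multiple epochs; tune for your setup
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=1e-5,
        bf16=True,
    ),
)
trainer.train()
```

The design choice mirrors the post's thesis: no reward model, no RL loop, just supervised fine-tuning on a few hundred carefully curated reasoning chains.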

💪 Efficiency gains:
‣ Only 817 examples instead of 100k+
‣ 40.5% absolute improvement across 10 diverse benchmarks, outperforming models trained on 100x more data

This really challenges the notion that SFT leads to memorization rather than generalization! And it opens up reasoning research to GPU-poor researchers 🚀

Read the full paper here 👉  LIMO: Less is More for Reasoning (2502.03387)
