R1-Code-Interpreter: Training LLMs to Reason with Code via Supervised and Reinforcement Learning

This model was presented in the paper *R1-Code-Interpreter: Training LLMs to Reason with Code via Supervised and Reinforcement Learning*.

Our code builds on LLaMA-Factory, VeRL, and Search-R1 for SFT and RL training, and on SymBench, BIG-Bench-Hard, and reasoning-gym for the reasoning and planning datasets and benchmarks.

πŸ“ Introduction

R1-Code-Interpreter is the first framework to train LLMs for step-by-step code reasoning using multi-turn supervised fine-tuning and reinforcement learning. By curating 144 diverse reasoning and planning tasks, we enable Qwen-2.5 models (3B/7B/14B) to autonomously decide when and how to invoke code. Our best model, R1-CI-14B, outperforms GPT-4o (text-only) and approaches GPT-4o with Code Interpreter, showing emergent self-checking behavior via code generation.
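To illustrate the multi-turn code-reasoning loop described above, here is a minimal sketch of the interpreter side: the model's turn is scanned for a fenced Python block, and if one is present it is executed and its output is returned as the next observation. This is a simplified illustration, not the repository's implementation; the helper names `interpreter_turn` and `run_code`, and the assumption that code is marked with fenced Python blocks, are ours (the actual R1-CI prompt format and sandboxing may differ).

```python
import contextlib
import io
import re

# Matches the first fenced Python block in a model turn (assumed format).
CODE_BLOCK = re.compile(r"```python\n(.*?)```", re.DOTALL)

def run_code(code: str) -> str:
    """Execute a Python snippet and capture its stdout.

    A real interpreter loop would sandbox this call; exec() here is
    only for illustration.
    """
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            exec(code, {})
    except Exception as e:
        return f"Error: {e}"
    return buf.getvalue()

def interpreter_turn(model_output: str):
    """If the model's turn contains a code block, run it and return the
    execution result to feed back as the next observation; otherwise
    return None (the model chose to answer in text only)."""
    match = CODE_BLOCK.search(model_output)
    if match is None:
        return None
    return run_code(match.group(1))
```

In a full loop, the returned execution result would be appended to the conversation and the model queried again, repeating until it emits a text-only answer; the "autonomously decide when and how to invoke code" behavior corresponds to the model choosing whether its turn contains a code block at all.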

GitHub repository

Project page: https://huggingface.co/yongchao98

Model size: 14.8B parameters
Tensor type: F32 (Safetensors)

Model: yongchao98/R1-Code-Interpreter-14B (a quantized version is available)