Llama-3.1-8B Math Reasoning Model
SFT checkpoints of Llama-3.1-8B for mathematical reasoning, released as artifacts of https://arxiv.org/abs/2509.11167.
Model Details
- Base model: Llama-3.1-8B
- Training dataset: tulu3_mixture_math_reasoning
- Learning rate: 5e-06
- Effective batch size: 128
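The hyperparameters above could be expressed in a training config along these lines. This is an illustrative sketch only: the field names and the per-device/accumulation split are assumptions, not the exact configuration used; only the values listed in this card are reported.

```yaml
# Illustrative config fragment (field names are assumptions, not the actual file)
model_name_or_path: meta-llama/Llama-3.1-8B
dataset_name: tulu3_mixture_math_reasoning
learning_rate: 5.0e-6
# Only the effective batch size of 128 is reported; the split below is hypothetical.
# effective batch = per_device_batch * gradient_accumulation * num_gpus
per_device_train_batch_size: 2
gradient_accumulation_steps: 8
num_gpus: 8
```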
Export Files
This repository also includes exported training-state files, intended for checkpoint (state) averaging and related post-hoc techniques.
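State averaging, as referenced above, combines several saved checkpoints by averaging their parameters element-wise. The sketch below illustrates the idea on plain Python dicts standing in for model state dicts; in practice the states would be tensors loaded from the exported files, and the helper name is hypothetical.

```python
# Minimal sketch of checkpoint ("state") averaging. Real checkpoints hold
# tensors; flat dicts of floats are used here purely for illustration.

def average_states(states):
    """Element-wise average of a list of parameter dicts with matching keys."""
    if not states:
        raise ValueError("need at least one state")
    n = len(states)
    return {k: sum(s[k] for s in states) / n for k in states[0]}

# Toy example: three "checkpoints" of a two-parameter model.
ckpts = [
    {"w": 1.0, "b": 0.0},
    {"w": 2.0, "b": 1.0},
    {"w": 3.0, "b": 2.0},
]
avg = average_states(ckpts)
print(avg)  # {'w': 2.0, 'b': 1.0}
```

For tensor checkpoints the same loop applies, typically accumulating in float32 to avoid precision loss when the weights are stored in half precision.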
Model tree for pmahdavi/Llama-3.1-8B-math-reasoning
- Base model: meta-llama/Llama-3.1-8B
Evaluation results
- Training loss on tulu3_mixture_math_reasoning: 0.980 (self-reported)