Llama-3.1-8B General Model
This is a Llama-3.1-8B model fine-tuned for general instruction-following tasks. The checkpoint was released alongside the paper at https://arxiv.org/abs/2509.11167.
Model Details
- Base model: meta-llama/Llama-3.1-8B
- Training dataset: tulu3_mixture_general
- Learning rate: 5e-06
- Effective batch size: 128
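A minimal loading and generation sketch with the transformers library, assuming the checkpoint is hosted as pmahdavi/Llama-3.1-8B-general and follows the standard Llama-3.1 chat template; adjust dtype and device placement to your hardware.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pmahdavi/Llama-3.1-8B-general"

# Load tokenizer and model; bfloat16 + device_map="auto" is a common single-GPU setup.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a chat-formatted prompt and generate a short response.
messages = [{"role": "user", "content": "Summarize instruction tuning in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```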
Export Files
This repository includes export files for state averaging and other advanced techniques.
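As an illustration of state averaging, the sketch below averages several exported checkpoints parameter by parameter; the file names are hypothetical placeholders, not the actual export files in this repository, and the exact export format may differ.

```python
import torch

# Hypothetical export file names; substitute the actual files from this repository.
export_paths = ["export_step_1000.pt", "export_step_2000.pt", "export_step_3000.pt"]

averaged = None
for path in export_paths:
    state = torch.load(path, map_location="cpu")
    if averaged is None:
        # Accumulate in float32 to avoid precision loss during summation.
        averaged = {k: v.float().clone() for k, v in state.items()}
    else:
        for k, v in state.items():
            averaged[k] += v.float()

# Divide by the number of checkpoints to get the averaged state dict,
# then load it back into the base architecture before use.
averaged = {k: v / len(export_paths) for k, v in averaged.items()}
torch.save(averaged, "averaged_state.pt")
```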
Evaluation results
- Training loss on tulu3_mixture_general (self-reported): 1.030