Overview
Llama-3.1-SuperNova-Lite is an 8B-parameter model developed by Arcee.ai, based on the Llama-3.1-8B-Instruct architecture. It is a distilled version of the much larger Llama-3.1-405B-Instruct, trained on logits extracted offline from the 405B teacher. This 8B variant of Llama-3.1-SuperNova retains much of the larger model's performance while offering strong instruction-following capabilities and domain-specific adaptability.
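For intuition, logit-based distillation of this kind is commonly implemented as a temperature-scaled KL-divergence loss between the student's logits and the teacher's precomputed logits. The PyTorch sketch below is illustrative only, not Arcee's actual pipeline; the tensor names and temperature are assumptions.

```python
# Illustrative logit-distillation loss (a sketch, not Arcee's pipeline).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL(teacher || student) over the vocabulary.

    Both tensors have shape (batch, seq_len, vocab_size); here
    `teacher_logits` stands in for logits extracted offline from the
    405B teacher.
    """
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher,
                    reduction="batchmean") * temperature ** 2
```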
The model was trained using a state-of-the-art distillation pipeline and an instruction dataset generated with EvolKit, ensuring accuracy and efficiency across a wide range of tasks. For more information on its training, visit blog.arcee.ai.
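EvolKit's role is to grow a high-quality instruction set by having an LLM iteratively rewrite seed instructions into harder ones. The loop below is a generic Evol-Instruct-style sketch under that assumption, not EvolKit's actual API; `llm_complete` and the prompt text are hypothetical.

```python
# Generic instruction-evolution loop (illustrative; not EvolKit's API).
EVOLVE_PROMPT = (
    "Rewrite the following instruction so it is more complex and requires "
    "deeper reasoning, while remaining answerable:\n\n{instruction}"
)

def evolve_instruction(seed: str, rounds: int, llm_complete) -> str:
    """Evolve a seed instruction through `rounds` LLM rewriting passes.

    `llm_complete` is a hypothetical callable that sends a prompt to an
    LLM and returns the completion text.
    """
    instruction = seed
    for _ in range(rounds):
        instruction = llm_complete(EVOLVE_PROMPT.format(instruction=instruction))
    return instruction
```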
Llama-3.1-SuperNova-Lite excels in both benchmark performance and real-world applications, delivering the power of large-scale models in a compact, efficient form, ideal for organizations seeking high performance with reduced resource requirements.
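Since this repository ships GGUF quantizations, one minimal way to try the model locally is with llama-cpp-python. The snippet below is a sketch: the quantization filename pattern is an assumption, so check the repo's file list for the variant you want.

```python
# Minimal local-inference sketch via llama-cpp-python
# (pip install llama-cpp-python huggingface_hub).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="arcee-ai/Llama-3.1-SuperNova-Lite-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant name; pick a file from the repo
    n_ctx=4096,               # context window to allocate
)

out = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "In one paragraph, what is model distillation?"}],
)
print(out["choices"][0]["message"]["content"])
```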
Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric              | Value |
|---------------------|------:|
| Avg.                | 29.73 |
| IFEval (0-Shot)     | 80.17 |
| BBH (3-Shot)        | 31.57 |
| MATH Lvl 5 (4-Shot) | 15.48 |
| GPQA (0-shot)       |  7.49 |
| MuSR (0-shot)       | 11.67 |
| MMLU-PRO (5-shot)   | 31.97 |

Avg. is the unweighted mean of the six benchmark scores.
Model tree for arcee-ai/Llama-3.1-SuperNova-Lite-GGUF
Base model: meta-llama/Llama-3.1-8B
Evaluation results
All results are from the Open LLM Leaderboard:
- IFEval (0-Shot), strict accuracy: 80.17
- BBH (3-Shot), normalized accuracy: 31.57
- MATH Lvl 5 (4-Shot), exact match: 15.48
- GPQA (0-shot), acc_norm: 7.49
- MuSR (0-shot), acc_norm: 11.67
- MMLU-PRO (5-shot), accuracy on the test set: 31.97