# Lyra-Gutenberg-12B
Sao10K/MN-12B-Lyra-v1 finetuned on jondurbin/gutenberg-dpo-v0.1.
## Method
Finetuned using an A100 on Google Colab for 3 epochs.
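The card doesn't include the training script, but a preference-tuning run on this dataset would typically look like the trl `DPOTrainer` sketch below. Only the model, dataset, and epoch count come from this card; every other hyperparameter (batch size, learning rate, `beta`) is an illustrative assumption.

```python
# Minimal sketch of a DPO finetune on gutenberg-dpo-v0.1 with trl.
# Assumed hyperparameters -- only the model, dataset, and 3 epochs
# come from this card.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "Sao10K/MN-12B-Lyra-v1"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="bfloat16")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# gutenberg-dpo-v0.1 ships prompt/chosen/rejected columns,
# the format DPOTrainer consumes directly.
dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

config = DPOConfig(
    output_dir="lyra-gutenberg-12b",
    num_train_epochs=3,              # matches the 3 epochs reported above
    per_device_train_batch_size=1,   # assumption: small batch to fit one A100
    gradient_accumulation_steps=8,   # assumption
    learning_rate=5e-6,              # assumption
    beta=0.1,                        # DPO KL-penalty strength (trl default)
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```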
## Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---:|
| Avg. | 22.57 |
| IFEval (0-Shot, strict accuracy) | 34.95 |
| BBH (3-Shot, normalized accuracy) | 36.99 |
| MATH Lvl 5 (4-Shot, exact match) | 8.31 |
| GPQA (0-Shot, acc_norm) | 11.19 |
| MuSR (0-Shot, acc_norm) | 14.76 |
| MMLU-PRO (5-Shot, accuracy) | 29.20 |
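These scores come from the Open LLM Leaderboard's evaluation harness; a hedged sketch of reproducing them locally with lm-evaluation-harness follows. The `MODEL_ID` placeholder is hypothetical: this repo hosts GGUF quantizations, and the harness's `hf` backend needs the full-precision checkpoint they were converted from. The leaderboard's exact generation settings may also differ from the defaults here.

```python
# Hedged reproduction sketch using lm-evaluation-harness (lm-eval >= 0.4).
# The "leaderboard" task group bundles IFEval, BBH, MATH Lvl 5, GPQA,
# MuSR, and MMLU-PRO as run on the Open LLM Leaderboard.
import lm_eval

MODEL_ID = "Lyra-Gutenberg-mistral-nemo-12B"  # hypothetical id; substitute the real full-precision repo

results = lm_eval.simple_evaluate(
    model="hf",
    model_args=f"pretrained={MODEL_ID},dtype=bfloat16",
    tasks=["leaderboard"],
    batch_size=1,
)
print(results["results"])
```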
## Available quantizations

GGUF files are provided at 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit precision.
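A quick way to run one of these quantizations is llama-cpp-python, as in the sketch below. The filename glob is an assumption; check the repo's file list for the exact GGUF names.

```python
# Load a GGUF quant straight from the Hub with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mav23/Lyra-Gutenberg-mistral-nemo-12B-GGUF",
    filename="*Q4_K_M.gguf",  # assumed glob for a 4-bit quant; adjust to the actual filename
    n_ctx=8192,               # context length; pick what fits your RAM
)

out = llm("Once upon a time,", max_tokens=128)
print(out["choices"][0]["text"])
```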
## Model tree for mav23/Lyra-Gutenberg-mistral-nemo-12B-GGUF

- Base model: Sao10K/MN-12B-Lyra-v1
- Training dataset: jondurbin/gutenberg-dpo-v0.1