Lyra-Gutenberg-12B

A finetune of Sao10K/MN-12B-Lyra-v1 on jondurbin/gutenberg-dpo-v0.1.
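For reference, a minimal text-generation sketch with transformers is shown below. The repo id and the BF16 dtype come from this card; the sampling settings and the assumption that the tokenizer ships a chat template are illustrative.

```python
# Minimal generation sketch, assuming transformers and torch are installed and
# a GPU with enough memory for a 12B model in BF16 is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/Lyra-Gutenberg-mistral-nemo-12B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are distributed in BF16
    device_map="auto",
)

# Assumes the tokenizer provides a chat template, as Mistral-Nemo-based chat models typically do.
messages = [{"role": "user", "content": "Write the opening paragraph of a gothic novel."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```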

Method

Finetuned for 3 epochs on an A100 via Google Colab, following the procedure described in Fine-tune Llama 3 with ORPO.
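
As a rough illustration of that setup, here is a sketch using trl's ORPOTrainer. Only the base model, the dataset, and the 3 epochs come from this card; every other hyperparameter is an assumption.

```python
# Sketch of the finetuning setup with trl's ORPOTrainer. Base model, dataset,
# and epoch count come from this card; batch size, accumulation, and learning
# rate are illustrative assumptions.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_model = "Sao10K/MN-12B-Lyra-v1"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)

# gutenberg-dpo-v0.1 provides prompt / chosen / rejected columns, the format
# that trl's preference trainers expect.
dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

config = ORPOConfig(
    output_dir="lyra-gutenberg-12b",
    num_train_epochs=3,              # stated in the card
    per_device_train_batch_size=1,   # assumption: sized for a single A100
    gradient_accumulation_steps=8,   # assumption
    learning_rate=5e-6,              # assumption
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,  # newer trl versions use processing_class= instead
)
trainer.train()
```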

Open LLM Leaderboard Evaluation Results

Detailed results are available on the Open LLM Leaderboard.

Metric                 Value
Avg.                   22.57
IFEval (0-shot)        34.95
BBH (3-shot)           36.99
MATH Lvl 5 (4-shot)     8.31
GPQA (0-shot)          11.19
MuSR (0-shot)          14.76
MMLU-PRO (5-shot)      29.20
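
The headline Avg. is the unweighted mean of the six benchmark scores above:

```python
# Quick check that the reported average matches the six individual scores.
scores = {
    "IFEval (0-shot)": 34.95,
    "BBH (3-shot)": 36.99,
    "MATH Lvl 5 (4-shot)": 8.31,
    "GPQA (0-shot)": 11.19,
    "MuSR (0-shot)": 14.76,
    "MMLU-PRO (5-shot)": 29.20,
}
print(f"{sum(scores.values()) / len(scores):.2f}")  # 22.57
```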
Weights: Safetensors, 12.2B params, BF16.
