Qwen2.5-Gutenberg-Doppel-14B

Qwen/Qwen2.5-14B-Instruct fine-tuned on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.
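Both Gutenberg datasets are DPO-style preference sets. A minimal sketch of what one record looks like and how to sanity-check it, assuming the common preference-pair field names (prompt / chosen / rejected), which are a convention and not taken from the dataset cards:

```python
# Sketch of the preference-pair record layout used by DPO-style datasets
# such as jondurbin/gutenberg-dpo-v0.1. The field names (prompt, chosen,
# rejected) follow the usual DPO convention and are assumed here.

REQUIRED_FIELDS = ("prompt", "chosen", "rejected")

def is_valid_pair(record: dict) -> bool:
    """Return True if the record has every preference-pair field, non-empty."""
    return all(isinstance(record.get(f), str) and record[f] for f in REQUIRED_FIELDS)

# Hypothetical example record for illustration only.
example = {
    "prompt": "Write the opening paragraph of a gothic novel set in Prague.",
    "chosen": "The river fog climbed the castle steps like a slow tide...",
    "rejected": "Here is an opening paragraph: Once upon a time in Prague...",
}

print(is_valid_pair(example))            # True
print(is_valid_pair({"prompt": "hi"}))   # False: missing chosen/rejected
```

During preference tuning the model is pushed toward the `chosen` completion and away from the `rejected` one for the same prompt.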

Method

ORPO-tuned for 3 epochs on 4x NVIDIA A40 GPUs.
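For reference, ORPO (Odds Ratio Preference Optimization, Hong et al., 2024) needs no separate reference model: it adds a weighted odds-ratio penalty over the chosen/rejected pair to the ordinary supervised loss. A sketch of the objective, with λ weighting the preference term:

```latex
% ORPO objective: SFT negative log-likelihood plus a weighted odds-ratio term
\mathcal{L}_{\mathrm{ORPO}}
  = \mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[\mathcal{L}_{\mathrm{SFT}} + \lambda\,\mathcal{L}_{\mathrm{OR}}\right],
\qquad
\mathcal{L}_{\mathrm{OR}}
  = -\log \sigma\!\left(\log \frac{\mathrm{odds}_\theta(y_w \mid x)}{\mathrm{odds}_\theta(y_l \mid x)}\right),
\qquad
\mathrm{odds}_\theta(y \mid x) = \frac{P_\theta(y \mid x)}{1 - P_\theta(y \mid x)}
```

Here y_w is the chosen completion and y_l the rejected one; the odds-ratio term increases the gap between their likelihoods while the SFT term keeps the model fluent on the chosen text.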

Thank you @ParasiticRogue for sponsoring.

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

Metric               Value
Avg.                 32.30
IFEval (0-shot)      80.91
BBH (3-shot)         48.24
MATH Lvl 5 (4-shot)   0.00
GPQA (0-shot)        11.07
MuSR (0-shot)        10.02
MMLU-PRO (5-shot)    43.57
Model size: 14.8B params (Safetensors, BF16)

Model tree for nbeerbower/Qwen2.5-Gutenberg-Doppel-14B

Base model: Qwen/Qwen2.5-14B (this model is one of its 124 finetunes)
Merges: 2 models
Quantizations: 10 models
