Mistral 7B Zephyr DPO V2
The Zephyr DPO recipe applied on top of Mistral 7B (new recipe using the ChatML chat format; a usage sketch is shown after the model description).
Model description
- Model type: A 7.2B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- Language(s) (NLP): Primarily English
- Finetuned from model: wandb/mistral-7b-zephyr-sft
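Because the model is trained with the ChatML chat format, the natural way to prompt it is through the tokenizer's chat template. The snippet below is a minimal usage sketch, assuming the chat template is set on this checkpoint; the sampling parameters are illustrative, not recommended settings.

```python
# Minimal usage sketch: load the checkpoint and chat via the tokenizer's
# chat template (assumed to be the ChatML template on this model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wandb/mistral-7b-zephyr-dpo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain DPO in one sentence."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```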
Recipe
We trained using the alignment handbook DPO recipe, logging to W&B (see the training sketch below).
Visit the W&B workspace here
Compute provided by Lambda Labs - 8xA100 80GB node
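For reference, the core of a DPO run in the alignment-handbook style can be sketched with trl's DPOTrainer. This is a minimal illustration rather than the exact configuration used here: the preference dataset (HuggingFaceH4/ultrafeedback_binarized), the hyperparameters, and the DPOConfig/processing_class arguments are assumptions and vary across trl versions.

```python
# Minimal DPO sketch in the alignment-handbook style; dataset, hyperparameters,
# and exact trl API (DPOConfig, processing_class) are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

sft_model = "wandb/mistral-7b-zephyr-sft"  # SFT checkpoint this model starts from
model = AutoModelForCausalLM.from_pretrained(sft_model)
tokenizer = AutoTokenizer.from_pretrained(sft_model)

# Preference data with "prompt"/"chosen"/"rejected" columns (assumed dataset).
train_dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

args = DPOConfig(
    output_dir="mistral-7b-zephyr-dpo",
    beta=0.1,                        # DPO temperature (illustrative)
    learning_rate=5e-7,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,      # `tokenizer=` in older trl releases
)
trainer.train()
```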
Open LLM Leaderboard Evaluation Results
Detailed results can be found here; a reproduction sketch follows the table below.
| Metric | Value |
|---|---|
| Avg. | 63.22 |
| AI2 Reasoning Challenge (25-shot) | 63.05 |
| HellaSwag (10-shot) | 85.54 |
| MMLU (5-shot) | 61.88 |
| TruthfulQA (0-shot) | 59.30 |
| Winogrande (5-shot) | 78.53 |
| GSM8k (5-shot) | 31.01 |
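The individual scores above can be approximated locally with EleutherAI's lm-evaluation-harness. The sketch below scores a single task (ARC, 25-shot) via the `lm_eval.simple_evaluate` API; the call shape follows lm-eval v0.4.x and is an assumption, since the leaderboard's exact harness version and settings may differ.

```python
# Hedged sketch: score ARC (25-shot) with lm-evaluation-harness (v0.4.x API assumed).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=wandb/mistral-7b-zephyr-dpo,dtype=bfloat16",
    tasks=["arc_challenge"],   # AI2 Reasoning Challenge
    num_fewshot=25,            # leaderboard setting for ARC
    batch_size=8,
)
print(results["results"]["arc_challenge"])
```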
Evaluation results
- AI2 Reasoning Challenge (25-shot), test set: 63.05 normalized accuracy (Open LLM Leaderboard)
- HellaSwag (10-shot), validation set: 85.54 normalized accuracy (Open LLM Leaderboard)
- MMLU (5-shot), test set: 61.88 accuracy (Open LLM Leaderboard)
- TruthfulQA (0-shot), validation set: 59.30 mc2 (Open LLM Leaderboard)
- Winogrande (5-shot), validation set: 78.53 accuracy (Open LLM Leaderboard)
- GSM8k (5-shot), test set: 31.01 accuracy (Open LLM Leaderboard)