Update README.md
README.md CHANGED

```diff
@@ -18,15 +18,15 @@ Fine-tuned On [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
 You can use ChatML format.
 
 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
-Detailed results can be found [
+Detailed results can be found [Here](https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/Q-bert/Optimus-7B/results_2023-12-04T18-59-49.207215.json)
 
 | Metric | Value |
 |-----------------------|---------------------------|
-| Avg. |
-| ARC (25-shot) |
-| HellaSwag (10-shot) |
-| MMLU (5-shot) |
-| TruthfulQA (0-shot) |
-| Winogrande (5-shot) |
-| GSM8K (5-shot) |
+| Avg. | 69.09 |
+| ARC (25-shot) | 65.44 |
+| HellaSwag (10-shot) | 85.41 |
+| MMLU (5-shot) | 63.61 |
+| TruthfulQA (0-shot) | 55.79 |
+| Winogrande (5-shot) | 78.77 |
+| GSM8K (5-shot) | 65.50 |
 
```
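Since the README notes that the model accepts ChatML-formatted prompts, here is a minimal usage sketch. It assumes the repo id `Q-bert/Optimus-7B` (taken from the results URL in the diff above) and that the repo's tokenizer ships a ChatML `chat_template`; if it does not, the `<|im_start|>`/`<|im_end|>` markup would need to be assembled by hand.

```python
# Minimal sketch: prompting the model with ChatML via transformers.
# Assumes repo id "Q-bert/Optimus-7B" and a tokenizer-provided ChatML chat_template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Q-bert/Optimus-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 12 * 7?"},
]

# apply_chat_template renders the ChatML markup
# (<|im_start|>role ... <|im_end|>) the model was tuned on.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Slice off the echoed prompt so only the new assistant turn is printed.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```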