Adding Evaluation Results #3
by leaderboard-pr-bot - opened

README.md CHANGED
@@ -52,4 +52,17 @@ sequences = pipeline(
 )
 for seq in sequences:
     print(f"Result: {seq['generated_text']}")
-```
+```
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PY007__TinyLlama-1.1B-intermediate-step-480k-1T)
+
+| Metric               | Value |
+|----------------------|-------|
+| Avg.                 | 30.06 |
+| ARC (25-shot)        | 30.89 |
+| HellaSwag (10-shot)  | 52.97 |
+| MMLU (5-shot)        | 25.0  |
+| TruthfulQA (0-shot)  | 39.55 |
+| Winogrande (5-shot)  | 57.3  |
+| GSM8K (5-shot)       | 0.53  |
+| DROP (3-shot)        | 4.18  |
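
The hunk lands at the end of the README's text-generation example, so only its last few lines appear as diff context. For reference, a minimal sketch of what that surrounding snippet likely looks like is shown below; the checkpoint id is inferred from the results-dataset link above, and the prompt and sampling arguments are illustrative assumptions, not taken from this PR.

```python
# Minimal sketch of the README's text-generation example (assumptions noted inline).
import torch
import transformers

# Assumed checkpoint, inferred from the detailed-results dataset name.
model = "PY007/TinyLlama-1.1B-intermediate-step-480k-1T"

# Build a text-generation pipeline; the variable name matches the diff context
# ("sequences = pipeline(...)").
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

sequences = pipeline(
    "The TinyLlama project aims to",  # assumed example prompt
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    max_length=200,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```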