# llama2_7b_mmlu

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 46.31 |
| ARC (25-shot)       | 56.14 |
| HellaSwag (10-shot) | 79.13 |
| MMLU (5-shot)       | 60.04 |
| TruthfulQA (0-shot) | 40.95 |
| Winogrande (5-shot) | 74.43 |
| GSM8K (5-shot)      |  7.88 |
| DROP (3-shot)       |  5.59 |
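
The `Avg.` row appears to be the unweighted arithmetic mean of the seven benchmark scores. A minimal Python sketch to check this (the score values are taken from the table above):

```python
# Verify that "Avg." is the plain mean of the seven benchmark scores.
scores = {
    "ARC (25-shot)": 56.14,
    "HellaSwag (10-shot)": 79.13,
    "MMLU (5-shot)": 60.04,
    "TruthfulQA (0-shot)": 40.95,
    "Winogrande (5-shot)": 74.43,
    "GSM8K (5-shot)": 7.88,
    "DROP (3-shot)": 5.59,
}
avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # prints 46.31, matching the table
```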