# CodeLlama-13b-hf

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Value |
|--------|------:|
| Avg. | 37.91 |
| ARC (25-shot) | 40.87 |
| HellaSwag (10-shot) | 63.35 |
| MMLU (5-shot) | 32.81 |
| TruthfulQA (0-shot) | 43.79 |
| Winogrande (5-shot) | 67.17 |
| GSM8K (5-shot) | 12.13 |
| DROP (3-shot) | 5.25 |
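
The `Avg.` row appears to be the unweighted mean of the seven benchmark scores above; a minimal sketch verifying that assumption:

```python
# Scores copied from the table above; assumes Avg. is a simple unweighted mean.
scores = {
    "ARC (25-shot)": 40.87,
    "HellaSwag (10-shot)": 63.35,
    "MMLU (5-shot)": 32.81,
    "TruthfulQA (0-shot)": 43.79,
    "Winogrande (5-shot)": 67.17,
    "GSM8K (5-shot)": 12.13,
    "DROP (3-shot)": 5.25,
}

avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # -> Avg. = 37.91, matching the reported value
```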