Adding Evaluation Results #2
by leaderboard-pr-bot - opened

README.md CHANGED
@@ -188,4 +188,17 @@ We filtered the data from the Web based on its proximity to Wikipedia text and r
 Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
 
 **Use cases**
-LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
+LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_elinas__chronos-33b)
+
+| Metric              | Value |
+|---------------------|-------|
+| Avg.                | 49.42 |
+| ARC (25-shot)       | 62.2  |
+| HellaSwag (10-shot) | 83.48 |
+| MMLU (5-shot)       | 55.87 |
+| TruthfulQA (0-shot) | 46.67 |
+| Winogrande (5-shot) | 78.3  |
+| GSM8K (5-shot)      | 13.04 |
+| DROP (3-shot)       | 6.41  |
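
As a sanity check on the added table, the Avg. row matches the simple arithmetic mean of the seven benchmark scores. A minimal Python sketch verifying this (the variable names are illustrative, not part of the leaderboard tooling):

```python
# Benchmark scores copied from the table above.
scores = {
    "ARC (25-shot)": 62.2,
    "HellaSwag (10-shot)": 83.48,
    "MMLU (5-shot)": 55.87,
    "TruthfulQA (0-shot)": 46.67,
    "Winogrande (5-shot)": 78.3,
    "GSM8K (5-shot)": 13.04,
    "DROP (3-shot)": 6.41,
}

# Arithmetic mean, rounded to two decimals as displayed on the leaderboard.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # -> 49.42
```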