eaddario committed
Commit 2ea150c · verified · 1 parent: 398ddf7

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -87,7 +87,7 @@ The process to generate these models is roughly as follows:
 | [DeepSeek-R1-Distill-Llama-8B-F16](./DeepSeek-R1-Distill-Llama-8B-F16.gguf) | 14.009216 ±0.118474 | 100% | N/A | N/A |
 
 ### ARC, HellaSwag, MMLU, Truthful QA and WinoGrande scores
 
-Scores generated using [llama-perplexity](<https://github.com/ggml-org/llama.cpp/tree/master/examples/perplexity>) with 750 tasks per test, and a context size of 768 tokens. Naive (`llama-quantize` with no optimization) Q4_K_M quantization included for comparison.
+Scores generated using [llama-perplexity](<https://github.com/ggml-org/llama.cpp/tree/master/examples/perplexity>) with 750 tasks per test, and a context size of 768 tokens.
 
 For the test data used in the generation of these scores, follow the appropiate links: [HellaSwag](<https://github.com/klosax/hellaswag_text_data>), [ARC, MMLU, Truthful QA](<https://huggingface.co/datasets/ikawrakow/validation-datasets-for-llama.cpp/tree/main>) and [WinoGrande](<https://huggingface.co/datasets/ikawrakow/winogrande-eval-for-llama.cpp/tree/main>)