akumaburn committed on
Commit 7d6a87d · verified · 1 Parent(s): d661a36

Update README.md

Files changed (1)
  1. README.md +11 -8
README.md CHANGED
@@ -35,21 +35,24 @@ datasets:
  Some GGUF quantizations are included as well.

  mistral-7b-openorca.Q8_0.gguf:
- - **MMLU-Test:** Final result: 41.5836 +/- 0.4174
+ - **MMLU-Test:** Final result: **41.5836 +/- 0.4174**
  - **Arc-Easy:** Final result: 72.6316 +/- 1.8691
- - **Truthful QA:** Final result: 32.0685 +/- 1.6339
+ - **Truthful QA:** Final result: **32.0685 +/- 1.6339**

  llama-3-8b-bnb-4bit.Q8_0.gguf:
- - **MMLU-Test:** Final result: 40.4074 +/- 0.4156
- - **Arc-Easy:** Final result: 73.8596 +/- 1.8421
+ - **MMLU-Test:** Final result: 40.4074 +/- 0.4156
+ - **Arc-Easy:** Final result: 73.8596 +/- 1.8421
+ - **Truthful QA:** Final result: 26.6830 +/- 1.5484

  **Open_Orca_Llama-3-8B-unsloth.Q8_0.gguf**:
- - **MMLU-Test:** Final result: 39.3818 +/- 0.4138
- - **Arc-Easy:** Final result: 67.3684 +/- 1.9656
+ - **MMLU-Test:** Final result: 39.3818 +/- 0.4138
+ - **Arc-Easy:** Final result: 67.3684 +/- 1.9656
+ - **Truthful QA:** Final result: 29.0086 +/- 1.5886

  Meta-Llama-3-8B.Q8_0.gguf:
- - **MMLU-Test:** Final result: 40.8664 +/- 0.4163
- - **Arc-Easy:** Final result: 74.3860 +/- 1.8299
+ - **MMLU-Test:** Final result: 40.8664 +/- 0.4163
+ - **Arc-Easy:** Final result: **74.3860 +/- 1.8299**
+ - **Truthful QA:** Final result: 28.6414 +/- 1.5826

  Llama.cpp Options For Testing:
  --samplers "tfs;typical;temp" --draft 32 --ctx-size 8192 --temp 0.82 --tfs 0.8 --typical 1.1 --repeat-last-n 512 --batch-size 8192 --repeat-penalty 1.0 --n-gpu-layers 100 --threads 12
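
The "Final result: X +/- Y" lines above match the output format of llama.cpp's perplexity tool in multiple-choice mode. Below is a minimal sketch of how the listed options might be combined into one evaluation run; the `llama-perplexity` binary name, the `-bf` and `--multiple-choice` flags, and the `mmlu-validation.bin` task file are assumptions to verify against the llama.cpp build actually used.

```python
import subprocess

# Sketch of one evaluation run (MMLU on the Meta-Llama-3-8B quant) using the
# options listed in the README. Binary name, the -bf/--multiple-choice flags,
# and the task-file name are assumptions -- check them against your llama.cpp build.
cmd = [
    "./llama-perplexity",               # llama.cpp perplexity tool (older builds name it ./perplexity)
    "-m", "Meta-Llama-3-8B.Q8_0.gguf",  # one of the GGUF quantizations listed above
    "-bf", "mmlu-validation.bin",       # hypothetical pre-built multiple-choice task file
    "--multiple-choice",                # prints "Final result: <accuracy> +/- <stderr>"
    # Options copied verbatim from the README:
    "--samplers", "tfs;typical;temp",
    "--draft", "32",
    "--ctx-size", "8192",
    "--temp", "0.82",
    "--tfs", "0.8",
    "--typical", "1.1",
    "--repeat-last-n", "512",
    "--batch-size", "8192",
    "--repeat-penalty", "1.0",
    "--n-gpu-layers", "100",
    "--threads", "12",
]
subprocess.run(cmd, check=True)
```

Swapping the `-m` and `-bf` arguments would reproduce the other model/benchmark combinations reported above.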