Update README.md

README.md

---
tags:
- merge
- mergekit
- cstr/Spaetzle-v80-7b
- cstr/Spaetzle-v79-7b
- cstr/Spaetzle-v81-7b
- cstr/Spaetzle-v71-7b
base_model:
- cstr/Spaetzle-v80-7b
- cstr/Spaetzle-v79-7b
- cstr/Spaetzle-v81-7b
- cstr/Spaetzle-v71-7b
license: cc-by-nc-4.0
language:
- de
- en
---

# Spaetzle-v85-7b

Spaetzle-v85-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [cstr/Spaetzle-v84-7b](https://huggingface.co/cstr/Spaetzle-v84-7b)
* [cstr/Spaetzle-v81-7b](https://huggingface.co/cstr/Spaetzle-v81-7b)
* [cstr/Spaetzle-v80-7b](https://huggingface.co/cstr/Spaetzle-v80-7b)
* [cstr/Spaetzle-v79-7b](https://huggingface.co/cstr/Spaetzle-v79-7b)
* [cstr/Spaetzle-v71-7b](https://huggingface.co/cstr/Spaetzle-v71-7b)

## Evaluation

EQ-Bench (v2_de): 65.32, Parseable: 171.0

| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|--------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[Spaetzle-v85-7b](https://huggingface.co/cstr/Spaetzle-v85-7b)| 44.35| 75.99| 67.23| 46.55| 58.53|

From [Intel/low_bit_open_llm_leaderboard](https://huggingface.co/datasets/Intel/ld_results/blob/main/cstr/Spaetzle-v85-7b-int4-inc/results_2024-06-12-21-00-34.json):

| Metric | Value |
|--------------|---------|
| ARC-c | 62.63 |
| ARC-e | 85.56 |
| Boolq | 87.77 |
| HellaSwag | 66.66 |
| Lambada | 70.35 |
| MMLU | 61.61 |
| Openbookqa | 37.2 |
| Piqa | 82.48 |
| Truthfulqa | 50.43 |
| Winogrande | 78.3 |
| Average | 68.3 |

### AGIEval
| Task |Version| Metric |Value| |Stderr|
|------------------------------|------:|--------|----:|---|-----:|
|agieval_aqua_rat | 0|acc |23.23|± | 2.65|
| | |acc_norm|22.44|± | 2.62|
|agieval_logiqa_en | 0|acc |37.33|± | 1.90|
| | |acc_norm|37.94|± | 1.90|
|agieval_lsat_ar | 0|acc |25.22|± | 2.87|
| | |acc_norm|23.04|± | 2.78|
|agieval_lsat_lr | 0|acc |49.41|± | 2.22|
| | |acc_norm|50.78|± | 2.22|
|agieval_lsat_rc | 0|acc |64.68|± | 2.92|
| | |acc_norm|63.20|± | 2.95|
|agieval_sat_en | 0|acc |77.67|± | 2.91|
| | |acc_norm|78.16|± | 2.89|
|agieval_sat_en_without_passage| 0|acc |46.12|± | 3.48|
| | |acc_norm|45.15|± | 3.48|
|agieval_sat_math | 0|acc |35.45|± | 3.23|
| | |acc_norm|34.09|± | 3.20|

Average: 44.35%

### GPT4All
| Task |Version| Metric |Value| |Stderr|
|-------------|------:|--------|----:|---|-----:|
|arc_challenge| 0|acc |63.82|± | 1.40|
| | |acc_norm|64.76|± | 1.40|
|arc_easy | 0|acc |85.90|± | 0.71|
| | |acc_norm|82.32|± | 0.78|
|boolq | 1|acc |87.61|± | 0.58|
|hellaswag | 0|acc |67.39|± | 0.47|
| | |acc_norm|85.36|± | 0.35|
|openbookqa | 0|acc |38.80|± | 2.18|
| | |acc_norm|48.80|± | 2.24|
|piqa | 0|acc |83.03|± | 0.88|
| | |acc_norm|84.17|± | 0.85|
|winogrande | 0|acc |78.93|± | 1.15|

Average: 75.99%

### TruthfulQA
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |50.80|± | 1.75|
| | |mc2 |67.23|± | 1.49|

Average: 67.23%

### Bigbench
| Task |Version| Metric |Value| |Stderr|
|------------------------------------------------|------:|---------------------|----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|54.74|± | 3.62|
|bigbench_date_understanding | 0|multiple_choice_grade|68.29|± | 2.43|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|39.53|± | 3.05|
|bigbench_geometric_shapes | 0|multiple_choice_grade|22.28|± | 2.20|
| | |exact_str_match |12.26|± | 1.73|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|32.80|± | 2.10|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|23.00|± | 1.59|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|59.00|± | 2.84|
|bigbench_movie_recommendation | 0|multiple_choice_grade|45.60|± | 2.23|
|bigbench_navigate | 0|multiple_choice_grade|51.10|± | 1.58|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|70.10|± | 1.02|
|bigbench_ruin_names | 0|multiple_choice_grade|52.68|± | 2.36|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|33.57|± | 1.50|
|bigbench_snarks | 0|multiple_choice_grade|71.27|± | 3.37|
|bigbench_sports_understanding | 0|multiple_choice_grade|74.54|± | 1.39|
|bigbench_temporal_sequences | 0|multiple_choice_grade|40.00|± | 1.55|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|21.52|± | 1.16|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|18.86|± | 0.94|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|59.00|± | 2.84|

Average: 46.55%

Average score: 58.53%
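
The four groups above (AGIEval, GPT4All, TruthfulQA, Bigbench) can be approximated with EleutherAI's lm-evaluation-harness. The snippet below is only a sketch using the current `lm_eval` Python API as an assumption about tooling; it is not the exact setup that produced the scores in this card, so harness version, task names, and few-shot settings may differ and results will not match exactly.

```python
# pip install lm-eval
import lm_eval

# Illustrative evaluation run; the scores reported in this card were produced
# with a different harness configuration, so expect some deviation.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=cstr/Spaetzle-v85-7b,dtype=float16",
    tasks=["arc_challenge", "hellaswag", "winogrande", "truthfulqa_mc2"],
    batch_size=8,
)
print(results["results"])
```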

## 🧩 Configuration

```yaml
models:
  - model: cstr/Spaetzle-v84-7b
    # no parameters necessary for base model
  - model: cstr/Spaetzle-v80-7b
    parameters:
      density: 0.65
      weight: 0.2
  - model: cstr/Spaetzle-v79-7b
    parameters:
      density: 0.65
      weight: 0.2
  - model: cstr/Spaetzle-v81-7b
    parameters:
      density: 0.65
      weight: 0.2
  - model: cstr/Spaetzle-v71-7b
    parameters:
      density: 0.65
      weight: 0.2
merge_method: dare_ties
base_model: cstr/Spaetzle-v84-7b
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
tokenizer_source: base
```
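
The merge can also be reproduced outside the LazyMergekit notebook by feeding this config to mergekit directly. The snippet below is a minimal sketch assuming a recent `mergekit` release and that the YAML above has been saved as `config.yaml`; the output path is a placeholder.

```python
# pip install mergekit
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the merge recipe shown above (assumed to be saved as config.yaml)
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Write the merged model to a local directory (placeholder path)
run_merge(
    merge_config,
    out_path="./Spaetzle-v85-7b",
    options=MergeOptions(
        cuda=True,            # set to False to merge on CPU
        copy_tokenizer=True,  # also write a tokenizer to the output directory
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```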

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "cstr/Spaetzle-v85-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
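
For more control over generation than the pipeline call above offers, the model can also be loaded directly with `AutoModelForCausalLM`. The snippet below is a minimal sketch using only standard `transformers` APIs; the prompt and sampling settings are illustrative, not values recommended by this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "cstr/Spaetzle-v85-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Build the prompt with the model's chat template and generate a reply
messages = [{"role": "user", "content": "What is a model merge?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```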