Commit bc9be8f
Parent(s): 799b7b1

update readme and model file
Files changed:

- .gitattributes +2 -0
- README.md +280 -3
- README_ZH.md +267 -0
- assets/megrez-logo.png +3 -0
- assets/wechat-group.jpg +3 -0
- assets/wechat-official.jpg +3 -0
- chat_template.jinja +1 -0
- config.json +50 -0
- configuration_megrez_moe.py +203 -0
- generation_config.json +6 -0
- model-00001-of-00004.safetensors +3 -0
- model-00002-of-00004.safetensors +3 -0
- model-00003-of-00004.safetensors +3 -0
- model-00004-of-00004.safetensors +3 -0
- model.safetensors.index.json +0 -0
- modeling_megrez_moe.py +1047 -0
- special_tokens_map.json +16 -0
- tokenizer.json +0 -0
- tokenizer_config.json +221 -0
.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.png filter=lfs diff=lfs merge=lfs -text
+*.jpg filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,280 @@
----
-license: apache-2.0
-
+---
+license: apache-2.0
+language:
+- en
+- zh
+pipeline_tag: text-generation
+tags:
+- moe
+- conversational
+library_name: transformers
+---
+
+<div align="center">
+<img src="./assets/megrez-logo.png" alt="Megrez Logo" width="400" />
+
+<br>
+<h1> Megrez2-3x7B-A3B-Preview </h1>
+
+<a href="https://github.com/infinigence/Infini-Megrez">
+<b>🔗 Github</b>
+</a> |
+<a href="https://github.com/infinigence/Infini-Megrez/blob/main/docs/tech_report.pdf">
+<b>📄 Tech Report</b>
+</a> |
+<a href="https://huggingface.co/spaces/Infinigence/Megrez2-3x7B-A3B-Preview">
+<b>💻 Demo</b>
+</a> |
+<a href="https://huggingface.co/Infinigence/Megrez2-3x7B-A3B-Preview/blob/main/assets/wechat-official.jpg">
+<b>💬 WeChat Official</b>
+</a>
+
+<br>
+
+<strong>[中文](https://huggingface.co/Infinigence/Megrez2-3x7B-A3B-Preview/blob/main/README_ZH.md) | English</strong>
+
+</div>
+
+## Introduction
+
+Megrez2-3x7B-A3B-Preview is a device-native large language model. Megrez2 combines the accuracy of the Mixture-of-Experts (MoE) architecture with the compact parameter footprint of dense models. This preview model was trained on 5T tokens of data. The official release, trained on more data and with stronger reasoning and agent capabilities, will come later this year.
+
+## Model Card
+
+<div align="center">
+
+| | |
+|:---:|:---:|
+| **Architecture** | Mixture-of-Experts (MoE) |
+| **Total Parameters** | 3x7B |
+| **Activated Parameters** | 3B |
+| **Experts Shared Frequency** | 3 |
+| **Number of Layers** (Dense layer included) | 31 |
+| **Number of Dense Layers** | 1 |
+| **Attention Hidden Dimension** | 2048 |
+| **MoE Hidden Dimension** (per Expert) | 1408 |
+| **Number of Attention Heads** | 16 |
+| **Number of Experts** | 64 |
+| **Selected Experts per Token** | 6 |
+| **Number of Shared Experts** | 4 |
+| **Vocabulary Size** | 128,880 |
+| **Context Length** | 32K |
+| **Base Frequency of RoPE** | 1,000,000 |
+| **Attention Mechanism** | GQA |
+| **Activation Function** | SwiGLU |
+
+</div>
+
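+For reference, a minimal sketch of reading these hyperparameters back from the `config.json` in this repository (field names follow that file; this is illustrative, not an official API):
+
+```python
+from transformers import AutoConfig
+
+# Illustrative sanity check against the model card above.
+cfg = AutoConfig.from_pretrained("Infinigence/Megrez2-3x7B-A3B-Preview", trust_remote_code=True)
+print(cfg.n_routed_experts)       # 64 routed experts
+print(cfg.num_experts_per_tok)    # 6 experts selected per token
+print(cfg.n_shared_experts)       # 4 shared experts
+print(cfg.hidden_size)            # 2048 attention hidden dimension
+print(cfg.moe_intermediate_size)  # 1408 MoE hidden dimension per expert
+print(cfg.num_hidden_layers)      # 31 layers, of which 1 is dense
+```
+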
+## Performance
+
+We evaluated Megrez2-3x7B-A3B-Preview using the open-source evaluation tool [OpenCompass](https://github.com/open-compass/opencompass) on several important benchmarks. Some of the evaluation results are shown in the table below.
+
+<div align="center">
+<table>
+<thead>
+<tr>
+<th align="center">Benchmark</th>
+<th align="center">Metric</th>
+<th align="center"><sup>Megrez2-3x7B<br>-A3B-Preview</sup></th>
+<th align="center"><sup>Qwen2.5-3B</sup></th>
+<th align="center"><sup>Qwen2.5-7B</sup></th>
+<th align="center"><sup>Qwen3-4B</sup></th>
+<th align="center"><sup>Qwen3-8B</sup></th>
+<th align="center"><sup>Phi-4-mini</sup></th>
+<th align="center"><sup>Gemma-3-4B</sup></th>
+<th align="center"><sup>GPT-4o-mini <br><sup>2024-07-18</sup></sup></th>
+</tr>
+</thead>
+<tbody>
+<tr><td align="center">Activated Params (B)</td><td align="center"></td><td align="center">3.0</td><td align="center">3.1</td><td align="center">7.6</td><td align="center">4.0</td><td align="center">8.2</td><td align="center">3.8</td><td align="center">4.3</td><td align="center">-</td></tr>
+<tr><td align="center">Stored Params (B)</td><td align="center"></td><td align="center">7.5</td><td align="center">3.1</td><td align="center">7.6</td><td align="center">4.0</td><td align="center">8.2</td><td align="center">3.8</td><td align="center">4.3</td><td align="center">-</td></tr>
+<tr><td align="center" colspan=9><strong>General Tasks</strong></td></tr>
+<tr><td align="center">C-EVAL</td><td align="center">EM</td><td align="center"><strong>91.7</strong></td><td align="center">68.2</td><td align="center">76.2</td><td align="center">72.2</td><td align="center">77.9</td><td align="center">40.0</td><td align="center">-</td><td align="center">66.3</td></tr>
+<tr><td align="center">MMLU-Pro</td><td align="center">EM</td><td align="center"><strong>67.6</strong></td><td align="center">43.7</td><td align="center">56.3</td><td align="center">-</td><td align="center">-</td><td align="center">52.8</td><td align="center">43.6</td><td align="center">-</td></tr>
+<tr><td align="center" colspan=9><strong>Instruction Tasks</strong></td></tr>
+<tr><td align="center">IF-Eval</td><td align="center">Prompt Strict</td><td align="center">80.2</td><td align="center">58.2</td><td align="center">71.2</td><td align="center">81.2</td><td align="center">83.0</td><td align="center">68.6</td><td align="center"><strong>90.2</strong></td><td align="center">80.4</td></tr>
+<tr><td align="center" colspan=9><strong>Math & STEM Tasks</strong></td></tr>
+<tr><td align="center">MATH-500</td><td align="center">EM</td><td align="center">81.6</td><td align="center">65.9</td><td align="center">75.5</td><td align="center">84.8</td><td align="center"><strong>87.4</strong></td><td align="center">64.0</td><td align="center">75.6</td><td align="center">78.2</td></tr>
+<tr><td align="center">GSM8K</td><td align="center">EM</td><td align="center">83.6</td><td align="center">86.7</td><td align="center">91.6</td><td align="center">-</td><td align="center"><strong>93.2</strong></td><td align="center">88.6</td><td align="center">89.2</td><td align="center">-</td></tr>
+<tr><td align="center" colspan=9><strong>Coding Tasks</strong></td></tr>
+<tr><td align="center">HumanEval</td><td align="center">Pass@1</td><td align="center">74.4</td><td align="center">74.4</td><td align="center">84.8</td><td align="center">-</td><td align="center"><strong>85.9</strong></td><td align="center">74.4</td><td align="center">71.3</td><td align="center">87.2</td></tr>
+<tr><td align="center">MBPP</td><td align="center">Pass@1</td><td align="center"><strong>88.0</strong></td><td align="center">72.7</td><td align="center">79.2</td><td align="center">-</td><td align="center">77.0</td><td align="center">65.3</td><td align="center">63.2</td><td align="center">-</td></tr>
+</tbody>
+</table>
+</div>
+
+## How to Run
+
+### Transformers
+
+The latest version of `transformers` is recommended; at minimum, `transformers>=4.52.4` is required.
+The following code snippet illustrates how to use the model to generate content from given inputs.
+
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+import torch
+
+path = "Infinigence/Megrez2-3x7B-A3B-Preview"
+device = "cuda"
+
+tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
+model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map=device, trust_remote_code=True)
+
+messages = [
+    {"role": "user", "content": "世界上最高的山峰是哪座?"},
+]
+model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to(device)
+
+model_outputs = model.generate(
+    model_inputs,
+    do_sample=True,
+    max_new_tokens=1024
+)
+
+output_token_ids = [
+    model_outputs[i][len(model_inputs[i]):] for i in range(len(model_inputs))
+]
+
+responses = tokenizer.batch_decode(output_token_ids, skip_special_tokens=True)[0]
+print(responses)
+
+# 世界上最高的山峰是珠穆朗玛峰(Mount Everest),位于喜马拉雅山脉的中尼边境。珠穆朗玛峰的海拔高度为8,848.86米(29,031.7英尺),这一数据是由中国和尼泊尔在2020年共同宣布的最新测量结果。珠穆朗玛峰不仅是登山爱好者的圣地,也是地理和科学研究的重要对象。
+```
+
+### ModelScope
+
+`ModelScope` adopts a Python API similar to (though not entirely identical to) `Transformers`. For basic usage, simply modify the first line of the above code as follows:
+
+```python
+from modelscope import AutoModelForCausalLM, AutoTokenizer
+```
+
+### llama.cpp
+
+Coming soon...
+
+## How to Deploy
+
+Megrez2-3x7B-A3B-Preview supports `vLLM` and `SGLang` as inference backends. For more information, please visit the [GitHub repository](https://github.com/infinigence/Infini-Megrez).
+
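+As an unofficial starting point, offline inference with `vLLM` typically looks like the sketch below; whether this architecture works out of the box depends on the backend integration described in the repository above, so treat this as a sketch rather than a supported recipe:
+
+```python
+from vllm import LLM, SamplingParams
+
+# Hypothetical vLLM usage; actual backend support is described in the GitHub repo.
+llm = LLM(model="Infinigence/Megrez2-3x7B-A3B-Preview", trust_remote_code=True)
+params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=1024)
+outputs = llm.chat([{"role": "user", "content": "世界上最高的山峰是哪座?"}], sampling_params=params)
+print(outputs[0].outputs[0].text)
+```
+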
+## Best Practice
+
+To achieve optimal performance, we recommend the following settings (a concrete sketch follows the list):
+
+1. Sampling parameters: we suggest using Temperature=0.7 and TopP=0.9.
+
+2. Standardize the output format: we recommend using prompts to standardize model outputs when benchmarking.
+   * Math problems: include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
+   * Multiple-choice questions: add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
+
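+As a concrete illustration of these settings, the snippet below reuses `model`, `tokenizer`, and `device` from the Transformers example above; the question is a hypothetical benchmark item, and the suffix is the math prompt quoted in this section:
+
+```python
+question = "If 3x + 5 = 20, what is x?"  # hypothetical benchmark item
+messages = [{
+    "role": "user",
+    "content": question + " Please reason step by step, and put your final answer within \\boxed{}.",
+}]
+inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to(device)
+outputs = model.generate(
+    inputs,
+    do_sample=True,
+    temperature=0.7,  # recommended sampling temperature
+    top_p=0.9,        # recommended nucleus-sampling threshold
+    max_new_tokens=1024,
+)
+print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
+```
+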
+## License Agreement
+
+All our open-weight models are licensed under Apache 2.0.
+
+## Citation
+
+Our technical report has been uploaded to GitHub and is currently under review by arXiv. It is expected to be officially released in the coming days.
+
+## Contact
+
+If you have any questions, please feel free to submit a GitHub issue or contact us via the [WeChat group](https://huggingface.co/Infinigence/Megrez2-3x7B-A3B-Preview/blob/main/assets/wechat-group.jpg).
README_ZH.md ADDED
@@ -0,0 +1,267 @@
+<div align="center">
+<img src="./assets/megrez-logo.png" alt="Megrez Logo" width="400" />
+
+<br>
+<h1> Megrez2-3x7B-A3B-Preview </h1>
+
+<a href="https://github.com/infinigence/Infini-Megrez">
+<b>🔗 Github</b>
+</a> |
+<a href="https://github.com/infinigence/Infini-Megrez/blob/main/docs/tech_report.pdf">
+<b>📄 Tech Report</b>
+</a> |
+<a href="https://huggingface.co/spaces/Infinigence/Megrez2-3x7B-A3B-Preview">
+<b>💻 Demo</b>
+</a> |
+<a href="https://huggingface.co/Infinigence/Megrez2-3x7B-A3B-Preview/blob/main/assets/wechat-official.jpg">
+<b>💬 WeChat Official</b>
+</a>
+
+<br>
+
+<strong>中文 | [English](https://huggingface.co/Infinigence/Megrez2-3x7B-A3B-Preview/blob/main/README.md)</strong>
+
+</div>
+
+## Introduction
+
+Megrez2-3x7B-A3B-Preview is a large language model designed specifically for on-device deployment, combining the accuracy leverage of MoE with the modest total parameter count of dense models. This release is a preview of Megrez 2.0, trained on 5T tokens of data. We plan to complete larger-scale training and to improve the model's reasoning and agent capabilities; the official version is expected within this year.
+
+## Model Card
+
+<div align="center">
+
+| | |
+|:---:|:---:|
+| **Architecture** | Mixture-of-Experts (MoE) |
+| **Total Parameters** | 3x7B |
+| **Activated Parameters** | 3B |
+| **Experts Shared Frequency** | 3 |
+| **Number of Layers** (Dense layer included) | 31 |
+| **Number of Dense Layers** | 1 |
+| **Attention Hidden Dimension** | 2048 |
+| **MoE Hidden Dimension** (per Expert) | 1408 |
+| **Number of Attention Heads** | 16 |
+| **Number of Experts** | 64 |
+| **Selected Experts per Token** | 6 |
+| **Number of Shared Experts** | 4 |
+| **Vocabulary Size** | 128,880 |
+| **Context Length** | 32K |
+| **Base Frequency of RoPE** | 1,000,000 |
+| **Attention Mechanism** | GQA |
+| **Activation Function** | SwiGLU |
+
+</div>
+
+## Performance
+
+We evaluated Megrez2-3x7B-A3B-Preview with the open-source evaluation tool [OpenCompass](https://github.com/open-compass/opencompass); some of the evaluation results are shown in the table below.
+
+<div align="center">
+<table>
+<thead>
+<tr>
+<th align="center">Benchmark</th>
+<th align="center">Metric</th>
+<th align="center"><sup>Megrez2-3x7B<br>-A3B-Preview</sup></th>
+<th align="center"><sup>Qwen2.5-3B</sup></th>
+<th align="center"><sup>Qwen2.5-7B</sup></th>
+<th align="center"><sup>Qwen3-4B</sup></th>
+<th align="center"><sup>Qwen3-8B</sup></th>
+<th align="center"><sup>Phi-4-mini</sup></th>
+<th align="center"><sup>Gemma-3-4B</sup></th>
+<th align="center"><sup>GPT-4o-mini <br><sup>2024-07-18</sup></sup></th>
+</tr>
+</thead>
+<tbody>
+<tr><td align="center">Activated Params (B)</td><td align="center"></td><td align="center">3.0</td><td align="center">3.1</td><td align="center">7.6</td><td align="center">4.0</td><td align="center">8.2</td><td align="center">3.8</td><td align="center">4.3</td><td align="center">-</td></tr>
+<tr><td align="center">Stored Params (B)</td><td align="center"></td><td align="center">7.5</td><td align="center">3.1</td><td align="center">7.6</td><td align="center">4.0</td><td align="center">8.2</td><td align="center">3.8</td><td align="center">4.3</td><td align="center">-</td></tr>
+<tr><td align="center" colspan=9><strong>General Tasks</strong></td></tr>
+<tr><td align="center">C-EVAL</td><td align="center">EM</td><td align="center"><strong>91.7</strong></td><td align="center">68.2</td><td align="center">76.2</td><td align="center">72.2</td><td align="center">77.9</td><td align="center">40.0</td><td align="center">-</td><td align="center">66.3</td></tr>
+<tr><td align="center">MMLU-Pro</td><td align="center">EM</td><td align="center"><strong>67.6</strong></td><td align="center">43.7</td><td align="center">56.3</td><td align="center">-</td><td align="center">-</td><td align="center">52.8</td><td align="center">43.6</td><td align="center">-</td></tr>
+<tr><td align="center" colspan=9><strong>Instruction Tasks</strong></td></tr>
+<tr><td align="center">IF-Eval</td><td align="center">Prompt Strict</td><td align="center">80.2</td><td align="center">58.2</td><td align="center">71.2</td><td align="center">81.2</td><td align="center">83.0</td><td align="center">68.6</td><td align="center"><strong>90.2</strong></td><td align="center">80.4</td></tr>
+<tr><td align="center" colspan=9><strong>Math & STEM Tasks</strong></td></tr>
+<tr><td align="center">MATH-500</td><td align="center">EM</td><td align="center">81.6</td><td align="center">65.9</td><td align="center">75.5</td><td align="center">84.8</td><td align="center"><strong>87.4</strong></td><td align="center">64.0</td><td align="center">75.6</td><td align="center">78.2</td></tr>
+<tr><td align="center">GSM8K</td><td align="center">EM</td><td align="center">83.6</td><td align="center">86.7</td><td align="center">91.6</td><td align="center">-</td><td align="center"><strong>93.2</strong></td><td align="center">88.6</td><td align="center">89.2</td><td align="center">-</td></tr>
+<tr><td align="center" colspan=9><strong>Coding Tasks</strong></td></tr>
+<tr><td align="center">HumanEval</td><td align="center">Pass@1</td><td align="center">74.4</td><td align="center">74.4</td><td align="center">84.8</td><td align="center">-</td><td align="center"><strong>85.9</strong></td><td align="center">74.4</td><td align="center">71.3</td><td align="center">87.2</td></tr>
+<tr><td align="center">MBPP</td><td align="center">Pass@1</td><td align="center"><strong>88.0</strong></td><td align="center">72.7</td><td align="center">79.2</td><td align="center">-</td><td align="center">77.0</td><td align="center">65.3</td><td align="center">63.2</td><td align="center">-</td></tr>
+</tbody>
+</table>
+</div>
+
+## How to Run
+
+### Transformers
+
+The latest version of `transformers` is recommended; at minimum, `transformers>=4.52.4` is required.
+The following is a very simple code snippet showing how to run the Megrez2-3x7B-A3B-Preview model:
+
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+import torch
+
+path = "Infinigence/Megrez2-3x7B-A3B-Preview"
+device = "cuda"
+
+tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
+model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map=device, trust_remote_code=True)
+
+messages = [
+    {"role": "user", "content": "世界上最高的山峰是哪座?"},
+]
+model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to(device)
+
+model_outputs = model.generate(
+    model_inputs,
+    do_sample=True,
+    max_new_tokens=1024
+)
+
+output_token_ids = [
+    model_outputs[i][len(model_inputs[i]):] for i in range(len(model_inputs))
+]
+
+responses = tokenizer.batch_decode(output_token_ids, skip_special_tokens=True)[0]
+print(responses)
+
+# 世界上最高的山峰是珠穆朗玛峰(Mount Everest),位于喜马拉雅山脉的中尼边境。珠穆朗玛峰的海拔高度为8,848.86米(29,031.7英尺),这一数据是由中国和尼泊尔在2020年共同宣布的最新测量结果。珠穆朗玛峰不仅是登山爱好者的圣地,也是地理和科学研究的重要对象。
+```
+
+### ModelScope
+
+`ModelScope` adopts a Python API similar to (though not entirely identical to) `Transformers`. For basic usage, simply modify the first line of the code above as follows:
+
+```python
+from modelscope import AutoModelForCausalLM, AutoTokenizer
+```
+
+### llama.cpp
+
+Coming soon...
+
+## How to Deploy
+
+Megrez2-3x7B-A3B-Preview supports `vLLM` and `SGLang` as inference backends; for more details, please see our [GitHub repository](https://github.com/infinigence/Infini-Megrez).
+
+## Best Practice
+
+To achieve optimal performance, we recommend the following settings:
+
+1. Sampling parameters: we suggest using Temperature=0.7 and TopP=0.9.
+
+2. Standardize the output format: when benchmarking, we recommend using prompts to standardize model outputs, for example:
+   * Math problems: include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
+   * Multiple-choice questions: add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
+
+## License Agreement
+
+All our open-source models are licensed under Apache 2.0.
+
+## Citation
+
+Our technical report has been uploaded to GitHub and is also under review on arXiv; it is expected to be officially released in the coming days.
+
+## Contact
+
+If you have any questions, please feel free to submit a GitHub issue or contact us via the [WeChat group](https://huggingface.co/Infinigence/Megrez2-3x7B-A3B-Preview/blob/main/assets/wechat-group.jpg).
assets/megrez-logo.png ADDED (Git LFS)

assets/wechat-group.jpg ADDED (Git LFS)

assets/wechat-official.jpg ADDED (Git LFS)
chat_template.jinja ADDED
@@ -0,0 +1 @@
+{% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '<|role_start|>system<|role_end|>你是Megrez-3B-Instruct,将针对用户的问题给出详细的、积极的回答。<|turn_end|>' }}{% endif %}{{ '<|role_start|>' + message['role'] + '<|role_end|>' + message['content'] + '<|turn_end|>' }}{% endfor %}{% if add_generation_prompt %}{{ '<|role_start|>assistant<|role_end|>' }}{% endif %}
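To make the template's behavior concrete: when the first message is not a system message, it injects the default Chinese system prompt shown above, then wraps every turn in `<|role_start|>role<|role_end|>content<|turn_end|>` markers, and appends `<|role_start|>assistant<|role_end|>` when `add_generation_prompt` is set. A minimal sketch of rendering it (the user text is a placeholder, and it assumes the tokenizer in this repo picks up this template file):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Infinigence/Megrez2-3x7B-A3B-Preview", trust_remote_code=True)
text = tokenizer.apply_chat_template(
    [{"role": "user", "content": "hello"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(text)
# Expected shape, per the template above (wrapped here for readability; the
# actual rendered string contains no line breaks):
#   <|role_start|>system<|role_end|>你是Megrez-3B-Instruct,...<|turn_end|>
#   <|role_start|>user<|role_end|>hello<|turn_end|>
#   <|role_start|>assistant<|role_end|>
```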
config.json ADDED
@@ -0,0 +1,50 @@
+{
+  "architectures": [
+    "MegrezMoeForCausalLM"
+  ],
+  "attention_bias": false,
+  "attention_dropout": 0.0,
+  "auto_map": {
+    "AutoConfig": "configuration_megrez_moe.MegrezMoeConfig",
+    "AutoModel": "modeling_megrez_moe.MegrezMoeModel",
+    "AutoModelForCausalLM": "modeling_megrez_moe.MegrezMoeForCausalLM"
+  },
+  "aux_loss_alpha": 0.001,
+  "bos_token_id": null,
+  "eos_token_id": 120005,
+  "ep_size": 1,
+  "experts_shared_frequency": 3,
+  "first_k_dense_replace": 1,
+  "hidden_act": "silu",
+  "hidden_size": 2048,
+  "initializer_range": 0.02,
+  "intermediate_size": 10944,
+  "max_position_embeddings": 163840,
+  "model_type": "megrez_moe",
+  "moe_intermediate_size": 1408,
+  "moe_layer_freq": 1,
+  "n_group": 1,
+  "n_routed_experts": 64,
+  "n_shared_experts": 4,
+  "norm_topk_prob": false,
+  "num_attention_heads": 16,
+  "num_experts_per_tok": 6,
+  "num_hidden_layers": 31,
+  "num_key_value_heads": 4,
+  "pad_token_id": 120002,
+  "pre_gate": true,
+  "pretraining_tp": 1,
+  "rms_norm_eps": 1e-06,
+  "rope_scaling": null,
+  "rope_theta": 1000000,
+  "routed_scaling_factor": 1.0,
+  "scoring_func": "softmax",
+  "seq_aux": true,
+  "tie_word_embeddings": false,
+  "topk_group": 1,
+  "topk_method": "greedy",
+  "torch_dtype": "bfloat16",
+  "transformers_version": "4.53.1",
+  "use_cache": true,
+  "vocab_size": 128880
+}
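A few derived numbers worth spelling out (a sketch of the arithmetic, using only fields from the config above): with 16 attention heads over a hidden size of 2048, each head is 128-dimensional, and 4 key/value heads give a GQA group size of 4 query heads per KV head.

```python
# Derived attention geometry from the config.json fields above.
hidden_size = 2048
num_attention_heads = 16
num_key_value_heads = 4

head_dim = hidden_size // num_attention_heads           # 2048 / 16 = 128
gqa_group = num_attention_heads // num_key_value_heads  # 16 / 4 = 4 query heads per KV head

# Routed-expert fan-out per MoE layer: 6 of 64 experts per token.
num_experts_per_tok = 6
print(head_dim, gqa_group, num_experts_per_tok)
```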
configuration_megrez_moe.py ADDED
@@ -0,0 +1,203 @@
+from transformers.configuration_utils import PretrainedConfig
+from transformers.utils import logging
+
+logger = logging.get_logger(__name__)
+
+MegrezMoe_PRETRAINED_CONFIG_ARCHIVE_MAP = {}
+
+
+class MegrezMoeConfig(PretrainedConfig):
+    r"""
+    This is the configuration class to store the configuration of a [`MegrezMoeModel`]. It is used to instantiate a
+    model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
+    defaults will yield a configuration similar to that of DeepSeek-V2.
+
+    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+    documentation from [`PretrainedConfig`] for more information.
+
+    Args:
+        vocab_size (`int`, *optional*, defaults to 102400):
+            Vocabulary size of the model. Defines the number of different tokens that can be represented by the
+            `inputs_ids` passed when calling [`MegrezMoeModel`].
+        hidden_size (`int`, *optional*, defaults to 4096):
+            Dimension of the hidden representations.
+        intermediate_size (`int`, *optional*, defaults to 11008):
+            Dimension of the MLP representations.
+        moe_intermediate_size (`int`, *optional*, defaults to 1407):
+            Dimension of the MoE representations.
+        num_hidden_layers (`int`, *optional*, defaults to 30):
+            Number of hidden layers in the Transformer decoder.
+        num_attention_heads (`int`, *optional*, defaults to 32):
+            Number of attention heads for each attention layer in the Transformer decoder.
+        n_shared_experts (`int`, *optional*, defaults to `None`):
+            Number of shared experts; `None` means a dense model.
+        n_routed_experts (`int`, *optional*, defaults to `None`):
+            Number of routed experts; `None` means a dense model.
+        routed_scaling_factor (`float`, *optional*, defaults to 1.0):
+            Scaling factor for routed experts.
+        topk_method (`str`, *optional*, defaults to `"greedy"`):
+            Top-k method used in the routed gate.
+        n_group (`int`, *optional*, defaults to `None`):
+            Number of groups for routed experts.
+        topk_group (`int`, *optional*, defaults to `None`):
+            Number of selected groups for each token (for each token, ensuring the selected experts fall within
+            `topk_group` groups).
+        num_experts_per_tok (`int`, *optional*, defaults to `None`):
+            Number of selected experts; `None` means a dense model.
+        moe_layer_freq (`int`, *optional*, defaults to 1):
+            The frequency of the MoE layer: one expert layer for every `moe_layer_freq - 1` dense layers.
+        first_k_dense_replace (`int`, *optional*, defaults to 0):
+            Number of dense layers at the bottom of the stack:
+            embed -> [k dense layers] -> moe -> moe ... -> lm_head.
+        norm_topk_prob (`bool`, *optional*, defaults to `False`):
+            Whether to normalize the weights of the routed experts.
+        scoring_func (`str`, *optional*, defaults to `"softmax"`):
+            Method of computing expert weights.
+        aux_loss_alpha (`float`, *optional*, defaults to 0.001):
+            Auxiliary loss weight coefficient.
+        seq_aux (`bool`, *optional*, defaults to `True`):
+            Whether to compute the auxiliary loss for each individual sample.
+        num_key_value_heads (`int`, *optional*):
+            This is the number of key_value heads that should be used to implement Grouped Query Attention. If
+            `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if
+            `num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
+            converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
+            by meanpooling all the original heads within that group. For more details checkout [this
+            paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
+            `num_attention_heads`.
+        hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
+            The non-linear activation function (function or string) in the decoder.
+        max_position_embeddings (`int`, *optional*, defaults to 2048):
+            The maximum sequence length that this model might ever be used with.
+        initializer_range (`float`, *optional*, defaults to 0.02):
+            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+        rms_norm_eps (`float`, *optional*, defaults to 1e-06):
+            The epsilon used by the rms normalization layers.
+        use_cache (`bool`, *optional*, defaults to `True`):
+            Whether or not the model should return the last key/values attentions (not used by all models). Only
+            relevant if `config.is_decoder=True`.
+        pad_token_id (`int`, *optional*):
+            Padding token id.
+        bos_token_id (`int`, *optional*, defaults to 100000):
+            Beginning of stream token id.
+        eos_token_id (`int`, *optional*, defaults to 100001):
+            End of stream token id.
+        pretraining_tp (`int`, *optional*, defaults to 1):
+            Experimental feature. Tensor parallelism rank used during pretraining. Please refer to [this
+            document](https://huggingface.co/docs/transformers/parallelism) to understand more about it. This value is
+            necessary to ensure exact reproducibility of the pretraining results. Please refer to [this
+            issue](https://github.com/pytorch/pytorch/issues/76232).
+        tie_word_embeddings (`bool`, *optional*, defaults to `False`):
+            Whether to tie weight embeddings.
+        rope_theta (`float`, *optional*, defaults to 10000.0):
+            The base period of the RoPE embeddings.
+        rope_scaling (`Dict`, *optional*):
+            Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
+            strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format is
+            `{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
+            `max_position_embeddings` to the expected new maximum.
+        attention_bias (`bool`, *optional*, defaults to `False`):
+            Whether to use a bias in the query, key, value and output projection layers during self-attention.
+        attention_dropout (`float`, *optional*, defaults to 0.0):
+            The dropout ratio for the attention probabilities.
+
+    ```python
+    >>> from transformers import MegrezMoeModel, MegrezMoeConfig
+
+    >>> # Initializing a DeepSeek-V2 style configuration
+    >>> configuration = MegrezMoeConfig()
+
+    >>> # Initializing a model from the configuration
+    >>> model = MegrezMoeModel(configuration)
+
+    >>> # Accessing the model configuration
+    >>> configuration = model.config
+    ```"""
+
+    model_type = "megrez_moe"
+    keys_to_ignore_at_inference = ["past_key_values"]
+
+    def __init__(
+        self,
+        vocab_size=102400,
+        hidden_size=4096,
+        intermediate_size=11008,
+        moe_intermediate_size=1407,
+        num_hidden_layers=30,
+        num_attention_heads=32,
+        num_key_value_heads=32,
+        n_shared_experts=None,
+        n_routed_experts=None,
+        ep_size=1,
+        routed_scaling_factor=1.0,
+        topk_method="greedy",
+        n_group=None,
+        topk_group=None,
+        num_experts_per_tok=None,
+        moe_layer_freq=1,
+        first_k_dense_replace=0,
+        norm_topk_prob=False,
+        scoring_func="softmax",
+        aux_loss_alpha=0.001,
+        seq_aux=True,
+        hidden_act="silu",
+        max_position_embeddings=2048,
+        initializer_range=0.02,
+        rms_norm_eps=1e-6,
+        use_cache=True,
+        pad_token_id=None,
+        bos_token_id=100000,
+        eos_token_id=100001,
+        pretraining_tp=1,
+        tie_word_embeddings=False,
+        rope_theta=10000.0,
+        rope_scaling=None,
+        attention_bias=False,
+        attention_dropout=0.0,
+        experts_shared_frequency=1,
+        pre_gate=False,
+        **kwargs,
+    ):
+        self.vocab_size = vocab_size
+        self.max_position_embeddings = max_position_embeddings
+        self.hidden_size = hidden_size
+        self.intermediate_size = intermediate_size
+        self.moe_intermediate_size = moe_intermediate_size
+        self.num_hidden_layers = num_hidden_layers
+        self.num_attention_heads = num_attention_heads
+        self.n_shared_experts = n_shared_experts
+        self.n_routed_experts = n_routed_experts
+        self.ep_size = ep_size
+        self.routed_scaling_factor = routed_scaling_factor
+        self.topk_method = topk_method
+        self.n_group = n_group
+        self.topk_group = topk_group
+        self.num_experts_per_tok = num_experts_per_tok
+        self.moe_layer_freq = moe_layer_freq
+        self.first_k_dense_replace = first_k_dense_replace
+        self.norm_topk_prob = norm_topk_prob
+        self.scoring_func = scoring_func
+        self.aux_loss_alpha = aux_loss_alpha
+        self.seq_aux = seq_aux
+        # for backward compatibility
+        if num_key_value_heads is None:
+            num_key_value_heads = num_attention_heads
+
+        self.num_key_value_heads = num_key_value_heads
+        self.hidden_act = hidden_act
+        self.initializer_range = initializer_range
+        self.rms_norm_eps = rms_norm_eps
+        self.pretraining_tp = pretraining_tp
+        self.use_cache = use_cache
+        self.rope_theta = rope_theta
+        self.rope_scaling = rope_scaling
+        self.attention_bias = attention_bias
+        self.attention_dropout = attention_dropout
+
+        self.experts_shared_frequency = experts_shared_frequency
+        self.pre_gate = pre_gate
+
+        super().__init__(
+            pad_token_id=pad_token_id,
+            bos_token_id=bos_token_id,
+            eos_token_id=eos_token_id,
+            tie_word_embeddings=tie_word_embeddings,
+            **kwargs,
+        )
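For concreteness, a sketch instantiating this class with the values from the committed `config.json` (only a subset of fields shown; the rest keep their defaults; the comment on `experts_shared_frequency` is our reading of the model card, not an official statement):

```python
from configuration_megrez_moe import MegrezMoeConfig  # local module from this commit

# Mirrors the committed config.json.
config = MegrezMoeConfig(
    vocab_size=128880,
    hidden_size=2048,
    intermediate_size=10944,
    moe_intermediate_size=1408,
    num_hidden_layers=31,
    num_attention_heads=16,
    num_key_value_heads=4,
    n_shared_experts=4,
    n_routed_experts=64,
    num_experts_per_tok=6,
    first_k_dense_replace=1,     # layer 0 is dense, layers 1-30 are MoE
    experts_shared_frequency=3,  # presumably the weight sharing behind "3x7B"
    pre_gate=True,
    rope_theta=1000000,
    max_position_embeddings=163840,
)
assert config.num_hidden_layers - config.first_k_dense_replace == 30  # MoE layers
```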
generation_config.json ADDED
@@ -0,0 +1,6 @@
+{
+  "_from_model_config": true,
+  "eos_token_id": 120005,
+  "pad_token_id": 120002,
+  "transformers_version": "4.52.4"
+}
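A quick, informal way to confirm that these ids line up with the tokenizer shipped in the same commit (assuming the repo id below and Hub access):

```python
from transformers import AutoTokenizer, GenerationConfig

repo = "Infinigence/Megrez2-3x7B-A3B-Preview"
gen_cfg = GenerationConfig.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)

print(gen_cfg.eos_token_id)    # 120005, per the file above
print(gen_cfg.pad_token_id)    # 120002
print(tokenizer.eos_token_id)  # expected to match 120005
```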
model-00001-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7d971e9e13a94a3356720cac1037b1c32b4d823c9e4a5c18b34624a1350e0267
+size 4996043840

model-00002-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:73c6f731a492d97154c856f39516890d974282ceac116d51f202bd85a4012bec
+size 4995328864

model-00003-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:22ab95c4d279b425738cc849744506fe73454b8f3861a7201c308d6e08926df8
+size 4478658288

model-00004-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:07ae839a6b599bcf7efaf4f79ddd8f8aa876aeabc9237e3c9e29e68546579c7d
+size 527892608
model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff.
modeling_megrez_moe.py ADDED
@@ -0,0 +1,1047 @@
+# coding=utf-8
+# Copyright 2025 Infini-AI and The HuggingFace Inc. team. All rights reserved.
+#
+# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
+# and OPT implementations in this library. It has been modified from its
+# original forms to accommodate minor architectural differences compared
+# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""PyTorch Megrez model."""
+import math
+import warnings
+from typing import List, Optional, Tuple, Union
+
+import numpy as np
+import torch
+import torch.distributed as dist
+import torch.nn.functional as F
+from torch import nn
+from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
+from transformers.activations import ACT2FN
+from transformers.cache_utils import Cache, DynamicCache
+from transformers.modeling_attn_mask_utils import _prepare_4d_causal_attention_mask
+from transformers.modeling_outputs import (BaseModelOutputWithPast, CausalLMOutputWithPast,
+                                           SequenceClassifierOutputWithPast)
+from transformers.modeling_utils import PreTrainedModel
+from transformers.models.llama.modeling_llama import LlamaAttention, LlamaRotaryEmbedding
+from transformers.pytorch_utils import ALL_LAYERNORM_LAYERS, is_torch_greater_or_equal_than_1_13
+from transformers.utils import (add_start_docstrings, add_start_docstrings_to_model_forward, logging,
+                                replace_return_docstrings)
+from transformers.utils.import_utils import is_torch_fx_available
+
+from .configuration_megrez_moe import MegrezMoeConfig
+
+# This makes `_prepare_4d_causal_attention_mask` a leaf function in the FX graph.
+# It means that the function will not be traced through and simply appear as a node in the graph.
+if is_torch_fx_available():
+    if not is_torch_greater_or_equal_than_1_13:
+        import torch.fx
+
+    _prepare_4d_causal_attention_mask = torch.fx.wrap(_prepare_4d_causal_attention_mask)
+
+
+logger = logging.get_logger(__name__)
+
+_CONFIG_FOR_DOC = "MegrezMoeConfig"
+
+
+class MegrezMoeRMSNorm(nn.Module):
+    def __init__(self, hidden_size, eps=1e-6):
+        """
+        MegrezMoeRMSNorm is equivalent to T5LayerNorm
+        """
+        super().__init__()
+        self.weight = nn.Parameter(torch.ones(hidden_size))
+        self.variance_epsilon = eps
+
+    def forward(self, hidden_states):
+        input_dtype = hidden_states.dtype
+        hidden_states = hidden_states.to(torch.float32)
+        variance = hidden_states.pow(2).mean(-1, keepdim=True)
+        hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
+        return self.weight * hidden_states.to(input_dtype)
+
+
+ALL_LAYERNORM_LAYERS.append(MegrezMoeRMSNorm)
+
+
+class MegrezMoeMLP(nn.Module):
+    def __init__(self, config, hidden_size=None, intermediate_size=None):
+        super().__init__()
+        self.config = config
+        self.hidden_size = config.hidden_size if hidden_size is None else hidden_size
+        self.intermediate_size = config.intermediate_size if intermediate_size is None else intermediate_size
+
+        self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
+        self.up_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
+        self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
+        self.act_fn = ACT2FN[config.hidden_act]
+
+    def forward(self, x):
+        down_proj = self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))
+        return down_proj
+
+
+class MoEGate(nn.Module):
+    def __init__(self, config):
+        super().__init__()
+        self.config = config
+        self.top_k = config.num_experts_per_tok
+        self.n_routed_experts = config.n_routed_experts
+        self.routed_scaling_factor = config.routed_scaling_factor
+        self.scoring_func = config.scoring_func
+        self.alpha = config.aux_loss_alpha
+        self.seq_aux = config.seq_aux
+        self.topk_method = config.topk_method
+        self.n_group = config.n_group
+        self.topk_group = config.topk_group
+
+        # topk selection algorithm
+        self.norm_topk_prob = config.norm_topk_prob
+        self.gating_dim = config.hidden_size
+        self.weight = nn.Parameter(torch.empty((self.n_routed_experts, self.gating_dim)))
+        self.reset_parameters()
+
+    def reset_parameters(self) -> None:
+        import torch.nn.init as init
+
+        init.kaiming_uniform_(self.weight, a=math.sqrt(5))
+
+    def forward(self, hidden_states):
+        bsz, seq_len, h = hidden_states.shape
+        ### compute gating score
+        hidden_states = hidden_states.view(-1, h)
+        logits = F.linear(hidden_states.type(torch.float32), self.weight.type(torch.float32), None)
+        if self.scoring_func == "softmax":
+            scores = logits.softmax(dim=-1, dtype=torch.float32)
+        else:
+            raise NotImplementedError(f"unsupported scoring function for MoE gating: {self.scoring_func}")
+
+        ### select top-k experts
+        if self.topk_method == "greedy":
+            topk_weight, topk_idx = torch.topk(scores, k=self.top_k, dim=-1, sorted=False)
+        elif self.topk_method == "group_limited_greedy":
+            group_scores = scores.view(bsz * seq_len, self.n_group, -1).max(dim=-1).values  # [n, n_group]
+            group_idx = torch.topk(group_scores, k=self.topk_group, dim=-1, sorted=False)[1]  # [n, top_k_group]
+            group_mask = torch.zeros_like(group_scores)  # [n, n_group]
+            group_mask.scatter_(1, group_idx, 1)  # [n, n_group]
+            score_mask = (
+                group_mask.unsqueeze(-1)
+                .expand(bsz * seq_len, self.n_group, self.n_routed_experts // self.n_group)
+                .reshape(bsz * seq_len, -1)
+            )  # [n, e]
+            tmp_scores = scores.masked_fill(~score_mask.bool(), 0.0)  # [n, e]
+            topk_weight, topk_idx = torch.topk(tmp_scores, k=self.top_k, dim=-1, sorted=False)
+
+        ### norm gate to sum 1
+        if self.top_k > 1 and self.norm_topk_prob:
+            denominator = topk_weight.sum(dim=-1, keepdim=True) + 1e-20
+            topk_weight = topk_weight / denominator
+        else:
+            topk_weight = topk_weight * self.routed_scaling_factor
+        ### expert-level computation auxiliary loss
+        if self.training and self.alpha > 0.0:
+            scores_for_aux = scores
+            aux_topk = self.top_k
+            # always compute aux loss based on the naive greedy topk method
+            topk_idx_for_aux_loss = topk_idx.view(bsz, -1)
+            if self.seq_aux:
+                scores_for_seq_aux = scores_for_aux.view(bsz, seq_len, -1)
+                ce = torch.zeros(bsz, self.n_routed_experts, device=hidden_states.device)
+                ce.scatter_add_(
+                    1,
+                    topk_idx_for_aux_loss,
+                    torch.ones(bsz, seq_len * aux_topk, device=hidden_states.device),
+                ).div_(seq_len * aux_topk / self.n_routed_experts)
+                aux_loss = (ce * scores_for_seq_aux.mean(dim=1)).sum(dim=1).mean() * self.alpha
+            else:
+                mask_ce = F.one_hot(topk_idx_for_aux_loss.view(-1), num_classes=self.n_routed_experts)
+                ce = mask_ce.float().mean(0)
+                Pi = scores_for_aux.mean(0)
+                fi = ce * self.n_routed_experts
+                aux_loss = (Pi * fi).sum() * self.alpha
+        else:
+            aux_loss = None
+        return topk_idx, topk_weight, aux_loss
+
+
+class AddAuxiliaryLoss(torch.autograd.Function):
+    """
+    The trick function of adding auxiliary (aux) loss,
+    which includes the gradient of the aux loss during backpropagation.
+    """
+
+    @staticmethod
+    def forward(ctx, x, loss):
+        assert loss.numel() == 1
+        ctx.dtype = loss.dtype
+        ctx.required_aux_loss = loss.requires_grad
+        return x
+
+    @staticmethod
+    def backward(ctx, grad_output):
+        grad_loss = None
+        if ctx.required_aux_loss:
+            grad_loss = torch.ones(1, dtype=ctx.dtype, device=grad_output.device)
+        return grad_output, grad_loss
+
+
class MegrezMoeMoE(nn.Module):
|
201 |
+
"""
|
202 |
+
A mixed expert module containing shared experts.
|
203 |
+
"""
|
204 |
+
|
205 |
+
def __init__(self, config, layer_number, init_experts: bool = True):
|
206 |
+
super().__init__()
|
207 |
+
self.layer_number = layer_number
|
208 |
+
self.config = config
|
209 |
+
self.num_experts_per_tok = config.num_experts_per_tok
|
210 |
+
|
211 |
+
if hasattr(config, "ep_size") and config.ep_size > 1:
|
212 |
+
assert config.ep_size == dist.get_world_size()
|
213 |
+
self.ep_size = config.ep_size
|
214 |
+
self.experts_per_rank = config.n_routed_experts // config.ep_size
|
215 |
+
self.ep_rank = dist.get_rank()
|
216 |
+
if init_experts:
|
217 |
+
self.experts = nn.ModuleList(
|
218 |
+
[
|
219 |
+
(
|
220 |
+
MegrezMoeMLP(config, intermediate_size=config.moe_intermediate_size)
|
221 |
+
if i >= self.ep_rank * self.experts_per_rank
|
222 |
+
and i < (self.ep_rank + 1) * self.experts_per_rank
|
223 |
+
else None
|
224 |
+
)
|
225 |
+
for i in range(config.n_routed_experts)
|
226 |
+
]
|
227 |
+
)
|
228 |
+
else:
|
229 |
+
self.experts = None
|
230 |
+
else:
|
231 |
+
self.ep_size = 1
|
232 |
+
self.experts_per_rank = config.n_routed_experts
|
233 |
+
self.ep_rank = 0
|
234 |
+
if init_experts:
|
235 |
+
self.experts = nn.ModuleList(
|
236 |
+
[
|
237 |
+
MegrezMoeMLP(config, intermediate_size=config.moe_intermediate_size)
|
238 |
+
for i in range(config.n_routed_experts)
|
239 |
+
]
|
240 |
+
)
|
241 |
+
else:
|
242 |
+
self.experts = None
|
243 |
+
|
244 |
+
self.gate = MoEGate(config)
|
245 |
+
if config.n_shared_experts is not None:
|
246 |
+
intermediate_size = config.moe_intermediate_size * config.n_shared_experts
|
247 |
+
self.shared_experts = MegrezMoeMLP(config=config, intermediate_size=intermediate_size)
|
248 |
+
|
249 |
+
def set_experts(self, experts):
|
250 |
+
self.experts = experts
|
251 |
+
|
252 |
+
def forward(self, hidden_states, pre_gate_hidden_states=None):
|
253 |
+
identity = hidden_states
|
254 |
+
orig_shape = hidden_states.shape
|
255 |
+
if pre_gate_hidden_states is not None:
|
256 |
+
topk_idx, topk_weight, aux_loss = self.gate(pre_gate_hidden_states)
|
257 |
+
else:
|
258 |
+
topk_idx, topk_weight, aux_loss = self.gate(hidden_states)
|
259 |
+
hidden_states = hidden_states.view(-1, hidden_states.shape[-1])
|
260 |
+
flat_topk_idx = topk_idx.view(-1)
|
261 |
+
if self.training:
|
262 |
+
hidden_states = hidden_states.repeat_interleave(self.num_experts_per_tok, dim=0)
|
263 |
+
y = torch.empty_like(hidden_states)
|
264 |
+
for i, expert in enumerate(self.experts):
|
265 |
+
y[flat_topk_idx == i] = expert(hidden_states[flat_topk_idx == i])
|
266 |
+
y = (y.view(*topk_weight.shape, -1) * topk_weight.unsqueeze(-1)).sum(dim=1)
|
267 |
+
y = y.to(hidden_states.dtype).view(*orig_shape)
|
268 |
+
y = AddAuxiliaryLoss.apply(y, aux_loss)
|
269 |
+
else:
|
270 |
+
y = self.moe_infer(hidden_states, topk_idx, topk_weight).view(*orig_shape)
|
271 |
+
if self.config.n_shared_experts is not None:
|
272 |
+
shared_out = self.shared_experts(identity)
|
273 |
+
y = y + shared_out
|
274 |
+
# y = y + self.shared_experts(identity)
|
275 |
+
return y
|
276 |
+
|
277 |
+
    @torch.no_grad()
    def moe_infer(self, x, topk_ids, topk_weight):
        cnts = topk_ids.new_zeros((topk_ids.shape[0], len(self.experts)))
        cnts.scatter_(1, topk_ids, 1)
        tokens_per_expert = cnts.sum(dim=0)
        idxs = topk_ids.view(-1).argsort()
        sorted_tokens = x[idxs // topk_ids.shape[1]]
        sorted_tokens_shape = sorted_tokens.shape
        if self.ep_size > 1:
            tokens_per_ep_rank = tokens_per_expert.view(self.ep_size, -1).sum(dim=1)
            tokens_per_expert_group = tokens_per_expert.new_empty(tokens_per_expert.shape[0])
            dist.all_to_all_single(tokens_per_expert_group, tokens_per_expert)
            output_splits = tokens_per_expert_group.view(self.ep_size, -1).sum(1).cpu().numpy().tolist()
            gathered_tokens = sorted_tokens.new_empty(
                tokens_per_expert_group.sum(dim=0).cpu().item(), sorted_tokens.shape[1]
            )
            input_split_sizes = tokens_per_ep_rank.cpu().numpy().tolist()
            dist.all_to_all(
                list(gathered_tokens.split(output_splits)),
                list(sorted_tokens.split(input_split_sizes)),
            )
            tokens_per_expert_post_gather = tokens_per_expert_group.view(self.ep_size, self.experts_per_rank).sum(dim=0)
            gatherd_idxs = np.zeros(shape=(gathered_tokens.shape[0],), dtype=np.int32)
            s = 0
            for i, k in enumerate(tokens_per_expert_group.cpu().numpy()):
                gatherd_idxs[s : s + k] = i % self.experts_per_rank
                s += k
            gatherd_idxs = gatherd_idxs.argsort()
            sorted_tokens = gathered_tokens[gatherd_idxs]
            tokens_per_expert = tokens_per_expert_post_gather
        tokens_per_expert = tokens_per_expert.cpu().numpy()

        outputs = []
        start_idx = 0
        for i, num_tokens in enumerate(tokens_per_expert):
            end_idx = start_idx + num_tokens
            if num_tokens == 0:
                continue
            expert = self.experts[i + self.ep_rank * self.experts_per_rank]
            tokens_for_this_expert = sorted_tokens[start_idx:end_idx]
            expert_out = expert(tokens_for_this_expert)
            outputs.append(expert_out)
            start_idx = end_idx

        outs = torch.cat(outputs, dim=0) if len(outputs) else sorted_tokens.new_empty(0)
        if self.ep_size > 1:
            new_x = torch.empty_like(outs)
            new_x[gatherd_idxs] = outs
            gathered_tokens = new_x.new_empty(*sorted_tokens_shape)
            dist.all_to_all(
                list(gathered_tokens.split(input_split_sizes)),
                list(new_x.split(output_splits)),
            )
            outs = gathered_tokens

        new_x = torch.empty_like(outs)
        new_x[idxs] = outs
        final_out = (
            new_x.view(*topk_ids.shape, -1)
            .type(topk_weight.dtype)
            .mul_(topk_weight.unsqueeze(dim=-1))
            .sum(dim=1)
            .type(new_x.dtype)
        )
        return final_out
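The dispatch in `moe_infer` hinges on one permutation trick: `argsort` over the flattened top-k expert ids groups all token copies destined for the same expert into contiguous runs, and the inverse scatter `new_x[idxs] = outs` restores the original order afterwards. A minimal round-trip sketch (toy numbers, no expert parallelism):

```python
import torch

topk_ids = torch.tensor([[2, 0], [1, 2], [0, 0]])  # 3 tokens, top-2 routing
idxs = topk_ids.view(-1).argsort()                 # flat slots, grouped by expert id
x = torch.arange(3.0).unsqueeze(-1)                # toy 1-d "hidden states"
sorted_tokens = x[idxs // topk_ids.shape[1]]       # token copies in expert order
outs = sorted_tokens * 10                          # stand-in for the expert MLPs
new_x = torch.empty_like(outs)
new_x[idxs] = outs                                 # undo the permutation
assert torch.equal(new_x, (x * 10).repeat_interleave(2, dim=0))
```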
# Copied from transformers.models.llama.modeling_llama.repeat_kv
def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
    """
    This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
    num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
    """
    batch, num_key_value_heads, slen, head_dim = hidden_states.shape
    if n_rep == 1:
        return hidden_states
    hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
    return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
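A quick shape check of `repeat_kv` as defined above, for a grouped-query layout where 8 query heads share each KV head (the sizes are arbitrary):

```python
import torch

kv = torch.randn(2, 4, 16, 64)       # (batch, num_key_value_heads, seqlen, head_dim)
out = repeat_kv(kv, n_rep=8)         # each KV head serves 8 query heads
assert out.shape == (2, 32, 16, 64)  # (batch, num_attention_heads, seqlen, head_dim)
```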
class MegrezMoeDecoderLayer(nn.Module):
    def __init__(self, config: MegrezMoeConfig, layer_idx: int):
        super().__init__()
        self.config = config
        self.layer_number = layer_idx

        self.experts_shared = (
            config.experts_shared_frequency is not None and layer_idx >= self.config.first_k_dense_replace
        )

        self.pre_gate = config.pre_gate

        self.hidden_size = config.hidden_size

        is_moe = (
            config.n_routed_experts is not None
            and layer_idx >= config.first_k_dense_replace
            and layer_idx % config.moe_layer_freq == 0
        )

        init_experts = (layer_idx - config.first_k_dense_replace) % config.experts_shared_frequency == 0
        self.self_attn = LlamaAttention(config=config, layer_idx=layer_idx)
        self.mlp = MegrezMoeMoE(config, layer_idx, init_experts) if is_moe else MegrezMoeMLP(config)
        self.input_layernorm = MegrezMoeRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
        self.post_attention_layernorm = MegrezMoeRMSNorm(config.hidden_size, eps=config.rms_norm_eps)

    def forward(
        self,
        hidden_states: torch.Tensor,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_value: Optional[Tuple[torch.Tensor]] = None,
        output_attentions: Optional[bool] = False,
        use_cache: Optional[bool] = False,
        position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,  # necessary, but kept here for BC
        **kwargs,
    ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
        """
        Args:
            hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
            attention_mask (`torch.FloatTensor`, *optional*):
                attention mask of size `(batch_size, sequence_length)` if flash attention is used or `(batch_size, 1,
                query_sequence_length, key_sequence_length)` if default attention is used.
            output_attentions (`bool`, *optional*):
                Whether or not to return the attentions tensors of all attention layers. See `attentions` under
                returned tensors for more detail.
            use_cache (`bool`, *optional*):
                If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
                (see `past_key_values`).
            past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
        """

        if self.pre_gate and self.layer_number >= self.config.first_k_dense_replace:
            hidden_states = torch.split(hidden_states, hidden_states.shape[0] // 2, dim=0)
            pre_gate_hidden_states = hidden_states[0]
            hidden_states = hidden_states[1]
        else:
            pre_gate_hidden_states = None

        if "padding_mask" in kwargs:
            warnings.warn(
                "Passing `padding_mask` is deprecated and will be removed in v4.37. Please make sure to use `attention_mask` instead."
            )

        residual = hidden_states
        hidden_states = self.input_layernorm(hidden_states)

        # Self Attention
        hidden_states, self_attn_weights = self.self_attn(
            hidden_states=hidden_states,
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_value=past_key_value,
            output_attentions=output_attentions,
            use_cache=use_cache,
            position_embeddings=position_embeddings,
            **kwargs,
        )
        hidden_states = residual + hidden_states

        # Fully Connected
        residual = hidden_states
        hidden_states = self.post_attention_layernorm(hidden_states)
        post_attention_layernorm_hidden_states = hidden_states
        if isinstance(self.mlp, MegrezMoeMoE):
            hidden_states = self.mlp(hidden_states, pre_gate_hidden_states=pre_gate_hidden_states)
        else:
            hidden_states = self.mlp(hidden_states)
        hidden_states = residual + hidden_states
        pre_gate_hidden_states = post_attention_layernorm_hidden_states

        if self.pre_gate and self.layer_number < self.config.num_hidden_layers - 1:
            hidden_states = torch.cat([pre_gate_hidden_states, hidden_states], dim=0)

        outputs = (hidden_states,)

        if output_attentions:
            outputs += (self_attn_weights,)

        return outputs
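The `pre_gate` bookkeeping in this layer is easy to misread: from `first_k_dense_replace` onward, the incoming batch dimension carries two stacks concatenated, the previous layer's post-attention-norm states followed by the current residual stream, so layer *n* can route with layer *n-1*'s activations. A minimal sketch of that split/concat contract, with hypothetical shapes:

```python
import torch

batch, seq, hidden = 2, 5, 8                        # hypothetical sizes
prev_norm_states = torch.randn(batch, seq, hidden)  # produced by layer n-1
curr_states = torch.randn(batch, seq, hidden)       # residual stream entering layer n
packed = torch.cat([prev_norm_states, curr_states], dim=0)  # what layer n receives

# inside the layer: undo the packing before attention / MoE
pre_gate_hidden_states, hidden_states = torch.split(packed, packed.shape[0] // 2, dim=0)
assert torch.equal(pre_gate_hidden_states, prev_norm_states)
assert torch.equal(hidden_states, curr_states)
```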
MegrezMoe_START_DOCSTRING = r"""
    This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
    library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
    etc.)

    This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
    Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
    and behavior.

    Parameters:
        config ([`MegrezMoeConfig`]):
            Model configuration class with all the parameters of the model. Initializing with a config file does not
            load the weights associated with the model, only the configuration. Check out the
            [`~PreTrainedModel.from_pretrained`] method to load the model weights.
"""
@add_start_docstrings(
    "The bare MegrezMoe Model outputting raw hidden-states without any specific head on top.",
    MegrezMoe_START_DOCSTRING,
)
class MegrezMoePreTrainedModel(PreTrainedModel):
    config_class = MegrezMoeConfig
    base_model_prefix = "model"
    supports_gradient_checkpointing = True
    _no_split_modules = ["MegrezMoeDecoderLayer"]
    _skip_keys_device_placement = "past_key_values"
    _supports_flash_attn_2 = True
    _supports_cache_class = True

    def _init_weights(self, module):
        std = self.config.initializer_range
        if isinstance(module, nn.Linear):
            module.weight.data.normal_(mean=0.0, std=std)
            if module.bias is not None:
                module.bias.data.zero_()
        elif isinstance(module, nn.Embedding):
            module.weight.data.normal_(mean=0.0, std=std)
            if module.padding_idx is not None:
                module.weight.data[module.padding_idx].zero_()
MegrezMoe_INPUTS_DOCSTRING = r"""
    Args:
        input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
            Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
            it.

            Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
            [`PreTrainedTokenizer.__call__`] for details.

            [What are input IDs?](../glossary#input-ids)
        attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
            Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:

            - 1 for tokens that are **not masked**,
            - 0 for tokens that are **masked**.

            [What are attention masks?](../glossary#attention-mask)

            Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
            [`PreTrainedTokenizer.__call__`] for details.

            If `past_key_values` is used, optionally only the last `input_ids` have to be input (see
            `past_key_values`).

            If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
            and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
            information on the default strategy.

            - 1 indicates the head is **not masked**,
            - 0 indicates the head is **masked**.
        position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
            Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
            config.n_positions - 1]`.

            [What are position IDs?](../glossary#position-ids)
        past_key_values (`Cache` or `tuple(tuple(torch.FloatTensor))`, *optional*):
            Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
            blocks) that can be used to speed up sequential decoding. This typically consists in the `past_key_values`
            returned by the model at a previous stage of decoding, when `use_cache=True` or `config.use_cache=True`.

            Two formats are allowed:
            - a [`~cache_utils.Cache`] instance;
            - Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of
              shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`). This is also known as the legacy
              cache format.

            The model will output the same cache format that is fed as input. If no `past_key_values` are passed, the
            legacy cache format will be returned.

            If `past_key_values` are used, the user can optionally input only the last `input_ids` (those that don't
            have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `input_ids`
            of shape `(batch_size, sequence_length)`.
        inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
            Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
            is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
            model's internal embedding lookup matrix.
        use_cache (`bool`, *optional*):
            If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
            `past_key_values`).
        output_attentions (`bool`, *optional*):
            Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
            tensors for more detail.
        output_hidden_states (`bool`, *optional*):
            Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
            more detail.
        return_dict (`bool`, *optional*):
            Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
"""
@add_start_docstrings(
    "The bare MegrezMoe Model outputting raw hidden-states without any specific head on top.",
    MegrezMoe_START_DOCSTRING,
)
class MegrezMoeModel(MegrezMoePreTrainedModel):
    """
    Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`MegrezMoeDecoderLayer`]

    Args:
        config: MegrezMoeConfig
    """

    def __init__(self, config: MegrezMoeConfig):
        super().__init__(config)
        self.padding_idx = config.pad_token_id
        self.vocab_size = config.vocab_size

        self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
        self.rotary_emb = LlamaRotaryEmbedding(config=config)
        self.layers = nn.ModuleList(
            [MegrezMoeDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
        )
        self._use_flash_attention_2 = config._attn_implementation == "flash_attention_2"
        self.norm = MegrezMoeRMSNorm(config.hidden_size, eps=config.rms_norm_eps)

        self.gradient_checkpointing = False
        # Initialize weights and apply final processing
        self.post_init()

    def get_input_embeddings(self):
        return self.embed_tokens

    def set_input_embeddings(self, value):
        self.embed_tokens = value

    @add_start_docstrings_to_model_forward(MegrezMoe_INPUTS_DOCSTRING)
    def forward(
        self,
        input_ids: torch.LongTensor = None,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_values: Optional[List[torch.FloatTensor]] = None,
        inputs_embeds: Optional[torch.FloatTensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        **flash_attn_kwargs,
    ) -> Union[Tuple, BaseModelOutputWithPast]:
        output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
        output_hidden_states = (
            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
        )
        use_cache = use_cache if use_cache is not None else self.config.use_cache

        # retrieve input_ids and inputs_embeds
        if input_ids is not None and inputs_embeds is not None:
            raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
        elif input_ids is not None:
            batch_size, seq_length = input_ids.shape[:2]
        elif inputs_embeds is not None:
            batch_size, seq_length = inputs_embeds.shape[:2]
        else:
            raise ValueError("You have to specify either input_ids or inputs_embeds")

        if self.gradient_checkpointing and self.training:
            if use_cache:
                logger.warning_once(
                    "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
                )
                use_cache = False

        past_key_values_length = 0
        if use_cache:
            use_legacy_cache = not isinstance(past_key_values, Cache)
            if use_legacy_cache:
                past_key_values = DynamicCache.from_legacy_cache(past_key_values)
            past_key_values_length = past_key_values.get_usable_length(seq_length)

        if position_ids is None:
            device = input_ids.device if input_ids is not None else inputs_embeds.device
            position_ids = torch.arange(
                past_key_values_length,
                seq_length + past_key_values_length,
                dtype=torch.long,
                device=device,
            )
            position_ids = position_ids.unsqueeze(0)

        if inputs_embeds is None:
            inputs_embeds = self.embed_tokens(input_ids)
        if self._use_flash_attention_2:
            # 2d mask is passed through the layers
            attention_mask = attention_mask if (attention_mask is not None and 0 in attention_mask) else None
        else:
            # 4d mask is passed through the layers
            attention_mask = _prepare_4d_causal_attention_mask(
                attention_mask,
                (batch_size, seq_length),
                inputs_embeds,
                past_key_values_length,
            )

        # embed positions
        hidden_states = inputs_embeds

        # decoder layers
        all_hidden_states = () if output_hidden_states else None
        all_self_attns = () if output_attentions else None
        next_decoder_cache = None

        position_embeddings = self.rotary_emb(hidden_states, position_ids=position_ids)
        for layer_idx, decoder_layer in enumerate(self.layers):
            if output_hidden_states:
                all_hidden_states += (hidden_states,)

            shared_layer_idx = (
                (layer_idx - self.config.first_k_dense_replace)
                // self.config.experts_shared_frequency
                * self.config.experts_shared_frequency
                + self.config.first_k_dense_replace
            )
            if layer_idx >= self.config.first_k_dense_replace and shared_layer_idx != layer_idx:
                decoder_layer.mlp.set_experts(self.layers[shared_layer_idx].mlp.experts)

            if self.gradient_checkpointing and self.training:
                layer_outputs = self._gradient_checkpointing_func(
                    decoder_layer.__call__,
                    hidden_states,
                    attention_mask,
                    position_ids,
                    past_key_values,
                    output_attentions,
                    use_cache,
                    position_embeddings,
                    **flash_attn_kwargs,
                )
            else:
                layer_outputs = decoder_layer(
                    hidden_states,
                    attention_mask=attention_mask,
                    position_ids=position_ids,
                    past_key_value=past_key_values,
                    output_attentions=output_attentions,
                    use_cache=use_cache,
                    position_embeddings=position_embeddings,
                    **flash_attn_kwargs,
                )
            if layer_idx >= self.config.first_k_dense_replace and shared_layer_idx != layer_idx:
                decoder_layer.mlp.set_experts(None)
            hidden_states = layer_outputs[0]

            if output_attentions:
                all_self_attns += (layer_outputs[1],)

        hidden_states = self.norm(hidden_states)
        # add hidden states from the last decoder layer
        if output_hidden_states:
            all_hidden_states += (hidden_states,)

        return BaseModelOutputWithPast(
            last_hidden_state=hidden_states,
            past_key_values=past_key_values,
            hidden_states=all_hidden_states,
            attentions=all_self_attns,
        )
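The `shared_layer_idx` arithmetic above decides which layer owns the expert weights a given layer borrows: each MoE layer maps back to the most recent layer whose offset from `first_k_dense_replace` is a multiple of `experts_shared_frequency`. A sanity check with made-up config values (`first_k_dense_replace=1`, `experts_shared_frequency=3`, 10 layers), not the model's real ones:

```python
first_k_dense_replace, experts_shared_frequency = 1, 3  # hypothetical values
for layer_idx in range(10):
    if layer_idx < first_k_dense_replace:
        print(layer_idx, "dense, owns its own MLP")
        continue
    shared_layer_idx = (
        (layer_idx - first_k_dense_replace) // experts_shared_frequency * experts_shared_frequency
        + first_k_dense_replace
    )
    print(layer_idx, "routes through the experts of layer", shared_layer_idx)
# layers 1-3 reuse layer 1's experts, 4-6 reuse layer 4's, 7-9 reuse layer 7's
```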
class MegrezMoeForCausalLM(MegrezMoePreTrainedModel):
    _tied_weights_keys = ["lm_head.weight"]

    def __init__(self, config):
        super().__init__(config)
        self.model = MegrezMoeModel(config)
        self.vocab_size = config.vocab_size
        self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)

        # Initialize weights and apply final processing
        self.post_init()

    def get_input_embeddings(self):
        return self.model.embed_tokens

    def set_input_embeddings(self, value):
        self.model.embed_tokens = value

    def get_output_embeddings(self):
        return self.lm_head

    def set_output_embeddings(self, new_embeddings):
        self.lm_head = new_embeddings

    def set_decoder(self, decoder):
        self.model = decoder

    def get_decoder(self):
        return self.model

    @add_start_docstrings_to_model_forward(MegrezMoe_INPUTS_DOCSTRING)
    @replace_return_docstrings(output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
    def forward(
        self,
        input_ids: torch.LongTensor = None,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_values: Optional[List[torch.FloatTensor]] = None,
        inputs_embeds: Optional[torch.FloatTensor] = None,
        labels: Optional[torch.LongTensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, CausalLMOutputWithPast]:
        r"""
        Args:
            labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
                Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
                config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
                (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.

        Returns:

        Example:

        ```python
        >>> from transformers import AutoTokenizer, MegrezMoeForCausalLM

        >>> model = MegrezMoeForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
        >>> tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)

        >>> prompt = "Hey, are you conscious? Can you talk to me?"
        >>> inputs = tokenizer(prompt, return_tensors="pt")

        >>> # Generate
        >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
        >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
        "Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
        ```"""
        output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
        output_hidden_states = (
            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
        )
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
        outputs = self.model(
            input_ids=input_ids,
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_values=past_key_values,
            inputs_embeds=inputs_embeds,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        hidden_states = outputs[0]
        logits = self.lm_head(hidden_states)
        logits = logits.float()

        loss = None
        if labels is not None:
            # Shift so that tokens < n predict n
            shift_logits = logits[..., :-1, :].contiguous()
            shift_labels = labels[..., 1:].contiguous()
            # Flatten the tokens
            loss_fct = CrossEntropyLoss()
            shift_logits = shift_logits.view(-1, self.config.vocab_size)
            shift_labels = shift_labels.view(-1)
            # Enable model parallelism
            shift_labels = shift_labels.to(shift_logits.device)
            loss = loss_fct(shift_logits, shift_labels)

        if not return_dict:
            output = (logits,) + outputs[1:]
            return (loss,) + output if loss is not None else output

        return CausalLMOutputWithPast(
            loss=loss,
            logits=logits,
            past_key_values=outputs.past_key_values,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

    def prepare_inputs_for_generation(
        self,
        input_ids,
        past_key_values=None,
        attention_mask=None,
        inputs_embeds=None,
        **kwargs,
    ):
        if past_key_values is not None:
            if isinstance(past_key_values, Cache):
                cache_length = past_key_values.get_seq_length()
                past_length = past_key_values.seen_tokens
                # max_cache_length = past_key_values.get_max_length()
                max_cache_length = past_key_values.get_max_cache_shape()
            else:
                cache_length = past_length = past_key_values[0][0].shape[2]
                max_cache_length = None

            # Keep only the unprocessed tokens:
            # 1 - If the length of the attention_mask exceeds the length of input_ids, then we are in a setting where
            # some of the inputs are exclusively passed as part of the cache (e.g. when passing input_embeds as
            # input)
            if attention_mask is not None and attention_mask.shape[1] > input_ids.shape[1]:
                input_ids = input_ids[:, -(attention_mask.shape[1] - past_length) :]
            # 2 - If the past_length is smaller than input_ids', then input_ids holds all input tokens. We can discard
            # input_ids based on the past_length.
            elif past_length < input_ids.shape[1]:
                input_ids = input_ids[:, past_length:]
            # 3 - Otherwise (past_length >= input_ids.shape[1]), let's assume input_ids only has unprocessed tokens.

            # If we are about to go beyond the maximum cache length, we need to crop the input attention mask.
            if (
                max_cache_length is not None
                and attention_mask is not None
                and cache_length + input_ids.shape[1] > max_cache_length
            ):
                attention_mask = attention_mask[:, -max_cache_length:]

        position_ids = kwargs.get("position_ids", None)
        if attention_mask is not None and position_ids is None:
            # create position_ids on the fly for batch generation
            position_ids = attention_mask.long().cumsum(-1) - 1
            position_ids.masked_fill_(attention_mask == 0, 1)
            if past_key_values:
                position_ids = position_ids[:, -input_ids.shape[1] :]

        # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
        if inputs_embeds is not None and past_key_values is None:
            model_inputs = {"inputs_embeds": inputs_embeds}
        else:
            model_inputs = {"input_ids": input_ids}

        model_inputs.update(
            {
                "position_ids": position_ids,
                "past_key_values": past_key_values,
                "use_cache": kwargs.get("use_cache"),
                "attention_mask": attention_mask,
            }
        )
        return model_inputs

    @staticmethod
    def _reorder_cache(past_key_values, beam_idx):
        reordered_past = ()
        for layer_past in past_key_values:
            reordered_past += (
                tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past),
            )
        return reordered_past
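For reference, the label shift used in the loss above on a toy sequence: position *t* predicts token *t+1*, so the last position has no target and is dropped:

```python
import torch

labels = torch.tensor([[11, 12, 13, 14]])
logits = torch.randn(1, 4, 32)       # (batch, seq, vocab), arbitrary vocab size
shift_logits = logits[..., :-1, :]   # predictions made at positions 0..2
shift_labels = labels[..., 1:]       # their targets: tokens 12, 13, 14
assert shift_logits.shape[1] == shift_labels.shape[1] == 3
```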
@add_start_docstrings(
    """
    The MegrezMoe Model transformer with a sequence classification head on top (linear layer).

    [`MegrezMoeForSequenceClassification`] uses the last token in order to do the classification, as other causal models
    (e.g. GPT-2) do.

    Since it does classification on the last token, it requires to know the position of the last token. If a
    `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
    no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
    padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
    each row of the batch).
    """,
    MegrezMoe_START_DOCSTRING,
)
class MegrezMoeForSequenceClassification(MegrezMoePreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.model = MegrezMoeModel(config)
        self.score = nn.Linear(config.hidden_size, self.num_labels, bias=False)

        # Initialize weights and apply final processing
        self.post_init()

    def get_input_embeddings(self):
        return self.model.embed_tokens

    def set_input_embeddings(self, value):
        self.model.embed_tokens = value

    @add_start_docstrings_to_model_forward(MegrezMoe_INPUTS_DOCSTRING)
    def forward(
        self,
        input_ids: torch.LongTensor = None,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_values: Optional[List[torch.FloatTensor]] = None,
        inputs_embeds: Optional[torch.FloatTensor] = None,
        labels: Optional[torch.LongTensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, SequenceClassifierOutputWithPast]:
        r"""
        labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
            Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
            config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
            `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
        """
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        transformer_outputs = self.model(
            input_ids,
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_values=past_key_values,
            inputs_embeds=inputs_embeds,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
        hidden_states = transformer_outputs[0]
        logits = self.score(hidden_states)

        if input_ids is not None:
            batch_size = input_ids.shape[0]
        else:
            batch_size = inputs_embeds.shape[0]

        if self.config.pad_token_id is None and batch_size != 1:
            raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.")
        if self.config.pad_token_id is None:
            sequence_lengths = -1
        else:
            if input_ids is not None:
                sequence_lengths = (torch.eq(input_ids, self.config.pad_token_id).int().argmax(-1) - 1).to(
                    logits.device
                )
            else:
                sequence_lengths = -1

        pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths]

        loss = None
        if labels is not None:
            labels = labels.to(logits.device)
            if self.config.problem_type is None:
                if self.num_labels == 1:
                    self.config.problem_type = "regression"
                elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
                    self.config.problem_type = "single_label_classification"
                else:
                    self.config.problem_type = "multi_label_classification"

            if self.config.problem_type == "regression":
                loss_fct = MSELoss()
                if self.num_labels == 1:
                    loss = loss_fct(pooled_logits.squeeze(), labels.squeeze())
                else:
                    loss = loss_fct(pooled_logits, labels)
            elif self.config.problem_type == "single_label_classification":
                loss_fct = CrossEntropyLoss()
                loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1))
            elif self.config.problem_type == "multi_label_classification":
                loss_fct = BCEWithLogitsLoss()
                loss = loss_fct(pooled_logits, labels)
        if not return_dict:
            output = (pooled_logits,) + transformer_outputs[1:]
            return ((loss,) + output) if loss is not None else output

        return SequenceClassifierOutputWithPast(
            loss=loss,
            logits=pooled_logits,
            past_key_values=transformer_outputs.past_key_values,
            hidden_states=transformer_outputs.hidden_states,
            attentions=transformer_outputs.attentions,
        )
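The pooling above relies on `argmax` returning the first match: `torch.eq(input_ids, pad_token_id).int().argmax(-1) - 1` lands on the index just before the first pad token, i.e. the last real token, assuming right padding (consistent with `padding_side: right` in the `tokenizer_config.json` below). A small check with a hypothetical `pad_token_id` of 0:

```python
import torch

pad_token_id = 0  # hypothetical id, for illustration
input_ids = torch.tensor([[5, 6, 7, 0, 0],
                          [8, 9, 1, 2, 0]])
sequence_lengths = torch.eq(input_ids, pad_token_id).int().argmax(-1) - 1
print(sequence_lengths)  # tensor([2, 3]) -> index of the last non-pad token per row
```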
special_tokens_map.json
ADDED
@@ -0,0 +1,16 @@
{
  "eos_token": {
    "content": "<|turn_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|pad|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json
ADDED
The diff for this file is too large to render.
See raw diff
tokenizer_config.json
ADDED
@@ -0,0 +1,221 @@
{
  "add_bos_token": false,
  "added_tokens_decoder": {
    "120000": {"content": "<|eos|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "120001": {"content": "<|unk|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "120002": {"content": "<|pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "120003": {"content": "<|role_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "120004": {"content": "<|role_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "120005": {"content": "<|turn_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "120006": {"content": "<|code_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "120007": {"content": "<|code_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "120008": {"content": "<|commit_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "120009": {"content": "<|commit_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "120010": {"content": "<|diff_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "120011": {"content": "<|diff_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "120012": {"content": "<|code_execution_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "120013": {"content": "<|code_execution_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "120014": {"content": "<|image_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "120015": {"content": "<|image_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "120016": {"content": "<|image_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "120017": {"content": "<|video_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "120018": {"content": "<|video_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "120019": {"content": "<|video_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "120020": {"content": "<|audio_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "120021": {"content": "<|audio_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "120022": {"content": "<|audio_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "120023": {"content": "<|function_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "120024": {"content": "<|function_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "120025": {"content": "<|turn_end>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}
  },
  "clean_up_tokenization_spaces": true,
  "eos_token": "<|turn_end|>",
  "extra_special_tokens": {},
  "model_max_length": 32768,
  "chat_template": "{% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '<|role_start|>system<|role_end|>你是Megrez-MoE,将针对用户的问题给出详细的、积极的回答。<|turn_end|>' }}{% endif %}{{ '<|role_start|>' + message['role'] + '<|role_end|>' + message['content'] + '<|turn_end|>' }}{% endfor %}{% if add_generation_prompt %}{{ '<|role_start|>assistant<|role_end|>' }}{% endif %}",
  "pad_token": "<|pad|>",
  "padding_side": "right",
  "tokenizer_class": "PreTrainedTokenizer"
}
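Given the `chat_template` above, a conversation renders into the `<|role_start|>...<|role_end|>...<|turn_end|>` frame, with the default Chinese system prompt injected when none is supplied. A minimal usage sketch; the repo id is this repository's, and `trust_remote_code=True` is assumed because the model ships custom modeling code:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Infinigence/Megrez2-3x7B-A3B-Preview", trust_remote_code=True)
messages = [{"role": "user", "content": "Hello!"}]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
# One single line (wrapped here for readability):
# <|role_start|>system<|role_end|>你是Megrez-MoE,将针对用户的问题给出详细的、积极的回答。<|turn_end|>
# <|role_start|>user<|role_end|>Hello!<|turn_end|><|role_start|>assistant<|role_end|>
```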