JunHowie committed
Commit afb6654 · verified · 1 Parent(s): ddcca95

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ model.safetensors.index.json filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
.mdl ADDED
Binary file (55 Bytes).
 
.msc ADDED
Binary file (5.02 kB).
 
.mv ADDED
@@ -0,0 +1 @@
+ Revision:master,CreatedAt:1746725023
README.md ADDED
@@ -0,0 +1,207 @@
+ ---
+ license: apache-2.0
+ license_link: https://huggingface.co/Qwen/Qwen3-235B-A22B/blob/main/LICENSE
+ pipeline_tag: text-generation
+ tags:
+ - Qwen3
+ - gptq
+ - int8
+ - 量化修复
+ - vLLM
+ base_model:
+ - Qwen/Qwen3-235B-A22B
+ base_model_relation: quantized
+ ---
+ # Qwen3-235B-A22B-GPTQ-Int8
+ Base model: [Qwen/Qwen3-235B-A22B](https://www.modelscope.cn/models/Qwen/Qwen3-235B-A22B)
+
+ ### 【Model Update Date】
+ ```
+ 2025-05-09
+ 1. Initial commit.
+ 2. Verified launch on 8 GPUs with `tensor-parallel-size` + `expert-parallel`.
+ 3. Must be launched with gptq_marlin; Compute Capability 7 GPUs are not supported, because vLLM has no native GPTQ MoE implementation.
+ ```
+
+ ### 【Dependencies】
+
+ ```
+ vllm==0.8.5
+ transformers==4.51.3
+ ```
+
+ <div style="
+ background: rgba(255, 193, 61, 0.15);
+ padding: 16px;
+ border-radius: 6px;
+ border: 1px solid rgba(255, 165, 0, 0.3);
+ margin: 16px 0;
+ ">
+ ### 【💡Notes for the New vLLM MoE Version💡】
+
+ #### 1. The V0 inference mode is required
+ Before starting vLLM, set the environment variable:
+ ```
+ export VLLM_USE_V1=0
+ ```
+
+ #### 2. `gptq_marlin.py` has a small bug and needs a patch
+ Replace the file at the following path with the patched copy attached to this repository (one way to do this is sketched just below this note):
+
+ ```.../vllm/model_executor/layers/quantization/gptq_marlin.py```
+
+ Otherwise the following error is raised:
+ ```
+ raise NotImplementedError(
+ NotImplementedError: Apply router weight on input is not supported forfused Marlin MoE method.
+ ```
+ </div>
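+
+ One way to apply the patch, sketched in Python (this assumes `vllm` is importable in the target environment and that the patched `gptq_marlin.py` from this repository sits in the current directory):
+
+ ```python
+ import os
+ import shutil
+
+ import vllm
+
+ # Resolve the installed location of vllm's gptq_marlin.py
+ target = os.path.join(os.path.dirname(vllm.__file__),
+                       "model_executor", "layers", "quantization", "gptq_marlin.py")
+
+ shutil.copy(target, target + ".bak")     # keep a backup of the original file
+ shutil.copy("gptq_marlin.py", target)    # overwrite it with the patched file
+ print("patched:", target)
+ ```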
+
+ <div style="
+ background: rgba(255, 0, 200, 0.15);
+ padding: 16px;
+ border-radius: 6px;
+ border: 1px solid rgba(255, 0, 200, 0.3);
+ margin: 16px 0;
+ ">
+ ### 【💡Notes for Qwen3-235B-A22B💡】
+
+ #### 1. When starting vLLM, remember to enable expert parallelism (`--enable-expert-parallel`); otherwise the model cannot be launched on a single node with 8 GPUs.
+ Launch example:
+ ```commandline
+ vllm serve \
+ tclf90/Qwen3-235B-A22B-GPTQ-Int8 \
+ --served-model-name Qwen3-235B-A22B-GPTQ-Int8 \
+ --max-num-seqs 8 \
+ --max-model-len 32768 \
+ --max-seq-len-to-capture 32768 \
+ --gpu-memory-utilization 0.98 \
+ --tensor-parallel-size 8 \
+ --enable-expert-parallel \
+ --disable-log-requests \
+ --trust-remote-code
+ ```
+ </div>
+
+
+ ### 【Model List】
+
+ | File Size | Last Updated |
+ |-----------|--------------|
+ | `226GB`   | `2025-05-09` |
+
+
+
+ ### 【Model Download】
+
+ ```python
+ from modelscope import snapshot_download
+ snapshot_download('tclf90/Qwen3-235B-A22B-GPTQ-Int8', cache_dir="your/local/path")
+ ```
+
+
+ ### 【Introduction】
+ # Qwen3-235B-A22B
+ <a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
+ <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+
+ ## Qwen3 Highlights
+
+ Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
+
+ - **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
+ - **Significantly enhanced reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
+ - **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
+ - **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
+ - **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
+
+ ## Model Overview
+
+ **Qwen3-235B-A22B** has the following features:
+ - Type: Causal Language Models
+ - Training Stage: Pretraining & Post-training
+ - Number of Parameters: 235B in total and 22B activated
+ - Number of Parameters (Non-Embedding): 234B
+ - Number of Layers: 94
+ - Number of Attention Heads (GQA): 64 for Q and 4 for KV
+ - Number of Experts: 128
+ - Number of Activated Experts: 8
+ - Context Length: 32,768 tokens natively and [131,072 tokens with YaRN](#processing-long-texts).
+
+ For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
+
+ ## Best Practices
+
+ To achieve optimal performance, we recommend the following settings (a request sketch that applies them follows this list):
+
+ 1. **Sampling Parameters**:
+    - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
+    - For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
+    - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
+
+ 2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
+
+ 3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
+    - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
+    - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
+
+ 4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. This is implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
+
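+ A minimal request sketch that applies the thinking-mode settings above against the vLLM server started earlier (the base URL assumes vLLM's default port 8000; the API key is a placeholder, and `top_k`/`min_p` are passed through vLLM's `extra_body`):
+
+ ```python
+ from openai import OpenAI
+
+ client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
+
+ resp = client.chat.completions.create(
+     model="Qwen3-235B-A22B-GPTQ-Int8",
+     messages=[{"role": "user", "content": "Please reason step by step: what is 12 * 17?"}],
+     temperature=0.6,                       # thinking-mode recommendation
+     top_p=0.95,
+     max_tokens=32768,                      # adequate output length
+     extra_body={"top_k": 20, "min_p": 0},
+ )
+ print(resp.choices[0].message.content)
+ ```
+
+ When appending this turn back into the conversation history, keep only the final answer and drop any `<think>...</think>` content, as point 4 above recommends.
+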
+ ## Processing Long Texts
+
+ Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
+
+ YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, and `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
+
+ - Modifying the model files:
+   In the `config.json` file, add the `rope_scaling` fields:
+ ```json
+ {
+     ...,
+     "rope_scaling": {
+         "rope_type": "yarn",
+         "factor": 4.0,
+         "original_max_position_embeddings": 32768
+     }
+ }
+ ```
+   For `llama.cpp`, you need to regenerate the GGUF file after the modification.
+
+ - Passing command line arguments:
+
+   For `vllm`, you can use
+ ```shell
+ vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
+ ```
+
+   For `sglang`, you can use
+ ```shell
+ python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
+ ```
+
+   For `llama-server` from `llama.cpp`, you can use
+ ```shell
+ llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
+ ```
+
+ ## Disabling Thinking Completely
+ To completely disable thinking, you can start the model with a custom chat template (see the [vLLM guide](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes)); a quick local check of the template is sketched below the command:
+
+ ```
+ vllm serve ...model_path... --chat-template ./qwen3_nonthinking.jinja
+ ```
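+
+ To verify the template before serving, it can be rendered with `transformers`' `apply_chat_template` (a sketch; the model path is a placeholder for the downloaded snapshot):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tok = AutoTokenizer.from_pretrained("path/to/Qwen3-235B-A22B-GPTQ-Int8")
+ template = open("qwen3_nonthinking.jinja").read()
+
+ text = tok.apply_chat_template(
+     [{"role": "user", "content": "Hello"}],
+     chat_template=template,
+     tokenize=False,
+     add_generation_prompt=True,
+ )
+ # The generation prompt should already end with an empty <think></think> block.
+ print(text)
+ ```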
+
+
+ ## Citation
+
+ If you find our work helpful, feel free to give us a cite.
+
+ ```
+ @misc{qwen3,
+     title  = {Qwen3},
+     url    = {https://qwenlm.github.io/blog/qwen3/},
+     author = {Qwen Team},
+     month  = {April},
+     year   = {2025}
+ }
+ ```
config.json ADDED
@@ -0,0 +1,49 @@
+ {
+   "name_or_path": "tclf90/Qwen3-235B-A22B-GPTQ-Int8",
+   "architectures": [
+     "Qwen3MoeForCausalLM"
+   ],
+   "attention_bias": false,
+   "attention_dropout": 0.0,
+   "bos_token_id": 151643,
+   "decoder_sparse_step": 1,
+   "eos_token_id": 151645,
+   "head_dim": 128,
+   "hidden_act": "silu",
+   "hidden_size": 4096,
+   "initializer_range": 0.02,
+   "intermediate_size": 12288,
+   "max_position_embeddings": 40960,
+   "max_window_layers": 94,
+   "mlp_only_layers": [],
+   "model_type": "qwen3_moe",
+   "moe_intermediate_size": 1536,
+   "norm_topk_prob": true,
+   "num_attention_heads": 64,
+   "num_experts": 128,
+   "num_experts_per_tok": 8,
+   "num_hidden_layers": 94,
+   "num_key_value_heads": 4,
+   "output_router_logits": false,
+   "rms_norm_eps": 1e-06,
+   "rope_scaling": null,
+   "rope_theta": 1000000.0,
+   "router_aux_loss_coef": 0.001,
+   "sliding_window": null,
+   "tie_word_embeddings": false,
+   "torch_dtype": "float16",
+   "transformers_version": "4.51.0",
+   "use_cache": true,
+   "use_sliding_window": false,
+   "vocab_size": 151936,
+   "quantization_config": {
+     "quant_method": "gptq",
+     "bits": 8,
+     "group_size": 128,
+     "sym": true,
+     "desc_act": false,
+     "block_name_to_quantize": null,
+     "module_name_preceding_first_block": null,
+     "modules_in_block_to_quantize": null
+   }
+ }
configuration.json ADDED
@@ -0,0 +1 @@
+ {"framework": "pytorch", "task": "others", "allow_remote": true}
generation_config.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "bos_token_id": 151643,
+   "do_sample": true,
+   "eos_token_id": [
+     151645,
+     151643
+   ],
+   "pad_token_id": 151643,
+   "temperature": 0.6,
+   "top_k": 20,
+   "top_p": 0.95,
+   "transformers_version": "4.51.0"
+ }
gptq_marlin.py ADDED
@@ -0,0 +1,643 @@
1
+ # SPDX-License-Identifier: Apache-2.0
2
+
3
+ from typing import Any, Callable, Dict, List, Optional, Set, Union
4
+
5
+ import torch
6
+
7
+ import vllm.model_executor.layers.fused_moe # noqa
8
+ from vllm import _custom_ops as ops
9
+ from vllm.logger import init_logger
10
+ from vllm.model_executor.layers.fused_moe.layer import (
11
+ FusedMoE, FusedMoEMethodBase, FusedMoeWeightScaleSupported)
12
+ from vllm.model_executor.layers.linear import (LinearMethodBase,
13
+ set_weight_attrs)
14
+ from vllm.model_executor.layers.quantization.base_config import (
15
+ QuantizationConfig, QuantizeMethodBase)
16
+ from vllm.model_executor.layers.quantization.kernels.mixed_precision import (
17
+ MPLinearLayerConfig, choose_mp_linear_kernel)
18
+ from vllm.model_executor.layers.quantization.utils import replace_parameter
19
+ from vllm.model_executor.layers.quantization.utils.gptq_utils import (
20
+ get_linear_quant_method)
21
+ from vllm.model_executor.layers.quantization.utils.marlin_utils import (
22
+ check_marlin_supported, check_moe_marlin_supports_layer,
23
+ marlin_moe_permute_scales, marlin_repeat_scales_on_all_ranks,
24
+ verify_marlin_supported)
25
+ from vllm.model_executor.parameter import (ChannelQuantScaleParameter,
26
+ GroupQuantScaleParameter,
27
+ PackedColumnParameter,
28
+ PackedvLLMParameter,
29
+ RowvLLMParameter)
30
+ from vllm.platforms import current_platform
31
+ from vllm.scalar_type import scalar_types
32
+
33
+ logger = init_logger(__name__)
34
+
35
+
36
+ class GPTQMarlinConfig(QuantizationConfig):
37
+ """Config class for GPTQ Marlin"""
38
+
39
+ # (num_bits, is_sym) -> quant_type
40
+ TYPE_MAP = {
41
+ (4, True): scalar_types.uint4b8,
42
+ (8, True): scalar_types.uint8b128,
43
+ }
44
+
45
+ def __init__(self, weight_bits: int, group_size: int, desc_act: bool,
46
+ is_sym: bool, lm_head_quantized: bool,
47
+ dynamic: Dict[str, Dict[str, Union[int, bool]]],
48
+ full_config: Dict[str, Any]) -> None:
49
+ super().__init__()
50
+ if desc_act and group_size == -1:
51
+ # In this case, act_order == True is the same as act_order == False
52
+ # (since we have only one group per output channel)
53
+ desc_act = False
54
+
55
+ # GPTQModel use `dynamic` config property to allow per module
56
+ # quantization config so each module can be individually optimized.
57
+ # Format is Dict[str, Dict] where key is a regex string that can
58
+ # perform both positive ("+:" prefixed) or negative ("-:" prefixed)
59
+ # matching of a module.
60
+ # Default to positive match, override base quant config mode, if no
61
+ # prefix is used. Value is in dict format of field key and override
62
+ # value.
63
+ # Negative matching will skip quantization init for this module
64
+ # entirely:
65
+ # non-quantized inference. More details and quantization examples can be
66
+ # found at: https://github.com/ModelCloud/GPTQModel
67
+ # Example:
68
+ # # last 1/2 of the layers 10-21 has 8bit vs 4bit for 0-9
69
+ # # last 1/4 of the layers 16-21 has 8bit and group_size 64
70
+ # dynamic = {
71
+ # #`.*\.` matches the layers_node prefix
72
+ # # positive match layer 10-15
73
+ # r"+:.*\.(?:1[0-5])\..*": {"bits": 8,},
74
+ # # positive match layer 16-21
75
+ # r"+:.*\.(?:1[6-9]|20|21)\..*": {"bits": 8, "group_size": 64,},
76
+ # r"-:.*\.moe\..*": {}, # negative match (skip) all `moe` layers
77
+ # }
78
+ self.dynamic = dynamic
79
+
80
+ self.weight_bits = weight_bits
81
+ self.is_sym = is_sym
82
+
83
+ self.pack_factor = 32 // weight_bits # packed into int32
84
+ self.group_size = group_size
85
+ self.desc_act = desc_act
86
+ self.lm_head_quantized = lm_head_quantized
87
+ self.full_config = full_config
88
+
89
+ if (weight_bits, is_sym) not in self.TYPE_MAP:
90
+ raise ValueError("Unsupported quantization config: "
91
+ f"bits={weight_bits}, sym={is_sym}")
92
+
93
+ self.quant_type = self.TYPE_MAP[(weight_bits, is_sym)]
94
+
95
+ def __repr__(self) -> str:
96
+ return (f"GPTQMarlinConfig(quant_type={self.quant_type}, "
97
+ f"group_size={self.group_size}, "
98
+ f"desc_act={self.desc_act}, "
99
+ f"lm_head_quantized={self.lm_head_quantized}), "
100
+ f"dynamic={self.dynamic}")
101
+
102
+ @classmethod
103
+ def get_name(cls) -> str:
104
+ return "gptq_marlin"
105
+
106
+ @classmethod
107
+ def get_supported_act_dtypes(cls) -> List[torch.dtype]:
108
+ return [torch.half, torch.bfloat16]
109
+
110
+ @classmethod
111
+ def get_min_capability(cls) -> int:
112
+ return 80
113
+
114
+ @classmethod
115
+ def get_config_filenames(cls) -> List[str]:
116
+ return ["quantize_config.json"]
117
+
118
+ @classmethod
119
+ def from_config(cls, config: Dict[str, Any]) -> "GPTQMarlinConfig":
120
+ dynamic = cls.get_from_keys_or(config, ["dynamic"], default={})
121
+ dynamic = {} if dynamic is None else dynamic
122
+
123
+ weight_bits = cls.get_from_keys(config, ["bits"])
124
+ group_size = cls.get_from_keys(config, ["group_size"])
125
+ desc_act = cls.get_from_keys(config, ["desc_act"])
126
+ is_sym = cls.get_from_keys(config, ["sym"])
127
+ lm_head_quantized = cls.get_from_keys_or(config, ["lm_head"],
128
+ default=False)
129
+ return cls(weight_bits, group_size, desc_act, is_sym,
130
+ lm_head_quantized, dynamic, config)
131
+
132
+ @classmethod
133
+ def override_quantization_method(cls, hf_quant_cfg,
134
+ user_quant) -> Optional[str]:
135
+ can_convert = cls.is_gptq_marlin_compatible(hf_quant_cfg)
136
+
137
+ is_valid_user_quant = (user_quant is None or user_quant == "marlin"
138
+ or user_quant == "gptq_marlin")
139
+
140
+ if can_convert and is_valid_user_quant:
141
+ msg = ("The model is convertible to {} during runtime."
142
+ " Using {} kernel.".format(cls.get_name(), cls.get_name()))
143
+ logger.info(msg)
144
+ return cls.get_name()
145
+
146
+ if can_convert and user_quant == "gptq":
147
+ logger.info("Detected that the model can run with gptq_marlin"
148
+ ", however you specified quantization=gptq explicitly,"
149
+ " so forcing gptq. Use quantization=gptq_marlin for"
150
+ " faster inference")
151
+ return None
152
+
153
+ def get_quant_method(self, layer: torch.nn.Module,
154
+ prefix: str) -> Optional["QuantizeMethodBase"]:
155
+ if isinstance(layer, FusedMoE):
156
+ from vllm.model_executor.layers.quantization.moe_wna16 import (
157
+ MoeWNA16Config)
158
+ if not check_moe_marlin_supports_layer(layer, self.group_size):
159
+ logger.warning(
160
+ f"Layer '{prefix}' is not supported by GPTQMoeMarlin. "
161
+ "Falling back to Moe WNA16 kernels.")
162
+ return MoeWNA16Config.from_config(
163
+ self.full_config).get_quant_method(layer, prefix)
164
+ return GPTQMarlinMoEMethod(self)
165
+ return get_linear_quant_method(self, layer, prefix,
166
+ GPTQMarlinLinearMethod)
167
+
168
+ @classmethod
169
+ def is_gptq_marlin_compatible(cls, quant_config: Dict[str, Any]):
170
+ quant_method = quant_config.get("quant_method", "").lower()
171
+ num_bits = quant_config.get("bits")
172
+ group_size = quant_config.get("group_size")
173
+ sym = quant_config.get("sym")
174
+ desc_act = quant_config.get("desc_act")
175
+
176
+ if not current_platform.is_cuda():
177
+ return False
178
+
179
+ if quant_method != "gptq":
180
+ return False
181
+
182
+ # Marlin conversion is only valid if required properties are found
183
+ if (num_bits is None or group_size is None or sym is None
184
+ or desc_act is None):
185
+ return False
186
+
187
+ if (num_bits, sym) not in cls.TYPE_MAP:
188
+ return False
189
+
190
+ return check_marlin_supported(quant_type=cls.TYPE_MAP[(num_bits, sym)],
191
+ group_size=group_size)
192
+
193
+
194
+ class GPTQMarlinLinearMethod(LinearMethodBase):
195
+ """Linear method for GPTQ Marlin.
196
+
197
+ Args:
198
+ quant_config: The GPTQ Marlin quantization config.
199
+ """
200
+
201
+ _kernel_backends_being_used: Set[str] = set()
202
+
203
+ def __init__(self, quant_config: GPTQMarlinConfig) -> None:
204
+ self.quant_config = quant_config
205
+
206
+ # Verify supported on platform.
207
+ verify_marlin_supported(quant_type=self.quant_config.quant_type,
208
+ group_size=self.quant_config.group_size)
209
+
210
+ def create_weights(
211
+ self,
212
+ layer: torch.nn.Module,
213
+ input_size_per_partition: int,
214
+ output_partition_sizes: List[int],
215
+ input_size: int,
216
+ output_size: int,
217
+ params_dtype: torch.dtype,
218
+ **extra_weight_attrs,
219
+ ) -> None:
220
+ output_size_per_partition = sum(output_partition_sizes)
221
+ is_row_parallel = input_size != input_size_per_partition
222
+ weight_loader = extra_weight_attrs.get("weight_loader")
223
+
224
+ mp_linear_kernel_config = MPLinearLayerConfig(
225
+ full_weight_shape=(input_size, output_size),
226
+ partition_weight_shape=\
227
+ (input_size_per_partition, output_size_per_partition),
228
+ weight_type=self.quant_config.quant_type,
229
+ act_type=params_dtype,
230
+ group_size=self.quant_config.group_size,
231
+ zero_points=False,
232
+ has_g_idx=self.quant_config.desc_act
233
+ )
234
+
235
+ kernel_type = choose_mp_linear_kernel(mp_linear_kernel_config)
236
+
237
+ if kernel_type.__name__ not in self._kernel_backends_being_used:
238
+ logger.info("Using %s for GPTQMarlinLinearMethod",
239
+ kernel_type.__name__)
240
+ self._kernel_backends_being_used.add(kernel_type.__name__)
241
+
242
+ # Normalize group_size
243
+ if self.quant_config.group_size != -1:
244
+ group_size = self.quant_config.group_size
245
+ else:
246
+ group_size = input_size
247
+
248
+ # Determine sharding
249
+ if marlin_repeat_scales_on_all_ranks(self.quant_config.desc_act,
250
+ self.quant_config.group_size,
251
+ is_row_parallel):
252
+ # By setting scale_dim == None, weight_loader will
253
+ # repeat the scales on each GPU in TP>1 case.
254
+ scales_and_zp_input_dim = None
255
+ scales_and_zp_size = input_size // group_size
256
+ else:
257
+ # By setting scale_dim == 0, weight_loader will
258
+ # shard the scales in TP>1 case.
259
+ scales_and_zp_input_dim = 0
260
+ scales_and_zp_size = input_size_per_partition // group_size
261
+
262
+ # Quantized weights
263
+ qweight = PackedvLLMParameter(
264
+ data=torch.empty(
265
+ input_size_per_partition // self.quant_config.pack_factor,
266
+ output_size_per_partition,
267
+ dtype=torch.int32,
268
+ ),
269
+ input_dim=0,
270
+ output_dim=1,
271
+ packed_dim=0,
272
+ packed_factor=self.quant_config.pack_factor,
273
+ weight_loader=weight_loader)
274
+
275
+ # Activation order
276
+ g_idx = RowvLLMParameter(data=torch.empty(
277
+ input_size_per_partition,
278
+ dtype=torch.int32,
279
+ ),
280
+ input_dim=0,
281
+ weight_loader=weight_loader)
282
+
283
+ qzeros_args = {
284
+ "data":
285
+ torch.empty(
286
+ scales_and_zp_size,
287
+ output_size_per_partition // self.quant_config.pack_factor,
288
+ dtype=torch.int32,
289
+ ),
290
+ "weight_loader":
291
+ weight_loader
292
+ }
293
+ weight_scale_args = {
294
+ "data":
295
+ torch.empty(
296
+ scales_and_zp_size,
297
+ output_size_per_partition,
298
+ dtype=params_dtype,
299
+ ),
300
+ "weight_loader":
301
+ weight_loader
302
+ }
303
+
304
+ if scales_and_zp_input_dim is None:
305
+ scales = ChannelQuantScaleParameter(output_dim=1,
306
+ **weight_scale_args)
307
+ qzeros = PackedColumnParameter(
308
+ output_dim=1,
309
+ packed_dim=1,
310
+ packed_factor=self.quant_config.pack_factor,
311
+ **qzeros_args)
312
+
313
+ else:
314
+ scales = GroupQuantScaleParameter(output_dim=1,
315
+ input_dim=0,
316
+ **weight_scale_args)
317
+ qzeros = PackedvLLMParameter(
318
+ input_dim=0,
319
+ output_dim=1,
320
+ packed_dim=1,
321
+ packed_factor=self.quant_config.pack_factor,
322
+ **qzeros_args)
323
+
324
+ layer.register_parameter("qweight", qweight)
325
+ layer.register_parameter("g_idx", g_idx)
326
+ layer.register_parameter("scales", scales)
327
+ layer.register_parameter("qzeros", qzeros)
328
+
329
+ self.kernel = kernel_type(mp_linear_kernel_config,
330
+ w_q_param_name="qweight",
331
+ w_s_param_name="scales",
332
+ w_zp_param_name="qzeros",
333
+ w_gidx_param_name="g_idx")
334
+
335
+ def process_weights_after_loading(self, layer: torch.nn.Module) -> None:
336
+ self.kernel.process_weights_after_loading(layer)
337
+
338
+ def apply(
339
+ self,
340
+ layer: torch.nn.Module,
341
+ x: torch.Tensor,
342
+ bias: Optional[torch.Tensor] = None,
343
+ ) -> torch.Tensor:
344
+ return self.kernel.apply_weights(layer, x, bias)
345
+
346
+
347
+ class GPTQMarlinMoEMethod(FusedMoEMethodBase):
348
+ """MoE Marlin method with quantization."""
349
+
350
+ def __init__(self, quant_config: GPTQMarlinConfig) -> None:
351
+ self.quant_config = quant_config
352
+
353
+ def create_weights(
354
+ self,
355
+ layer: torch.nn.Module,
356
+ num_experts: int,
357
+ hidden_size: int,
358
+ intermediate_size_per_partition: int,
359
+ params_dtype: torch.dtype,
360
+ **extra_weight_attrs,
361
+ ):
362
+ intermediate_size_full = extra_weight_attrs.pop(
363
+ "intermediate_size_full")
364
+
365
+ self.is_k_full = (not self.quant_config.desc_act) or (
366
+ intermediate_size_per_partition == intermediate_size_full)
367
+
368
+ if self.quant_config.group_size != -1:
369
+ scales_size13 = hidden_size // self.quant_config.group_size
370
+ w2_scales_size = (intermediate_size_full
371
+ if self.quant_config.desc_act else
372
+ intermediate_size_per_partition)
373
+ scales_size2 = (w2_scales_size // self.quant_config.group_size)
374
+ strategy = FusedMoeWeightScaleSupported.GROUP.value
375
+ else:
376
+ scales_size13 = 1
377
+ scales_size2 = 1
378
+ strategy = FusedMoeWeightScaleSupported.CHANNEL.value
379
+
380
+ extra_weight_attrs.update({
381
+ "quant_method": strategy,
382
+ "is_transposed": True
383
+ })
384
+ # Fused gate_up_proj (column parallel)
385
+ w13_qweight = torch.nn.Parameter(
386
+ torch.empty(
387
+ num_experts,
388
+ hidden_size // self.quant_config.pack_factor,
389
+ 2 * intermediate_size_per_partition,
390
+ dtype=torch.int32,
391
+ ),
392
+ requires_grad=False,
393
+ )
394
+ layer.register_parameter("w13_qweight", w13_qweight)
395
+ set_weight_attrs(w13_qweight, extra_weight_attrs)
396
+ # down_proj (row parallel)
397
+ w2_qweight = torch.nn.Parameter(
398
+ torch.empty(
399
+ num_experts,
400
+ intermediate_size_per_partition //
401
+ self.quant_config.pack_factor,
402
+ hidden_size,
403
+ dtype=torch.int32,
404
+ ),
405
+ requires_grad=False,
406
+ )
407
+ layer.register_parameter("w2_qweight", w2_qweight)
408
+ set_weight_attrs(w2_qweight, extra_weight_attrs)
409
+ # up_proj scales
410
+ w13_scales = torch.nn.Parameter(
411
+ torch.empty(num_experts,
412
+ scales_size13,
413
+ 2 * intermediate_size_per_partition,
414
+ dtype=params_dtype),
415
+ requires_grad=False,
416
+ )
417
+ layer.register_parameter("w13_scales", w13_scales)
418
+ set_weight_attrs(w13_scales, extra_weight_attrs)
419
+ # down_proj scales
420
+ w2_scales = torch.nn.Parameter(
421
+ torch.empty(num_experts,
422
+ scales_size2,
423
+ hidden_size,
424
+ dtype=params_dtype),
425
+ requires_grad=False,
426
+ )
427
+ layer.register_parameter("w2_scales", w2_scales)
428
+ set_weight_attrs(w2_scales, extra_weight_attrs)
429
+ # dont shard the w2 scales when running act order
430
+ set_weight_attrs(w2_scales,
431
+ {"load_full_w2": self.quant_config.desc_act})
432
+ # up_proj scales
433
+ w13_qzeros = torch.nn.Parameter(
434
+ torch.empty(num_experts,
435
+ scales_size13,
436
+ 2 * intermediate_size_per_partition //
437
+ self.quant_config.pack_factor,
438
+ dtype=params_dtype),
439
+ requires_grad=False,
440
+ )
441
+ layer.register_parameter("w13_qzeros", w13_qzeros)
442
+ set_weight_attrs(w13_qzeros, extra_weight_attrs)
443
+ # down_proj scales
444
+ w2_qzeros = torch.nn.Parameter(
445
+ torch.empty(num_experts,
446
+ scales_size2,
447
+ hidden_size // self.quant_config.pack_factor,
448
+ dtype=params_dtype),
449
+ requires_grad=False,
450
+ )
451
+ layer.register_parameter("w2_qzeros", w2_qzeros)
452
+ set_weight_attrs(w2_qzeros, extra_weight_attrs)
453
+ # dont shard the w2 scales when running act order
454
+ set_weight_attrs(w2_qzeros,
455
+ {"load_full_w2": self.quant_config.desc_act})
456
+ w13_g_idx = torch.nn.Parameter(
457
+ torch.empty(
458
+ num_experts,
459
+ hidden_size,
460
+ dtype=torch.int32,
461
+ ),
462
+ requires_grad=False,
463
+ )
464
+ layer.register_parameter("w13_g_idx", w13_g_idx)
465
+ set_weight_attrs(w13_g_idx, extra_weight_attrs)
466
+ w2_g_idx = torch.nn.Parameter(
467
+ torch.empty(
468
+ num_experts,
469
+ intermediate_size_per_partition,
470
+ dtype=torch.int32,
471
+ ),
472
+ requires_grad=False,
473
+ )
474
+ layer.register_parameter("w2_g_idx", w2_g_idx)
475
+ set_weight_attrs(w2_g_idx, extra_weight_attrs)
476
+ w13_g_idx_sort_indices = torch.nn.Parameter(
477
+ torch.empty(
478
+ num_experts,
479
+ hidden_size,
480
+ dtype=torch.int32,
481
+ ),
482
+ requires_grad=False,
483
+ )
484
+ layer.register_parameter("w13_g_idx_sort_indices",
485
+ w13_g_idx_sort_indices)
486
+ set_weight_attrs(w13_g_idx_sort_indices, extra_weight_attrs)
487
+ w2_g_idx_sort_indices = torch.nn.Parameter(
488
+ torch.empty(
489
+ num_experts,
490
+ intermediate_size_per_partition,
491
+ dtype=torch.int32,
492
+ ),
493
+ requires_grad=False,
494
+ )
495
+ layer.register_parameter("w2_g_idx_sort_indices",
496
+ w2_g_idx_sort_indices)
497
+ set_weight_attrs(w2_g_idx_sort_indices, extra_weight_attrs)
498
+
499
+ device = layer.w13_qweight.device
500
+ sms = torch.cuda.get_device_properties(device).multi_processor_count
501
+ layer.workspace = torch.zeros((sms * 4, ),
502
+ dtype=torch.int,
503
+ device=device,
504
+ requires_grad=False)
505
+
506
+ def process_weights_after_loading(self, layer: torch.nn.Module) -> None:
507
+
508
+ # Process act_order
509
+ if self.quant_config.desc_act:
510
+ # Get sorting based on g_idx
511
+ num_experts = layer.w13_g_idx.shape[0]
512
+ w13_g_idx_sort_indices = torch.empty_like(layer.w13_g_idx)
513
+ w2_g_idx_sort_indices = torch.empty_like(layer.w2_g_idx)
514
+ w13_sorted_g_idx = torch.empty_like(layer.w13_g_idx)
515
+ w2_sorted_g_idx = torch.empty_like(layer.w2_g_idx)
516
+ for e in range(num_experts):
517
+ w13_g_idx_sort_indices[e] = torch.argsort(
518
+ layer.w13_g_idx[e]).to(torch.int32)
519
+ w2_g_idx_sort_indices[e] = torch.argsort(layer.w2_g_idx[e]).to(
520
+ torch.int32)
521
+ w13_sorted_g_idx[e] = layer.w13_g_idx[e][
522
+ w13_g_idx_sort_indices[e]]
523
+ w2_sorted_g_idx[e] = layer.w2_g_idx[e][
524
+ w2_g_idx_sort_indices[e]]
525
+ replace_parameter(layer, "w13_g_idx", w13_sorted_g_idx)
526
+ replace_parameter(layer, "w2_g_idx", w2_sorted_g_idx)
527
+ replace_parameter(layer, "w13_g_idx_sort_indices",
528
+ w13_g_idx_sort_indices)
529
+ replace_parameter(layer, "w2_g_idx_sort_indices",
530
+ w2_g_idx_sort_indices)
531
+ else:
532
+ # Reset g_idx related tensors
533
+ num_experts = layer.w13_g_idx.shape[0]
534
+ device = layer.w13_g_idx.device
535
+ layer.w13_g_idx = torch.nn.Parameter(
536
+ torch.empty((num_experts, 0), dtype=torch.int32,
537
+ device=device),
538
+ requires_grad=False,
539
+ )
540
+ layer.w2_g_idx = torch.nn.Parameter(
541
+ torch.empty((num_experts, 0), dtype=torch.int32,
542
+ device=device),
543
+ requires_grad=False,
544
+ )
545
+ layer.w13_g_idx_sort_indices = torch.nn.Parameter(
546
+ torch.empty((num_experts, 0), dtype=torch.int32,
547
+ device=device),
548
+ requires_grad=False,
549
+ )
550
+ layer.w2_g_idx_sort_indices = torch.nn.Parameter(
551
+ torch.empty((num_experts, 0), dtype=torch.int32,
552
+ device=device),
553
+ requires_grad=False,
554
+ )
555
+ # Repack weights
556
+ marlin_w13_qweight = ops.gptq_marlin_moe_repack(
557
+ layer.w13_qweight,
558
+ layer.w13_g_idx_sort_indices,
559
+ layer.w13_qweight.shape[1] * self.quant_config.pack_factor,
560
+ layer.w13_qweight.shape[2],
561
+ self.quant_config.quant_type.size_bits,
562
+ )
563
+ replace_parameter(layer, "w13_qweight", marlin_w13_qweight)
564
+ marlin_w2_qweight = ops.gptq_marlin_moe_repack(
565
+ layer.w2_qweight,
566
+ layer.w2_g_idx_sort_indices,
567
+ layer.w2_qweight.shape[1] * self.quant_config.pack_factor,
568
+ layer.w2_qweight.shape[2],
569
+ self.quant_config.quant_type.size_bits,
570
+ )
571
+ replace_parameter(layer, "w2_qweight", marlin_w2_qweight)
572
+ # Repack scales
573
+ marlin_w13_scales = marlin_moe_permute_scales(
574
+ s=layer.w13_scales,
575
+ size_k=layer.intermediate_size_per_partition,
576
+ size_n=layer.w13_scales.shape[2],
577
+ group_size=self.quant_config.group_size,
578
+ )
579
+ replace_parameter(layer, "w13_scales", marlin_w13_scales)
580
+ marlin_w2_scales = marlin_moe_permute_scales(
581
+ s=layer.w2_scales,
582
+ size_k=layer.w2_scales.shape[1] *
583
+ (self.quant_config.group_size if self.quant_config.group_size != -1
584
+ else self.quant_config.pack_factor),
585
+ size_n=layer.w2_scales.shape[2],
586
+ group_size=self.quant_config.group_size,
587
+ )
588
+ replace_parameter(layer, "w2_scales", marlin_w2_scales)
589
+
590
+ def apply(
591
+ self,
592
+ layer: torch.nn.Module,
593
+ x: torch.Tensor,
594
+ router_logits: torch.Tensor,
595
+ top_k: int,
596
+ renormalize: bool,
597
+ use_grouped_topk: bool = False,
598
+ topk_group: Optional[int] = None,
599
+ num_expert_group: Optional[int] = None,
600
+ global_num_experts: int = -1,
601
+ expert_map: Optional[torch.Tensor] = None,
602
+ custom_routing_function: Optional[Callable] = None,
603
+ scoring_func: str = "softmax",
604
+ e_score_correction_bias: Optional[torch.Tensor] = None,
605
+ apply_router_weight_on_input: bool = False,
606
+ activation: str = "silu",
607
+ ) -> torch.Tensor:
608
+ assert activation == "silu", "Only SiLU activation is supported."
609
+ if apply_router_weight_on_input:
610
+ raise NotImplementedError(
611
+ "Apply router weight on input is not supported for"
612
+ "fused Marlin MoE method.")
613
+
614
+ topk_weights, topk_ids = FusedMoE.select_experts(
615
+ hidden_states=x,
616
+ router_logits=router_logits,
617
+ use_grouped_topk=use_grouped_topk,
618
+ top_k=top_k,
619
+ renormalize=renormalize,
620
+ topk_group=topk_group,
621
+ num_expert_group=num_expert_group,
622
+ custom_routing_function=custom_routing_function,
623
+ scoring_func=scoring_func,
624
+ e_score_correction_bias=e_score_correction_bias)
625
+
626
+ return torch.ops.vllm.fused_marlin_moe(
627
+ x,
628
+ layer.w13_qweight,
629
+ layer.w2_qweight,
630
+ layer.w13_scales,
631
+ layer.w2_scales,
632
+ router_logits,
633
+ topk_weights,
634
+ topk_ids,
635
+ global_num_experts=global_num_experts,
636
+ expert_map=expert_map,
637
+ g_idx1=layer.w13_g_idx,
638
+ g_idx2=layer.w2_g_idx,
639
+ sort_indices1=layer.w13_g_idx_sort_indices,
640
+ sort_indices2=layer.w2_g_idx_sort_indices,
641
+ num_bits=self.quant_config.quant_type.size_bits,
642
+ workspace=layer.workspace,
643
+ is_k_full=self.is_k_full)
merges.txt ADDED
The diff for this file is too large to render.
 
model.safetensors.index.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c5407ce97b8572b8d5eb6b450fdf13e8d2d96358e40f9eddeb984843930d3c88
+ size 13211434
qwen3_nonthinking.jinja ADDED
@@ -0,0 +1,82 @@
1
+ {%- if tools %}
2
+ {{- '<|im_start|>system\n' }}
3
+ {%- if messages[0].role == 'system' %}
4
+ {{- messages[0].content + '\n\n' }}
5
+ {%- endif %}
6
+ {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
7
+ {%- for tool in tools %}
8
+ {{- "\n" }}
9
+ {{- tool | tojson }}
10
+ {%- endfor %}
11
+ {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
12
+ {%- else %}
13
+ {%- if messages[0].role == 'system' %}
14
+ {{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
15
+ {%- endif %}
16
+ {%- endif %}
17
+ {%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
18
+ {%- for message in messages[::-1] %}
19
+ {%- set index = (messages|length - 1) - loop.index0 %}
20
+ {%- if ns.multi_step_tool and message.role == "user" and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}
21
+ {%- set ns.multi_step_tool = false %}
22
+ {%- set ns.last_query_index = index %}
23
+ {%- endif %}
24
+ {%- endfor %}
25
+ {%- for message in messages %}
26
+ {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
27
+ {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
28
+ {%- elif message.role == "assistant" %}
29
+ {%- set content = message.content %}
30
+ {%- set reasoning_content = '' %}
31
+ {%- if message.reasoning_content is defined and message.reasoning_content is not none %}
32
+ {%- set reasoning_content = message.reasoning_content %}
33
+ {%- else %}
34
+ {%- if '</think>' in message.content %}
35
+ {%- set content = message.content.split('</think>')[-1].lstrip('\n') %}
36
+ {%- set reasoning_content = message.content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
37
+ {%- endif %}
38
+ {%- endif %}
39
+ {%- if loop.index0 > ns.last_query_index %}
40
+ {%- if loop.last or (not loop.last and reasoning_content) %}
41
+ {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
42
+ {%- else %}
43
+ {{- '<|im_start|>' + message.role + '\n' + content }}
44
+ {%- endif %}
45
+ {%- else %}
46
+ {{- '<|im_start|>' + message.role + '\n' + content }}
47
+ {%- endif %}
48
+ {%- if message.tool_calls %}
49
+ {%- for tool_call in message.tool_calls %}
50
+ {%- if (loop.first and content) or (not loop.first) %}
51
+ {{- '\n' }}
52
+ {%- endif %}
53
+ {%- if tool_call.function %}
54
+ {%- set tool_call = tool_call.function %}
55
+ {%- endif %}
56
+ {{- '<tool_call>\n{"name": "' }}
57
+ {{- tool_call.name }}
58
+ {{- '", "arguments": ' }}
59
+ {%- if tool_call.arguments is string %}
60
+ {{- tool_call.arguments }}
61
+ {%- else %}
62
+ {{- tool_call.arguments | tojson }}
63
+ {%- endif %}
64
+ {{- '}\n</tool_call>' }}
65
+ {%- endfor %}
66
+ {%- endif %}
67
+ {{- '<|im_end|>\n' }}
68
+ {%- elif message.role == "tool" %}
69
+ {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
70
+ {{- '<|im_start|>user' }}
71
+ {%- endif %}
72
+ {{- '\n<tool_response>\n' }}
73
+ {{- message.content }}
74
+ {{- '\n</tool_response>' }}
75
+ {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
76
+ {{- '<|im_end|>\n' }}
77
+ {%- endif %}
78
+ {%- endif %}
79
+ {%- endfor %}
80
+ {%- if add_generation_prompt %}
81
+ {{- '<|im_start|>assistant\n<think>\n\n</think>\n\n' }}
82
+ {%- endif %}
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aeb13307a71acd8fe81861d94ad54ab689df773318809eed3cbe794b4492dae4
+ size 11422654
tokenizer_config.json ADDED
@@ -0,0 +1,239 @@
1
+ {
2
+ "add_bos_token": false,
3
+ "add_prefix_space": false,
4
+ "added_tokens_decoder": {
5
+ "151643": {
6
+ "content": "<|endoftext|>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ },
13
+ "151644": {
14
+ "content": "<|im_start|>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false,
19
+ "special": true
20
+ },
21
+ "151645": {
22
+ "content": "<|im_end|>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false,
27
+ "special": true
28
+ },
29
+ "151646": {
30
+ "content": "<|object_ref_start|>",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false,
35
+ "special": true
36
+ },
37
+ "151647": {
38
+ "content": "<|object_ref_end|>",
39
+ "lstrip": false,
40
+ "normalized": false,
41
+ "rstrip": false,
42
+ "single_word": false,
43
+ "special": true
44
+ },
45
+ "151648": {
46
+ "content": "<|box_start|>",
47
+ "lstrip": false,
48
+ "normalized": false,
49
+ "rstrip": false,
50
+ "single_word": false,
51
+ "special": true
52
+ },
53
+ "151649": {
54
+ "content": "<|box_end|>",
55
+ "lstrip": false,
56
+ "normalized": false,
57
+ "rstrip": false,
58
+ "single_word": false,
59
+ "special": true
60
+ },
61
+ "151650": {
62
+ "content": "<|quad_start|>",
63
+ "lstrip": false,
64
+ "normalized": false,
65
+ "rstrip": false,
66
+ "single_word": false,
67
+ "special": true
68
+ },
69
+ "151651": {
70
+ "content": "<|quad_end|>",
71
+ "lstrip": false,
72
+ "normalized": false,
73
+ "rstrip": false,
74
+ "single_word": false,
75
+ "special": true
76
+ },
77
+ "151652": {
78
+ "content": "<|vision_start|>",
79
+ "lstrip": false,
80
+ "normalized": false,
81
+ "rstrip": false,
82
+ "single_word": false,
83
+ "special": true
84
+ },
85
+ "151653": {
86
+ "content": "<|vision_end|>",
87
+ "lstrip": false,
88
+ "normalized": false,
89
+ "rstrip": false,
90
+ "single_word": false,
91
+ "special": true
92
+ },
93
+ "151654": {
94
+ "content": "<|vision_pad|>",
95
+ "lstrip": false,
96
+ "normalized": false,
97
+ "rstrip": false,
98
+ "single_word": false,
99
+ "special": true
100
+ },
101
+ "151655": {
102
+ "content": "<|image_pad|>",
103
+ "lstrip": false,
104
+ "normalized": false,
105
+ "rstrip": false,
106
+ "single_word": false,
107
+ "special": true
108
+ },
109
+ "151656": {
110
+ "content": "<|video_pad|>",
111
+ "lstrip": false,
112
+ "normalized": false,
113
+ "rstrip": false,
114
+ "single_word": false,
115
+ "special": true
116
+ },
117
+ "151657": {
118
+ "content": "<tool_call>",
119
+ "lstrip": false,
120
+ "normalized": false,
121
+ "rstrip": false,
122
+ "single_word": false,
123
+ "special": false
124
+ },
125
+ "151658": {
126
+ "content": "</tool_call>",
127
+ "lstrip": false,
128
+ "normalized": false,
129
+ "rstrip": false,
130
+ "single_word": false,
131
+ "special": false
132
+ },
133
+ "151659": {
134
+ "content": "<|fim_prefix|>",
135
+ "lstrip": false,
136
+ "normalized": false,
137
+ "rstrip": false,
138
+ "single_word": false,
139
+ "special": false
140
+ },
141
+ "151660": {
142
+ "content": "<|fim_middle|>",
143
+ "lstrip": false,
144
+ "normalized": false,
145
+ "rstrip": false,
146
+ "single_word": false,
147
+ "special": false
148
+ },
149
+ "151661": {
150
+ "content": "<|fim_suffix|>",
151
+ "lstrip": false,
152
+ "normalized": false,
153
+ "rstrip": false,
154
+ "single_word": false,
155
+ "special": false
156
+ },
157
+ "151662": {
158
+ "content": "<|fim_pad|>",
159
+ "lstrip": false,
160
+ "normalized": false,
161
+ "rstrip": false,
162
+ "single_word": false,
163
+ "special": false
164
+ },
165
+ "151663": {
166
+ "content": "<|repo_name|>",
167
+ "lstrip": false,
168
+ "normalized": false,
169
+ "rstrip": false,
170
+ "single_word": false,
171
+ "special": false
172
+ },
173
+ "151664": {
174
+ "content": "<|file_sep|>",
175
+ "lstrip": false,
176
+ "normalized": false,
177
+ "rstrip": false,
178
+ "single_word": false,
179
+ "special": false
180
+ },
181
+ "151665": {
182
+ "content": "<tool_response>",
183
+ "lstrip": false,
184
+ "normalized": false,
185
+ "rstrip": false,
186
+ "single_word": false,
187
+ "special": false
188
+ },
189
+ "151666": {
190
+ "content": "</tool_response>",
191
+ "lstrip": false,
192
+ "normalized": false,
193
+ "rstrip": false,
194
+ "single_word": false,
195
+ "special": false
196
+ },
197
+ "151667": {
198
+ "content": "<think>",
199
+ "lstrip": false,
200
+ "normalized": false,
201
+ "rstrip": false,
202
+ "single_word": false,
203
+ "special": false
204
+ },
205
+ "151668": {
206
+ "content": "</think>",
207
+ "lstrip": false,
208
+ "normalized": false,
209
+ "rstrip": false,
210
+ "single_word": false,
211
+ "special": false
212
+ }
213
+ },
214
+ "additional_special_tokens": [
215
+ "<|im_start|>",
216
+ "<|im_end|>",
217
+ "<|object_ref_start|>",
218
+ "<|object_ref_end|>",
219
+ "<|box_start|>",
220
+ "<|box_end|>",
221
+ "<|quad_start|>",
222
+ "<|quad_end|>",
223
+ "<|vision_start|>",
224
+ "<|vision_end|>",
225
+ "<|vision_pad|>",
226
+ "<|image_pad|>",
227
+ "<|video_pad|>"
228
+ ],
229
+ "bos_token": null,
230
+ "chat_template": "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0].role == 'system' %}\n {{- messages[0].content + '\\n\\n' }}\n {%- endif %}\n {{- \"# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0].role == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0].content + '<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}\n{%- for message in messages[::-1] %}\n {%- set index = (messages|length - 1) - loop.index0 %}\n {%- if ns.multi_step_tool and message.role == \"user\" and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}\n {%- set ns.multi_step_tool = false %}\n {%- set ns.last_query_index = index %}\n {%- endif %}\n{%- endfor %}\n{%- for message in messages %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) %}\n {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {%- set content = message.content %}\n {%- set reasoning_content = '' %}\n {%- if message.reasoning_content is defined and message.reasoning_content is not none %}\n {%- set reasoning_content = message.reasoning_content %}\n {%- else %}\n {%- if '</think>' in message.content %}\n {%- set content = message.content.split('</think>')[-1].lstrip('\\n') %}\n {%- set reasoning_content = message.content.split('</think>')[0].rstrip('\\n').split('<think>')[-1].lstrip('\\n') %}\n {%- endif %}\n {%- endif %}\n {%- if loop.index0 > ns.last_query_index %}\n {%- if loop.last or (not loop.last and reasoning_content) %}\n {{- '<|im_start|>' + message.role + '\\n<think>\\n' + reasoning_content.strip('\\n') + '\\n</think>\\n\\n' + content.lstrip('\\n') }}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- if message.tool_calls %}\n {%- for tool_call in message.tool_calls %}\n {%- if (loop.first and content) or (not loop.first) %}\n {{- '\\n' }}\n {%- endif %}\n {%- if tool_call.function %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {%- if tool_call.arguments is string %}\n {{- tool_call.arguments }}\n {%- else %}\n {{- tool_call.arguments | tojson }}\n {%- endif %}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {%- endif %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if loop.first or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- message.content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n {%- if enable_thinking is defined and 
enable_thinking is false %}\n {{- '<think>\\n\\n</think>\\n\\n' }}\n {%- endif %}\n{%- endif %}",
231
+ "clean_up_tokenization_spaces": false,
232
+ "eos_token": "<|im_end|>",
233
+ "errors": "replace",
234
+ "model_max_length": 131072,
235
+ "pad_token": "<|endoftext|>",
236
+ "split_special_tokens": false,
237
+ "tokenizer_class": "Qwen2Tokenizer",
238
+ "unk_token": null
239
+ }
vocab.json ADDED
The diff for this file is too large to render.