Fixed the code formatting.
README.md (CHANGED)

base_model:
- Qwen/Qwen3-235B-A22B
library_name: transformers
---

# Qwen3-235B-A22B-AWQ

*Uploaded by Eric Hartford*

Copied from ModelScope: [modelscope.cn/models/swift/Qwen3-235B-A22B-AWQ](https://www.modelscope.cn/models/swift/Qwen3-235B-A22B-AWQ)

Original model: [huggingface.co/Qwen/Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B)

## ModelScope AWQ Modelcard

```python
import torch
from modelscope import AutoModelForCausalLM, AutoTokenizer

model_name = "swift/Qwen3-235B-A22B-AWQ"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# parsing thinking content
try:
    # rindex finding 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)
```
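
The decode-at-the-end pattern above is easiest to follow, but for long thinking traces it can be handy to watch tokens as they arrive. A minimal optional sketch using transformers' `TextStreamer` (the streamer is an addition for illustration, not part of the original card):

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated; the prompt and special tokens are suppressed.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    **model_inputs,
    max_new_tokens=32768,
    streamer=streamer,
)
```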

# Original Modelcard

# Qwen3-235B-A22B

[Chat interface](https://chat.qwen.ai/)

## Qwen3 Highlights

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:

* **Seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) within a single model, ensuring optimal performance across various scenarios.
* **Enhanced reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
* **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following to deliver a more natural, engaging, and immersive conversational experience.
* **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
* **Support for 100+ languages and dialects**, with strong capabilities for **multilingual instruction following** and **translation**.

## Model Overview

**Qwen3-235B-A22B** has the following features:

* **Type**: Causal Language Model
* **Training Stage**: Pretraining & Post-training
* **Parameters**: 235B total, 22B activated
* **Non-Embedding Parameters**: 234B
* **Layers**: 94
* **Attention Heads (GQA)**: 64 for Q, 4 for KV
* **Experts**: 128 (8 activated)
* **Context Length**: 32,768 tokens natively, 131,072 tokens with YaRN (see [Processing Long Texts](#processing-long-texts))

For more details, including benchmark evaluation, hardware requirements, and inference performance, refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [documentation](https://qwen.readthedocs.io/en/latest/).

## Quickstart

> **Note:** Use `transformers>=4.51.0`; earlier versions fail with `KeyError: 'qwen3_moe'`.

The following code snippet illustrates how to use the model to generate content from a given input.
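
If you are unsure which version is installed, a quick guard (illustrative, not part of the original card):

```python
import transformers
from packaging import version

# Qwen3-MoE support requires transformers >= 4.51.0; older versions raise KeyError: 'qwen3_moe'.
assert version.parse(transformers.__version__) >= version.parse("4.51.0"), transformers.__version__
```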

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-235B-A22B"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # default is True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# parsing thinking content
try:
    # rindex finding 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)
```

### Deployment

For deployment, you can use SGLang or vLLM to create an OpenAI-compatible API endpoint:

* **SGLang**:

  ```shell
  python -m sglang.launch_server --model-path Qwen/Qwen3-235B-A22B --reasoning-parser qwen3 --tp 8
  ```

* **vLLM**:

  ```shell
  vllm serve Qwen/Qwen3-235B-A22B --enable-reasoning --reasoning-parser deepseek_r1
  ```
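
Either server can then be queried with a standard OpenAI-style client. A minimal sketch (the `http://localhost:8000/v1` base URL is vLLM's default and matches the Qwen-Agent config further down; adjust it for SGLang or a custom port):

```python
from openai import OpenAI

# Point the client at the locally served endpoint; no real API key is needed.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B",
    messages=[{"role": "user", "content": "Give me a short introduction to large language model."}],
    temperature=0.6,
    top_p=0.95,
)
print(response.choices[0].message.content)
```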

For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.

## Switching Between Thinking and Non-Thinking Modes

> **Tip:** Use `enable_thinking` in `tokenizer.apply_chat_template`, or the soft switches `/think` and `/no_think` inside prompts, to toggle the model's thinking mode.

### `enable_thinking=True`

By default, the model generates its reasoning wrapped in a `<think>...</think>` block, followed by the final response.

```python
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # True is the default value for enable_thinking
)
```

> **Note:** For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the defaults in `generation_config.json`). **Do not use greedy decoding**, as it can lead to performance degradation and endless repetitions. See [Best Practices](#best-practices).

### `enable_thinking=False`

With `enable_thinking=False`, the model generates no `<think>...</think>` block. Recommended sampling settings: `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
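
A minimal sketch of the corresponding template call, mirroring the thinking-mode snippet above:

```python
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False  # disables the <think> block entirely
)
```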

### Soft Switches

* When `enable_thinking=True`, append `/think` or `/no_think` to a user message to enable or suppress reasoning for that turn; the multi-turn sketch below shows the pattern.
* When `enable_thinking=False`, the soft switches are ignored: the model generates no thinking content regardless of any `/think` or `/no_think` tags.
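
A condensed multi-turn sketch of the soft-switch pattern, adapted from the chatbot example in the original Qwen3 model card (`model` and `tokenizer` are assumed to be loaded as in the Quickstart):

```python
def chat(model, tokenizer, history, user_input):
    """Run one conversation turn; the user message may end with /think or /no_think."""
    messages = history + [{"role": "user", "content": user_input}]
    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
    response = tokenizer.decode(output_ids, skip_special_tokens=True)
    # record the turn so later soft switches see the full conversation
    history.append({"role": "user", "content": user_input})
    history.append({"role": "assistant", "content": response})
    return response

history = []
print(chat(model, tokenizer, history, "How many r's in strawberries?"))                 # thinking mode (default)
print(chat(model, tokenizer, history, "Then, how many r's in blueberries? /no_think"))  # reasoning suppressed
print(chat(model, tokenizer, history, "Really? /think"))                                # reasoning re-enabled
```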

## Agentic Use

Qwen3 excels in tool calling. We recommend [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) for streamlined integration. To define the available tools, you can use an MCP configuration file, use Qwen-Agent's integrated tools, or integrate other tools yourself.

```python
from qwen_agent.agents import Assistant

# Define the LLM: a local OpenAI-compatible endpoint, e.g. the vLLM/SGLang commands above
llm_cfg = {
    'model': 'Qwen3-235B-A22B',
    'model_server': 'http://localhost:8000/v1',  # api_base
    'api_key': 'EMPTY',
}

# Define the tools: MCP servers plus the built-in code interpreter
tools = [
    {'mcpServers': {
        'time': {'command': 'uvx', 'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']},
        'fetch': {'command': 'uvx', 'args': ['mcp-server-fetch']}
    }},
    'code_interpreter',
]

# Define the agent
bot = Assistant(llm=llm_cfg, function_list=tools)

# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
    pass
print(responses)
```

## Processing Long Texts

Qwen3 natively supports context lengths of up to 32,768 tokens. To handle inputs of up to 131,072 tokens, use RoPE scaling with [YaRN](https://arxiv.org/abs/2309.00071). YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, and `vllm` and `sglang` for deployment.

### Enabling YaRN

* **Transformers**: add the `rope_scaling` field to `config.json` (a load-time alternative is sketched after this list):

  ```json
  {
    ...,
    "rope_scaling": {
      "rope_type": "yarn",
      "factor": 4.0,
      "original_max_position_embeddings": 32768
    }
  }
  ```

* **vLLM**:

  ```shell
  vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
  ```

* **llama.cpp** (`llama-server`):

  ```shell
  llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
  ```
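
As an alternative to editing `config.json` for the Transformers route, recent `transformers` releases generally accept config overrides as keyword arguments at load time; a hedged sketch (the kwargs-forwarding behaviour is assumed and may vary by version):

```python
from transformers import AutoModelForCausalLM

# Override rope_scaling on the fly instead of editing config.json (assumed kwarg forwarding).
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-235B-A22B",
    torch_dtype="auto",
    device_map="auto",
    max_position_embeddings=131072,
    rope_scaling={"rope_type": "yarn", "factor": 4.0, "original_max_position_embeddings": 32768},
)
```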

> **Important:** If you see the warning `Unrecognized keys in 'rope_scaling' for 'rope_type'='yarn': {'original_max_position_embeddings'}`, upgrade to `transformers>=4.51.0`.

> **Tip:** Enable YaRN only when long contexts are actually required: the scaling implemented by these frameworks is static, so it can hurt performance on shorter texts, and the default `max_position_embeddings` of 40,960 (32,768 reserved for output plus 8,192 for a typical prompt) already covers most scenarios. If your typical context is shorter than 131,072 tokens, reduce `factor` accordingly (e.g., `2.0` for 65,536 tokens).

## Best Practices

To achieve optimal performance, we recommend the following settings:

1. **Sampling parameters** (see the sketch after this list):
   * Thinking mode (`enable_thinking=True`): `Temperature=0.6`, `TopP=0.95`, `TopK=20`, `MinP=0`. Do not use greedy decoding.
   * Non-thinking mode (`enable_thinking=False`): `Temperature=0.7`, `TopP=0.8`, `TopK=20`, `MinP=0`.
2. **Adequate output length**: 32,768 tokens is sufficient for most queries; allow up to 38,912 tokens for highly complex tasks.
3. **Standardized output format**:
   * **Math problems**: include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
   * **Multiple-choice questions**: ask the model to show its choice in an `answer` field containing only the choice letter, e.g., `"answer": "C"`.
4. **No thinking content in history**: in multi-turn conversations, keep only the final response in the history, not the `<think>` block.
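
A minimal sketch of passing the thinking-mode sampling parameters explicitly to `generate` (they normally come from the model's `generation_config.json`, so this is only needed when overriding defaults):

```python
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,   # never use greedy decoding in thinking mode
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
```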

## Citation

```bibtex
@misc{qwen3,
    title = {Qwen3},
    url = {https://qwenlm.github.io/blog/qwen3/},