littlebird13 committed
Commit 9e5494c · verified · 1 Parent(s): 2180ded

Update README.md

Files changed (1): README.md (+9 −10)
README.md CHANGED

@@ -1,6 +1,3 @@
- ---
- license: apache-2.0
- ---
  # Qwen3-235B-A22B-FP8
  
  ## Qwen3 Highlights

@@ -85,16 +82,18 @@ print("thinking content:", thinking_content)
  print("content:", content)
  ```
  
- For deployment, you can use `vllm>=0.8.5` or `sglang>=0.4.5.post2` to create an OpenAI-compatible API endpoint:
+ For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.4` to create an OpenAI-compatible API endpoint:
- - vLLM:
+ - SGLang:
  ```shell
- vllm serve Qwen/Qwen3-235B-A22B-FP8 --enable-reasoning --reasoning-parser deepseek_r1
+ python -m sglang.launch_server --model-path Qwen/Qwen3-235B-A22B-FP8 --reasoning-parser qwen3
  ```
- - SGLang:
+ - vLLM:
  ```shell
- python -m sglang.launch_server --model-path Qwen/Qwen3-235B-A22B-FP8 --reasoning-parser deepseek-r1
+ vllm serve Qwen/Qwen3-235B-A22B-FP8 --enable-reasoning --reasoning-parser deepseek_r1
  ```
  
+ For local use, applications such as llama.cpp, Ollama, LMStudio, and MLX-LM also support Qwen3.
+ 
  ## Note on FP8
  
  For convenience and performance, we have provided an `fp8`-quantized model checkpoint for Qwen3, whose name ends with `-FP8`. The quantization method is fine-grained `fp8` quantization with a block size of 128. You can find more details in the `quantization_config` field in `config.json`.

@@ -130,8 +129,8 @@ However, please pay attention to the following known issues:
  ## Switching Between Thinking and Non-Thinking Mode
  
  > [!TIP]
- > The `enable_thinking` switch is also available in APIs created by vLLM and SGLang.
- > Please refer to our documentation for [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) and [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) users.
+ > The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
+ > Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
  
  ### `enable_thinking=True`
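The deployment commands in this diff start an OpenAI-compatible server, and the TIP notes that the `enable_thinking` switch is also reachable through those APIs. A minimal client-side sketch of what such a request body could look like — the `chat_template_kwargs` passthrough and the sampling value are assumptions to verify against the Qwen documentation linked in the TIP, not part of this commit:

```python
import json

# Hypothetical chat-completions request body for the OpenAI-compatible
# endpoint started by `vllm serve` or `sglang.launch_server` above.
# `chat_template_kwargs.enable_thinking` is assumed to be the server-side
# toggle for thinking mode; confirm against the linked Qwen docs.
payload = {
    "model": "Qwen/Qwen3-235B-A22B-FP8",
    "messages": [{"role": "user", "content": "Briefly explain MoE models."}],
    "temperature": 0.6,
    "chat_template_kwargs": {"enable_thinking": False},  # non-thinking mode
}

# Round-trip through JSON to confirm the body serializes cleanly.
decoded = json.loads(json.dumps(payload))
print(decoded["model"])
print(decoded["chat_template_kwargs"]["enable_thinking"])
```

The same body shape should work against either server, since both expose the OpenAI chat-completions route.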
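The "Note on FP8" section points readers at the `quantization_config` field in `config.json` for the fine-grained `fp8` scheme with block size 128. A short illustrative sketch of reading such a field — the exact key names below are assumptions chosen for illustration, so check them against the model's real `config.json`:

```python
import json

# Illustrative (not copied from the actual checkpoint) shape of a
# fine-grained fp8 `quantization_config`, as described in the FP8 note.
config_text = """
{
  "quantization_config": {
    "quant_method": "fp8",
    "weight_block_size": [128, 128]
  }
}
"""

# Parse the config and pull out the quantization details.
qc = json.loads(config_text)["quantization_config"]
print(qc["quant_method"], qc["weight_block_size"])
```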