Update README.md
README.md (CHANGED)

@@ -18,7 +18,7 @@ Please follow the license of the original model.
 
 **INT4 VLLM Inference on CUDA**(**at least 8*80G**)
 
-To serve using vLLM with 8x 80GB GPUs, use the following command:
+Please note that when using vLLM for inference, the quantization mode must be asymmetric. To serve using vLLM with 8x 80GB GPUs, use the following command:
 ```sh
 VLLM_WORKER_MULTIPROC_METHOD=spawn python -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 12345 --max-model-len 65536 --max-num-batched-tokens 65536 --trust-remote-code --tensor-parallel-size 8 --gpu-memory-utilization 0.97 --dtype float16 --served-model-name deepseek-reasoner --model OPEA/DeepSeek-R1-int4-asym-AutoRound-awq
 ```
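Once the server is running, it exposes an OpenAI-compatible API on the port given above. As a quick sanity check, you can query the chat completions endpoint; this is a minimal sketch in which the port (12345) and served model name (deepseek-reasoner) come from the serve command, while the prompt and max_tokens are illustrative placeholders:

```sh
# Query the OpenAI-compatible endpoint started by the serve command above.
# Port 12345 and model name "deepseek-reasoner" match the flags used there;
# the prompt and max_tokens are placeholders, adjust as needed.
curl http://localhost:12345/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "deepseek-reasoner",
        "messages": [{"role": "user", "content": "Hello, who are you?"}],
        "max_tokens": 128
      }'
```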