DeepSeek V3 AWQ
AWQ of DeepSeek V3.
This quant modifies some of the model code to fix an overflow issue that occurs when running in float16.
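The actual fix ships in the bundled model code (hence `--trust-remote-code` below), but the general idea is to clamp intermediate activations into the representable float16 range before they overflow to inf. A minimal sketch of that kind of guard; the function name and call site are illustrative, not the actual patch:

```python
import torch

FP16_MAX = torch.finfo(torch.float16).max  # 65504.0

def clamp_fp16(hidden_states: torch.Tensor) -> torch.Tensor:
    """Illustrative guard: keep activations inside the float16 range so
    downstream matmuls don't produce inf/NaN when the model runs in fp16."""
    if hidden_states.dtype == torch.float16:
        hidden_states = torch.clamp(hidden_states, min=-FP16_MAX, max=FP16_MAX)
    return hidden_states
```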
To serve using vLLM with 8x 80GB GPUs, use the following command:
```bash
VLLM_WORKER_MULTIPROC_METHOD=spawn python -m vllm.entrypoints.openai.api_server \
  --host 0.0.0.0 \
  --port 12345 \
  --max-model-len 65536 \
  --max-num-batched-tokens 65536 \
  --trust-remote-code \
  --tensor-parallel-size 8 \
  --gpu-memory-utilization 0.97 \
  --dtype float16 \
  --served-model-name deepseek-chat \
  --model cognitivecomputations/DeepSeek-V3-AWQ
```
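Once the server is up, it exposes an OpenAI-compatible endpoint on the port given above. A quick smoke test using the `openai` Python client; the base URL and prompt are just examples:

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:12345/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="deepseek-chat",  # matches --served-model-name
    messages=[{"role": "user", "content": "Briefly explain what AWQ is."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```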
You can download the wheel I built for PyTorch 2.6 and Python 3.12 by clicking here.
Inference speed (tokens per second) with batch size 1 and a short prompt; a rough measurement sketch follows the list:
- 8x H100: 48 TPS
- 8x A100: 38 TPS
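These are single-request decode numbers. One rough way to estimate TPS yourself against the running server is to time a streamed generation; counting streamed chunks only approximates the token count, so treat the result as a ballpark figure:

```python
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:12345/v1", api_key="EMPTY")

start = time.perf_counter()
tokens = 0
# Stream a fixed-length generation and count chunks as a proxy for tokens.
stream = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Write a short story about a robot."}],
    max_tokens=256,
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        tokens += 1
elapsed = time.perf_counter() - start
print(f"~{tokens / elapsed:.1f} tokens/s (single request, streaming)")
```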
Note:
- Inference speed will be better than FP8 at low batch sizes but worse than FP8 at high batch sizes; this is the nature of low-bit quantization.
- vLLM now supports MLA for AWQ, so you can run this model with its full context length on just 8x 80GB GPUs.
Base model: deepseek-ai/DeepSeek-V3