# DeepSeek-R1-Distill-Qwen-32B-Q2-6
This model was converted to MLX format from [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) using mixed 2/6-bit quantization, which stores most weights at 2 bits while keeping the most quality-sensitive ones at 6 bits. This scheme preserves quality much better than standard 2-bit quantization.
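A conversion along these lines can be reproduced with mlx-lm, which ships mixed-precision quantization recipes. This is a minimal sketch, assuming a recent mlx-lm release that accepts the `mixed_2_6` recipe name; the exact invocation used to produce this model is not documented here:

```python
# Minimal sketch of a mixed 2/6-bit conversion with mlx-lm.
# Assumption: a recent mlx-lm where convert() accepts the "mixed_2_6"
# recipe name; this may differ from the exact command used for this model.
from mlx_lm import convert

convert(
    hf_path="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    mlx_path="DeepSeek-R1-Distill-Qwen-32B-Q2-6",  # local output directory
    quantize=True,
    quant_predicate="mixed_2_6",  # 2 bits for most layers, 6 bits for sensitive ones
)
```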
## Use with mlx
```bash
pip install mlx-lm
python -m mlx_lm.chat --model pcuenq/DeepSeek-R1-Distill-Qwen-32B-Q2-6 --max-tokens 10000 --temp 0.6 --top-p 0.7
```
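The model can also be used programmatically through the mlx-lm Python API. A minimal sketch, assuming a recent mlx-lm release where `generate` takes a `sampler`; the prompt is illustrative:

```python
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load("pcuenq/DeepSeek-R1-Distill-Qwen-32B-Q2-6")

# Illustrative prompt; R1-style models emit their reasoning before the
# final answer, so allow a generous token budget.
messages = [{"role": "user", "content": "How many r's are in 'strawberry'?"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Same sampling settings as the chat command above.
sampler = make_sampler(temp=0.6, top_p=0.7)
response = generate(
    model, tokenizer, prompt=prompt, max_tokens=10000,
    sampler=sampler, verbose=True,
)
```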