nisten/deepseek-r1-qwen32b-mlx-6bit
Text Generation · Transformers · Safetensors · qwen2 · code · conversational · text-generation-inference · Inference Endpoints · 6-bit · License: MIT
This is a 6-bit MLX quantization of deepseek-ai/DeepSeek-R1-Distill-Qwen-32B.
Probably the sweet spot for running an o1-class reasoning model at home :)
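
Since this is an MLX quantization, here is a minimal sketch of running it locally with the mlx-lm Python package (not part of the original card; the install step and prompt are just illustrative, and you will need an Apple Silicon Mac with enough unified memory for the ~25 GB of weights):

```python
# Minimal sketch, assuming mlx-lm is installed: pip install mlx-lm
from mlx_lm import load, generate

# Download the quantized weights from the Hub and load them.
model, tokenizer = load("nisten/deepseek-r1-qwen32b-mlx-6bit")

prompt = "Write a quicksort in Python."  # example prompt

# Wrap the prompt in the model's chat template if one is defined.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

# Stream the generated tokens to stdout.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```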
Downloads last month: 204
Safetensors
Model size: 6.66B params
Tensor types: FP16 · U32
Inference Providers
This model is not currently available via any of the supported third-party Inference Providers, and it is not deployed on the HF Inference API.
Model tree for nisten/deepseek-r1-qwen32b-mlx-6bit
Base model: Qwen/Qwen2.5-32B
Finetuned: Qwen/Qwen2.5-Coder-32B
Quantized (18): this model