ModelCloud-optimized and validated quants that pass strict quality assurance on multiple benchmarks.
- ModelCloud/QwQ-32B-gptqmodel-4bit-vortex-v1 (Text Generation, 7B): 1.66k downloads, 11 likes
- ModelCloud/DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v2 (Text Generation, 2B): 2k downloads, 7 likes
- ModelCloud/DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v1 (Text Generation, 2B): 15 downloads, 5 likes
- ModelCloud/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-mlx-v1 (2B): 13 downloads, 3 likes