kaitchup/DeepSeek-R1-Distill-Llama-8B-AutoRound-GPTQ-4bit • Text Generation • Updated Jan 27 • 1.34k downloads • 1 like
ModelCloud/DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v1 • Text Generation • Updated Jan 24 • 69 downloads • 5 likes
ModelCloud/DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v2 • Text Generation • Updated Jan 24 • 477 downloads • 7 likes
ConfidentialMind/Mistral-Small-24B-Instruct-2501_GPTQ_G128_W4A16_MSE • Text Classification • Updated Feb 18 • 545 downloads • 1 like
ConfidentialMind/Mistral-Small-24B-Instruct-2501_GPTQ_G32_W4A16 • Text Generation • Updated Feb 23 • 588 downloads • 1 like