EXL3 quantization of Qwen3-0.6B, 8 bits per weight, including output layers.
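
The EXL3 weights can be fetched like any other Hugging Face repo. A minimal sketch with `huggingface_hub` is below; note that running inference on EXL3 tensors requires an ExLlamaV3-compatible backend rather than plain `transformers`, and the local handling shown here is just an illustration.

```python
# Minimal sketch: download the EXL3 weights with huggingface_hub.
# Inference on the downloaded tensors additionally requires an
# ExLlamaV3-compatible backend (assumption about your local setup).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="isogen/Qwen3-0.6B-exl3-8bpw-h8")
print(f"Weights downloaded to: {local_dir}")
```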

### HumanEval (argmax)

| Model | Q4 | Q6 | Q8 | FP16 |
|---|---|---|---|---|
| Qwen3-0.6B-exl3-8bpw-h8 | 0.0% | 38.4% | 40.9% | 40.2% |
| Qwen3-0.6B-Base-exl3-8bpw-h8 | 0.0% | 36.0% | 37.2% | 36.6% |
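
For reference, here is a minimal sketch of what an argmax (greedy) HumanEval pass looks like. The numbers above were presumably produced against the EXL3 weights through an ExLlamaV3 backend; this stand-in loads the FP16 base model via `transformers` purely to illustrate the decoding setup, and the model id, `max_new_tokens` budget, and completion handling are assumptions, not the exact harness used for this card.

```python
# Sketch of greedy (argmax) HumanEval generation, using the FP16 base model
# as a stand-in; the EXL3 quant itself needs an ExLlamaV3 loader.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from human_eval.data import read_problems, write_jsonl  # pip install human-eval

model_id = "Qwen/Qwen3-0.6B"  # FP16 reference model (assumption for this sketch)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

samples = []
for task_id, problem in read_problems().items():
    inputs = tokenizer(problem["prompt"], return_tensors="pt").to(model.device)
    # do_sample=False gives greedy (argmax) decoding
    out = model.generate(**inputs, do_sample=False, max_new_tokens=512)
    completion = tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    samples.append({"task_id": task_id, "completion": completion})

write_jsonl("samples.jsonl", samples)
# Score with the human-eval CLI: evaluate_functional_correctness samples.jsonl
```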

Base model: Qwen/Qwen3-0.6B