EXL3 quantization of Josiefied-Qwen3-8B-abliterated-v1, 4 bits per weight.
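As a rough sense of what "4 bits per weight" means for disk/VRAM footprint, the weight storage scales as parameters × bits ÷ 8. A minimal sketch (the ~8.2B parameter count is an approximate figure for Qwen3-8B, not taken from this card, and real EXL3 files add embeddings and metadata on top):

```python
def quantized_size_gb(n_params: float, bpw: float) -> float:
    """Rough weight-storage estimate in gigabytes: params * bits-per-weight / 8."""
    return n_params * bpw / 8 / 1e9

# Approximate parameter count for an 8B-class model (assumption, not from the card).
N_PARAMS = 8.2e9

for bpw in (4, 6, 8):
    print(f"{bpw} bpw ≈ {quantized_size_gb(N_PARAMS, bpw):.1f} GB")
```

This is only the weight tensors; actual repository size for a given bpw will be somewhat larger.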

HumanEval (argmax)

| Model | Q4 | Q6 | Q8 | FP16 |
| --- | --- | --- | --- | --- |
| Josiefied-Qwen3-8B-abliterated-v1-exl3-4bpw | 84.1 | 85.4 | 86.0 | 85.4 |
| Josiefied-Qwen3-8B-abliterated-v1-exl3-6bpw | 86.6 | 85.4 | 86.0 | 85.4 |
| Josiefied-Qwen3-8B-abliterated-v1-exl3-8bpw-h8 | 85.4 | 86.6 | 85.4 | 86.6 |
| Qwen3-8B-exl3-4bpw | 86.0 | 85.4 | 86.0 | 87.2 |
| Qwen3-8B-exl3-6bpw | 84.8 | 86.0 | 87.2 | 87.2 |
| Qwen3-8B-exl3-8bpw-h8 | 86.0 | 87.2 | 86.6 | 86.6 |
Model tree for isogen/Josiefied-Qwen3-8B-abliterated-v1-exl3-4bpw: base model Qwen/Qwen3-8B-Base, finetuned as Qwen/Qwen3-8B; this repository is a quantized model in that tree.