Jade Qwen 3 4B - 4bit quantization for MLX

A systems programming Qwen finetune.


Model description

Please view the model description on the non-quantized version.
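Because this is a 4-bit MLX conversion, it can typically be run locally on Apple silicon with the mlx-lm package. The following is a minimal sketch, assuming the standard mlx_lm load/generate API and this repo's Hub id; the prompt and generation settings are illustrative only.

```python
# Minimal local inference sketch using mlx-lm (Apple silicon).
# Assumes `pip install mlx-lm`; the repo id below is this model's Hub id.
from mlx_lm import load, generate

# Download (or reuse a cached copy of) the 4-bit MLX weights and tokenizer.
model, tokenizer = load("dougiefresh/jade_qwen3_4b_mlx_4bit")

# Illustrative systems-programming prompt; apply the chat template if one is defined.
prompt = "Write a C function that mmaps a file read-only and returns the pointer and length."
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
    )

# Generate a completion; max_tokens is an arbitrary illustrative limit.
text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(text)
```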

Format: Safetensors
Model size: 629M params
Tensor types: BF16, U32

Model tree for dougiefresh/jade_qwen3_4b_mlx_4bit

Base model: Qwen/Qwen3-4B-Base
Finetuned: Qwen/Qwen3-4B
Quantized: this model (one of 93 quantized versions)
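For reference, a 4-bit MLX quantization like this one is typically produced from the full-precision finetune with mlx-lm's convert utility. This is a hedged sketch: the source repo id below is a placeholder for the non-quantized Jade model (its exact id is not stated here), and the group size shown is mlx-lm's default rather than anything documented for this model.

```python
# Hypothetical re-creation of a 4-bit MLX quantization with mlx-lm.
# The source path is a placeholder; substitute the non-quantized model's repo id.
from mlx_lm import convert

convert(
    hf_path="dougiefresh/<non-quantized-jade-model>",  # placeholder repo id
    mlx_path="jade_qwen3_4b_mlx_4bit",                 # local output directory
    quantize=True,
    q_bits=4,         # 4-bit weights
    q_group_size=64,  # mlx-lm's default quantization group size
)
# Roughly equivalent CLI: mlx_lm.convert --hf-path <repo> -q --q-bits 4
```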

Datasets used to train dougiefresh/jade_qwen3_4b_mlx_4bit