The commands below quantize the MoE expert FFN tensors (`ffn_up_exps`, `ffn_down_exps`, `ffn_gate_exps`) to a lower precision while keeping the remaining tensors at Q8_0:

```shell
# Experts at Q4_0, everything else at Q8_0
llama-quantize --tensor-type ffn_up_exps=q4_0 --tensor-type ffn_down_exps=q4_0 --tensor-type ffn_gate_exps=q4_0 Qwen3-30B-A3B.gguf Qwen3-30B-A3B-main-Q8_0-experts-Q4_0.gguf q8_0

# Experts at IQ4_XS, everything else at Q8_0
llama-quantize --tensor-type ffn_up_exps=iq4_xs --tensor-type ffn_down_exps=iq4_xs --tensor-type ffn_gate_exps=iq4_xs Qwen3-30B-A3B.gguf Qwen3-30B-A3B-main-Q8_0-experts-IQ4_XS.gguf q8_0
```
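Once a quantized file has been produced, it can be smoke-tested with llama.cpp's CLI. This is a minimal sketch, assuming the llama.cpp binaries are on your PATH and the GGUF file from the first command above is in the current directory:

```shell
# Load the expert-quantized model and generate a short completion
# to verify the file loads and runs (-n limits generated tokens).
llama-cli -m Qwen3-30B-A3B-main-Q8_0-experts-Q4_0.gguf -p "Hello" -n 32
```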