```shell
llama-quantize \
  --tensor-type ffn_up_exps=q4_0 \
  --tensor-type ffn_down_exps=q4_0 \
  --tensor-type ffn_gate_exps=q4_0 \
  Qwen3-235B-A22B.gguf Qwen3-235B-A22B-Q8_0-EXP-Q4_0.gguf q8_0
```
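The `--tensor-type` overrides quantize the MoE expert FFN tensors (`ffn_up_exps`, `ffn_down_exps`, `ffn_gate_exps`) to Q4_0 while the remaining tensors stay at the Q8_0 default given as the last argument, shrinking the bulk of the expert weights while keeping higher precision elsewhere. As a minimal sketch, the resulting GGUF could be served with llama.cpp's `llama-server`; the GPU layer count, context size, and port below are illustrative and depend on your hardware:

```shell
# Serve the mixed Q8_0 / Q4_0-expert quant produced above.
# -ngl offloads layers to the GPU; adjust to fit your VRAM.
llama-server \
  -m Qwen3-235B-A22B-Q8_0-EXP-Q4_0.gguf \
  -ngl 99 \
  -c 8192 \
  --port 8080
```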
Base model: Qwen/Qwen3-235B-A22B