---
base_model: Qwen/Qwen3-30B-A3B
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE
license: apache-2.0
tags:
- qwen3
- qwen
- gptq
- 8bit
---

# 8-bit Quantization of the Qwen3 30B A3B Model

Quantized using GPTQModel with the following quantization config:

```python
import torch
from gptqmodel import QuantizeConfig

quant_config = QuantizeConfig(
    bits=8,                    # 8-bit weight quantization
    group_size=32,             # quantization group size
    sym=True,                  # symmetric quantization
    desc_act=False,            # no activation-order (desc_act) reordering
    true_sequential=True,      # quantize layers sequentially
    pack_dtype=torch.int32,    # dtype used to pack quantized weights
    damp_percent=0.1,          # Hessian dampening factor
)
```
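
For context, the sketch below shows how a config like this is typically fed into a GPTQModel quantization run. It continues from the `quant_config` above; the calibration texts and output path are placeholders for illustration, not the exact data used to produce this model.

```python
from gptqmodel import GPTQModel

# Placeholder calibration data; a real run uses a few hundred
# representative text samples to gather activation statistics.
calibration_dataset = [
    "Qwen3-30B-A3B is a mixture-of-experts language model.",
    "GPTQ calibrates weight quantization against sample activations.",
]

# Load the full-precision base model with the quantization config,
# run GPTQ calibration, and save the packed 8-bit weights.
model = GPTQModel.load("Qwen/Qwen3-30B-A3B", quant_config)
model.quantize(calibration_dataset)
model.save("Qwen3-30B-A3B-gptq-8bit")
```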