---
license: apache-2.0
train: false
inference: false
pipeline_tag: text-generation
base_model: Qwen/Qwen3-32B
base_model_relation: quantized
quantized_by: ArtusDev
tags:
- hqq
---
This is Qwen3-32B quantized with HQQ (Half-Quadratic Quantization) to 4 bits on all weights, using a group size of 64.
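To illustrate what "4-bit, group size 64" means in practice, here is a minimal NumPy sketch of plain asymmetric group-wise quantization: weights are split into groups of 64, and each group stores a per-group scale and zero-point alongside 4-bit integer codes (0 to 15). Note this is a simplified round-to-nearest illustration, not the actual HQQ algorithm, which additionally optimizes the zero-point/scale with a half-quadratic solver; the function names here are hypothetical.

```python
import numpy as np

def quantize_4bit_groupwise(w, group_size=64):
    # Illustrative round-to-nearest group quantization (not real HQQ):
    # each group of 64 values gets its own scale and zero-point.
    w = w.reshape(-1, group_size)
    w_min = w.min(axis=1, keepdims=True)
    w_max = w.max(axis=1, keepdims=True)
    scale = (w_max - w_min) / 15.0              # 4 bits -> 16 levels (0..15)
    scale = np.where(scale == 0, 1.0, scale)    # guard against constant groups
    q = np.clip(np.round((w - w_min) / scale), 0, 15).astype(np.uint8)
    return q, scale, w_min

def dequantize(q, scale, w_min):
    # Reconstruct approximate weights from codes + per-group metadata.
    return q.astype(np.float32) * scale + w_min

rng = np.random.default_rng(0)
w = rng.standard_normal(4096).astype(np.float32)
q, s, z = quantize_4bit_groupwise(w)
w_hat = dequantize(q, s, z).reshape(-1)
# Round-to-nearest bounds the per-value error by half a quantization step.
print("max abs error:", np.abs(w - w_hat).max())
```

A smaller group size means more per-group metadata (scales and zero-points) but a tighter fit to each group's range, so group size 64 trades a modest memory overhead for lower reconstruction error than coarser groupings.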