---
license: mit
language:
- en
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
---

AWQ 4-bit quantization of DeepSeek-R1-Distill-Qwen-32B at commit `10d6a0388c80991c8fd8b54223146e7cbe33dfa5`.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
commit_hash = "10d6a0388c80991c8fd8b54223146e7cbe33dfa5"

# Download the model and tokenizer pinned to the specific commit hash
model = AutoAWQForCausalLM.from_pretrained(model_name, revision=commit_hash)
tokenizer = AutoTokenizer.from_pretrained(model_name, revision=commit_hash)

quant_config = {
    "zero_point": True,
    "q_group_size": 128,
    "w_bit": 4,
    "version": "GEMM",
}

# Quantize the weights (AutoAWQ uses a default calibration dataset)
model.quantize(tokenizer, quant_config=quant_config)

# Persist the quantized model so it can be reloaded later
quant_path = "DeepSeek-R1-Distill-Qwen-32B-AWQ"
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```
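For a rough sense of the memory savings from the `w_bit: 4` / `q_group_size: 128` configuration above, here is a back-of-the-envelope sketch. The parameter count (~32.8B) and the per-group metadata layout (an fp16 scale plus a packed 4-bit zero point per 128-weight group) are approximations, not exact figures from the repository:

```python
params = 32.8e9    # approximate parameter count of the 32B model (assumption)
w_bit = 4          # matches quant_config["w_bit"]
group_size = 128   # matches quant_config["q_group_size"]

# Each 128-weight group stores an fp16 scale and a packed 4-bit zero point
# alongside the 4-bit weights, adding a small per-weight overhead.
overhead_bits = (16 + 4) / group_size
quantized_gb = params * (w_bit + overhead_bits) / 8 / 1e9
fp16_gb = params * 16 / 8 / 1e9

print(f"fp16: ~{fp16_gb:.0f} GB, AWQ 4-bit: ~{quantized_gb:.0f} GB")
```

This estimate covers weights only; activations, the KV cache, and framework overhead add to the actual memory required at inference time.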