---
library_name: transformers
tags:
- torchao
license: apache-2.0
language:
- multilingual
base_model:
- Qwen/Qwen3-32B
pipeline_tag: text-generation
---

[Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B) model quantized with [torchao](https://huggingface.co/docs/transformers/main/en/quantization/torchao) float8 dynamic activation and float8 weight quantization (per-row granularity), by the PyTorch team. Use it directly, or serve it using [vLLM](https://docs.vllm.ai/en/latest/), for TODO VRAM reduction and TODO speedup with little to no accuracy impact on H100.

# 1. Inference with vLLM

TODO (a hedged sketch appears at the end of this card).

# 2. Inference with Transformers

TODO (a hedged sketch appears at the end of this card).

# 3. Quantization Recipe

Install the required packages:

```Shell
pip install git+https://github.com/huggingface/transformers@main
pip install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cu126
pip install torch
pip install accelerate
```

Use the following code to get the quantized model:

```Py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig

model_id = "Qwen/Qwen3-32B"

## Step 1: Convert to float8
from torchao.quantization import Float8DynamicActivationFloat8WeightConfig, PerRow

quant_config = Float8DynamicActivationFloat8WeightConfig(granularity=PerRow())
quantization_config = TorchAoConfig(quant_type=quant_config)
quantized_model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

## Step 2: Sanity check
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to("cuda")

# conduct text completion
generated_ids = quantized_model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# parsing thinking content
try:
    # rindex finding 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)

## Step 3: Upload to HF
USER_ID = "YOUR_USER_ID"
MODEL_NAME = model_id.split("/")[-1]
save_to = f"{USER_ID}/{MODEL_NAME}-float8dq"
quantized_model.push_to_hub(save_to, safe_serialization=False)
tokenizer.push_to_hub(save_to)
```

# 4. Model Quality

TODO (a hedged evaluation sketch appears at the end of this card).

# 5. Peak Memory Usage

TODO (a hedged measurement sketch appears at the end of this card).

# 6. Model Performance

TODO (a hedged benchmark sketch appears at the end of this card).

# 7. Disclaimer

PyTorch has not performed safety evaluations or red-teamed the quantized models. Performance characteristics, outputs, and behaviors may differ from those of the original models. Users are solely responsible for selecting appropriate use cases; evaluating and mitigating risks to accuracy, safety, and fairness; ensuring security; and complying with all applicable laws and regulations.

Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification of the licenses under which the models are released, including any limitations of liability or disclaimers of warranties provided therein.
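
For Section 1, a minimal, hedged sketch of offline inference with vLLM. It assumes the quantized checkpoint from Step 3 has been pushed to the Hub under the hypothetical repo id `YOUR_USER_ID/Qwen3-32B-float8dq`, and that your vLLM build can load torchao-quantized checkpoints; it is a sketch, not this card's official serving setup.

```Py
from vllm import LLM, SamplingParams

# Hypothetical repo id produced by Step 3 above; substitute your own upload.
model_id = "YOUR_USER_ID/Qwen3-32B-float8dq"

llm = LLM(model=model_id, tokenizer="Qwen/Qwen3-32B")
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=1024)

outputs = llm.generate(
    ["Give me a short introduction to large language model."],
    sampling_params,
)
for output in outputs:
    print(output.outputs[0].text)
```

The same checkpoint should also work with `vllm serve YOUR_USER_ID/Qwen3-32B-float8dq` for an OpenAI-compatible endpoint.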
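
For Section 2, a hedged sketch of loading the already-quantized checkpoint with Transformers. No `quantization_config` is passed at load time, since the quantization settings are stored in the checkpoint's config; the repo id is again the hypothetical one from Step 3.

```Py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id from Step 3; substitute your own upload.
model_id = "YOUR_USER_ID/Qwen3-32B-float8dq"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [{"role": "user", "content": "Give me a short introduction to large language model."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```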
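
For Section 4, one common way to produce quality numbers is EleutherAI's lm-evaluation-harness. The task choice and flags below are illustrative assumptions, not this card's official evaluation setup, and loading the torchao checkpoint through `lm_eval` may require a recent Transformers install:

```Shell
pip install lm-eval
lm_eval --model hf \
  --model_args pretrained=YOUR_USER_ID/Qwen3-32B-float8dq \
  --tasks mmlu \
  --device cuda:0 \
  --batch_size 8
```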
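
For Section 5, peak VRAM can be measured with the CUDA caching allocator's statistics in `torch.cuda`. A minimal sketch, assuming a single CUDA device and the hypothetical repo id from Step 3:

```Py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id from Step 3; substitute your own upload.
model_id = "YOUR_USER_ID/Qwen3-32B-float8dq"

torch.cuda.reset_peak_memory_stats()

model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer(
    "Give me a short introduction to large language model.", return_tensors="pt"
).to(model.device)
model.generate(**inputs, max_new_tokens=128)

# Peak memory reserved by the CUDA caching allocator during load + generation.
print(f"peak reserved memory: {torch.cuda.max_memory_reserved() / 1e9:.2f} GB")
```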
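
For Section 6, a rough throughput number can be obtained by timing greedy decoding, with one warm-up pass so one-time compilation and caching do not skew the measurement. Again a sketch under the same assumptions, not this card's official benchmark:

```Py
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id from Step 3; substitute your own upload.
model_id = "YOUR_USER_ID/Qwen3-32B-float8dq"

model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer(
    "Give me a short introduction to large language model.", return_tensors="pt"
).to(model.device)

model.generate(**inputs, max_new_tokens=64)  # warm-up pass
torch.cuda.synchronize()

start = time.perf_counter()
out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

new_tokens = out.shape[-1] - inputs["input_ids"].shape[-1]
print(f"{new_tokens / elapsed:.1f} tokens/s (greedy decoding)")
```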