Qwen3-4B is quantized by the PyTorch team using torchao, with 8-bit embeddings and 8-bit dynamic activations with 4-bit weights (8da4w) for the linear layers. The model is suitable for mobile deployment with ExecuTorch.

We provide the quantized pte for direct use in ExecuTorch. (The provided pte file is exported with a max_seq_length/max_context_length of 1024; if you wish to change this, re-export the quantized model following the instructions in Exporting to ExecuTorch.)
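
To fetch the provided pte directly from this repository you can use the Hugging Face CLI. This is a minimal sketch; the filename below matches the export command in Exporting to ExecuTorch, but check the repository's file listing for the exact name.

huggingface-cli download pytorch/Qwen3-4B-8da4w qwen3-4b-8da4w-1024-cxt.pte --local-dir .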

Running in a mobile app

The pte file can be run with ExecuTorch on a mobile phone. See the instructions for running the model on iOS. On iPhone 15 Pro, the model runs at 14.8 tokens/sec and uses 3379 MB of memory.


Quantization Recipe

First, install the required packages:

pip install git+https://github.com/huggingface/transformers@main
pip install git+https://github.com/pytorch/ao.git@main
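
You can quickly verify that the nightly builds were picked up (the exact version strings will vary):

python -c "import transformers, torchao; print(transformers.__version__, torchao.__version__)"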

Untie Embedding Weights

We want to quantize the embedding and lm_head differently. Since those layers are tied, we first need to untie the model:

from transformers import (
  AutoModelForCausalLM,
  AutoTokenizer,
)
import torch

model_id = "Qwen/Qwen3-4B"
untied_model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

print(untied_model)
from transformers.modeling_utils import find_tied_parameters
print("tied weights:", find_tied_parameters(untied_model))
# disable weight tying in the config and give lm_head its own copy of the weights,
# so the embedding and lm_head can be quantized with different configs
if getattr(untied_model.config.get_text_config(decoder=True), "tie_word_embeddings"):
    setattr(untied_model.config.get_text_config(decoder=True), "tie_word_embeddings", False)

untied_model._tied_weights_keys = []
untied_model.lm_head.weight = torch.nn.Parameter(untied_model.lm_head.weight.clone())

print("tied weights:", find_tied_parameters(untied_model))

USER_ID = "YOUR_USER_ID"
MODEL_NAME = model_id.split("/")[-1]
save_to = f"{USER_ID}/{MODEL_NAME}-untied-weights"

untied_model.push_to_hub(save_to)
tokenizer.push_to_hub(save_to)

# or save locally
save_to_local_path = f"{MODEL_NAME}-untied-weights"
untied_model.save_pretrained(save_to_local_path)
tokenizer.save_pretrained(save_to_local_path)

Note: to push_to_hub you need to run

pip install -U "huggingface_hub[cli]"
huggingface-cli login

and use a token with write access, from https://huggingface.co/settings/tokens
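
Before moving on, you can reload the untied checkpoint and confirm that no tied parameters remain. This is a minimal sketch using the local path saved above:

from transformers import AutoModelForCausalLM
from transformers.modeling_utils import find_tied_parameters

# reload the untied checkpoint saved locally above
# (or use the hub id f"{USER_ID}/{MODEL_NAME}-untied-weights")
reloaded_model = AutoModelForCausalLM.from_pretrained("Qwen3-4B-untied-weights", torch_dtype="auto")
print("tied weights:", find_tied_parameters(reloaded_model))  # expect an empty list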

Quantization

We used the following code to get the quantized model:

from transformers import (
  AutoModelForCausalLM,
  AutoTokenizer,
  TorchAoConfig,
)
from torchao.quantization.quant_api import (
    IntxWeightOnlyConfig,
    Int8DynamicActivationIntxWeightConfig,
    ModuleFqnToConfig,
    quantize_,
)
from torchao.quantization.granularity import PerGroup, PerAxis
import torch

# we start from the model with untied weights
model_id = "Qwen/Qwen3-4B"
USER_ID = "YOUR_USER_ID"
MODEL_NAME = model_id.split("/")[-1]
untied_model_id = f"{USER_ID}/{MODEL_NAME}-untied-weights"
untied_model_local_path = f"{MODEL_NAME}-untied-weights"

# int8 per-channel (per-axis) weight-only quantization for the embedding table
embedding_config = IntxWeightOnlyConfig(
    weight_dtype=torch.int8,
    granularity=PerAxis(0),
)
# int8 dynamic activations with int4 grouped weights (group size 32) for the linear layers
linear_config = Int8DynamicActivationIntxWeightConfig(
    weight_dtype=torch.int4,
    weight_granularity=PerGroup(32),
    weight_scale_dtype=torch.bfloat16,
)
quant_config = ModuleFqnToConfig({"_default": linear_config, "model.embed_tokens": embedding_config})
quantization_config = TorchAoConfig(quant_type=quant_config, include_embedding=True, untie_embedding_weights=True, modules_to_not_convert=[])

# either use `untied_model_id` or `untied_model_local_path`
quantized_model = AutoModelForCausalLM.from_pretrained(untied_model_id, torch_dtype=torch.float32, device_map="auto", quantization_config=quantization_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Push to hub
MODEL_NAME = model_id.split("/")[-1]
save_to = f"{USER_ID}/{MODEL_NAME}-8da4w"
quantized_model.push_to_hub(save_to, safe_serialization=False)
tokenizer.push_to_hub(save_to)

# Manual testing
prompt = "Hey, are you conscious? Can you talk to me?"
messages = [
    {
        "role": "system",
        "content": "",
    },
    {"role": "user", "content": prompt},
]
templated_prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print("Prompt:", prompt)
print("Templated prompt:", templated_prompt)
inputs = tokenizer(
    templated_prompt,
    return_tensors="pt",
).to("cuda")
generated_ids = quantized_model.generate(**inputs, max_new_tokens=128)
output_text = tokenizer.batch_decode(
    generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print("Response:", output_text[0][len(prompt):])

The response from the manual testing is:

Hello! I'm Qwen, a large language model developed by Alibaba Cloud. While I don't have consciousness or personal experiences, I can engage in conversations with you and help answer questions. I can talk to you, share thoughts, and even have fun! What's on your mind?
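
If you prefer to keep the quantized checkpoint locally instead of pushing it to the Hub, you can save it next to the script. As with push_to_hub above, safe_serialization=False is needed because the torchao-quantized tensors are not supported by safetensors. A minimal sketch, continuing from the snippet above:

# save the quantized model and tokenizer locally instead of pushing to the Hub
save_to_local = f"{MODEL_NAME}-8da4w"
quantized_model.save_pretrained(save_to_local, safe_serialization=False)
tokenizer.save_pretrained(save_to_local)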

Model Quality

| Benchmark | Qwen3-4B | Qwen3-4B-8da4w |
|---|---|---|
| Popular aggregated benchmark | | |
| mmlu | 68.38 | 66.74 |
| mmlu_pro | 49.71 | 46.73 |
| bbh | 74.86 | 67.47 |
| Reasoning | | |
| gpqa_main_zeroshot | 33.93 | 31.03 |
| Multilingual | | |
| m_mmlu | 50.41 | 47.13 |
| mgsm_en_cot_en | 30.40 | 29.20 |
| Math | | |
| gsm8k | 84.76 | 82.87 |
| leaderboard_math_hard (v3) | 48.19 | 44.94 |
| Overall | 55.08 | 52.01 |

Reproduce Model Quality Results

We rely on lm-evaluation-harness to evaluate the quality of the quantized model.

You need to install lm-eval from source: https://github.com/EleutherAI/lm-evaluation-harness#install
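
For example (a sketch of the linked install instructions; the upstream steps may change):

git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .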

baseline

lm_eval --model hf --model_args pretrained=Qwen/Qwen3-4B --tasks mmlu --device cuda:0 --batch_size auto

int8 dynamic activation and int4 weight quantization (8da4w)

lm_eval --model hf --model_args pretrained=pytorch/Qwen3-4B-8da4w --tasks mmlu --device cuda:0 --batch_size auto
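
The other rows in the table can be reproduced by changing the --tasks argument, for example (task names as listed in the table; availability depends on your lm-eval version):

lm_eval --model hf --model_args pretrained=pytorch/Qwen3-4B-8da4w --tasks bbh,gpqa_main_zeroshot,gsm8k --device cuda:0 --batch_size auto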

Exporting to ExecuTorch

We can run the quantized model on a mobile phone using ExecuTorch. Once ExecuTorch is set up, exporting and running the model on device is a breeze.

We first convert the quantized checkpoint to the format ExecuTorch's LLM export script expects by renaming some of the checkpoint keys. The following script does this for you. We have uploaded the converted checkpoint pytorch_model_converted.bin for convenience.

python -m executorch.examples.models.qwen3.convert_weights $(huggingface-cli download pytorch/Qwen3-4B-8da4w) pytorch_model_converted.bin

Once the checkpoint is converted, we can export to ExecuTorch's pte format with the XNNPACK delegate. The below command exports with a max_seq_length/max_context_length of 1024, but it can be changed as desired.

PARAMS="executorch/examples/models/qwen3/4b_config.json"
python -m executorch.examples.models.llama.export_llama \
  --model "qwen3-4b" \
  --checkpoint "pytorch_model_converted.bin" \
  --params "$PARAMS" \
  -kv \
  --use_sdpa_with_kv_cache \
  -d fp32 \
  -X \
  --metadata '{"get_bos_id":199999, "get_eos_ids":[200020,199999]}' \
  --max_seq_length 1024 \
  --max_context_length 1024 \
  --output_name="qwen3-4b-8da4w-1024-cxt.pte"

After that you can run the model in a mobile app (see Running in a mobile app).
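
Before integrating the pte into an app, you can optionally sanity-check it on your development machine with ExecuTorch's llama runner. This is a sketch only; the binary location depends on how you built ExecuTorch, and the tokenizer argument assumes you downloaded the tokenizer file from the Qwen3-4B repository:

cmake-out/examples/models/llama/llama_main \
  --model_path=qwen3-4b-8da4w-1024-cxt.pte \
  --tokenizer_path=tokenizer.json \
  --prompt="Hey, are you conscious? Can you talk to me?"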

Disclaimer

PyTorch has not performed safety evaluations or red teamed the quantized models. Performance characteristics, outputs, and behaviors may differ from the original models. Users are solely responsible for selecting appropriate use cases, evaluating and mitigating for accuracy, safety, and fairness, ensuring security, and complying with all applicable laws and regulations.

Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the licenses the models are released under, including any limitations of liability or disclaimers of warranties provided therein.
