Exporting to production
Export Transformers models to different formats for optimized runtimes and devices. The same model can be deployed to cloud providers or run on mobile and edge devices without being rewritten from scratch for each deployment environment, so you can target any inference ecosystem.
ExecuTorch
ExecuTorch runs PyTorch models on mobile and edge devices. It exports a model into a graph of standardized operators, compiles the graph into an ExecuTorch program, and executes it on the target device. The runtime is lightweight and calculates the execution plan ahead of time.
Install Optimum ExecuTorch from source.
git clone https://github.com/huggingface/optimum-executorch.git
cd optimum-executorch
pip install '.[dev]'
Export a Transformers model to ExecuTorch with the CLI tool.
optimum-cli export executorch \
--model "Qwen/Qwen3-8B" \
--task "text-generation" \
--recipe "xnnpack" \
--use_custom_sdpa \
--use_custom_kv_cache \
--qlinear 8da4w \
--qembedding 8w \
--output_dir="hf_smollm2"
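Once exported, the program can also be loaded and run from Python. The snippet below is a minimal sketch that assumes the Optimum ExecuTorch ExecuTorchModelForCausalLM class and its text_generation method, loading from the hf_smollm2 output directory produced above; check the Optimum ExecuTorch documentation for the exact API of your installed version.
from transformers import AutoTokenizer
from optimum.executorch import ExecuTorchModelForCausalLM
# Load the ExecuTorch program exported above (assumes hf_smollm2 is the local output directory).
model = ExecuTorchModelForCausalLM.from_pretrained("hf_smollm2")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
# Generate text with the lightweight ExecuTorch runtime.
generated_text = model.text_generation(
    tokenizer=tokenizer,
    prompt="Plants generate energy through a process known as ",
    max_seq_len=64,
)
print(generated_text)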
Run the following command to view all export options.
optimum-cli export executorch --help
ONNX
ONNX is a shared language for describing models from different frameworks. It represents models as a graph of standardized operators with well-defined types, shapes, and metadata. Models serialize into compact protobuf files that you can deploy across optimized runtimes and engines.
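To see that operator graph concretely, you can open an exported file with the onnx Python package. A short sketch, assuming a model.onnx file produced by one of the exports below; large models keep their weights in external data files, which this skips.
import onnx
from collections import Counter
# Load only the graph definition, skipping external weight files for large models.
model = onnx.load("model.onnx", load_external_data=False)
# Each node is a standardized operator with typed inputs and outputs.
op_counts = Counter(node.op_type for node in model.graph.node)
print(op_counts.most_common(10))  # e.g. MatMul, Add, Softmax, ...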
Optimum ONNX exports models to ONNX with configuration objects. It supports many architectures and is easily extendable. Export models through the CLI tool or programmatically.
Install Optimum ONNX.
uv pip install optimum-onnx
optimum-cli
Specify the model to export with the --model argument and pass the output directory as the final argument.
optimum-cli export onnx --model Qwen/Qwen3-8B Qwen/Qwen3-8b-onnx/
Run the following command to view all available arguments, or refer to the Export a model to ONNX with optimum.exporters.onnx guide for more details.
optimum-cli export onnx --help
To export a local model, save the weights and tokenizer files in the same directory. Pass the directory path to the --model argument and use the --task argument to specify the task. If you don’t provide --task, it is inferred from the model or, failing that, the architecture is exported without a task-specific head.
optimum-cli export onnx --model path/to/local/model --task text-generation Qwen/Qwen3-8b-onnx/
Deploy the model with any runtime that supports ONNX, including ONNX Runtime.
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8b-onnx")
model = ORTModelForCausalLM.from_pretrained("Qwen/Qwen3-8b-onnx")
inputs = tokenizer("Plants generate energy through a process known as ", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.batch_decode(outputs))
optimum.onnxruntime
Export Transformers models programmatically with Optimum ONNX. Instantiate an ORTModel with a model and set export=True. Save the ONNX model with save_pretrained.
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer
ort_model = ORTModelForCausalLM.from_pretrained("Qwen/Qwen3-8B", export=True)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
ort_model.save_pretrained("onnx/")
tokenizer.save_pretrained("onnx/")
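The saved onnx/ directory can then be reloaded without export=True and used for generation with ONNX Runtime, just like the Hub-hosted ONNX model shown earlier; the sketch below simply reuses the same classes on the local path.
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForCausalLM
# Reload the exported model and tokenizer from the local directory saved above.
model = ORTModelForCausalLM.from_pretrained("onnx/")
tokenizer = AutoTokenizer.from_pretrained("onnx/")
inputs = tokenizer("Plants generate energy through a process known as ", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.batch_decode(outputs))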