EETQ

The Easy & Efficient Quantization for Transformers (EETQ) library supports int8 weight-only per-channel quantization for NVIDIA GPUs. It uses high-performance GEMM and GEMV kernels from FasterTransformer and TensorRT-LLM. The attention layer is optimized with FlashAttention2. No calibration dataset is required, and the model doesn’t need to be pre-quantized. Accuracy degradation is negligible owing to the per-channel quantization.

EETQ further supports fine-tuning with PEFT.

Install EETQ from the release page or source code. CUDA 11.4+ is required for EETQ.

pip install --no-cache-dir https://github.com/NetEase-FuXi/EETQ/releases/download/v1.0.0/EETQ-1.0.0+cu121+torch2.1.2-cp310-cp310-linux_x86_64.whl
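
To build EETQ from source instead of installing the prebuilt wheel, the steps below are a sketch based on the project repository layout; check the EETQ README for the exact commands for your environment.

git clone https://github.com/NetEase-FuXi/EETQ.git
cd EETQ
git submodule update --init --recursive
pip install .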

Quantize a model on-the-fly by defining the quantization data type in EetqConfig.

from transformers import AutoModelForCausalLM, EetqConfig

quantization_config = EetqConfig("int8")
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B",
    torch_dtype="auto",
    device_map="auto",
    quantization_config=quantization_config
)
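
Because EETQ supports fine-tuning with PEFT, the quantized model can be wrapped with LoRA adapters. The snippet below is a minimal sketch; the LoRA hyperparameters (r, lora_alpha, target_modules) are illustrative assumptions, not tuned values.

from peft import LoraConfig, get_peft_model

# Illustrative LoRA settings; adjust the rank, alpha, and target modules for your model.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Attach trainable LoRA adapters to the EETQ-quantized model from above.
peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()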

Save the quantized model with save_pretrained() so it can be reloaded later with from_pretrained().

quant_path = "/path/to/save/quantized/model"
model.save_pretrained(quant_path)
model = AutoModelForCausalLM.from_pretrained(quant_path, device_map="auto")
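
As a quick sanity check that the reloaded quantized model still generates text, the snippet below runs a short generation; the tokenizer is loaded from the original checkpoint used above, and the prompt is only an example.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")
inputs = tokenizer("EETQ is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))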