
SpQR

The SpQR (Sparse-Quantized Representation) quantization algorithm compresses weights with a 16x16 tiled bi-level group 3-bit quantization structure: groups of 16 weights share quantized scales and zero points, and sparse outlier weights are kept in higher precision. It is described in the paper SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression.
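
To make the structure concrete, here is a minimal sketch of bi-level group quantization with sparse outliers for a single 16x16 tile. This is an illustration only, not the actual SpQR kernels: the magnitude-based outlier threshold is a simplifying assumption (SpQR selects outliers by quantization-error sensitivity), and the real implementation packs the codes into bitfields rather than storing them as tensors.

import torch

BITS, TILE = 3, 16                 # 3-bit codes, 16x16 tiles (16 groups of 16 weights)
QMAX = 2**BITS - 1

def quantize_group(x):
    # First-level asymmetric min-max quantization of a 1D group:
    # x ~= (codes - zero) * scale, with integer codes in [0, QMAX].
    scale = (x.max() - x.min()).clamp(min=1e-8) / QMAX
    zero = (-x.min() / scale).round().clamp(0, QMAX)
    codes = (x / scale + zero).round().clamp(0, QMAX)
    return codes, scale, zero

def quantize_tile(tile, outlier_frac=0.3):
    # Split off outliers: large-magnitude weights stay in full precision,
    # stored sparsely (simplified criterion; see the lead-in above).
    mask = tile.abs() > outlier_frac * tile.abs().max()
    outliers = torch.where(mask, tile, torch.zeros_like(tile)).to_sparse()
    dense = torch.where(mask, torch.zeros_like(tile), tile)

    # First level: each row of the tile is a group of 16 weights with its
    # own 3-bit scale and zero point.
    codes, scales, zeros = zip(*(quantize_group(row) for row in dense))
    scales, zeros = torch.stack(scales), torch.stack(zeros)

    # Second level ("bi-level"): the 16 per-group scales and the 16 zeros
    # are themselves quantized to 3 bits as two more groups of 16.
    return torch.stack(codes), quantize_group(scales), quantize_group(zeros), outliers

def dequantize_tile(codes, s_pack, z_pack, outliers):
    s_codes, s_scale, s_zero = s_pack
    z_codes, z_scale, z_zero = z_pack
    scales = (s_codes - s_zero) * s_scale          # recover first-level scales
    zeros = (z_codes - z_zero) * z_scale           # recover first-level zeros
    dense = (codes - zeros[:, None]) * scales[:, None]
    return dense + outliers.to_dense()             # add back high-precision outliers

tile = torch.randn(TILE, TILE)
recon = dequantize_tile(*quantize_tile(tile))
print(f"mean abs error: {(tile - recon).abs().mean():.4f}")

The point of the second level is that at group size 16 the per-group scales and zero points dominate the metadata cost, so storing them in 3 bits as well keeps the average bits per weight low.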

To quantize your own model with SpQR, refer to the Vahe1994/SpQR repository; the Transformers integration is for loading models that have already been quantized.

Load a SpQR-quantized model with from_pretrained().

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load a pre-quantized SpQR checkpoint; the non-quantized parts of the
# model run in half precision.
quantized_model = AutoModelForCausalLM.from_pretrained(
    "elvircrn/Llama-2-7b-SPQR-3Bit-16x16-red_pajama-hf",
    torch_dtype=torch.half,
    device_map="auto",  # automatically place layers on the available GPU(s)
)
tokenizer = AutoTokenizer.from_pretrained("elvircrn/Llama-2-7b-SPQR-3Bit-16x16-red_pajama-hf")
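
Once loaded, the model works with the standard generation API. A quick check (the prompt is arbitrary):

inputs = tokenizer("The capital of France is", return_tensors="pt").to(quantized_model.device)
outputs = quantized_model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))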