QLIP

[📂 GitHub] [📃 QLIP Tech Report] [🔗 Project Page] [🤗 HF Model]

Introduction

We introduce Quantized Language-Image Pretraining (QLIP), a visual tokenization method that combines state-of-the-art reconstruction quality with state-of-the-art zero-shot image understanding. QLIP trains a binary-spherical-quantization-based autoencoder with reconstruction and language-image alignment objectives. We are the first to show that the two objectives do not need to be at odds. We balance the two loss terms dynamically during training and show that a two-stage training pipeline effectively mixes the large-batch requirements of language-image pretraining with the memory bottleneck imposed by the reconstruction objective. We validate the effectiveness of QLIP for multimodal understanding and text-conditioned image generation with a single model. Specifically, QLIP serves as a drop-in replacement for the visual encoder of LLaVA and the image tokenizer of LlamaGen with comparable or even better performance. Finally, we demonstrate that QLIP enables a unified mixed-modality auto-regressive model for understanding and generation.
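
For intuition only, below is a minimal sketch of how a reconstruction objective and a CLIP-style language-image alignment objective can be combined with weighting terms. This is not the released training code: the names `w_recon` and `w_align`, the plain MSE reconstruction term, and the fixed temperature are illustrative assumptions; QLIP's actual dynamic loss balancing and quantization/perceptual terms are described in the tech report.

```python
import torch
import torch.nn.functional as F

def combined_loss(recon, target, img_emb, txt_emb,
                  w_recon=1.0, w_align=1.0, temperature=0.07):
    """Illustrative reconstruction + image-text alignment loss.
    w_recon / w_align stand in for QLIP's dynamic balancing."""
    # Pixel-space reconstruction term (QLIP additionally uses quantization
    # and perceptual terms not shown here).
    loss_recon = F.mse_loss(recon, target)

    # CLIP-style contrastive alignment: cosine similarities between
    # L2-normalized image and text embeddings, symmetric cross-entropy
    # over the in-batch pairs.
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    loss_align = 0.5 * (F.cross_entropy(logits, labels) +
                        F.cross_entropy(logits.t(), labels))

    return w_recon * loss_recon + w_align * loss_align
```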

Model Zoo

We provide the following models:

| model name     | #bits | CR↑   | 0-shot↑ | rFID↓ | HF Link  |
|----------------|-------|-------|---------|-------|----------|
| QLIP-B-16-256  | 28    | 219.4 | 74.3    | 3.21  | 🤗 link  |
| QLIP-B-8-256   | 28    | 54.8  | 75.6    | 0.70  | 🤗 link  |
| QLIP-L-14-392  | 28    | 168.0 | 79.1    | 1.46  | 🤗 link  |

Note:

  • CR: compression ratio, computed as 24 × patch_size² / #bits (raw RGB bits per patch divided by token bits; see the sketch after this list);
  • 0-shot: zero-shot classification accuracy on IN-1k-val;
  • rFID: reconstruction FID on IN-1k-val.
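
As a quick sanity check of the CR column, the formula above reproduces the table values. This is plain arithmetic, not part of the released code:

```python
# CR = 24 * patch_size**2 / num_bits  (raw RGB bits per patch vs. token bits)
def compression_ratio(patch_size: int, num_bits: int = 28) -> float:
    return 24 * patch_size ** 2 / num_bits

print(compression_ratio(16))  # 219.43  (QLIP-B-16-256)
print(compression_ratio(8))   # 54.86   (QLIP-B-8-256, reported as 54.8)
print(compression_ratio(14))  # 168.0   (QLIP-L-14-392)
```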

Citing QLIP

@article{zhao2025qlip,
  title={QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation},
  author={Zhao, Yue and Xue, Fuzhao and Reed, Scott and Fan, Linxi and Zhu, Yuke and Kautz, Jan and Yu, Zhiding and Krähenbühl, Philipp and Huang, De-An},
  journal={arXiv preprint arXiv:2502.yyyyy},
  year={2025}
}

Acknowledgement

The project builds upon the following open-source efforts:

  • EVA-CLIP: We initialize from EVA-CLIP, which significantly speeds up training convergence.

  • LLaVA: We use LLaVA to evaluate the multimodal understanding performance.

  • LlamaGen: We build the text-to-image generation evaluation on top of LlamaGen.

  • Lingua: We build the unified multimodal model on top of Lingua.
