arXiv:2508.20691

MobileCLIP2: Improving Multi-Modal Reinforced Training

Published on Aug 28, 2025

Abstract

AI-generated summary: MobileCLIP2 improves zero-shot image-text accuracy through enhanced multi-modal training, better CLIP teacher ensembles, and diverse captioner fine-tuning, achieving state-of-the-art results with lower latency.

Foundation image-text models such as CLIP enable a wide array of applications through their zero-shot capabilities. MobileCLIP is a recent family of image-text models with 3-15 ms latency and 50-150M parameters and state-of-the-art zero-shot accuracy. Its main ingredients were low-latency, lightweight architectures and a novel multi-modal reinforced training that made knowledge distillation from multiple caption generators and CLIP teachers efficient, scalable, and reproducible. In this paper, we improve the multi-modal reinforced training of MobileCLIP through: 1) better CLIP teacher ensembles trained on the DFN dataset, and 2) improved captioner teachers trained on the DFN dataset and fine-tuned on a diverse selection of high-quality image-caption datasets. Through ablations we uncover new insights, such as the importance of temperature tuning in contrastive knowledge distillation, the effectiveness of fine-tuning caption generators for caption diversity, and the additive improvement from combining synthetic captions generated by multiple models. We train a new family of models called MobileCLIP2 and achieve state-of-the-art ImageNet-1k zero-shot accuracy at low latencies. In particular, we observe a 2.2% improvement in ImageNet-1k accuracy for MobileCLIP2-B compared with the MobileCLIP-B architecture. Notably, MobileCLIP2-S4 matches the zero-shot accuracy of SigLIP-SO400M/14 on ImageNet-1k while being 2× smaller, and improves on DFN ViT-L/14 at 2.5× lower latency. We release our pretrained models (https://github.com/apple/ml-mobileclip) and the data-generation code (https://github.com/apple/ml-mobileclip-dr). The data-generation code makes it easy to create new reinforced datasets with arbitrary teachers using distributed, scalable processing.
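The abstract highlights temperature tuning in contrastive knowledge distillation as one of the key ablation findings. The snippet below is a minimal, hypothetical PyTorch sketch of that general idea, not the authors' implementation: a student's image-text similarity matrix is trained to match a teacher's softened similarity distribution, with separate teacher and student temperatures. All function and variable names here are illustrative assumptions.

```python
# Minimal sketch (assumption, not the paper's code): contrastive knowledge
# distillation with tunable temperatures. Embeddings are assumed L2-normalized.
import torch
import torch.nn.functional as F

def contrastive_kd_loss(student_img, student_txt, teacher_img, teacher_txt,
                        tau_student=0.07, tau_teacher=0.07):
    """KL divergence between teacher and student image-text similarity
    distributions over a batch, averaged over both matching directions."""
    # Batch similarity matrices (B x B); rows index images, columns index texts.
    s_logits = student_img @ student_txt.t() / tau_student
    t_logits = teacher_img @ teacher_txt.t() / tau_teacher

    # Teacher probabilities are the targets; student log-probs are the predictions.
    loss_i2t = F.kl_div(F.log_softmax(s_logits, dim=-1),
                        F.softmax(t_logits, dim=-1), reduction="batchmean")
    loss_t2i = F.kl_div(F.log_softmax(s_logits.t(), dim=-1),
                        F.softmax(t_logits.t(), dim=-1), reduction="batchmean")
    return 0.5 * (loss_i2t + loss_t2i)

# Example with random, normalized embeddings (batch of 8, 512-dim).
if __name__ == "__main__":
    b, d = 8, 512
    s_img = F.normalize(torch.randn(b, d), dim=-1)
    s_txt = F.normalize(torch.randn(b, d), dim=-1)
    t_img = F.normalize(torch.randn(b, d), dim=-1)
    t_txt = F.normalize(torch.randn(b, d), dim=-1)
    print(contrastive_kd_loss(s_img, s_txt, t_img, t_txt,
                              tau_student=0.05, tau_teacher=0.1))
```

A teacher ensemble, as mentioned in the abstract, could in principle be folded into this sketch by averaging the softmax targets of several teachers before the KL term; how MobileCLIP2 actually combines its CLIP teachers and captioners is described in the paper and released data-generation code.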

