GenRecal: Generation after Recalibration from Large to Small Vision-Language Models
Abstract
GenRecal is a novel distillation framework that improves the performance of small vision-language models by aligning feature representations across heterogeneous teacher and student architectures.
Recent advancements in vision-language models (VLMs) have leveraged large language models (LLMs) to achieve performance on par with closed-source systems like GPT-4V. However, deploying these models in real-world scenarios, particularly on resource-constrained devices, remains challenging due to their substantial computational demands. This has spurred interest in distilling knowledge from large VLMs into smaller, more efficient counterparts. A key challenge arises from the diversity of VLM architectures, which are built on different LLMs and employ varying token types that differ in vocabulary size, token splits, and token index ordering. As a result, existing distillation approaches are typically restricted to teacher-student pairs of the same VLM type. To address this limitation, we present Generation after Recalibration (GenRecal), a novel, general-purpose distillation framework for VLMs. GenRecal incorporates a Recalibrator that aligns and adapts feature representations between heterogeneous VLMs, enabling effective knowledge transfer across different types of VLMs. Through extensive experiments on multiple challenging benchmarks, we demonstrate that GenRecal significantly improves baseline performance, ultimately outperforming large-scale open- and closed-source VLMs.
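To make the idea concrete, here is a minimal PyTorch sketch of how a recalibration module and feature-alignment loss could look. The layer choices, names, and dimensions are illustrative assumptions for exposition, not the implementation described in the paper: a linear projection maps teacher hidden states into the student's feature dimension, a small Transformer adapter recalibrates them, and a cosine loss aligns the result with the student's features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Recalibrator(nn.Module):
    """Illustrative module that maps teacher hidden states into the student's
    feature space so heterogeneous VLMs can be aligned for distillation.
    The architecture below is an assumption, not the paper's exact design."""
    def __init__(self, teacher_dim: int, student_dim: int,
                 num_layers: int = 2, num_heads: int = 8):
        super().__init__()
        self.proj_in = nn.Linear(teacher_dim, student_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=student_dim, nhead=num_heads, batch_first=True
        )
        self.adapter = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.norm = nn.LayerNorm(student_dim)

    def forward(self, teacher_hidden: torch.Tensor) -> torch.Tensor:
        # teacher_hidden: (batch, teacher_seq_len, teacher_dim)
        x = self.proj_in(teacher_hidden)   # project into the student dimension
        x = self.adapter(x)                # recalibrate token representations
        return self.norm(x)

def feature_alignment_loss(recalibrated: torch.Tensor,
                           student_hidden: torch.Tensor) -> torch.Tensor:
    """Cosine-similarity alignment between recalibrated teacher features and
    student features; sequence lengths are matched by simple interpolation,
    since different tokenizers generally yield different sequence lengths."""
    if recalibrated.shape[1] != student_hidden.shape[1]:
        recalibrated = F.interpolate(
            recalibrated.transpose(1, 2),
            size=student_hidden.shape[1],
            mode="linear",
        ).transpose(1, 2)
    return (1 - F.cosine_similarity(recalibrated, student_hidden, dim=-1)).mean()

# Toy usage with made-up dimensions.
teacher_feats = torch.randn(2, 96, 4096)   # hidden states from a large teacher VLM
student_feats = torch.randn(2, 64, 2048)   # hidden states from a small student VLM
recal = Recalibrator(teacher_dim=4096, student_dim=2048)
loss = feature_alignment_loss(recal(teacher_feats), student_feats)
loss.backward()
```

In practice this alignment loss would be combined with the student's generation objective; the interpolation step is only a placeholder for whatever token-matching strategy the teacher-student pair actually requires.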
Community
- Project page: https://byungkwanlee.github.io/GenRecal-page/
- Authors: Byung-Kwan Lee (1,2*), Ryo Hachiuma (1), Yong Man Ro (2), Yu-Chiang Frank Wang (1,3), Yueh-Hua Wu (1)
- 1: NVIDIA, 2: KAIST, 3: National Taiwan University
- *: Work done during an internship
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- MASSV: Multimodal Adaptation and Self-Data Distillation for Speculative Decoding of Vision-Language Models (2025)
- Towards General Continuous Memory for Vision-Language Models (2025)
- MMRL++: Parameter-Efficient and Interaction-Aware Representation Learning for Vision-Language Models (2025)
- Slot-MLLM: Object-Centric Visual Tokenization for Multimodal LLM (2025)
- Are Unified Vision-Language Models Necessary: Generalization Across Understanding and Generation (2025)
- VScan: Rethinking Visual Token Reduction for Efficient Large Vision-Language Models (2025)
- LLaDA-V: Large Language Diffusion Models with Visual Instruction Tuning (2025)
Models citing this paper: 0
Datasets citing this paper: 0
Spaces citing this paper: 0