A Glimpse to Compress: Dynamic Visual Token Pruning for Large Vision-Language Models
Abstract
A dynamic pruning framework, GlimpsePrune, improves efficiency in Large Vision-Language Models by adaptively removing irrelevant visual tokens without degrading performance.
Visual token compression is critical for Large Vision-Language Models (LVLMs) to efficiently process high-resolution inputs. Existing methods typically adopt fixed compression ratios and cannot adapt to scenes of varying complexity, often causing imprecise pruning that discards informative visual tokens and degrades model performance. To address this issue, we introduce GlimpsePrune, a dynamic pruning framework inspired by human cognition. It takes a data-driven "glimpse" and prunes irrelevant visual tokens in a single forward pass before answer generation. This approach prunes 92.6% of visual tokens on average while fully retaining baseline performance on free-form VQA tasks. The reduced computational cost also enables more effective fine-tuning: an enhanced variant, GlimpsePrune+, achieves 110% of the baseline performance while maintaining a similarly high pruning rate. Our work paves a new way toward building more powerful and efficient LVLMs.
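The paper's code is not reproduced here, but the core idea lends itself to a minimal sketch: score each visual token by its relevance to the text prompt in one cheap pass, then keep a variable number of tokens via a threshold rather than a fixed ratio. The function name `prune_visual_tokens`, the scoring rule, and the threshold `tau` below are illustrative assumptions, not the authors' implementation.

```python
import torch

def prune_visual_tokens(visual_tokens: torch.Tensor,
                        text_tokens: torch.Tensor,
                        tau: float = 1.5) -> torch.Tensor:
    """Keep visual tokens that receive more text-to-image attention than
    `tau` times the uniform baseline, so the kept count adapts per input.

    visual_tokens: (V, D) image patch embeddings
    text_tokens:   (T, D) prompt token embeddings
    """
    d = visual_tokens.shape[-1]
    # One cheap cross-modal "glimpse": text-to-image attention weights.
    attn = torch.softmax(text_tokens @ visual_tokens.T / d ** 0.5, dim=-1)  # (T, V)
    glimpse_scores = attn.mean(dim=0)                                       # (V,)
    # Dynamic selection: threshold relative to uniform attention (1/V),
    # so simple scenes shed more tokens than cluttered ones.
    keep_mask = glimpse_scores > tau / visual_tokens.shape[0]
    return visual_tokens[keep_mask]

# Toy usage: the number of surviving tokens varies with the input.
vis = torch.randn(576, 1024)  # e.g., a 24x24 patch grid
txt = torch.randn(12, 1024)   # prompt embeddings
kept = prune_visual_tokens(vis, txt)
print(f"kept {kept.shape[0]} / {vis.shape[0]} visual tokens")
```

Because the mask is thresholded rather than top-k, a cluttered scene naturally retains more tokens than a simple one, which is the adaptivity that fixed-ratio methods lack.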
Community
GlimpsePrune is a dynamic visual token pruning framework for Large Vision-Language Models (LVLMs). With fast training on a small amount of data (e.g., less than 1 hour on 20K GQA samples), GlimpsePrune enables Qwen2.5-VL-7B to prune an average of 92.6% of visual tokens before generating a response while maintaining performance comparable to the original model.
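To make the headline number concrete, here is a back-of-the-envelope estimate of the attention-cost savings implied by a 92.6% pruning rate; the token counts (576 visual patches, 64 prompt tokens) are assumptions for illustration only.

```python
# Back-of-the-envelope attention-cost savings at the reported 92.6%
# pruning rate. Token counts below are illustrative assumptions.
num_visual = 576                # hypothetical patch-token count per image
num_text = 64                   # hypothetical prompt length
kept_visual = round(num_visual * (1 - 0.926))   # ~43 tokens survive

full_len = num_visual + num_text
pruned_len = kept_visual + num_text
print(f"sequence length: {full_len} -> {pruned_len}")
# Self-attention cost scales roughly quadratically with sequence length.
print(f"approx. attention cost ratio: {(pruned_len / full_len) ** 2:.3f}")
```

Under these assumptions the prefill sequence shrinks from 640 to about 107 tokens, cutting the attention cost to roughly 3% of the original.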
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- LaCo: Efficient Layer-wise Compression of Visual Tokens for Multimodal Large Language Models (2025)
- Rethinking Visual Token Reduction in LVLMs under Cross-modal Misalignment (2025)
- HiPrune: Training-Free Visual Token Pruning via Hierarchical Attention in Vision-Language Models (2025)
- Fast3D: Accelerating 3D Multi-modal Large Language Models for Efficient 3D Scene Understanding (2025)
- TransPrune: Token Transition Pruning for Efficient Large Vision-Language Model (2025)
- Grounding-Aware Token Pruning: Recovering from Drastic Performance Drops in Visual Grounding Caused by Pruning (2025)
- VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025)