LLaVA-SP: Enhancing Visual Representation with Visual Spatial Tokens for MLLMs
This repository contains the LLaVA-SP model, presented in the paper LLaVA-SP: Enhancing Visual Representation with Visual Spatial Tokens for MLLMs.
Abstract
The architecture of multimodal large language models (MLLMs) commonly connects a vision encoder, often based on CLIP-ViT, to a large language model. While CLIP-ViT works well for capturing global image features, it struggles to model local relationships between adjacent patches, leading to weaker visual representation, which in turn affects the detailed understanding ability of MLLMs. To solve this, we propose LLaVA-SP, which adds only six spatial visual tokens to the original visual tokens to enhance the visual representation. Our approach offers three key advantages: 1) We propose a novel Projector, which uses convolutional kernels to derive visual spatial tokens from ViT patch features, simulating two visual spatial ordering approaches: "from central region to global" and "from abstract to specific". Then, a cross-attention mechanism is applied to fuse fine-grained visual information, enriching the overall visual representation. 2) We present two model variants: LLaVA-SP-Cropping, which focuses on detail features through progressive cropping, and LLaVA-SP-Pooling, which captures global semantics through adaptive pooling, enabling the model to handle diverse visual understanding tasks. 3) Extensive experiments show that LLaVA-SP, fine-tuned with LoRA, achieves significant performance improvements across various multimodal benchmarks, outperforming the state-of-the-art LLaVA-1.5 model in multiple tasks with nearly identical inference latency.
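To make the idea above concrete, below is a minimal PyTorch sketch of a projector that derives a handful of spatial tokens from ViT patch features and fuses fine-grained detail via cross-attention. It is an illustrative approximation only: the `SpatialTokenProjector` class, its region sizes, kernel shapes, head count, and the centered-crop scheme are assumptions for exposition, not the exact design released in the repository; shapes assume CLIP-ViT-L/14 at 336px (24x24 = 576 patch tokens).

```python
import torch
import torch.nn as nn


class SpatialTokenProjector(nn.Module):
    """Illustrative sketch (not the official implementation): derive six
    spatial tokens from the ViT patch grid with convolutional kernels over
    progressively larger centered regions, then fuse fine-grained detail
    from all patches into them via cross-attention."""

    def __init__(self, vit_dim=1024, llm_dim=4096, num_spatial_tokens=6, grid=24):
        super().__init__()
        self.grid = grid
        # One region per spatial token, growing from a central crop toward the
        # full grid ("from central region to global"). Sizes are placeholders.
        self.region_sizes = [2 * (i + 1) for i in range(num_spatial_tokens)]  # 2, 4, ..., 12
        # A convolutional kernel summarizes each region into a single token.
        self.region_convs = nn.ModuleList(
            nn.Conv2d(vit_dim, vit_dim, kernel_size=s) for s in self.region_sizes
        )
        # Cross-attention: spatial tokens (queries) attend to all patch tokens.
        self.cross_attn = nn.MultiheadAttention(vit_dim, num_heads=8, batch_first=True)
        # Standard MLP projector into the LLM embedding space.
        self.proj = nn.Sequential(
            nn.Linear(vit_dim, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim)
        )

    def forward(self, patch_feats):                      # patch_feats: [B, 576, vit_dim]
        B, N, C = patch_feats.shape
        grid = patch_feats.transpose(1, 2).reshape(B, C, self.grid, self.grid)

        spatial_tokens = []
        for size, conv in zip(self.region_sizes, self.region_convs):
            # "Cropping" flavor: take a centered region and reduce it to one token.
            start = (self.grid - size) // 2
            region = grid[:, :, start:start + size, start:start + size]
            spatial_tokens.append(conv(region).flatten(2).mean(-1))   # [B, C]
        spatial_tokens = torch.stack(spatial_tokens, dim=1)           # [B, 6, C]

        # Enrich the spatial tokens with fine-grained information from every patch.
        fused, _ = self.cross_attn(spatial_tokens, patch_feats, patch_feats)

        # Append the six spatial tokens to the original visual tokens and project.
        return self.proj(torch.cat([patch_feats, fused], dim=1))      # [B, 582, llm_dim]


if __name__ == "__main__":
    feats = torch.randn(2, 576, 1024)
    print(SpatialTokenProjector()(feats).shape)  # torch.Size([2, 582, 4096])
```

The LLaVA-SP-Pooling variant described in the abstract would replace the centered crops with adaptive pooling over the whole grid at several resolutions ("from abstract to specific"); the rest of the pipeline, cross-attention fusion followed by projection, stays the same in this sketch.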
Code and Models
The official code and models are available on GitHub: https://github.com/CnFaker/LLaVA-SP
Citation
If you find this work useful, please consider citing the paper:
@article{lou2025llavasp,
title={LLaVA-SP: Enhancing Visual Representation with Visual Spatial Tokens for MLLMs},
author={Lou, Haoran and Fan, Chunxiao and Liu, Ziyan and Wu, Yuexin and Wang, Xinliang},
journal={arXiv preprint arXiv:2507.00505},
year={2025}
}