Arxiv: https://arxiv.org/abs/2505.22019
Github: https://github.com/Alibaba-NLP/VRAG
The training code and demo will be released.
✨ Model Description
VRAG is a Retrieval-Augmented Generation (RAG) model designed for visually rich information. It integrates visual perception capabilities and is optimized with a reinforcement learning (RL) framework to significantly enhance understanding and reasoning over visual content. The model interacts with search engines to efficiently retrieve relevant images and documents and to generate accurate answers.
VRAG is a purely visual RAG agent: it enables VLMs to progressively gather information, moving from a coarse-grained to a fine-grained perspective. VRAG-RL is the accompanying reinforcement learning framework, tailored for training VLMs to effectively reason, retrieve, and understand visually rich information.
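To make the coarse-to-fine loop concrete, here is a minimal sketch of such an agent. The `vlm_generate`, `search_engine`, and `crop_region` callables and the `Action` format are hypothetical stand-ins for illustration, not the released VRAG API.

```python
# Illustrative sketch of the iterative visual-RAG loop described above.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str                 # "search", "crop", or "answer" (hypothetical format)
    query: str = ""           # for "search"
    image: object = None      # for "crop"
    bbox: tuple = None        # for "crop": (left, top, right, bottom)
    text: str = ""            # for "answer"

def vrag_agent(question, vlm_generate, search_engine, crop_region, max_turns=5):
    context = [{"role": "user", "content": question}]
    for _ in range(max_turns):
        action = vlm_generate(context)        # model decides the next action
        if action.kind == "search":
            # Coarse-grained step: retrieve candidate pages/images.
            pages = search_engine(action.query, top_k=3)
            context.append({"role": "tool", "content": pages})
        elif action.kind == "crop":
            # Fine-grained step: zoom into an information-dense region.
            patch = crop_region(action.image, action.bbox)
            context.append({"role": "tool", "content": patch})
        else:
            return action.text                # "answer" terminates the loop
    return None                               # turn budget exhausted
```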
💻 Intended Use
- Visual Document Question Answering: Extracting information from slides, reports, and other documents to answer questions.
- Multimodal Information Retrieval: Searching for relevant images and text within large-scale visual document collections (a generic retrieval sketch follows this list).
- Chart and Layout Understanding: Analyzing charts, tables, and layout structures to extract key information.
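For the retrieval use case, a typical setup embeds every document page as a vector and ranks pages by cosine similarity to the query embedding. The sketch below is a generic illustration under that assumption; the embedding model and index format are not part of the VRAG release.

```python
import numpy as np

def build_index(page_embeddings: np.ndarray) -> np.ndarray:
    """L2-normalize page embeddings of shape (num_pages, dim) for cosine search."""
    return page_embeddings / np.linalg.norm(page_embeddings, axis=1, keepdims=True)

def search(index: np.ndarray, query_embedding: np.ndarray, top_k: int = 3) -> np.ndarray:
    """Return indices of the top_k pages by cosine similarity to the query."""
    q = query_embedding / np.linalg.norm(query_embedding)
    scores = index @ q
    return np.argsort(-scores)[:top_k]
```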
🤖 Key Features
- Visual Perception: Equipped with a visual perception action space, the model can focus on information-dense regions of images and acquire information from coarse to fine levels.
- Enhanced Retrieval: Retrieval efficiency is optimized through a fine-grained reward function, ensuring the model quickly retrieves relevant images and documents (an illustrative reward sketch follows this list).
- Multi-turn Reasoning: Supports multi-turn interactions, allowing the model to build a high-quality context through repeated exchanges with search engines.
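To illustrate what a fine-grained retrieval reward can look like, the sketch below combines a rank-weighted retrieval term with an answer-correctness term. The exact reward used by VRAG-RL is defined in the paper; the functional form and the `alpha` coefficient here are assumptions made for illustration.

```python
def composite_reward(retrieved, relevant, answer, gold, alpha=0.5):
    """Illustrative reward: rank-weighted retrieval quality + exact-match answer.

    This form is a guess meant only to show how retrieval efficiency can be
    scored per rollout; it is not the reward from the VRAG-RL paper.
    """
    relevant_set = set(relevant)
    # Rank-weighted hits: retrieving a relevant document earlier scores higher.
    hits = sum(1.0 / (rank + 1) for rank, doc in enumerate(retrieved) if doc in relevant_set)
    # Normalize by the best achievable score for this query.
    ideal = sum(1.0 / (rank + 1) for rank in range(min(len(relevant), len(retrieved))))
    r_retrieval = hits / ideal if ideal > 0 else 0.0
    r_answer = 1.0 if answer.strip().lower() == gold.strip().lower() else 0.0
    return alpha * r_retrieval + (1.0 - alpha) * r_answer
```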
🚀 Quick Start
Please refer to https://github.com/Alibaba-NLP/VRAG.
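The repository above is the source of truth for setup and serving. As a rough orientation only, a Transformers-style loading flow for a vision-language checkpoint looks like the sketch below; the model id, processor behavior, and chat format are assumptions and may not match the released pipeline.

```python
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

model_id = "Alibaba-NLP/VRAG"  # hypothetical id; check the repo for the real one
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, device_map="auto")

image = Image.open("report_page.png")
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "What revenue figure does this page report?"},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(output[0], skip_special_tokens=True))
```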