arXiv:2506.20670

MMSearch-R1: Incentivizing LMMs to Search

Published on Jun 25
· Submitted by kimingng on Jun 27
Authors: Bo Li, et al.

Abstract

AI-generated summary

MMSearch-R1, a reinforcement learning framework, enables large multimodal models to perform efficient, on-demand, multi-turn search in real-world environments, outperforming existing approaches.

Robust deployment of large multimodal models (LMMs) in real-world scenarios requires access to external knowledge sources, given the complexity and dynamic nature of real-world information. Existing approaches such as retrieval-augmented generation (RAG) and prompt-engineered search agents rely on rigid pipelines, often leading to inefficient or excessive search behavior. We present MMSearch-R1, the first end-to-end reinforcement learning framework that enables LMMs to perform on-demand, multi-turn search in real-world Internet environments. Our framework integrates both image and text search tools, allowing the model to reason about when and how to invoke them, guided by an outcome-based reward with a search penalty. To support training, we collect a multimodal search VQA dataset through a semi-automated pipeline that covers diverse visual and textual knowledge needs, and we curate a search-balanced subset with both search-required and search-free samples, which proves essential for shaping efficient, on-demand search behavior. Extensive experiments on knowledge-intensive and info-seeking VQA tasks show that our model not only outperforms RAG-based baselines of the same model size, but also matches the performance of a larger RAG-based model while reducing search calls by over 30%. We further analyze key empirical findings to offer actionable insights for advancing research in multimodal search.
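To picture the "outcome-based reward with a search penalty" the abstract describes, here is a minimal Python sketch. It is an illustration under stated assumptions, not the paper's implementation: the exact-match correctness check, the per-call penalty shape, and the penalty value 0.1 are all hypothetical.

```python
def outcome_reward(prediction: str, answer: str, num_search_calls: int,
                   penalty: float = 0.1) -> float:
    """Outcome-based reward with a search penalty (illustrative sketch).

    A correct answer earns reward 1.0; each search call subtracts a fixed
    penalty, pushing the policy to search only when it actually needs to.
    """
    correct = prediction.strip().lower() == answer.strip().lower()
    base = 1.0 if correct else 0.0
    # Penalize tool use to discourage excessive or redundant searching.
    return base - penalty * num_search_calls
```

Under this shaping, a model that answers correctly without searching scores higher than one that reaches the same answer through several searches, which is exactly the on-demand incentive the abstract describes.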

Community

Paper author · Paper submitter

This paper presents MMSearch-R1, an end-to-end RL framework that enables LMMs to perform on-demand, multi-turn search with real-world multimodal search tools. On knowledge-intensive and info-seeking VQA tasks, the MMSearch-R1 model outperforms same-size traditional RAG baselines and cuts search calls by over 30% (see the sketch below).

🫡🌟🌟🌟🌟
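To make the on-demand, multi-turn behavior concrete, below is a minimal Python sketch of a search rollout loop. Everything here is a hypothetical stand-in rather than the paper's code: `ToyModel.act`, the action-dict schema, the turn budget, and the `image_search`/`text_search` stubs all substitute for the real LMM and real search tools.

```python
def image_search(image):
    # Stub: a real implementation would query an image-search API.
    return "stub image-search results"

def text_search(query):
    # Stub: a real implementation would query a web-search API.
    return f"stub text-search results for {query!r}"

def rollout(model, question, image, max_turns=4):
    """Let the model choose each turn: search by image, search by text,
    or commit to a final answer. Returns (answer, number of search calls)."""
    context = [{"role": "user", "question": question, "image": image}]
    num_calls = 0
    for _ in range(max_turns):
        action = model.act(context)  # hypothetical policy interface
        if action["type"] == "image_search":
            context.append({"role": "tool", "content": image_search(image)})
            num_calls += 1
        elif action["type"] == "text_search":
            context.append({"role": "tool", "content": text_search(action["query"])})
            num_calls += 1
        else:  # "answer": stop searching and respond
            return action["content"], num_calls
    return "", num_calls  # turn budget exhausted without an answer

class ToyModel:
    """Trivial stand-in policy that always answers immediately."""
    def act(self, context):
        return {"type": "answer", "content": "unknown"}

print(rollout(ToyModel(), "Who painted this?", image=None))
```

The `num_calls` count is what a search penalty, like the reward sketch above, would consume during training, so the two pieces together show how a policy could be nudged toward searching only when necessary.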

