GroupRank: A Groupwise Reranking Paradigm Driven by Reinforcement Learning
Abstract
Large Language Models have shown strong potential as rerankers to improve the overall performance of RAG systems. However, existing reranking paradigms face a core theoretical and practical dilemma. Pointwise methods, while simple and highly flexible, evaluate documents independently, leaving them prone to the Ranking Myopia Trap: they overlook the relative importance between documents. Listwise methods, in contrast, can perceive the global ranking context but suffer from inherent List Rigidity, which causes severe scalability and flexibility issues on large candidate sets. To address these challenges, we propose Groupwise, a novel reranking paradigm in which the query and a group of candidate documents are jointly fed into the model, which performs within-group comparisons to assign an individual relevance score to each document. This design retains the flexibility of Pointwise methods while enabling the comparative capability of Listwise methods. We further adopt GRPO for model training, equipped with a heterogeneous reward function that combines ranking metrics with a distributional reward aimed at aligning score distributions across groups. To overcome the bottleneck caused by the scarcity of high-quality labeled data, we also propose an innovative pipeline for synthesizing high-quality retrieval and ranking data, which can be leveraged to train not only the reranker but also the retriever. Extensive experiments on two reasoning-intensive retrieval benchmarks, BRIGHT and R2MED, validate the effectiveness of our approach.
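The groupwise inference loop described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `score_group` is a hypothetical stand-in for an LLM call that sees the query together with one group of candidates and returns one relevance score per document.

```python
# Hedged sketch of Groupwise reranking: candidates are split into
# fixed-size groups, each group is scored jointly (so the model can
# compare documents within the group), and the per-document scores
# are merged into one global ranking.
from typing import Callable, List, Tuple

def groupwise_rerank(
    query: str,
    docs: List[str],
    score_group: Callable[[str, List[str]], List[float]],
    group_size: int = 4,
) -> List[Tuple[str, float]]:
    """Score documents group by group, then sort all of them globally."""
    scored: List[Tuple[str, float]] = []
    for start in range(0, len(docs), group_size):
        group = docs[start:start + group_size]
        scores = score_group(query, group)  # one score per document
        scored.extend(zip(group, scores))
    # A single global sort assumes scores are comparable across groups;
    # the distributional reward used in training is meant to encourage that.
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

Unlike a listwise reranker, this keeps each model call bounded at `group_size` documents, so the candidate set can grow without hitting context-length limits.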
Community
The GroupRank paper presents a very elegant solution to a fundamental dilemma in reranking for RAG systems. It cleverly bridges the gap between Pointwise and Listwise methods with a new 'Groupwise' paradigm. By evaluating documents in small, manageable groups, it gains the comparative context that Pointwise methods lack, while avoiding the rigidity and scalability issues of traditional Listwise approaches. The use of reinforcement learning with a unique reward function, combined with their innovative data synthesis pipeline, makes this a powerful and practical contribution, especially for complex, reasoning-based retrieval tasks.
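The heterogeneous reward mentioned above could look roughly like the sketch below. The exact metrics, weights, and target distribution are assumptions for illustration, not the paper's specification: here the ranking term is NDCG over one group, and the distributional term penalizes a group whose mean score drifts from a shared target scale.

```python
# Illustrative heterogeneous reward: ranking quality minus a penalty
# for score distributions that are not comparable across groups.
# The target mean and the 0.1 weight are arbitrary assumptions.
import math
from typing import List

def ndcg(pred_scores: List[float], true_rels: List[int]) -> float:
    """NDCG over one group: rank by predicted score, gain from true labels."""
    order = sorted(range(len(pred_scores)), key=lambda i: pred_scores[i], reverse=True)
    dcg = sum(true_rels[i] / math.log2(rank + 2) for rank, i in enumerate(order))
    ideal = sorted(true_rels, reverse=True)
    idcg = sum(rel / math.log2(rank + 2) for rank, rel in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

def heterogeneous_reward(
    pred_scores: List[float],
    true_rels: List[int],
    target_mean: float = 0.5,
    weight: float = 0.1,
) -> float:
    """Ranking term minus a penalty for drifting from a shared score scale."""
    mean = sum(pred_scores) / len(pred_scores)
    distributional_penalty = abs(mean - target_mean)
    return ndcg(pred_scores, true_rels) - weight * distributional_penalty
```

The design intuition: the ranking term only cares about relative order within a group, so without the distributional term nothing stops different groups from using incompatible score ranges, which would break the global merge step.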
good job!
groupwise, sounds interesting
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Rethinking Reasoning in Document Ranking: Why Chain-of-Thought Falls Short (2025)
- Retro*: Optimizing LLMs for Reasoning-Intensive Document Retrieval (2025)
- E2Rank: Your Text Embedding can Also be an Effective and Efficient Listwise Reranker (2025)
- TeaRAG: A Token-Efficient Agentic Retrieval-Augmented Generation Framework (2025)
- MARAG-R1: Beyond Single Retriever via Reinforcement-Learned Multi-Tool Agentic Retrieval (2025)
- Enhancing Transformer-Based Rerankers with Synthetic Data and LLM-Based Supervision (2025)
- Embedding-Based Context-Aware Reranker (2025)