arxiv:2410.09584

Toward General Instruction-Following Alignment for Retrieval-Augmented Generation

Published on Oct 12, 2024
· Submitted by dongguanting on Oct 15, 2024

Abstract

Following natural instructions is crucial for the effective application of Retrieval-Augmented Generation (RAG) systems. Despite recent advancements in Large Language Models (LLMs), research on assessing and improving instruction-following (IF) alignment within the RAG domain remains limited. To address this issue, we propose VIF-RAG, the first automated, scalable, and verifiable synthetic pipeline for instruction-following alignment in RAG systems. We start by manually crafting a minimal set of atomic instructions (<100) and developing combination rules to synthesize and verify complex instructions for a seed set. We then use supervised models for instruction rewriting while simultaneously generating code to automate the verification of instruction quality via a Python executor. Finally, we integrate these instructions with extensive RAG and general data samples, scaling up to a high-quality VIF-RAG-QA dataset (>100k) through automated processes. To further bridge the gap in instruction-following auto-evaluation for RAG systems, we introduce FollowRAG Benchmark, which includes approximately 3K test samples, covering 22 categories of general instruction constraints and four knowledge-intensive QA datasets. Due to its robust pipeline design, FollowRAG can seamlessly integrate with different RAG benchmarks. Using FollowRAG and eight widely-used IF and foundational abilities benchmarks for LLMs, we demonstrate that VIF-RAG markedly enhances LLM performance across a broad range of general instruction constraints while effectively leveraging its capabilities in RAG scenarios. Further analysis offers practical insights for achieving IF alignment in RAG systems. Our code and datasets are released at https://FollowRAG.github.io.
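
To make the verification step concrete, here is a minimal sketch of executor-based instruction checking. This is not the authors' code: the constraint names and checker functions are hypothetical stand-ins for the paper's atomic instructions, and in VIF-RAG the checks themselves are generated as code and run by a Python executor rather than hand-written.

```python
from typing import Callable, Dict, List

# Hypothetical atomic checkers; the paper's actual atomic instruction set (<100 items)
# and its generated verification code differ from these stand-ins.
ATOMIC_CHECKS: Dict[str, Callable[[str], bool]] = {
    "max_50_words": lambda r: len(r.split()) <= 50,
    "contains_keyword_RAG": lambda r: "RAG" in r,
    "no_commas": lambda r: "," not in r,
    "ends_with_period": lambda r: r.strip().endswith("."),
}

def verify(response: str, constraints: List[str]) -> bool:
    """A composed instruction is satisfied only if every atomic constraint holds."""
    return all(ATOMIC_CHECKS[c](response) for c in constraints)

# Keep a synthesized sample only if the rewritten response still passes all attached checks.
sample = {
    "response": "RAG systems retrieve evidence before generating an answer.",
    "constraints": ["max_50_words", "contains_keyword_RAG", "ends_with_period"],
}
print(verify(sample["response"], sample["constraints"]))  # True
```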

Community

Paper author · Paper submitter

TLDR: We present VIF-RAG, an automated, scalable, and verifiable framework that significantly enhances instruction-following alignment in RAG systems, backed by the FollowRAG Benchmark for thorough evaluation and practical insights.

VIF-RAG is the first automated, scalable, and verifiable synthetic data framework for instruction-following alignment in RAG. It uniquely combines augmented rewriting with diverse validation processes to synthesize high-quality alignment data almost from scratch, starting from fewer than 100 atomic instructions and scaling up to over 100K samples.
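
The "almost from scratch" scaling can be pictured as simple combination rules over the atomic seed set. The sketch below is only an illustration under assumed constraint names and a hypothetical conflict rule; the paper's actual combination rules are not reproduced here.

```python
from itertools import combinations

# Illustrative atomic instruction names; the real seed set and its rules differ.
ATOMIC = ["max_50_words", "min_100_words", "no_commas", "all_lowercase", "ends_with_period"]

# Hypothetical rule: pairs that cannot be satisfied simultaneously are never combined.
CONFLICTS = {frozenset({"max_50_words", "min_100_words"})}

def compose(k: int = 2):
    """Yield k-way combinations of atomic constraints, skipping conflicting pairs."""
    for combo in combinations(ATOMIC, k):
        pairs = {frozenset(p) for p in combinations(combo, 2)}
        if pairs.isdisjoint(CONFLICTS):
            yield combo

complex_instructions = list(compose(2))
print(len(complex_instructions))  # 9 of the 10 possible pairs survive the conflict filter
```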


FollowRAG is the first benchmark designed to comprehensively evaluate LLMs' complex instruction-following abilities in RAG tasks. It includes nearly 3K test samples, spanning four knowledge-intensive QA benchmarks and 22 types of constraints. Its design ensures seamless integration with various RAG benchmarks, providing strong scalability.
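
As an illustration of how such a benchmark can be consumed, the sketch below shows a hypothetical FollowRAG-style record and a simple constraint-level scoring loop. The field names, example checkers, and metric are assumptions for illustration, not the benchmark's actual schema or evaluation protocol.

```python
# Hypothetical FollowRAG-style record; field names are assumed, not the real schema.
samples = [
    {
        "source_qa": "NaturalQuestions",  # stand-in for one of the four knowledge-intensive QA sets
        "question": "Who proposed the theory of general relativity?",
        "passages": ["...retrieved passage 1...", "...retrieved passage 2..."],
        "constraints": ["answer_in_one_sentence", "mention_the_passages"],
    },
]

def check_constraint(name: str, response: str) -> bool:
    # Toy checkers standing in for the benchmark's 22 constraint types.
    if name == "answer_in_one_sentence":
        return response.strip().count(".") <= 1
    if name == "mention_the_passages":
        return "passage" in response.lower()
    return False

def instruction_following_rate(responses):
    """Fraction of (sample, constraint) pairs that the model responses satisfy."""
    checks = [
        check_constraint(c, r)
        for s, r in zip(samples, responses)
        for c in s["constraints"]
    ]
    return sum(checks) / len(checks)

print(instruction_following_rate(["Einstein proposed it, as supported by the first passage."]))  # 1.0
```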


