jiaruz2 committed
Commit 2c1bdd7 · verified · Parent(s): 29b4461

Update README.md

Files changed (1): README.md (+2 -100)
README.md CHANGED
@@ -9,23 +9,12 @@ task_categories:
 - table-question-answering
 ---
 
- # RAG over Tables: Hierarchical Memory Index, Multi-Stage Retrieval, and Benchmarking
-
 📄 [Paper](https://arxiv.org/abs/2504.01346) | 👨🏻‍💻 [Code](https://github.com/jiaruzouu/T-RAG)
 
- ## Introduction
+ ## 🔍 Introduction
 
 Retrieval-Augmented Generation (RAG) has become a key paradigm to enhance Large Language Models (LLMs) with external knowledge. While most RAG systems focus on **text corpora**, real-world information is often stored in **tables** across web pages, Wikipedia, and relational databases. Existing methods struggle to retrieve and reason across **multiple heterogeneous tables**.
 
- This repository provides the implementation of **T-RAG**, a novel table-corpora-aware RAG framework featuring:
-
- - **Hierarchical Memory Index** – organizes heterogeneous table knowledge at multiple granularities.
- - **Multi-Stage Retrieval** – coarse-to-fine retrieval combining clustering, subgraph reasoning, and PageRank.
- - **Graph-Aware Prompting** – injects relational priors into LLMs for structured tabular reasoning.
- - **MultiTableQA Benchmark** – a large-scale dataset with **57,193 tables** and **23,758 questions** across various tabular tasks.
-
- ## MultiTableQA Benchmark Details
-
 For MultiTableQA, we release a comprehensive benchmark, including five different datasets covering table fact-checking, single-hop QA, and multi-hop QA:
 | Dataset | Link |
 |-----------------------|------|
@@ -38,93 +27,6 @@ For MultiTableQA, we release a comprehensive benchmark, including five different
 
 MultiTableQA extends the traditional single-table QA setting into a multi-table retrieval and question answering benchmark, enabling more realistic and challenging evaluations.
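For a quick look at the benchmark, one split can be loaded with the `datasets` library. A minimal sketch, assuming the splits are hosted as Hugging Face datasets; the repo id and split name below are placeholders, so substitute the actual link from the table above:

```python
from datasets import load_dataset  # pip install datasets

# Placeholder repo id -- replace with the dataset link from the table above.
ds = load_dataset("jiaruzouu/MultiTableQA-fact-checking", split="test")

print(ds)     # split size and column names
print(ds[0])  # one example: question/claim plus its associated table(s)
```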
 
- ---
-
- ## Sample Usage
-
- The following sections provide instructions on how to set up the environment, prepare the MultiTableQA data, run T-RAG retrieval, and perform downstream inference with LLMs. For more details, please refer to the [official GitHub repository](https://github.com/jiaruzouu/T-RAG).
-
- ### 1. Installation
-
- To get started with the T-RAG framework, first clone the repository and install the necessary dependencies:
-
- ```bash
- git clone https://github.com/jiaruzouu/T-RAG.git
- cd T-RAG
-
- conda create -n trag python=3.11.9
- conda activate trag
-
- # Install dependencies
- pip install -r requirements.txt
- ```
-
- ### 2. MultiTableQA Data Preparation
-
- To download and preprocess the **MultiTableQA** benchmark:
-
- ```bash
- cd table2graph
- bash scripts/prepare_data.sh
- ```
-
- This script will automatically fetch the source tables, apply decomposition (row/column splitting), and generate the benchmark splits.
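The decomposition step is handled by `prepare_data.sh`; conceptually, it splits each large source table into smaller row and column blocks. A minimal pandas sketch of that idea, where the block counts and function name are illustrative rather than the script's actual parameters:

```python
import pandas as pd

def decompose(table: pd.DataFrame, n_row_blocks: int = 2, n_col_blocks: int = 2):
    """Split a table into row blocks and column blocks (illustrative only)."""
    row_step = -(-len(table) // n_row_blocks)           # ceiling division
    col_step = -(-len(table.columns) // n_col_blocks)
    row_blocks = [table.iloc[i:i + row_step] for i in range(0, len(table), row_step)]
    col_blocks = [table.iloc[:, j:j + col_step] for j in range(0, len(table.columns), col_step)]
    return row_blocks, col_blocks

df = pd.DataFrame({"city": ["Paris", "Rome", "Oslo", "Kyiv"],
                   "country": ["France", "Italy", "Norway", "Ukraine"],
                   "population_m": [2.1, 2.8, 0.7, 2.9]})
rows, cols = decompose(df)
print(len(rows), "row blocks,", len(cols), "column blocks")
```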
-
- ### 3. Run T-RAG Retrieval
-
- To run hierarchical index construction and multi-stage retrieval:
-
- **Stage 1 & 2: Table-to-Graph Construction & Coarse-grained Multi-way Retrieval**
-
- Stages 1 & 2 include:
- - Table Linearization
- - Multi-way Feature Extraction
- - Hypergraph Construction by Multi-way Clustering
- - Typical Node Selection for Efficient Table Retrieval
- - Query-Cluster Assignment
-
- To run this:
-
- ```bash
- cd src
- cd table2graph
- bash scripts/table_cluster_run.sh  # or: python scripts/table_cluster_run.py
- ```
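For intuition, the coarse stage boils down to embedding linearized tables, clustering the embeddings, and routing each query to its nearest cluster. A minimal scikit-learn sketch of that loop; TF-IDF and KMeans stand in for the learned multi-way features and clustering used by the actual scripts:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Linearized tables: each table flattened to "header : row" text.
tables = [
    "team | wins | losses : Lakers | 45 | 37",
    "team | points : Celtics | 110",
    "country | capital : France | Paris",
    "country | population : Japan | 125M",
]
query = "which country has capital Paris?"

vec = TfidfVectorizer().fit(tables + [query])
X = vec.transform(tables)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Query-cluster assignment: embed the query, pick the nearest centroid;
# only tables in that cluster move on to fine-grained retrieval.
cluster = int(kmeans.predict(vec.transform([query]))[0])
candidates = [t for t, c in zip(tables, kmeans.labels_) if c == cluster]
print(cluster, candidates)
```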
-
- **Stage 3: Fine-grained Sub-graph Retrieval**
- Stage 3 includes:
- - Local Subgraph Construction
- - Iterative Personalized PageRank for Retrieval
-
- To run this:
- ```bash
- cd src
- cd table2graph
- python scripts/subgraph_retrieve_run.py
- ```
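Stage 3's core primitive, personalized PageRank over a table graph, is easy to prototype with `networkx`. The toy graph below is a stand-in for the locally constructed subgraph, with the restart mass concentrated on the query's seed table:

```python
import networkx as nx

# Toy table graph: nodes are tables, edges link tables that share
# columns or entities (a stand-in for the constructed local subgraph).
G = nx.Graph()
G.add_edges_from([
    ("t_countries", "t_capitals"),
    ("t_capitals", "t_populations"),
    ("t_countries", "t_gdp"),
    ("t_sports", "t_teams"),
])

# Personalization: restart mass on the seed table matched in the coarse
# stage; the resulting scores rank its graph neighborhood for retrieval.
personalization = {n: 0.0 for n in G}
personalization["t_capitals"] = 1.0

scores = nx.pagerank(G, alpha=0.85, personalization=personalization)
for table, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{table}: {score:.3f}")
```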
-
- *Note: Our method supports different embedding methods, such as E5, Contriever, and Sentence-Transformers.*
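Swapping embedding backbones is mostly a matter of changing the encoder. A minimal sketch with the `sentence-transformers` library; the checkpoint below is a common public E5 model, not necessarily the repo's default:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# E5 models expect "query: " / "passage: " prefixes on their inputs.
model = SentenceTransformer("intfloat/e5-base-v2")

table_texts = ["passage: team | wins : Lakers | 45",
               "passage: country | capital : France | Paris"]
query = "query: which country has capital Paris?"

table_emb = model.encode(table_texts, normalize_embeddings=True)
query_emb = model.encode(query, normalize_embeddings=True)
print(cos_sim(query_emb, table_emb))  # similarity of the query to each table
```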
-
- ### 4. Downstream Inference with LLMs
-
- Evaluate T-RAG with an open- or closed-source LLM of your choice (e.g., GPT-4o, Claude-3.5, Qwen):
-
- For closed-source LLMs, first insert your API keys into `key.json`:
- ```json
- {
-   "openai": "<YOUR_OPENAI_API_KEY>",
-   "claude": "<YOUR_CLAUDE_API_KEY>"
- }
- ```
-
- To run end-to-end model inference and evaluation:
-
- ```bash
- cd src
- cd downstream_inference
- bash scripts/overall_run.sh
- ```
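Under the hood, the closed-source path amounts to reading `key.json` and calling the provider's API. A minimal sketch with the official `openai` client; the prompt text here is illustrative, not the repo's actual template:

```python
import json
from openai import OpenAI  # official openai>=1.0 client

with open("key.json") as f:
    keys = json.load(f)

client = OpenAI(api_key=keys["openai"])

prompt = "Given the retrieved tables below, answer the question.\n..."
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```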
-
 ---
 # Citation
 
@@ -132,7 +34,7 @@ If you find our work useful, please cite:
 
 ```bibtex
 @misc{zou2025rag,
- title={{RAG over Tables: Hierarchical Memory Index, Multi-Stage Retrieval, and Benchmarking}},
+ title={RAG over Tables: Hierarchical Memory Index, Multi-Stage Retrieval, and Benchmarking},
 author={Jiaru Zou and Dongqi Fu and Sirui Chen and Xinrui He and Zihao Li and Yada Zhu and Jiawei Han and Jingrui He},
 year={2025},
 eprint={2504.01346},