jiaruz2, nielsr (HF Staff) committed on
Commit 29b4461 · verified · 1 Parent(s): 5467483

Enhance MultiTableQA dataset card with introduction, comprehensive sample usage, and task category (#1)

- Enhance MultiTableQA dataset card with introduction, comprehensive sample usage, and task category (0fcee3a4784494d8efafd6121e9163375ceec615)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1): README.md (+110 -6)

README.md CHANGED
@@ -1,14 +1,31 @@
 ---
 configs:
 - config_name: table
-  data_files: "wtq_table.jsonl"
 - config_name: test_query
-  data_files: "wtq_query.jsonl"
-
-license: mit
 ---
 
 πŸ“„ [Paper](https://arxiv.org/abs/2504.01346) | πŸ‘¨πŸ»β€πŸ’» [Code](https://github.com/jiaruzouu/T-RAG)
 
 For MultiTableQA, we release a comprehensive benchmark, including five different datasets covering table fact-checking, single-hop QA, and multi-hop QA:
 | Dataset | Link |
 |-----------------------|------|
@@ -21,14 +38,101 @@ For MultiTableQA, we release a comprehensive benchmark, including five different
 
 MultiTableQA extends the traditional single-table QA setting into a multi-table retrieval and question answering benchmark, enabling more realistic and challenging evaluations.
 
 ---
 # Citation
 
 If you find our work useful, please cite:
 
 ```bibtex
-@misc{zou2025gtrgraphtableragcrosstablequestion,
-      title={GTR: Graph-Table-RAG for Cross-Table Question Answering},
       author={Jiaru Zou and Dongqi Fu and Sirui Chen and Xinrui He and Zihao Li and Yada Zhu and Jiawei Han and Jingrui He},
       year={2025},
       eprint={2504.01346},
 
 ---
+license: mit
 configs:
 - config_name: table
+  data_files: wtq_table.jsonl
 - config_name: test_query
+  data_files: wtq_query.jsonl
+task_categories:
+- table-question-answering
 ---
+
+# RAG over Tables: Hierarchical Memory Index, Multi-Stage Retrieval, and Benchmarking
+
 πŸ“„ [Paper](https://arxiv.org/abs/2504.01346) | πŸ‘¨πŸ»β€πŸ’» [Code](https://github.com/jiaruzouu/T-RAG)
 
+## Introduction
+
+Retrieval-Augmented Generation (RAG) has become a key paradigm for enhancing Large Language Models (LLMs) with external knowledge. While most RAG systems focus on **text corpora**, real-world information is often stored in **tables** across web pages, Wikipedia, and relational databases. Existing methods struggle to retrieve and reason across **multiple heterogeneous tables**.
+
+This repository provides the implementation of **T-RAG**, a novel table-corpora-aware RAG framework featuring:
+
+- **Hierarchical Memory Index** – organizes heterogeneous table knowledge at multiple granularities.
+- **Multi-Stage Retrieval** – coarse-to-fine retrieval combining clustering, subgraph reasoning, and PageRank.
+- **Graph-Aware Prompting** – injects relational priors into LLMs for structured tabular reasoning.
+- **MultiTableQA Benchmark** – a large-scale dataset with **57,193 tables** and **23,758 questions** across various tabular tasks.
+
+## MultiTableQA Benchmark Details
+
 For MultiTableQA, we release a comprehensive benchmark, including five different datasets covering table fact-checking, single-hop QA, and multi-hop QA:
 | Dataset | Link |
 |-----------------------|------|
 
 
 MultiTableQA extends the traditional single-table QA setting into a multi-table retrieval and question answering benchmark, enabling more realistic and challenging evaluations.
 
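Both configs above point at JSON Lines files (`wtq_table.jsonl`, `wtq_query.jsonl`): one JSON object per line. For illustration, here is a minimal sketch of parsing that format; the field names are invented placeholders, not the dataset's actual schema:

```python
import io
import json

# Invented placeholder records -- the real schema of wtq_table.jsonl /
# wtq_query.jsonl is defined by the dataset files themselves.
raw = (
    '{"id": "t1", "text": "first record"}\n'
    '{"id": "t2", "text": "second record"}\n'
)

# JSON Lines: each non-empty line is an independent JSON object.
records = [json.loads(line) for line in io.StringIO(raw) if line.strip()]
print(len(records), records[0]["id"])  # 2 t1
```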
+---
+
+## Sample Usage
+
+The following sections describe how to set up the environment, prepare the MultiTableQA data, run T-RAG retrieval, and perform downstream inference with LLMs. For more details, please refer to the [official GitHub repository](https://github.com/jiaruzouu/T-RAG).
+
+### 1. Installation
+
+To get started with the T-RAG framework, first clone the repository and install the necessary dependencies:
+
+```bash
+git clone https://github.com/jiaruzouu/T-RAG.git
+cd T-RAG
+
+conda create -n trag python=3.11.9
+conda activate trag
+
+# Install dependencies
+pip install -r requirements.txt
+```
+
+### 2. MultiTableQA Data Preparation
+
+To download and preprocess the **MultiTableQA** benchmark:
+
+```bash
+cd table2graph
+bash scripts/prepare_data.sh
+```
+
+This script automatically fetches the source tables, applies decomposition (row/column splitting), and generates the benchmark splits.
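The decomposition step mentioned above splits each source table by rows and columns into smaller sub-tables. As a rough illustration of that idea only (this is not the actual `prepare_data.sh` logic, and the helper names are invented):

```python
# Hypothetical sketch of row/column decomposition: carve one table
# (a header plus rows) into smaller sub-tables.
def split_rows(header, rows, chunk):
    """Yield sub-tables containing at most `chunk` rows each."""
    return [(header, rows[i:i + chunk]) for i in range(0, len(rows), chunk)]

def split_columns(header, rows, keep):
    """Project the table onto the columns named in `keep`."""
    idx = [header.index(c) for c in keep]
    return [header[i] for i in idx], [[r[i] for i in idx] for r in rows]

header = ["year", "team", "wins"]
rows = [["2019", "A", "10"], ["2020", "B", "12"], ["2021", "C", "9"]]

print(len(split_rows(header, rows, 2)))                  # 2 sub-tables
print(split_columns(header, rows, ["year", "wins"])[0])  # ['year', 'wins']
```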
+
+### 3. Run T-RAG Retrieval
+
+To run hierarchical index construction and multi-stage retrieval:
+
+**Stages 1 & 2: Table-to-Graph Construction & Coarse-grained Multi-way Retrieval**
+
+Stages 1 & 2 include:
+- Table Linearization
+- Multi-way Feature Extraction
+- Hypergraph Construction by Multi-way Clustering
+- Typical Node Selection for Efficient Table Retrieval
+- Query-Cluster Assignment
+
+To run them:
+
+```bash
+cd src/table2graph
+bash scripts/table_cluster_run.sh  # or: python scripts/table_cluster_run.py
+```
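Coarse-grained retrieval assigns each query to the nearest table cluster before any fine-grained search. A generic nearest-centroid sketch of that idea, on toy 2-D embeddings (the vectors and dimensionality are illustrative assumptions, not the repository's features):

```python
import math

# Toy cluster centroids in embedding space (illustrative values only).
centroids = {"sports": [1.0, 0.0], "finance": [0.0, 1.0]}

def assign_cluster(query_vec, centroids):
    """Return the centroid whose Euclidean distance to the query is smallest."""
    return min(
        centroids,
        key=lambda c: math.dist(query_vec, centroids[c]),
    )

# A query embedding closer to the "sports" centroid.
cluster = assign_cluster([0.9, 0.2], centroids)
print(cluster)  # sports
```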
+
+**Stage 3: Fine-grained Subgraph Retrieval**
+
+Stage 3 includes:
+- Local Subgraph Construction
+- Iterative Personalized PageRank for Retrieval
+
+To run it:
+
+```bash
+cd src/table2graph
+python scripts/subgraph_retrieve_run.py
+```
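Personalized PageRank, the core of Stage 3's fine-grained retrieval, iteratively spreads relevance from query-anchored seed nodes through the subgraph, teleporting back to the seeds with some probability. A generic sketch on a toy graph follows (the damping factor and graph are assumptions for illustration, not the repository's implementation):

```python
def personalized_pagerank(adj, seeds, damping=0.85, iters=100, tol=1e-10):
    """Iterative personalized PageRank; teleport mass returns to `seeds` only."""
    nodes = list(adj)
    # Personalization vector: uniform over seed nodes, zero elsewhere.
    p = {n: 1 / len(seeds) if n in seeds else 0.0 for n in nodes}
    rank = dict(p)
    for _ in range(iters):
        new = {
            n: (1 - damping) * p[n]
            + damping * sum(rank[m] / len(adj[m]) for m in nodes if n in adj[m])
            for n in nodes
        }
        if max(abs(new[n] - rank[n]) for n in nodes) < tol:
            return new
        rank = new
    return rank

# Toy subgraph: a query node linked to two candidate table nodes.
adj = {"q": ["t1", "t2"], "t1": ["t2"], "t2": ["t1"]}
scores = personalized_pagerank(adj, seeds={"q"})
print(round(scores["q"], 2))  # 0.15
```

The seed keeps only the teleport mass (1 − damping) because nothing links back to it; the two symmetric table nodes split the rest evenly.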
+
+*Note: Our method supports different embedding models such as E5, Contriever, and Sentence-Transformers.*
+
+### 4. Downstream Inference with LLMs
+
+Evaluate T-RAG with an open- or closed-source LLM of your choice (e.g., GPT-4o, Claude-3.5, Qwen):
+
+For closed-source LLMs, first add your API keys to `key.json`:
+
+```json
+{
+  "openai": "<YOUR_OPENAI_API_KEY>",
+  "claude": "<YOUR_CLAUDE_API_KEY>"
+}
+```
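For illustration, a `key.json` of this shape can be written and read back with the standard `json` module; how the repository itself loads the file is not shown here, and the values below are dummies:

```python
import json
import tempfile
from pathlib import Path

# Round-trip a key.json of the documented shape, using dummy values only.
with tempfile.TemporaryDirectory() as workdir:
    key_path = Path(workdir) / "key.json"
    key_path.write_text(json.dumps({
        "openai": "<YOUR_OPENAI_API_KEY>",
        "claude": "<YOUR_CLAUDE_API_KEY>",
    }))
    keys = json.loads(key_path.read_text())
    print(sorted(keys))  # ['claude', 'openai']
```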
+
+To run end-to-end model inference and evaluation:
+
+```bash
+cd src/downstream_inference
+bash scripts/overall_run.sh
+```
+
 ---
 # Citation
 
 If you find our work useful, please cite:
 
 ```bibtex
+@misc{zou2025rag,
+      title={{RAG over Tables: Hierarchical Memory Index, Multi-Stage Retrieval, and Benchmarking}},
       author={Jiaru Zou and Dongqi Fu and Sirui Chen and Xinrui He and Zihao Li and Yada Zhu and Jiawei Han and Jingrui He},
       year={2025},
       eprint={2504.01346},