---
license: apache-2.0
task_categories:
- table-question-answering
language:
- en
---

# CataTQA: A Benchmark for Tool-Augmented LLM Question Answering over Heterogeneous Catalysis Tables

Despite their success in general question answering, large language models (LLMs) struggle with hallucinations and inaccurate reasoning in scientific domains.
A major challenge stems from experimental data, which are often stored in external sources such as supplementary materials and domain-specific databases. These tables are large, heterogeneous, and semantically complex, making them difficult for LLMs to interpret.
While external tools show promise, current benchmarks fail to assess LLMs' ability to navigate such data: locating relevant tables, retrieving key columns, interpreting experimental conditions, and invoking tools.
To address this gap, we introduce CataTQA, a new benchmark for catalytic materials. CataTQA features an automated dataset-construction framework and four auxiliary tools. We evaluate tool-augmented LLMs across five dimensions: table location, column retrieval, condition analysis, tool calling, and question answering, identifying their strengths and weaknesses.
Our work establishes a new standard for evaluating LLMs in scientific fields and paves the way for future advances. All data and code are publicly available on GitHub.

## Dataset Field Description

- **question**: The natural-language question posed over a table.
- **refer_dataset**: The reference table from which the question and answer were generated.
- **column names**: The column names used to generate the question.
- **condition_column**: The column names whose values must be specified in the question.
- **answer_column**: The column name containing the answer.
- **condition**: The condition values contained in the question.
- **answer**: The answer to the question.
- **tool**: The tool used to answer the question.
- **level**: The difficulty level of the question.
- **question description**: A description of the question type.
- **refer_template**: The question template, with placeholders for condition values.

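Each record carries the fields listed above as a JSON object. As a minimal sketch of consuming the data (the helper name `load_record` and the one-record-per-line storage format are assumptions for illustration, not the benchmark's official API), a record can be parsed and sanity-checked like this:

```python
import json

# Field names documented in the list above.
REQUIRED_FIELDS = {
    "question", "refer_dataset", "column names", "condition_column",
    "answer_column", "condition", "answer", "tool", "level",
    "question description", "refer_template",
}

def load_record(line: str) -> dict:
    """Parse one CataTQA record and verify all documented fields are present."""
    record = json.loads(line)
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"record is missing fields: {sorted(missing)}")
    return record
```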
## Example

```json
{
  "question": "Identify the material ID linked to a total energy per atom of -4.093124536666667.",
  "refer_dataset": "table67",
  "column names": ["energy_per_atom", "material_id"],
  "condition_column": ["energy_per_atom"],
  "answer_column": ["material_id"],
  "condition": {"energy_per_atom": "-4.093124536666667"},
  "tool": "search_value",
  "answer": {"material_id": "2dm-6"},
  "level": "simple",
  "question description": "In a tabular data structure, locate the cells that meet the requirements.",
  "refer_template": "Identify the material ID linked to a total energy per atom of {}."
}
```
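To make the record above concrete, here is a minimal sketch of how a lookup tool in the spirit of `search_value` could resolve it against a table using pandas. The function signature and the toy table are illustrative assumptions, not the benchmark's actual tool implementation:

```python
import pandas as pd

def search_value(table: pd.DataFrame, condition: dict, answer_column: str) -> list:
    """Return the values of `answer_column` for rows matching every
    condition column/value pair (an illustrative sketch, not the real tool)."""
    mask = pd.Series(True, index=table.index)
    for column, raw in condition.items():
        # Condition values are stored as strings in the records;
        # fall back to the raw string when they are not numeric.
        try:
            value = float(raw)
        except ValueError:
            value = raw
        mask &= table[column] == value
    return table.loc[mask, answer_column].tolist()

# A toy stand-in for "table67" from the example record.
table67 = pd.DataFrame({
    "material_id": ["2dm-5", "2dm-6"],
    "energy_per_atom": [-3.9, -4.093124536666667],
})

search_value(table67, {"energy_per_atom": "-4.093124536666667"}, "material_id")
# → ["2dm-6"]
```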