---
license: apache-2.0
---

# FinAR-Bench Dataset

[GitHub Repository](https://github.com/sw4tanonymous/faia-bench)

This repository contains the FinAR-Bench dataset, which is designed to assess the capabilities of Large Language Models (LLMs) in performing financial fundamental analysis. The dataset focuses on three key tasks:

1. Information Extraction
2. Indicator Computation
3. Logical Reasoning
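
The exact indicators are defined by the individual task descriptions; as an illustration of what an indicator computation task involves, a standard ratio such as the current ratio can be derived from balance sheet line items. The function name and figures below are invented for demonstration and are not part of the benchmark:

```python
# Illustrative only: the benchmark's actual indicators are specified in each
# task description. This shows the kind of computation involved.
def current_ratio(current_assets: float, current_liabilities: float) -> float:
    """Current ratio = current assets / current liabilities."""
    return current_assets / current_liabilities

print(round(current_ratio(250.0, 125.0), 2))  # → 2.0
```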

## Dataset Components

### 1. Company Tables and PDFs (`pdf_data`)

This directory contains financial statements extracted from the 2023 annual reports of 100 companies listed on the Shanghai Stock Exchange (SSE). Each file is named after the company's stock code and contains the financial statement section of the annual report in PDF format.

### 2. Extracted Text (`pdf_extractor_result/txt_output`)

This directory contains text extracted from the PDF documents using six different PDF extraction tools:

- PyMuPDF
- PyPDF
- pdftotext
- PDFMiner
- MinerU
- pdfplumber

Each file is named after the company's stock code and contains the processed text output from these extraction tools.

### 3. Development Set (`dev.txt`)

Contains evaluation tasks for 10 companies, where each company's data includes:

- 6 fact extraction tasks
- 6 financial indicator computation tasks
- 1 logical reasoning task

### 4. Test Set (`test.txt`)

Contains evaluation tasks for 90 companies, following the same structure as the development set:

- 6 fact extraction tasks per company
- 6 financial indicator computation tasks per company
- 1 logical reasoning task per company

## Data Structure

The dataset is organized into several key files and directories:

1. `dev.txt` and `test.txt`: Contain the evaluation data in JSON format, with each entry including:
   - `table`: Financial statement data in markdown table format (derived from XBRL data from the Shanghai Stock Exchange), including:
     - Income Statement
     - Balance Sheet
     - Cash Flow Statement
   - `instances`: A list of tasks, where each task contains:
     - `task_id`: Unique identifier for the task
     - `task`: The specific task description
     - `ground_truth`: The expected answer in markdown table format
     - `task_type`: Type of task (`fact`, `indicator`, or `reasoning`)
     - `task_num`: Number of items to extract or calculate
     - `company`: Company name
     - `company_code`: Stock code
     - `conditions`: (reasoning tasks only) List of conditions to evaluate
2. `pdf_extractor_result/txt_output/`: Directory containing the raw text extracted from the PDFs using the six extraction tools listed above
3. `pdf_data/`: Directory containing the original PDF files of the financial statements

Each company's evaluation set contains 13 tasks (6 fact extraction + 6 indicator computation + 1 reasoning task), and the data is provided in three formats:

1. XBRL-derived markdown tables (in `dev.txt`/`test.txt`)
2. Extracted text files from PDFs
3. Original PDF files
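
Because both `table` and `ground_truth` are markdown tables, working with the data usually means parsing them back into rows. A minimal sketch, assuming simple pipe-delimited tables with a `---` separator row (`parse_markdown_table` is a hypothetical helper, not part of the dataset tooling):

```python
def parse_markdown_table(md: str) -> list[list[str]]:
    """Parse a simple pipe-delimited markdown table into rows of cells."""
    rows = []
    for line in md.strip().splitlines():
        line = line.strip()
        if not line.startswith("|"):
            continue
        cells = [c.strip() for c in line.strip("|").split("|")]
        # Skip the header separator row, e.g. | --- | --- |
        if all(set(c) <= set("-: ") for c in cells):
            continue
        rows.append(cells)
    return rows

table = "| Item | 2023 |\n| --- | --- |\n| Revenue | 100 |"
print(parse_markdown_table(table))  # → [['Item', '2023'], ['Revenue', '100']]
```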

## Usage

These datasets are hosted on Hugging Face and can be accessed using the Hugging Face `datasets` library.
For best results, we recommend using them together with the code and evaluation scripts provided in our [GitHub Repository](https://github.com/sw4tanonymous/faia-bench).

Example:

```python
from datasets import load_dataset

dataset = load_dataset("sw4tanonymous/FinAR-Bench")

# See https://github.com/sw4tanonymous/FinAR-Bench for code examples and evaluation scripts.
```