Commit 8ba65c7 (verified) by h-aida · Parent: bfb57e4

Upload folder using huggingface_hub
README.md ADDED
---
task_categories:
- visual-question-answering
- table-question-answering
language:
- ja
tags:
- table-qa
- visual-qa
- japanese
- ntcir
size_categories:
- 10K<n<100K
---

# TableCellQA Dataset

This dataset is for Table Question Answering (Table QA), derived from tables in Japanese annual securities reports used in the NTCIR-18 U4 shared task.

This dataset was proposed in our paper: [Enhancing Large Vision-Language Models with Layout Modality for Table Question Answering on Japanese Annual Securities Reports](https://arxiv.org/abs/2505.17625).

## Key Differences from the Original Dataset

1. **Multimodal Support**: This dataset supports multimodal inputs (image, layout, text) for comprehensive table understanding.
2. **Direct Cell Value Extraction**: Unlike the original task, this dataset focuses on direct extraction of cell values, removing the need for arithmetic operations or other transformations.

## Dataset Description

- **Language**: Japanese
- **Task**: Table Question Answering
- **Format**: Images with OCR text and question-answer pairs
- **Source**: NTCIR-18 U4 Task

## Dataset Structure

Each example contains:
- `id`: Unique identifier
- `sample_id`: Original sample ID
- `image`: Table image (PNG format)
- `text_w_bbox`: Raw OCR data with bounding box information (JSON string)
- `question`: Question about the table
- `answer`: Answer to the question
- `question_type`: Type of question (`table_qa`)
- `dataset`: Dataset name (`ntcir18-u4`)

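The field list above can be illustrated with a minimal record. This is a sketch only: the field names follow the schema documented here, but every value below is invented for demonstration and does not come from the dataset.

```python
import json

# Hypothetical record; keys match the documented schema, values are made up.
sample = {
    "id": "0001",
    "sample_id": "sample-0001",
    "image": None,  # in the real dataset this is a PIL image (PNG)
    "text_w_bbox": json.dumps([{"box": [10, 12, 120, 30], "text": "売上高"}]),
    "question": "表の1行目の項目名は何ですか?",
    "answer": "売上高",
    "question_type": "table_qa",
    "dataset": "ntcir18-u4",
}

# text_w_bbox is stored as a JSON string, so it must be parsed before use.
ocr = json.loads(sample["text_w_bbox"])
print(ocr[0]["text"], ocr[0]["box"])
```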
## Usage

```python
from datasets import load_dataset
import json

dataset = load_dataset("stockmark/u4-table-cell-qa")

# Access OCR data with bounding boxes
sample = dataset["train"][0]
ocr_data = json.loads(sample["text_w_bbox"])

# Each OCR element contains:
# - "box": [x1, y1, x2, y2] - bounding box coordinates
# - "text": extracted text
# - "label": classification label (if available)
# - "words": word-level information (if available)

for ocr_item in ocr_data:
    print(f"Text: {ocr_item['text']}")
    print(f"Box: {ocr_item['box']}")
```
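Since each box is `[x1, y1, x2, y2]`, simple geometric filters can group OCR elements by table region. A minimal sketch, assuming the documented payload format; the OCR content below is invented for illustration:

```python
import json

# Hypothetical OCR payload in the documented format.
ocr_json = json.dumps([
    {"box": [10, 10, 60, 30], "text": "売上高"},
    {"box": [70, 10, 140, 30], "text": "1,234"},
    {"box": [10, 40, 60, 60], "text": "利益"},
])

def cells_in_row(ocr_data, y_top, y_bottom):
    """Return texts whose boxes lie entirely inside a horizontal band."""
    return [el["text"] for el in ocr_data
            if y_top <= el["box"][1] and el["box"][3] <= y_bottom]

ocr = json.loads(ocr_json)
print(cells_in_row(ocr, 0, 35))   # cells in the first table row
```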

## Citation

### Original Dataset

This dataset is based on the NTCIR-18 U4 task. We thank the original authors for making their data available.

**Data Source:**
- 本データは金融庁 EDINET で公開されている有価証券報告書を基に編集・加工したものです。
- This data is based on securities reports published on EDINET (Financial Services Agency of Japan), which have been edited and processed.

**Attribution:**
- 出典:EDINET(金融庁)/ Source: EDINET (Financial Services Agency of Japan)
- 編集・加工:ストックマーク株式会社(NTCIR-18 U4 タスク関連データ)/ Edited and processed by: Stockmark Inc. (NTCIR-18 U4 Task related data)

**References:**
- Task Overview: https://sites.google.com/view/ntcir18-u4/
- Data and Code (GitHub): https://github.com/nlp-for-japanese-securities-reports/ntcir18-u4

```bibtex
@inproceedings{EMTCIR2024,
  title     = {Understanding Tables in Financial Documents: Shared Tasks for Table Retrieval and Table QA on Japanese Annual Security Reports},
  author    = {Yasutomo Kimura and Eisaku Sato and Kazuma Kadowaki and Hokuto Ototake},
  booktitle = {Proceedings of the SIGIR-AP 2024 Workshops (EMTCIR 2024)},
  month     = {12},
  year      = {2024},
  url       = {https://ceur-ws.org/Vol-3854/}
}
```

### Our Paper

If you use this dataset, please cite our paper:

```bibtex
@misc{aida2025enhancinglargevisionlanguagemodels,
  title         = {Enhancing Large Vision-Language Models with Layout Modality for Table Question Answering on Japanese Annual Securities Reports},
  author        = {Hayato Aida and Kosuke Takahashi and Takahiro Omi},
  year          = {2025},
  eprint        = {2505.17625},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL},
  url           = {https://arxiv.org/abs/2505.17625}
}
```

### This Dataset

If you use this processed dataset, please also cite:

```bibtex
@dataset{table_cell_qa_2025,
  title     = {TableCellQA Dataset},
  author    = {Hayato Aida},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/stockmark/u4-table-cell-qa}
}
```
dataset_dict.json ADDED
{"splits": ["train", "test", "valid"]}
test/data-00000-of-00003.arrow ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:7b4b904e4b750fae65a158c0ed2400b966ea67cd2047ef85a423556d7a033cb9
size 464255824
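Each `.arrow` entry in this commit is a Git LFS pointer file, not the data itself: three lines recording the spec version, the content hash, and the real file size. A minimal sketch of parsing one, using the pointer shown above:

```python
# Parse a Git LFS pointer file into its version, hash, and byte size.
pointer_text = """version https://git-lfs.github.com/spec/v1
oid sha256:7b4b904e4b750fae65a158c0ed2400b966ea67cd2047ef85a423556d7a033cb9
size 464255824
"""

def parse_lfs_pointer(text: str) -> dict:
    # Each line is "key value"; split once so URLs and digests stay intact.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "algo": algo,
            "digest": digest, "size": int(fields["size"])}

info = parse_lfs_pointer(pointer_text)
print(info["algo"], info["size"])
```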
test/data-00001-of-00003.arrow ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:3bc1b8f6000b696c1e3fe41f117cfc8574f68e7c49d6efa14c2eed6d88d67bc9
size 473026504
test/data-00002-of-00003.arrow ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:5060e5087828708b6d7b50a85b2fb20417b221cd959b4a423880d67e7c1956b5
size 488199696
test/dataset_info.json ADDED
{
  "citation": "",
  "description": "",
  "features": {
    "id": { "dtype": "string", "_type": "Value" },
    "sample_id": { "dtype": "string", "_type": "Value" },
    "image": { "_type": "Image" },
    "text_w_bbox": { "dtype": "string", "_type": "Value" },
    "question": { "dtype": "string", "_type": "Value" },
    "answer": { "dtype": "string", "_type": "Value" },
    "question_type": { "dtype": "string", "_type": "Value" },
    "dataset": { "dtype": "string", "_type": "Value" }
  },
  "homepage": "",
  "license": ""
}
test/state.json ADDED
{
  "_data_files": [
    { "filename": "data-00000-of-00003.arrow" },
    { "filename": "data-00001-of-00003.arrow" },
    { "filename": "data-00002-of-00003.arrow" }
  ],
  "_fingerprint": "de9da57f366646d2",
  "_format_columns": null,
  "_format_kwargs": {},
  "_format_type": null,
  "_output_all_columns": false,
  "_split": null
}
train/data-00000-of-00010.arrow ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:2378e9f5efd3a2cd830049ef23faa927eaae72bc114ff6a98e7099ef39272bb9
size 490256592
train/data-00001-of-00010.arrow ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:d3c858cd91bd55eaf6f66b864cc305d5c112dc4817aa035dc28062b039760e3a
size 491214504
train/data-00002-of-00010.arrow ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:abdc4904fa7f5b06df50cd5302fe92d10142115a01c7b0b2f35a0831502bf4ad
size 493986568
train/data-00003-of-00010.arrow ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:8404f19d58bf36b2974f513741c4b655cfd6438d711b354f80df32bace8d7545
size 493450104
train/data-00004-of-00010.arrow ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:eb3e74f33cac26cba992b4679453f81cd1ad7d7f076460c828017f472c7fa8ab
size 500856104
train/data-00005-of-00010.arrow ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:5c46a23b6f9469ba76ce2bf570b8bc5a4434143c6c7ab5bbca16a96138c9c044
size 507394240
train/data-00006-of-00010.arrow ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:e12ab0f0aa79e481f4f7fe6f80b1d3103ea3fa3fbecf180d09bb186bc08dd8cc
size 495779896
train/data-00007-of-00010.arrow ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:8211e75d94ffe0743c6a718c54bae8a843000231575720383bd4439b6e8bc920
size 498094712
train/data-00008-of-00010.arrow ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:8320ce75c9de85eb880b6c33da133397e7d59bec4995d6c83dc021c09011278f
size 502534616
train/data-00009-of-00010.arrow ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:53f1e39851bc8b65bf2ed2514c243601afe4ed04be6136709adc27af89ea362e
size 501251688
train/dataset_info.json ADDED
{
  "citation": "",
  "description": "",
  "features": {
    "id": { "dtype": "string", "_type": "Value" },
    "sample_id": { "dtype": "string", "_type": "Value" },
    "image": { "_type": "Image" },
    "text_w_bbox": { "dtype": "string", "_type": "Value" },
    "question": { "dtype": "string", "_type": "Value" },
    "answer": { "dtype": "string", "_type": "Value" },
    "question_type": { "dtype": "string", "_type": "Value" },
    "dataset": { "dtype": "string", "_type": "Value" }
  },
  "homepage": "",
  "license": ""
}
train/state.json ADDED
{
  "_data_files": [
    { "filename": "data-00000-of-00010.arrow" },
    { "filename": "data-00001-of-00010.arrow" },
    { "filename": "data-00002-of-00010.arrow" },
    { "filename": "data-00003-of-00010.arrow" },
    { "filename": "data-00004-of-00010.arrow" },
    { "filename": "data-00005-of-00010.arrow" },
    { "filename": "data-00006-of-00010.arrow" },
    { "filename": "data-00007-of-00010.arrow" },
    { "filename": "data-00008-of-00010.arrow" },
    { "filename": "data-00009-of-00010.arrow" }
  ],
  "_fingerprint": "3662d7d608197500",
  "_format_columns": null,
  "_format_kwargs": {},
  "_format_type": null,
  "_output_all_columns": false,
  "_split": null
}
valid/data-00000-of-00002.arrow ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:0d2c63c37a656522ef5f80250e6aec595023fa9eddcf52f2380d44f66ffa741a
size 352545512
valid/data-00001-of-00002.arrow ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:ef4c88c12bbaaabc0b4bc6eba0e4b2af5adc2ebd94b5f4b92f28ae97ee67d3f0
size 357696800
valid/dataset_info.json ADDED
{
  "citation": "",
  "description": "",
  "features": {
    "id": { "dtype": "string", "_type": "Value" },
    "sample_id": { "dtype": "string", "_type": "Value" },
    "image": { "_type": "Image" },
    "text_w_bbox": { "dtype": "string", "_type": "Value" },
    "question": { "dtype": "string", "_type": "Value" },
    "answer": { "dtype": "string", "_type": "Value" },
    "question_type": { "dtype": "string", "_type": "Value" },
    "dataset": { "dtype": "string", "_type": "Value" }
  },
  "homepage": "",
  "license": ""
}
valid/state.json ADDED
{
  "_data_files": [
    { "filename": "data-00000-of-00002.arrow" },
    { "filename": "data-00001-of-00002.arrow" }
  ],
  "_fingerprint": "2c349f627254e135",
  "_format_columns": null,
  "_format_kwargs": {},
  "_format_type": null,
  "_output_all_columns": false,
  "_split": null
}