---
task_categories:
- visual-question-answering
- table-question-answering
language:
- ja
license: cc-by-4.0
tags:
- table-qa
- visual-qa
- japanese
- ntcir
size_categories:
- 10K<n<100K
---
# TableCellQA Dataset
This is a Table Question Answering (Table QA) dataset derived from tables in Japanese annual securities reports, as used in the NTCIR-18 U4 shared task.
This dataset was proposed in our paper: [Enhancing Large Vision-Language Models with Layout Modality for Table Question Answering on Japanese Annual Securities Reports](https://arxiv.org/abs/2505.17625).
## Key Differences from the Original Dataset
1. **Multimodal Support**: This dataset supports multimodal inputs (image, layout, text) for comprehensive table understanding
2. **Direct Cell Value Extraction**: Unlike the original task, this dataset focuses on direct extraction of cell values, removing the need for arithmetic operations or other transformations
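Because every answer is a literal cell value, a simple normalized string comparison is enough to score predictions. Below is a minimal sketch; the normalization choices (NFKC folding, whitespace stripping) are our assumptions, not part of an official evaluation script:
```python
import unicodedata

def normalize(text: str) -> str:
    # NFKC normalization folds full-width digits and Latin characters to
    # half-width, a common preprocessing step for Japanese text (assumption).
    return unicodedata.normalize("NFKC", text).strip()

def exact_match(prediction: str, answer: str) -> bool:
    # With direct cell-value extraction, a prediction is correct
    # iff it matches the gold cell value after normalization.
    return normalize(prediction) == normalize(answer)

print(exact_match("1,234 ", "1,234"))  # True
```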
## Dataset Description
- **Language**: Japanese
- **Task**: Table Question Answering
- **Format**: Images with OCR text and question-answer pairs
- **Source**: NTCIR-18 U4 Task
## Dataset Structure
Each example contains:
- `id`: Unique identifier
- `sample_id`: Original sample ID
- `image`: Table image (PNG format)
- `text_w_bbox`: Raw OCR data with bounding-box information, stored as a JSON string
- `question`: Question about the table
- `answer`: Answer to the question
- `question_type`: Type of question (table_qa)
- `dataset`: Dataset name (ntcir18-u4)
## Usage
```python
from datasets import load_dataset
import json
dataset = load_dataset("stockmark/u4-table-cell-qa")
# Access OCR data with bounding boxes
sample = dataset["train"][0]
ocr_data = json.loads(sample["text_w_bbox"])
# Each OCR element contains:
# - "box": [x1, y1, x2, y2] - bounding box coordinates
# - "text": extracted text
# - "label": classification label (if available)
# - "words": word-level information (if available)
for ocr_item in ocr_data:
    print(f"Text: {ocr_item['text']}")
    print(f"Box: {ocr_item['box']}")
```
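Since each OCR element's `box` is given as `[x1, y1, x2, y2]` coordinates on the table image, you can overlay the boxes as a quick sanity check. A minimal sketch using Pillow, assuming the `image` column decodes to a PIL image and that the coordinates are in pixels (the output filename is arbitrary):
```python
from datasets import load_dataset
from PIL import ImageDraw
import json

dataset = load_dataset("stockmark/u4-table-cell-qa")
sample = dataset["train"][0]

# The `image` column decodes to a PIL image; convert() returns a copy,
# so the original sample is left untouched.
image = sample["image"].convert("RGB")
draw = ImageDraw.Draw(image)

# Draw each OCR bounding box ([x1, y1, x2, y2]) in red.
for ocr_item in json.loads(sample["text_w_bbox"]):
    draw.rectangle(ocr_item["box"], outline="red", width=2)

image.save("table_with_boxes.png")
```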
## License
This dataset is licensed under the [Creative Commons Attribution 4.0 International License (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).
## Citation
### Original Dataset
This dataset is based on the NTCIR-18 U4 task. We thank the original authors for making their data available.
**Data Source:**
- 本データは金融庁 EDINET で公開されている有価証券報告書を基に編集・加工したものです。
- This data was created by editing and processing annual securities reports published on EDINET (the Financial Services Agency of Japan).
**Attribution:**
本データセットを利用する際は、本データセットの作者、および元のデータソースの両方に対するクレジット(帰属表示)をお願いします。
When using this dataset, please provide attribution to both the creator of this dataset and the original data source.
- 出典:EDINET(金融庁)/ Source: EDINET (Financial Services Agency of Japan)
- 編集・加工:ストックマーク株式会社(NTCIR-18 U4 タスク関連データ)/ Edited and processed by: Stockmark Inc. (NTCIR-18 U4 Task related data)
**References:**
- Task Overview: https://sites.google.com/view/ntcir18-u4/
- Data and Code (GitHub): https://github.com/nlp-for-japanese-securities-reports/ntcir18-u4
```bibtex
@inproceedings{EMTCIR2024,
  title     = {Understanding Tables in Financial Documents: Shared Tasks for Table Retrieval and Table QA on Japanese Annual Security Reports},
  author    = {Yasutomo Kimura and Eisaku Sato and Kazuma Kadowaki and Hokuto Ototake},
  booktitle = {Proceedings of the SIGIR-AP 2024 Workshops EMTCIR 2024},
  month     = {12},
  year      = {2024},
  url       = {https://ceur-ws.org/Vol-3854/}
}
```
### Our Paper
If you use this dataset, please cite our paper:
```bibtex
@misc{aida2025enhancinglargevisionlanguagemodels,
  title         = {Enhancing Large Vision-Language Models with Layout Modality for Table Question Answering on Japanese Annual Securities Reports},
  author        = {Hayato Aida and Kosuke Takahashi and Takahiro Omi},
  year          = {2025},
  eprint        = {2505.17625},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL},
  url           = {https://arxiv.org/abs/2505.17625}
}
```
### This Dataset
If you use this processed dataset, please also cite:
```bibtex
@dataset{table_cell_qa_2025,
  title     = {TableCellQA Dataset},
  author    = {Hayato Aida},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/stockmark/u4-table-cell-qa}
}
```