---
task_categories:
  - visual-question-answering
  - table-question-answering
language:
  - ja
license: cc-by-4.0
tags:
  - table-qa
  - visual-qa
  - japanese
  - ntcir
size_categories:
  - 10K<n<100K
---

# TableCellQA Dataset

This is a Table Question Answering (Table QA) dataset derived from the tables in Japanese annual securities reports used in the NTCIR-18 U4 shared task.

This dataset was proposed in our paper: *Enhancing Large Vision-Language Models with Layout Modality for Table Question Answering on Japanese Annual Securities Reports*.

## Key Differences from Original Dataset

1. **Multimodal Support**: This dataset supports multimodal inputs (image, layout, text) for comprehensive table understanding.
2. **Direct Cell Value Extraction**: Unlike the original task, this dataset focuses on direct extraction of cell values, removing the need for arithmetic operations or other transformations (see the evaluation sketch after this list).
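
Because answers are verbatim cell values, a simple exact-match comparison is a natural way to score predictions. The snippet below is an illustrative sketch, not an official metric; the NFKC normalization and whitespace stripping are our assumptions.

```python
import unicodedata


def exact_match(prediction: str, answer: str) -> bool:
    """Illustrative exact-match check. NFKC normalization (which folds
    full-width digits/punctuation to ASCII) and whitespace stripping
    are assumptions, not an official metric."""
    def norm(s: str) -> str:
        return unicodedata.normalize("NFKC", s).strip()
    return norm(prediction) == norm(answer)


print(exact_match("1,234", "1,234"))    # True
print(exact_match("1,234", "1,234"))  # True: full-width digits normalize to ASCII
```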

## Dataset Description

- **Language:** Japanese
- **Task:** Table Question Answering
- **Format:** Images with OCR text and question-answer pairs
- **Source:** NTCIR-18 U4 Task

## Dataset Structure

Each example contains the following fields (a schema-check sketch follows the list):

- `id`: Unique identifier
- `sample_id`: Original sample ID
- `image`: Table image (PNG format)
- `text_w_bbox`: Raw OCR data with bounding-box information (JSON string)
- `question`: Question about the table
- `answer`: Answer to the question
- `question_type`: Type of question (`table_qa`)
- `dataset`: Dataset name (`ntcir18-u4`)
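
A quick way to confirm this schema programmatically (a minimal sketch; it assumes the `train` split used in the Usage section below):

```python
from datasets import load_dataset

dataset = load_dataset("stockmark/u4-table-cell-qa")
print(dataset["train"].features)  # field names and types, as listed above
```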

## Usage

```python
from datasets import load_dataset
import json

dataset = load_dataset("stockmark/u4-table-cell-qa")

# Access OCR data with bounding boxes
sample = dataset["train"][0]
ocr_data = json.loads(sample["text_w_bbox"])

# Each OCR element contains:
# - "box": [x1, y1, x2, y2] - bounding box coordinates
# - "text": extracted text
# - "label": classification label (if available)
# - "words": word-level information (if available)

for ocr_item in ocr_data:
    print(f"Text: {ocr_item['text']}")
    print(f"Box: {ocr_item['box']}")
```

## License

This dataset is licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0).

## Citation

### Original Dataset

This dataset is based on the NTCIR-18 U4 task. We thank the original authors for making their data available.

**Data Source:**

- This dataset is based on annual securities reports published on EDINET (operated by the Financial Services Agency of Japan), which have been edited and processed.

**Attribution:** When using this dataset, please credit both the creator of this dataset and the original data source.

- Source: EDINET (Financial Services Agency of Japan)
- Edited and processed by: Stockmark Inc. (NTCIR-18 U4 Task related data)

**References:**

```bibtex
@article{EMTCIR2024,
  title   = {Understanding Tables in Financial Documents: Shared Tasks for Table Retrieval and Table QA on Japanese Annual Security Reports},
  author  = {Yasutomo Kimura and Eisaku Sato and Kazuma Kadowaki and Hokuto Ototake},
  journal = {Proceedings of the SIGIR-AP 2024 Workshops EMTCIR 2024},
  month   = {12},
  year    = {2024},
  url     = {https://ceur-ws.org/Vol-3854/}
}
```

### Our Paper

If you use this dataset, please cite our paper:

```bibtex
@article{aida2025enhancinglargevisionlanguagemodels,
  title         = {Enhancing Large Vision-Language Models with Layout Modality for Table Question Answering on Japanese Annual Securities Reports},
  author        = {Hayato Aida and Kosuke Takahashi and Takahiro Omi},
  year          = {2025},
  eprint        = {2505.17625},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL},
  url           = {https://arxiv.org/abs/2505.17625}
}
```

### This Dataset

If you use this processed dataset, please also cite:

```bibtex
@dataset{table_cell_qa_2025,
  title     = {TableCellQA Dataset},
  author    = {Hayato Aida},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/stockmark/u4-table-cell-qa}
}
```