---
license: mit
language:
- en
tags:
- table-structure-recognition
- table-detection
- ocr
- document-ai
- icdar
---
# ICDAR-2013-Logical: A Line-Level Logical Conversion of the ICDAR 2013 Table Dataset
## Dataset Description
This dataset is a converted and enhanced version of the **ICDAR 2013 Table Competition dataset**, specifically reformatted for modern **Table Structure Recognition (TSR)** and **OCR** tasks. 📜
The primary contribution of this version is the creation of a direct link between low-level OCR output and the table's logical structure. For each table, the dataset provides:
1. A high-resolution **cropped PNG image** of the table region (rendered at 144 DPI).
2. A detailed **JSON file** that maps each detected text line's physical bounding box to its logical grid coordinates (`[row_start, row_end, col_start, col_end]`); an example entry is shown below.
This format is ideal for training and evaluating Document AI models that perform OCR and table understanding concurrently.
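For illustration, a single entry in one of the JSON files might look like the following. The field names (`text`, `box`, `logical_coords`) are the ones used in the loading example below; the concrete values, and the exact bounding-box convention, are hypothetical and should be checked against the data itself:

```json
[
  {
    "text": "Total revenue",
    "box": [112, 340, 389, 362],
    "logical_coords": [4, 4, 0, 0]
  }
]
```

Here the text line occupies a single cell in row 4, column 0; a line belonging to a spanning cell would have `row_start != row_end` or `col_start != col_end`.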
---
## How to Use
You can load an example by pairing an image from the `cropped_images` directory with the JSON annotation of the same name in `logical_gt`.
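The snippet below assumes a layout along these lines, with each table identified by a shared file stem (the placeholder names are illustrative):

```
cropped_images/
    <table_id>.png
    ...
logical_gt/
    <table_id>.json
    ...
```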
```python
import json
from PIL import Image
from pathlib import Path
# Assume dataset is loaded or cloned locally
base_path = Path("./") # Path to the dataset directory
# Get a list of all examples
gt_files = list((base_path / "logical_gt").glob("*.json"))
example_file = gt_files[0]
# Load the annotation data
with open(example_file, 'r') as f:
    annotations = json.load(f)
# Load the corresponding image
image_path = base_path / "cropped_images" / (example_file.stem + ".png")
image = Image.open(image_path)
# Display the first annotation for the first line of text
first_line = annotations[0]
print(f"Text: {first_line['text']}")
print(f"Bounding Box: {first_line['box']}")
print(f"Logical Coordinates: {first_line['logical_coords']}")
# image.show()
```
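
Because the logical coordinates follow the `[row_start, row_end, col_start, col_end]` convention, they can be folded into a simple text grid. The sketch below is one possible way to do that, reusing the `annotations` list loaded above; `annotations_to_grid` is a hypothetical helper and assumes inclusive indices:

```python
# A minimal sketch: collapse line-level annotations into a row x column text grid.
# Assumes the `text` and `logical_coords` fields shown above and inclusive
# [row_start, row_end, col_start, col_end] indices; adjust if the data differs.
def annotations_to_grid(annotations):
    n_rows = max(a["logical_coords"][1] for a in annotations) + 1
    n_cols = max(a["logical_coords"][3] for a in annotations) + 1
    grid = [["" for _ in range(n_cols)] for _ in range(n_rows)]
    for ann in annotations:
        r0, r1, c0, c1 = ann["logical_coords"]
        # A spanning cell covers several grid positions; copy its text into each.
        for r in range(r0, r1 + 1):
            for c in range(c0, c1 + 1):
                grid[r][c] = (grid[r][c] + " " + ann["text"]).strip()
    return grid

grid = annotations_to_grid(annotations)
for row in grid:
    print(row)
```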