# Dhivehi Synthetic Document Layout + Textline Dataset
This dataset contains synthetically generated image-document pairs with detailed layout annotations and ground-truth Dhivehi text extractions.
It's designed for document layout analysis, visual document understanding, OCR fine-tuning, and related tasks specifically for Dhivehi script.

Note: the images in this version are compressed; a raw version is available in the repository on Hugging Face Datasets.
## Dataset Summary
- Total Examples: ~58,738
- Image Content: Synthetic Dhivehi documents generated to simulate real-world layouts, including headlines, textlines, pictures, and captions.
- Annotations:
  - Bounding boxes (`bbox`)
  - Object areas (`area`)
  - Object categories (`category`)
  - Ground-truth parsed text, split into:
    - `headline` (major headings)
    - `textline` (paragraph or text body lines)
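For orientation, a single example roughly has the following shape. This is only an illustrative sketch: the concrete values are invented, and `objects` is shown as a dict of parallel lists, which is how the code snippets further down index it.

```python
# Illustrative shape of one example (values are made up, not taken from the dataset).
example = {
    "image_id": 0,
    "image": "<PIL.Image.Image>",            # the rendered synthetic page (placeholder here)
    "width": 1240,
    "height": 1754,
    "objects": {                              # parallel lists, one entry per layout object
        "id": [0, 1],
        "area": [12800, 627200],
        "bbox": [[40.0, 60.0, 400.0, 32.0],   # [x, y, width, height] in pixels
                 [40.0, 120.0, 980.0, 640.0]],
        "category": [1, 0],                   # 1 = Heading, 0 = Textline
    },
    "ground_truth": {
        "gt_parse": {
            "headline": ["..."],              # major headings as strings
            "textline": ["..."],              # body text lines as strings
        }
    },
}
```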
## ⚠️ Important Note

This dataset is synthetic: no real-world documents or personal data were used. It was generated programmatically to train and evaluate models under controlled conditions, without the legal or ethical concerns tied to real-world data.
## Categories

| Label ID | Label Name |
|---|---|
| 0 | Textline |
| 1 | Heading |
| 2 | Picture |
| 3 | Caption |
| 4 | Columns |
## Features

| Field | Type |
|---|---|
| `image_id` | int64 |
| `image` | image |
| `width` | int64 |
| `height` | int64 |
| `objects` | list of: `id` (int64), `area` (int64), `bbox` ([x, y, width, height], float32), `category` (class label 0–4) |
| `ground_truth.gt_parse` | `headline`: list of strings; `textline`: list of strings |
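Bounding boxes are stored as absolute-pixel `[x, y, width, height]`. If a detection framework expects `[x_min, y_min, x_max, y_max]` corners instead, a small helper like this sketch (not part of the dataset) converts them:

```python
def xywh_to_xyxy(bbox):
    """Convert a [x, y, width, height] box to [x_min, y_min, x_max, y_max]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# Example with a made-up box
print(xywh_to_xyxy([40.0, 60.0, 400.0, 32.0]))  # [40.0, 60.0, 440.0, 92.0]
```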
## Split

| Split | # Examples | Size |
|---|---|---|
| Train | 58,738 | ~84.31 GB (compressed) |
## Download
- Download size: ~93.32 GB
- Uncompressed dataset size: ~84.31 GB
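Given the size of the download, it can be convenient to stream examples instead of materializing the whole dataset on disk first. A minimal sketch using the standard `streaming=True` option of 🤗 Datasets:

```python
from datasets import load_dataset

# Stream examples without downloading the full archive up front.
stream = load_dataset("alakxender/od-syn-page-annotations", split="train", streaming=True)

for i, sample in enumerate(stream):
    print(sample["image_id"], sample["width"], sample["height"])
    if i == 2:  # peek at a few examples only
        break
```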
## Example Use (with 🤗 Datasets)
```python
from datasets import load_dataset

dataset = load_dataset("alakxender/od-syn-page-annotations")

# Build the category-ID -> label-name mapping (features are defined on the split)
categories = dataset["train"].features["objects"].feature["category"].names
id2label = {i: name for i, name in enumerate(categories)}
print(id2label)

# Inspect the first training example
sample = dataset["train"][0]
print("Image ID:", sample["image_id"])
print("Image size:", sample["width"], "x", sample["height"])
print("First object category:", sample["objects"]["category"][0])
print("First headline:", sample["ground_truth"]["gt_parse"]["headline"][0])
```
## Visualize
```python
import numpy as np
from PIL import Image, ImageDraw, ImageFont
from datasets import load_dataset


def get_color(idx):
    palette = [
        "red", "green", "blue", "orange", "purple",
        "cyan", "magenta", "yellow", "lime", "pink",
    ]
    return palette[idx % len(palette)]


def draw_bboxes(sample, id2label, save_path=None):
    """
    Draw bounding boxes and labels on a single dataset sample.

    Args:
        sample: A dataset example (dict) with 'image' and 'objects'.
        id2label: Mapping from category ID to label name.
        save_path: If provided, saves the image to this path.

    Returns:
        PIL Image with drawn bounding boxes.
    """
    image = sample["image"]
    annotations = sample["objects"]

    image = Image.fromarray(np.array(image))
    draw = ImageDraw.Draw(image)

    try:
        font = ImageFont.truetype("arial.ttf", 14)
    except OSError:
        font = ImageFont.load_default()

    for category, box in zip(annotations["category"], annotations["bbox"]):
        x, y, w, h = box
        color = get_color(category)

        # Object bounding box
        draw.rectangle((x, y, x + w, y + h), outline=color, width=2)

        # Label background sized to the text, then the label itself
        label = id2label[category]
        bbox = font.getbbox(label)
        text_width = bbox[2] - bbox[0]
        text_height = bbox[3] - bbox[1]
        draw.rectangle([x, y, x + text_width + 4, y + text_height + 2], fill=color)
        draw.text((x + 2, y + 1), label, fill="black", font=font)

    if save_path:
        image.save(save_path)
        print(f"Saved image to {save_path}")
    else:
        image.show()

    return image


# Load one sample
dataset = load_dataset("alakxender/od-syn-page-annotations", split="train[:1]")

# Get category mapping
categories = dataset.features["objects"].feature["category"].names
id2label = {i: name for i, name in enumerate(categories)}

# Draw bounding boxes on the first sample
draw_bboxes(
    sample=dataset[0],
    id2label=id2label,
    save_path="sample_0.png"
)
```
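The same bounding boxes can also be used to crop individual regions out of a page, e.g. to export textline images for OCR training. A minimal sketch reusing `dataset` and `id2label` from the snippet above (it assumes the `image` field decodes to a PIL image, as in the drawing code):

```python
# Crop every Textline region from the first sample and save each as its own image.
sample = dataset[0]
image = sample["image"]          # PIL image of the synthetic page
objects = sample["objects"]

for i, (category, box) in enumerate(zip(objects["category"], objects["bbox"])):
    if id2label[category] != "Textline":
        continue
    x, y, w, h = box
    crop = image.crop((int(x), int(y), int(x + w), int(y + h)))
    crop.save(f"textline_{i}.png")
```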