
📄 IndicDLP: A Foundational Dataset for Multi-Lingual and Multi-Domain Document Layout Parsing

IndicDLP is a large-scale foundational dataset designed to foster research in document layout parsing across multiple languages and domains. It contains richly annotated document images in 11 Indic languages plus English, across diverse document types.

ICDAR 2025 (Oral) – 🏆 Best Student Paper Runner-Up Award

PDF + arXiv · Project Homepage · GitHub Repo

👩‍💻 Authors:

  • Oikantik Nath (IIT Madras)
  • Sahithi Kukkala (IIIT Hyderabad)
  • Mitesh Khapra (IIT Madras)
  • Ravi Kiran Sarvadevabhatla (IIIT Hyderabad)

🧾 Dataset Summary

  • Languages: Assamese, Bengali, English, Gujarati, Hindi, Kannada, Malayalam, Marathi, Odia, Punjabi, Tamil, Telugu
  • Document Types (12): Novels, Textbooks, Magazines, Acts & Rules, Research Papers, Manuals, Brochures, Syllabi, Question Papers, Notices, Forms, Newspapers
  • Layout Labels: 42 physical & logical layout classes
  • Total Images: 119,806
  • Format: PNG images with JSON annotations (COCO format)
  • Annotation Tool: Shoonya, built on Label Studio
  • Content: Both scanned and digitally-born documents

📂 Dataset Structure

The dataset is divided into:

  • images/train, images/val, images/test
  • annotations/train.json, val.json, test.json

Each image has corresponding annotations in COCO-style format.
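
Since the exact field layout may vary between releases, the sketch below assumes only the standard COCO keys (images, annotations, categories) and the split layout shown above; the annotation path is illustrative.

import json
from collections import Counter

# Load the COCO-style annotation file for one split (path assumed from the
# structure above; adjust if your local copy differs).
with open("annotations/train.json", "r", encoding="utf-8") as f:
    coco = json.load(f)

# Map category ids to layout class names (the 42 physical & logical classes).
id_to_label = {cat["id"]: cat["name"] for cat in coco["categories"]}

# Count how often each layout class occurs across the split.
label_counts = Counter(id_to_label[ann["category_id"]] for ann in coco["annotations"])

print(f"{len(coco['images'])} images, {len(coco['annotations'])} annotated regions")
for label, count in label_counts.most_common(10):
    print(f"{label:30s} {count}")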


📥 Download

You can load the dataset from the Hugging Face Hub with:

from datasets import load_dataset

dataset = load_dataset("IndicDLP/IndicDLP-dataset")
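
Continuing from the snippet above, a quick sanity check might look like the following (assuming the loader exposes the usual train/validation/test splits; the exact feature names may differ):

# Inspect split names, sizes, and the schema of one example.
print(dataset)                  # DatasetDict with split names and row counts
sample = dataset["train"][0]    # assumes a "train" split exists
print(list(sample.keys()))      # feature names, e.g. image and annotation fields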

📦 Models & Resources

For model checkpoints fine-tuned on the IndicDLP dataset, visit the IndicDLP model homepage. For details, the annotation schema, and scripts, visit the IndicDLP project homepage.


📑 Citation

If you use this dataset or the associated models, please cite:

@InProceedings{10.1007/978-3-032-04614-7_2,
author="Nath, Oikantik
and Kukkala, Sahithi
and Khapra, Mitesh
and Sarvadevabhatla, Ravi Kiran",
editor="Yin, Xu-Cheng
and Karatzas, Dimosthenis
and Lopresti, Daniel",
title="IndicDLP: A Foundational Dataset for Multi-lingual and Multi-domain Document Layout Parsing",
booktitle="Document Analysis and Recognition -- ICDAR 2025",
year="2026",
publisher="Springer Nature Switzerland",
address="Cham",
pages="23--39",
abstract="Document layout analysis is essential for downstream tasks such as information retrieval, extraction, OCR, and digitization. However, existing large-scale datasets like PubLayNet and DocBank lack fine-grained region labels and multilingual diversity, making them insufficient for representing complex document layouts. Human-annotated datasets such as $M^6$Doc and $\text{D}^4\text{LA}$ offer richer labels and greater domain diversity, but are too small to train robust models and lack adequate multilingual coverage. This gap is especially pronounced for Indic documents, which encompass diverse scripts yet remain underrepresented in current datasets, further limiting progress in this space. To address these shortcomings, we introduce IndicDLP, a large-scale foundational document layout dataset spanning 11 representative Indic languages alongside English and 12 common document domains. Additionally, we curate UED-mini, a dataset derived from DocLayNet and $M^6$Doc, to enhance pretraining and provide a solid foundation for Indic layout models. Our experiments demonstrate that fine-tuning existing English models on IndicDLP significantly boosts performance, validating its effectiveness. Moreover, models trained on IndicDLP generalize well beyond Indic layouts, making it a valuable resource for document digitization. This work bridges gaps in scale, diversity, and annotation granularity, driving inclusive and efficient document understanding.",
isbn="978-3-032-04614-7"
}

🙏 Acknowledgements


📬 Contact

For problems with running the code or broken links, please reach out to Sahithi Kukkala or Oikantik Nath, or open an issue in the Issues section. For questions or collaborations, please reach out to Dr. Ravi Kiran Sarvadevabhatla.

