---
language: my
tags:
  - OCR
  - Burmese
  - NLP
  - text
license: apache-2.0
datasets:
  - minthanthtoo-cs/Burmese-Classics-OCR-RAW
---

# Burmese (Myanmar) Books Dataset – Burmese Classics OCR (Daily Rolling Project)

[![License: Apache 2.0](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://www.apache.org/licenses/LICENSE-2.0)

## Overview

A **daily rolling dataset of Burmese books**, built for AI, OCR, and NLP research. Each entry includes OCR text with metadata: title, author, page index, and Burmese character ratio.

This project fills a **critical gap** in Burmese-language resources:

- **Public-domain Burmese text** is scarce.
- Corpus building carries high **technical and financial barriers**.
- This dataset provides **incremental, open access** for researchers and developers.

> **Note:** This is a personal, incremental effort; updates may be partial.
>
> **Source:** Derived from the **largest freely accessible Burmese book archive**, fully open and unrestricted.

> ⚠️ 170 books are actively being fixed due to OCR errors (error logs are maintained).
>
> ⚠️ 1,999 books contain duplicate entries that are being resolved.

## Dataset Format

Data is stored as **Parquet shards** (~200 MB each), converted from the source JSONL files. Each Parquet record includes:

- `uid` – unique identifier, assigned internally by the **Burmese Classics Online** website
- `title` – book title
- `author` – book author
- `page` – page index (starting from 1)
- `burmese_percent` – proportion of Burmese characters on the page
- `text` – OCR text content of the page

> **1 JSONL line = 1 page** (raw OCR output).

```python
features = {
    "uid": datasets.Value("string"),
    "title": datasets.Value("string"),
    "author": datasets.Value("string"),
    "page": datasets.Value("int32"),  # page index (starting from 1); int32 assumed
    "burmese_percent": datasets.Value("float32"),
    "text": datasets.Value("string"),
}
```

---

## Dataset Scale (Excluding Predominantly English Categories)

- **Total Books:** \~18,000 *(as of the 2nd Parquet shard: **3,384**)*
- **Total Pages:** \~2,000,000 *(as of the 2nd Parquet shard: **573,527**)*
- **Approx. Burmese Characters:** \~3B *(as of the 2nd Parquet shard: **747,255,396**)*
- **Estimated Tokens:** \~2B *(as of the 2nd Parquet shard: **\~374M**)*

> ⚠️ These totals exclude categories likely to contain mostly English content: `Technical`, `Newspaper | Journal`, and `Language`.

---

## Accessing Data

The dataset can be loaded programmatically from Hugging Face with the `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("minthanthtoo-cs/Burmese-Classics-OCR-RAW")
print(dataset)

# Access the 7th entry (index 6)
sample = dataset['train'][6]

# Print metadata and a 1,200-character text snippet
print(f"Title : {sample['title']}")
print(f"Author : {sample['author']}")
print(f"UID : {sample['uid']}")
print(f"Burmese % : {sample['burmese_percent']:.2f}%")
print("Text snippet:", sample['text'][:1200].replace('\n', ' '), "...")
```

- All Parquet shards are loaded automatically into a single `train` split.
- The Parquet format enables fast, columnar access and efficient filtering.

---

## Processing Notes

- Each book contains \~3k–100k Burmese characters (\~2k–100k tokens).
- Only books meeting a minimum Burmese content threshold are included (see the sketches below).
- The dataset is rolling: new books are added daily.
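The card does not document exactly how `burmese_percent` is computed. The following is a minimal, hypothetical sketch: it counts code points in the basic Myanmar Unicode block (U+1000–U+109F) against all non-whitespace characters and returns a 0–100 percentage. Both the block choice (extended Myanmar blocks are ignored) and the 0–100 scale are assumptions, the latter based on the formatted print in the access example above.

```python
# Hypothetical sketch only: the exact formula behind `burmese_percent`
# is not documented in this card. This version counts code points in the
# basic Myanmar Unicode block (U+1000-U+109F) against all non-whitespace
# characters and returns a 0-100 percentage (an assumed scale).

def burmese_ratio(text: str) -> float:
    chars = [c for c in text if not c.isspace()]
    if not chars:
        return 0.0
    burmese = sum(1 for c in chars if "\u1000" <= c <= "\u109f")
    return 100.0 * burmese / len(chars)

# Prints the Myanmar-block share of the sample string.
print(f"{burmese_ratio('မြန်မာစာ (OCR test)'):.2f}%")
```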
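For downstream work you may want a stricter cutoff than the inclusion threshold. A simple way to do this is `datasets.Dataset.filter`; the `90.0` value below is purely illustrative, not the threshold used to build the dataset.

```python
from datasets import load_dataset

ds = load_dataset("minthanthtoo-cs/Burmese-Classics-OCR-RAW", split="train")

# Keep only pages whose Burmese share clears a cutoff. The 90.0 value is
# illustrative, and assumes `burmese_percent` uses a 0-100 scale.
high_burmese = ds.filter(lambda ex: ex["burmese_percent"] >= 90.0)
print(f"Kept {len(high_burmese)} of {len(ds)} pages")
```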
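Because each record is a single page, whole books can be reassembled by grouping on `uid` and sorting by `page`. A plain-Python sketch, assuming the `page` column is present as listed in the field descriptions above; for the full corpus, a columnar tool over the Parquet shards would scale better.

```python
from collections import defaultdict

from datasets import load_dataset

ds = load_dataset("minthanthtoo-cs/Burmese-Classics-OCR-RAW", split="train")

# Group pages by book and stitch them back together in page order.
pages_by_book = defaultdict(list)
for row in ds:
    pages_by_book[row["uid"]].append((row["page"], row["text"]))

books = {
    uid: "\n".join(text for _, text in sorted(pages))
    for uid, pages in pages_by_book.items()
}
print(f"Reassembled {len(books)} books")
```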
---

## Visualizations (Preview)

These charts highlight which PDFs provide the most content relative to their file size (in MB):

- *Distribution of Pages per Book* (chart image not included)
- *File Size vs. Pages (MB/Page)* (chart image not included)