---
datasets:
  - freococo/Google_Myanmar_ASR
tags:
  - audio
  - asr
  - speech-recognition
  - webdataset
  - Myanmar
license: cc0-1.0
language:
  - my
task_categories:
  - automatic-speech-recognition
pretty_name: Google Myanmar ASR Dataset (WebDataset)
size_categories:
  - 1K<n<10K
---

# Google Myanmar ASR Dataset (WebDataset Version)

This repository provides a clean, user-friendly, and robust version of the Google Myanmar ASR Dataset, which is derived from the OpenSLR-80 Burmese Speech Corpus.

This version has been carefully re-processed into the WebDataset format. Each sample consists of a `.wav` audio file, a `.txt` transcript, and a clean `.json` metadata file, packaged into sharded `.tar` archives. This format is highly efficient for large-scale training of ASR models.


## Dataset Description

This dataset consists of 16 kHz .wav audio files and their corresponding transcriptions, formatted for training and evaluating automatic speech recognition (ASR) models in the Burmese (Myanmar) language.

### Key Highlights

- **Language:** Myanmar (Burmese)
- **Sample Rate:** 16,000 Hz
- **Format:** WebDataset (`.tar` archives containing `.wav`, `.txt`, and `.json` files)
- **Total Samples:** 2,530 examples
- **Split:** All data is combined into a single `train` split for maximum flexibility.
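If you download clips locally, the stated 16 kHz sample rate can be double-checked with Python's standard-library `wave` module. This is a minimal sketch; the file path is a placeholder, not a file shipped with the dataset:

```python
import wave

def check_sample_rate(path: str, expected_hz: int = 16_000) -> bool:
    """Return True if the WAV file at `path` is sampled at `expected_hz`."""
    with wave.open(path, "rb") as wav:
        return wav.getframerate() == expected_hz

# Placeholder path for illustration:
# check_sample_rate("bur_9762_9943594974.wav")
```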

## Dataset Structure

Each sample within the WebDataset archives contains three components:

  1. A `.wav` file with the audio data.
  2. A `.txt` file with the transcription, for easy access.
  3. A `.json` file with all associated metadata.

The JSON metadata for each sample has the following clean structure:

| Field | Description | Data Type |
|---|---|---|
| `__key__` | A unique identifier for the sample. | string |
| `file_name` | The name of the corresponding `.wav` file. | string |
| `transcript` | The transcription (space-separated syllables). | string |
| `speaker` | The identified speaker (Female / Male). | string |
| `duration` | The duration of the audio in seconds. | float |
Example of a clean `.json` file in the dataset:

```json
{
  "__key__": "bur_9762_9943594974",
  "file_name": "bur_9762_9943594974.wav",
  "transcript": "α€” α€™α€·α€Ί ဆန် ထွက် α€œα€€α€Ί α€–α€€α€Ί ခြောက် များ α€€α€­α€― α€„α€šα€Ί α€„α€šα€Ί α€€ α€α€Šα€Ία€Έ α€€ မြင် α€–α€°α€Έ ၏",
  "speaker": "Female",
  "duration": 5.12
}
```
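Because the `transcript` field holds space-separated syllables, standard string operations recover the syllable count directly. A minimal sketch parsing the example record above with only the stdlib `json` module (the values are copied from the sample shown, nothing else is assumed):

```python
import json

# The example metadata record from above, as a raw JSON string.
raw = """
{
  "__key__": "bur_9762_9943594974",
  "file_name": "bur_9762_9943594974.wav",
  "transcript": "α€” α€™α€·α€Ί ဆန် ထွက် α€œα€€α€Ί α€–α€€α€Ί ခြောက် များ α€€α€­α€― α€„α€šα€Ί α€„α€šα€Ί α€€ α€α€Šα€Ία€Έ α€€ မြင် α€–α€°α€Έ ၏",
  "speaker": "Female",
  "duration": 5.12
}
"""

record = json.loads(raw)
syllables = record["transcript"].split()  # syllables are space-delimited
print(f"{len(syllables)} syllables, {record['speaker']}, {record['duration']}s")
```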

## Preprocessing Details

The dataset was re-processed with the following steps to ensure quality and usability:

  1. Data Consolidation: Audio files from the original train and test splits were moved into a single collection.
  2. Metadata Extraction: Metadata was extracted from the original .parquet files.
  3. Data Cleaning:
    • Fields containing null values (such as the original transcript and gender fields) were removed to prevent errors.
    • The reliable tokenized_transcription was promoted to be the main transcript.
    • A clean JSON file was generated for every corresponding audio file.
  4. WebDataset Packaging: The validated (wav, txt, json) triples were packaged into sharded .tar archives using the WebDataset format for efficient, streaming access.
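The packaging step can be sketched with nothing but the standard library. Real pipelines typically use the `webdataset` library's `ShardWriter`, but the on-disk layout is the same: per sample, three tar members sharing one basename. The sample contents below are synthetic placeholders, not actual dataset files:

```python
import io
import json
import tarfile

def write_shard(shard_path: str, samples: dict) -> None:
    """Write samples into one WebDataset shard.

    `samples` maps key -> {"wav": bytes, "txt": str, "json": dict}.
    WebDataset groups files by shared basename, so each sample is stored
    as <key>.wav, <key>.txt and <key>.json inside the tar archive.
    """
    with tarfile.open(shard_path, "w") as tar:
        for key, parts in samples.items():
            members = (
                ("wav", parts["wav"]),
                ("txt", parts["txt"].encode("utf-8")),
                ("json", json.dumps(parts["json"], ensure_ascii=False).encode("utf-8")),
            )
            for ext, payload in members:
                info = tarfile.TarInfo(name=f"{key}.{ext}")
                info.size = len(payload)
                tar.addfile(info, io.BytesIO(payload))

# Synthetic placeholder sample, for illustration only.
samples = {
    "bur_0001": {
        "wav": b"RIFF-placeholder-bytes",
        "txt": "α€” α€™α€·α€Ί",
        "json": {"speaker": "Female", "duration": 1.0},
    }
}
write_shard("shard-000000.tar", samples)
```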

## How to Use

You can easily stream this dataset using the Hugging Face datasets library. The library handles the WebDataset format automatically.

```python
from datasets import load_dataset

# Load the dataset
# The `streaming=True` mode is highly recommended for large datasets
dataset = load_dataset("freococo/Google_Myanmar_ASR", split="train", streaming=True)

# Iterate through the first few samples
print("First 5 samples:")
for i, sample in enumerate(dataset.take(5)):
    print(f"\n--- Sample {i+1} ---")
    # WebDataset columns are named after the file extensions: wav, txt, json
    print(f"Transcript: {sample['txt']}")
    # The audio is automatically decoded
    print(f"Audio Sampling Rate: {sample['wav']['sampling_rate']}")
    # Access other metadata from the json column
    print(f"Speaker: {sample['json']['speaker']}")
    print(f"Duration: {sample['json']['duration']}")
```
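Since every sample carries `speaker` and `duration` metadata, corpus statistics such as per-speaker audio totals need only a single streaming pass. A hypothetical sketch over plain metadata dicts (the records below are illustrative, not taken from the dataset; while streaming you would typically feed in each sample's JSON metadata):

```python
from collections import defaultdict

def duration_by_speaker(records) -> dict:
    """Sum audio duration (seconds) per speaker label."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec["speaker"]] += rec["duration"]
    return dict(totals)

# Illustrative records only; real ones come from the streamed metadata.
records = [
    {"speaker": "Female", "duration": 5.12},
    {"speaker": "Male", "duration": 3.40},
    {"speaker": "Female", "duration": 2.08},
]
print(duration_by_speaker(records))
```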

## Attribution

This dataset is derived from the original OpenSLR Burmese Speech Corpus, curated and published by Google.

### Original Citation

```bibtex
@inproceedings{oo-etal-2020-burmese,
  title     = {Burmese Speech Corpus, Finite-State Text Normalization and Pronunciation Grammars with an Application to Text-to-Speech},
  author    = {Oo, Yin May and Wattanavekin, Theeraphol and Li, Chenfang and De Silva, Pasindu and Sarin, Supheakmungkol and Pipatsrisawat, Knot and Jansche, Martin and Kjartansson, Oddur and Gutkin, Alexander},
  booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference (LREC)},
  year      = {2020},
  pages     = {6328--6339},
  address   = {Marseille, France},
  publisher = {European Language Resources Association (ELRA)},
  url       = {https://www.aclweb.org/anthology/2020.lrec-1.777},
  ISBN      = {979-10-95546-34-4}
}
```

## License

This dataset is released under the Creative Commons Zero (CC0 1.0 Universal) license.

You may freely use, share, modify, and redistribute the dataset for any purpose, including commercial use, without attribution. However, attribution to the original source is encouraged when possible.