---
datasets:
- freococo/rohingya_asr_audio
language:
- rhg
tags:
- speech
- audio
- voa
- rohingya
- self-supervised
- webdataset
- public-domain
pretty_name: VOA Rohingya ASR
license: pddl
task_categories:
- automatic-speech-recognition
- audio-to-audio
- audio-classification
language_creators:
- found
source_datasets:
- original
---

**This is the first public Rohingya language ASR dataset in AI history.**

## Overview

This dataset contains broadcast audio recordings from the **Voice of America (VOA) Rohingya Service**. Each source recording is a daily news segment, typically 30 minutes long, automatically segmented into chunks of 5–15 seconds for use in **self-supervised ASR**, **pretraining**, **language identification**, and more.

The content was aired publicly as part of VOA’s Rohingya-language radio program and is therefore released under a **public domain dedication** (U.S. Government speech, [17 U.S.C. § 105](https://www.govinfo.gov/content/pkg/USCODE-2011-title17/html/USCODE-2011-title17-chap1-sec105.htm)).

The dataset is stored in **WebDataset format**, with each `.tar` archive containing paired `.audio` (MP3) and `.json` metadata files for each segment.

## Acknowledgments

This dataset would not exist without the dedication and professionalism of the **Voice of America Rohingya Service** — especially the **journalists, editors, producers, and engineers** who continue broadcasting trusted news and public service content to marginalized communities.

Special gratitude goes to:

- VOA multilingual teams who **created, edited, and voiced** this content
- The **American people**, whose hard-earned taxpayer contributions make public media like VOA possible
- The open-source, low-resource, and humanitarian tech community — for tools, models, and continued support

This dataset is released in the hope that it will:

- Advance multilingual speech technology
- Empower access to information
- Amplify underrepresented voices across the world

## Metrics

| Metric            | Value        |
|-------------------|--------------|
| Total audio hours | **357.55 h** |
| Audio chunks      | **131,860**  |
| Shard count       | **14**       |
| Chunk duration    | 6–15 s       |
| Format            | WebDataset   |
| License           | Public Domain (VOA / U.S. Gov) |
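
As a rough consistency check, the mean chunk length implied by the totals above can be computed directly (the numbers are taken from the table; the result is only indicative):

```python
# Implied mean chunk length from the totals in the table above.
total_hours = 357.55
num_chunks = 131_860

mean_seconds = total_hours * 3600 / num_chunks
print(f"{mean_seconds:.1f} s per chunk on average")  # ≈ 9.8 s, consistent with 5–15 s chunking
```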

## Quick-start

You can load and stream the dataset from Hugging Face using the `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset(
    "freococo/rohingya_asr_audio",
    split="train",
    streaming=True,
)

for sample in dataset:
    print(sample["audio"])         # Audio object
    print(sample["file_name"])     # Chunk file name
    print(sample["download_url"])  # Original source URL
    print(sample["duration"])      # Duration in seconds
```
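
If the `audio` column resolves to a decoded `datasets` `Audio` feature (as the snippet above assumes), the raw waveform and sampling rate can be read from it. A minimal sketch; the exact keys may differ depending on how the loader maps the underlying `.audio` files:

```python
# Minimal sketch, assuming sample["audio"] is a decoded Audio dict.
sample = next(iter(dataset))

waveform = sample["audio"]["array"]               # NumPy array of audio samples
sampling_rate = sample["audio"]["sampling_rate"]  # source sampling rate in Hz

print(waveform.shape, sampling_rate)
```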

## Known Limitations

This dataset was created by automatically chunking full-length VOA Rohingya news broadcasts. As a result, developers should be aware of the following limitations:

- **No transcriptions** are included. The dataset is not suitable for supervised training unless it is transcribed independently.
- Some chunks may contain **non-speech segments**, such as:
  - Music intros and outros
  - Jingles or filler transitions
  - Background crowd noise or environmental sounds
  - Silent or low-audio intervals
- **No speaker labels** are provided. Voice diversity, accents, and gender variation exist but are unlabeled.
- **Broadcast mixing artifacts** (e.g., overlaid music, crossfades, background hum) may affect ASR performance.

Despite these challenges, the dataset is suitable for:

- Pretraining ASR models (wav2vec2-style)
- Unsupervised learning
- Language ID and diarization
- Synthetic data generation

We recommend applying **speech detection filters**, **VAD**, or **manual quality control** before using the data for downstream supervised tasks.
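
As one possible filtering step, a lightweight VAD pass can flag chunks with little or no detected speech before any manual review. The sketch below uses the Silero VAD model via `torch.hub`; the file name and the keep-threshold are illustrative, not part of this dataset:

```python
import torch

# Load the Silero VAD model and its helper utilities (requires torch; audio loading uses torchaudio).
model, utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
get_speech_timestamps, _, read_audio, _, _ = utils

# Decode one chunk as 16 kHz mono, which is what Silero VAD expects.
wav = read_audio("20250310_0001.mp3", sampling_rate=16000)  # hypothetical local file

# Measure how much of the chunk is detected as speech (timestamps are in samples).
speech = get_speech_timestamps(wav, model, sampling_rate=16000)
speech_seconds = sum(seg["end"] - seg["start"] for seg in speech) / 16000

# Keep only chunks with a minimum amount of speech (threshold chosen arbitrarily here).
keep = speech_seconds >= 2.0
print(f"speech: {speech_seconds:.1f} s -> keep={keep}")
```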

## Dataset Details

Each training sample is stored as:

- `.audio` — MP3 audio content (~5–15 seconds)
- `.json` — metadata with:
  - `file_name`: full chunk filename (e.g., `20250310_0001.audio`)
  - `original_file`: e.g., `20250310`
  - `publish_date`: ISO 8601 format (e.g., `2025-03-10`)
  - `download_url`: original VOA source URL
  - `duration`: chunk duration in seconds
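
Taken together, a single `.json` record might look like the following; the values are illustrative placeholders, not copied from the shards:

```python
# Illustrative metadata record for one chunk (all values are made up for the example).
example_metadata = {
    "file_name": "20250310_0001.audio",
    "original_file": "20250310",
    "publish_date": "2025-03-10",
    "download_url": "https://www.voanews.com/...",  # placeholder; real records carry the original VOA URL
    "duration": 9.4,
}
```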

These files are stored in `.tar` archives, split into ~10,000-sample shards named like:

```
rohingya-00000.tar
rohingya-00001.tar
...
```

Each archive follows the [WebDataset format](https://github.com/webdataset/webdataset), making it easy to use with PyTorch and Hugging Face streaming.
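
For PyTorch-style pipelines that bypass `datasets`, the shards can also be read directly with the `webdataset` library. A minimal sketch, assuming a shard has been downloaded locally (adjust the path to wherever the `.tar` files actually live in the repository):

```python
import json
import webdataset as wds

# Point at one or more local shards; brace expansion also works, e.g. "rohingya-{00000..00013}.tar".
shards = "rohingya-00000.tar"

dataset = wds.WebDataset(shards)

for sample in dataset:
    meta = json.loads(sample["json"])  # the paired .json metadata (raw bytes until parsed)
    mp3_bytes = sample["audio"]        # the paired .audio file: raw MP3 bytes
    print(meta["file_name"], meta["duration"], len(mp3_bytes))
    break
```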

## License & Reuse

All content is in the **public domain** under U.S. law:

> U.S. Government speech recordings (VOA staff broadcasts) are public domain under [17 U.S.C. § 105](https://www.govinfo.gov/content/pkg/USCODE-2011-title17/html/USCODE-2011-title17-chap1-sec105.htm).

Some broadcasts may contain music or third-party clips. Please verify segments manually if using them for commercial purposes.

## Citation

If you use this dataset in research, please cite:

> **Freococo (2025).**
> *VOA Rohingya ASR.*
> Hugging Face: [https://huggingface.co/datasets/freococo/rohingya_asr_audio](https://huggingface.co/datasets/freococo/rohingya_asr_audio)
> Public-domain speech segments from VOA Rohingya news programming.
> Released under `pddl`.
|