---
datasets:
- freococo/Google_Myanmar_ASR
tags:
- audio
- asr
- speech-recognition
- webdataset
- Myanmar
license: cc0-1.0
language:
- my
task_categories:
- automatic-speech-recognition
pretty_name: Google Myanmar ASR Dataset (WebDataset)
size_categories:
- 1K<n<10K
---

# Google Myanmar ASR Dataset (WebDataset Version)

This repository provides a clean, user-friendly, and robust version of the **Google Myanmar ASR Dataset**, derived from the [OpenSLR-80 Burmese Speech Corpus](https://openslr.org/80/).

This version has been carefully re-processed into the **WebDataset** format. Each sample consists of a `.wav` audio file, a `.txt` transcription, and a clean `.json` metadata file, packaged into sharded `.tar` archives. This format is highly efficient for large-scale training of ASR models.

---

## Dataset Description

This dataset consists of 16 kHz `.wav` audio files and their corresponding transcriptions, formatted for training and evaluating **automatic speech recognition (ASR)** models in the Burmese (Myanmar) language.

### Key Highlights

- **Language**: Myanmar (Burmese)
- **Sample Rate**: 16,000 Hz
- **Format**: WebDataset (`.tar` archives containing `.wav`, `.txt`, and `.json` files)
- **Total Samples**: 2,530 examples
- **Split**: All data is combined into a single `train` split for maximum flexibility.

---

## Dataset Structure

Each sample within the WebDataset archives contains three components:

1. A `.wav` file with the audio data.
2. A `.txt` file with the transcription for easy access.
3. A `.json` file with all associated metadata.

The JSON metadata for each sample has the following clean structure:

| Field        | Description                                    | Data Type |
|--------------|------------------------------------------------|-----------|
| `__key__`    | A unique identifier for the sample.            | `string`  |
| `file_name`  | The name of the corresponding `.wav` file.     | `string`  |
| `transcript` | The transcription (space-separated syllables). | `string`  |
| `speaker`    | The identified speaker (`Female` / `Male`).    | `string`  |
| `duration`   | The duration of the audio in seconds.          | `float`   |

Example of a clean `.json` file in the dataset:

```json
{
  "__key__": "bur_9762_9943594974",
  "file_name": "bur_9762_9943594974.wav",
  "transcript": "α€” α€™α€·α€Ί ဆန် ထွက် α€œα€€α€Ί α€–α€€α€Ί ခြောက် များ α€€α€­α€― α€„α€šα€Ί α€„α€šα€Ί α€€ α€α€Šα€Ία€Έ α€€ မြင် α€–α€°α€Έ ၏",
  "speaker": "Female",
  "duration": 5.12
}
```
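
If you want to peek inside a shard without any extra dependencies, the archives can be read with Python's standard `tarfile` module. This is only a minimal inspection sketch; the shard filename below is a placeholder, so substitute one of the actual `.tar` files from this repository.

```python
import json
import tarfile

# Placeholder name: use an actual shard downloaded from this repository.
shard_path = "train-000000.tar"

samples = {}
with tarfile.open(shard_path) as tar:
    for member in tar.getmembers():
        if not member.isfile():
            continue
        # Files are named "<key>.wav", "<key>.txt", "<key>.json"
        key, _, ext = member.name.rpartition(".")
        samples.setdefault(key, {})[ext] = tar.extractfile(member).read()

# Each key now maps to its raw .wav bytes, .txt transcript, and .json metadata.
for key, parts in list(samples.items())[:3]:
    meta = json.loads(parts["json"])
    print(key, meta["speaker"], f"{meta['duration']}s", f"{len(parts['wav'])} bytes")
```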
---

## Preprocessing Details

The dataset was re-processed with the following steps to ensure quality and usability:

1. **Data Consolidation**: Audio files from the original `train` and `test` splits were merged into a single collection.
2. **Metadata Extraction**: Metadata was extracted from the original `.parquet` files.
3. **Data Cleaning**:
   - Fields containing `null` values (such as the original `transcript` and `gender` fields) were removed to prevent errors.
   - The reliable `tokenized_transcription` field was promoted to be the main `transcript`.
   - A clean JSON file was generated for every corresponding audio file.
4. **WebDataset Packaging**: The validated `(wav, txt, json)` samples were packaged into sharded `.tar` archives using the WebDataset format for efficient, streaming access. A sketch of this step is shown below.
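
For reference, the packaging step can be reproduced with the `webdataset` library's `ShardWriter`. The sketch below is an illustration under assumed paths and shard sizes (`processed_audio/`, 1,000 samples per shard), not the exact script used to build this dataset.

```python
import json
from pathlib import Path

import webdataset as wds  # pip install webdataset

# Assumed layout: one <key>.wav and one <key>.json per sample in this folder.
audio_dir = Path("processed_audio")
shard_pattern = "google_myanmar_asr-%06d.tar"  # assumed naming scheme

with wds.ShardWriter(shard_pattern, maxcount=1000) as writer:
    for wav_path in sorted(audio_dir.glob("*.wav")):
        key = wav_path.stem
        meta = json.loads(wav_path.with_suffix(".json").read_text(encoding="utf-8"))
        writer.write({
            "__key__": key,
            # Raw bytes are written into the archive unchanged.
            "wav": wav_path.read_bytes(),
            "txt": meta["transcript"].encode("utf-8"),
            "json": json.dumps(meta, ensure_ascii=False).encode("utf-8"),
        })
```

Writing the `.txt` and `.json` entries as pre-encoded bytes keeps the script independent of `webdataset`'s automatic encoders.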

---

## How to Use

You can easily stream this dataset with the Hugging Face `datasets` library, which handles the WebDataset format automatically. The column names follow the file extensions inside the shards: `wav` (decoded audio), `txt` (transcript), and `json` (metadata).

```python
from datasets import load_dataset

# Load the dataset.
# Streaming mode is highly recommended: nothing is downloaded up front.
dataset = load_dataset("freococo/Google_Myanmar_ASR", split="train", streaming=True)

# Iterate through the first few samples
print("First 5 samples:")
for i, sample in enumerate(dataset.take(5)):
    print(f"\n--- Sample {i + 1} ---")
    print(f"Transcript: {sample['txt']}")
    # The .wav audio is automatically decoded into an array plus its sampling rate
    print(f"Audio Sampling Rate: {sample['wav']['sampling_rate']}")
    # Access the remaining metadata from the .json component
    print(f"Speaker: {sample['json']['speaker']}")
    print(f"Duration: {sample['json']['duration']}")
```
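
Because every sample carries its metadata in the `json` column, the stream can also be filtered or summarised on the fly. A small sketch, assuming a recent `datasets` release where `IterableDataset.filter` is available:

```python
from datasets import load_dataset

dataset = load_dataset("freococo/Google_Myanmar_ASR", split="train", streaming=True)

# Keep only female-voiced samples and add up their durations.
female_only = dataset.filter(lambda sample: sample["json"]["speaker"] == "Female")

total_seconds = 0.0
for sample in female_only.take(100):  # limit to 100 samples for a quick check
    total_seconds += sample["json"]["duration"]

print(f"Duration of the first 100 female-voiced samples: {total_seconds / 60:.1f} minutes")
```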

---

## Attribution

This dataset is derived from the original [OpenSLR Burmese Speech Corpus](https://openslr.org/80/), curated and published by Google.

### Original Citation

```bibtex
@inproceedings{oo-etal-2020-burmese,
  title     = {Burmese Speech Corpus, Finite-State Text Normalization and Pronunciation Grammars with an Application to Text-to-Speech},
  author    = {Oo, Yin May and Wattanavekin, Theeraphol and Li, Chenfang and De Silva, Pasindu and Sarin, Supheakmungkol and Pipatsrisawat, Knot and Jansche, Martin and Kjartansson, Oddur and Gutkin, Alexander},
  booktitle = {Proceedings of the 12th Language Resources and Evaluation Conference (LREC)},
  year      = {2020},
  pages     = {6328--6339},
  address   = {Marseille, France},
  publisher = {European Language Resources Association (ELRA)},
  url       = {https://www.aclweb.org/anthology/2020.lrec-1.777},
  isbn      = {979-10-95546-34-4}
}
```

---

## License

This dataset is released under the **Creative Commons Zero (CC0 1.0 Universal)** license.

> You may freely use, share, modify, and redistribute the dataset for any purpose, including commercial use, without attribution. However, attribution to the original source is encouraged when possible.