freococo committed
Commit 0f03027 · verified · 1 Parent(s): fe12186

Update README.md

Files changed (1): README.md (+4 -6)
README.md CHANGED
@@ -25,13 +25,13 @@ size_categories:
 
 This dataset was created by scraping and segmenting over 4,000 episodes of the VOA Burmese morning radio program. From that archive, 3,687 MP3 files were extracted and processed. This dataset contains sentence-level audio chunks suitable for ASR and speech-related model training.
 
-The current release (`voa_batch_001.tar` and `voa_batch_002.tar`) contains a combined total of **~152,300 sentence-level audio chunks** derived from the first **420 MP3 files** in the archive, totaling approximately **420 hours** of segmented audio.
+The current release (`voa_batch_001.tar` and `voa_batch_003.tar`) contains a combined total of **~152,300 sentence-level audio chunks** derived from the first **420 MP3 files** in the archive, totaling approximately **420 hours** of segmented audio.
 
 - 🗂️ Contains ~152,300 sentence-level audio chunks
 - ⏱️ Total duration: ~420 hours (1.5 million seconds)
 - 🔊 Average chunk length: ~9.9 seconds
 - 🎵 Audio format: 16kHz mono MP3
-- 📦 2 WebDataset .tar files (`b1`, `b2`)
+- 📦 2 WebDataset .tar files
 - 📏 Min chunk: 0.04 sec | Max chunk: 15.00 sec
 
 Each .mp3 is paired with a .json file containing structured metadata including file_name, broadcast_date, url, and duration.
@@ -40,13 +40,11 @@ Each .mp3 is paired with a .json file containing structured metadata including f
 
 ## Usage
 
-## Usage
-
 ```python
 from datasets import load_dataset
 
 # Stream all TARs under /train/
-ds = load_dataset("freococo/voa_myanmar_asr_audio", split="train", streaming=True)
+ds = load_dataset("freococo/voa_myanmar_asr_audio_2", split="train", streaming=True)
 
 sample = next(iter(ds))
 meta = sample["json"]
@@ -139,7 +137,7 @@ If you use this dataset, please cite it as:
 author = {freococo},
 title = {VOA Burmese – ASR-Ready WebDataset},
 year = {2025},
-url = {https://huggingface.co/datasets/freococo/voa_myanmar_asr_audio},
+url = {https://huggingface.co/datasets/freococo/voa_myanmar_asr_audio_2},
 note = {Segmented WebDataset format, public-domain VOA speech}
 }
 ```
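The README states that each .mp3 is paired with a .json metadata record and that chunk durations range from 0.04 s to 15.00 s. A minimal local sketch of working with such a record is shown below; the field names (`file_name`, `broadcast_date`, `url`, `duration`) come from the README, but the record's values, the URL, and the `in_range` helper are invented for illustration:

```python
import json

# Hypothetical metadata record; only the field names match the README,
# the values here are invented placeholders.
record = json.loads(
    '{"file_name": "chunk_000001.mp3",'
    ' "broadcast_date": "2024-01-01",'
    ' "url": "https://example.org/placeholder",'
    ' "duration": 9.9}'
)

# The README documents chunk lengths between 0.04 s and 15.00 s; a simple
# sanity filter one might apply while streaming samples.
MIN_SEC, MAX_SEC = 0.04, 15.00

def in_range(meta: dict) -> bool:
    """Return True if a chunk's duration falls in the documented range."""
    return MIN_SEC <= meta["duration"] <= MAX_SEC

print(in_range(record))  # True
```

In the streaming snippet from the README, the same check could be applied to each `sample["json"]` record as it arrives, without downloading the full TAR files first.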