---
license: cc
language:
- id
pretty_name: LibriVox Filtered ID
size_categories:
- n<1K
task_categories:
- automatic-speech-recognition
---
# Librivox Filtered ID

A filtered subset of the LibriVox Indonesian dataset.

- Audio preprocessed with FFmpeg to Whisper-ready WAV: mono, 16 kHz sample rate (`-ar 16000 -ac 1`), suitable for fine-tuning
- Selected subset: `['id']['universal-declaration-of-human-rights']`
- num_rows: 136
- Original dataset: <a href="https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia">indonesian-nlp/librivox-indonesia</a>
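The preprocessing step above can be reconstructed as an FFmpeg invocation; a minimal Python sketch, where the `ffmpeg_args` helper and the file names are illustrative, not part of this dataset's tooling:

```python
# Hypothetical helper: builds the FFmpeg argument list implied by the card
# (mono, 16 kHz WAV output for Whisper fine-tuning).
def ffmpeg_args(src: str, dst: str) -> list[str]:
    return ["ffmpeg", "-i", src, "-ar", "16000", "-ac", "1", dst]

# Pass the resulting list to subprocess.run(...) to perform the conversion.
print(ffmpeg_args("clip.mp3", "audio/librivox_id_1.wav"))
```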
## Format
Each example is a dictionary with the following fields:
```json
{
  "path": "audio/librivox_id_1.wav",
  "audio": {
    "path": "audio/librivox_id_1.wav",
    "array": [...],
    "sampling_rate": 16000
  },
  "sentence": "Some transcription"
}
```
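Working from the structure above, a clip's duration follows from the array length and the sampling rate; a minimal sketch using a dummy example (the values here are placeholders, not real dataset content):

```python
# Dummy example mirroring the Format section; the array stands in for
# 2 seconds of silence at 16 kHz.
example = {
    "path": "audio/librivox_id_1.wav",
    "audio": {
        "path": "audio/librivox_id_1.wav",
        "array": [0.0] * 32000,
        "sampling_rate": 16000,
    },
    "sentence": "Some transcription",
}

# Duration in seconds = number of samples / sampling rate.
duration_s = len(example["audio"]["array"]) / example["audio"]["sampling_rate"]
print(duration_s)  # 2.0
```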
## Load dataset
Install Hugging Face `datasets` (this card was prepared against v2.18.0):
```bash
pip install datasets==2.18.0
```
Then load the dataset:
```python
from datasets import load_dataset, Audio

dataset = load_dataset("Willy030125/librivox_filtered_id")
# Audio is already mono 16 kHz; casting explicitly makes that guarantee visible:
dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
```