Update README.md
README.md
CHANGED
@@ -1,21 +1,45 @@
-
+
 dataset_info:
   features:
-
-
-
-
-
-
+  - name: text
+    dtype: string
+  - name: audio
+    dtype: audio
+  - name: duration
+    dtype: float64
   splits:
-
-
-
+  - name: train
+    num_bytes: 20860321574.68
+    num_examples: 104520
   download_size: 20907025567
   dataset_size: 20860321574.68
 configs:
-- config_name: default
-
-
-
-
+- config_name: default
+  data_files:
+  - split: train
+    path: data/train-*
+
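For orientation, the `configs` entry above maps the `default` config to the `train` split stored under `data/train-*`. A minimal loading sketch with the Hugging Face `datasets` library follows; the repository id is a placeholder, since the card does not state it.

```python
from datasets import load_dataset

# Placeholder repo id: replace with the actual dataset repository on the Hub.
REPO_ID = "user/urdu-tts"

# "default" config and "train" split, matching the configs/data_files entries above.
ds = load_dataset(REPO_ID, "default", split="train")

print(len(ds))      # 104520 examples, per the split metadata above
print(ds.features)  # text (string), audio (Audio), duration (float64)
```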
+dataset_description: >
+  This is a high-quality **Text-to-Speech (TTS)** dataset for the **Urdu language**.
+  It contains **104,520 audio-text pairs**, each recorded as mono audio sampled at **22,500 Hz**, and is well suited to building neural TTS systems.
+
+  ### Highlights:
+  - Clean and phonetically rich Urdu text
+  - Studio-quality audio recordings
+  - Mono channel at 22.5 kHz sampling rate
+  - Includes duration metadata for each clip
+  - Suitable for training models such as Tacotron, FastSpeech, Glow-TTS, and VITS
+
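The highlights above describe mono clips at 22.5 kHz. If a downstream model expects a different sampling rate, the audio column can be resampled on access with `datasets.Audio`; this sketch continues from the loading snippet above, and the 16 kHz target is only an illustrative assumption.

```python
from datasets import Audio

# Decoded samples should follow the card's stated format: mono at 22,500 Hz.
print(ds[0]["audio"]["sampling_rate"])

# 16 kHz is just an example target; cast the column and let `datasets`
# resample lazily whenever an example is accessed.
ds_16k = ds.cast_column("audio", Audio(sampling_rate=16000))
print(ds_16k[0]["audio"]["sampling_rate"])  # 16000
```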
+  ### Format:
+  - **text**: Urdu transcription
+  - **audio**: High-quality mono audio (22.5 kHz)
+  - **duration**: Length of the audio clip in seconds
+
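The three fields listed above map directly onto the columns of each example. A small inspection loop, again assuming `ds` from the loading sketch; only the column names come from this card.

```python
# Print duration, sampling rate, and the start of the transcription
# for the first few examples.
for example in ds.select(range(3)):
    text = example["text"]          # Urdu transcription
    audio = example["audio"]        # dict with "array" and "sampling_rate"
    duration = example["duration"]  # clip length in seconds
    print(f"{duration:6.2f}s @ {audio['sampling_rate']} Hz  {text[:40]}")
```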
+  ### License:
+  This dataset is released for **research and educational purposes only**.
+  **Commercial use is strictly prohibited.**
+
+  Please refer to the accompanying `LICENSE` file for full terms and conditions.
+
+  ### Citation:
+  If you use this dataset in your research, please cite the dataset creators appropriately.