Sh1man committed on
Commit ff1fdd7 · verified · 1 Parent(s): a82aa14

Add files using upload-large-folder tool
README.md CHANGED
@@ -1,40 +1,49 @@
  ---
  language:
  - ru
- license: cc-by-nc-4.0
- task_categories:
- - automatic-speech-recognition
- size_categories:
- - 10K<n<100K
  tags:
  - audio
  - speech
  - Russian
  - ASR
  - voice
- pretty_name: open_stt
- dataset_info:
-   features:
-   - name: id
-     dtype: string
-   - name: path
-     dtype: string
-   - name: text
-     dtype: string
-   - name: duration
-     dtype: float32
-   - name: audio
-     dtype: audio
-   config_name: asr_calls_v2, buriy_audio_books_2, public_youtube700
-   splits:
-   - name: train
-   - name: validate
  ---
- ## Dataset Description
-
- open_stt is a Russian dataset for speech research.
-
  ## Subsets

@@ -43,41 +52,8 @@ The dataset contains three subsets:
  - **buriy_audio_books_2**: books recordings
  - **public_youtube700**: youtube recordings

- ## Dataset Structure
-
- This dataset is organized in the Common Voice format:
-
- - `/audio/{subset}/{split}/` - Contains TAR files with audio files
- - `/metadata/{subset}/` - Contains TSV files with transcriptions
- - `/n_shards.json` - Contains information about the number of shards for each subset and split
-
- ## Usage
-
- ```python
- from datasets import load_dataset
- from torch.utils.data import DataLoader
-
- # Load the public_youtube700 subset, train split
- dataset = load_dataset("Sh1man/silero_open_stt", "public_youtube700", split="train")
-
- # Or use the whole dataset
- dataset = load_dataset("Sh1man/silero_open_stt", "public_youtube700")
- train_dataset = dataset["train"]
- test_dataset = dataset["test"]
-
- # Create a DataLoader
- dataloader = DataLoader(dataset, batch_size=32)
-
- # Access the data
- for batch in dataloader:
-     print(batch["audio"])
-     print(batch["text"])
- ```
-
  ## Dataset Statistics
-
-
  ### asr_calls_v2 subset

  | Split | Samples |
@@ -100,4 +76,20 @@ for batch in dataloader:
  |-------|--------|
  | train | 4386 |
  | validate | 2925 |
- | **Total** | **7311** |
  ---
+ license: cc-by-nc-4.0
  language:
  - ru
  tags:
  - audio
  - speech
  - Russian
  - ASR
+ - mp3
  - voice
+ configs:
+ - config_name: asr_calls_v2
+   data_files:
+   - split: train
+     path: "asr_calls_v2/train/*.tar"
+   - split: validate
+     path: "asr_calls_v2/validate/*.tar"
+ - config_name: buriy_audio_books_2
+   data_files:
+   - split: train
+     path: "buriy_audio_books_2/train/*.tar"
+   - split: validate
+     path: "buriy_audio_books_2/validate/*.tar"
+ - config_name: public_youtube700
+   data_files:
+   - split: train
+     path: "public_youtube700/train/*.tar"
+   - split: validate
+     path: "public_youtube700/validate/*.tar"
+ size_categories:
+ - n<100K
  ---
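
The `configs` section above maps each config and split to its TAR shards via a `data_files` glob. A minimal sketch of that resolution, assuming the hypothetical helper name `shards_for` and an illustrative file list (not read from the Hub); only the `asr_calls_v2` globs are reproduced for brevity:

```python
from fnmatch import fnmatch

# data_files globs copied from the YAML front matter above (asr_calls_v2 only)
DATA_FILES = {
    ("asr_calls_v2", "train"): "asr_calls_v2/train/*.tar",
    ("asr_calls_v2", "validate"): "asr_calls_v2/validate/*.tar",
}

def shards_for(config, split, repo_files):
    """Return the repo files matched by the data_files glob for a config/split."""
    pattern = DATA_FILES[(config, split)]
    return sorted(f for f in repo_files if fnmatch(f, pattern))

# Illustrative file list mirroring the shards added in this commit
repo_files = [
    "asr_calls_v2/train/train-00000.tar",
    "asr_calls_v2/validate/validate-00000.tar",
]
print(shards_for("asr_calls_v2", "train", repo_files))
# → ['asr_calls_v2/train/train-00000.tar']
```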
+ ## Dataset Description
+ The validated.tsv data, filtered to down_votes = 0
+
+ ### Usage
+
+ ```python
+ from datasets import load_dataset, Audio
+
+ dataset = load_dataset("Sh1man/silero_open_stt", "asr_calls_v2", split="train")
+ print(dataset[0]['wav'])
+ ```

  ## Subsets

  - **buriy_audio_books_2**: books recordings
  - **public_youtube700**: youtube recordings

  ## Dataset Statistics
+
  ### asr_calls_v2 subset

  | Split | Samples |

  |-------|--------|
  | train | 4386 |
  | validate | 2925 |
+ | **Total** | **7311** |
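
The new **Total** row is just the sum of the split rows. A quick arithmetic check of the figures visible in this diff (the other subsets' rows are elided by the hunk):

```python
# Split counts from the statistics table above
splits = {"train": 4386, "validate": 2925}
total = sum(splits.values())
print(total)  # → 7311
```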
+
+ ### Licensing Information
+
+ Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
+
+ ### Citation Information
+
+ ```
+ @inproceedings{commonvoice:2020,
+   author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
+   title = {Common Voice: A Massively-Multilingual Speech Corpus},
+   booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
+   pages = {4211--4215},
+   year = 2020
+ }
+ ```
asr_calls_v2/train/train-00000.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d6895e20477622e281dc81aad17b17af5416f8bbd38b3a24ec6ca0805148490a
+ size 564162560

asr_calls_v2/validate/validate-00000.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8687d05813b83ad2672ae9235898361075f0dfb25981391b4413d6b96a8aad42
+ size 376606720

buriy_audio_books_2/train/train-00000.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4037513f910b279e04998e51e2a55b70587fe77acd3a33e32109959b3080938e
+ size 360683520

buriy_audio_books_2/validate/validate-00000.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:18bb70e03841533560c129c7b94a632a82b22f8b34e5b13c9fb697e8621840e4
+ size 233963520

public_youtube700/train/train-00000.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fc8e65a31f185e8c6b6257a9d53bc82d9fdae1208487d7e82b9c1f05ca5de6b9
+ size 327536640

public_youtube700/validate/validate-00000.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5f8fc603ec0d3f7b47b8be22f24e14e3721974fb8818d2c0937424522f1da283
+ size 214886400
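
The TAR files added in this commit are stored as git-lfs pointer stubs (version, oid, size), not the audio bytes themselves. A minimal sketch of parsing one such pointer, assuming the hypothetical helper name `parse_lfs_pointer`; the example text is copied from the last pointer above:

```python
def parse_lfs_pointer(text):
    """Parse a git-lfs pointer file into its version/oid/size fields."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "algo": algo,
            "oid": digest, "size": int(fields["size"])}

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:5f8fc603ec0d3f7b47b8be22f24e14e3721974fb8818d2c0937424522f1da283
size 214886400"""

info = parse_lfs_pointer(pointer)
print(info["algo"], info["size"])  # → sha256 214886400
```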