---
license: apache-2.0
---

Data source: https://openslr.org/111/

The audio and transcriptions are extracted to WAV and TXT files.
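As a rough illustration, here is a minimal Python sketch for reading one such pair, assuming a hypothetical layout in which each WAV file sits next to a TXT file with the same stem (the actual directory structure in this repo may differ):

```python
# Minimal sketch: read one audio/transcription pair (standard library only).
# The side-by-side "utt.wav" / "utt.txt" layout is an assumption for
# illustration; check the actual directory structure of this repo.
from pathlib import Path
import wave

def load_pair(wav_path: str) -> tuple[bytes, int, str]:
    """Return raw PCM frames, the sample rate, and the transcription text."""
    with wave.open(wav_path, "rb") as wav:
        frames = wav.readframes(wav.getnframes())
        sample_rate = wav.getframerate()
    transcription = Path(wav_path).with_suffix(".txt").read_text(encoding="utf-8").strip()
    return frames, sample_rate, transcription

# Hypothetical usage:
# frames, sr, text = load_pair("train_L/session_001.wav")
```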
---
## AISHELL-4

**Identifier:** SLR111

**Summary:** A free Mandarin multi-channel meeting speech corpus, provided by Beijing Shell Shell Technology Co., Ltd.

**Category:** Speech

**License:** CC BY-SA 4.0
**Downloads (use a mirror closer to you):**

- [train_L.tar.gz](https://www.openslr.org/resources/111/train_L.tar.gz) [7.0G] (training set, large room, 8-channel microphone array speech). Mirrors: [US](https://us.openslr.org/resources/111/train_L.tar.gz), [EU](https://openslr.elda.org/resources/111/train_L.tar.gz), [CN](https://openslr.magicdatatech.com/resources/111/train_L.tar.gz)
- [train_M.tar.gz](https://www.openslr.org/resources/111/train_M.tar.gz) [25G] (training set, medium room, 8-channel microphone array speech). Mirrors: [US](https://us.openslr.org/resources/111/train_M.tar.gz), [EU](https://openslr.elda.org/resources/111/train_M.tar.gz), [CN](https://openslr.magicdatatech.com/resources/111/train_M.tar.gz)
- [train_S.tar.gz](https://www.openslr.org/resources/111/train_S.tar.gz) [14G] (training set, small room, 8-channel microphone array speech). Mirrors: [US](https://us.openslr.org/resources/111/train_S.tar.gz), [EU](https://openslr.elda.org/resources/111/train_S.tar.gz), [CN](https://openslr.magicdatatech.com/resources/111/train_S.tar.gz)
- [test.tar.gz](https://www.openslr.org/resources/111/test.tar.gz) [5.2G] (test set). Mirrors: [US](https://us.openslr.org/resources/111/test.tar.gz), [EU](https://openslr.elda.org/resources/111/test.tar.gz), [CN](https://openslr.magicdatatech.com/resources/111/test.tar.gz)
**About this resource:**

AISHELL-4 is a sizable real-recorded Mandarin speech dataset collected with an 8-channel circular microphone array for speech processing in conference scenarios. The dataset consists of 211 recorded meeting sessions, each containing 4 to 8 speakers, with a total length of 120 hours. It aims to bridge advanced research on multi-speaker processing and practical application scenarios in three respects. With real recorded meetings, AISHELL-4 provides realistic acoustics and rich natural conversational speech characteristics such as short pauses, speech overlap, quick speaker turns, and noise. Meanwhile, accurate transcriptions and speaker voice activity annotations are provided for each meeting. This allows researchers to explore different aspects of meeting processing, ranging from individual tasks such as speech front-end processing, speech recognition, and speaker diarization, to multi-modality modeling and joint optimization of related tasks. We also release a PyTorch-based training and evaluation framework as a baseline system to promote reproducible research in this field. The baseline system code and generated samples are available [here](https://github.com/felixfuyihui/AISHELL-4).
You can cite the data using the following BibTeX entry:

```bibtex
@inproceedings{AISHELL-4_2021,
  title={AISHELL-4: An Open Source Dataset for Speech Enhancement, Separation, Recognition and Speaker Diarization in Conference Scenario},
  author={Yihui Fu and Luyao Cheng and Shubo Lv and Yukai Jv and Yuxiang Kong and Zhuo Chen and Yanxin Hu and Lei Xie and Jian Wu and Hui Bu and Xin Xu and Jun Du and Jingdong Chen},
  booktitle={Interspeech},
  url={https://arxiv.org/abs/2104.03603},
  year={2021}
}
```
**External URL:** [http://www.aishelltech.com/aishell_4](http://www.aishelltech.com/aishell_4) (full description on the company website)