5roop committed
Commit e85fdee · verified · 1 parent: c7be99e

Update README.md

Files changed (1):
  README.md  +5 -0
README.md CHANGED
@@ -99,6 +99,11 @@ dataset_info:
 
 The master dataset can be found at http://hdl.handle.net/11356/1785.
 
+<div style="border: 5px solid #ff6700; padding: 10px; margin: 10px 0;">
+<strong>Notice:</strong> ParlaSpeech corpora are currently in the process of enrichment with new features. Follow our progress here: <a href="http://clarinsi.github.io/parlaspeech">http://clarinsi.github.io/parlaspeech</a>
+</div>
+
+
 The ParlaSpeech-CZ dataset is built from the transcripts of parliamentary proceedings available in the Czech part of the ParlaMint corpus (http://hdl.handle.net/11356/1859), and the parliamentary recordings available from the AudioPSP dataset (https://hdl.handle.net/11234/1-5404).
 
 The dataset consists of audio segments that correspond to specific sentences in the transcripts. The transcript contains word-level alignments to the recordings, each instance consisting of character and millisecond start and end offsets, allowing for simple further segmentation of long sentences into shorter segments for ASR and other memory-sensitive applications. Sequences longer than 30 seconds have already been removed from this dataset, which should allow for a simple usage on most modern GPUs.
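
The word-level alignments described in the README make it possible to cut sentences that sit close to the 30-second ceiling into shorter chunks before ASR training. The sketch below illustrates that idea with the 🤗 Datasets library; it is a minimal example under stated assumptions, not the dataset's documented API. The repository id ("classla/ParlaSpeech-CZ"), the split name, and the column and offset field names ("audio", "words", "word", "time_s", "time_e") are assumptions and should be checked against the actual dataset schema.

```python
# Minimal sketch: re-segmenting long aligned sentences into shorter ASR chunks.
# ASSUMPTIONS (not confirmed by this commit): the Hugging Face repo id
# ("classla/ParlaSpeech-CZ"), the split name ("train"), and the schema
# (an "audio" column decoded to {"array", "sampling_rate"} and a "words"
# column listing dicts with millisecond "time_s"/"time_e" offsets plus the
# surface form "word"). Check the dataset card for the real names.
from datasets import load_dataset

MAX_SEGMENT_MS = 15_000  # target chunk length for memory-constrained fine-tuning


def resegment(example, max_ms=MAX_SEGMENT_MS):
    """Cut one aligned sentence into chunks no longer than `max_ms` milliseconds."""
    chunks, current, chunk_start = [], [], None
    for word in example["words"]:
        if chunk_start is None:
            chunk_start = word["time_s"]
        current.append(word)
        if word["time_e"] - chunk_start >= max_ms:
            chunks.append(current)
            current, chunk_start = [], None
    if current:
        chunks.append(current)

    audio = example["audio"]
    sr = audio["sampling_rate"]
    segments = []
    for chunk in chunks:
        # Convert millisecond offsets to sample indices for slicing the waveform.
        start = int(chunk[0]["time_s"] * sr / 1000)
        end = int(chunk[-1]["time_e"] * sr / 1000)
        segments.append(
            {
                "audio": audio["array"][start:end],
                "sampling_rate": sr,
                "text": " ".join(w["word"] for w in chunk),
            }
        )
    return segments


if __name__ == "__main__":
    # Streaming avoids downloading all parquet shards just to inspect one row.
    ds = load_dataset("classla/ParlaSpeech-CZ", split="train", streaming=True)
    example = next(iter(ds))
    for segment in resegment(example):
        duration_s = len(segment["audio"]) / segment["sampling_rate"]
        print(f"{duration_s:5.1f}s  {segment['text'][:60]}")
```

Streaming mode is used here only so the sketch can inspect a single row cheaply; for actual training you would more likely materialize the split and apply the same chunking with `Dataset.map`.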