RAPNIC Combined Dataset
Dataset Description
RAPNIC (Reconeixement Automàtic de la Parla per a persones amb Necessitats específIques en Comunicació; Automatic Speech Recognition for People with Specific Communication Needs) is a Catalan speech corpus collected from individuals with speech disorders, specifically cerebral palsy and Down syndrome.
This dataset was collected to develop and improve automatic speech recognition (ASR) systems that are accessible to people with speech disorders who speak Catalan.
Data Pulls Included
- LREC-PAPER: 560 recordings
- PILOT: 160 recordings
Dataset Statistics
- Speakers: 72
- Recordings: 720
- Total Duration: 1.38 hours
- Sampling Rate: 16 kHz
- Audio Format: WAV
- Language: Catalan (multiple dialects)
Disorder Distribution
- Down syndrome (Síndrome de Down): 410 recordings
- Cerebral palsy (Paràlisi cerebral): 240 recordings
- No response (Sense resposta): 50 recordings
- Other speech disorders (Altres trastorns de la parla): 20 recordings
Gender Distribution
- Female (Dona): 390 recordings
- Male (Home): 320 recordings
- No response (Sense resposta): 10 recordings
Dialect Distribution
- Central (Barcelona, Tarragona): 490 recordings
- Septentrional: 130 recordings
- Girona: 70 recordings
- Nord-Occidental (Lleida, Tortosa): 30 recordings
Data Fields
- audio: Audio file (WAV format, 16 kHz)
- speaker_id: Unique identifier for each speaker (anonymized)
- filename: Original filename of the recording
- task_id: Task/prompt identifier
- prompt: Text that was read/spoken
- original_duration: Duration in seconds before preprocessing
- trimmed_duration: Duration in seconds after preprocessing (2 seconds cut from the end)
- category: Recording category (clean, duplicate, over_threshold)
- reason: Additional category information
- age: Age range of the speaker
- gender: Gender of the speaker
- disorder: Type of speech disorder
- dialect: Catalan dialect variety
- province: Province of residence
- city: City of residence
- hasHelper: Whether the speaker had assistance during recording
- data_pull: Source data collection phase (e.g., PILOT, LREC-PAPER)
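The snippet below is a minimal sketch of loading the corpus with the Hugging Face `datasets` library and inspecting one record's fields. The repository ID and split name are hypothetical placeholders, not the actual ones for this dataset.

```python
from datasets import load_dataset

# Hypothetical repository ID and split name; substitute the real ones.
ds = load_dataset("ORG/rapnic-combined", split="train")

sample = ds[0]
print(sample["speaker_id"], sample["disorder"], sample["dialect"])
print(sample["prompt"])

# The audio field decodes to a dict holding the waveform and sampling rate.
waveform = sample["audio"]["array"]      # NumPy array of samples
sr = sample["audio"]["sampling_rate"]    # expected to be 16000
print(len(waveform) / sr, "seconds")
```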
Data Collection
The data was collected using a web-based recording platform adapted from Google's Project Euphonia. Participants recorded themselves reading prompts displayed on the screen.
Preprocessing
- Each recording has 2 seconds trimmed from the end to remove silence
- Duplicate recordings (same speaker, same task) were identified and marked
- Recordings over 10 seconds were flagged for review
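The sketch below illustrates the trimming and duration-flagging steps described above on a single 16 kHz WAV file. It is not the original preprocessing pipeline: the function and constant names are hypothetical, duplicate detection is not shown, and whether the 10-second check applies before or after trimming is an assumption here.

```python
import wave

TRIM_SECONDS = 2.0          # silence trimmed from the end of each recording
DURATION_THRESHOLD = 10.0   # recordings longer than this are flagged for review

def trim_and_check(in_path: str, out_path: str) -> dict:
    """Trim TRIM_SECONDS from the end of a WAV file and report its category."""
    with wave.open(in_path, "rb") as wav_in:
        params = wav_in.getparams()
        sr = params.framerate
        n_frames = params.nframes
        frames = wav_in.readframes(n_frames)

    original_duration = n_frames / sr
    keep_frames = max(0, n_frames - int(TRIM_SECONDS * sr))
    frame_width = params.sampwidth * params.nchannels
    trimmed = frames[: keep_frames * frame_width]

    with wave.open(out_path, "wb") as wav_out:
        wav_out.setparams(params)          # header frame count is fixed on close
        wav_out.writeframes(trimmed)

    trimmed_duration = keep_frames / sr
    category = "over_threshold" if trimmed_duration > DURATION_THRESHOLD else "clean"
    return {
        "original_duration": original_duration,
        "trimmed_duration": trimmed_duration,
        "category": category,
    }
```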
Data Splits
This is a test upload containing 10 samples per speaker (720 recordings in total). All recordings are included (clean, duplicate, and over-threshold); see the filtering sketch below.
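Because duplicates and over-threshold recordings are included, downstream users may want to filter on the `category` field. A minimal sketch, continuing from the hypothetical loading example above:

```python
# Keep only recordings marked as clean (drop duplicates and over-threshold items).
clean_ds = ds.filter(lambda example: example["category"] == "clean")
print(f"{len(clean_ds)} of {len(ds)} recordings are clean")
```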
Ethical Considerations
- All participants provided informed consent
- Data is anonymized (speaker IDs do not contain personally identifiable information)
- The dataset complies with GDPR regulations
- This dataset should be used to improve accessibility technology for people with speech disorders
Citation
If you use this dataset, please cite:
[Citation information to be added]
Contact
For questions or access requests, please contact: [contact information]
License
This dataset is released under the Creative Commons Attribution 4.0 International License (CC-BY-4.0).