AVE Speech: A Comprehensive Multi-Modal Dataset for Speech Recognition Integrating Audio, Visual, and Electromyographic Signals
Abstract
AVE Speech is a large-scale Mandarin speech corpus that pairs synchronized audio, lip-region video, and surface electromyography (EMG) recordings. The dataset contains 100 sentences read by 100 native speakers; each participant repeated the full corpus ten times, yielding over 55 hours of data per modality. These complementary signals enable research on robust acoustic and non-acoustic speech recognition.
About the Dataset
The AVE Speech dataset comprises a 100-sentence Mandarin Chinese corpus with audio signals, lip-region video recordings, and six-channel surface electromyography (EMG) data collected from 100 participants. Each subject read the entire corpus ten times; with each sentence averaging roughly two seconds, this amounts to over 55 hours of multi-modal speech data per modality. The dataset will be made publicly available once the related paper has been accepted for publication.
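The stated duration follows directly from the corpus design (100 sentences × 10 repetitions × 100 speakers at roughly 2 s per sentence), which a quick arithmetic check confirms:

```python
# Sanity check on the quoted corpus size. The ~2 s figure is the average
# sentence duration stated above.
sentences = 100
repetitions = 10
speakers = 100
avg_seconds = 2.0

total_hours = sentences * repetitions * speakers * avg_seconds / 3600
print(round(total_hours, 1))  # ≈ 55.6 hours per modality
```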
The related source code is available at: 👉 AVE-Speech Code on GitHub
Corpus Design
Index | Chinese Sentence | Phonetic Transcription (Mandarin) | Tone | English Translation |
---|---|---|---|---|
#0 | 我饿了 | wo e le | 3 4 5 | I'm hungry |
#1 | 我口渴 | wo kou ke | 3 3 3 | I'm thirsty |
#2 | 我吃饱了 | wo chi bao le | 3 1 3 5 | I'm full |
#3 | 水太烫了 | shui tai tang le | 3 4 4 5 | The water is too hot |
#4 | 我太累了 | wo tai lei le | 3 4 4 5 | I'm too tired |
... | ... | ... | ... | ... |
#99 | 向右转 | xiang you zhuan | 4 4 3 | Turn right |
#100 | (无指令) | None | None | (no command)
For more details, please refer to the file phonetic_transcription.xlsx.
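In practice the full index-to-sentence table would be loaded from phonetic_transcription.xlsx (e.g. with `pandas.read_excel`). As a minimal self-contained sketch, the sample rows from the table above can serve as a lookup from a recording's filename index to its label (the helper name `label_for` is illustrative, not part of the released code):

```python
# Excerpt of the corpus table above: index -> (sentence, pinyin, tones, English).
# The complete mapping is in phonetic_transcription.xlsx.
CORPUS_EXCERPT = {
    0: ("我饿了", "wo e le", "3 4 5", "I'm hungry"),
    1: ("我口渴", "wo kou ke", "3 3 3", "I'm thirsty"),
    2: ("我吃饱了", "wo chi bao le", "3 1 3 5", "I'm full"),
    3: ("水太烫了", "shui tai tang le", "3 4 4 5", "The water is too hot"),
    4: ("我太累了", "wo tai lei le", "3 4 4 5", "I'm too tired"),
    99: ("向右转", "xiang you zhuan", "4 4 3", "Turn right"),
}

def label_for(index: int) -> str:
    """Return the English translation for a sentence index, if known."""
    entry = CORPUS_EXCERPT.get(index)
    return entry[3] if entry else "(unknown index)"
```

For example, `label_for(99)` returns `"Turn right"`.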
Usage
Each ZIP file, when extracted, contains sessions numbered from 1 to 10. In a few rare cases, some sessions may be missing. Within each session there are multiple files, each named according to the index in phonetic_transcription.xlsx and corresponding to a specific Chinese sentence.
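Assuming each extracted ZIP yields a directory with numbered session subfolders as described, a sketch like the following collects the available sessions and their sentence files while tolerating the occasional missing session (the exact layout and file extensions are assumptions, not a released API):

```python
from pathlib import Path

def index_sessions(root: str) -> dict[int, list[Path]]:
    """Map session number (1-10) to its recording files, skipping any
    session directory that is absent for a given subject."""
    sessions: dict[int, list[Path]] = {}
    for n in range(1, 11):
        session_dir = Path(root) / str(n)
        if not session_dir.is_dir():
            continue  # a few sessions may be missing
        # Files are named by sentence index from phonetic_transcription.xlsx.
        sessions[n] = sorted(session_dir.iterdir())
    return sessions
```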
Citation
If you use this dataset in your work, please cite it as:
@article{zhou2025ave,
  title={AVE Speech: A Comprehensive Multi-Modal Dataset for Speech Recognition Integrating Audio, Visual, and Electromyographic Signals},
  author={Zhou, Dongliang and Zhang, Yakun and Wu, Jinghan and Zhang, Xingyu and Xie, Liang and Yin, Erwei},
  journal={IEEE Transactions on Human-Machine Systems},
  year={2025}
}