# Model Card for whisat
Whisper ASR model tuned for child speech in the classroom, fine-tuned on public corpora of children's speech. This research was conducted as part of NSF-ISAT.
The model was fine-tuned with Hugging Face Transformers and converted to a checkpoint format compatible with the openai-whisper package.
## Usage

```python
import whisper

model = whisper.load_model("<local_path_to_model>/whisper-model.pt", device=device)
```
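A minimal end-to-end sketch is shown below; the device selection and the audio filename (`classroom_clip.wav`) are placeholders for illustration:

```python
import torch
import whisper

# use a GPU if one is available (assumption for this sketch)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = whisper.load_model("<local_path_to_model>/whisper-model.pt", device=device)

# "classroom_clip.wav" is a placeholder path to a recording;
# transcribe() accepts a file path or a waveform array
result = model.transcribe("classroom_clip.wav", language="en")
print(result["text"])
```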
## Model Details

### Model Description
K-12 school classrooms have proven to be a challenging environment for Automatic Speech Recognition (ASR) systems, both because of background noise and conversation and because child speech differs in its linguistic and acoustic properties from the adult speech on which most ASR systems are trained and evaluated. We report on experiments to improve ASR for child speech in the classroom by training and fine-tuning transformer models on public corpora of adult and child speech augmented with classroom background noise. By tuning OpenAI's Whisper model we achieve a 38% relative reduction in word error rate (WER) to 9.2% on the public MyST dataset of child speech (the lowest yet reported) and a 7% relative reduction to reach 54% WER on a more challenging classroom speech dataset (ISAT). We also introduce a novel beam hypothesis rescoring method that selects among hypotheses using a speed-aware term, which captures prior knowledge of human speaking rates, together with a Large Language Model. We demonstrate the effectiveness of this technique on both publicly available datasets and a classroom speech dataset.
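As a rough illustration of speed-aware rescoring, the sketch below combines each hypothesis's decoder log-probability with a Gaussian prior on words per second and an optional language-model score. The weights, the prior's parameters, and the `score_lm` hook are illustrative assumptions, not the exact formulation from the paper:

```python
def rescore_hypotheses(hypotheses, audio_duration_s, score_lm=None,
                       lm_weight=0.5, speed_weight=1.0,
                       rate_mean=3.0, rate_std=1.0):
    """Select among beam hypotheses using the decoder score, a speaking-rate
    prior, and (optionally) a language-model score.

    hypotheses: list of (text, decoder_logprob) pairs.
    score_lm:   optional callable mapping a text string to a log-likelihood.
    """
    def speed_logprob(text):
        # penalize hypotheses whose implied words-per-second rate is atypical
        wps = len(text.split()) / max(audio_duration_s, 1e-6)
        return -0.5 * ((wps - rate_mean) / rate_std) ** 2

    def total_score(text, decoder_logprob):
        score = decoder_logprob + speed_weight * speed_logprob(text)
        if score_lm is not None:
            score += lm_weight * score_lm(text)
        return score

    return max(hypotheses, key=lambda h: total_score(*h))[0]
```

For a one-second clip, `rescore_hypotheses([("the cat sat", -1.2), ("the cat sat down there quickly now", -1.0)], 1.0)` returns the first hypothesis: the second one's implied rate of 7 words per second is heavily penalized by the prior despite its higher decoder score.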
- Finetuned from model: openai/whisper-large-v2
## Training Details

### Training Data
Utterances were sourced from the following corpora:
- MyST
- CuKids
- CSLU
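The model description above notes that training speech was augmented with classroom background noise. A generic sketch of mixing noise at a target signal-to-noise ratio follows; it is an assumed, simplified version of such a pipeline, not the exact one used for this model:

```python
import numpy as np

def mix_with_classroom_noise(speech, noise, snr_db):
    """Mix a speech waveform with a background-noise clip at a target SNR (dB).
    Both inputs are 1-D float arrays sampled at the same rate. Simplified
    augmentation sketch; not the exact pipeline used to train this model.
    """
    # loop the noise clip so it covers the whole utterance, then trim to length
    if len(noise) < len(speech):
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[: len(speech)]

    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    # scale noise so that speech_power / (scale**2 * noise_power) == 10**(snr_db / 10)
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise
```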
## Citation
R. Southwell et al., "Automatic Speech Recognition Tuned for Child Speech in the Classroom," ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Republic of Korea, 2024, pp. 12291-12295, doi: 10.1109/ICASSP48485.2024.10447428.

BibTeX:

```bibtex
@INPROCEEDINGS{10447428,
  author={Southwell, Rosy and Ward, Wayne and Trinh, Viet Anh and Clevenger, Charis and Clevenger, Clay and Watts, Emily and Reitman, Jason and D’Mello, Sidney and Whitehill, Jacob},
  booktitle={ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  title={Automatic Speech Recognition Tuned for Child Speech in the Classroom},
  year={2024},
  volume={},
  number={},
  pages={12291-12295},
  keywords={Training;Oral communication;Signal processing;Linguistics;Transformers;Acoustics;Background noise;Automatic Speech Recognition;Child Speech;Language Modeling;Transfer Learning;Transformers},
  doi={10.1109/ICASSP48485.2024.10447428}
}
```