In-Context Learning Boosts Speech Recognition via Human-like Adaptation to Speakers and Language Varieties
Abstract
In-context learning in Phi-4 Multimodal yields significant improvements in automatic speech recognition robustness from a small number of example utterances, with a performance profile similar to that of human listeners.
Human listeners readily adjust to unfamiliar speakers and language varieties through exposure, but do these adaptation benefits extend to state-of-the-art spoken language models? We introduce a scalable framework that enables in-context learning (ICL) in Phi-4 Multimodal using interleaved task prompts and audio-text pairs, and find that as few as 12 example utterances (~50 seconds) at inference time reduce word error rates by a relative 19.7% (1.2 pp.) on average across diverse English corpora. These improvements are most pronounced for low-resource varieties, when the context and target speaker match, and when more examples are provided, though scaling our procedure yields diminishing marginal returns to context length. Overall, we find that our novel ICL adaptation scheme (1) produces a performance profile similar to that of human listeners, and (2) delivers consistent improvements in automatic speech recognition (ASR) robustness across diverse speakers and language backgrounds. While adaptation succeeds broadly, significant gaps remain for certain varieties, revealing where current models still fall short of human flexibility. We release our prompts and code on GitHub.
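The abstract does not give the exact prompt template, but the scheme it describes (interleaving a task instruction with audio-transcript example pairs before the target clip) can be sketched roughly as below. This is a minimal sketch, assuming the public Hugging Face interface for microsoft/Phi-4-multimodal-instruct (AutoProcessor, <|audio_N|> placeholders, and the <|user|>/<|end|>/<|assistant|> chat tokens); the file names, transcripts, and prompt wording are hypothetical and may differ from the authors' released prompts and code.

```python
# Sketch: few-shot ICL prompt construction for ASR with Phi-4 Multimodal.
# Assumptions (not from the paper): the checkpoint name, chat tokens,
# <|audio_N|> placeholders, and the task instruction wording below.
import soundfile as sf
import torch
from transformers import AutoModelForCausalLM, AutoProcessor

MODEL_ID = "microsoft/Phi-4-multimodal-instruct"
processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, trust_remote_code=True, torch_dtype="auto"
).eval()

TASK = "Transcribe the audio clip into text."

def build_icl_prompt(example_paths, example_transcripts, target_path):
    """Interleave k (audio, transcript) example pairs before the target clip."""
    audios, parts = [], ["<|user|>"]
    for i, (path, text) in enumerate(zip(example_paths, example_transcripts), start=1):
        audios.append(sf.read(path))  # (waveform, sampling_rate) tuple
        parts.append(f"{TASK} <|audio_{i}|> {text}")
    # The target utterance comes last with no transcript; the model completes it.
    audios.append(sf.read(target_path))
    parts.append(f"{TASK} <|audio_{len(audios)}|>")
    parts.append("<|end|><|assistant|>")
    return "".join(parts), audios

# Hypothetical context utterances and reference transcripts.
prompt, audios = build_icl_prompt(
    ["ex1.wav", "ex2.wav"],
    ["first reference transcript", "second reference transcript"],
    "target.wav",
)
inputs = processor(text=prompt, audios=audios, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=256)
hypothesis = processor.batch_decode(
    out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(hypothesis)
```

Per the abstract, scaling this context to 12 examples (~50 seconds of audio) gives an average relative WER reduction of 19.7%, with the largest gains when the context utterances come from the same speaker as the target.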
Community
ICL can achieve SOTA ASR (if you have some labeled data)
Librarian Bot found the following similar papers via the Semantic Scholar API:
- MiniMax-Speech: Intrinsic Zero-Shot Text-to-Speech with a Learnable Speaker Encoder (2025)
- GOAT-TTS: LLM-based Text-To-Speech Generation Optimized via A Dual-Branch Architecture (2025)
- Language translation, and change of accent for speech-to-speech task using diffusion model (2025)
- Granite-speech: open-source speech-aware LLMs with strong English ASR capabilities (2025)
- KIT's Offline Speech Translation and Instruction Following Submission for IWSLT 2025 (2025)
- SViQA: A Unified Speech-Vision Multimodal Model for Textless Visual Question Answering (2025)
- SIFT-50M: A Large-Scale Multilingual Dataset for Speech Instruction Fine-Tuning (2025)