arXiv:2510.16156

AsyncVoice Agent: Real-Time Explanation for LLM Planning and Reasoning

Published on Oct 17 · Submitted by Yueqian Lin on Oct 21

Abstract

AsyncVoice Agent, with its asynchronous architecture, enhances human-AI collaboration by enabling real-time interaction and interruption of the model's reasoning process, significantly reducing latency while maintaining accuracy.

AI-generated summary

Effective human-AI collaboration on complex reasoning tasks requires that users understand and interact with the model's process, not just receive an output. However, the monolithic text from methods like Chain-of-Thought (CoT) prevents this, as current interfaces lack real-time verbalization and robust user barge-in. We present AsyncVoice Agent, a system whose asynchronous architecture decouples a streaming LLM backend from a conversational voice frontend. This design allows narration and inference to run in parallel, empowering users to interrupt, query, and steer the model's reasoning process at any time. Objective benchmarks show this approach reduces interaction latency by more than 600x compared to monolithic baselines while ensuring high fidelity and competitive task accuracy. By enabling a two-way dialogue with a model's thought process, AsyncVoice Agent offers a new paradigm for building more effective, steerable, and trustworthy human-AI systems for high-stakes tasks.
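
The abstract's key claim is architectural: because the streaming reasoning backend and the conversational voice frontend are decoupled and run in parallel, a user can barge in without stalling inference. The sketch below illustrates that pattern with Python's asyncio; it is a minimal, hypothetical illustration, not the paper's implementation, and every function and variable name in it is an assumption made for this example.

```python
import asyncio


async def reasoning_backend(steps: asyncio.Queue) -> None:
    # Stand-in for a streaming LLM: emit reasoning steps as they are produced.
    for step in ["parse the question", "recall relevant facts", "draft an answer"]:
        await asyncio.sleep(0.2)   # simulated inference latency
        await steps.put(step)
    await steps.put(None)          # sentinel: reasoning is finished


async def voice_frontend(steps: asyncio.Queue, barge_in: asyncio.Event) -> None:
    # Stand-in for the voice layer: narrate each step as it arrives,
    # but yield immediately if the user has barged in.
    while True:
        step = await steps.get()
        if step is None:
            break
        if barge_in.is_set():
            print("[frontend] user interrupted; answering their query instead")
            barge_in.clear()
            continue
        print(f"[frontend] narrating: {step}")


async def simulated_user(barge_in: asyncio.Event) -> None:
    # The user interrupts partway through the reasoning stream.
    await asyncio.sleep(0.3)
    barge_in.set()


async def main() -> None:
    steps: asyncio.Queue = asyncio.Queue()
    barge_in = asyncio.Event()
    # Narration and inference run concurrently; neither blocks the other.
    await asyncio.gather(
        reasoning_backend(steps),
        voice_frontend(steps, barge_in),
        simulated_user(barge_in),
    )


if __name__ == "__main__":
    asyncio.run(main())
```

In a real system the print calls would be replaced by streaming TTS and ASR, but the decoupling point is the same: the narration task consumes from the queue at its own pace while inference keeps producing, which is what makes low-latency interruption possible.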

Community

Paper author · Paper submitter

Sharing our latest work on making LLM reasoning and planning systems more transparent. We built AsyncVoice Agent to provide real-time, "think-aloud" audio explanations of the model's reasoning process.

