arxiv:2510.10494

Tracing the Traces: Latent Temporal Signals for Efficient and Accurate Reasoning

Published on Oct 12
· Submitted by Martina Vilas on Oct 16
Abstract

Latent-Trajectory signals improve inference-time efficiency by predicting productive reasoning paths, reducing token usage and enhancing accuracy.

AI-generated summary

Reasoning models improve their problem-solving ability through inference-time scaling, allocating more compute via longer token budgets. Identifying which reasoning traces are likely to succeed remains a key opportunity: reliably predicting productive paths can substantially reduce wasted computation and improve overall efficiency. We introduce Latent-Trajectory signals that characterize the temporal evolution of a model's internal representations during the generation of intermediate reasoning tokens. By measuring the overall change in latent representations between the start and end of reasoning, the change accumulated across intermediate steps, and the extent to which these changes advance toward the final state, we show that these signals predict solution accuracy more reliably than both cross-layer metrics and output-based confidence measures. When used to guide answer selection across multiple sampled generations, Latent-Trajectory signals make test-time scaling more effective and efficient than majority voting, reducing token usage by up to 70% while preserving and even improving accuracy by 2.6% on average. Moreover, these predictive signals often emerge early in the reasoning trace, enabling early selection and allocation of compute to the most promising candidates. Our findings contribute not only practical strategies for inference-time efficiency, but also a deeper interpretability perspective on how reasoning processes are represented and differentiated in latent space.

Community

Paper author · Paper submitter

🧠 New paper: Tracing the Traces: Latent Temporal Signals for Efficient and Accurate Reasoning

We introduce Latent-Trajectory (LT) signals, training-free metrics derived from a model's hidden-state dynamics during reasoning. These signals capture how internal representations evolve over time and predict which reasoning traces are likely to succeed.

⚙️ The three LT signals:
Net Change: measures how far the model’s internal representation moves from the start to the end of a reasoning trace.
Cumulative Change: measures the total amount of representational movement through latent space.
Aligned Change: measures how consistently intermediate updates point toward the final latent state.
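One plausible reading of the three signals above can be sketched in a few lines of NumPy. This is an illustrative interpretation of the paper's descriptions, not the authors' reference implementation; in particular, the exact normalization and the choice of projection for Aligned Change are assumptions here.

```python
import numpy as np

def latent_trajectory_signals(hidden_states) -> dict:
    """Compute three LT-style signals from a (T, d) array of per-step
    hidden-state vectors h_1..h_T collected during a reasoning trace.

    Illustrative reading of the paper's descriptions, not the authors'
    reference implementation.
    """
    h = np.asarray(hidden_states, dtype=float)
    steps = np.diff(h, axis=0)          # per-step updates h_{t+1} - h_t
    net = h[-1] - h[0]                  # overall start-to-end displacement

    # Net Change: distance between the first and last latent state.
    net_change = float(np.linalg.norm(net))

    # Cumulative Change: total path length traversed through latent space.
    cumulative_change = float(np.linalg.norm(steps, axis=1).sum())

    # Aligned Change (assumed definition): how much each intermediate step
    # projects onto the direction of the remaining displacement toward the
    # final state, summed over the trace.
    aligned_change = 0.0
    for t in range(len(steps)):
        remaining = h[-1] - h[t]
        norm = np.linalg.norm(remaining)
        if norm > 0:
            aligned_change += float(steps[t] @ remaining / norm)

    return {
        "net_change": net_change,
        "cumulative_change": cumulative_change,
        "aligned_change": aligned_change,
    }
```

For a trajectory that moves in a straight line toward its endpoint, all three quantities coincide; a meandering trace accumulates Cumulative Change without matching Net or Aligned Change, which is the intuition behind using their relationship as a productivity signal.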

💡 Using LT signals for trace and answer selection reduces token usage by up to 70% while improving accuracy by 2.6% on average compared to majority voting.

LT signals emerge early in the reasoning process, enabling early path pruning and efficient compute allocation.

🔗 arXiv: 2510.10494
🧑‍💻 Code: coming soon


