Test-Time Policy Adaptation for Enhanced Multi-Turn Interactions with LLMs
Abstract
ROSA is a lightweight algorithm that enhances multi-turn interactions with LLMs by adapting to user feedback in real time, improving both task effectiveness and efficiency.
Large Language Models (LLMs) employ multi-turn interaction as a fundamental paradigm for completing complex tasks. However, their performance often degrades in extended interactions, as they are typically trained on static, single-turn data, which hinders their ability to adapt to real-time user feedback. To address this limitation, we first propose a new paradigm: Test-Time Policy Adaptation for Multi-Turn Interactions (T2PAM), which uses user feedback from the ongoing interaction as a reward signal to estimate a latent optimal policy aligned with user preferences, then updates a small subset of parameters to steer the model toward this policy, ultimately enabling efficient in-conversation self-correction. We then introduce Optimum-Referenced One-Step Adaptation (ROSA), a lightweight algorithm that operationalizes T2PAM. ROSA guides the model parameters toward a theoretical optimal policy in a single, efficient update step, avoiding costly iterative gradient-based optimization and minimizing computational overhead. We provide a rigorous theoretical analysis guaranteeing that ROSA's policy converges to the user's preferences as the number of interactions increases. Extensive experiments on challenging benchmarks demonstrate that ROSA achieves significant improvements in both task effectiveness and efficiency.
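To make the paradigm concrete, the sketch below illustrates the T2PAM interaction loop in PyTorch under stated assumptions: the abstract does not give ROSA's closed-form update rule, so a single REINFORCE-style gradient step on a small low-rank adapter stands in for the per-turn adaptation step, and the adapter tensors, the toy `respond` helper, and the scalar feedback values are all hypothetical.

```python
# Minimal, hypothetical sketch of the T2PAM interaction loop.
# ROSA's actual optimum-referenced, single-step update is not shown here;
# a REINFORCE-style gradient step on a small low-rank adapter stands in
# for it, purely to illustrate the control flow.
import torch

torch.manual_seed(0)
VOCAB, HIDDEN = 32, 16

# Frozen "base model": one linear head standing in for an LLM.
base = torch.nn.Linear(HIDDEN, VOCAB)
for p in base.parameters():
    p.requires_grad_(False)

# Small trainable subset of parameters: a rank-1 adapter on the output head.
u = torch.zeros(VOCAB, 1, requires_grad=True)             # zero init: adapter is
v = (0.1 * torch.randn(1, HIDDEN)).requires_grad_(True)   # initially a no-op

opt = torch.optim.SGD([u, v], lr=0.1)

def policy_logits(h: torch.Tensor) -> torch.Tensor:
    """Base logits plus the adapter's low-rank contribution."""
    return base(h) + h @ (u @ v).T

def respond(h: torch.Tensor):
    """Sample a toy 'response' token and return it with its log-probability."""
    dist = torch.distributions.Categorical(logits=policy_logits(h))
    tok = dist.sample()
    return tok, dist.log_prob(tok)

# Multi-turn interaction: each turn, the user's feedback is treated as a
# reward signal and drives one efficient update of the adapter parameters.
for turn in range(5):
    h = torch.randn(1, HIDDEN)  # stand-in for the turn's context encoding
    tok, logp = respond(h)

    # Hypothetical scalar feedback in {-1, +1}; in practice this would be
    # derived from the user's reply in the ongoing conversation.
    reward = 1.0 if tok.item() % 2 == 0 else -1.0

    # Single per-turn update of the small parameter subset (REINFORCE stand-in).
    loss = -reward * logp.sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"turn {turn}: token={tok.item()}, reward={reward:+.0f}")
```

In the paper's method, the gradient step marked above would be replaced by ROSA's single optimum-referenced update; only the surrounding loop structure (respond, collect feedback, adapt a small parameter subset) is what this sketch aims to convey.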
Community
We introduce ROSA, a lightweight algorithm for our test-time adaptation paradigm that enables LLMs to perform efficient in-conversation self-correction by updating parameters online using real-time user feedback.
The following similar papers were recommended by the Semantic Scholar API:
- T-POP: Test-Time Personalization with Online Preference Feedback (2025)
- MAVIS: Multi-Objective Alignment via Value-Guided Inference-Time Search (2025)
- From Clicks to Preference: A Multi-stage Alignment Framework for Generative Query Suggestion in Conversational System (2025)
- Temporal Self-Rewarding Language Models: Decoupling Chosen-Rejected via Past-Future (2025)
- One-Token Rollout: Guiding Supervised Fine-Tuning of LLMs with Policy Gradient (2025)
- AMFT: Aligning LLM Reasoners by Meta-Learning the Optimal Imitation-Exploration Balance (2025)
- The Era of Real-World Human Interaction: RL from User Conversations (2025)