arxiv:2510.11370

Stabilizing MoE Reinforcement Learning by Aligning Training and Inference Routers

Published on Oct 13 · Submitted by Adina Yakefu on Oct 27
Abstract

Rollout Routing Replay (R3) stabilizes reinforcement learning training in Mixture-of-Experts models by reducing discrepancies between training and inference routing behaviors.

AI-generated summary

Reinforcement learning (RL) has emerged as a crucial approach for enhancing the capabilities of large language models. In Mixture-of-Experts (MoE) models, however, the routing mechanism often introduces instability and can even cause catastrophic RL training collapse. We analyze the training-inference consistency of MoE models and identify a notable discrepancy in routing behavior between the two phases. Moreover, even under identical conditions, the routing framework can yield divergent expert selections across repeated forward passes. To address this foundational inconsistency, we propose Rollout Routing Replay (R3), a method that records routing distributions from the inference engine and replays them during training. R3 significantly reduces the training-inference policy KL divergence and mitigates extreme discrepancies without compromising training speed. Extensive experiments across a range of settings confirm that R3 stabilizes RL training, preventing collapse and outperforming methods such as GSPO and TIS. We believe this work offers a new solution for stabilizing RL in MoE models.
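The replay idea described above can be sketched in a few lines. This is a minimal, illustrative sketch, not the authors' implementation: we assume top-k routing from per-token router logits, cache the expert indices chosen during rollout, and reuse those cached indices at training time (recomputing only the gate weights), so expert assignment cannot drift between the two phases. The `topk_route` helper and all shapes here are hypothetical.

```python
import numpy as np

def topk_route(logits, k):
    """Select top-k experts per token and softmax-normalize their gate scores."""
    idx = np.argsort(logits, axis=-1)[:, -k:]           # (tokens, k) expert indices
    gates = np.take_along_axis(logits, idx, axis=-1)    # raw gate scores
    gates = np.exp(gates) / np.exp(gates).sum(axis=-1, keepdims=True)
    return idx, gates

rng = np.random.default_rng(0)

# --- rollout (inference engine): record the routing decisions ---
inference_logits = rng.normal(size=(4, 8))              # 4 tokens, 8 experts
replay_idx, _ = topk_route(inference_logits, k=2)       # cache per-token expert ids

# --- training: logits differ slightly (e.g., different kernels / precision) ---
train_logits = inference_logits + rng.normal(scale=1e-3, size=(4, 8))

# Without R3: training recomputes routing and may select different experts.
naive_idx, _ = topk_route(train_logits, k=2)

# With R3: replay the cached expert selection; only the gate weights are
# recomputed from the training logits, so expert assignment matches rollout.
r3_scores = np.take_along_axis(train_logits, replay_idx, axis=-1)
r3_gates = np.exp(r3_scores) / np.exp(r3_scores).sum(axis=-1, keepdims=True)
```

With replay, any numerical mismatch between the inference and training stacks can shift the gate weights slightly but can no longer flip which experts a token is dispatched to, which is the discrepancy the paper identifies as the source of instability.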

Community

Paper submitter

Rollout Routing Replay (R3) stabilizes reinforcement learning in Mixture-of-Experts (MoE) models by replaying routing distributions recorded at inference during training, preventing collapse and improving training-inference consistency.

