arxiv:2509.23102

Multiplayer Nash Preference Optimization

Published on Sep 27
· Submitted by Fang Wu on Sep 30
#2 Paper of the day

Abstract

Multiplayer Nash Preference Optimization (MNPO) extends Nash learning from human feedback to handle complex, non-transitive human preferences by formulating alignment as an n-player game.

AI-generated summary

Reinforcement learning from human feedback (RLHF) has emerged as the standard paradigm for aligning large language models (LLMs) with human preferences. However, reward-based methods built on the Bradley-Terry assumption struggle to capture the non-transitive and heterogeneous nature of real-world preferences. To address this, recent studies have reframed alignment as a two-player Nash game, giving rise to Nash learning from human feedback (NLHF). While this perspective has inspired algorithms such as INPO, ONPO, and EGPO with strong theoretical and empirical guarantees, they remain fundamentally restricted to two-player interactions, creating a single-opponent bias that fails to capture the full complexity of realistic preference structures. In this work, we introduce Multiplayer Nash Preference Optimization (MNPO), a novel framework that generalizes NLHF to the multiplayer regime. It formulates alignment as an n-player game, where each policy competes against a population of opponents while being regularized toward a reference model. Our framework establishes well-defined Nash equilibria in multiplayer settings and extends the concept of duality gap to quantify approximation quality. We demonstrate that MNPO inherits the equilibrium guarantees of two-player methods while enabling richer competitive dynamics and improved coverage of diverse preference structures. Through comprehensive empirical evaluation, we show that MNPO consistently outperforms existing NLHF baselines on instruction-following benchmarks, achieving superior alignment quality under heterogeneous annotator conditions and mixed-policy evaluation scenarios. Together, these results establish MNPO as a principled and scalable framework for aligning LLMs with complex, non-transitive human preferences. Code is available at https://github.com/smiles724/MNPO.
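
For readers who want the shape of the formulation before opening the paper, here is a minimal mathematical sketch of a reference-regularized n-player preference game. The notation (prompt distribution $\rho$, preference oracle $\mathcal{P}$, KL weight $\tau$) is ours, and the exact objective and normalization used in MNPO may differ from this sketch.

$$
J_i(\pi_i, \pi_{-i}) \;=\; \mathbb{E}_{x \sim \rho}\!\left[\frac{1}{n-1}\sum_{j \neq i} \mathbb{E}_{y \sim \pi_i(\cdot\mid x),\; y' \sim \pi_j(\cdot\mid x)}\big[\mathcal{P}(y \succ y' \mid x)\big]\right] \;-\; \tau\, \mathbb{E}_{x \sim \rho}\!\left[\mathrm{KL}\big(\pi_i(\cdot\mid x)\,\|\,\pi_{\mathrm{ref}}(\cdot\mid x)\big)\right]
$$

A profile $(\pi_1^\ast, \dots, \pi_n^\ast)$ is a Nash equilibrium when no player can gain by deviating unilaterally, i.e. $J_i(\pi_i^\ast, \pi_{-i}^\ast) \ge J_i(\pi_i, \pi_{-i}^\ast)$ for every player $i$ and every alternative policy $\pi_i$, and an exploitability-style duality gap quantifies how far a profile is from equilibrium:

$$
\mathrm{Gap}(\pi_1, \dots, \pi_n) \;=\; \sum_{i=1}^{n} \Big[\,\max_{\pi_i'} J_i(\pi_i', \pi_{-i}) \;-\; J_i(\pi_i, \pi_{-i})\,\Big].
$$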

Community

Paper author · Paper submitter

🚀 New paper: Multiplayer Nash Preference Optimization (MNPO)

Preference optimization for LLMs has mostly been stuck in the two-player game setting (DPO, IPO, INPO, EGPO…). But real human feedback is messy, diverse, and non-transitive—it looks much more like a multiplayer game.

We introduce MNPO, the first framework that generalizes Nash learning from human feedback to the multiplayer regime.
✅ Theoretically grounded: defines multiplayer Nash equilibria & duality gap.
✅ Algorithmically scalable: unifies many existing PO methods as special cases.
✅ Empirically strong: MNPO outperforms all NLHF baselines on AlpacaEval 2, Arena-Hard, and MT-Bench, at times even surpassing much larger LLMs and GPT-5 on alignment benchmarks.

Paper: https://arxiv.org/abs/2509.23102

Code: https://github.com/smiles724/MNPO
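
To make the multiplayer dynamics concrete, below is a self-contained toy sketch (ours, not the released MNPO code) on a small discrete preference game: each of n policies takes mirror-descent (multiplicative-weights) steps against its average win rate over the opponent population, with a KL pull toward a reference policy, and an exploitability-style gap is reported as a stand-in for the paper's duality gap. All names and hyperparameters here are illustrative.

```python
# Toy sketch of a reference-regularized multiplayer preference game (not the official MNPO code).
import numpy as np

rng = np.random.default_rng(0)

n_players = 4    # size of the policy population
n_actions = 5    # toy discrete "responses"
tau = 0.1        # weight of the KL regularizer toward the reference policy
eta = 0.5        # mirror-descent step size
steps = 500

# P[a, b] = probability that response a is preferred over response b.
# Randomizing around 0.5 yields non-transitive preferences (no consistent ranking).
M = rng.uniform(-1.0, 1.0, size=(n_actions, n_actions))
P = 0.5 + 0.25 * (M - M.T)          # guarantees P[a, b] + P[b, a] == 1

pi_ref = np.full(n_actions, 1.0 / n_actions)                       # uniform reference
policies = [np.full(n_actions, 1.0 / n_actions) for _ in range(n_players)]

for _ in range(steps):
    new_policies = []
    for i, pi_i in enumerate(policies):
        opponents = [policies[j] for j in range(n_players) if j != i]
        # Expected win rate of each response against the opponent population.
        win_rate = np.mean([P @ pi_j for pi_j in opponents], axis=0)
        # Regularized payoff gradient: win rate minus the KL-to-reference term.
        grad = win_rate - tau * (np.log(pi_i) - np.log(pi_ref))
        # Multiplicative-weights update keeps each policy on the probability simplex.
        logits = np.log(pi_i) + eta * grad
        pi_new = np.exp(logits - logits.max())
        new_policies.append(pi_new / pi_new.sum())
    policies = new_policies          # simultaneous update for all players

# Exploitability-style gap: total gain available from unilateral best responses.
gap = 0.0
for i, pi_i in enumerate(policies):
    opponents = [policies[j] for j in range(n_players) if j != i]
    win_rate = np.mean([P @ pi_j for pi_j in opponents], axis=0)

    def payoff(p):
        kl = np.sum(p * (np.log(p + 1e-12) - np.log(pi_ref)))
        return p @ win_rate - tau * kl

    # Closed-form best response of the KL-regularized linear payoff.
    br = pi_ref * np.exp(win_rate / tau)
    br /= br.sum()
    gap += payoff(br) - payoff(pi_i)

print(f"Approximate duality gap after {steps} steps: {gap:.4f}")
```

The printed gap gives a rough sense of how close the toy population is to an approximate equilibrium; the real method replaces the discrete policies with LLMs, the random preference matrix with annotator or oracle preferences, and the multiplicative-weights step with gradient-based policy optimization.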

