arXiv:2509.25779

Planner-R1: Reward Shaping Enables Efficient Agentic RL with Smaller LLMs

Published on Sep 30, 2025

AI-generated summary

Planner-R1 shows that agentic RL with smaller LLMs can reach strong performance and efficiency on the TravelPlanner benchmark through reward shaping, without sacrificing generalization on out-of-domain tasks.

Abstract

We investigated agentic RL with large language models on the TravelPlanner benchmark. Our approach, Planner-R1, achieved a 56.9% final-pass rate with only 180 training queries, a 2.7× improvement over GPT-5's 21.2% baseline and the strongest agentic result on the public leaderboard. A central finding was that smaller models (8B) were highly responsive to reward shaping: with dense process-level signals, they reached competitive performance while being 3.5× more compute-efficient and 1.5× more memory-efficient than 32B models. Larger models were more robust under sparse rewards but exhibited smaller relative gains from shaping and higher variance across runs. While curriculum learning offered no significant benefit, shaped rewards consistently amplified learning dynamics, making 8B models the most efficient setting for agentic RL. Crucially, these gains did not come at the cost of overfitting: fine-tuned models mostly maintained or exceeded baseline performance on out-of-domain tasks, including Multi-IF, NaturalPlan, and τ-Bench. These results establish reward shaping as a decisive lever for scaling agentic RL, highlight the competitive strength of smaller models, and demonstrate that efficiency can be achieved without sacrificing generalization.
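The contrast between a sparse final-pass reward and a dense process-level shaped reward can be illustrated with a minimal sketch. The constraint checks, weights, and toy plan below are hypothetical assumptions for illustration only, not the paper's actual reward function or TravelPlanner's real constraint set.

def sparse_reward(plan, constraints) -> float:
    """Sparse signal: 1.0 only if every constraint passes, else 0.0."""
    return 1.0 if all(c(plan) for c in constraints) else 0.0

def shaped_reward(plan, hard_constraints, soft_constraints,
                  hard_weight: float = 0.7, soft_weight: float = 0.3) -> float:
    """Dense process-level signal: partial credit per satisfied constraint."""
    hard_score = sum(c(plan) for c in hard_constraints) / max(len(hard_constraints), 1)
    soft_score = sum(c(plan) for c in soft_constraints) / max(len(soft_constraints), 1)
    return hard_weight * hard_score + soft_weight * soft_score

# Toy plan with two hard and two soft constraints (all hypothetical).
plan = {"budget": 950, "days": 3, "cities": ["A", "B"]}
hard = [lambda p: p["budget"] <= 1000, lambda p: p["days"] == 3]
soft = [lambda p: len(p["cities"]) >= 2, lambda p: p["budget"] <= 900]

print(sparse_reward(plan, hard + soft))  # 0.0 — one soft constraint fails
print(shaped_reward(plan, hard, soft))   # 0.85 — partial credit still flows

Under the sparse signal, a plan that violates any single constraint earns nothing, so a smaller model gets little gradient information early in training; the shaped signal rewards each satisfied constraint, which is consistent with the abstract's finding that dense process-level signals particularly benefit 8B models.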
