ARPO: Agentic Reinforced Policy Optimization

This repository contains an official checkpoint (Qwen2.5-3B-ARPO) for ARPO: Agentic Reinforced Policy Optimization, a novel agentic Reinforcement Learning (RL) algorithm designed for training multi-turn Large Language Model (LLM)-based agents.


Abstract

Large-scale reinforcement learning with verifiable rewards (RLVR) has demonstrated its effectiveness in harnessing the potential of large language models (LLMs) for single-turn reasoning tasks. In realistic reasoning scenarios, LLMs can often utilize external tools to assist in task-solving processes. However, current RL algorithms inadequately balance the models' intrinsic long-horizon reasoning capabilities and their proficiency in multi-turn tool interactions. To bridge this gap, we propose Agentic Reinforced Policy Optimization (ARPO), a novel agentic RL algorithm tailored for training multi-turn LLM-based agents. Through preliminary experiments, we observe that LLMs tend to exhibit highly uncertain behavior, characterized by an increase in the entropy distribution of generated tokens, immediately following interactions with external tools. Motivated by this observation, ARPO incorporates an entropy-based adaptive rollout mechanism, dynamically balancing global trajectory sampling and step-level sampling, thereby promoting exploration at steps with high uncertainty after tool usage. By integrating an advantage attribution estimation, ARPO enables LLMs to internalize advantage differences in stepwise tool-use interactions. Our experiments across 13 challenging benchmarks in computational reasoning, knowledge reasoning, and deep search domains demonstrate ARPO's superiority over trajectory-level RL algorithms. Remarkably, ARPO achieves improved performance using only half of the tool-use budget required by existing methods, offering a scalable solution for aligning LLM-based agents with real-time dynamic environments. Our code and datasets are publicly released.

Overview

ARPO's core principle is to encourage the policy model to adaptively branch sampling during high-entropy tool-call rounds, thereby efficiently aligning step-level tool-use behaviors.

Figure: token entropy after each round of tool-call feedback (left) and overall benchmark performance (right).

In the figure (left), the initial tokens generated by the LLM after receiving each round of tool-call feedback consistently exhibit high entropy. This indicates that external tool calls introduce significant uncertainty into the LLM's reasoning process.
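As a rough illustration of how this observation can drive branching (a sketch only, not the paper's exact formulation; the entropy threshold and window size below are made-up values), one can measure the entropy of the first tokens generated after each tool response and branch additional partial rollouts when it is high:

import torch
import torch.nn.functional as F

def token_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Entropy (in nats) of the next-token distribution at each position.

    logits: [num_tokens, vocab_size] scores for the tokens generated
    immediately after a round of tool-call feedback.
    """
    log_probs = F.log_softmax(logits.float(), dim=-1)
    return -(log_probs.exp() * log_probs).sum(dim=-1)

def should_branch(post_tool_logits: torch.Tensor,
                  entropy_threshold: float = 1.2,
                  window: int = 8) -> bool:
    """Branch extra partial rollouts when the first `window` post-tool tokens
    are highly uncertain. Threshold and window are illustrative values,
    not the paper's hyperparameters."""
    ent = token_entropy(post_tool_logits[:window])
    return ent.mean().item() > entropy_threshold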

In the figure (right), we validate ARPO's performance across 13 datasets. Notably, Qwen3-14B trained with ARPO achieves 61.2% Pass@5 on GAIA and 24.0% on HLE, while requiring only about half the tool calls used by GRPO during training.
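The entropy-based adaptive rollout described in the abstract can then be pictured as splitting a fixed rollout budget between full ("global") trajectories and step-level branches. The sketch below is purely illustrative: sample_trajectory, continue_from_step, and the trajectory attributes are hypothetical stand-ins for the actual rollout code, and the budget values are not the paper's settings.

def adaptive_rollout(prompt, rollout_budget=16, num_global=8, branch_factor=2):
    """Spend part of the budget on global trajectories, the rest on
    step-level branches spawned at high-entropy tool-call steps."""
    trajectories = [sample_trajectory(prompt) for _ in range(num_global)]
    remaining = rollout_budget - num_global

    for traj in list(trajectories):        # iterate over the initial global rollouts
        for step in traj.tool_call_steps:  # steps right after tool feedback
            if remaining <= 0:
                return trajectories
            if should_branch(step.post_tool_logits):
                n = min(branch_factor, remaining)
                # Branch: keep the shared prefix up to this step and re-sample the continuation.
                trajectories += [continue_from_step(traj, step) for _ in range(n)]
                remaining -= n
    return trajectories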

Usage

This model can be loaded and used with the Hugging Face transformers library. Ensure you have the library installed (pip install transformers) along with accelerate, which is needed for the device_map="auto" option used below (pip install accelerate).

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the model and tokenizer
# You can choose other ARPO checkpoints from the Hugging Face collection:
# https://huggingface.co/collections/dongguanting/arpo-688229ff8a6143fe5b4ad8ae
model_id = "dongguanting/Qwen2.5-3B-ARPO"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16, # Use torch.float16 if bfloat16 is not supported on your GPU
    device_map="auto",          # Automatically distributes model across available devices (e.g., GPUs)
    trust_remote_code=True      # Not strictly required for Qwen2.5 on recent transformers versions, but harmless
)

# Example chat completion using the Qwen chat template
messages = [
    {"role": "user", "content": "Hello, how are you today?"},
]

# Apply the chat template (without tokenizing) and add the generation prompt.
# This formats the messages into the model's chat format (e.g., <|im_start|>user ... <|im_end|>).
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate response
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_k=20,
    top_p=0.8,
    repetition_penalty=1.05,
    # Use EOS and PAD token IDs from the model's configuration
    eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|im_end|>")],
    pad_token_id=tokenizer.pad_token_id,
)

# Trim the prompt tokens and decode only the newly generated response
response_ids = generated_ids[0][model_inputs.input_ids.shape[1]:]
response = tokenizer.decode(response_ids, skip_special_tokens=True)
print(response)

# For detailed multi-turn conversations and tool-use examples,
# refer to the official GitHub repository's Quick Start section.
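As a minimal illustration of how such a multi-turn, tool-augmented rollout could be wired up with the model and tokenizer loaded above: the <python>...</python> and <result>...</result> tags and the naive execute_python helper below are assumptions made for this sketch, not the official format; the actual prompt format and sandboxed tool executor are defined in the GitHub repository.

import contextlib
import io
import re

def execute_python(code: str) -> str:
    """Naive stand-in for a sandboxed interpreter; use the repository's tool executor in practice."""
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            exec(code, {})
    except Exception as e:
        return f"Error: {e}"
    return buf.getvalue()

def run_agent(question: str, max_turns: int = 4) -> str:
    """Alternate between model generation and tool execution until the model stops calling tools."""
    messages = [{"role": "user", "content": question}]
    reply = ""
    for _ in range(max_turns):
        text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
        inputs = tokenizer([text], return_tensors="pt").to(model.device)
        output_ids = model.generate(**inputs, max_new_tokens=1024)
        reply = tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
        messages.append({"role": "assistant", "content": reply})

        # If the model emitted a tool call, execute it and feed the result back as a new turn.
        match = re.search(r"<python>(.*?)</python>", reply, re.DOTALL)
        if match is None:
            break  # no further tool call: treat the reply as the final answer
        result = execute_python(match.group(1))
        messages.append({"role": "user", "content": f"<result>{result}</result>"})
    return reply

print(run_agent("What is the 10th Fibonacci number? Use Python if helpful."))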

Citation

If you find this work helpful, please cite our paper:

@misc{dong2025arpo,
      title={Agentic Reinforced Policy Optimization}, 
      author={Guanting Dong and Hangyu Mao and Kai Ma and Licheng Bao and Yifei Chen and Zhongyuan Wang and Zhongxia Chen and Jiazhen Du and Huiyang Wang and Fuzheng Zhang and Guorui Zhou and Yutao Zhu and Ji-Rong Wen and Zhicheng Dou},
      year={2025},
      eprint={2507.19849},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2507.19849}, 
}

License

This project is released under the MIT License.

Acknowledgements

This training implementation builds upon Tool-Star, Llama Factory, verl and ReCall. For evaluation, we rely on WebThinker, HIRA, WebSailor, Search-o1, and FlashRAG. The Python interpreter design references ToRA and ToRL, while our models are trained using Qwen2.5. We express our sincere gratitude to these projects for their invaluable contributions to the open-source community.

Contact

For any questions or feedback, please reach out to us at [email protected].
