] | Published as a conference paper at ICLR 2025
MA-RLHF: REINFORCEMENT LEARNING FROM HUMAN FEEDBACK WITH MACRO ACTIONS
Yekun Chai∗ Haoran Sun∗ Huang Fang Shuohuan Wang Yu Sun Hua Wu
Baidu Inc.
{chaiyekun,fanghuang,wangshuohuan}@baidu.com
[email protected]
ABSTRACT
Reinforcement learning from human feedback (RLHF) has demonstrated effec-
tiveness in aligning large language models (LLMs) with human preferences. How-
ever, token-level RLHF suffers from the credit assignment problem over long
sequences, where delayed rewards make it challenging for the model to discern
which actions contributed to preferred outcomes. This hinders learning efficiency
and slows convergence. In this paper, we propose MA-RLHF, a simple yet ef-
fective RLHF framework that incorporates macro actions — sequences of tokens
or higher-level language constructs — into the learning process. By operating
at a higher level of abstraction, our approach reduces the temporal distance be-
tween actions and rewards, facilitating faster and more accurate credit assignment.
This results in more stable policy gradient estimates and enhances learning effi-
ciency within each episode, all without increasing computational complexity dur-
ing training or inference. We validate our approach through extensive experiments
across various model sizes and tasks, including text summarization, dialogue gen-
eration, question answering, and program synthesis. Our method achieves sub-
stantial performance improvements over standard RLHF, with performance gains
of up to 30% in text summarization and code generation, 18% in dialogue, and 8%
in question answering tasks. Notably, our approach reaches parity with vanilla
RLHF 1.7 ∼ 2 times faster in terms of training time and continues to outper-
form it with further training. We make our code and data publicly available at
https://github.com/ernie-research/MA-RLHF.
1 INTRODUCTION
Recent advancements in large language models (LLMs) have revolutionized natural language pro-
cessing tasks, demonstrating impressive capabilities across a wide range of applications such as
code generation (Roziere et al., 2023; Chai et al., 2023; Lozhkov et al., 2024), mathematical rea-
soning (Lewkowycz et al., 2022; Anil et al., 2023), and dialogue assistance (OpenAI, 2023; Team
et al., 2023; Anthropic). Despite these successes, aligning LLMs with human values and preferences
remains a critical challenge. Reinforcement learning from human feedback (RLHF) has emerged as
a promising approach to address this alignment issue by incorporating human evaluations into the
training process (Christiano et al., 2017; Ziegler et al., 2019; Stiennon et al., 2020).
Existing RLHF methods (Ouyang et al., 2022; Bai et al., 2022; Askell et al., 2021) mainly optimize decisions at the level of individual tokens, and require processing a vast number of minute
adjustments. However, this fine-grained training paradigm can lead to the credit assignment prob-
lem (Kaelbling et al., 1996; Pang et al., 2019; Machado et al., 2023b; Pignatelli et al., 2023), partic-
ularly when dealing with long-distance dependencies. As LLM agents attempt to optimize decisions
across extensive sequences, the difficulty of attributing credit to specific tokens complicates the reinforcement learning (RL) process (Pignatelli et al., 2024). Moreover, the use of
subword tokenization, such as Byte-Pair Encoding (Sennrich et al., 2016), often splits words into
∗Equal contribution. Correspondence to: YC.
Work done during HS’s internship at Baidu.
smaller pieces. For instance, OpenAI’s ChatGPT1 treats each token as three quarters of a word on average, resulting in sequences that are 33% longer than word counts (OpenAI, 2024), further exacerbating the credit assignment problem.
Additionally, standard RLHF methods may overlook essential local co-occurrence patterns or inherent structures between adjacent tokens in natural language. For example, consider the phrase Big Apple2: treating Big and Apple as isolated decisions misses the cohesive meaning of the term, which actually refers to New York City. The token-level granularity of natural language can hinder the agent’s ability to capture high-level language constructs in RL optimization, as some sequences are better understood when evaluated holistically.
To address these challenges, we propose a new framework called macro-action RLHF (MA-RLHF) that incorporates macro actions — sequences of tokens or high-level language constructs — into the RLHF framework. The concept of macro actions, which has been explored in the planning (Iba, 1989; Korf, 1985; Sacerdoti, 1974) and reinforcement learning (Thrun & Schwartz, 1994; Precup et al., 1997; Hauskrecht et al., 2013) literature, simplifies decision-making by operating at high levels of temporal abstraction under the framework of semi-Markov Decision Processes (SMDPs) (Sutton et al., 1999b). Macro actions leverage temporal abstraction by chunking sequences and reducing the decision resolution, enabling the agent to learn from “long-sighted” macro-level actions instead of “short-sighted” token-level actions. This can potentially lead to improved learning efficiency and scalability. Alternatively, MA-RLHF can also be interpreted from the perspective of reversing tokenization: MA-RLHF serves as a de-tokenization process that reconstructs high-level language units from subword pieces. By merging tokens into macro actions, we reduce the number of decision points and shorten decision trajectories, alleviating the credit assignment problem caused by long temporal distances.
To conclude, our main contributions are as follows:
• We propose MA-RLHF, a simple yet effective RLHF framework that integrates macro actions into RLHF to align LLMs with human preferences. We demonstrate the effectiveness of
our approach through extensive experiments across various datasets and tasks, including text
summarization, dialogue generation, question answering, and code generation.
• We show that MA-RLHF achieves 1.7× to 2× faster learning efficiency in reward scores during
training compared to the standard token-level RLHF, without introducing additional computa-
tional costs during training or inference. MA-RLHF also exhibits strong scalability across
model sizes ranging from 2B to 27B parameters.
• Our analysis reveals that MA-RLHF exhibits robust generalization capabilities under varying experimental settings, such as temperature values and rejection sampling, consistently outperforming the standard RLHF approaches.
2 PRELIMINARIES
We introduce some basic concepts and notations used in RL and RLHF.
2.1 REINFORCEMENT LEARNING AND POLICY OPTIMIZATION
Problem Definition RL addresses the problem of finding a policy to make optimal sequential de-
cisions in environments modeled as a Markov Decision Process (MDP) (Sutton & Barto, 1999).
An MDP is defined by the tuple (S, A, P, r, ρ0, γ), where S denotes a finite set of states, A is a
finite set of actions, P : S × A × S → [0, 1] represents the state transition probability distribution,
r : S × A → R is the reward function, ρ0 : S → [0, 1] defines the initial state distribution, and
γ ∈ (0, 1) is the discount factor that determines the importance of future rewards.
Given a trajectory (s0, a0, s1, a1, · · · ), a reward rt = r(st, at) is received at each time step t. The state-action value function Qπ(st, at) = E_{s_{t+1}, a_{t+1}, ...}[ Σ_{l=0}^{∞} γ^l r_{t+l} ] measures the expected return of taking action at at state st and following policy π thereafter. The value function Vπ(st) = E_{a_t, s_{t+1}, ...}[ Σ_{l=0}^{∞} γ^l r_{t+l} ] estimates the expected return from state st under the policy π. The advantage function Aπ(st, at) = Qπ(st, at) − Vπ(st) reflects the relative value of taking action at at state st compared to the average value of the state.
1https://platform.openai.com/tokenizer
2https://en.wikipedia.org/wiki/Big_Apple
The goal of RL is to find an optimal policy πθ(a | s), parameterized by θ, that maximizes the expected cumulative discounted reward: J(θ) = E_{s0, a0, ...}[ Σ_{t=0}^{∞} γ^t rt ], where s0 ∼ ρ0(s0) represents the initial state distribution, at ∼ πθ(at | st) denotes the action selection based on the policy, and st+1 ∼ P(st+1 | st, at) specifies the state transition dynamics.
Proximal Policy Optimization Policy gradient methods are a common approach for optimizing policies by estimating the gradient of a performance objective with respect to the policy parameters θ. The policy gradient is given by: ∇θJ(θ) = E[ Σ_{t=0}^{∞} At ∇θ log πθ(at | st) ], where the expectation E[·] is taken over the randomness of the initial state, the policy, and the state transitions. The policy gradient indicates how to adjust the policy parameters to improve the expected return. Among the family of policy gradient methods, Proximal Policy Optimization (Schulman et al., 2017, PPO) is perhaps the most widely used due to its simplicity and empirical effectiveness. PPO simplifies TRPO (Schulman et al., 2015) by using a clipped surrogate objective function to penalize large deviations from the old policy, thereby ensuring more stable updates. Specifically, PPO introduces a clipped objective function:
J^{ppo-clip}(θ) = E_t[ min( (πθ(at | st) / πθold(at | st)) At,  clip( πθ(at | st) / πθold(at | st), 1 − ϵ, 1 + ϵ ) At ) ],   (1)
where ϵ is a hyperparameter that defines the clipping range. The expectation Et[. . . ] indicates the empirical average over a finite batch of samples. Nowadays, PPO is usually the first choice for RL practitioners.
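To make Equation (1) concrete, the following is a minimal sketch of the clipped surrogate in PyTorch; the tensor names and the clipping value eps = 0.2 are illustrative assumptions, not details taken from this paper.

```python
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """Minimal sketch of the PPO clipped surrogate objective (Eq. 1).

    logp_new:   log pi_theta(a_t | s_t) under the current policy, shape (T,)
    logp_old:   log pi_theta_old(a_t | s_t) under the old policy, shape (T,)
    advantages: estimated advantages A_t, shape (T,)
    eps:        clipping range epsilon
    """
    ratio = torch.exp(logp_new - logp_old)                    # pi_theta / pi_theta_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # PPO maximizes the surrogate; return its negative as a loss to minimize.
    return -torch.min(unclipped, clipped).mean()
```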
2.2 RLHF FOR HUMAN ALIGNMENT
The post-training of LLMs (Stiennon et al., 2020; Ouyang et al., 2022) is a multi-stage training
paradigm to align LLMs with human preferences. Post-training typically involves three stages:
(1) Supervised Fine-Tuning (SFT) stage: A pre-trained language model (LM) is fine-tuned on a
dataset of human demonstrations, learning to generate responses that align with human instructions
and preferences.
(2) Reward Modeling (RM) stage: A reward model is trained on a labeled preference dataset D = {(x_i, y_i^+, y_i^−)}_{i=1}^{N}, consisting of prompts x_i and pairs of responses (y_i^+, y_i^−), where y_i^+ is preferred over y_i^− by human annotators. The reward model rϕ(x, y), parameterized by ϕ, is trained using the ranking loss LRM = − log σ( rϕ(x, y^+) − rϕ(x, y^−) ), where σ denotes the sigmoid function.
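For illustration, this pairwise ranking loss can be written as follows; the sketch assumes batched scalar rewards from the reward model and is not the authors' code.

```python
import torch
import torch.nn.functional as F

def rm_ranking_loss(reward_chosen, reward_rejected):
    """Pairwise ranking loss L_RM = -log sigmoid(r(x, y+) - r(x, y-)).

    reward_chosen:   scalar rewards for the preferred responses, shape (B,)
    reward_rejected: scalar rewards for the rejected responses, shape (B,)
    """
    # logsigmoid is numerically more stable than log(sigmoid(...)).
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()
```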
(3) RLHF stage: The RL fine-tuning utilizes the RM to provide feedback on the generated outputs,
optimizing the policy using RL methods such as PPO. The reward signal is modified by incorpo-
rating a Kullback-Leibler (KL) divergence penalty to balance the exploration of new policies with
adherence to the SFT model. The reshaped reward is defined as:
R(x, y) = rϕ(x, y) − βDKL(πθ(· | x) ∥ πsft(· | x)),
where πθ represents the policy learned through RL, πsft is the policy produced from the SFT stage,
and β > 0 is a hyperparameter that controls the strength of the KL penalty.
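One common way to realize this reshaped reward is to estimate the KL term from the per-token log-probability gap between the policy and the SFT model on the sampled response; the sketch below reflects that assumption rather than the paper's exact implementation.

```python
import torch

def kl_shaped_reward(rm_score, logp_policy, logp_sft, beta=0.05):
    """Sketch of R(x, y) = r_phi(x, y) - beta * KL(pi_theta || pi_sft).

    rm_score:    scalar reward from the reward model for the full response
    logp_policy: per-token log pi_theta(a_t | s_t) of the sampled response, shape (T,)
    logp_sft:    per-token log pi_sft(a_t | s_t) of the same response, shape (T,)
    beta:        KL penalty coefficient (the value here is illustrative)
    """
    # Monte-Carlo estimate of the sequence-level KL from the sampled tokens.
    kl_estimate = (logp_policy - logp_sft).sum()
    return rm_score - beta * kl_estimate
```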
In the RLHF stage, the PPO algorithm, as detailed in Equation (1), is employed to optimize the RL
policy. In the context of RLHF, we denote the state st = {s0, a0, a1, . . . , at−1} as the sequence
of tokens generated up to time step t, while s0 represents the initial states, i.e., the prompt, and at
represents the token selected at the t-th position.
3 MACRO-ACTION RLHF
3.1 REVISITING MACRO ACTIONS (OPTIONS)
Macro actions, also referred to as options (Sutton et al., 1999b), are high-level constructs that encapsulate a sequence of primitive actions (i.e., subword tokens); by definition, macro actions allow an agent to operate at a coarser temporal scale.
Formally, a macro action is characterized by three components: (1) a policy π : S × A → [0, 1], which guides the action selection; (2) a termination condition ζ : S+ → [0, 1], which determines when the macro action should end; and (3) an initiation set I ⊆ S, which is a subset of states
Figure 1: Illustration of the MA-RLHF optimization framework. Standard RLHF makes decisions
and evaluates value scores at the token level, while MA-RLHF makes decisions over sequences of
tokens at a coarser temporal scale.
that macro actions can begin with. Once initiated at a state s0 ∈ I, the macro action follows policy π until it reaches the termination condition according to ζ. Intuitively, the use of carefully designed macro actions can extend decision-making temporally: it allows the agent to avoid “short-sighted” token-level decisions and encourages “long-sighted” macro-level decisions, thereby simplifying the decision-making process and potentially enhancing learning efficiency.
3.2 RLHF WITH MACRO ACTIONS
We describe how we integrate macro actions into the existing RLHF framework; the resulting framework is named macro-action RLHF (MA-RLHF).
3.2.1 FORMALIZATION OF MACRO ACTIONS
We denote macro actions as ω1, ω2, . . . , ωτ . In the context of LLMs, a macro action ωτ consists of a sequence of consecutive tokens, i.e., ωτ = {a_{t_τ}, a_{t_τ+1}, . . . , a_{t_{τ+1}−1}}, where t_τ is the starting index of the τ-th macro action. We let |ωτ| denote the number of primitive actions that ωτ contains. Unless otherwise specified, we use τ to index macro actions/states and t to index primitive actions/states.
As mentioned in §3.1, macro actions are defined by the policy model, the termination condition, and the initiation set. In MA-RLHF, we set the policy model to be the same as in standard token-level RLHF and let the initiation set be any possible sequence of tokens. Therefore, the macro actions used in MA-RLHF are decided solely by the termination condition, which plays a crucial role in the MA-RLHF framework. We explore three termination conditions in this work:
• n-gram based termination: Following Vezhnevets et al. (2016), we find that n-grams serve as a simple yet effective termination condition for macro actions, i.e., |ωτ| = n, where n represents the length of the n-gram. We consider two variants of the n-gram termination condition: (a) Fixed n-gram: We group tokens into fixed-length n-grams, simplifying the action space while maintaining common linguistic patterns. We empirically find that fixed n-gram macro actions perform best and use them as the default setup (a minimal sketch of this grouping is given after this list). (b) Randomized n-gram: We randomly select the length of an n-gram from a predefined list of lengths n ∈ {2, 3, 5, 10} to introduce variability, allowing the policy to adapt to different sequence lengths.
• Parsing-based termination: ωτ is derived from syntactic or semantic parsing of the input text, aligning macro actions with grammatical structures like phrases or clauses. Concretely, we traverse the constituent tree of the entire sequence using depth-first search (DFS), expanding non-terminal nodes until the current non-terminal node contains no more than a specified threshold of leaf tokens, set at C = 5.
• Perplexity-based (PPL) termination: Perplexity measures the likelihood of a sequence of tokens. Here, the perplexity of a macro action is proportional to the averaged entropy of the tokens within it, i.e., ppl(ωτ) ∝ −(1/|ωτ|) Σ_{a∈ωτ} log p_a. A macro action terminates when it reaches a token that has a negative impact on the perplexity of the macro action. Mathematically, we construct ωτ = {a_{t_τ}, . . . , a_{t_{τ+1}−1}} such that ppl(ωτ ∪ {a_{t_{τ+1}}}) > ppl(ωτ) and ppl({a_{t_τ}, . . . , a_i}) ≥ ppl({a_{t_τ}, . . . , a_{i+1}}) for all t_τ ≤ i ≤ t_{τ+1} − 2.
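As referenced above, a minimal sketch of the default fixed n-gram termination is shown below; the function name and the handling of the final (possibly shorter) chunk are our assumptions, not the paper's implementation.

```python
from typing import List

def fixed_ngram_macro_actions(token_ids: List[int], n: int = 5) -> List[List[int]]:
    """Group a token sequence into fixed-length macro actions (|omega| = n).

    The last macro action may be shorter than n if the sequence length
    is not a multiple of n.
    """
    return [token_ids[i:i + n] for i in range(0, len(token_ids), n)]

# Example: a 12-token response with n = 5 yields macro actions of sizes 5, 5, 2.
print(fixed_ngram_macro_actions(list(range(12)), n=5))
```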
After determining the macro actions based on the termination condition, we apply the state value function and importance sampling at the macro level, as in Equation (1). We provide the implementation details in Appendix D.1.
3.2.2 POLICY OPTIMIZATION WITH MACRO ACTIONS
In MA-RLHF, we adapt the PPO algorithm for optimization, referred to as MA-PPO. In the context of LLMs, expanding the action space with additional macro actions/tokens would require re-architecting the LLM’s vocabulary and retraining the model, which is computationally prohibitive. Thus, we maintain the original action space of the pretrained LLM, whose tokens can be treated as “single-step” primitive options as noted in (Sutton et al., 1999b). The policy πθ still outputs probabilities over individual tokens, but for optimization, we consider the joint probability of the macro action: πθ(ωτ | sτ) = Π_{t=t_τ}^{t_{τ+1}−1} πθ(at | a_{<t}). The macro reward for executing the macro action ωτ at the macro time step τ is defined as: Rτ = E[ Σ_{i=0}^{|ωτ|−1} ρ^i r_{t_τ+i} | sτ ], where rt is the reward received at time step t, and we set the discount factor ρ = 1 in our experiments.
Each macro action represents a contiguous sequence of tokens and is treated as an option in the SMDP framework. The option-level value function with a macro action is then estimated as: V^π(sτ, ωτ) = E[ Rτ + γ V^π(s_{t_{τ+1}}) | sτ, ωτ ], where γ is the discount factor for future rewards beyond the macro action.
The advantage function Aπ(sτ , ωτ ) in MA-PPO determines how much the chosen macro action
outperforms the average, which is defined as Aπ(sτ , ωτ ) = Qπ(sτ , ωτ ) − V π(sτ ). Similar to
the definition stated in §2, Qπ(sτ , ωτ ) is the expected return conditioned on executing ωτ at state
sτ , which is calculated by summing the immediate macro rewards from the macro action with the
discounted value of the subsequent state.
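In practice, the macro-level quantities above can be obtained by aggregating token-level statistics within each macro action. The sketch below assumes fixed n-gram boundaries and sums per-token log-probabilities and rewards into macro-level values; it is an illustrative assumption, not the authors' implementation.

```python
import torch

def aggregate_to_macro(token_logps: torch.Tensor,
                       token_rewards: torch.Tensor,
                       n: int = 5):
    """Aggregate token-level log-probs and rewards into macro-level ones.

    token_logps:   log pi_theta(a_t | a_<t) for each generated token, shape (T,)
    token_rewards: per-token rewards r_t (e.g., KL-shaped, with the RM score
                   added on the final token), shape (T,)
    n:             fixed macro-action length; the last macro may be shorter

    Returns (macro_logps, macro_rewards), each of length ceil(T / n).
    """
    chunks_logp = torch.split(token_logps, n)
    chunks_rew = torch.split(token_rewards, n)
    # log pi(omega | s) = sum_t log pi(a_t | a_<t);  R_tau = sum_t r_t (rho = 1).
    macro_logps = torch.stack([c.sum() for c in chunks_logp])
    macro_rewards = torch.stack([c.sum() for c in chunks_rew])
    return macro_logps, macro_rewards
```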
In MA-PPO, the objective function is adapted for MA-level evaluation. The policy gradient is computed based on the advantage of the MA sequences:
L^{MA-PPO}(θ) = E_τ[ min( (πθ(ωτ | sτ) / πθold(ωτ | sτ)) Âτ,  clip( πθ(ωτ | sτ) / πθold(ωτ | sτ), 1 − ϵ, 1 + ϵ ) Âτ ) ],
where Âτ is the estimated advantage at macro time step τ, ϵ is a constant that defines the clipping range, and πθold is the policy before the update.
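Combining the aggregation above with the standard clipped surrogate, a hedged sketch of the MA-PPO loss (assuming macro-level log-probabilities and advantages have already been computed) looks as follows; it mirrors the token-level PPO sketch but operates over macro time steps.

```python
import torch

def ma_ppo_clip_loss(macro_logp_new: torch.Tensor,
                     macro_logp_old: torch.Tensor,
                     macro_advantages: torch.Tensor,
                     eps: float = 0.2) -> torch.Tensor:
    """Clipped surrogate loss of MA-PPO over macro time steps tau.

    macro_logp_new:   log pi_theta(omega_tau | s_tau) under the current policy
    macro_logp_old:   log pi_theta_old(omega_tau | s_tau) under the old policy
    macro_advantages: estimated macro-level advantages A_hat_tau
    """
    ratio = torch.exp(macro_logp_new - macro_logp_old)
    unclipped = ratio * macro_advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * macro_advantages
    return -torch.min(unclipped, clipped).mean()
```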
3.2.3 CONNECTION TO PREVIOUS METHODS
MA-RLHF builds on and generalizes prior work in the RLHF literature by varying the length of
macro actions. When the macro action length is set to 1, MA-RLHF reduces to the standard token-
level RLHF (Stiennon et al., 2020; Ouyang et al., 2022), operating as an MDP. Conversely, if we
allow |ωτ | → ∞, then MA-RLHF converges toward methods like RLOO (Ahmadian et al., 2024),
REINFORCE (Williams, 1992; Sutton et al., 1999a), and GRPO (Shao et al., 2024), approximating
a contextual bandit problem where decisions are made based on the entire sequence context. By
varying the length of macro actions |ωτ |, MA-RLHF provides a flexible framework that balances
the granularity of action decisions. We provide further analysis on the impact of |ωτ | in §4.3.
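Under the fixed n-gram view, this continuum can be seen directly from the (hypothetical) segmentation helper sketched in §3.2.1: n = 1 recovers one decision per token, while n equal to the sequence length collapses the response into a single decision.

```python
def fixed_ngram_macro_actions(token_ids, n):
    """Same hypothetical helper as in the Section 3.2.1 sketch."""
    return [token_ids[i:i + n] for i in range(0, len(token_ids), n)]

tokens = list(range(8))
# n = 1 recovers standard token-level RLHF (one decision per token, an MDP).
assert len(fixed_ngram_macro_actions(tokens, n=1)) == len(tokens)
# n = len(tokens) treats the whole response as a single macro action,
# approximating a contextual-bandit formulation (|omega| -> infinity).
assert len(fixed_ngram_macro_actions(tokens, n=len(tokens))) == 1
```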
4 EXPERIMENTS
4.1 EXPERIMENTAL SETTINGS
Tasks and Datasets We evaluate MA-RLHF on three different datasets for open-ended generation
tasks: TL;DR (Stiennon et al., 2020) dataset for text summarization, Anthropic Helpful and Harm-
less (HH-RLHF) (Bai et al., 2022) for dialogue generation3, and WebGPT Comparison (Nakano
et al., 2021) for question answering. Additionally, we evaluate MA-RLHF on code generation using
the APPS (Hendrycks et al., 2021) dataset. More details can be found in Appendix B.1.
3https://huggingface.co/datasets/Dahoas/full-hh-rlhf
Figure 2: Test RM scores of Gemma-2B and Gemma-7B models
on the TL;DR dataset. The shaded regions represent the standard
deviation on test RM scores across training runs.
Figure 3: RM score distribution for PPO and MA-PPO (2B) at final steps (4.6k) on TL;DR.
Figure 4: Win rates of MA-PPO against vanilla PPO on TL;DR (left), HH-RLHF (middle) and
WebGPT Comparisons (right), estimated by GPT-4 and Human.
Base Models and Training Details For open-ended generation tasks, we use pre-trained Gemma-
2B (Team et al., 2024) as our base model; we further adopt Gemma-7B and Gemma-2-27B to test
the scaling trend. For the program synthesis task, we use CodeGemma-1.1-2B and CodeGemma-
1.1-7B-it as our base models. The data split for SFT / RM / PPO and the hyperparameters used in
SFT / RM / PPO stages are detailed in Appendix B.2. The implementation details of MA-PPO can
be found in Appendix E.
Evaluation For open-ended generation tasks, our evaluation metrics include RM scores, GPT-4 pairwise evaluation, and human pairwise evaluation. To compute the RM score, we randomly sample 2k validation instances for the TL;DR and HH-RLHF datasets and use the default validation set of the WebGPT dataset. For GPT-4 and human evaluations, we estimate the win rate on 50 instances drawn from the instances used in the RM evaluation. The GPT-4 and human evaluations are based on task-specific criteria: relevance, coherence, consistency, and fluency for TL;DR; helpfulness for HH-RLHF; and factual accuracy, coherence, and usefulness for WebGPT. We follow prior studies (Askell et al., 2021; Zheng et al., 2024) in randomizing the order of responses during evaluation to mitigate potential evaluation biases. The prompts used for the GPT-4 evaluation are provided in Appendix F.1, and the annotation rules used for human evaluation are given in Appendix F.2. For the program synthesis task, we use the pass@1 and pass@5 metrics to assess model performance, evaluated on the provided 5k test set.
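For reference, pass@k is typically computed with the unbiased estimator of Chen et al. (2021); the snippet below is our rendering of that standard formula and is not taken from the paper's evaluation code.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: number of samples generated per problem
    c: number of samples that pass all unit tests
    k: the k in pass@k (requires k <= n)
    """
    if n - c < k:
        return 1.0  # every size-k subset contains at least one correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 20 samples per problem, 3 correct -> estimated pass@5
print(round(pass_at_k(n=20, c=3, k=5), 4))
```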
4.2 MAIN RESULTS
In this section, we present the main results of applying MA-PPO across three key tasks: summa-
rization, dialogue, and question answering. The main takeaway is that MA-PPO consistently out-
performs vanilla PPO in terms of both training efficiency and generation quality; MA-PPO obtains
a significant improvement in testing reward model scores and human/GPT-4 evaluation win rates.
TL;DR Summarization For the TL;DR summarization task, MA-PPO shows a marked improve-
ment over vanilla PPO. As shown in Figure 2, MA-PPO achieves parity with vanilla PPO approx-
imately 1.7 – 2 times faster during training. Specifically, Gemma-2B trained with 1.7k MA-PPO
updates reaches similar testing RM scores obtained by vanilla PPO trained with 3.7k steps. We
also find similar trends when scaling up the parameter sizes to 7B, demonstrating the generalized
capability of MA-PPO on model sizes.
Moreover, Figure 3 highlights the distribution of RM scores, where MA-PPO consistently shifts towards higher RM scores compared to vanilla PPO. Further evaluation using GPT-4, given in the left
figure of Figure 4, shows that MA-PPO achieves 78% and 86% win rate over vanilla PPO for the 2B
and 7B models, respectively. Human evaluation gives similar results, where MA-PPO obtains win
rates of 74% and 69%, further demonstrating the effectiveness of macro actions. The final testing
RM scores of MA-PPO and vanilla PPO are given in Table 2.
Table 1: Agreement among RM, GPT-4, and human evaluations on TL;DR.

#Param            RM      GPT-4    Human
  2B    RM       100%       -        -
  2B    GPT-4     78%     100%       -
  2B    Human     76%      58%     100%
  7B    RM       100%       -        -
  7B    GPT-4     78%     100%       -
  7B    Human     74%      64%     100%

Table 2: Test RM scores of vanilla PPO and MA-PPO on TL;DR, HH-RLHF, and WebGPT datasets.

Model               TL;DR          HH-RLHF        WebGPT
Vanilla PPO (2B)    0.84           1.31           -0.62
MA-PPO (2B)         1.41 (+68%)    1.55 (+18%)    -0.60 (+3%)
Vanilla PPO (7B)    1.90           1.05           -0.61
MA-PPO (7B)         2.47 (+30%)    1.24 (+18%)    -0.56 (+8%)
Figure 5: Performance of MA-PPO with various macro action termination strategies on the TL;DR
dataset using Gemma-2B. Left: Test RM scores for different termination strategies. Right: GPT-
4 evaluation across four dimensions – relevance, coherence, consistency, and fluency – comparing
different MA termination methods.
HH-RLHF Dialogue We use the HH-RLHF dataset to evaluate the helpfulness and harmlessness
of single-turn dialogues. MA-PPO shows clear advantages over vanilla PPO, as depicted in the
middle figure of Figure 4. GPT-4 evaluations show that MA-PPO yields a 72% win rate for the
Gemma-7B model, compared to 58% for the Gemma-2B model. Human evaluation results align
with these findings, with the win rate increasing from 52% to 56% as model size scales from 2B
to 7B. The testing RM score of MA-PPO and vanilla PPO are presented in Table 2. These results
highlight the scalability and effectiveness of MA-PPO in dialogue tasks. We refer to Appendix C.1
for detailed experimental results.
WebGPT Comparisons We evaluate MA-PPO on the WebGPT Comparison dataset for question-answering tasks. As shown in Figure 4 (Right), MA-PPO consistently outperforms vanilla PPO, with GPT-4 evaluations yielding a win rate of 64% for the Gemma-7B model. This result demonstrates the robustness of MA-PPO across different tasks, including more structured tasks like question answering. More experimental details are provided in Appendix C.2.
Validating Model-based Judgments with Human Evaluation We evaluate the reliability of our
evaluation methods by calculating the agreement between the reward model, GPT-4, and human
evaluators. Since GPT-4 and human evaluations are conducted pairwise, we determine the reward
model’s win rate by selecting the summary with the higher RM score. The results, shown in Table 1,
demonstrate that the reward model aligns more closely with both GPT-4 and human evaluations.
Furthermore, the agreement between GPT-4 and human evaluators averaged 62% across models,
reinforcing the consistency and validity of our evaluation framework.
4.3 ANALYZING THE USE OF MACRO ACTIONS
We study the performance of various termination strategies. Unless otherwise specified, we conduct
our analysis on the TL;DR dataset.
4.3.1 EXPLORING DIFFERENT STRATEGIES FOR MA TERMINATION (ζ)
In MA-RLHF, the termination condition (ζ) for macro actions is critical as it determines when a
macro action should conclude. We compare the performance of various termination strategies, particularly with respect to reward maximization and linguistic coherence. The termination strategies studied in this section include fixed / randomized n-gram-based, parsing-based, and perplexity-based termination, as described in §3.2.1; please see Figure 12 for a detailed illustration.
Figure 5 illustrates the overall test-set performance on RM scores (Left) and GPT-4 evaluation scores (Right) with different MA termination strategies. All macro action termination strategies outperform the vanilla PPO approach, underscoring the importance of temporal abstraction in decision-making. Figure 5 (Left) shows that the n-gram based approaches, both fixed and randomized, achieve the best results among the strategies considered. Notably, randomized n-gram-based termination performs the best across multiple dimensions, including relevance, coherence, and consistency, as shown in Figure 5 (Right). As expected, the perplexity-based termination enhances fluency and is most suited for tasks that prioritize smooth and natural language generation. Furthermore, parsing-based termination shows a promising ability to handle complex grammar, as it is designed to better capture linguistic structures.
Figure 6: Test RM scores of different n values in MA-PPO evaluated by the corresponding RM on the TL;DR (left) and HH-RLHF (right) datasets.
Figure 7: GPT-4 scores of vanilla PPO and MA-PPO with different n values on TL;DR.
Figure 8: The effect of temperature on RM scores for varying sample sizes (Best-of-N) across models. (Left): RM score of the SFT model under different temperatures and sample sizes. (Mid): RM score of vanilla PPO under the same settings. (Right): RM score of MA-PPO.
4.3.2 ABLATION STUDY: VARYING n IN MA-RLHF
The n-gram based macro action strategy in MA-RLHF uses a hyper-parameter n to control the
length of macro actions. Notably, when n = 1, MA-PPO is equivalent to vanilla PPO, and treats
the problem as a traditional Markov Decision Process (MDP), making decisions token by token. In
contrast, setting n → ∞ corresponds to the REINFORCE algorithm (McGovern & Sutton, 1998),
where the entire sequence is treated as a single macro action, akin to a contextual bandit problem, as
discussed in § 3.2.3. For intermediate values of n (i.e., n ∈ (1, ∞)), MA-PPO falls under the SMDP
framework, which allows for temporally extended actions; see §3. This continuum between MDPs
and contextual bandits highlights the flexibility of the MA-RLHF approach in handling varying
levels of temporal abstraction.
RM Scores We conducted experiments with varying values of n (n ∈ {3, 5, 10, ∞}) on the TL;DR
and HH-RLHF datasets. Figure 6 shows that all values of n lead to performance improvements
over the vanilla PPO (n = 1), indicating the advantage of modeling sequences of tokens as macro
actions. Notably, for the TL;DR dataset, n = ∞ yields the highest RM score, suggesting that
treating the entire sequence as a macro action is particularly effective for the summarization task.
For the HH-RLHF dataset, setting n = 10 gives the best performance, likely because this task
benefits from moderate-length macro actions that can capture essential linguistic structures while
maintaining sufficient granularity.
GPT-4 Evaluation Analysis As shown in Figure 7, setting n = 5 strikes a good balance between
relevance, coherence, and consistency; it outperforms both smaller and larger values of n. These findings
align with the semi-MDP framework: increasing n allows for better credit assignment and context
retention, but excessive abstraction (e.g., n = ∞) sacrifices fine-grained control. Overall, moderate
values of n = 5 and n = 10 provide the best trade-offs, highlighting the adaptability across tasks.
4.4 GENERALIZATION PROBING IN MACRO ACTIONS
Figure 9: Evaluation results for vanilla PPO and MA-PPO on Gemma-2-27B using the TL;DR dataset. Left: RM scores on the validation set. Mid: Distribution of RM scores for vanilla PPO and MA-PPO (27B) at final steps (4.6k). Right: Scaling trend on the TL;DR dataset across 2B, 7B, and 27B model sizes, showing RM scores, GPT-4 evaluation, and human evaluation results.
Figure 10: RM score shifting pattern after RLHF training. Left: RM scores of best-of-N (N = 8) sampling compared to the SFT model. Mid Left: RM scores of vanilla PPO compared to the SFT model. Mid Right: RM scores of MA-PPO (n = 5) compared to the SFT model. Right: RM scores of MA-PPO (n = ∞) compared to the SFT model.
Robustness on Rejection Sampling vs. Temperature Best-of-N (a.k.a. rejection sampling) (Touvron et al., 2023) enhances response quality by selecting the highest-reward response from N samples generated by the policy model. We compare MA-PPO, SFT, and vanilla PPO using best-of-N sampling across various temperatures T ∈ {0.2, 0.4, 0.6, 0.8, 1.0, 1.2} and sample sizes N ∈ {4, 8, 16, 32}. As shown in Figure 8, best-of-N sampling improves RM scores for all methods, with performance increasing as N grows. We observe that SFT and vanilla PPO are sensitive to
temperature variations, requiring specific adjustments to achieve optimal results. In contrast, MA-PPO demonstrates robustness to sampling temperature: it consistently delivers the best performance at T = 1.2 and shows consistent improvement across all tested temperatures. Moreover, MA-PPO maintains stable performance across varying temperature settings, as detailed in Appendix D.4, highlighting its robustness and generalization capabilities under different sampling temperatures.
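The best-of-N selection step itself is straightforward; the sketch below assumes caller-supplied (hypothetical) sampling and reward-scoring functions rather than any specific implementation from the paper.

```python
from typing import Callable, List

def best_of_n(prompt: str,
              sample_fn: Callable[[str], str],
              reward_fn: Callable[[str, str], float],
              n: int = 8) -> str:
    """Return the highest-reward response among N samples (rejection sampling).

    sample_fn: draws one response from the policy for the given prompt (hypothetical)
    reward_fn: scores a (prompt, response) pair with the reward model (hypothetical)
    """
    candidates: List[str] = [sample_fn(prompt) for _ in range(n)]
    return max(candidates, key=lambda resp: reward_fn(prompt, resp))
```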
Scaling Trends up to 27B Models We evaluate the performance of MA-PPO across different model
sizes, specifically Gemma-2B, 7B, and 27B. As demonstrated in Figure 9 (Left and Mid), MA-
PPO consistently surpasses vanilla PPO, exhibiting higher RM scores throughout training. Figure 9
(Right) presents the scaling trend of MA-PPO across the 2B, 7B, and 27B models in terms of testing
RM scores, GPT-4, and human evaluations. The experimental results underscore the scalability and
robust performance of MA-PPO across varying model sizes.
Analyzing the Impact on RM Score Distribution We evaluate the RM score distribution shift after
applying RLHF using vanilla PPO and MA-PPO on the TL;DR dataset, with the SFT model serving
as the baseline. To further contextualize the impact of RLHF, we include the Best-of-N sampling
(N = 8) on the SFT model. As illustrated in Figure 10, Best-of-N enhances overall response quality
but falls short compared to RLHF. While vanilla PPO shifts the distribution towards higher RM
scores, it leaves a significant number of low-quality, long-tailed instances. In contrast, MA-PPO demonstrates a more pronounced positive impact, effectively reducing the number of low-quality outliers and improving the overall score distribution compared with vanilla PPO. This highlights the robustness of MA-PPO in enhancing response quality through RLHF.
4.5 ADDITIONAL ANALYSIS
Impact on L2-Norm of Advantage and Q Values We present the L2-norm of both the advantage
and Q-values for MA-PPO and vanilla PPO during training in Figure 11. The advantage function,
which reflects the difference between the expected return (Q-value) and the baseline, is critical in
guiding policy optimization. A lower L2-norm of both the advantage and Q-values suggests more
stable and less noisy policy updates, likely contributing to the faster learning speed observed in §4.2.
The policy gradient for a sequence of length T is given by: ∇θJ = E[ Σ_{t=1}^{T} ∇θ log πθ(a_t | s_t) · R ], where R is the sequence reward provided by the RM. In the case of using n-gram based macro actions, the sequence length is reduced by a factor of n, shortening the decision horizon: T → T/n. This reduction in the number of actions, T/n, where n > 1, implies that the temporal distance between actions and corresponding rewards is decreased, thus reducing the variance in the gradient estimate and improving credit assignment. We refer readers to Mann & Mannor (2014) for the theoretical foundations of variance reduction through macro actions and their benefits in RL.
Figure 11: L2 norm of advantages and Q-values during training for MA-PPO and vanilla PPO. Left: L2 norm of advantages over training steps; Right: L2 norm of Q-values.
Table 3: Pass@k (k ∈ {1, 5}) metric evaluated on the APPS test set.

                   CodeGemma-2B                CodeGemma-7B
                   PPO      MA-PPO             PPO      MA-PPO
pass@1   Inter.    2.82     3.25  (+15%)       4.26     6.22  (+46%)
pass@1   Intro.    15.26    16.56 (+8%)        20.90    26.74 (+28%)
pass@1   Comp.     0.92     0.94  (+2%)        1.21     2.00  (+65%)
pass@1   All       4.92     5.45  (+11%)       6.98     9.48  (+35%)
pass@5   Inter.    4.10     4.37  (+7%)        6.57     8.37  (+27%)
pass@5   Intro.    17.30    18.30 (+6%)        23.30    30.30 (+30%)
pass@5   Comp.     1.70     1.60  (-6%)        2.30     3.30  (+43%)
pass@5   All       6.26     6.60  (+5%)        9.06     11.74 (+30%)
Case Study We show some qualitative examples in Appendix G.1, demonstrating that MA-PPO can
produce more coherent and contextually appropriate responses compared to vanilla PPO, capturing
both short/long-term dependencies effectively.
Extended Experiments: Code Generation We further assess the effectiveness of MA-PPO on the
code generation task. Following Shojaee et al. (2023); Liu et al. (2023), we utilize the compiler sig-
nal as the final reward; see Appendix B.5 for implementation details. We compare the performance
of MA-PPO and vanilla PPO using the pass@k (k=1, 5) metric (Chen et al., 2021) on the 5k test
set of the APPS dataset (Hendrycks et al., 2021). As shown in Table 3, MA-PPO significantly outperforms vanilla PPO in both pass@1 and pass@5 metrics, with more pronounced improvements
as model size scales. Notably, for the 7B model, MA-PPO achieves an improvement of +35% in
pass@1 and +30% in pass@5 over vanilla PPO, demonstrating the effectiveness of our approach in
code generation tasks.
5 RELATED WORK
LLM Alignment RLHF has shown impressive success in aligning LLMs with human preferences
through multi-stage training, including SFT, RM, and RL fine-tuning (Ziegler et al., 2019; Stien-
non et al., 2020; Ouyang et al., 2022; Sun et al., 2025). Recent research has explored optimization
methods for RL in LLMs, employing both online (Ahmadian et al., 2024; Farebrother et al., 2024;
Shen et al., 2024; Chakraborty et al., 2024; Shao et al., 2024) and offline RL algorithms (Snell et al.,
2023; Hu et al., 2023; Yu et al., 2024) to address training instability, improve efficiency (Tang et al.,
2024) and diversity (Sun et al., 2025). Improvements to RM learning have been proposed, such as
parameter scaling (Gao et al., 2023), fine-grained reward (Wu et al., 2023), tool use (Li et al., 2024),
and model merging (Ram´e et al., 2024; Rame et al., 2024). Alternatively, direct policy optimiza-
tion (Rafailov et al., 2024; Ethayarajh et al., 2024; Gheshlaghi Azar et al., 2023; Rosset et al., 2024)
has emerged as a promising approach, bypassing the instability of RL while directly aligning mod-
els to human preferences. In this paper, we enhance the RLHF action space by integrating macro
actions, a well-established concept in RL (Sutton et al., 1999b; Mann & Mannor, 2014).
Macro Action in RL Macro actions introduce temporal abstraction in RL by grouping sequences of
primitive actions, reducing decision complexity and improving long-horizon credit assignment (Pre-
cup et al., 1997; Hauskrecht et al., 2013; Sutton et al., 1999b; Pignatelli et al., 2024; Machado et al.,
2023a). This method has demonstrated its utility in speeding up convergence and stabilizing policy
updates in various domains (Mann & Mannor, 2014; Solway et al., 2014). Our work applies macro
actions to RLHF in LLM training, leveraging this structure to enhance scalability and optimize credit
assignment over extended sequences.
6 CONCLUSION AND FUTURE WORK
In this paper, we introduced MA-RLHF, a novel framework that incorporates macro actions into
RLHF to enhance the alignment of LLMs with human preferences. Our approach demonstrates con-
sistent improvements across multiple tasks, including summarization, dialogue generation, question
answering, and code generation. Notably, MA-RLHF achieves parity with vanilla RLHF 1.7x to 2x
faster in reward scores without incurring additional computational overhead, showing robust scala-
bility across model sizes ranging from 2B to 27B parameters. Exploring MA-RLHF in complex step-by-step reasoning tasks is a promising direction for future research.
REPRODUCIBILITY STATEMENT
We are committed to ensuring the reproducibility of the experiments presented in Section 4. To this
end, we make the source code and model checkpoints publicly available at https://github.com/ernie-research/MA-RLHF. The detailed source code for training and evaluating both
the conventional RLHF and our proposed MA-RLHF approach is included in the supplementary
materials. We believe that these efforts will enable researchers to rigorously verify our findings and
build upon our work.
ACKNOWLEDGMENTS
We would like to express our gratitude to the anonymous reviewers for their insightful and construc-
tive feedback.
REFERENCES
Arash Ahmadian, Chris Cremer, Matthias Gallé, Marzieh Fadaee, Julia Kreutzer, Ahmet Üstün, and Sara Hooker. Back to basics: Revisiting REINFORCE style optimization for learning from human feedback in LLMs. arXiv preprint arXiv:2402.14740, 2024.
Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Pas-
sos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H.
Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Mor-
eira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu,
Yujing Zhang, Gustavo Hern´andez ´Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan A.
Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin
Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Cl´ement Crepy, Shachi Dave,
Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark D´ıaz, Nan Du, Ethan Dyer, Vladimir Fein-
berg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann,
Lucas Gonzalez, and et al. Palm 2 technical report. CoRR, abs/2305.10403, 2023.
doi:
10.48550/arXiv.2305.10403. URL https://doi.org/10.48550/arXiv.2305.10403.
Anthropic. Introducing the next generation of Claude — anthropic.com. https://www.anthropic.com/news/claude-3-family. [Accessed 22-07-2024].
Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones,
Nicholas Joseph, Benjamin Mann, Nova DasSarma, Nelson Elhage, Zac Hatfield-Dodds, Danny
Hernandez, Jackson Kernion, Kamal Ndousse, Catherine Olsson, Dario Amodei, Tom B. Brown,
Jack Clark, Sam McCandlish, Chris Olah, and Jared Kaplan. A general language assistant as a
laboratory for alignment. CoRR, abs/2112.00861, 2021. URL https://arxiv.org/abs/
2112.00861.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn
Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jack-
son Kernion, Tom Conerly, Sheer El Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Her-
nandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine
Olsson, Dario Amodei, Tom B. Brown, Jack Clark, Sam McCandlish, Chris Olah, Benjamin
Mann, and Jared Kaplan. Training a helpful and harmless assistant with reinforcement learning
from human feedback. CoRR, abs/2204.05862, 2022. doi: 10.48550/arXiv.2204.05862. URL
https://doi.org/10.48550/arXiv.2204.05862.
Yekun Chai, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, and Hua Wu. ERNIE-code: Beyond English-centric cross-lingual pretraining for programming languages. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (eds.), Findings of the Association for Computational Linguistics: ACL 2023, pp. 10628–10650, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-acl.676. URL https://aclanthology.org/2023.findings-acl.676.
Souradip Chakraborty, Jiahao Qiu, Hui Yuan, Alec Koppel, Furong Huang, Dinesh Manocha, Am-
rit Singh Bedi, and Mengdi Wang. Maxmin-rlhf: Towards equitable alignment of large language
models with diverse human preferences. arXiv preprint arXiv:2402.08925, 2024.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared
Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large
language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
Paul F. Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, and Dario Amodei.
Deep reinforcement learning from human preferences. In Isabelle Guyon, Ulrike von Luxburg,
Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett
(eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neu-
ral Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp.
4299–4307, 2017. URL https://proceedings.neurips.cc/paper/2017/hash/
d5e2c0adad503c91f91df240d0cd4e49-Abstract.html.
Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha
Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models.
arXiv preprint arXiv:2407.21783, 2024.
Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. Kto: Model
alignment as prospect theoretic optimization. arXiv preprint arXiv:2402.01306, 2024.
Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. Eli5:
Long form question answering. In Proceedings of the 57th Annual Meeting of the Association for
Computational Linguistics, pp. 3558–3567, 2019.
Jesse Farebrother, Jordi Orbay, Quan Vuong, Adrien Ali Ta¨ıga, Yevgen Chebotar, Ted Xiao, Alex
Irpan, Sergey Levine, Pablo Samuel Castro, Aleksandra Faust, et al. Stop regressing: Training
value functions via classification for scalable deep rl. arXiv preprint arXiv:2403.03950, 2024.
Leo Gao, John Schulman, and Jacob Hilton. Scaling laws for reward model overoptimization. In An-
dreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan
Scarlett (eds.), International Conference on Machine Learning, ICML 2023, 23-29 July 2023,
Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pp. 10835–
10866. PMLR, 2023. URL https://proceedings.mlr.press/v202/gao23h.html.
Mohammad Gheshlaghi Azar, Mark Rowland, Bilal Piot, Daniel Guo, Daniele Calandriello, Michal
Valko, and R´emi Munos. A general theoretical paradigm to understand learning from human
preferences. arXiv e-prints, pp. arXiv–2310, 2023.
Milos Hauskrecht, Nicolas Meuleau, Leslie Pack Kaelbling, Thomas L Dean, and Craig Boutilier. Hierarchical solution of Markov decision processes using macro-actions. arXiv preprint arXiv:1301.7381, 2013.
Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, and Jacob Steinhardt. Measuring coding challenge competence with APPS. In Joaquin Vanschoren and Sai-Kit Yeung (eds.), Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual, 2021. URL https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/c24cd76e1ce41366a4bbe8a49b02a028-Abstract-round2.html.
Jian Hu, Li Tao, June Yang, and Chandler Zhou. Aligning language models with offline reinforce-
ment learning from human feedback. CoRR, abs/2308.12050, 2023. doi: 10.48550/ARXIV.2308.
12050. URL https://doi.org/10.48550/arXiv.2308.12050.
Glenn A Iba. A heuristic approach to the discovery of macro-operators. Machine Learning, 3:
285–317, 1989.
Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly
supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meet-
ing of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1601–1611,
2017.
Leslie Pack Kaelbling, Michael L. Littman, and Andrew W. Moore. Reinforcement learning: a
survey. J. Artif. Int. Res., 4(1):237–285, May 1996. ISSN 1076-9757.
Richard E Korf. Learning to solve problems by searching for macro-operators. 1985.
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay V. Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra. Solving quantitative reasoning problems with language models. In NeurIPS, 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/18abbeef8cfe9203fdf9053c9c4fe191-Abstract-Conference.html.
Lei Li, Yekun Chai, Shuohuan Wang, Yu Sun, Hao Tian, Ningyu Zhang, and Hua Wu. Tool-augmented reward modeling. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=d94x0gWTUX.
Jiate Liu, Yiqin Zhu, Kaiwen Xiao, Qiang Fu, Xiao Han, Wei Yang, and Deheng Ye. RLTF: re-
inforcement learning from unit test feedback. Trans. Mach. Learn. Res., 2023, 2023. URL
https://openreview.net/forum?id=hjYmsV6nXZ.
Anton Lozhkov, Raymond Li, Loubna Ben Allal, Federico Cassano, Joel Lamy-Poirier, Nouamane
Tazi, Ao Tang, Dmytro Pykhtar, Jiawei Liu, Yuxiang Wei, et al. Starcoder 2 and the stack v2: The
next generation. arXiv preprint arXiv:2402.19173, 2024.
Marlos C. Machado, Andre Barreto, Doina Precup, and Michael Bowling. Temporal abstraction in
reinforcement learning with the successor representation. Journal of Machine Learning Research,
24(80):1–69, 2023a. URL http://jmlr.org/papers/v24/21-1213.html.
Marlos C Machado, Andre Barreto, Doina Precup, and Michael Bowling. Temporal abstraction in
reinforcement learning with the successor representation. Journal of Machine Learning Research,
24(80):1–69, 2023b.
Timothy Mann and Shie Mannor. Scaling up approximate value iteration with options: Better policies with fewer iterations. In Eric P. Xing and Tony Jebara (eds.), Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pp. 127–135, Bejing, China, 22–24 Jun 2014. PMLR. URL https://proceedings.mlr.press/v32/mann14.html.
Amy McGovern and Richard S Sutton. Macro-actions in reinforcement learning: An empirical
analysis. Computer Science Department Faculty Publication Series, pp. 15, 1998.
Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christo-
pher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. Webgpt: Browser-assisted
question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021.
OpenAI. GPT-4 technical report. CoRR, abs/2303.08774, 2023. doi: 10.48550/arXiv.2303.08774.
URL https://doi.org/10.48550/arXiv.2303.08774.
OpenAI. What are tokens and how to count them? https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them, 2024. [Accessed 30-09-2024].
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin,
Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser
Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan
Leike, and Ryan Lowe. Training language models to follow instructions with human feedback. In
NeurIPS, 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/
b1efde53be364a73914f58805a001731-Abstract-Conference.html.
Zhen-Jia Pang, Ruo-Ze Liu, Zhou-Yu Meng, Yi Zhang, Yang Yu, and Tong Lu. On reinforcement
learning for full-length game of starcraft. In Proceedings of the AAAI Conference on Artificial
Intelligence, volume 33, pp. 4691–4698, 2019.
Eduardo Pignatelli, Johan Ferret, Matthieu Geist, Thomas Mesnard, Hado van Hasselt, Olivier
Pietquin, and Laura Toni. A survey of temporal credit assignment in deep reinforcement learning.
arXiv preprint arXiv:2312.01072, 2023.
Eduardo Pignatelli, Johan Ferret, Matthieu Geist, Thomas Mesnard, Hado van Hasselt, and Laura
Toni. A survey of temporal credit assignment in deep reinforcement learning. Trans. Mach. Learn.
Res., 2024, 2024. URL https://openreview.net/forum?id=bNtr6SLgZf.
Doina Precup, Richard S Sutton, and Satinder P Singh. Planning with closed-loop macro actions.
In Working notes of the 1997 AAAI Fall Symposium on Model-directed Autonomous Systems, pp.
70–76. Citeseer, 1997.
Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea
Finn. Direct preference optimization: Your language model is secretly a reward model. Advances
in Neural Information Processing Systems, 36, 2024.
Alexandre Rame, Guillaume Couairon, Corentin Dancette, Jean-Baptiste Gaya, Mustafa Shukor,
Laure Soulier, and Matthieu Cord. Rewarded soups: towards pareto-optimal alignment by in-
terpolating weights fine-tuned on diverse rewards. Advances in Neural Information Processing
Systems, 36, 2024.
Alexandre Ram´e, Nino Vieillard, L´eonard Hussenot, Robert Dadashi, Geoffrey Cideron, Olivier
Bachem, and Johan Ferret. WARM: on the benefits of weight averaged reward models.
In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria,
July 21-27, 2024. OpenReview.net, 2024. URL https://openreview.net/forum?id=
s7RDnNUJy6.
Corby Rosset, Ching-An Cheng, Arindam Mitra, Michael Santacroce, Ahmed Awadallah, and
Tengyang Xie. Direct nash optimization: Teaching language models to self-improve with general
preferences. arXiv preprint arXiv:2404.03715, 2024.
Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi
Adi, Jingyu Liu, Tal Remez, J´er´emy Rapin, et al. Code llama: Open foundation models for code.
arXiv preprint arXiv:2308.12950, 2023.
Earl D Sacerdoti. Planning in a hierarchy of abstraction spaces. Artificial intelligence, 5(2):115–135,
1974.
John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region
policy optimization. In International conference on machine learning, pp. 1889–1897. PMLR,
2015.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy
optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with
subword units. In Katrin Erk and Noah A. Smith (eds.), Proceedings of the 54th Annual Meeting of
the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1715–1725, Berlin,
Germany, August 2016. Association for Computational Linguistics. doi: 10.18653/v1/P16-1162.
URL https://aclanthology.org/P16-1162.
Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, YK Li,
Yu Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in open
language models. arXiv preprint arXiv:2402.03300, 2024.
Wei Shen, Xiaoying Zhang, Yuanshun Yao, Rui Zheng, Hongyi Guo, and Yang Liu. Improving reinforcement learning from human feedback using contrastive rewards. arXiv preprint arXiv:2403.07708, 2024.
Parshin Shojaee, Aneesh Jain, Sindhu Tipirneni, and Chandan K. Reddy. Execution-based code
generation using deep reinforcement learning. Trans. Mach. Learn. Res., 2023, 2023. URL
https://openreview.net/forum?id=0XBuaxqEcG.
Charlie Snell, Ilya Kostrikov, Yi Su, Sherry Yang, and Sergey Levine. Offline RL for natural lan-
guage generation with implicit language Q learning. In The Eleventh International Conference on
Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023.
URL https://openreview.net/forum?id=aBH_DydEvoH.
Alec Solway, Carlos Diuk, Natalia C´ordova, Debbie Yee, Andrew G Barto, Yael Niv, and Matthew M
Botvinick. Optimal behavioral hierarchy. PLoS computational biology, 10(8):e1003779, 2014.
Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford,
Dario Amodei, and Paul F. Christiano. Learning to summarize from human feedback. CoRR,
abs/2009.01325, 2020. URL https://arxiv.org/abs/2009.01325.
Haoran Sun, Yekun Chai, Shuohuan Wang, Yu Sun, Hua Wu, and Haifeng Wang. Curiosity-driven
reinforcement learning from human feedback. arXiv preprint arXiv:2501.11463, 2025.
Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. Robotica, 17(2):
229–235, 1999.
Richard S Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient meth-
ods for reinforcement learning with function approximation. Advances in neural information
processing systems, 12, 1999a.
Richard S Sutton, Doina Precup, and Satinder Singh. Between mdps and semi-mdps: A frame-
work for temporal abstraction in reinforcement learning. Artificial intelligence, 112(1-2):181–
211, 1999b.
Yunhao Tang, Zhaohan Daniel Guo, Zeyu Zheng, Daniele Calandriello, Yuan Cao, Eugene Tarassov,
Rémi Munos, Bernardo Ávila Pires, Michal Valko, Yong Cheng, and Will Dabney. Understanding
the performance gap between online and offline alignment algorithms. CoRR, abs/2405.08448,
2024. doi: 10.48550/ARXIV.2405.08448. URL https://doi.org/10.48550/arXiv.
2405.08448.
Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu,
Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. Gemini: a family of highly
capable multimodal models. arXiv preprint arXiv:2312.11805, 2023.
Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya
Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open
models based on gemini research and technology. arXiv preprint arXiv:2403.08295, 2024.
Sebastian Thrun and Anton Schwartz. Finding structure in reinforcement learning. Advances in
neural information processing systems, 7, 1994.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Niko-
lay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher,
Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy
Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn,
Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel
Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya
Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar
Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan
Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen
Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan
Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez,
Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-
tuned chat models. CoRR, abs/2307.09288, 2023. doi: 10.48550/arXiv.2307.09288. URL
https://doi.org/10.48550/arXiv.2307.09288.
Alexander Vezhnevets, Volodymyr Mnih, Simon Osindero, Alex Graves, Oriol Vinyals, John Aga-
piou, et al. Strategic attentive writer for learning macro-actions. Advances in neural information
processing systems, 29, 2016.
Michael Völske, Martin Potthast, Shahbaz Syed, and Benno Stein. TL;DR: Mining reddit to learn
automatic summarization. In Proceedings of the Workshop on New Frontiers in Summarization,
pp. 59–63, 2017.
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement
learning. Machine learning, 8:229–256, 1992.
Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A. Smith,
Mari Ostendorf, and Hannaneh Hajishirzi. Fine-grained human feedback gives better rewards
for language model training. In Thirty-seventh Conference on Neural Information Processing
Systems, 2023. URL https://openreview.net/forum?id=CSbGXyCswu.
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Am-
mar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, et al. Deepspeed-
chat: Easy, fast and affordable rlhf training of chatgpt-like models at all scales. arXiv preprint
arXiv:2308.01320, 2023.
Zishun Yu, Yunzhe Tao, Liyu Chen, Tao Sun, and Hongxia Yang. B-Coder: Value-based deep
reinforcement learning for program synthesis. In The Twelfth International Conference on
Learning Representations, 2024. URL https://openreview.net/forum?id=fLf589bx1f.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang,
Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and
chatbot arena. Advances in Neural Information Processing Systems, 36, 2024.
Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul
Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. arXiv
preprint arXiv:1909.08593, 2019.
A LIMITATIONS
While our work demonstrates the effectiveness of MA-RLHF across multiple tasks, there are sev-
eral limitations that leave room for future improvements.
In our implementation, we use the
same action / vocabulary space as the pretrained LLMs, since defining macro actions as separate
options (e.g., one vocabulary entry per n-gram) would require re-architecting the LLM’s
vocabulary and retraining the model, which is computationally infeasible. Meanwhile, our macro
action termination methods are rule-based, including linguistics- or perplexity-driven approaches;
future research could explore more complex or learnable termination strategies to further enhance
performance. Furthermore, regarding the generalization of MA-RLHF, our experiments are con-
ducted using models with up to 27B parameters; exploring more advanced models, such as LLaMA
3.1 405B (Dubey et al., 2024) or other state-of-the-art architectures and tasks (e.g., mathematical
and complex reasoning), may provide additional insights into the scalability of MA-RLHF. Lastly,
although we observe significant improvements in training efficiency, further investigation into the
trade-offs between training stability and performance under diverse real-world conditions is neces-
sary. Addressing these limitations will pave the way for more robust applications of MA-RLHF.
B EXPERIMENTAL DETAILS
B.1 DATASETS AND TASKS
TL;DR Summarization In this task, the policy is asked to generate summarizations for Reddit
posts. This dataset consists of 93k human-annotated preference pairs and 86k pairs for validation.
The trainable pairs are derived from the Reddit TL;DR (Völske et al., 2017) dataset. Additionally, a
portion of the validation pairs is sourced from the CNN/Daily Mail dataset, which serves as the test set for
out-of-distribution generalization.
HH-RLHF With the Anthropic HH-RLHF dataset, the policy is asked to generate a helpful and
harmless response given a single-turn dialogue or multi-turn dialogue. This dataset provides 112k
preference-labeled instances for training, and 12.5k for validation.
WebGPT Comparisons The WebGPT Comparisons dataset contains QA pairs from the ELI5 (Fan
et al., 2019) and the TriviaQA (Joshi et al., 2017). The policy is responsible for information retrieval
and response generation. In our experimental setup, we focus exclusively on the generation task.
The policy must generate a response that balances factual accuracy and coherence. This dataset
contains 19.6k instances for training. We split 5% instances for validation, as no separate validation
set is provided.
Code Generation For this task, we leverage the APPS dataset, which contains 5k training and 5k
validation instances. The policy must write executable code based on the natural language description given
in the question, using Python as the target programming language.
We present the data statistics in Table 4.
Table 4: Statistics of the datasets involved in the experiments. Token counts are calculated with
the Gemma-2B tokenizer.
Dataset | Num. of Comparisons | Num. of Train Samples | Num. of Test Samples | Avg. Tokens in Prompt | Avg. Tokens in Chosen | Avg. Tokens in Rejected
Anthropic HH-RLHF | 127.5k | 112k | 12.5k | 160 | 83 | 75
OpenAI Summarization | 179k | 92.9k | 86.1k | 325 | 35 | 33
OpenAI WebGPT | 19.6k | 18.5k | 979 | 49 | 149 | 137
APPS | 10k | 5k | 5k | 453 | 203 | -
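The token statistics above can be recomputed with any Hugging Face tokenizer; a minimal sketch is given below (the example strings are placeholders, and loading the Gemma tokenizer requires access to the corresponding checkpoint).

# Sketch: reproducing the average token counts in Table 4 (example strings are placeholders).
from transformers import AutoTokenizer

def avg_token_count(texts, tokenizer):
    # Mean number of tokens over a list of strings.
    lengths = [len(tokenizer(t).input_ids) for t in texts]
    return sum(lengths) / max(len(lengths), 1)

tok = AutoTokenizer.from_pretrained("google/gemma-2b")
prompts = ["SUBREDDIT: r/cats POST: ...", "SUBREDDIT: r/running POST: ..."]
chosen = ["10 year old cat has gained weight; looking for advice.", "Long run went badly."]
print("avg prompt tokens:", avg_token_count(prompts, tok))
print("avg chosen tokens:", avg_token_count(chosen, tok))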
B.2 TRAINING DETAILS
Following the procedure used by InstructGPT (Ouyang et al., 2022), we fine-tune both the SFT
model and the reward model on the same dataset to avoid a distribution gap. We implement our
training code with the Deepspeed-Chat package (Yao et al., 2023).
SFT Training We split the dataset into three parts, allocating 20% of the data to the supervised fine-
tuning stage. We use the prompts and the chosen responses as the instruction data. For the TL;DR
Summarize dataset, we concatenate the post and summarization following the approach of Stiennon
et al. (2020). For the single-turn dialogue and the question answering dataset, we apply a human-
assistant chat template to format the instructions. For the program synthesis dataset, we format the
instruction data in line with Hendrycks et al. (2021).
Reward Modeling In this stage, we use 40% of the data to train the reward model for each dataset,
formatting the preference data the same way as in the SFT training stage. We initialize the reward
model using the fine-tuned SFT model. Due to the lack of preference pairs in the program synthesis
dataset, this stage is omitted for this task.
PPO Training Similar to previous stages, the remaining 40% of the data is used to optimize the
policy model. The SFT model initializes the policy model, and the reward model initializes the critic
model. For the program synthesis dataset, 80% of the data is used in this stage, with both the policy
and critic models initialized using the SFT model. The pass@1 metric serves as the reward signal
for program synthesis, compensating for the absence of a reward model. While training 7B model
on TL;DR dataset using MA-PPO, we encountered unstable training with a KL coefficient of 0.05.
Reducing the coefficient to 0.01 for the 7B model led to more stable optimization.
Table 5 lists the hyperparameters used across all training stages for each task.
B.3 NOTATIONS
In Table 6, we present the notations used in our paper.
B.4 DETAILS OF MACRO ACTION TERMINATION
The general form of the segmentation rule is thus t_{τ+1} = t_τ + |ω_τ|, where |ω_τ| is determined by the
chosen criterion, such as n-grams, random, parsing, or perplexity-based segmentation.
1. Fixed n-gram length: For all macro actions, we set |ωτ | = n, where n is a constant value.
Table 5: Hyper-parameters for training Gemma series of models in MA-PPO and vanilla PPO.
Hyper-Parameter | Gemma 2B | Gemma 7B | Gemma 27B | CodeGemma 2B | CodeGemma 7B
SFT
Batch size | 64 for WebGPT, 512 for others | 128 | 128 | 16 | 32
Epochs | 3 | 5 for WebGPT, 1 for others | 3 | 1 | 1
Learning rate | 1e-4 for WebGPT, 5e-5 for others | 2e-5 | 5e-6 | 5e-6 | 2e-6
LR scheduler | cosine | cosine | cosine | cosine | cosine
Warmup ratio | 0.1 | 0.1 | 0.1 | 0 | 0
RM
Batch size | 32 for WebGPT, 64 for others | 128 for TL;DR, 64 for HH-RLHF, 32 for WebGPT | 128 | - | -
Epochs | 1 | 1 | 1 | - | -
Learning rate | 2e-5 for WebGPT, 1e-5 for others | 1e-6 | 8e-6 | - | -
LR scheduler | cosine | cosine | cosine | - | -
Warmup ratio | 0.1 | 0.1 | 0.1 | - | -
PPO
Batch size | 256 | 256 | 256 | 16 | 16
Policy learning rate | 1.5e-5 | 1e-6 | 7e-7 | 5e-7 | 5e-7
Critic learning rate | 1.5e-5 | 1e-6 | 1e-6 | 5e-5 | 5e-5
Epochs | 4 for WebGPT, 1 for others | 4 for WebGPT, 1 for others | 1 | 1 | 1
PPO epochs | 1 | 1 | 1 | 1 | 1
Rollout | 1 | 1 | 1 | 1 | 1
Clip ratio | 0.2 | 0.2 | 0.2 | 0.2 | 0.2
λ in GAE | 0.95 | 0.95 | 0.95 | 0.95 | 0.95
γ in GAE | 1 | 1 | 1 | 1 | 1
KL coefficient | 0.05 | 0.1 for WebGPT, 0.05 for others | 0.1 | 0.05 | 0.05
Max prompt length | 512 | 512 | 512 | 600 | 600
Max response length | 512 | 512 | 512 | 512 | 512
Warmup steps | 200 | 200 | 0 | 20 | 20
Temperature | 0.8 | 0.8 | 0.8 | 1.0 | 1.0
Top-p | 1.0 | 1.0 | 1.0 | 1.0 | 1.0
Top-k | 50 | 50 | 50 | 5 | 5
Figure 12: Illustration of four termination rules for macro actions in the MA-RLHF framework.
Each termination rule outputs a list of |ω_τ|. In the parsing-based termination, a macro action is
terminated when the number of tokens in the current node is less than C = 4, which is shown as
a number in the tree node.
2. Randomized n-gram length: We define a list of lengths {|ω_τ|} = {2, 3, 5, 10} to model macro actions.
   The list is repeated several times to cover the length of the sample (in practice, we repeat it 3 times).
   If the total length of the macro actions still does not match the number of tokens, a large value is
   appended as an additional |ω_τ|, similar to the |ω_τ| = ∞ setting. We then shuffle the list and use it
   as the randomized lengths.
3. Parsing-based length: We parse the response into a constituent tree and perform a depth-first
   search (DFS) to identify macro action lengths. Two rules guide the termination of |ω_τ|: (1) nodes
   with fewer than C tokens mark the end of a macro action; (2) nodes with a single token are included
   in the last macro action, avoiding single-token termination conditions like punctuation. Due to
   differences between the training and parsing tokenizers, we revert to the standard PPO method
   when discrepancies occur. We set the cut-off threshold C = 5, providing optimal granularity in
   practice.
Table 6: List of notation used in this paper.
Sym. | Meaning
RL
S | A finite set of states.
A | A finite set of actions.
P | The state transition probability distribution.
r | The reward function.
ρ0 | The initial state distribution.
γ | The discount factor related to future rewards.
πθ(a|s) | Policy parameterized by θ.
η(π) | The expected cumulative discounted reward.
at | The actions selected by the policy.
Qπ(st, at) | The state-action value function.
Vπ(st) | The state value function.
Aπ(st, at) | The advantage function.
Gt | The expected return.
RLHF
rϕ(x, y) | The reward model parameterized by ϕ.
x | Prompt.
y+ | Chosen response.
y− | Rejected response.
β | KL coefficient.
η | The range for clipping in PPO.
t | Time step of tokens.
Macro Action
ζ | Termination condition.
I | Initiation set.
τ | The index of macro action/state/reward.
ωτ | Macro action at time step τ.
tτ | Time step of macro actions.
στ | The weight used to measure the value of a macro action.
4. Perplexity-based length: Given a response y generated by the policy model, we calculate the
   perplexity p_t at each time step t by treating y_{≤t} as the ground-truth response. This process
   reuses the logits from the reference model, avoiding additional forward passes. Intuitively,
   selecting macro actions based on the perplexities P = {p_0, p_1, . . . , p_{|y|}} amounts to grouping
   tokens that consistently contribute to a decrease in perplexity given the partial sentence.
   Formally, ω_τ = {a_{t_τ}, a_{t_τ+1}, . . . , a_{t_τ+|ω_τ|−1}}, where
   P_{t_τ} = {p_{t_τ}, p_{t_τ+1}, . . . , p_{t_τ+|ω_τ|−1}} exhibits a monotonically decreasing pattern
   (a simplified standalone sketch of this rule is given below).
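The following is a minimal, standalone sketch of the perplexity-based rule: it segments a response into macro actions wherever the per-token perplexity stops decreasing. It operates on a plain list of perplexities and omits the reference-model forward pass; the full PyTorch implementation used in training is listed in Appendix E.

# Sketch: perplexity-based macro action segmentation (simplified, standalone version).
from typing import List

def segment_by_perplexity(ppl: List[float]) -> List[int]:
    # Returns the length |ω_τ| of each macro action; a new macro action starts
    # whenever the perplexity stops decreasing monotonically.
    lengths, current = [], 1
    for i in range(1, len(ppl)):
        if ppl[i] < ppl[i - 1]:   # still decreasing: extend the current macro action
            current += 1
        else:                     # perplexity went up: close the macro action
            lengths.append(current)
            current = 1
    lengths.append(current)
    return lengths

# Tokens 0-2 form one macro action (decreasing perplexity); token 3 starts a new one.
print(segment_by_perplexity([1.82, 1.78, 1.74, 1.90, 1.69]))  # -> [3, 2]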
B.5 TRAINING SETTINGS OF PROGRAM SYNTHESIS
Defining the reward score solely based on the state “Accept” or “Wrong Answer” is somewhat re-
strictive, as some generated code may pass certain unit tests while failing others. These actions
should also receive positive signals to encourage the policy to maximize the number of passed unit
tests. To address this, we incorporate an adaptive compiler signal into the reward feedback as previ-
Figure 13: Test RM scores evaluated by the corresponding reward model for the Gemma-2B and Gemma-7B models on the HH-RLHF dataset.
Figure 14: Distribution of test RM scores for vanilla PPO and MA-PPO (2B) at the final step (5.6k) on the HH-RLHF dataset.
Figure 15: Test RM scores evaluated by the corresponding reward model for the Gemma-2B and Gemma-7B models on the WebGPT Comparisons dataset.
Figure 16: Distribution of test RM scores for vanilla PPO and MA-PPO (2B) at the final step (3.2k) on the WebGPT dataset.
ous work (Shojaee et al., 2023; Liu et al., 2023):
R(x, y) =
  −0.3 + 1.3 · N_pass / (N_pass + N_fail),   if y successfully compiled;
  −0.6,                                      if y received a runtime error;
  −1.0,                                      if y received a compile error,
where x represents the prompt, and y represents the code snippet generated by the policy model.
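A minimal sketch of this adaptive compiler reward is shown below; the execution-outcome flags and unit-test counts are assumed to be produced by running the generated program against the problem's test cases.

# Sketch: adaptive compiler-signal reward for program synthesis (outcome flags are assumed inputs).
def compiler_reward(compiled: bool, runtime_error: bool, n_pass: int, n_fail: int) -> float:
    if not compiled:
        return -1.0        # compile error
    if runtime_error:
        return -0.6        # runtime error
    # Successfully compiled: reward in [-0.3, 1.0], scaled by the fraction of passed unit tests.
    return -0.3 + 1.3 * n_pass / (n_pass + n_fail)

print(compiler_reward(True, False, n_pass=3, n_fail=1))  # 0.675
print(compiler_reward(True, True, 0, 4))                 # -0.6
print(compiler_reward(False, False, 0, 0))               # -1.0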
C ADDITIONAL EXPERIMENTS RESULTS
C.1 RESULTS OF DIALOGUE GENERATION
In Figure 13, we present the RM scores on the validation set for vanilla PPO and MA-PPO. MA-PPO
surpasses vanilla PPO under RM evaluation: it reaches parity performance at steps 3,100 and 2,600
for the 2B and 7B models, respectively, whereas vanilla PPO requires 5,100 and 5,400 steps.
Overall, MA-PPO is 1.6-2x faster than vanilla PPO. Figure 14 compares the RM score distributions
of the two methods.
C.2 RESULTS OF QUESTION ANSWERING
We assess the performance of MA-PPO on the OpenAI WebGPT Comparison dataset, which focuses
on the question answering task.
Figure 15 presents the evaluation results based on the reward model. We observe that the policy
model is challenging to optimize in this task, likely due to the suboptimal performance of the reward
model. We applied early stopping during PPO training because, towards the end of training, the policy
model exhibited reward hacking behavior, generating repetitive tokens to inflate its reward scores.
Despite this, evaluations on the saved checkpoints show that MA-PPO still outperforms
vanilla PPO across both tested model sizes. The reward score distribution in Figure 16 further
confirms that MA-PPO achieves superior reward scores.
Table 7: Test RM scores of the SFT model, vanilla PPO, MA-PPO, and the baselines DPO and RLOO on the TL;DR and HH-RLHF datasets.

Method | RM Score (TL;DR) | RM Score (HH-RLHF)
SFT | -0.64 | 0.13
DPO | 0.03 | 0.64
RLOO | 0.81 | -
PPO | 0.83 | 1.31
MA-PPO (n=5) | 1.40 | 1.55
Figure 17: Win rates of DPO and RLOO
against PPO and MA-PPO on TL;DR and
HH-RLHF estimated by GPT-4.
When using GPT-4 as the judge, we consider three different metrics to evaluate the answers
generated by the policy: factual accuracy, coherence, and usefulness overall, following previous
work (Nakano et al., 2021). The win rates depicted in Figure 4 (Right) show that MA-PPO consis-
tently outperforms the policy trained with vanilla PPO across all criteria. Notably, MA-PPO achieves
higher win rates in coherence and usefulness compared to factual accuracy. Human evaluation was
conducted to select the preferred answer between those generated by the two policy models. Re-
sults in Figure 4 (Right) show that answers produced by MA-PPO were predominantly preferred by
human annotators.
C.3 COMPARING WITH ADDITIONAL BASELINES
In this section, we compare MA-PPO with two additional baselines: DPO (Rafailov et al., 2024)
and RLOO (Ahmadian et al., 2024) on Gemma-2B model. Both of the methods are implemented
with Deepspeed-Chat. Specifically, DPO models are trained on TL;DR and HH-RLHF datasets,
with the same data split as we used when training PPO. RLOO model is trained on TL;DR dataset
only, with the same policy and reward model initialization as PPO. For the training details of DPO,
the learning rate is set to 2e-7, with β = 0.1 for TL;DR and β = 0.01 for HH-RLHF. The policy
and reference models are initialized using the same SFT model as in PPO. For RLOO, the learning
rate for the policy model is set to 1.5e-5, and the number of online samples is K = 4. All other
hyperparameters are kept consistent with PPO.
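For reference, the RLOO leave-one-out baseline with K online samples per prompt can be sketched as follows; this is a generic illustration of the estimator, not the exact Deepspeed-Chat implementation.

# Sketch: RLOO leave-one-out advantages for K online samples per prompt (generic illustration).
import torch

def rloo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    # rewards: [batch, K] reward of each of the K sampled responses per prompt.
    K = rewards.size(-1)
    # The baseline for sample i is the mean reward of the other K - 1 samples.
    baseline = (rewards.sum(dim=-1, keepdim=True) - rewards) / (K - 1)
    return rewards - baseline

print(rloo_advantages(torch.tensor([[1.0, 0.0, 0.5, 0.5]])))  # [[0.6667, -0.6667, 0.0, 0.0]]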
We report the RM scores in Table 7 and the win rates estimated by GPT-4 in Figure 17. On the TL;DR
dataset, DPO fails to improve over PPO and MA-PPO, while RLOO achieves performance similar to
PPO but is outperformed by MA-PPO. On the HH-RLHF dataset, DPO performs better than PPO but
still underperforms MA-PPO.
C.4 EXPERIMENTS ON LLAMA-3.2-3B
Table 8: Test RM scores of Llama-3.2-3B models
on TL;DR dataset.
We conduct experiments on Llama-3.2-3B
model to validate the generalizability of our
method across different model families. The
experiments are conducted on TL;DR dataset,
following the same data split as Gemma-2B.
We set the learning rates of actor and critic to
5e-6 and 1e-5, and the KL coefficient is set to
0.1. Table 8 reports the results evaluated by RM score and shows that MA-PPO still remarkably
outperforms vanilla PPO. Using GPT-4 to assess the win rate, MA-PPO obtains a 61% win, 4%
tie, and 34% loss rate against PPO. These results further demonstrate the generalizability of our method.
Method | RM Score (TL;DR)
SFT | 2.38
PPO | 3.33
MA-PPO (n=5) | 3.96
Figure 18: Illustration of the value function of macro actions in the MA-RLHF framework. It takes
the token-level value function outputs as input and returns the values of macro actions under
different σ_τ assignments.
Table 9: Pass@1 metric evaluated when applying different termination conditions on the APPS dataset.

pass@1 | Fixed 10-gram | Parsing | PPL
Inter. | 3.25 | 3.17 | 3.04
Intro. | 16.56 | 17.05 | 16.36
Comp. | 0.94 | 1.24 | 0.80
All | 5.45 | 5.56 | 5.26

Table 10: Test RM scores and GPT-4 win rates when applying different termination conditions on the TL;DR and HH-RLHF datasets.

Dataset | Termination | RM Score | GPT-4 Win Rate (v.s. PPO)
TL;DR | Fixed 5-gram | 1.40 | 78%
TL;DR | Parsing | 1.37 | 78%
TL;DR | PPL | 1.27 | 72%
HH-RLHF | Fixed 5-gram | 1.55 | 58%
HH-RLHF | Parsing | 1.64 | 62%
D FURTHER ANALYSIS
D.1 VALUE FUNCTION ESTIMATION OF MACRO ACTION
When implementing macro actions, the value function of a macro action is estimated from the
token-level value functions. This process can be formulated as
V^π(s_τ, ω_τ) = Σ_{i=0}^{|ω_τ|} σ_{t_τ+i} V^π(s_{t_τ+i}, a_{t_τ+i}),
where σ_τ = {σ_{t_τ}, · · · , σ_{t_τ+|ω_τ|}} controls the contribution of each token-level value function.
In this section, we explore several assignments of σ_τ and their effectiveness in MA-PPO. Figure 18
illustrates the macro action value function with different σ_τ assignments:
1. Equal assignment: We treat the contribution of each token-level value function equally when
   computing the value function of macro actions, i.e., σ_τ = {1/|ω_τ|, . . . , 1/|ω_τ|}. This is the naive
   assignment used in MA-PPO in all our experiments.
2. Unit assignment: Since a macro action is a higher-level construct over a sequence of actions, we
   can use the value function of the last action as the macro action's value function, i.e.,
   σ_τ = {0, 0, · · · , 0, 1}.
3. Position decayed assignment: The contribution of each token-level value function is determined
   by its position. We define σ_τ = {1 / ((|ω_τ| − i) · H)}_{i=0}^{|ω_τ|−1}, where
   H = Σ_{i=0}^{|ω_τ|−1} 1 / (|ω_τ| − i); this construction ensures Σ_{σ ∈ σ_τ} σ = 1.
We test these assignments with fixed n-gram based termination on the TL;DR dataset, with n = 5,
and report the RM score and GPT-4 score as before. Results in Figure 19 show that the equal
assignment yields higher RM scores, while the unit assignment achieves the best consistency
and fluency according to GPT-4 evaluations.
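The three weighting schemes above can be written compactly; the sketch below operates on a 1-D tensor of token-level values for a single macro action and is meant only as an illustration of the σ_τ assignments, not the training implementation.

# Sketch: macro action value estimation under the three σ_τ assignments (single macro action).
import torch

def macro_value(token_values: torch.Tensor, scheme: str = "equal") -> torch.Tensor:
    n = token_values.numel()
    if scheme == "equal":            # σ_i = 1 / |ω_τ|
        sigma = torch.full((n,), 1.0 / n)
    elif scheme == "unit":           # σ = (0, ..., 0, 1): value of the last token only
        sigma = torch.zeros(n)
        sigma[-1] = 1.0
    elif scheme == "position":       # σ_i ∝ 1 / (|ω_τ| - i), normalized to sum to 1
        w = 1.0 / torch.arange(n, 0, -1, dtype=torch.float32)
        sigma = w / w.sum()
    else:
        raise ValueError(scheme)
    return (sigma * token_values).sum()

v = torch.tensor([0.3, 0.9])
print(macro_value(v, "equal"), macro_value(v, "unit"), macro_value(v, "position"))  # 0.6, 0.9, 0.7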
Figure 19: Performance of MA-PPO with different value function estimations on the TL;DR dataset
for the Gemma-2B model. Left: test RM scores. Right: GPT-4 scores on 4 dimensions.
D.2 TERMINATION CONDITIONS ON DIFFERENT TASKS
In this section, we analyze the effectiveness of the termination conditions on the TL;DR, HH-RLHF, and
APPS datasets. When implementing the parsing-based termination condition on the APPS dataset, we use
a programming-language-based parser.4 The results on the TL;DR and HH-RLHF datasets are shown
in Table 10, and the APPS results are shown in Table 9. We observe that the parsing-based termination
condition performs well on the HH-RLHF task, with a higher RM score and win rate than the fixed
5-gram based termination condition, while on the TL;DR dataset it achieves strong performance,
comparable to the fixed 5-gram termination condition. On the APPS dataset, the parsing-based
termination condition achieves the best results, except on the interview-level tasks. These results
demonstrate that constructing macro actions with linguistic information indeed brings performance gains
to MA-PPO.
D.3 IMPACT OF RLHF ON REWARD SCORE DISTRIBUTION
Figure 20: RM score shifting pattern after RLHF training. Left presents the RM score of best of 8
sampling on vanilla PPO compared to the vanilla PPO. Mid Left presents the RM score of best of
8 sampling on MA-PPO compared to the MA-PPO. Mid Right presents the RM score of MA-PPO
(n = 5) compared to the vanilla PPO model. Right presents the RM scores of MA-PPO (n = ∞)
compared to the vanilla PPO model.
We apply Best-of-N sampling on both vanilla PPO and MA-PPO. The RM score shifting patterns
for these methods are illustrated in Figure 20 (Left and Mid Left). From the results, we can conclude
that Best-of-N sampling continues to enhance the performance of RLHF models effectively.
In Figure 20 (Mid Right and Right), we compare the MA-PPO with vanilla PPO using settings of
n = 5 and n = ∞, both of which demonstrate positive effects on the RM score distribution.
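Best-of-N sampling itself is straightforward; a hedged sketch is shown below, where reward_fn is an assumed callable that maps a (prompt, response) pair to a scalar RM score, and policy / tokenizer are standard Hugging Face objects.

# Sketch: Best-of-N sampling with a reward model (reward_fn is an assumed scoring callable).
import torch

@torch.no_grad()
def best_of_n(policy, tokenizer, reward_fn, prompt: str, n: int = 8) -> str:
    inputs = tokenizer(prompt, return_tensors="pt")
    prompt_len = inputs["input_ids"].shape[1]
    candidates = []
    for _ in range(n):
        out = policy.generate(**inputs, do_sample=True, temperature=0.8,
                              top_p=1.0, top_k=50, max_new_tokens=512)
        candidates.append(tokenizer.decode(out[0][prompt_len:], skip_special_tokens=True))
    # Score each candidate with the reward model and return the highest-scoring one.
    scores = [reward_fn(prompt, c) for c in candidates]
    return candidates[max(range(n), key=lambda i: scores[i])]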
D.4 IMPACT OF SAMPLING TEMPERATURE
In the previous experiments, the results were sampled with a temperature temp = 0.8 to align
with the sampling strategy used during training. In this section, we examine the effect of sampling
4 RedBaron: https://github.com/PyCQA/redbaron
Figure 21: Test reward scores evaluated by the corresponding reward model for summarizations
generated with different sampling temperature on the TL;DR dataset.
Figure 22: Illustration of the macro action-RLHF (MA-RLHF) framework.
temperature on response quality. We vary the temperature temp ∈ {0.0, 0.2, 0.4, 0.6, 0.8, 1.0}, and
report the results in Figure 21. The performance of both methods remains stable when temp < 0.8.
However, the performance of vanilla PPO begins to decline after temp = 0.8, whereas MA-PPO
continues to demonstrate stable performance, even at temp = 1.0.
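A sketch of the decoding configuration used for this comparison is given below; it is an illustrative use of the standard transformers generation API with the sampling hyperparameters from Table 5, not the exact evaluation script (the prompt string is a placeholder).

# Sketch: generating summaries at different sampling temperatures (illustrative decoding setup).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", torch_dtype=torch.bfloat16)

prompt = "SUBREDDIT: r/running POST: ... TL;DR:"  # placeholder prompt
inputs = tok(prompt, return_tensors="pt")

for temp in [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]:
    out = model.generate(**inputs, max_new_tokens=512,
                         do_sample=temp > 0,           # temp = 0.0 falls back to greedy decoding
                         temperature=max(temp, 1e-6), top_p=1.0, top_k=50)
    print(temp, tok.decode(out[0], skip_special_tokens=True))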
Algorithm 1: Framework of Macro Action RLHF.
Input: Prompts X = {x_0, x_1, . . . , x_n}; policy model π_policy; reference model π_ref; critic model π_critic; reward model π_rm; termination rule ζ(·) in Section 3.2.1; value function estimation σ_{t_τ} in Section D.1.
Output: Policy loss L_ppo, critic loss L_value.
foreach prompt x_i in X do
    Make experience using the policy model: y := π_policy(x);
    Get the value V(s_t) := π_critic(x, s_t) at every time step t ∈ [0, |y|);
    Get the reward score of the current experience: r := π_rm(x, y);
    Compute macro actions {ω_τ}_{τ=1}^{m} := ζ(y) based on the termination rule;
    foreach macro action ω_τ in {ω_τ}_{τ=1}^{m} do
        Compute the macro action value function V^π(s_τ, ω_τ) = Σ_{i=0}^{|ω_τ|} σ_{t_τ+i} V^π(s_{t_τ+i}, a_{t_τ+i});
    Obtain Â_τ and Q̂_τ with GAE(V^π(s_τ, ω_τ), r);
    Optimize L_ppo = Ê[ min( (π_θ(ω_τ|s_τ) / π_θold(ω_τ|s_τ)) Â_τ, clip(π_θ(ω_τ|s_τ) / π_θold(ω_τ|s_τ), 1 − ϵ, 1 + ϵ) Â_τ ) ];
    Optimize L_value = Ê[ ‖V^π(s_τ, ω_τ) − Q̂_τ‖² ];
E MA-RLHF ALGORITHMS
Figure 22 illustrates the framework of MA-RLHF. In practice, to implement MA-RLHF, once the
macro actions are obtained via the termination function, we compute their value (as estimated by
the critic model) and rewards (based on a per-token KL penalty) using the value function estimation.
With these values and rewards, we apply Generalized Advantage Estimation (GAE) without modi-
fication to derive advantage estimates and state-action value functions. These advantage estimates
and state-action value functions are then applied to all tokens within the macro action during the opti-
mization of both the policy and critic models. The macro action RLHF algorithm, utilizing PPO, is
detailed in Algorithm 1.
In this implementation, the only additional time complexity is introduced by the macro action
termination. The fixed n-gram, randomized n-gram, and perplexity-based terminations have the same
time complexity, while the complexity of the parsing-based termination depends on the constituent
tree, over which we apply DFS to obtain |ω_τ|. MA-PPO introduces no additional complexity at
inference time since it only operates during training.
We provide the PyTorch code for the implementation of macro actions in PPO below:
Obtain Macro Action Positions
from typing import List

import torch

def get_macro_action_positions(self, start, mask, termination='ngram',
                               n_gram: int = None, ppl: List[torch.float16] = None,
                               repeat_times: int = None, cutoff: int = None):
    sequence = [start]
    if termination == 'ngram':
        assert n_gram is not None
        current_count = 0
        for i in range(mask[:, start:].size(1) - 1):
            current_count += mask[0, start + i].item()
            if current_count == n_gram:
                sequence.append(start + i + 1)
                current_count = 0
    elif termination == 'randomized_ngram':
        k_list = torch.tensor([2, 3, 5, 10], dtype=torch.long)
        k_list = torch.repeat_interleave(k_list, 3)
        k_list = k_list[torch.randperm(k_list.size()[-1])]
        indexed_k_list = torch.cumsum(k_list, dim=-1)
        sequence = [n for n in range(start, mask[:, start:].size(1) - 1)]
        indexed_k_list = [x.item() for x in indexed_k_list if x.item() < len(sequence)]
        sequence = [start] + [sequence[i] for i in indexed_k_list]
    elif termination == 'ppl':
        assert ppl is not None
        # A new macro action starts whenever the perplexity stops decreasing.
        for i in range(1, len(ppl)):
            if ppl[i] > ppl[i - 1]:
                sequence.append(start + i)
    elif termination == 'parser':
        # DFS over the constituent tree of the response; nodes with fewer than
        # `cutoff` leaf tokens close a macro action. The tree itself comes from
        # an external parser and is omitted in this listing.
        def dfs(node, ma_length):
            if len(node.leaves()) < 1:
                return False, ma_length + 1
            if len(node.leaves()) < cutoff:
                sequence.append(ma_length + len(node.leaves()))
                return True, ma_length + len(node.leaves())
            for nxt_node in node.childs():
                state, ma_length_ = dfs(nxt_node, ma_length)
                if not state:
                    sequence[-1] = ma_length_
                ma_length = ma_length_
            return True, ma_length
        # dfs(root, start)  # invoked on the parse-tree root of the response
    sequence.append(int(mask.size(1) - 1))
    return sequence
Calculate Values / Rewards of Macro Action
def get_macro_action_values(self, values, mask, start, sequence):
    split_list = torch.diff(torch.tensor(sequence)).tolist()
    splited_values = torch.split(values[:, start:], split_list, dim=-1)
    splited_mask = torch.split(mask[:, start:], split_list, dim=-1)
    inplace_values = torch.zeros(1, len(split_list), dtype=values.dtype).to(values.device)
    for idx, (value_i, mask_i) in enumerate(zip(splited_values, splited_mask)):
        masked_values = value_i[mask_i != 0]
        inplace_values[0, idx] = torch.mean(masked_values) if masked_values.numel() > 0 else 0.0
    return inplace_values
Calculate Policy Model Loss
def policy_loss_macro_action(self, logprobs, old_logprobs, advantages, mask, sequence):
    log_ratio = (logprobs - old_logprobs) * mask
    ratio = torch.exp(log_ratio)
    # calculate loss with macro action
    split_list = torch.diff(torch.tensor(sequence)).tolist()
    split_ratio = torch.split(ratio, split_list, dim=-1)
    split_mask = torch.split(mask, split_list, dim=-1)
    pg_loss = 0.0
    total_mask_sum = 0.0
    for i in range(len(split_list)):
        ratio_i = split_ratio[i]
        mask_i = split_mask[i]
        advantages_i = advantages[:, i]
        pg_loss1 = -advantages_i * ratio_i
        pg_loss2 = -advantages_i * torch.clamp(ratio_i, 1.0 - self.cliprange, 1.0 + self.cliprange)
        pg_loss += torch.sum(torch.max(pg_loss1, pg_loss2) * mask_i)
        total_mask_sum += mask_i.sum()
    pg_loss = pg_loss / total_mask_sum
    return pg_loss
Calculate Critic Model Loss
def critic_loss_macro_action(self, values, old_values, returns, mask, sequence):
    values_clipped = torch.clamp(
        values,
        old_values - self.cliprange_value,
        old_values + self.cliprange_value,
    )
    if self.compute_fp32_loss:
        values = values.float()
        values_clipped = values_clipped.float()
    # calculate loss with macro action
    split_list = torch.diff(torch.tensor(sequence)).tolist()
    splited_values = torch.split(values, split_list, dim=-1)
    splited_values_clipped = torch.split(values_clipped, split_list, dim=-1)
    splited_mask = torch.split(mask, split_list, dim=-1)
    total_vf_loss = 0.0
    total_mask_sum = 0.0
    for i in range(len(splited_values)):
        vf_loss1 = (splited_values[i] - returns[:, i]) ** 2
        vf_loss2 = (splited_values_clipped[i] - returns[:, i]) ** 2
        vf_loss = 0.5 * torch.sum(torch.max(vf_loss1, vf_loss2) * splited_mask[i])
        total_vf_loss += vf_loss
        total_mask_sum += splited_mask[i].sum()
    total_vf_loss = total_vf_loss / total_mask_sum
    return total_vf_loss
PPO
# In PPO algorithm
start = prompts.size()[-1] - 1
action_mask = attention_mask[:, 1:]
...
sequence = get_macro_action_positions(start, action_mask, termination='ngram', n_gram=n_gram)
macro_action_old_values = get_macro_action_values(old_values, action_mask, start, sequence)
macro_action_old_rewards = get_macro_action_values(old_rewards, action_mask, start, sequence)
advantages, returns = get_advantages_and_returns(macro_action_old_values, macro_action_old_rewards)
policy_loss = policy_loss_macro_action(policy_log_prob[:, start:], log_probs[:, start:],
                                       advantages, action_mask[:, start:], sequence)
critic_loss = critic_loss_macro_action(value[:, start:], old_values[:, start:],
                                       returns, action_mask[:, start:], sequence)
F EVALUATION DETAILS
F.1 GPT-4 EVALUATION PROMPTS
In our experiments, we use GPT-4 as the main judge of the quality of the policy models. The prompts
used to generate win rates with GPT-4 are listed below. We use gpt-4o-05-13 for all of
our experiments. The order of the responses generated by the policy models is randomly chosen for all
experiments.
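The judging loop that produces these win rates can be sketched as follows; it assumes the OpenAI chat completions API, a template like the TL;DR prompt below (the placeholder names in the template are illustrative), and randomized A/B positions as described. The model string and error handling are simplified.

# Sketch: querying GPT-4(o) as a judge with randomized response order (simplified).
import random
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge_pair(prompt_template: str, post: str, summary_ppo: str, summary_ma: str):
    # Randomly assign the two summaries to positions A / B to avoid position bias.
    if random.random() < 0.5:
        a, b, order = summary_ppo, summary_ma, "ppo_is_A"
    else:
        a, b, order = summary_ma, summary_ppo, "ma_is_A"
    content = prompt_template.format(post=post, summary_a=a, summary_b=b)
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model string
        messages=[{"role": "user", "content": content}],
        temperature=0.0,
    )
    verdict = resp.choices[0].message.content.strip()  # expected: 'A', 'B', or 'E'
    return verdict, order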
TL;DR GPT-4 Evaluation Prompt
You will be given two summaries written for an article. Your task is to pick the better one between
them, based on the four criteria. Please make sure you read and understand these instructions carefully.
Relevance - selection of important content from the source. The summary should include only impor-
tant information from the source document. Annotators were instructed to penalize summaries which
contained redundancies and excess information.
Coherence - the collective quality of all sentences. We align this dimension with the DUC quality ques-
tion of structure and coherence whereby “the summary should be well-structured and well-organized.
The summary should not just be a heap of related information, but should build from sentence to a
coherent body of information about a topic.”
Consistency - the factual alignment between the summary and the summarized source. A factually
consistent summary contains only statements that are entailed by the source document. Annotators
were also asked to penalize summaries that contained hallucinated facts.
Fluency - the quality of the summary in terms of grammar, spelling, punctuation, word choice, and
sentence structure.
You should output single character to indicate which summary you think is better.
‘A’ stands for
Summary A and ‘B’ stands for Summary B. If you think both summaries are equally good, output ‘E’.
Article / Post:{article / post}
Summary A:{summary a}
Summary B:{summary b}
Your Choice (only a single character):
HH-RLHF GPT-4 Evaluation Prompt
For the following query to a chatbot assistant, which response is more helpful?
First provide a one-sentence comparison of the two responses and explain which you feel is more
helpful. Second, on a new line, state only ‘A’ or ‘B’ to indicate which response is more helpful. If they
are equally good or bad, state ‘E’. Your response should use the json format, with “comparison” and
“choice” as keys.
Query: {query}
Response A: {response a}
Response B: {response b}
Your Judgment:
WebGPT Comparisons GPT-4 Evaluation Prompt
You will be given two response written for an question. Your task is to pick the better one between
them, based on these criteria.
Factual accuracy - which answer is more factually accurate?
Coherence - which answer is easier to follow?
Usefulness overall - all things considered, which answer would be more helpful to the person who
asked this question?
You should output with a json format where the key is the criteria and the value is the choice you made,
using ‘A’ stands for Response A and ‘B’ stands for Response B. If you think both responses are equally
good, output ‘E’.
Question: {question}
Answer A: {answer a}
Answer B: {answer b}
Your Judgment (you should also output the reason, note that you are allowed to think both responses
are equally good, then output with ‘E’):
F.2 HUMAN EVALUATION
To estimate the quality from a human perspective, we collect human preference data on the TL;DR,
HH-RLHF, and WebGPT datasets. Human annotators select the preferred response based on task-
specific criteria. For TL;DR, the evaluation criteria focus on three main perspectives:
1. Hallucination: this considers whether the generated summary includes any additional informa-
tion not present in the original post or article.
2. Verbosity: this assesses if the summary includes unnecessary context that could be removed
without negatively impacting its quality.
3. Overall Quality: this measures the general coherence, informativeness, and readability of the
generated summary.
For evaluation on TL;DR dataset, the annotators should first compare the overall quality of two
responses. If overall qualities are equally good for responses, then they should choose the winner
based on hallucination and verbosity.
In the context of HH-RLHF, annotators focus on the helpfulness of the responses:
1. Instruction Following: whether the generated response follows the requirements in the instruction.
2. Usefulness: whether the advice in the response is applicable, and whether the response ideally
guides the user on what to do next.
Annotators are instructed to choose the response based on these aspects, while excluding superfi-
cial replies such as ”You’re welcome.” For the WebGPT dataset, the primary evaluation factor is
factual accuracy. Annotators are provided with retrieval information relevant to the question from
the dataset to aid in their judgment. They are tasked with selecting the answer that most accurately
matches the retrieved information.
During the evaluation process, annotators are presented with a prompt and two responses, each
generated by either vanilla PPO or MA-PPO. To ensure impartiality and prevent annotators from
guessing which model produced which response, we shuffle the positions of the responses. Anno-
tators are given three choices: response A wins, response B wins, or a tie. The results are then
collected to calculate the win rates for each model.
For evaluations on the TL;DR and HH-RLHF datasets using 7B models, we conduct the human
evaluation with 3 different annotators and collect their preference data to report the win rates. All
other human evaluations are conducted with a single annotator. The inter-rater agreement reaches an
average of 68% over a total of 100 samples: 64% on the TL;DR dataset and 72% on the HH-RLHF
dataset, with 50 samples per task.
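The win rates and the pairwise inter-rater agreement reported above reduce to simple counting; a minimal sketch over the collected annotations is shown below (the record format is an assumption).

# Sketch: win/tie/loss rates and pairwise inter-rater agreement (annotation format assumed).
from collections import Counter
from itertools import combinations

def win_rates(choices):
    # choices: one of 'ma', 'ppo', or 'tie' per evaluated sample.
    counts = Counter(choices)
    total = len(choices)
    return {k: counts.get(k, 0) / total for k in ("ma", "ppo", "tie")}

def pairwise_agreement(annotations):
    # annotations: one choice list per annotator, aligned by sample index.
    agree, total = 0, 0
    for r1, r2 in combinations(annotations, 2):
        agree += sum(a == b for a, b in zip(r1, r2))
        total += len(r1)
    return agree / total

print(win_rates(["ma", "ma", "tie", "ppo"]))
print(pairwise_agreement([["ma", "ppo", "tie"], ["ma", "ma", "tie"], ["ppo", "ppo", "tie"]]))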
G GENERATED EXAMPLES
G.1 CASE STUDY
When evaluating the responses of MA-RLHF with human annotators, we observe that MA-RLHF
exhibits stronger coherence and contextual appropriateness. We illustrate this phenomenon by
comparing MA-RLHF with the baseline on an example in Table 11. We find that the MA-PPO
method tends to generate responses with more multi-word phrases than the baseline method; in
particular, it commonly attaches modifying adjectives to nouns, making the generated summaries
more precise, e.g., “feeding indoor cat food”.
G.2 EXAMPLE RESPONSES
In this section, we demonstrate some examples of validation sets to highlight the superiority of
MA-PPO. In Table 12, we feature examples from the TL;DR dataset. Compared to the responses
generated by vanilla PPO, the responses from MA-PPO offer more concise and relevant details
about the situation. Table 13 showcases a dialogue focusing on the differences between Spanish
and Italian. MA-PPO provides a more helpful response, addressing both cultural and grammatical
perspectives. In Table 14, we compare answers from the WebGPT dataset. The responses provided
by MA-PPO are more factual and accurate compared to those generated by vanilla PPO.
Table 11: Example responses from the TL;DR dataset illustrating that the responses generated by
MA-RLHF are more coherent and contextually appropriate than the baseline. We highlight the
observed context in green.
Prompt
Vanilla PPO
Fixed 3-gram
Fixed 5-gram
Fixed 10-gram
Fixed ∞-gram
POST Subreddit: r/cats
Hello everyone! One of my cats is about 10 years old now, she is pretty much strictly
indoors save for some time she spends on our screened in porch each day. (She likes
to watch the birds in the yard while she suns herself by the pool, quite the princess).
Anyway, when she was younger she was very active and quite small, however with
I feed her indoor cat food
age she has put on a pretty hefty amount of weight.
for weight control, I’ve switched brands a few times trying to find something that
works, I’ve cut back on feeding her by a lot (she gets very angry and demanding
when she wants food but I don’t give in) however, nothing really seems to work.
I’ve tried cat toys, and bought a harness thinking I could try to walk her but she just
lays down and looks at me like I’m stupid. Basically I just want to know if you all
have any suggestions for exercise or food. I care about her and don’t want this to
get any worse. I also have another cat that eats the same amount and type of food
as her and is a completely normal weight and only a year younger, however he is a
male, not sure if that makes a difference in predisposition for weight gain. They are
also both fixed.
10 year old cat has gained a noticeable amount of weight and is becoming increas-
ingly inactive. Looking for suggestions for exercise or food to help her get healthier.
Male and female cats are both affected.
10 year old cat put on hefty amount of weight despite feeding indoor cat food for
weight control. Any suggestions for exercise or food to help with weight control and
prevent further weight gain? Specifically wondering if there are any suggestions
for cats that are both female and fixed.
10 year old indoor cat has put on significant amount of weight despite being fed
low calorie/weight control food. Need suggestions for exercise or food to help with
weight management. Also have another cat that eats same type/amount of food as
her and is a completely normal weight and only a year younger, however he is a
male. Any insights would be appreciated.
10 year old cat is putting on significant weight despite feeding indoor cat food for weight control.
Any suggestions for exercise or food to help her lose weight and keep her healthy? Also, is there a
difference in predisposition for weight gain between male and female cats when fed same
amount/type of food?

One 10 year old cat has put on hefty amount of weight with age, currently feed indoor cat food for
weight control, tried various brands and cut back feeding by lot, and nothing really seems to work
in terms of exercise or food, also have another cat same age and weight as her and completely
normal weight and no issues, however male, not sure if that makes difference predisposition for
weight gain, both fixed.
Randomized n-gram 10 year old cat has put on a lot of weight, feed her indoor cat food for weight control
yet nothing seems to work. Any suggestions for exercise or food to address this?
Other cat that eats same amount and type of food as her is a normal weight and a year
younger, however is male, not sure if that makes a difference in predisposition for
weight gain.
Perplexity based    10 yo cat has put on significant amount of weight despite diet change and other
attempts at exercise/activity. Any suggestions for exercise or food to help her lose weight? Also,
any differences between males and females regarding predisposition for weight gain?

Parsing based       10 year old indoor cat has gained significant amount of weight despite proper diet
and exercise, potentially due to age related health issues. Need suggestions for exercise or food
changes to help reduce weight and maintain proper body weight. Is weight gain common in older
cats, and does gender play a factor in predisposition?
Table 12: Example responses from TL;DR dataset. The summarization of MA-PPO exactly capture
the details of the post.
Prompt
Vanilla PPO 2B
RM Score: 0.75
MA-PPO 2B
RM Score: 1.55
Vanilla PPO 7B
RM Score: 1.71
MA-PPO 7B
RM Score: 2.61
Vanilla PPO 27B
RM Score: 3.45
MA-PPO 27B
RM Score: 3.95
POST Subreddit: r/running
Last night I experienced my first ever absolutely miserable run where in-
stead of feeling energized and accomplished when it was over, I lay on my
bathroom floor shivering and hugging the toilet.
I have had frustrating runs before and I have had runs when my knee/an-
kle/toe hurt and made me want to quit but last night was far worse than any
of those.
Earlier in the day, I ate a sandwich and fell asleep on the couch for a two
hour surprise nap. When I woke up my stomach was slightly unsettled like
I was about to have heartburn. I decided to do out for my long run anyways
because I always have heartburn and it doesn’t usually stop me. This run
was scheduled to be my longest yet at 1:30:00 (I am training for a half).
At about 45 minutes I started to get stomach cramps so I slowed how much
water I was taking in. The entire run I had to walk several times which is
unusual for me but my times were better than expected so I thought I was
just pushing too hard and tried to take it easy on the way back.
My post long run ritual is usually to drink several long drinks once I get
back to the car since now I won’t be jostling my innards around, I can do
what I want. When I got home, I felt like I probably shouldn’t try eating
right away so I had about 1/3 bottle of Gatorade.
Well...Gatorade was the wrong answer. Within 20 minutes I was violently
heaving into my kitchen garbage and I lost all of the Gatorade and likely
most of the water I had while running.
My question is, do I blame my sandwich from earlier or my run? I am
worried that 1:30:00 is my limit.
I haven’t considered quitting my half
marathon training until now because everything up to this point has been
fine.
1:30:00 run was extremely stressful and painful. Do I blame sandwich from
earlier or run itself? I am worried that 1:30:00 is my limit.
1:30:00 long run resulted in severe stomach cramps and forced heavy heav-
ing into kitchen garbage. Do I blame my sandwich from earlier or my run?
Concerned that 1:30:00 is my limit.
1:30:00 run caused serious stomach pains and I puked up all of the water I
drank while running. Do I blame the sandwich for my stomach pain or the
run itself?
1:30:00 long run, had stomach cramps and had to walk often, drank
Gatorade post run, vomited violently within 20 minutes, now worried that
1:30 is my limit, do I blame sandwich or run? Worried about quitting half
marathon training. Help?
45 minutes into my run my stomach started to cramp and I was vomiting
profusely within 20 minutes of finishing my 1:30:00 long run. Do I blame
my sandwich from earlier or my run? Does this mean my limit is 1:30:00
or is it something else?
I went out for a 1:30:00 long run yesterday and came home with stomach
cramps and nausea. I drank 1/3 bottle of Gatorade when I got home and got
violently sick. I lost that and most of the water I drank while running. Do
I blame my sandwich from earlier or my run? I am worried that 1:30:00 is
my limit and that I won’t be able to finish my half marathon training.
Table 13: Example responses from the HH-RLHF dataset. The MA-PPO method gives a more helpful
response than vanilla PPO and is preferred by both GPT-4 and human evaluators.
Prompt
Vanilla PPO 2B
RM Score: 1.07
MA-PPO 2B
RM Score: 1.24
Vanilla PPO 7B
RM Score: 0.80
MA-PPO 7B
RM Score: 1.22
Human: Would you be able to explain the differences between the Spanish
and Italian language? Assistant: Of course. Can you tell me more about
the specific areas where you’re interested in knowing more? Human: I’m
thinking between the Spanish spoken in Mexico and Italian spoken in Italy.
Assistant:
The main difference between Spanish and Italian is that Spanish is an Indo-
European language with a very rich historical background, while Italian is
an Italian Romance language derived from Latin. Spanish has influences
from many different languages, while Italian has fewer and more limited
influences from other languages.
Here are some of the main differences between Spanish spoken in Mexico
and Italian spoken in Italy: 1. Spanish spoken in Mexico tends to be more
relaxed and casual, while Italian spoken in Italy tends to be more formal
and courteous. 2. Spanish in Mexico is heavily influenced by indigenous
languages and has a rich vocabulary, while Italian is mostly derived from
Latin and has a more simple and elegant sound. 3. Spanish is spoken
throughout Latin America, while Italian is confined to the Italian peninsula.
Overall, Spanish and Italian have many subtle differences that contribute to
their unique cultural identities.
The main differences between the Spanish spoken in Mexico and the Italian
spoken in Italy are in the pronunciation and grammar of the two languages.
In Mexico, the Spanish spoken is much more similar to the Spanish spoken
in Spain. In Italy, the Italian spoken is similar to the Italian spoken in other
parts of Europe, but with its own unique style of pronunciation. In both
languages, the grammar is very similar, though the Italian language does
have some unique grammar rules.
Sure, here are some of the main differences between the Spanish spoken in
Mexico and the Italian spoken in Italy: - Spanish is a latin language, which
is a language that was derived from the latin language of ancient Rome.
Italian is based on the ancient language of the romans of ancient Rome. -
Spanish is spoken in a number of different countries throughout the world,
whereas Italian is the native language of Italy, and is only spoken in some
parts of northern Italy. - Spanish is spoken primarily in Latin America, and
has taken on many local variations and differences due to the influence of
local indigenous languages (i.e. Mexican Spanish is quite different from
Spanish spoken in Argentina). Italian is primarily spoken in Italy and has
more localized variations. - There are many differences in grammar, vo-
cabulary, and pronunciation between Spanish and Italian that are reflected
in the differences between Mexican Spanish and Italian. Some of the most
important differences in grammar include the use of masculine and femi-
nine gender to refer to objects, and the different use of the tenses. There are
many differences in vocabulary that also exist between the two languages,
and the pronunciation of each language also varies.
Table 14: Example responses from WebGPT dataset.
Prompt
What causes that sudden sinking, heavy, hot flush feeling when you become anxious
and how are the effects felt so rapidly?
Vanilla PPO 2B
RM Score: -0.68
MA-PPO 2B
RM Score: -0.41
Vanilla PPO 7B
RM Score: -0.41
MA-PPO 7B
RM Score: -0.39
Anxiety has been linked to several physical symptoms, including a rapid heartbeat,
trembling, dizziness, and a sense of impending doom. While some of these symp-
toms may be due to the anxiety-provoking situation causing the symptoms, some
may be the result of an anxiety disorder itself. The symptoms that result from anx-
iety can be divided into physical symptoms, emotional symptoms, and behavioral
symptoms. Physical symptoms of anxiety include nausea, dizziness, and a queasy
or empty feeling in the stomach. Emotional symptoms include feeling helpless,
hopeless, afraid, irritable, and worried. Behavioral symptoms include avoiding so-
cial interactions, remaining in bed all day, and not performing daily activities.
The “hot flush” sensation that some people experience when they become anxious
is the result of the nervous system’s fight-or-flight response. When the body is
faced with a stressful or dangerous situation, the nervous system’s adrenaline re-
lease causes blood to rush through the cardiovascular system to the muscles. This
causes the body to become more efficient at using oxygen and energy, and it can
also cause the skin to turn red and hot. This response is designed to help the body
deal with stress and danger, and it can include a range of symptoms that impact
different parts of the body. For example, elevated blood pressure can cause the kid-
neys to release more urine, and adrenaline can speed up the heart rate. It can also
cause the mouth to dry out as the nervous system reacts to the perceived threat. In
addition, anxiety can cause the chest to tighten and breathing to become quicker and
shallower, which can cause the heart rate to increase. It can also cause the digestive
system to slow down, and the muscles of the gastrointestinal tract to tighten, which
can cause nausea and abdominal cramping.
There are several possible explanations as to why we feel hot and uncomfortable
when anxious. One explanation is that anxiety causes the body to release hormones
and chemicals into the bloodstream, which then circulate throughout the body and
affect organs, tissues, and nerve cells. A different explanation is that a reaction
to stress causes the body to release hormones and chemicals into the bloodstream,
which then circulate throughout the body and affect organs, tissues, and nerve cells.
Another explanation is that the body’s stress response causes the release of hor-
mones and chemicals into the bloodstream, which then circulate throughout the
body and affect organs, tissues, and nerve cells. A fourth explanation is that anxiety
triggers the release of cortisol, which increases blood sugar levels and causes the
body to burn fat and muscle tissue for energy.
When we become anxious, it is because our brain has encountered something that
it sees as a threat. The brain then sends out a warning signal, which takes the form
of a fight-or-flight response. This response causes a number of physical effects,
including increased heart rate and breathing rate, increased blood flow to muscles,
and elevated blood sugar. This is because when we are threatened our body needs
to be able to run or fight, and to do this we need increased oxygen and energy. Our
body also needs to be able to protect itself, so we have increased blood flow to our
muscles. When we are in a high-stress situation, our body produces serotonin and
dopamine, which help us to feel good and focus on the task at hand. At the same
time, our body produces norepinephrine and cortisol. Norepinephrine and cortisol
work together to increase our heart rate, breath rate, and blood pressure, which in
turn increases the flow of oxygen and nutrients to our muscles. Thus, when we
are anxious, our body’ way of protecting us is by increasing our awareness and
increasing the flow of nutrients and oxygen to our muscles, so we can run or fight
if needed. All of these processes occur so quickly because our body’ systems are
already set up to respond instantly to threats.