Published as a conference paper at ICLR 2025
CAN LLMS GENERATE NOVEL RESEARCH IDEAS?
A LARGE-SCALE HUMAN STUDY WITH 100+ NLP RESEARCHERS
Chenglei Si, Diyi Yang, Tatsunori Hashimoto
Stanford University
{clsi,diyiy,thashim}@stanford.edu
ABSTRACT
Recent advancements in large language models (LLMs) have sparked optimism
about their potential to accelerate scientific discovery, with a growing number of
works proposing research agents that autonomously generate and validate new
ideas. Despite this, no evaluations have shown that LLM systems can take the
very first step of producing novel, expert-level ideas, let alone perform the entire
research process. We address this by establishing an experimental design that
evaluates research idea generation while controlling for confounders and performs
the first comparison between expert NLP researchers and an LLM ideation agent.
By recruiting over 100 NLP researchers to write novel ideas and blind reviews of
both LLM and human ideas, we obtain the first statistically significant conclusion
on current LLM capabilities for research ideation: we find LLM-generated ideas
are judged as more novel (p < 0.05) than human expert ideas while being judged
slightly weaker on feasibility. Studying our agent baselines closely, we identify
open problems in building and evaluating research agents, including failures of
LLM self-evaluation and their lack of diversity in generation.¹

1 INTRODUCTION
The rapid improvement of LLMs, especially in capabilities like knowledge and reasoning, has enabled
many new applications in scientific tasks, such as solving challenging mathematical problems (Trinh
et al., 2024), assisting scientists in writing proofs (Collins et al., 2024), retrieving related works (Ajith
et al., 2024; Press et al., 2024), and generating code to solve analytical or computational tasks (Huang
et al., 2024; Tian et al., 2024). While these are useful applications that can potentially increase the
productivity of researchers, it remains an open question whether LLMs can take on the more creative
and challenging parts of the research process.
We focus on this problem of measuring the research ideation capabilities of LLMs and ask: are
current LLMs capable of generating novel ideas that are comparable to expert humans? Although
ideation is only one part of the research process, this is a key question to answer, as it is the very first
step to the scientific research process and serves as a litmus test for the possibility of autonomous
research agents that create their own ideas. Evaluating expert-level capabilities of LLM systems is
challenging (Bakhtin et al., 2022; Collins et al., 2024), and research ideation takes this to an extreme.
Qualified expert researchers are difficult to recruit at scale, evaluation criteria can be highly subjective,
and it is difficult even for experts to judge the quality of research ideas (Beygelzimer et al., 2021).
We address these challenges directly, recognizing that for important, high-stakes tasks like research
ideation, there is no substitute for a large-scale expert evaluation. We design a carefully controlled
comparison of human and LLM ideas that overcomes sample size and baseline problems present in
earlier small-scale evaluation studies. Our study recruited a large pool of over 100 highly qualified
NLP researchers to produce human baseline ideas and perform blind reviews of human and LLM
ideas. To reduce the possibility that confounding variables affect our outcome measures, we enforce
strict controls that standardize the styles of human and LLM ideas and match their topic distribution.
We compare our human expert baseline with a simple and effective LLM agent that incorporates
retrieval augmentation and adopts recent ideas in inference-time scaling, such as overgenerating and
¹We release our agent implementation and all human review scores at: https://github.com/NoviScl/AI-Researcher. The last two authors advised this project equally.
Figure 1: Overview: we recruit 79 expert researchers to perform blind review of 49 ideas from each
of the three conditions: expert-written ideas, AI-generated ideas, and AI-generated ideas reranked by
a human expert. We standardize the format and style of ideas from all conditions before the blind
review. We find AI ideas are judged as significantly more novel than human ideas (p < 0.05).
Figure 2: Comparison of the three experiment conditions across all review metrics. Red asterisks
indicate that the condition is statistically better than the Human baseline with two-tailed Welch’s
t-tests and Bonferroni correction. All scores are on a 1 to 10 scale. More detailed results are in
Section 5.
reranking LM outputs. These measures allow us to make statistically rigorous comparisons between
human experts and state-of-the-art LLMs (Figure 1).
Our evaluation-centric approach complements many recent methods-centric works that attempt to
instantiate research agents. These works rely on fast and lower-cost evaluation surrogates – either by
decreasing the number of expert reviewers (Baek et al., 2024; Li et al., 2024; Wang et al., 2024; Yang
et al., 2024), constraining the length and detailedness of the ideas (Wang et al., 2024; Yang et al.,
2024), or relying on LLM-as-a-judge (Lu et al., 2024). They do not perform the large-scale human
comparison studies that are needed to answer the motivating question of our work. Our work takes
the opposite approach, performing a year-long and high-cost evaluation that provides human expert
baselines and a standardized evaluation protocol to serve as a foundation for future follow-up studies
and methods work.
Through nearly 300 reviews across all our conditions, we find that AI-generated ideas are judged
as more novel than human expert ideas (p < 0.05), which holds robustly under multiple hypothesis
correction and across different statistical tests (Figure 2). Apart from evaluating the ideas, we also
analyze the LLM agent, showing limitations and open problems – despite excitement about inference-
time scaling of LLMs, we find that they lack idea diversity when we scale up idea generation, and
they cannot currently serve as reliable evaluators.
2 PROBLEM SETUP
The central experiment of our work is a comparison of human- and LLM-generated ideas. While this
goal is simple, there is no existing consensus on how to formulate the task of research ideation and
evaluation, and we begin by defining the key aspects of our experiment design.
We think of research idea evaluation as consisting of three separate components: 1) the idea itself, generated in response to our instructions; 2) the writeup which communicates the idea; and 3) the evaluation of the writeup by experts. We outline our experiment design in each of these three parts
with particular focus on potential confounders, such as the area of research, the format of a research
idea, and the evaluation process.
Ideation Scope and Instructions Any experiment on ideation must carefully balance the realism and interestingness of a research idea with the practical realities of eliciting ideas from a large
population. In our case, these tradeoffs are even more pronounced, as we have designed our ideation
experiments so that the resulting ideas can be executed by experts in a follow-up set of experiments.
These constraints have led us to study prompting-based NLP research as a testbed for our study.
Prompting research has been popular in recent years of NLP and AI research (Schulhoff et al., 2024).
This class of projects strikes a reasonable trade-off among our constraints. The most impactful
prompting projects like chain-of-thought have had a major influence on LLM performance (Wei et al.,
2022), and prompting projects are executable with minimal computing hardware.
We further structure our ideation process to avoid selection-bias-based confounders in ideation. If we
simply ask LLMs and humans to produce ideas on ‘prompting topics’, we may find that LLMs and
humans differ in the types of research ideas they produce (for example, LLMs may naturally suggest
more projects on safer topics, which might be judged as less exciting by humans). This would lead us
to simply measure misalignment in research topic preference between LLMs and humans, which is
not the goal of our study. To address this possibility, we define a set of seven specific research topics
extracted from the Call For Papers page of recent NLP conferences such as COLM. Specifically,
our topics include: Bias, Coding, Safety, Multilinguality, Factuality, Math, and Uncertainty (see
Appendix A.3 for a complete description of these topics).
Each human and LLM participant of the ideation experiment receives the same set of natural language
instructions including the same topic description, idea template, and demonstration example to ensure
a fair comparison. For human participants, we additionally allow them to select a preferred topic from
the list, and for each selected topic, we generate a corresponding LLM idea. This exactly matches the
idea topic distribution between the LLM and human participants, while ensuring that human experts
are able to select topics according to their expertise.
Idea Writeup An idea can only be evaluated if it is written up to be communicated, but this writing
process introduces many additional potential confounders. Human researchers may write in ways
that subtly signal quality research, such as including more examples and implementation details. The
format of the writeup serves as a scaffold that determines what contents should be included and at what
level of detail. Ideally, we want both human and LLM participants to provide all the necessary
implementation details for their generated ideas.
We take inspiration from guidelines used in grant submissions and introduce a template to specify the
structure and detailedness of idea proposals. Specifically, we construct a template that includes fields
for the title, problem statement, motivation, proposed method, step-by-step experiment plan, test case
examples, and the fallback plan. Both the LLM agent and the human idea writers are instructed to
follow this template and our provided demonstration examples to produce a project proposal as the
output (see Appendix A.4 for the full template and Appendix A.5 for the demo example).
Even with these templates, there may be subtle writing style cues that affect the outcome measure. For
example, humans may tend to write in a more engaging and informal tone. To reduce this possibility
further, we developed a style normalization module that uses an LLM to convert all ideas into the
same writing and formatting style without changing the original content. Our small-scale human
study shows that such a normalization approach leads to a 50% accuracy for expert human judges
who are asked to distinguish AI ideas from human ideas. Finally, the use of an LLM style anonymizer
has the possibility of substantively changing the content of the ideas. To rule this out, the first author
of this paper manually verified each human idea proposal to ensure all contents of the original ideas
were preserved. We present the full prompt used in Appendix A.6.
Review and Evaluation Reviewing research ideas is notoriously subjective, so we want to design a
review form that defines all review criteria clearly to standardize and anchor the evaluations as much
as possible. At the same time, we want our review criteria and measured variables to capture all the
desiderata of high-quality research ideas.
We follow best practices from AI conference reviewing (e.g., ICLR and ACL) when designing the
review form, where we define four breakdown metrics including novelty, excitement, feasibility, and
expected effectiveness, apart from the overall score. For each metric, we ask for a numerical score on
a 1-10 scale along with a free-text rationale. We provide clear definitions and grounding for each
numerical scale to calibrate all reviewers’ standards (see Appendix A.7 for the full review form).
In the next two sections, we instantiate how our LLM agent generates ideas and how our expert
participants generate and review the ideas.
3 IDEA GENERATION AGENT
We build a simple but effective LLM ideation agent to compare with the human expert baseline.
Rather than focusing on innovating the agent itself, we adhere to a minimalist design principle, aiming
to understand the current capabilities of LLMs in idea generation. Our research ideation agent has
three essential components: paper retrieval, idea generation, and idea ranking, which we will describe
in detail below.
3.1 PAPER RETRIEVAL FOR RAG
To ground idea generation, the agent needs to retrieve papers related to the given research
topic, so that it will be aware of related works when generating new ideas. To do so, we
leverage retrieval-augmented generation (RAG), which has demonstrated effectiveness on many
knowledge-intensive tasks (Lewis et al., 2020; Shi et al., 2024). Concretely, given a research topic
(e.g., "novel prompting methods that can improve factuality and reduce hallucination of large
language models"), we prompt an LLM to generate a sequence of function calls to the Semantic
Scholar API. We use claude-3-5-sonnet-20240620 as the backbone model for our agent, but
the pipeline should generalize to other LLMs as well. The paper
retrieval action space includes: {KeywordQuery(keywords), PaperQuery(paperId),
GetReferences(paperId)}. Each action generation is grounded on the previous actions and
executed results. We keep the top k = 20 papers from each executed function call and stop the action
generation when a max of N = 120 papers have been retrieved. We then use the LLM to score and
rerank all retrieved papers based on three criteria: 1) the paper should be directly relevant to the
specified topic; 2) the paper should be an empirical paper involving computational experiments; 3)
the paper is interesting and can inspire new projects. The LLM is prompted to score each retrieved
paper on a scale of 1 to 10 based on these criteria and we use the top-ranked papers for the next step
of idea generation.
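A rough sketch of this retrieval loop is shown below. The `call_llm` and `execute_action` helpers are hypothetical stand-ins for the LLM backbone and the Semantic Scholar API wrappers; the real agent's prompts and action parsing will differ.

```python
# Rough sketch of the retrieval loop (illustrative; prompts and parsing are simplified).
# `call_llm` and `execute_action` are hypothetical stand-ins for the LLM backbone and
# the Semantic Scholar API wrappers described above.
import json
from typing import Callable, Dict, List

def retrieve_papers(topic: str,
                    call_llm: Callable[[str], str],
                    execute_action: Callable[[Dict], List[Dict]],
                    top_k_per_call: int = 20,
                    max_papers: int = 120) -> List[Dict]:
    papers: List[Dict] = []
    history: List[Dict] = []
    while len(papers) < max_papers:
        # Ask the LLM for the next function call, grounded on previous actions and results.
        prompt = (
            f"Topic: {topic}\nPrevious actions: {json.dumps(history)}\n"
            "Output the next action as JSON, one of: "
            '{"KeywordQuery": {"keywords": "..."}}, '
            '{"PaperQuery": {"paperId": "..."}}, '
            '{"GetReferences": {"paperId": "..."}}'
        )
        action = json.loads(call_llm(prompt))
        results = execute_action(action)[:top_k_per_call]  # keep top k = 20 per call
        if not results:
            break
        papers.extend(results)
        history.append({"action": action, "num_results": len(results)})

    def relevance_score(paper: Dict) -> int:
        # Score 1-10 on topic relevance, being an empirical paper, and inspiration value.
        return int(call_llm(
            f"On a 1-10 scale, score this paper for relevance to '{topic}', being an "
            f"empirical paper, and potential to inspire new projects:\n"
            f"{paper['title']}\n{paper['abstract']}"
        ))

    return sorted(papers, key=relevance_score, reverse=True)
```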
3.2 IDEA GENERATION
Our key insight for idea generation is to generate as many candidate ideas as possible. Our intuition
is that only a small fraction of all generated ideas might be high-quality, and we should be willing to
expend inference-time compute to generate more candidates so that we can later use a reranker to
discover the "diamond in the rough". This aligns with existing results showing that scaling inference
compute with repeated sampling can boost LLM performance on various coding and reasoning
tasks (Li et al., 2022; Brown et al., 2024). Specifically, we prompt the LLM to generate 4000 seed
ideas on each research topic. The idea generation prompt includes the demonstration examples and
the retrieved papers. We craft k = 6 demonstration examples by manually summarizing exemplar
papers (Yasunaga et al., 2024; Madaan et al., 2023; Weller et al., 2023; Weston & Sukhbaatar, 2023;
Zheng et al., 2024; Dhuliawala et al., 2023) into our desired idea format. For retrieval augmentation,
we randomly select k = 10 papers from the top-ranked retrieved papers and concatenate their titles
and abstracts to prepend to the idea generation prompt. We also append the titles of all previously
generated ideas to the prompt to explicitly ask the LLM to avoid repetitions.
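The over-generation loop can be sketched roughly as below; `call_llm` is again a hypothetical wrapper, and we assume each generated idea starts with a title line (the actual prompts, batching, and output parsing in the agent are more involved).

```python
# Rough sketch of over-generating seed ideas (illustrative; assumes each generated
# idea begins with a title line, and `call_llm` is a hypothetical LLM wrapper).
import random
from typing import Callable, Dict, List

def generate_seed_ideas(topic: str,
                        demo_examples: str,
                        ranked_papers: List[Dict],
                        call_llm: Callable[[str], str],
                        n_ideas: int = 4000,
                        n_papers: int = 10) -> List[str]:
    ideas: List[str] = []
    previous_titles: List[str] = []
    for _ in range(n_ideas):
        # Randomly sample k = 10 top-ranked papers as retrieval augmentation.
        refs = random.sample(ranked_papers, k=min(n_papers, len(ranked_papers)))
        context = "\n".join(f"{p['title']}: {p['abstract']}" for p in refs)
        prompt = (
            f"{demo_examples}\n\nRelated papers:\n{context}\n\n"
            f"Previously generated idea titles (do not repeat): {previous_titles}\n\n"
            f"Propose one new research idea on the topic: {topic}"
        )
        idea = call_llm(prompt)
        ideas.append(idea)
        previous_titles.append(idea.splitlines()[0])
    return ideas
```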
To remove duplicated ideas from this large pool of candidate ideas, we first perform a round of
deduplication by encoding all seed ideas with all-MiniLM-L6-v2 from Sentence-Transformers (Reimers
& Gurevych, 2020) and then computing pairwise cosine similarities. We set a similarity threshold of
0.8 for the idea deduplication based on manual inspection.² This leaves about 5% non-duplicated
ideas out of all the generated seed ideas. We expand more on this duplication issue later in Section 7.1.
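A minimal sketch of this deduplication step, assuming the seed ideas are available as plain strings, could look as follows.

```python
# Minimal sketch of the deduplication step with Sentence-Transformers.
# The 0.8 threshold is the one chosen by manual inspection in the paper.
import numpy as np
from sentence_transformers import SentenceTransformer

def deduplicate_ideas(ideas, threshold=0.8):
    model = SentenceTransformer("all-MiniLM-L6-v2")
    # Normalized embeddings make the dot product equal to cosine similarity.
    embeddings = model.encode(ideas, normalize_embeddings=True)
    kept, kept_vecs = [], []
    for idea, vec in zip(ideas, embeddings):
        if kept_vecs and float(np.max(np.stack(kept_vecs) @ vec)) > threshold:
            continue  # too similar to an earlier idea, treat as a duplicate
        kept.append(idea)
        kept_vecs.append(vec)
    return kept  # in practice only about 5% of the seed ideas survive
```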
3.3 IDEA RANKING
The next step is for our ideation agent to rank all the remaining ideas so that we can find the best
ones among them. To build such an automatic idea ranker, we use public review data as a proxy.
Specifically, we scraped 1200 ICLR 2024 submissions related to LLMs (with keyword filtering)
along with their review scores and acceptance decisions. We explored multiple ways of predicting
the scores and decisions of these submissions and found that LLMs are poorly calibrated when asked
directly to predict the final scores or decisions, but can achieve non-trivial accuracy when asked to
judge which paper is better in pairwise comparisons.
We converted the ICLR submissions into our standard project proposal format, randomly paired up
accepted and rejected papers, and asked LLMs to predict which one is accepted. On this task,
Claude-3.5-Sonnet achieves an accuracy of 71.4% with zero-shot prompting. For comparison,
GPT-4o achieves 61.1% and Claude-3-Opus achieves 63.5%, and we do not observe significant gains
from additional prompting techniques like few-shot or chain-of-thought prompting. We therefore
choose the Claude-3.5-Sonnet zero-shot ranker.

N   Top-10   Bottom-10   Gap
1     6.28        5.72   0.56
2     6.14        5.24   0.90
3     5.83        4.86   0.97
4     5.94        4.99   0.95
5     6.42        4.69   1.73
6     6.11        4.81   1.30

Table 1: Average ICLR review scores of top- and bottom-10 papers ranked by our LLM ranker, with
different rounds (N) of pairwise comparisons.
In order to obtain reliable scores for all project proposals
based on pairwise comparisons, we adopt a Swiss system
tournament where all project proposals are paired with
those whose accumulated scores are similar, and if the proposals are judged to be better, they gain
an additional point. We repeat this for N rounds so the total score of each project proposal will be
within the [0, N] range. As a sanity check, we use the Claude-3.5-Sonnet ranker to rank the
1.2K ICLR LLM-related submissions and compare the average review scores of the top 10 ranked
papers and the bottom 10 ranked papers in Table 1. We see a clear separation between the top and
bottom ranked papers, indicating the effectiveness of the LLM ranker. We choose N = 5 for all
our experiments since it gives the best ranking result on this validation set. The top-ranked project
proposals from the agent will be directly used for the AI Ideas condition of the human study.
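The Swiss-system scoring loop can be sketched as follows; `judge_better` is a hypothetical wrapper that asks the Claude-3.5-Sonnet pairwise ranker which of two proposals is stronger and returns the index of the winner.

```python
# Rough sketch of the Swiss-system tournament used for idea ranking.
# `judge_better(a, b)` is a hypothetical LLM call returning 0 if proposal `a` wins, else 1.
import random
from typing import Callable, List

def swiss_rank(proposals: List[str],
               judge_better: Callable[[str, str], int],
               n_rounds: int = 5) -> List[int]:
    scores = [0] * len(proposals)
    for _ in range(n_rounds):
        # Pair proposals whose accumulated scores are similar (ties broken randomly).
        order = sorted(range(len(proposals)),
                       key=lambda i: (scores[i], random.random()))
        for a, b in zip(order[::2], order[1::2]):
            winner = a if judge_better(proposals[a], proposals[b]) == 0 else b
            scores[winner] += 1  # the proposal judged better gains one point
    return scores  # each total score lies in [0, n_rounds]
```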
Since our AI ranker is still far from perfect, we also introduce another experiment condition where
the first author of this paper manually reranked the generated project proposals instead of relying on
the LLM ranker, and we call this the AI Ideas + Human Rerank condition. 17 out of the 49
ideas in the AI Ideas + Human Rerank condition overlap with the AI Ideas ranked by the
LLM agent (Table 8 in Appendix A.11), while the other 32 are different, indicating the discrepancy
between the LLM ranker and the human expert reranking.
4 EXPERT IDEA WRITING AND REVIEWING
In this section, we shift focus to the human branch of idea generation comparison. We present
the details of our human study, including information about the recruited experts, the human idea
generation task, and the subsequent review process.
4.1 EXPERT RECRUITMENT
We recruit our expert participants (including for idea writing and reviewing) by sending sign-up forms
to several channels, including: 1) the OpenNLP Slack channel with 1426 NLP researchers from 71
institutions; 2) Twitter (X); 3) Slack channels of various NLP groups by direct communication with
the group members; and 4) the official chat app of the NAACL 2024 conference. Our study, including all
recruitment materials, has been approved by the IRB.
²We provide randomly sampled idea pairs and their similarities in Appendix A.10. We also provide additional
implementation details about the ideation agent in Appendix A.8.
Idea Writing Participants (N=49)
Metric       Mean   Median   Min    Max    SD
papers         12       10     2     52     9
citations     477      125     2   4553   861
h-index         5        4     1     21     4
i10-index       5        4     0     32     6

Idea Reviewing Participants (N=79)
Metric       Mean   Median   Min    Max    SD
papers         15       13     2     52    10
citations     635      327     0   7276   989
h-index         7        7     0     21     4
i10-index       7        5     0     32     6

Table 2: Research profile metrics of the idea writing and reviewing participants. Data are extracted
from Google Scholar at the time of idea or review submission.
Metric                        Mean   Median    Min     Max     SD
Human Ideas
  Familiarity (1-5)            3.7      4.0    1.0     5.0    1.0
  Difficulty (1-5)             3.0      3.0    1.0     5.0    0.7
  Time (Hours)                 5.5      5.0    2.0    15.0    2.7
  Length (Words)             901.7    876.0  444.0  1704.0  253.5
AI Ideas
  Length (Words)            1186.3   1158.0  706.0  1745.0  233.7
AI + Human Rerank Ideas
  Length (Words)            1174.0   1166.0  706.0  1708.0  211.0

Table 3: Statistics of the 49 ideas from each condition.
We screened the participants based on their provided Google Scholar profiles and recruited N = 49
experts for writing ideas and N = 79 experts for reviewing ideas. Each idea writer was asked to write
one idea within 10 days and was compensated $300, with a $1000 bonus for the top 5 ideas as scored
by the expert reviewers. Each idea reviewer was assigned 2 to 7 ideas to review, and we collected
N = 298 unique reviews in total. Reviewers were given one week to finish their reviews and were
compensated $25 per review.
4.2 EXPERT QUALIFICATIONS
Our pool of participants is highly qualified and diverse. The 49 idea writers come from 26 different
institutions and 73% of them are current PhD students. The 79 reviewers come from 32 institutions
and 87% of them are PhD students and Postdocs. We provide the detailed statistics in Appendix A.13.
We use their Google Scholar profiles to extract several proxy metrics, including the number of papers,
citations, h-index, and i10-index at the time of their submission. Table 2 shows that our idea writers
have an average of 12 papers and 477 citations, while every reviewer has published at least two papers
and has an average citation of 635 and h-index of 7. Moreover, based on their survey responses, 72
out of the 79 reviewers have previously reviewed for conferences. These statistics indicate that our
participants are highly qualified and have substantial research experience.
4.3 IDEA WRITING
We report statistics of our idea writers’ ideas to measure their quality. As shown in Table 3, idea
writers indicate a moderately high familiarity with their selected topic (3.7 on a 1 to 5 scale), and
indicate the task as moderately difficult (3 on a 1 to 5 scale). They spent an average of 5.5 hours on
the task and their ideas are 902 words long on average. These indicate that participants are putting
substantial effort into this task. We show the distribution of their selected topics in Appendix A.3.
4.4 IDEA REVIEWING
Review Assignment We let all reviewer participants select their top two preferred topics as well as
their preferred reviewing load (from 2 to 7). We then randomly assign them to ideas within their
selected topics and all ideas are anonymized. In the assignment, we balance the number of ideas from
each condition for each reviewer and ensure that each reviewer gets at least one human idea and one
AI idea. Every idea is reviewed by 2 to 4 different reviewers. We also avoid assigning ideas written
by authors from the same institution to avoid any potential contamination. Each reviewer wrote an
average of 3.8 reviews from 2 or 3 conditions, across 1 to 3 topics (full statistics in Appendix A.14).
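The assignment logic can be sketched as a simple greedy procedure like the one below; this is our simplified illustration of the constraints described above (topic match, institution separation, condition balance, per-idea review caps), not the authors' actual assignment code.

```python
# Simplified greedy sketch of the balanced review assignment (illustrative only).
# ideas: dicts with 'id', 'topic', 'condition' ('Human Ideas', 'AI Ideas', ...), 'institution'.
# reviewers: dicts with 'id', 'topics' (top-2 preferences), 'load' (2-7), 'institution'.
import random
from collections import defaultdict

def assign_reviews(ideas, reviewers, max_reviews_per_idea=4, seed=0):
    rng = random.Random(seed)
    review_counts = defaultdict(int)  # how many reviewers each idea has so far
    assignments = {}
    for reviewer in reviewers:
        pool = [i for i in ideas
                if i["topic"] in reviewer["topics"]
                and i["institution"] != reviewer["institution"]
                and review_counts[i["id"]] < max_reviews_per_idea]
        rng.shuffle(pool)
        pool.sort(key=lambda i: review_counts[i["id"]])  # favor under-reviewed ideas
        picked = []
        # Guarantee at least one human idea and one AI idea per reviewer.
        for prefix in ("Human", "AI"):
            match = next((i for i in pool if i not in picked
                          and i["condition"].startswith(prefix)), None)
            if match is not None:
                picked.append(match)
        for idea in pool:
            if len(picked) >= reviewer["load"]:
                break
            if idea not in picked:
                picked.append(idea)
        for idea in picked:
            review_counts[idea["id"]] += 1
        assignments[reviewer["id"]] = [idea["id"] for idea in picked]
    return assignments
```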
Metric                                     Mean   Median    Min     Max     SD
Ours
  Familiarity (1-5)                          3.7      3.0    1.0     5.0    0.9
  Confidence (1-5)                           3.7      4.0    1.0     5.0    0.7
  Time (Minutes)                            31.7     30.0    5.0   120.0   16.8
  Length (Word)                            231.9    208.0   41.0   771.0  112.1
ICLR 2024
  Confidence (1-5)                           3.7      4.0    1.0     5.0    0.8
  Length (Word)                            421.5    360.0   14.0  2426.0  236.4
  Length (Word; Strengths & Weaknesses)    247.4    207.0    2.0  2010.0  176.4

Table 4: Statistics of our collected reviews, with ICLR 2024 reviews as a baseline (for the 1.2K
submissions that mentioned the keyword "language models").
Review Quality Check Apart from ensuring reviewer qualifications, we also compute statistics to
measure the quality of the reviews in Table 4. On average, the reviewers indicated a familiarity of 3.7
(out of 5) in their selected topic and a confidence of 3.7 (out of 5) in their reviews. This is comparable
with the 1.2K ICLR 2024 submissions related to language models, where the reviewers also have
an average confidence of 3.7 out of 5. Moreover, reviewers spent an average of 32 minutes on each
review, with each review being about 232 words long.
Since our review forms are different from the ICLR review forms, we compare them with the ICLR
reviews where we remove the summary and question sections and only count the lengths of the
strengths and weaknesses sections. This way, the ICLR reviews have an average length of 247, similar
to our collected reviews. As an additional measure of review quality, out of the 298 unique reviews
that we have collected, 80 of them provided links to existing papers in their rationales to justify why
the proposed method is not novel. These results further validate the high quality of our review data.
5 MAIN RESULT: AI IDEAS ARE RATED MORE NOVEL THAN EXPERT IDEAS
In this section, we present our main finding. Consistently across three different statistical tests
accounting for the possible confounders, we find that AI ideas have higher novelty scores than human
ideas while being comparable on all other metrics.
Test 1: Treating Each Review as an Independent Data Point.
In Test 1, we treat each review as
an independent data point and aggregate all reviews from the same condition. We treat the Human
Ideas as the baseline condition and compare it with AI Ideas and AI Ideas + Human
Rerank using two-tailed Welch’s t-tests with Bonferroni correction. We show the barplot in Figure 2
and the detailed numerical results in Table 5. Both AI Ideas (µ = 5.64, σ = 1.76) and AI
Ideas + Human Rerank (µ = 5.81, σ = 1.66) are significantly better than Human Ideas
(µ = 4.84, σ = 1.79) on the novelty score (p < 0.01). In this particular test, the AI ideas in both
conditions are also significantly better than human ideas on the excitement score (p < 0.05), and
the AI Ideas + Human Rerank condition is also significantly better than Human Ideas in
terms of the overall score (p < 0.05). We do not observe significant differences between AI-generated
ideas and human-written ideas on the other metrics.
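As a sketch of how Test 1 can be computed, the snippet below runs a two-tailed Welch's t-test on the raw review scores and applies a Bonferroni correction; the input lists are placeholders for the per-review scores of a given metric, and the number of corrected comparisons is an assumption.

```python
# Sketch of Test 1: review-level two-tailed Welch's t-test with Bonferroni correction.
# `human_scores` and `ai_scores` are placeholder lists of per-review scores for one metric.
from scipy import stats

def welch_bonferroni(human_scores, ai_scores, n_comparisons=2):
    # equal_var=False gives Welch's t-test; scipy's test is two-tailed by default.
    t_stat, p_value = stats.ttest_ind(ai_scores, human_scores, equal_var=False)
    return t_stat, min(p_value * n_comparisons, 1.0)  # Bonferroni-corrected p-value
```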
Test 2: Treating Each Idea as an Independent Data Point. Since we collect multiple reviews
for each idea, one could argue that we should not treat each review as an independent data point.
To account for this potential confounder, we perform Test 2 where we average the scores of each
idea and treat each idea as one data point. This way, the sample size for every condition will be
N = 49, namely the number of ideas. We treat the Human Ideas as the baseline condition and
compare it with AI Ideas and AI Ideas + Human Rerank using two-tailed Welch’s t-tests
with Bonferroni correction. Under this test (Table 14 in Appendix A.15), we still see significant results
(p < 0.05) where both AI Ideas (µ = 5.62, σ = 1.39) and AI Ideas + Human Rerank
(µ = 5.78, σ = 1.07) have higher novelty scores than Human Ideas (µ = 4.86, σ = 1.26).
Test 3: Treating Each Reviewer as an Independent Data Point. Another possible confounder is
that different reviewers might have different biases, for example, some reviewers may be more lenient
than others. To account for such reviewer biases, we perform Test 3 where we treat each reviewer as
one data point and compute their average score on each condition. Then for each reviewer, we get
their mean score difference between the AI Ideas condition and the Human Ideas condition,
as well as the difference between the AI Ideas + Human Rerank condition and the Human
Ideas condition. This way, we only analyze the differences among the different conditions. That
is, if the differences are significantly higher than zero under the one-sample t-test, that indicates
reviewers are giving higher scores to one condition compared to the other. Using this test (Table 15 in
Appendix A.15), we also see significant results (p < 0.05) that AI ideas in both the AI Ideas and
AI Ideas + Human Rerank conditions are rated more novel than Human Ideas. Therefore,
we conclude that AI ideas generated by our ideation agent are judged as more novel than human
expert-generated ideas, consistently across all three different statistical tests.³

Condition                       Size   Mean  Median    SD    SE  Min  Max  p-value
Novelty Score
  Human Ideas                    119   4.84      5   1.79  0.16    1    8    –
  AI Ideas                       109   5.64      6   1.76  0.17    1   10    0.00**
  AI Ideas + Human Rerank        109   5.81      6   1.66  0.16    2   10    0.00***
Excitement Score
  Human Ideas                    119   4.55      5   1.89  0.17    1    8    –
  AI Ideas                       109   5.19      6   1.73  0.17    1    9    0.04*
  AI Ideas + Human Rerank        109   5.46      6   1.82  0.17    1    9    0.00**
Feasibility Score
  Human Ideas                    119   6.61      7   1.99  0.18    1   10    –
  AI Ideas                       109   6.34      6   1.88  0.18    2   10    1.00
  AI Ideas + Human Rerank        109   6.44      6   1.63  0.16    1   10    1.00
Expected Effectiveness Score
  Human Ideas                    119   5.13      5   1.76  0.16    1    8    –
  AI Ideas                       109   5.47      6   1.58  0.15    1   10    0.67
  AI Ideas + Human Rerank        109   5.55      6   1.52  0.15    1    9    0.29
Overall Score
  Human Ideas                    119   4.68      5   1.90  0.17    1    9    –
  AI Ideas                       109   4.85      5   1.70  0.16    1    9    1.00
  AI Ideas + Human Rerank        109   5.34      6   1.79  0.17    1    9    0.04*

Table 5: Scores across all conditions by treating each review as an independent datapoint (Test 1).
Size is the number of reviews for each condition and the p-values are computed with two-tailed
Welch's t-tests with Bonferroni correction. We bold results that are statistically significant
(*p < 0.05; **p < 0.01; ***p < 0.001). AI ideas are judged as significantly better than human ideas
in terms of novelty and excitement while being comparable on all other metrics.
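A sketch of Test 3 above is shown below: for each reviewer we take the mean score difference between an AI condition and the Human condition, then test whether these differences differ from zero; the data layout is a placeholder, not the authors' actual data structure.

```python
# Sketch of Test 3: per-reviewer mean differences tested against zero.
# `reviewer_scores` is a placeholder layout: one dict per reviewer mapping a
# condition name to the list of scores that reviewer gave in that condition.
import numpy as np
from scipy import stats

def reviewer_difference_test(reviewer_scores,
                             condition="AI Ideas",
                             baseline="Human Ideas"):
    diffs = [np.mean(r[condition]) - np.mean(r[baseline])
             for r in reviewer_scores
             if r.get(condition) and r.get(baseline)]  # reviewers who saw both conditions
    # A mean difference significantly above zero favors the AI condition.
    return stats.ttest_1samp(diffs, 0.0)
```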
6 IN-DEPTH ANALYSIS OF THE HUMAN STUDY
In this section, we move beyond the statistical comparisons and dive into other aspects of our collected
data. Specifically, we focus on the quality of human ideas and the extent of reviewer agreement.
6.1 HUMAN EXPERTS MAY NOT BE GIVING THEIR BEST IDEAS
We first investigate whether human experts are submitting their best ideas to us. We did a post-
study survey to understand how idea-writing participants came up with their ideas. Out of the 49
participants, 37 of them came up with the idea on the spot, while the other 12 already had the idea
before the study. Furthermore, we asked the survey question: “How does this idea compare to your
past research ideas (ideas that you actually worked on)? Please answer with a percentile. E.g., this
idea is one of my top 10% ideas.” Our participants indicated that on average their submitted ideas
are about the top 43% of all their past ideas. This implies that our collected ideas are likely the
median-level ideas from these expert researchers, which is reasonable given that most of them came
up with the idea within the 10-day time constraint of the task.
³We also include results of fitting linear mixed-effects models in Appendix A.16, which reinforce our
conclusions. Additionally, we plot the breakdown of all metrics by topic in Appendix A.17.
6.2 REVIEWING IDEAS IS INHERENTLY SUBJECTIVE
Finally, we acknowledge that reviewing is inherently subjective, and reviewing based on ideas
rather than executed papers might be even more subjective. We investigate this using inter-reviewer
agreement. Specifically, we randomly split reviewers of each paper into half, use one half to rank the
top and bottom 25% of all ideas, and then measure agreement with the held-out set of reviewers. As
shown in the first block of Table 6, reviewers have a relatively low agreement (56.1%) despite the
fact that we have provided detailed explanations for each metric in our review form. As a baseline
comparison, the NeurIPS 2021 reviewer consistency experiment found 66.0% accuracy using this
reviewer agreement metric in the balanced setting (Beygelzimer et al., 2021; Lu et al., 2024). We
also computed the reviewer agreement using the same metric on the 1.2K ICLR 2024 submissions
related to language models, which has a balanced accuracy of 71.9%. While our reviewer agreement
is higher than random (50%), it is generally lower than conference reviewing, most likely due to the
higher subjectivity involved when evaluating ideas without seeing the actual experiment results.
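The agreement metric can be sketched as below; the exact bucketing and tie-handling details are our assumptions about the procedure described above, not the authors' code.

```python
# Sketch of the split-half reviewer agreement metric (balanced top/bottom-25% setting).
# Details such as tie handling are assumptions; inputs are per-idea lists of review scores.
import random
import numpy as np

def split_half_agreement(reviews_per_idea, seed=0):
    rng = random.Random(seed)
    half_a, half_b = [], []
    for scores in reviews_per_idea:
        idx = list(range(len(scores)))
        rng.shuffle(idx)
        mid = len(idx) // 2
        half_a.append(np.mean([scores[i] for i in idx[:mid]]) if mid > 0 else np.nan)
        half_b.append(np.mean([scores[i] for i in idx[mid:]]))
    a, b = np.array(half_a), np.array(half_b)
    valid = ~np.isnan(a)
    a, b = a[valid], b[valid]
    k = max(1, len(a) // 4)                       # top and bottom 25% by one half
    top, bottom = np.argsort(a)[-k:], np.argsort(a)[:k]
    median_b = np.median(b)
    # Agreement: the held-out half should place "top" ideas above its median
    # and "bottom" ideas below it.
    hits = np.sum(b[top] > median_b) + np.sum(b[bottom] < median_b)
    return hits / (2 * k)
```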
Apart from the above quantitative analysis, we also provide some qualitative analysis of our collected
data. We provide a summary of free-text reviews in Appendix A.18, and provide four pairs of AI and
human ideas along with full reviews in Appendix A.19.
7 LIMITATIONS OF LLMS
Our ideation agent is motivated by two potential strengths of LLMs: their ability to scale by generating
a vast number of ideas - far more than any human could - and the possibility of filtering these ideas to
extract the best ones from the large pool. In theory, this approach could lead to high-quality ideas
by leveraging inference scaling. However, we present empirical evidence that this naive assumption
about scaling idea generation has significant limitations.
7.1 LLMS LACK DIVERSITY IN IDEA GENERATION
We adopted an over-generate and rank paradigm in idea
generation. This raises the question: is there an upper limit
to how many new ideas LLMs can generate? To answer
this question, we take a closer look at 4000 generated seed
ideas for each topic.
We encode all raw ideas with all-MiniLM-L6-v2
from Sentence-Transformers. For each idea, we compute
its cosine similarity with all previously generated ideas on
the same topic. We consider an idea as a duplicate if it has
a similarity of above 0.8 with any of the previously gen-
erated ideas. In Figure 3, we show that as the agent keeps
generating new batches of ideas, the accumulated non-
duplicate ideas eventually plateau. In fact, out of the 4000
generated seed ideas, there are only 200 non-duplicate
unique ideas. This sets a bottleneck on our inference-time
scaling since increasing the number of generated ideas
simply leads to repeating duplicate ideas.
Figure 3: The accumulated non-duplicate ideas saturate as the agent keeps generating new ideas.
All data points are averaged across all topics.

7.2 LLMS CANNOT EVALUATE IDEAS RELIABLY
Most prior works have adopted LLM-as-a-judge for evaluating research ideas (Lu et al., 2024),
motivated by the observation that LLMs can have a higher agreement with human evaluators than
the inter-human agreement. However, we offer some empirical evidence that LLMs cannot evaluate
ideas reliably yet.
Concretely, we use the average review score of each idea to rank the top and bottom 25% of all our
collected human and AI ideas, and use this to benchmark various LLM evaluators. Specifically, we
obtain the LLM predicted scores of all ideas and set the median score as the threshold to measure
their accuracy on our balanced idea ranking data.
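A sketch of this benchmarking setup is given below; `llm_score` stands in for any of the LLM evaluators being tested and is a hypothetical function returning a numeric score for an idea.

```python
# Sketch of benchmarking an LLM evaluator against the human top/bottom-25% split.
# `llm_score` is a hypothetical function mapping an idea (string) to a numeric score.
import numpy as np

def llm_evaluator_accuracy(top_ideas, bottom_ideas, llm_score):
    ideas = list(top_ideas) + list(bottom_ideas)
    labels = np.array([1] * len(top_ideas) + [0] * len(bottom_ideas))  # 1 = human top 25%
    scores = np.array([llm_score(idea) for idea in ideas])
    predictions = (scores > np.median(scores)).astype(int)  # median threshold (balanced data)
    return float(np.mean(predictions == labels))
```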
In the second block of Table 6, we compare several different LLM evaluators: 1) directly giving the
review criteria and prompting for a final score (Yang et al., 2024; Li et al., 2024; Baek et al., 2024);
2) our pairwise ranker as described in Section 3.3; and 3) the "AI Scientist" reviewer agent (Lu et al.,
2024). All of these LLM evaluators have a lower agreement than our expert reviewers' scores. Even
the best LLM evaluator — our own Claude-3.5 pairwise ranker — only achieves an accuracy of
53.3%, lower than our inter-reviewer consistency of 56.1%.
Evaluator                   Consistency
Random                             50.0
NeurIPS'21                         66.0
ICLR'24                            71.9
Ours                               56.1
GPT-4o Direct                      50.0
GPT-4o Pairwise                    45.0
Claude-3.5 Direct                  51.7
Claude-3.5 Pairwise                53.3
"AI Scientist" Reviewer            43.3

Table 6: Review score consistency among human reviewers (first block) and between humans and AI
(second block).

Even if AI-human agreement eventually matches or exceeds human-human agreement, simply
meeting this baseline does not imply that AI-as-a-reviewer is meaningful, since we may be trading
variance for bias, where AI reviewers are more consistent but rely on spurious correlations (Durmus
et al., 2022). Our findings in Table 6 are consistent with these brittleness concerns, as we find a
significant drop in AI-human agreement scores under our study compared to the original studies.
Finally, even though Claude-3.5 pairwise agreement may seem close to human agreement, many
other pieces of evidence throughout the paper lead us to be cautious about the use of LLM-as-a-judge
in such a complex and subjective task. These include our findings on the significant discrepancy
between the agent's top-ranked ideas and the human expert's top-ranked ideas (Appendix A.11) and
how the AI Ideas + Human Rerank condition tends to score higher than the AI Ideas
condition on all metrics in Section 5.
8 RELATED WORK
Research idea generation and execution. Several prior works explored methods to improve idea
generation, such as iterative novelty boosting (Wang et al., 2024), multi-agent collaboration (Baek
et al., 2024), and multi-module retrieval and revision (Yang et al., 2024). While some of them
share similar components with our ideation agent, these works focus on improving the idea generation
methods over vanilla prompting baselines, without comparisons to any human expert baselines.
Beyond ideation, another line of work uses LLMs for executing experiments by generating code
given the research problems (Huang et al., 2024; Tian et al., 2024), or combining idea generation with
code generation to directly implement AI-generated ideas (Lu et al., 2024; Li et al., 2024). These
works either use automatic evaluation on a pre-defined set of problems and benchmarks, setting a
constrained problem space; or rely on proxy metrics like LLM evaluators, which are often unreliable.
LLMs for other research-related tasks. LLMs have also been used for several other research-related
tasks, such as generating code to perform data-driven discovery (Majumder et al., 2024; Hu et al.,
2024; Guo et al., 2024; Gu et al., 2024; Ifargan et al., 2024), automatic review generation (D’Arcy
et al., 2024; Liang et al., 2024), related work curation (Kang & Xiong, 2024; Ajith et al., 2024; Press
et al., 2024; Lehr et al., 2024), experiment outcome prediction (Lehr et al., 2024; Zhang et al., 2024;
Manning et al., 2024; Hewitt et al., 2024), and future work recommendation (Zhang et al., 2024).
Unlike these works, we tackle the more creative and open-ended task of research ideation.
Computational creativity. Our work also connects to the line of work on examining AI’s novelty and
diversity in creative tasks. Previous findings include AI writings being less creative than professional
writers (Chakrabarty et al., 2024); LLM generations lacking collective diversity (Zhou et al., 2024;
Anderson et al., 2024); and human-AI collaboration reducing diversity (Padmakumar & He, 2024).
In contrast, we focus on the human-AI comparison on the challenging task of research ideation with
expert participants.
9 CONCLUSION
We compared research ideas generated by our AI agent with ideas written by expert researchers
and observed the robust finding that expert reviewers rate AI ideas as statistically more novel than
expert ideas. We recognize several limitations of the current study, including the quality of the human
baseline, the subjectivity of idea evaluation, and the limited scope. We discuss future steps to address
these limitations in Appendix A.1 and discuss various ethical considerations in Appendix A.2.
ACKNOWLEDGEMENT
We thank all participants who wrote and reviewed ideas for us. Many of them also provided insightful
feedback on various aspects of this study. This project would not have been possible without their
support. We thank Rose Wang, Dora Zhao, Irena Gao, Isabel Gallegos, Ken Liu, Aryaman Arora,
Harshit Joshi, Shi Feng, Tianyu Gao, Xinran Zhao, Yangjun Ruan, Xi Ye, Mert Yuksekgonul, and
members of Tatsu Lab and SALT Lab for their helpful feedback on the early version of this draft. We
thank our undergraduate intern Isha Goswami and faculty administrator Eric Alejandro Pineda for
assisting with review data collection and financial logistics.
This work was supported by gifts from Open Philanthropy, Tianqiao and Chrissy Chen Institute,
Meta, IBM, and Amazon, and grants from ONR, NSF IIS-2247357, and CNS-2308994.
REFERENCES
Anirudh Ajith, Mengzhou Xia, Alexis Chevalier, Tanya Goyal, Danqi Chen, and Tianyu Gao.
LitSearch: A Retrieval Benchmark for Scientific Literature Search. ArXiv, abs/2407.18940, 2024.
Barrett R Anderson, Jash Hemant Shah, and Max Kreminski. Homogenization Effects of Large
Language Models on Human Creative Ideation. In Proceedings of the 16th Conference on Creativity
& Cognition, 2024.
Jinheon Baek, Sujay Kumar Jauhar, Silviu Cucerzan, and Sung Ju Hwang. ResearchAgent: Iter-
ative Research Idea Generation over Scientific Literature with Large Language Models. ArXiv,
abs/2404.07738, 2024.
Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, Andrew
Goff, Jonathan Gray, Hengyuan Hu, Athul Paul Jacob, Mojtaba Komeili, Karthik Konath, Minae
Kwon, Adam Lerer, Mike Lewis, Alexander H. Miller, Sandra Mitts, Adithya Renduchintala,
Stephen Roller, Dirk Rowe, Weiyan Shi, Joe Spisak, Alexander Wei, David J. Wu, Hugh Zhang,
and Markus Zijlstra. Human-level play in the game of diplomacy by combining language models
with strategic reasoning. Science, 378:1067 – 1074, 2022.
Alina Beygelzimer, Yann Dauphin, Percy Liang, and Jennifer Wortman Vaughan. The NeurIPS 2021
consistency experiment. https://blog.neurips.cc/2021/12/08/the-neurips-2021-consistency-experiment,
2021. Neural Information Processing Systems blog post.
Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V. Le, Christopher Ré, and
Azalia Mirhoseini. Large Language Monkeys: Scaling Inference Compute with Repeated Sampling.
ArXiv, abs/2407.21787, 2024.
Tuhin Chakrabarty, Philippe Laban, Divyansh Agarwal, Smaranda Muresan, and Chien-Sheng Wu.
Art or Artifice? Large Language Models and the False Promise of Creativity. In CHI, 2024.
Katherine M. Collins, Albert Qiaochu Jiang, Simon Frieder, Li Siang Wong, Miri Zilka, Umang Bhatt,
Thomas Lukasiewicz, Yuhuai Wu, Joshua B. Tenenbaum, William Hart, Timothy Gowers, Wenda
Li, Adrian Weller, and Mateja Jamnik. Evaluating language models for mathematics through
interactions. Proceedings of the National Academy of Sciences of the United States of America,
121, 2024.
Mike D’Arcy, Tom Hope, Larry Birnbaum, and Doug Downey. MARG: Multi-Agent Review
Generation for Scientific Papers. ArXiv, abs/2401.04259, 2024.
Shehzaad Dhuliawala, Mojtaba Komeili, Jing Xu, Roberta Raileanu, Xian Li, Asli Celikyilmaz, and
Jason Weston. Chain-of-Verification Reduces Hallucination in Large Language Models. ArXiv,
abs/2309.11495, 2023.
Esin Durmus, Faisal Ladhak, and Tatsunori B. Hashimoto. Spurious Correlations in Reference-Free
Evaluation of Text Generation. In Annual Meeting of the Association for Computational Linguistics,
2022. URL https://api.semanticscholar.org/CorpusID:248300077.
Ken Gu, Ruoxi Shang, Ruien Jiang, Keying Kuang, Richard-John Lin, Donghe Lyu, Yue Mao, Youran
Pan, Teng Wu, Jiaqian Yu, Yikun Zhang, Tianmai M. Zhang, Lanyi Zhu, Mike A. Merrill, Jeffrey
Heer, and Tim Althoff. BLADE: Benchmarking Language Model Agents for Data-Driven Science.
ArXiv, abs/2408.09667, 2024.
Siyuan Guo, Cheng Deng, Ying Wen, Hechang Chen, Yi Chang, and Jun Wang. DS-Agent: Auto-
mated Data Science by Empowering Large Language Models with Case-Based Reasoning. In
ICML, 2024.
Luke Hewitt, Ashwini Ashokkumar, Isaias Ghezae, and Robb Willer. Predicting Results of Social
Science Experiments Using Large Language Models. Preprint, 2024. URL https://docsend.com/view/ity6yf2dansesucf.
Xueyu Hu, Ziyu Zhao, Shuang Wei, Ziwei Chai, Guoyin Wang, Xuwu Wang, Jing Su, Jingjing
Xu, Ming Zhu, Yao Cheng, Jianbo Yuan, Kun Kuang, Yang Yang, Hongxia Yang, and Fei Wu.
InfiAgent-DABench: Evaluating Agents on Data Analysis Tasks. In ICML, 2024.
Qian Huang, Jian Vora, Percy Liang, and Jure Leskovec. MLAgentBench: Evaluating Language
Agents on Machine Learning Experimentation. In ICML, 2024.
Tal Ifargan, Lukas Hafner, Maor Kern, Ori Alcalay, and Roy Kishony. Autonomous LLM-driven
research from data to human-verifiable research papers. ArXiv, abs/2404.17605, 2024.
Hao Kang and Chenyan Xiong. ResearchArena: Benchmarking LLMs’ Ability to Collect and
Organize Information as Research Agents. ArXiv, abs/2406.10291, 2024.
Steven A. Lehr, Aylin Caliskan, Suneragiri Liyanage, and Mahzarin R. Banaji. ChatGPT as Research
Scientist: Probing GPT’s Capabilities as a Research Librarian, Research Ethicist, Data Generator
and Data Predictor. Proceedings of the National Academy of Sciences of the United States of
America, 121 35, 2024.
Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal,
Heinrich Kuttler, Mike Lewis, Wen tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela.
Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. In NeurIPS, 2020.
Ruochen Li, Teerth Patel, Qingyun Wang, and Xinya Du. MLR-Copilot: Autonomous Machine
Learning Research based on Large Language Models Agents. ArXiv, abs/2408.14033, 2024.
Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom
Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien
de Masson d'Autume, Igor Babuschkin, Xinyun Chen, Po-Sen Huang, Johannes Welbl, Sven
Gowal, Alexey Cherepanov, James Molloy, Daniel Jaymin Mankowitz, Esme Sutherland Robson,
Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu, and Oriol Vinyals. Competition-level
code generation with AlphaCode. Science, 378:1092 – 1097, 2022.
Weixin Liang, Yuhui Zhang, Hancheng Cao, Binglu Wang, Daisy Yi Ding, Xinyu Yang, Kailas
Vodrahalli, Siyu He, Daniel Scott Smith, Yian Yin, Daniel A. McFarland, and James Zou. Can
Large Language Models Provide Useful Feedback on Research Papers? A Large-Scale Empirical
Analysis. NEJM AI, 1(8), 2024.
Chris Lu, Cong Lu, Robert Tjarko Lange, Jakob Foerster, Jeff Clune, and David Ha. The AI Scientist:
Towards Fully Automated Open-Ended Scientific Discovery. ArXiv, abs/2408.06292, 2024.
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon,
Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Sean Welleck, Bodhisattwa Prasad Majumder,
Shashank Gupta, Amir Yazdanbakhsh, and Peter Clark. Self-Refine: Iterative Refinement with
Self-Feedback. In NeurIPS, 2023.
Bodhisattwa Prasad Majumder, Harshit Surana, Dhruv Agarwal, Bhavana Dalvi, Abhijeetsingh
Meena, Aryan Prakhar, Tirth Vora, Tushar Khot, Ashish Sabharwal, and Peter Clark. Discovery-
Bench: Towards Data-Driven Discovery with Large Language Models. ArXiv, abs/2407.01725,
2024.
Benjamin S. Manning, Kehang Zhu, and John J. Horton. Automated Social Science: Language
Models as Scientist and Subjects. SSRN Electronic Journal, 2024.
Vishakh Padmakumar and He He. Does Writing with Language Models Reduce Content Diversity?
In ICLR, 2024.
Ori Press, Andreas Hochlehnert, Ameya Prabhu, Vishaal Udandarao, Ofir Press, and Matthias Bethge.
CiteME: Can Language Models Accurately Cite Scientific Claims? ArXiv, abs/2407.12861, 2024.
Nils Reimers and Iryna Gurevych. Making Monolingual Sentence Embeddings Multilingual using
Knowledge Distillation. In EMNLP, 2020.
Sander Schulhoff, Michael Ilie, Nishant Balepur, Konstantine Kahadze, Amanda Liu, Chenglei Si,
Yinheng Li, Aayush Gupta, HyoJung Han, Sevien Schulhoff, Pranav Sandeep Dulepet, Saurav
Vidyadhara, Dayeon Ki, Sweta Agrawal, Chau Pham, Gerson C. Kroiz, Feileen Li, Hudson
Tao, Ashay Srivastava, Hevander Da Costa, Saloni Gupta, Megan L. Rogers, Inna Goncearenco,
Giuseppe Sarli, Igor Galynker, Denis Peskoff, Marine Carpuat, Jules White, Shyamal Anadkat,
Alexander Miserlis Hoyle, and Philip Resnik. The Prompt Report: A Systematic Survey of
Prompting Techniques. ArXiv, abs/2406.06608, 2024.
Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettle-
moyer, and Wen tau Yih. REPLUG: Retrieval-Augmented Black-Box Language Models. In
NAACL, 2024.
Minyang Tian, Luyu Gao, Shizhuo Dylan Zhang, Xinan Chen, Cunwei Fan, Xuefei Guo, Roland
Haas, Pan Ji, Kittithat Krongchon, Yao Li, Shengyan Liu, Di Luo, Yutao Ma, Hao Tong, Kha
Trinh, Chenyu Tian, Zihan Wang, Bohao Wu, Yanyu Xiong, Shengzhu Yin, Min Zhu, Kilian Lieret,
Yanxin Lu, Genglin Liu, Yufeng Du, Tianhua Tao, Ofir Press, Jamie Callan, E. A. Huerta, and Hao
Peng. SciCode: A Research Coding Benchmark Curated by Scientists. ArXiv, abs/2407.13168,
2024.
Trieu H. Trinh, Yuhuai Wu, Quoc V. Le, He He, and Thang Luong. Solving olympiad geometry
without human demonstrations. Nature, 625:476 – 482, 2024.
Qingyun Wang, Doug Downey, Heng Ji, and Tom Hope. SciMON: Scientific Inspiration Machines
Optimized for Novelty. In ACL, 2024.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Huai hsin Chi, F. Xia, Quoc Le,
and Denny Zhou. Chain of Thought Prompting Elicits Reasoning in Large Language Models. In
NeurIPS, 2022.
Orion Weller, Marc Marone, Nathaniel Weir, Dawn J Lawrie, Daniel Khashabi, and Benjamin Van
Durme. “According to . . . ”: Prompting Language Models Improves Quoting from Pre-Training
Data. In EACL, 2023.
Jason Weston and Sainbayar Sukhbaatar. System 2 Attention (is something you might need too).
ArXiv, abs/2311.11829, 2023.
Zonglin Yang, Xinya Du, Junxian Li, Jie Zheng, Soujanya Poria, and E. Cambria. Large Language
Models for Automated Open-domain Scientific Hypotheses Discovery. ACL Findings, 2024.
Michihiro Yasunaga, Xinyun Chen, Yujia Li, Panupong Pasupat, Jure Leskovec, Percy Liang, Ed Huai
hsin Chi, and Denny Zhou. Large Language Models as Analogical Reasoners. In ICLR, 2024.
Xingjian Zhang, Yutong Xie, Jin Huang, Jinge Ma, Zhaoying Pan, Qijia Liu, Ziyang Xiong, Tolga
Ergen, Dongsub Shim, Honglak Lee, and Qiaozhu Mei. MASSW: A New Dataset and Benchmark
Tasks for AI-Assisted Scientific Workflows. ArXiv, abs/2406.06357, 2024.
Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, Heng-Tze Cheng, Ed Huai hsin Chi, Quoc V.
Le, and Denny Zhou. Take a Step Back: Evoking Reasoning via Abstraction in Large Language
Models. In ICLR, 2024.
Ruiqi Zhong, Charles Burton Snell, Dan Klein, and Jacob Steinhardt. Describing Differences between
Text Distributions with Natural Language. In ICML, 2022.
Ruiqi Zhong, Peter Zhang, Steve Li, Jinwoo Ahn, Dan Klein, and Jacob Steinhardt. Goal Driven
Discovery of Distributional Differences via Language Descriptions. In NeurIPS, 2023.
Yilun Zhou, Caiming Xiong, Silvio Savarese, and Chien-Sheng Wu. Shared Imagination: LLMs
Hallucinate Alike. ArXiv, abs/2407.16604, 2024.
A APPENDIX
A.1 DISCUSSION
In this section, we discuss some high-level questions readers might have and suggest ways to address
them.
Question 1: Do these collected expert ideas represent their best ideas? One might argue that these
ideas submitted by our idea-writing participants might not represent their best ideas as we discussed
in Section 6.1, since most of them came up with the idea on the spot within a short period. In
order to address this concern, we have designed an experiment where we will compare AI ideas
with papers accepted at top-tier AI conferences. To avoid any possible contamination, we target the
upcoming EMNLP 2024 conference, which will release the accepted papers in October 2024. We
have generated AI ideas with our agent on 23 topics from the EMNLP Call For Papers page in July
2024 and cached them. We pre-registered our analysis plan which also includes the link to the cached
ideas. Apart from comparing the quality of these ideas, we will also compute the overlap between
AI-generated ideas and accepted papers on the same topics.
Question 2: Are evaluations based solely on ideas subjective? In this current study, we focused
solely on evaluating the ideas themselves. Ideas that sound novel and exciting might not necessarily
turn into successful projects, and our results indeed indicated some feasibility trade-offs of AI ideas.
We view the current study as a preliminary evaluation of AI-generated ideas. In the next phase, we
will recruit researchers to execute some AI and human-generated ideas into full projects. This will
enable reviewers to assess the complete experimental outcomes, providing a more reliable basis for
evaluation. Furthermore, it will allow us to analyze whether our initial idea evaluations align with the
assessments of the actual project outcomes.
Question 3: Why do you focus only on prompting-based research in NLP? The scope of our study
is limited to prompting research ideas within NLP. We chose this design to facilitate the next phase
of our execution experiment, where we prefer research ideas that are less resource-demanding and
can be executed relatively quickly. We believe that the evaluation protocols we established should be
applicable to other research domains as well, although the conclusions could be different depending
on the research fields. Future work should consider extending such human study to other research
domains and it would be interesting to compare how the conclusions differ.
Question 4: Can you automate idea execution as well? It is tempting to envision an end-to-end
automated research pipeline where AI agents can implement AI-generated ideas to directly evaluate
their effectiveness. Apart from speeding up scientific discovery, one could also imagine using such
execution agents to automatically verify experiment results in existing papers or new submissions.
We have also explored building an LLM agent to generate code to implement the generated ideas.
Specifically, we provide a template codebase that consists of: (1) loading datasets from Huggingface
or generating synthetic test examples; (2) implementing baseline methods; (3) implementing the
proposed method; (4) loading or implementing the evaluation metrics; and (5) running experiments on
the test set with the baselines and the proposed method, so that the output of the agent will be a report
of the baseline performance as well as the proposed method’s performance. While this agent can
generate code that compiles and executes, we find that the automated experiments can be misleading
because the agent often skips or modifies steps in the baselines or proposed methods. In some cases,
the metric functions are also not correctly defined. This highlights the core challenge: just comparing
the final experiment results is not enough; we have to verify the faithfulness of the implementations
as well. Performing such implementation verification is not a trivial task, and we leave it to future
work. We provide detailed description of our idea execution agent in Appendix A.30.
A.2 ETHICAL CONSIDERATIONS
Publication Policy. The growing use of AI to generate research ideas raises serious concerns
about the potential abuse of these technologies by students or researchers who may flood academic
conferences with low-quality or poorly thought-out submissions. The availability of LLM-generated
content could lead to a decline in the overall quality of academic discourse, as some individuals
might take a lazy approach, relying on AI to both generate ideas and review submissions. This would
undermine the credibility and integrity of the review process. The risks are real. Without proper
oversight, we could see a deluge of submissions that lack depth or intellectual merit. To prevent this,
it is essential to hold researchers accountable for the outputs generated through AI tools. Rigorous
standards must be applied equally to both AI-assisted and human-generated research to ensure that
the use of LLMs does not result in misleading, superficial, or unethical academic contributions.
Intellectual Credit. The use of LLMs to generate research ideas introduces significant ambiguity
around the concept of intellectual credit. Traditional frameworks for attributing credit in research,
based on human authorship and contribution, become less clear when AI plays a significant role
in idea generation. Questions arise around how to distribute credit between the developers of the
LLM, the researchers who designed the frameworks for its use, and the researchers who integrate
AI-generated ideas into their work. Furthermore, it becomes increasingly difficult to trace the origins
of AI-generated contributions, especially when they draw from vast datasets composed of numerous
sources. This complexity calls for a broader rethinking of how intellectual credit is assigned in
AI-driven research. While a complete overhaul of legal and academic norms is beyond the scope
of this project, we advocate for the adoption of transparent documentation practices. Researchers
should clearly disclose the role AI played in the idea generation process, specifying which models,
data sources, and frameworks were used, and outlining the level of human involvement. This could
ensure that the credit distribution in AI-supported research is as transparent and fair as possible.
Potential for Misuse. AI-generated research ideas, especially those that introduce novel concepts,
have the potential to be misused in ways that could lead to harmful or destabilizing outcomes. For
instance, ideation agents could be leveraged to generate adversarial attack strategies or other unethical
applications. This concern aligns with broader arguments from those focused on existential risk
(X-risk), who argue that AI-driven innovation could be a primary route to destabilizing the status
quo, posing risks at a societal or even global level. Our stance is that such discussions on safety
should be evidence-based to the extent that it is possible, and careful evaluation work is an important
component of keeping these discussions grounded in actual, measured capabilities of these systems.
We advocate for continued safety research specifically targeting these types of concerns—such as the
development of Reinforcement Learning from Human Feedback (RLHF) systems or anti-jailbreak
mechanisms for research ideation agents. Additionally, we believe it would be meaningful to create
safety benchmarks that assess the ethical and safe application of AI-generated ideas.
Idea Homogenization. Our analysis showed that current LLMs lack diversity in idea generation.
This raises important concerns that wide adoption of LLMs can result in idea homogenization, where
the generated ideas only reflect a narrow set of perspectives or have systematic biases. Over time,
this could lead to a reduction in the richness and diversity of research outputs globally. Future work
should develop ways to either improve LLMs themselves or refine our idea generation methods to
promote idea diversity. It’s also important to note that our evaluation primarily assesses the quality of
the typical ideas being generated, and may not fully capture the long tail of unique or novel ideas that
would be truly transformative.
Impact on Human Researchers. The integration of AI into research idea generation introduces a
complex sociotechnical challenge, as research is fundamentally a community-driven, collaborative
effort. By introducing AI, particularly LLMs, into this social system, we risk unforeseen consequences.
Overreliance on AI could lead to a decline in original human thought and a devaluation of human
creativity, and could even displace human researchers, while the increasing use of LLMs for ideation
might reduce opportunities for human collaboration and collaborative brainstorming, which are
essential for refining and expanding ideas. To mitigate these risks, future work should explore new forms of human-AI
collaboration, and our results on human reranking of AI ideas show that even naive human-AI
collaboration approaches can be effective. Beyond reranking, humans can play a critical role in
the ideation process by providing intermediate feedback, taking AI-generated ideas as inspiration
for further development, and bringing their unique expertise into the process. Understanding how
to integrate LLMs into this collaborative process without disrupting the social fabric of research
will be an important ongoing problem, requiring careful consideration of the broader sociotechnical
implications.
A.3 LIST OF RESEARCH TOPICS
We selected the following list of research topics for our research ideation task:
1. Bias: novel prompting methods to reduce social biases and stereotypes of large language
models
2. Coding: novel prompting methods for large language models to improve code generation
3. Safety: novel prompting methods to improve large language models’ robustness against
adversarial attacks or improve their security or privacy
4. Multilingual: novel prompting methods to improve large language models’ performance on
multilingual tasks or low-resource languages and vernacular languages
5. Factuality: novel prompting methods that can improve factuality and reduce hallucination
of large language models
6. Math: novel prompting methods for large language models to improve mathematical problem
solving
7. Uncertainty: novel prompting methods that can better quantify uncertainty or calibrate the
confidence of large language models
We use these topic descriptions to elicit ideas from both human participants and our LLM agent.
We show the distribution of our idea writing participants’ selected topics in Table 7.
Topic          Count
Bias           4
Coding         9
Safety         5
Multilingual   10
Factuality     11
Math           4
Uncertainty    6
Total          49
Table 7: Idea topic distribution.
A.4 PROJECT PROPOSAL TEMPLATE
We give the following project proposal template to both the AI agent and human idea writers.
1. Title: A concise statement of the main research question to be used as the paper title.
2. Problem Statement: Clearly define the problem your research intends to address. Explain clearly
why this problem is interesting and important.
3. Motivation: Explain why existing methods are not good enough to solve the problem, and explain
the inspiration behind the new proposed method. You should also motivate why the proposed method
would work better than existing baselines on the problem.
4. Proposed Method: Explain how the proposed method works, describe all the essential steps.
5. Step-by-Step Experiment Plan: Break down every single step of the experiments, make sure
every step is executable. Cover all essential details such as the datasets, models, and metrics to be
used. If the project involves prompting, give some example prompts for each step.
6. Test Case Examples: Give at least two concrete examples. The first example should show how
the baseline method fails on the test case. If there are multiple baselines, give examples for all of
them. The second example should show how the proposed method succeeds on the test case. For
each test case, include the input (test example and the full prompt) and the expected output. You
should also provide an explanation for why the outputs from the proposed prompt are better. If the
proposed method has multiple steps, break them down into intermediate steps.
7. Fallback Plan: Propose some alternative plans for what the students should do if the proposed
method doesn’t manage to satisfy the success criteria. For example, you can suggest additional
analysis to help debug why the proposed method didn’t work, which could inform alternative new
methods, or just turn the project into an analysis paper instead by offering some interesting ablation
and insights.
A.5 PROJECT PROPOSAL DEMO EXAMPLE
We present a manually written demonstration example used for project proposal generation. The
example is summarized from an existing paper (Dhuliawala et al., 2023). This same example is given
to both the AI agent as well as the idea-writing experts.
1. Title:
Chain-of-Verification Reduces Hallucination in Large Language Models
2. Problem Statement:
Generation of plausible yet incorrect factual information, termed hallucination, is an unsolved issue
in large language models.
3. Motivation:
A majority of the methods for reducing hallucination can be divided into roughly three categories:
training-time correction, generation-time correction, and via augmentation (tool-use). We want to
take a simpler approach that fully leverages the power of LLM itself. Our key motivation is that large
language models, when suitably prompted, can both generate and execute a plan of how to verify
themselves in order to check their own work, and finally incorporate this analysis into an improved
response.
4. Proposed Method:
Our overall process, which we call Chain-of-Verification (CoVe), thus performs four core steps:
(1) Generate Baseline Response: Given a query, generate the response using the LLM.
(2) Plan Verifications: Given both query and baseline response, generate a list of verification
questions that could help to self-analyze if there are any mistakes in the original response.
(3) Execute Verifications: Answer each verification question in turn, and hence check the
answer against the original response to check for inconsistencies or mistakes.
(4) Generate Final Verified Response: Given the discovered inconsistencies (if any), generate
a revised response incorporating the verification results.
Each of these steps is performed by prompting the same LLM in different ways to obtain the desired
response.
5. Step-by-Step Experiment Plan:
1: Gather Datasets: We choose datasets that evaluate factual correctness, including the Multi-
SpanQA dataset on closed-book QA and the FactScore dataset on generating biographies.
2: Construct Prompts: For the baseline, we use direct prompting where, given a query,
we generate left-to-right as usual using the LLM, with no special tricks. Given that such
baseline generations are typically prone to hallucination, CoVe attempts to identify these
hallucinations and correct them in the following steps:
(1) Plan Verifications: Conditioned on the original query and the baseline response, the
model is prompted to generate a series of verification questions that test the factual
claims in the original baseline response.
(2) Execute Verifications: Given the planned verification questions, the next step is
to answer them in order to assess if any hallucinations exist. The planning prompt
conditions on the baseline response in the first step. The verification questions
generated from planning are answered in the second step, where crucially the context
given to the LLM prompt only contains the questions and not the original baseline
response, hence preventing the LLM from repeating those answers directly.
(3) Generate Final Verified Response: Finally, the improved response that takes
verification into account is generated. This is executed by a final few-shot prompt
where the context takes into account all of the previous reasoning steps, the baseline
response, and verification question-answer pairs, so that the corrections can take
place.
3: Select Models: We test GPT-3.5 (Text-Davinci-003) and GPT-4 from the OpenAI API, as
well as the open-source LLaMA-2-70B-chat.
4: Get Results: Get answer predictions from the models on these datasets with both the
baselines and the proposed method.
5: Analyze Results: Compare whether the new method improves the performance of LLMs in
these tasks as compared to the baselines.
6. Test Case Examples:
• Baseline Prompt Input (Direct Prompting): Name some politicians who were born in NY,
New York.
• Baseline Prompt Expected Output (Direct Prompting): Here are some politicians who
were born in NY, New York:
1. Hillary Clinton - former secretary of state and former Democratic presidential nominee.
2. Donald Trump - former president of the United States.
3. Michael Bloomberg - former Mayor of New York City and former Democratic presidential
candidate.
• Proposed Prompt Input (Chain-of-Verification Prompting; Step 1: Plan Verifications):
Generate a set of verification questions to verify each answer.
• Proposed Prompt Expected Output (Chain-of-Verification Prompting; Step 1: Plan
Verifications):
– Where was Hillary Clinton born?
– Where was Donald Trump born?
– Where was Michael Bloomberg born?
• Proposed Prompt Input (Chain-of-Verification Prompting; Step 2: Execute Verifica-
tions): Answer each of the verification questions.
• Proposed Prompt Expected Output (Chain-of-Verification Prompting; Step 2: Execute
Verifications):
1. Hillary Clinton was born in Chicago, Illinois, United States on October 26, 1947.
2. Donald Trump was born on June 14, 1946, in Queens, New York City, New York,
United States.
3. Michael Bloomberg was born on February 14, 1942, in Boston, Massachusetts, United
States.
• Proposed Prompt Input (Chain-of-Verification Prompting; Step 3: Final Response):
(Prepend the original question and the baseline response and the verification QA pairs)
Given the verification answers, generate a revised response incorporating the verification
results.
• Proposed Prompt Expected Output (Chain-of-Verification Prompting; Step 3: Final
Response): Here are some politicians who were born in NY, New York:
1. Donald Trump - former president of the United States.
2. Alexandria Ocasio-Cortez - Democratic member of the U.S. House of Representatives.
• Explanation: Given a user query, a large language model with direct prompting generates a
baseline response that may contain inaccuracies, e.g., factual hallucinations. To improve
this, Chain-of-Verification first generates a plan of a set of verification questions to ask,
and then executes that plan by answering them and hence checking for agreement. We
find that individual verification questions are typically answered with higher accuracy than
the original accuracy of the facts in the original longform generation. Finally, the revised
response takes into account the verifications.
7. Fallback Plan:
If the proposed method does not help as compared to the baseline, analyze each step of the CoVe
process to see if the verification questions are relevant, if the answers to the verification questions
are correct, and whether the generated final verified response is indeed improved over the baseline
response by considering the verification QA pairs. This can help us debug the proposed method or
turn this into interesting analysis on the model’s ability to verify and correct its own responses.
A.6 STYLE STANDARDIZATION PROMPT
Style Standardization Prompt
You are a writing assistant specialized in editing academic writing. I will give you a student’s research
idea and an idea template. Your task is to edit the student’s idea to follow the template’s format.
Student idea: (Insert the student’s idea here)
Template: (Insert the template idea here)
Make sure that you only edit the wording and formatting, including things like punctuation, capitaliza-
tion, linebreaks, and bullet points. Also make sure to edit any informal wording and phrasing to use
vocabulary that sounds like the template’s writing style. No other changes are allowed beyond these.
The main subsections should be indexed clearly without indentation at the beginning. The title subsection
does not need indexing; other subsections, including problem statement, motivation, proposed method,
step-by-step experiment plan, test case examples, and fallback plan, should be indexed 1 to 6. Each
subsection can then have sub-bullets for sub-subsections if applicable. Leave an empty line after each
subsection.
You should use tab as indentation and make sure to use appropriate nested indentation for sub-bullets.
All bullets should have a clear hierarchy so people can easily differentiate the sub-bullets. Only leave
empty lines between subsections and remove any extra line breaks. If many bullet points are clustered
together in a paragraph, separate them clearly with indentation and appropriate bullet point markers.
Change to a new line for each new bullet point.
For the fallback plan, do not list a bunch of bullet points. Instead, condense them into one coherent
paragraph.
For line breaks, avoid Raw String Literals or Double Backslashes when using "\n", and change them to
spaces or tabs.
For in-line citations, if the citation mentioned the author’s last name (like "(Si et al., 2023)" or "(An et
al., 2024)"), you should keep them there; but if the citation is just a number (like "[1]" or "[3,4,5]"),
you should just remove it and do some necessary rephrasing to make the sentence still sound coherent
without the references.
Apart from minor rephrasing and changing formatting, do not change any content of the idea. You
must preserve the exact meaning of the original idea, do not change, remove, or add any other details.
Do not drop any subsections (including test case examples). Do not rename any models, datasets, or
methods. Do not drop clarification or examples in brackets and do not drop any data source mentions
(e.g., Chatbot Arena or Wildchat)! Note that when indexing test case examples, each test case example
could have multiple steps of inputs and outputs and you shouldn’t give separate indices to them. Each
test case example should be a whole set of input-output pairs for the baseline(s) and proposed method.
For the proposed method subsection, avoid any big changes. If the subsection comes in as a coherent
paragraph, you don’t have to break it down into bullet points. If the subsection is already in bullet
points, you should keep it that way. If the subsection is a mix of both, you should keep the bullet points
and the coherent paragraph as they are.
Keep all the clarification and examples mentioned in all the subsections and do not remove any of them
(including those in brackets).
For model selection, if any version of Claude is mentioned, change it to the latest version of Claude
(Claude-3.5); if any version of LLaMA is mentioned, change it to the latest version LLaMA-3. Do not
make any other model changes.
Now directly generate the edited student idea to match the format of the template.
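For reference, a minimal sketch of how this standardization prompt could be applied programmatically; the client, model choice, and constant below are illustrative assumptions rather than the exact setup used in our pipeline.

from openai import OpenAI

# Assumed setup: paste the full prompt above into this constant, replacing the two
# "(Insert ... here)" slots with {student_idea} and {template_idea} placeholders.
STYLE_STANDARDIZATION_PROMPT = "..."

client = OpenAI()

def standardize_idea(student_idea: str, template_idea: str) -> str:
    # Fill in the prompt slots and ask the model to edit only wording and formatting.
    prompt = STYLE_STANDARDIZATION_PROMPT.format(student_idea=student_idea,
                                                 template_idea=template_idea)
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice for illustration
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content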
A.7 IDEA REVIEW FORM
We use the following review form to elicit reviews from all expert reviewers. Reviewers have one
week of time to finish each review.
1. Name
2. Institution
3. Email
4. Consent
5. Honor Code: I confirm that I will not use ChatGPT, Claude, Gemini, or any other AI tools when
writing my reviews.
6. Familiarity: Before reviewing the idea, please indicate how familiar you are with the given topic
on a scale of 1 - 5 (this is just for us to understand potential confounders).
1. You have never read about this topic before
2. You have read at least one paper on this topic
3. You have read multiple papers on this topic but have not published any paper on it
4. You have co-authored at least one paper on this topic
5. You have co-authored multiple papers on this topic or have published at least one first-author
paper on this topic
7. Experience: Have you reviewed for major NLP or AI conferences before (e.g., *ACL, COLING,
NeurIPS, ICLR, ICML, AAAI)?
8. Full Research Idea Proposal
9. Novelty Score: Whether the idea is creative and different from existing works on the topic, and
brings fresh insights. You are encouraged to search for related works online. You should consider all
papers that appeared online prior to July 2024 as existing work when judging the novelty.
1. Not novel at all - there are many existing ideas that are the same
2.
3. Mostly not novel - you can find very similar ideas
4.
5. Somewhat novel - there are differences from existing ideas but not enough to turn into a new
paper
6. Reasonably novel - there are some notable differences from existing ideas and probably
enough to turn into a new paper
7.
8. Clearly novel - major differences from all existing ideas
9.
10. Very novel - very different from all existing ideas in a very interesting and clever way
10. Novelty Rationale: Short justification for your score. If you give a low score, you should specify
similar related works. (Your rationale should be at least 2-3 sentences.)
11. Feasibility Score: How feasible it is to implement and execute this idea as a research project?
Specifically, how feasible the idea is for a typical CS PhD student to execute within 1-2 months
of time. You can assume that we have abundant OpenAI / Anthropic API access, but limited GPU
compute.
1. Impossible: the idea doesn’t make sense or the proposed experiments are flawed and cannot
be implemented
2.
3. Very challenging: there are flaws in the proposed method or experiments, or the experiments
require compute/human resources beyond any academic lab
4.
5. Moderately feasible: It can probably be executed within the given time frame but would
require careful planning, efficient use of APIs or some advanced computational strategies to
overcome the limited GPU resources, and would require some modifications to the original
proposal to make it work
6. Feasible: Can be executed within the given constraints with some reasonable planning
7.
8. Highly Feasible: Straightforward to implement the idea and run all the experiments
9.
10. Easy: The whole proposed project can be quickly executed within a few days without
requiring advanced technical skills
12. Feasibility Rationale: Short justification for your score. If you give a low score, you should
specify what parts are difficult to execute and why. (Your rationale should be at least 2-3 sentences.)
13. Expected Effectiveness Score: How likely the proposed idea is going to work well (e.g., better
than existing baselines).
1. Extremely Unlikely: The idea has major flaws and definitely won’t work well
2.
3. Low Effectiveness: The idea might work in some special scenarios but you don’t expect it
to work in general
4.
5. Somewhat ineffective: There might be some chance that the proposed idea can work better
than existing baselines but the improvement will be marginal or inconsistent
6. Somewhat effective: There is a decent chance that the proposed idea can beat existing
baselines by moderate margins on a few benchmarks
7.
8. Probably Effective: The idea should offer some significant improvement over current
methods on the relevant benchmarks
9.
10. Definitely Effective: You are very confident that the proposed idea will outperform existing
methods by significant margins on many benchmarks
14. Expected Effectiveness Rationale: Short justification for your score. (Your rationale should be
at least 2-3 sentences.)
15. Excitement Score: How exciting and impactful this idea would be if executed as a full project.
Would the idea change the field and be very influential.
1. Poor: You cannot identify the contributions of this idea, or it’s not interesting at all and you
would fight to have it rejected at any major AI conference
2.
3. Mediocre: this idea makes marginal contributions and is very incremental
4.
5. Leaning negative: it has interesting bits but overall not exciting enough
6. Leaning positive: exciting enough to be accepted at a major AI conference, but still has
some weaknesses or somewhat incremental
7.
8. Exciting: would deepen the community’s understanding or make major progress in this
research direction
9.
10. Transformative: would change the research field profoundly and worth a best paper award at
major AI conferences
16. Excitement Rationale: Short justification for your score. (Your rationale should be at least 2-3
sentences.)
17. Overall Score: Apart from the above, you should also give an overall score for the
idea on a scale of 1 - 10 as defined below (Major AI conferences in the descriptions below refer to
top-tier NLP/AI conferences such as *ACL, COLM, NeurIPS, ICLR, and ICML.):
1. Critically flawed, trivial, or wrong, would be a waste of students’ time to work on it
2. Strong rejection for major AI conferences
3. Clear rejection for major AI conferences
4. Ok but not good enough, rejection for major AI conferences
5. Decent idea but has some weaknesses or not exciting enough, marginally below the accep-
tance threshold of major AI conferences
6. Marginally above the acceptance threshold of major AI conferences
7. Good idea, would be accepted by major AI conferences
8. Top 50% of all published ideas on this topic at major AI conferences, clear accept
9. Top 15% of all published ideas on this topic at major AI conferences, strong accept
10. Top 5% of all published ideas on this topic at major AI conferences, will be a seminal paper
18. Overall Rationale: You should also provide a rationale for your overall score. (Your rationale
should be at least 2-3 sentences.)
19. Confidence: Additionally, we ask for your confidence in your review on a scale of 1 to 5 defined
as follows:
1. Your evaluation is an educated guess
2. You are willing to defend the evaluation, but it is quite likely that you did not understand
central parts of the paper
3. You are fairly confident that the evaluation is correct
4. You are confident but not absolutely certain that the evaluation is correct
5. You are absolutely certain that the evaluation is correct and very familiar with the relevant
literature
20. Time: How many minutes did you spend on this task?
A.8 IDEA GENERATION AGENT: ADDITIONAL IMPLEMENTATION DETAILS
Seed Idea Generation Due to the max output length limit of the LLM API, we first generate a large
number of shorter seed ideas. We keep the seed ideas short so that we can explore more different
ideas given the same output token budget. We provide a demonstration example of a seed idea in
Appendix A.9. Then, we perform deduplication and expand each remaining seed idea into a full project
proposal following our standard template in Appendix A.4.
Retrieval Augmentation We apply retrieval augmentation to the idea generation prompt in order
to increase diversity in the idea generation. To maximize diversity, we apply retrieval augmentation
half of the time when generating seed ideas, and we randomly select k = 10 papers from the top 20
retrieved papers when applying retrieval augmentation.
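As a small illustration of this sampling scheme (a sketch under the setup described above; the retrieval function is an assumed helper):

import random

def build_rag_context(topic_description, retrieve_papers, k=10, top_n=20, rag_prob=0.5):
    # Apply retrieval augmentation only half of the time to encourage diversity.
    if random.random() >= rag_prob:
        return ""  # no retrieval augmentation for this generation call
    top_papers = retrieve_papers(topic_description, top_n)  # assumed retrieval module
    sampled = random.sample(top_papers, k=min(k, len(top_papers)))  # k=10 out of top 20
    return "\n".join(f"- {p['title']}: {p['abstract']}" for p in sampled)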
Idea Filtering After expanding seed ideas into full project proposals, we did some basic filtering to
remove any project proposals that failed the novelty and feasibility checks:
1. Novelty: We use the literature review module to retrieve the top 10 most relevant papers to
the generated idea and ask the LLM to compare each of them to the generated idea. The
idea will be filtered out if any one of the retrieved papers is judged to be equivalent to it.
2. Feasibility: The idea will be filtered if it requires extensive manual labor or hardware
resources beyond the capacity of a typical academic lab. The idea will also be filtered if it
involves any inconsistency in the experimental setups or assumptions. For example, if the
idea assumes only black-box API access of the LLMs, then it shouldn’t involve experiments
that need internal weight access.
This filtered out about 1% of the generated project proposals.
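The novelty check can be summarized by the sketch below; the retrieval and LLM-judging calls are assumed helper functions, and the judging prompt is a simplified stand-in for the one we actually use.

def passes_novelty_check(idea, retrieve_papers, llm_judge, top_k=10):
    # Retrieve the top-10 most relevant papers and ask the LLM to compare each one
    # against the generated idea; discard the idea if any paper is judged equivalent.
    for paper in retrieve_papers(idea, top_k):
        verdict = llm_judge(
            f"Is the following existing paper essentially the same idea as the "
            f"proposed idea? Answer Yes or No.\n\nPaper: {paper}\n\nProposed idea: {idea}"
        )
        if verdict.strip().lower().startswith("yes"):
            return False  # filtered: a retrieved paper is judged equivalent
    return True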
A.9 DEMONSTRATION EXAMPLE: SEED IDEA GENERATION
We present a demonstration example used for seed idea generation. The example is summarized from
an existing paper (Dhuliawala et al., 2023).
Title:
Chain-of-Verification Prompting
Problem:
Generation of plausible yet incorrect factual information, termed hallucination, is an unsolved issue
in large language models.
Existing Methods:
A majority of the methods for reducing hallucination can be divided into roughly three categories:
training-time correction; generation-time correction; and via augmentation (tool-use).
Motivation:
A key observation is that large language models, when suitably prompted, can both generate and
execute a plan of how to verify themselves in order to check their own work, and finally incorporate
this analysis into an improved response.
Proposed Method:
Our overall process, which we call Chain-of-Verification (CoVe), thus performs four core steps:
(1) Generate Baseline Response: Given a query, generate the response using the LLM.
(2) Plan Verifications: Given both query and baseline response, generate a list of verification
questions that could help to self-analyze if there are any mistakes in the original response.
(3) Execute Verifications: Answer each verification question in turn, and hence check the
answer against the original response to check for inconsistencies or mistakes.
(4) Generate Final Verified Response: Given the discovered inconsistencies (if any), generate
a revised response incorporating the verification results.
Each of these steps is performed by prompting the same LLM in different ways to obtain the desired
response.
Experiment Plan:
Compare with zero-shot prompting, Chain-of-Thought, and few-shot prompting on the MultiSpanQA
dataset on closed-book QA and FactScore dataset on generating biographies.
A.10 GENERATED SEED IDEAS AND THEIR NEAREST NEIGHBORS
We present several randomly sampled generated seed ideas (see Appendix A.8 for the definition of
seed ideas) on the topic of “novel prompting methods that can better quantify uncertainty or calibrate
the confidence of large language models”. For each idea, we show the most similar idea (nearest
neighbor) based on the embedding similarity, along with the similarity score. In practice, we set a
similarity threshold of 0.8 for determining whether two ideas are duplicates.
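For concreteness, duplicate detection of this kind can be sketched as follows; the specific sentence-embedding model named below is an assumption for illustration, not necessarily the encoder used in our pipeline.

from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

def deduplicate(seed_ideas, threshold=0.8):
    # Greedily keep an idea only if its embedding similarity to every previously
    # kept idea falls below the 0.8 threshold.
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice
    embeddings = model.encode(seed_ideas)
    sims = cosine_similarity(embeddings)
    kept = []
    for i in range(len(seed_ideas)):
        if all(sims[i][j] < threshold for j in kept):
            kept.append(i)
    return [seed_ideas[i] for i in kept]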
Idea 1:
Title: Adaptive Precision Boundary Probing
Problem: LLMs often provide uncertainty estimates that are either too coarse-grained or inappropri-
ately precise, failing to adapt to the inherent ambiguity or precision requirements of different queries.
Existing Methods: Existing uncertainty quantification methods typically use fixed precision scales
or calibration techniques that don’t adapt to the specific context and precision requirements of each
query.
Motivation: Human experts adjust the precision of their uncertainty estimates based on the nature of
the question and the available evidence. We can incorporate this adaptive approach to improve LLM
uncertainty quantification.
Proposed Method: We introduce Adaptive Precision Boundary Probing (APBP), a dynamic prompt-
ing technique that iteratively refines the precision of uncertainty estimates. Given a query, APBP
starts with a coarse-grained confidence interval. It then prompts the model to assess whether this
interval is appropriately precise given the query’s context and the model’s knowledge. If the model
determines that greater precision is warranted, APBP iteratively narrows the interval, prompting
the model at each step to justify the increased precision. Conversely, if the model recognizes high
ambiguity or limited knowledge, APBP widens the interval. Throughout this process, the model is
asked to explicitly reason about the factors influencing the appropriate level of precision, such as the
specificity of the query, the reliability of relevant knowledge, and potential sources of ambiguity. The
final output is an uncertainty estimate with a precision level tailored to the specific query and the
model’s knowledge state.
Experiment Plan: We will evaluate APBP on a diverse set of tasks with varying inherent precision
requirements, including numerical estimation, date prediction, and open-ended text generation. We’ll
compare APBP against fixed-precision uncertainty estimation methods, measuring both calibration
accuracy and the appropriateness of precision levels as judged by human experts.
Nearest Neighbor of Idea 1:
Title: Contextual Confidence Oscillation
Problem: Current methods for quantifying uncertainty in large language models often fail to capture
the dynamic nature of confidence across different contexts within a single query.
Existing Methods: Most existing approaches use static confidence scores or calibration techniques
that don’t account for intra-query contextual shifts.
Motivation: Human confidence often fluctuates as we process different parts of a complex question
or task. By mimicking this oscillation, we can potentially capture a more nuanced and accurate
representation of model uncertainty.
Proposed Method: We propose Contextual Confidence Oscillation (CCO), a novel prompting
technique that encourages the model to continuously re-evaluate and express its confidence as it
processes a query. The prompt is structured as a series of checkpoints, where the model must
pause its reasoning, reflect on its current confidence level, and explain any changes since the last
checkpoint. This creates a confidence trajectory that can be analyzed for patterns, sudden drops, or
gradual increases. Additionally, we introduce ’confidence disruptors’ - intentionally ambiguous or
challenging sub-queries inserted at various points to test the model’s ability to recognize and express
increased uncertainty when appropriate.
Experiment Plan: We will evaluate CCO against standard uncertainty quantification methods on
a range of tasks, including multi-step reasoning problems, ambiguous queries, and long-form text
analysis. We’ll measure not just overall accuracy of uncertainty estimates, but also the correlation
between confidence oscillations and human-annotated difficulty levels of different parts of each
query. We’ll also analyze how well the model’s expressed confidence trajectory aligns with its actual
performance across different segments of complex tasks.
Similarity: 0.70
Idea 2:
Title: Quantum Superposition Confidence Prompting
Problem: Current LLMs struggle to accurately quantify uncertainty across multiple possible answers,
often defaulting to overconfidence in a single response.
Existing Methods: Existing approaches typically involve single-path reasoning or limited branching,
failing to capture the full spectrum of uncertainty.
Motivation: Inspired by quantum mechanics, where particles can exist in multiple states simultane-
ously, we propose a method that allows LLMs to consider multiple answer possibilities concurrently.
Proposed Method: We introduce Quantum Superposition Confidence Prompting (QSCP), where the
LLM is instructed to generate multiple potential answers simultaneously, assigning confidence scores
to each. The prompt encourages the model to ’exist in multiple states,’ exploring contradictory an-
swers and their implications concurrently. For example: ’Imagine you are in a quantum superposition
of multiple expert personas. Each persona will provide an answer to the following question, along
with a confidence score (0-100%). Ensure the personas explore contradictory viewpoints. Question:
[INSERT QUESTION]’. The LLM then generates responses from multiple personas, each with its
own confidence score. The final uncertainty is derived from the distribution of these scores, providing
a more nuanced understanding of the model’s confidence across possible answers.
Experiment Plan: Compare QSCP against standard prompting, chain-of-thought, and other uncer-
tainty quantification methods on diverse question-answering datasets. Evaluate using metrics such as
calibration error, Brier score, and a novel ’quantum uncertainty score’ that measures the spread and
coherence of the generated answer superposition.
Nearest Neighbor of Idea 2:
Title: Quantum Superposition Prompting
Problem: Traditional methods for uncertainty quantification in large language models often fail to
capture the full range of possible interpretations and outcomes, especially for queries with inherent
ambiguity or multiple valid perspectives.
Existing Methods: Current approaches typically focus on generating a single response with an
associated confidence score, or at best, a small set of discrete alternatives.
Motivation: Drawing inspiration from the principle of superposition in quantum mechanics, we
propose a method to represent and reason about multiple possible outcomes simultaneously, providing
a richer and more nuanced uncertainty quantification.
Proposed Method: We present Quantum Superposition Prompting (QSP), a novel framework for
exploring and quantifying uncertainty in language model outputs. QSP begins by prompting the
model to generate a ’superposition’ of possible interpretations or approaches to the given query. Each
element in this superposition is assigned a complex amplitude, representing both its probability and its
relationship to other elements. The model is then guided through a series of ’measurement’ prompts,
designed to collapse this superposition along different bases of interpretation. These measurements
yield probability distributions over outcomes, capturing different facets of uncertainty. QSP employs
techniques inspired by quantum computing, such as interference and entanglement, to model how
different interpretations interact and influence each other. The final uncertainty quantification
is derived from the full set of measurements, providing a multi-dimensional representation of the
model’s uncertainty that captures ambiguity, conflicting evidence, and the interdependence of different
interpretations.
Experiment Plan: We will evaluate QSP on tasks that inherently involve multiple valid perspectives
or ambiguous interpretations, such as ethical dilemmas, creative writing prompts, and open-ended
analytical questions. Metrics will include the diversity and coherence of generated superpositions, the
ability to capture human-judged ambiguities, and improvements in uncertainty calibration compared
to classical methods.
Similarity: 0.77
Idea 3:
Title: Fractal Uncertainty Decomposition
Problem: LLMs often provide overly simplistic uncertainty estimates that fail to capture the hierar-
chical and nested nature of uncertainty in complex knowledge domains.
Existing Methods: Current uncertainty quantification methods typically produce flat, single-
dimensional confidence scores that don’t reflect the multi-layered structure of knowledge and uncer-
tainty.
Motivation: By recursively decomposing a query into sub-components and assessing uncertainty
at multiple levels of granularity, we can construct a more comprehensive and structurally informed
uncertainty estimate.
Proposed Method: We introduce Fractal Uncertainty Decomposition (FUD), a prompting technique
that recursively breaks down a query into a hierarchical structure of sub-queries, assessing uncertainty
at each level. Given an initial query, FUD prompts the model to identify key sub-components or
aspects of the question. For each sub-component, the model provides an answer and a confidence
estimate. If the confidence for a sub-component is below a certain threshold, FUD recursively applies
the same decomposition process to that sub-component. This continues until either a maximum
depth is reached or all sub-components have high confidence. The resulting structure is a tree of
nested confidence estimates. FUD then aggregates these estimates bottom-up, using a combination
of statistical methods and prompted meta-analysis by the model. The final output is both an overall
uncertainty estimate and a detailed map of the uncertainty structure, showing how confidence varies
across different aspects and levels of the query.
Experiment Plan: We will evaluate FUD on complex, multi-faceted tasks such as scientific expla-
nation, historical analysis, and technical troubleshooting. We will compare its performance to flat
confidence estimation methods and other hierarchical approaches. Evaluation metrics will include
traditional calibration measures, as well as new metrics designed to assess the quality and informa-
tiveness of the uncertainty decomposition. We will also conduct case studies to demonstrate how
FUD can provide more actionable and interpretable uncertainty information in real-world scenarios.
Nearest Neighbor of Idea 3:
Title: Semantic Fractal Decomposition
Problem: Current uncertainty quantification methods for large language models often fail to capture
the hierarchical and self-similar nature of conceptual understanding, leading to inconsistent confi-
dence estimates across different levels of abstraction.
Existing Methods: Existing approaches typically focus on flat, single-level uncertainty estimates or
simple hierarchical decompositions that don’t fully capture the complex, nested nature of semantic
understanding.
Motivation: Drawing inspiration from fractal geometry, where patterns repeat at different scales, we
propose a method that recursively decomposes concepts and queries into self-similar sub-components,
allowing for a more nuanced and scale-invariant approach to uncertainty quantification.
Proposed Method: We present Semantic Fractal Decomposition (SFD), a prompting technique
that guides the model to recursively break down a given query or concept into smaller, self-similar
components. At each level of decomposition, the model is asked to provide a confidence estimate.
The process continues until a predefined depth is reached or the model indicates it can no longer mean-
ingfully decompose the concept. The final uncertainty estimate is then constructed by aggregating
these multi-level confidence scores using a novel fractal dimension-inspired algorithm. This approach
allows for capturing uncertainty that may be present at different semantic scales and provides a more
robust and consistent measure of the model’s confidence across varying levels of abstraction.
Experiment Plan: We will evaluate SFD on a diverse set of tasks ranging from simple factual queries
to complex, multi-faceted questions in domains like philosophy, science, and law. We will compare
its performance against traditional flat confidence estimation techniques and simpler hierarchical
methods. Key metrics will include the consistency of uncertainty estimates across related queries
at different levels of abstraction, the correlation between fractal-aggregated confidence scores and
actual model performance, and the interpretability of the decomposition process.
Similarity: 0.81
A.11 OVERLAP BETWEEN AI RANKING AND EXPERT RERANKING
We show the overlap between the AI Ideas condition and the AI Ideas + Human Rerank
conditions in Table 8. We note that 18 out of the 49 ideas in the AI Ideas + Human Rerank
condition are also ranked as top ideas in the AI Ideas condition by the AI ranker, while the other
31 are not.
Topic          Overlap   New
Bias           2         2
Coding         4         5
Safety         2         3
Multilingual   5         5
Factuality     2         9
Math           2         2
Uncertainty    1         5
Total          18        31
Table 8: Overlap of ideas between AI + Human Rerank and AI conditions, broken down by
topic.
A.12 QUALITY CONTROL OF HUMAN EXPERT IDEAS
Each expert is instructed to choose one of the seven specified topics and write one idea on it within 10
days, following the given template in the annotation document. We included an honor code statement
to ask the participants to not use any AI tools in their idea writing. We collected N = 50 ideas
originally and manually checked all of them for quality control. We filtered out one of them as being
essentially a paraphrase of an existing paper’s abstract. We compensated the participant nevertheless
but excluded them from the review task.
A.13 PARTICIPANT DETAILS
We show the detailed position breakdown of our 49 idea-writing participants in Table 9 and the
positions of our 79 reviewer participants in Table 10.
Figure 4: Positions of our idea writer (left) and reviewer (right) participants.
Position                      Count
Postdoc                       1
PhD                           36
Master                        9
Undergraduate                 1
Research Scientist            1
Machine Learning Engineer     1
Table 9: Positions of the 49 idea writing participants.
Position                      Count
Postdoc                       7
PhD                           63
Master                        5
Research Scientist            3
Machine Learning Engineer     1
Table 10: Positions of the 79 idea reviewing participants.
We show the institutions of the idea writing participants in Table 11.
Institution                                Count
Stanford University                        11
University of Southern California          6
University of Maryland                     3
University of Illinois Urbana-Champaign    3
Johns Hopkins University                   3
Columbia University                        2
Carnegie Mellon University                 2
University of Pennsylvania                 1
Princeton University                       1
Penn State University                      1
Portland State University                  1
Stony Brook University                     1
University of Chicago                      1
University of Washington                   1
UC Berkeley                                1
UCSD                                       1
Massachusetts Institute of Technology      1
George Washington University               1
Yale University                            1
University of Toronto                      1
Georgia Institute of Technology            1
National University of Singapore           1
Peking University                          1
Tsinghua University                        1
LinkedIn                                   1
Norm AI                                    1
Table 11: Institutions of the 49 idea writing participants.
We show the institutions of the idea reviewing participants in Table 12.
Institution                                Count
Stanford University                        25
UC Berkeley                                4
UT Austin                                  4
University of Maryland                     4
Princeton University                       3
University of Washington                   3
University of Southern California          3
Carnegie Mellon University                 3
University of Chicago                      2
Johns Hopkins University                   2
UCLA                                       2
Georgia Institute of Technology            2
University of Illinois Urbana-Champaign    2
Tsinghua University                        2
Stony Brook University                     1
Ohio State University                      1
National University of Singapore           1
University of Michigan                     1
Dartmouth College                          1
Massachusetts Institute of Technology      1
University of Pennsylvania                 1
University of Toronto                      1
Portland State University                  1
Penn State University                      1
New York University                        1
Columbia University                        1
UC Santa Barbara                           1
Brown University                           1
Amazon                                     1
LinkedIn                                   1
Norm AI                                    1
AMD                                        1
Table 12: Institutions of the 79 reviewer participants.
A.14 REVIEW ASSIGNMENT STATISTICS
We list the details of the review assignment in Table 13.
Metric         Mean   Min   Max   SD
# Reviews      3.8    2.0   7.0   1.3
# Conditions   2.5    2.0   3.0   0.5
# Topics       1.5    1.0   3.0   0.6
Table 13: Statistics of the review assignment.
A.15 ADDITIONAL STATISTICAL TESTS
We present two additional statistical tests that account for potential confounders by treating each idea
as one data point and each reviewer as one data point, respectively.
Condition                      Size   Mean   Median   SD     SE     Min    Max    p-value
Novelty Score
  Human Ideas                  49     4.86   5.00     1.26   0.18   1.50   7.00   –
  AI Ideas                     49     5.62   5.50     1.39   0.20   1.50   8.33   0.03*
  AI Ideas + Human Rerank      49     5.78   6.00     1.07   0.15   3.00   8.33   0.00**
Excitement Score
  Human Ideas                  49     4.56   4.33     1.16   0.17   2.00   7.00   –
  AI Ideas                     49     5.18   5.50     1.33   0.19   2.50   7.33   0.08
  AI Ideas + Human Rerank      49     5.45   5.50     1.36   0.19   1.00   7.33   0.00**
Feasibility Score
  Human Ideas                  49     6.53   7.00     1.50   0.21   3.00   9.00   –
  AI Ideas                     49     6.30   6.00     1.27   0.18   2.50   8.50   1.00
  AI Ideas + Human Rerank      49     6.41   6.50     1.06   0.15   4.00   9.00   1.00
Expected Effectiveness Score
  Human Ideas                  49     5.10   5.33     1.14   0.16   3.00   7.00   –
  AI Ideas                     49     5.48   5.50     1.23   0.18   2.00   7.50   0.58
  AI Ideas + Human Rerank      49     5.57   5.50     0.99   0.14   3.00   7.50   0.17
Overall Score
  Human Ideas                  49     4.69   4.67     1.16   0.17   2.00   6.67   –
  AI Ideas                     49     4.83   5.00     1.34   0.19   1.50   7.50   1.00
  AI Ideas + Human Rerank      49     5.32   5.50     1.24   0.18   2.00   7.50   0.06
Table 14: Scores across all conditions by averaging the scores for each idea and treating each idea as
one data point (Test 2). Size is the number of ideas for each condition, and the p-values are computed
with two-tailed Welch’s t-tests with Bonferroni correction. We bold results that are statistically
significant (∗p < 0.05;∗∗ p < 0.01). AI ideas are judged as significantly better than human ideas in
terms of novelty while being comparable on all other metrics.
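For reference, a minimal sketch of this per-idea test (Test 2), assuming the per-idea mean scores have already been computed; the Bonferroni bookkeeping here (multiplying by the two comparisons against the human baseline) is our simplified reading of the correction.

from scipy.stats import ttest_ind

def welch_bonferroni(human_idea_means, ai_idea_means, n_comparisons=2):
    # Two-tailed Welch's t-test (unequal variances), treating each idea's mean
    # score as one data point, with Bonferroni correction over the comparisons.
    t, p = ttest_ind(ai_idea_means, human_idea_means, equal_var=False)
    return t, min(1.0, p * n_comparisons)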
Comparison                                   N    Mean Diff   p-value
Novelty Score
  AI Ideas vs Human Ideas                    70   0.94        0.00**
  AI Ideas + Human Rerank vs Human Ideas     65   0.86        0.00**
Excitement Score
  AI Ideas vs Human Ideas                    70   0.73        0.01*
  AI Ideas + Human Rerank vs Human Ideas     65   0.87        0.00**
Feasibility Score
  AI Ideas vs Human Ideas                    70   -0.29       0.36
  AI Ideas + Human Rerank vs Human Ideas     65   -0.08       0.74
Effectiveness Score
  AI Ideas vs Human Ideas                    70   0.42        0.16
  AI Ideas + Human Rerank vs Human Ideas     65   0.39        0.16
Overall Score
  AI Ideas vs Human Ideas                    70   0.24        0.36
  AI Ideas + Human Rerank vs Human Ideas     65   0.66        0.01*
Table 15: Mean score differences between AI ideas and human ideas by treating each reviewer as a
data point (Test 3). All p-values are computed with one-sample t-tests with Bonferroni correction.
We bold results that are statistically significant (∗p < 0.05;∗∗ p < 0.01).
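A corresponding sketch for this per-reviewer test (Test 3), assuming each reviewer's mean score per condition has been precomputed; the condition keys and the Bonferroni factor are illustrative assumptions.

import numpy as np
from scipy.stats import ttest_1samp

def reviewer_level_test(reviewer_means, n_comparisons=2):
    # For each reviewer who scored both conditions, take the difference between their
    # mean AI score and mean Human score, then test whether the differences are zero.
    diffs = [m["AI Ideas"] - m["Human Ideas"]
             for m in reviewer_means if "AI Ideas" in m and "Human Ideas" in m]
    t, p = ttest_1samp(np.array(diffs), popmean=0.0)
    return float(np.mean(diffs)), min(1.0, p * n_comparisons)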
A.16 MIXED-EFFECTS MODELS
One way to combine all the statistical tests above is to fit a linear mixed-effects model where we treat
the condition as the fixed effect and other factors including reviewer and idea as random effects, while
also accounting for the differences among different topics. This way, we can rely on the regression to
account for all the possible confounders as the random effects. Specifically, for each metric, we fit
the following linear mixed-effects model:
import statsmodels.formula.api as smf

# df has one row per review, with columns Score, Condition, Topic, ReviewerID, and IdeaID.
model = smf.mixedlm("Score ~ Condition", df,
                    groups=df["Topic"],
                    re_formula="~Condition",
                    vc_formula={"ReviewerID": "0 + C(ReviewerID)",
                                "IdeaID": "0 + C(IdeaID)"})
result = model.fit()
This mixed-effects model analyzes the relationship between Score and Condition, while accounting
for the hierarchical structure of the data. Fixed effects estimate the average effect of Condition on
Score. Random intercepts for Topic allow for varying baseline scores across topics, and random
slopes for Condition within each topic allow the effect of Condition to vary by topic. Additionally,
variance components for ReviewerID and IdeaID account for variability in scores specific to individual
reviewers and ideas, respectively.
The results are shown in Table 16. The intercepts in the mixed-effects models represent the estimated
mean score of the baseline condition, which in this context is the Human Ideas. The coefficients
for Condition[AI Ideas] and Condition[AI Ideas + Human Rerank] in the mixed-effects
models represent the difference in the mean score for each metric between the AI ideas and the
baseline (human ideas). For example, the positive coefficient of 0.756 for the novelty score means
that AI Ideas, on average, score 0.756 points higher than Human Ideas on the novelty score
metric; conversely, the negative coefficient of -0.300 for the feasibility score means that AI Ideas
score 0.300 points lower than Human Ideas on feasibility on average. The topic (group) variance
in the mixed-effects model represents the variability in the outcome metric that can be attributed to
differences between the topics, which is relatively small in general. Similarly, the idea variance and
reviewer variance in the mixed-effects model represent the variability in the outcome metric that
can be attributed to differences between individual ideas and between reviewers, respectively. The
reviewer variances are high in general, suggesting that there is substantial variability in how different
reviewers rate the same ideas. This implies that reviewer differences play a significant role in the
observed scores, with some reviewers consistently giving higher or lower ratings.
Overall, the results from the mixed-effects models confirm our main conclusion that AI ideas are
rated as significantly more novel than human ideas.
                                        Coef.    SE      p
Novelty Score
  Intercept                             4.826    0.217   0.000***
  Condition[AI Ideas]                   0.756    0.331   0.023*
  Condition[AI Ideas + Human Rerank]    0.902    0.305   0.003**
  Idea Var                              0.412    0.178
  Reviewer Var                          0.803    0.202
Excitement Score
  Intercept                             4.493    0.212   0.000***
  Condition[AI Ideas]                   0.626    0.303   0.039*
  Condition[AI Ideas + Human Rerank]    0.879    0.298   0.003**
  Idea Var                              0.495    0.227
  Reviewer Var                          0.782    0.167
Feasibility Score
  Intercept                             6.595    0.224   0.000***
  Condition[AI Ideas]                   -0.300   0.294   0.307
  Condition[AI Ideas + Human Rerank]    -0.183   0.314   0.561
  Idea Var                              0.476    0.188
  Reviewer Var                          1.035    0.261
Expected Effectiveness Score
  Intercept                             5.156    0.211   0.000***
  Condition[AI Ideas]                   0.310    0.140   0.027*
  Condition[AI Ideas + Human Rerank]    0.383    0.242   0.114
  Idea Var                              0.200    0.151
  Reviewer Var                          0.469    0.141
Overall Score
  Intercept                             4.660    0.242   0.000***
  Condition[AI Ideas]                   0.137    0.294   0.640
  Condition[AI Ideas + Human Rerank]    0.610    0.320   0.056
  Idea Var                              0.262    0.154
  Reviewer Var                          1.071    0.225
Table 16: Results of linear mixed-effects models. We bold results that are statistically significant
(∗p < 0.05;∗∗ p < 0.01;∗∗∗ p < 0.001). Our main conclusion on AI ideas being more novel than
human ideas still holds here.
A.17 SCORE BREAKDOWN BY TOPIC
We show the breakdown of all scores across all conditions by topic. Note that due to the smaller
sample sizes for the per-topic breakdown, most results are not statistically significant and only offer
an intuitive understanding of the trends.
Figure 5: Breakdown of all scores by topic.
A.18 ANALYSIS OF FREE-TEXT REVIEWS
Following recent practices of using LLMs to extract patterns from text corpora (Zhong et al., 2022;
2023), we use Claude-3.5 to extract and cluster the main points from all reviews. We then manually
verified and labeled each cluster.
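As a rough sketch of this extraction step (the prompt is a simplified stand-in, and the Anthropic client and model name are assumptions for illustration; clustering and labeling still involved manual verification):

import anthropic

client = anthropic.Anthropic()

def extract_review_points(review_text):
    # Ask Claude-3.5 to pull out the main points raised in one free-text review;
    # the extracted points are then clustered and manually verified and labeled.
    message = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": "List the main points (strengths and weaknesses) raised in this "
                       "review, one per line:\n\n" + review_text,
        }],
    )
    return [line.strip("- ").strip() for line in message.content[0].text.splitlines()
            if line.strip()]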
Many reviews reinforce our quantitative finding that AI ideas tend to be more novel. For example,
reviewers noted: “The idea of [...] is quite novel in an in-context learning setting.”, “The idea of
exploring [...] using an LLM-based iterative approach is novel.”, “The idea of [...] when constructing
prompts to improve cross-lingual transfer is one that I have not heard of before.”, “I like the idea to
[...], and think it will be helpful for other researchers in the community.”, “Combining [...] is a unique
way of attempting to preserve the gist of the information while likely losing specific identifiers.”, and
“Safeguarding using [...] is clearly novel. Similar ideas have not been seen in the related work.”.
Next, we summarize some common failure modes of AI ideas:
1. Being too vague on implementation details. For example, one reviewer noted: “I’m not
super clear on the details of this lattice and how the model will be prompted, so I’m not super
sure how well the model will complete these subtasks and how well-suited this particular
structure is to completing the overall task.” and another reviewer noted: “For analyzing
the effectiveness of the method, the proposal only provides a very ad-hoc + hand-wavey
suggestion to compare responses across predefined questions.” In another case, the AI idea
is criticized for not considering practical implementation details: “I think in each of the
steps, there is something hard to execute. For example, in step Constellation Formation,
how do we do the weighted sum?” Similarly, other reviews noted: “It’s unclear how CLIP is
connected to the language model and how training a CLIP model would enable the LM to
understand images.”, and “There’s no mentioning on how to prompt the model to generate
defensive strategies and refine the model’s responses using these strategies.” Such vagueness
often makes it difficult for reviewers to make confident judgments: “Because this idea is too
general and vague, I can’t really answer the previous question. An idea needs a certain level
of details to be determined if it fits for a conference/journal but this one misses them.”
2. Misuse of datasets. For example: “I’m not sure about the datasets picked. StereoSet is
not a QA dataset; it simply contains statements. Also, I don’t understand why Dialogue
NLI responses require empathy.”, “I’m concerned the datasets proposed are the right test
cases for security of the code (since they are really just ML/programming problems, not
system-level programming).”, and “the choice of datasets might not be the best to show
the effect of incorporating multiple perspectives, especially TruthfulQA and ScienceQA,
which seems to have a single correct interpretation and answer.” In another example, the
benchmark datasets chosen are considered too easy by the reviewer: “none of the chosen
datasets (MATH, GSM8K, and MMLU) uses complicated math concepts”.
3. Missing or inappropriate baselines. For example: “The proposed method needs to be
compared to simply asking the model to think of one (or several) facts about the question
before answering using more turns. This could be an additional baseline to verify the scoring
process is meaningful.” and “Although the proposal includes some baselines that should be
compared to, it does not mention some methods which seem to do quite well with LLMs.”
Sometimes, “the chosen baselines may not be suitable”, for example, because they are not
directly comparable with the proposed method.
4. Making unrealistic assumptions. For example: “The assumption that model can (mostly)
accurately flag its own hallucinations is quite tricky.”, “there is a presupposed assumption
that hallucinations in LLMs are ungrounded and independent of the data they are trained on,
which is generally not considered true”, “The big issue for the effectiveness of the proposed
method is that, it asserts very strong assumptions on downstream tasks, such as there must
exist only two extremes.”, “Some assumptions (e.g., [...]) are unlikely to be true in practice,
especially when low-resource languages and less represented cultures are included in the
study.”, and “A major assumption in this approach is that the model is able to [...]. However,
[...]”.
5. Being too resource-demanding. Despite the fact that we explicitly prompted the agent
to consider feasibility when generating ideas, some of the generated ideas are still too
resource-demanding. For example, one reviewer noted: “The biggest issue to feasibility
I see is that the project calls for fine-tuning BLOOM (See step 5). BLOOM has 176B
parameters so it’s going to take quite a lot of GPUs to fine-tune. From a systems perspective,
I see this as causing delays.” In other cases, manual data annotation raises feasibility
concerns: “The bottleneck seems to be the dataset collection process if there are no
existing datasets that fit the requirements of the paper.”, and “the manual evaluation by
native speakers or cultural experts could be time-consuming and resource-intensive”.
6. Not well-motivated. For example: “Not well-motivated and there is not a clear intuition
that this work can work to increase the factuality.”, “And in general the method is not
well-motivated and needs reasons why retrieving from model itself is meaningful by use
cases or specific tasks.”, and “The idea simply doesn’t make sense to me. Given current
LLMs’ ability, I’m pretty sure they can simply recite code like inserting data to a binary
search tree.”
7. Not adequately following existing best practices. For example: “The proposal does not
seem to include awareness of what has been previously tried, or more strategic ways to
evaluate success/failures...”
We contrast these with some of the unique strengths and weaknesses of human ideas:
1. Human ideas are generally more grounded in existing research and practical consider-
ations, but may be less innovative. For example, these ideas might be applying existing
techniques to new problems: “Multilinguality as a debiasing method has already been
considered in the literature, although not necessarily in the prompt engineering framework.”
Sometimes people apply incremental changes to existing techniques: “The overall idea
shares quite a similar idea with program-of-thought (PoT). The only difference is that there
is an additional step where an LLM is prompted to decide whether to use code or not.”
Some ideas try to combine existing techniques: “Query decomposition and RAG separately
are well studied, if there is no existing work that combines both (which I’m not aware of),
then it’s reasonably novel.” As some reviewers noted, human ideas tend to build on known
intuitions and results: “There are already existing works on using available lexicons to
improve the translation capabilities of LLMs in general.”
2. Human ideas tend to be more focused on common problems or datasets in the field.
For example: “The problem of models not handling negation properly is a very common
problem, especially among propriety LMs such as claude-3-5-sonnet.”, “The data exist. This
project mainly entails plugging in these datasets to a prompt template and finetuning for a
bit. There is little left unspecified, and it should be quite simple to execute on.”, “I haven’t
found any work using this idea to solve this specific problem, but [...] is definitely not new.”,
and “While existing works have explored the problem of calibration in long-form answers
(e.g. [...]), the specific method for calibration is different.”
3. Human ideas sometimes prioritize feasibility and effectiveness rather than novelty and
excitement. For example, reviewers noted: “I don’t think this will be a groundbreaking
finding, but it will probably work.” and “while the idea is promising and could lead to signif-
icant improvements, it may not be groundbreaking enough to be considered transformative
or worthy of a best paper award”.
A.19 RANDOMLY SAMPLED HUMAN AND AI IDEAS WITH REVIEWS
We randomly sample four pairs of ideas from different topics to ground our numerical results with
actual examples. In each pair, there is one AI idea and one human idea. The full project proposal of
each idea, along with its full set of reviews, appears in the appendix sections below; here we list
their titles, topics, and average overall scores for quick reference (we reveal whether each idea is
AI-generated or human-written in Appendix A.28):
1. Modular Calibration for Long-form Answers: Appendix A.20
Topic: Uncertainty; Average Overall Score: 5.5
2. Semantic Resonance Uncertainty Quantification: Calibrating LLM Confidence through
Multi-Path Reasoning: Appendix A.21
Topic: Uncertainty; Average Overall Score: 6
3. Translation with LLMs through Prompting with Long-Form Context: Appendix A.22
Topic: Multilingual; Average Overall Score: 4
4. Linguistic Pivot Constellation: Enhancing Cross-Lingual Transfer for Low-Resource Lan-
guages and Dialects: Appendix A.23
Topic: Multilingual; Average Overall Score: 6.7
5. LLM Directed Retrieval Querying for Improving Factuality: Appendix A.24
Topic: Factuality; Average Overall Score: 4.7
6. Semantic Divergence Minimization: Reducing Hallucinations in Large Language Models
through Iterative Concept Grounding: Appendix A.25
Topic: Factuality; Average Overall Score: 3.3
7. Autoprompting: Generate Diverse Few-shot Examples for Any Application: Appendix A.26
Topic: Coding; Average Overall Score: 5
8. Temporal Dependency Unfolding: Improving Code Generation for Complex Stateful Sys-
tems: Appendix A.27
Topic: Coding; Average Overall Score: 6.7
A.20 EXAMPLE IDEA: MODULAR CALIBRATION FOR LONG-FORM ANSWERS
Modular Calibration for Long-form Answers (Part 1)
1. Problem Statement: Calibrating the confidence of Large Language Models (LLMs) when
generating long-form answers, such as essays and code, remains an open challenge in the field of
natural language processing.
2. Motivation: While numerous methods have been developed to calibrate the performance of LLMs
on multiple-choice questions or open-domain questions with short answers, extending these approaches
to tasks requiring lengthy responses presents significant difficulties. For instance, in code generation
tasks (e.g., the HumanEval dataset), traditional confidence extraction methods like perplexity may prove
inadequate due to the substantial variation in answer length across questions. Verbalized confidence
can be affected by instruction tuning artifacts or unclear scope, while the reliability of metrics such
as Expected Calibration Error (ECE) and Macro-averaged Calibration Error (MacroCE) may be
compromised by differences in task settings. Our aim is to propose a novel pipeline for confidence
extraction and calibration of LLMs for long-form answers, drawing inspiration from methods used for
short or fixed-set answers. This approach will enable us to monitor the model’s long-form answer
generation process and apply targeted external augmentation when necessary, thereby enhancing both
performance and efficiency.
3. Proposed Method: We introduce Modular Calibration, a process comprising four core steps:
1. Extend: Prompt the model to elaborate on the original question in relation to the answer,
identifying which components of the question are addressed in the long-form response.
2. Decompose: Instruct the LLM to break down the extended question and long-form answer
into multiple modules.
3. Extract Confidence: Utilize verbalized confidence or perplexity to determine the confidence
level for each module.
4. Merge: Based on the relationships between the modular questions/answers and the overall
questions/answers, prompt the model to combine the modular confidence scores into an
overall score representing the confidence in the long-form answer.
Each of these steps is executed by prompting the same LLM in different ways to elicit the desired
response.
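For illustration, a minimal sketch of how these four prompting steps could be chained is given below; the call_llm helper, the prompt wording, and the float parsing are assumptions for the sketch rather than the proposal's exact templates.

```python
# Sketch of the Extend -> Decompose -> Extract Confidence -> Merge pipeline.
def call_llm(prompt: str) -> str:
    """Placeholder for the underlying LLM call."""
    raise NotImplementedError

def modular_calibration(question: str, long_answer: str) -> float:
    # Step 1 (Extend): identify which points of the question the answer covers.
    extended = call_llm(
        f"Question: {question}\nAnswer: {long_answer}\n"
        "Given the answer, elaborate on which points of the question it covers."
    )
    # Step 2 (Decompose): split the extended question/answer into modules.
    modules = call_llm(
        f"{extended}\nDecompose the extended question and answer into modules, one per line."
    ).splitlines()
    # Step 3 (Extract Confidence): verbalized confidence per module
    # (perplexity could be used instead for open-weight models).
    confidences = []
    for module in (m.strip() for m in modules):
        if not module:
            continue
        # Assumes the model returns a bare number between 0 and 1.
        confidences.append(float(call_llm(f"{module}\nConfidence (0-1):").strip()))
    # Step 4 (Merge): ask the model to combine modular confidences into one score.
    overall = call_llm(
        f"For each point of the question '{question}', the confidences are "
        f"{confidences}. What is the overall confidence (0-1) for the whole answer?"
    )
    return float(overall.strip())
```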
4. Step-by-Step Experiment Plan:
1. Gather Datasets: Select datasets featuring long answers with correctness annotations. Poten-
tial candidates include GSM8K, Code Gen, and Essay Writing.
2. Construct Prompts:
(a) Establish a baseline using direct prompting, where a query is presented without special
techniques.
(b) Analyze outputs to refine prompts for the Extend and Decompose steps.
(c) For the Confidence step, employ vanilla perplexity or verbalized confidence extraction.
If performance is unsatisfactory, explore advanced methods built upon these techniques,
such as those presented in recent research (e.g., FaR paper).
3. Select Models: Evaluate GPT-3.5 (Text-Davinci-003) and GPT-4 from the OpenAI API, as
well as the open-source LLaMA-3-70B-chat.
4. Get Results: Obtain confidence predictions from the models on the selected datasets using
both baseline methods and the proposed Modular Calibration approach.
5. Analyze Results: Compare the calibration performance of LLMs using the new method
against the baselines (e.g., the perplexity of the entire long-form answer). Conduct qualitative
and quantitative analyses on each component of the Modular Calibration process.
Modular Calibration for Long-form Answers (Part 2)
5. Test Case Examples:
• Test Case 1: Verbalized Confidence Prompting
– Input: <Q> <A> Confidence (0-1)
– Output: [Model generates a confidence score between 0 and 1]
• Test Case 2: Modular Calibration Step 1 (Extend)
– Input: Given the answer, can you extend the question and elaborate on what points are
covered in the answer?
– Output: The answer covers these points of the question: (1) how fast A runs; (2) how
fast B runs; (3) if A is faster than B.
• Test Case 3: Modular Calibration Step 2 (Decompose)
– Input: Please decompose the above extended question and answers into modules.
– Output:
* How fast A runs: [relevant excerpt from the original answer]
* How fast B runs: [relevant excerpt from the original answer]
[Additional modules as needed]
• Test Case 4: Modular Calibration Step 3 (Extract)
– Input: How fast A runs: [relevant excerpt from the original answer] Confidence (0-1)
– Output: 1. 0.9; 2. 0.6 [Additional confidence scores for other modules]
• Test Case 5: Modular Calibration Step 4 (Merge)
– Input: For each of these points related to question X, the confidence is: 0.9, 0.6, ... What
is the overall confidence for the whole problem?
– Output: [Model generates an overall confidence score]
6. Fallback Plan: If the proposed Modular Calibration method does not demonstrate improvement over
the baseline, we will execute each sub-question and module individually to assess whether calibration
is enhanced for each component. This approach will facilitate debugging of the proposed method
and potentially yield interesting insights into the relationships between performance/calibration of
decomposed modules and overall problems. Alternatively, we may analyze the model’s ability to
effectively decompose questions and answers into appropriate modules. These analyses will inform
potential refinements to the method or provide valuable insights into the limitations and capabilities of
LLMs in handling complex, long-form responses.
Reviewer 1
Novelty: 6 (reasonably novel - there are some notable differences from existing ideas and probably
enough to turn into a new paper)
Rationale: Focus on the long-form setting is novel at the moment. The idea of obtaining modular
confidence estimates for different claims in a long-form output, and synthesizing them into a single
uncertainty estimate is not that complicated, but it does seem to be underexplored.
Feasibility: 8 (Highly Feasible: Straightforward to implement the idea and run all the experiments.)
Rationale: The only part of the project that seems challenging is obtaining correctness annotations
for one of the datasets (e.g., Essay Writing). GSM8K and code datasets like HumanEval seem like
very natural long-form output settings to try out the idea. Other than this, iterating on the prompts for
decomposition / verbalized UQ for each of the modules will be important, but the author mentions this.
Expected Effectiveness: 6 (Somewhat effective: There is a decent chance that the proposed idea can
beat existing baselines by moderate margins on a few benchmarks.)
Rationale: It’s possible that first obtaining verbalized uncertainty estimates for each module, and then
synthesizing into a single score, will outperform the standard baselines of self-consistency over the
entire long-form output (using majority vote as the confidence score). However, I don’t expect this to
be dramatically better. If the paper instead set out with the goal of actually producing the UQ estimates
for each claim, then almost no prior work does this, and the baselines would be less strong.
Excitement: 5 (Leaning negative: it has interesting bits but overall not exciting enough)
Rationale: This seems like the most straightforward possible way to obtain uncertainty estimates for a
long-form generation with an LLM. This means the project could produce some useful engineering
artifacts, but it doesn’t really push the idea to its logical conclusion. Therefore I don’t consider it
"exciting enough". There is some mention of "using the uncertainty estimates to possibly condition on
more information" but this is not fleshed out – it could be more interesting. For example, studying how
the fine-grained uncertainty estimates could be used to selectively retrieve factual information from
Wikipedia etc. on a knowledge-intensive task.
Overall Score: 5 (Decent idea but has some weaknesses or not exciting enough, marginally below the
acceptance threshold of major AI conferences)
Rationale: I like the focus on long-form generations. However, this proposal is a very straightforward
baseline and extension of existing work to the long-form generation setting (just produce the long
generation, decompose it, apply verbalized uncertainty on each claim, and finally aggregate them). I
could see the paper being well-cited, but I don’t see an interesting/novel angle here.
Confidence: 5 (You are absolutely certain that the evaluation is correct and very familiar with the
relevant literature)
Reviewer 2
Novelty: 6 (reasonably novel - there are some notable differences from existing ideas and probably
enough to turn into a new paper)
Rationale: While existing works have explored the problem of calibration in long-form answers (e.g.
https://arxiv.org/abs/2402.06544), the specific method for calibration is different. Also seems related to
FactScore (https://arxiv.org/abs/2305.14251) where the task was different (getting a factuality score)
but the idea of breaking long form generations into smaller units, evaluating each separately and then
combing does seem related.
Feasibility: 8 (Highly Feasible: Straightforward to implement the idea and run all the experiments.)
Rationale: The idea seems simple enough to implement with API access, considering all the steps
involved in the method can be done via prompting with API. The proposal does mention using LLaMA3-
70B as an additional model, which would require GPUs I guess.
Expected Effectiveness: 6 (Somewhat effective: There is a decent chance that the proposed idea can
beat existing baselines by moderate margins on a few benchmarks.)
Rationale: Since it has been shown that LLMs are quite well calibrated when asked to verbalize the
confidence for short answers, I’m guessing the calibration scores would be pretty good for individual
modules. Also LLMs might be decent at combining confidence scores (especially with detailed
instructions and some examples in the prompt), so overall the method might work well. But it’s unclear
if it would do better than the methods proposed in - https://arxiv.org/abs/2402.06544.
Excitement: 6 (Leaning positive: exciting enough to be accepted at a major AI conference, but still
has some weaknesses or somewhat incremental)
Rationale: If the method does work well in getting calibration for long-form answers, I think that
would be pretty exciting. One thing which is missing from the proposal (and why the score was not
higher) was that it does not touch upon the issue that for long-form answers we won’t have a binary
correct/incorrect decision but answers can be partially correct.
Overall Score: 6 (Marginally above the acceptance threshold of major AI conferences)
Rationale: The overall idea makes sense to me, but the score is not higher right now because: (a) it’s
unclear what exactly is meant by ’modules’ especially for essay writing which the proposal mentions as
one of the tasks ; (b) the issue for partial correctness which was mentioned above.
Confidence: 3 (You are fairly confident that the evaluation is correct)
A.21 EXAMPLE IDEA: SEMANTIC RESONANCE UNCERTAINTY QUANTIFICATION
Semantic Resonance Uncertainty Quantification (SRUQ) (Part 1)
1. Problem Statement: Current uncertainty quantification methods for Large Language Models
(LLMs) often rely on simple statistical measures or model-specific attributes, which may not capture the
nuanced semantic uncertainties in complex reasoning tasks. This limitation can lead to overconfident
or poorly calibrated model outputs, potentially resulting in unreliable decision-making in critical
applications.
2. Motivation: Existing approaches typically use softmax probabilities, entropy measures, or ensemble
disagreement to quantify uncertainty. However, these methods often fail to capture the semantic
nuances and reasoning complexities in tasks that require deep understanding and multi-step reasoning.
Human experts, on the other hand, gauge their uncertainty by considering how well their reasoning
’resonates’ with their broader knowledge and experience. By mimicking this process in LLMs, we can
potentially develop a more robust and semantically grounded approach to uncertainty quantification.
3. Proposed Method: We propose Semantic Resonance Uncertainty Quantification (SRUQ), which
prompts the LLM to generate multiple independent reasoning paths for a given problem, then quantifies
uncertainty based on the semantic coherence and mutual reinforcement among these paths. The process
involves five key steps:
1. Generating diverse solution attempts using different prompting strategies.
2. Cross-evaluating each solution attempt against the others, assessing logical consistency and
mutual support.
3. Constructing a ’resonance graph’ where nodes are solution attempts and edges represent
semantic reinforcement.
4. Computing a resonance score based on graph properties like connectivity and centrality.
5. Mapping the resonance score to a calibrated uncertainty estimate.
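The sketch below illustrates steps 2 through 4 of this pipeline; the call_llm placeholder, the pairwise-consistency prompt, and the centrality-weighted aggregation are assumptions for illustration (the proposal leaves the exact graph statistic and the isotonic-regression calibration to the experiment plan).

```python
# Sketch: build the resonance graph from pairwise mutual-support scores and
# compute a centrality-based resonance score with networkx PageRank.
import itertools
import networkx as nx

def call_llm(prompt: str) -> str:
    """Placeholder for the underlying LLM call."""
    raise NotImplementedError

def pairwise_support(solution_a: str, solution_b: str) -> float:
    # Assumes the model returns a bare number in [0, 1].
    return float(call_llm(
        "On a scale from 0 to 1, how logically consistent and mutually supportive "
        f"are these two solutions?\nA: {solution_a}\nB: {solution_b}\nScore:"
    ).strip())

def resonance_score(solutions: list[str]) -> float:
    graph = nx.Graph()
    graph.add_nodes_from(range(len(solutions)))
    for i, j in itertools.combinations(range(len(solutions)), 2):
        graph.add_edge(i, j, weight=pairwise_support(solutions[i], solutions[j]))
    centrality = nx.pagerank(graph, weight="weight")
    mean_support = {
        i: sum(graph[i][j]["weight"] for j in graph[i]) / max(len(graph[i]), 1)
        for i in graph.nodes
    }
    # Centrality-weighted average of each solution's mean mutual support; the
    # proposal then maps this score to a calibrated uncertainty estimate
    # (e.g., with isotonic regression fit on a validation set).
    return sum(centrality[i] * mean_support[i] for i in graph.nodes)
```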
Semantic Resonance Uncertainty Quantification (SRUQ) (Part 2)
4. Step-by-Step Experiment Plan:
1. Dataset Preparation
• Utilize three datasets covering different reasoning tasks:
(a) GSM8K for mathematical problem-solving
(b) EntailmentBank for logical deduction
(c) HotpotQA for multi-hop question answering
• Split each dataset into train, validation, and test sets if not already done.
2. Baseline Implementation
• Implement three baseline uncertainty quantification methods:
(a) Softmax probabilities
(b) Monte Carlo Dropout
(c) Ensemble disagreement (using different few-shot prompts)
• Generate predictions and uncertainty estimates on the validation and test sets for each
baseline.
3. SRUQ Implementation
(a) Generate 5 diverse solution attempts using different few-shot prompts and temperature
settings.
(b) For each pair of solutions, prompt the LLM to evaluate their consistency and mutual
support.
(c) Construct the resonance graph using the pairwise evaluations.
(d) Compute the resonance score using graph centrality measures (e.g., PageRank).
(e) Map the resonance score to a calibrated uncertainty estimate using isotonic regression
on the validation set.
4. Evaluation
• Compare SRUQ against the baselines using the following metrics:
(a) Expected Calibration Error (ECE)
(b) Brier score
(c) Area Under the Precision-Recall Curve (AUPRC) for uncertainty ranking
• Evaluate the correlation between uncertainty estimates and actual errors.
5. Analysis
• Visualize the resonance graphs for high and low uncertainty examples.
• Analyze the relationship between graph properties and prediction accuracy.
• Investigate cases where SRUQ significantly outperforms or underperforms compared to
baselines.
6. Ablation Studies
• Vary the number of solution attempts.
• Compare different graph centrality measures.
• Evaluate the impact of the cross-evaluation step.
7. Generalization Test
• Test the generalization of SRUQ on out-of-distribution samples by applying the method
trained on one dataset to examples from the other datasets.
Semantic Resonance Uncertainty Quantification (SRUQ) (Part 3)
5. Test Case Examples:
• Baseline Example:
– Input: Q: If a train travels at 60 miles per hour, how far will it travel in 2.5 hours?
– Softmax Output: The train will travel 150 miles in 2.5 hours. (Confidence: 0.92)
– Explanation: The softmax probability is high, but it does not capture the reasoning
process or potential uncertainties in the calculation.
• SRUQ Example:
– Input: Q: If a train travels at 60 miles per hour, how far will it travel in 2.5 hours?
– Solution Attempts:
1. Distance = Speed × Time
* Distance = 60 miles/hour × 2.5 hours
* Distance = 150 miles
2. In 1 hour, the train travels 60 miles
* In 2 hours, it’s 120 miles
* In 0.5 hours, it’s 30 miles
* Total: 120 + 30 = 150 miles
3. Let’s break it down:
* 1 hour: 60 miles
* 1 hour: 60 miles
* 0.5 hour: 30 miles
* Sum: 60 + 60 + 30 = 150 miles
– Cross-Evaluation: All three solutions are consistent and mutually supportive. They use
different approaches but arrive at the same result.
– Resonance Graph: Fully connected graph with high edge weights
– Resonance Score: 0.95
– Calibrated Uncertainty: 0.05
– Final Output: The train will travel 150 miles in 2.5 hours. (Uncertainty: 0.05)
– Explanation: SRUQ generates multiple solution paths, evaluates their consistency, and
quantifies uncertainty based on their semantic resonance. The high resonance score
indicates low uncertainty, which is then calibrated to provide a final uncertainty estimate.
6. Fallback Plan: If SRUQ does not significantly outperform baselines, we can pivot to an analysis
paper exploring why semantic resonance might not capture uncertainty effectively. We could investigate
the quality and diversity of generated solution attempts, potentially improving the prompting strategies.
Additionally, we could examine the effectiveness of the cross-evaluation step, possibly incorporating ex-
ternal knowledge or more structured reasoning. Furthermore, we could explore the relationship between
graph properties and actual uncertainty, which might reveal insights about how LLMs represent confi-
dence internally. We could also consider combining SRUQ with traditional uncertainty quantification
methods, creating a hybrid approach that leverages both statistical and semantic information.
Reviewer 1
Novelty: 6 (reasonably novel - there are some notable differences from existing ideas and probably
enough to turn into a new paper)
Rationale: I haven’t seen (and couldn’t find) any prior work which exactly has the same idea as in
this proposal. The proposed idea is definitely related to using consistency among multiple solutions
to estimate uncertainty (e.g. https://arxiv.org/abs/2405.18711 does this across solutions decoded from
different layers) but I have not seen the idea of constructing resonance graph and using graph properties
to estimate uncertainty.
Feasibility: 8 (Highly Feasible: Straightforward to implement the idea and run all the experiments.)
Rationale: The proposed method, SRUQ, should be pretty easy to implement given that LLM API
access is abundant. SRUQ involves multiple steps all of which can be done through prompting via API
— getting multiple solutions, prompting LLMs to get a consistency score between each pair of solutions
etc. The parts which cannot be implemented through API are the baselines e.g. Monte Carlo dropout,
and would require GPUs. To do a fair comparison to the baselines, I imagine SRUQ will also have to be
done on open models which could also require GPUs.
Expected Effectiveness: 6 (Somewhat effective: There is a decent chance that the proposed idea can
beat existing baselines by moderate margins on a few benchmarks.)
Rationale: Although the proposal includes some baselines that should be compared to, it does
not mention some methods which seem to do quite well with LLMs (especially getting better
with scale) – e.g. methods like P(True) (https://arxiv.org/abs/2207.05221) or verbalized confidence
(https://arxiv.org/abs/2305.14975). It’s not clear/obvious to me that the proposed method should do
better than these baselines.
Excitement: 6 (Leaning positive: exciting enough to be accepted at a major AI conference, but still
has some weaknesses or somewhat incremental)
Rationale: While the method is novel and feasible, I’m not too excited by it since some
of the other existing methods out there mentioned above (like https://arxiv.org/abs/2207.05221,
https://arxiv.org/abs/2305.14975) are much simpler and work quite well. Compared to that SRUQ is
more complex, and hence maybe has less chance of being very impactful (unless it works really better).
Overall Score: 6 (Marginally above the acceptance threshold of major AI conferences)
Rationale: The above accept score is assuming the idea does work better than the baselines on some
category of tasks. Overall, given that the idea is novel, the proposal includes comparison to other
baselines as well analysis & ablations, I think that could be enough to get accepted into an AI conference.
Confidence: 4 (You are confident but not absolutely certain that the evaluation is correct)
Reviewer 2
Novelty: 6 (reasonably novel - there are some notable differences from existing ideas and probably
enough to turn into a new paper)
Rationale: The proposed approach shares some similar ideas with self-consistency (which suggests
the consistency of sampled LLMs outputs is relatively well calibrated). But the approach is more
generalized and fine-grained than existing work if the approach uses more advanced ‘mutual support
evaluation‘ beyond simply comparing the final answers.
Feasibility: 5 (Moderately feasible: It can probably be executed within the given time frame but would
require careful planning, efficient use of APIs or some advanced computational strategies to overcome
the limited GPU resources, and would require some modifications to the original proposal to make it
work.)
Rationale: There lacks some important details in terms of the cross-evaluation part. How is the mutual
support evaluated (by prompting or some other methods?). This part is crucial for implementing the
whole pipeline of this approach.
Expected Effectiveness: 6 (Somewhat effective: There is a decent chance that the proposed idea can
beat existing baselines by moderate margins on a few benchmarks.)
Rationale: I think it has some chances to beat the proposed baselines. If the cross-evaluation part is
properly executed. Again, the success of this proposal is highly dependent on that part.
Excitement: 6 (Leaning positive: exciting enough to be accepted at a major AI conference, but still
has some weaknesses or somewhat incremental)
Rationale: If this idea actually works, at least it tells something new about how to use multiple samples
to provide better confidence estimation than simple consistency. But the idea itself is still somewhat
incremental given the existence of current consistency-based calibrators.
Overall Score: 6 (Marginally above the acceptance threshold of major AI conferences)
Rationale: Overall there are some incremental contributions, but not too exciting. The algorithm itself
can be neat. I think it can be worth a borderline acceptance.
Confidence: 4 (You are confident but not absolutely certain that the evaluation is correct)
Reviewer 3
Novelty: 6 (reasonably novel - there are some notable differences from existing ideas and probably
enough to turn into a new paper)
Rationale: I think the idea is reasonable and indeed identifies some limitations of current works on
uncertainty estimation. However, the consistency between reasoning paths is somehow similar to
self-consistency reasoning from Google and SelfCheckGPT.
Feasibility: 7
Rationale: I think it could be easy to implement and quickly be tried by PhD students or even
undergrads. Also, in the test case example, the setting is straightforward and well-defined.
Expected Effectiveness: 6 (Somewhat effective: There is a decent chance that the proposed idea can
beat existing baselines by moderate margins on a few benchmarks.)
Rationale: Based on my experience, the consistency-based methods, although not fully theoretically
grounded, can work pretty well in current uncertainty estimation questions. I believe working this on
the reasoning path level could also work to some extent.
Excitement: 6 (Leaning positive: exciting enough to be accepted at a major AI conference, but still
has some weaknesses or somewhat incremental)
Rationale: Overall, this idea identified a good research question, although the method might not be
very exciting to me.
Overall Score: 6 (Marginally above the acceptance threshold of major AI conferences)
Rationale: The novelty and the actual application of this method in the area is limited, but could be an
inspiring idea.
Confidence: 4 (You are confident but not absolutely certain that the evaluation is correct)
A.22 EXAMPLE IDEA: TRANSLATION WITH LLMS THROUGH PROMPTING WITH LONG-FORM
CONTEXT
Translation with LLMs through Prompting with Long-Form Context (Part 1)
1. Problem Statement: Stable generation of text in low-resource languages is an unsolved issue in
large language models.
2. Motivation: While LLMs can often produce surprisingly good translations despite not being
explicitly trained for this task, this does not hold for lower-resource languages. LLMs are both more
likely to generate off-target text (text in another language than intended) when prompted to translate to
a lower-resource language, and show increased instability in translation quality across prompt templates
in lower-resource languages.
3. Proposed Method: Our proposed method investigates the use of long-form templates to improve
generated translation quality and reduce off-target translations in lower-resource languages. We propose
to provide additional prompt context by translating multi-sentence input, with additional views of the
target language with the langid template provided as context. We do so in multiple stages:
1. Querying the language model to first generate a paragraph containing the source sentence
to be translated.
2. Prepending monolingual text in the target language, with langid: tags, above the translation
prompt.
3. Presenting both these additional sources of content, prompting the LLM for a translation.
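A minimal sketch of these three stages follows; the call_llm placeholder, the langid tag format, and the source of the monolingual target-language snippets are assumptions for illustration.

```python
# Sketch of the proposed long-form-context translation prompting.
def call_llm(prompt: str) -> str:
    """Placeholder for the underlying LLM call."""
    raise NotImplementedError

def long_context_translate(sentence: str, src_lang: str, tgt_lang: str,
                           monolingual_tgt: list[str]) -> str:
    # Stage 1: expand the source sentence into a paragraph.
    paragraph = call_llm(
        f"Write a paragraph containing the following sentence:\n[{src_lang}] {sentence}"
    )
    # Stage 2: prepend langid-tagged monolingual text in the target language.
    context = "\n".join(f"[{tgt_lang}] {line}" for line in monolingual_tgt)
    # Stage 3: prompt for the translation given both sources of context.
    return call_llm(
        f"{context}\nTranslate this paragraph from {src_lang} to {tgt_lang}:\n"
        f"[{src_lang}] {paragraph}\n[{tgt_lang}]"
    )
```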
4. Step-by-Step Experiment Plan:
1. Choose datasets: Evaluate on the FLORES-200 datasets, which allow for wide language
coverage on the Wikipedia domain, as well as the WMT-21 test sets for news and law/medical
domain.
2. Choose languages: Opt for English-centric translation with:
• 5 high-resource languages with different scripts (French, German, Russian, Chinese,
Japanese)
• 5 mid-resource languages (Farsi, Vietnamese, Arabic, Korean, Hebrew)
• 5 low-resource languages with considerably lower likelihood of incidental bilingualism
(Gujarati, Thai, Tajik, Sindhi, Pashto)
3. Choose models: Include the API-based GPT-3.5 (Text-Davinci-003) and GPT-4 model from
OpenAI and Gemini from Google, as well as the open-weight LLaMA-3, Gemma, and Aya
models which enable additional analysis.
4. Gather translation results: Systematically compare standard MT prompt templates to our
proposed method across different models and language pairs. Additionally ablate the steps
of the new method (removing langid templates; replacing langid templates with endonymic
langid tags; provide only the generated paragraph; only the monolingual content).
5. Perform analysis: Evaluate whether the new method improves the performance of LLMs in
these tasks as compared to the baselines using multiple standard automatic metrics for MT
(chrF, COMET, BLEU) and token-level LID to measure off-target translations. Assess which
component(s) are necessary for this improvement and whether or not there are changes across
language pair direction and language resource levels or scripts.
Translation with LLMs through Prompting with Long-Form Context (Part 2)
5. Test Case Example:
• Baseline Prompt Template:
[English] This is an example.
[Spanish]
• Baseline Prompt Expected Output:
[Spanish] Este es un ejemplo.
• Proposed Prompt Input (step 1):
Write a paragraph containing the following sentence:
[English] This is an example.
• Expected model output (step 1):
This is an example link. Italics and Bold: use asterisks and
underscores to indicate spans of emphasis. Use one asterisk
(or underscore) for italics and two asterisks (or two
underscores) for bold. For example...
• Proposed Prompt Input (step 2):
[Spanish] Computadoras simples son lo suficientemente pequeñas
para residir en los dispositivos móviles.
[Spanish] Las computadoras portátiles, tales come tabletas,
netbooks, notebooks, ultrabooks, pueden ser alimentadas por
pequeñas baterías.
[Spanish] Las computadoras personales en sus diversas formas
son iconos de la llamada era de la información y son lo que la
mayoría de la gente considera como «computadora».
Translate this paragraph from English to Spanish:
[English] This is an example link. Italics and Bold: use
asterisks and underscores to indicate spans of emphasis. Use
one asterisk (or underscore) for italics and two asterisks
(or two underscores) for bold. For example...
[Spanish]
• Proposed Prompt Expected Output:
[Spanish] Este es un enlace de ejemplo. Cursiva y negrita:
utilice asteriscos y guiones bajos para indicar intervalos
de énfasis. Utilice un asterisco (o guión bajo) para cursiva
y dos asteriscos (o dos guiones bajos) para negrita.
Por ejemplo...
Translation with LLMs through Prompting with Long-Form Context (Part 3)
5. Test Case Example (Continued):
• Proposed Prompt Input (step 3):
Which of the following sentences are translations of the
following English sentence? Multiple sentences can be chosen.
[English] This is an example.
1. Este es un enlace de ejemplo.
2. Cursiva y negrita: utilice asteriscos y guiones bajos para
indicar intervalos de énfasis.
3. Utilice un asterisco (o guión bajo) para cursiva y dos
asteriscos (o dos guiones bajos) para negrita.
4. Por ejemplo...
• Proposed Prompt Expected Output:
The sentence "This is an example." can be translated to
Spanish as:
1. Este es un ejemplo.
2. Por ejemplo...
These two options correctly translate the meaning of
"This is an example." into Spanish.
6. Fallback Plan: If the proposed method does not help as compared to the baseline, analyzing the
results of step 3 would likely provide further insights into how the template should be modified. In
addition to potentially identifying off-target errors, it may be that the model is unable to identify correct
translations even if they have been generated, and results are likely to vary across languages based on
their training data. Using the generated paragraph as provided context and still querying the model to
translate at only the sentence level could be compared. Restricting monolingual text to be retrieved text
within the domain of the source sentence could be explored. Adding few-shot examples in the prompt
and comparing other MT prompt templates may also help debug the proposed method. Including an
additional query where the model is first asked to label each generated token by langid and then asked
to re-translate the source including those tokens which are correctly labelled in target may reinforce
langid and guide generation in the target language. Performing layer-wise analyses of likelihood of
generating the next token in-language and in-script for open-weight models may also help debug where
and why off-target issues persist.
Reviewer 1
Novelty: 5 (somewhat novel - there are differences from existing ideas but not enough to turn into a
new paper)
Rationale: While I’m not aware of papers that have used this exact prompting strategy, I don’t think that
this proposal will be enough to justify a publication. I think that there should be a variety of strategies
suggested + an analysis of multiple prompting strategies rather than suggesting one strategy. I think that
a thorough analysis of the effects of additional context / langids could potentially turn this into a paper.
Feasibility: 9
Rationale: Such a project that only uses LLM APIs could be executed very quickly without much
expertise in coding/architecture. The only time-consuming part might be iterating and adjusting the
prompts in the ablation studies.
Expected Effectiveness: 7
Rationale: I think that this proposal could work well to guide LLMs to translate in the desired target
language, since this is a known problem with current prompt-based MT strategies (as the writers have
suggested).
Excitement: 5 (Leaning negative: it has interesting bits but overall not exciting enough)
Rationale: I’m not sure how well this method will transfer to future models, and this could be a limiting
factor in the longevity of this research. (But this is a limitation of all prompting research...)
Overall Score: 5 (Decent idea but has some weaknesses or not exciting enough, marginally below the
acceptance threshold of major AI conferences)
Rationale: I think that the work should focus on the ablation studies and comparison of multiple
prompting strategies / analysis, rather than focusing on one new strategy.
Confidence: 3 (You are fairly confident that the evaluation is correct)
Reviewer 2
Novelty: 1 (not novel at all - there are many existing ideas that are the same)
Rationale: There are multiple existing works on prompting LLMs on low-resource translation, usually
using few-shot demos: https://proceedings.mlr.press/v202/garcia23a/garcia23a.pdf,
https://arxiv.org/pdf/2305.14857. Also work explaining why few-shot prompting would work:
https://arxiv.org/pdf/2305.10266
Feasibility: 5 (Moderately feasible: It can probably be executed within the given time frame but would
require careful planning, efficient use of APIs or some advanced computational strategies to overcome
the limited GPU resources, and would require some modifications to the original proposal to make it
work.)
Rationale: The prompting experiment is mostly feasible given one can afford the API calls. The model,
prompts, and evaluation metrics are concrete, although unclear if the proposed experiment is useful for
proving the research idea, e.g., a few high-resource languages are listed for a research idea that focuses
on low-resource languages.
Expected Effectiveness: 3 (Low Effectiveness: The idea might work in some special scenarios but you
don’t expect it to work in general.)
Rationale: The proposed experiment can help find a set of relatively high-performing prompts, but it is
unclear among the prompts proposed if any of them will bring any improvement.
Excitement: 3 (Mediocre: this idea makes marginal contributions and is very incremental)
Rationale: The ability to do prompting/few-shot translation is fundamentally tied to the training
data, see https://arxiv.org/pdf/2305.10266, so trying to solve this problem from the prompting space is
inherently limited.
Overall Score: 3 (Clear rejection for major AI conferences)
Rationale: There is similar work on prompting LLMs to generate translation in low-resource languages,
hence the idea is not very novel. Moreover, in terms of the goal to generate high-quality low-resource
translation, the gains likely are not going to come from prompting.
Confidence: 4 (You are confident but not absolutely certain that the evaluation is correct)
A.23 EXAMPLE IDEA: LINGUISTIC PIVOT CONSTELLATION: ENHANCING CROSS-LINGUAL
TRANSFER FOR LOW-RESOURCE LANGUAGES AND DIALECTS
Linguistic Pivot Constellation (LPC): Enhancing Cross-Lingual Transfer for Low-
Resource Languages and Dialects (Part 1)
1. Problem Statement: Large language models struggle with cross-lingual transfer, especially for
low-resource languages and dialects. This limitation hinders the models’ ability to perform well on
multilingual tasks involving these languages, potentially exacerbating digital language divides.
2. Motivation: Current approaches often rely on parallel data or multilingual pretraining, which are
limited for many language pairs. Inspired by how polyglots leverage similarities between known
languages to learn new ones, we propose creating a network of conceptual bridges across languages.
This method could potentially overcome the limitations of existing approaches by leveraging the
model’s broad knowledge to create connections between known and unknown linguistic territories.
3. Proposed Method: We introduce Linguistic Pivot Constellation (LPC), a novel prompting technique
that constructs a dynamic network of linguistic pivot points. For a given task, LPC first identifies
conceptually similar languages or dialects to the target language. It then generates a constellation of
prompts in these pivot languages, each capturing a different aspect of the task. The model is guided
to ’triangulate’ the correct response by considering these multiple perspectives. For example, to
translate a rare dialect, LPC might use prompts in related languages, regional lingua francas, and even
etymologically connected languages.
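The sketch below shows how pivot selection and the constellation prompt could be assembled; the similarity matrix, the pivot-phrase lookup, and the prompt wording are assumptions for illustration rather than the proposal's specification.

```python
# Sketch: select pivot languages from a similarity matrix and build a
# constellation prompt around the target-language input.
def select_pivots(target_lang: str,
                  similarity: dict[str, dict[str, float]],
                  k: int = 3) -> list[str]:
    scores = similarity.get(target_lang, {})
    return [lang for lang, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]

def constellation_prompt(sentence: str, target_lang: str, pivots: list[str],
                         pivot_phrases: dict[str, str]) -> str:
    lines = [f"We will translate a {target_lang} sentence to English. "
             "To help with this task, consider the following related phrases:"]
    for lang in pivots:
        if lang in pivot_phrases:  # phrases in related languages, however obtained
            lines.append(f"In {lang}: '{pivot_phrases[lang]}'")
    lines.append(f"Now, translate the {target_lang} sentence to English: '{sentence}'")
    return "\n".join(lines)
```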
4. Step-by-Step Experiment Plan:
1. Data Collection
• Gather datasets for translation and question-answering tasks across a diverse set of
low-resource languages and dialects.
• Utilize the FLORES-101 dataset for machine translation and the TyDi QA dataset for
question answering.
2. Baseline Implementation
• Implement standard few-shot prompting and existing cross-lingual transfer methods
(e.g., zero-shot cross-lingual transfer) as baselines.
3. LPC Implementation
(a) Create a language similarity matrix based on language families and geographical prox-
imity.
(b) Implement a function to select the most relevant pivot languages for a given target
language.
(c) Design prompts for each pivot language that capture different aspects of the task.
4. Prompt Construction
(a) Select 3-5 pivot languages based on the similarity matrix.
(b) Generate task-specific prompts in each pivot language.
(c) Combine these prompts into a ’constellation’ prompt that includes the original task in
the target language.
5. Model Selection
• Use GPT-4 as the primary model for experiments.
• Test with GPT-3.5-turbo for comparison.
6. Experiment Execution
(a) Run the baseline methods.
(b) Run the LPC method with varying numbers of pivot languages (1, 3, and 5).
(c) Record the model outputs and performance metrics.
Linguistic Pivot Constellation (LPC): Enhancing Cross-Lingual Transfer for Low-
Resource Languages and Dialects (Part 2)
4. Step-by-Step Experiment Plan (Continued):
7. Evaluation
• Evaluate the results using task-specific metrics:
– BLEU score for translation tasks
– F1 score for question answering tasks
8. Analysis
• Analyze the effectiveness of different pivot language combinations and the method’s
scalability to extremely low-resource scenarios.
• Compare LPC performance against baselines across different language families and
resource levels.
5. Test Case Examples:
• Test Case 1:
– Baseline Prompt Input: Translate the following Sicilian sentence to English: ’Unni
c’è fumu c’è focu.’
– Baseline Prompt Expected Output: Where there’s smoke, there’s fire.
– Proposed Prompt Input: We will translate a Sicilian sentence to English. To help with
this task, consider the following related phrases:
In Italian: ’Dove c’è fumo c’è fuoco.’
In Neapolitan: ’Addò ce sta ’o fummo ce sta ’o ffuoco.’
In Latin: ’Ubi fumus, ibi ignis.’
Now, translate the Sicilian sentence to English: ’Unni c’è fumu c’è focu.’
– Proposed Prompt Expected Output: Where there’s smoke, there’s fire.
– Explanation: The LPC method provides context from related languages (Italian,
Neapolitan, and Latin), which can help the model better understand and translate
the Sicilian phrase. This is especially useful for low-resource languages like Sicilian,
where direct translation data might be limited.
6. Fallback Plan: If the LPC method does not significantly outperform baselines, we will pivot
the project towards an in-depth analysis of cross-lingual transfer mechanisms. We will investigate
the relationship between language similarity and transfer effectiveness, the impact of pivot language
selection on performance, and how different aspects of language (lexical, syntactic, semantic) transfer
across the constellation. This analysis could provide valuable insights into the strengths and limitations
of large language models in cross-lingual tasks, potentially informing future research directions in
multilingual Natural Language Processing.
Reviewer 1
Novelty: 9
Rationale: The idea of using a linguistic similarity matrix to form conceptual bridges when constructing
prompts to improve cross-lingual transfer is one that I have not heard of before. I think this could be an
interesting way of leveraging existing information about related languages for NLP tasks in general.
Feasibility: 8 (Highly Feasible: Straightforward to implement the idea and run all the experiments.)
Rationale: I think the idea makes sense, but more details should be shared about how exactly this
language similarity matrix is constructed and what algorithms will be used for determining language
similarity. More details should be provided on how the prompts for different languages will be obtained
and how the data will be collected, which might be a time bottleneck.
Expected Effectiveness: 6 (Somewhat effective: There is a decent chance that the proposed idea can
beat existing baselines by moderate margins on a few benchmarks.)
Rationale: I think that this idea could work well just by providing more context in different languages.
The effectiveness sounds like it might be highly variable on the selection of pivot languages, though.
Excitement: 7
Rationale: I think that this could be interesting beyond the context of prompting, such as the use of
pivot languages in traditional machine translation.
Overall Score: 7 (Good idea, would be accepted by major AI conferences)
Rationale: I think that the idea is sufficiently novel, and if it is executed well with good results, could
produce a quality paper at a top NLP conference.
Confidence: 3 (You are fairly confident that the evaluation is correct)
Reviewer 2
Novelty: 8 (clearly novel - major differences from all existing ideas)
Rationale: The LPC method introduces a novel way of leveraging related languages and dialects to
improve cross-lingual transfer. While cross-lingual transfer and language similarity have been explored,
the idea of dynamically creating a constellation of prompts using pivot languages for specific tasks is a
fresh and innovative approach.
Feasibility: 5 (Moderately feasible: It can probably be executed within the given time frame but would
require careful planning, efficient use of APIs or some advanced computational strategies to overcome
the limited GPU resources, and would require some modifications to the original proposal to make it
work.)
Rationale: Implementing LPC could be challenging due to the complexities involved in selecting
optimal pivot languages and designing effective prompts for each. While the concept is sound, the
practical execution—such as building the language similarity matrix and dynamically generating
prompts—may require substantial effort and experimentation.
Expected Effectiveness: 6 (Somewhat effective: There is a decent chance that the proposed idea can
beat existing baselines by moderate margins on a few benchmarks.)
Rationale: The LPC method has the potential to improve cross-lingual performance, especially in
low-resource languages. By leveraging linguistic similarities, the model might better understand and
translate languages with limited training data.
Excitement: 7
Rationale: The LPC method is exciting because it tackles a critical challenge in multilingual
NLP—improving performance for low-resource languages. If successful, it could significantly en-
hance the accessibility and usability of AI models across diverse linguistic contexts, particularly in
underrepresented languages.
Overall Score: 6 (Marginally above the acceptance threshold of major AI conferences)
Rationale: The idea is a promising candidate for exploration in the field of multilingual NLP. It
introduces a novel approach that could potentially improve cross-lingual transfer, particularly for
low-resource languages and dialects. However, the challenges in implementation and the uncertain
effectiveness of the method warrant a cautious overall rating.
Confidence: 4 (You are confident but not absolutely certain that the evaluation is correct)
Reviewer 3
Novelty: 8 (clearly novel - major differences from all existing ideas)
Rationale: Leveraging language similarity is often quite well studied in machine translation, but there
hasn’t been one studying using similar language as demonstration in multilingual in-context learning. It
would be interesting to see how the model behavior change with different pivots.
Feasibility: 8 (Highly Feasible: Straightforward to implement the idea and run all the experiments.)
Rationale: The implementation will mostly involve building the similarity matrix and formatting the
prompts. The similarity matrix should be able to get from some existing works. The prompt formatting
and experiments part should be pretty straightforward with enough API quota.
Expected Effectiveness: 6 (Somewhat effective: There is a decent chance that the proposed idea can
beat existing baselines by moderate margins on a few benchmarks.)
Rationale: The idea is pretty interesting, but it’s not exactly sure whether similar languages are
informative enough for the model, since it still requires the model to understand the similarity between
languages and reason over the relationship between target language and the given languages.
Excitement: 8 (Exciting: would deepen the community’s understanding or make major progress in this
research direction)
Rationale: It would be informative to the community to see whether such demonstration can lead to
good performance for in-context learning. Even if this idea doesn’t work, the analysis will be quite
informative.
Overall Score: 7 (Good idea, would be accepted by major AI conferences)
Rationale: This work studies an important problem for the multilingual community. The experiment
results and analysis will be quite informative for multilingual in-context learning.
Confidence: 4 (You are confident but not absolutely certain that the evaluation is correct)
A.24 EXAMPLE IDEA: LLM DIRECTED RETRIEVAL QUERYING FOR IMPROVING FACTUALITY
LLM Directed Retrieval Querying for Improving Factuality (Part 1)
1. Problem Statement: Large language models can generate flexible, long-form language generations,
but LLM-generated responses often contain hallucinated or factually inconsistent content. Particularly
in high-risk settings, there is a need for methods to improve the factuality of LLMs.
2. Motivation: A common framework for improving the factuality of LLM generations is retrieval
augmented generation (RAG). In a RAG framework, a retriever takes a query as input and retrieves
external knowledge from a high-quality knowledge base from reliable sources. The retrieved content
is incorporated into the prompt for generating the response. One issue with this approach is that the
quality of the generation can be bottlenecked by the quality of the retrieved content. Retrieval can be
challenging for tasks where the query objective is underspecified or additional reasoning (or multi-step
reasoning) on the query is required to retrieve content that supports the query.
3. Proposed Method: Our method refines the query by using an LLM to decompose the problem into
sub-questions and generate candidate answers to expand each sub-question. The key steps include:
1. Decomposing the original question into sub-questions using an LLM.
2. Generating candidate answers for each sub-question using the LLM.
3. Expanding each sub-question with generated candidate answers to create retrieval queries.
4. Retrieving passages for each expanded query.
5. Filtering retrieved passages based on retrieval model score.
6. Aggregating filtered passages across sub-questions.
7. Prompting the generative LLM with the aggregated passages as context to answer the original
question.
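As a minimal illustrative sketch, the pipeline above can be written as follows in Python; the llm() and retrieve() helpers, the prompt wordings, and the score threshold are hypothetical placeholders rather than parts of the proposal, and any dense retriever (e.g., an E5 or BGE index) could back retrieve().

# Illustrative sketch only: llm() and retrieve() are hypothetical stand-ins for an LLM API
# call and a dense retriever; they are not part of the original proposal.
def llm(prompt: str) -> str:
    raise NotImplementedError("placeholder for an LLM completion call")

def retrieve(query: str, k: int = 5) -> list[tuple[str, float]]:
    raise NotImplementedError("placeholder returning (passage, retrieval_score) pairs")

def directed_retrieval_answer(question: str, n_candidates: int = 3, score_threshold: float = 0.5) -> str:
    # Step 1: decompose the original question into sub-questions.
    sub_questions = llm("Decompose the question into sub-questions, one per line:\n" + question).splitlines()
    passages = []
    for sq in sub_questions:
        # Steps 2-3: generate candidate answers and expand the sub-question into retrieval queries.
        candidates = llm("Give {} short candidate answers, one per line:\n{}".format(n_candidates, sq)).splitlines()
        queries = [sq + " " + c for c in candidates]
        # Steps 4-5: retrieve passages for each expanded query and filter by retrieval score.
        for q in queries:
            passages += [(p, s) for p, s in retrieve(q) if s >= score_threshold]
    # Step 6: deduplicate and aggregate the filtered passages across sub-questions.
    context = "\n".join(dict.fromkeys(p for p, _ in passages))
    # Step 7: answer the original question with the aggregated passages as context.
    return llm("Context:\n" + context + "\n\nAnswer the question: " + question)

In this sketch, deduplication simply keeps the first occurrence of each passage; the proposal's retrieval-score filtering corresponds to the threshold check.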
4. Step-by-Step Experiment Plan:
1. Choose RAG datasets where the retrieval task has underspecified/unique objectives or
requires multi-hop reasoning, such as BIRCO and HotpotQA.
2. Select a retriever, such as an E5 or BGE model, and a generative LLM, such as GPT or
LLaMA-3.
3. Establish Baseline:
(a) Use the example question as the query to the retriever to retrieve relevant content from
the retrieval passage pool.
(b) Construct a prompt that provides the retrieved context passages and the question.
(c) Prompt the generative LLM to answer the question using the context.
4. Implement Proposed Method:
(a) Prompt the generative LLM to decompose the question into sub-questions.
(b) For each sub-question, prompt the generative LLM to generate candidate answers.
(c) Use semantic similarity to cluster the generated candidate answers and sample for
semantic diversity.
(d) Construct retrieval queries by expanding each sub-question with sampled candidate
answers.
(e) Retrieve passages using each query and aggregate results for each sub-question.
(f) Deduplicate retrieved passages and filter based on retrieval model score.
(g) Prompt the generative LLM with filtered passages as context to answer the original
question.
LLM Directed Retrieval Querying for Improving Factuality (Part 2)
5. Test Case Examples:
• Test Case 1:
– Original Question: In which region is the village after which lager "Fucking Hell" is
named?
– Baseline:
* Retrieval Query: In which region is the village after which lager "Fucking Hell" is
named?
* Retrieved Passage: Fucking Hell is a German pale lager, a Pilsner, with an alcohol
content of 4.9%. It is named after Fucking, the previous name of the village of
Fugging in Austria; hell is the German word for ’pale’ and a typical description of
this kind of beer. The beer’s name was initially controversial. Both the local author-
ities in Fucking and the European Union’s Trade Marks and Designs Registration
Office initially objected to the name. It was eventually accepted and the lager is
sold internationally.
* Prompt: Given the retrieved passage(s) as context and the question, answer the
question using the context.
* Answer: The village after which the lager "Fucking Hell" is named is located in
Austria.
– Proposed Method:
* Sub-Questions:
1. What village is the lager "Fucking Hell" named after?
2. In which country is this village located?
3. In which specific region or state within that country is the village located?
* Example Retrieval Query: What village is the lager "Fucking Hell" named after?
The lager "Fucking Hell" is named after the village previously known as Fucking,
which is now called Fugging, in Austria.
* Retrieved Passages:
1. Fucking Hell is a German pale lager, a Pilsner, with an alcohol content of 4.9%.
It is named after Fucking, the previous name of the village of Fugging in Austria;
hell is the German word for ’pale’ and a typical description of this kind of
beer. The beer’s name was initially controversial. Both the local authorities in
Fucking and the European Union’s Trade Marks and Designs Registration Office
initially objected to the name. It was eventually accepted and the lager is sold
internationally.
2. Fugging, spelled Fucking until 2021, is an Austrian village in the municipality of
Tarsdorf, located in the Innviertel region of western Upper Austria. It is 33 km
(21 mi) north of Salzburg and 4 km (2.5 mi) east of the Inn river, which forms
part of the German border.
* Prompt: Given the retrieved passage(s) as context and the question, answer the
question using the context.
* Answer: The village after which the lager "Fucking Hell" is named is located in
the Innviertel region of western Upper Austria.
6. Fallback Plan: If the proposed method does not satisfy the success criteria, alternative approaches
could be explored. These may include quantifying the difficulty of various examples and analyzing
whether this correlates with method improvement. The method is likely to be more effective for
questions about esoteric facts, where the model is less likely to have internal knowledge of the answer,
or its generated answers are more likely to disagree. Additionally, the method may be more beneficial
for questions requiring information from multiple passages. Further analysis could help debug why the
proposed method did not work, informing alternative new methods or transforming the project into an
analysis paper by offering interesting ablations and insights.
Reviewer 1
Novelty: 1 (not novel at all - there are many existing ideas that are the same)
Rationale: I find this idea extremely similar to "GenDec: A robust generative Question-decomposition
method for Multi-hop reasoning" by Wu et al. (2024). Link: https://arxiv.org/html/2402.11166v1
Feasibility: 8 (Highly Feasible: Straightforward to implement the idea and run all the experiments.)
Rationale: Technically, this idea could be reproduced quickly by building on the aforementioned paper. Although the motivations and evaluations differ from the existing work, they should not take long to figure out.
Expected Effectiveness: 3 (Low Effectiveness: The idea might work in some special scenarios but you
don’t expect it to work in general.)
Rationale: Given that the idea is too similar to an existing one, the author may need to create a new but related idea as a follow-up study of the aforementioned paper. That said, this idea does have a different motivation from the existing work and therefore uses different evaluation methods.
Excitement: 2
Rationale: Reviewers may question the originality and novelty of this idea if it is submitted to a venue. They may not find it exciting, either.
Overall Score: 1 (Critically flawed, trivial, or wrong, would be a waste of students’ time to work on it)
Rationale: The students should probably think one step further beyond the existing study; they may eventually find a way to improve the existing system.
Confidence: 5 (You are absolutely certain that the evaluation is correct and very familiar with the
relevant literature)
Reviewer 2
Novelty: 6 (reasonably novel - there are some notable differences from existing ideas and probably
enough to turn into a new paper)
Rationale: Query decomposition and RAG are each well studied; if there is no existing work that combines both (I am not aware of any), then it is reasonably novel.
Feasibility: 10 (Easy: The whole proposed project can be quickly executed within a few days without
requiring advanced technical skills.)
Rationale: It is just a series of prompting steps, which should be easy for a CS PhD student.
Expected Effectiveness: 8 (Probably Effective: The idea should offer some significant improvement
over current methods on the relevant benchmarks.)
Rationale: This method involves multiple fine-grained retrieval operations and should naturally
outperform existing retrieval methods without decomposition.
Excitement: 6 (Leaning positive: exciting enough to be accepted at a major AI conference, but still
has some weaknesses or somewhat incremental)
Rationale: Although I believe in the effectiveness of the proposed method, the high latency compared
to baselines is a concern—training an end-to-end model to reduce latency might be a good add-on.
Overall Score: 7 (Good idea, would be accepted by major AI conferences)
Rationale: This is a good idea. If there is no identical existing work and the authors conduct compre-
hensive experiments, it would be a good paper.
Confidence: 4 (You are confident but not absolutely certain that the evaluation is correct)
Reviewer 3
Novelty: 5 (somewhat novel - there are differences from existing ideas but not enough to turn into a
new paper)
Rationale: The idea aims to tackle a question by breaking it down and solving the parts one by one with RAG, but it seems to be a more specialized form of CoT with RAG.
Feasibility: 5 (Moderately feasible: It can probably be executed within the given time frame but would
require careful planning, efficient use of APIs or some advanced computational strategies to overcome
the limited GPU resources, and would require some modifications to the original proposal to make it
work.)
Rationale: The idea assumes a question can be broken down into sub-questions that are independent of one another. When they are not independent, the method might suffer from errors or inefficiency. That said, the distribution of such questions is probably long-tailed and dominated by questions that can be easily decomposed. There is also the question of what happens when a problem, e.g., in high-level mathematics, is difficult to the point where it decomposes into a number of sub-questions that scales non-linearly with the length of the question text.
Expected Effectiveness: 5 (Somewhat ineffective: There might be some chance that the proposed idea
can work better than existing baselines but the improvement will be marginal or inconsistent.)
Rationale: The main question is how the sub-questions are created. We can break the question into conditioned parts p(q_0 | q_1, ..., q_n) ... p(q_n | q_0, ..., q_{n-1}), where we assume the sub-questions to be dependent on each other, or we can use an LLM to reason about their dependency. We can also decompose a question into leveled sub-questions, e.g., turning "where is this person from" into "which country is this person from", "which city is this person from", and "which district is this person from". The concern is that these different decomposition methods might affect performance differently.
Excitement: 6 (Leaning positive: exciting enough to be accepted at a major AI conference, but still
has some weaknesses or somewhat incremental)
Rationale: The idea seems exciting as it prevents the LLM from shortcutting the question and hallucinating, but it needs a more careful formulation of how the question should be broken down. A very basic implementation would just degrade to CoT reasoning with RAG at each step, since in some sense this could be a subset of CoT methods.
Overall Score: 6 (Marginally above the acceptance threshold of major AI conferences)
Rationale: I believe there should be more comparison with CoT as motivation: why should prompting the model step by step with RAG be better, and how are the two different? For the problem formulation, it would also be great to list more edge-case examples of how questions can be divided, to help pilot the prompting methods.
Confidence: 4 (You are confident but not absolutely certain that the evaluation is correct)
A.25 EXAMPLE IDEA: SEMANTIC DIVERGENCE MINIMIZATION: REDUCING
HALLUCINATIONS IN LARGE LANGUAGE MODELS THROUGH ITERATIVE CONCEPT
GROUNDING
Semantic Divergence Minimization: Reducing Hallucinations in Large Language Models through Iterative Concept Grounding (Part 1)
1. Problem Statement: Large language models often generate hallucinations by diverging from the
core semantic content of the input, especially in complex reasoning tasks. This problem undermines
the reliability and trustworthiness of LLMs in critical applications that require accurate and factual
responses.
2. Motivation: Current approaches like chain-of-thought prompting focus on generating intermediate
steps but do not explicitly constrain semantic drift. By continuously grounding generated content to
the original semantic space of the input, we can reduce hallucinations while preserving reasoning
capabilities. This method leverages the LLM’s own ability to extract and compare semantic concepts,
creating a self-correcting mechanism that does not require external knowledge bases or complex
architectures.
3. Proposed Method: We introduce Semantic Divergence Minimization (SDM) prompting. For each
reasoning step, we prompt the model to:
1. Generate a candidate next step.
2. Extract key semantic concepts from the original input.
3. Measure semantic similarity between the candidate step and extracted concepts.
4. If similarity is below a threshold, regenerate the step with explicit instructions to incorporate
more relevant concepts.
5. Repeat until convergence or maximum iterations.
This creates a semantic ’gravity well’ that keeps reasoning tethered to the input’s conceptual core.
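A minimal sketch of one SDM reasoning step is given below, assuming the same LLM is prompted both to extract concepts and to rate similarity on a 0-10 scale; the llm() helper, the prompt wordings, and the threshold and iteration values are illustrative assumptions (mirroring the ranges discussed in the experiment plan) rather than fixed parts of the method.

# Illustrative sketch only: llm() is a hypothetical stand-in for an LLM API call.
def llm(prompt: str) -> str:
    raise NotImplementedError("placeholder for an LLM completion call")

def sdm_step(question: str, partial_solution: str, threshold: int = 6, max_iters: int = 3) -> str:
    # Extract key semantic concepts from the original input.
    concepts = llm("Extract key concepts from the original question:\n" + question)
    # Generate a candidate next step.
    step = llm("Problem: {}\nSo far: {}\nGenerate the next step in solving this problem:".format(question, partial_solution))
    for _ in range(max_iters):
        # Ask the model to rate similarity between the candidate step and the extracted concepts.
        rating = llm("Concepts: {}\nStep: {}\nRate the semantic similarity between these concepts and the generated step on a scale of 0-10. Reply with a number:".format(concepts, step))
        try:
            score = int(rating.strip().split()[0])
        except (ValueError, IndexError):
            score = 0
        if score >= threshold:
            break
        # Regenerate the step with explicit instructions to incorporate the key concepts.
        step = llm("Problem: {}\nSo far: {}\nRegenerate the step, focusing more on these key concepts: {}".format(question, partial_solution, concepts))
    return step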
Semantic Divergence Minimization: Reducing Hallucinations in Large Language Models through Iterative Concept Grounding (Part 2)
4. Step-by-Step Experiment Plan:
1. Dataset Preparation:
• Use two datasets: HotpotQA for multi-hop reasoning and GSM8K for complex math
word problems.
• For HotpotQA, utilize the dev set (7,405 questions).
• For GSM8K, employ the test set (1,319 problems).
2. Baseline Implementation:
• Implement two baselines:
– Standard prompting: directly asking the model to answer the question.
– Chain-of-thought (CoT) prompting: asking the model to show its work step-by-step
before giving the final answer.
3. SDM Implementation:
• Implement the SDM method with the following sub-steps for each reasoning iteration:
– Generate next step.
– Extract key concepts from input.
– Measure semantic similarity.
– Regenerate if below threshold.
– Repeat until convergence or maximum iterations.
4. Prompt Engineering:
• Design prompts for each step of SDM. For example:
– "Generate the next step in solving this problem:"
– "Extract key concepts from the original question:"
– "Rate the semantic similarity between these concepts and the generated step on a
scale of 0-10:"
– "Regenerate the step, focusing more on these key concepts:"
5. Hyperparameter Tuning:
• Experiment with different similarity thresholds (e.g., 5, 6, 7 out of 10) and maximum
iteration limits (e.g., 3, 5, 7) to find the optimal balance between performance and
computational cost.
6. Model Selection:
• Use GPT-4 as the primary model due to its advanced reasoning capabilities.
• Test GPT-3.5-turbo for comparison.
7. Evaluation:
• For HotpotQA, use the official evaluation script to compute Exact Match (EM) and F1
scores.
• For GSM8K, use accuracy as the metric.
• Manually review a subset of responses to assess qualitative improvements in reasoning
and reduction of hallucinations.
8. Analysis:
• Compare SDM results with baselines.
• Analyze the correlation between semantic similarity scores and answer correctness.
• Examine cases where SDM significantly improves or fails to improve over baselines.
Semantic Divergence Minimization: Reducing Hallucinations in Large Language Models through Iterative Concept Grounding (Part 3)
5. Test Case Examples:
• Example 1 (HotpotQA): Input: Who was born first, Alec Guinness or Billie Holiday?
Baseline Output: Alec Guinness was born first. He was born on April 2, 1914, while Billie
Holiday was born on April 7, 1915.
SDM Output: The SDM method provides a more structured and transparent reasoning process,
reducing the risk of hallucination by grounding each step in the key concepts (birth dates)
from the original question.
• Example 2 (GSM8K): Input: Janet’s ducks lay 16 eggs per day. She eats three for breakfast
every morning and sells the rest at the farmers’ market daily for $2 per egg. How much money
does she make every week?
Baseline Output: Let’s solve this step by step: Janet makes $182 every week.
SDM Output: The SDM method not only provides a clear step-by-step solution but also
explicitly verifies that all key concepts from the original question are incorporated, reducing
the risk of introducing irrelevant information or hallucinating facts.
6. Fallback Plan: If the proposed SDM method does not significantly outperform baselines, we can
pivot the project in several ways. First, we could conduct an in-depth analysis of where and why SDM
fails, potentially uncovering insights about LLM reasoning processes. We might find that SDM works
better for certain types of questions or reasoning tasks, which could lead to a more nuanced application
of the method. Second, we could explore variations of SDM, such as using different prompts for
concept extraction or similarity measurement, or incorporating a dynamic threshold that adjusts based
on the complexity of the question. Third, we could combine SDM with other prompting techniques like
chain-of-thought or self-consistency to create a hybrid approach. Finally, if the semantic grounding
aspect proves challenging, we could shift focus to analyzing how LLMs interpret and maintain semantic
consistency throughout multi-step reasoning, which could provide valuable insights for future work on
reducing hallucinations.
Reviewer 1
Novelty: 8 (clearly novel - major differences from all existing ideas)
Rationale: The use of semantic similarity to constrain CoT-styled generation is very new. I have not
seen similar work on it.
Feasibility: 5 (Moderately feasible: It can probably be executed within the given time frame but would
require careful planning, efficient use of APIs or some advanced computational strategies to overcome
the limited GPU resources, and would require some modifications to the original proposal to make it
work.)
Rationale: The pipeline is feasible to me. The major challenge would be finding the similarity threshold
for each dataset.
Expected Effectiveness: 3 (Low Effectiveness: The idea might work in some special scenarios but you
don’t expect it to work in general.)
Rationale: I see some drawbacks in this pipeline. First, manually tuning the similarity threshold is not the best practice for scalable applications. Second, the GSM8K dataset contains fairly elementary math problems; in that case the semantic similarity threshold would have to be set very high, since the basic math concepts in the prompt and in the CoT breakdown would be judged highly similar by most existing embedding methods. This raises the question of whether the similarity threshold is non-trivial at all for some tasks.
Excitement: 6 (Leaning positive: exciting enough to be accepted at a major AI conference, but still
has some weaknesses or somewhat incremental)
Rationale: Constraining CoT breakdowns is a novel idea and deserves more work and exploration. While the use of semantic similarity has many drawbacks (threshold tuning, task sensitivity, limited scalability), it can still yield some valuable results about constraining CoT breakdowns.
Overall Score: 5 (Decent idea but has some weaknesses or not exciting enough, marginally below the
acceptance threshold of major AI conferences)
Rationale: There are some clear drawbacks inherent to the method, as discussed earlier. If the authors can overcome these limitations, this idea could yield some interesting findings useful for our understanding of CoT behavior and could pass above a major conference threshold.
Confidence: 3 (You are fairly confident that the evaluation is correct)
Reviewer 2
Novelty: 4
Rationale: Generally, this method is a form of rejection sampling to improve factuality. It is not very different from prior literature on constrained decoding for improving factuality, e.g., "Constrained Abstractive Summarization: Preserving Factual Consistency with Constrained Generation" and "Don't Say What You Don't Know: Improving the Consistency of Abstractive Summarization by Constraining Beam Search".
Feasibility: 9
Rationale: Simple prompting approach that is easy to implement. Evaluation is simple.
Expected Effectiveness: 3 (Low Effectiveness: The idea might work in some special scenarios but you
don’t expect it to work in general.)
Rationale: (1) Most LLMs currently hallucinate in a subtle way: they say things that are semantically correct or reasonable, but the precise fact is wrong, so using semantic similarity to gauge or control hallucination may not solve the problem. (2) The rejection sampling is based on another LLM; what if that LLM also hallucinates?
Excitement: 3 (Mediocre: this idea makes marginal contributions and is very incremental)
Rationale: The method is not that novel, and I think it is not that effective and might not solve the problem at all.
Overall Score: 3 (Clear rejection for major AI conferences)
Rationale: The experiment design is kind of simple and the evaluation is not comprehensive. I think
the idea is in the range of 4 but the experiment plan further reduces my score.
Confidence: 5 (You are absolutely certain that the evaluation is correct and very familiar with the
relevant literature)
Reviewer 3
Novelty: 3 (mostly not novel - you can find very similar ideas)
Rationale: The idea of extracting key semantic concepts, measuring the relevance of the candidate next
step, and possibly rejecting/revising the step is very similar to incorporating self-critique into multi-step
reasoning problems. Different versions of this are already commonly used, especially for solving math
problems.
Feasibility: 8 (Highly Feasible: Straightforward to implement the idea and run all the experiments.)
Rationale: The proposed approach should be straightforward to implement: it only requires prompt
engineering to extract semantic concepts and evaluate the relevance of a candidate next step.
Expected Effectiveness: 3 (Low Effectiveness: The idea might work in some special scenarios but you
don’t expect it to work in general.)
Rationale: Compared to chain-of-thought prompting, there’s a reasonable chance this method could
work better: it could help identify when a reasoning step becomes irrelevant to the original question.
However, since such self-critique methods have already been explored, it’s unlikely that this instantiation
will work significantly better than previous ones. Also, the proposed idea of extracting relevant semantic
concepts and measuring semantic similarity seems a bit vague, and it’s not reflected in the provided
examples.
Excitement: 2
Rationale: The proposed method is too similar to existing works; it doesn’t contain novel insights that
would meaningfully boost current LM performance or introduce new ideas worth building on. It would
not be an exciting paper.
Overall Score: 2 (Strong rejection for major AI conferences)
Rationale: Similar to the reasoning above: the proposal is too similar to existing works, it doesn’t
introduce new ideas or insights, and is unlikely to meaningfully improve current LM performance.
Confidence: 4 (You are confident but not absolutely certain that the evaluation is correct)
A.26 EXAMPLE IDEA: AUTOPROMPTING: GENERATE DIVERSE FEW-SHOT EXAMPLES FOR
ANY APPLICATION
Autoprompting: Generate Diverse Few-Shot Examples for Any Application (Part 1)
1. Problem Statement: Adding natural language capabilities to existing software requires manually
crafting few-shot prompts, which is tedious and does not guarantee high coverage.
2. Motivation: Integrating natural language capabilities into software applications often necessi-
tates manually creating few-shot prompts, a process that is time-consuming and may not ensure
comprehensive coverage. An "Autoprompting" system capable of automatically generating diverse
and relevant few-shot examples tailored to specific applications would significantly reduce manual
effort, improve coverage and versatility, and enable rapid prototyping and iteration of natural language
capabilities. Large Language Models can iteratively test different functionalities of an application
and make adjustments to few-shot prompts akin to a human developer. This approach would ulti-
mately democratize the integration of such capabilities across a wide range of applications and industries.
3. Proposed Method: This method leverages a Large Language Model (LLM) with coding capabilities.
It involves the following core steps:
1. Extract all user-facing functions and gather their documentation and unit tests, if available.
2. Generate diverse natural language prompts to utilize each function, defining the expected
output.
3. Generate code from the natural language prompts and execute the corresponding functions.
4. If the code fails:
• Update the code and retry
• If the code runs but produces an incorrect result, update it using insights from unit tests
or general reasoning.
5. Once you have a few exemplar prompts for all (or desired) functions, generate prompts that
compose multiple functions together and repeat step 4.
By iteratively refining code generation from natural language and leveraging available documentation
and tests, this process aims to create an LLM capable of correctly implementing functions based on
natural language instructions.
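For concreteness, a minimal sketch of the core loop for a Python target application is shown below; the ast-based extraction is one possible way to implement the function/documentation extraction step, and llm() and run_generated_code() are hypothetical placeholders for the LLM API and the sandboxed execution module, not parts of the proposal.

# Illustrative sketch only: llm() and run_generated_code() are hypothetical placeholders.
import ast

def llm(prompt: str) -> str:
    raise NotImplementedError("placeholder for an LLM completion call")

def run_generated_code(code: str) -> tuple[bool, str]:
    raise NotImplementedError("placeholder: execute in a sandbox and return (success, feedback)")

def extract_functions(source: str) -> list[dict]:
    # Collect user-facing functions together with their docstrings.
    tree = ast.parse(source)
    return [{"name": node.name, "doc": ast.get_docstring(node) or ""}
            for node in ast.walk(tree) if isinstance(node, ast.FunctionDef)]

def autoprompt(source: str, retries: int = 3) -> list[dict]:
    few_shot_examples = []
    for fn in extract_functions(source):
        # Generate a natural language request that exercises this function.
        utterance = llm("Write a natural language request that would require calling {}: {}".format(fn["name"], fn["doc"]))
        code = llm("Write Python code that fulfills this request: " + utterance)
        for _ in range(retries):
            ok, feedback = run_generated_code(code)
            if ok:
                # Store the verified (utterance, code) pair as a few-shot example.
                few_shot_examples.append({"utterance": utterance, "code": code})
                break
            # If the code fails, update it using the execution feedback and retry.
            code = llm("The code failed with: {}\nFix the code:\n{}".format(feedback, code))
    return few_shot_examples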
4. Step-by-Step Experiment Plan:
• Applications: When collecting applications from GitHub, prioritize those with clear, well-
written documentation and comprehensive test suites. Include applications from different
domains and with varying levels of complexity to ensure a diverse dataset.
• Few shots and feasibility: Create manual few-shot examples to understand the complexity
of the functions and the quality of the documentation. Begin by creating 4-5 examples for
any function, which could also serve as a starting point for the LLM.
• Extract functions and metadata: Utilize static code analysis tools to ensure accurate and
comprehensive extraction of functions, documentation, and test cases. Consider extracting
additional metadata, such as function signatures, dependencies, and comments, as they can
provide valuable context.
• NL Module: Generate diverse user utterances and incorporate techniques to handle variations
in natural language. For each user utterance, generate the expected outcome. Consider
generating negative test cases to improve the model’s ability to handle invalid or ambiguous
inputs.
• Execution Module: Incorporate sandboxing or containerization techniques to ensure a secure
and isolated execution environment when executing the generated code. Implement logging
and reporting mechanisms to capture and analyze errors and unexpected behavior.
Autoprompting: Generate Diverse Few-Shot Examples for Any Application (Part 2)
4. Step-by-Step Experiment Plan (Continued):
• Exploration: Incorporate techniques such as code summarization, call graph analysis, and
type inference to provide more contextual information to the agent. Specifically, in any code
snippet, if there are other user-defined functions, retrieve their metadata and use it in the next
iteration of prompt generation.
• Store: Utilize a vector database or other structured storage mechanism that supports efficient
retrieval and querying for storing few-shot examples and their outputs. Incorporate mecha-
nisms for versioning and updating the stored data as the codebase and the underlying models
evolve.
• Experiments: Once few-shot examples for different functionalities and their compositions
are obtained, simulate different users with various intents and calculate goal completion and
error rates using different models. Initially, start with a strong model, and once few-shot
examples are available, test with weaker and open-source models.
5. Test Case Examples: Select a toy application from GitHub implemented in Python or JavaScript.
• Direct prompting: Provide the few-shot examples created and check the goal completion
and error rates for the following scenarios.
• Toy example: Calculator app and different utterances to try.
– Provide a complete user utterance with no ambiguity. For example:
* Can you add 4 to 8.
* Divide 6 by 9 and multiply it by 6.
– Provide a user utterance with some ambiguity. For example:
* Take 6 and 9, add them, and then subtract 8. Also, add 2 to the first one. – here the
"first" one is ambiguous as it could be 6 or the intermediate answer (6+9=15).
– Provide a user utterance that is not related to the function. For example:
* Please add A and J. The correct result would be refusing to answer instead of
generating add("A", "J").
6. Fallback Plan: If the proposed methodology does not yield satisfactory results, there are several
areas to investigate. First, examine the documentation to ensure it adequately explains the basic
functionality of each function. Then, assess the coding style to confirm it aligns with recommended
practices. Subsequently, evaluate each module separately. For the NL module, verify that the examples
are diverse and that the generated test cases are aligned. For the execution module, ensure that the
correct error messages are being passed and explore ways to enhance them. The exploration module is
the most challenging aspect; if any function has a high dependency on other functions, traversing it
will be difficult. Therefore, initially focus on examples with limited to no function dependency and
gradually increase the complexity.
Reviewer 1
Novelty: 4
Rationale: The proposed method is similar to https://arxiv.org/abs/2210.03493;
https://aclanthology.org/2023.findings-acl.216/
Feasibility: 6 (Feasible: Can be executed within the given constraints with some reasonable planning.)
Rationale: The experiments can be done with sufficient API access. The dataset collection needs some
planning but is in general feasible to do. Setting up the vector database may take extra time.
Expected Effectiveness: 5 (Somewhat ineffective: There might be some chance that the proposed idea
can work better than existing baselines but the improvement will be marginal or inconsistent.)
Rationale: The proposal is vague: it does not state the final evaluation metric and does not sufficiently describe the compared baseline. The prompt in the direct-prompting baseline is confusing to me as well. Overall, it is hard to assess the effectiveness.
Excitement: 4
Rationale: Given that the proposed method is vague, I am unsure about its contributions and effective-
ness, and therefore I feel less excited about it.
Overall Score: 4 (Ok but not good enough, rejection for major AI conferences)
Rationale: The descriptions are confusing, and I am not sure what the focus or contribution is. The title and problem statement mention ensuring "diversity"/"high coverage" as the goal, but the later subsections do not describe how this is ensured. The "Test Case Examples" section does not explain how the components in the "Step-by-Step Experiment Plan" are used.
Confidence: 3 (You are fairly confident that the evaluation is correct)
Reviewer 2
Novelty: 7
Rationale: Mapping natural language to custom applications is a hugely impactful capability, and doing
so automatically is really interesting. I like the focus on autoprompting for these types of translations, as
the task is feasible since it builds off some of the "few-shot prompting" that developers might normally
do to add NL functionality, with a more automatic process that has real system checks/verifications
(e.g., running the applications through containers). A related work from HCI tries to enable individual
developers to add such NL functionality to their own applications via a DSL + NL program signatures
(https://jackieyang.me/reactgenie/). This work is distinguished, as it would empower adding such NL
functionality to any application, without changing the code.
Feasibility: 4
Rationale: The project infrastructure seems more difficult than simply choosing some prompting methods. It would be an iterative process of choosing real example applications from GitHub and developing the few-shot prompts manually to get a feel for the task. Then, some of the modules (Execution Module, Exploration, Storage) seem like 1-2 week tasks each, which I estimate would make the project more like 3-4 months to complete all modules AND to do the evaluations.
Expected Effectiveness: 7
Rationale: The baseline here is a zero-shot prompt, asking the model to carry out the NL intent while feeding in all of the API documentation. Assuming the author is correct that such NL function mapping requires good and diverse few-shot examples, I expect the method to work well. It uses a number of external systems to enrich the code dataset, give the LLM context, and use system errors as feedback. So in some ways, Autoprompting lets an agent make use of all these SWE tools for understanding the software, which allows it to maximize its understanding and better retrieve good few-shot examples for the task at hand.
Excitement: 7
Rationale: Seems like an impactful and ambitious outcome if completed. I am curious how such an approach fits into the conversation about general agents, which can leverage API/tool/function calls. It is a little unclear from the toy example why existing function-calling models cannot already translate NL intents into such calls.
Overall Score: 6 (Marginally above the acceptance threshold of major AI conferences)
Rationale: The results would be really exciting, and the technical infrastructure to enable the Autoprompting agent would be impressive. However, I am missing which cases would be really difficult for other generalist web/system agents yet where finding the few-shot examples for this task is really needed. Thus, the core idea of the method does not seem clarified enough to yield a really clear takeaway about the method.
Confidence: 3 (You are fairly confident that the evaluation is correct)
A.27 EXAMPLE IDEA: TEMPORAL DEPENDENCY UNFOLDING: IMPROVING CODE
GENERATION FOR COMPLEX STATEFUL SYSTEMS
Temporal Dependency Unfolding: Improving Code Generation for Complex Stateful
Systems (Part 1)
1. Problem Statement: Generating code for complex, stateful systems or applications with intricate
temporal dependencies remains challenging for current code generation models. Most existing
approaches focus on generating individual functions or small code snippets without fully considering
the temporal aspects and state changes in larger systems. This limitation hinders the applicability
of AI-assisted programming in areas such as distributed systems, game development, and real-time
applications.
2. Motivation: Many real-world applications require careful management of state over time. Existing
code generation models struggle with capturing the full complexity of temporal dependencies and state
changes in larger systems. A method that can effectively reason about and generate code for systems
with complex temporal dependencies could significantly improve the applicability of AI-assisted
programming in critical areas. Our proposed Temporal Dependency Unfolding method is inspired
by how human developers approach complex system design, first identifying key states and their
relationships before implementing the detailed logic.
3. Proposed Method: We propose Temporal Dependency Unfolding, a novel prompting technique that
guides the model to generate code by explicitly reasoning about state changes and temporal relationships.
The method consists of five steps:
1. State Identification: Prompt the model to identify key states and variables that change over
time in the target system.
2. Temporal Graph Construction: Guide the model to create a conceptual graph of how these
states evolve and interact over time.
3. Staged Code Generation: Generate code in stages, focusing on different temporal slices or
state transitions in each stage.
4. Consistency Verification: After each stage, prompt the model to verify temporal consistency
and make necessary adjustments.
5. Integration: Finally, guide the model to integrate the stage-wise generated code into a cohesive
system, ensuring proper handling of all temporal dependencies.
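A minimal sketch of the five-step prompting loop is shown below; the llm() helper is a hypothetical placeholder, the prompt strings follow the wording given in the experiment plan, and iterating over the identified states line by line is one possible (assumed) way to realize the staged generation.

# Illustrative sketch only: llm() is a hypothetical stand-in for an LLM API call.
def llm(prompt: str) -> str:
    raise NotImplementedError("placeholder for an LLM completion call")

def temporal_dependency_unfolding(task: str) -> str:
    # Step 1: State Identification.
    states = llm(task + "\nIdentify the key states and variables that change over time in this system:")
    # Step 2: Temporal Graph Construction.
    graph = llm(task + "\nStates: " + states + "\nCreate a conceptual graph showing how the identified states evolve and interact over time:")
    # Steps 3-4: Staged Code Generation, with Consistency Verification after each stage.
    segments = []
    for transition in states.splitlines():
        code = llm(task + "\nTemporal graph: " + graph + "\nGenerate code for the following state/transition: " + transition)
        code = llm("Verify the temporal consistency of the generated code and suggest any necessary adjustments, then return the revised code:\n" + code)
        segments.append(code)
    # Step 5: Integration.
    return llm(task + "\nIntegrate the generated code segments into a cohesive system, ensuring proper handling of all temporal dependencies:\n" + "\n\n".join(segments))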
4. Step-by-Step Experiment Plan:
1. Dataset Preparation:
• Create a dataset of programming tasks that involve complex temporal dependencies.
• Include tasks from three domains: 1) Multi-threaded applications, 2) Game logic, and
3) Distributed systems.
• For each domain, prepare 50 task descriptions, each with a clear specification of the
desired functionality and temporal requirements.
2. Baseline Implementation:
• Implement two baseline methods:
– Direct prompting: Simply provide the task description to the model and ask it to
generate the code.
– Chain-of-Thought (CoT) prompting: Append ’Let’s approach this step-by-step:’ to
the task description.
• Use GPT-4 for both baselines.
Temporal Dependency Unfolding: Improving Code Generation for Complex Stateful
Systems (Part 2)
4. Step-by-Step Experiment Plan (Continued):
3. Temporal Dependency Unfolding Implementation:
• Implement our proposed method with the following sub-steps for each task:
(a) State Identification: Prompt GPT-4 with ’Identify the key states and variables that
change over time in this system:’.
(b) Temporal Graph Construction: Prompt with ’Create a conceptual graph showing
how the identified states evolve and interact over time:’.
(c) Staged Code Generation: For each major state or transition identified, prompt with
’Generate code for the following state/transition: [state/transition]’.
(d) Consistency Verification: After each stage, prompt with ’Verify the temporal con-
sistency of the generated code and suggest any necessary adjustments:’.
(e) Integration: Finally, prompt with ’Integrate the generated code segments into a
cohesive system, ensuring proper handling of all temporal dependencies:’.
4. Evaluation Metrics:
• Correctness: Percentage of generated code that passes predefined test cases.
• Temporal Consistency: Manual evaluation of how well the code handles temporal
dependencies (scale 1-5).
• Code Quality: Automated metrics like cyclomatic complexity and maintainability index.
• Execution Efficiency: Runtime performance on benchmark inputs.
5. Human Evaluation:
• Recruit 5 experienced developers to review a subset of 30 generated solutions (10 from
each domain).
• They will rate the code on a scale of 1-5 for readability, maintainability, and correct
handling of temporal dependencies.
6. Experiment Execution:
• For each task in the dataset:
(a) Generate solutions using both baseline methods and our Temporal Dependency
Unfolding method.
(b) Apply all evaluation metrics to the generated solutions.
(c) Collect human evaluations for the subset of solutions.
7. Analysis:
(a) Compare the performance of Temporal Dependency Unfolding against the baselines
across all metrics.
(b) Analyze the effectiveness of each step in our method (State Identification, Temporal
Graph Construction, etc.) by examining intermediate outputs.
(c) Identify patterns in tasks where our method shows significant improvement or underper-
forms.
(d) Correlate automated metrics with human evaluations to validate their reliability.
Temporal Dependency Unfolding: Improving Code Generation for Complex Stateful
Systems (Part 3)
5. Test Case Examples:
• Test Case 1:
– Baseline Prompt Input (Direct Prompting): Generate Python code for a simple multi-
threaded producer-consumer system with a shared buffer. The producer should generate
random numbers and add them to the buffer, while the consumer should remove and
process these numbers. Implement proper synchronization to avoid race conditions.
– Baseline Prompt Expected Output (Direct Prompting): [Python code for a simple
producer-consumer system]
– Proposed Prompt Input (Temporal Dependency Unfolding; Step 1: State Identification):
For a multi-threaded producer-consumer system with a shared buffer, identify the key
states and variables that change over time in this system:
– Proposed Prompt Expected Output (Temporal Dependency Unfolding; Step 1: State
Identification): [List of key states and variables]
– Proposed Prompt Input (Temporal Dependency Unfolding; Step 2: Temporal Graph
Construction): Create a conceptual graph showing how the identified states evolve and
interact over time for the producer-consumer system:
– Proposed Prompt Output (Temporal Dependency Unfolding; Step 2: Temporal Graph
Construction): [Conceptual graph of state evolution and interactions]
– Proposed Prompt Input (Temporal Dependency Unfolding; Step 3: Staged Code Gener-
ation): Generate code for the producer functionality in the producer-consumer system,
focusing on its interaction with the buffer and synchronization mechanisms:
– Proposed Prompt Output (Temporal Dependency Unfolding; Step 3: Staged Code
Generation): [Python code for producer functionality]
– Proposed Prompt Input (Temporal Dependency Unfolding; Step 4: Consistency Verifi-
cation): Verify the temporal consistency of the generated producer code and suggest
any necessary adjustments:
– Proposed Prompt Output (Temporal Dependency Unfolding; Step 4: Consistency Verifi-
cation): [Verification and adjustment suggestions]
– Proposed Prompt Input (Temporal Dependency Unfolding; Step 5: Integration): Inte-
grate the generated producer code with a consumer and main control logic to create a
complete producer-consumer system, ensuring proper handling of all temporal depen-
dencies:
– Proposed Prompt Output (Temporal Dependency Unfolding; Step 5: Integration):
[Complete Python code for producer-consumer system]
– Explanation: The Temporal Dependency Unfolding method produces a more compre-
hensive and robust solution compared to the baseline. It explicitly handles temporal
dependencies, includes proper synchronization, and provides mechanisms for graceful
termination. The staged approach allows for better handling of edge cases and improved
overall system design.
6. Fallback Plan: If the Temporal Dependency Unfolding method does not show significant im-
provement over the baselines, we can pivot the project in several ways. First, we could conduct an
in-depth analysis of where and why the method fails, which could provide valuable insights into the
limitations of current language models in handling temporal reasoning tasks. This analysis could involve
examining the intermediate outputs (state identification, temporal graphs) to understand where the
reasoning breaks down. Second, we could explore combining our method with other techniques, such
as retrieval-augmented generation, to see if providing relevant examples improves performance. Third,
we could focus on developing a new evaluation framework specifically designed to assess temporal
reasoning in code generation, which could be a valuable contribution to the field even if our primary
method doesn’t outperform baselines. Lastly, we could investigate whether the method performs better
on certain types of temporal dependencies or specific programming domains, which could lead to a
more targeted approach for improving code generation in those areas.
Reviewer 1
Novelty: 6 (reasonably novel - there are some notable differences from existing ideas and probably
enough to turn into a new paper)
Rationale: The construction of Temporal Graph sounds novel. The research question is also relatively
underexplored, but necessary for coding in domains like distributed systems.
Feasibility: 6 (Feasible: Can be executed within the given constraints with some reasonable planning.)
Rationale: Data collection should be the most challenging part: collecting high-quality coding problems that involve complex temporal dependencies could be hard. The human evaluation might also take time to execute.
Expected Effectiveness: 6 (Somewhat effective: There is a decent chance that the proposed idea can
beat existing baselines by moderate margins on a few benchmarks.)
Rationale: With specific prompting techniques, the proposed method should outperform baselines in
terms of temporal dependencies.
Excitement: 7
Rationale: I think this should be more exciting than most of the borderline papers since we are working
on a new problem. The collected data should also be super useful.
Overall Score: 7 (Good idea, would be accepted by major AI conferences)
Rationale: Again, working on a novel problem makes it better than most of the prompting papers.
Confidence: 4 (You are confident but not absolutely certain that the evaluation is correct)
Reviewer 2
Novelty: 5 (somewhat novel - there are differences from existing ideas but not enough to turn into a
new paper)
Rationale: Although I am not entirely familiar with the field of generating temporally adaptive
programs, I suspect some similar ideas can be found in software engineering works (e.g., ICSE). More
concretely on the method, it is rather similar to code generation with intermediate state reasoning, which
has been explored in several multi-step, conversational code generation works, e.g:
1. Zheng, Tianyu, et al. "Opencodeinterpreter: Integrating code generation with execution and
refinement."
2. Cao, Liuwen, et al. "Beyond Code: Evaluate Thought Steps for Complex Code Generation."
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language
Resources and Evaluation (LREC-COLING 2024). 2024.
3. Nijkamp, Erik, et al. "Codegen: An open large language model for code with multi-turn program
synthesis."
Feasibility: 3 (Very challenging: there are flaws in the proposed method or experiments, or the
experiments require compute/human resources beyond any academic lab)
Rationale: It would be quite hard to collect such datasets (e.g., each example would mostly require a whole repository); further, it would be difficult to generate executable test cases to verify the problems created. Especially because the task targets temporally dependent modules in a program, it may require domain experts to carefully construct examples and tests, which would demand a lot of time and cost.
Expected Effectiveness: 5 (Somewhat ineffective: There might be some chance that the proposed idea
can work better than existing baselines but the improvement will be marginal or inconsistent.)
Rationale: I am not very confident that the model can solve these complex, temporally dependent programming problems with reasonable correctness. Furthermore, the current method is basically prompting, which may have a very low performance upper bound. Therefore, I do not expect the proposed method to improve code generation significantly.
Excitement: 4
Rationale: Overall, I do not expect this method to bring substantial improvements, and hence am less excited about its potential. It is still an interesting problem to work on, particularly in contributing more challenging coding problems and corresponding methods. That said, given the current performance of models, building a solid benchmark for this temporal code generation problem may be more exciting than proposing a method that is not expected to work.
Overall Score: 4 (Ok but not good enough, rejection for major AI conferences)
Rationale: Temporal code generation is not the most urgent issue for current code generation models, and the proposed method is not expected to bring much improvement. The method needs to be refined further and go beyond simple prompting to convince the audience of the potential of this line of methods.
Confidence: 3 (You are fairly confident that the evaluation is correct)
Reviewer 3
Novelty: 10 (very novel - very different from all existing ideas in a very interesting and clever way)
Rationale: This idea studies a very novel problem in LLM-based code generation. Temporal dependen-
cies in code generation should be specifically studied in the era of LLMs.
Feasibility: 5 (Moderately feasible: It can probably be executed within the given time frame but would
require careful planning, efficient use of APIs or some advanced computational strategies to overcome
the limited GPU resources, and would require some modifications to the original proposal to make it
work.)
Rationale: Constructing a reasonable dataset is challenging within a short time frame, and the human evaluation might take additional time. Whether an LLM can construct high-quality graphs in this setting also remains to be examined.
Expected Effectiveness: 6 (Somewhat effective: There is a decent chance that the proposed idea can
beat existing baselines by moderate margins on a few benchmarks.)
Rationale: One needs to build reasonable metrics to show effectiveness. Also, one might need to tune
prompts carefully to construct high-quality graphs in this case.
Excitement: 8 (Exciting: would deepen the community’s understanding or make major progress in this
research direction)
Rationale: This is novel and could have a huge impact on those code generation cases requiring
temporal dependencies. But one needs to justify why such use cases are important, and why temporal
dependency is the core problem in such use cases.
Overall Score: 9 (Top 15% of all published ideas on this topic at major AI conferences, strong accept)
Rationale: Considering its novelty, valuable dataset, and comprehensiveness of experiment and
evaluation design, this could be an impactful work. But one needs to make experiment results concrete
by re-examining whether each step works well in practice.
Confidence: 4 (You are confident but not absolutely certain that the evaluation is correct)
A.28 IDENTITIES OF EXAMPLE IDEAS
We reveal whether each example idea is AI-generated or human-written:
• Human ideas: Example A.20, Example A.22, Example A.24, Example A.26
• AI ideas: Example A.21, Example A.23, Example A.25, Example A.27
A.29 ABLATIONS OF IDEA GENERATION
In this section, we investigate several factors that influence the performance of our retrieval-augmented
generation (RAG) approach for idea generation. We ablate:
1. The maximum number of papers to retrieve (N).
2. The number of papers (k) to include in the prompt.
3. Different base models and prompt setups for idea generation.
To ablate the impact of the maximum number of papers to retrieve (N), we vary N among {30, 60, 120, 180} and measure the average relevance score of the top 20 retrieved papers (on a scale
of 1–10), as judged by an LLM. The results in Table 17 show that N = 120 consistently achieves
strong performance and that the relevance of the top retrieved papers tends to plateau beyond this
point.
Table 17: Ablation of the maximum number of papers to retrieve (N). We report the average relevance score of the top 20 retrieved papers (on a scale of 1–10) for two topics (Multilingual and Uncertainty) as judged by an LLM.

N      Multilingual   Uncertainty
30     6.80           6.75
60     7.05           7.50
120    8.10           8.45
180    8.10           8.40
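For reference, the LLM-judged relevance score reported in Table 17 can be computed by prompting the model to rate each retrieved paper against the topic. The sketch below reuses the call_api helper from the execution-agent template in Appendix A.30; the prompt wording, the 1-10 rubric, and the paper fields are illustrative assumptions rather than the exact setup used.

# Illustrative sketch of LLM-judged relevance scoring; prompt wording and fields are assumptions.
from utils import call_api

def average_relevance(client, model_name, seed, topic, papers, top_k=20):
    scores = []
    for paper in papers[:top_k]:
        prompt = "On a scale of 1-10, rate how relevant the following paper is to the topic '{}'. Reply with a single integer.\n\nTitle: {}\nAbstract: {}".format(topic, paper["title"], paper["abstract"])
        prompt_messages = [{"role": "user", "content": prompt}]
        response, _ = call_api(client, model_name, prompt_messages, temperature=0., max_tokens=2, seed=seed, json_output=False)
        try:
            scores.append(int(response.strip()))
        except ValueError:
            scores.append(1)  # fall back to the lowest score on unparseable output
    return sum(scores) / len(scores) if scores else 0.0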
We additionally ablate the impact of the number of papers (k) to add to the prompt for RAG by
varying k in {0, 5, 10, 20}, and measure how it affects the diversity of generated ideas for the topic
of uncertainty prompting. In Table 18, the “Non-Duplicates (%)” metric reflects the proportion of
unique ideas generated. We observe that varying k has minimal impact on the diversity.
Table 18: Ablation of the number of retrieved papers k added to the prompt, showing the diversity of
generated ideas on the uncertainty prompting topic. “Non-Duplicates (%)” indicates the proportion
of unique ideas.
k      Non-Duplicates (%)
0      18.8
5      18.4
10     19.1
20     19.4
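As a rough sketch of how the "Non-Duplicates (%)" metric can be computed, one option is to flag an idea as a duplicate when its sentence-embedding cosine similarity to any previously kept idea exceeds a threshold; the encoder name and the 0.8 threshold below are illustrative assumptions, not necessarily the exact setup used.

# Hedged sketch of a non-duplicate rate based on sentence-embedding similarity;
# the encoder and threshold are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

def non_duplicate_rate(ideas, threshold=0.8):
    if not ideas:
        return 0.0
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(ideas, convert_to_tensor=True, normalize_embeddings=True)
    kept = []
    for i in range(len(ideas)):
        # An idea counts as a duplicate if it is too similar to any idea kept so far.
        if all(util.cos_sim(embeddings[i], embeddings[j]).item() < threshold for j in kept):
            kept.append(i)
    return 100.0 * len(kept) / len(ideas)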
To examine whether the diversity issue exists across different base models and prompt configurations,
we measure the idea diversity of multiple other models and prompt setups. We use a temperature of
1.0 for all the generations and compute the percentage of non-duplicate ideas out of 2K generated
ideas on the topic of uncertainty prompting. Table 19 compares four different base models.
Table 19: Comparison of different base models regarding idea diversity on the topic of uncertainty
prompting. We measure the percentage of non-duplicate ideas out of 2K generated ideas using a
temperature of 1.0.
Base Model                 Non-Duplicates (%)
Claude-3.5-Sonnet          19.1
GPT-4o                     59.5
o1-mini                    22.6
Llama-3.1-405B-Instruct    51.1
We find that different models have very different non-duplicate rates, with GPT-4o and Llama-3.1-
405B-Instruct showing substantially higher diversity than Claude-3.5-Sonnet and o1-mini. However, we chose Claude-3.5-Sonnet as the base model of our agent because its generated ideas were of higher quality than those of the other models in our pilot study. Specifically, we randomly sampled 10 ideas from
Claude-3.5-Sonnet and GPT-4o for a round of pilot expert scoring and found that Claude-3.5-Sonnet
achieved an average score of 5.4 (on a 1–10 scale), whereas GPT-4o scored 4.8.
Next, we ablate several different prompt setups for idea generation using the Claude-3.5-Sonnet
backbone. Specifically, we analyze the impact of applying RAG and the impact of appending
previously generated ideas for deduplication.
Table 20 shows that appending previously generated ideas in the prompt and instructing the model to
avoid repetition significantly reduces idea duplication. However, including retrieved papers in the
prompt has minimal impact on diversity.
Table 20: Comparison of different prompt setups for idea generation with the Claude-3.5-Sonnet
backbone. We report the percentage of non-duplicate ideas out of 2K generated ideas on the topic of
uncertainty prompting.
Prompt Setup        Non-Duplicates (%)
No RAG; No prev     7.6
No RAG; Prev        18.8
RAG (k=5); Prev     18.4
RAG (k=10); Prev    19.1
RAG (k=20); Prev    19.4
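For clarity, the prompt setups compared in Table 20 differ only in what extra context is appended to the generation prompt; a minimal sketch is shown below, where the exact wording is an assumption rather than the prompt actually used.

# Illustrative sketch of the prompt setups compared in Table 20; the wording is an assumption.
def build_generation_prompt(topic, retrieved_papers=None, previous_ideas=None):
    prompt = "Generate a novel research idea on the topic of {}.\n".format(topic)
    if retrieved_papers:
        # RAG setup: include the k retrieved papers as grounding context.
        prompt += "Related papers:\n" + "\n".join(retrieved_papers) + "\n"
    if previous_ideas:
        # "Prev" setup: list previously generated ideas and instruct the model to avoid repeats.
        prompt += "You have already generated the following ideas; do not repeat them:\n" + "\n".join(previous_ideas) + "\n"
    return prompt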
A.30 ATTEMPT ON IDEA EXECUTION AGENT
For our execution agent, the input is the generated idea (the full project proposal), and the output is a Python file that can be executed with our specified command. Since prompting-based research ideas often follow a common implementation pipeline, we provide a manually crafted code file as a template. We attach the full template below:
import random
from tqdm import tqdm
from utils import call_api, load_model

random.seed(2024)

## Step 1: Generate synthetic test examples
def generate_testset():
    test_data = [
        {
            "input": "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?",
            "output": "Natalia sold 48/2 = <<48/2=24>>24 clips in May. Natalia sold 48+24 = <<48+24=72>>72 clips altogether in April and May. #### 72"
        },
        {
            "input": "Weng earns $12 an hour for babysitting. Yesterday, she just did 50 minutes of babysitting. How much did she earn?",
            "output": "Weng earns 12/60 = $<<12/60=0.2>>0.2 per minute. Working 50 minutes, she earned 0.2 x 50 = $<<0.2*50=10>>10. #### 10"
        },
        {
            "input": "Tim has 30 less apples than Martha, and Harry has half as many apples as Tim. If Martha has 68 apples, how many apples does Harry have?",
            "output": "Tim has 68-30 = <<68-30=38>>38 apples. Harry has 38/2 = <<38/2=19>>19 apples. #### 19"
        },
        {
            "input": "Four people lost a total of 103 kilograms of weight. The first person lost 27 kilograms. The second person lost 7 kilograms less than the first person. The two remaining people lost the same amount. How many kilograms did each of the last two people lose?",
            "output": "Second person = 27 - 7 = <<27-7=20>>20 kg 103 - 27 - 20 = <<103-27-20=56>>56 kg 56/2 = <<56/2=28>>28 kg The last two people each lost 28 kilograms of weight. #### 28"
        }
    ]
    return test_data
## Step 2: Implement the baseline method
def baseline_method(client, model_name, seed, question):
    ## zero-shot chain-of-thought
    prompt = "Answer the following question: {}\n".format(question)
    prompt += "Think step by step."
    prompt_messages = [{"role": "user", "content": prompt}]
    response, _ = call_api(client, model_name, prompt_messages, temperature=0., max_tokens=2000, seed=seed, json_output=False)
    return response.strip()
## Step 3: Implement the proposed method
def proposed_method(client, model_name, seed, question, print_all=False):
    intermediate_outputs = ""
    if print_all:
        print("question:\n", question)

    ## collaborative reasoning step 1: task decomposition
    prompt = "Please break down the following task into smaller sub-tasks or steps: {}".format(question)
    prompt_messages = [{"role": "user", "content": prompt}]
    decomposition, _ = call_api(client, model_name, prompt_messages, temperature=0., max_tokens=2000, seed=seed, json_output=False)
    intermediate_outputs += "task decomposition:\n" + decomposition + "\n"
    if print_all:
        print("decomposition:\n", decomposition)

    ## collaborative reasoning step 2: sub-task information generation
    prompt = "For each of the following sub-tasks, please generate relevant information or intermediate results: \n{}".format(decomposition)
    prompt_messages = [{"role": "user", "content": prompt}]
    intermediate, _ = call_api(client, model_name, prompt_messages, temperature=0., max_tokens=2000, seed=seed, json_output=False)
    intermediate_outputs += "sub-task results:\n" + intermediate + "\n"
    if print_all:
        print("intermediate:\n", intermediate)

    ## collaborative reasoning step 3: result combination
    prompt = "Given the following intermediate results: \n{}, please combine them to generate the final answer for the task: \n{}".format(intermediate, question)
    prompt_messages = [{"role": "user", "content": prompt}]
    answer, _ = call_api(client, model_name, prompt_messages, temperature=0., max_tokens=2000, seed=seed, json_output=False)
    intermediate_outputs += "result combination:\n" + answer + "\n"
    if print_all:
        print("initial answer:\n", answer)

    ## collaborative reasoning step 4: reflection and refinement
    prompt = "Given the task: {}\nPlease reflect on the generated answer:\n{}.\n\nAre there any gaps or inconsistencies in the answer? If so, please identify and address them and give me an improved answer. If not, you don't have to edit anything and can just return the original answer.\n".format(question, answer)
    prompt_messages = [{"role": "user", "content": prompt}]
    final_answer, _ = call_api(client, model_name, prompt_messages, temperature=0., max_tokens=2000, seed=seed, json_output=False)
    intermediate_outputs += "reflection and refinement:\n" + final_answer
    if print_all:
        print("final answer:\n", final_answer)

    return final_answer.strip(), intermediate_outputs
87
Published as a conference paper at ICLR 2025
83 ## Step 4: Define the style evaluator
84 def style_evaluator(client, model_name, seed, question,
baseline_prediction, proposed_prediction):
## define all the components that the proposed method outputs
should have
## and the advantages of the proposed method over the baseline
method
## just need to check the style is correct
prompt = "Given the task: {}\n".format(question)
prompt += "The baseline method produced the following
output:\n{}\n\n".format(baseline_prediction)
prompt += "The proposed new method produced the following
output:\n{}\n\n".format(proposed_prediction)
prompt += "Now determine if the proposed method is better by
checking if it has satisfied the following criteria:\n"
prompt += "1. The proposed method’s output should produce all the
intermediate components including: task decomposition,
sub-task information generation, result combination, and
reflection and refinement.\n"
prompt += "2. The proposed method should provide a more detailed
and comprehensive answer than the baseline method.\n"
prompt += "Just tell me ’yes’ or ’no’ for whether the criteria
are met, nothing else is needed."
prompt_messages = [{"role": "user", "content": prompt}]
response, _ = call_api(client, model_name, prompt_messages,
temperature=0., max_tokens=1, seed=seed, json_output=False)
judgment = False
if response.strip().lower() == "yes":
return True
return judgment
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105 ## Step 5: Define the output evaluator
106 def output_evaluator(client, model_name, seed, question, gold_label,
prediction):
## check if the prediction is correct given the gold label
prompt = "Given the following question and reference answer,
determine if the prediction is correct. Just tell me ’yes’ or
’no’, nothing else is needed.\n\nQuestion: {}\n\nReference
Answer: {}\n\nPrediction: {}\n\n".format(question,
gold_label, prediction)
prompt_messages = [{"role": "user", "content": prompt}]
response, _ = call_api(client, model_name, prompt_messages,
temperature=0., max_tokens=1, seed=seed, json_output=False)
judgment = False
if response.strip().lower() == "yes":
return True
return judgment
107
108
109
110
111
112
113
114
115
116
117
118
119 ## Step 6: Define the function that runs the experiments to obtain
model predictions and performance
120 ## you shouldn’t need to modify this function in most cases
121 def run_experiment(client, model_name, seed, testset):
122
123
124
125
126
127
128
sample_size = len(testset)
baseline_predictions = []
proposed_predictions = []
baseline_correctness = []
proposed_correctness = []
88
Published as a conference paper at ICLR 2025
style_check = []
for i in tqdm(range(sample_size)):
question = testset[i]["input"].strip()
gold_label = testset[i]["output"].strip()
baseline_prediction = baseline_method(client, model_name,
seed, question)
proposed_prediction_final, proposed_prediction_intermediate =
proposed_method(client, model_name, seed, question)
baseline_predictions.append(baseline_prediction)
proposed_predictions.append(proposed_prediction_final)
baseline_correctness.append(output_evaluator(client,
model_name, seed, question, gold_label,
baseline_prediction))
proposed_correctness.append(output_evaluator(client,
model_name, seed, question, gold_label,
proposed_prediction_final))
style_check.append(style_evaluator(client, model_name, seed,
question, baseline_prediction,
proposed_prediction_intermediate))
return baseline_correctness, proposed_correctness, style_check
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148 ## Step 7: Execute the experiments and compare performance
149 if __name__ == "__main__":
150
testset = generate_testset()
print ("simulated {} test examples for
evaluation.".format(len(testset)))
151
152
153
154
155
156
157
158
159
160
161
162
model_name = "claude-3-opus-20240229"
seed = 2024
client = load_model(model_name)
print ("using model: ", model_name)
## output correctness
baseline_correctness, proposed_correctness, style_check =
run_experiment(client, model_name, seed, testset)
print ("baseline correctness: ", sum(baseline_correctness) /
len(baseline_correctness))
print ("proposed correctness: ", sum(proposed_correctness) /
len(proposed_correctness))
print ("style check pass rate: ", sum(style_check) /
len(style_check))
As seen above, we have defined two different evaluator functions. The style_evaluator()
checks whether all components mentioned in the proposed method are implemented, and the
output_evaluator() checks whether the model predictions are correct.
We provided this template along with instructions telling the execution agent to modify all necessary functions to implement the provided idea. We tested the agent on 30 safety prompting ideas and 30 factuality prompting ideas.
As shown in Table 21, over half of the generated code files can be successfully executed, and the
results indicate that only a handful of these proposed methods actually outperform the baselines.
While this shows some promise that the execution agent can automatically implement and identify effective methods among the generated ideas, a careful manual check reveals many pitfalls.
             Total   Executed   Passed Style Evaluator   Proposed Method Works Better
Safety          30         17                       13                              5
Factuality      30         18                       14                              1
Table 21: Success rate of our execution agent on 30 randomly sampled AI ideas each for safety and factuality.
For example, the agent implemented the following baseline for the task of identifying malicious and adversarial prompts: a keyword-based heuristic that any LLM-based method can trivially beat.
## Step 2: Implement the baseline method
def baseline_method(client, model_name, seed, prompt):
    # Keyword-based heuristic
    keywords = ["ignore", "pretend", "roleplay", "disregard", "override"]
    for keyword in keywords:
        if keyword in prompt.lower():
            return "Adversarial"
    return "Benign"
In another example, for the same task of detecting adversarial prompts, the agent implemented the
following evaluator function:
## Step 5: Define the output evaluator
def output_evaluator(client, model_name, seed, input_text, gold_label, prediction):
    prompt = "Given the following text and reference sentiment classification, determine if the predicted classification is correct. Just tell me 'yes' or 'no', nothing else is needed.\n\nText: {}\n\nReference: {}\n\nPrediction: {}\n\n".format(input_text, gold_label, prediction)
    prompt_messages = [{"role": "user", "content": prompt}]
    response, _ = call_api(client, model_name, prompt_messages, temperature=0., max_tokens=1, seed=seed, json_output=False)

    judgment = False
    if response.strip().lower() == "yes":
        return True
    return judgment
The agent is supposed to inject adversarial triggers into sentiment classification data to test whether the proposed method can detect those adversarial prompts while maintaining sentiment classification accuracy. However, the implemented evaluator only measures accuracy on the original sentiment classification task and never evaluates adversarial prompt detection.
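To illustrate the gap, a faithful evaluator for this idea would have to score adversarial-prompt detection separately from sentiment accuracy. The sketch below is our own illustration, not output of the execution agent; it assumes each synthetic test example additionally carries a gold "Adversarial"/"Benign" flag and reuses call_api from the template's utils module.

def evaluate_example(client, model_name, seed, input_text, gold_sentiment,
                     gold_adversarial_flag, predicted_sentiment, predicted_flag):
    ## detection correctness: exact match against the (assumed) gold adversarial flag
    detection_correct = predicted_flag.strip().lower() == gold_adversarial_flag.strip().lower()

    ## sentiment correctness: LLM judgment, as in the template's output_evaluator()
    prompt = ("Given the following text and reference sentiment classification, "
              "determine if the predicted classification is correct. Just tell me "
              "'yes' or 'no', nothing else is needed.\n\nText: {}\n\nReference: {}"
              "\n\nPrediction: {}\n\n").format(input_text, gold_sentiment, predicted_sentiment)
    prompt_messages = [{"role": "user", "content": prompt}]
    response, _ = call_api(client, model_name, prompt_messages,
                           temperature=0., max_tokens=1, seed=seed, json_output=False)
    sentiment_correct = response.strip().lower() == "yes"

    ## report both so that detection cannot be silently ignored
    return detection_correct, sentiment_correct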
Given these errors, we believe more work is needed to carefully verify the code implementations
produced by the execution agent rather than blindly trusting their executed results, and we leave such
attempts to future work.
GHJzxPgFa6 | Chain of Ideas: Revolutionizing Research in Idea Development with LLM Agents | [5, 8, 5, 5]

Under review as a conference paper at ICLR 2025
CHAIN OF IDEAS: REVOLUTIONIZING RESEARCH IN
NOVEL IDEA DEVELOPMENT WITH LLM AGENTS
Anonymous authors
Paper under double-blind review
ABSTRACT
Effective research ideation is a critical step for scientific research. However, the
exponential increase in scientific literature makes it challenging for researchers to
stay current with recent advances and identify meaningful research directions. Re-
cent developments in large language models (LLMs) suggest a promising avenue
for automating the generation of novel research ideas. However, existing methods
for idea generation either trivially prompt LLMs or directly expose LLMs to ex-
tensive literature without indicating useful information. Inspired by the research
process of human researchers, we propose a Chain-of-Ideas (CoI) agent, an LLM-
based agent that organizes relevant literature in a chain structure to effectively
mirror the progressive development in a research domain. This organization facil-
itates LLMs to capture the current advancements in research, thereby enhancing
their ideation capabilities. Furthermore, we propose Idea Arena, an evaluation
protocol that can comprehensively evaluate idea generation methods from dif-
ferent perspectives, aligning closely with the preferences of human researchers.
Experimental results indicate that the CoI agent consistently outperforms other methods and generates ideas whose quality is comparable to that of humans. Moreover, our CoI agent is budget-friendly, with a minimum cost of $0.50 to generate a candidate idea and its corresponding experimental design1.
1 INTRODUCTION
Idea generation is a crucial aspect of scientific research for driving technological innovations and
breakthroughs. Traditionally, this process has been predominantly human-driven, necessitating ex-
pert researchers to review extensive literature, identify limitations in existing solutions, and propose
new research directions. However, the complexity and vastness of scientific literature, coupled with
rapid technological advancements, have rendered this task increasingly challenging for researchers.
Recent advancements in large language models (LLMs) (Achiam et al., 2023; Dubey et al., 2024;
Yang et al., 2024a) have enabled these models to exceed human experts in various scientific tasks,
including mathematics (Yu et al., 2023), theorem proving (Yang et al., 2023), and coding (Chen
et al., 2021). Building on this robust scientific foundation, one may hypothesize that LLMs could
support a more abstract and creative research idea-generation task. Notably, Si et al. (2024); Kumar
et al. (2024) have validated this hypothesis, highlighting its substantial potential to expedite the
discovery of novel concepts and uncharted research avenues.
Existing methods seek to address two key challenges to improve the quality of generated ideas:
curating pertinent literature for LLMs to gain inspiration and ensuring the novelty of generated ideas.
To address the first challenge, previous research enhances traditional academic retrieval systems,
which typically depend on textual similarity, with academic knowledge graphs (Baek et al., 2024;
Wang et al., 2023). For the second challenge, existing approaches either apply predefined criteria
such as novelty to guide the idea generation process (Baek et al., 2024) or iteratively refine ideas
until they demonstrate low embedding similarities with existing papers (Wang et al., 2023).
However, in existing attempts, LLMs are presented with an extensive volume of research literature
when asked to generate ideas. This makes LLMs vulnerable to the influence of less relevant works,
1We will make our code and data publicly available
Figure 1: Comparison between the vanilla retrieval augmented generation (RAG) research agent and
our Chain-of-Ideas agent on the idea generation task.
potentially resulting in ideas that lack logical coherence and technological innovation. As shown in
the upper part of Figure 1, the LLM borrows an idea from GraphGPT (Tang et al., 2024) and applies it to the GoT framework (Besta et al., 2024) to generate what it interprets as a "novel idea". However, the resultant idea conflates two concepts: GoT is a prompting method, while GraphGPT is a fine-tuning method leveraging a graph neural network architecture (Zhou et al., 2020). In contrast, human
researchers often trace the evolution of a research field by analyzing its progression from founda-
tional works to the most recent advancements. This comprehensive perspective provides valuable
insights into the key factors driving developments within the domain. Such an understanding enables
researchers to critically assess the limitations of earlier studies while identifying emerging trends.
Therefore, they are better grounded in devising innovative and impactful research ideas.
Motivated by the human practices in conducting research, we introduce a novel Chain-of-Ideas (CoI)
agent framework to address the previously identified logical inconsistencies in the ideation processes
of LLMs. As shown in the bottom part of Figure 1, CoI agent aims to provide a clear landscape of
current research topics by systematically selecting and organizing the relevant papers and their ideas
in a chain structure. The CoI agent offers several distinctive advantages. First, it minimizes the risk of interference from less relevant literature by carefully selecting papers (e.g., from CoT (Wei et al., 2022) to GoT). Second, the chain shows the LLM how human researchers craft a novel idea in practice: for example, SC (Wang et al., 2022) emerges as a novel idea derived from CoT. This can be viewed as a form of few-shot prompting, which has been proven to enhance the overall LLM's generation capability (Brown et al., 2020). Third, CoI exemplifies a global progression in research
development. As a result, LLMs can gain a deep understanding of the motivations behind these
developmental trends, facilitating the identification of promising future research directions.
Specifically, the CoI agent first retrieves an anchor paper for the given research topic. Instead of indis-
criminately aggregating all papers within the citation network of the anchor, as done in (Baek et al.,
2024), we construct the CoI by selecting relevant and important literature from both the anchor’s
references and its subsequent works, thereby extending the chain backward and forward from the
anchor. We then prompt the constructed CoI to an LLM for idea generation and experiment design.
During idea generation, we require the LLM to predict possible future trends. This prognostic result
facilitates the gradual consolidation of the idea, beginning with the motivation for the proposed idea,
progressing through an assessment of its potential impact, and culminating in the realization. How-
ever, as the evolution of scientific discovery can emerge from multiple perspectives, a single CoI
may be insufficient to capture the most promising direction. Additionally, there is no guarantee that
the generated ideas will be novel. To address these issues, we construct multiple CoI branches for
different perspectives of a research topic. Additionally, a novelty-checker agent iteratively evaluates
the draft idea against existing literature and refines it if substantial similarity is identified.
We compare our CoI agent against existing baselines on idea generation in the artificial intelligence
(AI) field. To do this, we develop an arena-style evaluation framework called Idea Arena where
participant methods compete in pairs, which demonstrates high agreement with human evaluation.
Figure 2: Our proposed CoI agent framework. The process consists of three stages: (1) Construct
CoIs based on the retrieved literature; (2) Develop potential ideas based on the CoIs; and (3) Design
the corresponding experiments for the proposed idea.
The experimental results show that CoI agent consistently ranks first among all automated baselines,
surpassing the second-best one by 56 ELO scores in human evaluation. CoI agent can generate ideas
as novel as those of human experts. Our analysis further shows that for LLMs to generate novel
ideas, a clear developmental trend analysis is more pivotal than the quantity of related literature.
Our contributions are summarized as follows: 1) We propose the CoI agent to enhance LLMs’ ca-
pability in idea generation. CoI agent organizes relevant literature in a chain structure to effectively
mirror the progressive nature of research development, allowing LLMs to better grasp the current re-
search advancements. 2) We propose Idea Arena for a comprehensive evaluation of idea-generation
methods, which shows high agreement with human researchers. 3) Extensive experiments demon-
strate the effectiveness of our CoI agent in generating ideas that are comparable to human creativity.
2 METHOD
2.1 FRAMEWORK: CHAIN-OF-IDEAS AGENT
In this section, we detail our CoI agent framework, as illustrated in Figure 2, which consists of
three stages: (1) CoI Construction, (2) Idea Generation, and (3) Experiment Design. First, given a
research topic, the CoI agent constructs multiple CoIs from existing literature, reflecting different
trends within the domain. Then, for each CoI, the LLM predicts future research directions, and
crafts ideas through step-by-step consolidation and iterative novelty checks. The best idea is then
selected. Lastly, the LLM generates and refines an experiment design to implement the final idea.
2.2 COI CONSTRUCTION
Generating novel research ideas requires a profound comprehension of the respective research do-
main, coupled with a rigorous reasoning process. Previous endeavors (Lu et al., 2024; Baek et al.,
2024) have sought to augment LLMs with relevant papers to facilitate the ideation process. However,
these methods simply mix these papers into the prompt without effective organization. This scenario
is akin to dropping an LLM at a chaotic intersection with no map in sight, leaving it uncertain about
which path to take. To address this issue, we propose a Chain-of-Ideas agent framework.
As shown in Figure 2, a CoI, represented as {I−M → · · · → I0 → · · · → IN }, is a sequence consist-
ing of M + N + 1 ideas extracted from M + N + 1 research papers respectively, where they together
show the evolution progress within a given research field. Specifically, given an initial research
topic, we prompt the LLM to generate multiple queries, [q1, . . . , qK], that reflect K different per-
spectives of this topic. The prompt is given in Table 7 of Appendix. Unless otherwise specified, all
prompts of our framework are presented in the Appendix tables. The K queries are used to construct
K branches of CoI. This reduces the reliance on a single CoI that may be insufficient to capture the
most significant development and direction. For each query q_k, we use it to retrieve a top-ranked paper, which we call the anchor paper P^k_0. In Figure 2, ToT (Yao et al., 2024) is an illustrative example of an anchor paper. An anchor paper serves as the foundation for constructing a CoI. Specifically, a CoI is constructed by extending from the corresponding anchor paper to related papers in both directions: forward, tracing the progression of ideas, and backward, tracing their origins.
In the forward direction, starting from P^k_0, we identify subsequent papers that directly cite it by leveraging the Semantic Scholar API2. We use OpenAI's text-embedding-3-large3 to rank these papers based on their cosine similarities to the concatenation of the initial research topic and the abstract of the anchor paper. Subsequently, we select the highest-ranked paper as P^k_1 to extend the CoI in the forward direction (e.g. GoT in Figure 2). This process is repeated iteratively from P^k_i to P^k_{i+1}, until either the length of the CoI reaches a preset value or the LLM finds that there is no valuable follow-up work (Table 8).
In the backward direction, starting from the anchor paper P^k_0, we instruct an LLM to thoroughly review the full paper and to identify candidate references based on the following criteria: 1) references that P^k_0 directly built upon, 2) references that serve as baselines in P^k_0, and 3) references that tackle the same topic as P^k_0. With those candidate references, we ask the LLM to determine the most relevant one to the anchor paper (Tables 9 and 10), denoted as P^k_{-1} (e.g. SC in Figure 2), to extend the CoI backward. This backward extension is also carried out iteratively from P^k_{-i} to P^k_{-(i+1)} to identify preceding papers (e.g. tracing backward from SC to CoT in Figure 2). It terminates when the length of the CoI reaches a preset value or we encounter a milestone paper (defined as one with over 1,000 citations), indicating that the idea from the milestone paper could serve as a strong starting point for the CoI. Additionally, we instruct the LLM to terminate the search if no reference relevant to the original research topic is found (Table 8).
After we collect K paper chains, denoted as {P^k_{-M^k} → · · · → P^k_0 → · · · → P^k_{N^k}} for k = 1, . . . , K, we ask the LLM to extract ideas from these papers and inherit the progressive relation of the paper chains to form our CoIs {I^k_{-M^k} → · · · → I^k_0 → · · · → I^k_{N^k}} for k = 1, . . . , K (Tables 9 and 10). Then for each CoI, we ask the LLM to summarize the existing research trends by analyzing the evolution between any two adjacent ideas (Table 11). For example, the upper part of Figure 2 shows the evolution process from CoT to GoT step-by-step. Additionally, we extract experiment designs and the definition of key entities from these papers (Tables 9 and 10). The above information including CoIs and the derived knowledge will be used in the following idea generation and experiment design stages.
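To make the chain construction more concrete, the sketch below outlines the forward-extension loop in Python. It is a minimal illustration under stated assumptions: get_citing_papers, embed, and llm_judges_valuable are hypothetical helpers standing in for the Semantic Scholar API, the text-embedding-3-large endpoint, and the follow-up-work prompt (Table 8); they are not part of the paper's released code.

import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def extend_forward(topic, anchor_paper, max_len,
                   get_citing_papers, embed, llm_judges_valuable):
    """Extend a chain of papers forward from the anchor, one hop at a time.

    get_citing_papers(paper) -> list of papers citing `paper` (assumed wrapper
        around the Semantic Scholar API; papers are dicts with an "abstract" field).
    embed(text) -> embedding vector (assumed wrapper around text-embedding-3-large).
    llm_judges_valuable(topic, paper) -> bool (assumed wrapper around the
        follow-up-work prompt, Table 8).
    """
    chain = [anchor_paper]
    query_vec = embed(topic + " " + anchor_paper["abstract"])
    current = anchor_paper
    while len(chain) < max_len:
        candidates = get_citing_papers(current)
        if not candidates:
            break
        # Rank citing papers by similarity to (topic + anchor abstract).
        best = max(candidates, key=lambda p: cosine(query_vec, embed(p["abstract"])))
        if not llm_judges_valuable(topic, best):
            break  # the LLM finds no valuable follow-up work
        chain.append(best)
        current = best
    return chain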
2.3 IDEA GENERATION
In this section, we use the above-constructed CoIs and their developing trends to guide the generation
of a novel idea. For each generated CoI, the first step is to predict possible future trends. As shown
in the lower-left section of Figure 2, we prompt the LLM with the CoI, the developing trends of
existing works, and the key entities extracted from existing literature, as described in Sec. 2.2
(Tables 12 and 13). These entities comprise relevant datasets and potential baseline models, which
are important to clarify the concepts mentioned in the existing literature. After obtaining the future
trend, we continue to prompt the LLM to articulate its motivation, novelty, and methodology, and finally to consolidate the idea (Tables 14 and 15). Through this step-by-step process, CoI can produce a more detailed idea. Following previous practice (Wang et al., 2023; Lu et al., 2024), we also use a
novelty-check agent to evaluate candidate ideas. It retrieves relevant papers and prompts another
LLM to assess the similarity between the generated idea and the retrieved papers (Table 16). Based
on this assessment, our framework determines if another round of generation is necessary. Finally,
we pairwisely compare the generated ideas from all CoI branches and select the one with the highest
2https://www.semanticscholar.org/product/api
3https://openai.com/index/new-embedding-models-and-api-updates/
winning rate as the final idea for the experiment design. This pairwise comparison follows the same method as Idea Arena; refer to Sec. 3.4 for details.
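A minimal sketch of this per-chain ideation loop is given below. The helper callables llm and novelty_checker are assumptions standing in for the prompts in Tables 12-16; the exact prompt wording and the number of refinement rounds are illustrative.

def generate_idea_for_chain(coi, trends, entities, llm, novelty_checker, max_rounds=3):
    """Sketch of the per-chain ideation loop: predict a future trend,
    consolidate an idea, then refine until the novelty check passes."""
    feedback = ""
    idea = None
    for _ in range(max_rounds):
        # Step 1: predict a plausible future trend from the chain and its trends.
        future_trend = llm(
            "Given the chain of ideas:\n{}\nthe observed trends:\n{}\n"
            "and key entities:\n{}\nPredict a promising future direction. {}".format(
                coi, trends, entities, feedback))
        # Step 2: consolidate motivation, novelty, and method into a full idea.
        idea = llm(
            "Based on the predicted direction:\n{}\nWrite a full idea with "
            "motivation, novelty, and methodology.".format(future_trend))
        # Step 3: novelty check against retrieved literature; refine if too similar.
        is_novel, feedback = novelty_checker(idea)
        if is_novel:
            break
    return idea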
2.4 EXPERIMENT DESIGN
While our primary goal is to generate novel ideas, it is also useful to develop experimental plans that
help users implement these ideas. Thus, we extended the CoI agent to include experiment design.
As shown in the lower-right of Figure 2, we prompt the LLM with experiments from existing works
obtained from Sec. 2.2 as few-shot examples, along with the proposed idea and key entities, to guide
the LLM in designing experiments for our ideas (Table 17).
We also employ a review agent to assess the candidate experiment designs.
Its main role is to
evaluate the clarity and comprehensiveness of the protocol, ensuring all key elements—such as
datasets and models—are clearly specified. Additionally, it checks if the design provides enough
detail for practical implementation (Table 18). The review agent provides critical feedback on these
aspects, subsequently utilizing this information to conduct further searches for relevant literature
(Table 19) to help the LLM refine and enhance its previous experiment design (Table 20). Through
this iterative process of review and refinement, we arrive at a final experiment design.
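The design stage can be sketched in the same style; again, every callable here (llm, review_agent, search_literature) is an assumed stand-in for the prompts in Tables 17-20 rather than the authors' actual implementation.

def design_experiment(idea, entities, example_designs, llm, review_agent,
                      search_literature, num_iterations=1):
    """Sketch of the design-review-refine loop for the experiment design stage."""
    design = llm("Given the idea:\n{}\nkey entities:\n{}\nand example "
                 "experiment designs:\n{}\nDraft an experiment plan.".format(
                     idea, entities, example_designs))
    for _ in range(num_iterations):
        feedback = review_agent(design)            # clarity / completeness critique
        references = search_literature(feedback)   # extra evidence for the revision
        design = llm("Revise the experiment plan:\n{}\nusing the feedback:\n{}\n"
                     "and the related work:\n{}".format(design, feedback, references))
    return design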
3 EXPERIMENTAL SETUPS
3.1 IMPLEMENTATIONS
In our CoI agent, we primarily use GPT-4o (05-13) as our LLM implementation. For some modules
that require full-paper understanding, we use GPT-4o-mini (07-18) to read the paper and summarize
the core contents due to its lower price and good summarization capability. We use Semantic Scholar
as our academic search engine. For the main experimental results, the maximum length of the CoI
is set to 5 and the number of CoI branches is set to 3; analyses of these settings are given later. The number of self-refinement iterations in the experiment design stage is set to 1 to save cost.
3.2 DATA
To evaluate our CoI agent’s ability to generate novel ideas, we collect recent research topics from
Hugging Face’s Daily Papers4, known for its timely updates and the high quality of the featured
papers. We select papers submitted between August 1 and September 15, 2024, ensuring that the
topics are sufficiently new and the time frame is after the data cutoff of the LLM. We ask 10 skilled
researchers (all of whom have publications in top-tier conferences and work on AI-related topics such as computer vision, embodied intelligence, and natural language processing) to identify papers that capture their interests. Subsequently, we prompt GPT-4o to extract research topics, proposed ideas, and their corresponding experiment designs from these selected papers (Tables 21, 22 and 23). The extracted topics will then be returned to the researchers for validation, ensuring that the
extracted topics are valid and reasonable within their research domains. The extracted ideas and
experiment designs will be utilized as our Real Paper baseline, as described in Section 3.3. Due
to the substantial costs associated with generating and evaluating ideas and experiment designs, we
adhere to the assessment scale of Lu et al. (2024); Wang et al. (2023) to collect 50 research topics in
total for evaluation.
3.3 BASELINES
We compare our CoI agent with recent works on idea generation and experiment design. To ensure
a fair comparison, we employ GPT-4o and Semantic Scholar as the LLM and academic retriever
implementations, respectively, across all baseline methods. Furthermore, we unify the output format
of the generated ideas and experiment designs to minimize evaluation preference towards more
structured outputs (Chiang et al., 2024). We compare with the following baselines:
• RAG: This is a vanilla retrieval augmented generation approach (Lewis et al., 2020), where we
directly prompt the LLM with retrieved literature for idea generation and experiment design.
4https://huggingface.co/papers
Figure 3: Evaluation results of idea generation with an LLM as a judge.
Figure 4: Evaluation results of idea generation with humans as judges.
• ResearchAgent (Baek et al., 2024): This work leverages an additional academic knowledge graph
for enhancing the literature retrieval and adopts a multi-agent framework to refine ideas through
peer discussions iteratively. We follow the original paper to reproduce this baseline.
• GPT-Researcher (Assafelovic, 2023): GPT-Researcher is an agent framework specifically de-
signed for the research domain. The agent is enhanced with plan-and-solve and RAG capabilities.
• AI-Scientist (Lu et al., 2024): This work originally aims to generate the entire paper with the
idea, methods, and experimental results. We extract the components related to idea generation
and experiment design to serve as our baseline.
• Real Paper: Note that, in Sec. 3.2, we extract topics from existing research papers. Therefore,
the ideas and the experiment designs from these papers serve as a natural baseline to quantify the
gap between model-generated ideas and genuine human ideas.
3.4 EVALUATION: IDEA ARENA
Model-based Evaluation. The open-ended nature of idea generation poses challenges for automatic
evaluation. Prior work primarily uses an LLM-based Likert-scale system to score ideas (Baek et al., 2024; Lu et al., 2024). However, Si et al. (2024) show that this method aligns poorly with human preferences, and that LLMs perform better at ranking ideas. To obtain reliable scores for
evaluation, we propose Idea Arena, a pairwise evaluation system using a Round-Robin tournament
to compute ELO scores for each idea-generation method. For a given topic, we require the LLM
judge to rank the ideas generated by any pair of methods (Table 24). We evaluate each pair twice
with order reversed to reduce the position bias. To comprehensively evaluate an idea from multiple
perspectives, we incorporate criteria from ICML 2020 review guidelines 5, and those in Si et al.
(2024), which consist of Novelty, Significance, Clarity, Feasibility, and Expected Effectiveness.
Finally, the resultant win-loss-tie records are utilized to calculate the ELO scores for each method,
following the practices outlined in Zheng et al. (2024); Zhao et al. (2024). We also evaluate the
experiment design in the same pairwise way, focusing on Feasibility, Technical Quality, and Clarity.
Definitions of all metrics are provided in Tables 5 and 6 of the Appendix.
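For concreteness, the sketch below shows one simple way to turn win-loss-tie records into ELO scores; the K-factor, initial rating, and single-pass update are illustrative assumptions rather than the exact procedure of Zheng et al. (2024).

def compute_elo(match_records, k_factor=32, init_rating=1000):
    """match_records: list of (method_a, method_b, result) tuples with result in
    {"a", "b", "tie"}. Returns a dict mapping each method to its ELO rating."""
    ratings = {}
    for a, b, result in match_records:
        ra = ratings.setdefault(a, init_rating)
        rb = ratings.setdefault(b, init_rating)
        expected_a = 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))
        score_a = {"a": 1.0, "b": 0.0, "tie": 0.5}[result]
        ratings[a] = ra + k_factor * (score_a - expected_a)
        ratings[b] = rb + k_factor * ((1.0 - score_a) - (1.0 - expected_a))
    return ratings

# Example: one topic judged twice (order reversed) plus a tie, for one criterion.
records = [("CoI", "RAG", "a"), ("RAG", "CoI", "b"), ("CoI", "RAG", "tie")]
print(compute_elo(records))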
Human Evaluation. The 10 AI researchers who review the extracted topics are asked to rank two
ideas and experiment designs based on the same pairwise criteria as the model-based evaluation. To
ensure fairness, we anonymize the source of the ideas by concealing the method identity.
4 RESULTS
4.1 IDEA GENERATION
Main results. Figures 3 and 4 present the results of idea generation evaluated by both an LLM (specifically, GPT-4o) and human researchers. Detailed scores are in Table 26 of the Appendix. Over-
5https://icml.cc/Conferences/2020/ReviewerGuidelines
all, our CoI agent performs better than all other automated methods in both model- and human-based
evaluations. Notably, it substantially outperforms the second-best baselines, GPT-Researcher and
RAG, by margins of 108 and 56 ELO scores, respectively, in the two evaluation settings. Our CoI
agent’s performance is on par with that of the Real Paper baseline and even excels in the metrics of
Novelty and Significance. These results highlight its exceptional capabilities in idea generation. Fur-
thermore, CoI demonstrates superior performance in Clarity, Feasibility, and Expected Effectiveness
compared to other automated methods in human evaluation. Nevertheless, it still lags considerably
behind the Real Paper in these areas. This substantial gap between automatic methods and Real
Paper is expected, as Real Paper ideas undergo extensive experimental validation. Additionally,
AI-Scientist’s performance is especially low, likely due to its original design, which focuses on
generating full papers from executable code. When given only a research topic, its simplistic idea
generation framework limits its ability to produce novel and feasible ideas.
Table 1: Agreement between the human and GPT-4o judges in all evaluated dimensions.
             Novelty   Significance   Clarity   Feasibility   Effectiveness   Average
Agreement      66.5%          71.0%     76.3%         70.2%           71.0%     70.8%
Human-Model Agreements of Idea Arena. To assess
the reliability of our model-based evaluation within Idea
Arena, we analyze the agreements between the prefer-
ences of the human judges and the LLM judges. We
follow Zheng et al. (2024) to compute the agreement,
which is defined as the probability that two judges agree
on the winner of one specific arena match. Figure 5
shows the pairwise agreement between humans and sev-
eral state-of-the-art LLMs, including GPT-4o, Gemini-
1.5-Pro-Exp-08276, and Claude-3.5-Sonnet7. We observe
an average agreement of 70.8% between GPT-4o and hu-
mans. This finding indicates a strong alignment between
human-based and model-based evaluations , approaching
the level of agreement seen in human-to-human evalua-
tions (Si et al., 2024), thereby highlighting the robustness
of Idea Arena in evaluating the quality of generated re-
search ideas (More correlation results can be found in
Figure 8 and Figure 9). Moreover, GPT-4o demonstrates
the highest level of agreement with humans among all tested LLMs. Therefore, we will utilize GPT-
4o as the LLM judge for subsequent analytical experiments. Additionally, we present the agreement
on individual criteria between GPT-4o and human evaluators in Table 1. The results indicate a
consistently high level of agreement across all assessed criteria.
Figure 5: Agreements between human
and LLM judges.
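The agreement statistic itself is straightforward to reproduce from per-match judgments; a minimal sketch, assuming the judgments are stored as dictionaries keyed by match id, is:

def pairwise_agreement(judgments_a, judgments_b):
    """judgments_*: dict mapping a match id to the chosen winner ("method_1",
    "method_2", or "tie"). Returns the fraction of shared matches on which the
    two judges agree."""
    shared = set(judgments_a) & set(judgments_b)
    if not shared:
        return 0.0
    agree = sum(judgments_a[m] == judgments_b[m] for m in shared)
    return agree / len(shared)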
4.2 ABLATION STUDIES FOR IDEA GENERATION
We conduct an ablation study to assess the contributions of each component of the CoI Agent to idea
generation quality. The following variants are examined: 1) – CoI: Excludes the CoI construction
stage, directly using all retrieved literature without progressive relation mining. 2) – Future Trend:
Omits the Future Trend Prediction module, prompting the LLM to consolidate ideas directly based
on the provided input information. 3) – Entities: Skips inputting entity definitions during idea
generation. To ensure a fair comparison, each variant is scored against the full CoI Agent, with 2/1/0
points for win/tie/lose in 50 matches, for a maximum of 100 points.
Results in Table 2 show that all variants negatively affect idea quality. Excluding the CoI con-
struction stage has the most significant impact, emphasizing the importance of organizing literature
based on progressive relationships to enhance the LLM’s understanding of trends. Removing the
Future Trend Prediction reduces novelty, as the LLM lacks insight into potential forward-thinking
ideas. Although slight improvements in clarity and feasibility are observed, these are not substantial,
6https://ai.google.dev/gemini-api/docs/models/experimental-models
7https://www.anthropic.com/news/claude-3-5-sonnet
[Figure 5 data: pairwise agreement rates: Human vs GPT-4o 70.8%, Human vs Gemini-1.5-pro 69.3%, Human vs Claude-3.5 70.1%; GPT-4o vs Gemini-1.5-pro 90.1%, GPT-4o vs Claude-3.5 92.9%, Gemini-1.5-pro vs Claude-3.5 91.8%.]
Table 2: Ablation study on the design of CoI agent. The original CoI agent gets 50 points because it
receives 50 ties after battling with itself.
                  Novelty   Significance   Clarity   Feasibility   Effectiveness   Average
CoI Agent              50             50        50            50              50        50
– CoI                  41             39        44            49              39      42.4
– Future Trend         40             43        51            53              44      46.2
– Entities             46             49        42            47              43      45.4
likely due to evaluation variability. Finally, omitting entity information reduces clarity and effec-
tiveness, as the LLM generates more abstract ideas without grounding in specific concepts. This
highlights the value of entity information in enhancing the clarity and practical relevance of ideas.
4.3 CASE STUDY
We present an intriguing case study in Table 3 on the same topic as our paper: generating novel
research ideas using LLMs. Given the input topic, our CoI agent first constructs the chain of ideas,
extending I0 (Baek et al., 2024) in both forward and backward directions. Then the agent analyzes
current research trends for any two adjacent ideas. For instance, it identifies that the core develop-
ment from I−1 to I0 is the generation of ideas rather than hypotheses. After digesting the existing
trends, the CoI agent realizes that LLMs have great potential in idea generation but are limited in
novelty and diversity. Therefore, it proposes an evolutionary algorithm, which specifically models
the variations between parents and children, as a possible future trend for novel and diverse idea
generation. Finally, the agent consolidates its final idea by drawing on future trends and with practi-
cal implementations, such as crossover and mutation, to ensure effective realization. Therefore, the
generated idea is viable and novel, deserving further exploration in our future work.
4.4 EXPERIMENT DESIGN
As a byproduct of idea generation, we also require these
baselines to develop potential experiment designs for re-
alizing their proposed ideas. Table 4 presents the arena-
style results for experiment designs for both model-based
and human-based evaluations. Our CoI Agent demon-
strates superior performance across all evaluated criteria
in two evaluation settings, achieving the highest scores
among all automated methods. Notably,
it surpasses
RAG, the second-best automated method, by 70 ELO
points in human evaluation. Furthermore, there is a high
degree of model-human agreement in the experimental
designs. Despite the clarity and reasonable technical de-
tails of the experiment designs produced by the CoI Agent
in support of the proposed ideas, they tend to be less fea-
sible compared to those designs in the existing literature.
This phenomenon is also observed during the idea gener-
ation phase. Consequently, feasibility represents a signifi-
cant bottleneck in automatic idea generation, highlighting
the need for future research to address this challenge.
4.5 LENGTH OF COI
To examine the impact of the CoI length on the quality
of generated ideas, we constructed variants with differing
maximum chain lengths. Furthermore, we also adopt the
“- CoI” variant in Sec. 4.2 as a 0-length variant, which
uses 5 retrieved papers but does not organize them in a
chain structure. Figure 6 presents the idea arena results
Table 4: Results of experiment design of both model and human evaluations, as well as their agreements. Tech. refers to the Technical Quality criterion.

                              Feasibility   Tech.   Clarity   Average
Model Evaluation
  Real Paper                         1100    1122      1111      1103
  CoI Agent (ours)                   1029    1096      1111      1056
  RAG                                1022     970      1123      1003
  ResearchAgent                       960    1020      1041       987
  GPT-Researcher                     1001     965       977       986
  AI-Scientist                        888     827       959       865
Human Evaluation
  Real Paper                         1138    1090       788      1120
  CoI Agent (ours)                   1092    1043      1121      1112
  RAG                                1035    1016      1048      1042
  GPT-Researcher                      988     980       971       978
  ResearchAgent                       939     992       964       954
  AI-Scientist                        809     879       785       794
Agreement                           70.7%   75.9%     72.1%     73.0%
Figure 6: Length analysis of the CoI.
Table 3: Case study for the entire idea generation pipeline of our CoI agent.
Input topic: Using LLM agent to generate novel and original research ideas without human participation
Chain of ideas:
• I−3 (Kim et al., 2021): It addresses the challenge of discovering new materials through molecular generation.
It introduces GCT, a Transformer with a variational autoencoder, to generate SMILES strings . . .
• I−2 (Boiko et al., 2023): It explores the capabilities of LLM in designing, and executing experiments for
scientific research. This work presents a multi-LLM agent to autonomously execute complex scientific ex-
periments via internet browsing, documentation searching, and hands-on experimentation . . .
• I−1 (Yang et al., 2024b): It proposes a new dataset for social science hypotheses and develops a MOOSE
framework with LLM prompting and feedback mechanisms to facilitate hypothesis generation . . .
• I0 (Baek et al., 2024): It proposes a ResearchAgent framework for automatic idea generation. ResearchA-
gent combines LLMs with an entity-centric knowledge graph and iterative feedback from reviewing agents,
creating a structured and dynamic process for generating and refining research ideas . . .
• I1 (Si et al., 2024): The paper explores the capabilities of LLMs in generating novel research ideas and
presents a large-scale comparison between LLM-generated ideas and those produced by 100 NLP expert
researchers, revealing that LLMs can produce ideas deemed more novel than human-generated ideas . . .
Current Trends:
• I−3 → I−2: The progression from I−3 to I−2 marks a significant shift from the application of neural models
for molecular generation to the broader scope of automating scientific research using LLMs . . .
• I−2 → I−1: The transition from I−2 to I−1 focuses on refining the autonomous induction capabilities of
LLMs, specifically in generating novel and valid scientific hypotheses . . .
• I−1 → I0: I0 builds on the advancements made in I−1 by further extending the process of generating
hypotheses to generating and refining research ideas autonomously . . .
• I0 → I1: The transition from I0 to I1 emphasizes the importance of empirical validation of LLMs in
generating novel research ideas and highlights the potential of LLMs to contribute to ideation . . .
Future Trend Prediction: Given the previous research’s progression and the identified gaps, a promising
direction is to unleash the potential of LLM in ideation. We can develop a multi-agent system that leverages
evolutionary algorithms to enhance the diversity and novelty of LLM-generated research ideas . . .
Final Idea: EvoResearchAgent: Enhancing Diversity and Novelty in Idea Generation with Evolution
• Motivation: Using LLMs for idea generation has shown promising advancements. However, challenges
persist, particularly concerning the diversity and novelty of LLM-generated ideas. Si et al. (2024) show that
while LLMs can produce novel ideas, they often lack a broad range of perspectives and diversity. Ad-
ditionally, Baek et al. (2024) have emphasized the need for a more systematic approach to improving
the quality of generated ideas. To address these issues, we propose EvoResearchAgent, a multi-agent
system that leverages evolutionary algorithms to enhance the diversity and novelty of generated ideas . . .
• Method:
◦ Idea Initialize: An LLM generates some initial ideas as the start point of the evolutionary process . . .
◦ Metrics: Propose automatic metrics like topic diversity and novelty to evaluate the range of ideas . . .
◦ Evolution Integration:
1. Selection: Select the top ideas based on predefined novelty and diversity metrics.
2. Crossover: Combine elements of two high-scoring ideas to create new hybrid ideas.
3. Mutation: Introduce small changes to existing ideas for new possibilities and diversity.
4. Iteration: Repeat the selection, crossover, and mutation process iteratively . . .
among these length variants. We observe a substantial improvement of idea-generation quality when
we increase the length from 0 to 3. This indicates that a clear developmental trend analysis is more
pivotal than the quantity of related literature. Furthermore, the quality of generated ideas continues
to improve as the length of the CoI increases. Longer CoIs offer more reliable and comprehensive
insights into the evolving trends within the current research domain, thereby enabling the LLM to
better capture future development trends. The quality of generated ideas levels off after reaching a
maximum length of 5. This saturation point indicates that this length is sufficient to capture relevant
trends, with additional literature offering diminishing returns.
4.6 WIDTH OF COI
We also assess the impact of the width of CoI (i.e., the branch number K) on the quality of
generated ideas. Figure 7 shows the trend of average ELO scores with varying branch num-
bers. Generally, increasing the branch numbers shows a positive correlation with idea quality.
However, the disparity in ELO scores across different
branch numbers is small. This phenomenon is likely at-
tributed to the fact that generating multiple chains primar-
ily helps reduce the impact of any single CoI performing
poorly. Fortunately, such low-quality CoIs are rare.
5 RELATED WORKS
Figure 7: Width analysis of the CoI.
Scientific Research Idea Generation. Idea generation is a fundamental step in scientific research.
Due to its innovative nature, idea generation has been primarily a human-driven activity. However,
recent studies indicate that LLMs can generate plausibly novel and feasible ideas as those of human
researchers (Si et al., 2024; Kumar et al., 2024). To investigate the potential of LLMs in idea gen-
eration, prior works begin with the task of scientific hypothesis discovery (Yang et al., 2024b; Qi
et al., 2023; Wang et al., 2023), which aims to elucidate relationships between two scientific vari-
ables. Despite its utility, scientific hypothesis discovery may not fully capture the complexity and
multifaceted nature of real-world problems. To address this limitation, projects like GPT-Researcher
(Assafelovic, 2023) and ResearchAgent (Baek et al., 2024) have adopted a more open-ended idea
generation scenario including the underlying methodologies and experimental designs. They lever-
age agent-based systems to enhance the quality of idea generation. Beyond ideation, numerous
studies also explore the use of LLMs for executing experiments (Huang et al., 2024; Tian et al.,
2024) or combining both idea generation and experimental execution (Li et al., 2024; Lu et al.,
2024). However, these approaches often make minor modifications to existing ideas for drafting
their ideas, which often lack depth and creativity.
Align LLMs with Human Cognitive Patterns. As LLMs are trained with vast amounts of human
data (Brown et al., 2020), this may enable them to internalize human cognitive patterns. Firstly,
CoT (Wei et al., 2022) indicates that LLMs can enhance their reasoning abilities when provided
with step-by-step guidance. Further research supports this notion by showing that simply prompting
LLMs to engage in step-by-step reasoning can trigger better reasoning capability (Kojima et al.,
2022). Additionally, Fu et al. (2022) reveals that in-depth reasoning of LLMs can be achieved with
more elaborate prompts. As a result, a prompting strategy that closely emulates human cognition is
likely to elicit more insightful responses from these models. Motivated by this, we propose CoI to
better mimic the progressive cognitive patterns of humans when generating new research ideas.
6 ETHICS DISCUSSION
The misuse of AI-generated research ideas could present a risk to our society. We believe this is a
fundamental limitation inherent in all generative models, not just an issue specific to our CoI. Con-
sequently, we advocate for the continuation of safety research specifically focused on the academic
domain. As for this paper, our primary goal is to enhance effectiveness, while safety issues are re-
ally out of this scope. Nevertheless, we still try to test the safety capability of our framework. The
analysis, detailed in Appendix A.2, shows that CoI does not compromise the safety alignment of
existing LLMs, thereby making it a safe and reliable framework for idea generation.
7 CONCLUSIONS
In this paper, we introduce Chain of Ideas (CoI) agent, a framework designed to enhance the capa-
bility of LLMs in generating research ideas. The CoI agent offers a promising and concise solution
by organizing ideas into a chain structure, effectively mirroring the progressive development within
a given research domain. It facilitates LLMs to digest the current advancements in research, thereby
enhancing their ideation capabilities.p To comprehensively evaluate the capability of automated idea
generation methods, we also propose Idea Arena, an evaluation system that requires the participant
methods to compete in pairs about their generated ideas for the research topics, which demonstrates
high agreement with human evaluation. Experimental results indicate that the CoI agent consistently
outperforms other methods and is capable of generating ideas comparable to human creativity.
REFERENCES
Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Ale-
man, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical
report. arXiv preprint arXiv:2303.08774, 2023.
Assafelovic. gpt-researcher, 2023. URL https://github.com/assafelovic/gpt-researcher.
Jinheon Baek, Sujay Kumar Jauhar, Silviu Cucerzan, and Sung Ju Hwang. Researchagent: Iterative
research idea generation over scientific literature with large language models. arXiv preprint
arXiv:2404.07738, 2024.
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas Gian-
inazzi, Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, et al. Graph of
thoughts: Solving elaborate problems with large language models. In Proceedings of the AAAI
Conference on Artificial Intelligence, volume 38, pp. 17682–17690, 2024.
Daniil A Boiko, Robert MacKnight, and Gabe Gomes. Emergent autonomous scientific research
capabilities of large language models. arXiv preprint arXiv:2304.05332, 2023.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhari-
wal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agar-
wal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh,
Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz
Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec
Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 1877–1901. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared
Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large
language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
Wei-Lin Chiang, Lianmin Zheng, Ying Sheng, Anastasios Nikolas Angelopoulos, Tianle Li,
Dacheng Li, Hao Zhang, Banghua Zhu, Michael Jordan, Joseph E. Gonzalez, and Ion Stoica.
Chatbot arena: An open platform for evaluating llms by human preference, 2024.
Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha
Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models.
arXiv preprint arXiv:2407.21783, 2024.
Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and Tushar Khot. Complexity-based prompting
for multi-step reasoning. In The Eleventh International Conference on Learning Representations,
2022.
Qian Huang, Jian Vora, Percy Liang, and Jure Leskovec. MLAgentbench: Evaluating language
agents on machine learning experimentation. In Forty-first International Conference on Machine
Learning, 2024. URL https://openreview.net/forum?id=1Fs1LvjYQW.
Hyunseung Kim, Jonggeol Na, and Won Bo Lee. Generative chemical transformer: neural machine
learning of molecular geometric structures from chemical language via attention. Journal of
chemical information and modeling, 61(12):5804–5814, 2021.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large
language models are zero-shot reasoners. Advances in neural information processing systems,
35:22199–22213, 2022.
Sandeep Kumar, Tirthankar Ghosal, Vinayak Goyal, and Asif Ekbal. Can large language models
unlock novel scientific research ideas? arXiv preprint arXiv:2409.06185, 2024.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal,
Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. Retrieval-augmented genera-
tion for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 33:
9459–9474, 2020.
Ruochen Li, Teerth Patel, Qingyun Wang, and Xinya Du. Mlr-copilot: Autonomous machine learn-
ing research based on large language models agents. arXiv preprint arXiv:2408.14033, 2024.
Chris Lu, Cong Lu, Robert Tjarko Lange, Jakob Foerster, Jeff Clune, and David Ha. The ai scien-
tist: Towards fully automated open-ended scientific discovery. arXiv preprint arXiv:2408.06292,
2024.
Biqing Qi, Kaiyan Zhang, Haoxiang Li, Kai Tian, Sihang Zeng, Zhang-Ren Chen, and Bowen Zhou.
Large language models are zero shot hypothesis proposers. arXiv preprint arXiv:2311.05965,
2023.
Chenglei Si, Diyi Yang, and Tatsunori Hashimoto. Can llms generate novel research ideas? a large-
scale human study with 100+ nlp researchers. arXiv preprint arXiv:2409.04109, 2024.
Jiabin Tang, Yuhao Yang, Wei Wei, Lei Shi, Lixin Su, Suqi Cheng, Dawei Yin, and Chao Huang.
Graphgpt: Graph instruction tuning for large language models. In Proceedings of the 47th In-
ternational ACM SIGIR Conference on Research and Development in Information Retrieval, pp.
491–500, 2024.
Minyang Tian, Luyu Gao, Shizhuo Dylan Zhang, Xinan Chen, Cunwei Fan, Xuefei Guo, Roland
Haas, Pan Ji, Kittithat Krongchon, Yao Li, et al. Scicode: A research coding benchmark curated
by scientists. arXiv preprint arXiv:2407.13168, 2024.
Qingyun Wang, Doug Downey, Heng Ji, and Tom Hope. Scimon: Scientific inspiration machines
optimized for novelty. arXiv preprint arXiv:2305.14259, 2023.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdh-
ery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models.
arXiv preprint arXiv:2203.11171, 2022.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny
Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in
neural information processing systems, 35:24824–24837, 2022.
An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, et al. Qwen2 technical report. arXiv preprint arXiv:2407.10671, 2024a.
Kaiyu Yang, Aidan M Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, and Anima Anandkumar. Leandojo: Theorem proving with retrieval-augmented language models. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2023. URL https://openreview.net/forum?id=g7OX2sOJtn.
Zonglin Yang, Xinya Du, Junxian Li, Jie Zheng, Soujanya Poria, and Erik Cambria. Large language models for automated open-domain scientific hypotheses discovery. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Findings of the Association for Computational Linguistics ACL 2024, pp. 13545–13565, Bangkok, Thailand and virtual meeting, August 2024b. Association for Computational Linguistics. URL https://aclanthology.org/2024.findings-acl.804.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik
Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. Ad-
vances in Neural Information Processing Systems, 36, 2024.
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhen-
guo Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions
for large language models. arXiv preprint arXiv:2309.12284, 2023.
Ruochen Zhao, Wenxuan Zhang, Yew Ken Chia, Deli Zhao, and Lidong Bing. Auto arena of llms:
Automating llm evaluations with agent peer-battles and committee discussions. arXiv preprint
arXiv:2405.20267, 2024.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang,
Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and
chatbot arena. Advances in Neural Information Processing Systems, 36, 2024.
Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang,
Changcheng Li, and Maosong Sun. Graph neural networks: A review of methods and applica-
tions. AI open, 1:57–81, 2020.
A APPENDIX
A.1 EVALUATION METRICS
Evaluation criteria for generated ideas include several key aspects. Novelty and Significance are
adapted from the ICML 2020 reviewer guidelines, with specific experimental evaluation standards
removed. Effectiveness is assessed with reference to AI-Researcher Si et al. (2024), while Feasi-
bility is tailored specifically for the task of Idea generation. Clarity is also sourced from the ICML
2020 reviewer guidelines. For the evaluation of experiment design, the criteria consist of Quality,
extracted from the Technical Quality section of the ICML 2020 guidelines with specific results-
oriented standards omitted, as well as Clarity, again based on ICML 2020 guidelines. Feasibility is
designed specifically for the task of experiment design generation.
Table 5: Evaluation metrics of ideas.
Novelty: Are the problems or approaches new? Is this a novel combination of familiar techniques? Is it clear how this work differs from previous contributions? Is related work adequately referenced?
Significance: Is the idea important? Are other people (practitioners or researchers) likely to use these ideas or build on them? Does the idea address a difficult problem in a better way than previous research? Does it provide a unique theoretical or pragmatic approach?
Clarity: Is the paper clearly written? Is it well-organized? Does it adequately inform the reader?
Feasibility: Can the idea be realized with existing technology or methods? Are there any technical difficulties or bottlenecks? Is the idea clear and logical? Is there any obvious error or unreasonable part in the idea, and can the experiment be designed normally according to this idea?
Expected Effectiveness: How likely the proposed idea is going to work well (e.g., better than existing baselines).
Table 6: Evaluation metrics of experiment design.
Feasibility: Can the experiment be realized with existing technology or methods? Are there any technical difficulties or bottlenecks? Is the experimental plan detailed and feasible? Are the experimental steps clear and logical? Is there any obvious error or unreasonable part in the experiment? Consider the rationality of its steps and the possibility that the idea can be successfully implemented.
Quality: Is there a clear rationale for each step of the experimental design? Are the baseline and evaluation metrics chosen appropriately? Has the design taken into account the potential advantages and limitations of the methods used? Can this experimental design effectively support the claims made in the idea?
Clarity: Is the experimental plan clearly written? Does it provide enough information for the expert reader to understand the experiment? Is it well organized? Does it adequately inform the reader?
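As an illustration only, the rubrics in Tables 5 and 6 can be packaged as plain data when judge prompts are assembled programmatically. The dictionary names and the paraphrased descriptions below are our own shorthand for the criteria above, not part of any released implementation.

```python
# Minimal sketch: encode the evaluation rubrics of Tables 5 and 6 as data so that a
# judge prompt can be rendered from them. Names and wording are illustrative only.
IDEA_METRICS = {
    "Novelty": "Are the problems or approaches new, and is the difference from prior work clear?",
    "Significance": "Is the idea important and likely to be used or built on?",
    "Clarity": "Is it clearly written, well-organized, and informative?",
    "Feasibility": "Can the idea be realized with existing technology or methods?",
    "Expected Effectiveness": "How likely is the idea to work better than existing baselines?",
}

EXPERIMENT_METRICS = {
    "Feasibility": "Can the experiment be realized with existing methods? Is the plan detailed and logical?",
    "Quality": "Is each step well-motivated, with appropriate baselines and metrics?",
    "Clarity": "Is the plan clearly written and well organized?",
}

def rubric_block(metrics: dict) -> str:
    """Render a rubric as a numbered list for inclusion in a judge prompt."""
    return "\n".join(f"{i}. {name}: {desc}"
                     for i, (name, desc) in enumerate(metrics.items(), 1))

if __name__ == "__main__":
    print(rubric_block(IDEA_METRICS))
```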
A.2 ETHICS RESULTS
To test whether CoI generates unsafe research ideas, we try two unsafe topics: "Artificial intelligence
weaponization" and "Development of highly addictive and lethal drugs". For each topic, we generate
10 ideas.
Among the 10 ideas about "artificial intelligence weaponization", four focus on the ethical
issues surrounding AI weapons, such as establishing guidelines for their use, enhancing accountability
and oversight mechanisms, and preventing ethical dilemmas. Another four address the
enhancement of safety in the use of AI weapons, including methods to distinguish between civilians
and combatants, increase human involvement, and build robustness against errors. The remaining
two discuss ways to increase the transparency of AI weapons and improve their interpretability
to ensure compliance with international humanitarian law.
Among the 10 ideas about "Development of highly addictive and lethal drugs", six focus on
research into predicting and preventing addictive behaviors. The remaining four concentrate
on predicting and preventing substance abuse among youth in the community and treating addictive
behaviors.
It can be observed that even when CoI is presented with potentially unsafe topics, it consistently
suggests safe and reliable ideas. This is partly because most current LLMs have undergone safety
alignment. Additionally, the construction process of CoI involves searching for publicly available
research papers on the internet and conducting further research based on them. The majority of
accessible papers tend to present positive perspectives, which in turn guides CoI to propose ideas
that are more in line with ethical standards.
A.3 SPECIFIC PROMPTS
Here are the prompts used in this paper; a minimal sketch of how they are chained together is given after the list.
• Prompts used in CoI construction
– Prompt used to convert a topic into a search query for literature retrieval (Table 7)
– Prompt used to evaluate whether a paper is relevant to the topic (Table 8)
– Prompt used to extract idea, experiment, entities and references from paper (Tables 9
and 10)
– Prompt used to summarize current trends of CoI (Table 11)
• Prompts used in idea generation
– Prompt used to predict future trend (Table 12 and 13)
– Prompt used to generate idea (Table 14 and 15)
– Prompt used to check the novelty of the idea (Table 16)
• Prompts used in experiment design
– Prompt used to generate experiment design (Table 17)
– Prompt used to review experiment design (Table 18)
– Prompt used to generate a search query for retrieving papers to refine the experiment design (Table 19)
– Prompt used to refine experiment (Table 20)
• Prompts used in benchmark construction
– Prompt used to extract topic from real paper (Table 21)
– Prompt used to extract the idea from real paper (Table 22)
– Prompt used to extract the experiment design from real paper (Table 23)
• Prompts used in idea arena
– Prompt used to compare two ideas (Table 24)
– Prompt used to compare two experiment designs (Table 25)
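The sketch below illustrates one way the prompts above could be chained into a single ideation pipeline. The prompt constants stand in for the table texts, and call_llm / search_papers are placeholders for an LLM client and a literature-search API; none of these names are taken from the released implementation.

```python
# Hypothetical orchestration of the prompts in Tables 7-20 (a sketch, not the released code).
QUERY_PROMPT = "..."       # Table 7
RELEVANCE_PROMPT = "..."   # Table 8
EXTRACT_PROMPT = "..."     # Tables 9-10
TREND_PROMPT = "..."       # Table 11
FUTURE_PROMPT = "..."      # Tables 12-13
IDEA_PROMPT = "..."        # Tables 14-15
EXPERIMENT_PROMPT = "..."  # Table 17
REVIEW_PROMPT = "..."      # Table 18
REFINE_PROMPT = "..."      # Table 20

def run_coi(topic, call_llm, search_papers, refine_rounds=2):
    """call_llm: str -> str; search_papers: query -> list of dicts with title/abstract/content."""
    # (1) CoI construction: retrieve, filter, and summarize papers into a chain of ideas.
    query = call_llm(QUERY_PROMPT.format(topic=topic))
    papers = [p for p in search_papers(query)
              if "Relevant: 1" in call_llm(
                  RELEVANCE_PROMPT.format(topic=topic, title=p["title"], abstract=p["abstract"]))]
    chain = [call_llm(EXTRACT_PROMPT.format(topic=topic, paper=p["content"])) for p in papers]
    trend = call_llm(TREND_PROMPT.format(topic=topic, chain=chain))

    # (2) Idea generation: predict a future direction, then draft the idea.
    # (The novelty check of Table 16 would be applied to the draft at this point.)
    direction = call_llm(FUTURE_PROMPT.format(topic=topic, chain=chain, trend=trend))
    idea = call_llm(IDEA_PROMPT.format(topic=topic, chain=chain, trend=trend, direction=direction))

    # (3) Experiment design with a review-and-refine loop.
    experiment = call_llm(EXPERIMENT_PROMPT.format(idea=idea, chain=chain))
    for _ in range(refine_rounds):
        suggestions = call_llm(REVIEW_PROMPT.format(idea=idea, experiment=experiment))
        experiment = call_llm(REFINE_PROMPT.format(experiment=experiment, suggestions=suggestions))
    return {"idea": idea, "experiment": experiment}
```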
A.4 ADDITIONAL EXPERIMENT RESULTS
We present the evaluation results of idea generation for both model-based evaluation (including
GPT-4o, Gemini-1.5-Pro-Exp-0827, and Claude-3.5-Sonnet) and human-based evaluation in Table
26.
We also conducted a consistency analysis using Spearman and Pearson correlation coefficients. Specifically,
we utilized the ELO scores/rankings assigned by two judges to these baselines to compute
the Pearson and Spearman correlations for each evaluated dimension. We then averaged the scores
across all dimensions to determine the final correlation between the two judges. The detailed results
are illustrated in Figures 8 and 9.
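A minimal sketch of this consistency analysis is given below: for each dimension, it correlates the ELO scores that two judges assign to the same set of baselines, then averages over dimensions. The example numbers are illustrative placeholders, not values from Table 26, and the function name is our own.

```python
# Sketch of the judge-agreement computation described above (assumed helper, not released code).
import numpy as np
from scipy.stats import pearsonr, spearmanr

def judge_agreement(elo_a: dict, elo_b: dict):
    """elo_a, elo_b map dimension -> list of ELO scores, with baselines in the same order."""
    pearsons, spearmans = [], []
    for dim in elo_a:
        a, b = np.asarray(elo_a[dim], dtype=float), np.asarray(elo_b[dim], dtype=float)
        pearsons.append(pearsonr(a, b)[0])
        spearmans.append(spearmanr(a, b)[0])
    # Average the per-dimension correlations to obtain one agreement score per judge pair.
    return float(np.mean(pearsons)), float(np.mean(spearmans))

if __name__ == "__main__":
    judge_1 = {"Novelty": [1100, 1080, 1000, 990, 950, 880]}   # placeholder scores
    judge_2 = {"Novelty": [1120, 1060, 1010, 980, 940, 890]}   # placeholder scores
    print(judge_agreement(judge_1, judge_2))
```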
Table 7: Prompt used to convert a topic into a search query for literature retrieval
You are a master of literature searching, tasked with finding relevant research literature based on a specific topic.
Currently, we would like to study the following topic: [Topic]
Please provide the literature search queries you would use to search for papers related to the topic and idea.
Each query should be a string and should be enclosed in double quotes. It is best to output one query representing the whole and other queries representing different aspects of the whole.
Output strictly in the following format:
Queries: ...
Table 8: Prompt used to evaluate whether a paper is relevant to the topic
You are an expert researcher tasked with evaluating whether a given paper is relevant to our research topic based on its title and abstract.
Below are the details of the paper you need to assess:
Title: [Title]
Abstract: [Abstract]
The topic is: [Topic]
If the paper title and abstract are related to the topic, output 1; otherwise, output 0. As long as you feel that this article has reference value for your question, you can use it to help you study the topic; it does not need to be completely consistent in topic.
Please follow the strict format below:
Think: ...
Relevant: 0/1
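Because the prompts in Tables 7 and 8 request outputs in a fixed "Field: value" layout, the calling code needs only a small parser. The helper below is an illustrative sketch of such a parser; it is not the parser used in the released system.

```python
# Sketch of parsing the structured outputs requested by the prompts above,
# e.g. "Think: ... / Relevant: 0/1" (Table 8) or "Queries: ..." (Table 7).
import re

def parse_field(response: str, field: str) -> str:
    """Return the text after `field:` up to the next line that starts a new field."""
    pattern = rf"{re.escape(field)}\s*:\s*(.*?)(?:\n[A-Z][\w ]*:|\Z)"
    match = re.search(pattern, response, re.S)
    return match.group(1).strip() if match else ""

def is_relevant(response: str) -> bool:
    return parse_field(response, "Relevant").startswith("1")

if __name__ == "__main__":
    demo = "Think: the abstract studies retrieval for ideation.\nRelevant: 1"
    print(parse_field(demo, "Think"))
    print(is_relevant(demo))  # True
```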
Table 9: Prompt used to extract idea, experiment, entities and references from paper (part I)
You are a scientific research expert, tasked with extracting and summarizing information from provided paper content relevant to the topic: [Topic]. Your deliverables will include pertinent references, extracted entities, a detailed summary, and the experimental design.
The topic you are studying is: [Topic] (Ensure that the references are pertinent to this topic.)
Extraction Requirements:
Entities:
1. Identify unique entities mentioned in the paper, such as model names, datasets, metrics, and specialized terminology.
2. Format the entities with a name followed by a brief description.
3. Ensure all entities are relevant to the specified topic ([Topic]).
Summary Idea:
1. Background: Elaborate on the task's context and previous work, outlining the starting point of this paper.
2. Novelty: Describe the main innovations and contributions of this paper in comparison to prior work.
3. Contribution: Explain the primary methods used, detailing the theory and functions of each core component.
4. Detail Reason: Provide a thorough explanation of why the chosen methods are effective, including implementation details for further research.
5. Limitation: Discuss current shortcomings of the approach.
Continue to next table →

Figure 8: Pearson correlation coefficient of evaluation results of different judges
Figure 9: Spearman correlation coefficient of evaluation results of different judges
Table 10: Prompt used to extract idea, experiment, entities and references from paper (part II)
Experimental Content:
1. Experimental Process: Detail the entire experimental procedure, from dataset construction to specific steps, ensuring clarity and thoroughness.
2. Technical Details: Describe any specific technologies involved, providing detailed implementation processes.
3. Clarity of Plan: State your experimental plan concisely to facilitate understanding without unnecessary complexity.
4. Baseline: Elaborate on the baseline used, comparative methods, and experimental design, illustrating how these support and validate the conclusions drawn.
5. Verification: Explain how your experimental design assists in verifying the core idea and ensure it is detailed and feasible.
Relevance Criteria:
1. Method Relevance: References must directly correlate with the paper's methodology, indicating improvements or modifications.
2. Task Relevance: References should address the same task, even if methods differ; better have the same topic [Topic].
3. Baseline Relevance: References should serve as baselines for the methods discussed in the paper.
4. Output Format: Provide references without author names or publication years, formatted as titles only.
The paper content is as follows: [Paper content]
Please provide the entities, summary idea, experimental design, and the three most relevant references based on the paper's content (Sort by relevance, with priority given to new ones with the same level of relevance; do not reference the original paper.)
Note: Ensure the references are pertinent to the topic you are studying: [Topic]. If there are no relevant references, output [].
Now please output strictly in the following format:
Entities: ...
Idea: ...
Experiment: ...
References: ...
Table 11: Prompt used to get trends of CoI
You are a scientific research expert tasked with summarizing the historical progression of research related to our current topic, based on the literature we have reviewed.
Here are the entities you need to know: [Entities]
The topic you are studying is: [Topic]
The literature from early to late: [Idea chain]
Your objective is to outline the historical evolution of the research in light of current trends. Please follow these requirements:
Analysis of Published Viewpoints: Examine the progression of ideas across the identified papers. Detail how each paper transitions to the next--for instance, how Paper 0 leads to Paper 1, and so forth. Focus on understanding how Paper 1 builds upon the concepts in Paper 0. Elaborate on specific advancements made, including proposed modules, their designs, and the rationale behind their effectiveness in addressing previous challenges. Apply this analytical approach to each paper in the sequence.
Please present your findings in the following format:
Trends:
Paper 0 to Paper 1: ...
Paper 1 to Paper 2: ...
...
Table 12: Prompt used to predict future trend (Part I)
You are a scientific expert tasked with formulating a novel and innovative research idea based on your comprehensive literature review. Your objective is to propose a feasible approach that could significantly advance the field.
Here are the entities you need to know: [Entities]
The literature you have studied is as follows: [Chain of ideas]
The following section delineates the progressive relationships among the previously summarized research papers: [Trend]
Based on previous research, analyze how human experts think and transition from previous methods to subsequent approaches. Focus on their reasoning logic and the sources of their thought processes. Learn to emulate their reasoning patterns to further develop and guide your own research direction in a natural and coherent manner.
Additionally, you are encouraged to adopt the following three modes of thinking:
Continue to next table →
Table 13: Prompt used to predict future trend (Part II)
1. Reflection: Reflect on scenarios where a specific method encounters significant challenges. Consider potential solutions that could effectively address these issues, make the solutions sound reasonable, novel and amazing.
2. Analogy: Identify a specific problem you are currently facing and research existing solutions that have successfully tackled similar challenges. Explore these solutions and adapt key principles and strategies to your situation. Think creatively about how tools and approaches from other domains can be re-imagined to devise a novel strategy for your issue. Encourage you to actively explore methods in other fields to solve your current problems.
3. Deep Dive: Some methods may present specific approaches to addressing a particular problem. Consider whether there are aspects that could be modified to enhance their rationale and effectiveness.
Note: Each article's limitations are specific to that particular piece and should not be applied to others. Carefully consider the task at hand and analyze the potential issues you might encounter if you proceed with your original approach, reflecting on the challenges previously faced. Then, think critically about how to address these issues effectively.
You are encouraged to apply human reasoning strategies to identify future research directions based on prior studies. Aim for in-depth analysis rather than mere integration of existing ideas.
Please avoid introducing unfamiliar information, ensuring that the trends you present are both authentic and reasonable. Before proposing any trends, take a moment to reflect on the principles underlying the methods you're employing and assess their relevance to your research area.
The future research direction should be related to the topic: [Topic]
Please present the future research direction in the following format:
Future direction: ...
Table 14: Prompt used to generate idea (part I)
You are a scientific expert tasked with formulating a novel and innovative research idea based on your comprehensive literature review. Your objective is to propose a feasible approach that could significantly advance the field.
The following are examples of ideas you have proposed in the past that are similar to real papers. Please avoid this situation as much as possible. You can continue to make in-depth innovations, but avoid plagiarism: [Bad case]
Here are the entities you need to know: [Entities]
The topic you are studying is: [Topic]
The literature you have studied is as follows: [Chain of ideas]
Your idea is composed of the following components:
Motivation:
1. Provide a background for your idea, summarizing relevant work.
2. Identify shortcomings in previous research and highlight the specific problems that remain unsolved and that you aim to address.
Novelty:
1. Distinguish your proposed method from existing methods (preferably by naming specific approaches).
2. Detail the improvements of your method compared to past work.
3. Clearly outline at least three contributions your idea offers to the field, including the problems it resolves and the benefits it delivers.
Method:
1. Present a detailed description of your idea, focusing on the core method, the specific problem it solves, and enhancements over earlier research (citing relevant literature with titles).
2. Explain the step-by-step methodology, including the functions of each module and the rationale for why this approach effectively addresses previous challenges.
Please adhere to the following guidelines:
1. Your research idea should be innovative, feasible, and contribute meaningfully to the field. Please carefully examine the idea you have proposed, avoid immediate perception, and try to be different from the previous methods as much as possible.
2. Ensure your proposal is solid, clearly defined, and practical to implement. Logic should underpin your reasoning.
3. Write in clear, concise language aimed at an audience with limited background knowledge in the subject. Avoid complex technical jargon, but when professional terms are necessary, provide thorough explanations.
4. Refrain from introducing concepts from uncertain fields to prevent proposing ideas that may be incorrect or impractical.
5. When referencing other research, please include the titles of the cited papers.
6. Please avoid introducing unfamiliar information, ensuring that the trends you present are both authentic and reasonable. Before proposing any trends, take a moment to reflect on the principles underlying the methods you're employing and assess their relevance to your research area.
Continue to next table →
Table 15: Prompt used to generate idea (part II)
7. Each article's limitations are specific to that particular piece and should not be applied to others. Carefully consider the task at hand and analyze the potential issues you might encounter if you proceed with your original approach, reflecting on the challenges previously faced. Then, think critically about how to address these issues effectively.
The following section delineates the progressive relationships among the previously summarized research papers: [Trend]
The following section outlines the potential future research directions based on the literature you have studied: [Future direction]
Please output your motivation, novelty, method firstly and then output your final idea. The final idea should clearly explain the origins, motivation, and challenges of your idea, detailing how you overcame these hurdles.
Please present the final idea in the following format:
Motivation: ...
Novelty: ...
Method: ...
Final idea: ...
Table 16: Prompt used to check the novelty of the idea
You are a scientific research expert tasked with evaluating the similarity between a specified idea and existing research. Your objective is to determine if the target idea closely resembles any findings in the provided papers.
The target idea you need to check is as follows: [Idea]
The relevant papers you need to refer to are as follows: [Content of retrieved papers]
Here are your guidelines:
1. Comparison Process: Begin by thoroughly comparing each paper's ideas with the target idea. Consider the methodologies, conclusions, and underlying concepts in each paper in your analysis.
2. Similarity Assessment: If the target idea shares fundamental similarities with any existing research to the extent that they can be considered identical, classify this as plagiarism.
3. Output: Your output should provide a clear thought process, the similarity assessment, a summary of the target idea, and the ID of the most relevant similar paper.
Please output strictly in the following format:
Think: ...
Similar: 0/1
Summary of the idea: ...
Similar paper id: 0 to n
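As an illustration, the novelty check in Table 16 can gate idea generation: generation is retried while the judge reports "Similar: 1", with near-duplicates fed back into the "[Bad case]" slot of the idea prompt. The sketch below is hypothetical; the prompt constants and the two callables are stand-ins rather than the released code.

```python
# Sketch of a generate-then-check-novelty loop built around the prompt in Table 16.
IDEA_PROMPT = "..."     # Tables 14-15 text, with a {bad_case} slot
NOVELTY_PROMPT = "..."  # Table 16 text

def generate_novel_idea(topic, call_llm, retrieve_similar_papers, max_attempts=3):
    """call_llm: str -> str; retrieve_similar_papers: idea -> retrieved paper contents."""
    bad_cases = []
    for _ in range(max_attempts):
        idea = call_llm(IDEA_PROMPT.format(topic=topic, bad_case="\n\n".join(bad_cases)))
        papers = retrieve_similar_papers(idea)
        verdict = call_llm(NOVELTY_PROMPT.format(idea=idea, papers=papers))
        # Table 16 asks for "Similar: 0/1"; anything other than 1 is treated as novel here.
        if "Similar: 1" not in verdict:
            return idea
        bad_cases.append(idea)  # feed the near-duplicate back as a bad case
    return idea  # fall back to the last attempt
```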
Table 17: Prompt used to generate experiment
You are a scientific expert tasked with designing rigorous, feasible experiments based on specified scientific questions and the methodologies derived from the idea I provide, along with relevant past research. Your goal is to assist researchers in systematically testing hypotheses and validating innovative discoveries that could significantly advance their fields.
Past Related Research Experiments: [Past experiments]
Here are the entities you need to know: [Entities]
Here is the idea you need to design an experiment for: [Idea]
Please propose a detailed experimental plan addressing the following points:
1. Experimental Design: Develop rigorous experiments to ensure the reliability and validity of your results. Provide a comprehensive explanation of the baseline used, comparative methods, ablation study design, and criteria for data analysis and result evaluation. Clarify how these components collectively reinforce and validate the conclusions of your research. Structure your experimental design in a clear, logical, and step-by-step manner, ensuring each step is well-defined and easy to understand.
2. Implementation of Technologies/Methods: If your experimental design involves specific technologies or methodologies, describe the implementation process in detail, including key technical aspects. For any critical concepts utilized, provide thorough explanations. For instance, if you propose a modular approach, detail its construction, components, and functionality.
3. Feasibility Assessment: Ensure your experimental plan is realistic, considering technological availability, timelines, resources, and personnel. Identify potential challenges and propose strategies for addressing them.
4. References to Previous Studies: When citing related literature, include titles and pertinent details of the original papers. Strive to use as many references as necessary to support your experimental design.
5. Visual Aids: If useful, provide pseudo code or a flowchart to illustrate the implementation process. For example, you can use pseudo code to detail the core algorithm or the model architecture, or employ a flowchart to map out the experimental procedure and data flow.
6. Clarity of Language: Use straightforward language to describe your methods, assuming the reader may have limited knowledge of the subject matter. Avoid complex jargon and utilize accessible terminology. If professional terms are necessary, please provide clear and detailed explanations.
Please output strictly in the following format:
Experiment:
Step1: ...
Step2: ...
...
Table 18: Prompt used to review experiment
You are an expert in paper review. Your task is to analyze whether a given experiment can effectively verify a specific idea, as well as assess the detail and feasibility of the experiment.
Here are the related entities you need to know: [Entities]
The idea presented is: [Idea]
The corresponding experiment designed for this idea is: [Experiment]
Please conduct your analysis based on the following criteria:
1. Can the experiment validate the idea? If not, identify the issues and suggest improvements to enhance its verification capability and feasibility.
2. Are there specific experimental procedures that are confusing or poorly designed? Discuss any methods that may not be feasible, uncertainties in constructing the dataset, or a lack of explanation regarding the implementation of certain methods.
3. Evaluate the clarity, detail, reasonableness, and feasibility of the experimental design.
4. Provide suggestions for improving the experiment based on the shortcomings identified in your analysis.
5. Focus solely on the experiment design; please refrain from altering the original idea.
6. Ensure that your suggestions are constructive, concise, and specific.
Please strictly follow the following format for output:
Suggestion: ...
Table 19: Prompt used to get query for search paper to refine experiment
You are a research expert tasked with refining and improving an
experimental plan based on the feedback received.
The experimental plan you proposed is as follows: [Experiment]
You have received the following suggestions for improvement:
[Suggestions]
Please decide whether you need to search for relevant papers to
obtain relevant knowledge to improve your experiment.
If you need to search for relevant papers, please provide a search
query for literature search, else provide "".
For example:
if suggestions say that the dynamic query additional
information and update knowledge graph described in the experiment
is not clearly described, so you need to output "dynamic knowledge
graph update".
Please output strictly in the following format:
Query:...
Table 20: Prompt used to refine experiment
You are a research expert tasked with refining and improving an experimental plan based on the feedback received.
The information of the literature you maybe need to refer to are as follows: [Searched paper information]
The experimental plan you proposed is as follows: [Experiment]
Please propose a detailed experimental plan addressing the following points:
1. Experimental Design: Develop rigorous experiments to ensure the reliability and validity of your results. Provide a comprehensive explanation of the baseline used, comparative methods, ablation study design, and criteria for data analysis and result evaluation. Clarify how these components collectively reinforce and validate the conclusions of your research. Structure your experimental design in a clear, logical, and step-by-step manner, ensuring each step is well-defined and easy to understand.
2. Implementation of Technologies/Methods: If your experimental design involves specific technologies or methodologies, describe the implementation process in detail, including key technical aspects. For any critical concepts utilized, provide thorough explanations. For instance, if you propose a modular approach, detail its construction, components, and functionality.
3. Feasibility Assessment: Ensure your experimental plan is realistic, considering technological availability, timelines, resources, and personnel. Identify potential challenges and propose strategies for addressing them.
4. References to Previous Studies: When citing related literature, include titles and pertinent details of the original papers. Strive to use as many references as necessary to support your experimental design.
5. Visual Aids: If useful, provide pseudo code or a flowchart to illustrate the implementation process. For example, you can use pseudo code to detail the core algorithm or the model architecture, or employ a flowchart to map out the experimental procedure and data flow.
6. Clarity of Language: Use straightforward language to describe your methods, assuming the reader may have limited knowledge of the subject matter. Avoid complex jargon and utilize accessible terminology. If professional terms are necessary, please provide clear and detailed explanations.
You have received the following suggestions for improvement: [Suggestions]
Please refine your experimental plan based on the feedback provided. Ensure your refined plan is feasible, clearly defined, and addresses the feedback you received.
Please output strictly in the following format:
Experiment: ...
Table 21: Prompt used to extract topic from real paper
You are a research expert tasked with extracting the main topic from the provided paper information.
The main topic should encompass broad fields such as "Retrieve augment generation" or "using diffusion models for video generation". However, it should also include a relevant task to the topic, formatted as "topic:... task:...".
Please read the provided paper and extract only the topic, which should follow this structure.
The paper's title is [Title]
The paper's abstract is as follows: [Abstract]
The paper's introduction is as follows: [Introduction]
Please output strictly in the following format:
topic: ...
Table 22: Prompt used to extract idea from real paper
You are a research expert tasked with extracting the main idea from the provided paper information.
The main idea should encompass the motivation, solved problem, novelty, method of the paper.
Please read the provided paper and extract the main idea from the paper.
The paper content is as follows: [Content]
Idea is composed of the following components:
Motivation: Explain the background of the idea and past related work, identify the shortcomings of past work, identify the problems that need improvement, and identify the issues the paper want to address.
Novelty: Explain the differences between the method and the current method (preferably list specific methods), explain what improvements the paper have made to the previous method, and then identify the problems that can be solved and the benefits that can be gained from these improvements.
Method: Provide a detailed description of your idea, including the core method, the problem it solves, and the improvement compared with previous work (Cite the previous work with the title of the paper). Explain the specific steps of the method, the specific functions of each module, and the specific reasons why this method can solve the previous problem.
Here are some tips for extracting the main idea:
1. Make idea easy to understand, use clear and concise language to describe, assuming the reader is someone who has few knowledge of the subject, avoid using complex technical terms, and try to use easy-to-understand terms to explain. If the paper use some professional terms, please explain them in detail.
2. When the paper cite other papers, please indicate the title of the original paper.
The final idea should be detailed and specific, clearly explain the origins, motivation, novelty, challenge, solved problem and method of the paper, and detail how the overcame these hurdles. Ensure your approach is innovative, specifying how this innovation is reflected in your experimental design.
The final idea should be double-blind, i.e. no experimental results or codes should be shown.
Please output strictly in the following format:
Final idea: ...
Table 23: Prompt used to extract experiment from real paper
You are a research expert tasked with extracting the specific experiment steps from the provided paper information.
The specific experiment steps should include the specific methods for each step.
Please read the provided paper and extract specific experiment steps from the paper.
The paper content is as follows: [Content]
There are some tips for extracting the experiment steps:
1. Detail the Experimental Process: Describe the entire experimental process, including how to construct the dataset and each specific experimental step. Ensure that each experimental method is clearly and thoroughly detailed.
2. If specific technologies are involved in the experimental design, describe the implementation process in as much detail as possible (i.e., technical details).
3. Make sure your experimental plan is concise and clear, and can be easily understood by others, should not be too complicated.
4. Please provide a detailed explanation of the baseline used in the paper, the comparative methods, the ablation design and the experimental design. Specifically, elaborate on how these elements collectively support and validate the conclusions drawn in your research.
5. Explain how your experimental design can help you verify the idea and how the experiment is detailed and feasible.
Now please output strictly in the following format:
Experiment:
Step1: ...
Step2: ...
...
Table 24: Prompt used to compare two ideas
You are a judge in a competition. You have to decide which idea is better.
The idea0 is: [idea0]
The idea1 is: [idea1]
The topic is: [topic]
Which idea do you think is better? Please write a short paragraph to explain your choice.
Here are your evaluation criteria:
1. Novelty: Are the problems or approaches new? Is this a novel combination of familiar techniques? Is it clear how this work differs from previous contributions? Is related work adequately referenced?
2. Significance: Is the idea important? Are other people (practitioners or researchers) likely to use these ideas or build on them? Does the idea address a difficult problem in a better way than previous research? Does it provide a unique theoretical or pragmatic approach?
3. Feasibility: Can the idea be realized with existing technology or methods? Are there any technical difficulties or bottlenecks? Is the idea clear and logical? Is there any obvious error or unreasonable part in the idea, and can the experiment be designed normally according to this idea?
4. Clarity: Is the paper clearly written? Is it well-organized? Does it adequately inform the reader?
5. Effectiveness: How likely the proposed idea is going to work well (e.g., better than existing baselines).
Note:
Avoid any position biases and ensure that the order in which the responses were presented does not influence your decision. DO NOT allow the LENGTH of the responses to influence your evaluation, choose the one that is straight-to-the-point instead of unnecessarily verbose. Be as objective as possible. (very important!!!)
If you think idea0 is better than idea1, you should output 0. If you think idea1 is better than idea0, you should output 1. If you think idea0 and idea1 are equally good, you should output 2.
Your output should be strictly in following format:
Your thinking process: ...
Your choice:
Novelty: 0/1/2
Significance: 0/1/2
Feasibility: 0/1/2
Clarity: 0/1/2
Effectiveness: 0/1/2
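The pairwise judgments elicited by this prompt are converted into per-method ELO scores in Idea Arena. The sketch below shows one standard way to do this conversion; the K-factor, initial rating, and tie handling are our assumptions rather than the paper's exact settings.

```python
# Sketch: turn pairwise outcomes (0 = idea0 wins, 1 = idea1 wins, 2 = tie, as in Table 24)
# into ELO scores with the standard update rule. Hyperparameters are illustrative.
def elo_scores(matches, k=32, init=1000.0):
    """matches: iterable of (method0, method1, outcome) triples."""
    ratings = {}
    for m0, m1, outcome in matches:
        r0 = ratings.setdefault(m0, init)
        r1 = ratings.setdefault(m1, init)
        expected0 = 1.0 / (1.0 + 10 ** ((r1 - r0) / 400.0))
        score0 = {0: 1.0, 1: 0.0, 2: 0.5}[outcome]
        ratings[m0] = r0 + k * (score0 - expected0)
        ratings[m1] = r1 + k * ((1.0 - score0) - (1.0 - expected0))
    return ratings

if __name__ == "__main__":
    demo = [("CoI Agent", "RAG", 0), ("CoI Agent", "AI-Scientist", 0), ("RAG", "AI-Scientist", 2)]
    print(elo_scores(demo))
```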
Table 25: Prompt used to compare two experiments
You are a judge in a competition. You have to decide which experiment is better.
The idea of experiment0 is: [idea0]
The experiment0 is: [experiment0]
The idea of experiment1 is: [idea1]
The experiment1 is: [experiment1]
Which experiment do you think is better? Please write a short paragraph to explain your choice.
Here are your evaluation criteria:
1. Feasibility: Can the experiment be realized with existing technology or methods? Are there any technical difficulties or bottlenecks? Is the experimental plan detailed and feasible? Are the experimental steps clear and logical? Is there any obvious error or unreasonable part in the experiment? Consider the rationality of its steps and the possibility that the idea can be successfully implemented.
2. Quality: Is there a clear rationale for each step of the experimental design? Are the baseline and evaluation metrics chosen appropriately? Has the design taken into account the potential advantages and limitations of the methods used? Can this experimental design effectively support the claims made in the idea?
3. Clarity: Is the experimental plan clearly written? Does it provide enough information for the expert reader to understand the experiment? Is it well organized? Does it adequately inform the reader?
Note:
Avoid any position biases and ensure that the order in which the responses were presented does not influence your decision. DO NOT allow the LENGTH of the responses to influence your evaluation, choose the one that is straight-to-the-point instead of unnecessarily verbose. Be as objective as possible. (very important!!!)
If you think experiment0 is better than experiment1, you should output 0. If you think experiment1 is better than experiment0, you should output 1. If you think experiment0 and experiment1 are equally good, you should output 2.
Your output should be strictly in following format:
Your thinking process: ...
Your choice:
Feasibility: 0/1/2
Quality: 0/1/2
Clarity: 0/1/2
Table 26: Evaluation results of idea generation for both model-based evaluation and human-based evaluation. Each judge (Human, GPT-4o, Gemini-1.5-Pro-Exp-0827, and Claude-3.5-Sonnet) scores Real Paper, CoI Agent (ours), GPT-Researcher, ResearchAgent, RAG, and AI-Scientist on Novelty, Significance, Clarity, Feasibility, and Effectiveness via ELO; the per-judge average ELO and overall rank are:
Human: Real Paper 1100 (rank 1), CoI Agent (ours) 1085 (2), RAG 1029 (3), GPT-Researcher 992 (4), ResearchAgent 980 (5), AI-Scientist 812 (6)
GPT-4o: Real Paper 1115 (rank 1), CoI Agent (ours) 1107 (2), GPT-Researcher 999 (3), ResearchAgent 995 (4), RAG 950 (5), AI-Scientist 833 (6)
Gemini-1.5-Pro-Exp-0827: Real Paper 1110 (rank 1), CoI Agent (ours) 1107 (2), GPT-Researcher 1003 (3), ResearchAgent 983 (4), RAG 936 (5), AI-Scientist 860 (6)
Claude-3.5-Sonnet: Real Paper 1145 (rank 1), CoI Agent (ours) 1094 (2), GPT-Researcher 1000 (3), ResearchAgent 998 (4), RAG 938 (5), AI-Scientist 825 (6)
|
07yvxWDSla | Synthetic continued pretraining | [ 8, 8, 8, 8 ] |
Published as a conference paper at ICLR 2025
SYNTHETIC CONTINUED PRETRAINING
Zitong Yang∗
Department of Statistics
Stanford University
Neil Band∗
Department of Computer Science
Stanford University
Shuangping Li
Department of Statistics
Stanford University
Emmanuel Candès
Department of Statistics
Stanford University
Tatsunori Hashimoto
Department of Computer Science
Stanford University
ABSTRACT
Pretraining on large-scale, unstructured internet text enables language models to
acquire a significant amount of world knowledge. However, this knowledge acqui-
sition is data-inefficient—to learn a fact, models must be trained on hundreds to
thousands of diverse representations of it. This poses a challenge when adapting a
pretrained model to a small corpus of domain-specific documents, where each fact
may appear rarely or only once. We propose to bridge this gap with synthetic con-
tinued pretraining: using the small domain-specific corpus to synthesize a large
corpus more amenable to learning, and then performing continued pretraining on
the synthesized corpus. We instantiate this proposal with EntiGraph, a synthetic
data augmentation algorithm that extracts salient entities from the source corpus
and then generates diverse text by drawing connections between those entities.
Synthetic continued pretraining with EntiGraph enables a language model to an-
swer questions and follow generic instructions related to the source documents
without access to them. If the source documents are instead available at inference
time, we show that the knowledge acquired through our approach compounds with
retrieval-augmented generation. To better understand these results, we build a sim-
ple mathematical model of EntiGraph, and show how synthetic data augmentation
can “rearrange” knowledge to enable more data-efficient learning.
1
INTRODUCTION
Language models (LMs) have demonstrated a remarkable ability to acquire knowledge from unstruc-
tured text, enabling them to perform challenging knowledge-intensive tasks (Brown et al., 2020;
OpenAI et al., 2024; Gemini, 2024; Anthropic, 2024b; Dubey et al., 2024; Gunter et al., 2024).
These successes are enabled by the combination of the next-token prediction objective (Shannon,
1951) and large-scale internet data (Common Crawl, 2007). However, it is becoming increasingly
apparent that this approach is data-inefficient; for example, a 13-year-old human acquires knowl-
edge from fewer than 100M tokens, while state-of-the-art open-source language models are trained on
15T tokens (Warstadt et al., 2023; Dubey et al., 2024). Recent works have highlighted a range of
related problematic phenomena, including the “reversal curse”, where models struggle to learn the
relation “B=A” when trained on “A=B” (Berglund et al., 2023), and the requirement that models be
exposed to thousands of examples per fact for knowledge acquisition (Allen-Zhu & Li, 2024).
These drawbacks pose a challenge when adapting the next-token prediction paradigm to learn from
small-scale corpora. Because large-scale pretrained models already capture much of public common
knowledge, further advancements will necessitate learning from the tails of the distribution (Kandpal
et al., 2023): niche data that is either contained in small, private domains or appears only once or
twice on the internet. This challenge of data-efficient, parametric knowledge acquisition is becoming
increasingly important as growing compute capacity enables language model providers to exhaust
publicly available data (Muennighoff et al., 2023; Villalobos et al., 2024).
We propose to address this problem of acquiring knowledge from small corpora with synthetic con-
tinued pretraining. To illustrate, consider the problem of teaching an LM a new area of mathematics,
succinctly documented by a small set of textbooks. Directly training the model on those textbooks
∗Equal contribution. Correspondence to: [email protected], [email protected].
Figure 1: Synthetic continued pretraining (synthetic CPT) converts a small source corpus into a large syn-
thetic corpus that is amenable to learning via standard continued pretraining. We instantiate synthetic CPT
using a synthetic data augmentation algorithm called EntiGraph, which forms a knowledge graph over entities
extracted from documents, and then prompts an LM to synthesize a text-based representation of the graph.
is unlikely to be effective due to the limited volume of text (e.g., tens of thousands of words), and
the model will struggle to generalize from this compressed representation of knowledge. In contrast,
learning established mathematical areas like linear algebra is straightforward because a large-scale
corpus with diverse knowledge representations is accessible: for example, online lecture notes, Stack
Exchange discussions, or Python implementations of the singular value decomposition. Synthetic
continued pretraining bridges this gap by first converting a small, data-constrained domain into a
synthetic corpus with diverse knowledge representations, and then continuing pretraining on it.
One basic approach is to simply paraphrase or rewrite the source documents in multiple ways. How-
ever, we demonstrate that this generic rephrasing does not cover the gap in the diversity of knowledge
representations. We repeatedly rephrase a small corpus and find that the value of incremental syn-
thetic data quickly decreases, with downstream model performance scaling poorly. We attribute this
failure to the lack of diversity in paraphrasing alone. In the linear algebra example, online lecture
notes and Stack Exchange discussions go beyond a simple rewrite of any textbook—they provide
deeper analysis and application of the underlying concepts and techniques.
We address this shortcoming with EntiGraph, an entity-centric augmentation algorithm. EntiGraph
breaks down a text corpus into a list of entities and then uses an LM to describe relations among
entities, iteratively “filling in” the knowledge graph underlying the corpus (Figure 1).
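As a purely illustrative aid, the two EntiGraph steps just described (entity extraction, then relation analysis over entity pairs) can be sketched as follows. The prompt wording and the call_llm client are placeholders, not the authors' released implementation.

```python
# Minimal sketch of EntiGraph's two steps: extract entities, then prompt an LM to
# describe relations for each pair of entities (edges of the entity knowledge graph).
from itertools import combinations

ENTITY_PROMPT = "List the salient entities (names, concepts, objects) in the document below.\n\n{doc}"
RELATION_PROMPT = ("Analyze relations among the given entities in the provided text.\n\n"
                   "Document: {doc}\nEntities: {e1}, {e2}")

def entigraph(doc: str, call_llm, max_entities: int = 20) -> list:
    """Return synthetic passages describing pairwise entity relations in `doc`."""
    entities = [e.strip() for e in call_llm(ENTITY_PROMPT.format(doc=doc)).splitlines() if e.strip()]
    synthetic = []
    for e1, e2 in combinations(entities[:max_entities], 2):
        synthetic.append(call_llm(RELATION_PROMPT.format(doc=doc, e1=e1, e2=e2)))
    return synthetic
```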
To concretely measure progress towards effective knowledge acquisition from small corpora, we
propose an experimental setting based on QuALITY (Pang et al., 2022), a reading comprehension
dataset. It enables the evaluation of synthetic data generation methods for data-efficient learning
without incurring the high compute costs of pretraining from scratch. Specifically, we assume access
to a collection of 265 books totaling 1.3M tokens. Our task is to synthesize a corpus such that
continued pretraining on it enables a model to answer queries (e.g., multiple-choice QA or user
instructions related to the book content) without access to the source texts.
In our main experiments (§5), we use EntiGraph to generate 455M synthetic tokens from 1.3M real
tokens using GPT-4 (OpenAI et al., 2024). Then, we continually pretrain Llama 3 8B (Dubey et al.,
2024) on the synthetic tokens and evaluate its QA accuracy on the QuALITY questions. We observe
log-linear scaling in the accuracy as synthetic token count increases, up to 455M (§4.2). At the
endpoint, we find that synthetic continued pretraining with 455M EntiGraph tokens provides 80%
of the accuracy gain of having the source documents available at inference time (§5). Beyond QA,
we also perform instruction tuning on the continually pretrained model and find that it is capable of
following open-ended instructions (e.g., summarization) related to the QuALITY books (§4.3).
To summarize, our key contributions are as follows:
• We propose to learn from small corpora with synthetic continued pretraining—converting the
small corpus into a large, diverse, synthetic corpus and continuing pretraining on it—and instan-
tiate this approach using the EntiGraph synthetic data augmentation algorithm (§2.2).
• We demonstrate that continued pretraining on the EntiGraph-synthesized corpus yields a QA
accuracy scaling trend that is log-linear in the synthetic token count, significantly outperforming
continued pretraining on the source documents or paraphrases (§4.2). Furthermore, we show that
instruction tuning the EntiGraph continually pretrained model enables it to follow more diverse
queries related to the source documents (§4.3).
• We complement the main experiments with an open-book setup (§5), providing the model with
access to the source documents when answering queries. We demonstrate that the knowledge
acquired through synthetic continued pretraining with EntiGraph is complementary to the knowl-
edge accessed through retrieval-augmented generation (RAG, Lewis et al. (2020))—RAG with
the EntiGraph continually pretrained model outperforms RAG with the base model.
• Lastly, we build a mathematical model that captures the intuition behind EntiGraph. We analyze
it to obtain a parametric formula for the scaling trend of a continually pretrained model’s accuracy
with respect to EntiGraph synthetic tokens, closely matching our empirical observations (§6).
Practically, synthetic continued pretraining with EntiGraph enables pretrained LMs to adapt to spe-
cialized domains by acquiring parametric knowledge, rather than the non-parametric knowledge
accessed through retrieval. At a higher level, our approach points toward a family of synthetic data
generation algorithms that convert compute into data efficiency for (continued) pretraining.
1.1 RELATED WORK
We next discuss recent work most related to our setting of synthetic data generation for continued
pretraining. Appendix B surveys classical work on synthetic data and continual learning.
Synthetic generation of pretraining data. Recent approaches synthesize pretraining data using
hierarchical prompting methods to promote dataset diversity. Eldan & Li (2023) prompt LLMs to
generate stories containing sampled keywords, and demonstrate that small LMs trained on their
dataset can generate fluent text. Gunasekar et al. (2023) synthesize textbooks and code exercises by
conditioning on topic, target audience, and function names, and later release strong LLMs pretrained
on synthetic data (Li et al., 2023b; Abdin et al., 2023; 2024). However, their datasets and prompts
are not public. Maini et al. (2024) prompt an LM to rephrase documents for pretraining, improving
training efficiency. Distinct from all above works, our focus is teaching a pretrained LLM the knowl-
edge of a small corpus. Mecklenburg et al. (2024) consider task-specific finetuning and propose a
fact-based synthetic QA generation procedure, but do not show improvement on generic instruction
following tasks. We instead focus on teaching a model generally useful knowledge about a small
corpus, untied to a particular downstream task. Ovadia et al. (2024) continually pretrain Llama
2–based LMs on synthetic paraphrases of Wikipedia articles, but do not observe consistent improve-
ments. We adapt the approach of Maini et al. (2024) and Mecklenburg et al. (2024) to our small
corpus setting (“Rephrase baseline” in §4). We find that our graph-based augmentation algorithm
outperforms it, likely because our approach enforces diversity through entity-based generation.
Continued pretraining. Continual or continued pretraining works (Gururangan et al., 2020) adapt
pretrained LLMs to broad target domains such as code, medicine, or mathematics by collecting mas-
sive datasets (often >100B tokens; cf. Table 1 for a survey) and applying causal language modeling
recipes (Gupta et al., 2023; Ibrahim et al., 2024; Parmar et al., 2024). We aim to extend the success
of continued pretraining to small, specialized domains such as proprietary datastores. Observing
that standard continued pretraining is ineffective on small corpora, we propose a knowledge graph–
inspired approach to synthesize a diverse related corpus and find it more amenable to learning.
Knowledge editing. A related line of work updates LMs with small units of factual knowledge,
e.g., (subject, relation, object) tuples. Zhu et al. (2020) study constrained fine-tuning to limit model
complexity. Later approaches attempt to localize where factual knowledge is stored in Transformers
and update only those weights (Mitchell et al., 2022; Meng et al., 2022; 2023), or maintain an
external memory of edits and prepend them as context during generation (Zhong et al., 2023; Cohen
et al., 2023). Most related to our work is Akyürek et al. (2024), which first deduces implications of a
factual edit and then finetunes on those implications. Unlike the knowledge editing literature which
learns atomic, sentence-length facts, we aim to learn from a small corpus of documents.
2 OUR METHOD
We focus on learning parametric knowledge from a small corpus of documents. Our goal is to
continually pretrain an LM to acquire the knowledge of a niche corpus. Observing that simple
continued pretraining is ineffective (§4), we propose to use synthetic continued pretraining, which
first uses the small corpus to synthesize a larger one more amenable to learning, and then continues
Study | Domain | Model Parameter Count | Total Unique CPT Tokens
Minerva (Lewkowycz et al., 2022) | STEM | 8B, 62B, 540B | 26B-38.5B
MediTron (Chen et al., 2023) | Medicine | 7B, 70B | 46.7B
Code Llama (Rozière et al., 2024) | Code | 7B, 13B, 34B | 520B-620B
Llemma (Azerbayev et al., 2024) | Math | 7B, 34B | 50B-55B
DeepSeekMath (Shao et al., 2024) | Math | 7B | 500B
SaulLM-7B (Colombo et al., 2024b) | Law | 7B | 30B
SaulLM-{54, 141}B (Colombo et al., 2024a) | Law | 54B, 141B | 520B
HEAL (Yuan et al., 2024a) | Medicine | 13B | 14.9B
Our setting | Articles & Books | 8B | 1.3M
Table 1: Comparing the scale of modern continued pretraining (CPT) works with our small corpus setting.
Prior work adapts LMs to broad domains with diverse, large-scale corpora. We aim to downscale CPT to small
corpora; we use a corpus that is 10,000× smaller than the smallest modern corpus for domain-adaptive CPT.
pretraining on the synthetic corpus. In this section, we first outline this problem setting and our
evaluation approach in more detail (§2.1). Then, we provide a concrete instantiation of synthetic
continued pretraining using a data augmentation algorithm called EntiGraph (§2.2).
2.1 PROBLEM SETUP
Continued pretraining on small corpora. We focus on approaches that continually pretrain an
LM to teach it the knowledge of a small source corpus Dsource. These approaches acquire “parametric
knowledge”—the knowledge of Dsource is learned in the LM’s parameters, as in pretraining.
Synthetic continued pretraining (synthetic CPT). First, we apply a synthetic data generation
algorithm Asynth to convert a small corpus Dsource into a synthetic corpus Dsynth:
Asynth : Dsource ↦ Dsynth.    (1)
Then, we perform continued pretraining on Dsynth instead of on Dsource. We implement Asynth using
a prompted LM. A natural concern is that the LM may hallucinate and fabricate false knowledge.
Therefore, we consider synthetic data augmentation algorithms that condition the generation pro-
cess on the source documents to improve the synthesized data’s faithfulness.
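As a concrete illustration of this abstraction (a minimal sketch, not the released implementation), the interface can be written as a function from a source corpus to a synthetic corpus, followed by ordinary continued pretraining; `a_synth` and `continued_pretrain` below are hypothetical stand-ins for a data augmentation algorithm and a causal language modeling training loop.

```python
from typing import Callable, List

Corpus = List[str]  # a corpus is a list of plain-text documents

def synthetic_cpt(d_source: Corpus,
                  a_synth: Callable[[Corpus], Corpus],
                  continued_pretrain: Callable[[Corpus], None]) -> Corpus:
    """Synthetic CPT: map D_source to D_synth as in Eq. (1), then train on D_synth."""
    d_synth = a_synth(d_source)      # e.g., EntiGraph (Section 2.2)
    continued_pretrain(d_synth)      # standard causal language modeling
    return d_synth
```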
Evaluation with knowledge-intensive queries. We evaluate the quality of a synthetic data aug-
mentation algorithm Asynth by testing whether the downstream synthetic CPT model has effectively
acquired the knowledge of Dsource in its parameters. More precisely, we curate test queries Qtest that
probe the knowledge about Dsource acquired by the model. For example, in the linear algebra setting,
Qtest could be held-out exam questions. To test parametric knowledge, we do not allow the model to
access the source documents Dsource at test time. Therefore, the queries cannot be ambiguous with-
out access to Dsource. For example, a reading comprehension question like “Where was he born?”
is ambiguous without context. Altogether, we can evaluate data augmentation algorithms Asynth for
synthetic CPT using a paired source corpus and related test queries (Dsource, Qtest).
2.2 ENTIGRAPH
Next, we present EntiGraph, our instantiation of a synthetic data augmentation algorithm Asynth. At
a high level, EntiGraph generates diverse representations of knowledge from a small corpus Dsource
by using a prompted LLM to synthesize a knowledge graph representation of Dsource. EntiGraph
consists of two steps/prompts: extracting entities from the document and analyzing relations among
an arbitrary subset of the entities (Figure 1). Altogether, this hierarchical prompting strategy ex-
ternalizes the problem of generating diverse synthetic text to a combinatorial structure—namely, a
graph relating various entities appearing in the corpus documents.
Step 1: Entity extraction. First, EntiGraph extracts a list of salient entities {E1, E2, . . . , En}
from the document Dsource using an entity_extraction prompt (full prompt in Appendix H.1): {E1, E2, . . . , En} ∼ LMaug(entity_extraction(Dsource)). In the linear algebra example, Dsource could be one specific linear algebra textbook. We would expect to extract entities such as {E1 = Linear space, E2 = Vector, E3 = SVD, . . . }.
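A minimal sketch of this step, assuming a generic prompted-LM callable; the prompt wording below is illustrative and not the Appendix H.1 prompt.

```python
from typing import Callable, List

def extract_entities(document: str, call_lm: Callable[[str], str]) -> List[str]:
    """Step 1: prompt an LM to list the salient entities of a document.

    `call_lm` is any prompted LM (the paper uses gpt-4-turbo as LM_aug);
    the prompt here is a stand-in for the actual entity extraction prompt.
    """
    prompt = (
        "List the salient entities (people, places, objects, concepts) that "
        "appear in the document below, one per line.\n\nDocument:\n" + document
    )
    raw = call_lm(prompt)
    return [line.strip("-• ").strip() for line in raw.splitlines() if line.strip()]
```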
Step 2: Relation analysis. Next, EntiGraph analyzes the relations among subsets of enti-
ties. The intuition is to explore the edges of the knowledge graph underlying the source docu-
ment Dsource, analogous to a student writing diverse notes about a linear algebra textbook. We
apply a relation_analysis prompt (full prompt in Appendix H.1) to describe how a subset of k ≤ n entities are related in the context of the source document Dsource: D̃Ei1...Eik ∼ LMaug(relation_analysis(Dsource, Ei1, Ei2, . . . , Eik)). For example, if E1 = Linear space and E2 = Vector, D̃E1E2 could be "Based on the textbook, a vector is an element of a linear space...". Exhaustively enumerating all possible subsets of entities is impractical. We generate data for pairs D̃EiEj and triplets D̃EiEjEk in our experiments.
EntiGraph synthetic corpora. Finally, we collect all sampled synthetic texts from Step 2 as the
EntiGraph output: DEntiGraph = { D̃Ei1...Eik , . . . }. Altogether, we described a data augmentation
algorithm mapping a small source corpus Dsource to a larger synthetic corpus DEntiGraph, as in (1).
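A sketch of Step 2 and the corpus assembly, reusing the hypothetical `call_lm` interface from the Step 1 sketch; sampling all pairs plus a fixed number of random triplets is one simple way to realize the pair/triplet generation described above, and the prompt text is paraphrased rather than the Appendix H.1 prompt.

```python
import itertools
import random
from typing import Callable, List

def describe_relation(document: str, entities: List[str],
                      call_lm: Callable[[str], str]) -> str:
    """Step 2: ask the LM how a subset of entities relate within the document."""
    prompt = (
        "Analyze the relations among the given entities in the provided text.\n\n"
        f"Document:\n{document}\n\nEntities: {', '.join(entities)}"
    )
    return call_lm(prompt)

def entigraph_corpus(document: str, entities: List[str],
                     call_lm: Callable[[str], str],
                     n_triplets: int = 100, seed: int = 0) -> List[str]:
    """Collect synthetic texts for all entity pairs and some random triplets."""
    rng = random.Random(seed)
    corpus = [describe_relation(document, list(pair), call_lm)
              for pair in itertools.combinations(entities, 2)]
    if len(entities) >= 3:
        for _ in range(n_triplets):
            corpus.append(describe_relation(document, rng.sample(entities, 3), call_lm))
    return corpus
```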
3 EXPERIMENT SETUP
We next detail how we evaluate a given data augmentation algorithm Asynth. As described in §2.1,
we evaluate algorithms Asynth by evaluating how well an LM continually pretrained on their output
synthetic corpus Asynth(Dsource) can answer test queries Qtest about the source documents Dsource.
In our main experiments, we use queries that are unambiguous without the source documents Dsource,
and disallow the LM from accessing Dsource while answering queries Qtest. This allows us to evaluate
which data augmentation algorithm best promotes the acquisition of parametric knowledge through
synthetic CPT. Later, in §5, we consider an open-book setting where the model can simultaneously
access the source documents Dsource and test queries Qtest, to test how the parametric knowledge ac-
quired through synthetic CPT composes with non-parametric access to knowledge through retrieval
(Lewis et al., 2020). We next introduce our small corpus and related test queries (Dsource, Qtest).
QuALITY corpus Dsource. Our corpus and test queries are based on the QuALITY (Pang et al.,
2022) long-document comprehension benchmark. The QuALITY corpus Dsource consists of 265
articles and short books on genres such as science fiction and journalism, averaging ∼5,000 tokens.
QuALITY test queries Qtest. We use the 10-20 multiple choice questions accompanying each
article in QuALITY. They serve as high-quality knowledge probes on Dsource, but the query phrasing
often presupposes the reading comprehension context (e.g., “What does the author think about...”).
We remove ambiguity by contextualizing them with an article reference: “In the context of article
{article name} by {author name}, what does the author think about...”. This provides us with 4,609
unambiguous queries Qtest to test the parametric knowledge of our continually pretrained LMs.
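For instance, the disambiguation amounts to a simple template; a sketch (field names are illustrative):

```python
def contextualize(question: str, article_name: str, author_name: str) -> str:
    """Prefix a QuALITY question with its article reference to remove ambiguity."""
    return f"In the context of article {article_name} by {author_name}, {question}"

# Example:
# contextualize("What does the author think about checkups?",
#               "Defining Decay Down", "David Plotz")
```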
Evaluation on instruction-tuned summarization. We also instruction tune the continually pre-
trained LMs and evaluate them on more general instruction following queries. Specifically, we
prompt them to generate closed-book summaries of QuALITY articles, given only title and author.
Performance with strong API-based LLMs.
In our continued pretraining setting, we must select
a corpus Dsource that is not well-represented in standard pretraining datasets. As an initial test of
the obscurity of the QuALITY corpus Dsource, we evaluate GPT-3.5 and GPT-4 on Qtest.
In the
closed-book setting, we find GPT-3.5 accuracy at 44.81% and GPT-4 accuracy at 51.30% (Figure
2). In the open-book setting (full access to Dsource), we find GPT-3.5 accuracy at 72.60% and GPT-4
accuracy at 86.09% (Table 3). Based on the large (∼30%) improvement when Dsource is provided,
we conclude that the QuALITY corpus Dsource is sufficiently niche to serve as an appropriate testbed.
4 MAIN EXPERIMENTS
In this section, we present our main experimental results1. Using GPT-42 as our prompted model
LMaug, we apply EntiGraph to the 1.3M token QuALITY corpus Dsource, generating a 455M token
synthetic corpus. For the remainder of the paper, we refer to the former as the “Raw corpus” and the
latter as the “EntiGraph corpus”. Additional details on these corpora are provided in Appendix C.
We continually pretrain Llama 3 8B (Dubey et al., 2024) with causal language modeling on the
455M token EntiGraph corpus. In §4.1, we describe our CPT procedure and introduce two natural
baselines. In §4.2, we evaluate on the QuALITY test queries Qtest. In §4.3, we show that synthetic
CPT using EntiGraph is compatible with downstream instruction tuning (Ouyang et al., 2022).
1Code https://github.com/ZitongYang/Synthetic_Continued_Pretraining.git.
2We use the gpt-4-turbo model as of Aug. 19, 2024.
4.1 CONTINUED PRETRAINING PROCEDURE
EntiGraph CPT.
In our main continued pretraining experiment, we continually pretrain Llama 3
8B Base on the 455M token EntiGraph corpus for 2 epochs with replay on the RedPajama dataset
(TogetherAI, 2023). Hereafter, we refer to this model as “EntiGraph CPT”. We discuss CPT details
in Appendix D. Next, we describe two baselines to which we compare in closed-book QA (§4.2).
Raw CPT baseline. The first baseline continues pretraining Llama 3 8B Base on the 1.3M token
Raw corpus of raw QuALITY articles Dsource. We jointly tune the number of epochs and RedPajama
replay rate, obtaining the “Raw CPT” model. Further tuning details are provided in Appendix D.
Rephrase CPT baseline. Another simple synthetic data augmentation procedure is to rephrase
QuALITY articles repeatedly. Maini et al. (2024) and Ovadia et al. (2024) execute a systematic
extension of this idea (cf. §1.1). Based on their approaches, we craft a “Rephrase baseline” which
repeatedly applies three fixed prompts (easy, medium, and hard rephrase)3 to the QuALITY articles
at temperature 1.0. We stopped generating paraphrases at 38M tokens, where we observed a clear
gap in QA accuracy relative to EntiGraph CPT and a slower scaling trend (Figure 2). We refer to this
data as the “Rephrase corpus” and the continually pretrained model as “Rephrase CPT”.
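Operationally, "continued pretraining with replay" can be pictured as interleaving the target corpus with generic pretraining text before running causal language modeling. The sketch below is illustrative only; the replay rate shown is a placeholder rather than the tuned value from Appendix D.

```python
import random
from typing import Iterator, List

def replay_mixture(target_docs: List[str], redpajama_docs: List[str],
                   replay_rate: float = 0.1, seed: int = 0) -> Iterator[str]:
    """Yield a training stream mixing target-corpus docs with generic replay data.

    With probability `replay_rate`, emit a RedPajama document (to mitigate
    forgetting); otherwise emit a document from the target corpus.
    """
    rng = random.Random(seed)
    for doc in target_docs:
        if rng.random() < replay_rate:
            yield rng.choice(redpajama_docs)
        yield doc

# The resulting stream is then tokenized, packed into fixed-length sequences,
# and used for standard next-token-prediction training of Llama 3 8B Base.
```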
4.2 QUESTION-ANSWERING EVALUATIONS
Next, we present our closed-book QA evaluations with the QuALITY test queries Qtest.
Evaluation procedure. Each QuALITY question is a four-choice, single-answer multiple choice
question (similar to MMLU, Hendrycks et al. (2021)). We evaluate with 5-shot chain-of-thought
prompting (Brown et al., 2020; Wei et al., 2024) and provide our prompt in Appendix I.1.
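A sketch of the scoring loop, assuming a `generate` callable that wraps the evaluated model and a prompt that elicits a final "Answer: <letter>" line; the actual few-shot exemplars and prompt wording are in Appendix I.1.

```python
import re
from typing import Callable, Dict, List

def quality_accuracy(questions: List[Dict], few_shot_prefix: str,
                     generate: Callable[[str], str]) -> float:
    """Score 4-choice QuALITY questions with chain-of-thought prompting.

    Each question dict has keys: "query", "choices" (list of 4 strings),
    and "answer" (gold letter "A"-"D").
    """
    correct = 0
    for q in questions:
        options = "\n".join(f"{letter}. {text}"
                            for letter, text in zip("ABCD", q["choices"]))
        prompt = (f"{few_shot_prefix}\nQuestion: {q['query']}\n{options}\n"
                  "Let's think step by step, then give 'Answer: <letter>'.")
        completion = generate(prompt)
        match = re.search(r"Answer:\s*([ABCD])", completion)
        predicted = match.group(1) if match else None
        correct += int(predicted == q["answer"])
    return correct / len(questions)
```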
EntiGraph scaling. We find that CPT on
the 455M token EntiGraph corpus improves
closed-book QA accuracy from 39.49% (for
Llama 3 8B Base) to 56.22% (Figure 2). A
natural question is how accuracy scales as we
synthesize and train on more tokens with Enti-
Graph. To test this, we randomly subsam-
ple without replacement the EntiGraph corpus
with varying sample sizes, continually pretrain
Llama 3 8B Base on each subsample, and plot
accuracy versus sample size in Figure 2. We
observe log-linear scaling of the accuracy in the
number of synthetic tokens used for CPT, up to
455M tokens. We mathematically investigate
the scaling properties of EntiGraph in §6.
In
broad strokes, we postulate that QuALITY ac-
curacy follows a mixture-of-exponential shape
with three stages: (i) linear growth, (ii) log-
linear growth, and (iii) asymptotic plateau.
Figure 2: Accuracy on the QuALITY question set Qtest
(y-axis) as a function of the synthetic token count (x-
axis). The accuracy of synthetic continued pretraining
using the EntiGraph data augmentation algorithm (Enti-
Graph CPT) scales log-linearly up to 455M tokens.
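The subsampling step described above can be as simple as drawing documents without replacement until a token budget is reached; a sketch with approximate whitespace token counting (the budgets and counting are illustrative):

```python
import random
from typing import List

def subsample_corpus(docs: List[str], token_budget: int, seed: int = 0) -> List[str]:
    """Randomly subsample documents (without replacement) up to a token budget.

    Token counts are approximated by whitespace splitting; budgets in the
    scaling study range over roughly 1M to 455M synthetic tokens.
    """
    rng = random.Random(seed)
    shuffled = docs[:]
    rng.shuffle(shuffled)
    sample, total = [], 0
    for doc in shuffled:
        n_tokens = len(doc.split())
        if total + n_tokens > token_budget:
            break
        sample.append(doc)
        total += n_tokens
    return sample
```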
Comparison with baselines. Raw CPT (green line) underperforms even Llama 3 8B (dashed black
line). We postulate two explanations: (i) The Raw corpus follows a narrower, different distribution
than the Llama 3 pretraining corpus; heavily training on it may harm the LM’s English capabilities.
(ii) The limited diversity of knowledge representations in the Raw corpus leads to limited knowledge
acquisition due to problems such as the reversal curse (Berglund et al., 2023). Rephrase CPT scales
poorly compared with EntiGraph (Figure 2), suggesting that for synthetic CPT to scale well, the
synthetic data must be sufficiently diverse. EntiGraph tackles this problem using a hierarchical
prompting strategy that externalizes diversity to a knowledge graph’s combinatorial relationships.
INSTRUCTION FOLLOWING EVALUATIONS
4.3
Next, we explore more general test queries beyond the test queries Qtest. Concretely, we perform in-
struction tuning on EntiGraph CPT to obtain EntiGraph Instruct. We demonstrate that synthetic CPT
on the EntiGraph corpus is compatible with instruction tuning; EntiGraph Instruct can directly use
3Maini et al. (2024) include a 4th prompt to generate synthetic QA pairs. We defer this task-specific QA
finetuning method to Appendix E and focus on task-agnostic baselines for learning generic knowledge.
[Figure 2 plot: QA accuracy (y-axis) versus number of synthetic tokens in millions (x-axis, log scale), with curves for EntiGraph CPT and Rephrase CPT and horizontal baselines for GPT-4 (51.30%), GPT-3.5 (44.81%), Llama 3 8B Base (39.49%), and Raw CPT (38.15%).]
Table 2: EntiGraph Instruct examples.
Explicit reference: Summarize "Defining Decay Down".
Response: The article "Defining Decay Down" by David Plotz discusses [...] Dentists began to focus on cosmetic dentistry, [...]
Implicit reference: How has dentistry in the U.S. changed?
Response: 1. Increase in cosmetic dentistry [...] 2. Use of technology: [...]
Cross article instruction: Compare David Plotz's commentary on American dentistry and the movie Fight Club?
Response: David Plotz's commentary style is different when he analyzes American dentistry and when he discusses the movie Fight Club. [...]
knowledge obtained during synthetic CPT in instruction following tasks, without test-time access to
the QuALITY corpus Dsource. We detail our instruction tuning procedure in Appendix D.
Instruction tuning qualitative examples. We
first present qualitative examples that demonstrate
EntiGraph Instruct’s ability to follow instructions
related to QuALITY articles. First, we ask the
model to summarize a QuALITY article with ex-
plicit reference to the title and author, but no access
to the article itself (Table 2, top row). Next, we
show that even without an explicit reference to the
title and author, knowledge of the article is stored
in the model’s parameters and can affect its behav-
ior (Table 2, middle row). Finally, we provide an
example where the model performs a comparison
using knowledge across two articles (Table 2, bot-
tom row). Albeit artificial, this shows that though
EntiGraph does not synthesize data that simulta-
neously involves multiple articles, the model can
reason about their interaction using its parametric
knowledge. We provide full responses in Table 6.
Evaluating closed-book summarization. We
also present quantitative metrics for summarization,
a well-studied instruction following task. We compare EntiGraph Instruct summaries of QuALITY
articles with human-written summaries from sQuALITY (Wang et al., 2022), a variation of QuAL-
ITY with provided human summaries. Common scalar summarization metrics such as ROUGE (Lin,
2004) or BERTScore (Zhang* et al., 2020) mostly evaluate text similarity between the summary and
source articles, and may not accurately reflect summarization quality for abstractive systems (Zhang
et al., 2024b). We use a simple, automated evaluation metric based on pyramid evaluation (Nenkova
et al., 2007; Gao et al., 2019) that measures both the hallucination rate and how well the summary
captures the salient claims of the original article. Our approach uses GPT-4 to (1) split the sum-
mary into atomic claims (Min et al., 2023), (2) decide whether each claim is true/false based on the
source article, and (3) determine if true claims are salient to the article’s main message. We hence
obtain the count of false and salient claims for each summary, normalize these by the corresponding
count from the human summary, and report the average of these normalized metrics in Figure 3.
Appendix I.2 provides further details.
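The metric can be sketched as claim splitting followed by two LM judgments and normalization; `split_claims` and `judge` below are placeholders for the GPT-4 calls described in Appendix I.2, and the zero-division guard is a sketch-level choice.

```python
from typing import Callable, List, Tuple

def claim_metrics(summary: str, article: str, human_counts: Tuple[int, int],
                  split_claims: Callable[[str], List[str]],
                  judge: Callable[[str, str, str], bool]) -> Tuple[float, float]:
    """Return (false claims, salient claims) normalized by the human summary.

    split_claims: breaks a summary into atomic claims (an LM call).
    judge(claim, article, criterion): LM yes/no judgment, used both for
    factuality ("supported") and salience ("salient").
    """
    claims = split_claims(summary)
    false_count = sum(not judge(c, article, "supported") for c in claims)
    salient_count = sum(judge(c, article, "supported")
                        and judge(c, article, "salient") for c in claims)
    human_false, human_salient = human_counts
    return (false_count / max(human_false, 1),
            salient_count / max(human_salient, 1))
```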
Results discussion.
In Figure 3, we compare four sum-
marizers: EntiGraph Instruct, Raw Instruct, GPT-3.5, and
GPT-4. We provide each summarizer with two different
prompts, asking for short and long summaries (prompts
in Appendix I.2). When we request more detailed sum-
maries, Raw Instruct hallucinates and generates more
false claims with little improvement in the number of
salient claims. In contrast, EntiGraph Instruct can gener-
ate more salient claims as the summary gets longer, with
a small increase in the number of false claims (similar to
GPT-3.5 and GPT-4 levels). The gaps in both salient and
false claim rates are sufficiently large that these results
likely hold beyond our particular metric. We complement
the automated evaluation metrics above with several qual-
itative examples in Appendix I.2.
Figure 3: Closed-book summarization: number of false claims (y-axis) versus number of salient claims (x-axis), normalized by the human summary.
5 OPEN-BOOK EXPERIMENTS
Next, we consider an open-book setting with the domain-specific corpus Dsource available at test
time. In this widespread setting, retrieval-augmented generation (RAG; Lewis et al. (2020)) is the
predominant approach. A natural question is whether the parametric knowledge learned through synthetic CPT using EntiGraph complements the non-parametric knowledge accessed using RAG. We answer this question by comparing a state-of-the-art RAG pipeline with and without EntiGraph CPT.
[Figure 3 plot: number of false claims versus number of salient claims, each relative to the human summary, for RawCPT, GPT-3.5, GPT-4, and EntiGraph summarizers with short and long prompts.]
Model | Accuracy | Recall@8
EntiGraph CPT + RAG | 62.60 | 99.63
Llama 3 8B Base + RAG | 60.35 | 99.63
GPT-4 + Oracle RAG | 86.09 | 100.0
GPT-3.5 + Oracle RAG | 72.60 | 100.0
Table 3: QuALITY question-answering accuracy and recall rate in the open-book retrieval-augmented genera-
tion (RAG) setting. EntiGraph CPT and Llama 3 8B Base are used in a RAG pipeline (cf. §5 for setup details).
Recall@8 is defined as the proportion of questions for which the salient article appears in the top 8 reranked
document chunks. GPT-4 and GPT-3.5 Oracle RAG provide an upper bound with a perfect retriever, by placing
the entire relevant document in-context.
RAG evaluation setup. Our RAG pipeline follows established best practices (Lewis et al., 2020;
Gao et al., 2024).
It involves an offline stage which indexes document chunks, followed by
inference-time retrieval, reranking, and placement of those chunks in a few-shot LM prompt.
Throughout, we use OpenAI text-embedding-3-large (Neelakantan et al., 2022) as our
API-based embedding model, FAISS as our similarity search index (Douze et al., 2024), and Cohere
rerank-english-v3.0 (Cohere, 2024) as our reranker. Following the evaluation procedure
detailed in §4, we evaluate parallel RAG pipelines on the QuALITY multiple choice test set using
few-shot chain-of-thought prompting. All hyperparameters are tuned separately for each LM’s RAG
pipeline. We refer the reader to Appendix F for further details on our RAG evaluation setup.
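A condensed sketch of such a pipeline using the named components, assuming API keys are available in the environment; chunking, prompt construction, and the retrieval/rerank depths shown are illustrative rather than the tuned values from Appendix F.

```python
import numpy as np
import faiss
import cohere
from openai import OpenAI

openai_client = OpenAI()         # reads OPENAI_API_KEY from the environment
cohere_client = cohere.Client()  # reads the Cohere API key from the environment

def embed(texts: list[str]) -> np.ndarray:
    resp = openai_client.embeddings.create(model="text-embedding-3-large", input=texts)
    return np.array([d.embedding for d in resp.data], dtype="float32")

def build_index(chunks: list[str]) -> faiss.Index:
    vectors = embed(chunks)
    index = faiss.IndexFlatIP(vectors.shape[1])  # inner-product similarity search
    index.add(vectors)
    return index

def retrieve(question: str, chunks: list[str], index: faiss.Index,
             k_retrieve: int = 128, k_rerank: int = 8) -> list[str]:
    _, ids = index.search(embed([question]), k_retrieve)
    candidates = [chunks[i] for i in ids[0]]
    reranked = cohere_client.rerank(model="rerank-english-v3.0", query=question,
                                    documents=candidates, top_n=k_rerank)
    return [candidates[r.index] for r in reranked.results]

# The top reranked chunks are placed into a few-shot chain-of-thought prompt
# for either EntiGraph CPT or Llama 3 8B Base.
```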
EntiGraph continued pretraining complements RAG. We observe in Table 3 that EntiGraph
CPT outperforms Llama 3 8B Base, the model from which it is continually pretrained. These re-
sults demonstrate that the knowledge internalized through synthetic CPT is complementary to that
accessed during RAG, and demonstrate a competitive new recipe for small corpus QA: (1) synthetic
data augmentation, (2) continued pretraining, and (3) RAG.
EntiGraph continued pretraining alone approaches RAG performance. These results also
contextualize the effectiveness of EntiGraph in the closed-book, parametric knowledge setting (§4).
Comparing Figure 2 and Table 3, we observe that adding RAG to Llama 3 8B Base improves accu-
racy by 20.86% (39.49% → 60.35%). On the other hand, continued pretraining of Llama 3 8B Base
on the EntiGraph corpus improves accuracy by 16.73% (39.49% → 56.22%). Hence, EntiGraph
continued pretraining provides > 80% of the absolute performance improvement of RAG, even in a
small corpus setting where RAG recall is nearly perfect.
Overall, our results show that the parametric knowledge acquired in EntiGraph continued pretraining
composes with realistic knowledge-intensive QA pipelines, and that EntiGraph continued pretrain-
ing alone—without test-time corpus access—is nearly competitive with a strong RAG baseline.
6 THEORETICAL ANALYSIS OF ENTIGRAPH SCALING
It may seem surprising that simply “rewriting” the source documents Dsource can improve perfor-
mance at all (§4), as EntiGraph does not explicitly add new knowledge beyond Dsource. We postu-
late that EntiGraph “rearranges” Dsource into a layout more amenable to learning. For example, in
Dsource, the entity pair (A, B) may appear together in some sentences and (B, C) in others. As a
result, models trained directly on Dsource may learn the (A, B) relation and the (B, C) relation, but
not the (A, C) relation (Akyürek et al., 2024). We build a mathematical model to formalize this in-
tuition (§6.1) and provide a quantitative prediction that the scaling trend of EntiGraph CPT follows
a mixture-of-exponential shape (§6.3), which fits well with our empirical observations (Figure 4).
6.1 TOY MODEL SETUP
In this toy model, we use 𝒱 to denote the set of entities, and represent the source documents Dsource with pairs of known relations Dsource ⊂ {(x, y) ∈ 𝒱² : x ≠ y}. We assume that each relation pair in 𝒱² appears in the source documents Dsource independently at random, with probability p. Mathematically, P[(x, y) ∈ Dsource] = p for all x ∈ 𝒱 and y ∈ 𝒱 with x ≠ y. We write V = |𝒱| and assume that p = λ/V, for some constant λ > 1.
Training as memorization. We model the learning of factual knowledge as a memorization pro-
cess, in which a model memorizes the relations it is trained on but does not meaningfully generalize
beyond them (Yang et al., 2023; Feldman, 2020). In this view, a language model’s knowledge can
be represented by a matrix M ∈ {0, 1}^(V×V) such that M(x, y) = 1 if the model "knows" the (x, y) relation and equals 0 otherwise. Then, training directly on the source documents Dsource simply means setting all entries that appear in Dsource to 1, denoting that the model has memorized the relations given in the source documents. Mathematically, we denote this model trained on Dsource by the matrix M0 ∈ {0, 1}^(V×V), which has i.i.d. Bernoulli off-diagonal entries with mean p.
EntiGraph synthetic data augmentation. Given the source documents Dsource, we define the
following iterative procedure of synthetic data generation: for each t = 1, 2, . . .
• Entity pair selection: Sample (xt, yt) ∈ {(x, y) ∈ 𝒱² : x ≠ y} uniformly at random.
• Relation analysis: Generate the "relation between (xt, yt)" by performing a breadth-first search (BFS) on the directed graph represented by the adjacency matrix M0 starting at xt. If no such path exists, do nothing. If there exists a path (xt, zt^1, zt^2, . . . , zt^kt, yt) connecting xt to yt, define Dt = {(xt, zt^1), (xt, zt^2), . . . , (xt, zt^kt), (xt, yt)} ∪ Dt−1, where we assume D0 = Dsource. The model trained on this round of synthetic data is Mt = Mt−1 + Σ_{(x,y) ∈ Dt\Dt−1} Ixy, where Ixy ∈ {0, 1}^(V×V) is a binary matrix with Ixy(x, y) = 1 and 0 otherwise.
This mirrors the relation analysis step for the EntiGraph synthetic data augmentation algorithm
(Step 2, §2.2). With the setup above, the index t is analogous to the number of synthetic tokens
that the model has generated, and the model’s knowledge is captured by how many ones the matrix
Mt contains. To make this connection precise, we define the link density (or accuracy) of Mt to be
Acc(Mt) = E[∥Mt∥1 | M0] / (V(V − 1)), where the expectation is taken over the randomness arising from the synthetic data generation process and not the source documents Dsource, and ∥M∥1 denotes Σ_{i,j} |Mi,j|. We use the notation Acc as this is intended to emulate the accuracy on QuALITY test
queries studied in the experimental sections (§4 and §5).
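The toy process is straightforward to simulate; the sketch below mirrors the entity-pair selection and BFS steps above and tracks the empirical link density for a single draw of M0 (small V for speed).

```python
import random
from collections import deque

def simulate_entigraph(V: int = 200, lam: float = 2.0, steps: int = 5000, seed: int = 0):
    """Simulate the toy model: random M0, then iterative BFS-based filling."""
    rng = random.Random(seed)
    p = lam / V
    # M0: directed graph with i.i.d. Bernoulli(p) off-diagonal entries.
    known = {(x, y) for x in range(V) for y in range(V) if x != y and rng.random() < p}
    adj = [[] for _ in range(V)]
    for x, y in known:
        adj[x].append(y)
    densities = []
    for _ in range(steps):
        x, y = rng.sample(range(V), 2)           # entity pair selection
        # BFS from x on the initial graph M0, searching for a path to y.
        parent, queue = {x: None}, deque([x])
        while queue and y not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent:
                    parent[v] = u
                    queue.append(v)
        if y in parent:                           # add (x, z) for every z on the path
            node = y
            while node is not None:
                if node != x:
                    known.add((x, node))
                node = parent[node]
        densities.append(len(known) / (V * (V - 1)))
    return densities  # empirical link density Acc(M_t) over t
```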
6.2 RIGOROUS UPPER AND LOWER BOUND
In this section, we derive rigorous upper and lower bounds on the scaling trend of Acc(Mt).
Definition 1. Let Cλ = (1 − ρ(λ))², where ρ(λ) denotes the extinction probability for a Poisson(λ) branching process (i.e., ρ is the smallest solution in [0, 1] to the fixed-point equation ρ = exp(λ(ρ − 1))). For any fixed ε > 0, we further define CLB = 1 − 1/(V(V − 1)) and CUB = 1 − (1 + ε) log V / (V(V − 1) log λ).
Theorem 1. For any time t ≥ 1 and any ε > 0, the link density satisfies, with probability → 1 as V → ∞,
(p + Cλ(1 − CLB^t)) (1 − ε) ≤ Acc(Mt) ≤ (p + Cλ(1 − CUB^t)) (1 + ε).
Even though Theorem 1 provides mathematically rigorous upper and lower bounds on the scaling
trend of Acc(Mt), the exact growth curve is more intricate, as we will show next.
6.3 AN ANALYTICAL FORMULA
We analyze the link density Acc(Mt) using a Pois-
son branching process approximation of the cluster
growth of vertices. This approach yields a mixture-of-
exponential scaling trend
Acc(Mt) ∼ p + C (1 − Σ_{k=1}^∞ µ(k) (1 − a_k)^t),    (2)
where A ∼ B means that A/B converges to 1 in prob-
ability as V → ∞. The parameter C governs the link
density Acc(Mt) as t → ∞ and is determined by the
proportion of reachable pairs of vertices in the initial
matrix M0. µ(·) is the probability mass function on k, which controls the proportion of pairs of
vertices with a specific decay rate. The parameters µ(·) and a_k depend on M0 in a more intricate
manner (cf. Appendix G for a full derivation). We find that (2) accurately fits the empirical scaling
trend of EntiGraph CPT accuracy up to 455M synthetic tokens (Figure 4). We discuss curve fitting
in Appendix G.1, where we show that the mixture-of-exponential shape grows in three phases: (i)
linear growth; (ii) log-linear growth; (iii) asymptotic plateau.
Figure 4: A mixture-of-exponential function
(2) closely fits the scaling trend of EntiGraph
CPT with respect to synthetic token count.
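As an illustration of how such a fit can be produced, a truncated two-term version of (2) can be fit with standard nonlinear least squares; this is a sketch rather than the procedure of Appendix G.1, and it drops the constraint that µ(·) is a probability mass function.

```python
import numpy as np
from scipy.optimize import curve_fit

def mixture_of_exponential(t, p, C, w1, w2, a1, a2):
    """Two-term truncation of Eq. (2): Acc(t) ~ p + C * (1 - sum_k mu(k)(1 - a_k)^t)."""
    return p + C * (1.0 - (w1 * (1.0 - a1) ** t + w2 * (1.0 - a2) ** t))

def fit_scaling_curve(tokens: np.ndarray, accuracies: np.ndarray):
    """Fit the truncated mixture to (synthetic token count, QA accuracy) pairs."""
    p0 = [0.4, 0.2, 0.5, 0.5, 1e-8, 1e-6]   # rough initial guesses
    params, _ = curve_fit(mixture_of_exponential, tokens, accuracies, p0=p0)
    return params
```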
7 DISCUSSION AND CONCLUSION
7.1 LIMITATIONS
Because EntiGraph synthesizes data using a prompted LM, there is a risk it may hallucinate and
fabricate non-existent entities or relations. Although our synthesis process is grounded by the source
documents, it is an assumption that LMaug is capable enough to generate faithful synthetic data when
conditioned on Dsource. We quantitatively test the factuality of the EntiGraph corpus by randomly
subsampling 150 sentences from it and manually labeling each sentence’s factuality (Appendix A.2).
We find roughly half of the sentences are subjective, and the other objective half is almost always
factual. We postulate that factuality is high because QuALITY articles are relatively simple given
the capability of the prompted LM. If EntiGraph were applied to more challenging content like a
complex research paper, it is possible that the prompted model could be more prone to hallucination.
On the other hand, since we use a strong prompted LM gpt-4-turbo to generate synthetic data,
one might be concerned that our performance gains come from distilling it. To probe this, we per-
form an ablation study where we replace gpt-4-turbo with Llama 3.1 8B Instruct, a substantially
weaker model that is obtained from the same base model as EntiGraph CPT, in Appendix A.1. We
generated 334M EntiGraph tokens using Llama 3.1 8B Instruct and found a consistent log-linear
trend with the same slope but lower intercept (Figure 5) compared with GPT-4 generation. This
ablation suggests that EntiGraph operates by genuinely teaching the model knowledge about the
QuALITY corpus, rather than serving as a vehicle to distill a powerful prompted LM.
7.2 FUTURE DIRECTIONS
Continued scaling beyond real data. The large but finite body of human-written text is rapidly be-
ing consumed. Villalobos et al. (2024) predict that frontier language models will exhaust all public,
human-generated text in 2028. As we transition from a data-rich to a data-constrained regime (Ka-
plan et al., 2020; Muennighoff et al., 2023), further scaling will require us to extract more knowledge
from existing data. We demonstrated that synthetic continued pretraining with EntiGraph effectively
extracts more knowledge from small corpora, which could help us learn from proprietary datasets
or tail knowledge that appears only once or twice on the internet. It is an open question whether
synthetic data generation methods like EntiGraph could improve data efficiency more generally on
standard pretraining data and without relying upon a stronger prompted model.
Alternatives to long-context language models. Recent work handles long user queries (e.g.,
1M-10M+ tokens) using efficient attention (Dao et al., 2022; Liu et al., 2023; Gemini, 2024) or ar-
chitectures that are sub-quadratic in the context length (Tay et al., 2022; Gu et al., 2022; Gu & Dao,
2024; Sun et al., 2024). In settings where many queries share a long prefix—e.g., a corporation’s
proprietary documents or other prompt caching use cases (Anthropic, 2024a)—one could instead
continue pretraining on the prefix to internalize its knowledge, and then perform standard quadratic
attention on shorter queries. This approach pays a fixed training cost to amortize the prefix’s knowl-
edge into the weights of a model, and then benefits from shorter context lengths (Gururangan et al.,
2020; Snell et al., 2022). By adapting the continued pretraining paradigm from 10B-100B tokens
to as little as 1.3M tokens, our synthetic continued pretraining approach could enable unsupervised
learning of shared text prefixes at much smaller and more practical token counts.
7.3 CONCLUSION
Continued pretraining with next-token prediction is remarkably effective in teaching pretrained lan-
guage models new knowledge, but to date has only been applied successfully in broad, data-rich
domains with 10B-100B+ tokens. We downscale continued pretraining to small, specialized cor-
pora with ∼1M tokens using synthetic continued pretraining: converting a small corpus into a large
synthetic one with diverse representations of knowledge, and continuing pretraining on it.
We instantiate this approach using EntiGraph, a knowledge graph–inspired synthetic data augmen-
tation algorithm. Synthetic continued pretraining with EntiGraph demonstrates consistent scaling in
downstream closed-book QA performance up to a 455M token synthetic corpus, whereas baselines
such as continued pretraining on the small corpus or synthetic paraphrases show no improvement
or scale slowly. Moreover, the acquired parametric knowledge composes with instruction tuning
and retrieved non-parametric knowledge in an open-book setting. Lastly, we present a simplified
mathematical model of EntiGraph and derive a functional form for its scaling trend, which closely
matches our empirical trend. We hypothesize that EntiGraph’s “externalization” of the synthetic data
generation process to a combinatorial structure—in this case, a knowledge graph over entities—is a
generally useful strategy in synthesizing highly diverse data and a promising object for future study.
8 ACKNOWLEDGEMENT
Zitong Yang would like to thank Samy Jelassi for feedback on a preliminary version of this work,
Ruiqi Zhong for discussion regarding context distillation work, Xiang Lisa Li for discussion about
reversal curse work, and the participants of the statistics seminar at Stanford University for their
insightful feedback about a preliminary version of this work. We also thank the Tatsu Lab for con-
structive feedback and interesting discussions that have helped improve the paper. Zitong Yang is
supported by the Albion Walter Hewlett Stanford Graduate Fellowship. Neil Band acknowledges
funding from an NSF Graduate Research Fellowship and a Quad Fellowship. T.H. was supported
by a grant from Samsung Research, gifts from Panasonic Research, the Google Research Scholar
Program, and the Tianqiao and Chrissy Chen Institute, as well as the NSF grant IIS-2338866. E.J.C.
is supported by the Office of Naval Research grant N00014-20-1-2157, the National Science Foun-
dation grant DMS-2032014, the Simons Foundation under award 814641.
REFERENCES
Marah Abdin, Jyoti Aneja, Sebastien Bubeck, Caio C´esar Teodoro Mendes, Weizhu Chen, Al-
lie Del Giorno, Ronen Eldan, Sivakanth Gopi, Suriya Gunasekar, Mojan Javaheripi, Piero Kauff-
mann, Yin Tat Lee, Yuanzhi Li, Anh Nguyen, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shi-
tal Shah, Michael Santacroce, Harkirat Singh Behl, Adam Taumann Kalai, Xin Wang, Rachel
Ward, Philipp Witte, Cyril Zhang, and Yi Zhang. Phi-2: The surprising power of small lan-
guage models, 2023. URL https://www.microsoft.com/en-us/research/blog/
phi-2-the-surprising-power-of-small-language-models/.
Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany
Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Jianmin Bao, Harkirat Behl, Alon Ben-
haim, Misha Bilenko, Johan Bjorck, S´ebastien Bubeck, Qin Cai, Martin Cai, Caio C´esar Teodoro
Mendes, Weizhu Chen, Vishrav Chaudhary, Dong Chen, Dongdong Chen, Yen-Chun Chen, Yi-
Ling Chen, Parul Chopra, Xiyang Dai, Allie Del Giorno, Gustavo de Rosa, Matthew Dixon,
Ronen Eldan, Victor Fragoso, Dan Iter, Mei Gao, Min Gao, Jianfeng Gao, Amit Garg, Abhishek
Goswami, Suriya Gunasekar, Emman Haider, Junheng Hao, Russell J. Hewett, Jamie Huynh,
Mojan Javaheripi, Xin Jin, Piero Kauffmann, Nikos Karampatziakis, Dongwoo Kim, Mahoud
Khademi, Lev Kurilenko, James R. Lee, Yin Tat Lee, Yuanzhi Li, Yunsheng Li, Chen Liang, Lars
Liden, Ce Liu, Mengchen Liu, Weishung Liu, Eric Lin, Zeqi Lin, Chong Luo, Piyush Madan,
Matt Mazzola, Arindam Mitra, Hardik Modi, Anh Nguyen, Brandon Norick, Barun Patra, Daniel
Perez-Becker, Thomas Portet, Reid Pryzant, Heyang Qin, Marko Radmilac, Corby Rosset, Sam-
budha Roy, Olatunji Ruwase, Olli Saarikivi, Amin Saied, Adil Salim, Michael Santacroce, Shi-
tal Shah, Ning Shang, Hiteshi Sharma, Swadheen Shukla, Xia Song, Masahiro Tanaka, Andrea
Tupini, Xin Wang, Lijuan Wang, Chunyu Wang, Yu Wang, Rachel Ward, Guanhua Wang, Philipp
Witte, Haiping Wu, Michael Wyatt, Bin Xiao, Can Xu, Jiahang Xu, Weijian Xu, Sonali Yadav,
Fan Yang, Jianwei Yang, Ziyi Yang, Yifan Yang, Donghan Yu, Lu Yuan, Chengruidong Zhang,
Cyril Zhang, Jianwen Zhang, Li Lyna Zhang, Yi Zhang, Yue Zhang, Yunan Zhang, and Xiren
Zhou. Phi-3 technical report: A highly capable language model locally on your phone, 2024.
URL https://arxiv.org/abs/2404.14219.
Afra Feyza Akyürek, Ekin Akyürek, Leshem Choshen, Derry Wijaya, and Jacob Andreas. Deduc-
tive closure training of language models for coherence, accuracy, and updatability. In Lun-Wei
Ku, Andre Martins, and Vivek Srikumar (eds.), Findings of the Association for Computational
Linguistics ACL 2024, pp. 9802–9818, Bangkok, Thailand and virtual meeting, August 2024.
Association for Computational Linguistics. URL https://aclanthology.org/2024.
findings-acl.584.
Zeyuan Allen-Zhu and Yuanzhi Li. Physics of language models: Part 3.2, knowledge manipulation,
2024. URL https://arxiv.org/abs/2309.14402.
Chenxin An, Shansan Gong, Ming Zhong, Mukai Li, Jun Zhang, Lingpeng Kong, and Xipeng Qiu.
L-eval: Instituting standardized evaluation for long context language models, 2023.
Dana Angluin. Queries and concept learning. Machine Learning, 2:319–342, 1988. URL https:
//api.semanticscholar.org/CorpusID:11357867.
Anthropic. Prompt caching (beta), 2024a. URL https://docs.anthropic.com/en/
docs/build-with-claude/prompt-caching.
Anthropic.
The Claude 3 Model Family: Opus, Sonnet, Haiku.
https://www-cdn.
anthropic.com/de8ba9b01c9ab7cbabf5c33b80b7bbc618857627/Model_
Card_Claude_3.pdf, 2024b.
Anas Awadalla, Mitchell Wortsman, Gabriel Ilharco, Sewon Min, Ian Magnusson, Hannaneh Hajishirzi, and Ludwig Schmidt. Exploring the landscape of distributional robustness for question answering models. In Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang (eds.), Findings of the Association for Computational Linguistics: EMNLP 2022, pp. 5971–5987, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.findings-emnlp.441. URL https://aclanthology.org/2022.findings-emnlp.441.
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen Marcus McAleer,
Albert Q. Jiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An open language model
for mathematics. In The Twelfth International Conference on Learning Representations, 2024.
URL https://openreview.net/forum?id=4WnqRR915j.
Maria-florina Balcan, Avrim Blum, and Ke Yang. Co-training and expansion: Towards bridging theory and practice. In L. Saul, Y. Weiss, and L. Bottou (eds.), Advances in Neural Information Processing Systems, volume 17. MIT Press, 2004. URL https://proceedings.neurips.cc/paper_files/paper/2004/file/9457fc28ceb408103e13533e4a5b6bd1-Paper.pdf.
Lukas Berglund, Meg Tong, Max Kaufmann, Mikita Balesni, Asa Cooper Stickland, Tomasz Kor-
bak, and Owain Evans. The reversal curse: Llms trained on ”a is b” fail to learn ”b is a”, 2023.
David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin
Raffel. Mixmatch: A holistic approach to semi-supervised learning, 2019. URL https:
//arxiv.org/abs/1905.02249.
Avrim Blum and Tom Mitchell. Combining labeled and unlabeled data with co-training. In Pro-
ceedings of the Eleventh Annual Conference on Computational Learning Theory, COLT’ 98, pp.
92–100, New York, NY, USA, 1998. Association for Computing Machinery. ISBN 1581130570.
doi: 10.1145/279943.279962. URL https://doi.org/10.1145/279943.279962.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 1877–1901. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.
Harrison Chase. LangChain, 10 2022. URL https://github.com/langchain-ai/
langchain.
Zeming Chen, Alejandro Hern´andez Cano, Angelika Romanou, Antoine Bonnet, Kyle Matoba,
Francesco Salvi, Matteo Pagliardini, Simin Fan, Andreas K¨opf, Amirkeivan Mohtashami, Alexan-
dre Sallinen, Alireza Sakhaeirad, Vinitra Swamy, Igor Krawczuk, Deniz Bayazit, Axel Marmet,
Syrielle Montariol, Mary-Anne Hartley, Martin Jaggi, and Antoine Bosselut. Meditron-70b: Scal-
ing medical pretraining for large language models, 2023. URL https://arxiv.org/abs/
2311.16079.
Roi Cohen, Eden Biran, Ori Yoran, Amir Globerson, and Mor Geva. Evaluating the ripple effects of
knowledge editing in language models. arXiv preprint arXiv:2307.12976, 2023.
Cohere. Improve search performance with a single line of code, 2024. URL https://cohere.
com/rerank.
Pierre Colombo, Telmo Pires, Malik Boudiaf, Rui Melo, Dominic Culver, Sofia Morgado, Etienne
Malaboeuf, Gabriel Hautreux, Johanne Charpentier, and Michael Desa. Saullm-54b and saullm-
141b: Scaling up domain adaptation for the legal domain, 2024a. URL https://arxiv.
org/abs/2407.19584.
Pierre Colombo, Telmo Pessoa Pires, Malik Boudiaf, Dominic Culver, Rui Melo, Caio Corro, Andre
F. T. Martins, Fabrizio Esposito, Vera L´ucia Raposo, Sofia Morgado, and Michael Desa. Saullm-
7b: A pioneering large language model for law, 2024b. URL https://arxiv.org/abs/
2403.03883.
Common Crawl. Common crawl. https://commoncrawl.org/, 2007.
Tri Dao, Daniel Y Fu, Stefano Ermon, Atri Rudra, and Christopher Re. Flashattention: Fast and
memory-efficient exact attention with IO-awareness. In Alice H. Oh, Alekh Agarwal, Danielle
Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022.
URL https://openreview.net/forum?id=H4DqfPSibmx.
Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong
Sun, and Bowen Zhou. Enhancing chat language models by scaling high-quality instructional
conversations, 2023.
Matthijs Douze, Alexandr Guzhva, Chengqi Deng, Jeff Johnson, Gergely Szilvasy, Pierre-
Emmanuel Mazar´e, Maria Lomeli, Lucas Hosseini, and Herv´e J´egou. The faiss library, 2024.
URL https://arxiv.org/abs/2401.08281.
Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha
Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, Anirudh Goyal, Anthony
Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark,
Arun Rao, Aston Zhang, Aurelien Rodriguez, Austen Gregerson, Ava Spataru, Baptiste Roziere,
Bethany Biron, Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi, Chris
Marra, Chris McConnell, Christian Keller, Christophe Touret, Chunyang Wu, Corinne Wong,
Cristian Canton Ferrer, Cyrus Nikolaidis, Damien Allonsius, Daniel Song, Danielle Pintz, Danny
Livshits, David Esiobu, Dhruv Choudhary, Dhruv Mahajan, Diego Garcia-Olano, Diego Perino,
Dieuwke Hupkes, Egor Lakomkin, Ehab AlBadawy, Elina Lobanova, Emily Dinan, Eric Michael
Smith, Filip Radenovic, Frank Zhang, Gabriel Synnaeve, Gabrielle Lee, Georgia Lewis Ander-
son, Graeme Nail, Gregoire Mialon, Guan Pang, Guillem Cucurell, Hailey Nguyen, Hannah
Korevaar, Hu Xu, Hugo Touvron, Iliyan Zarov, Imanol Arrieta Ibarra, Isabel Kloumann, Ishan
Misra, Ivan Evtimov, Jade Copet, Jaewon Lee, Jan Geffert, Jana Vranes, Jason Park, Jay Ma-
hadeokar, Jeet Shah, Jelmer van der Linde, Jennifer Billock, Jenny Hong, Jenya Lee, Jeremy
Fu, Jianfeng Chi, Jianyu Huang, Jiawen Liu, Jie Wang, Jiecao Yu, Joanna Bitton, Joe Spisak,
Jongsoo Park, Joseph Rocca, Joshua Johnstun, Joshua Saxe, Junteng Jia, Kalyan Vasuden Al-
wala, Kartikeya Upasani, Kate Plawiak, Ke Li, Kenneth Heafield, Kevin Stone, Khalid El-Arini,
Krithika Iyer, Kshitiz Malik, Kuenley Chiu, Kunal Bhalla, Lauren Rantala-Yeary, Laurens van der
Maaten, Lawrence Chen, Liang Tan, Liz Jenkins, Louis Martin, Lovish Madaan, Lubo Malo,
Lukas Blecher, Lukas Landzaat, Luke de Oliveira, Madeline Muzzi, Mahesh Pasupuleti, Man-
nat Singh, Manohar Paluri, Marcin Kardas, Mathew Oldham, Mathieu Rita, Maya Pavlova,
Melanie Kambadur, Mike Lewis, Min Si, Mitesh Kumar Singh, Mona Hassan, Naman Goyal,
Narjes Torabi, Nikolay Bashlykov, Nikolay Bogoychev, Niladri Chatterji, Olivier Duchenne, Onur
C¸ elebi, Patrick Alrassy, Pengchuan Zhang, Pengwei Li, Petar Vasic, Peter Weng, Prajjwal Bhar-
gava, Pratik Dubal, Praveen Krishnan, Punit Singh Koura, Puxin Xu, Qing He, Qingxiao Dong,
Ragavan Srinivasan, Raj Ganapathy, Ramon Calderer, Ricardo Silveira Cabral, Robert Stojnic,
Roberta Raileanu, Rohit Girdhar, Rohit Patel, Romain Sauvestre, Ronnie Polidoro, Roshan Sum-
baly, Ross Taylor, Ruan Silva, Rui Hou, Rui Wang, Saghar Hosseini, Sahana Chennabasappa,
Sanjay Singh, Sean Bell, Seohyun Sonia Kim, Sergey Edunov, Shaoliang Nie, Sharan Narang,
Sharath Raparthy, Sheng Shen, Shengye Wan, Shruti Bhosale, Shun Zhang, Simon Vandenhende,
Soumya Batra, Spencer Whitman, Sten Sootla, Stephane Collot, Suchin Gururangan, Sydney
Borodinsky, Tamar Herman, Tara Fowler, Tarek Sheasha, Thomas Georgiou, Thomas Scialom,
Tobias Speckbacher, Todor Mihaylov, Tong Xiao, Ujjwal Karn, Vedanuj Goswami, Vibhor Gupta,
Vignesh Ramanathan, Viktor Kerkez, Vincent Gonguet, Virginie Do, Vish Vogeti, Vladan Petro-
vic, Weiwei Chu, Wenhan Xiong, Wenyin Fu, Whitney Meers, Xavier Martinet, Xiaodong Wang,
Xiaoqing Ellen Tan, Xinfeng Xie, Xuchao Jia, Xuewei Wang, Yaelle Goldschlag, Yashesh Gaur,
Yasmine Babaei, Yi Wen, Yiwen Song, Yuchen Zhang, Yue Li, Yuning Mao, Zacharie Delpierre
Coudert, Zheng Yan, Zhengxing Chen, Zoe Papakipos, Aaditya Singh, Aaron Grattafiori, Abha
Jain, Adam Kelsey, Adam Shajnfeld, Adithya Gangidi, Adolfo Victoria, Ahuva Goldstand, Ajay
Menon, Ajay Sharma, Alex Boesenberg, Alex Vaughan, Alexei Baevski, Allie Feinstein, Amanda
Kallet, Amit Sangani, Anam Yunus, Andrei Lupu, Andres Alvarado, Andrew Caples, Andrew
Gu, Andrew Ho, Andrew Poulton, Andrew Ryan, Ankit Ramchandani, Annie Franco, Aparajita
Saraf, Arkabandhu Chowdhury, Ashley Gabriel, Ashwin Bharambe, Assaf Eisenman, Azadeh
Yazdan, Beau James, Ben Maurer, Benjamin Leonhardi, Bernie Huang, Beth Loyd, Beto De
Paola, Bhargavi Paranjape, Bing Liu, Bo Wu, Boyu Ni, Braden Hancock, Bram Wasti, Bran-
don Spence, Brani Stojkovic, Brian Gamido, Britt Montalvo, Carl Parker, Carly Burton, Catalina
Mejia, Changhan Wang, Changkyu Kim, Chao Zhou, Chester Hu, Ching-Hsiang Chu, Chris Cai,
Chris Tindal, Christoph Feichtenhofer, Damon Civin, Dana Beaty, Daniel Kreymer, Daniel Li,
Danny Wyatt, David Adkins, David Xu, Davide Testuggine, Delia David, Devi Parikh, Diana
Liskovich, Didem Foss, Dingkang Wang, Duc Le, Dustin Holland, Edward Dowling, Eissa Jamil,
Elaine Montgomery, Eleonora Presani, Emily Hahn, Emily Wood, Erik Brinkman, Esteban Ar-
caute, Evan Dunbar, Evan Smothers, Fei Sun, Felix Kreuk, Feng Tian, Firat Ozgenel, Francesco
Caggioni, Francisco Guzm´an, Frank Kanayet, Frank Seide, Gabriela Medina Florez, Gabriella
Schwarz, Gada Badeer, Georgia Swee, Gil Halpern, Govind Thattai, Grant Herman, Grigory
Sizov, Guangyi, Zhang, Guna Lakshminarayanan, Hamid Shojanazeri, Han Zou, Hannah Wang,
Hanwen Zha, Haroun Habeeb, Harrison Rudolph, Helen Suk, Henry Aspegren, Hunter Gold-
man, Ibrahim Damlaj, Igor Molybog, Igor Tufanov, Irina-Elena Veliche, Itai Gat, Jake Weissman,
James Geboski, James Kohli, Japhet Asher, Jean-Baptiste Gaya, Jeff Marcus, Jeff Tang, Jennifer
Chan, Jenny Zhen, Jeremy Reizenstein, Jeremy Teboul, Jessica Zhong, Jian Jin, Jingyi Yang, Joe
Cummings, Jon Carvill, Jon Shepard, Jonathan McPhie, Jonathan Torres, Josh Ginsburg, Junjie
Wang, Kai Wu, Kam Hou U, Karan Saxena, Karthik Prasad, Kartikay Khandelwal, Katayoun
Zand, Kathy Matosich, Kaushik Veeraraghavan, Kelly Michelena, Keqian Li, Kun Huang, Kunal
Chawla, Kushal Lakhotia, Kyle Huang, Lailin Chen, Lakshya Garg, Lavender A, Leandro Silva,
Lee Bell, Lei Zhang, Liangpeng Guo, Licheng Yu, Liron Moshkovich, Luca Wehrstedt, Madian
Khabsa, Manav Avalani, Manish Bhatt, Maria Tsimpoukelli, Martynas Mankus, Matan Hasson,
Matthew Lennie, Matthias Reso, Maxim Groshev, Maxim Naumov, Maya Lathi, Meghan Ke-
neally, Michael L. Seltzer, Michal Valko, Michelle Restrepo, Mihir Patel, Mik Vyatskov, Mikayel
Samvelyan, Mike Clark, Mike Macey, Mike Wang, Miquel Jubert Hermoso, Mo Metanat, Mo-
hammad Rastegari, Munish Bansal, Nandhini Santhanam, Natascha Parks, Natasha White, Navy-
ata Bawa, Nayan Singhal, Nick Egebo, Nicolas Usunier, Nikolay Pavlovich Laptev, Ning Dong,
Ning Zhang, Norman Cheng, Oleg Chernoguz, Olivia Hart, Omkar Salpekar, Ozlem Kalinli,
Parkin Kent, Parth Parekh, Paul Saab, Pavan Balaji, Pedro Rittner, Philip Bontrager, Pierre Roux,
Piotr Dollar, Polina Zvyagina, Prashant Ratanchandani, Pritish Yuvraj, Qian Liang, Rachad Alao,
Rachel Rodriguez, Rafi Ayub, Raghotham Murthy, Raghu Nayani, Rahul Mitra, Raymond Li,
Rebekkah Hogan, Robin Battey, Rocky Wang, Rohan Maheswari, Russ Howes, Ruty Rinott,
Sai Jayesh Bondu, Samyak Datta, Sara Chugh, Sara Hunt, Sargun Dhillon, Sasha Sidorov, Sa-
tadru Pan, Saurabh Verma, Seiji Yamamoto, Sharadh Ramaswamy, Shaun Lindsay, Shaun Lind-
say, Sheng Feng, Shenghao Lin, Shengxin Cindy Zha, Shiva Shankar, Shuqiang Zhang, Shuqiang
Zhang, Sinong Wang, Sneha Agarwal, Soji Sajuyigbe, Soumith Chintala, Stephanie Max, Stephen
Chen, Steve Kehoe, Steve Satterfield, Sudarshan Govindaprasad, Sumit Gupta, Sungmin Cho,
Sunny Virk, Suraj Subramanian, Sy Choudhury, Sydney Goldman, Tal Remez, Tamar Glaser,
Tamara Best, Thilo Kohler, Thomas Robinson, Tianhe Li, Tianjun Zhang, Tim Matthews, Tim-
othy Chou, Tzook Shaked, Varun Vontimitta, Victoria Ajayi, Victoria Montanez, Vijai Mohan,
Vinay Satish Kumar, Vishal Mangla, V´ıtor Albiero, Vlad Ionescu, Vlad Poenaru, Vlad Tiberiu
Mihailescu, Vladimir Ivanov, Wei Li, Wenchen Wang, Wenwen Jiang, Wes Bouaziz, Will Con-
stable, Xiaocheng Tang, Xiaofang Wang, Xiaojian Wu, Xiaolan Wang, Xide Xia, Xilun Wu,
Xinbo Gao, Yanjun Chen, Ye Hu, Ye Jia, Ye Qi, Yenda Li, Yilin Zhang, Ying Zhang, Yossi Adi,
Youngjin Nam, Yu, Wang, Yuchen Hao, Yundi Qian, Yuzi He, Zach Rait, Zachary DeVito, Zef
Rosnbrick, Zhaoduo Wen, Zhenyu Yang, and Zhiwei Zhao. The llama 3 herd of models, 2024.
URL https://arxiv.org/abs/2407.21783.
Rick Durrett. Random graph dynamics, volume 20. Cambridge university press, 2010.
Ronen Eldan and Yuanzhi Li. Tinystories: How small can language models be and still speak
coherent english?, 2023.
Vitaly Feldman. Does learning require memorization? a short tale about a long tail. In Proceedings
of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2020, pp. 954–959,
New York, NY, USA, 2020. Association for Computing Machinery. ISBN 9781450369794. doi:
10.1145/3357713.3384290. URL https://doi.org/10.1145/3357713.3384290.
Yanjun Gao, Chen Sun, and Rebecca J. Passonneau. Automated pyramid summarization evalu-
In Mohit Bansal and Aline Villavicencio (eds.), Proceedings of the 23rd Conference
ation.
on Computational Natural Language Learning (CoNLL), pp. 404–418, Hong Kong, China,
November 2019. Association for Computational Linguistics. doi: 10.18653/v1/K19-1038. URL
https://aclanthology.org/K19-1038.
Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Meng
Wang, and Haofen Wang. Retrieval-augmented generation for large language models: A survey,
2024. URL https://arxiv.org/abs/2312.10997.
Team Gemini. Gemini: A family of highly capable multimodal models, 2024. URL https:
//arxiv.org/abs/2312.11805.
Siavash Golkar, Michael Kagan, and Kyunghyun Cho. Continual learning via neural pruning. arXiv
preprint arXiv:1903.04476, 2019.
Ian J. Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. An empirical
investigation of catastrophic forgetting in gradient-based neural networks, 2015. URL https:
//arxiv.org/abs/1312.6211.
Stephen T Grossberg. Studies of mind and brain: Neural principles of learning, perception, devel-
opment, cognition, and motor control, volume 70. Springer Science & Business Media, 2012.
Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces, 2024.
URL https://openreview.net/forum?id=AL1fq05o7H.
Albert Gu, Karan Goel, and Christopher Re. Efficiently modeling long sequences with structured
In International Conference on Learning Representations, 2022. URL https:
state spaces.
//openreview.net/forum?id=uYLFoz1vlAC.
Caglar Gulcehre, Tom Le Paine, Srivatsan Srinivasan, Ksenia Konyushkova, Lotte Weerts, Abhishek
Sharma, Aditya Siddhant, Alex Ahern, Miaosen Wang, Chenjie Gu, Wolfgang Macherey, Arnaud
Doucet, Orhan Firat, and Nando de Freitas. Reinforced self-training (rest) for language modeling,
2023. URL https://arxiv.org/abs/2308.08998.
Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio C´esar Teodoro Mendes, Allie Del Giorno, Sivakanth
Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital
Shah, Harkirat Singh Behl, Xin Wang, S´ebastien Bubeck, Ronen Eldan, Adam Tauman Kalai,
Yin Tat Lee, and Yuanzhi Li. Textbooks are all you need, 2023. URL https://arxiv.org/
abs/2306.11644.
Tom Gunter, Zirui Wang, Chong Wang, Ruoming Pang, Andy Narayanan, Aonan Zhang, Bowen
Zhang, Chen Chen, Chung-Cheng Chiu, David Qiu, Deepak Gopinath, Dian Ang Yap, Dong
Yin, Feng Nan, Floris Weers, Guoli Yin, Haoshuo Huang, Jianyu Wang, Jiarui Lu, John Pee-
bles, Ke Ye, Mark Lee, Nan Du, Qibin Chen, Quentin Keunebroek, Sam Wiseman, Syd Evans,
Tao Lei, Vivek Rathod, Xiang Kong, Xianzhi Du, Yanghao Li, Yongqiang Wang, Yuan Gao,
Zaid Ahmed, Zhaoyang Xu, Zhiyun Lu, Al Rashid, Albin Madappally Jose, Alec Doane, Alfredo
Bencomo, Allison Vanderby, Andrew Hansen, Ankur Jain, Anupama Mann Anupama, Areeba
Kamal, Bugu Wu, Carolina Brum, Charlie Maalouf, Chinguun Erdenebileg, Chris Dulhanty, Do-
minik Moritz, Doug Kang, Eduardo Jimenez, Evan Ladd, Fangping Shi, Felix Bai, Frank Chu,
Fred Hohman, Hadas Kotek, Hannah Gillis Coleman, Jane Li, Jeffrey Bigham, Jeffery Cao, Jeff
Lai, Jessica Cheung, Jiulong Shan, Joe Zhou, John Li, Jun Qin, Karanjeet Singh, Karla Vega,
Kelvin Zou, Laura Heckman, Lauren Gardiner, Margit Bowler, Maria Cordell, Meng Cao, Nicole
Hay, Nilesh Shahdadpuri, Otto Godwin, Pranay Dighe, Pushyami Rachapudi, Ramsey Tantawi,
Roman Frigg, Sam Davarnia, Sanskruti Shah, Saptarshi Guha, Sasha Sirovica, Shen Ma, Shuang
Ma, Simon Wang, Sulgi Kim, Suma Jayaram, Vaishaal Shankar, Varsha Paidi, Vivek Kumar,
Xin Wang, Xin Zheng, Walker Cheng, Yael Shrager, Yang Ye, Yasu Tanaka, Yihao Guo, Yun-
song Meng, Zhao Tang Luo, Zhi Ouyang, Alp Aygar, Alvin Wan, Andrew Walkingshaw, Andy
Narayanan, Antonie Lin, Arsalan Farooq, Brent Ramerth, Colorado Reed, Chris Bartels, Chris
Chaney, David Riazati, Eric Liang Yang, Erin Feldman, Gabriel Hochstrasser, Guillaume Seguin,
Irina Belousova, Joris Pelemans, Karen Yang, Keivan Alizadeh Vahid, Liangliang Cao, Mah-
yar Najibi, Marco Zuliani, Max Horton, Minsik Cho, Nikhil Bhendawade, Patrick Dong, Piotr
Maj, Pulkit Agrawal, Qi Shan, Qichen Fu, Regan Poston, Sam Xu, Shuangning Liu, Sushma
Rao, Tashweena Heeramun, Thomas Merth, Uday Rayala, Victor Cui, Vivek Rangarajan Sridhar,
Wencong Zhang, Wenqi Zhang, Wentao Wu, Xingyu Zhou, Xinwen Liu, Yang Zhao, Yin Xia,
Zhile Ren, and Zhongzheng Ren. Apple intelligence foundation language models, 2024. URL
https://arxiv.org/abs/2407.21075.
Kshitij Gupta, Benjamin Th´erien, Adam Ibrahim, Mats L. Richter, Quentin Anthony, Eugene
Belilovsky, Irina Rish, and Timoth´ee Lesort. Continual pre-training of large language models:
How to (re)warm your model?, 2023. URL https://arxiv.org/abs/2308.04014.
Suchin Gururangan, Ana Marasovi´c, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey,
and Noah A. Smith. Don’t stop pretraining: Adapt language models to domains and tasks. In
Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel Tetreault (eds.), Proceedings of the 58th
Annual Meeting of the Association for Computational Linguistics, pp. 8342–8360, Online, July
2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.740. URL
https://aclanthology.org/2020.acl-main.740.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Ja-
cob Steinhardt. Measuring massive multitask language understanding. In International Confer-
ence on Learning Representations, 2021. URL https://openreview.net/forum?id=
d7KBjmI3GmQ.
Remco van der Hofstad. Random Graphs and Complex Networks. Cambridge Series in Statistical
and Probabilistic Mathematics. Cambridge University Press, 2016.
Or Honovich, Thomas Scialom, Omer Levy, and Timo Schick. Unnatural instructions: Tun-
ing language models with (almost) no human labor.
In Anna Rogers, Jordan Boyd-Graber,
and Naoaki Okazaki (eds.), Proceedings of the 61st Annual Meeting of the Association for
Computational Linguistics (Volume 1: Long Papers), pp. 14409–14428, Toronto, Canada, July
2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.806. URL
https://aclanthology.org/2023.acl-long.806.
Jiaxin Huang, Shixiang Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han.
Large language models can self-improve. In Houda Bouamor, Juan Pino, and Kalika Bali
(eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Pro-
cessing, pp. 1051–1068, Singapore, December 2023. Association for Computational Linguis-
tics. doi: 10.18653/v1/2023.emnlp-main.67. URL https://aclanthology.org/2023.
emnlp-main.67.
Adam Ibrahim, Benjamin Th´erien, Kshitij Gupta, Mats L. Richter, Quentin Anthony, Timoth´ee
Lesort, Eugene Belilovsky, and Irina Rish. Simple and scalable strategies to continually pre-train
large language models, 2024. URL https://arxiv.org/abs/2403.08763.
Nikhil Kandpal, Haikang Deng, Adam Roberts, Eric Wallace, and Colin Raffel. Large language
models struggle to learn long-tail knowledge. In Proceedings of the 40th International Conference
on Machine Learning, ICML’23. JMLR.org, 2023.
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child,
Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language
models, 2020. URL https://arxiv.org/abs/2001.08361.
Richard M Karp. The transitive closure of a random digraph. Random Structures & Algorithms, 1
(1):73–93, 1990.
Ronald Kemker, Marc McClure, Angelina Abitino, Tyler L. Hayes, and Christopher Kanan. Measuring
catastrophic forgetting in neural networks. In Proceedings of the Thirty-Second AAAI
Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelli-
gence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelli-
gence, AAAI’18/IAAI’18/EAAI’18. AAAI Press, 2018. ISBN 978-1-57735-800-8.
James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A.
Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Has-
sabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. Overcoming catastrophic for-
getting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–
3526, 2017. doi: 10.1073/pnas.1611835114. URL https://www.pnas.org/doi/abs/
10.1073/pnas.1611835114.
Hunter Lang, Monica N Agrawal, Yoon Kim, and David Sontag. Co-training improves prompt-
based learning for large language models. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song,
Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International
Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research,
pp. 11985–12003. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/
v162/lang22a.html.
Dong-Hyun Lee. Pseudo-label: The simple and efficient semi-supervised learning method for deep
neural networks. ICML 2013 Workshop: Challenges in Representation Learning, 2013.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal,
Heinrich K¨uttler, Mike Lewis, Wen-tau Yih, Tim Rockt¨aschel, Sebastian Riedel, and Douwe
Kiela. Retrieval-augmented generation for knowledge-intensive nlp tasks. In Proceedings of the
34th International Conference on Neural Information Processing Systems, NIPS ’20, Red Hook,
NY, USA, 2020. Curran Associates Inc. ISBN 9781713829546.
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ra-
masesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam
Neyshabur, Guy Gur-Ari, and Vedant Misra. Solving quantitative reasoning problems with lan-
guage models, 2022. URL https://arxiv.org/abs/2206.14858.
Haoran Li, Qingxiu Dong, Zhengyang Tang, Chaojun Wang, Xingxing Zhang, Haoyang Huang,
Shaohan Huang, Xiaolong Huang, Zeqiang Huang, Dongdong Zhang, Yuxian Gu, Xin Cheng,
Xun Wang, Si-Qing Chen, Li Dong, Wei Lu, Zhifang Sui, Benyou Wang, Wai Lam, and Furu
Wei. Synthetic data (almost) from scratch: Generalized instruction tuning for language models,
2024. URL https://arxiv.org/abs/2402.13064.
Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy
Liang, and Tatsunori B. Hashimoto. Alpacaeval: An automatic evaluator of instruction-following
models. https://github.com/tatsu-lab/alpaca_eval, 5 2023a.
Yuanzhi Li, S´ebastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, and Yin Tat Lee.
Textbooks are all you need ii: phi-1.5 technical report, 2023b. URL https://arxiv.org/
abs/2309.05463.
Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization
Branches Out, pp. 74–81, Barcelona, Spain, July 2004. Association for Computational Linguis-
tics. URL https://aclanthology.org/W04-1013.
Hao Liu, Matei Zaharia, and Pieter Abbeel. Ring attention with blockwise transformers for near-infinite
context. In NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following,
2023. URL https://openreview.net/forum?id=xulyCXgIWH.
David Lopez-Paz and Marc’Aurelio Ranzato. Gradient episodic memory for continual learning.
Advances in neural information processing systems, 30:6467–6476, 2017.
Pratyush Maini, Skyler Seto, Richard Bai, David Grangier, Yizhe Zhang, and Navdeep Jaitly.
Rephrasing the web: A recipe for compute and data-efficient language modeling.
In Lun-
Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Proceedings of the 62nd Annual Meet-
ing of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 14044–
14072, Bangkok, Thailand, August 2024. Association for Computational Linguistics. URL
https://aclanthology.org/2024.acl-long.757.
Michael McCloskey and Neal J. Cohen. Catastrophic interference in connectionist networks: The
sequential learning problem. In Gordon H. Bower (ed.), Psychology of Learning and Motivation,
volume 24 of Psychology of Learning and Motivation, pp. 109–165. Academic Press, 1989. doi:
https://doi.org/10.1016/S0079-7421(08)60536-8. URL https://www.sciencedirect.
com/science/article/pii/S0079742108605368.
Nick Mecklenburg, Yiyou Lin, Xiaoxiao Li, Daniel Holstein, Leonardo Nunes, Sara Malvar, Bruno
Silva, Ranveer Chandra, Vijay Aski, Pavan Kumar Reddy Yannam, Tolga Aktas, and Todd
Hendry. Injecting new knowledge into large language models via supervised fine-tuning, 2024.
URL https://arxiv.org/abs/2404.00213.
Kevin Meng, David Bau, Alex J Andonian, and Yonatan Belinkov. Locating and editing factual asso-
ciations in GPT. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.),
Advances in Neural Information Processing Systems, 2022. URL https://openreview.
net/forum?id=-h6WAS6eE4.
Kevin Meng, Arnab Sen Sharma, Alex J Andonian, Yonatan Belinkov, and David Bau. Mass-editing
memory in a transformer. In The Eleventh International Conference on Learning Representations,
2023. URL https://openreview.net/forum?id=MkbcAHIYgyS.
Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer,
Luke Zettlemoyer, and Hannaneh Hajishirzi. Factscore: Fine-grained atomic evaluation of fac-
tual precision in long form text generation, 2023. URL https://arxiv.org/abs/2305.
14251.
Eric Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, and Christopher D Manning. Fast
model editing at scale. In International Conference on Learning Representations, 2022. URL
https://openreview.net/pdf?id=0DcZxeWfOPt.
Niklas Muennighoff, Alexander M Rush, Boaz Barak, Teven Le Scao, Nouamane Tazi, Aleksan-
dra Piktus, Sampo Pyysalo, Thomas Wolf, and Colin Raffel. Scaling data-constrained language
models. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL
https://openreview.net/forum?id=j5BuTrEj35.
Arvind Neelakantan, Tao Xu, Raul Puri, Alec Radford, Jesse Michael Han, Jerry Tworek, Qiming
Yuan, Nikolas Tezak, Jong Wook Kim, Chris Hallacy, Johannes Heidecke, Pranav Shyam, Boris
Power, Tyna Eloundou Nekoul, Girish Sastry, Gretchen Krueger, David Schnurr, Felipe Petroski
Such, Kenny Hsu, Madeleine Thompson, Tabarak Khan, Toki Sherbakov, Joanne Jang, Peter
Welinder, and Lilian Weng. Text and code embeddings by contrastive pre-training, 2022. URL
https://arxiv.org/abs/2201.10005.
Ani Nenkova, Rebecca Passonneau, and Kathleen McKeown. The pyramid method: Incorporat-
ing human content selection variation in summarization evaluation. ACM Trans. Speech Lang.
Process., 4(2):4–es, May 2007. ISSN 1550-4875. doi: 10.1145/1233912.1233913. URL
https://doi.org/10.1145/1233912.1233913.
Cuong V Nguyen, Yingzhen Li, Thang D Bui, and Richard E Turner. Variational continual learning.
arXiv preprint arXiv:1710.10628, 2017.
OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Floren-
cia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red
Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Moham-
mad Bavarian, Jeff Belgum, Irwan Bello, Jake Berdine, Gabriel Bernadett-Shapiro, Christopher
Berner, Lenny Bogdonoff, Oleg Boiko, Madelaine Boyd, Anna-Luisa Brakman, Greg Brock-
man, Tim Brooks, Miles Brundage, Kevin Button, Trevor Cai, Rosie Campbell, Andrew Cann,
Brittany Carey, Chelsea Carlson, Rory Carmichael, Brooke Chan, Che Chang, Fotis Chantzis,
Derek Chen, Sully Chen, Ruby Chen, Jason Chen, Mark Chen, Ben Chess, Chester Cho, Casey
Chu, Hyung Won Chung, Dave Cummings, Jeremiah Currier, Yunxing Dai, Cory Decareaux,
Thomas Degry, Noah Deutsch, Damien Deville, Arka Dhar, David Dohan, Steve Dowling, Sheila
Dunning, Adrien Ecoffet, Atty Eleti, Tyna Eloundou, David Farhi, Liam Fedus, Niko Felix,
Sim´on Posada Fishman, Juston Forte, Isabella Fulford, Leo Gao, Elie Georges, Christian Gib-
son, Vik Goel, Tarun Gogineni, Gabriel Goh, Rapha Gontijo-Lopes, Jonathan Gordon, Morgan
Grafstein, Scott Gray, Ryan Greene, Joshua Gross, Shixiang Shane Gu, Yufei Guo, Chris Hal-
lacy, Jesse Han, Jeff Harris, Yuchen He, Mike Heaton, Johannes Heidecke, Chris Hesse, Alan
Hickey, Wade Hickey, Peter Hoeschele, Brandon Houghton, Kenny Hsu, Shengli Hu, Xin Hu,
Joost Huizinga, Shantanu Jain, Shawn Jain, Joanne Jang, Angela Jiang, Roger Jiang, Haozhun
Jin, Denny Jin, Shino Jomoto, Billie Jonn, Heewoo Jun, Tomer Kaftan, Łukasz Kaiser, Ali Ka-
mali, Ingmar Kanitscheider, Nitish Shirish Keskar, Tabarak Khan, Logan Kilpatrick, Jong Wook
Kim, Christina Kim, Yongjik Kim, Jan Hendrik Kirchner, Jamie Kiros, Matt Knight, Daniel
Kokotajlo, Łukasz Kondraciuk, Andrew Kondrich, Aris Konstantinidis, Kyle Kosic, Gretchen
Krueger, Vishal Kuo, Michael Lampe, Ikai Lan, Teddy Lee, Jan Leike, Jade Leung, Daniel
Levy, Chak Ming Li, Rachel Lim, Molly Lin, Stephanie Lin, Mateusz Litwin, Theresa Lopez,
Ryan Lowe, Patricia Lue, Anna Makanju, Kim Malfacini, Sam Manning, Todor Markov, Yaniv
Markovski, Bianca Martin, Katie Mayer, Andrew Mayne, Bob McGrew, Scott Mayer McKinney,
Christine McLeavey, Paul McMillan, Jake McNeil, David Medina, Aalok Mehta, Jacob Menick,
Luke Metz, Andrey Mishchenko, Pamela Mishkin, Vinnie Monaco, Evan Morikawa, Daniel
Mossing, Tong Mu, Mira Murati, Oleg Murk, David M´ely, Ashvin Nair, Reiichiro Nakano, Ra-
jeev Nayak, Arvind Neelakantan, Richard Ngo, Hyeonwoo Noh, Long Ouyang, Cullen O’Keefe,
Jakub Pachocki, Alex Paino, Joe Palermo, Ashley Pantuliano, Giambattista Parascandolo, Joel
Parish, Emy Parparita, Alex Passos, Mikhail Pavlov, Andrew Peng, Adam Perelman, Filipe
de Avila Belbute Peres, Michael Petrov, Henrique Ponde de Oliveira Pinto, Michael, Pokorny,
Michelle Pokrass, Vitchyr H. Pong, Tolly Powell, Alethea Power, Boris Power, Elizabeth Proehl,
Raul Puri, Alec Radford, Jack Rae, Aditya Ramesh, Cameron Raymond, Francis Real, Kendra
Rimbach, Carl Ross, Bob Rotsted, Henri Roussez, Nick Ryder, Mario Saltarelli, Ted Sanders,
Shibani Santurkar, Girish Sastry, Heather Schmidt, David Schnurr, John Schulman, Daniel Sel-
sam, Kyla Sheppard, Toki Sherbakov, Jessica Shieh, Sarah Shoker, Pranav Shyam, Szymon Sidor,
Eric Sigler, Maddie Simens, Jordan Sitkin, Katarina Slama, Ian Sohl, Benjamin Sokolowsky,
Yang Song, Natalie Staudacher, Felipe Petroski Such, Natalie Summers, Ilya Sutskever, Jie Tang,
Nikolas Tezak, Madeleine B. Thompson, Phil Tillet, Amin Tootoonchian, Elizabeth Tseng, Pre-
ston Tuggle, Nick Turley, Jerry Tworek, Juan Felipe Cer´on Uribe, Andrea Vallone, Arun Vi-
jayvergiya, Chelsea Voss, Carroll Wainwright, Justin Jay Wang, Alvin Wang, Ben Wang, Jonathan
Ward, Jason Wei, CJ Weinmann, Akila Welihinda, Peter Welinder, Jiayi Weng, Lilian Weng,
Matt Wiethoff, Dave Willner, Clemens Winter, Samuel Wolrich, Hannah Wong, Lauren Work-
man, Sherwin Wu, Jeff Wu, Michael Wu, Kai Xiao, Tao Xu, Sarah Yoo, Kevin Yu, Qiming
Yuan, Wojciech Zaremba, Rowan Zellers, Chong Zhang, Marvin Zhang, Shengjia Zhao, Tianhao
Zheng, Juntang Zhuang, William Zhuk, and Barret Zoph. Gpt-4 technical report, 2024. URL
https://arxiv.org/abs/2303.08774.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong
Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kel-
ton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F Christiano, Jan Leike,
and Ryan Lowe. Training language models to follow instructions with human feedback.
In
S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), Advances in
Neural Information Processing Systems, volume 35, pp. 27730–27744. Curran Associates, Inc.,
2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/b1efde53be364a73914f58805a001731-Paper-Conference.pdf.
Oded Ovadia, Menachem Brief, Moshik Mishaeli, and Oren Elisha. Fine-tuning or retrieval? com-
paring knowledge injection in llms, 2024. URL https://arxiv.org/abs/2312.05934.
Richard Yuanzhe Pang, Alicia Parrish, Nitish Joshi, Nikita Nangia, Jason Phang, Angelica Chen,
Vishakh Padmakumar, Johnny Ma, Jana Thompson, He He, and Samuel Bowman. QuALITY:
Question answering with long input texts, yes!
In Marine Carpuat, Marie-Catherine de Marn-
effe, and Ivan Vladimir Meza Ruiz (eds.), Proceedings of the 2022 Conference of the North
American Chapter of the Association for Computational Linguistics: Human Language Technolo-
gies, pp. 5336–5358, Seattle, United States, July 2022. Association for Computational Linguis-
tics. doi: 10.18653/v1/2022.naacl-main.391. URL https://aclanthology.org/2022.
naacl-main.391.
Jupinder Parmar, Sanjev Satheesh, Mostofa Patwary, Mohammad Shoeybi, and Bryan Catanzaro.
Reuse, don’t retrain: A recipe for continued pretraining of language models, 2024. URL https:
//arxiv.org/abs/2407.07263.
Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. Instruction tuning
with gpt-4, 2023. URL https://arxiv.org/abs/2304.03277.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions
for machine comprehension of text, 2016. URL https://arxiv.org/abs/1606.05250.
Vinay Venkatesh Ramasesh, Aitor Lewkowycz, and Ethan Dyer. Effect of scale on catastrophic
forgetting in neural networks. In International Conference on Learning Representations, 2022.
URL https://openreview.net/forum?id=GhVS8_yPeEa.
R. Ratcliff. Connectionist models of recognition memory: Constraints imposed by learning and
forgetting functions. Psychological Review, 97(2):285–308, 1990. doi: 10.1037/0033-295X.97.
2.285.
Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H Lampert. iCaRL:
Incremental classifier and representation learning. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition, pp. 2001–2010, 2017.
Anthony Robins. Catastrophic forgetting, rehearsal and pseudorehearsal. Connection Science, 7(2):
123–146, 1995.
Baptiste Rozi`ere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi
Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, J´er´emy Rapin, Artyom Kozhevnikov, Ivan Ev-
timov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong,
Alexandre D´efossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier,
Thomas Scialom, and Gabriel Synnaeve. Code llama: Open foundation models for code, 2024.
URL https://arxiv.org/abs/2308.12950.
Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray
Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint
arXiv:1606.04671, 2016.
Jeffrey C. Schlimmer and Douglas Fisher. A case study of incremental concept induction. In Pro-
ceedings of the Fifth AAAI National Conference on Artificial Intelligence, AAAI’86, pp. 496–501.
AAAI Press, 1986.
Raphael Schumann and Ines Rehbein. Active learning via membership query synthesis for semi-
supervised sentence classification. In Mohit Bansal and Aline Villavicencio (eds.), Proceedings
of the 23rd Conference on Computational Natural Language Learning (CoNLL), pp. 472–481,
Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/
v1/K19-1044. URL https://aclanthology.org/K19-1044.
H. Scudder. Probability of error of some adaptive pattern-recognition machines. IEEE Transactions
on Information Theory, 11(3):363–371, 1965. doi: 10.1109/TIT.1965.1053799.
Claude Elwood Shannon. Prediction and entropy of printed english. Bell System Technical
Journal, 30:50–64, January 1951. URL http://languagelog.ldc.upenn.edu/myl/
Shannon1950.pdf.
Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang,
Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathe-
matical reasoning in open language models, 2024. URL https://arxiv.org/abs/2402.
03300.
Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep gener-
ative replay. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan,
and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 30. Cur-
ran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/
paper/2017/file/0efbe98067c6c73dba1250d2beaa81f9-Paper.pdf.
Charlie Snell, Dan Klein, and Ruiqi Zhong. Learning by distilling context, 2022. URL https:
//arxiv.org/abs/2209.15189.
Yu Sun, Xinhao Li, Karan Dalal, Jiarui Xu, Arjun Vikram, Genghan Zhang, Yann Dubois, Xinlei
Chen, Xiaolong Wang, Sanmi Koyejo, Tatsunori Hashimoto, and Carlos Guestrin. Learning to
(learn at test time): Rnns with expressive hidden states, 2024. URL https://arxiv.org/
abs/2407.04620.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy
Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model.
https://github.com/tatsu-lab/stanford_alpaca, 2023.
Yi Tay, Mostafa Dehghani, Dara Bahri, and Donald Metzler. Efficient transformers: A survey, 2022.
URL https://arxiv.org/abs/2009.06732.
TogetherAI. Redpajama: an open dataset for training large language models, 2023. URL https:
//github.com/togethercomputer/RedPajama-Data.
Lewis Tunstall, Edward Beeching, Nathan Lambert, Nazneen Rajani, Shengyi Huang, Kashif Rasul,
Alvaro Bartolome, Alexander M. Rush, and Thomas Wolf. The Alignment Handbook, 2023. URL
https://github.com/huggingface/alignment-handbook.
Pablo Villalobos, Anson Ho, Jaime Sevilla, Tamay Besiroglu, Lennart Heim, and Marius Hobbhahn.
Will we run out of data? limits of llm scaling based on human-generated data, 2024.
Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Courna-
peau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, St´efan J. van der
Walt, Matthew Brett, Joshua Wilson, K. Jarrod Millman, Nikolay Mayorov, Andrew R. J. Nel-
son, Eric Jones, Robert Kern, Eric Larson, C J Carey, ˙Ilhan Polat, Yu Feng, Eric W. Moore,
Jake VanderPlas, Denis Laxalde, Josef Perktold, Robert Cimrman, Ian Henriksen, E. A. Quintero,
Charles R. Harris, Anne M. Archibald, Antˆonio H. Ribeiro, Fabian Pedregosa, Paul van Mul-
bregt, and SciPy 1.0 Contributors. SciPy 1.0: Fundamental Algorithms for Scientific Computing
in Python. Nature Methods, 17:261–272, 2020. doi: 10.1038/s41592-019-0686-2.
Alex Wang, Richard Yuanzhe Pang, Angelica Chen, Jason Phang, and Samuel R. Bowman. SQuAL-
ITY: Building a long-document summarization dataset the hard way. In Yoav Goldberg, Zornitsa
Kozareva, and Yue Zhang (eds.), Proceedings of the 2022 Conference on Empirical Methods in
Natural Language Processing, pp. 1139–1156, Abu Dhabi, United Arab Emirates, December
2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.emnlp-main.75. URL
https://aclanthology.org/2022.emnlp-main.75.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H. Chi, Sharan Narang, Aakanksha
Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language
models. In The Eleventh International Conference on Learning Representations, 2023a. URL
https://openreview.net/forum?id=1PL1NIMMrw.
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and
Hannaneh Hajishirzi. Self-instruct: Aligning language models with self-generated instructions.
In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual
Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 13484–
13508, Toronto, Canada, July 2023b. Association for Computational Linguistics. doi: 10.18653/
v1/2023.acl-long.754. URL https://aclanthology.org/2023.acl-long.754.
Alex Warstadt, Aaron Mueller, Leshem Choshen, Ethan Wilcox, Chengxu Zhuang, Juan Ciro,
Rafael Mosquera, Bhargavi Paranjabe, Adina Williams, Tal Linzen, and Ryan Cotterell (eds.).
Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Lan-
guage Learning, Singapore, December 2023. Association for Computational Linguistics. URL
https://aclanthology.org/2023.conll-babylm.0.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi,
Quoc V. Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language
models. In Proceedings of the 36th International Conference on Neural Information Processing
Systems, NIPS ’22, Red Hook, NY, USA, 2024. Curran Associates Inc. ISBN 9781713871088.
Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V. Le. Self-training with noisy student
improves imagenet classification. In 2020 IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR), pp. 10684–10695, 2020. doi: 10.1109/CVPR42600.2020.01070.
I. Zeki Yalniz, Herv´e J´egou, Kan Chen, Manohar Paluri, and Dhruv Mahajan. Billion-scale semi-
supervised learning for image classification, 2019. URL https://arxiv.org/abs/1905.
00546.
Zitong Yang, Michal Lukasik, Vaishnavh Nagarajan, Zonglin Li, Ankit Rawat, Manzil Za-
heer, Aditya K Menon, and Sanjiv Kumar. Resmem: Learn what you can and memorize the rest.
In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine (eds.), Advances in
Neural Information Processing Systems, volume 36, pp. 60768–60790. Curran Associates, Inc.,
2023. URL https://proceedings.neurips.cc/paper_files/paper/2023/file/bf0857cb9a41c73639f028a80301cdf0-Paper-Conference.pdf.
Dong Yuan, Eti Rastogi, Gautam Naik, Sree Prasanna Rajagopal, Sagar Goyal, Fen Zhao, Bharath
Chintagunta, and Jeff Ward. A continued pretrained llm approach for automatic medical note
generation, 2024a. URL https://arxiv.org/abs/2403.09057.
Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Xian Li, Sainbayar Sukhbaatar, Jing Xu,
and Jason Weston. Self-rewarding language models, 2024b. URL https://arxiv.org/
abs/2401.10020.
Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence.
In International Conference on Machine Learning, pp. 3987–3995. PMLR, 2017.
Dan Zhang, Sining Zhoubian, Yisong Yue, Yuxiao Dong, and Jie Tang. Rest-mcts*: Llm self-
training via process reward guided tree search, 2024a. URL https://arxiv.org/abs/
2406.03816.
Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. Bertscore:
Evaluating text generation with bert. In International Conference on Learning Representations,
2020. URL https://openreview.net/forum?id=SkeHuCVFDr.
Tianyi Zhang, Faisal Ladhak, Esin Durmus, Percy Liang, Kathleen McKeown, and Tatsunori B.
Hashimoto. Benchmarking large language models for news summarization. Transactions of the
Association for Computational Linguistics, 12:39–57, 2024b. doi: 10.1162/tacl a 00632. URL
https://aclanthology.org/2024.tacl-1.3.
Yanli Zhao, Andrew Gu, Rohan Varma, Liang Luo, Chien-Chin Huang, Min Xu, Less Wright,
Hamid Shojanazeri, Myle Ott, Sam Shleifer, Alban Desmaison, Can Balioglu, Pritam Damania,
Bernard Nguyen, Geeta Chauhan, Yuchen Hao, Ajit Mathews, and Shen Li. Pytorch fsdp: Expe-
riences on scaling fully sharded data parallel. Proc. VLDB Endow., 16(12):3848–3860, aug 2023.
ISSN 2150-8097. doi: 10.14778/3611540.3611569. URL https://doi.org/10.14778/
3611540.3611569.
Zexuan Zhong, Zhengxuan Wu, Christopher Manning, Christopher Potts, and Danqi Chen.
MQuAKE: Assessing knowledge editing in language models via multi-hop questions. In Houda
Bouamor, Juan Pino, and Kalika Bali (eds.), Proceedings of the 2023 Conference on Empiri-
cal Methods in Natural Language Processing, pp. 15686–15702, Singapore, December 2023.
Association for Computational Linguistics.
doi: 10.18653/v1/2023.emnlp-main.971. URL
https://aclanthology.org/2023.emnlp-main.971.
Chen Zhu, Ankit Singh Rawat, Manzil Zaheer, Srinadh Bhojanapalli, Daliang Li, Felix Yu, and
Sanjiv Kumar. Modifying memories in transformer models, 2020.
CODEBASE, DATASET, AND MODEL WEIGHTS
We provide the codebase for reproducing all results discussed in the paper below:
https://github.com/ZitongYang/Synthetic_Continued_Pretraining.git
We release the 455M EntiGraph corpus below:
https://huggingface.co/datasets/zitongyang/entigraph-quality-corpus
We release the EntiGraph CPT model weights below:
https://huggingface.co/zitongyang/llama-3-8b-entigraph-quality
A ABLATION STUDIES
We present ablation experiments to further validate EntiGraph’s effectiveness and test its general-
ization properties. We discussed two potential limitations in §7.1:
1. Could the gains of Synthetic CPT be explained by distillation effects, due to the use of a
strong prompted LM for synthetic data generation?
2. Is the data synthesized in Synthetic CPT factual?
We provide evidence suggesting these are not significant concerns in Appendix A.1 and Appendix
A.2, respectively. Lastly, we repeat the procedure of the core experiments on another small corpus
of Coursera lecture transcripts, to provide evidence that Synthetic CPT generalizes to datasets and
domains beyond QuALITY (Appendix A.3).
A.1 USING A WEAKER SYNTHETIC DATA GENERATION LM
One potential concern is whether EntiGraph’s success demonstrated in §4 stems from distilling
knowledge from GPT-4. To investigate this, we conducted an experiment replacing GPT-4-Turbo
with a significantly weaker model, Llama 3.1 8B Instruct, as the synthetic data generator. Recall
that in all continued pretraining experiments, we finetune the 8B parameter Llama 3 Base model.
Therefore, in this experiment, the capabilities of the synthetic data generator and the continually
pretrained model are very similar, controlling for distillation effects. Using the entity extraction
and relation analysis prompts introduced in §2, we generate 334M synthetic tokens and evaluate the
scaling behavior under the same hyperparameter setup detailed in §4.1.
Figure 5 reveals two key insights. First, even with the weaker generator, EntiGraph maintains steady
log-linear improvement with no signs of saturation at 334M tokens, suggesting that the gains of Syn-
thetic CPT stem from continued pretraining on diverse representations of the corpora’s underlying
knowledge, rather than distilling the generator model’s knowledge. Similar to our main results (§4),
EntiGraph with a Llama 3.1 8B Instruct generator consistently outperforms Rephrase with the same
generator. Moreover, at 334M synthetic tokens, EntiGraph with a Llama 3.1 8B Instruct generator
outperforms closed-book evaluation of GPT-4-Turbo.
Second, while switching from the GPT-4-Turbo generator to the weaker generator shifts the accuracy
curve downward, the log-linear slope remains consistent. In contrast, holding the synthetic generator
constant, we observe that EntiGraph CPT and Rephrase CPT exhibit different slopes.
Figure 5: The scaling properties of Synthetic CPT with the EntiGraph and Rephrase augmentations,
comparing two synthetic data generators: GPT-4-Turbo and Llama 3.1 8B Instruct.

A.2 FACTUALITY AND LEXICAL DIVERSITY OF ENTIGRAPH SYNTHETIC CORPUS

Factuality. A limitation discussed in §7.1, and inherent in all methods involving synthetic data
generation, is that the generation model may hallucinate. EntiGraph is a synthetic data augmentation
method that conditions an LM on a given corpus document and prompts the LM to discuss the
document's entities and their relationships. Assuming a reasonably good generator model, this
grounding should decrease the hallucination rate.
To quantitatively test the factuality of documents synthesized with EntiGraph, we split the 455M
token EntiGraph corpus into sentences and randomly sample 150 sentences. We ask authors of this
work to label whether each sentence is subjective or not, and among non-subjective sentences, to
determine whether it is supported by the article text or not.
We compute two statistics: the proportion of subjective sentences, i.e., the number of subjective
sentences divided by the total number of annotated sentences; and the factuality rate, i.e., the number
of non-subjective sentences supported by the source document divided by the total number of
non-subjective sentences, following Min et al. (2023):
• Proportion subjective: 0.532 (bootstrap 0.95 confidence interval: [0.455, 0.610]).
• Factuality rate: 0.944 (bootstrap 0.95 confidence interval: [0.889, 0.986]).
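The bootstrap confidence intervals above can be reproduced with a short resampling procedure over the binary annotations. The sketch below is a minimal illustration, assuming the annotations are stored as lists of 0/1 labels; the variable names and the example label counts are hypothetical placeholders, not our released data.

```python
import random

def bootstrap_ci(values, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of a list of 0/1 labels."""
    rng = random.Random(seed)
    n = len(values)
    means = []
    for _ in range(n_boot):
        sample = [values[rng.randrange(n)] for _ in range(n)]
        means.append(sum(sample) / n)
    means.sort()
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return sum(values) / n, (lo, hi)

# Hypothetical annotations over 150 sampled sentences: 1 = subjective.
is_subjective = [1] * 80 + [0] * 70
# For the non-subjective sentences only: 1 = supported by the source document.
is_supported = [1] * 66 + [0] * 4

prop_subj, ci_subj = bootstrap_ci(is_subjective)
fact_rate, ci_fact = bootstrap_ci(is_supported)
print(f"Proportion subjective: {prop_subj:.3f}, 95% CI {ci_subj}")
print(f"Factuality rate:       {fact_rate:.3f}, 95% CI {ci_fact}")
```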
Because EntiGraph uses open-ended prompts which ask the LM to relate different, often abstract en-
tities, the LM often generates subjective statements. We do not necessarily view this as a limitation,
because learning reasonable subjective interpretations is crucial for understanding (and hence is of-
ten assessed in, e.g., essay questions on literature exams). We also observe that the non-subjective
sentences are consistently factual, supporting the effectiveness of grounding in reducing hallucina-
tion.
Lexical Diversity. We hypothesize that good synthetic data augmentations should produce knowl-
edge representations with diverse wording. As a measure of this lexical diversity, we compute the
percentage of n-grams in the synthetic documents that overlap with the n-grams of the correspond-
ing source documents.
More precisely, we first randomly select 100 QuALITY articles, tokenize them with the Llama 3.1
tokenizer, and compute the set of n-grams for each article. Then, for each article, we tokenize the
corresponding EntiGraph and Rephrase synthetic data, compute n-grams, and count the n-grams in
the synthetic data that appear in the set of n-grams for the raw article. For each n and synthetic
augmentation method, we sum this overlap count across articles and normalize by the total number
of synthetic tokens generated for the 100 articles, providing us an estimate of the percentage of
n-grams in the synthetic data that overlap with the source data.
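A minimal sketch of this computation is given below. It assumes the raw articles and their synthetic documents have already been tokenized into lists of token ids (e.g., with the Llama 3.1 tokenizer); the function names are ours, not part of the released codebase.

```python
def ngrams(tokens, n):
    """All n-grams (as tuples) of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def overlap_percentage(articles, synthetic, n):
    """articles: list of token lists; synthetic: parallel list of token lists
    (the synthetic data generated for each article). Returns the percentage of
    synthetic n-grams that also appear in the corresponding source article,
    normalized by the total number of synthetic tokens, as described above."""
    overlap_count, total_synthetic_tokens = 0, 0
    for art_tokens, syn_tokens in zip(articles, synthetic):
        source_ngrams = set(ngrams(art_tokens, n))
        overlap_count += sum(1 for g in ngrams(syn_tokens, n) if g in source_ngrams)
        total_synthetic_tokens += len(syn_tokens)
    return 100.0 * overlap_count / max(total_synthetic_tokens, 1)
```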
These results are provided in Table 4. We observe that for both augmentations, n-gram overlap per-
centage is low and quickly approaches 0% with increasing n, indicating that both methods produce
lexically diverse knowledge representations.
Augmentation    n = 2    n = 4    n = 8    n = 16
EntiGraph       23.40    3.66     0.24     0.00
Rephrase        21.35    3.04     0.51     0.22

Table 4: Percentage of token n-grams in synthetic documents that overlap with the source document
n-grams, for the EntiGraph and Rephrase synthetic data augmentations.
A.3 DATASETS BEYOND QUALITY
To test whether synthetic CPT with EntiGraph generalizes to corpora beyond QuALITY, we evalu-
ated on the Coursera Exam QA dataset (An et al., 2023). This dataset contains lecture transcripts and
exam questions from advanced technical courses like data science and machine learning. Compared
to the books and stories in QuALITY, Coursera exams present new challenges—the content is harder
conceptually, questions can have multiple correct answers, and the number of options is not fixed to
four choices. This makes few-shot prompting more demanding, as the model must understand both
the content and the flexible answering format.
The dataset consists of 15 lecture transcripts and 124K raw tokens, substantially smaller than QuAL-
ITY’s 265 documents and 1.3M raw tokens. During our scaling analysis, we found that models
trained on tiny synthetic corpora (e.g., a few million tokens) struggled to follow few-shot prompts
reliably for Coursera questions, resulting in parsing errors. Therefore, the scaling curve in Fig. 6
begins at token counts where parsing error rates fall below 5%. For the Rephrase baseline,
we generate synthetic data up to 22M tokens, and find that only one model has parsing error rates
below 5%.
Figure 6: The scaling properties of Synthetic CPT using the EntiGraph augmentation on the Cours-
era Exam QA dataset.
Despite these challenges, EntiGraph CPT improves accuracy consistently from 48.26% to 53.87%,
outperforming both Llama 3 8B Base and the Rephrase baseline.
The log-linear scaling pattern persists up to 32M synthetic tokens, suggesting EntiGraph’s effec-
tiveness extends beyond narrative texts to technical educational content. This successful transfer
to a substantially different domain provides evidence for the generalizability of synthetic continued
pretraining and EntiGraph.
B ADDITIONAL RELATED WORK
Synthetic data generation. There is a rich literature on using neural nets to generate synthetic
data. Many such approaches were originally developed for semi-supervised learning—self-training
and pseudo-labeling methods improve models by iteratively training them on their own predictions
(Scudder, 1965; Lee, 2013; Yalniz et al., 2019; Berthelot et al., 2019; Xie et al., 2020), and co-
training uses two models to supervise each other (Blum & Mitchell, 1998; Balcan et al., 2004).
Before language models rose to prominence, few approaches attempted to synthesize inputs. One
exception is membership query synthesis, which explored the synthesis of inputs in a supervised
learning context (Angluin, 1988; Schumann & Rehbein, 2019).
Contemporary works employ co-training (Lang et al., 2022) and self-training to improve language
model performance, often on mathematical reasoning tasks (Huang et al., 2023; Gulcehre et al.,
2023; Zhang et al., 2024a), or synthesize input-output pairs for instruction tuning, usually by con-
ditioning on a curated seed set (Wang et al., 2023b; Honovich et al., 2023; Taori et al., 2023; Peng
et al., 2023; Yuan et al., 2024b; Li et al., 2024).
Continual learning and pretraining. Continual learning is rooted in historical work on connec-
tionist networks (McCloskey & Cohen, 1989; Ratcliff, 1990) and considers learning with tasks ar-
riving in an online manner (Schlimmer & Fisher, 1986; Grossberg, 2012). The main focus is on
mitigating a neural net’s “catastrophic forgetting” of previously encountered tasks (Robins, 1995;
Goodfellow et al., 2015; Kemker et al., 2018). Approaches include regularizing parameter updates
to preserve important parameters (Nguyen et al., 2017; Zenke et al., 2017; Kirkpatrick et al., 2017);
dynamically modifying the architecture (Rusu et al., 2016; Golkar et al., 2019); and recalling or
replaying previous experiences (Rebuffi et al., 2017; Shin et al., 2017; Lopez-Paz & Ranzato, 2017).
Modern works in continued pretraining (cf. §1.1) effectively mitigate catastrophic forgetting by
scaling parameter count (Ramasesh et al., 2022) and mixing in updates on pretraining data (Ouyang
et al., 2022).
C DETAILS ON THE QUALITY DATASET
We provide additional details on the QuALITY dataset below. For each book, we execute entity
extraction (Step 1, §2.2) and then analyze all pairwise relations between entities and a subset of all
triplet relations (Step 2, §2.2). We provide summary statistics for the Raw and EntiGraph corpora in
Figure 7.
(a) Raw article tokens
(b) Extracted entities
(c) EntiGraph corpus tokens
Figure 7: Histograms over the 265 QuALITY articles and books. (a) The token count of raw articles.
(b) The number of extracted entities. (c) The token count of EntiGraph synthetic data (generated for
each book).
D TRAINING DETAILS FOR THE MAIN EXPERIMENTS
Continued pretraining details.
In all experiments, we continue pretraining the Llama 3 8B Base
model with a context length of 2048 and batch size of 16. We apply a linear learning rate warmup
for 5% of total steps, followed by a cosine decay with peak learning rate 5e-6. We perform full
parameter training with Fully Sharded Data Parallelism (FSDP, Zhao et al. (2023)).
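As a concrete illustration of this schedule, the snippet below builds a warmup-plus-cosine learning rate schedule with the stated peak learning rate using the Hugging Face transformers utility; the optimizer choice, parameters, and step counts are placeholders, not values taken from our configuration files.

```python
import torch
from transformers import get_cosine_schedule_with_warmup

# Placeholder parameters; in practice these would be the Llama 3 8B weights.
params = [torch.nn.Parameter(torch.zeros(1))]
total_steps = 10_000                      # placeholder total step count
warmup_steps = int(0.05 * total_steps)    # linear warmup for 5% of total steps

optimizer = torch.optim.AdamW(params, lr=5e-6)  # peak learning rate 5e-6
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=warmup_steps,
    num_training_steps=total_steps,
)
```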
EntiGraph continued pretraining details. To mitigate the forgetting of pretrained knowledge,
we perform replay with a rate of 0.1 using 1B RedPajama tokens (TogetherAI, 2023). More pre-
cisely, for each training batch, we flip a biased coin such that with 10% probability, we load the
RedPajama data instead of the EntiGraph synthetic data.
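A minimal sketch of this biased-coin replay scheme, with hypothetical iterators over the two data sources, is shown below.

```python
import itertools
import random

def replayed_batches(entigraph_batches, redpajama_batches, replay_rate=0.1, seed=0):
    """Yield training batches; with probability `replay_rate`, substitute a
    RedPajama batch for the EntiGraph batch (the biased coin described above)."""
    rng = random.Random(seed)
    replay_iter = itertools.cycle(redpajama_batches)
    for entigraph_batch in entigraph_batches:
        if rng.random() < replay_rate:
            yield next(replay_iter)
        else:
            yield entigraph_batch
```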
Raw continued pretraining details. Next, we provide details for our continued pretraining di-
rectly on the Raw corpus, producing the “Raw CPT” model. Because the Raw corpus only has 1.3M
tokens, we jointly tune the number of epochs (repetition factor) and the RedPajama replay rate on
accuracy over a QuALITY QA validation split. The selected hyperparameter configuration uses 4
epochs and a 0.1 replay rate.
Instruction tuning details. We use the UltraChat instruction tuning dataset (Ding et al., 2023)
filtered by the Huggingface team (Tunstall et al., 2023) as our instruction tuning data. We use the
chat template of Llama 3.1 8B Instruct (Dubey et al., 2024) to format the UltraChat conversations,
obtaining a 250M token instruction tuning dataset. We apply a linear learning rate warmup followed
by a cosine decay to 0 with peak learning rate 5e-6, and train the model for 1 epoch with a batch size
of 512 and context window of 2048. To sanity check our instruction tuning procedure, we measure
the AlpacaEval (Li et al., 2023a) winrate against GPT-4 and find it improves from 0% to 6.25%,
comparable to a 7.7% baseline winrate of Llama 2 Chat 13B.
Compute resource. All the continued pretraining experiments are performed with one 8×H100
node. With PyTorch FSDP (Zhao et al., 2023), we obtain throughput of 6090 tokens per second.
Since all experiments use the same model architecture, batch size, and context length, the time to run
the experiments can be calculated based on the total tokens seen during training. For example, the
main EntiGraph model is trained on 455M tokens for 2 epochs, which takes 455M × 2 / 6090 ≈ 149K
seconds, or about 41 hours.
E TASK-SPECIFIC FINETUNING FOR THE QUALITY QUESTION SET
Our work considers task-agnostic synthetic data generation and continued pretraining as a way to
obtain generalizable knowledge about a domain, in a way that can later be extracted via few-shot
prompting (Brown et al., 2020) and instruction tuning (Ouyang et al., 2022).
However, if our goal is only to do well on a single task, such as question answering, then we could
fine-tune a language model for that particular task. This approach worked extremely well on tasks
such as SQuAD (Rajpurkar et al., 2016) in-domain but suffered from degraded performance outside
the fine-tuning data distribution (Awadalla et al., 2022).
We do not extensively perform comparisons to task-specific finetuning due to the more general multi-
task goals of EntiGraph. We run preliminary experiments comparing a simple QA SFT baseline
to EntiGraph, and find that EntiGraph scaling and synthetic data generation costs are generally
favorable even when compared to this strong, task-specific baseline.
QA SFT. We follow the same setup as in §2.1 and §3, except that we do not prompt LMsynth to
generate general knowledge about QuALITY articles. Instead, we prompt LMsynth to generate QA
pairs directly:
You are an assistant to help read a article and then rephrase it in a
question answering format. The user will provide you with an article
with title, year, content. You need to generate a paraphrase of the
same article in question and answer format with multiple tags of
"Question: ..." followed by "Answer: ...". Remember to keep the
meaning and every content of the article intact, including the title,
year, etc.
We repeat this prompt many times at temperature 1.0, resulting in 28M tokens of synthetic question-answer
pairs. We perform the same continued pretraining procedure as in §4.1 on Llama 3 8B and refer
to this model as “QA SFT”.
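For concreteness, a sketch of this generation loop using the OpenAI chat completions API is shown below; the model name, article fields, and sampling count are illustrative placeholders rather than our exact configuration.

```python
from openai import OpenAI

client = OpenAI()
QA_SFT_PROMPT = "..."  # the system prompt shown above

def generate_qa_pairs(article: dict, num_samples: int = 4) -> list[str]:
    """Sample QA-format paraphrases of one article at temperature 1.0."""
    user_message = (
        f"Title: {article['title']}\nYear: {article['year']}\n\n{article['content']}"
    )
    outputs = []
    for _ in range(num_samples):
        response = client.chat.completions.create(
            model="gpt-4-turbo",  # placeholder generator model
            temperature=1.0,
            messages=[
                {"role": "system", "content": QA_SFT_PROMPT},
                {"role": "user", "content": user_message},
            ],
        )
        outputs.append(response.choices[0].message.content)
    return outputs
```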
Figure 8: Accuracy on the QuALITY question set Qtest (y-axis) as a function of the synthetic token
count (x-axis). Comparison among EntiGraph CPT, Rephrase CPT, and QA SFT.
Results discussion. We plot the QA SFT scaling curve in Figure 8. We can see that task-specific
finetuning demonstrates a very sharp improvement in QA accuracy, consistent with prior results
showing task-specific finetuning gains for pretrained models. While QA SFT performance is high,
we note that EntiGraph attains similar performance despite being entirely task-agnostic, and the
overall dollar cost of creating the dataset is much lower for EntiGraph.
This difference in synthetic data generation cost is hidden in Figure 8, as we plot the number of
training tokens rather than dollars spent to generate the synthetic data. For QA SFT, each QA
question is generally short, resulting in large inefficiencies in generating this QA dataset. We found
that the input token to output token ratio was large compared with Rephrase CPT and EntiGraph
CPT, resulting in over $5K to generate just 28M tokens (OpenAI API pricing, Sep 2024). This difference in cost means that further
scaling became prohibitively expensive, and that EntiGraph’s performance in Figure 8 is even better
than it appears, if we match for total cost rather than token budget.
F ADDITIONAL DETAILS ON OPEN-BOOK EXPERIMENTS
We provide additional details on our open-book experimental setup below, including our retrieval-
augmented generation (RAG, Lewis et al. (2020); Gao et al. (2024)) pipeline. As mentioned in §5,
we use a standard two-stage RAG pipeline: first, an offline stage which indexes document chunks;
second, inference-time retrieval, reranking, and placement of those chunks in a few-shot LM prompt.
F.1 STAGE 1: OFFLINE INDEXING
The purpose of the indexing stage is to construct an index over all the 265 articles and books from
the QuALITY corpus Dsource. More specifically, this stage chunks documents from the given corpus,
obtains dense vector embeddings for each chunk using an API-based embedding model, and indexes
the (embedding, chunk) pairs.
Chunking documents. We first split each document $D^{(i)} \in \{D^{(i)}\}_{i=1}^{n} = D_{\text{source}}$ into a set
of $m_i$ document chunks $\{C^{(i)}_1, \dots, C^{(i)}_{m_i}\}$. To perform this splitting, we use the
RecursiveCharacterTextSplitter from Chase (2022), which attempts to keep all paragraphs (and then
sentences, and then words) together for as long as possible, in order to preserve the semantics within
each chunk. We use non-overlapping chunks and tune the chunk size in characters (chunk_size;
hyperparameter values provided below). Lastly, because we have access to metadata about each
document D(i)—namely, the title, author, and year of the book or article—we prepend this meta-
data to each document chunk. This is analogous to how a corporation building a RAG system over
their own document store could include metadata about the document (title, author, year, etc.). These
final chunks with metadata prepended are embedded, and are the ones that are retrieved and placed
in-context.
Embedding and indexing document chunks. Next, we obtain dense embeddings for all
document chunks using a state-of-the-art text embedding model, OpenAI text-embedding-3-large
(Neelakantan et al., 2022). Lastly, we index all (embedding, chunk) tuples using a
FAISS vector store (Douze et al., 2024).
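A condensed sketch of this offline indexing stage, using the LangChain text splitter, the OpenAI embeddings API, and FAISS, is given below; the chunk size shown and the single-call batch handling are simplifications relative to our tuned pipeline, and the document dictionary fields are assumed names.

```python
import faiss
import numpy as np
from langchain.text_splitter import RecursiveCharacterTextSplitter
from openai import OpenAI

client = OpenAI()
splitter = RecursiveCharacterTextSplitter(chunk_size=1024, chunk_overlap=0)

def build_index(documents):
    """documents: list of dicts with 'title', 'author', 'year', 'text' fields."""
    chunks = []
    for doc in documents:
        header = f"{doc['title']} by {doc['author']} ({doc['year']})\n"
        # Prepend document metadata to every chunk, as described above.
        chunks.extend(header + c for c in splitter.split_text(doc["text"]))

    # Embed chunks with text-embedding-3-large and index them with FAISS (L2).
    resp = client.embeddings.create(model="text-embedding-3-large", input=chunks)
    embeddings = np.array([d.embedding for d in resp.data], dtype="float32")
    index = faiss.IndexFlatL2(embeddings.shape[1])
    index.add(embeddings)
    return index, chunks
```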
F.2 STAGE 2: INFERENCE-TIME RETRIEVAL AND RERANKING
At inference time, the RAG system receives a test query q ∈ Qtest. Each query q is contextualized
with the article title and author name, as described in §3, and contains its four possible answer
choices (QuALITY is a 4-choice, multiple choice dataset). In Stage 2, we embed the query with the
API-based embedding model, retrieve K document chunks using an approximate nearest-neighbor
search, and lastly, select the k < K most relevant chunks using an API-based reranker.
Retrieving top-K document chunks. We embed q with text-embedding-3-large, and
retrieve the top-K most relevant document chunks from our indexed vector store using FAISS simi-
larity search with a Euclidean distance metric.
Reranking to obtain top-k (k < K) chunks. Next, we use a reranker to filter the K retrieved
document chunks to a smaller number of reranked chunks k. Rerankers are known to significantly
improve recall (the proportion of the time that the salient article is contained in the top chunks),
and indeed, the recall of our RAG pipelines is near-perfect (Table 3 in §5). Specifically, we pass
the query q and the list of K retrieved document chunks to a state-of-the-art reranker—Cohere
rerank-english-v3.0 (Cohere, 2024)—which returns a list of the K chunks in order from
most to least semantically relevant for the query. We take the k highest scoring chunks and place
them in our few-shot prompt.
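The corresponding inference-time stage can be sketched as follows, reusing the index, chunk list, and OpenAI client from the previous snippet; the Cohere client call mirrors the reranker described above, and the K and k values are illustrative rather than tuned.

```python
import cohere
import numpy as np

co = cohere.Client()  # assumes the CO_API_KEY environment variable is set

def retrieve_and_rerank(query, index, chunks, K=128, k=8):
    """Embed the query, retrieve the top-K chunks with FAISS, rerank to top-k."""
    resp = client.embeddings.create(model="text-embedding-3-large", input=[query])
    q_emb = np.array([resp.data[0].embedding], dtype="float32")
    _, ids = index.search(q_emb, K)            # nearest-neighbor search over the index
    candidates = [chunks[i] for i in ids[0]]

    reranked = co.rerank(
        model="rerank-english-v3.0", query=query, documents=candidates, top_n=k
    )
    # Return the k most relevant chunks, best match first.
    return [candidates[r.index] for r in reranked.results]
```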
Few-shot prompt formatting. Our full few-shot chain-of-thought evaluation prompts for the
open-book setting will be provided in our code release. Similar to the closed-book QA evaluation
prompt, we manually write and fact-check in-context learning examples about well-known books, to
avoid leaking knowledge from the QuALITY articles. In early experiments, we found that placing
the retrieved contexts first, followed by the question and answer choices after, significantly improved
performance compared to question-then-contexts; we use this format throughout the retrieval exper-
iments. We treat as a hyperparameter whether the reranked chunks are ordered from the best match
to worst (best first) or from the worst match to best (best last). When performing few-shot
evaluation, we follow the sampling procedure used in the closed-book experiments (Appendix I.1).
Specifically, we generate 64 responses for each question, and filter out responses that do not parse
to one of the four choices. Lastly, we randomly select one of the valid responses as the model’s final
answer.
F.3 HYPERPARAMETER TUNING
In our experiments, we compare two LMs used in the RAG pipeline above: EntiGraph CPT and its
base model, Llama 3 8B Base. As mentioned above, we fix the retrieved number of chunks to K =
128, but vary the number of reranked chunks k which are ultimately placed in the context window.
For each language model + RAG pipeline, we independently tune the following hyperparameters
with a grid search on accuracy using a QuALITY QA validation split:
• Document chunk size ∈ {256, 512, 1024}
• Rerank top-k ∈ {1, 2, 4, 8, 16}
• Order of chunks ∈ {best first, best last}
• Eval temperature ∈ {0.1, 0.3, 0.5, 0.7}
We will provide tuned hyperparameters in our code release.
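Enumerating this grid is straightforward; the sketch below lists all configurations (3 × 5 × 2 × 4 = 120 per language model) and selects the best on the validation split, with `evaluate` standing in for the accuracy measurement of a full RAG pipeline run.

```python
from itertools import product

grid = {
    "chunk_size": [256, 512, 1024],
    "rerank_top_k": [1, 2, 4, 8, 16],
    "chunk_order": ["best_first", "best_last"],
    "eval_temperature": [0.1, 0.3, 0.5, 0.7],
}

def tune(evaluate):
    """evaluate: callable mapping a config dict to validation accuracy."""
    best_config, best_acc = None, float("-inf")
    for values in product(*grid.values()):
        config = dict(zip(grid.keys(), values))
        acc = evaluate(config)
        if acc > best_acc:
            best_config, best_acc = config, acc
    return best_config, best_acc
```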
G PROOF OF THEOREM 1 AND OTHER ANALYTICAL FORMULAS
In this section, we prove Theorem 1 and provide the derivations for several other approximation
formulas.
Proof of Theorem 1. Fix the matrix $M_0$. We observe that
\[
\mathrm{Acc}(M_t) = \frac{\mathbb{E}[\|M_t\|_1 \mid M_0]}{V(V-1)} = \sum_{(i,j)\in V^2} \frac{\mathbb{E}[\mathbf{1}((i,j)\in D_t) \mid M_0]}{V(V-1)} = \sum_{(i,j)\in V^2} \frac{\mathbb{P}[(i,j)\in D_t \mid M_0]}{V(V-1)}.
\]
For each $(i,j) \in V^2$, we define $q_{i,j}$ to be the probability that $(i,j)$ is included in the set
$\{(x_t, z_t^1), (x_t, z_t^2), \dots, (x_t, z_t^{k_t}), (x_t, y_t)\}$. Note that each iteration of the procedure generates a
path $(x_t, z_t^1, z_t^2, \dots, z_t^{k_t}, y_t)$ independently and identically. So naturally $q_{i,j}$ does not depend on the
time $t$. This implies that $\mathbb{P}[(i,j)\in D_t \mid M_0] = 1-(1-q_{i,j})^t$. Thus we can further rewrite the link
density as
\[
\mathrm{Acc}(M_t) = \frac{|D_{\text{source}}|}{V(V-1)} + \sum_{(i,j)\in V^2\setminus D_{\text{source}}} \frac{\mathbb{P}[(i,j)\in D_t \mid M_0]}{V(V-1)}
= \frac{|D_{\text{source}}|}{V(V-1)} + \sum_{(i,j)\in V^2\setminus D_{\text{source}}} \frac{1-(1-q_{i,j})^t}{V(V-1)}.
\]
The remaining task is to estimate $q_{i,j}$. We say a vertex $j$ is reachable from $i$, and denote $i \sim j$, if
there is a directed path from $i$ to $j$ in $M_0$. We define $R = \{(u,v)\in V^2 : u \neq v,\, u \sim v\}$ to be the
set of all reachable pairs of vertices in $V$. We note that $q_{i,j}$ is non-zero if and only if $j$ is reachable
from $i$ in $M_0$. Now, for any $t \geq 1$, the function $1-(1-x)^t$ is concave, thus by Jensen's inequality,
we have
\[
\sum_{(i,j)\in V^2\setminus D_{\text{source}}} 1-(1-q_{i,j})^t \;\le\; \sum_{(i,j)\in R} 1-(1-q_{i,j})^t \;\le\; |R|\left(1-(1-\bar{q}_{i,j})^t\right),
\quad\text{where}\quad \bar{q}_{i,j} = \frac{\sum_{(i,j)\in R} q_{i,j}}{|R|}.
\]
For each $(i,j)\in R$, the probability $q_{i,j}$ satisfies
\[
q_{i,j} = \frac{\sum_{a\neq b \in V^2} \mathbf{1}\big((i,j)\in\{(a,z_1),(a,z_2),\dots,(a,z_k),(a,b)\}\big)}{V(V-1)},
\]
where $(a, z_1, z_2, \dots, z_k, b)$ is the shortest path in $M_0$ connecting $a$ and $b$. If there is no such path,
then by default the indicator equals zero. Now we look at
\[
\begin{aligned}
\sum_{(i,j)\in R} q_{i,j}
&= \frac{1}{V(V-1)} \sum_{(i,j)\in R} \sum_{(a,b)\in R} \mathbf{1}\big((i,j)\in\{(a,z_1),(a,z_2),\dots,(a,z_k),(a,b)\}\big) \\
&\le \frac{1}{V(V-1)} \sum_{(a,b)\in R} \sum_{i\neq j\in V^2} \mathbf{1}\big((i,j)\in\{(a,z_1),(a,z_2),\dots,(a,z_k),(a,b)\}\big) \\
&= \frac{1}{V(V-1)} \sum_{(a,b)\in R} \ell_{a,b},
\end{aligned}
\]
where $\ell_{a,b}$ is the length of the shortest path connecting $a$ to $b$. To analyze the typical shortest length
of paths, we present a few classical results on directed Erdős–Rényi graphs. For any $a \in V$, let $X(a)$
denote the set of vertices reachable from $a$ and let $Y(a)$ denote the set of vertices from which $a$ is
reachable. Recall that $\rho(\lambda)$ is the extinction probability for the Poisson($\lambda$) branching process.

Lemma G.1 (Lemma 1 and Corollary 1 in Karp (1990)). For each vertex $a$, with probability tending
to 1 as $V$ tends to infinity, there exists a constant $\beta > 0$ such that either $|X(a)| \leq \beta \log V$ or
$|X(a)| = (1-\rho(\lambda))V + \Theta(\sqrt{V})$. Moreover, the probability that the latter happens tends to $1-\rho(\lambda)$
as $V$ tends to infinity. The same is true for $Y(a)$.
For each vertex $a$, the set $X(a)$ is said to be small if $|X(a)| \leq \beta \log V$ (in such case we write
$a \in S_X$) and large if $|X(a)| = (1-\rho(\lambda))V + \Theta(\sqrt{V})$ (we write $a \in L_X$). We define $S_Y$ and $L_Y$
similarly.

Lemma G.2 (Theorem 3 in Karp (1990) and Theorem 2.4.1 in Durrett (2010)). With probability
tending to 1, the following statement holds for all $a$ and $b$ in $V$: if $X(a)$ is large and $Y(b)$ is large,
then $b$ is reachable from $a$. Moreover, if $X(a)$ is large and $Y(b)$ is large, then for any $\varepsilon > 0$ and any
sufficiently small $\delta > 0$,
\[
\mathbb{P}\left[\ell_{a,b} > (1+\varepsilon)\log V / \log \lambda\right] < \exp\left(-V^{\varepsilon\delta}\right).
\]
With Lemma G.1 and Lemma G.2, we can now give useful estimates of $|R|$. In particular, for any
$\varepsilon > 0$,
\[
\begin{aligned}
|R| &= |\{(a,b)\in R : a \in L_X,\, b \in L_Y\}| + |\{(a,b)\in R : a \in S_X \text{ or } b \in S_Y\}| \\
&\le (1-\rho(\lambda))^2(1+\varepsilon/4)V^2 + 2(1+\varepsilon)V\beta\log V \\
&\le (1-\rho(\lambda))^2(1+\varepsilon/3)V(V-1),
\end{aligned}
\]
with high probability. Similarly, for the lower bound,
|R| = |{(a, b) ∈ R : a ∈ LX , b ∈ LY }| + |{(a, b) ∈ R : a ∈ SX or b ∈ SY }|
≥ (1 − ρ(λ))2(1 − ε)V 2
≥ (1 − ρ(λ))2(1 − ε)V (V − 1),
with high probability. By a union bound over all pairs of (a, b) ∈ R, we also have that
$$\sum_{(i,j)\in R} q_{i,j} \le \frac{1}{V(V-1)}\sum_{(a,b)\in R}\ell_{a,b} = \frac{1}{V(V-1)}\sum_{\substack{(a,b)\in R\\ a\in L_X,\, b\in L_Y}}\ell_{a,b} + \frac{1}{V(V-1)}\sum_{\substack{(a,b)\in R\\ a\in S_X \text{ or } b\in S_Y}}\ell_{a,b}$$
$$\le (1-\rho(\lambda))^2(1+\varepsilon/2)\frac{\log V}{\log\lambda} + \frac{1}{V(V-1)}\,2(1+\varepsilon)V(\beta\log V)^2 \le (1-\rho(\lambda))^2(1+\varepsilon)\frac{\log V}{\log\lambda},$$
with probability larger than 1 − V^2 exp(−V εδ). Combining the above, for any ε > 0,
$$\bar{q}_{i,j} = \frac{\sum_{(i,j)\in R} q_{i,j}}{|R|} \le \frac{(1+\varepsilon)\log V}{V(V-1)\log\lambda},$$
with high probability. Therefore, for any ε > 0,
$$\mathrm{Acc}(M_t) \le \frac{|D_{\mathrm{source}}|}{V(V-1)} + \frac{|R|\left(1 - (1-\bar{q}_{i,j})^t\right)}{V(V-1)} \le (1+\varepsilon)\left(p + (1-\rho(\lambda))^2\left(1-\left(1-\frac{(1+\varepsilon)\log V}{V(V-1)\log\lambda}\right)^t\right)\right),$$
with high probability, which completes the proof of the upper bound. For the lower bound, we
observe that if i ∼ j and (i, j) ∈ R \ Dsource, then qi,j ≥ 1/V(V − 1), because when i and j are
chosen in the procedure, the edge (i, j) will be added. This implies that
$$\mathrm{Acc}(M_t) = \frac{|D_{\mathrm{source}}|}{V(V-1)} + \sum_{(i,j)\in R\setminus D_{\mathrm{source}}} \frac{1-(1-q_{i,j})^t}{V(V-1)} \ge \frac{|D_{\mathrm{source}}|}{V(V-1)} + \frac{|R\setminus D_{\mathrm{source}}|}{V(V-1)}\left(1-\left(1-\frac{1}{V(V-1)}\right)^t\right)$$
$$\ge (1-\varepsilon)\left(p + (1-\rho(\lambda))^2\left(1-\left(1-\frac{1}{V(V-1)}\right)^t\right)\right),$$
with high probability, which completes the proof of the lower bound.
To obtain a more precise description of Acc(Mt), we employ a Poisson branching process to
approximate the cluster growth of vertices, which we now define. A Poisson(λ) branching process is a
model for a population evolving in time, where each individual independently gives birth to a
number of children with Poisson(λ) distribution. We denote by Zn the number of individuals in the n-th
generation, where by default Z0 = 1. Then Zn satisfies the recursion relation $Z_n = \sum_{i=1}^{Z_{n-1}} X_{n,i}$,
where {X_{n,i}}_{n,i≥1} is a doubly infinite array of i.i.d. Poisson(λ) random variables. The total progeny
Yn is then defined as $Y_n = \sum_{i=0}^{n} Z_i$. Zn is often called a Galton–Watson branching process and
the associated tree is called a Galton–Watson tree.
As in the previous proof, an accurate estimate of Acc(Mt) relies on understanding qi,j, the probability
that the edge (i, j) will be added in each round. As before, the only edges that will be added
are those connected to the giant component (i.e., i ∈ L_X and j ∈ L_Y). The proportion of such
edges converges to C_λ as V → ∞. Recall that
$$q_{i,j} = \frac{\sum_{(a,b)\in R} \mathbf{1}\big((i,j)\in\{(a,z_1),(a,z_2),\ldots,(a,z_k),(a,b)\}\big)}{V(V-1)}, \qquad (3)$$
where (a, z_1, z_2, . . . , z_k, b) represents the shortest path in M0 connecting a and b. Equivalently, if
we consider the tree generated by a breadth-first search in M0 rooted at i, then since i ∼ j, j will be
in the tree, and the numerator counts the total number of offspring of j in the tree, including j itself.
This is the point at which a rigorous mathematical characterization of the tree becomes challenging.
Instead, we approximate the tree and analyze its behavior. It is well-known that when p = λ/V,
the cluster growth (or the breadth-first search at a vertex) can be approximated by a Poisson(λ)
branching process (see e.g., Hofstad (2016); Durrett (2010)). For a fixed vertex i, we define T as a
Galton–Watson tree rooted at i with Poisson(λ) offspring distribution with depth L. We use T to
approximate the exploration process at i. For 0 ≤ ℓ ≤ L, the number of vertices at level L − ℓ is
approximately λ^{L−ℓ}. Given that the total number of vertices in T is approximately (1 − ρ(λ))V, the
number of vertices at level L − ℓ is also (1 − ρ(λ))V(λ − 1)/λ^{ℓ+1}. For each vertex at level L − ℓ, the
number of its offspring (including itself) equals k with probability p_ℓ(k). In this case, the numerator
in (3) equals k. Combining the above, there are around (1 − ρ(λ))V · p_ℓ(k) · (1 − ρ(λ))V(λ − 1)/λ^{ℓ+1}
vertex pairs (i, j) in the graph such that i ∈ L_X, j ∈ L_Y, qi,j = k/V(V − 1), and j is located at
level L − ℓ in the tree T. Ultimately, we arrive at an approximation of the form
$$\mathrm{Acc}(M_t) \sim p + C_\lambda\left(1 - \sum_{\ell=0}^{\infty}\frac{\lambda-1}{\lambda^{\ell+1}}\sum_{k=1}^{\infty} p_\ell(k)\left(1-\frac{k}{V(V-1)}\right)^t\right).$$
Beyond Erdős–Rényi graphs, the term qi,j may not be as explicit. We can define C as the proportion
of vertex pairs (i, j) such that i ∼ j in M0; then qi,j is nonzero for CV(V − 1) pairs of vertices. In
this case, if we write a_k = k/V(V − 1) and define µ(k) as the probability that qi,j = a_k, then we
obtain the general formula
$$\mathrm{Acc}(M_t) \sim p + C\left(1 - \sum_{k=1}^{\infty}\mu(k)\,(1-a_k)^t\right).$$
The drawback of this formula is the lack of explicit expressions. For a given M0, it is unclear how
to compute the measure µ(·) easily.
Next, we provide a qualitative description of the shape of such a mixture of exponentials.
Lemma G.3. For a fixed constant 0 < C < 1 and a probability measure µ(·) on Z_+ with finite
mean m, we define
$$f(t) = p + C\left(1 - \sum_{k=1}^{\infty}\mu(k)\left(1 - \frac{k}{V(V-1)}\right)^{tV(V-1)}\right).$$
Then there exist 0 < t_1 < t_2 such that
$$f(t) = \begin{cases}\Theta(p + t), & \text{for } 0 \le t \le t_1,\\ \Theta(\log t), & \text{for } t_1 \le t \le t_2,\\ \Theta(1), & \text{for } t \ge t_2,\end{cases}$$
as V → ∞.
Proof of Lemma G.3. Fix any 1 < t_1 < t_2. Note that f(t) is monotone increasing, concave, and
always bounded by 1. We also have
$$f(t_2) \ge p + C\left(1 - \left(1 - \frac{1}{V(V-1)}\right)^{t_2 V(V-1)}\right) \ge p + C\left(1 - \exp(-t_2)\right) = \Theta(1).$$
So f(t) = Θ(1) when t ≥ t_2. Now when t ≤ t_1,
$$f(t) \le p + C\left(1 - \sum_{k=1}^{\infty}\mu(k)(1 - tk)\right) \le p + Cmt.$$
Since f(0) = p and f(t_2) ≥ p + C(1 − exp(−t_2)), by concavity, f(t) is lower bounded by
p + tC(1 − exp(−t_2))/t_2 = Θ(p + t) for any 0 ≤ t ≤ t_1. Finally, for t_1 ≤ t ≤ t_2, we note
that f(t_1) ≤ f(t) ≤ 1, so easily f(t) ≤ log t_1 / log t_1 ≤ log t / log t_1 = O(log t). Similarly,
f(t) ≥ f(t_1) log t_2 / log t_2 ≥ log t (f(t_1) / log t_2) ≥ Ω(log t). Therefore, f(t) = Θ(log t) for any
t_1 ≤ t ≤ t_2.
G.1 MORE DETAILS ON THE MIXTURE OF EXPONENTIAL SHAPE
We provide more discussion on the mixture of exponential shape, including how we use it to fit the
empirical EntiGraph CPT QA accuracy.
Sketch of derivation. Intuitively, the edge (i, j) will eventually be added if and only if j is reachable
from i in the original graph M0. This explains the limiting behavior of Acc(Mt) as t approaches
infinity: the proportion of links will converge to the proportion of connected vertex pairs
in M0. To understand the mixture-of-exponential functional form, consider that at the time t, the
probability of adding each vertex pair follows an exponential pattern, with different vertex pairs
exhibiting different exponential growth rates. Specifically, think of a breadth-first search in M0
starting from a vertex i. If j is very close to the root, there are many paths from i to other vertices
passing through j, making it more likely that (i, j) will be included in each iteration. In contrast, if
j is far from the root (e.g., at the end of the exploration process), there are fewer such paths, making
it less likely for (i, j) to be included in each iteration. This accounts for the mixture-of-exponential
shape, where the mixture primarily reflects the distance of each vertex from the root, the number of
such vertices, and their corresponding exponential growth rates.
[Figure 9: Accuracy Acc(Mt) with respect to time t, for V = 100 and p = 0.03. The mixture-of-exponential
functional form in (2) leads to three distinct regimes: (a) a linear regime, (b) a log-linear regime (t in log
scale), and (c) a plateau regime.]
Qualitative description. Finally, to help build an intuitive understanding, we provide a qualitative
description of the mixture-of-exponential shape. We demonstrate in Appendix G that this mixture-
of-exponential shape comprises three distinct phases: a fast growth phase, a slower growth phase,
and a plateau phase. Mathematically, we show the existence of two distinct times, 0 < t1 < t2, such
that
$$\mathrm{Acc}(M_T) = \begin{cases}\Theta(p + t), & \text{for } 0 \le t \le t_1,\\ \Theta(\log t), & \text{for } t_1 \le t \le t_2,\\ \Theta(1), & \text{for } t \ge t_2,\end{cases}$$
where we use a convenient change of variable T = tV (V − 1). It is important to note that the
choice of log t in the second phase is not necessarily canonical. In fact, the bound holds for any
well-behaved monotone increasing concave function as a replacement for log t. Our representation
here is motivated by two factors: first, it aligns with the performance observed in our EntiGraph
CPT numerical results, and second, it reflects the gradual slowdown in growth. We illustrate the
three phases in Figure 9, which presents a simulation of the toy model with p = 0.03.
To perform curve fitting using the mixture-of-exponential formula, we approximate the infinite sum
with three terms in
$$\mathrm{Acc}(M_t) \sim p + C\left(1 - \sum_{k=1}^{\infty}\mu(k)\,(1-a_k)^t\right).$$
Mathematically, we fit the empirical observation against the formula
$$y(x) = a - b_1 r_1^x - b_2 r_2^x - b_3 r_3^x,$$
where x is the EntiGraph token count (in millions) and y(x) is the QuALITY QA accuracy. We
use the non-linear least squares method implemented by Virtanen et al. (2020). As a result of this
procedure, we obtain the fitted formula
$$y(x) = 64.5456 - 13.8352 \times (0.9989)^x - 8.4705 \times (0.8961)^x - 3.932 \times (0.0546)^x.$$
For the implementation of this procedure, we refer readers to our code release.
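As an illustration, the fitting step can be reproduced with a short script along the following lines. This is a minimal sketch assuming SciPy's `curve_fit` (the non-linear least squares implementation of Virtanen et al. (2020)); the data arrays and initial guesses below are placeholders, not the values used in our experiments.

```python
import numpy as np
from scipy.optimize import curve_fit

def mixture_of_exponentials(x, a, b1, r1, b2, r2, b3, r3):
    # Three-term approximation: y(x) = a - b1*r1^x - b2*r2^x - b3*r3^x
    return a - b1 * r1**x - b2 * r2**x - b3 * r3**x

# Placeholder data: EntiGraph token count (in millions) and QuALITY QA accuracy (%).
x_tokens = np.array([0.0, 50.0, 100.0, 200.0, 300.0, 455.0])
y_accuracy = np.array([39.5, 45.0, 48.5, 52.0, 54.5, 56.2])

# Constrain each decay rate r_i to [0, 1] so every term is a decaying exponential.
p0 = [60.0, 10.0, 0.99, 10.0, 0.9, 5.0, 0.1]
lower = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
upper = [100.0, 100.0, 1.0, 100.0, 1.0, 100.0, 1.0]
params, _ = curve_fit(mixture_of_exponentials, x_tokens, y_accuracy, p0=p0, bounds=(lower, upper))

a, b1, r1, b2, r2, b3, r3 = params
print(f"y(x) = {a:.4f} - {b1:.4f}*({r1:.4f})^x - {b2:.4f}*({r2:.4f})^x - {b3:.4f}*({r3:.4f})^x")
```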
H SYNTHETIC DATA GENERATION PROMPTS
We generate two synthetic corpora in this paper: EntiGraph (Appendix H.1) and the Rephrase base-
line (Appendix H.2). In our experiments, the Dsource is a collection of documents D, and our syn-
thetic augmentation procedure is applied to each document D ∈ Dsource. We will focus on a single
document D for the remainder of this section.
H.1 ENTIGRAPH PROMPTS
The EntiGraph procedure is described in detail in §2.2. We will recap the two steps below.
Step 1: Entity extraction. The first step is to extract the salient entities from the document D
using the entity extraction operation (Step 1, §2.2). The complete entity extraction
prompt is as follows:
As a knowledge analyzer, your task is to dissect and understand an
article provided by the user. You are required to perform the
following steps:
1. Summarize the Article: Provide a concise summary of the entire
article, capturing the main points and themes.
2. Extract Entities: Identify and list all significant "nouns" or
entities mentioned within the article. These entities should include
but not limited to:
* People: Any individuals mentioned in the article, using the
names or references provided.
* Places: Both specific locations and abstract spaces relevant to
the content.
* Object: Any concrete object that is referenced by the provided
content.
* Concepts: Any significant abstract ideas or themes that are
central to the article’s discussion.
Try to exhaust as many entities as possible. Your response should be
structured in a JSON format to organize the information effectively.
Ensure that the summary is brief yet comprehensive, and the list of
entities is detailed and accurate.
Here is the format you should use for your response:
{
    "summary": "<A concise summary of the article>",
    "entities": ["entity1", "entity2", ...]
}
Step 2: Relation analysis. The last step is to generate diverse descriptions of relations among
two or more entities. In our experiments, for each document D, we enumerate all entity pairs and
generate a description for each. The prompt for generating a description relating a pair of entities is
as follows:
You will act as a knowledge analyzer tasked with dissecting an
article provided by the user. Your role involves two main
objectives:
1. Rephrasing Content: The user will identify two specific entities
mentioned in the article. You are required to rephrase the
content of the article twice:
* Once, emphasizing the first entity.
* Again, emphasizing the second entity.
2. Analyzing Interactions: Discuss how the two specified entities
interact within the context of the article.
Your responses should provide clear segregation between the
rephrased content and the interaction analysis. Ensure each section
of the output include sufficient context, ideally referencing the
article’s title to maintain clarity about the discussion’s focus.
Here is the format you should follow for your response:
### Discussion of <title> in relation to <entity1>
<Rephrased content focusing on the first entity>
### Discussion of <title> in relation to <entity2>
<Rephrased content focusing on the second entity>
### Discussion of Interaction between <entity1> and <entity2>
in context of <title>
<Discussion on how the two entities interact within the article>
We also generate synthetic data involving three entities, using the prompt below:
You will act as a knowledge analyzer tasked with dissecting an
article provided by the user. Your role involves three main
objectives:
1. Rephrasing Content: The user will identify three specific
entities mentioned in the article. You are required to rephrase
the content of the article three times:
* Once, emphasizing the first entity.
* Again, emphasizing the second entity.
* Lastly, emphasizing the third entity.
2. Analyzing Interactions: Discuss how these three specified
entities interact within the context of the article.
Your responses should provide clear segregation between the
rephrased content and the interaction analysis. Ensure each section
of the output include sufficient context, ideally referencing the
article’s title to maintain clarity about the discussion’s focus.
Here is the format you should follow for your response:
### Discussion of <title> in relation to <entity1>
<Rephrased content focusing on the first entity>
### Discussion of <title> in relation to <entity2>
<Rephrased content focusing on the second entity>
### Discussion of <title> in relation to <entity3>
<Rephrased content focusing on the third entity>
### Discussion of Interaction between <entity1>, <entity2> and
<entity3> in context of <title>
<Discussion on how the three entities interact within the article>
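The relation analysis step described at the start of this subsection, enumerating all entity pairs (and optionally triples) per document, can be sketched as follows; `query_llm` is a hypothetical helper that wraps whichever chat API is used with the prompts above.

```python
from itertools import combinations

def analyze_relations(document: str, entities: list, pair_prompt: str, triple_prompt: str,
                      query_llm, include_triples: bool = False) -> list:
    """Generate relation descriptions for all entity pairs (and optionally triples) of one document."""
    outputs = []
    for e1, e2 in combinations(entities, 2):
        user_message = f"Article:\n{document}\n\nEntities: {e1}, {e2}"
        outputs.append(query_llm(system_prompt=pair_prompt, user_message=user_message))
    if include_triples:
        for e1, e2, e3 in combinations(entities, 3):
            user_message = f"Article:\n{document}\n\nEntities: {e1}, {e2}, {e3}"
            outputs.append(query_llm(system_prompt=triple_prompt, user_message=user_message))
    return outputs
```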
H.2 REPHRASE PROMPTS
For the rephrase corpus, we adapt the prompt from Maini et al. (2024) to our setting of books and
articles. We provide three rephrase styles below:
Easy rephrase:
You are an assistant to help read a article and then rephrase it in
simpler terms. The user will provide you with an article with
title, year, content. You need to generate a paraphrase of the same
article using a very small vocabulary and extremely simple
sentences that a toddler will understand. Remember to keep the
meaning and every content of the article intact, including the
title, year, etc.
Medium rephrase:
You are an assistant to help read a article and then rephrase it in
different terms. The user will provide you with an article with
title, year, content. You need to generate a paraphrase of the same
article using diverse and high quality English language as in
sentences on Wikipedia. Remember to keep the meaning and every
content of the article intact, including the title, year,
etc.
Hard rephrase:
You are an assistant to help read a article and then rephrase it in
more sophisticated terms. The user will provide you with an article
with title, year, content. You need to generate a paraphrase of the
same article using very terse and abstruse language that only an
erudite scholar will understand. Remember to keep the meaning and
every content of the article intact, including the title, year,
etc.
I ADDITIONAL EVALUATION DETAILS OF MAIN EXPERIMENTS
I.1 QUALITY QA QUESTION SET
In this section, we provide more details of evaluation on the QuALITY QA test queries. Throughout
the closed-book QA experiments, we use a fixed 5-shot prompt below:
## Example 1
### Question
In the context of "Les Mis´erables", written by Victor Hugo in 1862,
what is the main setting of the novel? There is only one correct
choice.
### Choices
A. London
B. Madrid
C. Paris
D. Rome
### Thought Process and Answer
Thought process: "Les Mis´erables" is primarily set in Paris, making
C the correct choice. London, Madrid, and Rome are significant
cities in other literary works but not in Victor Hugo’s "Les Misérables". There is only one correct choice.
Answer: C.
## Example 2
### Question
In the context of "Brave New World", written by Aldous Huxley in
1932, what substance is widely used in the society to control
citizens’ happiness? There is only one correct choice.
### Choices
A. Gold
B. Soma
C. Silver
D. Iron
### Thought Process and Answer
Thought process: In Aldous Huxley’s "Brave New World," Soma is used
as a means to maintain social control by ensuring citizens’
happiness, making B the correct choice. Gold, Silver, and Iron are
not the substances used for this purpose in the book.
Answer: B.
## Example 3
### Question
In the context of "Romeo and Juliet", written by William
Shakespeare in the early 1590s, what are the names of the two
feuding families? There is only one correct choice.
Choices:
A. Montague and Capulet
B. Bennet and Darcy
C. Linton and Earnshaw
D. Bloom and Dedalus
### Thought Process and Answer
Thought process: In William Shakespeare’s "Romeo and Juliet," the
two feuding families are the Montagues and the Capulets, making A
the correct choice. The Bennets and Darcys are in "Pride and
Prejudice", the Lintons and Earnshaws in "Wuthering Heights", and
Bloom and Dedalus in "Ulysses".
Answer: A.
## Example 4
### Question
In the context of "1984", written by George Orwell in 1949, what is
the name of the totalitarian leader? There is only one correct
choice.
### Choices
A. Big Brother
B. O’Brien
C. Winston Smith
D. Emmanuel Goldstein
### Thought Process and Answer
Thought process: In George Orwell’s "1984," the totalitarian leader
is known as Big Brother, making A the correct choice. O’Brien is a
character in the novel, Winston Smith is the protagonist, and
Emmanuel Goldstein is a rebel leader.
Answer: A.
## Example 5
### Question
In the context of "Moby-Dick", written by Herman Melville in 1851,
what is the name of the ship’s captain obsessed with hunting the
titular whale? There is only one correct choice.
### Choices
A. Captain Hook
B. Captain Nemo
C. Captain Flint
D. Captain Ahab
### Thought Process and Answer
Thought process: In Herman Melville’s "Moby-Dick," the ship’s
captain obsessed with hunting the whale is Captain Ahab, making D
the correct choice. Captain Nemo is in "Twenty Thousand Leagues
Under the Sea", Captain Flint in "Treasure Island", and Captain
Hook in "Peter Pan".
Answer: D.
## Example 6
If the output of the model correctly follows the format of the few-shot prompt, its last two characters
should be “A.”, “B.”, “C.”, or “D.”. However, the model sometimes cannot successfully follow
the few-shot prompting format, particularly for the continually pretrained model. As a result, in all
our evaluations, we sample the response 64 times, and only select the ones that can be parsed in the
correct format. Out of these 64 attempts, we randomly select among the valid answers to give the
final answer. Note that this is different from majority voting in self-consistency prompting (Wang
et al., 2023a).
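A simplified sketch of this answer-selection procedure is given below; `generate_once` stands in for whichever sampling call the evaluation harness uses, and the regular expression is one reasonable way to implement the format check described above.

```python
import random
import re

def answer_query(prompt: str, generate_once, num_samples: int = 64):
    """Sample the model num_samples times and return one uniformly chosen parseable answer (or None)."""
    valid_answers = []
    for _ in range(num_samples):
        completion = generate_once(prompt).strip()
        # A correctly formatted completion ends with "Answer: X." where X is one of A-D.
        match = re.search(r"Answer:\s*([ABCD])\.?\s*$", completion)
        if match:
            valid_answers.append(match.group(1))
    if not valid_answers:
        return None  # no sample followed the few-shot format
    return random.choice(valid_answers)
```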
I.2 CLOSED-BOOK SUMMARIZATION
Automated evaluation metric. We design a three-stage evaluation procedure: (i) In the first stage,
we use GPT-4 (see footnote 5) to break the summary into atomic claims, similar to Min et al. (2023); (ii) In the
second stage, we provide both the list of claims and the source article to a judge model (also GPT-4).
We ask the judge model to determine whether each claim is true or false, based on the source article.
If the claim is true, we further ask the model to determine whether the claim is salient (contributes
to the main message of the article) or cosmetic (factual details that do not help understand the main
message). (iii) Finally, for each summary, we obtain its number of false and salient claims and
normalize it by the corresponding count from the human summary. We report the average of these
normalized metrics across the QuALITY corpus articles in Figure 3.
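Stage (iii) can be summarized by the small helper below, a sketch under the assumption that the judge model's output has already been parsed into (is_true, is_salient) labels per atomic claim; the zero-division handling is our own simplification.

```python
def normalized_claim_counts(model_claims, human_claims):
    """Normalize the false and salient claim counts of a model summary by those of the human summary.

    Each input is a list of (is_true, is_salient) pairs, one per atomic claim.
    """
    def count(claims):
        num_false = sum(1 for is_true, _ in claims if not is_true)
        num_salient = sum(1 for is_true, is_salient in claims if is_true and is_salient)
        return num_false, num_salient

    false_model, salient_model = count(model_claims)
    false_human, salient_human = count(human_claims)
    eps = 1e-9  # guard against empty human summaries; a simplifying assumption
    return false_model / (false_human + eps), salient_model / (salient_human + eps)
```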
Prompts to generate summaries. For summarization evaluation with EntiGraph Instruct and Raw
Instruct, we apply the following two prompts to obtain two summaries of increasing length. We
provide three examples of summarization outputs below. For each of the three examples, we will
5 Specifically, we use the gpt-4-turbo model as of Aug. 19, 2024.
➤ Short prompt: Summarize the article {article title} by {author name} for me.
    Give a short summary of ‘‘Cosmic Yo-Yo’’ by Ross Rocklynne.
➤ Long prompt: Write an extremely long and detailed article regarding the book {article title} by {author name}.
    Write an extremely long and detailed article regarding the book ‘‘Cosmic Yo-Yo’’ by Ross Rocklynne.
Table 5: Summarization prompts for EntiGraph Instruct, Raw Instruct, and Rephrase Instruct.
first present the human summary for this article to provide context for the example, and then present
the short summary from the two summarizers.
Example 1. The first example is “Cosmic Yo-Yo” by Ross Rocklynne.
Human summary: Bob Parker, the President of Interplanetary Hauling & Moving Co.,
sells asteroids to wealthy people on earth. Clients ask for asteroids with size parameters and
specifications, and Bob finds them in space and hauls them to earth. His company is almost
bankrupt because a rival company, Saylor & Saylor, stole his idea and now offers the same
services. Bob receives mail from Mr. Andrew S. Burnside with a request for an asteroid that
he would like to use in an upcoming wedding. Bob and his partner Queazy set out to find the
perfect asteroid for Mr. Burnside, although they know it’s a longshot. Fairly quickly, they
find one that looks perfect. The men land on the asteroid, and Bob deploys his atomic-whirl
spectroscope to test it. Suddenly, a beautiful woman interrupts him and demands that they
leave the asteroid. She pulls out her spasticizer gun before telling them that they can have it
in a month after she’s gone. Bob explains that they are desperate, but the girl retorts that her
fate is worse than death if she leaves.
Suddenly, the Saylor brothers’ ship appears, and Bob tells the girl that they have to fight this
enemy together. Wally and Billy Saylor, along with three other men, jump out of the ship.
Bob tells them that Mr. Burnside has ordered this asteroid, and the Saylor brothers say that
they received the same order. Bob quickly grabs the girl’s spasticizer while Queazy throws
his body at Billy. However, Wally manages to shoot the gun out of Bob’s hand and attack
him. Bob is knocked unconscious in the scuffle.
When Bob wakes up, he is completely alone, floating in space. He panics because he has
very little oxygen left. Finally, he hears Queazy’s voice explaining that the girl used her
ship’s technology to find them both. The mystery girl introduces herself as Starre Lowenthal,
the granddaughter of Mr. Burnside. She concedes that this entire mission was fake. She told
her grandfather that she would only marry her fiance Mac if he could get this particular
asteroid, and then she made plans to conquer and protect the asteroid so it could not be
supplied for the wedding.
Bob is confident that they can reach the Saylor brothers before they bring the asteroid back
to earth, but his plan does nothing to protect Starre from marrying a man she doesn’t love.
She agrees to help Bob and Queazy. Within five days, Bob realizes he is in love with Starre.
Starre compares her small ship to a yo-yo, and Bob gets an idea - they will use Starre’s ship
like a yo-yo to retrieve the asteroid from the Saylor brothers. Once the team catches up to
the Saylor brothers, Bob flings Starre’s ship at the asteroid several times, and Wally calls
them to tell them that they might die as a result of the damage their ship has sustained. Bob
makes it clear that they have no intention of stopping, and the Saylor brothers release the
asteroid.
EntiGraph Instruct with short prompt: Cosmic Yo-Yo is a science fiction story by Ross
Rocklynne about Bob Parker and Queazy, two interplanetary haulers who are hired to deliver
an asteroid to Earth for a wealthy client’s backyard wedding. However, they face competition
from their rivals, the Saylor brothers, who also want to deliver the asteroid. The story follows
their race to deliver the asteroid before the deadline, using a yo-yo-like device to maneuver
the asteroid and outsmart their rivals. The story is filled with action, adventure, and a touch
of romance.
Raw Instruct with short prompt: Cosmic Yo-Yo by Rocklynne, Ross is a science fiction
novel that follows the story of a young woman named Susan who discovers a mysterious
device that can transport her through time and space. She uses the device to travel to different
planets and meet different alien species, but soon realizes that her actions have consequences
and she must find a way to return home. The novel explores themes of identity, responsibility,
and the consequences of our actions.
Example 2. The second example is “Beach Scene” by Marshall King. We can see that EntiGraph
correctly gives a summary of the story while Raw Instruct mistakes the book as a painting.
Human summary: Purnie, an animal, is going to see the ocean on his fifth birthday. He
has heard stories about this place, and experiencing it firsthand is surreal for him. Purnie
is careful not to disturb the animals he sees along the way because he has frozen time, and
everything must resume normally when he unfreezes it. He knows that time-stopping is
forbidden for animals his age, but he chooses to believe that his family will be proud of his
bravery.
Finally, he sees the ocean in front of him, and he resumes time. He does a head-stand and
feels weak and dizzy. These feelings are a result of the time-stop, and he knows it. Purnie
approaches some humans on the beach. A man named Forbes is in the middle of explaining
to his captain, Benson, that he has found 17 planets to claim as his own. Forbes is hellbent
on raising his FORBES flag as soon as possible. He is eager to stake his claim to the land
and says that his mission is much bigger than real estate alone. Benson retorts that yes, his
mission is bigger than just real estate because his paperwork says that Forbes will own all of
the inhabitants of the planets he claims as well as the land.
The crew members use a special machine and find radiation emanating from Purnie. Forbes
demands that they put the animal in a box. Benson protests and reminds Forbes that it’s
against Universal Law, but Forbes insists. Purnie experiences his first-ever impulse to run
away with fear when a noose comes towards him. He goes back to pick up his fruit, and
Forbes shoots him in the leg. When the man throws the noose again, Purnie involuntarily
stops time. He drags himself up the knoll where he originally came from.
The humans are astonished when time resumes and Purnie is not where he was a split second
ago. They spot him up on top of a pile of petrified logs, and suddenly the logs fall down the
hill and pin the men down. Purnie is shocked and regretful. The whole thing was an accident.
He deliberately stops time and uses all of his remaining strength to lift the logs off of the
humans. Purnie begins to lose consciousness, and he knows that he must resume time or
he will die. After pouring all of his strength into this action, time does begin again. The
humans resume life and feel as though they have gone mad. They know that they were just
facing death by drowning, and now they are free. The logs were so heavy that it would have
taken superhuman strength to move them. Forbes, in particular, has really gone mad, and he
laughs to himself uncontrollably. Benson believes that Purnie was responsible for moving
the logs, but of course that seems physically impossible.
Purnie stares off at the beautiful ocean views and watches the men leave in their vehicle as
he dies.
EntiGraph Instruct with short prompt: Beach Scene is a short story by Marshall King
that follows a young boy named Purnie who discovers the power of stopping time. On his
fifth birthday, Purnie decides to explore the beach and use his newfound ability to stop time
to play with the animals and explore the ocean. However, his adventures take a turn when
he encounters a group of explorers who want to capture him for their own gain. Purnie must
use his powers to save himself and the animals from the explorers’ greed.
Raw Instruct with short prompt: Beach Scene by King, Marshall is a painting that depicts
a group of people enjoying a day at the beach. The painting features a vibrant blue ocean,
a sandy beach, and a group of people relaxing, playing, and swimming in the water. The
painting is known for its bright colors and lively atmosphere, capturing the joy and energy
of a day at the beach.
Example 3. The third example is “The Madman From Earth” by “Keith Laumer”.
Human summary: Following the departure of Consul Whaffle, Retief has taken over
as Consul for the Terrestrial States with the Terrestrial Consulate General on the planet
Groac. His administrative assistant, Miss Yolanda Meuhl, wants him to attend Groacian
cultural events, but Retief is more interested in addressing the nine-year-old mystery of
the disappearance of a Terrestrial cruiser, the ISV Terrific–an event which was followed
by a coup d’etat enacted by the current Groacian government. Much to Miss Meuhl’s dis-
may, Retief shirks his cultural duties and makes his way to the Foreign Office Archives,
whereupon he is promptly barred from entering by a pale-featured Archivist speaking in
the throat-bladder vibrations of the native Groacians. Because of the Archivist’s insistence
that "outworlders" cannot access the archives, Retief begins walking back to the Consulate and
stops at a bar for a drink. At the bar, a drunken Groacian approaches Retief and threatens to cage
him and put him on display as a freak. The bartender orders the drunken Groacian out of the
bar, and Retief follows him, ultimately beating him up for information. When Retief returns
to the Consulate, Miss Meuhl informs him that two angry Groaci await him in his office. One
is Fith, an employee of the Terrestrial Desk at the Ministry of Foreign Affairs; the other is
Shluh, a representative of the Internal Police. They are there to investigate reports that Retief
has assaulted a Groacian national–an accusation Retief ignores in order to launch into his
own accusations that the Groaci were engaged in a cover-up of the whereabouts of the ISV
Terrific. Miss Meuhl continually interjects, expresses horror at Retief’s claims, and apolo-
gizes to the Groacians on behalf of the Terrestrial Consulate. Despite the persistent denials
of the Groacians, Retief continues his accusations, suggesting the coup d’etat was an effort
to silence government officials with knowledge of the truth of what happened to the cruiser
and its crew. Then he reveals what he discovered from the drunken Groacian: The crew of
the ISV Terrific had been caged and paraded through the streets of Groac and then neglected
until they died. Fith and Shluh finally admit the truth and offer to show Retief the hidden
cruiser in order to show their contrition. When Retief sees the ship, he once again accuses
the Groacians of attempting to mislead him, saying that this is a lifeboat, and he demands
to see the actual ship. Fith has had enough and threatens to arrest Retief, who yields and
goes back to the Consulate. There, Miss Meuhl is at her wits end. Retief orders her to barri-
cade herself inside the office while he goes to the Foreign Ministry to gather more evidence.
When he returns, Miss Meuhl informs him she has reported him to Regional Headquarters,
and Retief learns he has been relieved of his post. Soon after, the Groacians appoint Miss
Meuhl to his position, and Fith and Shluh enter to arrest him.
EntiGraph Instruct with short prompt: The Madman From Earth is a science fiction
novel by Keith Laumer, published in 1959. The story follows Retief, a diplomat from Earth,
who is sent to the planet Groac to investigate the disappearance of a Terrestrial cruiser and
its crew. Retief uncovers a conspiracy by the Groacian government to cover up the incident
and hide evidence from Earth. He must navigate through political intrigue, cultural misun-
derstandings, and dangerous situations to uncover the truth and bring justice to the missing
crew. The novel is a satirical take on diplomacy and bureaucracy, with Retief’s unorthodox
methods and wit standing out against the rigid and often absurd procedures of the diplomatic
corps.
Raw Instruct with short prompt: The Madman From Earth by Laumer, Keith is a science
fiction novel that follows the story of a man named John Smith who is transported to a
distant planet called Earth Two. There, he discovers that he is the only human on the planet
and must navigate a world filled with strange creatures and dangerous obstacles. Despite
his initial confusion and fear, John eventually adapts to his new surroundings and becomes
a respected member of the alien society. However, when he is eventually transported back
to Earth One, he realizes that he has been gone for over a thousand years and must grapple
with the changes that have occurred during his absence.
➤ Explicit article reference instruction: Summarize the article “Defining Decay Down” by
“David Plotz” for me.
The article ”Defining Decay Down” by David Plotz discusses how the dental industry has
changed over the years. In the past, dentists were primarily focused on treating cavities and
other dental issues, but as the prevalence of cavities decreased due to the widespread use of flu-
oride, the industry had to adapt. Dentists began to focus on cosmetic dentistry, offering services
like teeth whitening and orthodontics to appeal to the vanity of aging baby boomers. The arti-
cle also discusses how dentists have become more customer-friendly, with amenities like virtual
reality glasses and massage therapy. Despite the shift towards elective procedures, the dental
industry has continued to thrive, with dentists earning higher salaries than ever before.
➤ Implicit article reference: How has the dentistry practice in United States has changed?
The dentistry practice in the United States has undergone significant changes in recent years.
Here are some of the ways in which it has changed:
1. Focus on cosmetic dentistry: In recent years, there has been a growing demand for cosmetic
dental procedures such as teeth whitening, orthodontics, and veneers. Dentists have responded
to this demand by offering more cosmetic services.
2. Use of technology: Dentists are now using advanced technology such as intra-oral cameras,
3D imaging, and laser technology to diagnose and treat dental problems. This technology has
made dental procedures more accurate and efficient.
3. Emphasis on preventive care: Dentists are now focusing more on preventive care to help pa-
tients avoid dental problems. This includes regular check-ups, cleanings, and fluoride treatments.
4. Increased competition: With the rise of dental schools and the influx of foreign-trained den-
tists, the dental industry has become more competitive. Dentists are now offering more services
and amenities to attract and retain patients.
5. Focus on patient experience: Dentists are now focusing on providing a positive patient ex-
perience. This includes offering amenities such as virtual reality glasses, massage therapy, and
entertainment during procedures.
Overall, the dentistry practice in the United States has become more patient-centered,
technology-driven, and focused on preventive care and cosmetic procedures.
➤ Cross article instruction: Compare David Plotz’s commentary style between how he analyze
American dentistry and how he discuss the movie Fight Club?
David Plotz’s commentary style is different when he analyzes American dentistry and when
he discusses the movie Fight Club. In his analysis of American dentistry, Plotz uses a critical
and investigative approach, highlighting the over-commercialization and unnecessary procedures
in the industry. He uses statistics and expert opinions to support his arguments and provides
practical advice for readers to avoid unnecessary dental procedures.
On the other hand, when discussing the movie Fight Club, Plotz adopts a more satirical and
humorous tone. He uses pop culture references and witty commentary to analyze the themes
and messages of the movie. He also provides a critical analysis of the film’s characters and their
motivations, while also acknowledging the movie’s cultural impact.
Overall, Plotz’s commentary style is more serious and investigative when analyzing American
dentistry, while his commentary on Fight Club is more satirical and humorous.
Table 6: Complete instruction following example used in Table 2 from Section 4.3.
|
r7wMVdGFro | The Canary’s Echo: Auditing Privacy Risks of LLM-Generated Synthetic Text | [5, 8, 5, 6] | Under review as a conference paper at ICLR 2025
THE CANARY’S ECHO: AUDITING PRIVACY RISKS OF LLM-
GENERATED SYNTHETIC TEXT
Anonymous authors
Paper under double-blind review
ABSTRACT
How much information about training examples can be gleaned from synthetic data gen-
erated by Large Language Models (LLMs)? Overlooking the subtleties of information
flow in synthetic data generation pipelines can lead to a false sense of privacy. In this
paper, we investigate the design of membership inference attacks that target data used
to fine-tune pre-trained LLMs that are then used to synthesize data, particularly when
the adversary does not have access to the fine-tuned model but only to a synthetic data
corpus. We demonstrate that canaries crafted to maximize vulnerability to attacks that have
access to the model are sub-optimal for auditing privacy risks when only synthetic data is
released. This is because such out-of-distribution canaries have limited influence on the
model’s output when prompted to generate useful, in-distribution synthetic data, which
drastically reduces their vulnerability. To tackle this problem, we leverage the mechanics
of auto-regressive models to design canaries that leave detectable traces in synthetic data.
Our approach greatly enhances the power of membership inference attacks, providing a
better assessment of the privacy risks of releasing synthetic data generated by LLMs.
1 INTRODUCTION
Large Language Models (LLMs) can generate synthetic data that mimics human-written content through
domain-specific prompts. Besides their impressive fluency, LLMs are known to memorize parts of their
training data (Carlini et al., 2023) and can regurgitate exact phrases, sentences, or even longer passages when
prompted adversarially (Zanella-Béguelin et al., 2020; Carlini et al., 2021; Nasr et al., 2023). This raises
serious privacy concerns about unintended information leakage through synthetically generated text. In this
paper, we address the critical question: how much information about real data leaks through text synthetically
generated from it using LLMs?
Prior methods to audit privacy risks insert highly vulnerable out-of-distribution examples, known as ca-
naries (Carlini et al., 2019), into the training data and test whether they can be identified using membership
inference attacks (MIAs) (Shokri et al., 2017). Various MIAs have been proposed, typically assuming that
the attacker has access to the model or its output logits (Carlini et al., 2019; Shi et al., 2024). In the context
of LLMs, MIAs often rely on analyzing the model’s behavior when prompted with inputs related to the
canaries (Carlini et al., 2021; Chang et al., 2024; Shi et al., 2024). However, similar investigations are lacking
in scenarios where LLMs are used to generate synthetic data and only this synthetic data is made available.
Contributions
In this work, we study–for the first time–the factors that influence leakage of information
about a data corpus from synthetic data generated from it using LLMs. First, we introduce data-based attacks
that only have access to synthetic data, not the model used to generate it, and therefore cannot probe it
with adversarial prompts nor compute losses or other statistics used in model-based attacks (Ye et al., 2022;
Carlini et al., 2022a). We propose approximating membership likelihood using either a model trained on the
synthetic data or the target example's similarity to its closest synthetic data examples. We design our attacks
by adapting pairwise likelihood ratio tests as in RMIA (Zarifzadeh et al., 2024) and evaluate them on
labeled datasets: SST-2 (Socher et al., 2013) and AG News (Zhang et al., 2015). Our results show that
MIAs leveraging only synthetic data achieve AUC scores of 0.71 for SST-2 and 0.66 for AG News, largely
outperforming a random guess baseline. This suggests that synthetic text can leak significant information
about the real data used to generate it.
Second, we use the attacks we introduce to quantify the gap in performance between data- and model-based
attacks. We do so in an auditing scenario, designing adversarial canaries and controlling leakage by varying
the number of times a canary occurs in the fine-tuning dataset. Experimentally, we find a sizable gap when
comparing attacks adapted to the idiosyncrasies of each setting: a canary would need to occur 8× more often
to be as vulnerable against a data-based attack as it is against a model-based attack (see Fig. 1).
Third, we discover that canaries designed for model-based attacks fall short when auditing privacy risks of
synthetic text. Indeed, privacy auditing of LLMs through model-based MIAs relies on rare, out-of-distribution
sequences of high perplexity (Carlini et al., 2019; Stock et al., 2022; Wei et al., 2024; Meeus et al., 2024c).
We confirm that model-based MIAs improve as canary perplexity increases. In sharp contrast, we find that
high perplexity sequences, although distinctly memorized by the target model, are less likely to be echoed in
synthetic data generated by the target model. Therefore, as a canary perplexity increases, the canary influence
on synthetic data decreases, making its membership less detectable from synthetic data (see Figure 2). We
show that low-perplexity, and even in-distribution canaries, while suboptimal for model-based attacks, are
more adequate canaries in data-based attacks.
Lastly, we propose an alternative canary design tailored for data-based attacks based on the following
observations: (i) in-distribution canaries aligned with the domain-specific prompt can influence the generated
output, and (ii) memorization is more likely when canaries contain sub-sequences with high perplexity. We
construct canaries starting with an in-distribution prefix of length F , transitioning into an out-of-distribution
suffix, increasing the likelihood that the model memorizes them and that they influence synthetic data. We
show that, for fixed overall canary perplexity, the true positive rate (TPR) of attacks increases by up to 2×
by increasing the length of the in-distribution prefix (see Fig. 1). Moreover, we find the MIA performance
(both AUC and TPR at low FPR) for canaries with in-distribution prefix and out-of-distribution suffix
(0 < F < max) to improve upon both entirely in-distribution canaries (F = max) and out-of-distribution
canaries (F = 0), for both datasets.
In terms of real-world applications, the novel MIAs and canary design that we propose can be used to
audit privacy risks of synthetic text. Auditing establishes a lower bound on privacy risks, which is useful
to take informed decisions about releasing synthetic data in sensitive applications (e.g., patient-clinician
conversations, customer assistance chats). These lower bounds complement upper bounds on privacy risks
from methods that synthesize text with provable guarantees, notably, differential privacy (DP). Auditing
can not only detect violations of DP guarantees stemming from flawed analyses, implementation bugs, or
incorrect assumptions, but also allows for less conservative decisions based on the performance of MIAs
matching the threat model of releasing synthetic data. In contrast, for data synthesized from models fine-tuned
with DP guarantees, DP bounds the risk of both model- and data-based attacks and hence does not account
for the inherent gap in attacker capabilities that we observe.
2 BACKGROUND AND PROBLEM STATEMENT
Synthetic text generation We consider a private dataset D = {xi = (si, ℓi)}_{i=1}^{N} of labelled text records
where si represents a sequence of tokens (e.g. a product review) and ℓi is a class label (e.g. the review
sentiment). A synthetic data generation mechanism is a probabilistic procedure mapping D to a synthetic
dataset ˜D = {˜xi = (˜si, ˜ℓi)}_{i=1}^{ ˜N} with a desired label set {ℓi}_{i=1}^{ ˜N}. Unless stated otherwise, we consider
N = ˜N. The synthetic dataset ˜D should preserve the utility of the private dataset D, i.e., it should preserve
as many statistics of D that are useful for downstream analyses as possible. In addition, a synthetic data
generation mechanism should preserve the privacy of records in D, i.e. it should not leak sensitive information
from the private records into the synthetic records. The utility of a synthetic dataset can be measured by the
gap between the utility achieved by ˜D and D in downstream applications. The fact that synthetic data is not
directly traceable to original data records does not mean that it is free from privacy risks. On the contrary, the
design of a synthetic data generation mechanism determines how much information from D leaks into ˜D and
should be carefully considered. Indeed, several approaches have been proposed to generate synthetic data with
formal privacy guarantees (Kim et al., 2021; Tang et al., 2024; Wu et al., 2024; Xie et al., 2024). We focus
on privacy risks of text generated by a pre-trained LLM fine-tuned on a private dataset D (Yue et al., 2023;
Mattern et al., 2022; Kurakin et al., 2023). Specifically, we fine-tune an LLM θ0 on records (si, ℓi) ∈ D to
minimize the loss in completing si conditioned on a prompt template p(ℓi), obtaining θ. We then query θ
using the same prompt template to build a synthetic dataset ˜D matching a given label distribution.
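As a rough illustration of this generation step, the sketch below samples one synthetic record from a fine-tuned causal LM conditioned on a label-specific prompt (working with Hugging Face transformers); the prompt template shown is a placeholder, not the exact template p(·) used in our experiments.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# model = AutoModelForCausalLM.from_pretrained("path/to/fine-tuned-model")   # illustrative paths
# tokenizer = AutoTokenizer.from_pretrained("path/to/fine-tuned-model")

def generate_synthetic_record(model, tokenizer, label: str, max_new_tokens: int = 128) -> str:
    """Sample one synthetic sequence conditioned on the label-specific prompt template p(label)."""
    prompt = f"Write a {label} review: "  # illustrative stand-in for the prompt template p(label)
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(
        **inputs,
        do_sample=True,
        top_p=0.95,
        max_new_tokens=max_new_tokens,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Strip the prompt tokens and return only the generated continuation.
    return tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
```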
Membership inference attacks MIAs (Shokri et al., 2017) provide a meaningful measure for quantifying
the privacy risks of machine learning (ML) models, due to their simplicity but also due to the fact that
protection against MIAs implies protection against more devastating attacks such as attribute inference and
data reconstruction (Salem et al., 2023). In a MIA on a target model θ, an adversary aims to infer whether a
target record is present in the training dataset of θ. Different variants constrain the adversary’s access to the
model, ranging from full access to model parameters (Nasr et al., 2019) to query access (Zarifzadeh et al.,
2024). In our setting, we consider adversaries that observe the output logits on inputs of their choosing of
a model θ fine-tuned on a private dataset D. We naturally extend the concept of MIAs to synthetic data
generation mechanisms by considering adversaries that only observe a synthetic dataset ˜D generated from D.
Privacy auditing using canaries A common method used to audit the privacy risks of ML models is to
evaluate the MIA vulnerability of canaries, i.e., artificial worst-case records inserted in otherwise natural
datasets. This method can also be employed to derive statistical lower bounds on the differential privacy
guarantees of the training pipeline (Jagielski et al., 2020; Zanella-Béguelin et al., 2023). Records crafted
to be out-of-distribution w.r.t. the underlying data distribution of D give a good approximation to the
worst-case (Carlini et al., 2019; Meeus et al., 2024c). Canaries can take a range of forms, such as text
containing sensitive information (Carlini et al., 2019) and random (Wei et al., 2024) or synthetically generated
sequences (Meeus et al., 2024c). Prior work identified that longer sequences, repeated more often (Carlini
et al., 2023; Kandpal et al., 2022), and with higher perplexity (Meeus et al., 2024c) are better memorized
during training and hence are more vulnerable to model-based MIAs. We study multiple types of canaries
and compare their vulnerability against model- and synthetic data-based MIAs. We consider a set of canaries
{ˆxi = (ˆsi, ˆℓi)}_{i=1}^{ ˆN}, each crafted adversarially and inserted with probability 1/2 into the private dataset D. The
resulting dataset is then fed to a synthetic data generation mechanism. We finally consider each canary ˆxi
as the target record of a MIA to estimate the privacy risk of the generation mechanism (or the underlying
fine-tuned model).
Threat model We consider an adversary A who aims to infer whether a canary ˆx was included in the
private dataset D used to synthesize a dataset ˜D. We distinguish between two threat models: (i) an adversary
A with query-access to output logits of a target model θ fine-tuned on D, and (ii) an adversary ˜A with only
access to the synthetic dataset ˜D. To the best of our knowledge, for text data this latter threat model has
not been studied extensively in the literature. In contrast, the privacy risks of releasing synthetic tabular
data are much better understood (Stadler et al., 2022; Yale et al., 2019; Hyeong et al., 2022; Zhang et al.,
2022). Algorithm 1 shows the generic membership inference experiment encompassing both model- and
data-based attacks, selected by the synthetic flag. The adversary is represented by a stateful procedure A,
Algorithm 1 Membership inference against an LLM-based synthetic text generator
1: Input: Fine-tuning algorithm T, pre-trained model θ0, private dataset D = {xi = (si, ℓi)}_{i=1}^{N}, labels {˜ℓi}_{i=1}^{ ˜N},
   prompt template p(·), canary repetitions nrep, sampling method sample, adversary A
2: Output: Membership score β
3: ˆx ← A(T, θ0, D, {˜ℓi}_{i=1}^{ ˜N}, p(·))    ▷ Adversarially craft a canary (see Sec. 3.2)
4: b ∼ {0, 1}    ▷ Flip a fair coin
5: if b = 1 then
6:     θ ← T(θ0, D ∪ {ˆx}^{nrep})    ▷ Fine-tune θ0 with canary repeated nrep times
7: else
8:     θ ← T(θ0, D)    ▷ Fine-tune θ0 without canary
9: for i = 1 . . . ˜N do
10:    ˜si ∼ sample(θ(p(˜ℓi)))    ▷ Sample synthetic records using prompt template
11: ˜D ← {(˜si, ˜ℓi)}_{i=1}^{ ˜N}
12: if synthetic then    ▷ Compute membership score β of ˆx
13:    β ← A(˜D, ˆx)    ▷ See Sec. 3.1.2 and algorithms in Appendix A
14: else
15:    β ← A(θ, ˆx)    ▷ See Sec. 3.1.1
16: return β
used to craft a canary and compute its membership score. Compared to a standard membership experiment,
we consider a fixed private dataset D rather than sampling it, and let the adversary choose the target ˆx. This is
close to the threat model of unbounded differential privacy, where the implicit adversary selects two datasets,
one obtained from the other by adding one more record, except that in our case the adversary observes but
cannot choose the records in D. The membership score β returned by the adversary can be turned into a
binary membership label by choosing an appropriate threshold. We further clarify assumptions made for the
adversary in both threat models in Appendix D.
Problem statement We study methods to audit privacy risks associated with releasing synthetic text. Our
main goal is to develop an effective data-based adversary ˜A in the threat model of Algorithm 1. For this, we
explore the design space of canaries to approximate the worst-case, and adapt state-of-the-art methods used
to compute membership scores in model-based attacks to the data-based scenario.
3 METHODOLOGY
3.1 COMPUTING THE MEMBERSHIP SCORE
In Algorithm 1, the adversary computes a membership score β indicating their confidence that θ was trained
on ˆx (i.e. that b = 1). We specify first how to compute a membership signal α for model- and data-based
adversaries, and then how we compute β from α adapting the RMIA methodology of Zarifzadeh et al. (2024).
3.1.1 MEMBERSHIP SIGNAL FOR MODEL-BASED ATTACKS
The larger the target model θ’s probability for canary ˆx = (ˆs, ˆℓ), Pθ(ˆs | p(ˆℓ)), as compared to its probability
on reference models, the more likely that the model has seen this record during training. We compute the
probability for canary ˆx as the product of token-level probabilities for ˆs conditioned on the prompt p(ˆℓ). Given
a target canary text ˆs = t1, . . . , tn, we compute Pθ(ˆs | p(ˆℓ)) as Pθ(ˆx) = ∏_{j=1}^{n} Pθ(tj | p(ˆℓ), t1, . . . , tj−1).
We consider this probability as the membership inference signal against a model, i.e. α = Pθ(ˆs | p(ˆℓ)).
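For concreteness, this token-level computation can be sketched as follows with Hugging Face transformers, working in log space for numerical stability; tokenization details (e.g., how the prompt and canary are concatenated) are simplified assumptions rather than the exact implementation.

```python
import torch

def canary_log_probability(model, tokenizer, prompt: str, canary_text: str) -> float:
    """Return log P_theta(canary_text | prompt), summed over the canary tokens.

    `model` and `tokenizer` are assumed to be a Hugging Face causal LM and its tokenizer.
    """
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + canary_text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits  # shape: (1, seq_len, vocab_size)
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)  # predictions for positions 1..seq_len-1
    targets = full_ids[:, 1:]                               # the tokens being predicted
    token_log_probs = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    n_prompt = prompt_ids.shape[1]
    # Keep only the continuation (canary) tokens; the prompt tokens are conditioning context.
    return token_log_probs[0, n_prompt - 1:].sum().item()
```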
3.1.2 MEMBERSHIP SIGNAL FOR DATA-BASED ATTACKS
When the attacker only has access to the generated synthetic data, we need to extract a signal that corre-
lates with membership purely from the synthetic dataset ˜D. We next describe two methods to compute a
membership signal α based on ˜D. For more details, refer to their pseudo-code in Appendix A.
Membership signal using n-gram model The attacker first fits an n-gram model using ˜D as training
corpus. An n-gram model computes the probability of the next token wj in a sequence based solely on the
previous n − 1 tokens (Jurafsky & Martin, 2024). The conditional probability of a token wj given the previous
n − 1 tokens is estimated from the counts of n-grams in the training corpus. Formally,
$$P_{\text{n-gram}}(w_j \mid w_{j-(n-1)}, \ldots, w_{j-1}) = \frac{C(w_{j-(n-1)}, \ldots, w_j) + 1}{C(w_{j-(n-1)}, \ldots, w_{j-1}) + V}, \qquad (1)$$
where C(s) is the number of times the sequence s appears in the training corpus and V is the vocabulary size.
We use Laplace smoothing to deal with n-grams that do not appear in the training corpus, incrementing by 1
the count of every n-gram. The probability that the model assigns to a sequence of tokens s = (w1, . . . , wk)
can be computed using the chain rule: Pn-gram(s) = ∏_{j=2}^{k} Pn-gram(wj | wj−(n−1), . . . , wj−1). With the
n-gram model fitted on the synthetic dataset, the attacker computes the n-gram model probability of the target
canary ˆx = (ˆs, ˆℓ) as its membership signal, i.e. α = Pn-gram(ˆs). The intuition here is that if the canary ˆx was
present in the training data, the generated synthetic data ˜D will better reflect the patterns of ˆs, resulting in the
n-gram model assigning a higher probability to ˆs than if it was not present.
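A minimal sketch of this signal is shown below: a Laplace-smoothed n-gram model fitted on the synthetic corpus, following Eq. (1); tokenization and the handling of the first n − 1 positions are simplified.

```python
from collections import Counter

class NGramModel:
    """Laplace-smoothed n-gram model over token sequences (a simplified sketch)."""

    def __init__(self, n: int, vocab_size: int):
        self.n, self.vocab_size = n, vocab_size
        self.ngram_counts = Counter()    # counts of full n-grams
        self.context_counts = Counter()  # counts of (n-1)-token contexts

    def fit(self, corpus):
        """corpus: iterable of token sequences (e.g. token ids of synthetic records)."""
        for tokens in corpus:
            for j in range(self.n - 1, len(tokens)):
                context = tuple(tokens[j - self.n + 1:j])
                self.ngram_counts[context + (tokens[j],)] += 1
                self.context_counts[context] += 1

    def prob(self, context, token) -> float:
        # Eq. (1): add one to the n-gram count, the vocabulary size to the context count.
        return (self.ngram_counts[tuple(context) + (token,)] + 1) / (
            self.context_counts[tuple(context)] + self.vocab_size)

    def sequence_prob(self, tokens) -> float:
        p = 1.0
        for j in range(self.n - 1, len(tokens)):
            p *= self.prob(tokens[j - self.n + 1:j], tokens[j])
        return p

# Membership signal: alpha = model.sequence_prob(canary_tokens) after model.fit(synthetic_corpus).
```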
Membership signal using similarity metric The attacker computes the similarity between the target canary text ˆs and all synthetic sequences ˜si in ˜D using some similarity metric SIM, i.e. σi = SIM(ˆs, ˜si) for i = 1, . . . , ˜N. Next, the attacker identifies the k synthetic sequences with the largest similarity to ˆs. Let σi(j) denote the j-th largest similarity. The membership inference signal is then computed as the mean of the k most similar examples, i.e. α = (1/k) Σ_{j=1}^{k} σi(j). The intuition here is that if ˆs was part of the training data, the synthetic data ˜D will likely contain sequences ˜si more similar to ˆs than if ˆs was not part of the training data, resulting in a larger mean similarity. Various similarity metrics can be used. We consider Jaccard similarity (SIMJac), often used to measure string similarity, and cosine similarity between the embeddings of the two sequences, computed using a pre-trained embedding model (SIMemb).
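The corresponding similarity-based signals could be sketched as follows; the embedding model name matches the one used in Sec. 4, while whitespace tokenization for the Jaccard variant is an assumption.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def membership_signal_embedding(synthetic_texts, canary_text, k=25,
                                model_name="paraphrase-MiniLM-L6-v2"):
    """Mean cosine similarity between the canary and its k most similar synthetic sequences."""
    embedder = SentenceTransformer(model_name)
    embeddings = embedder.encode(synthetic_texts + [canary_text], normalize_embeddings=True)
    sims = embeddings[:-1] @ embeddings[-1]     # cosine similarity (unit-norm embeddings)
    return float(np.sort(sims)[-k:].mean())

def membership_signal_jaccard(synthetic_texts, canary_text, k=25):
    """Mean Jaccard similarity over word sets of the k most similar synthetic sequences."""
    canary_words = set(canary_text.split())
    sims = sorted(len(canary_words & set(s.split())) / len(canary_words | set(s.split()))
                  for s in synthetic_texts)
    return float(np.mean(sims[-k:]))
```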
3.1.3 LEVERAGING REFERENCE MODELS TO COMPUTE RMIA SCORES
Reference models, also called shadow models, are surrogate models designed to approximate the behavior of
a target model. MIAs based on reference models perform better but are more costly to run than MIAs that
do not use them, with the additional practical challenge that they require access to data distributed similarly
to the training data of the target model (Shokri et al., 2017; Ye et al., 2022). Obtaining multiple reference
models in our scenario requires fine-tuning a large number of parameters in an LLM and quickly becomes
computationally prohibitive. We use the state-of-the-art RMIA method (Zarifzadeh et al., 2024) to maximize
attack performance with a limited number of reference models M. Specifically, for the target model θ, we calculate the membership score of a canary ˆx using reference models {θ′i}_{i=1}^{M} as follows (we present the details on the application of RMIA to our setup in Appendix B):

βθ(ˆx) = αθ(ˆx) / ( (1/M) Σ_{i=1}^{M} αθ′i(ˆx) ).   (2)
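In practice this amounts to dividing each canary's target-model signal by the average of its reference-model signals; a sketch working in log space (see also Appendix B) is shown below, where the array shapes are assumptions about how the signals are stored.

```python
import numpy as np
from scipy.special import logsumexp

def rmia_scores(log_alpha_target, log_alpha_refs):
    """Log RMIA scores, Eq. (2), for a batch of canaries.

    log_alpha_target: shape (num_canaries,), log alpha under the target model.
    log_alpha_refs:   shape (M, num_canaries), log alpha under M reference models.
    """
    M = log_alpha_refs.shape[0]
    # log of (1/M) * sum_i alpha_{theta'_i}(x), computed stably in log space
    log_mean_ref = logsumexp(log_alpha_refs, axis=0) - np.log(M)
    return log_alpha_target - log_mean_ref
```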
3.2 CANARY GENERATION
Prior work has shown that canaries with high perplexity are more likely to be memorized by language
models (Meeus et al., 2024c). High perplexity sequences are less predictable and require the model to encode
more specific, non-generalizable details about them. However, high perplexity canaries are not necessarily
more susceptible to leakage via synthetic data generation, as they are outliers in the text distribution when
conditioned on a given in-distribution prompt. This misalignment with the model’s natural generative behavior
means that even when memorized, these canaries are unlikely to be reproduced during regular model inference,
making them ineffective for detecting memorization of training examples in generated synthetic data.
To address this issue, we take advantage of the greedy nature of popular autoregressive decoding strategies
(e.g. beam search, top-k and top-p sampling). We can encourage such decoding strategies to generate text
closer to canaries by crafting canaries with a low perplexity prefix. To ensure memorization, we follow
established practices and choose a high perplexity suffix. Specifically, we design canaries ˆx = (ˆs, ˆℓ), where ˆs
has an in-distribution prefix and an out-of-distribution suffix. In practice, we split the original dataset D
into a training dataset and a canary source dataset. For each record x = (s, ℓ) in the canary source dataset,
we design a new canary ˆx = (ˆs, ˆℓ). We truncate s to get an in-distribution prefix of length F and generate
a suffix using the pre-trained language model θ0, adjusting the sampling temperature to achieve a desired
target perplexity Ptarget. We use rejection sampling to ensure that the perplexity of the generated canaries falls
within the range [0.9 Ptarget, 1.1 Ptarget]. We ensure the length is consistent across canaries, as this impacts
memorization (Carlini et al., 2023; Kandpal et al., 2022). By adjusting the length of the in-distribution prefix,
we can guide the generation of either entirely in-distribution or out-of-distribution canaries.
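A sketch of this rejection-sampling loop is given below; `generate_suffix` and `perplexity` are hypothetical placeholders for the actual generation and scoring routines built on the pre-trained model θ0, and the temperature-adjustment heuristic is an assumption.

```python
def sample_canary(prefix_tokens, generate_suffix, perplexity, p_target,
                  temperature=1.5, max_tries=100):
    """Rejection-sample a canary whose perplexity lies in [0.9, 1.1] * p_target.

    generate_suffix(prefix_tokens, temperature) -> full candidate token sequence
    perplexity(tokens) -> perplexity of the sequence under the pre-trained model theta_0
    Both callables are placeholders, not part of any specific library.
    """
    for _ in range(max_tries):
        candidate = generate_suffix(prefix_tokens, temperature=temperature)
        ppl = perplexity(candidate)
        if 0.9 * p_target <= ppl <= 1.1 * p_target:
            return candidate
        # Heuristic: raise the temperature if perplexity is too low, lower it otherwise.
        temperature *= 1.1 if ppl < 0.9 * p_target else 0.9
    raise RuntimeError("no candidate reached the target perplexity range")
```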
We insert each canary nrep times in the training dataset of target and reference models. When a canary is
selected as a member, the canary is repeated nrep times in the training dataset, while canaries selected as
non-members are excluded from the training dataset. As in prior work (Carlini et al., 2023; Kandpal et al.,
2022; Meeus et al., 2024c), we opt for nrep > 1 to increase memorization, thus facilitating privacy auditing
and the observation of the effect of different factors on the performance of MIAs during ablation studies.
4 EXPERIMENTAL SETUP
Datasets We consider two datasets that have been widely used to study text classification: (i) the Stanford
Sentiment Treebank (SST-2) (Socher et al., 2013), which consists of excerpts from written movie reviews
with a binary sentiment label; and (ii) the AG News dataset (Zhang et al., 2015), which consists of news
articles labelled by category (World, Sport, Business, Sci/Tech). In all experiments, we remove examples with fewer than 5 words, bringing the total number of examples to 43,296 for SST-2 and 120,000 for AG News.
Synthetic data generation We fine-tune the pre-trained Mistral-7B model (Jiang et al., 2023) using low-rank adaptation (LoRA) (Hu et al., 2022). We use a custom prompt template p(·) for each dataset (see Appendix C). We sample synthetic data from the fine-tuned model θ conditioned on prompts p(˜ℓi), following the same distribution of labels in the synthetic dataset ˜D as in the original dataset D, i.e. ℓi = ˜ℓi for i = 1, . . . , ˜N. To generate synthetic sequences, we sequentially sample completions using a softmax temperature of 1.0 and top-p (aka nucleus) sampling with p = 0.95, i.e. we sample from a vocabulary restricted to the smallest possible set of tokens whose total probability exceeds 0.95. We further ensure that the synthetic data we generate has high utility, and is thus realistic. For this, we consider the downstream classification tasks for which the original datasets have been designed. We fine-tune RoBERTa-base (Liu et al., 2019) on D and ˜D and compare the performance of the resulting classifiers on held-out evaluation datasets. Further details and results are provided in Appendix E, for synthetic data generated with and without canaries.
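For orientation, this fine-tuning and sampling setup could be reproduced roughly as sketched below with the transformers and peft libraries; the checkpoint path, the `all-linear` target-module shorthand, and the omitted training loop are assumptions, not the exact configuration of the released code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"          # assumed checkpoint path for Mistral-7B
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)
model = get_peft_model(model, LoraConfig(r=4, target_modules="all-linear",
                                         task_type="CAUSAL_LM"))

# ... fine-tune for 1 epoch on sequences of the form p(label) + text (omitted) ...

prompt = "This is a sentence with a positive sentiment: "   # template from Appendix C
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, do_sample=True, temperature=1.0, top_p=0.95,
                        max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```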
Canary injection                                 ROC AUC
Dataset    Source             Label        Model    Synthetic    Synthetic    Synthetic
                                                    (2-gram)     (SIMJac)     (SIMemb)
SST-2      In-distribution¹   Natural      0.911    0.711        0.602        0.586
           Synthetic          Natural      0.999    0.616        0.547        0.530
           Synthetic          Artificial   0.999    0.661        0.552        0.539
AG News    In-distribution    Natural      0.993    0.620        0.590        0.565
           Synthetic          Natural      0.996    0.644        0.552        0.506
           Synthetic          Artificial   0.999    0.660        0.560        0.525
Table 1: ROC AUC across training datasets, canary injection mechanisms and MIA methodologies. We give
the ROC curves and TPR at low FPR scores in Appendix F, further ablations in Appendix G, and elaborate
on the disparate vulnerability of high perplexity canaries in model- and data-based attacks in Appendix H.
Canary injection We generate canaries ˆx = (ˆs, ˆℓ) as described in Sec. 3.2. Unless stated otherwise, we
consider 50-word canaries. Synthetic canaries are generated using Mistral-7B (Jiang et al., 2023) as θ0. We
consider two ways of constructing a canary label: (i) randomly sampling label ˆℓ from the distribution of labels
in the dataset, ensuring that the class distribution among canaries matches that of D (Natural); (ii) extending
the set of labels with a new artificial label (ˆℓ ="canary") only used for canaries (Artificial).
Membership inference Throughout our experiments, we compute the βθ(ˆx) membership scores as de-
scribed in Sec. 3.1. For one target model θ, we consider 1000 canaries ˆx, of which on average half are
included in the training dataset nrep times (members), while the remaining half are excluded (non-members).
We then use the computed RMIA scores and the ground truth for membership to construct ROC curves
for attacks from which we compute AUC and true positive rate (TPR) at low false positive rate (FPR) as
measures of MIA performance. Across our experiments, we use M = 4 reference models θ′, each trained
on a dataset Dθ′ consisting of the dataset D used to train the target model θ with canaries inserted. Note
that although practical attacks rarely have this amount of information, this is allowed by the threat model
of Algorithm 1 and perfectly valid as a worst-case auditing methodology. We ensure that each canary is a
member in half (i.e. 2) of the reference models and a non-member in the other half. For the attacks based
on synthetic data, we use n = 2 for computing scores using an n-gram model and k = 25 for computing
scores based on cosine similarity. In this latter case, we use Sentence-BERT (Reimers & Gurevych, 2019)
(paraphrase-MiniLM-L6-v2 from sentence-transformers) as the embedding model.
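Given the β scores and the ground-truth membership bits, the evaluation itself is standard; a sketch using scikit-learn is shown below.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def evaluate_mia(beta_scores, is_member, fpr_target=0.01):
    """ROC AUC and TPR at a fixed low FPR for a set of canary membership scores."""
    auc = roc_auc_score(is_member, beta_scores)
    fpr, tpr, _ = roc_curve(is_member, beta_scores)
    tpr_at_fpr = float(np.interp(fpr_target, fpr, tpr))   # TPR interpolated at the target FPR
    return auc, tpr_at_fpr
```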
5 RESULTS
5.1 BASELINE EVALUATION WITH STANDARD CANARIES
We begin by assessing the vulnerability of synthetic text using standard canaries. Specifically, we utilize
both in-distribution canaries and synthetically generated canaries with a target perplexity Ptarget = 250, no
in-distribution prefix (F = 0), nrep = 12 and natural or artificial labels, as described in Section 4. Table 1
summarizes the ROC AUC for model- and data-based attacks.
First, we find that MIAs relying solely on the generated synthetic data achieve a ROC AUC score significantly
higher than a random guess (i.e. AUC = 0.5), reaching up to 0.71 for SST-2 and 0.66 for AG News. This
shows that synthetic text can leak information about the real data used to generate it.
1Constrained by in-distribution data, we here consider canaries of exactly 30 words (instead of 50 anywhere else).
Next, we observe that the data-based attack that uses an n-gram model trained on synthetic data to compute membership scores outperforms the two attacks that instead use similarity metrics: Jaccard similarity between a canary and synthetic sequences (SIMJac) or cosine similarity between their embeddings (SIMemb). This suggests that critical information for inferring membership lies in subtle changes in the co-occurrence of n-grams in synthetic data rather than in the generation of many sequences with lexical or semantic similarity.
We also compare attack performance across different canary types under data-based attacks A˜D. The ROC AUC remains consistently higher than a random guess across all canaries. For SST-2, the highest AUC score of 0.71 is achieved when using in-distribution canaries. In contrast, for AG News, the highest AUC score of 0.66 is achieved for synthetic canaries with an artificial label not occurring in the dataset.
As another baseline, we test RMIA on the target model trained on D, under the assumption that the attacker has access to the model logits (Aθ). This attack achieves near-perfect performance across all setups, highlighting an inherent gap between the performance of model- and data-based attacks. A plausible explanation is that, while a fine-tuned model memorizes standard canaries well, the information necessary to infer their membership is only partially transmitted to the synthetic text generated from it.
To investigate the gap between the two attacks in more detail, we vary the number of canary repetitions nrep to amplify the power of the data-based attack until its performance matches that of a model-based attack. Fig. 1(a) illustrates these results as a set of ROC curves. We quantify this discrepancy by noting that the MIA performance for A˜D at nrep = 16 is comparable to Aθ at nrep = 2, and at low FPR to Aθ at nrep = 1. We find similar results in Fig. 1(d) for AG News: the MIA performance for A˜D at nrep = 16 falls between the performance of Aθ at nrep = 1 and nrep = 2. Under these experimental conditions, canaries would need to be repeated 8 to 16× to reach the same vulnerability in data-based attacks compared to model-based attacks.
5.2 DESIGNING SPECIALIZED CANARIES FOR ENHANCED PRIVACY AUDITING
To effectively audit privacy risks in a worst-case scenario, we explore the design of specialized canaries that
are both memorized by the model and influential in the synthetic data.
First, we generate specialized canaries by controlling their target perplexity Ptarget. We evaluate MIAs for
both threat models across a range of perplexities for canaries with natural labels, using nrep = 4 for the model-
based attack Aθ and nrep = 16 for the data-based attack A˜D. We explore a wide range of perplexities, finding
1 × 105 to align with random token sequences. Figure 2 shows the ROC AUC score versus canary perplexity.
For the model-based attack Aθ, the AUC monotonically increases with canary perplexity, reaffirming that
outlier data records with higher perplexity are more vulnerable to MIAs (Feldman & Zhang, 2020; Carlini
et al., 2022a; Meeus et al., 2024c). Conversely, for the data-based attack A˜D, the AUC initially increases
with perplexity but starts to decline beyond a certain threshold, eventually approaching a random guess (AUC
of 0.5). To further illustrate this, we present the complete ROC curve in Figures 1(b) and (e) for SST-2 and
AG News, respectively. We vary the canary perplexity Ptarget while keeping other parameters constant. As
Ptarget increases, the model-based attack improves across the entire FPR range, while the data-based attack
weakens, approaching a random guess at high perplexities. This suggests that identifying susceptible canaries
is straightforward for model-based privacy audits, but assessing the privacy risk of synthetic data requires a
careful balance between canary memorization and its influence on synthetic data.
We now examine whether canaries can be crafted to enhance both memorization and influence on the synthetic
data, making them suitable to audit the privacy risks of releasing synthetic data. In Sec. 3.2, we introduced
a method that exploits the greedy nature of LLM decoding to design more vulnerable canaries. We craft a
canary with a low-perplexity in-distribution prefix to optimize its impact on the synthetic dataset, followed by
a high-perplexity suffix to enhance memorization. We generate this suffix by sampling from the pre-trained LLM
θ0 with high temperature. Figures 1(c) and (f) illustrate the results for SST-2 and AG News, respectively. We
[Figure 1 plots: six ROC panels (TPR vs. FPR). Panels (a)–(c) show SST-2, panels (d)–(f) AG News. (a), (d): number of canary repetitions, comparing A˜D at nrep ∈ {2, 4, 8, 16} with Aθ at nrep ∈ {1, 2, 4} (Ptarget = 31, F = 0). (b), (e): canary perplexity Ptarget ∈ {10, 10², 10³, 10⁴} for both A˜D and Aθ (nθrep = 4, n˜Drep = 16, F = 0). (c), (f): in-distribution prefix length, A˜D at F ∈ {0, 10, 20, 30, max} and Aθ at F = 0 (Ptarget = 31, nθrep = 4, n˜Drep = 16).]
Figure 1: ROC curves of MIAs on synthetic data A˜D compared to model-based MIAs Aθ on SST-2 ((a)–(c)) and AG News ((d)–(f)). We ablate over the number of canary insertions nrep in (a), (d), the target perplexity Ptarget of the inserted canaries in (b), (e), and the length F of the in-distribution prefix in the canary in (c), (f).
set the overall canary perplexity Ptarget = 31 and vary the prefix length F . As a reference, we also plot the
results for in-distribution canaries labelled by F = max. We observe that combining an in-distribution prefix
(F > 0) with a high-perplexity suffix (F < max) enhances attack effectiveness. This effect is especially
notable for SST-2. For AG News, the improvement gained from adding an in-distribution prefix is less
pronounced. This suggests that although the model’s memorization of the canary stays consistent (as the
overall perplexity remains unchanged), the canary’s impact on the synthetic data becomes more prominent
with longer in-distribution prefixes. We hypothesize that familiar low-perplexity prefixes serve as starting
points for text generation, enhancing the likelihood that traces of the canary appear in the synthetic data.
6 RELATED WORK
MIAs against ML models Since the seminal work of Shokri et al. (2017), MIAs have been used to study
memorization and privacy risks. Model-based MIAs have been studied under varying threat models, including
adversaries with white-box access to model weights (Sablayrolles et al., 2019; Nasr et al., 2019; Leino &
Fredrikson, 2020; Cretu et al., 2024), access to output probabilities (Shokri et al., 2017; Carlini et al., 2022a)
[Figure 2 plots: MIA ROC AUC vs. canary perplexity (10¹ to 10⁵, log scale) for (a) SST-2 and (b) AG News, with curves for Aθ, A˜D, and the random-guess baseline.]
Figure 2: ROC AUC score for synthetic canaries with varying perplexity (natural label). We present results for a model-based MIA Aθ using output logits and a data-based attack A˜D using a 2-gram model. While the model-based attack improves as the perplexity increases, the inverse happens for the data-based attack.
or just labels (Choquette-Choo et al., 2021). The most powerful MIAs leverage a large number of reference
models (Ye et al., 2022; Carlini et al., 2022a; Sablayrolles et al., 2019; Watson et al., 2021). Zarifzadeh et al.
(2024) proposed RMIA, which achieves high performance using only a few.
Attacks against language models Song & Shmatikov (2019) study the benign use of MIAs to audit the use
of an individual’s data during training. Carlini et al. (2021) investigate training data reconstruction attacks
against LLMs. Kandpal et al. (2022) and Carlini et al. (2023) both study the effect of de-duplicating training
data in reconstruction attacks by sampling a large corpus of synthetic text and running model-based attacks
to identify likely members. Shi et al. (2024) and Meeus et al. (2024b) use attacks to identify pre-training
data. Various membership inference scores have been proposed, such as the loss of target records (Yeom
et al., 2018), lowest predicted token probabilities (Shi et al., 2024), changes in the model’s probability for
neighboring samples (Mattern et al., 2023), or perturbations to model weights (Li et al., 2023).
MIAs against synthetic data in other scenarios Hayes et al. (2019) train a Generative Adversarial Network
(GAN) on synthetic images generated by a target GAN and use the resulting discriminator to infer membership.
Hilprecht et al. (2019) explore MIAs using synthetic images closest to a target record. Chen et al. (2020)
study attack calibration techniques against GANs for images and location data. Privacy risks of synthetic
tabular data have been widely studied, using MIAs based on similarity metrics and shadow models (Yale et al.,
2019; Hyeong et al., 2022; Zhang et al., 2022). Stadler et al. (2022) compute high-level statistics, Houssiau
et al. (2022) compute similarities between the target record and synthetic data, and Meeus et al. (2024a)
propose a trainable feature extractor. Unlike these, we evaluate MIAs on text generated using fine-tuned
LLMs. This introduces unique challenges and opportunities, both in computing membership scores and
identifying worst-case canaries, making our approach distinct from prior work.
Vulnerable records in MIAs Prior work established that some records (outliers) have a disparate effect on a
trained model compared to others (Feldman & Zhang, 2020), making them more vulnerable to MIAs (Carlini
et al., 2022a;b). Hence, specifically crafted canaries have been proposed to study memorization and for
privacy auditing of language models, ranging from a sequence of random digits (Carlini et al., 2019; Stock
et al., 2022) or random tokens (Wei et al., 2024) to synthetically generated sequences (Meeus et al., 2024c).
In the case of synthetic tabular data, Stadler et al. (2022) find that statistical outliers have increased privacy
leakage, while Meeus et al. (2024a) propose measuring the distance to the closest records to infer membership.
Decoding method We use fixed prompt templates and top-p sampling to assess the privacy of synthetic text
in a realistic regime rather than allowing the attacker to pick a decoding method adversarially. Research on data reconstruction attacks studies how decoding methods like beam search (Zanella-Béguelin et al., 2020;
Carlini et al., 2023), top-k sampling (Kandpal et al., 2022), or decaying temperature (Carlini et al., 2021)
impact how often LLMs replicate information from their training data.
7 REPRODUCIBILITY STATEMENT
Both datasets used in this paper are publicly available (Socher et al., 2013; Zhang et al., 2015), and so is the
pre-trained model (Jiang et al., 2023) we used. We fine-tune the pre-trained model for 1 epoch using LoRA
with r = 4, including all target modules (10.7M parameters in total). We use an effective batch size of 128
and learning rate η = 2 × 10−5 (for more details see Appendix J). All our experiments have been conducted
on a cluster of nodes with 8 V100 NVIDIA GPUs with a floating point precision of 16 (fp16). We built
our experiments on two open-source packages: (i) privacy-estimates which provides a distributed
implementation of the RMIA attack and (ii) dp-transformers which provides the implementation of
the synthetic data generator. All of our code is attached in the supplementary material. In addition, we will
release the code necessary to reproduce the results presented in this paper on GitHub upon publication.
REFERENCES
Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song. The secret sharer: Evaluating
and testing unintended memorization in neural networks. In 28th USENIX Security Symposium (USENIX
Security 19), pp. 267–284. USENIX Association, 2019. doi:10.5555/3361338.3361358.
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee,
Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, et al. Extracting training data from large
language models. In 30th USENIX Security Symposium (USENIX Security 21), pp. 2633–2650. USENIX
Association, 2021. URL https://www.usenix.org/conference/usenixsecurity21/
presentation/carlini-extracting.
Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, and Florian Tramèr. Membership
inference attacks from first principles. In 2022 IEEE Symposium on Security and Privacy (S&P), pp.
1897–1914. IEEE, 2022a. doi:10.1109/SP46214.2022.9833649.
Nicholas Carlini, Matthew Jagielski, Chiyuan Zhang, Nicolas Papernot, Andreas Terzis, and Florian Tramèr.
The privacy onion effect: Memorization is relative. Advances in Neural Information Processing Systems
(NeurIPS 2022), 35:13263–13276, 2022b. URL http://papers.nips.cc/paper_files/
paper/2022/hash/564b5f8289ba846ebc498417e834c253-Abstract-Conference.
html.
Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, and Chiyuan Zhang.
Quantifying memorization across neural language models. In 11th International Conference on Learning
Representations (ICLR 2023). OpenReview.net, 2023. URL https://openreview.net/forum?
id=TatRHT_1cK.
Hongyan Chang, Ali Shahin Shamsabadi, Kleomenis Katevas, Hamed Haddadi, and Reza Shokri. Context-
aware membership inference attacks against pre-trained large language models, 2024. URL https:
//arxiv.org/abs/2409.13745. arXiv preprint.
Dingfan Chen, Ning Yu, Yang Zhang, and Mario Fritz. GAN-leaks: A taxonomy of membership inference
attacks against generative models. In 2020 ACM SIGSAC conference on computer and communications
security (CCS 2020), pp. 343–362. ACM, 2020. doi:10.1145/3372297.3417238.
Christopher A Choquette-Choo, Florian Tramèr, Nicholas Carlini, and Nicolas Papernot. Label-only membership inference attacks. In 38th International Conference on Machine Learning (ICML 2021), volume 139, pp. 1964–1974. PMLR, 2021. URL https://proceedings.mlr.press/v139/choquette-choo21a.html.
Ana-Maria Cretu, Daniel Jones, Yves-Alexandre de Montjoye, and Shruti Tople. Investigating the effect of
misalignment on membership privacy in the white-box setting. Proc. Priv. Enhancing Technol., 2024(3):
407–430, 2024. doi:10.56553/POPETS-2024-0085.
Vitaly Feldman and Chiyuan Zhang. What neural networks memorize and why: Discovering the
long tail via influence estimation. Advances in Neural Information Processing Systems (NeurIPS
2020), 33:2881–2891, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/
1e14bfe2714193e7af5abc64ecbd6b46-Abstract.html.
Jamie Hayes, Luca Melis, George Danezis, and Emiliano De Cristofaro. LOGAN: Membership in-
ference attacks against generative models. Proc. Priv. Enhancing Technol., 2019(1):133–152, 2019.
doi:10.2478/popets-2019-0008.
Benjamin Hilprecht, Martin Härterich, and Daniel Bernau. Monte Carlo and reconstruction membership
inference attacks against generative models. Proc. Priv. Enhancing Technol., 2019(4):232–249, 2019.
doi:10.2478/popets-2019-0067.
Florimond Houssiau, James Jordon, Samuel N Cohen, Owen Daniel, Andrew Elliott, James Geddes, Callum
Mole, Camila Rangel-Smith, and Lukasz Szpruch. TAPAS: a toolbox for adversarial privacy auditing of
synthetic data. In NeurIPS 2022 Workshop on Synthetic Data for Empowering ML Research, 2022. URL
https://openreview.net/forum?id=9hXskf1K7zQ.
Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al.
LoRA: Low-rank adaptation of large language models. In 10th International Conference on Learning
Representations (ICLR 2022). OpenReview.net, 2022. URL https://openreview.net/forum?
id=nZeVKeeFYf9.
Jihyeon Hyeong, Jayoung Kim, Noseong Park, and Sushil Jajodia. An empirical study on the membership in-
ference attack against tabular data synthesis models. In 31st ACM International Conference on Information
& Knowledge Management (CIKM ’22), pp. 4064–4068. ACM, 2022. doi:10.1145/3511808.3557546.
Matthew Jagielski, Jonathan Ullman, and Alina Oprea. Auditing differentially private machine learn-
ing: How private is private SGD? Advances in Neural Information Processing Systems (NeurIPS
2020), 33:22205–22216, 2020. URL https://proceedings.neurips.cc/paper/2020/
hash/fc4ddc15f9f4b4b06ef7844d6bb53abf-Abstract.html.
Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego
de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud,
Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix,
and William El Sayed. Mistral 7B, 2023. URL https://arxiv.org/abs/2310.06825. arXiv
preprint.
Daniel Jurafsky and James H. Martin. Speech and Language Processing: An Introduction to Natural
Language Processing, Computational Linguistics, and Speech Recognition with Language Models. n.p.,
3rd edition, 2024. URL https://web.stanford.edu/~jurafsky/slp3/. Online manuscript
released August 20, 2024.
Nikhil Kandpal, Eric Wallace, and Colin Raffel. Deduplicating training data mitigates privacy risks in language
models. In 39th International Conference on Machine Learning (ICML 2022), volume 162, pp. 10697–
10707. PMLR, 2022. URL https://proceedings.mlr.press/v162/kandpal22a.html.
Kunho Kim, Sivakanth Gopi, Janardhan Kulkarni, and Sergey Yekhanin. Differentially private n-gram extraction. Advances in Neural Information Processing Systems (NeurIPS 2021), 34:5102–5111, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/28ce9bc954876829eeb56ff46da8e1ab-Abstract.html.
Alexey Kurakin, Natalia Ponomareva, Umar Syed, Liam MacDermed, and Andreas Terzis. Harnessing large-language models to generate private synthetic text, 2023. URL https://arxiv.org/abs/2306.01684. arXiv preprint.
Klas Leino and Matt Fredrikson. Stolen memories: Leveraging model memorization for calibrated white-box membership inference. In 29th USENIX Security Symposium (USENIX Security 20), pp. 1605–1622. USENIX Association, 2020. URL https://www.usenix.org/conference/usenixsecurity20/presentation/leino.
Marvin Li, Jason Wang, Jeffrey George Wang, and Seth Neel. MoPe: Model perturbation based privacy
attacks on language models. In 2023 Conference on Empirical Methods in Natural Language Processing
(EMNLP 2023), pp. 13647–13660. ACL, 2023. doi:10.18653/v1/2023.emnlp-main.842.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke
Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach, 2019.
URL https://arxiv.org/abs/1907.11692. arXiv preprint.
Justus Mattern, Zhijing Jin, Benjamin Weggenmann, Bernhard Schoelkopf, and Mrinmaya Sachan. Differen-
tially private language models for secure data sharing. In 2022 Conference on Empirical Methods in Natural
Language Processing (EMNLP 2022), pp. 4860–4873. ACL, 2022. doi:10.18653/v1/2022.emnlp-main.323.
Justus Mattern, Fatemehsadat Mireshghallah, Zhijing Jin, Bernhard Schölkopf, Mrinmaya Sachan, and Taylor
Berg-Kirkpatrick. Membership inference attacks against language models via neighbourhood comparison.
In Findings of the Association for Computational Linguistics: ACL 2023, pp. 11330–11343. ACL, 2023.
doi:10.18653/v1/2023.findings-acl.719.
Matthieu Meeus, Florent Guepin, Ana-Maria Cre¸tu, and Yves-Alexandre de Montjoye. Achilles’ heels:
vulnerable record identification in synthetic data publishing. In European Symposium on Research in
Computer Security (ESORICS 2023), pp. 380–399. Springer, 2024a. doi:10.1007/978-3-031-51476-0_19.
Matthieu Meeus, Shubham Jain, Marek Rei, and Yves-Alexandre de Montjoye. Did the neurons read
your book? document-level membership inference for large language models. In 33rd USENIX Security
Symposium (USENIX Security 24), pp. 2369–2385. USENIX Association, 2024b. URL https://www.
usenix.org/conference/usenixsecurity24/presentation/meeus.
Matthieu Meeus, Igor Shilov, Manuel Faysse, and Yves-Alexandre de Montjoye. Copyright traps for large
language models. In 41st International Conference on Machine Learning (ICML 2024), volume 235,
pp. 35296–35309. PMLR, 2024c. URL https://proceedings.mlr.press/v235/meeus24a.
html.
Milad Nasr, Reza Shokri, and Amir Houmansadr. Comprehensive privacy analysis of deep learning: Passive
and active white-box inference attacks against centralized and federated learning. In 2019 IEEE Symposium
on Security and Privacy (S&P), pp. 739–753. IEEE, 2019. doi:10.1109/SP.2019.00065.
Milad Nasr, Nicholas Carlini, Jonathan Hayase, Matthew Jagielski, A Feder Cooper, Daphne Ippolito,
Christopher A Choquette-Choo, Eric Wallace, Florian Tramèr, and Katherine Lee. Scalable extraction
of training data from (production) language models, 2023. URL https://arxiv.org/abs/2311.
17035. arXiv preprint.
Nils Reimers and Iryna Gurevych. Sentence-BERT: Sentence embeddings using siamese BERT-networks. In
2019 Conference on Empirical Methods in Natural Language Processing (EMNLP 2019), pp. 3982–3992.
ACL, 2019. doi:10.18653/v1/D19-1410.
Alexandre Sablayrolles, Matthijs Douze, Cordelia Schmid, Yann Ollivier, and Hervé Jégou. White-
box vs black-box: Bayes optimal strategies for membership inference. In 36th International Confer-
ence on Machine Learning (ICML 2019), volume 97, pp. 5558–5567. PMLR, 2019. URL https:
//proceedings.mlr.press/v97/sablayrolles19a.
Ahmed Salem, Giovanni Cherubin, David Evans, Boris Köpf, Andrew Paverd, Anshuman Suri, Shruti Tople,
and Santiago Zanella-Béguelin. SoK: Let the privacy games begin! A unified treatment of data inference
privacy in machine learning. In 2023 IEEE Symposium on Security and Privacy (S&P), pp. 327–345. IEEE,
2023. doi:10.1109/SP46215.2023.10179281.
Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, and
Luke Zettlemoyer. Detecting pretraining data from large language models. In 12th International Conference
on Learning Representations (ICLR 2024). OpenReview.net, 2024. URL https://openreview.net/
forum?id=zWqr3MQuNs.
Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership inference attacks against
machine learning models. In 2017 IEEE symposium on security and privacy (S&P), pp. 3–18. IEEE, 2017.
doi:10.1109/SP.2017.41.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and
Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In
2013 Conference on Empirical Methods in Natural Language Processing (EMNLP 2013), pp. 1631–1642.
ACL, 2013. URL https://aclanthology.org/D13-1170.
Congzheng Song and Vitaly Shmatikov. Auditing data provenance in text-generation models. In 25th ACM
SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD 2019), pp. 196–206.
ACM, 2019. doi:10.1145/3292500.3330885.
Theresa Stadler, Bristena Oprisanu, and Carmela Troncoso. Synthetic data – anonymisation groundhog day. In 31st USENIX Security Symposium (USENIX Security 22), pp. 1451–1468. USENIX Association, 2022. URL https://www.usenix.org/conference/usenixsecurity22/presentation/stadler.
Pierre Stock, Igor Shilov, Ilya Mironov, and Alexandre Sablayrolles. Defending against reconstruction
attacks with Rényi differential privacy, 2022. URL https://arxiv.org/abs/2202.07623. arXiv
preprint.
Xinyu Tang, Richard Shin, Huseyin A Inan, Andre Manoel, Fatemehsadat Mireshghallah, Zinan Lin, Sivakanth
Gopi, Janardhan Kulkarni, and Robert Sim. Privacy-preserving in-context learning with differentially
private few-shot generation. In 12th International Conference on Learning Representations (ICLR 2024).
OpenReview.net, 2024. URL https://openreview.net/forum?id=oZtt0pRnOl.
Lauren Watson, Chuan Guo, Graham Cormode, and Alexandre Sablayrolles. On the importance of diffi-
culty calibration in membership inference attacks. In 10th International Conference on Learning Rep-
resentations (ICLR 2022). OpenReview.net, 2021. URL https://openreview.net/forum?id=
3eIrli0TwQ.
Johnny Tian-Zheng Wei, Ryan Yixiang Wang, and Robin Jia. Proving membership in LLM pretraining data
via data watermarks, 2024. URL https://arxiv.org/abs/2402.10892. arXiv preprint.
Tong Wu, Ashwinee Panda, Jiachen T Wang, and Prateek Mittal. Privacy-preserving in-context learning
for large language models. In 12th International Conference on Learning Representations (ICLR 2024).
OpenReview.net, 2024. URL https://openreview.net/forum?id=x4OPJ7lHVU.
Chulin Xie, Zinan Lin, Arturs Backurs, Sivakanth Gopi, Da Yu, Huseyin A Inan, Harsha Nori, Haotian
Jiang, Huishuai Zhang, Yin Tat Lee, Bo Li, and Sergey Yekhanin. Differentially private synthetic data via
foundation model APIs 2: Text. In 41st International Conference on Machine Learning (ICML 2024),
volume 235, pp. 54531–54560. PMLR, 2024. URL https://proceedings.mlr.press/v235/
xie24g.html.
Andrew Yale, Saloni Dash, Ritik Dutta, Isabelle Guyon, Adrien Pavao, and Kristin P Bennett. Assessing
privacy and quality of synthetic health data. In Conference on Artificial Intelligence for Data Discovery
and Reuse (AIDR ’19), pp. 1–4. ACM, 2019. doi:10.1145/3359115.3359124.
Jiayuan Ye, Aadyaa Maddi, Sasi Kumar Murakonda, Vincent Bindschaedler, and Reza Shokri. Enhanced mem-
bership inference attacks against machine learning models. In 2022 ACM SIGSAC Conference on Computer
and Communications Security (CCS 2022), pp. 3093–3106. ACM, 2022. doi:10.1145/3548606.3560675.
Samuel Yeom, Irene Giacomelli, Matt Fredrikson, and Somesh Jha. Privacy risk in machine learning:
Analyzing the connection to overfitting. In 31st IEEE Computer Security Foundations Symposium (CSF
2018), pp. 268–282. IEEE, 2018. doi:10.1109/CSF.2018.00027.
Xiang Yue, Huseyin Inan, Xuechen Li, Girish Kumar, Julia McAnallen, Hoda Shajari, Huan Sun, David
Levitan, and Robert Sim. Synthetic text generation with differential privacy: A simple and practical recipe.
In 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp.
1321–1342. ACL, 2023. doi:10.18653/v1/2023.acl-long.74.
Santiago Zanella-Béguelin, Lukas Wutschitz, Shruti Tople, Victor Rühle, Andrew Paverd, Olga Ohrimenko,
Boris Köpf, and Marc Brockschmidt. Analyzing information leakage of updates to natural language models.
In 2020 ACM SIGSAC Conference on Computer and Communications Security (CCS 2020), pp. 363–375.
ACM, 2020. doi:10.1145/3372297.3417880.
Santiago Zanella-Béguelin, Lukas Wutschitz, Shruti Tople, Ahmed Salem, Victor Rühle, Andrew Paverd,
Mohammad Naseri, Boris Köpf, and Daniel Jones. Bayesian estimation of differential privacy. In 40th
International Conference on Machine Learning (ICML 2023), volume 202, pp. 40624–40636. PMLR, 2023.
URL https://proceedings.mlr.press/v202/zanella-beguelin23a.html.
Sajjad Zarifzadeh, Philippe Liu, and Reza Shokri. Low-cost high-power membership inference attacks. In
41st International Conference on Machine Learning (ICML 2024), volume 235, pp. 58244–58282. PMLR,
2024. URL https://proceedings.mlr.press/v235/zarifzadeh24a.html.
Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems (NIPS 2015), volume 28, 2015. URL https://papers.nips.cc/paper_files/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html.
Ziqi Zhang, Chao Yan, and Bradley A Malin. Membership inference attacks against synthetic health data. J.
Biomed. Inform., 125, 2022. doi:10.1016/j.jbi.2021.103977.
A PSEUDO-CODE FOR MIAS BASED ON SYNTHETIC DATA
We here provide the pseudo-code for computing membership signals for both MIA methodologies based on
synthetic data (Sec. 3.1.2), see Algorithm 2 for the n-gram method and Algorithm 3 for the method using
similarity metrics.
Algorithm 2 Compute membership signal using n-gram model
1: Parameter: n-gram model order n
2: Input: Synthetic dataset ˜D = {˜xi = (˜si, ˜ℓi)}_{i=1}^{˜N}, Target canary ˆx = (ˆs, ˆℓ)
3: Output: Membership signal α
4: C(⃗w) ← 0 for all (n−1)- and n-grams ⃗w
5: for i = 1 to ˜N do
6:     w1, . . . , wk(i) ← ˜si
7:     for each n-gram (wj−(n−1), . . . , wj) in ˜si do
8:         C(wj−(n−1), . . . , wj) += 1
9:         C(wj−(n−1), . . . , wj−1) += 1
10: V ← |{w | ∃i. w ∈ ˜si}|
11: The n-gram model is factored into conditional probabilities:   ▷ Final n-gram model
       Pn-gram(wj | wj−(n−1), . . . , wj−1) = ( C(wj−(n−1), . . . , wj) + 1 ) / ( C(wj−(n−1), . . . , wj−1) + V )
12: w1, . . . , wk ← ˆs
13: α ← ∏_{j=2}^{k} Pn-gram(wj | wj−(n−1), . . . , wj−1)   ▷ Compute probability of canary text ˆs
14: return α
Algorithm 3 Compute membership signal using similarity metric
1: Parameter: Similarity metric SIM(·, ·), cutoff parameter k
2: Input: Synthetic dataset ˜D = {˜xi = (˜si, ˜ℓi)}_{i=1}^{˜N}, Target canary ˆx = (ˆs, ˆℓ)
3: Output: Membership signal α
4: for i = 1 to ˜N do
5:     σi ← SIM(ˆs, ˜si)   ▷ Compute similarity of each synthetic example
6: Sort similarities σi for i = 1, . . . , ˜N in descending order
7: Let σi(1), . . . , σi(k) be the top-k similarities
8: α ← (1/k) Σ_{j=1}^{k} σi(j)   ▷ Compute mean similarity of the top-k examples
9: return α
B COMPUTATION OF RMIA SCORES
We here provide more details on how we adapt RMIA, as originally proposed by Zarifzadeh et al. (2024), to
our setup (see Sec. 3.1.3). In RMIA, the pairwise likelihood ratio is defined as:
LRθ(x, z) = ( P(x | θ) / P(x) ) · ( P(z | θ) / P(z) )^(−1).   (3)
where θ represents the target model, x the target record, and z the reference population. In this work, we only
consider one target model θ and many target records x. As we are only interested in the relative value of the
likelihood ratio across target records, we can eliminate the dependency on the reference population z,
LRθ(x, z) = LRθ(x) = P(x | θ) / P(x).   (4)
As suggested by Zarifzadeh et al. (2024), we compute P(x) as the empirical mean of P(x | θ′) across reference models {θ′i}_{i=1}^{M},

P(x) = (1/M) Σ_{i=1}^{M} P(x | θ′i).   (5)
To compute RMIA scores, we replace the probabilities in (4) by membership signals on target and reference models:

βθ(x) = αθ(x) / ( (1/M) Σ_{i=1}^{M} αθ′i(x) ).   (6)
Note that when we compute αθ(x) as a product of conditional probabilities (e.g. when using the target model
probability in the model-based attack or the n-gram probability in the data-based attack), we truly use a
probability for αθ(x). However, in the case of the data-based attack using similarity metrics, we use the mean
similarity to the k closest synthetic sequences—which does not correspond to a true probability. In this case,
we normalize similarities to fall in the range [0, 1] and use αθ(x) as an empirical proxy for the probability
P (x | θ).
In practice, P (x | θ) can be an extremely small value, particularly when calculated as a product of token-
level conditional probabilities, which can lead to underflow errors. To mitigate this, we perform arithmetic
operations on log-probabilities whenever possible. However, in the context of equation (6), where the
denominator involves averaging probabilities, we employ quad precision floating-point arithmetic. This
method is sufficiently precise to handle probabilities for sequences of up to 50 words, which is the maximum
we consider in our experiments.
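One way to realize this averaging without underflow is to work with arbitrary-precision floats, as sketched below with mpmath (an illustration; the precision setting is an assumption, not the exact library used in our code).

```python
import mpmath

mpmath.mp.dps = 40   # decimal digits of precision, roughly quad precision

def mean_probability(log_probs):
    """Average very small probabilities given their natural logarithms."""
    probs = [mpmath.exp(lp) for lp in log_probs]
    return sum(probs) / len(probs)
```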
C PROMPTS USED TO GENERATE SYNTHETIC DATA
Table 2 summarizes the prompt templates p(ℓ) used to generate synthetic data for both datasets (see Sec. 4).
Dataset    Template p(ℓ)                                  Labels ℓ
SST-2      "This is a sentence with a ℓ sentiment: "      {positive, negative}
AG News    "This is a news article about ℓ: "             {World, Sport, Business, Sci/Tech}
Table 2: Prompt templates used to fine-tune models and generate synthetic data.
D DETAILED ASSUMPTIONS MADE FOR THE ADVERSARY
We clarify the capabilities of adversaries in model- and data-based attacks according to the threat model
specified in Section 2. We note:
1. A model-based attack is strictly more powerful than a data-based attack. This is because with access to the fine-tuned model θ and the prompt template p(·), a model-based attack can synthesize ˜D for any set of synthetic labels and perfectly simulate the membership inference experiment for a data-based attack.
2. In both threat models, the adversary can train reference models {θ′i}_{i=1}^{M}. This assumes access to the private dataset D, and the training procedure of target model θ, including hyperparameters. This is made clear in line 3 in Algorithm 1.
3. In our experiments, we consider model-based attacks that use the prompt template p(·) to compute the model loss for target records, as specified in Sec. 3.1.1. Our data-based attacks use the prompt template p(·) to generate synthetic data ˜D from reference models.
4. Only the model-based attack has query-access to the target model θ. The attacks used in our
experiments use θ to compute token-level predicted logits for input sequences and do not use
white-box features, although this is not excluded by the threat model.
5. Only the data-based attack generates synthetic data from reference models, so only this threat model
leverages the sampling procedure sample(·).
Table 3 summarizes the adversary capabilities used in the attacks in our experiments.
Assumptions                                                                    Model-based MIA    Data-based MIA
Knowledge of the private dataset D used to fine-tune the target model θ
(apart from knowledge of canaries).                                                  ✓                  ✓
Knowledge of the training procedure of target model θ.                               ✓                  ✓
Knowledge of the prompt template p(ℓi) used to generate the synthetic data.          ✓                  ✓
Query-access to target model θ, returning predicted logits.                          ✓                  –
Access to synthetic data ˜D generated by target model θ.                             –                  ✓
Knowledge of the decoding strategy employed to sample synthetic data ˜D
(e.g., temperature, top-k).                                                           –                  ✓
Table 3: Adversary capabilities effectively used by attacks in our experiments.
E SYNTHETIC DATA UTILITY
To ensure we audit the privacy of synthetic text data in a realistic setup, the synthetic data needs to bear
high utility. We measure the synthetic data utility by comparing the downstream classification performance
of RoBERTa-base (Liu et al., 2019) when fine-tuned exclusively on real or synthetic data. We fine-tune
models for binary (SST-2) and multi-class classification (AG News) for 1 epoch on the same number of real or
synthetic data records using a batch size of 16 and learning rate η = 1 × 10−5. We report the macro-averaged
AUC score and accuracy on a held-out test dataset of real records.
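A sketch of this utility evaluation with the transformers Trainer API is given below; the datasets are assumed to be pre-tokenized Hugging Face datasets with a `labels` column, and the hyperparameters follow the values stated above.

```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

def downstream_utility(train_dataset, eval_dataset, num_labels):
    """Fine-tune RoBERTa-base on real or synthetic data and predict on real held-out data."""
    model = AutoModelForSequenceClassification.from_pretrained("roberta-base",
                                                               num_labels=num_labels)
    args = TrainingArguments(output_dir="clf", num_train_epochs=1,
                             per_device_train_batch_size=16, learning_rate=1e-5)
    trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
    trainer.train()
    return trainer.predict(eval_dataset)   # logits from which AUC and accuracy are computed
```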
Table 4 summarizes the results for synthetic data generated based on original data which does not contain any
canaries. While we do see a slight drop in downstream performance when considering synthetic data instead
of the original data, AUC and accuracy remain high for both tasks.
We further measure the synthetic data utility when the original data contains standard canaries (see Sec. 5.1).
Specifically, we consider synthetic data generated from a target model trained on data containing 500 canaries
Dataset    Fine-tuning data    Classification AUC    Accuracy
SST-2      Real                0.984                 92.3 %
           Synthetic           0.968                 91.5 %
AG News    Real                0.992                 94.4 %
           Synthetic           0.978                 90.0 %
Table 4: Utility of synthetic data generated from real data without canaries. We compare the performance of
text classifiers trained on real or synthetic data—both evaluated on real, held-out test data.
repeated nrep = 12 times, so 6000 data records. When inserting canaries with an artificial label, we remove
all synthetic data associated with labels not present originally when fine-tuning the RoBERTa-base model.
Canary injection                               Classification
Dataset    Source             Label        AUC      Accuracy
SST-2      In-distribution    Natural      0.972    91.6 %
           Synthetic          Natural      0.959    89.3 %
           Synthetic          Artificial   0.962    89.9 %
AG News    In-distribution    Natural      0.978    89.8 %
           Synthetic          Natural      0.977    88.6 %
           Synthetic          Artificial   0.980    90.1 %
Table 5: Utility of synthetic data generated from real data with canaries (nrep = 12). We compare the
performance of text classifiers trained on real or synthetic data—both evaluated on real, held-out test data.
Table 5 summarizes the results. Across all canary injection methods, we find limited impact of canaries on
the downstream utility of synthetic data. While the difference is minor, the natural canary labels lead to the
largest utility degradation. This makes sense, as the high perplexity synthetic sequences likely distort the
distribution of synthetic text associated with a certain real label. In contrast, in-distribution canaries can be
seen as up-sampling certain real data points during fine-tuning, while canaries with artificial labels merely
reduce the capacity of the model to learn from real data and do not interfere with this process as much as
canaries with natural labels do.
F ADDITIONAL RESULTS FOR MIAS USING STANDARD CANARIES
In line with the literature on MIAs against machine learning models (Carlini et al., 2022a), we also evaluate MIAs by their true positive rate (TPR) at low false positive rates (FPR). Tables 6 and 7 summarize the MIA TPR at FPR=0.01 and FPR=0.1, respectively. We also provide the ROC curves for the MIAs for both datasets (with canary labels randomly sampled from the distribution of labels in real data) in Figure 3.
G ABLATIONS FOR MIAS ON SYNTHETIC DATA
Synthetic multiple Thus far, we have exclusively considered that the number of generated synthetic records equals the number of records in the real data, i.e., N = ˜N. We now consider the case when more synthetic data is made available to a data-based adversary (˜A). Specifically, we denote the synthetic multiple m = ˜N/N
Canary injection                               TPR@FPR=0.01
Dataset    Source             Label        Model    Synthetic    Synthetic    Synthetic
                                                    (2-gram)     (SIMJac)     (SIMemb)
SST-2      In-distribution    Natural      0.148    0.081        0.029        0.020
           Synthetic          Natural      0.972    0.032        0.018        0.024
           Synthetic          Artificial   0.968    0.049        0.000        0.030
AG News    In-distribution    Natural      0.941    0.063        0.032        0.016
           Synthetic          Natural      0.955    0.030        0.006        0.016
           Synthetic          Artificial   0.990    0.071        0.041        0.022
Table 6: True positive rate (TPR) at a false positive rate (FPR) of 0.01 for experiments using standard canaries
(Sec. 5.1) across training datasets, canary injection mechanisms and MIA methodologies. Canaries are
synthetically generated with target perplexity Ptarget = 250 and inserted nrep = 12 times.
Canary injection                               TPR@FPR=0.1
Dataset    Source             Label        Model    Synthetic    Synthetic    Synthetic
                                                    (2-gram)     (SIMJac)     (SIMemb)
SST-2      In-distribution    Natural      0.795    0.335        0.207        0.203
           Synthetic          Natural      0.996    0.209        0.114        0.128
           Synthetic          Artificial   1.000    0.268        0.142        0.142
AG News    In-distribution    Natural      0.982    0.200        0.158        0.168
           Synthetic          Natural      0.990    0.260        0.114        0.114
           Synthetic          Artificial   0.996    0.298        0.152        0.164
Table 7: True positive rate (TPR) at a false positive rate (FPR) of 0.1 for experiments using standard canaries
(Sec. 5.1) across training datasets, canary injection mechanisms and MIA methodologies. Canaries are
synthetically generated with target perplexity Ptarget = 250 and inserted nrep = 12 times.
and evaluate how different MIAs perform for varying values of m. Figure 4 shows how the ROC AUC score
varies as m increases. As expected, the ROC AUC score for the attack that uses membership signals computed
using a 2-gram model trained on synthetic data increases when more synthetic data is available. In contrast,
attacks based on similarity metrics do not seem to benefit significantly from this additional data.
Hyperparameters in data-based attacks The data-based attacks that we presented in Sec. 3.1 have hyperparameters. The attack that uses n-gram models to compute membership signals is parameterized by the order n. Using too small a value for n might not suffice to capture the information leaked from canaries into the synthetic data used to train the n-gram model. When using too large an order n, on the other hand, we would expect less overlap between n-grams present in the synthetic data and the canaries, lowering the membership signal.
Further, the similarity-based methods rely on the computation of the mean similarity of the k closest synthetic records to a canary. When k is very small, e.g. k = 1, the method takes into account a single synthetic record, potentially missing leakage of membership information from other close synthetic data records. When k becomes too large, larger regions of the synthetic data in embedding space are taken into account, which might dilute the membership signal among the noise.
[Figure 3 plots: ROC curves (TPR vs. FPR) of the data-based attacks (2-gram, SIMjac with k = 25, SIMemb with k = 25) for (a) SST-2 and (b) AG News.]
Figure 3: MIA ROC curves across MIA methodologies for the SST-2 (left) and AG News (right) datasets.
Canaries are synthetically generated with target perplexity of Ptarget = 250 with a natural label and inserted
nrep = 12 times.
Figure 4: ROC AUC score for increasing value of the synthetic multiple m across attack methods for SST-2 (left) and AG News (right). Canaries are synthetically generated with target perplexity of Ptarget = 250, with a natural label, and inserted nrep = 12 times.
Table 8 reports the ROC AUC scores of model-based attacks for different values of the hyperparameters n
and k when using standard canaries (Sec. 5.1).
H DISPARATE VULNERABILITY OF STANDARD CANARIES
We analyze the disparate vulnerability of standard canaries between the model-based attack and the data-based
attack that uses a 2-gram model (as discussed in Sec 5.1). Figure 5 plots the RMIA scores for both attacks on
the same set of canaries, which have either been included in the training dataset of the target model (member)
Dataset    n-gram            SIMJac            SIMemb
           n    AUC          k    AUC          k    AUC
SST-2      1    0.415        1    0.520        1    0.516
           2    0.616        5    0.535        5    0.516
           3    0.581        10   0.538        10   0.519
           4    0.530        25   0.547        25   0.530
AG News    1    0.603        1    0.522        1    0.503
           2    0.644        5    0.525        5    0.498
           3    0.567        10   0.537        10   0.503
           4    0.527        25   0.552        25   0.506
Table 8: Ablation over hyperparameters of data-based MIAs. We report ROC AUC scores across different values of the hyperparameters n and k (see Sec. 3.1). Canaries are synthetically generated with target perplexity Ptarget = 250, with a natural label, and inserted nrep = 12 times.
Note that the RMIA scores are used to distinguish members from non-members, and
that a larger value corresponds to the adversary being more confident in identifying a record as a member, i.e.,
to the record being more vulnerable.
First, we note that the scores across both threat models exhibit a statistically significant, positive correlation.
We find a Pearson correlation coefficient between the RMIA scores (log) for both methods of 0.20 (p-value of
2.4 × 10^-10) and 0.23 (p-value of 1.9 × 10^-13) for SST-2 and AG News, respectively. This means that a
record vulnerable to the model-based attack tends to be also vulnerable to the data-based attack, even though
the attacks differ substantially.
Second, and more interestingly, some canaries have disparate vulnerability across MIA methods. Indeed,
Figure 5 shows how certain data records which are not particularly vulnerable to the model-based attack are
significantly more vulnerable to the data-based attack, and vice versa.
I LOW FPR ROC RESULTS
Figure 6 shows log-log plots of the ROC curves in Figure 1 to better examine behavior of attacks at low FPR.
J DETERMINING OPTIMAL HYPERPARAMETERS
We optimized hyperparameters for LoRA fine-tuning Mistral-7B on SST-2 by running a grid search over
learning rate ([1 × 10^-6, 4 × 10^-6, 2 × 10^-5, 6 × 10^-5, 3 × 10^-4, 1 × 10^-3]) and batch size ([64, 128,
256]). We fine-tuned the models for 3 epochs and observed the validation loss plateaued after the first epoch.
Based on these results, we selected a learning rate of 2 × 10^-5, effective batch size of 128, sequence length
128, LoRA r = 4 and fine-tuned the models for 1 epoch, as stated in Sec. 7. Figure 7 shows the validation
cross-entropy loss for SST-2 over the grid we searched on and the train and validation loss curves for 3 epochs
with the selected hyperparameters.
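The search loop itself is straightforward; the sketch below mirrors the grid described above, with `finetune_and_evaluate` standing in (hypothetically) for one LoRA fine-tuning run of Mistral-7B on SST-2 that returns the validation cross-entropy loss.

```python
import itertools

LEARNING_RATES = [1e-6, 4e-6, 2e-5, 6e-5, 3e-4, 1e-3]
BATCH_SIZES = [64, 128, 256]

def run_grid_search(finetune_and_evaluate):
    # `finetune_and_evaluate(lr, batch_size)` is a hypothetical callback that runs one
    # LoRA fine-tuning job (r = 4, sequence length 128) and returns the validation loss.
    best_config, best_loss = None, float("inf")
    for lr, bs in itertools.product(LEARNING_RATES, BATCH_SIZES):
        val_loss = finetune_and_evaluate(lr, bs)
        if val_loss < best_loss:
            best_config, best_loss = (lr, bs), val_loss
    return best_config, best_loss
```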
(a) SST-2
(b) AG News
Figure 5: RMIA scores (log) for model- and data-based MIAs on the same set of canaries. Results for both
datasets SST-2 and AG News. Canaries are synthetically generated with target perplexity of Ptarget = 250
with a natural label, and inserted nrep = 12 times.
K INTERPRETABILITY
K.1 IDENTIFYING MEMORIZED SUB-SEQUENCES
We analyze what information from a canary leaks into the synthetic data that enables a data-based attack
to infer its membership. For each canary ˆx = (ˆs, ˆℓ), we examine the synthetic data generated by a model
trained on a dataset including ˆx (member) and excluding ˆx (non-member). We leverage the M = 4 reference
models θ′ used to develop the attack for 1000 specialized canaries from Fig. 1(c). For each model θ′, we
count the number of n-grams in ˆs that occur at least once in ˜D′ (Cunique). We also compute the median
Cmed and average Cavg counts of n-grams from ˆs in ˜D′. Table 9 summarizes how these measures vary with
n. As n increases, the number of n-grams from the canary appearing in the synthetic data drops sharply,
reaching Cmed = 0 for n = 4 for models both including and excluding a canary. This suggests that any verbatim
reproduction of canary text in the generated synthetic data is of limited length. Further, we observe only slight
differences in counts between members and non-members, indicating that the signal for inferring membership
is likely in subtle shifts in the probability distribution of token co-occurrences within the synthetic data, as
captured by the 2-gram model. We further analyze canaries with the highest and lowest RMIA scores below.
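The count statistics in Table 9 can be reproduced per canary with a short routine; the sketch below assumes whitespace tokenization and a plain list of synthetic records, which is a simplification of the actual pipeline.

```python
import statistics
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def overlap_statistics(canary, synthetic_texts, n):
    # How often does each n-gram of the canary occur in the synthetic corpus?
    corpus_counts = Counter()
    for text in synthetic_texts:
        corpus_counts.update(ngrams(text.split(), n))
    counts = [corpus_counts[g] for g in ngrams(canary.split(), n)]
    c_unique = sum(1 for c in counts if c > 0)        # C_unique: n-grams seen at least once
    c_med = statistics.median(counts) if counts else 0
    c_avg = statistics.mean(counts) if counts else 0
    return c_unique, c_med, c_avg
```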
K.2 INTERPRETABILITY OF RMIA SCORES
To further understand the membership signal for data-based attacks, we examine some examples in depth.
Specifically, we consider the MIA for specialized canaries with F = 30, Ptarget = 31 and nrep = 16 for SST-2
from Figure 1(c). Recall that for this attack, we consider 1000 canaries, 500 of which are injected into the
training dataset of one target model θ. We also train 4 reference models θ′1, . . . , θ′4, where each of the 1000
canaries has been included in exactly half. We focus on the best-performing MIA based on synthetic data, i.e.,
the attack leveraging the probability of the target sequence computed using a 2-gram model trained on the
synthetic data.
[Figure 6 panels (log-log TPR vs. FPR): (a)/(d) ablate nrep, comparing A˜D with nrep ∈ {2, 4, 8, 16} against Aθ with nrep ∈ {1, 2, 4} (Ptarget = 31, F = 0); (b)/(e) ablate Ptarget ∈ {10, 10^2, 10^3, 10^4} for both A˜D and Aθ (nθrep = 4, n˜Drep = 16, F = 0); (c)/(f) ablate the in-distribution prefix length F ∈ {0, 10, 20, 30, max} for A˜D against Aθ with F = 0 (Ptarget = 31, nθrep = 4, n˜Drep = 16).]
Figure 6: Log-log ROC curves of MIAs on synthetic data A˜D compared to model-based MIAs Aθ on SST-2
((a)–(c)) and AG News ((d)–(f)). We ablate over the number of canary insertions nrep in (a), (d), the target
perplexity Ptarget of the inserted canaries in (b), (e) and the length F of the in-distribution prefix in the canary
in (c), (f).
n    Cunique (Member)    Cunique (Non-member)    Cmed (Member)    Cmed (Non-member)    Cavg (Member)        Cavg (Non-member)
1    46.1 ± 2.5          45.2 ± 2.8              882.9 ± 756.3    884.2 ± 771.8        7391.0 ± 1892.23     7382.7 ± 1887.1
2    29.6 ± 5.7          28.1 ± 5.7              5.2 ± 6.6        4.2 ± 6.3            202.9 ± 118.0        199.6 ± 116.6
4    4.8 ± 3.6           3.9 ± 3.2               0.0 ± 0.0        0.0 ± 0.0            1.4 ± 2.8            1.2 ± 2.6
8    0.1 ± 0.6           0.0 ± 0.3               0.0 ± 0.0        0.0 ± 0.0            0.0 ± 0.0            0.0 ± 0.0
Table 9: Aggregate count statistics of n-grams in a canary ˆs that also appear in the synthetic data ˜D′ generated
using 4 reference models including and excluding ˆs: number of n-grams in ˆs that also appear in ˜D′ (Cunique),
median (Cmed) and average (Cavg) counts of n-grams from ˆs in ˜D′. We report mean and std. deviation of
these measures over all canaries (F = 30, Ptarget = 31, nrep = 16) for SST-2. Each canary ˆs contains exactly
50 words and ˜D′ contains 706.7k ± 72.8k words.
(a) Grid search
(b) Loss curve
Figure 7: (a) Validation cross-entropy loss of LoRA fine-tuning Mistral-7B on SST-2 varying the learning
rate and effective batch size. (b) Training and validation loss for best hyperparameters over 3 epochs.
To understand what signal the MIA picks up to infer membership, we focus on the canary most confidently—
and correctly—identified as member and the one most confidently—and correctly—identified as non-member.
For this, we take the canaries for which the RMIA score computed using the target model and the reference
models is the highest and the lowest, respectively.
Next, for each model (4 reference models, and 1 target model), we report for this canary ˆxi:
1. Whether the canary has been included in, ˆxi ∈ D (IN), or excluded from, ˆxi /∈ D (OUT), the training
dataset of the model in question, and thus used to generate the synthetic data ˜D = {˜xi = (˜si, ˜ℓi)}, i = 1, . . . , ˜N.
2. The canary with the words that appear as a 2-gram in the synthetic data ˜D emphasized in bold face.
Note that if, for instance, this is a sequence of 3 words, e.g., "like many western", this means that all
3 words appear in 2-grams in the synthetic data, e.g., "like many" and "many western".
3. The maximum overlapping sub-string between the canary and any synthetically generated record ˜si.
We define a sub-string as a sequence of characters, including white space, and also report its length
as the number of characters Loverlap.
4. The mean negative cross-entropy loss of the canary computed using the 2-gram model trained on
the synthetic data. Formally, for canary ˆsi = (w1, w2, . . . , wk): −(1/k) Σ_{j=2}^{k} log(P2-gram(wj, wj−1)).
A minimal sketch of this computation is given below.
Tables 10 and 11 report this for the canary with the largest and lowest RMIA score, respectively.
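As a reference for item 4, a minimal sketch of the per-canary 2-gram loss follows. The add-one smoothing and whitespace tokenization are simplifying assumptions; the exact smoothing used to train the 2-gram model is not restated here.

```python
import math
from collections import Counter

def bigram_loss(canary, synthetic_texts):
    # Mean negative log-probability of the canary's bigrams under a 2-gram model
    # fitted on the synthetic data, i.e. -(1/k) * sum_j log P(w_j, w_{j-1}).
    bigram_counts, unigram_counts, vocab = Counter(), Counter(), set()
    for text in synthetic_texts:
        tokens = text.split()
        vocab.update(tokens)
        unigram_counts.update(tokens)
        bigram_counts.update(zip(tokens, tokens[1:]))
    words = canary.split()
    loss = 0.0
    for prev, cur in zip(words, words[1:]):
        p = (bigram_counts[(prev, cur)] + 1) / (unigram_counts[prev] + len(vocab))
        loss -= math.log(p)
    return loss / len(words)
```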
First, we observe that not all the words in the canary appear as 2-grams in the synthetic dataset. This could be
expected, as not all 2-grams are commonly used in general English (e.g. "penetrating views"). Notably, the
number of common 2-grams does not significantly differ whether the canary is a member or not (IN or OUT).
In addition, we observe similar trends when considering the longest overlapping sub-string between the
canary and the synthetic data. Across all models and canaries, this sub-string remains consistently short and
shows little variation with membership labels. This suggests that the signal used to infer membership does
not rely on the verbatim regurgitation of long sub-sequences.
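The maximum overlapping sub-string reported above can be computed with a standard longest-common-substring routine; a minimal sketch follows, treating records as raw character strings.

```python
from difflib import SequenceMatcher

def max_overlap(canary, synthetic_texts):
    # Longest character-level sub-string (including whitespace) shared between the
    # canary and any synthetic record, together with its length L_overlap.
    best = ""
    for record in synthetic_texts:
        matcher = SequenceMatcher(None, canary, record, autojunk=False)
        match = matcher.find_longest_match(0, len(canary), 0, len(record))
        if match.size > len(best):
            best = canary[match.a:match.a + match.size]
    return best, len(best)
```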
Lastly, we investigate whether the reported 2-gram loss is consistent with the fact that these canaries
correspond to the largest and lowest RMIA scores. Although the losses across models differ only slightly, the
relative values align with the RMIA scores. Recall that RMIA scores are intuitively computed as the ratio of
the membership signal of the target model to the average membership signal across reference models. For the
canary with the highest RMIA score, the 2-gram loss of the target model is lower than the average loss of the
reference models, suggesting that the canary was seen by the target model. Conversely, for the canary with
the lowest RMIA score, the 2-gram loss is higher than the average loss across reference models.
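The ratio intuition can be written compactly as follows. The sketch assumes likelihood-style membership signals (e.g. the exponentiated mean 2-gram log-probability) and omits the additional normalization used by the full RMIA test.

```python
def rmia_score(canary, target_signal, reference_signals):
    # Ratio of the target model's membership signal to the average signal across the
    # reference models; larger values indicate higher confidence that the canary is a member.
    avg_reference = sum(signal(canary) for signal in reference_signals) / len(reference_signals)
    return target_signal(canary) / avg_reference
```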
These results suggest that the information required to infer membership based on synthetic data does not lie in
the explicit generation of canary sub-strings within the synthetic data. Instead, the signal seems more subtle,
arising from slight shifts in the probability distribution of co-occurrences of words in the synthetic data.
Canary (identical text in every row; in the original table, the words appearing as part of 2-grams in ˜D′ are shown in bold): "like many western action films , this thriller is too loud and thoroughly overbearing , but its heartfelt concern about north korea ’s recent past and south korea ’s future, its sophisticated sense of character and its penetrating views on many social and political issues, like the exploitation of single"

Model         IN or OUT    Max overlapping sub-string         Loverlap    2-gram loss
θ′1 (ref)     IN           « social and political issues »    28          17.96
θ′2 (ref)     IN           « sense of character and »         24          18.40
θ′3 (ref)     OUT          « sophisticated sense of »         24          18.30
θ′4 (ref)     OUT          « sense of character and »         24          17.93
θ (target)    IN           « sense of character and »         24          17.65
Table 10: Interpretability of the best MIA (2-gram) based on synthetic data for specialized canaries with
F = 30, Ptarget = 31 and nrep = 16 for SST-2 from Figure 1(c). Results across 4 reference models and
the target model for the canary with the largest RMIA score (most confidently and correctly identified as
member by the MIA). Words in bold appear in 2-grams in ˜D′. The largest generated sub-sequence of the
canary in ˜D′ corresponds to the maximum overlapping sub-string, not the longest sequence of words in bold.
Canary (identical text in every row; in the original table, the words appearing as part of 2-grams in ˜D′ are shown in bold): "the star who helped give a spark to “ chasing amy ” and “ changing lanes ” falls flat as thinking man cia agent jack ryan in this summer ’s big-budget action drama, “ the hunt for red october ” (1990). At the time, bullet time was used to prolong"

Model         IN or OUT    Max overlapping sub-string          Loverlap    2-gram loss
θ′1 (ref)     IN           « the hunt for red october »        26          18.12
θ′2 (ref)     IN           « ” and “ changing lanes ” »        29          18.41
θ′3 (ref)     OUT          « “ chasing amy ” »                 19          19.04
θ′4 (ref)     OUT          « ” and “ changing lanes ” »        29          18.29
θ (target)    OUT          « “ chasing amy ” »                 19          18.85
Table 11: Interpretability of the best MIA (2-gram) based on synthetic data for specialized canaries with
F = 30, Ptarget = 31 and nrep = 16 for SST-2 from Figure 1(c). Results across 4 reference models and
the target model for the canary with the smallest RMIA score (most confidently and correctly identified as
non-member by the MIA). Words in bold appear in 2-grams in ˜D′. The largest generated sub-sequence of the
canary in ˜D′ corresponds to the maximum overlapping sub-string, not the longest sequence of words in bold.
|
9QYJu1cGfE | Quo Vadis, Motion Generation? From Large Language Models to Large Motion Models | [8, 6, 6, 5, 5] | Under review as a conference paper at ICLR 2025
QUO VADIS, MOTION GENERATION? FROM LARGE
LANGUAGE MODELS TO LARGE MOTION MODELS
Anonymous authors
Paper under double-blind review
ABSTRACT
Inspired by recent success of LLMs, the field of human motion understanding has
increasingly shifted towards the development of large motion models. Despite
some progress, current works remain far from achieving truly generalist mod-
els, largely due to the lack of large-scale, high-quality motion data. To address
this, we present MotionBase, the first million-level motion generation benchmark,
offering 15 times the data volume of the previous largest dataset, and featuring
multimodal data with hierarchically detailed text descriptions. By leveraging this
vast dataset, our large motion model demonstrates strong performance across a
broad range of motions, including unseen ones. Through systematic investiga-
tion, we underscore the importance of scaling both data and model size, with
synthetic data and pseudo labels playing a crucial role in mitigating data acqui-
sition costs. Moreover, our research reveals the limitations of existing evalua-
tion metrics, particularly in handling out-of-domain text instructions — an is-
sue that has long been overlooked. In addition, we introduce a 2D lookup-free
approach for motion tokenization, which preserves motion information and ex-
pands codebook capacity, further enhancing the representative ability of large
motion models. The release of MotionBase and the insights gained from this
study are expected to pave the way for the development of more powerful and
versatile motion generation models. Our code and database will be released at
https://anonymous.4open.science/r/MotionBase.
1 INTRODUCTION
Motion generation is an emerging field with diverse applications in video games, filmmaking, and
robotics animation. At the forefront of this area is text-to-motion generation (T2M) (Ahn et al.,
2018; Ahuja & Morency, 2019), which plays a crucial role in translating natural language into human
motions. State-of-the-art T2M models typically rely on a combination of the motion quantization
methods (e.g., VQ (Van Den Oord et al., 2017)), along with a text encoder (e.g., CLIP (Radford
et al., 2021)) and decoder (e.g., GPT-2 (Radford et al., 2019)) to generate motion sequences from
detailed textual instructions. Despite the availability of a few high-quality datasets (Guo et al.,
2022a; Lin et al., 2024) curated in recent years, their limited size restricts current methods to a
narrow range of scenarios, creating performance bottlenecks when addressing diverse or unseen
motions, as illustrated in Figure 1 (RIGHT).
The rapid advancement of large language models (LLMs) (Touvron et al., 2023a) in multimodal
learning has been significantly bolstered by the availability of vast data resources (Zheng et al.,
2024; Xu et al., 2024). In contrast, the volume of motion data remains considerably smaller than that
of visual-text data, as illustrated in Figure 1 (LEFT). This disparity primarily arises from the high
costs associated with motion data collection, which often requires specialized wearable devices and
substantial human labor for annotation. Consequently, developing a state-of-the-art (SoTA) large
motion model based on LLMs presents a significant challenge and remains an unresolved issue.
While some recent efforts (Jiang et al., 2023) have explored this direction, the effectiveness of large
motion models has yet to be fully demonstrated.
In this paper, we aim to address the question: “Can a large motion model be a promising direction
for motion generation?” To tackle this, we have developed a systematic data collection scheme that
led to the creation of MotionBase, the first large-scale dataset containing over one million motion
Figure 1: LEFT: Curves showing the effects of scaling up large motion models. MotionBase is the
first large text-to-motion dataset comparable in scale to visual benchmarks like ImageNet. RIGHT:
While existing models perform well on constrained datasets like Motion-X and HumanML3D, they
struggle with out-of-domain concepts on MotionBase, exhibiting limited generalization.
sequences — 15 times larger than the previous largest dataset. This initiative provides a solid foun-
dation for building robust, universally applicable large motion models and offers a comprehensive
testbed for future research.
Building on the solid foundation of MotionBase, we can now conduct a comprehensive investiga-
tion into the effectiveness of large motion models. This research aims to firstly identify key factors
driving their advancement and offer valuable insights for future model design, including: ❶ scal-
ing both data and model size significantly reduces joint prediction errors on critical metrics while
improving generalization to novel motions. ❷ Despite observable domain gaps, synthetic and static
data, as well as pseudo motion labels are becoming increasingly essential and effective, especially
given the high cost of acquiring ground truth motion data. ❸ Existing metrics show limitations when
faced with out-of-domain text instructions. Notably, the widely used metric, FID, fails to accurately
capture the alignment between ground truth and generated motions. Our findings highlight the need
for a more robust and equitable evaluation framework that enhances open-set generalization.
In addition to these factors, we argue that large motion models are further constrained by inad-
equate motion representation. Most approaches rely on transforming motion into discrete tokens
via vector quantization (VQ), which are then processed by autoregressive models to generate mo-
tion sequences. While these methods have produced impressive results, they suffer from two major
drawbacks. ❶ Information loss: The current VQ process inevitably leads to the loss of critical
information. Given a motion clip with D-dimensional features M = {m1, m2, ..., mT }, where
mi ∈ RD, VQ compresses it into a list of 1D embeddings of size ⌊T /α⌋ × d, where α is the tempo-
ral downsampling ratio and d is the codebook dimension. Unlike images, which consist of uniform
RGB pixel values, each motion state mi contains a set of distinct features (e.g., joint position, ve-
locity, foot-ground contact). Using a single 1D embedding to represent such complex motion states
is insufficient. This not only results in the loss of vital information but also limits the model’s ability
to flexibly generate motion at a part-level. ❷ Limited Codebook Size: Existing VQ are limited
by a small codebook, meaning that all possible human motions must be selected from these limited
options. Consequently, these 1D embeddings fail to capture the vast diversity of human motion.
To address this issue, we propose treating a motion clip as a 2D image with a single channel, rep-
resented as M ∈ RT ×D×1. By expanding the dimensionality of the motion clip from 1D to 2D,
we enhance the encoder’s capacity, improving its ability to represent complex motions while retain-
ing more critical information after tokenization. Although increasing the size of the codebook is a
straightforward way to enhance its expressiveness, this approach often leads to “codebook collapse,"
particularly when training samples are scarce. To mitigate this, we introduce a finite scalar quan-
tizing method inspired by Mentzer et al. (2023), which enables learning a large motion vocabulary
without requiring a lookup for corresponding tokens in the codebook for each entry. As a result,
we expand the motion codebook by at least two orders of magnitude, boosting its representational
capacity while maintaining efficiency.
[Figure 1 plots: (LEFT) FID (test on all) versus model size for GPT2-medium, Llama-2, and Llama-3.1 backbones (0.8B-13B) trained on Motion-X, HumanML3D, and MotionBase, with dataset-scale markers including ImageNet and MotionBase-0.5M; (RIGHT) example text prompts and hierarchical part-level descriptions from (a) Motion-X, (b) HumanML3D, and (c)/(d) MotionBase.]
We summarize our main contributions as follows. (1) MotionBase: We introduce MotionBase, the
first large-scale motion generation benchmark containing over one million motions with detailed
textual descriptions, significantly advancing the capability to effectively train motion generation
models. (2) Key Insights: Our research identifies critical factors affecting the effectiveness of large
motion models, emphasizing the importance of scaling both data and model size. Additionally, we
uncover limitations in the current evaluation metrics, particularly when handling diverse and unseen
motions. (3) Novel Motion Quantization: We propose a novel motion quantization approach that
represents motion clips as 2D images and constructs a finite-scale codebook without requiring token
lookups. This method retains essential information and expands the capacity of the motion encoder,
enhancing the ability of large motion models to leverage large-scale motion data.
2 RELATED WORK
2.1 LARGE LANGUAGE MODELS AND MULTI-MODALITY
Substantial advancements have been made in enhancing LLMs (Brown et al., 2020; Raffel et al.,
2020; Chowdhery et al., 2022) with the ability to understand and respond to human instructions,
through a technique known as instruction tuning (Ouyang et al., 2022). Recent research has extended
these capabilities to the multimodal domain (Ye et al., 2023; Zheng et al., 2023), with notable work
by Liu et al. (2023), who pioneered visual instruction tuning to create a highly adaptable visual
assistant. Additionally, Li et al. (2023a) integrated multimodal context directly into instruction
data to further enhance model performance. Subsequent studies (Zhang et al., 2023b; Zhao et al.,
2023) expanded this research by scaling up instructional datasets and incorporating image-rich text.
Notably, Dai et al. (2023) developed InstructBLIP, based on BLIP-2 (Li et al., 2023b), which features
an advanced visual feature extraction mechanism to improve performance across vision-language
tasks. Despite these breakthroughs, the application of multimodal models to human motion remains
less competitive compared to current state-of-the-art (SoTA) methods, although recent initiatives are
beginning to explore this domain (Jiang et al., 2023; Zhang et al., 2024b).
2.2 VECTOR QUANTIZATION
Vector quantization (VQ) has been highly successful in generating high-quality images (Van
Den Oord et al., 2017) and videos (Gupta et al., 2022; Yan et al., 2021). VQ-VAE first converts
images into discrete representations and autoregressively models their distribution. Building on
this, Lee et al. (2022) introduced residual quantization (RQ), which encodes images into a stacked
map of discrete codes, efficiently reducing the spatial resolution of features. You et al. (2022) further
developed hierarchical vector quantization (HQ), employing a pyramid scheme with two-level codes
for image encoding. Most existing motion generation approaches have adopted VQ or its variants to
quantize human motions. However, the small codebook size in traditional VQ methods limits their
ability to generalize and accurately represent the diversity of human motions. Although increas-
ing the codebook size can improve representational capacity, it often leads to codebook collapse.
Recently, Mentzer et al. (2023) demonstrated that discrete codes can be obtained via scalar quanti-
zation, where each scalar entry is independently quantized to the nearest integer through rounding.
Similarly, Yu et al. (2023) introduced a lookup-free codebook that maps videos into compact discrete
tokens, utilizing all codes without auxiliary losses and expanding the codebook size.
2.3 HUMAN MOTION GENERATION
The task of motion generation involves creating human motion based on various inputs, such as text
descriptions (Guo et al., 2022b; Petrovich et al., 2022), action labels (Cervantes et al., 2022; Guo
et al., 2020) or motion prefixes (Liu et al., 2022; Mao et al., 2019). Among these, text-to-motion
(T2M) generation has received the most attention due to the ease and flexibility of using natural
language as input. Early approaches (Fragkiadaki et al., 2015; Ghosh et al., 2017; Gopalakrishnan
et al., 2019) rely on deterministic motion modeling, which often produce averaged, blurry results.
To overcome this, researchers introduce stochastic methods using models like GANs (Cai et al.,
2018; Wang et al., 2020) or VAEs (Aliakbarian et al., 2020). For instance, T2M-GPT (Zhang et al.,
2023a) extends the temporal VAE to capture the probabilistic relationship between text and mo-
tion. Recently, Guo et al. (2024) proposed integrating residual quantization and masked modeling
Table 1: Comparison with existing human motion datasets. More details can be found in our ap-
pendix. In the table, B, H, and F refer to body, hand, and face, respectively. “part” indicates that the
text captions include fine-grained descriptions of body parts, while “body” means the descriptions
are not as detailed. “multi” and “single” specify whether the dataset contains multi-person scenarios
or only single-person data. Our MotionBase is the largest motion generation dataset and benchmark
to date, featuring at least 15× more data than previous datasets, along with additional modalities.
Dataset                          SEQ NUMBER    MOTION    TEXT    RGB    DEPTH    BBOX    PERSON
KIT (Plappert et al., 2016)      5.7K          B         body    ✗      ✗        ✗       single
HumanML3D (Guo et al., 2022a)    29.2K         B         body    ✗      ✗        ✗       single
MotionX (Lin et al., 2024)       81.1K         B,H,F     body    ✓      ✗        ✗       single
MotionBase-V1                    >1M           B,H       part    ✓      ✓        ✓       multi
to improve traditional vector quantization (VQ). Lu et al. (2023) designed a hierarchical VQVAE
to separately encode body and hand motions. To better align with a motion auto-encoder, Motion-
CLIP (Tevet et al., 2022) incorporates CLIP (Radford et al., 2021) as the text encoder, bringing
in more robust text priors. Additionally, Zhang et al. (2024b) and Jiang et al. (2023) explored the
development of unified models based on LLMs which accept multimodal conditions (e.g., vision,
text, and pose), enabling the generation of subsequent, preceding, or “in-between” motions. De-
spite leveraging the power of LLMs, these large motion models remain limited to in-domain text
instructions and do not yet perform as competitively as existing SoTA methods.
In this work, we aim to bridge the gap between large language models and generalized, reliable
large motion models. To achieve this, we begin by introducing MotionBase, a novel, large-scale
dataset designed to support extensive pretraining and comprehensive, fair evaluation.
3 MOTIONBASE DATASET
Data is the foundation of large motion models. With advancements in fields like human pose detec-
tion, we are now able to extract high-quality motion sequences from vast amounts of online videos,
including datasets like InternViD (Wang et al., 2023) and WebVid (Bain et al., 2021). In its initial
public release, our MotionBase contains over one million motion clips, each annotated with fine-
grained automatic pseudo labels. A comparison with existing benchmarks is presented in Table 1.
Our data collection pipeline involves the following key steps in order.
❶ Source Video Collection and Cleaning: We begin by collecting over 20 million videos from
publicly available datasets and online platforms such as YouTube. To ensure quality and relevance,
we filter out videos that do not contain human figures.
❷ 2D-3D Keypoint Estimation: Keypoints are essential for capturing the skeletal structure of
human motion.
Initially, we estimate whole-body 2D keypoints with confidence scores using a
pretrained model (Xu et al., 2022). To further enhance motion accuracy, we estimate precise 3D
keypoints with another pretrained model (Sárándi et al., 2023) trained on large 3D datasets. Following
the method of Lin et al. (2024), we apply temporal smoothing and enforce 3D bone length
constraints during triangulation, improving the stability and consistency of the keypoint estimations.
❸ Incorporating Additional Modalities: A comprehensive understanding of human motion ben-
efits from the inclusion of diverse modalities such as RGB and depth data. To enrich MotionBase,
we provide annotations for these additional modalities. Furthermore, MotionBase includes videos
featuring multi-person scenarios, with each motion sequence grounded in its corresponding video
through object-level bounding boxes. Although this paper primarily focuses on the text-to-motion
task, these additional modalities open avenues for future research in other areas.
❹ Local-Global Pose Estimation: We begin by registering the body model SMPL-X (Pavlakos
et al., 2019) for each frame in MotionBase, which leverages the estimated keypoints via a progressive
learning-based mesh-fitting method (Lin et al., 2024). Specifically, we predict SMPL-X parameters using a
pretrained body mesh recovery method, OSX (Lin et al., 2023), followed by iterative optimization
to fit the parameters to the target 2D and 3D joint positions. After fitting, we apply global motion
optimization based on Yuan et al. (2022) to refine both global motions and camera poses simulta-
Figure 2: Examples from MotionBase, which encompasses a diverse range of human motions,
including both long-term clips and static snapshots. It features various scenes, ranging from outdoor
environments to indoor settings, and includes both clean, single-person scenarios as well as crowded,
multi-person scenes. Additionally, MotionBase comprises a mix of real-world data and synthetic
data generated by game engines. For more details about MotionBase, please refer to Appendix A.
neously, ensuring alignment with the video evidence. Finally, for motions with noisy or occluded
input data, we reconstruct complete and plausible motions using RoHM (Zhang et al., 2024a).
❺ Hierarchical Motion Descriptions: Existing benchmarks face inherent limitations in their text
descriptions. Previous studies (Guo et al., 2022a) typically use a single sentence to describe whole-
body motions, neglecting finer details of individual body parts, such as the arms or legs. This
approach restricts the model’s ability to perform more nuanced body comprehension and flexible
part-level motion control (e.g., raising only the left arm). Moreover, the richness of text labels often
varies across different motions; for example, a large portion of the Motion-X dataset provides only
action labels. In contrast, MotionBase offers hierarchical textual annotations for each video inspired
by Pi et al. (2023). We carefully design a prompt format and use Gemini-1.5-pro (Reid et al., 2024)
to generate detailed descriptions for individual body parts (e.g., left arm, right leg), assigning a
dedicated sentence to each. Additionally, we summarize the overall body movement in a paragraph
containing 1–3 sentences, providing a more comprehensive description of the motion.
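The exact prompt given to Gemini-1.5-pro is not reproduced here; the sketch below only illustrates the hierarchical structure of the resulting annotations, with hypothetical field names and a hypothetical prompt builder.

```python
# Illustrative schema for one hierarchical caption (field names are hypothetical).
HIERARCHICAL_CAPTION_SCHEMA = {
    "whole_body": "1-3 sentences summarizing the overall movement",
    "body_parts": {
        "left_arm": "one dedicated sentence",
        "right_arm": "one dedicated sentence",
        "left_leg": "one dedicated sentence",
        "right_leg": "one dedicated sentence",
        "torso": "one dedicated sentence",
    },
}

def build_annotation_prompt(clip_id: str) -> str:
    # Hypothetical prompt builder for a single video clip.
    parts = ", ".join(HIERARCHICAL_CAPTION_SCHEMA["body_parts"])
    return (
        f"Watch clip {clip_id}. First summarize the overall body movement in 1-3 sentences, "
        f"then write one sentence for each of the following body parts: {parts}."
    )
```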
4 SCALING UP LARGE MOTION MODEL
4.1 OVERALL ARCHITECTURE
Similar to previous LLM-based multimodal models, we treat motion as a foreign language. The
overall framework is presented in Figure 11 in Appendix B. Our large motion model, built on a
pre-trained LLM, functions as a generative model that connects a motion tokenizer with the LLM
backbone Θ. The motion tokenizer encodes raw motion clip features M into token embeddings
V = {v1, v2, ..., vn} ∈ Rn×d, where n denotes the number of motion tokens and d represents the
dimensionality of each token. To integrate motion tokens into the LLM framework, we incorporate
K discrete codes in the motion codebook as additional vocabulary for the LLM. Additionally, we
introduce two special tokens, <mot> and </mot>, to signify the start and end of motion sequences
within the input/output streams. The LLM backbone Θ is built on a decoder-only architecture using
causal transformers. The model generates outputs Y = {y1, y2, ..., ym} in an auto-regressive man-
ner, where Y corresponds to the generated motion sequence based on the provided motion-text input
tokens. In this work, each motion-text pair in the MotionBase dataset is framed as an instruction-
following instance {XQ, XM }, representing a question-answer interaction between the user and the
motion model. The entire instructional dataset adheres to this unified format. To train our model,
we optimize the negative log-likelihood over the predicted tokens, which is defined as follows:
L(Θ) = − Σ_{j=1}^{L} log PΘ(yj | desc, ˆy1:j−1),    (1)
where ˆy and y denote the input and target token sequences, respectively. Θ represents the model
parameters, and L is the length of the target sequence. The input description, desc, can be empty
depending on the instruction provided.
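A minimal sketch of one optimization step for Eq. (1) is shown below, assuming the K motion codes and the <mot>/</mot> tokens have already been added to the LLM vocabulary and that `llm` is a causal language model returning logits of shape (batch, sequence, vocabulary). The masking of description positions follows Eq. (1); note that the Table 3 variant of our model instead computes the loss over the entire concatenated sequence. This is a sketch, not the exact training code.

```python
import torch
import torch.nn.functional as F

def training_step(llm, desc_ids, motion_token_ids):
    # Concatenate description tokens and motion tokens (already wrapped in <mot> ... </mot>).
    input_ids = torch.cat([desc_ids, motion_token_ids], dim=1)   # (B, T)
    logits = llm(input_ids)                                      # (B, T, V)
    # Shift so that position t predicts token t + 1.
    shift_logits = logits[:, :-1, :]
    shift_labels = input_ids[:, 1:].clone()
    # Only motion tokens contribute to the loss; description positions are ignored.
    desc_len = desc_ids.size(1)
    shift_labels[:, : desc_len - 1] = -100
    return F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        ignore_index=-100,
    )
```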
4.2 2D LOOKUP-FREE MOTION QUANTIZATION
Similar to visual tokenization, motion tokenization is a process that compresses motion signals into
a series of discrete tokens, typically involving an encoder E, a decoder D and a codebook C. We
propose a 2D lookup-free quantization method as a key component for building large motion models.
2D Motion Quantization. Traditional motion quantizers use 1D embeddings to represent motion
at each timestamp, which inevitably results in the loss of crucial information. Furthermore, this
approach limits the quantizer’s ability to generate and interpret part-level motions. To address these
limitations, we treat the motion sequence M = {m1, m2, ..., mT } as a single-channel image, rep-
resenting each motion sequence as M ∈ RT ×D×1. Each motion embedding mi is divided into P
components, capturing distinct features of motion, such as root orientation, joint rotation and foot
contact. Our motion encoder then converts M into a feature map E(M) ∈ R⌊T /α⌋×P ×d, where α
denotes the temporal downsampling ratio. This approach ensures that each body part is tokenized
separately, allowing for more granular, part-level motion encoding and decoding.
Lookup-Free Quantization. Traditional motion quantizers are often constrained by small code-
book sizes, restricting their ability to capture the full diversity of human motion. A common ap-
proach is to expand the motion vocabulary. However, excessively enlarging the codebook can result
in “codebook collapse”, where only a small subset of tokens in the codebook is used, offering min-
imal performance improvements. In some cases, an overly large vocabulary can even degrade the
model’s overall performance. To address this, a more effective way is to reduce the dimensionality
of code embeddings (Mentzer et al., 2023), which limits the representational capacity of individual
tokens and encourages more efficient learning across a larger vocabulary. Similar to Yu et al. (2023),
we reduce the embedding dimension of the codebook to zero by replacing the codebook C ∈ RK×d
with an integer set C with |C| = K. Specifically, C is the Cartesian product of single-dimensional
variables C = C1 × C2 × · · · × Cd, where Ci = {−1, 1} and d is equal to log2 K. Given a feature vector
z ∈ Rd, our quantizer Q(·) converts each dimension of the quantized representation into:
Q(zi) = arg min_{cik} ||zi − cik|| = −1{zi ≤ 0} + 1{zi > 0},    (2)
where cij denotes the j-th value of Ci. The token index is computed as Index(z) = Σ_{i=1}^{d} 2^{i−1} 1{zi > 0}.
To train the tokenizer, we employ a standard combination of reconstruc-
tion, perceptual, and commitment losses, along with an entropy penalty to promote better codebook
utilization (Yu et al., 2023). Importantly, we exclude the use of GAN loss, as it was found to nega-
tively impact training stability.
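A minimal sketch of the quantizer in Eq. (2) is given below: each channel is binarized by its sign and the token index is read off from the positive channels, so no embedding table or nearest-neighbour lookup is needed. The straight-through estimator and the omitted reconstruction/commitment/entropy losses are standard choices rather than a restatement of the full training recipe.

```python
import torch

def lookup_free_quantize(z):
    # z has shape (..., d); quantize each channel to {-1, +1} and compute
    # Index(z) = sum_i 2^(i-1) * 1{z_i > 0}.
    codes = torch.where(z > 0, torch.ones_like(z), -torch.ones_like(z))
    bits = (z > 0).long()
    weights = 2 ** torch.arange(z.size(-1), device=z.device)
    indices = (bits * weights).sum(dim=-1)
    # Straight-through estimator so gradients reach the encoder outputs.
    codes = z + (codes - z).detach()
    return codes, indices

# Example: d = 14 channels give a codebook of size 2^14 = 16384 (cf. Table 6)
# without storing any code embeddings.
z = torch.randn(2, 8, 14)            # (batch, temporal/part positions, d)
codes, indices = lookup_free_quantize(z)
```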
5 EXPERIMENTS
5.1 EXPERIMENTAL SETUP
Datasets. Our investigation is first conducted on the following text-to-motion datasets: Hu-
manML3D (Guo et al., 2022a) and Motion-X (Lin et al., 2024). HumanML3D comprises 14,616
motion clips sourced from the AMASS dataset (Mahmood et al., 2019), paired with 44,970 textual
descriptions. Motion-X, a more recent dataset, includes approximately 81,000 motion clips. To
validate our conclusions on larger-scale data, we also carry out experiments on the proposed Mo-
tionBase dataset with two variants: MotionBase-0.5 and MotionBase-1.0. MotionBase-0.5 contains
500,000 clips, while MotionBase-1.0 encompasses the full scope of our collected data, with over 1
million clips. Following standard practice, each dataset is split into training, validation, and test sets
in proportions of 85%, 5%, and 15%, respectively.
Evaluation Metrics.
For the motion generation task, we employ the following metrics in our
experiments following Guo et al. (2022a). (1) Frechet Inception Distance (FID): This metric assesses
overall motion quality by measuring the distributional difference between the high-level features of
generated motions and real motions. (2) Motion-retrieval Precision (R-Precision) and Multimodal
Distance (MMDist): These metrics evaluate the semantic alignment between the textual input and
generated motions. R-Precision measures the top-1/2/3 retrieval accuracy, while MMDist computes
the distance between matched text and motion pairs. Additionally, we validate our motion tokenizer
by conducting experiments on the motion reconstruction task. This is measured using both Mean
Per Joint Position Error (MPJPE) and FID. MPJPE quantifies the average distance (in millimeters)
between the predicted joint positions and the ground truth positions across all joints in the skeleton.
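For reference, MPJPE reduces to a mean Euclidean distance over joints and frames; a minimal sketch (assuming joint positions already expressed in millimetres) is:

```python
import numpy as np

def mpjpe(pred_joints, gt_joints):
    # Both arrays have shape (frames, joints, 3), in millimetres.
    return float(np.linalg.norm(pred_joints - gt_joints, axis=-1).mean())
```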
Implementation Details. For the motion tokenizer, we implement a VQ codebook C ∈ R1024×512
with an embedding dimensionality of d = 512, and the resulting discrete codes are incorporated as
additional vocabulary for the LLM. In comparison, our lookup-free codebook has a size of 2^14 =
16384, where the least frequently used tokens from the LLM’s codebook are mapped to represent
motion codes. The motion encoder E operates with a temporal downsampling rate of α = 4. We
experiment with four LLM architectures to build our large motion model: GPT2-medium (Radford
et al., 2019), Llama-2-7b, Llama-2-13b (Touvron et al., 2023b), and Llama3.1-8b (Dubey et al.,
2024). The motion tokenizer is trained with a learning rate of 1e-4 and a batch size of 256 over
300K iterations. For training the large motion model, full parameter tuning is performed on 8×A800
GPUs, with a batch size of 1024, over 300 epochs. The learning rate is set to 2e-4 for GPT2-medium
and 2e-5 for the Llama models. Further details are provided in the appendix due to space limitation.
Table 2: Comparisons under different model and data sizes. All experiments are conducted using
the same pretrained VQ model for consistency. Additionally, we re-train the motion autoencoder
and text encoder (Guo et al., 2022a) separately on the Motion-X and MotionBase datasets, using
their respective data to train the motion autoencoder for each dataset’s evaluation.
                                          Motion-X                         MotionBase
Decoder    #Inst.    #Param.    R@1 ↑    R@3 ↑    FID ↓        R@1 ↑    R@3 ↑    FID ↓
Real       -         -          0.496    0.821    0.038        0.290    0.563    0.011
GPT-2      0.02M     355M       0.206    0.402    54.017       0.037    0.109    125.824
GPT-2      0.08M     355M       0.468    0.791    0.096        0.055    0.155    124.230
GPT-2      0.5M      355M       0.358    0.618    4.852        0.252    0.533    0.636
GPT-2      1M        355M       0.357    0.614    5.083        0.264    0.542    0.516
LLaMA-2    0.02M     7B         0.207    0.405    53.354       0.041    0.109    113.189
LLaMA-2    0.08M     7B         0.471    0.794    0.159        0.074    0.185    127.664
LLaMA-2    0.5M      7B         0.372    0.627    4.908        0.256    0.522    1.084
LLaMA-2    1.0M      7B         0.351    0.602    5.582        0.263    0.536    0.545
LLaMA-3    0.02M     8B         0.217    0.418    54.004       0.039    0.102    117.561
LLaMA-3    0.08M     8B         0.483    0.802    0.103        0.071    0.183    125.310
LLaMA-3    0.5M      8B         0.363    0.625    4.798        0.256    0.533    0.512
LLaMA-3    1M        8B         0.354    0.611    5.100        0.266    0.557    0.394
LLaMA-2    0.02M     13B        0.225    0.436    53.447       0.040    0.107    117.594
LLaMA-2    0.08M     13B        0.486    0.805    0.132        0.074    0.186    126.999
LLaMA-2    0.5M      13B        0.375    0.636    4.792        0.259    0.520    0.511
LLaMA-2    1.0M      13B        0.359    0.612    5.370        0.298    0.599    0.595
5.2 DISCUSSION OF SCALING UP MOTION GENERATION
In this section, we investigate the impact of model size and data scale on motion generation perfor-
mance. We utilize the motion autoencoder (Guo et al., 2022a) retrained on Motion-X and Motion-
Base datasets to evaluate performance on their respective test sets. We categorize our training data
into four scales: 0.02M (HumanML3D only), 0.08M (Motion-X only), 0.5M (MotionBase-0.5), and
1M (MotionBase-1.0). To ensure fair comparison, we employ the same VQ as the motion tokenizer,
maintaining consistency across experiments to validate our conclusions.
Does increasing model size benefit motion generation? Yes. As shown in Table 2, our results
demonstrate that increasing model size leads to significant performance improvements when pro-
vided with the same amount of training data. Specifically, Llama2-13b outperforms Llama2-7b,
which in turn surpasses GPT2-medium, illustrating a clear trend of performance gains as model ca-
pacity increases. This suggests that models with larger size are better equipped to capture diverse,
complex patterns and relationships within human motions.
Does increasing data scale benefit motion generation? Yes. In Table 2, when using the same foun-
dation model, increasing the scale of training data leads to substantial improvement on MotionBase
test set, aligning with our expected scaling laws. This improvement is particularly pronounced in
the R-precision metric, emphasizing the critical role of data scale in enhancing semantic alignment
between generated motions and text prompts. However, contrary to our expectations, we observe
a noticeable performance decline on Motion-X test set if not trained on Motion-X (0.08M). We
attribute this to the limitations of the retrieval-based evaluation model, as discussed in Section 5.4.
Table 3: Comparison with existing SoTA methods on the HumanML3D benchmark. Results marked with ∗ represent values reproduced using the officially released code, while unmarked results are taken from the original papers.

Method           Decoder         R@1 ↑    R@3 ↑    FID ↓    MMDist ↓
Real             -               0.511    0.797    0.002    2.974
MLD              -               0.481    0.772    0.473    3.196
MotionDiffuse    -               0.491    0.782    0.630    3.113
T2M-GPT          GPT-2           0.492    0.775    0.141    3.121
MotionGPT1,∗     T5              0.409    0.667    0.162    3.992
MotionGPT1       T5              0.492    0.778    0.232    3.096
MotionGPT2,∗     Llama-2-13B     0.367    0.654    0.571    3.981
MotionGPT2,∗     Llama-1-13B     0.363    0.633    0.592    4.029
MotionGPT2       Llama-1-13B     0.411    0.696    0.542    3.584
MotionLLM        Gemma-2b        0.482    0.770    0.491    3.138
AvatarGPT        Llama-1-13B     0.389    0.623    0.567    -
Ours             Llama-2-13B     0.519    0.803    0.166    2.964

Does the large motion model perform competitively with SoTA? We evaluate our large motion model on the
widely adopted HumanML3D benchmark. We compare its performance against a variety of SoTA approaches.
This includes diffusion-based methods such as MLD (Chen et al., 2023) and MotionDiffuse (Zhang et al.,
2022), as well as the GPT-based T2M-GPT (Zhang et al., 2023a). We also compare against LLM fine-tuning
methods like MotionGPT (Jiang et al., 2023; Zhang et al., 2024b), MotionLLM (Wu et al., 2024), and
AvatarGPT (Zhou et al., 2024). As shown in Table 3, our model, which utilizes Llama-2-13B as the decoder
and calculates the loss over the entire concatenated sequence of input text, achieves SoTA performance.
Our large motion model significantly outperforms other LLM-based methods such as MotionGPT and
AvatarGPT, as well as the earlier T2M-GPT. In particular, we observe substantial improvements in key
metrics such as R@1, R@3, and MMDist, highlighting our model's ability to generate motion sequences that
are better aligned with text descriptions and of higher quality.
Slow convergence of large motion models. To
evaluate the convergence speed of large motion mod-
els, we train GPT-2, Llama2-7b, and Llama3.1-8b
for 300 epochs on Motion-X. The training curves with R@1 performance are illustrated in Figure 3.
We observe that all large motion models nearly con-
verge by 200 epochs, with larger models converg-
ing faster. Initializing these models with pre-trained
weights proves beneficial for speeding up conver-
gence. Compared to large multimodal models like
LLaVA (Liu et al., 2023), large motion models re-
quire more epochs to capture the complex represen-
tations of motion sequences. We attribute the slow
convergence of these models to the limited represen-
tation capacity of the motion tokenizer, which con-
tains only 512 motion tokens. This suggests the need
to optimize the motion tokenizer and expand its rep-
Figure 3: Training curves with Y-axis denot-
ing R@1 retrieval accuracy. All these mod-
els are trained for 300 epochs at most and are
evaluated every 1000 steps.
resentation space. To address this, we explore the 2D-LFQ quantization method as a promising alternative.
Does Static and Synthetic Data help? Yes, the addition of static image data and synthesized data both
contribute to improvements, as illustrated in Table 4; more analysis can be found in Appendix C.1.
Table 4: Ablation of the effectiveness of synthetic and static data, which take about 28% and 44% of all data,
respectively.
TRAIN SET           R@1 ↑    R@3 ↑    FID ↓
Real                0.290    0.563    0.011
w/o static & syn    0.111    0.248    57.719
w/o static          0.120    0.252    55.983
MotionBase          0.264    0.542    0.516
Do large motion models outperform in out-of-
distribution setup? Yes. We present the results
in Table 5. This ablation is essential for further val-
idating the generalization capabilities of large mo-
tion models, as the improvements observed in Ta-
ble 2 may stem from the inclusion of additional in-
domain data from Motion-X. In this setup, we select
four subsets from MotionBase, comprising 90K samples (UNSEEN-90K), for evaluation, while the
remaining 38 subsets are used for training. This ensures that the test set consists entirely of out-
of-domain (OOD) samples. We compare the performance of models trained on HumanML3D, MotionX, and
MotionBase-#38, where #N denotes the number of training subsets; all models use the GPT2-medium
architecture. The results on the OOD test set
clearly demonstrate that the model trained on MotionBase significantly outperforms those trained
on HumanML3D and MotionX, particularly in terms of R@1 and R@3 metrics. These findings
strongly highlight the superior generalization ability of large motion models when handling unseen
OOD data, especially when trained on diverse, large-scale datasets. However, we once again observe
unexpected results with the FID metric, which will be discussed further in Section 5.4.
Figure 4: Comparison of different motion quantization methods on Motion-X (left) and MotionBase
(right). Note that we only show MPJPE (↓) results here; FID results are shown in Appendix C.9.
5.3 DISCUSSION OF MOTION QUANTIZATION
Table 5: Ablation of out-of-domain evaluation on the UNSEEN-90K dataset, where #N denotes that we use N
subsets of MotionBase for training.
TRAIN SET         R@1 ↑    R@3 ↑    FID ↓
Real              0.147    0.349    0.005
HumanML3D         0.032    0.101    204.833
MotionX           0.042    0.119    178.368
MotionBase-#38    0.136    0.321    10.613
In this section, we investigate the impact of
different motion quantization methods. We
compare our proposed 2D lookup-free quan-
tization (2D-LFQ) against
two commonly
used approaches: residual vector quantization
(RVQ) and vector quantization (VQ), across
various codebook sizes ranging from 2^8 to
2^16. The number of parameters for RVQ/VQ
and 2D-LFQ are 19.43M and 108.35M, re-
spectively. As shown in Figure 4, 2D-LFQ
demonstrates significant improvements over
both RVQ and VQ. Notably, as the codebook
size increases, 2D-LFQ continues to enhance performance, while RVQ and VQ experience dimin-
ishing returns or performance degradation with larger codebooks. Our deeper analysis attributes
these gains to better codebook utilization by 2D-LFQ. Figure 5 illustrates that the utilization rates
for VQ and RVQ begin to decline once the codebook size exceeds 2^10, which corresponds to the
peak performance for these methods, whereas the utilization of 2D-LFQ continues to increase with
larger codebooks. Additionally, we conduct further experiments to validate the benefits of 2D mo-
tion encoding in Appendix C.9.
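To make the utilization metric plotted in Figure 5 concrete, the following is a minimal sketch (our own illustration, with hypothetical variable names) of how codebook utilization can be measured: tokenize an evaluation set and count the fraction of codebook entries that are selected at least once.

```python
import numpy as np

def codebook_utilization(token_ids: np.ndarray, codebook_size: int) -> float:
    """Fraction of codebook entries that appear at least once in the tokenized data."""
    used = np.unique(token_ids)
    return used.size / codebook_size

# Toy example: a 2^10-entry codebook in which only 300 distinct tokens ever occur.
rng = np.random.default_rng(0)
ids = rng.integers(0, 300, size=(1000, 64))  # hypothetical tokenized motion clips
print(f"utilization: {codebook_utilization(ids, 2**10):.2%}")
```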
5.4 LIMITATION OF AUTOMATED METRIC
As mentioned earlier, the FID scores in Table 2
and Table 5 yield unexpected results. Specifically,
when evaluating on Motion-X and UNSEEN-90K,
FID achieves its best performance when trained
on Motion-X, significantly outperforming both the
smaller HumanML3D and the larger-scale Motion-
Base.
In this section, we aim to investigate this anomaly. FID, a standard metric widely used for generation tasks, is typically measured by a pretrained evaluator. In traditional image generation, FID is calculated using a well-trained, robust visual encoder like InceptionNet (Szegedy et al., 2015), which is trained on millions of images. However, the evaluator currently used to compute FID for motion generation is a simple motion autoencoder with a very small parameter scale (Guo et al., 2022a). Since this motion autoencoder is trained on limited data consisting of only 20K motions, we argue that it may lack the generalization needed for robust performance, leading to difficulties in reliably capturing the complex semantic alignment between text and motion. Similar unexpected results occur in motion reconstruction as well. As shown in Table 6, the FID score on HumanML3D is more than an order of magnitude higher when comparing 2D-LFQ and VQ-VAE, despite the former achieving a
much lower MPJPE. When tested on MotionBase, 2D-LFQ obtains the highest FID score even while
achieving the best MPJPE. We observe the same issue with other metrics like MMDist, as discussed
in Appendix C.1. Notably, Voas et al. (2023) have mentioned that existing metrics are sensitive to
the quality of the embedding space and do not always align with human perception. These findings
highlight the need for a more robust and fair metric for large motion models moving forward.
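For reference, the following is a minimal sketch of the FID computation itself, given embeddings produced by whatever pretrained evaluator is used (InceptionNet for images, the small motion autoencoder of Guo et al. (2022a) here); the weakness discussed above lies in the evaluator, not in this formula. The function name and array shapes are our own illustrative choices.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    """FID between two sets of evaluator embeddings, each of shape (N, D)."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts caused by numerical error
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```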
Figure 5: Comparison of codebook utiliza-
tion for different motion quantization.
Table 6: Robustness investigation of the evaluation metrics on the motion reconstruction task.
                                    HumanML3D         Motion-X          MotionBase
Tokenizer   #Num.    #Param.    FID ↓   MPJPE ↓   FID ↓   MPJPE ↓   FID ↓    MPJPE ↓
VQ-VAE      512      19.43M     0.078   69.2      0.852   106.4     4.366    123.6
RQ-VAE      512      19.43M     0.05    37.5      0.568   56.9      4.026    78.2
2D-LFQ      16384    108.35M    1.769   45.6      0.295   54.1      7.853    64.1
6 CONCLUSION
In this paper, we explore how to advance the field of large-scale motion generation. To this end, we
introduce a large-scale motion dataset named MotionBase, which includes detailed text descriptions
and rich modality annotations, providing a strong foundation for effectively training large motion
models. Our research highlights key findings, such as the impact of scaling both data and model size.
Additionally, we identify potential limitations in the current evaluation metrics, particularly when
assessing diverse and unseen motions. To enhance the benefits that large motion models can derive from
extensive motion data, we propose a novel motion quantization approach that treats motion clips as
2D images and constructs a finite-scale codebook, eliminating the need for token lookups. We hope
that this research offers valuable direction for future work in large-scale motion generation.
REFERENCES
Hyemin Ahn, Timothy Ha, Yunho Choi, Hwiyeon Yoo, and Songhwai Oh. Text2action: Generative
adversarial synthesis from language to action. In 2018 IEEE International Conference on Robotics
and Automation (ICRA), pp. 5915–5920. IEEE, 2018.
Chaitanya Ahuja and Louis-Philippe Morency. Language2pose: Natural language grounded pose
forecasting. In 2019 International Conference on 3D Vision (3DV), pp. 719–728. IEEE, 2019.
Sadegh Aliakbarian, Fatemeh Sadat Saleh, Mathieu Salzmann, Lars Petersson, and Stephen Gould.
A stochastic conditioning scheme for diverse human motion prediction. In Proceedings of the
IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5223–5232, 2020.
Max Bain, Arsha Nagrani, Gül Varol, and Andrew Zisserman. Frozen in time: A joint video and
image encoder for end-to-end retrieval. In Proceedings of the IEEE/CVF international conference
on computer vision, pp. 1728–1738, 2021.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are
few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
Haoye Cai, Chunyan Bai, Yu-Wing Tai, and Chi-Keung Tang. Deep video generation, prediction and
completion of human action sequences. In Proceedings of the European conference on computer
vision (ECCV), pp. 366–382, 2018.
Pablo Cervantes, Yusuke Sekikawa, Ikuro Sato, and Koichi Shinoda. Implicit neural representations
for variable length human motion generation. In European Conference on Computer Vision, pp.
356–372. Springer, 2022.
Xin Chen, Biao Jiang, Wen Liu, Zilong Huang, Bin Fu, Tao Chen, and Gang Yu. Executing your
commands via motion diffusion in latent space. In Proceedings of the IEEE/CVF Conference on
Computer Vision and Pattern Recognition, pp. 18000–18010, 2023.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam
Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm:
Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
Jihoon Chung, Cheng-hsin Wuu, Hsuan-ru Yang, Yu-Wing Tai, and Chi-Keung Tang. Haa500:
Human-centric atomic action dataset with curated videos. In Proceedings of the IEEE/CVF inter-
national conference on computer vision, pp. 13465–13474, 2021.
Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang,
Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language
models with instruction tuning. arXiv preprint arXiv:2305.06500, 2023.
Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha
Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models.
arXiv preprint arXiv:2407.21783, 2024.
Katerina Fragkiadaki, Sergey Levine, Panna Felsen, and Jitendra Malik. Recurrent network models
for human dynamics. In Proceedings of the IEEE international conference on computer vision,
pp. 4346–4354, 2015.
Partha Ghosh, Jie Song, Emre Aksan, and Otmar Hilliges. Learning human motion models for long-term predictions. In 2017 International Conference on 3D Vision (3DV), pp. 458–466. IEEE, 2017.
Anand Gopalakrishnan, Ankur Mali, Dan Kifer, Lee Giles, and Alexander G Ororbia. A neural
temporal model for human motion prediction. In Proceedings of the IEEE/CVF Conference on
Computer Vision and Pattern Recognition, pp. 12116–12125, 2019.
Chuan Guo, Xinxin Zuo, Sen Wang, Shihao Zou, Qingyao Sun, Annan Deng, Minglun Gong, and
Li Cheng. Action2motion: Conditioned generation of 3d human motions. In Proceedings of the
28th ACM International Conference on Multimedia, pp. 2021–2029, 2020.
Chuan Guo, Shihao Zou, Xinxin Zuo, Sen Wang, Wei Ji, Xingyu Li, and Li Cheng. Generating
diverse and natural 3d human motions from text. In Proceedings of the IEEE/CVF Conference on
Computer Vision and Pattern Recognition, pp. 5152–5161, 2022a.
Chuan Guo, Xinxin Zuo, Sen Wang, and Li Cheng. Tm2t: Stochastic and tokenized modeling for
the reciprocal generation of 3d human motions and texts. In European Conference on Computer
Vision, pp. 580–597. Springer, 2022b.
Chuan Guo, Yuxuan Mu, Muhammad Gohar Javed, Sen Wang, and Li Cheng. Momask: Gener-
ative masked modeling of 3d human motions. In Proceedings of the IEEE/CVF Conference on
Computer Vision and Pattern Recognition, pp. 1900–1910, 2024.
Agrim Gupta, Stephen Tian, Yunzhi Zhang, Jiajun Wu, Roberto Martín-Martín, and Li Fei-Fei.
Maskvit: Masked visual pre-training for video prediction. arXiv preprint arXiv:2206.11894,
2022.
Biao Jiang, Xin Chen, Wen Liu, Jingyi Yu, Gang Yu, and Tao Chen. Motiongpt: Human motion as a
foreign language. Advances in Neural Information Processing Systems, 36:20067–20079, 2023.
Doyup Lee, Chiheon Kim, Saehoon Kim, Minsu Cho, and Wook-Shin Han. Autoregressive image
generation using residual quantization. In Proceedings of the IEEE/CVF Conference on Computer
Vision and Pattern Recognition, pp. 11523–11532, 2022.
Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Fanyi Pu, Jingkang Yang, Chunyuan Li, and Ziwei Liu. Mimic-it: Multi-modal in-context instruction tuning. arXiv preprint arXiv:2306.05425, 2023a.
Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023b.
Jing Lin, Ailing Zeng, Haoqian Wang, Lei Zhang, and Yu Li. One-stage 3d whole-body mesh recovery with component aware transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 21159–21168, 2023.
Jing Lin, Ailing Zeng, Shunlin Lu, Yuanhao Cai, Ruimao Zhang, Haoqian Wang, and Lei Zhang.
Motion-x: A large-scale 3d expressive whole-body human motion dataset. Advances in Neural
Information Processing Systems, 36, 2024.
Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv
preprint arXiv:2304.08485, 2023.
Zhenguang Liu, Shuang Wu, Shuyuan Jin, Shouling Ji, Qi Liu, Shijian Lu, and Li Cheng. Investigating pose representations and motion contexts modeling for 3d motion prediction. IEEE transactions on pattern analysis and machine intelligence, 45(1):681–697, 2022.
Shunlin Lu, Ling-Hao Chen, Ailing Zeng, Jing Lin, Ruimao Zhang, Lei Zhang, and Heung-Yeung
Shum. Humantomato: Text-aligned whole-body motion generation. In Forty-first International
Conference on Machine Learning, 2023.
Zhengyi Luo, Jinkun Cao, Kris Kitani, Weipeng Xu, et al. Perpetual humanoid control for real-
time simulated avatars. In Proceedings of the IEEE/CVF International Conference on Computer
Vision, pp. 10895–10904, 2023.
Naureen Mahmood, Nima Ghorbani, Nikolaus F Troje, Gerard Pons-Moll, and Michael J Black.
Amass: Archive of motion capture as surface shapes. In Proceedings of the IEEE/CVF interna-
tional conference on computer vision, pp. 5442–5451, 2019.
Wei Mao, Miaomiao Liu, Mathieu Salzmann, and Hongdong Li. Learning trajectory dependen-
cies for human motion prediction. In Proceedings of the IEEE/CVF international conference on
computer vision, pp. 9489–9497, 2019.
Dushyant Mehta, Helge Rhodin, Dan Casas, Pascal Fua, Oleksandr Sotnychenko, Weipeng Xu,
and Christian Theobalt. Monocular 3d human pose estimation in the wild using improved cnn
supervision. In 2017 international conference on 3D vision (3DV), pp. 506–516. IEEE, 2017.
Fabian Mentzer, David Minnen, Eirikur Agustsson, and Michael Tschannen. Finite scalar quantiza-
tion: Vq-vae made simple. arXiv preprint arXiv:2309.15505, 2023.
OpenAI. GPT-4o mini: advancing cost-efficient intelligence. https://openai.com/index/
gpt-4o-mini-advancing-cost-efficient-intelligence/, 2024.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong
Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to fol-
low instructions with human feedback. Advances in neural information processing systems, 35:
27730–27744, 2022.
Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed AA Osman, Dimitrios
Tzionas, and Michael J Black. Expressive body capture: 3d hands, face, and body from a single
image. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition,
pp. 10975–10985, 2019.
Mathis Petrovich, Michael J Black, and Gül Varol. Temos: Generating diverse human motions from
textual descriptions. In European Conference on Computer Vision, pp. 480–497. Springer, 2022.
Huaijin Pi, Sida Peng, Minghui Yang, Xiaowei Zhou, and Hujun Bao. Hierarchical generation of
human-object interactions with diffusion probabilistic models. In Proceedings of the IEEE/CVF
International Conference on Computer Vision, pp. 15061–15073, 2023.
Matthias Plappert, Christian Mandery, and Tamim Asfour. The kit motion-language dataset. Big
data, 4(4):236–252, 2016.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language
models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal,
Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual
models from natural language supervision. In International conference on machine learning, pp.
8748–8763. PMLR, 2021.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi
Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text
transformer. The Journal of Machine Learning Research, 21(1):5485–5551, 2020.
Machel Reid, Nikolay Savinov, Denis Teplyashin, Dmitry Lepikhin, Timothy Lillicrap, Jean-
baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, et al. Gem-
ini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint
arXiv:2403.05530, 2024.
István Sárándi, Alexander Hermans, and Bastian Leibe. Learning 3d human pose estimation from
dozens of datasets using a geometry-aware autoencoder to bridge between skeleton formats. In
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 2956–
2966, 2023.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Du-
mitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In
Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1–9, 2015.
Omid Taheri, Nima Ghorbani, Michael J Black, and Dimitrios Tzionas. Grab: A dataset of whole-
body human grasping of objects. In Computer Vision–ECCV 2020: 16th European Conference,
Glasgow, UK, August 23–28, 2020, Proceedings, Part IV 16, pp. 581–600. Springer, 2020.
Guy Tevet, Brian Gordon, Amir Hertz, Amit H Bermano, and Daniel Cohen-Or. Motionclip: Ex-
posing human motion generation to clip space. In European Conference on Computer Vision, pp.
358–374. Springer, 2022.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée
Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and
efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023a.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Niko-
lay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open founda-
tion and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023b.
Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. Advances in
neural information processing systems, 30, 2017.
Jordan Voas, Yili Wang, Qixing Huang, and Raymond Mooney. What is the best automated metric
for text to motion generation? In SIGGRAPH Asia 2023 Conference Papers, pp. 1–11, 2023.
Yi Wang, Yinan He, Yizhuo Li, Kunchang Li, Jiashuo Yu, Xin Ma, Xinhao Li, Guo Chen, Xinyuan
Chen, Yaohui Wang, et al. Internvid: A large-scale video-text dataset for multimodal understand-
ing and generation. arXiv preprint arXiv:2307.06942, 2023.
Zhenyi Wang, Ping Yu, Yang Zhao, Ruiyi Zhang, Yufan Zhou, Junsong Yuan, and Changyou Chen. Learning diverse stochastic human-action generators by learning smooth latent transitions. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pp. 12281–12288, 2020.
Qi Wu, Yubo Zhao, Yifan Wang, Yu-Wing Tai, and Chi-Keung Tang. Motionllm: Multimodal
motion-language learning with large language models. arXiv preprint arXiv:2405.17013, 2024.
Boshen Xu, Ziheng Wang, Yang Du, Sipeng Zheng, Zhinan Song, and Qin Jin. Egonce++: Do egocentric video-language models really understand hand-object interactions? arXiv preprint arXiv:2405.17719, 2024.
Yufei Xu, Jing Zhang, Qiming Zhang, and Dacheng Tao. Vitpose: Simple vision transformer base-
lines for human pose estimation. Advances in Neural Information Processing Systems, 35:38571–
38584, 2022.
Wilson Yan, Yunzhi Zhang, Pieter Abbeel, and Aravind Srinivas. Videogpt: Video generation using
vq-vae and transformers. arXiv preprint arXiv:2104.10157, 2021.
Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen
Hu, Pengcheng Shi, Yaya Shi, et al. mplug-owl: Modularization empowers large language models
with multimodality. arXiv preprint arXiv:2304.14178, 2023.
Tackgeun You, Saehoon Kim, Chiheon Kim, Doyup Lee, and Bohyung Han. Locally hierarchi-
cal auto-regressive modeling for image generation. Advances in Neural Information Processing
Systems, 35:16360–16372, 2022.
Lijun Yu, José Lezama, Nitesh B Gundavarapu, Luca Versari, Kihyuk Sohn, David Minnen, Yong
Cheng, Agrim Gupta, Xiuye Gu, Alexander G Hauptmann, et al. Language model beats diffusion–
tokenizer is key to visual generation. arXiv preprint arXiv:2310.05737, 2023.
Ye Yuan, Umar Iqbal, Pavlo Molchanov, Kris Kitani, and Jan Kautz. Glamr: Global occlusion-
aware human mesh recovery with dynamic cameras. In Proceedings of the IEEE/CVF conference
on computer vision and pattern recognition, pp. 11038–11049, 2022.
Jianrong Zhang, Yangsong Zhang, Xiaodong Cun, Yong Zhang, Hongwei Zhao, Hongtao Lu, Xi Shen, and Ying Shan. Generating human motion from textual descriptions with discrete representations. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 14730–14740, 2023a.
Mingyuan Zhang, Zhongang Cai, Liang Pan, Fangzhou Hong, Xinying Guo, Lei Yang, and Ziwei
Liu. Motiondiffuse: Text-driven human motion generation with diffusion model. arXiv preprint
arXiv:2208.15001, 2022.
Siwei Zhang, Bharat Lal Bhatnagar, Yuanlu Xu, Alexander Winkler, Petr Kadlecek, Siyu Tang, and
Federica Bogo. Rohm: Robust human motion reconstruction via diffusion. In Proceedings of the
IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14606–14617, 2024a.
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, and Tong Sun.
Llavar: Enhanced visual instruction tuning for text-rich image understanding. arXiv preprint
arXiv:2306.17107, 2023b.
Yaqi Zhang, Di Huang, Bin Liu, Shixiang Tang, Yan Lu, Lu Chen, Lei Bai, Qi Chu, Nenghai
Yu, and Wanli Ouyang. Motiongpt: Finetuned llms are general-purpose motion generators. In
Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp. 7368–7376, 2024b.
Bo Zhao, Boya Wu, and Tiejun Huang. Svit: Scaling up visual instruction tuning. arXiv preprint
arXiv:2307.04087, 2023.
Sipeng Zheng, Yicheng Feng, Zongqing Lu, et al. Steve-eye: Equipping llm-based embodied agents with visual perception in open worlds. In The Twelfth International Conference on Learning Representations, 2023.
Sipeng Zheng, Bohan Zhou, Yicheng Feng, Ye Wang, and Zongqing Lu. Unicode: Learning a
unified codebook for multimodal large language models. arXiv preprint arXiv:2403.09072, 2024.
Zixiang Zhou, Yu Wan, and Baoyuan Wang. Avatargpt: All-in-one framework for motion under-
standing planning generation and beyond. In Proceedings of the IEEE/CVF Conference on Com-
puter Vision and Pattern Recognition, pp. 1357–1366, 2024.
Appendices
A ADDITIONAL DETAILS OF MOTIONBASE
In this section, we provide more details about MotionBase that are not included in the main paper
due to spatial limitations.
A.1 STATISTIC ANALYSES
MotionBase contains over 1 million motion sequences from 42 different public datasets and web
videos on the Internet. Subsets of MotionX, including Animation, Perform, Dance, Aist, Kungfu,
GRAB (Taheri et al., 2020), Music, Idea400 (Lin et al., 2024), HAA500 (Chung et al., 2021), Game
Motion, and Fitness, are included in MotionBase. Recognizing the high cost of collecting and anno-
tating videos, we also see the untapped potential of images for motion understanding. Consequently,
MotionBase incorporates image data by repeating each image across 64 frames and treating it as a
motion sequence. For the datasets with long-range videos, such as MPI-INF-3DHP (Mehta et al.,
2017), we segment the footage into sub-clips with random durations ranging from 10 seconds to one
minute. Figure 6 and Figure 7 illustrate the scale and length distributions of MotionBase.
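The two preprocessing choices above (treating an image as a 64-frame static motion, and splitting long videos into sub-clips of 10 seconds to one minute) can be sketched as follows; this is a simplified illustration with our own function names, not the released pipeline.

```python
import numpy as np

def image_pose_to_static_motion(pose: np.ndarray, num_frames: int = 64) -> np.ndarray:
    """Repeat a single-frame pose vector (D,) into a static motion sequence (num_frames, D)."""
    return np.tile(pose[None, :], (num_frames, 1))

def segment_long_video(num_frames: int, fps: float, rng: np.random.Generator,
                       min_sec: float = 10.0, max_sec: float = 60.0):
    """Split a long sequence into sub-clips with random durations between 10 s and 1 min."""
    clips, start = [], 0
    while start < num_frames:
        length = int(rng.uniform(min_sec, max_sec) * fps)
        clips.append((start, min(start + length, num_frames)))
        start += length
    return clips

print(image_pose_to_static_motion(np.zeros(263)).shape)               # (64, 263)
print(segment_long_video(9000, fps=30, rng=np.random.default_rng(0))[:3])
```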
Figure 6: The scale distribution of motion sequences across subsets of MotionBase.
A.2 PROMPT OF MOTION DESCRIPTION
In this paper, we use Gemini-1.5-pro (Reid et al., 2024) and GPT-4o-mini (OpenAI, 2024) as large
multimodal models (LMM) to generate textual annotations for video and image data, respectively.
For each person-centric sample, we first crop and track the person’s body using the corresponding
bounding box(es). The LMM is then tasked with focusing on the person’s physical movements and
positions in the global space to generate detailed descriptions. Unlike previous datasets, we provide
Figure 7: The length distribution across different subsets of MotionBase.
more granular motion descriptions by dividing the body into upper and lower sections, prompting
the LMM to generate part-specific descriptions (“part-level”). Additionally, an overall summary of
the entire body’s movement (“whole-body”) is also produced. Figure 8 illustrates the prompt used
to caption human motion sequences in MotionBase.
A.3 WORD DISTRIBUTION ANALYSIS
To further explore the annotated motion text, we generate word clouds from the entire text corpus
in MotionBase. Since the annotations in MotionBase consist of both whole-body and part-level
descriptions, we create separate word clouds for general labels and more detailed annotations, as
shown in Figure 9 and Figure 10, respectively. In Figure 9, we observe that the whole-body anno-
tations primarily highlight high-level motion activities, such as standing, sitting, and walking. In
contrast, Figure 10 shows that part-level annotations focus more on specific body movements, in-
cluding the torso, shoulders, legs, and arms. We believe that this hierarchical structure of annotations
will enhance the understanding of motion.
B ADDITIONAL OVERVIEW OF MODEL ARCHITECTURE
Due to space limitations in the main paper, we provide the overview of our model architecture in
Figure 11 in this appendix. Following most LMMs, our large motion model consists of two stages:
pre-training and fine-tuning. During the pre-training stage, we train a motion encoder, a motion
decoder, and a motion codebook to represent motions using discrete tokens. With this motion to-
kenizer, we fine-tune an autoregressive language model to predict motion tokens. In the inference
stage, the input text is processed by the language model to generate motion tokens in an autoregres-
sive manner, which are then decoded into natural motion by the pre-trained motion decoder.
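A minimal sketch of this two-stage design is given below; the module sizes, the 263-dimensional motion features, and the nearest-neighbor quantization are illustrative placeholders rather than the exact released implementation.

```python
import torch
import torch.nn as nn

class MotionTokenizer(nn.Module):
    """Stage 1: encode a motion clip to discrete codebook indices and decode them back."""
    def __init__(self, dim_motion=263, dim_latent=128, codebook_size=512):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_motion, dim_latent), nn.ReLU(),
                                     nn.Linear(dim_latent, dim_latent))
        self.decoder = nn.Sequential(nn.Linear(dim_latent, dim_latent), nn.ReLU(),
                                     nn.Linear(dim_latent, dim_motion))
        self.codebook = nn.Embedding(codebook_size, dim_latent)

    def encode(self, motion):                      # motion: (T, dim_motion)
        z = self.encoder(motion)                   # (T, dim_latent)
        dist = torch.cdist(z, self.codebook.weight)
        return dist.argmin(dim=-1)                 # (T,) discrete motion tokens

    def decode(self, token_ids):                   # token_ids: (T,)
        return self.decoder(self.codebook(token_ids))

# Stage 2 (inference sketch): a causal LM whose vocabulary is extended with <motion_id_k>
# tokens generates motion tokens from the text prompt, and the tokenizer's decoder turns
# them back into a motion sequence.
tokenizer = MotionTokenizer()
motion = torch.randn(64, 263)                      # a hypothetical 64-frame motion clip
ids = tokenizer.encode(motion)
reconstruction = tokenizer.decode(ids)
print(ids.shape, reconstruction.shape)
```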
Figure 8: Prompt examples to label human motions in the video. We use Gemini-1.5-pro and GPT-
4o-mini to generate motion descriptions for the video and image data, respectively. We provide
“whole-body” (UP) and “part-level” (DOWN) labels for each sample in the dataset.
Prompt (whole-body and part-level): Begin by providing a general overview of the person's current action (e.g., walking, sitting, interacting) within the BBOX area. Then, proceed with a detailed breakdown, focusing exclusively on the physical movements and positions of the person within the BBOX. For the upper body, describe the position and movement of the arms, hands, shoulders, and torso. For the lower body, detail the position and movement of the legs, feet, and overall balance. Ensure the description strictly covers physical actions without mentioning facial expressions, clothing, or environmental elements outside the BBOX.
Example: The person is standing still, observing something in front of them.
• Upper body: Their arms hang relaxed by their sides, with the shoulders slightly back and the chest open. The torso is upright, with minimal movement, indicating a calm, neutral stance.
• Lower body: Both feet are planted firmly on the ground, shoulder-width apart. The knees are slightly bent, and their weight is evenly distributed between both legs.
Sample part-level annotation: The person is standing within the designated area, engaging in a conversation seemingly directed toward someone positioned off-camera to the left.
• Upper body. Arms: Initially held loosely at the sides, the arms transition to various positions throughout the interaction. At times, they rise to chest level with palms open, suggesting an explanatory gesture. Occasionally, one or both arms extend outwards, indicating direction or emphasis. Hands: Hand movements correspond with arm gestures. Palms face upwards and outwards during open-handed motions, then relax to a neutral position when the arms are at rest. Shoulders: Shoulders remain relatively relaxed throughout, with subtle shifts in position reflecting the arm movements. They don't appear tense or raised, implying a generally comfortable stance. Torso: The torso largely remains stationary, facing forward, with slight turns coinciding with the shifting weight distribution of the lower body.
• Lower body. Legs: Legs maintain a comfortable stance, slightly apart, with the weight appearing balanced. There's a subtle shift in weight distribution as they adjust their stance. Feet: Feet remain planted on the ground, primarily shoulder-width apart. The positioning suggests a grounded and stable stance. Overall balance: The individual appears balanced and at ease throughout the interaction, with movements suggesting engagement in the conversation rather than discomfort or restlessness.
Figure 9: Word cloud of whole-body textual annotation in MotionBase.
Figure 10: Word cloud of part-level textual annotation in MotionBase.
C ADDITIONAL EXPERIMENTAL RESULTS
In this section, we provide more experimental analysis that cannot be presented in the main paper due to space limitations.
Table 7: Ablation of the effectiveness of synthetic data and static data.
TRAIN SET          R@1 ↑   R@3 ↑   FID ↓    MMDist ↓
Real               0.290   0.563   0.011    3.480
w/o static & syn   0.111   0.248   57.719   8.412
w/o static         0.120   0.252   55.983   8.175
MotionBase         0.264   0.542   0.516    4.007
Figure 11: Overview of the large motion model, which can be divided into two stages. In the first stage (left), we pre-train a motion VQ-VAE to quantize motion sequences into tokens. In the second stage (right), we fine-tune an autoregressive language model to predict motion tokens.
Table 8: Results on the test set with synthetic and static data filtered out.
TRAIN SET          R@1 ↑   R@3 ↑   FID ↓   MMDist ↓
Real               0.196   0.474   0.006   1.647
w/o static & syn   0.167   0.396   1.740   2.323
w/o static         0.166   0.393   1.780   2.356
MotionBase         0.168   0.399   1.614   2.300
C.1 ABLATION OF SYNTHESIS AND STATIC DATA
For handling static data, our core strategy is to introduce specific language prompts during training.
Specifically, by adding language markers such as "keep the action still," we explicitly guide the
model to understand the distinction between static and dynamic actions. Prompt-based methods
can effectively differentiate between different motion distributions. To validate this approach, we
conduct a series of ablation experiments. We train GPT2-medium on three variations of MotionBase:
without synthetic data, without image data, and without both synthetic data and image data. The
model is trained for 300 epochs with a learning rate of 2e-4. Using the VQ-VAE and retrieval model
trained on MotionBase, we test on the MotionBase test set and a subset of the test set where static
and synthetic data are filtered out. The results are shown in Table 7 and Table 8. Our findings
indicate that incorporating both static data (i.e., image data) and synthetic data leads to performance
improvements in terms of R-Precision.
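A minimal sketch of the prompt-marker strategy for static samples is shown below; the marker string follows the example quoted above, while the function and argument names are our own.

```python
def build_training_text(description: str, is_static: bool) -> str:
    """Prepend an explicit marker so the model can distinguish static from dynamic samples."""
    return f"keep the action still. {description}" if is_static else description

print(build_training_text("a person stands with arms crossed", is_static=True))
print(build_training_text("a person walks forward and waves", is_static=False))
```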
Table 9: Comparison of evaluations using different encoder models.
                                  EM_Humanml3d                EM_Motion-X
Decoder    #Inst.   #Param.   R@1 ↑   R@3 ↑   FID ↓     R@1 ↑   R@3 ↑   FID ↓
Real       -        -         0.511   0.797   0.002     0.496   0.821   0.038
GPT-2      0.02M    355M      0.466   0.752   0.101     0.358   0.651   0.050
GPT-2      0.08M    355M      0.462   0.744   0.208     0.362   0.656   0.754
LLaMA-2    0.02M    7B        0.497   0.778   0.214     0.378   0.671   0.122
LLaMA-2    0.08M    7B        0.474   0.758   0.452     0.376   0.673   0.518
LLaMA-3    0.02M    8B        0.500   0.783   0.173     0.380   0.675   0.094
LLaMA-3    0.08M    8B        0.499   0.786   0.264     0.393   0.696   0.591
LLaMA-2    0.02M    13B       0.519   0.803   0.166     0.395   0.695   0.105
LLaMA-2    0.08M    13B       0.504   0.790   0.393     0.400   0.700   0.637
C.2 ABLATION OF DIFFERENT ENCODER MODELS
Table 9 presents the evaluation results on the HumanML3D test set using different encoder mod-
els (EM). We employ the same dual-encoder architecture (Guo et al., 2022a) but trained it on two
distinct datasets: HumanML3D and Motion-X, where HumanML3D is a subset of Motion-X. The results highlight the limited generalization ability of the encoder model. When using the model trained on the larger Motion-X dataset, performance metrics on HumanML3D decrease. This suggests that training on the broader Motion-X dataset negatively impacts R-Precision performance on the HumanML3D subset. Furthermore, when the encoder model is trained on Motion-X, increasing the training data size for the text-to-motion model leads to significant performance gains. Conversely, when using the encoder model trained on HumanML3D, the performance of the text-to-motion model degrades as the training data size increases. This might be attributed to inherent limitations in the encoder model itself.

Table 10: Comparison between fine-tuning and learning from scratch on the Motion-X test set.
#Inst   From Scratch   R@1 ↑   R@3 ↑   FID ↓    MMDist ↓
Real    -              0.496   0.821   0.038    2.438
0.02M   Yes            0.035   0.103   16.904   9.280
0.02M   No             0.206   0.402   54.017   8.218
0.08M   Yes            0.460   0.782   0.113    2.862
0.08M   No             0.468   0.791   0.096    2.798

Table 11: Results of different loss calculation methods on the HumanML3D test set.
Loss Calculation   R@1 ↑   R@3 ↑   FID ↓   MMDist ↓
Real               0.511   0.797   0.002   2.974
Motion Seq Loss    0.388   0.650   0.680   3.919
Whole Seq Loss     0.466   0.752   0.101   3.234
C.3 ABLATION OF LEARNING FROM SCRATCH VS. FINE-TUNING
We compare the performance of fine-tuning GPT-2 against training it from scratch (random ini-
tialization). As shown in Table 10, fine-tuned models consistently outperform those trained from
scratch, particularly when trained on HumanML3D and evaluated on MotionX. The improvement from using a pretrained LLM highlights the importance of text pre-training in enhancing the model’s under-
standing of text descriptions and improving its generalization capabilities.
C.4 ABLATION OF DIFFERENT LOSS CALCULATION STRATEGIES
We also investigate the impact of different loss calculation strategies on model performance. We compare two strategies: 1) calculating the loss solely on the output motion tokens, and 2) calcu-
lating the loss on both the input text and the output motion tokens. As shown in Table 11, our
results indicate that the second strategy yields better performance. This improvement compared to
the first alternative is likely due to the strategy’s ability to prevent catastrophic forgetting of text
understanding. Additionally, it helps mitigate overfitting to motion patterns in the training data,
thereby enhancing the model’s generalization ability.
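The two loss-calculation strategies can be illustrated with the common ignore-index convention of causal language-modeling losses; this sketch assumes a HuggingFace-style setup where label positions set to -100 are excluded from the cross-entropy, and the token values are hypothetical.

```python
import torch

def build_labels(input_ids: torch.Tensor, text_len: int, motion_only: bool) -> torch.Tensor:
    """input_ids: (seq_len,) text tokens followed by motion tokens.

    motion_only=True  -> strategy 1: loss only on the output motion tokens.
    motion_only=False -> strategy 2: loss on both the input text and the motion tokens.
    """
    labels = input_ids.clone()
    if motion_only:
        labels[:text_len] = -100  # -100 is the usual ignore index for the cross-entropy loss
    return labels

seq = torch.tensor([11, 12, 13, 501, 502, 503])  # 3 hypothetical text tokens + 3 motion tokens
print(build_labels(seq, text_len=3, motion_only=True))
print(build_labels(seq, text_len=3, motion_only=False))
```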
C.5 ABLATION STUDY ON HIERARCHICAL TEXT AND BASIC TEXT
To investigate the effectiveness of hierarchical text representation, we conduct a series of ablation
experiments. As shown in Table 12, we compare the training results using hierarchical text with both
basic and detailed descriptions, against the results using only basic descriptions. The experimental
results demonstrate that hierarchical text can effectively enhance the model’s semantic understand-
ing, thereby improving the semantic matching of generated motions.
It is worth noting that the evaluation results for hierarchical text are sometimes overestimated, even
surpassing the ground truth. We hypothesize that this is because the evaluator itself is a network
model trained on the training set to fit its distribution, and may exhibit bias on the test set. If the generated text-motion data aligns better with the training set distribution, the evaluation metrics might even outperform the ground truth on the test set. Therefore, how to quantitatively evaluate motion generation performance remains an interesting research topic worthy of further exploration.

Table 12: Results of Hierarchical Text and Basic Text on MotionBase.
Training text       R@1 ↑   R@3 ↑   FID ↓   MMDist ↓
Real                0.290   0.563   0.011   3.480
Basic text          0.264   0.542   0.516   4.007
Hierarchical text   0.302   0.603   0.521   3.523

Table 13: Results of LoRA and full parameter fine-tuning on MotionBase.
Training method   R@1 ↑   R@3 ↑   FID ↓   MMDist ↓
Real              0.290   0.563   0.011   3.480
LoRA              0.249   0.520   1.896   3.869
Full Param        0.264   0.542   0.516   4.007
C.6 ABLATION STUDY ON LORA AND FULL PARAMETER FINE-TUNING
We conduct an ablation study comparing LoRA and full parameter fine-tuning. As shown in Ta-
ble 13, LoRA fine-tuning struggles to achieve competitive results. We attribute this limitation to
the introduction of new motion tokens, which necessitate substantial parameter adjustments for the
language model to comprehend these additional tokens. The constrained nature of LoRA fine-tuning
appears insufficient to effectively address these demands.
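To illustrate why the new motion tokens are demanding for LoRA, the sketch below extends a causal LM's vocabulary with <motion_id_k> tokens and, when applying LoRA, trains the resized embedding and output head in full; the model choice, rank, and module names are illustrative assumptions, not the exact configuration used in Table 13.

```python
# A minimal sketch (not the paper's code) of LoRA fine-tuning when the vocabulary is
# extended with new <motion_id_k> tokens. The key point: the resized embedding matrix
# and LM head must be trained fully alongside the low-rank adapters.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "gpt2-medium"  # illustrative choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

motion_tokens = [f"<motion_id_{i}>" for i in range(512)]
tokenizer.add_tokens(motion_tokens)
model.resize_token_embeddings(len(tokenizer))

lora_cfg = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["c_attn"],             # GPT-2 attention projection
    modules_to_save=["wte", "lm_head"],    # train the (resized) embeddings and head fully
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()
```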
C.7 EXPERIMENTAL COMPARISON WITH T2M-GPT ON MOTIONBASE
We train the T2M-GPT model on the MotionBase dataset and compare it with a model based on
GPT-2 medium. As shown in Table 14, despite comparable parameter counts, the T2M-GPT method
struggles to produce competitive results. Because of the inherent limitations of CLIP’s text encod-
ing capabilities, models trained this way struggle to understand a wider range of motion-related
language. We believe that large motion models based on decoder-only LLMs, which jointly train
text tokens and motion tokens, achieve better text-motion semantic alignment and stronger motion
generation capabilities.
Table 14: Results of T2M-GPT and GPT-2 on MotionBase.
Model          #Param.   R@1 ↑   R@3 ↑   FID ↓   MMDist ↓
Real           -         0.290   0.563   0.011   3.480
T2M-GPT        380M      0.243   0.504   1.909   4.593
GPT-2 Medium   355M      0.264   0.542   0.516   4.007

C.8 ABLATION OF MOTION GENERATION BASED ON LFQ
To validate the applicability of the LFQ quantization method for motion generation, we conduct experiments summarized in Table 15. These experiments include data scaling with GPT-2 and parameter scaling using 0.02M training samples. The results are consistent with our initial conclusions, confirming robust performance across scaling scenarios. Furthermore, LFQ demonstrates a slight
performance advantage over VQ when evaluated with GPT-2. Given that LFQ utilizes a significantly larger codebook, which increases training difficulty, we anticipate that further improvements could be achieved by scaling both model parameters and training data.
Figure 12: Comparison with different motion quantization on the Motion-X (left) and MotionBase dataset (right). The Y-axis denotes FID (↓).
Table 15: Ablation of motion generation using LFQ and VQ under different setups.
                                          Motion-X                     MotionBase
Decoder       #Inst.   #Param.   R@1 ↑   R@3 ↑   FID ↓     R@1 ↑   R@3 ↑   FID ↓
GPT-2-VQ      1M       355M      0.357   0.614   5.083     0.264   0.542   0.516
GPT-2-LFQ     0.02M    355M      0.166   0.341   76.214    0.042   0.085   136.254
GPT-2-LFQ     0.08M    355M      0.332   0.558   6.245     0.062   0.144   128.071
GPT-2-LFQ     1M       355M      0.394   0.628   4.275     0.326   0.607   0.452
GPT-2-LFQ     0.02M    355M      0.166   0.341   76.214    0.042   0.085   136.254
LLaMA-2-LFQ   0.02M    7B        0.225   0.383   68.542    0.062   0.140   125.082
LLaMA-2-LFQ   0.02M    13B       0.206   0.351   71.238    0.085   0.184   119.036
C.9 ABLATION OF MOTION QUANTIZATION
First, we provide additional FID results on Motion-X in Figure 12. It is worth noting that while our
motion quantizer performs worse than RQ-VAE on the smaller HumanML3D dataset, it surpasses
both VQ and RQ when evaluated on the larger Motion-X and MotionBase benchmarks, as can be
seen in Table 6. This suggests that our approach offers a greater advantage when applied to larger
datasets, highlighting its improved generalization compared to previous methods.
To further validate the effectiveness of our 2D quantization strategy, we compare the 2D-LFQ
method with its 1D counterpart (which is identical to VQ except for the quantization strategy). The
results, shown in Table 16, demonstrate that 2D quantization in LFQ significantly outperforms the
1D version. This highlights the superior ability of 2D quantization to enhance the representational
capacity of the motion tokenizer.
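For intuition, the following is a minimal sketch of the lookup-free quantization idea that 2D-LFQ builds on (binarizing each latent dimension so the token index is read directly from the sign pattern, following the LFQ formulation of Yu et al. (2023)); the latent shapes and the way the motion clip is arranged as a 2D grid are illustrative assumptions.

```python
import numpy as np

def lfq_quantize(z: np.ndarray):
    """Lookup-free quantization: each latent dimension is binarized to +/-1, and the
    codebook index is read off the sign bits, so no nearest-neighbor lookup is needed.

    z: latent array of shape (..., K); the implicit codebook has 2**K entries.
    """
    bits = (z > 0).astype(np.int64)                    # (..., K) in {0, 1}
    codes = np.where(bits == 1, 1.0, -1.0)             # quantized latent in {-1, +1}
    weights = 2 ** np.arange(z.shape[-1])              # 1, 2, 4, ...
    indices = (bits * weights).sum(axis=-1)            # integer token ids
    return codes, indices

# A toy "2D" usage: latents over (time, joints, K) from a motion clip treated as a 2D grid.
latents = np.random.randn(64, 22, 14)                  # 2**14 = 16384 implicit codes
codes, ids = lfq_quantize(latents)
print(codes.shape, ids.shape, bool(ids.max() < 2**14))
```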
Table 16: Ablation of 2D motion quantization vs. its 1D version.
                                    HumanML3D         Motion-X          MotionBase
Tokenizer   #Num.    #Param.    FID ↓   MPJPE ↓   FID ↓   MPJPE ↓   FID ↓    MPJPE ↓
1D-LFQ      16384    19.43M     3.85    52.5      2.783   78.9      10.358   80.1
2D-LFQ      16384    108.35M    1.769   45.6      0.295   54.1      7.853    64.1
D DATASET CONSTRUCTION PIPELINE
Our data collection pipeline is a multi-stage process designed to curate a large-scale, high-quality,
and richly annotated multimodal motion dataset. The detailed steps are outlined below:
Video Data Collection and Cleaning: We amass over 20 million videos from publicly available
datasets like InternVid and WebVid, as well as online platforms such as YouTube. To maintain data
relevance and quality, we employ a pretrained human detection model to filter out videos lacking
human presence.
2D and 3D Keypoint Estimation: We estimate 2D human keypoints and their corresponding con-
fidence scores using the pretrained VitPose model (Xu et al., 2022). To further refine motion in-
formation, we leverage a pretrained 3D keypoint estimation model (Sárándi et al., 2023) trained on
extensive 3D datasets. Following the methodology of Lin et al. (2024), we apply temporal smooth-
ing and 3D bone length constraints during triangulation to enhance the stability and consistency of
the keypoint estimations.
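As a generic illustration of the temporal smoothing and bone-length constraints mentioned above (not the exact procedure of Lin et al. (2024)), one can smooth keypoints with a moving-average filter and renormalize bone vectors to fixed lengths:

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def smooth_keypoints(kps: np.ndarray, window: int = 5) -> np.ndarray:
    """Moving-average temporal smoothing of 3D keypoints of shape (T, J, 3)."""
    return uniform_filter1d(kps, size=window, axis=0, mode="nearest")

def enforce_bone_length(kps: np.ndarray, parent: int, child: int, length: float) -> np.ndarray:
    """Rescale the parent->child bone to a fixed length in every frame."""
    out = kps.copy()
    vec = out[:, child] - out[:, parent]
    norm = np.linalg.norm(vec, axis=-1, keepdims=True) + 1e-8
    out[:, child] = out[:, parent] + vec / norm * length
    return out

kps = np.random.randn(120, 24, 3)   # hypothetical 120-frame, 24-joint sequence
print(smooth_keypoints(kps).shape, enforce_bone_length(kps, parent=0, child=1, length=0.25).shape)
```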
Multimodal Information Integration: For a more comprehensive understanding of human motion,
MotionBase incorporates RGB, depth data, and annotations for multi-person scenarios. In multi-
person sequences, each motion is grounded to its respective video via object-level bounding boxes.
While this work primarily focuses on text-to-motion tasks, these additional modalities pave the way
for future research in related areas.
Local-Global Pose Estimation: We fit the SMPL-X body model (Pavlakos et al., 2019) to each
frame in MotionBase using a progressive learning-based mesh fitting approach (Lin et al., 2024).
Specifically, we predict SMPL-X parameters using the pretrained OSX method (Lin et al., 2023),
followed by iterative optimization to align the parameters with the target 2D and 3D joint positions.
Subsequently, we apply a global motion optimization technique based on Yuan et al. (2022) to refine
both global motions and camera poses, ensuring consistency with the video evidence. Finally, for
motion sequences with noisy or occluded input data, we employ RoHM (Zhang et al., 2024a) to
reconstruct complete and plausible motions.
Single-Frame Pose Expansion: To enhance dataset diversity and scale, we expand single-frame
pose data into multi-frame sequences. We achieve this using the PHC (Luo et al., 2023) strategy
and the pre-trained motion completion model MotionGPT (Jiang et al., 2023). The PHC strategy
ensures the physical plausibility of the generated motion sequences, while MotionGPT provides
motion priors to enhance naturalness and fluidity.
Hierarchical Motion Descriptions: MotionBase features hierarchical text annotations to address
limitations in existing dataset descriptions. Leveraging the Gemini-1.5-pro large language model
(Reid et al., 2024) and a carefully crafted prompt format, we generate detailed descriptions for
individual body parts (e.g., left arm, right leg), dedicating a sentence to each. Furthermore, we sum-
marize the overall body movement with 1-3 sentences, providing a more holistic motion description.
E DATASET QUALITY EVALUATION
E.1 MOTION DATA QUALITY
To ensure dataset quality, we conduct multifaceted evaluations of the motion data.
Refinement using a Reinforcement Learning-based Strategy: We use PHC to train a reinforce-
ment learning-based policy model that refines the raw motion data, ensuring conformity to physical
laws and enhancing realism. This policy takes raw motion sequences as input, treats them as target
poses, and generates new motion sequences satisfying physical laws in a simulated environment,
thereby eliminating issues such as jitter and foot sliding. While this strategy may encounter chal-
lenges with drastic movements, it effectively improves data quality for most motion sequences.
Data Diversity: A key advantage of the MotionBase dataset is its scale and diversity. We collect
over one million motion sequences from multiple sources (including InternVid and internet videos),
encompassing a wide range of motion types. This diversity supports the training of more generaliz-
able motion models.
Figure 13: Quantitative examples of motions generated by our large motion model.
E.2 TEXT DESCRIPTION QUALITY
To ensure text description quality, we employ a multi-level evaluation approach.
Automatic Evaluation based on Large Language Models: We automatically evaluate text de-
scriptions in MotionBase using large language models such as Gemini-1.5-pro (Reid et al., 2024)
and GPT-4. We use a 1-to-5 scoring system based on these criteria:
• 1 point (Very Poor): The description is vague, irrelevant to the motion content, or contains
severe grammatical errors.
• 2 points (Poor): The description lacks specifics, detail, or contains obvious errors.
• 3 points (Fair): The description basically reflects the motion content but lacks detail and
may contain minor errors.
• 4 points (Good): The description is accurate and detailed, clearly expressing the motion
process.
• 5 points (Excellent): The description is precise, comprehensive, and fluent, providing an
in-depth analysis of motion details.
We use GPT-4o to score text descriptions from MotionBase, MotionX, and HumanML3D. Mo-
tionBase achieves an average score of 3.837, while MotionX and HumanML3D score 1.386 and
1.703, respectively, indicating higher quality in MotionBase’s text descriptions. To further evaluate
consistency between text descriptions and motion content, we also input text descriptions and cor-
responding rendered motion videos into the Gemini-Pro model for scoring. MotionBase achieves
an average score of 3.82, while MotionX and HumanML3D score 2.25 and 3.08, respectively, again
confirming the quality advantage of MotionBase’s text descriptions.
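A minimal sketch of how such automatic 1-to-5 scoring can be run through an LLM API is given below; the prompt wording, model name, and answer parsing are our own illustrative assumptions rather than the exact setup behind the reported scores.

```python
import re
from openai import OpenAI

RUBRIC = (
    "Rate the following motion description from 1 (very poor) to 5 (excellent) for "
    "specificity, accuracy, and fluency. Reply with a single integer."
)

def score_description(client: OpenAI, description: str, model: str = "gpt-4o") -> int:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": RUBRIC},
                  {"role": "user", "content": description}],
    )
    match = re.search(r"[1-5]", response.choices[0].message.content)
    return int(match.group()) if match else 0  # 0 flags an unparsable reply

# Usage (requires an API key): scores = [score_description(OpenAI(), d) for d in descriptions]
```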
Consistency Check of Hierarchical Descriptions: MotionBase provides hierarchical text descrip-
tions, including overall, local detail, and rule-based descriptions. We use GPT-4 and manual checks
to ensure consistency across different levels, guaranteeing logical coherence and informational com-
pleteness.
F ADDITIONAL QUALITATIVE RESULTS
We provide some examples to visualize the human motions predicted by our large motion model
trained on MotionBase, as illustrated in Figure 13. As can be seen, our large motion model is capable
of generating motion sequences that align well with the input texts, demonstrating the effectiveness
of the MotionBase dataset.
|
1Iuw1jcIrf | MathCoder2: Better Math Reasoning from Continued Pretraining on Model-translated Mathematical Code | [
8,
6,
8
] | Published as a conference paper at ICLR 2025
MATHCODER2: BETTER MATH REASONING FROM
CONTINUED PRETRAINING ON MODEL-TRANSLATED
MATHEMATICAL CODE
Zimu Lu∗1, Aojun Zhou∗1, Houxing Ren1, Ke Wang1, Weikang Shi1
Junting Pan1,2, Mingjie Zhan†1, Hongsheng Li†1,2
1Multimedia Laboratory (MMLab), The Chinese University of Hong Kong
[email protected]
[email protected]
{aojunzhou, zmjdll}@gmail.com
2CPII under InnoHK
ABSTRACT
Code has been shown to be effective in enhancing the mathematical reasoning
abilities of large language models due to its precision and accuracy. Previous
works involving continued mathematical pretraining often include code that uti-
lizes math-related packages, which are primarily designed for fields such as engi-
neering, machine learning, signal processing, or module testing, rather than being
directly focused on mathematical reasoning. In this paper, we introduce a novel
method for generating mathematical code accompanied with corresponding rea-
soning steps for continued pretraining. Our approach begins with the construc-
tion of a high-quality mathematical continued pretraining dataset by incorporat-
ing math-related web data, code using mathematical packages, math textbooks,
and synthetic data. Next, we construct reasoning steps by extracting LaTeX ex-
pressions, the conditions needed for the expressions, and the results of the ex-
pressions from the previously collected dataset. Based on this extracted infor-
mation, we generate corresponding code to accurately capture the mathematical
reasoning process. Appending the generated code to each reasoning step results
in data consisting of paired natural language reasoning steps and their correspond-
ing code. Combining this data with the original dataset results in a 19.2B-token
high-performing mathematical pretraining corpus, which we name MathCode-
Pile. Training several popular base models with this corpus significantly improves
their mathematical abilities, leading to the creation of the MathCoder2 family of
models. All of our data processing and training code is open-sourced, ensuring
full transparency and easy reproducibility of the entire data collection and train-
ing pipeline.
1 INTRODUCTION
Various studies (Azerbayev et al., 2024; Shao et al., 2024) have shown that training on code en-
hances the mathematical reasoning abilities of large language models (LLMs). Previous research in
continued mathematical pretraining often includes code that utilizes math-related packages (Azer-
bayev et al., 2024). This code is typically sourced from GitHub and is primarily designed for fields
such as engineering, machine learning, signal processing, or module testing, rather than focusing
directly on mathematics. Recent models (Zhou et al., 2024; Yang et al., 2024b; Ying et al., 2024;
Shao et al., 2024; Wang et al., 2023a) have adopted Tool-Integrated Reasoning (TIR) in fine-tuning.
They utilize integrated natural language reasoning steps and Python code to improve performance
on mathematical reasoning tasks. Reasoning with the help of code is particularly effective for more
challenging problems, likely due to its precision and accuracy.
Although utilizing existing open-source code in the pretraining phase can enhance the mathematical
reasoning abilities of LLMs, such code often lacks accompanying natural language explanations or
context. This might hinder the model’s ability to fully understand them. In this paper, we propose
a novel method for generating large amounts of mathematical code accompanied by corresponding
natural language reasoning steps, which are extracted from math-related pretraining texts. Different
from the existing math-related code, our generated code is paired with natural language reasoning
steps, making the code more comprehensible. Also, as our code is generated based on math-related
texts, they are all highly related to mathematical reasoning. When used in pretraining, the mathe-
matical code paired with reasoning steps facilitates LLMs’ understanding of math-related pretraining
texts, as it effectively captures the underlying reasoning process. Furthermore, this data enhances
the model’s potential to be finetuned for TIR reasoning.
Our data processing pipeline consists of two key steps: (1) carefully curating a robust basic dataset
for pretraining, and (2) generating paired reasoning steps and mathematical code by extracting La-
TeX expressions and their context, translating the extracted information into Python code snippets,
executing the generated code snippets, and verifying their correctness.
First, we gather and carefully filter a wide variety of math-related data sources, including web pages,
model-generated data, math-related code, and textbooks. Through an advanced filtering process, we
ensure the dataset is both large and highly relevant, minimizing irrelevant content while preserving
the mathematical texts necessary for training. This results in a 16.5B-token dataset that forms the
foundation of our pretraining efforts. By conducting experiments with smaller models, we show that
this careful curation leads to more efficient training without sacrificing model performance.
Second, we propose a novel method for generating large amounts of paired mathematical reason-
ing steps and their corresponding Python code. Given a piece of text from the pretraining corpus
collected above, we wrap it in a carefully designed prompt that instructs a Llama-3.1-70B-Instruct
model to extract LaTeX expressions along with their relevant context, including the conditions for
each expression and the result of its computation. This results in a list of comprehensive mathemati-
cal reasoning steps, complete with the necessary conditions, the computations taken, and the results.
Then, we prompt the model to translate each reasoning step into a Python code snippet that captures
the underlying reasoning process. The generated Python snippets are executed, and only those that
run successfully and produce outputs matching the expected results are retained. By pairing the
code with the corresponding reasoning step, we create the final data. The process is demonstrated
in the lower half of Fig. 1. This process yields a 2.7B-token corpus of mathematical code snippets
accompanied with their corresponding reasoning steps, which we combine with the data generated
in the first step to create a 19.2B-token pretraining dataset, named MathCode-Pile.
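A minimal sketch of the execute-and-verify filter described above: each generated snippet is run in a fresh interpreter with a timeout and kept only if it exits cleanly and its printed output matches the expected result. The timeout and the substring-matching rule are our own assumptions.

```python
import subprocess
import sys

def verify_snippet(code: str, expected: str, timeout: int = 10) -> bool:
    """Run a generated Python snippet and check its output against the expected result."""
    try:
        proc = subprocess.run([sys.executable, "-c", code],
                              capture_output=True, text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return False
    if proc.returncode != 0:
        return False
    return expected.strip() in proc.stdout.strip()

snippet = "print(2**10)"
print(verify_snippet(snippet, expected="1024"))  # True
```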
We validate the effectiveness of MathCode-Pile on four popular base models: Llama-3-8B,
DeepSeekMath-7B, Mistral-7B, and Code-Llama-7B, significantly improving their performance on
five representative mathematical benchmarks. We name the resulting family of pretrained mod-
els MathCoder2. In particular, MathCoder2-Llama-3-8B achieves 4-shot accuracies of 38.4% on
MATH and 69.9% on GSM8K, outperforming the baseline of training only on the basic data gener-
ated in the first step by 3.1% and 4.1%, respectively. This demonstrates that the data of mathematical
code accompanied with reasoning steps effectively enhances LLMs’ reasoning abilities.
Different from recent works, such as DeepSeekMath (Shao et al., 2024), InternLM-Math (Ying
et al., 2024), and Qwen2.5-Math (Yang et al., 2024b), which only release their model weights, we
offer a detailed, open-source framework for data processing and training that achieves performance
competitive with these models, fostering further progress in mathematical reasoning for LLMs.
Our contributions include:
• A novel and effective method for generating large amounts of mathematical code with cor-
responding natural language reasoning steps, significantly enhancing pretraining outcomes.
• The creation of MathCode-Pile, a meticulously curated 19.2B-token dataset for contin-
ued mathematical pretraining. This dataset includes math-related web data, synthetic data,
code, textbooks, and model-translated mathematical code.
• Full open-sourcing of all data processing and training code, ensuring transparency and
reproducibility to support future research.
Figure 1: The data processing pipeline. (a) shows the pipeline of prior works. (b) demonstrates our
method. We first use a fastText classifier to filter the Common Crawl corpus, resulting in the initial
filtered math texts. Then, we annotate part of the filtered texts to train a new fastText classifier,
and conduct a second filtering, resulting in the finer filtered math texts. Then we use an instruction-
tuned LLM to extract reasoning steps from these math-related texts, and translate the reasoning steps
into corresponding code snippets. We execute the code snippets and compare the output with the
expected result. If the code executes successfully and the result is as expected, the code is retained.
2 CURATION OF MATHCODE-PILE
We curate our mathematical pretraining dataset, MathCode-Pile, in two steps: first, we collect the
basic data in Sec. 2.1, and then we use it to generate mathematical code snippets with their corre-
sponding natural language reasoning steps in Sec. 2.2.
2.1 BASIC DATA
We collect and carefully filter a diverse range of mathematical data to ensure relevance and quality
for continued pretraining of LLMs. The data includes math-related web content, synthetic data, code
utilizing mathematical packages, and mathematical textbooks.
Math-related Web Data. Web data offers a broad range of real-world mathematical examples.
We start with the OpenWebMath (Paster et al., 2023) dataset, which contains mathematical web
pages sourced from Common Crawl. Observing that a significant portion of these documents are
unrelated to mathematics, we instruct the Mixtral-8x7B-Instruct model with a carefully designed
prompt (detailed in Appendix A) to filter out irrelevant texts. Examples of irrelevant texts are shown
in Appendix D. This reduces the dataset from 13.7B tokens to 4.8B tokens (measured using the
Llama-3 tokenizer). We call this filtered version filtered-OpenWebMath.
To further expand the dataset, we train a fastText classifier (Joulin et al., 2016) using filtered-
OpenWebMath as positive samples and random Common Crawl data as negative samples (training
details are explained in Appendix B). This model helps identify additional math-related documents
within the Common Crawl data from Matrix (Zhang et al., 2024), a general pretraining dataset. A
second round of filtering is performed, where Mixtral-8x7B-Instruct annotates a portion of these
documents, and a new fastText classifier trained based on these annotations further refines the data.
This produces 6.4B tokens, which we label as filtered-CC-En-math. Finally, we combine filtered-
OpenWebMath and filtered-CC-En-math, resulting in a comprehensive 11.2B-token math-related
web dataset.
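To make the filtering step concrete, the following is a minimal illustrative sketch of how a trained fastText classifier could be used to keep math-related documents; the model path, the label name, and the probability threshold are assumptions for illustration and are not specified in the paper.

```python
import fasttext

MODEL_PATH = "math_classifier.bin"  # hypothetical path to a classifier trained as in Appendix B
THRESHOLD = 0.5                     # hypothetical probability cut-off for keeping a document

def is_math_related(text: str, model) -> bool:
    """Score one document with the fastText classifier; fastText expects a single line."""
    line = " ".join(text.split())
    labels, probs = model.predict(line, k=1)
    return labels[0] == "__label__math" and probs[0] >= THRESHOLD

if __name__ == "__main__":
    model = fasttext.load_model(MODEL_PATH)
    docs = [
        "We prove that the integral of x^2 from 0 to 1 equals 1/3.",
        "Breaking news: the city council approved a new parking lot.",
    ]
    kept = [d for d in docs if is_math_related(d, model)]
    print(f"Kept {len(kept)} of {len(docs)} documents")
```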
Prompt: You will be presented with a text related to math. I need you to identify all the complex
computations in it. For each complex computation, find out the conditions needed for the computation,
the LaTeX expression that conducts the computation, and the result of the computation. Then generate
a Python code snippet for each computation that demonstrates how the result is reached. Output each
computation in the following format:
Conditions Needed:
1. [Condition 1]
2. [Condition 2]
...
Computation Expression:
$[LaTeX Expression]$
Computation Result:
[Computation Result]
Python Code Snippet:
‘‘‘python
[Python Code]
‘‘‘
There can be more than one complex computation in the text. Output only the computations that
requires calculation. Do not include mathematical statements or definitions as a computation. Make
sure each snippet can be executed individually. The text is as follows: {TEXT}
The computations are:
Figure 2: The prompt for extracting reasoning steps from texts in the pretraining corpus and generat-
ing the corresponding Python snippets. {TEXT} is replaced with the text from the dataset collected
in Sec. 2.1.
Synthetic Data. Synthetic data offers structured mathematical texts that complement the web data.
We collect synthetic data from various open-source repositories on Hugging Face, including datasets
like Education-College-Students1, Maths-College2, and synthetic math books from Matrix (Zhang
et al., 2024). To ensure relevance, we apply a fastText classifier to filter out non-mathematical
documents, refining the dataset to 2.2B tokens of high-quality synthetic math content.
Code Utilizing Mathematical Packages. Code data offers practical examples of how mathematical
libraries are used in programming. We collect code from Python and Jupyter files within the Star-
CoderData dataset (Li et al., 2023), retaining only programs that import math-related packages such
as sympy, fractions, cmath, scipy, or statistics. The widely used numpy package is not used to filter
the documents, as it appears frequently in non-mathematical contexts. After filtering, this collection
process results in 1.7B tokens of code related to mathematical computations.
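The import-based selection described above can be sketched as follows; the exact matching rules (for example, how submodules or aliased imports are handled) are illustrative assumptions rather than the paper's exact implementation.

```python
import re

# Packages listed above; numpy is deliberately excluded because it frequently
# appears in non-mathematical code.
MATH_PACKAGES = {"sympy", "fractions", "cmath", "scipy", "statistics"}

# Matches "import sympy", "import scipy.integrate as si", "from fractions import Fraction", etc.
IMPORT_RE = re.compile(r"^\s*(?:import|from)\s+([A-Za-z_]\w*)", re.MULTILINE)

def uses_math_package(source_code: str) -> bool:
    """Return True if the program imports at least one math-related package."""
    return any(m.group(1) in MATH_PACKAGES for m in IMPORT_RE.finditer(source_code))

if __name__ == "__main__":
    program = "from sympy import symbols\nx = symbols('x')\nprint(x**2)\n"
    print(uses_math_package(program))  # True
```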
Mathematical Textbooks. Textbooks provide formal, structured presentations of mathematical con-
cepts, making them a valuable source of math knowledge. We gather 8K PDFs of textbooks from
online resources by identifying those with titles containing math-related keywords such as algebra,
geometry, probability, etc. These PDF files are then converted into markdown format using the
Nougat tool for easier integration into our training pipeline.
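A minimal sketch of the title-based selection is given below; the keyword list beyond the examples named above (algebra, geometry, probability) is an assumption for illustration.

```python
# Hypothetical keyword list; the paper only names "algebra, geometry, probability, etc."
MATH_KEYWORDS = ["algebra", "geometry", "probability", "calculus", "statistics"]

def is_math_textbook(title: str) -> bool:
    """Keep a PDF if its title contains at least one math-related keyword."""
    title_lower = title.lower()
    return any(keyword in title_lower for keyword in MATH_KEYWORDS)

print(is_math_textbook("Introduction to Linear Algebra"))  # True
print(is_math_textbook("A Brief History of Modern Art"))   # False
```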
2.2 MODEL-TRANSLATED MATHEMATICAL CODE
In this section, we propose a novel approach for extracting reasoning steps from the basic pretrain-
ing data and translating them into corresponding Python code snippets that capture the underlying
reasoning processes. This extraction and translation process is performed using a strong instruction-
tuned model, which is Llama-3.1-70B-Instruct in this paper.
1https://huggingface.co/datasets/ajibawa-2023/Education-College-Students
2https://huggingface.co/datasets/ajibawa-2023/Maths-College
Table 1: The components and data statistics of MathCode-Pile.

Components                      Size (MB)   Documents    Tokens           Average (Tokens)
Filtered-OpenWebMath            16,999      2,824,705    4,826,902,621    1,709
Filtered-CC-En-math             23,465      7,597,718    6,341,745,645    835
Synthetic data                  8,855       2,195,974    2,193,189,314    999
Code using math packages        6,120       513,059      1,703,226,005    3,320
Mathematical textbooks          4,431       8,373        1,390,268,773    166,042
Translated mathematical code    8,235       6,347,823    2,728,740,985    430
Total                           68,105      19,487,652   19,184,073,343   984
Our method begins with taking a piece of text from the basic pretraining data and wrapping it in a
carefully designed prompt, as shown in Fig. 2. This prompt instructs the model to identify LaTeX
expressions denoting complex computations, along with the necessary context, including the condi-
tions required for the computation and the expected result. By explicitly extracting the conditions of
the LaTeX expression, we enhance the model’s ability to comprehend the underlying mathematical
reasoning behind the usage of the expression. The expected result of the computation can later serve
as a basis for verifying the correctness of the generated code. A mathematical reasoning step is con-
structed by combining the conditions, expression and result. The prompt then directs the model to
produce a Python code snippet that accurately reflects the underlying reasoning process behind the
extracted reasoning step. The model is asked to present the conditions, LaTeX expression, result,
and Python code snippet in a structured format, ensuring that each part can be easily extracted from
the generated text. Examples of generated texts are shown in Appendix C.
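Because the model is asked to emit the four fields in a fixed order, the generated text can be split back into structured records. The following is an illustrative sketch of such a parser; the field headers come from the prompt in Fig. 2, while the parsing logic itself is an assumption rather than the exact code used in this work.

```python
import re

FIELDS = ("Conditions Needed:", "Computation Expression:",
          "Computation Result:", "Python Code Snippet:")

def parse_translation(output: str):
    """Split the generated text into computation blocks with the four fields above."""
    blocks = []
    for chunk in re.split(r"(?=Conditions Needed:)", output):
        if not chunk.startswith("Conditions Needed:"):
            continue
        # Locate each field header; skip malformed blocks that miss a field.
        positions = [chunk.find(f) for f in FIELDS]
        if any(p == -1 for p in positions):
            continue
        record = {}
        for i, field in enumerate(FIELDS):
            start = positions[i] + len(field)
            end = positions[i + 1] if i + 1 < len(FIELDS) else len(chunk)
            value = chunk[start:end].strip()
            if field == "Python Code Snippet:":
                # Drop the surrounding code fence and language tag, if present.
                value = value.strip("`").strip().removeprefix("python").strip()
            record[field.rstrip(":")] = value
        blocks.append(record)
    return blocks

example = (
    "Conditions Needed:\n1. n is a positive integer.\n"
    "Computation Expression:\n$5!$\n"
    "Computation Result:\n120\n"
    "Python Code Snippet:\nimport math\nprint(math.factorial(5))\n"
)
print(parse_translation(example)[0]["Computation Result"])  # 120
```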
After the Python code snippets are generated, they are executed, and outputs of the execution are
compared with the expected results extracted from the generated text. Only the Python code snippets
that execute without errors and produce correct outputs are retained. This filtering process ensures
a higher quality of generated code, making the resulting dataset more reliable for mathematical
pretraining compared to approaches that rely on unverified and general-purpose code.
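A minimal sketch of this execute-and-compare filter is shown below; the use of a subprocess, the timeout value, and the substring-based output comparison are illustrative assumptions, as the paper only requires that the snippet execute without errors and produce an output matching the expected result.

```python
import subprocess
import sys

def verify_snippet(code: str, expected_result: str, timeout: int = 10) -> bool:
    """Run a generated snippet in a fresh interpreter and keep it only if it
    exits cleanly and its printed output contains the expected result."""
    try:
        proc = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return False
    return proc.returncode == 0 and expected_result.strip() in proc.stdout

if __name__ == "__main__":
    snippet = "import math\nprint(math.comb(4, 2))"
    print(verify_snippet(snippet, "6"))  # True: the snippet prints 6
```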
Leveraging the Llama-3.1-70B-Instruct model, we initially generated 3.1B tokens of the data. After
applying the filtering process, we obtain a total of 2.7B tokens of high-quality data of mathemati-
cal code snippets accompanied with their corresponding reasoning steps. This newly generated data
significantly enriches our original pretraining corpus. By combining this data with the basic pretrain-
ing data, we create a comprehensive pretraining dataset totaling 19.2B tokens, which we refer to as
MathCode-Pile. Detailed statistics of MathCode-Pile are presented in Tab. 1. This dataset is tai-
lored specifically for enhancing the mathematical and coding abilities of LLMs. To avoid benchmark
contamination, we follow Yang et al. (2024b) to filter out samples that have significant overlaps with
any of the questions from benchmark datasets used in evaluation. We use exact match to remove
the identical samples and further apply 13-gram deduplication (with a condition that the Jaccard
similarity should be larger than 0.6) to remove more samples that might cause contamination.
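The decontamination step can be sketched as follows; whitespace tokenization and the per-question comparison are simplifying assumptions, while the 13-gram size and the 0.6 Jaccard threshold follow the description above.

```python
def ngrams(text: str, n: int = 13) -> set:
    """Whitespace-token n-grams (a simplification; the paper does not specify tokenization)."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def is_contaminated(sample: str, benchmark_questions: list, n: int = 13,
                    jaccard_threshold: float = 0.6) -> bool:
    """Flag a training sample that shares an n-gram with a benchmark question
    and whose n-gram sets have Jaccard similarity above the threshold."""
    sample_grams = ngrams(sample, n)
    if not sample_grams:
        return False
    for question in benchmark_questions:
        q_grams = ngrams(question, n)
        if not q_grams or not (sample_grams & q_grams):
            continue
        jaccard = len(sample_grams & q_grams) / len(sample_grams | q_grams)
        if jaccard > jaccard_threshold:
            return True
    return False
```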
In comparison to traditional methods of curating math-related code, which often draw on general-
purpose repositories, our method ensures that the code is not only syntactically correct but also
mathematically sound, reflecting a deeper understanding of mathematical reasoning. Our math-
ematical code is accompanied with corresponding natural language reasoning steps, which makes
understanding the reasoning process easier. This makes MathCode-Pile a superior resource for mod-
els aimed at performing advanced mathematical reasoning tasks.
3 EXPERIMENTS
To demonstrate the effectiveness of our method, we first train several base models ranging from 7B
to 8B parameters using MathCode-Pile and compare them to other best-performing models of the
same size. The group of models resulting from the continued mathematical pretraining is named
MathCoder2. Next, we train and compare various other open-source math pretraining datasets
against MathCode-Pile using a smaller model, DeepSeekCoder-1.3B. To showcase the potential of
Table 2: Performance of various pretrained models on five representative mathematical datasets. All results reported are based on greedy decoding. “Code-open” shows whether the code for data-processing and model-training is open-sourced. The numbers in parentheses show the improvements compared to the base model from which each MathCoder2 model is trained.

Model                      Size   Code-open   MATH         GSM8K        SAT          OCW         MMLU-MATH
Qwen2-Math                 7B     ✗           50.4         80.4         87.5         14.0        57.9
Qwen2.5-Math               7B     ✗           55.4         91.6         -            -           -
InternLM2.5                7B     ✗           34.0         74.8         65.6         8.1         49.6
InternLM2-Math-Base        7B     ✗           21.5         49.2         -            -           -
Llemma                     7B     ✓           18.0         36.4         53.1         7.7         -
Llama-2                    7B     ✗           3.2          11.8         25.0         3.7         -
Llama-3                    8B     ✗           21.4         54.8         56.3         10.3        42.8
MathCoder2-Llama-3         8B     ✓           38.4(+17.0)  69.9(+15.1)  84.4(+28.1)  18.0(+7.7)  46.5(+3.7)
DeepSeekMath               7B     ✗           36.2         64.2         84.4         15.4        47.4
MathCoder2-DeepSeekMath    7B     ✓           38.6(+2.4)   68.8(+4.6)   90.6(+6.2)   16.9(+1.5)  48.3(+0.9)
Mistral                    7B     ✗           13.1         52.2         75.0         8.5         38.3
MathCoder2-Mistral         7B     ✓           36.7(+23.6)  68.2(+16.0)  81.3(+6.3)   13.2(+4.7)  42.2(+3.9)
Code-Llama                 7B     ✗           6.7          14.6         25.0         3.7         26.4
MathCoder2-Code-Llama      7B     ✓           28.8(+22.1)  52.3(+37.7)  71.9(+46.9)  8.5(+4.8)   33.7(+7.3)
the MathCoder2 models, we further perform supervised fine-tuning on them. Finally, we conduct
ablation studies to analyze the impact of each component of the dataset.
3.1 MAIN RESULTS
Benchmark datasets. We evaluate the MathCoder2 models on five representative datasets:
GSM8K (Cobbe et al., 2021), MATH (Hendrycks et al., 2021b), SAT-Math (Azerbayev et al., 2024),
OCW (Lewkowycz et al., 2022), and MMLU-Math (Hendrycks et al., 2021a). GSM8K and MATH
are tested using a 4-shot prompt with MAmmoTH’s evaluation framework (Yue et al., 2023). SAT-
Math and OCW are tested using a 4-shot prompt with DeepSeekMath’s evaluation framework (Shao
et al., 2024). MMLU-Math is tested using the lm-evaluation-harness’s (Gao et al., 2024) default
zero-shot setting for MMLU. These datasets cover a wide range of mathematical problems across
various types and difficulty levels, from primary school math word problems to college-level chal-
lenges, providing a comprehensive evaluation of the models.
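For illustration, a generic sketch of few-shot greedy-decoding evaluation with exact-match scoring is given below; the actual experiments rely on the evaluation frameworks cited above, so the prompt format, answer-extraction rule, and model_generate callable here are assumptions, not the frameworks' code.

```python
import re

def build_prompt(few_shot_examples, question):
    """Concatenate solved examples (e.g., 4 of them) and the new question."""
    parts = [f"Question: {q}\nAnswer: {a}" for q, a in few_shot_examples]
    parts.append(f"Question: {question}\nAnswer:")
    return "\n\n".join(parts)

def extract_final_answer(generation: str):
    """Naively take the last number in the generation as the predicted answer."""
    numbers = re.findall(r"-?\d+\.?\d*", generation.replace(",", ""))
    return numbers[-1] if numbers else None

def evaluate(model_generate, few_shot_examples, test_set):
    """model_generate: a callable mapping a prompt to greedy-decoded text."""
    correct = 0
    for question, gold_answer in test_set:
        prompt = build_prompt(few_shot_examples, question)
        prediction = extract_final_answer(model_generate(prompt))
        correct += int(prediction == str(gold_answer))
    return correct / len(test_set)
```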
Base models and training settings. To demonstrate that our pretraining corpus is effective across
different base models, we continue pretraining four base models with MathCode-Pile: Llama-3-
8B (Dubey et al., 2024), DeepSeekMath-7B (Shao et al., 2024), Mistral-7B (Jiang et al., 2023),
and Code-Llama-7B (Rozière et al., 2024). MathCoder2-Llama-3-8B is trained for 3 epochs with a
global batch size of 4 million tokens and an 8192 token context length. MathCoder2-DeepSeekMath,
MathCoder2-Mistral, and MathCoder2-CodeLlama are each trained for 3 epochs with a global batch
size of 4 million tokens and a 4096 token context length.
Baselines. We compare our method with various other base models that possess strong mathemat-
ical abilities and are of similar sizes, including Qwen2-Math 7B (Yang et al., 2024a), Qwen2.5-
Math 7B (Yang et al., 2024b), InternLM2-Math-Base 7B (Ying et al., 2024), InternLM2.5 7B (Cai
et al., 2024), DeepSeekMath 7B (Shao et al., 2024), Llemma 7B (Azerbayev et al., 2024), Mistral
7B (Jiang et al., 2023), Llama2 7B (Touvron et al., 2023), Llama3 8B (Dubey et al., 2024) and
Code-Llama 7B (Rozière et al., 2024).
Results: As demonstrated in Tab. 2, continued pretraining on MathCode-Pile consistently improves
performance across all five benchmark datasets. MathCoder2 models rival the performance of top
models like InternLM2-Math-Base, InternLM2.5, and DeepSeekMath. In particular, MathCoder2-
DeepSeekMath demonstrates that our method continues to enhance the performance of DeepSeek-
Math, a model that has already been extensively trained on large amounts of math-related data. How-
Table 3: Performance of various finetuned models on five representative mathematical datasets. All results reported are based on greedy decoding.

Model                                   Size   MATH   GSM8K   OCW    OlympiadBench   SVAMP
Qwen2-Math-Instruct                     7B     75.1   89.9    34.6   38.2            -
Qwen2.5-Math-Instruct                   7B     83.6   95.2    37.1   41.6            -
DeepSeekMath-Instruct-CoT               7B     46.8   82.9    -      -               -
DeepSeekMath-Instruct-TIR               7B     57.4   83.7    -      -               -
InternLM2-math-plus                     7B     54.4   84.0    17.3   18.8            -
NuminaMath-7B-CoT                       7B     55.2   75.4    19.1   19.9            -
NuminaMath-7B-TIR                       7B     68.1   84.6    -      -               -
ToRA-Code                               7B     44.6   72.6    -      -               70.4
MathCoder                               7B     30.2   67.8    -      -               70.7
MAmmoTH2-Plus                           8B     42.8   84.1    -      -               -
Llama-3.1-Instruct                      8B     47.2   76.6    21.7   15.4            -
MathCoder2-Llama-3-Instruct-CoT         8B     58.5   83.9    29.4   25.8            92.7
MathCoder2-Llama-3-Instruct-TIR         8B     69.7   85.8    37.6   37.6            94.9
MathCoder2-DeepSeekMath-Instruct-CoT    7B     55.2   80.3    30.9   23.0            92.1
MathCoder2-DeepSeekMath-Instruct-TIR    7B     69.6   86.5    41.9   37.9            92.8
ever, there remains a performance gap between MathCoder2 and the Qwen2-Math and Qwen2.5-Math models. This gap might be attributed to their superior computational, manual, and financial resources, which enable the scaling of data size and further improvements in data quality; for example, Qwen2.5-Math reports a mathematical dataset of 700B tokens (Yang et al., 2024b).
In contrast to models like Qwen2-Math, which only open-source their model weights, with much of
their data processing and training details undisclosed, MathCoder2 is fully open-sourced, including
all data processing pipelines and training code. This openness facilitates transparency, reproducibil-
ity, and further research, which is crucial for advancing the field. Compared to Llemma, which also
open-sources its code, our method achieves better performance on the five datasets. Particularly,
when trained on the same base model, Code-Llama, our method performs significantly better, which
demonstrates the effectiveness of the MathCode-Pile pretraining corpus.
3.2 POST-TRAINING
To further demonstrate the potential of the MathCoder2 models in aligning to mathematical problem-
solving tasks, we select the MathCoder2-Llama-3-8B model and MathCoder2-DeepSeekMath-7B
for finetuning on mathematical problem-solution pairs. We first train the base model on general
mathematical instructions following Yue et al. (2024) for two epochs. Subsequently, we finetune the
model on NuminaMath-CoT3, and NuminaMath-TIR4 datasets for three epochs.
The results are shown in Tab. 3. MathCoder2-Instruct-TIR achieves high performance on all five
datasets, reaching 69.7% on MATH and 86.5% on GSM8K, outperforming many of the best open-
source models of similar size and demonstrating our method’s potential to improve performance
on downstream mathematical reasoning tasks. As this paper focuses on continued mathematical
pretraining, the post-training serves only as a validation of the potential of our models. We conducted
only simple supervised fine-tuning, without performing reinforcement learning or direct preference
optimization, which could further improve performance on downstream tasks.
3.3 ABLATION STUDIES
In this section, we first analyze the impact of various components of the training data. Next, we
compare MathCode-Pile to other open-source mathematical pretraining corpora.
Analysis of the impact of the mathematical code. We analyze the impact of the mathematical code
on continued pretraining by comparing the results of adding and not adding the mathematical code.
As shown in Tab. 4, the addition of the mathematical code in the pretraining corpus significantly
3https://huggingface.co/datasets/AI-MO/NuminaMath-CoT
4https://huggingface.co/datasets/AI-MO/NuminaMath-TIR
Table 4: Analysis of the impact of the mathematical code. The upper half presents the results of using and not using the mathematical code data. The lower half analyzes the design of concatenating the reasoning steps and code snippets. “Basic + Reasoning-step-only” represents only adding the conditions, expressions, and results, while “Basic + Trans-code-only” represents only adding the translated code. “Basic + Separated Text&Code” represents separating the corresponding code and text. “Reasoning-Step&Code” represents the concatenated data combining both. “Basic + No-code-prompt” represents using a prompt that simply instructs Llama-3.1-70B-Instruct to rewrite texts to improve their quality.
Data Composition               Base Model           MATH        GSM8K       SAT          OCW         MMLU-MATH
Basic                          Llama-3-8B           34.7        65.8        71.9         12.9        45.2
Basic + Reasoning-Step&Code    Llama-3-8B           38.4(+3.7)  69.9(+4.1)  84.4(+12.5)  18.0(+5.1)  46.5(+1.3)
Basic + Reasoning-step-only    DeepSeekCoder-1.3B   16.7        22.7        40.6         4.8         25.9
Basic + Trans-code-only        DeepSeekCoder-1.3B   14.6        22.1        43.8         5.5         25.5
Basic + No-code-prompt         DeepSeekCoder-1.3B   15.7        21.3        37.5         4.8         24.4
Basic + Separated Text&Code    DeepSeekCoder-1.3B   17.0        22.0        46.9         4.8         25.3
Basic + Reasoning-Step&Code    DeepSeekCoder-1.3B   17.8        25.5        59.4         5.9         26.1
Table 5: Analysis of the effect of different components in MathCode-Pile. The base model is DeepSeekCoder-1.3B.

Data Composition                         MATH   GSM8K   SAT    OCW   MMLU-MATH
No Math Training                         4.8    4.3     18.8   2.6   24.8
filtered-OpenWebMath (4.8B)              9.0    11.4    34.4   3.7   25.4
OpenWebMath (12.9B)                      9.4    11.2    31.3   2.6   24.4
filtered-CC-En-math (6.4B)               9.1    12.1    31.3   3.7   25.2
CC-En-math (22.1B)                       8.4    13.0    25.0   2.9   25.0
filtered-OpenWebMath + textbooks         9.4    12.7    50.0   4.0   25.4
filtered-OpenWebMath + synthetic data    10.8   12.6    50.0   4.0   25.6
filtered-OpenWebMath + code              9.4    12.1    46.9   4.0   25.4
MathCode-Pile                            17.8   25.5    59.4   5.9   26.1
improves performance across all five datasets. Note that the mathematical code only constitutes
14.1% of the 19.2B tokens in the MathCode-Pile dataset, yet the improvement in accuracy it brings
about relative to the total improvement in accuracy, (acc_{MathCode-Pile} − acc_{basic}) / (acc_{MathCode-Pile} − acc_{orig}), where acc_{orig} denotes the accuracy of the original base model, is 21.8%, 27.1%, 44.5%, 66.2%, and 35.1% on the five benchmark datasets, respectively. For example, on MATH the ratio is (38.4 − 34.7)/(38.4 − 21.4) = 3.7/17.0 ≈ 21.8%. This demonstrates the effectiveness of
the mathematical code. Comparison across different training steps is shown in Appendix F.
We also analyze the design choice of concatenating the natural language reasoning step with the
mathematical code for pretraining. This analysis is conducted by studying the results of adding only
the natural language reasoning steps, and separately adding only the translated code. As shown in
Tab. 4, Basic + Reasoning-step-only represents adding only the natural language reasoning steps;
Basic + Trans-code-only represents adding only the translated code; Basic + Separated Text&Code
represents separating code and text; and Basic + Reasoning-Step&Code represents adding the con-
catenated data that combines both. The Basic + Reasoning-Step&Code configuration results in the
best performance, demonstrating the importance of including both the natural language reasoning
step and the translated mathematical code.
To rule out the possibility that the improvement comes from the higher quality of texts generated
by Llama-3.1-70B-Instruct, we use a prompt that asks Llama-3.1-70B-Instruct to rewrite the given
text. The details of this prompt are provided in Appendix E. We present the results of replacing the
mathematical code with texts generated using this prompt in Tab. 4, labeled as “Basic + No-code-
prompt”. Our method of generating mathematical code accompanied with corresponding reasoning
steps outperforms this baseline, demonstrating the effectiveness of our approach.
Analysis of the impact of various parts of the basic data. We perform experiments on a
smaller model, DeepSeekCoder-1.3B, using different parts of the basic data. As demonstrated in
Table 6: Comparison between MathCode-Pile and other mathematical pretraining datasets.

Pretrain Dataset       Base Model           MATH   GSM8K   SAT    OCW   MMLU-MATH
No Math Training       DeepSeekCoder-1.3B   4.8    4.3     18.8   2.6   24.8
OpenWebMath            DeepSeekCoder-1.3B   9.4    11.2    31.3   2.6   24.4
Proof-Pile-2           DeepSeekCoder-1.3B   9.2    11.2    50.0   4.4   25.8
MathPile               DeepSeekCoder-1.3B   5.3    3.4     21.9   2.2   24.9
DeepSeekMath Corpus    DeepSeekLLM-1.3B     13.6   23.8    56.3   4.8   -
MathCode-Pile          DeepSeekCoder-1.3B   17.8   25.5    59.4   5.9   26.1
Table 7: Comparison between finetuning the original Llama-3-8B, MathCoder2-Basic-Llama-3-8B, and MathCoder2-Llama-3-8B on NuminaMath-TIR. MathCoder2-Basic-Llama-3-8B is the model resulting from continued pretraining on the basic data.

Base Model                    MATH   GSM8K   OCW    OlympiadBench   SVAMP
Llama-3-8B                    56.1   80.1    24.6   28.4            83.8
MathCoder2-Basic-Llama-3-8B   62.9   81.3    26.8   32.9            86.7
MathCoder2-Llama-3-8B         65.1   84.5    34.6   34.4            87.9
Tab. 5, filtered-OpenWebMath and filtered-CC-En-math significantly improve the performance of
the model. In comparison, textbooks, synthetic data, and code are smaller in data size and play a
less important role. As each of these parts of data is too small for individual pretraining, we combine
them with filtered-OpenWebMath to show that they each bring a small yet noticeable improvement compared to using only filtered-OpenWebMath. Since we performed filtering on OpenWebMath
and the initially filtered CC-En to remove irrelevant data, we also compare the performance before
and after filtering. We observe that there is no obvious degradation in performance after removing
irrelevant content, showing the effectiveness of the filtering.
Comparison with other open-source mathematical pretraining corpora. We compare
MathCode-Pile with various other open-source mathematical pretraining corpora. We train each
corpus for 3 epochs with a global batch size of 2 million tokens and a 4096 token context length,
since we observe that the model’s performance usually saturates around 3 epochs. As shown in
Tab. 6, MathCode-Pile significantly outperforms OpenWebMath, Proof-Pile-2, and MathPile when
trained on DeepSeekCoder-1.3B. The DeepSeekMath Corpus is not open-source, and its perfor-
mance on DeepSeekLLM-1.3B is taken from Shao et al. (2024), which is trained for 150B tokens,
more than our MathCode-Pile’s training of approximately 60B tokens. The 1.3B model trained with
MathCode-Pile outperforms the 1.3B model trained with DeepSeekMath Corpus.
Analysis of the improvement on the potential of being finetuned for TIR reasoning. To analyze
the effect of the model-translated mathematical code on LLMs’ potential to be finetuned for TIR
reasoning, we finetune the original Llama-3-8B, MathCoder2-Basic-Llama-3-8B, and MathCoder2-
Llama-3-8B on NuminaMath-TIR for three epochs, respectively. As shown in Tab. 7, the results
of finetuning on MathCoder2-Basic-Llama-3-8B are higher than the results of finetuning on Llama-
3-8B. Finetuning on MathCoder2-Llama-3-8B results in even higher performance than finetuning
on MathCoder2-Basic-Llama-3-8B, showing that the addition of mathematical code effectively en-
hances the models’ potential of being finetuned for TIR reasoning.
4 RELATED WORK
Continued mathematical pretraining. Several works (Shao et al., 2024; Azerbayev et al., 2024;
Ying et al., 2024; Yang et al., 2024b) have explored the continued pretraining of LLMs on math-
ematical data, such as mathematical web content, synthetic data, and code. InternLM-Math (Ying
et al., 2024) and Query of CC (Fei et al., 2024) use BM25 for data retrieval, while other works such
as DeepSeekMath (Shao et al., 2024) and Qwen2-Math (Yang et al., 2024b) employ fastText (Joulin
et al., 2016) and other meta-information to retrieve texts from Common Crawl. Our approach fol-
lows these methods by using fastText for data filtering, and we introduce a second iteration of finer
filtering to retain more relevant data. MathPile (Wang et al., 2023b) and phi (Gunasekar et al.,
2023) utilize real or synthesized textbooks, while Llemma (Azerbayev et al., 2024) and Qwen2-
Math (Yang et al., 2024b) incorporate math-related code in their datasets. However, unlike our
method of generating mathematical code with accompanied natural language reasoning, their code
mostly has no natural language explanations or context. Our work builds on these prior efforts by
collecting and expanding upon these sources of math-related text. Unlike works that only open-
source their model weights, we take a more transparent approach by open-sourcing both our data
processing and model training code, thereby ensuring reproducibility and facilitating future research
in this field. Compared to Llemma (Azerbayev et al., 2024), which also open-sources its data and
training code, our method results in better performance on mathematical reasoning tasks.
Synthetic data. Numerous finetuning (Yu et al., 2024; Wang et al., 2023a; Lu et al., 2024a) and pretraining (Gunasekar et al., 2023; Wang et al., 2023b; Yang et al., 2024b) studies have explored train-
ing on synthetic data generated using language models or predefined templates. MathGLM (Yang
et al., 2023) and InternLM-Math (Ying et al., 2024) use templates to generate synthetic numeri-
cal operation data, while phi (Gunasekar et al., 2023) produces textbook-quality data with models.
EntiGraph (Yang et al., 2024c) generates diverse text by drawing connections between sampled enti-
ties. Our work proposes a novel method for extracting mathematical reasoning steps and generating
synthetic code snippets that capture the underlying reasoning processes.
Post-training. There are many methods for further improving the mathematical problem-solving
abilities of LLMs. Supervised finetuning adjusts pretrained models using math problems and solu-
tions in various formats, such as Chain-of-Thought (Yu et al., 2024; Yuan et al., 2023), Program-
of-Thought (Yue et al., 2023), and Tool-Integrated Reasoning (Gou et al., 2024; Wang et al., 2023a;
Liao et al., 2024). Reinforcement learning Lightman et al. (2023); Wang et al. (2024) and Direct
Preference Optimization Rafailov et al. (2024); Xu et al. (2024); Lu et al. (2024b) utilize mathemati-
cal preference data to adjust the models’ outputs. These methods are diverse and reveal the potential
of pretrained models. Their performance is often influenced by the quality of the training data used
in the pretraining stage. To explore the potential of finetuning our pretrained models for downstream
tasks, we conduct supervised finetuning with existing open-source data.
5 LIMITATIONS AND FUTURE WORK
One limitation of our work is that our continued pretraining corpus focuses primarily on mathe-
matics and does not intentionally include other STEM subjects. Additionally, our pretraining data
consists entirely of English texts, without incorporating math-related content in other languages, like
Chinese. Due to limitations in computational resources, we only trained models ranging from 1.3B
to 8B parameters. Future work could address these limitations by expanding the dataset to include
other subjects and languages and by training on larger language models. Also, this paper primarily
focuses on continued mathematical pretraining, so we did not apply reinforcement learning meth-
ods like PPO and GRPO, or Direct Preference Optimization in our post-training phase, which can
further improve performance on mathematical reasoning tasks. In the future, we could explore these
methods on our finetuned models. In addition, this work did not discuss theorem proving with formal
languages such as Lean and Coq, which is worth investigating in future works.
6 CONCLUSION
In this paper, we present an effective open-source continued mathematical pretraining pipeline for
enhancing mathematical reasoning of LLMs. Through the meticulous collection and filtering of di-
verse math-related texts, such as mathematical web content, synthetic data, code that uses mathemat-
ical packages, and math textbooks, we curate a basic dataset for continued mathematical pretraining.
We then propose a novel method for extracting mathematical reasoning steps from the previously
collected dataset and translating them to code snippets reflecting the underlying reasoning processes.
By combining the basic data with the model-generated mathematical code accompanied with their
corresponding reasoning steps, we produce a 19.2B-token mathematical pretraining corpus named
MathCode-Pile, which significantly improves the performance of four different base models across
five representative mathematical benchmarks. By open-sourcing the entire data processing pipeline
and model training code, we actively promote transparency, reproducibility, and collaboration within
the research community, facilitating future research in this area.
7 ACKNOWLEDGEMENT
This project is funded in part by National Key R&D Program of China Project 2022ZD0161100, by
the Centre for Perceptual and Interactive Intelligence (CPII) Ltd under the Innovation and Technol-
ogy Commission (ITC)’s InnoHK, by NSFC-RGC Project N CUHK498/24. Hongsheng Li is a PI
of CPII under the InnoHK.
REFERENCES
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Al-
bert Q. Jiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An open language model
for mathematics, 2024. URL https://arxiv.org/abs/2310.10631.
Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. Piqa: Reasoning about
physical commonsense in natural language, 2019. URL https://arxiv.org/abs/1911.
11641.
Zheng Cai, Maosong Cao, Haojiong Chen, Kai Chen, Keyu Chen, Xin Chen, Xun Chen, Zehui
Chen, Zhi Chen, Pei Chu, Xiaoyi Dong, Haodong Duan, Qi Fan, Zhaoye Fei, Yang Gao, Jiaye
Ge, Chenya Gu, Yuzhe Gu, Tao Gui, Aijia Guo, Qipeng Guo, Conghui He, Yingfan Hu, Ting
Huang, Tao Jiang, Penglong Jiao, Zhenjiang Jin, Zhikai Lei, Jiaxing Li, Jingwen Li, Linyang Li,
Shuaibin Li, Wei Li, Yining Li, Hongwei Liu, Jiangning Liu, Jiawei Hong, Kaiwen Liu, Kuikun
Liu, Xiaoran Liu, Chengqi Lv, Haijun Lv, Kai Lv, Li Ma, Runyuan Ma, Zerun Ma, Wenchang
Ning, Linke Ouyang, Jiantao Qiu, Yuan Qu, Fukai Shang, Yunfan Shao, Demin Song, Zifan Song,
Zhihao Sui, Peng Sun, Yu Sun, Huanze Tang, Bin Wang, Guoteng Wang, Jiaqi Wang, Jiayu Wang,
Rui Wang, Yudong Wang, Ziyi Wang, Xingjian Wei, Qizhen Weng, Fan Wu, Yingtong Xiong,
Chao Xu, Ruiliang Xu, Hang Yan, Yirong Yan, Xiaogui Yang, Haochen Ye, Huaiyuan Ying, Jia
Yu, Jing Yu, Yuhang Zang, Chuyu Zhang, Li Zhang, Pan Zhang, Peng Zhang, Ruijie Zhang, Shuo
Zhang, Songyang Zhang, Wenjian Zhang, Wenwei Zhang, Xingcheng Zhang, Xinyue Zhang, Hui
Zhao, Qian Zhao, Xiaomeng Zhao, Fengzhe Zhou, Zaida Zhou, Jingming Zhuo, Yicheng Zou,
Xipeng Qiu, Yu Qiao, and Dahua Lin. Internlm2 technical report, 2024.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John
Schulman. Training verifiers to solve math word problems, 2021. URL https://arxiv.
org/abs/2110.14168.
Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha
Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models.
arXiv preprint arXiv:2407.21783, 2024.
Zhaoye Fei, Yunfan Shao, Linyang Li, Zhiyuan Zeng, Conghui He, Hang Yan, Dahua Lin, and
Xipeng Qiu. Query of cc: Unearthing large scale domain-specific knowledge from public corpora,
2024. URL https://arxiv.org/abs/2401.14624.
Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Fos-
ter, Laurence Golding, Jeffrey Hsu, Alain Le Noac’h, Haonan Li, Kyle McDonell, Niklas Muen-
nighoff, Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lin-
tang Sutawika, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework
for few-shot language model evaluation, 07 2024. URL https://zenodo.org/records/
12608602.
Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen, Yujiu Yang, Minlie Huang, Nan Duan, and
Weizhu Chen. Tora: A tool-integrated reasoning agent for mathematical problem solving, 2024.
URL https://arxiv.org/abs/2309.17452.
Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai,
Yin Tat Lee, and Yuanzhi Li. Textbooks are all you need, 2023. URL https://arxiv.org/
abs/2306.11644.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Ja-
cob Steinhardt. Measuring massive multitask language understanding, 2021a. URL https:
//arxiv.org/abs/2009.03300.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,
and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset, 2021b.
URL https://arxiv.org/abs/2103.03874.
Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chap-
lot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier,
Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mistral 7b, 2023. URL https:
//arxiv.org/abs/2310.06825.
Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for efficient
text classification, 2016. URL https://arxiv.org/abs/1607.01759.
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ra-
masesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam
Neyshabur, Guy Gur-Ari, and Vedant Misra. Solving quantitative reasoning problems with lan-
guage models, 2022. URL https://arxiv.org/abs/2206.14858.
Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao
Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii,
Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João
Monteiro, Oleh Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Lo-
gesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang, Rudra
Murthy, Jason Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca, Manan Dey,
Zhihan Zhang, Nour Fahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam Singh, Sasha Luc-
cioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor,
Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex
Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva
Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Muñoz Ferrandis, Sean Hughes,
Thomas Wolf, Arjun Guha, Leandro von Werra, and Harm de Vries. Starcoder: may the source
be with you!, 2023. URL https://arxiv.org/abs/2305.06161.
Minpeng Liao, Wei Luo, Chengxi Li, Jing Wu, and Kai Fan. Mario: Math reasoning with code
interpreter output – a reproducible pipeline, 2024. URL https://arxiv.org/abs/2401.
08190.
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan
Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step, 2023. URL
https://arxiv.org/abs/2305.20050.
Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. Is your code generated by
chatgpt really correct? rigorous evaluation of large language models for code generation, 2023.
URL https://arxiv.org/abs/2305.01210.
Zimu Lu, Aojun Zhou, Houxing Ren, Ke Wang, Weikang Shi, Junting Pan, Mingjie Zhan, and Hong-
sheng Li. Mathgenie: Generating synthetic data with question back-translation for enhancing
mathematical reasoning of llms, 2024a. URL https://arxiv.org/abs/2402.16352.
Zimu Lu, Aojun Zhou, Ke Wang, Houxing Ren, Weikang Shi, Junting Pan, Mingjie Zhan, and Hong-
sheng Li. Step-controlled dpo: Leveraging stepwise error for enhanced mathematical reasoning,
2024b. URL https://arxiv.org/abs/2407.00782.
Keiran Paster, Marco Dos Santos, Zhangir Azerbayev, and Jimmy Ba. Openwebmath: An open
dataset of high-quality mathematical web text, 2023. URL https://arxiv.org/abs/
2310.06786.
Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and
Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model,
2024. URL https://arxiv.org/abs/2305.18290.
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier,
Thomas Scialom, and Gabriel Synnaeve. Code llama: Open foundation models for code, 2024.
URL https://arxiv.org/abs/2308.12950.
Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adver-
sarial winograd schema challenge at scale, 2019. URL https://arxiv.org/abs/1907.
10641.
Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang,
Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathe-
matical reasoning in open language models, 2024. URL https://arxiv.org/abs/2402.
03300.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Niko-
lay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher,
Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy
Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn,
Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel
Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee,
Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra,
Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi,
Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh
Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen
Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic,
Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models,
2023. URL https://arxiv.org/abs/2307.09288.
Ke Wang, Houxing Ren, Aojun Zhou, Zimu Lu, Sichun Luo, Weikang Shi, Renrui Zhang, Linqi
Song, Mingjie Zhan, and Hongsheng Li. Mathcoder: Seamless code integration in llms for en-
hanced mathematical reasoning, 2023a. URL https://arxiv.org/abs/2310.03731.
Peiyi Wang, Lei Li, Zhihong Shao, R. X. Xu, Damai Dai, Yifei Li, Deli Chen, Y. Wu, and Zhifang
Sui. Math-shepherd: Verify and reinforce llms step-by-step without human annotations, 2024.
URL https://arxiv.org/abs/2312.08935.
Zengzhi Wang, Rui Xia, and Pengfei Liu. Generative ai for math: Part i – mathpile: A billion-token-
scale pretraining corpus for math, 2023b. URL https://arxiv.org/abs/2312.17120.
Yifan Xu, Xiao Liu, Xinghan Liu, Zhenyu Hou, Yueyan Li, Xiaohan Zhang, Zihan Wang, Aohan
Zeng, Zhengxiao Du, Wenyi Zhao, Jie Tang, and Yuxiao Dong. Chatglm-math: Improving math
problem-solving in large language models with a self-critique pipeline, 2024. URL https:
//arxiv.org/abs/2404.02893.
An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, et al. Qwen2 technical report. arXiv preprint arXiv:2407.10671, 2024a.
An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu,
Jianhong Tu, Jingren Zhou, Junyang Lin, Keming Lu, Mingfeng Xue, Runji Lin, Tianyu Liu,
Xingzhang Ren, and Zhenru Zhang. Qwen2.5-math technical report: Toward mathematical ex-
pert model via self-improvement, 2024b. URL https://arxiv.org/abs/2409.12122.
Zhen Yang, Ming Ding, Qingsong Lv, Zhihuan Jiang, Zehai He, Yuyi Guo, Jinfeng Bai, and Jie Tang.
Gpt can solve mathematical problems without a calculator, 2023. URL https://arxiv.
org/abs/2309.03241.
Zitong Yang, Neil Band, Shuangping Li, Emmanuel Candès, and Tatsunori Hashimoto. Synthetic
continued pretraining, 2024c. URL https://arxiv.org/abs/2409.07431.
Huaiyuan Ying, Shuo Zhang, Linyang Li, Zhejian Zhou, Yunfan Shao, Zhaoye Fei, Yichuan Ma,
Jiawei Hong, Kuikun Liu, Ziyi Wang, Yudong Wang, Zijian Wu, Shuaibin Li, Fengzhe Zhou,
Hongwei Liu, Songyang Zhang, Wenwei Zhang, Hang Yan, Xipeng Qiu, Jiayu Wang, Kai Chen,
and Dahua Lin. Internlm-math: Open math large language models toward verifiable reasoning,
2024.
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhen-
guo Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions
for large language models, 2024. URL https://arxiv.org/abs/2309.12284.
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Keming Lu, Chuanqi Tan, Chang Zhou,
and Jingren Zhou. Scaling relationship on learning mathematical reasoning with large language
models, 2023. URL https://arxiv.org/abs/2308.01825.
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen.
Mammoth: Building math generalist models through hybrid instruction tuning, 2023. URL
https://arxiv.org/abs/2309.05653.
Xiang Yue, Tuney Zheng, Ge Zhang, and Wenhu Chen. Mammoth2: Scaling instructions from the
web, 2024. URL https://arxiv.org/abs/2405.03548.
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a ma-
chine really finish your sentence?, 2019. URL https://arxiv.org/abs/1905.07830.
Ge Zhang, Scott Qu, Jiaheng Liu, Chenchen Zhang, Chenghua Lin, Chou Leuang Yu, Danny Pan,
Esther Cheng, Jie Liu, Qunshu Lin, Raven Yuan, Tuney Zheng, Wei Pang, Xinrun Du, Yiming
Liang, Yinghao Ma, Yizhi Li, Ziyang Ma, Bill Lin, Emmanouil Benetos, Huan Yang, Junting
Zhou, Kaijing Ma, Minghao Liu, Morry Niu, Noah Wang, Quehry Que, Ruibo Liu, Sine Liu,
Shawn Guo, Soren Gao, Wangchunshu Zhou, Xinyue Zhang, Yizhi Zhou, Yubo Wang, Yuelin Bai,
Yuhan Zhang, Yuxiang Zhang, Zenith Wang, Zhenzhu Yang, Zijian Zhao, Jiajun Zhang, Wanli
Ouyang, Wenhao Huang, and Wenhu Chen. Map-neo: Highly capable and transparent bilingual
large language model series, 2024. URL https://arxiv.org/abs/2405.19327.
Aojun Zhou, Ke Wang, Zimu Lu, Weikang Shi, Sichun Luo, Zipeng Qin, Shaoqing Lu, Anya Jia,
Linqi Song, Mingjie Zhan, and Hongsheng Li. Solving challenging math word problems using
GPT-4 code interpreter with code-based self-verification. In The Twelfth International Confer-
ence on Learning Representations, 2024. URL https://openreview.net/forum?id=
c8McWs4Av0.
A PROMPT FOR ANNOTATION OF MATH WEB DOCUMENTS
In this section, we present the prompt we used for annotation of documents in OpenWebMath and
the initially filtered CC-En. The prompt, as shown in Fig. 3, asks the model to classify the document
into one of seven types, which are types of documents that frequently appear in the datasets. We
observe that this method helps the model to better identify and filter out irrelevant text than using a
binary classification of whether the text is related to math.
B TRAINING DETAILS OF FASTTEXT CLASSIFIERS
We employ an open-source library5 for training, configuring the vector dimension to 50, the learning
rate to 0.5, the maximum length of word n-grams to 2, and the number of training epochs to 5. For
the initial filtering of the Common Crawl corpus, we sample 3 million data points from the seed
corpus of filtered-OpenWebMath as positive training examples and another 8 million web pages
from Common Crawl as negative examples. For finer filtering, we use 2 million data points annotated
as math-related by Mixtral-8x7B-Instruct as positive training samples and 1 million data points
annotated as unrelated to math as negative training samples.
5https://fasttext.cc/
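A minimal sketch of this training setup with the fasttext library is shown below; the training-file path and label names are assumptions, while the hyperparameters follow the values listed above.

```python
import fasttext

# Hypothetical training file in fastText format: one document per line,
# prefixed with "__label__math" or "__label__other".
TRAIN_FILE = "fasttext_train.txt"

model = fasttext.train_supervised(
    input=TRAIN_FILE,
    dim=50,        # vector dimension
    lr=0.5,        # learning rate
    wordNgrams=2,  # maximum length of word n-grams
    epoch=5,       # number of training epochs
)
model.save_model("math_classifier.bin")

# Example prediction on a single-line document.
labels, probs = model.predict("We compute the derivative of x^2, which is 2x.")
print(labels[0], float(probs[0]))
```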
Prompt: You will be provided with a block of text. I need you to classify the text into one of the
following types:
1. The text describes a mathematical problem and its solution.
2. The text explains a mathematical concept or mathematical theory.
3. The text explains a scientific or engineering concept that requires mathematical knowledge.
4. The text describes a programming problem and its solution.
5. The text explains a concept or theory related to programming.
6. The text explains the usage of a programming language or software tool.
7. The text does not belong to any of the types above.
Here’s the text I’ve provided. Kindly analyze and classify it into type 1, 2, 3, 4, 5, 6 or 7. Put your
choice behind “The type is:”. Please do not generate any unrelated additional comments! The type
number must match the type description. Here’s one of the texts that needs to be classified: {TEXT}
The type is:
Figure 3: The prompt for annotation of OpenWebMath and the initially filtered CC-En documents.
{TEXT} is replaced with the content of the document.
Prompt: You will be presented with a text related to math. I need you to carefully read through the
text. If you find any incorrect statments, erroneous computation steps, spelling mistakes, grammatical
errors, or formatting issues, adjust them so that the error is corrected. Rewrite the text to make it more
accurate and easier to understand. You should only output an adjusted version of the given text. Also,
do not change the original language. Please do not generate any unrelated additional comments! The
text is as follows: {TEXT}
You should output:
Figure 4: The prompt asking Llama-3.1-70B-Instruct to simply rewrite the text and improve its
quality. {TEXT} is replaced with the content of the document.
C EXAMPLES OF THE MODEL-GENERATED MATHEMATICAL CODE
This section shows several examples of the translation from math-related texts to mathematical code accompanied with corresponding reasoning steps. As shown in Tab. 8, Tab. 9, Tab. 10 and Tab. 11, the model first extracts the LaTeX expression along with its conditions and result from the original text, then generates a Python code snippet based on this information.
D EXAMPLES OF REMOVED IRRELEVANT TEXTS
In this section, we present several examples in the original OpenWebMath dataset that are irrelevant
to mathematical reasoning and removed in the filtering process. As shown in Tab. 12, Tab. 13, and
Tab. 14, the content of these documents is not related to math; instead, they are about subjects such as politics, testing software, or web development. Removing these irrelevant texts has no obvious impact on the mathematical continued pretraining performance.
E PROMPT FOR SIMPLE REWRITING TO IMPROVE QUALITY FOR ABLATION
STUDY
To rule out the possibility that the improvement results solely from the enhanced quality of the
texts generated by Llama-3.1-70B-Instruct, we designed a prompt asking Llama-3.1-70B-Instruct
to rewrite the text, checking for mistakes in content and format to enhance accuracy and clarity, as
shown in Fig. 4.
Original Text:
Bayes and his Theorem
My earlier post on Bayesian probability seems to have generated quite a lot of readers, so this lunchtime I thought I’d add a little bit of background.
......
If we ask the question “what is the probability of exactly x successes from the possible n?”, the answer is given by the binomial distribution:
P_n(x | n, p) = C(n, x) p^x (1 − p)^(n−x)
......

Translation Output:
Conditions Needed:
1. The number of trials (n) is a positive integer.
2. The probability of success (p) is a real number between 0 and 1.
3. The number of successes (x) is a non-negative integer less than or equal to n.
Computation Expression:
P_n(x | n, p) = C(n, x) p^x (1 − p)^(n−x)
Computation Result:
The probability of exactly x successes in n independent trials, each with a probability of success p.
Python Code Snippet:
```python
import math

def binomial_distribution(n, x, p):
    combinations = math.comb(n, x)
    probability = combinations * (p ** x) * ((1 - p) ** (n - x))
    return probability

# Example usage:
n = 10   # number of trials
x = 5    # number of successes
p = 0.5  # probability of success
result = binomial_distribution(n, x, p)
print(result)
```
Table 8: An example of translating a piece of text to mathematical code and a corresponding rea-
soning step.
F COMPARISON BETWEEN ADDING AND NOT ADDING MATHEMATICAL
CODE
In this section, we present the comparison between adding and not adding mathematical code across
different training steps. The experiments are conducted on Llama-3 8B. As shown in Fig. 5 and
Fig. 6, adding the model-translated mathematical code improves accuracy across different training
steps.
G EFFECT OF MATHCODE-PILE ON GENERAL CODING AND
OTHER ABILITIES
In order to explore MathCode-Pile’s effect on general coding abilities, we test the MathCoder2 mod-
els on HumanEval and MBPP, two representative coding benchmarks, using the EvalPlus6 frame-
6https://github.com/evalplus/evalplus
Original Text:
# Probability of getting 2 Aces, 2 Kings and 1 Queen in a five card poker hand (Part II)
So I reworked my formula in method 1 after getting help with my original question ......
(C(4,2) · C(4,2) · C(4,1)) / C(52,5) = 3/54145 ≈ 5.540678 × 10^−5
......

Translation Output:
Conditions Needed:
1. The total number of cards in a deck is 52.
2. We are drawing 5 cards from the deck.
3. We want to calculate the probability of getting 2 Aces, 2 Kings, and 1 Queen.
Computation Expression:
(C(4,2) · C(4,2) · C(4,1)) / C(52,5)
Computation Result:
5.540678 × 10^−5
Python Code Snippet:
```python
import math

def combination(n, k):
    return math.comb(n, k)

numerator = combination(4, 2) * combination(4, 2) * combination(4, 1)
denominator = combination(52, 5)
result = numerator / denominator
print(result)
```
Table 9: An example of translating a piece of text to mathematical code and a corresponding rea-
soning step.
work. HumanEval+ and MBPP+ are extended versions of HumanEval and MBPP with additional
test samples, as introduced in Liu et al. (2023). The pass@1 accuracies are shown in Tab. 15.
MathCode-Pile improves the performance of Llama3-8B, DeepSeekMath-7B, and Mistral-7B on
these coding benchmarks. The performance of MathCoder2-CodeLlama-7B on these benchmarks is
similar to that of CodeLlama-7B, which is understandable since CodeLlama is specifically trained
for code generation. This demonstrates that MathCode-Pile can enhance general coding abilities
beyond math-focused tasks for models not explicitly trained on code.
To demonstrate how MathCode-Pile affects LLMs’ general abilities, we tested the MathCoder2
models on Hellaswag (Zellers et al., 2019), PIQA (Bisk et al., 2019), and Winogrande (Sakaguchi et al., 2019) using the lm-evaluation-harness7. As shown in the table below, training on MathCode-
Pile slightly impacts the performance of general-purpose models, such as Llama3-8B and Mistral-
7B, on these datasets, likely because MathCode-Pile consists entirely of math-related data. The
effect on special-purpose models, such as DeepSeekMath-7B and CodeLlama-7B, is minimal.
7https://github.com/EleutherAI/lm-evaluation-harness
Original Text:
# How do you use the product rule to differentiate g(x) = (x^2 + 1)(x^2 − 2x)?
......
and h(x) = x^2 − 2x ⇒ h′(x) = 2x − 2
......

Translation Output:
Conditions Needed:
1. h(x) is a polynomial function
2. The derivative of x^2 is 2x
3. The derivative of −2x is −2
Computation Expression:
d/dx (x^2 − 2x) = 2x − 2
Computation Result:
h′(x) = 2x − 2
Python Code Snippet:
```python
import sympy as sp

x = sp.symbols('x')
h = x**2 - 2*x
h_prime = sp.diff(h, x)
print(h_prime)  # Output: 2*x - 2
```
Table 10: An example of translating a piece of text to mathematical code and a corresponding
reasoning step.
Figure 5: Comparison of the accuracy on GSM8K between adding and not adding mathematical
code across different training steps.
H N-GRAMS OVERLAP BETWEEN TRAINING DATA AND TESTING DATA
We also apply n-gram testing to check the overlap ratio between MathCode-Pile and the test datasets, demonstrating the effectiveness of our decontamination. The overlaps at various n-gram sizes are very low, and the overlap becomes 0.00% when n is 13.
Original Text:
# How do you use the product rule to differentiate g(x) = (x2 + 1)(x2 − 2x)?
......
and h (x) = x2 − 2x ⇒ h′ (x) = 2x − 2
......
Translation Output:
Conditions Needed:
1. h(x) is a polynomial function
2. The derivative of x^2 is 2x
3. The derivative of −2x is −2
Computation Expression:
$\frac{d}{dx}(x^2 - 2x) = 2x - 2$
Computation Result:
h′(x) = 2x − 2
Python Code Snippet:
```python
import sympy as sp
x = sp.symbols('x')
h = x**2 - 2*x
h_prime = sp.diff(h, x)
print(h_prime)
# Output: 2*x - 2
```
Table 11: An example of translating a piece of text to mathematical code and a corresponding
reasoning step.
## Avoiding Weimar Russia
Matthew Yglesias writes:
Matthew Yglesias: Beyond Economics: Over at Brad DeLong’s site you can see a fascinat-
ing discussion of America’s Russia policy in the 1990s between DeLong, Martin Wolf, and
Lawrence Summers. One remark I would make is that to an extraordinary extent, all three par-
ticipants are willing to accept the premise that the only goal of US policy toward Russia in the
1990s was a good-faith effort to induce Russian prosperity, with such efforts being hampered
by political constraints, the objective difficulty of the task, and pure policy errors...
Well, yes. Russia was once a superpower and may be one again. One would have thought that
the history of 1914-1945 would teach ample lessons about the national security undesirability
of trying to keep great powers–like Weimar Germany–poor and weak. One would have thought
that the history of 1945-1990 would teach ample lessons about the national security desirability
of trying to help great powers–like Japan and West Germany–become prosperous, democratic,
and well-integrated into the world economy.
One top of the national-security strategic argument there is the economic argument: the fact
that richer trading partners are better trading partners: they make more and more interesting
stuff for us to buy.
......
Table 12: An example of removed text irrelevant to mathematical reasoning in OpenWebMath.
# MicroEJ Test Suite Engine¶
## Introduction¶
The MicroEJ Test Suite Engine is a generic tool made for validating any development project
using automatic testing.
This section details advanced configuration for users who wish to integrate custom test suites
in their build flow.
The MicroEJ Test Suite Engine allows the user to test any kind of projects within the configu-
ration of a generic Ant file.
The MicroEJ Test Suite Engine is already pre-configured for running test suites on a MicroEJ
Platform (either on Simulator or on Device).
## Using the MicroEJ Test Suite Ant Tasks¶
Multiple Ant tasks are available in the testsuite-engine.jar provided in the Build Kit:
• testsuite allows the user to run a given test suite and to retrieve an XML report document in a
JUnit format.
• javaTestsuite is a subtask of the testsuite task, used to run a specialized test suite for Java (will
only run Java classes).
• htmlReport is a task which will generate an HTML report from a list of JUnit report files.
......
Table 13: An example of removed text irrelevant to mathematical reasoning in OpenWebMath.
By Kimserey Lam with
# Conemu A Better Command Prompt For Windows
Jul 22nd, 2017 - written by Kimserey with .
When developing multiple Web api under multiple Visual Studio solutions, it can become very
tedious to maintain, run and debug. Opening multiple instances of Visual Studio is very costly
in term of memory and running all at once also clutter the screen which rapidly becomes irritat-
ing. With the advent of dotnet CLI tools, it has been clear that the next step would be to move
out of the common “right click/build, F5” of Visual Studio and toward “dotnet run” on a com-
mand prompt. Last month I was looking for a Windows alternative of the bash terminal which
can be found on Mac and I found ConEmu. ConEmu provides access to all typical shells via
an enhanced UI. Today we will see how we can use ConEmu to ease our development process
by leveraging only 2 of its features; the tasks and environment setup.
1. dotnet CLI 2. Setup environment 4. Apply to multiple services
......
Table 14: An example of removed text irrelevant to mathematical reasoning in OpenWebMath.
Table 15: Performance of the MathCoder2 models on general coding benchmarks: HumanEval,
HumanEval+, MBPP and MBPP+, as well as general ability benchmarks: Hellaswag, PIQA and
Winogrande.
Model | HumanEval | HumanEval+ | MBPP | MBPP+ | Hellaswag | PIQA | Winogrande
Llama-3-8B | 40.2 | 35.4 | 61.9 | 52.1 | 79.2 | 81.0 | 73.4
MathCoder2-Llama-3-8B | 51.8 | 43.3 | 61.9 | 52.1 | 75.9 | 78.1 | 71.7
DeepSeekMath-7B | 36.0 | 28.7 | 64.8 | 52.9 | 66.4 | 74.7 | 64.6
MathCoder2-DeepSeekMath-7B | 36.6 | 32.3 | 66.7 | 54.8 | 66.9 | 74.0 | 63.1
Mistral-7B | 29.3 | 23.8 | 51.3 | 40.5 | 81.1 | 82.0 | 73.9
MathCoder2-Mistral-7B | 39.6 | 34.1 | 54.5 | 46.8 | 78.1 | 78.0 | 72.3
Code-Llama-7B | 37.8 | 35.4 | 59.5 | 46.8 | 62.9 | 72.5 | 64.7
MathCoder2-Code-Llama-7B | 38.4 | 32.3 | 58.5 | 47.4 | 62.8 | 72.3 | 63.7
Table 16: Overlap ratios for different n-grams.
n-grams | 3 | 4 | 5 | 6 | 7 | 8 | 13
Overlap Ratio (%) | 0.21 | 0.12 | 0.06 | 0.03 | 0.02 | 0.01 | 0.00
Figure 6: Comparison of the accuracy on MATH between adding and not adding mathematical code
across different training steps.
|
X9OfMNNepI | Large Language Models for Rediscovering Unseen Chemistry Scientific Hypotheses | [ 6, 5, 8, 6 ] | Published as a conference paper at ICLR 2025
MOOSE-CHEM: LARGE LANGUAGE MODELS FOR
REDISCOVERING UNSEEN CHEMISTRY SCIENTIFIC
HYPOTHESES
Zonglin Yang1,2∗, Wanhao Liu2,3, Ben Gao2,4, Tong Xie5,6, Yuqiang Li2,
Wanli Ouyang2, Soujanya Poria7, Erik Cambria1†, Dongzhan Zhou2†
1 Nanyang Technological University 2 Shanghai Artificial Intelligence Laboratory
3 University of Science and Technology of China 4 Wuhan University 5 University of New South Wales
6 GreenDynamics 7 Singapore University of Technology and Design
{zonglin.yang,cambria}@ntu.edu.sg, [email protected]
ABSTRACT
Scientific discovery contributes largely to human society’s prosperity, and recent
progress shows that LLMs could potentially catalyze this process. However, it
is still unclear whether LLMs can discover novel and valid hypotheses in chem-
istry. In this work, we investigate this central research question: Can LLMs au-
tomatically discover novel and valid chemistry research hypotheses given only a
chemistry research background (consisting of a research question and/or a back-
ground survey), without limitation on the domain of the research question? After
extensive discussions with chemistry experts, we propose an assumption that a
majority of chemistry hypotheses can result from a research background and sev-
eral inspirations. With this key insight, we break the central question into three
smaller fundamental questions. In brief, they are: (1) given a background ques-
tion, whether LLMs can retrieve good inspirations; (2) with background and inspi-
rations, whether LLMs can lead to hypothesis; and (3) whether LLMs can identify
good hypotheses to rank them higher. To investigate these questions, we construct
a benchmark consisting of 51 chemistry papers published in Nature, Science, or
a similar level in 2024 (all papers are only available online since 2024). Every
paper is divided by chemistry PhD students into three components: background,
inspirations, and hypothesis. The goal is to rediscover the hypothesis, given only
the background and a large randomly selected chemistry literature corpus con-
sisting of the ground truth inspiration papers, with LLMs trained with data up to
2023. We also develop an LLM-based multi-agent framework 1 that leverages the
assumption, consisting of three stages reflecting the three smaller questions. The
proposed method can rediscover many hypotheses with very high similarity with
the ground truth ones, covering the main innovations.
1 INTRODUCTION
Discovering new science has long been one of the deepest desires of humanity, which can not only
satisfy our curiosity to understand the universe but also contribute largely to the prosperity of human
society (Coccia, 2019). Recently, there have been some breakthroughs indicating that LLMs have the
potential to assist scientists in accelerating the discovery process (Luo et al., 2025).
Yang et al. (2024b) first find that LLMs can generate novel and valid enough hypotheses evaluated by
experts. They focus on the social science domain and make discoveries by developing a multi-agent
system, leveraging an assumption that a majority of social science hypotheses can be divided into a
research background concept and an inspiration concept. This assumption is largely valid because
a social science hypothesis is about how an independent variable can influence another dependent
variable (Hair et al., 2007).
1Code and Benchmark are available at https://github.com/ZonglinY/MOOSE-Chem.git
∗Contribution during internship at Shanghai Artificial Intelligence Laboratory. †Corresponding author.
Si et al. (2024) further validate this finding by employing a large group of scientists to evaluate LLMs’ generated hypotheses in the NLP domain, and show that LLMs can generate more novel but slightly less valid research hypotheses than human researchers. However, LLMs’ scientific discovery ability in the natural sciences, such as chemistry, remains unclear.
Sprueill et al. (2023; 2024) adopt LLMs to conduct a search process for catalyst discovery. However,
their method is limited to the catalyst discovery domain, and their evaluation relies on whether LLMs
can rediscover existing commercially used catalysts, potentially influenced by a data contamination
problem. As a result, it is still unclear how good LLMs are for chemistry scientific discovery.
In this paper, we investigate this central research question: Can LLMs automatically discover novel
and valid chemistry research hypotheses (even at the Nature level) given only a chemistry research
background (consisting of a research question and/or a background survey), without limitation on
the domain of the research question? With extensive discussions with chemistry experts, we find
that the assumption used in social science, that a hypothesis can be divided into background and
inspiration, can also apply to a majority of chemistry hypotheses. It is not too surprising, since
cognitive science research has shown that creative ideas often result from the cohesive association of
two seemingly unrelated pieces of knowledge (Koestler, 1964; Benedek et al., 2012; Lee & Chung,
2024). A main difference is that chemistry might need more than one inspiration (e.g., adding several
components to compose a novel chemistry system). With this key insight, we break the seemingly
impossible-to-solve central question into three smaller, more practical, and executable fundamental
questions that, when summed up, should be very close to a set of sufficient conditions for the central
question. Specifically, the smaller questions are (1) whether LLM can identify inspiration papers that
have the potential to help with the given research question; (2) given only known knowledge (from
background and inspirations), whether LLMs can infer unknown knowledge that is highly likely to
be valid; and (3) whether LLM can identify good hypotheses and rank them higher.
To investigate these three questions, we build a benchmark consisting of 51 chemistry papers anno-
tated by chemistry PhD students, breaking every paper into a background, several inspirations, and
a hypothesis. The goal is to rediscover the hypothesis with only the background by using LLMs
trained with data up to December 2023. The papers are all published in Nature, Science, or a similar
level in 2024, and they are only made public on the internet in 2024. The benchmark is designed to
be similar to the Mathematical Olympiad Competition (Trinh et al., 2024), to provide several dozens
of very difficult and meaningful questions to solve. Along with the benchmark, we propose a rank-
ing task for scientific discovery (along with evaluation criteria), which has been largely overlooked
in previous works (Yang et al., 2024a; Wang et al., 2024b). Ranking is important because although
AI systems can generate a large number of hypotheses in a relatively short time, verifying them one
by one requires substantial experimental cost.
Motivated by this breakup into three smaller questions, we design a multi-agent framework named
MOOSE-CHEM for chemistry scientific discovery. It in general includes three stages: (1) searching
through chemistry literature to find inspiration papers, (2) leveraging the inspirations to propose
hypotheses for the background research question, and (3) identifying high-quality hypotheses to give
them a higher rank. Compared with Yang et al. (2024b)’s method in social science that assumes a
similar separation between background and inspiration for hypothesis formulation, MOOSE-CHEM
adopts an evolutionary algorithm to foster a broader diversity of approaches in using inspiration
for background, thereby capitalizing on the benefits derived from varied mutations. In addition,
MOOSE-CHEM also adopts a multi-step design to collect more than one inspiration for chemistry discovery. Finally, it uses an efficient ranking method to provide a better reference for scientists.
We design experiments with the benchmark to test the three fundamental questions and find that
LLMs are highly capable. We also test MOOSE-CHEM with the benchmark, mimicking the setting
to run it in the wild by only giving a background and a corpus of up to 3000 chemistry papers
to select inspiration. Even in this challenging setting, MOOSE-CHEM can still rediscover many
hypotheses with very high similarity with the ground truth ones, covering the main innovations.
Overall, the contributions of this paper are:
• We provide the first mathematical derivation of how to decompose the seemingly impossible-to-solve question P (hypothesis|research background) into many executable and practical smaller steps. This decomposition makes P (hypothesis|research background) practical to tackle.
• We develop a scientific discovery framework directly based on the mathematical derivation.
Different from previous works, we propose an evolutionary algorithm-based method to bet-
ter associate background and inspiration, multi-step inspiration retrieval and composition,
and an efficient ranking method. In addition, the framework can be applied to chemistry
and material science, which are not covered by previous methods.
• We construct a benchmark, annotated by three chemistry PhD students, consisting of 51 chemistry papers published in Nature, Science, or venues of a similar level, decomposing each paper into the research background, inspirations, and hypothesis.
• We propose an assumption, grounded in preliminary experiments, that LLMs may already
possess numerous knowledge pairs capable of being associated to create novel knowl-
edge—even when scientists have not previously recognized any relationship between them.
• For the first time, we show that an LLM-based framework can largely rediscover the main
innovations of many chemistry hypotheses that have been published in Nature and Science.
The rediscovery is not due to data contamination, since we control the cutoff date of the LLM’s training corpus and the online publication date of the chemistry papers.
2 RELATED WORK
Zhong et al. (2023) work on finding the difference between two corpora to propose hypotheses, but
their evaluation is conducted by Turkers, which cannot lead to a novel discovery. Wang et al. (2024b)
try to utilize LLMs to discover novel NLP and biochemical hypotheses, and find the hypotheses still
fall far behind scientific papers in terms of novelty, depth, and utility. Yang et al. (2024b) first show
that LLMs can generate novel and valid enough hypotheses evaluated by PhD students, but they
only focus on social science. FunSearch (Romera-Paredes et al., 2024) can discover specific solutions for mathematical conjectures but cannot discover new math theorems. Qi et al. (2024) analyze LLMs’ ability for scientific discovery in the biomedical domain by directly generating hypotheses
with only the research background. Boiko et al. (2023); Baek et al. (2024); Li et al. (2024); Lu et al.
(2024) focus on subsequent steps for scientific discovery, mainly developing and conducting experi-
ments. Sprueill et al. (2023; 2024) focus on catalyst discovery, but their evaluation relies on whether LLMs can rediscover existing commercially used catalysts, which might cause a data contamination problem. Kumar et al. (2024) compare different LLMs on scientific discovery in different disciplines.
Tshitoyan et al. (2019) show that word embedding obtained from large-scale chemistry literature can
recommend materials years before their discovery. Xie et al. (2024) predict emerging thermoelectric
materials by summarizing the sentiment in the existing literature.
3 BENCHMARK CONSTRUCTION
The goal of the benchmark, named TOMATO-Chem, is two-fold. Firstly, it is used to analyze LLM’s
ability in terms of the three smaller questions. Secondly, it serves as a challenge to rediscover Nature-level chemistry hypotheses with only a research background. The setting of the challenge is very
similar to a real copilot setting, where scientists tell the copilot about the specific research question
they are interested in, and optionally provide a small survey consisting of several paragraphs summarizing
the existing best-performing methods for the research question.
To achieve the goals, we split each collected paper into the following components: <background
question, background question (strict), background survey, background survey (strict), one to three
inspiration paper titles and their reason to serve as an inspiration, research hypothesis, experiments,
reasoning process, summarization of inspirations>. Every component is described by text.
The reason we add a strict version of the background question and background survey is that many hypotheses make relatively minor modifications to existing methods covered by the survey, and the question itself can be insightful enough to hint at the general direction of the hypothesis. In practice, these situations are entirely possible, especially when scientist users provide a more comprehensive survey of existing methods or embed deep insights in their question. We also keep the strict version to make the task more challenging and to encourage developing methods that better assist scientists even when they are new to their research topic.
The reasoning process indicates the relation between the components of background, inspirations,
and hypothesis. For example, the reasoning process can be “background + inspiration 1 + inspiration
2 = hypothesis”, or “background + inspiration 1/inspiration 2 + inspiration 3 = hypothesis”.
The benchmark consists of 51 chemistry and material science papers and is constructed by multiple
chemistry PhD students. We only select papers published in top chemistry venues and made public on the internet after January 2024. After construction, the experts check again on (1) whether
the identification of the inspirations is correct and whether more inspirations are needed; (2) whether
the background does not contain any information in inspirations or hypothesis; and (3) whether
the background and the identified inspirations can roughly logically lead to the hypothesis. The
complete instruction on the check process is shown in § A.3.
Category | Count
Polymer Chemistry | 21
Organic Chemistry | 22
Inorganic Chemistry | 3
Analytical Chemistry | 5
Total | 51

Table 1: Distribution of categories.

Publication Venue | Count
Nature / Science | 27
Nature Subjournals | 20
Other Top Journals | 4
Total | 51

Table 2: Distribution of publication venues.
Table 1 and Table 2 show the statistics of the benchmark in terms of chemistry category and pub-
lication venue. Material science is a sub-category of chemistry and can belong to the categories
in Table 1, such as polymer materials and organic materials. Around 13 of the collected benchmark papers are in the material science domain. Beyond them, more papers have intersections with material science. We target both chemistry and material science, but for simplicity, we refer to them collectively as chemistry in this paper.
4 METHODOLOGY
4.1 FUNDAMENTAL ASSUMPTION AND FOLLOWING DECOMPOSITION
We propose an assumption that a majority of chemistry hypotheses can originate from a research
background and several inspirations. This assumption is not only supported by many chemistry
researchers whom we have extensive discussions with but also by the cognitive science finding that
“creative ideas often result from the cohesive association of two (or more) seemingly unrelated
pieces of knowledge” (Koestler, 1964; Benedek et al., 2012; Lee & Chung, 2024). We design our
method based on this fundamental assumption.
Denoting background knowledge as b, inspiration knowledge as i, and hypothesis as h, we translate
this assumption as:
h = f (b, i1, . . . , ik)
(1)
Here, k ∈ Z represents the number of inspirations needed for a particular h. Typically in chemistry,
k ∈ [1, 3]. In other words, given existing knowledge in the background, a majority of chemistry research is about searching for knowledge that was previously not known to be related to the background but can in fact assist it, and then associating the background knowledge and the searched knowledge in a reasonable way to compose a hypothesis.
Based on this assumption, we can transform the seemingly impossible-to-solve P (h|b) into an equiv-
alent form, where each step in the equivalent form is practical and executable.
$P(h|b) \approx \prod_{j=1}^{k} P(i_j \mid b, h_{j-1}, I) \cdot P(h_j \mid b, i_j, h_{j-1}), \quad \text{where } h_0 = \emptyset$    (2)
Here, I denotes the full (chemistry) literature, representing the full inspiration space to search for
every single i. The full proof along with detailed analyses is shown in § A.1.
Figure 1: The MOOSE-Chem framework. It receives b and I as input, and outputs a list of ranked
h. The bottom-right legend describes the symbols in the figure.
Equation 2 is meaningful in that by decomposing P (h|b) into more practical and executable smaller
questions, the seemingly impossible-to-solve P (h|b) itself becomes practical. We analyze how
P (ij|b, hj−1, I) and P (hj|b, ij, hj−1) are practical and executable by LLMs in § 5.1 and § 5.2
correspondingly.
Now we have clarified the steps to obtain h from b. However, this alone might not be helpful enough in practice, since I can be large in scale, and the search process might find many i and ultimately lead to many h. Moreover, it is very time-consuming for scientists to conduct experiments to verify
every single h. Therefore, it would be very helpful if the generated h could be ranked based on
quality. Here, we adopt a straightforward and efficient way for ranking. Specifically, we design a
rating function R(h), such that R(h) ∈ ℝ. Denoting the full set of generated h as H, we can obtain
$P(H_{\mathrm{ranked}}) = P(H, R)$, where $H_{\mathrm{ranked}} = \{h_1, h_2, \ldots, h_n \mid R(h_i) \geq R(h_{i+1}) \text{ for all } i\}$    (3)
Supported by Equation 2 and Equation 3, as a result, to model P (h|b), the only three components
we need to model are P (ij|b, hj−1, I), P (hj|b, ij, hj−1), and R(h). The implementation details of
the three components are illustrated in the remaining subsections in § 4. Analyses of LLM’s ability
on the three components are provided in § 5.
4.2 THE FRAMEWORK DEVELOPED BASED ON THE ASSUMPTION
4.2.1 THE GENERAL PICTURE
Our methodology is developed based on the fundamental assumption discussed in § 4.1. Specif-
ically, we use LLMs to perform P (ij|b, hj−1, I), P (hj|b, ij, hj−1), and R(h), and organize them
into a multi-agent LLM-based framework. The input to the framework is only a background question
and/or background survey, together with a (large) chemistry literature corpus to search for inspira-
tion. The output of the framework is a list of ranked research hypothesis.
The framework’s design is shown in Figure 1 (overview in Figure 2). It is a direct implementation
of Equation 2 and 3. We develop it as simply as possible, retaining only the necessary parts.
In the general picture, given a research background b (research question and/or research survey), the
framework first performs P (i1|b, h0 = ∅, I) by screening through the literature corpus I to select
many papers i, where each of them has the potential to serve as an inspiration. Then the framework
performs P (h1|b, i1, h0 = ∅), associating b and each i together to compose h. Then, it ranks h by
assigning an evaluation score r to each h1 via R(h1). We call these three steps one round.
Another round means going through the three steps again, based on the previous round’s results. Since normally in chemistry no more than three inspirations are needed for one hypothesis (k ∈ [1, 3]), the default setting for MOOSE-Chem is to perform three rounds for each b. With every additional round, the number of i and h can expand exponentially. Here, we adopt beam search to select a fixed number of top-ranked h to enter the next round. The default beam size is 15.
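A minimal sketch of this round structure (screening, hypothesis generation, rating with beam selection) is shown below. It is our own illustration, not the released MOOSE-Chem code; the llm_screen, llm_generate, and llm_rate callables stand in for the prompted GPT-4o calls and are assumptions.

```python
def run_rounds(background, corpus, llm_screen, llm_generate, llm_rate,
               n_rounds=3, beam_size=15):
    """Multi-round ideation with beam search, following Sec. 4.2.1.

    llm_screen(background, prev_h, corpus) -> list of candidate inspirations i
    llm_generate(background, inspiration, prev_h) -> new hypothesis h
    llm_rate(h) -> scalar score r = R(h)
    """
    beam = [None]  # None plays the role of the empty hypothesis h0
    for _ in range(n_rounds):
        scored = []
        for prev_h in beam:
            for insp in llm_screen(background, prev_h, corpus):   # P(i_j | b, h_{j-1}, I)
                new_h = llm_generate(background, insp, prev_h)    # P(h_j | b, i_j, h_{j-1})
                scored.append((llm_rate(new_h), new_h))           # R(h)
        scored.sort(key=lambda pair: pair[0], reverse=True)
        beam = [h for _, h in scored[:beam_size]]                 # keep the top-ranked h
    return beam
```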
4.2.2 DESIGN DETAILS OF P (ij|b, hj−1, I) AND ITS MOTIVATION
We use LLMs to conduct a screening process for P (ij|b, hj−1, I). Specifically, for each inference,
we (1) sequentially select a fixed number of papers from I, where the fixed number is called the
screening window size (default is 15); (2) set up a prompt consisting of b, the title and abstract of
the selected papers from I, and the previous h (if it is not ∅); and (3) instruct the LLM to generate
three titles from the input that can best serve as i for b (and optionally previous h), and give reasons.
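The windowed screening loop can be sketched as follows. This is one way to realize the llm_screen step from the earlier sketch; llm_pick_titles stands in for the prompted LLM call and is an assumption, not the authors’ implementation.

```python
def screen_inspirations(background, prev_h, corpus, llm_pick_titles,
                        window_size=15, keep_per_window=3):
    """One screening pass over the corpus, as described in Sec. 4.2.2.

    corpus: list of (title, abstract) pairs.
    llm_pick_titles: assumed wrapper that prompts the LLM with the background,
    the previous hypothesis (if any), and one window of papers, and returns the
    titles it judges most promising as inspirations.
    """
    kept = []
    for start in range(0, len(corpus), window_size):
        window = corpus[start:start + window_size]
        picked = set(llm_pick_titles(background, prev_h, window, k=keep_per_window))
        kept.extend(paper for paper in window if paper[0] in picked)
    # Roughly keep_per_window / window_size of the corpus survives each pass
    # (e.g. 3/15 = 20%); repeating the pass on `kept` leaves about 4%, and so on.
    return kept
```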
In particular, we use LLMs to choose potential inspirations i, rather than choosing i from citation or semantic neighbors, because i is supposed to be previously unknown to be related to b (as discussed in § 4.1). If the chosen i is already known to be related to b, then the composed h would probably not be novel. If the chosen i contains semantic information similar to b, then it is probably unnecessary to add i at all, since it introduces little or no extra information.
Our bold assumption here is that advanced LLMs, trained on vast scientific literature, may already
recognize novel knowledge pairs unknown to any scientist that can be associated to create novel
knowledge. However, this may not be too bold, as Tshitoyan et al. (2019) showed that unsupervised
word embeddings from 3.3 million materials science abstracts could predict functional materials
years before their discovery. Here, the functional applications can be seen as b, and the recom-
mended materials can be seen as i, or even directly as h if it is similar enough. This probably indicates that LLMs trained with significantly more literature tokens and parameters might already be able to identify relations between many knowledge pairs that no scientist yet knows to be related.
We analyze this assumption in § 5.1.
4.2.3 DESIGN DETAILS OF P (hj|b, ij, hj−1) AND ITS MOTIVATION
The retrieved i is expected not to be known to be related to b; therefore, it might be difficult to figure out an effective way to associate b and i together to compose h. Think of the time when backpropagation was about to be invented. Even if we were very familiar with b (multi-layer logistic regression) and had successfully retrieved i (the chain rule in calculus), could we have invented backpropagation?
Our answer is that, at the very least, we might need to try multiple times and various ways to leverage the chain
rule for multi-layer logistic regression. With this motivation, we develop a simple evolutionary
algorithm-based method, shown in the top-right of Figure 1. We call it “evolutionary unit” (EU).
Specifically, given b and i, EU will first generate multiple hypothesis “mutations” m, where each
m is a unique way to associate b and i together. Then EU further develops each m independently
by providing feedback to each m in terms of validness, novelty, clarity, and significance, and then
refining them based on the feedback. Yang et al. (2024b) first propose to provide feedback in terms
of validness, novelty, and clarity to refine hypotheses. Here, we add an additional aspect, signifi-
cance, since significance is an important evaluation criterion in chemistry. We assume the refined hypothesis should be of better quality, so the refined hypothesis is “selected” while the previous hypothesis is “eliminated” by the “environment”. Finally, EU “recombines” the remaining selected m, leveraging the advantages of every m to propose an h that better associates b and i.
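A minimal sketch of the evolutionary unit is given below; the llm_* callables are hypothetical wrappers around the prompted LLM calls described above, not the authors’ released code.

```python
def evolutionary_unit(background, inspiration, llm_mutate, llm_feedback,
                      llm_refine, llm_recombine, n_mutations=3):
    """Mutate, refine under feedback, then recombine (Sec. 4.2.3).

    llm_mutate    -> one distinct way of associating background and inspiration
    llm_feedback  -> critique on validness, novelty, clarity, and significance
    llm_refine    -> hypothesis revised according to the critique
    llm_recombine -> one hypothesis combining the strengths of all mutations
    """
    refined = []
    for _ in range(n_mutations):
        # Each new mutation is asked to differ from the ones already produced.
        m = llm_mutate(background, inspiration, existing=refined)
        critique = llm_feedback(m)               # four-aspect feedback
        refined.append(llm_refine(m, critique))  # the refined m is "selected"
    return llm_recombine(background, inspiration, refined)
```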
4.2.4 DESIGN DETAILS OF R(h) AND ITS MOTIVATION
We adopt a simple and efficient way for R(h), which is to prompt an LLM to output evaluation scores
for an input h in terms of validness, novelty, significance, and potential. Validness and novelty are
two fundamental requirements for such an inductive reasoning process as scientific discovery (Yang
et al., 2024a;b). Significance is added because it is important for chemistry. We additionally add
potential, because the generated h are about to be further developed by scientists, so we want to pick those h that are not only currently of high quality but also have good potential to be developed further. We did not design R(h) in a more complicated way, since there are many h to rank and we want to save inference time.
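A minimal sketch of such a rating call is shown below; the prompt wording and the score parsing are our own assumptions, and llm_call stands for any text-in/text-out wrapper around the LLM.

```python
import re

RATING_PROMPT = (
    "Rate the following chemistry research hypothesis on a 1-10 scale for each of "
    "validness, novelty, significance, and potential. "
    "Reply with exactly four integers separated by spaces.\n\nHypothesis:\n{h}"
)

def rate_hypothesis(h, llm_call):
    """R(h): average of the four aspect scores returned by the LLM."""
    reply = llm_call(RATING_PROMPT.format(h=h))
    scores = [int(s) for s in re.findall(r"\d+", reply)[:4]]
    return sum(scores) / len(scores) if scores else 0.0

def rank_hypotheses(hypotheses, llm_call):
    """Produce the ranked list of h, sorted by descending R(h) as in Eq. 3."""
    return sorted(hypotheses, key=lambda h: rate_hypothesis(h, llm_call), reverse=True)
```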
Yang et al. (2024b) use the scores as automatic evaluation for generated social science hypotheses
and have shown a high consistency score between automatic evaluation and expert evaluation. How-
ever, in the chemistry domain, LLMs might not be reliable enough to directly evaluate the generated
h (Sprueill et al., 2024). But they might still be able to provide a preliminary quality signal for h: the ranking of the average score over the four aspects of an h determines whether it will enter the
Corpus Size | Hit Ratio (top 20%) | Hit Ratio (top 4%) | Hit Ratio (top 0.8%) | Hit Ratio (top 0.016%)
150 | 92.8% | 76.8% | 61.4% | NA
300 | 96.7% | 83.7% | 60.8% | NA
1000 | 96.4% | 88.9% | 69.0% | 46.7%
3000 | 95.8% | 86.9% | 70.6% | 52.0%
Table 3: Main table for Q1. For each screen window of 15 papers, 3 papers are selected.
Screen window size | Hit Ratio (1 round) | Hit Ratio (2 rounds) | Hit Ratio (3 rounds) | Hit Ratio (4 rounds)
10 | 98.0% | 88.9% | 79.4% | 56.5%
15 | 96.7% | 83.7% | 60.8% | NA
20 | 91.2% | 76.8% | 58.8% | NA
40 | 88.9% | 54.9% | NA | NA
60 | 71.6% | 53.9% | NA | NA
Table 4: Ablation table on screen window size for Q1. The corpus size is 300. For each screen
window no matter its size, 3 papers are selected to remain for the next round of screening.
next round of MOOSE-Chem by beam search. To understand how well LLMs can perform R(h),
we analyze “how well LLMs can rank chemistry hypotheses” in § 5.3.
5 INVESTIGATION ON FUNDAMENTAL QUESTIONS
P (h|b) can be understood as the task of discovering high-quality chemistry research hypotheses, given only a background question and/or background survey. Our central question to investigate is how well LLMs can perform P (h|b). Supported by Equations 2 and 3, we break up this main question into three smaller questions: how well can LLMs perform (1) P (ij|b, hj−1, I), (2) P (hj|b, ij, hj−1), and (3) R(h)? All experiments are performed with GPT-4o (whose training data extends up to October 2023).
5.1 HOW WELL CAN LLMS PERFORM P (ij|b, hj−1, I)?
Here, we investigate the question (denoted as Q1): “whether LLM can identify inspiration papers
which are unknown to be able to associate with the background (or at least unknown to associate in
a certain way) but in fact can associate with the background to create novel knowledge?”.
We first collect the 3000 most cited chemistry papers published in Nature, and construct a series of I with sizes 150, 300, 1000, and 3000. I is constructed by first adding the ground truth inspiration papers (around 120), then randomly selecting the remaining papers from the 3000 papers, and finally randomizing the order of all the collected papers. Only the title and abstract are needed for each paper in I. The default setting is that each LLM inference screens 15 papers from I and generates the three titles that the LLM thinks can best assist b (and/or the previous h). Screening through I for one round keeps only 20% of I; screening for another round leaves only 4%, and so on.
We use Hit Ratio as the evaluation metric, calculated as the number of selected ground truth inspiration papers divided by the total number of ground truth inspiration papers. All the Hit
Ratio numbers shown in the tables are averaged across the 51 papers in the benchmark.
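Concretely, the metric can be computed as in the short sketch below (our own illustration, matching titles by exact string):

```python
def hit_ratio(selected_titles, ground_truth_titles):
    """Fraction of ground-truth inspiration papers recovered by the screening stage."""
    truth = set(ground_truth_titles)
    return len(truth & set(selected_titles)) / len(truth) if truth else 0.0

# e.g. hit_ratio(["paper A", "paper C"], ["paper A", "paper B"]) == 0.5
```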
Table 3 shows the main experiment results. The Hit Ratio is surprisingly high: more than 75% of the ground truth inspirations are covered even by only the top 4% of papers chosen from the chemistry literature corpus. It seems that LLMs are quite capable of finding inspiration papers that are not known to be associable with the background but in fact can be associated with it to create novel knowledge. This means our bold assumption in § 4.2.2, that “the most advanced LLMs might already know lots of knowledge pairs that are able to associate to create novel knowledge, where the knowledge pairs are not known by any scientist to be related”, may well be true.
Table 4 shows the ablation study in terms of screen window size. It seems that a smaller window size can lead to better performance: a screen window size of 60 keeping 3 for one round selects 5% of the corpus (3/60), and the Hit Ratio is 71.6%; while a screen window size of 15 keeping 3 for two rounds selects only 4% of the corpus ((3/15)^2), yet the Hit Ratio is as high as 83.7%.
Model | Hit Ratio (top 20%) | Hit Ratio (top 4%) | Hit Ratio (top 0.8%)
Llama-3.1-8B | 71.6% | 43.5% | 26.8%
Llama-3.1-70B | 95.1% | 83.0% | 59.5%
Llama-3.1-405B | 95.7% | 78.7% | 52.7%
GPT-4o | 96.7% | 83.7% | 60.8%
Table 5: Comparison of Llama series and GPT-4o on inspiration retrieval. The corpus size is 300.
For each screen window of 15 papers, 3 papers are selected.
5 points | Generated hypothesis covers three key points (or covers all the key points) and leverages them similarly as in the groundtruth hypothesis; extra key points do not have apparent flaws.
4 points | Generated hypothesis covers three key points (or covers all the key points) and leverages them similarly as in the groundtruth hypothesis; extra key points have apparent flaws.
3 points | Generated hypothesis covers two key points and leverages them similarly as in the groundtruth hypothesis, but does not cover more or all key points.
2 points | Generated hypothesis covers one key point and leverages it similarly as in the groundtruth hypothesis, but does not cover more or all key points.
1 point | Generated hypothesis covers at least one key point, but it is used differently than in the groundtruth hypothesis.
0 points | Generated hypothesis does not cover any key point.
Table 6: Description of the Matched Score.
Table 5 compares LLMs of different scales on inspiration retrieval ability. The results indicate that LLMs acquire an emergent ability for inspiration retrieval at a rather small parameter scale, but then quickly plateau. § A.9 discusses how the choice of research background affects inspiration retrieval.
5.2 HOW WELL CAN LLMS PERFORM P (hj|b, ij, hj−1)?
Here, we investigate the question (denoted as Q2): “Given only known knowledge, whether LLM
can reason to unknown knowledge that has high probability to be valid?”.
The first challenge in answering Q2 is the evaluation method: the benchmark covers a large range of chemistry topics, and chemistry is such a complex discipline that a slight change of research topic can make a chemist unable to provide a sufficiently reliable evaluation. In fact, a chemistry researcher might not be able to provide a sufficiently reliable evaluation even when the hypothesis is in their own domain.
Therefore, we adopt a reference-based evaluation method called “Matched Score” (MS). The descriptions are shown in Table 6. It is a 6-point Likert scale, roughly comprising four stages. Denoting the generated hypothesis as gh and the original hypothesis as oh, the four stages are (1) gh ∩ oh = ∅ (0 points); (2) gh ∩ oh ≠ ∅ (1/2/3 points); (3) gh ⊇ oh (4 points); (4) gh ≈ oh (5 points).
We use MOOSE-Chem to investigate Q2. Specifically, we initialize I with only the ground truth inspiration papers and search for i over k rounds, where k is the number of ground truth i needed for each
Matched Score | 5 | 4 | 3 | 2 | 1 | 0 | Total
w/ background survey
Average MS (GPT-4o) | 2 | 9 | 18 | 17 | 5 | 0 | 51
Top MS (GPT-4o) | 28 | 1 | 19 | 3 | 0 | 0 | 51
Top MS (Experts) | 9 | 12 | 22 | 6 | 2 | 0 | 51
w/o background survey
Average MS (GPT-4o) | 1 | 7 | 17 | 19 | 7 | 0 | 51
Top MS (GPT-4o) | 25 | 2 | 19 | 5 | 0 | 0 | 51
Table 7: Main table for Q2. Average/Top MS means the average/highest Matched Score of all
generated h from one b. Table 12 is a more complete version of this table including automatic
evaluation results by Claude-3.5-Sonnet and Gemini-1.5-Pro.
#Matched i | 3 | 2 | 1 | 0
Average Rank Ratio | NA | 0.411 | 0.474 | 0.521
Size | 0 | 302 | 2458 | 4899
Table 8: Relation between the number of matched ground truth i and the average ranking ratio (↓).
Matched Score | 5 | 4 | 3 | 2 | 1 | 0 | -1
Average Rank Ratio | 0.489 | 0.439 | 0.488 | 0.501 | 0.436 | 0.501 | 0.503
Size | 210 | 36 | 404 | 427 | 29 | 102 | 6451
Table 9: Relation between the GPT-4o labeled Matched Score and average ranking ratio (↓).
b. MOOSE-Chem will not retrieve the same i already retrieved in previous rounds, guaranteeing
that before generating the final h, the framework has already seen all the ground truth inspirations.
Table 7 shows the results. For each b, the top two h with the highest MS by GPT-4o are selected
for expert evaluation (by two chemistry PhD students). It indicates that LLMs are quite capable of
associating known knowledge into unknown knowledge that has a high probability to be valid (very
close to oh). In addition, providing a survey can assist the new knowledge-discovery process. We
discuss the agreement between GPT-4o-based evaluation and expert evaluation in § A.14.
5.3 HOW WELL CAN LLMS PERFORM R(h)?
Here, we investigate Q3: “whether LLMs can select high-quality h to rank them higher?”.
To investigate Q3, we run MOOSE-Chem with every b from the benchmark; |I| = 300, containing
all the ground truth i. Every h is given a rating r = R(h), and is ranked based on r. For every
generated h, we get the number of ground truth i it leveraged (#Matched i), and evaluate it with a
GPT-4o evaluated MS (here, an MS of -1 means this h has not used any ground truth i).
Table 8 shows the relation between #Matched i and the average ranking ratio (the lower, the better). It shows a clear trend that the more ground truth i is leveraged, the better the ranking an h can obtain. It indicates that an h ranked higher is more likely to have been matched with better i.
Table 9 shows the relation between the GPT-4o evaluated MS and the average ranking ratio. There
is a trend that the higher the MS, the better the average rank ratio (when MS ∈ [2,4]). However, the
disadvantage of those h without a positive MS is not very significant. It seems that LLMs have a certain ability to rank good h higher, but it is unclear how significant this ability is, partly because those h generated without ground truth i could also be of high quality.
6 EXPERIMENT AND ABLATION STUDY
We perform experiments in a setting similar to a copilot-in-the-wild setting. Only the background question (strict), background survey (strict), and a chemistry corpus with |I| = 300 are provided to the framework. Only the top 4% of I is selected and used to develop h. The evaluation metrics are Top MS and Average MS (the highest/average Matched Score of all generated h from one b), averaged across the benchmark. Experiments are conducted with GPT-4o (training data up to October 2023).
6.1 BASELINES
MOOSE is a hypothesis discovery framework for the general social science domain. It leverages
LLMs to retrieve inspirations and uses self-refine (Madaan et al., 2023) to improve the validness,
novelty, and clarity aspects. The difference is that (1) it does not adopt the mutation and recombina-
tion step to better associate background and inspiration; (2) it only retrieves one step of inspiration.
SciMON is a hypothesis discovery framework for the NLP and biochemistry domains. It relies on semantic and citation neighbors to retrieve information to assist the background. As a result, the retrieved information could be so closely related to the background that it might not be able to serve as an inspiration. To make the generated hypothesis more novel, it adopts self-refine to focus on improving
Method | Top MS | Average MS
SciMON (Wang et al., 2024b) | 2.549 | 2.281
MOOSE (Yang et al., 2024a) | 2.882 | 2.464
Qi et al. (2024) | 2.686 | 2.356
MOOSE-Chem | 4.020 | 2.564
w/o multi-step | 3.765 | 2.730
w/o multi-step & EU | 2.863 | 2.578
Table 10: Experiments and ablation study. The Matched Score (MS) is evaluated by GPT-4o (this
table), Claude-3.5-Sonnet (Table 13), and Gemini-1.5-Pro (Table 14).
Top MS (Expert) | 5 | 4 | 3 | 2 | 1 | 0 | Total
Count | 0 | 2 | 19 | 16 | 8 | 6 | 51
Table 11: MOOSE-Chem runs with |I|=300, mimicking the copilot setting. This table shows the
statistics of the top Matched Score across the benchmark. The evaluation is done by experts.
the novelty aspect of the generated hypothesis. Here, we implement SciMON with LLM-based
inspiration retrieval, the same as MOOSE-Chem. Table 3 shows that the recall rate of LLM-based
retrieval is 83.7%.
Qi et al. (2024) work on hypothesis discovery in the biomedical domain. Their method retrieves information pertinent to the keywords in the background to generate hypotheses. As a result, the retrieved information might compose a background survey, but not serve as inspiration. Self-refine is also adopted.
6.2 RESULTS
Table 10 shows the baseline results and the ablation study of MOOSE-Chem. It indicates that both mutation & recombination and the multi-step designs can significantly improve the best-performing h. Mutation & recombination leads to a drop of Average MS compared to the MOOSE baseline; we attribute this to the mutation step forcing LLMs to generate h different from the previous h mutations for the same b and i, which can produce many h that do not make much sense. The MS assigned to these mutated h is low, which lowers the Average MS.
To better understand the performance of MOOSE-Chem in this real copilot setting, for each b the top 4 generated h with the highest MS by GPT-4o are evaluated again by two experts in terms of MS. Table 11 shows the expert evaluation results. Here, the top MS is the highest MS for each b, out of the 4 expert-evaluated h for this b. Note that an MS of three is already very high: as illustrated in Table 6, it means the h generated by MOOSE-Chem (which has never seen the ground truth h) in the real copilot setting covers two main innovations of a chemistry hypothesis published in Nature, Science, or a venue of a similar level. Some case studies can be seen in § A.16.
7 CONCLUSION
We investigated this central question: “Can LLMs automatically discover novel and valid chem-
istry (including material science) research hypotheses (even those which deserve a publication in
Nature, Science, or a similar level) given only a chemistry research background (consisting of a
research question and/or a background survey), without limitation on the domain of the research
question?”. We proposed a fundamental assumption to break up this seemingly impossible-to-solve
central question into three smaller, more practical, and executable fundamental questions. Then, we
investigated LLM’s ability on each of them.
To this end, we constructed a benchmark consisting of chemistry and material science papers published and made public only in 2024. We also developed an LLM-based multi-agent framework consisting of three stages reflecting the three smaller fundamental questions. Experiments showed that the framework (run in a copilot-in-the-wild setting, with LLMs whose training data extends up to October 2023) can rediscover many hypotheses with very high similarity to the ground-truth ones, covering the main innovations.
ACKNOWLEDGMENTS
This work is supported by the Shanghai Municipal Science and Technology Major Project. This
work is supported by Shanghai Artificial Intelligence Laboratory. This research/project is supported
by the Ministry of Education, Singapore under its MOE Academic Research Fund Tier 2 (STEM
RIE2025 Award MOE-T2EP20123-0005).
We thank Mengsong Wu for his insightful discussions with us, and we thank Yuwei Wan for her
efforts to support this research.
REFERENCES
Jinheon Baek, Sujay Kumar Jauhar, Silviu Cucerzan, and Sung Ju Hwang. Researchagent: Iterative
research idea generation over scientific literature with large language models. arXiv preprint
arXiv:2404.07738, 2024.
Mathias Benedek, Tanja K¨onen, and Aljoscha C Neubauer. Associative abilities underlying creativ-
ity. Psychology of Aesthetics, Creativity, and the Arts, 6(3):273, 2012.
Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Han-
nah Rashkin, Doug Downey, Wen-tau Yih, and Yejin Choi. Abductive commonsense reasoning.
In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia,
April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=
Byg1v1HKDB.
Daniil A. Boiko, Robert MacKnight, Ben Kline, and Gabe Gomes. Autonomous chemical research
with large language models. Nat., 624(7992):570–578, 2023. doi: 10.1038/S41586-023-06792-0.
URL https://doi.org/10.1038/s41586-023-06792-0.
Faxiang Bu, Yuqi Deng, Jie Xu, Dali Yang, Yan Li, Wu Li, and Aiwen Lei. Electrocatalytic reductive
deuteration of arenes and heteroarenes. Nature, pp. 1–2, 2024.
Xinyun Chen, Renat Aksitov, Uri Alon, Jie Ren, Kefan Xiao, Pengcheng Yin, Sushant Prakash,
Charles Sutton, Xuezhi Wang, and Denny Zhou. Universal self-consistency for large language
model generation. CoRR, abs/2311.17311, 2023. doi: 10.48550/ARXIV.2311.17311. URL
https://doi.org/10.48550/arXiv.2311.17311.
Peter Clark, Oyvind Tafjord, and Kyle Richardson. Transformers as soft reasoners over language.
In Christian Bessiere (ed.), Proceedings of the Twenty-Ninth International Joint Conference on
Artificial Intelligence, IJCAI 2020, pp. 3882–3890. ijcai.org, 2020. doi: 10.24963/ijcai.2020/537.
URL https://doi.org/10.24963/ijcai.2020/537.
Mario Coccia. Why do nations produce science advances and new technology? Technology in
society, 59:101124, 2019.
Joseph F Hair, Arthur H Money, Philip Samouel, and Mike Page. Research methods for business.
Education+ Training, 49(4):336–337, 2007.
Arthur Koestler. The act of creation. London: Hutchinson, 1964.
Sandeep Kumar, Tirthankar Ghosal, Vinayak Goyal, and Asif Ekbal. Can large language models
unlock novel scientific research ideas? arXiv preprint arXiv:2409.06185, 2024.
Byung Cheol Lee and Jaeyeon Chung. An empirical investigation of the impact of chatgpt on
creativity. Nature Human Behaviour, pp. 1–9, 2024.
Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Na-
man Goyal, Heinrich K¨uttler, Mike Lewis, Wen-tau Yih, Tim Rockt¨aschel, Sebastian Riedel,
and Douwe Kiela. Retrieval-augmented generation for knowledge-intensive NLP tasks.
In
Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-
Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Con-
ference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12,
2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/
6b493230205f780e1bc26945df7481e5-Abstract.html.
Ruochen Li, Teerth Patel, Qingyun Wang, and Xinya Du. Mlr-copilot: Autonomous machine learn-
ing research based on large language models agents. arXiv preprint arXiv:2408.14033, 2024.
Chris Lu, Cong Lu, Robert Tjarko Lange, Jakob Foerster, Jeff Clune, and David Ha. The AI scientist:
Towards fully automated open-ended scientific discovery. CoRR, abs/2408.06292, 2024. doi: 10.
48550/ARXIV.2408.06292. URL https://doi.org/10.48550/arXiv.2408.06292.
Ziming Luo, Zonglin Yang, Zexin Xu, Wei Yang, and Xinya Du. LLM4SR: A survey on large
language models for scientific research. CoRR, abs/2501.04306, 2025. doi: 10.48550/ARXIV.
2501.04306. URL https://doi.org/10.48550/arXiv.2501.04306.
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark. Self-refine: Iterative refinement with self-feedback. In Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine (eds.), Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023, 2023. URL http://papers.nips.cc/paper_files/paper/2023/hash/91edff07232fb1b55a505a9e9f6c0ff3-Abstract-Conference.html.
Biqing Qi, Kaiyan Zhang, Haoxiang Li, Kai Tian, Sihang Zeng, Zhang-Ren Chen, and Bowen
Zhou. Large language models are zero shot hypothesis proposers. CoLM, abs/2311.05965, 2024.
doi: 10.48550/ARXIV.2311.05965. URL https://doi.org/10.48550/arXiv.2311.
05965.
Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Matej Balog,
M. Pawan Kumar, Emilien Dupont, Francisco J. R. Ruiz, Jordan S. Ellenberg, Pengming
Wang, Omar Fawzi, Pushmeet Kohli, and Alhussein Fawzi. Mathematical discoveries from
program search with large language models. Nat., 625(7995):468–475, 2024. doi: 10.1038/
S41586-023-06924-6. URL https://doi.org/10.1038/s41586-023-06924-6.
Kaito Shibahara, Yoshihito Kayaki, Kairi Yamashiro, Yuki Nagashima, Kohei Fujii, and Ken
Tanaka. Rh-catalysed enantioselective [2+ 2+ 1] cycloaddition reactions using three different
2π-components. Nature Synthesis, pp. 1–13, 2024.
Chenglei Si, Diyi Yang, and Tatsunori Hashimoto. Can llms generate novel research ideas? a large-
scale human study with 100+ nlp researchers. arXiv preprint arXiv:2409.04109, 2024.
Henry Sprueill, Carl Edwards, Mariefel V. Olarte, Udishnu Sanyal, Heng Ji, and Sutanay Choudhury.
Monte carlo thought search: Large language model querying for complex scientific reasoning in
catalyst design. In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Findings of the Association
for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pp. 8348–8365.
Association for Computational Linguistics, 2023. doi: 10.18653/V1/2023.FINDINGS-EMNLP.
560. URL https://doi.org/10.18653/v1/2023.findings-emnlp.560.
Henry W. Sprueill, Carl Edwards, Khushbu Agarwal, Mariefel V. Olarte, Udishnu Sanyal, Conrad
Johnston, Hongbin Liu, Heng Ji, and Sutanay Choudhury. CHEMREASONER: heuristic search
over a large language model’s knowledge space using quantum-chemical feedback. In Forty-first
International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024.
OpenReview.net, 2024. URL https://openreview.net/forum?id=3tJDnEszco.
Ryuhei Suzuki, Taiga Ando, Fritz Deufel, Kohsuke Ohmatsu, and Takashi Ooi. Photocatalytic car-
byne reactivity of phosphorus ylides for three-component formal cycloaddition reactions. Nature
Synthesis, pp. 1–7, 2024.
Don R Swanson. Undiscovered public knowledge. The Library Quarterly, 56(2):103–118, 1986.
Trieu H. Trinh, Yuhuai Wu, Quoc V. Le, He He, and Thang Luong. Solving olympiad geometry with-
out human demonstrations. Nat., 625(7995):476–482, 2024. doi: 10.1038/S41586-023-06747-5.
URL https://doi.org/10.1038/s41586-023-06747-5.
Vahe Tshitoyan, John Dagdelen, Leigh Weston, Alexander Dunn, Ziqin Rong, Olga Kononova,
Kristin A. Persson, Gerbrand Ceder, and Anubhav Jain. Unsupervised word embeddings capture
latent knowledge from materials science literature. Nat., 571(7763):95–98, 2019. doi: 10.1038/
S41586-019-1335-8. URL https://doi.org/10.1038/s41586-019-1335-8.
Jinpei Wang, Yuxin Song, Fanfei Yu, Yijun Zeng, Chenyang Wu, Xuezhi Qin, Liang Peng, Yitan Li,
Yongsen Zhou, Ran Tao, et al. Ultrastrong, flexible thermogalvanic armor with a carnot-relative
efficiency over 8%. Nature Communications, 15(1):6704, 2024a.
Qingyun Wang, Doug Downey, Heng Ji, and Tom Hope. Scimon: Scientific inspiration machines optimized for novelty. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2024, Bangkok, Thailand, August 11-16, 2024, pp. 279–299. Association for Computational Linguistics, 2024b. doi: 10.18653/V1/2024.ACL-LONG.18. URL https://doi.org/10.18653/v1/2024.acl-long.18.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. URL https://openreview.net/forum?id=1PL1NIMMrw.
Tong Xie, Yuwei Wan, Haoran Wang, Ina Østrøm, Shaozhou Wang, Mingrui He, Rong Deng,
Xinyuan Wu, Clara Grazian, Chunyu Kit, and Bram Hoex. Opinion mining by convolutional
neural networks for maximizing discoverability of nanomaterials. J. Chem. Inf. Model., 64(7):
2746–2759, 2024. doi: 10.1021/ACS.JCIM.3C00746. URL https://doi.org/10.1021/
acs.jcim.3c00746.
Zonglin Yang, Xinya Du, Alexander M. Rush, and Claire Cardie. Improving event duration prediction via time-aware pre-training. In Trevor Cohn, Yulan He, and Yang Liu (eds.), Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, pp. 3370–3378. Association for Computational Linguistics, 2020. doi: 10.18653/v1/2020.findings-emnlp.302. URL https://doi.org/10.18653/v1/2020.findings-emnlp.302.
Zonglin Yang, Xinya Du, Erik Cambria, and Claire Cardie. End-to-end case-based reasoning for
commonsense knowledge base completion. In Proceedings of the 17th Conference of the Euro-
pean Chapter of the Association for Computational Linguistics, pp. 3509–3522, Dubrovnik, Croa-
tia, May 2023a. Association for Computational Linguistics. URL https://aclanthology.
org/2023.eacl-main.255.
Zonglin Yang, Xinya Du, Rui Mao, Jinjie Ni, and Erik Cambria. Logical reasoning over natural lan-
guage as knowledge representation: A survey. In 1st Workshop on Natural Language Reasoning
and Structured Explanations (@ACL 2023), 2023b.
Zonglin Yang, Li Dong, Xinya Du, Hao Cheng, Erik Cambria, Xiaodong Liu, Jianfeng Gao,
and Furu Wei. Language models as inductive reasoners.
In Yvette Graham and Matthew
Purver (eds.), Proceedings of the 18th Conference of the European Chapter of the Associa-
tion for Computational Linguistics, EACL 2024 - Volume 1: Long Papers, St. Julian’s, Malta,
March 17-22, 2024, pp. 209–225. Association for Computational Linguistics, 2024a. URL
https://aclanthology.org/2024.eacl-long.13.
Zonglin Yang, Xinya Du, Junxian Li, Jie Zheng, Soujanya Poria, and Erik Cambria. Large language
models for automated open-domain scientific hypotheses discovery. In Lun-Wei Ku, Andre Mar-
tins, and Vivek Srikumar (eds.), Findings of the Association for Computational Linguistics, ACL
2024, Bangkok, Thailand and virtual meeting, August 11-16, 2024, pp. 13545–13565. Associa-
tion for Computational Linguistics, 2024b. doi: 10.18653/V1/2024.FINDINGS-ACL.804. URL
https://doi.org/10.18653/v1/2024.findings-acl.804.
Ruiqi Zhong, Peter Zhang, Steve Li, Jinwoo Ahn, Dan Klein, and Jacob Steinhardt. Goal
driven discovery of distributional differences via language descriptions.
In Alice Oh, Tris-
tan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine (eds.), Ad-
vances in Neural Information Processing Systems 36: Annual Conference on Neural Infor-
mation Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16,
2023, 2023. URL http://papers.nips.cc/paper_files/paper/2023/hash/
7e810b2c75d69be186cadd2fe3febeab-Abstract-Conference.html.
A APPENDIX
A.1 FULL PROOF / DERIVATION FOR THE FUNDAMENTAL ASSUMPTION
We propose an assumption that a majority of chemistry hypotheses can originate from a research
background and several inspirations. This assumption is supported not only by many chemistry
researchers with whom we have had extensive discussions, but also by the cognitive science finding that
“creative ideas often result from the cohesive association of two (or more) seemingly unrelated
pieces of knowledge” (Koestler, 1964; Benedek et al., 2012; Lee & Chung, 2024). We design our
method based on this fundamental assumption.
This assumption is reminiscent of Swanson Linking (Swanson, 1986) in the domain of literature-
based discovery (LBD), also known as the “ABC model”, where two concepts A and C are hypothesized
to be linked if they both co-occur with some intermediate concept B in papers. Our assumption
differs in that: (1) for a chemistry hypothesis published in a good venue, usually more than one
inspiration is needed; (2) the background and the inspirations are not necessarily linked by a path of
intermediate papers; (3) our assumption applies to a majority of existing published chemistry hypotheses,
while LBD has been considered to focus only on a very specific, narrow type of hypothesis (Wang
et al., 2024b). This might indicate that a similar proportion of future chemistry hypotheses can also
result from linkages of existing literature.
Denoting background knowledge as b, inspiration knowledge as i, and hypothesis as h, we translate
this assumption as:
h = f(b, i_1, \ldots, i_k)   (4)
Here, k ∈ Z represents the number of inspirations needed for a particular h. Typically in chemistry,
k ∈ [1, 3].
In other words, given the existing knowledge in the background, a majority of chemistry research is
about searching for knowledge that was previously not known to be related to the background but that
can in fact assist it, and then associating the background knowledge with the searched knowledge in a
reasonable way to compose a hypothesis. For example, the proposal of backpropagation can be seen
as a hypothesis. In this case, the background knowledge is multi-layer logistic regression, and the
searched knowledge is the chain rule in calculus.
Here, we call the searched knowledge an “inspiration”. It is vital that the inspiration is not already
known to be related to the background, or at least has not been used to associate with the
background in a known way; otherwise the hypothesis would not be novel.
Our goal is to transform the seemingly impossible-to-solve P(h|b) into an equivalent form in which
each step is practical and executable. Denote the full (chemistry) literature as I, such that P(I) = 1.
A straightforward way of decomposing P(h|b) is by the chain rule based on Equation 4:
P(h \mid b) = P(h, i_1, \ldots, i_k \mid b)   (5)

= \begin{cases}
    \dfrac{P(h, b, i_1)}{P(b, i_1)} \cdot \dfrac{P(b, i_1) \cdot P(I)}{P(b) \cdot P(I)} & \text{if } k = 1 \\
    \dfrac{P(h, b, i_1, \ldots, i_k)}{P(b, i_1, \ldots, i_k)} \cdot \dfrac{P(b, i_1, \ldots, i_k) \cdot P(I)}{P(b, i_1, \ldots, i_{k-1}) \cdot P(I)} \cdot \ldots \cdot \dfrac{P(b, i_1) \cdot P(I)}{P(b) \cdot P(I)} & \text{if } k > 1
  \end{cases}   (6)

= \begin{cases}
    P(h \mid b, i_1) \cdot P(i_1 \mid b, I) & \text{if } k = 1 \\
    P(h \mid b, i_1, \ldots, i_k) \cdot \prod_{j=2}^{k} P(i_j \mid b, i_1, \ldots, i_{j-1}, I) \cdot P(i_1 \mid b, I) & \text{if } k > 1
  \end{cases}   (7)
(7)
Here, I is the full inspiration space to search for every single i (we use the existing chemistry
literature, containing up to 3000 papers, as I). The order of the i_j is exchangeable.
Equation 7 describes the process of P(h|b) from a knowledge-searching perspective. However,
P(h|b, i_1, \ldots, i_k) and P(i_j|b, i_1, \ldots, i_{j-1}, I) might not be practical enough, and do not
precisely reflect how chemistry researchers find a new i. One of the main reasons is that researchers
tend to think one small step at a time; it would be very challenging to take a big step
without breaking it into several small steps.
To mimic how chemistry researchers conduct research and make it more practicable, we break
P (h|b, i1, . . . , ik) into a series of recursive smaller steps as
P(h_k \mid b, i_1, \ldots, i_k) \approx P(h_k \mid b, f(b, i_1, \ldots, i_{k-1}), i_k) \quad \text{if } k > 1   (8)
                                = P(h_k \mid b, h_{k-1}, i_k) \quad \text{if } k > 1   (9)

Similarly, we can break P(i_{j+1} \mid b, i_1, \ldots, i_j, I) as

P(i_{k+1} \mid b, i_1, \ldots, i_k, I) \approx P(i_{k+1} \mid b, f(b, i_1, \ldots, i_k), I) \quad \text{if } k > 1   (10)
                                       = P(i_{k+1} \mid b, h_k, I) \quad \text{if } k > 1   (11)
As a result, to achieve the final h_k, we need to obtain h_1, \ldots, h_{k-1} first (if k > 1). In addition, seeing
h as a “state” and i as an “action”, obtaining h and i through P(h_k|b, h_{k-1}, i_k) and P(i_{k+1}|b, h_k, I)
respectively indicates a Markov property: (1) a new h only depends on b, its previous h, and the
current i; and (2) an i only depends on b, I, and the previous h.
Therefore, if k > 1,
P(h \mid b) = P(i_1, \ldots, i_k, h_1, \ldots, h_k \mid b)   (12)
= P(i_1, h_1 \mid b) \cdot P(i_2, h_2 \mid b, i_1, h_1) \cdot \ldots \cdot P(i_k, h_k \mid b, i_1, \ldots, i_{k-1}, h_1, \ldots, h_{k-1})   (13)
\approx P(i_1, h_1 \mid b) \cdot P(i_2, h_2 \mid b, h_1) \cdot \ldots \cdot P(i_k, h_k \mid b, h_{k-1})   (14)
= \dfrac{P(b, i_1, I)}{P(b, I)} \cdot \dfrac{P(b, i_1, h_1)}{P(b, i_1)} \cdot \ldots \cdot \dfrac{P(b, i_k, h_{k-1}, I)}{P(b, h_{k-1}, I)} \cdot \dfrac{P(b, i_k, h_{k-1}, h_k)}{P(b, i_k, h_{k-1})}   (15)
= P(i_1 \mid b, I) \cdot P(h_1 \mid b, i_1) \cdot \prod_{j=1}^{k-1} P(i_{j+1} \mid b, h_j, I) \cdot P(h_{j+1} \mid b, i_{j+1}, h_j)   (16)
= \prod_{j=1}^{k} P(i_j \mid b, h_{j-1}, I) \cdot P(h_j \mid b, i_j, h_{j-1}), \quad \text{where } h_0 = \emptyset   (17)
Although the derivation assumes k > 1, Derivation 17 also covers the k = 1 case in Equation 7.
Therefore, in sum, we break the seemingly impossible question P(h|b) into many practical and
executable smaller questions:
P(h \mid b) \approx \prod_{j=1}^{k} P(i_j \mid b, h_{j-1}, I) \cdot P(h_j \mid b, i_j, h_{j-1}), \quad \text{where } h_0 = \emptyset \text{ and } k \geq 1   (18)
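To make the decomposition concrete, below is a minimal sketch of the retrieve-then-compose loop implied by Equation 18. It is not the released MOOSE-Chem implementation: the helper functions retrieve_inspiration and compose_hypothesis are hypothetical placeholders standing in for the prompted-LLM steps P(i_j | b, h_{j-1}, I) and P(h_j | b, i_j, h_{j-1}).

```python
# Minimal sketch of the k-step decomposition in Equation 18 (illustrative, not the
# authors' code). Each iteration retrieves one inspiration conditioned on the background
# and the previous intermediate hypothesis, then composes a refined hypothesis.

from typing import List, Optional

def retrieve_inspiration(background: str, prev_hypothesis: Optional[str],
                         corpus: List[str]) -> str:
    """Placeholder for P(i_j | b, h_{j-1}, I): in practice, a prompted LLM screens the corpus."""
    for paper in corpus:
        if paper not in background and (prev_hypothesis is None or paper not in prev_hypothesis):
            return paper
    return corpus[0]

def compose_hypothesis(background: str, inspiration: str,
                       prev_hypothesis: Optional[str]) -> str:
    """Placeholder for P(h_j | b, i_j, h_{j-1}): in practice, a prompted LLM associates them."""
    base = prev_hypothesis if prev_hypothesis is not None else background
    return f"{base} + {inspiration}"

def generate_hypothesis(background: str, corpus: List[str], k: int = 3) -> str:
    """Run the k small steps of Equation 18, with h_0 treated as the empty hypothesis."""
    assert k >= 1
    h: Optional[str] = None
    for _ in range(k):
        i_j = retrieve_inspiration(background, h, corpus)
        h = compose_hypothesis(background, i_j, h)
    return h

if __name__ == "__main__":
    literature = ["chain rule in calculus", "ion-exchange membranes", "Hofmeister series"]
    print(generate_hypothesis("multi-layer logistic regression", literature, k=2))
```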
Figure 2: Overview of the input and output of the MOOSE-Chem framework.
A.2 MOOSE-CHEM OVERVIEW I/O FIGURE
Figure 2 shows the input and output overview of the MOOSE-Chem framework.
A.3 THE FULL INSTRUCTION FOR BENCHMARK CHECKING
Please help us check again before finalizing the decomposition of each paper in the benchmark:
1. Whether the background question is correct.
2. The background survey should not contain any information/method from the inspiration or hypothesis (unless this information/method has been used for this particular background question before). It is encouraged to include the existing method most similar to the proposed method. For example, if the proposal is to change BaCl2 to BaSO4, it is encouraged to include BaCl2 in the survey, but SO4 must not be included (since SO4 belongs to the inspiration).
3. The background question cannot contain any information from the inspiration or hypothesis either: it should be a somewhat general question, rather than a specific question asking how the inspiration can be leveraged to help with the question. It also should not be so general that we cannot tell which specific research domain it works on.
4. Whether the identified inspirations are really the main inspirations for this paper, and whether we need more main inspiration(s).
5. Whether the main hypothesis is correct and covers the main key points.
6. Whether the background survey + background question + identified inspirations can logically lead to the hypothesis (if not, we might need to identify more inspirations).
Thank you for the efforts! Your contribution is indispensable for the success of this research. Please
let me know if you have any questions.
A.4 PROMPT TO OBTAIN R(h)
You are known as a diligent and harsh reviewer in Chemistry and Material Science who will spend
much time finding flaws when reviewing and therefore usually gives relatively much lower scores
than other reviewers. But when you meet a hypothesis you truly appreciate, you don’t mind
giving it good scores. Given a not-yet-peer-reviewed research hypothesis in the Chemistry or Material
Science domain, try to evaluate the research hypothesis from four research aspects and give scores
according to the evaluation guidelines provided below. All four aspects should be evaluated on a 5-point
scale.
Aspect 1: Validness.
5 points: The hypothesis is a logical next step from current research, strongly supported by theory,
perhaps with some indirect experimental evidence or highly predictive computational results.
The experimental verification seems straightforward with a high probability of confirming the
hypothesis; 4 points: Here, the hypothesis is well-rooted in existing theory with some preliminary
data or computational models supporting it. It extends known science into new but logically
consistent areas, where experiments are feasible with current technology, and there’s a reasonable
expectation of positive results; 3 points: This hypothesis is within the realm of theoretical possibility
but stretches the boundaries of what’s known. It might combine existing knowledge in very novel
ways or predict outcomes for which there’s no direct evidence yet. There’s a conceptual framework
for testing, but success is uncertain; 2 points: While the hypothesis might be grounded in some
theoretical aspects, it significantly deviates from current understanding or requires conditions or
materials that are currently impossible or highly improbable to achieve or synthesize; 1 point: The
hypothesis proposes concepts or outcomes that are not only unsupported by current theory but also
contradict well-established principles or data. There’s no clear path to experimental testing due to
fundamental theoretical or practical barriers.
Aspect 2: Novelty.
5 points: This level of novelty could fundamentally alter our understanding of chemistry or create
entirely new fields.
It often involves predictions or discoveries that, if proven, would require a
significant overhaul of existing chemical theories; 4 points: The hypothesis significantly departs
from established norms, potentially redefining how certain chemical phenomena are understood
or applied. It might involve entirely new materials or theoretical frameworks; 3 points: This level
involves a hypothesis that could potentially lead to new insights or applications. It might challenge
minor aspects of current theories or introduce new methodologies or materials; 2 points: The
hypothesis introduces a new angle or method within an established framework. It might involve
known compounds or reactions but in contexts or combinations not previously explored; 1 point:
The hypothesis involves minor tweaks or applications of well-known principles or techniques. It
might slightly extend existing knowledge but doesn’t introduce fundamentally new concepts.
Aspect 3: Significance.
5 points: This hypothesis could fundamentally change one or more branches of chemistry.
It
might introduce entirely new principles, theories, or methodologies that redefine the boundaries
of chemical science; 4 points: This hypothesis challenges current understanding or introduces
a concept that could lead to substantial changes in how a particular area of chemistry is viewed
or applied.
It might lead to new technologies or significant theoretical advancements; 3 points:
this hypothesis proposes something new or an innovative approach that could lead to noticeable
advancements in a specific area of chemistry. It might open new avenues for research or application
but doesn’t revolutionize the field; 2 points: This hypothesis might offer a small variation or
incremental improvement on existing knowledge. It could potentially refine a known concept but
doesn’t significantly alter the field; 1 point: The hypothesis addresses a very narrow or already
well-established aspect of chemistry. It might confirm what is already known without adding much
new insight.
Aspect 4: Potential.
5 points: The hypothesis, while potentially intriguing now, holds the promise of being revolutionary
with the addition of a key methodological component. This could introduce entirely new concepts
or fields, fundamentally changing our understanding or capabilities in chemistry; 4 points: The
hypothesis, though promising, could be transformative with the right methodological enhancement.
This enhancement might lead to groundbreaking discoveries or applications, significantly advancing
the field; 3 points: The hypothesis, while interesting in its current form, could be significantly
elevated with the right methodological addition. This might lead to new insights or applications
that go beyond the initial scope; 2 points: The hypothesis currently offers some value but has the
potential for more substantial contributions if enhanced with a new methodological approach. This
could lead to incremental advancements in understanding or application; 1 point: The hypothesis,
as it stands, might be straightforward or well-trodden. Even with methodological enhancements,
it’s unlikely to significantly expand current knowledge or applications beyond minor improvements.
The hypothesis is:
Please give a response to the initial question on scoring the hypothesis from four aspects. Remember
that you are a diligent and harsh reviewer.
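For illustration, the following is a minimal sketch of how the reviewer prompt above could be turned into a numeric reward R(h). The call_llm stub, the regular-expression parsing, and the equal weighting of the four aspects are assumptions made for the sketch, not the paper's exact implementation.

```python
# Minimal sketch (illustrative): query an LLM with the harsh-reviewer prompt and parse
# the four aspect scores into a single reward R(h). `call_llm` is a hypothetical stand-in
# for a real API call (e.g., to GPT-4o).

import re
from typing import Dict

ASPECTS = ["Validness", "Novelty", "Significance", "Potential"]

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real chat-completion request in practice."""
    return "Validness: 3 points. Novelty: 4 points. Significance: 3 points. Potential: 4 points."

def score_hypothesis(reviewer_prompt: str, hypothesis: str) -> Dict[str, float]:
    response = call_llm(f"{reviewer_prompt}\n\nThe hypothesis is: {hypothesis}")
    scores: Dict[str, float] = {}
    for aspect in ASPECTS:
        match = re.search(rf"{aspect}\s*[:=]\s*([1-5])", response)
        scores[aspect] = float(match.group(1)) if match else float("nan")
    # Assumed aggregation: a simple average of the four aspects.
    scores["R(h)"] = sum(scores[a] for a in ASPECTS) / len(ASPECTS)
    return scores

if __name__ == "__main__":
    print(score_hypothesis("You are known as a diligent and harsh reviewer ...",
                           "A nitrogen-doped Ru electrode catalyzes reductive deuteration in D2O."))
```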
                                   5     4     3     2     1     0   Total
w/ background survey
Average MS (GPT-4o)                2     9    18    17     5     0    51
Average MS (Claude-3.5-Sonnet)     4    19    15    10     3     0    51
Average MS (Gemini-1.5-Pro)        2    13    17     8    11     0    51
Top MS (GPT-4o)                   28     1    19     3     0     0    51
Top MS (Claude-3.5-Sonnet)        33     7    10     1     0     0    51
Top MS (Gemini-1.5-Pro)           20    18     0    12     1     0    51
Top MS (Experts)                   9    12    22     6     2     0    51
w/o background survey
Average MS (GPT-4o)                1     7    17    19     7     0    51
Average MS (Claude-3.5-Sonnet)     7    24    18     2     0     0    51
Average MS (Gemini-1.5-Pro)        4     9    14    15     5     4    51
Top MS (GPT-4o)                   25     2    19     5     0     0    51
Top MS (Claude-3.5-Sonnet)        31    19     1     0     0     0    51
Top MS (Gemini-1.5-Pro)           19    19     1    11     0     1    51

Table 12: Main table for Q2. Average/Top MS means the average/highest Matched Score of all generated h from one b. The numbers represent the statistics (counts per MS value) of Average/Top MS over the benchmark.
Method                          Top MS   Average MS
SciMON (Wang et al., 2024b)      3.824      3.529
MOOSE (Yang et al., 2024a)       3.902      3.559
Qi et al. (2024)                 3.431      3.092
MOOSE-Chem                       4.471      3.697
  w/o multi-step                 4.216      3.592
  w/o multi-step & EU            3.941      3.614

Table 13: Experiments and ablation study. The Matched Score is evaluated by Claude-3.5-Sonnet.
A.5 AUTOMATIC EVALUATION BY CLAUDE AND GEMINI
To investigate whether the results and corresponding conclusions in the main text are caused
by the usage of GPT-4o for automatic evaluation, here we use Claude-3.5-Sonnet and
Gemini-1.5-Pro to evaluate all of the results that have been evaluated by GPT-4o.
Table 12 covers the contents in Table 7, but with more results on using Claude-3.5-Sonnet
and Gemini-1.5-Pro for automatic evaluation. When using different LLMs for automatic eval-
uation, the instruction is the same (it can be found in § A.12). The robust results indicate again that
LLMs are quite capable of associating known knowledge into unknown knowledge that has a high
probability of being valid (i.e., very close to the ground truth h).
Table 13 and Table 14 evaluate the same hypotheses as Table 10, but use Claude-3.5-Sonnet and
Gemini-1.5-Pro for automatic evaluation, respectively (instead of GPT-4o). The results indicate the
robustness of MOOSE-Chem and its components.
Method                          Top MS   Average MS
SciMON (Wang et al., 2024b)      2.980      2.618
MOOSE (Yang et al., 2024a)       3.039      2.690
Qi et al. (2024)                 2.216      1.846
MOOSE-Chem                       3.686      2.443
  w/o multi-step                 3.588      2.529
  w/o multi-step & EU            2.902      2.631

Table 14: Experiments and ablation study. The Matched Score is evaluated by Gemini-1.5-Pro.
MS threshold                      5     4
only non-EU branch               16    19
only EU branches                 46    54
only EU-recombination branch     20    24

Table 15: Number of hypotheses receiving a high Matched Score (MS) from only the non-EU branch,
only the EU branches, and only the EU-recombination branch. Only the hypotheses with an MS that is
higher than the MS threshold are counted.
                              5     4     3     2     1     0
w/ significance feedback
Average MS                    4    19    15    10     3     0
Top MS                       33     7    10     1     0     0
w/o significance feedback
Average MS                    8    28    11     3     1     0
Top MS                       34    13     4     0     0     0

Table 16: Effect of significance feedback (evaluated by Claude-3.5-Sonnet).
A.6 MORE ANALYSIS ON EU
Table 15 shows the number of hypotheses receiving a high Matched Score from only the non-EU branch,
only the EU branches, and only the EU-recombination branch. Here, the non-EU branch can be seen as
the hypotheses obtained directly without mutations. The hypotheses are from the same experiment
as in Table 10.
The results indicate that about one-third of high-quality hypotheses can be obtained directly without
mutations. In addition, the recombination branch contains more high-quality hypotheses than the
non-EU branch.
A.7 EFFECT OF SIGNIFICANCE FEEDBACK
Table 16 presents an ablation study on the significance feedback. The results with significance
feedback are from Table 12.
The results indicate that not using significance feedback can even lead to better performance in
terms of the Matched Score metric. We attribute this phenomenon to the LLM's creativity:
when asked to generate significant hypotheses, LLMs tend to deviate more from the existing
information in pursuit of greater significance, resulting in a lower Matched Score. However, we should
note that the Matched Score only measures the degree of matching with one given ground truth hypothesis,
and it is possible that the more deviated hypothesis is in fact more significant.
A.8 RANKING OF GROUND TRUTH HYPOTHESES
Intuitively, if we rank the original hypothesis together with the generated hypotheses, the original
hypothesis should be ranked at the top most of the time. But is it?
Table 17 shows the result, where we assign each ground truth hypothesis a reward value
R(h) (in terms of validness, novelty, significance, and potential) and calculate its average rank
ratio with respect to the framework-generated hypotheses.
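As a concrete reading of this metric, a minimal sketch of the average rank ratio computation is given below; the exact scoring and tie-breaking rules are not specified in the text, so the details here are illustrative assumptions.

```python
# Minimal sketch: rank the ground truth hypothesis among the generated hypotheses by
# reward value, take its rank position divided by the list length (lower is better),
# and average the ratio over the benchmark. Tie handling is an assumption.

from typing import List

def rank_ratio(gt_reward: float, generated_rewards: List[float]) -> float:
    all_rewards = generated_rewards + [gt_reward]
    rank = 1 + sum(r > gt_reward for r in all_rewards)  # rank 1 = highest reward
    return rank / len(all_rewards)

def average_rank_ratio(gt_rewards: List[float],
                       generated_rewards_per_item: List[List[float]]) -> float:
    ratios = [rank_ratio(g, cands)
              for g, cands in zip(gt_rewards, generated_rewards_per_item)]
    return sum(ratios) / len(ratios)

if __name__ == "__main__":
    # Two benchmark items with hypothetical R(h) values.
    print(average_rank_ratio([3.5, 4.0], [[4.2, 3.1, 3.8], [4.5, 4.4, 3.9]]))
```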
Surprisingly, the ground truth hypotheses are not ranked at the top. There are four possible reasons:
1. LLMs do poorly at ranking hypotheses;
2. The generated hypotheses tend to describe their own novelty and significance (although they are prompted not to), which might influence the judgment;
3. The generated hypotheses may surpass the original in quality;
4. The generated hypotheses may sometimes contain more details than the ground truth one (due to the iterative use of clarity feedback and refinement).

                       Overall   Validness   Novelty   Significance   Potential
Average Rank Ratio       0.65       0.75       0.76        0.73          0.70

Table 17: Average rank ratio (↓) of the ground truth hypotheses (mixed with generated hypotheses).

Strict Background   Background Survey   Hit Ratio (top 20%)   Hit Ratio (top 4%)   Hit Ratio (top 0.8%)
       ✓                   ✓                  96.7%                 83.7%                60.8%
       ✓                   ✗                  95.1%                 77.8%                54.2%
       ✗                   ✓                  96.7%                 80.1%                57.8%

Table 18: Ablation table on background options for Q1. The corpus size is 300. For each screen
window of 15 papers, 3 papers are selected.
A.9 INFLUENCE OF RESEARCH BACKGROUND OPTIONS ON INSPIRATION RETRIEVAL
Table 18 shows the ablation study on whether to use a strict background (discussed in § 3) and whether
to use a background survey. It indicates that a survey can largely help the inspiration retrieval process.
Surprisingly, without a strict background, the Hit Ratio goes down a bit. We attribute this to the fact that
mentioning information related to the inspiration discourages retrieving that inspiration: in
the prompt, we ask LLMs to search for inspirations, and the demonstration example indicates that
inspirations should not be too similar to the background (so that they bring in additional information).
A.10 DISCUSSION ON HALLUCINATION AND SCIENTIFIC DISCOVERY
In contrast to the traditional view that hallucination is purely a bad thing, LLMs' scientific
discovery ability in fact relies on their ability to hallucinate in order to find novel hypotheses: a novel
hypothesis has, by definition, never been observed before, so every novel hypothesis a model produces
belongs to the class of hallucination.
In essence, the research development of LLMs for automated scientific hypothesis discovery is about
learning how to better leverage LLMs to hallucinate unseen hypotheses that have a higher probability
of being valid.
A.11 OTHER RELATED WORKS
A.11.1 REASONING
Scientific discovery is highly related to reasoning, since it requires a set of very complex reasoning
processes to arrive at new discoveries.
Inductive reasoning (Yang et al., 2024a) is the most relevant reasoning type. It is about finding
rules or hypotheses from observations; scientific discovery is naturally an ultimate goal of inductive
reasoning.
Inductive reasoning is a sub-type of logical reasoning. The other two sub-types are deductive
reasoning (Clark et al., 2020) and abductive reasoning (Bhagavatula et al., 2020). Yang
et al. (2023b) discuss their definitions and differences in detail.
Another relevant reasoning type is commonsense reasoning (Yang et al., 2020; 2023a). Scientific
discovery can be seen as an opposite task: it is to reason far outside of commonsense, even to
discover unknown knowledge.
A.11.2 RETRIEVAL
The retrieval of inspirations is a retrieval task, and RAG (Lewis et al., 2020) also works on retrieval.
The main difference is that current RAG methods would most likely retrieve the information that
is semantically most similar to the input (the research background), whereas here our goal
is to retrieve information that was not previously known to be related to the input, but
in fact is. We assume that LLMs might have the ability to do this.
A.11.3 SELF CONSISTENCY
Self-consistency (Wang et al., 2023; Chen et al., 2023) might look similar to the “evolutionary
unit” (EU), as both expand into several branches and finally collect these branches into one.
A key difference is that EU explores more diverse options in order to choose the optimal one, while
self-consistency looks for a consistent vote among options.
A.12 PROMPT TO GPT-4O FOR MATCHED SCORE
You are helping to evaluate the quality of a proposed research hypothesis in Chemistry by a phd
student. The ground truth hypothesis will also be provided to compare. Here, we mainly focus on
whether the proposed hypothesis has covered the key points in terms of the methodology in the
ground truth hypothesis. You will also be given a summary of the key points in the methodology
of the ground truth hypothesis for reference. Please note that for the proposed hypothesis to cover
one key point, it is not necessary to explicitly mention the name of the key point, but might also can
integrate the key point implicitly in the proposed method. The evaluation criterion is called ’Matched
Score’, which is on a 6-point Likert scale (from 5 to 0). In particular, 5 points mean that the proposed
hypothesis (1) covers all the key points and leverage them similarly as in the methodology of the
ground truth hypothesis, and (2) does not contain any extra key point that has apparent flaws; 4
points mean that the proposed hypothesis (1) covers all the key points (or at least three key points)
and leverage them similarly as in the methodology of the ground truth hypothesis, (2) but also
with extra key points that have apparent flaws; 3 points mean that the proposed hypothesis (1)
covers at least two key points and leverage them similarly as in the methodology of the ground
truth hypothesis, (2) but does not cover all key points in the ground truth hypothesis, (3) might or
might not contain extra key points; 2 points mean that the proposed hypothesis (1) covers at least
one key point in the methodology of the ground truth hypothesis, and leverage it similarly as in the
methodology of ground truth hypothesis, (2) but does not cover all key points in the ground truth
hypothesis, and (3) might or might not contain extra key points; 1 point means that the proposed
hypothesis (1) covers at least one key point in the methodology of the ground truth hypothesis, (2)
but is used differently as in the methodology of ground truth hypothesis, and (3) might or might
not contain extra key points; 0 point means that the proposed hypothesis does not cover any key
point in the methodology of the ground truth hypothesis at all. Please note that the total number of
key points in the ground truth hypothesis might be less than three, so that multiple scores can apply.
E.g., if there is only one key point in the ground truth hypothesis and the proposed hypothesis covers
it, it is possible to give 2 points, 4 points, or 5 points; in this case, we should choose between 4 points
and 5 points, depending on the existence and quality of extra key points. ’Leveraging a key point
similarly as in the methodology of the ground truth hypothesis’
means that in the proposed hypothesis, the same (or very related) concept (key point) is used in
a similar way and with a similar goal compared to the ground truth hypothesis (the proposed
hypothesis does not necessarily need to be exactly the same as the ground truth hypothesis to be
classified as ’similar’). When judging whether an extra key point has apparent flaws, you should use
your own knowledge to judge, rather than relying on the number of extra key points.
Please evaluate the proposed hypothesis based on the ground truth hypothesis.
The proposed hypothesis is:
The ground truth hypothesis is:
The key points in the ground truth hypothesis are:
Please evaluate the proposed hypothesis based on the ground truth hypothesis, and give a score.
A.13 GENERATED HYPOTHESES WITH LOW MATCHED SCORE ARE NOT NECESSARILY
BAD
MS only measures the similarity between the generated h and the ground truth h. Receiving an MS
of 0 or 1 does not mean the generated h is bad. Only real lab experiments can verify each h.
#Comparison Pairs   Hard Consistency Score   Soft Consistency Score
       392                  0.345                    0.542

Table 19: Consistency score between expert evaluation and GPT-4o evaluation.

#Comparison Pairs   Hard Consistency Score   Soft Consistency Score
        48                  0.438                    0.854

Table 20: Consistency score between experts in expert evaluation.
A.14 EVALUATION AGREEMENT BETWEEN EXPERT EVALUATION AND GPT-4O
EVALUATION
Table 19 shows the agreement between expert evaluation and automatic evaluation (by GPT-4o) on
MS. Hard consistency is assigned 1 only if the two scores are exactly the same, and 0 otherwise.
Soft consistency is assigned 1 only if the absolute difference between the two scores is less
than 2, and 0 otherwise.
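A minimal sketch of these two agreement metrics, assuming the expert and GPT-4o Matched Scores are available as parallel lists of integers:

```python
# Minimal sketch of the hard/soft consistency scores defined above.

from typing import List, Tuple

def consistency_scores(expert: List[int], model: List[int]) -> Tuple[float, float]:
    assert len(expert) == len(model) and len(expert) > 0
    hard = sum(e == m for e, m in zip(expert, model)) / len(expert)          # exact match
    soft = sum(abs(e - m) < 2 for e, m in zip(expert, model)) / len(expert)  # within 1 point
    return hard, soft

if __name__ == "__main__":
    hard, soft = consistency_scores([3, 4, 1, 5], [3, 2, 2, 4])
    print(f"hard={hard:.3f}, soft={soft:.3f}")
```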
The results show a medium to high consistency between expert evaluation and automatic evaluation.
The main reason for the remaining gap is that, in practice, the automatic evaluation is usually 1 to 2
points higher than the expert evaluation, since GPT-4o can usually find a way to explain how the
generated hypothesis is related to the ground truth hypothesis in terms of the main innovations. While
this explanation is usually not wrong, the experts find that, compared with the MS given by GPT-4o,
the generated hypotheses might not be clear enough to deliver the new innovation to scientists when
used as copilot advice.
A.15 AGREEMENT BETWEEN EXPERT EVALUATION
Table 20 shows the consistency score between experts. It is computed by having a third chemistry
expert check randomly sampled expert evaluations from the previous two chemistry experts. Hard
consistency is assigned 1 only if the two scores are exactly the same, and 0 otherwise. Soft
consistency is assigned 1 only if the absolute difference between the two scores is less than 2, and 0
otherwise.
The results show a high consistency between experts, indicating a high reliability of reference-based
expert evaluation.
A.16 CASE STUDIES
A.16.1 CASE 1 (MS BY EXPERT: 3)
Generated h: A pioneering integrated electrocatalytic system leveraging ruthenium nanoparticles
embedded in nitrogen-doped graphene, combined with a dual palladium-coated ion-exchange mem-
brane reactor, will catalyze efficient, scalable, and site-selective reductive deuteration of aromatic
hydrocarbons and heteroarenes. Utilizing deuterium sources from both D2 gas and D2O, this sys-
tem will optimize parameters through real-time machine learning-driven dynamic adjustments. Spe-
cific configurations include ruthenium nanoparticle sizes (2-4 nm), nitrogen doping levels (12-14%),
precisely engineered palladium membranes (5 micrometers, ensuring 98% deuterium-selective per-
meability), and advanced cyclic voltammetry protocols (1-5 Hz, -0.5V to -1.5V).
Ground truth h (Bu et al., 2024): The main hypothesis is that a nitrogen-doped ruthenium (Ru)
electrode can effectively catalyze the reductive deuteration of (hetero)arenes in the presence of
D2O, leading to high deuterium incorporation into the resulting saturated cyclic compounds. The
findings validate this hypothesis by demonstrating that this electrocatalytic method is highly effi-
cient, scalable, and versatile, suitable for a wide range of substrates.
Expert’s analysis: The proposed hypothesis effectively covers two key points from the ground truth
hypothesis: the incorporation of ruthenium (Ru) and the use of D2O as a deuterium source
within the electrocatalytic system. However, the current content does not detail the mechanism
by which Ru-D is produced, which is essential for explaining the process of reductive deuteration.
Nevertheless, the results are still insightful. The specific level of nitrogen doping, for example,
is highly suggestive and warrants further investigation. Overall, the match remains strong in its
alignment with the original hypothesis while also presenting opportunities for deeper exploration.
A.16.2 CASE 2 (MS BY EXPERT: 4)
Generated h: Leveraging an integrated multi-layer hydrogel-ion assembly, constructed through ad-
vanced freeze-casting, salting-out processes, and bioinformatics-driven macromolecular design,
will enhance the Carnot-relative efficiency and mechanical robustness of flexible thermogalvanic
devices. This approach reimagines the use of poly(vinyl alcohol) hydrogels and incorporates novel
ion-specific interactions informed by the Hofmeister series with uniquely potent ions like magnesium
and lithium. Precision ion concentrations (0.15 to 0.45 M) are optimized to amplify thermoelectric
responses and ionic conductance. Molecular dynamics simulations, employing detailed quantum
chemistry models, will validate the enhanced ionic interactions and entropy modulation. Compre-
hensive characterization through nanoindentation, electron microscopy, and advanced computa-
tional platforms will verify crystalline polymer structures and stratified ionic gel configurations. By
setting new benchmarks in efficiency and flexibility, this innovation is poised to revolutionize energy
solutions in wearable technologies, outperforming current models in both scalability and applica-
tion feasibility.
Ground truth h (Wang et al., 2024a): By integrating guanidine sulfate (Gdm)2SO4 into a poly vinyl
alcohol (PVA) hydrogel and employing directional freezing to create aligned channels, it is possible
to achieve a flexible thermogalvanic armor (FTGA) with a Carnot-relative efficiency exceeding 8%
while maintaining high mechanical strength. This integration allows for enhanced thermopower
and mechanical robustness, exceeding the performance of traditional quasi-solid thermocells.
Expert’s Analysis:
The proposed hypothesis effectively covers key points, mirroring the ground truth hypothesis. It in-
corporates the Hofmeister series by mentioning ”ion-specific interactions informed by the Hofmeis-
ter series,” which aligns with the ground truth’s reference to ”guanidine sulfate (Gdm)2SO4” in the
hydrogel integration, acknowledging that sulfate ions are a significant component of the Hofmeister
series. Additionally, the proposed hypothesis employs ”freeze-casting,” which reflects the ground
truth’s ”directional freezing” to create structured channels within the hydrogel. This approach en-
hances both mechanical strength and thermopower—objectives that are consistent with those of the
ground truth hypothesis.
Moreover, the inclusion of salting-out processes in the proposed hypothesis does indeed contribute
to improving the mechanical properties of thermoelectric materials. However, the selection of mag-
nesium and lithium as specific examples is problematic. Overall, the proposed hypothesis provides
valuable insights and serves as a source of inspiration for further exploration within this domain.
A.16.3 CASE 3 (MS BY EXPERT: 3)
Generated h: An enantioselective [2+2+1] cycloaddition involving cycloalkenes, acetylenecarboxy-
lates, and terminal alkynes can be achieved using a dynamically tunable Rh(I) pincer complex
system, specifically incorporating PNP and PONOP ligands, to control the alkyne-to-vinylidene
transformation and enhance selectivity towards chiral 3-methylenecyclopent-1-ene derivatives. By
adjusting the P–Rh–P bite angle with novel ligand modifications and utilizing a blended solvent
system containing toluene and dichloromethane at controlled temperatures (25°C to 80°C), reaction
conditions can be fine-tuned to suppress side reactions. Computational methods (DFT, QTAIM, and
NBO) will guide the optimization process, supported by real-time monitoring using FTIR and NMR.
Exploring dual-ligand strategies, including phosphine/phosphinite combinations, will offer insights
into previously unreported mechanistic pathways, ensuring a reproducible and scalable synthesis
method applicable to pharmaceuticals and advanced material synthesis.
Ground truth h (Shibahara et al., 2024): Rh(I)-catalyzed vinylidene formation and enantioselective
control via chiral diphosphine ligands can work together to enable a selective [2+2+1] cycload-
dition reaction using three different 2π-components. The formation of the vinylidene intermediate
ensures reaction selectivity, while the chiral ligands, through their dihedral angles, control the enan-
tiomeric outcome. Together, these factors will result in the highly enantioselective synthesis of chiral
3-methylenecyclopent-1-ene derivatives.
Expert’s Analysis: The hypothesis generated in Case 3 receives a score of 3, reflecting its accurate
identification of the metal and ligand necessary for the catalytic process, as well as its articulation
of the mechanism underlying chiral control via ligand geometry. However, the inclusion of mixed
solvents, which were not utilized in the actual study, prevents a higher rating. This prediction suc-
cessfully asserts the necessity of rhodium metal and a phosphine ligand, while also proposing the
significance of the chelating angle between the phosphine ligand and rhodium, in alignment with the
hypothesis presented in the referenced article. Furthermore, although the DFT calculations men-
tioned in the hypothesis are not addressed in the article, they represent a promising avenue for future
research in organic chemistry.
A.16.4 CASE 4 (MS BY EXPERT: 3)
Generated h: A sophisticated dual-modal photoredox strategy synergistically integrates phospho-
rus ylides with a synchronized dual-wavelength excitation (450 nm and 520 nm) system to incite a
regioselective three-component cycloaddition. This method exploits both net-neutral radical/polar
crossover pathways and metal-free carbyne generation to convert inert C–H and C=P bonds into
C–C and C=C bonds efficiently.
Implemented within an optimized mixed solvent system (ace-
tone/acetonitrile) under an ambient nitrogen atmosphere, the approach utilizes precise kinetic mod-
ulation and catalytic control, with computational validation (employing DFT models) underpinning
mechanistic insights and regioselectivity. Experimental verification through in situ spectroscopic
techniques (e.g., UV-Vis and NMR) enhances synchronization precision, curtailing side reactions.
This methodology pledges substantial advancements in sustainable pharmaceutical synthesis and
innovative polymer architecture formation, specifically by improving production efficiency for com-
plex molecules and novel materials.
Ground truth h (Suzuki et al., 2024): The hypothesis is that phosphorus ylides, when exposed to
photoredox catalysis, can undergo single-electron oxidation to generate radical cations, which en-
gage in radical-polar crossover reactions. These intermediates can then sequentially form C–C and
C=C bonds through a formal cycloaddition process, offering a powerful method for constructing
six-membered carbocycles from simple substrates.
Expert’s Analysis: The generated hypothesis also merits a score of 3, as it correctly anticipates the
use of photocatalysis and highlights the significant influence of solvent on the reaction. However,
since dual wavelength catalysis and solvent mixing were not employed in the actual experiment, a
higher score is not warranted. Notably, despite the proposed mixed solvents not being used in the
study, their composition comprises the two best-performing single solvents from the actual research,
thus providing valuable insights that remain relevant to the ongoing investigation.
|
5RUM1aIdok | GraphEval: A Lightweight Graph-Based LLM Framework for Idea Evaluation | [8, 5, 8, 6] |
Published as a conference paper at ICLR 2025
GRAPHEVAL: A LIGHTWEIGHT GRAPH-BASED LLM
FRAMEWORK FOR IDEA EVALUATION
Tao Feng1*, Yihang Sun2*, Jiaxuan You1
1University of Illinois at Urbana-Champaign 2Peking University
*Equal Contribution
ABSTRACT
The powerful capabilities of Large Language Models (LLMs) have led to their grow-
ing use in evaluating human-generated content, particularly in evaluating research
ideas within academic settings. Existing solutions primarily rely on prompt-based
LLM methods or fine-tuned lightweight language models for idea evaluation. How-
ever, these methods are often unstable and struggle to comprehend the complex
semantic information embedded in the ideas, impeding their ability to perform high-
quality evaluations. To address the above challenges, we propose GraphEval,
a lightweight graph-based LLM framework for idea evaluation. Our insight is
that a complex idea can be broken down into comprehensible viewpoint-nodes
using small prompted LLMs. These viewpoint-nodes can then be linked together
through edges created from LLM-based relation extraction and/or BERT similarity
scores. The created viewpoint-graph can be used to conveniently propagate scores
across viewpoint-nodes to improve the robustness of the idea evaluations. In par-
ticular, we propose two lightweight graph-based methods for idea evaluation: (1)
GraphEval-LP: a training-free label propagation algorithm that propagates quality
labels from known viewpoint-nodes to unknown nodes; (2) GraphEval-GNN: a
Graph Neural Network (GNN) that is trained to predict the quality labels given
the observed graph with minimal computation resources. Moreover, to overcome
LLM’s limitation in objectively assessing the novelty of ideas, we further add
a novelty detection model to GraphEval-GNN to enhance its capability in judging
idea novelty. Experiments on two datasets show GraphEval improves F1 scores
by at least 14% with low computation and API costs. Additionally, GraphEval
can effectively detect plagiarized ideas. Our code for GraphEval is released at
https://github.com/ulab-uiuc/GraphEval.
1
INTRODUCTION
With the advancement of LLMs, many tasks traditionally performed by humans, such as idea
evaluations (Liang et al., 2024; Lin et al., 2023a), label annotation (Wang et al., 2024; Goel et al.,
2023), or providing feedback to intelligent systems (Stamper et al., 2024; Mai et al., 2023), are
now handled by LLMs. Among these applications, the use of LLMs to substitute humans in idea
evaluations (Lu et al., 2024; Baek et al., 2024) carries substantial potential, where researchers can
obtain much faster feedback, as well as considerable risks, where the preference and bias of LLMs
could affect the development of a scientific domain. Concretely, it is well known that many reviews
for paper submissions are now written with the help of LLMs, which is explicitly allowed by ICLR
2025 as well. Unfortunately, existing LLMs are often biased to be “nice and helpful” while being
highly sensitive to the prompt, illustrated by Figure 1. Therefore, this paper aims to highlight a
pressing research question: how do we improve the fidelity of LLM-based idea evaluation?
Most existing research attempts to address the problem of LLM-based idea evaluation by designing
better prompt strategy (Brown, 2020; Wei et al., 2022; Wang et al., 2022; Yao et al., 2024), so
that more background knowledge, feedback, or inductive bias can be incorporated to an LLM. For
example, Research Agent (Baek et al., 2024) evaluates the ideas based on its five criteria added in
the prompt. AI Scientist (Lu et al., 2024) introduces some prompt tricks like self-reflection (Shinn
et al., 2024), providing few-shot examples (Wei et al., 2022), and response ensembling (Wang et al.,
2022) to enhance the idea evaluations. However, these prompt-based evaluation methods are still
Figure 1: Current LLMs are highly sensitive to prompts and show biases in evaluations. This
figure illustrates that even minor variations in the LLM’s prompts (Original Prompt, Positive Prompt,
Negative Prompt) for the same idea can lead to drastic changes in the final LLM evaluation results.
Moreover, the LLM tends to always give friendly evaluations like ’Accept’ and rarely gives negative
evaluations such as ’Reject’. This observation demonstrates that the LLM evaluation is biased.
Figure 2: GraphEval performs a better idea evaluation than the existing LLM-based method
by focusing on both the global and local information of the idea. In this figure, the part highlighted
in red in the idea contains factual errors. The existing LLM-based method shown on the far left
focuses solely on the global information of the idea, which often leads to overlooking factual errors
interspersed within the idea. In contrast, GraphEval decomposes the idea into viewpoints to obtain
scores for each viewpoint, then employs Mean Pooling and Min Pooling to extract global and local
information of the idea, respectively. Finally, GraphEval derives a fair and unbiased evaluation
based on these two aspects of information.
limited because: 1) they are highly sensitive to different prompts (Errica et al., 2024; Zhang et al.,
2024a) and are prone to hallucinations (Sansford et al., 2024; Yao et al., 2023a); 2) they also require
LLMs to possess advanced capabilities (Santu et al., 2024; Liang et al., 2024) to fully understand
and judge a complex research idea, which often requires PhD-level humans in the real-world; 3)
they could overlook factual inaccuracies interspersed among the ideas. As illustrated in Figure 2,
existing LLM-based methods directly analyze the entire set of information, therefore easily missing
the factual errors within the idea, leading to a biased evaluation.
Many studies in human psychology (Knauff & Wolf, 2010; Dijkstra et al., 2014) indicate that people
often find it difficult to understand abstract ideas, which can lead to random cognition and decision-
making. However, two approaches can significantly enhance human understanding of abstract ideas: 1)
breaking complex abstract ideas down into simpler viewpoints makes them easier for humans to
understand (Cowell et al., 2019; Rips et al., 2012); 2) showing the connections between complex
ideas and other ideas can also improve humans' understanding of these complex ideas (Huang, 2020;
Hayes & Kraemer, 2017; Khatin-Zadeh & Farsani, 2022).
Inspired by the psychological findings above, we propose GraphEval, a lightweight graph-based
LLM framework for idea evaluation, which breaks down complex ideas into simple and compre-
hensible viewpoints, and bridges different viewpoints into a viewpoint-graph. Specifically, we first
deconstruct complex and difficult-to-understand ideas into simple viewpoints using a prompt-based
approach with a (small) LLM. Then, we treat each viewpoint as a node and construct edges through
LLM-based relation extraction and/or BERT similarity scores. Finally, we create a viewpoint-graph
by joining the viewpoints and edges across different ideas. Based on this, we propose two lightweight
graph-based methods for idea evaluations: 1) GraphEval-LP: It is a training-free framework based on
graph label propagation algorithm. It operates by transferring quality labels from labeled nodes to
unlabeled nodes through weighted edges, ultimately predicting the final evaluation of an idea based
on the labels of the viewpoint-subgraph associated with it. 2) GraphEval-GNN: It is a deep learning
framework based on Graph Neural Networks (GNN) that requires minimal training. It models idea
evaluations as a graph prediction problem using GNNs, obtaining evaluation results by predicting
the attributes or classifications of viewpoint-subgraphs. Moreover, in order to objectively assess
the novelty of ideas, we have also added a plagiarism detection mechanism to the GraphEval-GNN.
Specifically, we have incorporated temporal information into the features of the viewpoint-nodes
and deliberately constructed plagiarized ideas along with their negative evaluation labels as negative
samples in the GNN training process. By doing so, we enable GraphEval to learn to give a
lower evaluation to ideas that are plagiarisms of previous research.
In summary, our main contributions are as follows:
• To the best of our knowledge, we are the first to investigate LLM-based idea evaluation from a
graph perspective, offering new insights into graph-enhanced LLM research.
• We propose a lightweight graph-based LLM framework called GraphEval for idea evaluations,
which includes GraphEval-LP and GraphEval-GNN methods. It breaks down the complex ideas
into simple viewpoints, connects the viewpoints into a viewpoint-graph, and models the idea
evaluation as a node-level prediction task on the viewpoint-graph.
• Extensive experiments on two datasets have demonstrated that GraphEval can achieve at least
a 14% improvement in F1 score with low computation cost and API cost compared with other
baselines. Additionally, GraphEval can effectively detect plagiarized ideas and provide a fair
evaluation.
2 RELATED WORKS
Automatic Idea Evaluation. The rapid growth of research idea production and intricate knowledge
specialization challenge conventional scientific feedback mechanisms (Liang et al., 2024), prompting
researchers to explore AI for automated idea evaluation to accelerate the academic innovation cycle.
For example, Sun & Li (2021); Li et al. (2019) investigated the use of CNNs for evaluating academic
innovation and design, while Siemon (2023); Bell et al. (2024) analyzed the automated idea evaluation
process from a Human-Computer Interaction perspective. In addition, numerous studies employed
fine-tuned lightweight language models (e.g., BERT (Devlin, 2018)) to evaluate complex texts, such
as dialogues (Thida, 2021), tweets (Pota et al., 2021), and the novelty of ideas (Just et al., 2024; Yuan
et al., 2022). However, most of these methods require extensive training on large-scale data and face
limitations in generalizability (Sun & Li, 2021; Just et al., 2024). Conversely, recent studies have
sought to leverage the domain knowledge and logical capabilities of LLMs to create idea evaluators
(Ubonsiri, 2024; Baek et al., 2024; Du et al., 2024; Lin et al., 2023a). Du et al. (2024) proposed using
a prompt-based approach to allow LLMs to act as reviewers and meta-reviewers in order to assess the
level of papers/ideas based on different evaluation criteria. Xu et al. (2024) utilized representations
from specific layers of LLMs for evaluation, Shankar et al. (2024) aligned LLM evaluators with
human preferences through feedback, and Lu et al. (2024) enhanced the decision-making ability of
LLM-based evaluators via self-reflection, few-shot learning, and response integration. Furthermore,
Si et al. (2024) measured the consistency gap between LLMs and human expert reviews. However,
when faced with the inherently complex semantic information of research ideas (Baek et al., 2024)
and the subjectivity of the evaluation task (Si et al., 2024), the decision-making consistency between
LLMs and human reviewers remains limited, often leading to LLMs struggling to provide high-
quality feedback (Si et al., 2024; Liang et al., 2024; Lu et al., 2024). Recently, some research works
have been evaluating long-form texts, such as biographies of people (Min et al., 2023) and complex
mathematical reasoning texts (Lightman et al., 2023). These studies divide the long text into multiple
subsets and evaluate each of them. Inspired by these works, we decompose the obscure ideas into
simple, understandable viewpoint nodes using LLMs, and further evaluate the idea based on graph
algorithms.
Graph for LLMs. The use of graphs in conjunction with LLMs is an emerging research area,
with several established directions. These include integrating LLMs with path selection mechanisms
to learn unified graph representations (Shang et al., 2024); constructing graph-based text indexes
using LLMs to answer questions over private text corpora (Edge et al., 2024); and utilizing LLMs for
knowledge graph creation (Yang et al., 2024; Zhu et al., 2024; Carta et al., 2023; Trajanoska et al.,
2023) and completion (Yao et al., 2023b). In addition, Zhang et al. (2024b) proposed the NLGift
benchmark, which focuses on the evaluation of LLM graph reasoning generalization; Perozzi et al.
(2024) introduced GraphToken, which can explicitly represent structured data for LLMs; Shi et al.
(2024) introduced a novel recommender that synergizes LLMs and KGs to enhance recommendations
and provide interpretable results. In terms of open-source software, various graph databases are
supported by both the LangChain (LangChain, 2024) and LlamaIndex (LlamaIndex, 2024) libraries.
Figure 3: Overview of GraphEval methodology. GraphEval first transforms the ideas into a
viewpoint-graph via Viewpoint-Graph Extraction, which contains multiple viewpoint-subgraphs,
viewpoint-nodes, and edges between viewpoint-nodes. Then two lightweight GraphEval imple-
mentations named GraphEval-LP and GraphEval-GNN are employed to evaluate the ideas. Note that
AGG denotes the acronym for aggregation function.
However, leveraging LLMs to extract diverse viewpoints embedded in research ideas, structuring them
as graphs, and using these for idea evaluation remains a direction that warrants further exploration.
3 VIEWPOINT-GRAPH EXTRACTION: A GRAPH STRUCTURE FOR DIVERSE
RESEARCH VIEWPOINTS AND THEIR RELATIONSHIPS
Problem Setup We consider a predefined set of quality labels Slabel for evaluating research
ideas (e.g., categorical values [Reject, Accept (Poster), Accept (Oral)]). Given a set of ideas
[D0, D1, . . . , Dn], only a subset of these ideas has known quality labels during training, and our
objective is to predict the quality labels for the remaining ideas at test time.
Framework Overview Figure 3 provides an overview of the proposed GraphEval framework. The
key insight of our approach is that by leveraging the summarization capabilities of LLMs (Jin et al.,
2024; Ghosh et al., 2024), we can extract a viewpoint-subgraph from each idea’s text, which serves
as a high-granularity representation that captures diverse viewpoints and the semantic relationships
between them. Additionally, we connect multiple viewpoint-subgraphs to construct a larger graph
structure, the viewpoint-graph, which acts as an extensible database, encompassing existing research
viewpoints and their intricate interrelations. This allows us to apply label propagation or GNN
algorithms to evaluate ideas in the test set, using only the quality information from the training set
ideas.
Viewpoint Extraction through Prompted LLMs The key challenges in LLM-based research idea
evaluations are twofold: (1) Research ideas inherently encapsulate complex semantic information
(Baek et al., 2024; Si et al., 2024), as a single idea often contains multiple distinct viewpoints rooted
in different concepts, interconnected through intricate logical relationships that collectively define the
idea. (2) Idea evaluation is fundamentally subjective (Si et al., 2024), which presents a significant
challenge for LLMs’ comprehension and reasoning abilities (Santu et al., 2024; Liang et al., 2024),
often resulting in severe biases and a lack of alignment with human evaluations (Lu et al., 2024).
To address these challenges, we utilize LLMs to extract fine-grained components from research ideas,
which we refer to as viewpoints. A viewpoint can be an idea, argument, or fact embedded within
the research content. These viewpoints are semantically independent, evaluable units that are made
as granular as possible to ensure they cannot be further decomposed. For a given research idea Di,
we employ a prompted LLM Lp to extract a list of viewpoints: [v_0^i, v_1^i, . . . , v_k^i] = Lp(Di).
A simple example of viewpoint extraction is illustrated in Appendix A.
By extracting viewpoints, we decompose semantically intricate research ideas into fine-grained and
semantically independent units. In this process, we utilize prompted LLMs to extract elements
as an objective and straightforward NLP task (Manning, 1999; Rush, 2015), relying solely on the
models’ summarization and abstraction capabilities (Jin et al., 2024; Kurisinkel & Chen, 2023). This
approach typically induces fewer biases compared to subjective judgment tasks that necessitate deeper
comprehension and reasoning of complex text (Zhang et al., 2023; Zheng et al., 2023). Additionally,
it allows us to leverage smaller LLMs (e.g., those with 7 billion parameters) to implement the
GraphEval framework, resulting in significant resource savings.
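As an illustration, the following is a minimal sketch of the viewpoint-extraction step. The prompt wording, the one-viewpoint-per-line output format, and the call_llm stub are assumptions for the sketch; GraphEval's actual prompt may differ.

```python
# Minimal sketch (illustrative): extract viewpoint-nodes from a research idea with a
# prompted LLM. `call_llm` is a hypothetical stand-in for any (small) instruction-tuned
# chat model.

from typing import List

EXTRACTION_PROMPT = (
    "Decompose the following research idea into independent, fine-grained viewpoints "
    "(one per line). Each viewpoint should be a single evaluable claim or fact.\n\nIdea: {idea}"
)

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real model such as a 7B instruction-tuned LLM."""
    return "GNNs model graph-structured data.\nGNNs outperform MLPs on node classification."

def extract_viewpoints(idea: str) -> List[str]:
    response = call_llm(EXTRACTION_PROMPT.format(idea=idea))
    # One viewpoint per non-empty line; strip bullet characters the model may add.
    return [line.strip(" -*\t") for line in response.splitlines() if line.strip()]

if __name__ == "__main__":
    print(extract_viewpoints("Graph Neural Networks (GNNs) have emerged as a powerful tool ..."))
```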
Table 1: LLM-based relation extraction yields relatively few relevant relationships from the
viewpoints, resulting in viewpoint-subgraphs with overly sparse edges. We conduct LLM-based
viewpoint and relation extraction on 300 research ideas, quantifying the average length of each idea
and extracted viewpoints, the average number of viewpoints extracted per idea, the average edge
number and edge density of each viewpoint-subgraph, with text length measured in word count. The
LLM used is Mistral (7B) Instruct v0.3 (Jiang et al., 2023).
Avg. Idea Len.   Avg. Viewpoint Num.   Avg. Viewpoint Len.   Avg. Edge Num.   Avg. Edge Density
174.60           8.83                  20.19                 3.71             10.73%
Viewpoint-subgraph Construction through Prompted LLMs To identify the semantic relation-
ships between viewpoints extracted from an idea, we utilize prompted LLMs for relation extraction
(Wei et al., 2024; Jinensibieke et al., 2024). Specifically, we treat each viewpoint as a node in a graph,
referred to as a viewpoint-node. We then input the viewpoint list into the prompted LLM, instructing
it to extract semantically related viewpoint pairs. These pairs are subsequently considered as edges
connecting the corresponding viewpoint-nodes. We refer to the graph constructed from a research
idea as a viewpoint-subgraph.
To validate the feasibility of using prompted LLMs for relation extraction, we collect 300 submissions
to the ICLR conferences between 2021 and 2023, treating the abstracts of these academic papers
as representations of research ideas (Si et al., 2024; Baek et al., 2024). We perform LLM-based
viewpoint and relation extraction on these research ideas. As shown in Table 1, the LLM-based
relation extraction yields relatively few relevant relationships from the viewpoints, resulting in
viewpoint-subgraphs with overly sparse edges. This leads to an excess of isolated viewpoint-nodes in
each viewpoint-subgraph and a deficiency in the inherent relational information. Additionally, the
LLM-based relation extraction incurs extra resource costs.
To address these issues, we propose a method for relation extraction based on embedding similarity.
Viewpoint-subgraph Construction through BERT-based Encoder To automatically identify
logical relationships between viewpoints, we use a BERT-based encoder E to obtain embed-
dings of equal dimensions e for each viewpoint-node v (Qasim et al., 2022; Lin et al., 2023b):
[e1, e2, . . . , en] = E([v1, v2, . . . , vn]). Then, we compute the cosine similarity s between their
embeddings: s(ei, ej) = (ei · ej) / (∥ei∥∥ej∥) (Izacard et al., 2021; Li et al., 2023). Each viewpoint-node is
connected to the top-k nodes with the highest embedding cosine similarity using weighted undirected
edges, with the edge weights set to the cosine similarity (Harnoune et al., 2021). This way, we
construct the viewpoint-subgraph, which serves as a high-granularity representation of the research
idea. Additionally, by controlling the value of k, we can regulate the edge density, allowing for the
construction of viewpoint-subgraphs that are more suited to specific downstream tasks.
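A minimal sketch of this construction is shown below, assuming a sentence-transformers checkpoint as a stand-in for the BERT-based encoder E (the exact encoder is not pinned down here); the default k follows the intra-graph degree listed in Appendix D.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # stand-in for the BERT-based encoder E

def build_viewpoint_subgraph(viewpoints, k=5, model_name="all-MiniLM-L6-v2"):
    """Connect each viewpoint-node to its top-k most similar neighbours (weighted, undirected)."""
    encoder = SentenceTransformer(model_name)
    emb = encoder.encode(viewpoints, normalize_embeddings=True)   # unit-norm rows -> dot = cosine
    sim = emb @ emb.T
    np.fill_diagonal(sim, -np.inf)                                # exclude self-loops
    edges = []
    for i in range(len(viewpoints)):
        top = np.argsort(sim[i])[::-1][:min(k, len(viewpoints) - 1)]   # top-k neighbours of node i
        for j in top:
            edges.append((i, int(j), float(sim[i, j])))           # edge weight = cosine similarity
    return emb, edges
```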
Viewpoint-graph Construction through Connecting Viewpoint-subgraphs After transforming
the ideas in both the training set and test set into viewpoint-subgraphs, we connect them to construct
a larger graph. Specifically, similar to the construction of viewpoint-subgraphs, for each viewpoint-
node, we connect it to the top-m nodes from different subgraphs with the highest embedding cosine
similarity using undirected weighted edges. We refer to the graph constructed as the viewpoint-graph,
which integrates the diverse viewpoints of different research ideas and the interrelations between
them. The viewpoint-graph G can be represented by a node list and an edge list:
G = {[(v0, e0), ..., (vn, en)], [(vk0, vk1, wk0k1), ..., (vkmn, vkmn+1, wkmn kmn+1)]}    (1)
Notably, the viewpoint-graph is scalable, allowing new viewpoint-subgraphs to be integrated in linear
time, providing a theoretical foundation for its expansion as new ideas are generated.
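A companion sketch for the cross-subgraph connection step is given below, reusing the unit-norm embeddings from the previous sketch; the default m follows the inter-graph degree in Appendix D. This is an illustrative implementation, not the released code.

```python
import numpy as np

def connect_subgraphs(embeddings, subgraph_ids, m=10):
    """Add top-m cross-subgraph edges per viewpoint-node to form the viewpoint-graph.

    embeddings:   (n, d) array of unit-norm viewpoint embeddings, all subgraphs stacked.
    subgraph_ids: length-n array mapping every node to the idea / subgraph it came from.
    """
    emb = np.asarray(embeddings)
    ids = np.asarray(subgraph_ids)
    sim = emb @ emb.T
    cross_edges = []
    for i in range(len(emb)):
        candidates = np.where(ids != ids[i])[0]                    # only nodes from other subgraphs
        top = candidates[np.argsort(sim[i, candidates])[::-1][:m]]
        for j in top:
            cross_edges.append((i, int(j), float(sim[i, j])))      # weighted undirected edge
    return cross_edges
```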
4 GRAPHEVAL-LP: A SIMPLIFIED AND LIGHTWEIGHT IMPLEMENTATION
After obtaining the viewpoint-graph G, we would like to validate its efficacy by first applying a
simple and lightweight algorithm, label propagation (Raghavan et al., 2007; Zhang et al., 2017), to
evaluate the ideas in the test set. Our results in Section 7 show that this simple algorithm is already
very effective for idea evaluation. We refer to this evaluation framework as GraphEval-LP.
Initialization and Regularization For each viewpoint-node vi in G, we maintain a vector di, where
each dimension corresponds to a quality label in Slabel. Thus, the dimensionality of di is given by
|di| = |Slabel|. For viewpoint-nodes extracted from ideas in the training set, we assign a value of 1 to
the dimension corresponding to the idea’s label, while all other dimensions are set to 0. In contrast,
viewpoint-nodes extracted from the test set are initialized as zero vectors. Additionally, we regularize
the edge weights wij in G to ensure that the sum of the weights of all edges connected to any given
viewpoint-node vi equals 1, i.e., Σ_{j∈N(i)} wij = 1, where N(i) represents the set of neighbors of vi.
Label Propagation We perform multiple iterations of label propagation on graph G until the labels
no longer change. Specifically, in each iteration, each node updates its vector by adding the weighted
vectors of its neighboring nodes:
di^(t+1) = (1/Zi) (di^(t) + Σ_{j∈N(i)} wij dj^(t))    (2)
where di^(t) is the vector of node vi at iteration t, and Zi is a normalization factor that ensures the
updated vector is properly scaled.
Label Prediction After completing label propagation, we sum the vectors of the viewpoint-nodes
corresponding to each idea in the test set. The predicted label ŷ is then determined by selecting the
dimension with the highest value in the summed vector, i.e., ŷ = arg max_j (Σ_{i=1}^{k} di)_j, where j
indexes the dimensions of the vector and k is the number of viewpoints for a given research idea.
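A compact NumPy sketch of GraphEval-LP follows, assuming a row-normalized weighted adjacency matrix as in the regularization step; whether labeled nodes are clamped between iterations is an implementation choice the text leaves open, and is marked as an assumption below.

```python
import numpy as np

def graph_eval_lp(adj, train_nodes, train_labels, num_labels, max_iters=5):
    """Label propagation over the viewpoint-graph (Eq. 2).

    adj: (n, n) weighted adjacency matrix whose rows sum to 1, as in the regularization step.
    """
    n = adj.shape[0]
    d = np.zeros((n, num_labels))
    d[train_nodes, train_labels] = 1.0                   # one-hot init for training viewpoints
    for _ in range(max_iters):
        new_d = d + adj @ d                              # d_i + sum_j w_ij d_j
        row_sum = new_d.sum(axis=1, keepdims=True)       # 1/Z_i rescaling
        new_d = np.divide(new_d, row_sum, out=np.zeros_like(new_d), where=row_sum > 0)
        new_d[train_nodes] = d[train_nodes]              # clamp labeled nodes (an assumption)
        d = new_d
    return d

def predict_idea_label(d, idea_node_indices):
    """Sum the propagated vectors of an idea's viewpoint-nodes and take the arg max."""
    return int(np.argmax(d[idea_node_indices].sum(axis=0)))
```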
5 GRAPHEVAL-GNN: A GRAPH NEURAL NETWORK-BASED SOLUTION
Although label propagation is effective, it does not learn how to properly propagate evaluation scores
from known nodes to unknown nodes. Therefore, we further propose a learning-based approach,
GraphEval-GNN, which is trained to predict the evaluation scores for a viewpoint-node.
Method Overview. As shown in Figure 3, GraphEval-GNN models viewpoints as viewpoint-nodes,
while the relationships between viewpoints are represented by edge features. We apply GNN to
embed the node and edge features and use them for training and testing.
Initialize node/edge features. As illustrated in Sec. 3, we initialize the viewpoint-node features hv
by converting viewpoints into embeddings using BERT. Since the relationships between viewpoint-nodes
encode the similarity relations obtained from these BERT embeddings, we initialize the edge features wv
with the corresponding similarity scores.
Predict via a weighted GNN. We implement the predictive model fϕ over viewpoint-nodes using
a weighted GNN, as shown in Figure 3. The objective of the GNN is to learn expressive node
embeddings hv through an iterative weighted aggregation of the local network neighborhoods. The
l-th iteration of the GraphConv(·), or the node embeddings update of the l-th layer, is represented as:
h_v^(l) = U^(l) CONCAT( MEAN({RELU(wv W^(l) h_q^(l−1)), q ∈ N(v)}), h_v^(l−1) )    (3)
where h_v^(l) is the node embedding after l iterations, h_v^(0) has been initialized as explained above,
N(v) denotes the direct neighbors of v, and U^(l), W^(l) are learnable parameters.
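A plain-PyTorch sketch of one such layer is given below; it re-implements Eq. (3) directly for clarity rather than reproducing the authors' PyTorch Geometric code, and the layer dimensions are assumptions.

```python
import torch
import torch.nn as nn

class WeightedGraphConv(nn.Module):
    """One layer of Eq. (3): h_v = U · CONCAT( MEAN_{q∈N(v)} RELU(w_vq · W · h_q), h_v )."""

    def __init__(self, dim):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)
        self.U = nn.Linear(2 * dim, dim)

    def forward(self, h, edge_index, edge_weight):
        # edge_index: (2, E) long tensor of (source q, target v) pairs; edge_weight: (E,)
        src, dst = edge_index
        msg = torch.relu(edge_weight.unsqueeze(-1) * self.W(h[src]))     # RELU(w · W h_q)
        agg = torch.zeros_like(h).index_add_(0, dst, msg)                # sum over neighbours
        deg = torch.zeros(h.size(0), device=h.device).index_add_(
            0, dst, torch.ones_like(edge_weight)).clamp(min=1).unsqueeze(-1)
        mean_msg = agg / deg                                             # MEAN over N(v)
        return self.U(torch.cat([mean_msg, h], dim=-1))                  # U · CONCAT(·, h_v)
```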
Since the evaluation of an idea is determined by all the viewpoints extracted from it, we further model
the LLM evaluation problem as a sub-graph prediction problem and aggregate all the node embeddings
into a subgraph embedding. Moreover, as introduced in Figure 2, we consider MEAN pooling and
MAX pooling simultaneously to extract global and local information of the idea. Specifically, the
sub-graph probability distribution ŷ_Di of idea Di is computed through GraphPred(·) as:
ŷ_Di = SOFTMAX( MLP( CONCAT( MEAN{ h_v^(l) : v ∈ Lp(Di) }, MAX{ h_v^(l) : v ∈ Lp(Di) } ) ) )    (4)
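A matching sketch of the readout in Eq. (4) is shown below; the hidden width of the MLP is an assumption chosen to match the 64-dimensional GNN described in the implementation details.

```python
import torch
import torch.nn as nn

class SubgraphReadout(nn.Module):
    """Eq. (4): concatenate mean- and max-pooled viewpoint embeddings of one idea, then MLP + softmax."""

    def __init__(self, dim, num_classes, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, num_classes))

    def forward(self, h, node_indices):
        # h: (n, dim) node embeddings from the GNN; node_indices: this idea's viewpoint-nodes Lp(Di)
        sub = h[node_indices]
        pooled = torch.cat([sub.mean(dim=0), sub.max(dim=0).values], dim=-1)
        return torch.softmax(self.mlp(pooled), dim=-1)   # \hat{y}_{D_i}
```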
We summarize the detailed training process of GraphEval in Algorithm 1. During testing,
GraphEval selects the category with the highest output probability as the result of the LLM
evaluation.
Algorithm 1 Training of GraphEval
Require: Dataset Dtrain = {(x, y)}. A weighted GNN fϕ. Edge weights wv. Number of GNN layers L.
1: Initialize the embeddings of the viewpoint nodes, h_v^(0), using BERT.
2: for each iteration i do
3:     N ← SampleMiniEdgeBatch(Dtrain)
4:     Mask the viewpoint-subgraphs in Dtrain that are in N, and obtain the labels of the viewpoint-subgraphs in T_n^(i) ∈ N
5:     for l = 1 to L do
6:         h_v^(l) ← GraphConv(h_v^(l−1), wv) with fϕ
7:     Backward(Criterion(ŷ_Di, {y_j}_{j∈T_n^(i)}))
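A simplified full-batch version of this loop is sketched below; the mini-edge-batch sampling and masking of Algorithm 1 are omitted for brevity, and `gnn` is assumed to be (a stack of) the WeightedGraphConv layers sketched above together with the SubgraphReadout module.

```python
import torch
import torch.nn.functional as F

def train_graph_eval_gnn(gnn, readout, h0, edge_index, edge_weight,
                         idea_nodes, labels, epochs=1000, lr=1e-3):
    """Full-batch stand-in for Algorithm 1.

    idea_nodes: list of index tensors, one per training idea (its viewpoint-nodes).
    labels:     long tensor of review-decision labels, one per training idea.
    """
    optimizer = torch.optim.Adam(list(gnn.parameters()) + list(readout.parameters()), lr=lr)
    for _ in range(epochs):
        optimizer.zero_grad()
        h = gnn(h0, edge_index, edge_weight)
        # Use the pre-softmax logits from the readout MLP, since cross_entropy applies log-softmax itself.
        logits = torch.stack([readout.mlp(torch.cat(
            [h[idx].mean(dim=0), h[idx].max(dim=0).values], dim=-1)) for idx in idea_nodes])
        loss = F.cross_entropy(logits, labels)     # Criterion in Algorithm 1
        loss.backward()                            # Backward in Algorithm 1
        optimizer.step()
```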
GraphEval for idea novelty assessment. Assessing the novelty of ideas is crucial, as plagiarized
or derivative ideas can sometimes mislead LLMs into giving them higher evaluation scores. As a
concrete example, if the same idea is evaluated by an LLM twice, the LLM will always assign the
same evaluation score, since it does not take novelty into account when evaluating ideas.
To address this issue, we enforce our GraphEval-GNN to learn that ideas and viewpoints appearing
later in time and exhibiting high similarity to earlier ones should be evaluated with lower scores.
Specifically, our approach focuses on two key aspects. First, we incorporate temporal features into the
viewpoint representations, enabling the model to capture the chronological sequence of viewpoints.
Second, we artificially generate duplicated ideas and viewpoints that are direct combinations of
existing viewpoints in the viewpoint-graph, label them with lower evaluation scores as negative
samples, and include them in the GNN training process.
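A sketch of how such negative samples and temporal features could be produced is given below; the exact timestamp encoding and duplication strategy are assumptions, since they are only described at a high level here.

```python
import random
import torch

def make_negative_idea(viewpoint_pool, num_viewpoints=5, lowest_label=0):
    """Fabricate a derivative idea by recombining existing viewpoints from the viewpoint-graph,
    labeled with the lowest evaluation score so the GNN learns to penalize unoriginal content."""
    sampled = random.sample(viewpoint_pool, k=min(num_viewpoints, len(viewpoint_pool)))
    return sampled, lowest_label

def add_temporal_feature(h, timestamps):
    """Append a normalized timestamp (e.g., submission year scaled to [0, 1]) to each viewpoint embedding."""
    t = torch.tensor(timestamps, dtype=h.dtype, device=h.device).unsqueeze(-1)
    t = (t - t.min()) / (t.max() - t.min() + 1e-8)
    return torch.cat([h, t], dim=-1)
```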
6 EXPERIMENTAL SETUP
Task. Following the works of Si et al. (2024); Baek et al. (2024), we treat the abstract of an academic
paper as a representation for the research idea, since it typically offers a concise summary of the
research problem, the scientific methods employed, the experimental design, and the key contributions.
Specifically, we provide each method with the abstracts and titles of the academic papers, tasking them
with predicting the review decision: Reject, Accept (Poster), Accept (Oral), or Accept (Spotlight).
Datasets. We employ two datasets to thoroughly evaluate the proposed GraphEval framework:
• ICLR Papers: We collect abstracts and review decisions from paper submissions to the ICLR
conferences between 2021 and 2023. From this, we randomly select 300 papers as the training set
for learning-based methods and 50 papers as the test set.
• AI Researcher Dataset: We use the dataset collected by Si et al. (2024) in AI Researcher as an
additional test set, which contains academic papers focusing on the domain of "novel prompting
methods." Note that due to the scarcity of Accept (Oral) and Accept (Spotlight) labels in this
dataset, we combine them into a single label, thereby transforming the task into a three-class
classification problem.
The details of the datasets can be found in Appendix C.
Baselines. To gain a comprehensive understanding of the performance of our proposed framework
in evaluating research ideas, we have adopted several baselines:
• Prompted LLM: We provide several criteria for assessing research ideas in the prompt. Addi-
tionally, we present specific standards for the four review decisions and include one example for
each as few-shot examples for in-context learning (Brown, 2020). Moreover, we include the label
distribution of the dataset to help the LLMs understand the frequency of each review decision.
• CoT prompt: Drawing inspiration from Wei et al. (2022), we modify the prompt used for prompted
LLM to adopt a CoT format, guiding it to complete the idea evaluation step by step.
• CoT-SC: Self-consistency with CoT (CoT-SC) is an ensemble approach that samples k = 5 i.i.d.
CoT, then returns the most frequent output (Wang et al., 2022).
• ToT prompt: Tree of Thoughts (ToT) is an extension of CoT (Yao et al., 2024). Similar to CoT,
we divide the evaluation process into multiple steps. At each step, we sample branch = 5 i.i.d.
CoTs, and pass the most frequent output as the intermediate result to the next step.
• Research Agent: We adopt the idea evaluation method from Research Agent (Baek et al., 2024) as
one of our baselines, where the research problem, scientific method, and experiment design of an
idea are each scored based on five criteria. Building on this, we further introduce a final decision
step that synthesizes the above evaluation results to provide a comprehensive review decision.
• Fine-tuned BERT: In addition to LLM-based methods, we fine-tune a DistilBERT model (Sanh
et al., 2019) using collected paper abstracts and review decisions as a baseline to validate the
competitiveness of our approach compared to learning-based methods.
For all LLM-based baselines (except Fine-tuned BERT), we use two LLMs of different sizes: Mistral
(7B) Instruct v0.3 (Jiang et al., 2023) and Qwen 2 Instruct (72B) (qwe, 2024). All prompts used in
the methods can be found in the Appendix B.
Evaluation Metrics. To comprehensively evaluate the consistency between the idea evaluation
methods and human reviewers, we calculate the accuracy, macro precision, macro recall, and
macro F1 score for each method. Additionally, we record the average token cost per evaluation as
a measure of resource consumption. Note that for Mistral (7B) Instruct v0.3, the API cost is $0.20
per 1M tokens, and for Qwen 2 Instruct (72B), the API cost is $0.90 per 1M tokens.1 We calculate
the average cost per evaluation for each method according to these pricing standards. To intuitively
illustrate the resource consumption of each method, we normalize the average costs by setting the
highest-cost method to 1, which we refer to as normed cost. A smaller normed cost indicates lower
resource expenditure.
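The cost normalization can be reproduced with a few lines; the dictionary inputs below are hypothetical placeholders.

```python
def normed_costs(avg_tokens, price_per_million):
    """Turn average tokens per evaluation into normalized dollar costs.

    avg_tokens:        {method: average tokens per evaluation}
    price_per_million: {method: API price in $ per 1M tokens, e.g. 0.20 (7B) or 0.90 (72B)}
    """
    dollars = {m: avg_tokens[m] * price_per_million[m] / 1e6 for m in avg_tokens}
    max_cost = max(dollars.values())
    return {m: cost / max_cost for m, cost in dollars.items()}   # most expensive method maps to 1.0
```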
Implementation Details. During the training phase, we configured the graph neural network as
a two-layer weighted GNN with a hidden dimension of 64. The batch size is set to 64, and the
maximum number of training epochs is limited to 1000. We employ the Adam optimizer (Diederik,
2014) for training and gradually reduce the learning rate from 1e-3 to 0 using a LambdaLR scheduler.
Our proposed method is implemented using PyTorch2 and PyTorch Geometric (PyG)3, with all
experiments conducted on a single NVIDIA A100 Tensor Core GPU. For the LLMs, we utilize
API calls from Together AI4 to obtain responses. Additionally, the average GPU memory usage of
GraphEval-GNN for the two tasks is 372MB, whereas Fine-tuned BERT utilizes 4.84 GB on average.
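A sketch of this optimizer configuration follows; the linear decay shape is an assumption, since the text only states that the rate is gradually reduced from 1e-3 to 0 with a LambdaLR scheduler.

```python
import torch

def make_optimizer(model, max_epochs=1000, base_lr=1e-3):
    """Adam with a learning rate decayed from 1e-3 towards 0 over training via LambdaLR."""
    optimizer = torch.optim.Adam(model.parameters(), lr=base_lr)
    scheduler = torch.optim.lr_scheduler.LambdaLR(
        optimizer, lr_lambda=lambda epoch: max(0.0, 1.0 - epoch / max_epochs))
    return optimizer, scheduler
```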
7 EXPERIMENT RESULTS
7.1 COMPARISON WITH EXISTING BASELINES.
We report the performance of our methods and baselines in Tables 2 and 3.
(1) Across all datasets, GraphEval-GNN significantly outperforms all baselines: for the ICLR Papers
dataset, it achieves a 10%-72% accuracy advantage and an 18%-42% macro F1 score advantage; for
the AI Researcher dataset, it achieves a 13%-53% accuracy advantage and a 14%-48% macro F1
score advantage. Moreover, its normed cost on both datasets demonstrates its utilization of resources
comparable to the minimum expenditure level. This indicates that by leveraging a smaller LLM (7B
parameters) to convert semantically complex research ideas into a more granular viewpoint-graph, and
utilizing GNN algorithms to extract global and local information, we achieve precise evaluation of
research ideas.
(2) Regarding the prompt-based baselines, they generally achieve lower accuracy and macro F1
scores. Our observations indicate that all these methods tend to overestimate the quality of ideas, with
very few ideas being rejected. This aligns with findings in previous works (Lu et al., 2024; Si et al.,
2024). Furthermore, we find that using larger-scale LLMs does not consistently improve evaluation
performance; rather, it often leads to a decline: In experiments, the 72B model tends to provide
consistent or similar decisions and overall scores for different ideas. This suggests that LLMs exhibit
1The API pricing referenced here is based on the rates provided by https://www.together.ai/pricing.
2https://pytorch.org/
3https://pytorch-geometric.readthedocs.io/en/latest/
4https://www.together.ai/
Table 2: GraphEval-GNN consistently outperforms all baselines in the ICLR Papers Dataset
while utilizing resources comparable to the minimum expenditure level. Bold and underlined text
denotes the best and second-best results. Specifically, for accuracy, macro precision, macro recall, and
macro F1 score, higher values indicate more precise predictions of the labels for research ideas in the
test set. Conversely, for the normed cost, lower values represent reduced resource expenditure. Since
Fine-tuned BERT is not an LLM-based method, its token cost and normed cost are not calculated.
Dataset
ICLR Papers
Method\Metric
Accuracy Precision
Recall
F1 Score Token Cost Normed Cost
Prompted LLM (7B)
Prompted LLM (72B)
CoT prompt (7B)
CoT prompt (72B)
CoT-SC (7B)
CoT-SC (72B)
ToT prompt (7B)
ToT prompt (72B)
Research Agent (7B)
Research Agent (72B)
Fine-tuned BERT
16.00%
6.00%
16.00%
6.00%
20.00%
4.00%
8.00%
4.00%
12.00%
6.00%
66.00%
4.55%
4.09%
5.00%
5.36%
5.21%
1.19%
4.95%
1.06%
8.11%
5.30%
7.14%
16.67%
6.35%
31.25%
7.69%
16.67%
5.05%
27.08%
8.33%
20.83%
2.27%
25.00%
6.47%
18.75%
25.00%
2.04%
22.92% 10.05%
7.17%
31.25%
1968.22
1735.30
2443.28
2415.62
3121.72
3428.14
8963.92
6211.46
7909.18
7278.72
27.22%
28.39% 26.01%
\
GraphEval-LP (Ours)
GraphEval-GNN (Ours)
70.00%
32.55% 32.16%
76.00% 56.59% 42.63% 43.59%
37.61%
2672.95
2672.95
0.06
0.24
0.07
0.33
0.10
0.47
0.27
0.85
0.24
1.00
\
0.08
0.08
Table 3: GraphEval-GNN consistently outperforms all baselines in the AI Researcher Dataset
while utilizing resources comparable to the minimum expenditure level. Bold and underlined
text denotes the best and second-best results. Since Fine-tuned BERT is not an LLM-based method,
its token cost and normed cost are not calculated.
Dataset
AI Researcher
Method\Metric
Accuracy Precision
Recall
F1 Score Token Cost Normed Cost
Prompted LLM (7B)
Prompted LLM (72B)
CoT prompt (7B)
CoT prompt (72B)
CoT-SC (7B)
CoT-SC (72B)
ToT prompt (7B)
ToT prompt (72B)
Research Agent (7B)
Research Agent (72B)
Fine-tuned BERT
26.98%
30.16%
30.16%
23.81%
31.75%
25.40%
30.16%
25.40%
27.42%
20.63%
60.00%
50.40%
52.88%
51.44%
50.51%
26.61%
52.36%
42.66%
51.78%
19.78%
14.53%
36.44% 24.75%
41.99% 28.33%
37.09% 21.18%
34.97% 22.86%
41.67% 26.43%
37.75% 24.37%
29.04% 19.89%
39.38% 22.93%
38.19% 24.24%
32.03% 18.71%
1961.41
1717.57
2410.06
2263.92
2854.44
3157.40
9829.14
6071.98
7887.44
7345.06
54.44%
54.44% 53.33%
\
GraphEval-LP (Ours)
GraphEval-GNN (Ours)
70.47%
55.56% 56.97%
73.33% 81.67% 73.33% 67.13%
61.11%
2541.17
2541.17
0.06
0.23
0.07
0.31
0.09
0.43
0.30
0.83
0.24
1.00
\
0.08
0.08
significant bias when faced with subjective judgment tasks that necessitate a deeper understanding
and reasoning of complex text (Zhang et al., 2023; Zheng et al., 2023), irrespective of their model
size and capability. On the other hand, the GraphEval framework mitigates bias by transforming the
LLM’s task into the objective and straightforward task of extracting elements.
(3) Compared to CoT-SC, the ToT prompt and Research Agent, which utilize more complex prompting
techniques, do not demonstrate significant advantages. This suggests that prompting techniques have
limited efficacy in enhancing the capabilities of LLMs for tasks requiring complex comprehension
and reasoning.
(4) Although fine-tuned BERT achieves better results compared to other prompt-based baselines,
it still falls short of the performance level of GraphEval. This is due to the construction of the
viewpoint-graph, which allows GraphEval-LP and GraphEval-GNN to obtain quality information
about viewpoints locally and capture the intricate interrelations among diverse viewpoints globally,
thereby leading to improved performance.
(5) GraphEval-LP consistently achieves the second-best results across both datasets, and it does not
require training, making it efficient and lightweight. GraphEval-LP effectively demonstrates the
Figure 4: Novelty assessment can significantly improve the performance of GraphEval when
detecting plagiarized or derivative ideas. We compare two variants of GraphEval on the ICLR
Papers dataset and evaluate their performance on four metrics.
strong utility of the constructed viewpoint-graph for research idea evaluation, owing to the inherent
correlations between research ideas, such as shared common viewpoints. These implicit correlations
cannot be effectively leveraged by prompt-based methods or fine-tuned language models.
(6) The comparison between GraphEval-LP and GraphEval-GNN demonstrates that: 1) deep learning
can enhance the performance of graph-based methods when applied to LLM evaluation tasks; 2) although
the introduction of the GNN improves performance, it also results in increased computational cost.
Therefore, we propose these two implementations to provide options for users with different needs.
7.2 GRAPHEVAL FOR NOVELTY ASSESSMENT.
To evaluate the effectiveness of novelty assessment on the ICLR Papers dataset, we artificially
construct 80 plagiarized ideas for testing. Specifically, we employ three strategies to evenly create
these 80 ideas: 1) We directly copy highly-rated ideas from the dataset; 2) We randomly replace some
viewpoints in highly-rated ideas with viewpoints from other ideas; 3) We substitute some viewpoints
in highly-rated ideas with those of their neighboring nodes based on the similarity of embeddings.
Subsequently, we select 10 of the above ideas and construct negative samples using the method
mentioned in Sec 5, which are then provided to GraphEval for training. We compare the impact of
considering Novelty Assessment on the performance of GNN across four metrics, as shown in Figure
4. We can observe that Novelty Assessment can significantly improve the performance of GraphEval
when detecting plagiarized or derivative ideas.
8 CONCLUSION
In this paper, we propose a novel lightweight graph-based LLM framework, GraphEval, for idea
evaluation, addressing the complexities and subjectivity inherent in this task. Drawing inspiration from
human psychology, we break down complex ideas into simpler viewpoints and model the relationships
between them using viewpoint-graphs. Our framework includes two methods: GraphEval-LP, a
training-free approach utilizing label propagation, and GraphEval-GNN, a deep learning-based
method using Graph Neural Networks. Both methods effectively leverage viewpoint-graphs to predict
idea evaluations, while GraphEval-GNN also incorporates a plagiarism detection mechanism that
ensures fair and objective assessment of novelty.
Through extensive experiments on two datasets, we demonstrate that GraphEval not only achieves
a significant improvement in accuracy and macro F1 score compared to multiple baselines but also op-
erates with a low resource expenditure. Our work pioneers the integration of graph-based approaches
with LLMs for idea evaluation, providing new insights for enhancing LLM-based evaluations with
graph representations.
[Figure 4 plot: grouped bars over accuracy, precision, recall, and F1 score comparing GraphEval without and with novelty assessment.]
REFERENCES
Qwen2 technical report. 2024.
Jinheon Baek, Sujay Kumar Jauhar, Silviu Cucerzan, and Sung Ju Hwang. Researchagent: Iterative
research idea generation over scientific literature with large language models. arXiv preprint
arXiv:2404.07738, 2024.
J Jason Bell, Christian Pescher, Gerard J Tellis, and Johann Füller. Can ai help in ideation? a
theory-based model for idea screening in crowdsourcing contests. Marketing Science, 43(1):54–72,
2024.
Tom B Brown. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.
Salvatore Carta, Alessandro Giuliani, Leonardo Piano, Alessandro Sebastian Podda, Livio Pompianu,
and Sandro Gabriele Tiddia. Iterative zero-shot llm prompting for knowledge graph construction.
arXiv preprint arXiv:2307.01128, 2023.
Rosemary A Cowell, Morgan D Barense, and Patrick S Sadil. A roadmap for understanding memory:
Decomposing cognitive processes into operations and representations. Eneuro, 6(4), 2019.
Jacob Devlin. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv
preprint arXiv:1810.04805, 2018.
P Kingma Diederik. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Katinka Dijkstra, Anita Eerland, Josjan Zijlmans, and Lysanne S Post. Embodied cognition, ab-
stract concepts, and the benefits of new technology for implicit body manipulation. Frontiers in
psychology, 5:757, 2014.
Jiangshu Du, Yibo Wang, Wenting Zhao, Zhongfen Deng, Shuaiqi Liu, Renze Lou, Henry Peng Zou,
Pranav Narayanan Venkit, Nan Zhang, Mukund Srinath, et al. Llms assist nlp researchers: Critique
paper (meta-) reviewing. arXiv preprint arXiv:2406.16253, 2024.
Darren Edge, Ha Trinh, Newman Cheng, Joshua Bradley, Alex Chao, Apurva Mody, Steven Truitt,
and Jonathan Larson. From local to global: A graph rag approach to query-focused summarization,
2024. URL https://arxiv.org/abs/2404.16130.
Federico Errica, Giuseppe Siracusano, Davide Sanvito, and Roberto Bifulco. What did i do
wrong? quantifying llms’ sensitivity and consistency to prompt engineering. arXiv preprint
arXiv:2406.12334, 2024.
Mingqi Gao, Jie Ruan, Renliang Sun, Xunjian Yin, Shiping Yang, and Xiaojun Wan. Human-like
summarization evaluation with chatgpt. arXiv preprint arXiv:2304.02554, 2023.
Akash Ghosh, Arkadeep Acharya, Raghav Jain, Sriparna Saha, Aman Chadha, and Setu Sinha. Clip-
syntel: clip and llm synergy for multimodal question summarization in healthcare. In Proceedings
of the AAAI Conference on Artificial Intelligence, volume 38, pp. 22031–22039, 2024.
Akshay Goel, Almog Gueta, Omry Gilon, Chang Liu, Sofia Erell, Lan Huong Nguyen, Xiaohong
Hao, Bolous Jaber, Shashir Reddy, Rupesh Kartha, et al. Llms accelerate annotation for medical
information extraction. In Machine Learning for Health (ML4H), pp. 82–100. PMLR, 2023.
Ayoub Harnoune, Maryem Rhanoui, Mounia Mikram, Siham Yousfi, Zineb Elkaimbillah, and
Bouchra El Asri. Bert based clinical knowledge extraction for biomedical knowledge graph
construction and analysis. Computer Methods and Programs in Biomedicine Update, 1:100042,
2021.
Justin C Hayes and David JM Kraemer. Grounded understanding of abstract concepts: The case of
stem learning. Cognitive research: principles and implications, 2:1–15, 2017.
Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, and Meng Wang. Lightgcn:
Simplifying and powering graph convolution network for recommendation. In Proceedings of the
43rd International ACM SIGIR conference on research and development in Information Retrieval,
pp. 639–648, 2020.
Yanli Huang. How to represent abstract concepts? from the perspective of conceptual metaphor
theory. J Hum Psychol, 1(2):27–37, 2020.
Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand
Joulin, and Edouard Grave. Unsupervised dense information retrieval with contrastive learning.
arXiv preprint arXiv:2112.09118, 2021.
Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot,
Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier,
Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas
Wang, Timothée Lacroix, and William El Sayed. Mistral 7b, 2023. URL https://arxiv.
org/abs/2310.06825.
Hanlei Jin, Yang Zhang, Dan Meng, Jun Wang, and Jinghua Tan. A comprehensive survey on
process-oriented automatic text summarization with exploration of llm-based methods. arXiv
preprint arXiv:2403.02901, 2024.
Dawulie Jinensibieke, Mieradilijiang Maimaiti, Wentao Xiao, Yuanhang Zheng, and Xiangbo Wang.
How good are llms at relation extraction under low-resource scenario? comprehensive evaluation.
arXiv preprint arXiv:2406.11162, 2024.
Julian Just, Thomas Ströhle, Johann Füller, and Katja Hutter. Ai-based novelty detection in crowd-
sourced idea spaces. Innovation, 26(3):359–386, 2024.
Omid Khatin-Zadeh and Danyal Farsani. The understanding of abstract concepts: a perspective from
distributed models of conceptual representation. Discover Psychology, 2(1):34, 2022.
Markus Knauff and Ann G Wolf. Complex cognition: the science of human reasoning, problem-
solving, and decision-making, 2010.
Litton J Kurisinkel and Nancy F Chen. Llm based multi-document summarization exploiting main-
event biased monotone submodular content extraction. arXiv preprint arXiv:2310.03414, 2023.
LangChain. Langchain graphs. https://python.langchain.com/docs/use_cases/
graph/, 2024.
Baorui Li, Yi Wang, Kesheng Wang, and Jinghui Yang. Application of cnn deep learning in product
design evaluation. In Advanced Manufacturing and Automation VIII 8, pp. 517–526. Springer,
2019.
Xiaonan Li, Kai Lv, Hang Yan, Tianyang Lin, Wei Zhu, Yuan Ni, Guotong Xie, Xiaoling Wang,
and Xipeng Qiu. Unified demonstration retriever for in-context learning. arXiv preprint
arXiv:2305.04320, 2023.
Weixin Liang, Yuhui Zhang, Hancheng Cao, Binglu Wang, Daisy Yi Ding, Xinyu Yang, Kailas
Vodrahalli, Siyu He, Daniel Scott Smith, Yian Yin, et al. Can large language models provide useful
feedback on research papers? a large-scale empirical analysis. NEJM AI, 1(8):AIoa2400196, 2024.
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan
Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step. arXiv preprint
arXiv:2305.20050, 2023.
Jialiang Lin, Jiaxin Song, Zhangping Zhou, Yidong Chen, and Xiaodong Shi. Automated scholarly
paper review: concepts, technologies, and challenges. Information fusion, 98:101830, 2023a.
Sheng-Chieh Lin, Akari Asai, Minghan Li, Barlas Oguz, Jimmy Lin, Yashar Mehdad, Wen-tau Yih,
and Xilun Chen. How to train your dragon: Diverse augmentation towards generalizable dense
retrieval. arXiv preprint arXiv:2302.07452, 2023b.
LlamaIndex. Llamaindex knowledge graph index. https://docs.llamaindex.ai/en/
stable/examples/index_structs/knowledge_graph/KnowledgeGraphDemo.
html, 2024.
Chris Lu, Cong Lu, Robert Tjarko Lange, Jakob Foerster, Jeff Clune, and David Ha. The ai scientist:
Towards fully automated open-ended scientific discovery. arXiv preprint arXiv:2408.06292, 2024.
Zheheng Luo, Qianqian Xie, and Sophia Ananiadou. Chatgpt as a factual inconsistency evaluator for
text summarization. arXiv preprint arXiv:2303.15621, 2023.
Jinjie Mai, Jun Chen, Guocheng Qian, Mohamed Elhoseiny, Bernard Ghanem, et al. Llm as a robotic
brain: Unifying egocentric memory and control. 2023.
Christopher D Manning. Foundations of statistical natural language processing. The MIT Press,
1999.
Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer,
Luke Zettlemoyer, and Hannaneh Hajishirzi. Factscore: Fine-grained atomic evaluation of factual
precision in long form text generation. arXiv preprint arXiv:2305.14251, 2023.
Feng Nan, Ramesh Nallapati, Zhiguo Wang, Cicero Nogueira dos Santos, Henghui Zhu, Dejiao
Zhang, Kathleen McKeown, and Bing Xiang. Entity-level factual consistency of abstractive text
summarization. arXiv preprint arXiv:2102.09130, 2021.
Bryan Perozzi, Bahare Fatemi, Dustin Zelle, Anton Tsitsulin, Mehran Kazemi, Rami Al-Rfou, and
Jonathan Halcrow. Let your graph do the talking: Encoding structured data for llms. arXiv preprint
arXiv:2402.05862, 2024.
Marco Pota, Mirko Ventura, Hamido Fujita, and Massimo Esposito. Multilingual evaluation of
pre-processing for bert-based sentiment analysis of tweets. Expert Systems with Applications, 181:
115119, 2021.
Rukhma Qasim, Waqas Haider Bangyal, Mohammed A Alqarni, and Abdulwahab Ali Almazroi.
A fine-tuned bert-based transfer learning approach for text classification. Journal of healthcare
engineering, 2022(1):3498123, 2022.
Usha Nandini Raghavan, Réka Albert, and Soundar Kumara. Near linear time algorithm to detect
community structures in large-scale networks. Physical Review E—Statistical, Nonlinear, and Soft
Matter Physics, 76(3):036106, 2007.
Lance J Rips, Edward E Smith, and Douglas L Medin. Concepts and categories: Memory, meaning,
and metaphysics. The Oxford handbook of thinking and reasoning, 1:177–209, 2012.
A. Rush. A neural attention model for abstractive sentence summarization. arXiv preprint
arXiv:1509.00685, 2015.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. Distilbert, a distilled version of
bert: smaller, faster, cheaper and lighter. ArXiv, abs/1910.01108, 2019.
Hannah Sansford, Nicholas Richardson, Hermina Petric Maretic, and Juba Nait Saada. Grapheval: A
knowledge-graph based llm hallucination evaluation framework. arXiv preprint arXiv:2407.10793,
2024.
Shubhra Kanti Karmaker Santu, Sanjeev Kumar Sinha, Naman Bansal, Alex Knipper, Souvika
Sarkar, John Salvador, Yash Mahajan, Sri Guttikonda, Mousumi Akter, Matthew Freestone,
et al. Prompting llms to compose meta-review drafts from peer-review narratives of scholarly
manuscripts. arXiv preprint arXiv:2402.15589, 2024.
Wenbo Shang, Xuliang Zhu, and Xin Huang. Path-llm: A shortest-path-based llm learning for unified
graph representation. arXiv preprint arXiv:2408.05456, 2024.
Shreya Shankar, JD Zamfirescu-Pereira, Björn Hartmann, Aditya G Parameswaran, and Ian Arawjo.
Who validates the validators? aligning llm-assisted evaluation of llm outputs with human prefer-
ences. arXiv preprint arXiv:2404.12272, 2024.
Guangsi Shi, Xiaofeng Deng, Linhao Luo, Lijuan Xia, Lei Bao, Bei Ye, Fei Du, Shirui Pan, and
Yuxiao Li. Llm-powered explanations: Unraveling recommendations through subgraph reasoning.
arXiv preprint arXiv:2406.15859, 2024.
Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion:
Language agents with verbal reinforcement learning. Advances in Neural Information Processing
Systems, 36, 2024.
Chenglei Si, Diyi Yang, and Tatsunori Hashimoto. Can llms generate novel research ideas? a
large-scale human study with 100+ nlp researchers. arXiv preprint arXiv:2409.04109, 2024.
Dominik Siemon. Let the computer evaluate your idea: evaluation apprehension in human-computer
collaboration. Behaviour & Information Technology, 42(5):459–477, 2023.
John Stamper, Ruiwei Xiao, and Xinying Hou. Enhancing llm-based feedback: Insights from
intelligent tutoring systems and the learning sciences. In International Conference on Artificial
Intelligence in Education, pp. 32–43. Springer, 2024.
Lina Sun and Sitan Li. Cnn-based evaluation method of academic innovation effect of american
research universities. In 2021 IEEE International Conference on Industrial Application of Artificial
Intelligence (IAAI), pp. 355–360. IEEE, 2021.
Aye Thida. Bert-based dialogue evaluation methods with ruber framework. In Advances in Artifi-
cial Intelligence: Selected Papers from the Annual Conference of Japanese Society of Artificial
Intelligence (JSAI 2020), volume 1357, pp. 133. Springer Nature, 2021.
Milena Trajanoska, Riste Stojanov, and Dimitar Trajanov. Enhancing knowledge graph construction
using large language models. arXiv preprint arXiv:2305.04676, 2023.
Thanyalak Ubonsiri. AI-generated evaluation: The influence of ChatGPT evaluation on individuals'
decisions in the idea evaluation phase. PhD thesis, Leopold-Franzens-Universität Innsbruck, 2024.
Xinru Wang, Hannah Kim, Sajjadur Rahman, Kushan Mitra, and Zhengjie Miao. Human-llm
collaborative annotation through effective verification of llm labels. In Proceedings of the CHI
Conference on Human Factors in Computing Systems, pp. 1–21, 2024.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdh-
ery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models.
arXiv preprint arXiv:2203.11171, 2022.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny
Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in
neural information processing systems, 35:24824–24837, 2022.
Kangda Wei, Aayush Gautam, and Ruihong Huang. Are llms good annotators for discourse-level
event relation extraction? arXiv preprint arXiv:2407.19568, 2024.
Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Weinberger. Sim-
plifying graph convolutional networks. In International conference on machine learning, pp.
6861–6871. PMLR, 2019.
Yi Xu, Bo Xue, Shuqian Sheng, Cheng Deng, Jiaxin Ding, Zanwei Shen, Luoyi Fu, Xinbing
Wang, and Chenghu Zhou. Good idea or not, representation of llm could tell. arXiv preprint
arXiv:2409.13712, 2024.
Rui Yang, Boming Yang, Sixun Ouyang, Tianwei She, Aosong Feng, Yuang Jiang, Freddy Lecue,
Jinghui Lu, and Irene Li. Graphusion: Leveraging large language models for scientific knowledge
graph fusion and construction in nlp education, 2024. URL https://arxiv.org/abs/
2407.10794.
Jia-Yu Yao, Kun-Peng Ning, Zhen-Hui Liu, Mu-Nan Ning, and Li Yuan. Llm lies: Hallucinations are
not bugs, but features as adversarial examples. arXiv preprint arXiv:2310.01469, 2023a.
Liang Yao, Jiazhen Peng, Chengsheng Mao, and Yuan Luo. Exploring large language models for
knowledge graph completion. arXiv preprint arXiv:2308.13916, 2023b.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan.
Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural
Information Processing Systems, 36, 2024.
Weizhe Yuan, Pengfei Liu, and Graham Neubig. Can we automate scientific reviewing? Journal of
Artificial Intelligence Research, 75:171–212, 2022.
Collin Zhang, John X Morris, and Vitaly Shmatikov. Extracting prompts by inverting llm outputs.
arXiv preprint arXiv:2405.15012, 2024a.
Yizhuo Zhang, Heng Wang, Shangbin Feng, Zhaoxuan Tan, Xiaochuang Han, Tianxing He, and
Yulia Tsvetkov. Can llm graph reasoning generalize beyond pattern memorization? arXiv preprint
arXiv:2406.15992, 2024b.
Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao,
Yu Zhang, Yulong Chen, et al. Siren’s song in the ai ocean: a survey on hallucination in large
language models. arXiv preprint arXiv:2309.01219, 2023.
Zhi-Wu Zhang, Xiao-Yuan Jing, and Tie-Jian Wang. Label propagation based semi-supervised
learning for software defect prediction. Automated Software Engineering, 24:47–69, 2017.
Shen Zheng, Jie Huang, and Kevin Chen-Chuan Chang. Why does chatgpt fall short in providing
truthful answers? arXiv preprint arXiv:2304.10513, 2023.
Yuqi Zhu, Xiaohan Wang, Jing Chen, Shuofei Qiao, Yixin Ou, Yunzhi Yao, Shumin Deng, Hua-
jun Chen, and Ningyu Zhang. Llms for knowledge graph construction and reasoning: Recent
capabilities and future opportunities. World Wide Web, 27(5):58, 2024.
A A SIMPLE EXAMPLE OF VIEWPOINT EXTRACTION
Here, we present a simple example of viewpoint extraction in Figure 5. For a given research idea i,
we employ a prompted LLM Lp to extract a list of viewpoints: [v0, v1, . . . , vn] = Lp(i).
Figure 5: Example of viewpoint extraction from a research idea. This figure illustrates how a prompted
LLM extracts fine-grained viewpoints from a research idea. Each viewpoint represents an independent, evaluable
unit such as an idea, argument, or fact. The viewpoints capture distinct components of the research idea that
contribute to its overall understanding.
B PROMPT USAGE
Here, we present the prompts used in our method and the baselines.
B.1 PROMPTS USED IN GRAPHEVAL
We present the prompts used in LLM-based viewpoint extraction and relation extraction in Table 4
and Table 6, respectively.
B.2 PROMPTS USED IN BASELINES
We present several criteria for evaluating research ideas, along with specific standards for the four
review decisions outlined in the prompts used for the baselines. The idea evaluation criteria and
decision prompt templates can be found in Table 8 and Table 9.
The prompt used in the prompted LLM is presented in Table 10, while the prompt used in the CoT
prompt and CoT-SC is shown in Table 11.
For the ToT prompt, we decompose the problem into eight steps: novelty evaluation step, validity
evaluation step, significance evaluation step, rigorousness evaluation step, clarity evaluation step,
ethical evaluation step, overall score step, and final discussion step. The prompts used for the novelty
evaluation step, validity evaluation step, overall score step, and final discussion step are presented in
Tables 12, 13, 14, 15; the prompts for the remaining steps are similar to those displayed in Tables 12
and 13.
Building on the work of Baek et al. (2024), our Research Agent baseline divides the task into four
steps: problem validation step, method validation step, experiment validation step, and final decision
step, with the corresponding prompts presented in Tables 16, 17, 18, 19.
[Figure 5 graphic: the research idea i (an abstract beginning "State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories, this restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept ...") is fed to the prompted LLM Lp, which returns the viewpoint list [v0, v1, v2, ...], e.g., "State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories.", "The generality and usability of state-of-the-art computer vision systems are limited by being trained to predict a fixed set of predetermined object categories.", and "Additional labeled data is needed to specify visual concepts that are not included in the fixed set of predetermined object categories."]
Table 4: Viewpoint extraction prompt template. Here, for brevity and clarity, we have omitted portions of the
system’s input and the LLM’s answer from the one-shot demonstration.
You are required to act as an AI annotator and extract the Viewpoints embedded in the
sentences of the provided academic paper abstract. Below, you will be given an abstract from
an academic paper. You need to break it down sentence by sentence and extract the Viewpoints
embedded in each sentence. The extracted Viewpoints can be an idea, argument, or fact. Each
sentence may contain one or more Viewpoints to be extracted.
The extracted Viewpoints should be as granular as possible to ensure they cannot be further
broken down.
When extracting Viewpoints from a sentence, pay attention to the context within the abstract.
Replace pronouns with the nouns they represent and complete any omitted sentence compo-
nents to ensure the independence of the Viewpoints is not compromised. This means that each
extracted Viewpoint should not contain pronouns whose referents cannot be found within that
Viewpoint.
Below is an example interaction that can serve as a reference for the format and method of
extracting Viewpoints:
System’s Input:
[The Start of Abstract]
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined
object categories. ...
[The End of Abstract]
Your Answer:
[Sentence 1]
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined
object categories.
[Extracted Viewpoints in Sentence 1]
[State-of-the-art computer vision systems are trained to predict a fixed set of predetermined
object categories.]
[Sentence 2]
...
Table 5: Details of the Datasets. We present the data sizes and label distributions for the datasets
used in our experiments. For the AI Researcher Dataset, due to the scarcity of Accept (Oral) and
Accept (Spotlight) labels, we have combined them into a single label.
Dataset                    Data Size    Reject     Poster     Oral      Spotlight
ICLR Papers (Training)     300          55%        25%        10%       10%
ICLR Papers (Test)         50           64%        24%        8%        4%
AI Researcher Dataset      66           53.03%     27.27%     19.70% (Oral + Spotlight combined)
C DETAILS OF THE DATASETS
In this section, we present the details of the ICLR Papers and AI Researcher Dataset used in our
experiments, as shown in Table 5.
Specifically, we control the label distribution of the training set in the ICLR Papers by increasing
the representation of Accept (Oral) and Accept (Spotlight) papers. This adjustment enables learning-
based methods to effectively capture features of less-represented samples under long-tail distribution
conditions.
For the AI Researcher Dataset (Si et al., 2024), due to the scarcity of Accept (Oral) and Accept
(Spotlight) labels, we have combined them into a single label, thus transforming the task into a
three-class classification problem. Additionally, given the limited data volume in this dataset, we
record the performance metrics of the methods across the entire dataset when testing prompt-based
methods to obtain a more comprehensive evaluation. For testing other methods, we split the dataset
into training and testing sets in an 85%:15% ratio and conduct multiple experiments to average the
results, thereby reducing bias.
Table 6: Relation extraction prompt template. Here, for brevity and clarity, we have omitted portions of the
system’s input and the LLM’s answer from the one-shot demonstration.
You are required to act as an AI annotator. You will be provided with an abstract from an
academic paper along with a set of extracted Viewpoints. These Viewpoints can represent an
idea, argument, or fact.
Your task is to identify pairs of related Viewpoints and provide a suitable logical connector
to describe the relationship between each selected pair. Then, you need to indicate whether
the logical relationship you selected belongs to the “supporting relationship” or “opposing
relationship” category. The “supporting relationship” can describe logical connections such as
continuation, cause-effect, exemplification, etc., while the “opposing relationship” is used to
describe contrast, contradiction, etc.
The format of a Viewpoint Pair is as follows: {[Viewpoint1], [logical connector], [”supporting”
or ”opposing”], [Viewpoint2]}
You need to refer to the given academic paper abstract to determine the relevance between
Viewpoints and the appropriate logical relationship for related Viewpoints based on the context.
You need to list all the Viewpoint Pairs you find.
Below is an example interaction that can serve as a reference for the format and method of
constructing Viewpoint Pairs:
System’s Input:
[The Start of Abstract]
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined
object categories. ...
[The End of Abstract]
[The Start of Extracted Viewpoints]
[State-of-the-art computer vision systems are trained to predict a fixed set of predetermined
object categories.]
...
[The End of Extracted Viewpoints]
Your Answer:
[The Start of Viewpoint Pairs]
{[State-of-the-art computer vision systems are trained to predict a fixed set of predetermined
object categories.], [however], [opposing], [The generality and usability of state-of-the-art
computer vision systems are limited by being trained to predict a fixed set of predetermined
object categories.]}
...
[The End of Viewpoint Pairs]
Table 7: Hyperparameter Settings for Experiments. This table lists the hyperparameters, their
descriptions, and the values used during our experiments.
Parameter Name          Description                                                           Value
temperature t           The temperature coefficient set when calling the LLM                 0.1
intra graph degree k    Degree of each viewpoint-node in viewpoint-subgraph construction     5
                        through embedding similarity
inter graph degree m    Number of viewpoint-nodes each node connects to, from a different    10
                        subgraph, in the viewpoint-graph construction
max iters               Number of iterations in label propagation for GraphEval-LP           5 (ICLR Papers), 2 (AI Researcher)
D HYPERPARAMETER CONFIGURATION
The hyperparameter settings used in our experiments are presented in Table 7.
Table 8: Idea evaluation criteria prompt template. We outline several criteria for assessing research ideas in
the prompts used for the baselines.
Criteria
Novelty
Validity
Significance
Rigorousness
Clarity
Ethical Considerations
Texts
Does it introduce a new problem or perspective that has not been explored
before? Does it introduce new techniques or represent a significant
advancement compared to existing methods? How does it align with or
diverge from current research trends?
Does it include solid theoretical foundations, robust algorithms, and
detailed methodologies to address the research problem? Are the under-
lying principles well-defined and logically consistent?
Consider its potential contribution and impact on the research community
in its specific domain and beyond. How does it compare to existing works
in terms of impact?
Are the research design and methods clearly described and justified?
Is the methodology robust and appropriate for addressing the research
questions? Are the results well-analyzed and interpreted? Do the findings
support the claims made in the paper?
How well do the title and abstract summarize the paper? Are they
clear, concise, and informative? Does the paper effectively convey
its significance and main contributions? Are the title and abstract well-
aligned with each other and accurately represent the core idea and content
of the paper? Is the content well-structured and easy to follow?
Does it adhere to ethical guidelines and responsible research practices?
Are potential negative consequences or biases addressed?
Table 9: Idea evaluation decision prompt template. We present specific standards for the four review
decisions in the prompts used for the baselines.
Decision
Reject
Accept (Poster)
Accept (Oral)
Accept (Spotlight)
Texts
Papers in this category lack sufficient novelty, contain fundamental flaws
in methodology, or fail to present a significant contribution to the field.
For example, a paper that proposes a minor tweak to existing methods
without offering substantial improvement may fall under this category.
These papers offer incremental contributions, demonstrate solid theoret-
ical or experimental work, and may be of interest to a niche audience.
They have clear and understandable results but may not present break-
throughs.
Papers in this category present more significant contributions to the field,
with clear and convincing evidence of effectiveness. The methodology
is robust, and the findings are impactful. These papers are well-executed
and can be of interest to a broader audience.
Papers that represent groundbreaking work or a major advancement in
the field, offering novel insights or techniques with broad applicability
and significant potential impact. These papers stand out in terms of both
innovation and technical quality.
E GENERALIZATION EXPERIMENT
E.1 GENERALIZATION ON LONG FORM TEXT EVALUATION TASK
To validate GraphEval’s capability in text evaluation forms beyond research ideas, we conducted
experiments on a long form text evaluation task (Min et al., 2023). Specifically, we used human-
annotated data from the FActScore dataset, where each entry contains "atomic facts" about celebrities
generated by LLMs, along with assessments from human annotators on whether these "atomic facts"
were supported by the materials provided to the annotators. Based on the "atomic facts" and human
annotations from the training set, our method needed to predict the labels of "atomic facts" in the test
set that were partitioned off. We selected topics such as Ramesses IV, Lanny Flaherty, and Florencia
Table 10: Prompted LLM prompt template. We provide several criteria for assessing research ideas in the
prompt. Additionally, we present specific standards for the four review decisions and include one example for
each as few-shot examples for in-context learning. Moreover, we include the label distribution of the dataset to
help the LLMs understand the frequency of each review decision.
[System Prompt]
You are an AI researcher who is reviewing a paper’s title and abstract that was submitted to a
prestigious ML venue. Be critical and cautious in your decision.
If a paper’s title and abstract are bad or you are unsure, give it bad scores and reject it!
[Instruction]
Please evaluate the paper draft based on the following six dimensions:
{idea evaluation criteria prompt template}
You will classify the paper into one of the following four categories based on the evaluation:
{idea evaluation decision prompt template}
**Note:** The approximate distribution of decisions for papers at this ML venue is as follows:
{label distribution of the dataset}. Please take this decision distribution
into account and make your judgment carefully.
[Examples for Evaluation Standards]
{one example per decision}
[Input]
Here is the paper draft to evaluate: Title – {title}; Abstract – {abstract};
[Output]
You only need to give an overall score (0-100) and select a review decision. No detailed
analysis is required.
The output format should follow these rules:
Overall Score (0-100)= {score}
{one decision from "Reject", "Accept (Poster)", "Accept (Oral)", "Accept (Spotlight)"}
An example of the output:
Overall Score (0-100)= 82
Reject
Bertotti, and divided the training, validation, and test sets in a 7:1:2 ratio. We compared GraphEval
and some applicable baselines on this dataset in Table 20. The experimental results in the table
verify that our approach performs well on the long form text evaluation task, demonstrating good
adaptability to various tasks.
E.2 GENERALIZATION ABILITY ACROSS DIFFERENT TIMES
To explore the temporal generalization performance of GraphEval on the dataset, we selected papers
from before 2022 in the ICLR Papers dataset as the training and validation sets, and papers from
2023 as the test set. We compared the performance of GraphEval with other classic baselines in Table
21. The results in the table validate GraphEval’s temporal generalization ability in the task of idea
evaluation.
F ADDITIONAL ABLATION STUDY
F.1 EFFECTS OF VARIOUS LIGHTWEIGHT GRAPH NEURAL NETWORK ARCHITECTURES
To compare the impact of different lightweight GNN architectures on the performance of GraphEval,
we selected two classic lightweight GNN frameworks, SGC (Wu et al., 2019) and LightGCN (He
et al., 2020), to replace the current heterogeneous graph structure in GraphEval. We named these two
baselines GraphEval-SGC and GraphEval-LightGCN, respectively. We compared these baselines
with GraphEval-GNN on the ICLR Papers dataset, as shown in 22. We observed that the performance
of the lightweight frameworks was inferior to that of GraphEval-GNN, which is due to their sacrifice
of individualized node information in order to optimize memory usage and speed.
F.2 COMPARATIVE IMPACT OF ALTERNATIVE RELATION EXTRACTION METHODS
We proposed a hybrid relation extraction method named Hybrid to compare with our fully similarity-
based approach, GraphEval. Specifically, the hybrid method uses Prompted LLMs mentioned in
Section 3 to connect nodes within viewpoint-subgraphs, while the edges between viewpoint-subgraphs
are still based on similarity. The results of the two relation extraction methods on the ICLR Papers
dataset are presented in Table 23, showing that GraphEval-GNN performs better than Hybrid. This
might be due to the difficulty of ensuring adequate edge density when connecting nodes within
viewpoint-subgraphs using Prompted LLMs. Additionally, this connection method may increase the
likelihood of hallucinations produced by LLMs and increase the token cost of LLMs, thus affecting
the final impact on idea evaluation and the actual expenses.
G SCALABILITY GENERALIZATION
To validate the generalization capability of GraphEval-GNN on large-scale datasets, we conducted
experiments on the ASAP-Review dataset (Yuan et al., 2022). The ASAP-Review dataset is an open
peer review dataset that includes 5,192 ICLR papers from 2017-2020 obtained through OpenReview
and 3,685 NeurIPS papers from 2016-2019 accessed through NeurIPS Proceedings. A detailed
introduction to this dataset, along with its composition, can be found in Section 3.1 and Table 2 of
(Yuan et al., 2022). Similar to the settings described in Section 6 of our paper, we used the abstracts of
all papers in the dataset as inputs and the review decisions of the papers as the predicted labels, which
included Accept (Oral), Accept (Spotlight), Accept (Poster), and Reject. We divided the dataset
into training, validation, and test sets in the proportions of 70%, 10%, and 20%, respectively. It is
important to note that for NeurIPS papers, since only accepted papers are included and no specific
labels such as Oral, Spotlight, or Poster and ratings are provided, we have to assign all paper labels as
Accept (Poster). This assignment introduces little label noise because over 85% of the papers
accepted at the NeurIPS conference are designated as posters. As shown in Table 24, we compared
the performance of GraphEval-GNN with that of Fine-tuned BERT and Prompted LLM on this
dataset. We observed that GraphEval-GNN still maintains the best performance on this large-scale
dataset, with an accuracy 9.8% better than the strongest baseline, Fine-tuned BERT. Furthermore,
although the rare labels of Accept (Oral) and Accept (Spotlight) (less than 4%) make it difficult
for all methods to perform well in terms of macro F1 score, GraphEval-GNN still achieved an 8%
improvement in macro F1 score compared to Fine-tuned BERT. These observations demonstrate the
robust generalization capability of GraphEval-GNN on large-scale datasets.
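A small sketch of the label construction described above (field names are illustrative, not the dataset's actual schema): accepted NeurIPS papers carry no Oral/Spotlight/Poster distinction, so they are all mapped to "Accept (Poster)".

def decision_label(paper: dict) -> str:
    if paper["venue"].startswith("NeurIPS"):
        return "Accept (Poster)"      # only accepted NeurIPS papers appear in the dataset
    return paper["decision"]          # ICLR papers keep their recorded decision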
H ACCURACY EVALUATION OF VIEWPOINTS
In order to evaluate the accuracy of viewpoints generated from ideas, we explore from two perspec-
tives. First, we use a prompt-based approach (Luo et al., 2023; Gao et al., 2023), allowing a large
LLM to assess whether each viewpoint is consistent with the original idea. Specifically, we employ
the LLaMa-3.1 (405b) LLM5, which has shown excellent performance in evaluation tasks, as the
evaluator. Using the prompt from Table 25, we evaluate the consistency between the viewpoint and
the idea, with an output of 1 indicating consistency and 0 indicating inconsistency. We calculate
the proportion of samples judged consistent and average this across all samples to determine the
consistency rate. We finally achieve consistency rates of 99.47% and 99.82% for the ICLR Papers and
AI Researcher datasets, respectively. These rates, very close to 100%, demonstrate the high degree of
consistency between the generated viewpoints and the original ideas as achieved by our method.
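A sketch of the consistency-rate computation: a judge LLM (here a stand-in for the LLaMa-3.1 405B evaluator prompted with Table 25) returns 1 for consistent and 0 for inconsistent, and the rate is the average over all (idea, viewpoint) pairs.

def consistency_rate(pairs, judge) -> float:
    # `judge(idea, viewpoint)` is assumed to return 0 or 1 based on the Table 25 prompt.
    votes = [judge(idea, viewpoint) for idea, viewpoint in pairs]
    return 100.0 * sum(votes) / len(pairs)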
Additionally, we measure the accuracy of viewpoints from an entity-level perspective. Specifically,
we first aggregate the constructed viewpoints and then assess their entity-level accuracy with respect
to the idea using entity-level factual consistency metrics (Nan et al., 2021). We report the results on
the datasets ICLR Papers and AI Researcher in Table 26. From the table, we can observe that the
entity-level Precision, Recall, and F1 Score between the viewpoints and the idea exceed 0.9 on both
datasets, which also validates the accuracy and rationality of our viewpoints.
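A sketch of an entity-level consistency check in the spirit of Nan et al. (2021): extract named entities from the aggregated viewpoints and from the idea, then score their overlap. Using spaCy is an illustrative choice, not necessarily the paper's exact setup.

import spacy

nlp = spacy.load("en_core_web_sm")   # assumes the small English model is installed

def entity_level_scores(viewpoints_text: str, idea_text: str):
    pred = {e.text.lower() for e in nlp(viewpoints_text).ents}   # entities in the viewpoints
    gold = {e.text.lower() for e in nlp(idea_text).ents}         # entities in the idea
    if not pred or not gold:
        return 0.0, 0.0, 0.0
    tp = len(pred & gold)
    precision = tp / len(pred)
    recall = tp / len(gold)
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1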
5https://ai.meta.com/blog/meta-llama-3-1/
Table 11: CoT prompt template. We modify the prompt used for prompted LLM to adopt a CoT format,
guiding it to complete the idea evaluation step by step.
[Instruction]
Please evaluate the paper draft step by step based on the following dimensions. For each step,
carefully think through and evaluate the corresponding dimension, and then provide ratings
for each dimension (1-10). You must give an overall score (0-100) along with the 6 dimension
scores. No detailed analysis is needed, but ensure that your evaluation for each step is based
on logical reasoning.
[Input]
Here is the paper draft to evaluate: Title – {title}; Abstract – {abstract};
[Step 1: Evaluate Novelty]
First, evaluate the novelty of the paper. {text for novelty in the idea
evaluation criteria prompt template.}
Novelty Rating (1-10):
[Step 2: Evaluate Validity]
Next, evaluate the validity of the paper. {text for validity in the idea
evaluation criteria prompt template.}
Validity Rating (1-10):
[Step 3: Evaluate Significance]
Then, evaluate the significance of the paper. {text for significance in the
idea evaluation criteria prompt template.}
Significance Rating (1-10):
[Step 4: Evaluate Rigorousness]
Now, evaluate the rigorousness of the paper. {text for rigorousness in the
idea evaluation criteria prompt template.}
Rigorousness Rating (1-10):
[Step 5: Evaluate Clarity]
Next, evaluate the clarity of the paper. {text for clarity in the idea
evaluation criteria prompt template.}
Clarity Rating (1-10):
[Step 6: Evaluate Ethical Considerations]
Lastly, evaluate the ethical considerations of the paper. {text for ethnic in the
idea evaluation criteria prompt template.}
Ethical Considerations Rating (1-10):
[Step 7: Final Overall Score]
After completing all the dimension evaluations, summarize your assessment and give an
overall score that reflects the paper's general quality and performance across all dimensions.
Overall Score (0-100):
[Step 8: Final Decision]
Based on the overall score and individual ratings, choose the most appropriate review decision.
Carefully consider how the paper performs in each dimension, and select from the following
categories:
{idea evaluation decision prompt template}
Decision:
**Note:** The approximate distribution of decisions for papers at this ML venue is as follows:
{label distribution of the dataset}. Please take this decision distribution
into account and make your judgment carefully.
[Examples for Evaluation Standards]
{one example per decision}
[Output]
The output format should follow these rules:
Novelty Rating (1-10):
Validity Rating (1-10):
Significance Rating (1-10):
Rigorousness Rating (1-10):
Clarity Rating (1-10):
Ethical Considerations Rating (1-10):
Overall Score (0-100):
Decision: {one decision from "Reject", "Accept (Poster)", "Accept (Oral)", "Accept (Spotlight)"}
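A sketch of parsing the structured output requested by this template; the regular expressions mirror the template's field names but are illustrative rather than the paper's actual parser.

import re

DIMENSIONS = ["Novelty", "Validity", "Significance", "Rigorousness",
              "Clarity", "Ethical Considerations"]

def parse_cot_review(text: str) -> dict:
    result = {}
    for dim in DIMENSIONS:
        m = re.search(rf"{dim} Rating \(1-10\):\s*(\d+)", text)
        result[dim] = int(m.group(1)) if m else None
    m = re.search(r"Overall Score \(0-100\)\s*[:=]\s*(\d+)", text)
    result["Overall"] = int(m.group(1)) if m else None
    m = re.search(r"Decision:\s*(.+)", text)
    result["Decision"] = m.group(1).strip() if m else None
    return result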
Table 12: ToT prompt template: Novelty Evaluation Step
[Instruction]
Please evaluate the novelty of the paper draft provided.
{text for novelty in the idea evaluation criteria prompt
template.}
You only need to give a novelty rating (0-10). No detailed analysis is required.
[Input]
Title: {title}
Abstract: {abstract}
[Output]
Please generate a rating for the novelty of this paper (1-10)
An example of the output:
Novelty Rating (1-10): 5
Table 13: ToT prompt template: Validity Evaluation Step
[Instruction]
Please evaluate the validity of the paper draft based on the provided title, abstract, and novelty
rating.
{text for validity in the idea evaluation criteria prompt
template.}
You only need to give a validity rating (0-10). No detailed analysis is required.
[Input]
Title: {title}
Abstract: {abstract}
Novelty Rating (1-10): {novelty rating}
[Output]
Please generate a rating for the validity of this paper (1-10)
An example of the output:
Validity Rating (1-10): 5
Table 14: ToT prompt template: Overall Score Step
[Instruction]
Please evaluate the overall quality of the paper draft based on the provided title, abstract, and
ratings (novelty, validity, significance, rigorousness, clarity, and ethical considerations).
The overall score should reflect the general quality of the paper and how well it performs
across all the evaluation dimensions.
You only need to give an overall score (0-100). No detailed analysis is required.
[Input]
Title: {title}
Abstract: {abstract}
Novelty Rating (1-10): {novelty result}
Validity Rating (1-10): {validity result}
Significance Rating (1-10): {significance result}
Rigorousness Rating (1-10): {rigorousness result}
Clarity Rating (1-10): {clarity result}
Ethical Considerations Rating (1-10): {ethical considerations result}
[Output]
Please generate an overall score for this paper (0-100).
An example of the output:
Overall Score (0-100): 80
Table 15: ToT prompt template: Final Decision Step
[Instruction]
Please determine the final decision for the provided paper draft based on the provided title,
abstract, overall score, and individual ratings (novelty, validity, significance, rigorousness,
clarity, and ethical considerations). The decision should reflect the overall quality of the paper
and how well it performs across all evaluation dimensions. Select the most appropriate option
from the following four categories:
{idea evaluation decision prompt template}
**Note:** The approximate distribution of decisions for papers at this ML venue is as follows:
{label distribution of the dataset}. Please take this decision distribution
into account and make your judgment carefully.
[Examples for Evaluation Standards]
{one example per decision}
[Input]
Title: {title}
Abstract: {abstract}
Novelty Rating (1-10): {novelty result}
Validity Rating (1-10): {validity result}
Significance Rating (1-10): {significance result}
Rigorousness Rating (1-10): {rigorousness result}
Clarity Rating (1-10): {clarity result}
Ethical Considerations Rating (1-10): {ethical considerations result}
Overall Score (0-100): {overall score}
[Output]
Decision: {one decision from ”Reject”, ”Accept (Poster)”, ”Accept (Oral)”, ”Accept (Spot-
light)”}
An example of the output:
Decision: Accept (Poster)
Table 16: Research Agent prompt template: Problem Validation Step. Criteria is presented in Table 10
of the paper by Baek et al. (2024).
[System Message]
You are an AI assistant whose primary goal is to summarize the research problem in an
academic paper based on its title and abstract, and to assess the quality and validity of
the research problem across various dimensions. Your evaluations and feedback will help
researchers refine their research problems, thereby enhancing the impact and scope of their
work.
[User Message]
You will be provided with the title and abstract of an academic paper, and you need to
extract its research problem and the rationale for the research problem. You are required
to evaluate the research problem based on the following dimensions: Clarity, Relevance,
Originality, Feasibility, and Significance, with a focus on whether it is clearly, accurately, and
understandably defined.
The academic paper title and abstract to be evaluated are as follows:
Paper Title: {title}
Paper Abstract: {abstract}
Now, please proceed with a systematic evaluation focusing on Clarity, Relevance, Originality,
Feasibility, and Significance:
- First, carefully read the provided title and abstract, and extract the research problem and its
rationale.
- Next, generate a review and feedback that is constructive, helpful, and concise, focusing on
the research problem’s Clarity, Relevance, Originality, Feasibility, and Significance.
- Finally, rate the problem on a 5-point Likert scale, with 1 being the lowest score. Ensure
that your ratings are discerning and critical to avoid uniformly high scores (4-5) unless fully
justified. The definitions for each evaluation criterion are as follows:
{criteria}
Output:
First, summarize the research problem and its rationale from the provided paper. After
evaluating the content, provide your review, feedback, and ratings in the following format:
Research Problem: {research problem}
Rationale: {research problem rationale}
Review: {review}
Feedback: {feedback}
Rating (1-5): Clarity-{rating} Relevance-{rating} Originality-{rating} Feasibility-{rating}
Significance-{rating}
Table 17: Research Agent prompt template: Method Validation Step. Criteria is presented in Table 10
of the paper by Baek et al. (2024).
[System Message]
You are an AI assistant whose primary goal is to summarize the scientific method used in a
research paper based on its title and abstract, and to evaluate the quality and soundness of the
method across various dimensions. Your feedback will help researchers refine their methods,
thereby enhancing the impact and reach of their work.
[User Message]
You will be provided with the title and abstract of an academic paper. From this, you are
required to summarize its Scientific Method and Scientific Method Rationale. You need to
evaluate the method for its Clarity, Validity, Rigorousness, Innovativeness, and Generalizability,
focusing on whether the method is described clearly, precisely, and understandably, ensuring
that it can be replicated and easily comprehended.
As part of your evaluation, you may refer to the research problem of the paper, which will
help you better understand the context of the method and conduct a more comprehensive
assessment.
The academic paper title and abstract to be evaluated and the research problem are as follows:
Paper Title: {title}
Paper Abstract: {abstract}
Research problem: {research problem}
Rationale: {research problem rationale}
Now, please proceed with the systematic evaluation of the method based on Clarity, Validity,
Rigorousness, Innovativeness, and Generalizability:
- First, carefully read the provided paper title and abstract, keeping in mind the context provided
by the research problem, and summarize the scientific method and its rationale.
- Next, generate a review and feedback that should be constructive, helpful, and concise,
focusing on the method’s Clarity, Validity, Rigorousness, Innovativeness, and Generalizability.
- Finally, provide ratings on a 5-point Likert scale, with 1 being the lowest. Ensure that your
ratings are discerning and critical, avoiding a tendency toward uniformly high scores (4-5)
unless fully justified. The definitions of each evaluation criterion are as follows:
{criteria}
Output:
First, summarize the scientific method and its rationale. After evaluating the content, please
provide your review, feedback, and ratings in the following format:
Scientific Method: {scientific method}
Rationale: {scientific method rationale}
Review: {review}
Feedback: {feedback}
Rating (1-5): Clarity-{rating} Validity-{rating} Rigorousness-{rating} Innovativeness-
{rating} Generalizability-{rating}
Table 18: Research Agent prompt template: Experiment Validation Step. Criteria is presented in Table
10 of the paper by Baek et al. (2024).
[System Message]
You are an AI assistant whose primary goal is to summarize the experimental design in an
academic paper based on its title and abstract and meticulously evaluate the experimental
design across various dimensions. Your evaluations and feedback will help researchers refine
their experimental approaches, thereby amplifying the quality and impact of their scientific
contributions.
[User Message]
You will be provided with the title and abstract of an academic paper. From this, you are
required to summarize its experiment design and experiment design rationale. You are
going to evaluate the experiment design for its Clarity, Validity, Robustness, Feasibility, and
Reproducibility in validating a scientific method to address a research problem, focusing on
how well it is described in a clear, precise, and understandable manner, enabling others to
grasp the setup, procedure, and expected outcomes.
As part of your evaluation, you can refer to the research problem and scientific method, which
will help in understanding the context of the designed experiment for a more comprehensive
assessment.
The academic paper title and abstract to be evaluated, along with the research problem and
scientific method, are as follows:
Paper Title: {title}
Paper Abstract: {abstract}
Research problem: {research problem}
Rationale: {research problem rationale}
Scientific Method: {scientific method}
Rationale: {scientific method rationale}
Now, proceed with your systematic evaluation of Clarity, Validity, Robustness, Feasibility, and
Reproducibility:
- Start by thoroughly reading the provided paper title and abstract, keeping in mind the context
provided by the research problem and scientific method mentioned above. Summarize the
experiment design and its rationale.
- Next, generate a review and feedback that should be constructive, helpful, and concise, focus-
ing on the Clarity, Validity, Robustness, Feasibility, and Reproducibility of the experiment.
- Finally, provide ratings on a 5-point Likert scale, with 1 being the lowest. Ensure that your
evaluation is discerning and critical, avoiding a tendency toward uniformly high scores (4-5)
unless fully justified:
{criteria}
Output:
First, summarize the experiment design and its rationale. After evaluating the content, please
provide your review, feedback, and ratings in the following format:
Experiment Design: {experiment design}
Rationale: {experiment design rationale}
Review: {review}
Feedback: {feedback}
Rating (1-5): Clarity-{rating} Validity-{rating} Robustness-{rating} Feasibility-{rating}
Reproducibility-{rating}
Table 19: Research Agent prompt template: Final Decision Step. Building on the work of Baek et al.
(2024), we further introduce a final decision step that synthesizes the evaluation results from the aforementioned
steps to provide a comprehensive review decision.
You are an AI assistant. You will be provided with the title and abstract of an academic
paper, along with a summary of its research problem, scientific method, and experiment design.
Additionally, you will receive reviews, feedback, and ratings (on a scale of 1-5) for the research
problem, scientific method, and experiment design across various dimensions.
Based on the provided paper title and abstract, as well as the evaluations of its research
problem, scientific method, and experiment design, your task is to assign an overall score
(0-100) to the paper.
You will also classify the paper into one of the following four categories based on the evalua-
tion:
{idea evaluation decision prompt template}
**Note:** The approximate distribution of decisions for papers at this ML venue is as follows:
{label distribution of the dataset}. Please take this decision distribution
into account and make your judgment carefully.
[Examples for Evaluation Standards]
{one example per decision}
[Input]
Paper Title: {title}
Paper Abstract: {abstract}
Research Problem: {research problem}
Research Problem Rationale: {research problem rationale}
Research Problem Review: {research problem review}
Research Problem Feedback: {research problem feedback}
Research Problem Rating: {research problem rating}
Scientific Method: {scientific method}
Scientific Method Rationale: {scientific method rationale}
Scientific Method Review: {scientific method review}
Scientific Method Feedback: {scientific method feedback}
Scientific Method Rating: {scientific method rating}
Experiment Design: {experiment design}
Experiment Design Rationale: {experiment design rationale}
Experiment Design Review: {experiment design review}
Experiment Design Feedback: {experiment design feedback}
Experiment Design Rating: {experiment design rating}
[Output]
You only need to give an overall score (0-100) and select a review decision. No detailed
analysis is required. The output format should follow these rules:
Overall Score (0-100)= {score}
{one decision from ”Reject”, ”Accept (Poster)”, ”Accept (Oral)”, ”Accept (Spotlight)”}
An example of the output:
Overall Score (0-100)= 82
Reject
Table 20: Comparative performance results on the Fact Verification dataset. Bold text denotes the
best results. For all metrics—Accuracy, Macro Precision, Macro Recall, and Macro F1 Score—higher
values indicate more precise predictions.
Model                 Accuracy   Precision   Recall    F1 Score
Prompted LLM (7B)     49.79%     57.19%      52.27%    47.59%
Prompted LLM (72B)    59.52%     63.13%      60.35%    56.33%
Finetuned-Bert        70.27%     69.74%      68.54%    68.64%
GraphEval-LP          82.83%     83.04%      82.40%    83.41%
GraphEval-GNN         85.00%     90.00%      83.00%    84.00%
Table 21: Comparative performance results under the setting of idea evaluation of different
years. Bold text denotes the best results. For all metrics—Accuracy, Macro Precision, Macro Recall,
and Macro F1 Score—higher values indicate more precise predictions.
Model                 Accuracy   Precision   Recall    F1 Score
Prompted LLM (7B)     16.67%     26.12%      18.25%    20.63%
Prompted LLM (72B)    14.29%     32.47%      11.76%    11.25%
Finetuned-Bert        48.41%     36.14%      31.57%    42.46%
GraphEval-LP          48.60%     44.72%      63.20%    52.38%
GraphEval-GNN         76.19%     48.25%      57.38%    51.32%
Table 22: Performance comparison of different lightweight graph models.
Model                  Accuracy   Precision   Recall    F1 Score
GraphEval-SGC          61.0%      23.3%       27.3%     27.7%
GraphEval-LightGCN     54.0%      23.43%      25.05%    26.70%
GraphEval-GNN          76.0%      38.40%      37.30%    44.80%
Table 23: Performance comparison of GraphEval-GNN via two different alternative relation
extraction methods.
Model             Accuracy   Precision   Recall    F1 Score
Hybrid            62.0%      25.08%      27.60%    25.46%
GraphEval-GNN     76.0%      38.40%      37.30%    44.80%
Table 24: Comparative performance results for different models on the ASAP-Review dataset.
Bold text denotes the best results. For all metrics—Accuracy, Macro Precision, Macro Recall, and
Macro F1 Score—higher values indicate more precise predictions.
Model                 Accuracy   Precision   Recall    F1 Score
Prompted LLM (7B)     28.57%     12.83%      22.00%    11.04%
Prompted LLM (72B)    17.86%     3.04%       4.00%     4.00%
Finetuned-Bert        61.17%     30.37%      29.86%    29.81%
GraphEval-GNN         67.02%     33.11%      32.86%    32.20%
Table 25: Prompt template of viewpoint accuracy evaluation.
[Instruction]
Decide if the following Viewpoint, derived from the idea, is consistent with the Idea. Note
that consistency means all information in the viewpoint is fully supported by the idea.
[Input]
Idea: {idea}
Viewpoint: {viewpoint}
[Output]
Explain your reasoning step by step, identifying if each part of the viewpoint aligns with the
idea, then answer: Is the viewpoint consistent with the idea? Answer with only 1 for yes or 0
for no.
Table 26: Performance of entity-level factual consistency metrics for ICLR Papers and AI
Researcher datasets.
Dataset          Precision   Recall    F1 Score
ICLR Papers      0.9339      0.9288    0.9314
AI Researcher    0.9472      0.9004    0.9232
|
JtGPIZpOrz | Multiagent Finetuning of Language Models | [
6,
8,
6
] | Published as a conference paper at ICLR 2025
MULTIAGENT FINETUNING: SELF IMPROVEMENT WITH
DIVERSE REASONING CHAINS
Vighnesh Subramaniam∗
MIT CSAIL
[email protected]
Yilun Du∗
Harvard University
[email protected]
Joshua B. Tenenbaum
MIT CSAIL, BCS, CBMM
[email protected]
Antonio Torralba
MIT CSAIL
[email protected]
Shuang Li†
Stanford University
[email protected]
Igor Mordatch†
UC Berkeley
[email protected]
ABSTRACT
Large language models (LLMs) have achieved remarkable performance in recent
years but are fundamentally limited by the underlying training data. To improve
models beyond the training data, recent works have explored how LLMs can be used
to generate synthetic data for autonomous self-improvement. However, successive
steps of self-improvement can reach a point of diminishing returns. In this work,
we propose a complementary approach towards self-improvement where finetuning
is applied to a multiagent society of language models. A group of language models,
all starting from the same base model, are independently specialized by updating
each one using data generated through multiagent interactions among the models.
By training each model on independent sets of data, we illustrate how this approach
enables specialization across models and diversification over the set of models.
As a result, our overall system is able to preserve diverse reasoning chains and
autonomously improve over many more rounds of fine-tuning than single-agent
self-improvement methods. We quantitatively illustrate the efficacy of the approach
across a wide suite of reasoning tasks.
Project website at https://llm-multiagent-ft.github.io
1
INTRODUCTION
Recent breakthroughs in large language models (LLMs) like GPT-3.5 and GPT-4 have demonstrated
remarkable proficiency in language generation, comprehension, question answering, and transla-
tion (OpenAI, 2023; Touvron et al., 2023). Despite these advancements, LLMs are fundamentally
constrained by the data they are trained on, with existing models already using much of the available
data on the Internet (Brown et al., 2020). To further enhance the performance of LLMs, recent
research on self-improvement, where LLMs generate additional synthetic data on which they are
trained on (Huang et al., 2022; Yu et al., 2023).
One approach to increase the data available to LLMs is to use powerful existing frontier models like
GPT-4 to generate additional supervisory data. However, this approach is limited by the inherent
quality of frontier models, preventing models from becoming better than the frontier of what the
best existing models can accomplish. In addition, such an approach incurs high financial costs
due to inference expenses of such large models and is also often legally prohibited with existing
commercial-grade models.
An alternative approach is to directly leverage existing language models to generate additional
synthetic data for their self-improvement (Zelikman et al., 2022; Bai et al., 2022; Chen et al., 2024b;
Yuan et al., 2024). In such works, language models are used to iteratively collect data that they
are then finetuned on. However, as models are repeatedly trained, performance gains often plateau
relatively quickly as diversity decreases (Figure 1) and the self-improvement loop is often only
∗Equal Contribution, Corresponding authors
†Equal advising
Figure 1: Multiagent finetuning improves reasoning performance over multiple rounds of finetuning. Our
multiagent finetuning procedure enables models to improve across multiple iterations of finetuning. Results
reported on the MATH dataset.
run for two or three rounds (Lu et al., 2023; Song et al., 2024). This limits the applicability of
self-improvement to autonomously improve language models, as models can only be improved a
limited amount above their base performance.
In this paper, we propose a new approach to self-improvement that can help mitigate the issue of
decreased gains of performance after multiple rounds of fine-tuning. Instead of fine-tuning a single
model, our method finetunes a multiagent set of language models from the same base model and
then independently specializes each model to capture parts of a task of interest. Our key insight
is that by finetuning multiple models, we can encourage specialization and diversification across
responses, which can enable consistent performance gains over many rounds of fine-tuning. To
achieve specialization between models, we fine-tune each model repeatedly on independent subsets
of the generated data corresponding to responses from the respective particular model.
Within our multiagent set of models, we propose to specialize models into distinct functionalities
within the output generation procedure. First, we specialize a set of models to be generation agents
that produce a set of initial responses given queries. Since initial responses can often be suboptimal,
especially for challenging reasoning tasks, we further propose to specialize a set of models as critic
agents that evaluate and refine the generations of other models. By using this set of distinct models in
combination through multiagent debate (Du et al., 2023), we are able to construct a robust feedback
loop for generating final responses, with experiments on other multiagent methods in Appendix D.
By training each model on distinct sets of data and roles, our approach fosters specialization across
models and promotes diversification within the society of models. Consequently, our system can au-
tonomously improve over many more rounds of finetuning compared to single-agent self-improvement
methods (Figure 1). We quantitatively demonstrate the effectiveness of our approach across a com-
prehensive suite of reasoning tasks, illustrating significant performance gains, as shown in Table 1. In
our experiments, we illustrate how our proposed method can be directly applied to both open-source
LLMs such as Phi-3, Mistral, and LLaMA-3 as well proprietary LLMs such as GPT-3.5 to substan-
tially improve performance. In addition, the finetuned models can generalize to novel datasets and
outperform the baseline methods trained directly on these new datasets.
Overall, our paper has the following contributions: (1) We propose to leverage multiagent interaction
as an approach to self-improvement with language models. (2) We propose to specialize models with
distinct roles to enable detailed feedback between agents and to improve the final output quality. (3)
We quantitatively verify the applicability of our approach across a wide suite of reasoning tasks on
both open-source and proprietary language models. (4) We demonstrate that the finetuned agents can
generalize across different datasets in a zero-shot manner.
2 MULTIAGENT FINETUNING OF LANGUAGE MODELS
We provide an overview of our approach towards multiagent finetuning of language models, where
we learn a multiagent society of models to accomplish a task. Our method involves two components.
We first use a multiagent debate method to construct a finetuning dataset for training models (though
other multiagent generation methods can also be used, see Appendix Section D). We then introduce
our approach, multiagent finetuning, where we specialize each LLM model by finetuning each model
Figure 2: Overview of Multiagent Finetuning. We first use multiagent debate and majority voting to create
the finetuning datasets (left). These datasets are then used to finetune the generation and critic agents (right).
When finetuning generation models, we use the majority voted result (”correct” output) to select first-round
responses from each agent. We then finetune critic models using responses from the final round based on whether
responses match the majority voted result (mix of ”correct and incorrect” outputs). The finetuned models are
combined through multiagent debate to generate more accurate answers. In this figure, we illustrate a single
finetuning iteration. Applying multiple rounds of finetuning iterations can significantly boost performance.
on its own generated data. An overview of our approach can be seen in Figure 2. We first provide
an introduction of our multiagent debate method in Section 2.1. We then discuss how to fine-tune a
single model on generated data in Section 2.2, and the proposed multiagent finetuning in Section 2.3
and Section 2.4. We then show how to apply finetuned models for inference in Section 2.5.
2.1 MULTIAGENT DEBATE
Multiagent debate (Du et al., 2023) involves a series of N language model agents—either specific
copies or finetuned versions of the same model—each tasked with generating a response to a given
problem. After the initial responses are generated, a debate round is initiated among the agents. In
our paper, we concatenate and summarize the responses from other agents. Each agent is instructed to
construct a new response based on its prior response and the summarized responses from the others.
The final result is determined by majority vote based on the outputs from the last round of debate.
The multiagent debate is illustrated in Figure 2.
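A minimal Python sketch of this debate loop; `agents` is a list of N callables mapping a prompt to a response, and `summarize`, `extract_answer`, and the prompt wording are stand-ins rather than the paper's exact implementation.

from collections import Counter

def multiagent_debate(question, agents, summarize, extract_answer, rounds=2):
    responses = [agent(question) for agent in agents]            # round 1: independent answers
    for _ in range(rounds - 1):                                  # subsequent debate rounds
        new_responses = []
        for i, agent in enumerate(agents):
            others = summarize([r for j, r in enumerate(responses) if j != i])
            prompt = f"{question}\n\nOther agents responded:\n{others}\n\nUpdate your answer."
            new_responses.append(agent(prompt))
        responses = new_responses
    answers = [extract_answer(r) for r in responses]
    return Counter(answers).most_common(1)[0][0]                 # majority vote over final round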
2.2 FINETUNING MODELS ON GENERATED DATA
We start by considering how to use data generated by multiagent debate data to finetune a single
LLM model for self-improvement. Given a set of natural language inputs Dtask = {xi}, we use
a multiagent debate method (Du et al., 2023), specifically a debate with N agents and M rounds,
to generate responses for each input in Dtask. We obtain the final predicted output ˆyi for each xi
through majority voting in the last round of debate. We use this to construct a “ground truth” dataset
of {(xi, ˆyi)}. In the single LLM model setting, we then finetune the model on the set of generated
responses yi which match ˆyi given input xi.
While the final debate results ˆyi are accurate, they are often similar in style and methodology. As a result,
repeatedly capturing a dataset of {(xi, ˆyi)} pairs for multiple rounds of finetuning often leads to a
plateau of self-improvement performance.
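A sketch of this single-model finetuning set: run the debate, take the majority-voted answer, and keep only responses whose parsed answer matches it (all function names here are stand-ins).

def build_single_agent_dataset(inputs, run_debate, extract_answer):
    data = []
    for x in inputs:
        responses, y_hat = run_debate(x)       # agent responses plus the majority-voted answer
        for y in responses:
            if extract_answer(y) == y_hat:     # keep only consensus-consistent responses
                data.append({"prompt": x, "completion": y})
    return data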
2.3 FINETUNING MULTIPLE GENERATION AND CRITIC MODELS
Our goal in multiagent finetuning is to create datasets that construct a set of models representing
different agents that are diverse and accurately solve problems. Instead of building a single dataset to
finetune each model, we propose creating different datasets to finetune different models. A set of
models are trained as generation agents and others as critic agents. The generation models produce
initial responses to input questions. In contrast, the critic models assess the outputs from all generation
agents and then select or generate the most effective responses.

Algorithm 1 Multiagent Finetuning of Language Models
Require: A pretrained LLM A; a set of language inputs Dtask = {x_i}; the number of agents N; the number
of debate rounds M; the number of finetuning iterations L.
1: A^G_1, · · · , A^G_N ← A # Copy the LLM to build N generation agents
2: A^C_1, · · · , A^C_N ← A # Copy the LLM to build N critic agents
3: # Multiple Iterations of Finetuning
4: for l = 1 → L do
5:   # Multiagent Debate
6:   for x in Dtask do # Iterate over the input tasks
7:     for m in M do # M rounds of debate
8:       if m = 0 then
9:         y_{1,1}, · · · , y_{1,N} ← A^G_1(x), · · · , A^G_N(x) # Response of each generation agent
10:      else
11:        x^s_{m,1}, · · · , x^s_{m,N} ← Summarize the responses from other agents in round m − 1
12:        y_{m,1}, · · · , y_{m,N} ← A^C_1(x^s_{m,1}), · · · , A^C_N(x^s_{m,N}) # Response of each critic agent
13:      end if
14:    end for
15:    ˆy ← Majority Voting {y_{M,1}, · · · , y_{M,N}} # Responses of the final round of debate
16:  end for
17:  # Multiagent Finetuning
18:  Initialize datasets for finetuning generation models {D^G_n}, n = 1, ..., N
19:  Initialize datasets for finetuning critic models {D^C_n}, n = 1, ..., N
20:  for n in N do # Iterate over all the agents
21:    for x in Dtask do # Iterate over the input tasks
22:      D^G_n ← D^G_n ∪ {(x, y_{1,n}) | y_{1,n} = ˆy} # Add pairs
23:      D^{C−}_n ← D^{C−}_n ∪ {(x, (y_{1,n}, · · · , y_{M,n})) | y_{1,n} ≠ ˆy, y_{M,n} = ˆy} # Add pairs
24:      D^{C+}_n ← D^{C+}_n ∪ {(x, (y_{1,n}, · · · , y_{M,n})) | y_{1,n} = ˆy, y_{M,n} = ˆy} # Add pairs
25:      D^C_n ← w D^{C−}_n + (1 − w) D^{C+}_n # Combine the datasets
26:    end for
27:    ˆA^G_n ← Finetune(A_n, D^G_n) # Finetune the generation model
28:    ˆA^C_n ← Finetune(A_n, D^C_n) # Finetune the critic model
29:  end for
30:  A^G_1, · · · , A^G_N ← ˆA^G_1, · · · , ˆA^G_N # Generation agents for the next finetuning iteration
31:  A^C_1, · · · , A^C_N ← ˆA^C_1, · · · , ˆA^C_N # Critic agents for the next finetuning iteration
32: end for
Finetuning Generation Models. The role of a generation model is to generate accurate responses to
input questions. Such models should rely on diverse reasoning chains to promote diversity. Generation
agents A^G_n are constructed from the N generation models, which generate a response to the given input
x (we omit i for simplicity). For each agent, we select its outputs y_n that match the final debate results
ˆy and construct input-output pairs (x, y_n). The resulting dataset for agent A^G_n is D^G_n = {(x, y_n)}.
This approach generates a set of finetuning datasets {D^G_1, · · · , D^G_N} across all N agents. Each dataset
contains different outputs, allowing for specialization and diversification of responses. We finetune
each generation model with the corresponding dataset to get N correspondingly finetuned agents
{ˆA^G_1, · · · , ˆA^G_N}.
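A sketch of building the per-agent generation datasets D^G_n: agent n keeps only its own first-round responses whose extracted answers match the majority-voted result (the record fields below are illustrative).

def build_generation_datasets(debate_records, num_agents):
    datasets = [[] for _ in range(num_agents)]
    for rec in debate_records:
        for n in range(num_agents):
            # keep agent n's first-round response only if its answer matches y_hat
            if rec["answers_round1"][n] == rec["y_hat"]:
                datasets[n].append({"prompt": rec["x"],
                                    "completion": rec["responses_round1"][n]})
    return datasets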
Finetuning Critic Models. The role of a critic model is to further provide accurate critiques to
responses from other agents and use these responses to provide an updated answer. Simply finetuning
generation models isn’t sufficient for achieving optimal results, especially for more challenging tasks,
due to the lack of a feedback mechanism on their outputs. Critic agents A^C_n are constructed from
critic models and evaluate the outputs from all generation agents and then select or synthesize the best
responses. This additional step ensures that the system continuously improves and adapts, enhancing
overall performance.
In the multiagent debate setting, each agent's output in the last round of debates is represented as
y_{M,n}, where M denotes the number of debate rounds. We first identify those outputs y_{M,n} that align
with the final debate results ˆy. These consistent outputs, together with the previous responses, are
then used to construct input-output pairs (x, (y_{1,n}, . . . , y_{M,n})) for finetuning the critic models.
To enhance the model's capability to correct incorrect answers generated early in the debate
process, we sample a subset of pairs where y_{1,n} differs from ˆy but y_{M,n} matches ˆy, and build
a dataset D^{C−}_n = {(x, (y_{1,n}, . . . , y_{M,n})) | y_{1,n} ≠ ˆy, y_{M,n} = ˆy}. This indicates that the
answer was successfully corrected by the end of the debates. We also construct another dataset
D^{C+}_n = {(x, (y_{1,n}, . . . , y_{M,n})) | y_{1,n} = ˆy, y_{M,n} = ˆy}, where both y_{1,n} and y_{M,n} match ˆy, demon-
strating the agent's ability to maintain the correct answer throughout the debates. We combine these
two datasets to create a comprehensive finetuning dataset for each critic model to construct updated
critic agents A^C_n:

D^C_n = w D^{C−}_n + (1 − w) D^{C+}_n.    (1)

In the above expression, w is a tunable hyperparameter representing the proportion of data sampled
from the first set, while (1 − w) represents the proportion of data sampled from the second set. This
method generates a series of datasets {D^C_1, · · · , D^C_N} for finetuning the critic models, denoted as
{ˆA^C_1, · · · , ˆA^C_N} after the finetuning process.
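A sketch of assembling D^C_n from the two pools in Equation (1): corrected trajectories (D^{C−}_n) are sampled with proportion w and consistently-correct ones (D^{C+}_n) with proportion 1 − w; the target size argument is an assumption for illustration.

import random

def build_critic_dataset(neg_pool, pos_pool, w=0.5, size=None, seed=0):
    rng = random.Random(seed)
    size = size if size is not None else len(neg_pool) + len(pos_pool)
    n_neg = min(len(neg_pool), int(w * size))       # pairs corrected during the debate
    n_pos = min(len(pos_pool), size - n_neg)        # pairs that stayed correct throughout
    return rng.sample(neg_pool, n_neg) + rng.sample(pos_pool, n_pos)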
2.4 MULTIPLE ITERATIONS OF FINETUNING
The finetuned models are capable of generating responses through multiagent debate. We found
that iterative application of the multiagent finetuning allows for continuous learning and adaptation,
leading to progressively refined and more accurate responses over time. The finetuned generation
agents {ˆA^G_1, · · · , ˆA^G_N} and critic agents {ˆA^C_1, · · · , ˆA^C_N} are used to gather datasets for the next
iteration through multiagent debate. The algorithm for the proposed approach of L iterations of
finetuning is detailed in Algorithm 1. The steps for collecting data for finetuning the generation
models are marked in red, and the finetuning of critic models is shown in blue.
2.5 INFERENCE
At inference time, we have a set of finetuned generation models which represent generation agents
{ˆA^G_1, · · · , ˆA^G_N}, and a set of finetuned critic models which represent critic agents {ˆA^C_1, · · · , ˆA^C_N}. We
conduct a multiagent debate among these agents, where each individual generation agent participates
in the first round of the debate, followed by each individual critic agent in subsequent rounds. Each
agent takes the responses from all other agents and generates a new response in each round of the
debate. We found that summarizing the responses from the other agents helps eliminate redundant
information while retaining the most important details, thereby further improving performance. The
final result is determined by a majority vote based on the responses from the final round of the debate.
We provide pseudocode in Algorithm 2.
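A rough Python sketch of this inference procedure (not the paper's Algorithm 2): finetuned generation agents answer in round one, finetuned critic agents revise in later rounds, and the final answer is the majority vote over the last round; `summarize` and `extract_answer` are stand-ins.

from collections import Counter

def multiagent_inference(question, gen_agents, critic_agents, summarize,
                         extract_answer, rounds=2):
    responses = [g(question) for g in gen_agents]                 # round 1: generation agents
    for _ in range(rounds - 1):                                   # later rounds: critic agents
        revised = []
        for i, critic in enumerate(critic_agents):
            others = summarize([r for j, r in enumerate(responses) if j != i])
            revised.append(critic(f"{question}\n\nOther agents responded:\n{others}\n\nRevise your answer."))
        responses = revised
    return Counter(extract_answer(r) for r in responses).most_common(1)[0][0]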
3 EXPERIMENTS
3.1 LANGUAGE REASONING TASKS
We evaluate our method and baselines on three language reasoning tasks.
Arithmetic. consists of 1,000 generated arithmetic problems in the form a + b · c + d − e · f .
Following the generation procedure in (Du et al., 2023), each variable is assigned a random value up
to a maximum of 30.
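A sketch of this arithmetic task generation: expressions of the form a + b * c + d − e * f with each operand drawn uniformly up to 30 (the question wording is illustrative).

import random

def make_arithmetic_problem(max_value=30, rng=random):
    a, b, c, d, e, f = [rng.randint(0, max_value) for _ in range(6)]
    question = f"What is the result of {a}+{b}*{c}+{d}-{e}*{f}?"
    return question, a + b * c + d - e * f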
Grade School Math (GSM). (Cobbe et al., 2021) consists of math word problems that require
multi-step mathematical reasoning. Each example includes a problem statement, the numerical
answer, and an explanation of the answer.
MATH. Hendrycks et al. (2021) consists of competition-level math problems categorized into five
difficulty levels. For our experiments, we sample problems from the first three levels.
For each dataset, we randomly select 500 examples for finetuning the language model. Additionally,
we select 500 held-out problems for evaluation. We parse the generated answers and evaluate their
correctness by comparing them with the ground truth answers. Accuracy is reported based on how
frequently the model returns the correct answer. We also report the standard error of each accuracy
value to measure the significance of improvement.
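A sketch of the reported metric: accuracy over the held-out problems together with the standard error of a binomial proportion.

import math

def accuracy_with_stderr(correct_flags):
    n = len(correct_flags)                      # e.g., 500 held-out problems
    acc = sum(correct_flags) / n
    return 100 * acc, 100 * math.sqrt(acc * (1 - acc) / n)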
3.2 BASELINES
We compare the proposed method with various baselines. In all multiagent settings, we use three
agents, and for all debate settings, we conduct two rounds of debates to ensure a fair comparison
(additional results with five agents in Appendix Section F).
Base utilizes a single language model to process input and generate responses.
Majority is a multiagent baseline that selects responses based on a majority vote from multiple
agents. If no response secures a majority, one of the potential answers is chosen at random.
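A minimal sketch of this Majority baseline: the most common answer wins, and ties are broken uniformly at random.

import random
from collections import Counter

def majority_vote(answers, seed=None):
    counts = Counter(answers)
    best = max(counts.values())
    winners = [a for a, c in counts.items() if c == best]
    return random.Random(seed).choice(winners)   # random tie-break when no majority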
Debate is a multiagent debate baseline as described in Du et al. (2023). The debate structure is
outlined in Figure 2.
STaR (Zelikman et al., 2022) iteratively finetunes the language agent using a dataset with ground
truth answers for each problem. Initially, the LM generates an answer for each problem, and correct
responses, as verified by the ground truth, are added to the finetuning dataset. For problems answered
incorrectly, the LM is reprompted with a hint that includes the ground truth answer. Problems where
the generated response includes the correct answer are added to the finetuning dataset. The LM is
finetuned on the collected dataset. This iterative process of building the dataset and finetuning is
repeated until the finetuning loss saturates. The final model is then used for evaluation.
Majority FT is a baseline that incorporates both majority voting and finetuning. We prompt the
language agents with each problem and conduct a majority vote on their results. We then compile the
responses from all agents that align with the majority vote, along with the input, to create a finetuning
dataset. The language model is finetuned using this dataset. Finally, we apply majority voting to the
outputs of the finetuned model to determine the final answer.
3.3 QUANTITATIVE RESULTS
We compare baselines and our method, which was finetuned for only a single iteration (L = 1), in
Table 1. The accuracy and standard error for each dataset are reported. We use three distinct base
language models: three open-source models, Phi-3 4B (Abdin et al., 2024), Mistral 7B (Jiang et al.,
2023), and LLaMA-3 8B (Dubey et al., 2024); and one proprietary model, GPT-3.5 (OpenAI, 2022).
Our method outperforms all the baselines. Although “STaR” utilizes ground truth labels for data
selection and undergoes multiple iterations of finetuning, it still performs worse than our method,
which uses only a single finetuning iteration without access to ground truth. The “Majority”,
“Debate” and “STaR” methods outperform the “Base” model, demonstrating that majority voting,
multiagent debate, and finetuning all contribute to improved performance. “Majority FT” enhances
the performance of “Majority” by incorporating a finetuning procedure. Our method is only finetuned
on 500 examples and still shows significant improvement over the baselines, particularly on more
challenging datasets such as GSM and MATH. Additional evaluations on a larger set of problems and
datasets can be found in Appendix Section H.
3.4 MULTIPLE ITERATIONS OF FINETUNING
To verify the effectiveness of multiple iterations of finetuning, as described in Section 2.4, we present
the performance of our proposed method “Multiagent FT (Ours)” over five iterations of finetuning in
Figure 1. We tested this method on two open-source models, Mistral and Phi-3, using the MATH
dataset. The results demonstrate that “Multiagent FT (Ours)” consistently improves performance
over time. For example, the accuracy of Phi-3 increased from 58.8% to 66.0%, and the accuracy of
Mistral improved from 22.5% to 28.2%. Our method with five rounds of finetuning is 12.6% and
9.31% more accurate than the best baseline listed in Table 1 using Phi-3 and Mistral, respectively.
In contrast, finetuning a single agent (”Single-agent FT”), as described in Section 2.2, shows that
performance saturates after one iteration of finetuning and starts dropping afterward, indicating
potential overfitting to generated responses. This issue occurs when the single model, after several
finetuning cycles, becomes fixated on a small range of responses, which limits its diversity and
LLM                              Methods       Arithmetic      GSM             MATH
GPT-3.5 (OpenAI, 2022)           Base          81.99 ± 0.99    75.60 ± 1.36    46.83 ± 2.25
                                 Majority      94.40 ± 1.03    81.20 ± 1.24    51.40 ± 2.23
                                 Debate        98.21 ± 0.54    83.30 ± 1.18    55.73 ± 2.21
                                 STaR          98.38 ± 0.57    83.60 ± 1.17    53.00 ± 2.23
                                 Majority FT   98.40 ± 0.56    83.70 ± 1.17    53.40 ± 2.23
                                 Ours          99.62 ± 0.28    85.60 ± 1.11    60.60 ± 2.18
Phi-3 (Abdin et al., 2024)       Base          88.30 ± 1.44    81.20 ± 1.74    45.60 ± 2.10
                                 Majority      91.80 ± 1.23    81.80 ± 1.72    47.20 ± 1.82
                                 Debate        96.20 ± 0.86    84.40 ± 1.58    53.40 ± 2.28
                                 STaR          94.80 ± 0.99    85.80 ± 1.21    51.80 ± 2.06
                                 Majority FT   93.80 ± 1.08    82.20 ± 1.71    48.60 ± 2.16
                                 Ours          99.40 ± 0.34    88.60 ± 1.42    58.80 ± 2.22
Mistral (Jiang et al., 2023)     Base          10.80 ± 0.51    35.60 ± 1.92    16.60 ± 1.21
                                 Majority      14.80 ± 1.17    41.80 ± 0.88    16.80 ± 1.25
                                 Debate        19.60 ± 1.12    52.60 ± 1.26    18.20 ± 1.37
                                 STaR          17.40 ± 0.97    45.50 ± 1.54    17.84 ± 1.23
                                 Majority FT   16.40 ± 0.73    44.60 ± 1.65    18.91 ± 1.37
                                 Ours          22.60 ± 0.97    58.40 ± 2.11    22.50 ± 1.87
LLaMA-3 (Dubey et al., 2024)     Base          43.20 ± 2.22    75.00 ± 1.94    46.80 ± 2.23
                                 Majority      45.80 ± 2.23    76.40 ± 1.90    47.20 ± 2.23
                                 Debate        48.40 ± 2.24    78.40 ± 1.44    51.60 ± 2.23
                                 Majority FT   49.20 ± 2.24    77.20 ± 1.87    52.20 ± 2.23
                                 Ours          52.00 ± 2.24    88.60 ± 1.77    57.40 ± 2.21
Table 1: Quantitative results of the proposed method and baselines. Our method outperforms the baselines
across all datasets, as indicated by accuracy (%) ± standard error. The highest values are highlighted in
red, and the second-highest values are highlighted in blue. All results are reported over 500 fixed evaluation
problems, except GSM results for GPT-3.5, which are reported over 1000 fixed evaluation problems (to construct
nonoverlapping confidence bars).
Figure 3: Diversity is preserved and can improve across iterations of finetuning. We measure the response
diversity of our method and the single-agent finetuning method on the MATH dataset using two diversity
measures. The diversity of our method remains consistent over finetuning iterations for one metric and improves
for another metric, whereas the diversity of the single-agent method drops significantly.
prevents further enhancement. However, finetuning multiple generation and critic agents using our
proposed method increases diversity and consistently improves performance.
4 ANALYSIS
In this section, we aim to answer the following questions: 1) How important is the proposed multiagent
finetuning procedure? 2) Will it increase response diversity? 3) Can the finetuned agent generalize to
other datasets in a zero-shot setting?
4.1 ABLATION STUDIES
We examine each component of the proposed method, as shown in Table 2. Multiagent FT (Ours)
refers to our proposed method with a single round of finetuning, L = 1.
Multiagent FT w/o summary removes the summarization step from the multiagent debate. Instead
of summarizing, the responses from other agents are directly concatenated and presented to each
agent. Summarization helps by eliminating redundant information and retaining the most critical
points; therefore, omitting the summarization step can negatively impact performance.
LLM                              Ablations                     Arithmetic      GSM             MATH
GPT-3.5 (OpenAI, 2022)           Multiagent FT (Ours)          99.62 ± 0.28    85.60 ± 1.67    60.60 ± 2.18
                                 Multiagent FT w/o summary     99.20 ± 0.40    82.20 ± 1.72    51.70 ± 2.24
                                 Multiagent FT w/o critic      99.20 ± 0.40    83.80 ± 1.65    50.80 ± 2.24
                                 Single-agent FT               99.00 ± 0.45    83.60 ± 1.66    56.80 ± 2.21
                                 Single-agent FT w/o debate    87.20 ± 1.49    75.00 ± 1.93    48.89 ± 2.23
Phi-3 (Abdin et al., 2024)       Multiagent FT (Ours)          99.40 ± 0.34    88.60 ± 1.42    58.80 ± 2.22
                                 Multiagent FT w/o summary     98.80 ± 0.51    84.40 ± 1.68    55.00 ± 2.09
                                 Multiagent FT w/o critic      98.20 ± 0.62    86.00 ± 1.58    56.60 ± 2.22
                                 Single-agent FT               97.40 ± 0.71    86.80 ± 1.51    56.80 ± 2.21
                                 Single-agent FT w/o debate    92.20 ± 1.20    83.60 ± 1.66    50.20 ± 2.24
Mistral (Jiang et al., 2023)     Multiagent FT (Ours)          22.60 ± 1.87    58.40 ± 2.11    22.50 ± 1.87
                                 Multiagent FT w/o summary     21.80 ± 1.84    56.00 ± 1.56    20.20 ± 1.55
                                 Multiagent FT w/o critic      21.00 ± 1.82    54.80 ± 1.60    19.01 ± 1.59
                                 Single-agent FT               21.20 ± 1.83    55.00 ± 2.22    19.21 ± 1.69
                                 Single-agent FT w/o debate    17.71 ± 1.70    51.20 ± 2.24    17.22 ± 1.54
LLaMA-3 (Dubey et al., 2024)     Multiagent FT (Ours)          52.00 ± 2.24    88.60 ± 1.77    57.40 ± 2.21
                                 Multiagent FT w/o summary     50.40 ± 2.24    83.20 ± 1.67    51.60 ± 2.23
                                 Multiagent FT w/o critic      48.60 ± 2.24    82.20 ± 1.70    50.50 ± 2.23
                                 Single-agent FT               48.00 ± 2.23    84.40 ± 1.62    52.40 ± 2.23
                                 Single-agent FT w/o debate    44.00 ± 2.22    81.60 ± 1.73    48.80 ± 2.24
Table 2: Ablation results. We examine each component of the proposed method and found that summarization,
the combination of critic and generation agents, multiagent finetuning, and multiagent debate all contribute to
performance improvement. The accuracy (%) ± standard error is reported.
Multiagent FT w/o critic: The critic agents evaluate the outputs from all generation agents and select
or synthesize the best responses. Removing the critic agents and only finetuning the N generation
agents could hurt performance, as the critic agents play a crucial role in refining the final output.
Single-agent FT involves finetuning only a single LLM as covered in Section 2.2 and using it as
an agent in multiagent debate. This approach can easily lead to model collapse, where the agent
generates similar responses after finetuning, thereby reducing diversity and hurting performance.
Therefore, multiagent finetuning is necessary to maintain high performance in reasoning tasks.
Single-agent FT w/o Debate further eliminates the debate procedure, with the finetuned LLM
generating responses directly. As shown in Du et al. (2023), multiagent debate can significantly
boost performance, so removing it could lead to a performance drop.
These results indicate that summarization, the combination of critic and generation agents, multiagent
finetuning, and multiagent debate all contribute to performance improvement. Our proposed method
integrates these components into a single, unified framework, leveraging their combined benefits.
4.2 AGENT RESPONSE DIVERSITY
By finetuning multiple agents with distinct roles, our approach enables us to obtain more diverse
responses across rounds of finetuning compared to a single agent. Figure 3 illustrates the diversity
of generations from our method and single-agent across rounds of finetuning using two metrics of
diversity. We cover one metric of diversity, negative log-likelihood, here and cover the other in
Section C.4.
In our first diversity metric, we aim to characterize specialization by tracking the likelihood of
responses of other agents using likelihood calculations of a specific agent. If we are increasing
diversity, then the log-likelihood of responses from other agents will decrease across iterations of
finetuning. The reasoning used by other agents would be considered less common for the specific
agent, indicating a divergence in responses. If accuracy increases while likelihood of responses from
other agents decreases, this indicates increased specialization.
We evaluate the negative log-likelihood (NLL) of responses from other critic agents using another
held-out critic agent and plot this over iterations of finetuning. We do the same with Single-Agent FT,
using responses from other agents and evaluate likelihood using a held-out agent. Larger NLL values
indicate that the model has assigned low likelihood to a sequence and lower NLL values indicate that
the model has assigned higher likelihood to a sequence. We measure this over iterations of finetuning
for our method as well as Single-Agent FT.
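A sketch of this NLL probe, assuming a Hugging Face causal LM plays the held-out agent: a higher mean NLL on the other agents' responses means those responses are less typical for that agent, i.e., the agents have diverged.

import torch

@torch.no_grad()
def mean_nll(model, tokenizer, texts, device="cuda"):
    model.eval()
    losses = []
    for text in texts:
        ids = tokenizer(text, return_tensors="pt").input_ids.to(device)
        loss = model(ids, labels=ids).loss      # mean token-level cross-entropy (NLL)
        losses.append(loss.item())
    return sum(losses) / len(losses)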
We compute the diversity across all test exam-
ples and present the results in Figure 3. For the
“Single-agent FT”, all agents are the same fine-
tuned language models, and M = 1. We notice
that NLL increases across iterations of finetun-
ing for our method, meaning that responses from
other critic agents are more diverse accord-
ing to our held-out critic agent. Moreover, our
responses are more diverse than using Single-
Agent FT. This aligns with our previous ob-
servation that diverse responses can mitigate
model collapse and prevent the model from over-
fitting to the finetuning data, leading to better
performance. We also include another metric,
embedding dissimilarity, as a further compar-
ison, finding that responses from our method
preserve diversity, whereas diversity drops
significantly with Single-agent FT. We provide
additional metrics for evaluating diversity in gen-
erations in Appendix Section C, and similarly find that multiagent finetuning preserves the final
diversity of generations.
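A sketch of the embedding-dissimilarity measure: one minus the average pairwise cosine similarity between agents' responses; the sentence-embedding model named here is an illustrative choice, not necessarily the one used in the paper.

from itertools import combinations
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

def embedding_dissimilarity(responses, model_name="all-MiniLM-L6-v2"):
    emb = SentenceTransformer(model_name).encode(responses)
    sims = [cosine_similarity(emb[i:i + 1], emb[j:j + 1])[0, 0]
            for i, j in combinations(range(len(responses)), 2)]
    return 1.0 - sum(sims) / len(sims)    # higher value = more dissimilar responses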
Figure 4: Relationship between accuracy and diver-
sity. We visualize the relationship between embedding
dissimilarity and MATH accuracy across rounds of fine-
tuning. Our multiagent finetuning preserves diversity
across rounds of finetuning while improving accuracy.
We further analyze the relationship between diversity and performance and show this in Figure 4.
Specifically, we see that an improvement in the diversity of responses correlates positively with an
improvement in performance across rounds of finetuning across both Phi-3 and Mistral models. This
suggests that in general, increasing the diversity of responses can be helpful for improvement over
multiple rounds of fine-tuning. In Appendix Section E, we compare our approach with additional
approaches to improve the diversity of samples such as increasing the temperature at which samples
are generated, or using unique IDs in a single language to simulate a single agent. We find that our
approach outperforms these baselines.
4.3 ZERO-SHOT GENERALIZATION
We investigate the zero-shot
generalization of the proposed
method across different datasets.
Specifically, we use generation
and critic agents finetuned on
the MATH dataset and evaluate
their performance on 100 ran-
domly sampled examples from
the GSM dataset. We compare
our method to baseline methods
used in Table 1. These base-
lines are trained on the GSM
dataset. All methods use Mistral
as the base LLM. Figure 5 shows
that our method surpasses all the
baseline methods, even though it has never seen data from the GSM dataset, indicating the strong
zero-shot generalization capability of the proposed method. We show further results in Section H.3.
Figure 5: Zero-shot generalization of the proposed method. Our
method demonstrates zero-shot generalization capabilities. When trained
on the MATH dataset, it can effectively generalize to the GSM dataset. It
outperforms all the baselines that are trained on the GSM dataset.
5 RELATED WORK
Finetuning methods generally fall into three categories: human-in-the-loop, distillation, and
self-improvement. We briefly cover the first two categories and spend more time on self-improvement,
which is more related to our work.
Finetuning with human-in-the-loop and distillation: Several human-in-the-loop methods have
been introduced for finetuning, most noticeably RLHF (Christiano et al., 2017; Sun et al., 2023) and
DPO (Rafailov et al., 2024). These methods have been employed as part of instruction tuning (Zhang
et al., 2023), improving the generated responses to instructions. Several instruction tuning datasets
(Wang et al., 2022; Longpre et al., 2023) have been released publicly, some with human-generated
responses. Other datasets have been constructed using the second category of finetuning methods,
distillation, whereby a much larger, highly performant LLM is used to generate data that finetunes a
smaller LLM (Peng et al., 2023; Liu et al., 2024). These approaches have been used to build recent
LLMs such as Alpaca (Taori et al., 2023) or Vicuna (Chiang et al., 2023) using responses generated
by GPT-3.5 or GPT-4 (Achiam et al., 2023).
Finetuning with self-improvement: Self-improvement methods (Huang et al., 2022; Yu et al., 2023;
Yuan et al., 2024; Hsieh et al., 2023; Welleck et al., 2022) improve the performance of LLMs through finetuning. Common approaches include iterated learning (Anthony et al., 2017; Vani et al.; Polu et al., 2022; Xu et al., 2024), where solutions/methods discovered by optimization on prior data are used to uncover further solutions or, in this context, provide additional finetuning data. Some of
the main papers we use for comparison finetune using bootstrapping through rationale generation
(Zelikman et al., 2022; Lee et al., 2024; Pang et al., 2024; Zhang et al., 2024; Lu et al., 2023) or
use self-play/self-training methods through reinforcement learning (Chen et al., 2024b; Yuan et al.,
2024; Chen et al., 2024a). Most methods find that using self-generated rationales leads to significant
improvement when finetuning. However, these works and many others rely on access to ground-truth answers. Overall, existing works often show a plateauing effect with limited boosts in improvement
after several rounds of fine-tuning. Our work proposes to use multiagent interaction as an approach
to get more consistent gains after multiple rounds of finetuning.
Multiagent Interaction: Our work builds on the combination of finetuning and multiagent interaction
systems. We primarily incorporate multiagent debate (Du et al., 2023; Chan et al., 2023; Pham et al.,
2023; Liang et al., 2023) due to its success in improving factuality and reasoning in LLMs in a
variety of tasks at inference time. Several other multiagent interactions could also serve as the basis
for this paper. Tree-of-thought (Yao et al., 2024; Long, 2023) and graph-of-thought (Besta et al.,
2024) represent two common multiagent interaction systems over LLMs that incorporate responses
across multiple LLMs, which improves reasoning. Other works (Wu et al., 2023) have designed
more flexible systems for multiagent conversations built on structured program synthesis rather than
natural language. Prior work has also focused on incorporating multiagent interaction into domains
beyond factuality and reasoning such as strategy and communication games (Abdelnabi et al., 2023).
More recently, this has led to multiagent interaction systems over LLMs that are optimized via equilibrium search for factuality and reasoning tasks (Jacob et al., 2023b;a). In contrast to existing
works, our work aims to use multiagent interaction as a method to finetune language models.
6 CONCLUSION AND LIMITATIONS
Limitations. In comparison to existing works in single model finetuning, multiagent finetuning is
substantially more expensive at both training and inference time as multiple copies of a model need
to be trained and run. To run multiagent finetuning experiments on open source models, we used
either four H100 GPUs or four A100 GPUs. Models took between 120GB - 240GB of GPU memory
and inference took between 12-24 hours across multiple GPUs. To improve the training time of
multiagent models, it may be interesting to instead share weights across different instances of models.
To improve inference time in multiagent models, we can directly distill the debate procedure into a
single model or use quantization as part of finetuning.
Conclusion. In this paper, we have introduced a novel multiagent finetuning framework that sig-
nificantly enhances the performance and diversity of language models. By employing a society of
agents with distinct roles, our method effectively improves the feedback mechanism and overall
output quality, mitigating the limitations inherent in single-agent self-improvement methods. This
system allows for autonomous self-improvement through iterative finetuning, leading to substantial
performance gains across a comprehensive suite of reasoning tasks. Importantly, our approach is
versatile and can be applied to both open-source and proprietary LLMs, ensuring broad utility and
impact. Additionally, our method can be integrated with other finetuning approaches that incorporate human feedback, such as RLHF or DPO, which we leave to future work. This work opens
new avenues for future research in language model enhancement and sets a foundation for further
advancements in the field.
ACKNOWLEDGMENTS
This work was supported by the Center for Brains, Minds, and Machines, NSF STC award CCF-
1231216, the NSF award 2124052, the MIT CSAIL Machine Learning Applications Initiative, the
MIT-IBM Watson AI Lab, the CBMM-Siemens Graduate Fellowship, the DARPA Mathematics
for the DIscovery of ALgorithms and Architectures (DIAL) program, the DARPA Knowledge
Management at Scale and Speed (KMASS) program, the DARPA Machine Common Sense (MCS)
program, the Air Force Office of Scientific Research (AFOSR) under award number FA9550-21-1-
0399, the United States Air Force Research Laboratory and the Department of the Air Force Artificial
Intelligence Accelerator under Cooperative Agreement Number FA8750-19-2-1000, and ONR MURI
grant N00014-22-1-2740. The views and conclusions contained in this document are those of the
authors and should not be interpreted as representing the official policies, either expressed or implied,
of the Department of the Air Force or the U.S. Government. The U.S. Government is authorized to
reproduce and distribute reprints for Government purposes notwithstanding any copyright notation
herein.
REFERENCES
Sahar Abdelnabi, Amr Gomaa, Sarath Sivaprasad, Lea Schönherr, and Mario Fritz. Llm-deliberation:
Evaluating llms with interactive multi-agent negotiation games. arXiv preprint arXiv:2309.17234,
2023. 10
Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany
Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat Behl, et al. Phi-3 technical report:
A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219, 2024.
6, 7, 8
Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman,
Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report.
arXiv preprint arXiv:2303.08774, 2023. 10
Thomas Anthony, Zheng Tian, and David Barber. Thinking fast and slow with deep learning and tree
search. Advances in neural information processing systems, 30, 2017. 10
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna
Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness
from ai feedback. arXiv preprint arXiv:2212.08073, 2022. 1
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas Gianinazzi,
Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, et al. Graph of thoughts:
Solving elaborate problems with large language models. In Proceedings of the AAAI Conference
on Artificial Intelligence, volume 38, pp. 17682–17690, 2024. 10
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are
few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. 1
Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, and
Zhiyuan Liu. Chateval: Towards better llm-based evaluators through multi-agent debate. arXiv
preprint arXiv:2308.07201, 2023. 10
Zhipeng Chen, Kun Zhou, Wayne Xin Zhao, Junchen Wan, Fuzheng Zhang, Di Zhang, and Ji-Rong
Wen. Improving large language models via fine-grained reinforcement learning with minimum
editing constraint. arXiv preprint arXiv:2401.06081, 2024a. 10
Zixiang Chen, Yihe Deng, Huizhuo Yuan, Kaixuan Ji, and Quanquan Gu. Self-play fine-tuning
converts weak language models to strong language models. arXiv preprint arXiv:2401.01335,
2024b. 1, 10
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng,
Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. Vicuna: An open-source chatbot
impressing gpt-4 with 90%* chatgpt quality. See https://vicuna.lmsys.org (accessed 14 April
2023), 2(3):6, 2023. 10
Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep
reinforcement learning from human preferences. Advances in neural information processing
systems, 30, 2017. 9
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve
math word problems. arXiv preprint arXiv:2110.14168, 2021. 5
Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. Improving factual-
ity and reasoning in language models through multiagent debate. arXiv preprint arXiv:2305.14325,
2023. 2, 3, 5, 6, 8, 10, 15, 20
Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha
Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models.
arXiv preprint arXiv:2407.21783, 2024. 6, 7, 8, 20, 21, 22
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,
and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv
preprint arXiv:2103.03874, 2021. 5
Cheng-Yu Hsieh, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner,
Ranjay Krishna, Chen-Yu Lee, and Tomas Pfister. Distilling step-by-step! outperforming larger
language models with less training data and smaller model sizes. arXiv preprint arXiv:2305.02301,
2023. 10
Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han.
Large language models can self-improve. arXiv preprint arXiv:2210.11610, 2022. 1, 10
Athul Paul Jacob, Gabriele Farina, and Jacob Andreas. Regularized conventions: Equilibrium
computation as a model of pragmatic reasoning. arXiv preprint arXiv:2311.09712, 2023a. 10
Athul Paul Jacob, Yikang Shen, Gabriele Farina, and Jacob Andreas. The consensus game: Language
model generation via equilibrium search. arXiv preprint arXiv:2310.09139, 2023b. 10
Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot,
Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al.
Mistral 7b. arXiv preprint arXiv:2310.06825, 2023. 6, 7, 8
Nicholas Lee, Thanakul Wattanawong, Sehoon Kim, Karttikeya Mangalam, Sheng Shen, Gopala
Anumanchipali, Michael W Mahoney, Kurt Keutzer, and Amir Gholami. Llm2llm: Boosting llms
with novel iterative data enhancement. arXiv preprint arXiv:2403.15042, 2024. 10
Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu,
and Shuming Shi. Encouraging divergent thinking in large language models through multi-agent
debate. arXiv preprint arXiv:2305.19118, 2023. 10
Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in
neural information processing systems, 36, 2024. 10
Jieyi Long. Large language model guided tree-of-thought. arXiv preprint arXiv:2305.08291, 2023.
10
Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V
Le, Barret Zoph, Jason Wei, et al. The flan collection: Designing data and methods for effective
instruction tuning. In International Conference on Machine Learning, pp. 22631–22648. PMLR,
2023. 10
Jianqiao Lu, Wanjun Zhong, Wenyong Huang, Yufei Wang, Fei Mi, Baojun Wang, Weichao Wang,
Lifeng Shang, and Qun Liu. Self: Language-driven self-evolution for large language model. arXiv
preprint arXiv:2310.00533, 2023. 2, 10
OpenAI. Chatgpt: Optimizing language models for dialogue, December 2022. URL https://openai.com/blog/chatgpt/. 6, 7, 8
OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. 1
Richard Yuanzhe Pang, Weizhe Yuan, Kyunghyun Cho, He He, Sainbayar Sukhbaatar, and Jason
Weston. Iterative reasoning preference optimization. arXiv preprint arXiv:2404.19733, 2024. 10
Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. Instruction tuning with
gpt-4. arXiv preprint arXiv:2304.03277, 2023. 10
Chau Pham, Boyi Liu, Yingxiang Yang, Zhengyu Chen, Tianyi Liu, Jianbo Yuan, Bryan A Plum-
mer, Zhaoran Wang, and Hongxia Yang. Let models speak ciphers: Multiagent debate through
embeddings. arXiv preprint arXiv:2310.06272, 2023. 10
Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin, and Ilya
Sutskever. Formal mathematics statement curriculum learning. arXiv preprint arXiv:2202.01344,
2022. 10
Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea
Finn. Direct preference optimization: Your language model is secretly a reward model. Advances
in Neural Information Processing Systems, 36, 2024. 10
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi
Zhou, Wei Li, Peter J Liu, et al. Exploring the limits of transfer learning with a unified text-to-text
transformer. J. Mach. Learn. Res., 21(140):1–67, 2020. 19
Yuda Song, Hanlin Zhang, Carson Eisenach, Sham Kakade, Dean Foster, and Udaya Ghai. Mind
the gap: Examining the self-improvement capabilities of large language models. arXiv preprint
arXiv:2412.02674, 2024. 2
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan,
Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, et al. Aligning large multimodal models with
factually augmented rlhf. arXiv preprint arXiv:2309.14525, 2023. 9
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy
Liang, and Tatsunori B Hashimoto. Alpaca: A strong, replicable instruction-following model.
Stanford Center for Research on Foundation Models. https://crfm.stanford.edu/2023/03/13/alpaca.html, 3(6):7, 2023. 10
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation
and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023. 1
A Vani, M Schwarzer, Y Lu, E Dhekane, and A Courville. Iterated learning for emergent systematicity in VQA. arXiv preprint arXiv:2105.01119, 2021. 10
Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei,
Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, et al.
Super-naturalinstructions: Generalization via declarative instructions on 1600+ nlp tasks. arXiv
preprint arXiv:2204.07705, 2022. 10
Sean Welleck, Ximing Lu, Peter West, Faeze Brahman, Tianxiao Shen, Daniel Khashabi, and Yejin
Choi. Generating sequences by learning to self-correct. arXiv preprint arXiv:2211.00053, 2022.
10
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Shaokun Zhang, Erkang Zhu, Beibin Li,
Li Jiang, Xiaoyun Zhang, and Chi Wang. Autogen: Enabling next-gen llm applications via
multi-agent conversation framework. arXiv preprint arXiv:2308.08155, 2023. 10
Xiaohan Xu, Ming Li, Chongyang Tao, Tao Shen, Reynold Cheng, Jinyang Li, Can Xu, Dacheng Tao,
and Tianyi Zhou. A survey on knowledge distillation of large language models. arXiv preprint
arXiv:2402.13116, 2024. 10
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan.
Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural
Information Processing Systems, 36, 2024. 10
Xiao Yu, Baolin Peng, Michel Galley, Jianfeng Gao, and Zhou Yu. Teaching language models to
self-improve through interactive demonstrations. arXiv preprint arXiv:2310.13522, 2023. 1, 10
Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Sainbayar Sukhbaatar, Jing Xu, and Jason
Weston. Self-rewarding language models. arXiv preprint arXiv:2401.10020, 2024. 1, 10
Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah D Goodman. Star: Self-taught reasoner bootstrap-
ping reasoning with reasoning. In Proceedings of the 36th International Conference on Neural
Information Processing Systems, pp. 15476–15488, 2022. 1, 6, 10
Shengyu Zhang, Linfeng Dong, Xiaoya Li, Sen Zhang, Xiaofei Sun, Shuhe Wang, Jiwei Li, Runyi
Hu, Tianwei Zhang, Fei Wu, et al. Instruction tuning for large language models: A survey. arXiv
preprint arXiv:2308.10792, 2023. 10
Yunxiang Zhang, Muhammad Khalifa, Lajanugen Logeswaran, Jaekyeom Kim, Moontae Lee,
Honglak Lee, and Lu Wang. Small language models need strong verifiers to self-correct rea-
soning. arXiv preprint arXiv:2404.17140, 2024. 10
A APPENDIX SUMMARY
We add additional details for our methods and experiments as well as additional results to provide
more evidence of improvements with multiagent finetuning. In Section B, we provide additional
details on summarization, inference and training details using multiagent finetuning with debate.
In Section C, we cover additional metrics for measuring diversity in agent responses based on (1) consensus, (2) KL-divergence, and (3) likelihood. These metrics show that diversity is maintained or increases while accuracy increases over rounds of finetuning. In Section D, we introduce a cooperative
approach for composing agent responses rather than a competitive approach through multiagent
debate. We apply multiagent finetuning with the cooperative approach to analyze whether our method
is agnostic to the approach style. We find similarly strong improvements when our method is applied to
a cooperative approach. In Section E, we include an additional baseline based on Single Agent FT
where we increase the sampling temperature applied across all agents. This is a proxy for increasing
diversity that is complementary to our method. We find that multiagent finetuning significantly
outperforms methods that modify temperature to artificially induce diversity. In Section F, we add an
additional experiment where we apply multiagent finetuning to responses across 5 agents instead of
3. We see significant improvements in performance when using additional agents. In Section G, we
present a simple mathematical model illustrating how multiagent finetuning can improve diversity.
Finally, in Section H, we present additional evaluations of multiagent finetuning across a wide suite
of datasets.
B METHODOLOGY DETAILS
B.1 SUMMARIZATION DETAILS
As done in Du et al. (2023), we incorporate summarization into the multiagent debate procedure. In
summarization, we have an LLM agent take responses from other agents as input and summarize the
answers to the responses. During round m of debate, we introduce a summarization agent A^S_n, which takes responses from the other N − 1 agents in the last round, (y^{m−1}_1, · · · , y^{m−1}_{n−1}, y^{m−1}_{n+1}, · · · , y^{m−1}_N), and generates a summary of the responses, x^s_{m,n}. This summary is sent to the critic agent A^C_n to generate a new response.
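As a rough sketch of this step (the instruction wording below is an assumption, not the paper's exact prompt), the summarization input can be assembled by concatenating the other agents' previous-round responses:

def build_summarization_prompt(other_responses):
    """Collect the other N-1 agents' previous-round responses for the summarization agent."""
    parts = ["Here are solutions from the other agents:"]
    for i, resp in enumerate(other_responses, start=1):
        parts.append("Agent {} response:\n{}".format(i, resp))
    # Illustrative instruction; the actual prompt may differ.
    parts.append("Please summarize the key reasoning steps and final answers above.")
    return "\n\n".join(parts)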
B.2 INFERENCE DETAILS
The pseudocode of our method for inference is shown below.
Algorithm 2 Inference
Require: A set of finetuned generation agents { ˆA^G_1, · · · , ˆA^G_N }; a set of finetuned critic agents { ˆA^C_1, · · · , ˆA^C_N }; a test set of language inputs and ground-truth responses D_task = {x_i, y_i}; the number of agents N; the number of debate rounds M.
1: success ← 0
2: for x, y in D_task do  # Iterate over the input tasks
3:    for m in M do  # M rounds of debate
4:        if m = 0 then
5:            y_{1,1}, · · · , y_{1,N} ← ˆA^G_1(x), · · · , ˆA^G_N(x)  # Response of each generation agent
6:        else
7:            x^s_{m,1}, · · · , x^s_{m,N} ← Summarize the responses from other generator agents
8:            y_{m,1}, · · · , y_{m,N} ← ˆA^C_1(x^s_{m,1}), · · · , ˆA^C_N(x^s_{m,N})  # Response of each critic agent
9:        end if
10:   end for
11:   ˆy ← Majority Voting {y_{M,1}, · · · , y_{M,N}}  # Responses of the final round of debate
12:   success ← success + I(ˆy = y)
13: end for
14: Accuracy ← success / |D|
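The control flow of Algorithm 2 can be sketched in Python as follows; the summarize and extract_final_answer helpers are illustrative placeholders, and the agents are assumed to be callables wrapping the finetuned generation and critic models.

from collections import Counter

def summarize(other_responses):
    # Placeholder: in the paper, a dedicated summarization agent produces this text.
    return "Other agents' solutions:\n" + "\n---\n".join(other_responses)

def extract_final_answer(response):
    # Placeholder parser: assume the final line of a response holds the answer.
    return response.strip().splitlines()[-1]

def debate_inference(question, gen_agents, critic_agents, n_rounds=2):
    """Run M rounds of debate and majority-vote over the final-round answers."""
    responses = [agent(question) for agent in gen_agents]   # round 0: generation agents
    for _ in range(1, n_rounds):
        summaries = [summarize([r for j, r in enumerate(responses) if j != i])
                     for i in range(len(responses))]
        responses = [critic(question + "\n" + s)            # later rounds: critic agents
                     for critic, s in zip(critic_agents, summaries)]
    answers = [extract_final_answer(r) for r in responses]
    return Counter(answers).most_common(1)[0][0]            # majority vote

Accuracy is then the fraction of test questions for which the majority-voted answer matches the ground truth.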
B.3 EXPERIMENTAL DETAILS
For all open-source models, we perform finetuning using a total of eight 40GB A100 GPUs and four
80GB H100 GPUs. The evaluation of individual inference times for multi-agent finetuning with
open-source models took approximately 30 to 36 hours.
Phi-3 We ran our results using Phi-3-Mini-128K-Instruct which has 4 billion tunable
parameters. We finetune the entire model end-to-end (no LoRA or memory adaptation) on two 40GB
A100 GPUs or one 80GB H100 GPU and run a total of two epochs of finetuning for generation agents
and one epoch of finetuning for critic agents. We use a batch size of 1 and a learning rate of 5e−6
for generation agents and 5e−7 for critic agents. When applying multiple iterations of finetuning,
we use a learning rate of 5e−7, and a weight decay of 1e−3 across both generation and critic agents.
Models are finetuned with a fixed training set of 500 randomly selected questions (where we do
not provide answer annotations for the questions) and then evaluated on a separate test set of 500
randomly selected questions.
Mistral We ran our results using Mistral-7B-Instruct-v0.2, which has 7 billion tunable
parameters. We finetune the entire model end-to-end (no LoRA or memory adaptation) on four 40GB
A100 GPUs or two 80GB H100 GPUs and run a total of one epoch of finetuning. We use a batch size
of 1 and a learning rate of 5e−7 for generation agents and 5e−7 for critic agents and a weight decay
of 1e−2. When applying multiple iterations of finetuning, we use a learning rate of 5e−7 across both
generation and critic agents. Models are finetuned with a fixed training set of 500 randomly selected
questions (where we do not provide answer annotations for the questions) and then evaluated on a
separate test set of 500 randomly selected questions.
LLaMA-3 We ran our results using Meta-Llama-3-8B-Instruct, which has 8 billion tunable param-
eters. We finetune the entire model end-to-end (no LoRA or memory adaptation) on three 80GB
H100 GPUs and run a total of two epochs of finetuning. We use a batch size of 1 and a learning
rate of 5e−7 for generation agents and 2e−7 for critic agents. When applying multiple iterations of
finetuning, we use a learning rate of 5e−7 across both generation and critic agents as well as a weight
decay of 1e−2. Models are finetuned with a fixed training set of 500 randomly selected questions
(where we do not provide answer annotations for the questions) and then evaluated on a separate test
set of 500 randomly selected questions.
GPT-3.5 We ran our results on the gpt-3.5-turbo-0613 model. We use the finetuning API
and run a total of two epochs of finetuning, using a batch size of 1 and a learning rate multiplier of
1. Models are finetuned with a fixed training set of 500 randomly selected questions (where we do
not provide answer annotations for the questions) and then evaluated on a separate test set of 500
randomly selected questions.
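For reference, the open-source settings above correspond to a fairly standard end-to-end supervised finetuning loop; the sketch below mirrors the Mistral hyperparameters, while the trainer choice, Hub identifier, and the pre-tokenized train_dataset argument are assumptions rather than the exact training code.

from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

def finetune_generation_agent(train_dataset, output_dir="generation_agent_1"):
    """End-to-end SFT of one generation agent; `train_dataset` is the agent's
    tokenized self-generated finetuning split (500 questions, no LoRA)."""
    model_name = "mistralai/Mistral-7B-Instruct-v0.2"   # Hub id is an assumption
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token       # required by the collator
    model = AutoModelForCausalLM.from_pretrained(model_name)
    args = TrainingArguments(
        output_dir=output_dir,
        num_train_epochs=1,              # one epoch of finetuning for Mistral
        per_device_train_batch_size=1,   # batch size of 1
        learning_rate=5e-7,              # generation-agent learning rate
        weight_decay=1e-2,
    )
    trainer = Trainer(model=model, args=args, train_dataset=train_dataset,
                      data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
    trainer.train()
    return trainer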
C DIVERSITY METRICS
We cover different metrics for measuring diversity for both Phi-3 and Mistral to provide an overview
of the diversity of our method in comparison to Single Agent FT.
C.1 CONSENSUS
We further analyze the diversity of responses from our method to show that diversity is preserved.
Rather than using text embeddings, we further measure the consensus among agents as a more
interpretable alternative. This is measured as the proportion of agents that have the same final answer
in a given round of debate. We take an average of this proportion across all 500 problems used
for evaluation. To obtain the mean consensus of our single agent finetuning baseline, we prompt
the single-agent finetuned model 3 times, take a majority vote over generated answers, and find the
proportion of agents that had a generated answer that was the majority vote. In order to convert this
to diversity, we take the difference of the mean consensus value from 1, which represents the average fraction of agents with a different response from the consensus answer.
We measure diversity as the complement of consensus. Specifically, we consider the agent responses in
the final round of debate {yM,1, · · · , yM,N } that match the majority-voted final response ˆy. The
Figure 6: Consensus: Response diversity across finetuning iterations. We measure the response diversity
based on agent consensus of our method and the single-agent finetuning method on the MATH dataset. The
diversity of our method remains consistent over finetuning iterations, whereas the diversity of the single-agent
method drops significantly.
Figure 7: KL-Divergence: Response diversity across finetuning iterations. We measure diversity based on
the KL-divergence between the probabilities of the output tokens between agents. Similar to our likelihood
measurement, we find that diversity is preserved across rounds of finetuning.
consensus is computed as the percentage of responses in {y_{M,1}, · · · , y_{M,N}} that match ˆy:
Consensus = (1/N) Σ_{n=1}^{N} I(y_{M,n} = ˆy),
where I is the indicator function. Diversity is then given by Diversity = 1 − Consensus.
We show results in Figure 6. As seen with our prior metric, embedding dissimilarity, we can preserve
diversity based on the responses given by the agents, rather than based on the embeddings of a
language model.
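A small sketch of this consensus-based diversity for a single problem (answer parsing is abstracted away):

from collections import Counter

def consensus_diversity(final_answers):
    """`final_answers`: the N final-round answers, one per agent, for one problem.
    Returns 1 - consensus, the fraction of agents that disagree with the majority answer."""
    _, majority_count = Counter(final_answers).most_common(1)[0]
    consensus = majority_count / len(final_answers)
    return 1.0 - consensus

The reported diversity is this quantity averaged over the 500 evaluation problems.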
C.2 KL-DIVERGENCE
We next measure diversity by computing KL divergence between the probability distributions com-
puted based on the final answers from different agents. We estimate the probability distribution of
each agent’s response using the likelihoods from Gemma-2 (2B). For each test example, we compute
the KL divergence between the responses of any two agents and then average the values from all
pairs of agents to determine the overall KL divergence.
We see results in Figure 7. Specifically, we see that diversity is preserved using our method whereby
KL-divergence is consistently higher than the single-agent finetuning baseline.
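Once each agent's response has been mapped to a probability distribution over a shared support (e.g., estimated from Gemma-2 likelihoods as described above), the averaged pairwise KL divergence can be computed as in the sketch below; the distribution-estimation step itself is deliberately left abstract.

import itertools
import numpy as np

def avg_pairwise_kl(agent_dists, eps=1e-12):
    """`agent_dists`: one normalized 1-D numpy array per agent, all over the same support.
    Returns the mean KL(P_i || P_j) over all ordered pairs of distinct agents."""
    kls = []
    for p, q in itertools.permutations(agent_dists, 2):
        p = np.clip(p, eps, None)
        q = np.clip(q, eps, None)
        kls.append(float(np.sum(p * np.log(p / q))))
    return sum(kls) / len(kls)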
Figure 8: KL Diversity between finetuned and unfinetuned LLM. We measure the KL-divergence between
likelihoods of responses from finetuned agents and base LLM agents for single-agent finetuning and genera-
tion/critic agents from multiagent finetuning. Likelihoods are calculated using Gemma-2 (2B). We find that
our method diverges from the base LLM probabilities; furthermore, critic agents show greater divergence in their responses, and our method has better diversity metrics than single-agent FT.
Figure 9: Embedding Dissimilarity: Response diversity across finetuning iterations. We measure the
response diversity based on the embedding dissimilarity between the responses of different agents, where
embeddings are computed using the T5-3B encoder. We notice that similar to likelihood measurement, we find
that diversity is preserved across rounds of finetuning.
C.3 KL-DIVERGENCE ACROSS MODELS
We further analyze diversity by computing the KL-divergence between the response likelihoods of generation and critic agents and those of the base LLM across iterations of finetuning.
We measure the KL-divergence between each agent's responses and responses from the base LLM for 500 MATH examples. We average the KL-divergence across all examples for each iteration of finetuning.
We apply this measure to agents formed through Single Agent-FT and to generation and critic agents
formed through our method. For Single-Agent FT, we find the KL divergence for each finetuned
agent and average the KL-divergence across all examples and all agents per iteration of finetuning.
For our method, we separate generation and critic agents and find the average KL-divergence for
both. We measure likelihoods using Gemma-2 (2B), similar to Figure 7.
We show results in Figure 8. We see that critic agents generally have higher KL-divergences from
the base LLM and both critic and generation agents have higher KL-divergences across iterations of
finetuning.
C.4 EMBEDDING DISSIMILARITY
Finally, we analyze diversity by measuring the embedding dissimilarity between responses of different
agents.
Specifically, we consider agent responses in the final round of debate {yM,1, · · · , yM,N } that match
the majority-voted final response ˆy. For each response, we obtain pretrained
LLM
Methods
Arithmetic
GSM
MATH
GPT-3.5
Cooperative (Base)
Cooperative (FT)
96.60 ± 0.83
98.80 ± 0.39
81.80 ± 1.73
84.00 ± 1.64
53.60 ± 2.23
56.40 ± 2.21
Table 3: Cooperative Finetuning. Our method supports fine-tuning in cooperative settings, where agents work
together (e.g., 3 agents, 2 rounds).
Figure 10: Inducing diversity through increasing temperature. We introduce an additional baseline where we
apply the Single-Agent FT baseline with a temperature of 2. By increasing the sampling temperature, we allow
the model to generate more diverse responses. We observe that our method out-performs higher temperature
settings, which demonstrates that temperature does not increase diversity in a way that is useful for accuracy.
contextual word embeddings from a held-out language model, in this case the T5-3B encoder model
(Raffel et al., 2020).
We feed each agent response to the T5 encoder model to obtain word embeddings and extract the
embedding associated with the classification token [CLS]. As done in prior work, we use this
embedding as a representation of the sequence. We compare the similarity of the agent responses
using cosine similarity of the [CLS] embeddings. Since cosine similarity measures similarity, to
obtain a metric for diversity, we take the complement of cosine similarity by subtracting the value
from 1.
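A rough sketch of this embedding-based dissimilarity with the T5-3B encoder is given below; for simplicity it mean-pools the encoder states into one vector per response, which is an implementation assumption rather than the exact pooling used here.

import itertools
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, T5EncoderModel

tok = AutoTokenizer.from_pretrained("t5-3b")
encoder = T5EncoderModel.from_pretrained("t5-3b")
encoder.eval()

def embed(text):
    """One vector per response (mean-pooled T5 encoder states; an assumption)."""
    enc = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**enc).last_hidden_state   # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

def embedding_dissimilarity(responses):
    """Average (1 - cosine similarity) over all pairs of agent responses."""
    embs = [embed(r) for r in responses]
    scores = [1.0 - F.cosine_similarity(a, b, dim=0).item()
              for a, b in itertools.combinations(embs, 2)]
    return sum(scores) / len(scores)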
D COOPERATIVE FINETUNING
In this paper, our method mainly builds on a competitive approach for composing agent responses
with multiagent debate. Our approach for multiagent finetuning can be applied to both the competitive
setting, where critic agents provide feedback to generator agents, and cooperative settings, where
agents work together in a “mixture of experts” style to generate answers. Instead of prompting agents
to critique responses from other agents, in the second round of conversation, we prompt agents to
cooperate with other agents. We ask each agent to generate a new response by merging their own
response with the responses of other agents, using the prompt “Can you derive a new solution by
combining your solution with the solutions of other agents?”. Under this cooperative setting, the
proposed multi-agent finetuning improves the performance, as demonstrated by Cooperative (FT)
outperforming Cooperative (Base).
We show results in Table 3. More specifically, we see that we can finetune with a cooperative method
with multiagent finetuning and achieve similar improvements in performance. This demonstrates that
our method can be applied to other multiagent prompt settings as a general finetuning method for
LLMs.
E ADDITIONAL COMPARISONS
We compare our approach to two additional approaches to improve the diversity of reasoning chains.
LLM
Methods
Arithmetic
GSM
MATH
GPT-3.5
Phi-3
Debate
Majority FT
Ours
Debate
Majority FT
(Ours)
99.40 ± 0.34
99.60 ± 0.28
100.00 ± 0.00
97.40 ± 0.71
95.80 ± 0.90
99.80 ± 0.20
85.40 ± 1.58
86.20 ± 1.54
88.20 ± 1.44
86.00 ± 1.55
84.80 ± 1.61
89.40 ± 1.38
58.20 ± 2.22
59.00 ± 2.19
62.80 ± 2.16
55.20 ± 2.22
53.20 ± 2.23
60.40 ± 2.19
Table 4: More agents of debate. With 5 agents and 2 rounds of debate, our methods still outperform the
baselines and show better results than the 3 agents and 2 rounds of debate results presented in Table 1 of the
main paper.
LLM
LLaMA-3 (Dubey et al., 2024)
Methods
MATH
46.80 ± 2.23
Base
51.60 ± 2.23
Debate
Unique ID 50.80 ± 2.24
57.40 ± 2.21
Ours
Table 5: Unique ID vs Multiagent Finetuning. We introduce an additional comparison to multiagent finetuning
where we feed a unique ID token to each agent, corresponding to a generation or critic agent. We find that this is
not comparable to improvements on multiagent finetuning.
E.1 MODULATING TEMPERATURES
We first consider inducing diverse responses from LLM agents by increasing the temperature of
generation. We add an additional baseline where we vary the temperature of agents finetuned using
Single Agent-FT. Higher temperature values may be a proxy for more diverse responses. We show
results over rounds of finetuning in Figure 10.
We see that our method surpasses the performance of this baseline. This is likely because higher
temperature values can reduce accuracy due to increased variability of samples. Our method preserves
diversity of responses while increasing accuracy using a more carefully designed finetuning method.
E.2 UNIQUE ID FOR AGENTS
We next consider an additional comparison to multiagent finetuning that can preserve diversity
while reducing the cost of finetuning. The method involves using a unique identifier as part of
the prompt fed to each agent. We feed each generation agent an ID given by GEN1, GEN2, etc.
Similarly, each critic agent is given an ID CRIT1, CRIT2, etc. Additionally, we provide a short
description to the agent, explaining what the ID refers to. For generation agents, we state that
the agent is tasked with creating a solution. For critic agents, we state that the agent is tasked
with evaluating and improving responses. The ID is presented to the agent at the beginning of
each prompt, marked by the string Agent ID: GEN1 (This is a generation agent
tasked with creating a solution.) as an example of the ID fed to generation agent 1.
We compare the unique ID approach on the same 500 MATH examples reported in Table 1. Results
are shown in Table 5. We find that multiagent finetuning performs significantly better and that using
unique IDs is fairly similar to debate. This demonstrates that the mechanisms for generating solutions and critiquing them are unlocked via finetuning.
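For reference, the ID-prefixed prompts of this baseline can be built as in the sketch below; the ID strings follow the description above, while the remaining wording is an assumption.

def id_prompt(question, agent_index, role="generation"):
    """Prefix the question with a role-specific agent ID, as in the unique-ID baseline."""
    if role == "generation":
        prefix = ("Agent ID: GEN{} (This is a generation agent tasked with "
                  "creating a solution.)".format(agent_index))
    else:
        prefix = ("Agent ID: CRIT{} (This is a critic agent tasked with "
                  "evaluating and improving responses.)".format(agent_index))
    return prefix + "\n" + question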
F ADDITIONAL AGENTS IN DEBATE
In Table 4, we show the influence of additional agents with finetuning. We use 5 agents and 2 rounds of debate. We find that additional agents improve results over 3 agents and 2 rounds of debate, as noted in prior work (Du et al., 2023). This also implies that our method will scale with a larger number of finetuned agents.
LLM
LLaMA-3 (Dubey et al., 2024)
Methods
MATH
24.40 ± 1.92
Base
25.20 ± 1.94
Majority
29.80 ± 2.05
Debate
Majority FT 28.00 ± 2.01
34.20 ± 2.12
Ours
Table 6: Additional Evaluation of Multiagent Finetuning on more difficult tasks. Our method outperforms
the baselines on more difficult tasks including examples from all levels of MATH. This shows the applicability
of our method in more broad settings.
G MATHEMATICAL MODEL OF DIVERSITY OVER ROUNDS OF FINETUNING
We consider a simple mathematical model illustrating how diversity can arise by finetuning models
only on answers that they are accurate on. Consider a training dataset of problems in three topics, A, B, and C, as well as three models that we train, all initialized from the same base model. For each
model, we assign a specialization skill score SA, SB, SC between 0 and 1, representing how accurate
the model is at answering questions in the specified topic. All three models are initialized to have a
skill of 0.33 on each topic. The specialization S_i for each topic i corresponds to the percentage of questions in topic i the model gets correct, where an S_A of 0 represents that a model would get 0% of questions in topic A correct.
At each iteration, a model is trained on all questions it answers correctly in each topic. This increases the specialization skill score by the fraction of the training data the model saw for each specific topic. Formally,
the updated skill of model A at iteration t would be:
S^t_A = S^{t−1}_A (1 + S^{t−1}_A / (S^{t−1}_A + S^{t−1}_B + S^{t−1}_C)).    (2)
To account for a finite amount of capacity in each model, after the above skill update, the skills across all models at iteration t are then normalized to have a sum of one. Without loss of generality, assume that at iteration t, S^t_A is larger than S^t_B and S^t_C (which happens by random chance, since we have a finite number of questions). Under the update rule described, the ratio of S^{t+1}_A to S^t_A is given by

(1 + S^t_A / (S^t_A + S^t_B + S^t_C)) / Σ_{i∈{A,B,C}} (1 + S^t_i / (S^t_A + S^t_B + S^t_C)) S^t_i.    (3)

Since S^t_A is greater than or equal to S^t_i, the above expression is greater than or equal to

(1 + S^t_A / (S^t_A + S^t_B + S^t_C)) / Σ_{i∈{A,B,C}} (1 + S^t_A / (S^t_A + S^t_B + S^t_C)) S^t_i = 1,    (4)

where we use the identity that the sum of the S^t_i is equal to 1, since they are normalized scores. We thus have that S^{t+1}_A will be larger than S^t_A, with specialization on topic A monotonically increasing over iterations of training.
Since a priori the model has no preference for any particular topic, random sampling each initial base
model will lead to skill preference over a different random topic. This repeated procedure will then
eventually result in models specializing in either topic A, B, C, ensuring diversity across models.
This mathematical model is similar to the multiagent finetuning procedure in the paper, where we
selectively train generators and critics on datasets they are accurate on and illustrate how they can
then specialize on different portions of data.
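To make these dynamics concrete, the short simulation below iterates the update rule and renormalization from Equation 2, with a tiny random perturbation standing in for finite-sample randomness; it is an illustrative sketch rather than part of the paper's experiments.

import random

def simulate_specialization(n_models=3, iters=25, seed=0):
    """Iterate S_i <- S_i * (1 + S_i / sum(S)) per topic, then renormalize each model."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        skills = [1.0 / 3.0] * 3                 # topics A, B, C
        skills[rng.randrange(3)] += 1e-3         # tiny asymmetry from finite sampling
        models.append(skills)
    for _ in range(iters):
        for skills in models:
            total = sum(skills)
            boosted = [s * (1 + s / total) for s in skills]   # Equation (2) applied per topic
            z = sum(boosted)
            skills[:] = [s / z for s in boosted]              # capacity constraint: renormalize to 1
    return models

# Each model's skill vector drifts toward a one-hot vector, i.e. a single topic.
print(simulate_specialization())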
H ADDITIONAL EVALUATIONS
H.1 LARGER MATH EVALUATION
To further evaluate multiagent finetuning, we evaluate on the MATH dataset across all 5 levels of
difficulty, instead of selecting examples from levels 1-3. We extract 500 examples for training and
500 examples for testing and evaluate on LLaMA-3.
Figure 11: Multiple iterations of finetuning over all levels of MATH. We apply multiple iterations of finetuning
over 500 examples of MATH sampled from all levels. Even over a more difficult domain, we see significant
improvements from multiagent finetuning that continue to self-improve.
LLM
LLaMA-3 (Dubey et al., 2024)
Methods
MMLU
60.40 ± 2.18
Base
61.80 ± 2.17
Majority
65.80 ± 2.12
Debate
Majority FT 63.40 ± 2.15
68.80 ± 2.07
Ours
Table 7: MMLU Evaluation We introduce an additional evaluation with the MMLU benchmark, finetuning on
500 MMLU examples and testing on 500 different MMLU examples. We find that our method performs better
than other baselines.
We show results across all baselines in Table 6 and results across multiple rounds of finetuning in
Figure 11. We see consistent improvement using LLaMA-3.
H.2 MMLU
We add an additional comparison with MMLU to further establish the improvement of our method
on a task related to general factuality and reasoning instead of mathematics.
We finetune on 500 MMLU examples randomly sampled from all 57 subjects. We then evaluate on a
different set of 500 randomly sampled examples.
We show results in Table 7. We see that our method can improve performance on a task related to
factuality.
H.3 ZERO-SHOT GENERALIZATION EVALUATION
We include a larger zero-shot evaluation of our method in Figure 12, where we finetune
on 500 MATH problems and test on 1000 GSM problems. We find that our method performs
significantly better than all other baselines.
Furthermore, we test another setting to measure zero-shot performance by finetuning on the arithmetic
dataset and evaluating on the GSM dataset. We finetune using 500 arithmetic problems and evaluate
each method on 1000 GSM problems. See Figure 13. We find that our method also performs
significantly better than all other baselines.
Figure 12: Testing zero-shot generalization across 1000 GSM problems We test the zero-shot capabilities of
our method using models trained on the MATH dataset. We find that over 1000 problems of GSM, our method
performs better than all baselines.
Figure 13: Zero-shot generalization after arithmetic finetuning. We evaluate the ability of our method to
generalize after finetuning Mistral on the arithmetic task and evaluating on GSM. We find that this aids in GSM
performance, even more than finetuning with MATH.
|
MnJzJ2gvuf | MAVIS: Mathematical Visual Instruction Tuning with an Automatic Data Engine | [
6,
6,
6,
8
] | Published as a conference paper at ICLR 2025
MAVIS:
MATHEMATICAL VISUAL INSTRUCTION TUNING
WITH AN AUTOMATIC DATA ENGINE
Renrui Zhang1∗†, Xinyu Wei3∗, Dongzhi Jiang1, Ziyu Guo2, Yichi Zhang3, Chengzhuo Tong4
Jiaming Liu3, Aojun Zhou1, Shanghang Zhang3, Peng Gao4, Hongsheng Li1,5‡
1CUHK MMLab & 2MiuLar Lab  3Peking University  4Shanghai AI Laboratory  5CPII under InnoHK
[email protected], [email protected]
∗ Equal contribution † Project lead ‡ Corresponding author
ABSTRACT
Multi-modal Large Language Models (MLLMs) have recently showcased superior
proficiency in general visual scenarios. However, we identify that their mathematical capabilities remain under-explored, with three areas to be improved: visual
encoding of math diagrams, diagram-language alignment, and chain-of-thought
(CoT) reasoning. This draws forth an urgent demand for an effective training
paradigm and a large-scale, comprehensive dataset with detailed CoT rationales,
which is challenging to collect and costly to annotate manually. To tackle this
issue, we propose MAVIS, a MAthematical VISual instruction tuning pipeline
for MLLMs, featuring an automatic data engine to efficiently create mathematical
visual datasets. We design the data generation process to be entirely independent
of human intervention or GPT API usage, while ensuring the diagram-caption
correspondence, question-answer correctness, and CoT reasoning quality. With
this approach, we curate two datasets, MAVIS-Caption (558K diagram-caption
pairs) and MAVIS-Instruct (834K visual math problems with CoT rationales), and
propose four progressive stages for training MLLMs from scratch. First, we utilize
MAVIS-Caption to fine-tune a math-specific vision encoder (CLIP-Math) through
contrastive learning, tailored for improved diagram visual encoding. Second, we
also leverage MAVIS-Caption to align the CLIP-Math with a large language model
(LLM) by a projection layer, enhancing vision-language alignment in mathematical
domains. Third, we adopt MAVIS-Instruct to perform the instruction tuning for
robust problem-solving skills, and term the resulting model as MAVIS-7B. Fourth,
we apply Direct Preference Optimization (DPO) to enhance the CoT capabilities of
our model, further refining its step-wise reasoning performance. On various mathe-
matical benchmarks, our MAVIS-7B achieves leading results among open-source
MLLMs, e.g., surpassing other 7B models by +9.3% and the second-best LLaVA-
NeXT (110B) by +6.9%, demonstrating the effectiveness of our method. Data and
models are released at https://github.com/ZrrSkywalker/MAVIS.
1 INTRODUCTION
The pursuit of artificial general intelligence necessitates models to seamlessly interpret and generate
multi-modal data. In recent years, the advent of Large-language Models (LLMs) (Brown et al., 2020;
Touvron et al., 2023a;b; Chiang et al., 2023) and their Multi-modal extension (MLLMs) (Zhang
et al., 2024a; Gao et al., 2023b; Su et al., 2023; Ye et al., 2023a) have significantly facilitated this
process across various fields, such as healthcare (Singhal et al., 2023; Shu et al., 2023), autonomous
driving (Yang et al., 2023; Jin et al., 2024), and robotics (Li et al., 2023b; Liu et al., 2024b). Although
MLLMs exhibit remarkable performance in diverse tasks and benchmarks, one arena where they
have yet to fully demonstrate their potential is mathematical problem-solving in visual contexts.
Figure 1: (a) We compare the attention map of class tokens from CLIP ViT-L (Radford et al., 2021)
and our CLIP-Math. Our vision encoder can better capture significant mathematical information
within diagrams. (b) We compare the diagram captioning capabilities between GPT-4V (OpenAI,
2023c) and our MAVIS-7B, where GPT-4V falls short of accurately recognizing mathematical elements. (c) We compare the chain-of-thought (CoT) reasoning between different models, showcasing that GPT-4V and Gemini-Pro (Gemini Team, 2023) suffer from a low-quality reasoning process.
Existing efforts (OpenAI, 2023b;a; Zhou et al., 2023) for text-only mathematics have attained
considerable progress, largely attributed to the availability of sufficient and easily accessible training
data. In contrast, solving visual mathematical problems remains a significant challenge for MLLMs,
primarily due to the absence of a fully validated, effective training pipeline and the acute shortage of
large-scale, high-quality datasets. Visual mathematical data is not only more costly to collect from
publicly available sources compared to text-only data, but also requires expensive manual annotation
to produce accurate step-by-step chain-of-thought (CoT) rationales integrating diagram information.
In light of these challenges, we identify three critical issues that impede the visual mathematical
capabilities of MLLMs.
i. Unsatisfactory math diagram embeddings by vision encoders. Most MLLMs adopt a
frozen CLIP (Radford et al., 2021) as the vision encoder, which is pre-trained by natural
images capturing real-world scenes with rich colors and textures. In contrast, math diagrams
are composed of abstract curves, shapes, and symbols with a monochromatic color scheme,
exhibiting large semantic gaps to general scenarios. As visualized in Figure 1 (a), the
attention map of CLIP struggles to capture important information within math diagrams,
which cannot provide satisfactory visual embeddings for LLMs to understand.
ii. Diagram-language misalignment between vision encoders and LLMs. Likewise, the
vision-language pre-training stage of MLLMs also adopts natural image-caption pairs for
cross-modal alignment. Due to the domain gap, while they can generate accurate captions for
real-world images, they fall short of recognizing basic mathematical elements and narrating
their relations. As compared in Figure 1 (b), even GPT-4V (OpenAI, 2023c) produces
low-quality descriptions for simple geometric figures and functions, indicating LLMs are
not well aligned with the visual embedding space of math diagrams.
iii. Inaccurate CoT reasoning capabilities with visual elements by MLLMs. Referring to
the CoT evaluation in MathVerse (Zhang et al., 2024b), incorporating the diagram input
would adversely affect the reasoning quality of MLLMs compared to using the text-only question alone. As visualized in Figure 1 (c), we observe that the problem-solving processes of GPT-4V and Gemini-Pro (Gemini Team, 2023) both suffer from low CoT reasoning accuracy. This demonstrates the incapability of MLLMs to leverage visual cues for precise
step-by-step mathematical problem-solving.
Therefore, to mitigate these issues, it is essential to develop an extensive dataset and effective training
approach tailored to visual mathematics. In this paper, we propose MAVIS, a MAthematical VISual
instruction tuning paradigm and an automatic data generation engine for MLLMs, which aims to fully
unleash their potential for diagram visual encoding and reasoning capabilities. We introduce two
meticulously curated datasets, a progressive four-stage training pipeline, and a visual mathematical
specialist, MAVIS-7B. We summarize the contributions of our work as follows.
• Automatic Mathematical Visual Data Engine. To eliminate the need for labor-intensive
annotation and expensive GPT API (OpenAI, 2023c;b) usage, we designed our data engine
to be entirely rule-based and fully automated. This engine handles every aspect of math-
ematical data creation, including diagram drawing, caption generation, question-answer
synthesis, and CoT rationale production. With this approach, we curate two large-scale,
high-quality mathematical visual datasets, MAVIS-Caption and MAVIS-Instruct, widely
covering plane geometry, analytic geometry, and function. MAVIS-Caption consists of
558K diagram-caption pairs automatically created by our data engine with accurate vision-
language correspondence. MAVIS-Instruct includes 834K visual math problems, comprising 582K problems constructed by our data engine and an additional 252K problems augmented by GPT-4V from manual collection and existing datasets (Chen et al., 2021c; Lu et al., 2021).
Each problem is annotated with a CoT rationale, and modified to contain minimized textual
redundancy that enforces MLLMs to pay more attention on visual diagrams.
• Four-stage Training Pipeline. Our training framework involves four progressive stages
designed to sequentially address the aforementioned identified deficiencies in MLLMs.
Firstly, we utilize MAVIS-Caption to fine-tune a math-specific vision encoder by contrastive
learning, termed CLIP-Math, to enable better visual representations of math diagrams.
Subsequently, we align this encoder with the LLM to ensure effective diagram-language
integration also by MAVIS-Caption. After that, our MAVIS-Instruct is adopted to instruction-
tune the MLLM, which provides sufficient step-wise problem-solving supervision. Finally,
we employ Direct Preference Optimization (DPO) (Rafailov et al., 2024) with annotated CoT
rationales in MAVIS-Instruct to further enhance the reasoning capabilities of our model.
• Mathematical Visual Specialist. After the four-stage training, we develop MAVIS-7B,
an MLLM specifically optimized for visual mathematical problem-solving. On various
evaluation benchmarks, our model achieves leading performance compared to existing
open-source MLLMs, e.g., surpassing other 7B models by +9.3% and the second-best
LLaVA-NeXT (110B) (Li et al., 2024a) by +6.9% on MathVerse (Zhang et al., 2024b). The
quantitative results and qualitative analysis both validate the significance of our approach.
2 AUTOMATIC DATA ENGINE
To cope with the substantial data requirements of MLLMs, it is essential to have access to extensive
training instances. However, for visual mathematics, the paucity of publicly available datasets poses
a challenge, and creating such data manually also involves a high cost. Therefore, as illustrated in
Figure 2, we develop an automatic data engine to efficiently generate high-quality math diagrams
(Section 2.1), captions (Section 2.2), and question-answer with rationales (Section 2.3).
Figure 2: Overview of Automatic Data Engine. We present the generation pipelines of geometry
(Top) and function (Bottom) problems within the proposed automatic data engine, including diagrams,
questions, captions, and Chain-of-Thought (CoT) rationales.
2.1 DIAGRAM GENERATION
Covering most mathematical scenarios, we adopt three diagram types: plane geometry, analytic
geometry, and function. Note that all the logic of the data engine is implemented in Python, and we
employ Matplotlib for the graphical rendering of the diagrams.
Plane Geometry Diagram. As such diagrams typically consist of spatial combinations of various
basic shapes, we utilize principles from multi-hop data curation to develop customized generation
rules. These rules allow for the iterative integration of new shapes into existing configurations.
Initially, we establish a core set of shapes, including squares, rectangles, triangles, sectors, etc, for
diagram generation. Starting with a randomly selected shape, we extend another shape from the set
along one of its straight sides. By iterating this process, we can construct diverse plane geometry
diagrams featuring different combinations of shapes. Additionally, we randomly label the vertices
with letters (e.g., A, B, C) and annotate numerical values relevant to geometric properties (e.g., side
lengths and angles), simulating realistic plane geometry problems.
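To illustrate the kind of rule-based composition described here, the sketch below iteratively attaches a square or an equilateral triangle to a randomly chosen edge of the previously added shape and renders the result with Matplotlib; it is a simplified stand-in for the actual engine (for instance, it omits overlap handling, vertex labels, and numeric annotations).

import math
import random
import matplotlib.pyplot as plt

def square_on_edge(p1, p2):
    """Square built on the left side of the directed edge p1 -> p2 (counter-clockwise order)."""
    (x1, y1), (x2, y2) = p1, p2
    nx, ny = -(y2 - y1), (x2 - x1)   # 90-degree CCW normal with the same length as the edge
    return [p1, p2, (x2 + nx, y2 + ny), (x1 + nx, y1 + ny)]

def triangle_on_edge(p1, p2):
    """Equilateral triangle built on the left side of the directed edge p1 -> p2."""
    (x1, y1), (x2, y2) = p1, p2
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2
    h = math.sqrt(3) / 2
    return [p1, p2, (mx - (y2 - y1) * h, my + (x2 - x1) * h)]

def random_composite_diagram(n_shapes=3, seed=None, path="plane_geometry.png"):
    """Iteratively extend a seed square with new shapes along random edges, then render."""
    rng = random.Random(seed)
    shapes = [square_on_edge((0.0, 0.0), (1.0, 0.0))]
    for _ in range(n_shapes - 1):
        poly = shapes[-1]
        i = rng.randrange(len(poly))                     # pick a random edge of the last shape
        p1, p2 = poly[i], poly[(i + 1) % len(poly)]
        builder = rng.choice([square_on_edge, triangle_on_edge])
        shapes.append(builder(p2, p1))                   # reversed edge so the new shape grows outward
    fig, ax = plt.subplots()
    for poly in shapes:
        xs, ys = zip(*(poly + [poly[0]]))
        ax.plot(xs, ys, color="black", linewidth=1.5)
    ax.set_aspect("equal")
    ax.axis("off")
    fig.savefig(path)
    plt.close(fig)
    return shapes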
Analytic Geometry Diagram. Likewise, our approach begins by defining a basic figure set that
differs slightly from that used in plane geometry; for example, we include additional elements such
as points and line segments. We then construct a Cartesian coordinate system, complete with grid
lines and scaled axes. The range of the coordinate system is randomly determined within a predefined
scope. Subsequently, we select a number from 1 to 3 to indicate the number of figures to be drawn on
the graph, and randomly choose coordinates for the top-left vertices to plot these figures at varied
sizes (using these points as centers for circles). Unlike plane geometry, we ensure that the figures do
not overlap, except for points and segments, and maintain the figure areas within a suitable scale.
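A rough sketch of the non-overlapping placement step is shown below, assuming axis-aligned bounding boxes anchored at the top-left vertex; all names, ranges, and the rejection-sampling strategy are illustrative.

```python
import random

def place_figures(num_range=(1, 3), canvas=10, max_tries=100):
    """Randomly place 1-3 basic figures on a Cartesian canvas so that their
    bounding boxes do not overlap (points and segments are exempt)."""
    placed = []
    for _ in range(random.randint(*num_range)):
        for _ in range(max_tries):
            size = random.uniform(1, 3)
            # top-left vertex (x, y); the bounding box extends right and down
            x, y = random.uniform(0, canvas - size), random.uniform(size, canvas)
            box = (x, y - size, x + size, y)
            overlaps = any(not (box[2] <= b[0] or box[0] >= b[2] or
                                box[3] <= b[1] or box[1] >= b[3]) for b in placed)
            if not overlaps:
                placed.append(box)
                break
    return placed
```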
Function Diagram. We focus on seven fundamental function types: polynomial, sine, cosine,
tangent, logarithmic, absolute value, and piece-wise polynomial functions. For each function type,
we parameterize the equations with random variables, such as coefficients and constants within a
predefined range (e.g., a and b in y = ax + b), which facilitates the generation of diverse function
graphs. We also adopt the same Cartesian coordinate system employed for analytic geometry.
Additionally, for specific caption or question-answering samples, we also plot key features like
extreme points and zero points of the functions, providing additional visual information that aids in
the understanding and reasoning of these mathematical functions.
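As a simplified illustration of this rendering step, the sketch below plots a randomly parameterized linear function y = ax + b with Matplotlib and marks its zero point; the actual engine covers all seven function types, and the names and ranges here are illustrative.

```python
import random
import numpy as np
import matplotlib.pyplot as plt

def render_linear_function(path="function.png"):
    """Render a random linear function y = a*x + b on a Cartesian grid and
    mark its zero point as an additional visual cue."""
    a = random.choice([i for i in range(-5, 6) if i != 0])   # non-zero slope
    b = random.randint(-5, 5)
    x = np.linspace(-5, 5, 400)
    fig, ax = plt.subplots()
    ax.plot(x, a * x + b)
    ax.axhline(0, color="gray", linewidth=0.5)               # x-axis
    ax.axvline(0, color="gray", linewidth=0.5)               # y-axis
    ax.grid(True)
    zero = -b / a                                            # zero point of y = a*x + b
    ax.scatter([zero], [0], color="red", zorder=3)           # plot the key feature
    fig.savefig(path)
    plt.close(fig)
    return {"a": a, "b": b, "zero": zero}
```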
Figure 3: MAVIS-Caption Dataset. We showcase three diagram-caption pairs of plane geometry,
function, and analytic geometry in MAVIS-Caption, generated by our developed data engine.
2.2 MAVIS-CAPTION
With our mathematical visual data engine, we first curate a diagram-caption dataset, MAVIS-Caption,
as shown in Figure 3, aiming to benefit the diagram visual representations and cross-modal alignment.
Data Overview. As presented in Table 3 of the Appendix, the MAVIS-Caption dataset comprises
588K diagram-caption pairs. This includes 299K for plane geometry, 77K for analytic geometry, and
212K for function. The average caption length is 62.85 words, reflecting the detailed descriptive
nature of the captions, and the overall vocabulary size is 418, indicating diverse language expression.
We adopt different strategies to generate captions for three types of diagrams. It is important to note
that GPT-4 (OpenAI, 2023b) is only utilized during the template creation stage; it is not used at any
point during the automatic caption generation process.
Plane Geometry Caption. We follow the iterative geometric generation process to develop rules
for producing accurate and detailed captions. We first prompt GPT-4 to create three sets of language
templates: the descriptive content for fundamental shapes (e.g., “A Triangle {} with two congruent
sides {} and {}”), the phrases to denote specific attributes (e.g., “Angle {} measures {} degrees”), and
the conjunction to link two adjacent shapes (e.g., “Attached to edge {} of shape {}, there is a {}”).
Then, based on various generation scenarios, we fill and merge these templates to acquire a coherent
description of the geometric figure.
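A minimal sketch of this fill-and-merge step is shown below; the templates and slot names are illustrative stand-ins for the GPT-4-authored templates described above.

```python
import random

# Illustrative templates in the spirit of the GPT-4-authored ones described above.
SHAPE_TEMPLATES = {
    "triangle": "A triangle {name} with two congruent sides {s1} and {s2}.",
    "sector": "A sector {name} centered at {s1}.",
}
ATTR_TEMPLATE = "Angle {vertex} measures {value} degrees."
CONJ_TEMPLATE = "Attached to edge {edge} of shape {prev}, there is a {shape}."

def compose_caption(shapes):
    """Fill and merge templates following the shape-generation order."""
    parts = [SHAPE_TEMPLATES[shapes[0]["type"]].format(**shapes[0]["slots"])]
    for prev, cur in zip(shapes, shapes[1:]):
        parts.append(CONJ_TEMPLATE.format(edge=cur["slots"]["s1"],
                                          prev=prev["slots"]["name"],
                                          shape=cur["type"]))
        parts.append(SHAPE_TEMPLATES[cur["type"]].format(**cur["slots"]))
    parts.append(ATTR_TEMPLATE.format(vertex="A", value=random.choice([30, 45, 60])))
    return " ".join(parts)

example = [
    {"type": "triangle", "slots": {"name": "ABC", "s1": "AB", "s2": "AC"}},
    {"type": "sector", "slots": {"name": "CBD", "s1": "CB", "s2": "BD"}},
]
print(compose_caption(example))
```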
Function Caption. As function diagrams typically showcase a single curve, we directly utilize GPT-
4 to generate templates describing various properties of functions, including expressions, domains,
ranges, extreme points, and zero points. Each template is then filled based on specific cases, such as
“The expression of the function is y = −3x3 − 2x2 − 2x − 2. Within the range of x values [−3.0, 4.0],
zero points occur at −0.83 ...”.
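As a small worked example of filling such a template from computed function properties (the helper name and the crude expression formatting are illustrative, not the engine's code), consider:

```python
import numpy as np

def function_caption(coeffs, x_range=(-3.0, 4.0)):
    """Fill a caption template with properties computed from the polynomial
    coefficients (highest degree first), e.g., its real zeros in the x-range."""
    poly = np.poly1d(coeffs)
    zeros = sorted(round(float(r.real), 2) for r in poly.roots
                   if abs(r.imag) < 1e-9 and x_range[0] <= r.real <= x_range[1])
    degree = len(coeffs) - 1
    expr = " + ".join(f"{c}*x^{degree - i}" for i, c in enumerate(coeffs) if c)  # crude textual form
    return (f"The expression of the function is y = {expr}. Within the range of "
            f"x values [{x_range[0]}, {x_range[1]}], zero points occur at {zeros}.")

print(function_caption([-3, -2, -2, -2]))   # zeros ~ [-0.83], matching the example above
```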
Analytic Geometry Caption. We also employ GPT-4 to obtain two sets of language templates: the
description of coordinates and attribute information for basic figures (e.g., “The square with its base
left corner at {} features sides of {} in length”) and the spatial relation for nearby figures (e.g., “On
the bottom right of {}, there is a {}”). The captions are then formulated by filling in the coordinates
and selecting appropriate spatial relationship templates through coordinate comparison.
2.3 MAVIS-INSTRUCT
Besides the diagram-caption data, we curate MAVIS-Instruct, an extensive problem-solving dataset,
which endows MLLMs with visual mathematical reasoning capabilities and serves as the basis for
Direct Preference Optimization (DPO) (Rafailov et al., 2024), as shown in Figure 5.
Figure 4: MAVIS-Instruct Dataset. We showcase the generated visual math problems from four
sources within MAVIS-Instruct, which contain detailed rationales and minimized textual redundancy.
Data Overview. As illustrated in Table 4 of the Appendix, the MAVIS-Instruct dataset consists of
a total of 834K visual math problems. Given that the proportion of analytic geometry problems is
relatively small, we group them with function problems for simplicity. Each problem in MAVIS-
Instruct includes a CoT rationale providing step-by-step solutions, with an average answer length
of 150 words. We have minimized textual redundancy in the questions, eliminating unnecessary
contextual information, distracting conditions, and attributes readily observable from the diagrams.
This reduction in text forces MLLMs to enhance their capability to extract essential content from
visual inputs. MAVIS-Instruct is assembled from four distinct sources to ensure broad coverage.
Data Engine Generated Problems. Within our data engine, we manually craft rigorous rules
to produce visual math problems with accurate CoT annotations. Similar to caption generation, GPT
API is not involved in the automatic synthesis process of questions, answers, and CoT rationales.
• Plane Geometry Problems. We initially prompt GPT-4 to compile a comprehensive set of
mathematical formulas applicable to each basic shape (e.g., Pythagorean theorem for right
triangles and area formula for circles). Then, for a geometric diagram, we randomly select
a known condition within a shape as the final solution target, and systematically deduce
backward to another condition, either within the same shape or an adjacent one, using a
randomly selected mathematical formula. This deduced condition is then set as unknown,
and we continue iterative backward deductions as necessary. The final condition and any
conditions from the last deduction step are presented as the given attributes in the question. The
rationales are then obtained simply by reversing this backward deduction process (see the sketch after this list).
• Function Problems. As the properties of functions are predetermined, we utilize GPT-4 to
generate diverse reasoning templates. These templates facilitate the solving of one function
property based on other provided properties, thereby ensuring the generation of high-quality
function rationales. The related function properties include analytical expression, function
values, zeros, extremum points, monotonicity, derivatives, and integrals. To accurately
reason these properties, the CoT annotation incorporates understanding of function types,
solving the analytical expressions of equations, and interpreting function graphs.
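The backward-deduction procedure referenced in the first bullet can be outlined as follows; the formula bank, attribute names, and deduction depth are illustrative assumptions rather than the engine's actual rules.

```python
import random

# Illustrative formula bank: each rule maps a target attribute to the source
# attributes it can be deduced from (names and rules are hypothetical).
FORMULAS = {
    "sector_area": (["radius", "central_angle"], "area = angle/360 * pi * r^2"),
    "hypotenuse": (["leg_a", "leg_b"], "c = sqrt(a^2 + b^2)"),
    "triangle_area": (["base", "height"], "area = base * height / 2"),
}

def backward_deduce(target, depth=2):
    """Start from the solution target, repeatedly replace one attribute with a
    formula whose inputs become new unknowns; reversing the recorded chain
    gives the forward-order CoT rationale, and the leftover inputs become the
    conditions stated in the question."""
    chain, frontier = [], [target]
    for _ in range(depth):
        attr = frontier.pop()
        name, (inputs, rule) = random.choice(list(FORMULAS.items()))
        chain.append((attr, rule))
        frontier.extend(inputs)
    question_conditions = frontier        # attributes given in the question text
    rationale_steps = list(reversed(chain))
    return question_conditions, rationale_steps
```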
Data Engine Captions Annotated by GPT-4. Given the detailed captions and diagrams generated
by our data engine, we can prompt GPT-4V with these sufficient conditions to synthesize question-
answering data and ensure its correctness. We first generate a new set of 17K diagram-caption pairs
that do not overlap with the previous MAVIS-Caption, which avoids answer leakage from the
detailed captions. Then, we prompt GPT-4V to generate 3 new problems with rationales, obtaining
51K data in total from the diagram-caption pairs.
Figure 5: Four-stage Training Pipeline of MAVIS. With our curated MAVIS-Caption and MAVIS-
Instruct, we adopt four progressive stages for training a mathematical visual specialist from scratch.
Manual Collection Augmented by GPT-4. To incorporate high-quality problems found in real-
world contexts, we manually collect 4K math problems with diagrams from publicly available
resources. Recognizing that these sources often lack detailed rationales and may contain redundant
text, we initially utilize GPT-4V to annotate a detailed solving process and streamline the question
text to reduce redundancy. Subsequently, for each collected instance, we input the question, rationale,
and diagram into GPT-4 and employ customized few-shot prompts to generate 20 new problems per
original, comprising 15 multiple-choice questions and 5 free-form questions. This process contributes
a total of 83K problems to the dataset.
Existing Datasets Augmented by GPT-4. Given existing well-organized geometric datasets, we
can also leverage them to expand MAVIS-Instruct. Referring to previous prompt designs, we augment
the 8K training set from two datasets, Geometry-3K (Lu et al., 2021) and GeoQA+ (Chen et al.,
2021b), into 80K visual problems with accompanying rationales, mapping each original problem
to 10 new ones. Due to the scarcity of publicly available function data, we do not include function
problems from this source.
3 MATHEMATICAL VISUAL TRAINING
With the curated datasets, we devise a four-stage training pipeline for endowing MLLMs with
mathematical visual capabilities. The stages respectively aim to mitigate three key deficiencies of
existing MLLMs, i.e., diagram visual encoding, diagram-language alignment, and mathematical
reasoning skills in visual contexts.
3.1 STAGE 1: TRAINING CLIP-MATH
To enhance CLIP’s (Radford et al., 2021) inadequate visual encoding of math diagrams, we utilize
MAVIS-Caption to train a specialized CLIP-Math encoder. Specifically, we fine-tune a pre-trained
CLIP-Base model following a contrastive learning scheme. The math diagrams are fed into
the learnable vision encoder, while the corresponding captions are processed by the text encoder,
which remains frozen to provide reliable supervision. Via contrastive training, the model learns
to adapt from its original natural image domain to mathematical contexts, increasing its focus on
essential visual elements within diagrams, as demonstrated in Figure 1 (a). The optimized CLIP-Math
encoder now delivers more precise and robust representations of math diagrams, establishing a solid
foundation for the subsequent visual interpretation of LLMs.
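A minimal sketch of one such contrastive update is given below, assuming standard CLIP-style towers that map a batch of B diagrams and B captions to (B, D) embeddings; the symmetric InfoNCE form and all names are illustrative, not the exact training code.

```python
import torch
import torch.nn.functional as F

def clip_math_step(vision_encoder, text_encoder, images, captions, optimizer, tau=0.07):
    """One contrastive step: the vision encoder is trainable, while the text
    encoder stays frozen to provide stable supervision."""
    img_emb = F.normalize(vision_encoder(images), dim=-1)      # (B, D), trainable
    with torch.no_grad():                                      # frozen text tower
        txt_emb = F.normalize(text_encoder(captions), dim=-1)  # (B, D)
    logits = img_emb @ txt_emb.t() / tau                       # (B, B) similarities
    labels = torch.arange(images.size(0), device=logits.device)
    loss = 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```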
3.2 STAGE 2: ALIGNING DIAGRAM-LANGUAGE
After acquiring the CLIP-Math encoder, we further integrate it with LLMs using MAVIS-Caption
to boost cross-modal alignment between math diagrams and language embedding space. Using a
simple two-layer MLP as the projection layer, we transform the visual encodings from CLIP-Math,
and prepend them as a prefix to the LLM input. This process, guided by the diagram captioning
task, enables the LLM to accurately recognize mathematical components and spatial arrangements.
With the diagram-language alignment, LLMs are equipped with the interpretation capability in math
diagrams, serving as an initial step toward deeper mathematical reasoning. In this stage, we freeze
the CLIP-Math, and train the projection layer along with the LoRA-based (Hu et al., 2021) LLM.
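A minimal sketch of this projection step is shown below, assuming generic tensor shapes; the module name and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class DiagramProjector(nn.Module):
    """Two-layer MLP that maps CLIP-Math visual tokens into the LLM embedding
    space; the projected tokens are prepended as a prefix to the text tokens."""
    def __init__(self, vision_dim, llm_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(vision_dim, llm_dim),
                                 nn.GELU(),
                                 nn.Linear(llm_dim, llm_dim))

    def forward(self, visual_tokens, text_embeds):
        # visual_tokens: (B, N_vis, vision_dim); text_embeds: (B, N_txt, llm_dim)
        prefix = self.mlp(visual_tokens)
        return torch.cat([prefix, text_embeds], dim=1)   # (B, N_vis + N_txt, llm_dim)
```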
3.3 STAGE 3: INSTRUCTION TUNING
On top of that, we leverage MAVIS-Instruct to endow MLLMs with CoT reasoning and problem-
solving capabilities in visual mathematics. The detailed rationales within each problem’s solution
Table 1: Evaluation on MathVerse’s testmini Set with Six Problem Versions. ‘CoT-E’ and ‘Acc’
denote the scores of CoT evaluation strategy and the scores of direct ‘true or false’ accuracy, respec-
tively. ‘∗’ denotes previous mathematical visual specialists. The highest scores for closed-source
and open-source MLLMs are marked in red and blue, respectively.
Model
All
LLM
Size
Text
Dominant
Text
Lite
Vision
Intensive
Vision
Dominant
Vision
Only
CoT-E
Acc
CoT-E
Acc
CoT-E
Acc
CoT-E
Acc
CoT-E
Acc
CoT-E
Acc
Random Chance
Human
ChatGPT
GPT-4
Qwen-VL-Plus
Gemini-Pro
Qwen-VL-Max
GPT-4V
-
-
-
-
-
-
-
-
LLaMA-Adapter-V2
ImageBind-LLM
mPLUG-Owl2
MiniGPT-v2
LLaVA-1.5
SPHINX-Plus
G-LLaVA∗
LLaVA-NeXT
ShareGPT4V
SPHINX-MoE
Math-LLaVA∗
InternLM-XC2.
LLaVA-NeXT
MAVIS-7B w/o DPO∗
MAVIS-7B∗
7B
7B
7B
7B
7B
13B
7B
8B
13B
8×7B
13B
7B
110B
7B
7B
-
-
-
-
21.3
35.3
37.2
54.4
5.8
10.0
10.3
10.9
12.7
14.0
15.7
17.2
17.4
22.8
24.1
25.9
28.3
33.7
35.2
12.4
64.9
-
-
-
-
11.8
23.5
25.3
39.4
5.7
9.2
5.9
11.0
7.6
12.2
16.6
15.6
13.1
15.0
19.0
16.5
24.5
27.5
28.4
51.3
63.4
26.0
39.8
42.8
63.1
7.8
13.2
11.6
13.2
17.1
16.3
22.2
21.6
21.8
33.3
34.2
36.9
37.1
42.5
43.2
Baselines
12.4
71.2
33.3
46.5
-
-
LLMs
38.5
40.7
12.4
70.9
18.9
20.7
Closed-source MLLMs
15.7
26.3
30.7
54.7
21.2
34.7
37.7
56.6
Open-source MLLMs
6.2
11.4
6.6
12.1
8.8
13.9
20.9
19.4
16.2
22.2
21.2
22.3
31.7
41.4
41.6
6.3
11.6
11.4
12.7
12.0
12.8
20.4
19.7
20.6
21.9
22.7
28.3
29.1
36.3
37.2
11.1
23.5
26.1
41.4
5.9
11.3
6.3
12.0
7.6
11.6
20.7
15.2
16.2
16.4
19.8
17.0
24.1
29.1
29.5
-
-
-
-
18.5
32.0
33.6
51.4
6.2
9.8
11.1
11.1
12.6
12.9
16.5
17.6
18.6
21.1
21.1
20.1
22.6
33.3
34.1
12.4
61.4
-
-
9.0
23.0
24.1
34.9
6.1
8.9
6.3
13.1
7.4
11.6
17.2
16.8
15.5
14.8
20.2
15.7
21.0
27.4
27.9
-
-
-
-
19.1
36.8
35.9
50.8
4.5
11.8
9.4
11.3
12.7
14.7
12.7
14.9
16.2
19.6
20.3
24.4
21.8
29.3
29.7
12.4
68.3
-
-
13.0
22.3
24.1
34.4
4.2
11.2
5.6
10.3
7.4
13.5
14.6
15.2
13.8
12.6
17.6
16.4
22.1
24.9
24.7
-
-
-
-
21.8
33.3
35.9
50.3
4.4
3.5
8.0
6.4
9.0
13.2
6.6
12.1
9.7
18.3
22.2
19.8
30.9
27.1
31.8
12.4
66.7
-
-
10.0
22.2
21.4
31.6
6.1
3.4
4.9
7.4
6.9
10.4
9.4
11.3
3.7
9.1
16.4
11.0
20.7
14.6
18.3
provide high-quality reasoning guidance for MLLMs, significantly enhancing their step-by-step
CoT process. Furthermore, as we have minimized the redundancy within question texts during the
construction process, such text-lite problem formats, following MathVerse (Zhang et al., 2024b),
encourage MLLMs to capture more essential information from the visual embeddings for problem-
solving, rather than relying on textual shortcuts. During this stage, we
unfreeze both the projection layer and the LoRA-based (Hu et al., 2021) LLM to perform a thorough
instruction-following tuning.
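The parameter-freezing scheme for this stage can be sketched as follows; the parameter-name substrings are illustrative and depend on the actual model definition.

```python
def set_stage3_trainable(model):
    """Unfreeze only the projection layer and the LoRA parameters for
    instruction tuning; the CLIP-Math encoder and the base LLM weights
    remain frozen. Substrings 'projector' and 'lora' are illustrative."""
    for name, param in model.named_parameters():
        param.requires_grad = ("projector" in name) or ("lora" in name)
```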
3.4 STAGE 4: PREFERENCE ALIGNMENT WITH DPO
After the instruction tuning phase, the resulting model gains the capability for CoT reasoning on
visual math problems. However, it may still produce inaccurate intermediate steps due to insufficient
supervision for generating the best reasoning path. To address this, we apply CoT preference
alignment using the DPO (Rafailov et al., 2024) algorithm to further enhance the model’s reasoning
performance. Specifically, we first adopt the instruction-tuned model to infer the CoT reasoning process
on the 582K problems generated by the data engine within MAVIS-Instruct. Then, we select the
incorrect outputs (88K samples), identified by their final answers, as the negative reasoning samples in DPO,
and directly utilize the annotated CoT process as the positive samples. We only unfreeze the LoRA
parameters for DPO training, and finally obtain our mathematical specialist, MAVIS-7B.
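A minimal sketch of how such preference pairs could be assembled, assuming a generic model_generate callable and a crude final-answer check (both illustrative):

```python
def build_dpo_pairs(problems, model_generate):
    """Construct DPO preference pairs: the annotated CoT is the chosen response;
    a model-generated CoT whose final answer is wrong becomes the rejected one."""
    pairs = []
    for p in problems:                       # p: dict with question / diagram / answer / cot
        pred_cot = model_generate(p["question"], p["diagram"])
        pred_answer = pred_cot.strip().splitlines()[-1]     # crude answer extraction
        if pred_answer != p["answer"]:                      # keep only incorrect CoTs
            pairs.append({"prompt": p["question"],
                          "chosen": p["cot"],               # annotated rationale
                          "rejected": pred_cot})            # flawed model rationale
    return pairs
```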
4 EXPERIMENT
We first detail our experimental settings in Section 4.1, and then discuss the quantitative results on different
benchmarks and qualitative examples in Sections 4.2 and 4.3, respectively. Please refer to the
Appendix for more data details and ablation studies.
Table 2: Evaluation on Six Mathematical Benchmarks. ‘MMMU-Math’ denotes the math problems
within the test set of MMMU. ‘GPS’, ‘ALG’, and ‘GEO’ denote geometry problem solving, algebraic,
and geometry in MathVista’s testmini set. ‘S1’, ‘S2’, and ‘S3’ denote different problem steps in
We-Math’s testmini set. ‘∗’ denotes previous mathematical visual specialists. The highest scores for
closed-source and open-source MLLMs are marked in red and blue, respectively.
Model
LLM
Size
GeoQA
FunctionQA MMMU-Math MathVision
Random Chance
Human
ChatGPT
GPT-4
Qwen-VL-Plus
Qwen-VL-Max
GPT-4V
-
-
-
-
-
-
-
17.1
92.3
-
-
-
-
-
-
-
-
-
-
-
-
Baselines
7.2
68.8
LLMs
9.7
13.1
21.6
84.2
-
30.6
Closed-source MLLMs
-
36.3
48.4
10.7
15.6
22.8
Open-source MLLMs
GPS
24.1
48.4
31.7
31.7
38.5
-
50.5
MathVista
ALG
GEO
S1
We-Math
S2
25.8
50.9
32.4
33.5
39.1
-
53.0
22.7
51.4
33.0
32.2
39.3
-
51.0
-
-
-
-
-
-
-
-
S3
-
-
-
-
-
40.8
65.5
-
30.3
49.2
-
20.6
38.2
LLaMA-Adapter V2
mPLUG-Owl2
UniMath
LLaVA-1.5
ShareGPT4V
SPHINX-MoE
G-LLaVA∗
Math-LLaVA∗
InternLM-XC2.
LLaVA-NeXT
MAVIS-7B w/o DPO∗
MAVIS-7B∗
7B
7B
-
13B
13B
8×7B
13B
13B
7B
110B
7B
7B
18.1
15.7
50.0
20.3
-
-
67.0
blue62.3
blue66.4
-
66.7
68.3
30.6
blue29.0
-
blue33.9
-
33.9
blue24.2
blue38.7
blue38.7
-
40.3
50.0
23.0
18.8
-
24.0
-
-
blue27.6
blue36.1
30.1
-
39.2
42.4
8.2
8.6
-
11.2
11.9
14.2
blue1.3
blue15.5
14.5
-
18.6
19.2
25.5
blue12.5
-
blue16.3
-
31.2
blue36.1
57.7
63.0
-
63.2
64.1
26.3
blue27.7
-
blue38.5
-
31.7
blue24.6
53.0
56.6
-
58.3
59.2
24.3
blue14.2
-
blue16.7
-
30.5
blue33.1
56.5
62.3
-
63.0
63.2
-
-
-
-
-
-
32.4
blue37.5
47.0
53.7
56.9
57.2
-
-
-
-
-
-
30.1
blue30.5
33.1
36.9
37.1
37.9
-
-
-
-
-
-
32.7
blue32.4
33.0
31.5
33.2
34.6
4.1 EXPERIMENTAL SETTINGS
Implementation Details. We adopt a CLIP ViT-L (Radford et al., 2021) as the pre-trained model to
fine-tune our CLIP-Math, and utilize Mammoth2-7B (Yue et al., 2024) as the base LLM to construct
MAVIS-7B. In the first stage, we fine-tune the CLIP encoder for 10 epochs with a batch size of 16 and an initial
learning rate of 2e−6. In the second stage, we train the diagram-language alignment for 1 epoch with a
batch size of 32 and an initial learning rate of 2e−6, and adopt LoRA (Hu et al., 2021) with a rank of 128. In
the third and fourth stages, we adopt the same training settings as the second one.
Evaluation Schemes. We evaluate our model MAVIS-7B on several popular mathematical bench-
marks, MathVerse (Zhang et al., 2024b), GeoQA (Chen et al., 2021c), FunctionQA (function problems
in MathVista (Lu et al., 2023)), MMMU-Math (the math problems in MMMU (Yue et al., 2023a)),
MathVision (Wang et al., 2024b), three mathematical categories in MathVista, and We-Math (Qiao
et al., 2024). We compare with a variety of existing MLLMs, including two mathematical visual
specialists (Gao et al., 2023a; Shi et al., 2024), two LLMs (OpenAI, 2023a;b), and other general MLLMs (Bai
et al., 2023b; Gao et al., 2023b; Ye et al., 2023b; Liu et al., 2023a; Chen et al., 2023b; Gao et al.,
2024; Dong et al., 2024; Liu et al., 2024a; Chen et al., 2023a; Gao et al., 2024).
4.2 QUANTITATIVE PERFORMANCE
As shown in Table 1 for the MathVerse benchmark, MAVIS-7B achieves the best overall scores
in both CoT evaluation and accuracy among open-source MLLMs with only a 7B model size, and
consistently surpasses the second-best method on different problem versions. Specifically, our model
surpasses the powerful InternLM-XComposer2 (7B) (Dong et al., 2024) by +9.3% and ShareGPT4V
(13B) (Chen et al., 2023b) by +17.8% in CoT evaluation scores. Compared to other mathematical
visual specialists, i.e., G-LLaVA (7B) (Gao et al., 2023a) and the concurrent Math-LLaVA (13B) (Shi
et al., 2024), MAVIS-7B exhibits superior problem-solving capabilities with higher CoT evaluation
scores of +19.5% and +11.1%, respectively. In addition, our model also outperforms the most
powerful open-source MLLM series, LLaVA-NeXT (Li et al., 2024a), across its 8B to 110B model sizes,
demonstrating the math-specific proficiency of MAVIS-7B. Note that the improvement brought by
DPO (our fourth-stage training) is more apparent in CoT evaluation compared to the accuracy scores,
indicating that the preference alignment learning can effectively boost the CoT reasoning capabilities.
Figure 6: Problem-solving Comparison of MAVIS-7B and GPT-4V.
Table 2 showcases the performance comparison on six other mathematical benchmarks, where our
model still attains remarkable performance among other MLLMs. In detail, MAVIS-7B outperforms
the closed-source Qwen-VL-Max (Bai et al., 2023a) by +6.1% in MMMU-Math, +3.6% in MathVi-
sion, and around +10% in three subsets of We-Math. Our model even exceeds GPT-4V (OpenAI,
2023b) in the three mathematical categories of MathVista, indicating our problem-solving and rea-
soning proficiency. We also observe that, the enhancement from DPO increases from ‘S1’ to ‘S3’ of
We-Math, which well demonstrates its benefit on math problems with more intricate reasoning steps.
4.3 QUALITATIVE ANALYSIS
In Figure 6, we compare the mathematical problem-solving examples between MAVIS-7B and GPT-
4V (OpenAI, 2023c). As presented, our model not only showcases better accuracy in understanding
the geometric elements, function curves, and coordinate axes in mathematical diagrams, but also
produces a higher-quality step-by-step reasoning process for formula substitution and numerical
calculation. This demonstrates the effectiveness of our four-stage training pipeline and automatic
data engine for enhanced diagram understanding and CoT reasoning.
5 CONCLUSION
In this paper, we propose MAVIS, the first mathematical visual instruction tuning paradigm for
MLLMs. We first introduce two high-quality datasets generated by a carefully designed data engine, MAVIS-Caption
and MAVIS-Instruct, containing large-scale diagram-language and problem-solving data. Then, we
customize a four-stage training framework to progressively train the math-specific vision encoder, the
diagram-language alignment, the mathematical reasoning capabilities, and the preference alignment of MLLMs. The obtained
specialist model, MAVIS-7B, achieves superior performance across different mathematical visual
benchmarks, demonstrating the potential to serve as a new standard for future research.
ACKNOWLEDGEMENT
This project is funded in part by National Key R&D Program of China Project 2022ZD0161100, by
the Centre for Perceptual and Interactive Intelligence (CPII) Ltd under the Innovation and Technology
Commission (ITC)’s InnoHK, by NSFC-RGC Project N_CUHK498/24. Hongsheng Li is a PI of
CPII under the InnoHK.
REFERENCES
Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel
Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language
model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716–
23736, 2022.
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe,
Yonatan Bitton, Samir Gadre, Shiori Sagawa, et al. Openflamingo: An open-source framework for
training large autoregressive vision-language models. arXiv preprint arXiv:2308.01390, 2023.
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang
Zhou, and Jingren Zhou. Qwen-vl: A frontier large vision-language model with versatile abilities.
ArXiv, abs/2308.12966, 2023a.
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou,
and Jingren Zhou. Qwen-vl: A versatile vision-language model for understanding, localization,
text reading, and beyond. arXiv preprint arXiv:2308.12966, 2023b.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are
few-shot learners. In Advances in neural information processing systems, pp. 1877–1901, 2020.
Jiaqi Chen, Jianheng Tang, Jinghui Qin, Xiaodan Liang, Lingbo Liu, Eric Xing, and Liang Lin.
GeoQA: A geometric question answering benchmark towards multimodal numerical reasoning.
In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli (eds.), Findings of the
Association for Computational Linguistics: ACL-IJCNLP 2021, pp. 513–523, Online, August
2021a. Association for Computational Linguistics. doi: 10.18653/v1/2021.findings-acl.46. URL
https://aclanthology.org/2021.findings-acl.46.
Jiaqi Chen, Jianheng Tang, Jinghui Qin, Xiaodan Liang, Lingbo Liu, Eric P. Xing, and Liang Lin.
Geoqa: A geometric question answering benchmark towards multimodal numerical reasoning.
ArXiv, abs/2105.14517, 2021b. URL https://api.semanticscholar.org/CorpusID:
235253782.
Jiaqi Chen, Jianheng Tang, Jinghui Qin, Xiaodan Liang, Lingbo Liu, Eric P Xing, and Liang Lin.
Geoqa: A geometric question answering benchmark towards multimodal numerical reasoning.
arXiv preprint arXiv:2105.14517, 2021c.
Jiaqi Chen, Tong Li, Jinghui Qin, Pan Lu, Liang Lin, Chongyu Chen, and Xiaodan Liang. Uni-
geo: Unifying geometry logical reasoning via reformulating mathematical expression. ArXiv,
abs/2212.02746, 2022.
Jun Chen, Deyao Zhu, Xiaoqian Shen, Xiang Li, Zechun Liu, Pengchuan Zhang, Raghuraman
Krishnamoorthi, Vikas Chandra, Yunyang Xiong, and Mohamed Elhoseiny. Minigpt-v2: Large
language model as a unified interface for vision-language multi-task learning. arXiv preprint
arXiv:2310.09478, 2023a.
Lin Chen, Jinsong Li, Xiao wen Dong, Pan Zhang, Conghui He, Jiaqi Wang, Feng Zhao, and Dahua
Lin. Sharegpt4v: Improving large multi-modal models with better captions. ArXiv, abs/2311.12793,
2023b. URL https://api.semanticscholar.org/CorpusID:265308687.
Zhe Chen, Weiyun Wang, Hao Tian, Shenglong Ye, Zhangwei Gao, Erfei Cui, Wenwen Tong, Kongzhi
Hu, Jiapeng Luo, Zheng Ma, et al. How far are we to gpt-4v? closing the gap to commercial
multimodal models with open-source suites. arXiv preprint arXiv:2404.16821, 2024.
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng,
Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna:
An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. https://lmsys.org/
blog/2023-03-30-vicuna/, March 2023.
Xiaoyi Dong, Pan Zhang, Yuhang Zang, Yuhang Cao, Bin Wang, Linke Ouyang, Xilin Wei, Songyang
Zhang, Haodong Duan, Maosong Cao, et al. Internlm-xcomposer2: Mastering free-form text-image
composition and comprehension in vision-language large model. arXiv preprint arXiv:2401.16420,
2024.
Chaoyou Fu, Yuhan Dai, Yondong Luo, Lei Li, Shuhuai Ren, Renrui Zhang, Zihan Wang, Chenyu
Zhou, Yunhang Shen, Mengdan Zhang, et al. Video-mme: The first-ever comprehensive evaluation
benchmark of multi-modal llms in video analysis. arXiv preprint arXiv:2405.21075, 2024.
Jiahui Gao, Renjie Pi, Jipeng Zhang, Jiacheng Ye, Wanjun Zhong, Yufei Wang, Lanqing Hong,
Jianhua Han, Hang Xu, Zhenguo Li, et al. G-llava: Solving geometric problem with multi-modal
large language model. arXiv preprint arXiv:2312.11370, 2023a.
Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu,
Conghui He, Xiangyu Yue, Hongsheng Li, and Yu Qiao. Llama-adapter v2: Parameter-efficient
visual instruction model. arXiv preprint arXiv:2304.15010, 2023b.
Peng Gao, Renrui Zhang, Chris Liu, Longtian Qiu, Siyuan Huang, Weifeng Lin, Shitian Zhao, Shijie
Geng, Ziyi Lin, Peng Jin, et al. Sphinx-x: Scaling data and parameters for a family of multi-modal
large language models. arXiv preprint arXiv:2402.05935, 2024.
Google Gemini Team. Gemini: a family of highly capable multimodal models. arXiv preprint
arXiv:2312.11805, 2023.
Zhibin Gou, Zhihong Shao, Yeyun Gong, Yujiu Yang, Minlie Huang, Nan Duan, Weizhu Chen,
et al. Tora: A tool-integrated reasoning agent for mathematical problem solving. arXiv preprint
arXiv:2309.17452, 2023.
Ziyu Guo, Renrui Zhang, Xiangyang Zhu, Yiwen Tang, Xianzheng Ma, Jiaming Han, Kexin Chen,
Peng Gao, Xianzhi Li, Hongsheng Li, et al. Point-bind & point-llm: Aligning point cloud
with multi-modality for 3d understanding, generation, and instruction following. arXiv preprint
arXiv:2309.00615, 2023.
Ziyu Guo*, Renrui Zhang*#, Xiangyang Zhu, Chengzhuo Tong, Peng Gao, Chunyuan Li, and
Pheng-Ann Heng. Sam2point: Segment any 3d as videos in zero-shot and promptable manners.
arXiv preprint arXiv:2408.16768, 2024.
Ziyu Guo, Renrui Zhang, Chengzhuo Tong, Zhizheng Zhao, Peng Gao, Hongsheng Li, and Pheng-
Ann Heng. Can we generate images with cot? let’s verify and reinforce image generation step by
step. arXiv preprint arXiv:2501.13926, 2025.
Jiaming Han, Renrui Zhang, Wenqi Shao, Peng Gao, Peng Xu, Han Xiao, Kaipeng Zhang, Chris Liu,
Song Wen, Ziyu Guo, et al. Imagebind-llm: Multi-modality instruction tuning. arXiv preprint
arXiv:2309.03905, 2023.
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang,
and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint
arXiv:2106.09685, 2021.
Yueru Jia, Jiaming Liu, Sixiang Chen, Chenyang Gu, Zhilue Wang, Longzan Luo, Lily Lee, Pengwei
Wang, Zhongyuan Wang, Renrui Zhang, et al. Lift3d foundation policy: Lifting 2d large-scale
pretrained models for robust 3d robotic manipulation. arXiv preprint arXiv:2411.18623, 2024.
Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris
Bamford, Devendra Singh Chaplot, Diego de Las Casas, Emma Bou Hanna, Florian Bressand,
Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-
Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le
Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed.
Mixtral of experts. Arxiv 2401.04088, 2024.
Dongzhi Jiang, Renrui Zhang, Ziyu Guo, Yanwei Li, Yu Qi, Xinyan Chen, Liuhui Wang, Jianhan
Jin, Claire Guo, Shen Yan, et al. Mme-cot: Benchmarking chain-of-thought in large multimodal
models for reasoning quality, robustness, and efficiency. arXiv preprint arXiv:2502.09621, 2025.
Bu Jin, Yupeng Zheng, Pengfei Li, Weize Li, Yuhang Zheng, Sujie Hu, Xinyu Liu, Jinwei Zhu, Zhijie
Yan, Haiyang Sun, et al. Tod3cap: Towards 3d dense captioning in outdoor scenes. arXiv preprint
arXiv:2403.19589, 2024.
Mehran Kazemi, Hamidreza Alvari, Ankit Anand, Jialin Wu, Xi Chen, and Radu Soricut. Geomverse:
A systematic evaluation of large models for geometric reasoning. arXiv preprint arXiv:2312.12241,
2023.
Bo Li, Kaichen Zhang, Hao Zhang, Dong Guo, Renrui Zhang, Feng Li, Yuanhan Zhang,
Ziwei Liu, and Chunyuan Li. Llava-next: Stronger llms supercharge multimodal capabilities
in the wild, May 2024a. URL https://llava-vl.github.io/blog/
2024-05-10-llava-next-stronger-llms/.
Feng Li, Renrui Zhang, Hao Zhang, Yuanhan Zhang, Bo Li, Wei Li, Zejun Ma, and
Chunyuan Li. Llava-next: Tackling multi-image, video, and 3d in large multimodal
models, June 2024b. URL https://llava-vl.github.io/blog/
2024-06-16-llava-next-interleave/.
Feng Li, Renrui Zhang, Hao Zhang, Yuanhan Zhang, Bo Li, Wei Li, Zejun Ma, and Chunyuan Li.
Llava-next-interleave: Tackling multi-image, video, and 3d in large multimodal models, 2024c.
URL https://arxiv.org/abs/2407.07895.
Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-
training for unified vision-language understanding and generation. In International Conference on
Machine Learning, pp. 12888–12900. PMLR, 2022.
KunChang Li, Yinan He, Yi Wang, Yizhuo Li, Wenhai Wang, Ping Luo, Yali Wang, Limin Wang, and
Yu Qiao. Videochat: Chat-centric video understanding. arXiv preprint arXiv:2305.06355, 2023a.
Xiaoqi Li, Mingxu Zhang, Yiran Geng, Haoran Geng, Yuxing Long, Yan Shen, Renrui Zhang, Jiaming
Liu, and Hao Dong. Manipllm: Embodied multimodal large language model for object-centric
robotic manipulation. arXiv preprint arXiv:2312.16217, 2023b.
Zhenwen Liang, Tianyu Yang, Jipeng Zhang, and Xiangliang Zhang. Unimath: A foundational and
multimodal mathematical reasoner. In EMNLP, 2023.
Weifeng Lin, Xinyu Wei, Ruichuan An, Peng Gao, Bocheng Zou, Yulin Luo, Siyuan Huang, Shang-
hang Zhang, and Hongsheng Li. Draw-and-understand: Leveraging visual prompts to enable mllms
to comprehend what you want, 2025. URL https://arxiv.org/abs/2403.20271.
Ziyi Lin, Chris Liu, Renrui Zhang, Peng Gao, Longtian Qiu, Han Xiao, Han Qiu, Chen Lin, Wenqi
Shao, Keqin Chen, et al. Sphinx: The joint mixing of weights, tasks, and visual embeddings for
multi-modal large language models. arXiv preprint arXiv:2311.07575, 2023.
Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction
tuning, 2023a.
Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. In NeurIPS,
2023b.
Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee.
Llava-next: Improved reasoning, ocr, and world knowledge, January 2024a. URL https:
//llava-vl.github.io/blog/2024-01-30-llava-next/.
Jiaming Liu, Chenxuan Li, Guanqun Wang, Lily Lee, Kaichen Zhou, Sixiang Chen, Chuyan Xiong,
Jiaxin Ge, Renrui Zhang, and Shanghang Zhang. Self-corrected multimodal large language model
for end-to-end robot manipulation. arXiv preprint arXiv:2405.17418, 2024b.
Jiaming Liu, Mengzhen Liu, Zhenyu Wang, Lily Lee, Kaichen Zhou, Pengju An, Senqiao Yang,
Renrui Zhang, Yandong Guo, and Shanghang Zhang. Robomamba: Multimodal state space model
for efficient robot reasoning and manipulation. NeurIPS 2024, 2024c.
Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan Huang, Xiaodan Liang, and Song-Chun Zhu.
Inter-gps: Interpretable geometry problem solving with formal language and symbolic reasoning.
arXiv preprint arXiv:2105.04165, 2021.
Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chun yue Li, Hannaneh Hajishirzi, Hao Cheng,
Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating math reasoning in visual
contexts with gpt-4v, bard, and other large multimodal models. ArXiv, abs/2310.02255, 2023.
Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David
Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, et al. Show your work:
Scratchpads for intermediate computation with language models. arXiv preprint arXiv:2112.00114,
2021.
OpenAI. Chatgpt. https://chat.openai.com, 2023a.
OpenAI. Gpt-4 technical report. ArXiv, abs/2303.08774, 2023b.
OpenAI. GPT-4V(ision) system card, 2023c. URL https://openai.com/research/
gpt-4v-system-card.
Tianshuo Peng, Mingsheng Li, Hongbin Zhou, Renqiu Xia, Renrui Zhang, Lei Bai, Song Mao, Bin
Wang, Conghui He, Aojun Zhou, et al. Chimera: Improving generalist model with domain-specific
experts. arXiv preprint arXiv:2412.05983, 2024.
Runqi Qiao, Qiuna Tan, Guanting Dong, Minhui Wu, Chong Sun, Xiaoshuai Song, Zhuoma GongQue,
Shanglin Lei, Zhe Wei, Miaoxuan Zhang, et al. We-math: Does your large multimodal model
achieve human-like mathematical reasoning? arXiv preprint arXiv:2407.01284, 2024.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal,
Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever.
Learning transferable visual models from natural language supervision. In International Conference
on Machine Learning, 2021. URL https://api.semanticscholar.org/CorpusID:
231591445.
Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and Chelsea
Finn. Direct preference optimization: Your language model is secretly a reward model, 2024. URL
https://arxiv.org/abs/2305.18290.
Wenhao Shi, Zhiqiang Hu, Yi Bin, Junhua Liu, Yang Yang, See-Kiong Ng, Lidong Bing, and Roy
Ka-Wei Lee. Math-llava: Bootstrapping mathematical reasoning for multimodal large language
models. arXiv preprint arXiv:2406.17294, 2024.
Chang Shu, Baian Chen, Fangyu Liu, Zihao Fu, Ehsan Shareghi, and Nigel Collier. Visual med-alpaca:
A parameter-efficient biomedical llm with visual capabilities, 2023.
Karan Singhal, Tao Tu, Juraj Gottweis, Rory Sayres, Ellery Wulczyn, Le Hou, Kevin Clark, Stephen
Pfohl, Heather Cole-Lewis, Darlene Neal, et al. Towards expert-level medical question answering
with large language models. arXiv preprint arXiv:2305.09617, 2023.
Yixuan Su, Tian Lan, Huayang Li, Jialu Xu, Yan Wang, and Deng Cai. Pandagpt: One model to
instruction-follow them all. arXiv preprint arXiv:2305.16355, 2023.
Yiwen Tang, Zoey Guo, Zhuhao Wang, Ray Zhang, Qizhi Chen, Junli Liu, Delin Qu, Zhigang Wang,
Dong Wang, Xuelong Li, et al. Exploring the potential of encoder-free architectures in 3d lmms.
arXiv preprint arXiv:2502.09620, 2025.
InternLM Team. Internlm: A multilingual language model with progressively enhanced capabilities,
2023.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée
Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and
efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023a.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation
and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023b.
Guanqun Wang, Xinyu Wei, Jiaming Liu, Ray Zhang, Yichi Zhang, Kevin Zhang, Maurice Chong,
and Shanghang Zhang. Mr-mllm: Mutual reinforcement of multimodal comprehension and vision
perception, 2024a. URL https://arxiv.org/abs/2406.15768.
Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Mingjie Zhan, and Hongsheng Li. Measuring
multimodal mathematical reasoning with math-vision dataset. arXiv preprint arXiv:2402.14804,
2024b.
Ke Wang, Houxing Ren, Aojun Zhou, Zimu Lu, Sichun Luo, Weikang Shi, Renrui Zhang, Linqi Song,
Mingjie Zhan, and Hongsheng Li. Mathcoder: Seamless code integration in LLMs for enhanced
mathematical reasoning. In The Twelfth International Conference on Learning Representations,
2024c. URL https://openreview.net/forum?id=z8TW0ttBPp.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny
Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in
Neural Information Processing Systems, 35:24824–24837, 2022.
Runsen Xu, Xiaolong Wang, Tai Wang, Yilun Chen, Jiangmiao Pang, and Dahua Lin. Pointllm:
Empowering large language models to understand point clouds. arXiv preprint arXiv:2308.16911,
2023.
Senqiao Yang, Jiaming Liu, Ray Zhang, Mingjie Pan, Zoey Guo, Xiaoqi Li, Zehui Chen, Peng Gao,
Yandong Guo, and Shanghang Zhang. Lidar-llm: Exploring the potential of large language models
for 3d lidar understanding. arXiv preprint arXiv:2312.14074, 2023.
Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen
Hu, Pengcheng Shi, Yaya Shi, Chaoya Jiang, Chenliang Li, Yuanhong Xu, Hehong Chen, Junfeng
Tian, Qi Qian, Ji Zhang, and Fei Huang. mplug-owl: Modularization empowers large language
models with multimodality, 2023a.
Qinghao Ye, Haiyang Xu, Jiabo Ye, Ming Yan, Anwen Hu, Haowei Liu, Qi Qian, Ji Zhang, Fei
Huang, and Jingren Zhou. mplug-owl2: Revolutionizing multi-modal large language model with
modality collaboration, 2023b.
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo
Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions for
large language models. arXiv preprint arXiv:2309.12284, 2023.
Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu
Jiang, Weiming Ren, Yuxuan Sun, Cong Wei, Botao Yu, Ruibin Yuan, Renliang Sun, Ming Yin,
Boyuan Zheng, Zhenzhu Yang, Yibo Liu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen.
Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert
agi. arXiv preprint arXiv:2311.16502, 2023a.
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen.
Mammoth: Building math generalist models through hybrid instruction tuning. arXiv preprint
arXiv:2309.05653, 2023b.
Xiang Yue, Tuney Zheng, Ge Zhang, and Wenhu Chen. Mammoth2: Scaling instructions from the
web. arXiv preprint arXiv:2405.03548, 2024.
Renrui Zhang, Jiaming Han, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, Peng Gao,
and Yu Qiao. LLaMA-adapter: Efficient fine-tuning of large language models with zero-initialized
attention. In The Twelfth International Conference on Learning Representations, 2024a. URL
https://openreview.net/forum?id=d4UiXAHN2W.
Renrui Zhang, Dongzhi Jiang, Yichi Zhang, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou,
Pan Lu, Kai-Wei Chang, Peng Gao, et al. Mathverse: Does your multi-modal llm truly see the
diagrams in visual math problems? arXiv preprint arXiv:2403.14624, 2024b.
Aojun Zhou, Ke Wang, Zimu Lu, Weikang Shi, Sichun Luo, Zipeng Qin, Shaoqing Lu, Anya Jia,
Linqi Song, Mingjie Zhan, et al. Solving challenging math word problems using gpt-4 code
interpreter with code-based self-verification. arXiv preprint arXiv:2308.07921, 2023.
A APPENDIX
A.1 RELATED WORK
Visual Instruction Tuning. The advancement of large language models (LLMs) (Brown et al., 2020;
Jiang et al., 2024; Touvron et al., 2023b; Chiang et al., 2023) with instruction tuning has significantly
enhanced zero-shot capabilities across a range of tasks. Drawing inspiration from this, LLaMA-
Adapter series (Zhang et al., 2024a; Gao et al., 2023b; Han et al., 2023) propose a zero-initialized
attention mechanism to align frozen vision encoders (Radford et al., 2021) with LLaMA (Touvron
et al., 2023a) for multi-modal learning. LLaVA series (Liu et al., 2023b;a) employ a linear projector
for vision-language alignment, establishing visual instruction tuning as a standard training approach
in the multi-modal field. Flamingo (Alayrac et al., 2022) and OpenFlamingo (Awadalla et al., 2023)
have honed visual representation by integrating a cross-attention resampler with vision encoders.
SPHINX series (Gao et al., 2024; Lin et al., 2023; 2025) and MR-MLLM (Wang et al., 2024a)
utilize a blend of visual encoders to make the LLM cognizant of various image aspects. InternVL
series (Chen et al., 2024; Dong et al., 2024; Team, 2023) employ a large vision encoder and Q-
Former (Li et al., 2022) to incorporate high-quality visual information through a multi-stage training
methodology. LLaVA-NexT (Liu et al., 2024a; Li et al., 2024a;b) further introduces the ‘AnyRes’
technique to manage images at any given resolution, and LLaVA-NexT-Interleave (Li et al., 2024c)
extends the scope widely to interleave multi-image settings. There are also recent efforts to apply
visual instruction tuning to 3D (Guo et al., 2023; Xu et al., 2023; Guo* et al., 2024; Tang et al., 2025),
video (Li et al., 2023a; Fu et al., 2024), reasoning Guo et al. (2025); Jiang et al. (2025); Peng et al.
(2024), and robotics Jia et al. (2024); Liu et al. (2024c) scenarios. Despite the impressive strides made
in both model capability and training efficiency by multi-modal large language models (MLLMs)
through visual instruction tuning, there is currently no MLLM specifically designed for mathematical
problem-solving, nor a substantial dataset available for such purposes in the open-source community.
In this paper, we mitigate the issue by proposing MAVIS with high-quality mathematical visual
datasets and training paradigms.
Mathematics in Large Models. Recent research has predominantly concentrated on text-only
mathematical problem-solving using LLMs. MAmmoTH (Yue et al., 2023b; 2024) have compiled ex-
tensive collections of mathematical problems, training LLMs using the reasoning processes described
in solutions. MetaMATH (Yu et al., 2023) has expanded upon this by rewriting existing problems
to create a larger dataset. MathCoder (Wang et al., 2024c) and ToRA (Gou et al., 2023) introduced
a tools agent approach, employing Python code and symbolic resolvers during the training phase,
significantly outperforming traditional models that rely on text-only mathematical reasoning. How-
ever, in the multi-modal field, despite the introduction of several datasets such as Geometry3K (Lu
et al., 2021), GeoQA (Chen et al., 2021b), UniGeo (Chen et al., 2022), UniMath (Liang et al., 2023),
and GeomVerse (Kazemi et al., 2023), aiming at enhancing the performance of MLLMs in solving
graphical mathematical problems, these datasets are quite limited in scale and domain. Based on
these datasets, G-LLaVA (Gao et al., 2023a) has developed superior capabilities for understanding
graphical geometries but struggles with mathematical problems in other domains. The comprehensive
benchmark MathVerse (Zhang et al., 2024b) has also highlighted the existing MLLMs’ unsatisfactory
capacity for encoding visual diagrams in diverse mathematical domains. Therefore, there is a pressing
need for the development of more robust encoders for mathematical images and the tuning of MLLMs
with mathematical visual instructions, for which we propose MAVIS to address the challenges.
A.2 HUMAN EVALUATION OF MAVIS-INSTRUCT
To assess the dataset’s coverage, validity, and quality, human verification is employed. The creation
process of our MAVIS-Instruct dataset can be broadly categorized into two approaches:
• GPT-generated: This method leverages GPT-4 to generate new problems (including ques-
tions, rationales, and answers) based on existing problems with diagrams. While this
approach produces fluent, human-like sentences, it may be influenced by the inherent
capabilities and occasional instability of GPT-4V.
• Data Engine: As the main source of our mathematical visual data, this method utilizes the
custom automatic data engine to generate new problems (including diagrams, questions,
Figure 7: Human Evaluation Results on
200 randomly sampled problems in MAVIS-
Instruct, 100 GPT-generated and 100 Data
Engine. We set three levels (1, 2, and 3) for
each metric, and report average scores.
Figure 8: Human Evaluation Statistics on
200 randomly sampled problems in MAVIS-
Instruct, 100 GPT-generated and 100 Data
Engine. We count the numbers of three score
levels (1, 2, and 3) for each metric.
rationales, and answers), without relying on GPT models. It guarantees 100% correctness
due to the use of rigorous templates, though it may occasionally exhibit rigid expressions.
Specifically, we evaluate four aspects (Diagram, Question, Rationale, and Answer) of each problem
using seven metrics. Each metric is scored on a scale of 1 to 3, where 1 denotes poor, 2 denotes
moderate, and 3 denotes good. The human evaluation results are shown in Figure 7 and score
statistics are shown in Figure 8. In addition, we also showcase some specific examples in Figure 9
and Figure 10. We analyze each aspect as follows:
• Diagram: The diagrams in GPT-generated problems are directly collected from existing
sources with rigorous human filtering, ensuring high quality, resulting in scores close to 3.
In contrast, for rule-based problems, the diagrams are drawn accurately using Python code
driven by our data engine, which guarantees correctness. However, these diagrams may
lack alignment with human aesthetic preferences, as indicated by 3% of them receiving an
appearance score of 1.
• Question: Regarding the questions, both GPT-generated and rule-based problems display
a high degree of accuracy in aligning with the diagram elements. This is attributed to the
well-crafted prompts used with GPT-4 and the meticulous template design of the data engine.
Nevertheless, rule-based questions may occasionally exhibit minor fluency issues, as they
lack human refinement.
• Rationale: In terms of the rationales, most instances feature a precise and detailed chain-of-
thought (CoT) reasoning process. However, in a few cases (3% receiving an accuracy score
of 1), some GPT-generated rationales contain minor reasoning or calculation errors, which
are inherent to GPT-4’s limitations in problem-solving. These errors usually affect only one
or two steps and do not compromise the overall logic. Conversely, the rule-based rationales
are highly accurate due to the carefully designed data engine, although there is still room for
improvement in language fluency.
• Answer: The answers in both methods achieve high correctness scores. For GPT-generated
problems, we prompt GPT-4 to identify a known condition from the original problems as
the answer. Similarly, for rule-based problems, we randomly select a known attribute from
the generated diagrams to serve as the answer.
Overall, the randomly sampled instances show that our dataset exhibits good question quality and
answer accuracy.
Figure 9: Diagram Examples in MAVIS-Instruct. The first three diagrams showcase superior
correctness and appearance, while a small portion of Data Engine generated diagrams (3%) are not
aligned with human preference, e.g., the fourth diagram.
Figure 10: Accurate Rationale Examples in MAVIS-Instruct. Most GPT-generated and Data
Engine-generated rationales ensure correctness.
Table 3: Statistics of MAVIS-Caption.
Table 4: Subject Distribution of MAVIS-Instruct.
Statistic
Number
Statistic
Total Captions
- Total number
- Average length (words)
- Average length (characters)
- Vocabulary size
Plane Geometry
- Total number
- Average length (words)
- Average length (characters)
- Vocabulary size
Analytic Geometry
- Total number
- Average length (words)
- Average length (characters)
- Vocabulary size
Function
- Total number
- Average length (words)
- Average length (characters)
- Vocabulary size
588K
62.85
339.68
418
299K (50.9%)
69.77
385.85
195
77K (13.1%)
39.64
210.10
158
212K (36.0%)
61.48
321.46
149
Total questions
- Multiple-choice questions
- Free-form questions
Data Engine Generated Problems
- Geometry questions
- Function questions
Data Engine Captions Annotated by GPT-4
- Geometry questions
- Function questions
Manual Collection Augmented by GPT-4
- Geometry questions
- Function questions
Existing Datasets Augmented by GPT-4
- Geometry questions
- Function questions
Number of unique images
Number of unique questions
Number of unique answers
Average question length
Average answer length
Number
834K
615K (62.4%)
218K (37.6%)
582K
466K (80.0%)
116K (20.0%)
51K
30K (58.8%)
21K (41.2%)
83K
72K (86.5%)
11K (13.5%)
118K
118K (100.0%)
0 (0%)
611K (73.3%)
804K (96.5%)
675K (81.0%)
44.60
62.82
A.3 ABLATION STUDY
A.3.1 MAVIS-CAPTION
To validate the enhancement of Math-CLIP’s diagram perception capability, we sampled 100
validation diagram-caption pairs and computed their cosine similarity using both CLIP and Math-
CLIP. The results, as shown in Table 5, indicate that Math-CLIP encodes more discriminative diagram
features. Additionally, the attention visualization in Figure 1(a) of the main paper further demonstrates
that Math-CLIP captures mathematical visual elements within diagrams more effectively, highlighting
the efficacy of MAVIS-Caption.
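This matched-versus-unmatched comparison can be sketched as follows, assuming precomputed (N, D) embedding matrices for the 100 diagrams and their corresponding captions; the function name is illustrative.

```python
import torch
import torch.nn.functional as F

def pair_similarity(img_embs, txt_embs):
    """Average cosine similarity for matched (diagonal) and unmatched
    (off-diagonal) diagram-caption pairs, as in the Table 5 ablation."""
    img = F.normalize(img_embs, dim=-1)
    txt = F.normalize(txt_embs, dim=-1)
    sim = img @ txt.t()                               # (N, N) cosine similarities
    n = sim.size(0)
    matched = sim.diag().mean().item()
    unmatched = (sim.sum() - sim.diag().sum()).item() / (n * n - n)
    return matched, unmatched
```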
To validate the role of MAVIS-Caption in second-stage training, we present both quantitative and
qualitative results for diagram captioning on the same 100 validation pairs in the first column of
Table 6. The use of MAVIS-Caption significantly enhances the diagram understanding capability.
This shows that MAVIS-Caption helps the LLM generate accurate captions from diagrams, improving
its ability to comprehend each visual token from Math-CLIP and align visual elements with textual
descriptions. We also evaluated MAVIS’s performance on MathVerse without second-stage training,
as shown in the second column of Table 6. Without MAVIS-Caption training, the CoT reasoning
quality of MAVIS-7B is somewhat compromised. This suggests that training the model in diagram
captioning improves its mathematical expression capability, enabling it to produce language
expressions that align with mathematical concepts. This foundational skill supports the generation of
subsequent CoT reasoning steps.
Table 5: Diagram Perception Enhancement by Math-CLIP, using MAVIS-Caption in the first
stage. We calculate the average cosine similarity among 100 validation diagram-caption pairs.
Vision Encoder Matched Pair ↑ Unmatched Pair ↓
CLIP
Math-CLIP
0.22
0.83
0.24
0.17
A.3.2 MAVIS-INSTRUCT
Redundant Text When curating questions for MAVIS-Instruct, we minimize the redundant content
within the question texts, which refers to the directly observable content in the diagram, e.g., the
Table 6: Diagram Understanding Enhancement and Mathematical Expression Enhancement in
LLM using MAVIS-Caption in the second Stage. We compare the METEOR and CIDEr scores for
diagram captioning on 100 validation samples, as well as the accuracy and CoT evaluation results on
MathVerse, both with and without the MAVIS-Caption training.
Training Data
Diagram-Caption Pairs
MathVerse
METEOR
CIDEr
Acc (%) CoT-E (%)
w MAVIS-Caption
w/o MAVIS-Caption
23.7
14.0
161.3
69.4
28.4
25.6
35.2
32.8
presence of shapes or intersection points of functions. Such information duplicates the visual
content and may allow MLLMs to bypass the process of diagram interpretation, thereby
harming the related skills. By largely avoiding redundant text in MAVIS-Instruct, our data forces
MLLMs to learn stronger diagram interpretation capabilities. In Table 7, we add redundant
texts (diagram captions) to the Data Engine Generated Problems for training, leading to the expected
performance drop.
CoT Rationales
For each instance in MAVIS-Instruct, we incorporate detailed rationales for
problem-solving, either generated by GPT-4 or our rule-based data engine. In Table 8, we remove all
intermediate rationales of each problem in MAVIS-Instruct, and train the model to directly output
the final answer. As shown, both the CoT evaluation and accuracy scores are degraded. This
demonstrates the significance of our rationale annotations, which effectively improves the CoT
reasoning capabilities of MLLMs.
Table 7: Diagram Interpretation Enhancement for MLLM,
using MAVIS-Instruct in the third stage. We compare the
results by adding redundant texts (diagram captions) to the
Data Engine Generated Problems within MAVIS-Instruct.
MAVIS-Instruct
MathVerse GeoQA FunctionQA
w/o Redundant Texts
w Redundant Texts
28.4
26.5
68.3
66.5
50.0
48.4
Table 8: Reasoning Capability
Enhancement for MLLM, using
MAVIS-Instruct in the third stage.
Training Data
MathVerse
Acc CoT-E
w Rationales
w/o Rationales
28.4
25.2
35.2
26.6
A.3.3 COMPARED TO GENERAL VISUAL INSTRUCTION DATA
Since Mammoth-2 is a highly capable LLM for mathematical tasks, one possible question is whether
simply integrating a vision encoder into Mammoth-2 and training it with conventional visual in-
struction tuning data would suffice for effectively solving visual-based mathematical problems. To
compare MAVIS data with other visual instruction tuning datasets and investigate the specific ben-
efits of MAVIS data in Mammoth-2 (7B), we conduct an ablation study. We utilize the data from
LLaVA-NeXT (558K for pre-training and 760K for fine-tuning) and compare it with our MAVIS data
(558K MAVIS-Caption for pre-training and 834K MAVIS-Instruct for fine-tuning). Performance is
evaluated using the accuracy metric on MathVerse, excluding the DPO training stage for fairness.
Table 9: Ablation study results for comparison between MAVIS Data and other visual instruction tuning data. The first row in the table represents the original LLaVA-NeXT-8B.
| Visual Encoder | LLM | Pre-training | Fine-tuning | MathVerse Acc (%) |
| CLIP | LLaMA-3 (8B) | LLaVA data | LLaVA data | 15.6 |
| CLIP | Mammoth-2 (7B) | LLaVA data | LLaVA data | 18.3 |
| CLIP | Mammoth-2 (7B) | LLaVA data | MAVIS-Instruct | 25.7 |
| CLIP | Mammoth-2 (7B) | MAVIS-Caption | MAVIS-Instruct | 26.4 |
| Math-CLIP | Mammoth-2 (7B) | MAVIS-Caption | MAVIS-Instruct | 27.5 |
Based on the results presented in Table 9, we make the following observations:
1. Mammoth-2 vs. LLaMA-3: Mammoth-2 achieves a +2.7 improvement in accuracy com-
pared to LLaMA-3, highlighting its prior knowledge and inherent capability in mathematical
problem solving.
2. Impact of MAVIS-Instruct: Fine-tuning with MAVIS-Instruct significantly enhances
performance by +7.4, underscoring the substantial advantage of our dataset for mathematical
reasoning tasks compared to general visual instruction datasets.
3. MAVIS-Caption and Math-CLIP: Using MAVIS-Caption for pre-training and employing
the Math-CLIP encoder further boosts performance, leading to enhanced mathematical
visual perception and reasoning capabilities. Overall, our MAVIS data contributes a +9.2
improvement in accuracy over Mammoth-2 trained with LLaVA data.
A.3.4 PERFORMANCE ACROSS DIFFERENT SUBJECTS
Although MAVIS-Instruct contains a substantial number of high-quality solid geometry problems
that were manually curated, our data engine only generates plane geometry and function problems.
Therefore, we aim to evaluate the performance of the MAVIS model across different mathematical
domains, specifically plane geometry, functions, and solid geometry. We provide the detailed subject
scores of MAVIS-7B on MathVerse, comparing the CoT evaluation score (note that the subject-level
accuracy scores are not publicly released) with other models on the official leaderboard.
Table 10: Performance comparison across different models on Plane Geometry, Solid Geometry, and Functions of MathVerse evaluation tasks.
| Model | All (CoT-Eval) | Plane Geometry | Solid Geometry | Functions |
| LLaVA-NeXT | 17.2 | 15.9 | 19.6 | 23.1 |
| ShareGPT4V | 17.4 | 16.9 | 15.0 | 20.2 |
| SPHINX-MoE | 22.8 | 24.5 | 15.8 | 19.5 |
| InternLM-XC2 | 25.9 | 26.2 | 20.1 | 23.7 |
| MAVIS-7B | 35.2 | 37.1 | 28.9 | 31.0 |
The results shown in Table 10 demonstrate that our model achieves leading performance across all
three subjects. Notably, its proficiency in plane geometry and functions can be attributed to the
training with our meticulously curated MAVIS dataset. Additionally, for solid geometry, which shares
similarities with plane geometry in both visual appearance and reasoning process, we believe that our
model effectively generalizes its learned knowledge and reasoning capabilities, leading to enhanced
performance in this domain as well.
A.3.5 SYNTHETIC DATA VS REAL DATA
In MAVIS-Instruct, we integrate both synthetic problems generated by the data engine (633K, 76%)
and real-world problems augmented with GPT (201K, 24%). The synthetic data is composed of
both geometry and function problems, while the real-world data primarily focuses on geometry. We
conduct an ablation study to assess the contributions of these data components, excluding the DPO
training stage to ensure fairness.
Table 11: Ablation study of synthetic and real data contributions to MAVIS-7B’s performance.
| Synthetic Data | Real-world Data | MathVerse Acc (%) | GeoQA | FunctionQA | MMMU-Math |
| ✓ | – | 22.6 | 44.2 | 37.1 | 34.6 |
| – | ✓ | 24.3 | 66.4 | 25.8 | 29.8 |
| ✓ | ✓ | 27.5 | 66.7 | 40.3 | 39.2 |
The results shown in Table 11 indicate that the two data sources exhibit complementary characteristics,
both playing a crucial role in achieving the final performance. Specifically, synthetic data significantly
enhances the results on FunctionQA and MMMU-Math, as these benchmarks include a substantial
proportion of function-related problems. Conversely, real-world data has a greater impact on GeoQA,
given its stronger alignment with the geometry-focused nature of this benchmark.
A.3.6 DATA SCALING
A good instruction tuning dataset should exhibit the characteristic of data scaling: as the dataset size
increases, the model trained on it should demonstrate progressively better performance. To verify that
MAVIS-Instruct possesses this property, we conduct an ablation study on the 834K MAVIS-Instruct
dataset by randomly sampling 25%, 50%, and 75% of the data for instruction tuning, excluding
the DPO stage. We then evaluate the models using the accuracy metric on MathVerse. The results,
as shown in Table 12, indicate that the performance of MAVIS-7B consistently improves as the
data scale increases. This demonstrates the promising potential of our dataset to further enhance
mathematical reasoning capabilities with larger-scale utilization.
Table 12: Performance of MAVIS-7B at different data proportions (MathVerse accuracy).
| Data proportion | 25% | 50% | 75% | 100% |
| Acc (%) | 23.3 | 25.7 | 26.9 | 27.5 |
Table 13: Comparison of different training settings.
| LLMs | Caption CIDEr | MathVerse Acc (%) |
| Frozen | 79.6 | 26.2 |
| Unfrozen | 146.2 | 28.1 |
| LoRA-based | 161.3 | 28.4 |
A.3.7 GENERALIZATION ABILITY
Although our Data Engine considers as many problem types as possible, it is inherently challenging
for a manually designed engine to cover all possible types of mathematical visual diagrams. To
alleviate this, we incorporate real-world data through manual collection and GPT augmentation, which
enhances the generalization capability of MAVIS-7B. In Figure 11, we select examples
involving inscribed triangles (which cannot be generated via our data engine) from publicly available
math benchmarks and test MAVIS-7B on these examples, where our model performs accurate
reasoning.
Figure 11: Examples for MAVIS-7B solving math problems with circumcircles of triangles.
A.3.8 ENHANCING VISION-ONLY PERFORMANCE
To evaluate the impact of OCR datasets on MAVIS-7B’s performance, we conducted an ablation
study focusing on vision-only and vision-dominant problems in MathVerse. These problems require
the model to interpret question texts rendered directly in diagrams, thus relying heavily on OCR
capabilities. MAVIS-7B, however, was initially not trained with OCR-specific datasets, limiting its
performance in these tasks.
In contrast, generalist models like LLaVA-NeXT include extensive OCR datasets such as OCRVQA,
DocVQA, and SynDog-EN, which significantly enhance their OCR capabilities. To bridge this gap,
we incorporated OCR datasets (OCRVQA and DocVQA) in our third-stage instruction tuning to
improve MAVIS-7B’s OCR performance.
The results, as shown in Table 14, indicate a notable improvement in vision-dominant and vision-only
problems for MAVIS-7B after the inclusion of OCR datasets, highlighting the potential of better OCR
integration for further boosting its performance. In Figure 12, we also showcase some failure cases
of our MAVIS-7B with OCR training on vision-only problems. Although the vision-only results are
improved via the OCR instruction dataset, the model still suffers from limited perception capabilities
of questions and visual elements within the diagram. This indicates that the OCR capability is still
the bottleneck of vision-only performance. We leave further enhancing the OCR capabilities of MAVIS
for mathematical visual elements to future work.
Figure 12: Failure cases of MAVIS-7B with OCR training on vision-only problems.
Table 14: Impact of OCR data on solving vision-only problems.
| Model | LLM Size | All | Text Dominant | Text Lite | Vision Intensive | Vision Dominant | Vision Only |
| LLaVA-NeXT | 8B | 15.6 | 19.4 | 15.2 | 16.8 | 15.2 | 11.3 |
| LLaVA-NeXT | 110B | 24.5 | 31.7 | 24.1 | 21.0 | 22.1 | 20.7 |
| MAVIS-7B | 7B | 28.4 | 41.6 | 29.5 | 27.9 | 24.7 | 18.3 |
| MAVIS-7B w/ OCR | 7B | 28.9 | 40.8 | 29.2 | 27.4 | 26.2 | 21.1 |
A.3.9 BASE LLM
We investigate different LLMs for the MAVIS model. As shown in Table 15, MAVIS is not very
sensitive to LLM choices, and still surpasses previous models with the same LLM.
Table 15: Performance Comparison using Different LLMs. We compare the accuracy and CoT evaluation results on MathVerse.
| Method | Base LLM | Acc | CoT-E |
| SPHINX-Plus | LLaMA2-13B | 12.2 | 14.0 |
| ShareGPT4V | Vicuna-13B | 13.1 | 17.4 |
| InternLM-XC2. | InternLM2-7B | 16.5 | 25.9 |
| MAVIS | LLaMA2-13B | 24.5 | 30.7 |
| MAVIS | Vicuna-13B | 24.8 | 30.6 |
| MAVIS | InternLM2-7B | 28.0 | 33.8 |
| MAVIS | MAmmoTH2 | 28.4 | 35.2 |
A.3.10 DIFFERENT TRAINING SETTINGS
Our training strategy is similar to LLaVA, but with key differences in the pre-training stage: we
train both the projection layer and the LoRA-based LLM, whereas LLaVA only trains the projection
layer. This design choice stems from the fundamental differences between general visual tasks and
mathematical tasks:
1. For general visual tasks (e.g., LLaVA), training MLLMs typically requires the LLM to generate
daily natural language responses, such as descriptive captions or instruction-following outputs.
These outputs often rely on pre-existing knowledge within the pre-trained LLM. As a result, in
LLaVA, there is no need to unfreeze the LLM to learn new types of outputs.
2. In contrast, for mathematical domains, LLMs need to generate math-specific responses, such as
geometric descriptions, functional explanations, formulas, and theorems. These outputs often
involve domain-specific knowledge not inherent in pre-trained LLMs. Given this, we incorporate
learnable LoRA layers to infuse new knowledge into the LLM, enhancing its capability to produce
high-quality mathematical expressions. Concurrently, we aim to prevent the LLM from overfitting
to diagram captioning tasks during alignment. Therefore, using LoRA-based tuning allows us to
preserve the LLM’s generalizable pre-trained language knowledge while injecting specialized
math-specific capabilities.
To further investigate the impact of different training settings on model performance, we conduct
an ablation study comparing various LLM training settings during the alignment stage. We evaluate
two tasks: the CIDEr score for diagram captioning on 100 validation samples (following the same
setting as in Table 6 of the Appendix) and the accuracy score on MathVerse. The results, as shown in
Table 13, indicate that the LoRA-based approach performs best, enabling MLLMs to generate high-
quality mathematical captions while preserving pre-trained knowledge for improved problem-solving
capabilities.
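As a concrete illustration of the LoRA-based setting, the sketch below shows how LoRA adapters could be attached to the base LLM during alignment using the PEFT library. The checkpoint name, rank, and target modules are placeholders, not values reported for MAVIS.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Hypothetical checkpoint name and LoRA hyperparameters -- the exact values used for MAVIS are not
# specified in this section, so treat everything below as placeholders.
base_llm = AutoModelForCausalLM.from_pretrained("TIGER-Lab/MAmmoTH2-7B", torch_dtype=torch.bfloat16)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections of the base LLM
    task_type="CAUSAL_LM",
)
llm = get_peft_model(base_llm, lora_config)  # only the LoRA adapters (plus, separately, the projector) train
llm.print_trainable_parameters()
```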
A.3.11 ENHANCING A PRE-TRAINED MLLM
To investigate whether our curated data and training techniques can improve the mathematical perfor-
mance of a pre-trained large model (LLaVA-NeXT), we conducted an ablation study. Specifically,
we progressively employed MAVIS-Instruct for instruction tuning, followed by DPO alignment
on top of LLaVA-NeXT-8B, with both training stages performed for one epoch using a learning
rate of 1 × 10−5. The results, as shown in Table 16, demonstrate that these two continual training
stages significantly enhance LLaVA-NeXT’s ability to solve mathematical problems, with notable
improvements across all evaluation categories.
Table 16: Performance improvement of LLaVA-NeXT-8B with MAVIS-Instruct and DPO alignment.
| Model | LLM Size | All | Text Dominant | Text Lite | Vision Intensive | Vision Dominant | Vision Only |
| LLaVA-NeXT | 8B | 15.6 | 19.4 | 15.2 | 16.8 | 15.2 | 11.3 |
| + MAVIS-Instruct | 8B | 22.8 | 32.3 | 25.3 | 24.6 | 18.3 | 14.2 |
| + DPO | 8B | 24.0 | 33.7 | 26.9 | 25.4 | 19.1 | 15.1 |
A.4 DETAILS OF AUTOMATIC DATA ENGINE
A.4.1 DIAGRAM GENERATION
In this section, we detail the implementation specifics of the process for generating diagrams related
to plane geometry, analytic geometry, and function domains.
Plane Geometry Diagram.
Inspired by previous multi-hop reasoning methods (Kazemi et al., 2023;
Wei et al., 2022; Nye et al., 2021), we employ an iterative generation method over logical theories
to generate plane geometric images along with corresponding captions and question-answering
pairs, whose complexity can be controlled across multiple axes. Specifically, we first define a set of
fundamental geometric shapes in Figure 13.
Within each shape, new basic shapes can be generated by extending a particular edge. For each basic
shape, we initially define a meta reasoning process:
$$ O_{n-1},\; C^{i}_{m_{n-1}} \xrightarrow{\,E^{i}_{m_{n-1}}\,} O_{n}, \qquad i \in [1, z], \qquad (1) $$
Figure 13: The set of fundamental shapes in plane geometry diagrams, whose straight edges can be
extended into other basic shapes.
where O represents the initial side length of the shape, Cm denotes the additional conditions required
to complete meta reasoning, and Em provides a detailed explanation of the meta reasoning process.
For example, when considering an isosceles triangle as the (n − 1)th shape in a sequence, the vertex
angle is still required as Cm to reason about the base side length, which is then used to expand to the nth shape,
with Em serving as the explanation of this process. The variable z indicates that there are z sets of
possible meta reasoning steps for the shape, and n indicates the length of the generating sequence, which is
also the number of reasoning hops required to answer the question. The initial side, extend side,
and additional conditions for the meta-reasoning of each basic shape can be referred to in Figure 13. In
the final shape, question-answering pairs pertinent to this shape can be generated as
$$ O_{n},\; C^{j}_{q_{n}},\; Q^{j}_{n} \xrightarrow{\,E^{j}_{q_{n}}\,} A^{j}_{n}, \qquad j \in [1, m], \qquad (2) $$
where Cq represents the additional conditions required to solve the problem, while Q and A denote
the question and answer, respectively. Eq refers to the detailed explanation of the solving process.
The variable m indicates that there are m question-answering pairs and corresponding detailed
explanations within the shape. By applying meta reasoning to the (n − 1)th shape, the initial side
length of the nth shape can be deduced. Therefore, for a complex composite figure consisting of n
shapes, the overall question-answering pair can be defined as follows:
$$ O_{1},\; \sum_{k=1}^{n-1} C_{m_{k}},\; C^{j}_{q_{n}},\; Q^{j}_{n} \xrightarrow{\,E^{j}_{q_{n}}\,} A^{j}_{n}. \qquad (3) $$
Each shape defines a sufficient number of conditions, explanations, and answers to ensure the diversity
of the generated question-answering pairs. Based on the aforementioned rules, controlling the length
of the generation sequence can regulate the number of reasoning steps, and controlling the type of
questions can manage the knowledge required for solving the problems. Thus, we can generate
questions of varying difficulty levels, which can also be illustrated in Figure 14a.
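To make the iterative composition in Equations (1)-(3) concrete, the following is a heavily simplified Python sketch, not the actual data engine: the shape names, propagation rules, and question templates are hypothetical stand-ins for the real template sets.

```python
import random

# Hypothetical, heavily simplified shape templates. Each template records the extra condition C_m
# needed to propagate the known side to the next shape, a propagation rule (the "meta reasoning"),
# and one question template for when the shape is last in the sequence.
SHAPES = {
    "equilateral triangle": {
        "extra_condition": None,
        "propagate": lambda side, rng: side,                      # every side equals the initial side
        "question": "What is the perimeter of the {name}?",
    },
    "isosceles triangle": {
        "extra_condition": "the vertex angle is given",
        "propagate": lambda side, rng: round(side * rng.uniform(0.5, 1.5), 1),
        "question": "What is the length of the base of the {name}?",
    },
    "square": {
        "extra_condition": None,
        "propagate": lambda side, rng: side,
        "question": "What is the area of the {name}?",
    },
}

def generate_problem(num_hops: int, seed: int = 0) -> dict:
    """Chain `num_hops` shapes as in Eqs. (1)-(3): accumulate the conditions C_m used at each hop,
    build up the caption, and ask one question about the final shape."""
    rng = random.Random(seed)
    side = rng.randint(3, 12)                                     # O_1: initial side of the first shape
    conditions = [f"the initial side length is {side}"]
    caption = []
    for hop in range(num_hops):
        name, spec = rng.choice(sorted(SHAPES.items()))
        caption.append(f"shape {hop + 1} is a {name} built on the previous side")
        if spec["extra_condition"]:
            conditions.append(f"{name}: {spec['extra_condition']}")
        side = spec["propagate"](side, rng)                       # deduce O_n for the next shape
    return {
        "caption": "; ".join(caption),
        "conditions": conditions,                                 # O_1 plus the accumulated C_m's
        "question": spec["question"].format(name=name),
    }

print(generate_problem(num_hops=2))
```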
Analytic Geometry Diagram. The image generation method for analytic geometry is relatively
straightforward. First, we randomly select a range within the coordinate system: the minimum value
of x is chosen as an integer between [−12, −8], and the maximum value of x is chosen as an integer
between [8, 12]; the range for y is the same as for x. Then, we define the following basic shapes:
point, line segment, line, circle, ellipse, rectangle, square, polygon, and sector. During the generation
process, we select a number between 1 and 4 as the number of shapes to generate. The generation
rule is that nonlinear shapes other than points, line segments, and lines must not overlap.
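A minimal sketch of this sampling procedure is given below; the bounding-box overlap test and the size range of each shape are implementation assumptions rather than details taken from the actual engine.

```python
import random

BASIC_SHAPES = ["point", "line segment", "line", "circle", "ellipse", "rectangle", "square", "polygon", "sector"]
LINEAR_SHAPES = {"point", "line segment", "line"}  # allowed to overlap with anything

def sample_analytic_scene(seed: int = 0) -> dict:
    """Pick the axis ranges, then draw 1-4 shapes, rejecting overlaps between nonlinear shapes.
    A crude bounding-box test stands in for a real geometric overlap check."""
    rng = random.Random(seed)
    x_min, x_max = rng.randint(-12, -8), rng.randint(8, 12)
    y_min, y_max = rng.randint(-12, -8), rng.randint(8, 12)
    target = rng.randint(1, 4)
    shapes, boxes = [], []
    for _ in range(100):                                   # cap attempts so rejection sampling terminates
        if len(shapes) == target:
            break
        kind = rng.choice(BASIC_SHAPES)
        cx, cy, r = rng.uniform(x_min, x_max), rng.uniform(y_min, y_max), rng.uniform(1.0, 3.0)
        box = (cx - r, cy - r, cx + r, cy + r)
        if kind not in LINEAR_SHAPES:
            overlaps = any(not (box[2] < b[0] or b[2] < box[0] or box[3] < b[1] or b[3] < box[1]) for b in boxes)
            if overlaps:
                continue                                   # resample: nonlinear shapes must not overlap
            boxes.append(box)
        shapes.append({"type": kind, "center": (round(cx, 1), round(cy, 1)), "size": round(r, 1)})
    return {"x_range": (x_min, x_max), "y_range": (y_min, y_max), "shapes": shapes}

print(sample_analytic_scene())
```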
Function Diagram. The generation of function graphs is also straightforward as shown in Fig-
ure 14b. We define the following basic functions, each with a set of parameters that can be randomly
selected:
Sine Function: y = A · sin(f · x + ϕ), where the amplitude A is a random integer between 1 and 3, the
frequency f is either 1 or 2, and the phase ϕ is a random integer between 0 and 2π.
Cosine Function: y = A · cos(f · x + ϕ), where the amplitude A is a random integer between 1 and 3, the
frequency f is either 1 or 2, and the phase ϕ is a random integer between 0 and 2π.
Tangent Function: y = A · tan(f · x + ϕ), where the amplitude A is a random integer between 1 and 3, the
frequency f is either 1 or 2, and the phase ϕ is a random integer between 0 and 2π.
Polynomial Function: P(x) = a_n x^n + a_{n−1} x^(n−1) + · · · + a_1 x + a_0, where the degree n is a random
integer between 1 and 4. The coefficients a_i are randomly selected integers ranging from −3 to 3.
Piece-wise Function: Piece-wise polynomial functions are divided into 2 or 3 segments, with each segment’s
parameters identical to those of a polynomial function.
Logarithmic Function: y = a · log_b(c · x + d), where the coefficient a is randomly chosen from
{−3, −2, −1, 1, 2, 3}, the base b is randomly chosen from {2, 10, ⌊e⌋}, the coefficient c is a random integer
between 1 and 3, and the coefficient d is a random integer between 1 and 6, ensuring that c · x + d is positive.
Absolute Function: y = |a · x + b|, where a and b are random integers between −5 and 5.
Figure 14: The pipeline of our data engine, consisting of (a) the generation of plane geometry diagrams and
(b) the generation of function diagrams. (a) A single process for generating plane geometry diagrams and
corresponding question-answering pairs as well as image captions. In this example, the generation sequence
length is specified as 2. The initial side length is painted in pink, Cm is painted in green, while Cq is painted in
yellow. Whenever a new basic shape is generated, its caption is appended to the previous caption. (b) A single
process is used for generating function diagrams along with the corresponding question-answer pairs and image
captions. Once the functional expression is determined, all its properties can be directly computed, and the
function plot can be generated accordingly. The caption for the function diagram simply states the functional
expression.
Figure 15: Examples of analytic geometry diagram captions.
We first determine the domain range to be displayed on the function graph. For trigonometric
functions, the domain is set to [−π, π]. For piece-wise polynomial functions, the minimum value of
x is a random integer between [−12, −8], and the maximum value of x is a random integer between
[8, 12]. For other functions, the minimum and maximum values of x are random integers within
the ranges of [−6, −3] and [3, 6], respectively. During the plotting process, we calculate the local
maxima, minima, and zeros of the function by iterating through the domain. We then render the
x-coordinates of these extrema and zeros on the x-axis of the function graph.
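A small numerical sketch of this "iterate through the domain" step is shown below for the polynomial case; the grid resolution and the sign-change tests are implementation assumptions.

```python
import numpy as np

def function_plot_data(coeffs, x_min=-6.0, x_max=6.0, num=2001):
    """Evaluate a polynomial on a dense grid and locate approximate zeros and local extrema by
    scanning for sign changes, mirroring the iteration through the domain described above."""
    xs = np.linspace(x_min, x_max, num)
    ys = np.polyval(coeffs, xs)
    dys = np.gradient(ys, xs)
    zeros = xs[:-1][np.sign(ys[:-1]) != np.sign(ys[1:])]       # sign changes of y
    extrema = xs[:-1][np.sign(dys[:-1]) != np.sign(dys[1:])]   # sign changes of y'
    return xs, ys, zeros, extrema

rng = np.random.default_rng(0)
degree = int(rng.integers(1, 5))                               # degree between 1 and 4, as above
coeffs = rng.integers(-3, 4, size=degree + 1)                  # integer coefficients in [-3, 3]
xs, ys, zeros, extrema = function_plot_data(coeffs)
print("coefficients:", coeffs, "zeros near:", np.round(zeros, 2), "extrema near:", np.round(extrema, 2))
```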
A.4.2 MAVIS-CAPTION
In this section, we detail how the captions corresponding to images in the MAVIS-Caption Dataset
are generated with our automatic data engine.
Plane Geometry Caption. Based on the generation process described in Section A.4.1, when
generating each shape, a caption is randomly selected from a set of captions for that shape and some
connecting words are randomly added. We also randomly select some edges or angles and state their
measurements in the caption. After generating the raw caption, we use GPT-3.5 to refine it, enhancing
its linguistic structure and semantic diversity. An example is shown in Figure ??.
Function Caption. According to the function graph generation process described in Section A.4.1,
we record the function’s zeros and extrema. Additionally, we also record the function’s expression
and asymptotes. These attributes are incorporated into a randomly selected caption template to form
the function graph’s caption. Some examples are provided in Figure 16.
Analytic Geometry Caption. For each shape, we maintain a set of caption templates that describe
the shape’s type, coordinate position, and other attributes. In the generation process described in
Section A.4.1, we select a template and randomly add some diverse connecting words to form a
complete caption. Examples of some captions are shown in Figure 15.
A.4.3 MAVIS-INSTRUCT
Manual Collection Augmented by GPT-4. To complement the dataset with real-world problem-
solving scenarios, we hire 8 human experts to manually collect visual math problems from various
public sources1,2,3, spanning plane geometry, analytic geometry, and functions. For each problem, we try
to obtain its content as completely as possible, including questions, diagrams, answers, and rationales
if available. The collection process consists of the following steps:
1. Problem Collection: We gathered problems from three public sources as comprehensively as
possible, including questions, diagrams, answers, category information, and rationales where
available. The problems are primarily at the high-school level, covering plane geometry and
functions (including analytic geometry).
Figure 16: Function diagram captions.
2. Data Verification: Based on their initial categories (subject, subfield, and difficulty level), the
problems were organized into distinct groups. Six expert annotators were tasked with meticulously
verifying the correctness and completeness of each problem. They refined the detailed chain-of-
thought (CoT) rationales and ensured that there was no overlap with evaluation data by visually
inspecting the diagrams. This rigorous verification process resulted in a total of 4K verified
problems.
3. Text-lite Construction: To optimize the problems for training mathematical visual capabilities,
the 4K problems were processed using GPT-4V with a customized prompt (as shown in Figure
15). This step involved removing redundant information from the question text to create concise,
text-lite problems, specifically tailored to our training objectives.
Then, we feed all the related information into GPT-4V to eliminate the redundant information
within the text questions, constructing the text-lite version of the problems with the prompt in Figure 17. Next,
we design three types of prompts for GPT-4 to augment 15 questions with choices (10
multiple-choice and 5 binary-choice, i.e., ‘True’ or ‘False’) and 5 free-form questions, respectively,
as shown in Figure 18. We do not adopt GPT-4V here, since GPT-4V itself can misunderstand
diagrams, which would lead to low-quality data augmentation. The newly generated problems contain detailed CoT
rationales and diverse question forms.
Existing Datasets Augmented by GPT-4. Previous efforts have been made to provide some small-
scale, plane geometry datasets, e.g., GeoQA (Chen et al., 2021c), GeoQA+ (Chen et al., 2021a),
and Geometry3K (Lu et al., 2021). Although they are limited in data scale for tuning MLLMs
and include no rationales, we can also regard them as a seed dataset and adopt GPT-4 to augment
larger-scale training data. We do not utilize GPT-4V here for the same reason aforementioned. In
detail, we design 3 types of question generation approaches using different prompts, as shown in
Figure 19. For Geometry3K, as the question texts are normally brief and contain marginal descriptive
information, posing challenges for GPT-4 to understand the diagram, we only augment them to
generate binary-choice questions, i.e., ‘True’ or ‘False’. For GeoQA+, we can leverage the sufficient
redundant information within their texts to generate more diverse and accurate multi-choice and
free-form questions. Likewise, GPT-4 can produce CoT rationales for each problem.
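The sketch below shows how such a prompt-based augmentation call could look with the OpenAI Python client; the prompt template, sampling temperature, and helper name are illustrative placeholders standing in for the prompts shown in Figures 18 and 19.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

def augment_seed_problem(question: str, answer: str, prompt_template: str, model: str = "gpt-4") -> str:
    """Send one seed problem through an augmentation prompt (standing in for the prompts of
    Figures 18/19) and return the newly generated questions with CoT rationales as raw text."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt_template.format(question=question, answer=answer)}],
        temperature=0.7,
    )
    return response.choices[0].message.content
```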
1https://homework.study.com
2https://www.ixl.com/math
3https://mathspace.co/us
Figure 17: The prompt used to construct the text-lite version of manually collected visual math problems.
Figure 18: We design different types of prompts for GPT-4 to augment 15 multiple-choice questions
and 5 free-form questions, respectively.
Data Engine Captions Annotated by GPT-4. Given the delicately designed data engine for
automatic diagram-caption creation, we can utilize the generated large-scale pairs to annotate question-
answering data using GPT-4V. Different from the previous two sources that augment questions based
on questions, we utilize the GPT-4V model here on caption data for two reasons: first, the detailed
caption from our data engine can well guide GPT-4V for relatively higher-quality visual embedding;
second, the visual input serves as guidance to provide additional spatial information for broad
question forms. As shown in Figure 27 and Figure 28, we adopt different prompts for function and
plane geometry problems, ensuring that the generated question-answering data is of high quality for
instruction tuning.
Data Engine Generated Problems:
PLANE GEOMETRY. Based on the generation process described in Section A.4.1, we pose questions
about the final shape in the generation sequence. We designed 6 types of questions: finding the
perimeter, finding the area, finding the base length, finding the angle, finding the arc length, and
finding the extended edge length. Each type of question has a set of templates that can be randomly
selected, as shown in Figures 21-26. As for the answer and analysis, each shape has a set of templates
for different types of questions to choose from, as shown in Section A.4.1.
Figure 19: We design 3 types of question generation approaches using different prompts to augment
existing visual mathematical datasets.
Figure 20: The Text Dominant, Text Lite, Vision Dominant, and Vision Only versions of the same
question. Text Dominant and Text Lite use the same image. In the text, the necessary conditions for
solving the problem are highlighted in red, while redundant descriptive conditions are highlighted in
blue. In the Vision Only version, the question is rendered in the image, with no textual format.
To further enhance the model’s understanding of different question forms and better utilize the
diverse modal information in the text and images, we divide the plane geometry questions generated
by the Data Engine into four versions, following MathVerse (Zhang et al., 2024b): Text Dominant,
Text Lite, Vision Dominant, and Vision Only.
Text Dominant: We marked all the conditions required for solving the problem in the diagram and also
described these conditions in the text, along with some redundant descriptive text.
Text Lite: All the conditions required for solving the problem are randomly divided into two parts: one part
is marked in the diagram, and the other part is described in the text. In other words, the conditions in the
diagram and the conditions in the text do not overlap.
Vision Dominant: All the conditions required for solving the problem are marked in the diagram, while the
text only contains the question without any conditions.
Vision Only: Not only are all the conditions required for solving the problem marked in the diagram, but the
question is also rendered in the diagram, leaving the text portion empty.
The differences among the four versions of the same question are illustrated in Figure 20. Each basic
shape will retain a set of redundant conditions. During the shape generation process, there is a 50%
probability of including these redundant conditions.
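For illustration, a minimal sketch of splitting conditions between diagram and text for the four variants is given below; diagram annotation is abstracted away as a list of strings, and the 50% choices are only a loose stand-in for the rules described above.

```python
import random

def build_variant(conditions, redundant, question, version, seed=0):
    """Return which conditions get marked in the diagram and which appear in the question text for
    one of the four variants; diagram annotation itself is abstracted as a list of strings."""
    rng = random.Random(seed)
    if version == "Text Dominant":
        # everything marked in the diagram AND restated in the text, plus some redundant descriptions
        kept_redundant = [r for r in redundant if rng.random() < 0.5]
        return {"diagram": list(conditions), "text": conditions + kept_redundant + [question]}
    if version == "Text Lite":
        in_diagram = [c for c in conditions if rng.random() < 0.5]
        in_text = [c for c in conditions if c not in in_diagram]      # the two parts do not overlap
        return {"diagram": in_diagram, "text": in_text + [question]}
    if version == "Vision Dominant":
        return {"diagram": list(conditions), "text": [question]}
    if version == "Vision Only":
        return {"diagram": conditions + [question], "text": []}       # question rendered in the image
    raise ValueError(f"unknown version: {version}")

example = build_variant(
    conditions=["AB = 7.0", "angle ACB = 45 degrees"],
    redundant=["ABC is a triangle"],
    question="What is the length of AC?",
    version="Text Lite",
)
print(example)
```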
Figure 21: Perimeter problem templates.
Figure 22: Area problem templates.
Figure 23: Base length problem templates.
FUNCTION. All functions will be examined with two types of questions: finding the derivative and
finding the extrema. After obtaining the derivative, we calculate whether the derivative has zeros
within the given domain. The presence of zeros determines the method for calculating the extrema.
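A short symbolic sketch of these two question types (derivative and extrema) is given below; the endpoint handling and the second-derivative test are simplifying assumptions, and degenerate cases are ignored.

```python
import sympy as sp

x = sp.symbols("x")

def derivative_and_extrema(f, lo=-6, hi=6):
    """Differentiate f, find the real zeros of f' inside [lo, hi], and report extrema accordingly;
    degenerate cases (e.g. a vanishing second derivative) are ignored in this sketch."""
    df = sp.diff(f, x)
    critical = [r for r in sp.solve(df, x) if r.is_real and lo <= r <= hi]
    if critical:
        d2f = sp.diff(df, x)
        extrema = [("max" if d2f.subs(x, c) < 0 else "min", c, f.subs(x, c)) for c in critical]
    else:
        # no interior critical points: the extrema over the domain sit at the endpoints
        extrema = [("endpoint", lo, f.subs(x, lo)), ("endpoint", hi, f.subs(x, hi))]
    return df, extrema

print(derivative_and_extrema(x**3 - 3 * x))   # f'(x) = 3*x**2 - 3, critical points at x = -1 and 1
```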
Figure 24: Angle problem templates.
Figure 25: Arc length problem templates.
Figure 26: Extend side length problem templates.
Figure 27: The function prompt for GPT-4V and the generated questions and rationales.
Figure 28: The geometry prompt for GPT-4V and the generated questions and rationales.
Figure 29: The initial side, extend side, and additional conditions for meta-reasoning of each basic
shape. Some special shapes are not extended and only appear in the last position of the generation
sequence, thus their extend side is ∅.
|
4hPwLg7zD3 | Fourier Head: Helping Large Language Models Learn Complex Probability Distributions | [ 6, 5, 6, 8 ] | Published as a conference paper at ICLR 2025
FOURIER HEAD:
HELPING LARGE LANGUAGE MODELS
LEARN COMPLEX PROBABILITY DISTRIBUTIONS
Nate Gillman*,1 , Daksh Aggarwal*,1, Michael Freeman1, Saurabh Singh2, Chen Sun1,2
1Brown University, 2Google DeepMind
ABSTRACT
As the quality of large language models has improved, there has been increased
interest in using them to model non-linguistic tokens. For example, the Decision
Transformer recasts agentic decision making as a sequence modeling problem,
using a decoder-only LLM to model the distribution over the discrete action space
for an Atari agent. However, when adapting LLMs to non-linguistic domains, it
remains unclear if softmax over discrete bins captures the continuous structure of
the tokens and the potentially complex distributions needed for high quality token
generation. We introduce a neural network layer, constructed using Fourier series,
which we can easily substitute for any linear layer if we want the outputs to have a
more continuous structure. We perform extensive analysis on synthetic datasets, as
well as on large-scale decision making and time series forecasting tasks. We also
provide theoretical evidence that this layer can better learn signal from data while
ignoring high-frequency noise. All of our results support the effectiveness of our
proposed Fourier head in scenarios where the underlying data distribution has a
natural continuous structure. For example, the Fourier head improves a Decision
Transformer agent’s returns across four benchmark Atari games by as much as
377%, and increases a state-of-the-art time series foundation model’s forecasting
performance by 3.5% across 20 benchmarks unseen during training. We release
our implementation at https://nategillman.com/fourier-head.
Fourier Head Learns Higher Quality Densities
Figure 1: We task an MLP with learning to approximate a continuous bimodal density using a
categorical distribution and a cross-entropy objective. We observe that a standard linear head fails
to distinguish between the two modes, and overfits to high-frequency noise in the training set. In
contrast, our proposed Fourier head learns a smoother, more accurate categorical distribution.
1 INTRODUCTION
Human language can be viewed as a discretization for a continuous, often probabilistic represen-
tation of the world that is construed in our mind (Spivey, 2008). The continuous structure can be
partially captured by language models with their token embeddings, where “nearby” tokens are em-
bedded to have latent representations with high cosine similarities. The embeddings themselves are
acquired as a result of the data-driven learning process. Can we, based on rich prior knowledge
about the continuous world, inform the language model about the underlying continuity of its in-
puts, like the fact that the word “emerald” is more similar to “shamrock” than “pine” when they are
used to describe different shades of green?
∗Equal contribution. Correspondence to: nate [email protected], [email protected].
As large language models (LLMs) have evolved into
“foundation models” that are adapted to a diverse range of tasks, tokens that are a priori continuous
are more essential than ever, for example for arithmetic computations (Liu et al., 2023), decision
making with continuous or discrete actions (Chen et al., 2021), future anticipation and time-series
forecasting (Ansari et al., 2024), or simply drawing random numbers given a probability distribu-
tion (Hopkins et al., 2023).
We view the problem of informing LLMs to utilize the continuity prior from the perspective of prob-
ability density estimation. For simplicity, we adopt the standard next token prediction framework
whose training objective is softmax cross-entropy. Assuming non-overlapping vocabulary, continu-
ous values can be discretized via binning (Ansari et al., 2024). On one hand, the linear head adopted
by LLMs independently projects each token into probabilities, and has the expressive power to flex-
ibly approximate arbitrary probability density functions subject to the “quantization” errors. The
linear head however does not consider any continuous structure that resides among the tokens (i.e.
a random re-shuffle of the tokens in the vocabulary would not change the predictions). On the other
hand, a head based on a parameterized distribution (e.g. Gaussian or Gaussian Mixtures) naturally
incorporates the continuous structure, but is often too simple (and overly “smooth”) to account for
multi-modal distributions for future prediction or decision making. Can we design a head that is
both expressive and incorporates continuous structures?
We introduce the Fourier head, motivated by Fourier series as universal function approximators.
The Fourier head learns a continuous probability density function, and returns a discrete ap-
proximation of it. Intuitively, returning a discretization of a continuous density in this way allows
the classification head to better model the low-frequency signals from the training data, because the
Fourier head is forced to approximate the categorical distributions using a finite number of frequen-
cies. At a high level, the Fourier head inputs x ∈ Rn, uses a linear layer to learn the coefficients
for a Fourier series with N frequencies over [−1, 1], and quantizes the interval [−1, 1] into m equal
bins. Then, the Fourier head evaluates the learned Fourier PDF at those m bin center points, and
returns those m likelihoods as a categorical distribution. The Fourier head builds upon the Fourier
Basis Density Model (De la Fuente et al., 2024).
Our main contributions are as follows.
Contribution #1: We reveal the underlying principle on the trade-off between the Fourier head’s
expressive power and the “smoothness” of the predicted distributions. We prove a theorem which
demonstrates a scaling law for the Fourier head. Namely, as we increase the quantity of Fourier
coefficients learned by the Fourier head, the layer can model increasingly complicated distributions;
however, the Fourier head will necessarily fit to more high-frequency noise, thereby outputting cat-
egorical distributions which are less smooth.
Contribution #2: We propose a practical implementation of the Fourier head capable of sequential
prediction tasks by modeling complex multi-modal distributions. Additionally, we propose strate-
gies to improve the layer’s performance, including Fourier coefficient norm regularization, weight
initialization, and the choice of how many Fourier frequencies to use.
Contribution #3: We demonstrate the effectiveness of the Fourier head on two large scale tasks,
where intuitively a continuity inductive bias over the output dimensions ought to help the model’s
generation performance. In the first task, an offline RL agent which uses a decoder-only transformer
to model the next-action distribution for an Atari agent, we improve returns across four benchmark
games by as much as 377%. In the second, we outperform a state-of-the-art time series foundation
model on zero-shot forecasting by 3.5% across a benchmark of 20 datasets unseen during training.
2 FOURIER HEAD
2.1 FOURIER HEAD: MOTIVATION
When practitioners apply LLMs to model complex probability distributions over non-linguistic to-
kens, a standard technique is to quantize the latent space into m tokens and learn a conditional
categorical distribution over those tokens. We share two examples here:
Example 1: The Decision Transformer (Chen et al., 2021) models an Atari agent’s behavior in the
Seaquest game by learning a categorical distribution over the 18 possible actions (move left, move
right, shoot left, etc.). They use a decoder-only transformer architecture.
Example 2: The Chronos time series foundation model (Ansari et al., 2024) models the distribution
of next numerical values by quantizing the closed interval [−15, 15] into 4096 bins, and learning a
categorical distribution over those bins. They use an encoder-decoder transformer.
In a pure language modeling task, token ID 1000 and token ID 1001 likely represent unrelated
words. However, in a task where the token IDs represent numerical values, the token ID 1000 and
1001 would represent numbers that are close together.
The final layers of an LLM for such a task are generally a linear layer, followed by softmax, fol-
lowed by cross-entropy loss. We hypothesize that in scenarios where nearby token IDs encode
similar items, an inductive bias that encourages them to have similar probabilities will improve per-
formance. A generic linear layer learns an unstructured categorical distribution and thereby allows
more arbitrary probabilities. In this work, we propose to give the model this inductive bias by let-
ting the classification head learn a categorical distribution as the discretization of a continuous
learned function from a suitably flexible class. We consider the very flexible class of truncated
Fourier series with N frequencies. These are functions of the form
$$ f(x) = a_0 + \sum_{k=1}^{N} \big( a_k \cos(k\pi x) + b_k \sin(k\pi x) \big). \qquad (2.1) $$
Fourier series are a classical tool for solving quantitative problems (Stein & Shakarchi, 2003) be-
cause functions like Equation 2.1 are universal function approximators, with the approximation
improving as N increases.
2.2 FOURIER HEAD: DEFINITION
We now propose a replacement for the generic linear layer token classification head, built using
Fourier series. We call our replacement the Fourier Series Classification Head, or the Fourier
head for short. The Fourier head inputs any vector x ∈ Rn, and outputs a categorical distribution
in Rm. For a high-level summary of how it works: the Fourier head inputs x ∈ Rn, uses a linear
layer to extract the coefficients for a Fourier series over [−1, 1], quantizes the interval [−1, 1] into
m equal bins, evaluates the learned Fourier PDF at those m bin centerpoints, and returns those
m likelihoods as a categorical distribution. We formally define this layer in Algorithm 1. The
Fourier head is constructed using the Fourier Basis Density Model from (De la Fuente et al., 2024).
justification for how learning the autocorrelation
For more details on the original method (e.g.
coefficients guarantees that the Fourier series has integral 1, and justification for normalizing the
Fourier coefficients by ℜ(c0)) we refer the author to (De la Fuente et al., 2024). We direct the
curious reader to Appendix C.1 for a low-dimensional demonstration of the Fourier head in action.
2.3 FOURIER HEAD: CONSIDERATIONS FOR TRAINING
We highlight the main design choices of the Fourier head so that users may apply it most effectively.
Training objective: The Fourier head inputs a signal x ∈ Rn and extracts from that signal an inter-
mediate representation of a probability distribution px(z) defined over z ∈ [−1, 1]. This probability
distribution has a closed formula equal to a Fourier series. In our experiments, we optimize the pa-
rameters of the Fourier PDF by discretizing it over the latent space and training using cross-entropy
loss. However, we should note that the Fourier layer allows MLE training directly on continuous
values, by evaluating the Fourier PDF directly on the ground truth value in the latent space. But for
consistency of comparison, and to demonstrate how easy it is to swap the Fourier head with a linear
layer, we use softmax cross-entropy loss as the objective.
Choice of hyperparameter N : The Fourier head has one crucial hyperparameter–namely, the num-
ber of frequencies. How should one choose this in practice? We offer Theorem 3.3 as a guiding
principle beyond simple trial and error. This result provides a scaling law which formalizes the
smoothness-expressive power trade-off in choosing the number of frequencies. In general, using
more frequencies leads to more expressive power, and generally better success metrics, but at the
cost of learning less smooth densities, as well as more model parameters.
Algorithm 1 Fourier head
Hyperparameters: the input dimension n, output dimension m, number of frequencies N
Initialization: define a linear layer A : R^n → R^{2(N+1)}                 // maps input to autocorrelation coefficients
INPUT x = (x_1, . . . , x_n) ∈ R^n
(α_0, β_0, . . . , α_N, β_N) ← Ax
a_k ← α_k + iβ_k ∈ C, for every k = 0, . . . , N                            // compute autocorrelation coefficients
c_k ← Σ_{ℓ=0}^{N−k} a_ℓ a*_{ℓ+k} ∈ C, for every k = 0, . . . , N            // compute Fourier coefficients
p(z) = 1/2 + ℜ( Σ_{k=1}^{N} (c_k / ℜ(c_0)) exp(ikπz) )                      // define Fourier PDF over [−1, 1]
b_k ← −1 + (1 + 2k)/m, for every k = 0, . . . , m − 1                       // define m bin centerpoints
y_k ← p(b_k) / Σ_{j=0}^{m−1} p(b_j), for every k = 0, . . . , m − 1         // evaluate PDF at m bin centerpoints
OUTPUT (y_1, . . . , y_m) ∈ R^m                                             // by design, Σ_{k=1}^{m} y_k = 1 and each y_k ≥ 0
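A minimal PyTorch sketch of Algorithm 1 is given below. It is not the released implementation; the interleaved (α, β) output layout and the initialization scale follow the descriptions in this section, and everything else is an implementation assumption.

```python
import torch
import torch.nn as nn

class FourierHead(nn.Module):
    """Minimal sketch of Algorithm 1 (not the authors' released code). Assumes the linear layer's
    output is laid out as interleaved pairs (alpha_0, beta_0, ..., alpha_N, beta_N)."""

    def __init__(self, input_dim: int, output_dim: int, num_frequencies: int, init_scale: float = 1000.0):
        super().__init__()
        self.N = num_frequencies
        self.proj = nn.Linear(input_dim, 2 * (num_frequencies + 1))
        with torch.no_grad():                       # shrink the default init so the initial PDF is ~uniform
            self.proj.weight.div_(init_scale)
            self.proj.bias.div_(init_scale)
        centers = -1.0 + (1.0 + 2.0 * torch.arange(output_dim)) / output_dim   # b_k = -1 + (1+2k)/m
        self.register_buffer("bin_centers", centers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        theta = self.proj(x)                                             # (..., 2(N+1))
        a = torch.complex(theta[..., 0::2], theta[..., 1::2])            # a_0, ..., a_N
        # Fourier coefficients via autocorrelation: c_k = sum_l a_l * conj(a_{l+k})
        c = torch.stack(
            [(a[..., : self.N + 1 - k] * a[..., k:].conj()).sum(dim=-1) for k in range(self.N + 1)], dim=-1
        )
        coeffs = c[..., 1:] / c[..., 0].real.unsqueeze(-1)               # c_k / Re(c_0), k = 1..N
        k = torch.arange(1, self.N + 1, device=x.device, dtype=x.dtype)
        ang = torch.pi * k[None, :] * self.bin_centers[:, None]          # (m, N) angles k*pi*b
        # p(b) = 1/2 + Re( sum_k coeffs_k * exp(i*k*pi*b) ), expanded into real arithmetic
        pdf = 0.5 + torch.einsum("...n,mn->...m", coeffs.real, torch.cos(ang)) \
                  - torch.einsum("...n,mn->...m", coeffs.imag, torch.sin(ang))
        return pdf / pdf.sum(dim=-1, keepdim=True)                       # categorical distribution over m bins

head = FourierHead(input_dim=32, output_dim=50, num_frequencies=12)
probs = head(torch.randn(4, 32))
print(probs.shape, probs.sum(dim=-1))                                    # torch.Size([4, 50]), all ones
```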
Fourier regularization: For a given number of frequencies N , there could be many learned Fourier
models that fit the given data equally well. To encourage a smoother learned model and penalize
unnecessary high frequency content, we follow (De la Fuente et al., 2024) and add a regularization
term that measures the total squared variation for the Fourier model, to prevent higher order Fourier
coefficients from growing too large during training. This helps ensure that the learned Fourier PDF
doesn’t overfit to noise in the data, and therefore has a bias towards learning smoother densities. In
the notation from Algorithm 1, this means adding a regularization term of γ · 2π2
k=1 k2|ck|2 to
m
the loss function, where γ is a hyperparameter. When picking regularization strength, we find that
in the low-frequency domain (e.g. frequencies in the single digits) using γ = 0 works best, and in
the high-frequency domain (e.g. greater than 10 frequencies), using γ = 10−6 works best.
(cid:80)m
Binning strategy: The choice of data binning can impact performance. As discussed, the Fourier
head should only be applied when nearby bins are ‘similar’ in some sense, requiring a semantically
meaningful bin ordering. When bins represent quantized numerical values over a continuous la-
tent space, a ‘mixed-precision’ binning strategy can improve performance. For example, to model
values in [−15, 15] with most data in [−1, 10], allocating more bins to the dense interval improves
performance. Given m total bins, a hyperparameter d ∈ [0, 1) controls allocation, with ⌊d · m⌋
bins for the sparse interval and the rest for the dense range (estimated from training data). Fourier
theory supports this approach, as increasing precision in dense regions de-localizes the quantized
data distribution, localizing the Fourier spectrum. This accelerates higher frequency decay, enabling
effective learning with lower-frequency Fourier heads. Separately, we note that (De la Fuente et al.,
2024) suggests re-parameterizing the periodic domain to the real line, though we do not use this in
our work.
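The following is a small sketch of how such mixed-precision bin edges could be constructed; the support and dense intervals shown are the illustrative values from the example above, and the proportional split between the two sparse tails is an implementation assumption.

```python
import numpy as np

def mixed_precision_bin_edges(m: int, d: float, support=(-15.0, 15.0), dense=(-1.0, 10.0)) -> np.ndarray:
    """Allocate floor(d*m) bins to the sparse part of `support` and the remaining bins to the `dense`
    sub-interval (which would be estimated from training data); interval values here are illustrative."""
    n_sparse = int(np.floor(d * m))
    n_dense = m - n_sparse
    lo, hi = support
    dlo, dhi = dense
    left_frac = (dlo - lo) / ((dlo - lo) + (hi - dhi))        # split sparse bins between the two tails
    n_left = int(round(n_sparse * left_frac))
    n_right = n_sparse - n_left
    edges = np.concatenate([
        np.linspace(lo, dlo, n_left + 1)[:-1],
        np.linspace(dlo, dhi, n_dense + 1)[:-1],
        np.linspace(dhi, hi, n_right + 1),
    ])
    return edges                                              # m + 1 edges defining m bins

edges = mixed_precision_bin_edges(m=4096, d=0.25)
print(len(edges), edges[:3], edges[-3:])
```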
Weight initialization: The learned parameters for the Fourier head consist of the learned linear
layer which extracts autocorrelation parameters ak. In PyTorch, linear layers use He initialization
(He et al., 2015) by default, which ensures that the linear layer outputs values close to zero in
expectation. Similarly, initializing the Fourier densities to be uniform p(z) ≈ 1/2 improves learning
dynamics. We accomplish this by dividing the weights and biases by a large number, such as 1000,
after He initialization; this guarantees that the linear layer outputs very small values, so that Fourier
coefficients output from the autocorrelation step are very small as well.
3 THEORETICAL ANALYSIS OF FOURIER HEAD
3.1 “SMOOTHNESS”: A METRIC FOR HIGH FREQUENCY CONTENT
In this subsection we propose a smoothness metric which inputs a categorical distribution y =
(y1, . . . , ym) ∈ Rm, and assigns a numerical value depending on how smooth it is. The score will
output 0 if y is the smoothest possible categorical distribution, and larger values if y is less smooth.
We will first specify what we mean by “smooth”:
Heuristic 3.1. We say a function is smooth if it contains very little high-frequency information.
For example, the uniform categorical distribution contains no high-frequency information, so it is
the smoothest possible function, and should get a smoothness score of 0. In contrast, a categorical
distribution containing samples from sin(100πx) contains lots of high frequency information, so it
should get a smoothness score greater than 0. We seek to define a metric which measures smoothness
according to Heuristic 3.1.
We will first develop a smoothness metric in the general case of a function f : [a, b] → R, then
specialize to the case of the discrete categorical distribution that we consider in the paper. If we let
ασ ∈ R be weights satisfying $\int_0^\infty \alpha_\sigma \, d\sigma = 1$, let D be some measure of discrepancy such as L2,
and let gσ(x) ∗ f (x) denote the convolution of f (x) with a Gaussian kernel of standard deviation σ,
then it is reasonable to define the smoothness of f to be the quantity
$$ s(f) := \int_0^\infty \int_a^b \alpha_\sigma \, D[f(x),\, g_\sigma(x) * f(x)] \, dx \, d\sigma. \qquad (3.1) $$
In this expression, the discrepancy D[f (x), gσ(x) ∗ f (x)] measures how different f (x) is from a
Gaussian-smoothed version of itself. Because the Gaussian is a low-pass filter, we can interpret
Equation 3.1 as saying, at a high level, that a function is “smooth” if it doesn’t change that much
when you remove high frequency content from it.
In our experiments, we consider discrete categorical distributions, and wish to tractably quantify
their smoothness. Accordingly, we define a specific case of this as follows.
Definition 3.2 (Smoothness metric for categorical distributions). Suppose y = (y1, . . . , ym) ∈ Rm
is a categorical distribution, so every yk ≥ 0 and $\sum_{k=1}^{m} y_k = 1$. Denote by gσ ∈ R^{2m−1} the discrete
Gaussian kernel of standard deviation σ and radius m − 1. Define the weights ασ = 6/(π2σ2). Then
we define the smoothness of y to be the constant
$$ s(y) := \sum_{\sigma=1}^{\infty} \alpha_\sigma \, \lVert y - g_\sigma * y \rVert_2. \qquad (3.2) $$
We direct the curious reader to Appendix B, where we conduct additional experiments to justify this
choice of smoothness metric for our experiments.
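A small numpy sketch of Definition 3.2 follows; truncating the infinite sum over σ and using zero padding at the boundary of the convolution are implementation assumptions.

```python
import numpy as np

def discrete_gaussian_kernel(sigma: float, radius: int) -> np.ndarray:
    """Truncated, renormalized Gaussian kernel of standard deviation sigma, length 2*radius + 1."""
    t = np.arange(-radius, radius + 1)
    g = np.exp(-t ** 2 / (2 * sigma ** 2))
    return g / g.sum()

def smoothness(y: np.ndarray, max_sigma: int = 100) -> float:
    """Sketch of Definition 3.2; the infinite sum over sigma is truncated at max_sigma, and the
    convolution zero-pads at the boundary (an implementation assumption)."""
    m = len(y)
    total = 0.0
    for sigma in range(1, max_sigma + 1):
        alpha = 6.0 / (np.pi ** 2 * sigma ** 2)
        g = discrete_gaussian_kernel(sigma, radius=m - 1)
        y_smooth = np.convolve(y, g, mode="full")[m - 1 : 2 * m - 1]   # central, length-m slice
        total += alpha * np.linalg.norm(y - y_smooth, ord=2)
    return total

uniform = np.full(50, 1.0 / 50)
rng = np.random.default_rng(0)
noisy = np.abs(uniform + 0.01 * rng.standard_normal(50))
noisy /= noisy.sum()
print(smoothness(uniform), smoothness(noisy))   # the noisy histogram should receive the larger score
```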
3.2 A SCALING LAW FOR THE FOURIER HEAD, IN FREQUENCY-ASPECT
In this subsection, we share a theorem that analyzes the quality of the Fourier head as the quantity of
frequencies changes. We refer to this as the Fourier head scaling law as it quantifies the trade-off
between modeling capacity and smoothness as the number of frequencies increases. On one hand, it
is a celebrated result from Fourier analysis that a Fourier series with a greater number of frequencies
models a larger class of functions; but on the other hand, we show that increasing frequencies also
incurs loss in smoothness. This is to be expected, as we designed our smoothness metric with the
intention of identifying a distribution as less smooth if it contains more high-frequency information.
Theorem 3.3. (Fourier head scaling law.) Consider a Fourier head with input dimension n, output
dimension m, and N frequencies. Suppose that 1 ≪ N < m/2. Then the following are true:
1. (Increasing N improves modeling power.) As N increases, the Fourier head is capable of
learning a larger class of densities.
2. (Increasing N degrades smoothness.) Consider an input to the Fourier head x ∈ Rn, and
denote by fx : [−1, 1] → R the optimal conditional distribution that we would like the Fourier
head to approximate for this input. Suppose that there exists some t ≥ 2 such that the Fourier
coefficients of fx decay on the order of 1/kt. Denote by fx,N the truncation of fx to its first
N frequencies, denote by ⃗b ∈ Rm the vector of m bin centerpoints in [−1, 1], and denote by
y(N) = fx,N(⃗b)/(fx,N(b0) + · · · + fx,N(bm−1)) ∈ Rm the discretization of fx,N into m bins.
Then, there exist constants C1, C2 > 0 such that
$$ s\big(y^{(N)}\big) = C_1 - \frac{C_2}{N^{2t-1}} + O(1/N^{2t}). \qquad (3.3) $$
Toy Example: Learned Conditional Distribution vs True Conditional Distribution
Figure 2: Comparison between the PMFs learned by the linear head, GMM head, and the Fourier
head, for two of the datasets in the toy example–Gaussian and Beta. (The GMM dataset is in
Figure 1.) We observe that the Fourier head learns a smoother categorical distribution than the
linear head over its predicted values. Furthermore, the Fourier head better fits the true conditional
PDF; this is reflected in the KL divergence and smoothness metrics.
Note that the smoothness scaling law asymptotic in Equation 3.3 shows that as N increases, so does
s(y(N )). Further, note that if the Fourier spectrum of the underlying distribution decays quicker
(controlled by t) then the rate at which smoothness degrades is slower; this is because if what we
are learning has little high frequency content, then increasing the frequencies shouldn’t affect the
smoothness of the learned distribution very much. In part (2), our assumption that the Fourier coef-
ficients decay at least quadratically is reasonable since if fx is at least twice continuously differen-
tiable, we already know its Fourier coefficients corresponding to the k-th frequency are in O(1/k2)
(Stein & Shakarchi, 2003, Ch.2, Cor. 2.4). Our Fourier weight decay regularization helps toward
ensuring that this condition is met in practice as well. We include a full proof in Appendix A.
4 TOY EXAMPLES
4.1 LEARNING A CONTINUOUS CONDITIONAL DISTRIBUTION
We demonstrate the advantage of using the Fourier head to learn a probability distribution for a
simple task: learning the conditional distribution of the third number in the sequence given the first
two. Here we will use q(z) to denote the quantization of z.
Dataset: We create 3 synthetic datasets, which we name Gaussian, GMM-2, and Beta. Each
dataset consists of 5000 quantized triples {(q(x), q(y), q(z))} ⊆ [ − 1, 1]3. Crucially, z is sampled
from a distribution which is conditioned on x and y, and we have an explicit closed formula for this
distribution. By design, the Gaussian dataset is unimodal in z, whereas the more challenging GMM-
2 and Beta datasets are not unimodal. Full details about the datasets can be found in Appendix C.2.
Task: Predict the conditional distribution of q(z) given the quantized tuple (q(x), q(y)).
Model architecture: Our model is an MLP with ReLU activations and one hidden layer, which
maps R2 → R64 → R32 → R50. The output of the model has dimension 50 because we quantize
the interval [−1, 1] into 50 bins. We consider two baselines alongside the Fourier model. For the
first baseline, the classification head is a linear layer; for the second baseline, the classification head is a Gaussian mixture model (GMM) classification layer with two Gaussians, where the means and standard deviations are learned; for the Fourier model, the classification head is the Fourier head. We sweep
over frequencies N = 2, 4, . . . , 20, and consider regularization γ ∈ {0, 10−6}. We train those
models via cross-entropy loss.1 We also consider a regression-based model, trained using MSE.
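For reference, a PyTorch sketch of the baseline architecture is below: an MLP mapping R^2 → R^64 → R^32 → R^50 with ReLU activations, whose final linear layer is the classification head over the 50 bins. In the Fourier and GMM variants, only this final layer changes; the Fourier head itself (Algorithm 1) is not reproduced here.

```python
import torch
import torch.nn as nn

class ToyClassifier(nn.Module):
    """MLP R^2 -> R^64 -> R^32 -> R^50 with a linear classification head.
    The 50 outputs are logits over the 50 quantization bins of [-1, 1]."""
    def __init__(self, n_bins: int = 50):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(2, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        # In the Fourier model, this final layer is replaced by the Fourier head.
        self.head = nn.Linear(32, n_bins)

    def forward(self, xy: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(xy))  # logits; train with cross-entropy

model = ToyClassifier()
logits = model(torch.randn(8, 2))
print(logits.shape)  # torch.Size([8, 50])
```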
Model evaluation: We use three metrics for evaluation. Our first metric is the average KL di-
vergence DKL(q(P(x, y))||M (q(x), q(y))), where P(x, y) is the fixed conditional distribution of z
given (x, y); q(P(x, y)) is the quantized approximation of P(x, y), obtained by evaluating the den-
[Footnote 1] Note that we also demonstrate the possibility of training a continuous version of the Fourier head, using a maximum-likelihood based objective. Accordingly, we carry out experiments in the continuous domain analogous to those we did in the quantized domain; for more details, see Appendix C.3.
                   KL Divergence (↓)                    Smoothness (↓)
Dataset            Linear           Fourier             Linear           Fourier
Gaussian           0.170 ± 0.052    0.116 ± 0.043       0.116 ± 0.049    0.057 ± 0.011
GMM-2              0.238 ± 0.032    0.146 ± 0.033       0.068 ± 0.022    0.038 ± 0.007
Beta               0.234 ± 0.032    0.191 ± 0.016       0.127 ± 0.044    0.076 ± 0.021
Table 1: We compare metrics between the linear head and the Fourier head with 12 frequencies and no regularization, for every dataset in our toy example. We observe that the Fourier head outperforms the linear head across all metrics. Notably, using the Fourier head improves the KL divergence (the primary success metric) on average by approximately 40%. We aggregate metrics over 4 different seeds and report the standard deviation.
Using Llama-3.1-8B-Instruct to Simulate Gaussian Sampling
Figure 3: We demonstrate that the baseline Llama model does a poor job simulating Gaussian sam-
pling, as measured by the Total Variation Distance between the ground truth quantized Gaussian
histogram, and the empirical histogram of samples. We find that LoRA fine-tuning improves the
results by a factor of ≈ 2.07, and that using the Fourier head improves the output distribution by a
factor of ≈ 4.86.
sity function of P(x, y) at the bin centers, multiplying by the bin width, and finally scaling by the
sum of the likelihoods; and M (q(x), q(y)) denotes the predicted categorical conditional distribution
of q(z). Our second metric is smoothness, and our third metric is MSE, where we consider the
expected value of q(z) under the learned categorical distribution as a prediction for q(z).
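The quantized KL metric described above can be computed as in the following sketch; the "true" conditional density used here is an arbitrary Gaussian stand-in, since the actual P(x, y) depends on the dataset.

```python
import numpy as np

def quantize_density(pdf, n_bins=50):
    """Evaluate a density on [-1, 1] at bin centers, multiply by bin width,
    and normalize by the total, giving the quantized approximation q(P)."""
    edges = np.linspace(-1, 1, n_bins + 1)
    centers = (edges[:-1] + edges[1:]) / 2
    width = 2.0 / n_bins
    mass = pdf(centers) * width
    return mass / mass.sum(), centers

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) for two categorical distributions over the same bins."""
    p, q = np.clip(p, eps, None), np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))

# Example: a Gaussian stand-in for the true conditional vs. a model's categorical M.
truth = lambda z: np.exp(-0.5 * ((z - 0.2) / 0.15) ** 2)
q_true, centers = quantize_density(truth)
m_pred = np.exp(-0.5 * ((centers - 0.25) / 0.2) ** 2)
m_pred /= m_pred.sum()
print(kl_divergence(q_true, m_pred))
```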
Results: The metrics for the best performing model on each dataset are reported in Table 1. Figure
2 presents sample visualizations of the learned conditional distributions alongside the true densities.
And in Appendix C.2, we present the results of a study on the impact of number of frequencies
and Fourier regularization. Notably, this study provides empirical evidence for the Fourier head
scaling law in Theorem 3.3, as it demonstrates that for all datasets, as frequency increases, the
smoothness degrades, and model performance improves until it reaches a saturation point. Crucially,
we observe that the Fourier head flexibly learns all three distributions better than the linear baseline
does. We note that the Fourier head outperforms the linear head on MSE as well; we include a
complete comparison with both Linear and GMM head baselines in Appendix C.2. Additionally,
in Figure 10 (Appendix), we demonstrate that the regression model simply regresses to the mean
of the conditional distribution. Accordingly, the regression model performs well for the unimodal
Gaussian dataset, and it performs poorly for the bimodal datasets GMM-2 and Beta.
4.2 ARE LLMS RANDOM NUMBER GENERATORS?
Suppose that we query an LLM with the following prompt, repeatedly: “The following is a list of
normally distributed random numbers in the interval [-1, 1] with mean 0.55 and std 0.10: 0.57,
0.36, ”. Would the model outputs be approximately Gaussian? In this empirical study, we simulate
Gaussian sampling using the Llama-3.1-8B-Instruct model (Dubey et al., 2024). We demonstrate
that this language model struggles to generate high quality numerical Gaussian samples. We con-
sider two possible interventions: LoRA fine-tuning the base Llama model, and LoRA fine-tuning
the base model while also replacing the linear classification head with a Fourier head. We find that
LoRA fine-tuning improves the learned distribution significantly, and replacing the linear head with
the Fourier head improves the distributions even further. We present an illustrative example of this
phenomenon in Figure 3. See Appendix C.4 for experiment details and some related works.
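The evaluation behind Figure 3 reduces to a total variation distance between the quantized ground-truth Gaussian and the empirical histogram of generated samples. A sketch of that computation is below; the samples here are synthetic stand-ins for the numbers that would be parsed out of Llama-3.1-8B-Instruct completions.

```python
import numpy as np

def quantized_gaussian(mean, std, edges):
    """Ground-truth Gaussian mass per bin, via the density at bin centers."""
    centers = (edges[:-1] + edges[1:]) / 2
    mass = np.exp(-0.5 * ((centers - mean) / std) ** 2)
    return mass / mass.sum()

def empirical_histogram(samples, edges):
    counts, _ = np.histogram(samples, bins=edges)
    return counts / counts.sum()

def total_variation(p, q):
    return 0.5 * float(np.abs(p - q).sum())

edges = np.linspace(-1, 1, 51)                 # 50 bins on [-1, 1] (an assumption)
target = quantized_gaussian(0.55, 0.10, edges)

# Stand-in for numbers generated by the language model.
rng = np.random.default_rng(0)
fake_model_samples = np.clip(rng.normal(0.5, 0.2, size=2000), -1, 1)

print(total_variation(target, empirical_histogram(fake_model_samples, edges)))
```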
5 LARGE-SCALE STUDY: OFFLINE REINFORCEMENT LEARNING
The Decision Transformer (Chen et al., 2021) reframes reinforcement learning as sequentially mod-
eling rewards, states, and actions. We evaluate its performance on the Seaquest game from the Atari
(Bellemare et al., 2013) benchmark. The Seaquest game contains 18 actions, with two groups of
eight actions that have a natural “closeness” metric defined on them: move left, up left, up, up right,
right, down right, down, down left; as well as shooting in those eight directions. The original ar-
chitecture uses a decoder-only language model (Radford et al., 2018) to encode context and map
it through a linear layer, producing a categorical distribution over actions. At test time, the agent
samples from this distribution to select its next action. We replace the linear classification head with
a Fourier head, introducing a prior that semantically similar actions (e.g., ‘move left’ and ‘move
up left’) should have similar likelihoods. Our results show the Fourier head improves returns by as
much as 46% in the reward-conditioned setting, using identical training hyperparameters.
Task: In the Seaquest game, the agent moves a submarine to avoid enemies, shoot at enemies, and
rescue divers. We consider this task in the Offline RL setting. The agent observes the past states,
actions, and rewards, as well as the return-to-go, and attempts to predict the action that matches
what an agent operating like the dataset would likely do. We also consider three other Atari games
with the same action space: BankHeist, DoubleDunk, and Gravitar.
Dataset: We use the same dataset from the original Decision Transformer implementation (Chen
et al., 2021). This dataset consists of 500k transitions experienced by an online deep Q-network
agent (Mnih et al., 2015) during training on the Seaquest game.
Model architecture: Chen et al. (2021) used the GPT-1 model (Radford et al., 2018) to autoregres-
sively encode the context, which is then fed through a linear layer of dimension 18, and the model
ultimately optimizes the cross-entropy loss between the action logits and the ground truth action
from the dataset. We refer to this model as the linear baseline. To create our Fourier-N version, we
simply replace the linear head with a Fourier head with N frequencies and Fourier regularization
γ = 10−6. In our experiments we consider frequencies N ∈ {2, 4, 6, 8, . . . , 30, 32}.
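Architecturally, the change is confined to the final layer. The sketch below shows the shape of that swap with a simplified Fourier-style head, whose logits are a learned truncated Fourier series evaluated at the 18 bin centerpoints; this stand-in is not the paper's Algorithm 1 (which constructs a proper density and applies Fourier regularization), and the GRU backbone is a placeholder for the GPT-1 context encoder.

```python
import math
import torch
import torch.nn as nn

class SimplifiedFourierHead(nn.Module):
    """Illustrative stand-in for the Fourier head: a linear layer predicts the
    amplitudes of a degree-N Fourier series, which is evaluated at the m bin
    centerpoints in [-1, 1] to produce low-frequency logits over actions.
    This is NOT the paper's Algorithm 1; it only conveys the shape of the swap."""
    def __init__(self, dim_in: int, n_bins: int = 18, n_freqs: int = 8):
        super().__init__()
        self.coeffs = nn.Linear(dim_in, 2 * n_freqs + 1)  # a_0, a_k, b_k
        centers = torch.tensor([-1 + (2 * j + 1) / n_bins for j in range(n_bins)])
        k = torch.arange(1, n_freqs + 1).float()
        self.register_buffer("cos_basis", torch.cos(math.pi * k[:, None] * centers[None, :]))
        self.register_buffer("sin_basis", torch.sin(math.pi * k[:, None] * centers[None, :]))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        c = self.coeffs(h)                                   # (..., 2N+1)
        a0, a, b = c[..., :1], c[..., 1::2], c[..., 2::2]
        return a0 + a @ self.cos_basis + b @ self.sin_basis  # logits over 18 actions

hidden = 128
backbone = nn.GRU(32, hidden, batch_first=True)  # stand-in for the GPT-1 context encoder
linear_head = nn.Linear(hidden, 18)              # the original classification head
fourier_head = SimplifiedFourierHead(hidden, n_bins=18, n_freqs=8)

ctx, _ = backbone(torch.randn(4, 10, 32))
print(linear_head(ctx[:, -1]).shape, fourier_head(ctx[:, -1]).shape)  # both (4, 18)
```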
Normalized Returns for Decision Transformer Agent

Classification Head      BankHeist        DoubleDunk         Gravitar        Seaquest
Linear head              −0.09 ± 0.05     −72.72 ± 33.08     1.32 ± 0.17     2.53 ± 0.63
Fourier head             0.92 ± 0.33      45.45 ± 36.36      4.98 ± 0.93     3.70 ± 0.47
Table 2: We present returns obtained by the Decision Transformer agent using the linear baseline,
and the Fourier head, across the four Atari games. We compute the returns (mean and standard
deviation) by averaging over four seeds. Across all these games, the Fourier head significantly
improves the normalized returns obtained by the agent.
Figure 4: We present empirical results for how the quantity of Fourier frequencies impacts returns
and smoothness for the imitation learning task. For normalized returns, higher is better; for smooth-
ness, lower is better. We can see that the Fourier agent achieves higher normalized returns than
the linear baseline agent when sufficiently many Fourier frequencies are used, while still learning
smoother next-action distributions.
Model Evaluation: We present results for the linear baseline and Fourier-N head (N ∈ {2, 4, 6, 8, . . . , 30, 32}) across four Atari games, showing mean reward totals for rollouts at the best
epoch across four seeds. Table 2 demonstrates significant return gains with the Fourier head. For ex-
ample, Seaquest returns increase by up to 46.2%, while Gravitar sees as much as a 377% boost. Fig-
ure 4 shows improved Seaquest performance as the number of frequencies grows, with learned PMFs
becoming less smooth, aligning with Theorem 3.3. Qualitative results in Figure 13 (Appendix) high-
light the smoother PMFs produced by the Fourier head. Additional results for BankHeist, Double-
Dunk, and Gravitar in Figure 16 (Appendix) confirm that the Fourier agent consistently outperforms
the linear baseline while maintaining smoother next-action distributions.
Ablations: We analyze whether model size has any effect on the relative performance of the Linear
head and the Fourier head. The results in Figure 14 (Appendix) demonstrate that, across model
sizes, the Decision Transformer with a Fourier head is better at learning high-quality next action
distributions than the Decision Transformer with a Linear head. We also analyze whether dataset
size has any effect on the relative performance of the Linear head and the Fourier head, and obtain a
similar result. In Figure 15 (Appendix) we show that, across dataset sizes, the Decision Transformer
agent with the Fourier head achieves larger returns than the agent with a linear head.
6 LARGE-SCALE STUDY: PROBABILISTIC TIME SERIES FORECASTING
The Chronos time series foundation models (Ansari et al., 2024) “learn the language of time se-
ries”. They do this by approaching time series forecasting as language modeling, by tokenizing the
quantized number line, learning token embeddings for each of those quantized values, and finally
learning a categorical distribution to decide what the next value ought to be. This model is built
on top of the encoder-decoder T5 model (Raffel et al., 2020). In particular, this model normalizes
time series values to the range [−15, 15] and quantizes this interval into 4096 tokens. As usual for
language modeling, the final layer is a linear map which learns a categorical distribution over next
tokens. In particular, we observe that token i represents a number very close to tokens i − 1 and
i + 1. However, we note that there is no inductive bias in the T5 architecture which pushes their
likelihoods to be similar. This is not a hypothetical problem; in Figure 17 (Appendix), we can see
that the linear next-token prediction PMFs fit to the noise, and appear very jagged.
The motivation for replacing the linear head with the Fourier head is to “smooth” out the
distribution in the left side of Figure 17, to help the forecasting model better learn the signal,
and ignore the noise. In this figure, we can see that the Fourier head accomplishes this successfully.
In this section, we study how the performance of the Chronos time series foundation model changes
when we pre-train using the Fourier head, instead of the linear head. For all of the frequencies that
we consider, the Fourier head outperforms the Chronos linear baseline on the MASE metric, while
learning next token multinomials which are at least 8x smoother, with fewer parameters than the
baseline.
Dataset: We use the same training dataset for large-scale pretraining that Ansari et al. (2024) used.
We gather an evaluation benchmark of 20 time series datasets which were not seen during train-
ing. These 20 datasets come from the zero-shot evaluation in Ansari et al. (2024). The reader can check
Appendix D.2 for details on the training and evaluation datasets we used.
Model architecture: We use the Chronos model, which is built using the T5 architecture (Raffel
et al., 2020). The original model has a linear classification head. For our study, we will replace
this with a Fourier head with frequencies N = 64, 128, 256, 550. We use mixed precision binning;
this is informed by an analysis of the Fourier spectrum of the next-token distribution, as described in
Section 2.3. We also use Fourier weight decay regularization with γ = 10−6. For the task, the model
Chronos Time Series Model    MASE (↓)    WQL (↓)    Smoothness (↓)
Linear                       0.750       0.883      0.1689 ± 0.1087
Fourier-550                  0.749       0.852      0.0283 ± 0.0224
Table 3: We present large-scale experiments on Chronos time series forecasting. The best-performing Fourier model outperforms the linear baseline both in terms of the continuity of the learned probability mass functions (smoothness) and in terms of the quality of the forecasts (MASE, WQL).
learns to input a time series context of length 512 and to output a probabilistic forecast of length 64. At test time, the model chooses the next numerical token by sampling from the next-token distribution.
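A minimal sketch of the tokenization scheme described above, uniform quantization of the normalized range [−15, 15] into 4096 tokens and dequantization back to bin centers, is given below; the normalization of the raw series and any special tokens are omitted, and uniform bin spacing is an assumption of this sketch.

```python
import numpy as np

LOW, HIGH, N_TOKENS = -15.0, 15.0, 4096
EDGES = np.linspace(LOW, HIGH, N_TOKENS + 1)
CENTERS = (EDGES[:-1] + EDGES[1:]) / 2

def tokenize(normalized_values: np.ndarray) -> np.ndarray:
    """Map normalized time-series values in [-15, 15] to token ids in [0, 4095]."""
    return np.clip(np.digitize(normalized_values, EDGES) - 1, 0, N_TOKENS - 1)

def detokenize(token_ids: np.ndarray) -> np.ndarray:
    """Map token ids back to the centerpoint of their bin; note that token i
    represents a value adjacent to those of tokens i-1 and i+1."""
    return CENTERS[token_ids]

context = np.sin(np.linspace(0, 8 * np.pi, 512)) * 5.0  # toy normalized context, length 512
ids = tokenize(context)
print(ids.min(), ids.max(), float(np.abs(detokenize(ids) - context).max()))
```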
Model evaluation: We have two sets of metrics: model performance from (Ansari et al., 2024)
(MASE measures the accuracy of median forecast, and WQL measures the quality of the proba-
bilistic forecast), as well as our smoothness metric. Our Fourier metrics in Table 3 demonstrate that
every Fourier model outperforms the linear baseline for MASE and smoothness. Furthermore, for
the largest Fourier model that we consider, Fourier outperforms linear on WQL as well.
Ablations: The results in Table 8 (Appendix) show that mixed precision binning and regularization
improve the MASE and smoothness for the Fourier head, and that using more Fourier frequencies
improves MASE and WQL. Additionally, we show that the Fourier head yields more accurate fore-
casts than the linear head across dataset sizes and model sizes (Figures 18 and 19, Appendix).
7 RELATED WORK
LLMs outside of natural language domains: LLMs are often adapted to domains beyond natural
language, as general purpose sequence models. For example, they have been used in protein syn-
thesis (Madani et al., 2023), time series forecasting (Ansari et al., 2024; Das et al., 2024; Jin et al.,
2024; Nate Gruver & Wilson, 2023; Requeima et al., 2024; Jia et al., 2024; Zhou et al., 2023; Wang
et al., 2024), music generation (Dhariwal et al., 2020; Agostinelli et al., 2023; Copet et al., 2023;
Yuan et al., 2024), as well as in decision making (Li et al., 2022; Chen et al., 2021).
We consider three categories of LLM adaptation to non-language domains: using the output of a language-trained LLM as a feature for some out-of-domain task; fine-tuning a language-pretrained LLM on a domain-specific task; and training an LLM architecture on a domain-specific dataset from scratch. Our work directly considers the last of these methods,
particularly in settings where the outputs approximate continuous values. We note that using LLMs
to model numerical functions has seen success in continuing sequences (Mirchandani et al., 2023)
but has been challenging for modeling samplers for probability distributions (Hopkins et al., 2023).
In a related direction, Razeghi et al. (2022) found that model performance on numerical reasoning tasks is correlated with the frequency of specific numbers in its corpus. Further, some have re-framed continuous regression as a discretized classification problem to leverage LLMs in numerical modeling contexts (Song et al., 2024) or RL contexts (Farebrother et al., 2024). While even frozen LLMs with no further training show interesting empirical results as regressors (Vacareanu et al., 2024), there is a conceptual mismatch between the downstream task and the model construction, because training on tokenized numerical values with a cross-entropy loss does not explicitly enforce numerical relationships between the tokens.
Fourier series in neural networks: Many works leverage the Fourier transform as a data pre-
processing step or a deterministic transformation within the network, or use Fourier analysis to
motivate design choices. It is far less common to learn the Fourier series directly. De la Fuente
et al. (2024) learned marginal univariate densities parameterized using a Fourier basis; our work
extends their Fourier Basis Density model to multivariate settings with an autoregressive scheme.
Our method learns conditional univariate densities using a Fourier basis, where the coefficients of the
Fourier density model are input dependent. Sitzmann et al. (2020) proposed sinusoidal activation
functions, which can be seen as learning the frequencies of a Fourier series; in contrast, we fix
the frequencies to the canonical choice {1, 2, . . . , N }, and learn the amplitudes. This allows the
Fourier head to more directly benefit from approximation results from Fourier analysis.
8 CONCLUSION
We propose the Fourier head and demonstrate its positive impact on performance on several tasks.
We prove a scaling law that characterizes the trade-off between the model’s expressivity and the
smoothness of its output distribution. The Fourier head is a modular architecture that can be easily
added to existing models that would benefit from the continuity inductive bias that the head imparts.
The Fourier head extends the already extensive reach of LLMs into more diverse, numerical, and
probabilistic domains. Future work includes exploring alternative training objectives that do not
depend on discretizing probability density functions, and incorporating the Fourier head in general-
purpose LLM training, where the head can be adaptively employed when needed.
9 REPRODUCIBILITY STATEMENT
We have made efforts to ensure reproducibility. In Algorithm 1 we provide all the mathematical details that one needs to reproduce the Fourier head. In Appendix A we prove our scaling law, Theorem 3.3, in full detail, and we list all assumptions in the statement of the theorem. Additionally,
we have released the research code on GitHub: https://github.com/nate-gillman/
fourier-head.
ACKNOWLEDGMENTS
We would like to thank Jona Balle, Alfredo De la Fuente, Calvin Luo, Singh Saluja, Matthew
Schoenbauer, and Megan Wei for the useful discussions. This work is supported by the Samsung
Advanced Institute of Technology, NASA, and a Richard B. Salomon Award for Chen Sun. Our
research was conducted using computational resources at the Center for Computation and Visual-
ization at Brown University. Chen would like to thank Mia for inspiration.
REFERENCES
Andrea Agostinelli, Timo I Denk, Zal´an Borsos, Jesse Engel, Mauro Verzetti, Antoine Caillon,
Qingqing Huang, Aren Jansen, Adam Roberts, Marco Tagliasacchi, et al. Musiclm: Generating
music from text. arXiv preprint arXiv:2301.11325, 2023.
Abdul Fatir Ansari, Lorenzo Stella, Caner Turkmen, Xiyuan Zhang, Pedro Mercado, Huibin Shen,
Oleksandr Shchur, Syama Sundar Rangapuram, Sebastian Pineda Arango, Shubham Kapoor,
Jasper Zschiegner, Danielle C. Maddix, Michael W. Mahoney, Kari Torkkola, Andrew Gor-
don Wilson, Michael Bohlke-Schneider, and Yuyang Wang. Chronos: Learning the language
of time series. Transactions on Machine Learning Research, 2024.
ISSN 2835-8856. URL
https://openreview.net/forum?id=gerNCVqqtR.
Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environ-
ment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:
253–279, 2013.
Peter J Brockwell and Richard A Davis. Time series: theory and methods. Springer science &
business media, 1991.
Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter
Abbeel, Aravind Srinivas, and Igor Mordatch. Decision transformer: Reinforcement learning
via sequence modeling. arXiv preprint arXiv:2106.01345, 2021.
Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, and Alexan-
dre D´efossez. Simple and controllable music generation. In Thirty-seventh Conference on Neural
Information Processing Systems, 2023.
Abhimanyu Das, Weihao Kong, Rajat Sen, and Yichen Zhou. A decoder-only foundation model for
time-series forecasting. In International Conference on Machine Learning, 2024.
Alfredo De la Fuente, Saurabh Singh, and Johannes Ball´e. Fourier basis density model. arXiv
preprint arXiv:2402.15345, 2024.
Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, and Ilya Sutskever.
Jukebox: A generative model for music. arXiv preprint arXiv:2005.00341, 2020.
Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha
Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models.
arXiv preprint arXiv:2407.21783, 2024.
Jesse Farebrother, Jordi Orbay, Quan Vuong, Adrien Ali Ta¨ıga, Yevgen Chebotar, Ted Xiao, Alex
Irpan, Sergey Levine, Pablo Samuel Castro, Aleksandra Faust, et al. Stop regressing: Training
value functions via classification for scalable deep rl. arXiv preprint arXiv:2403.03950, 2024.
Jort F. Gemmeke, Daniel P. W. Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R. Channing
Moore, Manoj Plakal, and Marvin Ritter. Audio set: An ontology and human-labeled dataset for
audio events. In Proc. IEEE ICASSP 2017, New Orleans, LA, 2017.
Yuan Gong, Yu-An Chung, and James Glass. Psla: Improving audio tagging with pretraining,
sampling, labeling, and aggregation. IEEE/ACM Transactions on Audio, Speech, and Language
Processing, 2021. doi: 10.1109/TASLP.2021.3120633.
Jia Gu, Liang Pang, Huawei Shen, and Xueqi Cheng. Do llms play dice? exploring probabil-
ity distribution sampling in large language models for behavioral simulation. arXiv preprint
arXiv:2404.09043, 2024.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing
human-level performance on imagenet classification. In Proceedings of the IEEE international
conference on computer vision, pp. 1026–1034, 2015.
Aspen K Hopkins, Alex Renda, and Michael Carbin. Can llms generate random numbers? evaluat-
ing llm sampling in controlled domains. In ICML 2023 Workshop: Sampling and Optimization in
Discrete Space, 2023.
Tsuyoshi Inouye, Kazuhiro Shinosaki, H. Sakamoto, Seigo Toi, Satoshi Ukai, Akinori Iyama, Y. Katsuda, and Makiko Hirano. Quantification of EEG irregularity by use of the entropy of the power spectrum. Electroencephalography and Clinical Neurophysiology, 79(3):204–210, 1991. ISSN 0013-4694. doi: https://doi.org/10.1016/0013-4694(91)90138-T. URL https://www.sciencedirect.com/science/article/pii/001346949190138T.
Furong Jia, Kevin Wang, Yixiang Zheng, Defu Cao, and Yan Liu. Gpt4mts: Prompt-based large
language model for multimodal time-series forecasting. Proceedings of the AAAI Conference on
Artificial Intelligence, 38(21):23343–23351, Mar. 2024. doi: 10.1609/aaai.v38i21.30383. URL
https://ojs.aaai.org/index.php/AAAI/article/view/30383.
Ming Jin, Shiyu Wang, Lintao Ma, Zhixuan Chu, James Y Zhang, Xiaoming Shi, Pin-Yu Chen, Yux-
uan Liang, Yuan-Fang Li, Shirui Pan, and Qingsong Wen. Time-LLM: Time series forecasting by
reprogramming large language models. In International Conference on Learning Representations
(ICLR), 2024.
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child,
Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language
models. arXiv preprint arXiv:2001.08361, 2020.
Shuang Li, Xavier Puig, Chris Paxton, Yilun Du, Clinton Wang, Linxi Fan, Tao Chen, De-An Huang,
Ekin Aky¨urek, Anima Anandkumar, et al. Pre-trained language models for interactive decision-
making. Advances in Neural Information Processing Systems, 35:31199–31212, 2022.
Yixin Liu, Avi Singh, C Daniel Freeman, John D Co-Reyes, and Peter J Liu.
Improving large
language model fine-tuning for solving math problems. arXiv preprint arXiv:2310.10047, 2023.
Madani, Krause, Greene, et al. Large language models generate functional protein sequences across diverse families. Nature Biotechnology, 41:1099–1106, 2023. doi: 10.1038/s41587-022-01618-2.
Suvir Mirchandani, Fei Xia, Pete Florence, Brian Ichter, Danny Driess, Montserrat Gonzalez Are-
nas, Kanishka Rao, Dorsa Sadigh, and Andy Zeng. Large language models as general pattern
machines. In Proceedings of the 7th Conference on Robot Learning (CoRL), 2023.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Belle-
mare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level
control through deep reinforcement learning. nature, 518(7540):529–533, 2015.
Nate Gruver, Marc Finzi, Shikai Qiu, and Andrew Gordon Wilson. Large Language Models Are Zero Shot Time Series Forecasters. In Advances in Neural Information Processing Systems, 2023.
Akshay Paruchuri, Jake Garrison, Shun Liao, John Hernandez, Jacob Sunshine, Tim Althoff, Xin
Liu, and Daniel McDuff. What are the odds? language models are capable of probabilistic
reasoning. arXiv preprint arXiv:2406.12830, 2024.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language under-
standing by generative pre-training. OpenAI website, 2018.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi
Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text
transformer. Journal of machine learning research, 21(140):1–67, 2020.
Yasaman Razeghi, Robert L Logan IV, Matt Gardner, and Sameer Singh.
Impact of pretraining
term frequencies on few-shot numerical reasoning. In Yoav Goldberg, Zornitsa Kozareva, and
Yue Zhang (eds.), Findings of the Association for Computational Linguistics: EMNLP 2022, pp.
840–854, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational
Linguistics. doi: 10.18653/v1/2022.findings-emnlp.59.
James Requeima, John Bronskill, Dami Choi, Richard E Turner, and David Duvenaud. Llm
processes: Numerical predictive distributions conditioned on natural language. arXiv preprint
arXiv:2405.12856, 2024.
Vincent Sitzmann, Julien Martel, Alexander Bergman, David Lindell, and Gordon Wetzstein. Im-
plicit neural representations with periodic activation functions. Advances in neural information
processing systems, 33:7462–7473, 2020.
Xingyou Song, Oscar Li, Chansoo Lee, Bangding Yang, Daiyi Peng, Sagi Perel, and Yutian Chen.
Omnipred: Language models as universal regressors. CoRR, abs/2402.14547, 2024. doi: 10.
48550/ARXIV.2402.14547. URL https://doi.org/10.48550/arXiv.2402.14547.
Michael Spivey. The continuity of mind. Oxford University Press, 2008.
Elias M Stein and Rami Shakarchi. Fourier analysis: an introduction, volume 1. Princeton Univer-
sity Press, 2003.
Elias M Stein and Rami Shakarchi. Real Analysis: Measure Theory, Integration, and Hilbert Spaces,
volume 3. Princeton University Press, 2005.
Terry Tao, Dec 2014. URL https://terrytao.wordpress.com/2014/12/09/254a-notes-2-complex-analytic-multiplicative-number-theory/#nxx.
Robert Vacareanu, Vlad Andrei Negru, Vasile Suciu, and Mihai Surdeanu. From words to numbers:
Your large language model is secretly a capable regressor when given in-context examples. In
First Conference on Language Modeling, 2024. URL https://openreview.net/forum?
id=LzpaUxcNFK.
Shiyu Wang, Haixu Wu, Xiaoming Shi, Tengge Hu, Huakun Luo, Lintao Ma, James Y Zhang,
and JUN ZHOU. Timemixer: Decomposable multiscale mixing for time series forecasting. In
International Conference on Learning Representations (ICLR), 2024.
Megan Wei, Michael Freeman, Chris Donahue, and Chen Sun. Do music generation models encode
music theory? In International Society for Music Information Retrieval, 2024.
Eric W. Weisstein. Square wave. From MathWorld–A Wolfram Web Resource, 2024. URL https:
//mathworld.wolfram.com/SquareWave.html. Accessed: September 16, 2024.
Ruibin Yuan, Hanfeng Lin, Yi Wang, Zeyue Tian, Shangda Wu, Tianhao Shen, Ge Zhang, Yuhang
Wu, Cong Liu, Ziya Zhou, Ziyang Ma, Liumeng Xue, Ziyu Wang, Qin Liu, Tianyu Zheng, Yizhi
Li, Yinghao Ma, Yiming Liang, Xiaowei Chi, Ruibo Liu, Zili Wang, Pengfei Li, Jingcheng Wu,
Chenghua Lin, Qifeng Liu, Tao Jiang, Wenhao Huang, Wenhu Chen, Emmanouil Benetos, Jie Fu,
Gus Xia, Roger Dannenberg, Wei Xue, Shiyin Kang, and Yike Guo. Chatmusician: Understand-
ing and generating music intrinsically with llm. arXiv preprint arXiv:2307.07443, 2024.
Tian Zhou, Peisong Niu, Xue Wang, Liang Sun, and Rong Jin. One Fits All: Power general time
series analysis by pretrained lm. In NeurIPS, 2023.
Appendix
Table of Contents
A Proof of Fourier Head Scaling Law, Theorem 3.3
  A.1 Definitions
  A.2 Overview of Proof
  A.3 Proving Theorem 3.3 Using the Lemmata
  A.4 Proving the Lemmata

B Smoothness Metric

C Additional Experiment Details, Toy Examples
  C.1 Motivating Example: Audio Spectrogram Transformer
  C.2 Learning a Continuous Density
  C.3 MLE-based Fourier Head
  C.4 Are LLMs Random Number Generators?

D Additional Experiment Details, Large-Scale Examples
  D.1 Decision Transformer
  D.2 Chronos
A PROOF OF FOURIER HEAD SCALING LAW, THEOREM 3.3
In this section we prove Theorem 3.3, the Fourier head scaling law. To do this, we must first discuss
the Nyquist-Shannon Sampling Theorem. This result states that in order to avoid distortion of a
signal (such as aliasing) the sampling rate must be at least twice the bandwidth of the signal. In the
setting of the Fourier head, our sampling rate is m/2 because we have m bins uniformly spaced in
(−1, 1), and the bandwidth is N/2 because the frequency of sin(πN x) is N/2. Thus the Nyquist
Theorem requires us to have
m/2 ≥ 2 · (N/2) = N
in order for the higher-order frequency content learned by our model not to be spurious when we are learning from only m bins. This justifies why we only theoretically study the case 1 ≪ N < m/2
in the scaling law.
A.1 DEFINITIONS
Consider an input x ∈ Rn to the Fourier head, and denote by fx : [−1, 1] → R the optimal
conditional distribution that we would like the Fourier head to approximate for this input. We will
assume that fx is periodic, since the Fourier head learns a 2-periodic Fourier density. We denote
by fx,N the truncation of the Fourier series of fx to its first N frequencies. Note that fx,N also
integrates to 1 over [−1, 1] since its first Fourier coefficient is the same as that of fx. Further, fx,N is
non-negative on [−1, 1] since its Fourier coefficients, being a subsequence of the coefficients of fx,
are non-negative definite; a periodic function with non-negative definite Fourier coefficients is non-
negative by Herglotz’s Theorem (Brockwell & Davis, 1991, Corollary 4.3.2). For completeness, we
will recall the convolution formulas, specialized to the cases we consider in our argument.
Definition A.1 (Discrete convolution). Let b_j := −1 + (2j+1)/m, 0 ≤ j < m, be the center points of the m bins in (−1, 1), and let us denote b⃗ := (b_0, . . . , b_{m−1}). Denote by
\[
G_\sigma(z) := \frac{e^{-z^2/2\sigma^2}}{\sqrt{2\pi}\,\sigma}
\]
the Gaussian PDF with standard deviation σ. Then the discrete Gaussian convolution filter of radius m − 1 is
\[
g_\sigma := \frac{G_\sigma([1-m,\, 2-m,\, 3-m,\, \dots,\, m-1])}{S(m, \sigma)} \in \mathbb{R}^{2m-1}, \tag{A.1}
\]
where the normalization constant is
\[
S(m, \sigma) := \sum_{k=1-m}^{m-1} G_\sigma(k). \tag{A.2}
\]
The discrete convolution of g_σ ∈ R^{2m−1} and f_{x,N}(b⃗) ∈ R^m is the vector (g_σ ∗ f_{x,N})(b⃗) ∈ R^m whose j-th coordinate is given by
\[
(g_\sigma * f_{x,N})(b_j) = \frac{1}{S(m, \sigma)} \sum_{k=1-m}^{m-1} G_\sigma(k)\, f_{x,N}(b_{j-k}). \tag{A.3}
\]
Definition A.2 (Continuous convolution). The continuous Gaussian convolution filter ˜g_σ : [−2, 2] → R_{>0} is
\[
\tilde{g}_\sigma(z) = \frac{G_{2\sigma/m}(z)}{S(m, \sigma)} = \frac{m}{2 S(m, \sigma)}\, G_\sigma\!\left(\frac{mz}{2}\right). \tag{A.4}
\]
This function ˜g_σ(z) is a normalized truncation of a Gaussian PDF with mean 0 and standard deviation 2σ/m. The continuous convolution of ˜g_σ : [−2, 2] → R and the periodic function f_{x,N} : [−1, 1] → R is
\[
(\tilde{g}_\sigma * f_{x,N})(z) := \int_{-2}^{2} \tilde{g}_\sigma(u)\, f_{x,N}(z - u)\, du. \tag{A.5}
\]
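Definition A.1 translates directly into a few lines of NumPy, sketched below. Out-of-range bin indices b_{j−k} are wrapped periodically here, which is an assumption consistent with f_{x,N} being 2-periodic.

```python
import numpy as np

def gaussian(z, sigma):
    """G_sigma(z): Gaussian PDF with mean 0 and standard deviation sigma."""
    return np.exp(-z**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

def discrete_gaussian_filter(m, sigma):
    """g_sigma in Definition A.1: G_sigma evaluated at 1-m, ..., m-1, normalized."""
    k = np.arange(1 - m, m)
    g = gaussian(k, sigma)
    return g / g.sum()                       # division by S(m, sigma)

def discrete_convolution(f_on_bins, sigma):
    """(g_sigma * f)(b_j) from Equation A.3, with periodic indexing of bins."""
    m = len(f_on_bins)
    k = np.arange(1 - m, m)
    g = discrete_gaussian_filter(m, sigma)
    out = np.zeros(m)
    for j in range(m):
        out[j] = np.sum(g * f_on_bins[(j - k) % m])
    return out

m = 32
b = -1 + (2 * np.arange(m) + 1) / m           # bin centerpoints in (-1, 1)
f = 0.5 + 0.3 * np.cos(np.pi * b)             # a simple truncated Fourier PDF on the bins
print(np.round(discrete_convolution(f, sigma=2.0), 3))
```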
A.2 OVERVIEW OF PROOF
In this subsection, we provide an overview of the proof of Theorem 3.3 by presenting the statements
of the lemmata that we will need, and connecting each one to the overall argument. In the next
subsection, we rigorously prove the scaling law by careful applications of these lemmata. And in
the following subsection, we will rigorously prove each of the lemmata.
This first lemma allows us to replace the discrete Gaussian convolution in the definition with a
continuous Gaussian convolution.
Lemma A.3. (Discrete convolution is close to continuous convolution) If we define the constant B_1(m, σ) := 1 + G_σ(m)/S(m, σ), then we have that
\[
\| f_{x,N}(\vec{b}) - g_\sigma * f_{x,N}(\vec{b}) \|_2 = \| B_1(m,\sigma)\, f_{x,N}(\vec{b}) - \tilde{g}_\sigma * f_{x,N}(\vec{b}) \|_2 + \sqrt{m}\, O(1/N^{2t+1}). \tag{A.6}
\]
Furthermore, B_1(m, σ) satisfies the following bound, uniformly in σ:
\[
B_1(m, \sigma) \le 1 + \frac{1}{2m-1}. \tag{A.7}
\]
This next lemma, a standard result from analytic number theory allows us to upper bound the sums
of the norms of the Fourier series coefficients. This is proved in various places, see e.g. (Tao, 2014,
Equation 21).
Lemma A.4 (Asymptotic expansion of the Riemann zeta function). Consider the Riemann zeta function ζ(t) := \sum_{k=1}^{\infty} 1/k^t. If t ≥ 2, then
\[
\sum_{k=1}^{N} \frac{1}{k^t} = \zeta(t) - \frac{1}{t-1} \cdot \frac{1}{N^{t-1}} + O(1/N^t). \tag{A.8}
\]
This next lemma allows us to extract the main asymptotic behavior in the scaling law.
Lemma A.5. (Main term asymptotic) Denote by a_0(x) the constant coefficient of f_{x,N}. Let us suppose that the Fourier coefficients of f_{x,N} decay like B_3(x)/k^t, and define the constant
\[
B_2(\sigma, m, x) := \sqrt{a_0(x)^2 B_1(m,\sigma)^2 + 2 B_1(m,\sigma)^2 B_3(x)^2 \zeta(2t)}. \tag{A.9}
\]
Then we know that
\[
\| B_1(m,\sigma)\, f_{x,N}(\vec{b}) - \tilde{g}_\sigma * f_{x,N}(\vec{b}) \|_2 \tag{A.10}
\]
\[
= \sqrt{m} \left( B_2(\sigma, m, x) - \frac{B_1(m,\sigma)^2 B_3(x)^2}{2t-1} \cdot \frac{1}{N^{2t-1}} + O(1/N^{2t}) \right). \tag{A.11}
\]
Furthermore, B_2(σ, m, x) is bounded from above and below as a function of σ and m.
This final lemma allows us to relate the continuous case, where our analysis works out easier, to the
discrete case, where our smoothness metric is actually defined.
Lemma A.6. (The average value of the truncated Fourier PDF is 1/2) If N < m/2, then
\[
\sum_{j=0}^{m-1} f_{x,N}(b_j) = \frac{m}{2}. \tag{A.12}
\]
A.3 PROVING THEOREM 3.3 USING THE LEMMATA
We now prove the theorem that provides a scaling law for the Fourier head. This result quantifies
the trade-off between modeling capacity and smoothness as the number of frequencies increases.
In order to prove this, we must assume that fx, the conditional distribution being learned by the
Fourier head, is sufficiently smooth. For example, if fx is twice continuously differentiable, then the
Fourier coefficients corresponding to the k-th frequency of fx are in O(1/k2) (Stein & Shakarchi,
2003, Ch.2, Cor. 2.4). Thus, our assumption that the Fourier coefficients decay quadratically is
reasonable, and our Fourier weight decay regularization helps ensure that this condition is met in
practice as well.
In our theorem, we generalize this hypothesis to the cases where the Fourier
coefficients corresponding to the k-th frequency of fx are in O(1/kt).
Theorem 3.3. (Fourier head scaling law.) Consider a Fourier head with input dimension n, output dimension m, and N frequencies. Suppose that 1 ≪ N < m/2. Then the following are true:

1. (Increasing N improves modeling power.) As N increases, the Fourier head is capable of learning a larger class of densities.

2. (Increasing N degrades smoothness.) Consider an input to the Fourier head x ∈ R^n, and denote by f_x : [−1, 1] → R the optimal conditional distribution that we would like the Fourier head to approximate for this input. Suppose that there exists some t ≥ 2 such that the Fourier coefficients of f_x decay on the order of 1/k^t. Denote by f_{x,N} the truncation of f_x to its first N frequencies, denote by b⃗ ∈ R^m the m bin centerpoints in [−1, 1], and denote by y^{(N)} = f_{x,N}(b⃗)/(f_{x,N}(b_0) + · · · + f_{x,N}(b_{m−1})) ∈ R^m the discretization of f_{x,N} into m bins. Then there exist constants C_1, C_2 > 0 such that
\[
s(y^{(N)}) = C_1 - \frac{C_2}{N^{2t-1}} + O(1/N^{2t}). \tag{3.3}
\]
Proof of Claim 2 of Theorem 3.3. We can estimate that
\[
s(y^{(N)}) = \frac{1}{\sum_{j=0}^{m-1} f_{x,N}(b_j)} \sum_{\sigma=1}^{\infty} \alpha_\sigma \| f_{x,N}(\vec{b}) - g_\sigma * f_{x,N}(\vec{b}) \|_2 \qquad \text{(Definition 3.2)}
\]
\[
= \frac{1}{\sum_{j=0}^{m-1} f_{x,N}(b_j)} \sum_{\sigma=1}^{\infty} \alpha_\sigma \left( \| B_1(m,\sigma) f_{x,N}(\vec{b}) - (\tilde{g}_\sigma * f_{x,N})(\vec{b}) \|_2 + \sqrt{m}\, O(1/N^{2t+1}) \right) \qquad \text{(Lemma A.3)}
\]
\[
= \frac{\sqrt{m}}{\sum_{j=0}^{m-1} f_{x,N}(b_j)} \sum_{\sigma=1}^{\infty} \alpha_\sigma \left( B_2(\sigma,m,x) - \frac{B_1(m,\sigma)^2 B_3(x)^2}{(2t-1) N^{2t-1}} + O(1/N^{2t}) + O(1/N^{2t+1}) \right) \qquad \text{(Lemma A.5)}
\]
\[
= \frac{2}{\sqrt{m}} \cdot \left( C_3 - \frac{C_4}{N^{2t-1}} + O(1/N^{2t}) \right). \qquad \text{(Lemmata A.5, A.6)}
\]
In the last step we used the convergence of the respective series (which follows from the boundedness of B_2(σ, m, x) and B_1(m, σ) in σ), and we assigned C_3 and C_4 to be those sums. This completes the proof.
Proof of Claim 1 of Theorem 3.3. The proof of this claim is more straightforward. For any function
f on [−1, 1] that is at least twice continuously differentiable, we know that the Fourier series of f
converges uniformly and absolutely to f (Stein & Shakarchi, 2003, Ch. 2, Cor. 2.4). In other words,
the function fN being learnt by the Fourier head converges uniformly and absolutely to f .
A.4 PROVING THE LEMMATA
In this subsection, we will restate and prove Lemmata A.3, A.5, and A.6.
Lemma A.3. (Discrete convolution is close to continuous convolution) If we define the constant B_1(m, σ) := 1 + G_σ(m)/S(m, σ), then we have that
\[
\| f_{x,N}(\vec{b}) - g_\sigma * f_{x,N}(\vec{b}) \|_2 = \| B_1(m,\sigma)\, f_{x,N}(\vec{b}) - \tilde{g}_\sigma * f_{x,N}(\vec{b}) \|_2 + \sqrt{m}\, O(1/N^{2t+1}). \tag{A.6}
\]
Furthermore, B_1(m, σ) satisfies the following bound, uniformly in σ:
\[
B_1(m, \sigma) \le 1 + \frac{1}{2m-1}. \tag{A.7}
\]
Proof of Lemma A.3. Extending f_{x,N} periodically to [−2, 2], we can compute that the continuous convolution (˜g_σ ∗ f_{x,N})(z) is
\[
(\tilde{g}_\sigma * f_{x,N})(z) = \int_{-2}^{2} \tilde{g}_\sigma(u)\, f_{x,N}(z-u)\, du \tag{A.13}
\]
\[
= \frac{m}{2 S(m,\sigma)} \int_{-2}^{2} G_\sigma\!\left(\frac{mu}{2}\right) f_{x,N}(z-u)\, du \tag{A.14}
\]
\[
= \frac{1}{S(m,\sigma)} \int_{-m}^{m} G_\sigma(s)\, f_{x,N}\!\left(z - \frac{2s}{m}\right) ds, \tag{A.15}
\]
where in the third step we applied the change of variables s = mu/2. We claim that this is precisely a continuous approximation of the discrete convolution in Definition 3.2. To see this, we will apply the Euler-Maclaurin formula. This formula says that the integral in Equation A.15 is a Riemann sum over rectangles of width 1 evaluated at the right endpoints of each interval, minus an error term E(m, σ), as follows:
\[
(\tilde{g}_\sigma * f_{x,N})(b_j) + E(m,\sigma) = \frac{1}{S(m,\sigma)} \sum_{k=1-m}^{m} G_\sigma(k)\, f_{x,N}\!\left(b_j - \frac{2k}{m}\right) \tag{A.16}
\]
\[
= \frac{1}{S(m,\sigma)} \sum_{k=1-m}^{m} G_\sigma(k)\, f_{x,N}\!\left(-1 + \frac{2j+1}{m} - \frac{2k}{m}\right) \tag{A.17}
\]
\[
= \frac{1}{S(m,\sigma)} \sum_{k=1-m}^{m} G_\sigma(k)\, f_{x,N}\!\left(-1 + \frac{2(j-k)+1}{m}\right) \tag{A.18}
\]
\[
= \frac{1}{S(m,\sigma)} \left( \sum_{k=1-m}^{m-1} G_\sigma(k)\, f_{x,N}(b_{j-k}) + G_\sigma(m)\, f_{x,N}(b_{j-m}) \right) \tag{A.19}
\]
\[
= \frac{1}{S(m,\sigma)} \Big( S(m,\sigma)\,(g_\sigma * f_{x,N})(b_j) + G_\sigma(m)\, f_{x,N}(b_j) \Big) \tag{A.20}
\]
\[
= (g_\sigma * f_{x,N})(b_j) + \frac{1}{S(m,\sigma)}\, G_\sigma(m)\, f_{x,N}(b_j), \tag{A.21}
\]
where the error term is defined as
\[
E(m,\sigma) := \frac{1}{S(m,\sigma)} \int_{-m}^{m} \frac{d}{ds}\!\left( G_\sigma(s)\, f_{x,N}\!\left(z - \frac{2s}{m}\right) \right) P_1(s)\, ds \tag{A.22}
\]
\[
\quad + \frac{1}{2 S(m,\sigma)} \Big( G_\sigma(m)\, f_{x,N}(z-2) - G_\sigma(-m)\, f_{x,N}(z+2) \Big), \tag{A.23}
\]
where P_1(s) := s − ⌊s⌋ − 1/2 is the periodized Bernoulli polynomial. We will now estimate this error term. Note that since G_σ is an even function and f_{x,N} is periodic with period 2, the difference in A.23 is 0. Therefore, we can compute that
\[
E(m,\sigma) = \frac{1}{S(m,\sigma)} \int_{-m}^{m} G_\sigma'(s)\, P_1(s)\, f_{x,N}\!\left(z - \frac{2s}{m}\right) ds - \frac{2}{m\, S(m,\sigma)} \int_{-m}^{m} G_\sigma(s)\, P_1(s)\, f_{x,N}'\!\left(z - \frac{2s}{m}\right) ds. \tag{A.24, A.25}
\]
Using the triangle inequality, we can bound E(m, σ) in terms of convolutions with ˜g_σ:
\[
|E(m,\sigma)| \le \frac{1}{S(m,\sigma)} \left| \int_{-m}^{m} G_\sigma'(s)\, P_1(s)\, f_{x,N}\!\left(z - \frac{2s}{m}\right) ds \right| + \frac{1}{S(m,\sigma)} \left| \int_{-m}^{m} G_\sigma(s)\, P_1(s)\, f_{x,N}'\!\left(z - \frac{2s}{m}\right) \frac{-2}{m}\, ds \right| \tag{A.26, A.27}
\]
\[
\le \frac{1}{S(m,\sigma)} \left( \frac{m}{2\sigma^2} \int_{-m}^{m} G_\sigma(s)\, f_{x,N}\!\left(z - \frac{2s}{m}\right) ds + \frac{1}{m} \int_{-m}^{m} G_\sigma(s) \left| f_{x,N}'\!\left(z - \frac{2s}{m}\right) \right| ds \right) \tag{A.28, A.29}
\]
\[
= \frac{m}{2\sigma^2}\, (\tilde{g}_\sigma * f_{x,N})(z) + \frac{1}{m} \left( \tilde{g}_\sigma * |f_{x,N}'| \right)(z), \tag{A.30}
\]
where in Equation A.29 we used that |P_1(s)| ≤ 1/2 and that |G_σ'(s)| = |s| G_σ(s)/σ^2 ≤ m G_σ(s)/σ^2 for s ∈ [−m, m].
Note that since ˜gσ is a truncated Gaussian on [−2, 2], it is infinitely differentiable on the open set
(−2, 2), however, it is not differentiable at the endpoints −2 and 2 when treated as a 4-periodic
function. This technical difficulty can be resolved using mollifiers: we can replace ˜gσ with ˜gσ ∗ φϵ,
where {φϵ} is a family of mollifiers indexed by ϵ > 0. The key properties of a mollifier are that
˜gσ ∗ φϵ is infinitely differentiable as a 4-periodic function for all ϵ > 0 and limϵ→0 ˜gσ ∗ φϵ = ˜gσ
(Stein & Shakarchi, 2005, Ch. 3). We are ultimately interested in only bounds on absolute values
of ˜gσ convolved with various functions, and since absolute values are continuous and inequalities
are preserved under taking limits, all our bounds are still true. In particular, this shows that the k’th
Fourier coefficients of ˜gσ decay faster than any polynomial. And on the other hand, by assumption
we know that the Fourier coefficients of f_{x,N} decay on the order of 1/k^t; and we know that |f_{x,N}'| is continuous and 2π-periodic, so its Fourier coefficients converge. So by the convolution theorem, we can deduce that the Fourier coefficients of ˜g_σ ∗ f_{x,N} and ˜g_σ ∗ |f_{x,N}'| decay faster than any polynomial. Summed over the N frequencies, this shows that |(˜g_σ ∗ f_{x,N})(z)| and |(˜g_σ ∗ |f_{x,N}'|)(z)| decay faster than any polynomial as well. Since m is fixed and σ ≥ 1, this implies that
\[
|E(m,\sigma)| = O(1/N^{2t+1}). \tag{A.31}
\]
Using Definition A.1 and Equation A.16, we have that
\[
(g_\sigma * f_{x,N})(b_j) = \frac{1}{S(m,\sigma)} \sum_{k=1-m}^{m} G_\sigma(k)\, f_{x,N}\!\left(b_j - \frac{2k}{m}\right) - \frac{1}{S(m,\sigma)}\, G_\sigma(m)\, f_{x,N}(b_j) \tag{A.32}
\]
\[
= (\tilde{g}_\sigma * f_{x,N})(b_j) + E(m,\sigma) - \frac{1}{S(m,\sigma)}\, G_\sigma(m)\, f_{x,N}(b_j). \tag{A.33}
\]
If we define C_1(m, σ) := G_σ(m)/S(m, σ), then Equation A.33 combined with A.31 together imply that
\[
\big| (g_\sigma * f_{x,N})(b_j) - (\tilde{g}_\sigma * f_{x,N})(b_j) + C_1(m,\sigma)\, f_{x,N}(b_j) \big| = O(1/N^{2t+1}). \tag{A.34}
\]
Finally, we can estimate that
\[
\| f_{x,N}(\vec{b}) - g_\sigma * f_{x,N}(\vec{b}) \|_2 \tag{A.35}
\]
\[
= \| f_{x,N}(\vec{b}) - \tilde{g}_\sigma * f_{x,N}(\vec{b}) + C_1(m,\sigma)\, f_{x,N}(\vec{b}) \|_2 + \| g_\sigma * f_{x,N}(\vec{b}) - \tilde{g}_\sigma * f_{x,N}(\vec{b}) + C_1(m,\sigma)\, f_{x,N}(\vec{b}) \|_2 \tag{A.36, A.37}
\]
\[
= \| f_{x,N}(\vec{b}) - \tilde{g}_\sigma * f_{x,N}(\vec{b}) + C_1(m,\sigma)\, f_{x,N}(\vec{b}) \|_2 + \sqrt{m}\, \| O(1/N^{2t+1}) \|_2 \tag{A.38}
\]
\[
= \| (1 + C_1(m,\sigma))\, f_{x,N}(\vec{b}) - \tilde{g}_\sigma * f_{x,N}(\vec{b}) \|_2 + \sqrt{m}\, O(1/N^{2t+1}). \tag{A.39}
\]
This completes the first part of the proof. For the second part of the proof, since S(m, σ) ≥ (2m − 1) G_σ(m − 1), we can estimate that
\[
B_1(m,\sigma) \le 1 + \frac{G_\sigma(m)}{(2m-1)\, G_\sigma(m-1)} \le 1 + \frac{e^{-(2m-1)/2\sigma^2}}{2m-1} \le 1 + \frac{1}{2m-1}. \tag{A.40}
\]
This completes the proof.
Lemma A.5. (Main term asymptotic) Denote by a_0(x) the constant coefficient of f_{x,N}. Let us suppose that the Fourier coefficients of f_{x,N} decay like B_3(x)/k^t, and define the constant
\[
B_2(\sigma, m, x) := \sqrt{a_0(x)^2 B_1(m,\sigma)^2 + 2 B_1(m,\sigma)^2 B_3(x)^2 \zeta(2t)}. \tag{A.9}
\]
Then we know that
\[
\| B_1(m,\sigma)\, f_{x,N}(\vec{b}) - \tilde{g}_\sigma * f_{x,N}(\vec{b}) \|_2 \tag{A.10}
\]
\[
= \sqrt{m} \left( B_2(\sigma, m, x) - \frac{B_1(m,\sigma)^2 B_3(x)^2}{2t-1} \cdot \frac{1}{N^{2t-1}} + O(1/N^{2t}) \right). \tag{A.11}
\]
Furthermore, B_2(σ, m, x) is bounded from above and below as a function of σ and m.
Proof of Lemma A.5. We will first argue that B_2(σ, m, x), as a function of σ and m, is bounded from above and below. Indeed, from its definition, B_2(σ, m, x) ≥ √(a_0(x)^2 B_1(m, σ)^2) ≥ |a_0(x)|. But the Fourier PDF has integral 1 over [−1, 1], so its constant term is a_0(x) = 1/2. This implies that B_2(σ, m, x) ≥ 1/2. To see that B_2(σ, m, x) is bounded from above, we simply recall that in Lemma A.3 we showed that |B_1(m, σ)| ≤ 2, which implies that |B_2(σ, m, x)| ≤ √(4 a_0(x)^2 + 8 B_3(x)^2 ζ(2t)). This shows that B_2(σ, m, x) is bounded above and below as a function of m and σ, as claimed.
Now, let (d_0(x), . . . , d_{m−1}(x)) ∈ R^m be the discrete Fourier transform of (B_1(m, σ) f_{x,N} − ˜g_σ ∗ f_{x,N})(b⃗) ∈ R^m. For notational simplicity, we will write B_1 = B_1(m, σ) as long as σ is fixed. By Plancherel's Theorem, we have
\[
\sum_{j=0}^{m-1} |(B_1 f_{x,N} - \tilde{g}_\sigma * f_{x,N})(b_j)|^2 = \frac{1}{m} \sum_{k=0}^{m-1} |d_k(x)|^2. \tag{A.41}
\]
Let h_{σ,j} be the Fourier coefficients of ˜g_σ, treated as a periodic function with period 4, and defined over [−2, 2]:
\[
\tilde{g}_\sigma(z) = \sum_{j=-\infty}^{\infty} h_{\sigma,j}\, e^{\pi i j z / 2}. \tag{A.42}
\]
Since f_{x,N} is defined over [−1, 1] and is periodic with period 2, we can likewise treat it as a function over [−2, 2] with period 4, in which case we can rewrite its Fourier series as
\[
f_{x,N}(z) = \sum_{j=-2N}^{2N} \tilde{a}_j(x)\, e^{\pi i j z / 2}, \tag{A.43}
\]
where
\[
\tilde{a}_j(x) = \begin{cases} a_{j/2}(x) & \text{if } j \equiv 0 \pmod{2} \\ 0 & \text{else.} \end{cases} \tag{A.44}
\]
Then, by the Convolution Theorem, we have
\[
(B_1 f_{x,N} - \tilde{g}_\sigma * f_{x,N})(b_l) = \sum_{j=-2N}^{2N} \tilde{a}_j(x)\, (B_1 - h_{\sigma,j})\, e^{\pi i j b_l / 2} \tag{A.45}
\]
\[
= \sum_{k=-N}^{N} \tilde{a}_{2k}(x)\, (B_1 - h_{\sigma,2k})\, e^{\pi i k b_l} \tag{A.46}
\]
\[
= \sum_{k=-N}^{N} a_k(x)\, (B_1 - h_{\sigma,2k})\, e^{\pi i k b_l}, \tag{A.47}
\]
where in the second equality we used the fact that ˜a_j is 0 for odd j and therefore re-indexed using k = j/2. Thus, using the definition of the DFT along with Equation A.47, we get
\[
d_k(x) = \sum_{l=0}^{m-1} (B_1 f_{x,N} - \tilde{g}_\sigma * f_{x,N})(b_l)\, e^{-2\pi i k l / m} \tag{A.48}
\]
\[
= \sum_{l=0}^{m-1} \sum_{j=-N}^{N} a_j(x)\, (B_1 - h_{\sigma,2j})\, e^{\pi i j b_l}\, e^{-2\pi i k l / m} \tag{A.49}
\]
\[
= \sum_{j=-N}^{N} a_j(x)\, (B_1 - h_{\sigma,2j}) \sum_{l=0}^{m-1} e^{\pi i j (-1 + \frac{2l+1}{m})}\, e^{-2\pi i k l / m} \tag{A.50}
\]
\[
= \sum_{j=-N}^{N} a_j(x)\, (B_1 - h_{\sigma,2j})\, e^{\pi i j (-1 + 1/m)} \sum_{l=0}^{m-1} e^{2\pi i (j-k) l / m}. \tag{A.51}
\]
First, we claim that at most a single summand (in j) is represented. Towards this, we note that
\[
\sum_{l=0}^{m-1} e^{2\pi i (j-k) l / m} = \begin{cases} 0 & j \not\equiv k \pmod{m} \\ m & j \equiv k \pmod{m}. \end{cases} \tag{A.52}
\]
Then, we note that since 0 < N < m/2, for each 0 ≤ k < m, there is at most one j ∈ {−N, −N + 1, . . . , N} such that j ≡ k (mod m). This shows that there is at most a single summand. We will now find the exact formula for each summand. We consider three cases.
• Case 1: 0 ≤ k ≤ N. In this case, j = k satisfies j ≡ k (mod m), so this index gives the exponential sum of m.
• Case 2: N < k < m − N. In this case, k is too large to be an index in the sum, so we cannot choose j = k; the next smallest equivalent value is j = k − m, which satisfies j ≡ k (mod m). But N − m < j < −N in this case, so j is too small to be an index in the sum; therefore, every exponential sum is zero in this range.
• Case 3: m − N ≤ k ≤ m − 1. In this case, j = k − m satisfies j ≡ k (mod m). We have −N ≤ j ≤ −1, so this is a valid index in the sum.
This gives the following closed formula:
\[
d_k(x) = \begin{cases} m \cdot a_k(x)\, (B_1 - h_{\sigma,2k})\, e^{\pi i k(-1 + 1/m)} & 0 \le k \le N \\ 0 & N < k < m - N \\ m \cdot a_{k-m}(x)\, (B_1 - h_{\sigma,2(k-m)})\, e^{\pi i (k-m)(-1 + 1/m)} & m - N \le k \le m - 1. \end{cases} \tag{A.53}
\]
Using this closed formula in A.41, we obtain
\[
\sum_{j=0}^{m-1} |(B_1 f_{x,N} - \tilde{g}_\sigma * f_{x,N})(b_j)|^2 = \frac{1}{m} \sum_{k=0}^{N} \left| m\, a_k(x)\, (B_1 - h_{\sigma,2k})\, e^{\pi i k(-1+1/m)} \right|^2 + \frac{1}{m} \sum_{k=m-N}^{m-1} \left| m\, a_{k-m}(x)\, (B_1 - h_{\sigma,2(k-m)})\, e^{\pi i (k-m)(-1+1/m)} \right|^2 \tag{A.54, A.55, A.56}
\]
\[
= m \sum_{k=0}^{N} \left| a_k(x)\, (B_1 - h_{\sigma,2k}) \right|^2 + m \sum_{k=m-N}^{m-1} \left| a_{k-m}(x)\, (B_1 - h_{\sigma,2(k-m)}) \right|^2, \tag{A.57}
\]
where in the last step we used that |e^{\pi i k(1-1/m)}| = 1 = |e^{\pi i (k-m)(1-1/m)}|, since they are both complex exponentials. Now, since ˜g_σ is a real and even function, we know that its Fourier coefficients h_{σ,k} are real. Further, since ˜g_σ is infinitely differentiable, we also know that h_{σ,k} = O(1/k^t) (in fact they decay faster than 1/k^p for any p ≥ 1). Thus, using that |a_k(x)| decays like B_3(x)/k^t, we see
\[
|a_k(x)\, (B_1 - h_{\sigma,2k})|^2 = |a_k(x)|^2 \left( B_1^2 - 2 B_1 h_{\sigma,2k} + h_{\sigma,2k}^2 \right) \tag{A.58}
\]
\[
= \frac{B_3^2 B_1^2}{k^{2t}} + O\!\left( \frac{1}{k^{2t} (2k)^t} \right) + O\!\left( \frac{1}{k^{2t} (2k)^{2t}} \right). \tag{A.59}
\]
From A.59, it is clear that since we are interested in only the dominant asymptotic, we can safely ignore the higher order terms coming from the h_{σ,k}. As a result, we can estimate that
\[
\sum_{j=0}^{m-1} |(B_1 f_{x,N} - \tilde{g}_\sigma * f_{x,N})(b_j)|^2 \approx m B_1^2 a_0(x)^2 + m \sum_{k=1}^{N} \frac{B_3^2 B_1^2}{k^{2t}} + m \sum_{k=m-N}^{m-1} \frac{B_3^2 B_1^2}{(k-m)^{2t}} \tag{A.60}
\]
\[
= m a_0(x)^2 B_1^2 + m \sum_{k=1}^{N} \frac{B_3^2 B_1^2}{k^{2t}} + m \sum_{k=-N}^{-1} \frac{B_3^2 B_1^2}{k^{2t}} \tag{A.61}
\]
\[
= m a_0(x)^2 B_1^2 + m \sum_{k=1}^{N} \frac{B_3^2 B_1^2}{k^{2t}} + m \sum_{k=1}^{N} \frac{B_3^2 B_1^2}{k^{2t}} \tag{A.62}
\]
\[
= m a_0(x)^2 B_1^2 + 2m \sum_{k=1}^{N} \frac{B_3^2 B_1^2}{k^{2t}}. \tag{A.63}
\]
Next, we note that our asymptotic in Lemma A.4, applied at 2t, yields
\[
\sum_{k=1}^{N} \frac{1}{k^{2t}} = \zeta(2t) - \frac{1}{2t-1} \cdot \frac{1}{N^{2t-1}} + O(1/N^{2t}). \tag{A.64}
\]
Substituting this into A.63, we obtain
\[
\sum_{j=0}^{m-1} |(B_1 f_{x,N} - \tilde{g}_\sigma * f_{x,N})(b_j)|^2 = m a_0(x)^2 B_1^2 + 2m B_1^2 B_3^2 \left( \zeta(2t) - \frac{1}{2t-1} \cdot \frac{1}{N^{2t-1}} + O(1/N^{2t}) \right) \tag{A.65, A.66}
\]
\[
= m \left( B_2^2 - \frac{2 B_1^2 B_3^2}{2t-1} \cdot \frac{1}{N^{2t-1}} + B_1^2 B_3^2\, O(1/N^{2t}) \right) \tag{A.67}
\]
\[
= m \left( B_2^2 - \frac{2 B_1^2 B_3^2}{2t-1} \cdot \frac{1}{N^{2t-1}} + O(1/N^{2t}) \right), \tag{A.68}
\]
where we defined B_2 = B_2(σ, m, x) := √(a_0(x)^2 B_1^2 + 2 B_1^2 B_3^2 ζ(2t)) as in the statement of the Lemma, and in the third line we applied Lemma A.3 to estimate that B_1 ≤ 2, and we used that B_3 = B_3(x) only depends on x. Then, using the Taylor expansion (1 + x)^{1/2} = 1 + x/2 + O(x^2) about 0, we can estimate that
\[
\left( \sum_{j=0}^{m-1} |(B_1 f_{x,N} - \tilde{g}_\sigma * f_{x,N})(b_j)|^2 \right)^{1/2} \tag{A.69}
\]
\[
= \left( m B_2^2 - \frac{2 m B_1^2 B_3^2}{2t-1} \cdot \frac{1}{N^{2t-1}} + m\, O(1/N^{2t}) \right)^{1/2} \tag{A.70}
\]
\[
= \sqrt{m}\, B_2 \left( 1 - \frac{2 B_1^2 B_3^2}{(2t-1) B_2^2} \cdot \frac{1}{N^{2t-1}} + \frac{1}{B_2^2}\, O(1/N^{2t}) \right)^{1/2} \tag{A.71}
\]
\[
= \sqrt{m}\, B_2 \left( 1 - \frac{B_1^2 B_3^2}{(2t-1) B_2^2} \cdot \frac{1}{N^{2t-1}} + O(1/N^{2t}) \right) \tag{A.72}
\]
\[
= \sqrt{m} \left( B_2 - \frac{B_1^2 B_3^2}{2t-1} \cdot \frac{1}{B_2} \cdot \frac{1}{N^{2t-1}} + O(1/N^{2t}) \right). \tag{A.73}
\]
To justify our application of the Taylor expansion, we note that N ≫ 1, and B2 = B2(σ, m, x) is
bounded below as a function of σ and m. This completes the proof.
Lemma A.6. (The average value of the truncated Fourier PDF is 1/2) If N < m/2, then
\[
\sum_{j=0}^{m-1} f_{x,N}(b_j) = \frac{m}{2}. \tag{A.12}
\]
Proof of Lemma A.6. Denote by a_k the Fourier coefficients of f_{x,N}. We can compute that
\[
\sum_{j=0}^{m-1} f_{x,N}(b_j) = \sum_{j=0}^{m-1} \sum_{k=-N}^{N} a_k\, e^{i\pi k(-1 + \frac{2j+1}{m})} \tag{A.74}
\]
\[
= \sum_{k=-N}^{N} a_k\, e^{-i\pi k}\, e^{\frac{i\pi k}{m}} \sum_{j=0}^{m-1} e^{\frac{2\pi i j k}{m}}. \tag{A.75}
\]
Note that by hypothesis, N < m/2, which implies that |k| < m for every outer sum index k. We consider two cases: if k = 0, then the innermost summand is e^{2πijk/m} = 1; and if k ≠ 0, then the innermost sum is a truncated geometric series with first term 1, common ratio e^{2πik/m}, and m terms. In summary, the innermost sum is
\[
\sum_{j=0}^{m-1} e^{\frac{2\pi i j k}{m}} = \begin{cases} m & k = 0 \\ 0 & k \neq 0, \end{cases} \tag{A.76}
\]
which implies that \sum_{j=0}^{m-1} f_{x,N}(b_j) = m a_0. But a_0 = 1/2, because f_{x,N} being a PDF implies that it has average value 1/2 over [−1, 1]. This completes the proof.
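The identity in Lemma A.6 is easy to verify numerically. The sketch below builds a random truncated Fourier series with constant coefficient a_0 = 1/2 (non-negativity is irrelevant for this identity) and checks that its values at the m bin centerpoints sum to m/2 whenever N < m/2; the particular m and N are arbitrary.

```python
import numpy as np

def truncated_fourier(z, a, b):
    """f(z) = 1/2 + sum_k a_k cos(pi k z) + b_k sin(pi k z), with N = len(a)."""
    k = np.arange(1, len(a) + 1)
    return 0.5 + np.cos(np.pi * np.outer(z, k)) @ a + np.sin(np.pi * np.outer(z, k)) @ b

rng = np.random.default_rng(0)
m, N = 64, 20                              # requires N < m / 2
a, b = rng.normal(size=N) / 10, rng.normal(size=N) / 10
bins = -1 + (2 * np.arange(m) + 1) / m     # bin centerpoints b_j
total = truncated_fourier(bins, a, b).sum()
print(total, m / 2)                        # the two agree up to floating-point error
```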
B SMOOTHNESS METRIC
We will examine how the proposed smoothness metric Equation 3.1 behaves in a toy example setting
to gain intuition for its behavior. Consider a square wave, which can be expressed as an infinite sum
of odd integer harmonics that decay in amplitude proportional to their frequency:
\[
f(x) = \frac{4}{\pi} \sum_{n=1,3,5,\dots}^{\infty} \frac{1}{n} \sin\!\left( \frac{n \pi x}{L} \right). \tag{B.1}
\]
Here, the wavelength is 2L (Weisstein, 2024).
We construct a truncated version of the square wave with a finite and fixed number of frequencies.
The waveform will slowly approach its jagged, square shape as more sine waves are added. We
frame these increasingly jagged waves as discretized multinomial densities to simulate the output of
the Fourier head. To do this, we simply set the height to zero when the wave crest becomes negative
and normalize the sum to 1. The output of this transformation for a few representative waveforms is
pictured in Figure 5.
Figure 5: Truncated square waves framed as densities and their smoothness.
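A sketch of this construction is below: truncate the series in Equation B.1 to a given number of odd harmonics, set negative crests to zero, and normalize so the bins sum to 1. The bin count and wavelength used are arbitrary choices, not values from the paper.

```python
import numpy as np

def truncated_square_wave_density(n_waves: int, n_bins: int = 512, L: float = 1.0):
    """Partial sum of Equation B.1 with n_waves odd harmonics, clipped at zero
    and normalized so the bins sum to 1."""
    x = np.linspace(0, 2 * L, n_bins, endpoint=False)
    harmonics = np.arange(1, 2 * n_waves, 2)              # 1, 3, 5, ...
    wave = (4 / np.pi) * np.sum(
        np.sin(np.pi * np.outer(harmonics, x) / L) / harmonics[:, None], axis=0)
    wave = np.clip(wave, 0.0, None)                       # set negative crests to zero
    return wave / wave.sum()

for n in (1, 5, 50):
    d = truncated_square_wave_density(n)
    print(n, round(float(d.max()), 5))                    # densities get sharper as n grows
```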
Intuitively, the truncated square wave with a single sine wave ought to be the smoothest. Thus our
metric in this context should be smallest at that point, and increase monotonically as we add more
sine waves. The plot in Figure 6 demonstrates that this is indeed the case.
Choice of L2 Distance over L1 Distance: The proposed smoothness metric Equation 3.1 permits
a general measure of discrepancy D, and we’ve chosen D to be L2 distance as indicated in 3.2. We
empirically observe that L2 distance better preserves monotonicity than the L1 for higher frequency
content, thus motivating this choice. With a sample rate of 2048Hz, the L1 distance exhibits some
undesirable warping when our square-wave multinomial uses over 80 sine waves (see Figure 6). A
Fourier head in a practical setting may possess several more than 80 frequencies; accordingly, we
favor the L2 distance as our discrepancy measure.
Alternative Notions of Smoothness: In validating our choice of smoothness metric, we compare it
to the spectral entropy (Inouye et al., 1991), which has a similar purpose in quantifying the “smooth-
ness” of the frequency content of a signal. Spectral entropy is defined as the Shannon entropy of the
power spectral density of a sampled signal f , which is defined as follows:
\[
H(f; N) = \sum_{n \in N} p(n) \log_2\!\left( \frac{1}{p(n)} \right) = - \sum_{n \in N} \frac{S_n}{S_{\text{total}}} \log_2\!\left( \frac{S_n}{S_{\text{total}}} \right) \tag{B.2}
\]
Here, N is the number of Fourier frequencies and S is the power of a frequency n ∈ N ; Sn is the
power spectrum of the nth frequency, and Stotal is the power of the signal using all N frequencies.
For some frequency at index n, S_n/S_total is called its relative power, and the fact that \sum_{n \in N} S_n/S_{\text{total}} = 1 enables us to consider each frequency's power as a probability.
In the discrete case, the maximum entropy distribution is the uniform distribution. Thus, white
noise will have the highest spectral entropy. This has the consequence that a power spectral density with more high-frequency information will have lower entropy than that of white noise, provided that there is a relationship between amplitude and frequency. More concretely, blue noise, which is
Figure 6: Values of the smoothness metric 3.2 on our square-wave-like multinomials as we increase
the number of sine waves. We desire the value of this metric to be close to zero when there are few
sine waves, and be monotonically increasing with each additional wave, indicating that adding more
high frequency content results in a less smooth distribution. On the right, we can see that L1 as a
discrepancy measure leads to non-monotonicity, motivating our choice of L2 distance in measuring
our results.
defined by the amplitude increasing proportionally to the frequency, will have lower spectral entropy
than white noise. We sought a metric that always quantified ‘sharper’ signals like blue noise as less
smooth. In Table 4, we frame sampled noises of different types as multinomial distributions to
match our model setting by normalizing their amplitudes to be in [0, 1] and normalizing their sum to
1. Our noise types are defined before normalization, in order of smoothest to sharpest:
• Brown: S ∝ 1/F²
• Pink: S ∝ 1/F
• White: S ∼ N(0, 1)
• Blue: S ∝ F
where S is the power density and F is the frequency. To obtain samples of each type, we first
generate white noise. We do this by sampling a Gaussian with mean 0 and standard deviation 1
to obtain amplitudes for t samples. We then apply the Fourier transform, and multiply (or divide)
the amplitudes of each component by their frequency, and apply the inverse Fourier transform to
recover the waveform. Finally we adjust the range of amplitudes of the signal to be within [0, 1] and
normalize the sum to 1.
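The noise construction described above can be reproduced with an FFT round trip, as sketched below. Whether the 1/F², 1/F, and F scalings are applied to amplitudes or to power densities is glossed over here; this sketch scales the complex spectrum directly, and the sample length of 2048 is an assumption.

```python
import numpy as np

def shaped_noise(kind: str, t: int = 2048, seed: int = 0) -> np.ndarray:
    """Generate brown/pink/white/blue noise by rescaling the spectrum of white
    noise, then frame it as a multinomial distribution."""
    rng = np.random.default_rng(seed)
    white = rng.normal(0.0, 1.0, size=t)
    spectrum = np.fft.rfft(white)
    freq = np.fft.rfftfreq(t)
    freq[0] = freq[1]                              # avoid dividing by zero at DC
    if kind == "brown":
        spectrum = spectrum / freq**2
    elif kind == "pink":
        spectrum = spectrum / freq
    elif kind == "blue":
        spectrum = spectrum * freq
    elif kind != "white":
        raise ValueError(kind)
    signal = np.fft.irfft(spectrum, n=t)
    signal = (signal - signal.min()) / (signal.max() - signal.min())  # amplitudes in [0, 1]
    return signal / signal.sum()                                      # sum normalized to 1

for kind in ("brown", "pink", "white", "blue"):
    print(kind, shaped_noise(kind).shape)
```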
Discrepancy         Noise    Mean ± Std. Deviation    Diff Delta    Desired Delta
L2                  Brown    0.0003 ± 0.0001          n/a           n/a
L2                  Pink     0.0017 ± 0.0002          +0.0014       +
L2                  White    0.0034 ± 0.0003          +0.0016       +
L2                  Blue     0.0038 ± 0.0003          +0.0005       +
Spectral Entropy    Brown    0.4516 ± 0.0894          n/a           n/a
Spectral Entropy    Pink     0.3878 ± 0.0603          −0.0638       +
Spectral Entropy    White    0.4266 ± 0.0614          +0.0388       +
Spectral Entropy    Blue     0.4191 ± 0.0583          −0.0076       +
Table 4: Smoothness measurements for four types of noise, bootstrap-aggregated over 1,000 trials.
Note how the value of Spectral Entropy is undesirably not monotonically increasing
for what we consider increasingly “sharp” noise types.
C ADDITIONAL EXPERIMENT DETAILS, TOY EXAMPLES
C.1 MOTIVATING EXAMPLE: AUDIO SPECTROGRAM TRANSFORMER
To illustrate a simple problem setting where the design of the Fourier head is appropriate, we use
it as a drop-in replacement for a linear classification head in the Audio Spectrogram Transformer
(Gong et al., 2021). We consider the task of beats per minute (BPM) classification for metronome-
like audio samples (Wei et al., 2024) within the tempo range {50, 51, . . . , 210}. While this task is
not difficult, we use this audio classification task to illustrate some of the design choices one can
make when using the Fourier head. In this case, it is natural to group the BPMs into contiguous bins
{[50, 54], [55, 59], . . . } and use the Fourier head to classify them. These bins have a natural contin-
uous structure, which is where the Fourier head performs well. We also expect that the categorical
distribution over possible BPMs for a given audio clip ought to be unimodal and therefore require
few frequencies to approximate. In fact, our best performing model for this example uses only one
frequency.
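As a concrete illustration of the binning described above, the short sketch below maps a BPM label in {50, ..., 210} to a contiguous 5-BPM bin index; the constants and helper names are our own illustrative choices, not part of the released implementation.

```python
# Minimal sketch: map a BPM label to a contiguous 5-BPM bin index,
# e.g. [50, 54] -> 0, [55, 59] -> 1, and recover the bin's center value.
BPM_MIN, BPM_MAX, BIN_WIDTH = 50, 210, 5

def bpm_to_bin(bpm: int) -> int:
    assert BPM_MIN <= bpm <= BPM_MAX
    return (bpm - BPM_MIN) // BIN_WIDTH

def bin_to_center(bin_idx: int) -> float:
    return BPM_MIN + BIN_WIDTH * bin_idx + (BIN_WIDTH - 1) / 2

num_bins = bpm_to_bin(BPM_MAX) + 1   # number of classification bins for the head
```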
We initialize the Audio Spectrogram Transformer with pretrained weights from AudioSet (Gemmeke
et al., 2017), and we train two different models: one with a standard linear classification head,
and one with the Fourier head. The Fourier head outperforms the linear classification head by an
F1 score improvement of +118%. We attribute this success to the inductive bias of continuity that
the Fourier head imparts. In Figure 7 we present the learned probability masses of both heads on
the same input sample. This graph illustrates that the Fourier head learns smoother PMFs than the
linear head, a concept which we formalize and explore elsewhere in the paper.
Audio Classification Task: Learned Linear vs. Fourier PMFs
Figure 7: Comparison between the PMF learned by the linear head, and the Fourier head with
2 frequencies, for the toy BPM classification task, on a single audio example. We observe that
the Fourier head learns a smoother categorical distribution over its predicted values, and is better
centered around the ground truth label. We also note the small mini-sine wave artifacting on the left
side of the Fourier model, which tends to occur when using few frequencies.
C.2 LEARNING A CONTINUOUS DENSITY
Here we provide full details of the datasets used in our toy example of learning a known conditional
distribution.
Dataset: We create a synthetic dataset D = {(q(x), q(y), q(z))} ⊂ R³ as follows. Fix a probability
distribution P1 = P1(x) that is parameterized by one variable and a second distribution P2 =
P2(x, y) parameterized by two variables. Fix an interval I ⊂ R. Sample x uniformly from I,
sample y ∼ P1(x), and finally sample z ∼ P2(x, y). We can repeat this sampling procedure N
times to obtain a set of N triples for which we know the conditional distribution of z given x and
y. Finally, we quantize this set to a fixed number of uniformly spaced bins in the range [−1, 1] to
obtain the dataset DP1,P2. We denote the quantization of z by q(z). We quantize into 50 bins
and our dataset has size 5000, with an 80-20 split between the train and test set. We describe three
choices for the distributions used to create our datasets (a sampling sketch is given after the list below).
We fix I = [−0.8, 0.8] and σ² = 0.01 in all of them.
1. Gaussian dataset: P1(x) = N(x, σ²), and P2(x, y) = N(y, σ²).
2. GMM-2 dataset: P1 = Uniform(I), and P2(x, y) is a GMM centered at x and y with variance σ².
3. Beta dataset: P1(x) = N(x, σ²), and P2(x, y) ∼ U({±1}) × Beta(100|x|, 100|y|), where U({±1})
denotes the Rademacher distribution supported on {±1} with probability 1/2 each.
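The sketch below shows how such a dataset can be generated and quantized for the Gaussian case; the function names, the random seed, and the use of numpy are illustrative assumptions, and the GMM-2 and Beta datasets follow the same pattern with different samplers.

```python
import numpy as np

def quantize(v: np.ndarray, n_bins: int = 50) -> np.ndarray:
    """Map values in [-1, 1] to indices of uniformly spaced bins."""
    edges = np.linspace(-1.0, 1.0, n_bins + 1)
    return np.clip(np.digitize(v, edges) - 1, 0, n_bins - 1)

def make_gaussian_dataset(n: int = 5000, sigma2: float = 0.01, seed: int = 0):
    """Sketch of the Gaussian dataset: x ~ U(I), y ~ N(x, sigma2), z ~ N(y, sigma2)."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(sigma2)
    x = rng.uniform(-0.8, 0.8, n)
    y = rng.normal(x, sigma)
    z = rng.normal(y, sigma)
    data = np.stack([quantize(x), quantize(y), quantize(z)], axis=1)
    split = int(0.8 * n)
    return data[:split], data[split:]      # 80-20 train / test split

train, test = make_gaussian_dataset()
```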
Additional results: In Figure 8, we present results from training over a range of frequencies, and for
each frequency we ran experiments with and without Fourier regularization. In Table 6 we present
results on the MSE metric, which show that the Fourier head outperforms the linear classification head.
Figure 8: We study how the quantity of Fourier frequencies impacts KL divergence and smoothness
for the toy example on each dataset. For both KL divergence and smoothness, lower is better. We
observe that the Fourier models with and without regularization performed similarly to each other,
and outperformed the linear baseline. We also note that the 50% error bars are larger for the linear
baseline model; this indicates that the Fourier models (both with and without regularization) are in
general more stable. This is in contrast to our large scale time series forecasting experiments, where
we find that regularization helps; this is likely because those experiments use an order of magnitude
more frequencies, and their conditional distributions are more complicated. While the GMM head
has better KL divergence on the Gaussian and GMM-2 datasets, which is to be expected, the Fourier
model (both with and without regularization) eventually has the best KL divergence on the Beta
dataset, since it is non-Gaussian. Notice also how, on each of the datasets, the smoothness degrades
as frequency increases, in a fashion that follows the asymptotic from our Theorem 3.3.
C.3 MLE-BASED FOURIER HEAD
We carry out experiments in the continuous domain analogous to those we did in the quantized
domain from the toy example.
Dataset: We use the same synthetic datasets (Gaussian, GMM-2, and Beta) as in the previous
subsection, except we do not quantize the data into bins.
Task: Predict the conditional distribution of z given (x, y).
Model architecture: Our model is a small MLP whose final layer is an MLE-based Fourier head
which returns the 2N + 1 learned real coefficients of the Fourier series, mapping
R² → R⁶⁴ → R³² → R^(2N+1). Alongside the Fourier-MLE model, we consider a baseline where
the final layer is a Gaussian mixture model whose means and standard deviations are learned using
an MLE objective. For the MLE-Fourier model, we sweep over frequencies N = 2, 4, . . . , 20 and
regularization γ ∈ {0, 10⁻⁶}.
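For concreteness, a minimal PyTorch sketch of this mapping is shown below; the class name is our own, and the final linear layer only stands in for the coefficient-producing part of the MLE-based Fourier head (the step that turns the coefficients into a normalized density is omitted).

```python
import torch
import torch.nn as nn

class FourierMLEBackbone(nn.Module):
    """Sketch of the R^2 -> R^64 -> R^32 -> R^(2N+1) mapping described above.

    The last linear layer produces the 2N + 1 real numbers that the MLE-based
    Fourier head would interpret as Fourier-series coefficients.
    """
    def __init__(self, num_frequencies: int):
        super().__init__()
        out_dim = 2 * num_frequencies + 1
        self.net = nn.Sequential(
            nn.Linear(2, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, out_dim),
        )

    def forward(self, xy: torch.Tensor) -> torch.Tensor:
        # xy: (batch, 2) conditioning pair; returns (batch, 2N + 1) coefficients.
        return self.net(xy)

coeffs = FourierMLEBackbone(num_frequencies=12)(torch.randn(8, 2))
```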
Figure 9: We study how the quantity of Fourier frequencies impacts KL divergence and perplex-
ity for the toy example on each dataset for the MLE experiments. For both KL divergence and
perplexity, lower is better. We observe that the Fourier models with and without regularization per-
formed similarly to each other. While the GMM head has better KL divergence on the Gaussian and
GMM-2 datasets, which is to be expected, the Fourier model (both with and without regularization)
has the best KL divergence on the Beta dataset for sufficiently large Fourier frequencies, since it is
non-Gaussian.
KL Divergence (↓)
Dataset | Linear | GMM | Fourier
Gaussian | 0.170 ± 0.052 | 0.026 ± 0.011 | 0.116 ± 0.043
GMM-2 | 0.238 ± 0.032 | 0.030 ± 0.006 | 0.146 ± 0.033
Beta | 0.234 ± 0.032 | 0.407 ± 0.012 | 0.191 ± 0.016

Smoothness (↓)
Dataset | Linear | GMM | Fourier
Gaussian | 0.116 ± 0.049 | 0.068 ± 0.012 | 0.057 ± 0.011
GMM-2 | 0.068 ± 0.022 | 0.043 ± 0.009 | 0.038 ± 0.007
Beta | 0.127 ± 0.044 | 0.061 ± 0.003 | 0.076 ± 0.021
Table 5: KL divergence and Smoothness for the three classification heads (Linear, GMM, and
Fourier) on each of the three synthetic datasets (Gaussian, GMM-2, Beta). As expected, the GMM
head achieves the best KL divergence on the Gaussian and GMM-2 datasets, as their conditional dis-
tributions are Gaussian. However, the Fourier head has the best KL divergence on the Beta dataset.
This demonstrates the flexibility of the Fourier head in modeling non-Gaussian distributions as well.
Model evaluation: We use two metrics for evaluation. The first metric is the average KL divergence
DKL(P(x, y)∥M (x, y)), where P(x, y) is the fixed conditional distribution of z given (x, y) and
M (x, y) denotes the predicted probability density function of z. Our second metric is perplexity,
which is the exponential of the average negative log likelihood of the test set.
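A minimal sketch of these two metrics is given below, assuming the true and predicted densities are evaluated on a uniform grid over z and that per-example log likelihoods of the test set are available; the function names are our own.

```python
import numpy as np

def kl_on_grid(p_true: np.ndarray, p_model: np.ndarray, dz: float) -> float:
    """Riemann-sum approximation of D_KL(P || M) for two densities evaluated
    on a uniform grid over z with spacing dz."""
    p_model = np.maximum(p_model, 1e-12)       # guard against log(0)
    mask = p_true > 0
    return float(np.sum(p_true[mask] * np.log(p_true[mask] / p_model[mask])) * dz)

def perplexity(test_log_likelihoods: np.ndarray) -> float:
    """Exponential of the average negative log likelihood over the test set."""
    return float(np.exp(-np.mean(test_log_likelihoods)))
```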
Results: The metrics for the best performing model on each dataset are reported in Table 7. Figure
11 presents sample visualizations of the learned conditional distributions alongside the true densities.
While, as expected, the GMM-MLE head outperforms the Fourier-MLE head on the Gaussian and
GMM-2 datasets due to the Gaussian nature of the datasets, the Fourier-MLE head outperforms
Toy Example: MSE (↓)
Dataset | Pointwise Regression | Linear (cls. head) | GMM (cls. head) | Fourier (cls. head)
Gaussian | 0.010 ± 0.001 | 0.013 ± 0.001 | 0.010 ± 0.001 | 0.012 ± 0.001
GMM-2 | 0.121 ± 0.004 | 0.126 ± 0.004 | 0.120 ± 0.004 | 0.123 ± 0.005
Beta | 0.275 ± 0.009 | 0.276 ± 0.008 | 0.273 ± 0.009 | 0.275 ± 0.008
Table 6: We compare the MSE between the linear head, GMM head, and the Fourier head with 12
frequencies and no regularization, for every dataset in the toy example. We also include a Pointwise
Regression model baseline, whose base architecture is the same as that of the classification heads, except the
last classification layer is replaced with a dense layer having output dimension 1. We train the
Pointwise Regression model using MSE. For a given dataset, the MSE values across all of the
models are roughly similar. This is because the pointwise regression model tends to regress to the
mean, as does the expected value of each of the classification heads.
Toy Example: Ground Truth Conditional Distribution vs. Pointwise Regression Output
Figure 10: We present some examples of the ground truth conditional distribution versus the point
predicted by the Pointwise Regression model. The regression model simply regresses to the mean
of the conditional distribution. Accordingly, the regression model performs extremely well for the
unimodal Gaussian dataset, and it performs poorly for the bimodal datasets GMM-2 and Beta.
the GMM-MLE head on the Beta dataset, highlighting the flexibility of the Fourier-MLE head in
learning a large variety of probability distributions. In Figure 9, we present the results of a
study on the impact of the number of frequencies and Fourier regularization in the MLE setting.
KL Divergence (↓)
Dataset | GMM-MLE | Fourier-MLE
Gaussian | 0.012 ± 0.002 | 0.034 ± 0.003
GMM-2 | 0.018 ± 0.001 | 0.072 ± 0.005
Beta | 0.257 ± 0.03 | 0.130 ± 0.005

Perplexity (↓)
Dataset | GMM-MLE | Fourier-MLE
Gaussian | 0.410 ± 0.014 | 0.422 ± 0.019
GMM-2 | 0.702 ± 0.015 | 0.740 ± 0.014
Beta | 0.623 ± 0.035 | 0.542 ± 0.017
Table 7: We compare metrics between the GMM-MLE head, and the Fourier-MLE head with 12
frequencies and no regularization, for every dataset in our toy example. We aggregate metrics over
4 different seeds and report the standard deviation.
C.4 ARE LLMS RANDOM NUMBER GENERATORS?
Dataset: We create a training dataset using the prompt template: “The following is a list of normally
distributed random numbers in the interval [-1, 1] with mean µ and std σ: x1, x2, ” and response
template: “x3”, where (µ, σ) ∈ {(−0.55, 0.10), (−0.03, 0.24), (0.42, 0.16), (0.55, 0.10)}, and each
Figure 11: Comparison between the PDFs learned by the GMM-MLE head and the Fourier-MLE
head for each of the datasets in the toy example. While GMM-MLE outperforms Fourier-MLE on
the Gaussian and GMM-2 datasets, Fourier-MLE performs better on the Beta dataset.
xi ∼ N (µ, σ). We write each number using two decimal places. Our training dataset consists of
256 such (prompt, response) pairs, divided evenly among the four distributions.
Model Architecture: We consider three different models: the original Llama-3.1-8B-Instruct model
(Dubey et al., 2024), the original model after LoRA fine-tuning, and the original model where we
replace the linear classification head with the Fourier head and perform LoRA fine-tuning. For the
Fourier head, we use an output dimension of 200 and the original latent space [−1, 1] because of
our chosen decimal precision. We conduct LoRA fine-tuning for 16 epochs with a learning rate of
3 × 10⁻⁴ and a linear decay schedule, and a batch size of 64. We release all of our training code on
our project page.
Model Evaluation: We compute two metrics: the first is Total Variation Distance; we define this
to be one half of the L∞ distance between the ground truth quantized Gaussian histogram, and the
empirical histogram of samples, quantized into 20 bins. Our second metric is the Quantity of Unique
Samples. In Figure 3 we present example histograms for each of the three models that we consider
for the median TVD in each class. Those results demonstrate that the Fourier head learns a more
accurate PMF. And in Figure 12 we demonstrate that the Fourier head model consistently obtains
a lower TVD and a greater diversity of samples. We hypothesize that the LoRA-finetuned baseline
model has fewer diverse samples because it memorizes training data instead of learning the actual
distribution. In contrast, the Fourier head is forced to learn a continuous distribution, and samples
directly from that distribution.
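The sketch below computes both metrics exactly as defined above (half the L∞ distance between the quantized ground-truth histogram and the empirical histogram, plus the count of unique samples); the use of scipy for the Gaussian CDF and the function name are our own assumptions.

```python
import numpy as np
from scipy.stats import norm

def tvd_and_unique(samples: np.ndarray, mu: float, sigma: float, n_bins: int = 20):
    """Half the L-infinity distance between the quantized ground-truth Gaussian
    histogram and the empirical histogram over [-1, 1], plus unique-sample count."""
    edges = np.linspace(-1.0, 1.0, n_bins + 1)
    # Ground-truth probability mass per bin, renormalized to the interval [-1, 1].
    cdf = norm.cdf(edges, loc=mu, scale=sigma)
    truth = np.diff(cdf) / (cdf[-1] - cdf[0])
    counts, _ = np.histogram(samples, bins=edges)
    empirical = counts / max(len(samples), 1)
    tvd = 0.5 * np.max(np.abs(truth - empirical))
    return float(tvd), int(len(np.unique(samples)))
```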
Related works which consider LLMs as random number generators: Gu et al. (2024) explores
probability distribution sampling in LLMs in the context of behavioral simulation, which demon-
strates an application of using LLMs as random number generators. And Paruchuri et al. (2024)
explores the tasks of using LLMs for estimating percentiles, drawing samples, and calculating prob-
abilities. Lastly, Hopkins et al. (2023) explores sampling from numerical probability distributions
using LLMs, but they only consider a single non-uniform density, namely the normal distribution
N (0.5, 0.2887). They find that LLMs don’t perform well at this task, but they don’t thoroughly
investigate model-based interventions. In contrast, we thoroughly investigate the impact of fine-
tuning, and of replacing the token classification head with one that has a better inductive bias for modeling
complex probability distributions.
Figure 12: We demonstrate that the Fourier head model consistently obtains a lower total variation
distance, as well as a greater diversity of samples. For TVD (top), lower is better, because lower
values indicate that the learned distribution is closer to the ground truth distribution. And for quantity
of samples (bottom), higher is better, because lower values indicate that the LM has just memorized
specific numbers instead of performing sampling. We present here the mean values across the four
distributions, for all ten seeds. We can see that the Fourier head obtains more diverse samples, and
learns a distribution closer to the ground truth.
D ADDITIONAL EXPERIMENT DETAILS, LARGE-SCALE EXAMPLES
D.1 DECISION TRANSFORMER
Following the original Decision Transformer implementation, we trained on 500k transitions ob-
served by a DQN agent during training, for 5 epochs. We trained on the same model size as the
original implementation (a GPT-1 model with approximately 2.012M parameters) which takes about
4 hours on a single GPU. We can see in Figure 13 that the PMFs learned by the Fourier head
are smoother. In Figure 16 we present results for more Atari games. In Figure 14, we present results
from an ablation study of the model size. The results demonstrate that, across model sizes, the Deci-
sion Transformer with a Fourier head is better at learning high-quality next action distributions than
the Decision Transformer with a linear head. And in Figure 15, we present results from an ablation
study of the dataset size, which show that the Fourier head obtains larger returns than the Linear
classification head across dataset sizes.
D.2 CHRONOS
In Figure 17 we present a learned next-token PMF from a linear Chronos model, and a next-token
PMF from a Chronos model which uses the Fourier head. The Fourier head's PMF is about 4x smoother. In
Table 8 we present results from an ablation study on the quantity of Fourier frequencies, choice of
regularization, and binning strategy. We followed the original Chronos implementation, keeping all
hyperparameters the same. In particular, we trained for 200k steps, on the same model size as the
original implementation (the T5 model with approximately 20M parameters) and this takes about 48
hours on 8 GPUs. See Table 9 for the datasets we used to train and evaluate Chronos.
Figure 13: We present example next action distributions for a single step in the Decision Transformer
test split. The Fourier agent with 8 frequencies produces a “clump” of actions that is semantically
meaningful. Namely, this agent almost certainly wants to shoot in the down right or right direction,
presumably because there is a submarine in that direction. In contrast, the linear agent’s next-action
distribution doesn’t clearly depict a strategy, and incorrectly assigns higher likelihoods to incorrect
actions. Because the Fourier head outputs a smoother PMF, it learns to concentrate more probability
mass near the correct action.
Figure 14: We present an ablation study on the effect of the model size on the relative performance of
the Fourier head and the Linear head. The results demonstrate that, across model sizes, the Decision
Transformer with a Fourier head is better at learning high-quality next action distributions than the
Decision Transformer with a linear head.
Fourier Head Ablation Study: Dataset Size
Figure 15: In this ablation study, we analyze whether dataset size has any effect on the relative
performance of the Linear head and the Fourier head. Our results show that, across dataset sizes, the
Decision Transformer agent with a Fourier head achieves larger returns than the linear head on the
Seaquest game.
Chronos Time Series Model | MASE (↓) | WQL (↓) | Smoothness (↓)
Linear | 0.883 | 0.750 | 0.1689 ± 0.1087
Fourier-64 | 0.875 | 0.798 | 0.0032 ± 0.0012
Fourier-128 | 0.872 | 0.767 | 0.0068 ± 0.0035
Fourier-256 | 0.859 | 0.755 | 0.0139 ± 0.0087
Fourier-550 | 0.852 | 0.749 | 0.0283 ± 0.0224
Fourier-550 (no regularization) | 0.861 | 0.753 | 0.0286 ± 0.0219
Fourier-550 (uniform precision binning) | 0.873 | 0.747 | 0.0395 ± 0.0252
Table 8: We present large-scale experiments on Chronos time series forecasting. Notably, every
Fourier model outperforms the linear baseline on MASE and smoothness metrics. We can see that
within the Fourier model class, decreasing the number of frequencies lets you trade off the continuity
of the learned probability mass functions (smoothness) for the quality of the forecasts (MASE,
WQL). In the bottom two rows, we present an ablation for our large-scale experiments on Chronos
time series forecasting. The best overall performing Fourier-550 model uses Fourier regularization
and mixed precision binning, which are both techniques informed by Fourier analysis. We observe
that both of these interventions improve the MASE, but have minimal effect on the WQL. Note that
the choice of binning strategy doesn’t affect the performance of the linear baseline.
Figure 16: We present empirical results for how the quantity of Fourier frequencies impacts returns
and smoothness for additional imitation learning games. For normalized returns, higher is better; for
smoothness, lower is better. We can see that for the BankHeist, DoubleDunk, and Gravitar games,
the Fourier agent consistently achieves higher normalized returns than the linear baseline agent,
while still learning smoother next-action distributions.
Figure 17: We present the next token value distribution for a single forecasted timestep on the
Tourism Monthly dataset. We observe that the Fourier head’s learned conditional distribution is
smoother, fitting signal more robustly, whereas the linear head overfits to the noise, and is therefore
more jagged. We note that the x-axis represents the bins in the latent space [−1, 1]; the x-axis values
for the Fourier head are lower because the linear head uses uniform binning, and the Fourier head
uses mixed precision binning.
Fourier Head Ablation Study: Chronos Dataset Size
Figure 18: In this ablation study, we analyze whether dataset size has any effect on the relative
performance of the linear head and the Fourier head for the probabilistic time series task. Our results
show that, across dataset sizes, the Fourier head yields more accurate forecasts than the linear head.
For the dataset sizes 1.1 × 105, 1.1 × 106, and 1.1 × 107, we report the average MASE across
four seeds; for the dataset size 1.1 × 108 we report the MASE from Table 3. We generate the plot
following (Kaplan et al., 2020) and observe a similar power-law scaling behavior for both methods,
with the Fourier head consistently outperforming the linear head.
Fourier Head Ablation Study: Chronos Model Size
Figure 19: In this ablation study, we analyze whether model size has any effect on the relative
performance of the linear head and the Fourier head for the probabilistic time series forecasting
task. Our results show that, across model sizes, the Fourier head yields more accurate forecasts than
the linear head. For the model sizes 1.25M, 2.5M, 5M, and 10M, we report the average MASE
across three seeds; for the model size 20M we report the MASE from Table 3. We generate the plot
following (Kaplan et al., 2020) and observe a similar power-law scaling behavior for both methods,
with the Fourier head consistently outperforming the linear head.
Table 9: All datasets that are used for our time series forecasting experiments. We built our time
series forecasting experiments on top of Chronos (Ansari et al., 2024), and this table is mostly
copied from their paper. The datasets are partitioned according to how they are used for training and
evaluation of models: pretraining-only data is only used for training; evaluation data is not used in
training models, but only for evaluation (final H observations). All of our evaluation datasets came
from the zero-shot evaluation set from Chronos.
Dataset | Domain | Freq. | Prediction Length (H)

Pretraining
Brazilian Cities Temperature | nature | M | -
Mexico City Bikes | transport | 1H | -
Solar (5 Min.) | energy | 5min | -
Solar (Hourly) | energy | 1H | -
Spanish Energy and Weather | energy | 1H | -
Taxi (Hourly) | transport | 1H | -
USHCN | nature | 1D | -
Weatherbench (Daily) | nature | 1D | -
Weatherbench (Hourly) | nature | 1H | -
Weatherbench (Weekly) | nature | 1W | -
Wiki Daily (100k) | web | 1D | -
Wind Farms (Daily) | energy | 1D | -
Wind Farms (Hourly) | energy | 1H | -

Evaluation
Australian Electricity | energy | 30min | 48
CIF 2016 | banking | 1M | 12
Car Parts | retail | 1M | 12
Hospital | healthcare | 1M | 12
M1 (Monthly) | various | 1M | 18
M1 (Quarterly) | various | 3M | 8
M1 (Yearly) | various | 1Y | 6
M3 (Monthly) | various | 1M | 18
M3 (Quarterly) | various | 3M | 8
M3 (Yearly) | various | 1Y | 6
M4 (Quarterly) | various | 3M | 8
M4 (Yearly) | various | 1Y | 6
M5 | retail | 1D | 28
NN5 (Daily) | finance | 1D | 56
NN5 (Weekly) | finance | 1W | 8
Tourism (Monthly) | various | 1M | 24
Tourism (Quarterly) | various | 1Q | 8
Tourism (Yearly) | various | 1Y | 4
Traffic | transport | 1H | 24
Weather | nature | 1D | 30
|
9RCT0ngvZP | Montessori-Instruct: Generate Influential Training Data Tailored for Student Learning | [
6,
6,
8,
6
] | Published as a conference paper at ICLR 2025
MONTESSORI-INSTRUCT: GENERATE INFLUENTIAL
TRAINING DATA TAILORED FOR STUDENT LEARNING
Xiaochuan Li♦∗, Zichun Yu✦, Chenyan Xiong✦
♦School of Software, Tsinghua University
✦Language Technologies Institute, Carnegie Mellon University
[email protected]
[email protected]
[email protected]
ABSTRACT
Synthetic data has been widely used to train large language models, but its generative nature inevitably introduces noisy, non-informative, and misleading learning
signals. In this paper, we propose MONTESSORI-INSTRUCT, a novel data synthesis
framework that tailors the data synthesis ability of the teacher language model
toward the student language model’s learning process. Specifically, we utilize local
data influence of synthetic training data points on students to characterize students’
learning preferences. Then, we train the teacher model with Direct Preference
Optimization (DPO) to generate synthetic data tailored toward student learning pref-
erences. Experiments with Llama3-8B-Instruct (teacher) and Llama3-8B (student)
on Alpaca Eval and MT-Bench demonstrate that Montessori-Instruct significantly
outperforms standard synthesis methods by 18.35% and 46.24% relatively. Our
method also beats data synthesized by a stronger teacher model, GPT-4o. Further
analysis confirms the benefits of the teacher's learning to generate more influential training data for the student's improved learning, the advantages of local data influence
in accurately measuring student preferences, and the robustness of Montessori-
Instruct across different student models. Our code and data are open-sourced at
https://github.com/cxcscmu/Montessori-Instruct.
1
INTRODUCTION
Synthetic training data is highly effective in various applications of large language models (LLMs) (Lu
et al., 2023), spanning from general pretraining (Allal et al., 2024; Zhou et al., 2024), instruction-
tuning (Tong et al., 2024) to domain-specific scenarios such as mathematics (Yu et al., 2023) and
coding (Jiang et al., 2024). The advantages of synthetic data include its low cost, convenience, and
flexibility, making them an appealing choice for scaling up training data (Yue et al., 2024), mitigating
the shortage of human labels (Chang et al., 2024), and improving data diversity (Sun et al., 2023).
Typical data synthesis methods (Wang et al., 2023) employ an instruction-tuned teacher model and
prompt it with seed data to generate synthetic training data for a student model. It is widely observed
that the teacher-generated data can be noisy and non-informative (Bauer et al., 2024), their simple and
uniform format may lead to pattern overfitting (Chen et al., 2024), and their biased and ungrounded
content can introduce ambiguity in AI alignment (Liu et al., 2024). These are fundamental challenges
of synthetic data as they can mislead students and sometimes even result in model collapse (Shumailov
et al., 2023a; Seddik et al., 2024).
In this paper, we propose MONTESSORI-INSTRUCT, a novel data synthesis framework designed
to generate more tailored and informative data by directly optimizing the synthesis ability of the
teacher toward the student’s learning preferences. We first leverage influence functions (Koh & Liang,
2017; Yu et al., 2024b) to precisely measure the utility of synthetic data–its ability to effectively
train the students. Then, we optimize the parameters of the teacher model according to the student’s
preferences through Direct Preference Optimization (DPO) (Rafailov et al., 2024). The preference-
optimized teacher then synthesizes influential training data for the students. As shown in Figure 1,
∗Part of this work is done while visiting CMU.
(a) Self-Instruct
(b) Self-Reward
(c) LLM2LLM
(d) Montessori-Instruct
Figure 1: Data synthesis methods with standard teacher (data synthesizer) and student (target) setups.
rather than employing LLM-as-a-judge (Zheng et al., 2024) to evaluate and filter data by quality (Yuan
et al., 2024) or prompting teachers to generate harder examples (Lee et al., 2024), Montessori-Instruct
directly optimizes the teacher according to students’ learning preferences, leading to more customized,
flexible, and effective synthetic training data for the students.
Our experiments use Montessori-Instruct to synthesize 10K instruction-response pairs with Llama3-
8B-Instruct (Meta, 2024) as teacher and train Llama3-8B/Tinyllama-1.1B (Zhang et al., 2024) as
students. The results show that Montessori-Instruct achieves relative improvements of 18.35% and
46.24% over Self-Instruct on in-domain Alpaca Eval (Dubois et al., 2024) and out-of-domain MT-
Bench (Zheng et al., 2024), respectively. The benefits of Montessori-Instruct are more pronounced
compared to state-of-the-art data synthesis methods such as Self-Reward and LLM2LLM, as well as
data synthesized by the cutting-edge LLM, GPT-4o (OpenAI, 2024). The results on a wide range of
general NLP tasks (e.g., MMLU (Hendrycks et al., 2020) and GSM8K (Cobbe et al., 2021)) further
demonstrate the generalization capabilities of Montessori-Instruct.
Further analyses reveal a strong correlation between the teacher’s optimization process and the
student’s performance, demonstrating that Montessori-Instruct enables the teacher to generate data
aligned with students’ preferences to enhance its learning. Ablation studies highlight the advantages
of using data influence to reflect students’ preferences, the effectiveness of optimizing the teacher
parameters over solely bootstrapping the data, and the robustness of Montessori-Instruct across
different seed data, multiple iterations, and a variety of student models.
Our main contributions are summarized as follows:
1. We propose Montessori-Instruct, a novel data synthesis framework that tailors the data
synthesis ability of the teacher toward the student’s learning.
2. We incorporate influence functions to accurately capture the student’s data preferences and
effectively guide the teacher’s optimization directions.
3. Our empirical results demonstrate the effectiveness and robustness of Montessori-Instruct in
improving students’ learning outcomes by tailoring synthetic data generation to align with
student learning preferences.
2 RELATED WORK
Synthetic data has been shown to be highly effective in various applications of large language models (Lu
et al., 2023), including pretraining (Allal et al., 2024; Zhou et al., 2024), instruction-tuning (Tong
et al., 2024; Yue et al., 2024), mathematics (Yu et al., 2023) and coding (Jiang et al., 2024). Typical
approaches like Self-Instruct (Wang et al., 2023) leverages an instruction-tuned teacher to generate
instruction-response pairs given a small amount of seed data. Following the similar pipeline, Self-
Guide (Zhao et al., 2024) and Self-Alignment (Sun et al., 2023; Guo et al., 2024) further enhance data
quality for specific tasks, such as safety, truthfulness, and instruction-following, by carefully curating
task-relevant seeds. In parallel, Instruction Backtranslation (Li et al., 2023) and Bonito (Nayak et al.,
2024) collect massive texts from the internet as responses, prompt LLMs to synthesize instructions
reversely, and select high-quality candidates.
Despite its promising potential, synthetic data primarily relies on the teacher's free-form generations,
and is thus inevitably often biased, non-informative, and misleading (Bauer et al., 2024; Liu et al., 2024).
The discrepancy between synthetic data and real-world sources often results in a misalignment with
human values and preferences (Liu et al., 2024), raising the risk of training student models that are
biased (Feng et al., 2023; Liu et al., 2021), ungrounded (Liu et al., 2022; Patel & Pavlick, 2022),
or misrepresentative of real-world scenarios (Ji et al., 2023; Hu et al., 2024b). It is also observed
that task-specific synthetic data often lacks diversity (Yu et al., 2024a), whereas general synthetic
data suffers from pattern overfitting (Chen et al., 2024) and the memorization of the synthesis
model’s training data (Van Breugel et al., 2023). Another challenge of synthetic data training is the
phenomenon of model collapse (Shu et al., 2023), where the massive noise in unregulated synthetic
data leads to the disappearance of the tails of the original content distribution and ineffective student
models (Seddik et al., 2024).
To address these limitations, researchers have explored various approaches to improve the utility of
synthetic data (Shu et al., 2023; Wang et al., 2024). One line of work focuses on filtering out noisy
synthetic data, using techniques like ranking synthetic data with an additional reward model (Shu
et al., 2023), verifying the truthfulness of responses via programs (Dong et al., 2024), prompting
LLMs to judge the data quality (Zheng et al., 2024), and ensembles of multiple teachers (Lee et al.,
2023). One can also directly adjust the teacher’s synthesis strategies to generate more useful data for
students (Lee et al., 2024; Yuan et al., 2024). For instance, LLM2LLM (Lee et al., 2024) collects
data points that the student answers incorrectly and prompts the teacher to bootstrap similar data,
thereby generating targeted data to strengthen the student’s weaknesses. Another potential path, such
as Self-Reward (Yuan et al., 2024), is to employ LLM-as-a-judge (Zheng et al., 2024) to assign each
response a discrete reward score and optimize the student to generate highly rewarding responses.
The last body of related work is data influence functions (Hampel, 1974), a commonly used technique
for measuring the utility of data on a model’s performance. Influence function (Hampel, 1974;
Koh & Liang, 2017; Bae et al., 2022) quantifies the change in reference loss when a data point is
upweighted in the training set (Koh & Liang, 2017). It often serves as a theoretical tool to analyze data
utility (Choe et al., 2024) and attribute model behavior (Park et al., 2023). Recent work has applied
influence functions to facilitate model-aware data selection in pretraining or instruction-tuning, using
first-order approximation (Xia et al., 2024), linear datamodels (Engstrom et al., 2024), and data
influence models (Yu et al., 2024b). These methods have been shown to be more effective than
traditional rule-based techniques in data selection, most notably in the pretraining stage (Engstrom
et al., 2024; Yu et al., 2024b).
3 MONTESSORI-INSTRUCT
This section first introduces the overall framework of MONTESSORI-INSTRUCT (§ 3.1) and then
elaborates its two main components: local data influence collection (§ 3.2) and student-preference-
guided teacher optimization (§ 3.3).
3.1 OVERALL FRAMEWORK
Standard data synthesis methods (Wang et al., 2023; Yuan et al., 2024; Lee et al., 2024) begin with
a teacher model M and a seed prompt p formed using a few-shot sample of example data. The
teacher model processes the seed p to generate a set of N new instructions, {xi | 1 ≤ i ≤ N }, that
follow a similar format to the seed but with a variety of contents. Each generated instruction xi is
then used to prompt the teacher to synthesize the corresponding response yi. This yields a set of
instruction-response pairs {(xi, yi) | 1 ≤ i ≤ N } that are then used to train the student model m.
Montessori-Instruct upgrades this standard data synthesis pipeline with the optimization of the teacher
model toward the student’s learning preferences. The student-preference-guided teacher optimization
starts with prompting the teacher to generate a probing dataset Dprobing using Self-Instruct and then
collecting these data points’ local data influence Im on the student model (§ 3.2). The collected
data preferences form the preference dataset Dpreference, and Montessori-Instruct uses it to update
the teacher model via Direct Preference Optimization (DPO) (Rafailov et al., 2024) (§ 3.3). The
optimized teacher then generates the actual training dataset to train the student model m. The process
can be iterated multiple rounds to continually refine the teacher according to the student’s updated
preferences. This process is illustrated in Figure 2 and discussed in detail in the next two sections.
Figure 2: Student-Preference-Guided teacher optimization in Montessori-Instruct.
3.2 LOCAL DATA INFLUENCE COLLECTION
A key component of our framework is to precisely measure the utility of synthetic data, i.e., how
good they are at improving the student’s learning outcomes. This question is often approached using
influence functions (Weisberg & Cook, 1982; Koh & Liang, 2017), which were designed to quantify
changes in reference loss when a data point (xi, yi) is upweighted in the training set (Park et al.,
2023), thus reflecting the utility of this data point to the student's learning.
In order to efficiently calculate the data influence, we follow Yu et al. (2024b) and approximate
influence functions locally, using the change of the model’s reference loss before and after training
on a single data point (xi, yi):
Im(xi; Dref) ≈ −L(Dref | A(yi | xi; m)) + L(Dref | m),    (1)
where L(Dref | m) = E(x,y)∼Dref ℓ(y | x; m),    (2)
where Dref denotes the reference data that measure the student’s capability, and ℓ(y | x; m) is the loss
of student m on an input-output pair (x, y). A(yi | xi; m) refers to the optimization operation of
student m on data (xi, yi), e.g., one-step training with Adam (Kingma & Ba, 2015) on (xi, yi).
The local data influence, Im(xi; Dref), represents how the instruction-response pair (xi, yi) impacts
the student’s learning outcome as measured on the reference data. A positive Im indicates that
the data benefits the student’s reference performance, while a negative Im shows the opposite. A
complete theoretical derivation of local data influence is provided in Appendix B.
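To make Equations 1 and 2 concrete, here is a minimal PyTorch sketch of collecting local data influence for one synthetic pair; the Hugging-Face-style model interface (a forward pass that returns .loss), the learning rate, and the function name are our own assumptions for illustration, not the released implementation.

```python
import copy
import torch

def local_data_influence(student, ref_batch, train_pair, lr: float = 2e-5) -> float:
    """Sketch of Equations 1-2: reference loss before minus after one Adam step
    on a single synthetic (instruction, response) pair.

    `student` is assumed to be a causal LM whose forward pass returns a .loss;
    `ref_batch` and `train_pair` are assumed to be dicts of tensors
    (input_ids, attention_mask, labels).
    """
    def ref_loss(model) -> float:
        model.eval()
        with torch.no_grad():
            return model(**ref_batch).loss.item()

    loss_before = ref_loss(student)

    probe = copy.deepcopy(student)          # keep the real student untouched
    probe.train()
    optimizer = torch.optim.Adam(probe.parameters(), lr=lr)
    optimizer.zero_grad()
    probe(**train_pair).loss.backward()     # one-step update A(y_i | x_i; m)
    optimizer.step()

    loss_after = ref_loss(probe)
    # Positive influence: training on this pair lowers the reference loss.
    return loss_before - loss_after
```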
3.3 STUDENT-PREFERENCE-GUIDED TEACHER OPTIMIZATION
After calculating local data influence for each instruction in the probing dataset Dprobing, we pair every
two instructions with positive and negative influence, along with their corresponding seed prompt p,
to construct the preference dataset:
Dpreference = {(p, x+, x−) | Im(x−; Dref) < 0 < Im(x+; Dref)}.    (3)
We then apply DPO to optimize the teacher model M toward the student’s learning preferences:
LDPO(M∗; M) = −E(p,x+,x−)∼Dpreference [ log σ( β log (M∗(x+ | p) / M(x+ | p)) − β log (M∗(x− | p) / M(x− | p)) ) ],    (4)
where β is a parameter that controls the deviation from the initial teacher M and σ is the logistic
function. The updated teacher, M∗, after one or multiple iterations, is then used to synthesize the
training data for the student model m.
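The sketch below shows one way to assemble Equation 3 into the prompt/chosen/rejected format that DPO implementations typically expect; the record layout and function name are our own assumptions.

```python
def build_preference_dataset(probing_records):
    """Sketch of Equation 3: for each seed prompt, pair an instruction with
    positive local influence (chosen) against one with negative influence
    (rejected). `probing_records` is assumed to be a list of dicts with keys
    'seed', 'instruction', and 'influence'.
    """
    by_seed = {}
    for record in probing_records:
        by_seed.setdefault(record["seed"], []).append(record)

    preference_dataset = []
    for seed, records in by_seed.items():
        positives = sorted((r for r in records if r["influence"] > 0),
                           key=lambda r: -r["influence"])
        negatives = sorted((r for r in records if r["influence"] < 0),
                           key=lambda r: r["influence"])
        for pos, neg in zip(positives, negatives):
            preference_dataset.append({
                "prompt": seed,                    # p
                "chosen": pos["instruction"],      # x+
                "rejected": neg["instruction"],    # x-
            })
    return preference_dataset
```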
4 EXPERIMENTAL METHODOLOGIES
This section details our main experimental setups, including a thorough configuration of the data
synthesis process, the chosen baselines, and the evaluation methods.
Data Synthesis Process. We choose Llama3-8B-Instruct (Meta, 2024) as the teacher, and train
Llama3-8B (Meta, 2024) and Tinyllama-1.1B (Zhang et al., 2024) as students. We merge the text in
instruction and input fields of Alpaca GPT-4 dataset (Taori et al., 2023), consisting of 52K entries,
to create our seed pool. We follow the 8-shot seed proposed in Self-Instruct (Wang et al., 2023) to
prompt the teacher to generate instructions, with 6 out of the 8 randomly sampled from the seed pool
and 2 sampled from the synthetic instructions in the teacher’s previous iterations. Detailed prompts
are provided in Figure 13.
Following Yuan et al. (2024), we initially use the unoptimized teacher model to synthesize 1K
data to warm up the student. Then, we generate 4 instructions for each seed and 1 response for
each instruction and filter out similar instructions whose ROUGE-L score exceeds 0.7, resulting in a
probing dataset of 10K prompt-instruction-response triplets. For each instruction-response pair in the
probing dataset, we collect local data influence using the loss difference of the student model on the
reference data (Alpaca GPT-4) before and after one-step training. Then, we construct a preference
dataset comprising 6,792 entries, where each entry represents a seed-instruction pair with positive
and negative influences. This preference dataset is used to train the teacher with Direct Preference
Optimization (DPO) (Rafailov et al., 2024). Finally, we use the optimized teacher to synthesize 10K
data to train the student from scratch. In the subsequent iterations, we optimize the teacher using
similar steps, but with the updated student from the last iteration to collect data influence. For both the
teacher and student training, we utilize the AdamW optimizer (Loshchilov & Hutter, 2019) along with
the WSD scheduler (Hu et al., 2024a). Both models are trained for one epoch. For the teacher's generation,
we use vLLM (Kwon et al., 2023) as our decoding engine and provide specific decoding parameters
in Table 5. More details can be found in Appendix A.
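As a concrete illustration of the similarity filter mentioned above, the sketch below drops a candidate instruction when its ROUGE-L F1 against any already-kept instruction exceeds 0.7; the use of the rouge-score package and the function name are our own assumptions, not necessarily the exact filtering code used in the paper.

```python
from rouge_score import rouge_scorer

def filter_similar(instructions, threshold: float = 0.7):
    """Keep an instruction only if its ROUGE-L F1 against every previously
    kept instruction is at or below the threshold."""
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=False)
    kept = []
    for cand in instructions:
        if all(scorer.score(ref, cand)["rougeL"].fmeasure <= threshold for ref in kept):
            kept.append(cand)
    return kept
```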
Baselines. We compare our method against several mainstream data synthesis baselines. The
simplest baseline is Self-Instruct (Wang et al., 2023), where we use the unoptimized teacher to
synthesize data. Additionally, we select GPT-4o (OpenAI, 2024) as a stronger teacher to synthesize
an equivalent amount of data for comparison. Another baseline is Self-Reward (Yuan et al., 2024),
which employs an LLM-as-a-judge (Zheng et al., 2024) to assign ratings from 1 to 5 points to its
self-synthesized responses. Since we find in our preliminary experiments that Llama3-8B lacks the
ability to effectively score its own responses, we instead employ GPT-4o as an external judge to
score the student’s responses. The results of the original Self-Reward are reported in the Appendix
§ D.3. The final baseline is LLM2LLM (Lee et al., 2024), which evaluates the student’s accuracy
on its seed set and filters out those that result in incorrect answers. In our case, we define data
points with the highest 50% training loss as incorrect examples. The teacher is then prompted to
bootstrap data similar to the incorrectly answered seeds. To align with our setting, we uniformly
conduct two rounds of iterations for Self-Reward and LLM2LLM. For all methods, we synthesize
10K instruction-response pairs to train the student models.
Evaluation Methods. We use Alpaca Eval 2.0 (Dubois et al., 2024) as the in-domain evaluation to
assess the model’s instruction-following ability. We utilize gpt-4-turbo-2024-04-09 as the
evaluator and uniformly compare all methods against the student model trained with Self-Instruct. The
evaluation metrics are the standard Winning Rate (WR) and Length-Controlled Winning Rate (LC-WR). For
head-to-head winning rate, we employ the evaluation prompt in both pairwise orders, and if the results
disagree, we count it as a tie. Additionally, we evaluate the model’s generalization performance across
six out-of-domain tasks, including MT-Bench (Zheng et al., 2024), ARC-Challenge (25-shot) (Clark
et al., 2018), GSM8K (8-shot) (Cobbe et al., 2021), HellaSwag (8-shot) (Zellers et al., 2019), GPQA
(0-shot) (Rein et al., 2023), and MMLU (0-shot) (Hendrycks et al., 2020). These tasks span areas
such as multi-turn dialogue, knowledge-based question answering, mathematics, and natural language
reasoning, offering a thorough assessment of our approach’s effectiveness. For MT-Bench, we report
the score out of 10 judged by gpt-4-turbo-2024-04-09. For other tasks, we report normalized
accuracy if it is included in the evaluation results, otherwise, standard accuracy.
5 EVALUATION RESULTS
This section evaluates the effectiveness of Montessori-Instruct (§ 5.1), illustrates the correlation
between the teacher’s learning and the student’s performance (§ 5.2), conducts comprehensive
ablation studies on the effectiveness of local data influence, the optimization of the teacher, the seed
Table 1: Evaluation of training 8B/1.1B students with different data synthesis methods. Adoption of
a stronger teacher model (GPT-4o) is indicated by ∗. All else use Llama3-8B-Instruct as the teacher
model. The best and second-best performances are marked in bold and underscore, respectively.
In-Domain: Alpaca Eval 2.0 (LC-WR, WR). Out-Of-Domain: MT-Bench (score) and accuracy on MMLU, GPQA, ARC-C, GSM8K, and HellaSwag.

8B Setting: Student = Llama3-8B
Methods | LC-WR | WR | MT-Bench | MMLU | GPQA | ARC-C | GSM8K | HellaSwag
No fine-tuning | 2.09% | 3.39% | 5.597 | 62.15 | 24.33 | 57.85 | 51.25 | 81.96
Self-Instruct | 50% | 50% | 6.490 | 62.42 | 31.92 | 59.98 | 58.76 | 80.93
Self-Instruct∗ | 54.95% | 56.39% | 5.918 | 63.41 | 30.13 | 60.58 | 50.42 | 81.42
Self-Reward∗ (Iteration 1) | 51.87% | 55.38% | 6.713 | 62.46 | 28.19 | 59.84 | 53.60 | 81.04
Self-Reward∗ (Iteration 2) | 53.49% | 57.32% | 6.798 | 62.02 | 29.08 | 60.64 | 56.37 | 81.13
LLM2LLM (Iteration 1) | 51.49% | 53.12% | 6.531 | 62.18 | 29.12 | 57.49 | 55.28 | 80.49
LLM2LLM (Iteration 2) | 52.63% | 55.02% | 6.519 | 62.46 | 30.04 | 59.65 | 57.75 | 80.57
Montessori-Instruct (Iteration 1) | 54.92% | 58.59% | 6.903 | 62.93 | 29.91 | 62.97 | 58.76 | 81.22
Montessori-Instruct (Iteration 2) | 56.37% | 60.15% | 7.163 | 63.47 | 31.36 | 60.17 | 60.02 | 81.98

1.1B Setting: Student = Tinyllama-1.1B
Methods | LC-WR | WR | MT-Bench | MMLU | GPQA | ARC-C | GSM8K | HellaSwag
No fine-tuning | 17.89% | 17.56% | 1.020 | 26.16 | 23.88 | 37.12 | 1.97 | 62.61
Self-Instruct | 50% | 50% | 2.154 | 26.21 | 24.78 | 37.97 | 1.82 | 62.47
Self-Instruct∗ | 54.02% | 55.02% | 1.928 | 26.64 | 24.33 | 38.82 | 2.20 | 63.17
Self-Reward∗ (Iteration 1) | 47.62% | 48.34% | 1.804 | 26.34 | 23.92 | 37.64 | 1.76 | 62.27
Self-Reward∗ (Iteration 2) | 46.48% | 46.95% | 1.717 | 26.09 | 24.62 | 38.03 | 1.76 | 62.79
LLM2LLM (Iteration 1) | 52.03% | 52.75% | 2.243 | 25.87 | 24.51 | 36.86 | 2.24 | 62.15
LLM2LLM (Iteration 2) | 51.64% | 53.52% | 2.192 | 25.62 | 24.84 | 36.74 | 2.31 | 62.08
Montessori-Instruct (Iteration 1) | 53.25% | 51.77% | 2.485 | 26.23 | 23.92 | 37.97 | 2.35 | 62.59
Montessori-Instruct (Iteration 2) | 54.37% | 54.68% | 2.526 | 26.47 | 24.88 | 38.05 | 2.82 | 63.54
data and multiple iterations (§ 5.3), and then demonstrates the generalization of the synthetic data
from Montessori-Instruct (§ 5.4).
5.1 OVERALL PERFORMANCE
Table 1 presents the overall performance of Montessori-Instruct compared with the state-of-the-art
data synthesis methods. In the 8B setting, Montessori-Instruct significantly outperforms Self-Instruct
by 6.37% LC-WR and 10.15% WR on Alpaca Eval. Notably, our method still surpasses Self-Instruct
with GPT-4o as the teacher, suggesting that a stronger LLM does not necessarily produce more
beneficial data than a weaker LLM that is tailored to the student’s needs. Compared to Self-Reward
and LLM2LLM, Montessori-Instruct consistently shows better performance across both iterations.
This underscores the advantage of directly optimizing the teacher model’s parameters toward the
student’s preferences derived from data influence.
In addition to in-domain evaluation, Montessori-Instruct also outperforms all the baselines on out-of-
domain tasks, achieving maximum improvements of 0.673 and 0.372 on the MT-Bench in the 8B and
1.1B settings, respectively. This indicates that the teacher optimized by our method does not overfit
the reference tasks and maintains strong robustness and generalization capabilities, whereas other
baselines suffer from performance degradation on out-of-domain tasks.
5.2 CORRELATION BETWEEN TEACHER’S LEARNING AND STUDENT’S PERFORMANCE
This set of experiments examines how the teacher is progressively optimized to align with student
preferences, thereby enhancing the student’s performance. We first zoom in on the teacher’s learning
process to investigate its progressive impact on student models. Figures 3a and 3b compare the
(a) Alpaca Eval
(b) MT-Bench
(c) Data influence
(d) Positive influence
Figure 3: Figures (a) and (b) illustrate the correlation between the teacher’s learning process and the
performance of the student trained on data synthesized by the intermediate teachers in Alpaca Eval
and MT-Bench. Figure (c) depicts how the distribution of the local data influence of the teacher’s
synthetic data shifts as the teacher is progressively updated. Figure (d) presents the proportion of
training data with positive local data influence during the student’s training.
performance of students trained using synthetic data generated from the teacher’s intermediate
checkpoints. The learning margin reflects the teacher’s learning process, representing the average
difference between selected rewards and corresponding rejected rewards in DPO. A larger margin
indicates that the teacher is more likely to generate the selected synthetic data. The results indicate a
positive correlation between the student’s performance and the teacher’s optimization progress.
We then select several teacher checkpoints to examine the properties of their synthetic data, aiming to
identify changes occurring as the teacher learns. Specifically, we focus on the distribution of local
data influence in the synthetic data, defined as the change in the model’s reference loss before and
after training on a single data point, which indicates the utility of that data for the model. The baseline
reference loss is the loss on the reference set prior to one-step training, i.e., Equation 2. As shown
in Figure 3c, we observe that as the teacher is optimized, the distribution of its synthetic data shifts
towards the positive side, indicating an increased proportion of data with positive local influence in
its synthetic outputs. From the student’s perspective (Figure 3d), which shows the changes in the
proportion of data with positive local influence in the next training batch, this proportion decreases
over time during training. However, the data generated by the updated teacher consistently maintains
a higher proportion of positive influence compared to a regular teacher.
In summary, we attribute the improved performance achieved by Montessori-Instruct to the teacher’s
continuously enhanced ability to synthesize data with higher local influence, by using DPO to
distinguish data with varying influence values. The positive correlation between student performance
and the increased proportion of training data with positive local influence leads to more effective
learning, thereby improving the student’s overall performance.
5.3 ABLATION STUDIES
This subsection demonstrates the effectiveness of the methodological design in Montessori-Instruct
through four ablation studies, summarized in Table 2. The yellow lines show ablations on data
point utility evaluation methods. The red lines represent optimization for responses based on
instructions and optimization for teacher models. The blue lines cover various seed data types: OOD
(Out-Of-Domain), ID (In-Domain), and Test (direct use of the test set).
Effectiveness of Local Data Influence. To evaluate the impact of different methods for obtaining
the influence of a data point, we compare our local data influence against two additional baselines:
(1) LLM-as-a-Judge (Zheng et al., 2024), which leverages GPT-4o to directly assign a 1-5 score
to each instruction-response pair, inspired by Self-Reward, and (2) Training loss, which directly
uses the training loss of each data point as its influence score, inspired by LLM2LLM. As shown
in the yellow lines in Table 2, our local data influence consistently outperforms both baselines by a
significant margin. This indicates that local data influence is a more effective metric for capturing
students’ fine-grained data preferences compared to the other methods.
Effectiveness of Teacher Optimization. To analyze the effectiveness of the optimization strategy
on the teacher, we compare our method with two additional ablation baselines: (1) Bootstrap: we
Table 2: Ablation studies on the effectiveness of the methodological design in Montessori-Instruct.
All experiments were conducted on the Llama3-8B students.
Methodological design | LC-WR | WR | MT-Bench | MMLU | GPQA | ARC-C | GSM8K | HellaSwag
(Alpaca Eval 2.0: LC-WR, WR; MT-Bench: score; remaining columns: accuracy.)

Effectiveness of Local Data Influence
LLM-as-a-Judge | 53.42% | 54.93% | 6.731 | 62.93 | 29.75 | 62.09 | 58.82 | 81.05
Training loss | 52.34% | 54.99% | 6.656 | 62.54 | 29.89 | 61.48 | 58.76 | 80.93
Local data influence (Ours) | 54.92% | 58.59% | 6.903 | 62.93 | 29.91 | 62.97 | 58.76 | 81.22

Effectiveness of Teacher Optimization
Bootstrap | 50.59% | 48.14% | 6.618 | 60.67 | 25.19 | 57.95 | 58.13 | 80.46
Response optimization | 51.59% | 54.22% | 6.556 | 62.43 | 27.45 | 60.42 | 56.38 | 81.04
Instruction optimization (Ours) | 54.92% | 58.59% | 6.903 | 62.93 | 29.91 | 62.97 | 58.76 | 81.22

Effectiveness of Seed Data
Open Assistant (OOD) | 52.28% | 54.76% | 6.706 | 62.86 | 29.74 | 62.29 | 58.42 | 81.24
Alpaca GPT4 (ID) (Ours) | 54.92% | 58.59% | 6.903 | 62.93 | 29.91 | 62.97 | 58.76 | 81.22
Alpaca Eval (Test) | 57.64% | 61.36% | 7.147 | 62.93 | 30.44 | 63.06 | 60.80 | 81.09
(a) Win rates of iterations compared to Self-Instruct
(b) Win rates compared between different iterations
Figure 4: Head-to-head win rates for evaluating 8B models among the Self-Instruct baseline and
three successive iterations updated using Montessori-Instruct.
bootstrap the top 50% influential data by utilizing it as the seed, and (2) Response optimization: we
optimize the teacher by the student’s local data influence of different responses given an instruction.
As shown in the red lines in Table 2, optimizing the teacher is generally better than merely bootstrapping
influential data, highlighting the necessity of adapting the teacher to the student’s needs. Furthermore,
instruction optimization (Montessori-Instruct) outperforms response optimization across all tasks.
We attribute this to the smaller search space of response optimization, which limits the headroom for
teacher improvement compared to instruction optimization.
Effectiveness of Seed Data. This study examines the impact of the seed data by varying its relevance
to the evaluation tasks. In addition to the Alpaca GPT-4 (in-domain seed data) used in the main
experiments, we also utilize Open Assistant and Alpaca Eval as alternative seed data. Open Assistant
represents an out-of-domain seed, whereas Alpaca Eval is directly sampled from the evaluation task.
The blue lines in Table 2 demonstrate that using Alpaca Eval leads to the best performance on itself,
while using Open Assistant is less effective compared to in-domain seed data. For more general
NLP benchmarks, changing the seed data results in only slight differences in performance. This
indicates that our method is robust enough to enhance the synthesis ability of teachers, even when
using different seeds.
Effectiveness of Multiple Iterations. We examine the performance differences when applying
Montessori-Instruct over multiple iterations. In each iteration, we begin by constructing a probing
dataset of 2K samples to collect local data influence on the student model from the previous iteration,
followed by updating the previous teacher. As shown in Figure 4a, Montessori-Instruct continues to
outperform Self-Instruct across three iterations, achieving a peak head-to-head win rate of 51.9%.
The results in Figure 4 illustrate the comparison between different iterations, demonstrating that
Montessori-Instruct can yield improvements over previous iterations. We attribute these gains to the
Montessori-Instruct’s ability to capture the data preferences of students at different iterations and to
tailor influential data according to their evolving needs.
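For concreteness, a hypothetical sketch of this multi-iteration loop is given below. All helper names (generate_probing_set, collect_local_influence, build_preference_pairs, dpo_update, generate_training_set, supervised_finetune) are placeholders for the corresponding steps of the method, not actual library calls.

```python
# Hypothetical sketch of the iterative Montessori-Instruct loop described above.
# Each helper function stands in for a step defined in the method sections.

def montessori_instruct(teacher, student, seed_pool, num_iterations=3, probe_size=2000):
    for _ in range(num_iterations):
        # 1. The teacher synthesizes a small probing dataset (2K samples per iteration).
        probing_set = generate_probing_set(teacher, seed_pool, n=probe_size)

        # 2. Measure the local data influence of each probing example on the current student.
        influences = {x: collect_local_influence(student, x) for x in probing_set}

        # 3. Turn high- vs. low-influence instructions into preference pairs and update
        #    the teacher with DPO so it favors more influential instructions.
        pairs = build_preference_pairs(influences)
        teacher = dpo_update(teacher, pairs)

        # 4. The updated teacher synthesizes the full training set; the re-finetuned
        #    student becomes the reference for the next iteration.
        train_set = generate_training_set(teacher, seed_pool, n=10_000)
        student = supervised_finetune(student, train_set)
    return teacher, student
```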
Figure 5: Evaluation results of training four different student models using synthetic data generated by a teacher optimized for the data preferences of the 1.1B student: (a) Llama3-8B, (b) Qwen1.5-7B, (c) Mistral-7B, (d) Gemma2-9B.
5.4 GENERALIZATION ABILITY OF THE SYNTHESIZED DATA
In this experiment, we study the generalization ability of our teacher optimized toward a small student
(1.1B)’s preferences. Specifically, we utilize the data synthesized by this teacher to train four different
student models—Llama3-8B (Meta, 2024), Mistral-7B (Jiang et al., 2023), Qwen1.5-7B (Bai et al.,
2023), and Gemma2-9B (Team et al., 2024). As shown in Figure 5, the data synthesized by one
teacher leads to consistent performance gains across all the students compared to Self-Instruct. This
finding implies that an optimized teacher can be directly deployed to generate data for a variety of student models, enhancing their performance at low cost.
5.5 CASE STUDY
In this section, we present several cases to visualize the differences between the instructions synthesized by Self-Instruct and by Montessori-Instruct, and showcase the chosen and rejected data pairs that reflect what the teacher learns during our optimization. Figure 6 shows the word analysis of root verbs and their corresponding nouns. We identify the top 10 most common root verbs (inner circle) and their top 4 direct noun objects (outer circle) in the generated instructions. The results indicate that, compared to Self-Instruct, Montessori-Instruct guides the teacher to focus more on writing instructions and on providing specific, informative examples, while reducing the frequency of simple commands such as summarizing and translating.

Figure 6: The most common root verbs (inner circle) and their top direct noun objects (outer circle) in the generated instructions. (a) Self-Instruct. (b) Montessori-Instruct.
Table 3 compares the chosen and rejected data pairs given the same prompt. Our method discards
low-utility data, such as explanations of simple concepts and sentence translations, and increases
the likelihood of generating complex and informative instructions. This further demonstrates the
effectiveness of using local data influence to differentiate data utility.
6 DISCUSSION AND LIMITATIONS
Synthetic Data Scale. We synthesize 10K data points to verify the effectiveness of our innovative
data synthesis framework. While this 10K dataset outperforms other baselines and demonstrates
strong generalization, its effectiveness when scaled to the volume required for production-level
fine-tuning (around 100K) remains unclear. Expanding the synthetic data volume may introduce
redundancy, a phenomenon commonly observed in data synthesis (Bauer et al., 2024; Liu et al., 2024).
It would be meaningful to study how to balance the quantity and the diversity of the synthetic data,
although this is orthogonal to our main contribution.
Table 3: Examples of instruction-response pairs with positive and negative influences
Positive

[Instruction]: What is the most efficient way to transport a given package across a city, considering traffic congestion, distance, and cost?
[Response]: The most efficient way to transport a package across a city depends on several factors, including the package's size, weight, fragility, and urgency, as well as the city's traffic patterns, infrastructure, and regulations. Here are some general considerations and options to minimize costs, reduce congestion, and ensure timely delivery:
1. **Distance and Route Optimization**:
* Use online route planners or transportation management software to determine the shortest and most efficient route.
* Consider alternative routes or modes of transportation if the primary route is congested or closed.
2. **Traffic Congestion Mitigation**:
* Avoid peak hours (7-9 am and 4-6 pm) when possible, as traffic congestion is typically highest during these times...

[Instruction]: Can you write a script for a public service announcement about the importance of recycling?
[Response]: Here is a script for a public service announcement about the importance of recycling:
[Opening shot of a beautiful park or nature reserve, with a narrator speaking]
Narrator: "Imagine a world where our planet is thriving, where wildlife roams free, and where our communities are healthy and strong. But to get there, we need to take action."
[Cut to a shot of a person holding a plastic bottle and throwing it away]
Narrator: "One of the simplest and most effective ways to make a difference is by recycling."...

Negative

[Instruction]: Explain the concept of artificial intelligence in 100 words.
[Response]: Artificial intelligence (AI) refers to the development of computer systems that can perform tasks that typically require human intelligence, such as learning, problem-solving, and decision-making. AI systems use algorithms and data to analyze and interpret information, allowing them to make predictions, classify objects, and generate insights. They can be trained on large datasets to improve their performance over time, and can even adapt to new situations and learn from experience. AI has many applications, including robotics, natural language processing, image recognition, and expert systems, and is increasingly being used in industries such as healthcare, finance, and transportation.

[Instruction]: Can you translate the following text from english to spanish: "the hotel room was comfortable and clean."
[Response]: The translation of the text from English to Spanish is:
"El cuarto de hotel era cómodo y limpio."
Here's a breakdown of the translation:
* "the hotel room" = "el cuarto de hotel"
* "was" = "era" (verb conjugation)
* "comfortable" = "cómodo"
* "and" = "y" (conjunction)
* "clean" = "limpio"
So, the complete sentence in Spanish is: "El cuarto de hotel era cómodo y limpio."
Overhead. Montessori-Instruct introduces an additional computational cost. Compared to Wang
et al. (2023), training an 8B model using our method increases the average processing time per data
by 5.8 seconds (see Appendix E for details). At the instruction fine-tuning stage, compute is less of an issue than in pretraining: the scale is smaller, and generating data is faster and cheaper than human annotation. Additionally, the most time-intensive step in our method, collecting local data influence, can be independently parallelized on heterogeneous compute systems, allowing for easy
acceleration. As demonstrated in § 5.4, Montessori-Instruct exhibits strong generalization capabilities.
In practice, one can use a smaller model to collect data influence for updating the teacher and then
apply the updated teacher to synthesize data for larger models.
7 CONCLUSION
In this paper, we propose Montessori-Instruct, a novel data synthesis framework that tailors the
teacher for student learning. Montessori-Instruct leverages local data influence to reflect the student’s
learning preferences and to optimize the teacher to produce more influential synthetic training data.
Experimental results demonstrate that Montessori-Instruct significantly outperforms state-of-the-art
data synthesis methods in both in-domain and out-of-domain evaluations, exceeding the performance
of data generated by stronger teacher models like GPT-4o. Further analyses confirm the benefits of
optimizing the teacher toward the student’s preferences in improving student performance. Ablation
studies validate the benefits of using local data influence to reflect data utility and highlight the
benefits of optimizing the teacher over bootstrapping. Our work successfully demonstrates the
potential of incorporating the student’s learning preferences into teacher optimization, and we hope it
inspires further exploration of more effective synthetic data generation frameworks.
ACKNOWLEDGEMENTS
We sincerely thank Shi Yu and Zhenghao Liu for discussing ideas and providing helpful feedback on
this work. We also extend our gratitude to Alex Xu for testing our GitHub repository.
REFERENCES
Loubna Ben Allal, Anton Lozhkov, and Daniel van Strien. Cosmopedia: How to create large-scale synthetic data for pre-training large language models, 2024. URL https://huggingface.co/blog/cosmopedia#cosmopedia-how-to-create-large-scale-synthetic-data-for-pre-training. Accessed: 2024-09-09.
Juhan Bae, Nathan Ng, Alston Lo, Marzyeh Ghassemi, and Roger B Grosse. If influence functions
are the answer, then what is the question? Advances in Neural Information Processing Systems,
35:17953–17967, 2022.
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge,
Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu,
Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan,
Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin
Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng
Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou,
Xiaohuan Zhou, and Tianhang Zhu. Qwen technical report. arXiv preprint arXiv:2309.16609,
2023.
André Bauer, Simon Trapp, Michael Stenger, Robert Leppich, Samuel Kounev, Mark Leznik, Kyle
Chard, and Ian Foster. Comprehensive exploration of synthetic data generation: A survey. arXiv
preprint arXiv:2401.02524, 2024.
calflops. calflops: a flops and params calculate tool for neural networks. https://github.com/
MrYxJ/calculate-flops.pytorch, 2024.
Hsin-Yu Chang, Pei-Yu Chen, Tun-Hsiang Chou, Chang-Sheng Kao, Hsuan-Yun Yu, Yen-Ting Lin,
and Yun-Nung Chen. A survey of data synthesis approaches, 2024. URL https://arxiv.
org/abs/2407.03672.
Jie Chen, Yupeng Zhang, Bingning Wang, Wayne Xin Zhao, Ji-Rong Wen, and Weipeng Chen.
Unveiling the flaws: Exploring imperfections in synthetic data and mitigation strategies for large
language models. ArXiv, abs/2406.12397, 2024. URL https://api.semanticscholar.
org/CorpusID:270562788.
Sang Keun Choe, Hwijeen Ahn, Juhan Bae, Kewen Zhao, Minsoo Kang, Youngseog Chung, Adithya
Pratapa, Willie Neiswanger, Emma Strubell, Teruko Mitamura, et al. What is your data worth to
gpt? llm-scale data valuation with influence functions. arXiv preprint arXiv:2405.13954, 2024.
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and
Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge.
arXiv preprint arXiv:1803.05457, 2018.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve
math word problems. arXiv preprint arXiv:2110.14168, 2021.
Guanting Dong, Keming Lu, Chengpeng Li, Tingyu Xia, Bowen Yu, Chang Zhou, and Jingren
Zhou. Self-play with execution feedback: Improving instruction-following capabilities of large
language models. ArXiv, abs/2406.13542, 2024. URL https://api.semanticscholar.
org/CorpusID:270620157.
Yann Dubois, Balázs Galambosi, Percy Liang, and Tatsunori B Hashimoto. Length-controlled
alpacaeval: A simple way to debias automatic evaluators. arXiv preprint arXiv:2404.04475, 2024.
Logan Engstrom, Axel Feldmann, and Aleksander Madry. Dsdm: Model-aware dataset selection
with datamodels, 2024. URL https://arxiv.org/abs/2401.12926.
Shangbin Feng, Chan Young Park, Yuhan Liu, and Yulia Tsvetkov. From pretraining data to language
models to downstream tasks: Tracking the trails of political biases leading to unfair nlp models,
2023. URL https://arxiv.org/abs/2305.08283.
Sylvain Gugger, Lysandre Debut, Thomas Wolf, Philipp Schmid, Zachary Mueller, Sourab Man-
grulkar, Marc Sun, and Benjamin Bossan. Accelerate: Training and inference at scale made simple,
efficient and adaptable. https://github.com/huggingface/accelerate, 2022.
Hongyi Guo, Yuanshun Yao, Wei Shen, Jiaheng Wei, Xiaoying Zhang, Zhaoran Wang, and Yang Liu.
Human-instruction-free llm self-alignment with limited samples. ArXiv, abs/2401.06785, 2024.
URL https://api.semanticscholar.org/CorpusID:266999538.
Frank R Hampel. The influence curve and its role in robust estimation. Journal of the american
statistical association, 69(346):383–393, 1974.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.
Shengding Hu, Yuge Tu, Xu Han, Chaoqun He, Ganqu Cui, Xiang Long, Zhi Zheng, Yewei Fang,
Yuxiang Huang, Weilin Zhao, et al. Minicpm: Unveiling the potential of small language models
with scalable training strategies. arXiv preprint arXiv:2404.06395, 2024a.
Xuming Hu, Junzhe Chen, Xiaochuan Li, Yufei Guo, Lijie Wen, Philip S. Yu, and Zhijiang Guo.
Towards understanding factual knowledge of large language models. In The Twelfth International
Conference on Learning Representations, 2024b. URL https://openreview.net/forum?
id=9OevMUdods.
Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang,
Andrea Madotto, and Pascale Fung. Survey of hallucination in natural language generation. ACM
Computing Surveys, 55(12):1–38, 2023.
Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot,
Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al.
Mistral 7b. arXiv preprint arXiv:2310.06825, 2023.
Juyong Jiang, Fan Wang, Jiasi Shen, Sungju Kim, and Sunghun Kim. A survey on large language
models for code generation. arXiv preprint arXiv:2406.00515, 2024.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, San Diega,
CA, USA, 2015.
Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In
International conference on machine learning, pp. 1885–1894. PMLR, 2017.
Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E.
Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model
serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating
Systems Principles, 2023.
Nicholas Lee, Thanakul Wattanawong, Sehoon Kim, Karttikeya Mangalam, Sheng Shen, Gopala
Anumanchipali, Michael W Mahoney, Kurt Keutzer, and Amir Gholami. Llm2llm: Boosting llms
with novel iterative data enhancement. arXiv preprint arXiv:2403.15042, 2024.
Young-Suk Lee, Md Arafat Sultan, Yousef El-Kurdi, Tahira Naseem, Asim Munawar, Radu Florian, Salim Roukos, and Ramón Fernández Astudillo. Ensemble-instruct: Generating instruction-
tuning data with a heterogeneous mixture of lms. ArXiv, abs/2310.13961, 2023. URL https:
//api.semanticscholar.org/CorpusID:264426718.
Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Omer Levy, Luke Zettlemoyer, Jason Weston, and
Mike Lewis. Self-alignment with instruction backtranslation. arXiv preprint arXiv:2308.06259,
2023.
Ruibo Liu, Chenyan Jia, Jason Wei, Guangxuan Xu, Lili Wang, and Soroush Vosoughi. Mitigating
political bias in language models through reinforced calibration. In Proceedings of the AAAI
Conference on Artificial Intelligence, volume 35, pp. 14857–14866, 2021.
Ruibo Liu, Jason Wei, Shixiang Shane Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou,
and Andrew M Dai. Mind’s eye: Grounded language model reasoning through simulation. arXiv
preprint arXiv:2210.05359, 2022.
Ruibo Liu, Jerry Wei, Fangyu Liu, Chenglei Si, Yanzhe Zhang, Jinmeng Rao, Steven Zheng, Daiyi
Peng, Diyi Yang, Denny Zhou, et al. Best practices and lessons learned on synthetic data for
language models. arXiv preprint arXiv:2404.07503, 2024.
Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization, 2019. URL https:
//arxiv.org/abs/1711.05101.
Yingzhou Lu, Minjie Shen, Huazheng Wang, Xiao Wang, Capucine van Rechem, and Wenqi Wei.
Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062, 2023.
Meta. The llama 3 herd of models, 2024. URL https://arxiv.org/abs/2407.21783.
Nihal V. Nayak, Yiyang Nan, Avi Trost, and Stephen H. Bach. Learning to generate instruction
tuning datasets for zero-shot task adaptation. ArXiv, abs/2402.18334, 2024. URL https:
//api.semanticscholar.org/CorpusID:268041745.
OpenAI. GPT-4, 2024. URL https://chatgpt.com/.
Sung Min Park, Kristian Georgiev, Andrew Ilyas, Guillaume Leclerc, and Aleksander Madry. Trak:
Attributing model behavior at scale. arXiv preprint arXiv:2303.14186, 2023.
Roma Patel and Ellie Pavlick. Mapping language models to grounded conceptual spaces.
In
International conference on learning representations, 2022.
Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea
Finn. Direct preference optimization: Your language model is secretly a reward model. Advances
in Neural Information Processing Systems, 36, 2024.
David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani,
Julian Michael, and Samuel R Bowman. Gpqa: A graduate-level google-proof q&a benchmark.
arXiv preprint arXiv:2311.12022, 2023.
Nikhil Sardana, Jacob Portes, Sasha Doubov, and Jonathan Frankle. Beyond chinchilla-optimal:
Accounting for inference in language model scaling laws. arXiv preprint arXiv:2401.00448, 2023.
Mohamed El Amine Seddik, Suei-Wen Chen, Soufiane Hayou, Pierre Youssef, and Mérouane
Debbah. How bad is training on synthetic data? a statistical analysis of language model collapse.
ArXiv, abs/2404.05090, 2024. URL https://api.semanticscholar.org/CorpusID:
269005923.
Lei Shu, Liangchen Luo, Jayakumar Hoskere, Yun Zhu, Canoee Liu, Simon Tong, Jindong Chen,
and Lei Meng. Rewritelm: An instruction-tuned large language model for text rewriting. In AAAI
Conference on Artificial Intelligence, 2023. URL https://api.semanticscholar.org/
CorpusID:258887805.
Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot, and Ross Anderson.
The curse of recursion: Training on generated data makes models forget. ArXiv, abs/2305.17493,
2023a. URL https://api.semanticscholar.org/CorpusID:258987240.
Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot, and Ross Ander-
son. The curse of recursion: Training on generated data makes models forget. arXiv preprint
arXiv:2305.17493, 2023b.
Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally
can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314, 2024.
Zhiqing Sun, Yikang Shen, Qinhong Zhou, Hongxin Zhang, Zhenfang Chen, David D. Cox,
Yiming Yang, and Chuang Gan. Principle-driven self-alignment of language models from
scratch with minimal human supervision. ArXiv, abs/2305.03047, 2023. URL https:
//api.semanticscholar.org/CorpusID:258479665.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy
Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model.
https://github.com/tatsu-lab/stanford_alpaca, 2023.
Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya
Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, et al.
Gemma 2: Improving open language models at a practical size. arXiv preprint arXiv:2408.00118,
2024.
Yuxuan Tong, Xiwen Zhang, Rui Wang, Ruidong Wu, and Junxian He. Dart-math: Difficulty-aware
rejection tuning for mathematical problem-solving. 2024. URL https://arxiv.org/abs/
2407.13690.
Boris Van Breugel, Zhaozhi Qian, and Mihaela Van Der Schaar. Synthetic data, real errors: how
(not) to publish and use synthetic data. In International Conference on Machine Learning, pp.
34793–34808. PMLR, 2023.
Leandro von Werra, Younes Belkada, Lewis Tunstall, Edward Beeching, Tristan Thrush, Nathan
Lambert, and Shengyi Huang. Trl: Transformer reinforcement learning. https://github.
com/huggingface/trl, 2020.
Tianlu Wang,
Ilia Kulikov, Olga Golovneva, Ping Yu, Weizhe Yuan, Jane Dwivedi-Yu,
Richard Yuanzhe Pang, Maryam Fazel-Zarandi, Jason Weston, and Xian Li. Self-taught evaluators.
arXiv preprint arXiv:2408.02666, 2024.
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and
Hannaneh Hajishirzi. Self-instruct: Aligning language models with self-generated instructions. In
The 61st Annual Meeting Of The Association For Computational Linguistics, 2023.
Sanford Weisberg and R Dennis Cook. Residuals and influence in regression. 1982.
Mengzhou Xia, Sadhika Malladi, Suchin Gururangan, Sanjeev Arora, and Danqi Chen. Less:
Selecting influential data for targeted instruction tuning. In ICLR 2024 Workshop on Navigating
and Addressing Data Problems for Foundation Models, 2024.
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo
Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions for
large language models. arXiv preprint arXiv:2309.12284, 2023.
Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander J Ratner, Ranjay Krishna, Jiaming Shen,
and Chao Zhang. Large language model as attributed training data generator: A tale of diversity
and bias. Advances in Neural Information Processing Systems, 36, 2024a.
Zichun Yu, Spandan Das, and Chenyan Xiong. Mates: Model-aware data selection for efficient
pretraining with data influence models. arXiv preprint arXiv:2406.06046, 2024b.
Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Sainbayar Sukhbaatar, Jing Xu, and Jason
Weston. Self-rewarding language models. arXiv preprint arXiv:2401.10020, 2024.
Xiang Yue, Tuney Zheng, Ge Zhang, and Wenhu Chen. Mammoth2: Scaling instructions from the
web. arXiv preprint arXiv:2405.03548, 2024.
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine
really finish your sentence? arXiv preprint arXiv:1905.07830, 2019.
Peiyuan Zhang, Guangtao Zeng, Tianduo Wang, and Wei Lu. Tinyllama: An open-source small
language model, 2024.
Chenyang Zhao, Xueying Jia, Vijay Viswanathan, Tongshuang Wu, and Graham Neubig. Self-guide:
Better task-specific instruction following via self-synthetic finetuning. ArXiv, abs/2407.12874,
2024. URL https://api.semanticscholar.org/CorpusID:271270568.
Yanli Zhao, Andrew Gu, Rohan Varma, Liang Luo, Chien-Chin Huang, Min Xu, Less Wright, Hamid
Shojanazeri, Myle Ott, Sam Shleifer, Alban Desmaison, Can Balioglu, Pritam Damania, Bernard
Nguyen, Geeta Chauhan, Yuchen Hao, Ajit Mathews, and Shen Li. Pytorch fsdp: Experiences on
scaling fully sharded data parallel, 2023. URL https://arxiv.org/abs/2304.11277.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang,
Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and
chatbot arena. Advances in Neural Information Processing Systems, 36, 2024.
Kun Zhou, Beichen Zhang, Jiapeng Wang, Zhipeng Chen, Wayne Xin Zhao, Jing Sha, Zhichao Sheng,
Shijin Wang, and Ji-Rong Wen. Jiuzhang3. 0: Efficiently improving mathematical reasoning by
training small data synthesis models. arXiv preprint arXiv:2405.14365, 2024.
A TRAINING DETAILS
The hyperparameters used during training teachers and students are as follows. We employ the
AdamW optimizer (Loshchilov & Hutter, 2019) with a WSD scheduler (Hu et al., 2024a). For SFT,
the 8B model utilizes a maximum learning rate of 5e−6, while the 1B model uses 1e−5. The WSD
scheduler is configured with a warmup ratio of 0.1, a stable ratio of 0.5, and a decay ratio of 0.4, with
the learning rate decaying to one-thousandth of the maximum. The epoch is set to 1, batch size is
set to 32 and the dropout is 0. We mask non-target tokens, calculating the loss only on target tokens.
If the student model does not have a chat template itself, we apply the Llama3-8B formatted chat
template, as shown in Figure 7, with bos_token, eos_token, and pad_token set to <|start_header_id|>, <|end_header_id|>, and <|end_header_id|>, respectively. For DPO, we use a learning rate
of 1e−6, set β to 0.1, and use a batch size of 2, while other parameters remain the same as in SFT.
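As a minimal sketch, the warmup-stable-decay (WSD) schedule described above can be implemented with a PyTorch LambdaLR as follows. The linear shapes of the warmup and decay phases are our assumption; the paper only specifies the phase ratios and the final learning rate of one-thousandth of the maximum.

```python
from torch.optim.lr_scheduler import LambdaLR

def wsd_schedule(optimizer, total_steps, warmup_ratio=0.1, stable_ratio=0.5, min_lr_ratio=1e-3):
    """Warmup-Stable-Decay: linear warmup, constant plateau, then decay to max_lr * min_lr_ratio."""
    warmup_steps = int(total_steps * warmup_ratio)
    stable_steps = int(total_steps * stable_ratio)
    decay_steps = max(1, total_steps - warmup_steps - stable_steps)  # remaining ~0.4 of training

    def lr_lambda(step):
        if step < warmup_steps:
            return (step + 1) / max(1, warmup_steps)      # linear warmup to the maximum LR
        if step < warmup_steps + stable_steps:
            return 1.0                                    # hold at the maximum LR
        progress = (step - warmup_steps - stable_steps) / decay_steps
        return max(min_lr_ratio, 1.0 - (1.0 - min_lr_ratio) * min(progress, 1.0))  # decay to max_lr/1000

    return LambdaLR(optimizer, lr_lambda)
```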
Figure 7: Chat Template
Chat Template

{% if messages[0]['role'] == 'system' %}
  {% set offset = 1 %}
{% else %}
  {% set offset = 0 %}
{% endif %}
{{ bos_token }}
{% for message in messages %}
  {% if (message['role'] == 'user') != (loop.index0 % 2 == offset) %}
    {{ raise_exception('Conversation roles must alternate user/assistant/user/assistant...') }}
  {% endif %}
  {{ '<|start_header_id|>' + message['role'] + '<|end_header_id|>' + message['content'] | trim + eos_token }}
{% endfor %}
{% if add_generation_prompt %}
  {{ '<|start_header_id|>' + 'assistant' + '<|end_header_id|>\n\n' }}
{% endif %}
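As a usage sketch, the template above can be attached to a Hugging Face tokenizer and applied to a conversation as follows. The variable LLAMA3_CHAT_TEMPLATE (the Jinja string from Figure 7) and the TinyLlama checkpoint name are illustrative.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T")
tokenizer.chat_template = LLAMA3_CHAT_TEMPLATE  # Jinja template from Figure 7 (assumed to be defined)

messages = [
    {"role": "user", "content": "Explain the concept of data influence in two sentences."},
]
# Render the prompt exactly as it will appear during fine-tuning and inference.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```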
We use Hugging Face TRL codebase (von Werra et al., 2020) to perform both full parameters fine-
tuning and direct preference optimization. For the 8B model, we employ the Hugging Face Accelerate
codebase (Gugger et al., 2022) to facilitate FSDP training (Zhao et al., 2023). All the parameters
introduced in this section are summarized in Table 4.
Table 4: Training Parameters

Method | Learning Rate | Minimum Learning Rate | Weight Decay | Warmup Ratio | Stable Ratio | Decay Ratio | Epoch
SFT | 5.0e-6 | 5.0e-9 | 0.0 | 0.1 | 0.5 | 0.4 | 1
DPO | 1.0e-6 | 1.0e-9 | 0.0 | 0.1 | 0.5 | 0.4 | 1

Method | Per Device Train Batch Size | Gradient Accumulation | Train Batch Size | Max Length | Dropout | Flash Attention 2 | BF16 | Beta
SFT | 2 | 2 | 32 | 1024 | 0.0 | True | True | -
DPO | 1 | 2 | 2 | 1024 | 0.0 | True | True | 0.1
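A minimal sketch of the DPO stage with the TRL library, plugging in the hyperparameters above, is given below. Exact argument names can differ slightly across TRL versions, and the model identifier and preference-data path are illustrative placeholders rather than the paper's actual artifacts.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")  # teacher
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
# Preference pairs (prompt, chosen instruction, rejected instruction) built from local data influence.
preference_data = load_dataset("json", data_files="teacher_preferences.jsonl", split="train")

args = DPOConfig(
    output_dir="teacher-dpo",
    beta=0.1,                       # DPO temperature, as in Table 4
    learning_rate=1e-6,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    max_length=1024,
    bf16=True,
)
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=preference_data,
    processing_class=tokenizer,     # older TRL versions use tokenizer= instead
)
trainer.train()
```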
B THEORETICAL GUARANTEE OF LOCAL DATA INFLUENCE
This section provides a detailed explanation of the derivation for computing local data influence
and the rationale behind its effectiveness. We referred to the derivation method in Yu et al. (2024b).
We use Dref to represent the reference set and m to represent the student model that we calculate
local data influence on. The derivation begins with the standard influence functions (Koh & Liang, 2017; Weisberg & Cook, 1982), which quantify the change in reference loss when a data point x_i is upweighted by a small ϵ. We denote the optimal model state after the upweighting as

m_{ϵ,x_i} = arg min_m (1/n) Σ_{j=1}^{n} L(x_j | m) + ϵ L(x_i | m),

and simplify the optimal model under the ϵ = 0 case (i.e., no upweighting) as m. The influence of upweighting x_i is then given by:

I_m(x_i; D_ref) := dL(D_ref | m_{ϵ,x_i}) / dϵ |_{ϵ=0}                         (5)
                 = ∇_m L(D_ref | m)^⊤ (dm_{ϵ,x_i} / dϵ) |_{ϵ=0}               (6)
                 = −∇_m L(D_ref | m)^⊤ H_m^{−1} ∇_m L(x_i | m),               (7)

where H_m = (1/n) Σ_{j=1}^{n} ∇²_m L(x_j | m) is the Hessian matrix, which is positive definite. The derivation from Eq. 6 to Eq. 7 is obtained by building a quadratic approximation to the empirical risk around m and performing a single Newton step, as shown in Koh & Liang (2017). Now let us consider the scenario in which x_i is incorporated into the training data. In this case, ϵ = 1/n, the parameter difference due to the inclusion of x_i is m_{1/n,x_i} − m ≈ (1/n) H_m^{−1} ∇_m L(x_i | m), and the influence in Eq. 7 can be further represented as:

I_m(x_i; D_ref) ≈ n ∇_m L(D_ref | m)^⊤ (m_{1/n,x_i} − m)                      (8)
                ≈ n (L(D_ref | m_{1/n,x_i}) − L(D_ref | m))                   (9)
                ∝ −L(D_ref | m) + L(D_ref | m_{1/n,x_i}).                     (10)

So far, we have derived the method (Eq. 10) of calculating local data influence used in § 3.2. Using the supervised fine-tuning algorithm A, we denote the model state m_{1/n,x_i} as A(y_i | x_i; m), which is updated on the synthetic data point (x_i, y_i) for one step. Replacing the variables in Eq. 10 with the notation of our method, we obtain:

I_m(x_i; D_ref) ≈ −L(D_ref | A(y_i | x_i; m)) + L(D_ref | m).                 (11)
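A minimal sketch of Eq. 11 in code is given below: take one gradient step on the candidate pair (x_i, y_i) from the current student checkpoint and compare the reference loss before and after. The optimizer choice and the use of deepcopy (rather than reloading the warmed-up checkpoint from disk, as described in Appendix E.1) are simplifications for illustration.

```python
import copy
import torch

def local_data_influence(student, candidate_batch, reference_loader, lr=5e-6):
    """Approximate I_m(x_i; D_ref) = L(D_ref | m) - L(D_ref | A(y_i | x_i; m)) from Eq. 11."""
    device = next(student.parameters()).device

    def reference_loss(model):
        model.eval()
        total, n = 0.0, 0
        with torch.no_grad():
            for batch in reference_loader:
                out = model(**{k: v.to(device) for k, v in batch.items()})
                total, n = total + out.loss.item(), n + 1
        return total / max(n, 1)

    loss_before = reference_loss(student)

    # One supervised fine-tuning step on the single candidate (x_i, y_i).
    probe = copy.deepcopy(student)  # in practice, reload the warmed-up checkpoint instead
    probe.train()
    optimizer = torch.optim.AdamW(probe.parameters(), lr=lr)
    out = probe(**{k: v.to(device) for k, v in candidate_batch.items()})
    out.loss.backward()
    optimizer.step()

    loss_after = reference_loss(probe)
    # Positive influence means the candidate reduces the reference loss.
    return loss_before - loss_after
```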
C STATISTICS ON SYNTHESIS DATA
We plot the top 20 most common root verbs (inner circle) and their top 4 direct noun objects (outer
circle) in the generated instructions by Self-Instruct (Figure 8), the first iteration of Montessori-
Instruct (Figure 9), and the second iteration of Montessori-Instruct (Figure 10), respectively.
We observe an increasing trend in instructions such as ’write,’ ’provide,’ and ’make,’ as well as a
consistent trend for instructions like ’explain’ and ’describe.’ These commands typically require more general, detailed information and lead to longer, more complex responses. Meanwhile, commands like
’translate’ and ’calculate’ show a decline, as they usually require straightforward answers and simpler
formats. This outcome demonstrates that Montessori-Instruct helps the teacher model generate more
detailed and informative instructions, thereby improving student performance.
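As a rough sketch, the verb-noun statistics above can be reproduced with a standard dependency parser. The snippet below uses spaCy, which is our assumption for illustration since the paper does not name its parsing tool, to count root verbs and their direct noun objects.

```python
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline with a dependency parser

def root_verb_object_counts(instructions):
    """Count (root verb, direct object) pairs over a list of instruction strings."""
    counts = Counter()
    for doc in nlp.pipe(instructions):
        for token in doc:
            if token.dep_ == "ROOT" and token.pos_ == "VERB":
                objects = [c.lemma_.lower() for c in token.children if c.dep_ == "dobj"]
                for obj in objects or ["<none>"]:
                    counts[(token.lemma_.lower(), obj)] += 1
    return counts

# Example usage (generated_instructions is a list of strings):
# counts = root_verb_object_counts(generated_instructions)
# print(counts.most_common(20))   # e.g., ("write", "paragraph"), ("explain", "concept")
```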
We also plot the distribution of tokenized instructions and responses generated by Self-Instruct and
Montessori-Instruct for comparison. As shown in Figures 11 and 12, there is an increasing trend in
the length of instructions, while the length of responses remains relatively unchanged. This aligns
with our design, which focuses on optimizing instructions based on prompts rather than optimizing
responses based on instructions. The increased length of instructions also reflects the teacher’s data
synthesis strategy shifting toward more complex and informative instructions.
Figure 8: The top 20 most common root verbs (inner circle) and their top 4 direct noun objects (outer
circle) in the generated instructions by Self-Instruct
Figure 9: The top 20 most common root verbs (inner circle) and their top 4 direct noun objects (outer
circle) in the generated instructions by Montessori-Instruct (iteration 1)
Figure 10: The top 20 most common root verbs (inner circle) and their top 4 direct noun objects
(outer circle) in the generated instructions by Montessori-Instruct (iteration 2)
Figure 11: Distribution of tokenized instructions generated by Self-Instruct and Montessori-Instruct
(a) Self-Instruct
(b) Montessori-Instruct
Figure 12: Distribution of tokenized responses generated by Self-Instruct and Montessori-Instruct
(a) Self-Instruct
(b) Montessori-Instruct
D ADDITIONAL EXPERIMENTAL DETAILS
D.1 PROMPTS USED FOR INSTRUCTION GENERATION.
In this section, we present the prompts used in Montessori-Instruct. Figure 13 illustrates how we
prompt the teacher model to generate new instructions. We begin by outlining some requirements for
the teacher, followed by inserting 8-shot seed examples sampled from both the seed pool and the data
pool generated in the previous iteration. We then extract the instruction from the teacher’s output
using regex matching and filter out those with incorrect formats.
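A small sketch of this extraction-and-filtering step is shown below; the exact filtering rules are ours and only illustrative.

```python
import re

INSTRUCTION_PATTERN = re.compile(r"<instruction>(.*?)</instruction>", re.DOTALL)

def extract_instruction(teacher_output: str):
    """Return the text wrapped in <instruction>...</instruction>, or None if the format is wrong."""
    match = INSTRUCTION_PATTERN.search(teacher_output)
    if match is None:
        return None                      # incorrect format: drop this generation
    instruction = match.group(1).strip()
    return instruction if instruction else None
```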
Figure 14 displays the prompt used in our ablation studies on the effectiveness of Local Data Influence.
In this study, we evaluated different methods for assessing the utility of synthetic data, one of which
involved using LLM-as-a-Judge (Zheng et al., 2024). We adapted the prompt from Self-Reward (Yuan
et al., 2024) and added an additional point to evaluate the quality of the instruction, resulting in a
maximum score of 6 points.
Figure 13: Prompt for Generating Instructions
Prompt
Generate an instruction. This instruction should be a question
that humans would be ask. It can be in imperative or interrogative form. We will use the instructions you generate to train
models, so you must ensure that the instructions generated are
of high quality and correct and also keep the instruction clear
and concise.
You should:
1. Briefly explain why you generate this instruction.
2. Think about whether you need to add some input to this
instruction so that it can be answered directly. (For example,
for tasks that involve summarizing, you need to provide the
paragraph to be summarized).
3. Return you output strictly following the format:
Your generated instruction should strictly follow the following
format:
<instruction><YOUR INSTRUCTION HERE><YOUR
INPUT HERE></instruction>
If there is no need to add inputs to answer the instruction, you
can skip the <YOUR INPUT HERE> part. If you need to add
inputs, just replace the <YOUR INPUT HERE> with the input.
Now here are some examples of reference instructions, and
please generate only one instruction.
D.2 DECODING STRATEGIES
We list all the parameters used for decoding outputs from language models in Table 5. Separate
parameters are used for generating instructions and responses. A higher temperature is used for
instruction generation to encourage diversity, enabling us to leverage local data influence to identify
more informative instructions. For responses, we use a temperature of 0.6 to reduce uncertainty.
Additionally, two penalty techniques are employed to mitigate duplication issues during synthesis.
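A sketch of these two decoding configurations with vLLM is given below; the parameter values mirror Table 5, while the model identifier is an illustrative placeholder.

```python
from vllm import LLM, SamplingParams

teacher = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")

# Diverse decoding for instruction generation (Table 5, left column).
instruction_params = SamplingParams(
    temperature=1.0, top_p=0.9,
    frequency_penalty=0.0, presence_penalty=1.0, repetition_penalty=1.5,
    max_tokens=1024,
)

# Lower-temperature decoding for response generation (Table 5, right column).
response_params = SamplingParams(
    temperature=0.6, top_p=0.9,
    frequency_penalty=0.0, presence_penalty=1.0, repetition_penalty=1.0,
    max_tokens=1024,
)

# Example usage (prompts is a list of formatted prompt strings):
# instructions = teacher.generate(prompts, instruction_params)
```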
D.3 SELF-REWARD RESULTS WITHOUT THE EXTERNAL JUDGE
In this section, we report the results of the original Self-Reward (Yuan et al., 2024) method. Self-
Reward requires the student model to generate responses to given instructions, and then assess their
own responses by generating judgments and scores ranging from 1 to 5 using LLM-as-a-Judge (Zheng
et al., 2024). It then employs Direct Preference Optimization (DPO) to encourage the student to
synthesize higher-scoring responses. However, this approach demands a high level of instruction-
following ability from the student model. The authors of Self-Reward employ Llama2-70B as the
Figure 14: LLM-as-a-Judge Prompt for evaluating instructions and corresponding responses in our
ablation studies on the effectiveness of Local Data Influence
Prompt
Review the user’s instruction and the corresponding response
using the additive 6-point scoring system described below.
Points are accumulated based on the satisfaction of each crite-
rion:
- Add 1 point if the response is relevant and provides some in-
formation related to the user’s inquiry, even if it is incomplete
or contains some irrelevant content.
- Add another point if the response addresses a substantial
portion of the user’s question, but does not completely resolve
the query or provide a direct answer.
- Award a third point if the response answers the basic ele-
ments of the user’s question in a useful way, regardless of
whether it seems to have been written by an AI Assistant or if
it has elements typically found in blogs or search results.
- Grant a fourth point if the response is clearly written from
an AI Assistant’s perspective, addressing the user’s question
directly and comprehensively, and is well-organized and help-
ful, even if there is slight room for improvement in clarity,
conciseness or focus.
- Bestow a fifth point for a response that is impeccably tailored
to the user’s question by an AI Assistant, without extraneous
information, reflecting expert knowledge, and demonstrating a
high-quality, engaging, and insightful answer.
- Award an additional point if you consider this instruction to
be of moderate difficulty, requiring thought and analysis rather
than being a straightforward task.
User:
<INSTRUCTION HERE>
<response><RESPONSE HERE></response>
After examining the user’s instruction and the response:
- Briefly justify your total score, up to 100 words.
- Conclude with the score using the format: ”Score: <total points>”
Remember to assess from the AI Assistant perspective, uti-
lizing web search knowledge as necessary. To evaluate the
response in alignment with this additive scoring model, we’ll
systematically attribute points based on the outlined criteria.
Table 5: Decoding Parameters using vLLM

Parameter | Generate Instruction | Generate Responses
temperature | 1 | 0.6
top_p | 0.9 | 0.9
frequency_penalty | 0 | 0
presence_penalty | 1 | 1
repetition_penalty | 1.5 | 1
max_tokens | 1024 | 1024
student model for this reason. In our experimental setup with Llama3-8B and TinyLlama-1.1B, both
models lack sufficient instruction-following capabilities and fail to produce detailed judgments and
valid scores. For example, Llama3-8B’s scores are skewed, clustering around 4 and 5, making it
difficult to differentiate between responses. The 1.1B model’s scores do not even follow the rules in
Table 6: Evaluation of training 8B/1.1B students using the original Self-Reward settings compared to
Self-Instruct, without relying on external judges.
Methods | Alpaca Eval 2.0 LC-WR | Alpaca Eval 2.0 WR | MT-Bench Score | MMLU | GPQA | ARC-C | GSM8K | HellaSwag
(Alpaca Eval 2.0 and MT-Bench are in-domain; the remaining benchmarks are out-of-domain and report accuracy.)

8B Setting: Student=Llama3-8B
No fine-tuning | 2.09% | 3.39% | 5.597 | 62.15 | 24.33 | 57.85 | 51.25 | 81.96
Self-Instruct | 50% | 50% | 6.490 | 62.42 | 31.92 | 59.98 | 58.76 | 80.93
Self-Reward Iteration 1 | 2.45% | 4.06% | 5.442 | 61.79 | 24.30 | 57.81 | 49.92 | 80.75
Self-Reward Iteration 2 | 2.69% | 4.71% | 5.428 | 61.79 | 23.58 | 57.64 | 49.53 | 80.17

1.1B Setting: Student=Tinyllama-1.1B
No fine-tuning | 17.89% | 17.56% | 1.020 | 26.16 | 23.88 | 37.12 | 1.97 | 62.61
Self-Instruct | 50% | 50% | 2.154 | 26.21 | 24.78 | 37.97 | 1.82 | 62.47
Self-Reward Iteration 1 | 7.79% | 8.13% | 1.000 | 23.58 | 22.30 | 36.55 | 0.94 | 61.92
Self-Reward Iteration 2 | 6.34% | 7.57% | 1.000 | 23.44 | 22.06 | 36.49 | 0.98 | 61.24
the prompt and fall outside the specified 1 to 5 range. Therefore, in our main experiment, we use
GPT-4o as an external judge to score the student responses. Nonetheless, we also report results here
based on the original Self-Reward settings, where the model judges its own responses without relying
on a more powerful external model.
E COST ANALYSIS
E.1 TIME OVERHEAD
Compared to Self-Instruct (Wang et al., 2023), our method introduces additional overhead in: (1)
collecting local data influence to construct the preference dataset (§ 3.2), (2) and performing DPO
optimization for the teacher model (§ 3.3). The majority of the computational overhead arises from
collecting local data influence. This process begins by generating instructions and responses to create
a probing dataset, distinct from the training set used for fine-tuning the student, and used solely
for calculating local data influence. Then, we traverse the entire probing dataset, fine-tuning the
student model on each individual data point to collect its corresponding local influence. For each data
point, loading the student’s warmed-up checkpoint from disk, training for one step, and evaluating
on the reference dataset are the primary time-consuming steps. We provide a detailed breakdown of
the time required for these steps in table 7 and calculate the average time needed to run the entire
Montessori-Instruct process and resulte in the final student model. The calculations are based on a
probing dataset and training dataset, each consisting of 10K entries.
However, there are two simple ways to reduce the time demand for Montessori-Instruct. First,
the process of collecting local data influence can be parallelized independently on a heterogeneous
compute system to speed up execution, with no need for communication between systems—a common
bottleneck in distributed training. In our experiments, we utilize 8 H100 GPUs to accelerate this
process. Second, as demonstrated in our experiments (§ 5.4), Montessori-Instruct shows strong
generalization capabilities. In practice, a smaller model can be used to collect data influence for
updating the teacher, which can then synthesize data for larger models. This approach significantly
reduces the computational overhead compared to using larger models directly for collecting local
data influence.
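For example, the probing dataset can simply be sharded across independent workers, since each data point's influence is computed in isolation. The sketch below assumes a local_data_influence helper like the one sketched in Appendix B; load_warmed_up_checkpoint is a hypothetical placeholder for restoring the warmed-up student from disk.

```python
def influence_shard(probing_dataset, rank, world_size, student_ckpt, reference_loader):
    """Each worker (e.g., one GPU or one machine) processes an independent slice of the probing set."""
    results = {}
    for idx in range(rank, len(probing_dataset), world_size):
        student = load_warmed_up_checkpoint(student_ckpt)   # fresh warmed-up copy per data point
        results[idx] = local_data_influence(student, probing_dataset[idx], reference_loader)
    return results  # shards are merged offline; no inter-worker communication is needed
```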
Table 7: Time Overhead Statistics

Collect local data influence / per data:
- generate instructions: 0.372s
- generate responses: 0.031s
- load warmed-up ckpt from disk: 2.69s (8B student) / 1.08s (1B student)
- fine-tune for one step: 4.12s (8B student) / 0.79s (1B student)
- eval on reference set: 4.19s (8B student) / 1.26s (1B student)
- total: 13.403s (8B student) / 3.533s (1B student)

Time for DPO training / per data (8B teacher): 0.362s

Time for obtaining the final student model / per data:
Method | 8B | 1B
Self-Instruct | 0.486s | 0.422s
Montessori-Instruct | 5.842s | 1.834s
E.2 COST-PERFORMANCE RELATIONSHIP
We provide further clarification on the cost-performance relationship of our method compared
to all baselines. We analyzed the Performance-FLOPs curve of four methods, with a particular
focus on the changes in Self-Instruct’s Alpaca Eval and MT-Bench Score as their FLOPs increase
to levels comparable to those of Montessori-Instruct. We scale the FLOPs of Self-Instruct by
synthesizing additional data. We also marked the Performance-FLOPs relationship of the two
baselines, LLM2LLM and Self-Reward, in the following figures.
(a) Alpaca Eval WR
(b) Alpaca Eval LC-WR
(c) MT-Bench
Figure 15: The Performance-FLOPs curve for all four methods.
It can be seen that Self-Instruct quickly reaches its upper bound during the scaling-up process: even with more FLOPs, no further performance improvement is achieved. The likely reason is that the data generated by Self-Instruct is severely homogenized. In contrast, the upper bound of our method is significantly higher and continues to grow as we invest more FLOPs.
We then report the estimated FLOPs for the four methods, as well as for pretraining and test-time scaling. The detailed derivation is provided in Appendix E.3. The main FLOPs of Montessori-Instruct come from processing probing data. In Table 1, we used 10K probing data to utilize the most resources and achieve the best performance, but as Figures 3a and 3b suggest, using around 1K probing data already achieves better performance than the other baselines. To make a fair comparison, we calculate the FLOPs under 1K probing data. We estimate the FLOPs
as follows (Llama3-8B-Instruct as the teacher, Llama3-8B as the student):
• Self-Instruct: 1.34 × 10^20 FLOPs
• Self-Reward: 2.11 × 10^21 FLOPs
• LLM2LLM: 2.3 × 10^20 FLOPs
• Montessori-Instruct: 6.43 × 10^20 FLOPs
• Pretrain Llama3-8B: 1.87 × 10^24 FLOPs
• Inference-Time Scaling: 1.60 × 10^23 FLOPs
We can see that Montessori-Instruct requires roughly 7 times fewer FLOPs than Self-Reward. Furthermore, if we use a proxy model (Yu et al., 2024b), i.e., a smaller model (e.g., 1B parameters assisting an 8B model), to process probing data, Montessori-Instruct’s FLOPs can be further reduced to 1.92 × 10^20 FLOPs.
This makes it comparable to Self-Instruct while still outperforming it. Using a proxy model has
promising potential for enhancing both efficiency and performance, which we leave for future work.
Regarding the pretraining, since the computational cost during the SFT phase is significantly lower
than that during the pretraining phase (about 10^4 times smaller), even if we increase resource investment in SFT, its overall consumption remains minimal. Recent work has focused on scaling inference time to achieve better performance (Snell et al., 2024). However, the inference-time scaling FLOPs are also significantly larger than those of SFT, being approximately 10^3 times greater, according to
Sardana et al. (2023). Nevertheless, our teacher training represents a one-time cost. As demonstrated
in Section 5.4, the optimized teacher can assist multiple students in improving their performance
without the need for retraining from scratch.
E.3 DERIVATION OF FLOPS
• When generating synthetic data, the input window includes both prompt and seed data, so
we set the input length to 2048.
• For instruction-based input/output, the input/output length is 128.
• For response-based input/output, the input/output length is 1024.
• For judgment-based input/output using an LLM, the input/output length is 1024.
We define the computational cost of generating one token for an input of length 128 as one unit F.
During instruction fine-tuning, the input and output lengths are 128 and 1024, respectively. The
backward FLOPs are approximately twice the forward FLOPs. For one data sample, the training
FLOPs can be estimated as:
1024F × 3 = 3072F
FLOPs calculations are based on calflops (2024), where F = 1.92T FLOPs.
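As a quick sanity check, converting the MF totals in Table 8 into absolute FLOPs with F = 1.92T FLOPs reproduces the estimates listed above; the short script below illustrates the arithmetic.

```python
F = 1.92e12  # FLOPs to generate one token for a 128-token input (calflops, 2024)

def total_flops(mega_f):
    """Convert a Table 8 total expressed in MF (millions of F) into absolute FLOPs."""
    return mega_f * 1e6 * F

print(f"Self-Instruct: {total_flops(70):.2e} FLOPs")    # ~1.34e+20
print(f"LLM2LLM:       {total_flops(120):.2e} FLOPs")   # ~2.30e+20
print(f"Self-Reward:   {total_flops(1100):.2e} FLOPs")  # ~2.11e+21
print(f"1B proxy:      {total_flops(100):.2e} FLOPs")   # ~1.92e+20
```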
Table 8: FLOPs Computation Table for Different Methods

Method | Step | FLOPs (F)
Self-Reward | Synthesize 10K instructions from seed | 16F × 128 × 10K = 20480KF
Self-Reward | Synthesize 4 responses per instruction | 40K × 1024F = 40960KF
Self-Reward | Generate 3 judgments per response | 40K × 8F × 1024 × 3 = 983040KF
Self-Reward | Train with 10K pairs using DPO | DPO_10K
Self-Reward | Synthesize 2K instruction-response-judge sets | (16F × 128 + F × 1024 + 8F × 1024) × 2K = 22528KF
Self-Reward | Perform SFT on student | SFT_2K
Self-Reward | Total | ≈ 1100MF + DPO_10K
LLM2LLM | Synthesize 10K instructions from seed | 16F × 128 × 10K = 20480KF
LLM2LLM | Generate 1 response per instruction | 10240KF
LLM2LLM | Student responds to each instruction | F × 1024 × 10K = 10240KF
LLM2LLM | Resynthesize 10K instructions | 20480KF
LLM2LLM | Generate 1 response per instruction | 10240KF
LLM2LLM | Perform SFT on student | SFT_10K
LLM2LLM | Total | ≈ 120MF
Montessori-Instruct | Synthesize 1K instructions from seed | 2048KF
Montessori-Instruct | Generate 1 response per instruction | 1024KF
Montessori-Instruct | Train student with each instruction | SFT_10K
Montessori-Instruct | Evaluate trained student on validation set | 1KF × 1024 × 256 = 262144KF
Montessori-Instruct | Perform DPO updates on teacher with 6K samples | DPO_1K
Montessori-Instruct | Resynthesize 10K instructions | 20480KF
Montessori-Instruct | Generate 1 response per instruction | 10240KF
Montessori-Instruct | Perform SFT on student | SFT_10K
Montessori-Instruct | Total | ≈ 340MF + DPO_1K
Montessori-Instruct | Use a 1B model for probing data | ≈ 100MF + DPO_1K
Self-Instruct | Synthesize 1K instructions from seed | 2048KF
Self-Instruct | Generate 1 response per instruction | 1024KF
Self-Instruct | Perform SFT on student | SFT_10K
Self-Instruct | Total | ≈ 70MF
F EXPERIMENTS UNDER THE SELF-EVOLVE SETTING
In our primary experiment, we leveraged a teacher model to generate tailored synthetic data aimed at
enhancing the capabilities of a different student model. Here, we shift our focus to explore whether
LLMs can harness synthetic data generated by themselves to achieve self-improvement—a
paradigm we term the “Self-Evolve” setting. To investigate this, we adapt our Montessori-Instruct
framework by aligning the student model with the teacher model. Starting from an identical check-
point, the model generates synthetic data for itself, employing influence scores to identify the most
beneficial and tailored samples, and subsequently performs Direct Preference Optimization on itself.
Notably, the fine-tuning process begins anew from the initial checkpoint, rather than building upon
a post-DPO state. We evaluate this paradigm using both Llama3-8B-Instruct, an instruction-tuned
model, and Llama3-8B, its pretrained version, to assess the potential of self-improvement. The results
are presented in Table 9.
Our findings reveal that Llama3-8B-Instruct achieves superior performance across all benchmarks
under the self-evolve setting, exhibiting a consistent upward trend in capability. Remarkably, even
the non-instruction-tuned Llama3-8B demonstrates self-improvement at the 8B parameter scale.
However, while Llama3-8B exhibits gains with each iteration, the rate of improvement diminishes
over time. This suggests that the pretrained model struggles to surpass its instruction-tuned version
Table 9: Self-improvement performance of Llama3 models across different iterations. The Winning
Rate (WR) and Length-Control Winning Rate (LC-WR) are compared to the Llama3-8B-Instruct
model. The best performances are marked in bold.
Methods | Alpaca Eval 2.0 WR | Alpaca Eval 2.0 LC-WR | MT-Bench Score
Llama3-8B-Instruct | 50.00% | 50.00% | 7.472
Llama3-8B (No fine-tuning) | 2.09% | 3.39% | 5.597

Teacher=Student=Llama3-8B
Iteration 1 | 26.76% | 26.53% | 6.224
Iteration 2 | 35.42% | 34.76% | 6.308
Iteration 3 | 39.84% | 38.12% | 6.386

Teacher=Student=Llama3-8B-Instruct
Iteration 1 | 53.74% | 52.51% | 7.563
Iteration 2 | 56.78% | 54.84% | 7.595
Iteration 3 | 58.62% | 56.12% | 7.611
through self-evolution alone at this stage. We attribute this limitation to the suboptimal quality and
restricted diversity of the synthetic data produced by the models themselves. Shumailov et al. (2023b)
reveals that the perplexity of synthetic training data tends to converge toward a low-value range after
multiple iterations, offering diminishing returns in terms of novel and beneficial information for
model enhancement. We hope that future research will devise innovative strategies to bridge the gap
between synthetic and organic data, unlocking the full potential of self-evolving LLMs.