] | Published as a conference paper at ICLR 2025
WILDBENCH: BENCHMARKING LLMS WITH
CHALLENGING TASKS FROM REAL USERS IN THE WILD
Bill Yuchen Lin♡♢
Yuntian Deng♡ Khyathi Chandu♡ Faeze Brahman♡ Abhilasha Ravichander♡
Valentina Pyatkin♡ Nouha Dziri♡ Ronan Le Bras♡ Yejin Choi♡♢
♡Allen Institute for AI
♢University of Washington
https://hf.co/spaces/allenai/WildBench
ABSTRACT
We introduce WildBench, an automated evaluation framework designed to bench-
mark large language models (LLMs) using challenging, real-world user queries.
WILDBENCH consists of 1,024 examples carefully selected from over one million
human-chatbot conversation logs. For automated evaluation with WILDBENCH,
we have developed two metrics, WB-Reward and WB-Score, which are computable
using advanced LLMs such as GPT-4-turbo. WILDBENCH evaluation uses task-
specific checklists to evaluate model outputs systematically and provides structured
explanations that justify the scores and comparisons, resulting in more reliable
and interpretable automatic judgments. WB-Reward employs fine-grained pair-
wise comparisons between model responses, generating five potential outcomes:
much better, slightly better, slightly worse, much worse, or a tie. Unlike previous
evaluations that employed a single baseline model, we selected three baseline mod-
els at varying performance levels to ensure a comprehensive pairwise evaluation.
Additionally, we propose a simple method to mitigate length bias by converting
outcomes of “slightly better/worse” to “tie” if the winner’s response exceeds the
loser’s by more than K characters. WB-Score evaluates the quality of model
outputs individually, making it a fast and cost-efficient evaluation metric. WILD-
BENCH results demonstrate a strong correlation with the human-voted Elo ratings
from Chatbot Arena on hard tasks. Specifically, WB-Reward achieves a Pearson
correlation of 0.98 with top-ranking models. Additionally, WB-Score reaches 0.95,
surpassing both ArenaHard’s 0.91 and AlpacaEval2.0’s 0.89 for length-controlled
win rates, as well as the 0.87 for regular win rates.
1 INTRODUCTION
Large language models (LLMs) have become integral to a wide range of real-world applications
due to their strong generalization capabilities across diverse tasks. However, effectively evaluating
their performance remains a challenging problem, particularly when striving for an automated and
cost-effective solution. Traditional benchmarking datasets like MMLU (Hendrycks et al., 2020) focus
primarily on assessing the reasoning abilities of LLMs using multiple-choice questions, which fall
short in evaluating the more open-ended problems that real-world users pose. Chatbot Arena (Chiang
et al., 2024) provides an online platform where human preferences are collected to judge pairs of
model outputs, subsequently ranking LLMs using Elo ratings. While this human-based evaluation
method offers valuable insights into user preferences, it has notable limitations, such as high labor
costs, the inability to deliver real-time results, a lack of data transparency, and the challenge of fairly
evaluating all models with the same data.
Several automated benchmarks such as AlpacaEval (Li et al., 2023b), MT-bench (Zheng et al., 2024),
and ArenaHard (Li et al., 2024) employ advanced LLMs like GPT-4-Turbo to assess the quality of
model responses. Comparative analyses of these benchmarks are presented in Table 1 and Figure 3.
These existing benchmarks exhibit significant shortcomings in task composition and skill coverage,
particularly in mirroring the natural distribution of real-world user tasks. MT-bench, comprising
Figure 1: Example tasks sampled from AlpacaEval (Li et al., 2023b) and WILDBENCH. Tasks
in WILDBENCH, collected from real users in the wild, are more diverse and challenging.
Complex real-user tasks usually have multiple constraints and require higher-order reasoning skills,
which are well represented in WILDBENCH.
only 80 hand-crafted examples, lacks sufficient breadth for a comprehensive evaluation. Meanwhile,
AlpacaEval, with 805 tasks derived from multiple alignment datasets, includes relatively simple tasks,
such as “What is the capital of Australia?” and suffers from low task diversity; for instance, over 20
tasks redundantly assess recipe generation skills (e.g., “can you provide a recipe for ...?”). We show a
few examples in Figure 1 to illustrate the differences between AlpacaEval and our WILDBENCH.
AlpacaEval mostly focuses on information-seeking tasks, containing merely 6% coding and 3%
mathematics tasks. Conversely, ArenaHard, sampling 500 tasks from ChatbotArena, displays an
excessive concentration on coding and debugging tasks, accounting for over 57% of its content. Most
existing benchmarks do not sufficiently challenge the models with the varied and unexpected nature
of user inquiries in practical settings, thus limiting their overall effectiveness in providing a holistic
evaluation. This issue highlights the necessity for more comprehensive benchmarks that can better
simulate the wide range of tasks from real users.
In this paper, we introduce WILDBENCH, an automated evaluation framework designed for assessing
LLMs using complex tasks from real-world users. The examples in WILDBENCH are periodically
updated, with the current version (V2) comprising 1,024 tasks carefully curated from real user-chatbot
dialogs provided by AI2's WildChat project (Zhao et al., 2024). We engage multiple advanced
LLMs to process a filtered selection from WildChat, tasking them with the analysis of the requisite
knowledge and skills for each task and subsequently labeling the difficulty level. Tasks considered as
easy by all models are excluded. We ensure the distribution of tasks mirrors the original WildChat
data, such that the task distribution of WILDBENCH is still natural (Figure 3). Additionally, all
finalized tasks undergo manual review. Further details are provided in Section 2.
As shown in Figure 1, WILDBENCH presents a significantly harder challenge due to the complexity,
depth, and realism of the tasks involved. WILDBENCH is sourced from real-world user interactions
and has been carefully curated to ensure diversity and challenge. The tasks in WILDBENCH typically
demand higher-order reasoning, such as writing and/or debugging code with specific constraints,
creative writing with multiple constraints on the style and content, or designing a software system
with complex requirements. These tasks often require critical thinking, creativity, and technical
expertise, making WILDBENCH substantially more challenging than AlpacaEval, where simpler,
factual, or surface-level tasks dominate.
WILDBENCH evaluation is illustrated in Figure 4. To design a reliable automatic evaluation, we
employ two key designs for using LLMs as judges. Drawing inspiration from how humans evaluate
responses to open-ended questions, we develop task-specific checklists. These checklists guide LLMs
in generating consistent and reliable judgments, with each checklist comprising questions focused on
specific criteria. Similar to the zero-shot Chain-of-Thoughts (CoT) prompting (Kojima et al., 2022),
we prompt LLMs to provide step-by-step, structured analyses of each LLM response. This method
encourages a detailed, fine-grained evaluation process, culminating in a well-justified final decision.
We employ two primary metrics: WB-Reward for pairwise comparisons and WB-Score for individual
scoring. WB-Reward is based on pairwise comparisons between LLMs, with five possible outcomes:
“A is much/slightly better/worse than B” or “Tie.” Notably, we used three baseline models to compare
with each testing model instead of using a single baseline model, as most prior works do. This
approach provides a more comprehensive assessment based on different levels of model performance.
WB-Score measures the quality of each model’s generation individually, offering a quicker and
more cost-effective evaluation. To mitigate the bias towards longer outputs, a common issue in
LLM-as-a-judge evaluations (Dubois et al., 2024), we introduced a simple length-penalty method,
converting slight wins/losses to ties when the winner’s output is significantly longer than the loser’s.
Both metrics have demonstrated strong correlations with human judgments, evidenced by a Pearson
correlation of 0.98 for WB-Reward and 0.95 for WB-Score against the human-voted Elo rating from
Chatbot Arena on the top-ranking models. These scores significantly surpass other benchmarks,
such as ArenaHard(Li et al., 2024)’s 0.91 and AlpacaEval2.0’s 0.87 (0.89 for the length-controlled
version) (Li et al., 2023b; Dubois et al., 2024), validating WILDBENCH’s effectiveness and alignment
with human-based evaluation. More details are shown in Table 3 in Section 4.
2 WILDBENCH DATA CURATION
In this section, we describe the data curation process for the tasks used to evaluate LLMs in WILD-
BENCH. Our goal is to ensure that the selected tasks not only represent real-world use cases but are
also challenging enough to distinguish the varying capabilities of LLMs.
Table 1: Statistical comparison of LLM alignment benchmarks. Lengths are in characters.

| Dataset | #Tasks | #Turns | ChatHistory | QueryLen | PromptLen | RealUser | TaskTag | Evaluation |
|---|---|---|---|---|---|---|---|---|
| MT-Bench | 80 | 2 | ✓ Static | 202.2 | Dynamic | ✗ | ✓ | Score |
| AlpacaEval | 805 | 1 | ✗ | 164.9 | 164.9 | ✗ | ✗ | Pair (ref=1) |
| ArenaHard | 500 | 1 | ✗ | 406.4 | 406.4 | ✓ | ✗ | Pair (ref=1) |
| WILDBENCH | 1,024 | ≤5 | ✓ Dynamic | 978.5 | 3402.1 | ✓ | ✓ | Score+Pair (ref=3) |
Figure 2: Distribution of query lengths in AlpacaEval, ArenaHard, and WildBench.
2.1 MINING CHALLENGING TASKS FROM WILDCHAT
We sourced tasks from the WildChat dataset (Zhao et al., 2024), which comprises one million
human-chatbot conversations from real users. This dataset is particularly suited for conversion into an
evaluation benchmark because it contains a diverse array of tasks that users expect LLMs to perform,
such as writing assistance, coding, mathematics, data analysis, role playing, and planning.
Basic filtering. To control the quality and diversity of the selected tasks, we applied several filtering
steps. First, we removed user queries that were either too short (less than 10 tokens) or excessively
long (more than 3,000 tokens). We also excluded conversations with more than five user-chatbot
turns to maintain focus and coherence in the tasks, as conversations exceeding five turns tend to
contain multiple topics. Furthermore, we focused on English data and filtered out non-English tasks.
Since our focus is more on evaluating the capabilities of LLMs rather than content moderation, we
also removed toxic conversations. To ensure task diversity, we used sentence embeddings from
SentenceBERT (Reimers & Gurevych, 2019) to calculate the cosine similarity between queries,
discarding those with a high similarity score above 0.9. The threshold is determined by manual
inspection. Lastly, to further enhance task diversity, we used a diverse user pool by retaining only the
last conversation for each unique device, thus removing tasks from the same user that might require
similar underlying skills.
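As a concrete illustration of the similarity-based filtering described above, the sketch below assumes a list of query strings and uses the sentence-transformers library; the specific model checkpoint and the greedy keep-or-drop loop are our assumptions rather than the authors' released pipeline (only the 0.9 threshold comes from the text).

```python
from sentence_transformers import SentenceTransformer, util

def deduplicate_queries(queries, threshold=0.9):
    """Drop queries whose cosine similarity to an already-kept query exceeds the threshold."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed SentenceBERT checkpoint
    embeddings = model.encode(queries, convert_to_tensor=True, normalize_embeddings=True)
    kept = []
    for i in range(len(queries)):
        # Keep a query only if it is not too similar to any query kept so far.
        too_similar = any(util.cos_sim(embeddings[i], embeddings[j]).item() > threshold for j in kept)
        if not too_similar:
            kept.append(i)
    return [queries[i] for i in kept]
```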
Difficulty annotation. To identify challenging tasks that can distinguish the performance of different
LLMs, we used GPT-4-Turbo (OpenAI, 2023), Claude-3-Sonnet, and Claude-3-Opus (Anthropic, 2024) to
Figure 3: Distribution of task categories in AlpacaEval, ArenaHard, and WildBench.
analyze the required background knowledge and reasoning capabilities for each task. These models
assigned a difficulty rating on a five-point scale (from “very easy” to “very hard”). Tasks rated as
“very easy” or “easy” by all models were excluded. From the remaining pool, we randomly sampled
1,500 tasks to ensure that the distribution of task categories is similar to the original dataset.
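A minimal sketch of the difficulty filter described above, assuming each task carries one rating string per annotator model; the data structures and label spellings are illustrative assumptions.

```python
# A task is excluded only when every annotator model rates it "very easy" or "easy".
EASY_LEVELS = {"very easy", "easy"}

def keep_task(ratings_by_model: dict[str, str]) -> bool:
    """ratings_by_model maps an annotator LLM name to its difficulty label."""
    return not all(rating in EASY_LEVELS for rating in ratings_by_model.values())

# Example: the annotators disagree, so the task is kept.
print(keep_task({"gpt-4-turbo": "easy", "claude-3-opus": "medium", "claude-3-sonnet": "easy"}))  # True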
Human annotation. To improve the quality of selected tasks, human annotation was used for quality
control. We first used GPT-4-Turbo to summarize the intent of each query. These summaries were
then used to help human reviewers remove nonsensical tasks. Finally, we retained 1,024 tasks for
WILDBENCH. We also manually reviewed the tasks to ensure that they were challenging and diverse,
covering a wide range of task categories. For the checklist questions, we verified that they were clear,
interpretable, and relevant to the evaluation of LLM responses.
Dynamic updates and data leakage prevention. WILDBENCH is designed to be a dynamic
benchmark that is updated regularly to reflect new types of user interactions. In fact, we have already
released two versions of the benchmark (V1 in March 2024 and V2 in May 2024), with a similar
curation process but on different iterations of WildChat data. To prevent potential data leakage for
LLMs that use WildChat as part of their training or alignment, we coordinated with the WildChat
team to ensure that the tasks we sample will not be publicly available in the WildChat dataset.
2.2 WILDBENCH STATISTICS
To better understand the composition of our evaluation, we analyze basic statistics and task categories.
Basic statistics. Table 1 compares the statistics of WILDBENCH to existing benchmarks AlpacaE-
val (Li et al., 2023b; Dubois et al., 2024), MT-Bench (Zheng et al., 2024), and ArenaHard (Li et al.,
2024). Among these benchmarks, only ArenaHard and WILDBENCH are sourced from user queries in
the wild (“RealUser”), rather than being curated by experts or through crowdsourcing. The difference
between ArenaHard and our WildBench is that our data distribution aligns with real users’ task
categories, rather than overly focusing on coding and debugging as ArenaHard does.
Long-context tasks. WILDBENCH includes conversation histories of up to four turns per conversa-
tion, reflecting complex and extended user interactions that are facilitated by recent advancements in
LLMs, with over 20% of conversations having two or more turns, as shown in Figure 8. Ad-
ditionally, as shown in Figure 2, WILDBENCH has longer query lengths, attributable to the extensive
context provided by real user interactions captured in the dataset. This is because GPT-4-Turbo,
one of the chatbots behind WildChat, supports up to 128K context tokens and 4K output tokens.
This capability exemplifies the importance of a dynamic, in-the-wild benchmark: as models evolve,
they unlock new user applications. Thanks to these realistic user activities, WILDBENCH is a more
suitable benchmark for testing the long-context problem solving abilities of LLMs.
Task categories. To enable a fine-grained analysis of LLM capabilities across varied tasks, we
categorize the tasks into 12 categories based on previous analysis of ShareGPT queries (Ouyang et al.,
2023) and our intent annotation of the tasks. Detailed descriptions about the 12 task categories are
shown in Appendix A. The distribution of the task categories is shown in Figure 3. In this figure, we
also compare to AlpacaEval and ArenaHard. Notably, WILDBENCH is more balanced compared to
AlpacaEval and ArenaHard, which have over 50% of their tasks in Information seeking and Coding
& Debugging categories, respectively.
Figure 4: Evaluation framework for WILDBENCH. There are two metrics: WB-Score for individual
evaluation and WB-Reward for pairwise evaluation. The checklist is used to guide the evaluation
process. The length penalty is used to mitigate the length bias. WB-Reward and WB-Score both have
strong correlations with human-based ranking of LLMs on Chatbot Arena.
3 AUTOMATIC EVALUATION WITH WILDBENCH
In this section, we introduce the evaluation process of LLMs using WILDBENCH. We first explain
how we generate a checklist for each test query to enhance interpretability and reduce evaluation
ambiguity in WILDBENCH. Then, we introduce two automatic metrics: WILDBENCH-Score and
WILDBENCH-Reward. Finally, we discuss how we mitigate the length bias in the evaluation process.
3.1 INSTANCE-SPECIFIC CHECKLISTS
Powerful LLMs have been widely used as judges to evaluate the quality of LLM outputs in many
automatic evaluation methods, such as AlpacaEval (Li et al., 2023b). However, even asking humans
to judge which of the given two model outputs is better can be subjective and ambiguous. Moreover,
such judgements provide limited information about the quality of the models. Without a constant,
interpretable, and comprehensive evaluation standard, the results can be noisy and hard to interpret.
To address this issue, we generate a checklist for each test query in WILDBENCH to comprehensively
evaluate the responses of different models. The checklist consists of 5-10 questions that are designed
to be interpretable and easy to verify. We combine the responses of GPT-4-Turbo and Claude-3-Opus
to finalize the checklists, thereby mitigating the bias of using a single LLM as the evaluator. These
checklists have been manually reviewed and are used as part of the prompts for LLM judges to
evaluate the responses of different models. An example of the checklist can be found in Figure 4.
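A rough sketch of this checklist-generation step, assuming a hypothetical `ask_llm(model, prompt)` helper that returns a list of candidate questions; the merging and deduplication logic shown here is illustrative rather than the exact pipeline.

```python
def build_checklist(task_query: str, ask_llm) -> list[str]:
    """Combine checklist questions proposed by two judge LLMs and drop exact near-duplicates.
    `ask_llm(model, prompt)` is a hypothetical helper returning a list of question strings."""
    prompt = (
        "Write 5-10 short, verifiable questions for judging a response to this task:\n"
        + task_query
    )
    candidates = ask_llm("gpt-4-turbo", prompt) + ask_llm("claude-3-opus", prompt)
    checklist, seen = [], set()
    for question in candidates:
        key = question.lower().strip().rstrip("?")
        if key not in seen:           # crude exact-match deduplication
            seen.add(key)
            checklist.append(question)
    return checklist[:10]             # keep at most 10 questions, per Section 3.1
```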
Taking the G20 example in Figure 1, here is a subset of checklist questions for the task:
Example checklist for the G20 task example in Figure 1.
✓ Does the essay contain more than 1200 words as requested by the user?
✓ Is the language of the essay beautiful and poetic, incorporating extensive vocabulary as specified?
✓ Does the essay include a significant amount of factual and empirical data related to the impact of the G20 summit on the global economy, trade, and development?
✓ Are there references to the role of young people in shaping the future of the world within the context of the G20 summit?
✓ Does the essay include ancient Indian historical references as requested by the user?
✓ Is the essay structured in a clear and logical manner, facilitating an easy understanding of the discussed topics?
3.2 PAIRWISE EVALUATION WITH WB-REWARD METRIC
WB-Reward is based on pairwise evaluation, which uses a GPT-4-Turbo judge to compare the
responses of two LLMs to determine which one performs better on a given task, using a structured
checklist to guide the comparison. This metric provides straightforward comparisons among models
and the intermediate outcomes of win/lose rates are easy to interpret.
Step-by-step evaluation process. In Figure 4, we detail the step-by-step evaluation process for
pairwise comparison. First, we provide a chain of evaluation questions to guide the LLM judge to
analyze the user query and the conversation history. The LLM then evaluates the two responses and
also analyzes where and why one is better than the other. Finally, we ask the LLM to make a final
judgment on which response is better and why. This method is inspired by the evaluation process in
human evaluation, where human judges are asked to provide detailed feedback on the quality of the
responses before making a final decision. The full evaluation prompt can be found in Appendix D.
WB-Reward metric. To compute the WB-Reward for a test model X against a baseline model Y, we
assign rewards based on the comparison result: +1 if X is much better than Y, +0.5 if X is slightly
better than Y, 0 for a tie, -0.5 for X is slightly worse than Y, and -1 for X is much worse than Y.
Baseline LLMs for pairwise evaluation. Using a single baseline model for pairwise evaluation
can lead to noisy and biased evaluations. To mitigate this issue, we use three baseline models
(GPT-4-Turbo-0409, Claude-3-Haiku, and Llama-2-70B-chat (Touvron et al., 2023)) to compute the
rewards for each model. Our metric WB-Reward (Mix) is the average of the rewards from these three
baselines on 1024 examples, providing a more robust performance evaluation on WILDBENCH.
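The two paragraphs above can be summarized with the following notation, which we introduce here only for clarity (D denotes the 1,024 test tasks and B the three baseline models):

```latex
R_t(X, Y) =
\begin{cases}
+1   & \text{if $X$ is judged much better than $Y$ on task $t$,} \\
+0.5 & \text{if $X$ is judged slightly better than $Y$,} \\
0    & \text{if the judge declares a tie,} \\
-0.5 & \text{if $X$ is judged slightly worse than $Y$,} \\
-1   & \text{if $X$ is judged much worse than $Y$,}
\end{cases}
\qquad
\text{WB-Reward}_{\text{mix}}(X) = \frac{1}{3\,|\mathcal{D}|} \sum_{Y \in \mathcal{B}} \sum_{t \in \mathcal{D}} R_t(X, Y).
```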
Mitigating length bias with a margin for ties. Previous studies have shown that LLM judges tend
to prefer longer responses (Dubois et al., 2024). To mitigate this bias, we propose a simple and
intuitive length penalty method. If the winning response is longer than the losing one by a certain
threshold (K characters), we convert Slightly Win/Slightly Lose to a Tie. K can be customized via our
leaderboard web-page for personalized configuration. Setting K = ∞ will disable the length penalty.
We designed this feature to support a more personalized and flexible leaderboard. For example, users
who prefer shorter and more concise outputs can set a smaller K if they do not prioritize correlating
perfectly with the general human-based model rankings on ChatbotArena. This choice allows for a
customized leaderboard experience depending on user preferences.
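A minimal sketch of the length-penalty rule under the convention that rewards lie in {−1, −0.5, 0, +0.5, +1}; the function and variable names are illustrative, not the released implementation.

```python
def apply_length_penalty(reward: float, len_winner: int, len_loser: int, k: float = 500) -> float:
    """Convert a 'slightly better/worse' outcome into a tie when the winning response
    is longer than the losing one by more than K characters.
    Setting k = float('inf') disables the penalty."""
    if abs(reward) == 0.5 and (len_winner - len_loser) > k:
        return 0.0  # demote a slight win/loss to a tie
    return reward

# Example: a slight win by a response that is 800 characters longer becomes a tie.
print(apply_length_penalty(+0.5, len_winner=3000, len_loser=2200, k=500))  # 0.0
```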
3.3 INDIVIDUAL EVALUATION WITH WB-SCORE METRIC
Although pairwise evaluation provides a direct comparison between LLMs, it is usually more
expensive and time-consuming than grading each individual LLM generation. To individually
evaluate the performance of each model on WILDBENCH, we prompt GPT-4-Turbo to assign a score
from 1 to 10 for each model's response. The full evaluation prompt can be found in Appendix E.
Score definition. To ensure a stable and consistent evaluation, we ask GPT-4-Turbo to evaluate the
quality of each response based on the checklist and provide detailed strengths and weaknesses of each
output before giving a score from 1 to 10. The scores are defined as follows:
• Score 1–2: The response is very poor and does not make sense at all.
• Score 3–4: The response is poor and does not help the user solve the problem meaningfully.
• Score 5–6: The response is fair but has issues (e.g., factual errors, hallucinations, missing key information).
• Score 7–8: The response is good but could be improved.
• Score 9–10: The response is perfect and provides helpful information to solve the problem.
Score rescaling. The WILDBENCH-Score is calculated as the average of the scores on all examples
tested, where each score is first subtracted by 5 and then multiplied by 2 (i.e., S′ = (S − 5) × 2). A
score of 5 represents a borderline acceptable response, so this rescaling can help to better differentiate
the performance of models that can effectively solve the tasks.
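For illustration, the rescaling and averaging can be written as a one-line helper; the raw judge scores below are hypothetical.

```python
def wb_score(raw_scores: list[float]) -> float:
    """Average of per-example scores after rescaling S' = (S - 5) * 2,
    which maps the 1-10 judge scale onto roughly -8 to 10."""
    return sum((s - 5) * 2 for s in raw_scores) / len(raw_scores)

# Example with hypothetical raw judge scores:
print(wb_score([8, 7, 9, 6, 5]))  # 4.0
```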
4 RESULTS & ANALYSIS
We analyze the performance of different models on WILDBENCH. We first present the leader-
board analysis, then examine the length bias issue in the evaluation process, and finally discuss the
correlation between WILDBENCH-Score and ChatbotArena Elo rating.
Leaderboard features. In Table 2, we present a subset of the results from our live leaderboard demo.
For the most up-to-date results and more interactive features, such as customizing length penalties and
viewing the detailed task-wise performance of each model, please refer to our live leaderboard.
Table 2: Evaluation results (subset) of LLMs using WILDBENCH and other benchmarks. The WB-Reward columns are computed without length penalty; ◎ marks the three baseline models used for pairwise comparison. Please refer to Figure 6-7 and the demo website to view and interact with the full results.

| Rank | Model | WB-Reward Mix | vs ◎GPT4T | vs ◎Haiku | vs ◎Llama2 | WB-Score | Arena Elo | Arena-Hard | AlpacaEval2 LC | AlpacaEval2 WR |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | GPT-4o-0513 | 35.7 | 1.5 | 46.3 | 59.3 | 65.3 | 1293 | - | 57.5 | 51.3 |
| 2 | ◎ GPT-4-Turbo-0409 | 34.6 | 0 | 45.3 | 58.4 | 64.7 | 1251 | 82.6 | 55.0 | 46.1 |
| 3 | GPT-4-Turbo-0125 | 29.9 | -4.4 | 38.8 | 55.2 | 63.3 | 1239 | 78.0 | - | - |
| 4 | Gemini-1.5-Pro | 27.8 | -4.4 | 37.9 | 50 | 55.7 | - | - | - | - |
| 5 | Llama-3-70B-Inst | 21 | -19 | 31.9 | 50.2 | 60.4 | 1213 | 41.1 | 34.4 | 33.2 |
| 6 | Claude 3 Opus | 20.1 | -20.4 | 34.3 | 46.3 | 63.1 | 1232 | 60.4 | 40.5 | 29.1 |
| 7 | Gemini-1.5-Flash | 17.4 | -16.6 | 26.3 | 42.5 | 53.1 | - | - | - | - |
| 8 | Yi-1.5-34B-Chat | 16.8 | -18.3 | 24.1 | 44.5 | 57.8 | - | - | - | - |
| 10 | Llama3-Inst-8B-SimPO | 14 | -22.5 | 18.9 | 45.7 | 53.9 | - | 33.8 | 44.7 | 40.5 |
| 13 | Claude 3 Sonnet | 7.2 | -31.6 | 19.4 | 33.9 | 55.5 | 1187 | 46.8 | 34.9 | 25.6 |
| 14 | Qwen1.5-72B-Chat | 4.4 | -34.8 | 13.1 | 34.7 | 56.5 | 1143 | 36.1 | 36.6 | 26.5 |
| 17 | Command-R-Plus | 0.4 | -36.3 | 7.4 | 30.2 | 51.4 | 1155 | 33.1 | - | - |
| 20 | ◎ Claude 3 Haiku | -8.5 | -46.9 | 0 | 21.4 | 50.4 | 1169 | 41.5 | - | - |
| 21 | Mistral-Large | -10.5 | -48.1 | -4 | 20.5 | 54.2 | 1158 | 37.7 | 32.7 | 21.4 |
| 23 | StarlingLM-7B-beta | -11.9 | -48.7 | -5 | 18 | 46.8 | 1111 | 23.0 | - | - |
| 24 | Llama-3-8B-Inst | -14.6 | -49.8 | -9.7 | 15.7 | 45.7 | 1144 | 20.6 | 22.9 | 22.6 |
| 25 | Command-R | -16 | -48.4 | -12.7 | 13.1 | 45.7 | 1106 | 17.0 | - | - |
| 26 | Mixtral-8x7B-Inst | -18.8 | -53.4 | -13.5 | 10.4 | 47.8 | 1114 | 23.4 | 23.7 | 18.3 |
| 27 | DBRX Inst | -21.6 | -57.3 | -16.3 | 8.7 | 48.9 | 1106 | 23.9 | 25.4 | 18.4 |
| 29 | Yi-1.5-6B-Chat | -24.3 | -55 | -19.9 | 2.1 | 39.6 | - | - | - | - |
| 30 | Mistral-7B-Inst-v0.2 | -25 | -58.1 | -22.4 | 5.5 | 43.4 | 1071 | - | 17.1 | 14.7 |
| 32 | Tulu-2-dpo-70b | -25.4 | -59.3 | -20.3 | 3.3 | 45.2 | 1099 | 15.0 | 21.2 | 16.0 |
| 33 | ◎ Llama-2-70B-chat | -26.8 | -56.9 | -23.6 | 0 | 39.2 | 1070 | 11.6 | 14.7 | 13.9 |
| 34 | Qwen1.5-7B-Chat | -27 | -57.7 | -23 | -0.2 | 40 | 1059 | - | 14.7 | 11.8 |
| 35 | Phi-3-medium-128k | -33.3 | -66.4 | -30 | -3.6 | 42.1 | - | - | - | - |
| 36 | GPT-3.5-turbo-0125 | -33.5 | -66.3 | -30 | -4.1 | 42.1 | 1105 | 23.3 | - | - |
| 38 | Llama-2-7B-chat | -48 | -71.8 | -44.6 | -27.8 | 27.6 | 1012 | 4.6 | 5.4 | 5.0 |
| 39 | Gemma-7B-it | -57 | -78.4 | -55.8 | -36.8 | 23.9 | 1047 | 7.5 | 10.4 | 6.9 |
| 40 | Gemma-2B-it | -74.1 | -87.8 | -73.6 | -60.8 | 6.2 | 980 | 3.0 | 5.4 | 3.4 |
Our live leaderboard also supports exploring data and comparing model outputs side by side to understand
the strengths and weaknesses of each model.
By using three baseline models of varying performance levels (GPT-4-Turbo > Claude 3 Haiku >
Llama-2-70B-chat), we observe that the tested models can be naturally grouped into three tiers based
on their performance. Tier 1 models outperform Claude 3 Haiku, Tier 2 models outperform Llama-2-
70B-chat but are worse than Claude 3 Haiku, and Tier 3 models are worse than Llama-2-70B-chat.
4.1 LEADERBOARD ANALYSIS
Where are the gaps between models? A unique feature of the WILDBENCH leaderboard is the
ability to compare models across different task categories, which enables us to identify the strengths
and weaknesses of each model on different types of tasks. In Figure 5, we select a set of popular
models for analysis: Llama-3-8B-Inst (Meta, 2023), Llama-3-8B-Inst-SimPO (Meng et al., 2024b),
Yi-1.5-34B-chat (AI et al., 2024), Llama-3-70B-Inst, GPT-4-Turbo-0409, and Claude 3 Opus. We
show their performance in WB-Score across five task categories (merged from the 12 categories shown
in Figure 3). Larger models like GPT-4-Turbo-0409 and Claude 3 Opus perform well across all task
categories, while open LLMs like Llama-3-8B-Inst and Yi-1.5-34B-chat show weaker performance
on coding and math-related tasks.
Will an 8B model outperform a 70B model? On the AlpacaEval-2.0 leaderboard, Llama-3-8B-
Inst-SimPO (LC=44.7%) significantly outperforms Llama-3-70B-Inst (LC=34.4%) (Meng et al.,
2024a), which is surprising and differs from our results. As shown in both Table 2 and Figure 5, our
results indicate that Llama-3-8B-Inst-SimPO is generally still worse than Yi-34B-chat and Llama-3-
70B-Inst. However, on information-seeking and creative tasks, Llama-3-8B-Inst-SimPO performs
comparably to Llama-3-70B-Inst. Thus, we believe AlpacaEval’s evaluation results underestimate
the performance of Llama-3-70B-Inst due to task selection bias in addition to the weakness of their
evaluation prompting method.
Table 3: Correlation with Chatbot Arena Elo (Hard-En-240520) of alignment benchmarks.

| Metric | P-Cor (top) | P-Cor (all) | S-Cor (all) | K-Cor (all) |
|---|---|---|---|---|
| Arena Elo (Hard-En) | 1.000 | 1.000 | 1.000 | 1.000 |
| Arena-Hard | 0.909 | 0.925 | 0.965 | 0.890 |
| AlpacaEval2-LC | 0.892 | 0.951 | 0.924 | 0.818 |
| AlpacaEval2 | 0.865 | 0.952 | 0.960 | 0.868 |
| WB-Score | 0.955 | 0.940 | 0.943 | 0.846 |
| WB-Reward (mix, K=∞) | 0.984 | 0.973 | 0.978 | 0.912 |
| WB-Reward (mix, K=500) | 0.984 | 0.976 | 0.974 | 0.912 |

| Metric | P-Cor (top) | P-Cor (all) | S-Cor (all) |
|---|---|---|---|
| Avg Length | 0.472 | 0.554 | 0.376 |
| WB-Reward (vs Llama2, K=∞) | 0.976 | 0.965 | 0.965 |
| WB-Reward (vs GPT4T, K=∞) | 0.974 | 0.961 | 0.965 |
| WB-Reward (vs Haiku, K=∞) | 0.985 | 0.974 | 0.982 |
| WB-Reward (vs Llama2, K=500) | 0.977 | 0.969 | 0.961 |
| WB-Reward (vs GPT4T, K=500) | 0.992 | 0.973 | 0.969 |
| WB-Reward (vs Haiku, K=500) | 0.973 | 0.976 | 0.974 |
While the performance of Llama-3-8B-Inst-SimPO is not as good as it seems on AlpacaEval-2.0, it is indeed the best 8B model in our evaluation and outperforms some
other larger models. Interestingly, Llama-3-8B-Inst-SimPO consistently improves the performance of
Llama-3-8B-Inst on all task categories, resulting in a similar shape on the radar plot in Figure 5.
Are longer responses always better? WILD-
BENCH is robust to length bias. For example,
Llama-2-70B-chat and Llama-3-70B-Inst have
similar output lengths (2,965 vs 2,983 chars),
yet Llama-3-70B-Inst ranks 5th while Llama-2-
70B-chat ranks 33rd on the leaderboard of 40
models. Additionally, Yi-1.5-6B’s output length
is the 4th longest among the 40 models (3,322
characters), but it ranks 29th on the leaderboard.
This suggests that the WILDBENCH evaluation
is not biased towards longer responses, with re-
sponse quality being the most important factor
in the evaluation process. Additionally, we use
a length penalty to ensure that longer responses
are not always favored, and users can customize
the length penalty to adjust the trade-off be-
tween response length and quality according to
their needs. This feature is available on our live
leaderboard and is illustrated in Figure 6.
4.2 CORRELATION TO HUMAN JUDGMENT
Figure 5: Performance breakdown by task category
of 6 models on WILDBENCH.
To analyze how well WILDBENCH evaluation
correlates with human judgment, we compare our results to the ChatbotArena Elo rating generated
by large-scale online human evaluations. Focusing on hard prompts, we use the Elo ratings from the
Hard-English version released on May 20, 2024.
We compare our WB-Reward and WB-Score with three other metrics: AlpacaEval winrate (WR),
length-controlled winrate (LC), and ArenaHard scores. We use three correlation metrics: Pearson
correlation (P-Cor), Spearman correlation (S-Cor), and Kendall’s tau correlation (K-Cor). To ensure
a fair comparison, we consider all models that have all four metrics available in Table 2, which results
in 14 models. To distinguish the top-performing models, we also consider the top 6 models, denoting
their correlation metrics as P-Cortop, and P-Corall respectively. The reason why we care about the
correlation on top-ranking models is that models released in the future are likely to compete with
the top models, so the Pearson correlation in this range is more important from the perspective of
predicting the future application of a metric. The analysis results are shown in Table 3.
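The three correlation measures can be computed directly with SciPy, as sketched below on a few aligned (WB-Score, Arena Elo) pairs taken from Table 2 purely for illustration.

```python
from scipy.stats import pearsonr, spearmanr, kendalltau

# A few aligned scores for the same models, drawn from Table 2 for illustration.
wb_scores = [65.3, 60.4, 55.5, 50.4, 39.2]
arena_elo = [1293, 1213, 1187, 1169, 1070]

p_cor, _ = pearsonr(wb_scores, arena_elo)    # Pearson correlation
s_cor, _ = spearmanr(wb_scores, arena_elo)   # Spearman rank correlation
k_cor, _ = kendalltau(wb_scores, arena_elo)  # Kendall's tau
print(round(p_cor, 3), round(s_cor, 3), round(k_cor, 3))
```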
Both WB-Reward and WB-Score show strong correlations with the human-based Elo rating, par-
ticularly for the top-performing models, achieving the best correlation among all other automatic
metrics. Among the different baseline models for pairwise evaluation, we find that using Haiku as
the baseline model yields the best correlation. These results suggest that the WILDBENCH evaluation
correlates well with human judgment in ranking model performance as an automatic metric.
4.3 ABLATION STUDIES AND DISCUSSIONS
Checklists. In our ablation study on the impact of checklists, we compared model performance
with and without checklists by removing the associated parts from the prompt templates. The
results indicate that incorporating checklists improves the final correlation with human preferences.
Specifically, the WB-Score without checklists achieves a Pearson correlation of 0.905 (for all models),
which is lower than the 0.925 correlation achieved when using checklists.
Length penalties. We experimented with different K (100, 200, 500, 1000, inf) in the length penalty
method. We found that K = 500 is the best choice, as it achieves the highest correlation with human
judgments. This result suggests that the length penalty method is effective in mitigating the length
bias in LLM evaluations.
Do multiple LLMs as judges help? We experimented with using GPT-4, Claude 3 Opus, and
Mistral-Large as LLM judges. These judges produced very similar results, thereby exerting minimal
influence on the final relative ranking of LLMs. To reduce the cost of evaluation and enable a faster
turnaround time, we therefore recommend using a single LLM as a judge in practice. In future versions,
we will explore more efficient ways to use multiple LLMs as judges, for example, by routing
different tasks to the judge LLMs best suited to their strengths.
Data distribution. How do we explain that WildBench has a different distribution compared to
ChatbotArena’s platform but still shows a strong correlation, even better than ArenaHard? The
objective of WildBench is to evaluate LLMs on challenging tasks from real users. The ArenaElo
we use for comparison is derived from the hard-English split in ChatbotArena, where human users
submit tasks and vote. Thus, both WildBench and ChatbotArena aim to address the same goal. While
it is practically impossible to match the exact distribution of users and tasks between the two—given
that WildChat users are anonymous and ChatbotArena does not publicize its data—both are sourced
from real users on the web. Consequently, this represents the best possible approach for correlating
our LLM ratings with human-based ratings.
Two complementary metrics: WB-Reward & WB-Score. Both metrics use checklists and a
CoT-style prompt for evaluation, utilizing the same testing data. The key differences are in their
methodologies: WB-Score: Evaluates each model’s outputs individually on a scale of 1-10, with
detailed explanations for each score (see Appendix); WB-Reward: Compares a model’s outputs
to those of three baseline models at different performance levels for a comprehensive evaluation.
Pairwise evaluations can be coarse, but using three baseline models and refined pairwise choices
(e.g., much better or slightly better) mitigates this. WB-Score provides a universal score comparable
across models using the same evaluation templates and checklists. Additionally, WB-Score is cheaper
and faster to run (10 minutes, $5) compared to WB-Reward, which requires 3-4 times the cost due
to multiple baselines. Both metrics have their strengths and weaknesses. We use both to build our
official leaderboard, allowing users to choose the most suitable metrics for their experiments.
5 RELATED WORKS
Close-ended benchmarks. Close-ended benchmarks typically consist of multiple-choice questions
and have been widely used to evaluate LLMs (The BigBench authors, 2022). For example, MMLU (Hendrycks et al.,
2020) includes multi-choice questions across various subject areas. Its variants include CMMLU (Li
et al., 2023a) for Chinese, KMMLU (Son et al., 2024) for Korean, and MMLU-Pro (Wang et al.,
2024) for more challenging evaluation. GPQA (Rein et al., 2023) is another close-ended benchmark
designed to be challenging even for humans with internet access. Specialized benchmarks with
ground-truth answers, such as GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021), also
fall into this category. While these benchmarks focus on close-form answers, our work evaluates
LLMs’ ability to generate free-form responses and engage in conversations with users.
Expert-curated and crowdsourced data. Several open-ended generation benchmarks rely on data
curated by human experts or crowdsourcing workers. For instance, MT-Bench (Zheng et al., 2024)
manually creates examples for predefined categories. AlpacaEval (Li et al., 2023b) is based on
author-written examples (Dubois et al., 2023; Taori et al., 2023; Wang et al., 2022), which primarily
consists of simple instructions such as rewriting tasks.
In-the-wild data. A key feature of our work is that its underlying data is sourced from real-world
use cases, ensuring alignment with actual LLM use cases. Notable benchmarks using real-world data
include ChatbotArena (Zheng et al., 2024; Chiang et al., 2024), where users input their questions
and choose the better response from two LLMs. However, ChatbotArena relies on extensive human
feedback. WildVision (Lu et al., 2024) is a similar project but designed for vision language models.
ArenaHard (Li et al., 2024) is another work that selects user queries from ChatbotArena to construct
a benchmark for automatic evaluation.
Evaluation methods. Evaluating open-ended generation poses challenges due to the lack of a single
valid ground truth. Human evaluation, though reliable, is expensive and time-consuming. To reduce
costs and enable fast evaluation, powerful LLMs are often used as judges, as seen in benchmarks like
MT-Bench, AlpacaEval, ArenaHard, and our own. Evaluation methods include single-system grading,
which assigns scores to individual outputs, and pairwise comparisons, which compare outputs of two
systems to compute win rates. Pairwise comparisons, while more expensive, can highlight subtle
differences across systems (Zheng et al., 2024). To mitigate self-selection bias where an LLM prefers
its own outputs (Panickssery et al., 2024), we use checklists generated from multiple LLMs, similar
to InfoBench (Qin et al., 2024). In addition, we ask LLM judges to generate structured explanations
that enable human verification for further calibration, inspired by Just-Eval (Lin et al., 2023).
Open-weight local evaluators, such as TIGERScore (Jiang et al., 2023) and Prometheus (Kim et al.,
2024), can also be used to judge model outputs on WILDBENCH.
Data leakage prevention. Publicly available benchmarks risk contamination from LLMs trained on
such data. GPQA includes a special string to help LLM developers filter out its data (Rein et al., 2023),
yet indirect leakage through cited examples remains possible. To mitigate this, we reserve a subset
of WildChat that is never released publicly, which keeps its expert-curated evaluation data private.
However, WILDBENCH provides a public validation set and details the benchmark construction
process for greater transparency.
Other dimensions for evaluation. While our focus is on evaluating LLM capabilities, other
evaluation dimensions, such as safety (Mazeika et al., 2024; Jiang et al., 2024), fairness (Gallegos
et al., 2024), logical reasoning (Lin et al., 2024), agentic planning (Liu et al., 2023; Mialon et al.,
2023; Lin et al., 2022), and hallucination detection (Min et al., 2023; Mishra et al., 2024; Hong et al.,
2024), are equally important.
6 CONCLUSION AND FUTURE DIRECTIONS
In this work, we introduced WILDBENCH, a benchmark designed to evaluate LLMs using real-
world user queries. An important feature of WILDBENCH data is the nature of in-the-wild user
queries with natural task distribution. To evaluate LLM performance using the collected data, we
introduced a CoT-like LLM-as-judge method to improve the interpretability of evaluations and reduce
ambiguity. We also incorporated a length penalty method to mitigate the length bias in LLM-as-judge
evaluations. Experiments show that our primary metrics, WB-Reward and WB-Score, have very
strong correlations with human judgments, surpassing existing evaluations.
We present extensive experiments and analyses, showcasing the performance of a wide range of 40
LLMs, including both proprietary and public ones, on the WILDBENCH benchmark. By providing a
detailed breakdown of scores across different task categories, WILDBENCH offers insights on the
strengths and weaknesses of different models. By introducing WILDBENCH, we aim to provide
a realistic, dynamic, and contamination-resilient evaluation framework that accurately reflects the
capabilities of LLMs. We will actively maintain the project for continually evaluating new LLMs
with unseen tasks over time.
REFERENCES
01.AI: Alex Young, Bei Chen, Chao Li, Chengen Huang, Ge Zhang, Guanwei Zhang, Heng Li, Jiangcheng Zhu, Jianqun Chen, Jing Chang, Kaidong Yu, Peng Liu, Qiang Liu, Shawn Yue, Senbin Yang, Shiming Yang, Tao Yu, Wen Xie, Wenhao Huang, Xiaohui Hu, Xiaoyi Ren, Xinyao Niu, Pengcheng Nie, Yuchi Xu, Yudong Liu, Yue Wang, Yuxuan Cai, Zhenyu Gu, Zhiyuan Liu, and Zonghong Dai. Yi: Open foundation models by 01.ai, 2024.
Anthropic. The claude 3 model family: Opus, sonnet, haiku. https://www-cdn.anthropic.com/de8ba9b01c9ab7cbabf5c33b80b7bbc618857627/Model_Card_Claude_3.pdf, 2024.
The BigBench authors. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. ArXiv, abs/2206.04615, 2022. URL https://api.semanticscholar.org/CorpusID:263625818.
Wei-Lin Chiang, Lianmin Zheng, Ying Sheng, Anastasios Nikolas Angelopoulos, Tianle Li, Dacheng
Li, Hao Zhang, Banghua Zhu, Michael Jordan, Joseph E. Gonzalez, and Ion Stoica. Chatbot arena:
An open platform for evaluating llms by human preference, 2024.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher
Hesse, and John Schulman. Training verifiers to solve math word problems. arXiv preprint
arXiv:2110.14168, 2021.
Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin,
Percy Liang, and Tatsunori B Hashimoto. Alpacafarm: A simulation framework for methods that
learn from human feedback. arXiv preprint arXiv:2305.14387, 2023.
Yann Dubois, Balázs Galambosi, Percy Liang, and Tatsunori B. Hashimoto. Length-controlled
alpacaeval: A simple way to debias automatic evaluators, 2024.
Isabel O. Gallegos, Ryan A. Rossi, Joe Barrow, Md Mehrab Tanjim, Sungchul Kim, Franck Dernon-
court, Tong Yu, Ruiyi Zhang, and Nesreen K. Ahmed. Bias and fairness in large language models:
A survey, 2024.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,
and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv
preprint arXiv:2103.03874, 2021.
Giwon Hong, Aryo Pradipta Gema, Rohit Saxena, Xiaotang Du, Ping Nie, Yu Zhao, Laura Perez-
Beltrachini, Max Ryabinin, Xuanli He, Clémentine Fourrier, and Pasquale Minervini. The hal-
lucinations leaderboard – an open effort to measure hallucinations in large language models,
2024.
Dongfu Jiang, Yishan Li, Ge Zhang, Wenhao Huang, Bill Yuchen Lin, and Wenhu Chen. Tigerscore:
Towards building explainable metric for all text generation tasks. Transactions on Machine
Learning Research, 2023.
Liwei Jiang, Kavel Rao, Seungju Han, Allyson Ettinger, Faeze Brahman, Sachin Kumar, Niloofar
Mireshghallah, Ximing Lu, Maarten Sap, Yejin Choi, and Nouha Dziri. Wildteaming at scale:
From in-the-wild jailbreaks to (adversarially) safer language models. ArXiv, abs/2406.18510, 2024.
URL https://api.semanticscholar.org/CorpusID:270738096.
Seungone Kim, Juyoung Suk, Shayne Longpre, Bill Yuchen Lin, Jamin Shin, Sean Welleck, Graham
Neubig, Moontae Lee, Kyungjae Lee, and Minjoon Seo. Prometheus 2: An open source language
model specialized in evaluating other language models. arXiv preprint arXiv:2405.01535, 2024.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. ArXiv, abs/2205.11916, 2022. URL https://api.semanticscholar.org/CorpusID:249017743.
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, and Timothy
Baldwin. Cmmlu: Measuring massive multitask language understanding in chinese. arXiv preprint
arXiv:2306.09212, 2023a.
Tianle Li, Wei-Lin Chiang, Evan Frick, Lisa Dunlap, Banghua Zhu, Joseph E. Gonzalez, and Ion
Stoica. From live data to high-quality benchmarks: The arena-hard pipeline, April 2024. URL
https://lmsys.org/blog/2024-04-19-arena-hard/.
Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy
Liang, and Tatsunori B. Hashimoto. Alpacaeval: An automatic evaluator of instruction-following
models. https://github.com/tatsu-lab/alpaca_eval, 2023b.
Bill Yuchen Lin, Chengsong Huang, Qian Liu, Wenda Gu, Sam Sommerer, and Xiang Ren. On
grounded planning for embodied tasks with language models. ArXiv, abs/2209.00465, 2022. URL
https://api.semanticscholar.org/CorpusID:251979509.
Bill Yuchen Lin, Abhilasha Ravichander, Ximing Lu, Nouha Dziri, Melanie Sclar, Khyathi Raghavi Chandu, Chandra Bhagavatula, and Yejin Choi. The unlocking spell on base llms: Rethinking alignment via in-context learning. ArXiv, abs/2312.01552, 2023. URL https://api.semanticscholar.org/CorpusID:265608902.
Bill Yuchen Lin, Ronan Le Bras, and Yejin Choi. Zebralogic: Benchmarking the logical reasoning ability of language models, 2024. URL https://hf.co/spaces/allenai/ZebraLogicBench-Leaderboard.
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Yuxian Gu, Hangliang Ding, Kai Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Shengqi Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, and Jie Tang. Agentbench: Evaluating llms as agents. ArXiv, abs/2308.03688, 2023. URL https://api.semanticscholar.org/CorpusID:260682249.
Yujie Lu, Dongfu Jiang, Wenhu Chen, William Yang Wang, Yejin Choi, and Bill Yuchen Lin.
Wildvision: Evaluating vision-language models in the wild with human preferences. arXiv preprint
arXiv:2406.11069, 2024.
Mantas Mazeika, Long Phan, Xuwang Yin, Andy Zou, Zifan Wang, Norman Mu, Elham Sakhaee,
Nathaniel Li, Steven Basart, Bo Li, David Forsyth, and Dan Hendrycks. Harmbench: A standard-
ized evaluation framework for automated red teaming and robust refusal, 2024.
Yu Meng, Mengzhou Xia, and Danqi Chen. Simpo: Simple preference optimization with a reference-free reward. 2024a. URL https://api.semanticscholar.org/CorpusID:269983560.
Yu Meng, Mengzhou Xia, and Danqi Chen. Simpo: Simple preference optimization with a reference-
free reward, 2024b.
Meta. Introducing Meta Llama 3: The most capable openly available LLM to date. https://ai.meta.com/blog/meta-llama-3/, 2023.
Grégoire Mialon, Clémentine Fourrier, Craig Swift, Thomas Wolf, Yann André LeCun, and Thomas
Scialom. Gaia: a benchmark for general ai assistants. ArXiv, abs/2311.12983, 2023. URL
https://api.semanticscholar.org/CorpusID:265351664.
Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer,
Luke Zettlemoyer, and Hannaneh Hajishirzi. Factscore: Fine-grained atomic evaluation of factual
precision in long form text generation. arXiv preprint arXiv:2305.14251, 2023.
Abhika Mishra, Akari Asai, Vidhisha Balachandran, Yizhong Wang, Graham Neubig, Yulia Tsvetkov,
and Hannaneh Hajishirzi. Fine-grained hallucination detection and editing for language models,
2024.
OpenAI. Gpt-4 technical report, 2023.
Siru Ouyang, Shuohang Wang, Yang Liu, Ming Zhong, Yizhu Jiao, Dan Iter, Reid Pryzant, Chenguang
Zhu, Heng Ji, and Jiawei Han. The shifted and the overlooked: A task-oriented investigation
of user-GPT interactions. In The 2023 Conference on Empirical Methods in Natural Language
Processing, 2023. URL https://openreview.net/forum?id=qS1ip2dGH0.
Arjun Panickssery, Samuel R. Bowman, and Shi Feng. Llm evaluators recognize and favor their own
generations, 2024.
Yiwei Qin, Kaiqiang Song, Yebowen Hu, Wenlin Yao, Sangwoo Cho, Xiaoyang Wang, Xuansheng
Wu, Fei Liu, Pengfei Liu, and Dong Yu. Infobench: Evaluating instruction following ability in
large language models, 2024.
Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks,
2019.
David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani,
Julian Michael, and Samuel R. Bowman. Gpqa: A graduate-level google-proof q&a benchmark,
2023.
Guijin Son, Hanwool Lee, Sungdong Kim, Seungone Kim, Niklas Muennighoff, Taekyoon Choi,
Cheonbok Park, Kang Min Yoo, and Stella Biderman. Kmmlu: Measuring massive multitask
language understanding in korean. arXiv preprint arXiv:2402.11548, 2024.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy
Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model.
https://github.com/tatsu-lab/stanford_alpaca, 2023.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cris-
tian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu,
Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn,
Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel
Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee,
Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra,
Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi,
Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh
Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen
Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic,
Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models,
2023.
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and
Hannaneh Hajishirzi. Self-instruct: Aligning language model with self generated instructions.
arXiv preprint arXiv:2212.10560, 2022.
Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming
Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, Tianle Li, Max Ku, Kai Wang, Alex Zhuang, Rongqi
Fan, Xiang Yue, and Wenhu Chen. Mmlu-pro: A more robust and challenging multi-task language
understanding benchmark, 2024.
Wenting Zhao, Xiang Ren, John Frederick Hessel, Claire Cardie, Yejin Choi, and Yuntian Deng. Wildchat: 1m chatgpt interaction logs in the wild. 2024. URL https://api.semanticscholar.org/CorpusID:269390491.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang,
Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and
chatbot arena. Advances in Neural Information Processing Systems, 36, 2024.
Appendix
A TASK CATEGORIES
In Section 2.2 we mentioned that tasks are categorized into 12 categories to enable fine-grained
analysis of LLM capabilities. The definitions of these task categories are as follows.
• Information seeking - Users ask for specific information or facts about various topics.
• Reasoning - Queries require logical thinking, problem-solving, or processing of complex ideas.
• Planning - Users need assistance in creating plans or strategies for activities and projects.
• Editing - Involves editing, rephrasing, proofreading, or other tasks related to the composition of
general written content.
• Coding & Debugging - Users seek help with writing, reviewing, or fixing code in programming.
• Math - Queries related to mathematical concepts, problems, and calculations.
• Role playing - Users engage in scenarios requiring ChatGPT to adopt a character or persona.
• Data Analysis - Requests involve interpreting data, statistics, or performing analytical tasks.
• Creative Writing - Users seek assistance with crafting stories, poems, or other creative texts.
• Advice seeking - Users ask for recommendations or guidance on various personal or professional
issues.
• Brainstorming - Involves generating ideas, creative thinking, or exploring possibilities.
• Others - Any queries that do not fit into the above categories or are of a miscellaneous nature.
We consolidate the original categories into five major groups for easier task-wise analysis. Specifically,
we combine “Information seeking” and “Advice seeking” into “Info Seeking”; “Math” and “Data
Analysis” into “Math & Data”; and “Reasoning” and “Planning” into “Reasoning & Planning.” The
remaining types are grouped under “Creative Tasks.” These consolidated groups are illustrated in
Figure 5.
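This consolidation amounts to a simple lookup table; the sketch below follows the grouping described above, with illustrative dictionary and function names.

```python
# Mapping from the 12 fine-grained categories to the 5 merged groups used in Figure 5.
MERGED_GROUP = {
    "Information seeking": "Info Seeking",
    "Advice seeking": "Info Seeking",
    "Math": "Math & Data",
    "Data Analysis": "Math & Data",
    "Reasoning": "Reasoning & Planning",
    "Planning": "Reasoning & Planning",
    "Coding & Debugging": "Coding & Debugging",
    # Everything else (Editing, Role playing, Creative Writing, Brainstorming, Others)
    # falls back to "Creative Tasks".
}

def merged_group(category: str) -> str:
    return MERGED_GROUP.get(category, "Creative Tasks")
```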
The supplementary zip file contains the source code for the evaluation scripts, the leaderboard, and the data.

Figure 8: Distribution of the number of turns in WildBench.
B MORE INFORMATION ON WILDBENCH DATA
The distribution of the number of turns in WILDBENCH can be found
in Figure 8. The dataset documentation, metadata, and the public sub-
set of WILDBENCH can be found at https://huggingface.co/datasets/allenai/WildBench/viewer/v2. We re-
lease the data under AI2’s ImpACT license as a low-risk artifact,
and we bear all responsibility in case of rights violations. We will
ensure that the dataset will be available for a long time and maintain
the data by continuously updating it.
C MORE INFORMATION ON WILDBENCH EVALUATION
Our evaluation results on the public subset of WILDBENCH can be reproduced using evaluation scripts
available at https://github.com/allenai/WildBench/. We have included generation scripts for each model under the folder https://github.com/allenai/WildBench/tree/main/scripts, and the scripts for evaluating generations can be found at https://github.com/allenai/WildBench/tree/main/evaluation.
D PROMPT TEMPLATE FOR PAIRWISE EVALUATION METRIC WB-REWARD
The prompt template for pairwise evaluation is shown below. It can be divided into three sections: the first section provides the high-level instruction, the task to be tested, and two model outputs; the second section specifies the checklist and the rules; and the last section instructs the LLM judge to follow the step-by-step evaluation process as detailed in Section 3.2.
# Instruction

You are an expert evaluator. Your task is to evaluate the quality of the responses generated by two AI models. We will provide you with the user query and a pair of AI-generated responses (Response A and B). You should first read the user query and the conversation history carefully for analyzing the task, and then evaluate the quality of the responses based on the checklist and rules provided below.

# Conversation between User and AI

## History
<|begin_of_history|>
{$history}
<|end_of_history|>

## Current User Query
<|begin_of_query|>
{$user_query}
<|end_of_query|>

## Response A
<|begin_of_response_A|>
{$candidate_A}
<|end_of_response_A|>

## Response B
<|begin_of_response_B|>
{$candidate_B}
<|end_of_response_B|>

# Evaluation

## Checklist
<|begin_of_checklist|>
{$checklist}
<|end_of_checklist|>
Please use this checklist to guide your evaluation, but do not limit your assessment to the checklist.

## Rules
You should compare the above two responses based on your analysis of the user queries and the conversation history. You should first write down your analysis and the checklist that you used for the evaluation, and then provide your assessment according to the checklist. There are five choices to give your final assessment: ["A++", "A+", "A=B", "B+", "B++"], which correspond to the following meanings:
- `A++`: Response A is much better than Response B.
- `A+`: Response A is only slightly better than Response B.
- `A=B`: Response A and B are of the same quality. Please use this choice sparingly.
- `B+`: Response B is only slightly better than Response A.
- `B++`: Response B is much better than Response A.

## Output Format
First, please output your analysis for each model response, and then summarize your assessment to three aspects: "reason A=B", "reason A>B", and "reason B>A", and finally make your choice for the final assessment.

Please provide your evaluation results in the following json format by filling in the placeholders in []:
```
{
"analysis of A": "[analysis of Response A]",
"analysis of B": "[analysis of Response B]",
"reason of A=B": "[where Response A and B perform equally well]",
"reason of A>B": "[where Response A is better than Response B]",
"reason of B>A": "[where Response B is better than Response A]",
"choice": "[A++ or A+ or A=B or B+ or B++]"
}
```
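For reference, a judge output in the format above can be parsed and mapped to a scalar reward with a few lines of code. The sketch below is illustrative only and is not the released evaluation script; in particular, the +1/+0.5/0/-0.5/-1 reward mapping for the five choices is an assumption.

```python
# Illustrative parser for the pairwise judge output above (not the released scripts).
import json

CHOICE_TO_REWARD = {"A++": 1.0, "A+": 0.5, "A=B": 0.0, "B+": -0.5, "B++": -1.0}  # assumed mapping

def parse_pairwise_judgment(raw_output: str):
    # Strip an optional markdown code fence around the JSON, then parse it.
    payload = raw_output.strip()
    if payload.startswith("```"):
        payload = payload.strip("`")
        if payload.lstrip().lower().startswith("json"):
            payload = payload.lstrip()[4:]
    result = json.loads(payload)
    choice = result["choice"]
    return choice, CHOICE_TO_REWARD[choice]
```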
E PROMPT TEMPLATE FOR INDIVIDUAL EVALUATION METRIC WB-SCORE
The prompt template for individual evaluation is shown below. It can be similarly divided into three
sections: the first section provides the high-level instruction, the task to be tested, and the model
output; the second section specifies the checklist and the rules; and the last section instructs the LLM
judge to follow the step-by-step evaluation process as detailed in Section 3.3.
# Instruction

You are an expert evaluator. Your task is to evaluate the quality of the responses generated by AI models.
We will provide you with the user query and an AI-generated response. You should first read the user query and the conversation history carefully for analyzing the task, and then evaluate the quality of the response based on the checklist and rules provided below.

# Conversation between User and AI

## History
<|begin_of_history|>
{$history}
<|end_of_history|>

## Current User Query
<|begin_of_query|>
{$user_query}
<|end_of_query|>

## AI Response
<|begin_of_response|>
{$model_output}
<|end_of_response|>

# Evaluation

## Checklist
<|begin_of_checklist|>
{$checklist}
<|end_of_checklist|>
Please use this checklist to guide your evaluation, but do not limit your assessment to the checklist.

## Rules
You should compare the above response based on your analysis of the user queries and the conversation history.
You should first write down your analysis and the checklist that you used for the evaluation, and then provide your assessment according to the checklist.
The scores are in the range of 1~10, where 1 means the response is very poor and 10 means the response is perfect.
Here are more detailed criteria for the scores:
- Score 1~2: The response is very poor and does not make sense at all.
- Score 3~4: The response is poor and does not help the user solve the problem in a meaningful way.
- Score 5~6: The response is fair but has some issues (e.g., factual errors, hallucinations, missing key information).
- Score 7~8: The response is good enough but could be improved in some ways.
- Score 9~10: The response is perfect and provides helpful information that can help the user solve the problem.

## Output Format
First, please output your analysis for the model response, and then summarize your assessment to two aspects: "strengths" and "weaknesses"; Finally, please write down your rating for the assessment.

Please provide your evaluation results in the following json format by filling in the placeholders in []:
```
{
"strengths": "[analysis for the strengths of the response]",
"weaknesses": "[analysis for the weaknesses of the response]",
"score": "[1~10]"
}
```
F FULL WILDBENCH LEADERBOARD
The full WILDBENCH leaderboard as of Jun 5, 2024 can be found in Figure 6; the updated leaderboard as of Sept 1, 2024 can be found in Figure 7. Note that we use a new metric named WB-Elo, which merges WB-Reward and WB-Score into a collection of pairwise comparisons and performs Elo rating updates on top of the existing LMSYS Elo ratings, allowing for faster and more stable leaderboard updates. You can view and interact with the latest results on our leaderboard at https://huggingface.co/spaces/allenai/WildBench.
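For intuition, the sketch below shows a standard Elo-style update over a stream of pairwise outcomes seeded from existing ratings; the K-factor and the outcome encoding are our own assumptions rather than the exact WB-Elo configuration.

```python
# Minimal Elo-update sketch for intuition only; K-factor and seeding are assumptions.
def elo_update(ratings: dict, model_a: str, model_b: str, outcome_a: float, k: float = 4.0) -> None:
    # ratings: model name -> current Elo score, seeded from the existing LMSYS Elo ratings.
    # outcome_a: 1.0 if model_a wins, 0.5 for a tie, 0.0 if model_b wins.
    expected_a = 1.0 / (1.0 + 10 ** ((ratings[model_b] - ratings[model_a]) / 400.0))
    ratings[model_a] += k * (outcome_a - expected_a)
    ratings[model_b] += k * ((1.0 - outcome_a) - (1.0 - expected_a))
```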
Figure 6: Leaderboard of WildBench (2024 Jun 5th)
Figure 7: Leaderboard of WildBench (2024 Sept 1st)
lgsyLSsDRe | NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding Models | [8, 8, 6, 8] | Published as a conference paper at ICLR 2025
NV-EMBED: IMPROVED TECHNIQUES FOR TRAINING
LLMS AS GENERALIST EMBEDDING MODELS
Chankyu Lee∗1, Rajarshi Roy1, Mengyao Xu1, Jonathan Raiman1, Mohammad Shoeybi1, Bryan Catanzaro1, Wei Ping∗1
1NVIDIA
ABSTRACT
Decoder-only large language model (LLM)-based embedding models are begin-
ning to outperform BERT or T5-based embedding models in general-purpose text
embedding tasks, including dense vector-based retrieval. In this work, we introduce
the NV-Embed model, incorporating architectural designs, training procedures,
and curated datasets to significantly enhance the performance of LLM as a versatile
embedding model, while maintaining its simplicity and reproducibility. For model
architecture, we propose a latent attention layer to obtain pooled embeddings,
which consistently improves retrieval and downstream task accuracy compared to
mean pooling or using the last <EOS> token embedding from LLMs. To enhance
representation learning, we remove the causal attention mask of LLMs during
contrastive training. For training algorithm, we introduce a two-stage contrastive
instruction-tuning method. It first applies contrastive training with instructions on
retrieval datasets, utilizing in-batch negatives and curated hard negative examples.
At stage-2, it blends various non-retrieval datasets into instruction tuning, which not only
enhances non-retrieval task accuracy but also improves retrieval performance. For
training data, we utilize hard-negative mining, synthetic data generation and
existing publicly available datasets to boost the performance of the embedding model.
By combining these techniques, our NV-Embed-v1 and NV-Embed-v2 models
obtained the No.1 position on the Massive Text Embedding Benchmark (MTEB)
(as of May 24, 2024 and August 30, 2024, respectively) across 56 embedding tasks,
demonstrating the sustained effectiveness of the proposed methods over time. Also,
it achieved the highest scores in the Long Doc section and the second-highest scores
in the QA section of the AIR Benchmark, which covers a range of out-of-domain in-
formation retrieval topics beyond those in MTEB. We further provide the analysis of
model compression techniques for generalist embedding models. We open-source
the model at: https://huggingface.co/nvidia/NV-Embed-v2.
1 INTRODUCTION
Embedding or dense vector representation of text (Mikolov et al., 2013; Devlin et al., 2018) encodes its
semantic information and can be used for many downstream applications, including retrieval, rerank-
ing, classification, clustering, and semantic textual similarity tasks. The embedding-based retriever
is also a critical component for retrieval-augmented generation (RAG) (Lewis et al., 2020), which
allows LLMs to access the most up-to-date external or proprietary knowledge without modifying the
model parameters (Liu et al., 2024; Guu et al., 2020; Shi et al., 2023; Wang et al., 2023a).
The embedding models built on bidirectional language models (Devlin et al., 2018; Raffel et al.,
2020) have dominated the landscape for years (e.g., Reimers & Gurevych, 2019; Gao et al., 2021;
Wang et al., 2022; Izacard et al., 2021; Ni et al., 2021), although one notable exception is Neelakantan
et al. (2022). The recent work by Wang et al. (2023b) demonstrates that decoder-only LLMs can
outperform frontier bidirectional embedding models (Wang et al., 2022; Ni et al., 2021; Chen et al.,
2023) in retrieval and general-purpose embedding tasks.
∗Correspondence to: Chankyu Lee <[email protected]>, Wei Ping <[email protected]>.
Table 1: Top MTEB leaderboard models as of ICLR submission date (2024-10-01). We use the original model names on the leaderboard for clarity.

Model | Retrieval (15) nDCG@10 | Rerank (4) MAP | Cluster. (11) V-Meas. | PairClass. (3) AP | Class. (12) Acc. | STS (10) Spear. | Summ. (1) Spear. | Avg. (56)
NV-Embed-v2 | 62.65 | 60.65 | 58.46 | 88.67 | 90.37 | 84.31 | 30.7 | 72.31
Bge-en-icl (zero shot) | 61.67 | 59.66 | 57.51 | 86.93 | 88.62 | 83.74 | 30.75 | 71.24
Stella-1.5B-v5 | 61.01 | 61.21 | 57.69 | 88.07 | 87.63 | 84.51 | 31.49 | 71.19
SFR-Embedding-2R | 60.18 | 60.14 | 56.17 | 88.07 | 89.05 | 81.26 | 30.71 | 70.31
Gte-Qwen2-7B-instruct | 60.25 | 61.42 | 56.92 | 85.79 | 86.58 | 83.04 | 31.35 | 70.24
NV-Embed-v1 | 59.36 | 60.59 | 52.80 | 86.91 | 87.35 | 82.84 | 31.2 | 69.32
Bge-multilingual-gemma2 | 59.24 | 59.72 | 54.65 | 85.84 | 88.08 | 83.88 | 31.2 | 69.88
Voyage-large-2-instruct | 58.28 | 60.09 | 53.35 | 89.24 | 81.49 | 84.58 | 30.84 | 68.28
SFR-Embedding | 59.00 | 60.64 | 51.67 | 88.54 | 78.33 | 85.05 | 31.16 | 67.56
GritLM-7B | 57.41 | 60.49 | 50.61 | 87.16 | 79.46 | 83.35 | 30.37 | 66.76
E5-mistral-7b-instruct | 56.9 | 60.21 | 50.26 | 88.34 | 78.47 | 84.66 | 31.4 | 66.63
Text-embed-3-large (OpenAI) | 55.44 | 59.16 | 49.01 | 85.72 | 75.45 | 81.73 | 29.92 | 64.59
In this work, we introduce NV-Embed, a generalist embedding model that significantly enhances the
performance of decoder-only LLMs for embedding and retrieval tasks. Specifically, we make the
following contributions:
1. For model architecture, we propose a novel latent attention layer to obtain pooled embeddings for
a sequence of tokens. In contrast to the popular average pooling in bidirectional embedding mod-
els (e.g., Wang et al., 2022) and last <EOS> token embedding in decoder-only LLMs (Neelakantan
et al., 2022; Wang et al., 2023b), our proposed pooling technique consistently improves accuracy of
retrieval and other downstream tasks. To further enhance representation learning, we remove causal
attention mask during contrastive training of decoder-only LLM, resulting in solid improvements.
Our design is simpler yet more effective compared to related work (BehnamGhader et al., 2024;
Muennighoff et al., 2024), which involves an additional training phase with masked token prediction
or a mixed training objective.
2. For model training, we introduce a two-stage contrastive instruction-tuning method, starting with
the pretrained Mistral-7B (Jiang et al., 2023). In the first stage, we apply contrastive training with
instructions on retrieval datasets, utilizing in-batch negative and curated hard-negative examples. In
the second stage, we blend carefully curated non-retrieval datasets into the stage-one training data.
Since in-batch negative samples are misleading for non-retrieval tasks in some cases, we disable
in-batch negative training in stage two. This design not only improves the accuracy of classification,
clustering, and semantic textual similarity tasks, but also surprisingly enhances retrieval performance.
Note, our model is also not fine-tuned from existing embedding models1.
3. Training data is one of the most crucial factors in achieving state-of-the-art results. We provide
a detailed recipe on the curation of training datasets, including dataset-specific information, the
positive-aware hard-negative mining technique to enhance contrastive training, the synthetic data
generation and example-based multi-class labeling. This enables the community to easily reproduce
and even surpass our model, ultimately advancing the development of the embedding models.
4. Our NV-Embed-v1 model obtained the No.1 position on the Massive Text Embedding Benchmark
(MTEB) (as of May 24, 2024) (Muennighoff et al., 2022) across 56 embedding tasks. By improving
the curation of the training data, NV-Embed-v2 model set a new record high score of 72.31 and
reclaimed the No. 1 spot (as of Aug 30, 2024) on the highly competitive MTEB leaderboard,
further demonstrating the sustained effectiveness of our approach. Note that our model also attains
the highest scores in 15 retrieval tasks (commonly referred to as BEIR (Thakur et al., 2021)), 11
clustering tasks, and 12 classification tasks in the MTEB benchmark. See Table 1 for detailed
information. Additionally, it secured the highest scores in the Long Doc section and the second-highest scores in the QA section of the AIR-Benchmark, which covers a range of out-of-domain information retrieval topics beyond those in MTEB.
5. We study the model compression techniques, including pruning, quantization and knowledge-
distillation, for LLM-based embedding models. Through the comparison with smaller embedding
models directly built on Llama3.2-3B, Qwen2.5-3B, and Minitron-4B, we demonstrate that our
model compression approach achieves superior accuracy and quantization robustness.
We organize the rest of the paper in the following. In § 2, we discuss the related work. We present
the architectural and training method in § 3. We provide detailed recipe of training data curation in
§ 4. We present the experiment results in § 5 and conclude the paper in § 6. Model compression
techniques and results are presented in § A due to the page limit. AIR-bench results are shown in § B.
1For example, SFR-Embedding and Linq-Embed are fine-tuned from E5-mistral-7b-instruct.
2 RELATED WORK
2.1 BIDIRECTIONAL EMBEDDING MODELS
BERT (Devlin et al., 2018) or T5 (Raffel et al., 2020)-based embedding models have long been
the dominant approaches for general-purpose embedding tasks. Early examples include Sentence-
BERT (Reimers & Gurevych, 2019) and SimCSE (Gao et al., 2021), which finetune BERT on natural
language inference (NLI) datasets. In general, these embedding models are first initialized from
pre-trained BERT (Wang et al., 2022; Izacard et al., 2021) or T5 encoders (Ni et al., 2021). Then,
they are further pre-trained with contrastive learning on curated unsupervised (Izacard et al., 2021)
or weakly-supervised text pairs (Wang et al., 2022). Finally, the embedding models (Li et al., 2023;
Wang et al., 2022; Ni et al., 2021; Chen et al., 2023) are fine-tuned on a variety of supervised data,
including MS MARCO (Nguyen et al., 2016), for retrieval and other downstream tasks. Note that
all the state-of-the-art embedding models are trained in this supervised manner. Some of the most
recent frontier models in this category include mxbai-embed-large-v1 (Lee et al., 2024b) (MTEB:
64.68), UAE-Large-V1 (Li & Li, 2023) (MTEB: 64.64), and voyage-large-2-instruct (Voyage-AI,
2024) (MTEB: 68.28).
2.2 DECODER-ONLY LLM-BASED EMBEDDING MODELS
Decoder-only LLMs (Brown et al., 2020) were believed to underperform bidirectional models on
general-purpose embedding tasks for years, because: i) unidirectional attention limits the representa-
tion learning capability, and ii) the scaling of LLMs leads to very high-dimension embeddings, which
may suffer from the curse of dimensionality.
The early work by Neelakantan et al. (2022) initializes embedding models using pre-trained, decoder-
only GPT-3 models (Brown et al., 2020) and applies continued contrastive training. The hidden state
from the final layer, corresponding to the special token <EOS> at the end of the sequence, is used
as the embedding for the input sequence. Its latest successor, text-embedding-3-large, achieves an
MTEB score of 64.59 (OpenAI, 2024). Most recently, E5-Mistral (Wang et al., 2023b) (MTEB:
66.63) applies contrastive learning with task-specific instructions on Mistral 7B (Jiang et al., 2023).
It begins to outperform the state-of-the-art bidirectional models on comprehensive embedding
benchmarks (Muennighoff et al., 2022) by utilizing a massive amount of synthetic data from the
proprietary GPT-4 model. LLM2Vec (BehnamGhader et al., 2024) (MTEB score: 65.01) tries to
build the embedding model from LLMs while only using publicly available data, but it is still worse
than E5-Mistral.
Given the success of E5-Mistral, SFR-Embedding-Mistral (Meng et al., 2024b) (MTEB: 67.56) and SFR-Embedding-2R (Meng et al., 2024a) (MTEB: 70.31) further fine-tune this model on a blend of non-retrieval and retrieval datasets for improved accuracy on both tasks, which is closely related to our NV-Embed. However, there are the following key differences: 1) NV-Embed is trained from scratch on the Mistral 7B LLM directly using publicly available data, and is not dependent on other embedding models or proprietary synthetic data. In addition, we introduce a new architecture that eliminates the unnecessary causal attention mask and further improves the sequence pooling mechanism with a latent attention layer. 2) SFR-Embedding-Mistral uses task-homogeneous batching, which constructs batches consisting exclusively of samples from a single task. In contrast, our NV-Embed uses well-blended batches consisting of samples from all tasks to avoid potential “zigzag” gradient updates, which leads to a new record high score on both the full MTEB and the retrieval tasks compared to SFR-Embedding-Mistral.
Over the past year, MTEB has become one of the most competitive leaderboards across all AI
categories, leading to significantly increased competition among participants. Many of the recent
top-performing models (e.g., stella-1.5B-v5, gte-Qwen2-7B-instruct, bge-multilingual-gemma2,
voyage-large-2-instruct, and text-embed-3-large) have not disclosed key technical details necessary
for reproduction, particularly the blend of training data used. Among the recently disclosed works,
GritLM (Muennighoff et al., 2024) (MTEB: 65.66) unifies text embedding and generation into a single
LLM model. In addition, bge-en-icl (Li et al., 2024) (MTEB: 71.24) enhances query embeddings by
introducing few-shot examples on the query side, utilizing the in-context learning (ICL) capabilities
in text embedding tasks. This approach introduces an overhead by supplying task-relevant examples
to the query during the training process. To maintain zero-shot evaluation accuracy, both zero-shot
and few-shot samples are included during training. In our paper, we focus on comparing the zero-shot evaluation accuracy of the bge-en-icl model to ensure a fair comparison during the evaluation phase.

Figure 1: Proposed architecture design comprising a decoder-only LLM followed by a latent attention layer. The latent attention layer functions as a form of cross-attention where the decoder-only LLM output serves as queries (Q) and a trainable latent array passes through the key-value inputs, followed by an MLP. Blue dotted lines indicate the two matrix multiplications involved in the QKV attentions.
Another area of research focuses on improving data curation processes to enhance the accuracy of
fine-tuning retrieval embedding models. Gecko (Lee et al., 2024a) (MTEB: 66.31) attempts to distill a
smaller bidirectional embedding model from a decoder-only LLM (Gemini et al., 2023) by generating
synthetic paired data. It refines the data quality by retrieving a set of candidate passages for each query
and relabeling the positive and hard negative passages using the LLM. Linq-embed-mistral (Kim
et al., 2024) utilized LLMs to refine data by generating, filtering, and mining negative samples.
Meanwhile, NV-Retriever (Moreira et al., 2024) introduced a positive-aware hard-negative mining
technique that considers positive relevance scores to more effectively eliminate false negatives. In
this work, we apply this positive-aware hard-negative technique to curate the samples and enhance
the contrastive training.
3 METHODS
In this section, we describe our architecture designs and two-stage instruction-tuning method.
3.1 BIDIRECTIONAL ATTENTION
The causal attention mask in decoder-only LLMs is introduced for next-token prediction task (Vaswani
et al., 2017). In principle, causal mask in decoder blocks prevents information leakage by allowing
the decoder to attend only to previous positions during auto-regressive text generation. However, it
is observed that unidirectional attention limits the model’s representation power, as evidenced by
the poor performance of GPT models compared to similarly sized BERT or T5 models on natural
language understanding benchmarks (e.g., Wang et al., 2019). Recently, LLM2Vec (BehnamGhader et al., 2024) introduced an additional training phase with a specially designed masked token prediction to warm up the bidirectional attention. GRIT (Muennighoff et al., 2024) utilizes a hybrid objective with both bidirectional representation learning and causal generative training. In contrast, we simply remove the causal attention mask of the decoder-only LLM during contrastive learning and find that it works compellingly well, as demonstrated by our results. As a result, we go with this simple solution.
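As a conceptual illustration (not the actual training code), the only difference between the two settings is the self-attention mask: the causal mask restricts each token to earlier positions, whereas the bidirectional setting used during contrastive training lets every token attend to the full sequence.

```python
# Conceptual sketch only: causal vs. bidirectional self-attention masks.
import torch

def self_attention_mask(seq_len: int, bidirectional: bool) -> torch.Tensor:
    if bidirectional:
        # Every token may attend to every other token (used during contrastive training).
        return torch.ones(seq_len, seq_len, dtype=torch.bool)
    # Lower-triangular causal mask used for next-token prediction.
    return torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
```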
3.2 LATENT ATTENTION LAYER
There are two popular methods to obtain the embedding for a sequence of tokens: i) mean pooling,
and ii) the last <EOS> token embedding. Previous bidirectional embedding models typically use
mean pooling (Wang et al., 2022; Izacard et al., 2021), while the last <EOS> token embedding is
more popular for decoder-only LLM based embedding models. However, both methods have certain
limitations. Mean pooling simply takes the average of token embeddings and may dilute the important information from key phrases, while the last <EOS> token embedding may suffer from recency bias, relying heavily on the output embedding of the last token.
In this work, we propose a latent attention layer inspired by Jaegle et al. (2021) to achieve more expressive pooling of the sequences for general-purpose embedding tasks. Specifically, we denote the last-layer hidden states from the decoder as the query Q ∈ R^{l×d}, where l is the length of the sequence and d is the hidden dimension. They are sent to attend to the latent array K = V ∈ R^{r×d}, a trainable “dictionary” used to obtain a better representation, where r is the number of latents in the dictionary. The output of this cross-attention is O ∈ R^{l×d},

O = softmax(QK^T)V    (1)

which is followed by a regular MLP consisting of two linear transformations with a GELU activation in between. Our model uses a latent attention layer with r = 512 and 8 heads for multi-head attention. Finally, we apply mean pooling after the MLP layers to obtain the embedding of the whole sequence. See Figure 1 for an illustration. It is worth mentioning here that our approach follows the spirit of dictionary learning to obtain better representation (e.g., Wang et al., 2018), which is different from the Perceiver IO architecture. We compare the proposed latent attention layer with normal self-attention and find consistent improvements in our ablation study.
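A minimal PyTorch sketch of this pooling design is given below. It assumes r = 512 latents, 8 attention heads, and a standard two-layer MLP as described above; the module and argument names are our own, and the snippet is an illustration of the idea rather than the released implementation.

```python
# Minimal sketch of latent-attention pooling (illustrative, not the released code).
import torch
import torch.nn as nn

class LatentAttentionPooling(nn.Module):
    def __init__(self, d_model: int, num_latents: int = 512, num_heads: int = 8):
        super().__init__()
        # Trainable "dictionary" that serves as keys and values of the cross-attention.
        self.latents = nn.Parameter(torch.randn(num_latents, d_model) * 0.02)
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, d_model) last-layer outputs of the decoder-only LLM.
        kv = self.latents.unsqueeze(0).expand(hidden_states.size(0), -1, -1)
        # Token hidden states act as queries; the latent array provides keys and values.
        out, _ = self.cross_attn(query=hidden_states, key=kv, value=kv)
        out = self.mlp(out)
        # Mean-pool over non-padding tokens to obtain one embedding per sequence.
        mask = attention_mask.unsqueeze(-1).float()
        return (out * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)
```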
3.3 TWO-STAGE INSTRUCTION-TUNING
Instruction-tuning has been widely applied for training LLM to follow instructions (Wei et al., 2021;
Ouyang et al., 2022) and to perform retrieval-augmented generation (Wang et al., 2023a; Liu et al.,
2024). It has also been recently applied for training retrievers and general-purpose embedding models
that can adapt their output embeddings with different instructions and task types (Asai et al., 2022;
Wang et al., 2023b).
To obtain a generalist embedding model that can appropriately perform on retrieval and non-retrieval tasks (e.g., classification, clustering), we need to take the characteristics of different tasks into account. For example, the use of in-batch negatives has been demonstrated to be highly efficient for training dense-embedding-based retrievers (e.g., Karpukhin et al., 2020), because it allows reusing the computation and effectively training on B^2 question/passage pairs for each mini-batch with only B questions and corresponding positive passages. However, applying the in-batch negatives trick can mislead the embedding model for classification or clustering tasks, as the “passages” in the mini-batch may come from the same class and are therefore not true negatives.
Given these considerations, we introduce a two-stage instruction tuning method which first conducts
contrastive training with instructions on a variety of retrieval datasets (details are in section 4.1),
utilizing in-batch negatives and curated hard-negative examples. In the second stage, we perform
contrastive instruction-tuning on a combination of retrieval and non-retrieval datasets (details are in
section 4.2) without applying the trick of in-batch negatives. It is worth mentioning here that the retrieval task presents greater difficulty compared to the other tasks, so our training strategy focuses on fine-tuning the model for retrieval initially. In the second stage, we blend the remaining embedding tasks into the instruction-tuning.
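To make the distinction concrete, the sketch below shows simplified versions of the two contrastive objectives: stage one scores a query against all in-batch positives plus its mined hard negatives, while stage two scores it only against its own positive and hard negatives. The notation and temperature value are our own assumptions, not the exact implementation.

```python
# Simplified sketch of the two contrastive objectives (assumptions, not the authors' code).
import torch
import torch.nn.functional as F

def stage_one_loss(q, pos, hard_negs, temperature=0.05):
    # q, pos: (B, d); hard_negs: (B, K, d). Embeddings are assumed L2-normalized.
    in_batch = q @ pos.T                                # (B, B): diagonal entries are positives
    hard = torch.einsum("bd,bkd->bk", q, hard_negs)     # (B, K) mined hard negatives
    logits = torch.cat([in_batch, hard], dim=1) / temperature
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)

def stage_two_loss(q, pos, hard_negs, temperature=0.05):
    # No in-batch negatives: other examples' positives may share this query's class/cluster.
    pos_score = (q * pos).sum(dim=1, keepdim=True)      # (B, 1)
    hard = torch.einsum("bd,bkd->bk", q, hard_negs)     # (B, K)
    logits = torch.cat([pos_score, hard], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```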
4 TRAINING DATA
For training data, we employ public retrieval and non-retrieval datasets and synthetically generated
samples to demonstrate our model’s capability in embedding tasks. Our training procedure incorpo-
rates both retrieval and non-retrieval tasks including classification, clustering, and semantic textual
similarity datasets.
Given a relevant query-document pair, the instructed query follows the instruction template as follows:

q+_inst = Instruct: {task_definition} Query: q+    (2)

The instruction templates for each {task_definition} are provided in Table 12 for training and Table 13 for evaluation. Note, we mask out the instruction tokens in the output embeddings during both training and evaluation, although they still impact the output due to self-attention. We do not add any instruction prefix to the document corpus.
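As a small illustration of Eq. (2), an instructed query can be assembled as follows; the task definition shown is a made-up example rather than one of the templates in Table 12.

```python
# Illustrative only: assembling an instructed query following Eq. (2).
def build_instructed_query(task_definition: str, query: str) -> str:
    return f"Instruct: {task_definition} Query: {query}"

# Example with a hypothetical task definition (not from Table 12):
q_inst = build_instructed_query(
    "Given a question, retrieve passages that answer the question.",
    "What is dense retrieval?",
)
```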
4.1 PUBLIC RETRIEVAL DATASETS
We adopt the retrieval datasets as follows: MSMARCO (Bajaj et al., 2016), HotpotQA (Yang et al.,
2018), Natural Question (Kwiatkowski et al., 2019), PAQ (Lewis et al., 2021), Stack Exchange (Stack-
Exchange-Community, 2023), Natural Language Inference (Group et al., 2022), SQuAD (Rajpurkar
et al., 2016), ArguAna (Wachsmuth et al., 2018), BioASQ (Tsatsaronis et al., 2015), FiQA (Maia
et al., 2018), FEVER (Thorne et al., 2018), HoVer (Jiang et al., 2020), SciFact (Wadden et al., 2022),
NFCorpus, MIRACL (Zhang et al., 2023) and Mr.TyDi (Zhang et al., 2021).
It is important to note that certain datasets (e.g., MSMARCO) are training splits of the MTEB Benchmark; here we follow the existing practices established by leading generalist embedding models (Meng et al., 2024b; Wang et al., 2023b; BehnamGhader et al., 2024; Muennighoff et al., 2024). Table 12 further provides the number of samples used for training. We demonstrate the zero-shot generalization capability of NV-Embed on AIR-Bench in Appendix B.
4.1.1 HARD-NEGATIVE MINING TECHNIQUE
Embedding models are trained using contrastive learning (Gao et al., 2021), aiming to increase the
similarity between the embeddings of a query and its relevant passages (positives) while reducing
the similarity with irrelevant passages (negatives). Public retrieval datasets typically only contains
the positive query-passage pairs but do not contain its own hardnegatives, making it necessary
to mine of such negative examples. To address this, we apply the recently proposed positive-
aware hard-negative technique (Moreira et al., 2024) that considers the positive relevance scores
for better false negatives removal. Following the ablation studies in Moreira et al. (2024), we use
E5-mistral-7b-instruct (Wang et al., 2023b) as a teacher retrieval model to identify the optimal
hardnegative passages relevant to the query. We set the maximum threshold for negative scores based
on a percentage of the positive score (TopKPercPos) with a 95% margin, described as follows:
max_negative_score_threshold = pos_score * percentage_margin.
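The sketch below illustrates the TopKPercPos rule with a 95% margin; the function, its arguments, and the top-k cutoff are our own choices, and in the actual pipeline the candidate scores come from the E5-mistral-7b-instruct teacher model.

```python
# Illustrative sketch of positive-aware hard-negative mining with TopKPercPos (95% margin).
# Embeddings are assumed to be L2-normalized vectors produced by a teacher retrieval model.
import numpy as np

def mine_hard_negatives(query_emb, pos_emb, cand_embs, cand_ids, pos_id, k=8, margin=0.95):
    pos_score = float(np.dot(query_emb, pos_emb))
    max_negative_score_threshold = pos_score * margin
    scored = sorted(
        ((float(np.dot(query_emb, emb)), cid)
         for emb, cid in zip(cand_embs, cand_ids) if cid != pos_id),
        key=lambda pair: pair[0],
        reverse=True,
    )
    # Candidates scoring above the threshold are treated as likely false negatives and skipped.
    return [cid for score, cid in scored if score < max_negative_score_threshold][:k]
```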
4.2 PUBLIC NON-RETRIEVAL DATASETS
Besides retrieval datasets, we utilize public non-retrieval datasets mainly from three sub-tasks in
MTEB benchmark: classification, clustering and semantic similarity (STS). We pre-process the format of these datasets to be compatible with the retrieval datasets for contrastive training: query q+, positive document d+ and hard negative documents {d−_0, ..., d−_n}.
For classification, we utilize the English training splits of various datasets from MTEB Huggingface
datasets (Muennighoff et al., 2022; Lhoest et al., 2021). The classification datasets that we use
are as follows: AmazonReviews (McAuley & Leskovec, 2013a), AmazonCounterfactual (O’Neill
et al., 2021), Banking77 (Casanueva et al., 2020), Emotion (Saravia et al., 2018), IMDB (Maas
et al., 2011), MTOPDomain/MTOPIntent (Li et al., 2021), ToxicConversations (Adams et al., 2019),
TweetSentimentExtraction (Maggie, 2020), AmazonPolarity (McAuley & Leskovec, 2013b), Mas-
siveScenario/MassiveIntent (FitzGerald et al., 2022). For the Emotion and AmazonCounterfactual
classification datasets we use BM25 (Robertson et al., 2009) similarity thresholds to filter out training
data that is similar to the MTEB evaluation set.
For clustering datasets, we utilize the raw_arxiv, raw_biorxiv and raw_medrxiv datasets from MTEB
Huggingface datasets, TwentyNewsgroups (Lang, 1995), Reddit (Geigle et al., 2021), StackEx-
change (Geigle et al., 2021), RedditP2P (Reimers, 2021b) and StackExchangeP2P (Reimers, 2021a)
We filter out any training data that match the MTEB evaluation set.
The classification and clustering datasets provide examples and corresponding class/cluster labels.
The example texts extracted from the appropriate text/title/abstract field are used for the query
q+. For binary classification tasks the label texts are used as documents d+, d−. For multi-class classification and clustering tasks, a randomly sampled example from the ground-truth class/cluster is used for the positive document d+ and randomly sampled examples from other classes/clusters are used for negative documents d−_k. We will present ablation experiments supporting this approach in section 5.2.4.
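A simplified sketch of this example-based construction is shown below; the function and field names, as well as the number of sampled negatives, are our own choices for illustration.

```python
# Illustrative sketch of example-based positive/negative construction for multi-class
# classification and clustering data; names and the negative count are our own choices.
import random

def build_example_based_sample(example_text, label, examples_by_label, num_negatives=4):
    # Positive: another example drawn from the same class/cluster as the query example.
    pool = [e for e in examples_by_label[label] if e != example_text]
    positive = random.choice(pool)
    # Negatives: examples drawn at random from other classes/clusters.
    other_labels = [l for l in examples_by_label if l != label]
    negatives = [random.choice(examples_by_label[l])
                 for l in random.sample(other_labels, k=min(num_negatives, len(other_labels)))]
    return {"query": example_text, "positive": positive, "negatives": negatives}
```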
For semantic textual similarity datasets, we use the training splits of three semantic similarity datasets STS12 (Agirre et al., 2012), STS22 (Chen et al., 2022), STS-Benchmark (Cer et al., 2017) from MTEB Huggingface datasets. For any pair of texts with an associated relevance score (t_a, t_b, score), we create two examples (q+ = t_a, d+ = t_b) and (q+ = t_b, d+ = t_a) if score ≥ 4. We mine the hard negatives d−_k from the pool of other texts using the same technique as in section 4.1.1. Task instructions are appended to d+, d− since they are symmetric with the query.
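For completeness, the symmetric STS example construction above amounts to the small helper below (our own naming); hard negatives would then be mined for each resulting example as in section 4.1.1.

```python
# Small sketch of the symmetric STS example construction: only pairs with score >= 4
# produce training examples, one in each direction.
def sts_to_training_examples(t_a: str, t_b: str, score: float):
    if score < 4:
        return []
    return [{"query": t_a, "positive": t_b}, {"query": t_b, "positive": t_a}]
```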
4.3 SYNTHETIC TASKS DATASET
Due to the limited variety of subjects and tasks in public training datasets, the available instruction
templates for training are also restricted. To enhance task-wise generalization, we employ the
Mixtral-8x22B-Instruct-v0.1 model (MistralAI) to create a dataset consisting of 120,000 synthetic
examples across 60,000 synthetic tasks. Following a two-step prompting approach proposed by
E5-mistral-7b-instruct (Wang et al., 2023b), we adjust the prompts for Mixtral-8x22B-Instruct-v0.1
and English text. We generate only the short-long, long-short, and short-short examples (40,000 of
each), as we use public STS datasets and do not assess bitext retrieval tasks. Example prompts for
synthetic data generation can be found in Appendix 15 and 16.
5 EXPERIMENTS
Training and inference experiment details are illustrated in Appendix C.
5.1 MTEB RESULTS
We evaluate the proposed NV-Embed model on the full MTEB benchmark (Muennighoff et al., 2022)
across 56 tasks. Table 1 summarizes averaged MTEB scores for seven sub-category tasks compared
to frontier models on the MTEB leaderboard2. Our initial model, namely NV-Embed-v1, obtained a score of 69.32 and the No.1 position on MTEB as of May 24, 2024 (detailed benchmark scores available in Table 2). We then further improved the model through the curation of the training dataset, including adding more retrieval datasets, applying the positive-aware hard-negative mining technique, using a synthetic data generation process and constructing example-based multi-class labels. As a result, our NV-Embed-v2 model sets a new record high score of 72.31 and reclaimed No.1 (as of Aug 30, 2024) on the highly competitive MTEB leaderboard, further highlighting the sustained effectiveness of the proposed methods. In the following sub-section 5.2, we present ablation studies on design choices regarding the model architecture, training algorithm and the curation of training data.
Based on quantitative leaderboard results, we compare our NV-Embed with the recent frontier
embedding models. The e5-mistral-7b-instruct (Wang et al., 2023b) and google-gecko (Lee et al.,
2024a) utilize proprietary synthetic data to train their models in a single-stage manner. In contrast, we recognize that the retrieval task presents greater difficulty compared to the other embedding tasks and prioritize our training strategy on fine-tuning the model for retrieval first, followed by blending the remaining sub-tasks into instruction-tuning, leading to substantially improved BEIR and overall MTEB results.
SFR-Embedding-2R (Meng et al., 2024b) demonstrates competitive scores on the MTEB (70.31) and
BEIR (60.18) benchmarks by continuing to finetune the e5-mistral-7b-instruct model (Wang et al.,
2023b). However, it remains largely constrained by the architectural limitations of its parent model,
such as the causal attention mask and the last token pooling method. In contrast, our NV-Embed
model is trained starting from the Mistral 7B LLM (Jiang et al., 2023) rather than finetuning e5-
mistral-7b-instruct (Wang et al., 2023b). It features a new architecture that removes the unnecessary
causal attention mask and further improves the sequence pooling mechanism with a latent attention
layer. Table 3 and 14 provides a detailed scores of BEIR and MTEB benchmarks.
2https://github.com/embeddings-benchmark/mteb
Table 2: Averaged MTEB scores on seven tasks after first and second stage training, using only the publicly available data and before applying the positive-aware hard-negative mining, synthetic data and example-based multi-class labeling. The averaged score 69.32 corresponds to NV-Embed-v1.

First stage training:
Pool Type | Mask Type | Retrieval (15) | Rerank (4) | Clustering (11) | PairClass. (3) | Classification (12) | STS (10) | Summar. (1) | Average (56)
EOS | bidirect | 57.70 | 59.76 | 44.75 | 86.17 | 73.17 | 74.96 | 29.28 | 62.68
EOS | causal | 56.42 | 57.21 | 40.83 | 83.63 | 69.22 | 73.45 | 28.4 | 60.06
Mean | bidirect | 58.42 | 60.02 | 45.97 | 87.45 | 74.62 | 77.47 | 29.72 | 64.00
Mean | causal | 57.55 | 59.35 | 45.42 | 84.46 | 72.48 | 73.60 | 30.89 | 62.32
Latent-attention | bidirect | 59.00 | 59.72 | 45.44 | 87.59 | 73.93 | 79.07 | 30.16 | 64.18
Latent-attention | causal | 57.65 | 59.59 | 45.61 | 82.02 | 72.74 | 78.65 | 30.94 | 63.39
Self-attention | bidirect | 57.89 | 59.73 | 45.19 | 86.51 | 73.54 | 76.89 | 30.22 | 63.27
Self-attention | causal | 57.21 | 59.51 | 45.07 | 85.74 | 73.32 | 77.55 | 31.59 | 63.11

Second stage training:
Pool Type | Mask Type | Retrieval (15) | Rerank (4) | Clustering (11) | PairClass. (3) | Classification (12) | STS (10) | Summar. (1) | Average (56)
EOS | bidirect | 58.39 | 60.37 | 51.43 | 84.06 | 85.85 | 79.55 | 30.36 | 67.85
EOS | causal | 56.59 | 59.23 | 49.81 | 80.99 | 85.04 | 79.12 | 29.12 | 66.50
Mean | bidirect | 58.71 | 60.77 | 52.80 | 87.45 | 87.06 | 82.53 | 30.49 | 68.97
Mean | causal | 57.88 | 60.27 | 51.58 | 82.89 | 86.08 | 81.74 | 31.82 | 68.13
Latent-attention | bidirect | 59.36 | 60.54 | 52.80 | 86.91 | 87.35 | 82.84 | 31.20 | 69.32
Latent-attention | causal | 58.33 | 60.57 | 51.7 | 83.45 | 86.58 | 81.94 | 31.87 | 68.47
Self-attention | bidirect | 58.64 | 60.5 | 53.34 | 86.12 | 86.76 | 82.38 | 30.105 | 69.10
Self-attention | causal | 57.71 | 60.38 | 51.51 | 84.44 | 86.25 | 81.52 | 31.4 | 68.16
Table 3: Averaged MTEB scores on seven embedding tasks after two-stage training, after applying the positive-aware hard-negative mining, synthetic data and example-based multi-class labeling. Note, the averaged score 72.31 corresponds to NV-Embed-v2.

Pool Type | Mask Type | Retrieval (15) | Rerank (4) | Clustering (11) | PairClass. (3) | Classification (12) | STS (10) | Summar. (1) | Average (56)
EOS | bidirect | 62.13 | 60.02 | 58.24 | 87.69 | 90.10 | 82.27 | 30.25 | 71.63
EOS | causal | 60.30 | 59.13 | 57.11 | 85.05 | 90.01 | 81.65 | 32.75 | 70.85
Mean | bidirect | 61.81 | 60.65 | 57.44 | 87.35 | 89.49 | 84.35 | 30.75 | 71.71
Mean | causal | 61.01 | 59.10 | 57.34 | 87.35 | 89.85 | 84.35 | 30.88 | 71.38
Latent-attention | bidirect | 62.65 | 60.65 | 58.46 | 88.67 | 90.37 | 84.31 | 30.70 | 72.31
Latent-attention | causal | 61.15 | 59.36 | 57.80 | 87.22 | 90.49 | 84.13 | 30.90 | 71.61
Self-attention | bidirect | 61.17 | 60.67 | 58.24 | 87.69 | 90.10 | 84.22 | 30.93 | 71.61
Self-attention | causal | 60.53 | 59.67 | 57.11 | 85.05 | 90.01 | 83.81 | 31.36 | 70.6
5.2 ABLATION STUDY
We conduct ablation studies to compare several training, architecture and data curation design
choices: two-stage training, bidirectional attention, latent-attention pooling method, synthetic data
and example-based multi-class labeling.
5.2.1 TWO-STAGE TRAINING
We compare the two-stage and single-stage training with and without the use of the in-batch negative
technique, as shown in Table 4. We observe that our proposed two-stage training surpasses single-
stage training because it allows the use of beneficial in-batch negatives for retrieval tasks in the
first stage, while disabling the in-batch technique for non-retrieval tasks in the second stage. In
contrast, single-stage training with in-batch negatives leads to significantly lower MTEB performance,
especially in the classification sub-task. This accuracy degradation occurs because many classification
tasks involve few-class labels (such as binary labels like True/False), meaning that the inbatch negative
labels in the batch can actually be the positive label. While single-stage training without in-batch
negatives produces more comparable results (MTEB scores: 72.31 for two-stage training vs. 71.94 for
single-stage without in-batch), two-stage training significantly outperforms in the retrieval sub-tasks
(BEIR scores: 62.65 for two-stage training vs. 61.37 for single-stage without in-batch).
Table 4: Averaged MTEB scores on ablation studies for NV-Embed-v2: two-stage training, multi-class data labeling, positive-aware hard-negative mining and synthetically generated dataset. In the third part of the table, HN represents the hard-negative mining technique, AD means adding public retrieval datasets and SD refers to adding synthetically generated data. In the fourth part of the table, we also include NV-Embed-v1, which omits HN, AD, and SD in stage-one training and uses a label-based approach in stage-two training.

Section 5.3.1 Two stage training:
Embedding Task | Retrieval | Rerank | Cluster. | PairClass. | Class. | STS | Summ. | Avg.
Single Stage (Inbatch Enabled) | 61.25 | 60.64 | 57.67 | 87.82 | 86.6 | 83.7 | 30.75 | 70.83
Single Stage (Inbatch Disabled) | 61.37 | 60.81 | 58.31 | 88.3 | 90.2 | 84.5 | 30.96 | 71.94
Two Stage Training | 62.65 | 60.65 | 58.46 | 88.67 | 90.37 | 84.31 | 30.70 | 72.31
Reversed Two Stage | 61.91 | 60.98 | 58.22 | 88.59 | 90.26 | 83.07 | 31.28 | 71.85

Section 5.3.4 Multi-class Classification and Clustering Labels in stage-two training:
Embedding Task | Retrieval | Rerank | Cluster. | PairClass. | Class. | STS | Summ. | Avg.
Label-based approach | 62.40 | 59.7 | 53.04 | 88.04 | 89.17 | 84.25 | 30.77 | 70.82
Example-based approach | 62.65 | 60.65 | 58.46 | 88.67 | 90.37 | 84.31 | 30.70 | 72.31

Section 5.3.5 Hard-negative mining and Synthetically Generated Dataset in stage-one training:
Embedding Task | Retrieval | Rerank | Cluster. | PairClass. | Class. | STS | Summ. | Avg.
[S0] Without HN, Without AD, Without SD | 59.22 | 59.85 | 57.95 | 85.79 | 90.71 | 81.98 | 29.87 | 70.73
[S1] With HN, Without AD, Without SD | 61.52 | 59.80 | 58.01 | 88.56 | 90.31 | 84.26 | 30.36 | 71.83
[S2] With HN, With AD, Without SD | 62.28 | 60.45 | 58.16 | 88.38 | 90.34 | 84.11 | 29.95 | 72.07
[S3] With HN, With AD, With SD | 62.65 | 60.65 | 58.46 | 88.67 | 90.37 | 84.31 | 30.70 | 72.31

Label-based approach + [S0] (NV-Embed-v1) | 59.36 | 60.59 | 52.80 | 86.91 | 87.35 | 82.84 | 31.2 | 69.32
It is worth highlighting here that retrieval is considered the most crucial sub-category for the advancement of RAG technology across the MTEB embedding tasks.
Lastly, we explore another research question: what happens if the order of two-stage training is
reversed? To examine this, we further finetune the Single Stage (Inbatch disabled) model using only
the retrieval datasets while enabling the in-batch negative technique and present the MTEB results
in Table 4. While the retrieval score increased from 61.37 to 61.91 after the reversed two-stage
training, it remains lower than the retrieval score of 62.65 achieved with our proposed two-stage
training method. Furthermore, the scores on other embedding tasks, such as Clustering and STS,
declined compared to the Single Stage (Inbatch disabled) approach. Consequently, the overall MTEB
score for Reversed Two Stage (score: 71.85) is lower than our proposed Two-Stage Training (score:
72.31) as well as the Single Stage with Inbatch disabled (score: 71.94).
5.2.2 CAUSAL ATTENTION VS. BIDIRECTIONAL ATTENTION
To examine the impact of self-attention masks in decoder-only LLM models for embedding applica-
tions, we conducted experiments comparing bidirectional and causal mask types. As illustrated in
Tables 2 and 3, the bidirectional mask consistently outperforms the causal mask based on the average
MTEB scores across 56 tasks for all pooling types. This indicates that embeddings generated with
causal attention masks are significantly less effective than those produced with bidirectional attention
masks.
5.2.3 POOLING METHODS
To examine the impact of different pooling methods on embedding models, we conducted experiments
comparing <EOS>-last, mean, latent-attention, and self-attention pooling types. As depicted in Tables
2 and 3, mean pooling consistently outperforms <EOS>-last token embedding based on the average
MTEB scores across 56 tasks. This difference may be due to the last <EOS> token embedding being
influenced by recency bias, showing an excessive dependence on the output of the final token.
To enhance performance beyond mean pooling, we experimented with adding the proposed latent-
attention or self-attention layer (both followed by MLP) before mean pooling to address the issue of
important information from key phrases being diluted. According to Tables 2, self-attention does
not provide additional accuracy improvements for the embedding capabilities of decoder-only LLMs
(i.e., mean pooling 68.97 vs. self-attention 69.10 on MTEB tasks). It even slightly reduces accuracy
on 15 retrieval tasks (i.e., mean pooling 58.71 vs. self-attention 58.64). Table 3 also shows similar trends for NV-Embed-v2. This is not surprising, as the LLM already has many self-attention layers to learn the representation, and adding an additional one does not bring significant additive value. In contrast, the latent-attention layer proved beneficial for the majority of embedding tasks, as shown in Tables 2 and 3. Specifically, the nDCG@10 accuracy of the more challenging 15 retrieval tasks improved (i.e., mean pooling 61.82 vs. latent-attention 62.65) in Table 3. We hypothesize that
this is due to the "dictionary learning" provided by the latent array, which offers more expressive
representation. The latent-attention layer effectively learns output embedding representations from
decoder-only LLMs, mitigating the information dilution caused by averaging the output embeddings.
5.2.4 MULTI-CLASS CLASSIFICATION AND CLUSTERING LABELS
We compare the effect of using two possible techniques for constructing positive and negative documents for multi-class classification and clustering tasks. In the label-based approach, the ground-truth class/cluster label corresponding to the example in the query is used as the positive document, and other class/cluster labels are sampled for negative documents. In the example-based approach, another example from the same class/cluster as the example in the query is used as the positive document, and examples from other clusters are sampled for negative documents. We use random sampling to get a broad coverage across labels and examples. In this work, all 11 clustering datasets and 5 multi-class classification datasets are constructed with the example-based approach. As shown in Table 4, the example-based approach leads to significant improvements over the label-based approach for both classification and clustering. Table 5 further shows the detailed ablation study of label-based and example-based labels for classification and clustering multi-class samples.
Table 5: Ablation study on using class/cluster labels vs. sampled class/cluster examples as positive and negative documents for multi-class classification and clustering tasks.

+/- Document Format | Labels | Examples
Emotion-Classification | 90.83 | 93.38
MassiveIntent-Classification | 84.94 | 86.10
MassiveScenario-Classification | 90.18 | 92.17
MTOPDomain-Classification | 98.84 | 99.25
MTOPIntent-Classification | 88.55 | 94.37
Arxiv-Clustering-P2P | 53.01 | 55.80
Arxiv-Clustering-S2S | 49.19 | 51.26
Biorxiv-Clustering-P2P | 45.38 | 54.09
Biorxiv-Clustering-S2S | 42.67 | 49.60
Medrxiv-Clustering-P2P | 37.58 | 46.09
Medrxiv-Clustering-S2S | 36.82 | 44.86
Reddit-Clustering | 59.83 | 71.10
Reddit-Clustering-P2P | 72.58 | 74.94
StackExchange-Clustering | 79.37 | 82.10
StackExchange-Clustering-P2P | 48.59 | 48.36
TwentyNewsgroups-Clustering | 58.41 | 64.82
Average (16) | 64.80 | 69.27
5.2.5 HARD-NEGATIVE MINING AND SYNTHETICALLY GENERATED DATASET
We provide a step-by-step curation of the training dataset, incorporating the hard-negative mining
technique (S1), additional public retrieval data (S2), and synthetically generated data (S3). As
shown in Table 4, the first step of adding the hard negative mining technique significantly boosted
retrieval accuracy, with the BEIR score increasing from 59.22 to 61.52. In the next step (S2), we
included more public retrieval datasets (HoVer, SciFact, Nfcorpus, MIRACL, Mr.Tydi) followed by
synthetically generated data. Adding the public retrieval datasets further increased the retrieval score
by 0.7 points. Finally, incorporating the synthetic dataset (S3) leads to a modest improvement in the
overall MTEB scores, raising them by 0.24 points.
6 CONCLUSION
We introduced the NV-Embed model, a decoder-only LLM designed to outperform existing bidi-
rectional models in general-purpose text embedding tasks. For model architecture, we propose a
latent attention layer to obtain expressive pooled embeddings and remove the unnecessary causal
attention mask of decoder-only LLMs. For training algorithm, we introduce a two-stage contrastive
instruction-tuning scheme to sequentially improve the embedding tasks. By leveraging carefully
curated datasets, hard-negative mining, synthetic data generation and example-based multi-class
labeling, our approach achieves superior accuracy across diverse embedding tasks. As a result, the
series of NV-Embed models achieved and maintained the No.1 ranking on the MTEB leaderboard
and also demonstrated superior accuracy in out-of-domain tasks in AIR Benchmark.
7 ACKNOWLEDGMENT
We would like to extend our sincere gratitude to the NVIDIA Merlin team for their valuable collabo-
ration and insightful discussions on building embedding and retriever models. We especially wish to
thank Benedikt Schifferer, Gabriel de Souza P. Moreira, Radek Osmulski, Mengyao Xu, Ronay Ak,
and Even Oldridge for providing the data from NV-Retriever (Moreira et al., 2024).
REFERENCES
C.J. Adams, Daniel Borkan, Jeffrey Sorensen, Lucas Dixon, Lucy Vasserman, and Nithum Thain. Jigsaw unintended bias in toxicity classification, 2019. URL https://kaggle.com/competitions/jigsaw-unintended-bias-in-toxicity-classification.
Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. SemEval-2012 task 6: A pilot on semantic
textual similarity. In Eneko Agirre, Johan Bos, Mona Diab, Suresh Manandhar, Yuval Marton, and Deniz
Yuret (eds.), *SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume
1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth
International Workshop on Semantic Evaluation (SemEval 2012), pp. 385–393, Montréal, Canada, 7-8 June
2012. Association for Computational Linguistics. URL https://aclanthology.org/S12-1051.
Akari Asai, Timo Schick, Patrick Lewis, Xilun Chen, Gautier Izacard, Sebastian Riedel, Hannaneh Hajishirzi,
and Wen-tau Yih. Task-aware retrieval with instructions. arXiv preprint arXiv:2211.09260, 2022.
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew
McNamara, Bhaskar Mitra, Tri Nguyen, et al. Ms marco: A human generated machine reading comprehension
dataset. arXiv preprint arXiv:1611.09268, 2016.
Parishad BehnamGhader, Vaibhav Adlakha, Marius Mosbach, Dzmitry Bahdanau, Nicolas Chapados, and
Siva Reddy. Llm2vec: Large language models are secretly powerful text encoders. arXiv preprint
arXiv:2404.05961, 2024.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners.
Advances in neural information processing systems, 33:1877–1901, 2020.
Iñigo Casanueva, Tadas Temcinas, Daniela Gerz, Matthew Henderson, and Ivan Vulic. Efficient intent detection
with dual sentence encoders. In Proceedings of the 2nd Workshop on NLP for ConvAI - ACL 2020, mar
2020. URL https://arxiv.org/abs/2003.04807. Data available at https://github.com/PolyAI-
LDN/task-specific-datasets.
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, and Lucia Specia. SemEval-2017 task 1: Semantic
textual similarity multilingual and crosslingual focused evaluation. In Steven Bethard, Marine Carpuat,
Marianna Apidianaki, Saif M. Mohammad, Daniel Cer, and David Jurgens (eds.), Proceedings of the 11th
International Workshop on Semantic Evaluation (SemEval-2017), pp. 1–14, Vancouver, Canada, August 2017.
Association for Computational Linguistics. doi: 10.18653/v1/S17-2001. URL https://aclanthology.
org/S17-2001.
Jianlv Chen, Shitao Xiao, Peitian Zhang, Kun Luo, Defu Lian, and Zheng Liu. Bge m3-embedding: Multi-lingual,
multi-functionality, multi-granularity text embeddings through self-knowledge distillation, 2023.
Xi Chen, Ali Zeynali, Chico Camargo, Fabian Flöck, Devin Gaffney, Przemyslaw Grabowicz, Scott Hale, David
Jurgens, and Mattia Samory. SemEval-2022 task 8: Multilingual news article similarity. In Guy Emerson,
Natalie Schluter, Gabriel Stanovsky, Ritesh Kumar, Alexis Palmer, Nathan Schneider, Siddharth Singh, and
Shyam Ratan (eds.), Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-
2022), pp. 1094–1106, Seattle, United States, July 2022. Association for Computational Linguistics. doi:
10.18653/v1/2022.semeval-1.155. URL https://aclanthology.org/2022.semeval-1.155.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional
transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Jack FitzGerald, Christopher Hench, Charith Peris, Scott Mackie, Kay Rottmann, Ana Sanchez, Aaron Nash,
Liam Urbach, Vishesh Kakarala, Richa Singh, et al. Massive: A 1m-example multilingual natural language
understanding dataset with 51 typologically-diverse languages. arXiv preprint arXiv:2204.08582, 2022.
Elias Frantar and Dan Alistarh. Sparsegpt: Massive language models can be accurately pruned in one-shot. In
International Conference on Machine Learning, pp. 10323–10337. PMLR, 2023.
Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. Gptq: Accurate post-training quantization for
generative pre-trained transformers. arXiv preprint arXiv:2210.17323, 2022.
Tianyu Gao, Xingcheng Yao, and Danqi Chen. Simcse: Simple contrastive learning of sentence embeddings.
arXiv preprint arXiv:2104.08821, 2021.
Gregor Geigle, Nils Reimers, Andreas Rücklé, and Iryna Gurevych. Tweac: transformer with extendable qa
agent classifiers. arXiv preprint arXiv:2104.07081, 2021.
Team Gemini, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut,
Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. Gemini: a family of highly capable multimodal models.
arXiv preprint arXiv:2312.11805, 2023.
Stanford NLP Group et al. The stanford natural language inference (snli) corpus, 2022.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. Retrieval augmented language
model pre-training. In International conference on machine learning, pp. 3929–3938. PMLR, 2020.
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu
Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.
Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and
Edouard Grave. Unsupervised dense information retrieval with contrastive learning. arXiv preprint
arXiv:2112.09118, 2021.
Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda
Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, et al. Perceiver io: A general architecture for
structured inputs & outputs. arXiv preprint arXiv:2107.14795, 2021.
Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las
Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7b. arXiv
preprint arXiv:2310.06825, 2023.
Yichen Jiang, Shikha Bordia, Zheng Zhong, Charles Dognin, Maneesh Singh, and Mohit Bansal. Hover: A
dataset for many-hop fact extraction and claim verification. arXiv preprint arXiv:2011.03088, 2020.
Vladimir Karpukhin, Barlas O˘guz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and
Wen-tau Yih. Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906,
2020.
Junseong Kim, Seolhwa Lee, Jihoon Kwon, Sangmo Gu, Yejin Kim, Minkyung Cho, Jy yong Sohn, and
Chanyeol Choi. Linq-embed-mistral: Elevating text retrieval with improved gpt data through task-specific
control and quality refinement. linq ai research blog, 2024. URL https://getlinq.com/blog/
linq-embed-mistral/.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle
Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. Natural questions: a benchmark for question
answering research. Transactions of the Association for Computational Linguistics, 7:453–466, 2019.
Ken Lang. Newsweeder: Learning to filter netnews. In Machine learning proceedings 1995, pp. 331–339.
Elsevier, 1995.
Jinhyuk Lee, Zhuyun Dai, Xiaoqi Ren, Blair Chen, Daniel Cer, Jeremy R Cole, Kai Hui, Michael Boratko, Rajvi
Kapadia, Wen Ding, et al. Gecko: Versatile text embeddings distilled from large language models. arXiv
preprint arXiv:2403.20327, 2024a.
Sean Lee, Aamir Shakir, Darius Koenig, and Julius Lipp. Open source strikes bread - new fluffy embeddings
model, 2024b. URL https://www.mixedbread.ai/blog/mxbai-embed-large-v1.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich
Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. Retrieval-augmented generation for knowledge-
intensive nlp tasks. Advances in Neural Information Processing Systems, 33:9459–9474, 2020.
Patrick Lewis, Yuxiang Wu, Linqing Liu, Pasquale Minervini, Heinrich Küttler, Aleksandra Piktus, Pontus
Stenetorp, and Sebastian Riedel. Paq: 65 million probably-asked questions and what you can do with them.
Transactions of the Association for Computational Linguistics, 9:1098–1115, 2021.
Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil,
Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Šaško, Gunjan Chhablani,
Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina
McMillan-Major, Philipp Schmid, Sylvain Gugger, Clément Delangue, Théo Matussière, Lysandre Debut,
Stas Bekman, Pierric Cistac, Thibault Goehringer, Victor Mustar, François Lagunas, Alexander Rush, and
Thomas Wolf. Datasets: A community library for natural language processing. In Proceedings of the 2021
Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 175–184,
Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics.
URL https://aclanthology.org/2021.emnlp-demo.21.
Chaofan Li, MingHao Qin, Shitao Xiao, Jianlyu Chen, Kun Luo, Yingxia Shao, Defu Lian, and Zheng Liu.
Making text embedders few-shot learners. arXiv preprint arXiv:2409.15700, 2024.
Haoran Li, Abhinav Arora, Shuohui Chen, Anchit Gupta, Sonal Gupta, and Yashar Mehdad. MTOP: A com-
prehensive multilingual task-oriented semantic parsing benchmark. In Paola Merlo, Jorg Tiedemann, and
Reut Tsarfaty (eds.), Proceedings of the 16th Conference of the European Chapter of the Association for
Computational Linguistics: Main Volume, pp. 2950–2962, Online, April 2021. Association for Computa-
tional Linguistics. doi: 10.18653/v1/2021.eacl-main.257. URL https://aclanthology.org/2021.
eacl-main.257.
Xianming Li and Jing Li. Angle-optimized text embeddings. arXiv preprint arXiv:2309.12871, 2023. URL
https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1.
Zehan Li, Xin Zhang, Yanzhao Zhang, Dingkun Long, Pengjun Xie, and Meishan Zhang. Towards general text
embeddings with multi-stage contrastive learning. arXiv preprint arXiv:2308.03281, 2023.
Zihan Liu, Wei Ping, Rajarshi Roy, Peng Xu, Mohammad Shoeybi, and Bryan Catanzaro. ChatQA: Surpassing
GPT-4 on conversational QA and RAG. arXiv preprint arXiv:2401.10225, 2024.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning
word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for
Computational Linguistics: Human Language Technologies, pp. 142–150, Portland, Oregon, USA, June
2011. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/
P11-1015.
Wei Chen Maggie, Phil Culliton. Tweet sentiment extraction, 2020. URL https://kaggle.com/
competitions/tweet-sentiment-extraction.
Macedo Maia, Siegfried Handschuh, André Freitas, Brian Davis, Ross McDermott, Manel Zarrouk, and
Alexandra Balahur. WWW'18 open challenge: financial opinion mining and question answering. In Companion
Proceedings of The Web Conference 2018, pp. 1941–1942, 2018.
Julian McAuley and Jure Leskovec. Hidden factors and hidden topics: understanding rating dimensions
with review text. In Proceedings of the 7th ACM Conference on Recommender Systems, RecSys ’13, pp.
165–172, New York, NY, USA, 2013a. Association for Computing Machinery. ISBN 9781450324090. doi:
10.1145/2507157.2507163. URL https://doi.org/10.1145/2507157.2507163.
Julian McAuley and Jure Leskovec. Hidden factors and hidden topics: understanding rating dimensions with
review text. In Proceedings of the 7th ACM conference on Recommender systems, pp. 165–172, 2013b.
Rui Meng, Ye Liu, Shafiq Rayhan Joty, Caiming Xiong, Yingbo Zhou, and Semih Yavuz. Sfr-embedding-
2: Advanced text embedding with multi-stage training, 2024a. URL https://huggingface.co/Salesforce/SFR-Embedding-2_R.
Rui Meng, Ye Liu, Shafiq Rayhan Joty, Caiming Xiong, Yingbo Zhou, and Semih Yavuz. Sfrembedding-mistral:
enhance text retrieval with transfer learning. Salesforce AI Research Blog, 3, 2024b.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words
and phrases and their compositionality. Advances in neural information processing systems, 2013.
MistralAI. Mixtral 8x22b. URL https://mistral.ai/news/mixtral-8x22b/.
Gabriel de Souza P Moreira, Radek Osmulski, Mengyao Xu, Ronay Ak, Benedikt Schifferer, and Even Oldridge.
NV-Retriever: Improving text embedding models with effective hard-negative mining. arXiv preprint
arXiv:2407.15831, 2024.
Niklas Muennighoff, Nouamane Tazi, Loïc Magne, and Nils Reimers. MTEB: Massive text embedding
benchmark. arXiv preprint arXiv:2210.07316, 2022.
Niklas Muennighoff, Hongjin Su, Liang Wang, Nan Yang, Furu Wei, Tao Yu, Amanpreet Singh, and Douwe
Kiela. Generative representational instruction tuning. arXiv preprint arXiv:2402.09906, 2024.
Arvind Neelakantan, Tao Xu, Raul Puri, Alec Radford, Jesse Michael Han, Jerry Tworek, Qiming Yuan, Nikolas
Tezak, Jong Wook Kim, Chris Hallacy, et al. Text and code embeddings by contrastive pre-training. arXiv
preprint arXiv:2201.10005, 2022.
Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. MS
MARCO: A human-generated machine reading comprehension dataset. 2016.
Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernández Ábrego, Ji Ma, Vincent Y Zhao, Yi Luan,
Keith B Hall, Ming-Wei Chang, et al. Large dual encoders are generalizable retrievers. arXiv preprint
arXiv:2112.07899, 2021.
James O’Neill, Polina Rozenshtein, Ryuichi Kiryo, Motoko Kubota, and Danushka Bollegala. I wish i would
have loved this one, but i didn’t–a multilingual dataset for counterfactual detection in product reviews. arXiv
preprint arXiv:2104.06893, 2021.
OpenAI. New embedding models and api updates, 2024.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang,
Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with
human feedback. Advances in neural information processing systems, 2022.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei
Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of
machine learning research, 21(140):1–67, 2020.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine
comprehension of text. arXiv preprint arXiv:1606.05250, 2016.
Nils Reimers. Stackexchange (title, body) pairs, 2021a. URL https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_body_jsonl.
Nils Reimers. Reddit (title, body) pairs, 2021b. URL https://huggingface.co/datasets/sentence-transformers/reddit-title-body.
Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv
preprint arXiv:1908.10084, 2019.
Stephen Robertson, Hugo Zaragoza, et al. The probabilistic relevance framework: Bm25 and beyond. Founda-
tions and Trends® in Information Retrieval, 3(4):333–389, 2009.
Elvis Saravia, Hsien-Chi Toby Liu, Yen-Hao Huang, Junlin Wu, and Yi-Shin Chen. CARER: Contextualized
affect representations for emotion recognition. In Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun’ichi
Tsujii (eds.), Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp.
3687–3697, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. doi:
10.18653/v1/D18-1404. URL https://aclanthology.org/D18-1404.
Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and
Wen-tau Yih. Replug: Retrieval-augmented black-box language models. arXiv preprint arXiv:2301.12652,
2023.
Stack-Exchange-Community. Stack exchange data dump, 2023.
Mingjie Sun, Zhuang Liu, Anna Bair, and J Zico Kolter. A simple and effective pruning approach for large
language models. arXiv preprint arXiv:2306.11695, 2023.
Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. Beir: A heterogenous
benchmark for zero-shot evaluation of information retrieval models. arXiv preprint arXiv:2104.08663, 2021.
James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. Fever: a large-scale dataset for
fact extraction and verification. arXiv preprint arXiv:1803.05355, 2018.
George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R
Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopoulos, et al. An
overview of the bioasq large-scale biomedical semantic indexing and question answering competition. BMC
bioinformatics, 16:1–28, 2015.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser,
and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
Voyage-AI. voyage-large-2-instruct: Instruction-tuned and rank 1 on mteb, 2024.
Henning Wachsmuth, Shahbaz Syed, and Benno Stein. Retrieval of the best counterargument without prior topic
knowledge. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics
(Volume 1: Long Papers), pp. 241–251, 2018.
David Wadden, Kyle Lo, Bailey Kuehl, Arman Cohan, Iz Beltagy, Lucy Lu Wang, and Hannaneh Hajishirzi.
Scifact-open: Towards open-domain scientific claim verification. arXiv preprint arXiv:2210.13777, 2022.
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and
Samuel Bowman. Superglue: A stickier benchmark for general-purpose language understanding systems.
Advances in neural information processing systems, 32, 2019.
Boxin Wang, Wei Ping, Lawrence McAfee, Peng Xu, Bo Li, Mohammad Shoeybi, and Bryan Catanzaro.
Instructretro: Instruction tuning post retrieval-augmented pretraining. arXiv preprint arXiv:2310.07713,
2023a.
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, and
Furu Wei. Text embeddings by weakly-supervised contrastive pre-training. arXiv preprint arXiv:2212.03533,
2022.
Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, and Furu Wei. Improving text
embeddings with large language models. arXiv preprint arXiv:2401.00368, 2023b.
Yuxuan Wang, Daisy Stanton, Yu Zhang, RJ-Skerry Ryan, Eric Battenberg, Joel Shor, Ying Xiao, Ye Jia, Fei
Ren, and Rif A Saurous. Style tokens: Unsupervised style modeling, control and transfer in end-to-end speech
synthesis. In International conference on machine learning, pp. 5180–5189. PMLR, 2018.
Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai,
and Quoc V Le. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652, 2021.
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christo-
pher D Manning. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. arXiv preprint
arXiv:1809.09600, 2018.
Xinyu Zhang, Xueguang Ma, Peng Shi, and Jimmy Lin. Mr. tydi: A multi-lingual benchmark for dense retrieval.
arXiv preprint arXiv:2108.08787, 2021.
Xinyu Zhang, Nandan Thakur, Odunayo Ogundepo, Ehsan Kamalloo, David Alfonso-Hermelo, Xiaoguang Li,
Qun Liu, Mehdi Rezagholizadeh, and Jimmy Lin. Miracl: A multilingual retrieval dataset covering 18 diverse
languages. Transactions of the Association for Computational Linguistics, 11:1114–1131, 2023.
A COMPREHENSIVE STUDY OF MODEL COMPRESSION TECHNIQUES FOR
NV-EMBED
The increasing computational and memory demands of LLM-based embedding models present challenges for deployment,
limiting their scalability and accessibility. In this appendix, we provide an analysis of post-training model
compression techniques (i.e., pruning and quantization) for generalist embedding models. Our analysis demonstrates
that these compression methods preserve the accuracy and robustness of the LLM-based embedding model, which
continues to surpass smaller embedding models based on Llama3.2-3B, Qwen2.5-3B, and Minitron-4B.
In the model compression process, we first prune the NV-Embed-v2 model, reducing its size from 8 billion parameters
to 3.5 billion (i.e., pruning the main decoder-only blocks and removing the latent attention block). Next, we apply
quantization to lower the weight precision to 8 bits, using both integer and floating-point (E4M3, E5M2) formats.
Finally, we perform continual re-training with a parameter-efficient fine-tuning (PEFT) method, low-rank adaptation
(LoRA), to restore the model's accuracy. We evaluate the resulting models on the MTEB benchmark (Muennighoff et al., 2022).
A.1 PRUNING
To identify an effective pruning technique, we apply three methods (magnitude-based, WANDA (Sun et al., 2023), and
SparseGPT (Frantar & Alistarh, 2023)) in both semi-structured (2:4 and 4:8) and unstructured settings. Note that
unstructured pruning removes individual weights anywhere in the network, while structured pruning removes blocks of
weights at a coarser granularity, such as entire rows or columns of the weight matrices. Semi-structured pruning
(N:M sparsity) is the hardware-friendly middle ground, ensuring that N weights remain non-zero within every group of
M weights. For example, 4:8 semi-structured pruning prunes four out of every eight elements in a weight tensor. This
semi-structured sparsity reduces the size of the weight matrices and the computational cost, while maintaining enough
regularity for efficient hardware utilization. The literature presents various criteria for determining which weights
to prune. The simplest approach is magnitude-based pruning, which retains the weights with the highest absolute values
and removes the rest. WANDA (Sun et al., 2023) is a pruning technique that considers both weights and activations.
SparseGPT (Frantar & Alistarh, 2023) identifies non-critical connections using an approximate Hessian-based
optimization method.
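For concreteness, the sketch below illustrates 2:4 semi-structured magnitude pruning of a single weight matrix. This is not the implementation used in the paper; the function name and the PyTorch formulation are our own, and WANDA and SparseGPT would additionally use activation statistics and approximate Hessian information when ranking weights.

```python
import torch

def magnitude_prune_2_4(weight: torch.Tensor) -> torch.Tensor:
    """Zero out the 2 smallest-magnitude weights in every group of 4 (2:4 sparsity)."""
    out_features, in_features = weight.shape  # in_features must be divisible by 4
    groups = weight.reshape(out_features, in_features // 4, 4)
    # Rank the 4 weights in each group by absolute value (ascending).
    order = groups.abs().argsort(dim=-1)
    mask = torch.ones_like(groups)
    # Drop the two smallest entries per group, keep the two largest.
    mask.scatter_(-1, order[..., :2], 0.0)
    return (groups * mask).reshape(out_features, in_features)

w = torch.randn(8, 16)
w_pruned = magnitude_prune_2_4(w)
# Every group of 4 consecutive weights now has at most 2 non-zero entries.
assert (w_pruned.reshape(8, -1, 4) != 0).sum(dim=-1).max() <= 2
```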
Table 6 summarizes the averaged MTEB scores for each pruning method. Among these techniques, SparseGPT generally
delivers the best results, while magnitude-based pruning and WANDA produce comparable performance both immediately
after pruning and after retraining, as shown in Table 6. Notably, semi-structured (2:4) pruning yields the lowest
scores but demonstrates the greatest accuracy recovery after retraining on the MTEB benchmark. Based on these
findings, we focus on SparseGPT pruning for the subsequent ablation studies.
Table 6: Pruning - MTEB benchmark
Pruning Criterion   Stage      2:4 (semi)   4:8 (semi)   Unstructured
Magnitude           Pruning    64.62        67.60        69.18
Magnitude           Re-train   69.96        70.46        70.84
Wanda               Pruning    64.26        67.87        70.19
Wanda               Re-train   69.74        70.42        70.81
SparseGPT           Pruning    68.48        70.11        71.33
SparseGPT           Re-train   70.41        70.90        71.18
A.2 KNOWLEDGE DISTILLATION
In traditional accuracy-recovery approaches after model compression, ground-truth labels are used for continual
retraining. To improve this retraining process, we add a knowledge distillation loss term, in which the uncompressed
model serves as the teacher, transferring the knowledge of the more capable teacher model to the smaller and simpler
student model. To encourage the student model to mimic the teacher's behavior, we introduce mean-squared error losses
on both the output state (S^{O-1}) and the intermediate states (S^{1} through S^{O-2}).
For this knowledge distillation process, the uncompressed embedding model serves as the teacher, while the compressed
version acts as the student. We remove the latent attention block and compensate for the resulting accuracy
degradation with knowledge distillation. The knowledge distillation loss is defined as

L_{kd} = \sum_{n=1}^{O-2} \mathrm{MSE}(S_s^{n}, S_t^{n}) + \mathrm{MSE}(S_s^{O-1}, S_t^{O-1})

where L_{kd} is the knowledge distillation loss, O is the number of layers, n is the layer index, MSE is the
mean-squared error function, S_s denotes a student state, and S_t denotes the corresponding teacher state. Based on
this, the total loss is the sum of the contrastive and knowledge distillation losses:
L_{total} = L_{contrastive} + \alpha \times L_{kd}, where \alpha is a weighting term.
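A minimal sketch of this objective is shown below, assuming the student and teacher expose per-layer hidden states of matching shapes; the helper name and the exact layer bookkeeping are ours, not the paper's.

```python
import torch.nn.functional as F

def knowledge_distillation_loss(student_states, teacher_states):
    """Sum of MSE losses between matching student/teacher hidden states.

    Both arguments are lists of [batch, seq_len, hidden] tensors: the
    intermediate states followed by the output state.
    """
    loss = 0.0
    for s, t in zip(student_states, teacher_states):
        loss = loss + F.mse_loss(s, t)
    return loss

# Total objective, with alpha the weighting term from the text:
# total_loss = contrastive_loss + alpha * knowledge_distillation_loss(student_states, teacher_states)
```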
As presented in Table 7, incorporating knowledge distillation ("GT+KD") consistently outperforms using only
ground-truth labels ("GT") across the different pruning approaches on the MTEB benchmark. Among the methods, 2:4
semi-structured pruning yields the lowest scores but benefits the most from knowledge distillation, achieving an
improvement of 0.76 points on the MTEB benchmark.
Table 7: Knowledge Distillation - MTEB benchmark

Label Types   2:4 (semi)   4:8 (semi)   Unstructured
GT            70.41        70.90        71.18
GT+KD         71.17        71.22        71.48
A.3 QUANTIZATION
For the weight quantization stage, we adopt GPTQ (Frantar et al., 2022), a post-training weight quantization method
that utilizes approximate Hessian information to reduce the precision of the weights. To evaluate our compressed
embedding models, we compare them against three smaller LLM-based embedding models (Llama3.2-3B, Qwen2.5-3B, and
Minitron-4B) with varying numbers of weight parameters. Table 8 reports the averaged MTEB scores for the compressed
models (pruning plus quantization).
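As a point of reference, the sketch below shows plain round-to-nearest symmetric INT8 quantization of a weight matrix. GPTQ goes further by using approximate Hessian information to compensate for the rounding error; this simplified version is only meant to illustrate what uniform 8-bit weight quantization does, and the helper names are ours.

```python
import torch

def int8_quantize_per_channel(weight: torch.Tensor):
    """Symmetric round-to-nearest INT8 quantization, one scale per output channel."""
    scale = weight.abs().amax(dim=1, keepdim=True).clamp_min(1e-8) / 127.0
    q = torch.clamp(torch.round(weight / scale), -127, 127).to(torch.int8)
    return q, scale

def int8_dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale

w = torch.randn(4096, 4096)
q, scale = int8_quantize_per_channel(w)
w_hat = int8_dequantize(q, scale)
print((w - w_hat).abs().max())  # worst-case rounding error introduced by 8-bit weights
```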
A key observation is that our compressed models demonstrate superior robustness in low-precision settings compared to
their smaller counterparts. For example, NV-Embed quantized to INT8 maintains nearly identical MTEB scores (0.00% for
2:4 semi-structured, 0.01% for 4:8 semi-structured, and 0.01% for unstructured), whereas the smaller models show
larger drops: Llama-3B (-0.47%), Qwen-3B (-0.14%), and Minitron-4B (-0.84%). This trend remains consistent across the
other 8-bit precision formats as well.
Compared to the integer format, which has a uniform numerical distribution, the floating-point format represents the
same number of discrete points but covers a larger numerical range with a non-uniform distribution (higher precision
for small values and lower precision for large values). There are two primary FP8 formats: E4M3 (4-bit exponent,
3-bit mantissa) and E5M2 (5-bit exponent, 2-bit mantissa), with the remaining bit used as the sign bit. Table 8 shows
that the 8-bit floating-point formats (E4M3 and E5M2) achieve MTEB scores comparable to the INT8 format.
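The trade-off between the two FP8 formats can be inspected directly in recent PyTorch builds that ship the float8 dtypes; this availability is an assumption about the environment, not something the paper relies on.

```python
import torch

# E4M3 trades numerical range for precision; E5M2 trades precision for range.
for dtype in (torch.float8_e4m3fn, torch.float8_e5m2):
    info = torch.finfo(dtype)
    print(dtype, "max:", info.max, "smallest normal:", info.tiny)

x = torch.randn(4, 4)
x_fp8 = x.to(torch.float8_e4m3fn)        # round to 8-bit floating point
print((x - x_fp8.float()).abs().max())   # element-wise rounding error
```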
Table 8: Quantization - MTEB benchmark
Scores are averaged MTEB; the percentage in parentheses is the difference relative to the 16-bit model.

Model              16bit    INT8             FP8 (E4M3)       FP8 (E5M2)
NV-Embed (2:4)     71.17    71.17 (0.00%)    70.94 (-0.34%)   71.14 (0.03%)
NV-Embed (4:8)     71.22    71.23 (0.01%)    71.28 (0.08%)    71.48 (0.37%)
NV-Embed (Unstr)   71.48    71.49 (0.01%)    71.55 (0.09%)    71.75 (0.37%)
Llama3.2-3b        70.31    69.98 (-0.47%)   70.05 (-0.36%)   70.06 (-0.35%)
Qwen2.5-3b         69.77    69.70 (-0.1%)    69.70 (-0.1%)    69.67 (-0.14%)
Minitron-4b        70.68    70.09 (-0.84%)   69.97 (-1.0%)    69.97 (-1.02%)
B AIR BENCHMARK
In this appendix, we present results on AIR-Bench (version 24.04, https://github.com/AIR-Bench/AIR-Bench), a newly
released information retrieval benchmark that covers diverse and comprehensive domains such as healthcare, law, news,
books, arxiv, and finance, along with synthetically generated samples produced by diverse LLMs. Importantly,
AIR-Bench helps us understand the generalization capability of embedding/retrieval models, because the majority of
its domain samples do not appear in the MTEB benchmark. Moreover, AIR-Bench is designed as a closed-book benchmark
whose ground truth is kept confidential; as a result, benchmark scores can only be obtained through the HuggingFace
Hub platform.
AIR-Bench 24.04 contains two tasks: QA and Long-Doc. We run evaluations on 8 English datasets for the QA task and 15
English datasets for the Long-Doc task. As shown in Table 9, NV-Embed-v2 achieves the second-highest score on the QA
task. As shown in Table 10, NV-Embed-v2 attains the highest score of 74.78 on the Long-Doc task, surpassing the
Bge-en-icl model, which incurs the additional overhead of adding in-context examples to the query during training. It
is worth highlighting that NV-Embed-v2, which achieved a higher MTEB score, also demonstrates improved accuracy on
both the QA and Long-Doc tasks of AIR-Bench compared to NV-Embed-v1. Interestingly, this is not always the case in
the literature: a model that performs better on MTEB does not necessarily outperform on AIR-Bench. For example, while
SFR-Embedding-2R substantially outperforms SFR-Embedding-Mistral on MTEB (SFR-Embedding-2R: 70.31,
SFR-Embedding-Mistral: 67.56), it falls short on AIR-Bench in both QA (SFR-Embedding-2R: 49.47,
SFR-Embedding-Mistral: 51.58) and Long-Doc (SFR-Embedding-2R: 67.45, SFR-Embedding-Mistral: 69.0).
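For reference, the two metrics reported in Tables 9 and 10 can be computed as sketched below under binary relevance; this is only an illustration, since AIR-Bench scores its closed-book test sets server-side.

```python
import math

def recall_at_k(retrieved_ids, relevant_ids, k=10):
    """Fraction of the relevant documents that appear in the top-k retrieved list."""
    top_k = set(retrieved_ids[:k])
    return len(top_k & set(relevant_ids)) / max(len(relevant_ids), 1)

def ndcg_at_k(retrieved_ids, relevant_ids, k=10):
    """nDCG@k with binary relevance: discounted gain of hits vs. the ideal ranking."""
    relevant = set(relevant_ids)
    dcg = sum(1.0 / math.log2(rank + 2)
              for rank, doc_id in enumerate(retrieved_ids[:k]) if doc_id in relevant)
    ideal = sum(1.0 / math.log2(rank + 2) for rank in range(min(len(relevant), k)))
    return dcg / ideal if ideal > 0 else 0.0
```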
Table 9: QA (nDCG@10 scores) on AIR benchmark 24.04
Model                    Wiki    Web     News    Healthcare   Law     Finance   Arxiv   Msmarco   Avg (8)
Bge-en-icl (zero-shot)   54.40   64.61   55.11   57.25        25.10   54.81     63.71   48.46     52.93
NV-Embed-v2              52.58   65.19   53.13   59.56        25.00   53.04     60.80   48.94     52.28
SFR-Embedding-Mistral    51.27   63.46   52.21   58.76        23.27   56.94     58.99   47.75     51.58
Stella-1.5B-v5           50.88   61.99   53.87   58.81        23.22   57.26     61.38   44.81     51.53
Gte-Qwen2-7B-instruct    51.20   63.46   54.07   54.20        22.31   58.20     58.39   40.27     50.26
NV-Embed-v1              50.42   62.84   51.46   58.53        20.65   49.89     60.27   46.10     50.02
Linq-Embed-Mistral       48.41   61.04   49.44   60.18        20.34   50.04     60.50   47.56     49.69
SFR-Embedding-2R         48.77   63.72   51.14   55.86        20.98   54.78     57.66   42.84     49.47
E5-mistral-7b-instruct   44.41   61.67   48.18   56.32        19.32   54.79     59.03   44.78     48.56
Table 10: Long-document (Recall@10 scores) on AIR benchmark 24.04
Model                          Arxiv (4)   Book (2)   Healthcare (5)   Law (4)   Avg. (15)
NV-Embed-v2                    79.27       77.46      73.01            71.18     74.78
Bge-en-icl (zero-shot)         78.30       78.21      73.65            67.09     73.75
NV-Embed-v1                    77.65       75.49      72.38            69.55     73.45
Bge-multilingual-gemma2        71.77       76.46      73.96            70.86     72.88
Linq-Embed-Mistral             75.46       73.81      71.58            68.58     72.11
Stella-1.5B-v5                 73.17       74.38      70.02            69.32     71.25
SFR-Embedding-Mistral          72.79       72.41      67.94            64.83     69.0
Text-embed-3-large (OpenAI)    74.53       73.16      65.83            64.47     68.77
E5-mistral-7b-instruct         72.14       72.44      68.44            62.92     68.49
SFR-Embedding-2R               70.51       70.22      67.60            62.82     67.45
C EXPERIMENTAL DETAILS AND INSTRUCTION TEMPLATES FOR TRAINING
AND EVALUATION
In this section, we describe our detailed experimental setup. We use a parameter-efficient fine-tuning (PEFT) method,
low-rank adaptation (LoRA) (Hu et al., 2021), to efficiently fine-tune our proposed NV-Embed model. We chose
Mistral 7B (Jiang et al., 2023) as the base decoder-only LLM. We replace the causal attention mask with a
bidirectional one, and integrate the latent attention layer with 512 latents, a hidden dimension of 4096, and
8 attention heads.
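A sketch of how such a latent attention layer can be realized is given below: the token hidden states act as queries over a trainable latent array, and the result is mean-pooled into a single embedding. The module name, the MLP shape, and other details are our assumptions; only the hyperparameters (512 latents, width 4096, 8 heads) come from the text.

```python
import torch
import torch.nn as nn

class LatentAttentionPooling(nn.Module):
    """Token states (queries) cross-attend to a trainable latent array (keys/values);
    the attended states are mean-pooled into one embedding per input sequence."""

    def __init__(self, hidden_size=4096, num_latents=512, num_heads=8):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, hidden_size) * 0.02)
        self.cross_attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.GELU(),
            nn.Linear(hidden_size, hidden_size),
        )

    def forward(self, token_states):              # [batch, seq_len, hidden]
        batch = token_states.size(0)
        kv = self.latents.unsqueeze(0).expand(batch, -1, -1)
        attn_out, _ = self.cross_attn(token_states, kv, kv)
        return self.mlp(attn_out).mean(dim=1)     # mean pool over tokens -> [batch, hidden]
```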
We train the Mistral 7B LLM end-to-end with a contrastive loss using LoRA with rank 16, alpha 32, and a dropout rate
of 0.1. We use the Adam optimizer with 50 warm-up steps and a learning rate of 2e-5 for the first stage and 1.5e-5
for the second stage, with linear decay. The optimizer hyperparameters are listed in Table 11. We restart the
optimizer with the same 50 warm-up steps and the lower learning rate for the second stage. The model is fine-tuned
with a batch size of 128, where each batch is composed of a query paired with 1 positive and 7 hard-negative
documents. Training samples from the different datasets in Table 12 are uniformly sampled. We train in Bfloat16 and
set the maximum sequence length to 512 tokens. The special <BOS> and <EOS> tokens are added at the start and end of
the given query and documents. Training is conducted in two stages: the model is initially trained on retrieval
datasets using the in-batch negatives technique, and is subsequently trained on a blend of datasets covering both
retrieval and non-retrieval embedding tasks.
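The contrastive objective with one positive and seven hard negatives per query can be sketched as follows; the temperature value is an assumption rather than a reported hyperparameter, and in the first training stage the other in-batch positives would additionally be scored as negatives.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(q_emb, pos_emb, neg_emb, temperature=0.05):
    """InfoNCE-style loss: q_emb [B, d], pos_emb [B, d], neg_emb [B, 7, d]."""
    q = F.normalize(q_emb, dim=-1)
    pos = F.normalize(pos_emb, dim=-1)
    neg = F.normalize(neg_emb, dim=-1)

    pos_scores = (q * pos).sum(dim=-1, keepdim=True)   # [B, 1] query-positive similarity
    neg_scores = torch.einsum("bd,bkd->bk", q, neg)    # [B, 7] hard-negative similarities
    logits = torch.cat([pos_scores, neg_scores], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)             # the positive sits at index 0
```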
For evaluation, we assess our model with a maximum sequence length of 512 tokens to ensure a fair comparison with
prior work (Wang et al., 2023b), which also reports evaluation results under a 512-token limit. The evaluation
instruction templates are provided in Table 13.
Table 11: Parameters used in the experiments
Parameter                  Value
Batch size                 128
Number of hard negatives   7
Warm-up steps              50
Training steps             First stage: 20k; Second stage: 18k
Learning rate              First stage: 2e-5; Second stage: 1.5e-5
LoRA params                Rank: 16; Alpha: 32; Dropout: 0.1
Weight decay               0.03
Optimizer                  Adam
Padding side               right
Number of latents (r)      512
Latent width (d)           4096
Multi-attention heads      8
Table 12: Instructions and number of samples used for each training dataset.
Task Name
ArguAna
Natural Language Inference
PAQ, MSMARCO
SQUAD
StackExchange
Natural Question
HotpotQA
FEVER
FiQA2018
BioASQ
HoVer
Nfcorpus
MIRACL
Mr.TyDi
SciFact
STS12, STS22, STSBenchmark
AmazonCounterfactual-Classification
AmazonPolarity-Classification
AmazonReviews-Classification
Banking77-Classification
Emotion-Classification
Instruction Template
Given a claim, retrieve documents that support or refute the claim
Retrieve semantically similar text
Given a premise, retrieve a hypothesis that is entailed by the premise
Given a web search query, retrieve relevant passages that answer the query
Given a question, retrieve passages that answer the question
Given a question, retrieve documents that can help answer the question
Given a question, retrieve passages that answer the question
Given a web search query, retrieve relevant passages that answer the query
Given a question, retrieve passages that answer the question
Given a multi-hop question, retrieve documents that can help answer the question
Given a claim, retrieve documents that support or refute the claim
Given a financial question, retrieve relevant passages that answer the query
Given a query, retrieve documents that can help answer the question
Given a claim, retrieve documents that support or refute the claim
Given a question, retrieve relevant documents that answer the question
Given a question, retrieve passages that answer the question
Given a question, retrieve passages that answer the question
Given a scientific claim, retrieve documents that support or refute the claim
Retrieve semantically similar text.
Classify a given Amazon customer review text as either counterfactual or not-counterfactual
Classify Amazon reviews into positive or negative sentiment
Classify the given Amazon review into its appropriate rating category
Given a online banking query, find the corresponding intents
Classify the emotion expressed in the given Twitter message into one of the six emotions:anger,
fear, joy, love, sadness, and surprise
Number of Samples
16k
270k
500k, 500k
87k
80k
100k
170k
140k
5k
2.4k
17k
3.6k
2k
2k
0.9k
1.8k, 0.3k, 2.7k
6k
20k
40k
10k
16k
Classify the sentiment expressed in the given movie review text from the IMDB dataset
Classify the intent of the given utterance in task-oriented conversation
Classify the intent domain of the given utterance in task-oriented conversation
Given a user utterance as query, find the user intents
Given a user utterance as query, find the user scenarios
Classify the given comments as either toxic or not toxic
Imdb-Classification
MTOPIntent-Classification
MTOPDomain-Classification
MassiveIntent-Classification
MassiveScenario-Classification
ToxicConversationsClassification
TweetSentimentExtractionClassification Classify the sentiment of a given tweet as either positive, negative, or neutral
Arxiv-Clustering-P2P
Arxiv-Clustering-S2S
Biorxiv-Clustering-P2P
Biorxiv-Clustering-S2S
Medrxiv-Clustering-P2P
Medrxiv-Clustering-S2S
Reddit-Clustering
Reddit-Clustering-S2S
Stackexchange-Clustering
Stackexchange-Clustering-S2S
TwentyNewsgroups-Clustering
Identify the main and secondary category of Arxiv papers based on the titles and abstracts
Identify the main and secondary category of Arxiv papers based on the titles
Identify the main category of Biorxiv papers based on the titles and abstracts
Identify the main category of Biorxiv papers based on the titles
Identify the main category of Medrxiv papers based on the titles and abstracts
Identify the main category of Medrxiv papers based on the titles
Identify the main category of Medrxiv papers based on the titles and abstracts
Identify the main category of Medrxiv papers based on the titles and abstracts
Identify the main category of Medrxiv papers based on the titles and abstracts
Identify the main category of Medrxiv papers based on the titles and abstracts
Identify the topic or theme of the given news articles
24k
15k
15k
11k
11k
50k
27k
50k
50k
15k
15k
2.3k
2.3k
50k
40k
50k
40k
1.7k
D LATENT-ATTENTION VISUALIZATION
Figure 2: Attention over 4096 latents across 8 heads (columns) are visualized for 10 positive and
10 negative reviews (rows) from the AmazonReviewsClassification dataset. The attention weights
are mean pooled across tokens. The attention weights reveal that the latents specialize in learning
features of queries. The latent indicated by the arrows specialized in learning the positivity of reviews.
It has high attention across the positive reviews and low attention across the negative reviews.
Table 13: Instructions used for evaluation on the MTEB benchmark. “STS*” indicates we use the
same instructions for all the STS tasks.
Task Name: Instruction Template

ArguAna: Given a claim, retrieve documents that support or refute the claim
ClimateFEVER: Given a claim about climate change, retrieve documents that support or refute the claim
DBPedia: Given a query, retrieve relevant entity descriptions from DBPedia
FEVER: Given a claim, retrieve documents that support or refute the claim
FiQA2018: Given a financial question, retrieve user replies that best answer the question
HotpotQA: Given a multi-hop question, retrieve documents that can help answer the question
MSMARCO: Given a web search query, retrieve relevant passages that answer the query
NFCorpus: Given a question, retrieve relevant documents that answer the question
Natural Question: Given a question, retrieve passages that answer the question
QuoraRetrieval: Given a question, retrieve questions that are semantically equivalent to the given question
SCIDOCS: Given a scientific paper title, retrieve paper abstracts that are cited by the given paper
SciFact: Given a scientific claim, retrieve documents that support or refute the claim
Touche2020: Given a question, retrieve passages that answer the question
TREC-COVID: Given a query on COVID-19, retrieve documents that answer the query
STS*: Retrieve semantically similar text.
SummEval: Given a news summary, retrieve other semantically similar summaries
AmazonCounterfactualClassification: Classify a given Amazon customer review text as either counterfactual or not-counterfactual
AmazonPolarityClassification: Classify Amazon reviews into positive or negative sentiment
AmazonReviewsClassification: Classify the given Amazon review into its appropriate rating category
Banking77Classification: Given a online banking query, find the corresponding intents
EmotionClassification: Classify the emotion expressed in the given Twitter message into one of the six emotions: anger, fear, joy, love, sadness, and surprise
ImdbClassification: Classify the sentiment expressed in the given movie review text from the IMDB dataset
MassiveIntentClassification: Given a user utterance as query, find the user intents
MassiveScenarioClassification: Given a user utterance as query, find the user scenarios
MTOPDomainClassification: Classify the intent domain of the given utterance in task-oriented conversation
MTOPIntentClassification: Classify the intent of the given utterance in task-oriented conversation
ToxicConversationsClassification: Classify the given comments as either toxic or not toxic
TweetSentimentExtractionClassification: Classify the sentiment of a given tweet as either positive, negative, or neutral
ArxivClusteringP2P: Identify the main and secondary category of Arxiv papers based on the titles and abstracts
ArxivClusteringS2S: Identify the main and secondary category of Arxiv papers based on the titles
BiorxivClusteringP2P: Identify the main category of Biorxiv papers based on the titles and abstracts
BiorxivClusteringS2S: Identify the main category of Biorxiv papers based on the titles
MedrxivClusteringP2P: Identify the main category of Medrxiv papers based on the titles and abstracts
MedrxivClusteringS2S: Identify the main category of Medrxiv papers based on the titles
RedditClustering: Identify the topic or theme of Reddit posts based on the titles
RedditClusteringP2P: Identify the topic or theme of Reddit posts based on the titles and posts
StackExchangeClustering: Identify the topic or theme of StackExchange posts based on the titles
StackExchangeClusteringP2P: Identify the topic or theme of StackExchange posts based on the given paragraphs
TwentyNewsgroupsClustering: Identify the topic or theme of the given news articles
AskUbuntuDupQuestions: Retrieve duplicate questions from AskUbuntu forum
MindSmallReranking: Retrieve relevant news articles based on user browsing history
SciDocsRR: Given a title of a scientific paper, retrieve the titles of other relevant papers
StackOverflowDupQuestions: Retrieve duplicate questions from StackOverflow forum
SprintDuplicateQuestions: Retrieve duplicate questions from Sprint forum
TwitterSemEval2015: Retrieve tweets that are semantically similar to the given tweet
TwitterURLCorpus: Retrieve tweets that are semantically similar to the given tweet
Table 14: Full BEIR and MTEB benchmark
Columns: (1) Bge-multilingual-gemma2, (2) Gte-Qwen2-7B-instruct, (3) NV-Embed-v1, (4) NV-Embed-v2,
(5) Stella-en-1.5B-v5, (6) bge-en-icl (zeroshot), (7) SFR-Embedding-2R

Task                         (1)     (2)     (3)     (4)     (5)     (6)     (7)
ArguAna                      77.37   64.27   68.21   70.07   65.27   82.76   62.34
ClimateFEVER                 39.37   45.88   34.72   45.39   46.11   45.35   34.43
CQADupStack                  47.94   46.43   50.51   50.24   47.75   47.23   46.11
DBPEDIA                      51.37   52.42   48.29   53.50   52.28   50.42   51.21
FEVER                        90.38   95.11   87.77   93.75   94.83   91.96   92.16
FiQA2018                     60.04   62.03   63.1    65.73   60.48   58.77   61.77
HotpotQA                     83.26   73.08   79.92   85.48   76.67   84.98   81.36
MSMARCO                      45.71   45.98   46.49   45.63   45.22   46.72   42.18
NFCorpus                     38.11   40.6    38.04   45.17   42      40.69   41.34
Natural                      71.45   67      71.22   73.57   71.8    73.85   73.96
QuoraRetrieval               90.04   90.09   89.21   89.04   90.03   91.02   89.58
SCIDOCS                      26.93   28.91   20.19   21.90   26.64   25.25   24.87
SciFact                      72.05   79.06   78.43   80.13   80.09   78.33   85.91
Touche2020                   30.26   30.57   28.38   31.78   29.94   29.67   28.18
TREC-COVID                   64.27   82.26   85.88   88.44   85.98   78.11   87.28
BIOSSES                      85.74   81.37   85.59   87.42   83.11   86.35   87.6
SICK-R                       82.66   79.28   82.8    82.15   82.89   83.87   77.01
STS12                        77.71   79.55   76.22   77.89   80.09   77.73   75.67
STS13                        87.45   88.83   86.3    88.30   89.68   85.98   82.4
STS14                        83.48   83.87   82.09   84.30   85.07   82.34   79.93
STS15                        87.63   88.54   87.24   89.04   89.39   87.35   85.82
STS16                        86.7    86.49   84.77   86.77   87.15   86.54   84.5
STS17                        91.18   88.73   87.42   90.67   91.35   91.25   88.93
STS22                        69.02   66.88   69.85   68.12   68.1    68.08   67.1
STSBenchmark                 87.25   86.85   86.14   88.41   88.23   87.92   83.6
SummEval                     31.2    31.35   31.2    30.70   31.49   30.75   30.71
SprintDuplicateQuestions     90.94   92.82   95.94   97.02   96.04   95.06   97.62
TwitterSemEval2015           79.64   77.96   78.73   81.11   80.58   78.54   78.57
TwitterURLCorpus             86.95   86.59   86.05   87.87   87.58   87.19   88.03
AmazonCounterfactual         89.48   91.31   95.12   94.28   92.87   92.88   92.72
AmazonPolarity               96.9    97.5    97.14   97.74   97.16   96.86   97.31
AmazonReviews                61.6    62.56   55.47   63.96   59.36   61.28   61.04
Banking77                    92.53   87.57   90.34   92.42   89.79   91.42   90.02
Emotion                      92.97   79.45   91.71   93.38   84.29   93.31   93.37
Imdb                         96.66   96.75   97.06   97.14   96.66   96.91   96.8
MassiveIntent                82.05   85.41   80.07   86.10   85.83   82.26   85.97
MassiveScenario              84.4    89.77   81.74   92.17   90.2    83.92   90.61
MTOPDomain                   98.61   99.04   96.51   99.25   99.01   97.99   98.58
MTOPIntent                   95.51   91.88   89.77   94.37   92.78   93.56   91.3
ToxicConversations           87.34   85.12   92.6    92.74   88.76   93.16   91.14
TweetSentimentExtraction     78.86   72.58   80.6    80.87   74.84   79.9    79.7
Arxiv-P2P                    54.91   54.46   53.76   55.80   55.44   54.42   54.02
Arxiv-S2S                    50.28   51.74   49.59   51.26   50.66   49.17   48.82
Biorxiv-P2P                  52.64   50.09   48.15   54.09   50.68   52.32   50.76
Biorxiv-S2S                  49.2    46.65   44.74   49.60   46.87   48.38   46.57
Medrxiv-P2P                  45.81   46.23   39.24   46.09   46.87   46.13   46.66
Medrxiv-S2S                  44.11   44.13   36.98   44.86   44.65   44.2    44.18
Reddit                       56.03   73.55   63.2    71.10   72.86   71.2    62.92
Reddit-P2P                   65.83   74.13   68.01   74.94   75.27   72.17   72.74
StackExchange                66.21   79.86   74.99   82.10   80.29   81.29   76.48
StackExchange-P2P            45.74   49.41   42.04   48.36   49.57   45.53   48.29
TwentyNewsgroups             70.44   53.91   60.13   64.82   61.43   68.51   66.42
AskUbuntuDupQuestions        64.59   67.58   67.5    67.46   67.33   64.8    66.71
MindSmallRerank              31.79   33.36   30.82   31.76   33.05   30.6    31.26
SciDocsRR                    87.6    89.09   87.26   87.59   89.2    86.9    87.29
StackOverflowDupQuestions    54.9    55.66   56.58   55.79   55.25   56.32   55.32
MTEB Average (56)            69.88   70.24   69.32   72.31   71.19   71.24   70.31
Table 15: Prompt template for short-long matching subgroup.
Brainstorm a list of potentially useful text retrieval tasks.
Here are a few examples for your reference:
- Given a web search query, retrieve relevant passages that answer the query
- Given a claim about climate change, retrieve documents that support or refute the claim
- Given a job title, search for job descriptions that provide information about the role
Please adhere to the following guidelines:
- Specify the type of query and the type of desired texts.
- Each retrieval task should cover a wide range of queries, and should not be too specific.
- Cover a wide range of query types and desired text types.
Your output must always be a JSON list of strings only, with about 40 elements, and each element corresponds to
a distinct retrieval task in one sentence. Do not explain yourself or output anything else. Be creative!
You have been assigned a retrieval task: {task}
Your mission is to write one text retrieval example for this task in JSON format. The JSON object must
contain the following keys:
- "user_query": a string, a random example of what is provided as specified by the task description.
- "positive_document": a string, a relevant document for the user query.
- "hard_negative_document1": a string, a hard negative document that is irrelevant but appears relevant to the query.
- "hard_negative_document2": a string, another hard negative document that is irrelevant but appears relevant to the query.
Please adhere to the following guidelines:
- The "user_query" should be {query_type}, {query_length}, {clarity}, and diverse in topic. The "user_query" should
not restate the task and just contain what the task description says is provided.
- All documents must be created independent of the query. Avoid copying the query verbatim. It’s acceptable if
some parts of the "positive_document" are not topically related to the query.
- All documents should be at least {num_words} words long.
- The "hard_negative_document1" may contain little useful information, but it should be less useful or
comprehensive compared to the "positive_document".
- The "hard_negative_document2" may should be about a related but different topic.
- Do not provide any explanation in any document on why it is relevant or not relevant to the query.
- Both the query and documents require {difficulty} level education to understand.
Your output must always be a JSON object only, do not explain yourself or output anything else. Be creative!"""
Placeholders:
“{query_type}” ∈ {extremely long-tail, long-tail, common}
“{query_length}” ∈ {less than 5 words, 5 to 15 words, at least 10 words}
“{difficulty}” ∈ {high school, college, PhD}
“{clarity}” ∈ {clear, understandable with some effort, ambiguous}
“{num_words}” ∈ {50, 100, 200, 300, 400, 500}
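A small helper like the one below (hypothetical, not part of the paper's pipeline) can be used to sample the placeholder values above and instantiate the prompt before sending it to the data-generating LLM; it assumes the template string contains only the placeholder braces shown in the table.

```python
import random

# Option lists mirror the placeholders above; `options` and `fill_template`
# are our own illustrative names.
options = {
    "query_type": ["extremely long-tail", "long-tail", "common"],
    "query_length": ["less than 5 words", "5 to 15 words", "at least 10 words"],
    "difficulty": ["high school", "college", "PhD"],
    "clarity": ["clear", "understandable with some effort", "ambiguous"],
    "num_words": [50, 100, 200, 300, 400, 500],
}

def fill_template(template: str, task: str) -> str:
    """Sample one value per placeholder and instantiate the prompt."""
    values = {key: random.choice(choices) for key, choices in options.items()}
    return template.format(task=task, **values)
```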
Table 16: Prompt template for long-short matching subgroup.
Brainstorm a list of potentially useful text classification tasks.
Please adhere to the following guidelines:
- Tasks should cover a diverse range of domains and task types.
Your output must always be a JSON list of strings only, with about 40 elements, and each element corresponds
to a distinct text classification task in one sentence. Do not explain yourself or output anything else. Be creative!
You have been assigned a text classification task: {task}
Your mission is to write one text classification example for this task in JSON format. The JSON object must
contain the following keys:
- "input_text": a string, the input text specified by the classification task.
- "label": a string, the correct
label of the input text.
- "misleading_label": a string, an incorrect label that is related to the task.
Please adhere to the following guidelines:
- The "input_text" should be {num_words} words and diverse in expression.
- The "misleading_label" must be a valid label for the given task, but not as appropriate as the "label" for the
"input_text".
- Avoid including the values of the "label" and "misleading_label" fields in the "input_text", that would make
the task too easy.
- The "input_text" is {clarity} and requires {difficulty} level education to comprehend.
Your output must always be a JSON object only, do not explain yourself or output anything else. Be creative!
Placeholders:
{num_words} ∈ {"less than 10","at least 10", "at least 50", "at least 100", "at least 200"}
{difficulty} ∈ {high school, college, PhD}
{clarity} ∈ {clear, understandable with some effort, ambiguous}