Dataset viewer preview (test-split qrels, auto-converted to Parquet). Each row maps a `query-id` (string, 3–6 chars) to a relevant `corpus-id` (string, 3–59 chars) with a relevance `score` (int64, always 1):

| query-id | corpus-id | score |
|----------|-----------|-------|
| 70041 | 2_Hearts_(Kylie_Minogue_song) | 1 |
| 164883 | Hezbollah | 1 |
| 7429 | Jenny_McCarthy | 1 |
| 195244 | Kevin_Bacon | 1 |
| 195244 | Sleepers | 1 |
| 2961 | Taarak_Mehta_Ka_Ooltah_Chashmah | 1 |
| 162206 | Ding_Yanyuhang | 1 |
| 49775 | Baloch_people | 1 |
| 1799 | Aphrodite | 1 |
| 114158 | Adobe_Photoshop | 1 |
| 3483 | Brie_Larson | 1 |
| 74648 | Ned_Stark | 1 |
| … | … | … |

(Preview truncated; the full split is available in the dataset files.)
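
To inspect these qrels outside the viewer, a minimal sketch using the Hugging Face `datasets` library follows. The repository id is taken from this card; the split name and reliance on the default config are assumptions that may need adjusting.

```python
# Minimal sketch for inspecting the qrels shown above. The repository id
# comes from this card; the split name and default config are assumptions.
from datasets import load_dataset

qrels = load_dataset("mteb/FEVERHardNegatives", split="test")
df = qrels.to_pandas()
print(df.head())  # columns: query-id, corpus-id, score
```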

# FEVERHardNegatives

An MTEB dataset
Massive Text Embedding Benchmark

FEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The hard-negative version was created by pooling the top 250 documents per query retrieved by BM25, e5-multilingual-large, and e5-mistral-instruct.
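
To make the pooling step concrete, here is a minimal sketch of how such a candidate pool could be assembled from per-retriever rankings. Everything here (the `rankings` structure, the `pool_candidates` name) is an illustrative assumption, not the actual MMTEB processing code.

```python
# Minimal sketch of top-k pooling across retrievers (illustrative only,
# not the actual MMTEB code). `rankings` maps a retriever name, e.g.
# "bm25", to {query_id: ranked list of corpus_ids} for that retriever.
def pool_candidates(
    rankings: dict[str, dict[str, list[str]]], top_k: int = 250
) -> dict[str, set[str]]:
    pool: dict[str, set[str]] = {}
    for per_query in rankings.values():
        for query_id, ranked_docs in per_query.items():
            # Union of each retriever's top-k documents for this query.
            pool.setdefault(query_id, set()).update(ranked_docs[:top_k])
    return pool
```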

| | |
|---|---|
| Task category | t2t |
| Domains | Encyclopaedic, Written |
| Reference | https://fever.ai/ |

## How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

```python
import mteb

tasks = mteb.get_tasks(tasks=["FEVERHardNegatives"])
evaluator = mteb.MTEB(tasks=tasks)

model = mteb.get_model("YOUR_MODEL_NAME")  # replace with the model you want to evaluate
evaluator.run(model)
```

To learn more about how to run models on MTEB tasks, check out the GitHub repository: https://github.com/embeddings-benchmark/mteb
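
If you want the scores in Python rather than only on disk, `run` returns result objects. A minimal self-contained sketch, assuming the `output_folder` argument and the result attributes behave as in common mteb usage (they may vary between versions):

```python
# Sketch: collect and print per-task scores. `output_folder` and the
# result attributes used below are assumptions based on common mteb usage.
import mteb

tasks = mteb.get_tasks(tasks=["FEVERHardNegatives"])
evaluator = mteb.MTEB(tasks=tasks)
model = mteb.get_model("YOUR_MODEL_NAME")  # hypothetical placeholder

results = evaluator.run(model, output_folder="results")
for res in results:
    print(res.task_name, res.scores)
```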

## Citation

If you use this dataset, please cite both the dataset and mteb, as this dataset likely includes additional processing as part of the MMTEB contribution.


```bibtex
@inproceedings{thorne-etal-2018-fever,
  abstract = {In this paper we introduce a new publicly available dataset for verification against textual sources, FEVER: Fact Extraction and VERification. It consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims are classified as Supported, Refuted or NotEnoughInfo by annotators achieving 0.6841 in Fleiss kappa. For the first two classes, the annotators also recorded the sentence(s) forming the necessary evidence for their judgment. To characterize the challenge of the dataset presented, we develop a pipeline approach and compare it to suitably designed oracles. The best accuracy we achieve on labeling a claim accompanied by the correct evidence is 31.87{\%}, while if we ignore the evidence we achieve 50.91{\%}. Thus we believe that FEVER is a challenging testbed that will help stimulate progress on claim verification against textual sources.},
  address = {New Orleans, Louisiana},
  author = {Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit},
  booktitle = {Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)},
  doi = {10.18653/v1/N18-1074},
  editor = {Walker, Marilyn and Ji, Heng and Stent, Amanda},
  month = jun,
  pages = {809--819},
  publisher = {Association for Computational Linguistics},
  title = {{FEVER}: a Large-scale Dataset for Fact Extraction and {VER}ification},
  url = {https://aclanthology.org/N18-1074},
  year = {2018},
}

@article{enevoldsen2025mmtebmassivemultilingualtext,
  title = {MMTEB: Massive Multilingual Text Embedding Benchmark},
  author = {Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2502.13595},
  year = {2025},
  url = {https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}
```

## Dataset Statistics

The following descriptive statistics characterize the task. They can also be obtained programmatically:

```python
import mteb

task = mteb.get_task("FEVERHardNegatives")

desc_stats = task.metadata.descriptive_stats
```

```json
{
    "test": {
        "num_samples": 164698,
        "number_of_characters": 114054968,
        "num_documents": 163698,
        "min_document_length": 2,
        "average_document_length": 696.4370242764114,
        "max_document_length": 29033,
        "unique_documents": 163698,
        "num_queries": 1000,
        "min_query_length": 15,
        "average_query_length": 49.62,
        "max_query_length": 172,
        "unique_queries": 1000,
        "none_queries": 0,
        "num_relevant_docs": 1171,
        "min_relevant_docs_per_query": 1,
        "average_relevant_docs_per_query": 1.171,
        "max_relevant_docs_per_query": 15,
        "unique_relevant_docs": 677,
        "num_instructions": null,
        "min_instruction_length": null,
        "average_instruction_length": null,
        "max_instruction_length": null,
        "unique_instructions": null,
        "num_top_ranked": null,
        "min_top_ranked_per_query": null,
        "average_top_ranked_per_query": null,
        "max_top_ranked_per_query": null
    }
}
```
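
As a quick sanity check, the derived averages in the JSON follow directly from the raw counts; a minimal sketch using only values copied from above:

```python
# Quick consistency check of the derived statistics above; all numbers are
# copied from the JSON, so no mteb call is needed.
num_queries = 1000
num_relevant_docs = 1171
num_documents = 163_698
number_of_characters = 114_054_968
average_query_length = 49.62

# average_relevant_docs_per_query = 1171 / 1000
assert num_relevant_docs / num_queries == 1.171

# Splitting total characters between documents and queries recovers the
# reported average document length (~696.44).
doc_chars = number_of_characters - num_queries * average_query_length
print(doc_chars / num_documents)
```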

This dataset card was automatically generated using MTEB.
