Modalities: Text
Formats: parquet
Languages: English
Libraries: Datasets, pandas

Dataset Viewer (auto-converted to Parquet)
_id (string)    text (string)    title (string)
TR1-d-0         Mar, 1192
TR1-d-1         May, 1747
TR1-d-2         Mar, 1249
TR1-d-3         Jul, 1078
TR1-d-4         Sep, 1908
TR1-d-5         Jan, 1780
TR1-d-6         Feb, 2007
TR1-d-7         Jan, 1660
…               …
TR1-d-99        Feb, 1126

(Preview of the first 100 corpus rows; the title field is empty in the previewed rows.)
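Since the corpus ships as Parquet, it can also be loaded directly with the datasets library. A minimal sketch; the repository id (mteb/TempReasonL1) and the "corpus" config name are assumptions based on the usual MTEB retrieval layout:

from datasets import load_dataset

# Assumption: the rows previewed above live in a "corpus" config of the
# mteb/TempReasonL1 repository (standard MTEB retrieval layout).
ds = load_dataset("mteb/TempReasonL1", "corpus")
split = next(iter(ds.values()))  # take whichever split is present
print(split[0])                  # {'_id': 'TR1-d-0', 'title': '', 'text': 'Mar, 1192'}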

TempReasonL1

An MTEB dataset (Massive Text Embedding Benchmark)

Measuring the ability to retrieve the ground-truth answers to reasoning task queries on TempReason L1.

Task category: t2t (text-to-text)
Domains: Encyclopaedic, Written
Reference: https://github.com/DAMO-NLP-SG/TempReason
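To see what the queries and corpus actually contain, the task data can be inspected through mteb itself. A minimal sketch, assuming the corpus/queries/relevant_docs attributes that mteb's retrieval tasks conventionally expose (these may change between versions):

import mteb

task = mteb.get_task("TempReasonL1")
task.load_data()

# Each attribute is a dict keyed by evaluation split (assumed layout).
split = task.metadata.eval_splits[0]
print(list(task.queries[split].items())[:2])  # query id -> query text
print(list(task.corpus[split].items())[:2])   # doc id -> {"title": ..., "text": ...}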

How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

import mteb

task = mteb.get_task("TempReasonL1")
evaluator = mteb.MTEB(tasks=[task])

# Replace YOUR_MODEL with the name of the model you want to evaluate,
# e.g. "sentence-transformers/all-MiniLM-L6-v2".
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)

To learn more about how to run models on mteb tasks, check out the GitHub repository.
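For a concrete end-to-end run, the sketch below picks an illustrative model and writes results to disk. The model name and output folder are placeholders, and the attributes on the returned results follow mteb's current API, which may differ across versions:

import mteb

# Illustrative model choice; any embedding model mteb supports will do.
model = mteb.get_model("sentence-transformers/all-MiniLM-L6-v2")

task = mteb.get_task("TempReasonL1")
evaluator = mteb.MTEB(tasks=[task])

# Per-task JSON results are written under output_folder.
results = evaluator.run(model, output_folder="results")

for res in results:
    print(res.task_name, res.scores)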

Citation

If you use this dataset, please cite both the dataset and mteb, as this dataset likely includes additional processing done as part of the MMTEB contribution.


@article{tan2023towards,
  author = {Tan, Qingyu and Ng, Hwee Tou and Bing, Lidong},
  journal = {arXiv preprint arXiv:2306.08952},
  title = {Towards benchmarking and improving the temporal reasoning capability of large language models},
  year = {2023},
}

@article{xiao2024rar,
  author = {Xiao, Chenghao and Hudson, G Thomas and Moubayed, Noura Al},
  journal = {arXiv preprint arXiv:2404.06347},
  title = {RAR-b: Reasoning as Retrieval Benchmark},
  year = {2024},
}


@article{enevoldsen2025mmtebmassivemultilingualtext,
  title = {MMTEB: Massive Multilingual Text Embedding Benchmark},
  author = {Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2502.13595},
  year = {2025},
  url = {https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Loïc and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}

Dataset Statistics

The following are the descriptive statistics for this task. They can also be obtained with:

import mteb

task = mteb.get_task("TempReasonL1")

desc_stats = task.metadata.descriptive_stats
{}

This dataset card was automatically generated using MTEB
