| query-id | corpus-id | score |
|---|---|---|
| query_0 | Geneva Durben | 1 |
| query_0 | Dorathea Bastress | 1 |
| query_1 | Geneva Durben | 1 |
| query_1 | Armand Schweda | 1 |
| query_2 | Geneva Durben | 1 |
| query_2 | Flor Lemaire | 1 |
| query_3 | Geneva Durben | 1 |
| query_3 | Pate Lindley | 1 |
| query_4 | Geneva Durben | 1 |
| query_4 | Shelvia Goike | 1 |
| query_5 | Geneva Durben | 1 |
| query_5 | Ovid Rahm | 1 |
| query_6 | Geneva Durben | 1 |
| query_6 | Bronson Saelee | 1 |
| query_7 | Geneva Durben | 1 |
| query_7 | Gladstone Oonk | 1 |
| query_8 | Geneva Durben | 1 |
| query_8 | Ofelia Rosselot | 1 |
| query_9 | Geneva Durben | 1 |
| query_9 | Tisha Ghent | 1 |
| query_10 | Geneva Durben | 1 |
| query_10 | Herminia Caranto | 1 |
| query_11 | Geneva Durben | 1 |
| query_11 | Linzy Recknor | 1 |
| query_12 | Geneva Durben | 1 |
| query_12 | Vinie Relford | 1 |
| query_13 | Geneva Durben | 1 |
| query_13 | Jerrod Dumpit | 1 |
| query_14 | Geneva Durben | 1 |
| query_14 | Amaris Grow | 1 |
| query_15 | Geneva Durben | 1 |
| query_15 | Marcellus Meachum | 1 |
| query_16 | Geneva Durben | 1 |
| query_16 | Wellington Hinn | 1 |
| query_17 | Geneva Durben | 1 |
| query_17 | Georgette Cagna | 1 |
| query_18 | Geneva Durben | 1 |
| query_18 | Laurine Bellizzi | 1 |
| query_19 | Geneva Durben | 1 |
| query_19 | Agnes Reap | 1 |
| query_20 | Geneva Durben | 1 |
| query_20 | Sheree Riddley | 1 |
| query_21 | Geneva Durben | 1 |
| query_21 | Mathew Weierke | 1 |
| query_22 | Geneva Durben | 1 |
| query_22 | Casimiro Steo | 1 |
| query_23 | Geneva Durben | 1 |
| query_23 | Maryann Bohnsack | 1 |
| query_24 | Geneva Durben | 1 |
| query_24 | Flo Zaugg | 1 |
| query_25 | Geneva Durben | 1 |
| query_25 | Nathen Saadia | 1 |
| query_26 | Geneva Durben | 1 |
| query_26 | Ruby Gaskin | 1 |
| query_27 | Geneva Durben | 1 |
| query_27 | Jerrie Roupe | 1 |
| query_28 | Geneva Durben | 1 |
| query_28 | Camisha Bogosian | 1 |
| query_29 | Geneva Durben | 1 |
| query_29 | Gaetano Argel | 1 |
| query_30 | Geneva Durben | 1 |
| query_30 | Nathaniel Robens | 1 |
| query_31 | Geneva Durben | 1 |
| query_31 | Tarik Hollfelder | 1 |
| query_32 | Geneva Durben | 1 |
| query_32 | Riya Hayhoe | 1 |
| query_33 | Geneva Durben | 1 |
| query_33 | Chaney Gertman | 1 |
| query_34 | Geneva Durben | 1 |
| query_34 | Cristy Walford | 1 |
| query_35 | Geneva Durben | 1 |
| query_35 | Eustace Comment | 1 |
| query_36 | Geneva Durben | 1 |
| query_36 | Terrell Varadarajan | 1 |
| query_37 | Geneva Durben | 1 |
| query_37 | Darwyn Raio | 1 |
| query_38 | Geneva Durben | 1 |
| query_38 | Eudora Cervero | 1 |
| query_39 | Geneva Durben | 1 |
| query_39 | Jacey Gnatek | 1 |
| query_40 | Geneva Durben | 1 |
| query_40 | Elam Mejiamejia | 1 |
| query_41 | Geneva Durben | 1 |
| query_41 | Celia Marszalek | 1 |
| query_42 | Geneva Durben | 1 |
| query_42 | Aliza Uhlrich | 1 |
| query_43 | Geneva Durben | 1 |
| query_43 | Chadwick Frisella | 1 |
| query_44 | Geneva Durben | 1 |
| query_44 | Theola Laudermilk | 1 |
| query_45 | Dorathea Bastress | 1 |
| query_45 | Armand Schweda | 1 |
| query_46 | Dorathea Bastress | 1 |
| query_46 | Flor Lemaire | 1 |
| query_47 | Dorathea Bastress | 1 |
| query_47 | Pate Lindley | 1 |
| query_48 | Dorathea Bastress | 1 |
| query_48 | Shelvia Goike | 1 |
| query_49 | Dorathea Bastress | 1 |
| query_49 | Ovid Rahm | 1 |
# LIMIT-small

A retrieval dataset that exposes fundamental theoretical limitations of embedding-based retrieval models. Despite the simplicity of the queries (e.g., "Who likes Apples?"), state-of-the-art embedding models achieve less than 20% recall@100 on the full LIMIT dataset and cannot solve even the 46-document LIMIT-small.
## Links
- Paper: On the Theoretical Limitations of Embedding-Based Retrieval
- Code: github.com/google-deepmind/limit
- Full version: LIMIT (50k documents)
- Small version: LIMIT-small (46 documents only)
## Dataset Details
**Queries (1,000):** Simple questions asking "Who likes [attribute]?"
- Examples: "Who likes Quokkas?", "Who likes Joshua Trees?", "Who likes Disco Music?"

**Corpus (46 documents):** Short biographical texts describing people and their preferences
- Format: "[Name] likes [attribute1] and [attribute2]."
- Example: "Geneva Durben likes Quokkas and Apples."

**Qrels (2,000):** Each query has exactly 2 relevant documents (score=1), creating nearly all possible combinations of 2 documents from the 46 corpus documents (C(46, 2) = 1,035 combinations).
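A quick sanity check of these counts, as a minimal Python sketch (the variable names are only illustrative):

```python
from math import comb

n_docs, docs_per_query, n_queries = 46, 2, 1000

# Each query is relevant to exactly 2 documents, so the qrels contain
# 1,000 * 2 = 2,000 (query, document) judgments, and the 1,000 query-specific
# document pairs cover nearly all of the comb(46, 2) possible pairs.
print(comb(n_docs, docs_per_query))   # 1035
print(n_queries * docs_per_query)     # 2000
```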
## Format

The dataset follows standard MTEB format with three configurations:

- `default`: query-document relevance judgments (qrels); keys: `corpus-id`, `query-id`, `score` (1 for relevant)
- `queries`: query texts with IDs; keys: `_id`, `text`
- `corpus`: document texts with IDs; keys: `_id`, `title` (empty), and `text`
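As an illustration, here is a minimal sketch that loads the three configurations with the `datasets` library and scores an off-the-shelf embedding model with recall@2 (each query has exactly two relevant documents). The repository path and the example model are assumptions, not part of this card, and split names are discovered rather than hard-coded:

```python
from collections import defaultdict

import numpy as np
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

REPO = "<org>/LIMIT-small"  # placeholder -- replace with this dataset's Hub path

def first_split(config):
    """Load a configuration and return its first (typically only) split."""
    dd = load_dataset(REPO, config)
    return dd[next(iter(dd))]

qrels = first_split("default")    # keys: query-id, corpus-id, score
queries = first_split("queries")  # keys: _id, text
corpus = first_split("corpus")    # keys: _id, title, text

# Map each query id to its set of relevant document ids.
relevant = defaultdict(set)
for row in qrels:
    relevant[row["query-id"]].add(row["corpus-id"])

# Any embedding model works here; all-MiniLM-L6-v2 is just a small example.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
doc_ids = corpus["_id"]
doc_emb = model.encode(corpus["text"], normalize_embeddings=True)
query_emb = model.encode(queries["text"], normalize_embeddings=True)

# Recall@2: fraction of each query's relevant documents found in its top 2.
hits = total = 0
for qid, q_vec in zip(queries["_id"], query_emb):
    top2 = np.argsort(-(doc_emb @ q_vec))[:2]
    hits += len({doc_ids[i] for i in top2} & relevant[qid])
    total += len(relevant[qid])

print(f"recall@2 = {hits / total:.3f}")
```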
## Purpose

LIMIT tests whether embedding models can represent all top-k combinations of relevant documents, based on theoretical results connecting embedding dimension to representational capacity. Despite the simple nature of the queries, state-of-the-art models struggle because of these fundamental dimensional limitations.
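The following toy sketch (an added illustration, not the paper's construction) shows the counting pressure at a tiny scale: with 1-dimensional embeddings, three documents can only ever surface the two largest or the two smallest scores as a top-2 result, so at most 2 of the C(3, 2) = 3 pairs are reachable by any query, while 2 dimensions typically suffice to reach all 3.

```python
import numpy as np

rng = np.random.default_rng(0)

def reachable_top2_sets(doc_emb, n_queries=20_000):
    """Sample random query vectors and collect every top-2 document set they induce."""
    queries = rng.normal(size=(n_queries, doc_emb.shape[1]))
    scores = queries @ doc_emb.T                  # (n_queries, n_docs) dot products
    top2 = np.argsort(-scores, axis=1)[:, :2]     # indices of the two highest scores
    return {frozenset(row) for row in top2}

docs_1d = rng.normal(size=(3, 1))  # 3 documents embedded in 1 dimension
docs_2d = rng.normal(size=(3, 2))  # 3 documents embedded in 2 dimensions

print(len(reachable_top2_sets(docs_1d)), "of 3 possible pairs reachable with d=1")
print(len(reachable_top2_sets(docs_2d)), "of 3 possible pairs reachable with d=2")
```

LIMIT applies the same pressure at scale: 1,000 queries whose relevant sets cover nearly all 1,035 pairs of the 46 documents.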
## Citation
@misc{weller2025theoreticallimit,
title={On the Theoretical Limitations of Embedding-Based Retrieval},
author={Orion Weller and Michael Boratko and Iftekhar Naim and Jinhyuk Lee},
year={2025},
eprint={2508.21038},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2508.21038},
}