---
language: en
license: cc-by-4.0
size_categories:
  - 10K<n<100K
task_categories:
  - text-ranking
  - text-retrieval
tags:
  - retrieval
  - embeddings
  - benchmark
dataset_info:
  - config_name: default
    features:
      - name: query-id
        dtype: string
      - name: corpus-id
        dtype: string
      - name: score
        dtype: int64
    splits:
      - name: test
        num_examples: 2000
  - config_name: corpus
    features:
      - name: _id
        dtype: string
      - name: title
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: corpus
        num_examples: 50000
  - config_name: queries
    features:
      - name: _id
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: queries
        num_examples: 1000
configs:
  - config_name: default
    data_files:
      - split: test
        path: qrels.jsonl
  - config_name: corpus
    data_files:
      - split: corpus
        path: corpus.jsonl
  - config_name: queries
    data_files:
      - split: queries
        path: queries.jsonl
---

# LIMIT

A retrieval dataset that exposes fundamental theoretical limitations of embedding-based retrieval models. Despite the simplicity of the queries (e.g., "Who likes Apples?"), state-of-the-art embedding models achieve less than 20% recall@100 on the full LIMIT corpus and cannot solve even LIMIT-small (46 documents).

## Links

- Paper: [On the Theoretical Limitations of Embedding-Based Retrieval](https://arxiv.org/abs/2508.21038)
- LIMIT-small: [orionweller/LIMIT-small](https://huggingface.co/datasets/orionweller/LIMIT-small)

## Sample Usage

You can load the data with the Hugging Face `datasets` library (both LIMIT and LIMIT-small follow the same layout):

```python
from datasets import load_dataset

ds = load_dataset("orionweller/LIMIT-small", "corpus")  # other configs: "queries" and "default" (the qrels, split "test")
```
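A slightly fuller sketch that loads all three configurations and peeks at one row of each (field names follow the metadata above; the repository name can be swapped for the full-corpus dataset):

```python
from datasets import load_dataset

repo = "orionweller/LIMIT-small"  # swap for the full-corpus repository if desired

# Each configuration loads as a DatasetDict with a single split.
corpus = load_dataset(repo, "corpus")["corpus"]     # fields: _id, title, text
queries = load_dataset(repo, "queries")["queries"]  # fields: _id, text
qrels = load_dataset(repo, "default")["test"]       # fields: query-id, corpus-id, score

print(corpus[0], queries[0], qrels[0], sep="\n")
```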

## Dataset Details

**Queries (1,000):** Simple questions asking "Who likes [attribute]?"

- Examples: "Who likes Quokkas?", "Who likes Joshua Trees?", "Who likes Disco Music?"

**Corpus (50k documents):** Short biographical texts describing people and their preferences

- Format: "[Name] likes [attribute1] and [attribute2]."
- Example: "Geneva Durben likes Quokkas and Apples."

**Qrels (2,000):** Each query has exactly 2 relevant documents (score=1); together, the qrels cover nearly all possible 2-document combinations of the 46 documents that appear as relevant (C(46,2) = 1,035 combinations).
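Because relevance is binary and every query has exactly two gold documents, recall@k is straightforward to compute once you have rankings from your own retriever. A minimal sketch (the `ranked` argument is a placeholder for whatever embedding model is being evaluated):

```python
from collections import defaultdict
from datasets import load_dataset

qrels = load_dataset("orionweller/LIMIT-small", "default")["test"]

# Map each query id to its set of relevant corpus ids (2 per query).
relevant = defaultdict(set)
for row in qrels:
    if row["score"] > 0:
        relevant[row["query-id"]].add(row["corpus-id"])

def recall_at_k(ranked, k=100):
    """ranked: dict of query-id -> list of corpus ids, best first."""
    per_query = [
        len(set(ranked.get(qid, [])[:k]) & gold) / len(gold)
        for qid, gold in relevant.items()
    ]
    return sum(per_query) / len(per_query)
```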

## Format

The dataset follows standard MTEB format with three configurations:

- `default`: Query-document relevance judgments (qrels); keys: `corpus-id`, `query-id`, `score` (1 for relevant)
- `queries`: Query texts with IDs; keys: `_id`, `text`
- `corpus`: Document texts with IDs; keys: `_id`, `title` (empty), and `text`
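Concretely, the raw JSONL files contain records shaped like the following (the ids are purely illustrative; the texts are the examples quoted above):

```python
# corpus.jsonl (config "corpus")
{"_id": "doc_0", "title": "", "text": "Geneva Durben likes Quokkas and Apples."}

# queries.jsonl (config "queries")
{"_id": "query_0", "text": "Who likes Quokkas?"}

# qrels.jsonl (config "default", split "test")
{"query-id": "query_0", "corpus-id": "doc_0", "score": 1}
```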

## Purpose

LIMIT tests whether embedding models can represent all top-k combinations of relevant documents, building on theoretical results that connect embedding dimension to representational capacity. Despite the simple nature of the queries, state-of-the-art models struggle because of these fundamental dimensional limits.

## Citation

```bibtex
@misc{weller2025theoreticallimit,
      title={On the Theoretical Limitations of Embedding-Based Retrieval},
      author={Orion Weller and Michael Boratko and Iftekhar Naim and Jinhyuk Lee},
      year={2025},
      eprint={2508.21038},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2508.21038},
}
```