
Browsing Lost Unformed Recollections


The leaderboard can be found at https://huggingface.co/spaces/PatronusAI/BLUR-leaderboard. If you use this dataset or find it helpful in your research, please cite our paper:

Paper: http://arxiv.org/abs/2503.19193

@misc{chwang2025blur,
    title = {Browsing {Lost} {Unformed} {Recollections}: {A} {Benchmark} for {Tip}-of-the-{Tongue} {Search} and {Reasoning}},
    shorttitle = {Browsing {Lost} {Unformed} {Recollections}},
    url = {http://arxiv.org/abs/2503.19193},
    doi = {10.48550/arXiv.2503.19193},
    abstract = {We introduce Browsing Lost Unformed Recollections, a tip-of-the-tongue known-item search and reasoning benchmark for general AI assistants. BLUR introduces a set of 573 real-world validated questions that demand searching and reasoning across multi-modal and multilingual inputs, as well as proficient tool use, in order to excel on. Humans easily ace these questions (scoring on average 98\%), while the best-performing system scores around 56\%. To facilitate progress toward addressing this challenging and aspirational use case for general AI assistants, we release 350 questions through a public leaderboard, retain the answers to 250 of them, and have the rest as a private test set.},
    urldate = {2025-03-26},
    publisher = {arXiv},
    author = {CH-Wang, Sky and Deshpande, Darshan and Muresan, Smaranda and Kannappan, Anand and Qian, Rebecca},
    month = mar,
    year = {2025},
    note = {arXiv:2503.19193 [cs]},
    keywords = {Computer Science - Artificial Intelligence, Computer Science - Computation and Language, Computer Science - Information Retrieval, Computer Science - Multiagent Systems},
}

Task Description

Have you ever been caught at a loss for words? Where you know something exists—and can describe it—but don’t know the exact or key phrase to search for on Google?

The BLUR (Browsing Lost Unformed Recollections) benchmark is a search-based benchmark for aspirational general AI assistants. Its challenges recreate the experience of searching for information when you have only a vague or incomplete recollection of the target: the user must piece together fragmented memories, descriptions, or related concepts to find the correct answer, much like trying to recall a word, phrase, or idea when it is on the tip of your tongue.

The benchmark evaluates systems in a zero-shot manner based on their ability to:

  • Handle Ambiguity: Recognize and interpret partial or unclear user input to generate relevant search results or answers.
  • Contextual Matching: Infer the correct answer from disjointed descriptions and provide responses that align with the user's intended, though imprecisely described, goal.
  • Reason with Tools: Leverage and reason over multiple calls to its suite of tools to gather and synthesize scattered information into coherent, contextually accurate conclusions. The benchmark measures how well a system can piece together incomplete data using reasoning across its toolset.
  • Multimodal Reasoning: Interpret and integrate input from different modalities (e.g., text, images, audio) to form a more complete understanding of the user's needs. This challenge tests the ability to combine and reason over multiple types of content, enhancing its search capabilities to retrieve the most relevant results.

By addressing these challenges, the BLUR benchmark seeks to assess how well AI systems can support users who are searching based on intuition and fragmented memories rather than precise keywords or phrases. The benchmark aims to foster the development of more intuitive, more realistic, and more memory-aided search technologies that accommodate the natural imperfections of human memory.

Example Queries

I am trying to remember the title of a book I once read. It was published in 2017 and its cover had the image of a snowman looking over some mountains. It had something to do with the search for knowledge. I cannot remember the author of the book but he was a co-author of another book, Conversaciones para Triunfar. What is the title of the book I am looking for?

I visited a bank in Ibadan with a friend, but I can’t recall its name or location. It was my first time in Ibadan, so my memory of the place is a bit vague. However, I remember taking a picture of an attractive building located opposite the bank. I’ll attach the picture, can you help me identify the bank and its address? (Image not shown here for display purposes)

Dataset Structure

Metadata Description

  • query: the primary query.
  • file: the filepath, if a file is attached to the query.
  • scaffolded_query: the query wrapped in a consistent prompt scaffold for evaluation.
  • answer: the final answer to the query; populated only in the validation set.
  • domain: the domain category to which the query belongs.
  • difficulty: the level of complexity or difficulty assigned to the query.
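
These fields can be inspected directly with the datasets library. The snippet below is a minimal sketch: the repository ID and split name are placeholder assumptions, so substitute the values shown on this page, and note that gated access requires authenticating first (e.g. with huggingface-cli login).

# Minimal sketch of loading BLUR with the `datasets` library.
# "PatronusAI/BLUR" and the split name are placeholders (assumptions);
# use the repository ID and split shown on this page.
from datasets import load_dataset

ds = load_dataset("PatronusAI/BLUR", split="validation")

# Each record exposes the metadata fields described above.
for row in ds.select(range(3)):
    print(row["query"], row["file"], row["domain"], row["difficulty"], sep=" | ")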
License

This dataset is distributed under the MIT License.

The Benchmark

To construct this dataset, we invited annotators to reflect on recent or current instances where they struggled to recall the name of something. Annotators were asked to describe everything they could recall about the item in question, framing it as a prompt they might use to seek help online or from a friend; they could also attach a file in addition to their text input if they wished. We then tasked the writer with locating the item whose name they had struggled to recall. Separately, a different annotator (the validator) was challenged to identify the item based solely on the original description prompt provided by the writer. Both annotators were given access to a web browser and documented their search process step by step. If the validator's answer matched the writer's, the prompt was included in the final dataset. Otherwise, we presented the validator with the correct answer along with the writer's search steps and evaluated post-hoc agreement: whether the validator acknowledged their error and could clearly articulate their mistake. If post-hoc agreement was achieved, the prompt was included in the final dataset; otherwise, it was discarded. Finally, prompts were minimally edited to standardize formatting and correct typos.

Unambiguous Answers. The majority of the effort in this two-stage dataset creation process—prompt writing followed by validation—focused on ensuring that the prompts were unambiguous, meaning they led to a single, correct answer. In doing so, we deliberately avoided adversarial dataset construction, as it not only obscures the specific abilities benchmarks aim to measure but also undermines ecological validity.

Multimodal and Multilingual. While annotators were instructed to write their prompts in English, no language restrictions were placed on the details of the items remembered. Notably, approximately 30% of our dataset is multilingual: this includes cases where parts of the description are written in other languages, or where the description is in one language but the item itself belongs to a different language. Similarly, 35% of our dataset is multimodal at input, featuring prompts accompanied by file attachments rather than being exclusively text-based. Attached files included sketches of the items recalled, similar images found online, video and audio files in which the item appeared, and more. Note that, in addition to the explicit multimodal understanding required to process file inputs, a majority of BLUR queries, despite being text-only themselves, also require reasoning over multimodal sources of information encountered during web search (images, videos, and more).

Ease of Use. Answers in the dataset are concise, consisting of a single string that can be evaluated for correctness using a weak string match. Prompts are designed for zero-shot answering and evaluation, structured within a question scaffold that constrains the date and time range as well as the answer format.
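
The exact matching rule is not specified here; the sketch below illustrates one possible weak string match, under the assumption that the normalized (lowercased, punctuation-stripped) gold answer simply needs to appear in the normalized system response.

import re

def _normalize(text: str) -> str:
    # Lowercase, replace punctuation with spaces, and collapse whitespace.
    text = re.sub(r"[^\w\s]", " ", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def weak_match(prediction: str, answer: str) -> bool:
    # True if the normalized gold answer is contained in the normalized prediction.
    return _normalize(answer) in _normalize(prediction)

# Generic illustration (not an item from the dataset):
print(weak_match("My best guess is: Casablanca (1942).", "Casablanca"))  # True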

Difficulty. The time validators took to answer these queries naturally serves as a proxy for their difficulty for humans. Based on these times, we divided the dataset into three difficulty levels: easy, for questions resolved in under 10 minutes; medium, for those requiring 10 to 20 minutes; and hard, for those taking over 20 minutes to answer.
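
For reference, these thresholds translate into a simple bucketing rule; the sketch below is illustrative, and the treatment of the exact 10- and 20-minute boundaries is an assumption.

def difficulty_from_minutes(minutes: float) -> str:
    # Bucket a validator's solve time into the three levels described above.
    # Handling of the exact 10- and 20-minute boundaries is an assumption.
    if minutes < 10:
        return "easy"
    if minutes <= 20:
        return "medium"
    return "hard"

print(difficulty_from_minutes(7))   # easy
print(difficulty_from_minutes(15))  # medium
print(difficulty_from_minutes(25))  # hard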
