arxiv:2508.11987

FutureX: An Advanced Live Benchmark for LLM Agents in Future Prediction

Published on Aug 16
Submitted by liujiashuo77 on Aug 21
#3 Paper of the day
Abstract

FutureX is a dynamic, live benchmark for evaluating LLM agents in future prediction tasks, addressing challenges in real-time updates and data contamination.

AI-generated summary

Future prediction is a complex task for LLM agents, requiring a high level of analytical thinking, information gathering, contextual understanding, and decision-making under uncertainty. Agents must not only gather and interpret vast amounts of dynamic information but also integrate diverse data sources, weigh uncertainties, and adapt predictions based on emerging trends, just as human experts do in fields like politics, economics, and finance. Despite its importance, no large-scale benchmark exists for evaluating agents on future prediction, largely due to challenges in handling real-time updates and retrieving timely, accurate answers. To address this, we introduce FutureX, a dynamic and live evaluation benchmark specifically designed for LLM agents performing future prediction tasks. FutureX is the largest and most diverse live benchmark for future prediction, supporting real-time daily updates and eliminating data contamination through an automated pipeline for question gathering and answer collection. We evaluate 25 LLM/agent models, including those with reasoning, search capabilities, and integration of external tools such as the open-source Deep Research Agent and closed-source Deep Research models. This comprehensive evaluation assesses agents' adaptive reasoning and performance in dynamic environments. Additionally, we provide in-depth analyses of agents' failure modes and performance pitfalls in future-oriented tasks, including their vulnerability to fake web pages and issues of temporal validity. Our goal is to establish a dynamic, contamination-free evaluation standard that drives the development of LLM agents capable of performing at the level of professional human analysts in complex reasoning and predictive thinking.
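For intuition only, a live-benchmark daily update cycle of the kind the abstract describes could look roughly like the sketch below. This is not the authors' pipeline: every name here (`Question`, `gather_questions`, `resolve_answers`, `daily_update`) and the placeholder scraping/resolution logic are assumptions used solely to illustrate why collecting questions before their outcomes exist anywhere keeps evaluation contamination-free.

```python
# Hypothetical sketch of a live forecasting-benchmark update loop.
# The paper describes an automated pipeline for daily question gathering and
# answer collection; this code is an illustrative stand-in, not their implementation.
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class Question:
    text: str               # e.g. "Will event X happen by the resolution date?"
    resolution_date: date   # future date on which the ground-truth answer becomes known
    answer: str | None = None


def gather_questions(for_day: date) -> list[Question]:
    """Placeholder for curating events whose outcomes are still unknown on `for_day`."""
    return [Question(text=f"Example question generated on {for_day}",
                     resolution_date=for_day + timedelta(days=7))]


def resolve_answers(questions: list[Question], today: date) -> list[Question]:
    """Placeholder for collecting ground-truth answers once events have occurred."""
    return [q for q in questions if q.resolution_date <= today and q.answer is not None]


def daily_update(pool: list[Question], today: date) -> tuple[list[Question], list[Question]]:
    """One cycle of a live benchmark: add fresh questions, resolve matured ones.

    Because each question is posed before its outcome exists anywhere online,
    a model cannot have memorized the answer during pretraining.
    """
    pool.extend(gather_questions(today))
    resolved = resolve_answers(pool, today)
    open_questions = [q for q in pool if q not in resolved]
    return open_questions, resolved
```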

Community

Paper author · Paper submitter

A truly contamination-free benchmark!

The world’s first live benchmark for real future prediction, free of any data contamination and covering diverse domains such as politics, economics, culture, and sports. A reliable benchmark for testing LLM agents' planning, searching, and reasoning capabilities!

Paper author

This has huge economic potential! Real-world trends (such as stock market fluctuations, epidemic spread, and technology adoption curves) emerge from the interaction of a large number of heterogeneous actors (people, institutions, and companies). AI agents should be able to do better at predicting them.



Models citing this paper: 0

Datasets citing this paper: 2

Spaces citing this paper: 0

Collections including this paper: 5