ACADREASON: Exploring the Limits of Reasoning Models with Academic Research Problems
Abstract
The Acadreason benchmark evaluates LLMs and agents on high-level academic reasoning across multiple domains, revealing significant capability gaps.
In recent years, the focus of research on large language models (LLMs) and agents has shifted from demonstrating novel capabilities toward complex reasoning and the ability to tackle challenging tasks. However, existing evaluations concentrate mainly on math and coding contests or general-purpose tasks, while available multi-domain academic benchmarks lack sufficient reasoning depth, leaving the field without a rigorous benchmark for high-level reasoning. To fill this gap, we introduce the Acadreason benchmark, designed to evaluate the ability of LLMs and agents to acquire and reason over academic knowledge. It consists of 50 expert-annotated academic problems across five reasoning-intensive domains: computer science, economics, law, mathematics, and philosophy. All questions are sourced from top-tier publications of recent years and undergo rigorous annotation and quality control to ensure they are both challenging and answerable. We conduct systematic evaluations of more than 10 mainstream LLMs and agents. The results show that most LLMs score below 20 points, with even the cutting-edge GPT-5 reaching only 16 points; agents achieve higher scores, but none exceeds 40 points. This exposes the gap between the capabilities of current LLMs and agents and the demands of super-intelligent academic research, and highlights the difficulty of Acadreason.
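To make the evaluation protocol described above more concrete, the sketch below shows a minimal scoring loop over a set of academic problems. It is not the authors' evaluation code: the JSONL file name, the field names (`domain`, `question`, `checklist`), and the keyword-matching scorer are all assumptions for illustration only; Acadreason's actual rubric and grading procedure are defined in the paper.

```python
import json
import re
from typing import Callable, Dict, List

# Hypothetical record format: one JSON object per line with "domain",
# "question", and "checklist" (key points a correct answer must cover).
def load_problems(path: str) -> List[Dict]:
    with open(path, "r", encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

def score_answer(prediction: str, checklist: List[str]) -> float:
    """Toy checklist scorer: fraction of key points mentioned, scaled to 100."""
    hits = sum(
        1 for point in checklist
        if re.search(re.escape(point), prediction, flags=re.IGNORECASE)
    )
    return 100.0 * hits / max(len(checklist), 1)

def evaluate(model: Callable[[str], str], problems: List[Dict]) -> Dict[str, float]:
    """Average score per domain, plus an overall average, for one model."""
    per_domain: Dict[str, List[float]] = {}
    for item in problems:
        prediction = model(item["question"])
        per_domain.setdefault(item["domain"], []).append(
            score_answer(prediction, item["checklist"])
        )
    report = {d: sum(v) / len(v) for d, v in per_domain.items()}
    all_scores = [s for v in per_domain.values() for s in v]
    report["overall"] = sum(all_scores) / max(len(all_scores), 1)
    return report

if __name__ == "__main__":
    problems = load_problems("acadreason_sample.jsonl")  # hypothetical file name
    dummy_model = lambda q: "placeholder answer"         # replace with a real LLM call
    print(evaluate(dummy_model, problems))
```

In practice the `dummy_model` callable would wrap an LLM or agent API call, and the checklist matcher would be replaced by the benchmark's own grading rubric (e.g., expert- or judge-based scoring).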
Community
Really thoughtful work! ACADREASON seems like a valuable step toward deeper academic reasoning evaluation—looking forward to seeing how it evolves and inspires future benchmarks.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Benchmarking Chinese Commonsense Reasoning with a Multi-hop Reasoning Perspective (2025)
- EngiBench: A Benchmark for Evaluating Large Language Models on Engineering Problem Solving (2025)
- StatEval: A Comprehensive Benchmark for Large Language Models in Statistics (2025)
- ELAIPBench: A Benchmark for Expert-Level Artificial Intelligence Paper Understanding (2025)
- MSCoRe: A Benchmark for Multi-Stage Collaborative Reasoning in LLM Agents (2025)
- Demystifying Scientific Problem-Solving in LLMs by Probing Knowledge and Reasoning (2025)
- Atomic Thinking of LLMs: Decoupling and Exploring Mathematical Reasoning Abilities (2025)