Demystifying Scientific Problem-Solving in LLMs by Probing Knowledge and Reasoning
Abstract
The SciReas and SciReas-Pro benchmarks, together with the KRUX probing framework, provide insights into the distinct roles of knowledge and reasoning in scientific tasks, highlighting critical bottlenecks and improvements for LLMs.
Scientific problem-solving poses unique challenges for LLMs, requiring both deep domain knowledge and the ability to apply such knowledge through complex reasoning. While automated scientific reasoners hold great promise for assisting human scientists, there is currently no widely adopted holistic benchmark for evaluating scientific reasoning, and few approaches systematically disentangle the distinct roles of knowledge and reasoning in these tasks. To address these gaps, we introduce SciReas, a diverse suite of existing benchmarks for scientific reasoning tasks, and SciReas-Pro, a selective subset that requires more complex reasoning. Our holistic evaluation surfaces insights about scientific reasoning performance that remain hidden when relying on individual benchmarks alone. We then propose KRUX, a probing framework for studying the distinct roles of reasoning and knowledge in scientific tasks. Combining the two, we conduct an in-depth analysis that yields several key findings: (1) Retrieving task-relevant knowledge from model parameters is a critical bottleneck for LLMs in scientific reasoning; (2) Reasoning models consistently benefit from external knowledge added in-context on top of the reasoning enhancement; (3) Enhancing verbalized reasoning improves LLMs' ability to surface task-relevant knowledge. Finally, we conduct a lightweight analysis, comparing our science-focused data composition with concurrent efforts on long chain-of-thought (CoT) supervised fine-tuning (SFT), and release SciLit01, a strong 8B baseline for scientific reasoning.
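To make the probing idea concrete, below is a minimal sketch of a KRUX-style contrast. This is hypothetical illustration code, not the authors' released implementation: the model checkpoint, the `answer` helper, and the example fact are all assumptions. It scores the same question closed-book versus with the relevant fact supplied in context, which isolates knowledge retrieval from reasoning.

```python
# Hypothetical sketch of a KRUX-style probe (not the authors' code):
# compare a model's answer to the same science question with and without
# task-relevant knowledge placed in context.

from transformers import pipeline

# Any instruction-tuned causal LM works here; this checkpoint is illustrative.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

def answer(question: str, knowledge: str | None = None) -> str:
    """Ask the model a question, optionally prepending extracted knowledge."""
    prompt = ""
    if knowledge:
        prompt += f"Relevant facts:\n{knowledge}\n\n"
    prompt += f"Question: {question}\nAnswer:"
    out = generator(prompt, max_new_tokens=64, do_sample=False)
    # The pipeline returns the prompt plus the completion; strip the prompt.
    return out[0]["generated_text"][len(prompt):].strip()

question = "What is the bond angle in a water molecule?"

# Closed-book: the model must retrieve the fact from its own parameters.
baseline = answer(question)

# Knowledge-augmented: the same fact is supplied in context, so any gain
# suggests parametric retrieval (not reasoning) was the bottleneck.
augmented = answer(
    question,
    knowledge="The H-O-H bond angle in water is approximately 104.5 degrees.",
)

print("closed-book:", baseline)
print("with knowledge:", augmented)
```

Aggregated over a benchmark, the gap between the two conditions is one way to quantify finding (1) above: how much of a model's failure is missing retrieval rather than missing reasoning.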
Community
New paper: Demystifying Scientific Problem-Solving in LLMs. We separate what a model knows from how it reasons, and show how both matter for science tasks. https://arxiv.org/pdf/2508.19202v1
This is an automated message from the Librarian Bot. I found the following papers similar to this paper, recommended by the Semantic Scholar API:
- MegaScience: Pushing the Frontiers of Post-Training Datasets for Science Reasoning (2025)
- InfiAlign: A Scalable and Sample-Efficient Framework for Aligning LLMs to Enhance Reasoning Capabilities (2025)
- KG-o1: Enhancing Multi-hop Question Answering in Large Language Models via Knowledge Graph Integration (2025)
- Symbolic or Numerical? Understanding Physics Problem Solving in Reasoning LLMs (2025)
- Mixture of Reasonings: Teach Large Language Models to Reason with Adaptive Strategies (2025)
- The Challenge of Teaching Reasoning to LLMs Without RL or Distillation (2025)
- InternBootcamp Technical Report: Boosting LLM Reasoning with Verifiable Task Scaling (2025)