MOOSE-Chem3: Toward Experiment-Guided Hypothesis Ranking via Simulated Experimental Feedback
Abstract
A novel simulator and experiment-guided ranking method improve hypothesis prioritization in scientific discovery by incorporating simulated experimental outcomes.
Hypothesis ranking is a crucial component of automated scientific discovery, particularly in natural sciences where wet-lab experiments are costly and throughput-limited. Existing approaches focus on pre-experiment ranking, relying solely on large language models' internal reasoning without incorporating empirical outcomes from experiments. We introduce the task of experiment-guided ranking, which aims to prioritize candidate hypotheses based on the results of previously tested ones. However, developing such strategies is challenging due to the impracticality of repeatedly conducting real experiments in natural science domains. To address this, we propose a simulator grounded in three domain-informed assumptions, modeling hypothesis performance as a function of similarity to a known ground-truth hypothesis, perturbed by noise. We curate a dataset of 124 chemistry hypotheses with experimentally reported outcomes to validate the simulator. Building on this simulator, we develop a pseudo experiment-guided ranking method that clusters hypotheses by shared functional characteristics and prioritizes candidates based on insights derived from simulated experimental feedback. Experiments show that our method outperforms pre-experiment baselines and strong ablations.
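To make the simulator's core idea concrete, here is a minimal sketch of producing simulated feedback as a noisy function of similarity to a ground-truth hypothesis. The function names, the toy word-overlap similarity, and the Gaussian noise model are illustrative assumptions, not the paper's implementation:

```python
import random


def similarity(hypothesis: str, ground_truth: str) -> float:
    """Toy similarity in [0, 1]: Jaccard overlap of word sets.

    Stand-in only; the paper's similarity notion is domain-informed
    and not specified here.
    """
    a = set(hypothesis.lower().split())
    b = set(ground_truth.lower().split())
    return len(a & b) / len(a | b) if (a | b) else 0.0


def simulate_feedback(hypothesis: str, ground_truth: str,
                      noise_scale: float = 0.1) -> float:
    """Simulated experimental outcome: monotone in similarity to the
    ground-truth hypothesis, perturbed by noise, clipped to [0, 1].
    """
    noisy = similarity(hypothesis, ground_truth) + random.gauss(0.0, noise_scale)
    return max(0.0, min(1.0, noisy))
```

The key design point is that feedback is informative but imperfect: candidates closer to the ground truth tend to score higher, while the noise term keeps any single simulated result from fully revealing the answer.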
Community
This paper introduces experiment-guided hypothesis ranking, a novel setting where candidate hypotheses are prioritized based on experimental feedback from previously tested hypotheses.
To support research in this area, the work proposes a simulator grounded in three domain-informed assumptions that can generate simulated experimental feedback without requiring costly real-world trials.
The simulator is validated on a curated dataset of 124 chemistry hypotheses, and the resulting method outperforms strong pre-experiment baselines. This enables scalable research on feedback-driven hypothesis discovery strategies in scientific domains where empirical validation is expensive or slow.
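As a rough illustration of the feedback-driven loop this setting enables, the sketch below clusters candidates, spends a small experiment budget via the simulator, and re-ranks the remainder from observed feedback. All names, the greedy selection rule, and the scalar cluster scores are hypothetical simplifications, not the paper's method:

```python
from collections import defaultdict


def rank_with_feedback(candidates, cluster_key, run_experiment, budget=5):
    """Pseudo experiment-guided ranking (illustrative sketch only).

    candidates:     list of hypothesis strings
    cluster_key:    maps a hypothesis to a cluster label; stands in for
                    the paper's 'shared functional characteristics'
    run_experiment: returns a simulated outcome in [0, 1], e.g. a wrapper
                    around simulate_feedback from the sketch above
    budget:         number of simulated experiments allowed
    """
    tested = {}                      # hypothesis -> observed outcome
    cluster_obs = defaultdict(list)  # cluster label -> observed outcomes
    prior = 0.5                      # neutral score for unseen clusters

    def cluster_score(label):
        obs = cluster_obs[label]
        return sum(obs) / len(obs) if obs else prior

    for _ in range(budget):
        pool = [h for h in candidates if h not in tested]
        if not pool:
            break
        # Greedily test an untested hypothesis from the cluster whose
        # simulated feedback so far looks most promising.
        h = max(pool, key=lambda x: cluster_score(cluster_key(x)))
        outcome = run_experiment(h)
        tested[h] = outcome
        cluster_obs[cluster_key(h)].append(outcome)

    # Final ranking: observed outcomes first, cluster estimates otherwise.
    return sorted(candidates,
                  key=lambda h: tested.get(h, cluster_score(cluster_key(h))),
                  reverse=True)
```

A purely greedy rule like this can under-explore, and the paper's method reasons over richer insights than a scalar score per cluster, so treat this only as a scaffold for experimenting with the setting.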
This is an automated message from the Librarian Bot: the following papers, recommended by the Semantic Scholar API, are similar to this one.
- HypoBench: Towards Systematic and Principled Benchmarking for Hypothesis Generation (2025)
- LLM-SRBench: A New Benchmark for Scientific Equation Discovery with Large Language Models (2025)
- m-KAILIN: Knowledge-Driven Agentic Scientific Corpus Distillation Framework for Biomedical Large Language Models Training (2025)
- ChemRxivQuest: A Curated Chemistry Question-Answer Database Extracted from ChemRxiv Preprints (2025)
- Can LLMs Generate Tabular Summaries of Science Papers? Rethinking the Evaluation Protocol (2025)
- Harnessing LLMs Explanations to Boost Surrogate Models in Tabular Data Classification (2025)
- Entropy-Based Adaptive Weighting for Self-Training (2025)