ProfBench: Multi-Domain Rubrics requiring Professional Knowledge to Answer and Judge
Abstract
ProfBench evaluates large language models in professional domains using human-expert criteria, revealing challenges and performance disparities between proprietary and open-weight models.
Evaluating progress in large language models (LLMs) is often constrained by the challenge of verifying responses, limiting assessments to tasks like mathematics, programming, and short-form question-answering. However, many real-world applications require LLMs to process professional documents, synthesize information, and generate comprehensive reports in response to user queries. We introduce ProfBench: a set of over 7000 response-criterion pairs evaluated by human experts with professional knowledge across four domains: Physics PhD, Chemistry PhD, Finance MBA, and Consulting MBA. We build robust and affordable LLM-Judges to evaluate ProfBench rubrics by mitigating self-enhancement bias and reducing the cost of evaluation by 2-3 orders of magnitude, making evaluation fair and accessible to the broader community. Our findings reveal that ProfBench poses significant challenges even for state-of-the-art LLMs, with top-performing models such as GPT-5-high achieving only 65.9% overall performance. Furthermore, we identify notable performance disparities between proprietary and open-weight models and provide insights into the role that extended thinking plays in addressing complex, professional-domain tasks. Data: https://huggingface.co/datasets/nvidia/ProfBench and Code: https://github.com/NVlabs/ProfBench
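To make the rubric-based setup concrete, here is a minimal sketch of how an LLM-Judge could score response-criterion pairs and aggregate per-domain accuracy. The column names ("domain", "criterion", "response"), the split name, and the judge prompt are illustrative assumptions rather than the dataset's actual schema; only the dataset ID nvidia/ProfBench comes from the paper.

```python
# Minimal sketch of rubric-based judging in the style ProfBench describes:
# an LLM judge decides whether a response satisfies each expert-written
# criterion, and a domain score is the fraction of criteria satisfied.
# Field names and prompt wording below are assumptions for illustration.
from collections import defaultdict
from datasets import load_dataset

JUDGE_PROMPT = (
    "You are grading a report against one rubric criterion.\n"
    "Criterion: {criterion}\n\n"
    "Response:\n{response}\n\n"
    "Answer YES if the response satisfies the criterion, otherwise NO."
)


def judge_pair(criterion: str, response: str) -> bool:
    """Placeholder LLM-Judge call: plug in any chat-completion client
    and return True if its answer is YES, False otherwise."""
    prompt = JUDGE_PROMPT.format(criterion=criterion, response=response)
    raise NotImplementedError("call your LLM judge with `prompt` here")


def score_by_domain(rows) -> dict[str, float]:
    """Fraction of criteria judged as satisfied, grouped by domain."""
    met, total = defaultdict(int), defaultdict(int)
    for row in rows:
        domain = row["domain"]          # assumed column name
        total[domain] += 1
        if judge_pair(row["criterion"], row["response"]):
            met[domain] += 1
    return {d: met[d] / total[d] for d in total}


if __name__ == "__main__":
    ds = load_dataset("nvidia/ProfBench", split="train")  # split name assumed
    print(score_by_domain(ds))
```

In practice the paper's cost and self-enhancement-bias mitigations would shape which judge model backs `judge_pair`; the sketch only shows the scoring loop around it.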
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Hard2Verify: A Step-Level Verification Benchmark for Open-Ended Frontier Math (2025)
- ReTraceQA: Evaluating Reasoning Traces of Small Language Models in Commonsense Question Answering (2025)
- TutorBench: A Benchmark To Assess Tutoring Capabilities Of Large Language Models (2025)
- Deploying Tiny LVLM Judges for Real-World Evaluation of Chart Models: Lessons Learned and Best Practices (2025)
- mR3: Multilingual Rubric-Agnostic Reward Reasoning Models (2025)
- MatSciBench: Benchmarking the Reasoning Ability of Large Language Models in Materials Science (2025)
- EngiBench: A Benchmark for Evaluating Large Language Models on Engineering Problem Solving (2025)
If you want recommendations for any paper on Hugging Face, check out this Space. You can also ask Librarian Bot for paper recommendations directly by tagging it in a comment: @librarian-bot recommend