xbench: Tracking Agents Productivity Scaling with Profession-Aligned Real-World Evaluations
Abstract
We introduce xbench, a dynamic, profession-aligned evaluation suite designed to bridge the gap between AI agent capabilities and real-world productivity. While existing benchmarks often focus on isolated technical skills, they may not accurately reflect the economic value agents deliver in professional settings. To address this, xbench targets commercially significant domains with evaluation tasks defined by industry professionals. Our framework creates metrics that strongly correlate with productivity value, enables prediction of Technology-Market Fit (TMF), and facilitates tracking of product capabilities over time. As our initial implementations, we present two benchmarks: Recruitment and Marketing. For Recruitment, we collect 50 tasks from real-world headhunting business scenarios to evaluate agents' abilities in company mapping, information retrieval, and talent sourcing. For Marketing, we assess agents' ability to match influencers with advertiser needs, evaluating their performance across 50 advertiser requirements using a curated pool of 836 candidate influencers. We present initial evaluation results for leading contemporary agents, establishing a baseline for these professional domains. Our continuously updated evalsets and evaluations are available at https://xbench.org.
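To make the Marketing setup concrete, below is a minimal, hypothetical sketch of how an influencer-matching task could be scored against a candidate pool. The abstract does not specify the actual scoring protocol; the task fields, the `recommend` interface, and the recall@k metric here are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch (not the xbench protocol): scoring an influencer-matching
# task in the style of the Marketing benchmark. Field names, the agent interface,
# and the recall@k metric are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class MarketingTask:
    """One advertiser requirement with a gold set of suitable influencers."""
    requirement: str                                   # natural-language advertiser brief
    gold_influencer_ids: set[str] = field(default_factory=set)


def recall_at_k(predicted_ids: list[str], gold_ids: set[str], k: int = 10) -> float:
    """Fraction of gold influencers recovered in the agent's top-k recommendations."""
    if not gold_ids:
        return 0.0
    hits = sum(1 for pid in predicted_ids[:k] if pid in gold_ids)
    return hits / len(gold_ids)


def evaluate_agent(agent, tasks: list[MarketingTask], candidate_pool: list[dict], k: int = 10) -> float:
    """Average recall@k over all advertiser requirements.

    `agent` is assumed to expose recommend(requirement, candidate_pool) -> list[str],
    returning influencer ids ranked by predicted fit.
    """
    scores = []
    for task in tasks:
        ranked_ids = agent.recommend(task.requirement, candidate_pool)
        scores.append(recall_at_k(ranked_ids, task.gold_influencer_ids, k))
    return sum(scores) / len(scores) if scores else 0.0
```

In the paper's setting, `tasks` would correspond to the 50 advertiser requirements and `candidate_pool` to the curated set of 836 influencers; any agent exposing a ranking interface could be plugged in for comparison.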