StructEval: Benchmarking LLMs' Capabilities to Generate Structural Outputs Paper • 2505.20139 • Published May 26, 2025 • 18
QuickVideo: Real-Time Long Video Understanding with System Algorithm Co-Design Paper • 2505.16175 • Published May 22, 2025 • 40
General-Reasoner: Advancing LLM Reasoning Across All Domains Paper • 2505.14652 • Published May 20, 2025 • 22
MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks Paper • 2410.10563 • Published Oct 14, 2024 • 39
MantisScore: Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation Paper • 2406.15252 • Published Jun 21, 2024 • 18
WildVision: Evaluating Vision-Language Models in the Wild with Human Preferences Paper • 2406.11069 • Published Jun 16, 2024 • 14
GenAI Arena: An Open Evaluation Platform for Generative Models Paper • 2406.04485 • Published Jun 6, 2024 • 23
VIEScore: Towards Explainable Metrics for Conditional Image Synthesis Evaluation Paper • 2312.14867 • Published Dec 22, 2023 • 1
MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI Paper • 2311.16502 • Published Nov 27, 2023 • 35
TIGERScore: Towards Building Explainable Metric for All Text Generation Tasks Paper • 2310.00752 • Published Oct 1, 2023 • 3
LLM-Blender: Ensembling Large Language Models with Pairwise Ranking and Generative Fusion Paper • 2306.02561 • Published Jun 5, 2023 • 6
PairReranker: Pairwise Reranking for Natural Language Generation Paper • 2212.10555 • Published Dec 20, 2022