LLM-as-a-Judge & Reward Model: What They Can and Cannot Do Paper • 2409.11239 • Published Sep 17, 2024
Understand, Solve and Translate: Bridging the Multilingual Mathematical Reasoning Gap Paper • 2501.02448 • Published Jan 2025
LLM-as-an-Interviewer: Beyond Static Testing Through Dynamic LLM Evaluation Paper • 2412.10424 • Published Dec 10, 2024
Bridging the Data Provenance Gap Across Text, Speech and Video Paper • 2412.17847 • Published Dec 2024
Evaluating Language Models as Synthetic Data Generators Paper • 2412.03679 • Published Dec 4, 2024
Surveying the Effects of Quality, Diversity, and Complexity in Synthetic Data From Large Language Models Paper • 2412.02980 • Published Dec 4, 2024
MM-Eval: A Multilingual Meta-Evaluation Benchmark for LLM-as-a-Judge and Reward Models Paper • 2410.17578 • Published Oct 23, 2024
Better Instruction-Following Through Minimum Bayes Risk Paper • 2410.02902 • Published Oct 3, 2024
Pangea: A Fully Open Multilingual Multimodal LLM for 39 Languages Paper • 2410.16153 • Published Oct 21, 2024
Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Multimodal Models Paper • 2409.17146 • Published Sep 25, 2024