Abstract
Multimodal Large Language Models (MLLMs) are currently experiencing rapid growth, driven by the advanced capabilities of LLMs. Unlike earlier specialists, existing MLLMs are evolving towards a Multimodal Generalist paradigm. Initially limited to understanding multiple modalities, these models have advanced to not only comprehend but also generate across modalities. Their capabilities have expanded from coarse-grained to fine-grained multimodal understanding and from supporting limited modalities to arbitrary ones. While many benchmarks exist to assess MLLMs, a critical question arises: can we simply assume that higher performance across tasks indicates a stronger MLLM capability, bringing us closer to human-level AI? We argue that the answer is not as straightforward as it seems. This project introduces General-Level, an evaluation framework that defines 5-scale levels of MLLM performance and generality, offering a methodology to compare MLLMs and gauge the progress of existing systems towards more robust multimodal generalists and, ultimately, towards AGI. At the core of the framework is the concept of Synergy, which measures whether models maintain consistent capabilities across comprehension and generation, and across multiple modalities. To support this evaluation, we present General-Bench, which encompasses a broader spectrum of skills, modalities, formats, and capabilities, including over 700 tasks and 325,800 instances. Evaluation results involving over 100 existing state-of-the-art MLLMs uncover the capability rankings of generalists, highlighting the challenges in reaching genuine AI. We expect this project to pave the way for future research on next-generation multimodal foundation models, providing a robust infrastructure to accelerate the realization of AGI. Project page: https://generalist.top/
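The paper defines the precise General-Level scoring; purely as an intuition for the Synergy idea, the toy Python sketch below scores a generalist by how closely it tracks specialist state of the art across a mix of comprehension and generation tasks. All task names, numbers, and the `synergy_score` helper are hypothetical illustrations, not the paper's actual metric.

```python
# Illustrative toy sketch only: the real General-Level scoring is more involved.
# Here we assume a simplified synergy notion where a generalist earns credit on
# a task only to the extent it matches or exceeds the best specialist.
from statistics import mean

# Hypothetical per-task scores (accuracy-like, in [0, 1]) for a generalist MLLM
# and the corresponding state-of-the-art specialist on each task.
generalist_scores = {
    "image_captioning": 0.81,      # comprehension task
    "image_generation": 0.74,      # generation task
    "video_qa": 0.66,
    "audio_classification": 0.70,
}
specialist_sota = {
    "image_captioning": 0.78,
    "image_generation": 0.80,
    "video_qa": 0.71,
    "audio_classification": 0.65,
}

def synergy_score(gen: dict, sota: dict) -> float:
    """Average ratio of generalist to specialist performance, capped at 1.

    A score near 1 suggests the model's multimodal training transfers
    (synergizes) across tasks rather than trading capabilities off
    against one another.
    """
    ratios = [min(gen[task] / sota[task], 1.0) for task in sota]
    return mean(ratios)

print(f"toy synergy score: {synergy_score(generalist_scores, specialist_sota):.3f}")
```

A generalist that dominates on comprehension but collapses on generation would score poorly under this toy measure, which is the intuition behind ranking models by synergy rather than by raw per-task averages.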
Community
ICML'25 paper (Spotlight): On Path to Multimodal Generalist: General-Level and General-Bench
This paper/project introduces:
- General-Level, a novel 5-scale level evaluation system with a new norm for assessing multimodal generalists (multimodal LLMs/agents) based on the level of synergy they exhibit across comprehension and generation tasks, as well as across multimodal interactions;
- General-Bench, a companion massive multimodal benchmark dataset encompassing a broader spectrum of skills, modalities, formats, and capabilities, including over 700 tasks and 325K instances.
Evaluation results involving over 100 existing state-of-the-art MLLMs uncover the capability rankings of generalists, highlighting the challenges in reaching genuine AI.
- Project: https://generalist.top/
- Leaderboard: https://generalist.top/leaderboard
- Paper: https://arxiv.org/abs/2505.04620
- Hugging Face Benchmark: https://huggingface.co/General-Level
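For hands-on use, the benchmark data is hosted under the Hugging Face organization linked above. A minimal loading sketch, assuming the `datasets` library and a dataset repository id such as `General-Level/General-Bench-Openset` (an assumed id; check the org page for the actual names and configurations):

```python
# Minimal sketch, assuming an illustrative repo id; the actual dataset
# names and configs are listed at https://huggingface.co/General-Level.
from datasets import load_dataset

# "General-Level/General-Bench-Openset" is an assumed id for illustration.
bench = load_dataset("General-Level/General-Bench-Openset", split="train")
print(bench[0])  # inspect one instance (fields vary by task and modality)
```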