yifanzhang114 committed on
Commit f3e9c2e · verified · 1 Parent(s): 93fce1f

Update README.md

Files changed (1)
  1. README.md +9 -13
README.md CHANGED
@@ -18,25 +18,21 @@ size_categories:
 
 ## Dataset details
 
- Existing Multimodal Large Language Model benchmarks present several common barriers that make it difficult to measure the significant challenges that models face in the real world, including:
- 1) small data scale leading to large performance variance;
- 2) reliance on model-based annotations, resulting in significant model bias;
- 3) restricted data sources, often overlapping with existing benchmarks and posing a risk of data leakage;
- 4) insufficient task difficulty and discrimination, especially the limited image resolution.
-
- We present MME-RealWord, a benchmark meticulously designed to address real-world applications with practical relevance. Featuring 13,366 high-resolution images averaging 1,734 × 1,734 pixels, MME-RealWord poses substantial recognition challenges. Our dataset encompasses 29,429 annotations across 43 tasks, all expertly curated by a team of 25 crowdsource workers and 7 MLLM experts. The main advantages of MME-RealWorld compared to existing MLLM benchmarks as follows:
-
- 1. **Scale, Diversity, and Real-World Utility**: MME-RealWord is the largest fully human-annotated MLLM benchmark, covering 6 domains and 14 sub-classes, closely tied to real-world tasks.
-
- 2. **Quality**: The dataset features high-resolution images with crucial details and manual annotations verified by experts to ensure top-notch data quality.
-
- 3. **Safety**: MME-RealWord avoids data overlap with other benchmarks and relies solely on human annotations, eliminating model biases and personal bias.
-
- 4. **Difficulty and Distinguishability**: The dataset poses significant challenges, with models struggling to achieve even 55% accuracy in basic tasks, clearly distinguishing between different MLLMs.
-
- 5. **MME-RealWord-CN**: The dataset includes a specialized Chinese benchmark with images and questions tailored to Chinese contexts, overcoming issues in existing translated benchmarks.
+ Existing Multimodal Large Language Model benchmarks share several common barriers that make it difficult to measure the challenges models face in the real world, including:
+ 1) small data scale, leading to large performance variance;
+ 2) reliance on model-based annotations, resulting in restricted data quality;
+ 3) insufficient task difficulty, especially due to limited image resolution.
+
+ We present MME-RealWorld, a benchmark meticulously designed for real-world applications with practical relevance. Featuring 13,366 high-resolution images averaging 2,000 × 1,500 pixels, MME-RealWorld poses substantial recognition challenges. The dataset encompasses 29,429 annotations across 43 tasks, all expertly curated by a team of 25 crowdsource workers and 7 MLLM experts. The main advantages of MME-RealWorld over existing MLLM benchmarks are as follows:
+
+ 1. **Data Scale**: With the efforts of a total of 32 volunteers, we have manually annotated 29,429 QA pairs focused on real-world scenarios, making this the largest fully human-annotated benchmark known to date.
+
+ 2. **Data Quality**: 1) Resolution: Many image details, such as a scoreboard in a sports event, carry critical information. These details can only be properly interpreted with high-resolution images, which are essential for providing meaningful assistance to humans. To the best of our knowledge, MME-RealWorld features the highest average image resolution among existing competitors. 2) Annotation: All annotations are completed manually, with a professional team cross-checking the results to ensure data quality.
+
+ 3. **Task Difficulty and Real-World Utility**: Even the most advanced models have not surpassed 60% accuracy. Moreover, many real-world tasks are significantly more difficult than those in traditional benchmarks. For example, in video monitoring a model may need to count as many as 133 vehicles, and in remote sensing it must identify and count small objects on maps with an average resolution exceeding 5,000 × 5,000.
+
+ 4. **MME-RealWorld-CN**: Existing Chinese benchmarks are usually translated from their English versions, which has two limitations: 1) question-image mismatch, where an image tied to an English-language scenario is not intuitively connected to a Chinese question; and 2) translation mismatch [58], since machine translation is not always precise enough. We therefore collect additional images focused on Chinese scenarios and ask Chinese volunteers to annotate them, yielding 5,917 QA pairs.
+
  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623d8ca4c29adf5ef6175615/Do69D0sNlG9eqr9cyE7bm.png)
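
For readers who want to browse the benchmark programmatically, below is a minimal loading sketch using the Hugging Face `datasets` library. The repository ID, split name, and record fields are assumptions inferred from this dataset card rather than confirmed details; check the repository's actual file layout before relying on them.

```python
# Minimal sketch, assuming the dataset is hosted at "yifanzhang114/MME-RealWorld"
# and exposes a standard `datasets`-loadable configuration.
from datasets import load_dataset

# The split name "test" is an assumption; inspect the repository to see
# which splits and configurations are actually provided.
ds = load_dataset("yifanzhang114/MME-RealWorld", split="test")

# Each record pairs a high-resolution image with a human-annotated QA item;
# the field names printed here depend on the dataset's actual schema.
print(ds[0])
```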