  - split: test
    path: data/test-*
---
# MMK12

[\[📂 GitHub\]](https://github.com/ModalMinds/MM-EUREKA) [\[📜 Paper\]](https://arxiv.org/abs/2503.07365v2)

***`2025/04/16:` We release a new version of MMK12, which greatly enhances the multimodal reasoning of Qwen-2.5-VL.***

We use MMK12 for RL training to develop MM-EUREKA-7B and MM-EUREKA-32B; specific training details are available in the [paper](https://arxiv.org/abs/2503.07365v2).
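
The RL training above relies on verified answers rather than a learned reward model. As a rough illustration of a rule-based (verifiable) reward — a minimal sketch, not the authors' actual implementation; the normalization rules here are assumptions:

```python
def normalize(ans: str) -> str:
    """Canonicalize an answer string for exact-match comparison.

    Case folding, whitespace stripping, and dropping a trailing
    period are assumed normalization rules for illustration.
    """
    return ans.strip().rstrip(".").lower()


def rule_based_reward(prediction: str, reference: str) -> float:
    """Binary verifiable reward: 1.0 when the predicted final answer
    matches the verified reference answer, else 0.0."""
    return 1.0 if normalize(prediction) == normalize(reference) else 0.0
```

Because each MMK12 sample carries a verified `answer`, a binary reward of this shape needs no reward model, which is what makes large-scale rule-based RL practical.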

Both models perform strongly on the MMK12 evaluation set (a multidisciplinary multimodal reasoning benchmark), with MM-EUREKA-32B ranking second only to o1.

| Model | Mathematics | Physics | Chemistry | Biology | Avg. |
|-------|-------------|---------|-----------|---------|------|
| **Closed-Source Models** | | | | | |
| Claude3.7-Sonnet | 57.4 | 53.4 | 55.4 | 55.0 | 55.3 |
| GPT-4o | 55.8 | 41.2 | 47.0 | 55.4 | 49.9 |
| o1 | 81.6 | 68.8 | 71.4 | 74.0 | 73.9 |
| Gemini2-flash | 76.8 | 53.6 | 64.6 | 66.0 | 65.2 |
| **Open-Source General Models** | | | | | |
| InternVL2.5-VL-8B | 46.8 | 35.0 | 50.0 | 50.8 | 45.6 |
| Qwen-2.5-VL-7B | 58.4 | 45.4 | 56.4 | 54.0 | 53.6 |
| InternVL2.5-VL-38B | 61.6 | 49.8 | 60.4 | 60.0 | 58.0 |
| Qwen-2.5-VL-32B | 71.6 | 59.4 | 69.6 | 66.6 | 66.8 |
| InternVL2.5-VL-78B | 59.8 | 53.2 | 68.0 | 65.2 | 61.6 |
| Qwen-2.5-VL-72B | 75.6 | 64.8 | 69.6 | 72.0 | 70.5 |
| **Open-Source Reasoning Models** | | | | | |
| InternVL2.5-8B-MPO | 26.6 | 25.0 | 42.4 | 44.0 | 34.5 |
| InternVL2.5-38B-MPO | 41.4 | 42.8 | 55.8 | 53.2 | 48.3 |
| QVQ-72B-Preview | 61.4 | 57.4 | 62.6 | 64.4 | 61.5 |
| Adora | 63.6 | 50.6 | 59.0 | 59.0 | 58.1 |
| R1-Onevision | 44.8 | 33.8 | 39.8 | 40.8 | 39.8 |
| OpenVLThinker-7 | 63.0 | 53.8 | 60.6 | 65.0 | 60.6 |
| **Ours** | | | | | |
| MM-Eureka-7B | 71.2 | 56.2 | 65.2 | 65.2 | 64.5 |
| MM-Eureka-32B | 74.6 | 62.0 | 75.4 | 76.8 | 72.2 |
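
The Avg. column tracks the unweighted mean of the four subject scores; a quick sanity check on two rows copied from the table above (the helper function is ours, for illustration only):

```python
def subject_mean(scores):
    """Unweighted mean of the four per-subject scores, one decimal place."""
    return round(sum(scores) / len(scores), 1)

# Mathematics, Physics, Chemistry, Biology scores from the table above.
qwen_72b = [75.6, 64.8, 69.6, 72.0]
mm_eureka_32b = [74.6, 62.0, 75.4, 76.8]

print(subject_mean(qwen_72b))       # 70.5, matching the reported Avg.
print(subject_mean(mm_eureka_32b))  # 72.2, matching the reported Avg.
```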
67 |
+
|
68 |
+
## Data fields

| Key        | Description                                    |
| ---------- | ---------------------------------------------- |
| `id`       | Sample ID.                                     |
| `subject`  | Subject: math, physics, chemistry, or biology. |
| `image`    | Image path.                                    |
| `question` | Input query.                                   |
| `answer`   | Verified answer.                               |
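
Each row can be consumed as a plain dict with the fields above. A minimal sketch of per-subject filtering, using mock records so it is self-contained (the `load_dataset` call in the comment is the usual Hugging Face pattern; `<this-repo-id>` stands in for the dataset's repository ID):

```python
# With the `datasets` library the split would typically be loaded as:
#   from datasets import load_dataset
#   ds = load_dataset("<this-repo-id>", split="test")
# Mock records below stand in for real rows so the sketch runs offline.

mock_rows = [
    {"id": "0", "subject": "math",    "image": "images/0.png", "question": "…", "answer": "42"},
    {"id": "1", "subject": "physics", "image": "images/1.png", "question": "…", "answer": "9.8"},
    {"id": "2", "subject": "math",    "image": "images/2.png", "question": "…", "answer": "7"},
]

def by_subject(rows, subject):
    """Keep only the rows for one subject (math, physics, chemistry, or biology)."""
    return [r for r in rows if r["subject"] == subject]

math_rows = by_subject(mock_rows, "math")
print([r["id"] for r in math_rows])  # ['0', '2']
```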

## Citation

If you find this project useful in your research, please consider citing:

```bibtex
@article{meng2025mm,
  title={MM-Eureka: Exploring Visual Aha Moment with Rule-based Large-scale Reinforcement Learning},
  author={Meng, Fanqing and Du, Lingxiao and Liu, Zongkai and Zhou, Zhixiang and Lu, Quanfeng and Fu, Daocheng and Shi, Botian and Wang, Wenhai and He, Junjun and Zhang, Kaipeng and others},
  journal={arXiv preprint arXiv:2503.07365},
  year={2025}
}
```