nielsr (HF Staff) committed
Commit a0ceb66 · verified · 1 Parent(s): b832b91

Add metadata and link to paper

This PR adds metadata (task category, license, language) to the dataset card. It also adds a link to the paper: https://huggingface.co/papers/2505.04620.

Files changed (1)
  1. README.md +279 -272
README.md CHANGED
---
task_categories:
- any-to-any
license: unknown
language:
- en
---

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/N8lP93rB6lL3iqzML4SKZ.png' width=100px>

<h1 align="center"><b>On Path to Multimodal Generalist: Levels and Benchmarks</b></h1>
<p align="center">
<a href="https://generalist.top/">[📖 Project]</a>
<a href="https://level.generalist.top">[🏆 Leaderboard]</a>
<a href="https://huggingface.co/papers/2505.04620">[📄 Paper]</a>
<a href="https://huggingface.co/General-Level">[🤗 Dataset-HF]</a>
<a href="https://github.com/path2generalist/GeneralBench">[📁 Dataset-Github]</a>
</p>
</div>

---
We divide our benchmark into two settings: **`open`** and **`closed`**.

<!-- This is the **`open benchmark`** of Generalist-Bench, where we release the full ground-truth annotations for all datasets.
It allows researchers to train and evaluate their models with access to the answers.

If you wish to thoroughly evaluate your model's performance, please use the
[👉 closed benchmark](https://huggingface.co/datasets/General-Level/General-Bench-Closeset), which comes with detailed usage instructions.

Final results will be updated on the [🏆 Leaderboard](https://level.generalist.top). -->

This is the **`closed benchmark`** of Generalist-Bench, where we release only the question annotations (**without ground-truth answers**) for all datasets.

You can follow the detailed [usage](#usage) instructions to submit the results generated by your own model.

Final results will be updated on the [🏆 Leaderboard](https://level.generalist.top).

If you'd like to train or evaluate your model with access to the full answers, please check out the [👉 open benchmark](https://huggingface.co/datasets/General-Level/General-Bench-Openset), where all ground-truth annotations are provided.

---

## 📕 Table of Contents

- [✨ File Organization Structure](#filestructure)
- [🍟 Usage](#usage)
- [🌐 General-Bench](#bench)
- [🍕 Capabilities and Domains Distribution](#distribution)
- [🖼️ Image Task Taxonomy](#imagetaxonomy)
- [📽️ Video Task Taxonomy](#videotaxonomy)
- [📞 Audio Task Taxonomy](#audiotaxonomy)
- [💎 3D Task Taxonomy](#3dtaxonomy)
- [📚 Language Task Taxonomy](#languagetaxonomy)

---

<span id='filestructure'/>

# ✨✨✨ **File Organization Structure**

Here is the organization structure of the file system:
```
General-Bench
├── Image
│   ├── comprehension
│   │   ├── Bird-Detection
│   │   │   ├── annotation.json
│   │   │   └── images
│   │   │       └── Acadian_Flycatcher_0070_29150.jpg
│   │   ├── Bottle-Anomaly-Detection
│   │   │   ├── annotation.json
│   │   │   └── images
│   │   └── ...
│   └── generation
│       └── Layout-to-Face-Image-Generation
│           ├── annotation.json
│           └── images
│               └── ...
├── Video
│   ├── comprehension
│   │   └── Human-Object-Interaction-Video-Captioning
│   │       ├── annotation.json
│   │       └── videos
│   │           └── ...
│   └── generation
│       └── Scene-Image-to-Video-Generation
│           ├── annotation.json
│           └── videos
│               └── ...
├── 3d
│   ├── comprehension
│   │   └── 3D-Furniture-Classification
│   │       ├── annotation.json
│   │       └── pointclouds
│   │           └── ...
│   └── generation
│       └── Text-to-3D-Living-and-Arts-Point-Cloud-Generation
│           ├── annotation.json
│           └── pointclouds
│               └── ...
├── Audio
│   ├── comprehension
│   │   └── Accent-Classification
│   │       ├── annotation.json
│   │       └── audios
│   │           └── ...
│   └── generation
│       └── Video-To-Audio
│           ├── annotation.json
│           └── audios
│               └── ...
└── NLP
    ├── History-Question-Answering
    │   └── annotation.json
    ├── Abstractive-Summarization
    │   └── annotation.json
    └── ...
```
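Given the layout above, the tasks can be enumerated by walking the tree for `annotation.json` files. A minimal sketch (the `iter_tasks` helper name is ours; annotation fields vary by task, so the parsed JSON is returned as-is):

```python
import json
from pathlib import Path

def iter_tasks(root):
    """Yield (modality, category, task_name, annotations) for every task
    directory that contains an annotation.json file.

    Assumes the General-Bench layout above: NLP tasks sit directly under
    NLP/, while other modalities nest under comprehension/ or generation/.
    """
    root = Path(root)
    for ann_file in sorted(root.rglob("annotation.json")):
        task_dir = ann_file.parent
        parts = task_dir.relative_to(root).parts
        if len(parts) == 2:   # e.g. NLP/History-Question-Answering
            modality, category, task = parts[0], None, parts[1]
        else:                 # e.g. Image/comprehension/Bird-Detection
            modality, category, task = parts[0], parts[1], parts[2]
        with open(ann_file, encoding="utf-8") as f:
            yield modality, category, task, json.load(f)
```

Media files referenced by an annotation live next to it, under `images/`, `videos/`, `pointclouds/`, or `audios/`.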

An illustrative example of file formats:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/RD3b7Jwu0dftVq-4KbpFr.png)

<span id='usage'/>

## 🍟🍟🍟 Usage

Please download all the files in this repository. We also provide `overview.json`, an example of the format of our dataset.

xxxx
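One way to download the full repository is via `huggingface_hub` (a sketch, assuming `pip install huggingface_hub`; the repo id is taken from the links above, and `download_closed_set` is our helper name):

```python
def download_closed_set(local_dir="General-Bench-Closeset"):
    """Fetch every file of this dataset repo into local_dir."""
    # Lazy import so the helper is importable without the package installed.
    from huggingface_hub import snapshot_download
    return snapshot_download(
        repo_id="General-Level/General-Bench-Closeset",
        repo_type="dataset",   # this repo is a dataset, not a model
        local_dir=local_dir,
    )
```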

---

<span id='bench'/>

# 🌐🌐🌐 **General-Bench**

A companion massive multimodal benchmark dataset that encompasses a broader spectrum of skills, modalities, formats, and capabilities, including over **`700`** tasks and **`325K`** instances.

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/d4TIWw3rlWuxpBCEpHYJB.jpeg'>
<p>Overview of General-Bench, which covers 145 skills for more than 700 tasks with over 325,800 samples under comprehension and generation categories in various modalities.</p>
</div>

<span id='distribution'/>

## 🍕🍕🍕 Capabilities and Domains Distribution

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/fF3iH95B3QEBvJYwqzZVG.png'>
<p>Distribution of various capabilities evaluated in General-Bench.</p>
</div>

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/wQvllVeK-KC3Edp8Zjh-V.png'>
<p>Distribution of various domains and disciplines covered by General-Bench.</p>
</div>

<span id='imagetaxonomy'/>

# 🖼️ Image Task Taxonomy

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/2QYihQRhZ5C9K5IbukY7R.png'>
<p>Taxonomy and hierarchy of data for the Image modality.</p>
</div>

<span id='videotaxonomy'/>

# 📽️ Video Task Taxonomy

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/A7PwfW5gXzstkDH49yIG5.png'>
<p>Taxonomy and hierarchy of data for the Video modality.</p>
</div>

<span id='audiotaxonomy'/>

# 📞 Audio Task Taxonomy

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/e-QBvBjeZy8vmcBjAB0PE.png'>
<p>Taxonomy and hierarchy of data for the Audio modality.</p>
</div>

<span id='3dtaxonomy'/>

# 💎 3D Task Taxonomy

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/EBXb-wyve14ExoLCgrpDK.png'>
<p>Taxonomy and hierarchy of data for the 3D modality.</p>
</div>

<span id='languagetaxonomy'/>

# 📚 Language Task Taxonomy

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/FLfk3QGdYb2sgorKTj_LT.png'>
<p>Taxonomy and hierarchy of data for the Language modality.</p>
</div>

---

# 🚩 **Citation**

If you find our benchmark useful in your research, please kindly consider citing us:

```
@article{generalist2025,
  title={On Path to Multimodal Generalist: Levels and Benchmarks},
  author={Hao Fei and Yuan Zhou and Juncheng Li and Xiangtai Li and Qingshan Xu and Bobo Li and Shengqiong Wu and Yaoting Wang and Junbao Zhou and Jiahao Meng and Qingyu Shi and Zhiyuan Zhou and Liangtao Shi and Minghe Gao and Daoan Zhang and Zhiqi Ge and Siliang Tang and Kaihang Pan and Yaobo Ye and Haobo Yuan and Tao Zhang and Weiming Wu and Tianjie Ju and Zixiang Meng and Shilin Xu and Liyu Jia and Wentao Hu and Meng Luo and Jiebo Luo and Tat-Seng Chua and Hanwang Zhang and Shuicheng Yan},
  journal={arXiv preprint arXiv:2505.04620},
  year={2025}
}
```