Improve dataset card: Add task categories, language, tags, paper links, sample usage, and citation

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +123 -13
README.md CHANGED
@@ -1,24 +1,31 @@
---
license: cc-by-nc-4.0
modalities:
- - audio
- - text
configs:
- config_name: temporal_reasoning
  data_files:
  - split: test
- path: "meta_info/holistic_reasoning_temporal.json"
  default: true
-
- config_name: spatial_reasoning
  data_files:
  - split: test
- path: "meta_info/holistic_reasoning_spatial.json"
-
- config_name: perception
  data_files:
  - split: test
- path: "meta_info/foundation_perception.json"
---

<div align="center">
@@ -57,7 +64,7 @@ configs:
</p>
<p align="center" style="font-size: 1em; margin-top: -1em"> <sup>*</sup> Equal Contribution. <sup>&dagger;</sup>Corresponding authors. </p>
<p align="center" style="font-size: 1.2em; margin-top: 0.5em">
- 📖<a href="">arXiv</a>
|🏠<a href="https://github.com/InternLM/StarBench">Code</a>
|🌐<a href="https://internlm.github.io/StarBench/">Homepage</a>
| 🤗<a href="https://huggingface.co/datasets/internlm/STAR-Bench">Dataset</a>
@@ -72,7 +79,7 @@ We formalize <strong>audio 4D intelligence</strong> that is defined as reasoning
<img src="assets/teaser.png" alt="teaser" width="100%">
</p>
Unlike prior benchmarks where caption-only answering reduces accuracy slightly, STAR-Bench induces far larger drops (-31.5\% temporal, -35.2\% spatial), evidencing its focus on <strong>linguistically hard-to-describe cues</strong>.
- Evaluating 19 models reveals substantial gaps to humans and a capability hierarchy. Our STAR-Bench provides critical insights and a clear path forward for developing future models with a more robust understanding of the physical world.

Benchmark examples are illustrated below. You can also visit the [homepage](https://internlm.github.io/StarBench/) for a more intuitive overview.
</p>
@@ -117,19 +124,122 @@ For the holistic spatio-temporal reasoning task, the curation process comprises
<img src="assets/pipeline.png" alt="pipeline" width="90%">
</p>


- ## ✒️Citation
```
- TBD
```

- ## 📄 License
- ![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg) ![Data License](https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg) **Usage and License Notices**: The data and code are intended and licensed for research use only.

---
license: cc-by-nc-4.0
modalities:
+ - audio
+ - text
+ language:
+ - en
+ task_categories:
+ - audio-text-to-text
+ tags:
+ - 4d-intelligence
+ - spatio-temporal-reasoning
+ - audio-reasoning
+ - audio-benchmark
configs:
- config_name: temporal_reasoning
  data_files:
  - split: test
+ path: meta_info/holistic_reasoning_temporal.json
  default: true
- config_name: spatial_reasoning
  data_files:
  - split: test
+ path: meta_info/holistic_reasoning_spatial.json
- config_name: perception
  data_files:
  - split: test
+ path: meta_info/foundation_perception.json
---

<div align="center">
 
</p>
<p align="center" style="font-size: 1em; margin-top: -1em"> <sup>*</sup> Equal Contribution. <sup>&dagger;</sup>Corresponding authors. </p>
<p align="center" style="font-size: 1.2em; margin-top: 0.5em">
+ 📖<a href="https://huggingface.co/papers/2510.24693">Paper</a> | 📖<a href="https://arxiv.org/abs/2510.24693">arXiv</a>
|🏠<a href="https://github.com/InternLM/StarBench">Code</a>
|🌐<a href="https://internlm.github.io/StarBench/">Homepage</a>
| 🤗<a href="https://huggingface.co/datasets/internlm/STAR-Bench">Dataset</a>
 
<img src="assets/teaser.png" alt="teaser" width="100%">
</p>
Unlike prior benchmarks where caption-only answering reduces accuracy slightly, STAR-Bench induces far larger drops (-31.5\% temporal, -35.2\% spatial), evidencing its focus on <strong>linguistically hard-to-describe cues</strong>.
+ Evaluating 19 models reveals substantial gaps compared with humans and a capability hierarchy. Our STAR-Bench provides critical insights and a clear path forward for developing future models with a more robust understanding of the physical world.

Benchmark examples are illustrated below. You can also visit the [homepage](https://internlm.github.io/StarBench/) for a more intuitive overview.
</p>
 
<img src="assets/pipeline.png" alt="pipeline" width="90%">
</p>

+ ## 🛠️ Sample Usage
+ The `ALMEval_code/` directory is partially adapted from [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) and [Kimi-Audio-Evalkit](https://github.com/MoonshotAI/Kimi-Audio-Evalkit).
+ It provides a unified evaluation pipeline for multimodal large models on **STAR-Bench**.
+
+ **Step 1: Prepare Environment**
+
+ ```bash
+ git clone https://github.com/InternLM/StarBench.git
+ cd StarBench
+ conda create -n starbench python=3.10 -y
+ conda activate starbench
+ pip install -r requirements.txt
+ cd ALMEval_code
+ ```
+
+ **Step 2: Get STAR-Bench v1.0 Dataset**
+
+ Download the STAR-Bench v1.0 dataset from 🤗[HuggingFace](https://huggingface.co/datasets/internlm/STAR-Bench):
+ ```bash
+ huggingface-cli download --repo-type dataset --resume-download internlm/STAR-Bench --local-dir your_local_data_dir
+ ```
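+
+ Alternatively, the download and the metadata configs can be accessed from Python. This is a minimal sketch (not part of the official instructions); it assumes the `meta_info/*.json` files load cleanly with the `datasets` JSON builder, and `your_local_data_dir` is a placeholder:
+ ```python
+ from huggingface_hub import snapshot_download
+ from datasets import load_dataset
+
+ # Download the full dataset repository (audio + metadata) to a local directory.
+ snapshot_download(
+     repo_id="internlm/STAR-Bench",
+     repo_type="dataset",
+     local_dir="your_local_data_dir",
+ )
+
+ # Load one of the metadata configs declared in this card's YAML header.
+ temporal = load_dataset("internlm/STAR-Bench", "temporal_reasoning", split="test")
+ print(temporal[0])
+ ```
+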
+ **Step 3: Set Up Your Model for Evaluation**
+
+ Currently supported models include: `Qwen2.5-Omni`, `Qwen2-Audio-Instruct`, `DeSTA2.5-Audio`, `Phi4-MM`, `Kimi-Audio`, `MiDashengLM`, `Step-Audio-2-mini`, `Gemma-3n-E4B-it`, `Gemini` and `GPT-4o Audio`.
+ <!-- `Ming-Lite-Omni-1.5`,`Xiaomi-MiMo-Audio`,`MiniCPM-O-v2.6`,`Audio Flamingo 3`, -->
+
+ To integrate a new model, create a new file `yourmodel.py` under the `models/` directory and implement the function `generate_inner()`.
+
+ ✅ Example: `generate_inner()`
+ ```python
+ def generate_inner(self, msg):
+     """
+     Args:
+         msg: dict, input format as below
+     """
+     msg = {
+         "meta": {
+             "id": ...,
+             "task": ...,
+             "category": ...,
+             "sub-category": ...,
+             "options": ...,
+             "answer": ...,
+             "answer_letter": ...,
+             "rotate_id": ...,
+         },
+         "prompts": [
+             {"type": "text", "value": "xxxx"},
+             {"type": "audio", "value": "audio1.wav"},
+             {"type": "text", "value": "xxxx"},
+             {"type": "audio", "value": "audio2.wav"},
+             ...
+         ]
+     }
+     # Return the model's textual response
+     return "your model output here"
+ ```
+
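+ For orientation, a skeleton `models/yourmodel.py` might look like the sketch below. This is only an illustration: the kit defines its own base class, registration, and audio-loading utilities under `ALMEval_code/models`, and every name here apart from `NAME` and `generate_inner` is hypothetical.
+ ```python
+ # models/yourmodel.py -- hypothetical skeleton; adapt to the kit's real base class.
+ class YourModel:
+     NAME = "your-model"  # referenced as `base_model` in models/model.yaml
+
+     def __init__(self, model_path: str):
+         # Load your checkpoint / processor here (placeholder).
+         self.model_path = model_path
+
+     def generate_inner(self, msg: dict) -> str:
+         # `msg` follows the format shown above: interleaved text/audio prompts plus metadata.
+         texts = [p["value"] for p in msg["prompts"] if p["type"] == "text"]
+         audios = [p["value"] for p in msg["prompts"] if p["type"] == "audio"]
+         # ... run inference on `texts` and `audios` with your model ...
+         return "your model output here"
+ ```
+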
+ **Step 4: Configure Model Settings**
+
+ Modify the configuration file `models/model.yaml`.
+
+ For existing models, you may need to update parameters such as `model_path` to match your local model weight path.
+
+ To add a new model variant, follow these steps:
+ 1. Create a new top-level key for your alias (e.g., `my_model_variant:`).
+ 2. Set `base_model` to the `NAME` attribute of the corresponding Python class.
+ 3. Add any necessary arguments for the class's `__init__` method under `init_args`.
+
+ Example:
+ ```yaml
+ qwen25-omni:
+   base_model: qwen25-omni
+   init_args:
+     model_path: your_model_weight_path_here
+ ```
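+
+ Conceptually, an alias entry resolves to a class instantiation roughly as in the sketch below. This is illustrative only: the kit ships its own loader and model registry, and the names here (other than `base_model`, `init_args`, and the `NAME` attribute) are assumptions.
+ ```python
+ import yaml
+
+ class YourModel:                       # stand-in for a class defined under models/
+     NAME = "your-model"
+     def __init__(self, model_path: str):
+         self.model_path = model_path
+
+ REGISTRY = {cls.NAME: cls for cls in (YourModel,)}   # hypothetical NAME -> class map
+
+ with open("models/model.yaml") as f:
+     cfg = yaml.safe_load(f)["my_model_variant"]      # the alias key you added
+
+ # `base_model` selects the class; `init_args` feeds its __init__.
+ model = REGISTRY[cfg["base_model"]](**cfg.get("init_args", {}))
+ ```
+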
+ **Step 5: Run Evaluation**
+
+ Run the following command:
+ ```bash
+ python ./run.py \
+     --model qwen25-omni \
+     --data starbench_default \
+     --dataset_root your_local_data_dir \
+     --work-dir ./eval_results
+ ```
+
+ Evaluation results will be automatically saved to the `./eval_results` directory.
+
+ You can also evaluate specific subtasks or their combinations by modifying the `--data` argument.
+ The full list of available task names can be found in `ALMEval_code/datasets/__init__.py`.
+
+ Example: Evaluate only the temporal reasoning and spatial reasoning tasks:
+ ```bash
+ python ./run.py \
+     --model qwen25-omni \
+     --data tr sr \
+     --dataset_root your_local_data_dir \
+     --work-dir ./eval_results
+ ```
+
+ ## ✒️Citation
+ ```bibtex
+ @article{liu2025starbench,
+   title={STAR-Bench: Probing Deep Spatio-Temporal Reasoning as Audio 4D Intelligence},
+   author={Liu, Zihan and Niu, Zhikang and Xiao, Qiuyang and Zheng, Zhisheng and Yuan, Ruoqi and Zang, Yuhang and Cao, Yuhang and Dong, Xiaoyi and Liang, Jianze and Chen, Xie and Sun, Leilei and Lin, Dahua and Wang, Jiaqi},
+   journal={arXiv preprint arXiv:2510.24693},
+   year={2025}
+ }
+ ```
+
+ ## 📄 License
+ ![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg) ![Data License](https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg) **Usage and License Notices**: The data and code are intended and licensed for research use only.
+
+ ## Acknowledgement
+ We sincerely thank <a href="https://2077ai.com" target="_blank">2077AI</a> for providing the platform that supported our data annotation, verification, and review processes.