Add task category, paper and github link
#2
opened by nielsr (HF Staff)

README.md CHANGED
```diff
@@ -1,7 +1,16 @@
+---
+task_categories:
+- video-text-to-text
+license: mit
+---
+
 # IV-Bench: A Benchmark for Image-Grounded Video Perception and Reasoning in Multimodal LLMs
 
+[Paper](https://huggingface.co/papers/2504.15415)
+[GitHub](https://github.com/multimodal-art-projection/IV-Bench)
+
 ## Dataset Availability
 Due to privacy policy, only a subset of the IV-Bench dataset is publicly available. Specifically, we release **1,680 samples**, including video IDs, image-text queries, and distractors.
 
 ## Usage
-Detailed usage instructions can be found on GitHub: [IV-Bench](https://github.com/multimodal-art-projection/IV-Bench)
+Detailed usage instructions can be found on GitHub: [IV-Bench](https://github.com/multimodal-art-projection/IV-Bench)
```