Dataset card — Modalities: Text · Formats: parquet · Languages: English · Libraries: Datasets, pandas
MaxyLee and nielsr (HF staff) committed
Commit 5213ae1 · verified · 1 parent: 201ba94

Change Github link to specific folder, change task category to `image-text-to-text` (#2)


- Change Github link to specific folder, change task category to `image-text-to-text` (3af31207bd8a44946504908be79b74be58d7b7cc)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1):
  1. README.md +6 -9
README.md CHANGED
@@ -1,12 +1,12 @@
 ---
-license: apache-2.0
-task_categories:
-- image-to-text
 language:
 - en
-pretty_name: KVG-Bench
+license: apache-2.0
 size_categories:
 - 1K<n<10K
+task_categories:
+- image-text-to-text
+pretty_name: KVG-Bench
 ---
 
 # DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding
@@ -14,11 +14,8 @@ Xinyu Ma, Ziyang Ding, Zhicong Luo, Chi Chen, Zonghao Guo, Derek F. Wong, Xiaoyi
 
 <a href='https://deepperception-kvg.github.io/'><img src='https://img.shields.io/badge/Project-Page-blue'></a>
 <a href='https://arxiv.org/abs/2503.12797'><img src='https://img.shields.io/badge/Paper-PDF-Green'></a>
-<a href='https://github.com/MaxyLee/DeepPerception'><img src='https://img.shields.io/badge/Github-Page-green'></a>
+<a href='https://github.com/thunlp/DeepPerception/tree/main/DeepPerception/KVG_Bench'><img src='https://img.shields.io/badge/Github-Page-green'></a>
 <a href='https://huggingface.co/MaxyLee/DeepPerception'><img src='https://img.shields.io/badge/Model-Huggingface-yellow'></a>
 <a href='https://huggingface.co/datasets/MaxyLee/KVG'><img src='https://img.shields.io/badge/Dataset-Huggingface-purple'></a>
 
-This is the official repository of **KVG-Bench**, a comprehensive benchmark of Knowledge-intensive Visual Grounding (KVG) spanning 10 categories with 1.3K manually curated test cases.
-
-
-
+This is the official repository of **KVG-Bench**, a comprehensive benchmark of Knowledge-intensive Visual Grounding (KVG) spanning 10 categories with 1.3K manually curated test cases.
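
The commit's effect is confined to the YAML front matter between the two `---` fences at the top of README.md, which the Hub reads as the dataset card's metadata. A minimal sketch of how that front matter can be extracted and parsed, assuming the post-commit README text shown below; the hand-rolled parser is a stand-in for a real YAML library (e.g. PyYAML) and only handles the flat key/list subset used here:

```python
# Minimal sketch: extract the YAML front matter from a dataset-card README
# and parse the flat key/list subset used in this card. The README text
# below mirrors the post-commit card; the parser is a stand-in for a real
# YAML parser such as PyYAML.

README = """\
---
language:
- en
license: apache-2.0
size_categories:
- 1K<n<10K
task_categories:
- image-text-to-text
pretty_name: KVG-Bench
---

# DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding
"""

def parse_front_matter(text):
    """Return the metadata between the first two '---' fences as a dict."""
    _, block, _ = text.split("---\n", 2)
    meta, key = {}, None
    for line in block.splitlines():
        if line.startswith("- "):        # list item under the current key
            meta[key].append(line[2:].strip())
        elif ":" in line:                # 'key: value' or 'key:' (list head)
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            meta[key] = value if value else []
    return meta

meta = parse_front_matter(README)
print(meta["task_categories"])  # ['image-text-to-text']
print(meta["license"])          # apache-2.0
```

With the commit applied, `task_categories` carries the new `image-text-to-text` value, which is what drives the task filter the card appears under on the Hub.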