nielsr (HF Staff) committed · verified
Commit a0d1cb6 · 1 Parent(s): a38d9d7

Improve model card for TreeVGR-7B with detailed info, usage, and tags


This PR significantly enhances the model card for **TreeVGR-7B** by:
- Adding `pipeline_tag: image-text-to-text` and `library_name: transformers` to the metadata for improved discoverability and integration with the Hugging Face Hub.
- Incorporating comprehensive metadata tags, including `base_model` and `datasets`, to provide richer context about the model's origins and associated resources.
- Updating the paper link to the official Hugging Face Papers page: https://huggingface.co/papers/2507.07999.
- Including the full paper abstract, a visual overview, detailed installation and usage instructions, and a dedicated section for related Hugging Face resources.
- Adding badges for quick navigation to key project assets.
- Ensuring proper citation and acknowledgements are present.

Files changed (1)
  1. README.md +106 -2
README.md CHANGED
@@ -1,7 +1,111 @@
---
license: apache-2.0
+ pipeline_tag: image-text-to-text
+ library_name: transformers
+ tags:
+ - visual-question-answering
+ - visual-grounding
+ - visual-reasoning
+ - qwen
+ base_model: Qwen/Qwen2.5-VL-7B
+ datasets:
+ - HaochenWang/TreeBench
+ - HaochenWang/TreeVGR-RL-37K
+ - HaochenWang/TreeVGR-SFT-35K
---

- Paper: arxiv.org/abs/2507.07999
-
- For usage, please refer to our GitHub repo: https://github.com/Haochen-Wang409/TreeVGR
+ # TreeVGR-7B: Traceable Evidence Enhanced Visual Grounded Reasoning Model
+
+ This repository contains the **TreeVGR-7B** model, a state-of-the-art open-source visual grounded reasoning model, as presented in the paper [Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology](https://huggingface.co/papers/2507.07999).
+
+ <p align="center">
+ <a href="https://huggingface.co/papers/2507.07999">
+ <img src="https://img.shields.io/badge/Paper-HuggingFace-red"></a>
+ <a href="https://huggingface.co/datasets/HaochenWang/TreeBench">
+ <img src="https://img.shields.io/badge/TreeBench-HuggingFace-orange"></a>
+ <a href="https://huggingface.co/HaochenWang/TreeVGR-7B">
+ <img src="https://img.shields.io/badge/TreeVGR-HuggingFace-yellow"></a>
+ <a href="https://github.com/Haochen-Wang409/TreeVGR">
+ <img src="https://img.shields.io/badge/Code-GitHub-blue"></a>
+ </p>
+
+ ## Abstract
+
+ Models like OpenAI-o3 pioneer visual grounded reasoning by dynamically referencing visual regions, just like human "thinking with images". However, no benchmark exists to evaluate these capabilities holistically. To bridge this gap, we propose TreeBench (Traceable Evidence Evaluation Benchmark), a diagnostic benchmark built on three principles: (1) focused visual perception of subtle targets in complex scenes, (2) traceable evidence via bounding box evaluation, and (3) second-order reasoning to test object interactions and spatial hierarchies beyond simple object localization. Prioritizing images with dense objects, we initially sample 1K high-quality images from SA-1B, and incorporate eight LMM experts to manually annotate questions, candidate options, and answers for each image. After three stages of quality control, TreeBench consists of 405 challenging visual question-answering pairs. Even the most advanced models struggle with this benchmark: none of them reaches 60% accuracy, e.g., OpenAI-o3 scores only 54.87. Furthermore, we introduce TreeVGR (Traceable Evidence Enhanced Visual Grounded Reasoning), a training paradigm to supervise localization and reasoning jointly with reinforcement learning, enabling accurate localizations and explainable reasoning pathways. Initialized from Qwen2.5-VL-7B, it improves V* Bench (+16.8), MME-RealWorld (+12.6), and TreeBench (+13.4), proving traceability is key to advancing vision-grounded reasoning.
+
+ ![TreeBench Overview](https://github.com/Haochen-Wang409/TreeVGR/raw/main/assets/treebench.png)
+
+ ## News
+
+ - [2025/07/11] 🔥🔥🔥 **TreeBench** and **TreeVGR** are now supported by [**VLMEvalKit**](https://github.com/open-compass/VLMEvalKit)! 🔥🔥🔥
+ - [2025/07/11] 🔥 **TreeBench** and **TreeVGR** have been released.
+
+ ## Installation
+
+ ```bash
+ pip3 install -r requirements.txt
+ pip3 install flash-attn --no-build-isolation -v
+ ```
+
+ ## Usage
+
+ This repo provides a simple local inference demo of TreeVGR on TreeBench. First, clone the GitHub repo:
+ ```bash
+ git clone https://github.com/Haochen-Wang409/TreeVGR
+ cd TreeVGR
+ ```
+ and then simply run `inference_treebench.py`:
+ ```bash
+ python3 inference_treebench.py
+ ```
+
+ This should give:
+ ```
+ Perception/Attributes 18/29=62.07
+ Perception/Material 7/13=53.85
+ Perception/Physical State 19/23=82.61
+ Perception/Object Retrieval 10/16=62.5
+ Perception/OCR 42/68=61.76
+ Reasoning/Perspective Transform 19/85=22.35
+ Reasoning/Ordering 20/57=35.09
+ Reasoning/Contact and Occlusion 25/41=60.98
+ Reasoning/Spatial Containment 20/29=68.97
+ Reasoning/Comparison 20/44=45.45
+ ==> Overall 200/405=49.38
+ ==> Mean IoU: 43.3
+ ```
+ These numbers differ slightly from those reported in the paper, as the paper's results were obtained mainly with [**VLMEvalKit**](https://github.com/open-compass/VLMEvalKit) for a more comprehensive evaluation.
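Beyond the demo script, the checkpoint is tagged `library_name: transformers` and is initialized from Qwen2.5-VL-7B, so it should load through the standard Qwen2.5-VL interface in `transformers`. The snippet below is a minimal sketch rather than the authors' reference code: it assumes a recent `transformers` release with Qwen2.5-VL support, and the image path and prompt are placeholders (see `inference_treebench.py` in the GitHub repo for the exact prompt format used on TreeBench).

```python
# Minimal sketch: load TreeVGR-7B via the standard Qwen2.5-VL classes in transformers.
# Assumptions: transformers >= 4.49 (Qwen2.5-VL support); "example.jpg" and the question
# are placeholders, not the official TreeBench prompt.
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "HaochenWang/TreeVGR-7B"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder image
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Which object is closer to the camera, the cup or the bottle? "
                                 "Think step by step and ground your evidence with bounding boxes."},
    ],
}]

# Build the chat prompt, then tokenize text and image together.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)

# Generate and strip the prompt tokens before decoding.
output_ids = model.generate(**inputs, max_new_tokens=512)
trimmed = output_ids[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```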
+
+ ## Hugging Face Resources
+
+ **Benchmark**
+ - [TreeBench](https://huggingface.co/datasets/HaochenWang/TreeBench)
+
+ **Checkpoints**
+ - [TreeVGR-7B](https://huggingface.co/HaochenWang/TreeVGR-7B)
+ - [TreeVGR-7B-CI](https://huggingface.co/HaochenWang/TreeVGR-7B-CI)
+
+ **Training Datasets**
+ - [TreeVGR-RL-37K](https://huggingface.co/datasets/HaochenWang/TreeVGR-RL-37K)
+ - [TreeVGR-SFT-35K](https://huggingface.co/datasets/HaochenWang/TreeVGR-SFT-35K)
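For quick programmatic inspection, the dataset repositories listed above can likely be loaded with the standard `datasets` API. The sketch below is a hedged example: it assumes the repos parse with `load_dataset()` and discovers split and column names at runtime, since they are not documented in this card; repositories stored as raw archives may instead require `huggingface_hub.snapshot_download`.

```python
# Minimal sketch: peek at TreeBench and the TreeVGR training sets with the `datasets` library.
# Assumption: each repo is in a format load_dataset() can parse directly.
from datasets import load_dataset

for repo_id in [
    "HaochenWang/TreeBench",        # 405 VQA pairs with bounding-box evidence
    "HaochenWang/TreeVGR-RL-37K",   # RL training data
    "HaochenWang/TreeVGR-SFT-35K",  # SFT training data
]:
    ds = load_dataset(repo_id)
    split = next(iter(ds))                          # first available split
    print(repo_id, "->", split, ds[split].column_names)
    print(ds[split][0])                             # first example
```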
+
+ ## Citation
+
+ If you find this work useful for your research and applications, please cite using this BibTeX:
+ ```bibtex
+ @article{wang2025traceable,
+   title={Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology},
+   author={Haochen Wang and Xiangtai Li and Zilong Huang and Anran Wang and Jiacong Wang and Tao Zhang and Jiani Zheng and Sule Bai and Zijian Kang and Jiashi Feng and Zhuochen Wang and Zhaoxiang Zhang},
+   journal={arXiv preprint arXiv:2507.07999},
+   year={2025}
+ }
+ ```
+
+ ## Acknowledgement
+
+ We would like to express our sincere appreciation to the following projects:
+ - [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL): The base model we utilized.
+ - [VGR](https://huggingface.co/datasets/BytedanceDouyinContent/VGR): The source of our SFT dataset.
+ - [V*](https://github.com/penghao-wu/vstar) and [VisDrone](https://github.com/VisDrone/VisDrone-Dataset): The image sources of our RL dataset.
+ - [SA-1B](https://ai.meta.com/datasets/segment-anything/): The image source of our TreeBench.
+ - [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory): The SFT codebase we utilized.
+ - [EasyR1](https://github.com/hiyouga/EasyR1): The RL codebase we utilized.