nielsr (HF Staff) committed
Commit 197907a · verified · 1 parent: 91d44bc

Enhance model card with abstract, usage, and citation


Hi!

I've updated the model card to make it more informative and user-friendly for the community.
Specifically, I've:
- Updated the paper link to point to the official Hugging Face Papers page (https://huggingface.co/papers/2507.07999).
- Added the paper's abstract for a comprehensive overview.
- Included detailed installation and sample usage instructions, making it easier to get started.
- Added the BibTeX citation for proper attribution.

These changes aim to improve discoverability and usability within the Hugging Face ecosystem. Let me know if you have any questions!

Files changed (1)
  1. README.md (+61, -3)

README.md CHANGED
@@ -4,11 +4,69 @@ base_model:
  datasets:
  - HaochenWang/TreeVGR-SFT-35K
  - HaochenWang/TreeVGR-RL-37K
+ library_name: transformers
  license: apache-2.0
  pipeline_tag: image-text-to-text
- library_name: transformers
  ---

- Paper: [Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology](https://arxiv.org/abs/2507.07999)
-
- For usage, please refer to our GitHub repo: https://github.com/Haochen-Wang409/TreeVGR
+ # TreeVGR-7B
+
+ This repository contains the **TreeVGR-7B** model, as presented in the paper [Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology](https://huggingface.co/papers/2507.07999).
+
+ **TL;DR**: We propose TreeBench, the first benchmark specifically designed for evaluating "thinking with images" capabilities with *traceable visual evidence*, and TreeVGR, the current state-of-the-art open-source visual grounded reasoning model.
+
+ ## Abstract
+
+ Models like OpenAI-o3 pioneer visual grounded reasoning by dynamically referencing visual regions, just like human "thinking with images". However, no benchmark exists to evaluate these capabilities holistically. To bridge this gap, we propose **TreeBench** (Traceable Evidence Evaluation Benchmark), a diagnostic benchmark built on three principles: (1) focused visual perception of subtle targets in complex scenes, (2) traceable evidence via bounding box evaluation, and (3) second-order reasoning to test object interactions and spatial hierarchies beyond simple object localization. Prioritizing images with dense objects, we initially sample 1K high-quality images from SA-1B, and incorporate eight LMM experts to manually annotate questions, candidate options, and answers for each image. After three stages of quality control, **TreeBench** consists of 405 challenging visual question-answering pairs. Even the most advanced models struggle with this benchmark: none of them reaches 60% accuracy, e.g., OpenAI-o3 scores only 54.87. Furthermore, we introduce **TreeVGR** (Traceable Evidence Enhanced Visual Grounded Reasoning), a training paradigm to supervise localization and reasoning jointly with reinforcement learning, enabling accurate localizations and explainable reasoning pathways. Initialized from Qwen2.5-VL-7B, it improves V\* Bench (+16.8), MME-RealWorld (+12.6), and TreeBench (+13.4), proving traceability is key to advancing visual grounded reasoning.
+
+ For more details, please refer to the [GitHub repository](https://github.com/Haochen-Wang409/TreeVGR).
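+
+ The "traceable evidence" protocol scores a model's predicted evidence boxes against the human-annotated ones, which is where the mean IoU reported by the evaluation script below comes from. As a rough illustration only (this is not the official TreeBench scoring code, and the `[x1, y1, x2, y2]` pixel box format is an assumption), the check boils down to a standard IoU:
+
+ ```python
+ # Illustrative sketch only: IoU between a predicted evidence box and a ground-truth
+ # box, both assumed to be [x1, y1, x2, y2] in pixel coordinates. The official
+ # evaluation lives in the GitHub repository / VLMEvalKit.
+ def box_iou(pred, gt):
+     ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
+     ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
+     inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
+     union = (pred[2] - pred[0]) * (pred[3] - pred[1]) + (gt[2] - gt[0]) * (gt[3] - gt[1]) - inter
+     return inter / union if union > 0 else 0.0
+
+ print(box_iou([100, 120, 260, 300], [110, 130, 250, 310]))  # ≈ 0.79
+ ```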
+
+ ## Installation
+
+ To get started, first clone the repository and install the required dependencies:
+
+ ```bash
+ git clone https://github.com/Haochen-Wang409/TreeVGR
+ cd TreeVGR
+ pip3 install -r requirements.txt
+ pip3 install flash-attn --no-build-isolation -v
+ ```
+
+ ## Usage
+
+ This repository provides a simple local inference demo of TreeVGR on TreeBench. After installation, you can run the inference script:
+
+ ```bash
+ python3 inference_treebench.py
+ ```
+
+ This should produce an output similar to:
+
+ ```
+ Perception/Attributes 18/29=62.07
+ Perception/Material 7/13=53.85
+ Perception/Physical State 19/23=82.61
+ Perception/Object Retrieval 10/16=62.5
+ Perception/OCR 42/68=61.76
+ Reasoning/Perspective Transform 19/85=22.35
+ Reasoning/Ordering 20/57=35.09
+ Reasoning/Contact and Occlusion 25/41=60.98
+ Reasoning/Spatial Containment 20/29=68.97
+ Reasoning/Comparison 20/44=45.45
+ ==> Overall 200/405=49.38
+ ==> Mean IoU: 43.3
+ ```
+ Note: These results differ slightly from those in the paper, as we mainly used [**VLMEvalKit**](https://github.com/open-compass/VLMEvalKit) for a more comprehensive evaluation.
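+
+ Since TreeVGR-7B is initialized from Qwen2.5-VL-7B and ships with `library_name: transformers` / `pipeline_tag: image-text-to-text`, it should also load directly with Hugging Face `transformers`. The snippet below is a minimal, assumed sketch in the usual Qwen2.5-VL style rather than an official recipe: the repository id `HaochenWang/TreeVGR-7B`, the image path, and the prompt are placeholders, and `inference_treebench.py` above remains the reference entry point.
+
+ ```python
+ # Minimal sketch (assumptions: repo id "HaochenWang/TreeVGR-7B", a local example
+ # image, and a recent transformers release with Qwen2.5-VL support).
+ from PIL import Image
+ from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
+
+ model_id = "HaochenWang/TreeVGR-7B"  # assumed repository id
+ model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
+     model_id, torch_dtype="auto", device_map="auto"
+ )
+ processor = AutoProcessor.from_pretrained(model_id)
+
+ image = Image.open("example.jpg")  # placeholder image
+ messages = [{
+     "role": "user",
+     "content": [
+         {"type": "image"},
+         {"type": "text", "text": "Where is the red mug? Reason step by step and give its bounding box."},
+     ],
+ }]
+ prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
+
+ output_ids = model.generate(**inputs, max_new_tokens=512)
+ print(processor.batch_decode(output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
+ ```
+ For benchmark numbers, prefer the provided script or VLMEvalKit as noted above.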
+
+ ## Citation
+
+ If you find this work useful for your research and applications, please cite using the following BibTeX:
+
+ ```bibtex
+ @article{wang2025traceable,
+   title={Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology},
+   author={Haochen Wang and Xiangtai Li and Zilong Huang and Anran Wang and Jiacong Wang and Tao Zhang and Jiani Zheng and Sule Bai and Zijian Kang and Jiashi Feng and Zhuochen Wang and Zhaoxiang Zhang},
+   journal={arXiv preprint arXiv:2507.07999},
+   year={2025}
+ }
+ ```