Improve dataset card: Add task category, license, paper, and code links for ReasonSeg-Test
#2 · by nielsr (HF Staff) · opened

README.md CHANGED
@@ -1,4 +1,17 @@
 ---
+license: cc-by-nc-4.0
+task_categories:
+- image-segmentation
+language:
+- en
+tags:
+- reasoning
+- zero-shot
+- reinforcement-learning
+- multi-modal
+- VLM
+size_categories:
+- n<1K
 dataset_info:
   features:
   - name: image
@@ -28,3 +41,104 @@ configs:
   - split: test
     path: data/test-*
 ---

# ReasonSeg Test Dataset

This repository contains the **ReasonSeg Test Dataset**, which serves as an evaluation benchmark for the paper [Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement](https://arxiv.org/abs/2503.06520).

**Code:** [https://github.com/dvlab-research/Seg-Zero](https://github.com/dvlab-research/Seg-Zero)

## Paper Abstract

Traditional methods for reasoning segmentation rely on supervised fine-tuning with categorical labels and simple descriptions, limiting their out-of-domain generalization and lacking explicit reasoning processes. To address these limitations, we propose Seg-Zero, a novel framework that demonstrates remarkable generalizability and derives explicit chain-of-thought reasoning through cognitive reinforcement. Seg-Zero introduces a decoupled architecture consisting of a reasoning model and a segmentation model. The reasoning model interprets user intentions, generates explicit reasoning chains, and produces positional prompts, which are subsequently used by the segmentation model to generate precise pixel-level masks. We design a sophisticated reward mechanism that integrates both format and accuracy rewards to effectively guide optimization directions. Trained exclusively via reinforcement learning with GRPO and without explicit reasoning data, Seg-Zero achieves robust zero-shot generalization and exhibits emergent test-time reasoning capabilities. Experiments show that Seg-Zero-7B achieves a zero-shot performance of 57.5 on the ReasonSeg benchmark, surpassing the prior LISA-7B by 18%. This significant improvement highlights Seg-Zero's ability to generalize across domains while presenting an explicit reasoning process.

## About Seg-Zero

Seg-Zero is a novel framework for reasoning segmentation that utilizes cognitive reinforcement to achieve remarkable generalizability and explicit chain-of-thought reasoning.

<div align=center>
<img width="98%" src="https://github.com/dvlab-research/Seg-Zero/raw/main/assets/overview.png"/>
</div>

Seg-Zero demonstrates the following features:

1. Seg-Zero exhibits emergent test-time reasoning ability: it generates a reasoning chain before producing the final segmentation mask.
2. Seg-Zero is trained exclusively using reinforcement learning, without any explicit supervised reasoning data.
3. Compared to supervised fine-tuning, Seg-Zero achieves superior performance on both in-domain and out-of-domain data.

### Model Pipeline

Seg-Zero employs a decoupled architecture consisting of a reasoning model and a segmentation model. A sophisticated reward mechanism integrates both format and accuracy rewards.

<div align=center>
<img width="98%" src="https://github.com/dvlab-research/Seg-Zero/raw/main/assets/pipeline.png"/>
</div>

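The reward mentioned above combines a format term (does the output follow the expected response template?) with an accuracy term (how well does the positional prompt match the ground truth?). The sketch below is a rough, hypothetical illustration of such a combination; the tag names, the box-IoU accuracy term, and the weights are assumptions for illustration, not the definitions used in the Seg-Zero code.

```python
import re

def format_reward(response: str) -> float:
    """1.0 if the response contains a reasoning block and a parseable answer block.

    The <think>/<answer> tag names are illustrative assumptions, not taken from the Seg-Zero code.
    """
    has_think = re.search(r"<think>.*?</think>", response, re.DOTALL) is not None
    has_answer = re.search(r"<answer>.*?</answer>", response, re.DOTALL) is not None
    return 1.0 if has_think and has_answer else 0.0

def box_iou(box_a, box_b) -> float:
    """IoU between two (x1, y1, x2, y2) boxes, used here as a stand-in accuracy reward."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def total_reward(response, pred_box, gt_box, w_fmt=0.5, w_acc=1.0) -> float:
    # Weights are illustrative; the actual reward terms are defined in the paper and repository.
    return w_fmt * format_reward(response) + w_acc * box_iou(pred_box, gt_box)
```
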
### Examples

<div align=center>
<img width="98%" src="https://github.com/dvlab-research/Seg-Zero/raw/main/assets/examples.png"/>
</div>

## Sample Usage: Evaluation

This dataset (`ReasonSeg-Test`) is designed for evaluating the zero-shot performance of models like Seg-Zero on reasoning-based image segmentation tasks.

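The test split can be inspected directly with the 🤗 `datasets` library. The snippet below is a minimal sketch; the repository id in it is an assumption, so replace it with the id of this dataset repo.

```python
from datasets import load_dataset

# NOTE: hypothetical repository id -- replace it with the actual Hub id of this dataset.
ds = load_dataset("Ricky06662/ReasonSeg_test", split="test")

print(ds)                    # column names (the card lists an `image` feature) and row count
sample = ds[0]
print(sample["image"].size)  # the `image` column is decoded to a PIL.Image by the library
```

To reproduce the reported results, use the Seg-Zero evaluation scripts described below.
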
First, install the necessary dependencies for the Seg-Zero project:

```bash
git clone https://github.com/dvlab-research/Seg-Zero.git
cd Seg-Zero
conda create -n visionreasoner python=3.12
conda activate visionreasoner
pip install torch==2.6.0 torchvision==0.21.0
pip install -e .
```

Then run evaluation using the provided scripts. Make sure to download the pretrained models first:

```bash
mkdir pretrained_models
cd pretrained_models
git lfs install
git clone https://huggingface.co/Ricky06662/VisionReasoner-7B
```

With the pretrained models downloaded, you can run the evaluation script for ReasonSeg:

```bash
bash evaluation_scripts/eval_reasonseg_visionreasoner.sh
```

Adjust `--batch_size` in the bash scripts based on your GPU. The gIoU score is printed to your command line.

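For reference, the gIoU reported on ReasonSeg is commonly defined as the average of per-image IoU over the test set. Below is a minimal NumPy sketch under that assumption; it is not the project's evaluation code.

```python
import numpy as np

def giou(pred_masks, gt_masks):
    """Mean per-image IoU over binary masks (the usual gIoU convention on ReasonSeg)."""
    ious = []
    for pred, gt in zip(pred_masks, gt_masks):
        pred, gt = pred.astype(bool), gt.astype(bool)
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        # Counting an empty prediction of an empty mask as IoU 1.0 is an assumption,
        # not necessarily the convention of the official evaluation script.
        ious.append(inter / union if union > 0 else 1.0)
    return float(np.mean(ious))
```
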
<div align=center>
<img width="98%" src="https://github.com/dvlab-research/Seg-Zero/raw/main/assets/val_results.png"/>
</div>

## The GRPO Algorithm

Seg-Zero generates several candidate samples, calculates their rewards, and then optimizes toward the samples that achieve higher rewards.

<div align=center>
<img width="48%" src="https://github.com/dvlab-research/Seg-Zero/raw/main/assets/rl_sample.png"/>
</div>

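The group-relative advantage at the core of this update can be sketched as follows; this is an illustrative implementation of the standard GRPO normalization, not code from the Seg-Zero repository.

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-6):
    """GRPO-style advantages: normalize each sampled response's reward within its group.

    `rewards` has shape (num_prompts, samples_per_prompt). Responses scoring above
    their group mean receive positive advantages and are reinforced by the policy update.
    """
    rewards = np.asarray(rewards, dtype=np.float64)
    mean = rewards.mean(axis=1, keepdims=True)
    std = rewards.std(axis=1, keepdims=True)
    return (rewards - mean) / (std + eps)

# One prompt with four sampled responses and their scalar rewards:
print(group_relative_advantages([[0.2, 0.9, 0.4, 0.7]]))
```
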
## Citation

If you use this dataset or the Seg-Zero framework, please cite the associated papers:

```bibtex
@article{liu2025segzero,
  title   = {Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement},
  author  = {Liu, Yuqi and Peng, Bohao and Zhong, Zhisheng and Yue, Zihao and Lu, Fanbin and Yu, Bei and Jia, Jiaya},
  journal = {arXiv preprint arXiv:2503.06520},
  year    = {2025}
}

@article{liu2025visionreasoner,
  title   = {VisionReasoner: Unified Visual Perception and Reasoning via Reinforcement Learning},
  author  = {Liu, Yuqi and Qu, Tianyuan and Zhong, Zhisheng and Peng, Bohao and Liu, Shu and Yu, Bei and Jia, Jiaya},
  journal = {arXiv preprint arXiv:2505.12081},
  year    = {2025}
}
```