ReasonSeg Test Dataset
This repository contains the ReasonSeg Test Dataset, which serves as an evaluation benchmark for the paper Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement.
Code: https://github.com/dvlab-research/Seg-Zero
Paper Abstract
Traditional methods for reasoning segmentation rely on supervised fine-tuning with categorical labels and simple descriptions, limiting their out-of-domain generalization and lacking explicit reasoning processes. To address these limitations, we propose Seg-Zero, a novel framework that demonstrates remarkable generalizability and derives explicit chain-of-thought reasoning through cognitive reinforcement. Seg-Zero introduces a decoupled architecture consisting of a reasoning model and a segmentation model. The reasoning model interprets user intentions, generates explicit reasoning chains, and produces positional prompts, which are subsequently used by the segmentation model to generate precise pixel-level masks. We design a sophisticated reward mechanism that integrates both format and accuracy rewards to effectively guide optimization directions. Trained exclusively via reinforcement learning with GRPO and without explicit reasoning data, Seg-Zero achieves robust zero-shot generalization and exhibits emergent test-time reasoning capabilities. Experiments show that Seg-Zero-7B achieves a zero-shot performance of 57.5 on the ReasonSeg benchmark, surpassing the prior LISA-7B by 18%. This significant improvement highlights Seg-Zero's ability to generalize across domains while presenting an explicit reasoning process.
About Seg-Zero
Seg-Zero is a novel framework for reasoning segmentation that utilizes cognitive reinforcement to achieve remarkable generalizability and explicit chain-of-thought reasoning.

Seg-Zero demonstrates the following features:
- Seg-Zero exhibits emergent test-time reasoning ability. It generates a reasoning chain before producing the final segmentation mask.
- Seg-Zero is trained exclusively using reinforcement learning, without any explicit supervised reasoning data.
- Compared to supervised fine-tuning, our Seg-Zero achieves superior performance on both in-domain and out-of-domain data.
Model Pipeline
Seg-Zero employs a decoupled architecture consisting of a reasoning model and a segmentation model. A sophisticated reward mechanism integrates both format and accuracy rewards.
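To illustrate the decoupled design, here is a minimal sketch of how the two stages could be wired together. The names (`reasoning_model`, `sam_predictor`) and the exact prompt format are placeholders based on the description above, not the project's actual API.

```python
# Sketch of Seg-Zero's decoupled pipeline (names are illustrative,
# not the project's actual API).

def segment_with_reasoning(image, query, reasoning_model, sam_predictor):
    # Stage 1: the reasoning model reads the image + query and emits an
    # explicit reasoning chain plus positional prompts (e.g. a box and points).
    output = reasoning_model.generate(image=image, text=query)
    reasoning_chain = output["thinking"]   # chain-of-thought text
    bbox = output["bbox"]                  # [x1, y1, x2, y2]
    points = output["points"]              # [[x, y], ...]

    # Stage 2: the segmentation model turns the positional prompts into a
    # pixel-level mask; the reasoning model never predicts pixels directly.
    sam_predictor.set_image(image)
    mask = sam_predictor.predict(box=bbox, point_coords=points)

    return reasoning_chain, mask
```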

Examples

Sample Usage: Evaluation
This dataset (ReasonSeg-Test) is designed for evaluating the zero-shot performance of models like Seg-Zero on reasoning-based image segmentation tasks.
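If you want to inspect the raw files locally before running the full evaluation pipeline, a minimal sketch using `huggingface_hub` is shown below; the repository id is a placeholder and should be replaced with this dataset's actual path on the Hub.

```python
# Sketch: download the ReasonSeg test files locally. The repo_id is a
# placeholder; replace it with this dataset card's actual Hub path.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="<org>/<reasonseg-test-repo>",  # placeholder
    repo_type="dataset",
)
print("Dataset files downloaded to:", local_dir)
```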
First, install the necessary dependencies for the Seg-Zero project:
```bash
git clone https://github.com/dvlab-research/Seg-Zero.git
cd Seg-Zero
conda create -n visionreasoner python=3.12
conda activate visionreasoner
pip install torch==2.6.0 torchvision==0.21.0
pip install -e .
```
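A quick way to confirm the new environment is usable (assuming evaluation will run on a CUDA-capable GPU):

```python
# Sanity check for the freshly created environment: verify the pinned builds
# are importable and whether a CUDA device is visible.
import torch
import torchvision

print("torch:", torch.__version__)              # expected 2.6.0
print("torchvision:", torchvision.__version__)  # expected 0.21.0
print("CUDA available:", torch.cuda.is_available())
```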
Then, you can run evaluation using the provided scripts. Make sure to download pretrained models first:
```bash
mkdir pretrained_models
cd pretrained_models
git lfs install
git clone https://huggingface.co/Ricky06662/VisionReasoner-7B
```
With the pretrained models downloaded, you can run the evaluation script for ReasonSeg:
```bash
bash evaluation_scripts/eval_reasonseg_visionreasoner.sh
```
Adjust `--batch_size` in the bash script according to your GPU memory. The gIoU result will be printed in your command line.
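For reference, below is a minimal sketch of how the reported metric can be computed from binary masks. It assumes the standard ReasonSeg convention of gIoU (mean of per-image IoUs) and cIoU (cumulative intersection over cumulative union); the evaluation script's internal implementation may differ in details.

```python
import numpy as np

def reasonseg_metrics(pred_masks, gt_masks, eps=1e-6):
    """Sketch of ReasonSeg-style metrics over lists of binary HxW masks.

    gIoU: mean of per-image IoUs.
    cIoU: cumulative intersection over cumulative union across all images.
    """
    ious, total_inter, total_union = [], 0, 0
    for pred, gt in zip(pred_masks, gt_masks):
        pred, gt = pred.astype(bool), gt.astype(bool)
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        ious.append(inter / (union + eps))
        total_inter += inter
        total_union += union
    giou = float(np.mean(ious))
    ciou = float(total_inter / (total_union + eps))
    return giou, ciou
```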

The GRPO Algorithm
Seg-Zero generates several samples, calculates their rewards, and then optimizes toward the samples that achieve higher rewards.
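Below is a minimal sketch of the group-relative advantage computation at the heart of GRPO, assuming a generic reward function; it is illustrative and not the project's training code.

```python
import numpy as np

def grpo_advantages(rewards, eps=1e-6):
    """Group-relative advantages: each sampled completion is scored against
    the mean and std of its own group, so no learned value model is needed."""
    rewards = np.asarray(rewards, dtype=np.float32)
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Example: rewards for one group of sampled outputs, combining format and
# accuracy rewards as described above (values are illustrative).
group_rewards = [1.0, 0.2, 0.8, 0.0]
print(grpo_advantages(group_rewards))
# Completions with above-average reward get positive advantages and are
# reinforced; below-average ones are pushed down.
```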

Citation
If you use this dataset or the Seg-Zero framework, please cite the associated papers:
```bibtex
@article{liu2025segzero,
  title   = {Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement},
  author  = {Liu, Yuqi and Peng, Bohao and Zhong, Zhisheng and Yue, Zihao and Lu, Fanbin and Yu, Bei and Jia, Jiaya},
  journal = {arXiv preprint arXiv:2503.06520},
  year    = {2025}
}

@article{liu2025visionreasoner,
  title   = {VisionReasoner: Unified Visual Perception and Reasoning via Reinforcement Learning},
  author  = {Liu, Yuqi and Qu, Tianyuan and Zhong, Zhisheng and Peng, Bohao and Liu, Shu and Yu, Bei and Jia, Jiaya},
  journal = {arXiv preprint arXiv:2505.12081},
  year    = {2025}
}
```