arxiv:2507.16746

Zebra-CoT: A Dataset for Interleaved Vision Language Reasoning

Published on Jul 22 · Submitted by deqing on Jul 23
Authors: Ang Li et al.

Abstract

Humans often use visual aids, such as diagrams or sketches, when solving complex problems. Training multimodal models to do the same, known as Visual Chain of Thought (Visual CoT), is challenging due to: (1) poor off-the-shelf visual CoT performance, which hinders reinforcement learning, and (2) the lack of high-quality visual CoT training data. We introduce Zebra-CoT, a diverse large-scale dataset of 182,384 samples containing logically coherent interleaved text-image reasoning traces. We focus on four categories of tasks where sketching or visual reasoning is especially natural: scientific questions such as geometry, physics, and algorithms; 2D visual reasoning tasks like visual search and jigsaw puzzles; 3D reasoning tasks including 3D multi-hop inference and embodied and robot planning; and visual logic problems and strategic games like chess. Fine-tuning the Anole-7B model on the Zebra-CoT training corpus yields a +12% improvement in our test-set accuracy and up to a +13% gain on standard VLM benchmark evaluations. Fine-tuning Bagel-7B yields a model that generates high-quality interleaved visual reasoning chains, underscoring Zebra-CoT's effectiveness for developing multimodal reasoning abilities. We open-source our dataset and models to support the development and evaluation of visual CoT.

Community


Introducing Zebra-CoT, a diverse large-scale dataset of 182,384 logically coherent interleaved text-image reasoning traces spanning scientific, 2D, 3D, and logic tasks. Zebra-CoT enables intrinsic multimodal reasoning by training models to seamlessly integrate visual sketches and textual chains of thought for complex problem solving.
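
For anyone who wants to experiment, below is a minimal sketch of loading the released data with the Hugging Face `datasets` library. The repository id `multimodal-reasoning-lab/Zebra-CoT` and the field inspection are assumptions based on typical Hugging Face dataset releases, not details confirmed on this page.

```python
# Minimal sketch: load and inspect the Zebra-CoT dataset with the `datasets` library.
# NOTE: the repository id below is an assumption; check the dataset card linked
# from this page for the exact id and field names.
from datasets import load_dataset

ds = load_dataset("multimodal-reasoning-lab/Zebra-CoT", split="train")  # hypothetical repo id

example = ds[0]
print(example.keys())  # inspect the available fields for one reasoning trace
# Each trace interleaves textual reasoning steps with intermediate images
# (sketches, diagrams); the exact schema depends on the release.
```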


Models citing this paper 2

Datasets citing this paper 1

Spaces citing this paper 0

Collections including this paper 5