---
license: cc-by-4.0
---

# 🧠 Causal3D: A Benchmark for Visual Causal Reasoning

Causal3D is a comprehensive benchmark designed to evaluate models’ abilities to uncover latent causal relations from structured and visual data. This dataset integrates 3D-rendered scenes with tabular causal annotations, providing a unified testbed for advancing causal discovery, causal representation learning, and causal reasoning with vision-language models (VLMs) and large language models (LLMs).


## 📌 Overview

While recent progress in AI and computer vision has been remarkable, there remains a major gap in evaluating causal reasoning over complex visual inputs. Causal3D bridges this gap by providing:

- 19 curated 3D-scene datasets simulating diverse real-world causal phenomena.
- Paired tabular causal graphs and image observations across multiple views and backgrounds.
- Benchmarks for evaluating models in both structured (tabular) and unstructured (image) modalities.

## 🧩 Dataset Structure

Each sub-dataset (scene) contains:

- `images/`: Rendered images under different camera views and backgrounds.
- `tabular.csv`: Instance-level annotations of the object attributes that correspond to variables in the causal graph.
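The layout above suggests a straightforward per-scene loader. A minimal sketch, assuming PNG images and the directory names listed (the `load_scene` helper itself is hypothetical, not part of the dataset):

```python
import csv
from pathlib import Path

def load_scene(scene_dir):
    """Load one Causal3D scene: tabular annotations plus rendered image paths.

    Assumed layout (from the structure described above):
      scene_dir/tabular.csv  -- per-instance values of the causal variables
      scene_dir/images/      -- rendered views (PNG assumed here)
    """
    scene = Path(scene_dir)
    with open(scene / "tabular.csv", newline="") as f:
        rows = list(csv.DictReader(f))          # one dict per instance
    images = sorted((scene / "images").glob("*.png"))
    return rows, images
```

How images map to tabular rows (by filename, index, or an ID column) depends on the individual scene, so any pairing logic should be checked against the scene's files.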

πŸ–ΌοΈ Visual Previews

Example images are provided for the following Causal3D scenes: `parabola`, `convex`, `magnetic`, `pendulum`, `reflection`, `seesaw`, `spring`, and `water_flow`.

## 🎯 Evaluation Tasks

Causal3D supports a range of causal reasoning tasks, including:

- Causal discovery from image sequences or tables
- Intervention prediction under modified object states or backgrounds
- Counterfactual reasoning across views
- VLM-based causal inference given multimodal prompts
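Intervention prediction can be made concrete with a toy structural causal model. The sketch below uses the pendulum scene as an example, assuming the causal relation length → period via the small-angle formula; the variable names are illustrative and not the dataset's actual schema:

```python
import math

def period(length_m, g=9.81):
    # Small-angle pendulum period: T = 2*pi*sqrt(L/g)
    return 2 * math.pi * math.sqrt(length_m / g)

def intervene_length(length_m):
    """do(length = length_m): set the cause directly, then recompute
    the downstream variable from the structural equation."""
    return {"length": length_m, "period": period(length_m)}
```

A model that has recovered the correct causal direction should predict that intervening on length changes the period, while intervening on the period (e.g. by relabeling) leaves the length unchanged.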

## 📊 Benchmark Results

We evaluate a diverse set of methods:

- Classical causal discovery: PC, GES, NOTEARS
- Causal representation learning: CausalVAE, ICM-based encoders
- Vision-language and large language models: GPT-4V, Claude-3.5, Gemini-1.5
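For the discovery methods, a predicted graph is typically scored against the annotated ground-truth graph. One standard metric is the structural Hamming distance (SHD); the following is a sketch of that metric, not the benchmark's exact scoring code:

```python
def shd(true_edges, pred_edges):
    """Structural Hamming distance between two directed graphs given as
    sets of (parent, child) edges: the number of edge additions,
    deletions, and reversals needed to turn pred into true.
    A reversed edge counts once, not twice."""
    true_edges, pred_edges = set(true_edges), set(pred_edges)
    dist = 0
    for e in true_edges | pred_edges:
        if e in true_edges and e in pred_edges:
            continue  # edge matches exactly
        rev = (e[1], e[0])
        if e in pred_edges and rev in true_edges:
            dist += 1  # reversed edge, counted from the predicted side
        elif e in true_edges and rev in pred_edges:
            continue   # same reversal, already counted above
        else:
            dist += 1  # missing or spurious edge
    return dist
```

Lower is better; SHD = 0 means the predicted graph matches the ground truth exactly.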

**Key findings:**

- As causal structures grow more complex, model performance drops significantly without strong prior assumptions.
- A noticeable performance gap exists between models trained on structured data and those applied directly to visual inputs.