# 🧠 Causal3D: A Benchmark for Visual Causal Reasoning
Causal3D is a comprehensive benchmark designed to evaluate models' abilities to uncover latent causal relations from structured and visual data. This dataset integrates 3D-rendered scenes with tabular causal annotations, providing a unified testbed for advancing causal discovery, causal representation learning, and causal reasoning with vision-language models (VLMs) and large language models (LLMs).
## 🖼️ Visual Previews
Below are example images from different Causal3D scenes:
*Preview renderings are available for the following scenes: parabola, convex, magnetic, pendulum, reflection, seesaw, spring, and water_flow.*
## 🚀 Usage
### 🔹 Option 1: Load from Hugging Face
You can easily load a specific scene using the Hugging Face `datasets` library:
```python
from datasets import load_dataset

dataset = load_dataset(
    "LLDDSS/Causal3D",
    name="real_scenes_Real_Parabola",
    download_mode="force_redownload",  # Optional: force re-download
    trust_remote_code=True,            # Required for custom dataset loading
)
print(dataset)
```
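Once the call returns, you can inspect what a scene provides. The snippet below is a minimal sketch, assuming the scene loads as a standard `DatasetDict`; the actual split names and feature keys depend on the scene configuration:

```python
# Illustrative only: split names and feature schemas vary by scene.
split_name = list(dataset.keys())[0]   # e.g. "train"
scene = dataset[split_name]

print(scene.features)   # available fields (images, object attributes, ...)
print(scene[0])         # first record of the chosen split
```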
### 🔹 Option 2: Download via Kaggle + Croissant
```python
import mlcroissant as mlc
import pandas as pd

# Load the dataset metadata from Kaggle
croissant_dataset = mlc.Dataset(
    "https://www.kaggle.com/datasets/dsliu0011/causal3d-image-dataset/croissant/download"
)
record_sets = croissant_dataset.metadata.record_sets
print(record_sets)

# Materialize the first record set as a pandas DataFrame
df = pd.DataFrame(croissant_dataset.records(record_set=record_sets[0].uuid))
print(df.head())
```
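If you want to reuse the table offline, you can persist the materialized records so that later runs skip the Kaggle download (the output path below is arbitrary):

```python
# Cache the materialized record set locally for offline reuse.
df.to_csv("causal3d_records.csv", index=False)
```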
## 📘 Overview
While recent progress in AI and computer vision has been remarkable, there remains a major gap in evaluating causal reasoning over complex visual inputs. Causal3D bridges this gap by providing:
- 19 curated 3D-scene datasets simulating diverse real-world causal phenomena.
- Paired tabular causal graphs and image observations across multiple views and backgrounds.
- Benchmarks for evaluating models in both structured (tabular) and unstructured (image) modalities.
## 🧩 Dataset Structure
Each sub-dataset (scene) contains:
- `images/`: Rendered images under different camera views and backgrounds.
- `tabular.csv`: Instance-level annotations, including object attributes in the causal graph.
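As a minimal sketch of working with a downloaded scene directly on disk (the scene path below is hypothetical, and image file names and table columns are scene-specific):

```python
import os

import pandas as pd
from PIL import Image

# Hypothetical local path to one downloaded scene; adjust to your layout.
scene_dir = "Causal3D/real_scenes_Real_Parabola"

# Instance-level attributes describing the scene's causal variables.
table = pd.read_csv(os.path.join(scene_dir, "tabular.csv"))
print(table.columns.tolist())

# Rendered views under different cameras/backgrounds live in images/.
image_dir = os.path.join(scene_dir, "images")
image_files = sorted(os.listdir(image_dir))
first_view = Image.open(os.path.join(image_dir, image_files[0]))
print(len(image_files), "images,", len(table), "table rows,", first_view.size)
```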
## 🎯 Evaluation Tasks
Causal3D supports a range of causal reasoning tasks, including:
- Causal discovery from image sequences or tables (a tabular example follows this list)
- Intervention prediction under modified object states or backgrounds
- Counterfactual reasoning across views
- VLM-based causal inference given multimodal prompts
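For instance, the first task can be run directly on a scene's `tabular.csv` with an off-the-shelf constraint-based method. The sketch below uses the PC implementation from the `causal-learn` package on the numeric columns; the file path and significance level are illustrative, not the benchmark's official protocol:

```python
import numpy as np
import pandas as pd
from causallearn.search.ConstraintBased.PC import pc

# Hypothetical path to one scene's tabular annotations.
table = pd.read_csv("Causal3D/real_scenes_Real_Parabola/tabular.csv")
data = table.select_dtypes(include=[np.number]).to_numpy()

# PC with the default Fisher-z independence test for continuous variables.
cg = pc(data, alpha=0.05)
print(cg.G)  # estimated causal graph over the numeric columns
```

The recovered graph can then be compared against the scene's annotated causal structure to score a discovery method.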
## 📊 Benchmark Results
We evaluate a diverse set of methods:
- Classical causal discovery: PC, GES, NOTEARS
- Causal representation learning: CausalVAE, ICM-based encoders
- Vision-Language and Large Language Models: GPT-4V, Claude-3.5, Gemini-1.5
Key Findings:
- As causal structures grow more complex, model performance drops significantly without strong prior assumptions.
- A noticeable performance gap exists between models trained on structured data and those applied directly to visual inputs.