---
license: cc-by-4.0
language:
- en
size_categories:
- 100K<n<1M
---

# Causal3D

## 🎬 Scenes

Available scenes include:
- parabola
- convex
- magnetic
- pendulum
- reflection
- seesaw
- spring
- water_flow

## πŸ“š Usage

#### πŸ”Ή Option 1: Load from Hugging Face

You can load a specific scene with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset(
    "LLDDSS/Causal3D",
    name="real_scenes_Real_Parabola",
    download_mode="force_redownload",  # Optional: force re-download
    trust_remote_code=True,            # Required for custom dataset loading
)
print(dataset)
```

#### πŸ”Ή Option 2: Download via [**Kaggle**](https://www.kaggle.com/datasets/dsliu0011/causal3d-image-dataset) + Croissant

```python
import mlcroissant as mlc
import pandas as pd

# Load the dataset metadata from Kaggle
croissant_dataset = mlc.Dataset(
    "https://www.kaggle.com/datasets/dsliu0011/causal3d-image-dataset/croissant/download"
)
record_sets = croissant_dataset.metadata.record_sets
print(record_sets)

# Materialize the first record set as a DataFrame
df = pd.DataFrame(croissant_dataset.records(record_set=record_sets[0].uuid))
print(df.head())
```

---

## πŸ“Œ Overview

While recent progress in AI and computer vision has been remarkable, a major gap remains in evaluating causal reasoning over complex visual inputs. **Causal3D** bridges this gap by providing:

- **19 curated 3D-scene datasets** simulating diverse real-world causal phenomena.
- Paired **tabular causal graphs** and **image observations** across multiple views and backgrounds.
- Benchmarks for evaluating models on both **structured** (tabular) and **unstructured** (image) modalities.

---

## 🧩 Dataset Structure

Each sub-dataset (scene) contains:

- `images/`: rendered images under different camera views and backgrounds.
- `tabular.csv`: instance-level annotations, including the object attributes that appear in the scene's causal graph.

A sketch of pairing the two modalities is given at the end of this card.

---

## 🎯 Evaluation Tasks

Causal3D supports a range of causal reasoning tasks, including:

- **Causal discovery** from image sequences or tables
- **Intervention prediction** under modified object states or backgrounds
- **Counterfactual reasoning** across views
- **VLM-based causal inference** given multimodal prompts

---

## πŸ“Š Benchmark Results

We evaluate a diverse set of methods (a runnable baseline sketch appears at the end of this card):

- **Classical causal discovery**: PC, GES, NOTEARS
- **Causal representation learning**: CausalVAE, ICM-based encoders
- **Vision-Language and Large Language Models**: GPT-4V, Claude-3.5, Gemini-1.5

**Key findings**:

- As causal structures grow more complex, **model performance drops significantly** without strong prior assumptions.
- A noticeable performance gap exists between models trained on structured data and those applied directly to visual inputs.

---
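## πŸ§ͺ Example: Pairing Images with Tabular Annotations

A minimal sketch of joining the two modalities for one scene, as referenced in the Dataset Structure section. The local scene path, the `image` filename column, and the attribute columns are illustrative assumptions about the on-disk layout, not guaranteed field names.

```python
from pathlib import Path

import pandas as pd
from PIL import Image

# Hypothetical local path to one downloaded scene; adjust to your copy.
scene_dir = Path("Causal3D/parabola")

# Instance-level annotations for the scene.
df = pd.read_csv(scene_dir / "tabular.csv")
print(df.columns.tolist())  # inspect which attribute columns this scene provides

# Assumption: each row references its rendering via an image-filename column.
# If the column is named differently in a given scene, adapt the key below.
for _, row in df.head(3).iterrows():
    img = Image.open(scene_dir / "images" / row["image"])
    print(img.size, dict(row))
```

---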
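## 🧰 Example: Running a Classical Discovery Baseline

For the classical baselines (PC, GES, NOTEARS), the tabular annotations can be fed to off-the-shelf implementations. Below is a minimal sketch using the PC algorithm from the `causal-learn` package; dropping non-numeric columns is an illustrative preprocessing assumption, and the recovered graph should be scored against the scene's ground-truth causal graph.

```python
import pandas as pd
from causallearn.search.ConstraintBased.PC import pc

# Assumption: the causal variables in tabular.csv are numeric; drop any
# bookkeeping columns (e.g., image filenames) before running discovery.
df = pd.read_csv("Causal3D/parabola/tabular.csv")
data = df.select_dtypes("number").to_numpy()

# PC with a Fisher-z conditional-independence test (continuous data).
cg = pc(data, alpha=0.05, indep_test="fisherz")

# Adjacency matrix of the estimated graph, for comparison with ground truth.
print(cg.G.graph)
```

---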