---
language:
  - en
pretty_name: Light-Stage OLAT Subsurface-Scattering Dataset
tags:
  - computer-vision
  - 3d-reconstruction
  - subsurface-scattering
  - gaussian-splatting
  - inverse-rendering
  - photometric-stereo
  - light-stage
  - olat
  - multi-view
  - multi-light
  - image
license: other
task_categories:
  - image-to-3d
  - other
size_categories:
  - 10K<n<100K
dataset_info:
  features:
    - name: image
      dtype: image
    - name: camera_pose
      dtype: json
    - name: light_pose
      dtype: json
    - name: mask
      dtype: image
  splits:
    - name: train
      num_bytes: 30000000000
      num_examples: 30000
    - name: test
      num_bytes: 7000000000
      num_examples: 7000
  download_size: 30000000000
  dataset_size: 37000000000
configs:
  - config_name: real_world
    data_files:
      - split: train
        path: real_world/*/transforms_train.json
      - split: test
        path: real_world/*/transforms_test.json
  - config_name: synthetic
    data_files:
      - split: train
        path: synthetic/*_full/transforms_train.json
      - split: test
        path: synthetic/*_full/transforms_test.json
  - config_name: synthetic_small
    data_files:
      - split: train
        path: synthetic/*_small/transforms_train.json
      - split: test
        path: synthetic/*_small/transforms_test.json
      - split: eval
        path: synthetic/*_small/transforms_eval.json
---

🕯️ Light-Stage OLAT Subsurface-Scattering Dataset

Companion data for the paper "Subsurface Scattering for 3D Gaussian Splatting"

This README documents only the dataset.
A separate repo covers the training / rendering code: https://github.com/cgtuebingen/SSS-GS

(Figure: dataset overview)

Overview

Subsurface scattering (SSS) gives translucent materials (wax, soap, jade, skin) their distinctive soft glow. Our paper introduces SSS-GS, the first 3D Gaussian Splatting framework that jointly reconstructs shape, BRDF, and volumetric SSS while rendering at real-time frame rates. Training such a model requires dense multi-view, multi-light OLAT (one-light-at-a-time) data.

This dataset delivers exactly that:

  • 25 objects – 20 captured on a physical light-stage, 5 rendered in a synthetic stage
  • > 37k images (≈ 1 TB raw / ≈ 30 GB processed) with known camera & light poses
  • Ready-to-use JSON transform files compatible with NeRF & 3D GS toolchains
  • Processed to 800 px images + masks; raw 16 MP capture available on request

Applications

  • Research on SSS, inverse rendering, radiance-field relighting, and differentiable shading
  • Benchmarking OLAT pipelines or light-stage calibration
  • Teaching datasets for photometric 3D reconstruction

Quick Start

# Download and extract one real-world object
curl -L https://…/real_world/candle.tar | tar -x
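
The object archive can also be fetched programmatically. Below is a minimal Python sketch, assuming the dataset is hosted on the Hugging Face Hub and downloaded with huggingface_hub; the repo_id is a placeholder, so substitute the actual dataset repository.

# Fetch one object archive from the Hub and unpack it.
# NOTE: repo_id is a placeholder -- replace it with the real dataset repo.
import tarfile
from huggingface_hub import hf_hub_download

tar_path = hf_hub_download(
    repo_id="<org>/<dataset-name>",      # placeholder
    filename="real_world/candle.tar",
    repo_type="dataset",
)
with tarfile.open(tar_path) as tar:
    tar.extractall("data/")              # unpacks to data/<object>/...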

Directory Layout

dataset_root/
├── real_world/          # Captured objects (processed, ready to train)
│   └── <object>.tar     # Each tar = one object (≈ 4–8 GB)
└── synthetic/           # Procedurally rendered objects
    ├── <object>_full/   # full-resolution (800 px)
    └── <object>_small/  # 256 px "quick-train" version

Inside a real-world tar

<object>/
├── resized/                 # θ_φ_board_i.png  (≈ 800 × 650 px)
├── transforms_train.json    # camera / light metadata (train split)
├── transforms_test.json     # camera / light metadata (test split)
├── light_positions.json     # all θ_φ_board_i → (x,y,z)
├── exclude_list.json        # bad views (lens flare, matting error, …)
└── cam_lights_aligned.png   # sanity-check visualisation

Raw capture: full-resolution, unprocessed Bayer-pattern images (≈ 1 TB per object) are kept offline; contact us to arrange a transfer.
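
The per-object JSON files are small and easy to load directly. A minimal sketch for dropping excluded views, assuming light_positions.json maps each image stem to an (x, y, z) triple and exclude_list.json is a flat list of image stems (both layouts are assumptions; check your object's files). The object path is hypothetical.

# Load per-object metadata and filter out views listed in exclude_list.json.
# Assumed layouts: light_positions.json = {"theta_..._board_1": [x, y, z], ...}
#                  exclude_list.json    = ["theta_..._board_7", ...]
import json
from pathlib import Path

obj = Path("data/candle")                 # hypothetical object folder
lights = json.loads((obj / "light_positions.json").read_text())
excluded = set(json.loads((obj / "exclude_list.json").read_text()))

valid = {name: xyz for name, xyz in lights.items() if name not in excluded}
print(f"{len(valid)} usable OLAT views, {len(excluded)} excluded")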

Inside a synthetic object folder

<object>_full/
├── <object>.blend         # Blender scene with 112 HDR stage lights
├── train/                 # r_<cam>_l_<light>.png  (800 × 800 px)
├── test/                  # r_<cam>_l_<light>.png  (800 × 800 px)
├── eval/                  # only in "_small" subsets
├── transforms_train.json  # camera / light metadata (train split)
└── transforms_test.json   # camera / light metadata (test split)

The small variant differs only in image resolution & optional eval/.

Data Collection

Real-World Subset

Capture Setup:

  • Stage: 4 m diameter light-stage with 167 individually addressable LEDs
  • Camera: FLIR Oryx 12 MP with 35 mm F-mount, motorized turntable & vertical rail
  • Processing: COLMAP SfM, automatic masking (SAM + ViTMatte), resize → PNG

| Objects | Avg. views | Lights / view | Resolution | Masks |
| --- | --- | --- | --- | --- |
| 20 | 158 | 167 | 800 × 650 px | α-mattes |

(Figure: preprocessing pipeline)

Synthetic Subset

Rendering Setup:

  • Models: Stanford 3D Scans and BlenderKit
  • Renderer: Blender Cycles with spectral SSS (Principled BSDF)
  • Lights: 112 positions (7 rings × 16 lights per ring); 200 test cameras on a NeRF-style spiral path (see the layout sketch after the table below)

| Variant | Images | Views × lights | Resolution | Notes |
| --- | --- | --- | --- | --- |
| _full | 11,200 | 100 × 112 | 800² | Filmic tonemapping |
| _small | 1,500 | 15 × 100 | 256² | Quick prototyping |
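
For intuition, a 7-ring × 16-light rig can be laid out with a few lines of spherical-coordinate math. This is purely an illustrative sketch: the actual ring elevations and radius used for rendering live in the .blend scene and the transforms JSONs, and the evenly spaced values below are assumptions.

# Illustrative 7 x 16 light layout on a hemisphere.
# Elevations and radius are assumptions, not the dataset's actual rig values.
import numpy as np

radius = 2.0                                        # metres (assumed)
elevations = np.deg2rad(np.linspace(10, 80, 7))     # 7 rings, assumed spacing
azimuths = np.deg2rad(np.arange(16) * 360.0 / 16)   # 16 lights per ring

positions = np.array([
    [radius * np.cos(el) * np.cos(az),
     radius * np.cos(el) * np.sin(az),
     radius * np.sin(el)]
    for el in elevations for az in azimuths
])
print(positions.shape)                              # (112, 3)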

File & Naming Conventions

  • Real images: theta_<θ>_phi_<φ>_board_<id>.png
    θ and φ are given in degrees; board 0-195 indexes the LED PCBs.
  • Synthetic images: r_<camera>_l_<light>.png
  • JSON schema
    {
      "camera_angle_x": 0.3558,
      "frames": [{
        "file_paths": ["resized/theta_10.0_phi_0.0_board_1", …],
        "light_positions": [[x,y,z], …],   // metres, stage origin
        "transform_matrix": [[...], ...],  // 4×4 extrinsic
        "width": 800, "height": 650, "cx": 400.0, "cy": 324.5
      }]
    }
    
    For synthetic objects the structure is identical, with file paths named r_<cam>_l_<light>; a parsing sketch follows below.
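
Putting these conventions together: the sketch below reads a transforms file, recovers the pinhole focal length from camera_angle_x (the usual NeRF convention), and pairs every image with its light position and camera pose. Key names follow the schema above; the object path is hypothetical, and the .png suffix is appended because file_paths are stored without an extension.

# Pair every OLAT image with its camera pose and point-light position.
import json, math
from pathlib import Path

obj = Path("data/candle")                           # hypothetical object folder
meta = json.loads((obj / "transforms_train.json").read_text())

samples = []
for frame in meta["frames"]:
    c2w = frame["transform_matrix"]                 # 4×4 camera-to-world matrix
    # Pinhole focal length in pixels from the horizontal field of view.
    focal = 0.5 * frame["width"] / math.tan(0.5 * meta["camera_angle_x"])
    for rel, light in zip(frame["file_paths"], frame["light_positions"]):
        samples.append({
            "image": obj / f"{rel}.png",            # stored without extension
            "light_xyz": light,                     # metres, stage origin
            "c2w": c2w,
            "focal": focal,
        })
print(f"{len(samples)} image/light pairs")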

Licensing & Third-Party Assets

| Asset | Source | License / Note |
| --- | --- | --- |
| Synthetic models | Stanford 3D Scans | Varies (non-commercial / research) |
| Synthetic models | BlenderKit | CC-0, CC-BY, or Royalty-Free (check per-asset page) |
| HDR env-maps | Poly Haven | CC-0 |
| Code | github.com/cgtuebingen/SSS-GS | MIT (see repo) |

The dataset is released for non-commercial research and educational use.
If you plan to redistribute or use individual synthetic assets commercially, verify the upstream license first.

Citation

If you use this dataset, please cite the paper:

@inproceedings{sss_gs,
 author = {Dihlmann, Jan-Niklas and Majumdar, Arjun and Engelhardt, Andreas and Braun, Raphael and Lensch, Hendrik P.A.},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {A. Globerson and L. Mackey and D. Belgrave and A. Fan and U. Paquet and J. Tomczak and C. Zhang},
 pages = {121765--121789},
 publisher = {Curran Associates, Inc.},
 title = {Subsurface Scattering for Gaussian Splatting},
 url = {https://proceedings.neurips.cc/paper_files/paper/2024/file/dc72529d604962a86b7730806b6113fa-Paper-Conference.pdf},
 volume = {37},
 year = {2024}
}

Contact & Acknowledgements

Questions, raw-capture requests, or pull-requests?
📧 jan-niklas.dihlmann (at) uni-tuebingen.de

This work was funded by DFG (EXC 2064/1, SFB 1233) and the Tübingen AI Center.