🕯️ Light-Stage OLAT Subsurface-Scattering Dataset
Companion data for the paper "Subsurface Scattering for 3D Gaussian Splatting" (NeurIPS 2024)
This README documents only the dataset.
A separate repo covers the training / rendering code: https://github.com/cgtuebingen/SSS-GS
Overview
Subsurface scattering (SSS) gives translucent materials (wax, soap, jade, skin) their distinctive soft glow. Our paper introduces SSS-GS, the first 3D Gaussian-Splatting framework that jointly reconstructs shape, BRDF, and volumetric SSS while rendering at real-time frame rates. Training such a model requires dense multi-view, multi-light one-light-at-a-time (OLAT) data.
This dataset delivers exactly that:
- 25 objects – 20 captured on a physical light-stage, 5 rendered in a synthetic stage
- More than 37k images (≈ 1 TB raw / ≈ 30 GB processed) with known camera and light poses
- Ready-to-use JSON transform files compatible with NeRF & 3D GS toolchains
- Processed to 800 px images + masks; raw 16 MP capture available on request
Applications
- Research on SSS, inverse-rendering, radiance-field relighting, differentiable shading
- Benchmarking OLAT pipelines or light-stage calibration
- Teaching datasets for photometric 3D reconstruction
Quick Start
```bash
# Download and extract one real-world object
curl -L https://…/real_world/candle.tar | tar -x
```
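If you prefer scripting the download, here is a minimal Python sketch using only the standard library; the base URL is the same placeholder as in the curl command above and must be replaced with the real host.

```python
# Sketch: download and extract one object archive (stdlib only).
import tarfile
import urllib.request
from pathlib import Path

BASE_URL = "https://…/real_world"   # placeholder URL, as in the curl example
OBJECT = "candle"

archive = Path(f"{OBJECT}.tar")
urllib.request.urlretrieve(f"{BASE_URL}/{OBJECT}.tar", archive)

with tarfile.open(archive) as tar:
    tar.extractall(".")             # unpacks to ./<object>/ as described below
```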
Directory Layout
```
dataset_root/
├── real_world/           # captured objects (processed, ready to train)
│   └── <object>.tar      # each tar = one object (≈ 4–8 GB)
└── synthetic/            # procedurally rendered objects
    ├── <object>_full/    # full resolution (800 px)
    └── <object>_small/   # 256 px "quick-train" version
```
Inside a real-world tar
```
<object>/
├── resized/                 # theta_<θ>_phi_<φ>_board_<i>.png (≈ 800 × 650 px)
├── transforms_train.json    # camera & light metadata (train split)
├── transforms_test.json     # camera & light metadata (test split)
├── light_positions.json     # all theta_phi_board_i → (x, y, z)
├── exclude_list.json        # bad views (lens flare, matting error, …)
└── cam_lights_aligned.png   # sanity-check visualisation
```
Raw capture: the full-resolution, unprocessed Bayer-pattern images (≈ 1 TB per object) are kept offline; contact us to arrange a transfer.
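To show how the per-object files fit together, here is a minimal sketch that walks the train split of one extracted object. It assumes the JSON schema documented under "File & Naming Conventions" below, and that `exclude_list.json` is a flat list of frame paths (its exact format is not pinned down here).

```python
# Sketch: yield (image path, light position, camera pose) per OLAT frame,
# skipping frames named in exclude_list.json.
import json
from pathlib import Path

root = Path("candle")  # one extracted real-world object

transforms = json.loads((root / "transforms_train.json").read_text())
excluded = set(json.loads((root / "exclude_list.json").read_text()))  # assumed: list of frame paths

for frame in transforms["frames"]:
    cam_to_world = frame["transform_matrix"]              # 4 × 4 extrinsic
    for path, light in zip(frame["file_paths"], frame["light_positions"]):
        if path in excluded:
            continue
        image = root / f"{path}.png"                      # file_paths omit the extension
        # ... hand (image, light, cam_to_world) to your OLAT pipeline
```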
Inside a synthetic object folder
```
<object>_full/
├── <object>.blend           # Blender scene with 112 HDR stage lights
├── train/                   # r_<cam>_l_<light>.png (800 × 800 px)
├── test/                    # r_<cam>_l_<light>.png (800 × 800 px)
├── eval/                    # only in "_small" subsets
├── transforms_train.json    # camera & light metadata (train split)
└── transforms_test.json     # camera & light metadata (test split)
```
The `_small` variant differs only in image resolution and the optional `eval/` folder.
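Because every synthetic camera is rendered once per light, a natural first step is stacking all OLAT images of one view. A sketch, assuming the `r_<cam>_l_<light>.png` naming documented below and the `imageio` library for loading; the folder path is hypothetical.

```python
# Sketch: load the full OLAT stack for one synthetic camera.
from pathlib import Path

import imageio.v3 as iio
import numpy as np

train_dir = Path("<object>_full/train")  # hypothetical extracted path
cam = 0

# Sort by light index so the stack order matches the light numbering.
frames = sorted(
    train_dir.glob(f"r_{cam}_l_*.png"),
    key=lambda p: int(p.stem.split("_l_")[1]),
)
stack = np.stack([iio.imread(p) for p in frames])
print(stack.shape)  # expected (112, 800, 800, C) for a _full object
```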
Data Collection
Real-World Subset
Capture Setup:
- Stage: 4 m diameter light-stage with 167 individually addressable LEDs
- Camera: FLIR Oryx 12 MP with a 35 mm F-mount lens; views covered via motorized turntable and vertical rail
- Processing: COLMAP SfM, automatic masking (SAM + ViTMatte), resize → PNG
| Objects | Avg. Views | Lights/View | Resolution | Masks |
|---|---|---|---|---|
| 20 | 158 | 167 | 800 × 650 px | α-mattes |
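The α-mattes are typically used to composite each frame onto a known background before training. A sketch, assuming the matte is stored in the PNG alpha channel (adapt if your extraction ships separate mask files); the filename is the example from the JSON schema below.

```python
# Sketch: composite an OLAT frame onto a solid background via its alpha matte.
import imageio.v3 as iio
import numpy as np

rgba = iio.imread("candle/resized/theta_10.0_phi_0.0_board_1.png")
rgba = rgba.astype(np.float32) / 255.0

rgb, alpha = rgba[..., :3], rgba[..., 3:]    # alpha assumed in channel 3
black_bg = rgb * alpha                       # black background
white_bg = rgb * alpha + (1.0 - alpha)       # or: white background
```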
Synthetic Subset
Rendering Setup:
- Models: Stanford 3D Scans and BlenderKit
- Renderer: Blender Cycles with spectral SSS (Principled BSDF)
- Lights: 112 positions (7 rings × 16 lights); 200 test cameras on a NeRF-style spiral path
| Variant | Images | Views × Lights | Resolution | Notes |
|---|---|---|---|---|
| `_full` | 11,200 | 100 × 112 | 800² px | Filmic tonemapping |
| `_small` | 1,500 | 15 × 100 | 256² px | Quick prototyping |
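For intuition, the 7 × 16 light rig can be reconstructed as points on a sphere. The ring elevations and stage radius below are illustrative assumptions only; the `.blend` file in each `_full` folder holds the authoritative positions.

```python
# Sketch: 112 light positions = 7 elevation rings × 16 azimuth steps.
# Radius and elevation spacing are assumed, not read from the .blend scene.
import numpy as np

RADIUS = 2.0                                          # metres (assumed)
elevations = np.deg2rad(np.linspace(-60.0, 60.0, 7))  # assumed spacing
azimuths = np.deg2rad(np.arange(16) * 22.5)           # 360° / 16

lights = np.array([
    [RADIUS * np.cos(el) * np.cos(az),
     RADIUS * np.cos(el) * np.sin(az),
     RADIUS * np.sin(el)]
    for el in elevations
    for az in azimuths
])
assert lights.shape == (112, 3)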
File & Naming Conventions
- Real images: `theta_<θ>_phi_<φ>_board_<id>.png` (θ, φ in degrees; `board` 0-195 indexes the LED PCBs)
- Synthetic images: `r_<camera>_l_<light>.png`
- JSON schema (shown for a real-world object; synthetic transform files are identical in structure, with frame names following `r_<cam>_l_<light>`):

```json
{
  "camera_angle_x": 0.3558,
  "frames": [{
    "file_paths": ["resized/theta_10.0_phi_0.0_board_1", …],
    "light_positions": [[x, y, z], …],   // metres, stage origin
    "transform_matrix": [[…], …],        // 4 × 4 extrinsic
    "width": 800, "height": 650,
    "cx": 400.0, "cy": 324.5
  }]
}
```
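A small sketch for parsing the real-world filename convention; note that `light_positions.json` (not the angles in the name) is the authoritative source for 3D light positions.

```python
# Sketch: extract theta, phi (degrees) and board id from a real-world filename.
import re

PATTERN = re.compile(r"theta_(?P<theta>[\d.]+)_phi_(?P<phi>[\d.]+)_board_(?P<board>\d+)")

m = PATTERN.search("resized/theta_10.0_phi_0.0_board_1")
theta, phi, board = float(m["theta"]), float(m["phi"]), int(m["board"])
print(theta, phi, board)  # 10.0 0.0 1
```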
Licensing & Third-Party Assets
| Asset | Source | License / Note |
|---|---|---|
| Synthetic models | Stanford 3D Scans | Varies (non-commercial / research) |
| Synthetic models | BlenderKit | CC-0, CC-BY, or Royalty-Free (check per-asset page) |
| HDR env-maps | Poly Haven | CC-0 |
| Code | https://github.com/cgtuebingen/SSS-GS | MIT (see repo) |
The dataset is released for non-commercial research and educational use.
If you plan to redistribute or use individual synthetic assets commercially, verify the upstream license first.
Citation
If you use this dataset, please cite the paper:
@inproceedings{sss_gs,
author = {Dihlmann, Jan-Niklas and Majumdar, Arjun and Engelhardt, Andreas and Braun, Raphael and Lensch, Hendrik P.A.},
booktitle = {Advances in Neural Information Processing Systems},
editor = {A. Globerson and L. Mackey and D. Belgrave and A. Fan and U. Paquet and J. Tomczak and C. Zhang},
pages = {121765--121789},
publisher = {Curran Associates, Inc.},
title = {Subsurface Scattering for 3D Gaussian Splatting},
url = {https://proceedings.neurips.cc/paper_files/paper/2024/file/dc72529d604962a86b7730806b6113fa-Paper-Conference.pdf},
volume = {37},
year = {2024}
}
Contact & Acknowledgements
Questions, raw-capture requests, or pull requests?
📧 jan-niklas.dihlmann (at) uni-tuebingen.de
This work was funded by DFG (EXC 2064/1, SFB 1233) and the Tübingen AI Center.