---
language:
- en
pretty_name: "Light-Stage OLAT Subsurface-Scattering Dataset"
tags:
- computer-vision
- 3d-reconstruction
- subsurface-scattering
- gaussian-splatting
- inverse-rendering
- photometric-stereo
- light-stage
- olat
- multi-view
- multi-light
- image
license: "other"
task_categories:
- image-to-3d
- other
size_categories:
- 10K<n<100K
dataset_info:
  features:
  - name: image
    dtype: image
  - name: camera_pose
    dtype: json
  - name: light_pose
    dtype: json
  - name: mask
    dtype: image
  splits:
  - name: train
    num_bytes: 30000000000
    num_examples: 30000
  - name: test
    num_bytes: 7000000000
    num_examples: 7000
  download_size: 30000000000
  dataset_size: 37000000000
configs:
- config_name: real_world
  data_files:
  - split: train
    path: real_world/*/transforms_train.json
  - split: test
    path: real_world/*/transforms_test.json
- config_name: synthetic
  data_files:
  - split: train
    path: synthetic/*_full/transforms_train.json
  - split: test
    path: synthetic/*_full/transforms_test.json
- config_name: synthetic_small
  data_files:
  - split: train
    path: synthetic/*_small/transforms_train.json
  - split: test
    path: synthetic/*_small/transforms_test.json
  - split: eval
    path: synthetic/*_small/transforms_eval.json
---

# 🕯️ Light-Stage OLAT Subsurface-Scattering Dataset

*Companion data for the paper **"Subsurface Scattering for 3D Gaussian Splatting"***

> **This README documents *only the dataset*.**  
> A separate repo covers the training / rendering **code**: <https://github.com/cgtuebingen/SSS-GS>

<p align="center">
  <img src="other/dataset.png" width="80%" alt="Dataset overview"/>
</p>

## Overview

Subsurface scattering (SSS) gives translucent materials (wax, soap, jade, skin) their distinctive soft glow. Our paper introduces **SSS-GS**, the first 3D Gaussian-Splatting framework that *jointly* reconstructs shape, BRDF, and volumetric SSS while rendering at real-time frame rates. Training such a model requires dense **multi-view ⇄ multi-light OLAT** data.

This dataset delivers exactly that:

* **25 objects** – 20 captured on a physical light-stage, 5 rendered in a synthetic stage
* **> 37k images** (≈ 1 TB raw / ≈ 30 GB processed) with **known camera & light poses**
* Ready-to-use JSON transform files compatible with NeRF & 3D GS toolchains
* Processed to 800 px images + masks; **raw 16 MP capture** available on request

### Applications

* Research on SSS, inverse-rendering, radiance-field relighting, differentiable shading
* Benchmarking OLAT pipelines or light-stage calibration
* Teaching datasets for photometric 3D reconstruction

## Quick Start

```bash
# Download and extract one real-world object
curl -L https://…/real_world/candle.tar | tar -xf -
```
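
Once extracted, everything is driven by the `transforms_*.json` files (schema under *File & Naming Conventions* below). A minimal Python sketch for peeking at one view, assuming the tar above unpacked to `./candle/`:

```python
import json

# Assumes the Quick Start tar unpacked to ./candle/ -- any object works the same way.
with open("candle/transforms_train.json") as f:
    meta = json.load(f)

print("camera_angle_x:", meta["camera_angle_x"])
frame = meta["frames"][0]                                  # one camera view
print("OLAT images in this view:", len(frame["file_paths"]))
print("first light position (x, y, z):", frame["light_positions"][0])
```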

## Directory Layout
```
dataset_root/
├── real_world/          # Captured objects (processed, ready to train)
│   └── <object>.tar     # Each tar = one object (≈ 4–8 GB)
└── synthetic/           # Procedurally rendered objects
    ├── <object>_full/   # full-resolution (800 px)
    └── <object>_small/  # 256 px "quick-train" version
```

### Inside a **real-world** tar
```
<object>/
├── resized/                 # θ_φ_board_i.png  (≈ 800 × 650 px)
├── transforms_train.json    # (train-set only) ⇄  camera / light metadata
├── transforms_test.json     # (test-set only) ⇄  camera / light metadata
├── light_positions.json     # all θ_φ_board_i → (x,y,z)
├── exclude_list.json        # bad views (lens flare, matting error, …)
└── cam_lights_aligned.png   # sanity-check visualisation
```
*Raw capture:* Full-resolution, unprocessed Bayer-pattern images (≈ 1 TB per object) are kept offline; contact us to arrange transfer.
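
To honour the exclude list when assembling a training set, something like the sketch below works. It *assumes* `exclude_list.json` is a flat JSON list of view names matching the stems used in `transforms_*.json`; verify the format against your download.

```python
import json

# Assumption: exclude_list.json is a flat list of view-name stems.
with open("candle/exclude_list.json") as f:
    excluded = set(json.load(f))

with open("candle/transforms_train.json") as f:
    meta = json.load(f)

for frame in meta["frames"]:
    frame["file_paths"] = [
        p for p in frame["file_paths"]
        if p.rsplit("/", 1)[-1] not in excluded   # drop flagged views
    ]
```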

### Inside a **synthetic** object folder
```
<object>_full/
├── <object>.blend         # Blender scene with 112 HDR stage lights
├── train/                 # r_<cam>_l_<light>.png (= 800 × 800 px)
├── test/                  # r_<cam>_l_<light>.png (= 800 × 800 px)
├── eval/                  # only in "_small" subsets
├── transforms_train.json  # (train-set only) ⇄  camera / light metadata
└── transforms_test.json   # (test-set only) ⇄  camera / light metadata
```
The *small* variant differs from *full* only in image resolution and the additional `eval/` split.

## Data Collection

### Real-World Subset

**Capture Setup:**
- **Stage**: 4 m diameter light-stage with 167 individually addressable LEDs
- **Camera**: FLIR Oryx 12 MP with a 35 mm F-mount lens; motorized turntable and vertical camera rail
- **Processing**: COLMAP SfM, automatic masking (SAM + ViTMatte), resize → PNG

| Objects | Avg. Views | Lights/View | Resolution | Masks |
|---------|------------|-------------|------------|-------|
| 20      | 158        | 167         | 800×650 px | α-mattes |

<p align="center">
  <img src="other/preprocessing.png" width="60%" alt="Preprocessing pipeline"/>
</p>
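
To use the α-mattes, a common first step is compositing each view onto black before training. The sketch below *assumes* the matte is stored as the alpha channel of the processed PNGs; check one file from your download before relying on this.

```python
import numpy as np
from PIL import Image

# Assumption: the alpha-matte lives in the PNG's alpha channel.
img = Image.open("candle/resized/theta_10.0_phi_0.0_board_1.png").convert("RGBA")
rgba = np.asarray(img, dtype=np.float32) / 255.0
rgb, alpha = rgba[..., :3], rgba[..., 3:]
on_black = rgb * alpha            # premultiply: composite onto a black background
```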

### Synthetic Subset

**Rendering Setup:**
- **Models**: Stanford 3D Scans and BlenderKit
- **Renderer**: Blender Cycles with spectral SSS (Principled BSDF)
- **Lights**: 112 positions (7 rings × 16 per ring); 200 test cameras on a NeRF-style spiral path

| Variant | Images | Views × Lights | Resolution | Notes |
|---------|--------|----------------|------------|-------|
| _full   | 11,200 | 100 × 112      | 800²       | Filmic tonemapping |
| _small  | 1,500  | 15 × 100       | 256²       | Quick prototyping |
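
For intuition, the 7-ring × 16-azimuth light grid can be reproduced as points on a sphere. This is an illustrative reconstruction only; the authoritative placement ships inside each `<object>.blend` file.

```python
import math

def stage_lights(n_rings=7, n_per_ring=16, radius=1.0):
    """Illustrative 7 x 16 = 112 light grid on a sphere (not the exact .blend layout)."""
    lights = []
    for i in range(n_rings):
        theta = math.pi * (i + 1) / (n_rings + 1)     # elevation, poles excluded
        for j in range(n_per_ring):
            phi = 2 * math.pi * j / n_per_ring        # azimuth
            lights.append((radius * math.sin(theta) * math.cos(phi),
                           radius * math.sin(theta) * math.sin(phi),
                           radius * math.cos(theta)))
    return lights

assert len(stage_lights()) == 112
```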

## File & Naming Conventions
* **Real images:** `theta_<θ>_phi_<φ>_board_<id>.png`  
  *θ, φ* in degrees; *board* 0–195 indexes the LED PCBs.  
* **Synthetic images:** `r_<camera>_l_<light>.png`  
* **JSON schema**  
  ```jsonc
  {
    "camera_angle_x": 0.3558,
    "frames": [{
      "file_paths": ["resized/theta_10.0_phi_0.0_board_1", …],
      "light_positions": [[x,y,z], …],   // metres, stage origin
      "transform_matrix": [[...], ...],  // 4×4 extrinsic
      "width": 800, "height": 650, "cx": 400.0, "cy": 324.5
    }]
  }
  ```
  Synthetic transform files share this exact structure; only the frame names follow `r_<cam>_l_<light>`.
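
Assuming `transform_matrix` follows the standard NeRF camera-to-world convention (worth sanity-checking against `cam_lights_aligned.png`), the camera center is its translation column. A sketch pairing each OLAT image with its camera and light, NumPy assumed:

```python
import json
import numpy as np

with open("candle/transforms_train.json") as f:
    meta = json.load(f)

frame = meta["frames"][0]
c2w = np.asarray(frame["transform_matrix"])   # assumed camera-to-world (NeRF style)
cam_center = c2w[:3, 3]                       # translation column = camera position

for path, light in zip(frame["file_paths"], frame["light_positions"]):
    print(f"{path}.png  cam={cam_center}  light={np.asarray(light)}")
```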

## Licensing & Third-Party Assets
| Asset | Source | License / Note |
|-------|--------|----------------|
| Synthetic models | [Stanford 3-D Scans](https://graphics.stanford.edu/data/3Dscanrep/) | Varies (non-commercial / research) |
|                  | [BlenderKit](https://www.blenderkit.com/) | CC-0, CC-BY or Royalty-Free (check per-asset page) |
| HDR env-maps     | [Poly Haven](https://polyhaven.com/) | CC-0 |
| Code             | [cgtuebingen/SSS-GS](https://github.com/cgtuebingen/SSS-GS) | MIT |

The dataset is released **for non-commercial research and educational use**.  
If you plan to redistribute or use individual synthetic assets commercially, verify the upstream license first.

## Citation
If you use this dataset, please cite the paper:

```bibtex
@inproceedings{sss_gs,
 author = {Dihlmann, Jan-Niklas and Majumdar, Arjun and Engelhardt, Andreas and Braun, Raphael and Lensch, Hendrik P.A.},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {A. Globerson and L. Mackey and D. Belgrave and A. Fan and U. Paquet and J. Tomczak and C. Zhang},
 pages = {121765--121789},
 publisher = {Curran Associates, Inc.},
 title = {Subsurface Scattering for Gaussian Splatting},
 url = {https://proceedings.neurips.cc/paper_files/paper/2024/file/dc72529d604962a86b7730806b6113fa-Paper-Conference.pdf},
 volume = {37},
 year = {2024}
}
```

## Contact & Acknowledgements
Questions, raw-capture requests, or pull requests?  
📧 `jan-niklas.dihlmann (at) uni-tuebingen.de`

This work was funded by DFG (EXC 2064/1, SFB 1233) and the Tübingen AI Center.