---
license: cc-by-nc-4.0
configs:
  - config_name: default
    data_files: split.csv
---

# SpatialLM Dataset


The SpatialLM dataset is a large-scale, high-quality synthetic dataset created by professional 3D designers and used in real-world production. It contains point clouds from 12,328 diverse indoor scenes comprising 54,778 rooms, each paired with rich ground-truth 3D annotations. The dataset provides a valuable additional resource for advancing research in indoor scene understanding, 3D perception, and related applications. For more details about the dataset construction, annotations, and benchmark tasks, please refer to the paper.

*Example point clouds and ground-truth layouts (a, b, c, d).*

## Dataset Structure

The dataset is organized into the following folder structure:

```
SpatialLM-Dataset/
├── pcd/                        # Point cloud PLY files for rooms
│   └── *.ply
├── layout/                     # GT room layout
│   └── *.txt
├── examples/                   # 10 point cloud and layout examples
│   ├── *.ply
│   └── *.txt
├── extract.sh                  # Extraction script
├── dataset_info.json           # Dataset configuration file for training
├── spatiallm_train.json        # SpatialLM conversations data for training
├── spatiallm_val.json          # SpatialLM conversations data for validation
├── spatiallm_test.json         # SpatialLM conversations data for testing
└── split.csv                   # Metadata CSV file
```

## Metadata

The dataset metadata is provided in the split.csv file with the following columns:

  • id: Unique identifier for each sampled point cloud and layout following the naming convention {scene_id}_{room_id}_{sample} (e.g., scene_001523_00_2)
  • room_type: The functional type of each room (e.g., bedroom, living room)
  • scene_id: Unique identifier for multi-room apartment scenes
  • room_id: Unique identifier for individual rooms within a scene
  • sample: Point cloud sampling configuration for each room (4 types available):
    • 0: Most complete observations (8 panoramic views randomly sampled)
    • 1: Most sparse observations (8 perspective views randomly sampled)
    • 2: Less complete observations (16 perspective views randomly sampled)
    • 3: Less sparse observations (24 perspective views randomly sampled)
  • split: Dataset partition assignment (train, val, test, reserved)

The dataset is divided into 11,328/500/500 scenes for the train/val/test splits, yielding 199,286/500/500 sampled point clouds respectively; for simplicity, the val/test point clouds are randomly selected from the multiple samples of each room.
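A minimal sketch of working with this metadata using pandas (one of the libraries the dataset page lists). The rows below are made-up illustrative values that mirror the `split.csv` schema described above; with the real dataset you would load the file with `pd.read_csv("split.csv")` instead.

```python
import pandas as pd

# Illustrative rows mirroring the split.csv schema (values are made up;
# the real file ships with the dataset).
rows = [
    {"id": "scene_001523_00_2", "room_type": "bedroom", "split": "train"},
    {"id": "scene_008456_00_3", "room_type": "living room", "split": "test"},
]
df = pd.DataFrame(rows)

# Recover scene_id, room_id, and sample from the naming convention
# {scene_id}_{room_id}_{sample}, e.g. scene_001523_00_2.
parts = df["id"].str.rsplit("_", n=2, expand=True)
df["scene_id"] = parts[0]
df["room_id"] = parts[1]
df["sample"] = parts[2].astype(int)

# Filter to one partition, e.g. the train split.
train = df[df["split"] == "train"]
print(train["id"].tolist())
```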

## Data Extraction

Point clouds and layouts are compressed in zip files. To extract the files, run the following script:

```bash
cd SpatialLM-Dataset
chmod +x extract.sh
./extract.sh
```

## Conversation Format

The spatiallm_train.json, spatiallm_val.json, and spatiallm_test.json files follow the SpatialLM format with ShareGPT-style conversations:

```json
{
  "conversations": [
    {
      "from": "human",
      "value": "<point_cloud>Detect walls, doors, windows, boxes. The reference code is as followed: ..."
    },
    {
      "from": "gpt",
      "value": "<|layout_s|>wall_0=...<|layout_e|>"
    }
  ],
  "point_clouds": ["pcd/ID.ply"]
}
```
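As a sketch of consuming this format, the snippet below parses a single toy record (the `"value"` strings are the placeholders shown above, not real layout data) and strips the `<|layout_s|>` / `<|layout_e|>` special tokens from the assistant turn to recover the raw layout string:

```python
import json

# A minimal toy record in the ShareGPT-style format shown above.
record = json.loads("""
{
  "conversations": [
    {"from": "human", "value": "<point_cloud>Detect walls, doors, windows, boxes."},
    {"from": "gpt", "value": "<|layout_s|>wall_0=...<|layout_e|>"}
  ],
  "point_clouds": ["pcd/scene_001523_00_2.ply"]
}
""")

# The GT layout is wrapped in <|layout_s|> ... <|layout_e|> tokens;
# strip them to obtain the raw layout string (Python 3.9+).
answer = next(t["value"] for t in record["conversations"] if t["from"] == "gpt")
layout_str = answer.removeprefix("<|layout_s|>").removesuffix("<|layout_e|>")
print(layout_str)
```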

## Usage

Use the SpatialLM codebase to read the point cloud and layout data:

```python
from spatiallm import Layout
from spatiallm.pcd import load_o3d_pcd

# Load a point cloud, e.g. one of the bundled examples
point_cloud = load_o3d_pcd("examples/scene_008456_00_3.ply")

# Load the corresponding GT layout
with open("examples/scene_008456_00_3.txt", "r") as f:
    layout_content = f.read()
layout = Layout(layout_content)
```

## Visualization

Use rerun to visualize the point cloud and the GT structured 3D layout output:

```bash
python visualize.py --point_cloud examples/scene_008456_00_3.ply --layout examples/scene_008456_00_3.txt --save scene_008456_00_3.rrd
rerun scene_008456_00_3.rrd
```

## SpatialGen Dataset

The photorealistic RGB/Depth/Normal/Semantic/Instance panoramic renderings and camera trajectories used to generate the SpatialLM point clouds are available through the SpatialGen project; please refer to it for access and details.

## Citation

If you find this work useful, please consider citing:

```bibtex
@inproceedings{SpatialLM,
  title     = {SpatialLM: Training Large Language Models for Structured Indoor Modeling},
  author    = {Mao, Yongsen and Zhong, Junhao and Fang, Chuan and Zheng, Jia and Tang, Rui and Zhu, Hao and Tan, Ping and Zhou, Zihan},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025}
}
```