---
license: cc-by-nc-4.0
configs:
  - config_name: default
    data_files: "split.csv"
---

# SpatialLM Dataset
The SpatialLM dataset is a large-scale, high-quality synthetic dataset created by professional 3D designers and used in real-world production. It contains point clouds from 12,328 diverse indoor scenes comprising 54,778 rooms, each paired with rich ground-truth 3D annotations. The SpatialLM dataset provides a valuable resource for advancing research in indoor scene understanding, 3D perception, and related applications. For more details on the dataset construction, annotations, and benchmark tasks, please refer to the [paper](https://arxiv.org/abs/2506.07491).
## Dataset Structure

The dataset is organized into the following folder structure:

```bash
SpatialLM-Dataset/
├── pcd/                    # Point cloud PLY files for rooms
│   └── {id}.ply
├── layout/                 # Ground-truth room layouts
│   └── {id}.txt
├── examples/               # 10 point cloud and layout examples
│   ├── {id}.ply
│   └── {id}.txt
├── extract.sh              # Extraction script
├── dataset_info.json       # Dataset configuration file for training
├── spatiallm_train.json    # SpatialLM conversation data for training
├── spatiallm_val.json      # SpatialLM conversation data for validation
├── spatiallm_test.json     # SpatialLM conversation data for testing
└── split.csv               # Metadata CSV file
```

## Metadata

The dataset metadata is provided in the `split.csv` file with the following columns:

- **id**: Unique identifier for each sampled point cloud and layout, following the naming convention `{scene_id}_{room_id}_{sample}` (e.g., `scene_001523_00_2`)
- **room_type**: The functional type of each room (e.g., bedroom, living room)
- **scene_id**: Unique identifier for multi-room apartment scenes
- **room_id**: Unique identifier for individual rooms within a scene
- **sample**: Point cloud sampling configuration for each room (4 types available):
  - **0**: Most complete observations (8 panoramic views randomly sampled)
  - **1**: Most sparse observations (8 perspective views randomly sampled)
  - **2**: Less complete observations (16 perspective views randomly sampled)
  - **3**: Less sparse observations (24 perspective views randomly sampled)
- **split**: Dataset partition assignment (`train`, `val`, `test`, `reserved`)

The dataset is divided into 11,328/500/500 scenes for the train/val/test splits, and 199,286/500/500 sampled point clouds accordingly; for the val/test splits, point clouds are randomly selected from the multiple samples of each room for simplicity. A minimal sketch for loading and filtering this metadata is given under "Loading Examples" below.

## Data Extraction

Point clouds and layouts are compressed in zip files. To extract them, run the following script:

```bash
cd SpatialLM-Dataset
chmod +x extract.sh
./extract.sh
```

## Conversation Format

The `spatiallm_train.json`, `spatiallm_val.json`, and `spatiallm_test.json` files follow the **SpatialLM format** with ShareGPT-style conversations (a sketch for reading these files is also included under "Loading Examples" below):

```json
{
  "conversations": [
    {
      "from": "human",
      "value": "Detect walls, doors, windows, boxes. The reference code is as followed: ..."
    },
    {
      "from": "gpt",
      "value": "<|layout_s|>wall_0=...<|layout_e|>"
    }
  ],
  "point_clouds": ["pcd/ID.ply"]
}
```

## Usage

Use the [SpatialLM code base](https://github.com/manycore-research/SpatialLM/tree/main) to read the point cloud and layout data:

```python
from spatiallm import Layout
from spatiallm.pcd import load_o3d_pcd

# Load a point cloud
point_cloud = load_o3d_pcd("examples/scene_008456_00_3.ply")

# Load the corresponding ground-truth layout
with open("examples/scene_008456_00_3.txt", "r") as f:
    layout_content = f.read()
layout = Layout(layout_content)
```

## Visualization

Use `rerun` to visualize the point cloud and the ground-truth structured 3D layout:

```bash
python visualize.py \
    --point_cloud examples/scene_008456_00_3.ply \
    --layout examples/scene_008456_00_3.txt \
    --save scene_008456_00_3.rrd
rerun scene_008456_00_3.rrd
```

## SpatialGen Dataset

For access to the photorealistic RGB/Depth/Normal/Semantic/Instance panoramic renderings and camera trajectories used to generate the SpatialLM point clouds, please refer to the [SpatialGen project](https://manycore-research.github.io/SpatialGen) for more details.
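## Loading Examples

To tie the metadata to the extracted files, below is a minimal loading sketch. It assumes `pandas` is installed, that the `sample` column is parsed as an integer, and that filenames follow the `pcd/{id}.ply` and `layout/{id}.txt` convention shown in the folder structure above.

```python
import pandas as pd

# Load the metadata CSV; columns: id, room_type, scene_id, room_id, sample, split
meta = pd.read_csv("SpatialLM-Dataset/split.csv")

# Keep the training partition and build the expected file paths
# (assumed layout: pcd/{id}.ply and layout/{id}.txt, per the tree above)
train = meta[meta["split"] == "train"]
pcd_paths = ("pcd/" + train["id"] + ".ply").tolist()
layout_paths = ("layout/" + train["id"] + ".txt").tolist()

# Example: restrict to the most complete observations (sample type 0)
complete = train[train["sample"] == 0]
print(f"{len(train)} training samples, {len(complete)} with sample type 0")
print(pcd_paths[0], layout_paths[0])
```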
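The conversation files can be read with the standard `json` module. This sketch assumes each file is a top-level JSON list of samples, as is conventional for ShareGPT-style data:

```python
import json

# Load the ShareGPT-style conversations (assumed to be a top-level JSON list)
with open("SpatialLM-Dataset/spatiallm_train.json", "r") as f:
    samples = json.load(f)

# Each sample pairs a conversation with the point cloud(s) it references
first = samples[0]
for turn in first["conversations"]:
    print(f'{turn["from"]}: {turn["value"][:80]}...')  # preview each turn
print("point clouds:", first["point_clouds"])
```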
## Citation

If you find this work useful, please consider citing:

```bibtex
@inproceedings{SpatialLM,
  title     = {SpatialLM: Training Large Language Models for Structured Indoor Modeling},
  author    = {Mao, Yongsen and Zhong, Junhao and Fang, Chuan and Zheng, Jia and Tang, Rui and Zhu, Hao and Tan, Ping and Zhou, Zihan},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025}
}
```