Enhance dataset card with task categories, tags, intro, and sample usage (#2)
Commit 2f35fd24638695f74c0e286672e4391686252d2d
Co-authored-by: Niels Rogge <[email protected]>
README.md CHANGED
@@ -1,9 +1,18 @@
---
license: cc-by-nc-4.0
+task_categories:
+- image-to-3d
+- text-to-3d
+tags:
+- 3d
+- scene-generation
+- indoor-scenes
---

# SpatialGen Testset

+This repository contains the test set for [SpatialGen: Layout-guided 3D Indoor Scene Generation](https://arxiv.org/abs/2509.14981), a novel multi-view, multi-modal diffusion model for generating realistic and semantically consistent 3D indoor scenes.
+
[Project page](https://manycore-research.github.io/SpatialGen) | [Paper](https://arxiv.org/abs/2509.14981) | [Code](https://github.com/manycore-research/SpatialGen)

We provide a test set of 48 preprocessed point clouds and their corresponding ground-truth (GT) layouts; the multi-view images are cropped from the high-resolution panoramic images.
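To pull the testset locally and inspect the caption file, a minimal sketch with `huggingface_hub` follows; the repository id and the JSONL field names are assumptions rather than something the card specifies, so adjust them to the actual dataset id and schema.

```python
# Minimal sketch (assumptions: the dataset id below, and a top-level
# test_split_caption.jsonl as the directory tree further down suggests).
import json
from pathlib import Path

from huggingface_hub import snapshot_download

# Download the full testset snapshot; returns the local directory path.
local_dir = snapshot_download(
    repo_id="manycore-research/SpatialGen-Testset",  # assumed dataset id
    repo_type="dataset",
)

# Peek at the first caption record without assuming its field names.
captions_file = Path(local_dir) / "test_split_caption.jsonl"
with captions_file.open() as f:
    first_record = json.loads(next(f))
print("caption record keys:", sorted(first_record.keys()))
```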
@@ -26,7 +35,9 @@ SpatialGen-Testset
└── test_split_caption.jsonl  # textual captions for each scene
```

-## Visualization
+## Sample Usage
+
+### Visualization

We provide a [script](https://github.com/manycore-research/SpatialGen/blob/main/visualize_layout.py) to visualize the layout data.

@@ -40,3 +51,21 @@ for scene_data_dir in scene_data_dirs:
    # save layout_bbox.ply and camera poses in vis_output_dir
    visualize_spatialgen_data(scene_data_dir, vis_output_dir)
```
+
+### Inference
+
+This dataset is used to evaluate the SpatialGen models for 3D indoor scene generation. The following commands from the [code repository](https://github.com/manycore-research/SpatialGen) demonstrate how to run inference for the different tasks (after following the installation instructions in the repository).
+
+**Single Image-to-3D Scene Generation**
+
+```bash
+bash scripts/infer_spatialgen_i2s.sh
+```
+
+**Text-to-Image-to-3D Scene Generation**
+
+You can choose a pair of `scene_id` and `prompt` from `captions/spatialgen_testset_captions.jsonl` to run the text-to-scene experiment.
+
+```bash
+bash scripts/infer_spatialgen_t2s.sh
+```
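The visualization snippet above writes `layout_bbox.ply` and camera poses into `vis_output_dir`. A quick way to preview that file, assuming `open3d` is installed (the card does not prescribe a viewer), is sketched below; substitute your actual output directory for the literal path.

```python
# Sketch of previewing the exported layout boxes with open3d (an assumption;
# any PLY viewer works). Replace "vis_output_dir" with your real output path.
import open3d as o3d

ply_path = "vis_output_dir/layout_bbox.ply"

# The PLY may or may not carry faces; fall back to a point cloud if not.
geometry = o3d.io.read_triangle_mesh(ply_path)
if len(geometry.triangles) == 0:
    geometry = o3d.io.read_point_cloud(ply_path)

o3d.visualization.draw_geometries([geometry])
```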