|
--- |
|
license: apache-2.0 |
|
tags: |
|
- image |
|
- segmentation |
|
- space |
|
pretty_name: 'SWiM: Spacecraft With Masks (Instance Segmentation)' |
|
size_categories: |
|
- 1K<n<1M |
|
task_categories: |
|
- image-segmentation |
|
task_ids: |
|
- instance-segmentation |
|
annotations_creators: |
|
- machine-generated |
|
- expert-generated |
|
--- |
|
|
|
|
|
|
# SWiM: Spacecraft With Masks |
|
|
|
A large-scale instance segmentation dataset of nearly 64k annotated spacecraft images, created from real spacecraft models superimposed on a mix of real and synthetic backgrounds generated with NASA's TTALOS pipeline. To mimic the camera distortions and noise of real-world image acquisition, we applied several types of noise and distortion to the images.
|
|
|
## Dataset Summary |
|
The dataset contains 63,917 annotated images with instance masks for a variety of spacecraft. It is structured for YOLO and other segmentation workflows, and chunked to stay within Hugging Face's per-folder file limits.
|
|
|
|
|
## Directory Structure Note |
|
|
|
Due to Hugging Face Hub's per-directory file limit (10,000 files), this dataset is chunked: each logical split (like `train/labels/`) is subdivided into folders (`000/`, `001/`, ...) containing no more than 5,000 files each. |
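To illustrate the chunking scheme, the chunk folder for a given file can be computed from its sequential index. This is a minimal sketch, assuming files are packed into chunk folders in order, 5,000 per folder, as described above (`chunk_folder` is an illustrative helper, not part of the dataset tooling):

```python
def chunk_folder(file_index: int, chunk_size: int = 5000) -> str:
    """Return the zero-padded chunk folder name ("000", "001", ...) for a file,
    assuming files are assigned to chunks sequentially, chunk_size per folder."""
    return f"{file_index // chunk_size:03d}"

# Under this assumption, files 0-4999 land in "000", 5000-9999 in "001", and so on.
```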
|
|
|
**Example Structure:** |
|
```
Baseline
├── train/
│   ├── images/
│   │   ├── 000/
│   │   │   ├── img_0.png
│   │   │   └── ...
│   │   ├── 001/
│   │   └── ...
```
|
If you are using models or tools such as **YOLO** that expect a **flat directory**, use the `--flatten` argument provided in our `utils/download_swim.py` script.
|
|
|
**YOLO Example Structure:** |
|
```
Baseline
├── train/
│   ├── images/
│   │   ├── img_0.png
│   │   ├── ...
│   │   └── img_99.png
```
|
|
|
## How to Load |
|
|
|
You can stream the SWiM-SpacecraftWithMasks dataset directly with the Hugging Face `datasets` library, without downloading it entirely. This is particularly useful when local storage is limited.
|
|
|
Use the following code to load and iterate over the dataset efficiently: |
|
|
|
```python
from datasets import load_dataset

dataset = load_dataset(
    "RiceD2KLab/SWiM-SpacecraftWithMasks",
    data_dir="Baseline",  # for the augmented split, use data_dir="Augmented"
    streaming=True,
)
```
|
|
|
Note: the directory structure returned by Hugging Face's `load_dataset` API includes the chunk folders (e.g., `000`, `001`, ...), so it does not directly support YOLO training. For a YOLO or CIFAR-10 style directory structure, use the `utils/download_swim.py` script.
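If you already have the chunked layout on disk and prefer not to re-download, the flattening step can also be done locally. Below is a minimal sketch, not the actual `utils/download_swim.py` implementation; `flatten_chunks` is a hypothetical helper that moves files out of the chunk subfolders and removes the emptied folders:

```python
import shutil
from pathlib import Path

def flatten_chunks(split_dir: str) -> None:
    """Move files from chunk subfolders (000/, 001/, ...) directly into
    split_dir, then remove the now-empty chunk folders."""
    root = Path(split_dir)
    for chunk in sorted(p for p in root.iterdir() if p.is_dir()):
        for f in chunk.iterdir():
            shutil.move(str(f), str(root / f.name))
        chunk.rmdir()
```

You would run it once per split directory, e.g. `flatten_chunks("Baseline/images/train")`. Note this assumes filenames are unique across chunks, which holds for the naming scheme shown above.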
|
|
|
## How to Download |
|
|
|
For local use, whether you want to sample a small portion or download the entire dataset to your filesystem, we provide two utility scripts under the `utils/` folder:
|
|
|
- `sample_swim.py`: quickly samples files from a single chunk

- `download_swim.py`: downloads the full dataset, optionally flattening the directory structure
|
|
|
These scripts let you work offline or run faster experiments by controlling how much data you fetch. In addition, `download_swim.py` can download the data into a flattened, YOLO/CIFAR-10-compatible folder structure.
|
|
|
### Utility Scripts |
|
|
|
The following scripts help you download this dataset. Because of the dataset's size and custom directory structure, we recommend using them to sample or to download the entire dataset. Both scripts live in the `utils/` subdirectory.
|
|
|
|
|
#### 1. Setup |
|
|
|
Create your virtual environment to help manage dependencies and prevent conflicts: |
|
```
python -m venv env
source env/bin/activate  # On Windows: env\Scripts\activate
pip install -r requirements.txt
```
|
|
|
#### 2. Sample items from a specific chunk:
|
|
|
This script is helpful for quick local inspection, prototyping, or lightweight evaluation without downloading the full dataset. |
|
|
|
Usage:

```
python3 utils/sample_swim.py --output-dir ./samples --count 100
```
|
|
|
|
|
Arguments: |
|
``` |
|
--repo-id Hugging Face dataset repository ID |
|
--image-subdir Path to image subdirectory inside the dataset repo |
|
--label-subdir Path to corresponding label subdirectory |
|
--output-dir Directory to save downloaded files |
|
--count Number of samples to download |
|
``` |
|
|
|
Example Usage with all args: |
|
```
python3 utils/sample_swim.py \
    --repo-id RiceD2KLab/SWiM-SpacecraftWithMasks \
    --image-subdir Baseline/images/val/000 \
    --label-subdir Baseline/labels/val/000 \
    --output-dir ./Sampled-SWiM \
    --count 500
```
|
#### 3. Download the entire dataset (optionally flatten chunks for YOLO format): |
|
|
|
Streams and downloads the full paired dataset (images plus label `.txt` files) from the Hugging Face Hub. It recursively processes all available chunk subfolders (e.g., `000`, `001`, ...) under the given parent paths.
|
|
|
Features: |
|
- Recursively discovers subdirs (chunks) using HfFileSystem |
|
- Optionally flattens the directory structure by removing the deepest chunk level |
|
- Saves each .png image with its corresponding .txt label |
|
|
|
This script can download the complete dataset for model training or offline access. |
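The image-label pairing the script relies on can be sketched as follows. This assumes, based on the structure above, that labels mirror the image tree with `.txt` replacing `.png`; `label_for` is an illustrative helper, not part of the script:

```python
from pathlib import PurePosixPath

def label_for(image_path: str,
              image_root: str = "Baseline/images",
              label_root: str = "Baseline/labels") -> str:
    """Map a repo image path to its corresponding label path, assuming
    labels mirror the image tree with a .txt extension."""
    rel = PurePosixPath(image_path).relative_to(image_root)
    return str(PurePosixPath(label_root) / rel.with_suffix(".txt"))
```

`PurePosixPath` keeps forward slashes regardless of the local OS, since Hub repo paths are always POSIX-style.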
|
|
|
Usage:

```
# Download all chunks (flattened, YOLO format)
python utils/download_swim.py --output-dir ./SWiM --flatten

# Download specific chunks
python3 utils/download_swim.py --chunks 000 001 002
```
|
|
|
Arguments: |
|
``` |
|
--repo-id Hugging Face dataset repository ID |
|
--images-parent Parent directory for image chunks (e.g., Baseline/images/train) |
|
--labels-parent Parent directory for label chunks (e.g., Baseline/labels/train) |
|
--output-dir Where to save the downloaded dataset |
|
--flatten Run with "--flatten" to flatten directories. If you want hierarchical chunk folders, omit --flatten |
|
--chunks Specific chunk names (e.g., 000 001); omit to download all |
|
``` |
|
Example usage with all args: |
|
```
python3 utils/download_swim.py \
    --repo-id RiceD2KLab/SWiM-SpacecraftWithMasks \
    --images-parent Baseline/images \
    --labels-parent Baseline/labels \
    --output-dir ./SWiM \
    --flatten
```
|
|
|
**All arguments are configurable; see `--help` for details.**
|
|
|
## Code and Data Generation Pipeline |
|
|
|
All dataset generation scripts, preprocessing tools, and model training code are available on GitHub: |
|
|
|
[GitHub Repository: https://github.com/RiceD2KLab/SWiM](https://github.com/RiceD2KLab/SWiM) |
|
|
|
|
|
## Citation |
|
|
|
If you use this dataset, please cite: |
|
|
|
```
@misc{sam2025newdatasetperformancebenchmark,
    title={A New Dataset and Performance Benchmark for Real-time Spacecraft Segmentation in Onboard Flight Computers},
    author={Jeffrey Joan Sam and Janhavi Sathe and Nikhil Chigali and Naman Gupta and Radhey Ruparel and Yicheng Jiang and Janmajay Singh and James W. Berck and Arko Barman},
    year={2025},
    eprint={2507.10775},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2507.10775},
}
```