---
license: apache-2.0
tags:
  - image
  - segmentation
  - space
pretty_name: 'SWiM: Spacecraft With Masks (Instance Segmentation)'
size_categories:
  - 10K<n<100K
task_categories:
  - image-segmentation
task_ids:
  - instance-segmentation
annotations_creators:
  - machine-generated
  - expert-generated
---

SWiM: Spacecraft With Masks

A large-scale instance segmentation dataset of nearly 64k annotated spacecraft images, created by superimposing real spacecraft models on a mixture of real and synthetic backgrounds generated with NASA's TTALOS pipeline. To mimic the camera distortions and noise of real-world image acquisition, we added different types of noise and distortion.

Dataset Summary

The dataset contains 63,917 annotated images with instance masks for a variety of spacecraft. It is structured for YOLO-style training and general segmentation workflows, and chunked to stay within Hugging Face's per-folder file limits.

How to Use/Download

Directory Structure Note

Due to Hugging Face Hub's per-directory file limit (10,000 files), this dataset is chunked: each logical split (like train/labels/) is subdivided into folders (000/, 001/, ...) containing no more than 5,000 files each.
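As a rough illustration of the chunking scheme (`chunk_for_index` below is a hypothetical helper, not part of the dataset's tooling, and assumes files are partitioned sequentially):

```python
def chunk_for_index(i: int, chunk_size: int = 5000) -> str:
    """Return the zero-padded chunk folder (e.g. '000', '001') that would
    hold the i-th file, given at most chunk_size files per folder."""
    return f"{i // chunk_size:03d}"

# Files 0-4999 land in '000', files 5000-9999 in '001', and so on.
print(chunk_for_index(0))     # → 000
print(chunk_for_index(5000))  # → 001
```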

Example Structure:

  Baseline/
  ├── train/
  │   ├── images/
  │   │   ├── 000/
  │   │   │   ├── img_0.png
  │   │   │   └── ...
  │   │   ├── 001/
  │   │   └── ...

If you're using models or tools like YOLO that expect a flat directory, use the --flatten argument provided in our utils/download_swim.py script.

YOLO Example Structure:

  Baseline/
  ├── train/
  │   ├── images/
  │   │   ├── img_0.png
  │   │   ├── ...
  │   │   └── img_99.png
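If you have already downloaded the chunked layout and only want to flatten it locally, a minimal sketch is below (`flatten_chunks` is a hypothetical helper, not part of the provided scripts; it assumes file names are unique across chunks, which holds here since the chunks simply partition one sequence):

```python
import shutil
from pathlib import Path

def flatten_chunks(parent):
    """Move files out of numbered chunk subfolders ('000', '001', ...)
    directly into `parent`, then remove the emptied subfolders."""
    parent = Path(parent)
    for chunk in sorted(p for p in parent.iterdir() if p.is_dir()):
        for f in chunk.iterdir():
            shutil.move(str(f), str(parent / f.name))
        chunk.rmdir()  # safe: the chunk folder is now empty
```

Running this on e.g. `Baseline/train/images/` would produce the flat layout shown above.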

Utility Scripts

The following scripts help you download this dataset. Because the dataset is large and uses a custom chunked directory structure, we recommend using them either to sample a subset or to download the entire dataset. The scripts live in the utils subdirectory.

1. Setup

Create a virtual environment to manage dependencies and prevent conflicts:

  python -m venv env
  
  source env/bin/activate # On Windows: env\Scripts\activate
  
  pip install -r requirements.txt

2. Sample items from a specific chunk:

This script is helpful for quick local inspection, prototyping, or lightweight evaluation without downloading the full dataset.

Usage:

  python3 utils/sample_swim.py --output-dir ./samples --count 100

Arguments:

  --repo-id        Hugging Face dataset repository ID
  --image-subdir   Path to the image subdirectory inside the dataset repo
  --label-subdir   Path to the corresponding label subdirectory
  --output-dir     Directory to save downloaded files
  --count          Number of samples to download

Example Usage with all args:

  python3 utils/sample_swim.py \
    --repo-id JeffreyJsam/SWiM-SpacecraftWithMasks \
    --image-subdir Baseline/images/val/000 \
    --label-subdir Baseline/labels/val/000 \
    --output-dir ./Sampled-SWiM \
    --count 500
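After sampling, it is worth confirming that every image came down with its label. A small sanity-check sketch (`unpaired_stems` is a hypothetical helper; it assumes images and labels sit in separate folders and that pairs share a base name, e.g. img_0.png ↔ img_0.txt):

```python
from pathlib import Path

def unpaired_stems(image_dir, label_dir):
    """Return base names that have an image but no label, or vice versa."""
    imgs = {p.stem for p in Path(image_dir).glob("*.png")}
    lbls = {p.stem for p in Path(label_dir).glob("*.txt")}
    return imgs ^ lbls  # symmetric difference: anything missing its pair
```

An empty result means every image/label pair is complete.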

3. Download the entire dataset (optionally flatten chunks for YOLO format):

Streams and downloads the full paired dataset (images + label .txt files) from a Hugging Face Hub repository. It recursively processes all available chunk subfolders (e.g., '000', '001', ...) under the given parent paths.

Features:

  • Recursively discovers subdirs (chunks) using HfFileSystem
  • Optionally flattens the directory structure by removing the deepest chunk level
  • Saves each .png image with its corresponding .txt label

You can use this script to download the complete dataset for model training or offline access.
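If the .txt labels follow the standard YOLO segmentation convention (which the YOLO-oriented layout suggests, though this parser is a sketch under that assumption), each line is a class id followed by normalized polygon coordinates:

```python
def parse_yolo_seg_line(line):
    """Parse one YOLO-segmentation label line:
    '<class> x1 y1 x2 y2 ...' with coordinates normalized to [0, 1]."""
    parts = line.split()
    cls = int(parts[0])
    coords = [float(v) for v in parts[1:]]
    # Pair consecutive values into (x, y) vertices of the instance polygon.
    polygon = list(zip(coords[0::2], coords[1::2]))
    return cls, polygon

cls, poly = parse_yolo_seg_line("0 0.10 0.20 0.30 0.40 0.50 0.60")
print(cls, poly)  # → 0 [(0.1, 0.2), (0.3, 0.4), (0.5, 0.6)]
```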

Usage:

  # Download all chunks (flattened, YOLO format)
  python3 utils/download_swim.py --output-dir ./SWiM --flatten

  # Download specific chunks without flattening
  python3 utils/download_swim.py --chunks 000 001 002 --flatten False

Arguments:

  --repo-id        Hugging Face dataset repository ID
  --images-parent  Parent directory for image chunks (e.g., Baseline/images/train)
  --labels-parent  Parent directory for label chunks (e.g., Baseline/labels/train)
  --output-dir     Where to save the downloaded dataset
  --flatten        Remove the final 'chunk' subdir in output paths (default: True)
  --chunks         Specific chunk names (e.g., 000 001); omit to download all

Example usage with all args:

  python3 utils/download_swim.py \
    --repo-id JeffreyJsam/SWiM-SpacecraftWithMasks \
    --images-parent Baseline/images/val \
    --labels-parent Baseline/labels/val \
    --output-dir ./SWiM \
    --flatten

All arguments are configurable; see --help for details.

Code and Data Generation Pipeline

All dataset generation scripts, preprocessing tools, and model training code are available on GitHub:

GitHub Repository: https://github.com/RiceD2KLab/SWiM

Citation

If you use this dataset, please cite:

  @misc{sam2025newdatasetperformancebenchmark,
    title={A New Dataset and Performance Benchmark for Real-time Spacecraft Segmentation in Onboard Flight Computers},
    author={Jeffrey Joan Sam and Janhavi Sathe and Nikhil Chigali and Naman Gupta and Radhey Ruparel and Yicheng Jiang and Janmajay Singh and James W. Berck and Arko Barman},
    year={2025},
    eprint={2507.10775},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2507.10775},
  }