---
license: apache-2.0
tags:
- image
- segmentation
- space
pretty_name: 'SWiM: Spacecraft With Masks (Instance Segmentation)'
size_categories:
- 1K<n<1M
task_categories:
- image-segmentation
task_ids:
- instance-segmentation
annotations_creators:
- machine-generated
- expert-generated
---
# SWiM: Spacecraft With Masks
A large-scale instance segmentation dataset of nearly 64k annotated spacecraft images, created by superimposing real spacecraft models on a mixture of real and synthetic backgrounds generated with NASA's TTALOS pipeline. To mimic the distortions and noise of real-world image acquisition, we added different types of noise and distortion to the images.
## Dataset Summary
The dataset contains 63,917 annotated images with instance masks for a variety of spacecraft. It is structured for YOLO and other segmentation workflows, and chunked to stay within Hugging Face's per-folder file limits.
## Directory Structure Note
Due to the Hugging Face Hub's per-directory file limit (10,000 files), this dataset is chunked: each logical split (e.g., `Baseline/labels/train/`) is subdivided into folders (`000/`, `001/`, ...) containing no more than 5,000 files each.
**Example Structure:**
```
Baseline
├── images/
│   └── train/
│       ├── 000/
│       │   ├── img_0.png
│       │   └── ...
│       ├── 001/
│       └── ...
```
If you're using models or tools like **YOLO** that expect a **flat directory**, use the `--flatten` argument provided in our `utils/download_swim.py` script (a minimal sketch of what flattening does follows the example below).
**YOLO Example Structure:**
```
Baseline
├── images/
│   └── train/
│       ├── img_0.png
│       ├── ...
│       └── img_99.png
```
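Flattening simply moves the files out of the chunk subfolders (`000/`, `001/`, ...) one level up. Below is a minimal sketch of that idea for an already-downloaded local copy; the function name and path are illustrative and this is not the official script, so prefer `utils/download_swim.py --flatten` for real use.
```
import shutil
from pathlib import Path

def flatten_chunks(split_dir: str) -> None:
    """Move files from split_dir/000, split_dir/001, ... up into split_dir itself."""
    split = Path(split_dir)
    for chunk in sorted(p for p in split.iterdir() if p.is_dir()):
        for f in list(chunk.iterdir()):
            shutil.move(str(f), str(split / f.name))
        chunk.rmdir()  # the chunk folder is now empty

# Example (path is illustrative):
# flatten_chunks("Baseline/images/train")
```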
## How to Load
You can stream the SWiM-SpacecraftWithMasks dataset directly with the Hugging Face `datasets` library without downloading it entirely. This is particularly useful when working with limited local storage.
Use the following code to load and iterate over the dataset efficiently:
```
from datasets import load_dataset

dataset = load_dataset(
    "RiceD2KLab/SWiM-SpacecraftWithMasks",
    data_dir="Baseline",   # for the augmented set, use data_dir="Augmented"
    streaming=True,
)
```
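To verify that streaming works, you can iterate over a few examples. The sketch below is a minimal check; the `"train"` split name and the example fields are assumptions about what `load_dataset` infers for this repository, so inspect the returned object and its features before relying on them.
```
# Peek at a few streamed examples. "train" is an assumed split name;
# fall back to whichever split load_dataset actually returned.
split = dataset["train"] if "train" in dataset else next(iter(dataset.values()))
for i, example in enumerate(split):
    print(example.keys())  # inspect the available fields
    if i == 2:
        break
```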
Note: the directory structure exposed by Hugging Face's `load_dataset` API still includes the chunk folders (e.g., `000`, `001`, ...), so it does not support YOLO training directly. For YOLO training or a CIFAR-10-style directory structure, use the `utils/download_swim.py` script.
## How to Download
For local use, if you'd like to sample a small portion or download the entire dataset to your filesystem, we provide two utility scripts under the `utils/` folder:
- `sample_swim.py` samples a small number of image-label pairs from a single chunk
- `download_swim.py` downloads the full dataset, optionally flattening the directory structure

These scripts let you work offline or run faster experiments by controlling what and how much data you fetch. In particular, `download_swim.py` can download the data into a flattened folder structure compatible with YOLO and CIFAR-10-style loaders.
### Utility Scripts
The following scripts help you download this dataset. Because the data is large and uses a custom directory structure, we recommend using these scripts to sample or to download the entire dataset. Note that the scripts live in the `utils/` subdirectory.
#### 1. Setup
Create a virtual environment to manage dependencies and prevent conflicts:
```
python -m venv env
source env/bin/activate # On Windows: env\Scripts\activate
pip install -r requirements.txt
```
#### 2. Sample items from a specific chunk:
This script is helpful for quick local inspection, prototyping, or lightweight evaluation without downloading the full dataset.
Usage:
```
python3 utils/sample_swim.py --output-dir ./samples --count 100
```
Arguments:
```
--repo-id        Hugging Face dataset repository ID
--image-subdir   Path to the image subdirectory inside the dataset repo
--label-subdir   Path to the corresponding label subdirectory
--output-dir     Directory to save downloaded files
--count          Number of samples to download
```
Example usage with all arguments:
```
python3 utils/sample_swim.py \
    --repo-id RiceD2KLab/SWiM-SpacecraftWithMasks \
    --image-subdir Baseline/images/val/000 \
    --label-subdir Baseline/labels/val/000 \
    --output-dir ./Sampled-SWiM \
    --count 500
```
#### 3. Download the entire dataset (optionally flatten chunks for YOLO format):
Streams and downloads the full paired dataset (images plus label `.txt` files) from the Hugging Face Hub repository, recursively processing all available chunk subfolders (e.g., `000`, `001`, ...) under the given parent paths.
Features:
- Recursively discovers subdirs (chunks) using HfFileSystem
- Optionally flattens the directory structure by removing the deepest chunk level
- Saves each .png image with its corresponding .txt label
This script can download the complete dataset for model training or offline access.
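For reference, the sketch below illustrates the kind of chunk discovery the script relies on, using `HfFileSystem` from `huggingface_hub`. The paths are illustrative and this is not the script's exact logic; prefer `utils/download_swim.py` for actual downloads.
```
from huggingface_hub import HfFileSystem, hf_hub_download

REPO = "RiceD2KLab/SWiM-SpacecraftWithMasks"
fs = HfFileSystem()

# List the chunk folders (000/, 001/, ...) under one split (path is illustrative)
chunks = fs.ls(f"datasets/{REPO}/Baseline/images/train", detail=False)
print(chunks[:3])

# Download a single file from the first chunk for inspection
files = fs.ls(chunks[0], detail=False)
relative_path = files[0].split(f"datasets/{REPO}/", 1)[-1]
local_path = hf_hub_download(repo_id=REPO, repo_type="dataset", filename=relative_path)
print(local_path)
```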
Usage:
```
# Download all chunks (flattened, YOLO format)
python3 utils/download_swim.py --output-dir ./SWiM --flatten

# Download specific chunks only
python3 utils/download_swim.py --chunks 000 001 002
```
Arguments:
```
--repo-id         Hugging Face dataset repository ID
--images-parent   Parent directory for image chunks (e.g., Baseline/images/train)
--labels-parent   Parent directory for label chunks (e.g., Baseline/labels/train)
--output-dir      Where to save the downloaded dataset
--flatten         Flatten the chunk directories; omit to keep hierarchical chunk folders
--chunks          Specific chunk names (e.g., 000 001); omit to download all chunks
```
Example usage with all arguments:
```
python3 utils/download_swim.py \
    --repo-id RiceD2KLab/SWiM-SpacecraftWithMasks \
    --images-parent Baseline/images \
    --labels-parent Baseline/labels \
    --output-dir ./SWiM \
    --flatten
```
**All arguments are configurable; see `--help` for details.**
## Code and Data Generation Pipeline
All dataset generation scripts, preprocessing tools, and model training code are available on GitHub:
[GitHub Repository: https://github.com/RiceD2KLab/SWiM](https://github.com/RiceD2KLab/SWiM)
## Citation
If you use this dataset, please cite:
```
@misc{sam2025newdatasetperformancebenchmark,
  title={A New Dataset and Performance Benchmark for Real-time Spacecraft Segmentation in Onboard Flight Computers},
  author={Jeffrey Joan Sam and Janhavi Sathe and Nikhil Chigali and Naman Gupta and Radhey Ruparel and Yicheng Jiang and Janmajay Singh and James W. Berck and Arko Barman},
  year={2025},
  eprint={2507.10775},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2507.10775},
}
```