---
license: mit
---
# PreGRES: A Large-Scale Geospatial Dataset Collection
PreGRES is a large-scale structured collection of existing smaller-scale geospatial datasets, designed for fine-tuning vision-language models in remote sensing applications. It integrates multiple sources, each contributing to different aspects of geospatial data understanding.
The datasets within PreGRES support three major tasks, listed below. To use them, download the associated image files from the links provided and place them in their respective folders, then download the pregres.json file and ensure your directory is organized as follows:
```
├── pregres.json
├── NWPU-Captions
├── RSICD
│   └── ...
```
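After arranging the files, a quick sanity check such as the sketch below can confirm that the annotation file and image folders line up. The record field it reads (`image`) is an assumption based on common LLaVA-style fine-tuning formats, not a documented schema, so adjust it to match the actual contents of pregres.json.

```python
# Minimal sketch: load pregres.json and check that the referenced images exist.
# NOTE: the "image" field used below is an assumption based on common
# LLaVA-style VLM fine-tuning formats, not a documented schema; adjust it
# to whatever keys pregres.json actually uses.
import json
from pathlib import Path

data_root = Path(".")  # directory containing pregres.json and the dataset folders

with open(data_root / "pregres.json", "r", encoding="utf-8") as f:
    records = json.load(f)

print(f"Loaded {len(records)} annotation records")

# Each record is assumed to reference an image relative to the dataset root,
# e.g. "RSICD/00001.jpg"; flag any record whose image file is missing.
missing = [rec for rec in records if not (data_root / rec["image"]).exists()]
print(f"{len(missing)} records point to files that were not found")
```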
### 1. Image Captioning
- NWPU-Captions (Cheng et al., 2022)
- RSICD (Lu et al., 2017)
- RSITMD (Yuan et al., 2022b)
- Sydney-Captions (Qu et al., 2016)
- UCM-Captions (Qu et al., 2016)
These datasets contribute paired image-text data with long-form descriptions of top-down imagery across diverse geospatial environments, enhancing language supervision.
### 2. Visual Question Answering (VQA)
- RSVQA LR and RSVQA HR (Lobry et al., 2020)
- FloodNet (Rahnemoonfar et al., 2021)
- RSIVQA (Zheng et al., 2021)
These datasets include structured question-answer pairs supporting reasoning over aerial and satellite images, covering tasks such as object identification, scene understanding, and disaster assessment.
### 3. Visual Grounding / Region-Level Captioning
- DIOR-RSVG (Zhan et al., 2023): Paired text-image data for object localization and spatial reference resolution.
- NWPU-RESISC45 (Cheng et al., 2017): Scene classification labels.
## Dataset Statistics
- Images: 119,279
- Question-Answer Pairs: 1,204,993
PreGRES is used in the first-stage pre-training of the LISAT model, enabling general-purpose geospatial question answering.
For more details on dataset composition, see Table C.9 in our paper.
## Citation
If you use PreGRES or LISAT in your work, please cite:
```bibtex
@article{quenum2025lisat,
  title={LISAT: Language-Instructed Segmentation Assistant for Satellite Imagery},
  author={Quenum, Jerome and Hsieh, Wen-Han and Wu, Tsung-Han and Gupta, Ritwik and Darrell, Trevor and Chan, David M},
  journal={arXiv preprint arXiv:2505.02829},
  year={2025}
}
```