---
license: mit
---

PreGRES: A Large-Scale Geospatial Dataset Collection

PreGRES is a large-scale structured collection of existing smaller-scale geospatial datasets, designed for fine-tuning vision-language models in remote sensing applications. It integrates multiple sources, each contributing to different aspects of geospatial data understanding.

The datasets within PreGRES support three major tasks, listed below. To use them, please download the associated image files via the provided links and place them in their respective folders. Then, download the pregres.json file and ensure your directory is organized as follows:

├── pregres.json
├── NWPU-Captions
├── RSICD
│   └── ...
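
The exact schema of pregres.json is documented in the paper; as a minimal sketch for verifying the layout above (it assumes a LLaVA-style list of records in which each entry stores a relative image path under an "image" key, which is an assumption rather than the documented schema):

import json
from pathlib import Path

root = Path(".")  # directory containing pregres.json and the dataset folders

# Load the annotation file.
with open(root / "pregres.json", "r", encoding="utf-8") as f:
    records = json.load(f)
print(f"Loaded {len(records):,} records")

# Flag entries whose image has not been downloaded into its folder yet
# (assumes an "image" field holding a path such as "RSICD/<file>.jpg").
missing = [r for r in records if "image" in r and not (root / r["image"]).exists()]
print(f"{len(missing):,} records point to images that are not on disk yet")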

1. Image Captioning

These datasets contribute paired image-text data with long-form descriptions of top-down imagery across diverse geospatial environments, enhancing language supervision.


2. Visual Question Answering (VQA)

These datasets include structured question-answer pairs supporting reasoning over aerial and satellite images, covering tasks such as object identification, scene understanding, and disaster assessment.


3. Visual Grounding / Region-Level Captioning

  • DIOR-RSVG (Zhan et al., 2023): Paired text-image data for object localization and spatial reference resolution.
  • NWPU-RESISC45 (Cheng et al., 2017): Scene classification labels.

Dataset Statistics

  • Images: 119,279
  • Question-Answer Pairs: 1,204,993
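
Once the images and pregres.json are in place, these totals can be checked locally. A rough sketch using the Hugging Face datasets generic JSON loader (it accepts a JSON array or JSON Lines file; the record fields it exposes depend on the actual schema, which is not spelled out here):

from datasets import load_dataset

# One row per record in pregres.json.
ds = load_dataset("json", data_files="pregres.json", split="train")
print(ds)                    # column names and number of rows
print(f"{len(ds):,} records")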

PreGRES is used in the first-stage pre-training of the LISAT model, enabling general-purpose geospatial question answering.

For more details on dataset composition, see Table C.9 in our paper.

Citation

If you use PreGRES or LISAT in your work, please cite:

@article{quenum2025lisat,
  title={LISAT: Language-Instructed Segmentation Assistant for Satellite Imagery},
  author={Quenum, Jerome and Hsieh, Wen-Han and Wu, Tsung-Han and Gupta, Ritwik and Darrell, Trevor and Chan, David M},
  journal={arXiv preprint arXiv:2505.02829},
  year={2025}
}