Dronescapes Experts dataset
This dataset is an extension of the original dronescapes dataset, with new modalities generated 100% from scratch using VRE (i.e. pretrained experts). The only data that cannot be generated by VRE is the ground truth: semantic segmentation (human annotated) and depth & normals (produced with an SfM tool), which are inherited from the original dataset for evaluation purposes only.
1. Downloading the data
Option 1. Download the pre-processed dataset from HuggingFace repository
git lfs install # Make sure you have git-lfs installed (https://git-lfs.com)
git clone https://huggingface.co/datasets/Meehai/dronescapes
Option 2. Generate all the modalities from raw videos
Follow the instructions in this file.
Note: you can generate all the data except semantic_segprop8 (human annotated), depth_sfm_manual202204 and normals_sfm_manual202204 (an SfM tool was used).
2. Using the data
As per the split from the paper:
<img src="/datasets/Meehai/dronescapes-2024/resolve/main/split.png" width="500px">
The data is in data/* (if you used git clone); the layout is the same if you downloaded it directly from HuggingFace.
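For a quick sanity check of what was downloaded, a short script like the one below can list the modalities available per split. This is only a sketch: it assumes each split directory under data/ contains one sub-directory per modality, which may not match the exact on-disk layout.

```python
from pathlib import Path

# Hypothetical sanity check, assuming the layout data/<split>/<modality>/<frames...>.
root = Path("data")
for split_dir in sorted(p for p in root.iterdir() if p.is_dir()):
    print(split_dir.name)
    for modality_dir in sorted(p for p in split_dir.iterdir() if p.is_dir()):
        n_files = sum(1 for f in modality_dir.rglob("*") if f.is_file())
        print(f"  {modality_dir.name}: {n_files} files")
```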
2.1 Using the provided viewer
The simplest way to explore the data is to use the provided notebook. Upon running it, you should get a collage with all the default tasks, like the picture at the top.
For a CLI-only method, you can use the VRE reader as well:
vre_reader data/test_set_annotated_only/ --config_path vre_dronescapes/cfg.yaml -I vre_dronescapes/semantic_mapper.py:get_new_semantic_mapped_tasks
3. Evaluation
See the original dronescapes evaluation description & benchmark for this.
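As an illustration of the kind of metric typically reported for the semantic task, below is a generic mean IoU computation in numpy. It is not the official benchmark script, and the 8-class assumption is only inferred from the semantic_segprop8 name.

```python
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, n_classes: int = 8) -> float:
    """Per-class IoU averaged over classes; pred and gt are integer label maps."""
    ious = []
    for cls in range(n_classes):
        pred_c, gt_c = pred == cls, gt == cls
        union = np.logical_or(pred_c, gt_c).sum()
        if union == 0:
            continue  # class absent from both prediction and ground truth
        ious.append(np.logical_and(pred_c, gt_c).sum() / union)
    return float(np.mean(ious)) if ious else 0.0
```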