# Dronescapes Experts dataset

This dataset extends the original [dronescapes dataset](https://huggingface.co/datasets/Meehai/dronescapes) with new modalities generated entirely from scratch using VRE (i.e., pretrained experts). The only data that VRE cannot generate is the ground truth: semantic segmentation (human annotated) and depth & normals (from SfM), which are inherited from the original dataset for evaluation purposes only.

![Logo](logo.png)

# 1. Downloading the data

## Option 1. Download the pre-processed dataset from HuggingFace repository

```bash
git lfs install # Make sure you have git-lfs installed (https://git-lfs.com)
git clone https://huggingface.co/datasets/Meehai/dronescapes
```

## Option 2. Generate all the modalities from raw videos

Follow the instructions under [this file](./vre_dronescapes/commands.txt).

Note: you can generate all modalities except `semantic_segprop8` (human annotated) and
`depth_sfm_manual202204`/`normals_sfm_manual202204` (produced with an SfM tool).

# 2. Using the data

The data follows the split defined in the paper:

<img src="split.png" width="500px">

If you cloned the repository, the data is under `data/*`; the layout is the same if you downloaded it directly from HuggingFace.
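To sanity-check a download, you can count the frame files per modality. The sketch below is a minimal stdlib-only helper, assuming the layout `data/<split>/<modality>/<frame files>` described above (the function name and the exact layout assumption are ours, not part of the dataset tooling):

```python
from collections import Counter
from pathlib import Path

def modality_frame_counts(split_dir: Path) -> Counter:
    """Count frame files per modality directory under one split.

    Assumes data/<split>/<modality>/<frame files>; each immediate
    subdirectory of `split_dir` is treated as one modality.
    """
    counts = Counter()
    for modality_dir in sorted(p for p in split_dir.iterdir() if p.is_dir()):
        counts[modality_dir.name] = sum(1 for f in modality_dir.iterdir() if f.is_file())
    return counts

# Example: modality_frame_counts(Path("data/test_set_annotated_only"))
```

All modalities of a split should report the same number of frames; a mismatch usually means an interrupted `git lfs` fetch.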

## 2.1 Using the provided viewer

The simplest way to explore the data is to use the [provided notebook](scripts/dronescapes_viewer/dronescapes_viewer.ipynb). Upon running
it, you should get a collage with all the default tasks, like the picture at the top.

For a CLI-only method, you can use the VRE reader as well:

```bash
vre_reader data/test_set_annotated_only/ --config_path vre_dronescapes/cfg.yaml -I vre_dronescapes/semantic_mapper.py:get_new_semantic_mapped_tasks
```
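If you prefer to load frames programmatically instead of going through the notebook or `vre_reader`, a minimal sketch is below. It assumes each modality stores one array per frame in a `.npz` archive (the key name inside the archive is not fixed, so we read the first entry; `load_frame` is our illustrative helper, not part of VRE):

```python
from pathlib import Path
import numpy as np

def load_frame(npz_path: Path) -> np.ndarray:
    """Load a single modality frame from a .npz archive.

    Assumes one array per file; the key name varies, so take the
    first (and only) entry in the archive.
    """
    with np.load(npz_path) as data:
        key = list(data.keys())[0]
        return data[key]

# Example: arr = load_frame(Path("data/test_set_annotated_only/rgb/<frame>.npz"))
```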

# 3. Evaluation

See the original [dronescapes evaluation description & benchmark](https://huggingface.co/datasets/Meehai/dronescapes#3-evaluation-for-semantic-segmentation) for this.