Update README.md

README.md CHANGED

@@ -26,7 +26,7 @@ Participate in obtaining more accurate maps for a more comprehensive description
- **Datapaper: https://arxiv.org/pdf/2305.14467.pdf**
- **Dataset links: https://ignf.github.io/FLAIR/#FLAIR2**
- **Challenge page: https://codalab.lisn.upsaclay.fr/competitions/13447**
@@ -42,31 +42,58 @@ Participate in obtaining more accurate maps for a more comprehensive description

The FLAIR #2 dataset is sampled countrywide and is composed of over 20 billion annotated pixels of very high resolution aerial imagery at 0.2 m spatial resolution, acquired over three years and in different months (spatio-temporal domains). Aerial imagery patches consist of 5 channels (RGB, near-infrared, elevation) and have corresponding annotations (19 semantic classes, of which 13 are used for the baselines). Furthermore, to integrate broader spatial context and temporal information, high-resolution 1-year Sentinel-2 time series with 10 spectral bands are also provided. More than 50,000 Sentinel-2 acquisitions with 10 m spatial resolution are available.
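For orientation, the shapes a single training sample could take under the description above can be sketched as follows. This is an illustration only: the patch size, the number of Sentinel-2 acquisitions kept, and the array layout are assumptions, not the dataset's actual file format.

```python
import numpy as np

# Hypothetical shapes matching the description above:
# - aerial patch: 5 channels (RGB, near-infrared, elevation) at 0.2 m
# - Sentinel-2: a 1-year time series with 10 spectral bands at 10 m
PATCH_SIZE = 512       # aerial patch side in pixels (assumption)
N_ACQUISITIONS = 20    # Sentinel-2 dates kept for one area (varies in practice)

aerial = np.zeros((5, PATCH_SIZE, PATCH_SIZE), dtype=np.uint8)
sentinel = np.zeros((N_ACQUISITIONS, 10, 40, 40), dtype=np.int16)  # 10 m pixels
labels = np.zeros((PATCH_SIZE, PATCH_SIZE), dtype=np.uint8)        # class values 1..19
```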
<br>

<p align="center">
<img width="40%" src="images/flair-2-spatial.png">
<br>
<em>Spatial definitions of the FLAIR #2 dataset.</em>
</p>
<p align="center">
<img width="85%" src="images/flair-2-patches.png">
<br>
<em>Example of input data (first three columns are from aerial imagery, fourth from Sentinel-2) and corresponding supervision masks (last column).</em>
</p>

<br><br>
## Baseline model

A two-branch architecture is presented, integrating a U-Net <a href="https://github.com/qubvel/segmentation_models.pytorch"><img src="https://img.shields.io/badge/Link%20to-SMP-f4dbaa.svg"/></a> with a pre-trained ResNet34 encoder and a U-TAE <a href="https://github.com/VSainteuf/utae-paps"><img src="https://img.shields.io/badge/Link%20to-U--TAE-f4dbaa.svg"/></a> built around a temporal self-attention encoder. The U-TAE branch learns spatio-temporal embeddings from the high-resolution satellite time series, which are then integrated into the U-Net branch exploiting the aerial imagery. The proposed _U-T&T_ model features a fusion module that extends and reshapes the U-TAE embeddings in order to add them to the U-Net branch.

<p align="center">
<img width="100%" src="images/flair-2-network.png">
<br>
<em>Overview of the proposed two-branch architecture.</em>
</p>

<br><br>
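The fusion idea can be sketched in a few lines of NumPy. This is a toy stand-in, not the actual U-T&T module: U-TAE collapses the time axis with learned attention masks, whereas this sketch uses a plain temporal mean and nearest-neighbour resizing.

```python
import numpy as np

def fuse(sat_feats, aerial_feats):
    """Toy stand-in for the U-T&T fusion module (illustration only):
    collapse the time axis of the satellite features, resize the result
    to the aerial feature map, and add it channel-wise."""
    # sat_feats: (T, C, h, w) satellite embeddings; aerial_feats: (C, H, W)
    temporal = sat_feats.mean(axis=0)            # (C, h, w); U-TAE uses attention, not a mean
    _, C, h, w = sat_feats.shape
    C2, H, W = aerial_feats.shape
    assert C == C2, "branches must share a channel dimension to be added"
    # nearest-neighbour resize (assumes H, W are multiples of h, w)
    resized = temporal.repeat(H // h, axis=1).repeat(W // w, axis=2)
    return aerial_feats + resized                # element-wise addition into the U-Net branch

fused = fuse(np.ones((6, 4, 8, 8)), np.zeros((4, 32, 32)))
```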
## Usage
@@ -84,32 +111,6 @@ A toy dataset (reduced size) is available to check that your installation and th

<br><br>
## Leaderboard

Please note that participants in the FLAIR #2 challenge on CodaLab must satisfy a certain number of constraints (in particular, on inference time). All information is available on the _Overview_ page of the competition.

| Model | Input | mIoU |
| ------------ | ------------- | ------------- |
| baseline U-Net (ResNet34) | aerial imagery | 0.5470 |
| baseline U-Net (ResNet34) + _metadata + augmentation_ | aerial imagery | 0.5593 |
| baseline U-T&T | aerial and satellite imagery | 0.5594 |
| baseline U-T&T + _filter clouds + monthly averages + data augmentation_ | aerial and satellite imagery | 0.5758 |
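The mIoU reported here is the per-class intersection-over-union averaged over classes. A minimal sketch of how it can be computed from a confusion matrix (the exact metric and class subset used for ranking are defined on the competition page):

```python
import numpy as np

def miou(confusion):
    """Mean intersection-over-union from an (n_classes, n_classes) confusion
    matrix with rows = ground truth and columns = predictions."""
    tp = np.diag(confusion).astype(float)
    fp = confusion.sum(axis=0) - tp   # predicted as class c but wrong
    fn = confusion.sum(axis=1) - tp   # pixels of class c that were missed
    iou = tp / (tp + fp + fn)
    return iou.mean()

# Tiny 2-class example: each class has IoU = 2 / (2 + 1 + 1) = 0.5
cm = np.array([[2, 1],
               [1, 2]])
print(miou(cm))  # -> 0.5
```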
If you want to submit a new entry, you can open a new issue.

<b>Results of the challenge will be reported after the end of the challenge in early October!</b>

The baseline U-T&T + _filter clouds + monthly averages + data augmentation_ obtains the following confusion matrix:

<br><br>

<p align="center">
<img width="50%" src="images/flair-2-confmat.png">
<br>
<em>Baseline confusion matrix on the test dataset, normalized by rows.</em>
</p>

<br><br><br>
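The _filter clouds + monthly averages_ variant suggests discarding cloudy Sentinel-2 acquisitions and averaging the remaining dates per month. A hedged sketch of that preprocessing (the baseline's actual implementation and its cloud masks live in the repository; the function and array layout here are illustrative):

```python
import numpy as np

def monthly_average(series, months, cloudy):
    """series: (T, C, H, W) Sentinel-2 stack; months: (T,) month index 1..12;
    cloudy: (T,) bool mask of acquisitions to discard. Returns one averaged
    image per month that keeps at least one clear acquisition."""
    out = {}
    for m in range(1, 13):
        keep = (months == m) & ~cloudy
        if keep.any():
            out[m] = series[keep].mean(axis=0)
    return out

# Toy stack where acquisition t is filled with the value t
T = 6
series = np.arange(T, dtype=float).reshape(T, 1, 1, 1) * np.ones((T, 2, 4, 4))
months = np.array([1, 1, 2, 2, 3, 3])
cloudy = np.array([False, True, False, False, True, True])  # month 3 fully cloudy
avgs = monthly_average(series, months, cloudy)
```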
The dataset covers 50 spatial domains, encompassing 916 areas spanning 817 km². With 13 semantic classes (plus 6 not used in this challenge), this dataset provides a robust foundation for advancing land cover mapping techniques.<br><br>

<center>
<table style="width:80%;max-width:700px;">
<thead>
<tr><th width=7%></th><th>Class</th><th style='text-align: center' width=15%>Value</th><th style='text-align: center'>Freq.-train (%)</th><th style='text-align: center'>Freq.-test (%)</th></tr>
</thead>
<tbody>
<tr><td bgcolor='#db0e9a'></td><td>building</td><td style='text-align: center'>1</td><td style='text-align: center'>8.14</td><td style='text-align: center'>3.26</td></tr>
<tr><td bgcolor='#938e7b'></td><td>pervious surface</td><td style='text-align: center'>2</td><td style='text-align: center'>8.25</td><td style='text-align: center'>3.82</td></tr>
<tr><td bgcolor='#f80c00'></td><td>impervious surface</td><td style='text-align: center'>3</td><td style='text-align: center'>13.72</td><td style='text-align: center'>5.87</td></tr>
<tr><td bgcolor='#a97101'></td><td>bare soil</td><td style='text-align: center'>4</td><td style='text-align: center'>3.47</td><td style='text-align: center'>1.6</td></tr>
<tr><td bgcolor='#1553ae'></td><td>water</td><td style='text-align: center'>5</td><td style='text-align: center'>4.88</td><td style='text-align: center'>3.17</td></tr>
<tr><td bgcolor='#194a26'></td><td>coniferous</td><td style='text-align: center'>6</td><td style='text-align: center'>2.74</td><td style='text-align: center'>10.24</td></tr>
<tr><td bgcolor='#46e483'></td><td>deciduous</td><td style='text-align: center'>7</td><td style='text-align: center'>15.38</td><td style='text-align: center'>24.79</td></tr>
<tr><td bgcolor='#f3a60d'></td><td>brushwood</td><td style='text-align: center'>8</td><td style='text-align: center'>6.95</td><td style='text-align: center'>3.81</td></tr>
<tr><td bgcolor='#660082'></td><td>vineyard</td><td style='text-align: center'>9</td><td style='text-align: center'>3.13</td><td style='text-align: center'>2.55</td></tr>
<tr><td bgcolor='#55ff00'></td><td>herbaceous vegetation</td><td style='text-align: center'>10</td><td style='text-align: center'>17.84</td><td style='text-align: center'>19.76</td></tr>
<tr><td bgcolor='#fff30d'></td><td>agricultural land</td><td style='text-align: center'>11</td><td style='text-align: center'>10.98</td><td style='text-align: center'>18.19</td></tr>
<tr><td bgcolor='#e4df7c'></td><td>plowed land</td><td style='text-align: center'>12</td><td style='text-align: center'>3.88</td><td style='text-align: center'>1.81</td></tr>
<tr><td bgcolor='#3de6eb'></td><td>swimming pool</td><td style='text-align: center'>13</td><td style='text-align: center'>0.01</td><td style='text-align: center'>0.02</td></tr>
<tr><td bgcolor='#ffffff'></td><td>snow</td><td style='text-align: center'>14</td><td style='text-align: center'>0.15</td><td style='text-align: center'>-</td></tr>
<tr><td bgcolor='#8ab3a0'></td><td>clear cut</td><td style='text-align: center'>15</td><td style='text-align: center'>0.15</td><td style='text-align: center'>0.82</td></tr>
<tr><td bgcolor='#6b714f'></td><td>mixed</td><td style='text-align: center'>16</td><td style='text-align: center'>0.05</td><td style='text-align: center'>0.12</td></tr>
<tr><td bgcolor='#c5dc42'></td><td>ligneous</td><td style='text-align: center'>17</td><td style='text-align: center'>0.01</td><td style='text-align: center'>-</td></tr>
<tr><td bgcolor='#9999ff'></td><td>greenhouse</td><td style='text-align: center'>18</td><td style='text-align: center'>0.12</td><td style='text-align: center'>0.15</td></tr>
<tr><td bgcolor='#000000'></td><td>other</td><td style='text-align: center'>19</td><td style='text-align: center'>0.14</td><td style='text-align: center'>0.04</td></tr>
</tbody>
</table>
</center>
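The hex colors in the first column of the nomenclature table double as a rendering palette for the supervision masks. A small sketch of that use (the `colorize` helper is illustrative, not part of the dataset tooling):

```python
import numpy as np

# Class value -> hex color, taken from the nomenclature table
PALETTE = {
    1: "#db0e9a",  2: "#938e7b",  3: "#f80c00",  4: "#a97101",  5: "#1553ae",
    6: "#194a26",  7: "#46e483",  8: "#f3a60d",  9: "#660082", 10: "#55ff00",
    11: "#fff30d", 12: "#e4df7c", 13: "#3de6eb", 14: "#ffffff", 15: "#8ab3a0",
    16: "#6b714f", 17: "#c5dc42", 18: "#9999ff", 19: "#000000",
}

def colorize(mask):
    """Map an (H, W) mask of class values 1..19 to an (H, W, 3) RGB image."""
    lut = np.zeros((20, 3), dtype=np.uint8)       # index 0 left black (unused)
    for value, hex_color in PALETTE.items():
        lut[value] = [int(hex_color[i:i + 2], 16) for i in (1, 3, 5)]
    return lut[mask]                              # fancy indexing applies the LUT

rgb = colorize(np.array([[1, 5], [19, 14]], dtype=np.uint8))
```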
<br><br>

## Reference

Please include a citation to the following article if you use the FLAIR #2 dataset: