Mowing/Grazing detection
Automatic detection of mowing and grazing from Sentinel images in the French Alps through deep learning approaches.
Process
The following process specifies how to collect the data, process it and predict mowing; it comes from my end-of-study project at the Laboratoire d'Écologie Alpine (LECA) in Grenoble. It is intended for computer scientists or, at least, people who know how to use python3 and install the needed packages with pip.
- Download preprocessed S1 tiled and S2 corrected images from PEPS
- Organize downloaded images in the predefined folder architecture
- Compute vegetation indices from S2 images
- Compute and filter the groundtruth and the dataset from reference files and Sentinel images
- Launch the learning and testing of the multimodal-temporal detector
1 Downloading preprocessed images
Criteria selection
- Go to the PEPS Explore tab
SENTINEL-1
- Select the region of interest by drawing on the map, then select the criteria in the item list on the left (example below for the Écrins National Park region of interest on years 2018-2019)
- Select the products you need by checking the boxes in the list below the map and click on <img src="https://github.com/lucasbat20/Grazing-modelling/blob/master/Images/addtoprocenter.png" alt="drawing" height="27"/>
SENTINEL-2
- Select the region of interest by tile in the item list on the left, then select the criteria (example below for tile 31TGK on years 2018-2019)
- Select the products you need by checking the boxes in the list below the map and click on <img src="https://github.com/lucasbat20/Grazing-modelling/blob/master/Images/addtoprocenter.png" alt="drawing" height="27"/>
Preprocessing
- Go to the processing center by clicking on the gears on the top right of the page
- Select the processing you need (S1Tiling for S1 - MAJA for S2)
- Select the products you want to process
- Launch the processing by clicking on <img src="https://github.com/lucasbat20/Grazing-modelling/blob/master/Images/procprod.png" alt="drawing" height="27"/>
Downloading
- The process state can be checked in <img src="https://github.com/lucasbat20/Grazing-modelling/blob/master/Images/jobs.png" alt="drawing" height="27"/>
- When the process is done, go to <img src="https://github.com/lucasbat20/Grazing-modelling/blob/master/Images/results.png" alt="drawing" height="27"/> and download the products
2 Folder architecture
The folder architecture is essential for the subsequent processing steps such as the vegetation index computation.
- Save the downloaded products (images or folders of images) following the folder architecture below
Example for SENTINEL-1, tile 31TGK and year 2018
<pre> parentfolder/31TGK/2018/SENTINEL-1/s1*.tiff </pre>
Example for SENTINEL-2, tile 31TGK and year 2019
<pre> parentfolder/31TGK/2019/SENTINEL-2/SENTINEL2*/ </pre>
NB: The SENTINEL-2 products are folders while the SENTINEL-1 ones are single images.
- Move the Scripts/ directory of this Git into the parentfolder
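You can quickly check that the resulting layout matches what the scripts expect. Below is a minimal sketch (the file name check_architecture.py is hypothetical, not one of the Scripts/):
<pre>
# check_architecture.py - hypothetical helper, not part of Scripts/
# Lists the S1 images and S2 folders found for each tile/year pair.
import glob
import os
import sys

parentfolder = sys.argv[1]  # e.g. path/to/parentfolder

for tile in sorted(os.listdir(parentfolder)):
    tiledir = os.path.join(parentfolder, tile)
    if not os.path.isdir(tiledir) or tile == 'Scripts':
        continue
    for year in sorted(os.listdir(tiledir)):
        # SENTINEL-1 products are single images, SENTINEL-2 products are folders
        s1 = glob.glob(os.path.join(tiledir, year, 'SENTINEL-1', 's1*.tiff'))
        s2 = glob.glob(os.path.join(tiledir, year, 'SENTINEL-2', 'SENTINEL2*'))
        print(tile, year, ':', len(s1), 'S1 images,', len(s2), 'S2 folders')
</pre>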
3 Vegetation index
To compute the vegetation index images, use the vegindex.py script. Depending on the number of S2 images it may take a while (about 1 day for 2 years of images). It is also memory hungry (multiple bands are opened and written at the same time); several 'memory release' statements were added, but with less than 16 GB of RAM it will crash anyway at this resolution (10980 x 10980).
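For reference, the core of the computation looks like the sketch below (illustrative only; the MAJA band file patterns *FRE_B4.tif / *FRE_B8.tif are an assumption, adapt them to your products):
<pre>
# NDVI sketch - the real work is done by vegindex.py.
# Band patterns (*FRE_B4.tif for red, *FRE_B8.tif for NIR) are assumed MAJA naming.
import gc
import glob
import sys

import numpy as np
import rasterio

for product in sys.argv[1:]:  # each SENTINEL2*/ product folder
    with rasterio.open(glob.glob(product + '/*FRE_B4.tif')[0]) as src:
        red = src.read(1).astype(np.float32)
        profile = src.profile
    with rasterio.open(glob.glob(product + '/*FRE_B8.tif')[0]) as src:
        nir = src.read(1).astype(np.float32)

    ndvi = (nir - red) / (nir + red + 1e-6)

    profile.update(dtype=rasterio.float32, count=1)
    with rasterio.open(product + '/NDVI.tif', 'w', **profile) as dst:
        dst.write(ndvi, 1)

    # explicit memory release: one 10980 x 10980 float32 band is ~480 MB
    del red, nir, ndvi
    gc.collect()
</pre>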
Example for tile 31TGK on years 2018 and 2019
<pre> python3 vegindex.py ../31TGK/2018/SENTINEL-2/* ../31TGK/2019/SENTINEL-2/* </pre>
4 Groundtruth and Dataset
Reference data
The aim of our references is to collect class 1 (mowed) and class 0 (not mowed) lands.
Class 1 (Mowed)
The Delphine DB gathers the mowing areas collected in the Écrins National Park.
Class 0 (Not Mowed)
The CNES provides the OSO land cover map, which gives useful information (woodlands, grasslands and meadows) for class 0. It can be downloaded here
Class merging
The groundtruth image is processed with QGIS, downloadable here.
Open QGIS:
Class 1
- Add the Delphine DB raster (DB/Delphine/Mowing.img) as a layer (Layer - Add Layer - Add Raster Layer)
- Right click on the Delphine DB layer in the bottom left, go to Export - Save As...
- In the popup menu:
  - Choose GEOTIFF as file format
  - Write mowing1.tif as file name
  - Choose EPSG:32631 - WGS 84 / UTM zone 31N as reference system
  - Define the Extent coordinates as 5000040 N, 809760 E, 699960 W and 4890240 S
  - Force the resolution to 10 x 10
  - Click on OK
- Move the exported file to DB/Mowing/
Class 0
- Add the OSO raster (DB/OSO/OSO_20XX.tif) as a layer (Layer - Add Layer - Add Raster Layer)
- Right click on the OSO layer in the bottom left, go to Export - Save As...
- In the popup menu: do exactly the same as for class 1, but write mowing0.tif as file name
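If you prefer to script these exports rather than go through the QGIS GUI, the same operation can be sketched with the GDAL Python bindings (untested here; the mowing0.tif destination is an assumption, and 20XX stands for the year you use):
<pre>
# scripted equivalent of the QGIS exports above (sketch, untested)
from osgeo import gdal

# Extent from the instructions above, in EPSG:32631: (minX, minY, maxX, maxY)
bounds = (699960, 4890240, 809760, 5000040)

for src, dst in [('DB/Delphine/Mowing.img', 'DB/Mowing/mowing1.tif'),
                 ('DB/OSO/OSO_20XX.tif', 'DB/Mowing/mowing0.tif')]:
    gdal.Warp(dst, src, format='GTiff', dstSRS='EPSG:32631',
              outputBounds=bounds, xRes=10, yRes=10)
</pre>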
Date grid
The date grid is created by searching for a SENTINEL-1 - SENTINEL-2 correspondence (thanks to their high revisit frequency, SENTINEL-1 images are selected by a nearest-neighbour approximation).
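The matching boils down to picking, for each SENTINEL-2 date, the closest SENTINEL-1 date. A minimal sketch of the idea (dategrid.py is the actual implementation):
<pre>
# nearest-neighbour date matching (illustration of the idea behind dategrid.py)
from datetime import date

def nearest_s1(s2_date, s1_dates):
    # return the S1 acquisition date closest to the given S2 date
    return min(s1_dates, key=lambda d: abs((d - s2_date).days))

s1_dates = [date(2018, 6, 1), date(2018, 6, 7), date(2018, 6, 13)]
s2_dates = [date(2018, 6, 2), date(2018, 6, 12)]

grid = [(s2, nearest_s1(s2, s1_dates)) for s2 in s2_dates]
# -> 2018-06-02 pairs with 2018-06-01, 2018-06-12 pairs with 2018-06-13
</pre>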
dategrid.py script:
Input
- --folder: path to the parentfolder containing tile/year/SENTINEL-X/*
Output [All in the script folder]
- dategrid.csv: date grid correspondence between SENTINEL-1 and SENTINEL-2
Example
<pre> python3 dategrid.py --folder path/to/parentfolder </pre>
Parcel filtering and dataset computation
The parcel filtering and dataset creation are performed in 6 steps, which can be run separately in the right order, or all at once with filtering_n_dataset.py
- Select only meadows, grasslands and woody moorlands in the OSO images and formalize the class 1 images (1_prefilter.py)
- Remove overlapping parcels (2_overlayremoval.py). NB: the images are also resized to decrease the computation time of the next steps
- Remove too small (< 1 hectare) and too big (> 100 hectares) parcels, crop them and save the final parcels' image (3_sizeremoval.py); a minimal sketch of this filtering is shown after this list
- Compute the groundtruth vector (4_groundtruthvect.py)
- Compute the contextual dataset (5_contextualdataset.py), for now only altitude
- Compute the modal dataset (6_modaldataset.py); this step is really long (almost 1 week for 4000 parcels with 17 modes on 144 images, i.e. 2 years with a 5-day frequency), so it may be a good idea to run it separately
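Here is the size-filtering sketch announced in step 3 (illustrative; the real code is 3_sizeremoval.py). At 10 m resolution one pixel covers 100 m², so 1 hectare = 100 pixels and 100 hectares = 10000 pixels:
<pre>
# size filtering sketch (the real implementation is 3_sizeremoval.py)
import numpy as np

def filter_by_size(parcels, min_px=100, max_px=10000):
    # parcels: labeled image, 0 = background, 1..N = parcel labels
    labels, counts = np.unique(parcels, return_counts=True)
    for label, count in zip(labels, counts):
        if label == 0:
            continue
        if count < min_px or count > max_px:
            parcels[parcels == label] = 0  # drop too small / too big parcels
    return parcels
</pre>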
filtering_n_dataset.py script:
Input
- --class0: path to the unmowed parcel image
- --class1: path to the mowed parcel image
- --altitude: path to the altitude map image
- --datesgrid: path to the dates grid csv file
- --tiledir: path to the tile directory (e.g. 31TGK, shown in section 2 Folder architecture)
Output [All in the script folder]
- parcels.tif: image containing the parcels' locations and labels (from 1 to the number of parcels)
- grt.npy: groundtruth vector containing the parcels' groundtruth values (1 for mowed, 0 for unmowed)
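A quick way to sanity-check these two outputs before going further (a sketch, assuming grt.npy stores one 0/1 value per parcel in label order):
<pre>
# sanity check of parcels.tif and grt.npy (sketch)
import numpy as np
import rasterio

with rasterio.open('parcels.tif') as src:
    parcels = src.read(1)

grt = np.load('grt.npy')
n_parcels = int(parcels.max())  # labels run from 1 to the number of parcels

assert len(grt) == n_parcels
print(n_parcels, 'parcels:', int(grt.sum()), 'mowed,',
      len(grt) - int(grt.sum()), 'unmowed')
</pre>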
Example
<pre> python3 filtering_n_dataset.py --class0 path/to/mowing0.tif --class1 path/to/mowing1.tif --altitude path/to/alti.tif --datesgrid path/to/dategrid.csv --tiledir path/to/31TGK </pre>
5 Learning and testing
Launch the learning and the testing of the model with the model.py script:
Input
- --mode: path to the modal dataset array
- --context: path to the contextual dataset array
- --labels: path to the groundtruth labels array
Output
- AUC results for each fold
Example
<pre> python3 model.py --mode path/to/mode.npy --context path/to/context.npy --labels path/to/labels.npy </pre>
NB: This model was built in a rather rough way (in less than 2 months), so do not rely on it too heavily.
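For reference, the per-fold AUC evaluation scheme can be sketched as follows on the saved arrays. The logistic regression stand-in is only an assumption for illustration; model.py implements the actual multimodal-temporal detector:
<pre>
# cross-validated AUC sketch; the actual detector is in model.py
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

mode = np.load('mode.npy')        # modal dataset
context = np.load('context.npy')  # contextual dataset (altitude)
labels = np.load('labels.npy')    # 1 = mowed, 0 = unmowed

# flatten the modal time series and append the contextual features
X = np.hstack([mode.reshape(len(labels), -1),
               context.reshape(len(labels), -1)])

folds = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for k, (train, test) in enumerate(folds.split(X, labels)):
    clf = LogisticRegression(max_iter=1000).fit(X[train], labels[train])
    auc = roc_auc_score(labels[test], clf.predict_proba(X[test])[:, 1])
    print('fold', k, 'AUC =', round(auc, 3))
</pre>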