# Deep-Learning-for-Solar-Panel-Recognition
Recognition of photovoltaic cells in aerial images with Convolutional Neural Networks (CNNs): object detection with YOLOv5 models and image segmentation with Unet++, FPN, DeepLabV3+, and PSPNet.
## Installation (PyTorch with CUDA 11.3)
Create a Python 3.8 virtual environment and run the following command:

```shell
pip install -r requirements.txt && pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
```
With Anaconda:

```shell
pip install -r requirements.txt && conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
```
## How to start?
### Object Detection
- Specify the location of the data in `sp_dataset.yaml`.
- Preprocess the data and generate annotations with `yolo_preprocess_data.py` and `create_yolo_annotations.py`, respectively.
- Run `yolo_train.py` for training.
- Run `yolo_detect.py` for inference.
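YOLO annotations store one line per object: a class index followed by the box center and size, all normalized by the image dimensions. As a rough illustration of the format the annotation scripts produce (the helper name and signature below are my own, not taken from `create_yolo_annotations.py`):

```python
def to_yolo_annotation(cls_id, box, img_w, img_h):
    """Convert a pixel-space box (x_min, y_min, x_max, y_max) into a
    YOLO annotation line: 'cls x_center y_center width height',
    with all coordinates normalized to [0, 1]."""
    x_min, y_min, x_max, y_max = box
    x_c = (x_min + x_max) / 2 / img_w
    y_c = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{cls_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# A 100x50 px panel centered at (200, 100) in a 400x200 image:
print(to_yolo_annotation(0, (150, 75, 250, 125), 400, 200))
# -> 0 0.500000 0.500000 0.250000 0.250000
```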
### Segmentation
- Specify the structure of the data in `segmentation/datasets.py`.
- The code to train and run segmentation models can be found in the `notebooks` folder.
## Data sources
- **Solar Panels Dataset**: multi-resolution dataset for photovoltaic panel segmentation from satellite and aerial imagery (https://zenodo.org/record/5171712).
- **Google Maps Aerial Images**
  - GoogleMapsAPI: `src/data/wrappers.GoogleMapsAPIDownloader`
  - Web scraping: `src/data/wrappers.GoogleMapsWebDownloader`
- **Sentinel-2 Data (unused)**: Sentinel-2 satellite data from Copernicus. `src/data/wrappers.Sentinel2Downloader`
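The internals of `GoogleMapsAPIDownloader` are not shown here, but requests to the Google Static Maps API generally look like the sketch below, which only builds the request URL (the key, zoom level, and tile size are placeholder assumptions, not values from this project):

```python
from urllib.parse import urlencode

STATIC_MAPS_URL = "https://maps.googleapis.com/maps/api/staticmap"

def aerial_tile_url(lat, lon, zoom=19, size=640, api_key="YOUR_API_KEY"):
    """Build a Google Static Maps request URL for a satellite tile
    centered on (lat, lon). Key and defaults are placeholders."""
    params = {
        "center": f"{lat},{lon}",
        "zoom": zoom,              # ~19 resolves individual rooftops
        "size": f"{size}x{size}",  # 640x640 is the standard-tier maximum
        "maptype": "satellite",
        "key": api_key,
    }
    return f"{STATIC_MAPS_URL}?{urlencode(params)}"

print(aerial_tile_url(40.4168, -3.7038))
```

Fetching the returned URL (e.g. with `requests.get`) yields a PNG tile that can be saved for annotation.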
## Processing pipeline
## Models
### Object Detection
- YOLOv5-S: 7.2 M parameters
- YOLOv5-M: 21.2 M parameters
- YOLOv5-L: 46.5 M parameters
- YOLOv5-X: 86.7 M parameters

Architectures are based on the YOLOv5 repository. Download all the models here.
### Image Segmentation
- Unet++: ~20 M parameters
- FPN: ~20 M parameters
- DeepLabV3+: ~20 M parameters
- PSPNet: ~20 M parameters

Architectures are based on the segmentation_models.pytorch repository. Download all the models here.
## Results
- Metrics
- Dataset and Google Maps images
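Segmentation quality is commonly scored with Intersection-over-Union (IoU) between predicted and ground-truth masks. A minimal NumPy sketch of the metric (not the project's exact evaluation code):

```python
import numpy as np

def iou_score(pred, target, eps=1e-7):
    """Intersection-over-Union between two binary masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # eps avoids division by zero when both masks are empty
    return (intersection + eps) / (union + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(round(iou_score(pred, target), 3))  # intersection 2, union 4 -> 0.5
```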
## Project Organization
```
├── LICENSE
├── README.md          <- The top-level README for developers using this project.
├── data               <- Data for the project (omitted)
├── docs               <- A default Sphinx project; see sphinx-doc.org for details
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks.
│   ├── segmentation_pytorch_lightning.ipynb  <- Segmentation modeling with PyTorch Lightning.
│   └── segmentation_pytorch.ipynb            <- Segmentation modeling with vanilla PyTorch.
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   ├── figures        <- Generated graphics and figures to be used in reporting
│   ├── Solar-Panels-Project-Report-UC3M      <- Main report
│   └── Solar-Panels-Presentation-UC3M.pdf    <- Presentation slides for the project.
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── setup.py           <- Makes the project pip-installable (`pip install -e .`) so src can be imported
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   ├── download.py  <- Main scripts to download Google Maps and Sentinel-2 data.
│   │   ├── wrappers.py  <- Wrappers for the Google Maps and Sentinel-2 APIs.
│   │   └── utils.py     <- Utility functions for coordinate operations.
│   │
│   ├── features       <- Scripts to turn raw data into features for modeling
│   │   ├── create_yolo_annotations.py <- Experimental script to create YOLO annotations.
│   │   └── yolo_preprocess_data.py    <- Script to process YOLO annotations.
│   │
│   ├── models         <- Scripts to train models and then use trained models to make predictions
│   │   ├── segmentation <- Image segmentation scripts to train Unet++, FPN, DeepLabV3+ and PSPNet models.
│   │   └── yolo         <- Object detection scripts to train YOLO models.
│   │
│   └── visualization  <- Scripts to create exploratory and results-oriented visualizations
│       └── visualize.py
│
└── tox.ini            <- tox file with settings for running tox; see tox.readthedocs.io
```