
<p> <a href="https://www.gnu.org/software/bash/manual/bash.html"><img alt="Shell Script" src="https://img.shields.io/badge/-Shell%20Script-2C3840?style=flat-square&logo=gnu-bash&logoColor=white" /></a> <a href="https://www.python.org/"><img alt="Python 3" src="https://img.shields.io/badge/-Python-2b5b84?style=flat-square&logo=python&logoColor=white" /></a> <a href="https://pytorch.org/"><img alt="PyTorch" src="https://img.shields.io/badge/-PyTorch-ee4c2c?style=flat-square&logo=pytorch&logoColor=white" /></a> <a href="https://lightning.ai/"><img alt="Lightning" src="https://img.shields.io/badge/-Lightning-792de4?style=flat-square&logo=lightning&logoColor=white" /></a> <a href="https://www.docker.com/"><img alt="Docker" src="https://img.shields.io/badge/-Docker-0073ec?style=flat-square&logo=docker&logoColor=white" /></a> <a href="https://www.comet.com/"><img alt="Comet" src="https://custom-icon-badges.herokuapp.com/badge/Comet-262c3e?style=flat-square&logo=logo_comet_ml&logoColor=white" /></a> </p>

# sr-pytorch-lightning

## Introduction

Super-resolution algorithms implemented with PyTorch Lightning. Based on code by So Uchida.

Currently supports the following models:

## Requirements

## Usage

I decided to split the logic of dealing with Docker (contained in `Makefile`) from running the Python code itself (contained in `start_here.sh`). Since I run my code on a remote machine, I use GNU screen to keep the code running even if my connection fails.

In `Makefile` there is an environment variables section where a few variables can be set. Most importantly, `DATASETS_PATH` must point to the root folder of your super-resolution datasets.
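As an illustration, that section might look like the fragment below. Only `DATASETS_PATH`, `DOCKERFILE`, `RUN_STRING`, and `TELEGRAM_BOT_MOUNT_STRING` are variables mentioned in this README; the values shown are placeholders, not the actual defaults:

```make
# Environment variables section of the Makefile (example values only)
DATASETS_PATH := /data/super_resolution_datasets   # root folder of your SR datasets
DOCKERFILE    ?= Dockerfile                        # can be overridden on the command line
RUN_STRING    ?= ./start_here.sh                   # command executed by `make run`
```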

In `start_here.sh` a few variables can be set in the variables region. Default values have been chosen to allow easy experimentation.

### Creating docker image

```shell
make
```

If you want to use the specific versions I used during my last experiments, check the `pytorch_1.11` branch. To build the Docker image using those versions, simply run:

```shell
make DOCKERFILE=Dockerfile_fixed_versions
```

### Testing docker image

```shell
make test
```

This should print information about all available GPUs, like this:

```text
Found 2 devices:
        _CudaDeviceProperties(name='NVIDIA Quadro RTX 8000', major=7, minor=5, total_memory=48601MB, multi_processor_count=72)
        _CudaDeviceProperties(name='NVIDIA Quadro RTX 8000', major=7, minor=5, total_memory=48601MB, multi_processor_count=72)
```
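The check behind `make test` can be reproduced with a short PyTorch snippet along these lines (a sketch, not necessarily the exact script the image runs):

```python
# List every CUDA device PyTorch can see, mirroring the `make test` output.
import torch

n = torch.cuda.device_count()
print(f"Found {n} devices:")
for i in range(n):
    # get_device_properties returns the _CudaDeviceProperties shown above
    print(f"\t{torch.cuda.get_device_properties(i)}")
```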

### Training model

If you haven't configured the Telegram bot to notify you when a run is over, or don't want to use it, simply remove the line

```make
$(TELEGRAM_BOT_MOUNT_STRING) \
```

from the `make run` command in the `Makefile`, and also comment out the line

```shell
send_telegram_msg=1
```

in `start_here.sh`.

Then, to train the models, simply call

```shell
make run
```

By default, it will run the file `start_here.sh`.

If you want to run another command inside the Docker container, just change the default value of the `RUN_STRING` variable. For example:

```shell
make RUN_STRING="ipython3" run
```

## Creating your own model

To create your own model, add a new file inside `models/` containing a class that inherits from `SRModel`. Your class should implement the `forward` method. Then, add your model to `models/__init__.py`. The model will automatically become available as a model parameter option in `train.py` and `test.py`.

Good starting points for creating your own model are the SRCNN and EDSR models.
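As a sketch of the shape such a model takes, here is a toy x2 network. It stands in a plain `nn.Module` since `SRModel`'s exact interface lives in this repo, and the layer choices are purely illustrative:

```python
import torch
from torch import nn


class MySRNet(nn.Module):  # in this repo, inherit from SRModel instead
    """Toy x2 super-resolution network: two convolutions plus pixel shuffle."""

    def __init__(self, scale_factor: int = 2, channels: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, channels * scale_factor ** 2, kernel_size=3, padding=1),
        )
        # PixelShuffle rearranges the extra channels into a higher-resolution image
        self.upsample = nn.PixelShuffle(scale_factor)

    def forward(self, x):
        return self.upsample(self.body(x))
```

A 16x16 RGB input then comes out as 32x32: `MySRNet()(torch.rand(1, 3, 16, 16)).shape` is `torch.Size([1, 3, 32, 32])`.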

## Using Comet

If you want to use Comet to log your experiment data, just create a file named `.comet.config` in the root folder of this repository and add the following lines:

```ini
[comet]
api_key=YOUR_API_KEY
```

More configuration variables can be found in the Comet documentation.
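For instance, the workspace and project can be set in the same file (the values below are placeholders):

```ini
[comet]
api_key=YOUR_API_KEY
workspace=YOUR_WORKSPACE
project_name=sr-pytorch-lightning
```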

Most of the things that I found useful to log (metrics, code, logs, image results) are already being logged. Check `train.py` and `srmodel.py` for more details. All of this logging is done through the Comet logger already available in PyTorch Lightning. An example of these experiments logged in Comet can be found here.

## Finished experiment Telegram notification

Since experiments can run for a while, I decided to use a Telegram bot to notify me when they are done (or when there is an error). For this, I use the telegram-send Python package. I recommend installing it on your machine and configuring it properly.

To do this, simply use:

```shell
pip3 install telegram-send
telegram-send --configure
```

Then, simply copy the configuration file created at `~/.config/telegram-send.conf` to another directory to make it easier to mount on the Docker image. This location can be configured in the source part of the `TELEGRAM_BOT_MOUNT_STRING` variable in the `Makefile` (by default it is set to `$(HOME)/Docker/telegram_bot_config`).
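With the default mount source, the copy amounts to something like the following (the config file only exists after running `telegram-send --configure`):

```shell
# Copy the telegram-send config into the directory mounted into the container
mkdir -p "$HOME/Docker/telegram_bot_config"
if [ -f "$HOME/.config/telegram-send.conf" ]; then
    cp "$HOME/.config/telegram-send.conf" "$HOME/Docker/telegram_bot_config/"
fi
```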