
pix2pix-GANs

We are building pix2pix GANs using PyTorch. We will be using the Satellite-Map image dataset (http://efrosgans.eecs.berkeley.edu/pix2pix/datasets/maps.tar.gz).

For a more detailed explanation, you may refer to this Blog

Model Architecture

Since pix2pix is a GAN-based architecture, we have one generator, which generates an image given some input, and one discriminator, which classifies a given image as real or fake. pix2pix is best suited for image-to-image translation, where we have an image from one domain and a corresponding image from a different domain. Our generator tries to produce an image from the second domain given an image from the first.

The generator's architecture is similar to that of an autoencoder, whereas the discriminator's architecture is similar to that of a binary classifier.

More specifically, the generator follows the U-Net architecture. The discriminator is a patch-wise (PatchGAN) discriminator: its input is the concatenation of the image from domain 1 and the generated image from domain 2. Both model architectures (generator and discriminator) are defined in Models.py
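Models.py is not shown here, so the following is a minimal sketch of the two ideas above, not the repository's exact code: a single U-Net encoder/decoder block for the generator, and a PatchGAN discriminator that takes the concatenation of the domain-1 image and a (real or generated) domain-2 image and outputs one logit per image patch. Layer widths and kernel settings are assumptions based on the standard pix2pix setup.

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """One encoder (down) or decoder (up) block of a U-Net-style generator."""
    def __init__(self, in_c, out_c, down=True):
        super().__init__()
        if down:
            # encoder: strided conv halves the spatial resolution
            self.conv = nn.Sequential(
                nn.Conv2d(in_c, out_c, 4, 2, 1),
                nn.BatchNorm2d(out_c),
                nn.LeakyReLU(0.2),
            )
        else:
            # decoder: transposed conv doubles the spatial resolution
            self.conv = nn.Sequential(
                nn.ConvTranspose2d(in_c, out_c, 4, 2, 1),
                nn.BatchNorm2d(out_c),
                nn.ReLU(),
            )

    def forward(self, x):
        return self.conv(x)

class PatchDiscriminator(nn.Module):
    """PatchGAN: classifies each patch of the (input, output) pair as real/fake."""
    def __init__(self, in_channels=3):
        super().__init__()
        self.model = nn.Sequential(
            nn.Conv2d(in_channels * 2, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2),
            nn.Conv2d(256, 1, 4, 1, 1),  # one logit per patch
        )

    def forward(self, x, y):
        # x: domain-1 image, y: real or generated domain-2 image
        return self.model(torch.cat([x, y], dim=1))
```

For a 256x256 input pair, this discriminator produces a 31x31 grid of patch logits rather than a single real/fake score, which is what makes it "patch-wise".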

Dataset Prepration

Since we are working on a satellite-image-to-map generator, the available dataset consists of images in which each satellite image and its corresponding map image are placed side by side. Each combined image in the dataset is 1200x600 pixels with 3 channels. So we first need to split each image so that the dataloader receives it in the (satellite_image, map_image) format. We also apply basic augmentation to the input in order to make our generator more robust. Dataset preparation is done in dataset.py
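The splitting step above can be sketched as follows. This is an illustration, not the code from dataset.py; it assumes the combined image has already been loaded as a NumPy array in (height, width, channels) order, i.e. shape (600, 1200, 3), with the satellite image on the left half and the map on the right half.

```python
import numpy as np

def split_pair(combined):
    """Split a side-by-side (H, 2W, C) image into (satellite_image, map_image)."""
    h, w, c = combined.shape
    half = w // 2
    satellite_image = combined[:, :half]   # left half
    map_image = combined[:, half:]         # right half
    return satellite_image, map_image
```

Each half then has shape (600, 600, 3), and augmentations (e.g. random flips or jitter) would be applied identically to both halves so the pair stays aligned.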

Hyperparameters

| Hyperparameter    | Value |
| ----------------- | ----- |
| Learning Rate     | 2e-4  |
| beta1             | 0.5   |
| Batch Size        | 16    |
| Number of workers | 2     |
| Image Size        | 256   |
| L1_Lambda         | 100   |
| Lambda_GP         | 10    |
| Epochs            | 800   |

Configuration of these hyperparameters is done in config.py
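config.py is not shown here; a plausible sketch of how the hyperparameters in the table above could be declared (the constant names are assumptions, not the repository's exact identifiers) is:

```python
# config.py (sketch) -- hyperparameters from the table above
import torch

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
LEARNING_RATE = 2e-4   # Adam learning rate
BETA1 = 0.5            # Adam beta1
BATCH_SIZE = 16
NUM_WORKERS = 2        # dataloader workers
IMAGE_SIZE = 256       # images are resized/cropped to 256x256
L1_LAMBDA = 100        # weight of the L1 reconstruction term
LAMBDA_GP = 10         # gradient-penalty weight
NUM_EPOCHS = 800
```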

Training Results

After 1st Epoch

Output after epoch 1: satellite image (left), map (middle), generated map (right)

After 100 Epochs

Output after epoch 100: satellite image (left), map (middle), generated map (right)

After 400 Epochs

Output after epoch 400: satellite image (left), map (middle), generated map (right)

After 800 Epochs

Output after epoch 800: satellite image (left), map (middle), generated map (right)

Generator Loss Vs. Discriminator Loss

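The two curves correspond to the standard pix2pix objective: the generator is trained with an adversarial term plus an L1 reconstruction term weighted by L1_Lambda (100 in the table above), and the discriminator with a real/fake binary cross-entropy. A minimal sketch of these losses, assuming logit outputs from the PatchGAN discriminator (function names here are illustrative, not from the repository):

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
L1_LAMBDA = 100  # from the hyperparameter table

def generator_loss(disc_fake_logits, fake_map, real_map):
    # adversarial term: generator wants the discriminator to output "real"
    adv = bce(disc_fake_logits, torch.ones_like(disc_fake_logits))
    # L1 term: keep the generated map close to the ground-truth map
    recon = l1(fake_map, real_map) * L1_LAMBDA
    return adv + recon

def discriminator_loss(disc_real_logits, disc_fake_logits):
    # discriminator should score real pairs as 1 and generated pairs as 0
    real = bce(disc_real_logits, torch.ones_like(disc_real_logits))
    fake = bce(disc_fake_logits, torch.zeros_like(disc_fake_logits))
    return (real + fake) / 2
```

The large L1 weight explains why the generated maps already look structurally plausible after relatively few epochs, while the adversarial term sharpens the details over longer training.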

Training

```shell
git clone git@github.com:shashi7679/pix2pix-GANs.git
cd pix2pix-GANs
bash download.sh
```

Run train.ipynb in Jupyter Notebook

References