
(this list is no longer maintained, and I am not sure how relevant it is in 2020)

How to Train a GAN? Tips and tricks to make GANs work

While research in Generative Adversarial Networks (GANs) continues to improve the fundamental stability of these models, we use a bunch of tricks to train them and make them stable day to day.

Here is a summary of some of these tricks.

The authors of this document are listed in the Authors section at the end.

If you find a trick that is particularly useful in practice, please open a Pull Request to add it to the document. If we find it to be reasonable and verified, we will merge it in.

1: Normalize the inputs
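
In practice this is usually read as: scale the images to [-1, 1] and use tanh as the last layer of the generator output, so real and generated samples live in the same range. A minimal sketch, assuming PyTorch/torchvision; the transform pipeline is illustrative:

```python
import torchvision.transforms as T

# Scale real images from [0, 1] to [-1, 1], matching a tanh-output generator.
transform = T.Compose([
    T.ToTensor(),                              # uint8 [0, 255] -> float [0, 1]
    T.Normalize(mean=(0.5,), std=(0.5,)),      # [0, 1] -> [-1, 1] (per channel)
])

# The generator should then end in tanh, e.g.:
#   nn.Sequential(..., nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh())
```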

2: A modified loss function

In GAN papers, the loss function to optimize G is min log(1 - D(G(z))), but in practice folks use max log D(G(z)) instead, because the first formulation has vanishing gradients early on, when D can easily reject G's samples (Goodfellow et al., 2014).

In practice this works well: flip the labels when training the generator (real = fake, fake = real).
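
A minimal PyTorch sketch of the max log D objective: reuse binary cross-entropy but give the generator's samples the "real" target. `D` and `fake_images` are placeholders for your discriminator and a batch of G's outputs:

```python
import torch
import torch.nn.functional as F

def generator_loss(D, fake_images):
    """Non-saturating loss: maximize log D(G(z)) instead of minimizing log(1 - D(G(z)))."""
    logits = D(fake_images)                 # raw (pre-sigmoid) discriminator scores
    target = torch.ones_like(logits)        # label the fakes as "real" for G's update
    return F.binary_cross_entropy_with_logits(logits, target)
```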

3: Use a spherical Z

[figures: cube, sphere]
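
The usual reading: sample z from a Gaussian rather than a uniform cube, and interpolate between latent points along the great circle (slerp) rather than a straight line. A sketch, assuming PyTorch; the slerp formula is the standard one, not taken from this document:

```python
import torch

def sample_z(batch_size, dim):
    # Gaussian noise instead of Uniform(-1, 1): samples concentrate near a sphere.
    return torch.randn(batch_size, dim)

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors (great-circle path)."""
    z0n, z1n = z0 / z0.norm(), z1 / z1.norm()
    omega = torch.acos(torch.clamp(torch.dot(z0n, z1n), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < 1e-6:                     # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * z0 + t * z1
    return (torch.sin((1.0 - t) * omega) / so) * z0 + (torch.sin(t * omega) / so) * z1
```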

4: BatchNorm

[figure: batchmix]
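
With batch norm in the discriminator, this trick is commonly taken to mean: build mini-batches that are all-real or all-fake instead of mixing the two, so the batch statistics are not a blend of both distributions. A hedged sketch of the D update; `D`, `real_batch`, and `fake_batch` are placeholders:

```python
import torch
import torch.nn.functional as F

def discriminator_loss(D, real_batch, fake_batch):
    # Two forward passes on homogeneous batches instead of one pass on a mixed batch,
    # so BatchNorm layers see purely-real or purely-fake statistics.
    real_logits = D(real_batch)
    fake_logits = D(fake_batch.detach())
    loss_real = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
    loss_fake = F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    return loss_real + loss_fake
```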

5: Avoid Sparse Gradients: ReLU, MaxPool
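
The usual interpretation: prefer LeakyReLU over ReLU (in both G and D) and strided or average-pooled convolutions over max pooling, so gradients are dense rather than sparse. A sketch of a downsampling block under that reading, assuming PyTorch:

```python
import torch.nn as nn

def d_block(in_ch, out_ch):
    # Strided conv (or nn.AvgPool2d) downsamples without MaxPool's winner-take-all gradients;
    # LeakyReLU keeps a small gradient for negative activations instead of zeroing them.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.LeakyReLU(0.2, inplace=True),
    )
```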

6: Use Soft and Noisy Labels
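
Typically this means two things: smooth the hard 0/1 targets into ranges (label smoothing, as in Salimans et al. 2016), and occasionally flip the labels shown to the discriminator. A hedged sketch; the exact ranges and flip probability are illustrative defaults:

```python
import torch

def smooth_labels(batch_size, real=True):
    # Soft targets instead of hard 0 / 1.
    lo, hi = (0.7, 1.2) if real else (0.0, 0.3)
    return torch.empty(batch_size).uniform_(lo, hi)

def maybe_flip(real_labels, fake_labels, p=0.05):
    # Occasionally swap real/fake targets when training the discriminator (noisy labels).
    if torch.rand(()) < p:
        return fake_labels, real_labels
    return real_labels, fake_labels
```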

7: DCGAN / Hybrid Models

8: Use stability tricks from RL
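
The trick most often borrowed from RL here is experience replay: keep a buffer of past generated images and occasionally show them to the discriminator again. A minimal sketch, assuming PyTorch; the class name and capacity are illustrative:

```python
import random
import torch

class ReplayBuffer:
    """Stores old generated samples so D occasionally revisits them (experience replay)."""
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.samples = []

    def push(self, fakes):
        for img in fakes.detach().cpu():
            if len(self.samples) < self.capacity:
                self.samples.append(img)
            else:
                # overwrite a random old sample once the buffer is full
                self.samples[random.randrange(self.capacity)] = img

    def sample(self, batch_size):
        return torch.stack(random.sample(self.samples, min(batch_size, len(self.samples))))
```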

9: Use the ADAM Optimizer
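
A common setup is Adam with DCGAN-style hyperparameters for the generator, with plain SGD for the discriminator as a variant some people prefer. A sketch, assuming PyTorch; the tiny networks are placeholders:

```python
import torch.nn as nn
import torch.optim as optim

G = nn.Sequential(nn.Linear(100, 784), nn.Tanh())   # placeholder generator
D = nn.Sequential(nn.Linear(784, 1))                 # placeholder discriminator

opt_G = optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))  # Adam for G
opt_D = optim.SGD(D.parameters(), lr=2e-4)                       # SGD for D (optional variant)
```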

10: Track failures early
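
Two signals worth watching from the very first iterations: the discriminator loss collapsing to 0 (a classic failure mode) and very large gradient norms. A hedged sketch of such a check, meant to be called after d_loss.backward(); names and thresholds are illustrative:

```python
import torch

def health_check(D, d_loss, grad_threshold=100.0):
    """Cheap early-warning checks for a GAN training loop."""
    if d_loss.item() < 1e-4:
        print("warning: D loss is ~0 -- classic failure mode, G is probably not learning")
    grads = [p.grad for p in D.parameters() if p.grad is not None]
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) if grads else torch.tensor(0.0)
    if grad_norm > grad_threshold:
        print(f"warning: D gradient norm is {float(grad_norm):.1f} -- likely blowing up")
```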

11: Don't balance loss via statistics (unless you have a good reason to)

For example, a schedule like:

while lossD > A:
  train D
while lossG > B:
  train G

12: If you have labels, use them
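
This is usually read as making the discriminator also classify its inputs (an auxiliary-classifier / class-conditional setup). A hedged sketch of a discriminator with an extra classification head; the architecture is illustrative:

```python
import torch.nn as nn

class AuxDiscriminator(nn.Module):
    """Discriminator with a second head that predicts the class label of its input."""
    def __init__(self, in_dim=784, n_classes=10):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, 256), nn.LeakyReLU(0.2))
        self.adv_head = nn.Linear(256, 1)           # real / fake score
        self.cls_head = nn.Linear(256, n_classes)   # class logits -- this is where labels help

    def forward(self, x):
        h = self.trunk(x)
        return self.adv_head(h), self.cls_head(h)
```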

13: Add noise to inputs, decay over time
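
A common implementation: add Gaussian noise to the discriminator's inputs (real and fake alike) and anneal its standard deviation toward zero over training. A sketch, assuming PyTorch; the schedule and starting sigma are illustrative:

```python
import torch

def add_instance_noise(x, step, total_steps, start_sigma=0.1):
    # Gaussian noise on D's inputs, with sigma linearly decayed to 0 over training.
    sigma = start_sigma * max(0.0, 1.0 - step / total_steps)
    return x + sigma * torch.randn_like(x)
```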

14: [notsure] Train discriminator more (sometimes)

15: [notsure] Batch Discrimination

16: Discrete variables in Conditional GANs
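
For discrete conditioning variables (e.g. class labels), a common recipe is: embed the label with an Embedding layer, keep the embedding dimensionality low, and broadcast it to extra image channels concatenated to the input. A hedged sketch; the module name and sizes are illustrative:

```python
import torch
import torch.nn as nn

class LabelToChannels(nn.Module):
    """Embeds a discrete label and appends it to an image as extra channels."""
    def __init__(self, n_classes=10, embed_dim=8):
        super().__init__()
        self.embed = nn.Embedding(n_classes, embed_dim)     # low-dimensional label embedding

    def forward(self, images, labels):
        b, _, h, w = images.shape
        e = self.embed(labels).view(b, -1, 1, 1)            # (b, embed_dim, 1, 1)
        e = e.expand(-1, -1, h, w)                          # broadcast to the image's spatial size
        return torch.cat([images, e], dim=1)                # concatenate as additional channels
```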

17: Use Dropouts in G in both train and test phase
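
As in pix2pix-style generators, this is usually done by applying ~50% dropout on several generator layers and keeping it active at inference time, so dropout acts as a source of noise. A sketch of one such block, assuming PyTorch; the layer shapes are placeholders:

```python
import torch.nn as nn
import torch.nn.functional as F

class DropoutUpBlock(nn.Module):
    """Generator block whose dropout stays active even in eval mode."""
    def __init__(self, in_ch, out_ch, p=0.5):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1)
        self.p = p

    def forward(self, x):
        x = F.relu(self.up(x))
        # training=True keeps dropout stochastic at test time as well
        return F.dropout(x, p=self.p, training=True)
```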

Authors