GAN Fingerprints

Attributing Fake Images to GANs: Learning and Analyzing GAN Fingerprints

Ning Yu, Larry Davis, Mario Fritz<br> ICCV 2019

paper | project page | poster | media coverage in Chinese

<img src='fig/teaser.png' width=800>

GAN fingerprints demo

<img src='classifier_visNet/demo/demo.gif' width=800>

Abstract

Recent advances in Generative Adversarial Networks (GANs) have shown increasing success in generating photorealistic images. But they also raise challenges to visual forensics and model attribution. We present the first study of learning GAN fingerprints towards image attribution and using them to classify an image as real or GAN-generated. For GAN-generated images, we further identify their sources. Our experiments show that GANs carry distinct model fingerprints and leave stable fingerprints in their generated images, which support image attribution.

Prerequisites

Datasets

To train GANs and our classifiers, we consider two real-world datasets: CelebA human face images and LSUN bedroom scene images.

GAN sources

For each dataset, we pre-train four GAN sources: ProGAN, SNGAN, CramerGAN, and MMDGAN.

GAN classifier

Given 128x128 images either sampled from the real dataset or generated by the different GANs, we train a classifier to attribute each image to its source. The code is modified from ProGAN.
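The attribution task itself is an N-way classification over the real dataset plus the four GAN sources. The following is a minimal sketch of that decision step in numpy, assuming a feature extractor has already mapped a 128x128 image to a feature vector; the weights here are random stand-ins, not trained values, and `attribute` is a hypothetical helper, not part of the released code.

```python
import numpy as np

# Class order follows the README: the real dataset plus four GAN sources.
SOURCES = ["real", "ProGAN", "SNGAN", "CramerGAN", "MMDGAN"]

def softmax(logits):
    z = logits - logits.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def attribute(features, weights, bias):
    """Return (predicted source, class probabilities) for one image's features."""
    probs = softmax(weights @ features + bias)  # 5-way softmax head
    return SOURCES[int(np.argmax(probs))], probs

# Toy usage with random weights (illustration only, not trained values).
rng = np.random.default_rng(0)
feats = rng.normal(size=512)                  # e.g. a 512-dim embedding
W, b = rng.normal(size=(5, 512)), np.zeros(5)
label, probs = attribute(feats, W, b)
```

In the actual repo, the feature extractor and the softmax head are trained jointly end-to-end on the mixed real/generated image pool.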

GAN classifier visNet variant (fingerprint visualization)

This is a variant of the regular GAN classifier above. Given 128x128 images either sampled from the real dataset or generated by the different GANs, we not only train a classifier to attribute each image to its source but also simultaneously learn to expose a fingerprint for each image and for each source. The fingerprints are also of size 128x128. The code is modified from ProGAN and has a similar API to the regular GAN classifier above.
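Once per-image and per-source fingerprints are exposed, attribution can be read off by matching an image's fingerprint against each model fingerprint. Below is a hedged numpy sketch of that matching step using normalized cross-correlation; the model fingerprints are random placeholders and `attribute_by_fingerprint` is an illustrative helper, not the repo's API.

```python
import numpy as np

SOURCES = ["real", "ProGAN", "SNGAN", "CramerGAN", "MMDGAN"]

def ncc(a, b):
    """Normalized cross-correlation between two fingerprint images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def attribute_by_fingerprint(image_fp, model_fps):
    """Attribute an image fingerprint to the best-matching model fingerprint."""
    scores = {s: ncc(image_fp, fp) for s, fp in model_fps.items()}
    return max(scores, key=scores.get), scores

# Placeholder 128x128 model fingerprints (random, for illustration only).
rng = np.random.default_rng(1)
model_fps = {s: rng.normal(size=(128, 128)) for s in SOURCES}
# Simulate an image whose fingerprint is close to the SNGAN model fingerprint.
image_fp = model_fps["SNGAN"] + 0.1 * rng.normal(size=(128, 128))
pred, scores = attribute_by_fingerprint(image_fp, model_fps)
```

The correlation scores also give a per-source confidence, which is what the demo GIF above visualizes alongside the fingerprints.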

Citation

@inproceedings{yu2019attributing,
    author = {Yu, Ning and Davis, Larry and Fritz, Mario},
    title = {Attributing Fake Images to GANs: Learning and Analyzing GAN Fingerprints},
    booktitle = {IEEE International Conference on Computer Vision (ICCV)},
    year = {2019}
}

Acknowledgement