Awesome GAN Timeline
This is a timeline showing the development of Generative Adversarial Networks. It is intended to show the evolution and connections of ideas and to follow the most recent progress in GAN research.
The paper list partly refers to the lists in nightrome/really-awesome-gan and zhangqianhui/AdversarialNetsPapers.
Notice: All dates correspond to the initial version of the submissions.
Notice: Papers with "Title of this style" are key papers in the development of GANs. Suggestions about whether a paper should be marked as a key paper are welcome!
Notice: Since GANs have been widely adopted and validated in much recent research, from now on this list will mainly focus on papers published or accepted at major conferences (e.g. ICML, ICLR, NIPS, CVPR, ICCV, ECCV) and journals (e.g. TPAMI, TIP, IJCV) in CV and ML, rather than arXiv-only preprints (except for very important and widely discussed ones), to ensure the quality of the list.
2014-06-10 | [Theory] Ian J. Goodfellow et al. "Generative Adversarial Networks". GAN arXiv code
- The adversarial nets framework of a generator and a discriminator is first proposed.
- The framework is a two-player game in which the generator is trained to turn input noise into images that fool the discriminator, while the discriminator is trained to distinguish real samples from fake samples.
- The criterion is formulated as `E_real[log D(x)] + E_fake[log(1 - D(G(z)))]`.
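A minimal sketch of this minimax objective, assuming PyTorch; `D`, `G`, and the helper names below are illustrative, not the authors' code:

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch: D returns a raw logit, G maps noise z to a sample.
def d_loss(D, G, real, z):
    logits_real = D(real)
    logits_fake = D(G(z).detach())                 # no gradient into G here
    # Maximizing E_real[log D(x)] + E_fake[log(1 - D(G(z)))] is equivalent
    # to minimizing the two binary cross-entropy terms below.
    return (F.binary_cross_entropy_with_logits(logits_real, torch.ones_like(logits_real)) +
            F.binary_cross_entropy_with_logits(logits_fake, torch.zeros_like(logits_fake)))

def g_loss(D, G, z):
    # Non-saturating variant suggested in the paper: maximize log D(G(z)).
    logits_fake = D(G(z))
    return F.binary_cross_entropy_with_logits(logits_fake, torch.ones_like(logits_fake))
```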
2014-11-06 | [Theory] Mehdi Mirza and Simon Osindero. "Conditional Generative Adversarial Nets". CGAN arXiv code
- Generative adversarial nets are extended to a conditional model by conditioning both the generator and the discriminator on some extra information y, which could be any kind of auxiliary information, such as class labels, tags or attributes. The conditioning is performed by feeding y into both the discriminator and the generator as an additional input layer.
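A minimal sketch of this conditioning scheme, assuming PyTorch and a one-hot label vector `y`; the MLP sizes are made up for illustration:

```python
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    def __init__(self, z_dim=100, y_dim=10, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim + y_dim, 256), nn.ReLU(),
                                 nn.Linear(256, out_dim), nn.Tanh())

    def forward(self, z, y):
        return self.net(torch.cat([z, y], dim=1))   # y enters G as extra input

class CondDiscriminator(nn.Module):
    def __init__(self, x_dim=784, y_dim=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim + y_dim, 256), nn.LeakyReLU(0.2),
                                 nn.Linear(256, 1))

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))   # the same y also enters D
```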
2015-05-14 | [Theory] Gintare Karolina Dziugaite et al. "Training generative neural networks via Maximum Mean Discrepancy optimization". arXiv
2015-06-18 | [Theory] Emily Denton et al. "Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks". LAPGAN arXiv code blog
- The approach uses a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion.
2015-11-17 | [CV App] Michael Mathieu et al. "Deep multi-scale video prediction beyond mean square error". arXiv code
- Paper co-authored by Yann LeCun.
2015-11-18 | [Theory] Alireza Makhzani et al. "Adversarial Autoencoders". AAE arXiv
- A probabilistic autoencoder that uses the recently proposed generative adversarial networks (GAN) to perform variational inference by matching the aggregated posterior of the hidden code vector of the autoencoder with an arbitrary prior distribution.
- The paper shows how the adversarial autoencoder can be used in applications such as semi-supervised classification, disentangling style and content of images, unsupervised clustering, dimensionality reduction and data visualization on the MNIST, Street View House Numbers and Toronto Face datasets.
2015-11-19 | [Theory] Alec Radford et al. "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks". DCGAN arXiv code PytorchCode TensorflowCode TorchCode KerasCode
- A set of constraints on the architectural topology of convolutional GANs, naming this class of architectures Deep Convolutional GANs (DCGANs), is proposed and evaluated to make them stable to train in most settings.
- Many interesting visualized samples are shown.
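A sketch of a generator following the DCGAN guidelines (fractionally-strided convolutions, batch normalization, ReLU with a Tanh output); the channel sizes and 64x64 output here are illustrative assumptions, not the exact published configuration:

```python
import torch.nn as nn

def dcgan_generator(z_dim=100, ngf=64):
    # Input: noise of shape (N, z_dim, 1, 1); output: (N, 3, 64, 64) images.
    return nn.Sequential(
        nn.ConvTranspose2d(z_dim, ngf * 8, 4, 1, 0, bias=False), nn.BatchNorm2d(ngf * 8), nn.ReLU(True),
        nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
        nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
        nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf), nn.ReLU(True),
        nn.ConvTranspose2d(ngf, 3, 4, 2, 1, bias=False), nn.Tanh(),
    )
```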
2016-02-16 | [Theory] Daniel Jiwoong Im et al. "Generating images with recurrent adversarial networks". GRAN arXiv
- The main difference between GRAN and other generative adversarial models is that the generator G consists of a recurrent feedback loop that takes a sequence of noise samples drawn from the prior distribution z ∼ p(z) and draws an output at multiple time steps.
- An encoder g(·) and a decoder f(·) are used in G. At each time step t, `C_t = f([z, g(C_t-1)])`. The final generated sample merges all the outputs of f(·).
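A rough sketch of that recurrence, assuming PyTorch, flat feature vectors, and abstract callables `f` (decoder) and `g` (encoder); summing the per-step outputs is just one possible way to merge them:

```python
import torch

def gran_generate(f, g, z, c0, steps=5):
    c_prev, outputs = c0, []
    for _ in range(steps):
        c_t = f(torch.cat([z, g(c_prev)], dim=1))   # C_t = f([z, g(C_{t-1})])
        outputs.append(c_t)
        c_prev = c_t
    return torch.stack(outputs).sum(dim=0)          # merge all time-step outputs
```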
2016-03-12 | [CV App] Donggeun Yoo et al. "Pixel-Level Domain Transfer". arXiv(ECCV2016) code
2016-05-17 | [CV App] Scott Reed et al. "Generative Adversarial Text to Image Synthesis". arXiv code
2016-05-25 | [Text] Takeru Miyato et al. "Adversarial Training Methods for Semi-Supervised Text Classification". arXiv
2016-05-31 | [Theory] Jeff Donahue et al. "Adversarial Feature Learning". BiGANs arXiv code
2016-06-02 | [Theory] Vincent Dumoulin et al. "Adversarially Learned Inference". ALI arXiv code
2016-06-02 | [Theory] Sebastian Nowozin et al. "f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization". f-GAN arXiv code
- This paper shows that the generative-adversarial approach is a special case of an existing, more general variational divergence estimation approach. It shows that any f-divergence can be used for training generative neural samplers and proposes Variational Divergence Minimization (VDM).
2016-06-10 | [Theory] Tim Salimans et al. "Improved Techniques for Training GANs".
- Feature matching: Feature matching addresses the instability of GANs by specifying a new objective for the generator. Instead of directly maximizing the output of the discriminator, the new objective requires the generator to generate data that matches the statistics of the real data.
- Minibatch discrimination: the discriminator looks at multiple examples of a minibatch in combination rather than in isolation, which helps it detect a generator that collapses to a single mode.
- Historical averaging: each player's cost includes a term penalizing its parameters for deviating from their historical average.
- One-sided label smoothing: only the targets for real samples are smoothed (e.g. from 1 to 0.9), while the targets for fake samples are kept at 0.
- Virtual batch normalization: each example is normalized with statistics collected on a fixed reference batch, avoiding the dependence of an example's output on the other examples in its minibatch.
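Minimal sketches of two of these techniques, assuming PyTorch; `features` is a hypothetical hook returning an intermediate layer of the discriminator:

```python
import torch
import torch.nn.functional as F

def feature_matching_g_loss(features, real, fake):
    # The generator matches the mean intermediate-feature statistics of real
    # data instead of directly maximizing the discriminator output.
    return F.mse_loss(features(fake).mean(dim=0),
                      features(real).mean(dim=0).detach())

def d_loss_one_sided_smoothing(logits_real, logits_fake, smooth=0.9):
    # Only the real targets are softened (1 -> 0.9); fake targets stay at 0.
    return (F.binary_cross_entropy_with_logits(logits_real, torch.full_like(logits_real, smooth)) +
            F.binary_cross_entropy_with_logits(logits_fake, torch.zeros_like(logits_fake)))
```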
2016-06-10 | [Theory] Ming-Yu Liu and Oncel Tuzel. "Coupled Generative Adversarial Networks". CoGAN arXiv code
- This work jointly trains two GANs by feeding a signal z into two typical generator-discriminator architectures and giving the two GANs different tasks. During training, the weights of the first few layers of the generators and the last few layers of the discriminators are shared to learn a joint distribution of images without correspondence supervision.
- According to the paper, for CV applications the two tasks could be simultaneously generating realistic images and edge images, or normal color images and negative color images, from the same input signal.
2016-09-08 | [CV App] Carl Vondrick et al. "Generating Videos with Scene Dynamics". arXiv code project
2016-09-11 | [Theory] Junbo Zhao et al. "Energy-based Generative Adversarial Network". EBGAN arXiv code
- The discriminator D, whose output is a scalar energy, takes either real or generated images, and estimates the energy value E accordingly.
- This work chooses a margin loss as the energy function, but many other choices are possible.
- This paper devises a regularizer to ensure the variety of generated images, replacing Minibatch Discrimination (MBD), since MBD is hard to implement with energy-based discriminators.
2016-09-12 | [CV App] Jun-Yan Zhu et al. "Generative Visual Manipulation on the Natural Image Manifold". iGAN arXiv code project youtube
2016-09-18 | [Theory] Lantao Yu et al. "SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient". SeqGAN arXiv
2016-10-30 | [CV App] Augustus Odena et al. "Conditional Image Synthesis With Auxiliary Classifier GANs". arXiv(ICLR2017) code
- Paper from Google Brain.
2016-11-04 | [CV App] Leon Sixt et al. "RenderGAN: Generating Realistic Labeled Data". RenderGAN arXiv
2016-11-06 | [Theory] Shuangfei Zhai et al. "Generative Adversarial Networks as Variational Training of Energy Based Models". VGAN arXiv
- This paper proposes VGAN, which works by minimizing a variational lower bound of the negative log-likelihood (NLL) of an energy-based model (EBM), where the model density p(x) is approximated by a variational distribution q(x) that is easy to sample from.
- It is interesting that two papers on energy-based analyses of GANs were submitted so close to each other.
2016-11-06 | [Theory] Vittal Premachandran and Alan L. Yuille. "Unsupervised Learning Using Generative Adversarial Training And Clustering". ICLR2017 code
2016-11-07 | [Theory] Luke Metz et al. "Unrolled Generative Adversarial Networks". arXiv
2016-11-07 | [CV App] Yaniv Taigman et al. "Unsupervised Cross-Domain Image Generation". arXiv
2016-11-07 | [CV App] Mickaël Chen and Ludovic Denoyer. "Multi-view Generative Adversarial Networks". arXiv
2016-11-13 | [Theory] Xudong Mao et al. "Least Squares Generative Adversarial Networks". LSGANs arXiv code
- To overcome the vanishing gradients problem during the learning process, this paper proposes the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss function for the discriminator.
- It claims that, first, LSGANs are able to generate higher-quality images than regular GANs and, second, LSGANs are more stable during the learning process.
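A sketch of the least squares losses with 0/1 target coding (the paper also discusses other target codings), assuming PyTorch; `scores_*` are raw discriminator outputs:

```python
import torch
import torch.nn.functional as F

def lsgan_d_loss(scores_real, scores_fake):
    # Least squares loss: push real scores toward 1 and fake scores toward 0.
    return 0.5 * (F.mse_loss(scores_real, torch.ones_like(scores_real)) +
                  F.mse_loss(scores_fake, torch.zeros_like(scores_fake)))

def lsgan_g_loss(scores_fake):
    # The generator pushes fake scores toward the real target.
    return 0.5 * F.mse_loss(scores_fake, torch.ones_like(scores_fake))
```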
2016-11-18 | [Theory] Xi Chen et al. "InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets". InfoGAN arXiv code
- This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation.
- InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence/absence of eyeglasses, and emotions on the CelebA face dataset.
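A sketch of the mutual-information term for a categorical latent code, assuming PyTorch; `G` and the auxiliary head `Q` (which usually shares layers with the discriminator) are hypothetical modules:

```python
import torch
import torch.nn.functional as F

def info_loss(G, Q, z, c_idx, num_cat=10):
    # Q predicts the code c from the generated sample; this cross-entropy is
    # (up to a constant) a variational lower bound on the mutual information
    # between c and G([z, c]), minimized jointly with the generator loss.
    c_onehot = F.one_hot(c_idx, num_cat).float()
    fake = G(torch.cat([z, c_onehot], dim=1))
    return F.cross_entropy(Q(fake), c_idx)
```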
2016-11-18 | [Theory] Tarik Arici et al. "Associative Adversarial Networks". AANs arXiv
2016-11-19 | [CV App] Guim Perarnau et al. "Invertible Conditional GANs for image editing". IcGAN arXiv
2016-11-21 | [CV App] Phillip Isola et al. "Image-to-Image Translation with Conditional Adversarial Networks". CVPR 2017 code PytorchCode project
- Paper from the iGAN research group at UC Berkeley, a development of iGAN.
- This approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks: semantic labels → photo (trained on the Cityscapes dataset); architectural labels → photo (trained on the CMP Facades dataset); map → aerial photo (trained on data scraped from Google Maps); BW → color photos; edges → photo (binary edges generated using the HED edge detector); sketch → photo; day → night.
- The discriminator receives pairs consisting of the sketch and the real image as positive samples, and pairs consisting of the sketch and the fake image as negative samples.
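A sketch of that paired discriminator loss, assuming PyTorch and channel-wise concatenation of the conditioning image with the real or generated target; the names are illustrative:

```python
import torch
import torch.nn.functional as F

def paired_d_loss(D, cond, real, fake):
    logits_real = D(torch.cat([cond, real], dim=1))           # positive pair
    logits_fake = D(torch.cat([cond, fake.detach()], dim=1))  # negative pair
    return (F.binary_cross_entropy_with_logits(logits_real, torch.ones_like(logits_real)) +
            F.binary_cross_entropy_with_logits(logits_fake, torch.zeros_like(logits_fake)))
```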
2016-11-21 | [CV App] Masaki Saito and Eiichi Matsumoto. "Temporal Generative Adversarial Nets". TGAN arXiv
- The temporal generator G0 yields a set of latent variables from z0. The image generator G1 transforms them into video frames. The image discriminator D1 first extracts a feature vector from each frame. The temporal discriminator D0 exploits them and evaluates whether these frames are from the dataset or the generator.
2016-11-25 | [CV App] Pauline Luc et al. "Semantic Segmentation using Adversarial Networks". arXiv
2016-11-27 | [CV App] Arna Ghosh et al. "SAD-GAN: Synthetic Autonomous Driving using Generative Adversarial Networks". SAD-GAN arXiv
2016-11-29 | [Music App] Olof Mogren. "C-RNN-GAN: Continuous recurrent neural networks with adversarial training". C-RNN-GAN arXiv
- Both generator and discriminator are built by LSTM-Conv architectures. The generator receives a random variable sequence to generate fake music while the discriminator distinguishes the real and fake music samples.
2016-11-30 | [CV App] Anh Nguyen et al. "Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space". arXiv code
2016-12-07 | [Theory] Tong Che et al. "Mode Regularized Generative Adversarial Networks". arXiv code
- This paper argues that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training stuck or push probability mass in the wrong direction, towards that of higher concentration than that of the data generating distribution.
- It proposes a novel regularizer for the GAN training target. The basic idea is simple yet powerful: in addition to the gradient information provided by the discriminator, the generator is expected to take advantage of other similarity metrics with much more predictable behavior, such as the L2 norm.
- It designs a set of metrics to evaluate the generated samples in terms of both the diversity of modes and the distribution fairness of the probability mass. These metrics are shown to be more robust in judging complex generative models, including both well-trained and collapsed ones.
2016-12-10 | [CV App] Han Zhang et al. "StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks". StackGAN arXiv
2016-12-13 | [Theory] Daniel Jiwoong Im et al. "Generative Adversarial Parallelization". GAP arXiv code
- A framework in which many GANs or their variants are trained simultaneously, exchanging their discriminators, aiming to deal with the missing mode problem.
- At every iteration, each generator randomly chooses several discriminators to judge its fake outputs.
2016-12-13 | [Theory] Xun Huang et al. "Stacked Generative Adversarial Networks". SGAN arXiv
- The model consists of a top-down stack of GANs, each learned to generate lower-level representations conditioned on higher-level representations.
- A representation discriminator is introduced at each feature hierarchy to encourage the representation manifold of the generator to align with that of the bottom-up discriminative network, leveraging the powerful discriminative representations to guide the generative model.
- Unlike the original GAN that uses a single noise vector to represent all the variations, SGAN decomposes variations into multiple levels and gradually resolves uncertainties in the top-down generative process.
2017-01-04 | [CV App] Junting Pan et al. "SalGAN: Visual Saliency Prediction with Generative Adversarial Networks". SalGAN arXiv project
2017-01-09 | [Theory] Ilya Tolstikhin et al. "AdaGAN: Boosting Generative Models". AdaGAN arXiv
- Original GANs are notoriously hard to train and can suffer from the problem of missing modes (lack of variety), where the model is not able to produce examples in certain regions of the space. AdaGAN is an iterative procedure where at every step a new component is added to a mixture model by running a GAN algorithm on a reweighted sample. Such an incremental procedure is proven to converge to the true distribution in a finite number of steps if each step is optimal.
2017-01-17 | [Theory] Martin Arjovsky and Léon Bottou. "Towards Principled Methods for Training Generative Adversarial Networks". arXiv
- A theoretical analysis of the shortcomings of the original GAN, from the authors of WGAN.
2017-01-23 | [Theory] Guo-Jun Qi "Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities". LS-GAN arXiv code Generalized Loss-Sensitive GAN, GLS-GAN code
- Notice! Do not confuse Least Squares GANs (LSGANs) with Loss-Sensitive GANs (LS-GAN)!
- The proposed LS-GAN abandons learning a discriminator that uses a probability to characterize the likelihood of real samples. Instead, it builds a loss function to distinguish real and generated samples under the assumption that a real example should have a smaller loss than a generated sample.
- The theoretical analysis presents a regularity condition on the underlying data density, which allows a class of Lipschitz losses and generators to be used to model the LS-GAN. It relaxes the assumption that the classic GAN should have infinite modeling capacity to obtain a similar theoretical guarantee. The paper also proves that the Wasserstein GAN follows the Lipschitz constraint.
2017-01-26 | [Theory] Martin Arjovsky et al. "Wasserstein GAN". WGAN arXiv code
- This paper compares several distance measures, namely the Total Variation (TV) distance, the Kullback-Leibler (KL) divergence, the Jensen-Shannon (JS) divergence, and the Earth-Mover (EM, Wasserstein-1) distance, and follows the last one to formulate the criterion.
- The criterion is formulated as `E_real[D(x)] - E_fake[D(G(z))]`, which the critic maximizes.
- The paper recommends several training tricks, such as weight clipping and using RMSProp instead of momentum-based methods like Adam.
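A sketch of the critic objective and weight clipping, assuming PyTorch; the critic `D` outputs an unconstrained scalar (no sigmoid, no log):

```python
import torch

def critic_loss(D, real, fake):
    # Maximize E_real[D(x)] - E_fake[D(G(z))], i.e. minimize its negative.
    return -(D(real).mean() - D(fake.detach()).mean())

def clip_weights(D, c=0.01):
    # Weight clipping to (roughly) enforce the Lipschitz constraint.
    for p in D.parameters():
        p.data.clamp_(-c, c)

# RMSProp rather than a momentum-based optimizer, as the paper recommends:
# opt_D = torch.optim.RMSprop(D.parameters(), lr=5e-5)
```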
2017-01-26 | [CV App] Zhedong Zheng et al. "Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in vitro". arXiv code
2017-02-11 | [CV App] Wei Ren Tan et al. "ArtGAN: Artwork Synthesis with Conditional Categorical GANs". ArtGAN arXiv
2017-02-27 | [Theory] R Devon Hjelm et al. "Boundary-Seeking Generative Adversarial Networks". BS-GAN arXiv code
2017-02-27 | [CV App] Zhifei Zhang et al. "Age Progression/Regression by Conditional Adversarial Autoencoder". arXiv
- This paper proposes a conditional adversarial autoencoder (CAAE) that learns a face manifold, traversing on which smooth age progression and regression can be realized simultaneously.
- In CAAE, the face is first mapped to a latent vector through a convolutional encoder, and then the vector is projected to the face manifold conditional on age through a deconvolutional generator.
2017-03-06 | [Theory] Zhiming Zhou et al. "Generative Adversarial Nets with Labeled Data by Activation Maximization". AM-GAN arXiv
- This paper claims the current GAN model with labeled data still results in undesirable properties due to the overlay of the gradients from multiple classes.
- It argues that a better gradient should follow the intensity and direction that maximize each sample's activation on one and the only one class in each iteration, rather than weighted-averaging their gradients.
2017-03-07 | [Theory] Chongxuan Li et al. "Triple Generative Adversarial Nets". Triple-GAN arXiv
2017-03-15 | [CV App] Taeksoo Kim et al. "Learning to Discover Cross-Domain Relations with Generative Adversarial Networks". DiscoGAN arXiv code
- This work proposes a method based on generative adversarial networks that learns to discover relations between different domains (DiscoGAN).
- With DiscoGAN, many interesting tasks can be done, such as changing the hair color of input face images, generating shoes based on input bag styles, or generating a car facing the same direction as an input chair.
2017-03-17 | [CV App] Bo Dai et al. "Towards Diverse and Natural Image Descriptions via a Conditional GAN". arXiv
2017-03-23 | [ML App] Akshay Mehrotra and Ambedkar Dukkipati. "Generative Adversarial Residual Pairwise Networks for One Shot Learning". arXiv
- This paper uses generated data as a strong regularizer for the task of similarity matching and designs a novel network based on the GAN framework that shows improvements on the one-shot learning task.
2017-03-28 | [Speech] Santiago Pascual et al. "SEGAN: Speech Enhancement Generative Adversarial Network". SEGAN arXiv
2017-03-29 | [CV App] Kiana Ehsani, et al. "SeGAN: Segmenting and Generating the Invisible". SeGAN arXiv
2017-03-30 | [CV App] Jun-Yan Zhu et al. "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks". CycleGAN arXiv code project PytorchCode
- Paper from the iGAN and pix2pix research group at UC Berkeley.
2017-03-31 | [Theory] David Berthelot et al. "BEGAN: Boundary Equilibrium Generative Adversarial Networks". BEGAN arXiv code
2017-03-31 | [Music] Li-Chia Yang et al. "MidiNet: A Convolutional Generative Adversarial Network for Symbolic-domain Music Generation using 1D and 2D Condition". MidiNet arXiv
2017-03-31 | [Theory] Ishaan Gulrajani et al. "Improved Training of Wasserstein GANs" arXiv code
- This paper outlines the ways in which weight clipping in the discriminator can lead to pathological behavior which hurts stability and performance. Then, the paper proposes WGAN with gradient penalty, which does not suffer from the same issues, as an alternative.
- Training is shown to be very stable and fast.
- You can use Adam now! And batch normalization is no longer recommended in the discriminator, based on the paper.
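A sketch of the gradient penalty, assuming PyTorch and 4-D image batches; it pushes the critic's gradient norm toward 1 at points interpolated between real and fake samples:

```python
import torch

def gradient_penalty(D, real, fake, lambda_gp=10.0):
    # Random interpolation between real and fake samples (per example).
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    d_hat = D(x_hat)
    grads = torch.autograd.grad(outputs=d_hat, inputs=x_hat,
                                grad_outputs=torch.ones_like(d_hat),
                                create_graph=True)[0]
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()
```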
2017-04-07 | [CV App] Weidong Yin et al. "Semi-Latent GAN: Learning to generate and modify facial images from attributes". Semi-Latent GAN arXiv
2017-04-08 | [CV App] Zili Yi et al. "DualGAN: Unsupervised Dual Learning for Image-to-Image Translation". DualGAN arXiv code
2017-04-11 | [CV App] Xiaolong Wang et al. "A-Fast-RCNN: Hard Positive Generation via Adversary for Object Detection". A-Fast-RCNN arXiv(CVPR2017) code
2017-04-12 | [Theory] Ruohan Wang et al. "MAGAN: Margin Adaptation for Generative Adversarial Networks". MAGAN arXiv code
- This paper proposes a novel training procedure for Generative Adversarial Networks (GANs) to improve stability and performance by using an adaptive hinge loss objective function.
- A simple and robust training procedure that adapts the hinge loss margin based on training statistics. The dependence on the margin hyper-parameter is removed, and no new hyper-parameters are introduced to complicate training.
- A principled analysis of the effects of the hinge loss margin on auto-encoder GANs training.
2017-04-13 | [CV App] Rui Huang et al. "Beyond Face Rotation: Global and Local Perception GAN for Photorealistic and Identity Preserving Frontal View Synthesis". arXiv
2017-04-17 | [CV App] Bo Zhao et al. "Multi-View Image Generation from a Single-View". arXiv
2017-04-17 | [Theory] Felix Juefei-Xu et al. "Gang of GANs: Generative Adversarial Networks with Maximum Margin Ranking". GoGAN arXiv
- This work aims at improving on WGAN by first generalizing its discriminator loss to a margin-based one, which leads to a better discriminator and, in turn, a better generator, and then carrying out a progressive training paradigm involving multiple GANs contributing to the maximum margin ranking loss, so that the GAN at later stages improves upon earlier stages.
- The WGAN loss treats a gap of 10 or 1 equally and tries to increase the gap even further. The MGAN loss (Margin GAN, a WGAN with the margin-based discriminator loss proposed in the paper) focuses on increasing the separation of examples with a gap of 1 and leaves the samples with a separation of 10, which ensures a better discriminator and hence a better generator.
2017-04-19 | [CV App] Yijun Li et al. "Generative Face Completion". arXiv(CVPR2017) code
2017-04-19 | [CV App] Jan Hendrik Metzen et al. "Universal Adversarial Perturbations Against Semantic Image Segmentation". arXiv
2017-04-20 | [Theory] Min Lin. "Softmax GAN". Softmax GAN arXiv
- Softmax GAN is a novel variant of Generative Adversarial Network (GAN). The key idea of Softmax GAN is to replace the classification loss in the original GAN with a softmax cross-entropy loss in the sample space of one single batch.
2017-04-24 | [CV App] Hengyue Pan and Hui Jiang. "Supervised Adversarial Networks for Image Saliency Detection". arXiv
2017-05-02 | [CV App] Tseng-Hung Chen et al. "Show, Adapt and Tell: Adversarial Training of Cross-domain Image Captioner". arXiv
2017-05-06 | [CV App] Zhimin Chen and Yuguang Tong. "Face Super-Resolution Through Wasserstein GANs". arXiv
2017-05-08 | [CV App] Jae Hyun Lim and Jong Chul Ye. "Geometric GAN". arXiv
2017-05-08 | [CV App] Qiangeng Xu et al. "Generative Cooperative Net for Image Generation and Data Augmentation". arXiv
2017-05-09 | [Theory] Hyeungill Lee et al. "Generative Adversarial Trainer: Defense to Adversarial Perturbations with GAN". arXiv
2017-05-14 | [CV App] Shuchang Zhou et al. "GeneGAN: Learning Object Transfiguration and Attribute Subspace from Unpaired Data". GeneGAN arXiv
2017-05-24 | [Theory] Aditya Grover et al. "Flow-GAN: Bridging implicit and prescribed learning in generative models". Flow-GAN arXiv
2017-05-24 | [Theory] Shuang Liu et al. "Approximation and Convergence Properties of Generative Adversarial Learning". arXiv
2017-05-25 | [Theory] Mathieu Sinn and Ambrish Rawat. "Towards Consistency of Adversarial Training for Generative Models". arXiv
2017-05-27 | [Theory] Zihang Dai et al. "Good Semi-supervised Learning that Requires a Bad GAN". arXiv
2017-05-31 | [NLP] Sai Rajeswar et al. "Adversarial Generation of Natural Language". arXiv
- This paper gave rise to a "discussion" between Yoav Goldberg and Yann LeCun about arXiv and NLP research. See Yoav Goldberg's Medium post, Yann LeCun's Facebook post and Yoav Goldberg's response for details.
2017-06-02 | [Theory] Zhiting Hu et al. "On Unifying Deep Generative Models". arXiv
2017-06-05 | [NLP] Ofir Press et al. "Language Generation with Recurrent Generative Adversarial Networks without Pre-training". arXiv
2017-06-06 | [Medical] Yuan Xue et al. "SegAN: Adversarial Network with Multi-scale L1 Loss for Medical Image Segmentation". SegAN arXiv
2017-06-07 | [Theory] Swaminathan Gurumurthy et al. "DeLiGAN: Generative Adversarial Networks for Diverse and Limited Data". DeLiGAN CVPR 2017 code
CVPR 2017 | [Theory] Seyed-Mohsen Moosavi-Dezfooli et al. Universal Adversarial Perturbations. CVPR 2017
CVPR 2017 | [CV] Konstantinos Bousmalis et al. Unsupervised Pixel-Level Domain Adaptation With Generative Adversarial Networks. CVPR 2017
CVPR 2017 | [CV] Christian Ledig et al. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. CVPR 2017 Torch code
CVPR 2017 | [Theory] Xun Huang et al. Stacked Generative Adversarial Networks. CVPR 2017
CVPR 2017 | [CV] Jianan Li et al. Perceptual Generative Adversarial Networks for Small Object Detection. CVPR 2017
CVPR 2017 | [Theory] Ashish Shrivastava et al. Learning From Simulated and Unsupervised Images Through Adversarial Training. CVPR 2017
CVPR 2017 | [CV] Behrooz Mahasseni et al. Unsupervised Video Summarization With Adversarial LSTM Networks. CVPR 2017
CVPR 2017 | [CV] Carl Vondrick and Antonio Torralba. Generating the Future With Adversarial Transformers. CVPR 2017
CVPR 2017 | [CV] Yunchao Wei et al. Object Region Mining With Adversarial Erasing: A Simple Classification to Semantic Segmentation Approach. CVPR 2017
CVPR 2017 | [CV] Shiyu Huang and Deva Ramanan. Expecting the Unexpected: Training Detectors for Unusual Pedestrians With Adversarial Imposters. CVPR 2017
CVPR 2017 | [CV] VSR Veeravasarapu et al. Adversarially Tuned Scene Generation. CVPR 2017
CVPR 2017 | [CV] Xiaolong Wang et al. A-Fast-RCNN: Hard Positive Generation via Adversary for Object Detection. CVPR 2017
CVPR 2017 | [CV] Mengmi Zhang et al. Deep Future Gaze: Gaze Anticipation on Egocentric Videos Using Adversarial Networks. CVPR 2017
CVPR 2017 | [CV] Zhifei Zhang et al. Age Progression/Regression by Conditional Adversarial Autoencoder. CVPR 2017
CVPR 2017 | [CV] Takuhiro Kaneko et al. Generative Attribute Controller With Conditional Filtered Generative Adversarial Networks. CVPR 2017
CVPR 2017 | [Theory] Eric Tzeng et al. Adversarial Discriminative Domain Adaptation. CVPR 2017
ICCV 2017 | [CV] Yu Chen et al. Adversarial PoseNet: A Structure-Aware Convolutional Network for Human Pose Estimation. ICCV 2017
ICCV 2017 | [CV] Jun-Yan Zhu et al. Unpaired Image-To-Image Translation Using Cycle-Consistent Adversarial Networks. ICCV 2017
ICCV 2017 | [CV] Weiyue Wang et al. Shape Inpainting Using 3D Generative Adversarial Network and Recurrent Convolutional Networks. ICCV 2017
ICCV 2017 | [Theory] Xudong Mao et al. Least Squares Generative Adversarial Networks. LSGANs ICCV 2017
ICCV 2017 | [Theory] Masaki Saito et al. Temporal Generative Adversarial Nets With Singular Value Clipping. ICCV 2017
ICCV 2017 | [CV] Rakshith Shetty et al. Speaking the Same Language: Matching Machine to Human Captions by Adversarial Training. ICCV 2017
ICCV 2017 | [CV] Hsiao-Yu Fish Tung et al. Adversarial Inverse Graphics Networks: Learning 2D-To-3D Lifting and Image-To-Image Translation From Unpaired Supervision. ICCV 2017
ICCV 2017 | [CV] Vu Nguyen et al. Shadow Detection With Conditional Generative Adversarial Networks. ICCV 2017
ICCV 2017 | [CV] Leonardo Galteri et al. Deep Generative Adversarial Compression Artifact Removal. ICCV 2017
ICCV 2017 | [CV] Nasim Souly et al. Semi Supervised Semantic Segmentation Using Generative Adversarial Network. ICCV 2017
ICCV 2017 | [CV] Hao Dong et al. Semantic Image Synthesis via Adversarial Learning. ICCV 2017
ICCV 2017 | [CV] Han Zhang et al. StackGAN: Text to Photo-Realistic Image Synthesis With Stacked Generative Adversarial Networks. ICCV 2017
ICCV 2017 | [CV] Xiaodan Liang et al. Dual Motion GAN for Future-Flow Embedded Video Prediction. ICCV 2017
ICCV 2017 | [CV] Anton Osokin et al. GANs for Biological Image Synthesis. ICCV 2017
ICCV 2017 | [CV] Rui Huang et al. Beyond Face Rotation: Global and Local Perception GAN for Photorealistic and Identity Preserving Frontal View Synthesis. ICCV 2017
ICCV 2017 | [Theory] Jianmin Bao et al. CVAE-GAN: Fine-Grained Image Generation Through Asymmetric Training. ICCV 2017
ICCV 2017 | [CV] Zili Yi et al. DualGAN: Unsupervised Dual Learning for Image-To-Image Translation. DualGAN ICCV 2017
ICCV 2017 | [CV] Bo Dai et al. Towards Diverse and Natural Image Descriptions via a Conditional GAN. ICCV 2017
ICCV 2017 | [CV-Text] Xiaodan Liang et al. Recurrent Topic-Transition GAN for Visual Paragraph Generation. ICCV 2017
ICCV 2017 | [CV] Zhedong Zheng et al. Unlabeled Samples Generated by GAN Improve the Person Re-Identification Baseline in Vitro. ICCV 2017 code
ICCV 2017 | [CV] Kyle Olszewski et al. Realistic Dynamic Facial Textures From a Single Image Using GANs. ICCV 2017
AAAI 2018 | [CV] Lingxiao Song et al. "Adversarial Discriminative Heterogeneous Face Recognition".
AAAI 2018 | [ML] Sungrae Park et al. "Adversarial Dropout for Supervised and Semi-Supervised Learning".
AAAI 2018 | [ML] Quanyu Dai et al. "Adversarial Network Embedding".
AAAI 2018 | [CV] Bin Tong et al. "Adversarial Zero-shot Learning with Semantic Augmentation".
AAAI 2018 | [CV] Rui Zhao and Qiang Ji. "An Adversarial Hierarchical Hidden Markov Model for Human Pose Modeling and Generation".
AAAI 2018 | [NLP] Shiou Tian Hsu et al. "An Interpretable Generative Adversarial Approach to Classification of Latent Entity Relations in Unstructured Sentences".
AAAI 2018 | [CV] Yi Li et al. "Anti-Makeup: Learning A Bi-Level Adversarial Network for Makeup-Invariant Face Verification".
AAAI 2018 | [ML] Sima Behpour et al. "ARC: Adversarial Robust Cuts for Semi-Supervised and Multi-Label Classification".
AAAI 2018 | [CV] Jingkuan Song et al. "Binary Generative Adversarial Networks for Image Retrieval".
AAAI 2018 | [CV] Si Liu et al. "Cross-domain Human Parsing via Adversarial Feature and Label Adaptation".
AAAI 2018 | [ML] Aditya Grover et al. "Flow-GAN: Combining Maximum Likelihood and Adversarial Learning in Generative Models".
AAAI 2018 | [CV] Lingxiao Song et al. "Generative Adversarial Network based Heterogeneous Bibliographic Network Representation for Personalized Citation Recommendation".
AAAI 2018 | [ML] Hongwei Wang et al. "GraphGAN: Graph Representation Learning with Generative Adversarial Nets".
AAAI 2018 | [ML] Dmitry Ulyanov et al. "It Takes (Only) Two: Adversarial Generator-Encoder Networks".
AAAI 2018 | [CV] Jing Zhu et al. "Learning Adversarial 3D Model Generation With 2D Image Enhancer".
AAAI 2018 | [NLP] Jiaxian Guo et al. "Long Text Generation via Adversarial Training with Leaked Information".
AAAI 2018 | [Acoustic] Hao-Wen Dong et al. "MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment".
AAAI 2018 | [ML] Peter Henderson et al. "OptionGAN: Learning Joint Reward-Policy Options using Generative Adversarial Inverse Reinforcement Learning".
AAAI 2018 | [CV] Hongyu Ren et al. "RAN4IQA: Restorative Adversarial Nets for No-Reference Image Quality Assessment".
AAAI 2018 | [ML] Swami Sankaranarayanan et al. "Regularizing Deep Networks Using Efficient Layerwise Adversarial Training".
AAAI 2018 | [NLP-CV] Jing Wang et al. "Show, Reward and Tell: Automatic Generation of Narrative Paragraph from Photo Stream by Adversarial Training".
AAAI 2018 | [ML] Maya Kabkab et al. "Task-Aware Compressed Sensing with Generative Adversarial Networks".
AAAI 2018 | [ML] Zhangjie Cao et al. "Transfer Adversarial Hashing for Hamming Space Retrieval".
AAAI 2018 | [CV] Gaurav Goswami et al. "Unravelling Robustness of Deep Learning based Face Recognition Against Adversarial Attacks".
AAAI 2018 | [ML] Jian Zhang et al. "Unsupervised Generative Adversarial Cross-modal Hashing".
2018-05-21 | [Theory] Han Zhang, Ian Goodfellow et al. "Self-Attention Generative Adversarial Networks". SA-GAN arXiv
2018-09-28 | [Theory] Andrew Brock et al. "Large Scale GAN Training for High Fidelity Natural Image Synthesis". BigGAN arXiv Colab models