Evaluating Lossy Compression Rates of Deep Generative Models

The code accompanying the ICML paper: Evaluating Lossy Compression Rates of Deep Generative Models. This repo is released as-is and will not be maintained in the future.

Authors: Sicong Huang*, Alireza Makhzani*, Yanshuai Cao, Roger Grosse (*Equal contribution)

Citing this work

@inproceedings{huang2020rd,
  title={Evaluating Lossy Compression Rates of Deep Generative Models},
  author={Huang, Sicong and Makhzani, Alireza and Cao, Yanshuai and Grosse, Roger},
  booktitle={ICML},
  year={2020}
}

Running this code

Dependencies are listed in requirements.txt. Lite tracer can be found here.

There are only two argparse arguments; everything else is configured through the Hparam object described below.

The configuration for each experiment is defined by an Hparam object registered in rate_distortion/hparams. The default value for an undefined field is None. The Hparam object is hierarchical and compositional for modularity.
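
As a rough illustration of this behavior (the Hparam class sketched here is hypothetical, not the codebase's actual implementation), a hierarchical configuration might look like:

# A minimal sketch of a hierarchical, compositional Hparam-style object.
# Illustrative only; the real class lives under rate_distortion/hparams.
class Hparam:
    def __init__(self, **fields):
        self.__dict__.update(fields)

    def __getattr__(self, name):
        return None  # undefined fields default to None

# Sub-hparams compose into a parent Hparam for modularity.
dataset = Hparam(name="mnist", batch_size=100)      # hypothetical fields
experiment = Hparam(dataset=dataset, n_epochs=1000)

assert experiment.dataset.name == "mnist"
assert experiment.unset_field is None  # undefined -> None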

This codebase has a self-contained system for keeping track of checkpoints and outputs based on the Hparam object. To load a checkpoint from another experiment registered in the codebase, set load_hparam_name to the name of a registered hparam_set. If the model you want to test was not trained with this codebase, simply set specific_model_path to the path of your decoder weights.
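
For example (the experiment name and path below are placeholders; my_hparam stands for the Hparam object registered for your experiment):

# Reuse the checkpoint of another experiment registered in the codebase:
my_hparam.load_hparam_name = "mnist_vae"  # hypothetical registered hparam_set

# ...or, for a model trained outside this codebase:
my_hparam.specific_model_path = "/path/to/decoder_weights.pt"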

Reproducing our results

Test your own generative models

The codebase is also modularized for testing your own decoder-based generative models. Register your model under rate_distortion/models/user_models, and register the Hparam object at rate_distortion/hparams/user_models. Your model should expose its decoder variance as model.x_logvar, a scalar or vector tensor. Set specific_model_path to the path of your decoder weights. A sketch of such a model follows.
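
Only the model.x_logvar requirement comes from the description above; the class structure and field sizes below are illustrative:

import torch
import torch.nn as nn

class MyDecoder(nn.Module):
    # Hypothetical decoder-based generative model; structure is illustrative.
    def __init__(self, z_dim=100, x_dim=784):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 400), nn.ReLU(),
                                 nn.Linear(400, x_dim))
        # Required by the codebase: decoder variance as a scalar (or vector) tensor.
        self.x_logvar = nn.Parameter(torch.zeros(1))

    def forward(self, z):
        return self.net(z)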

PyTorch:

If the generative model is trained in PyTorch, the checkpoint should contain the key "state_dict" holding the weights of the model.
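
Concretely, a compatible checkpoint could be written like this (the file name is arbitrary; model is your trained decoder):

import torch

# Store the weights under the "state_dict" key so the codebase can find them.
torch.save({"state_dict": model.state_dict()}, "my_decoder.pt")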

Others:

If the generative model is trained in another framework, you'll need to manually bridge and load the weights. For example, the AAEs were trained in TensorFlow, with the weights saved as NumPy arrays and then loaded as nn.Parameter in PyTorch. Refer to rate_distortion/utils/aae_utils for more details.
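
A rough sketch of such a bridge (the file names and shapes are invented; see the utility module referenced above for the actual logic):

import numpy as np
import torch
import torch.nn as nn

# Weights exported from TensorFlow as NumPy arrays, re-wrapped for PyTorch.
w = np.load("decoder_layer0_kernel.npy")  # TF dense kernel, shape (in, out)
b = np.load("decoder_layer0_bias.npy")

layer = nn.Linear(w.shape[0], w.shape[1])
layer.weight = nn.Parameter(torch.from_numpy(w.T).float())  # PyTorch stores (out, in)
layer.bias = nn.Parameter(torch.from_numpy(b).float())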

Detailed Experimental Settings

More details on how to control experimental settings can be found below.

General configuration:

Sub-hparams:

rd sub-Hparam:

model_train sub-Hparam:

dataset sub-Hparam:

The rest: normally, the settings below do not need to be changed.
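
To make the hierarchy concrete, a composed configuration might look like the sketch below, reusing the illustrative Hparam class from earlier (the sub-hparam names rd, model_train, and dataset come from the headings above; every field inside them is hypothetical):

# Illustrative composition of the sub-hparams named above.
rd = Hparam(n_chains=16, n_temperatures=500)    # rate-distortion settings (hypothetical)
model_train = Hparam(epochs=1000, lr=1e-4)      # training settings (hypothetical)
dataset = Hparam(name="mnist", batch_size=100)  # data settings (hypothetical)

experiment = Hparam(rd=rd, model_train=model_train, dataset=dataset)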