Learning to Simulate Dynamic Environments with GameGAN

PyTorch code for GameGAN

<div> <img src="./assets/pacman_bandai_small.gif" width="240px" height="240px" style="float:left;"> <img src="./assets/pacman_single.gif" width="240px" height="240px" style="float:left;"> <img src="./assets/vizdoom_small.gif" width="240px" height="240px"> </div>

Learning to Simulate Dynamic Environments with GameGAN
Seung Wook Kim, Yuhao Zhou, Jonah Philion, Antonio Torralba, Sanja Fidler
CVPR, 2020
[Paper] [Project Page] [Video]

Abstract: Simulation is a crucial component of any robotic system. In order to simulate correctly, we need to write complex rules of the environment: how dynamic agents behave, and how the actions of each of the agents affect the behavior of others. In this paper, we aim to learn a simulator by simply watching an agent interact with an environment. We focus on graphics games as a proxy of the real environment. We introduce GameGAN, a generative model that learns to visually imitate a desired game by ingesting screenplay and keyboard actions during training. Given a key pressed by the agent, GameGAN "renders" the next screen using a carefully designed generative adversarial network. Our approach offers key advantages over existing work: we design a memory module that builds an internal map of the environment, allowing the agent to return to previously visited locations with high visual consistency. In addition, GameGAN is able to disentangle static and dynamic components within an image, making the behavior of the model more interpretable and relevant for downstream tasks that require explicit reasoning over dynamic elements. This enables many interesting applications, such as swapping different components of the game to build new games that do not exist.
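
As a rough illustration of the loop described above, here is a minimal PyTorch sketch of one action-conditioned generation step with an attention-based memory read. All names, dimensions, and the memory addressing scheme (`GameGANSketch`, `hidden_dim`, the linear renderer) are simplifications invented for exposition; this is not the architecture from the paper or this repository, and the memory write step is omitted for brevity.

```python
# Illustrative sketch of a GameGAN-style generation step (not the authors' code).
import torch
import torch.nn as nn

class GameGANSketch(nn.Module):
    def __init__(self, action_dim=5, hidden_dim=512, mem_slots=64, img_size=64):
        super().__init__()
        # Dynamics engine: updates the hidden state from the previous state,
        # the current action, and the memory read-out.
        self.dynamics = nn.GRUCell(action_dim + hidden_dim, hidden_dim)
        # External memory: a store the model can read from so that
        # revisited locations stay visually consistent (writes omitted here).
        self.memory = nn.Parameter(torch.randn(mem_slots, hidden_dim) * 0.01)
        self.read_key = nn.Linear(hidden_dim, hidden_dim)
        # Rendering engine: decodes the hidden state into the next frame.
        self.render = nn.Sequential(
            nn.Linear(hidden_dim, 3 * img_size * img_size),
            nn.Tanh(),
        )
        self.img_size = img_size

    def forward(self, h, action):
        # Content-based (attention-style) memory read.
        attn = torch.softmax(self.read_key(h) @ self.memory.t(), dim=-1)
        m = attn @ self.memory
        # Advance the hidden state given the key press and memory read-out.
        h = self.dynamics(torch.cat([action, m], dim=-1), h)
        # "Render" the next screen.
        frame = self.render(h).view(-1, 3, self.img_size, self.img_size)
        return h, frame

model = GameGANSketch()
h = torch.zeros(1, 512)
a = torch.zeros(1, 5); a[0, 0] = 1.0  # one-hot "key press"
h, frame = model(h, a)
print(frame.shape)  # torch.Size([1, 3, 64, 64])
```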

For business inquiries, please visit our website and submit the form: NVIDIA Research Licensing

Citation

@inproceedings{kim2020learning,
  title={Learning to Simulate Dynamic Environments with GameGAN},
  author={Kim, Seung Wook and Zhou, Yuhao and Philion, Jonah and Torralba, Antonio and Fidler, Sanja},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={1231--1240},
  year={2020}
}

Environment Setup

This codebase is tested on Ubuntu 18.04 with Python 3.8.3.

git clone https://github.com/nv-tlabs/GameGAN_code.git
cd GameGAN_code
pip install -r requirements.txt

Dataset extraction

Please clone and follow https://github.com/hardmaru/WorldModelsExperiments/tree/master/doomrnn to install the VizDoom environment.

cp data/extract.py DOOMRNN_DIR/
cp data/extract_data.sh DOOMRNN_DIR/
cd DOOMRNN_DIR
./extract_data.sh
cd GameGAN_code/data
python dataloader.py DOOMRNN_DIR/vizdoom_skip3 vizdoom
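
Before training, it can be worth sanity-checking one extracted episode. The snippet below is a sketch: the file name and the `obs`/`action` keys are assumptions based on the WorldModels-style extraction scripts, so adjust them to whatever `./extract_data.sh` actually writes.

```python
# Sanity-check one extracted episode (file name and array keys are
# assumptions; adjust to the actual output of ./extract_data.sh).
import numpy as np

episode = np.load('DOOMRNN_DIR/vizdoom_skip3/episode_0.npz')
obs, actions = episode['obs'], episode['action']
print(obs.shape, obs.dtype)   # e.g. (T, 64, 64, 3), uint8 frames
print(actions.shape)          # e.g. (T,) or (T, action_dim)
assert len(obs) == len(actions), 'frames and actions should be paired'
```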

For custom datasets, you can construct .npy files that contain sequences of image and action pairs and define a dataloader similar to 'class vizdoom_dataset'. Please refer to 'data/dataloader.py'. A minimal sketch is shown below.
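
The following sketch assumes 64x64 RGB frames and discrete actions; the file names, shapes, and normalization are illustrative assumptions, not the repository's exact interface.

```python
# Minimal sketch of a custom dataset in the spirit of `class vizdoom_dataset`
# (names and shapes are illustrative, not the repository's exact interface).
import numpy as np
import torch
from torch.utils.data import Dataset

# Step 1: save each recorded episode as paired image/action arrays.
frames = np.random.randint(0, 256, (100, 64, 64, 3), dtype=np.uint8)  # dummy episode
actions = np.random.randint(0, 5, (100,))
np.save('episode_0_images.npy', frames)
np.save('episode_0_actions.npy', actions)

# Step 2: a Dataset that yields fixed-length (image, action) subsequences.
class CustomGameDataset(Dataset):
    def __init__(self, image_file, action_file, seq_len=32):
        self.images = np.load(image_file)
        self.actions = np.load(action_file)
        self.seq_len = seq_len

    def __len__(self):
        return len(self.images) - self.seq_len

    def __getitem__(self, i):
        imgs = self.images[i:i + self.seq_len]
        # Scale uint8 frames to [-1, 1] and move channels first (NCHW).
        imgs = torch.from_numpy(imgs).float().permute(0, 3, 1, 2) / 127.5 - 1.0
        acts = torch.from_numpy(self.actions[i:i + self.seq_len]).long()
        return imgs, acts

ds = CustomGameDataset('episode_0_images.npy', 'episode_0_actions.npy')
imgs, acts = ds[0]
print(imgs.shape, acts.shape)  # torch.Size([32, 3, 64, 64]) torch.Size([32])
```

Wrapping such a dataset in a torch.utils.data.DataLoader then yields fixed-length batches of image/action sequences for training.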

Note: the above repository is deprecated, and the VizDoom environment might not run correctly on certain systems. In that case, you can use the Docker installation of https://github.com/zacwellmer/WorldModels and copy the extraction scripts into the Docker environment.

Training

We provide training scripts in './scripts'.

./scripts/vizdoom_multi.sh
./scripts/vizdoom_single.sh

Monitoring

tensorboard --port=PORT --logdir=./results
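
Replace PORT with any free port; for example, tensorboard --port=6006 --logdir=./results serves the dashboard at http://localhost:6006, where the logs written under ./results can be monitored during training.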

Tips