
Deep Planning Network

Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, James Davidson

PlaNet policies and predictions

This project provides the open source implementation of the PlaNet agent introduced in Learning Latent Dynamics for Planning from Pixels. PlaNet is a purely model-based reinforcement learning algorithm that solves control tasks from images by efficient planning in a learned latent space. PlaNet competes with top model-free methods in terms of final performance and training time while using substantially less interaction with the environment.

If you find this open source release useful, please cite it in your paper:

@inproceedings{hafner2019planet,
  title={Learning Latent Dynamics for Planning from Pixels},
  author={Hafner, Danijar and Lillicrap, Timothy and Fischer, Ian and Villegas, Ruben and Ha, David and Lee, Honglak and Davidson, James},
  booktitle={International Conference on Machine Learning},
  pages={2555--2565},
  year={2019}
}

Method

PlaNet model diagram

PlaNet models the world as a compact sequence of hidden states. For planning, we first encode the history of past images into the current state. From there, we efficiently predict future rewards for multiple action sequences in latent space. We execute the first action of the best sequence found and replan after observing the next image.
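The planning loop described above can be sketched with the cross-entropy method (CEM), the search procedure PlaNet uses. The function names, toy `dynamics`/`reward_fn` interfaces, and hyperparameter defaults below are illustrative assumptions, not the repository's API; the real agent plans in the learned RSSM latent space:

```python
import numpy as np

def cem_plan(state, dynamics, reward_fn, action_size,
             horizon=12, iterations=10, candidates=1000, top_k=100):
    """Sketch of latent-space planning with the cross-entropy method.

    Hypothetical interface for illustration, not the PlaNet codebase.
    `dynamics(state, action)` predicts the next latent state and
    `reward_fn(state)` predicts the reward, so no images are generated.
    """
    mean = np.zeros((horizon, action_size))
    std = np.ones((horizon, action_size))
    for _ in range(iterations):
        # Sample candidate action sequences from the current Gaussian belief.
        actions = mean + std * np.random.randn(candidates, horizon, action_size)
        returns = np.zeros(candidates)
        for i in range(candidates):
            s = state
            for t in range(horizon):
                s = dynamics(s, actions[i, t])  # roll forward in latent space
                returns[i] += reward_fn(s)      # accumulate predicted reward
        # Refit the Gaussian to the best-performing sequences.
        elite = actions[np.argsort(returns)[-top_k:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0)
    return mean[0]  # execute only the first action, then replan
```

Because only the first action is executed before replanning, the agent can react to each new observation while still benefiting from multi-step lookahead.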


Instructions

To train an agent, install the dependencies and then run:

python3 -m planet.scripts.train --logdir /path/to/logdir --params '{tasks: [cheetah_run]}'

The code prints nan as the score for iterations during which no summaries were computed.

The available tasks are listed in scripts/tasks.py. The default parameters can be found in scripts/configs.py. To run the experiments from our paper, add the following parameters to the --params dictionary in addition to the list of tasks:

| Experiment | Parameters |
| --- | --- |
| PlaNet | No additional parameters. |
| Random data collection | planner_iterations: 0, train_action_noise: 1.0 |
| Purely deterministic | mean_only: True, divergence_scale: 0.0 |
| Purely stochastic | model: ssm |
| One agent all tasks | collect_every: 30000 |
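These overrides are merged with the task list into the single YAML-like dictionary that --params expects. A small sketch of how such a command could be assembled (the build_train_command helper and its formatting are illustrative assumptions, not part of the repository):

```python
def build_train_command(logdir, tasks, overrides=None):
    """Build a train command, merging experiment overrides into --params.

    Hypothetical helper for illustration, not part of the PlaNet codebase.
    """
    params = {"tasks": list(tasks)}
    params.update(overrides or {})

    def fmt(value):
        # Render lists as [a, b] and everything else as-is, matching the
        # YAML-like style of '{tasks: [cheetah_run]}'.
        return "[" + ", ".join(value) + "]" if isinstance(value, list) else str(value)

    body = ", ".join(f"{key}: {fmt(value)}" for key, value in params.items())
    return f"python3 -m planet.scripts.train --logdir {logdir} --params '{{{body}}}'"
```

For example, build_train_command("/path/to/logdir", ["cheetah_run"], {"planner_iterations": 0, "train_action_noise": 1.0}) would produce the command for the random-data-collection ablation.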

Please note that the agent has received some improvements since the paper, so the results may differ slightly.

Modifications

These are good places to start when modifying the code:

| Directory | Description |
| --- | --- |
| scripts/configs.py | Add new parameters or change defaults. |
| scripts/tasks.py | Add or modify environments. |
| models | Add or modify latent transition models. |
| networks | Add or modify encoder and decoder networks. |

Tips for development:

Dependencies

The code was tested under Ubuntu 18 and uses these packages:

Disclaimer: This is not an official Google product.