Baby A3C: solving Atari environments in 180 lines

Sam Greydanus | October 2017 | MIT License

Results after training on 40M frames:

(Animated results: breakout-v4.gif, pong-v4.gif, spaceinvaders-v4.gif)

Usage

If you're working on OpenAI's Breakout-v4 environment:
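The exact command is not shown on this page, so treat the following as a sketch: it assumes the single-file script is named baby-a3c.py and accepts an --env flag, which matches the repo's description but is not confirmed here.

```shell
# Hypothetical invocation: the script name (baby-a3c.py) and the --env flag
# are assumptions, not confirmed by this page.
python baby-a3c.py --env Breakout-v4
```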

About

Make things as simple as possible, but not simpler.

Frustrated by the number of deep RL implementations that are clunky and opaque? In this repo, I've stripped a high-performance A3C model down to its bare essentials. Everything you'll need is contained in 180 lines...

|  | Breakout-v4 | Pong-v4 | SpaceInvaders-v4 |
| --- | --- | --- | --- |
| Mean episode rewards @ 40M frames* | 140 ± 20 | 18.2 ± 1 | 470 ± 30 |
| Mean episode rewards @ 80M frames* | 190 ± 20 | 17.9 ± 1 | 550 ± 30 |

*same (default) hyperparameters across all environments

Architecture

# inside the policy network's __init__; torch.nn is imported as nn, and
# channels, memsize, num_actions are constructor arguments
self.conv1 = nn.Conv2d(channels, 32, 3, stride=2, padding=1)
self.conv2 = nn.Conv2d(32, 32, 3, stride=2, padding=1)
self.conv3 = nn.Conv2d(32, 32, 3, stride=2, padding=1)
self.conv4 = nn.Conv2d(32, 32, 3, stride=2, padding=1)
self.gru = nn.GRUCell(32 * 5 * 5, memsize)  # *see below
self.critic_linear, self.actor_linear = nn.Linear(memsize, 1), nn.Linear(memsize, num_actions)

*we use a GRU cell because it has fewer parameters, maintains one memory vector instead of two, and matches the performance of an LSTM cell.
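Both the 32 * 5 * 5 input size of the GRU and the "fewer parameters" claim can be checked with a little arithmetic. The sketch below assumes an 80x80 preprocessed input frame and memsize = 256 (neither value is stated on this page), and uses PyTorch's parameter layout for GRUCell (3 gate matrices) versus LSTMCell (4):

```python
def conv_out(n, k=3, s=2, p=1):
    # standard conv output-size formula: floor((n + 2p - k) / s) + 1
    return (n + 2 * p - k) // s + 1

# four stride-2 convs halve the spatial size each time, hence 32 * 5 * 5 above
# (the 80x80 input resolution is an assumption, not stated in this README)
sizes = [80]
for _ in range(4):
    sizes.append(conv_out(sizes[-1]))
print(sizes)  # [80, 40, 20, 10, 5]

def gru_cell_params(input_size, hidden_size):
    # PyTorch GRUCell: weight_ih (3h x in), weight_hh (3h x h), two biases (3h each)
    return 3 * hidden_size * (input_size + hidden_size + 2)

def lstm_cell_params(input_size, hidden_size):
    # PyTorch LSTMCell stores 4 gate matrices/biases instead of 3
    return 4 * hidden_size * (input_size + hidden_size + 2)

# with the assumed memsize = 256, the GRU cell is exactly 25% smaller
print(gru_cell_params(32 * 5 * 5, 256))   # 812544
print(lstm_cell_params(32 * 5 * 5, 256))  # 1083392
```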

Environments that work

(Use pip freeze to check your environment settings)
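For example, to narrow the freeze output to the packages most likely to matter here (torch, gym, and numpy are educated guesses at the dependency list, not a confirmed set):

```shell
# filter installed-package versions down to the RL-relevant ones
pip freeze | grep -i -E 'torch|gym|numpy'
```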

Known issues