# OpenAI Lab
<p align="center"><b><a href="https://github.com/kengz/SLM-Lab">NOTICE: Please use the next version, SLM-Lab.</a></b></p>
<p align="center"><b><a href="http://kengz.me/openai_lab">OpenAI Lab Documentation</a></b></p>
An experimentation framework for Reinforcement Learning using OpenAI Gym, TensorFlow, and Keras.
OpenAI Lab is created to do Reinforcement Learning (RL) like science: theorize, then experiment. It provides an easy interface to OpenAI Gym and Keras, with an automated experimentation and evaluation framework.
## Features
- Unified RL environment and agent interface using OpenAI Gym, TensorFlow, and Keras, so you can focus on developing the algorithms.
- Core RL algorithm implementations, with reusable modular components for developing deep RL algorithms.
- An experimentation framework for running hundreds of trials of hyperparameter optimization, with logs, plots, and analytics for testing new RL algorithms. Experiment settings are stored in standardized JSON specs for reproducibility and comparison (see the example spec after this list).
- Automated analytics of the experiments to evaluate the RL agents and environments, and to help pick the best solution.
- The Fitness Matrix, a table of the best scores of RL algorithms vs. environments; useful for research.
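For example, an experiment spec is a JSON entry keyed by experiment name, declaring the environment, the agent's components, default parameters, and the parameter ranges to search over. The keys and values below sketch the format and are illustrative, not copied from the repo:

```json
{
  "dqn": {
    "problem": "CartPole-v0",
    "Agent": "DQN",
    "HyperOptimizer": "GridSearch",
    "Memory": "LinearMemoryWithForgetting",
    "Policy": "BoltzmannPolicy",
    "PreProcessor": "NoPreProcessor",
    "param": {
      "lr": 0.02,
      "gamma": 0.99,
      "hidden_layers": [32]
    },
    "param_range": {
      "lr": [0.001, 0.005, 0.01, 0.02],
      "gamma": [0.95, 0.97, 0.99, 0.999]
    }
  }
}
```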
With OpenAI Lab, we can focus on researching the essential elements of reinforcement learning, such as the algorithm, policy, memory, and parameter tuning. It allows us to build agents efficiently by reusing existing components and implementing new research ideas. We can then test research hypotheses systematically by running experiments.
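To make "existing components" concrete, here is a minimal sketch of the kind of building block an agent composes: a Keras Q-network for CartPole-v0. The function name and layer sizes are illustrative, not the Lab's internal API:

```python
from keras.models import Sequential
from keras.layers import Dense

def build_q_network(state_dim=4, num_actions=2, hidden_units=32):
    """Q-network mapping a state vector to per-action Q-values."""
    model = Sequential([
        Dense(hidden_units, activation='sigmoid', input_shape=(state_dim,)),
        Dense(num_actions, activation='linear'),
    ])
    model.compile(optimizer='adam', loss='mse')
    return model
```

A component like this can be swapped out (layer sizes, activations, optimizer) without touching the surrounding agent, memory, or policy code.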
Read more about the research problems the Lab addresses in Motivations. Ultimately, the Lab is a generalized framework for doing reinforcement learning, agnostic of OpenAI Gym and Keras; e.g., PyTorch-based implementations are on the roadmap.
## Implemented Algorithms
A list of the core RL algorithms, implemented or planned.
To see their scores against OpenAI Gym environments, go to the Fitness Matrix.
Algorithm | Implementation | Eval Score (pending) |
---|---|---|
DQN | DQN | - |
Double DQN | DoubleDQN | - |
Dueling DQN | - | - |
Sarsa | DeepSarsa | - |
Off-Policy Sarsa | OffPolicySarsa | - |
PER (Prioritized Experience Replay) | PrioritizedExperienceReplay | - |
CEM (Cross Entropy Method) | next | - |
REINFORCE | - | - |
DPG (Deterministic Policy Gradient) off-policy actor-critic | ActorCritic | - |
DDPG (Deep-DPG) actor-critic with target networks | DDPG | - |
A3C (asynchronous advantage actor-critic) | - | - |
Dyna | next | - |
TRPO | - | - |
Q*(lambda) | - | - |
Retrace(lambda) | - | - |
Neural Episodic Control (NEC) | - | - |
EWC (Elastic Weight Consolidation) | - | - |
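As a quick reference for one entry above: Double DQN reduces Q-value overestimation by selecting the next action with the online network and evaluating it with the target network. A minimal numpy sketch of the target computation (the function name and array layout are our own, not the Lab's code):

```python
import numpy as np

def double_dqn_targets(rewards, dones, q_next_online, q_next_target, gamma=0.99):
    """Double DQN targets: select argmax actions with the online net's
    Q-values, evaluate them with the target net's Q-values."""
    best_actions = np.argmax(q_next_online, axis=1)                      # selection
    next_values = q_next_target[np.arange(len(rewards)), best_actions]  # evaluation
    return rewards + gamma * (1.0 - dones) * next_values                 # no bootstrap at terminals
```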
## Run the Lab
Next, see Installation and jump to Quickstart.
<div style="max-width: 100%"><img alt="Timelapse of OpenAI Lab" src="http://kengz.me/openai_lab/images/lab_demo_dqn.gif" /></div>

*Timelapse of OpenAI Lab, solving CartPole-v0.*