<div align="center"><img src="https://raw.githubusercontent.com/chainer/chainerrl/master/assets/ChainerRL.png" width="400"/></div>

# ChainerRL and PFRL
ChainerRL (this repository) is a deep reinforcement learning library that implements various state-of-the-art deep reinforcement learning algorithms in Python using Chainer, a flexible deep learning framework. PFRL is the PyTorch analog of ChainerRL.
## Installation
ChainerRL is tested with Python 3.6. For other requirements, see `requirements.txt`.
ChainerRL can be installed via PyPI:
`pip install chainerrl`
It can also be installed from the source code:
`python setup.py install`
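After installing, a quick sanity check (a minimal sketch; the printed version depends on what you installed) is to import the package:

```python
# Quick sanity check that ChainerRL is importable after installation.
import chainerrl
print(chainerrl.__version__)
```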
Refer to the Installation documentation for more information.
## Getting started
You can try the ChainerRL Quickstart Guide first, or check the examples prepared for Atari 2600 and OpenAI Gym.
For more information, you can refer to ChainerRL's documentation.
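As a taste of the API, below is a minimal sketch of training a Double DQN agent on CartPole with the plain agent interface, in the spirit of the Quickstart Guide; the hyperparameter values are purely illustrative rather than a recommended configuration.

```python
import chainer
import gym
import numpy as np

import chainerrl

env = gym.make('CartPole-v0')
obs_size = env.observation_space.shape[0]
n_actions = env.action_space.n

# Fully connected Q-function shipped with ChainerRL.
q_func = chainerrl.q_functions.FCStateQFunctionWithDiscreteAction(
    obs_size, n_actions, n_hidden_channels=50, n_hidden_layers=2)

optimizer = chainer.optimizers.Adam(eps=1e-2)
optimizer.setup(q_func)

# Epsilon-greedy exploration and a replay buffer.
explorer = chainerrl.explorers.ConstantEpsilonGreedy(
    epsilon=0.3, random_action_func=env.action_space.sample)
replay_buffer = chainerrl.replay_buffer.ReplayBuffer(capacity=10 ** 6)

agent = chainerrl.agents.DoubleDQN(
    q_func, optimizer, replay_buffer, gamma=0.95, explorer=explorer,
    replay_start_size=500, target_update_interval=100,
    phi=lambda x: x.astype(np.float32, copy=False))

# Plain training loop built on the agent interface.
for episode in range(200):
    obs = env.reset()
    reward = 0.0
    done = False
    t = 0
    while not done and t < 200:
        action = agent.act_and_train(obs, reward)
        obs, reward, done, _ = env.step(action)
        t += 1
    agent.stop_episode_and_train(obs, reward, done)

agent.save('agent')  # writes the agent's parameters to the 'agent' directory
```

`chainerrl.experiments` also provides ready-made training loops such as `train_agent_with_evaluation`, which handle evaluation and logging and are used throughout the examples.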
## Algorithms
Algorithm | Discrete Action | Continuous Action | Recurrent Model | Batch Training | CPU Async Training |
---|---|---|---|---|---|
DQN (including DoubleDQN etc.) | ✓ | ✓ (NAF) | ✓ | ✓ | x |
Categorical DQN | ✓ | x | ✓ | ✓ | x |
Rainbow | ✓ | x | ✓ | ✓ | x |
IQN | ✓ | x | ✓ | ✓ | x |
DDPG | x | ✓ | ✓ | ✓ | x |
A3C | ✓ | ✓ | ✓ | ✓ (A2C) | ✓ |
ACER | ✓ | ✓ | ✓ | x | ✓ |
NSQ (N-step Q-learning) | ✓ | ✓ (NAF) | ✓ | x | ✓ |
PCL (Path Consistency Learning) | ✓ | ✓ | ✓ | x | ✓ |
PPO | ✓ | ✓ | ✓ | ✓ | x |
TRPO | ✓ | ✓ | ✓ | ✓ | x |
TD3 | x | ✓ | x | ✓ | x |
SAC | x | ✓ | x | ✓ | x |
The following algorithms have been implemented in ChainerRL:
- A2C (Synchronous variant of A3C)
- examples: [atari (batched)] [general gym (batched)]
- A3C (Asynchronous Advantage Actor-Critic)
- examples: [atari reproduction] [atari] [general gym]
- ACER (Actor-Critic with Experience Replay)
- examples: [atari] [general gym]
- Asynchronous N-step Q-learning
- examples: [atari]
- Categorical DQN
- examples: [atari] [general gym]
- DQN (Deep Q-Network) (including Double DQN, Persistent Advantage Learning (PAL), Double PAL, Dynamic Policy Programming (DPP))
- DDPG (Deep Deterministic Policy Gradients) (including SVG(0))
- examples: [mujoco reproduction] [mujoco] [mujoco (batched)]
- IQN (Implicit Quantile Networks)
- examples: [atari reproduction] [general gym]
- PCL (Path Consistency Learning)
- examples: [general gym]
- PPO (Proximal Policy Optimization)
- Rainbow
- examples: [atari reproduction]
- REINFORCE
- examples: [general gym]
- SAC (Soft Actor-Critic)
- examples: [mujoco reproduction]
- TRPO (Trust Region Policy Optimization) with GAE (Generalized Advantage Estimation)
- examples: [mujoco]
- TD3 (Twin Delayed Deep Deterministic policy gradient algorithm)
- examples: [mujoco reproduction]
The following useful techniques have also been implemented in ChainerRL (a usage sketch for prioritized experience replay follows this list):
- NoisyNet
- examples: [Rainbow] [DQN/DoubleDQN/PAL]
- Prioritized Experience Replay
- examples: [Rainbow] [DQN/DoubleDQN/PAL]
- Dueling Network
- examples: [Rainbow] [DQN/DoubleDQN/PAL]
- Normalized Advantage Function
- examples: [DQN] (for continuous-action envs only)
- Deep Recurrent Q-Network
- examples: [DQN]
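Many of these techniques are enabled by swapping a single component when an agent is constructed. As an illustrative sketch (not a tuned configuration), prioritized experience replay amounts to passing a `PrioritizedReplayBuffer` where a plain `ReplayBuffer` would otherwise be used:

```python
import chainerrl

# Prioritized experience replay: a prioritized buffer in place of the plain one.
replay_buffer = chainerrl.replay_buffer.PrioritizedReplayBuffer(capacity=10 ** 6)

# The agent construction itself is unchanged, e.g. with q_func, optimizer and
# explorer defined as in the Getting started sketch above:
# agent = chainerrl.agents.DoubleDQN(
#     q_func, optimizer, replay_buffer, gamma=0.99, explorer=explorer)
```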
## Visualization

ChainerRL has a set of accompanying visualization tools to help developers understand and debug their RL agents. With these tools, the behavior of ChainerRL agents can be easily inspected from a browser UI.
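These tools are distributed as the separate `chainerrl-visualizer` package. The sketch below assumes that package is installed and that `agent` and `env` are a trained ChainerRL agent and a Gym-like environment created elsewhere; the exact call signature may vary between versions, so consult that project's README:

```python
# Assumes `pip install chainerrl-visualizer` and an existing trained `agent`
# and Gym-like `env`; ACTION_MEANINGS maps action indices to labels in the UI.
from chainerrl_visualizer import launch_visualizer

ACTION_MEANINGS = {0: 'LEFT', 1: 'RIGHT'}  # example labels
launch_visualizer(agent, env, ACTION_MEANINGS)  # starts the browser UI
```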
## Environments

Environments that support the subset of OpenAI Gym's interface (`reset` and `step` methods) can be used.
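For illustration, a minimal hand-rolled environment (a hypothetical toy class, not part of ChainerRL) that exposes just those two methods in the Gym style:

```python
import numpy as np

class CorridorEnv(object):
    """A toy 1-D corridor exposing only the reset() and step() methods
    that ChainerRL relies on. Hypothetical example, not part of ChainerRL."""

    def __init__(self, length=10):
        self.length = length
        self.pos = 0

    def reset(self):
        self.pos = 0
        return np.array([self.pos], dtype=np.float32)

    def step(self, action):
        # action 0: move left, action 1: move right
        self.pos = min(max(self.pos + (1 if action == 1 else -1), 0), self.length)
        done = self.pos == self.length
        reward = 1.0 if done else 0.0
        return np.array([self.pos], dtype=np.float32), reward, done, {}
```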
## Contributing
Any kind of contribution to ChainerRL would be highly appreciated! If you are interested in contributing to ChainerRL, please read CONTRIBUTING.md.
## License

MIT License. See LICENSE.
## Citations
To cite ChainerRL in publications, please cite our JMLR paper:

    @article{JMLR:v22:20-376,
      author  = {Yasuhiro Fujita and Prabhat Nagarajan and Toshiki Kataoka and Takahiro Ishikawa},
      title   = {ChainerRL: A Deep Reinforcement Learning Library},
      journal = {Journal of Machine Learning Research},
      year    = {2021},
      volume  = {22},
      number  = {77},
      pages   = {1-14},
      url     = {http://jmlr.org/papers/v22/20-376.html}
    }