## Intro
This is the codebase for 'Mind Your Data! Hiding Backdoor in Offline Reinforcement Learning Datasets'. It is built on top of d3rlpy, an offline deep reinforcement learning library for practitioners and researchers.
```py
import d3rlpy

dataset, env = d3rlpy.datasets.get_dataset("hopper-medium-v0")

# prepare algorithm
sac = d3rlpy.algos.SAC()

# train offline
sac.fit(dataset, n_steps=1000000)

# train online
sac.fit_online(env, n_steps=1000000)

# ready to control (x is a batch of observations)
actions = sac.predict(x)
```
- Documentation: https://d3rlpy.readthedocs.io
- Paper: https://arxiv.org/abs/2111.03788
## key features
### :zap: Most Practical RL Library Ever
- offline RL: d3rlpy supports state-of-the-art offline RL algorithms. Offline RL is extremely powerful when online interaction is not feasible during training (e.g. robotics or medical applications).
- online RL: d3rlpy also supports conventional state-of-the-art online training algorithms without any compromise, which means you can solve any kind of RL problem with d3rlpy alone.
- advanced engineering: d3rlpy is designed to implement fast and efficient training algorithms. For example, you can train Atari environments with 4x less memory and as fast as the fastest RL libraries.
### :beginner: Easy-To-Use API
- zero-knowledge of DL library: d3rlpy provides many state-of-the-art algorithms through intuitive APIs. You can become an RL engineer even without knowing how to use deep learning libraries.
- scikit-learn compatibility: d3rlpy is not only easy to use, but also compatible with the scikit-learn API, which means you can boost your productivity with scikit-learn's utilities (see the sketch below).
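For example, the dataset returned by d3rlpy can be split and evaluated with standard scikit-learn utilities. A minimal sketch reusing the hopper quick-start dataset above and the `train_test_split` pattern from the Atari example below (the `test_size` and `n_epochs` values here are arbitrary):

```py
from sklearn.model_selection import train_test_split
import d3rlpy

# split the MDPDataset with a standard scikit-learn utility
dataset, env = d3rlpy.datasets.get_dataset("hopper-medium-v0")
train_episodes, test_episodes = train_test_split(dataset, test_size=0.2)

sac = d3rlpy.algos.SAC()
sac.fit(train_episodes,
        eval_episodes=test_episodes,
        n_epochs=1,  # short run for illustration
        scorers={'td_error': d3rlpy.metrics.td_error_scorer})
```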
### :rocket: Beyond State-Of-The-Art
- distributional Q function: d3rlpy is the first library to support distributional Q functions in all algorithms. Distributional Q functions are known as a very powerful technique for achieving state-of-the-art performance.
- many tweak options: d3rlpy is also the first to support N-step TD backup and ensemble value functions in all algorithms, which lets you push performance even further (see the sketch below).
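A hedged sketch of these tweak options, assuming the `n_steps` and `n_critics` constructor arguments of the d3rlpy v1 API:

```py
import d3rlpy

# N-step TD backup and an ensemble of Q functions are enabled through
# constructor arguments (argument names assume the d3rlpy v1 API)
cql = d3rlpy.algos.CQL(
    n_steps=3,    # 3-step TD backup instead of the default 1-step
    n_critics=5,  # ensemble of 5 Q functions
)
```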
## installation
d3rlpy supports Linux, macOS and Windows.
### PyPI (recommended)
```
$ pip install d3rlpy
```
### Anaconda
```
$ conda install -c conda-forge d3rlpy
```
### Docker
```
$ docker run -it --gpus all --name d3rlpy takuseno/d3rlpy:latest bash
```
## supported algorithms
algorithm | discrete control | continuous control | offline RL? |
---|---|---|---|
Behavior Cloning (supervised learning) | :white_check_mark: | :white_check_mark: | |
Deep Q-Network (DQN) | :white_check_mark: | :no_entry: | |
Double DQN | :white_check_mark: | :no_entry: | |
Deep Deterministic Policy Gradients (DDPG) | :no_entry: | :white_check_mark: | |
Twin Delayed Deep Deterministic Policy Gradients (TD3) | :no_entry: | :white_check_mark: | |
Soft Actor-Critic (SAC) | :white_check_mark: | :white_check_mark: | |
Batch Constrained Q-learning (BCQ) | :white_check_mark: | :white_check_mark: | :white_check_mark: |
Bootstrapping Error Accumulation Reduction (BEAR) | :no_entry: | :white_check_mark: | :white_check_mark: |
Conservative Q-Learning (CQL) | :white_check_mark: | :white_check_mark: | :white_check_mark: |
Advantage Weighted Actor-Critic (AWAC) | :no_entry: | :white_check_mark: | :white_check_mark: |
Critic Regularized Regression (CRR) | :no_entry: | :white_check_mark: | :white_check_mark: |
Policy in Latent Action Space (PLAS) | :no_entry: | :white_check_mark: | :white_check_mark: |
TD3+BC | :no_entry: | :white_check_mark: | :white_check_mark: |
Implicit Q-Learning (IQL) | :no_entry: | :white_check_mark: | :white_check_mark: |
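To pick an algorithm from the table, the continuous-control class uses the plain name, while the discrete-control variant (for algorithms that support both) is prefixed with `Discrete`, assuming the v1 naming convention. For example, `CQL` vs. the `DiscreteCQL` used in the Atari example below:

```py
import d3rlpy

# continuous control (e.g. MuJoCo / PyBullet)
cql = d3rlpy.algos.CQL()

# discrete control (e.g. Atari 2600)
discrete_cql = d3rlpy.algos.DiscreteCQL()
```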
## supported Q functions
- standard Q function
- Quantile Regression
- Implicit Quantile Network
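These Q functions are selected through the `q_func_factory` argument, as in the Atari example below. A minimal sketch, assuming the v1 identifiers `'mean'`, `'qr'`, and `'iqn'`:

```py
import d3rlpy

# q_func_factory identifiers assume the d3rlpy v1 API

# standard (mean) Q function, the default
sac = d3rlpy.algos.SAC(q_func_factory='mean')

# Quantile Regression Q function
qr_sac = d3rlpy.algos.SAC(q_func_factory='qr')

# Implicit Quantile Network Q function
iqn_sac = d3rlpy.algos.SAC(q_func_factory='iqn')
```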
## experimental features
- Model-based Algorithms
- Q-functions
  - Fully parameterized Quantile Function (FQF)
## benchmark results
d3rlpy is benchmarked to ensure implementation quality. The benchmark scripts are available in the reproductions directory, and the benchmark results are available in the d3rlpy-benchmarks repository.
## examples
### MuJoCo
<p align="center"><img align="center" width="160px" src="assets/mujoco_hopper.gif"></p>

```py
import d3rlpy

# prepare dataset
dataset, env = d3rlpy.datasets.get_d4rl('hopper-medium-v0')

# prepare algorithm
cql = d3rlpy.algos.CQL(use_gpu=True)

# train
cql.fit(dataset,
        eval_episodes=dataset,
        n_epochs=100,
        scorers={
            'environment': d3rlpy.metrics.evaluate_on_environment(env),
            'td_error': d3rlpy.metrics.td_error_scorer,
        })
```
See more datasets at d4rl.
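Once training finishes, the learned policy can be queried or exported. A short sketch, assuming the `predict`, `save_model`, and `save_policy` methods of the d3rlpy v1 API:

```py
# method names assume the d3rlpy v1 API

# greedy action for a single observation (predict expects a batch)
observation = env.reset()
action = cql.predict([observation])[0]

# persist the trained model and export a deployable policy
cql.save_model('cql_hopper.pt')           # full model, reloadable by d3rlpy
cql.save_policy('cql_hopper_policy.pt')   # exported policy for deployment
```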
### Atari 2600
<p align="center"><img align="center" width="160px" src="assets/breakout.gif"></p>

```py
import d3rlpy
from sklearn.model_selection import train_test_split

# prepare dataset
dataset, env = d3rlpy.datasets.get_atari('breakout-expert-v0')

# split dataset
train_episodes, test_episodes = train_test_split(dataset, test_size=0.1)

# prepare algorithm
cql = d3rlpy.algos.DiscreteCQL(n_frames=4, q_func_factory='qr', scaler='pixel', use_gpu=True)

# start training
cql.fit(train_episodes,
        eval_episodes=test_episodes,
        n_epochs=100,
        scorers={
            'environment': d3rlpy.metrics.evaluate_on_environment(env),
            'td_error': d3rlpy.metrics.td_error_scorer,
        })
```
See more Atari datasets at d4rl-atari.
### Online Training
```py
import d3rlpy
import gym

# prepare environment
env = gym.make('HopperBulletEnv-v0')
eval_env = gym.make('HopperBulletEnv-v0')

# prepare algorithm
sac = d3rlpy.algos.SAC(use_gpu=True)

# prepare replay buffer
buffer = d3rlpy.online.buffers.ReplayBuffer(maxlen=1000000, env=env)

# start training
sac.fit_online(env, buffer, n_steps=1000000, eval_env=eval_env)
```
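For discrete-action environments, an exploration strategy can also be passed to `fit_online`. A sketch, assuming the `LinearDecayEpsilonGreedy` explorer and its arguments from the d3rlpy v1 online API:

```py
import gym
import d3rlpy

# class and argument names assume the d3rlpy v1 online API
env = gym.make('CartPole-v0')

dqn = d3rlpy.algos.DQN()
buffer = d3rlpy.online.buffers.ReplayBuffer(maxlen=100000, env=env)

# epsilon-greedy exploration that decays from 1.0 to 0.1 over 10k steps
explorer = d3rlpy.online.explorers.LinearDecayEpsilonGreedy(
    start_epsilon=1.0, end_epsilon=0.1, duration=10000)

dqn.fit_online(env, buffer, explorer=explorer, n_steps=100000)
```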
## tutorials
Try a cartpole example on Google Colaboratory!
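If you prefer to run it locally, a minimal offline CartPole run looks roughly like this (assuming the `get_cartpole` dataset helper in `d3rlpy.datasets`):

```py
import d3rlpy

# download a small pre-collected CartPole dataset and its environment
dataset, env = d3rlpy.datasets.get_cartpole()

dqn = d3rlpy.algos.DQN()
dqn.fit(dataset,
        eval_episodes=dataset,
        n_epochs=1,  # short run for illustration
        scorers={'environment': d3rlpy.metrics.evaluate_on_environment(env)})
```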
## contributions
Any kind of contribution to d3rlpy would be highly appreciated! Please check the contribution guide.
Release planning can be checked on the milestones page.
## community
Channel | Link |
---|---|
Chat | Gitter |
Issues | GitHub Issues |
## family projects
Project | Description |
---|---|
d4rl-pybullet | Offline RL datasets of PyBullet tasks |
d4rl-atari | A d4rl-style library of Google's Atari 2600 datasets |
MINERVA | An out-of-the-box GUI tool for offline RL |
## roadmap
The roadmap to the future release is available in ROADMAP.md.
## citation
The paper is available here.