
<p align="center"><img align="center" width="300px" src="assets/logo.png"></p>

d3rlpy: An offline deep reinforcement learning library


d3rlpy is an offline deep reinforcement learning library for practitioners and researchers.

import d3rlpy

# prepare dataset
dataset, env = d3rlpy.datasets.get_dataset("hopper-medium-v0")

# prepare algorithm
sac = d3rlpy.algos.SACConfig(compile_graph=True).create(device="cuda:0")

# train offline
sac.fit(dataset, n_steps=1000000)

# train online
sac.fit_online(env, n_steps=1000000)

# ready to control (x is a batch of observations)
actions = sac.predict(x)

[!IMPORTANT] v2.x.x introduces breaking changes. If you want to stay on v1.x.x, please explicitly install a previous version (e.g. pip install d3rlpy==1.1.1).

Key features

:zap: Most Practical RL Library Ever

:beginner: User-friendly API

:rocket: Beyond State-of-the-art

Installation

d3rlpy supports Linux, macOS and Windows.

Dependencies

Installing the d3rlpy package automatically installs or upgrades its required dependencies.

PyPI (recommended)


$ pip install d3rlpy

Anaconda


$ conda install conda-forge/noarch::d3rlpy

Docker


$ docker run -it --gpus all --name d3rlpy takuseno/d3rlpy:latest bash

Supported algorithms

| Algorithm | Discrete control | Continuous control |
|:-|:-:|:-:|
| Behavior Cloning (supervised learning) | :white_check_mark: | :white_check_mark: |
| Neural Fitted Q Iteration (NFQ) | :white_check_mark: | :no_entry: |
| Deep Q-Network (DQN) | :white_check_mark: | :no_entry: |
| Double DQN | :white_check_mark: | :no_entry: |
| Deep Deterministic Policy Gradients (DDPG) | :no_entry: | :white_check_mark: |
| Twin Delayed Deep Deterministic Policy Gradients (TD3) | :no_entry: | :white_check_mark: |
| Soft Actor-Critic (SAC) | :white_check_mark: | :white_check_mark: |
| Batch Constrained Q-learning (BCQ) | :white_check_mark: | :white_check_mark: |
| Bootstrapping Error Accumulation Reduction (BEAR) | :no_entry: | :white_check_mark: |
| Conservative Q-Learning (CQL) | :white_check_mark: | :white_check_mark: |
| Advantage Weighted Actor-Critic (AWAC) | :no_entry: | :white_check_mark: |
| Critic Regularized Regression (CRR) | :no_entry: | :white_check_mark: |
| Policy in Latent Action Space (PLAS) | :no_entry: | :white_check_mark: |
| TD3+BC | :no_entry: | :white_check_mark: |
| Implicit Q-Learning (IQL) | :no_entry: | :white_check_mark: |
| Calibrated Q-Learning (Cal-QL) | :no_entry: | :white_check_mark: |
| ReBRAC | :no_entry: | :white_check_mark: |
| Decision Transformer | :white_check_mark: | :white_check_mark: |
| Gato | :construction: | :construction: |

Supported Q functions
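Among the supported Q functions are distributional ones such as Quantile Regression DQN. As a rough illustration of the quantile regression idea (not d3rlpy's internal implementation), here is a minimal quantile Huber loss for a single scalar TD target in plain Python:

```python
def quantile_huber_loss(quantiles, target, taus, kappa=1.0):
    """Quantile regression loss for one scalar TD target.

    quantiles: predicted quantile values of the return distribution
    taus: quantile fractions in (0, 1), one per predicted quantile
    """
    total = 0.0
    for q, tau in zip(quantiles, taus):
        u = target - q  # TD error for this quantile
        if abs(u) <= kappa:  # Huber loss: quadratic near zero
            huber = 0.5 * u * u
        else:  # linear in the tails for robustness
            huber = kappa * (abs(u) - 0.5 * kappa)
        # asymmetric weight: over/under-estimation penalized according to tau
        total += abs(tau - (1.0 if u < 0 else 0.0)) * huber
    return total / len(quantiles)
```

This asymmetric weighting is what pushes each predicted quantile toward its target fraction of the return distribution.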

Benchmark results

d3rlpy is benchmarked to ensure implementation quality. The benchmark scripts are available in the reproductions directory, and the benchmark results are available in the d3rlpy-benchmarks repository.

Examples

MuJoCo

<p align="center"><img align="center" width="160px" src="assets/mujoco_hopper.gif"></p>
import d3rlpy

# prepare dataset
dataset, env = d3rlpy.datasets.get_d4rl('hopper-medium-v0')

# prepare algorithm
cql = d3rlpy.algos.CQLConfig(compile_graph=True).create(device='cuda:0')

# train
cql.fit(
    dataset,
    n_steps=100000,
    evaluators={"environment": d3rlpy.metrics.EnvironmentEvaluator(env)},
)
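The EnvironmentEvaluator above rolls the current policy out in the environment and reports the average episode return. A minimal sketch of that evaluation loop in plain Python (the environment and policy below are hypothetical stand-ins, not d3rlpy APIs):

```python
def evaluate(env_reset, env_step, policy, n_episodes=5):
    """Roll out `policy` and return the average undiscounted episode return."""
    returns = []
    for _ in range(n_episodes):
        obs, done, total = env_reset(), False, 0.0
        while not done:
            obs, reward, done = env_step(policy(obs))
            total += reward
        returns.append(total)
    return sum(returns) / len(returns)

# toy environment: three steps per episode, reward 1.0 each
def make_toy_env():
    state = {"t": 0}
    def reset():
        state["t"] = 0
        return 0
    def step(action):
        state["t"] += 1
        return state["t"], 1.0, state["t"] >= 3
    return reset, step
```

In d3rlpy this metric is logged under the "environment" key passed to `evaluators`.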

See more datasets at d4rl.

Atari 2600

<p align="center"><img align="center" width="160px" src="assets/breakout.gif"></p>
import d3rlpy

# prepare dataset (1% dataset)
dataset, env = d3rlpy.datasets.get_atari_transitions(
    'breakout',
    fraction=0.01,
    num_stack=4,
)

# prepare algorithm
cql = d3rlpy.algos.DiscreteCQLConfig(
    observation_scaler=d3rlpy.preprocessing.PixelObservationScaler(),
    reward_scaler=d3rlpy.preprocessing.ClipRewardScaler(-1.0, 1.0),
    compile_graph=True,
).create(device='cuda:0')

# start training
cql.fit(
    dataset,
    n_steps=1000000,
    evaluators={"environment": d3rlpy.metrics.EnvironmentEvaluator(env, epsilon=0.001)},
)
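The two preprocessors configured above have simple semantics: ClipRewardScaler bounds each reward, and PixelObservationScaler maps uint8 pixels into [0, 1]. A plain-Python sketch of those semantics (an illustration, not d3rlpy's internal code):

```python
def clip_reward(reward, low=-1.0, high=1.0):
    """ClipRewardScaler-style clipping: bound each reward to [low, high]."""
    return max(low, min(high, reward))

def scale_pixel_observation(obs):
    """PixelObservationScaler-style scaling: uint8 pixels [0, 255] -> [0.0, 1.0]."""
    return [pixel / 255.0 for pixel in obs]
```

Reward clipping keeps TD targets on a comparable scale across games, which is standard practice for Atari training.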

See more Atari datasets at d4rl-atari.

Online Training

import d3rlpy
import gym

# prepare environment
env = gym.make('Hopper-v3')
eval_env = gym.make('Hopper-v3')

# prepare algorithm
sac = d3rlpy.algos.SACConfig(compile_graph=True).create(device='cuda:0')

# prepare replay buffer
buffer = d3rlpy.dataset.create_fifo_replay_buffer(limit=1000000, env=env)

# start training
sac.fit_online(env, buffer, n_steps=1000000, eval_env=eval_env)
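The replay buffer created by `create_fifo_replay_buffer` evicts the oldest transitions once the limit is reached. A minimal sketch of that FIFO behavior in plain Python (illustrative only, not d3rlpy's implementation):

```python
import random
from collections import deque

class FIFOBuffer:
    """FIFO replay buffer: once `limit` is reached, the oldest item is evicted."""

    def __init__(self, limit):
        self._storage = deque(maxlen=limit)  # deque drops the oldest item itself

    def append(self, transition):
        self._storage.append(transition)

    def sample(self, batch_size):
        # uniform sampling without replacement from the stored transitions
        return random.sample(list(self._storage), batch_size)

    def __len__(self):
        return len(self._storage)
```

A FIFO eviction policy keeps the buffer biased toward recent experience, which is the usual choice for online off-policy training.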

Tutorials

Try cartpole examples on Google Colaboratory!

More tutorial documentation is available here.

Contributions

Any kind of contribution to d3rlpy would be highly appreciated! Please check the contribution guide.

Community

| Channel | Link |
|:-|:-|
| Issues | GitHub Issues |

[!IMPORTANT] Please do NOT email any contributors, including the owner of this project, to ask for technical support. Such emails will be ignored without a reply. Use GitHub Issues to report your problems.

Projects using d3rlpy

| Project | Description |
|:-|:-|
| MINERVA | An out-of-the-box GUI tool for offline RL |
| SCOPE-RL | An off-policy evaluation and selection library |

Roadmap

The roadmap to the future release is available in ROADMAP.md.

Citation

The paper is available here.

@article{d3rlpy,
  author  = {Takuma Seno and Michita Imai},
  title   = {d3rlpy: An Offline Deep Reinforcement Learning Library},
  journal = {Journal of Machine Learning Research},
  year    = {2022},
  volume  = {23},
  number  = {315},
  pages   = {1--20},
  url     = {http://jmlr.org/papers/v23/22-0017.html}
}

Acknowledgement

This work started as a part of Takuma Seno's Ph.D. project at Keio University in 2020.

This work was supported by the Information-technology Promotion Agency, Japan (IPA), Exploratory IT Human Resources Project (MITOU Program), in fiscal year 2020.