Intro

This is the codebase for 'Mind Your Data! Hiding Backdoor in Offline Reinforcement Learning Datasets'.


d3rlpy is an offline deep reinforcement learning library for practitioners and researchers.

import d3rlpy
import numpy as np

# prepare dataset and environment
dataset, env = d3rlpy.datasets.get_dataset("hopper-medium-v0")

# prepare algorithm
sac = d3rlpy.algos.SAC()

# train offline
sac.fit(dataset, n_steps=1000000)

# train online
sac.fit_online(env, n_steps=1000000)

# ready to control: predict actions for a batch of observations
observations = np.expand_dims(env.reset(), axis=0)
actions = sac.predict(observations)

key features

:zap: Most Practical RL Library Ever

:beginner: Easy-To-Use API

:rocket: Beyond State-Of-The-Art

installation

d3rlpy supports Linux, macOS and Windows.

PyPI (recommended)


$ pip install d3rlpy

Anaconda


$ conda install -c conda-forge d3rlpy

Docker


$ docker run -it --gpus all --name d3rlpy takuseno/d3rlpy:latest bash

supported algorithms

| algorithm | discrete control | continuous control | offline RL? |
|:-|:-:|:-:|:-:|
| Behavior Cloning (supervised learning) | :white_check_mark: | :white_check_mark: | |
| Deep Q-Network (DQN) | :white_check_mark: | :no_entry: | |
| Double DQN | :white_check_mark: | :no_entry: | |
| Deep Deterministic Policy Gradients (DDPG) | :no_entry: | :white_check_mark: | |
| Twin Delayed Deep Deterministic Policy Gradients (TD3) | :no_entry: | :white_check_mark: | |
| Soft Actor-Critic (SAC) | :white_check_mark: | :white_check_mark: | |
| Batch Constrained Q-learning (BCQ) | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Bootstrapping Error Accumulation Reduction (BEAR) | :no_entry: | :white_check_mark: | :white_check_mark: |
| Conservative Q-Learning (CQL) | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Advantage Weighted Actor-Critic (AWAC) | :no_entry: | :white_check_mark: | :white_check_mark: |
| Critic Regularized Regression (CRR) | :no_entry: | :white_check_mark: | :white_check_mark: |
| Policy in Latent Action Space (PLAS) | :no_entry: | :white_check_mark: | :white_check_mark: |
| TD3+BC | :no_entry: | :white_check_mark: | :white_check_mark: |
| Implicit Q-Learning (IQL) | :no_entry: | :white_check_mark: | :white_check_mark: |
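
Algorithms that support both action spaces are exposed as separate classes for the discrete-control case, as with DiscreteCQL in the Atari example below. A minimal sketch; DiscreteBCQ is an assumption based on the same Discrete* naming convention:

import d3rlpy

# continuous control (e.g. MuJoCo locomotion tasks)
cql = d3rlpy.algos.CQL()
bcq = d3rlpy.algos.BCQ()

# discrete-control counterparts follow the Discrete* naming convention
discrete_cql = d3rlpy.algos.DiscreteCQL()
discrete_bcq = d3rlpy.algos.DiscreteBCQ()  # name assumed from the convention above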

supported Q functions
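
The Q function is selected per algorithm through the q_func_factory argument, as the Atari example below does with 'qr' (quantile regression). A minimal sketch; the 'iqn' identifier is an assumption beyond the 'mean' default and the 'qr' value used elsewhere in this README:

import d3rlpy

# default: standard mean Q function
cql = d3rlpy.algos.CQL()

# quantile-regression Q function, as used in the Atari example below
qr_cql = d3rlpy.algos.CQL(q_func_factory='qr')

# implicit quantile network Q function (identifier assumed; see the d3rlpy docs)
iqn_cql = d3rlpy.algos.CQL(q_func_factory='iqn')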

experimental features

benchmark results

d3rlpy is benchmarked to ensure implementation quality. The benchmark scripts are available in the reproductions directory, and the benchmark results are available in the d3rlpy-benchmarks repository.

examples

MuJoCo

<p align="center"><img align="center" width="160px" src="assets/mujoco_hopper.gif"></p>
import d3rlpy

# prepare dataset
dataset, env = d3rlpy.datasets.get_d4rl('hopper-medium-v0')

# prepare algorithm
cql = d3rlpy.algos.CQL(use_gpu=True)

# train
cql.fit(dataset,
        eval_episodes=dataset,
        n_epochs=100,
        scorers={
            'environment': d3rlpy.metrics.evaluate_on_environment(env),
            'td_error': d3rlpy.metrics.td_error_scorer
        })

See more datasets at d4rl.
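
After fitting, the learned model can be saved and reloaded for evaluation or deployment. A minimal sketch, assuming d3rlpy's save_model/load_model/save_policy methods:

# persist and restore the full model parameters (assumed API)
cql.save_model('cql_hopper.pt')
cql.load_model('cql_hopper.pt')

# export the greedy policy as a standalone file for deployment (assumed API)
cql.save_policy('cql_hopper_policy.pt')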

Atari 2600

<p align="center"><img align="center" width="160px" src="assets/breakout.gif"></p>
import d3rlpy
from sklearn.model_selection import train_test_split

# prepare dataset
dataset, env = d3rlpy.datasets.get_atari('breakout-expert-v0')

# split dataset
train_episodes, test_episodes = train_test_split(dataset, test_size=0.1)

# prepare algorithm
cql = d3rlpy.algos.DiscreteCQL(n_frames=4, q_func_factory='qr', scaler='pixel', use_gpu=True)

# start training
cql.fit(train_episodes,
        eval_episodes=test_episodes,
        n_epochs=100,
        scorers={
            'environment': d3rlpy.metrics.evaluate_on_environment(env),
            'td_error': d3rlpy.metrics.td_error_scorer
        })

See more Atari datasets at d4rl-atari.

Online Training

import d3rlpy
import gym
import pybullet_envs  # registers HopperBulletEnv-v0 with gym

# prepare environment
env = gym.make('HopperBulletEnv-v0')
eval_env = gym.make('HopperBulletEnv-v0')

# prepare algorithm
sac = d3rlpy.algos.SAC(use_gpu=True)

# prepare replay buffer
buffer = d3rlpy.online.buffers.ReplayBuffer(maxlen=1000000, env=env)

# start training
sac.fit_online(env, buffer, n_steps=1000000, eval_env=eval_env)

tutorials

Try a cartpole example on Google Colaboratory!
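
For a dependency-light local run of the same workflow, here is a minimal cartpole sketch, assuming the d3rlpy.datasets.get_cartpole() helper:

import d3rlpy

# small built-in cartpole dataset (helper name assumed)
dataset, env = d3rlpy.datasets.get_cartpole()

# discrete-control CQL with a small step budget
cql = d3rlpy.algos.DiscreteCQL()
cql.fit(dataset, n_steps=10000)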

contributions

Any kind of contribution to d3rlpy would be highly appreciated! Please check the contribution guide.

The release planning can be checked at milestones.

community

| Channel | Link |
|:-|:-|
| Chat | Gitter |
| Issues | GitHub Issues |

family projects

| Project | Description |
|:-|:-|
| d4rl-pybullet | Offline RL datasets of PyBullet tasks |
| d4rl-atari | A d4rl-style library of Google's Atari 2600 datasets |
| MINERVA | An out-of-the-box GUI tool for offline RL |

roadmap

The roadmap to the future release is available in ROADMAP.md.

citation

The paper is available here.