<div align="center"> <a href="http://www.offline-saferl.org"><img width="300px" height="auto" src="https://github.com/liuzuxin/dsrl/raw/main/docs/dsrl-logo.png"></a> </div> <br/> <div align="center">

</div>

DSRL (Datasets for Safe Reinforcement Learning) provides a rich collection of datasets specifically designed for offline Safe Reinforcement Learning (RL). Created with the objective of fostering progress in offline safe RL research, DSRL bridges a crucial gap in the availability of safety-centric public benchmarks and datasets.

<div align="center"> <img width="800px" height="auto" src="https://github.com/liuzuxin/dsrl/raw/main/docs/tasks.png"> </div>

DSRL provides:

  1. Diverse datasets: 38 datasets across different safe RL environments and difficulty levels in SafetyGymnasium, BulletSafetyGym, and MetaDrive, all prepared with safety considerations.
  2. Consistent API with D4RL: For easy use and evaluation of offline learning methods.
  3. Data post-processing filters: Allowing alteration of data density, noise level, and reward distributions to simulate various data collection conditions.

This package is part of a comprehensive benchmarking suite that also includes FSRL and OSRL, and it aims to promote advancements in the development and evaluation of safe learning algorithms.

We provide a detailed breakdown of the datasets, including all the environments we use, the dataset sizes, and the cost-reward-return plot for each dataset. These details can be found in the docs folder.

To learn more, please visit our project website. If you find this code useful, please cite our paper, which has been accepted by the DMLR journal:

@article{
  liu2024offlinesaferl,
  title={Datasets and Benchmarks for Offline Safe Reinforcement Learning},
  author={Zuxin Liu and Zijian Guo and Haohong Lin and Yihang Yao and Jiacheng Zhu and Zhepeng Cen and Hanjiang Hu and Wenhao Yu and Tingnan Zhang and Jie Tan and Ding Zhao},
  journal={Journal of Data-centric Machine Learning Research},
  year={2024}
}

Installation

Install from PyPI

DSRL is currently hosted on PyPI; you can install it with:

pip install dsrl

By default, it will also install the bullet-safety-gym and safety-gymnasium environments.

If you want to use the MetaDrive environment, please install it via:

pip install git+https://github.com/HenryLHH/metadrive_clean.git@main

Install from source

Pull this repo and install:

git clone https://github.com/liuzuxin/DSRL.git
cd DSRL
pip install -e .

You can also install the MetaDrive package by specifying the option:

pip install -e .[metadrive]

How to use DSRL

DSRL uses the Gymnasium API. Tasks are created via the gymnasium.make function. Each task is associated with a fixed offline dataset, which can be obtained with the env.get_dataset() method. This method returns a dictionary with observations, next_observations, actions, rewards, costs, terminals, and timeouts keys.

The usage is similar to D4RL. Here is an example:

import gymnasium as gym
import dsrl

# Create the environment
env = gym.make('OfflineCarCircle-v0')

# Each task is associated with a dataset
# dataset contains observations, next_observations, actions, rewards, costs, terminals, timeouts
dataset = env.get_dataset()
print(dataset['observations']) # An N x obs_dim Numpy array of observations

# dsrl abides by the Gymnasium interface
obs, info = env.reset()
obs, reward, terminal, timeout, info = env.step(env.action_space.sample())
cost = info["cost"]

# Apply dataset filters [optional]
# dataset = env.pre_process_data(dataset, filter_cfgs)
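
The commented-out pre_process_data call above applies the dataset filters described earlier. As a rough standalone illustration of what a density filter does, the sketch below subsamples the transition arrays with NumPy; it is not the library's pre_process_data implementation, and the keep_ratio value and key handling are assumptions for demonstration only.

import numpy as np

# Hypothetical density filter: keep a random 50% subsample of the transitions.
# keep_ratio and the choice of keys are illustrative assumptions, not DSRL's filter API.
keep_ratio = 0.5
n = dataset['observations'].shape[0]
idx = np.random.choice(n, size=int(n * keep_ratio), replace=False)
filtered = {k: v[idx] for k, v in dataset.items()
            if isinstance(v, np.ndarray) and v.shape[0] == n}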

Datasets are automatically downloaded to the ~/.dsrl/datasets directory when get_dataset() is called. If you would like to change the location of this directory, you can set the $DSRL_DATASET_DIR environment variable to the directory of your choosing, or pass in the dataset filepath directly into the get_dataset method.
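
For example, assuming the download cache should live under /data/dsrl_datasets (an arbitrary example path), the environment variable can be set before the first get_dataset() call:

import os

# Must be set before get_dataset() triggers the download
os.environ["DSRL_DATASET_DIR"] = "/data/dsrl_datasets"  # arbitrary example path

dataset = env.get_dataset()  # files are cached under the directory above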

You can run the following example scripts to play with the offline datasets of all the supported environments:

python examples/run_mujoco.py --agent [your_agent] --task [your_task]
python examples/run_bullet.py --agent [your_agent] --task [your_task]
python examples/run_metadrive.py --road [your_road] --traffic [your_traffic] 

We also provide examples of rendering in the MetaDrive environments, for both the third-person view and the bird's-eye view:

python examples/run_metadrive.py --road [your_road] --traffic [your_traffic] --render [bev/3pv/none]

Normalizing Scores
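
Evaluation in DSRL reports normalized reward and cost returns with respect to a chosen cost threshold. A minimal sketch of the intended workflow is shown below; it assumes the environment exposes set_target_cost and get_normalized_score helpers, so please verify the exact names and signatures against the repository.

# Assumed helper names; verify against the DSRL repository
target_cost = 10.0                 # undiscounted episode cost budget
env.set_target_cost(target_cost)

ep_reward_return = 350.0           # undiscounted sum of rewards over an episode
ep_cost_return = 8.0               # undiscounted sum of costs over an episode
norm_reward, norm_cost = env.get_normalized_score(ep_reward_return, ep_cost_return)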

License

All datasets are licensed under the Creative Commons Attribution 4.0 License (CC BY), and code is licensed under the Apache 2.0 License.