Safe Offline Reinforcement Learning with Feasibility-Guided Diffusion Model <img src="assets/icon.png" width="50">

International Conference on Learning Representations (ICLR), 2024

[Project Page] [arXiv] [OpenReview]

Yinan Zheng*, Jianxiong Li*, Dongjie Yu, Yujie Yang, Shengbo Eben Li, Xianyuan Zhan, Jingjing Liu

🔥 The official implementation of FISOR, a pioneering effort to incorporate hard constraints (via Hamilton-Jacobi reachability) into the safe offline RL setting.

🔥 Excitingly, FISOR has already been applied in several practical applications.

Methods

FISOR transforms the original tightly coupled, safety-constrained offline RL problem into three simple, decoupled supervised objectives:

<p float="left"> <img src="assets/framework.jpg" width="800"> </p>
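
For readers new to diffusion-based policy learning, the sketch below illustrates the general recipe of weighted behavior cloning with a diffusion model, in which per-sample weights can encode feasibility or advantage information. This is only a generic, hypothetical illustration (noise_model, weights, and the toy cosine schedule are all assumptions), not FISOR's actual objective; see the paper and the training code in this repository for the real losses.

```python
# Hypothetical sketch (not the repository code): a weighted denoising loss for
# diffusion-model behavior cloning, where per-sample weights could encode
# feasibility or advantage information.
import jax
import jax.numpy as jnp

def weighted_bc_diffusion_loss(params, noise_model, states, actions, weights, key, T=1000):
    """noise_model(params, states, noised_actions, t) predicts the noise added to the actions."""
    k1, k2 = jax.random.split(key)
    batch_size = actions.shape[0]
    t = jax.random.randint(k1, (batch_size,), 1, T + 1)          # diffusion timestep per sample
    alpha_bar = jnp.cos(0.5 * jnp.pi * t / T) ** 2               # toy cosine noise schedule
    eps = jax.random.normal(k2, actions.shape)                   # target noise
    noised_actions = (jnp.sqrt(alpha_bar)[:, None] * actions
                      + jnp.sqrt(1.0 - alpha_bar)[:, None] * eps)
    pred = noise_model(params, states, noised_actions, t)
    per_sample = jnp.mean((pred - eps) ** 2, axis=-1)            # denoising error per sample
    return jnp.mean(weights * per_sample)                        # weighted regression objective
```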

Branches Overview

| Branch name | Usage |
| ----------- | ----- |
| master | FISOR implementation for Point Robot, Safety-Gymnasium, and Bullet-Safety-Gym; data quantity experiment; feasible region visualization. |
| metadrive_imitation | FISOR implementation for MetaDrive; data quantity experiment; imitation learning experiment. |

Installation

```bash
conda create -n FISOR python=3.9
conda activate FISOR
git clone https://github.com/ZhengYinan-AIR/FISOR.git
cd FISOR
pip install -r requirements.txt
```
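
After installation, it may be worth sanity-checking that JAX can see your accelerator (the XLA_PYTHON_CLIENT_PREALLOCATE flag used in the run commands below suggests a JAX-based setup):

```python
# Quick sanity check that JAX is installed and detects an accelerator
# (falls back to CPU if no GPU/TPU is visible).
import jax
print(jax.devices())
```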

Main Results

Run

```bash
# OfflineCarButton1Gymnasium-v0
export XLA_PYTHON_CLIENT_PREALLOCATE=False
python launcher/examples/train_offline.py --env_id 0 --config configs/train_config.py:fisor
```

where env_id is an index into the list of environments used by the launcher.
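
To sweep several environments, a small hypothetical helper can wrap the same command line (the env ids below are placeholders; consult the launcher's environment list for the actual indices):

```python
# Hypothetical sweep helper that reuses the CLI shown above; env ids are examples only.
import os
import subprocess

os.environ["XLA_PYTHON_CLIENT_PREALLOCATE"] = "False"  # avoid JAX preallocating all GPU memory

for env_id in [0, 1, 2]:  # placeholder indices into the launcher's environment list
    subprocess.run(
        ["python", "launcher/examples/train_offline.py",
         "--env_id", str(env_id),
         "--config", "configs/train_config.py:fisor"],
        check=True,
    )
```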

Data Quantity Experiments

You can run filter_data.py to generate offline datasets of varying sizes, or download the processed datasets directly (Download link). Then run

```bash
python launcher/examples/train_offline.py --env_id 17 --config configs/train_config.py:fisor --ratio 0.1
```

where ratio is the size of the processed dataset as a fraction of the original dataset.
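
Conceptually, producing a smaller dataset just means keeping a random fraction of the transitions. Below is a minimal sketch of that idea, assuming a D4RL-style dict-of-arrays dataset; the actual filter_data.py may differ:

```python
# Conceptual sketch only (the real filter_data.py may differ): subsample a
# dict-of-arrays offline dataset down to a given fraction of its transitions.
import numpy as np

def subsample_dataset(dataset: dict, ratio: float, seed: int = 0) -> dict:
    n = len(dataset["observations"])                       # assumes D4RL-style keys
    idx = np.random.default_rng(seed).choice(n, size=int(n * ratio), replace=False)
    return {key: value[idx] for key, value in dataset.items()}
```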

Feasible Region Visualization

First download the offline dataset for the Point Robot environment (Download link). Then train FISOR in the Point Robot environment:

```bash
python launcher/examples/train_offline.py --env_id 29 --config configs/train_config.py:fisor
```

Then visualize the feasible region by running viz_map.py.

<p float="left"> <img src="assets/viz_map.png" width="800"> </p>
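
As a rough idea of what such a visualization involves (viz_map.py itself may work differently), one can evaluate a learned feasibility value function on a 2D grid of states and shade the region where it is non-positive:

```python
# Illustrative sketch (not viz_map.py): shade the region where a learned
# feasibility value function V_h is non-positive on a 2D slice of the state space.
import numpy as np
import matplotlib.pyplot as plt

def plot_feasible_region(value_fn, xlim=(-3, 3), ylim=(-3, 3), n=200):
    xs = np.linspace(*xlim, n)
    ys = np.linspace(*ylim, n)
    X, Y = np.meshgrid(xs, ys)
    states = np.stack([X.ravel(), Y.ravel()], axis=-1)   # hypothetical 2D states
    V = value_fn(states).reshape(n, n)
    plt.contourf(X, Y, V <= 0, levels=[0.5, 1.5], colors=["#88c999"])  # feasible set
    plt.contour(X, Y, V, levels=[0.0], colors="k")                     # boundary V = 0
    plt.xlabel("x")
    plt.ylabel("y")
    plt.title("Estimated feasible region (V_h <= 0)")
    plt.show()
```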

BibTeX

If you find our code and paper helpful, please cite our paper as:

@inproceedings{
zheng2024safe,
title={Safe Offline Reinforcement Learning with Feasibility-Guided Diffusion Model},
author={Yinan Zheng and Jianxiong Li and Dongjie Yu and Yujie Yang and Shengbo Eben Li and Xianyuan Zhan and Jingjing Liu},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=j5JvZCaDM0}
}

Acknowledgements

Parts of this code are adapted from IDQL and DRPO.