Reinforcement Learning in Possibly Nonstationary Environments (CUSUM-RL)

This repository contains the Python implementation of the paper "Reinforcement Learning in Possibly Nonstationary Environments", along with R scripts for plotting.

Summary of the paper

We consider reinforcement learning (RL) methods in offline nonstationary environments. Many existing RL algorithms in the literature rely on the stationarity assumption, which requires the system transition and the reward function to be constant over time. However, this assumption is restrictive in practice and is likely to be violated in a number of applications, including traffic signal control, robotics, and mobile health. In this paper, we develop a consistent procedure to test for nonstationarity of the optimal policy based on pre-collected historical data, without additional online data collection. Building on the proposed test, we further develop a sequential change point detection method that can be naturally coupled with existing state-of-the-art RL methods for policy optimisation in nonstationary environments. The usefulness of our method is illustrated by theoretical results, simulation studies, and a real data example from the 2018 Intern Health Study (IHS).
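To make the procedure concrete, below is a minimal sketch of the sequential change point detection loop under stated assumptions. The names `data`, `kappas`, `test_stationarity`, and the threshold `alpha` are illustrative placeholders, not the repository's actual interface: the idea is to enlarge a candidate window backwards from the last time point and flag a change point at the first rejection of the stationarity test.

```python
import numpy as np

def detect_change_point(data, kappas, test_stationarity, alpha=0.05):
    """Scan backwards from the last time point: for each candidate window
    size kappa, test whether the optimal policy is stationary on the most
    recent kappa time points; the first rejection flags a change point.

    `data` is assumed to be an array of shape (n_trajectories, T, ...),
    and `test_stationarity(window)` is a hypothetical callable returning
    a p-value; both stand in for the repository's actual interface.
    """
    T = data.shape[1]
    for kappa in sorted(kappas):
        window = data[:, T - kappa:T]        # most recent kappa time points
        p_value = test_stationarity(window)  # test H0: stationary on window
        if p_value < alpha:
            # Rejection: a change point lies somewhere inside this window,
            # so the last kappa points can no longer be treated as one
            # stationary segment.
            return T - kappa
    return None  # no rejection: treat the whole series as stationary

# In the paper's pipeline, an off-the-shelf offline RL method (e.g.
# fitted-Q iteration) would then be trained on data[:, cp:] only, i.e.
# on the segment after the detected change point.
```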

<img align="center" src="data_pvalue.png" alt="drawing" width="1000">

Figure: p-values over different values of kappa (the number of time points counted backward from the last time point). The discount factor equals 0.9 (top row) and 0.95 (bottom row). The specialties correspond to emergency, pediatrics, and family practice, from left to right.

File Overview