Policy Representation via Diffusion Probability Model for Reinforcement Learning

We formally build a theoretical foundation of policy representation via the diffusion probability model and provide practical implementations of diffusion policy for online model-free RL.

Paper link: https://arxiv.org/pdf/2305.13122.pdf
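
As background, the action-sampling step of a diffusion policy can be sketched as a standard DDPM reverse process conditioned on the state. The sketch below is illustrative only: the `denoiser` network, its `(action, state, t)` call signature, and its `action_dim` attribute are hypothetical stand-ins for the noise-prediction model described in the paper.

```python
import numpy as np

def sample_action(denoiser, state, betas, rng=None):
    """Draw an action by reverse diffusion: start from Gaussian noise
    and denoise step by step, conditioning on the current state."""
    if rng is None:
        rng = np.random.default_rng()
    alphas = 1.0 - np.asarray(betas)
    alpha_bars = np.cumprod(alphas)
    a = rng.standard_normal(denoiser.action_dim)  # a_T ~ N(0, I)
    for t in reversed(range(len(betas))):
        eps = denoiser(a, state, t)  # predicted noise at step t
        # DDPM posterior mean: (a - beta_t / sqrt(1 - alpha_bar_t) * eps) / sqrt(alpha_t)
        a = (a - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:  # add noise on all but the final step
            a = a + np.sqrt(betas[t]) * rng.standard_normal(denoiser.action_dim)
    return a
```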

Experiments

Requirements

Installations of PyTorch and MuJoCo are required. A suitable conda environment named DIPO can be created and activated with:

conda create -n DIPO
conda activate DIPO

To get started, install the additionally required Python packages into your environment.

pip install -r requirements.txt

Running

Running experiments with our code is straightforward; below we use the Hopper-v3 task as an example.

python main.py --env_name Hopper-v3 --num_steps 1000000 --n_timesteps 100 --cuda 0 --seed 0

Hyperparameters

Hyperparameters for DIPO are listed below to make it easy to reproduce our reported results.

Hyper-parameters for algorithms

| Hyperparameter | DIPO | SAC | TD3 | PPO |
| --- | --- | --- | --- | --- |
| No. of hidden layers | 2 | 2 | 2 | 2 |
| No. of hidden nodes | 256 | 256 | 256 | 256 |
| Activation | mish | relu | relu | tanh |
| Batch size | 256 | 256 | 256 | 256 |
| Discount for reward $\gamma$ | 0.99 | 0.99 | 0.99 | 0.99 |
| Target smoothing coefficient $\tau$ | 0.005 | 0.005 | 0.005 | 0.005 |
| Learning rate for actor | $3 \times 10^{-4}$ | $3 \times 10^{-4}$ | $3 \times 10^{-4}$ | $7 \times 10^{-4}$ |
| Learning rate for critic | $3 \times 10^{-4}$ | $3 \times 10^{-4}$ | $3 \times 10^{-4}$ | $7 \times 10^{-4}$ |
| Actor Critic grad norm | 2 | N/A | N/A | 0.5 |
| Memory size | $1 \times 10^6$ | $1 \times 10^6$ | $1 \times 10^6$ | $1 \times 10^6$ |
| Entropy coefficient | N/A | 0.2 | N/A | 0.01 |
| Value loss coefficient | N/A | N/A | N/A | 0.5 |
| Exploration noise | N/A | N/A | $\mathcal{N}(0, 0.1)$ | N/A |
| Policy noise | N/A | N/A | $\mathcal{N}(0, 0.2)$ | N/A |
| Noise clip | N/A | N/A | 0.5 | N/A |
| Use GAE | N/A | N/A | N/A | True |
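
To make the network rows above concrete, here is a minimal NumPy sketch of a two-hidden-layer, 256-unit MLP with the Mish activation from the table. The `MLP` class name and the Gaussian weight initialization are illustrative assumptions, not the repository's actual code.

```python
import numpy as np

def mish(x):
    # Mish activation: x * tanh(softplus(x)); logaddexp keeps softplus stable.
    return x * np.tanh(np.logaddexp(0.0, x))

class MLP:
    """Two hidden layers of 256 units each, matching the table above."""
    def __init__(self, in_dim, out_dim, hidden=256, seed=0):
        rng = np.random.default_rng(seed)
        dims = [in_dim, hidden, hidden, out_dim]
        self.weights = [rng.normal(0.0, 0.1, (a, b)) for a, b in zip(dims[:-1], dims[1:])]
        self.biases = [np.zeros(b) for b in dims[1:]]

    def forward(self, x):
        for i, (W, b) in enumerate(zip(self.weights, self.biases)):
            x = x @ W + b
            if i < len(self.weights) - 1:  # no activation on the output layer
                x = mish(x)
        return x
```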

Hyper-parameters for MuJoCo (DIPO)

| Hyperparameter | Hopper-v3 | Walker2d-v3 | Ant-v3 | HalfCheetah-v3 | Humanoid-v3 |
| --- | --- | --- | --- | --- | --- |
| Learning rate for action | 0.03 | 0.03 | 0.03 | 0.03 | 0.03 |
| Actor Critic grad norm | 1 | 2 | 0.8 | 2 | 2 |
| Action grad norm ratio | 0.3 | 0.08 | 0.1 | 0.08 | 0.1 |
| Action gradient steps | 20 | 20 | 20 | 40 | 20 |
| Diffusion inference timesteps | 100 | 100 | 100 | 100 | 100 |
| Diffusion beta schedule | cosine | cosine | cosine | cosine | cosine |
| Update actor target every | 1 | 1 | 1 | 2 | 1 |
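
The "cosine" beta schedule row refers to the cosine noise schedule of Nichol & Dhariwal (2021). A self-contained sketch using the 100 diffusion timesteps from the table might look like this; the function name, offset `s`, and `max_beta` clipping value are our assumptions:

```python
import math

def cosine_beta_schedule(timesteps=100, s=0.008, max_beta=0.999):
    """Cosine schedule: alpha_bar(t) follows a squared cosine,
    and beta_t = 1 - alpha_bar(t+1) / alpha_bar(t), clipped for stability."""
    f = lambda t: math.cos((t / timesteps + s) / (1 + s) * math.pi / 2) ** 2
    return [min(1.0 - f(t + 1) / f(t), max_beta) for t in range(timesteps)]
```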

Contact

If you have any questions regarding the code or paper, feel free to contact us at yanglong001@pku.edu.cn or zx.huang@zju.edu.cn.