# RL MPC Locomotion
This repo provides a fast simulation and RL training framework for a quadruped locomotion task, in which a policy dynamically predicts the weight parameters of an MPC controller. The control framework is hierarchical: a high-level policy network on top of a low-level model predictive controller.

The MPC controller is based on MIT Cheetah Software but rewritten in Python. It fully exposes the interface between sensor data and motor commands, so the controller can easily be ported to any mainstream simulator.

RL training runs in parallel with NVIDIA Isaac Gym using the Unitree Robotics Aliengo model, and the trained policy is transferred from simulation to a real Aliengo robot (sim2real is not included in this codebase).
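The hierarchy described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the repo's actual API: it assumes the policy runs at a lower rate than the MPC and outputs a cost-weight vector that the MPC consumes each step. All names (`policy_step`, `mpc_step`, `POLICY_DECIMATION`) and the 12-element weight vector are hypothetical.

```python
# Sketch of the hierarchical controller: a high-level policy predicts
# MPC weight parameters at a low rate, while the low-level MPC runs at
# a higher rate. Names and shapes here are illustrative only.

def policy_step(observation):
    """Hypothetical policy: maps an observation to 12 MPC cost weights."""
    return [1.0] * 12  # placeholder weight vector

def mpc_step(weights, state):
    """Hypothetical MPC: uses the current weights to compute commands."""
    return [w * s for w, s in zip(weights, state)]

POLICY_DECIMATION = 10  # policy updates once every 10 MPC steps

def run(num_steps, state):
    weights = policy_step(state)
    commands = []
    for step in range(num_steps):
        if step % POLICY_DECIMATION == 0:
            weights = policy_step(state)       # low-rate policy update
        commands.append(mpc_step(weights, state))  # high-rate MPC step
    return commands
```

The key design point is the two-rate loop: the expensive policy inference is amortized over many MPC solves.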
## Frameworks

<img src="images/controller_blocks.png" width=700>

## Dependencies
- Python - 3.8
- PyTorch - 1.10.0 with CUDA 11.3
- Isaac Gym - Preview 4
## Installation
- Clone this repository:

  ```bash
  git clone git@github.com:silvery107/rl-mpc-locomotion.git
  ```

- Initialize submodules:

  ```bash
  git submodule update --init
  ```

  Or use the `--recurse-submodules` option in step 1 to clone submodules at the same time.

- Create the conda environment:

  ```bash
  conda env create -f environment.yml
  ```

- Install `rsl_rl` at commit 2ad79cf under the `extern` folder:

  ```bash
  cd extern/rsl_rl
  pip install -e .
  ```

- Compile the Python binding of the MPC solver:

  ```bash
  pip install -e .
  ```
## Quick Start
- Play the MPC controller on Aliengo:

  ```bash
  python RL_MPC_Locomotion.py --robot=Aliengo
  ```

  All supported robot types are `Go1`, `A1` and `Aliengo`. Note that you need to plug in an Xbox-like gamepad to control it, or pass `--disable-gamepad`. The controller mode defaults to `Fsm` (Finite State Machine); you can also try `Min` for a minimal MPC controller without the FSM.
- Gamepad keymap

  - Press `LB` to switch gait types between `Trot`, `Walk` and `Bound`.
  - Press `RB` to switch FSM states between `Locomotion` and `Recovery Stand`.
- Train a new policy:

  ```bash
  cd RL_Environment
  python train.py task=Aliengo headless=False
  ```

  Press the `v` key to disable viewer updates, and press it again to resume. Set `headless=True` to train without rendering.

  Tensorboard support is available; run `tensorboard --logdir runs`.
- Load a pretrained checkpoint:

  ```bash
  python train.py task=Aliengo checkpoint=runs/Aliengo/nn/Aliengo.pth test=True num_envs=4
  ```

  Set `test=False` to continue training.
- Run the pretrained weight policy for the MPC controller on Aliengo:

  Set `bridge_MPC_to_RL` to `False` in `MPC_Controller/Parameters.py`, then run

  ```bash
  python RL_MPC_Locomotion.py --robot=Aliengo --mode=Policy --checkpoint=path/to/ckpt
  ```

  If no `checkpoint` is given, the latest run will be loaded.
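Resolving "the latest run" typically means picking the most recently modified checkpoint file. A minimal sketch of such a lookup (the `runs/.../*.pth` layout is taken from the training command above; the helper name is hypothetical and this is not necessarily how the repo implements it):

```python
from pathlib import Path

def latest_checkpoint(runs_dir="runs", pattern="**/*.pth"):
    """Hypothetical helper: return the most recently modified .pth file
    under runs_dir, or None if no checkpoint exists yet."""
    candidates = sorted(
        Path(runs_dir).glob(pattern),      # all checkpoints, any depth
        key=lambda p: p.stat().st_mtime,   # oldest first
    )
    return candidates[-1] if candidates else None
```

Sorting by modification time rather than filename avoids surprises when run names do not sort chronologically.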
## Roadmap

<img src="images/MPC_block.png" width=600>

- MPC Controller
  - Quadruped
  - RobotRunner
- RL Environment
  - Gamepad Reader
  - Simulation Utils
  - Weight Policy
  - Train
## User Notes

- Setup a Simulation in Isaac Gym
- Install MIT Cheetah Software
- OSQP, qpOASES and CVXOPT Solver Instructions
- Development Logs