RL MPC Locomotion

This repo aims to provide a fast simulation and RL training framework for quadruped locomotion, in which the weight parameters of an MPC controller are predicted dynamically. The control framework is a hierarchical controller composed of a higher-level policy network and a lower-level model predictive controller.
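
As a rough illustration of this hierarchy, the policy runs at a low rate and outputs MPC weight parameters, while the MPC runs at a higher rate and maps the latest weights and robot state to joint torques. The sketch below is illustrative only; robot, policy, mpc and policy_decimation are placeholder names, not classes or settings from this repo.

    # A minimal sketch of the two-rate hierarchy (all names are placeholders).
    def hierarchical_control_loop(robot, policy, mpc, num_steps, policy_decimation=10):
        mpc_weights = None
        for step in range(num_steps):
            obs = robot.get_observation()  # proprioceptive state
            if step % policy_decimation == 0:
                # Higher level: the policy network predicts MPC weight parameters.
                mpc_weights = policy(obs)
            # Lower level: the MPC maps the latest weights and state to joint torques.
            torques = mpc.run(obs, weights=mpc_weights)
            robot.apply_torques(torques)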

The MPC controller is adapted from Cheetah Software but written in Python, and it completely opens up the interface between sensor data and motor commands, so that the controller can be easily ported to any mainstream simulator.
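
Concretely, that interface can be pictured as two plain data structures, one per direction; the field names below are hypothetical, not the repo's actual definitions. Any simulator that can fill in the sensor side and consume the command side can host the controller.

    from dataclasses import dataclass
    import numpy as np

    # Hypothetical interface types, for illustration only.
    @dataclass
    class SensorData:
        body_orientation: np.ndarray   # base quaternion, shape (4,)
        body_angular_vel: np.ndarray   # base angular velocity, shape (3,)
        joint_positions: np.ndarray    # shape (12,) for a quadruped
        joint_velocities: np.ndarray   # shape (12,)

    @dataclass
    class MotorCommand:
        feedforward_torques: np.ndarray  # shape (12,)
        q_des: np.ndarray                # joint position targets
        qd_des: np.ndarray               # joint velocity targets
        kp: np.ndarray                   # joint-space PD gains
        kd: np.ndarray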

The RL training utilizes NVIDIA Isaac Gym for parallel simulation with the Unitree Robotics Aliengo model, and the trained policy has been transferred from simulation to a real Aliengo robot (sim2real is not included in this codebase).

Frameworks

<img src="images/controller_blocks.png" width=700>

Dependencies

All Python dependencies are listed in <environment.yml> (see Installation below).

Installation

  1. Clone this repository:
    git clone git@github.com:silvery107/rl-mpc-locomotion.git
    
  2. Initialize submodules:
     git submodule update --init
    
    Or use the --recurse-submodules option in step 1 to clone the submodules at the same time.
  3. Create the conda environment:
    conda env create -f environment.yml
    
  4. Install rsl_rl at commit 2ad79cf under the <extern> folder:
    cd extern/rsl_rl
    pip install -e .
    
  5. Compile the Python binding of the MPC solver:
    pip install -e .
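
    As an optional sanity check (assuming Isaac Gym and PyTorch are installed in the same conda environment, and noting that Isaac Gym Preview must be imported before torch), verify that the key packages import cleanly:

      python -c "import isaacgym; import torch; import rsl_rl; print(torch.cuda.is_available())"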
    

Quick Start

  1. Play the MPC controller on Aliengo:

    python RL_MPC_Locomotion.py --robot=Aliengo
    

    All supported robot types are Go1, A1 and Aliengo.

    Note that you need to plug in an Xbox-style gamepad to control the robot, or pass --disable-gamepad. The controller mode defaults to Fsm (Finite State Machine); you can also try Min for a minimal MPC controller without the FSM. A minimal FSM sketch follows the keymap below.

    • Gamepad keymap

      Press LB to switch gait types between Trot, Walk and Bound.

      Press RB to switch FSM states between Locomotion and Recovery Stand.
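
    For intuition, switching between these two FSM states could look like the minimal sketch below (hypothetical code; the repo's actual FSM implementation lives in the controller code):

      from enum import Enum, auto

      # Hypothetical two-state FSM, for illustration only.
      class FSMState(Enum):
          LOCOMOTION = auto()
          RECOVERY_STAND = auto()

      def next_state(state: FSMState, rb_pressed: bool) -> FSMState:
          """Toggle between Locomotion and Recovery Stand on an RB press."""
          if not rb_pressed:
              return state
          return (FSMState.RECOVERY_STAND if state is FSMState.LOCOMOTION
                  else FSMState.LOCOMOTION)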

  2. Train a new policy:

    cd RL_Environment
    python train.py task=Aliengo headless=False
    

    Press the v key to disable viewer updates, and press it again to resume. Set headless=True to train without rendering.

    TensorBoard support is available; run tensorboard --logdir runs.

  3. Load a pretrained checkpoint:

    python train.py task=Aliengo checkpoint=runs/Aliengo/nn/Aliengo.pth test=True num_envs=4
    

    Set test=False to continue training.

  4. Run the pretrained weight policy for the MPC controller on Aliengo. First set bridge_MPC_to_RL to False in <MPC_Controller/Parameters.py>, then run:

    python RL_MPC_Locomotion.py --robot=Aliengo --mode=Policy --checkpoint=path/to/ckpt
    

    If no checkpoint is given, it will load the latest run.
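
    For illustration, one way such "latest run" resolution can work is to pick the most recently modified checkpoint under the runs directory; the snippet below sketches that idea and is not the repo's actual loading code:

      from pathlib import Path
      import torch

      def latest_checkpoint(runs_dir="runs"):
          """Return the most recently modified .pth file under runs_dir, if any."""
          ckpts = sorted(Path(runs_dir).rglob("*.pth"), key=lambda p: p.stat().st_mtime)
          return ckpts[-1] if ckpts else None

      ckpt = latest_checkpoint()
      if ckpt is not None:
          state_dict = torch.load(ckpt, map_location="cpu")  # inspect or reload on CPU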

Roadmap

<img src="images/MPC_block.png" width=600> <img src="images/training_data_flow.png" width=400>

User Notes

Gallery

<img src="images/4_cheetah_trot.gif" width=500> <img src="images/RL_Paraller_16.gif" width=500> <img src="images/MPC_Stair_Demo.gif" width=500> <img src="images/MPC_Sim2Real.gif" width=500 tag> <a name="sim2real_anchor"></a>