MTRL
Multi Task RL Algorithms
Contents
- Introduction
- Setup
- Usage
- Documentation
- Contributing to MTRL
- Community
Introduction
MTRL is a library of multi-task reinforcement learning algorithms. It has two main components:
- Building blocks and agents that implement the multi-task RL algorithms.
- Experiment setups that enable training/evaluation on different setups.
Together, these two components enable use of MTRL across different environments and setups.
List of publications & submissions using MTRL (please create a pull request to add the missing entries):
- Learning Robust State Abstractions for Hidden-Parameter Block MDPs
- Multi-Task Reinforcement Learning with Context-based Representations
  - We use the `af8417bfc82a3e249b4b02156518d775f29eb289` commit for the MetaWorld environments for our experiments.
License
- MTRL uses the MIT License.
Citing MTRL
If you use MTRL in your research, please use the following BibTeX entry:
@Misc{Sodhani2021MTRL,
  author = {Shagun Sodhani and Amy Zhang},
  title = {MTRL - Multi Task RL Algorithms},
  howpublished = {Github},
  year = {2021},
  url = {https://github.com/facebookresearch/mtrl}
}
Setup
- Clone the repository: `git clone git@github.com:facebookresearch/mtrl.git`.
- Install dependencies: `pip install -r requirements/dev.txt`
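To sanity-check the setup, the snippet below (a minimal sketch, assuming it is run from the root of the cloned repository so that the local `mtrl` package is on the import path) confirms that the package and its dependencies load:

```python
# Minimal sanity check for the setup: import the mtrl package and report
# where it was loaded from. Assumes the working directory is the repository
# root, so the local mtrl/ package is importable.
import mtrl

print(mtrl.__file__)
```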
Usage
- MTRL supports 8 different multi-task RL algorithms as described here.
- MTRL supports multi-task environments using MTEnv. These environments include MetaWorld and multi-task variants of the DMControl Suite (see the sketch after this list).
- Refer to the tutorial to get started with MTRL.
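For illustration, here is a minimal sketch of creating a multi-task environment through MTEnv. This is not MTRL's own API; the environment id `MT-MetaWorld-MT10-v0` and the dict observation keys are assumptions based on MTEnv's documented interface, so check the MTEnv docs for the ids available in your installed version.

```python
# A sketch of the MTEnv interface that MTRL consumes (not MTRL's own API).
# Assumes MTEnv and its MetaWorld dependencies are installed, and that the
# environment id "MT-MetaWorld-MT10-v0" is registered in your MTEnv version.
from mtenv import make

env = make("MT-MetaWorld-MT10-v0")
obs = env.reset()  # MTEnv observations are dicts, e.g. {"env_obs": ..., "task_obs": ...}
action = env.action_space.sample()
obs, reward, done, info = env.step(action)
env.close()
```

MTRL's experiment setups wrap such environments for training and evaluation; refer to the tutorial above for the supported configurations.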
Documentation
Contributing to MTRL
There are several ways to contribute to MTRL.
- Use MTRL in your research.
- Contribute a new algorithm. We currently support 8 multi-task RL algorithms and are looking forward to adding more algorithms.
- Check out the good-first-issues on GitHub and contribute to fixing those issues.
- Check out additional details here.
Community
Ask questions in the chat or in GitHub issues.