<img src="https://img.shields.io/badge/%F0%9F%A4%97%20models-hugging%20face-F8D521" alt="Hugging Face models">
<br>
<p align="center">
  <a href="https://skrl.readthedocs.io">
    <img width="300rem" src="https://raw.githubusercontent.com/Toni-SM/skrl/main/docs/source/_static/data/logo-light-mode.png">
  </a>
</p>
<h2 align="center" style="border-bottom: 0 !important;">SKRL - Reinforcement Learning library</h2>
<br>

**skrl** is an open-source, modular library for reinforcement learning written in Python (on top of PyTorch and JAX) and designed with a focus on modularity, readability, simplicity, and transparency of algorithm implementation. In addition to supporting the OpenAI Gym, Farama Gymnasium and PettingZoo, Google DeepMind and Brax environment interfaces, among others, it allows loading and configuring NVIDIA Isaac Lab (as well as Isaac Gym and Omniverse Isaac Gym) environments, enabling the simultaneous training of agents by scopes (subsets of environments among all available environments), which may or may not share resources, in the same run.
<br>Please visit the documentation for usage details and examples:
<strong>https://skrl.readthedocs.io</strong>
<br><br>Note: This project is under active, continuous development. Make sure you always have the latest version. Visit the *develop* branch or its documentation to access the latest updates to be released.
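As a quick taste of the API, below is a minimal, illustrative sketch of training a PPO agent on a Gymnasium environment with the PyTorch backend. It assumes the skrl >= 1.0 package layout; the environment, network sizes, and hyperparameters are placeholders, so see the documentation for complete, up-to-date examples.

```python
import gymnasium as gym
import torch
import torch.nn as nn

from skrl.agents.torch.ppo import PPO, PPO_DEFAULT_CONFIG
from skrl.envs.wrappers.torch import wrap_env
from skrl.memories.torch import RandomMemory
from skrl.models.torch import DeterministicMixin, GaussianMixin, Model
from skrl.trainers.torch import SequentialTrainer

# wrap a Gymnasium environment (the wrapper type is auto-detected)
env = wrap_env(gym.make("Pendulum-v1"))
device = env.device


# stochastic (Gaussian) policy: compute() returns mean actions and log-std
class Policy(GaussianMixin, Model):
    def __init__(self, observation_space, action_space, device):
        Model.__init__(self, observation_space, action_space, device)
        GaussianMixin.__init__(self)
        self.net = nn.Sequential(nn.Linear(self.num_observations, 64), nn.Tanh(),
                                 nn.Linear(64, self.num_actions))
        self.log_std_parameter = nn.Parameter(torch.zeros(self.num_actions))

    def compute(self, inputs, role):
        return self.net(inputs["states"]), self.log_std_parameter, {}


# deterministic value function: compute() returns the state value
class Value(DeterministicMixin, Model):
    def __init__(self, observation_space, action_space, device):
        Model.__init__(self, observation_space, action_space, device)
        DeterministicMixin.__init__(self)
        self.net = nn.Sequential(nn.Linear(self.num_observations, 64), nn.Tanh(),
                                 nn.Linear(64, 1))

    def compute(self, inputs, role):
        return self.net(inputs["states"]), {}


models = {"policy": Policy(env.observation_space, env.action_space, device),
          "value": Value(env.observation_space, env.action_space, device)}

# rollout memory sized to match the PPO horizon (illustrative values)
memory = RandomMemory(memory_size=1024, num_envs=env.num_envs, device=device)

cfg = PPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 1024  # must match the memory size above

agent = PPO(models=models, memory=memory, cfg=cfg,
            observation_space=env.observation_space,
            action_space=env.action_space, device=device)

# run the training loop
trainer = SequentialTrainer(cfg={"timesteps": 10000, "headless": True},
                            env=env, agents=agent)
trainer.train()
```

The same structure applies to the other supported interfaces: swap `gym.make(...)` for an Isaac Lab, Brax, or PettingZoo environment and `wrap_env` selects the appropriate wrapper.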
### Citing this library
To cite this library in publications, please use the following reference:
```bibtex
@article{serrano2023skrl,
  author  = {Antonio Serrano-Muñoz and Dimitrios Chrysostomou and Simon Bøgh and Nestor Arana-Arexolaleiba},
  title   = {skrl: Modular and Flexible Library for Reinforcement Learning},
  journal = {Journal of Machine Learning Research},
  year    = {2023},
  volume  = {24},
  number  = {254},
  pages   = {1--9},
  url     = {http://jmlr.org/papers/v24/23-0112.html}
}
```