<div align="center">
  <img src="docs/img/AllenAct.svg" width="350" />
  <br>
  <i><h3>An open source framework for research in Embodied AI</h3></i>
  <hr/>
</div>

AllenAct is a modular and flexible learning framework designed with a focus on the unique requirements of Embodied-AI research. It provides first-class support for a growing collection of embodied environments, tasks, and algorithms; includes reproductions of state-of-the-art models; and ships with extensive documentation, tutorials, start-up code, and pre-trained models.
AllenAct is built and backed by the Allen Institute for AI (AI2). AI2 is a non-profit institute with the mission to contribute to humanity through high-impact AI research and engineering.
## Features & Highlights
- Support for multiple environments: Includes the iTHOR, RoboTHOR, and Habitat embodied environments, as well as grid-worlds including MiniGrid.
- Task Abstraction: Tasks and environments are decoupled in AllenAct, enabling researchers to easily implement a large variety of tasks in the same environment.
- Algorithms: Support for a variety of on-policy algorithms including PPO, DD-PPO, A2C, Imitation Learning, and DAgger, as well as off-policy training such as offline imitation learning.
- Sequential Algorithms: It is trivial to experiment with different sequences of training routines, which are often the key to successful policies; see the sketch after the table below.
- Simultaneous Losses: Easily combine various losses while training models (e.g. use an external self-supervised loss while optimizing a PPO loss).
- Multi-agent support: Support for multi-agent algorithms and tasks.
- Visualizations: Out-of-the-box support for visualizing first- and third-person views for agents, as well as intermediate model tensors, integrated into TensorBoard.
- Pre-trained models: Code and models for a number of standard Embodied AI tasks.
- Tutorials: Start-up code and extensive tutorials to help ramp up to Embodied AI.
- First-class PyTorch support: One of the few RL frameworks to target PyTorch.
- Arbitrary action spaces: Support for both discrete and continuous actions.
| Environments | Tasks | Algorithms |
|---|---|---|
| iTHOR, RoboTHOR, Habitat, MiniGrid, OpenAI Gym | PointNav, ObjectNav, MiniGrid tasks, Gym Box2D tasks | A2C, PPO, DD-PPO, DAgger, Off-policy Imitation |
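To give a flavor of how sequential training routines and simultaneous losses fit together, below is a hedged sketch of a pipeline that warms a policy up with imitation learning and then switches to PPO. The class names, import paths, and keyword arguments shown (`TrainingPipeline`, `PipelineStage`, `PPO`, `Imitation`, etc.) follow common AllenAct tutorial usage but are assumptions here, not guaranteed signatures; consult the documentation for the version you have installed.

```python
# Hedged sketch: a two-stage training pipeline (imitation warm-up, then PPO).
# Import paths and constructor arguments follow typical AllenAct tutorial usage
# and may differ between framework versions.
import torch.optim as optim

from allenact.utils.experiment_utils import (
    Builder,
    PipelineStage,
    TrainingPipeline,
)
from allenact.algorithms.onpolicy_sync.losses.ppo import PPO, PPOConfig
from allenact.algorithms.onpolicy_sync.losses.imitation import Imitation


def training_pipeline() -> TrainingPipeline:
    return TrainingPipeline(
        # Losses are registered by name; stages refer to them below.
        named_losses={"imitation_loss": Imitation(), "ppo_loss": PPO(**PPOConfig)},
        pipeline_stages=[
            # Stage 1: pure imitation learning for the first 1M steps.
            PipelineStage(loss_names=["imitation_loss"], max_stage_steps=int(1e6)),
            # Stage 2: PPO. Listing several names in `loss_names` would instead
            # optimize those losses simultaneously within a single stage.
            PipelineStage(loss_names=["ppo_loss"], max_stage_steps=int(1e7)),
        ],
        optimizer_builder=Builder(optim.Adam, dict(lr=3e-4)),
        num_steps=128,  # rollout length per worker
        num_mini_batch=4,
        update_repeats=4,
        max_grad_norm=0.5,
        gamma=0.99,
        use_gae=True,
        gae_lambda=0.95,
        save_interval=int(1e6),
        metric_accumulate_interval=10000,
    )
```

In typical usage, a function like this is returned from the `training_pipeline` method of an experiment configuration; changing the order or contents of `pipeline_stages` is all it takes to experiment with a different training sequence.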
## Contributions
We welcome contributions from the greater community. If you would like to make such a contribution, we recommend first submitting an issue describing your proposed improvement; this lets us validate your suggestion before you spend a great deal of time on it. Improvements and bug fixes should be made via a pull request from your fork of the repository at https://github.com/allenai/allenact.
All code in this repository is subject to formatting, documentation, and type-annotation guidelines. For more details, please see our contribution guidelines.
## Acknowledgments
This work builds upon the pytorch-a2c-ppo-acktr library of Ilya Kostrikov and uses some data structures from FAIR's habitat-lab. We would like to thank Dustin Schwenk for his help with the public release of the framework.
## License
AllenAct is MIT licensed, as found in the LICENSE file.
## Team
AllenAct is an open-source project built by members of the PRIOR research group at the Allen Institute for Artificial Intelligence (AI2).
<div align="left">
  <a href="//prior.allenai.org/" target="_blank">
    <img src="docs/img/ai2-prior.svg" width="400">
  </a>
  <br>
</div>

## Citation
If you use this work, please cite our paper:
```bibtex
@article{AllenAct,
  author  = {Luca Weihs and Jordi Salvador and Klemen Kotar and Unnat Jain and Kuo-Hao Zeng and Roozbeh Mottaghi and Aniruddha Kembhavi},
  title   = {AllenAct: A Framework for Embodied AI Research},
  year    = {2020},
  journal = {arXiv preprint arXiv:2008.12760},
}
```