<a href="https://lgtm.com/projects/g/enlite-ai/maze/context:python">
<img src="https://img.shields.io/lgtm/grade/python/g/enlite-ai/maze.svg?logo=lgtm&logoWidth=18" alt="Language grade: Python" />
</a>
# Applied Reinforcement Learning with Python
MazeRL is an application-oriented Deep Reinforcement Learning (RL) framework addressing real-world decision problems. Our vision is to cover the complete development life cycle of RL applications, ranging from simulation engineering to agent development, training, and deployment.
This is a preliminary, non-stable release of Maze. It is not yet complete, and not all of our interfaces have settled. Hence, there might be some breaking changes on our way towards the first stable release.
## Spotlight Features
Below we list a few selected Maze features.
- Design and visualize your policy and value networks with the Perception Module. It is based on PyTorch and provides a large variety of neural network building blocks and model styles. Quickly compose powerful representation learners from building blocks such as dense, convolutional, graph-convolutional and attention layers, recurrent architectures, action and observation masking, self-attention, etc.
- Create the conditions for efficient RL training without writing boilerplate code, e.g. by supporting best practices like pre-processing and normalizing your observations.
- Maze supports advanced environment structures reflecting the requirements of real-world industrial decision problems, such as multi-step and multi-agent scenarios. You can, of course, also work with existing Gym-compatible environments (see the sketch after this list).
- Use the provided Maze trainers (A2C, PPO, IMPALA, SAC, Evolution Strategies), which support dictionary action and observation spaces as well as multi-step training (auto-regressive policies). Or stick to your favorite tools and trainers by combining Maze with other RL frameworks.
- Out-of-the-box support for advanced training workflows such as imitation learning from teacher policies and policy fine-tuning.
- Keep even complex application and experiment configuration manageable with the Hydra Config System.
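
To give a feel for the Gym interoperability mentioned above, here is a minimal sketch of wrapping a standard Gym environment for use with Maze. The `GymMazeEnv` import path is an assumption based on the Maze documentation and may differ between versions:

```python
# Minimal sketch: wrapping a Gym environment into a Maze environment.
# NOTE: the import path below is an assumption; check the Maze docs
# for the exact location in your installed version.
from maze.core.wrappers.maze_gym_env_wrapper import GymMazeEnv

env = GymMazeEnv("CartPole-v0")

# Maze works with dictionary observation and action spaces, so the
# wrapped environment yields dict observations and expects dict actions.
obs = env.reset()
action = env.action_space.sample()
obs, reward, done, info = env.step(action)
```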
## Get Started
- Make sure PyTorch is installed, then get the latest released version of Maze as follows:

  ```bash
  pip install -U maze-rl
  ```

  Read more about other options like the installation of the latest development version.

  :zap: We encourage you to start with Python 3.7, as many popular environments like Atari or Box2D cannot easily be installed in newer Python environments. Maze itself supports newer Python versions, but for Python 3.9 you might have to install additional binary dependencies manually.
- Alternatively, you can work with Maze in a <img alt="Docker" src="https://upload.wikimedia.org/wikipedia/commons/thumb/4/4e/Docker_%28container_engine%29_logo.svg/1280px-Docker_%28container_engine%29_logo.svg.png" width="100" height="22" /> container with pre-installed JupyterLab: run

  ```bash
  docker run -p 8888:8888 enliteai/maze:playground
  ```

  and open localhost:8888 in your browser. This loads JupyterLab.

- To see Maze in action, check out a first example.
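  As a hedged illustration of what such a first training run can look like via Maze's high-level Python API: the `RunContext` class, its import path, and its keyword arguments are assumptions based on the Maze documentation and may differ in this preliminary release.

  ```python
  # Sketch of a first training run via the high-level API.
  # NOTE: RunContext, its import path, and its arguments are assumptions;
  # consult the Maze documentation for your installed version.
  from maze.api.run_context import RunContext

  # Train PPO on CartPole for a single epoch as a quick smoke test.
  rc = RunContext(algorithm="ppo", overrides={"env.name": "CartPole-v0"})
  rc.train(n_epochs=1)
  ```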
- Try your own Gym env or visit our Maze step-by-step tutorial.
- Clone this project template repo to start your own Maze project.
## Learn more about Maze
The documentation is the starting point to learn more about the underlying concepts; most importantly, it also provides code snippets and minimal working examples to get you started quickly.
- The Workflow section guides you through typical tasks in an RL project.
- Policy and Value Networks introduces you to the Perception Module, shows how to customize action spaces and the underlying action probability distributions, and presents two styles of policy and value network construction:

  - Template models are composed directly from an environment's observation and action spaces, allowing you to train with suitable agent networks on a new environment within minutes.

  - Custom models give you the full flexibility of application-specific models, built either from the provided Maze building blocks or directly with PyTorch (see the sketch below).
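
  As an illustration of the custom-model route, here is a generic PyTorch policy network over a dictionary observation space. The class and its key names are purely illustrative, not a Maze API:

  ```python
  import torch
  import torch.nn as nn

  class DictObsPolicyNet(nn.Module):
      """Illustrative policy network for a dict observation space
      (not a Maze API; shows the kind of model you might plug in)."""

      def __init__(self, obs_dim: int, n_actions: int):
          super().__init__()
          self.body = nn.Sequential(
              nn.Linear(obs_dim, 64), nn.ReLU(),
              nn.Linear(64, 64), nn.ReLU(),
          )
          self.action_head = nn.Linear(64, n_actions)

      def forward(self, obs: dict) -> dict:
          # Dict in, dict out: matches the dictionary-space style above.
          h = self.body(obs["observation"])
          return {"action": self.action_head(h)}  # action logits

  # Usage: a batch of 4 observations with 8 features each.
  net = DictObsPolicyNet(obs_dim=8, n_actions=3)
  logits = net({"observation": torch.randn(4, 8)})["action"]
  ```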
- Learn more about core concepts and structures, such as the Maze environment hierarchy and the Maze event system, which provides a convenient way to collect statistics and KPIs, enables flexible reward formulation, and supports offline analysis.
- Structured Environments and Action Masking introduces you to a general concept that can greatly improve the performance of trained agents in practical RL problems (see the sketch below).
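
To illustrate the general idea of action masking, independent of Maze's specific API, here is a small PyTorch sketch that removes invalid actions from a categorical distribution by setting their logits to negative infinity:

```python
import torch

def masked_logits(logits: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Assign -inf to the logits of invalid actions so that, after
    softmax, those actions receive zero probability.

    logits: (batch, n_actions) raw action scores
    mask:   (batch, n_actions) boolean, True where the action is valid
    """
    return logits.masked_fill(~mask, float("-inf"))

# Example: 3 actions, the last one is invalid in the current state.
logits = torch.tensor([[1.0, 2.0, 3.0]])
mask = torch.tensor([[True, True, False]])
probs = torch.softmax(masked_logits(logits, mask), dim=-1)
print(probs)  # the invalid action gets probability 0
```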
## License
Maze is freely available for research and non-commercial use. A commercial license is available; if you are interested, please contact us via our company website or write us an email.
We believe in Open Source principles and aim to transition Maze into a commercial Open Source project, releasing larger parts of the framework under a permissive license in the near future.