# Gym Environments and Agents for Autonomous Driving

<p align="center">
  <img src="https://github.com/bark-simulator/bark-ml/raw/master/docs/images/bark_ml_logo.png" width="65%" alt="BARK-ML" />
</p>

Try it on Google Colab!
BARK-ML offers various OpenAI-Gym environments and reinforcement learning agents for autonomous driving.
Install BARK-ML using `pip install bark-ml`.
## Gym Environments
### Highway Scenario
env = gym.make("highway-v0")
The highway scenario is a curved road with four lanes where all vehicles are controlled by the intelligent driver model (IDM). For more details, have a look here.
Available environments:

- `highway-v0`: Continuous highway environment
- `highway-v1`: Discrete highway environment
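The difference between the two variants is most visible in their action spaces. The sketch below (a minimal example, assuming BARK-ML has been installed via pip) simply instantiates both environments and prints their spaces; the continuous variant typically exposes a `Box` action space, while the discrete variant exposes a `Discrete` one.

```python
import gym

# registers bark-ml environments
import bark_ml.environments.gym  # pylint: disable=unused-import

# Continuous variant: low-level control inputs such as acceleration and steering-rate.
continuous_env = gym.make("highway-v0")
print("highway-v0 action space:     ", continuous_env.action_space)
print("highway-v0 observation space:", continuous_env.observation_space)

# Discrete variant: a finite set of actions (e.g. motion primitives).
discrete_env = gym.make("highway-v1")
print("highway-v1 action space:     ", discrete_env.action_space)
```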
### Merging Scenario
env = gym.make("merging-v0")
In the merging scenario, the ego vehicle starts on the right lane and its goal is placed on the left lane. All other vehicles are controlled by the MOBIL model. For more details, have a look here.
Available environments:

- `merging-v0`: Continuous merging environment
- `merging-v1`: Discrete merging environment
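Since the discrete variant only needs an action index, a random-action rollout is a quick way to check that the environment runs end to end. The snippet below is a minimal sketch that samples actions from the environment's action space; a learned policy would replace `env.action_space.sample()` in practice.

```python
import gym

# registers bark-ml environments
import bark_ml.environments.gym  # pylint: disable=unused-import

env = gym.make("merging-v1")  # discrete merging environment

state = env.reset()
done = False
total_reward = 0.0
while not done:
    # Pick a random discrete action; replace this with a trained agent.
    action = env.action_space.sample()
    state, reward, done, info = env.step(action)
    total_reward += reward
print(f"Episode finished, return: {total_reward}")
```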
### Intersection / Unprotected Left Turn
env = gym.make("intersection-v0")
In the intersection scenario, the ego vehicle starts on the bottom-right lane and its goal is set on the top-left lane (an unprotected left turn). For more details, have a look here.
Available environments:

- `intersection-v0`: Continuous intersection environment
- `intersection-v1`: Discrete intersection environment
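The same Gym interface also allows running several episodes in a row, for example to get a rough feel for the reward signal of the unprotected left turn. The sketch below keeps acceleration and steering-rate at zero, as in the getting-started example further down, so it only illustrates the episode loop, not a sensible driving policy.

```python
import gym
import numpy as np

# registers bark-ml environments
import bark_ml.environments.gym  # pylint: disable=unused-import

env = gym.make("intersection-v0")  # continuous intersection environment

for episode in range(3):
    state = env.reset()
    done = False
    episode_return = 0.0
    while not done:
        # Zero acceleration and steering-rate (not a meaningful policy).
        action = np.array([0.0, 0.0])
        state, reward, done, info = env.step(action)
        episode_return += reward
    print(f"Episode {episode}: return {episode_return}")
```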
## Getting Started
An example using the OpenAI-Gym interface is shown below:
```python
import gym
import numpy as np

# registers bark-ml environments
import bark_ml.environments.gym  # pylint: disable=unused-import

env = gym.make("merging-v0")

initial_state = env.reset()
done = False
while not done:
    action = np.array([0., 0.])  # acceleration and steering-rate
    observed_state, reward, done, info = env.step(action)
    print(f"Observed state: {observed_state}, "
          f"Action: {action}, Reward: {reward}, Done: {done}")
```
## Building From Source
Clone the repository using `git clone https://github.com/bark-simulator/bark-ml`, then install and activate the virtual Python environment:

```bash
bash utils/install.sh
source utils/dev_into.sh
```
Once inside the virtual Python environment, you can build any of the libraries or execute binaries within BARK-ML using Bazel. To run the getting-started example from above, use:

```bash
bazel run //examples:continuous_env
```
## Documentation
Read the documentation online.
## Publications
- Graph Neural Networks and Reinforcement Learning for Behavior Generation in Semantic Environments (IV 2020)
- BARK: Open Behavior Benchmarking in Multi-Agent Environments (IROS 2020)
- Counterfactual Policy Evaluation for Decision-Making in Autonomous Driving (IROS 2020, PLC Workshop)
## License
BARK-ML code is distributed under the MIT License.