# MILE

This is the PyTorch implementation for inference and training of the world model and driving policy as described in:

**Model-Based Imitation Learning for Urban Driving**

Anthony Hu, Gianluca Corrado, Nicolas Griffiths, Zak Murez, Corina Gurau, Hudson Yeo, Alex Kendall, Roberto Cipolla and Jamie Shotton.

NeurIPS 2022<br/> Blog post

<p align="center">
  <img src="https://github.com/wayveai/mile/releases/download/v1.0/mile_driving_in_imagination.gif" alt="MILE driving in imagination">
  <br/>
  Our model can drive in the simulator with a driving plan predicted entirely from imagination.
  <br/>
  From left to right we visualise: RGB input, ground truth bird's-eye view semantic segmentation, and predicted bird's-eye view segmentation.
  <br/>
  When the RGB input becomes sepia-coloured, the model is driving in imagination.
</p>

If you find our work useful, please consider citing:

```bibtex
@inproceedings{mile2022,
  title     = {Model-Based Imitation Learning for Urban Driving},
  author    = {Anthony Hu and Gianluca Corrado and Nicolas Griffiths and Zak Murez and Corina Gurau
               and Hudson Yeo and Alex Kendall and Roberto Cipolla and Jamie Shotton},
  booktitle = {Advances in Neural Information Processing Systems ({NeurIPS})},
  year      = {2022}
}
```

## ⚙ Setup

Set `CARLA_ROOT` so the scripts in this repository can find your CARLA installation and its Python API:

```bash
#!/bin/bash

# Path to your local CARLA installation
export CARLA_ROOT="<path_to_carla_root>"
# Make the CARLA Python API importable
export PYTHONPATH="${CARLA_ROOT}/PythonAPI/carla/"
```
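The Python dependencies are easiest to manage in a dedicated environment. As a sketch, assuming the repository ships a conda environment file named `environment.yml` and that the environment it defines is called `mile` (check the repository for the actual names):

```bash
# Assumed file and environment names; adjust to what the repository provides
conda env create -f environment.yml
conda activate mile
```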

## 🏄 Evaluation
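Evaluation runs the driving policy in closed loop inside the simulator, so a CARLA server must be running first. As a minimal sketch using the `CARLA_ROOT` set in Setup (port and off-screen rendering flags vary between CARLA versions, so consult the documentation for your version):

```bash
# Launch the CARLA simulator configured in Setup.
# Add port / off-screen rendering flags as needed for your CARLA version.
"${CARLA_ROOT}/CarlaUE4.sh"
```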

## 📖 Data Collection

## 🏊 Training

To train the model from scratch:
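As a sketch only, assuming the repository exposes a top-level training script named `train.py` (a hypothetical name; substitute the repository's actual entry point), with the environment active and the CARLA paths from Setup exported:

```bash
# Hypothetical entry point; replace with the repository's actual training script.
python train.py
```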

## 🙌 Credits

Thanks to the authors of *End-to-End Urban Driving by Imitating a Reinforcement Learning Coach* for providing a gym wrapper around CARLA that makes it easy to use, as well as the RL expert used to collect data.