# LAV: Learning from All Vehicles
Dian Chen, Philipp Krähenbühl
CVPR 2022 (arXiv 2203.11934)
This repo contains code for the paper *Learning from All Vehicles*.
It distills a model that performs joint perception, multi-modal prediction, and planning, and we hope it serves as a great starter kit for end-to-end autonomous driving research.
## Reference
If you find our repo, dataset, or paper useful, please cite us as:

```bibtex
@inproceedings{chen2022lav,
  title={Learning from all vehicles},
  author={Chen, Dian and Kr{\"a}henb{\"u}hl, Philipp},
  booktitle={CVPR},
  year={2022}
}
```
## Updates
- We have slightly updated the code, optimized for leaderboard inference speed with temporal LiDAR scans (`team_code_v2/lav_agent_fast.py`).
- We have released the agent and weights for our leaderboard submission.
## Demo Video
Also check out our website!
## Getting Started
- To run CARLA and train the models, make sure you are using a machine with at least a mid-range GPU.
- Please follow INSTALL.md to set up the environment.
## Data Collection
The data collection scripts reside in the `data-collect` branch of the repo.
They log the dataset to the path specified in `config.yaml`.
Specify the number of runners and the towns you would like to collect in `data_collect.py`.
The script supports parallel data collection with `ray`, as sketched after the command below.
You can monitor the routes being collected on wandb. Example: https://wandb.ai/trilobita/lav_data?workspace=user-trilobita
```bash
python data_collect.py --num-runners=8
```
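For intuition, here is a minimal, hypothetical sketch of how `ray` fans collection out over parallel runners; the function and argument names below are illustrative, not the actual API of `data_collect.py`:

```python
import ray

ray.init()

@ray.remote
def collect_route(town, route_id):
    # Hypothetical worker: drive one route in the given town and
    # write its frames to the dataset path from config.yaml.
    print(f"collecting {town}, route {route_id}")
    return route_id

# Fan out 8 parallel runners, mirroring --num-runners=8.
towns = ["Town01", "Town03", "Town04", "Town06"]
futures = [collect_route.remote(towns[i % len(towns)], i) for i in range(8)]
ray.get(futures)  # block until every runner finishes
```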
## Training
We adopt an LBC-style staged privileged distillation framework, sketched below. Please refer to TRAINING.md for more details.
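As a rough mental model only (this is not the repo's actual training code, and every module and tensor below is a placeholder), staged privileged distillation first trains a privileged teacher on ground-truth state, then freezes it and uses its outputs to supervise a sensor-based student:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder networks and shapes, for illustration only.
teacher = nn.Linear(64, 10)   # privileged net: ground-truth BEV -> waypoints
student = nn.Linear(128, 10)  # sensor net: camera/LiDAR features -> waypoints

gt_bev = torch.randn(4, 64)        # privileged ground-truth input
sensor_feat = torch.randn(4, 128)  # raw sensor features
gt_waypoints = torch.randn(4, 10)  # ground-truth future waypoints

# Stage 1: train the privileged teacher directly on ground truth.
teacher_loss = F.l1_loss(teacher(gt_bev), gt_waypoints)
teacher_loss.backward()

# Stage 2: freeze the teacher and distill into the student; because the
# teacher sees ground-truth state, every vehicle in the scene can serve
# as a supervision signal.
with torch.no_grad():
    target = teacher(gt_bev)
student_loss = F.l1_loss(student(sensor_feat), target)
student_loss.backward()
```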
## Evaluation
We additionally provide example trained weights in the `weights` folder if you would like to evaluate directly.
They are trained on Town01, 03, 04, and 06.
Make sure you launch CARLA with the `-vulkan` flag.
The agent file for the leaderboard submission is contained in `team_code_v2`.
We additionally provide a faster version of our agent that uses `torch.jit` and moves several CPU-heavy computations (point painting, etc.) to the GPU.
This code resides in `team_code_v2/lav_agent_fast.py`. It also logs visualizations to the `wandb` cloud, which you can optionally view for debugging.
Known issues for the fast agent:
- Since the TorchScript trace file is generated with `pytorch==1.7.1`, it might be incompatible with later PyTorch versions. Please refer to #23 for more details on how to regenerate the trace files locally; a minimal tracing sketch is also shown below. The amount of acceleration also depends on the hardware platform.
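For reference, regenerating a trace under your local PyTorch generally follows the pattern below; the network and input shape are placeholders, so substitute the actual LAV modules and inputs (see #23 for the repo-specific steps):

```python
import torch
import torch.nn as nn

# Placeholder network standing in for an actual LAV module.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()).eval()
example = torch.randn(1, 3, 256, 256)  # dummy input with the expected shape

# Tracing under your local PyTorch keeps the file compatible with it.
with torch.no_grad():
    traced = torch.jit.trace(model, example)
traced.save("lav_trace.pt")

# The saved trace can then be reloaded at inference time.
reloaded = torch.jit.load("lav_trace.pt")
```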
Inside the root LAV repo, run:
```bash
ROUTES=[PATH TO ROUTES] ./leaderboard/scripts/run_evaluation.sh
```
Use `ROUTES=assets/routes_lav_valid.xml` to run our ablation routes, or `ROUTES=leaderboard/data/routes_valid.xml` for the validation routes provided by the leaderboard.
You can also try `ROUTES=assets/routes_lav_train.xml` to test on some harder training routes.
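For example, evaluating on the ablation routes looks like this:

```bash
ROUTES=assets/routes_lav_valid.xml ./leaderboard/scripts/run_evaluation.sh
```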
## Dataset
We also release our LAV dataset. Download the dataset HERE.
See TRAINING.md for more details.
## Acknowledgements
We thank Tianwei Yin for the pillar generation code. The ERFNet code is taken from the official ERFNet repo.
## License
This repo is released under the Apache 2.0 License (please refer to the LICENSE file for details).