PytorchAutoDrive: Framework for self-driving perception
PytorchAutoDrive is a pure Python framework that includes semantic segmentation and lane detection models based on PyTorch. Here we provide full-stack support, from research (model training, testing, and fair benchmarking by simply writing configs) to application (visualization and model deployment).
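As an illustration of the config-driven workflow, here is a minimal sketch of what an experiment description could look like; the keys and values below are hypothetical placeholders, and the actual schema is defined by the config files shipped in this repo.

```python
# Hypothetical config sketch (placeholder keys/values for illustration only);
# real experiments use the config files provided in this repo.
model = dict(name='ERFNet', num_classes=19)                    # model to build
dataset = dict(name='Cityscapes', root='/path/to/cityscapes')  # data location
train = dict(batch_size=8, epochs=150, mixed_precision=True)   # training options
```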
Paper: Rethinking Efficient Lane Detection via Curve Modeling (CVPR 2022)
Poster: PytorchAutoDrive: Toolkit & Fair Benchmark for Autonomous Driving Research (PyTorch Developer Day 2021)
This repository is under active development, but results with uploaded models are stable. For legacy code users, please check deprecations for changes.
A demo video from ERFNet:
Highlights
Various methods on a wide range of backbones, config-based implementations, modular and easy-to-understand code, image/keypoint loading, transformations and visualizations, mixed precision training, TensorBoard logging, and deployment support with ONNX and TensorRT.
Models from this repo are faster to train (trainable on a single card) and often perform better than other implementations; see the wiki for reasons and technical specifications of models.
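For readers unfamiliar with mixed precision training, the snippet below is a generic PyTorch sketch of the technique using the standard torch.cuda.amp API, not the framework's internal trainer; the model and data are placeholders.

```python
# Generic mixed-precision training step with torch.cuda.amp (illustration only;
# the framework's own trainer handles this for you when enabled in configs).
import torch

model = torch.nn.Linear(10, 2).cuda()                 # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()

x = torch.randn(8, 10, device='cuda')                 # placeholder batch
y = torch.randint(0, 2, (8,), device='cuda')

optimizer.zero_grad()
with torch.cuda.amp.autocast():                       # forward pass in mixed precision
    loss = torch.nn.functional.cross_entropy(model(x), y)
scaler.scale(loss).backward()                         # scale loss to avoid underflow
scaler.step(optimizer)
scaler.update()
```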
Supported datasets:
Task | Dataset |
---|---|
semantic segmentation | PASCAL VOC 2012 |
semantic segmentation | Cityscapes |
semantic segmentation | GTAV* |
semantic segmentation | SYNTHIA* |
lane detection | CULane |
lane detection | TuSimple |
lane detection | LLAMAS |
lane detection | BDD100K (In progress) |
* The UDA baseline setup, with Cityscapes val set as validation.
Supported models:
Task | Backbone | Model/Method |
---|---|---|
semantic segmentation | ResNet-101 | FCN |
semantic segmentation | ResNet-101 | DeeplabV2 |
semantic segmentation | ResNet-101 | DeeplabV3 |
semantic segmentation | - | ENet |
semantic segmentation | - | ERFNet |
lane detection | ENet, ERFNet, VGG16, ResNets (18, 34, 50, 101), MobileNets (V2, V3-Large), RepVGGs (A0, A1, B0, B1g2, B2), Swin (Tiny) | Baseline |
lane detection | ERFNet, VGG16, ResNets (18, 34, 50, 101), RepVGGs (A1) | SCNN |
lane detection | ResNets (18, 34, 50, 101), MobileNets (V2, V3-Large), ERFNet | RESA |
lane detection | ERFNet, ENet | SAD (Postponed) |
lane detection | ERFNet | PRNet (In progress) |
lane detection | ResNets (18, 34, 50, 101), ResNet18-reduced | LSTR |
lane detection | ResNets (18, 34) | LaneATT |
lane detection | ResNets (18, 34) | BézierLaneNet |
Model Zoo
We provide solid results (average/best/detailed), training times, shell scripts, and trained models, all available for download in MODEL_ZOO.md.
Installation
Please prepare the environment and code following INSTALL.md. Then follow the instructions in DATASET.md to set up datasets.
Getting Started
Get started with LANEDETECTION.md for lane detection.
Get started with SEGMENTATION.md for semantic segmentation.
Visualization Tools
Refer to VISUALIZATION.md for a visualization & inference tutorial covering image and video inputs.
Benchmark Tools
Refer to BENCHMARK.md for a benchmarking tutorial, including FPS tests and FLOPs & memory counts for each supported model.
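As a rough illustration of how an FPS test can be run (the benchmark scripts in BENCHMARK.md are the reference method; the model and input size below are placeholders):

```python
# Rough, generic FPS measurement sketch (assumed setup, placeholder model/input).
import time
import torch

model = torch.nn.Conv2d(3, 19, 3, padding=1).cuda().eval()  # placeholder model
x = torch.randn(1, 3, 360, 640, device='cuda')              # placeholder input size

with torch.no_grad():
    for _ in range(10):                                      # warm-up iterations
        model(x)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(100):                                     # timed iterations
        model(x)
    torch.cuda.synchronize()
    fps = 100 / (time.perf_counter() - start)
print(f'{fps:.1f} FPS')
```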
Deployment
Refer to DEPLOY.md for ONNX and TensorRT deployment support.
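For orientation, below is a minimal ONNX export sketch using the standard torch.onnx API; it uses a placeholder model and input shape, while DEPLOY.md documents the framework's actual export and TensorRT workflow.

```python
# Minimal ONNX export sketch with the standard torch.onnx API (illustration only).
import torch

model = torch.nn.Conv2d(3, 19, 3, padding=1).eval()   # placeholder model
dummy_input = torch.randn(1, 3, 360, 640)              # placeholder input shape

torch.onnx.export(
    model, dummy_input, 'model.onnx',
    input_names=['input'], output_names=['output'],
    opset_version=11
)
```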
Advanced Tutorial
Check out ADVANCED_TUTORIAL.md for advanced use cases and how to code in PytorchAutoDrive.
Contributing
Refer to CONTRIBUTING.md for contribution guides.
Citation
If you feel this framework substantially helped your research, or you want a reference when using our results, please cite the following paper, which marked the official release of PytorchAutoDrive:
@inproceedings{feng2022rethinking,
title={Rethinking efficient lane detection via curve modeling},
author={Feng, Zhengyang and Guo, Shaohua and Tan, Xin and Xu, Ke and Wang, Min and Ma, Lizhuang},
booktitle={Computer Vision and Pattern Recognition},
year={2022}
}
Credits:
PytorchAutoDrive is maintained by Zhengyang Feng (voldemortX) and Shaohua Guo (cedricgsh).
Contributors (GitHub ID): kalkun, LittleJohnKhan, francis0407, PannenetsF, bjzhb666
People who sponsored us (e.g., with hardware): Lizhuang Ma, Xin Tan, Junshu Tang (junshutang), Fengqi Liu (FengqiLiu1221)