MMDetection with Robustness Benchmarking
Important Notice!
The pull request adding this code to mmdetection has been merged! Please use the original mmdetection toolbox for robustness benchmarking!
We will keep this fork around for a while but may eventually delete it.
Introduction
This repository contains a fork of the mmdetection toolbox with code to test models on corrupted images. It was created as part of the Robust Detection Benchmark Suite and has been submitted to mmdetection as a pull request.
The benchmarking toolkit is part of the paper Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming.
For more information on how to evaluate models on corrupted images, and for results for a set of standard models, please refer to ROBUSTNESS_BENCHMARKING.md.
mmdetection Readme
For information on mmdetection please refer to the mmdetection readme.
License
This project is released under the Apache 2.0 license.
Robustness Benchmark
Results for standard models are available in ROBUSTNESS_BENCHMARKING.md. For up-to-date results, have a look at the official benchmark homepage.
Installation
Please refer to INSTALL.md for installation and dataset preparation.
Get Started
Please see GETTING_STARTED.md for the basic usage of MMDetection.
Evaluating Robustness
Please see ROBUSTNESS_BENCHMARKING.md for instructions on robustness benchmarking.
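As a rough illustration of what the corrupted test images look like, the sketch below applies the benchmark corruptions to a single image using the companion imagecorruptions package. The package, its corrupt / get_corruption_names API, and the file paths are assumptions made for this example; ROBUSTNESS_BENCHMARKING.md remains the authoritative reference for running the actual evaluation.

```python
# Minimal sketch (assumption): corrupt one image with the companion
# `imagecorruptions` package (install with `pip install imagecorruptions`).
import numpy as np
from imageio import imread, imwrite                      # assumed available
from imagecorruptions import corrupt, get_corruption_names  # assumed API

image = imread('demo/demo.jpg')  # hypothetical example image (HxWx3 uint8)

# Apply every corruption type at a mid-range severity and save the results.
for name in get_corruption_names():
    corrupted = corrupt(image, corruption_name=name, severity=3)
    imwrite(f'demo/demo_{name}.jpg', np.asarray(corrupted))
```

In the benchmark, each corruption is evaluated at severity levels 1 (mild) through 5 (severe).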
Citation
If you use this toolbox or benchmark in your research, please cite this project.
@article{michaelis2019winter,
title={Benchmarking Robustness in Object Detection:
Autonomous Driving when Winter is Coming},
author={Michaelis, Claudio and Mitzkus, Benjamin and
Geirhos, Robert and Rusak, Evgenia and
Bringmann, Oliver and Ecker, Alexander S. and
Bethge, Matthias and Brendel, Wieland},
journal={arXiv:1907.07484},
year={2019}
}
Contact
This repo is currently maintained by Claudio Michaelis (@michaelisc).
For questions regarding mmdetection please visit the official repository.