Towards Building Self-Aware Object Detectors via Reliable Uncertainty Quantification and Calibration

The official implementation of Self-aware Object Detectors. Our implementation is based on mmdetection.

Towards Building Self-Aware Object Detectors via Reliable Uncertainty Quantification and Calibration, Kemal Oksuz, Tom Joy, Puneet K. Dokania, CVPR 2023 (Appendices)

What is Self-aware Object Detection?

The standard approach to evaluating an object detector assumes that the test images are drawn from the same distribution as the training examples. The common metric for such evaluation is Average Precision, which indicates how accurate a detector is. In practical applications, however, test samples can differ substantially from the training ones. For example, scenes may contain objects similar to those in the training set but in very different environments, a situation known as domain shift; or the scenes may differ completely from the training distribution, referred to here as out-of-distribution scenes. Considering these cases, we design the Self-aware Object Detection (SAOD) task. As illustrated in the figure below, a self-aware object detector first decides whether it can reliably operate on a scene, represented by the binary variable a. If it accepts the image, it produces accurate and calibrated detections. We evaluate such detectors considering both the reliability of this acceptance decision and the accuracy and calibration of the resulting detections.

To enable this task, we introduce datasets and performance measures, and we investigate uncertainty quantification and calibration of object detectors. Accordingly, this repository provides the tools necessary both for evaluating on the self-aware object detection task and for building self-aware object detectors as described in our paper.

<p align="center"> <img src="resources/thumbnail.png" width="1000"> </p>
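As a toy illustration of the acceptance decision above, the following sketch aggregates detection-level uncertainties into a single image-level uncertainty and thresholds it to produce the binary variable a. The aggregation rule (mean uncertainty of the top-k most confident detections) and the threshold are illustrative assumptions, not the exact procedure from the paper.

```python
# Toy illustration of the accept/reject decision (the binary variable a).
# Both the aggregation rule and the threshold below are assumptions made
# for illustration; they are not the paper's exact procedure.
import numpy as np

def accept_image(detection_scores, unc_threshold=0.05, top_k=3):
    """Return a = 1 (accept) if the aggregated image-level uncertainty is low."""
    if len(detection_scores) == 0:
        return 0  # nothing confidently detected: reject the scene
    top_scores = np.sort(np.asarray(detection_scores))[::-1][:top_k]
    image_uncertainty = (1.0 - top_scores).mean()
    return int(image_uncertainty < unc_threshold)

print(accept_image([0.99, 0.97, 0.96]))  # familiar scene -> 1 (accept)
print(accept_image([0.30, 0.20]))        # uncertain scene -> 0 (reject)
```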

1. Specification of Dependencies and Preparation

Preparing MMDetection

Please see get_started.md for requirements and installation of mmdetection.

Additional Dependencies and Preparation for this Repository

Having completed the standard preparation of mmdetection, please make the following additional changes:

# Remove the standard pycocotools
pip uninstall pycocotools

# Install pycocotools with LRP Error
pip install "git+https://github.com/kemaloksuz/LRP-Error.git#subdirectory=pycocotools"

Preparing Datasets

Please see SAOD datasets for configuration of the datasets.

2. Used Conventional Object Detectors

Here, we provide the models that we use in this project. You can either download and use the trained models, or train them yourself using the provided configuration files.

Using Trained Detectors

Conventional Detectors Trained using COCO training set (General Object Detection Use-case)

| Method | AP | LRP [1] | Config | Model |
|--------|----|---------|--------|-------|
| Faster R-CNN | 39.9 | 59.5 | config | model |
| RS R-CNN | 42.0 | 58.1 | config | model |
| ATSS | 42.8 | 58.5 | config | model |
| Deformable DETR | 44.3 | 55.9 | config | model |
| NLL R-CNN | 40.1 | 59.5 | config | model |
| Energy-Score R-CNN | 40.3 | 59.4 | config | model |

Conventional Detectors Trained using nuImages training set (AV Object Detection Use-case)

| Method | AP | LRP [1] | Config | Model |
|--------|----|---------|--------|-------|
| Faster R-CNN | 55.0 | 43.6 | config | model |
| ATSS | 56.9 | 43.2 | config | model |

Note: While AP is a higher-is-better measure, LRP is an error measure, hence lower is better.

All models are included here. After downloading the models, please place them under the work_dirs directory. For example, the Faster R-CNN model should be placed at work_dirs/faster_rcnn_r50_fpn_straug_3x_coco/epoch_36.pth.
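To verify that a downloaded checkpoint is intact and in the expected location, it can be loaded with PyTorch. The keys printed below are the ones mmdetection checkpoints typically contain; treat the exact contents as an assumption.

```python
# Verify a downloaded checkpoint loads correctly (path shown for the
# Faster R-CNN example above; adjust for other models).
import torch

ckpt = torch.load(
    "work_dirs/faster_rcnn_r50_fpn_straug_3x_coco/epoch_36.pth",
    map_location="cpu",
)
# mmdetection checkpoints typically contain 'meta' and 'state_dict' keys.
print(sorted(ckpt.keys()))
```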

Training the Detectors

Alternatively, the models can be trained. The configuration files of all models listed above can be found in the configs/saod/training folder. As an example, to train Faster R-CNN on 8 GPUs, use the following command:

 tools/dist_train.sh configs/saod/training/faster_rcnn_r50_fpn_straug_3x_coco.py 8

This repository also includes implementations of probabilistic object detectors minimizing the Negative Log-Likelihood [2] or the Energy Score [3].

3. Inference with Detection-level Uncertainties Attached

configs/saod/test includes all of the configuration files that we use for testing. In that directory, there is a separate subdirectory for each detector containing the test configuration files needed to make that detector self-aware and to evaluate it. Specifically, there are five such configuration files for each detector (see the Faster R-CNN directory from our general object detection setting for an example).

Obtaining SAODets and evaluating them require COCO-style json outputs with detection-level uncertainties attached, which can be obtained using these configuration files. To illustrate again on Faster R-CNN, following this configuration file you will find entropy and Dempster-Shafer estimates for each detection. As a result, each detection in the resulting json file is represented by a bounding box, a class id, a detection confidence score and a set of pre-defined uncertainty values. Note that 1-p_i is obtained from the detection confidence score, hence it is not explicitly stated as an uncertainty type in the configuration file. The uncertainty estimation methods supported by this repository are implemented in this script.

To obtain the desired json files, we provide a bash script template that can be utilized as:

tools/dist_test_for_saod.sh dir_name model_path num_gpus

Continuing with the Faster R-CNN example, the following command generates the required 8 json files of detections (using the 5 configuration files above) under the detections directory:

tools/dist_test_for_saod.sh faster_rcnn_r50_fpn_straug_3x_coco work_dirs/faster_rcnn_r50_fpn_straug_3x_coco/epoch_36.pth 2
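Once generated, a detections json can be inspected with a few lines of Python. The file name and the extra uncertainty keys below are assumptions based on the description above; check the files under the detections directory for the exact names.

```python
# Inspect a generated detections json with uncertainties attached.
# The file name and uncertainty key names here are assumptions; check the
# files under the detections directory for the exact ones.
import json

with open("detections/faster_rcnn_r50_fpn_straug_3x_coco.bbox.json") as f:
    detections = json.load(f)

det = detections[0]
print(det["bbox"], det["category_id"], det["score"])  # standard COCO fields
extra = {k: v for k, v in det.items()
         if k not in ("image_id", "bbox", "category_id", "score")}
print(extra)  # detection-level uncertainty values, e.g. entropy
```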

4. Making Object Detectors Self-aware and Their Evaluation

Given detection-level uncertainties on the eight necessary data splits, we can now make object detectors self-aware and evaluate them. To do so with the configuration recommended in our paper, please run the following command:

 python tools/analysis_tools/saod_evaluation.py faster_rcnn_r50_fpn_straug_3x_coco --cls_unc_type 0 --calibrate linear_regression --benchmark True

Note: The resulting DAQ might differ by 0.1-0.2 points from our results in Table 6 of the paper. This is because we generate corruptions on the fly and have fixed a minor bug in the code.

Furthermore, the saod_evaluation script has several optional arguments facilitating the reproduction of most of our ablation experiments and analyses in the paper. Please check the parse_args() function in this script for the specification of the arguments. To illustrate a few:

 python tools/analysis_tools/saod_evaluation.py faster_rcnn_r50_fpn_straug_3x_coco --loc_unc_type 3 --max_det_num 2 --calibrate isotonic_regression --benchmark True
 python tools/analysis_tools/saod_evaluation.py faster_rcnn_r50_fpn_straug_3x_coco --cls_unc_type 0 --image_level_threshold 0.95 --detection_level_threshold 0.50 --calibrate identity --benchmark True

5. Other Features Provided in this Repository

Evaluate only OOD performance using AUROC

 python tools/analysis_tools/saod_evaluation.py faster_rcnn_r50_fpn_straug_3x_coco --ood_evaluate True

Evaluate only accuracy and calibration using LRP Error and LaECE

python tools/analysis_tools/saod_evaluation.py faster_rcnn_r50_fpn_straug_3x_coco --calibrate isotonic_regression
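For intuition, LaECE bins detections by confidence and, within each bin, compares the average confidence against precision multiplied by the average IoU of the true positives. The sketch below is a simplified illustration of that idea, assuming per-detection (score, is_tp, iou) triples are already available; the repository computes the actual, class-wise metric.

```python
# Simplified sketch of the idea behind LaECE: within each confidence bin,
# average confidence should match precision x mean IoU of true positives.
# Assumes (score, is_tp, iou) per detection is given; this is not the
# repository's implementation.
import numpy as np

def laece_sketch(scores, is_tp, ious, num_bins=25):
    scores = np.asarray(scores, dtype=float)
    is_tp = np.asarray(is_tp, dtype=bool)
    ious = np.asarray(ious, dtype=float)
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    error = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (scores > lo) & (scores <= hi)
        if not in_bin.any():
            continue
        precision = is_tp[in_bin].mean()
        tp_in_bin = in_bin & is_tp
        mean_iou = ious[tp_in_bin].mean() if tp_in_bin.any() else 0.0
        # weight each bin by its share of all detections
        error += in_bin.mean() * abs(scores[in_bin].mean() - precision * mean_iou)
    return error

# Example usage with toy values: a low-confidence false positive still
# contributes to the calibration error.
print(laece_sketch([0.9, 0.8, 0.3], [1, 1, 0], [0.92, 0.78, 0.0]))
```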

Plot Reliability Diagrams

python tools/analysis_tools/saod_evaluation.py faster_rcnn_r50_fpn_straug_3x_coco --calibrate isotonic_regression --plot_reliability_diagram True

Standard COCO Style Evaluation using AP and LRP Error

python tools/analysis_tools/saod_evaluation.py faster_rcnn_r50_fpn_straug_3x_coco --evaluate_top_100 True

How to Cite

Please cite our paper if you benefit from it or from this repository:

@inproceedings{saod,
       title = {Towards Building Self-Aware Object Detectors via Reliable Uncertainty Quantification and Calibration},
       author = {Kemal Oksuz and Tom Joy and Puneet K. Dokania},
       booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
       year = {2023}
}

References

[1] One Metric to Measure Them All: Localisation Recall Precision (LRP) for Evaluating Visual Detection Tasks, TPAMI 2022 (earlier version in ECCV 2018)
[2] Bounding Box Regression with Uncertainty for Accurate Object Detection, CVPR 2019
[3] Estimating and Evaluating Regression Predictive Uncertainty in Deep Object Detectors, ICLR 2021