Robust Object Detection via Instance-Level Temporal Cycle Confusion

This repo contains the implementation of the ICCV 2021 paper, Robust Object Detection via Instance-Level Temporal Cycle Confusion.

Building reliable object detectors that are robust to domain shifts, such as various changes in context, viewpoint, and object appearances, is critical for real-world applications. In this work, we study the effectiveness of auxiliary self-supervised tasks to improve out-of-distribution generalization of object detectors. Inspired by the principle of maximum entropy, we introduce a novel self-supervised task, instance-level cycle confusion (CycConf), which operates on the region features of the object detectors. For each object, the task is to find the most different object proposals in the adjacent frame in a video and then cycle back to itself for self-supervision. CycConf encourages the object detector to explore invariant structures across instances under various motion, which leads to improved model robustness in unseen domains at test time. We observe consistent out-of-domain performance improvements when training object detectors in tandem with self-supervised tasks on various domain adaptation benchmarks with static images (Cityscapes, Foggy Cityscapes, Sim10K) and large-scale video datasets (BDD100K and the Waymo Open Dataset).
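
The core objective can be pictured with a small amount of PyTorch. The code below is a soft, self-contained sketch of the cycle idea, not the repository's implementation: it assumes two batches of RoI features from adjacent frames, softly associates each proposal with the least similar proposals in the next frame (the "confusion" step), associates back by ordinary similarity, and asks the round trip to land on the original instance. The actual proposal selection and hyperparameters live in this repo's code.

import torch
import torch.nn.functional as F

def cycle_confusion_sketch(feats_t, feats_t1):
    """feats_t: (N, D) RoI features at frame t; feats_t1: (M, D) at frame t+1."""
    feats_t = F.normalize(feats_t, dim=1)
    feats_t1 = F.normalize(feats_t1, dim=1)
    affinity = feats_t @ feats_t1.t()          # (N, M) pairwise cosine similarities
    # Forward hop: softly pick the *most different* proposals at frame t+1.
    forward = F.softmax(-affinity, dim=1)      # (N, M)
    # Backward hop: associate frame t+1 back to frame t by ordinary similarity.
    backward = F.softmax(affinity.t(), dim=1)  # (M, N)
    cycle = forward @ backward                 # (N, N) round-trip distribution
    # Self-supervision: each instance should cycle back to itself.
    target = torch.arange(feats_t.size(0), device=feats_t.device)
    return F.nll_loss(torch.log(cycle + 1e-8), target)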

Installation

Environment

Dependencies

  1. Create a virtual environment.
  2. Install dependencies:

pip3 install torch torchvision

Check out previous PyTorch versions here.

Alternatively, you can install pre-built Detectron2 (example for CUDA 10.2, PyTorch 1.9):

python -m pip install detectron2 -f \
    https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.9/index.html

More details can be found here.
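
If the install succeeded, a quick sanity check from Python (the exact versions printed will vary with your CUDA/PyTorch pairing):

import torch, torchvision, detectron2

print("torch:", torch.__version__, "cuda:", torch.version.cuda)
print("torchvision:", torchvision.__version__, "detectron2:", detectron2.__version__)
print("GPU available:", torch.cuda.is_available())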

Data Preparation

BDD100K

  1. Download the BDD100K MOT 2020 dataset (MOT 2020 Images and MOT 2020 Labels) and the detection labels (Detection 2020 Labels) here; a detailed description is available here. Put the BDD100K data under datasets/ in this repo. After downloading, the folder structure should look like this:
├── datasets
│   ├── bdd100k
│   │   ├── images
│   │   │    └── track
│   │   │        ├── train
│   │   │        ├── val
│   │   │        └── test
│   │   └── labels
│   │        ├── box_track_20
│   │        │   ├── train
│   │        │   └── val
│   │        └── det_20
│   │            ├── det_train.json
│   │            └── det_val.json
│   ├── waymo

  2. Convert the labels of the MOT 2020 data (train and val sets) into COCO format by running the commands below (an optional sanity check follows):

python3 datasets/bdd100k2coco.py -i datasets/bdd100k/labels/box_track_20/val/ -o datasets/bdd100k/labels/track/bdd100k_mot_val_coco.json -m track
python3 datasets/bdd100k2coco.py -i datasets/bdd100k/labels/box_track_20/train/ -o datasets/bdd100k/labels/track/bdd100k_mot_train_coco.json -m track
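
As an optional check that the conversion produced valid COCO files, you can load one with pycocotools (pulled in by Detectron2); the path matches the val command above. This is an illustration, not part of the repo's pipeline:

from pycocotools.coco import COCO

coco = COCO("datasets/bdd100k/labels/track/bdd100k_mot_val_coco.json")
print(len(coco.imgs), "images,", len(coco.anns), "boxes")
print("categories:", [c["name"] for c in coco.loadCats(coco.getCatIds())])
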
  3. Split the original videos into different domains (time of day) by running the following command:
python3 -m datasets.domain_splits_bdd100k

This script first extracts the domain attributes from the BDD100K detection set and then maps them to the tracking-set sequences. After processing, you will see two additional folders, domain_splits and per_seq, under datasets/bdd100k/labels/box_track_20. The domain splits of all attributes in the BDD100K detection set can be found at datasets/bdd100k/labels/domain_splits.

Waymo

  1. Download the Waymo dataset here. Put the Waymo raw data under datasets/ in this repo. After downloading, the folder structure should look like this:
├── datasets
│   ├── bdd100k
│   ├── waymo
│   │   └── raw

  2. Convert the raw TFRecord data files into COCO format by running:

python3 -m datasets.waymo2coco

Note that this script takes a long time to run; be prepared to keep it running for over a day. The generic reading pattern sketched below shows where that time goes.
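
For reference, this is the standard waymo_open_dataset reading pattern that any converter has to run for every frame of every record; it is an illustration, not the repo's script, and the file name is a placeholder:

import tensorflow as tf
from waymo_open_dataset import dataset_pb2 as open_dataset

# Each record holds one ~20-second driving segment; the raw set has hundreds.
for data in tf.data.TFRecordDataset("datasets/waymo/raw/<segment>.tfrecord"):
    frame = open_dataset.Frame()
    frame.ParseFromString(bytearray(data.numpy()))
    # frame.images holds the camera views; frame.camera_labels the 2D boxes.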

  3. Convert the BDD100K dataset labels from the original 8 classes into 3, matching the Waymo dataset's classes, by running the following command (an illustrative sketch of the remapping follows):
python3 -m datasets.convert_bdd_3cls
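
For a sense of the mechanics, the sketch below rewrites a COCO-format annotation file into a collapsed label space. The authoritative 8-to-3 mapping is whatever datasets/convert_bdd_3cls.py implements; the MAPPING here is an illustrative assumption, not the repo's:

import json

# Assumed 8-to-3 class mapping, for illustration only.
MAPPING = {"pedestrian": "pedestrian",
           "rider": "cyclist", "bicycle": "cyclist", "motorcycle": "cyclist",
           "car": "vehicle", "truck": "vehicle", "bus": "vehicle", "train": "vehicle"}

def remap(in_json, out_json):
    data = json.load(open(in_json))
    new_id = {n: i + 1 for i, n in enumerate(sorted(set(MAPPING.values())))}
    old_to_new = {c["id"]: new_id[MAPPING[c["name"]]]
                  for c in data["categories"] if c["name"] in MAPPING}
    data["annotations"] = [dict(a, category_id=old_to_new[a["category_id"]])
                           for a in data["annotations"] if a["category_id"] in old_to_new]
    data["categories"] = [{"id": i, "name": n} for n, i in sorted(new_id.items())]
    json.dump(data, open(out_json, "w"))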

Get Started

For joint training,

python3 -m tools.train_net --config-file [config_file] --num-gpus 8

For evaluation,

python3 -m tools.train_net --config-file [config_file] --num-gpus [num] --eval-only

This command loads the latest checkpoint from the output folder specified in the config (OUTPUT_DIR). If you want to specify a different checkpoint or evaluate the pretrained checkpoints, you can run:

python3 -m tools.train_net --config-file [config_file] --num-gpus [num] --eval-only MODEL.WEIGHTS [PATH_TO_CHECKPOINT]
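
Once you have a checkpoint, single-image inference also works with Detectron2's standard DefaultPredictor. This is a generic Detectron2 usage sketch, assuming a plain Detectron2-style config (repo-specific config keys, if any, may require the repo's own setup); both paths are placeholders for your [config_file] and [PATH_TO_CHECKPOINT]:

import cv2
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file("path/to/config.yaml")    # your [config_file]
cfg.MODEL.WEIGHTS = "path/to/checkpoint.pth"  # your [PATH_TO_CHECKPOINT]
predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("example.jpg"))
print(outputs["instances"].pred_boxes, outputs["instances"].scores)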

Benchmark Results

Dataset Statistics

Dataset          Split  Seq.  Frames/Seq.  Boxes  Classes
BDD100K Daytime  train  757   204          1.82M  8
                 val    108   204          287K   8
BDD100K Night    train  564   204          895K   8
                 val    71    204          137K   8
Waymo Open Data  train  798   199          3.64M  3
                 val    202   199          886K   3

Out-of-Domain Evaluation

BDD100K Daytime to Night. The base detector is Faster R-CNN with ResNet-50.

Model                AP     AP50   AP75   APs   APm    APl    Config  Checkpoint
Faster R-CNN         17.84  31.35  17.68  4.92  16.15  35.56  link    link
+ Rotation           18.58  32.95  18.15  5.16  16.93  36.00  link    link
+ Jigsaw             17.47  31.22  16.81  5.08  15.80  33.84  link    link
+ Cycle Consistency  18.35  32.44  18.07  5.04  17.07  34.85  link    link
+ Cycle Confusion    19.09  33.58  19.14  5.70  17.68  35.86  link    link

BDD100K Night to Daytime.

Model                AP     AP50   AP75   APs   APm    APl    Config  Checkpoint
Faster R-CNN         19.14  33.04  19.16  5.38  21.42  40.34  link    link
+ Rotation           19.07  33.25  18.83  5.53  21.32  40.06  link    link
+ Jigsaw             19.22  33.87  18.71  5.67  22.35  38.57  link    link
+ Cycle Consistency  18.89  33.50  18.31  5.82  21.01  39.13  link    link
+ Cycle Confusion    19.57  34.34  19.26  6.06  22.55  38.95  link    link

Waymo Front Left to BDD100K Night.

Model                AP     AP50   AP75   APs   APm    APl    Config  Checkpoint
Faster R-CNN         10.07  19.62  9.05   2.67  10.81  18.62  link    link
+ Rotation           11.34  23.12  9.65   3.53  11.73  21.60  link    link
+ Jigsaw             9.86   19.93  8.40   2.77  10.53  18.82  link    link
+ Cycle Consistency  11.55  23.44  10.00  2.96  12.19  21.99  link    link
+ Cycle Confusion    12.27  26.01  10.24  3.44  12.22  23.56  link    link

Waymo Front Right to BDD100K Night.

Model                AP     AP50   AP75   APs   APm    APl    Config  Checkpoint
Faster R-CNN         8.65   17.26  7.49   1.76  8.29   19.99  link    link
+ Rotation           9.25   18.48  8.08   1.85  8.71   21.08  link    link
+ Jigsaw             8.34   16.58  7.26   1.61  8.01   18.09  link    link
+ Cycle Consistency  9.11   17.92  7.98   1.78  9.36   19.18  link    link
+ Cycle Confusion    9.99   20.58  8.30   2.18  10.25  20.54  link    link

Citation

If you find this repository useful in your research, please consider citing our paper.

@inproceedings{wang2021robust,
  title={Robust Object Detection via Instance-Level Temporal Cycle Confusion},
  author={Wang, Xin and Huang, Thomas E and Liu, Benlin and Yu, Fisher and Wang, Xiaolong and Gonzalez, Joseph E and Darrell, Trevor},
  booktitle={International Conference on Computer Vision (ICCV)},
  year={2021}
}