Detectron2 implementation of DA-RetinaNet
This is the implementation of our Image and Vision Computing 2021 work 'An unsupervised domain adaptation scheme for single-stage artwork recognition in cultural sites'. The aim is to reduce the gap between the source and target distributions, improving the object detector's performance on the target domain when training and test data belong to different distributions. The original paper can be found here.<br> If you want to use this code with your dataset, please follow the guide below. <br>
Please leave a star ⭐ and cite the following paper if you use this repository for your project.
```bibtex
@article{PASQUALINO2021104098,
    title = "An unsupervised domain adaptation scheme for single-stage artwork recognition in cultural sites",
    journal = "Image and Vision Computing",
    pages = "104098",
    year = "2021",
    issn = "0262-8856",
    doi = "https://doi.org/10.1016/j.imavis.2021.104098",
    author = "Giovanni Pasqualino and Antonino Furnari and Giovanni Signorello and Giovanni Maria Farinella",
}
```
DA-RetinaNet Architecture
<center><img src='DA-RetinaNet.png' width=90%/></center>

Installation
You can use this repo in one of the following three ways:<br> NB: Detectron2 0.6 is required; this code will not work with other versions.
Google Colab
Quickstart here 👉 <br>
Or load and run `DA-RetinaNet.ipynb` on Google Colab, following the instructions inside the notebook.<br>
Detectron2 on your PC
Follow the official guide to install Detectron2 0.6<br>
Or<br>
Download the official Detectron2 0.6 release from here<br>
Unzip the file and rename the folder to `detectron2`<br>
Run `python -m pip install -e detectron2`
Detectron2 via Dockerfile
Follow these instructions:
```bash
cd docker/
# Build
docker build -t detectron2:v0 .
# Launch
docker run --gpus all -it --shm-size=8gb -v /home/yourpath/:/home/yourpath --name=name_container detectron2:v0
```
If you exit from the container you can restart it using:
```bash
docker start name_container
docker exec -it name_container /bin/bash
```
Dataset
Create the Cityscapes-Foggy Cityscapes dataset following the instructions available here<br> The UDA-CH dataset is available here
Data Preparation
If you want to use this code with your dataset, arrange it in COCO or PASCAL VOC format. <br>
For COCO annotations, register your dataset inside the `uda_train.py` script using:

```python
register_coco_instances("dataset_name_source_training", {}, "path_annotations", "path_images")
register_coco_instances("dataset_name_target_training", {}, "path_annotations", "path_images")
register_coco_instances("dataset_name_target_test", {}, "path_annotations", "path_images")
```
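As a concrete but hypothetical example, the registration might look like the sketch below. The dataset names and paths are placeholders (nothing in this repository ships them), and the `DatasetCatalog` call is only a quick sanity check that Detectron2 can read the annotations back.

```python
# Hypothetical names and paths: adapt them to your own COCO-format dataset.
from detectron2.data import DatasetCatalog, MetadataCatalog
from detectron2.data.datasets import register_coco_instances

register_coco_instances("museum_synthetic_train", {}, "datasets/synthetic/annotations.json", "datasets/synthetic/images")
register_coco_instances("museum_real_train", {}, "datasets/real/annotations_train.json", "datasets/real/images_train")
register_coco_instances("museum_real_test", {}, "datasets/real/annotations_test.json", "datasets/real/images_test")

# Sanity check: load the parsed annotations and print the class names.
dicts = DatasetCatalog.get("museum_real_test")
print(len(dicts), MetadataCatalog.get("museum_real_test").thing_classes)
```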
For PASCAL VOC annotations, register your dataset inside the `cityscape_train.py` script using:

```python
register_pascal_voc("city_trainS", "cityscape/VOC2007/", "train_s", 2007, ['car','person','rider','truck','bus','train','motorcycle','bicycle'])
register_pascal_voc("city_trainT", "cityscape/VOC2007/", "train_t", 2007, ['car','person','rider','truck','bus','train','motorcycle','bicycle'])
register_pascal_voc("city_testT", "cityscape/VOC2007/", "test_t", 2007, ['car','person','rider','truck','bus','train','motorcycle','bicycle'])
```
You need to replace the parameters inside the `register_pascal_voc()` function according to your dataset name and classes. <br>
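For orientation, here is a hedged sketch with hypothetical dataset names, root directory, splits, and classes (none of which ship with this repository):

```python
# Hypothetical VOC-style layout rooted at my_dataset/VOC2007/, with split files
# my_dataset/VOC2007/ImageSets/Main/{train_s,train_t,test_t}.txt.
from detectron2.data.datasets import register_pascal_voc

CLASSES = ["statue", "painting", "vase"]  # replace with your own class list

for name, split in [("my_trainS", "train_s"), ("my_trainT", "train_t"), ("my_testT", "test_t")]:
    register_pascal_voc(name, "my_dataset/VOC2007/", split, 2007, CLASSES)
```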
Training
Replace the `dense_detector.py` script at the path `detectron2/modeling/meta_arch/` with our `dense_detector.py`. <br>
Do the same for the `fpn.py` file at the path `detectron2/modeling/backbone/`. <br>
Run the script `uda_train.py` for COCO annotations or `cityscape_train.py` for PASCAL VOC annotations. <br>
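For context only, the sketch below shows roughly where the registered dataset names and the config end up in a plain Detectron2 RetinaNet training loop. It is not this repository's `uda_train.py` (which additionally plugs in the DA-RetinaNet adaptation modules); all names, paths, and hyperparameter values are placeholders.

```python
# Minimal plain-RetinaNet training sketch (no domain adaptation), for orientation only.
import os

from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Hypothetical dataset registrations (see "Data Preparation" above).
register_coco_instances("museum_synthetic_train", {}, "datasets/synthetic/annotations.json", "datasets/synthetic/images")
register_coco_instances("museum_real_test", {}, "datasets/real/annotations_test.json", "datasets/real/images_test")

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/retinanet_R_101_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("museum_synthetic_train",)
cfg.DATASETS.TEST = ("museum_real_test",)
cfg.MODEL.RETINANET.NUM_CLASSES = 16   # set to the number of classes in your dataset
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/retinanet_R_101_FPN_3x.yaml")
cfg.SOLVER.MAX_ITER = 60000            # illustrative value
cfg.OUTPUT_DIR = "./output"
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```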
The model trained on Cityscapes to Foggy Cityscapes is available at this link: <br> DA-RetinaNet_Cityscapes <br>
The models trained on the proposed UDA-CH dataset are available at these links: <br> DA-RetinaNet <br> DA-RetinaNet-CycleGAN <br>
Testing
To test the model, load the new weights, set the number of iterations to 0, and rerun the same script used for training.
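Alternatively, to quickly run one of the released checkpoints on a single image, a hedged inference sketch along these lines can be used. It assumes the modified `dense_detector.py`/`fpn.py` files are already in place and that the config matches the checkpoint; the class count, weight path, and image path are placeholders.

```python
# Quick single-image inference with a trained checkpoint (paths are placeholders).
import cv2

from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/retinanet_R_101_FPN_3x.yaml"))
cfg.MODEL.RETINANET.NUM_CLASSES = 16                     # must match the trained model
cfg.MODEL.WEIGHTS = "path/to/DA-RetinaNet_weights.pth"   # downloaded checkpoint
cfg.MODEL.RETINANET.SCORE_THRESH_TEST = 0.5

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("path/to/test_image.jpg"))
print(outputs["instances"].pred_classes, outputs["instances"].scores)
```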
Results on Cityscapes -> Foggy Cityscapes
<p> Results of the adaptation between the Cityscapes and Foggy Cityscapes datasets. The performance scores of the methods marked with the “*” symbol are those reported by the authors in their respective papers. </p>
<table style="width:100%">
  <tr>
    <th>Model</th>
    <th>mAP</th>
  </tr>
  <tr>
    <td><a href="https://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_Domain_Adaptive_Faster_CVPR_2018_paper.pdf">Faster RCNN*</a></td>
    <td>20.30%</td>
  </tr>
  <tr>
    <td><a href="https://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_Domain_Adaptive_Faster_CVPR_2018_paper.pdf">DA-Faster RCNN*</a></td>
    <td>27.60%</td>
  </tr>
  <tr>
    <td><a href="https://openaccess.thecvf.com/content_CVPR_2019/papers/Saito_Strong-Weak_Distribution_Alignment_for_Adaptive_Object_Detection_CVPR_2019_paper.pdf">StrongWeak*</a></td>
    <td>34.30%</td>
  </tr>
  <tr>
    <td><a href="https://openaccess.thecvf.com/content_CVPR_2019/papers/Kim_Diversify_and_Match_A_Domain_Adaptive_Representation_Learning_Paradigm_for_CVPR_2019_paper.pdf">Diversify and Match*</a></td>
    <td>34.60%</td>
  </tr>
  <tr>
    <td>DA-RetinaNet</td>
    <td>44.87%</td>
  </tr>
  <tr>
    <td>RetinaNet (Oracle)</td>
    <td>53.46%</td>
  </tr>
</table>

Results on the proposed dataset Synthetic -> Real
<p> Results of DA-Faster RCNN, StrongWeak, and the proposed DA-RetinaNet, with and without the CycleGAN image-to-image translation approach. </p>
<table style="width:100%">
  <tr>
    <th></th>
    <th colspan="2">Image-to-image translation (CycleGAN)</th>
  </tr>
  <tr>
    <td>Object Detector</td>
    <td>None</td>
    <td>Synthetic to Real</td>
  </tr>
  <tr>
    <td><a href="https://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_Domain_Adaptive_Faster_CVPR_2018_paper.pdf">DA-Faster RCNN</a></td>
    <td>12.94%</td>
    <td>33.20%</td>
  </tr>
  <tr>
    <td><a href="https://openaccess.thecvf.com/content_CVPR_2019/papers/Saito_Strong-Weak_Distribution_Alignment_for_Adaptive_Object_Detection_CVPR_2019_paper.pdf">StrongWeak</a></td>
    <td>25.12%</td>
    <td>47.70%</td>
  </tr>
  <tr>
    <td>DA-RetinaNet</td>
    <td>31.04%</td>
    <td>58.01%</td>
  </tr>
</table>

Other Works
STMDA-RetinaNet<br> Detectron2 implementation of DA-Faster RCNN