# ObjectBox: From Centers to Boxes for Anchor-Free Object Detection
## Dependencies
This code is tested on Ubuntu 18.04 with CUDA 11.2 and a single NVIDIA Titan RTX GPU. Python 3.8.8 is used for development.
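As a quick sanity check of the environment, the minimal sketch below prints the Python version and CUDA availability. It assumes a PyTorch-based setup (the utilities here derive from YOLO, which is PyTorch code); the actual dependency list is not reproduced here.

```python
# Minimal environment sanity check (assumes PyTorch is installed).
import platform
import torch

print("Python:", platform.python_version())            # tested with 3.8.8
print("CUDA available:", torch.cuda.is_available())    # tested with CUDA 11.2
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))       # tested on an NVIDIA Titan RTX
```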
## Preparation
- Set `PATH` in `/data/coco.yaml` and `/data/VOC.yaml`.
- Set the `project` flag in `flag_sets.py` (an illustrative sketch follows below).
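For illustration only, the snippet below loads both dataset YAMLs and prints the configured paths together with the `project` flag. It assumes the YAMLs expose a top-level `PATH` key and that `flag_sets.py` defines `project` as a module-level variable; the actual layout in the repository may differ.

```python
# check_setup.py -- hypothetical helper for verifying the Preparation steps.
# Assumes a top-level 'PATH' key in the dataset YAMLs and a module-level
# 'project' variable in flag_sets.py; adjust names if the repository differs.
# Paths here are relative to the repository root.
from pathlib import Path
import yaml

import flag_sets

for cfg in ("data/coco.yaml", "data/VOC.yaml"):
    data = yaml.safe_load(Path(cfg).read_text())
    root = Path(data["PATH"])
    print(f"{cfg}: PATH={root} (exists: {root.is_dir()})")

print(f"project (output directory): {flag_sets.project}")
```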
## Training
- Set the `task` flag in `flag_sets.py` to `'train'`.
- For MS-COCO 2017 experiments, set `exp = 'coco'` in `flag_sets.py`.
- For PASCAL VOC 2012 experiments, set `exp = 'pascal'` in `flag_sets.py`.
- Run `train.py` (see the example configuration below).
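For example, an MS-COCO 2017 training run would be configured roughly as follows. This is only a sketch: the real `flag_sets.py` contains more flags, and any value beyond `task` and `exp` shown here is a placeholder.

```python
# flag_sets.py (illustrative excerpt for an MS-COCO 2017 training run)
task = 'train'              # 'train' to train, 'test' to evaluate
exp = 'coco'                # 'coco' for MS-COCO 2017, 'pascal' for PASCAL VOC 2012
project = 'runs/objectbox'  # output directory; this value is only a placeholder
```

With these values saved, start training with `python train.py`.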
## Test
- Set the `task` flag in `flag_sets.py` to `'test'`.
- Run `val.py` (see the example configuration below).
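Evaluation reuses the same flags; a hypothetical excerpt, assuming the COCO-trained model is being evaluated:

```python
# flag_sets.py (illustrative excerpt for evaluation)
task = 'test'   # switch from 'train' to 'test'
exp = 'coco'    # must match the dataset the checkpoint was trained on
```

Then run `python val.py`.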
## Pretrained Checkpoints
- The model trained on COCO can be found here.
AP<sub>0.5:0.95</sub> | AP<sub>0.5</sub> | AP<sub>0.75</sub> | AP<sub>S</sub> | AP<sub>M</sub> | AP<sub>L</sub> | AR<sub>1</sub> | AR<sub>10</sub> | AR<sub>100</sub> | AR<sub>S</sub> | AR<sub>M</sub> | AR<sub>L</sub> |
---|---|---|---|---|---|---|---|---|---|---|---|
46.8 | 66.4 | 50.4 | 28.7 | 51.8 | 61.1 | 36.9 | 58.8 | 63.0 | 44.5 | 68.0 | 78.6 |
- The model trained on PASCAL VOC 2012 can be found here.
mAP | plane | bicycle | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | horse | motorbike | person | plant | sheep | sofa | train | tv |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
85.4 | 92.5 | 92.8 | 84.1 | 76.1 | 75.7 | 91.0 | 93.4 | 92.3 | 67.8 | 89.4 | 78.9 | 91.7 | 93.6 | 91.9 | 88.7 | 60.1 | 87.7 | 82.0 | 91.3 | 86.4 |
The reported results are on the validation set. Please set `iou_thres = 0.45` when evaluating.
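The README does not state where `iou_thres` is defined; assuming it sits alongside the other flags in `flag_sets.py`, the change needed to reproduce the validation numbers above would look like:

```python
# flag_sets.py (hypothetical location -- the flag may instead be an argument of val.py)
iou_thres = 0.45  # NMS IoU threshold used for the reported validation results
```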
## Acknowledgements
This project is supported by Geotab Inc., the City of Kingston, and the Natural Sciences and Engineering Research Council of Canada (NSERC).
## Citation
Please cite our papers if you use code from this repository:
@inproceedings{zand2022objectbox,
  title={ObjectBox: From Centers to Boxes for Anchor-Free Object Detection},
  author={Zand, Mohsen and Etemad, Ali and Greenspan, Michael},
  booktitle={European Conference on Computer Vision},
  pages={1--23},
  year={2022},
  organization={Springer}
}
@article{zand2021oriented,
  title={Oriented bounding boxes for small and freely rotated objects},
  author={Zand, Mohsen and Etemad, Ali and Greenspan, Michael},
  journal={IEEE Transactions on Geoscience and Remote Sensing},
  volume={60},
  pages={1--15},
  year={2021},
  publisher={IEEE}
}
## Reference
Many utility functions are adapted from YOLO.