There will be one final update next month, and then it is officially goodbye: this GitHub repository will no longer be updated. If you have questions about the code, you can contact me by e-mail.

Updated~~
TS-Conv: Task-wise Sampling Convolutions for Arbitrary-Oriented Object Detection in Aerial Images
<a href="https://github.com/Shank2358/TS-Conv/"> <img alt="Version" src="https://img.shields.io/badge/Version-1.3.0-blue" /> </a> <a href="https://github.com/Shank2358/TS-Conv/blob/main/LICENSE"> <img alt="GPLv3.0 License" src="https://img.shields.io/badge/License-GPLv3.0-blue" /> </a> <a href="https://github.com/Shank2358" target="_blank"> <img src="https://visitor-badge.glitch.me/badge?page_id=gghl.visitor-badge&right_color=blue" alt="Visitor" /> </a> <a href="mailto:zhanchao.h@outlook.com" target="_blank"> <img alt="E-mail" src="https://img.shields.io/badge/To-Email-blue" /> </a>

This is the implementation of TS-Conv 👋👋👋
[Arxiv](https://arxiv.org/abs/2209.02200)
👹👹👹 Barring surprises, this is my last work before graduation, and probably also my last work in academia. It is a bit modest; I hope everyone will forgive me.
Please give a ⭐️ if this project helped you. If you use it, please consider citing:
@ARTICLE{9709203,
  author={Huang, Zhanchao and Li, Wei and Xia, Xiang-Gen and Wang, Hao and Tao, Ran},
  journal={arXiv},
  title={Task-wise Sampling Convolutions for Arbitrary-Oriented Object Detection in Aerial Images},
  year={2022},
  volume={},
  number={},
  pages={1-16},
  doi={10.48550/arXiv.2209.02200}}
🤡🤡🤡 Cloning without starring is simply bad manners.
0. Something Important 🦞 🦀 🦑
- 🎃🎃🎃 The usage of the TS-Conv repository is the same as that of its ancestor, the GGHL repository. If you have any questions, please see the issues there. An MMRotate version is also being written. TS-Conv will continue to be updated for a while: what is updated so far is the code of the main model, the key parts being the head, the DCN, and the label assignment in the dataloader; the rest is much the same as GGHL. I am also hurrying to update the visualization and the remaining features and experiments.
- 💖💖💖 A quick advertisement: the GGHL deployment version, GGHL-Deployment, is now online; everyone is welcome to use it~~ Thanks to my dear junior colleagues [Crescent-Ao](https://github.com/Crescent-Ao) and haohaolalahao for their contributions to the GGHL repository, and thanks to Crescent-Ao for completing the GGHL deployment version. The related repositories will continue to be updated, so stay tuned.
- 😺😺😺 Another advertisement: welcome everyone to check out MGAR: Multi-Grained Angle Representation for Remote Sensing Object Detection, a remote sensing object detection work completed by haohaolalahao in collaboration with me, which has been officially accepted by IEEE TGRS. [Arxiv] Thank you for citing it:
@ARTICLE{9912396,
  author={Wang, Hao and Huang, Zhanchao and Chen, Zhengchao and Song, Ying and Li, Wei},
  journal={IEEE Transactions on Geoscience and Remote Sensing},
  title={Multi-Grained Angle Representation for Remote Sensing Object Detection},
  year={2022},
  volume={},
  number={},
  pages={1-1},
  doi={10.1109/TGRS.2022.3212592}}
🌈 1. Environments
Linux (Ubuntu 18.04, GCC>=5.4) & Windows (Win10)
CUDA > 11.1, cuDNN > 8.0.4
First, install CUDA, cuDNN, and PyTorch. Then install the dependent libraries in requirements.txt.
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
pip install -r requirements.txt
🌟 2. Installation
- git clone this repository
- Polygon NMS

  The poly_nms in this version is implemented with the shapely and numpy libraries so that it works on different systems and in different environments without extra dependencies. Doing so, however, slows down detection in dense-object scenes. If you want faster speed, you can compile and use the poly_iou library (the C++ implementation) in datasets_tools/DOTA_devkit; the compilation method is described in detail in DOTA_devkit, and a minimal sketch of what the shapely fallback computes follows the commands below:
cd datasets_tools/DOTA_devkit
sudo apt-get install swig
swig -c++ -python polyiou.i
python setup.py build_ext --inplace
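For reference, here is a minimal sketch of the shapely/numpy polygon-NMS idea described above. The function names (`poly_iou`, `poly_nms`) and the `(N, 9)` detection layout of eight corner coordinates plus a score are illustrative assumptions, not this repository's actual API.

```python
# Minimal sketch of shapely/numpy polygon NMS; names and the
# [x1,y1,...,x4,y4,score] layout are illustrative, not the repo's API.
import numpy as np
from shapely.geometry import Polygon

def poly_iou(p1, p2):
    """IoU of two quadrilaterals, each given as 8 coordinates."""
    a = Polygon(np.asarray(p1, dtype=np.float64).reshape(4, 2))
    b = Polygon(np.asarray(p2, dtype=np.float64).reshape(4, 2))
    if not (a.is_valid and b.is_valid):
        return 0.0
    inter = a.intersection(b).area
    union = a.area + b.area - inter
    return inter / union if union > 0 else 0.0

def poly_nms(dets, iou_thr=0.45):
    """Greedy NMS over detections shaped (N, 9): 8 coords + score."""
    dets = np.asarray(dets, dtype=np.float64)
    order = dets[:, 8].argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        ious = np.array([poly_iou(dets[i, :8], dets[j, :8]) for j in rest])
        order = rest[ious <= iou_thr]
    return keep
```

The shapely intersection is exact for arbitrary quadrilaterals, which is why this fallback is robust across environments but slower than the compiled poly_iou in dense scenes.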
🎃 3. Datasets
- DOTA dataset and its devkit
(1) Training Format
You need to write a script to convert the annotations into the train.txt file required by this repository and put it in the ./dataR folder.
For the specific format of the train.txt file, see the example in the ./dataR folder.
image_path xmin,ymin,xmax,ymax,class_id,x1,y1,x2,y2,x3,y3,x4,y4,area_ratio,angle[0,180) xmin,ymin,xmax,ymax,class_id,x1,y1,x2,y2,x3,y3,x4,y4,area_ratio,angle[0,180)...
The calculation method of the angle is explained in Issue #1 and in our paper.
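For reference, here is a minimal sketch of how one line in this format could be parsed; `parse_train_line` is an illustrative helper following the field order shown above, not code from this repository.

```python
# Illustrative parser for one train.txt line in the format above;
# not code from this repository.
def parse_train_line(line):
    parts = line.strip().split()
    image_path, objects = parts[0], []
    for obj in parts[1:]:
        f = [float(v) for v in obj.split(",")]
        objects.append({
            "bbox": f[0:4],          # xmin, ymin, xmax, ymax
            "class_id": int(f[4]),
            "corners": f[5:13],      # x1,y1, ..., x4,y4
            "area_ratio": f[13],
            "angle": f[14],          # in [0, 180)
        })
    return image_path, objects
```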
(2) Validation & Testing Format
The same as the Pascal VOC Format
(3) DataSets Files Structure
cfg.DATA_PATH = "/opt/datasets/DOTA/"
├── ...
├── JPEGImages
| ├── 000001.png
| ├── 000002.png
| └── ...
├── Annotations (DOTA Dataset Format)
| ├── 000001.txt (class_idx x1 y1 x2 y2 x3 y3 x4 y4)
| ├── 000002.txt
| └── ...
├── ImageSets
| ├── test.txt (testing filenames)
|   ├── 000001
|   ├── 000002
|   └── ...
There is a DOTA2Train.py file in the datasets_tools folder that can be used to generate labels in the training and testing formats. First, use DOTA_devkit, the official toolkit of the DOTA dataset, to split the images and labels. Then run DOTA2Train.py to convert them to the format required by GGHL. For the use of DOTA_devkit, please refer to the tutorial in the official repository.
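As a rough illustration of what such a conversion involves (not the actual DOTA2Train.py), the horizontal box and area ratio can be derived from the four corner points of a DOTA-format annotation as below; the angle field is omitted here because its convention is defined in Issue #1 and the paper.

```python
# Rough illustration of deriving train.txt fields from a DOTA-format
# oriented quadrilateral; not the actual DOTA2Train.py.
import numpy as np

def dota_to_train_fields(corners, class_id):
    """corners: [x1, y1, ..., x4, y4] in DOTA order."""
    pts = np.asarray(corners, dtype=np.float64).reshape(4, 2)
    xmin, ymin = pts.min(axis=0)
    xmax, ymax = pts.max(axis=0)
    # Shoelace area of the quadrilateral vs. its horizontal bounding box.
    x, y = pts[:, 0], pts[:, 1]
    poly_area = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
    hbb_area = (xmax - xmin) * (ymax - ymin)
    area_ratio = poly_area / hbb_area if hbb_area > 0 else 0.0
    fields = [xmin, ymin, xmax, ymax, class_id, *corners, area_ratio]
    return ",".join(f"{v:.2f}" if isinstance(v, float) else str(v)
                    for v in fields)
```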
🌠🌠🌠 4. Usage Example
(1) Training
sh train_GGHL_dist.sh
(2) Testing
python test.py
📝 License
Copyright © 2021 Shank2358.<br /> This project is GNU General Public License v3.0 licensed.