
<h1 align="center"> <b>Change Detection Models</b><br> </h1> <p align="center"> <b>Python library with Neural Networks for Change Detection based on PyTorch.</b> </p> <img src="https://raw.githubusercontent.com/likyoo/change_detection.pytorch/main/resources/model%20architecture.png" alt="model architecture" style="zoom:80%;" />

This project is inspired by segmentation_models.pytorch and built on top of it. 😄

🌱 How to use <a name="use"></a>

For now, please refer to `local_test.py`.
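While `local_test.py` shows the full training workflow, the input/output contract the models follow can be illustrated with a deliberately naive, library-free baseline: two co-registered images in, one per-pixel binary change mask out. This is a numpy-only sketch; the function name and threshold are illustrative and not part of the CDP API — a trained network replaces the hand-set differencing rule with learned features.

```python
import numpy as np

def naive_change_mask(img_a: np.ndarray, img_b: np.ndarray, thresh: float = 0.2) -> np.ndarray:
    """Toy baseline: per-pixel change mask from two co-registered images.

    Same contract as a change-detection network:
    two (H, W, C) images in, one (H, W) binary mask out.
    """
    diff = np.abs(img_a.astype(np.float32) - img_b.astype(np.float32))
    # Average over channels -> per-pixel change magnitude
    magnitude = diff.mean(axis=-1)
    return (magnitude > thresh).astype(np.uint8)

# Two 4x4 RGB "images"; the bottom-right 2x2 block changes between dates.
t1 = np.zeros((4, 4, 3), dtype=np.float32)
t2 = t1.copy()
t2[2:, 2:, :] = 1.0
mask = naive_change_mask(t1, t2)
print(int(mask.sum()))  # 4 changed pixels
```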

🔭 Models <a name="models"></a>

Architectures <a name="architectures"></a>

Encoders <a name="encoders"></a>

The following is a list of encoders supported by CDP. Select the appropriate family of encoders, click to expand the table, and choose a specific encoder and its pre-trained weights (the `encoder_name` and `encoder_weights` parameters).
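To make the two parameters concrete, the name/weights pairing in the tables below can be read as a lookup: each `encoder_name` admits only certain `encoder_weights` options. This is a standalone sketch — the dictionary is a hand-copied excerpt from the tables, not CDP's internal registry, and `validate_encoder` is a hypothetical helper, not a library function.

```python
from typing import Optional

# Hand-copied excerpt from the tables below: encoder_name -> valid encoder_weights options.
SUPPORTED_ENCODERS = {
    "resnet18": ["imagenet", "ssl", "swsl"],
    "resnet34": ["imagenet"],
    "timm-efficientnet-b0": ["imagenet", "advprop", "noisy-student"],
    "mobilenet_v2": ["imagenet"],
}

def validate_encoder(encoder_name: str, encoder_weights: Optional[str]) -> None:
    """Raise ValueError if the (name, weights) pair is not in the excerpt above."""
    if encoder_name not in SUPPORTED_ENCODERS:
        raise ValueError(f"Unknown encoder: {encoder_name!r}")
    if encoder_weights is not None and encoder_weights not in SUPPORTED_ENCODERS[encoder_name]:
        raise ValueError(
            f"{encoder_name!r} has no {encoder_weights!r} weights; "
            f"choose from {SUPPORTED_ENCODERS[encoder_name]}"
        )

validate_encoder("resnet18", "ssl")  # ok: semi-supervised weights exist for resnet18
validate_encoder("resnet34", None)   # ok: None means random initialization
```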

<details> <summary style="margin-left: 25px;">ResNet</summary> <div style="margin-left: 25px;">

| Encoder | Weights | Params, M |
|---|---|---|
| resnet18 | imagenet / ssl / swsl | 11M |
| resnet34 | imagenet | 21M |
| resnet50 | imagenet / ssl / swsl | 23M |
| resnet101 | imagenet | 42M |
| resnet152 | imagenet | 58M |

</div> </details> <details> <summary style="margin-left: 25px;">ResNeXt</summary> <div style="margin-left: 25px;">

| Encoder | Weights | Params, M |
|---|---|---|
| resnext50_32x4d | imagenet / ssl / swsl | 22M |
| resnext101_32x4d | ssl / swsl | 42M |
| resnext101_32x8d | imagenet / instagram / ssl / swsl | 86M |
| resnext101_32x16d | instagram / ssl / swsl | 191M |
| resnext101_32x32d | instagram | 466M |
| resnext101_32x48d | instagram | 826M |

</div> </details> <details> <summary style="margin-left: 25px;">ResNeSt</summary> <div style="margin-left: 25px;">

| Encoder | Weights | Params, M |
|---|---|---|
| timm-resnest14d | imagenet | 8M |
| timm-resnest26d | imagenet | 15M |
| timm-resnest50d | imagenet | 25M |
| timm-resnest101e | imagenet | 46M |
| timm-resnest200e | imagenet | 68M |
| timm-resnest269e | imagenet | 108M |
| timm-resnest50d_4s2x40d | imagenet | 28M |
| timm-resnest50d_1s4x24d | imagenet | 23M |

</div> </details> <details> <summary style="margin-left: 25px;">Res2Ne(X)t</summary> <div style="margin-left: 25px;">

| Encoder | Weights | Params, M |
|---|---|---|
| timm-res2net50_26w_4s | imagenet | 23M |
| timm-res2net101_26w_4s | imagenet | 43M |
| timm-res2net50_26w_6s | imagenet | 35M |
| timm-res2net50_26w_8s | imagenet | 46M |
| timm-res2net50_48w_2s | imagenet | 23M |
| timm-res2net50_14w_8s | imagenet | 23M |
| timm-res2next50 | imagenet | 22M |

</div> </details> <details> <summary style="margin-left: 25px;">RegNet(x/y)</summary> <div style="margin-left: 25px;">

| Encoder | Weights | Params, M |
|---|---|---|
| timm-regnetx_002 | imagenet | 2M |
| timm-regnetx_004 | imagenet | 4M |
| timm-regnetx_006 | imagenet | 5M |
| timm-regnetx_008 | imagenet | 6M |
| timm-regnetx_016 | imagenet | 8M |
| timm-regnetx_032 | imagenet | 14M |
| timm-regnetx_040 | imagenet | 20M |
| timm-regnetx_064 | imagenet | 24M |
| timm-regnetx_080 | imagenet | 37M |
| timm-regnetx_120 | imagenet | 43M |
| timm-regnetx_160 | imagenet | 52M |
| timm-regnetx_320 | imagenet | 105M |
| timm-regnety_002 | imagenet | 2M |
| timm-regnety_004 | imagenet | 3M |
| timm-regnety_006 | imagenet | 5M |
| timm-regnety_008 | imagenet | 5M |
| timm-regnety_016 | imagenet | 10M |
| timm-regnety_032 | imagenet | 17M |
| timm-regnety_040 | imagenet | 19M |
| timm-regnety_064 | imagenet | 29M |
| timm-regnety_080 | imagenet | 37M |
| timm-regnety_120 | imagenet | 49M |
| timm-regnety_160 | imagenet | 80M |
| timm-regnety_320 | imagenet | 141M |

</div> </details> <details> <summary style="margin-left: 25px;">GERNet</summary> <div style="margin-left: 25px;">

| Encoder | Weights | Params, M |
|---|---|---|
| timm-gernet_s | imagenet | 6M |
| timm-gernet_m | imagenet | 18M |
| timm-gernet_l | imagenet | 28M |

</div> </details> <details> <summary style="margin-left: 25px;">SE-Net</summary> <div style="margin-left: 25px;">

| Encoder | Weights | Params, M |
|---|---|---|
| senet154 | imagenet | 113M |
| se_resnet50 | imagenet | 26M |
| se_resnet101 | imagenet | 47M |
| se_resnet152 | imagenet | 64M |
| se_resnext50_32x4d | imagenet | 25M |
| se_resnext101_32x4d | imagenet | 46M |

</div> </details> <details> <summary style="margin-left: 25px;">SK-ResNe(X)t</summary> <div style="margin-left: 25px;">

| Encoder | Weights | Params, M |
|---|---|---|
| timm-skresnet18 | imagenet | 11M |
| timm-skresnet34 | imagenet | 21M |
| timm-skresnext50_32x4d | imagenet | 25M |

</div> </details> <details> <summary style="margin-left: 25px;">DenseNet</summary> <div style="margin-left: 25px;">

| Encoder | Weights | Params, M |
|---|---|---|
| densenet121 | imagenet | 6M |
| densenet169 | imagenet | 12M |
| densenet201 | imagenet | 18M |
| densenet161 | imagenet | 26M |

</div> </details> <details> <summary style="margin-left: 25px;">Inception</summary> <div style="margin-left: 25px;">

| Encoder | Weights | Params, M |
|---|---|---|
| inceptionresnetv2 | imagenet / imagenet+background | 54M |
| inceptionv4 | imagenet / imagenet+background | 41M |
| xception | imagenet | 22M |

</div> </details> <details> <summary style="margin-left: 25px;">EfficientNet</summary> <div style="margin-left: 25px;">

| Encoder | Weights | Params, M |
|---|---|---|
| efficientnet-b0 | imagenet | 4M |
| efficientnet-b1 | imagenet | 6M |
| efficientnet-b2 | imagenet | 7M |
| efficientnet-b3 | imagenet | 10M |
| efficientnet-b4 | imagenet | 17M |
| efficientnet-b5 | imagenet | 28M |
| efficientnet-b6 | imagenet | 40M |
| efficientnet-b7 | imagenet | 63M |
| timm-efficientnet-b0 | imagenet / advprop / noisy-student | 4M |
| timm-efficientnet-b1 | imagenet / advprop / noisy-student | 6M |
| timm-efficientnet-b2 | imagenet / advprop / noisy-student | 7M |
| timm-efficientnet-b3 | imagenet / advprop / noisy-student | 10M |
| timm-efficientnet-b4 | imagenet / advprop / noisy-student | 17M |
| timm-efficientnet-b5 | imagenet / advprop / noisy-student | 28M |
| timm-efficientnet-b6 | imagenet / advprop / noisy-student | 40M |
| timm-efficientnet-b7 | imagenet / advprop / noisy-student | 63M |
| timm-efficientnet-b8 | imagenet / advprop | 84M |
| timm-efficientnet-l2 | noisy-student | 474M |
| timm-efficientnet-lite0 | imagenet | 4M |
| timm-efficientnet-lite1 | imagenet | 5M |
| timm-efficientnet-lite2 | imagenet | 6M |
| timm-efficientnet-lite3 | imagenet | 8M |
| timm-efficientnet-lite4 | imagenet | 13M |

</div> </details> <details> <summary style="margin-left: 25px;">MobileNet</summary> <div style="margin-left: 25px;">

| Encoder | Weights | Params, M |
|---|---|---|
| mobilenet_v2 | imagenet | 2M |
| timm-mobilenetv3_large_075 | imagenet | 1.78M |
| timm-mobilenetv3_large_100 | imagenet | 2.97M |
| timm-mobilenetv3_large_minimal_100 | imagenet | 1.41M |
| timm-mobilenetv3_small_075 | imagenet | 0.57M |
| timm-mobilenetv3_small_100 | imagenet | 0.93M |
| timm-mobilenetv3_small_minimal_100 | imagenet | 0.43M |

</div> </details> <details> <summary style="margin-left: 25px;">DPN</summary> <div style="margin-left: 25px;">

| Encoder | Weights | Params, M |
|---|---|---|
| dpn68 | imagenet | 11M |
| dpn68b | imagenet+5k | 11M |
| dpn92 | imagenet+5k | 34M |
| dpn98 | imagenet | 58M |
| dpn107 | imagenet+5k | 84M |
| dpn131 | imagenet | 76M |

</div> </details> <details> <summary style="margin-left: 25px;">VGG</summary> <div style="margin-left: 25px;">

| Encoder | Weights | Params, M |
|---|---|---|
| vgg11 | imagenet | 9M |
| vgg11_bn | imagenet | 9M |
| vgg13 | imagenet | 9M |
| vgg13_bn | imagenet | 9M |
| vgg16 | imagenet | 14M |
| vgg16_bn | imagenet | 14M |
| vgg19 | imagenet | 20M |
| vgg19_bn | imagenet | 20M |

</div> </details>

:truck: Dataset <a name="dataset"></a>

🏆 Competitions won with the library

change_detection.pytorch is competitive in change detection competitions. Here you can find competitions, the names of the winners, and links to their solutions.

:page_with_curl: Citing <a name="citing"></a>

If you find this project useful in your research, please consider citing:

```bibtex
@article{li2023new,
  title={A New Learning Paradigm for Foundation Model-based Remote Sensing Change Detection},
  author={Li, Kaiyu and Cao, Xiangyong and Meng, Deyu},
  journal={arXiv preprint arXiv:2312.01163},
  year={2023}
}

@article{10129139,
  author={Fang, Sheng and Li, Kaiyu and Li, Zhe},
  journal={IEEE Transactions on Geoscience and Remote Sensing},
  title={Changer: Feature Interaction is What You Need for Change Detection},
  year={2023},
  volume={61},
  pages={1-11},
  doi={10.1109/TGRS.2023.3277496}
}

@misc{likyoocdp:2021,
  author={Kaiyu Li and Fulin Sun and Xudong Liu},
  title={Change Detection Pytorch},
  year={2021},
  publisher={GitHub},
  journal={GitHub repository},
  howpublished={\url{https://github.com/likyoo/change_detection.pytorch}}
}
```

:books: Reference <a name="reference"></a>

:mailbox: Contact <a name="contact"></a>

⚡⚡⚡ I am actively building this project; if you are interested, don't hesitate to join us!

👯👯👯 Contact me at likyoo@sdust.edu.cn, open a pull request directly, or join our WeChat group.

<div align=center><img src="resources/wechat.jpg" alt="wechat group" width="38%" height="38%" style="zoom:80%;" /></div>

If the QR code has expired, you can add the WeChat ID likyoo7; when adding, please include a note with your name/nickname + institution/school + change detection.