GLCONet

GLCONet: Learning Multisource Perception Representation for Camouflaged Object Detection

Our work has been accepted by IEEE TNNLS (Transactions on Neural Networks and Learning Systems). The code is currently being organized and will be continuously updated.

If you are interested in our work, please feel free to contact us by email at Sunyg@njust.edu.cn.


Prediction maps

We provide the prediction maps of our GLCONet model with different backbones on camouflaged object detection tasks; a minimal evaluation sketch follows the download links below.

GLCONet_PVT_COD [baidu,PIN:a3dc]

GLCONet_ResNet_COD [baidu,PIN:ubg4]

GLCONet_Swin_COD [baidu,PIN:6di4]
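
If you want to sanity-check the downloaded maps against your local ground truth, a minimal sketch is shown below. It assumes the prediction maps are grayscale images named identically to the ground-truth masks; the folder paths are hypothetical and not part of the official release.

```python
# Minimal sketch: mean absolute error (MAE) between downloaded prediction
# maps and ground-truth masks. Paths and folder layout are assumptions.
import os
import numpy as np
from PIL import Image

def mae(pred_dir, gt_dir):
    errors = []
    for name in sorted(os.listdir(gt_dir)):
        gt = np.asarray(Image.open(os.path.join(gt_dir, name)).convert("L"),
                        dtype=np.float32) / 255.0
        pred = Image.open(os.path.join(pred_dir, name)).convert("L")
        # Align resolution with the ground truth if needed (PIL uses (W, H)).
        pred = pred.resize(gt.shape[::-1], Image.BILINEAR)
        pred = np.asarray(pred, dtype=np.float32) / 255.0
        errors.append(np.abs(pred - gt).mean())
    return float(np.mean(errors))

# Hypothetical paths for one dataset split.
print(mae("GLCONet_PVT_COD/CAMO", "datasets/CAMO/GT"))
```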

Citation

If you use the GLCONet method in your research or wish to refer to the baseline results published in the paper, please use the following BibTeX entries.

@article{GLCONet_arXiv,
  title={GLCONet: Learning Multi-source Perception Representation for Camouflaged Object Detection},
  author={Sun, Yanguang and Xuan, Hanyu and Yang, Jian and Luo, Lei},
  journal={arXiv preprint arXiv:2409.09588},
  year={2024}
}

@article{GLCONet_TNNLS,
  title={GLCONet: Learning Multisource Perception Representation for Camouflaged Object Detection},
  author={Sun, Yanguang and Xuan, Hanyu and Yang, Jian and Luo, Lei},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  pages={1--14},
  year={2024},
  publisher={IEEE},
  doi={10.1109/TNNLS.2024.3461954}
}