Predictive Uncertainty Estimation for Camouflaged Object Detection (TIP 2023)

Authors: Yi Zhang, Jing Zhang, Wassim Hamidouche, Olivier Deforges


Introduction

Uncertainty is inherent in machine learning methods, especially in camouflaged object detection, which aims to finely segment objects concealed in the background. The strong “center bias” of the training dataset leads to models with poor generalization ability, as they learn to search for camouflaged objects around the image center; we define this as “model bias”. Further, because a camouflaged object closely resembles its surroundings, it is difficult to label its exact extent, especially along object boundaries; we term this “data bias”. To model these two types of biases effectively, we resort to uncertainty estimation and introduce a predictive uncertainty estimation technique, in which predictive uncertainty is the sum of model uncertainty and data uncertainty, to estimate the two biases simultaneously. Specifically, we present a predictive uncertainty estimation network (PUENet) that consists of a Bayesian conditional variational auto-encoder (BCVAE) to achieve predictive uncertainty estimation, and a predictive uncertainty approximation (PUA) module to avoid the expensive sampling process at test time. Experimental results show that our PUENet achieves both highly accurate predictions and reliable uncertainty estimates that reflect the biases within both the model parameters and the datasets.
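The decomposition above (predictive uncertainty = model uncertainty + data uncertainty) can be illustrated with the standard entropy-based decomposition over Monte-Carlo samples. The following is a minimal sketch of that generic decomposition, not PUENet's actual implementation; the function name `predictive_uncertainty` and the `(T, H, W)` sample layout are illustrative assumptions:

```python
import numpy as np

def predictive_uncertainty(probs, eps=1e-8):
    """Decompose uncertainty from T stochastic forward passes.

    probs: array of shape (T, H, W); each entry is the predicted
    foreground probability of one sampled segmentation map.

    Returns (predictive, aleatoric, epistemic) per-pixel maps, where
    predictive = aleatoric (data) + epistemic (model) uncertainty.
    """
    # Clip to avoid log(0) in the entropy computation.
    probs = np.clip(probs, eps, 1.0 - eps)

    def binary_entropy(p):
        return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

    mean_p = probs.mean(axis=0)
    predictive = binary_entropy(mean_p)           # entropy of the mean prediction
    aleatoric = binary_entropy(probs).mean(axis=0)  # mean entropy of the samples
    epistemic = predictive - aleatoric            # mutual information (model part)
    return predictive, aleatoric, epistemic
```

When all samples agree, the epistemic term vanishes and only data uncertainty remains; when samples disagree, the gap between the entropy of the mean and the mean entropy captures model uncertainty. Avoiding this T-pass sampling at test time is precisely what the PUA module is designed for.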


Methodology

<p align="center"> <img src="./figs/fig_model_pipeline.jpg" width="95%"/> <br /> <em> Figure 1: The training/testing pipeline of our PUENet, which consists of a “Bayesian conditional variational auto-encoder” (BCVAE), and a “predictive uncertainty approximation” (PUA) module. Please refer to the paper for details. </em> </p> <p align="center"> <img src="./figs/fig_model_architecture.jpg"/> <br /> <em> Figure 2: Architectures of “predictive uncertainty approximation” (PUA) module (ω), and “Bayesian conditional variational auto-encoder” BCVAE’s encoder/prior-based decoder. Please refer to the paper for details. </em> </p>

Experiment

<p align="center"> <img src="./figs/fig_experiment_quantification.jpg"/> <br /> <em> Figure 3: Performance comparison with SOTA COD models. </em> </p> <p align="center"> <img src="./figs/fig_experiment_visualization.jpg"/> <br /> <em> Figure 4: Visual results of SOTAs and our PUENet. </em> </p>

Implementation

The source code of PUENet is available at codes.

The training dataset can be downloaded at COD10K-train; the testing datasets can be downloaded at COD10K-test + CAMO-test + CHAMELEON and NC4K.

The trained model and prediction results of our PUENet are available at PUENet-model and PUENet-predictions.


Citation

@article{zhang2023predictive,
  title={Predictive Uncertainty Estimation for Camouflaged Object Detection},
  author={Zhang, Yi and Zhang, Jing and Hamidouche, Wassim and Deforges, Olivier},
  journal={IEEE Transactions on Image Processing (TIP)},
  year={2023},
  publisher={IEEE}
}