
Adjustment and Alignment for Unbiased Open Set Domain Adaptation (CVPR-23)

[Paper] [Video Presentation]

By Wuyang Li

Quick Summary

The main idea comes from open-set object detection, where the novel objects are hidden in the background. In OSDA, we do not separate objects from the background since both are out-of-base-class distributions and can be treated as unknown.
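To make the idea concrete: treating out-of-base-class regions as unknowns can be approximated by scoring image patches with the base-class classifier and flagging low-confidence patches as candidate novel regions. This is only an illustrative sketch (the function name, the logits, and the threshold are assumptions for the example, not the paper's actual procedure):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def find_novel_patches(patch_logits, threshold=0.6):
    """Flag patches with low base-class confidence as candidate
    novel-class (unknown) regions.

    patch_logits: (num_patches, num_base_classes) array of classifier logits.
    Returns a boolean mask, True where the patch looks out-of-base-class.
    """
    probs = softmax(patch_logits, axis=-1)
    confidence = probs.max(axis=-1)          # peak base-class probability
    return confidence < threshold            # low confidence -> candidate unknown

# Toy example: 3 patches, 2 base classes.
logits = np.array([[4.0, -2.0],   # confidently base-class
                   [0.1,  0.0],   # ambiguous -> candidate unknown
                   [-3.0, 5.0]])  # confidently base-class
mask = find_novel_patches(logits)
```

Here only the ambiguous middle patch is flagged, mirroring the intuition that novel-class content hides in regions the base-class classifier cannot explain.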

Experimental Environment

Get Started

Results reproduced by us:

| A $\rightarrow$ C | A $\rightarrow$ P | A $\rightarrow$ R | C $\rightarrow$ A | C $\rightarrow$ P | C $\rightarrow$ R | P $\rightarrow$ A | P $\rightarrow$ C | P $\rightarrow$ R | R $\rightarrow$ A | R $\rightarrow$ C | R $\rightarrow$ P | Avg |
|------|------|------|------|------|------|------|------|------|------|------|------|------|
| 69.3 | 73.2 | 76.3 | 64.7 | 68.6 | 72.7 | 65.9 | 63.9 | 76.0 | 70.6 | 68.1 | 78.7 | 70.7 |

Limitations and Discussions

Contact

If you have any questions or ideas you would like to discuss with me, feel free to reach me at wuyangli2-c @ my.cityu.edu.hk. Except for the main experiment on Office-Home, the other small-scale benchmark settings will be released later if needed.

Citation

If this work is helpful for your project, please give it a star and a citation. Thanks~

@InProceedings{Li_2023_CVPR,
    author    = {Li, Wuyang and Liu, Jie and Han, Bo and Yuan, Yixuan},
    title     = {Adjustment and Alignment for Unbiased Open Set Domain Adaptation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {24110-24119}
}

Abstract

Open Set Domain Adaptation (OSDA) transfers a model from a label-rich domain to a label-free one containing novel-class samples. Existing OSDA works overlook the abundant novel-class semantics hidden in the source domain, leading to biased model learning and transfer. Although causality has been studied to remove semantic-level bias, the unavailability of novel-class samples causes existing causal solutions to fail in OSDA. To break through this barrier, we propose a novel causality-driven solution based on the unexplored front-door adjustment theory, and implement it with a theoretically grounded framework, coined Adjustment and Alignment (ANNA), to achieve unbiased OSDA. In a nutshell, ANNA consists of Front-Door Adjustment (FDA) to correct the biased learning in the source domain and Decoupled Causal Alignment (DCA) to transfer the model unbiasedly. On the one hand, FDA delves into fine-grained visual blocks to discover novel-class regions hidden in base-class images. Then, it corrects the biased model optimization by implementing causal debiasing. On the other hand, DCA disentangles the base-class and novel-class regions with orthogonal masks, and then aligns the decoupled distributions for an unbiased model transfer. Extensive experiments show that ANNA achieves state-of-the-art results.
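The DCA step described above hinges on decoupling features with orthogonal (complementary, non-overlapping) masks. A rough analogue of that decoupling can be sketched as below; the function name and the simple elementwise masking are assumptions for illustration, not ANNA's actual implementation:

```python
import numpy as np

def decouple_features(feats, novel_mask):
    """Split patch features into base-class and novel-class streams with
    complementary binary masks, so the two streams never overlap.

    feats:      (num_patches, dim) patch feature matrix.
    novel_mask: (num_patches,) boolean mask, True for novel-class patches.
    """
    base_mask = ~novel_mask                     # orthogonal: base & novel never both True
    base_feats = feats * base_mask[:, None]     # zero out novel-class patches
    novel_feats = feats * novel_mask[:, None]   # zero out base-class patches
    return base_feats, novel_feats

# Toy example: 4 patches with 2-d features, patches 1 and 3 flagged as novel.
feats = np.arange(8, dtype=float).reshape(4, 2)
novel_mask = np.array([False, True, False, True])
base_f, novel_f = decouple_features(feats, novel_mask)
```

Because the masks are complementary, the two streams sum back to the original features and share no non-zero entries, so each distribution can then be aligned across domains separately.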
