<div align="center">
<h1>WeakSAM</h1>
<h3>Segment Anything Meets Weakly-supervised Instance-level Recognition</h3>
Lianghui Zhu<sup>1</sup>*, Junwei Zhou<sup>1</sup>*, Yan Liu<sup>2</sup>, Xin Hao<sup>2</sup>, Wenyu Liu<sup>1</sup>, Xinggang Wang<sup>1 :email:</sup>
<sup>1</sup> School of EIC, Huazhong University of Science and Technology, <sup>2</sup> Alipay Tian Qian Security Lab
(*) equal contribution, (<sup>:email:</sup>) corresponding author.
ArXiv Preprint ([arXiv 2402.14812](https://arxiv.org/abs/2402.14812)), Project Page
</div>

## News
**Feb. 22nd, 2024:** We released our paper on arXiv. Further details can be found in the code and the updated arXiv paper.
## Abstract
Weakly supervised visual recognition using inexact supervision is a critical yet challenging learning problem. It significantly reduces human labeling costs and traditionally relies on multi-instance learning and pseudo-labeling. This paper introduces WeakSAM, which solves weakly-supervised object detection (WSOD) and segmentation by utilizing the pre-learned world knowledge contained in a vision foundation model, i.e., the Segment Anything Model (SAM). WeakSAM addresses two critical limitations in traditional WSOD retraining, i.e., pseudo ground truth (PGT) incompleteness and noisy PGT instances, through adaptive PGT generation and Region of Interest (RoI) drop regularization. It also addresses SAM's problems of requiring prompts and being unaware of categories, which prevent automatic object detection and segmentation. Our results indicate that WeakSAM significantly surpasses previous state-of-the-art methods on WSOD and WSIS benchmarks by large margins, i.e., average improvements of 7.4% and 8.5%, respectively.
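For intuition on the adaptive PGT idea, here is a toy sketch (an illustrative assumption, not the paper's exact rule): rather than one fixed confidence cutoff, each class's threshold tracks the strongest detection in the image, so complete-but-weaker instances are not all discarded. The function name and `ratio` parameter are hypothetical.

```python
# Toy sketch of adaptive PGT selection (illustrative only, not the paper's
# exact rule): the per-class threshold adapts to the strongest detection in
# the image, instead of discarding everything below one global cutoff.
import numpy as np

def adaptive_pgt(boxes, scores, labels, ratio=0.5):
    """boxes: (N, 4), scores: (N,), labels: (N,). `ratio` is hypothetical."""
    keep = []
    for c in np.unique(labels):
        idx = np.nonzero(labels == c)[0]
        thresh = ratio * scores[idx].max()   # adapts per class and per image
        keep.extend(idx[scores[idx] >= thresh].tolist())
    keep = np.array(sorted(keep))
    return boxes[keep], scores[keep], labels[keep]
```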
<p align="middle"> <img src="sources/radarv1.3.png" alt="Highlight performances" width="400px"> </p>

## Overview
We first introduce classification clues and spatial points as automatic SAM prompts, which addresses the problem of SAM requiring interactive prompts. Next, we use the WeakSAM-proposals in the WSOD pipeline, in which the weakly-supervised detector performs class-aware perception to annotate pseudo ground truth (PGT). Then, we analyze the incompleteness and noise problems existing in PGT and propose adaptive PGT generation and RoI drop regularization to address them, respectively. Finally, we use the WeakSAM-PGT to prompt SAM for the WSIS extension, as sketched below. (The snowflake mark means the model is frozen.)
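As a rough illustration of the prompting step, the sketch below feeds point prompts to SAM one at a time and converts each confident mask into a box proposal. It assumes the official `segment_anything` package and its public ViT-H checkpoint; the helper name, score threshold, and the way points are supplied are illustrative choices, not the repository's actual code (in WeakSAM, the points come from classification clues, i.e., classifier peak responses, plus regular spatial grids).

```python
# Minimal sketch: point-prompted SAM proposals (illustrative, not the
# repository's actual code). Requires the official `segment_anything`
# package and the public ViT-H checkpoint.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

def point_prompted_proposals(image, points, score_thresh=0.8):
    """image: HxWx3 uint8 RGB array; points: iterable of (x, y) prompts.
    `score_thresh` is a hypothetical cutoff on SAM's predicted mask IoU."""
    predictor.set_image(image)
    boxes = []
    for x, y in points:
        masks, scores, _ = predictor.predict(
            point_coords=np.array([[x, y]], dtype=np.float32),
            point_labels=np.array([1]),   # 1 marks a foreground point
            multimask_output=True,        # SAM returns 3 candidate masks
        )
        best = int(scores.argmax())
        if scores[best] < score_thresh:
            continue
        ys, xs = np.nonzero(masks[best])
        if xs.size == 0:
            continue
        boxes.append([xs.min(), ys.min(), xs.max(), ys.max()])  # xyxy box
    return np.asarray(boxes)
```

The resulting class-agnostic proposals are what the weakly-supervised detector then scores and labels to produce PGT.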
<p align="middle"> <img src="sources/weaksam_pipeline_v4.0_1.png" alt="WeakSAM pipeline" width="1600px"> </p>

## Main results
For the WSOD task:

| Dataset | WSOD method | WSOD performance | Retrain method | Retrain performance |
|---|---|---|---|---|
| VOC2007 | WeakSAM(OICR) | 58.9 AP50 | Faster R-CNN | 65.7 AP50 |
| | | | DINO | 66.1 AP50 |
| | WeakSAM(MIST) | 67.4 AP50 | Faster R-CNN | 71.8 AP50 |
| | | | DINO | 73.4 AP50 |
| COCO2014 | WeakSAM(OICR) | 19.9 mAP | Faster R-CNN | 22.3 mAP |
| | | | DINO | 24.9 mAP |
| | WeakSAM(MIST) | 22.9 mAP | Faster R-CNN | 23.8 mAP |
| | | | DINO | 26.6 mAP |
For the WSIS task:

| Dataset | Retrain method | AP25 | AP50 | AP70 | AP75 |
|---|---|---|---|---|---|
| VOC2012 | Mask R-CNN | 70.3 | 59.6 | 43.1 | 36.2 |
| | Mask2Former | 73.4 | 64.4 | 49.7 | 45.3 |

| Dataset | Retrain method | AP[50:95] | AP50 | AP75 |
|---|---|---|---|---|
| COCO val2017 | Mask R-CNN | 20.6 | 33.9 | 22.0 |
| | Mask2Former | 25.2 | 38.4 | 27.0 |
| COCO test-dev | Mask R-CNN | 21.0 | 34.5 | 22.2 |
| | Mask2Former | 25.9 | 39.9 | 27.9 |
## Data & Preliminaries

## Generation & Training pipelines
## Citation

If you find this repository or our work helpful in your research, please consider citing the paper and giving us a ⭐.
```bibtex
@inproceedings{zhu2024weaksam,
  title={WeakSAM: Segment Anything Meets Weakly-supervised Instance-level Recognition},
  author={Zhu, Lianghui and Zhou, Junwei and Liu, Yan and Hao, Xin and Liu, Wenyu and Wang, Xinggang},
  booktitle={Proceedings of the 32nd ACM International Conference on Multimedia},
  year={2024}
}
```
## Acknowledgement

Thanks to these wonderful works and their codebases! ❤️ MIST, WSOD2, Segment-anything, WeakTr, SoS-WSOD