
<div align="center"> <!-- <h2>Click2Trimap</h2> --> <h3>Diffusion for Natural Image Matting </h3>

Yihan Hu, Yiheng Lin, Wei Wang, Yao Zhao, Yunchao Wei, Humphrey Shi

Institute of Information Science, Beijing Jiaotong University Georgia Tech & Picsart AI Research (PAIR)

<p align="center"> <a href="https://opensource.org/licenses/MIT"> <img src="https://img.shields.io/badge/License-MIT-yellow.svg"/> </a> <a href="https://arxiv.org/pdf/2312.05915.pdf"> <img src="https://img.shields.io/badge/arxiv-2312.05915-red"/> </a> <a href="[https://arxiv.org/pdf/2312.05915.pdf](https://paperswithcode.com/sota/image-matting-on-composition-1k-1?p=diffusion-for-natural-image-matting)"> <img src="https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/diffusion-for-natural-image-matting/image-matting-on-composition-1k-1"/> </a> </p> </div>

## Introduction

<div align="center"><h4>Introducing diffusion process for iterative matting refinement</h4></div>


We aim to leverage diffusion to address the challenging image matting task. However, the high computational overhead and the inconsistency of noise sampling between the training and inference processes pose significant obstacles to achieving this goal. In this paper, we present DiffMatte, a solution designed to effectively overcome these challenges. First, DiffMatte decouples the decoder from the intricately coupled matting network design, so that only one lightweight decoder is involved in the iterations of the diffusion process. This strategy mitigates the growth of computational overhead as the number of sampling steps increases. Second, we employ a self-aligned training strategy with uniform time intervals, ensuring consistent noise sampling between training and inference across the entire time domain. DiffMatte is designed with flexibility in mind and can seamlessly integrate into various modern matting architectures. Extensive experimental results demonstrate that DiffMatte not only reaches the state-of-the-art level on the Composition-1k test set, surpassing the previous best methods by 5% in the SAD metric and 15% in the MSE metric, but also shows stronger generalization ability on other benchmarks.
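To make the decoupling concrete, here is a minimal, illustrative sketch (not the official implementation) of the sampling loop described above: the heavy image encoder runs once, and only a lightweight decoder is iterated over uniformly spaced DDIM timesteps, so cost grows slowly with the number of sampling steps. All module and variable names here are hypothetical.

```python
# Illustrative sketch of DDIM-style alpha-matte sampling with a decoupled
# decoder: encode once, iterate only the lightweight decoder.
import torch

@torch.no_grad()
def sample_alpha(encoder, decoder, image, trimap, alphas_cumprod, num_steps=10):
    features = encoder(torch.cat([image, trimap], dim=1))   # encode once
    alpha_t = torch.randn_like(trimap)                      # start from noise
    T = alphas_cumprod.shape[0]
    # Uniform time intervals, matching the self-aligned training strategy.
    steps = torch.linspace(T - 1, 0, num_steps).long()
    for i, t in enumerate(steps):
        a_t = alphas_cumprod[t]
        alpha_0 = decoder(features, alpha_t, t)             # predict clean matte
        if i + 1 < len(steps):
            # Deterministic DDIM update toward the next (earlier) timestep.
            a_prev = alphas_cumprod[steps[i + 1]]
            eps = (alpha_t - a_t.sqrt() * alpha_0) / (1 - a_t).sqrt()
            alpha_t = a_prev.sqrt() * alpha_0 + (1 - a_prev).sqrt() * eps
    return alpha_0.clamp(0, 1)
```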

## Quick Installation

Our approach is developed with Python 3.8, PyTorch 2.0, CUDA 11.7, and cuDNN 8.5.

Run the following command to install the required packages:

```bash
pip install -r requirements.txt
```

To install detectron2, please follow its official documentation. Alternatively, you can run the following command:

```bash
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
```
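As a quick sanity check that the environment matches the versions above, you can run a snippet like the following (illustrative; the printed version strings depend on your install):

```python
# Verify that PyTorch, CUDA, and detectron2 are importable and report versions.
import torch
import detectron2

print("PyTorch:", torch.__version__)          # expect 2.0.x
print("CUDA available:", torch.cuda.is_available())
print("CUDA version:", torch.version.cuda)    # expect 11.7
print("detectron2:", detectron2.__version__)
```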

## Visualization

Figure: Qualitative results compared with previous SOTA methods on Composition-1k.

Figure: Visualization of the inference trajectory, depicting the predicted alpha matte at different iterations while employing DDIM with 10 sampling steps.

## Results

### Quantitative Results on Composition-1k (step 10)

| Model | SAD | MSE | Grad | Conn | Params | Checkpoints |
|---|---|---|---|---|---|---|
| DiffMatte-Res34 | 29.20 | 6.04 | 11.37 | 25.48 | 23.9M | GoogleDrive |
| DiffMatte-SwinT | 20.87 | 3.23 | 6.37 | 15.84 | 48.8M | GoogleDrive |
| DiffMatte-ViTS | 20.52 | 3.06 | 7.05 | 14.85 | 29.0M | GoogleDrive |
| DiffMatte-ViTB | 18.63 | 2.54 | 5.82 | 13.10 | 101.4M | GoogleDrive |
| DiffMatte-ViTS (1024) | 17.15 | 2.26 | 5.13 | 11.42 | 29.0M | GoogleDrive |

## Data Preparation

1. Get the DIM dataset from Deep Image Matting.

2. For DIM dataset preparation, please refer to GCA-Matting:

   - For training, merge the 'Adobe-licensed images' and 'Other' folders to use all 431 foregrounds and alphas.
   - For testing, use 'Composition_code.py' and 'copy_testing_alpha.sh' from GCA-Matting (a sketch of the underlying compositing equation is shown after this list).

3. For background images, download the PASCAL and COCO datasets.

If you want to download the prepared test set directly: download link
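For reference, Composition-1k-style test images are built with the standard matting compositing equation I = αF + (1 − α)B. Below is a minimal, illustrative sketch of that equation in Python; the official pipeline is 'Composition_code.py' from GCA-Matting, and the function and paths here are hypothetical:

```python
# Composite a foreground onto a background with its alpha matte:
# I = alpha * F + (1 - alpha) * B. Illustrative only.
import cv2
import numpy as np

def composite(fg_path, alpha_path, bg_path):
    fg = cv2.imread(fg_path).astype(np.float32)
    alpha = cv2.imread(alpha_path, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
    bg = cv2.imread(bg_path).astype(np.float32)
    bg = cv2.resize(bg, (fg.shape[1], fg.shape[0]))  # match foreground size
    alpha = alpha[:, :, None]                        # HxW -> HxWx1 for broadcasting
    comp = alpha * fg + (1.0 - alpha) * bg
    return comp.astype(np.uint8)
```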

## Testing on Composition-1k dataset

1. Run the inference code (the predicted alpha mattes will be saved to ./predDIM/pred_alpha by default). Here "ddim10" denotes DDIM sampling with 10 steps:

```bash
python inference.py \
    --config-dir ./configs/CONFIG.py \
    --checkpoint-dir ./CHECKPOINT_PATH \
    --inference-dir ./SAVE_DIR \
    --data-dir /DataDir \
    --sample-strategy "ddim10"
```

2. Evaluate the results with the official MATLAB evaluation code (provided by Deep Image Matting).

3. Alternatively, you can evaluate the results more simply with the (unofficial) Python evaluation code:

```bash
CUDA_VISIBLE_DEVICES=3 python evaluation.py \
    --pred-dir ./SAVE_DIR \
    --label-dir /DataDir/Composition-1k-testset/alpha_copy \
    --trimap-dir /DataDir/Composition-1k-testset/trimaps
```
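For intuition, the two simplest matting metrics (SAD and MSE) are conventionally computed only over the unknown region of the trimap. Below is a minimal, illustrative sketch under the common conventions (unknown pixels marked 128, SAD scaled by 1e-3; the function name is hypothetical); the official MATLAB code remains the reference:

```python
# Compute SAD and MSE over the unknown trimap region, following the usual
# Composition-1k convention. Inputs are uint8 arrays in [0, 255].
import numpy as np

def sad_mse(pred, gt, trimap):
    pred = pred.astype(np.float64) / 255.0
    gt = gt.astype(np.float64) / 255.0
    unknown = trimap == 128                    # evaluation region only
    diff = pred[unknown] - gt[unknown]
    sad = np.abs(diff).sum() / 1000.0          # reported in thousands by convention
    mse = (diff ** 2).mean()
    return sad, mse
```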

## To do list

## License

The code is released under the MIT License. It is a short, permissive software license. Basically, you can do whatever you want as long as you include the original copyright and license notice in any copy of the software/source.

## Citation

```bibtex
@misc{hu2023diffusion,
      title={Diffusion for Natural Image Matting},
      author={Yihan Hu and Yiheng Lin and Wei Wang and Yao Zhao and Yunchao Wei and Humphrey Shi},
      year={2023},
      eprint={2312.05915},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

## Acknowledgement

Our project is developed based on ViTMatte, MatteFormer and GCA-Matting. Thanks for their wonderful work!