Mask Guided Matting via Progressive Refinement Network
<p align="center"> <img src="result/teaser.png" width="1050" title="Teaser Image"/> </p>

This repository includes the official project of Mask Guided (MG) Matting, presented in our paper:
Mask Guided Matting via Progressive Refinement Network (CVPR 2021)
Johns Hopkins University, Adobe Research
News
- 22 Apr 2021: Updated the code base and pre-trained weights.
- 22 Mar 2021: Our real-world portrait dataset is now publicly available here. Code (both training and inference) is released; please refer to the code-base.
- 15 Dec 2020: Visual comparisons of different fully automatic matting systems are available in SYSTEM.md.
- 15 Dec 2020: Released the arXiv version of the paper and visualizations of sample images and videos.
Highlights
- Trimap-free Alpha Estimation: MG Matting does not require a carefully annotated trimap as guidance input. Instead, it takes a general rough mask, which can be generated automatically by segmentation or saliency models, and predicts an alpha matte with fine details (see the inference sketch after this list);
- Foreground Color Prediction: MG Matting predicts the foreground color in addition to the alpha matte; we also notice and address the inaccurate foreground annotations in Composition-1k with Random Alpha Blending;
- No Additional Training Data: MG Matting is trained only on the widely-used, publicly available synthetic dataset Composition-1k, yet shows strong performance on both synthetic and real-world benchmarks.
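To give a feel for the mask-guided inference flow, here is a minimal sketch. The function name (`infer_alpha`), the output key (`"alpha"`), and the exact way the model consumes the image and mask are illustrative assumptions, not the repository's actual API; please refer to the released inference code for the real entry point and preprocessing.

```python
# Minimal sketch of mask-guided inference (hypothetical model interface).
import cv2
import numpy as np
import torch


def infer_alpha(model, image_path, mask_path, device="cuda"):
    """Predict an alpha matte from an RGB image and a rough guidance mask."""
    # Read inputs and scale to [0, 1]; the guidance mask can come from any
    # segmentation or saliency model and does not need to be precise.
    image = cv2.imread(image_path).astype(np.float32) / 255.0
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

    # HxWx3 -> 1x3xHxW and HxW -> 1x1xHxW tensors.
    image_t = torch.from_numpy(image).permute(2, 0, 1).unsqueeze(0).to(device)
    mask_t = torch.from_numpy(mask)[None, None].to(device)

    model.eval()
    with torch.no_grad():
        # Assumed to return a dict holding the final refined alpha matte.
        out = model(image_t, mask_t)
        alpha = out["alpha"].squeeze().clamp(0, 1).cpu().numpy()

    return (alpha * 255).astype(np.uint8)
```

In practice you would also handle resizing or padding the inputs to the network's stride and, for this model, read out the predicted foreground color alongside the alpha matte.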
Visualization Examples
We provide examples for visually comparing MG Matting with other matting methods. We also note that our model can potentially handle video matting.
Dataset
In our experiments, only the Composition-1k training set is used to train the model. The trained model is evaluated on three datasets: Composition-1k, Distinction-646, and our real-world portrait dataset.
For Composition-1k, please contact Brian Price (bprice@adobe.com) to request the dataset, and refer to GCA Matting for dataset preparation.
For Distinction-646, please refer to HAttMatting for the dataset.
Our real-world portrait dataset is publicly available; you can download it at this link.
Citation
If you find this work or code useful for your research, please cite it with the following BibTeX entry:
@article{yu2020mask,
title={Mask Guided Matting via Progressive Refinement Network},
author={Yu, Qihang and Zhang, Jianming and Zhang, He and Wang, Yilin and Lin, Zhe and Xu, Ning and Bai, Yutong and Yuille, Alan},
journal={arXiv preprint arXiv:2012.06722},
year={2020}
}
Acknowledgment
License
- Research only;
- The project can only be redistributed under a Creative Commons Attribution-NonCommercial 2.0 Generic (CC BY-NC 2.0) license; the terms are available at https://creativecommons.org/licenses/by-nc/2.0/deed.en_GB.