Amodal Completion via Progressive Mixed Context Diffusion (CVPR 2024)
Project Page | Paper | arXiv | Bibtex
Katherine Xu$^{1}$, Lingzhi Zhang$^{2}$, Jianbo Shi$^1$<br> $^1$ University of Pennsylvania, $^2$ Adobe Inc.
Our method can recover the hidden pixels of objects in diverse images. Occluders may be co-occurring (a person on a surfboard), accidental (a cat in front of a microwave), the image boundary (giraffe), or a combination of these scenarios. The pink outline indicates an occluder object.
We use pretrained diffusion inpainting models, and no additional training is required!
🚀 Updates
- Stay tuned for our code release!
Table of Contents
- Requirements
- Setup
- Dataset
- Usage
- Citation
Requirements
- Python 3.10
- Docker
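A quick way to confirm both requirements are available on your machine:

```bash
# Verify the Python and Docker installations
python3 --version   # expect Python 3.10.x
docker --version
```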
Setup
- Clone this `amodal` repository, and run `cd Grounded-Segment-Anything`.
- In the Dockerfile, change all instances of `/home/appuser` to your path for the `amodal` repository.
- Run `make build-image`.
- Start and attach to a Docker container from the image `gsa:v0`. Then, navigate to the `amodal` repository (see the example commands after this list).
- Run `./install.sh` to finish setup and download model checkpoints.
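For reference, the overall Docker workflow might look roughly like the sketch below. The repository URL, the local path `/path/to/amodal`, and the mount point are placeholders to replace with your own values; the `make build-image` target and the `gsa:v0` image tag come from the steps above.

```bash
# Clone the repository and enter the Grounded-Segment-Anything folder
git clone <amodal-repo-url> amodal   # placeholder URL; use this repository's actual address
cd amodal/Grounded-Segment-Anything

# After editing the Dockerfile paths, build the Docker image (tagged gsa:v0)
make build-image

# Start an interactive container with GPU access, mounting your local amodal path
docker run --gpus all -it -v /path/to/amodal:/path/to/amodal gsa:v0 /bin/bash

# Inside the container, navigate to the repository and finish setup
cd /path/to/amodal
./install.sh
```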
Dataset
- Run `./download_dataset.sh` to download the COCO dataset (a rough manual equivalent is sketched below).
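For orientation, the download roughly amounts to fetching COCO images and annotations from the official mirrors. The snippet below is a manual sketch that assumes the 2017 validation split and a `datasets/coco` folder; the actual splits and layout used by `./download_dataset.sh` may differ.

```bash
# Manual COCO download sketch; the splits and target folder are assumptions
mkdir -p datasets/coco && cd datasets/coco
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip val2017.zip
unzip annotations_trainval2017.zip
```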
Usage
Progressive Occlusion-aware Completion Pipeline
- In `./main.sh`, modify `input_dir` to your folder path for the images.
- Run `./main.sh`. You may need to use `chmod` to change the file permissions first (see the example below).
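Putting both steps together, a typical invocation could look like the following; the example `input_dir` value is only a placeholder.

```bash
# Make the pipeline script executable (only needed once)
chmod +x main.sh

# Inside main.sh, point input_dir at your image folder, e.g.:
#   input_dir=/path/to/your/images

# Run the progressive occlusion-aware completion pipeline
./main.sh
```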
Citation
If you find our work useful, please cite our paper:
@inproceedings{xu2024amodal,
title={Amodal completion via progressive mixed context diffusion},
author={Xu, Katherine and Zhang, Lingzhi and Shi, Jianbo},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={9099--9109},
year={2024}
}