RePaint

Inpainting using Denoising Diffusion Probabilistic Models

CVPR 2022 [Paper]

[Animation: inpainting with denoising diffusion]

Setup

1. Code

git clone https://github.com/andreas128/RePaint.git

2. Environment

pip install numpy torch blobfile tqdm pyYaml pillow    # e.g. torch 1.7.1+cu110.

3. Download models and data

pip install --upgrade gdown && bash ./download.sh

That downloads the models for ImageNet, CelebA-HQ, and Places2, as well as the face example and example masks.

4. Run example

python test.py --conf_path confs/face_example.yml

Find the output in ./log/face_example/inpainted

Note: After refactoring the code, we did not reevaluate all experiments.

<br>

RePaint fills a missing image part using diffusion models

<table border="0" cellspacing="0" cellpadding="0"> <tr> <td><img alt="RePaint Inpainting using Denoising Diffusion Probabilistic Models Demo 1" src="https://user-images.githubusercontent.com/11280511/150766080-9f3d7bc9-99f2-472e-9e5d-b6ed456340d1.gif"></td> <td><img alt="RePaint Inpainting using Denoising Diffusion Probabilistic Models Demo 2" src="https://user-images.githubusercontent.com/11280511/150766125-adf5a3cb-17f2-432c-a8f6-ce0b97122819.gif"></td> </tr> </table>

What are the blue parts? <br> Those parts are missing and therefore have to be filled by RePaint. <br> RePaint generates the missing parts inspired by the known parts.

How does it work? <br> RePaint starts from pure noise. Then the image is denoised step-by-step. <br> It uses the known part to fill the unknown part in each step.

Why does the noise level fluctuate during generation? <br> Our noise schedule improves the harmony between the generated and <br> the known part [4.2 Resampling].

<br>

Details on data

Which datasets and masks have a ready-to-use config file?

We provide config files for ImageNet (inet256), CelebA-HQ (c256) and Places2 (p256) for the masks "thin", "thick", "every second line", "super-resolution", "expand" and "half" in ./confs. You can use them as shown in the example above.

How to prepare the test data?

We use LaMa for validation and testing. Follow their instructions and add the images as specified in the config files. When you download the data using download.sh, you can see examples of masks we used.

How to apply it to other images?

Copy the config file for the dataset that matches your data best (for faces aligned like CelebA-HQ use the _c256 configs, for diverse images the _inet256 configs). Then set gt_path and mask_path to point to your input. Masks use the value 255 for known regions and 0 for unknown regions (the ones that get generated).
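
As a concrete illustration, here is a minimal sketch for creating such a mask with the numpy and pillow packages installed above (the 256x256 size, hole position, and filename are arbitrary choices):

```python
from PIL import Image
import numpy as np

# 255 = known region (kept), 0 = unknown region (inpainted by RePaint).
mask = np.full((256, 256), 255, dtype=np.uint8)
mask[96:160, 96:160] = 0  # cut a 64x64 hole in the center
Image.fromarray(mask).save("my_mask.png")
```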

How to apply it for other datasets?

If you work with other data than faces, places or general images, train a model using the guided-diffusion repository. Note that RePaint is an inference scheme. We do not train or finetune the diffusion model but condition pre-trained models.
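
A training call then follows the scripts of that repository; as a rough sketch (see guided-diffusion for the full set of model and diffusion flags, the values below are only illustrative):

python scripts/image_train.py --data_dir /path/to/your/images --image_size 256 --num_channels 128 --num_res_blocks 2 --diffusion_steps 1000 --noise_schedule linear --lr 1e-4 --batch_size 4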

Adapt the code

How to design a new schedule?

Fill in your own parameters in the schedule definition in guided_diffusion/scheduler.py and visualize the resulting schedule with python guided_diffusion/scheduler.py. Then copy a config file, set your parameters under schedule_jump_params, and run the inference using python test.py --conf_path confs/my_schedule.yml.
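
To make the schedule concrete, here is a simplified sketch (not the repository's exact code) of how a resampling schedule with jumps can be generated; it returns the sequence of timesteps the sampler visits, where each jump back up re-noises the image before denoising it again:

```python
def get_schedule_jump(t_T, jump_length, jump_n_sample):
    """Simplified sketch of a RePaint-style jump schedule.

    t_T: total number of diffusion steps.
    jump_length: how far each jump goes back up (j in Sec. 4.2).
    jump_n_sample: how often each segment is resampled (r in Sec. 4.2).
    """
    # At every jump_length-th timestep, schedule (jump_n_sample - 1) jumps back up.
    jumps = {t: jump_n_sample - 1 for t in range(0, t_T - jump_length, jump_length)}

    t, ts = t_T, []
    while t >= 1:
        t -= 1                      # one regular denoising step
        ts.append(t)
        if jumps.get(t, 0) > 0:
            jumps[t] -= 1
            for _ in range(jump_length):
                t += 1              # jump back up: re-noise one step
                ts.append(t)
    return ts

# Example: print the timestep sequence to inspect a candidate schedule.
print(get_schedule_jump(t_T=50, jump_length=10, jump_n_sample=2))
```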

How to speed up the inference?

The relevant settings live under the schedule_jump_params key in the config files: reducing the total number of steps (t_T) and the number of resamplings (jump_n_sample) shortens inference at some cost in quality. You can visualize the resulting schedules as described above.
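
For orientation, the parameters in question look roughly like this (the values mirror the structure of the provided configs but are illustrative, not tuned):

```python
# Illustrative values only; see the provided config files for tuned settings.
schedule_jump_params = dict(
    t_T=250,           # total denoising steps; lower = faster, lower quality
    n_sample=1,        # samples per image
    jump_length=10,    # j: how far each jump goes back up
    jump_n_sample=10,  # r: how often to resample; lower = faster
)
```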

Code overview

Issues

Do you have further questions?

Please open an issue, and we will try to help you.

Did you find a mistake?

Please create a pull request, for example by clicking the pencil button at the top right of the GitHub page.

<br>

RePaint on diverse content and shapes of missing regions

The blue region is unknown and filled by RePaint:

[Figure: denoising diffusion probabilistic models inpainting results]

Note: RePaint creates many meaningful fillings. <br>

  1. Face: Expressions and features like an earring or a mole. <br>
  2. Computer: The computer screen shows different images, text, and even a logo. <br>
  3. Greens: RePaint makes sense of the tiny known part and incorporates it in a beetle, spaghetti, and plants. <br>
  4. Garden: From simple fillings like a curtain to complex ones like a human. <br>
<br>

Extreme Case 1: Generate every second line

[Figure: inpainting every second line]

<br>

Extreme Case 2: Upscale an image

[Figure: inpainting for super-resolution]

<br>

RePaint conditions the diffusion model on the known part

[Figure: overview of the RePaint conditioning method]

Intuition of one conditioned denoising step:

  1. Sample the known part: Add Gaussian noise to the known regions of the image. <br> We obtain a noisy image whose known region matches the noise level of the denoising process exactly.
  2. Denoise one step: Denoise the previous image for one step. This generates <br> content for the unknown region conditioned on the known region.
  3. Join: Merge the two images using the mask: known pixels from step 1, generated pixels from step 2.

Details are in Algorithm 1 on Page 5. [Paper]
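
In code, one such conditioned step looks schematically like this (a sketch of the idea, not the repository's implementation; model_step stands in for one reverse step of the pretrained DDPM, and alphas_cumprod for its noise schedule):

```python
import torch

def conditioned_step(model_step, x_t, x0_known, mask, t, alphas_cumprod):
    # mask: 1 for known pixels, 0 for unknown pixels (to be generated).
    # 1. Sample the known part at noise level t-1 via the forward process.
    a = alphas_cumprod[t - 1]
    x_known = a.sqrt() * x0_known + (1 - a).sqrt() * torch.randn_like(x_t)
    # 2. Denoise one step: the pretrained model generates the unknown part
    #    conditioned on everything it sees in x_t.
    x_unknown = model_step(x_t, t)  # sample from p_theta(x_{t-1} | x_t)
    # 3. Join: keep known pixels, take the generated ones elsewhere.
    return mask * x_known + (1 - mask) * x_unknown
```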

<br>

How to harmonize the generated part with the known part?

<img width="1577" alt="Diffusion Model Resampling" src="https://user-images.githubusercontent.com/11280511/150822917-737c00b0-b6bb-439d-a5bf-e73238d30990.png"> <br>

RePaint Fails

<img width="1653" alt="RePaint Fails" src="https://user-images.githubusercontent.com/11280511/150853163-b965f59c-5ad4-485b-816e-4391e77b5199.png"> <br>

User Study State-of-the-Art Comparison

<br>

Explore the Visual Examples

<img width="1556" alt="Denosing Diffusion Inpainting Examples" src="https://user-images.githubusercontent.com/11280511/150864677-0eb482ae-c114-4b0b-b1e0-9be9574da307.png"> <br>

Acknowledgement

This work was supported by the ETH Zürich Fund (OK), a Huawei Technologies Oy (Finland) project, and an Nvidia GPU grant.

This repository is based on guided-diffusion from OpenAI.