
Paint2Pix: Interactive Painting based Progressive Image Synthesis and Editing (ECCV 2022)

Jaskirat Singh, Liang Zheng, Cameron Smith, Jose Echevarria

Controllable image synthesis with user scribbles is a topic of keen interest in the computer vision community. In this paper, we study for the first time the problem of photorealistic image synthesis from incomplete and primitive human paintings. In particular, we propose a novel approach, paint2pix, which learns to predict (and adapt) “what a user wants to draw” from rudimentary brushstroke inputs, by learning a mapping from the manifold of incomplete human paintings to their realistic renderings. When used in conjunction with recent work on autonomous painting agents, we show that paint2pix can be used for progressive image synthesis from scratch. During this process, paint2pix allows a novice user to progressively synthesize the desired image output while requiring just a few coarse user scribbles to accurately steer the trajectory of the synthesis process. Furthermore, we find that our approach also forms a surprisingly convenient tool for real image editing, allowing the user to perform a diverse range of custom fine-grained edits through the addition of only a few well-placed brushstrokes.

<!-- [[Paper](https://arxiv.org/abs/2208.08092)][[Project Page](https://1jsingh.github.io/paint2pix)][[Demo](http://exposition.cecs.anu.edu.au:6009/)][[Citation](#citation)] -->

<a href="https://arxiv.org/abs/2208.08092"><img src="https://img.shields.io/badge/Paper-arXiv-red?style=for-the-badge" height=22.5></a> <a href="https://1jsingh.github.io/paint2pix"><img src="https://img.shields.io/badge/Project-Page-succees?style=for-the-badge&logo=GitHub" height=22.5></a> <a href="http://exposition.cecs.anu.edu.au:6009/"><img src="https://img.shields.io/badge/Online-Demo-blue?style=for-the-badge&logo=Streamlit" height=22.5></a> <a href="#citation"><img src="https://img.shields.io/badge/Paper-Citation-green?style=for-the-badge&logo=Google%20Scholar" height=22.5></a> <a href="https://twitter.com/intent/tweet?url=https%3A%2F%2Fgithub.com%2F1jsingh%2Fpaint2pix&text=Unleash%20your%20inner%20artist%20...%20synthesize%20amazing%20artwork%2C%20and%20realistic%20image%20content%20or%20simply%20perform%20a%20range%20of%20diverse%20real%20image%20edits%20using%20just%20coarse%20user%20scribbles.&hashtags=Paint2Pix%2CECCV2022"><img src="https://img.shields.io/badge/Share--white?style=for-the-badge&logo=Twitter" height=22.5></a>

<p align="center"> <img src="https://1jsingh.github.io/assets/publications/images/paint2pix.png" width="800px"/> <br> We propose <em>paint2pix</em> which helps the user directly express his/her ideas in visual form by learning to predict user-intention from a few rudimentary brushstrokes. The proposed approach can be used for (a) synthesizing a desired image output directly from scratch wherein it allows the user to control the overall synthesis trajectory using just few coarse brushstrokes (blue arrows) at key points, or, (b) performing a diverse range of custom edits directly on real image inputs. </p>

Description

Official implementation of our Paint2pix paper, including a Streamlit demo. By using autonomous painting agents as a proxy for the human painting process, Paint2pix learns to predict user intention ("what a user wants to draw") from fairly rudimentary paintings and user scribbles.

Updates

https://user-images.githubusercontent.com/25987491/185323657-a71c239c-892c-4202-b753-a84c0bf19a30.mp4

Table of Contents

- Description
- Updates
- Getting Started
  - Prerequisites
  - Installation
- Pretrained Models
- Using the Demo
- Example Results
  - Progressive Image Synthesis
  - Real Image Editing
  - Artistic Content Generation
- Acknowledgments
- Citation

<small><i><a href='http://ecotrust-canada.github.io/markdown-toc/'>Table of contents generated with markdown-toc</a></i></small>

Getting Started

Prerequisites

Installation
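The exact dependencies are not listed in this section, so the steps below are only a minimal setup sketch. It assumes a Linux machine with an NVIDIA GPU, Python 3, a CUDA-enabled PyTorch build, and the Streamlit packages used by the demo; the environment name and package versions are assumptions rather than confirmed requirements.

```shell
# Clone the repository
git clone https://github.com/1jsingh/paint2pix.git
cd paint2pix

# Create an isolated environment (name and Python version are illustrative)
conda create -n paint2pix python=3.8 -y
conda activate paint2pix

# Dependencies assumed from the demo command: PyTorch (CUDA build), Streamlit,
# and the drawable-canvas component used by demo.py
pip install torch torchvision
pip install streamlit streamlit-drawable-canvas
```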

Pretrained Models

Please download the following pretrained models, which are required for running the provided demo.

Paint2pix models

| Path | Description |
| :--- | :--- |
| Canvas Encoder - ReStyle | Paint2pix Canvas Encoder trained with a ReStyle architecture. |
| Identity Encoder - ReStyle | Paint2pix Identity Encoder trained with a ReStyle architecture. |
| StyleGAN - Watercolor Painting | StyleGAN decoder network trained to generate watercolor paintings. Used for artistic content generation with paint2pix. |
| IR-SE50 Model | Pretrained IR-SE50 model taken from TreB1eN for use in ID loss and id-encoder training. |

Please download and save the above models to the directory `pretrained_models`.
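For reference, the commands below show one way to place the downloaded checkpoints; the filenames are hypothetical placeholders, so keep whichever names the released files actually use.

```shell
# Create the target directory and move the downloaded checkpoints into it.
# Filenames are placeholders; actual names depend on the released model files.
mkdir -p pretrained_models
mv ~/Downloads/canvas_encoder_restyle.pt   pretrained_models/   # Canvas Encoder - ReStyle
mv ~/Downloads/identity_encoder_restyle.pt pretrained_models/   # Identity Encoder - ReStyle
mv ~/Downloads/stylegan_watercolor.pt      pretrained_models/   # StyleGAN - Watercolor Painting
mv ~/Downloads/model_ir_se50.pth           pretrained_models/   # IR-SE50 model
```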

Using the Demo

We provide a streamlit-drawable-canvas based demo for trying out the different features of the Paint2pix model. To start the demo, use:

```shell
CUDA_VISIBLE_DEVICES=2 streamlit run demo.py --server.port 6009
```

The demo can then be accessed at localhost:6009 on the local machine, or from a remote client via SSH port forwarding (see the sketch below).
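If the demo runs on a remote server, one common way to reach it from a local browser is SSH port forwarding. The sketch below assumes the port 6009 used above; the username and host are placeholders.

```shell
# Forward the remote Streamlit port to the local machine (user/host are placeholders)
ssh -L 6009:localhost:6009 <user>@<remote-host>
# Then open http://localhost:6009 in a local browser
```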

The demo has been divided into 3 convenient sections:

  1. Real Image Editing: Allows the user to edit real images using coarse user scribbles.
  2. Progressive Image Synthesis: Start from an empty canvas and design your desired image output using just coarse scribbles.
  3. Artistic Content Generation: Unleash your inner artist! Create highly artistic portraits using just coarse scribbles.

Example Results

Progressive Image Synthesis

<p align="center"> <img src="docs/prog-synthesis.png" width="800px"/> <br> Paint2pix for progressive image synthesis </p>

Real Image Editing

<p align="center"> <img src="docs/custom-color-edits.png" width="800px"/> <br> Paint2pix for achieving diverse custom real-image edits </p>

Artistic Content Generation

<p align="center"> <img src="docs/watercolor-synthesis.png" width="800px"/> <br> Paint2pix for generating highly artistic content using coarse scribbles </p>

Acknowledgments

This code borrows heavily from pixel2style2pixel, encoder4editing and restyle-encoder.

Citation

If you use this code for your research, please cite the following works:

```bibtex
@inproceedings{singh2022paint2pix,
  title={Paint2Pix: Interactive Painting based Progressive Image Synthesis and Editing},
  author={Singh, Jaskirat and Zheng, Liang and Smith, Cameron and Echevarria, Jose},
  booktitle={European Conference on Computer Vision},
  year={2022},
  organization={Springer}
}

@inproceedings{singh2022intelli,
  title={Intelli-Paint: Towards Developing Human-like Painting Agents},
  author={Singh, Jaskirat and Smith, Cameron and Echevarria, Jose and Zheng, Liang},
  booktitle={European Conference on Computer Vision},
  year={2022},
  organization={Springer}
}
```