# Paint-Anything

We combine Segment Anything with a series of stroke-based painting models to build an interactive demo with a more human-like painting process! We will keep improving it and creating more interesting demos. đŸ”¥Interesting ideas, results, and contributions are warmly welcome!đŸ”¥
## Demo

Like a human painter, Paint-Anything creates artworks by first painting the background roughly and then rendering the foreground with fine-grained strokes.
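The coarse-to-fine idea can be sketched as a simple mask-based composite (a toy NumPy illustration with made-up arrays; the actual pipeline uses a Segment Anything mask and neural stroke renderers):

```python
import numpy as np

# Toy 4x4 grayscale "canvas": a rough background pass and a detailed
# foreground pass, blended with a segmentation mask (1 = foreground).
background = np.full((4, 4), 0.2)   # coarse background strokes
foreground = np.full((4, 4), 0.9)   # fine-grained foreground strokes
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                # object region from segmentation

# Composite: keep the rough background outside the mask,
# paint the detailed foreground inside it.
canvas = background * (1.0 - mask) + foreground * mask
print(canvas[2, 2], canvas[0, 0])   # 0.9 0.2
```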
## Installation

```shell
python -m pip install torch torchvision torchaudio
python -m pip install -e segment_anything
python -m pip install -r lama/requirements.txt
```

- The code is tested on Ubuntu 22.04 with Python 3.9.16, torch 1.10.1, CUDA 11.3, opencv-python 4.7.0, and an NVIDIA RTX 3090 GPU.
## Get Started

- Clone this repo:

  ```shell
  git clone https://github.com/Huage001/Paint-Anything.git
  cd Paint-Anything
  ```

- Download the model checkpoint of Segment Anything and move it to this project:

  ```shell
  wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth
  mv sam_vit_h_4b8939.pth segment-anything/
  ```

- Download the model checkpoint of LaMa and move it to this project:

  ```shell
  curl -L $(yadisk-direct https://disk.yandex.ru/d/ouP6l8VJ0HpMZg) -o big-lama.zip
  mv big-lama.zip lama/
  unzip lama/big-lama.zip
  rm lama/big-lama.zip
  ```

- Download the model checkpoints of LearningToPaint, renderer.pkl and actor.pkl, and move them to this project:

  ```shell
  mv [DOWNLOAD_PATH]/renderer.pkl painter/
  mv [DOWNLOAD_PATH]/actor.pkl painter/
  ```

- Run the following command, then follow the instructions printed on the console to run the interactive demo:

  ```shell
  python paint_anything.py --img_path input/demo_input.jpg
  ```

- Full usage:

  ```shell
  python paint_anything.py [-h] --img_path IMG_PATH [--output_dir OUTPUT_DIR]
  ```
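Based on the usage string above, the command-line interface can be sketched with `argparse` (a hypothetical reconstruction; the `output` default is an assumption, so check `paint_anything.py` for the actual argument definitions):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Mirror the usage string: --img_path is required, --output_dir optional."""
    parser = argparse.ArgumentParser(description="Interactive Paint-Anything demo")
    parser.add_argument("--img_path", type=str, required=True,
                        help="path to the input image")
    parser.add_argument("--output_dir", type=str, default="output",  # assumed default
                        help="directory for intermediate and final paintings")
    return parser

# Parse an explicit argument list instead of sys.argv for illustration.
args = build_parser().parse_args(["--img_path", "input/demo_input.jpg"])
print(args.img_path, args.output_dir)  # input/demo_input.jpg output
```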
## Future Work

- Integrate more state-of-the-art stroke-based AI painting methods.
- Build a more user-friendly and stable user interface.
- ...
## :cupid: Acknowledgement
## Citation

If you find this project helpful for your research, please consider citing the following BibTeX entries.

```bibtex
@article{kirillov2023segany,
  title={Segment Anything},
  author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
  journal={arXiv:2304.02643},
  year={2023}
}

@inproceedings{huang2019learning,
  title={Learning to paint with model-based deep reinforcement learning},
  author={Huang, Zhewei and Heng, Wen and Zhou, Shuchang},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2019}
}

@inproceedings{liu2021paint,
  title={Paint Transformer: Feed Forward Neural Painting with Stroke Prediction},
  author={Liu, Songhua and Lin, Tianwei and He, Dongliang and Li, Fu and Deng, Ruifeng and Li, Xin and Ding, Errui and Wang, Hao},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021}
}
```