# Transfer-Any-Style

This project combines Segment Anything with a series of style transfer models to build an interactive demo. We will keep improving it and adding more demos; interesting ideas, results, and contributions are warmly welcome!

[arXiv]

## Demo

## Installation

```shell
# PyTorch backbone for SAM and the style transfer models
python -m pip install torch
# editable install of the Segment Anything package bundled in this repo
python -m pip install -e segment_anything
python -m pip install opencv-python
```

## Get Started
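The repository's own entry point is not shown here, but the core idea — segment a region with SAM, stylize it, and composite the result back into the original image — can be sketched with plain NumPy. The function name `composite_stylized_region` and the toy images below are illustrative assumptions, not the project's actual API; in practice the mask would come from a SAM predictor and the stylized image from a model such as AdaAttN.

```python
import numpy as np

def composite_stylized_region(content, stylized, mask):
    """Paste stylized pixels into the content image wherever mask is True.

    content, stylized: HxWx3 uint8 images; mask: HxW boolean array
    (e.g. a segmentation mask produced by SAM).
    """
    mask3 = mask.astype(bool)[..., None]  # add channel axis so it broadcasts over RGB
    return np.where(mask3, stylized, content)

# Toy 2x2 example: "stylize" only the top-left pixel.
content = np.zeros((2, 2, 3), dtype=np.uint8)          # black content image
stylized = np.full((2, 2, 3), 255, dtype=np.uint8)     # white "stylized" image
mask = np.array([[True, False], [False, False]])
out = composite_stylized_region(content, stylized, mask)
```

Only the masked pixel takes the stylized value; the rest of the content image is untouched, which is what keeps the style transfer local to the selected object.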

## Future Work

## :cupid: Acknowledgement

## Citation

If you find this project helpful for your research, please consider citing the following BibTeX entries.


```bibtex
@article{liu2023any,
  title={Any-to-Any Style Transfer},
  author={Liu, Songhua and Ye, Jingwen and Wang, Xinchao},
  journal={arXiv preprint arXiv:2304.09728},
  year={2023}
}

@article{kirillov2023segany,
  title={Segment Anything},
  author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
  journal={arXiv preprint arXiv:2304.02643},
  year={2023}
}

@inproceedings{liu2021adaattn,
  title={AdaAttN: Revisit Attention Mechanism in Arbitrary Neural Style Transfer},
  author={Liu, Songhua and Lin, Tianwei and He, Dongliang and Li, Fu and Wang, Meiling and Li, Xin and Sun, Zhengxing and Li, Qian and Ding, Errui},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  year={2021}
}

@article{yu2023inpaint,
  title={Inpaint Anything: Segment Anything Meets Image Inpainting},
  author={Yu, Tao and Feng, Runseng and Feng, Ruoyu and Liu, Jinming and Jin, Xin and Zeng, Wenjun and Chen, Zhibo},
  journal={arXiv preprint arXiv:2304.06790},
  year={2023}
}
```