# FateZero: Fusing Attentions for Zero-shot Text-based Video Editing (ICCV 2023 Oral)
Chenyang Qi, Xiaodong Cun, Yong Zhang, Chenyang Lei, Xintao Wang, Ying Shan, and Qifeng Chen
<a href='https://arxiv.org/abs/2303.09535'><img src='https://img.shields.io/badge/ArXiv-2303.09535-red'></a> <a href='https://fate-zero-edit.github.io/'><img src='https://img.shields.io/badge/Project-Page-Green'></a>
<table class="center">
<tr>
  <td><img src="docs/gif_results/car_posche_local_blend.gif"></td>
  <td><img src="docs/gif_results/3_sunflower_vangogh_conat_result.gif"></td>
  <td><img src="docs/gif_results/attri/16_sq_eat_02_concat_result.gif"></td>
</tr>
<tr>
  <td width=25% style="text-align:center;">"silver jeep → posche car"</td>
  <td width=25% style="text-align:center;">"+ Van Gogh style"</td>
  <td width=25% style="text-align:center;">"squirrel, Carrot → rabbit, eggplant"</td>
</tr>
</table>

## Abstract
<b>TL;DR: <font color="red">FateZero</font> is the first zero-shot framework for text-driven video editing via pretrained diffusion models without training.</b>
<details><summary>Click for the full abstract</summary>

Diffusion-based generative models have achieved remarkable success in text-based image generation. However, since the generation process contains enormous randomness, it is still challenging to apply such models to real-world visual content editing, especially in videos. In this paper, we propose <font color="red">FateZero</font>, a zero-shot text-based editing method for real-world videos that requires no per-prompt training and no use-specific mask. To edit videos consistently, we propose several techniques based on pre-trained models. First, in contrast to the straightforward DDIM inversion technique, our approach captures intermediate attention maps during inversion, which effectively retain both structural and motion information. These maps are directly fused into the editing process rather than generated during denoising. To further minimize semantic leakage from the source video, we fuse self-attention maps with a blending mask obtained from the cross-attention features of the source prompt. Furthermore, we reform the self-attention mechanism in the denoising UNet by introducing spatial-temporal attention to ensure frame consistency. Despite its simplicity, our method is the first to demonstrate zero-shot text-driven video style and local attribute editing from a trained text-to-image model. It also achieves better zero-shot shape-aware editing based on a text-to-video model. Extensive experiments demonstrate better temporal consistency and editing capability than previous works.

</details>
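As a rough illustration of the attention-fusion idea sketched in the abstract, the snippet below fuses source and edited attention maps and derives a spatial blending mask from the source cross-attention. It is a minimal, self-contained sketch: tensor shapes, function names, and the token mask are assumptions for illustration, not the repository's actual implementation (see `test_fatezero.py` and the config files for the real pipeline).

```python
# Minimal sketch of FateZero-style attention fusion (illustrative only; shapes,
# names, and the mask construction are assumptions, not the repo's actual code).
import torch

def fuse_attention(src_cross, edit_cross, src_self, edit_self,
                   token_is_edited, edited_token_idx):
    """Fuse attention maps cached during DDIM inversion (src_*) with maps from
    the editing pass (edit_*).
    cross maps: (heads, spatial, tokens); self maps: (heads, spatial, spatial)."""
    # Keep source cross-attention for unchanged words so structure/motion is preserved.
    fused_cross = torch.where(token_is_edited, edit_cross, src_cross)

    # Spatial blending mask from the source cross-attention of the edited word,
    # used to blend self-attention and limit semantic leakage of the source video.
    mask = src_cross[..., edited_token_idx].mean(dim=0)      # (spatial,)
    mask = (mask > mask.mean()).float().view(1, -1, 1)       # coarse binary mask
    fused_self = mask * edit_self + (1.0 - mask) * src_self
    return fused_cross, fused_self

# Toy usage with dummy maps: 8 heads, 64 spatial positions, 77 prompt tokens.
heads, spatial, tokens = 8, 64, 77
src_cross, edit_cross = torch.rand(heads, spatial, tokens), torch.rand(heads, spatial, tokens)
src_self, edit_self = torch.rand(heads, spatial, spatial), torch.rand(heads, spatial, spatial)
token_is_edited = torch.zeros(tokens, dtype=torch.bool)
token_is_edited[5] = True                                    # pretend token 5 was edited
fused_cross, fused_self = fuse_attention(src_cross, edit_cross, src_self, edit_self,
                                         token_is_edited, edited_token_idx=5)
```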
## Changelog
- 2023.06.10 Two examples of large motion and multiple objects.
- 2023.04.18 Code refactoring and support for local blending using the `blend_latents` option.
- 2023.04.04 Release enhanced Tune-A-Video configs and shape editing checkpoints, data, and configs.
- 2023.03.31 Refine the Hugging Face demo.
- 2023.03.27 Excited to release the Hugging Face demo, attribute editing configs, and data.
- 2023.03.22 Upload style editing configs, data, and a Colab notebook.
- 2023.03.21 Provide editing guidance for in-the-wild videos. Update the config for lower-resource machines (16 GB GPU and less than 16 GB of CPU RAM).
- 2023.03.17 Release code and paper!
## Todo
<details><summary>Click for previous todos</summary>

- Release the edit config and data for all results, and Tune-A-Video optimization
- Memory and runtime profiling and editing guidance documents
- Colab and Hugging Face demos
- Code refactoring
- Time & memory optimization
- Release more applications

</details>
## Setup Environment
Our method is tested with CUDA 11, fp16 mixed precision (via `accelerate`), and `xformers` on a single A100 or RTX 3090.
```bash
conda create -n fatezero38 python=3.8
conda activate fatezero38
pip install -r requirements.txt
```
`xformers` is recommended for A100 GPUs to save memory and running time. We find its installation not always stable, so you may try the following wheel:

```bash
wget https://github.com/ShivamShrirao/xformers-wheels/releases/download/4c06c79/xformers-0.0.15.dev0+4c06c79.d20221201-cp38-cp38-linux_x86_64.whl
pip install xformers-0.0.15.dev0+4c06c79.d20221201-cp38-cp38-linux_x86_64.whl
```
Validate the installation by running:

```bash
python test_install.py
```
You may download all data and checkpoints using the following bash command:

```bash
bash download_all.sh
```

The above command takes a few minutes and about 100 GB of disk space. Alternatively, you may download only the data and checkpoints you need later, according to your interests.
Our environment is similar to Tune-A-Video (official, unofficial) and prompt-to-prompt. You may check them for more details.
## FateZero Editing
### Style and Attribute Editing in the Teaser
Download Stable Diffusion v1-4 (or another image diffusion model of interest) and put it in `./ckpt/stable-diffusion-v1-4`.

<details><summary>Click for the bash command: </summary>

```bash
mkdir ./ckpt
cd ./ckpt
# download from Hugging Face; takes about 20 GB of space
git lfs install
git clone https://huggingface.co/CompVis/stable-diffusion-v1-4
```

</details>
Then, you could reproduce the style and attribute editing results in our teaser by running:

```bash
accelerate launch test_fatezero.py --config config/teaser/jeep_watercolor.yaml
# or CUDA_VISIBLE_DEVICES=0 python test_fatezero.py --config config/teaser/jeep_watercolor.yaml
```
<details><summary>The result is saved at `./result`. (Click for directory structure)</summary>

```
result
├── teaser
│   ├── jeep_posche
│   ├── jeep_watercolor
│           ├── cross-attention  # visualization of cross-attention during inversion
│           ├── sample           # result
│           ├── train_samples    # the input video
```

</details>
Editing 8 frames on an NVIDIA 3090 uses about 100 GB of CPU memory and 12 GB of GPU memory. We also provide some low-cost settings for style editing with different hyper-parameters that run on a 16 GB GPU. You may try these low-cost settings on Colab. More speed and hardware benchmarks are here.
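If you need to go lower than the provided configs on your own machine, the usual diffusers-level memory switches are a reasonable starting point. The snippet below is a generic sketch assuming a standard `StableDiffusionPipeline`; FateZero's actual low-cost behaviour (fewer frames, fewer DDIM steps, fp16) is controlled through its YAML configs.

```python
# Generic memory-saving switches for a diffusers pipeline (a hedged sketch, not
# FateZero's own code path; the repo's low-cost settings live in its YAML configs).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./ckpt/stable-diffusion-v1-4",
    torch_dtype=torch.float16,              # fp16 roughly halves GPU memory
)
pipe.enable_attention_slicing()              # lower peak memory at some speed cost
try:
    pipe.enable_xformers_memory_efficient_attention()  # only if xformers is installed
except Exception:
    pass
pipe = pipe.to("cuda")
```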
### Shape and large motion editing with Tune-A-Video
Besides the style and attribute editing above, we also provide a Tune-A-Video checkpoint. You may download it from OneDrive or from the Hugging Face model repository, then move it to `./ckpt/jeep_tuned_200/`.

<details><summary>Click for the bash command: </summary>

```bash
mkdir ./ckpt
cd ./ckpt
# download from Hugging Face; takes about 10 GB of space
git lfs install
git clone https://huggingface.co/chenyangqi/jeep_tuned_200
```

</details>
<details><summary>The directory structure should be like this: (Click for directory structure)</summary>

```
ckpt
├── stable-diffusion-v1-4
├── jeep_tuned_200
...
data
├── car-turn
│   ├── 00000000.png
│   ├── 00000001.png
│   ├── ...
video_diffusion
```

</details>
You could reproduce the shape editing result in our teaser by running:

```bash
accelerate launch test_fatezero.py --config config/teaser/jeep_posche.yaml
```
### Reproduce other results in the paper
Download the data from OneDrive or from the GitHub release.
<details><summary>Click for the wget bash commands: </summary>

```bash
wget https://github.com/ChenyangQiQi/FateZero/releases/download/v0.0.1/attribute.zip
wget https://github.com/ChenyangQiQi/FateZero/releases/download/v0.0.1/style.zip
wget https://github.com/ChenyangQiQi/FateZero/releases/download/v0.0.1/shape.zip
```

</details>
Unzip and place it in `./data`. Then use the commands in `config/style` and `config/attribute` to get the results.
To reproduce other shape editing results, download the Tune-A-Video checkpoints from Hugging Face:

<details><summary>Click for the bash command: </summary>

```bash
mkdir ./ckpt
cd ./ckpt
# download from Hugging Face; takes about 10 GB of space
git lfs install
git clone https://huggingface.co/chenyangqi/man_skate_250
git clone https://huggingface.co/chenyangqi/swan_150
```

</details>
Then use the commands in `config/shape`.
For the above Tune-A-Video checkpoints, we fine-tune Stable Diffusion with a synthetic negative-prompt dataset for regularization and a low-rank convolution for temporally consistent generation, using the tuning config (an illustrative sketch of the low-rank temporal convolution follows the commands below).
<details><summary>Click for the bash command example: </summary>

```bash
cd ./data
wget https://github.com/ChenyangQiQi/FateZero/releases/download/v0.0.1/negative_reg.zip
unzip negative_reg
cd ..

accelerate launch train_tune_a_video.py --config config/tune/jeep.yaml
```

</details>
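For intuition, the low-rank convolution can be pictured as a LoRA-style residual branch over the frame axis: project the channels down to a small rank, convolve across time, and project back up. The module below is an illustrative sketch under those assumptions; names, shapes, and wiring are not the repository's actual implementation in `video_diffusion`.

```python
# Illustrative low-rank temporal convolution (LoRA-style factorization over the
# frame axis). Names, shapes, and the residual wiring are assumptions, not the
# repository's actual module.
import torch
import torch.nn as nn

class LowRankTemporalConv(nn.Module):
    def __init__(self, channels: int, rank: int = 4, kernel_size: int = 3):
        super().__init__()
        # Project to a small rank, convolve over time, project back.
        self.down = nn.Conv1d(channels, rank, kernel_size=1, bias=False)
        self.temporal = nn.Conv1d(rank, rank, kernel_size,
                                  padding=kernel_size // 2, bias=False)
        self.up = nn.Conv1d(rank, channels, kernel_size=1, bias=False)
        nn.init.zeros_(self.up.weight)   # start as an identity residual

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, channels, height, width)
        b, f, c, h, w = x.shape
        y = x.permute(0, 3, 4, 2, 1).reshape(b * h * w, c, f)  # conv over frames
        y = self.up(self.temporal(self.down(y)))
        y = y.reshape(b, h, w, c, f).permute(0, 4, 3, 1, 2)
        return x + y                                            # residual update

# Toy usage: 1 video, 8 frames, 320 channels, 16x16 latent.
x = torch.randn(1, 8, 320, 16, 16)
out = LowRankTemporalConv(320)(x)
assert out.shape == x.shape
```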
To evaluate our results quantitatively, we provide `CLIP/frame_acc_tem_con.py` to calculate frame accuracy and temporal consistency using a pretrained CLIP model.
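For reference, both metrics can be computed along the following lines: frame accuracy is the fraction of edited frames that are closer (in CLIP space) to the target prompt than to the source prompt, and temporal consistency is the mean cosine similarity between CLIP embeddings of consecutive frames. The sketch below uses the `transformers` CLIP wrapper and assumes the `openai/clip-vit-large-patch14` checkpoint; the exact model and details in `CLIP/frame_acc_tem_con.py` may differ.

```python
# Sketch of CLIP-based frame accuracy and temporal consistency (assumptions:
# model choice and exact definitions; see CLIP/frame_acc_tem_con.py for the
# repository's version).
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

@torch.no_grad()
def clip_metrics(frames, source_prompt, target_prompt):
    """frames: list of PIL images (the edited video frames)."""
    inputs = processor(text=[source_prompt, target_prompt], images=frames,
                       return_tensors="pt", padding=True)
    img = model.get_image_features(pixel_values=inputs["pixel_values"])
    txt = model.get_text_features(input_ids=inputs["input_ids"],
                                  attention_mask=inputs["attention_mask"])
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)

    sim = img @ txt.T                                            # (num_frames, 2)
    frame_acc = (sim[:, 1] > sim[:, 0]).float().mean().item()    # closer to target prompt
    tem_con = (img[:-1] * img[1:]).sum(dim=-1).mean().item()     # consecutive-frame cosine
    return frame_acc, tem_con
```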
### Editing guidance for YOUR video
We provide editing guidance for in-the-wild videos here. This work is still in progress; you are welcome to give feedback in the issues.
## Style Editing Results with Stable Diffusion
We show the difference between the source prompt and the target prompt in the box below each video.
Note that the mp4 and GIF files on this GitHub page are compressed. Please check our Project Page for the mp4 files of the original video editing results.
<table class="center">
<tr>
  <td><img src="docs/gif_results/style/1_surf_ukiyo_01_concat_result.gif"></td>
  <td><img src="docs/gif_results/style/2_car_watercolor_01_concat_result.gif"></td>
  <td><img src="docs/gif_results/style/6_lily_monet_01_concat_result.gif"></td>
</tr>
<tr>
  <td width=25% style="text-align:center;">"+ Ukiyo-e style"</td>
  <td width=25% style="text-align:center;">"+ watercolor painting"</td>
  <td width=25% style="text-align:center;">"+ Monet style"</td>
</tr>
<tr>
  <td><img src="docs/gif_results/style/4_rabit_pokemon_01_concat_result.gif"></td>
  <td><img src="docs/gif_results/style/5_train_shikai_01_concat_result.gif"></td>
  <td><img src="docs/gif_results/style/parkour_watercolor_0_-1.gif"></td>
</tr>
<tr>
  <td width=25% style="text-align:center;">"+ Pokémon cartoon style"</td>
  <td width=25% style="text-align:center;">"+ Makoto Shinkai style"</td>
  <td width=25% style="text-align:center;">"+ watercolor painting"</td>
</tr>
</table>

## Attribute Editing Results with Stable Diffusion
<table class="center">
<tr>
  <td><img src="docs/gif_results/attri/15_rabbit_eat_01_concat_result.gif"></td>
  <td><img src="docs/gif_results/attri/15_rabbit_eat_02_concat_result.gif"></td>
  <td><img src="docs/gif_results/attri/15_rabbit_eat_04_concat_result.gif"></td>
</tr>
<tr>
  <td width=25% style="text-align:center;">"rabbit, strawberry → white rabbit, flower"</td>
  <td width=25% style="text-align:center;">"rabbit, strawberry → squirrel, carrot"</td>
  <td width=25% style="text-align:center;">"rabbit, strawberry → white rabbit, leaves"</td>
</tr>
<tr>
  <td><img src="docs/gif_results/attri/13_bear_tiger_leopard_lion_01_concat_result.gif"></td>
  <td><img src="docs/gif_results/attri/13_bear_tiger_leopard_lion_02_concat_result.gif"></td>
  <td><img src="docs/gif_results/attri/13_bear_tiger_leopard_lion_03_concat_result.gif"></td>
</tr>
<tr>
  <td width=25% style="text-align:center;">"bear → a red tiger"</td>
  <td width=25% style="text-align:center;">"bear → a yellow leopard"</td>
  <td width=25% style="text-align:center;">"bear → a brown lion"</td>
</tr>
<tr>
  <td><img src="docs/gif_results/attri/14_cat_grass_tiger_corgin_02_concat_result.gif"></td>
  <td><img src="docs/gif_results/attri/14_cat_grass_tiger_corgin_03_concat_result.gif"></td>
  <td><img src="docs/gif_results/attri/14_cat_grass_tiger_corgin_04_concat_result.gif"></td>
</tr>
<tr>
  <td width=25% style="text-align:center;">"cat → black cat, grass..."</td>
  <td width=25% style="text-align:center;">"cat → red tiger"</td>
  <td width=25% style="text-align:center;">"cat → Shiba-Inu"</td>
</tr>
<tr>
  <td><img src="docs/gif_results/attri/goldfish_yellow.gif"></td>
  <td><img src="docs/gif_results/attri/16_sq_eat_04_concat_result.gif"></td>
  <td><img src="docs/gif_results/attri/16_sq_eat_03_concat_result.gif"></td>
</tr>
<tr>
  <td width=25% style="text-align:center;">"orange fish → yellow fish"</td>
  <td width=25% style="text-align:center;">"squirrel → robot squirrel"</td>
  <td width=25% style="text-align:center;">"squirrel, Carrot → robot mouse, screwdriver"</td>
</tr>
<tr>
  <td><img src="docs/gif_results/attri/10_bus_gpu_01_concat_result.gif"></td>
  <td><img src="docs/gif_results/attri/11_dog_robotic_corgin_01_concat_result.gif"></td>
  <td><img src="docs/gif_results/attri/11_dog_robotic_corgin_02_concat_result.gif"></td>
</tr>
<tr>
  <td width=25% style="text-align:center;">"bus → GPU"</td>
  <td width=25% style="text-align:center;">"gray dog → yellow corgi"</td>
  <td width=25% style="text-align:center;">"gray dog → robotic dog"</td>
</tr>
<tr>
  <td><img src="docs/gif_results/attri/9_duck_rubber_01_concat_result.gif"></td>
  <td><img src="docs/gif_results/attri/12_fox_snow_wolf_01_concat_result.gif"></td>
  <td><img src="docs/gif_results/attri/12_fox_snow_wolf_02_concat_result.gif"></td>
</tr>
<tr>
  <td width=25% style="text-align:center;">"white duck → yellow rubber duck"</td>
  <td width=25% style="text-align:center;">"grass → snow"</td>
  <td width=25% style="text-align:center;">"white fox → grey wolf"</td>
</tr>
</table>

## Shape and Large Motion Editing Results with Tune-A-Video
<table class="center">
<tr>
  <td><img src="docs/gif_results/shape/17_car_posche_01_concat_result.gif"></td>
  <td><img src="docs/gif_results/shape/18_swan_01_concat_result.gif"></td>
  <td><img src="docs/gif_results/shape/18_swan_02_concat_result.gif"></td>
</tr>
<tr>
  <td width=25% style="text-align:center;">"silver jeep → posche car"</td>
  <td width=25% style="text-align:center;">"Swan → White Duck"</td>
  <td width=25% style="text-align:center;">"Swan → Pink flamingo"</td>
</tr>
<tr>
  <td><img src="docs/gif_results/shape/19_man_wonder_01_concat_result.gif"></td>
  <td><img src="docs/gif_results/shape/19_man_wonder_02_concat_result.gif"></td>
  <td><img src="docs/gif_results/shape/19_man_wonder_03_concat_result.gif"></td>
</tr>
<tr>
  <td width=25% style="text-align:center;">"A man → A Batman"</td>
  <td width=25% style="text-align:center;">"A man → A Wonder Woman, With cowboy hat"</td>
  <td width=25% style="text-align:center;">"A man → A Spider-Man"</td>
</tr>
</table>

## Online Demo
Thanks to AK and the team from Hugging Face for providing computing resources to support our Hugging Face demo, which supports up to 30 DDIM steps.
You may also run the Gradio UI for FateZero locally:

```bash
git clone https://huggingface.co/spaces/chenyangqi/FateZero
python app_fatezero.py
# we will merge the FateZero space on Hugging Face with this GitHub repo later
```
We also provide a Colab demo, which supports 10 DDIM steps. You may also launch the Colab as a Jupyter notebook on your local machine. We will refine and optimize the above demos in the following days.
## Demo Video
The video here is compressed due to the size limit of GitHub. The original full-resolution video is here.
## Citation
```bibtex
@article{qi2023fatezero,
  title={FateZero: Fusing Attentions for Zero-shot Text-based Video Editing},
  author={Chenyang Qi and Xiaodong Cun and Yong Zhang and Chenyang Lei and Xintao Wang and Ying Shan and Qifeng Chen},
  year={2023},
  journal={arXiv:2303.09535},
}
```
## Acknowledgements
This repository borrows heavily from Tune-A-Video and prompt-to-prompt. Thanks to the authors for sharing their code and models.
## Maintenance
This is the codebase for our research work. We are still working hard to update this repo, and more details are coming in the following days. If you have any questions or ideas to discuss, feel free to contact Chenyang Qi or Xiaodong Cun.