<h2 align="center"> <a href="https://github.com/nazmul-karim170/SAVE-Text2Video-Diffusion"> SAVE: Spectral-Shift-Aware Adaptation of Image Diffusion Models for Text-guided Video Editing</a></h2> <h5 align="center"> If you like our project, please give us a star ⭐ on GitHub for the latest updates. </h5> <h5 align="center">

webpage arXiv License: MIT

</h5>

Project page | Paper

<img src="asset/Results.png"/>

😮 Highlights

SAVE lets you edit a video in about 3 minutes, instead of the roughly 30 minutes required by existing SOTA methods!

💡 Efficient, high-quality, and fast

🚩 Updates

Watch 👀 this repository to stay up to date with the latest updates.

[2024.06.07] : Released our code.

[2023.12.01] : Released our paper, SAVE, on arXiv.

[2023.12.01] : Released the project page.

🛠️ Methodology

<img src="asset/Main.png"/>
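As the name suggests, SAVE adapts the pretrained image diffusion model by learning a shift over the spectrum (singular values) of its weights rather than fine-tuning all parameters, which is what makes editing fast. Below is a minimal, illustrative PyTorch sketch of that parameterization only; the class name, the choice of layers to adapt, and any spectral-shift regularization are simplifications here, so refer to the code in this repo for the actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectralShiftLinear(nn.Module):
    """Illustrative layer: keeps the singular vectors of a pretrained weight
    frozen and learns only a shift over its singular values (the "spectral shift")."""

    def __init__(self, pretrained_weight: torch.Tensor):
        super().__init__()
        # Factorize the frozen pretrained weight W = U diag(sigma) V^T.
        U, sigma, Vh = torch.linalg.svd(pretrained_weight, full_matrices=False)
        self.register_buffer("U", U)
        self.register_buffer("sigma", sigma)
        self.register_buffer("Vh", Vh)
        # The only trainable parameters: one shift value per singular value.
        self.delta = nn.Parameter(torch.zeros_like(sigma))

    def adapted_weight(self) -> torch.Tensor:
        # Shift the spectrum, keep it non-negative, and rebuild the weight.
        shifted = F.relu(self.sigma + self.delta)
        return self.U @ torch.diag(shifted) @ self.Vh

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.linear(x, self.adapted_weight())
```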

Implementation of the SAVE algorithm.

First, create a conda environment:

conda create -n save
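Then activate it before installing dependencies (you may also want to pin a Python version, e.g. `python=3.10`, when creating the environment, depending on your setup):

conda activate save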

Next, install the required packages:

pip install -r requirements.txt

Run the following command to edit a given video:

python Edit_Video_SAVE.py

To edit a new video, point the `--config` argument to a different config file.
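For example (the config path below is hypothetical; use the config file that describes your video and editing prompt):

python Edit_Video_SAVE.py --config configs/my_video.yaml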

🚀 Video-Editing Results

Qualitative comparison

<img src="asset/Compare.png"/>

Quantitative comparison

<img src="asset/quant_S.png"/>

👍 Acknowledgement

This work builds on many amazing research works and open-source projects. Thanks to all the authors for sharing their work!

✏️ Citation

If you find our paper and code useful in your research, please consider giving a star :star: and a citation :pencil:.

@misc{karim2023save,
      title={SAVE: Spectral-Shift-Aware Adaptation of Image Diffusion Models for Text-driven Video Editing}, 
      author={Nazmul Karim and Umar Khalid and Mohsen Joneidi and Chen Chen and Nazanin Rahnavard},
      year={2023},
      eprint={2305.18670},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}