<h2 align="center"><a href="https://github.com/nazmul-karim170/SAVE-Text2Video-Diffusion">SAVE: Spectral-Shift-Aware Adaptation of Image Diffusion Models for Text-guided Video Editing</a></h2>
<h5 align="center">If you like our project, please give us a star ⭐ on GitHub for the latest updates.</h5>

Project page | Paper
<img src="asset/Results.png"/>

## 😮 Highlights
SAVE lets you edit a video in about 3 minutes, compared to roughly 30 minutes for state-of-the-art methods.

### 💡 Efficient, High-Quality, and Fast
- Builds on Stable Diffusion (SD) for image generation --> high quality
- Fine-tunes only the singular values of the query matrices --> efficient adaptation
- Regularizes the singular-value updates to stay close to the pretrained model
## 🚩 Updates
Welcome to watch 👀 this repository for the latest updates.
✅ [2024.06.07]: We released our code.
✅ [2023.12.01]: We released our paper, SAVE, on arXiv.
✅ [2023.12.01]: We released the project page.
## 🛠️ Methodology
<img src="asset/Main.png"/>

Implementation of the SAVE algorithm.
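In code, the idea above amounts to reparameterizing each attention query projection through its SVD and training only a shift on the singular values, keeping the singular vectors frozen. The NumPy sketch below is a minimal illustration under that assumption, not the repository's actual implementation; the regularizer is a toy stand-in for the paper's spectral-shift-aware term.

```python
import numpy as np

def decompose(W):
    """One-time SVD of a pretrained weight; U and Vt stay frozen."""
    U, sigma, Vt = np.linalg.svd(W, full_matrices=False)
    return U, sigma, Vt

def adapted_weight(U, sigma, Vt, delta):
    """Reassemble the weight with shifted singular values."""
    return U @ np.diag(sigma + delta) @ Vt

def spectral_shift_penalty(sigma, delta, lam=0.1):
    """Toy regularizer on the relative shifts (illustrative only)."""
    return lam * np.sum((delta / sigma) ** 2)

# Toy 4x4 "query" matrix: training delta means 4 parameters instead of 16.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
U, sigma, Vt = decompose(W)

# A zero shift exactly recovers the pretrained weight.
delta = np.zeros_like(sigma)
assert np.allclose(adapted_weight(U, sigma, Vt, delta), W)
assert spectral_shift_penalty(sigma, delta) == 0.0
```

In practice `delta` would be the only tensor registered as trainable during fine-tuning, which is why adaptation is both fast and parameter-efficient.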
First, create and activate a conda environment:

```shell
conda create -n save
conda activate save
```

Then install the required packages:

```shell
pip install -r requirements.txt
```
Run the following command to edit a given video:

```shell
python Edit_Video_SAVE.py
```

Change the `--config` argument to edit a different video.
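A config typically specifies the source video, the source prompt, and the target edit prompts. The keys below are illustrative assumptions, not the repository's exact schema; check the config files shipped with the repo for the real field names:

```yaml
# Hypothetical config sketch -- consult the repo's bundled configs for exact keys
pretrained_model_path: "checkpoints/stable-diffusion-v1-5"
video_path: "data/car-turn.mp4"
source_prompt: "a jeep driving down a mountain road"
edit_prompts:
  - "a sports car driving down a mountain road"
```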
## 🚀 Video-Editing Results

### Qualitative comparison
<img src="asset/Compare.png"/>

### Quantitative comparison
<img src="asset/quant_S.png"/>

## 👍 Acknowledgement
This work builds on many amazing research works and open-source projects. Thanks a lot to all the authors for sharing!
## ✏️ Citation
If you find our paper and code useful in your research, please consider giving a star :star: and a citation :pencil:.
```bibtex
@misc{karim2023save,
  title={SAVE: Spectral-Shift-Aware Adaptation of Image Diffusion Models for Text-driven Video Editing},
  author={Nazmul Karim and Umar Khalid and Mohsen Joneidi and Chen Chen and Nazanin Rahnavard},
  year={2023},
  eprint={2305.18670},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```