DreamRunner: Fine-Grained Storytelling Video Generation with Retrieval-Augmented Motion Adaptation
Zun Wang, Jialu Li, Han Lin, Jaehong Yoon, Mohit Bansal
<br> <img width="950" src="files/teaser.gif"/> <br>Code coming soon!
ToDos
- Release the inference code for T2V-CompBench.
- Release the code for retrieving videos and training character and motion LoRAs.
- Release the inference code for storytelling video generation.
Setup
Environment Setup
conda create -n dreamrunner python==3.10
conda activate dreamrunner
pip install -r requirements.txt
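To verify the environment, a quick sanity check (this assumes requirements.txt installs a CUDA-enabled build of PyTorch):
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"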
Download Models
DreamRunner is implemented on top of CogVideoX-2B. You can download the checkpoint from Hugging Face and place it in pretrained_models/CogVideoX-2b.
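One way to fetch the checkpoint is via the Hugging Face CLI; this sketch assumes the model is hosted at the THUDM/CogVideoX-2b repository on Hugging Face (adjust the repo id if you download from elsewhere):
pip install "huggingface_hub[cli]"
huggingface-cli download THUDM/CogVideoX-2b --local-dir pretrained_models/CogVideoX-2b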
Running the Code
T2V-CompBench
Inference
We provide the plans we used for T2V-CompBench in MotionDirector_SR3AI/t2v-combench/plan.
You can specify the GPUs you want to use in MotionDirector_SR3AI/t2v-combench-2b.sh for parallel inference.
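As a sketch, the standard way to restrict a run to specific devices is CUDA_VISIBLE_DEVICES; the exact variable the script reads may differ, so check t2v-combench-2b.sh itself:
export CUDA_VISIBLE_DEVICES=0,1,2,3  # expose only GPUs 0-3 to the run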
Then run inference for all 600 videos across the 6 dimensions of T2V-CompBench with the following script:
cd MotionDirector_SR3AI
bash run_bench_2b.sh
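Generating 600 videos can take a while, so you may prefer to run the script in the background and keep a log (plain shell usage, nothing repo-specific):
nohup bash run_bench_2b.sh > bench.log 2>&1 &  # run in the background, capture stdout/stderr
tail -f bench.log  # follow progress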
The generated videos will be saved to MotionDirector_SR3AI/T2V-CompBench.
Evaluation
Please follow T2V-CompBench's official evaluation protocol to evaluate the generated videos.
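A minimal sketch of that workflow, assuming the official benchmark repo at https://github.com/KaiyueSun98/T2V-CompBench and its per-dimension evaluation scripts (see that repo's README for the exact commands and metric dependencies):
git clone https://github.com/KaiyueSun98/T2V-CompBench.git
cd T2V-CompBench
# run its evaluation scripts on ../MotionDirector_SR3AI/T2V-CompBench,
# the output directory produced in the previous step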
Storytelling Video Generation
Coming soon!
Citation
If you find our project useful in your research, please cite the following paper:
@article{zun2024dreamrunner,
  author  = {Zun Wang and Jialu Li and Han Lin and Jaehong Yoon and Mohit Bansal},
  title   = {DreamRunner: Fine-Grained Storytelling Video Generation with Retrieval-Augmented Motion Adaptation},
  journal = {arXiv preprint arXiv:2411.16657},
  year    = {2024},
  url     = {https://arxiv.org/abs/2411.16657}
}