CamI2V: Camera-Controlled Image-to-Video Diffusion Model

Official repository for the paper "CamI2V: Camera-Controlled Image-to-Video Diffusion Model".

Project page: https://zgctroy.github.io/CamI2V/

Abstract: Recently, camera pose, as a user-friendly and physics-related condition, has been introduced into text-to-video diffusion models for camera control. However, existing methods simply inject camera conditions through a side input. These approaches neglect the inherent physical knowledge embedded in camera pose, resulting in imprecise camera control, inconsistencies, and poor interpretability. In this paper, we emphasize the necessity of integrating explicit physical constraints into model design. We propose epipolar attention to model all cross-frame relationships from a novel perspective of noised condition: features are aggregated from the corresponding epipolar lines in all noised frames, overcoming the limitation of current attention mechanisms in tracking displaced features across frames, especially when features move significantly with the camera and become obscured by noise. Additionally, we introduce register tokens to handle frames without epipolar intersections, which commonly arise from rapid camera movement, dynamic objects, or occlusion. To support image-to-video, we propose multiple guidance scales that allow precise, separate control over the image, text, and camera conditions. Furthermore, we establish a more robust and reproducible evaluation pipeline to address the inaccuracy and instability of existing camera-control metrics. Our method achieves a 25.5% improvement in camera controllability on RealEstate10K while maintaining strong generalization to out-of-domain images. With optimization, training requires only 24 GB of GPU memory and inference only 12 GB. We plan to release checkpoints along with training and evaluation code.
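For intuition on the epipolar attention described above, here is a minimal sketch of how a binary epipolar attention mask can be built from a relative camera pose. The function names, the shared-intrinsics assumption, and the pixel threshold are illustrative assumptions, not the paper's exact implementation:

```python
import torch

def skew(t: torch.Tensor) -> torch.Tensor:
    # Skew-symmetric matrix [t]_x, so that skew(t) @ v == torch.cross(t, v).
    tx, ty, tz = t.tolist()
    return t.new_tensor([[0.0, -tz,  ty],
                         [ tz, 0.0, -tx],
                         [-ty,  tx, 0.0]])

def epipolar_mask(K, R, t, h, w, threshold=2.0):
    """Boolean (h*w, h*w) mask: entry [q, p] is True when key pixel p in the
    target frame lies within `threshold` pixels of the epipolar line induced
    by query pixel q in the source frame. Assumes both frames share K."""
    K_inv = torch.linalg.inv(K)
    F = K_inv.T @ skew(t) @ R @ K_inv              # fundamental matrix
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32),
                            indexing="ij")
    pix = torch.stack([xs.flatten(), ys.flatten(),
                       torch.ones(h * w)], dim=0)  # (3, h*w) homogeneous pixels
    lines = F @ pix                                # column q: epipolar line of query q
    dist = (lines.T @ pix).abs()                   # |l_q . p_p| for all (q, p) pairs
    norm = lines[:2].norm(dim=0).clamp(min=1e-8)   # sqrt(a^2 + b^2) per line
    return (dist / norm.unsqueeze(1)) < threshold  # True where attention is allowed
```

A query whose mask row comes out all-False (no epipolar intersection, e.g. under rapid camera motion) is the case the paper's register tokens address; in a sketch like this, one would append a few always-attendable key/value tokens before applying the mask.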

News and ToDo List

Performance

256x256 resolution, 25 steps, RTX 3090, 16 frames

| Method ($c_\text{txt,img}=7.5$, $c_\text{cam}=1.0$) | Parameters | Generation Time $\downarrow$ | RotErr $\downarrow$ | TransErr $\downarrow$ | CamMC $\downarrow$ | FVD (VideoGPT) $\downarrow$ | FVD (StyleGAN) $\downarrow$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| DynamiCrafter | 1.4 B | 8.14 s | 3.3772 | 9.7700 | 11.544 | 117.785 | 103.510 |
| DynamiCrafter + MotionCtrl | + 63.4 M | 8.27 s | 0.9771 | 2.4435 | 3.0235 | 68.545 | 61.027 |
| DynamiCrafter + CameraCtrl | + 211 M | 8.38 s | 0.6984 | 1.8658 | 2.2445 | 68.422 | 60.235 |
| DynamiCrafter + CamI2V | + 261 M | 10.3 s | 0.4257 | 1.4226 | 1.6277 | 63.940 | 54.897 |
| DynamiCrafter + CamI2V (only Plücker, no epipolar) | | | 0.7624 | 2.0397 | 2.4542 | 66.237 | 58.179 |
| DynamiCrafter + CamI2V (no Plücker, only epipolar) | | | 1.5905 | 5.2980 | 6.2457 | 87.248 | 77.236 |
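The two scales in the table header are the multiple guidance scales from the paper, one for the text/image pair and one for the camera. Below is a minimal sketch of one standard way to compose such nested classifier-free guidance; the `model` signature and the nesting order are assumptions for illustration, not necessarily the paper's exact formulation:

```python
import torch

@torch.no_grad()
def multi_guidance_eps(model, x_t, t, txt, img, cam,
                       c_txt_img=7.5, c_cam=1.0):
    # Three denoiser passes: unconditional, text+image only, and all conditions.
    # `model(x, t, txt=..., img=..., cam=...)` is a hypothetical signature.
    e_uncond = model(x_t, t, txt=None, img=None, cam=None)
    e_txtimg = model(x_t, t, txt=txt, img=img, cam=None)
    e_full   = model(x_t, t, txt=txt, img=img, cam=cam)
    # Each scale amplifies only the residual contributed by its condition group,
    # so camera guidance can be tuned without changing text/image guidance.
    return (e_uncond
            + c_txt_img * (e_txtimg - e_uncond)
            + c_cam * (e_full - e_txtimg))
```

Setting $c_\text{cam}=1.0$, as in the table, keeps the camera branch at its conditional prediction while text/image guidance is amplified independently.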


Visualization

1024x576

zoom in + zoom out

512x320

Also see the 512-resolution section of https://zgctroy.github.io/CamI2V/

256x256

See the 256-resolution section of https://zgctroy.github.io/CamI2V/

Related Repos

CameraCtrl: https://github.com/hehao13/CameraCtrl

MotionCtrl: https://github.com/TencentARC/MotionCtrl/tree/animatediff

Citation

@inproceedings{anonymous2025camiv,
    title={CamI2V: Camera-Controlled Image-to-Video Diffusion Model},
    author={Anonymous},
    booktitle={Submitted to The Thirteenth International Conference on Learning Representations},
    year={2025},
    url={https://openreview.net/forum?id=dIZB7jeSUv},
    note={under review}
}