Still-Moving: Open-Source Implementation

About

This repository contains an open-source implementation of the "Still-Moving" model, based on the paper "Still-Moving: Customized Video Generation without Customized Video Data" by Chefer et al. (see the paper's project page for details).

Still-Moving is a novel framework for customizing text-to-video (T2V) generation models without requiring customized video data. It leverages customized text-to-image (T2I) models and adapts them for video generation, combining spatial priors from T2I models with motion priors from T2V models.
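At the core of this adapter-based customization is the idea of adding small trainable residual modules on top of frozen pretrained weights. The sketch below is a hypothetical, minimal LoRA-style adapter in PyTorch (the class name `LoRAAdapter` and its hyperparameters are illustrative, not taken from the paper or this repository):

```python
# Hypothetical sketch of a LoRA-style adapter: a low-rank trainable
# residual added on top of a frozen pretrained linear projection.
import torch
import torch.nn as nn

class LoRAAdapter(nn.Module):
    """Computes base(x) + scale * up(down(x)) with the base frozen."""

    def __init__(self, base: nn.Linear, rank: int = 4, scale: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep pretrained weights frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)  # adapter starts as an identity residual
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.up(self.down(x))

layer = LoRAAdapter(nn.Linear(64, 64), rank=8)
out = layer(torch.randn(2, 64))
print(out.shape)  # torch.Size([2, 64])
```

Because the up-projection is zero-initialized, the adapted layer initially reproduces the frozen model exactly; training then only moves the low-rank residual.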

Progress

I trained the Motion Adapter and Spatial Adapter as described in the paper. However, the generated motion is unexpectedly fast, and output quality is poor when using a customized DreamBooth model; this is still under investigation.
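The adapter training described above boils down to optimizing only the adapter parameters while the pretrained backbone stays frozen. The following is a toy sketch of such a training step, with stand-in modules (the layer sizes, loss, and helper `adapter_parameters` are illustrative assumptions, not the repository's actual training code):

```python
# Hypothetical sketch of one adapter training step: only adapter weights
# receive gradients; the frozen backbone contributes no trainable params.
import torch
import torch.nn as nn
import torch.nn.functional as F

def adapter_parameters(model: nn.Module):
    # collect only the trainable (adapter) parameters of a mostly frozen model
    return [p for p in model.parameters() if p.requires_grad]

# toy stand-ins: a frozen "backbone" layer followed by a trainable "adapter"
backbone = nn.Linear(16, 16)
for p in backbone.parameters():
    p.requires_grad = False
adapter = nn.Linear(16, 16)
model = nn.Sequential(backbone, adapter)

opt = torch.optim.AdamW(adapter_parameters(model), lr=1e-4)

x = torch.randn(4, 16)
target = torch.randn(4, 16)
loss = F.mse_loss(model(x), target)
loss.backward()   # gradients flow only into the adapter
opt.step()
opt.zero_grad()
```

In the real pipeline the loss would be the diffusion denoising objective and the backbone a full T2V UNet, but the freezing pattern is the same.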

Key Features

Installation

[Include installation instructions here]

Usage

[Provide basic usage examples here]

Implementation Details

Contributing

We welcome contributions from the community! Whether you're fixing bugs, improving documentation, or proposing new features, your efforts are appreciated.

Please make sure to update tests as appropriate and adhere to the project's coding standards.

Areas for Contribution

License

This project is open for use; see the repository for license details.

Contact

Harsh Bhatt - harshbhatt7585@gmail.com