FluxMusic: Text-to-Music Generation with Rectified Flow Transformer <br><sub>Official PyTorch Implementation</sub>

<a href="https://arxiv.org/abs/2409.00587"><img src="https://img.shields.io/static/v1?label=Paper&message=FluxMusic&color=purple&logo=arxiv"></a><a href="https://huggingface.co/feizhengcong/fluxmusic"><img src="https://img.shields.io/static/v1?label=Models&message=HuggingFace&color=yellow"></a><a href="https://github.com/feizc/FluxMusic"><img src="https://img.shields.io/static/v1?label=Webpage&message=Cases&color=green"></a><a href="https://github.com/curtified/FluxMusicGUI"><img src="https://img.shields.io/static/v1?label=GUI&message=FluxMusic&color=orange&logo=demo"></a>


This repo contains PyTorch model definitions, pre-trained weights, and training/sampling code for the paper "FLUX that Plays Music". It explores a simple extension of diffusion-based rectified flow Transformers to text-to-music generation. The model architecture is shown below:

<img src=visuals/framework.png width=400 />
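For background, rectified flow trains the model to regress the constant velocity between a noise sample and a data latent along a straight-line interpolation. Below is a minimal, generic sketch of that objective; `model`, `text_cond`, and the latent shapes are illustrative placeholders, not the repo's actual API:

```python
import torch
import torch.nn.functional as F

def rectified_flow_loss(model, x1, text_cond):
    """Generic rectified-flow objective (illustrative, not the repo's exact code):
    x_t = (1 - t) * x0 + t * x1 with x0 ~ N(0, I), and the model regresses the
    straight-line velocity x1 - x0."""
    x0 = torch.randn_like(x1)                      # noise endpoint
    t = torch.rand(x1.shape[0], device=x1.device)  # one timestep per sample
    t_ = t.view(-1, *([1] * (x1.dim() - 1)))       # broadcast t over latent dims
    xt = (1.0 - t_) * x0 + t_ * x1                 # linear interpolation
    target = x1 - x0                               # constant velocity target
    pred = model(xt, t, text_cond)                 # predicted velocity
    return F.mse_loss(pred, target)
```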

To-do list

1. Training

You can refer to the link to set up the running environment.

To launch latent-space training of the small version with N GPUs on one node using PyTorch DDP:

torchrun --nnodes=1 --nproc_per_node=N train.py \
--version small \
--data-path xxx \
--global_batch_size 128

More scripts for different model sizes can be found in the `scripts` directory.
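If you adapt the launch command, note that a `--global_batch_size` is conventionally split evenly across DDP ranks. A minimal sketch of that convention, assuming the usual DiT-style setup (the actual logic in train.py may differ):

```python
import torch.distributed as dist

def per_gpu_batch_size(global_batch_size: int) -> int:
    """Common DDP convention: the global batch is divided evenly across ranks.
    Illustrative only; check train.py for how the repo actually handles it."""
    world_size = dist.get_world_size() if dist.is_initialized() else 1
    assert global_batch_size % world_size == 0, "global batch must divide evenly"
    return global_batch_size // world_size
```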

2. Inference

We include a sample.py script that samples music clips from a FluxMusic model according to text conditions:

python sample.py \
--version small \
--ckpt_path /path/to/model \
--prompt_file config/example.txt

All prompts used in the paper are listed in config/example.txt.
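Under the hood, sampling from a rectified-flow model amounts to integrating the learned velocity field from noise toward data. Here is a minimal fixed-step Euler sketch; the `model` call signature, step count, and latent shape are assumptions, and sample.py's actual sampler may differ:

```python
import torch

@torch.no_grad()
def euler_sample(model, text_cond, latent_shape, steps=50, device="cuda"):
    """Integrate dx/dt = v(x, t, cond) from t=0 (noise) to t=1 (data) with
    fixed-step Euler. A generic sketch, not the repo's exact sampler."""
    x = torch.randn(latent_shape, device=device)   # start from pure noise
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((latent_shape[0],), i * dt, device=device)
        v = model(x, t, text_cond)                  # predicted velocity
        x = x + v * dt                              # Euler update
    return x                                        # latent, to be decoded by the VAE
```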

3. Download Ckpts and Data

We use the VAE and vocoder from AudioLDM2, along with CLAP-L and T5-XXL text encoders. You can download them directly from the table below; we also provide the training scripts used in our experiments.

Note that, in the actual experiments, training was restarted once due to a machine malfunction, so some scripts contain resume options.

| Model | Training steps | URL | Training scripts |
|---|---|---|---|
| VAE | - | link | - |
| Vocoder | - | link | - |
| T5-XXL | - | link | - |
| CLAP-L | - | link | - |
| FluxMusic-Small | 200K | link | link |
| FluxMusic-Base | 200K | link | link |
| FluxMusic-Large | 200K | link | link |
| FluxMusic-Giant | 200K | link | link |
| FluxMusic-Giant-Full | 2M | link | - |

Note that the 200K-step ckpts were trained on a sub-training set and are used for plotting the scaling experiments as well as the case studies in the paper. The full version used for the main results will be released right away.
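If the checkpoints are hosted in the Hugging Face model repo linked above, they can also be fetched programmatically. A small sketch using huggingface_hub; the filename below is a placeholder, so check the model repo for the real checkpoint names:

```python
from huggingface_hub import hf_hub_download

# Repo id taken from the badge above; the filename is hypothetical --
# browse https://huggingface.co/feizhengcong/fluxmusic for the actual files.
ckpt_path = hf_hub_download(
    repo_id="feizhengcong/fluxmusic",
    filename="fluxmusic_small.pt",  # placeholder filename
)
print(ckpt_path)  # local cache path, usable as --ckpt_path for sample.py
```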

For the construction of training data, refer to the test.py file, which shows a simple way of combining different datasets into a JSON file.
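As a rough illustration of that kind of merge, the sketch below collects (audio path, caption) pairs from several dataset folders into one JSON file. The field names and directory layout are assumptions; match them to whatever schema test.py actually reads:

```python
import json
from pathlib import Path

def build_manifest(dataset_dirs, out_file="train_data.json"):
    """Combine several datasets into one JSON manifest. Assumes each folder
    holds .wav files with sibling .txt captions; adjust to the real schema."""
    records = []
    for root in map(Path, dataset_dirs):
        for wav in sorted(root.rglob("*.wav")):
            caption_file = wav.with_suffix(".txt")
            if caption_file.exists():
                records.append({
                    "audio_path": str(wav),
                    "caption": caption_file.read_text().strip(),
                })
    with open(out_file, "w") as f:
        json.dump(records, f, indent=2)
    return out_file

# Example: build_manifest(["datasets/musiccaps", "datasets/fma"])
```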

Due to copyright issues, the data used in the paper must be downloaded by users themselves.

We provide a clean subset in: <a href="https://huggingface.co/datasets/feizhengcong/FluxMusic"><img src="https://img.shields.io/static/v1?label=Datasets&message=HuggingFace&color=blue"></a>

Quick download links for the other datasets can be found on Hugging Face :).
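The clean subset above can be pulled locally with huggingface_hub; a small sketch (the repo id comes from the dataset badge, the rest is standard huggingface_hub usage):

```python
from huggingface_hub import snapshot_download

# Downloads the whole dataset repo into the local cache and returns its path.
local_dir = snapshot_download(repo_id="feizhengcong/FluxMusic", repo_type="dataset")
print(local_dir)
```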

This is a research project; for more advanced products, we recommend trying: <a href="https://www.melodio.ai/"><img src="https://img.shields.io/static/v1?label=Recommend&message=Application&color=orange&logo=demo"></a>

Acknowledgments

The codebase is based on the awesome Flux and AudioLDM2 repos.