# Audiocraft neurofeedback
It is possible to stream EEG coherence into MusicGen: https://github.com/neuroidss/timeflux_neurofeedback_inverse_gamepad/blob/master/examples/neurofeedback_coherence_musicgen.yaml#L212

```yaml
module: timeflux_neurofeedback_inverse_gamepad.nodes.gradiostreamer
class: GradioStreamer
```
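For orientation, here is a minimal sketch of how a client could push streamed coherence values to a Gradio-served MusicGen app. The URL, endpoint name, and payload format are assumptions for illustration; the actual `GradioStreamer` node in `timeflux_neurofeedback_inverse_gamepad` may work differently.

```python
from gradio_client import Client

# Assumed local address of the MusicGen gradio app; adjust to your setup.
client = Client("http://127.0.0.1:7860/")

def send_coherence_frame(coherence_row):
    # Serialize one frame of pairwise coherence values and submit it to the app.
    # The "/predict" endpoint name is a placeholder, not a confirmed API.
    payload = ",".join(f"{c:.4f}" for c in coherence_row)
    return client.predict(payload, api_name="/predict")
```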
Demo of EEG coherence streamed from a file into transformer attention: https://drive.google.com/file/d/1gWsutN6pLT9-ZnFrjsEej8sQddpBGxpn/view?usp=sharing
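As a stand-in for the streamed signal in the demo above, here is a minimal sketch of computing a pairwise EEG coherence matrix from a recording on disk. The file name, sampling rate, window length, and frequency band are all assumptions.

```python
import numpy as np
from scipy.signal import coherence

fs = 250.0                          # assumed EEG sampling rate, Hz
eeg = np.load('eeg_recording.npy')  # assumed shape: [channels, samples]
n_ch = eeg.shape[0]

coh = np.zeros((n_ch, n_ch))
for i in range(n_ch):
    for j in range(n_ch):
        f, cxy = coherence(eeg[i], eeg[j], fs=fs, nperseg=256)
        # Average magnitude-squared coherence over an assumed gamma band (30-45 Hz).
        band = (f >= 30) & (f <= 45)
        coh[i, j] = cxy[band].mean()
```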
Here is where the EEG coherence is applied to the transformer attention in MusicGen: https://github.com/neuroidss/audiocraft_neurofeedback/blob/main/audiocraft/modules/transformer.py#L431

```python
attn_mask = _coherence_attention_mask(q, k)
x = torch.nn.functional.scaled_dot_product_attention(
    q, k, v, attn_mask, dropout_p=p)
```
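For context, here is a minimal sketch of one way such a coherence-based attention mask could be constructed. This is not the fork's actual `_coherence_attention_mask` implementation: the coherence matrix argument, the bilinear resizing, and the log-bias mapping are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def coherence_attention_mask_sketch(q, k, coherence):
    """Build an additive attention mask from an EEG coherence matrix.

    q, k: [batch, heads, time, dim] attention inputs.
    coherence: [channels, channels] matrix with values in [0, 1]
        (e.g. pairwise EEG coherence streamed from Timeflux).
    Returns a [time_q, time_k] float mask broadcastable over batch and heads.
    """
    t_q, t_k = q.shape[-2], k.shape[-2]
    # Resize the coherence matrix to the attention map's time dimensions.
    coh = F.interpolate(coherence[None, None], size=(t_q, t_k),
                        mode='bilinear', align_corners=False)[0, 0]
    # Map [0, 1] coherence to an additive log-space bias:
    # coherence near 0 strongly suppresses a position, near 1 leaves it intact.
    eps = 1e-6
    return torch.log(coh.clamp_min(eps))
```

`scaled_dot_product_attention` accepts such a float mask additively, so high-coherence channel pairs boost the corresponding attention weights while low-coherence pairs suppress them.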
Here is an example of music with poetry, where the electroencephalography coherence is streamed as an attention mask to both the MusicGen and text-generation transformers, for neurofeedback: https://drive.google.com/file/d/1CP3KXWBQ69m_MWdVgouhqEu6sSZow8cu/view?usp=drive_link
# Audiocraft
Audiocraft is a PyTorch library for deep learning research on audio generation. At the moment, it contains the code for MusicGen, a state-of-the-art controllable text-to-music model.
## MusicGen
Audiocraft provides the code and models for MusicGen, a simple and controllable model for music generation. MusicGen is a single-stage auto-regressive Transformer model trained over a 32kHz [EnCodec tokenizer](https://github.com/facebookresearch/encodec) with 4 codebooks sampled at 50 Hz. Unlike existing methods like MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass. By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio. Check out our sample page or test the available demo!
<a target="_blank" href="https://colab.research.google.com/drive/1-Xe9NCdIs2sCUbiSmwHXozK6AAhMm7_i?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> <a target="_blank" href="https://huggingface.co/spaces/facebook/MusicGen"> <img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/> </a>

We use 20K hours of licensed music to train MusicGen. Specifically, we rely on an internal dataset of 10K high-quality music tracks, and on the ShutterStock and Pond5 music data.
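The codebook interleaving described above can be illustrated with a small sketch: with a one-step delay between consecutive codebooks, all 4 tokens of a frame can be predicted in parallel, so T frames need only T (plus 3 trailing) auto-regressive steps. This is an illustration of the delay-pattern idea, not Audiocraft's actual implementation; the padding token and the uniform one-step delay are assumptions.

```python
import torch

def apply_delay_pattern(codes: torch.Tensor, pad_id: int = -1) -> torch.Tensor:
    """codes: [K, T] tokens for K codebooks; returns a [K, T + K - 1] delayed view."""
    K, T = codes.shape
    out = torch.full((K, T + K - 1), pad_id, dtype=codes.dtype)
    for k in range(K):
        out[k, k:k + T] = codes[k]  # codebook k shifted right by k steps
    return out
```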
## Installation
Audiocraft requires Python 3.9, PyTorch 2.0.0, and a GPU with at least 16 GB of memory (for the medium-sized model). To install Audiocraft, you can run the following:
```shell
# Best to make sure you have torch installed first, in particular before installing xformers.
# Don't run this if you already have PyTorch installed.
pip install 'torch>=2.0'
# Then proceed to one of the following
pip install -U audiocraft  # stable release
pip install -U git+https://git@github.com/facebookresearch/audiocraft#egg=audiocraft  # bleeding edge
pip install -e .  # or if you cloned the repo locally
```
## Usage
We offer a number of ways to interact with MusicGen:
- A demo is available on the [facebook/MusicGen](https://huggingface.co/spaces/facebook/MusicGen) HuggingFace Space (huge thanks to all the HF team for their support).
- You can run the extended demo on a Colab notebook.
- You can use the gradio demo locally by running `python app.py`.
- You can play with MusicGen by running the jupyter notebook at `demo.ipynb` locally (if you have a GPU).
- Finally, check out the @camenduru Colab page, which is regularly updated with contributions from @camenduru and the community.
## API
We provide a simple API and 4 pre-trained models. The pre-trained models are:
- `small`: 300M model, text to music only - 🤗 Hub
- `medium`: 1.5B model, text to music only - 🤗 Hub
- `melody`: 1.5B model, text to music and text+melody to music - 🤗 Hub
- `large`: 3.3B model, text to music only - 🤗 Hub

We observe the best trade-off between quality and compute with the `medium` or `melody` model.

In order to use MusicGen locally you must have a GPU. We recommend 16GB of memory, but smaller GPUs will be able to generate short sequences, or longer sequences with the `small` model.
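Following the guidance above, here is a minimal sketch of picking a checkpoint based on available GPU memory. The 16GB threshold maps directly from the text; the helper itself is an assumption, not part of the Audiocraft API, and `'medium'` is an equally valid choice for the larger case.

```python
import torch
from audiocraft.models import MusicGen

def pick_model() -> str:
    # MusicGen needs a GPU to run locally.
    if not torch.cuda.is_available():
        raise RuntimeError("MusicGen requires a GPU to run locally.")
    # Total VRAM of the first GPU, in GiB.
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    return 'melody' if total_gb >= 16 else 'small'

model = MusicGen.get_pretrained(pick_model())
```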
Note: Please make sure to have ffmpeg installed when using a newer version of `torchaudio`. You can install it with:

```shell
apt-get install ffmpeg
```
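A quick sanity check from Python, before loading audio with `torchaudio`, could look like this (a convenience sketch, not part of Audiocraft):

```python
import shutil

# Verify that ffmpeg is available on PATH.
if shutil.which('ffmpeg') is None:
    raise RuntimeError("ffmpeg not found; install it, e.g. `apt-get install ffmpeg`.")
```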
See below for a quick example of using the API.
```python
import torchaudio
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained('melody')
model.set_generation_params(duration=8)  # generate 8 seconds.
wav = model.generate_unconditional(4)    # generates 4 unconditional audio samples
descriptions = ['happy rock', 'energetic EDM', 'sad jazz']
wav = model.generate(descriptions)  # generates 3 samples.

melody, sr = torchaudio.load('./assets/bach.mp3')
# generates using the melody from the given audio and the provided descriptions.
wav = model.generate_with_chroma(descriptions, melody[None].expand(3, -1, -1), sr)

for idx, one_wav in enumerate(wav):
    # Will save under {idx}.wav, with loudness normalization at -14 db LUFS.
    audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness", loudness_compressor=True)
```
## Model Card
See the model card page.
## FAQ
### Will the training code be released?
Yes. We will soon release the training code for MusicGen and EnCodec.
### I need help on Windows
@FurkanGozukara made a complete tutorial for Audiocraft/MusicGen on Windows.
### I need help for running the demo on Colab
Check @camenduru's tutorial on YouTube.
## Citation
```bibtex
@article{copet2023simple,
    title={Simple and Controllable Music Generation},
    author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez},
    year={2023},
    journal={arXiv preprint arXiv:2306.05284},
}
```
## License
- The code in this repository is released under the MIT license as found in the LICENSE file.
- The weights in this repository are released under the CC-BY-NC 4.0 license as found in the LICENSE_weights file.