
Notice: Bark is Suno's open-source text-to-speech+ model. If you are looking for our text-to-music models, please visit us on our web page and join our community on Discord.

🐶 Bark


🔗 Examples • Suno Studio Waitlist • Updates • How to Use • Installation • FAQ

<br>

<p align="center"> <img src="https://user-images.githubusercontent.com/5068315/235310676-a4b3b511-90ec-4edf-8153-7ccf14905d73.png" width="500"></img> </p> <br>

Bark is a transformer-based text-to-audio model created by Suno. Bark can generate highly realistic, multilingual speech as well as other audio - including music, background noise and simple sound effects. The model can also produce nonverbal communications like laughing, sighing and crying. To support the research community, we are providing access to pretrained model checkpoints, which are ready for inference and available for commercial use.

⚠ Disclaimer

Bark was developed for research purposes. It is not a conventional text-to-speech model but instead a fully generative text-to-audio model, which can deviate in unexpected ways from provided prompts. Suno does not take responsibility for any output generated. Use at your own risk, and please act responsibly.

📖 Quick Index

🎧 Demos

Open in Spaces • Open on Replicate • Open in Colab

🚀 Updates

2023.05.01

2023.04.20

🐍 Usage in Python

<details open> <summary><h3>🪑 Basics</h3></summary>
from bark import SAMPLE_RATE, generate_audio, preload_models
from scipy.io.wavfile import write as write_wav
from IPython.display import Audio

# download and load all models
preload_models()

# generate audio from text
text_prompt = """
     Hello, my name is Suno. And, uh — and I like pizza. [laughs]
     But I also have other interests such as playing tic tac toe.
"""
audio_array = generate_audio(text_prompt)

# save audio to disk
write_wav("bark_generation.wav", SAMPLE_RATE, audio_array)
  
# play audio in notebook
Audio(audio_array, rate=SAMPLE_RATE)

pizza1.webm

</details> <details open> <summary><h3>🌎 Foreign Language</h3></summary> <br> Bark supports various languages out-of-the-box and automatically determines language from input text. When prompted with code-switched text, Bark will attempt to employ the native accent for the respective languages. English quality is best for the time being, and we expect other languages to further improve with scaling. <br> <br>

text_prompt = """
    추석은 내가 가장 좋아하는 명절이다. 나는 며칠 동안 휴식을 취하고 친구 및 가족과 시간을 보낼 수 있습니다.
"""
audio_array = generate_audio(text_prompt)

suno_korean.webm

Note: since Bark recognizes languages automatically from input text, it is possible to use, for example, a German history prompt with English text. This usually leads to English audio with a German accent.

text_prompt = """
    Der Dreißigjährige Krieg (1618-1648) war ein verheerender Konflikt, der Europa stark geprägt hat.
    This is a beginning of the history. If you want to hear more, please continue.
"""
audio_array = generate_audio(text_prompt)

suno_german_accent.webm

</details> <details open> <summary><h3>🎶 Music</h3></summary> Bark can generate all types of audio, and, in principle, doesn't see a difference between speech and music. Sometimes Bark chooses to generate text as music, but you can help it out by adding music notes around your lyrics. <br> <br>
text_prompt = """
    ♪ In the jungle, the mighty jungle, the lion barks tonight ♪
"""
audio_array = generate_audio(text_prompt)

lion.webm

</details> <details open> <summary><h3>🎤 Voice Presets</h3></summary>

Bark supports 100+ speaker presets across supported languages. You can browse the library of supported voice presets HERE, or in the code. The community also often shares presets in Discord.

Bark tries to match the tone, pitch, emotion and prosody of a given preset, but does not currently support custom voice cloning. The model also attempts to preserve music, ambient noise, etc.

text_prompt = """
    I have a silky smooth voice, and today I will tell you about 
    the exercise regimen of the common sloth.
"""
audio_array = generate_audio(text_prompt, history_prompt="v2/en_speaker_1")

sloth.webm
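
As mentioned above, the preset names can also be browsed directly in the code. A small sketch, assuming the ALLOWED_PROMPTS set that bark.generation uses to validate history_prompt values (an implementation detail that may change):

from bark.generation import ALLOWED_PROMPTS

# collect the English preset names accepted as history_prompt, e.g. "v2/en_speaker_1"
english_presets = sorted(p for p in ALLOWED_PROMPTS if "en_speaker" in p)
print(english_presets)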

</details>

📃 Generating Longer Audio

By default, generate_audio works well with around 13 seconds of spoken text. For an example of how to do long-form generation, see 👉 Notebook 👈
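
If you just want the gist of the approach, below is a minimal sketch of one way to chunk a longer script: generate sentence-sized pieces with the same voice preset so the speaker stays consistent, then concatenate the arrays. The quarter second of silence between pieces is an arbitrary choice for illustration, not something the notebook prescribes.

import numpy as np
from bark import SAMPLE_RATE, generate_audio, preload_models
from scipy.io.wavfile import write as write_wav

preload_models()

script = (
    "Bark can speak for much longer than thirteen seconds. "
    "The trick is to generate one sentence at a time. "
    "Reusing the same voice preset keeps the speaker consistent."
)
sentences = [s.strip() + "." for s in script.split(".") if s.strip()]

# short pause inserted between sentence-sized chunks
silence = np.zeros(int(0.25 * SAMPLE_RATE), dtype=np.float32)

pieces = []
for sentence in sentences:
    pieces += [generate_audio(sentence, history_prompt="v2/en_speaker_1"), silence]

write_wav("bark_longform.wav", SAMPLE_RATE, np.concatenate(pieces))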

<details> <summary>Click to toggle example long-form generations (from the example notebook)</summary>

dialog.webm

longform_advanced.webm

longform_basic.webm

</details>

Command line

python -m bark --text "Hello, my name is Suno." --output_filename "example.wav"

💻 Installation

‼️ CAUTION ‼️ Do NOT use pip install bark. It installs a different package, which is not managed by Suno.

pip install git+https://github.com/suno-ai/bark.git

or

git clone https://github.com/suno-ai/bark
cd bark && pip install . 

🤗 Transformers Usage

Bark is available in the 🤗 Transformers library from version 4.31.0 onwards, requiring only minimal additional dependencies. Steps to get started:

  1. First install the 🤗 Transformers library from main:
pip install git+https://github.com/huggingface/transformers.git
  2. Run the following Python code to generate speech samples (a GPU variant is sketched after these steps):
from transformers import AutoProcessor, BarkModel

processor = AutoProcessor.from_pretrained("suno/bark")
model = BarkModel.from_pretrained("suno/bark")

voice_preset = "v2/en_speaker_6"

inputs = processor("Hello, my dog is cute", voice_preset=voice_preset)

audio_array = model.generate(**inputs)
audio_array = audio_array.cpu().numpy().squeeze()
  3. Listen to the audio samples either in an ipynb notebook:
from IPython.display import Audio

sample_rate = model.generation_config.sample_rate
Audio(audio_array, rate=sample_rate)

Or save them as a .wav file using a third-party library, e.g. scipy:

import scipy

sample_rate = model.generation_config.sample_rate
scipy.io.wavfile.write("bark_out.wav", rate=sample_rate, data=audio_array)
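
If a GPU is available, the model and inputs can be moved to it in the usual Transformers way. A minimal sketch, shown without a voice preset (see the docs linked below for preset handling and further optimizations):

import torch
from transformers import AutoProcessor, BarkModel

device = "cuda" if torch.cuda.is_available() else "cpu"

processor = AutoProcessor.from_pretrained("suno/bark")
model = BarkModel.from_pretrained("suno/bark").to(device)

inputs = processor("Hello, my dog is cute").to(device)
audio_array = model.generate(**inputs).cpu().numpy().squeeze()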

For more details on running inference with the Bark model using the 🤗 Transformers library, refer to the Bark docs or the hands-on Google Colab.

🛠️ Hardware and Inference Speed

Bark has been tested and works on both CPU and GPU (PyTorch 2.0+, CUDA 11.7 and CUDA 12.0).

On enterprise GPUs and PyTorch nightly, Bark can generate audio in roughly real time. On older GPUs, default Colab, or CPU, inference time might be significantly slower. For older GPUs or CPU, you might want to consider using the smaller models; details can be found in our tutorial sections here.

The full version of Bark requires around 12GB of VRAM to hold everything on GPU at the same time. To use a smaller version of the models, which should fit into 8GB VRAM, set the environment flag SUNO_USE_SMALL_MODELS=True.

If you don't have hardware available or if you want to play with bigger versions of our models, you can also sign up for early access to our model playground here.

⚙️ Details

Bark is a fully generative text-to-audio model developed for research and demo purposes. It follows a GPT-style architecture similar to AudioLM and Vall-E and uses a quantized audio representation from EnCodec. It is not a conventional TTS model, but instead a fully generative text-to-audio model capable of deviating in unexpected ways from any given script. Unlike previous approaches, the input text prompt is converted directly to audio without the intermediate use of phonemes. It can therefore generalize to arbitrary instructions beyond speech, such as music lyrics, sound effects or other non-speech sounds.
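
The staged structure is also visible in Bark's own API: text is first mapped to semantic tokens, which are then rendered to a waveform through the coarse and fine acoustic models and the EnCodec decoder. Below is a minimal sketch using the helpers exposed in bark.api; splitting the stages like this is purely illustrative, since generate_audio does the same internally.

from bark import SAMPLE_RATE, preload_models
from bark.api import semantic_to_waveform, text_to_semantic

preload_models()

# stage 1: text -> semantic tokens (no phoneme intermediate)
semantic_tokens = text_to_semantic("Hello, my name is Suno.")

# stage 2: semantic tokens -> coarse/fine audio codes -> waveform via EnCodec
audio_array = semantic_to_waveform(semantic_tokens)

print(audio_array.shape, SAMPLE_RATE)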

Below is a list of some known non-speech sounds, but we are finding more every day. Please let us know on Discord if you find patterns that work particularly well!

[laughter]
[laughs]
[sighs]
[music]
[gasps]
[clears throat]
— or ... for hesitations
♪ for song lyrics
CAPITALIZATION for emphasis of a word
[MAN] and [WOMAN] to bias Bark toward male and female speakers, respectively

Supported Languages

| Language | Status |
| --- | --- |
| English (en) | ✅ |
| German (de) | ✅ |
| Spanish (es) | ✅ |
| French (fr) | ✅ |
| Hindi (hi) | ✅ |
| Italian (it) | ✅ |
| Japanese (ja) | ✅ |
| Korean (ko) | ✅ |
| Polish (pl) | ✅ |
| Portuguese (pt) | ✅ |
| Russian (ru) | ✅ |
| Turkish (tr) | ✅ |
| Chinese, simplified (zh) | ✅ |

Requests for future language support can be made here or in the #forums channel on Discord.

🙏 Appreciation

© License

Bark is licensed under the MIT License.

📱 Community

🎧 Suno Studio (Early Access)

We're developing a playground for our models, including Bark.

If you are interested, you can sign up for early access here.

❓ FAQ

How do I specify where models are downloaded and cached?

Bark's generations sometimes differ from my prompts. What's happening?

What voices are supported by Bark?

Why is the output limited to ~13-14 seconds?

How much VRAM do I need?

import os

# set before importing bark: offload models to CPU between steps and use the smaller checkpoints
os.environ["SUNO_OFFLOAD_CPU"] = "True"
os.environ["SUNO_USE_SMALL_MODELS"] = "True"

My generated audio sounds like a 1980s phone call. What's happening?