We're excited to introduce Chatterbox Multilingual, Resemble AI's first production-grade open source TTS model supporting 23 languages out of the box. Licensed under MIT, Chatterbox has been benchmarked against leading closed-source systems like ElevenLabs, and is consistently preferred in side-by-side evaluations.
Whether you're working on memes, videos, games, or AI agents, Chatterbox brings your content to life across languages. It's also the first open source TTS model to support emotion exaggeration control with robust multilingual zero-shot voice cloning. Try the English-only version now on our English Hugging Face Gradio app, or try the multilingual version on our Multilingual Hugging Face Gradio app.
If you like the model but need to scale or tune it for higher accuracy, check out our competitively priced TTS service (link). It delivers reliable performance with sub-200ms latency, ideal for production use in agents, applications, and interactive media.
- Multilingual, zero-shot TTS supporting 23 languages
- SoTA zero-shot English TTS
- 0.5B Llama backbone
- Unique exaggeration/intensity control
- Ultra-stable with alignment-informed inference
- Trained on 0.5M hours of cleaned data
- Watermarked outputs
- Easy voice conversion script
- Preferred over ElevenLabs in side-by-side evaluations
Arabic (ar) • Danish (da) • German (de) • Greek (el) • English (en) • Spanish (es) • Finnish (fi) • French (fr) • Hebrew (he) • Hindi (hi) • Italian (it) • Japanese (ja) • Korean (ko) • Malay (ms) • Dutch (nl) • Norwegian (no) • Polish (pl) • Portuguese (pt) • Russian (ru) • Swedish (sv) • Swahili (sw) • Turkish (tr) • Chinese (zh)
- General Use (TTS and Voice Agents):
  - Ensure that the reference clip matches the specified language tag. Otherwise, language transfer outputs may inherit the accent of the reference clip's language. To mitigate this, set `cfg_weight` to `0`.
  - The default settings (`exaggeration=0.5`, `cfg_weight=0.5`) work well for most prompts across all languages.
  - If the reference speaker has a fast speaking style, lowering `cfg_weight` to around `0.3` can improve pacing.
  - Tune `diffusion_steps` (default `10`) to balance latency and fidelity. More steps typically improve detail at the cost of slower inference; fewer steps speed things up for streaming.
- Expressive or Dramatic Speech:
  - Try lower `cfg_weight` values (e.g. `~0.3`) and increase `exaggeration` to around `0.7` or higher (see the example after this list).
  - Higher `exaggeration` tends to speed up speech; reducing `cfg_weight` helps compensate with slower, more deliberate pacing.
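As a quick illustration of these knobs, here is a minimal sketch. It assumes `exaggeration`, `cfg_weight`, and `diffusion_steps` are accepted as keyword arguments to `generate`, as the tips above suggest:

```python
import torchaudio as ta
from chatterbox.tts import ChatterboxTTS

model = ChatterboxTTS.from_pretrained(device="cuda")

# Expressive delivery: raise exaggeration, lower cfg_weight to keep pacing deliberate.
wav = model.generate(
    "This is not a drill. Everyone, move, now!",
    exaggeration=0.7,    # more dramatic intensity (default 0.5)
    cfg_weight=0.3,      # slower, more deliberate pacing (default 0.5)
    diffusion_steps=10,  # default; raise for fidelity, lower for latency
)
ta.save("test-expressive.wav", wav, model.sr)
```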
```
git clone https://github.com/prathamesh-chavan-22/chatterbox.git
cd chatterbox
pip install -e .
```

Install with specific features:

```
# With quantization support (4-bit/8-bit)
pip install -e ".[quantization]"

# With performance optimizations (xFormers)
pip install -e ".[performance]"

# With streaming support
pip install -e ".[streaming]"

# Install everything
pip install -e ".[all]"
```

Requirements:

- Python >= 3.10
- PyTorch >= 2.0.0
- CUDA-compatible GPU (recommended for best performance)
We developed and tested Chatterbox on Python 3.11 on Debian 11; the versions of the dependencies are pinned in `pyproject.toml` to ensure consistency. You can modify the code or dependencies in this editable installation mode.
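Before installing, you can sanity-check your environment against the requirements above; a minimal sketch using only the standard library and PyTorch:

```python
import sys

import torch

# Chatterbox expects Python >= 3.10 and PyTorch >= 2.0.0 (see the requirements above).
print(f"Python:  {sys.version.split()[0]}")
print(f"PyTorch: {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")  # a CUDA GPU is recommended
```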
For Russian language support with proper stress marking, you'll need to install the russian-text-stresser package. Since the original package may have compatibility issues with newer Python and PyTorch versions, follow these steps to install a modified version:
```
# Clone the repository
git clone https://github.com/Vuizur/add-stress-to-epub.git
cd add-stress-to-epub

# Edit the pyproject.toml file to update dependencies for Python 3.12 and PyTorch 2.9+
# You can use any text editor to modify the file
# Update the following sections:
# - Change the Python version requirement to: requires-python = ">=3.12"
# - Update the torch dependency to: torch>=2.9.0
# - Update all other dependencies to their latest compatible versions

# After editing pyproject.toml, install the package
pip install -e .
```

Open `pyproject.toml` in the cloned `add-stress-to-epub` directory and update it with the latest dependencies:
```toml
[project]
requires-python = ">=3.12"
dependencies = [
    "torch>=2.9.0",
    "numpy>=1.26.0",
    "transformers>=4.40.0",
    # Add other updated dependencies as needed
]
```

After making these changes, save the file and run `pip install -e .` from within the `add-stress-to-epub` directory.
Note: If you don't plan to use Russian language features, this installation is optional and the model will work without it.
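To confirm the package is visible to your environment after installing, you can query its installed version. This small sketch assumes the distribution is named `russian-text-stresser`, as referenced above:

```python
from importlib.metadata import PackageNotFoundError, version

# Assumption: the distribution name matches the package referenced above.
try:
    print(f"russian-text-stresser {version('russian-text-stresser')} is installed")
except PackageNotFoundError:
    print("russian-text-stresser not found; Russian stress marking will be unavailable")
```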
```python
import torchaudio as ta
from chatterbox.tts import ChatterboxTTS
from chatterbox.mtl_tts import ChatterboxMultilingualTTS

# English example
model = ChatterboxTTS.from_pretrained(device="cuda")
text = "Ezreal and Jinx teamed up with Ahri, Yasuo, and Teemo to take down the enemy's Nexus in an epic late-game pentakill."
wav = model.generate(text)
# Increase diffusion_steps for higher fidelity (slower/denser generation):
wav_hq = model.generate(text, diffusion_steps=18)
ta.save("test-english.wav", wav, model.sr)

# Multilingual examples
multilingual_model = ChatterboxMultilingualTTS.from_pretrained(device="cuda")
french_text = "Bonjour, comment ça va? Ceci est le modèle de synthèse vocale multilingue Chatterbox, il prend en charge 23 langues."
wav_french = multilingual_model.generate(french_text, language_id="fr")
ta.save("test-french.wav", wav_french, multilingual_model.sr)

chinese_text = "你好,今天天气真不错,希望你有一个愉快的周末。"
wav_chinese = multilingual_model.generate(chinese_text, language_id="zh")
ta.save("test-chinese.wav", wav_chinese, multilingual_model.sr)

# If you want to synthesize with a different voice, specify the audio prompt
AUDIO_PROMPT_PATH = "YOUR_FILE.wav"
wav = model.generate(text, audio_prompt_path=AUDIO_PROMPT_PATH)
wav_hq = model.generate(text, audio_prompt_path=AUDIO_PROMPT_PATH, diffusion_steps=18)
ta.save("test-2.wav", wav, model.sr)
```

See example_tts.py and example_vc.py for more examples.
Streaming example (choose diffusion steps for faster or more accurate output):

```
python streaming_clone.py --diffusion-steps 18 --voice-clone reference_audio/3.wav
```

Every audio file generated by Chatterbox includes Resemble AI's Perth (Perceptual Threshold) Watermarker: imperceptible neural watermarks that survive MP3 compression, audio editing, and common manipulations while maintaining nearly 100% detection accuracy.
You can look for the watermark using the following script.
```python
import perth
import librosa

AUDIO_PATH = "YOUR_FILE.wav"

# Load the watermarked audio
watermarked_audio, sr = librosa.load(AUDIO_PATH, sr=None)

# Initialize the watermarker (same as used for embedding)
watermarker = perth.PerthImplicitWatermarker()

# Extract the watermark
watermark = watermarker.get_watermark(watermarked_audio, sample_rate=sr)
print(f"Extracted watermark: {watermark}")
# Output: 0.0 (no watermark) or 1.0 (watermarked)
```

👋 Join us on Discord and let's build something awesome together!
If you find this model useful, please consider citing it:
```bibtex
@misc{chatterboxtts2025,
  author = {{Resemble AI}},
  title = {{Chatterbox-TTS}},
  year = {2025},
  howpublished = {\url{https://github.com/resemble-ai/chatterbox}},
  note = {GitHub repository}
}
```
Don't use this model to do bad things. Prompts are sourced from freely available data on the internet.

