A lightweight text-to-speech (TTS) application designed to run efficiently on CPUs. Forget about the hassle of using GPUs and web APIs serving TTS models. With Kyutai's Pocket TTS, generating audio is just a pip install and a function call away.
Supports Python 3.10, 3.11, 3.12, 3.13, and 3.14. Requires PyTorch 2.5+; the GPU version of PyTorch is not needed.
🔊 Demo | 🐱💻GitHub Repository | 🤗 Hugging Face Model Card | ⚙️ Tech report | 📄 Paper | 📚 Documentation
- Runs on CPU
- Small model size, 100M parameters
- Audio streaming
- Low latency, ~200ms to get the first audio chunk
- Faster than real-time, ~6x real-time on the CPU of a MacBook Air M4
- Uses only 2 CPU cores
- Python API and CLI
- Voice cloning
- English only at the moment
- Can handle arbitrarily long text inputs
- Can run client-side in the browser
Navigate to the Kyutai website to try it out directly in your browser. You can input text, select different voices, and generate speech without any installation.
You can use pocket-tts directly from the command line. We recommend using
uv as it installs any dependencies on the fly in an isolated environment (uv installation instructions here).
You can also use pip install pocket-tts to install it manually.
The following command generates a WAV file, ./tts_output.wav, saying the default text with the default voice, and displays some speed statistics.

```bash
uvx pocket-tts generate
# or if you installed it manually with pip:
pocket-tts generate
```

Modify the voice with --voice and the text with --text. We provide a small catalog of voices.
You can take a look at this page which details the licenses for each voice.
The --voice argument can also take a plain wav file as input for voice cloning.
You can use your own or check out our voice repository.
We recommend cleaning the sample before using it with Pocket TTS, because the recording quality of the sample is reproduced in the generated speech.
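The same cloning is available from the Python API described below. As a minimal sketch (the sample path is a placeholder):

```python
from pocket_tts import TTSModel

tts_model = TTSModel.load_model()
# Point this at your own (ideally cleaned) sample; the path is a placeholder.
voice_state = tts_model.get_state_for_audio_prompt("./my_voice_sample.wav")
audio = tts_model.generate_audio(voice_state, "This should sound like the sample.")
```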
Feel free to check out the generate documentation for more details and examples.
To try multiple voices and prompts quickly, prefer the serve command described below.
You can also run a local server to generate audio via HTTP requests.
```bash
uvx pocket-tts serve
# or if you installed it manually with pip:
pocket-tts serve
```

Navigate to http://localhost:8000 to try the web interface. It's faster than the command line because the model is kept in memory between requests.
You can check out the serve documentation for more details and examples.
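As a sketch, querying the running server from Python might look like the following; the endpoint path and payload are hypothetical, so check the serve documentation for the actual routes:

```python
import requests

# Hypothetical endpoint and JSON payload; the real API may differ,
# see the serve documentation.
response = requests.post(
    "http://localhost:8000/tts",
    json={"text": "Hello from the local server.", "voice": "alba"},
)
with open("output.wav", "wb") as f:
    f.write(response.content)
```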
Processing an audio file (e.g., a .wav or .mp3) for voice cloning is relatively slow, but loading a safetensors file (a voice embedding converted from an audio file) is very fast. You can use the export-voice command to do this conversion. See the export-voice documentation for more details and examples.
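As a sketch of the intended workflow, assuming an exported .safetensors voice can be passed anywhere an audio prompt can (the file names here are placeholders):

```python
from pocket_tts import TTSModel

# One-time conversion on the command line (see the export-voice documentation
# for the actual arguments), e.g. turning my_voice_sample.wav into my_voice.safetensors.
tts_model = TTSModel.load_model()

# Loading the precomputed embedding is much faster than processing the wav.
# Assumption: get_state_for_audio_prompt accepts the exported .safetensors path.
voice_state = tts_model.get_state_for_audio_prompt("./my_voice.safetensors")
audio = tts_model.generate_audio(voice_state, "Fast voice loading from safetensors.")
```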
You can try out the Python library on Colab here.
Install the package with

```bash
pip install pocket-tts
# or
uv add pocket-tts
```

You can use this package as a simple Python library to generate audio from text.
```python
from pocket_tts import TTSModel
import scipy.io.wavfile

tts_model = TTSModel.load_model()
voice_state = tts_model.get_state_for_audio_prompt(
    "alba"  # One of the pre-made voices, see above.
    # You can also use any voice file you have locally or from Hugging Face:
    # "./some_audio.wav"
    # or "hf://kyutai/tts-voices/expresso/ex01-ex02_default_001_channel2_198s.wav"
)
audio = tts_model.generate_audio(voice_state, "Hello world, this is a test.")

# Audio is a 1D torch tensor containing PCM data.
scipy.io.wavfile.write("output.wav", tts_model.sample_rate, audio.numpy())
```

You can have multiple voice states around if you have multiple voices you want to use. load_model() and get_state_for_audio_prompt() are relatively slow operations, so we recommend keeping the model and voice states in memory if you can.
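For example, here is a minimal sketch, using only the API shown above, that loads the model once and reuses two voice states across several generations (the wav path is a placeholder, and we assume a voice state can be reused across calls, as the advice above suggests):

```python
from pocket_tts import TTSModel
import scipy.io.wavfile

# Load the model once; this is one of the slow operations.
tts_model = TTSModel.load_model()

# Build one state per voice up front; this is the other slow operation.
# "./my_voice_sample.wav" is a placeholder for your own sample.
voices = {
    "alba": tts_model.get_state_for_audio_prompt("alba"),
    "cloned": tts_model.get_state_for_audio_prompt("./my_voice_sample.wav"),
}

# Generation itself is fast and reuses the cached model and states.
for name, state in voices.items():
    audio = tts_model.generate_audio(state, f"Hello from the {name} voice.")
    scipy.io.wavfile.write(f"{name}.wav", tts_model.sample_rate, audio.numpy())
```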
You can check out the Python API documentation for more details and examples.
At the moment, we do not support (but would love pull requests adding):
- Running the TTS inside a web browser (WebAssembly)
- A compiled version with, for example, torch.compile() or candle.
- Adding silence in the text input to generate pauses.
- Quantization to run the computation in int8.
We tried running this TTS model on the GPU but did not observe a speedup compared to CPU execution, notably because we use a batch size of 1 and a very small model.
We accept contributions! Feel free to open issues or pull requests on GitHub.
You can find development instructions in the CONTRIBUTING.md file, including how to set up an editable install of the package for local development.
Pocket TTS is small enough to run directly in your browser in WebAssembly/JavaScript. We don't have official support for this yet, but you can try out one of these community implementations:
- babybirdprd/pocket-tts: Candle version (Rust) with WebAssembly and PyO3 bindings, meaning it can run on the web too.
- ekzhang/jax-js: Using jax-js, a ML library for the web. Demo here
- KevinAHM/pocket-tts-onnx-export: Model exported to .onnx and run using ONNX Runtime Web. Demo here
- lukasmwerner/pocket-reader: Browser screen reader
- ikidd/pocket-tts-wyoming: Docker container for pocket-tts using the Wyoming protocol, ready for Home Assistant Voice use.
- slaughters85j/pocket-tts: Mac desktop app + macOS Quick Action
Use of our model must comply with all applicable laws and regulations and must not result in, involve, or facilitate any illegal, harmful, deceptive, fraudulent, or unauthorized activity. Prohibited uses include, without limitation, voice impersonation or cloning without explicit and lawful consent; misinformation, disinformation, or deception (including fake news, fraudulent calls, or presenting generated content as genuine recordings of real people or events); and the generation of unlawful, harmful, libelous, abusive, harassing, discriminatory, hateful, or privacy-invasive content. We disclaim all liability for any non-compliant use.
Manu Orsini*, Simon Rouard*, Gabriel De Marmiesse*, Václav Volhejn, Neil Zeghidour, Alexandre Défossez
*equal contribution