A high-performance text-to-speech (TTS) and voice conversion system optimized for Apple Silicon Macs, achieving 2-3x faster inference through MPS (Metal Performance Shaders) GPU acceleration.
This project successfully optimized the Chatterbox TTS model for Apple Silicon, meeting the ambitious target of generating 1 minute of audio in under 20 seconds on M1/M2 Macs.
- Warm-up Speed: 12.27 iterations/second (75% improvement)
- Generation Speed: 15-17+ iterations/second without reference audio
- Real-world Performance: 20.10s of audio generated in 59.34s (RTF: 2.95)
- Target Achieved: β Can generate >1 minute of audio in <20 seconds
- Apple Silicon MPS Optimization: Custom patches for efficient GPU utilization
- Fast Model Loading: CPU-first loading strategy with optimized GPU transfer
- Smart Warm-up: Pre-compilation of MPS kernels for consistent performance
- Latency Monitoring: Real-time performance metrics and RTF calculations
- Gradio Web Interface: User-friendly UI for both TTS and voice conversion
- Adjustable Generation Steps: Control generation length/quality with steps slider (100-2000, default: 1000)
The key innovation is the mps_fast_patch.py
module that addresses PyTorch MPS limitations:
- Problem: LlamaRotaryEmbedding moves tensors to CPU for trigonometric operations on every forward pass
- Solution: Pre-compute all cos/sin values once and keep them on MPS
- Result: Eliminates expensive CPUβMPS transfers during inference
1. Load model on CPU (handles CUDA-saved checkpoints)
2. Transfer components to MPS
3. Apply FastMPSRotaryEmbedding patch
4. Run warm-up generation
5. Ready for fast inference
# Clone the repository
git clone https://github.com/clockworksquirrel/chatterbox.git
cd chatterbox
# Create virtual environment
python -m venv .venv
source .venv/bin/activate
# Install dependencies
pip install -r requirements.txt
python gradio_tts_app.py
Features:
- Text input with automatic chunking for long texts
- Reference audio upload for voice cloning
- Adjustable parameters:
- Generation Steps: 100-2000 (default: 1000) - controls max generation length
- Exaggeration: 0.25-2.0 (default: 0.5) - controls expressiveness
- CFG/Pace: 0.0-1.0 (default: 0.5) - controls generation guidance
- Temperature: 0.05-5.0 (default: 0.8) - controls randomness
- Advanced options: min_p, top_p, repetition_penalty
python gradio_vc_app.py
Features:
- Source audio upload/recording
- Reference voice selection
- Automatic voice characteristic transfer
# Basic TTS
python example_tts.py
# Voice conversion
python example_vc.py
# macOS-specific optimized example
python example_for_mac.py
-
"MPS backend out of memory"
- Reduce batch size or chunk size
- Close other GPU-intensive applications
-
Slow first generation
- This is normal - MPS compiles kernels on first use
- Subsequent generations will be much faster
-
Model loading errors
- Ensure you have enough RAM (16GB recommended)
- Check that all model files downloaded correctly
- Use the default 1000 steps for balanced quality/speed
- Lower steps (500-800) for faster generation of shorter audio
- Higher steps (1500-2000) for longer, more detailed generations
- Keep exaggeration around 0.5 for natural-sounding speech
- Enable warm-up to ensure consistent performance
This project was created through an innovative AI-assisted development process:
- Cursor AI - AI-powered code editor that helped implement the MPS optimizations
- Claude Opus 4 - Provided expertise on PyTorch MPS optimization and transformer architectures
- Vibe Coding - Collaborative AI-human development approach
- CodeRabbit - Monitors all updates and ensures code quality
The entire optimization was achieved through natural language descriptions of the desired improvements, with AI handling the implementation details while maintaining human oversight and direction.
- Added adjustable generation steps slider (100-2000 range)
- Updated TTS interface with steps parameter
- Fixed voice conversion app to use correct API
- Improved documentation and error handling
- Bug fixes and optimizations:
- Added input validation for audio tensor inputs in voice conversion
- Fixed temporary file cleanup with proper try-finally blocks
- Removed unused imports and unnecessary f-strings
- Improved code structure with GenerationConfig dataclass
- Enhanced error handling and resource management
- Unlimited Text Generation Support: Successfully implemented unlimited text input capability with automatic chunking
- Tested on M2 MacBook Pro Max with impressive results:
- Input text: 12,506 characters
- Audio duration: 850.80 seconds (14.18 minutes)
- Generation time: 1887.03 seconds (31.45 minutes)
- Real-time factor (RTF): 2.22x
- Sample rate: 24,000 Hz
- The system can now handle texts of any length, from short sentences to entire books
- Tested on M2 MacBook Pro Max with impressive results:
- Voice Accuracy Note: Currently working on improving voice cloning accuracy with reference audio files
- The current TTS implementation has limitations with accent reproduction
- Example: Northern English accents from ElevenLabs voices are rendered with Australian characteristics
- We are monitoring Chatterbox development for improvements in this area
- Will update our implementation as soon as better voice cloning capabilities become available
This project is licensed under the MIT License - see the LICENSE file for details.
- ResembleAI for the original Chatterbox model
- PyTorch team for MPS backend development
- Apple for Metal Performance Shaders framework
Optimized with β€οΈ for Apple Silicon by the Vibe Coding community
We're excited to introduce Chatterbox, Resemble AI's first production-grade open source TTS model. Licensed under MIT, Chatterbox has been benchmarked against leading closed-source systems like ElevenLabs, and is consistently preferred in side-by-side evaluations.
Whether you're working on memes, videos, games, or AI agents, Chatterbox brings your content to life. It's also the first open source TTS model to support emotion exaggeration control, a powerful feature that makes your voices stand out. Try it now on our Hugging Face Gradio app.
If you like the model but need to scale or tune it for higher accuracy, check out our competitively priced TTS service (link). It delivers reliable performance with ultra-low latency of sub 200msβideal for production use in agents, applications, or interactive media.
- SoTA zeroshot TTS
- 0.5B Llama backbone
- Unique exaggeration/intensity control
- Ultra-stable with alignment-informed inference
- Trained on 0.5M hours of cleaned data
- Watermarked outputs
- Easy voice conversion script
- Outperforms ElevenLabs
-
General Use (TTS and Voice Agents):
- The default settings (
exaggeration=0.5
,cfg_weight=0.5
) work well for most prompts. - If the reference speaker has a fast speaking style, lowering
cfg_weight
to around0.3
can improve pacing.
- The default settings (
-
Expressive or Dramatic Speech:
- Try lower
cfg_weight
values (e.g.~0.3
) and increaseexaggeration
to around0.7
or higher. - Higher
exaggeration
tends to speed up speech; reducingcfg_weight
helps compensate with slower, more deliberate pacing.
- Try lower
pip install chatterbox-tts
Alternatively, you can install from source:
# conda create -yn chatterbox python=3.11
# conda activate chatterbox
git clone https://github.com/resemble-ai/chatterbox.git
cd chatterbox
pip install -e .
We developed and tested Chatterbox on Python 3.11 on Debain 11 OS; the versions of the dependencies are pinned in pyproject.toml
to ensure consistency. You can modify the code or dependencies in this installation mode.
import torchaudio as ta
from chatterbox.tts import ChatterboxTTS
model = ChatterboxTTS.from_pretrained(device="cuda")
text = "Ezreal and Jinx teamed up with Ahri, Yasuo, and Teemo to take down the enemy's Nexus in an epic late-game pentakill."
wav = model.generate(text)
ta.save("test-1.wav", wav, model.sr)
# If you want to synthesize with a different voice, specify the audio prompt
AUDIO_PROMPT_PATH = "YOUR_FILE.wav"
wav = model.generate(text, audio_prompt_path=AUDIO_PROMPT_PATH)
ta.save("test-2.wav", wav, model.sr)
See example_tts.py
and example_vc.py
for more examples.
Currenlty only English.
Every audio file generated by Chatterbox includes Resemble AI's Perth (Perceptual Threshold) Watermarker - imperceptible neural watermarks that survive MP3 compression, audio editing, and common manipulations while maintaining nearly 100% detection accuracy.
You can look for the watermark using the following script.
import perth
import librosa
AUDIO_PATH = "YOUR_FILE.wav"
# Load the watermarked audio
watermarked_audio, sr = librosa.load(AUDIO_PATH, sr=None)
# Initialize watermarker (same as used for embedding)
watermarker = perth.PerthImplicitWatermarker()
# Extract watermark
watermark = watermarker.get_watermark(watermarked_audio, sample_rate=sr)
print(f"Extracted watermark: {watermark}")
# Output: 0.0 (no watermark) or 1.0 (watermarked)
π Join us on Discord and let's build something awesome together!
Don't use this model to do bad things. Prompts are sourced from freely available data on the internet.