Clarify custom vocabulary model compatibility and approach selection #469
Update CustomVocabulary.md to clearly explain which models work with which approaches:

- Add a Quick Start table showing that TDT-CTC-110M uses Approach 1 and Parakeet 0.6B uses Approach 2
- Add a Model Compatibility section explaining hybrid vs. pure TDT architectures
- Expand the comparison table with explicit compatibility checkmarks
- Add a "Which Approach Should I Use?" decision guide
- Clarify that TDT-CTC-110M has a built-in CTC head (1 MB), while the 0.6B models require a separate CTC encoder (97.5 MB)
- Update all diagrams and descriptions to remove ambiguity about model requirements

Resolves confusion about the "v1 vs v2" terminology: these are approaches, not model versions.
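The compatibility rule described above can be sketched as a small helper. This is illustrative only: the model identifiers and the `choose_approach` function are hypothetical names, not part of the actual library API.

```python
def choose_approach(model_name: str) -> int:
    """Return the custom-vocabulary approach for a given ASR model.

    Approach 1: hybrid models with a built-in CTC head (~1 MB extra).
    Approach 2: pure TDT models, which need a separate CTC encoder (~97.5 MB).
    Model names below are illustrative placeholders.
    """
    hybrid_models = {"parakeet-tdt_ctc-110m"}           # built-in CTC head
    pure_tdt_models = {"parakeet-tdt-0.6b-v2",
                       "parakeet-tdt-0.6b-v3"}          # separate CTC encoder
    if model_name in hybrid_models:
        return 1
    if model_name in pure_tdt_models:
        return 2
    raise ValueError(f"Unknown model: {model_name}")

print(choose_approach("parakeet-tdt_ctc-110m"))  # 1
print(choose_approach("parakeet-tdt-0.6b-v3"))   # 2
```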
PocketTTS Smoke Test ✅
Runtime: 0m36s

Note: PocketTTS uses CoreML MLState (macOS 15) KV cache + Mimi streaming state. CI VM lacks a physical GPU — audio quality may differ from Apple Silicon.
Parakeet EOU Benchmark Results ✅

Status: Benchmark passed

Performance Metrics
Streaming Metrics
Test runtime: 1m22s • 03/30/2026, 12:32 AM EST
RTFx = Real-Time Factor (higher is better) • Processing includes: model inference, audio preprocessing, state management, and file I/O
VAD Benchmark Results

Performance Comparison
Dataset Details
✅: Average F1-Score above 70%
Sortformer High-Latency Benchmark Results

ES2004a Performance (30.4s latency config)
Sortformer High-Latency • ES2004a • Runtime: 2m 22s • 2026-03-30T04:33:07.961Z
Qwen3-ASR int8 Smoke Test ✅
Performance Metrics
Runtime: 3m55s

Note: CI VM lacks a physical GPU — CoreML MLState (macOS 15) KV cache produces degraded results on virtualized runners. On Apple Silicon: ~1.3% WER / 2.5x RTFx.
Speaker Diarization Benchmark Results

Speaker Diarization Performance

Evaluating "who spoke when" detection accuracy
Diarization Pipeline Timing Breakdown

Time spent in each stage of speaker diarization
Speaker Diarization Research Comparison

Research baselines typically achieve 18-30% DER on standard datasets
Note: RTFx shown above is from GitHub Actions runner. On Apple Silicon with ANE:
🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 36.2s diarization time • Test runtime: 2m 6s • 03/30/2026, 12:41 AM EST
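For reference, the DER metric cited against research baselines above is the sum of missed speech, false-alarm, and speaker-confusion durations divided by total reference speech time. A minimal sketch with illustrative numbers (not taken from the ES2004a run):

```python
def diarization_error_rate(missed: float, false_alarm: float,
                           confusion: float, total_speech: float) -> float:
    """DER = (missed + false alarm + speaker confusion) / total reference speech.
    All arguments are durations in seconds."""
    return (missed + false_alarm + confusion) / total_speech

# Illustrative durations only, not results from the CI run above:
der = diarization_error_rate(missed=60.0, false_alarm=30.0,
                             confusion=120.0, total_speech=1000.0)
print(f"{der:.1%}")  # 21.0%
```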
Offline VBx Pipeline Results

Speaker Diarization Performance (VBx Batch Mode)

Optimal clustering with Hungarian algorithm for maximum accuracy
Offline VBx Pipeline Timing Breakdown

Time spent in each stage of batch diarization
Speaker Diarization Research Comparison

Offline VBx achieves competitive accuracy with batch processing
Pipeline Details:
🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 232.0s processing • Test runtime: 3m 58s • 03/30/2026, 12:41 AM EST
ASR Benchmark Results ✅

Status: All benchmarks passed

Parakeet v3 (multilingual)
Parakeet v2 (English-optimized)
Streaming (v3)
Streaming (v2)
Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming
25 files per dataset • Test runtime: 5m24s • 03/30/2026, 12:41 AM EST
RTFx = Real-Time Factor (higher is better) • Calculated as: total audio duration ÷ total processing time

Expected RTFx Performance on Physical M1 Hardware:
- M1 Mac: ~28x (clean), ~25x (other)

Testing methodology follows the HuggingFace Open ASR Leaderboard
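The RTFx figure quoted throughout these reports is simply total audio duration divided by wall-clock processing time; the numbers below are illustrative, not results from this run:

```python
def rtfx(total_audio_seconds: float, processing_seconds: float) -> float:
    """Real-Time Factor: seconds of audio processed per second of
    wall-clock time. Higher means faster than real time."""
    return total_audio_seconds / processing_seconds

# Illustrative values only (e.g. one hour of audio processed in ~2 minutes):
print(round(rtfx(3600.0, 128.5), 1))  # 28.0
```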
Kokoro TTS Smoke Test ✅
Runtime: 0m44s

Note: Kokoro TTS uses CoreML flow matching + Vocos vocoder. CI VM lacks a physical ANE — performance may differ from Apple Silicon.
Summary
Resolves confusion about the "v1 vs v2" terminology by clearly stating that these are approaches, not model versions. The actual model versions are TDT-CTC-110M and Parakeet TDT 0.6B v2/v3.
Motivation
The previous documentation was unclear about:

- Which models work with which custom-vocabulary approaches
- Whether "v1" and "v2" referred to model versions or implementation approaches
- Whether a model has a built-in CTC head or needs a separate CTC encoder download
This caused confusion when users saw "v1" and "v2" and thought they were model versions rather than implementation approaches.
Test plan
🤖 Generated with Claude Code