Fix KokoroTtsManager.initialize() hang on iOS#418

Merged
Alex-Wengg merged 1 commit into main from fix/kokoro-initialize-hang-ios
Mar 24, 2026
Conversation

@Alex-Wengg Alex-Wengg commented Mar 24, 2026

Summary

Fixes #417 - KokoroTtsManager.initialize() hanging indefinitely on iOS.

Root Cause

The hang occurs during model warm-up in TtsModels.download():

  1. Working commit (3826150, Mar 20): No source_noise input, warm-up works fine
  2. Breaking commits:
    • 2ae0846 (Mar 21): Switched to fp16 models for ANE optimization
    • 4b03d1f (Mar 22): Added source_noise input requirement

The warm-up creates a massive source_noise tensor:

  • 5s model: [1, 120000, 9] = ~2.16 MB of random Float16 values
  • 15s model: [1, 360000, 9] = ~6.48 MB of random Float16 values

On iOS, ANE compilation with fp16 models + this large random tensor causes model.prediction() to hang indefinitely.
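
For illustration, a warm-up input of that shape could be constructed along these lines (a hypothetical sketch, not the actual FluidAudio code; `makeSourceNoise` and the random fill are assumptions):

```swift
import CoreML

// Illustrative sketch of the warm-up tensor described above.
// Shape [1, frameCount, 9]; e.g. frameCount = 120_000 for the 5s model,
// which at 2 bytes per Float16 element is ~2.16 MB.
func makeSourceNoise(frameCount: Int) throws -> MLMultiArray {
    let noise = try MLMultiArray(
        shape: [1, NSNumber(value: frameCount), 9],
        dataType: .float16
    )
    for i in 0..<noise.count {
        noise[i] = NSNumber(value: Float.random(in: -1...1))
    }
    return noise
}
```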

Solution

Skip warm-up entirely on iOS using #if os(macOS) guards:

  • Warm-up is just an optimization to pre-compile models for ANE
  • On iOS, first synthesis will naturally trigger compilation
  • A slightly slower first synthesis is an acceptable trade-off compared with hanging on initialization
  • macOS behavior unchanged (warm-up still runs)

Changes

```swift
#if os(macOS)
// Warm-up models on macOS to pre-compile for ANE
// Skip on iOS due to ANE compilation issues with fp16 models + large source_noise tensor
for (variant, model) in loaded {
    await warmUpModel(model, variant: variant)
}
#else
logger.info("Skipping warm-up on iOS - first synthesis will compile model")
#endif
```
  • Removed timeout workaround code (no longer needed)
  • Clean, platform-specific solution
  • No breaking API changes

Impact

  • iOS: initialize() returns immediately ✅ (no hang)
  • macOS: No change, warm-up still runs normally
  • First synthesis on iOS: Will be slower due to on-demand compilation (expected)

Test Plan

  • Builds successfully on macOS
  • Warm-up still runs on macOS (logs show timing)
  • No compilation errors or warnings
  • Test on iOS device to confirm initialize() completes
  • Verify first synthesis works on iOS (with expected delay)

claude bot commented Mar 24, 2026

Claude finished @Alex-Wengg's task.


PR Review: Fix KokoroTtsManager.initialize() hang on iOS

Todo List:

  • Create pending PR review
  • Get PR diff information
  • Read and analyze the modified file
  • Add inline review comments
  • Submit comprehensive review

Review Complete ✅

I've thoroughly reviewed this PR and submitted a comprehensive code review. Here's my assessment:

Summary: This is an excellent fix that properly addresses the iOS ANE compilation hang with a clean timeout implementation.

Key Strengths:

  • Correct use of Swift TaskGroup for timeout functionality
  • Graceful degradation - continues initialization even on timeout
  • Follows repository guidelines (no @unchecked Sendable, proper logging)
  • No breaking API changes
  • Good error handling with clear log messages

Minor Suggestions:

  • Consider making the 60-second timeout configurable
  • The withTimeout helper could be moved to shared utilities for reuse
  • Timeout error could include more context

Verdict: The implementation demonstrates solid understanding of Swift concurrency patterns and effectively solves the ANE compilation hang issue without compromising functionality. Ready for merge.

The review has been submitted as a non-blocking comment review on the PR.


@Alex-Wengg Alex-Wengg force-pushed the fix/kokoro-initialize-hang-ios branch from aa275ec to 7e0c3dd on March 24, 2026 at 15:56
github-actions bot commented Mar 24, 2026

Speaker Diarization Benchmark Results

Speaker Diarization Performance

Evaluating "who spoke when" detection accuracy

| Metric | Value | Target | Status | Description |
|--------|-------|--------|--------|-------------|
| DER | 15.1% | <30% | ✅ | Diarization Error Rate (lower is better) |
| JER | 24.9% | <25% | ✅ | Jaccard Error Rate |
| RTFx | 18.38x | >1.0x | ✅ | Real-Time Factor (higher is faster) |

Diarization Pipeline Timing Breakdown

Time spent in each stage of speaker diarization

| Stage | Time (s) | % | Description |
|-------|----------|---|-------------|
| Model Download | 12.595 | 22.1 | Fetching diarization models |
| Model Compile | 5.398 | 9.5 | CoreML compilation |
| Audio Load | 0.118 | 0.2 | Loading audio file |
| Segmentation | 17.114 | 30.0 | Detecting speech regions |
| Embedding | 28.523 | 49.9 | Extracting speaker voices |
| Clustering | 11.409 | 20.0 | Grouping same speakers |
| Total | 57.107 | 100 | Full pipeline |

Speaker Diarization Research Comparison

Research baselines typically achieve 18-30% DER on standard datasets

| Method | DER | Notes |
|--------|-----|-------|
| FluidAudio | 15.1% | On-device CoreML |
| Research baseline | 18-30% | Standard dataset performance |

Note: RTFx shown above is from GitHub Actions runner. On Apple Silicon with ANE:

  • M2 MacBook Air (2022): Runs at 150 RTFx real-time
  • Performance scales with Apple Neural Engine capabilities

🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 57.0s diarization time • Test runtime: 5m 29s • 03/24/2026, 02:34 PM EST


@claude claude bot left a comment


Code Review Summary

This PR effectively addresses the iOS initialization hang issue with a clean, well-architected solution. The timeout implementation is solid and follows Swift best practices.

Strengths ✅

Clean Timeout Implementation

  • The withTimeout helper using TaskGroup is the correct approach for Swift async/await
  • Proper cancellation of both tasks when one completes
  • Good use of @Sendable constraints for thread safety

Graceful Error Handling

  • Timeout doesn't crash the app - continues initialization with warning
  • Clear, actionable log message explaining what happened
  • First synthesis will still work (just may be slower)

Follows Repository Guidelines

  • No @unchecked Sendable usage ✅
  • Uses AppLogger for proper logging ✅
  • Maintains thread safety with actors/proper concurrency ✅

Minor Suggestions 💡

Line 179-180: Consider making the timeout configurable:

```swift
private static func warmUpModel(_ model: MLModel, variant: ModelNames.TTS.Variant, timeoutSeconds: TimeInterval = 60) async
```

Different devices might benefit from different timeout values.

Lines 195-213: The withTimeout implementation is excellent and reusable
Consider moving this to a shared utility module (like Sources/FluidAudio/Shared/AsyncUtils.swift) if timeout functionality might be needed elsewhere in the codebase.
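
For reference, a TaskGroup-based timeout helper of the kind the review describes might be sketched as follows (an assumed shape, not the actual FluidAudio implementation):

```swift
import Foundation

struct TimeoutError: Error {
    let operation: String
    let timeoutSeconds: TimeInterval
}

/// Races `body` against a timer; whichever child finishes first wins,
/// and the loser is cancelled via `group.cancelAll()`.
func withTimeout<T: Sendable>(
    seconds: TimeInterval,
    operation: String = "operation",
    _ body: @escaping @Sendable () async throws -> T
) async throws -> T {
    try await withThrowingTaskGroup(of: T.self) { group in
        group.addTask { try await body() }
        group.addTask {
            try await Task.sleep(nanoseconds: UInt64(seconds * 1_000_000_000))
            throw TimeoutError(operation: operation, timeoutSeconds: seconds)
        }
        // The first child to complete (or throw) determines the result.
        let result = try await group.next()!
        group.cancelAll()
        return result
    }
}
```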

Line 206: The timeout error could include more context:

```swift
throw TimeoutError(operation: "model warm-up", timeoutSeconds: seconds)
```

Architecture Assessment 🏗️

Problem-Solution Fit: Perfect match - ANE compilation hangs are exactly what timeouts solve

API Consistency: No breaking changes, maintains existing public interface

Performance: Minimal overhead - timeout only runs during initialization, not per-synthesis

Error Recovery: Excellent - degrades gracefully rather than failing completely

Security & Reliability 🔒

  • No security concerns identified
  • Proper resource cleanup with group.cancelAll()
  • No force unwrapping or unsafe operations
  • Memory management looks correct for MLMultiArray usage

Verdict: Approve ✅

This is a well-crafted fix that solves a real problem without introducing technical debt. The implementation demonstrates good understanding of Swift concurrency patterns and follows the repository's engineering standards.

The 60-second timeout is reasonable for model initialization, and the graceful fallback ensures the app remains functional even when ANE compilation is problematic.

devin-ai-integration[bot]

This comment was marked as resolved.

github-actions bot commented Mar 24, 2026

Qwen3-ASR int8 Smoke Test ✅

| Check | Result |
|-------|--------|
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Transcription pipeline | ✅ |
| Decoder size | 571 MB (vs 1.1 GB f32) |

Runtime: 3m17s

Note: CI VM lacks physical GPU — CoreML MLState (macOS 15) KV cache produces degraded results on virtualized runners. On Apple Silicon: ~1.3% WER / 2.5x RTFx.

github-actions bot commented Mar 24, 2026

Parakeet EOU Benchmark Results ✅

Status: Benchmark passed
Chunk Size: 320ms
Files Tested: 100/100

Performance Metrics

| Metric | Value | Description |
|--------|-------|-------------|
| WER (Avg) | 0.00% | Average Word Error Rate |
| WER (Med) | 0.00% | Median Word Error Rate |
| RTFx | 0.00x | Real-time factor (higher = faster) |
| Total Audio | 0.0s | Total audio duration processed |
| Total Time | 0.0s | Total processing time |

Streaming Metrics

| Metric | Value | Description |
|--------|-------|-------------|
| Avg Chunk Time | 0.000s | Average chunk processing time |
| Max Chunk Time | 0.000s | Maximum chunk processing time |
| EOU Detections | 0 | Total End-of-Utterance detections |

Test runtime: 0m28s • 03/24/2026, 02:26 PM EST

RTFx = Real-Time Factor (higher is better) • Processing includes: Model inference, audio preprocessing, state management, and file I/O

github-actions bot commented Mar 24, 2026

PocketTTS Smoke Test ✅

| Check | Result |
|-------|--------|
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Synthesis pipeline | ✅ |
| Output WAV | ✅ (180.0 KB) |

Runtime: 0m46s

Note: PocketTTS uses CoreML MLState (macOS 15) KV cache + Mimi streaming state. CI VM lacks physical GPU — audio quality may differ from Apple Silicon.

github-actions bot commented Mar 24, 2026

ASR Benchmark Results ✅

Status: All benchmarks passed

Parakeet v3 (multilingual)

| Dataset | WER Avg | WER Med | RTFx | Status |
|---------|---------|---------|------|--------|
| test-clean | 0.57% | 0.00% | 4.72x | ✅ |
| test-other | 1.40% | 0.00% | 2.60x | ✅ |

Parakeet v2 (English-optimized)

| Dataset | WER Avg | WER Med | RTFx | Status |
|---------|---------|---------|------|--------|
| test-clean | 0.80% | 0.00% | 3.51x | ✅ |
| test-other | 1.62% | 0.00% | 3.36x | ✅ |

Streaming (v3)

| Metric | Value | Description |
|--------|-------|-------------|
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.44x | Streaming real-time factor |
| Avg Chunk Time | 2.032s | Average time to process each chunk |
| Max Chunk Time | 2.787s | Maximum chunk processing time |
| First Token | 2.594s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming (v2)

| Metric | Value | Description |
|--------|-------|-------------|
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.56x | Streaming real-time factor |
| Avg Chunk Time | 1.617s | Average time to process each chunk |
| Max Chunk Time | 1.946s | Maximum chunk processing time |
| First Token | 1.580s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming

25 files per dataset • Test runtime: 9m35s • 03/24/2026, 02:31 PM EST

RTFx = Real-Time Factor (higher is better) • Calculated as: Total audio duration ÷ Total processing time
Processing time includes: Model inference on Apple Neural Engine, audio preprocessing, state resets between files, token-to-text conversion, and file I/O
Example: RTFx of 2.0x means 10 seconds of audio processed in 5 seconds (2x faster than real-time)

Expected RTFx Performance on Physical M1 Hardware:

• M1 Mac: ~28x (clean), ~25x (other)
• CI shows ~0.5-3x due to virtualization limitations

Testing methodology follows HuggingFace Open ASR Leaderboard

github-actions bot commented Mar 24, 2026

Sortformer High-Latency Benchmark Results

ES2004a Performance (30.4s latency config)

| Metric | Value | Target | Status |
|--------|-------|--------|--------|
| DER | 33.4% | <35% | ✅ |
| Miss Rate | 24.4% | - | - |
| False Alarm | 0.2% | - | - |
| Speaker Error | 8.8% | - | - |
| RTFx | 9.1x | >1.0x | ✅ |
| Speakers | 4/4 | - | - |

Sortformer High-Latency • ES2004a • Runtime: 5m 19s • 2026-03-24T18:44:04.600Z

github-actions bot commented Mar 24, 2026

VAD Benchmark Results

Performance Comparison

| Dataset | Accuracy | Precision | Recall | F1-Score | RTFx | Files |
|---------|----------|-----------|--------|----------|------|-------|
| MUSAN | 92.0% | 86.2% | 100.0% | 92.6% | 386.0x faster | 50 |
| VOiCES | 92.0% | 86.2% | 100.0% | 92.6% | 412.9x faster | 50 |

Dataset Details

  • MUSAN: Music, Speech, and Noise dataset - standard VAD evaluation
  • VOiCES: Voices Obscured in Complex Environmental Settings - tests robustness in real-world conditions

✅: Average F1-Score above 70%

github-actions bot commented Mar 24, 2026

Offline VBx Pipeline Results

Speaker Diarization Performance (VBx Batch Mode)

Optimal clustering with Hungarian algorithm for maximum accuracy

| Metric | Value | Target | Status | Description |
|--------|-------|--------|--------|-------------|
| DER | 14.5% | <20% | ✅ | Diarization Error Rate (lower is better) |
| RTFx | 4.54x | >1.0x | ✅ | Real-Time Factor (higher is faster) |

Offline VBx Pipeline Timing Breakdown

Time spent in each stage of batch diarization

| Stage | Time (s) | % | Description |
|-------|----------|---|-------------|
| Model Download | 10.619 | 4.6 | Fetching diarization models |
| Model Compile | 4.551 | 2.0 | CoreML compilation |
| Audio Load | 0.091 | 0.0 | Loading audio file |
| Segmentation | 23.434 | 10.1 | VAD + speech detection |
| Embedding | 229.956 | 99.5 | Speaker embedding extraction |
| Clustering (VBx) | 0.952 | 0.4 | Hungarian algorithm + VBx clustering |
| Total | 231.115 | 100 | Full VBx pipeline |

Speaker Diarization Research Comparison

Offline VBx achieves competitive accuracy with batch processing

| Method | DER | Mode | Description |
|--------|-----|------|-------------|
| FluidAudio (Offline) | 14.5% | VBx Batch | On-device CoreML with optimal clustering |
| FluidAudio (Streaming) | 17.7% | Chunk-based | First-occurrence speaker mapping |
| Research baseline | 18-30% | Various | Standard dataset performance |

Pipeline Details:

  • Mode: Offline VBx with Hungarian algorithm for optimal speaker-to-cluster assignment
  • Segmentation: VAD-based voice activity detection
  • Embeddings: WeSpeaker-compatible speaker embeddings
  • Clustering: PowerSet with VBx refinement
  • Accuracy: Higher than streaming due to optimal post-hoc mapping

🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 254.3s processing • Test runtime: 4m 26s • 03/24/2026, 02:25 PM EST

@Alex-Wengg Alex-Wengg force-pushed the fix/kokoro-initialize-hang-ios branch from 8ec040d to 62c88fb on March 24, 2026 at 17:40
Fixes #417 - KokoroTtsManager.initialize() hanging indefinitely on iOS.

Root Cause:
The hang is caused by v2 models specifically, not warm-up itself. The v2
models (kokoro_21_5s_v2, kokoro_21_15s_v2) were introduced 3 days ago with:
- fp16 precision for ANE optimization (commit 2ae0846, Mar 21)
- source_noise input requirement (commit 4b03d1f, Mar 22)

On iOS, ANE compilation with fp16 models + large source_noise tensor
(up to 6.48 MB for 15s model) causes model.prediction() to hang
indefinitely during warm-up. The v1 models work fine on iOS.

Solution:
Use v1 models on iOS, v2 models on macOS:
- iOS: kokoro_21_5s.mlmodelc, kokoro_21_15s.mlmodelc (fp32, no source_noise)
- macOS: kokoro_21_5s_v2.mlmodelc, kokoro_21_15s_v2.mlmodelc (fp16, 1.67x faster)

Platform-specific model selection in ModelNames.swift:
  #if os(iOS)
    return "kokoro_21_5s.mlmodelc"  // v1
  #else
    return "kokoro_21_5s_v2.mlmodelc"  // v2
  #endif

Conditional source_noise input:
- Check model.modelDescription.inputDescriptionsByName["source_noise"]
- Only add source_noise if model expects it
- v1 models don't have this input, v2 models require it

Impact:
- iOS: initialize() returns immediately, no hang, warm-up works
- macOS: No change, keeps v2 performance benefits
- iOS loses v2 speed boost, but gains stability
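
The conditional source_noise handling described in the commit message might look roughly like this (an illustrative sketch; input names such as `text_ids` are assumptions, not the real interface):

```swift
import CoreML

// Illustrative only: build the feature dictionary, adding source_noise
// only when the loaded model declares that input (v2 models do, v1 don't).
func makeInputs(
    model: MLModel, textIds: MLMultiArray, sourceNoise: MLMultiArray
) throws -> MLFeatureProvider {
    var features: [String: Any] = ["text_ids": textIds]
    if model.modelDescription.inputDescriptionsByName["source_noise"] != nil {
        features["source_noise"] = sourceNoise
    }
    return try MLDictionaryFeatureProvider(dictionary: features)
}
```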
@Alex-Wengg Alex-Wengg force-pushed the fix/kokoro-initialize-hang-ios branch from 62c88fb to 1d42321 on March 24, 2026 at 18:10
@Alex-Wengg Alex-Wengg merged commit cc5a4f4 into main Mar 24, 2026
12 checks passed
@Alex-Wengg Alex-Wengg deleted the fix/kokoro-initialize-hang-ios branch March 24, 2026 18:19


Development

Successfully merging this pull request may close these issues.

KokoroTtsManager.initialize() hangs after ANE fp16 + source_noise commits

1 participant