Make speakers publicly mutable in DiarizerTimeline #402
Conversation
Claude finished @SGD2718's task — View job

PR Review: Make speakers publicly mutable in DiarizerTimeline

✅ Review Complete. I've submitted a comprehensive code review focusing on:
- Thread Safety: ✅ the implementation correctly maintains thread safety using `queue.sync`
- API Design Concerns
- Recommendations

The change is technically sound but would benefit from clearer documentation and use-case definition to prevent misuse that could break timeline consistency.
Code Review Summary
✅ Thread Safety - Good
The implementation correctly maintains thread safety by using queue.sync for both getter and setter, consistent with the existing pattern in the class.
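The pattern described above can be sketched as follows. This is a minimal stand-in, not the actual DiarizerTimeline source: the `Speaker` type, queue label, and surrounding internals are assumptions for illustration, but the accessor shape matches the `queue.sync` getter/setter pattern the review refers to.

```swift
import Foundation

// Illustrative stand-in for the reviewed class; the real DiarizerTimeline
// protects additional state (predictions, states arrays) on the same queue.
struct Speaker {
    var name: String
}

final class DiarizerTimeline {
    private let queue = DispatchQueue(label: "diarizer.timeline.state")
    private var _speakers: [String: Speaker] = [:]

    // Getter and setter both execute on the same serial queue, so a read
    // never observes a half-applied write from another thread.
    public var speakers: [String: Speaker] {
        get { queue.sync { _speakers } }
        set { queue.sync { _speakers = newValue } }
    }
}
```

Because a serial queue executes one block at a time, this gives atomicity per access; it does not, however, make a read-modify-write sequence atomic, which is part of the consistency concern raised below.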
⚠️ API Design Concerns
Lines 663-664: The public setter for the entire speakers dictionary raises some design questions:
- Data Consistency Risk: Directly setting `_speakers` could break internal consistency if the new speakers don't align with the existing `_finalizedPredictions`, `_tentativePredictions`, or `states` arrays.
- Use Case Unclear: What's the intended use case? Common scenarios might be:
  - Restoring speakers from saved state
  - Bulk speaker updates after enrollment
  - Merging timelines from different sources
- Missing Documentation: The setter lacks documentation about when/how it should be used safely.
🔍 Suggestions
Consider these alternatives:
- Add validation in the setter to ensure consistency with existing timeline state
- Provide more granular methods like `addSpeaker()`, `removeSpeaker()`, or `updateSpeaker()`
- Add comprehensive documentation explaining safe usage patterns
- Consider if a `replaceSpeakers(keepingSegments:)` method would be more explicit
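One possible shape for the granular alternative is sketched below. The method names follow the review's suggestion; the bodies, parameter labels, and types are hypothetical, since the actual DiarizerTimeline internals aren't shown here.

```swift
import Foundation

// Hypothetical granular mutators; each one is a single queue.sync block,
// so any invariant spanning the dictionary and other timeline state could
// be validated in one place instead of trusting callers of a raw setter.
final class DiarizerTimeline {
    struct Speaker { var name: String }

    private let queue = DispatchQueue(label: "diarizer.timeline.state")
    private var _speakers: [String: Speaker] = [:]

    func addSpeaker(_ speaker: Speaker, forID id: String) {
        queue.sync { _speakers[id] = speaker }
    }

    func removeSpeaker(forID id: String) {
        queue.sync { _speakers[id] = nil }
    }

    // Read-modify-write as one atomic operation, which a plain public
    // getter/setter pair cannot guarantee.
    func updateSpeaker(forID id: String, _ transform: (inout Speaker) -> Void) {
        queue.sync {
            guard var speaker = _speakers[id] else { return }
            transform(&speaker)
            _speakers[id] = speaker
        }
    }

    var speakerCount: Int { queue.sync { _speakers.count } }
}
```

The `updateSpeaker` closure form is the key difference from a mutable property: with `timeline.speakers[id]?.name = …` the get and set are two separate `queue.sync` hops, and another thread can interleave between them.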
Testing Note: The PR mentions "not run" for testing. Consider adding unit tests that verify:
- Thread safety under concurrent access
- Behavior when setting speakers on a timeline with existing segments
- Performance impact of the setter
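A concurrent-access check along the lines suggested above could look like this. The class here is a reduced stand-in using the same `queue.sync` accessor pattern, not the real implementation:

```swift
import Foundation

// Reduced stand-in: same queue.sync accessor pattern as the PR.
final class DiarizerTimeline {
    private let queue = DispatchQueue(label: "diarizer.timeline.state")
    private var _speakers: [String: Int] = [:]

    var speakers: [String: Int] {
        get { queue.sync { _speakers } }
        set { queue.sync { _speakers = newValue } }
    }
}

let timeline = DiarizerTimeline()
// Hammer the property from many threads at once. Without the queue.sync
// accessors this would crash or corrupt the dictionary; with them every
// read sees a fully applied write.
DispatchQueue.concurrentPerform(iterations: 1_000) { i in
    timeline.speakers = ["speaker": i]
    _ = timeline.speakers
}
print(timeline.speakers.count)  // prints 1
```

Running such a loop under Thread Sanitizer (`-sanitize=thread`) would additionally flag any data race the assertions alone might miss.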
📋 Minor Items
- The change is minimal and maintains existing patterns
- No breaking changes to the public API
- Follows Swift property syntax conventions
Overall: The implementation is technically sound but would benefit from clearer documentation and use case definition.
Parakeet EOU Benchmark Results ✅
Status: Benchmark passed

Performance Metrics
Streaming Metrics

Test runtime: 0m25s • 03/19/2026, 08:24 PM EST
RTFx = Real-Time Factor (higher is better) • Processing includes: model inference, audio preprocessing, state management, and file I/O
Qwen3-ASR int8 Smoke Test ✅
Runtime: 3m5s
Note: CI VM lacks physical GPU — CoreML MLState (macOS 15) KV cache produces degraded results on virtualized runners. On Apple Silicon: ~1.3% WER / 2.5x RTFx.
Sortformer High-Latency Benchmark Results
ES2004a Performance (30.4s latency config)

Sortformer High-Latency • ES2004a • Runtime: 4m 29s • 2026-03-20T00:37:57.201Z
ASR Benchmark Results ✅
Status: All benchmarks passed

Parakeet v3 (multilingual)
Parakeet v2 (English-optimized)
Streaming (v3)
Streaming (v2)

Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming.
25 files per dataset • Test runtime: 8m4s • 03/19/2026, 08:32 PM EST
RTFx = Real-Time Factor (higher is better) • Calculated as: total audio duration ÷ total processing time
Expected RTFx performance on physical M1 hardware: ~28x (clean), ~25x (other)
Testing methodology follows the HuggingFace Open ASR Leaderboard
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Offline VBx Pipeline Results
Speaker Diarization Performance (VBx Batch Mode): optimal clustering with Hungarian algorithm for maximum accuracy
Offline VBx Pipeline Timing Breakdown: time spent in each stage of batch diarization
Speaker Diarization Research Comparison: offline VBx achieves competitive accuracy with batch processing
Pipeline Details:
🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 223.1s processing • Test runtime: 3m 53s • 03/19/2026, 08:42 PM EST
PocketTTS Smoke Test ✅
Runtime: 0m42s
Note: PocketTTS uses CoreML MLState (macOS 15) KV cache + Mimi streaming state. CI VM lacks physical GPU — audio quality may differ from Apple Silicon.
Co-authored-by: devin-ai-integration[bot] <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Speaker Diarization Benchmark Results
Speaker Diarization Performance: evaluating "who spoke when" detection accuracy
Diarization Pipeline Timing Breakdown: time spent in each stage of speaker diarization
Speaker Diarization Research Comparison: research baselines typically achieve 18-30% DER on standard datasets
Note: RTFx shown above is from the GitHub Actions runner. On Apple Silicon with ANE:
🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 48.7s diarization time • Test runtime: 3m 54s • 03/19/2026, 08:34 PM EST
VAD Benchmark Results
Performance Comparison
Dataset Details
✅: Average F1-Score above 70%
Co-authored-by: devin-ai-integration[bot] <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Summary
Testing