
Make speakers publicly mutable in DiarizerTimeline #402

Merged

Alex-Wengg merged 21 commits into main from codex/publicly-mutable-diarizer-timeline on Mar 20, 2026

Make speakers publically mutable in DiarizerTimeline#402
Alex-Wengg merged 21 commits intomainfrom
codex/publicly-mutable-diarizer-timeline

Conversation

@SGD2718 SGD2718 (Collaborator) commented Mar 19, 2026

Summary

  • expose a public setter for DiarizerTimeline.speakers
  • keep the existing queue-synchronized access pattern for reads and writes

Testing

  • not run

Copilot AI review requested due to automatic review settings March 19, 2026 20:38
@claude claude bot (Contributor) commented Mar 19, 2026

Claude finished @SGD2718's task.


PR Review: Make speakers publicly mutable in DiarizerTimeline

Review Progress:

  • Create pending PR review
  • Get PR diff to understand changes
  • Read modified file for context
  • Add inline review comments
  • Submit comprehensive review

Review Complete

I've submitted a comprehensive code review focusing on:

Thread Safety: ✅ The implementation correctly maintains thread safety using queue.sync for both getter and setter.

API Design Concerns: ⚠️ The public setter for the entire speakers dictionary raises questions about:

  • Data consistency with existing timeline state
  • Unclear use cases and missing documentation
  • Potential for breaking internal invariants

Recommendations:

  • Add documentation explaining safe usage patterns
  • Consider validation in the setter
  • Add unit tests for concurrent access and edge cases

The change is technically sound but would benefit from clearer documentation and use case definition to prevent misuse that could break timeline consistency.
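The pattern the review describes — a stored dictionary guarded by a serial queue, with both accessors synchronized — might look roughly like the sketch below. The actual FluidAudio source is not visible in this thread, so the `Speaker` type, the queue label, and the backing-property name are assumptions based on the review comments:

```swift
import Foundation

/// Hypothetical stand-in for the real speaker model.
public struct Speaker {
    public var id: String
    public var name: String
}

public final class DiarizerTimeline {
    /// Serial queue guarding all mutable timeline state.
    private let queue = DispatchQueue(label: "diarizer.timeline")

    /// Backing storage; only ever touched on `queue`.
    private var _speakers: [String: Speaker] = [:]

    /// Publicly readable and (after this PR) publicly writable,
    /// with both accessors funneled through `queue.sync`.
    public var speakers: [String: Speaker] {
        get { queue.sync { _speakers } }
        set { queue.sync { _speakers = newValue } }
    }
}
```

Because `queue.sync` runs each accessor to completion on the serial queue, a reader and a writer can never interleave mid-update — which is the thread-safety property the review signs off on.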

@claude claude bot (Contributor) left a comment


Code Review Summary

Thread Safety - Good

The implementation correctly maintains thread safety by using queue.sync for both getter and setter, consistent with the existing pattern in the class.

⚠️ API Design Concerns

Lines 663-664: The public setter for the entire speakers dictionary raises some design questions:

  1. Data Consistency Risk: Directly setting _speakers could break internal consistency if the new speakers don't align with existing _finalizedPredictions, _tentativePredictions, or states arrays.

  2. Use Case Unclear: What's the intended use case? Common scenarios might be:

    • Restoring speakers from saved state
    • Bulk speaker updates after enrollment
    • Merging timelines from different sources
  3. Missing Documentation: The setter lacks documentation about when/how it should be used safely.

🔍 Suggestions

Consider these alternatives:

  • Add validation in the setter to ensure consistency with existing timeline state
  • Provide more granular methods like addSpeaker(), removeSpeaker(), or updateSpeaker()
  • Add comprehensive documentation explaining safe usage patterns
  • Consider if a replaceSpeakers(keepingSegments:) method would be more explicit
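To illustrate the more granular alternative suggested above, here is a sketch of what per-speaker methods could look like. The names (`addSpeaker`, `removeSpeaker`, `updateSpeaker`, `Speaker`) are hypothetical, taken from the review's suggestions rather than from the FluidAudio API:

```swift
import Foundation

/// Hypothetical speaker record, standing in for the real type.
public struct Speaker {
    public var id: String
    public var name: String
}

public final class DiarizerTimeline {
    private let queue = DispatchQueue(label: "diarizer.timeline")
    private var _speakers: [String: Speaker] = [:]

    /// Insert or replace a single speaker under the queue.
    public func addSpeaker(_ speaker: Speaker) {
        queue.sync { _speakers[speaker.id] = speaker }
    }

    /// Remove one speaker; returns the removed value, if any.
    @discardableResult
    public func removeSpeaker(id: String) -> Speaker? {
        queue.sync { _speakers.removeValue(forKey: id) }
    }

    /// Mutate one speaker in place without handing out the dictionary.
    public func updateSpeaker(id: String, _ mutate: (inout Speaker) -> Void) {
        queue.sync {
            guard var speaker = _speakers[id] else { return }
            mutate(&speaker)
            _speakers[id] = speaker
        }
    }
}
```

Compared with a whole-dictionary setter, each mutation here is atomic on the queue, and the methods give a natural place to add the per-speaker validation the review asks about.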

Testing Note: The PR mentions "not run" for testing. Consider adding unit tests that verify:

  • Thread safety under concurrent access
  • Behavior when setting speakers on a timeline with existing segments
  • Performance impact of the setter

📋 Minor Items

  • The change is minimal and maintains existing patterns
  • No breaking changes to the public API
  • Follows Swift property syntax conventions

Overall: The implementation is technically sound but would benefit from clearer documentation and use case definition.

devin-ai-integration[bot]

This comment was marked as resolved.


github-actions bot commented Mar 19, 2026

Parakeet EOU Benchmark Results ✅

Status: Benchmark passed
Chunk Size: 320ms
Files Tested: 100/100

Performance Metrics

| Metric | Value | Description |
|---|---|---|
| WER (Avg) | 0.00% | Average Word Error Rate |
| WER (Med) | 0.00% | Median Word Error Rate |
| RTFx | 0.00x | Real-time factor (higher = faster) |
| Total Audio | 0.0s | Total audio duration processed |
| Total Time | 0.0s | Total processing time |

Streaming Metrics

| Metric | Value | Description |
|---|---|---|
| Avg Chunk Time | 0.000s | Average chunk processing time |
| Max Chunk Time | 0.000s | Maximum chunk processing time |
| EOU Detections | 0 | Total End-of-Utterance detections |

Test runtime: 0m25s • 03/19/2026, 08:24 PM EST

RTFx = Real-Time Factor (higher is better) • Processing includes: Model inference, audio preprocessing, state management, and file I/O


github-actions bot commented Mar 19, 2026

Qwen3-ASR int8 Smoke Test ✅

| Check | Result |
|---|---|
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Transcription pipeline | ✅ |
| Decoder size | 571 MB (vs 1.1 GB f32) |

Runtime: 3m5s

Note: CI VM lacks physical GPU — CoreML MLState (macOS 15) KV cache produces degraded results on virtualized runners. On Apple Silicon: ~1.3% WER / 2.5x RTFx.


github-actions bot commented Mar 19, 2026

Sortformer High-Latency Benchmark Results

ES2004a Performance (30.4s latency config)

| Metric | Value | Target | Status |
|---|---|---|---|
| DER | 33.4% | <35% | ✅ |
| Miss Rate | 24.4% | - | - |
| False Alarm | 0.2% | - | - |
| Speaker Error | 8.8% | - | - |
| RTFx | 10.8x | >1.0x | ✅ |
| Speakers | 4/4 | - | - |

Sortformer High-Latency • ES2004a • Runtime: 4m 29s • 2026-03-20T00:37:57.201Z


github-actions bot commented Mar 19, 2026

ASR Benchmark Results ✅

Status: All benchmarks passed

Parakeet v3 (multilingual)

| Dataset | WER Avg | WER Med | RTFx | Status |
|---|---|---|---|---|
| test-clean | 0.57% | 0.00% | 4.61x | ✅ |
| test-other | 1.19% | 0.00% | 3.11x | ✅ |

Parakeet v2 (English-optimized)

| Dataset | WER Avg | WER Med | RTFx | Status |
|---|---|---|---|---|
| test-clean | 0.80% | 0.00% | 4.45x | ✅ |
| test-other | 1.78% | 0.00% | 3.39x | ✅ |

Streaming (v3)

| Metric | Value | Description |
|---|---|---|
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.57x | Streaming real-time factor |
| Avg Chunk Time | 1.683s | Average time to process each chunk |
| Max Chunk Time | 2.074s | Maximum chunk processing time |
| First Token | 1.979s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming (v2)

| Metric | Value | Description |
|---|---|---|
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.60x | Streaming real-time factor |
| Avg Chunk Time | 1.493s | Average time to process each chunk |
| Max Chunk Time | 1.758s | Maximum chunk processing time |
| First Token | 1.544s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming

25 files per dataset • Test runtime: 8m4s • 03/19/2026, 08:32 PM EST

RTFx = Real-Time Factor (higher is better) • Calculated as: Total audio duration ÷ Total processing time
Processing time includes: Model inference on Apple Neural Engine, audio preprocessing, state resets between files, token-to-text conversion, and file I/O
Example: RTFx of 2.0x means 10 seconds of audio processed in 5 seconds (2x faster than real-time)
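The RTFx definition quoted above (total audio duration ÷ total processing time) reduces to a one-line calculation; a quick sketch, with names of my own choosing:

```swift
/// Real-time factor: seconds of audio processed per second of
/// wall-clock time (higher is better).
func rtfx(audioDuration: Double, processingTime: Double) -> Double {
    audioDuration / processingTime
}

// The worked example from the note: 10 s of audio in 5 s of processing.
let factor = rtfx(audioDuration: 10, processingTime: 5)  // 2.0
```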

Expected RTFx Performance on Physical M1 Hardware:

• M1 Mac: ~28x (clean), ~25x (other)
• CI shows ~0.5-3x due to virtualization limitations

Testing methodology follows HuggingFace Open ASR Leaderboard

@SGD2718 SGD2718 requested review from Alex-Wengg March 19, 2026 20:58
@SGD2718 SGD2718 added the enhancement (New feature or request) and speaker-diarization (Issues related to speaker diarization) labels Mar 19, 2026
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

github-actions bot commented Mar 19, 2026

Offline VBx Pipeline Results

Speaker Diarization Performance (VBx Batch Mode)

Optimal clustering with Hungarian algorithm for maximum accuracy

| Metric | Value | Target | Status | Description |
|---|---|---|---|---|
| DER | 14.5% | <20% | ✅ | Diarization Error Rate (lower is better) |
| RTFx | 5.20x | >1.0x | ✅ | Real-Time Factor (higher is faster) |

Offline VBx Pipeline Timing Breakdown

Time spent in each stage of batch diarization

| Stage | Time (s) | % | Description |
|---|---|---|---|
| Model Download | 9.793 | 4.9 | Fetching diarization models |
| Model Compile | 4.197 | 2.1 | CoreML compilation |
| Audio Load | 0.043 | 0.0 | Loading audio file |
| Segmentation | 21.361 | 10.6 | VAD + speech detection |
| Embedding | 200.963 | 99.6 | Speaker embedding extraction |
| Clustering (VBx) | 0.750 | 0.4 | Hungarian algorithm + VBx clustering |
| Total | 201.854 | 100 | Full VBx pipeline |

Speaker Diarization Research Comparison

Offline VBx achieves competitive accuracy with batch processing

| Method | DER | Mode | Description |
|---|---|---|---|
| FluidAudio (Offline) | 14.5% | VBx Batch | On-device CoreML with optimal clustering |
| FluidAudio (Streaming) | 17.7% | Chunk-based | First-occurrence speaker mapping |
| Research baseline | 18-30% | Various | Standard dataset performance |

Pipeline Details:

  • Mode: Offline VBx with Hungarian algorithm for optimal speaker-to-cluster assignment
  • Segmentation: VAD-based voice activity detection
  • Embeddings: WeSpeaker-compatible speaker embeddings
  • Clustering: PowerSet with VBx refinement
  • Accuracy: Higher than streaming due to optimal post-hoc mapping

🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 223.1s processing • Test runtime: 3m 53s • 03/19/2026, 08:42 PM EST


github-actions bot commented Mar 19, 2026

PocketTTS Smoke Test ✅

| Check | Result |
|---|---|
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Synthesis pipeline | ✅ |
| Output WAV | ✅ (228.8 KB) |

Runtime: 0m42s

Note: PocketTTS uses CoreML MLState (macOS 15) KV cache + Mimi streaming state. CI VM lacks physical GPU — audio quality may differ from Apple Silicon.

@Alex-Wengg Alex-Wengg enabled auto-merge (squash) March 19, 2026 21:04
devin-ai-integration[bot]

This comment was marked as resolved.

SGD2718 and others added 2 commits March 19, 2026 14:06
Co-authored-by: devin-ai-integration[bot] <158243242+devin-ai-integration[bot]@users.noreply.github.com>
devin-ai-integration[bot]

This comment was marked as resolved.


github-actions bot commented Mar 19, 2026

Speaker Diarization Benchmark Results

Speaker Diarization Performance

Evaluating "who spoke when" detection accuracy

| Metric | Value | Target | Status | Description |
|---|---|---|---|---|
| DER | 15.1% | <30% | ✅ | Diarization Error Rate (lower is better) |
| JER | 24.9% | <25% | ✅ | Jaccard Error Rate |
| RTFx | 21.51x | >1.0x | ✅ | Real-Time Factor (higher is faster) |

Diarization Pipeline Timing Breakdown

Time spent in each stage of speaker diarization

| Stage | Time (s) | % | Description |
|---|---|---|---|
| Model Download | 10.516 | 21.6 | Fetching diarization models |
| Model Compile | 4.507 | 9.2 | CoreML compilation |
| Audio Load | 0.151 | 0.3 | Loading audio file |
| Segmentation | 14.619 | 30.0 | Detecting speech regions |
| Embedding | 24.366 | 49.9 | Extracting speaker voices |
| Clustering | 9.746 | 20.0 | Grouping same speakers |
| Total | 48.788 | 100 | Full pipeline |

Speaker Diarization Research Comparison

Research baselines typically achieve 18-30% DER on standard datasets

| Method | DER | Notes |
|---|---|---|
| FluidAudio | 15.1% | On-device CoreML |
| Research baseline | 18-30% | Standard dataset performance |

Note: RTFx shown above is from GitHub Actions runner. On Apple Silicon with ANE:

  • M2 MacBook Air (2022): runs at ~150x real-time
  • Performance scales with Apple Neural Engine capabilities

🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 48.7s diarization time • Test runtime: 3m 54s • 03/19/2026, 08:34 PM EST


github-actions bot commented Mar 19, 2026

VAD Benchmark Results

Performance Comparison

| Dataset | Accuracy | Precision | Recall | F1-Score | RTFx | Files |
|---|---|---|---|---|---|---|
| MUSAN | 92.0% | 86.2% | 100.0% | 92.6% | 641.7x faster | 50 |
| VOiCES | 92.0% | 86.2% | 100.0% | 92.6% | 396.5x faster | 50 |

Dataset Details

  • MUSAN: Music, Speech, and Noise dataset - standard VAD evaluation
  • VOiCES: Voices Obscured in Complex Environmental Settings - tests robustness in real-world conditions

✅: Average F1-Score above 70%

Six comments from devin-ai-integration[bot] were marked as resolved.

Co-authored-by: devin-ai-integration[bot] <158243242+devin-ai-integration[bot]@users.noreply.github.com>
devin-ai-integration[bot]

This comment was marked as resolved.

Co-authored-by: devin-ai-integration[bot] <158243242+devin-ai-integration[bot]@users.noreply.github.com>
@Alex-Wengg Alex-Wengg merged commit 401324d into main Mar 20, 2026
12 checks passed
@Alex-Wengg Alex-Wengg deleted the codex/publicly-mutable-diarizer-timeline branch March 20, 2026 00:41