
docs: add existing benchmark datasets to model conversion guide#392

Merged
Alex-Wengg merged 4 commits into main from docs/add-benchmark-datasets
Mar 17, 2026

Conversation


Alex-Wengg commented Mar 17, 2026

Summary

  • Follow-up to docs: add model conversion guide #391 — adds an Existing benchmark datasets table to Documentation/ModelConversion.md
  • Documents all 11 datasets already available in FluidAudio (LibriSpeech, FLEURS, AISHELL-1, Earnings22, Buckeye, VOiCES, MUSAN, AMI-SDM, VoxConverse, CharsiuG2P, text normalization) with domain, size, format, and download location
  • Notes that most CLI commands auto-download datasets, and explains when/how to add new ones
  • Updates the checklist to reference the dataset table

Context

PR #391 added the model conversion guide but didn't mention the existing benchmark datasets. Contributors and coding agents should use the existing datasets for benchmarking rather than creating new ones unless a domain gap exists.

Test plan

  • Verify all dataset names, sizes, and paths match the current codebase
  • Review table formatting renders correctly on GitHub


Follow-up to #391. Documents all 11 existing benchmark datasets
(LibriSpeech, FLEURS, AISHELL-1, Buckeye, AMI-SDM, etc.) with their
domains, sizes, formats, and download locations so contributors know
what's already available before creating new datasets.

claude bot commented Mar 17, 2026

Claude finished @Alex-Wengg's task.


PR Review Complete ✅

I've completed the review of PR #392 and provided feedback using the GitHub review system.

Todo List:

  • Create pending GitHub review
  • Get PR diff to understand changes
  • Review documentation changes for accuracy and formatting
  • Verify dataset information matches codebase
  • Add inline review comments for any issues
  • Submit review with overall assessment

Summary

This PR adds a valuable benchmark datasets table to the model conversion guide. The documentation is well-structured and mostly accurate. I identified a few minor inconsistencies:

  1. FLEURS dataset naming: Code uses both FluidInference/fleurs and FluidInference/fleurs-full
  2. Language count: Could be more precise (30 languages vs "24+ langs")
  3. Text normalization path: Relative path may need clarification

Overall: Strong documentation improvement that will help contributors use existing datasets effectively. The minor issues don't block the PR but could be addressed for completeness.


devin-ai-integration bot left a comment


✅ Devin Review: No Issues Found

Devin Review analyzed this PR and found no bugs or issues to report.



claude bot left a comment


Code Review Summary

This PR adds a comprehensive table of existing benchmark datasets to the model conversion guide, which is a valuable addition. The documentation is generally well-structured and the information appears accurate based on verification against the codebase.

Key Findings

Strengths:

  • ✅ Comprehensive coverage of all 11 existing datasets
  • ✅ Accurate dataset sizes and locations verified against codebase
  • ✅ Clear guidance on using existing datasets vs creating new ones
  • ✅ Good table formatting with domain, size, format, and download info
  • ✅ Helpful note about auto-download functionality
  • ✅ Updated checklist reference to point to the new table

Minor Issues Identified:

  1. FLEURS dataset inconsistency: The codebase uses both FluidInference/fleurs and FluidInference/fleurs-full repositories in different benchmark commands. The table shows FluidInference/fleurs, but some commands reference the -full variant.

  2. Text normalization dataset location: The table mentions "Checked in at text-processing-rs/tests/data/" but this appears to be a relative path that may not exist in the current repository structure.

  3. Language count precision: The table shows "24+ langs" for FLEURS, but the code shows exactly 30 languages are supported. Consider being more precise.

Overall Assessment

This is a solid documentation improvement that will help contributors understand available benchmark datasets. The minor inconsistencies don't impact the core value of the documentation but could be clarified for completeness.

Recommendation: Approve with optional minor clarifications.

Alex-Wengg merged commit 4cf1914 into main Mar 17, 2026
8 of 12 checks passed
Alex-Wengg deleted the docs/add-benchmark-datasets branch March 17, 2026 22:41

github-actions bot commented Mar 17, 2026

ASR Benchmark Results ✅

Status: All benchmarks passed

Parakeet v3 (multilingual)

| Dataset | WER Avg | WER Med | RTFx | Status |
|---|---|---|---|---|
| test-clean | 0.57% | 0.00% | 5.51x | ✅ |
| test-other | 1.19% | 0.00% | 3.47x | ✅ |

Parakeet v2 (English-optimized)

| Dataset | WER Avg | WER Med | RTFx | Status |
|---|---|---|---|---|
| test-clean | 0.80% | 0.00% | 5.55x | ✅ |
| test-other | 1.00% | 0.00% | 3.65x | ✅ |

Streaming (v3)

| Metric | Value | Description |
|---|---|---|
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.67x | Streaming real-time factor |
| Avg Chunk Time | 1.370s | Average time to process each chunk |
| Max Chunk Time | 1.536s | Maximum chunk processing time |
| First Token | 1.632s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming (v2)

| Metric | Value | Description |
|---|---|---|
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.64x | Streaming real-time factor |
| Avg Chunk Time | 1.386s | Average time to process each chunk |
| Max Chunk Time | 1.624s | Maximum chunk processing time |
| First Token | 1.373s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming

25 files per dataset • Test runtime: 6m43s • 03/17/2026, 09:03 PM EST

RTFx = Real-Time Factor (higher is better) • Calculated as: Total audio duration ÷ Total processing time
Processing time includes: Model inference on Apple Neural Engine, audio preprocessing, state resets between files, token-to-text conversion, and file I/O
Example: RTFx of 2.0x means 10 seconds of audio processed in 5 seconds (2x faster than real-time)
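As a quick sanity check of that formula, here is a minimal Swift sketch using the example numbers above (variable names are illustrative, not FluidAudio API):

```swift
// Minimal sketch of the RTFx formula quoted above; illustrative names only.
let totalAudioDuration = 10.0    // seconds of audio in the benchmark set
let totalProcessingTime = 5.0    // seconds spent processing it
let rtfx = totalAudioDuration / totalProcessingTime
print(String(format: "RTFx = %.1fx", rtfx))   // prints "RTFx = 2.0x"
```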

Expected RTFx Performance on Physical M1 Hardware:

• M1 Mac: ~28x (clean), ~25x (other)
• CI shows ~0.5-3x due to virtualization limitations

Testing methodology follows HuggingFace Open ASR Leaderboard


github-actions bot commented Mar 17, 2026

PocketTTS Smoke Test ✅

| Check | Result |
|---|---|
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Synthesis pipeline | ✅ |
| Output WAV | ✅ (172.5 KB) |

Runtime: 0m31s

Note: PocketTTS uses CoreML MLState (macOS 15) KV cache + Mimi streaming state. CI VM lacks physical GPU — audio quality may differ from Apple Silicon.


github-actions bot commented Mar 17, 2026

Qwen3-ASR int8 Smoke Test ✅

| Check | Result |
|---|---|
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Transcription pipeline | ✅ |
| Decoder size | 571 MB (vs 1.1 GB f32) |

Runtime: 3m56s

Note: CI VM lacks physical GPU — CoreML MLState (macOS 15) KV cache produces degraded results on virtualized runners. On Apple Silicon: ~1.3% WER / 2.5x RTFx.

github-actions bot commented Mar 17, 2026

Sortformer High-Latency Benchmark Results

ES2004a Performance (30.4s latency config)

| Metric | Value | Target | Status |
|---|---|---|---|
| DER | 33.4% | <35% | ✅ |
| Miss Rate | 24.4% | - | - |
| False Alarm | 0.2% | - | - |
| Speaker Error | 8.8% | - | - |
| RTFx | 14.4x | >1.0x | ✅ |
| Speakers | 4/4 | - | - |
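For readers unfamiliar with the metric: DER is conventionally the sum of the miss, false-alarm, and speaker-error rates, which the table's numbers confirm. A minimal Swift sketch using the ES2004a values above (illustrative names, not FluidAudio API):

```swift
// DER = miss rate + false alarm rate + speaker error rate,
// using the ES2004a values from the table above. Illustrative only.
let missRate = 24.4        // % of reference speech left unattributed
let falseAlarm = 0.2       // % of non-speech scored as speech
let speakerError = 8.8     // % of speech assigned to the wrong speaker
let der = missRate + falseAlarm + speakerError
print(String(format: "DER = %.1f%%", der))   // prints "DER = 33.4%"
```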

Sortformer High-Latency • ES2004a • Runtime: 4m 18s • 2026-03-17T23:42:49.444Z

github-actions bot commented Mar 17, 2026

Speaker Diarization Benchmark Results

Speaker Diarization Performance

Evaluating "who spoke when" detection accuracy

| Metric | Value | Target | Status | Description |
|---|---|---|---|---|
| DER | 15.1% | <30% | ✅ | Diarization Error Rate (lower is better) |
| JER | 24.9% | <25% | ✅ | Jaccard Error Rate |
| RTFx | 26.51x | >1.0x | ✅ | Real-Time Factor (higher is faster) |
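JER, unlike DER, is averaged per reference speaker. Under the usual definition (an assumption here; FluidAudio's exact implementation may differ), the per-speaker error is one minus the Jaccard overlap of reference and system speaking time:

```swift
// Per-speaker Jaccard error: 1 - overlap / union of reference and system
// speaking time. JER averages this over reference speakers.
// Hypothetical helper, not FluidAudio API.
func jaccardError(refSeconds: Double, sysSeconds: Double, overlapSeconds: Double) -> Double {
    let union = refSeconds + sysSeconds - overlapSeconds
    return union > 0 ? 1.0 - overlapSeconds / union : 0.0
}

// Example: 60s of reference speech, 55s attributed by the system, 50s overlapping.
print(jaccardError(refSeconds: 60, sysSeconds: 55, overlapSeconds: 50))  // ≈ 0.23
```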

Diarization Pipeline Timing Breakdown

Time spent in each stage of speaker diarization

| Stage | Time (s) | % | Description |
|---|---|---|---|
| Model Download | 9.892 | 25.0 | Fetching diarization models |
| Model Compile | 4.239 | 10.7 | CoreML compilation |
| Audio Load | 0.048 | 0.1 | Loading audio file |
| Segmentation | 11.872 | 30.0 | Detecting speech regions |
| Embedding | 19.787 | 50.0 | Extracting speaker voices |
| Clustering | 7.915 | 20.0 | Grouping same speakers |
| Total | 39.581 | 100 | Full pipeline |

Speaker Diarization Research Comparison

Research baselines typically achieve 18-30% DER on standard datasets

| Method | DER | Notes |
|---|---|---|
| FluidAudio | 15.1% | On-device CoreML |
| Research baseline | 18-30% | Standard dataset performance |

Note: RTFx shown above is from GitHub Actions runner. On Apple Silicon with ANE:

  • M2 MacBook Air (2022): runs at roughly 150x real time (RTFx 150)
  • Performance scales with Apple Neural Engine capabilities

🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 39.6s diarization time • Test runtime: 5m 26s • 03/17/2026, 08:02 PM EST

github-actions bot commented Mar 17, 2026

Offline VBx Pipeline Results

Speaker Diarization Performance (VBx Batch Mode)

Optimal clustering with Hungarian algorithm for maximum accuracy

| Metric | Value | Target | Status | Description |
|---|---|---|---|---|
| DER | 14.5% | <20% | ✅ | Diarization Error Rate (lower is better) |
| RTFx | 4.91x | >1.0x | ✅ | Real-Time Factor (higher is faster) |

Offline VBx Pipeline Timing Breakdown

Time spent in each stage of batch diarization

| Stage | Time (s) | % | Description |
|---|---|---|---|
| Model Download | 9.645 | 4.5 | Fetching diarization models |
| Model Compile | 4.134 | 1.9 | CoreML compilation |
| Audio Load | 0.033 | 0.0 | Loading audio file |
| Segmentation | 21.633 | 10.1 | VAD + speech detection |
| Embedding | 212.640 | 99.6 | Speaker embedding extraction |
| Clustering (VBx) | 0.732 | 0.3 | Hungarian algorithm + VBx clustering |
| Total | 213.550 | 100 | Full VBx pipeline |

Speaker Diarization Research Comparison

Offline VBx achieves competitive accuracy with batch processing

| Method | DER | Mode | Description |
|---|---|---|---|
| FluidAudio (Offline) | 14.5% | VBx Batch | On-device CoreML with optimal clustering |
| FluidAudio (Streaming) | 17.7% | Chunk-based | First-occurrence speaker mapping |
| Research baseline | 18-30% | Various | Standard dataset performance |

Pipeline Details:

  • Mode: Offline VBx with Hungarian algorithm for optimal speaker-to-cluster assignment
  • Segmentation: VAD-based voice activity detection
  • Embeddings: WeSpeaker-compatible speaker embeddings
  • Clustering: PowerSet with VBx refinement
  • Accuracy: Higher than streaming due to optimal post-hoc mapping

🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 235.0s processing • Test runtime: 4m 5s • 03/17/2026, 08:14 PM EST

github-actions bot commented

VAD Benchmark Results

Performance Comparison

| Dataset | Accuracy | Precision | Recall | F1-Score | RTFx | Files |
|---|---|---|---|---|---|---|
| MUSAN | 92.0% | 86.2% | 100.0% | 92.6% | 697.9x faster | 50 |
| VOiCES | 92.0% | 86.2% | 100.0% | 92.6% | 536.5x faster | 50 |

Dataset Details

  • MUSAN: Music, Speech, and Noise dataset - standard VAD evaluation
  • VOiCES: Voices Obscured in Complex Environmental Settings - tests robustness in real-world conditions

✅: Average F1-Score above 70%
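The F1 scores above follow the standard harmonic mean of precision and recall; a minimal Swift sketch with the table's values (illustrative, not FluidAudio API):

```swift
// F1 = 2 * P * R / (P + R), using the MUSAN/VOiCES values above.
let precision = 0.862    // 86.2%
let recall = 1.0         // 100.0%
let f1 = 2 * precision * recall / (precision + recall)
print(String(format: "F1 = %.1f%%", f1 * 100))   // prints "F1 = 92.6%"
```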


github-actions bot commented Mar 18, 2026

Parakeet EOU Benchmark Results ✅

Status: Benchmark passed
Chunk Size: 320ms
Files Tested: 100/100

Performance Metrics

| Metric | Value | Description |
|---|---|---|
| WER (Avg) | 0.00% | Average Word Error Rate |
| WER (Med) | 0.00% | Median Word Error Rate |
| RTFx | 0.00x | Real-time factor (higher = faster) |
| Total Audio | 0.0s | Total audio duration processed |
| Total Time | 0.0s | Total processing time |

Streaming Metrics

| Metric | Value | Description |
|---|---|---|
| Avg Chunk Time | 0.000s | Average chunk processing time |
| Max Chunk Time | 0.000s | Maximum chunk processing time |
| EOU Detections | 0 | Total End-of-Utterance detections |

Test runtime: 0m13s • 03/17/2026, 09:25 PM EST

RTFx = Real-Time Factor (higher is better) • Processing includes: Model inference, audio preprocessing, state management, and file I/O
