
docs: add model conversion guide #391

Merged

Alex-Wengg merged 1 commit into main from docs/model-conversion-guide on Mar 17, 2026

Conversation

Alex-Wengg (Member) commented Mar 17, 2026

Summary

  • Adds Documentation/ModelConversion.md — an end-to-end guide for converting models to CoreML and integrating them into FluidAudio
  • Covers both stages: mobius (conversion + HuggingFace upload) and FluidAudio (registration, inference code, CLI, tests)
  • Includes checklists for both repositories so nothing gets missed
  • Intended audience: contributors and coding agents working on new model integrations

Test plan

  • Review documentation for accuracy against current codebase patterns
  • Verify all referenced file paths and code patterns match the repo


claude bot (Contributor) commented Mar 17, 2026

Claude finished @Alex-Wengg's task


Reviewing PR: Model Conversion Guide

I've completed my review of the model conversion documentation. Overall, this is a comprehensive and well-structured guide that will be valuable for contributors and coding agents. Here are my findings:

Strengths

  1. Comprehensive Coverage: The guide covers both mobius conversion and FluidAudio integration stages effectively
  2. Practical Examples: Includes concrete code examples and file paths that match the current codebase structure
  3. Excellent Checklists: The checklist sections (lines 217-240) provide actionable to-do items for both repositories
  4. Accurate File Paths: All referenced paths like Sources/FluidAudio/ModelNames.swift and directory structures match the actual codebase
  5. Consistent Patterns: The ModelNames.swift examples (lines 91-124) accurately reflect current enum patterns
  6. Good DownloadUtils Usage: The DownloadUtils.loadModels() pattern (lines 141-147) matches actual usage throughout the codebase

⚠️ Issues Found

Line 136 - Actor vs Class Pattern Inconsistency
The example shows public actor MyModelManager, but the codebase has mixed patterns:

  • Actors: VadManager, StreamingAsrManager, Qwen3AsrManager, PocketTtsManager (newer pattern)
  • Classes: AsrManager (legacy public final class)

Recommendation: Clarify that new managers should use actors (per CLAUDE.md rule against @unchecked Sendable), while acknowledging some legacy managers are classes being migrated.
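To make that recommendation concrete, a minimal sketch of the actor-based manager shape follows. Everything here is illustrative, not actual FluidAudio API: `MyModelManager` and its members are hypothetical, and the real model-loading call would go through the repo's own utilities.

```swift
import Foundation

// Hypothetical sketch of the actor pattern recommended for new managers.
// Actor isolation serializes access to mutable state, so no
// @unchecked Sendable is needed (per the CLAUDE.md rule).
public actor MyModelManager {
    private var isLoaded = false

    /// Idempotent load; safe to call concurrently from multiple tasks.
    public func loadModelsIfNeeded() async throws {
        guard !isLoaded else { return }
        // In real code this would call the repo's model-loading helpers
        // (e.g. a DownloadUtils-style loader) before flipping the flag.
        isLoaded = true
    }
}
```

Callers simply `try await manager.loadModelsIfNeeded()`; the actor guarantees the load runs at most once even under concurrent calls.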

Line 170 - CLI Command Pattern
The CLI command example uses enum MyModelCommand with static func run(), but actual CLI commands vary in structure. Some use different patterns. Consider checking existing commands in Sources/FluidAudioCLI/Commands/ for the most current approach.
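For reference, the enum-with-static-run shape the guide shows looks roughly like this. The command name and body are hypothetical; the patterns actually in use live in Sources/FluidAudioCLI/Commands/ and may differ.

```swift
// Illustrative only: a caseless enum acting as a namespace for a CLI
// subcommand, mirroring the `enum MyModelCommand` / `static func run()`
// shape described in the guide.
enum MyModelCommand {
    static func run(arguments: [String]) async {
        // Parse flags, load models, run inference, print results.
        print("my-model invoked with: \(arguments)")
    }
}
```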

Line 185 - Testing Guidance
Good mention of "no synthetic data or dummy models" rule from CLAUDE.md. This is accurate and important.

📝 Minor Suggestions

  1. Line 39: Consider mentioning that diarization class also includes streaming variants
  2. Line 199-203: Build commands are accurate and match the project's swift-format configuration
  3. Line 245: The reference to Documentation/Models.md is correct and the file exists with the expected content

🎯 Overall Assessment

This documentation is production-ready and follows FluidAudio's established patterns well. The minor actor/class pattern note is the only significant issue to address. The guide will significantly help contributors understand the two-repository conversion workflow.

Recommendation: Approve with the actor pattern clarification.


devin-ai-integration bot (Contributor) left a comment


✅ Devin Review: No Issues Found

Devin Review analyzed this PR and found no potential bugs to report.

View in Devin Review to see 2 additional findings.


claude bot (Contributor) left a comment

Comprehensive model conversion guide that accurately reflects current codebase patterns. The documentation provides clear end-to-end workflow for both mobius conversion and FluidAudio integration stages.

Key strengths:

  • Accurate file paths and code examples
  • Proper ModelNames.swift enum patterns
  • Correct DownloadUtils.loadModels() usage
  • Comprehensive checklists for both repositories
  • Good adherence to CLAUDE.md rules (no dummy models, proper testing)

Minor issue: Line 136 shows actor pattern but some existing managers are classes. Recommend clarifying that actors are preferred for new implementations while acknowledging mixed patterns in legacy code.

Overall excellent documentation that will help contributors and coding agents effectively add new models to FluidAudio.

@Alex-Wengg Alex-Wengg force-pushed the docs/model-conversion-guide branch from 4d96e15 to e261a16 on March 17, 2026 at 21:36
github-actions bot commented Mar 17, 2026

Parakeet EOU Benchmark Results ✅

Status: Benchmark passed
Chunk Size: 320ms
Files Tested: 100/100

Performance Metrics

| Metric | Value | Description |
| --- | --- | --- |
| WER (Avg) | 0.00% | Average Word Error Rate |
| WER (Med) | 0.00% | Median Word Error Rate |
| RTFx | 0.00x | Real-time factor (higher = faster) |
| Total Audio | 0.0s | Total audio duration processed |
| Total Time | 0.0s | Total processing time |

Streaming Metrics

| Metric | Value | Description |
| --- | --- | --- |
| Avg Chunk Time | 0.000s | Average chunk processing time |
| Max Chunk Time | 0.000s | Maximum chunk processing time |
| EOU Detections | 0 | Total End-of-Utterance detections |

Test runtime: 0m28s • 03/17/2026, 08:48 PM EST

RTFx = Real-Time Factor (higher is better) • Processing includes: Model inference, audio preprocessing, state management, and file I/O

@Alex-Wengg Alex-Wengg force-pushed the docs/model-conversion-guide branch from e261a16 to 32fd92e on March 17, 2026 at 21:42
github-actions bot commented Mar 17, 2026

VAD Benchmark Results

Performance Comparison

| Dataset | Accuracy | Precision | Recall | F1-Score | RTFx | Files |
| --- | --- | --- | --- | --- | --- | --- |
| MUSAN | 92.0% | 86.2% | 100.0% | 92.6% | 482.3x faster | 50 |
| VOiCES | 92.0% | 86.2% | 100.0% | 92.6% | 507.3x faster | 50 |

Dataset Details

  • MUSAN: Music, Speech, and Noise dataset - standard VAD evaluation
  • VOiCES: Voices Obscured in Complex Environmental Settings - tests robustness in real-world conditions

✅: Average F1-Score above 70%

github-actions bot commented Mar 17, 2026

Sortformer High-Latency Benchmark Results

ES2004a Performance (30.4s latency config)

| Metric | Value | Target | Status |
| --- | --- | --- | --- |
| DER | 33.4% | <35% | ✅ |
| Miss Rate | 24.4% | - | - |
| False Alarm | 0.2% | - | - |
| Speaker Error | 8.8% | - | - |
| RTFx | 8.2x | >1.0x | ✅ |
| Speakers | 4/4 | - | - |

Sortformer High-Latency • ES2004a • Runtime: 6m 29s • 2026-03-17T22:48:51.122Z

github-actions bot commented Mar 17, 2026

Qwen3-ASR int8 Smoke Test ✅

| Check | Result |
| --- | --- |
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Transcription pipeline | ✅ |
| Decoder size | 571 MB (vs 1.1 GB f32) |

Runtime: 4m26s

Note: CI VM lacks physical GPU — CoreML MLState (macOS 15) KV cache produces degraded results on virtualized runners. On Apple Silicon: ~1.3% WER / 2.5x RTFx.

@Alex-Wengg Alex-Wengg force-pushed the docs/model-conversion-guide branch 3 times, most recently from 32f2ef5 to 3604ff5 on March 17, 2026 at 21:53
github-actions bot commented Mar 17, 2026

ASR Benchmark Results ✅

Status: All benchmarks passed

Parakeet v3 (multilingual)

| Dataset | WER Avg | WER Med | RTFx | Status |
| --- | --- | --- | --- | --- |
| test-clean | 0.57% | 0.00% | 4.01x | ✅ |
| test-other | 1.56% | 0.00% | 2.64x | ✅ |

Parakeet v2 (English-optimized)

| Dataset | WER Avg | WER Med | RTFx | Status |
| --- | --- | --- | --- | --- |
| test-clean | 0.80% | 0.00% | 4.45x | ✅ |
| test-other | 1.00% | 0.00% | 3.03x | ✅ |

Streaming (v3)

| Metric | Value | Description |
| --- | --- | --- |
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.49x | Streaming real-time factor |
| Avg Chunk Time | 1.888s | Average time to process each chunk |
| Max Chunk Time | 2.845s | Maximum chunk processing time |
| First Token | 2.424s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming (v2)

| Metric | Value | Description |
| --- | --- | --- |
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.53x | Streaming real-time factor |
| Avg Chunk Time | 1.717s | Average time to process each chunk |
| Max Chunk Time | 2.485s | Maximum chunk processing time |
| First Token | 1.774s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming

25 files per dataset • Test runtime: 9m58s • 03/17/2026, 09:21 PM EST

RTFx = Real-Time Factor (higher is better) • Calculated as: Total audio duration ÷ Total processing time
Processing time includes: Model inference on Apple Neural Engine, audio preprocessing, state resets between files, token-to-text conversion, and file I/O
Example: RTFx of 2.0x means 10 seconds of audio processed in 5 seconds (2x faster than real-time)

Expected RTFx Performance on Physical M1 Hardware:

• M1 Mac: ~28x (clean), ~25x (other)
• CI shows ~0.5-3x due to virtualization limitations

Testing methodology follows HuggingFace Open ASR Leaderboard

github-actions bot commented Mar 17, 2026

PocketTTS Smoke Test ✅

| Check | Result |
| --- | --- |
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Synthesis pipeline | ✅ |
| Output WAV | ✅ (195.0 KB) |

Runtime: 0m39s

Note: PocketTTS uses CoreML MLState (macOS 15) KV cache + Mimi streaming state. CI VM lacks physical GPU — audio quality may differ from Apple Silicon.

@Alex-Wengg Alex-Wengg force-pushed the docs/model-conversion-guide branch 2 times, most recently from 0c7098d to e41acf9 on March 17, 2026 at 21:59
End-to-end reference covering both stages of the model conversion
pipeline (mobius conversion + FluidAudio integration), with checklists
for each repository.
@Alex-Wengg Alex-Wengg force-pushed the docs/model-conversion-guide branch from e41acf9 to 47f37d1 on March 17, 2026 at 22:11
github-actions bot commented Mar 17, 2026

Offline VBx Pipeline Results

Speaker Diarization Performance (VBx Batch Mode)

Optimal clustering with Hungarian algorithm for maximum accuracy

| Metric | Value | Target | Status | Description |
| --- | --- | --- | --- | --- |
| DER | 14.5% | <20% | ✅ | Diarization Error Rate (lower is better) |
| RTFx | 3.10x | >1.0x | ✅ | Real-Time Factor (higher is faster) |

Offline VBx Pipeline Timing Breakdown

Time spent in each stage of batch diarization

| Stage | Time (s) | % | Description |
| --- | --- | --- | --- |
| Model Download | 15.947 | 4.7 | Fetching diarization models |
| Model Compile | 6.835 | 2.0 | CoreML compilation |
| Audio Load | 0.107 | 0.0 | Loading audio file |
| Segmentation | 35.428 | 10.5 | VAD + speech detection |
| Embedding | 337.348 | 99.7 | Speaker embedding extraction |
| Clustering (VBx) | 0.844 | 0.2 | Hungarian algorithm + VBx clustering |
| Total | 338.425 | 100 | Full VBx pipeline |

Speaker Diarization Research Comparison

Offline VBx achieves competitive accuracy with batch processing

| Method | DER | Mode | Description |
| --- | --- | --- | --- |
| FluidAudio (Offline) | 14.5% | VBx Batch | On-device CoreML with optimal clustering |
| FluidAudio (Streaming) | 17.7% | Chunk-based | First-occurrence speaker mapping |
| Research baseline | 18-30% | Various | Standard dataset performance |

Pipeline Details:

  • Mode: Offline VBx with Hungarian algorithm for optimal speaker-to-cluster assignment
  • Segmentation: VAD-based voice activity detection
  • Embeddings: WeSpeaker-compatible speaker embeddings
  • Clustering: PowerSet with VBx refinement
  • Accuracy: Higher than streaming due to optimal post-hoc mapping

🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 373.6s processing • Test runtime: 6m 29s • 03/17/2026, 06:38 PM EST

@Alex-Wengg Alex-Wengg merged commit a7b044a into main Mar 17, 2026
10 of 12 checks passed
@Alex-Wengg Alex-Wengg deleted the docs/model-conversion-guide branch March 17, 2026 22:27
Alex-Wengg added a commit that referenced this pull request Mar 17, 2026
Follow-up to #391. Documents all 11 existing benchmark datasets
(LibriSpeech, FLEURS, AISHELL-1, Buckeye, AMI-SDM, etc.) with their
domains, sizes, formats, and download locations so contributors know
what's already available before creating new datasets.
Alex-Wengg added a commit that referenced this pull request Mar 17, 2026
## Summary
- Follow-up to #391 — adds an **Existing benchmark datasets** table to
`Documentation/ModelConversion.md`
- Documents all 11 datasets already available in FluidAudio
(LibriSpeech, FLEURS, AISHELL-1, Earnings22, Buckeye, VOiCES, MUSAN,
AMI-SDM, VoxConverse, CharsiuG2P, text normalization) with domain, size,
format, and download location
- Notes that most CLI commands auto-download datasets, and explains
when/how to add new ones
- Updates the checklist to reference the dataset table

## Context
PR #391 added the model conversion guide but didn't mention the existing
benchmark datasets. Contributors and coding agents should use the
existing datasets for benchmarking rather than creating new ones unless
a domain gap exists.

## Test plan
- [x] Verify all dataset names, sizes, and paths match the current
codebase
- [ ] Review table formatting renders correctly on GitHub
@github-actions

Speaker Diarization Benchmark Results

Speaker Diarization Performance

Evaluating "who spoke when" detection accuracy

| Metric | Value | Target | Status | Description |
| --- | --- | --- | --- | --- |
| DER | 15.1% | <30% | ✅ | Diarization Error Rate (lower is better) |
| JER | 24.9% | <25% | ✅ | Jaccard Error Rate |
| RTFx | 26.52x | >1.0x | ✅ | Real-Time Factor (higher is faster) |

Diarization Pipeline Timing Breakdown

Time spent in each stage of speaker diarization

| Stage | Time (s) | % | Description |
| --- | --- | --- | --- |
| Model Download | 9.052 | 22.9 | Fetching diarization models |
| Model Compile | 3.879 | 9.8 | CoreML compilation |
| Audio Load | 0.038 | 0.1 | Loading audio file |
| Segmentation | 11.869 | 30.0 | Detecting speech regions |
| Embedding | 19.781 | 50.0 | Extracting speaker voices |
| Clustering | 7.913 | 20.0 | Grouping same speakers |
| Total | 39.576 | 100 | Full pipeline |

Speaker Diarization Research Comparison

Research baselines typically achieve 18-30% DER on standard datasets

| Method | DER | Notes |
| --- | --- | --- |
| FluidAudio | 15.1% | On-device CoreML |
| Research baseline | 18-30% | Standard dataset performance |

Note: RTFx shown above is from GitHub Actions runner. On Apple Silicon with ANE:

  • M2 MacBook Air (2022): Runs at 150 RTFx real-time
  • Performance scales with Apple Neural Engine capabilities

🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 39.6s diarization time • Test runtime: 4m 43s • 03/17/2026, 08:56 PM EST

