Conversation
Claude finished @Alex-Wengg's task

Reviewing PR: Model Conversion Guide

I've completed my review of the model conversion documentation. Overall, this is a comprehensive and well-structured guide that will be valuable for contributors and coding agents. Here are my findings:

✅ Strengths
Comprehensive model conversion guide that accurately reflects current codebase patterns. The documentation provides clear end-to-end workflow for both mobius conversion and FluidAudio integration stages.
Key strengths:
- Accurate file paths and code examples
- Proper ModelNames.swift enum patterns
- Correct DownloadUtils.loadModels() usage
- Comprehensive checklists for both repositories
- Good adherence to CLAUDE.md rules (no dummy models, proper testing)
Minor issue: line 136 shows an actor pattern, but some existing managers are classes. Recommend clarifying that actors are preferred for new implementations while acknowledging the mixed patterns in legacy code.
Overall excellent documentation that will help contributors and coding agents effectively add new models to FluidAudio.
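The actor-vs-class distinction raised above could be illustrated in the guide roughly as follows. This is a minimal sketch: the type names, members, and loading logic are hypothetical, not taken from the FluidAudio codebase.

```swift
import Foundation

// Preferred for new model managers: an actor serializes access to its
// mutable state, so concurrent callers cannot race on model loading.
// `NewModelManager` and its members are illustrative names only.
actor NewModelManager {
    private var isLoaded = false

    func loadIfNeeded() async throws {
        guard !isLoaded else { return }
        // ... download and compile CoreML models here ...
        isLoaded = true
    }
}

// Legacy pattern still present in parts of the codebase: a class that
// must guard its own state (here with a lock) to stay thread-safe.
final class LegacyModelManager {
    private let lock = NSLock()
    private var isLoaded = false

    func loadIfNeeded() throws {
        lock.lock()
        defer { lock.unlock() }
        guard !isLoaded else { return }
        // ... load models here ...
        isLoaded = true
    }
}
```

The actor version gets its thread safety from the compiler rather than from manual locking, which is why it is the safer default for new managers.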
Force-pushed: 4d96e15 to e261a16 (compare)
Parakeet EOU Benchmark Results ✅

Status: Benchmark passed

Performance Metrics · Streaming Metrics

Test runtime: 0m28s • 03/17/2026, 08:48 PM EST
RTFx = Real-Time Factor (higher is better) • Processing includes: model inference, audio preprocessing, state management, and file I/O
Force-pushed: e261a16 to 32fd92e (compare)
VAD Benchmark Results

Performance Comparison · Dataset Details

✅: Average F1-Score above 70%
Sortformer High-Latency Benchmark Results

ES2004a Performance (30.4s latency config)

Sortformer High-Latency • ES2004a • Runtime: 6m 29s • 2026-03-17T22:48:51.122Z
Qwen3-ASR int8 Smoke Test ✅

Runtime: 4m26s

Note: CI VM lacks a physical GPU — CoreML MLState (macOS 15) KV cache produces degraded results on virtualized runners. On Apple Silicon: ~1.3% WER / 2.5x RTFx.
Force-pushed: 32f2ef5 to 3604ff5 (compare)
ASR Benchmark Results ✅

Status: All benchmarks passed

Parakeet v3 (multilingual)
Parakeet v2 (English-optimized)
Streaming (v3)
Streaming (v2)

Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming.
25 files per dataset • Test runtime: 9m58s • 03/17/2026, 09:21 PM EST
RTFx = Real-Time Factor (higher is better) • Calculated as: total audio duration ÷ total processing time
Expected RTFx on physical M1 hardware: ~28x (clean), ~25x (other)
Testing methodology follows the HuggingFace Open ASR Leaderboard.
PocketTTS Smoke Test ✅

Runtime: 0m39s

Note: PocketTTS uses CoreML MLState (macOS 15) KV cache + Mimi streaming state. The CI VM lacks a physical GPU, so audio quality may differ from Apple Silicon.
Force-pushed: 0c7098d to e41acf9 (compare)
End-to-end reference covering both stages of the model conversion pipeline (mobius conversion + FluidAudio integration), with checklists for each repository.
Force-pushed: e41acf9 to 47f37d1 (compare)
Offline VBx Pipeline Results

Speaker Diarization Performance (VBx Batch Mode): optimal clustering with the Hungarian algorithm for maximum accuracy
Offline VBx Pipeline Timing Breakdown: time spent in each stage of batch diarization
Speaker Diarization Research Comparison: offline VBx achieves competitive accuracy with batch processing

Pipeline Details:
🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 373.6s processing • Test runtime: 6m 29s • 03/17/2026, 06:38 PM EST
Follow-up to #391. Documents all 11 existing benchmark datasets (LibriSpeech, FLEURS, AISHELL-1, Buckeye, AMI-SDM, etc.) with their domains, sizes, formats, and download locations so contributors know what's already available before creating new datasets.
## Summary

- Follow-up to #391 — adds an **Existing benchmark datasets** table to `Documentation/ModelConversion.md`
- Documents all 11 datasets already available in FluidAudio (LibriSpeech, FLEURS, AISHELL-1, Earnings22, Buckeye, VOiCES, MUSAN, AMI-SDM, VoxConverse, CharsiuG2P, text normalization) with domain, size, format, and download location
- Notes that most CLI commands auto-download datasets, and explains when/how to add new ones
- Updates the checklist to reference the dataset table

## Context

PR #391 added the model conversion guide but didn't mention the existing benchmark datasets. Contributors and coding agents should use the existing datasets for benchmarking rather than creating new ones unless a domain gap exists.

## Test plan

- [x] Verify all dataset names, sizes, and paths match the current codebase
- [ ] Review table formatting renders correctly on GitHub
Speaker Diarization Benchmark Results

Speaker Diarization Performance: evaluating "who spoke when" detection accuracy
Diarization Pipeline Timing Breakdown: time spent in each stage of speaker diarization
Speaker Diarization Research Comparison: research baselines typically achieve 18-30% DER on standard datasets

Note: RTFx shown above is from the GitHub Actions runner. On Apple Silicon with ANE:

🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 39.6s diarization time • Test runtime: 4m 43s • 03/17/2026, 08:56 PM EST
Summary

`Documentation/ModelConversion.md`: an end-to-end guide for converting models to CoreML and integrating them into FluidAudio

Test plan
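The two integration touch points the review calls out (a `ModelNames.swift` entry and a `DownloadUtils.loadModels()` call) could be sketched roughly as below. The case names, repo identifiers, return type, and function signature are illustrative assumptions, not the actual FluidAudio API.

```swift
import CoreML

// Hypothetical sketch of the integration pattern the guide describes.
// Case names and repo identifiers are placeholders for illustration.
enum ModelNames: String {
    case parakeet = "parakeet-tdt-coreml"
    case myNewModel = "my-new-model-coreml"   // hypothetical new entry
}

enum DownloadUtils {
    // Assumed shape: download (if needed), compile, and load the CoreML
    // models for the given model name, keyed by file name.
    static func loadModels(_ name: ModelNames) async throws -> [String: MLModel] {
        // ... fetch from the model repo, compile .mlmodelc, load ...
        return [:]
    }
}

// Usage at the call site of a new manager:
// let models = try await DownloadUtils.loadModels(.myNewModel)
```

A new model thus needs exactly two additions on the FluidAudio side of the pipeline: the enum case that names the converted artifact, and a manager that loads it through the shared download path.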