
Refactor: Rename Repo.parakeetCtcJa to Repo.parakeetJa for accuracy#520

Merged
Alex-Wengg merged 2 commits into main from refactor/rename-parakeet-ja-repo
Apr 12, 2026

Conversation

@Alex-Wengg (Member) commented Apr 12, 2026

Problem

The enum name Repo.parakeetCtcJa is misleading because it implies the repository only contains CTC models, but it actually contains both CTC and TDT models.

Verified Repository Contents

FluidInference/parakeet-ctc-0.6b-ja-coreml contains:

  • ✅ CTC models: CtcDecoder.mlmodelc
  • ✅ TDT v2 models: Decoderv2.mlmodelc + Jointerv2.mlmodelc
  • Shared: Preprocessor.mlmodelc, Encoder.mlmodelc, vocab.json

Solution

Renamed Repo.parakeetCtcJa → Repo.parakeetJa to accurately reflect that it is the Japanese models repository containing both decoder variants.

Changes

  • ModelNames.swift: Renamed enum case from .parakeetCtcJa to .parakeetJa
  • AsrModels.swift: Updated .ctcJa and .tdtJa to use .parakeetJa
  • CtcJaModels.swift: Updated repository reference
  • TdtJaModels.swift: Updated repository reference and added comment
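For reference, the rename amounts to something like the following. This is a hypothetical sketch, not the real ModelNames.swift: the actual enum shape, raw value, and the `folderName` helper are assumptions based on the properties mentioned in this PR.

```swift
// Hypothetical sketch only - the real ModelNames.swift may differ.
enum Repo: String {
    // Renamed from parakeetCtcJa: the repo holds both CTC and TDT models.
    case parakeetJa = "FluidInference/parakeet-ctc-0.6b-ja-coreml"

    // Illustrative helper: local cache folder derived from the repo id.
    var folderName: String {
        rawValue.split(separator: "/").last.map(String.init) ?? rawValue
    }
}
```

Because the repo id stays the same and only the case name changes, call sites update mechanically from `.parakeetCtcJa` to `.parakeetJa`.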

Testing

  • ✅ Build succeeds
  • ✅ Both CTC and TDT Japanese managers now use the correct repository name

Commit Messages

The HuggingFace repo 'parakeet-ctc-0.6b-ja-coreml' contains BOTH CTC
and TDT models, so calling it 'parakeetCtcJa' is misleading.

Renamed to 'parakeetJa' to accurately reflect that it's the Japanese
models repository containing both decoder variants.

Repository contents verified:
- CtcDecoder.mlmodelc (CTC)
- Decoderv2.mlmodelc (TDT)
- Jointerv2.mlmodelc (TDT)
- Preprocessor.mlmodelc
- Encoder.mlmodelc

Changes:
- ModelNames.swift: Renamed Repo.parakeetCtcJa → Repo.parakeetJa
- AsrModels.swift: Updated .ctcJa and .tdtJa to use .parakeetJa
- CtcJaModels.swift: Updated repository reference
- TdtJaModels.swift: Updated repository reference

The HuggingFace repo 'parakeet-tdt-0.6b-ja-coreml' doesn't exist (404).
Both CTC and TDT Japanese models are in 'parakeet-ctc-0.6b-ja-coreml'
which is now correctly referenced as Repo.parakeetJa.

Removed all references to the non-existent parakeetTdtJa:
- Enum case definition
- folderName property case
- shortName property case
- getRequiredModelNames() case

github-actions bot commented Apr 12, 2026

VAD Benchmark Results

Performance Comparison

| Dataset | Accuracy | Precision | Recall | F1-Score | RTFx | Files |
|---|---|---|---|---|---|---|
| MUSAN | 92.0% | 86.2% | 100.0% | 92.6% | 349.9x faster | 50 |
| VOiCES | 92.0% | 86.2% | 100.0% | 92.6% | 430.2x faster | 50 |

Dataset Details

  • MUSAN: Music, Speech, and Noise dataset - standard VAD evaluation
  • VOiCES: Voices Obscured in Complex Environmental Settings - tests robustness in real-world conditions

✅: Average F1-Score above 70%

@Alex-Wengg
Member Author

Updated: Removed Dead Code

Added a second commit that removes the non-existent Repo.parakeetTdtJa enum case entirely.

Since the HuggingFace repo parakeet-tdt-0.6b-ja-coreml doesn't exist (verified returns 404), keeping it in the code serves no purpose and could confuse future developers.

Changes in second commit:

  • Removed parakeetTdtJa enum case
  • Removed from folderName property
  • Removed from shortName property
  • Removed from getRequiredModelNames() function

✅ Build still succeeds
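As a side note, Swift's exhaustive switch checking is what makes this dead-code removal safe: once the enum case is deleted, the compiler rejects any leftover branch that still matches it. A toy illustration (the names below are illustrative, not the real ModelNames.swift):

```swift
// Toy example: after deleting an enum case, every switch over the enum
// must drop its branch for that case or the code no longer compiles.
enum Repo { case parakeet, parakeetJa }

func shortName(_ repo: Repo) -> String {
    switch repo {
    case .parakeet:   return "parakeet"
    case .parakeetJa: return "parakeet-ja"
    // A `case .parakeetTdtJa:` branch here would now be a compile error.
    }
}
```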

@Alex-Wengg Alex-Wengg merged commit 044bb0b into main Apr 12, 2026
13 checks passed
@Alex-Wengg Alex-Wengg deleted the refactor/rename-parakeet-ja-repo branch April 12, 2026 04:21
@devin-ai-integration bot (Contributor) left a comment


Devin Review found 1 potential issue.


Comment on lines +675 to 676:

    case .parakeetJa:
        return ModelNames.CTCJa.requiredModels
Contributor


🔴 getRequiredModelNames for .parakeetJa only returns CTC models, breaking TDT Japanese model download and loading

After merging .parakeetCtcJa and .parakeetTdtJa into a single .parakeetJa repo, the getRequiredModelNames function at Sources/FluidAudio/ModelNames.swift:675-676 only returns ModelNames.CTCJa.requiredModels (Preprocessor.mlmodelc, Encoder.mlmodelc, CtcDecoder.mlmodelc). The TDT-specific models (Decoderv2.mlmodelc, Jointerv2.mlmodelc from ModelNames.TDTJa.requiredModels) are never included.

This causes two failures when TdtJaModels.downloadAndLoad() is called (via ParakeetLanguageModels<TdtJaConfig>):

  1. Download is skipped when CTC models are cached: DownloadUtils.loadModelsOnce at Sources/FluidAudio/DownloadUtils.swift:191-195 checks getRequiredModelNames(.parakeetJa) (only CTC models) to decide if download is needed. If CTC models exist, it skips download even though Decoderv2.mlmodelc and Jointerv2.mlmodelc are missing.

  2. Download omits TDT models on fresh install: DownloadUtils.downloadRepo at Sources/FluidAudio/DownloadUtils.swift:279-290 uses the same getRequiredModelNames to build download patterns. Only CTC model patterns are generated, so TDT model files are never fetched from HuggingFace.

In both scenarios, loading fails with a file-not-found error when the loop at DownloadUtils.swift:211-213 tries to access Decoderv2.mlmodelc.

Prompt for agents
In Sources/FluidAudio/ModelNames.swift, the getRequiredModelNames function for .parakeetJa only returns ModelNames.CTCJa.requiredModels but the repo now contains both CTC and TDT models. When TdtJaModels tries to download or load via DownloadUtils.loadModelsOnce, the cache check and download patterns are based on getRequiredModelNames which only knows about CTC models.

The fix should ensure that getRequiredModelNames for .parakeetJa returns the union of both CTCJa.requiredModels and TDTJa.requiredModels, so that both model sets are downloaded and their existence is properly verified. For example:

  case .parakeetJa:
      return ModelNames.CTCJa.requiredModels.union(ModelNames.TDTJa.requiredModels)

This ensures DownloadUtils.downloadRepo fetches all model files (including Decoderv2.mlmodelc and Jointerv2.mlmodelc) and that the cache-existence check in loadModelsOnce correctly detects when TDT models are missing.


@github-actions

Sortformer High-Latency Benchmark Results

ES2004a Performance (30.4s latency config)

| Metric | Value | Target | Status |
|---|---|---|---|
| DER | 33.4% | <35% | |
| Miss Rate | 24.4% | - | - |
| False Alarm | 0.2% | - | - |
| Speaker Error | 8.8% | - | - |
| RTFx | 13.4x | >1.0x | |
| Speakers | 4/4 | - | - |

Sortformer High-Latency • ES2004a • Runtime: 2m 42s • 2026-04-12T04:29:08.261Z


github-actions bot commented Apr 12, 2026

Parakeet EOU Benchmark Results ✅

Status: Benchmark passed
Chunk Size: 320ms
Files Tested: 100/100

Performance Metrics

| Metric | Value | Description |
|---|---|---|
| WER (Avg) | 7.03% | Average Word Error Rate |
| WER (Med) | 4.17% | Median Word Error Rate |
| RTFx | 8.15x | Real-time factor (higher = faster) |
| Total Audio | 470.6s | Total audio duration processed |
| Total Time | 59.6s | Total processing time |

Streaming Metrics

| Metric | Value | Description |
|---|---|---|
| Avg Chunk Time | 0.060s | Average chunk processing time |
| Max Chunk Time | 0.119s | Maximum chunk processing time |
| EOU Detections | 0 | Total End-of-Utterance detections |

Test runtime: 1m23s • 04/12/2026, 12:42 AM EST

RTFx = Real-Time Factor (higher is better) • Processing includes: Model inference, audio preprocessing, state management, and file I/O


github-actions bot commented Apr 12, 2026

ASR Benchmark Results ✅

Status: All benchmarks passed

Parakeet v3 (multilingual)

| Dataset | WER Avg | WER Med | RTFx | Status |
|---|---|---|---|---|
| test-clean | 0.57% | 0.00% | 5.69x | |
| test-other | 1.19% | 0.00% | 3.69x | |

Parakeet v2 (English-optimized)

| Dataset | WER Avg | WER Med | RTFx | Status |
|---|---|---|---|---|
| test-clean | 0.80% | 0.00% | 6.33x | |
| test-other | 1.00% | 0.00% | 3.56x | |

Streaming (v3)

| Metric | Value | Description |
|---|---|---|
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.66x | Streaming real-time factor |
| Avg Chunk Time | 1.377s | Average time to process each chunk |
| Max Chunk Time | 1.462s | Maximum chunk processing time |
| First Token | 1.641s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming (v2)

| Metric | Value | Description |
|---|---|---|
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.65x | Streaming real-time factor |
| Avg Chunk Time | 1.383s | Average time to process each chunk |
| Max Chunk Time | 1.517s | Maximum chunk processing time |
| First Token | 1.375s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming

25 files per dataset • Test runtime: 5m36s • 04/12/2026, 01:19 AM EST

RTFx = Real-Time Factor (higher is better) • Calculated as: Total audio duration ÷ Total processing time
Processing time includes: Model inference on Apple Neural Engine, audio preprocessing, state resets between files, token-to-text conversion, and file I/O
Example: RTFx of 2.0x means 10 seconds of audio processed in 5 seconds (2x faster than real-time)

Expected RTFx Performance on Physical M1 Hardware:

• M1 Mac: ~28x (clean), ~25x (other)
• CI shows ~0.5-3x due to virtualization limitations

Testing methodology follows HuggingFace Open ASR Leaderboard


github-actions bot commented Apr 12, 2026

Qwen3-ASR int8 Smoke Test ✅

| Check | Result |
|---|---|
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Transcription pipeline | ✅ |
| Decoder size | 571 MB (vs 1.1 GB f32) |

Performance Metrics

| Metric | CI Value | Expected on Apple Silicon |
|---|---|---|
| Median RTFx | 0.04x | ~2.5x |
| Overall RTFx | 0.04x | ~2.5x |

Runtime: 5m21s

Note: CI VM lacks physical GPU — CoreML MLState (macOS 15) KV cache produces degraded results on virtualized runners. On Apple Silicon: ~1.3% WER / 2.5x RTFx.

@github-actions

Speaker Diarization Benchmark Results

Speaker Diarization Performance

Evaluating "who spoke when" detection accuracy

| Metric | Value | Target | Status | Description |
|---|---|---|---|---|
| DER | 15.1% | <30% | | Diarization Error Rate (lower is better) |
| JER | 24.9% | <25% | | Jaccard Error Rate |
| RTFx | 22.47x | >1.0x | | Real-Time Factor (higher is faster) |

Diarization Pipeline Timing Breakdown

Time spent in each stage of speaker diarization

| Stage | Time (s) | % | Description |
|---|---|---|---|
| Model Download | 10.682 | 22.9 | Fetching diarization models |
| Model Compile | 4.578 | 9.8 | CoreML compilation |
| Audio Load | 0.057 | 0.1 | Loading audio file |
| Segmentation | 13.997 | 30.0 | Detecting speech regions |
| Embedding | 23.328 | 49.9 | Extracting speaker voices |
| Clustering | 9.331 | 20.0 | Grouping same speakers |
| Total | 46.705 | 100 | Full pipeline |

Speaker Diarization Research Comparison

Research baselines typically achieve 18-30% DER on standard datasets

| Method | DER | Notes |
|---|---|---|
| FluidAudio | 15.1% | On-device CoreML |
| Research baseline | 18-30% | Standard dataset performance |

Note: RTFx shown above is from GitHub Actions runner. On Apple Silicon with ANE:

  • M2 MacBook Air (2022): Runs at 150 RTFx real-time
  • Performance scales with Apple Neural Engine capabilities

🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 46.7s diarization time • Test runtime: 3m 3s • 04/12/2026, 12:52 AM EST


github-actions bot commented Apr 12, 2026

Kokoro TTS Smoke Test ✅

| Check | Result |
|---|---|
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Synthesis pipeline | ✅ |
| Output WAV | ✅ (634.8 KB) |

Runtime: 0m57s

Note: Kokoro TTS uses CoreML flow matching + Vocos vocoder. CI VM lacks physical ANE — performance may differ from Apple Silicon.


github-actions bot commented Apr 12, 2026

PocketTTS Smoke Test ✅

| Check | Result |
|---|---|
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Synthesis pipeline | ✅ |
| Output WAV | ✅ (195.0 KB) |

Runtime: 0m34s

Note: PocketTTS uses CoreML MLState (macOS 15) KV cache + Mimi streaming state. CI VM lacks physical GPU — audio quality and performance may differ from Apple Silicon.

@github-actions

✅ Japanese ASR Benchmark Results (CTC)

Status: Passed

| Metric | Value |
|---|---|
| CER | 9.94% |
| Samples | 50 |
| Avg RTFx | 2.5x |
| Decoder | CTC |

✅ Benchmark completed successfully. The TDT Japanese hybrid model (CTC preprocessor/encoder + TDT decoder/joint) is working correctly.

View benchmark log

1 similar comment

@github-actions

Offline VBx Pipeline Results

Speaker Diarization Performance (VBx Batch Mode)

Optimal clustering with Hungarian algorithm for maximum accuracy

| Metric | Value | Target | Status | Description |
|---|---|---|---|---|
| DER | 14.5% | <20% | | Diarization Error Rate (lower is better) |
| RTFx | 5.29x | >1.0x | | Real-Time Factor (higher is faster) |

Offline VBx Pipeline Timing Breakdown

Time spent in each stage of batch diarization

| Stage | Time (s) | % | Description |
|---|---|---|---|
| Model Download | 11.029 | 5.6 | Fetching diarization models |
| Model Compile | 4.727 | 2.4 | CoreML compilation |
| Audio Load | 0.032 | 0.0 | Loading audio file |
| Segmentation | 21.140 | 10.7 | VAD + speech detection |
| Embedding | 197.209 | 99.5 | Speaker embedding extraction |
| Clustering (VBx) | 0.875 | 0.4 | Hungarian algorithm + VBx clustering |
| Total | 198.282 | 100 | Full VBx pipeline |

Speaker Diarization Research Comparison

Offline VBx achieves competitive accuracy with batch processing

| Method | DER | Mode | Description |
|---|---|---|---|
| FluidAudio (Offline) | 14.5% | VBx Batch | On-device CoreML with optimal clustering |
| FluidAudio (Streaming) | 17.7% | Chunk-based | First-occurrence speaker mapping |
| Research baseline | 18-30% | Various | Standard dataset performance |

Pipeline Details:

  • Mode: Offline VBx with Hungarian algorithm for optimal speaker-to-cluster assignment
  • Segmentation: VAD-based voice activity detection
  • Embeddings: WeSpeaker-compatible speaker embeddings
  • Clustering: PowerSet with VBx refinement
  • Accuracy: Higher than streaming due to optimal post-hoc mapping

🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 219.2s processing • Test runtime: 3m 42s • 04/12/2026, 01:13 AM EST

@Alex-Wengg
Member Author

✅ Fixed: TDT Japanese Model Downloads Now Work (Issue #517)

@Josscii found that TDT Japanese models weren't downloading correctly because getRequiredModelNames() was only returning CTC models for the .parakeetJa repo.

The Problem:
When TdtJaModels tried to download from Repo.parakeetJa, it only fetched:

  • ✅ Preprocessor.mlmodelc
  • ✅ Encoder.mlmodelc
  • ✅ CtcDecoder.mlmodelc

but was missing the TDT-specific files:

  • ❌ Decoderv2.mlmodelc
  • ❌ Jointerv2.mlmodelc

The Fix (Commit 3):

case .parakeetJa:
    // Repo contains BOTH CTC and TDT models - return union of both sets
    return ModelNames.CTCJa.requiredModels.union(ModelNames.TDTJa.requiredModels)

Now downloads all 5 models:

  • ✅ Preprocessor.mlmodelc (shared)
  • ✅ Encoder.mlmodelc (shared)
  • ✅ CtcDecoder.mlmodelc (CTC)
  • ✅ Decoderv2.mlmodelc (TDT)
  • ✅ Jointerv2.mlmodelc (TDT)
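The union behaviour can be sanity-checked in isolation. The stand-in sets below mirror the file lists in this PR; they are assumptions about what `ModelNames.CTCJa.requiredModels` and `ModelNames.TDTJa.requiredModels` contain, not the real definitions.

```swift
// Stand-in sets mirroring the required-model lists described in this PR.
let ctcJaRequired: Set<String> = [
    "Preprocessor.mlmodelc", "Encoder.mlmodelc", "CtcDecoder.mlmodelc",
]
let tdtJaRequired: Set<String> = [
    "Preprocessor.mlmodelc", "Encoder.mlmodelc",
    "Decoderv2.mlmodelc", "Jointerv2.mlmodelc",
]

// union() deduplicates the shared preprocessor/encoder, yielding 5 files,
// so both the download patterns and the cache-existence check cover TDT.
let parakeetJaRequired = ctcJaRequired.union(tdtJaRequired)
```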

Fixes #517

