
ci: add Claude Code Action workflow #599

Merged
Alex-Wengg merged 1 commit into main from ci/claude-code-action on May 11, 2026

Conversation

@Alex-Wengg
Member

Summary

Adds .github/workflows/claude.yml so the repo can respond to @claude mentions in issues, issue comments, PR reviews, and PR review comments via anthropics/claude-code-action@v1.

Motivation: in PR #596 a reviewer posted `@claude review` and nothing happened, because no workflow was wired up. This PR fixes that for future reviews.

What it does

  • Triggers on issue_comment, pull_request_review_comment, pull_request_review, issues (opened/assigned)
  • Job runs only when the body/title contains @claude (cheap filter, prevents wasted runs)
  • Uses ANTHROPIC_API_KEY repo secret for auth
  • Minimal read permissions on contents/PRs/issues; id-token: write for OIDC
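A minimal sketch of what such a workflow could look like (the actual `claude.yml` is in the diff; the exact trigger types, the `contains()` filter expression, and the `anthropic_api_key` input name below are assumptions based on the bullets above):

```yaml
name: Claude Code
on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
  pull_request_review:
    types: [submitted]
  issues:
    types: [opened, assigned]

jobs:
  claude:
    # Cheap filter: only run when the triggering comment/review/issue body
    # mentions @claude (the real workflow may also check the issue title)
    if: contains(github.event.comment.body || github.event.review.body || github.event.issue.body || '', '@claude')
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: read
      issues: read
      id-token: write   # needed for OIDC
    steps:
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
```

The `if:` guard is evaluated before the job is scheduled, which is what keeps non-`@claude` events from burning runner minutes.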

Required configuration (repo settings)

Before this workflow can run, a maintainer needs to:

  1. Install the Claude GitHub App on FluidInference/FluidAudio
  2. Add an ANTHROPIC_API_KEY secret in repo Settings -> Secrets and variables -> Actions

Without those, the workflow file is inert: no failed runs, it simply never does anything.

Test plan

  • Maintainer installs the Claude GitHub App and sets ANTHROPIC_API_KEY
  • After merge, post @claude help on a throwaway issue and confirm the workflow fires
  • Confirm non-@claude comments do not trigger the job

🤖 Generated with Claude Code

Listens for @claude mentions in issue comments, PR review comments,
PR reviews, and issues, and runs anthropics/claude-code-action@v1.

Requires:
- Claude GitHub App installed on the repo
- ANTHROPIC_API_KEY secret configured in repo settings

Template: https://github.com/anthropics/claude-code-action
@Alex-Wengg Alex-Wengg merged commit ae1ef30 into main May 11, 2026
12 checks passed
@Alex-Wengg Alex-Wengg deleted the ci/claude-code-action branch May 11, 2026 15:05
@github-actions

Offline VBx Pipeline Results

Speaker Diarization Performance (VBx Batch Mode)

Optimal clustering with Hungarian algorithm for maximum accuracy

| Metric | Value | Target | Status | Description |
|--------|-------|--------|--------|-------------|
| DER | 10.4% | <20% | | Diarization Error Rate (lower is better) |
| RTFx | 10.22x | >1.0x | | Real-Time Factor (higher is faster) |

Offline VBx Pipeline Timing Breakdown

Time spent in each stage of batch diarization

| Stage | Time (s) | % | Description |
|-------|----------|---|-------------|
| Model Download | 11.897 | 11.6 | Fetching diarization models |
| Model Compile | 5.099 | 5.0 | CoreML compilation |
| Audio Load | 0.056 | 0.1 | Loading audio file |
| Segmentation | 25.042 | 24.4 | VAD + speech detection |
| Embedding | 102.333 | 99.6 | Speaker embedding extraction |
| Clustering (VBx) | 0.148 | 0.1 | Hungarian algorithm + VBx clustering |
| Total | 102.700 | 100 | Full VBx pipeline |

Speaker Diarization Research Comparison

Offline VBx achieves competitive accuracy with batch processing

| Method | DER | Mode | Description |
|--------|-----|------|-------------|
| FluidAudio (Offline) | 10.4% | VBx Batch | On-device CoreML with optimal clustering |
| FluidAudio (Streaming) | 17.7% | Chunk-based | First-occurrence speaker mapping |
| Research baseline | 18-30% | Various | Standard dataset performance |

Pipeline Details:

  • Mode: Offline VBx with Hungarian algorithm for optimal speaker-to-cluster assignment
  • Segmentation: VAD-based voice activity detection
  • Embeddings: WeSpeaker-compatible speaker embeddings
  • Clustering: PowerSet with VBx refinement
  • Accuracy: Higher than streaming due to optimal post-hoc mapping

🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 127.5s processing • Test runtime: 2m 10s • 05/11/2026, 11:14 AM EST

Alex-Wengg added a commit that referenced this pull request May 11, 2026
## Summary

Switches the Claude Code Action auth from `ANTHROPIC_API_KEY` to
`CLAUDE_CODE_OAUTH_TOKEN`, which uses a Claude Max/Pro subscription
instead of pay-per-token API billing.

The PR #599 workflow run failed with:
```
Environment variable validation failed:
- Either ANTHROPIC_API_KEY or CLAUDE_CODE_OAUTH_TOKEN is required
```

## Required setup (one-time, maintainer)

```bash
# Generate an OAuth token tied to your Claude account
claude setup-token

# Store it in repo secrets
gh secret set CLAUDE_CODE_OAUTH_TOKEN --repo FluidInference/FluidAudio
# (paste the token when prompted)
```

Verify:
```bash
gh secret list --repo FluidInference/FluidAudio
```

## Test plan

- [ ] Maintainer runs the two commands above to populate the secret
- [ ] After merge, post `@claude help` on a throwaway issue and
confirm the workflow runs without env-var errors
@github-actions

ASR Benchmark Results ✅

Status: All benchmarks passed

Parakeet v3 (multilingual)

| Dataset | WER Avg | WER Med | RTFx | Status |
|---------|---------|---------|------|--------|
| test-clean | 0.57% | 0.00% | 4.25x | |
| test-other | 1.35% | 0.00% | 3.36x | |

Parakeet v2 (English-optimized)

| Dataset | WER Avg | WER Med | RTFx | Status |
|---------|---------|---------|------|--------|
| test-clean | 0.80% | 0.00% | 3.69x | |
| test-other | 1.00% | 0.00% | 2.50x | |

Streaming (v3)

| Metric | Value | Description |
|--------|-------|-------------|
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.50x | Streaming real-time factor |
| Avg Chunk Time | 1.810s | Average time to process each chunk |
| Max Chunk Time | 2.764s | Maximum chunk processing time |
| First Token | 2.091s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming (v2)

| Metric | Value | Description |
|--------|-------|-------------|
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.47x | Streaming real-time factor |
| Avg Chunk Time | 1.890s | Average time to process each chunk |
| Max Chunk Time | 2.823s | Maximum chunk processing time |
| First Token | 1.823s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming

25 files per dataset • Test runtime: 8m31s • 05/11/2026, 11:20 AM EST

RTFx = Real-Time Factor (higher is better) • Calculated as: Total audio duration ÷ Total processing time
Processing time includes: Model inference on Apple Neural Engine, audio preprocessing, state resets between files, token-to-text conversion, and file I/O
Example: RTFx of 2.0x means 10 seconds of audio processed in 5 seconds (2x faster than real-time)
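The RTFx arithmetic from the note above can be checked in a few lines (the numbers are the illustrative ones from the example, not measured values):

```python
def rtfx(audio_duration_s: float, processing_time_s: float) -> float:
    """Real-Time Factor: total audio duration divided by total processing time."""
    return audio_duration_s / processing_time_s

# The example above: 10 s of audio processed in 5 s -> 2x faster than real-time
print(rtfx(10.0, 5.0))   # 2.0

# A value below 1.0x means processing is slower than real-time,
# as in the virtualized streaming runs
print(rtfx(5.0, 10.0))   # 0.5
```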

Expected RTFx Performance on Physical M1 Hardware:

• M1 Mac: ~28x (clean), ~25x (other)
• CI shows ~0.5-3x due to virtualization limitations

Testing methodology follows HuggingFace Open ASR Leaderboard

@github-actions

PocketTTS Smoke Test ✅

| Check | Result |
|-------|--------|
| Build | |
| Model download | |
| Model load | |
| Synthesis pipeline | |
| Output WAV | ✅ (146.3 KB) |

Runtime: 0m28s

Note: PocketTTS uses CoreML MLState (macOS 15) KV cache + Mimi streaming state. CI VM lacks physical GPU — audio quality and performance may differ from Apple Silicon.

@github-actions

VAD Benchmark Results

Performance Comparison

| Dataset | Accuracy | Precision | Recall | F1-Score | RTFx | Files |
|---------|----------|-----------|--------|----------|------|-------|
| MUSAN | 92.0% | 86.2% | 100.0% | 92.6% | 674.7x faster | 50 |
| VOiCES | 92.0% | 86.2% | 100.0% | 92.6% | 542.4x faster | 50 |

Dataset Details

  • MUSAN: Music, Speech, and Noise dataset - standard VAD evaluation
  • VOiCES: Voices Obscured in Complex Environmental Settings - tests robustness in real-world conditions

✅: Average F1-Score above 70%

@github-actions

Sortformer High-Latency Benchmark Results

ES2004a Performance (30.4s latency config)

| Metric | Value | Target | Status |
|--------|-------|--------|--------|
| DER | 30.3% | <35% | |
| Miss Rate | 28.2% | - | - |
| False Alarm | 0.9% | - | - |
| Speaker Error | 1.2% | - | - |
| RTFx | 9.1x | >1.0x | |
| Speakers | 4/4 | - | - |
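DER is the sum of the miss, false-alarm, and speaker-error rates, so the table can be sanity-checked by hand (a quick illustration, not part of the benchmark harness):

```python
# DER = miss rate + false alarm rate + speaker error rate
# Percentages taken from the table above
miss, false_alarm, speaker_error = 28.2, 0.9, 1.2

der = miss + false_alarm + speaker_error
print(f"DER = {der:.1f}%")  # DER = 30.3%, matching the reported value
```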

Sortformer High-Latency • ES2004a • Runtime: 3m 41s • 2026-05-11T15:34:18.369Z

@github-actions

Kokoro TTS Smoke Test ✅

Check Result
Build
Model download
Model load
Synthesis pipeline
Output WAV ✅ (634.8 KB)

Runtime: 0m51s

Note: Kokoro TTS uses CoreML flow matching + Vocos vocoder. CI VM lacks physical ANE — performance may differ from Apple Silicon.

@github-actions

Qwen3-ASR int8 Smoke Test ✅

Check Result
Build
Model download
Model load
Transcription pipeline
Decoder size 571 MB (vs 1.1 GB f32)

Performance Metrics

| Metric | CI Value | Expected on Apple Silicon |
|--------|----------|---------------------------|
| Median RTFx | 0.04x | ~2.5x |
| Overall RTFx | 0.04x | ~2.5x |

Runtime: 5m10s

Note: CI VM lacks physical GPU — CoreML MLState (macOS 15) KV cache produces degraded results on virtualized runners. On Apple Silicon: ~1.3% WER / 2.5x RTFx.

@github-actions

Speaker Diarization Benchmark Results

Speaker Diarization Performance

Evaluating "who spoke when" detection accuracy

| Metric | Value | Target | Status | Description |
|--------|-------|--------|--------|-------------|
| DER | 15.1% | <30% | | Diarization Error Rate (lower is better) |
| JER | 24.9% | <25% | | Jaccard Error Rate |
| RTFx | 23.69x | >1.0x | | Real-Time Factor (higher is faster) |

Diarization Pipeline Timing Breakdown

Time spent in each stage of speaker diarization

| Stage | Time (s) | % | Description |
|-------|----------|---|-------------|
| Model Download | 8.641 | 19.5 | Fetching diarization models |
| Model Compile | 3.703 | 8.4 | CoreML compilation |
| Audio Load | 0.072 | 0.2 | Loading audio file |
| Segmentation | 13.276 | 30.0 | Detecting speech regions |
| Embedding | 22.127 | 50.0 | Extracting speaker voices |
| Clustering | 8.851 | 20.0 | Grouping same speakers |
| Total | 44.286 | 100 | Full pipeline |

Speaker Diarization Research Comparison

Research baselines typically achieve 18-30% DER on standard datasets

| Method | DER | Notes |
|--------|-----|-------|
| FluidAudio | 15.1% | On-device CoreML |
| Research baseline | 18-30% | Standard dataset performance |

Note: RTFx shown above is from GitHub Actions runner. On Apple Silicon with ANE:

  • M2 MacBook Air (2022): runs at ~150x RTFx
  • Performance scales with Apple Neural Engine capabilities

🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 44.3s diarization time • Test runtime: 3m 17s • 05/11/2026, 11:49 AM EST

@github-actions

Parakeet EOU Benchmark Results ✅

Status: Benchmark passed
Chunk Size: 320ms
Files Tested: 100/100

Performance Metrics

| Metric | Value | Description |
|--------|-------|-------------|
| WER (Avg) | 7.03% | Average Word Error Rate |
| WER (Med) | 4.17% | Median Word Error Rate |
| RTFx | 7.57x | Real-time factor (higher = faster) |
| Total Audio | 470.6s | Total audio duration processed |
| Total Time | 66.5s | Total processing time |

Streaming Metrics

| Metric | Value | Description |
|--------|-------|-------------|
| Avg Chunk Time | 0.066s | Average chunk processing time |
| Max Chunk Time | 0.133s | Maximum chunk processing time |
| EOU Detections | 0 | Total End-of-Utterance detections |

Test runtime: 1m25s • 05/11/2026, 11:51 AM EST

RTFx = Real-Time Factor (higher is better) • Processing includes: Model inference, audio preprocessing, state management, and file I/O

