Privacy-first audio transcription with speaker diarization. Entirely offline.
Transform recordings into detailed transcripts showing who said what and when, all on your Mac, with complete privacy.

Quick Start • Installation • Examples • Documentation
| Feature | LocalTranscribe | Cloud Services |
|---|---|---|
| Privacy | 100% offline processing | Data uploaded to third-party servers |
| Cost | Free forever | $10-50/month subscription |
| Speaker Identification | Automatic speaker detection | Often extra cost or unavailable |
| Speed (Apple Silicon) | Real-time to 2x audio length | Depends on upload/download speed |
| Quality | OpenAI Whisper models | Varies by provider |
| Data Ownership | All files stay on your machine | Depends on provider terms |
Perfect for: Researchers, podcasters, journalists, legal professionals, content creators: anyone who needs accurate transcripts with speaker labels and complete data privacy.
- **Complete Privacy** - All processing happens locally on your machine
- **Speaker Diarization** - Automatic detection of who spoke when
- **Speaker Labeling** - Replace speaker IDs with actual names
- **Guided Wizard** - Beginner-friendly interactive setup
- **Interactive File Browser** - Navigate folders and select files with arrow keys
- **Smart Token Management** - One-time HuggingFace token setup with validation
- **High Accuracy** - Powered by OpenAI's Whisper models (defaults to `medium`)
- **Apple Silicon Optimized** - Auto-detects and uses MLX on M1/M2/M3/M4 Macs
- **Simple CLI** - Zero commands needed; just run `localtranscribe`
- **Python SDK** - Integrate transcription into your applications
- **Batch Processing** - Process multiple files simultaneously
- **Multiple Formats** - Output as TXT, JSON, SRT, or Markdown
- **Intelligent Segment Processing** - 50-70% reduction in false speaker switches
- **Enhanced Speaker Mapping** - 30-40% better speaker attribution accuracy
- **Audio Quality Analysis** - Pre-processing quality assessment with SNR calculation
- **Quality Gates System** - Per-stage validation with actionable recommendations
- **Domain Dictionaries** - 360+ specialized terms across 8 domains (military, technical, business, medical, legal, academic, common, entities)
- **Acronym Expansion** - 180+ definitions with intelligent context-aware disambiguation
- **Context-Aware Matching** - spaCy NER for intelligent acronym disambiguation (IP, PR, AI, OR, PI)
- **High-Performance Matching** - FlashText integration for 10-100x faster dictionary lookups
- **Typo Tolerance** - RapidFuzz fuzzy matching for automatic typo correction
- **Auto-Download Models** - Automatic spaCy model management with user prompts
- **Real-Time Progress Tracking** - Live progress bars and time estimates during transcription
Package: pypi.org/project/localtranscribe
pip install localtranscribe

Speaker diarization requires a free HuggingFace account. The wizard will guide you through setup:
- Create account & get token: https://huggingface.co/settings/tokens
- Accept the model licenses (click "Agree" on each of the required pyannote models)
- Enter token when prompted - The wizard will:
- Validate your token format
- Auto-save it to a `.env` file
- Never ask again after successful setup
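The saved token lives in a plain `KEY=VALUE` file. As a minimal stdlib sketch of how such a `.env` file can be read (the helper name is hypothetical; how LocalTranscribe loads it internally is not shown here):

```python
import os
from pathlib import Path

def load_dotenv(path=".env"):
    """Load KEY=VALUE lines from a .env file into os.environ.

    Skips blank lines and comments; existing variables win.
    Illustrative only - not LocalTranscribe's actual loader.
    """
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

After `load_dotenv()`, `os.environ["HUGGINGFACE_TOKEN"]` holds the saved token.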
Manual setup (optional):
echo "HUGGINGFACE_TOKEN=hf_your_token_here" > .env

The Simplest Way (Recommended for Everyone!):
# Option 1: Browse for files interactively
localtranscribe
# Option 2: Provide file path directly
localtranscribe your-audio.mp3

Both methods start the guided wizard, which walks you through all options interactively. The interactive browser lets you navigate folders and select files with arrow keys. Perfect for beginners, fast for everyone!
Direct Mode (For Power Users):
localtranscribe process your-audio.mp3

Advanced with All Features:
localtranscribe process your-audio.mp3 --labels speakers.json --proofread

Done! Results appear in `./output/` with speaker labels, timestamps, and the full transcript.
# Basic installation
pip install localtranscribe
# For Apple Silicon optimization (recommended for M1/M2/M3/M4)
# (quotes keep the extras syntax safe in zsh)
pip install "localtranscribe[mlx]"

# For NVIDIA GPU support
pip install "localtranscribe[faster]"

# Install all optional dependencies
pip install "localtranscribe[all]"

# Clone repository
git clone https://github.com/aporb/LocalTranscribe.git
cd LocalTranscribe
# Create virtual environment
python3 -m venv .venv
source .venv/bin/activate
# Install in development mode
pip install -e .

localtranscribe doctor

This command checks your system configuration and reports any issues.
# Option 1: Interactive file browser
localtranscribe
# Option 2: Provide file path directly
localtranscribe interview.mp3

The wizard will guide you through:
- Interactive file selection (if no file provided)
- HuggingFace token setup with validation and auto-save
- Quality vs speed preferences (defaults to medium model)
- Speaker detection options
- Speaker labeling setup
- Automatic proofreading
- Output location
Note: The wizard runs automatically when you run `localtranscribe` or provide an audio file. Use `localtranscribe process` for direct mode.
# Smart defaults with minimal prompts
localtranscribe process meeting.mp3 --simple

Simple mode:
- Auto-detects speaker labels file if present
- Prompts for speaker count if unknown
- Asks about proofreading preferences
- Shows detailed progress
# Transcribe with automatic settings
localtranscribe process meeting.mp3
# Specify number of speakers for better accuracy
localtranscribe process interview.wav --speakers 2
# Use larger model for higher quality
localtranscribe process podcast.m4a --model medium
# Save to custom location
localtranscribe process audio.mp3 --output ./results/

# Create a speaker labels file (speakers.json):
{
"SPEAKER_00": "John Smith",
"SPEAKER_01": "Jane Doe"
}
# Apply labels during processing
localtranscribe process meeting.mp3 --labels speakers.json
# Save speaker IDs for later labeling
localtranscribe process meeting.mp3 --save-labels speakers.json

# Enable proofreading with default rules
localtranscribe process meeting.mp3 --proofread
# Use thorough proofreading
localtranscribe process meeting.mp3 --proofread --proofread-level thorough
# Custom proofreading rules
localtranscribe process meeting.mp3 --proofread --proofread-rules my-rules.json
# NEW in v3.1.1: Enable domain-specific dictionaries (360+ specialized terms)
localtranscribe process meeting.mp3 --proofread --domains technical business legal
# NEW in v3.1.1: Enable context-aware acronym expansion (180+ definitions)
localtranscribe process meeting.mp3 --proofread --expand-acronyms --context-aware
# Check NLP model status and download if needed
localtranscribe check-models
localtranscribe check-models --download en_core_web_sm

Proofreading fixes:
- Technical terms (API, JavaScript, Python, AWS, Docker, etc.)
- Business terms (CEO, KPI, B2B, ROI, etc.)
- Military terms (Captain, Colonel, battalion, etc.)
- Medical terms (procedures, medications, conditions)
- Common homophones (your/you're, their/there)
- Contractions and grammar
- Excessive repetitions
v3.1.1 Enhancements:
- Domain Dictionaries: 360+ specialized terms across 8 domains (military, technical, business, medical, legal, academic, common, entities)
- Acronym Expansion: 180+ definitions with intelligent context-aware disambiguation
- Context-Aware Matching: spaCy NER for intelligent acronym resolution (IP, PR, AI, OR, PI)
- High-Performance Matching: FlashText integration for 10-100x faster dictionary lookups
- Typo Tolerance: RapidFuzz fuzzy matching for automatic typo correction
- Auto-Download Models: Automatic spaCy model management with user prompts
- Multiple Formats: Parenthetical (for example, "API (Application Programming Interface)"), replacement, or footnote styles
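The high-performance matching mentioned above comes from FlashText, which replaces many dictionary terms in a single pass over the text rather than running one regex per term. A minimal pure-Python sketch of that idea (illustrative only, not the library's API; the dictionary entries are hypothetical):

```python
import re

# Hypothetical domain dictionary: misrecognized form -> canonical term
TERMS = {
    "java script": "JavaScript",
    "a p i": "API",
    "k p i": "KPI",
}

# Build one alternation so the text is scanned in a single pass;
# longer keys first so they win over shorter overlapping keys.
_pattern = re.compile(
    "|".join(re.escape(k) for k in sorted(TERMS, key=len, reverse=True)),
    re.IGNORECASE,
)

def correct(text: str) -> str:
    """Replace every dictionary term with its canonical form."""
    return _pattern.sub(lambda m: TERMS[m.group(0).lower()], text)

print(correct("Our java script a p i tracks every k p i"))
# Our JavaScript API tracks every KPI
```

The real FlashText implementation uses a trie instead of a compiled alternation, which is what makes it fast on dictionaries with hundreds of terms.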
# Process entire folder
localtranscribe batch ./audio-files/
# Process with multiple workers
localtranscribe batch ./recordings/ --workers 4
# With custom settings
localtranscribe batch ./files/ --model small --output ./transcripts/

# Skip speaker detection for faster processing
localtranscribe process lecture.mp3 --skip-diarization

# All major options together
localtranscribe process audio.mp3 \
    --model medium \
    --speakers 3 \
    --language en \
    --format txt json srt \
    --output ./results/ \
    --verbose

# --model:    tiny|base|small|medium|large
# --speakers: number of speakers (if known)
# --language: force a specific language
# --format:   output formats
# --output:   output directory
# --verbose:  show detailed progress

Basic Usage:
from localtranscribe import LocalTranscribe
# Initialize with options
lt = LocalTranscribe(
model_size="base",
num_speakers=2,
output_dir="./transcripts"
)
# Process single file
result = lt.process("meeting.mp3")
# Access results
print(f"Transcript: {result.transcript}")
print(f"Speakers: {result.num_speakers}")
print(f"Duration: {result.duration}s")
# Access detailed segments
for segment in result.segments:
print(f"[{segment.speaker}] {segment.text}")
# Batch processing
results = lt.process_batch("./audio-files/", max_workers=4)
print(f"Completed: {results.successful}/{results.total}")

NEW in v3.1 - Advanced Pipeline with Quality Features:
from localtranscribe.pipeline import PipelineOrchestrator
# Enable all quality enhancements
pipeline = PipelineOrchestrator(
audio_file="meeting.wav",
output_dir="./output",
# Phase 1: Segment Processing
enable_segment_processing=True,
use_speaker_regions=True,
# Phase 2: Audio Analysis & Quality Gates
enable_audio_analysis=True,
enable_quality_gates=True,
quality_report_path="./quality_report.txt",
# Phase 2: Enhanced Proofreading
enable_proofreading=True,
proofreading_domains=["technical", "business"],
enable_acronym_expansion=True,
verbose=True
)
result = pipeline.run()

NEW in v3.1 - Standalone Quality Analysis:
# Audio Quality Analysis
from localtranscribe.audio import AudioAnalyzer
analyzer = AudioAnalyzer(verbose=True)
analysis = analyzer.analyze("audio.wav")
print(f"Quality: {analysis.quality_level.value}")
print(f"SNR: {analysis.snr_db:.1f} dB")
print(f"Recommended Model: {analysis.recommended_whisper_model}")
# Quality Gates Assessment
from localtranscribe.quality import QualityGate, QualityThresholds
gate = QualityGate(thresholds=QualityThresholds(), verbose=True)
assessment = gate.assess_diarization_quality(diarization_result)
print(f"Score: {assessment.overall_score:.2f}")
print(f"Passed: {assessment.passed}")

LocalTranscribe generates multiple output files for different use cases:
| Format | File | Description |
|---|---|---|
| Markdown | `*_combined.md` | Formatted transcript with speaker labels and timestamps |
| Plain Text | `*_transcript.txt` | Simple text output for analysis |
| JSON | `*_transcript.json` | Structured data for programming |
| SRT | `*_transcript.srt` | Subtitle format for video |
| Diarization | `*_diarization.md` | Speaker timeline and statistics |
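For the SRT output in particular, cue timestamps follow the `HH:MM:SS,mmm` convention (comma before the milliseconds). A small sketch of how segment times in seconds map to SRT cues (illustrative only, not LocalTranscribe's actual writer):

```python
def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 5.2 -> '00:00:05,200'."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def srt_cue(index: int, start: float, end: float, text: str) -> str:
    """Build one numbered SRT cue block."""
    return f"{index}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n"

print(srt_cue(1, 0.0, 5.2, "Hello, welcome to the show."))
# 1
# 00:00:00,000 --> 00:00:05,200
# Hello, welcome to the show.
```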
Example Output:
# Combined Transcript
**Audio File:** interview.mp3
**Processing Date:** 2025-10-13 22:30:00
## SPEAKER_00
**Time:** [0.0s - 5.2s]
Hello, welcome to the show. Thanks for joining us today.
## SPEAKER_01
**Time:** [5.5s - 12.8s]
Thanks for having me. I'm excited to discuss our new project.

localtranscribe              # Interactive file browser + wizard
localtranscribe audio.mp3    # Automatically runs the wizard

| Command | Description | Example |
|---|---|---|
| (default) | Interactive file browser (no args) or wizard (with file) | `localtranscribe` or `localtranscribe audio.mp3` |
| `wizard` | Guided interactive setup (explicit) | `localtranscribe wizard audio.mp3` |
| `process` | Direct transcription without wizard | `localtranscribe process audio.mp3` |
| `batch` | Process multiple files | `localtranscribe batch ./folder/` |
| `doctor` | Verify system setup | `localtranscribe doctor` |
| `check-models` | Check NLP model status and download models | `localtranscribe check-models` |
| `label` | Replace speaker IDs with names | `localtranscribe label output.md` |
| `version` | Show version information | `localtranscribe version` |
| `config` | Manage configuration | `localtranscribe config show` |
Pro Tip: Just run `localtranscribe` to browse and select files interactively, or `localtranscribe audio.mp3` to transcribe directly!

Run `localtranscribe --help` or `localtranscribe <command> --help` for detailed options.
New in v3.1.1:
- Intelligent Segment Processing - Filters micro-segments, merges continuations (50-70% fewer false switches)
- Enhanced Speaker Mapping - Region-based context for better attribution (30-40% accuracy improvement)
- Audio Quality Analysis - Pre-processing SNR, quality assessment, parameter recommendations
- Quality Gates System - Per-stage validation with actionable recommendations
- Domain Dictionaries - 360+ specialized terms across 8 domains (military, technical, business, medical, legal, academic, common, entities)
- Acronym Expansion - 180+ definitions with intelligent context-aware disambiguation
- Context-Aware Matching - spaCy NER for intelligent acronym disambiguation (IP, PR, AI, OR, PI)
- High-Performance Matching - FlashText integration for 10-100x faster dictionary lookups
- Typo Tolerance - RapidFuzz fuzzy matching for automatic typo correction
- Auto-Download Models - Automatic spaCy model management with interactive user prompts
- Model Status CLI - `check-models` command to verify and download NLP models
- Real-Time Progress Tracking - Live progress bars (Faster-Whisper) and time estimates (MLX-Whisper) during transcription
- 20+ New Configuration Options - Fine-tune quality thresholds, enable context-aware features
- Quality Reports - Comprehensive quality assessment with severity indicators and recommendations
- ~4,500 Lines of Production Code - 12 new files, 15+ new dataclasses, 60+ new methods
- 100% Backward Compatible - All features are opt-in and configurable
New in v3.0.0:
- Wizard is now the default - just provide your audio file!
- `--simple` mode for the `process` command
- `--labels` and `--proofread` flags
- Automatic speaker labeling
- Intelligent proofreading with 100+ rules
Choose the right Whisper model for your needs:
| Model | Speed | Quality | RAM | Use Case |
|---|---|---|---|---|
| tiny | Fastest | Basic | 1GB | Quick drafts, testing |
| base | Fast | Good | 1GB | Quick transcription |
| small | Moderate | Better | 2GB | Longer recordings |
| medium | Moderate | Excellent | 5GB | Default - Best balance |
| large | Slow | Best | 10GB | Maximum accuracy |
Performance on M2 Mac with MLX (10-minute audio):
- `tiny`: ~30 seconds
- `base`: ~2 minutes
- `small`: ~5 minutes
- `medium`: ~7 minutes (default starting point)
- `large`: ~15 minutes
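Those timings imply a rough per-model processing rate. As a tiny illustrative estimator built from the M2/MLX numbers above (heavily hardware-dependent; an approximation only):

```python
# Minutes of processing per 10 minutes of audio, from the M2/MLX
# measurements above. Real times vary with hardware and audio quality.
MINUTES_PER_10_MIN_AUDIO = {
    "tiny": 0.5,
    "base": 2,
    "small": 5,
    "medium": 7,
    "large": 15,
}

def estimate_minutes(audio_minutes: float, model: str = "medium") -> float:
    """Rough processing-time estimate for a given audio length and model."""
    return audio_minutes / 10 * MINUTES_PER_10_MIN_AUDIO[model]

print(estimate_minutes(60, "medium"))  # ~42 minutes for a 1-hour recording
```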
Note: LocalTranscribe automatically uses MLX-Whisper on Apple Silicon Macs for optimal performance.
Recommended:
- Mac with Apple Silicon (M1/M2/M3/M4)
- 16GB RAM
- 10GB free disk space
- macOS 12.0 or later
Minimum:
- Any Mac with Python 3.9+
- 8GB RAM
- 5GB free disk space
- macOS 11.0 or later
Supported Audio Formats:
- Audio: MP3, WAV, OGG, M4A, FLAC, AAC, WMA, OPUS
- Video: MP4, MOV, AVI, MKV, WEBM (audio will be extracted)
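Before a batch run, it can help to pre-filter a folder down to files LocalTranscribe can ingest. A small sketch (the `transcribable` helper is hypothetical; the extension set mirrors the lists above):

```python
from pathlib import Path

# Extensions from the supported-formats lists above; video audio is extracted.
SUPPORTED = {
    ".mp3", ".wav", ".ogg", ".m4a", ".flac", ".aac", ".wma", ".opus",
    ".mp4", ".mov", ".avi", ".mkv", ".webm",
}

def transcribable(folder: str) -> list[Path]:
    """Return the files in `folder` whose extension is supported."""
    return sorted(
        p for p in Path(folder).iterdir()
        if p.is_file() and p.suffix.lower() in SUPPORTED
    )
```

Feeding only these paths to `localtranscribe batch` avoids errors on stray non-media files.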
LocalTranscribe uses a three-stage pipeline:
**Stage 1: Speaker Diarization**
- Analyzes audio waveform patterns
- Identifies distinct speakers
- Creates a precise speaker timeline
- Optimized for 2-10 speakers

**Stage 2: Transcription**
- Converts speech to text using OpenAI's Whisper
- Automatically detects the language
- Handles accents and background noise
- Creates timestamped segments
- Real-time progress tracking:
  - MLX-Whisper: shows audio duration and estimated completion time based on hardware benchmarks
  - Faster-Whisper: live progress bar updating as segments are processed
  - Eliminates long silent waits during transcription

**Stage 3: Combination**
- Aligns speaker labels with the transcript
- Matches timestamps accurately
- Formats output for readability
- Generates multiple export formats
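As a sketch of the combination step described above, one common approach is to give each transcript segment the speaker whose diarization region overlaps it most in time. The data shapes here are assumptions for illustration, not LocalTranscribe's internals:

```python
def overlap(a_start, a_end, b_start, b_end):
    """Length of the intersection of two time intervals, in seconds."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

def assign_speakers(regions, segments):
    """Label transcript segments by maximum overlap with speaker regions.

    regions:  [(start, end, speaker_id)] from diarization
    segments: [(start, end, text)] from transcription
    """
    labeled = []
    for s_start, s_end, text in segments:
        best = max(regions, key=lambda r: overlap(s_start, s_end, r[0], r[1]))
        labeled.append((best[2], text))
    return labeled

regions = [(0.0, 5.2, "SPEAKER_00"), (5.5, 12.8, "SPEAKER_01")]
segments = [(0.3, 5.0, "Hello, welcome to the show."),
            (5.6, 12.0, "Thanks for having me.")]
print(assign_speakers(regions, segments))
# [('SPEAKER_00', 'Hello, welcome to the show.'), ('SPEAKER_01', 'Thanks for having me.')]
```

Maximum-overlap assignment is robust to small timestamp disagreements between the diarization and transcription stages.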
Technologies:
- Whisper - State-of-the-art speech recognition
- MLX-Whisper - Apple Silicon optimization
- Pyannote.audio - Speaker diarization
- Typer - Modern CLI framework
- Rich - Beautiful terminal output
- SDK Reference - Python API documentation
- Troubleshooting Guide - Common issues and solutions
- Changelog - Version history and updates
- Contributing Guide - How to contribute
Command not found after installation:
# Ensure package is installed
pip install --upgrade localtranscribe
# If using virtual environment, activate it first
source .venv/bin/activate

HuggingFace authentication error:
# Verify token is correctly set
cat .env
# Should show: HUGGINGFACE_TOKEN=hf_...
# Make sure you accepted both model licenses

Slow processing:
# Use a faster model
localtranscribe process audio.mp3 --model tiny
# Skip diarization for single speaker
localtranscribe process audio.mp3 --skip-diarization

Run a system check:

localtranscribe doctor

This command diagnoses common setup issues and suggests fixes.
Full Troubleshooting Guide
- Critical Bug Fix - Fixed `name 'Span' is not defined` error in the combination stage
- Live Progress Tracking - Real-time progress bars for MLX-Whisper and Original Whisper during transcription
- Time Estimates - Shows elapsed time and estimated remaining time during processing
- Background Progress Updates - Non-blocking progress tracker updating every 0.5s
- Improved User Experience - No more silent waits during long transcriptions
- Type Hint Fixes - Proper deferred evaluation for optional dependencies
- Context-Aware Matching - spaCy NER for intelligent acronym disambiguation (IP, PR, AI, OR, PI)
- High-Performance Matching - FlashText integration for 10-100x faster dictionary lookups
- Typo Tolerance - RapidFuzz fuzzy matching for automatic typo correction (85% threshold)
- Auto-Download Models - Automatic spaCy model management with interactive user prompts
- Model Status CLI - New `check-models` command to verify and download NLP models
- Expanded Dictionaries - 360+ specialized terms across 8 domains (added Legal and Academic)
- Enhanced Acronyms - 180+ definitions with context-aware disambiguation
- Frequency Tracking - Usage statistics for intelligent expansion decisions
- 20+ Configuration Options - New context-aware and fuzzy matching parameters
- 100% Backward Compatible - All features are opt-in and configurable
- Intelligent Segment Processing - 50-70% reduction in false speaker switches
- Enhanced Speaker Mapping - 30-40% better speaker attribution accuracy
- Audio Quality Analysis - Pre-processing SNR and quality assessment
- Quality Gates System - Per-stage validation with actionable recommendations
- Domain Dictionaries - 260+ specialized terms (technical, business, military, medical)
- Acronym Expansion - 80+ definitions with intelligent context-aware expansion
- Real-Time Progress Tracking - Live progress bars and time estimates during transcription
- 15+ New Configuration Options - Fine-tune quality thresholds and processing
- Quality Reports - Comprehensive assessment with recommendations
- 100% Backward Compatible - All features are opt-in and configurable
- Interactive File Browser - Navigate folders and select files with arrow keys
- Smart Token Management - Inline HuggingFace token entry with validation
- Guided Wizard - Beginner-friendly interactive setup (now the default!)
- Auto-Proofreading - Fix 100+ common transcription errors
- Speaker Labeling - Integrated speaker name replacement
- Default Model - Changed to `medium` for better quality
- Auto MLX Detection - Automatically uses MLX-Whisper on Apple Silicon
- Updated package description and metadata
- Enhanced README with PyPI link
- Professional documentation polish
- Published to PyPI - install with `pip install localtranscribe`
- Fixed pyannote.audio 3.x API compatibility
- Updated documentation for model licenses
- Complete rewrite with modern CLI
- Python SDK for programmatic use
- Batch processing support
- System health checks with the `doctor` command
- Modular architecture
We welcome contributions! Here's how to get started:
- Check existing issues at github.com/aporb/LocalTranscribe/issues
- Fork the repository and create your feature branch
- Make your changes following the existing code style
- Add tests if applicable
- Submit a pull request with a clear description
Development Setup:
git clone https://github.com/aporb/LocalTranscribe.git
cd LocalTranscribe
python3 -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"

MIT License - Free for personal and commercial use.
See LICENSE for full details.
Need help?
- Run `localtranscribe doctor` to check your setup
- Check the Troubleshooting Guide
- Search existing issues
- Open a new issue with:
  - Output from `localtranscribe doctor`
  - Error message or unexpected behavior
  - Your system info (OS, Python version)
LocalTranscribe builds on excellent open-source work:
- OpenAI - Whisper speech recognition model
- Apple - MLX framework for Metal acceleration
- Pyannote team - Speaker diarization models
- HuggingFace - Model hosting and distribution
Star on GitHub • Report Bug • Request Feature
Made for privacy-conscious professionals who value data ownership.
Transform audio to text. Know who said what. Keep it private.