AI-powered meeting transcription and summarization that runs entirely on your device using locally hosted small language models. Privacy-first approach and zero service costs.
- Local transcription using OpenAI Whisper
- AI summarization with Ollama models
- Multiple AI models - Choose from 4 models optimized for different use cases
- Privacy-first - no cloud dependencies
- macOS desktop app with intuitive interface
Transcription Models (Whisper):
- small: Default model - good accuracy and speed on Apple Silicon
- base: Faster but lower accuracy, for basic meetings
- medium: High accuracy for important meetings (slower)
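To get a feel for the accuracy/speed trade-off, the same Whisper sizes can be exercised outside the app via the openai-whisper CLI (a sketch; it assumes the openai-whisper pip package from the local setup below, and `meeting.wav` is a placeholder file name):

```shell
# Transcribe one recording with a chosen Whisper size (sketch; the
# `whisper` command comes with the openai-whisper pip package)
transcribe() {
  if command -v whisper >/dev/null 2>&1; then
    whisper "$2" --model "$1" --output_format txt --output_dir "out-$1"
  else
    echo "whisper CLI not installed"
  fi
}
transcribe small meeting.wav
```

Running it once per size (`base`, `small`, `medium`) on the same file makes the trade-off concrete for your hardware.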
Summarization Models (Ollama):
- llama3.2:3b (2GB): Fastest option for quick meetings (default)
- gemma3:4b (2.5GB): Lightweight and efficient
- qwen3:8b (4.7GB): Excellent at structured output and action items
- deepseek-r1:8b (4.7GB): Strong reasoning and analysis capabilities
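These models are served through Ollama's local HTTP API, so a summarization call roughly reduces to the sketch below. The endpoint and fields (`model`, `prompt`, `stream`) are Ollama's documented `/api/generate` interface; the prompt wording is illustrative, not the app's actual template, and JSON escaping of the input is skipped for brevity:

```shell
# Minimal Ollama summarization call (sketch; prompt text is illustrative
# and the input is not JSON-escaped, so keep it quote-free)
summarize() {
  curl -fsS http://localhost:11434/api/generate \
    -d "{\"model\":\"llama3.2:3b\",\"stream\":false,\"prompt\":\"Summarize this meeting transcript: $1\"}" \
    2>/dev/null \
    || echo '{"error":"ollama not reachable"}'
}
summarize "Alice and Bob agreed to ship the release on Friday."
```

Swapping the `model` field is all it takes to route the same request to a different summarizer.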
Switching Models:
- Click the 🔧 AI Settings icon in the app
- Select your preferred model
- Models download automatically when selected
⚠️ Note: Downloads will pause any active summarization
- Custom summarization templates
- Speaker Diarisation
Download the latest release for your Mac:
- Apple Silicon (M1/M2/M3/M4)
- Intel Macs - performance is limited due to the lack of dedicated AI inference hardware on these older chips
Since StenoAI is not code-signed with an Apple Developer certificate, you'll need to bypass macOS security warnings:
- Download the DMG → you may see "StenoAI is damaged and can't be opened"
- Right-click the DMG → select "Open" → click "Open" in the dialog
- Drag StenoAI to Applications folder
- If the app won't launch, run this command in Terminal:
xattr -cr /Applications/StenoAI.app
- Right-click StenoAI in Applications → select "Open" → click "Open"
The app will then work normally on subsequent launches.
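To confirm the quarantine flag is actually gone after the `xattr -cr` step, you can query it directly (a quick sketch; `com.apple.quarantine` is Apple's standard attribute name):

```shell
# Print the quarantine attribute if it is still set; otherwise report clear
check_quarantine() {
  xattr -p com.apple.quarantine /Applications/StenoAI.app 2>/dev/null \
    && echo "still quarantined" \
    || echo "quarantine cleared"
}
check_quarantine
```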
If you don't want to install the DMG, you can also run StenoAI locally (see below).
- Python 3.8+
- Node.js 18+
- Homebrew
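Before cloning, it can save time to confirm the prerequisites are on your PATH. A minimal preflight sketch (the tool names are the standard binaries; adjust if yours differ):

```shell
# Report which prerequisite tools are installed and which are missing
check_tools() {
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "$tool: ok"
    else
      echo "$tool: MISSING (install it before continuing)"
    fi
  done
}
check_tools python3 node brew
```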
git clone https://github.com/ruzin/stenoai.git
cd stenoai
# Backend
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
# Install Ollama
brew install ollama
ollama serve &
ollama pull llama3.2:3b
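# Optional sanity check (a sketch): confirm the Ollama server is up and the
# model just pulled is listed; 11434 is Ollama's default port
curl -fsS http://localhost:11434/api/tags 2>/dev/null | grep -q 'llama3.2' \
  && echo "ollama ready" || echo "ollama not reachable yet"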
# Install ffmpeg (required for audio processing)
brew install ffmpeg
# Frontend
cd app
npm install
npm start

# Build
cd app
npm run build

# Release commands below are also run from app/
cd app
# Patch release (bug fixes): 0.0.5 β 0.0.6
npm version patch
git add package.json package-lock.json
git commit -m "Version bump to $(node -p "require('./package.json').version")"
git push
git tag v$(node -p "require('./package.json').version")
git push origin v$(node -p "require('./package.json').version")
# Minor release (new features): 0.0.6 β 0.1.0
npm version minor
git add package.json package-lock.json
git commit -m "Version bump to $(node -p "require('./package.json').version")"
git push
git tag v$(node -p "require('./package.json').version")
git push origin v$(node -p "require('./package.json').version")
# Major release (breaking changes): 0.0.6 β 1.0.0
npm version major
git add package.json package-lock.json
git commit -m "Version bump to $(node -p "require('./package.json').version")"
git push
git tag v$(node -p "require('./package.json').version")
git push origin v$(node -p "require('./package.json').version")

What happens:
- `npm version` updates package.json and package-lock.json locally
- The manual commit ensures the version change is saved to git
- `git push` sends the version commit to GitHub
- `git tag` creates the version tag locally
- `git push origin <tag>` triggers the GitHub Actions workflow
- The workflow automatically builds DMGs for Intel & Apple Silicon
- It creates a GitHub release with downloadable assets
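The three release flows above differ only in the bump keyword, so they can be collapsed into one helper. A sketch, assuming `npm version`'s standard `--no-git-tag-version` flag so the commit and tag stay manual as in the steps above:

```shell
# One release path for patch/minor/major (sketch; run from app/)
release() {
  bump="${1:?usage: release patch|minor|major}"
  npm version "$bump" --no-git-tag-version   # bump package.json + lock only
  ver="$(node -p "require('./package.json').version")"
  git add package.json package-lock.json
  git commit -m "Version bump to $ver"
  git push
  git tag "v$ver"
  git push origin "v$ver"                    # tag push triggers the workflow
}
# e.g. release patch
```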
stenoai/
├── app/           # Electron desktop app
├── src/           # Python backend
├── website/       # Marketing site
├── recordings/    # Audio files
├── transcripts/   # Text output
└── output/        # Summaries
StenoAI includes a built-in debug panel for troubleshooting issues:
In-App Debug Panel:
- Launch StenoAI
- Click the 🔨 hammer icon (next to settings)
- The debug panel shows real-time logs of all operations
Terminal Logging (Advanced): For detailed system-level logs, run the app from Terminal:
# Launch StenoAI with full logging
/Applications/StenoAI.app/Contents/MacOS/StenoAI

This displays comprehensive logs, including:
- Python subprocess output
- Whisper transcription details
- Ollama API communication
- HTTP requests and responses
- Error stack traces
- Performance timing
System Console Logs: For system-level debugging:
# View recent StenoAI-related logs
log show --last 10m --predicate 'process CONTAINS "StenoAI" OR eventMessage CONTAINS "ollama"' --info
# Monitor live logs
log stream --predicate 'eventMessage CONTAINS "ollama" OR process CONTAINS "StenoAI"' --level info

Common Issues:
- Recording stops early: Check microphone permissions and available disk space
- "Processing failed": Usually Ollama service or model issues - check terminal logs
- Empty transcripts: Whisper couldn't detect speech - verify audio input levels
- Slow processing: Normal for longer recordings - Ollama processing is CPU-intensive, especially on older Intel Macs
- User Data: ~/Library/Application Support/stenoai/
- Recordings: ~/Library/Application Support/stenoai/recordings/
- Transcripts: ~/Library/Application Support/stenoai/transcripts/
- Summaries: ~/Library/Application Support/stenoai/output/
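Long recordings accumulate in these directories, so a quick disk-usage check can be handy (a sketch using the paths above):

```shell
# Summarize disk usage of StenoAI's data directories
stenoai_usage() {
  du -sh "$HOME/Library/Application Support/stenoai"/* 2>/dev/null \
    || echo "no StenoAI data found"
}
stenoai_usage
```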
MIT
