Get latest Product updates
Website • LinkedIn • Meetily Discord • Privacy-First AI • Reddit
A privacy-first AI meeting assistant that captures, transcribes, and summarizes meetings entirely on your infrastructure. Built by expert AI engineers passionate about data sovereignty and open source solutions. Perfect for enterprises that need advanced meeting intelligence without compromising on privacy, compliance, or control.
For enterprise version: Sign up for early access
For Partnerships and Custom AI development: Let's chat
- Overview
- The Privacy Problem
- Features
- System Architecture
- Quick Start Guide
- Prerequisites
- Setup Instructions
- Whisper Model Selection
- LLM Integration
- Troubleshooting
- Developer Console
- Uninstallation
- Enterprise Solutions
- Partnerships & Referrals
- Development Guidelines
- Contributing
- License
- About Our Team
- Acknowledgments
- Star History
A privacy-first AI meeting assistant that captures, transcribes, and summarizes meetings entirely on your infrastructure. Built by expert AI engineers passionate about data sovereignty and open source solutions. Perfect for professionals and enterprises that need advanced meeting intelligence without compromising privacy or control.
While there are many meeting transcription tools available, this solution stands out by offering:
- Privacy First: All processing happens locally on your device
- Cost Effective: Uses open-source AI models instead of expensive APIs
- Flexible: Works offline, supports multiple meeting platforms
- Customizable: Self-host and modify for your specific needs
- Intelligent: Built-in knowledge graph for semantic search across meetings
Meeting AI tools create significant privacy and compliance risks across all sectors:
- $4.4M average cost per data breach (IBM 2024)
- €5.88 billion in GDPR fines issued by 2025
- 400+ unlawful recording cases filed in California this year
Whether you're a defense consultant, enterprise executive, legal professional, or healthcare provider, your sensitive discussions shouldn't live on servers you don't control. Cloud meeting tools promise convenience but deliver privacy nightmares with unclear data storage practices and potential unauthorized access.
Meetily solves this: Complete data sovereignty on your infrastructure, zero vendor lock-in, full control over your sensitive conversations.
✅ Modern, responsive UI with real-time updates
✅ Real-time audio capture (microphone + system audio)
✅ Live transcription using locally-running Whisper
✅ Local processing for privacy
✅ Packaged the app for macOS and Windows
✅ Rich text editor for notes
🚧 Export to Markdown/PDF/HTML
🚧 Obsidian Integration
🚧 Speaker diarization
Choose your setup method based on your needs:
Best for: Regular users wanting optimal performance
Time: 10-15 minutes
System Requirements: 8GB+ RAM, 4GB+ disk space
- Frontend: Download and run meetily-frontend_0.0.5_x64-setup.exe
- Backend: Download the backend zip from releases, extract it, run Get-ChildItem -Path . -Recurse | Unblock-File, then .\start_with_output.ps1
For safety and to maintain proper user permissions for the frontend app:
- Go to Latest Releases
- Download the file ending with x64-setup.exe
- Important: Before running, right-click the file → Properties → check Unblock at the bottom → OK
- Double-click the installer to run it
- If Windows shows a security warning:
  - Click More info and choose Run anyway, or
  - Follow the permission dialog prompts
- Follow the installation wizard
✅ Success Check: You should see the Meetily application window open successfully when launched.
- Complete Setup (Recommended):
# Install both frontend + backend
brew tap zackriya-solutions/meetily
brew install --cask meetily

# Start the backend server
meetily-server --language en --model medium
- Open Meetily from Applications folder
Best for: Developers, quick testing, or multi-environment deployment
Time: 5-10 minutes
System Requirements: 16GB+ RAM (8GB minimum for Docker), Docker Desktop
# Navigate to backend directory
cd backend
# Windows (PowerShell)
.\build-docker.ps1 cpu
.\run-docker.ps1 start -Interactive
# macOS/Linux (Bash)
./build-docker.sh cpu
./run-docker.sh start --interactive
After setup, verify everything works:
- Whisper Server: Visit http://localhost:8178 (should show API interface)
- Backend API: Visit http://localhost:5167/docs (should show API documentation)
- Frontend App: Open Meetily application and test microphone access
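If you prefer a scripted check, the sketch below (plain Python standard library; not part of Meetily, and the script name is made up) simply probes the two local endpoints listed above:

# check_meetily.py — hypothetical helper, not shipped with Meetily.
# Probes the default local endpoints documented above; adjust the ports if you changed them.
import urllib.request
import urllib.error

ENDPOINTS = {
    "Whisper server": "http://localhost:8178",
    "Backend API docs": "http://localhost:5167/docs",
}

for name, url in ENDPOINTS.items():
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(f"[OK]   {name}: {url} (HTTP {resp.status})")
    except (urllib.error.URLError, OSError) as exc:
        print(f"[FAIL] {name}: {url} ({exc})")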
- Windows Defender blocking installer? → See Windows Defender Troubleshooting below
- Can't access localhost:8178 or 5167? → Check if backend is running and ports are available
- "Permission denied" errors? → Run
chmod +x on script files (macOS/Linux) or check the execution policy (Windows)
- Docker containers crashing? → Increase Docker RAM allocation to 12GB+ and check available disk space
- Audio not working? → Grant microphone permissions to the app in system settings
👉 For detailed troubleshooting, see Troubleshooting Section
- Audio Capture Service
  - Real-time microphone/system audio capture
  - Audio preprocessing pipeline
  - Built with Rust (experimental) and Python
- Transcription Engine
  - Whisper.cpp for local transcription
  - Supports multiple model sizes (tiny to large)
  - GPU-accelerated processing
- LLM Orchestrator
  - Unified interface for multiple providers
  - Automatic fallback handling
  - Chunk processing with overlap (see the sketch below)
  - Model configuration
- Data Services
  - ChromaDB: Vector store for transcript embeddings
  - SQLite: Process tracking and metadata storage
- Frontend: Tauri app + Next.js (packaged executables)
- Backend: Python FastAPI:
- Transcript workers
- LLM inference
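To make the orchestrator's chunk-with-overlap idea concrete, here is a minimal sketch (the function name, chunk sizes, and summarize step are illustrative assumptions, not Meetily's actual implementation):

# Illustrative only: split a long transcript into overlapping chunks so an LLM with a
# limited context window does not lose sentences at chunk boundaries. The sizes below
# are made-up defaults, not Meetily's real configuration.
def chunk_with_overlap(transcript: str, chunk_size: int = 2000, overlap: int = 200) -> list[str]:
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(transcript), step):
        chunk = transcript[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(transcript):
            break
    return chunks

# Each chunk would then be summarized by the configured LLM provider (see LLM Integration
# below) and the partial summaries merged into the final meeting notes.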
- RAM: 8GB (16GB+ recommended)
- Storage: 4GB free space
- CPU: 4+ cores
- OS: Windows 10/11, macOS 10.15+, or Ubuntu 18.04+
- RAM: 16GB+ (for large Whisper models)
- Storage: 10GB+ free space
- CPU: 8+ cores or Apple Silicon Mac
- GPU: NVIDIA GPU with CUDA (optional, for faster processing)
Component | Windows | macOS | Purpose |
---|---|---|---|
Python | 3.9+ (python.org) | brew install python | Backend runtime |
Node.js | 18+ LTS (nodejs.org) | brew install node | Frontend build |
Git | (git-scm.com) | Pre-installed | Code download |
FFmpeg | winget install FFmpeg | brew install ffmpeg | Audio processing |
- Docker Desktop (docker.com)
- 16GB+ RAM allocated to Docker
- 4+ CPU cores allocated to Docker
# Install Visual Studio Build Tools (required for Whisper.cpp compilation)
# Download from: https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2019
# Install Xcode Command Line Tools
xcode-select --install
# Install Homebrew (if not already installed)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
sudo apt-get update
sudo apt-get install build-essential cmake git ffmpeg python3 python3-pip nodejs npm
- Ollama (ollama.com) - For local AI models
- API Keys - For Claude (Anthropic) or Groq services
⏱️ Estimated Time: 10-15 minutes total
⏱️ Time: ~3-5 minutes
Manual Download (Recommended)
For safety and to maintain proper user permissions:
- Go to Latest Releases
- Download the file ending with x64-setup.exe
- Important: Before running, right-click the file → Properties → check Unblock at the bottom → OK
- Double-click the installer to run it
- If Windows shows a security warning:
  - Click More info and choose Run anyway, or
  - Follow the permission dialog prompts
- Follow the installation wizard
- The application will be available on your desktop
✅ Success Check: You should see the Meetily application window open successfully when launched.
Alternative: MSI Installer (Less likely to be blocked)
- Go to Latest Releases
- Download the file ending with x64_en-US.msi
- Double-click the MSI file to run it
- Follow the installation wizard to complete the setup
- The application will be installed and available on your desktop
Provide necessary permissions for audio capture and microphone access.
⏱️ Time: ~5-10 minutes
Step 2: Install and Start the Backend
📦 Option 1: Pre-built Release (Recommended - Easiest)
The simplest way to get started with the backend is to download the pre-built release:
- Download the backend:
  - From the same releases page
  - Download the backend zip file (e.g., meetily_backend.zip)
  - Extract the zip to a folder like C:\meetily_backend\
- Prepare backend files:
  - Open PowerShell (search for it in the Start menu)
  - Navigate to your extracted backend folder: cd C:\meetily_backend
  - Unblock all files (Windows security requirement): Get-ChildItem -Path . -Recurse | Unblock-File
- Start the backend services: .\start_with_output.ps1
  This script will:
  - Guide you through Whisper model selection (recommended: base or medium)
  - Ask for your language preference (default: English)
  - Download the selected model automatically
  - Start both the Whisper server (port 8178) and the Meeting app (port 5167)
What happens during startup:
- Model Selection: Choose from tiny (fastest, basic accuracy) to large (slowest, best accuracy)
- Language Setup: Select your preferred language for transcription
- Auto-download: Selected models are downloaded automatically (~150MB to 1.5GB depending on model)
- Service Launch: Both transcription and meeting services start automatically
✅ Success Verification:
- Check services are running:
- Open browser and visit http://localhost:8178 (should show Whisper API interface)
- Visit http://localhost:5167/docs (should show Meeting app API documentation)
- Test the application:
- Launch Meetily from desktop/Start menu
- Grant microphone permissions when prompted
- You should see the main interface ready to record meetings
🐳 Option 2: Docker (Alternative - Easier Dependency Management)
Docker provides easy setup with automatic dependency management, though it's slower than the pre-built release:
# Clone the repository and enter the backend directory
cd ~/Downloads
git clone https://github.com/Zackriya-Solutions/meeting-minutes
cd meeting-minutes/backend
# Build and start using Docker (CPU version)
.\build-docker.ps1 cpu
.\run-docker.ps1 start -Interactive
Prerequisites for Docker:
- Docker Desktop installed (docker.com)
- 8GB+ RAM allocated to Docker
- Internet connection for model downloads
✅ Success Check: Docker will automatically handle dependencies and you should see both Whisper server (port 8178) and Meeting app (port 5167) start successfully.
🛠️ Option 3: Local Build (Best Performance)
Local building provides the best performance but requires installing all dependencies manually. Choose this if you want optimal speed and don't mind the extra setup steps.
Click on the image to see installation video
Step 1: Install Dependencies
- Python 3.9+ (with pip)
- Visual Studio Build Tools (C++ workload)
- CMake
- Git
- Visual Studio Redistributables
Open PowerShell as administrator and run the dependency installer:
cd ~/Downloads
git clone https://github.com/Zackriya-Solutions/meeting-minutes
cd meeting-minutes/backend
Set-ExecutionPolicy Bypass -Scope Process -Force
.\install_dependancies_for_windows.ps1
The script will install:
- Chocolatey (package manager)
- Python 3.11 (if not already installed)
- Git, CMake, Visual Studio Build Tools
- Visual Studio Redistributables
- Required development tools
Once installation is complete, restart your terminal before proceeding.
Step 2: Build Whisper
Enter the following commands to build the backend:
cd meeting-minutes/backend
.\build_whisper.cmd
If the build fails, run the command again:
.\build_whisper.cmd
The build process will:
- Update git submodules (whisper.cpp)
- Compile whisper.cpp with server support
- Create Python virtual environment
- Install Python dependencies
- Download the specified Whisper model
Step 3: Start the Backend
Finally, when the installation is successful, run the backend using:
.\start_with_output.ps1
✅ Success Check: You should see both Whisper server (port 8178) and Meeting app (port 5167) start successfully with log messages indicating they're running.
- Warning: an existing Chocolatey installation is detected
  To address this, either use the currently installed Chocolatey version or remove it with:
  rm C:\ProgramData\chocolatey
- Error: ./start_with_output.ps1 shows a security error
  After making sure the file is unblocked, run:
  Set-ExecutionPolicy Bypass -Scope Process -Force
  .\start_with_output.ps1
- Docker Desktop (Windows/Mac) or Docker Engine (Linux)
- 16GB+ RAM (8GB minimum allocated to Docker)
- 4+ CPU cores recommended
- For GPU: NVIDIA drivers + nvidia-container-toolkit (Windows/Linux only)
# Navigate to backend directory
cd backend
# Build and start services
.\build-docker.ps1 cpu # Build CPU version
.\run-docker.ps1 start -Interactive # Interactive setup (recommended)
# Navigate to backend directory
cd backend
# Build and start services
./build-docker.sh cpu # Build CPU version
./run-docker.sh start --interactive # Interactive setup (recommended)
- Whisper Server: http://localhost:8178
- Meeting App: http://localhost:5167 (with API docs at /docs)
# GPU acceleration (Windows/Linux only)
.\build-docker.ps1 gpu # Windows
./build-docker.sh gpu # Linux
# Custom configuration
.\run-docker.ps1 start -Model large-v3 -Language es -Detach
./run-docker.sh start --model large-v3 --language es --detach
# Check status and logs
.\run-docker.ps1 status # Windows
./run-docker.sh status # macOS/Linux
# Stop services
.\run-docker.ps1 stop # Windows
./run-docker.sh stop # macOS/Linux
⏱️ Estimated Time: 5-10 minutes total
Option 1: Using Homebrew (Recommended) - Complete Setup ⏱️ Time: ~5-7 minutes
Note: This single command installs both the frontend app and backend server.
# Install Meetily (frontend + backend)
brew tap zackriya-solutions/meetily
brew install --cask meetily
# Start the backend server
meetily-server --language en --model medium
How to use after installation:
- Run meetily-server in a terminal (keep it running)
- Open Meetily from the Applications folder or Spotlight
- Grant microphone and screen recording permissions when prompted
✅ Success Check: Meetily app should open and you should be able to start recording meetings immediately.
To update existing installation:
# Update Homebrew and get latest package information
brew update
# Update to latest version
brew upgrade --cask meetily
brew upgrade meetily-backend
⚠️ Data Backup Warning: You are upgrading from Meetily 0.0.4 to 0.0.5. This upgrade will automatically migrate your data to a new persistent location, but it's recommended to backup your data first.

Current Data Location (Version 0.0.4):
- Database: /opt/homebrew/Cellar/meetily-backend/0.0.4/backend/meeting_minutes.db

New Persistent Location (Version 0.0.5+):
- Database: /opt/homebrew/var/meetily/meeting_minutes.db
What Happens During Upgrade:
- ✅ Your data will be automatically migrated to the new persistent location
- ✅ Data will survive future upgrades
- ✅ The old data in the Cellar directory will be cleaned up
Backup Recommendation: The upgrade is designed to prevent data loss, but it's always better to back up your data before proceeding.
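If you want a scripted backup, here is a minimal sketch (standard library only; the source path is the 0.0.4 location shown above, so adjust it if yours differs, and stop meetily-server before copying):

# backup_meetily_db.py — hypothetical helper; copies the Meetily SQLite database to a
# timestamped file in your home directory before upgrading.
import shutil
import time
from pathlib import Path

# Default 0.0.4 Homebrew location documented above; change if your install differs.
src = Path("/opt/homebrew/Cellar/meetily-backend/0.0.4/backend/meeting_minutes.db")
dst = Path.home() / f"meetily_backup_{time.strftime('%Y%m%d_%H%M%S')}.db"

if src.exists():
    shutil.copy2(src, dst)
    print(f"Backed up {src} -> {dst}")
else:
    print(f"Database not found at {src}; nothing to back up")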
Option 2: Manual Installation ⏱️ Time: ~8-12 minutes
- Download the latest dmg_darwin_arch64.zip file
- Extract the file
- Double-click the .dmg file inside the extracted folder
- Drag the application to your Applications folder
- Remove the quarantine attribute:
  xattr -c /Applications/meetily-frontend.app
- Grant necessary permissions for audio capture and microphone access
- Important: You'll need to install the backend separately (see Manual Backend Setup below)
Option 1: Using Homebrew Backend Only ⏱️ Time: ~3-5 minutes
# Install just the backend (if you manually installed frontend)
brew tap zackriya-solutions/meetily
brew install meetily-backend
# Start the backend server
meetily-server --language en --model medium
To update existing backend installation:
# Update Homebrew and get latest package information
brew update
# Update to latest version
brew upgrade meetily-backend
⚠️ Data Backup Warning: You are upgrading from Meetily 0.0.4 to 0.0.5. This upgrade will automatically migrate your data to a new persistent location, but it's recommended to backup your data first.

Current Data Location (Version 0.0.4):
- Database: /opt/homebrew/Cellar/meetily-backend/0.0.4/backend/meeting_minutes.db

New Persistent Location (Version 0.0.5+):
- Database: /opt/homebrew/var/meetily/meeting_minutes.db
What Happens During Upgrade:
- ✅ Your data will be automatically migrated to the new persistent location
- ✅ Data will survive future upgrades
- ✅ The old data in the Cellar directory will be cleaned up
Backup Recommendation: The upgrade is designed to prevent data loss, but it's always better to back up your data before proceeding (see the backup sketch above).
Option 2: Complete Manual Setup ⏱️ Time: ~10-15 minutes
# Clone the repository
git clone https://github.com/Zackriya-Solutions/meeting-minutes.git
cd meeting-minutes/backend
# Create and activate virtual environment
python -m venv venv
source venv/bin/activate
# Install dependencies
pip install -r requirements.txt
# Build dependencies
chmod +x build_whisper.sh
./build_whisper.sh
# Start backend servers
./clean_start_backend.sh
# Navigate to frontend directory
cd frontend
# Give execute permissions to clean_build.sh
chmod +x clean_build.sh
# run clean_build.sh
./clean_build.sh
When setting up the backend (either via Homebrew, manual installation, or Docker), you can choose from various Whisper models based on your needs:
Model | Size | Accuracy | Speed | Best For |
---|---|---|---|---|
tiny | ~39 MB | Basic | Fastest | Testing, low resources |
base | ~142 MB | Good | Fast | General use (recommended) |
small | ~244 MB | Better | Medium | Better accuracy needed |
medium | ~769 MB | High | Slow | High accuracy requirements |
large-v3 | ~1550 MB | Best | Slowest | Maximum accuracy |
macOS (Metal acceleration):
- 8 GB RAM: small
- 16 GB RAM: medium
- 32 GB+ RAM: large-v3
Windows/Linux:
- 8 GB RAM: base or small
- 16 GB RAM: medium
- 32 GB+ RAM: large-v3
- Standard models (balance of accuracy and speed):
  - tiny, base, small, medium, large-v1, large-v2, large-v3, large-v3-turbo
- English-optimized models (faster for English content):
  - tiny.en, base.en, small.en, medium.en
- Quantized models (reduced size, slightly lower quality):
  - *-q5_1 (5-bit quantized), *-q8_0 (8-bit quantized)
  - Example: tiny-q5_1, base-q5_1, small-q5_1, medium-q5_0

Recommendation: Start with the base model for general use, or base.en if you're only transcribing English content.
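With the Homebrew backend, for example, you pass the chosen model name to the start command shown earlier (meetily-server --language en --model medium), swapping in the model you picked; the Docker and Windows scripts prompt for the model interactively.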
- Smaller LLMs can hallucinate, which makes summarization quality poor; please use a model larger than 32B parameters
- The backend build process requires CMake, a C++ compiler, and other build tools, which makes it harder to build from source
- The backend build process requires Python 3.10 or newer
- The frontend build process requires Node.js
For those interested in using GPU for faster Whisper inference:
Windows/Linux GPU Setup:
- Modify build_whisper.cmd:
  - Locate line 55 in the build_whisper.cmd file
  - Replace it with:
    cmake .. -DBUILD_SHARED_LIBS=OFF -DWHISPER_BUILD_TESTS=OFF -DWHISPER_BUILD_SERVER=ON -DGGML_CUDA=1
- Clean Rebuild Requirement:
  - If you have previously compiled whisper.cpp for CPU inference, a clean rebuild is essential
  - Create a new directory, git clone Meetily into this new folder, then execute the build script
  - This ensures all components are compiled with GPU support from scratch
- CUDA Toolkit Installation:
  - Verify that the CUDA Toolkit is correctly installed on your system
  - This toolkit provides the necessary libraries and tools for CUDA development
- Troubleshooting CMake Errors:
  - If errors persist, refer to this Stack Overflow post
  - Copy required files to the Visual Studio folder if needed
For detailed GPU support discussion, see Issue #126
The backend supports multiple LLM providers through a unified interface. Current implementations include:
- Anthropic (Claude models)
- Groq (Llama 3.2 90B)
- Ollama (local models that support function calling)
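For illustration, a unified interface with automatic fallback might look like the sketch below (class and method names are invented for this example and do not mirror Meetily's internal code):

# Illustrative sketch of a provider-agnostic interface with automatic fallback.
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    name: str

    @abstractmethod
    def summarize(self, text: str) -> str:
        """Return a summary of the given transcript chunk."""


class ProviderChain:
    """Try providers in order (e.g. Anthropic -> Groq -> Ollama) until one succeeds."""

    def __init__(self, providers: list[LLMProvider]):
        self.providers = providers

    def summarize(self, text: str) -> str:
        errors = []
        for provider in self.providers:
            try:
                return provider.summarize(text)
            except Exception as exc:  # real code would catch provider-specific errors
                errors.append(f"{provider.name}: {exc}")
        raise RuntimeError("All LLM providers failed: " + "; ".join(errors))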
Common issues and solutions organized by setup method:
# Stop services
./run-docker.sh stop # or .\run-docker.ps1 stop
# Check port usage
netstat -an | grep :8178
lsof -i :8178 # macOS/Linux
- Enable WSL2 integration in Docker Desktop
- Install nvidia-container-toolkit
- Verify with:
.\run-docker.ps1 gpu-test
# Manual download
./run-docker.sh models download base.en
# or
.\run-docker.ps1 models download base.en
If you see "Dropped old audio chunk X due to queue overflow" messages:
- Increase Docker Resources (most important):
  - Memory: 8GB minimum (12GB+ recommended)
  - CPUs: 4+ cores recommended
  - Disk: 20GB+ available space
- Use a smaller Whisper model:
  ./run-docker.sh start --model base --detach
- Check container resource usage:
  docker stats
If Windows Defender or antivirus software blocks the installer with "virus or potentially unwanted software" error:
- Download the installer from Latest Releases
- Right-click the downloaded .exe file → Properties
- Check the Unblock checkbox at the bottom → OK
- Double-click the installer to run it
- Follow the installation prompts
- Open Windows Security → Virus & threat protection
- Under Virus & threat protection settings, click Manage settings
- Scroll to Exclusions and click Add or remove exclusions
- Add the downloaded installer file as an exclusion
- Run the installer manually
If Windows Defender continues to block:
- Use the MSI installer instead (often less flagged): download *x64_en-US.msi from releases
- Or use the manual backend installation only and access it via a web browser at http://localhost:5167
Why this happens: New software releases may trigger false positives in antivirus software until they build trust/reputation.
# CMake not found - install Visual Studio Build Tools
# PowerShell execution blocked:
Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope Process
# Compilation errors
brew install cmake llvm libomp
export CC=/opt/homebrew/bin/clang
export CXX=/opt/homebrew/bin/clang++
# Permission denied
chmod +x build_whisper.sh
chmod +x clean_start_backend.sh
# Port conflicts
lsof -i :5167 # Find process using port
kill -9 PID # Kill process
- Check if ports 8178 (Whisper) and 5167 (Backend) are available
- Verify all dependencies are installed
- Check logs for specific error messages
- Ensure sufficient system resources (8GB+ RAM recommended)
If you encounter issues with the Whisper model:
# Try a different model size
meetily-download-model small
# Verify model installation
ls -la $(brew --prefix)/opt/meetily-backend/backend/whisper-server-package/models/
If the server fails to start:
- Check if ports 8178 and 5167 are available:
  lsof -i :8178
  lsof -i :5167
- Verify that FFmpeg is installed correctly:
  which ffmpeg
  ffmpeg -version
- Check the logs for specific error messages when running meetily-server
- Try running the Whisper server manually:
  cd $(brew --prefix)/opt/meetily-backend/backend/whisper-server-package/
  ./run-server.sh --model models/ggml-medium.bin
If the frontend application doesn't connect to the backend:
- Ensure the backend server is running (meetily-server)
- Check if the application can access localhost:5167
- Restart the application after starting the backend
If the application fails to launch:
# Clear quarantine attributes
xattr -cr /Applications/meetily-frontend.app
Build Docker images with GPU support and cross-platform compatibility.
Usage:
# Build Types
cpu, gpu, macos, both, test-gpu
# Options
-Registry/-r REGISTRY # Docker registry
-Push/-p # Push to registry
-Tag/-t TAG # Custom tag
-Platforms PLATFORMS # Target platforms
-BuildArgs ARGS # Build arguments
-NoCache/--no-cache # Build without cache
-DryRun/--dry-run # Show commands only
Examples:
# Basic builds
.\build-docker.ps1 cpu
./build-docker.sh gpu
# Multi-platform with registry
.\build-docker.ps1 both -Registry "ghcr.io/user" -Push
./build-docker.sh cpu --platforms "linux/amd64,linux/arm64" --push
Complete Docker deployment manager with interactive setup.
Commands:
start, stop, restart, logs, status, shell, clean, build, models, gpu-test, setup-db, compose
Start Options:
-Model/-m MODEL # Whisper model (default: base.en)
-Port/-p PORT # Whisper port (default: 8178)
-AppPort/--app-port # Meeting app port (default: 5167)
-Gpu/-g/--gpu # Force GPU mode
-Cpu/-c/--cpu # Force CPU mode
-Language/--language # Language code (default: auto)
-Translate/--translate # Enable translation
-Diarize/--diarize # Enable diarization
-Detach/-d/--detach # Run in background
-Interactive/-i # Interactive setup
Examples:
# Interactive setup
.\run-docker.ps1 start -Interactive
./run-docker.sh start --interactive
# Advanced configuration
.\run-docker.ps1 start -Model large-v3 -Gpu -Language es -Detach
./run-docker.sh start --model base --translate --diarize --detach
# Management
.\run-docker.ps1 logs -Service whisper -Follow
./run-docker.sh logs --service app --follow --lines 100
Service URLs:
- Whisper Server: http://localhost:8178 (transcription service)
- Meeting App: http://localhost:5167 (AI-powered meeting management)
- API Documentation: http://localhost:5167/docs
The developer console provides real-time logging and debugging information for Meetily. It's particularly useful for troubleshooting issues and monitoring application behavior.
When running in development mode, the console is always visible:
pnpm tauri dev
All logs appear in the terminal where you run this command.
- Navigate to Settings in the app
- Scroll to the Developer section
- Use the Developer Console toggle to show/hide the console
- Windows: Controls the console window visibility
- macOS: Opens Terminal with filtered app logs
macOS:
# View live logs
log stream --process meetily-frontend --level info --style compact
# View historical logs (last hour)
log show --process meetily-frontend --last 1h
Windows:
# Run the executable directly to see console output
./target/release/meetily-frontend.exe
The console displays:
- Application startup and initialization logs
- Recording start/stop events
- Real-time transcription progress
- API connection status
- Error messages and stack traces
- Debug information (when RUST_LOG=info is set)
The console is helpful for:
- Debugging audio issues: See which audio devices are detected and used
- Monitoring transcription: Track progress and identify bottlenecks
- Troubleshooting connectivity: Verify API endpoints and connection status
- Performance analysis: Monitor resource usage and processing times
- Error diagnosis: Get detailed error messages and context
Windows:
- In release builds, the console window is hidden by default
- Use the UI toggle or run from terminal to see console output
- Console can be shown/hidden at runtime without restarting
macOS:
- Uses the system's unified logging
- Console opens in Terminal.app with filtered logs
- Logs persist in the system and can be viewed later
To completely remove Meetily:
# Remove the frontend
brew uninstall --cask meetily
# Remove the backend
brew uninstall meetily-backend
# Optional: remove the taps
brew untap zackriya-solutions/meetily
brew untap zackriya-solutions/meetily-backend
# Optional: remove Ollama if no longer needed
brew uninstall ollama
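Note that the commands above remove the packages; local data under the persistent location mentioned in the upgrade notes (/opt/homebrew/var/meetily) may remain. If you also want to archive and delete it, here is a minimal sketch (standard library only; verify the path applies to your installation first):

# Optional cleanup sketch: archive and remove leftover Meetily data after uninstalling.
# The directory below is the persistent Homebrew location mentioned in the upgrade notes;
# confirm it exists and contains only Meetily data before deleting.
import shutil
from pathlib import Path

data_dir = Path("/opt/homebrew/var/meetily")

if data_dir.exists():
    archive = shutil.make_archive(str(Path.home() / "meetily_data_final_backup"), "zip", data_dir)
    shutil.rmtree(data_dir)
    print(f"Archived data to {archive} and removed {data_dir}")
else:
    print(f"No leftover data found at {data_dir}")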
We are a team of expert AI engineers building privacy-first AI applications and agents. With experience across 20+ product development projects, we understand the critical importance of protecting privacy while delivering cutting-edge AI solutions.
Our Mission: Build comprehensive privacy-first AI applications that enterprises and professionals can trust with their most sensitive data.
Our Values:
- Privacy First: Data sovereignty should never be compromised
- Open Source: Transparency and community-driven development
- Enterprise Ready: Solutions that scale and meet compliance requirements
Meetily represents the beginning of our vision - a full ecosystem of privacy-first AI tools ranging from meeting assistants to compliance report generators, auditing systems, case research assistants, patent agents, HR automation, and more.
Meetily Enterprise is available for on-premise deployment, giving organizations complete control over their meeting intelligence infrastructure. This enterprise version includes:
- 100% On-Premise Deployment: Your data never leaves your infrastructure
- Centralized Management: Support for 100+ users with administrative controls
- Zero Vendor Lock-in: Open source MIT license ensures complete ownership
- Compliance Ready: Meet GDPR, SOX, HIPAA, and industry-specific requirements
- Custom Integration: APIs and webhooks for enterprise systems
For enterprise solutions: https://meetily.zackriya.com
Help us grow the privacy-first AI ecosystem!
We're looking for partners and referrals for early adopters of privacy-first AI solutions:
Target Industries & Use Cases:
- Meeting note takers and transcription services
- Compliance report generators
- Auditing support systems
- Case research assistants
- Patent agents and IP professionals
- HR automation and talent management
- Legal document processing
- Healthcare documentation
How You Can Help:
- Refer clients who need privacy-first AI solutions
- Partner with us on custom AI application development
- Collaborate on revenue sharing opportunities
- Get early access to new privacy-first AI tools
Your referrals keep us in business and help us build the future of privacy-first AI. We believe in partnerships that benefit everyone.
For partnerships and custom AI development: https://www.zackriya.com/service-interest-form/
- Follow the established project structure
- Write tests for new features
- Document API changes
- Use type hints in Python code
- Follow ESLint configuration for JavaScript/TypeScript
- Fork the repository
- Create a feature branch
- Submit a pull request
MIT License - Feel free to use this project for your own purposes.
Thanks for all the contributions. Our community is what makes this project possible. Below is the list of contributors:
We welcome contributions from the community! If you have any questions or suggestions, please open an issue or submit a pull request. Please follow the established project structure and guidelines. For more details, refer to the CONTRIBUTING file.
- We borrowed some code from Whisper.cpp
- We borrowed some code from Screenpipe