A cross-platform desktop application that provides AI-powered medical assistance without requiring an internet connection. Built with Electron and powered by Ollama for complete privacy and offline functionality.
Full Documentation & Downloads
- Complete Privacy: All data stays on your device
- Cross-Platform: Works on Windows, macOS, and Linux
- AI-Powered: Uses advanced language models via Ollama
- Interactive Chat: Natural conversation with medical AI
- Symptom Checker: Quick symptom analysis and guidance
- Medication Tracker: Track medications and check interactions
- Medical History: Local storage of consultation history
- Configurable: Adjust AI model settings and preferences
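At its core, the medication tracker's interaction check is a comparison of the user's current list against known interacting pairs. The sketch below is purely illustrative: the pair set is a placeholder, and a real tracker must draw its interaction data from a vetted clinical source.

```python
# Placeholder interaction data; a real tracker must use a vetted clinical source.
KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}),  # example: increased bleeding risk
}

def check_interactions(medications):
    """Return every known interacting pair present in the user's list."""
    current = {m.strip().lower() for m in medications}
    return sorted(tuple(sorted(pair))
                  for pair in KNOWN_INTERACTIONS if pair <= current)
```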
This application is for informational purposes only and is not a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or other qualified health provider with any questions you may have regarding a medical condition.
None! Our fully automated setup handles everything for you.
The setup script will automatically install and configure:
- Node.js (if not present)
- Python (if not present)
- Ollama (if not present)
- All required dependencies and AI models
- Clone the repository:

```bash
git clone https://github.com/lpolish/offlinedoctor.git
cd offlinedoctor
```

- Run the fully automated setup script:

```bash
# Linux/macOS - fully automated
./setup.sh

# Windows - fully automated
setup.bat
```
The setup script is completely automated and will:

- Detect your operating system and architecture
- Install Node.js automatically (from official sources)
- Install Python automatically (from official sources)
- Install Ollama automatically (from official sources)
- Set up all Node.js dependencies
- Create and configure a Python virtual environment
- Download and configure the AI medical model
- Create run scripts and desktop shortcuts
- Handle container environments seamlessly
- Support all major Linux distributions, macOS, and Windows
- Require no manual intervention
- Start the application:

```bash
# Use the generated run script (recommended)
./run.sh   # Linux/macOS
run.bat    # Windows

# Or use npm directly
npm start
```
- Validate your setup (optional):

```bash
./test-setup.sh   # Linux/macOS - comprehensive validation
```
Success! The setup is designed to be 100% automated with zero manual intervention required. If you encounter any issues, the setup script provides detailed error messages and recovery suggestions.
Our automated setup works seamlessly on:
- Linux: Ubuntu, Debian, Fedora, CentOS, Arch, openSUSE, Alpine
- macOS: 10.14+ (Intel and Apple Silicon)
- Windows: Windows 10/11 (x64)
- Native installations
- Docker containers (detects automatically)
- Virtual machines
- Cloud instances
- CI/CD environments
- Linux: apt, dnf, pacman, zypper, apk, yum
- macOS: Homebrew (optional)
- Windows: PowerShell with direct downloads
- Binary downloads when package managers aren't available
- Source compilation as last resort
- User-space installations when sudo isn't available
Create installer packages for different platforms:
```bash
# Build for the current platform
npm run build

# Platform-specific builds
npm run build-win    # Windows (.exe, .msi)
npm run build-mac    # macOS (.dmg, .pkg)
npm run build-linux  # Linux (.AppImage, .deb, .tar.gz)
```
The project uses GitHub Actions to automatically build installers for all platforms:
- On every push to the `main` branch: builds and tests the application
- On pull requests: builds and tests the changes
- On version tags (`v*`): creates a draft release with installers for all platforms
To create a new release:
- Update the version in `package.json`
- Create and push a new tag:

```bash
git tag v1.0.0         # Use the appropriate version
git push origin v1.0.0
```
- GitHub Actions will automatically:
- Build installers for Windows, macOS, and Linux
- Create a draft release with all installers
- Generate release notes
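The workflow behind this could look roughly like the following. This is an illustrative sketch only; the repository's actual `.github/workflows` file may differ in names, triggers, and steps (the release-drafting step is omitted here).

```yaml
# Illustrative sketch of the CI workflow; not the repository's actual file.
name: Build
on:
  push:
    branches: [main]
    tags: ['v*']
  pull_request:

jobs:
  build:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run build
```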
```bash
npm run build-win    # Windows NSIS installer
npm run build-mac    # macOS DMG
npm run build-linux  # Linux AppImage and DEB
```
```
┌─────────────────┐    ┌──────────────────┐    ┌───────────────────┐
│    Frontend     │    │     Backend      │    │     AI Engine     │
│   (Electron)    │◄──►│  (Python Flask)  │◄──►│      (Ollama)     │
│                 │    │                  │    │                   │
│ • HTML/CSS/JS   │    │ • REST API       │    │ • Llama2 Model    │
│ • Chat UI       │    │ • Medical Logic  │    │ • Local Inference │
│ • Settings      │    │ • Data Storage   │    │ • No Internet     │
└─────────────────┘    └──────────────────┘    └───────────────────┘
```
- Frontend (Electron): Cross-platform desktop UI with medical consultation interface
- Backend (Python Flask): API server handling medical queries and business logic
- AI Engine (Ollama): Local language model serving medical AI responses
- Data Storage: Local SQLite/JSON for medical history and user preferences
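The request path through these components can be sketched in a few lines: the backend wraps the user's question in a medical system prompt and posts it to Ollama's local REST endpoint (`/api/generate`). The function names below are illustrative, not the actual `server.py` API.

```python
import json
import urllib.request

# Ollama's default local endpoint; no external network is involved.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_medical_request(question: str, model: str = "llama2") -> dict:
    """Wrap a user question in a medical-context prompt (illustrative)."""
    context = ("You are a medical information assistant. Provide general "
               "guidance only and always recommend consulting a professional.")
    return {"model": model,
            "prompt": f"{context}\n\nQuestion: {question}",
            "stream": False}

def ask(question: str) -> str:
    """POST the request to the local Ollama server (requires `ollama serve`)."""
    data = json.dumps(build_medical_request(question)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```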
The application supports multiple AI models:
- llama2 (Recommended): Best balance of performance and accuracy
- mistral: Faster responses, good for basic queries
- codellama: Specialized for technical medical information
Configure this in the Settings tab, or modify `backend/server.py`:

```python
DEFAULT_MODEL = "llama2"  # Change to your preferred model
```
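A per-query model choice could sit on top of that setting. The mapping below is an illustrative sketch, not part of the shipped code:

```python
# Illustrative mapping of query types to the models listed above.
MODELS = {
    "general": "llama2",      # balanced default
    "quick": "mistral",       # faster responses
    "technical": "codellama", # technical medical information
}

def select_model(query_type: str) -> str:
    """Fall back to the default model for unknown query types."""
    return MODELS.get(query_type, MODELS["general"])
```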
- Save History: Toggle consultation history storage
- Anonymize Data: Remove personal identifiers from stored data
- Auto-clear: Automatically clear history after specified time
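The anonymization step could be as simple as a pass that masks obvious identifiers before anything is written to disk. This is a minimal sketch; real de-identification is considerably harder than this.

```python
import re

def anonymize(text: str) -> str:
    """Mask common personal identifiers before storing consultation history."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[email]", text)  # email addresses
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[phone]", text)     # phone numbers
    return text
```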
```
offline-doctor/
├── main.js                   # Main Electron process
├── preload.js                # Security context bridge
├── renderer.js               # Frontend application logic
├── index.html                # Main UI layout
├── style.css                 # Application styles
├── package.json              # Node.js dependencies and scripts
├── setup.sh                  # Linux/macOS setup script
├── setup.bat                 # Windows setup script
├── assets/                   # Icons and images
├── backend/                  # Python Flask server
│   ├── server.py             # Main API server
│   ├── requirements.txt      # Python dependencies
│   └── venv/                 # Python virtual environment
└── .github/
    └── copilot-instructions.md
```
- No External Connections: All processing happens locally
- Data Encryption: Sensitive data encrypted at rest
- Memory Safety: Secure cleanup of medical data in memory
- Access Control: No unauthorized access to medical history
- HIPAA Considerations: Designed with healthcare privacy in mind
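The "no external connections" guarantee is largely a matter of binding every local service to the loopback interface. A guard like this (illustrative, not taken from `server.py`) can enforce that before the backend starts:

```python
# Loopback-only hosts; anything else would be reachable from other machines.
LOOPBACK_HOSTS = {"127.0.0.1", "::1", "localhost"}

def assert_local_only(host: str) -> None:
    """Refuse to start if the server would be reachable from the network."""
    if host not in LOOPBACK_HOSTS:
        raise ValueError(f"refusing to bind non-loopback host: {host!r}")
```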
```bash
# Frontend (Electron) in development mode
npm run dev
```

```bash
# Backend (Flask) in a separate terminal
cd backend
source venv/bin/activate
python server.py
```
- Frontend Features: Modify `renderer.js` and update the UI in `index.html`
- Backend API: Add endpoints in `backend/server.py`
- AI Prompts: Update the medical context in the `MedicalAI` class
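For the prompt side, the safety rules can live in one place. The class below is a hypothetical sketch of that shape; the real `MedicalAI` in `backend/server.py` may be structured differently.

```python
class MedicalAI:
    """Hypothetical sketch of the prompt context for the medical model."""

    SYSTEM_CONTEXT = (
        "You are a medical information assistant. Give general guidance, "
        "cite sources when possible, avoid specific dosage recommendations, "
        "and always advise seeing a qualified healthcare professional."
    )

    def build_prompt(self, question: str) -> str:
        # Prepend the safety context to every patient question.
        return f"{self.SYSTEM_CONTEXT}\n\nPatient question: {question}"
```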
Ollama not starting:
```bash
# Check if Ollama is running
ps aux | grep ollama

# Manually start Ollama
ollama serve
```
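The same health check can be done programmatically (a sketch, assuming Ollama's default port; a running server answers plain HTTP GET requests on its root URL):

```python
import urllib.error
import urllib.request

def ollama_running(url: str = "http://localhost:11434",
                   timeout: float = 2.0) -> bool:
    """Return True if a local Ollama server answers on its default port."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused or timed out: Ollama is not reachable.
        return False
```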
Python virtual environment issues:
```bash
# Recreate the virtual environment
rm -rf backend/venv
cd backend
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```
Model download failures:
```bash
# Check available models
ollama list

# Re-download the model
ollama pull llama2
```
- Electron logs: Check the console in DevTools (F12)
- Backend logs: Terminal output when running `python server.py`
- Ollama logs: Check the `~/.ollama/logs/` directory
- Fork the repository
- Create a feature branch: `git checkout -b feature-name`
- Make your changes and test thoroughly
- Submit a pull request with detailed description
- Always include appropriate disclaimers
- Cite medical sources when possible
- Avoid providing specific dosage recommendations
- Emphasize the importance of professional medical care
This project is licensed under the MIT License - see the LICENSE file for details.
- Ollama for local AI model hosting
- Electron for cross-platform desktop framework
- Flask for Python web framework
- Medical AI community for guidance on responsible AI healthcare applications
For support and questions:
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: hello@luispulido.com
Remember: This tool is designed to supplement, not replace, professional medical advice. Always consult healthcare professionals for serious medical concerns.