A comprehensive AI-powered emergency healthcare management system that connects patients with hospitals, manages medical resources, and provides intelligent allocation based on proximity, availability, and medical history.
MediLink AI is a full-stack application designed for emergency healthcare scenarios. It combines:
- Biometric Face Recognition for instant patient identification
- AI-Powered Hospital Allocation using proximity, bed availability, and doctor resources
- Medical Jargon Translation for patient-friendly communication
- Real-time Hospital Mapping with 15+ connected hospitals
- Automated Workflow Integration via N8N for donor alerts and notifications
Backend
- Framework: FastAPI (Python 3.9+)
- Database: PostgreSQL with SQLAlchemy ORM
- AI/ML:
- Ollama for natural language processing (Llama 3.2)
- NumPy for face descriptor matching (128-dimensional vectors)
- Face-API.js models for biometric detection
- External Services:
- N8N workflow automation (production webhook)
- Langflow for AI flow configuration (optional)
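The face-descriptor matching mentioned above amounts to a nearest-neighbour search over stored 128-dimensional vectors. The following is a minimal sketch, not the project's actual code: the `match_patient` helper, the in-memory registry, and the 0.6 distance threshold are illustrative assumptions.

```python
from typing import Dict, Optional

import numpy as np

def match_patient(probe: np.ndarray,
                  registered: Dict[str, np.ndarray],
                  threshold: float = 0.6) -> Optional[str]:
    """Return the ID of the closest registered descriptor, or None when no
    stored 128-dim vector is within the Euclidean-distance threshold."""
    best_id, best_dist = None, float("inf")
    for patient_id, descriptor in registered.items():
        dist = float(np.linalg.norm(probe - descriptor))
        if dist < best_dist:
            best_id, best_dist = patient_id, dist
    return best_id if best_dist <= threshold else None
```

Face-API.js descriptors are already normalized for Euclidean comparison, which is why a fixed threshold works reasonably well in practice.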
Frontend
- Framework: React 18 with Vite
- Libraries:
- Face-API.js for browser-based face detection
- Leaflet/React-Leaflet for interactive maps
- Axios for API communication
- Lucide React for icons
- Styling: Custom CSS with modern medical SaaS design
- Python 3.9+ with pip
- Node.js 18+ and npm
- PostgreSQL 14+ (optional, falls back to in-memory storage)
- Ollama (for AI features)
- Docker (optional, for N8N)
Backend Setup

```bash
cd medilink-simple/backend

# Create virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# (Optional) Set up PostgreSQL
# Create a .env file:
# DATABASE_URL=postgresql://postgres:postgres@localhost:5432/medilink
# Then run: bash init_db.sh
```

Frontend Setup

```bash
cd frontend

# Install dependencies
npm install
```

Run All Services (Automated)

```bash
# From project root
bash start-all.sh
```

Run Manually

```bash
# Terminal 1: Start Ollama
ollama serve

# Terminal 2: Start Backend
cd backend
source venv/bin/activate
uvicorn main:app --reload

# Terminal 3: Start Frontend
cd frontend
npm run dev
```

- Frontend: http://localhost:5173
- Backend API: http://localhost:8000
- API Docs: http://localhost:8000/docs
Create a `.env` file in `backend/`:

```bash
# Database (Optional)
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/medilink

# N8N (Production)
N8N_BASE_URL=https://n8n.srv1045021.hstgr.cloud

# Langflow (Optional)
LANGFLOW_API_KEY=your_api_key_here
```

Place these JSON files in `backend/`:
- `Medical-nlp.json` - AI Assistant flow configuration
- `Jargon-translator.json` - Medical jargon translation flow
- `donor-alert.json` - N8N workflow configuration
- **Patient Identification**
  - Click "Start Face Scan" in the Biometric Scanner
  - Allow camera access when prompted
  - Position your face in the center of the frame
  - The system automatically identifies the patient and retrieves their medical history
- **Emergency Allocation**
  - Use the AI Assistant chat: "Critical patient needs O+ blood"
  - The system analyzes:
    - Patient location
    - Hospital proximity
    - ICU bed availability
    - Blood stock levels
    - Doctor availability
  - The best hospital is automatically allocated
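The allocation step above can be sketched as a weighted scoring pass over all hospitals. This is an illustrative assumption, not the backend's actual algorithm: the field names (`icu_beds_free`, `blood_stock`, `doctors_available`) and the weights are hypothetical.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def score_hospital(hospital, patient_lat, patient_lon, blood_type):
    """Higher is better: nearby, free ICU beds, matching blood stock, doctors on duty."""
    dist = haversine_km(patient_lat, patient_lon, hospital["lat"], hospital["lon"])
    score = max(0.0, 50.0 - dist)                       # proximity: closer wins
    score += 10.0 * min(hospital["icu_beds_free"], 3)   # cap the bed bonus
    score += 5.0 * min(hospital["blood_stock"].get(blood_type, 0), 4)
    score += 2.0 * hospital["doctors_available"]
    return score

def allocate(hospitals, patient_lat, patient_lon, blood_type):
    """Pick the highest-scoring hospital for this emergency."""
    return max(hospitals, key=lambda h: score_hospital(h, patient_lat, patient_lon, blood_type))
```

Capping the bed and blood bonuses keeps a single over-resourced hospital from dominating every allocation regardless of distance.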
- **Medical History Sharing**
  - After allocation, the medical history is automatically shared with the hospital
  - View shared data in the N8N Workflow component
- **Register Your Face**
  - Click "Register My Face" in the Biometric Scanner
  - Fill in the medical history form
  - Capture a face photo
  - The system stores your biometric data for future identification
- **Medical Jargon Translation**
  - Type complex medical terms into the Jargon AI Translator
  - Get simplified explanations in plain language
  - Integrated into the AI Assistant for automatic translation
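Since the backend uses Ollama (Llama 3.2), the translation call might look like the sketch below. The `/api/generate` endpoint and `response` field are Ollama's standard non-streaming API, but the prompt wording and helper names are assumptions, not the project's actual code.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_jargon_request(text: str) -> dict:
    """Construct a non-streaming Ollama request asking for a plain-language rewrite."""
    return {
        "model": "llama3.2:latest",
        "prompt": (
            "Rewrite the following medical text in plain language a patient "
            "can understand. Reply with plain text only, no JSON.\n\n" + text
        ),
        "stream": False,
    }

def translate_jargon(text: str, timeout: float = 30.0) -> str:
    """Send the request to a locally running Ollama and return the plain-text reply."""
    data = json.dumps(build_jargon_request(text)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())["response"].strip()
```

Asking explicitly for "plain text only, no JSON" in the prompt matches the testing note later in this README that the translator should return plain text rather than JSON.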
Click "Run Demo" in Hero Section to see:
- Emergency scenario simulation
- Patient identification
- Medical jargon translation
- Hospital allocation
- Medical history sharing
- ✅ SQL Injection Protection: SQLAlchemy ORM with parameterized queries
- ✅ Input Validation: Type checking and sanitization for all endpoints
- ✅ CORS Configuration: Restricted origins (localhost only in dev)
- ✅ Error Handling: Comprehensive try-catch blocks with fallbacks
- ✅ Type Safety: Pydantic models for request validation
- ✅ XSS Protection: React's built-in escaping, no dangerouslySetInnerHTML
- ✅ Input Sanitization: All user inputs validated before API calls
- ✅ Error Boundaries: Graceful error handling throughout
- ✅ Secure API Calls: Axios with timeout and error normalization
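The Pydantic request validation mentioned above might look like this minimal sketch. The field names mirror the register endpoint's example payload later in this README; the model and helper names are assumptions, not the project's actual code.

```python
from typing import Dict, List, Optional

from pydantic import BaseModel, ValidationError

class RegisterPatientRequest(BaseModel):
    """Assumed shape of the POST /api/patients/register body."""
    name: str
    age: int
    blood_type: str
    medical_history: Dict[str, str] = {}
    face_descriptor: List[float]

def parse_register(body: dict) -> Optional[RegisterPatientRequest]:
    """Return a validated model, or None when the body fails type checking.
    (FastAPI does this automatically when the model is used as a parameter.)"""
    try:
        return RegisterPatientRequest(**body)
    except ValidationError:
        return None
```

With FastAPI, declaring `RegisterPatientRequest` as the endpoint's parameter type gives the same validation plus an automatic 422 response on bad input.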
- ✅ Empty/null inputs
- ✅ Invalid data types
- ✅ Database connection failures (fallback to in-memory)
- ✅ Missing face descriptors
- ✅ Network timeouts
- ✅ Camera access denied
- ✅ Invalid patient/hospital IDs
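The database-failure fallback listed above can be sketched as follows. This is illustrative only: the `LEGACY_PATIENTS` table, the `session_factory` shape, and the function name are hypothetical, not the project's actual code.

```python
import logging

log = logging.getLogger("medilink")

# Hypothetical in-memory fallback data, used when PostgreSQL is unreachable.
LEGACY_PATIENTS = {"5": {"name": "Ahmad Hassan", "blood_type": "O+"}}

def get_patient(patient_id: str, session_factory=None):
    """Try the database first; on any connection error, fall back to legacy data."""
    if session_factory is not None:
        try:
            with session_factory() as session:
                return session.get_patient(patient_id)
        except Exception as exc:  # e.g. sqlalchemy.exc.OperationalError
            log.warning("database unavailable, using in-memory data: %s", exc)
    return LEGACY_PATIENTS.get(patient_id)
```

Catching the failure at the data-access layer keeps the API handlers unchanged whether the data comes from PostgreSQL or memory.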
```
medilink-simple/
├── backend/
│   ├── main.py                 # FastAPI application
│   ├── database.py             # PostgreSQL models and connection
│   ├── requirements.txt        # Python dependencies
│   ├── init_db.sh              # Database initialization script
│   ├── Medical-nlp.json        # AI Assistant flow config
│   ├── Jargon-translator.json  # Jargon translator flow config
│   └── donor-alert.json        # N8N workflow config
├── frontend/
│   ├── src/
│   │   ├── App.jsx             # Main application component
│   │   ├── App.css             # Global styles
│   │   ├── main.jsx            # Entry point
│   │   ├── components/         # React components
│   │   │   ├── BiometricScanner.jsx
│   │   │   ├── HospitalMap.jsx
│   │   │   ├── FloatingChat.jsx
│   │   │   ├── JargonTranslator.jsx
│   │   │   ├── N8NWorkflow.jsx
│   │   │   ├── Navbar.jsx
│   │   │   ├── HeroSection.jsx
│   │   │   ├── FeaturesSection.jsx
│   │   │   ├── HowItWorksSection.jsx
│   │   │   └── Footer.jsx
│   │   └── services/
│   │       └── api.js          # API client
│   ├── package.json
│   └── vite.config.js
└── start-all.sh                # Automated startup script
```
- `GET /api/hospitals` - Get all hospitals
- `GET /api/patients` - Get all patients
- `POST /api/patients/identify` - Identify patient by face descriptor
  `{ "face_descriptor": [128-dim array] }`
- `POST /api/patients/register` - Register new patient
  `{ "name": "John Doe", "age": 30, "blood_type": "O+", "medical_history": {...}, "face_descriptor": [128-dim array] }`
- `POST /api/emergency/allocate` - Allocate hospital for emergency
  `{ "patient_id": "5", "latitude": 25.2048, "longitude": 55.2708, "severity": "critical" }`
- `POST /api/emergency/share-medical-history` - Share medical history
  `{ "patient_id": "5", "hospital_id": "1" }`
- `POST /api/chat/query` - Send chat query
  `{ "query": "Critical patient needs O+ blood" }`
- `POST /api/jargon/translate` - Translate medical jargon
  `{ "text": "Patient in hemorrhagic shock" }`
- `POST /api/n8n/trigger` - Trigger N8N workflow
  `{ "workflow_id": "donor-alert", "data": {...} }`
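These endpoints can be exercised from Python with the standard library alone. The `build_request`/`post_json` helpers below are names introduced here for illustration, not part of the project; run the commented call with the backend up on port 8000.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # backend address from the Quick Start

def build_request(path: str, payload: dict) -> urllib.request.Request:
    """Build a JSON POST request for a MediLink endpoint."""
    return urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

def post_json(path: str, payload: dict) -> dict:
    """Send the request and decode the JSON response."""
    with urllib.request.urlopen(build_request(path, payload), timeout=10) as resp:
        return json.loads(resp.read())

# Example (requires the backend to be running):
# result = post_json("/api/emergency/allocate", {
#     "patient_id": "5", "latitude": 25.2048,
#     "longitude": 55.2708, "severity": "critical",
# })
```

The interactive Swagger UI at `/docs` offers the same thing in the browser.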
- **Patient Identification**
  - Face scan identifies registered patient
  - Medical history retrieved correctly
  - Demo patient (Ahmad Hassan) works in demo mode
- **Hospital Allocation**
  - Allocates based on proximity
  - Considers bed availability
  - Considers blood stock
  - Recommends different hospitals for different scenarios
- **Medical Jargon Translation**
  - Translates complex terms
  - Returns plain text (not JSON)
  - Auto-triggers in AI Assistant
- **Error Handling**
  - Handles missing data gracefully
  - Shows user-friendly error messages
  - Falls back to legacy data if database unavailable
Database Connection Failed
- Check that PostgreSQL is running: `psql -U postgres -l`
- Verify `DATABASE_URL` in `.env`
- The system falls back to in-memory storage automatically
Ollama Not Responding
- Start Ollama: `ollama serve`
- Check that the model is available: `ollama list`
- Pull the model if needed: `ollama pull llama3.2:latest`
Camera Not Working
- Check browser permissions
- Try "Test Camera" button first
- Use HTTPS in production (required for camera access)
Face Detection Not Working
- Ensure good lighting
- Position face in center
- Check browser console for errors
- Models load from CDN (may take time)
API Calls Failing
- Verify backend is running on port 8000
- Check CORS configuration
- Check browser console for errors
- **Backend**
  - Set the `DATABASE_URL` environment variable
  - Restrict CORS origins to the production domain
  - Use a production-grade ASGI server (Gunicorn with Uvicorn workers)
  - Set up SSL/TLS certificates
  - Configure proper logging
- **Frontend**
  - Update the API base URL in `api.js`
  - Build the production bundle: `npm run build`
  - Serve with HTTPS (required for camera access)
  - Configure proper caching headers
- **Database**
  - Run migrations: `bash init_db.sh`
  - Set up database backups
  - Configure connection pooling
- Face Detection: ~200-300ms per frame
- Patient Identification: ~50-100ms (database) or ~10ms (legacy)
- Hospital Allocation: ~100-200ms (includes scoring algorithm)
- AI Query Response: ~2-5s (depends on Ollama)
- Follow code style guidelines
- Add error handling for all edge cases
- Test thoroughly before submitting
- Update documentation for new features
Built for Jargon AI Hackathon 2024
For issues or questions:
- Check the troubleshooting section
- Review the API documentation at `/docs`
- Check browser/backend console logs

Built with ❤️ for Emergency Healthcare