Made with ❤️ by the Hackathon Dream Team – LeadWith AIAgents (GenAI Community)
📫 Contact us: HackathonDreamTeam@rockingwoldtechnologies.com
MediGuard is a secure, multi-agent medical AI system built on GenAI AgentOS.
It uses AI-powered agents to extract, summarize, evaluate, and coordinate responses for healthcare data such as medical reports, lab results, and scanned PDFs.
Doctors often receive lengthy, unstructured patient reports, making it difficult to:
- Understand urgent risks instantly
- Prioritize patients
- Coordinate care across systems
MediGuard deploys three AI agents that:
- 🧠 Extract and summarize reports
- ⚖️ Evaluate clinical context and confidence
- 🤝 Coordinate decisions and next actions
Each agent operates independently but communicates via secure, session-bound messages.
This diagram explains how the agents (EAS → MEVAL → DEC) interact with each other and handle responses based on medical data analysis and confidence scoring.
- Accepts file uploads (PDFs, DOCs, images)
- Extracts text using LlamaParse or fallback OCR
- Generates:
- Emergency summary
- Mid-level clinical summary
- Full structured report
- Forwards output to MEVALAgent
ℹ️ Uses OpenAI GPT-4o + LlamaIndex
🔑 Requires API key from https://cloud.llamaindex.ai/
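The parse-then-OCR fallback described above can be sketched as below. `llamaparse_extract` and `ocr_extract` are hypothetical stand-ins for the real LlamaParse and pytesseract calls, injected as callables so the routing logic stays testable without API keys:

```python
def extract_text(path, llamaparse_extract, ocr_extract):
    """Try the structured parser first; fall back to OCR on failure.

    Returns (text, source) so downstream agents know which extractor ran.
    """
    try:
        text = llamaparse_extract(path)
        if text and text.strip():
            return text, "llamaparse"
    except Exception:
        pass  # parser unavailable, key missing, or unsupported document
    return ocr_extract(path), "ocr"
```

Injecting the extractors keeps the fallback decision (empty output or an exception both trigger OCR) separate from the vendor SDKs.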
- Evaluates health summaries from EAS
- Calculates confidence score
- Triggers internal lookup if confidence is low
- Provides actionable advice to DEC agent
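MEVAL's threshold routing might look like the following minimal sketch; the 0.7 cutoff and the advice strings are our assumptions for illustration, not values taken from the MediGuard code:

```python
# Assumed cutoff: below this, MEVAL triggers an internal lookup
LOW_CONFIDENCE_THRESHOLD = 0.7

def evaluate(summary: str, confidence: float) -> dict:
    """Score a summary and decide whether more evidence is needed."""
    needs_lookup = confidence < LOW_CONFIDENCE_THRESHOLD
    return {
        "summary": summary,
        "confidence": confidence,
        "needs_internal_lookup": needs_lookup,
        "advice": "re-investigate" if needs_lookup else "forward to DEC",
    }
```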
- Makes final decision:
- High confidence → PDF summary response
- Low confidence → re-investigation / no response
- Coordinates next steps or alerts
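The DEC routing above reduces to a small decision function; the threshold default here is an assumption, not the value MediGuard uses:

```python
def decide(confidence: float, threshold: float = 0.7) -> dict:
    """High confidence -> PDF summary response; low -> re-investigation, no response."""
    if confidence >= threshold:
        return {"action": "pdf_summary_response", "respond": True}
    return {"action": "re_investigation", "respond": False}
```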
Ensure the following are installed:
- Python 3.10+
- Docker
- Git
- uv (optional, recommended for agent CLI)
git clone https://github.com/MotsimAslam/MedContextAI.git
```
cd genai-agentos
python -m venv .venv
.venv\Scripts\activate        # On Windows
# OR
source .venv/bin/activate     # On macOS/Linux
pip install -r requirements.txt
```

Make sure Docker is running, then:

```
docker-compose up --build
```

```
python cli.py signup -u yourusername
# You'll be prompted for a password
python cli.py login -u yourusername -p yourpassword
python cli.py register_agent --name easagent --description "Extracts and summarizes health record data"
```

This creates a new agent script inside the cli/agents/ folder.
```
uv sync
uv sync --active
uv run easagent.py
```

After registration, a JWT token will be auto-generated in the new agent file. Copy it into your working agent code to authorize it.
Then re-sync and run:
```
uv sync --active
uv run llamaindex_file_chat.py
```

You can now test your agent at: http://localhost:3000
- Sign in on the GenAI UI
- Choose the agent `llamaindex_file_chat`
- Upload a PDF/image/text file
- Select your preferred summary type:
- Emergency Summary
- Full Clinical Insight
- Extended Investigation
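One way the summary choice in the UI could map onto the three EAS output tiers is a simple prompt lookup. The prompt strings below are illustrative, not the ones MediGuard ships:

```python
# Hypothetical mapping from UI choice to the generation instruction
SUMMARY_PROMPTS = {
    "Emergency Summary": "List only urgent risks and red-flag findings.",
    "Full Clinical Insight": "Produce a full structured clinical report.",
    "Extended Investigation": "Summarize findings and list follow-up investigations.",
}

def prompt_for(choice: str) -> str:
    """Fall back to the full report when the choice is unrecognized."""
    return SUMMARY_PROMPTS.get(choice, SUMMARY_PROMPTS["Full Clinical Insight"])
```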
```
llama-index
llama-parse
openai
pdfplumber
pytesseract
Pillow
reportlab
python-dotenv
genai-agentos
```
If you're using llamaindex_file_chat, you’ll need a LlamaParse API key:
📎 https://cloud.llamaindex.ai/
⚠️ This project is for research and prototyping only.
It is not intended for direct clinical decision-making.
Always consult licensed professionals.
This repo is a submitted GenAI Hackathon project.
✅ Built and tested on GenAI AgentOS
✅ Docker-compatible
✅ Works with uploaded PDF/image health records
By Hackathon Dream Team – LeadWith AIAgents (GenAI Community)
🧩 For guidance on the GenAI AgentOS structure and CLI usage, check the official GenAI sample repository:
👉 https://github.com/genai-works-org/getting-started-genai-agentos
This diverse and talented team of professionals came together during the hackathon to develop cutting-edge AI solutions. With expertise spanning AI research, ML engineering, data science, compliance, product architecture, and program management, this dream team was built to innovate.
- Sean Krepp – Product / Program (formerly at Google Data / AI)
- Yehya Karout – Architect / Tech Program Manager (formerly at Deloitte Cyber / AI)
- Motsim Aslam – AI/ML & GenAI Developer / Data Scientist (Red Wireless & Beta Ambassador)
- Dr. Chantal Spleiss – Architect / Product / Compliance (AI Strategist, Author & Speaker)
- Nathan Ekanem – AI Researcher / Strategist (Northern Gas Net / Ipsos MORI)
- Obaid Ullah Safi – Developer / AI/ML Engineer
- Carrie Smith – AI Coding Sleeper (formerly at GM & JP Morgan Chase)
- Umar Mazhar – AI Researcher / Developer / ML Engineer
Each team member brought their unique strengths to the table, making this collaboration not only effective but also inspiring. Together, we tackled real-world AI challenges and built solutions that matter.
After cloning our repo, you must:
1. Register the agent using the GenAI AgentOS CLI. This will create a new agent directory and generate a unique JWT token.

2. Copy that JWT token and replace it in your agent's original source file (such as `llamaindex_file_chat.py`) inside our repo:

   ```
   # Example:
   AGENT_JWT = "your-new-generated-jwt-token"
   ```

3. Copy the newly generated agent Python file code and replace your existing agent's code with it if changes exist.

4. Run the following to sync and deploy:

   ```
   uv sync --active
   uv run llamaindex_file_chat.py
   ```

Now your agent is live on the GenAI platform and accessible at: 👉 http://localhost:3000
This repository provides the complete infrastructure for running GenAI agents, including:
- Backend
- Router
- Master Agents
- PostgreSQL Database
- Frontend
- CLI
- Redis
- Celery
The system supports multiple kinds of Agents:
| Agent Type | Description |
|---|---|
| GenAI Agents | Connected via genai-protocol library interface. |
| MCP Servers | MCP (Model Context Protocol) servers can be added by pasting their URL in the UI. |
| A2A Servers | A2A (Agent to Agent Protocol) servers can be added by pasting their URL in the UI. |
Make sure you have the following installed:
- make (optional)
  - macOS: `brew install make`
  - Linux: `sudo apt-get install make`
1. Clone the repository:

   ```
   git clone https://github.com/genai-works-org/genai-agentos.git
   cd genai-agentos/
   ```

2. Create a `.env` file by copying the example (can be empty and customized later):

   ```
   cp .env-example .env
   ```

   - A `.env` file should be present for configuration.
   - All variables in `.env-example` are commented. You can customize any environment setting by uncommenting the relevant line and providing a new value.

3. Start Docker Desktop and ensure it is running.

4. Start the infrastructure:

   ```
   make up
   # or alternatively
   docker compose up
   ```

5. After startup:
   - Frontend UI: http://localhost:3000/
   - Swagger API Docs: http://localhost:8000/docs#/
- OpenAI: gpt-4o
Ngrok can be used to expose the local WebSocket endpoint.
1. Install Ngrok:
   - macOS (Homebrew): `brew install ngrok/ngrok/ngrok`
   - Linux: `sudo snap install ngrok`

2. Authenticate Ngrok:
   - Sign up or log in at the ngrok dashboard.
   - Go to the "Your Authtoken" section and copy the token.
   - Run the command:

     ```
     ngrok config add-authtoken <YOUR_AUTH_TOKEN>
     ```

3. Start a tunnel to local port 8080:

   ```
   ngrok http 8080
   ```

4. Copy the generated WebSocket URL and update the `ws_url` field in `genai_session.session.GenAISession`.
🤖 GenAI Agent registration quick start (for more details, check the CLI)
```
cd cli/
python cli.py signup -u <username>   # Register a new user, also available in the UI (http://localhost:3000/)
python cli.py login -u <username> -p <password>   # Login to the system, get JWT user token
python cli.py register_agent --name <agent_name> --description <agent_description>
cd agents/
# Run the agent
uv run python <agent_name>.py
# or alternatively
python <agent_name>.py
```

| Variable | Description | Example / Default |
|---|---|---|
| `FRONTEND_PORT` | Port to start the frontend | `3000` (default). Can be changed by running `export FRONTEND_PORT=<your_port>` in the terminal |
| `ROUTER_WS_URL` | WebSocket URL for the router container | `ws://genai-router:8080/ws` – host is either `localhost` or the router container name |
| `SECRET_KEY` | Secret key for cryptographic operations – JWT / LLM config encryption | `$(openssl rand -hex 32)` |
| `POSTGRES_HOST` | PostgreSQL host | `genai-postgres` |
| `POSTGRES_USER` | PostgreSQL username | `postgres` |
| `POSTGRES_PASSWORD` | PostgreSQL password | `postgres` |
| `POSTGRES_DB` | PostgreSQL database name | `postgres` |
| `POSTGRES_PORT` | PostgreSQL port | `5432` |
| `DEBUG` | Enable/disable debug mode – server / ORM logging | `True` / `False` |
| `MASTER_AGENT_API_KEY` | API key for the Master Agent – internal identifier | `e1adc3d8-fca1-40b2-b90a-7b48290f2d6a::master_server_ml` |
| `MASTER_BE_API_KEY` | API key for the Master Backend – internal identifier | `7a3fd399-3e48-46a0-ab7c-0eaf38020283::master_server_be` |
| `BACKEND_CORS_ORIGINS` | Allowed CORS origins for the backend | `["*"]`, `["http://localhost"]` |
| `DEFAULT_FILES_FOLDER_NAME` | Default folder for file storage – Docker file volume path | `/files` |
| `CLI_BACKEND_ORIGIN_URL` | Backend URL for CLI access | `http://localhost:8000` |
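For convenience, the defaults listed above can be collected into a starter `.env`. The `SECRET_KEY` placeholder is ours; generate a real value before use:

```env
# Starter .env assembled from the defaults above
FRONTEND_PORT=3000
ROUTER_WS_URL=ws://genai-router:8080/ws
SECRET_KEY=change-me   # e.g. generate with: openssl rand -hex 32
POSTGRES_HOST=genai-postgres
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=postgres
POSTGRES_PORT=5432
DEBUG=False
BACKEND_CORS_ORIGINS=["*"]
DEFAULT_FILES_FOLDER_NAME=/files
CLI_BACKEND_ORIGIN_URL=http://localhost:8000
```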
✅ If your MCP server or A2A card is hosted on your local machine, make sure to change the host name from http://localhost:<your_port> to http://host.docker.internal:<your_port> and try again.
🔎 Also make sure to pass the full url of your MCP server or A2A card, such as - http://host.docker.internal:8000/mcp for MCP or http://host.docker.internal:10002 for A2A
There is no need to append /.well-known/agent.json to your A2A card URL – genai-backend will do it for you!
✅ Make sure your MCP server supports the streamable-http protocol and is remotely accessible. Also make sure that you're specifying the full URL of your server, such as http://host.docker.internal:8000/mcp
The sse protocol is officially deprecated by the MCP protocol developers; the stdio protocol is not supported yet, but stay tuned for future announcements!
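The localhost-to-`host.docker.internal` rewrite described above can be sketched as a small helper (the name `dockerize_url` is ours, not part of the repo):

```python
from urllib.parse import urlparse, urlunparse

def dockerize_url(url: str) -> str:
    """Rewrite localhost URLs so code inside a container can reach the host."""
    parts = urlparse(url)
    if parts.hostname in ("localhost", "127.0.0.1"):
        host = "host.docker.internal"
        netloc = f"{host}:{parts.port}" if parts.port else host
        parts = parts._replace(netloc=netloc)
    return urlunparse(parts)

print(dockerize_url("http://localhost:8000/mcp"))
# -> http://host.docker.internal:8000/mcp
```

URLs that already point at `host.docker.internal` (or any non-localhost host) pass through unchanged.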