
🏥 MediAssist — AI Response Suggestion System

Empowering hospital staff with AI-drafted patient responses, reviewed and sent with confidence.

FastAPI Next.js Groq SQLite Docker License: MIT

Features · Demo · Quick Start · API Docs · Docker · Architecture


🌟 What is MediAssist?

MediAssist is a full-stack web application that helps hospital staff respond to patient queries faster and more professionally using AI-generated draft replies. Staff can review, edit, and approve AI suggestions before sending — ensuring quality, empathy, and medical safety in every response.

Built for TetherX Hackathon – Round 3


✨ Features

| Feature | Description |
| --- | --- |
| 🤖 AI Response Generation | Powered by Llama 3.3 70B via Groq — ultra-fast inference |
| ✏️ Human-in-the-Loop | Staff review and edit every AI draft before it's sent |
| 📋 Query Inbox | View all incoming patient queries in a clean dashboard |
| 📜 Query History | Full audit trail of all sent responses with timestamps |
| 📊 Live Dashboard | Stats overview — total queries, pending, resolved |
| 🐳 Docker Support | One-command deployment with Docker Compose |
| 🔒 Safety First | AI is instructed never to give medical diagnoses |
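The safety behaviour comes from the system prompt sent to the model. A minimal sketch of how such a prompt could be assembled for a Groq chat call — the prompt wording and the `build_messages` helper are illustrative assumptions, not the repo's actual code:

```python
# Illustrative sketch: assembling a Groq chat request with a
# safety-focused system prompt. Wording and helper name are
# assumptions, not copied from gemini_service.py.
SYSTEM_PROMPT = (
    "You are a hospital communications assistant. Draft a polite, "
    "empathetic reply to the patient's message. Never give a medical "
    "diagnosis or prescribe treatment; instead, suggest the patient "
    "book an appointment with a clinician."
)

def build_messages(patient_query: str) -> list[dict]:
    """Assemble the chat messages for a Groq completions call."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": patient_query},
    ]

# The actual call would then be roughly:
#   from groq import Groq
#   client = Groq()  # reads GROQ_API_KEY from the environment
#   resp = client.chat.completions.create(
#       model="llama-3.3-70b-versatile",
#       messages=build_messages("I have severe headaches..."),
#   )
#   draft = resp.choices[0].message.content
```

Keeping the system prompt separate from the user message means the patient's text can never override the safety instructions by accident.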

🖥️ Screenshots

Coming soon — run locally and see the dark-themed UI in action!


📁 Project Structure

Round-3/
├── backend/                    # FastAPI Python backend
│   ├── main.py                 # App entrypoint, CORS, router mounting
│   ├── database.py             # SQLAlchemy engine + session
│   ├── models.py               # Query ORM model
│   ├── gemini_service.py       # Groq AI integration (llama-3.3-70b)
│   ├── routes/
│   │   └── queries.py          # All API endpoints
│   ├── requirements.txt
│   ├── Dockerfile
│   └── .env.example            # Environment variable template
│
├── frontend/                   # Next.js 16 frontend
│   ├── app/
│   │   ├── dashboard/          # Stats overview page
│   │   ├── queries/            # Query inbox list
│   │   │   └── [id]/           # Response workspace (per query)
│   │   ├── history/            # Sent responses history
│   │   └── globals.css         # Dark theme design system
│   ├── components/
│   │   └── Sidebar.tsx         # Navigation sidebar
│   ├── services/
│   │   └── api.ts              # API client (typed)
│   └── Dockerfile
│
├── docker-compose.yml          # Full-stack deployment
└── .gitignore

🚀 Quick Start

Prerequisites

- Python 3.12+ and pip (backend)
- Node.js 20+ and npm (frontend)
- A Groq API key (free — see below)
- Docker + Docker Compose (optional, for containerized deployment)

1. Clone the Repository

git clone https://github.com/HackathonCodeBase/Round-3.git
cd Round-3

2. Backend Setup

cd backend

# Create and activate virtual environment
python3 -m venv venv
source venv/bin/activate          # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Configure environment
cp .env.example .env
# Edit .env and add your Groq API key

.env file:

GROQ_API_KEY="your_groq_api_key_here"
DATABASE_URL=sqlite:///./mediassist.db

# Start the backend
uvicorn main:app --reload
# API running at http://localhost:8000

3. Frontend Setup

cd ../frontend

# Install dependencies
npm install

# Start the dev server
npm run dev
# App running at http://localhost:3000

4. Open the App

Navigate to http://localhost:3000 — you'll be redirected to the Dashboard.


🔑 Getting a Groq API Key

  1. Go to console.groq.com and sign up (free, no credit card)
  2. Navigate to API Keys → Create API Key
  3. Copy the key (starts with gsk_...)
  4. Paste it into backend/.env as GROQ_API_KEY

Free tier: 14,400 requests/day with lightning-fast Llama 3.3 70B inference ⚡


📡 API Reference

Base URL: http://localhost:8000

Interactive docs: http://localhost:8000/docs (Swagger UI)

Endpoints

GET /queries — List all patient queries

Response:

[
  {
    "id": 1,
    "patient_name": "John Smith",
    "patient_query": "I have been experiencing severe headaches...",
    "status": "pending",
    "created_at": "2026-03-06T07:27:00",
    "ai_suggestion": null,
    "staff_response": null
  }
]

POST /generate-response — Generate AI suggestion for a query

Request:

POST /generate-response
Content-Type: application/json

{
  "query_id": 1
}

Response:

{
  "query_id": 1,
  "ai_suggestion": "Thank you for reaching out. We understand you're experiencing severe headaches..."
}

POST /send-response — Approve and send a response

Request:

POST /send-response
Content-Type: application/json

{
  "query_id": 1,
  "response": "Thank you for reaching out. We recommend scheduling an appointment..."
}

Response:

{
  "message": "Response sent successfully",
  "query_id": 1,
  "status": "resolved"
}

GET /stats — Dashboard statistics

Response:

{
  "total": 10,
  "pending": 4,
  "resolved": 6
}

🐳 Docker Deployment

Run the entire stack with a single command:

# Make sure .env is configured first
cp backend/.env.example backend/.env
# Edit backend/.env with your GROQ_API_KEY

# Start everything
docker-compose up --build

| Service | URL |
| --- | --- |
| Frontend | http://localhost:3000 |
| Backend API | http://localhost:8000 |
| Swagger Docs | http://localhost:8000/docs |

# Stop everything
docker-compose down
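A docker-compose.yml consistent with the two Dockerfiles in the project layout would look roughly like this — service names and port mappings are assumptions based on the URLs in this README, not the repo's actual file:

```yaml
# Illustrative sketch only; the repo's actual docker-compose.yml
# may differ in service names and options.
services:
  backend:
    build: ./backend
    env_file: ./backend/.env   # supplies GROQ_API_KEY
    ports:
      - "8000:8000"
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    depends_on:
      - backend
```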

🏗️ Architecture

┌─────────────────────────────────────────────┐
│              Browser (Patient/Staff)         │
└──────────────────┬──────────────────────────┘
                   │ HTTP
┌──────────────────▼──────────────────────────┐
│         Next.js 16 Frontend (Port 3000)      │
│  Dashboard | Query Inbox | History | Workspace│
└──────────────────┬──────────────────────────┘
                   │ REST API
┌──────────────────▼──────────────────────────┐
│         FastAPI Backend (Port 8000)          │
│   /queries  /generate-response  /stats       │
└──────┬───────────────────────────┬──────────┘
       │                           │
┌──────▼──────┐           ┌────────▼────────┐
│  SQLite DB  │           │   Groq API      │
│  (queries,  │           │ Llama 3.3 70B   │
│  responses) │           │  (AI drafts)    │
└─────────────┘           └─────────────────┘

🛠️ Tech Stack

| Layer | Technology |
| --- | --- |
| Frontend | Next.js 16, TypeScript, Vanilla CSS |
| Backend | FastAPI, Python 3.12, Uvicorn |
| AI Model | Llama 3.3 70B (via Groq API) |
| Database | SQLite + SQLAlchemy ORM |
| Deployment | Docker + Docker Compose |

🤝 Team

Built with ❤️ for VIT TetherX Hackathon – Round 3


📄 License

This project is licensed under the MIT License.
