
Agentic AI Physician Assistant for Lab Results

An agentic AI system that assists physicians with communicating lab results to patients. The system analyzes structured lab data, detects abnormal values using rule-based tools, generates draft clinician-to-patient messages via the Gemini API, and runs an automated safety review — all before saving the draft to an outbox for physician approval.

Built for the University of Virginia MS in Data Science program.

Author: Michael Ieraci

System Architecture

patient_labs.json
       │
       ▼
┌──────────────────────────────┐
│   TOOL LAYER (Python)        │
│  range_check()               │
│  severity_score()            │
│  prioritize_findings()       │
│  generate_followup_qs()      │
└──────────────┬───────────────┘
               │
               ▼
┌──────────────────────────────┐
│   DECISION LOGIC             │
│  Abnormal? → LLM draft       │
│  Normal?   → Template msg    │
└──────────────┬───────────────┘
               │
               ▼
┌──────────────────────────────┐
│  LLM LAYER — DRAFT (Gemini)  │
│  Clinician-to-patient msg    │
└──────────────┬───────────────┘
               │
               ▼
┌──────────────────────────────┐
│  SAFETY REVIEW (Gemini)      │
│  Diagnosis / Medication /    │
│  Overconfidence checks       │
└──────────────┬───────────────┘
               │
               ▼
         outbox.json
    (awaits physician review)

A full visual workflow diagram is included as workflow_diagram.svg.

What Makes This Agentic

This is not a single-prompt LLM wrapper. The system demonstrates:

  • Tool use — Python functions for rule-based lab range checking, severity scoring, and finding prioritization
  • Conditional branching — Normal results skip the LLM entirely and use a templated message; abnormal results trigger the full pipeline
  • Multi-step reasoning — Analysis → prioritization → clarification questions → draft generation → safety review → outbox save
  • Environment interaction — Reads structured patient data from patient_labs.json and writes audit records to outbox.json
  • Safety constraints — A second LLM call reviews every draft for diagnostic language, medication advice, overconfidence, hallucination, and alarming language
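The conditional branching described above can be sketched as follows. The function and field names here are illustrative, not the repository's actual identifiers:

```python
def choose_path(findings):
    """Route a patient's findings: normal results get a templated message,
    abnormal results go through the full LLM draft + safety-review pipeline."""
    abnormal = [f for f in findings if f["status"] != "normal"]
    if not abnormal:
        return "template_message"   # skip the LLM entirely
    return "llm_draft"              # draft -> safety review -> outbox

# Illustrative findings for one patient
findings = [
    {"test": "LDL", "status": "high"},
    {"test": "HDL", "status": "normal"},
]
print(choose_path(findings))  # llm_draft
```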

Lab Tests in Scope

| Lab | Reference Range | Critical Threshold |
|---|---|---|
| LDL Cholesterol | ≤100 mg/dL | ≥190 mg/dL |
| HDL Cholesterol | ≥40 mg/dL | ≤25 mg/dL |
| Triglycerides | ≤150 mg/dL | ≥500 mg/dL |
| Hemoglobin A1c | ≤5.6% | ≥9.0% |
| Creatinine | 0.74–1.35 (M) / 0.59–1.04 (F) mg/dL | ≥2.0 (M) / ≥1.8 (F) mg/dL |
| eGFR | ≥60 mL/min/1.73m² | ≤30 mL/min/1.73m² |
| ALT | ≤56 U/L | ≥200 U/L |
| AST | ≤40 U/L | ≥200 U/L |
| Vitamin D | ≥20 ng/mL | ≤10 ng/mL |

Creatinine uses sex-stratified reference ranges per clinical guidelines.
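A range check along these lines might look like the sketch below, using the bounds from the table above. The identifiers and the `(low, high)` representation are assumptions for illustration; the repository's actual `range_check()` signature may differ:

```python
# Reference ranges from the table; (low, high) bounds, None = unbounded.
REFERENCE_RANGES = {
    "ldl": (None, 100), "hdl": (40, None), "triglycerides": (None, 150),
    "a1c": (None, 5.6), "egfr": (60, None), "alt": (None, 56),
    "ast": (None, 40), "vitamin_d": (20, None),
    # Creatinine is sex-stratified per clinical guidelines
    "creatinine": {"M": (0.74, 1.35), "F": (0.59, 1.04)},
}

def range_check(test, value, sex=None):
    """Return 'low', 'high', or 'normal' for a single lab value."""
    bounds = REFERENCE_RANGES[test]
    if isinstance(bounds, dict):   # sex-stratified test (creatinine)
        bounds = bounds[sex]
    low, high = bounds
    if low is not None and value < low:
        return "low"
    if high is not None and value > high:
        return "high"
    return "normal"

print(range_check("creatinine", 1.5, sex="F"))  # high
print(range_check("hdl", 55))                   # normal
```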

Severity Classification

| Level | Criteria |
|---|---|
| Routine | 0 abnormal values |
| Follow-up recommended | 1–2 abnormal values, no critical flags |
| Urgent follow-up | ≥3 abnormal values OR any critical-threshold breach |
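This classification is simple enough to capture directly in code. A minimal sketch, assuming the function takes an abnormal count and a critical flag (the repo's `severity_score()` may take different inputs):

```python
def severity_score(n_abnormal, any_critical):
    """Map abnormal/critical counts to the three severity levels above."""
    if any_critical or n_abnormal >= 3:
        return "Urgent follow-up"
    if n_abnormal >= 1:
        return "Follow-up recommended"
    return "Routine"

print(severity_score(0, False))  # Routine
print(severity_score(2, False))  # Follow-up recommended
print(severity_score(1, True))   # Urgent follow-up
```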

Test Cases

The system processes 3 synthetic patients from patient_labs.json:

  • PT-001 (Margaret Chen, F, 67) — 7 abnormal labs, no critical flags → Urgent follow-up
  • PT-002 (Robert Okafor, M, 45) — All normal → Routine
  • PT-003 (Diana Vasquez, F, 54) — 9 abnormal labs, 5 critical flags → Urgent follow-up

Setup

Prerequisites

  • Python 3 with pip
  • A Google Gemini API key
Installation

git clone https://github.com/mieraci22/Agentic_AI.git
cd Agentic_AI
pip install google-genai python-dotenv

Configuration

Create a .env file in the project root:

GEMINI_API_KEY=your_api_key_here
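The script loads this key at startup, roughly as follows (a sketch of the usual `python-dotenv` + `google-genai` setup; the exact lines in the repo may differ):

```python
import os

from dotenv import load_dotenv
from google import genai

load_dotenv()  # reads GEMINI_API_KEY from .env into the environment
api_key = os.getenv("GEMINI_API_KEY")
if not api_key:
    raise RuntimeError("GEMINI_API_KEY not set; create a .env file first")

client = genai.Client(api_key=api_key)
```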

Run

python physician_assistant_agent.py

Or open physician_assistant_agent.ipynb in Jupyter/VS Code and run all cells.

Project Structure

Agentic_AI/
├── physician_assistant_agent.py    # Standalone Python script
├── physician_assistant_agent.ipynb # Jupyter notebook (same logic, with outputs)
├── patient_labs.json               # Input: 3 synthetic patient records
├── outbox.json                     # Output: draft messages + audit records
├── workflow_diagram.svg            # System architecture diagram
├── reflection.md                   # Critical analysis of the system
├── .env                            # API key (not committed)
└── README.md
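Each entry written to outbox.json pairs the draft with its audit metadata. The field names below are hypothetical, sketched from the pipeline description rather than the actual schema:

```python
import json

# Hypothetical audit record; the real outbox.json fields may differ.
record = {
    "patient_id": "PT-001",
    "severity": "Urgent follow-up",
    "draft_message": "...",  # clinician-to-patient draft from Gemini
    "safety_review": {"passed": True, "flags": []},
    "status": "awaiting_physician_review",
}
print(json.dumps(record, indent=2))
```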

Safety Constraints

The system is designed to draft messages for physician review, not to replace clinical judgment. AI-generated drafts must not:

  • Diagnose medical conditions
  • Recommend medication changes
  • Provide dosage instructions
  • Make definitive clinical conclusions

The safety review step (a second LLM call) checks every draft against these constraints and flags violations before the message reaches the outbox.
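Independent of the LLM-based review, the same constraints could also be screened deterministically. The sketch below is illustrative only, with an invented phrase list; the repository relies on a second Gemini call rather than keyword matching:

```python
# Phrases that would violate the constraints above (illustrative list).
FORBIDDEN_PATTERNS = {
    "diagnosis": ["you have diabetes", "this confirms", "you are diagnosed"],
    "medication": ["start taking", "increase your dose", "stop your medication"],
    "overconfidence": ["definitely", "certainly means", "there is no doubt"],
}

def prefilter_draft(draft):
    """Return a list of (category, phrase) violations found in a draft."""
    text = draft.lower()
    return [
        (category, phrase)
        for category, phrases in FORBIDDEN_PATTERNS.items()
        for phrase in phrases
        if phrase in text
    ]

draft = "Your LDL is elevated; you should definitely start taking a statin."
print(prefilter_draft(draft))
```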

Technologies

  • Python — Core logic, rule-based tools, orchestration
  • Google Gemini API — LLM draft generation and safety review
  • JSON — Structured input/output format

License

This project was developed for educational purposes as part of the UVA MSDS program. Not intended for clinical use.
