ChatDoc is an intelligent medical assistant built with Streamlit and LangChain. It allows users to upload patient medical records in PDF format and ask health-related questions. The system answers by embedding the uploaded document, retrieving the most relevant passages, and generating a response with an LLM.
- Upload and parse patient medical records (PDF)
- Automatically chunk, embed, and store documents using the ObjectBox vector database (see the sketch after this list)
- Use GROQ's LLaMA 3.1 model for fast, intelligent responses
- Integrate Google Generative AI Embeddings for document understanding
- Ask contextual questions and receive medically-informed answers
- Show the relevant document chunks used for each answer
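In rough terms, the pipeline behind these features is: parse the PDF, split it into chunks, embed the chunks, store them in ObjectBox, and answer questions against that store. A minimal sketch of the ingestion half is shown below; it uses the LangChain integrations named in this README, but the chunk sizes, embedding model name, and the ObjectBox arguments are assumptions rather than the app's actual code.

```python
# Sketch of the ingestion step (illustrative parameters; assumes GOOGLE_API_KEY is set).
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_google_genai import GoogleGenerativeAIEmbeddings
from langchain_objectbox.vectorstores import ObjectBox

# 1. Parse the uploaded medical record into LangChain documents
docs = PyPDFLoader("Sample-filled-in-MR.pdf").load()

# 2. Chunk the text so each piece fits comfortably in the embedding/LLM context
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=200
).split_documents(docs)

# 3. Embed the chunks and persist them in the ObjectBox vector database
embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
vector_store = ObjectBox.from_documents(chunks, embeddings, embedding_dimensions=768)
```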
A sample file is provided for testing: `Sample-filled-in-MR.pdf`
git clone https://github.com/your-username/chatdoc.git
cd chatdoc

python -m venv chatdoc_env

Activate it:
- Windows: `.\chatdoc_env\Scripts\activate`
- macOS/Linux: `source chatdoc_env/bin/activate`
Install the required Python libraries:
pip install -r requirements.txt
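The authoritative list lives in `requirements.txt` in the repository; based on the stack described above, it likely includes packages along these lines (an assumption, not the file's actual contents):

```text
streamlit
langchain
langchain-community
langchain-groq
langchain-google-genai
langchain-objectbox
pypdf
python-dotenv
```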
You will need API keys for GROQ and Google Generative AI. Create a .env file in the root directory and add your keys:
GROQ_API_KEY=your_groq_api_key
GOOGLE_API_KEY=your_google_api_key
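At startup the app is expected to read these keys from the environment; a typical pattern (illustrative, not necessarily the exact code in app.py) looks like this:

```python
# Illustrative: load the keys from .env; the GROQ and Google clients read them from the environment.
import os
from dotenv import load_dotenv

load_dotenv()  # reads GROQ_API_KEY and GOOGLE_API_KEY from .env in the project root
groq_api_key = os.getenv("GROQ_API_KEY")  # passed to the ChatGroq model later
```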
If the repository includes a setup notebook, run it first. Otherwise, launch the app with:

streamlit run app.py

- Upload a PDF file of a patient medical record (e.g., `Sample-filled-in-MR.pdf`).
- Click "Creating Vector Store" to process the document.
- Ask any medical question in the text box (e.g., "What medication is the patient on?").
- Get an AI-generated answer, along with the relevant document context (sketched below).
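Behind those steps is a standard LangChain retrieval chain. The sketch below continues from the ingestion sketch above (it reuses that `vector_store`); the prompt wording and the GROQ model name are assumptions, not taken from app.py.

```python
# Illustrative answer step; assumes `vector_store` from the ingestion sketch and GROQ_API_KEY is set.
from langchain_groq import ChatGroq
from langchain_core.prompts import ChatPromptTemplate
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain

llm = ChatGroq(model="llama-3.1-8b-instant")  # GROQ-hosted LLaMA 3.1

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the medical record context below.\n"
    "<context>\n{context}\n</context>\n"
    "Question: {input}"
)

document_chain = create_stuff_documents_chain(llm, prompt)
retrieval_chain = create_retrieval_chain(vector_store.as_retriever(), document_chain)

result = retrieval_chain.invoke({"input": "What medication is the patient on?"})
print(result["answer"])           # the AI-generated answer
for doc in result["context"]:     # the document chunks the answer was grounded in
    print(doc.page_content)
```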
- Streamlit for the interactive UI (see the wiring sketch after this list)
- LangChain for chaining LLM + document retriever
- GROQ (LLaMA 3.1) for language generation
- Google Generative AI Embeddings for document representation
- ObjectBox as a vector store
- PyPDFLoader to parse medical PDFs
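How these pieces typically meet in the Streamlit layer is sketched below; `build_vector_store` and `answer_question` are hypothetical helpers standing in for the ingestion and retrieval sketches above, and the widget labels (apart from the "Creating Vector Store" button named in the usage steps) are assumptions rather than the app's actual UI text.

```python
# Hypothetical Streamlit wiring; build_vector_store / answer_question are stand-in helpers.
import streamlit as st

st.title("ChatDoc")

uploaded_pdf = st.file_uploader("Upload a patient medical record (PDF)", type="pdf")

if st.button("Creating Vector Store") and uploaded_pdf is not None:
    # Build the ObjectBox vector store from the uploaded file (see the ingestion sketch)
    st.session_state["vector_store"] = build_vector_store(uploaded_pdf)
    st.success("Vector store ready")

question = st.text_input("Ask a medical question about the record")
if question and "vector_store" in st.session_state:
    answer, context_docs = answer_question(question, st.session_state["vector_store"])
    st.write(answer)
    with st.expander("Relevant document chunks"):
        for doc in context_docs:
            st.write(doc.page_content)
```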
This is an AI-powered demo and not a certified medical diagnostic tool. Please consult licensed healthcare professionals for actual diagnosis or treatment.