
DocGenius AI - Generative AI Chatbot for your Documents


Architecture

[Figure: DocGenius Architecture]

RAG Implementation

[Figure: DocGenius RAG Architecture]

Chatbot

[Figure: DocGenius UI workflow]

[Figure: DocGenius UI Q&A]

Deployment

Cloudera AMP

[Figure: DocGenius as Cloudera AMP]

Applications

[Figure: DocGenius RAG Architecture]

Model Serving

[Figure: DocGenius RAG Architecture]

Building your custom knowledge base

  • Use a vector database (such as Pinecone) to index your documents; see the sketch below
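
To make this concrete, here is a minimal sketch of loading documents into a vector database. The embedding model, index name, chunk size, and helper names are illustrative assumptions, not values taken from this repo:

    # Hypothetical sketch: chunk documents, embed them, and upsert into Pinecone.
    from pinecone import Pinecone
    from sentence_transformers import SentenceTransformer

    CHUNK_SIZE = 500  # characters per chunk (assumption)

    def chunk_text(text, size=CHUNK_SIZE):
        # Split a document into fixed-size character chunks.
        return [text[i:i + size] for i in range(0, len(text), size)]

    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # example embedding model
    pc = Pinecone(api_key="YOUR_API_KEY")
    index = pc.Index("docgenius-kb")  # hypothetical index name

    def load_document(doc_id, text):
        chunks = chunk_text(text)
        vectors = embedder.encode(chunks).tolist()
        index.upsert(vectors=[
            (f"{doc_id}-{i}", vec, {"text": chunk})
            for i, (vec, chunk) in enumerate(zip(vectors, chunks))
        ])

At query time, the chatbot embeds the user's question the same way and retrieves the nearest chunks as context for the LLM.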

Understanding the User Inputs and Output

Inputs

Select Model - The user can select the Llama 2 13B-parameter chat model (llama-2-13b-chat).

Select Temperature (Randomness of Response) - The user can scale the randomness of the model's response. Lower values yield a more predictable, objective answer, while higher values encourage model creativity.

Select Number of Tokens (Length of Response) - Several preset options are provided. The number of tokens selected directly correlates with the length of the response the model returns.

Question - Just as it sounds: this is where the user provides a question for the model.

Outputs

Llama2 Model Response - This is the response generated by the model given the context retrieved from your vector database. Note that if the question does not relate to content in your knowledge base, you may get hallucinated responses.
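
Taken together, the inputs map naturally onto a single API request. The sketch below assumes a JSON payload and a /chat endpoint; the actual schema is documented in api.md:

    # Hypothetical request sketch; endpoint path and field names are
    # assumptions -- see api.md for the actual schema.
    import requests

    payload = {
        "model": "llama-2-13b-chat",  # Select Model
        "temperature": 0.5,           # Randomness of Response
        "max_tokens": 256,            # Length of Response
        "question": "What does the architecture diagram show?",
    }
    resp = requests.post("http://localhost:8000/chat", json=payload, timeout=120)
    print(resp.json())  # the Llama2 model response, grounded in vector DB context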

Deployment

FastAPI for LLMs

The app directory hosts the FastAPI service for your LLMs.
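
As a rough illustration of what such a service looks like, here is a minimal FastAPI sketch; the route, request model, and stubbed answer are hypothetical rather than copied from app/main.py:

    # Minimal FastAPI sketch; route and field names are hypothetical.
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class ChatRequest(BaseModel):
        model: str
        temperature: float = 0.5
        max_tokens: int = 256
        question: str

    @app.post("/chat")
    def chat(req: ChatRequest) -> dict:
        # In the real app this would run RAG: retrieve context from the
        # vector DB, then call the LLM with the augmented prompt.
        return {"response": f"(stub) You asked: {req.question}"}

Run locally with, for example: uvicorn main:app --host 127.0.0.1 --port 8000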

Chatbot UI (Front end)

The chat-ui directory hosts the code for the chatbot UI.
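
A hedged sketch of how a Flask wrapper might serve the Next.js build output; the file paths and port variable here are assumptions, so check chat-ui/app.py for the actual logic:

    # Hypothetical sketch of serving Next.js build output via Flask.
    import os
    from flask import Flask, send_from_directory

    app = Flask(__name__, static_folder=".next")

    @app.route("/")
    def index():
        # Path to the built entry page is an assumption.
        return send_from_directory(app.static_folder, "server/pages/index.html")

    @app.route("/_next/<path:asset>")
    def assets(asset):
        # Next.js requests its static assets under /_next/.
        return send_from_directory(app.static_folder, asset)

    if __name__ == "__main__":
        # CML applications typically read the serving port from CDSW_APP_PORT.
        app.run(host="127.0.0.1", port=int(os.environ.get("CDSW_APP_PORT", 8100)))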

Requirements

CML Instance Types

  • A GPU instance is required to perform inference on the LLM
  • A GPU instance type with CUDA compute capability 5.0 or higher is recommended (Step 2 will fail if this requirement is not met); a quick verification snippet follows this list
    • The torch libraries used here require a GPU with CUDA compute capability 5.0 or higher (e.g., NVIDIA V100, A100, or T4 GPUs)
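
To verify this up front, a quick check with standard torch calls:

    # Check that a CUDA GPU with compute capability >= 5.0 is visible.
    import torch

    assert torch.cuda.is_available(), "No CUDA GPU visible to torch"
    major, minor = torch.cuda.get_device_capability(0)
    print(f"{torch.cuda.get_device_name(0)}: compute capability {major}.{minor}")
    assert (major, minor) >= (5, 0), "GPU compute capability must be 5.0+"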

Recommended Runtime

JupyterLab - Python 3.9 - Nvidia GPU - 2023.08

AMP Docs

https://docs.cloudera.com/machine-learning/cloud/applied-ml-prototypes/topics/ml-amp-project-spec.html

Resource Requirements

This AMP creates the following workloads with these resource requirements:

  • CML Session: 2 CPU, 16GB MEM
  • CML Jobs: 2 CPU, 8GB MEM
  • CML Application: 2 CPU, 1 GPU, 16GB MEM

External Resources

This AMP requires pip packages and models from Hugging Face. Depending on your CML networking setup, you may need to whitelist some domains (an illustrative download sketch follows the list):

  • pypi.python.org
  • pypi.org
  • pythonhosted.org
  • huggingface.co
  • pinecone.io (if using Pinecone)
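
For illustration, pulling model weights from huggingface.co might look like this; the model id is only an example, and gated models such as Llama 2 additionally require an access token:

    # Illustrative download sketch; the model id is an example, not
    # necessarily the one this AMP uses.
    from huggingface_hub import snapshot_download

    local_dir = snapshot_download(
        repo_id="meta-llama/Llama-2-13b-chat-hf",  # gated; needs an HF token
        local_dir="models/llama-2-13b-chat",
    )
    print("Model files cached at:", local_dir)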

Technologies Used

Open-Source Models and Utilities

Vector Database

Chat API

Deploying on CML

Code Structure

doc-genius-ai/
├── app/                      # Application directory for the API and model serving
│   ├── [..subdirs..]
│   ├── chatbot/              # Model-serving Python files for RAG, prompt, and fine-tuned models
│   └── main.py               # Starts the API
├── chat-ui/                  # Chatbot UI in Next.js
│   ├── [..subdirs..]
│   └── app.py                # Serves the build files in the .next directory via Flask
├── pipeline/                 # Data processing / workflow pipelines and vector load
├── data/                     # Datasets and data files
├── models/                   # LLMs / ML models
├── session/                  # Scripts for CML Sessions and validation tasks
├── images/                   # Project-related images
├── api.md                    # Documentation for the APIs
├── README.md                 # Detailed description of the project
├── .gitignore                # Specifies intentionally untracked files to ignore
├── catalog.yaml              # Descriptive information and metadata for displaying the AMP in the CML Project Catalog
├── .project-metadata.yaml    # Project metadata with configuration and setup details
├── cdsw-build.sh             # Script for building the model dependencies
└── requirements.txt          # Python dependencies for model serving

Interim Fixes

  1. Increase the Ephemeral Storage Limit by navigating to CML Workspace -> Site Administration -> Settings -> Ephemeral Storage (in GB) and setting it to a value >= 50.
     • When a CML model is created, it is loaded into the scratch space of a pod; LLM models are larger than the default 10 GB, which causes issues during deployment.
  2. Under Site Administration > Security, check the box for "Allow applications to be configured with unauthenticated access".