CACBOT.AI is an advanced Retrieval Augmented Generation (RAG) application built on Pinecone (vector database) and open-source Large Language Models (LLMs). It answers questions based on the documents you provide.
To use CACBOT.AI, register an account on the website and log in. You can then access the following features:
- Upload Documents: Upload documents in PDF or TXT format. The platform also supports adding online URLs that contain relevant information.
- Chatting: Once your documents are uploaded, ask any question about them in the playground section; the bot answers based on the content of your uploaded documents.
- Creating Key: The cryptographic key authenticates requests, ensuring that only authorized users can access their documents and keeping each user's information private and confidential (see the sketch after this list).
- Profile: Customize your profile by editing details such as your email, status, and other relevant information.
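As a rough illustration of how a generated key might be used to call the backend, the snippet below sends an authenticated query. The endpoint path, port, and header scheme are assumptions for illustration only, not CACBOT.AI's documented API.

```python
# Hedged example: authenticate a query with a generated key.
# The /api/query route, port 5000, and Bearer header are assumptions,
# not the project's documented API.
import requests

API_KEY = "your-generated-key"  # key created in the "Creating Key" step

response = requests.post(
    "http://localhost:5000/api/query",                # hypothetical endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},   # hypothetical auth scheme
    json={"question": "What does my document say about pricing?"},
)
print(response.json())
```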
- Frontend: ReactJS, TypeScript, and Material-UI
- Backend: Flask, Python
- Retrieval Augmented Generation: LangChain, Pinecone, OpenAI, and Google Vertex/Gemini (see the sketch below).
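As a rough sketch of how this stack fits together (not the project's actual pipeline), the snippet below splits a document into chunks, embeds them into Pinecone, retrieves the chunks most relevant to a question, and asks an OpenAI chat model to answer from them. The index name, model name, and chunk sizes are assumptions; API keys are expected in the OPENAI_API_KEY and PINECONE_API_KEY environment variables.

```python
# Minimal RAG sketch with LangChain + Pinecone + OpenAI.
# Index name "cacbot", chunk sizes, and model name are placeholders.
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_pinecone import PineconeVectorStore
from langchain_text_splitters import RecursiveCharacterTextSplitter


def answer(document_text: str, question: str) -> str:
    # Split the uploaded document into overlapping chunks for embedding.
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    chunks = splitter.split_text(document_text)

    # Embed the chunks and upsert them into an existing Pinecone index.
    store = PineconeVectorStore.from_texts(
        chunks, embedding=OpenAIEmbeddings(), index_name="cacbot"
    )

    # Retrieve the chunks most similar to the question.
    relevant = store.similarity_search(question, k=4)
    context = "\n\n".join(doc.page_content for doc in relevant)

    # Ask the LLM to answer using only the retrieved context.
    llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
    prompt = (
        f"Answer the question using only this context:\n{context}\n\n"
        f"Question: {question}"
    )
    return llm.invoke(prompt).content
```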
To install and run CACBOT.AI on your local machine, follow these steps:
- Clone the repository using the command `git clone https://github.com/flashzzz/cacbot-ai.git`.
- Install all the Python dependencies listed in the requirements.txt file using `pip install -r requirements.txt`.
- Navigate to the frontend directory using the command `cd frontend`.
- Install the frontend dependencies using the command `npm install`.
- Start the Vite React app using the command `npm run dev`.
- Navigate to the backend directory using the command `cd ../backend`.
- Start the Flask server using the command `python main.py`.
- Open your browser and go to `http://localhost:5173` to view the website.
- Conversational Memory Context
- Streaming Response
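A minimal sketch of how these two features could be wired on the Flask side, assuming LangChain's streaming chat interface; the route, model, and in-memory history handling are illustrative assumptions rather than the project's actual implementation.

```python
# Hedged sketch: conversational memory + streaming response with Flask and LangChain.
# The /chat route, model name, and global history list are illustrative assumptions.
from flask import Flask, Response, request
from langchain_openai import ChatOpenAI

app = Flask(__name__)
history = []  # in-memory conversation context; a real app would store this per user


@app.route("/chat", methods=["POST"])
def chat():
    question = request.json["question"]
    # Include earlier turns so the model can resolve follow-up questions.
    messages = history + [("human", question)]
    llm = ChatOpenAI(model="gpt-3.5-turbo", streaming=True)

    def generate():
        answer = ""
        # stream() yields chunks as tokens arrive, so the client can
        # render the response incrementally.
        for chunk in llm.stream(messages):
            answer += chunk.content
            yield chunk.content
        # Persist the finished turn as conversational memory for later questions.
        history.append(("human", question))
        history.append(("ai", answer))

    return Response(generate(), mimetype="text/plain")
```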