The LLM-Product-Assistant is an interactive Q&A system that helps users understand and navigate the functionality of a given product. For this project, we focused on troubleshooting issues users might encounter with Amazon VPC.
- LangChain: HTML loader, text summarization and chunking, wrappers for OpenAI and Pinecone
- OpenAI API: embedding generation, retrieval
- Pinecone: embedding storage and indexing, top_k similarity search
- PEFT: LLaMA fine-tuning
- Python version: 3.10.7
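The retrieval step in the stack above can be sketched in plain Python: a vector store like Pinecone ranks stored embedding vectors by similarity to the query embedding and returns the top_k closest matches. The toy vectors, ids, and helper names below are illustrative stand-ins, not the project's actual code.

```python
import heapq
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k_search(query, index, k=2):
    """Return the ids of the k stored vectors most similar to `query`.

    `index` maps an id to its embedding vector, mimicking (at toy scale)
    what Pinecone does when serving a top_k similarity query.
    """
    return heapq.nlargest(
        k, index, key=lambda doc_id: cosine_similarity(query, index[doc_id])
    )

# Toy 3-dimensional "embeddings" standing in for real OpenAI embeddings.
index = {
    "vpc-routing": [0.9, 0.1, 0.0],
    "vpc-peering": [0.8, 0.2, 0.1],
    "s3-buckets":  [0.0, 0.1, 0.9],
}
print(top_k_search([1.0, 0.0, 0.0], index, k=2))  # the two VPC docs rank highest
```

In the real pipeline the embeddings come from the OpenAI API and the search runs inside Pinecone's index, but the ranking idea is the same.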
- Obtain the required API keys from a team member.
- Clone the repository: `git clone <repository-link>`, then `cd LLM-Product-Assistant`
- Navigate to the main folder: `cd path/to/main/folder`
- Build the Docker image: `docker build -t chatbot -f 07_Docker/Dockerfile .`
- Run the Docker container: `docker run -p 5000:5000 chatbot`
- Access the application: click the link that appears in the console to start interacting with the chatbot.
- Sam Swain: Project Lead
- Zhengyuan (Donald) Li: Generative AI Engineer
- Brian Hong: Generative AI Engineer
- Wencheng Zhang: Generative AI Engineer