ResumeBuddy is an LLM-powered chatbot built with Supabase as the vector store and the OpenAI API for embeddings, with chunk size and overlap tuned for resumes.
- Natural Language Understanding: Leverages OpenAI's GPT for understanding and responding to user queries.
- Document Embeddings: Utilizes vector embeddings for efficient document retrieval.
- Dynamic Conversation Handling: Maintains and uses conversation history for context-aware responses.
- Scalable Architecture: Built with Express.js and Supabase, ensuring scalability and performance.
- User-Friendly Interface: Simple and intuitive UI for seamless interaction.
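As a sketch of how context-aware responses can work, the helper below flattens a conversation history into a prompt-friendly string. The function name mirrors the repo's formatConvHistory.js, but the exact signature and turn format are assumptions, not the repo's actual code:

```javascript
// Hypothetical sketch of a conversation-history formatter, modeled on
// formatConvHistory.js; the real implementation in the repo may differ.
function formatConvHistory(messages) {
  // Alternate human/AI turns so the LLM sees who said what.
  return messages
    .map((message, i) => (i % 2 === 0 ? `Human: ${message}` : `AI: ${message}`))
    .join("\n");
}

// Example usage:
const history = formatConvHistory(["What roles fit my resume?", "Backend engineering."]);
console.log(history);
```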
- Frontend: HTML, CSS, JavaScript
- Backend: Node.js, Express.js
- Database: Supabase
- AI/ML: OpenAI GPT, LangChain
- PDF Parsing: pdf-parse
- Environment Management: dotenv
- Build Tools: Vite
- Clone the repository:
git clone https://github.com/yourusername/resumebuddy.git
- Navigate to the project directory:
cd resumebuddy
- Install the dependencies:
npm install
- Set up environment variables:
Create a .env file in the root directory and add your API keys and other environment variables:
OPENAI_API_KEY=your_openai_api_key
SUPABASE_API_KEY=your_supabase_api_key
SUPABASE_URL_LC_CHATBOT=your_supabase_url
PORT=3000
- Start the server:
npm start
- Open your browser and navigate to http://localhost:3000 to interact with ResumeBuddy.
- index.html: The main HTML file that serves the chatbot UI.
- server.js: The main server file that handles API requests and interactions with the AI model.
- package.json: Contains project metadata and dependencies.
- txt_embed.js: Script for embedding text documents into the vector store.
- combineDocuments.js: Utility to combine multiple documents into a single text.
- formatConvHistory.js: Utility to format conversation history.
- retriever.js: Script to set up the document retriever using Supabase and OpenAI embeddings.
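As one illustration of these utilities, the sketch below shows how a document combiner might work. LangChain retrievers return Document objects carrying a pageContent field, so a combiner typically joins those into one context string; the function name echoes combineDocuments.js, but the separator and details are assumptions:

```javascript
// Hypothetical sketch of combineDocuments.js: joins the pageContent of
// retrieved LangChain-style Document objects into one context string.
// The two-newline separator is an assumption, not the repo's exact choice.
function combineDocuments(docs) {
  return docs.map((doc) => doc.pageContent).join("\n\n");
}

// Example usage with mock retrieved documents:
const context = combineDocuments([
  { pageContent: "5 years of Node.js experience." },
  { pageContent: "Led a team of four engineers." },
]);
console.log(context);
```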
- PDF Loading and Splitting: The resume PDF is loaded and split into manageable chunks using RecursiveCharacterTextSplitter.
- Embedding and Storing: The chunks are embedded into vector representations and stored in a Supabase vector store.
- Question Handling: User queries are processed, and standalone questions are generated if necessary.
- Context Retrieval: Relevant document chunks are retrieved based on the query context.
- Answer Generation: The AI model generates context-aware answers using the retrieved information.
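To make the splitting step concrete, here is a minimal plain-JavaScript sketch of fixed-size chunking with overlap. The real pipeline uses LangChain's RecursiveCharacterTextSplitter, which additionally splits on natural separators such as paragraphs and sentences; the chunkSize and chunkOverlap values here are illustrative only:

```javascript
// Minimal illustration of the sliding-window idea behind chunking with
// overlap. The actual project uses RecursiveCharacterTextSplitter.
function chunkText(text, chunkSize = 500, chunkOverlap = 50) {
  const chunks = [];
  const step = chunkSize - chunkOverlap; // how far the window advances each time
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last window reached the end
  }
  return chunks;
}

// Example: 1200 characters with 500-char chunks and 50-char overlap.
const chunks = chunkText("a".repeat(1200), 500, 50);
console.log(chunks.length); // → 3
```

The overlap keeps a sentence that straddles a chunk boundary fully present in at least one chunk, which improves retrieval quality for resumes with dense, short sections.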
Feel free to open issues or submit pull requests for improvements and bug fixes.