UBC CIC genAI Hackathon 2024
This sample application lets you ask natural-language questions about a resume PDF you upload. It combines the text-generation and analysis capabilities of an LLM with a vector search over the document content. The solution uses Amazon Bedrock, a serverless service, to access foundation models, with a Streamlit UI on the frontend.
Note: This architecture creates resources that incur costs. Please see the AWS Pricing page for details and make sure you understand the costs before deploying this stack.
- Amazon Bedrock for serverless embedding and inference
- LangChain to orchestrate a Q&A LLM chain
- FAISS vector store
- Frontend built in Streamlit
Serverless Resume PDF Suggestion architecture
- A user uploads a Resume PDF document into the platform
- The upload triggers a metadata extraction and document-embedding process. The process converts the text in the document into vectors, which are loaded into a vector index.
- When a user chats with a resume PDF document and sends a prompt to the backend, a function retrieves the index and searches for information related to the prompt.
- An LLM then uses the results of this vector search, previous messages in the conversation, and its general-purpose capabilities to formulate a response to the user.
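The embed-and-retrieve flow above can be sketched in plain Python. This is a toy stand-in, not the app's implementation: a bag-of-words counter replaces the Bedrock embedding model, and brute-force cosine similarity replaces the FAISS index, but the chunk → embed → index → search shape is the same.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words term counts (stands in for a Bedrock embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_index(chunks):
    """Embed each resume chunk; FAISS would store these vectors instead of a list."""
    return [(chunk, embed(chunk)) for chunk in chunks]

def search(index, prompt, k=2):
    """Retrieve the k chunks most similar to the prompt (brute-force vector search)."""
    q = embed(prompt)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

# Hypothetical resume chunks produced by the PDF-splitting step
chunks = [
    "Experience: built serverless data pipelines on AWS Lambda",
    "Education: BSc Computer Science, UBC",
    "Skills: Python, LangChain, FAISS, Streamlit",
]
index = build_index(chunks)
hits = search(index, "What AWS experience does the candidate have?", k=1)
```

In the real stack, `hits` plus the prior chat messages would be passed to the Bedrock LLM as context for the answer.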
- Support for uploading a job posting alongside the resume for resume tailoring.
- Cover letter generation.
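Cover letter generation can be framed as a prompt template filled with the uploaded job posting and retrieved resume excerpts, then sent to the Bedrock model. The sketch below is a hypothetical template; the prompt wording and variable names are assumptions, not the app's actual prompt.

```python
# Hypothetical prompt template; the real app's wording may differ
COVER_LETTER_PROMPT = """You are a career assistant. Write a concise cover letter.

Job posting:
{job_posting}

Relevant resume excerpts:
{resume_chunks}

Tailor the letter to the posting and keep it under 300 words."""

def build_cover_letter_prompt(job_posting, resume_chunks):
    """Fill the template; the result would be sent to the Bedrock LLM for generation."""
    return COVER_LETTER_PROMPT.format(
        job_posting=job_posting,
        resume_chunks="\n".join(resume_chunks),
    )

prompt = build_cover_letter_prompt(
    "Backend engineer, AWS serverless stack",
    ["Built Lambda-based ETL pipelines", "Python, FAISS, LangChain"],
)
```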
- AWS SAM CLI
- Python 3.11 or greater
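With the prerequisites installed, a typical SAM build-and-deploy workflow looks like the following. The stack parameters and the Streamlit entry-point filename are assumptions; adjust them to this repository's layout.

```shell
# Build the serverless backend from the SAM template
sam build

# Deploy interactively; prompts for stack name, region, and parameters
sam deploy --guided

# Run the Streamlit frontend locally (entry-point filename is an assumption)
streamlit run app.py
```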