# OptimAIzers
This application provides a Streamlit-based client interface to parse, analyze, and optimize a resume against a given job description. The solution integrates multiple components:
- **Client (Streamlit App):** A front-end interface for users to upload their resumes, provide a job description URL, and receive a tailored analysis.
- **Backend Services (APIs):**
  - **PDF Parser API:** Extracts and processes text from uploaded resumes stored in Amazon S3.
  - **Web Scraper API:** Retrieves and processes job description content from a given URL.
  - **Resume Analysis API:** Compares the candidate's resume against the job description and provides an optimization analysis.
- **Amazon S3:** Storage for both the uploaded resume PDFs and processed job descriptions.
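To make the data flow between these components concrete, here is a rough sketch of the calls the client makes to the three backend services. It is illustrative only: the function names, payload fields (`s3_key`, `url`), and the response shape (a JSON body with a `text` field) are assumptions, not the APIs' documented contracts.

```python
import requests  # third-party; installed in the setup steps below

def parse_resume(pdf_parser_url, s3_key):
    """Ask the PDF Parser API to extract text from a resume stored in S3."""
    resp = requests.post(pdf_parser_url, json={"s3_key": s3_key}, timeout=30)
    resp.raise_for_status()
    return resp.json()["text"]

def scrape_job_description(web_scraper_url, job_url):
    """Ask the Web Scraper API to fetch and clean a job posting."""
    resp = requests.post(web_scraper_url, json={"url": job_url}, timeout=30)
    resp.raise_for_status()
    return resp.json()["text"]

def build_analysis_request(resume_text, jd_text, analysis_type):
    """Assemble the payload sent to the Resume Analysis API."""
    return {
        "resume": resume_text,
        "job_description": jd_text,
        "analysis_type": analysis_type,
    }
```

The Streamlit client chains these together: upload the PDF to S3, parse it, scrape the job posting, then send both texts to the analysis endpoint.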
## Prerequisites

- Python 3.8+ installed locally.
- `pip` (the Python package manager) for installing dependencies.
- **AWS credentials:** a valid Access Key ID and Secret Access Key with permission to read from and write to the specified S3 bucket.
- **`config.ini` file:** a configuration file containing:
  - the S3 bucket name
  - AWS credentials and region
  - URLs for the PDF Parser, Web Scraper, and Resume Analysis APIs
A sample `config.ini` might look like:

```ini
[s3]
bucket_name = your-s3-bucket-name

[s3readwrite]
aws_access_key_id = YOUR_AWS_ACCESS_KEY
aws_secret_access_key = YOUR_AWS_SECRET_KEY
region_name = your-region

[api]
pdf_parser_url = https://your-pdf-parser-api.com/parse
web_scraper_url = https://your-web-scraper-api.com/scrape
resume_analysis_url = https://your-analysis-api.com/analyze
```
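The app can load these values at startup with Python's standard-library `configparser`. The snippet below parses the sample inline so it is self-contained; the real app would call `config.read("config.ini")` instead.

```python
import configparser

# Sample config.ini contents, matching the sections and keys shown above.
SAMPLE = """
[s3]
bucket_name = your-s3-bucket-name

[s3readwrite]
aws_access_key_id = YOUR_AWS_ACCESS_KEY
aws_secret_access_key = YOUR_AWS_SECRET_KEY
region_name = your-region

[api]
pdf_parser_url = https://your-pdf-parser-api.com/parse
web_scraper_url = https://your-web-scraper-api.com/scrape
resume_analysis_url = https://your-analysis-api.com/analyze
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)  # in the app: config.read("config.ini")

bucket = config["s3"]["bucket_name"]           # "your-s3-bucket-name"
region = config["s3readwrite"]["region_name"]  # "your-region"
analysis_url = config["api"]["resume_analysis_url"]
```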
## Installation and Setup

1. **Clone or download the repository:**

   ```bash
   git clone https://github.com/SAIGANESH02/OptimAIzer.git
   cd OptimAIzer
   ```

2. **Create and activate a virtual environment (optional but recommended):**

   ```bash
   python3 -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. **Install dependencies.** If the project includes a `requirements.txt` file, run:

   ```bash
   pip install -r requirements.txt
   ```

   If not, manually install the required packages (`configparser` ships with Python 3, so only the third-party packages are needed):

   ```bash
   pip install streamlit requests boto3
   ```

4. **Set up configuration:**
   - Copy the sample `config.ini` (above) into the project directory.
   - Update the file with your AWS credentials, region, bucket name, and API endpoints.

5. **Run the application.** For example, if your client code is in `app.py`, run:

   ```bash
   streamlit run app.py
   ```

6. **Access the application.** After running the command, Streamlit starts a local server, accessible by default at:

   ```
   http://localhost:8501
   ```
## Usage

1. **Upload a PDF resume:** Click the "Browse files" button in the interface and select your resume PDF.
2. **Enter the job description URL:** Input the URL of the job description you want to analyze against.
3. **Select an analysis type:** Choose between "Quick Scan", "Detailed Analysis", or "ATS Optimization".
4. **Analyze:** Click the "Analyze Resume" button and wait for the results to appear.
## Troubleshooting

- Make sure your APIs are up and running and that the correct endpoints are specified in the `config.ini` file.
- Ensure that the S3 bucket is accessible with the provided credentials and that the credentials have the proper permissions for `get_object` and `put_object`.
- If you encounter errors or issues, check the Streamlit logs in the terminal for debug information.
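A simple pre-flight check can surface a misconfigured `config.ini` before the app starts, rather than as a confusing API failure later. This helper is a hypothetical addition, not part of the repository; the required sections and keys mirror the sample `config.ini` above.

```python
import configparser
from typing import Dict, List

# Hypothetical pre-flight check: every section/key the app is expected
# to read from config.ini, based on the sample configuration above.
REQUIRED: Dict[str, List[str]] = {
    "s3": ["bucket_name"],
    "s3readwrite": ["aws_access_key_id", "aws_secret_access_key", "region_name"],
    "api": ["pdf_parser_url", "web_scraper_url", "resume_analysis_url"],
}

def missing_config_keys(config: configparser.ConfigParser) -> List[str]:
    """Return every required 'section.key' entry absent from the config."""
    missing = []
    for section, keys in REQUIRED.items():
        for key in keys:
            if not config.has_option(section, key):
                missing.append(f"{section}.{key}")
    return missing
```

Running this on the parsed config at startup and aborting when the list is non-empty pinpoints exactly which entries still need to be filled in.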
This concludes the setup instructions. You should now be able to run and interact with the application.