
suhaasteja/AutoApply


AutoApply — AI Job Application Agent

Read your resume. Score every job. Pick the right resume. Apply on autopilot.


What it does

  1. Parses your resume — extracts your name, skills, and experience (PDF or TXT)
  2. Searches LinkedIn — scrapes job listings via your real Chrome session
  3. Scores each job 0–100 — Claude compares every listing against your resume
  4. Picks the best resume — if you have multiple tailored resumes, Claude selects the strongest match per job
  5. Auto-fills Easy Apply — browser automation submits the application for you
  6. Tracks everything — every application logged in a local SQLite dashboard
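The six steps above amount to one loop. A minimal sketch (illustrative only — the stub functions stand in for the real resume_parser, job_scraper, ai_matcher, auto_filler, and tracker modules, which call Claude, LinkedIn, and SQLite):

```python
# Illustrative stubs standing in for the real modules.
def parse_resume(path):      return {"skills": ["python", "ml"]}        # resume_parser
def search_linkedin(query):  return [{"title": query,
                                      "desc": "python ml role"}]        # job_scraper
def score_job(profile, job):                                            # ai_matcher
    return sum(skill in job["desc"] for skill in profile["skills"]) * 50
def pick_resume(job):        return "resume_ai_engineer.txt"            # ai_matcher

def run_pipeline(query, dry_run=True):
    profile = parse_resume("resume.pdf")               # 1. parse resume
    results = []
    for job in search_linkedin(query):                 # 2. search jobs
        score = score_job(profile, job)                # 3. score 0-100
        resume = pick_resume(job)                      # 4. best resume per job
        # 5. auto_filler would submit here when dry_run is False
        results.append((job["title"], score, resume))  # 6. tracked in SQLite
    return results
```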

What's real vs demo right now

| Feature | Status |
| --- | --- |
| Resume parsing | ✅ Real |
| Claude job scoring | ✅ Real (uses your API key) |
| Resume matching | ✅ Real (Claude picks from your resumes/ folder) |
| SQLite tracking | ✅ Real |
| LinkedIn scraping | ⚠️ Demo data (needs Chrome CDP connected) |
| Easy Apply auto-fill | ⚠️ Needs Chrome CDP connected |

Setup

1. Get the project

cd ~/Desktop/AutoApply

2. Add your API key

Create a .env file (already gitignored):

ANTHROPIC_API_KEY=sk-ant-your-key-here
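config.py presumably validates this key at startup; a minimal sketch of that check (load_api_key is a hypothetical name, not necessarily the project's):

```python
import os

def load_api_key():
    # A .env loader such as python-dotenv would populate os.environ first.
    key = os.environ.get("ANTHROPIC_API_KEY", "")
    if not key.startswith("sk-ant-"):
        raise RuntimeError("ANTHROPIC_API_KEY not set (check your .env file)")
    return key
```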

3. Install dependencies

~/.pyenv/versions/3.11.8/bin/python3 setup.py

All green means you're good:

✅ Python 3.11
✅ Dependencies installed
✅ ANTHROPIC_API_KEY set
✅ pdfplumber, rich, browser-harness
✅ SQLite DB created
✅ Sample resume created

4. Add your resume(s)

  • Drop your main resume as resume.pdf in the project root
  • For resume matching, add tailored versions to the resumes/ folder:
resumes/
├── resume_ai_engineer.txt
├── resume_backend_python.txt
├── resume_ml_research.txt
└── resume_frontend.txt

Claude will automatically pick the best one per job. No resumes? It uses the sample.
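The selection pool and sample fallback might look like this (a sketch; available_resumes is a hypothetical helper, not the project's actual API):

```python
from pathlib import Path

def available_resumes(folder="resumes"):
    """Tailored variants from resumes/, or the bundled sample as a fallback."""
    variants = sorted(Path(folder).glob("resume_*.txt"))
    return variants or [Path("sample_resume.txt")]
```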


Launch

ANTHROPIC_API_KEY=sk-ant-your-key-here ~/.pyenv/versions/3.11.8/bin/python3 app.py

Open http://localhost:8080


Using the Dashboard

Quick demo (no API key needed)

Click "Load Demo Data" — seeds 7 realistic applications instantly. Good for showing off the UI.

Run the real pipeline

  1. Type a job query (e.g. ML engineer remote)
  2. Toggle Dry Run ON to score without submitting
  3. Click Run AutoApply
  4. Watch the live log:
    • Resume parsed
    • LinkedIn searched
    • Jobs scored by Claude
    • Best resume selected per job
    • Forms auto-filled (when Chrome CDP is connected)
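Under the hood the live log rides on Server-Sent Events; each line travels as one `data:` frame. A minimal sketch of the framing (not the app's actual code — Flask would wrap the generator in a Response with mimetype text/event-stream):

```python
def sse_format(message: str) -> str:
    # text/event-stream framing: a "data:" line plus a blank separator line.
    return f"data: {message}\n\n"

def stream_log(stages):
    # Each pipeline stage becomes one SSE frame as it completes.
    for stage in stages:
        yield sse_format(stage)
```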

Update statuses

Click any status dropdown in the table: Applied → Interview → Offer 🎉 → Rejected
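Each dropdown change is a simple UPDATE against the local database. A sketch with a hypothetical applications schema (the real tracker.py may differ):

```python
import sqlite3

VALID_STATUSES = {"Applied", "Interview", "Offer", "Rejected"}

conn = sqlite3.connect(":memory:")  # the real app uses autoapply.db
conn.execute("CREATE TABLE applications (id INTEGER PRIMARY KEY, company TEXT, status TEXT)")
conn.execute("INSERT INTO applications (company, status) VALUES ('Anthropic', 'Applied')")

def update_status(conn, app_id, status):
    # Valid statuses mirror the dashboard dropdown.
    if status not in VALID_STATUSES:
        raise ValueError(f"unknown status: {status}")
    conn.execute("UPDATE applications SET status = ? WHERE id = ?", (status, app_id))
    conn.commit()
```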


Resume Matching

After the pipeline runs, a "Resume Matched Per Job" section appears below the scored jobs. Claude reads each job description and each resume filename/content, then picks the strongest fit.

Example output:

  • Senior AI Engineer @ Anthropic → resume_ai_engineer.txt
  • ML Engineer @ OpenAI → resume_ml_research.txt
  • Python Developer @ Stripe → resume_backend_python.txt
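One plausible way to pose the choice to Claude is a single prompt listing the candidates; build_match_prompt below is hypothetical, and the real ai_matcher.py may structure this differently:

```python
def build_match_prompt(job_title, job_description, resume_names):
    # Ask for exactly one filename so the response can be mapped
    # back to a file without parsing free-form prose.
    options = "\n".join(f"- {name}" for name in resume_names)
    return (
        f"Job: {job_title}\n\n{job_description}\n\n"
        f"Which of these resumes fits best? Reply with one filename only:\n{options}"
    )
```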

CLI Usage

# Full pipeline
~/.pyenv/versions/3.11.8/bin/python3 main.py

# With specific resume
~/.pyenv/versions/3.11.8/bin/python3 main.py --resume resume.pdf

# Custom job query
~/.pyenv/versions/3.11.8/bin/python3 main.py --query "backend engineer TypeScript"

# Dry run (score only, no submissions)
~/.pyenv/versions/3.11.8/bin/python3 main.py --dry-run

# View stats
~/.pyenv/versions/3.11.8/bin/python3 tracker.py stats
~/.pyenv/versions/3.11.8/bin/python3 tracker.py list
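The flags above map naturally onto argparse; a sketch of how main.py might declare them (defaults are illustrative):

```python
import argparse

def build_parser():
    p = argparse.ArgumentParser(prog="main.py", description="AutoApply pipeline")
    p.add_argument("--resume", default="resume.pdf", help="path to your resume")
    p.add_argument("--query", default="ML engineer remote", help="job search query")
    p.add_argument("--dry-run", action="store_true", help="score only, no submissions")
    return p
```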

Project Structure

AutoApply/
├── app.py              # Flask server + SSE live stream
├── main.py             # CLI entry point
├── resume_parser.py    # PDF/TXT → Claude → structured JSON
├── job_scraper.py      # LinkedIn scraper via browser-harness (CDP)
├── ai_matcher.py       # Claude scores jobs + picks best resume
├── auto_filler.py      # Easy Apply form automation
├── tracker.py          # SQLite logging + stats
├── config.py           # API key, paths, model settings
├── setup.py            # Environment checker + DB init
├── resumes/            # Your tailored resume variants go here
├── templates/
│   └── index.html      # Web dashboard (dark UI, vanilla JS)
├── sample_resume.txt   # Fallback demo resume
└── autoapply.db        # SQLite DB (auto-created on first run)

Troubleshooting

"No jobs found" — Chrome needs to be open and logged into LinkedIn. The scraper connects via CDP.

"ANTHROPIC_API_KEY not set" — Check your .env file. The key should be just sk-ant-... with no export prefix.

"Port 8080 in use" — Start the server on another port: PORT=3000 ~/.pyenv/versions/3.11.8/bin/python3 app.py

"Easy Apply not found" — Not all LinkedIn jobs have Easy Apply. The agent skips those automatically.

Pipeline times out — LinkedIn can be slow. Use dry-run mode for demos.


Tech Stack

  • Python 3.11 — core logic
  • Anthropic Claude — resume parsing, job scoring, resume selection
  • browser-harness — CDP browser automation (your real Chrome session)
  • Flask — web server + SSE for live streaming
  • pdfplumber — PDF text extraction
  • SQLite — zero-setup local tracking
  • ARA — AI orchestrator on macOS
