Read your resume. Score every job. Pick the right resume. Apply on autopilot.
- Parses your resume — extracts your name, skills, and experience (PDF or TXT)
- Searches LinkedIn — scrapes job listings via your real Chrome session
- Scores each job 0–100 — Claude compares every listing against your resume
- Picks the best resume — if you have multiple tailored resumes, Claude selects the strongest match per job
- Auto-fills Easy Apply — browser automation submits the application for you
- Tracks everything — every application logged in a local SQLite dashboard
| Feature | Status |
|---|---|
| Resume parsing | ✅ Real |
| Claude job scoring | ✅ Real (uses your API key) |
| Resume matching | ✅ Real (Claude picks from your resumes/ folder) |
| SQLite tracking | ✅ Real |
| LinkedIn scraping | ✅ Real (via CDP; needs Chrome logged into LinkedIn) |
| Easy Apply auto-fill | ✅ Real (via CDP browser automation) |
```bash
cd ~/Desktop/AutoApply
```

Create a `.env` file (already gitignored):

```bash
ANTHROPIC_API_KEY=sk-ant-your-key-here
```

Run the setup checker:

```bash
~/.pyenv/versions/3.11.8/bin/python3 setup.py
```

All green means you're good:

```
✅ Python 3.11
✅ Dependencies installed
✅ ANTHROPIC_API_KEY set
✅ pdfplumber, rich, browser-harness
✅ SQLite DB created
✅ Sample resume created
```
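The checks above can be sketched with nothing but the standard library. This is not the project's actual `setup.py` — just an illustration of the kind of verification it performs (the module list is an assumption):

```python
import importlib.util
import os

def check_environment(required_modules=("pdfplumber", "rich", "flask")):
    """Return a list of human-readable problems; an empty list means all green."""
    problems = []
    # The key must be present and look like an Anthropic key (sk-ant-...).
    if not os.environ.get("ANTHROPIC_API_KEY", "").startswith("sk-ant-"):
        problems.append("ANTHROPIC_API_KEY missing or malformed")
    # find_spec returns None for any top-level module that isn't installed.
    for name in required_modules:
        if importlib.util.find_spec(name) is None:
            problems.append(f"dependency not installed: {name}")
    return problems
```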
- Drop your main resume as `resume.pdf` in the project root
- For resume matching, add tailored versions to the `resumes/` folder:

```
resumes/
├── resume_ai_engineer.txt
├── resume_backend_python.txt
├── resume_ml_research.txt
└── resume_frontend.txt
```

Claude will automatically pick the best one per job. No resumes? It uses the sample.
Start the dashboard:

```bash
ANTHROPIC_API_KEY=sk-ant-your-key-here ~/.pyenv/versions/3.11.8/bin/python3 app.py
```

Click "Load Demo Data" — seeds 7 realistic applications instantly. Good for showing off the UI.
- Type a job query (e.g. `ML engineer remote`)
- Toggle Dry Run ON to score without submitting
- Click Run AutoApply
- Watch the live log:
  - Resume parsed
  - LinkedIn searched
  - Jobs scored by Claude
  - Best resume selected per job
  - Forms auto-filled (when Chrome CDP is connected)
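In the real pipeline the 0–100 score comes from Claude reading the full listing against your parsed resume. As a rough illustration of the scoring contract only — not the actual prompt or logic — a keyword-overlap stand-in looks like this:

```python
def score_job(resume_skills, job_description):
    """Toy 0-100 relevance score: the fraction of resume skills named in
    the posting. The real pipeline asks Claude for this judgment; this
    stand-in only shows the shape of the output."""
    if not resume_skills:
        return 0
    text = job_description.lower()
    hits = sum(1 for skill in resume_skills if skill.lower() in text)
    return round(100 * hits / len(resume_skills))
```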
Click any status dropdown in the table: Applied → Interview → Offer 🎉 → Rejected
After the pipeline runs, a "Resume Matched Per Job" section appears below the scored jobs. Claude reads each job description and each resume filename/content, then picks the strongest fit.
Example output:
- Senior AI Engineer @ Anthropic → `resume_ai_engineer.txt`
- ML Engineer @ OpenAI → `resume_ml_research.txt`
- Python Developer @ Stripe → `resume_backend_python.txt`
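The real selection is semantic and done by Claude; a toy baseline that captures the shape of the interface (job description and resume texts in, best-match filename out) could look like:

```python
def pick_resume(job_description, resumes):
    """Pick the resume whose words overlap most with the job description.
    `resumes` maps filename -> plain text. Claude does this semantically in
    the real pipeline; word overlap is only an illustrative baseline."""
    job_words = set(job_description.lower().split())

    def overlap(text):
        return len(job_words & set(text.lower().split()))

    return max(resumes, key=lambda name: overlap(resumes[name]))
```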
```bash
# Full pipeline
~/.pyenv/versions/3.11.8/bin/python3 main.py

# With specific resume
~/.pyenv/versions/3.11.8/bin/python3 main.py --resume resume.pdf

# Custom job query
~/.pyenv/versions/3.11.8/bin/python3 main.py --query "backend engineer TypeScript"

# Dry run (score only, no submissions)
~/.pyenv/versions/3.11.8/bin/python3 main.py --dry-run

# View stats
~/.pyenv/versions/3.11.8/bin/python3 tracker.py stats
~/.pyenv/versions/3.11.8/bin/python3 tracker.py list
```

```
AutoApply/
├── app.py              # Flask server + SSE live stream
├── main.py             # CLI entry point
├── resume_parser.py    # PDF/TXT → Claude → structured JSON
├── job_scraper.py      # LinkedIn scraper via browser-harness (CDP)
├── ai_matcher.py       # Claude scores jobs + picks best resume
├── auto_filler.py      # Easy Apply form automation
├── tracker.py          # SQLite logging + stats
├── config.py           # API key, paths, model settings
├── setup.py            # Environment checker + DB init
├── resumes/            # Your tailored resume variants go here
├── templates/
│   └── index.html      # Web dashboard (dark UI, vanilla JS)
├── sample_resume.txt   # Fallback demo resume
└── autoapply.db        # SQLite DB (auto-created on first run)
```
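`tracker.py`'s exact schema isn't shown here; a minimal sketch of SQLite application logging with a stats query — assuming a hypothetical `applications` table — might look like:

```python
import sqlite3

def init_db(conn):
    # Hypothetical schema; the real tracker.py may store more columns.
    conn.execute("""CREATE TABLE IF NOT EXISTS applications (
        id INTEGER PRIMARY KEY,
        title TEXT,
        company TEXT,
        score INTEGER,
        status TEXT DEFAULT 'Applied')""")

def log_application(conn, title, company, score):
    conn.execute(
        "INSERT INTO applications (title, company, score) VALUES (?, ?, ?)",
        (title, company, score))

def stats(conn):
    # Count applications per status, e.g. {"Applied": 5, "Interview": 2}.
    return dict(conn.execute(
        "SELECT status, COUNT(*) FROM applications GROUP BY status"))
```

SQLite's zero-setup nature is why no server or migration step appears anywhere in the install instructions — `autoapply.db` is just a file.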
"No jobs found" — Chrome needs to be open and logged into LinkedIn. The scraper connects via CDP.
"ANTHROPIC_API_KEY not set" — Check your .env file. The key should be just sk-ant-... with no export prefix.
Port 8080 in use — PORT=3000 ~/.pyenv/versions/3.11.8/bin/python3 app.py
"Easy Apply not found" — Not all LinkedIn jobs have Easy Apply. The agent skips those automatically.
Pipeline times out — LinkedIn can be slow. Use dry-run mode for demos.
- Python 3.11 — core logic
- Anthropic Claude — resume parsing, job scoring, resume selection
- browser-harness — CDP browser automation (your real Chrome session)
- Flask — web server + SSE for live streaming
- pdfplumber — PDF text extraction
- SQLite — zero-setup local tracking
- ARA — AI orchestrator on macOS
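The SSE live stream in the stack above boils down to a simple wire format: each message is one or more `data:` lines terminated by a blank line, which Flask can yield from a generator behind a `text/event-stream` response. A sketch of the serializer (the actual `app.py` implementation may differ):

```python
def sse_event(data, event=None):
    """Serialize one Server-Sent Events message.

    Multi-line payloads become multiple data: lines, per the SSE spec;
    the trailing blank line marks the end of the message."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    for chunk in str(data).splitlines() or [""]:
        lines.append(f"data: {chunk}")
    return "\n".join(lines) + "\n\n"
```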