Intelligent LinkedIn opportunity analysis and automated response generation powered by AI
Automate your LinkedIn job search with enterprise-grade AI. Analyze recruiter messages, score opportunities, and generate personalized responses - all while you focus on what matters.
```bash
git clone <repository-url> && cd nexton

# Choose one:
make start-lite   # Lite: backend + frontend + postgres (~500MB RAM)
make start        # Full: all services including Celery, Redis (~2GB RAM)
```

That's it! Open http://localhost:3000
| Command | Services | Features | RAM |
|---|---|---|---|
| `make start-lite` | 3 (postgres, backend, frontend) | Manual scraping, no emails | ~500MB |
| `make start` | 8 (+ Redis, Celery, Mailpit, Flower) | Scheduled jobs, emails, monitoring | ~2GB |
Note: Both versions require editing `.env` with your LinkedIn credentials. The `make start*` commands auto-create it from `.env.example`. For running without Docker, see Lite Version without Docker.
| Dashboard | Opportunities | Responses |
|---|---|---|
| Stats, charts, scan button | Filter, search, score breakdown | Approve, edit, decline AI responses |
- Choose Your Version
- The Problem
- The Solution
- Key Features
- Web Dashboard
- Architecture
- Tech Stack
- Quick Start
- Usage Examples
- Observability
- Configuration
- Development
- Deployment
- Documentation
- Contributing
| Grafana Monitoring | Jaeger Tracing | API Documentation |
|---|---|---|
| Track opportunities, pipeline performance | Visualize complete request flows | Test endpoints in the browser |
| Celery Flower | Prometheus | PostgreSQL |
|---|---|---|
| Monitor background jobs | Query custom metrics | Persistent opportunity storage |
Job searching on LinkedIn is time-consuming:
- 50+ recruiter messages per month that need individual responses
- Manual analysis of each opportunity (salary, tech stack, company)
- Context switching between LinkedIn, research, and drafting responses
- Missed opportunities due to delayed responses
- Repetitive work that could be automated
LinkedIn AI Agent is an intelligent automation system that:
- Scrapes your LinkedIn messages once daily (9 AM)
- Analyzes each opportunity using AI (DSPy + LLM)
- Scores opportunities based on your preferences (tech stack, salary, location)
- Generates personalized responses adapted to your professional situation
- Sends ONE daily summary email with all new opportunities
- Sends approved responses back to LinkedIn
All running on your infrastructure with full observability and production-grade reliability.
- AI-Powered Extraction: Automatically extracts company, role, salary, tech stack from messages
- Smart Scoring: Multi-dimensional scoring (tech match, salary, seniority, company quality)
- Tiered Classification: A/B/C/D tier system for opportunity prioritization
- Multi-Model Support: Use OpenAI, Anthropic, or Ollama (local/free) for LLM processing
- Context-Aware Responses: Generates human-like responses that mirror language and tone
- Real-time Granular Streaming: Watch the AI analyze messages step-by-step (extracting, scoring, drafting) in real-time via Server-Sent Events (SSE)
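The multi-dimensional scoring and A/B/C/D tiering can be pictured with a small sketch. The weights and cutoffs below are illustrative assumptions, not the project's actual values:

```python
# Hypothetical weights and tier cutoffs -- illustrative only.
WEIGHTS = {"tech": 0.4, "salary": 0.3, "seniority": 0.2, "company": 0.1}
TIER_CUTOFFS = [(80, "A"), (60, "B"), (40, "C")]  # anything below 40 is "D"

def overall_score(dimensions: dict[str, float]) -> float:
    """Weighted sum of per-dimension scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[k] * dimensions.get(k, 0.0) for k in WEIGHTS)

def tier_for(score: float) -> str:
    """Map an overall score to an A/B/C/D tier."""
    for cutoff, tier in TIER_CUTOFFS:
        if score >= cutoff:
            return tier
    return "D"

scores = {"tech": 90, "salary": 70, "seniority": 80, "company": 60}
total = overall_score(scores)      # 0.4*90 + 0.3*70 + 0.2*80 + 0.1*60 = 79.0
print(total, tier_for(total))      # 79.0 B
```

A strong tech match alone is not enough to reach tier A here; the weighting forces balance across salary, seniority, and company quality.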
Daily at 9 AM:

```
LinkedIn Messages -> Scraper -> AI Analysis -> Score & Tier
                                     |
                                     v
  Generate Personalized Response (based on your job status)
                                     |
                                     v
                           Store in Database
                                     |
                                     v
          ONE Daily Summary Email with ALL opportunities
                                     |
                                     v
           Review -> Edit -> Approve -> Send to LinkedIn
```
| Feature | Description |
|---|---|
| Daily Scraping | Playwright-based LinkedIn scraper runs once daily at 9 AM |
| Smart Caching | Redis-based multi-layer caching reduces LLM calls by 60% |
| Background Jobs | Celery Beat schedules daily scraping and cleanup tasks |
| Daily Summary Email | ONE beautiful HTML email with all new opportunities |
| Mailpit Integration | Local email testing in development (catches all emails) |
| Response Workflow | Review, edit, approve, and send responses via REST API |
| Rate Limiting | Respects LinkedIn limits to avoid account restrictions |
| Session Management | Persistent cookies for reliable long-term operation |
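The rate-limiting idea in the table above can be sketched as a pure function that mirrors the scraper's `SCRAPER_MAX_REQUESTS_PER_MINUTE` and `SCRAPER_MIN_DELAY_SECONDS` settings. This is a hedged illustration, not the scraper's real code:

```python
def seconds_to_wait(recent: list[float], now: float,
                    max_per_minute: int = 10, min_delay: float = 3.0) -> float:
    """Delay before the next LinkedIn request is allowed.

    Combines a minimum spacing between requests with a sliding
    60-second window cap on total requests.
    """
    window = [t for t in recent if now - t < 60.0]  # requests in the last minute
    wait = 0.0
    if window:
        wait = max(wait, min_delay - (now - max(window)))  # spacing rule
    if len(window) >= max_per_minute:
        wait = max(wait, 60.0 - (now - min(window)))       # window rule
    return max(wait, 0.0)

print(seconds_to_wait([99.0], now=100.0))  # 2.0 (last request 1s ago, 3s spacing)
```

The caller sleeps for the returned duration before each scrape, which keeps request pacing well under LinkedIn's tolerance.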
| Tool | Purpose | Access |
|---|---|---|
| Prometheus | Metrics collection (30+ custom metrics) | :9090 |
| Grafana | Pre-configured dashboards | :3001 |
| Jaeger | Distributed tracing (OpenTelemetry) | :16686 |
| Loki | Log aggregation | via Grafana |
| Flower | Celery task monitoring | :5555 |
Track everything:
- Pipeline execution time
- LLM token usage and costs
- Cache hit rates
- Opportunity distribution by tier
- System health metrics
- 85% code coverage with 140+ tests
- Unit tests for all core modules
- Integration tests for end-to-end workflows
- Load testing with Locust
- Automated CI/CD pipeline
A modern React dashboard for managing your LinkedIn opportunities - no command line needed!
After starting the application, open http://localhost:3000
| Page | URL | Description |
|---|---|---|
| Dashboard | `/dashboard` | Overview with stats, charts, and "Scan LinkedIn" button |
| Opportunities | `/opportunities` | Browse all opportunities with filters and search |
| Opportunity Detail | `/opportunities/:id` | Full details, score breakdown, AI response |
| Responses | `/responses` | Approve, edit, or decline pending AI responses |
| Profile | `/profile` | Configure your preferences (tech stack, salary, etc.) |
| Settings | `/settings` | LLM config, LinkedIn credentials, notifications |
- Scan LinkedIn Button: Trigger scraping directly from the dashboard
- Real-time Status: See scraping progress and health status
- Toast Notifications: User-friendly messages for scraping results (success, no messages, errors)
- Smart Filtering: Filter opportunities by tier, status, score, company
- Response Management: Review AI responses before sending
- Profile Editor: Visual editor for all your preferences
- Mobile Responsive: Works on desktop and mobile
- React 18 + TypeScript
- Vite for fast development
- Tailwind CSS + shadcn/ui components
- React Query for server state
- Zustand for UI state
- Recharts for visualizations
See docs/USER_GUIDE.md for the complete user guide.
```
                         LinkedIn AI Agent
 =====================================================================

 Frontend (React)
   Dashboard --- Opportunities --- Responses --- Profile
                           |
                        REST API
                           v
 Application Layer
   FastAPI API  --->  Service Layer  --->  DSPy Pipeline
        |                   |                    |
        v                   v                    v
   PostgreSQL DB       Redis Cache          Ollama LLM

 Background Processing
   Celery Workers | Playwright Scraper | Email Sender | Flower Monitor

 Observability Stack (Optional)
   Prometheus --- Grafana --- Loki --- Jaeger
```
- Daily Scraping: Celery Beat triggers scraping at 9 AM daily
- Analysis: DSPy pipeline analyzes each message β extracts info β scores β classifies tier
- Response Generation: AI generates personalized response based on your professional status
- Storage: All opportunities stored in PostgreSQL with their AI responses
- Daily Summary: ONE email sent with ALL new opportunities (uses Mailpit in development)
- User Action: Review responses in email β Approve/Edit/Decline via API
- Send: Approved responses sent back to LinkedIn
- Monitoring: All operations tracked with metrics, traces, and logs
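The data-flow steps above can be sketched as one orchestration function. Every helper below is a hypothetical stand-in for the project's real scraper, pipeline, and repository services:

```python
# Each function is an illustrative stub, not the project's actual service.
def scrape_messages():
    return [{"id": 1, "text": "Senior Python role, $150k, remote"}]

def analyze(msg):                       # DSPy pipeline stand-in
    return {**msg, "company": "Acme", "score": 82}

def tier_for(score):
    return "A" if score >= 80 else "B"

def generate_response(opp):             # AI response generator stand-in
    return {**opp, "draft": "Thanks! I'd love to hear more..."}

def store(opp):                         # PostgreSQL repository stand-in
    return opp

def send_daily_summary(opps):
    print(f"Summary email: {len(opps)} opportunities")

def daily_run():
    """Scrape -> analyze -> tier -> draft -> store, then ONE summary email."""
    opportunities = []
    for msg in scrape_messages():
        opp = analyze(msg)
        opp["tier"] = tier_for(opp["score"])
        opportunities.append(store(generate_response(opp)))
    send_daily_summary(opportunities)
    return opportunities
```

The key design point the sketch preserves: responses are drafted and stored during the daily run, but nothing is sent to LinkedIn until the user approves.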
- FastAPI - Modern async Python web framework
- DSPy - Structured LLM programming framework
- PostgreSQL 15 - Primary database with async support
- Redis 7 - Caching and Celery broker
- Celery 5 - Distributed task queue
- Playwright - Browser automation for LinkedIn
- Ollama - Local LLM runtime (free, private)
- OpenAI - GPT-4, GPT-3.5-turbo support
- Anthropic - Claude 3 support
- Prometheus - Metrics collection
- Grafana - Visualization and dashboards
- Jaeger - Distributed tracing
- Loki - Log aggregation
- OpenTelemetry - Instrumentation
- pytest - Testing framework
- Docker - Containerization
- Alembic - Database migrations
- Pydantic - Data validation
- Docker and Docker Compose (recommended)
- Python 3.11+ (for local development)
- LinkedIn credentials (for scraping)
- 8GB+ RAM recommended
```bash
# 1. Clone repository
git clone https://github.com/yourusername/linkedin-ai-agent.git
cd linkedin-ai-agent

# 2. Configure environment
cp .env.example .env
nano .env  # Add your LinkedIn credentials and settings

# 3. Start all services (one command!)
./scripts/start.sh

# 4. Verify deployment
curl http://localhost:8000/health
# Expected: {"status":"healthy","timestamp":"..."}
```

That's it! The system is now:

- ✅ Scheduled to scrape LinkedIn daily at 9 AM
- ✅ Analyzing opportunities with AI (considering your professional status)
- ✅ Caching results in Redis
- ✅ Sending ONE daily summary email (view at http://localhost:8025 in dev)
- ✅ Ready to generate personalized responses
```bash
# 1. Create virtual environment
python3.11 -m venv .venv
source .venv/bin/activate  # or .venv\Scripts\activate on Windows

# 2. Install dependencies
pip install -r requirements.txt
playwright install chromium

# 3. Start dependencies (Postgres, Redis, Ollama)
docker-compose up -d postgres redis ollama

# 4. Run migrations
alembic upgrade head

# 5. Start FastAPI server
uvicorn app.main:app --reload

# 6. Start Celery worker (separate terminal)
celery -A app.tasks.celery_app worker --loglevel=info
```

```bash
# Interactive API documentation
open http://localhost:8000/docs

# Get all A-tier opportunities
curl "http://localhost:8000/api/v1/opportunities?tier=A&limit=10"

# Get opportunities above score 80
curl "http://localhost:8000/api/v1/opportunities?min_score=80"

# Manually process a LinkedIn message
curl -X POST http://localhost:8000/api/v1/opportunities \
  -H "Content-Type: application/json" \
  -d '{
    "recruiter_name": "Jane Smith",
    "raw_message": "Hi! Senior Python Engineer role at Google. $180k-$220k, remote. Interested?"
  }'

# Get pending response for opportunity
curl http://localhost:8000/api/v1/responses/123

# Approve and send
curl -X POST http://localhost:8000/api/v1/responses/123/approve

# Edit before sending (note the escaped apostrophe inside the single-quoted JSON)
curl -X POST http://localhost:8000/api/v1/responses/123/edit \
  -H "Content-Type: application/json" \
  -d '{"edited_response": "Thanks Jane! I'\''d love to learn more..."}'

# Decline (no message sent)
curl -X POST http://localhost:8000/api/v1/responses/123/decline

# Get opportunity statistics
curl http://localhost:8000/api/v1/opportunities/analytics/stats
# Response:
# {
#   "total": 150,
#   "by_tier": {"A": 12, "B": 45, "C": 68, "D": 25},
#   "avg_score": 62.5,
#   "last_updated": "2024-01-18T..."
# }
```

Once the system is running, access these dashboards:
| Service | URL | Credentials | Purpose |
|---|---|---|---|
| Web Dashboard | http://localhost:3000 | - | Main application UI |
| API Docs | http://localhost:8000/docs | - | Interactive API testing |
| Mailpit | http://localhost:8025 | - | View emails in development |
| Flower | http://localhost:5555 | admin/admin | Celery task monitoring |
| Grafana | http://localhost:3001 | admin/admin | Metrics dashboards (with monitoring stack) |
| Prometheus | http://localhost:9090 | - | Raw metrics queries |
| Jaeger | http://localhost:16686 | - | Request tracing |
Business Metrics:

- `opportunities_created_total` - Total opportunities by tier
- `opportunity_score_distribution` - Score histogram
- `opportunities_by_tier` - Current distribution

Performance Metrics:

- `dspy_pipeline_execution_time_seconds` - Pipeline latency
- `llm_api_latency_seconds` - LLM response time
- `llm_tokens_used_total` - Token usage and costs

Cache Metrics:

- `cache_operations_total` - Hit/miss rates
- `cache_hit_rate` - Percentage of cache hits

System Metrics:

- `db_query_latency_seconds` - Database performance
- `scraper_operations_total` - Scraping success/failure
```promql
# Average pipeline execution time (last 1h)
rate(dspy_pipeline_execution_time_seconds_sum[1h]) /
rate(dspy_pipeline_execution_time_seconds_count[1h])

# Cache hit rate
sum(rate(cache_operations_total{status="hit"}[5m])) /
sum(rate(cache_operations_total[5m])) * 100

# Opportunities created per day by tier
sum by (tier) (increase(opportunities_created_total[1d]))
```
Pre-configured dashboards available in monitoring/grafana/dashboards/:
- LinkedIn Agent Overview - Main business metrics
- System Performance - CPU, memory, network
- DSPy Pipeline - AI/ML performance
- Database & Cache - Data layer metrics
Key configuration options (see .env.example for all options):
```bash
# === Application ===
ENV=development
LOG_LEVEL=INFO

# === LinkedIn Credentials ===
LINKEDIN_EMAIL=your@email.com
LINKEDIN_PASSWORD=your-password

# === Database ===
DATABASE_URL=postgresql+asyncpg://user:pass@localhost:5432/linkedin_agent

# === Redis ===
REDIS_URL=redis://localhost:6379/0

# === AI/ML Configuration ===
# Choose provider: ollama (local/free), openai, anthropic
LLM_PROVIDER=ollama
LLM_MODEL=llama3.2

# Ollama (local)
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3.2

# OpenAI (paid)
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4-turbo

# Anthropic (paid)
ANTHROPIC_API_KEY=sk-ant-...
ANTHROPIC_MODEL=claude-3-sonnet-20240229

# Per-module configuration (optional)
ANALYZER_LLM_PROVIDER=ollama
ANALYZER_LLM_MODEL=llama3.2
RESPONSE_LLM_PROVIDER=openai
RESPONSE_LLM_MODEL=gpt-4-turbo

# === Email Notifications ===
# Development: Use Mailpit (local email catcher)
SMTP_HOST=localhost
SMTP_PORT=1025
SMTP_USE_TLS=false
SMTP_USERNAME=
SMTP_PASSWORD=
SMTP_FROM_EMAIL=noreply@linkedin-agent.local
NOTIFICATION_EMAIL=you@example.com

# Production: Use real SMTP (Gmail, SendGrid, etc.)
# SMTP_HOST=smtp.gmail.com
# SMTP_PORT=587
# SMTP_USE_TLS=true
# SMTP_USERNAME=your_email@gmail.com
# SMTP_PASSWORD=your_app_password

# Only notify for these tiers
NOTIFICATION_TIER_THRESHOLD=["A", "B"]
NOTIFICATION_SCORE_THRESHOLD=60

# === Scraper Settings ===
SCRAPER_HEADLESS=true
SCRAPER_MAX_REQUESTS_PER_MINUTE=10
SCRAPER_MIN_DELAY_SECONDS=3.0

# === Observability (Optional) ===
OTEL_ENABLED=true
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
PROMETHEUS_MULTIPROC_DIR=/tmp/prometheus
```

Configure your preferences in `config/profile.yaml`:
```yaml
# Personal Information
name: "Your Name"

# Skills and Experience
preferred_technologies:
  - Python
  - FastAPI
  - PostgreSQL
  - Docker
  - React

years_of_experience: 5
current_seniority: "Senior"  # Junior/Mid/Senior/Staff/Principal

# Compensation Expectations (USD)
minimum_salary_usd: 80000
ideal_salary_usd: 120000

# Work Preferences
preferred_remote_policy: "Remote"  # Remote/Hybrid/On-site/Flexible
preferred_locations:
  - "Remote"
  - "United States"

# Company Preferences
preferred_company_size: "Mid-size"  # Startup/Mid-size/Enterprise
industry_preferences:
  - "Technology"
  - "AI/ML"
  - "SaaS"
```

The system adapts AI-generated responses based on your current professional situation. Configure the `job_search_status` section to personalize how responses are generated:
```yaml
# Professional Status (used for AI response generation)
job_search_status:
  currently_employed: true
  actively_looking: false  # true = actively searching, false = only exceptional opportunities

  # Urgency level determines response tone
  # Options: urgent, moderate, selective, not_looking
  urgency: "selective"

  # Your current situation (free text - be specific!)
  situation: |
    Currently employed and happy, but open to exceptional opportunities.
    Only considering roles with 4-day work week.
    Focused on AI/ML engineering positions.

  # Deal-breakers - opportunities missing these will be politely declined
  must_have:
    - "4-day work week (mandatory)"
    - "Remote-first company"
    - "Focus on AI/ML projects"
    - "Senior or Staff level position"

  # Nice to have - will express interest if present
  nice_to_have:
    - "Equity compensation"
    - "Conference/learning budget"
    - "Modern tech stack"
    - "Flexible hours"

  # Automatic rejection criteria - will decline opportunities matching these
  reject_if:
    - "Agencies or consulting firms"
    - "Cryptocurrency/blockchain only"
    - "Early-stage startups (pre-seed)"
    - "5-day work week requirement"
    - "Full-time on-site"
```

How urgency affects responses:
| Urgency Level | Response Behavior |
|---|---|
| `urgent` | Proactive, enthusiastic responses. Express strong interest in good matches. |
| `moderate` | Balanced responses. Show interest and ask clarifying questions. |
| `selective` | Reserved responses. Emphasize specific requirements before proceeding. |
| `not_looking` | Polite but firm. Only engage with truly exceptional opportunities. |
Example response behaviors:

- HIGH_PRIORITY opportunity + `selective` urgency: Express interest but ask about must-have requirements (e.g., "Before we proceed, does the role offer a 4-day work week?")
- INTERESANTE opportunity + `not_looking` urgency: Politely acknowledge but mention you're not actively looking unless it meets specific criteria
- Any opportunity missing `must_have` items: Politely decline and mention the specific requirement that wasn't met
- Opportunity matching `reject_if` criteria: Automatic polite decline with brief explanation
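These rules can be pictured as a small decision function. It uses naive substring matching purely to keep the sketch self-contained (the real system asks an LLM to judge the criteria), and the profile fields mirror the YAML configuration:

```python
def response_policy(message: str, profile: dict) -> str:
    """Pick a reply strategy from urgency + must_have + reject_if.

    Substring matching is an illustrative simplification; the actual
    system uses an LLM to evaluate whether criteria are met.
    """
    text = message.lower()
    if any(rule.lower() in text for rule in profile["reject_if"]):
        return "decline"                   # automatic polite decline
    missing = [m for m in profile["must_have"] if m.lower() not in text]
    if missing:
        return "ask_about: " + missing[0]  # probe the first unmet deal-breaker
    tone_by_urgency = {"urgent": "enthusiastic", "moderate": "interested",
                       "selective": "reserved", "not_looking": "polite_decline"}
    return tone_by_urgency[profile["urgency"]]

# Hypothetical minimal profile for illustration:
profile = {
    "urgency": "selective",
    "must_have": ["4-day work week"],
    "reject_if": ["on-site"],
}
print(response_policy("Remote role with a 4-day work week", profile))  # reserved
print(response_policy("Full-time on-site position", profile))          # decline
```

Rejection criteria are checked first, then deal-breakers, and only then does urgency decide the tone — matching the precedence described above.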
Instead of sending individual emails for each opportunity, the system sends ONE daily summary email at 9 AM containing all new opportunities found.
Email includes for each opportunity:
- Tier classification (HIGH_PRIORITY, INTERESANTE, POCO_INTERESANTE, NO_INTERESA)
- Score breakdown (tech stack, salary, seniority, company)
- Extracted information (company, role, salary range, tech stack)
- AI-generated response (personalized to your professional status)
- Action buttons: Approve / Edit / Decline
Development with Mailpit:
In development, emails are captured by Mailpit instead of being sent to real addresses:
```bash
# Mailpit is included in docker-compose.yml
# View captured emails at:
open http://localhost:8025
```

Mailpit captures all outgoing emails, making it easy to test and preview the daily summary without configuring a real SMTP server.
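Assembling and delivering a digest like this to Mailpit only needs the standard library. The field names and recipient below are illustrative, not the project's actual email code:

```python
from email.message import EmailMessage

def build_daily_summary(opportunities: list[dict]) -> EmailMessage:
    """Assemble the one-per-day digest; field names are hypothetical."""
    msg = EmailMessage()
    msg["Subject"] = f"LinkedIn digest: {len(opportunities)} new opportunities"
    msg["From"] = "noreply@linkedin-agent.local"
    msg["To"] = "you@example.com"
    lines = [f"[{o['tier']}] {o['company']} - {o['role']} (score {o['score']})"
             for o in opportunities]
    msg.set_content("\n".join(lines) or "No new opportunities today.")
    return msg

# In development, deliver to Mailpit's SMTP port (1025, no TLS, no auth):
#   import smtplib
#   with smtplib.SMTP("localhost", 1025) as smtp:
#       smtp.send_message(build_daily_summary(todays_opportunities))
```

Because Mailpit speaks plain SMTP on port 1025, the same `send_message` call works unchanged against a real SMTP server in production (with TLS and credentials added).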
```bash
# Install development dependencies
pip install -r requirements-dev.txt

# Setup pre-commit hooks
pre-commit install

# Run tests
pytest tests/ -v --cov=app

# Run linters
black app/ tests/
ruff check --fix app/ tests/
mypy app/

# Security scan
bandit -r app/
safety check
```

```
linkedin-ai-agent/
├── app/
│   ├── api/                 # REST API endpoints
│   │   └── v1/
│   │       ├── opportunities.py
│   │       ├── responses.py
│   │       └── health.py
│   ├── cache/               # Redis caching layer
│   ├── core/                # Configuration & utilities
│   ├── database/            # SQLAlchemy models & repos
│   ├── dspy_pipeline/       # AI analysis pipeline
│   │   ├── opportunity_analyzer.py
│   │   └── response_generator.py
│   ├── observability/       # Metrics & tracing
│   ├── scraper/             # LinkedIn scraper
│   ├── services/            # Business logic layer
│   ├── tasks/               # Celery background tasks
│   └── main.py              # FastAPI application
├── tests/
│   ├── unit/                # Unit tests
│   ├── integration/         # Integration tests
│   └── performance/         # Load tests
├── monitoring/              # Observability configs
│   ├── grafana/
│   ├── prometheus/
│   └── loki/
├── infrastructure/
│   └── docker/              # Dockerfiles
├── scripts/                 # Automation scripts
├── config/                  # Configuration files
├── docs/                    # Documentation
├── docker-compose.yml
└── requirements.txt
```
```bash
# Run all tests with coverage
pytest tests/ -v --cov=app --cov-report=html

# Run specific test categories
pytest tests/unit/ -v          # Unit tests only
pytest tests/integration/ -v   # Integration tests
pytest -k "cache" -v           # Cache tests only

# View coverage report
open htmlcov/index.html

# Load testing
locust -f tests/performance/locustfile.py --host=http://localhost:8000
```

Test Coverage:

- ✅ 140+ tests
- ✅ 85% code coverage
- ✅ All core modules tested
- ✅ Integration tests for workflows
- ✅ Performance benchmarks
```bash
# Development
docker-compose up -d

# Production
docker-compose -f docker-compose.prod.yml up -d

# With monitoring stack
docker-compose up -d
docker-compose -f docker-compose.monitoring.yml up -d
```

See docs/DEPLOYMENT.md for:

- Cloud deployment (AWS, GCP, Azure)
- Kubernetes manifests
- CI/CD setup with GitHub Actions
- Secrets management
- Backup/restore procedures
- Scaling strategies
Minimum:
- 2 CPU cores
- 4GB RAM
- 20GB disk
Recommended:
- 4 CPU cores
- 8GB RAM
- 50GB disk
Production:
- 8+ CPU cores
- 16GB+ RAM
- 100GB+ SSD
Comprehensive documentation available in docs/:
| Document | Description |
|---|---|
| USER_GUIDE.md | Start here! Complete user guide with frontend |
| ARCHITECTURE.md | System design and data flow |
| API.md | Complete API reference |
| DEPLOYMENT.md | Production deployment guide |
| DEVELOPMENT.md | Development workflow |
| TESTING_GUIDE.md | Testing strategies |
| MULTI_LLM_GUIDE.md | Multi-model LLM setup |
| NOTIFICATIONS_AND_RESPONSES.md | Email & response workflow |
| SCRAPER.md | LinkedIn scraper details |
- Getting Started with Ollama: `docs/guides/OLLAMA_SETUP.md`
- User Profile Configuration: `docs/guides/PROFILE_CONFIGURATION.md`
- Scraper Improvements: `docs/guides/SCRAPER_IMPROVEMENTS.md`
- Monitoring Stack: `monitoring/README.md`
- All Guides: `docs/guides/`
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- Python: Black formatting, Ruff linting, MyPy type checking
- Tests: 80%+ coverage required
- Commits: Conventional commits format
- Documentation: Update relevant docs with changes
Ollama not responding:

```bash
docker-compose restart ollama
docker-compose logs ollama
```

Database connection errors:

```bash
docker-compose exec postgres pg_isready
# Check DATABASE_URL in .env
```

Playwright browser issues:

```bash
playwright install chromium --with-deps
```

LinkedIn login failing:

- Check credentials in `.env`
- Verify LinkedIn account isn't locked
- Try with `SCRAPER_HEADLESS=false` to debug
See docs/TROUBLESHOOTING.md for more solutions.
Based on testing (M1 MacBook Pro, 16GB RAM):
| Metric | Performance |
|---|---|
| API Response Time (no LLM) | p95 < 100ms |
| Pipeline Execution | 2-4s per message |
| Throughput | ~15 messages/min (single worker) |
| Cache Hit Rate | ~60% steady state |
| Database Queries | p95 < 10ms |
- Increase workers: Scale Celery workers for higher throughput
- Batch processing: Process multiple messages in batches
- Use cheaper models: Ollama/Llama for analysis, GPT-4 for responses
- Cache aggressively: Longer TTLs for stable data
- Connection pooling: Reuse DB connections
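The batch-processing tip can be sketched with a simple chunking helper (a manual version of Python 3.12's `itertools.batched`, written out because the project targets 3.11). The message names are illustrative:

```python
from itertools import islice

def batched(iterable, size: int):
    """Yield lists of up to `size` items from any iterable."""
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

messages = [f"msg-{i}" for i in range(7)]
for chunk in batched(messages, 3):
    # One Celery task (or one LLM call) per chunk instead of per message
    print(len(chunk))  # 3, 3, 1
```

Grouping messages this way amortizes per-call overhead (task dispatch, prompt boilerplate) across the whole chunk, which raises throughput without extra workers.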
- ✅ Secrets Management: All credentials in environment variables
- ✅ Input Validation: Pydantic models for all inputs
- ✅ SQL Injection: SQLAlchemy ORM with parameterized queries
- ✅ Rate Limiting: Prevents LinkedIn account restrictions
- ✅ Dependency Scanning: Automated with `safety` and `trivy`
- ✅ Container Scanning: Docker image vulnerability checks
- ✅ Non-root Containers: All containers run as non-root users
This project is licensed under the MIT License - see the LICENSE file for details.
Built with amazing open-source tools:
- DSPy - Stanford NLP's structured prompting framework
- FastAPI - Modern Python web framework
- Ollama - Local LLM runtime
- Playwright - Browser automation
Special thanks to the open-source community for making projects like this possible.
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: docs/
⭐ Star this repo if you find it useful!

Built with ❤️ for automating the job search
Report Bug • Request Feature • Documentation