Enterprise-grade, AI-powered resume analysis built on Amazon Bedrock Nova Premier.
This system automatically processes resume attachments from email, analyzes them with Amazon Bedrock's Nova Premier model, and presents detailed insights in a web dashboard with advanced search and filtering.
Transform your hiring process with AI-powered automation, advanced analytics, and infinite scalability
Enterprise-grade serverless architecture with AI-powered resume analysis and email notifications
- AWS CLI configured with appropriate permissions
- Node.js 18+ and npm (for CDK deployment)
- Python 3 (for the utility scripts)
- Amazon Bedrock model access enabled for Nova Premier
- Postmark account (for email integration)
```bash
# Clone the repository
git clone <your-repo-url>
cd postmark

# Install CDK dependencies
cd infrastructure
npm install

# Deploy all AWS resources
cdk deploy --require-approval never

# Note the outputs - you'll need the URLs and IDs
```
```bash
# Automatically update configuration from CDK outputs
python3 scripts/update-frontend-config.py

# This script will:
# - Extract CDK outputs from CloudFormation
# - Update the .env file with real values
# - Generate frontend/static/config.js from the template using environment variables
# - Produce deployment-ready configuration without hardcoded values
```

```bash
# Deploy frontend with auto-generated configuration
python3 scripts/deploy-frontend.py

# This script will:
# - Generate config.js from the template using environment variables
# - Create a deployment package
# - Deploy to Amplify
# - Print the deployment URL
```
- Dashboard: Your Amplify app URL (from deployment output)
- GraphQL API: Your AppSync endpoint
- Webhook URL: Your API Gateway endpoint + `/webhook`
Refer to the architecture diagrams above for a visual overview of the complete system design and data flow.
- Amazon Bedrock Integration: Direct API calls to the Amazon Bedrock service
- Nova Premier Model: Amazon's latest multimodal AI model (`us.amazon.nova-premier-v1:0`)
- Enhanced Analysis: Resume parsing, skill analysis, and candidate scoring
- Smart Processing: Automatic file detection and processing
- Amazon S3: Resume file storage and processed content
- AWS Lambda: Serverless processing functions
- Amazon DynamoDB: NoSQL database with 3 optimized tables
- AWS AppSync: GraphQL API with real-time capabilities
- Smart Monitoring: Automatic new file detection and processing
- Advanced Search: Real-time search across all resume data
- Dynamic Filtering: Multi-criteria filtering with presets
- Dual View Modes: Beautiful cards or compact table view
- Export Capabilities: CSV export of filtered results
- Responsive Design: Works on desktop and mobile
```
attachments              # Resume files and metadata
├── id (partition key)
├── filename
├── contentType
├── size
├── processingStatus
└── createdAt

parsed_resumes           # Extracted text content
├── id (partition key)
├── attachment_id
├── raw_text_s3_key
├── text_length
├── parsing_status
└── created_at

resume_information       # AI analysis results
├── id (partition key)
├── parsed_resume_id
├── candidate_name
├── overall_score
├── experience_level
├── experience_years
├── key_skills
├── top_strengths
├── fit_assessment
├── summary
└── created_at
```
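As a quick illustration, here is a minimal boto3 sketch that pulls high-scoring candidates from the `resume_information` table. It assumes the default table name and region shown in this guide; adjust for your deployment:

```python
import boto3
from boto3.dynamodb.conditions import Attr

# Assumes the default table name and region; adjust for your deployment
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("resume_information")

# A filtered scan is fine at small scale; at volume, add a GSI on
# overall_score instead of scanning the whole table
response = table.scan(FilterExpression=Attr("overall_score").gte(80))

for item in sorted(response["Items"], key=lambda i: i["overall_score"], reverse=True):
    print(f"{item['candidate_name']}: {item['overall_score']} ({item['experience_level']})")
```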
The system uses a secure, template-based configuration approach:
- No hardcoded values in source code
- Environment-driven configuration
- Build-time generation from templates
- Git-safe - sensitive values never committed
```
frontend/static/
├── config.js.template   # Template with placeholders (committed)
├── config.js            # Generated file (ignored by git)
└── ...

.env                     # Environment variables (ignored by git)
.env.example             # Template for environment setup (committed)
```
- CDK Deployment → Outputs AWS resource IDs
- Automation Script → Extracts outputs to the `.env` file
- Template Processing → Generates `config.js` from the template + environment variables
- Frontend Deployment → Uses the generated configuration

All configuration is managed through environment variables in `.env`:
```bash
# AWS Configuration
AWS_REGION=us-east-1
S3_BUCKET=your-resume-bucket-name
LOG_LEVEL=INFO

# Frontend Configuration (auto-populated from CDK)
USER_POOL_ID=us-east-1_YourUserPoolId
USER_POOL_CLIENT_ID=YourUserPoolClientId
IDENTITY_POOL_ID=us-east-1:your-identity-pool-id
GRAPHQL_ENDPOINT=https://your-appsync-id.appsync-api.us-east-1.amazonaws.com/graphql
API_ID=your-api-gateway-id
```
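Under the hood, the template step amounts to simple placeholder substitution. A simplified sketch of what the generation scripts do (the `${VAR}` placeholder names are assumptions; check `config.js.template` for the real ones):

```python
import os
from pathlib import Path
from string import Template

# Hypothetical placeholder names; check config.js.template for the real ones
TEMPLATE = Path("frontend/static/config.js.template")
OUTPUT = Path("frontend/static/config.js")

# Replace ${VAR} placeholders with values from the environment
# (populated into .env by scripts/update-frontend-config.py)
rendered = Template(TEMPLATE.read_text()).safe_substitute(os.environ)
OUTPUT.write_text(rendered)
print(f"Generated {OUTPUT} from {TEMPLATE}")
```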
Queries:

- `listResumes` - Get all resume files
- `listParsedResumes` - Get parsed resume data
- `listResumeAnalyses` - Get AI analysis results
- `getSystemHealth` - System status and statistics
- `getResume(id)` - Get specific resume details
- `getResumeAnalysis(id)` - Get detailed AI analysis

Mutations:

- `triggerS3Monitor` - Manually scan S3 for new files
- `processResume(s3Key)` - Process a specific resume file
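For illustration, a hedged Python sketch of calling one of these queries directly. It assumes the API uses Cognito user-pool auth with a signed-in user's ID token; the real dashboard handles this in `frontend/static/graphql.js`:

```python
import requests

# Endpoint from your CDK outputs / .env; the token is a Cognito ID token
# obtained after sign-in (frontend/static/auth.js handles the real flow)
GRAPHQL_ENDPOINT = "https://your-appsync-id.appsync-api.us-east-1.amazonaws.com/graphql"
ID_TOKEN = "eyJ..."  # placeholder

query = """
query {
  getSystemHealth { status totalResumes processedResumes successRate }
}
"""

resp = requests.post(
    GRAPHQL_ENDPOINT,
    json={"query": query},
    headers={"Authorization": ID_TOKEN},
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["data"]["getSystemHealth"])
```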
- Real-time Search: Search across names, skills, experience
- Score Range: Filter by AI score (0-100)
- Experience Level: Entry, Junior, Mid, Senior, Lead
- Years of Experience: Min/Max range filtering
- Skills Matching: Comma-separated skill requirements
- Quick Presets: Top performers, senior level, cloud experts
- Multiple Sort Options: Score, name, experience, date, skills
- Grid View: Beautiful gradient cards with animations
- Table View: Compact tabular format for quick scanning
- Export: CSV download of filtered results
- Popular Skills: Auto-generated clickable skill badges
- Live Results: Instant filtering as you type
- Responsive Design: Works on all devices
- Performance Optimized: Handles thousands of resumes
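To make the filter semantics concrete, here is a Python sketch of the same multi-criteria logic (the dashboard implements this client-side in JavaScript; the field names mirror the `resume_information` table):

```python
def filter_candidates(records, text="", min_score=0, max_score=100,
                      levels=None, required_skills=None):
    """Multi-criteria filter mirroring the dashboard's client-side logic."""
    required = {s.strip().lower() for s in (required_skills or [])}
    results = []
    for r in records:
        # Free-text search across name, skills, and experience level
        haystack = " ".join([r.get("candidate_name", ""),
                             r.get("experience_level", ""),
                             *r.get("key_skills", [])]).lower()
        if text and text.lower() not in haystack:
            continue
        if not min_score <= r.get("overall_score", 0) <= max_score:
            continue
        if levels and r.get("experience_level") not in levels:
            continue
        if required and not required <= {s.lower() for s in r.get("key_skills", [])}:
            continue
        results.append(r)
    # Default sort: score descending, like the "Top performers" preset
    return sorted(results, key=lambda r: r.get("overall_score", 0), reverse=True)

records = [
    {"candidate_name": "Jane Doe", "overall_score": 87,
     "experience_level": "Senior", "key_skills": ["AWS", "Python"]},
    {"candidate_name": "John Roe", "overall_score": 62,
     "experience_level": "Junior", "key_skills": ["Java"]},
]
print(filter_candidates(records, min_score=75, levels={"Senior"},
                        required_skills=["aws"]))
```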
- Direct API Integration: Uses boto3 to call the Amazon Bedrock service
- Nova Premier Model: Amazon's latest multimodal AI model (`us.amazon.nova-premier-v1:0`)
- Advanced Processing: Natural language understanding and structured data extraction
- Model Access: Bedrock provides secure, scalable access to Nova Premier for resume analysis
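Under the hood, the call is a single Bedrock Converse request. A minimal boto3 sketch, assuming the us-east-1 region and a prompt that asks for the fields listed below (the exact prompt and output handling in `resume-processor` may differ):

```python
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

resume_text = "Jane Doe\njane@example.com\n10 years building AWS data platforms..."

# Single Converse-API call to Nova Premier asking for structured output
response = bedrock.converse(
    modelId="us.amazon.nova-premier-v1:0",
    messages=[{
        "role": "user",
        "content": [{"text": (
            "Analyze this resume and return only JSON with candidate_name, "
            "overall_score (0-100), experience_level, experience_years, "
            "key_skills, top_strengths, fit_assessment, and summary.\n\n"
            + resume_text
        )}],
    }],
    inferenceConfig={"maxTokens": 1024, "temperature": 0.2},
)

# Assumes the model returns bare JSON; production code should guard this parse
analysis = json.loads(response["output"]["message"]["content"][0]["text"])
print(analysis["candidate_name"], analysis["overall_score"])
```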
- Contact Information: Name, email extraction
- Technical Skills: Comprehensive skill identification
- Experience Assessment: Level and years calculation
- Fit Analysis: High/Medium/Low fit assessment
- Scoring: 0-100 overall candidate score
- Strengths: Top 3-5 candidate strengths
- Recommendations: Improvement suggestions
The system sends rich HTML email notifications when resume analysis is complete.
Update your `.env` file with your Postmark credentials:
```bash
# Postmark Email Configuration
POSTMARK_SERVER_TOKEN=your-actual-postmark-server-token
NOTIFICATION_EMAIL=your-hr-team@company.com
FROM_EMAIL=noreply@company.com
```
Configure your Postmark webhook URL to point to your API Gateway:

```
https://your-api-gateway-id.execute-api.us-east-1.amazonaws.com/prod/webhook
```
- Rich HTML Templates: Professional email design with candidate insights
- Score-Based Styling: Color-coded performance indicators
- Comprehensive Analysis: Skills, strengths, recommendations, and summary
- Dashboard Integration: Direct links to view full analysis
- Async Delivery: Non-blocking email processing
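For reference, a minimal sketch of the notification send against Postmark's email API, using the `.env` values shown above (the subject and HTML body here are stand-ins for the real template):

```python
import os

import requests

# Credentials and addresses come from the .env values shown above
resp = requests.post(
    "https://api.postmarkapp.com/email",
    headers={
        "X-Postmark-Server-Token": os.environ["POSTMARK_SERVER_TOKEN"],
        "Accept": "application/json",
    },
    json={
        "From": os.environ["FROM_EMAIL"],
        "To": os.environ["NOTIFICATION_EMAIL"],
        "Subject": "Resume analyzed: Jane Doe (score 87)",
        "HtmlBody": "<h2>Jane Doe: 87/100</h2><p>Senior level, High fit.</p>",
        "MessageStream": "outbound",
    },
    timeout=10,
)
resp.raise_for_status()
```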
The complete workflow:

1. Email with a resume attachment is sent to the monitored address
2. Postmark processes the email and triggers the webhook
3. AWS Lambda extracts the attachment and stores it in S3
4. The S3 processor detects the new file and triggers analysis
5. Amazon Bedrock (Nova Premier) analyzes the resume content
6. An email notification is sent with the analysis results (NEW!)
7. Results are stored in DynamoDB and displayed in the dashboard
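A stripped-down sketch of step 3 above (the real `webhook-handler` Lambda adds validation and metadata writes; the attachment field names follow Postmark's inbound webhook payload):

```python
import base64
import json
import os
import uuid

import boto3

s3 = boto3.client("s3")
BUCKET = os.environ["S3_BUCKET"]

def handler(event, context):
    """API Gateway proxy event -> store inbound Postmark attachments in S3."""
    payload = json.loads(event["body"])
    stored = []
    for att in payload.get("Attachments", []):
        # Postmark delivers attachment content base64-encoded
        key = f"resumes/{uuid.uuid4()}-{att['Name']}"
        s3.put_object(
            Bucket=BUCKET,
            Key=key,
            Body=base64.b64decode(att["Content"]),
            ContentType=att.get("ContentType", "application/octet-stream"),
        )
        stored.append(key)
    # S3 event notifications then kick off the analysis pipeline (step 4)
    return {"statusCode": 200, "body": json.dumps({"stored": stored})}
```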
```bash
# Monitor processing progress
python3 scripts/monitor_processing.py

# Check system health via GraphQL
curl -X POST https://your-appsync-endpoint/graphql \
  -H "Content-Type: application/json" \
  -d '{"query": "query { getSystemHealth { status totalResumes processedResumes successRate } }"}'

# Deploy frontend updates
python3 scripts/deploy-frontend.py

# Clear all data (use with caution)
python3 scripts/clear-all-resources.py
```
- Serverless Architecture: Auto-scaling based on demand
- Cost Optimized: Pay only for what you use
- High Availability: Multi-AZ deployment
- Performance: Handles thousands of resumes efficiently
- CloudWatch Logs: Comprehensive logging
- Error Tracking: Automatic error detection
- Performance Metrics: Processing time and success rates
- Cost Monitoring: AWS cost tracking
- Amazon Cognito: User authentication
- IAM Roles: Least privilege access
- API Security: GraphQL with authentication
- Data Encryption: At rest and in transit
- Secure Storage: S3 and DynamoDB encryption
- Access Control: Role-based permissions
- Audit Trail: CloudTrail logging
- GDPR Compliance: Data deletion capabilities
```
postmark/
├── README.md                         # This comprehensive guide
├── DEPLOYMENT.md                     # Detailed deployment instructions
├── .gitignore                        # Git ignore rules
├── .env.example                      # Environment variables template
│
├── infrastructure/                   # AWS CDK Infrastructure as Code
│   ├── lib/
│   │   ├── resume-ranking-stack.ts   # Main CDK stack
│   │   └── constructs/               # Reusable CDK constructs
│   │       ├── api-construct.ts      # AppSync GraphQL API
│   │       ├── auth-construct.ts     # Cognito authentication
│   │       ├── database-construct.ts # DynamoDB tables
│   │       └── lambda-construct.ts   # Lambda functions
│   ├── lambda/                       # Lambda function code
│   │   ├── graphql-resolvers/        # GraphQL resolvers
│   │   ├── s3-processor/             # S3 event processor
│   │   ├── resume-processor/         # AI analysis processor
│   │   └── webhook-handler/          # Postmark webhook processor
│   ├── graphql/
│   │   └── schema.graphql            # GraphQL schema definition
│   ├── package.json                  # CDK dependencies
│   └── cdk.json                      # CDK configuration
│
├── frontend/                         # Web dashboard
│   └── static/
│       ├── index.html                # Main dashboard interface
│       ├── auth.js                   # Authentication logic
│       ├── graphql.js                # GraphQL client
│       ├── config.js.template        # Frontend configuration template
│       └── style.css                 # Custom styling
│
├── scripts/                          # Utility scripts
│   ├── deploy-frontend.py            # Frontend deployment
│   ├── monitor_processing.py         # Processing monitor
│   ├── update-frontend-config.py     # Auto-update frontend config (NEW!)
│   └── clear-all-resources.py        # Complete resource cleanup utility
│
└── samples/                          # Sample resume files
    ├── *.pdf                         # Sample PDF resumes
    └── generate_sample_resumes.py    # Sample generator
```
The system uses a template-based configuration approach for security and flexibility:
- `frontend/static/config.js.template` - Frontend configuration template (committed)
- `.env.example` - Environment variables template (committed)
- `frontend/static/config.js` - Auto-generated from the template (not committed)
- `.env` - Environment variables (not committed)
```bash
# 1. Deploy infrastructure
cdk deploy

# 2. Auto-configure from CDK outputs
python3 scripts/update-frontend-config.py

# 3. Deploy frontend (auto-generates config.js)
python3 scripts/deploy-frontend.py
```
- Backend: Update CDK constructs and Lambda functions
- API: Modify GraphQL schema and resolvers
- Frontend: Update dashboard components
- Configuration: Add new variables to template if needed
- Deploy: Use CDK for infrastructure, script for frontend
```bash
# Test Lambda functions locally
cd infrastructure/lambda/function-name
python3 -m pytest

# Deploy infrastructure changes
cd infrastructure
cdk diff
cdk deploy

# Deploy frontend changes
python3 scripts/deploy-frontend.py
```
**Small scale (~100 resumes/month):**

- AWS Lambda: ~$0.02 (100 invocations × 5 seconds × 512 MB, at ~$0.0000083 per 512 MB-second)
- DynamoDB: ~$0.38 (300 writes @ $1.25/million + 1,000 reads @ $0.25/million)
- S3: ~$0.25 (1GB storage @ $0.023/GB + minimal requests)
- Amazon Bedrock: ~$1.50 (Nova Premier: ~$0.003/1K input tokens, ~$0.012/1K output tokens)
- AppSync: ~$0.40 (100,000 requests @ $4/million requests)
- Amplify: ~$1.00 (basic hosting)
- Cognito: Free (under 50,000 MAU)
- CloudWatch: ~$0.50 (basic logs)
- Total: ~$4.05/month
**Medium scale (~1,000 resumes/month):**

- AWS Lambda: ~$0.20 (1,000 invocations)
- DynamoDB: ~$3.75 (3,000 writes + 10,000 reads)
- S3: ~$2.30 (10GB storage + requests)
- Amazon Bedrock: ~$15.00 (Nova Premier at scale)
- AppSync: ~$4.00 (1 million requests)
- Amplify: ~$3.00 (moderate traffic)
- Cognito: ~$2.75 (5,000 MAU @ $0.0055/MAU after free tier)
- CloudWatch: ~$2.00 (moderate logging)
- Total: ~$33.00/month
**Large scale (~10,000 resumes/month):**

- AWS Lambda: ~$2.00 (10,000 invocations)
- DynamoDB: ~$37.50 (30,000 writes + 100,000 reads)
- S3: ~$23.00 (100GB storage + requests)
- Amazon Bedrock: ~$150.00 (Nova Premier at enterprise scale)
- AppSync: ~$40.00 (10 million requests)
- Amplify: ~$10.00 (high traffic)
- Cognito: ~$27.50 (50,000 MAU)
- CloudWatch: ~$10.00 (comprehensive monitoring)
- Total: ~$300.00/month
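Using the Nova Premier rates quoted in the small-scale estimate, here is a back-of-the-envelope check of the per-resume AI cost (the token counts are assumptions; measure your own):

```python
# Rough per-resume Bedrock cost using the Nova Premier rates quoted above
INPUT_RATE = 0.003 / 1_000   # $ per input token
OUTPUT_RATE = 0.012 / 1_000  # $ per output token

# Assumed token counts for a typical 2-3 page resume; measure your own
input_tokens, output_tokens = 3_000, 800

per_resume = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
print(f"~${per_resume:.4f} per resume")        # ~$0.0186
print(f"~${per_resume * 1_000:.2f} per 1,000") # ~$18.60, near the medium-tier estimate
```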
- S3 Intelligent Tiering: Automatically moves old files to cheaper storage classes
- DynamoDB On-Demand: Pay only for actual usage instead of provisioned capacity
- Lambda Memory Optimization: Right-size memory allocation for optimal cost/performance
- CloudWatch Log Retention: Set appropriate retention periods (7-30 days)
- Reserved Capacity: For predictable workloads, use DynamoDB reserved capacity
- S3 Lifecycle Policies: Automatically archive old resumes to Glacier
- Lambda Provisioned Concurrency: For consistent performance at scale
- CloudFront CDN: Cache static assets to reduce Amplify costs
- Batch Processing: Process multiple resumes in single API calls when possible
- Content Filtering: Pre-filter resumes to avoid unnecessary AI analysis
- Caching Results: Store analysis results to avoid re-processing
- Model Selection: Use appropriate model size for your accuracy requirements
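One way to implement the result-caching idea from the list above: hash the extracted text and skip the Bedrock call on a repeat. A sketch (the `analysis_cache` table is hypothetical and not part of the default stack):

```python
import hashlib
import json

import boto3

dynamodb = boto3.resource("dynamodb")
# Hypothetical cache table keyed by content hash; not part of the default stack
cache = dynamodb.Table("analysis_cache")

def analyze_with_cache(resume_text: str, analyze_fn):
    """Skip the Bedrock call when an identical resume was already analyzed."""
    digest = hashlib.sha256(resume_text.encode()).hexdigest()
    hit = cache.get_item(Key={"content_hash": digest}).get("Item")
    if hit:
        return json.loads(hit["analysis"])
    analysis = analyze_fn(resume_text)  # e.g., the Bedrock call shown earlier
    # Serialize to a string to sidestep DynamoDB's Decimal-only number rule
    cache.put_item(Item={"content_hash": digest, "analysis": json.dumps(analysis)})
    return analysis
```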
- Amazon Bedrock: Directly proportional to number of resumes analyzed
- Lambda Invocations: Scales with processing volume
- S3 Storage: Grows with resume collection size
- DynamoDB: Costs increase in capacity unit steps
- AppSync: Pricing tiers based on request volume
- Cognito: Free tier up to 50,000 MAU, then per-user pricing
- Amplify Hosting: Base hosting cost regardless of usage
- CloudWatch: Base monitoring costs
```bash
# Set up a billing alert (billing metrics are published in us-east-1)
aws cloudwatch put-metric-alarm \
  --alarm-name "ResumeRanking-MonthlyCost" \
  --alarm-description "Alert when monthly costs exceed threshold" \
  --metric-name EstimatedCharges \
  --namespace AWS/Billing \
  --dimensions Name=Currency,Value=USD \
  --statistic Maximum \
  --period 86400 \
  --evaluation-periods 1 \
  --threshold 100 \
  --comparison-operator GreaterThanThreshold \
  --region us-east-1
```
- AWS Cost Explorer: Analyze spending patterns
- AWS Budgets: Set spending limits and alerts
- AWS Trusted Advisor: Get cost optimization recommendations
- AWS Cost Anomaly Detection: Detect unusual spending patterns
For detailed cost estimates based on your specific usage:
- AWS Pricing Calculator: https://calculator.aws
- Bedrock Pricing: https://aws.amazon.com/bedrock/pricing/
- Lambda Pricing: https://aws.amazon.com/lambda/pricing/
- File Upload & Storage: S3 costs
- Text Extraction: Lambda processing costs
- AI Analysis: Bedrock Nova Premier costs (highest component)
- Data Storage: DynamoDB costs
- Frontend Hosting: Amplify costs
- API Requests: AppSync costs
- User Authentication: Cognito costs
- Real-time Updates: Additional AppSync costs
- Search & Filtering: Additional DynamoDB read costs
- Export Functions: Lambda processing costs
- Monitoring: CloudWatch costs
- Backup & Recovery: Additional S3 costs
💡 Pro Tip: Start with the small-scale deployment and monitor actual usage patterns before scaling up. The system's serverless architecture means you only pay for what you use!
```bash
# Check S3 bucket contents
aws s3 ls s3://your-bucket-name/

# Monitor processing
python3 scripts/monitor_processing.py

# Check CloudWatch logs
aws logs describe-log-groups --log-group-name-prefix "/aws/lambda/resume-ranking"

# Verify Bedrock access
aws bedrock list-foundation-models --region us-east-1

# Check the Nova Premier base model (invocations use the us.* cross-region inference profile)
aws bedrock get-foundation-model --model-identifier amazon.nova-premier-v1:0 --region us-east-1
```
- Check browser console for JavaScript errors
- Verify `config.js` has correct AWS resource IDs
- Ensure Cognito user pool is configured
Enable detailed logging in CloudWatch to troubleshoot issues.
- Processing Success Rate: % of resumes successfully analyzed
- Average Processing Time: Time from upload to analysis complete
- AI Analysis Quality: Confidence scores and accuracy
- System Utilization: Lambda invocations and DynamoDB usage
Set up CloudWatch alarms for:
- Processing failures
- High latency
- Cost thresholds
- Error rates
- AWS Lambda - Serverless compute
- Amazon DynamoDB - NoSQL database
- Amazon S3 - Object storage
- AWS AppSync - GraphQL API
- Amazon Bedrock - AI service for accessing Nova Premier models
- Amazon Cognito - Authentication
- HTML5 - Modern web standards
- Tailwind CSS - Utility-first CSS framework
- Alpine.js - Lightweight JavaScript framework
- GraphQL - API query language
- AWS CDK - Infrastructure as Code
- TypeScript - CDK development language
- AWS Amplify - Frontend hosting
- CloudFormation - AWS resource management
MIT License - see LICENSE file for details.
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
For issues and questions:
- Check the troubleshooting section
- Review CloudWatch logs
- Open an issue on GitHub
- Check AWS service status
The AI Resume Ranking System is a complete, enterprise-grade solution for intelligent talent management with advanced search, filtering, and AI-powered analysis capabilities!
Quick Access:
- Dashboard: Your Amplify app URL
- API: Your AppSync GraphQL endpoint
- Monitoring: CloudWatch logs and metrics
Key Commands:
- Deploy: `cdk deploy` (infrastructure), `python3 scripts/deploy-frontend.py` (frontend)
- Monitor: `python3 scripts/monitor_processing.py`
- Clean: `python3 scripts/clear-all-resources.py`