Add RDS database support and enhance security group configurations across infrastructure #1
Conversation
Walkthrough

This pull request extends the AWS CDK infrastructure to support RDS database provisioning, adds cost center tracking throughout the stack, enhances security group configurations for network access control, and modifies S3 bucket encryption settings. A new RdsConstruct is introduced alongside updates to multiple existing constructs to wire in the new infrastructure components.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Stack as InfrastructureStack
    participant VPC as VpcConstruct
    participant EKS as EksConstruct
    participant RDS as RdsConstruct
    participant CF as CloudFrontConstruct
    participant AWS as AWS Resources
    Stack->>VPC: Create VPC + default SecurityGroup
    VPC->>AWS: Provision VPC & SecurityGroup
    Stack->>EKS: Create EKS + nodeSecurityGroup
    EKS->>AWS: Provision Cluster & Node SecurityGroup
    Stack->>RDS: Create RDS with PostgreSQL config
    RDS->>AWS: Provision DB Instance + SecurityGroup
    Stack->>CF: Create CloudFront + Tags
    CF->>AWS: Provision Distribution with Tags
    AWS-->>Stack: Resources Ready
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks: ❌ 1 check failed (warning) · ✅ 2 checks passed
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
lib/stacks/infrastructure-stack.ts (1)
78-85: Secure RDS connectivity with SG-to-SG rules; don't rely on open ingress.

After removing the 0.0.0.0/0 rule from RdsConstruct, explicitly allow DB access from the bastion and EKS node SGs here:
```diff
+import * as ec2 from 'aws-cdk-lib/aws-ec2';
@@
   const cloudfrontConstruct = new CloudFrontConstruct(this, 'CloudFront', {
@@
-  const rdsConstruct = new RdsConstruct(this, 'Rds', {
+  const rdsConstruct = new RdsConstruct(this, 'Rds', {
     vpc: vpcConstruct.vpc,
     instanceClass: config.rdsInstanceClass,
     allocatedStorage: config.rdsAllocatedStorage,
     maxAllocatedStorage: config.rdsMaxAllocatedStorage,
     databaseName: config.rdsDatabaseName,
     masterUsername: config.rdsMasterUsername,
     environment: config.environment,
   });
+
+  // Restrict DB ingress to trusted SGs only
+  rdsConstruct.securityGroup.addIngressRule(
+    bastionConstruct.securityGroup,
+    ec2.Port.tcp(5432),
+    'PostgreSQL from bastion'
+  );
+  rdsConstruct.securityGroup.addIngressRule(
+    eksConstruct.nodeSecurityGroup,
+    ec2.Port.tcp(5432),
+    'PostgreSQL from EKS nodes'
+  );
```

Optional: the earlier rule allowing the bastion to "access EKS API" via the cluster SG is likely unnecessary for EKS endpoint access; consider removing it separately.
Also applies to: 87-96
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- config/environment.ts (3 hunks)
- lib/constructs/bastion-construct.ts (2 hunks)
- lib/constructs/cloudfront-construct.ts (2 hunks)
- lib/constructs/eks-construct.ts (2 hunks)
- lib/constructs/rds-construct.ts (1 hunk)
- lib/constructs/s3-construct.ts (1 hunk)
- lib/constructs/vpc-construct.ts (1 hunk)
- lib/stacks/infrastructure-stack.ts (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
lib/stacks/infrastructure-stack.ts (1)
lib/constructs/rds-construct.ts (1)
RdsConstruct(16-110)
🔇 Additional comments (1)
lib/constructs/bastion-construct.ts (1)
123-126: IMDS tags enablement looks fine.

Using a property override to set InstanceMetadataTags is acceptable here alongside requireImdsv2. No changes required.
Ensure organizational policy requires IMDSv2 and metadata tags; if so, you’re compliant here.
```typescript
  costCenter?: string;

  // Route53 Configuration
  domainName: string;
  createHostedZone: boolean;
  hostedZoneId?: string;

  // RDS Configuration
  rdsInstanceClass: string;
  rdsAllocatedStorage: number;
  rdsMaxAllocatedStorage: number;
  rdsDatabaseName: string;
  rdsMasterUsername: string;
```
🧩 Analysis chain
Security posture follow-up: restrict SSH CIDRs and plan DB secrets.
- Dev SSH from 0.0.0.0/0 is risky even for dev. Prefer your office/VPN IP ranges.
- Keep masterUsername in config, but manage the password via AWS Secrets Manager and inject at deploy time (don’t hardcode).
No code change here required; ensure the constructs use a generated/managed secret.
Also applies to: 83-95, 133-145
🏁 Script executed:

```bash
#!/bin/bash
# 1. Examine the full config/environment.ts file
echo "=== Full config/environment.ts file ==="
cat config/environment.ts

# 2. Check for any hardcoded secrets, passwords, or API keys
echo -e "\n=== Searching for potential hardcoded secrets ==="
rg -n -i "(password|secret|apikey|token|credential)" config/environment.ts

# 3. Search for CIDR or SSH related configurations
echo -e "\n=== Searching for CIDR and SSH configurations ==="
rg -n -i "(cidr|bastion|ssh|allowedcidrs)" config/
```

Length of output: 4256
🏁 Script executed:

```bash
#!/bin/bash
# Search for RDS stack/infrastructure code that might handle passwords
echo "=== Searching for RDS CDK stack or infrastructure code ==="
fd -e ts -e js . | xargs rg -l "rds|RDS" | head -20

# Look for any password-related code in infrastructure/stack files
echo -e "\n=== Searching for password injection or secret handling in CDK/stack code ==="
rg -n "rdsMaster|rdsPassword|Secrets|SecretsManager" --type=ts --type=js -g "!node_modules"

# Check if there's any hardcoded 'admin' password patterns
echo -e "\n=== Searching for hardcoded credentials patterns ==="
rg -n "(password|passwd).*[:=].*['\"]" --type=ts --type=js -g "!node_modules" -g "!test" | head -20
```

Length of output: 567
🏁 Script executed:

```bash
#!/bin/bash
# Examine the RDS construct implementation
echo "=== RDS Construct Implementation ==="
cat lib/constructs/rds-construct.ts

# Examine how RDS is instantiated in the infrastructure stack
echo -e "\n=== Infrastructure Stack RDS Instantiation (lines 80-110) ==="
sed -n '80,110p' lib/stacks/infrastructure-stack.ts

# Check for Secret generation patterns
echo -e "\n=== Searching for password generation or secret creation ==="
rg -n "generate|Secret|GeneratedSecret" lib/constructs/rds-construct.ts lib/stacks/infrastructure-stack.ts
```

Length of output: 5018
Critical: Remove hardcoded RDS password and restrict database access.
The security concerns are valid but more critical than noted:
- Hardcoded credential in rds-construct.ts (line 60): `cdk.SecretValue.unsafePlainText('admin123')` must be replaced with AWS Secrets Manager integration using `rds.Credentials.fromGeneratedSecret()` or `rds.Credentials.fromUsername()` with a properly managed secret.
- Dev SSH unrestricted: restrict `bastionAllowedCidrs` in devConfig (line 73) from `['0.0.0.0/0']` to specific IP ranges (office/VPN).
- Additional RDS exposures (rds-construct.ts):
  - Line 34: remove public ingress (0.0.0.0/0) on PostgreSQL port 5432
  - Line 69: set `publiclyAccessible: false`
  - Line 65: enable `storageEncrypted: true`
  - Line 71: enable `backupRetention` (minimum 7 days for prod)
  - Line 73: enable `deletionProtection: true` for prod
🤖 Prompt for AI Agents
In config/environment.ts around lines 34-47 and rds-construct.ts at the
referenced lines (rds ingress line ~34, hardcoded secret line ~60,
storageEncrypted line ~65, deletionProtection/backupRetention/publiclyAccessible
around lines ~69-73), remove the hardcoded CDK secret and instead wire the RDS
construct to use AWS Secrets Manager via rds.Credentials.fromGeneratedSecret()
or rds.Credentials.fromUsername() with a managed secret; restrict dev
bastionAllowedCidrs to specific office/VPN CIDR(s) instead of ['0.0.0.0/0'];
remove any security group ingress allowing 0.0.0.0/0 to port 5432; set
publiclyAccessible to false; enable storageEncrypted: true; configure
backupRetention (at least 7 days for prod) and enable deletionProtection for
prod environments; ensure the code reads the secret ARN/secret reference from
environment or context rather than embedding plaintext.
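As a reference point for the items above, here is a minimal sketch of the two config-related changes, assuming a config shape similar to config/environment.ts; the CIDR range, secret name, and username below are illustrative placeholders, not values from the repository.

```typescript
import * as rds from 'aws-cdk-lib/aws-rds';

// Hypothetical dev config entry: SSH limited to an office/VPN range instead of 0.0.0.0/0.
const devConfig = {
  bastionAllowedCidrs: ['203.0.113.0/24'], // example range, not a real value
  rdsMasterUsername: 'appadmin',           // illustrative username
};

// Inside the RDS construct: let RDS generate the password and store it in Secrets Manager,
// so no plaintext credential appears in the CDK code or the synthesized template.
const credentials = rds.Credentials.fromGeneratedSecret(devConfig.rdsMasterUsername, {
  secretName: 'dev/rds/appdb/master', // hypothetical secret name
});
// `credentials` would then be passed to the rds.DatabaseInstance props.
```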
```typescript
    // Add HTTP ingress
    this.securityGroup.addIngressRule(
      ec2.Peer.anyIpv4(),
      ec2.Port.tcp(80),
      'HTTP'
    );

    // Add HTTPS ingress
    this.securityGroup.addIngressRule(
      ec2.Peer.anyIpv4(),
      ec2.Port.tcp(443),
      'HTTPS'
    );

    // Add MySQL/Aurora ingress
    this.securityGroup.addIngressRule(
      ec2.Peer.anyIpv4(),
      ec2.Port.tcp(3306),
      'MySQL/Aurora'
    );

    // Add PostgreSQL ingress
    this.securityGroup.addIngressRule(
      ec2.Peer.anyIpv4(),
      ec2.Port.tcp(5432),
      'PostgreSQL'
    );

    // Add Redis ingress
    this.securityGroup.addIngressRule(
      ec2.Peer.anyIpv4(),
      ec2.Port.tcp(6379),
      'Redis'
    );

    // Add custom app port ingress
    this.securityGroup.addIngressRule(
      ec2.Peer.anyIpv4(),
      ec2.Port.tcp(8080),
      'Custom app port'
    );
```
Blocker: Bastion SG opens DB/cache/web ports to the internet.
A bastion should expose SSH only (and preferably be SSM-only). Opening 80/443/3306/5432/6379/8080 to 0.0.0.0/0 is unsafe.
- Remove these ingress rules.
- Keep SSH restricted to allowedCidrs or switch to SSM exclusively (then drop SSH).
```diff
-    // Add HTTP ingress
-    this.securityGroup.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(80), 'HTTP');
-
-    // Add HTTPS ingress
-    this.securityGroup.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(443), 'HTTPS');
-
-    // Add MySQL/Aurora ingress
-    this.securityGroup.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(3306), 'MySQL/Aurora');
-
-    // Add PostgreSQL ingress
-    this.securityGroup.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(5432), 'PostgreSQL');
-
-    // Add Redis ingress
-    this.securityGroup.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(6379), 'Redis');
-
-    // Add custom app port ingress
-    this.securityGroup.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(8080), 'Custom app port');
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
    // Add HTTP ingress
    this.securityGroup.addIngressRule(
      ec2.Peer.anyIpv4(),
      ec2.Port.tcp(80),
      'HTTP'
    );
    // Add HTTPS ingress
    this.securityGroup.addIngressRule(
      ec2.Peer.anyIpv4(),
      ec2.Port.tcp(443),
      'HTTPS'
    );
    // Add MySQL/Aurora ingress
    this.securityGroup.addIngressRule(
      ec2.Peer.anyIpv4(),
      ec2.Port.tcp(3306),
      'MySQL/Aurora'
    );
    // Add PostgreSQL ingress
    this.securityGroup.addIngressRule(
      ec2.Peer.anyIpv4(),
      ec2.Port.tcp(5432),
      'PostgreSQL'
    );
    // Add Redis ingress
    this.securityGroup.addIngressRule(
      ec2.Peer.anyIpv4(),
      ec2.Port.tcp(6379),
      'Redis'
    );
    // Add custom app port ingress
    this.securityGroup.addIngressRule(
      ec2.Peer.anyIpv4(),
      ec2.Port.tcp(8080),
      'Custom app port'
    );
```
🤖 Prompt for AI Agents
In lib/constructs/bastion-construct.ts around lines 38 to 79, the bastion
security group is opening HTTP/HTTPS, DB and cache ports (80, 443, 3306, 5432,
6379, 8080) to anyIpv4 which is unsafe; remove those ingressRule calls and
instead either restrict SSH (port 22) ingress to the allowedCidrs variable or
remove SSH entirely and configure the bastion to use SSM-only access, ensuring
no public 0.0.0.0/0 ingress remains for web/db/cache ports.
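To make the "SSM-only" option concrete, a rough sketch of a bastion with no inbound rules at all is shown below. This is a sketch under stated assumptions, not the repository's actual BastionConstruct; construct IDs and the instance sizing are illustrative.

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as iam from 'aws-cdk-lib/aws-iam';
import { Construct } from 'constructs';

// Bastion reachable only through SSM Session Manager: the security group has no
// inbound rules, so ports 22/80/443/3306/5432/6379/8080 stay closed.
export class SsmOnlyBastion extends Construct {
  constructor(scope: Construct, id: string, vpc: ec2.IVpc) {
    super(scope, id);

    const sg = new ec2.SecurityGroup(this, 'BastionSg', {
      vpc,
      description: 'Bastion SG - no inbound, SSM only',
      allowAllOutbound: true,
    });

    // Instance role with the managed policy SSM needs to register the instance.
    const role = new iam.Role(this, 'BastionRole', {
      assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com'),
      managedPolicies: [
        iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonSSMManagedInstanceCore'),
      ],
    });

    new ec2.Instance(this, 'Bastion', {
      vpc,
      vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS },
      instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MICRO),
      machineImage: ec2.MachineImage.latestAmazonLinux2023(),
      securityGroup: sg,
      role,
      requireImdsv2: true,
    });
  }
}
```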
```typescript
    // Add tags to CloudFront distribution
    cdk.Tags.of(this.distribution).add('Name', `${props.environment}-cloudfront`);
    cdk.Tags.of(this.distribution).add('Environment', props.environment);
    cdk.Tags.of(this.distribution).add('ManagedBy', 'Terraform');
    cdk.Tags.of(this.distribution).add('Service', 'CDN');
    if (props.costCenter) {
      cdk.Tags.of(this.distribution).add('CostCenter', props.costCenter);
    }
```
Tag inconsistency: ManagedBy should reflect CDK, not Terraform.
Retain CostCenter tagging; change ManagedBy to “AWS CDK”.
```diff
- cdk.Tags.of(this.distribution).add('ManagedBy', 'Terraform');
+ cdk.Tags.of(this.distribution).add('ManagedBy', 'AWS CDK');
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
     // Add tags to CloudFront distribution
     cdk.Tags.of(this.distribution).add('Name', `${props.environment}-cloudfront`);
     cdk.Tags.of(this.distribution).add('Environment', props.environment);
-    cdk.Tags.of(this.distribution).add('ManagedBy', 'Terraform');
+    cdk.Tags.of(this.distribution).add('ManagedBy', 'AWS CDK');
     cdk.Tags.of(this.distribution).add('Service', 'CDN');
     if (props.costCenter) {
       cdk.Tags.of(this.distribution).add('CostCenter', props.costCenter);
     }
```
🤖 Prompt for AI Agents
In lib/constructs/cloudfront-construct.ts around lines 186 to 194, the ManagedBy
tag is incorrectly set to "Terraform"; change it to "AWS CDK" to reflect that
CDK is managing the resource while keeping the existing Name, Environment,
Service tags and the conditional CostCenter tag untouched. Update the
cdk.Tags.of(this.distribution).add('ManagedBy', ...) call to use 'AWS CDK' and
leave all other tag lines as-is.
```typescript
    // Create node security group
    this.nodeSecurityGroup = new ec2.SecurityGroup(this, 'NodeSecurityGroup', {
      vpc: props.vpc,
      description: 'Security group for EKS nodes',
      allowAllOutbound: true,
    });

    // Add Kubelet API ingress
    this.nodeSecurityGroup.addIngressRule(
      ec2.Peer.ipv4(props.vpcCidr),
      ec2.Port.tcp(10251),
      'Kubelet API'
    );

    // Add SSH access ingress
    this.nodeSecurityGroup.addIngressRule(
      ec2.Peer.anyIpv4(),
      ec2.Port.tcp(22),
      'SSH access'
    );

    // Add NodePort services ingress
    this.nodeSecurityGroup.addIngressRule(
      ec2.Peer.anyIpv4(),
      ec2.Port.tcpRange(30000, 32767),
      'NodePort services'
    );

    cdk.Tags.of(this.nodeSecurityGroup).add('Name', `${props.clusterName}-node-sg`);
```
🧩 Analysis chain
Blocker: EKS node SG opens SSH/NodePort to the world and targets wrong kubelet port.
- Kubelet port is 10250 (not 10251).
- Don’t expose SSH or NodePort to 0.0.0.0/0. Restrict to VPC CIDR and/or bastion SG; kubelet should be from the cluster/control-plane SG.
Minimal safer change inside this construct:

```diff
-  // Add Kubelet API ingress
-  this.nodeSecurityGroup.addIngressRule(ec2.Peer.ipv4(props.vpcCidr), ec2.Port.tcp(10251), 'Kubelet API');
+  // Kubelet API from within VPC
+  this.nodeSecurityGroup.addIngressRule(ec2.Peer.ipv4(props.vpcCidr), ec2.Port.tcp(10250), 'Kubelet API');

-  // Add SSH access ingress
-  this.nodeSecurityGroup.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(22), 'SSH access');
+  // Optional: restrict SSH to VPC; better: allow only from bastion SG in the stack wiring
+  this.nodeSecurityGroup.addIngressRule(ec2.Peer.ipv4(props.vpcCidr), ec2.Port.tcp(22), 'SSH from VPC');

-  // Add NodePort services ingress
-  this.nodeSecurityGroup.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcpRange(30000, 32767), 'NodePort services');
+  // NodePort only within VPC
+  this.nodeSecurityGroup.addIngressRule(ec2.Peer.ipv4(props.vpcCidr), ec2.Port.tcpRange(30000, 32767), 'NodePort within VPC');
```

Follow-up (in the stack): add precise SG-to-SG rules from the cluster SG and bastion SG as needed.
I'll verify the technical accuracy of the review comment by checking AWS EKS best practices and inspecting the codebase.
🏁 Script executed:

```bash
#!/bin/bash
# Locate and display the EKS construct file to verify the exact code
find . -type f -name "eks-construct.ts" | head -5
```

Length of output: 33
🌐 Web query:
AWS EKS Kubelet API port 10250 10251 security group ingress
💡 Result:
Short answer

- Kubelet API (TCP 10250) is the port the kubelet listens on and the EKS control plane must be able to reach it. Do NOT open 10250 to the Internet — allow it only from the EKS control-plane security group (or other trusted CIDRs/SGs). [1][2][3]
- Port 10251 is the (insecure) kube-scheduler port used by control-plane components; you normally do NOT need to open 10251 on worker nodes for EKS (the scheduler runs on the control plane). [2][4]
Recommended security-group rules (minimal)
- Control-plane SG -> Node SG: allow outbound TCP 10250 (control plane egress). [1][3]
- Node SG -> Control-plane SG: allow outbound TCP 443 (nodes to API server). [1][3]
- Node SG: allow inbound TCP 10250 only from the control-plane SG (or from specific admin CIDR if you run kubelet direct access for ops). Never 0.0.0.0/0. [1][7]
Notes and hardening tips
- Authenticate/authorize Kubelet API (TLS, client certs, rotation) and avoid anonymous or read-only ports (e.g., 10255). [2][8]
- If you use security-groups-for-pods, ensure pod SGs allow kubelet probe traffic as required. [5]
Sources
- AWS EKS network/security best practices and required SG rules. [1][3]
- Kubernetes ports reference (kubelet, scheduler, controller-manager). [2]
- EKS security-group requirements & pod SG guidance. [5]
References
[1] AWS EKS — Network security / Best practices.
[2] Kubernetes — Ports and Protocols (kubelet 10250, scheduler 10251).
[3] AWS EKS — View Amazon EKS security group requirements for clusters.
[4] EKS Anywhere / docs listing control-plane ports.
[5] AWS EKS — Security groups for pods (pod SG requirements).
[7] (see AWS examples showing control-plane SG -> node SG 10250 rules).
[8] Kubelet security hardening guidance.
Now I'll inspect the actual file and search for related security group definitions in the codebase.
🏁 Script executed:

```bash
#!/bin/bash
# Display the full EKS construct file
cat ./lib/constructs/eks-construct.ts
```

Length of output: 6130
🏁 Script executed:

```bash
#!/bin/bash
# Search for other SecurityGroup or control-plane references in constructs
rg -n "SecurityGroup|controlPlane|Bastion|bastion" ./lib/constructs --type ts -C 2
```

Length of output: 11929
🏁 Script executed:

```bash
#!/bin/bash
# Search for props interface/type definition for EksConstruct
rg -n "interface.*Props|props.*vpcCidr|props.*vpc" ./lib/constructs/eks-construct.ts -A 3 -B 1
```

Length of output: 764
Critical: EKS node security group exposes SSH/NodePort to 0.0.0.0/0 and targets wrong Kubelet port.
Port 10250 is the Kubelet API port that the EKS control plane must reach; 10251 is the insecure kube-scheduler port that runs on the control plane only. The code at line 41 uses port 10251, which is incorrect. Additionally, lines 47 and 54 expose SSH and NodePort services to 0.0.0.0/0, creating critical security vulnerabilities.
Fix required:
```diff
 // Add Kubelet API ingress
- this.nodeSecurityGroup.addIngressRule(
- ec2.Peer.ipv4(props.vpcCidr),
- ec2.Port.tcp(10251),
- 'Kubelet API'
- );
+ this.nodeSecurityGroup.addIngressRule(
+ ec2.Peer.ipv4(props.vpcCidr),
+ ec2.Port.tcp(10250),
+ 'Kubelet API'
+ );
// Add SSH access ingress
- this.nodeSecurityGroup.addIngressRule(
- ec2.Peer.anyIpv4(),
- ec2.Port.tcp(22),
- 'SSH access'
- );
+ this.nodeSecurityGroup.addIngressRule(
+ ec2.Peer.ipv4(props.vpcCidr),
+ ec2.Port.tcp(22),
+ 'SSH from VPC'
+ );
// Add NodePort services ingress
- this.nodeSecurityGroup.addIngressRule(
- ec2.Peer.anyIpv4(),
- ec2.Port.tcpRange(30000, 32767),
- 'NodePort services'
- );
+ this.nodeSecurityGroup.addIngressRule(
+ ec2.Peer.ipv4(props.vpcCidr),
+ ec2.Port.tcpRange(30000, 32767),
+ 'NodePort within VPC'
+  );
```

For production hardening, add SG-to-SG rules in the consuming stack to allow kubelet/SSH access only from the cluster control-plane SG and bastion SG as needed.
🤖 Prompt for AI Agents
In lib/constructs/eks-construct.ts around lines 31–60, the node security group
currently opens insecure rules and uses the wrong kubelet port: change the
Kubelet API ingress port from 10251 to 10250, remove the ec2.Peer.anyIpv4()
rules for SSH and NodePort (do not expose 0.0.0.0/0), and instead accept
optional props (e.g. controlPlaneSecurityGroup and bastionSecurityGroup) and
only add SG-to-SG ingress rules when those SG props are provided (allow
controlPlaneSecurityGroup to access tcp(10250) and node ports, and allow
bastionSecurityGroup to access tcp(22)); if those props are not supplied, do not
create wide-open SSH/NodePort rules so that consuming stacks can add strict
SG-to-SG rules as recommended.
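A sketch of that stack-level SG-to-SG wiring is below. The handle names (cluster, nodeSecurityGroup, bastionSecurityGroup) are assumptions about what the constructs expose in this codebase, not confirmed property names.

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as eks from 'aws-cdk-lib/aws-eks';

// Assumed handles created elsewhere in infrastructure-stack.ts.
declare const cluster: eks.Cluster;
declare const nodeSecurityGroup: ec2.SecurityGroup;
declare const bastionSecurityGroup: ec2.ISecurityGroup;

// Kubelet API (10250) only from the cluster's control-plane security group.
nodeSecurityGroup.addIngressRule(
  cluster.clusterSecurityGroup,
  ec2.Port.tcp(10250),
  'Kubelet API from control plane'
);

// SSH only from the bastion security group (or drop this entirely if SSM is used).
nodeSecurityGroup.addIngressRule(
  bastionSecurityGroup,
  ec2.Port.tcp(22),
  'SSH from bastion'
);
```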
```typescript
    this.securityGroup.addIngressRule(
      ec2.Peer.anyIpv4(),
      ec2.Port.tcp(5432),
      'PostgreSQL from anywhere'
    );
```
🧩 Analysis chain
Blocker: RDS instance is publicly exposed, unencrypted, with hardcoded credentials and no backups.
Please address all of the following:
- Remove 0.0.0.0/0 DB ingress; define no-ingress here and grant SG-scoped access in the stack.
- Use Secrets Manager (generated secret) instead of a hardcoded password.
- Enable storage encryption.
- Make the DB not publicly accessible (keep in private subnets).
- Set sane backup retention and consider deletion protection for prod.
Suggested secure baseline:
```diff
-    // Add PostgreSQL ingress from anywhere (0.0.0.0/0)
-    this.securityGroup.addIngressRule(
-      ec2.Peer.anyIpv4(),
-      ec2.Port.tcp(5432),
-      'PostgreSQL from anywhere'
-    );
+    // No default ingress here. Grant SG-scoped access from callers (bastion/EKS) in the stack.
@@
-      credentials: rds.Credentials.fromPassword(
-        props.masterUsername,
-        cdk.SecretValue.unsafePlainText('admin123') // WARNING: Hardcoded password from patch
-      ),
+      credentials: rds.Credentials.fromGeneratedSecret(props.masterUsername, {
+        secretName: `${props.environment}/rds/${props.databaseName}/master`,
+      }),
@@
-      storageEncrypted: false, // As per patch
+      storageEncrypted: true,
@@
-      publiclyAccessible: true, // As per patch
+      publiclyAccessible: false,
@@
-      backupRetention: cdk.Duration.days(0), // No backups as per patch
-      deleteAutomatedBackups: true,
-      removalPolicy: cdk.RemovalPolicy.DESTROY,
-      deletionProtection: false,
+      backupRetention: props.environment === 'prod' ? cdk.Duration.days(7) : cdk.Duration.days(1),
+      deleteAutomatedBackups: false,
+      removalPolicy: props.environment === 'prod' ? cdk.RemovalPolicy.RETAIN : cdk.RemovalPolicy.DESTROY,
+      deletionProtection: props.environment === 'prod',
```

Optionally also enable logs and Performance Insights:

```diff
-      enablePerformanceInsights: false,
-      cloudwatchLogsExports: [],
+      enablePerformanceInsights: props.environment === 'prod',
+      cloudwatchLogsExports: ['postgresql'],
```

Also applies to: 56-62, 63-71, 72-77
---
🏁 Script executed:
```shell
#!/bin/bash
# Read the RDS construct file to verify security issues
cat lib/constructs/rds-construct.ts
```

Length of output: 3594
Blocker: RDS instance is publicly exposed, unencrypted, with hardcoded credentials and no backups.
Please address all of the following:
- Remove 0.0.0.0/0 DB ingress (lines 31–36); define no-ingress here and grant SG-scoped access in the stack.
- Use Secrets Manager (generated secret) instead of hardcoded password (lines 56–58).
- Enable storage encryption (line 63).
- Make the DB not publicly accessible (line 66).
- Set sane backup retention and deletion protection for prod (lines 69–72).
Suggested secure baseline:
```diff
-    // Add PostgreSQL ingress from anywhere (0.0.0.0/0)
-    this.securityGroup.addIngressRule(
-      ec2.Peer.anyIpv4(),
-      ec2.Port.tcp(5432),
-      'PostgreSQL from anywhere'
-    );
+    // No default ingress here. Grant SG-scoped access from callers (bastion/EKS) in the stack.
@@
-      credentials: rds.Credentials.fromPassword(
-        props.masterUsername,
-        cdk.SecretValue.unsafePlainText('admin123') // WARNING: Hardcoded password from patch
-      ),
+      credentials: rds.Credentials.fromGeneratedSecret(props.masterUsername, {
+        secretName: `${props.environment}/rds/${props.databaseName}/master`,
+      }),
@@
-      storageEncrypted: false, // As per patch
+      storageEncrypted: true,
@@
-      publiclyAccessible: true, // As per patch
+      publiclyAccessible: false,
@@
-      backupRetention: cdk.Duration.days(0), // No backups as per patch
-      deleteAutomatedBackups: true,
-      removalPolicy: cdk.RemovalPolicy.DESTROY,
-      deletionProtection: false,
+      backupRetention: props.environment === 'prod' ? cdk.Duration.days(7) : cdk.Duration.days(1),
+      deleteAutomatedBackups: false,
+      removalPolicy: props.environment === 'prod' ? cdk.RemovalPolicy.RETAIN : cdk.RemovalPolicy.DESTROY,
+      deletionProtection: props.environment === 'prod',
```

Optionally also enable logs and Performance Insights:

```diff
-      enablePerformanceInsights: false,
-      cloudwatchLogsExports: [],
+      enablePerformanceInsights: props.environment === 'prod',
+      cloudwatchLogsExports: ['postgresql'],
```

Committable suggestion skipped: line range outside the PR's diff.
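For completeness, a short sketch of how callers could consume this baseline, assuming the construct exposes its rds.DatabaseInstance as `instance` and the stack holds the other handles shown; these names are assumptions, not the repository's confirmed API.

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as iam from 'aws-cdk-lib/aws-iam';
import * as rds from 'aws-cdk-lib/aws-rds';

// Assumed handles from the surrounding stack.
declare const dbInstance: rds.DatabaseInstance;
declare const appRole: iam.IRole;            // e.g. an application/task role
declare const eksNodeSg: ec2.ISecurityGroup; // EKS node security group

// With fromGeneratedSecret the password lives in Secrets Manager; grant read access
// to the workload instead of passing the value around.
dbInstance.secret?.grantRead(appRole);

// Network access via SG-to-SG rules rather than 0.0.0.0/0.
dbInstance.connections.allowFrom(eksNodeSg, ec2.Port.tcp(5432), 'PostgreSQL from EKS nodes');
```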
```typescript
      blockPublicAccess: new s3.BlockPublicAccess({
        blockPublicAcls: false,
        blockPublicPolicy: false,
        ignorePublicAcls: false,
        restrictPublicBuckets: false,
      }),
```
Blocker: Public access protections disabled and encryption removed on S3.
- Keep Block Public Access = BLOCK_ALL; OAI works without disabling BPA.
- Enable encryption (SSE-S3 or SSE-KMS) on both buckets.
Suggested fix:
```diff
   this.staticBucket = new s3.Bucket(this, 'StaticBucket', {
     bucketName: props.staticBucketName,
-    blockPublicAccess: new s3.BlockPublicAccess({
-      blockPublicAcls: false,
-      blockPublicPolicy: false,
-      ignorePublicAcls: false,
-      restrictPublicBuckets: false,
-    }),
+    blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
+    encryption: s3.BucketEncryption.S3_MANAGED,
     versioned: props.enableVersioning,
     cors: staticBucketCors,
     removalPolicy: cdk.RemovalPolicy.RETAIN,
     autoDeleteObjects: false,
   });

   this.userBucket = new s3.Bucket(this, 'UserBucket', {
     bucketName: props.userBucketName,
     blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
+    encryption: s3.BucketEncryption.S3_MANAGED,
     versioned: props.enableVersioning,
     cors: userBucketCors,
     removalPolicy: cdk.RemovalPolicy.RETAIN,
     autoDeleteObjects: false,
     lifecycleRules: props.enableLifecycleRules ? [ /* ... */ ] : undefined,
   });
```

Also applies to: 79-117
🤖 Prompt for AI Agents
In lib/constructs/s3-construct.ts around lines 62 to 67 (and also apply same
fixes to lines 79-117), the S3 buckets have Block Public Access flags disabled
and no server-side encryption; restore strong defaults by setting block public
access to BLOCK_ALL (or construct BlockPublicAccess with all true) and enable
bucket encryption using SSE-S3 (s3-managed) or SSE-KMS (KMS key) as appropriate;
update the bucket properties to include blockPublicAccess:
s3.BlockPublicAccess.BLOCK_ALL and encryption: s3.BucketEncryption.S3_MANAGED
(or ENCRYPTION_KMS with a KMS key) for both buckets.
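To illustrate the point that an OAI works with Block Public Access fully enabled, here is a rough sketch using the classic OriginAccessIdentity pattern (newer aws-cdk-lib versions also offer origin access control); construct IDs are illustrative and this is not the repository's actual S3 or CloudFront construct.

```typescript
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';
import * as s3 from 'aws-cdk-lib/aws-s3';
import { Construct } from 'constructs';

// BLOCK_ALL and SSE-S3 stay on; CloudFront reads through an OAI whose access is
// granted via the bucket policy, so nothing about the bucket needs to be public.
function addCdn(scope: Construct) {
  const staticBucket = new s3.Bucket(scope, 'StaticBucket', {
    blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
    encryption: s3.BucketEncryption.S3_MANAGED,
  });

  const oai = new cloudfront.OriginAccessIdentity(scope, 'Oai');
  staticBucket.grantRead(oai); // adds the s3:GetObject bucket-policy statement for the OAI

  new cloudfront.Distribution(scope, 'Cdn', {
    defaultBehavior: {
      origin: new origins.S3Origin(staticBucket, { originAccessIdentity: oai }),
    },
  });
}
```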
```typescript
    // Create default security group with allow all inbound
    const defaultSecurityGroup = new ec2.SecurityGroup(this, 'DefaultSecurityGroup', {
      vpc: this.vpc,
      description: 'Default security group with allow all inbound',
      allowAllOutbound: true,
      securityGroupName: `${props.environment}-default-sg`,
    });

    // Add allow all inbound rule
    defaultSecurityGroup.addIngressRule(
      ec2.Peer.anyIpv4(),
      ec2.Port.allTraffic(),
      'Allow all inbound'
    );

    cdk.Tags.of(defaultSecurityGroup).add('Name', `${props.environment}-default-sg`);
```
Blocker: Security group allows all inbound from the internet.
Creating a “default” SG that permits all inbound (0.0.0.0/0, all ports) is a severe risk. If attached anywhere, it exposes workloads.
- Remove the allow-all ingress. Keep no-ingress baseline or add only specific ports/sources when wiring.
- If you need a baseline SG, prefer no inbound and allowAllOutbound: true.
Suggested change:
```diff
-    // Add allow all inbound rule
-    defaultSecurityGroup.addIngressRule(
-      ec2.Peer.anyIpv4(),
-      ec2.Port.allTraffic(),
-      'Allow all inbound'
-    );
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
     // Create default security group with allow all inbound
     const defaultSecurityGroup = new ec2.SecurityGroup(this, 'DefaultSecurityGroup', {
       vpc: this.vpc,
       description: 'Default security group with allow all inbound',
       allowAllOutbound: true,
       securityGroupName: `${props.environment}-default-sg`,
     });

-    // Add allow all inbound rule
-    defaultSecurityGroup.addIngressRule(
-      ec2.Peer.anyIpv4(),
-      ec2.Port.allTraffic(),
-      'Allow all inbound'
-    );

     cdk.Tags.of(defaultSecurityGroup).add('Name', `${props.environment}-default-sg`);
```
🤖 Prompt for AI Agents
In lib/constructs/vpc-construct.ts around lines 56 to 72, the code creates a
default security group that adds an ingress rule allowing all inbound traffic
from any IPv4 address; remove the call that adds this open ingress and instead
leave the security group with no inbound rules (only allowAllOutbound: true), or
if required later, add minimal, explicit ingress rules scoped to specific ports
and trusted CIDR/peers when wiring resources; also keep the Name tag but do not
reintroduce any 0.0.0.0/0 all-ports rules anywhere in this construct.
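A small sketch of that no-ingress baseline plus explicit wiring, under the assumption that callers open specific ports later; the peer security group below is purely illustrative.

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import { Construct } from 'constructs';

// Baseline security group: outbound open, no inbound rules at all.
function createBaselineSg(scope: Construct, vpc: ec2.IVpc): ec2.SecurityGroup {
  return new ec2.SecurityGroup(scope, 'BaselineSg', {
    vpc,
    description: 'Baseline SG - no inbound by default',
    allowAllOutbound: true,
  });
}

// Later, when wiring resources, open only specific ports from trusted peers, e.g.:
declare const baselineSg: ec2.SecurityGroup;
declare const albSg: ec2.ISecurityGroup; // illustrative upstream (e.g. a load balancer SG)
baselineSg.addIngressRule(albSg, ec2.Port.tcp(8080), 'App traffic from the load balancer only');
```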