
Only allow SSH to reach Packer instance from Elastic Stack #1308

Merged: 1 commit merged into main from packer-security-group on Apr 3, 2024

Conversation

steveh (Member) commented Apr 3, 2024

GuardDuty is complaining to us about Packer instances being probed on their SSH port, which by default is open to the world.

Packer can be configured to allow connections only from a particular CIDR; however, elastic-stack runs in the `main` VPC on `10.16.0.0/16`, whereas Packer runs in the default VPC on `172.31.0.0/16`, and there is no routing configured between them.

Apparently they communicate over the public internet, so this seems like the simplest way to block public SSH.
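
For context, the relevant knobs live on the amazon-ebs source block. A minimal sketch, assuming the current plugin's option names (the exact change in this PR may differ):

```hcl
source "amazon-ebs" "elastic-ci-stack-ami" {
  # ...region, instance type, and AMI settings elided...

  # Have Packer look up the build host's public IP and authorize only
  # <that-ip>/32 on the SSH port in the temporary security group it creates.
  temporary_security_group_source_public_ip = true

  # Alternatively, restrict SSH to an explicit CIDR. This only helps if
  # traffic actually arrives from that range, which it doesn't here, since
  # the two VPCs talk over the public internet.
  # temporary_security_group_source_cidrs = ["10.16.0.0/16"]
}
```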

There are a few other options to consider; I'm happy to go down one of these paths instead:

* Launch the Packer instances in the same VPC as the Elastic Stack by hard-coding `vpc_id` (brittle if you update the stack)
* Launch the Packer instances in the same VPC as the Elastic Stack by setting `vpc_filter` (we currently have 2 identically tagged VPCs, left over from a previous stack version perhaps?); see the sketch after this list
* Peer the Packer VPC and Elastic Stack VPCs so they can communicate privately
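
If we preferred one of the VPC routes, the builder config would look roughly like this. A sketch only, with an assumed tag key and a placeholder VPC ID, not the actual stack's values:

```hcl
source "amazon-ebs" "elastic-ci-stack-ami" {
  # Option 1: pin the VPC directly. Brittle, because the ID changes
  # whenever the stack is torn down and recreated.
  # vpc_id = "vpc-0123456789abcdef0"  # placeholder ID

  # Option 2: select the VPC by filter at build time. The filter has to
  # resolve to a single VPC, so the two identically tagged VPCs would
  # need to be cleaned up first.
  vpc_filter {
    filters = {
      "tag:Name" = "main"      # assumed tag; match the Elastic Stack VPC
      "state"    = "available"
    }
  }
}
```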
steveh requested a review from a team on April 3, 2024 at 02:27
steveh (Member, Author) commented Apr 3, 2024

Checked it works locally:

```
$ aws-vault exec buildkite-sandbox-admin -- make packer-linux-arm64.output
==> amazon-ebs.elastic-ci-stack-ami: Creating temporary security group for this instance: packer_660db2d5-affb-5cb0-4afc-78f531d07987
==> amazon-ebs.elastic-ci-stack-ami: Checking current host's public IP...
==> amazon-ebs.elastic-ci-stack-ami: Current host's public IP: 101.98.[redacted]
==> amazon-ebs.elastic-ci-stack-ami: Authorizing access to port 22 from [101.98.[redacted]/32] in the temporary security groups...
Build 'amazon-ebs.elastic-ci-stack-ami' finished after 6 minutes 41 seconds.
```

[Screenshot: CleanShot 2024-04-04 at 08 51 55]

steveh merged commit e0a8ab8 into main on Apr 3, 2024
1 check passed
steveh deleted the packer-security-group branch on April 3, 2024 at 19:56