This repository has been archived by the owner on Jan 19, 2022. It is now read-only.

(WIP) Management subnets proposal, and splitting servers for security #68

Closed


oskarpearson

(Please note that this is an interim PR for review purposes.)

It might be best to chat about this around a whiteboard :)

For many systems, a 'flat' network where the salt servers and clients run on the same infrastructure is not appropriate.

Further, having the salt server live on a randomly-selected host in the autoscaling group makes life difficult: if that host goes away, autoscaling won't bring the salt server back.

For the project I'm working on (DSDS), we foresee:

  1. The docker servers, running containers for serving web requests.
  2. Docker containers for things like Sidekiq. These may run on different hosts from the web server hosts, or share them.
  3. A management network. This contains the salt server, build servers, monitoring, logging, and similar things. There may be an 'SSH bastion host' that limits access to the management network.
  4. Security groups that manage the interactions between the networks and hosts (see the sketch after this list).
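
To make item 4 a little more concrete, here's a minimal sketch of the kind of rule we mean: the management network only admits SSH from the bastion host's security group. This assumes boto3 and uses placeholder VPC/group IDs; it is not existing bootstrap-cfn code.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # region is an assumption

# Placeholder IDs -- in practice these would come from the stack outputs.
vpc_id = "vpc-xxxxxxxx"
bastion_sg_id = "sg-xxxxxxxx"

# Security group for the management hosts (salt master, monitoring, logging).
mgmt_sg = ec2.create_security_group(
    GroupName="management",
    Description="Management network hosts",
    VpcId=vpc_id,
)

# Only the bastion host's security group may open SSH connections inbound.
ec2.authorize_security_group_ingress(
    GroupId=mgmt_sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "UserIdGroupPairs": [{"GroupId": bastion_sg_id}],
    }],
)
```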

A key change to support this is that building the docker servers would happen entirely through autoscaling. The bootstrap engine would not be responsible for bringing up those hosts. It would, however, create the AWS autoscaling group for them, and then let AWS build the hosts to fulfil that autoscaling group; those hosts would pick up their configuration from salt.

Said slightly differently: AWS will be responsible for bringing up servers in an autoscaling group, and the boot process on those hosts will check in with salt and fetch their configs from the salt server.
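
As an illustration of that boot-time check-in (not the actual bootstrap-cfn code), the launch configuration's user data could install salt-minion via the public salt-bootstrap script and point it at the master. Roughly, assuming boto3 and placeholder AMI/subnet/master values:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

# Placeholder values -- the real ones would come from the management stack.
salt_master = "salt-master.internal.example"
ami_id = "ami-xxxxxxxx"

# On first boot each instance installs salt-minion and registers with the master.
user_data = """#!/bin/bash
curl -L https://bootstrap.saltstack.com | sh -s -- -A {master}
""".format(master=salt_master)

autoscaling.create_launch_configuration(
    LaunchConfigurationName="docker-hosts",
    ImageId=ami_id,
    InstanceType="t2.medium",
    UserData=user_data,
)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="docker-hosts",
    LaunchConfigurationName="docker-hosts",
    MinSize=2,
    MaxSize=6,
    VPCZoneIdentifier="subnet-xxxxxxxx",  # the docker-host subnet(s)
)
```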

What I've done at a Code Level

On review, everything in the 'bootstrap_cfn/ec2.py' file actually revolved around managing the salt master and client configs. I've thus renamed the file. This additionally makes sense because the bootstrap engine wouldn't be responsible for actually creating any ec2 instances other than the management host instances.

Next up would be to:

  1. Clean up lots of code to make more sense, given the rename.
  2. Change things so that the autoscaling config automatically fetches its config from the salt server.
  3. Push the salt config to the salt server, and only then create the autoscaling groups (a sequencing sketch follows this list).
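
For point 3 the ordering is the important part: the states need to be on the master before any minion boots against them. A rough sketch of that sequencing, with hypothetical helper names rather than existing bootstrap-cfn functions:

```python
import subprocess

def push_salt_config(master_host, local_states_dir="salt/"):
    """Copy the salt states to the master before any minions exist."""
    subprocess.check_call([
        "rsync", "-az", local_states_dir,
        "root@{}:/srv/salt/".format(master_host),
    ])

def deploy(master_host, create_autoscaling_group):
    # 1. Make sure the master can serve the states...
    push_salt_config(master_host)
    # 2. ...and only then create the autoscaling group, so the first
    #    minions that boot find their config waiting for them.
    create_autoscaling_group()
```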

We may also:

  1. Support the management network existing in multiple availability zones.
  2. Support multiple salt masters (see the minion-side sketch below).
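
For multiple salt masters, much of the work is on the minion side, since the minion config accepts a list of masters. A sketch of generating that drop-in config with PyYAML; the hostnames are placeholders:

```python
import yaml

# Salt minions accept a list under `master:` for multi-master setups.
minion_config = {
    "master": [
        "salt-master-a.internal.example",
        "salt-master-b.internal.example",
    ],
}

# Written as a drop-in file that the minion merges into its main config.
with open("/etc/salt/minion.d/masters.conf", "w") as f:
    yaml.safe_dump(minion_config, f, default_flow_style=False)
```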

Feedback about this direction is welcome.

@coveralls

Coverage Status

Coverage decreased (-0.1%) to 53.16% when pulling ad0b044 on management_subnets into cce973f on dockerfile_for_local_execution.

@ashb
Contributor

ashb commented Apr 9, 2015

⬜ ✏️ time! (tomorrow?)

@ashb ashb added the question label Apr 9, 2015
@ashb
Contributor

ashb commented Apr 14, 2015

@oskarpearson Code renaming aside, I think some of what you want can now be achieved once #74 is merged, specifically this point.

It adds the ability to include custom CloudFormation templates.

Though if you wanted the salt master elsewhere, that would likely still require a code change.

@oskarpearson
Author

I'm going to close this and refactor into smaller changes.

@ashb ashb deleted the management_subnets branch June 22, 2015 11:16