This repository has been archived by the owner on Feb 5, 2020. It is now read-only.
Conversation
Tested successfully on AWS, also exposing a LoadBalancer service.
Currently, masters and workers share a fairly open security group. Furthermore, workers expose ingress traffic on critical k8s ports such as 10250 and 10255. This fixes that by removing the common cluster-default security group and specifying separate ingress/egress rules reflecting the settings from the current Tectonic installer. It also assigns only one security group each for masters and workers. Fixes coreos#248, coreos#243, coreos#227
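As a sketch of the approach the description outlines, a per-role security group with stand-alone rules might look like this in Terraform. The resource and variable names here are illustrative assumptions, not necessarily the PR's exact contents:

```hcl
# Hypothetical sketch: a dedicated master security group whose rules are
# stand-alone resources rather than inline blocks.
resource "aws_security_group" "master" {
  vpc_id = "${var.vpc_id}"

  tags {
    Name = "${var.cluster_name}_master_sg"
  }
}

# The kubelet API (10250) is no longer world-open: ingress is limited to
# traffic originating from the master security group itself.
resource "aws_security_group_rule" "master_ingress_kubelet_secure" {
  type                     = "ingress"
  security_group_id        = "${aws_security_group.master.id}"
  source_security_group_id = "${aws_security_group.master.id}"
  protocol                 = "tcp"
  from_port                = 10250
  to_port                  = 10250
}
```

Scoping each rule to a source security group, rather than sharing one permissive cluster group, is what keeps ports like 10250 and 10255 off the public ingress surface.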
s-urbaniak force-pushed the fix-248 branch 2 times, most recently from 05b3e0a to 80df194 on April 19, 2017 14:54
Still testing with an external VPC, so do not merge yet.
... because that one is configured and recommended, since it is the IANA-assigned one. Tools like tcpdump then decode VXLAN packets natively. The old port (8472) is retained as the default in the kernel for backwards-compatibility purposes only; see [1]. Other projects have also switched to the new IANA-assigned port. [1] http://lxr.free-electrons.com/source/drivers/net/vxlan.c#L43
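With the IANA port in use, the corresponding security-group rule would admit VXLAN on UDP 4789 between nodes. This is a hedged sketch; the resource names and group references are hypothetical:

```hcl
# Illustrative only: allow VXLAN overlay traffic on the IANA-assigned
# UDP port 4789 between worker nodes (instead of the legacy 8472).
resource "aws_security_group_rule" "worker_ingress_flannel" {
  type                     = "ingress"
  security_group_id        = "${aws_security_group.worker.id}"
  source_security_group_id = "${aws_security_group.worker.id}"
  protocol                 = "udp"
  from_port                = 4789
  to_port                  = 4789
}
```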
OK, tested successfully with an external VPC as well; this is ready to go/review.
alexsomesan approved these changes on Apr 19, 2017
LGTM, I paired with @s-urbaniak for various tests on this change. Builds well.
wking added a commit to wking/openshift-installer that referenced this pull request on Aug 20, 2018
This happened for masters and workers in b620c16 (modules/aws: tighten security groups, 2017-04-19, coreos/tectonic-installer#264), where:

* Master ingress/egress rules moved from inline entries in modules/aws/master-asg/master.tf to stand-alone rules in modules/aws/vpc/sg-master.tf.
* Worker ingress/egress rules moved from inline entries in modules/aws/worker-asg/security-groups.tf to stand-alone rules in modules/aws/vpc/sg-worker.tf.

For some reason, b620c16 moved the etcd security group from modules/aws/etcd/network.tf to modules/aws/vpc/sg-etcd.tf without splitting out the inline rules, so this commit catches up for consistency with the other node classes. From the Terraform docs [1]:

    Terraform currently provides both a standalone Security Group Rule resource (a single ingress or egress rule), and a Security Group resource with ingress and egress rules defined in-line. At this time you cannot use a Security Group with in-line rules in conjunction with any Security Group Rule resources. Doing so will cause a conflict of rule settings and will overwrite rules.

We can also use the rule name to hint at the purpose of a rule, while with inline rules we have only port numbers. In this case, the *_etcd and *_peer suffixes are based on [2]:

    The official etcd ports are 2379 for client requests, and 2380 for peer communication.

[1]: https://www.terraform.io/docs/providers/aws/r/security_group_rule.html
[2]: https://github.com/coreos/etcd/tree/v3.3.9#etcd-tcp-ports
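A sketch of what the split-out etcd rules could look like, following the *_etcd/*_peer naming convention the message describes. The resource names and source groups are assumptions, not the commit's exact contents:

```hcl
# etcd client traffic (2379), e.g. from the control plane; the source
# security group here is an assumption for illustration.
resource "aws_security_group_rule" "etcd_ingress_etcd" {
  type                     = "ingress"
  security_group_id        = "${aws_security_group.etcd.id}"
  source_security_group_id = "${aws_security_group.master.id}"
  protocol                 = "tcp"
  from_port                = 2379
  to_port                  = 2379
}

# etcd peer communication (2380), scoped to the etcd group itself.
resource "aws_security_group_rule" "etcd_ingress_peer" {
  type              = "ingress"
  security_group_id = "${aws_security_group.etcd.id}"
  self              = true
  protocol          = "tcp"
  from_port         = 2380
  to_port           = 2380
}
```

As the quoted Terraform docs note, once rules are stand-alone resources like these, the parent security group must not also declare inline ingress/egress blocks, or the two will overwrite each other.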
wking added a commit to wking/openshift-installer that referenced this pull request on Aug 20, 2018
Patterned on the existing worker_ingress_kubelet_insecure_from_master from b620c16 (modules/aws: tighten security groups, 2017-04-19, coreos/tectonic-installer#264). This should address errors like:

    $ curl -s https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_installer/150/pull-ci-origin-installer-e2e-aws/546/artifacts/e2e-aws/nodes/ip-10-0-52-134.ec2.internal/journal.gz | zcat >journal
    $ journalread journal | grep 'current config label' | head -n3
    2018-08-20T20:30:08.000827895Z I0820 20:30:08.826840 1 tnc.go:375] Node ip-10-0-134-147.ec2.internal does not have a current config label
    2018-08-20T20:30:08.00082814Z I0820 20:30:08.826860 1 tnc.go:375] Node ip-10-0-153-195.ec2.internal does not have a current config label
    2018-08-20T20:30:08.000828371Z I0820 20:30:08.826866 1 tnc.go:375] Node ip-10-0-166-239.ec2.internal does not have a current config label

on the master node:

    $ journalread journal | grep -A15 'Starting Ignition' | grep -v INFO
    2018-08-20T20:21:40.00097323Z Starting Ignition (files)...
    2018-08-20T20:21:40.000991225Z DEBUG    : parsed url from cmdline: ""
    2018-08-20T20:21:41.000010266Z DEBUG    : parsing config: {
    2018-08-20T20:21:41.0000122Z   "ignition": {
    2018-08-20T20:21:41.000014165Z     "config": {
    2018-08-20T20:21:41.000016186Z       "append": [
    2018-08-20T20:21:41.000018193Z         {
    2018-08-20T20:21:41.000023296Z           "source": "http://ci-op-imi5mbig-68485-tnc.origin-ci-int-aws.dev.rhcloud.com:80/config/master",

which were resulting in:

    $ curl -s https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_installer/150/pull-ci-origin-installer-e2e-aws/546/build-log.txt | grep Ginkgo
    Ginkgo timed out waiting for all parallel nodes to report back!
    Ginkgo ran 1 suite in 10m6.626061944s

Inbound 10250 is the kubelet API used by the control plane [1]. Clayton suspects the e2e-aws tests are trying to get metrics from the kubelets, and hanging on the etcd kubelet because this rule was missing. I'm not clear why we've only been seeing this issue for the last week, though.

[1]: https://kubernetes.io/docs/setup/independent/install-kubeadm/#worker-node-s
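The rule the commit describes might be sketched like this; the resource name and group references are assumptions patterned on the worker rule it mentions:

```hcl
# Hypothetical sketch: allow the control plane to reach the kubelet API
# (10250) on etcd nodes, mirroring the existing worker-side rule.
resource "aws_security_group_rule" "etcd_ingress_kubelet_secure_from_master" {
  type                     = "ingress"
  security_group_id        = "${aws_security_group.etcd.id}"
  source_security_group_id = "${aws_security_group.master.id}"
  protocol                 = "tcp"
  from_port                = 10250
  to_port                  = 10250
}
```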
wking added a commit to wking/openshift-installer that referenced this pull request on Aug 20, 2018, with the same commit message as above.
wking added a commit to wking/openshift-installer that referenced this pull request on Aug 20, 2018, with the same commit message as above, amended to note that the journalread tool used in the commands is from https://github.com/smarterclayton/journalread.
wking added a commit to wking/openshift-installer that referenced this pull request on Aug 20, 2018, with the same (amended) commit message as above.
wking added a commit to wking/openshift-installer that referenced this pull request on Aug 21, 2018, with the same (amended) commit message as above.
wking added a commit to wking/openshift-installer that referenced this pull request on Aug 21, 2018
This happened for masters and workers in b620c16 (modules/aws: tighten security groups, 2017-04-19, coreos/tectonic-installer#264), where:

* Master ingress/egress rules moved from inline entries in modules/aws/master-asg/master.tf to stand-alone rules in modules/aws/vpc/sg-master.tf.
* Worker ingress/egress rules moved from inline entries in modules/aws/worker-asg/security-groups.tf to stand-alone rules in modules/aws/vpc/sg-worker.tf.

This commit catches up for consistency with the other node classes. From the Terraform docs [1]:

    Terraform currently provides both a standalone Security Group Rule resource (a single ingress or egress rule), and a Security Group resource with ingress and egress rules defined in-line. At this time you cannot use a Security Group with in-line rules in conjunction with any Security Group Rule resources. Doing so will cause a conflict of rule settings and will overwrite rules.

We can also use the rule name to hint at the purpose of a rule (e.g. tnc_ingress_http), while with inline rules we have only port numbers. This also drops some 'self' properties from egress rules, which did not make sense. 'self' and 'cidr_blocks' are incompatible, resulting in [2]:

    3 error(s) occurred:
    * module.vpc.aws_security_group_rule.api_egress: 1 error(s) occurred:
    * aws_security_group_rule.api_egress: 'self': conflicts with 'cidr_blocks' ([]interface {}{"0.0.0.0/0"})
    * module.vpc.aws_security_group_rule.tnc_egress: 1 error(s) occurred:
    * aws_security_group_rule.tnc_egress: 'self': conflicts with 'cidr_blocks' ([]interface {}{"0.0.0.0/0"})
    * module.vpc.aws_security_group_rule.console_egress: 1 error(s) occurred:
    * aws_security_group_rule.console_egress: 'self': conflicts with 'cidr_blocks' ([]interface {}{"0.0.0.0/0"})

And these are supposed to be generic egress blocks anyway. The erroneous use of 'self' and 'cidr_blocks' together dates back to e2709ba (Build VPC and ETCD cluster, 2017-02-21).

[1]: https://www.terraform.io/docs/providers/aws/r/security_group_rule.html
[2]: https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_installer/151/pull-ci-origin-installer-e2e-aws/552/build-log.txt
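The 'self'-vs-'cidr_blocks' conflict those errors report can be illustrated with a minimal sketch; the resource name mirrors the error output, and the remaining attributes are assumptions:

```hcl
# The conflicting form Terraform rejects: 'self' and 'cidr_blocks'
# cannot both be set on a single rule.
#
# resource "aws_security_group_rule" "api_egress" {
#   type              = "egress"
#   security_group_id = "${aws_security_group.api.id}"
#   self              = true            # conflicts with cidr_blocks
#   cidr_blocks       = ["0.0.0.0/0"]
#   protocol          = "-1"
#   from_port         = 0
#   to_port           = 0
# }

# The generic all-egress rule keeps only cidr_blocks, which is what a
# "send anywhere" egress block means anyway.
resource "aws_security_group_rule" "api_egress" {
  type              = "egress"
  security_group_id = "${aws_security_group.api.id}"
  cidr_blocks       = ["0.0.0.0/0"]
  protocol          = "-1"
  from_port         = 0
  to_port           = 0
}
```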
wking added a commit to wking/openshift-installer that referenced this pull request on Aug 23, 2018, repeating the etcd security-group commit message from Aug 20 above.
wking added a commit to wking/openshift-installer that referenced this pull request on Sep 26, 2019
And in the UPI CloudFormation templates too. We've allowed ICMP ingress for OpenStack since 6f76298 (OpenStack prototype, 2017-02-16, coreos/tectonic-installer#1), which did not motivate the ICMP ingress. Allowing ICMP ingress for AWS dates back to b620c16 (modules/aws: tighten security groups, 2017-04-19, coreos/tectonic-installer#264). The master rule was restricted to the VPC in e7bd29a (modules/aws/vpc - Better security for master nodes, 2017-10-16, coreos/tectonic-installer#2147). And the worker rule was restricted to the VPC in e131a74 (aws: fix ICMP ACL, 2019-04-08, openshift#1550), before which a typo had blocked all ICMP ingress. There are reasons to allow in-cluster ICMP, including Path MTU Discovery (PMTUD) [1,2,3]. Folks also use ping to troubleshoot connectivity [4]. Restricting this to in-cluster security groups will avoid exposing ICMP ports to siblings living in shared VPCs, as we move towards allowing the installer to launch clusters in a pre-existing VPC. It might also block ICMP ingress from our load balancers, where we probably want PMTUD and possibly other ICMP calls. I'm not sure if there's a convenient way to allow access from the load balancers while excluding it from sibling clusters that share the same VPC, but this commit is my attempt to get that.

[1]: http://shouldiblockicmp.com/
[2]: https://tools.ietf.org/html/rfc1191
[3]: https://tools.ietf.org/html/rfc8201
[4]: https://bugzilla.redhat.com/show_bug.cgi?id=1689857#c2
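A minimal sketch of ICMP ingress scoped to the cluster's own security group rather than a VPC CIDR, as the commit describes (the resource names and group references are hypothetical):

```hcl
# Illustrative only: admit ICMP (e.g. PMTUD, ping) from members of the
# cluster's own security group, not from the whole shared VPC.
resource "aws_security_group_rule" "master_ingress_icmp" {
  type                     = "ingress"
  security_group_id        = "${aws_security_group.master.id}"
  source_security_group_id = "${aws_security_group.master.id}"
  protocol                 = "icmp"
  from_port                = -1   # -1/-1 means all ICMP types and codes
  to_port                  = -1
}
```

Swapping cidr_blocks for source_security_group_id is what keeps sibling clusters in a shared VPC out, at the cost of the load-balancer question the commit message raises.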
wking added a commit to wking/openshift-installer that referenced this pull request on Sep 26, 2019, with the same ICMP commit message as above.
wking added a commit to wking/openshift-installer that referenced this pull request on Sep 26, 2019, with the same ICMP commit message as above.
Currently, masters and workers share a fairly open security group.
Furthermore, workers expose ingress traffic on critical k8s ports like
10250 and 10255.
This fixes that by removing the common cluster default security group and
specifying separate ingress/egress rules reflecting settings from the
current Tectonic installer.
It also assigns only one security group each for masters and workers.
Fixes #248, #243, #227