
modules/aws/vpc/sg-elb: Split out aws_security_group_rule #156


wking (Member) commented Aug 21, 2018


This happened for masters and workers in b620c16 (modules/aws: tighten
security groups, 2017-04-19, coreos/tectonic-installer#264), where:

* Master ingress/egress rules moved from inline entries in
  modules/aws/master-asg/master.tf to stand-alone rules in
  modules/aws/vpc/sg-master.tf.

* Worker ingress/egress rules moved from inline entries in
  modules/aws/worker-asg/security-groups.tf to stand-alone rules in
  modules/aws/vpc/sg-worker.tf.

This commit catches up for consistency with the other node classes.
From the Terraform docs [1]:

  Terraform currently provides both a standalone Security Group Rule
  resource (a single ingress or egress rule), and a Security Group
  resource with ingress and egress rules defined in-line.  At this
  time you cannot use a Security Group with in-line rules in
  conjunction with any Security Group Rule resources.  Doing so will
  cause a conflict of rule settings and will overwrite rules.

We can also use the rule name to hint at the purpose of a rule
(e.g. tnc_ingress_http), while with inline rules we just have port
numbers.
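A minimal sketch of the two styles the Terraform docs contrast, using the 0.11-era interpolation syntax this PR predates; the resource names and port values here are illustrative, not taken from the actual sg-elb module:

```hcl
# In-line style: rules live inside the security group, identified
# only by their port numbers.
resource "aws_security_group" "elb" {
  vpc_id = "${var.vpc_id}"

  ingress {
    protocol    = "tcp"
    from_port   = 80
    to_port     = 80
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Standalone style: each rule is its own named resource, so the
# resource name can document its purpose (e.g. tnc_ingress_http).
resource "aws_security_group_rule" "tnc_ingress_http" {
  type              = "ingress"
  security_group_id = "${aws_security_group.elb.id}"
  protocol          = "tcp"
  from_port         = 80
  to_port           = 80
  cidr_blocks       = ["0.0.0.0/0"]
}
```

Mixing the two styles on the same security group is what triggers the rule-overwriting conflict the docs warn about.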

This also drops some 'self' properties from egress rules, which didn't
make sense.  'self' and 'cidr_blocks' are incompatible, resulting in
[2]:

  3 error(s) occurred:

  * module.vpc.aws_security_group_rule.api_egress: 1 error(s) occurred:

  * aws_security_group_rule.api_egress: 'self': conflicts with 'cidr_blocks' ([]interface {}{"0.0.0.0/0"})
  * module.vpc.aws_security_group_rule.tnc_egress: 1 error(s) occurred:

  * aws_security_group_rule.tnc_egress: 'self': conflicts with 'cidr_blocks' ([]interface {}{"0.0.0.0/0"})
  * module.vpc.aws_security_group_rule.console_egress: 1 error(s) occurred:

  * aws_security_group_rule.console_egress: 'self': conflicts with 'cidr_blocks' ([]interface {}{"0.0.0.0/0"})

And these are supposed to be generic egress blocks anyway.  The
erroneous use of 'self' and 'cidr_blocks' together dates back to
e2709ba (Build VPC and ETCD cluster, 2017-02-21).

[1]: https://www.terraform.io/docs/providers/aws/r/security_group_rule.html
[2]: https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_installer/151/pull-ci-origin-installer-e2e-aws/552/build-log.txt
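A sketch of what a corrected generic egress rule would look like after dropping 'self'; the security-group reference is a hypothetical stand-in for the module's actual resource:

```hcl
# Generic "allow all outbound" rule: 'self' removed, since it
# conflicts with cidr_blocks and a generic egress block should
# target 0.0.0.0/0 anyway.
resource "aws_security_group_rule" "api_egress" {
  type              = "egress"
  security_group_id = "${aws_security_group.api.id}"
  protocol          = "-1"  # all protocols
  from_port         = 0
  to_port           = 0
  cidr_blocks       = ["0.0.0.0/0"]
}
```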
@openshift-ci-robot openshift-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. approved Indicates a PR has been approved by an approver from all required OWNERS files. labels Aug 21, 2018
wking (Member, Author) commented Aug 21, 2018

The e2e-aws error was:

Ginkgo timed out waiting for all parallel nodes to report back

with the same connection errors mentioned earlier in #151:

$ curl -s https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_installer/156/pull-ci-origin-installer-e2e-aws/559/artifacts/e2e-aws/nodes/ip-10-0-77-118.ec2.internal/journal.gz | zcat >journal 
$ journalread journal | grep 'current config label' | head -n3
2018-08-21T21:43:50.000244516Z I0821 21:43:50.238884       1 tnc.go:375] Node ip-10-0-139-34.ec2.internal does not have a current config label
2018-08-21T21:43:50.000244748Z I0821 21:43:50.238895       1 tnc.go:375] Node ip-10-0-152-96.ec2.internal does not have a current config label
2018-08-21T21:43:50.000244972Z I0821 21:43:50.238900       1 tnc.go:375] Node ip-10-0-165-88.ec2.internal does not have a current config label

abhinavdahiya (Contributor) commented:

@wking the error you mentioned above only means that the TNC is waiting for the node-agent to start and report its current config. This stops once the daemon runs on the node. This is normal behavior.

eparis (Member) commented Aug 23, 2018

/lgtm
/retest

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Aug 23, 2018
openshift-ci-robot (Contributor) commented:
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: eparis, wking

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of the required OWNERS files.

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

wking (Member, Author) commented Aug 24, 2018

/retest

wking (Member, Author) commented Aug 24, 2018

retest this please

@openshift-merge-robot openshift-merge-robot merged commit 05e3ed5 into openshift:master Aug 24, 2018
@wking wking deleted the api-console-and-tnc-security-group-rules branch August 24, 2018 05:32