
Security Groups need to be able to depend on each other #539

Closed
psa opened this issue Nov 4, 2014 · 48 comments

Comments

@psa

psa commented Nov 4, 2014

Overview

Sometimes it is necessary to have security groups (we use AWS, but this probably applies to other platforms) that depend on each other.

The examples below are stripped-down versions of what we need in order to migrate: all our machines are in the default group, which allows them to connect to Puppet on our admin servers, and our admin servers are permitted to SSH to any instance. Similar issues exist with monitoring (connecting out to instances for external checks, as well as allowing instances to connect to the message queue on the monitoring server to submit data).

Current Behaviour

Given two security groups that depend on each other, Terraform currently fails with a cyclic dependency.

Here's an example configuration:

resource "aws_security_group" "default" {
  name = "default"
  description = "Default Group"

  # admin should be able to SSH to any machine
  ingress {
      from_port = 22
      to_port = 22
      protocol = "tcp"
      security_groups = ["${aws_security_group.admin.id}"]
  }
}

resource "aws_security_group" "admin" {
  name = "admin"
  description = "Admin Server Group"

  # Allow all machines to access puppet
  ingress {
      from_port = 8140
      to_port = 8140
      protocol = "tcp"
      security_groups = ["${aws_security_group.default.id}"]
  }
}

This generates:

Error configuring: The dependency graph is not valid:

* Cycle: aws_security_group.admin -> aws_security_group.default

Desired Behaviour

I suspect the best way to make this work is to split group creation/deletion from rule operations in the graph.

Here are Graphviz digraphs for what I'm proposing.

What it does today (which breaks):

digraph {
  compound = true;
  subgraph {
    "0_aws_security_group.default" [
      label="aws_security_group.default"
      shape=box
    ];
    "0_aws_security_group.admin" [
      label="aws_security_group.admin"
      shape=box
    ];
  }

  "0_aws_security_group.default" -> "0_provider.aws";
  "0_aws_security_group.admin"   -> "0_provider.aws";
  "0_aws_security_group.admin"   -> "0_aws_security_group.default";
  "0_aws_security_group.default" -> "0_aws_security_group.admin";

  subgraph {
    "0_provider.aws" [
      label="provider.aws"
      shape=diamond
    ];
  }

}

What it should do:

digraph {
  compound = true;
  subgraph {
    "0_aws_security_group.default{create}" [
      label="aws_security_group.default{create}"
      shape=box
    ];
    "0_aws_security_group.default{add_rules}" [
      label="aws_security_group.default{add_rules}"
      shape=box
    ];
    "0_aws_security_group.admin{create}" [
      label="aws_security_group.admin{create}"
      shape=box
    ];
    "0_aws_security_group.admin{add_rules}" [
      label="aws_security_group.admin{add_rules}"
      shape=box
    ];
  }

  "0_aws_security_group.default{create}"    -> "0_provider.aws";
  "0_aws_security_group.admin{create}"      -> "0_provider.aws";
  "0_aws_security_group.admin{add_rules}"   -> "0_aws_security_group.admin{create}";
  "0_aws_security_group.default{add_rules}" -> "0_aws_security_group.default{create}";
  "0_aws_security_group.admin{add_rules}"   -> "0_aws_security_group.default{create}";
  "0_aws_security_group.default{add_rules}" -> "0_aws_security_group.admin{create}";

  subgraph {
    "0_provider.aws" [
      label="provider.aws"
      shape=diamond
    ];
  }

}
@psa
Author

psa commented Nov 4, 2014

This was brought up in #530, but this is not the self-referential problem.

@spyrospph

+1 for resolving this; I suspect it must be a common case.

@piavlo

piavlo commented Nov 18, 2014

+1

@pmoust
Contributor

pmoust commented Nov 18, 2014

The proposal is solid.
+1

@psa
Author

psa commented Nov 18, 2014

Thinking more about it, it probably needs to be broken into 4 steps:

  1. Remove old entries
  2. Remove old groups
  3. Add new groups
  4. Add new entries

The order is up for debate. It depends on whether you're trying to minimize issues caused by switching access around (removing access before adding the new access can cause problems, especially if the process fails partway through) or to avoid running into AWS limits (the issue I'm most concerned with, as we hit the group size limits all the time).

@sethvargo sethvargo changed the title from "Security Groups Need To Be Able to Depend On Each Other" to "Security Groups need to be able to depend on each other" Nov 19, 2014
@pmoust
Contributor

pmoust commented Nov 20, 2014

@sethvargo should be tagged bug imho. It is not an enhancement per se; it breaks most multi-tier infrastructure setups in AWS.

@spyrospph

Hi,

The only way to overcome this problem is to create lots of "allow_xxx" security groups.

So instead of a structure with one security group and one webserver assignment, such as:

resource "aws_security_group" "sg1" {
name = "sg1"

ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    self        = true
}

ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    self        = true
}

ingress {
    from_port   = 8081
    to_port     = 8081
    protocol    = "tcp"
    security_groups = ["${aws_security_group.sg2.id}"]
}

}

resource "aws_instance" "instance1" {
security_groups = [ "${aws_security_group.sg1.id}" ]
}

We need to create another security group called "allow_sg2" and assign two security groups to the instance:

resource "aws_security_group" "sg1" {
name = "sg1"

ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    self        = true
}

ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    self        = true
}

}

resource "aws_security_group" "allow_sg2" {
name = "sg2"

ingress {
    from_port   = 8081
    to_port     = 8081
    protocol    = "tcp"
    security_groups = ["${aws_security_group.sg2.id}"]
}

}

resource "aws_instance" "instance1" {
security_groups = [ "${aws_security_group.sg1.id}",
"${aws_security_group.allow_sg2.id}" ]
}

Despite being a solution, I would say it is not the most elegant one.

@armon
Member

armon commented Nov 20, 2014

Similar to the thoughts above, I think that to fix this, the creation and setup of the security group must be separated. Most resources do both at the same time, since that's generally the sensible thing to do.

However, we cannot introduce cycles in the dependency graph, and having security groups that depend on each other creates an impossible situation. Instead, if we had an "aws_security_group_rules" resource distinct from "aws_security_group", then a structure like this would be possible:

  1. aws_security_group.bar
  2. aws_security_group.foo
  3. aws_security_group_rules.bar depends on [aws_security_group.bar, aws_security_group.foo]
  4. aws_security_group_rules.foo depends on [aws_security_group.bar, aws_security_group.foo]

This allows the creation of foo and bar to happen in parallel without a cycle, and then the provisioning of the two can again happen in parallel, depending only on the creation. This solves the cycle issue.

The downside is that you now need to use the special "aws_security_group_rules" resource when you have a cycle situation. It's maybe less than intuitive. Thoughts?
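
(For concreteness, here is a sketch of the OP's default/admin example under this approach, written against the per-rule resource that eventually shipped as aws_security_group_rule, whose schema appears later in this thread. Both groups are created bare, and each rule depends on both groups, so no cycle arises:)

resource "aws_security_group" "default" {
  name        = "default"
  description = "Default Group"
}

resource "aws_security_group" "admin" {
  name        = "admin"
  description = "Admin Server Group"
}

# admin should be able to SSH to any machine
resource "aws_security_group_rule" "default_ssh" {
  type                     = "ingress"
  from_port                = 22
  to_port                  = 22
  protocol                 = "tcp"
  security_group_id        = "${aws_security_group.default.id}"
  source_security_group_id = "${aws_security_group.admin.id}"
}

# allow all machines to access puppet
resource "aws_security_group_rule" "admin_puppet" {
  type                     = "ingress"
  from_port                = 8140
  to_port                  = 8140
  protocol                 = "tcp"
  security_group_id        = "${aws_security_group.admin.id}"
  source_security_group_id = "${aws_security_group.default.id}"
}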

@pmoust
Contributor

pmoust commented Nov 20, 2014

I don't think we should burden the DSL by introducing another type.
The ingress/egress rules are already there in the aws_security_group entity; it is just a matter of, as you said, separating the creation (thus getting the .id) from the addition of rules when generating the graph.
@armon what is it that you don't like in the OP's proposal?
Is there something fundamentally wrong with the current implementation of graph generation that cannot be bypassed for two-way dependencies?

@pmoust
Contributor

pmoust commented Nov 20, 2014

I'd argue that a dependency cycle is only impossible to break when entities are created in a single atomic operation and their attributes cannot be altered afterwards; that is not the case with security groups, though. The steps would be:

  • figure out from the ingress rules that sg1 depends on sg2 and vice versa
  • create sg1 and sg2 (without rules)
  • add the rules

@armon
Member

armon commented Nov 24, 2014

The problem is that Terraform has a separation of core from providers. The core doesn't know anything about the semantics of resources; it knows how to manage dependency graphs, parallelism, lifecycle, etc. The providers know the semantics and CRUD APIs.

Supporting this sort of "multiple entry" into a provider, where the create is split into creation and configuration, would dramatically complicate the interaction between providers and the core. Adding another resource, however, doesn't require any changes to core and breaks this cycle pretty simply.

So my opposition to the OP's proposal is not one of UX; I think that is actually nicer than what I'm proposing. But we need to balance that with the complexity we would add to the core and to the provider APIs.

@ianatha

ianatha commented Feb 24, 2015

+1
Also encountered this problem while trying to port an existing infrastructure to Terraform.

@josh-padnick

+1

I encountered an issue where Security Group A depended on Security Group B. Terraform attempted to delete and replace Security Group B to make a change, but AWS wouldn't allow it to be deleted because Security Group A depended on it.

This put Terraform in an indefinite loop. The fix here is to recognize when there's a dependency chain like this and incorporate it into the create/destroy cycle.

@zxjinn

zxjinn commented Mar 3, 2015

After building a pretty large proof of concept (vs. CloudFormation), I really like the idea of aws_security_group_rules that @armon suggested. One problem: I want to create a VPC and security groups in one module, then add ingress rules to the security groups in another module (based on whether the region needs that module or not). This would be similar to how CloudFormation does it in their docs, which specifically say:

Use AWS::EC2::SecurityGroupIngress and AWS::EC2::SecurityGroupEgress only when necessary, typically to allow security groups to reference each other in ingress and egress rules.

Which seems to be exactly what @psa ran into.

As it stands right now, you either have to have all ingress/egress rules reference subnet CIDRs instead of security group IDs, or create all security groups in one module and not have any SGs reference each other.

@piavlo

piavlo commented Mar 10, 2015

@mitchellh are there plans to move forward with aws_security_group_rules or any other possible solution? This issue has been a showstopper for our adoption of Terraform for over 5 months already.

@leevs

leevs commented Mar 18, 2015

+1. It should be possible to create the security groups before adding all the rules.
I also like @armon's solution of splitting them up.

@CpuID
Contributor

CpuID commented Mar 28, 2015

+1 to this being ticked off :) It will be great for multi-tier AWS topologies.

@rhartkopf

+1, I'm attempting to capture our AWS environment for future deployments and would rather not have to whiteboard my security groups all over again 😄

As an aside, it would be great if the rules were transparent to terraform graph; otherwise graphs will be unreadable.

@franklinwise

+1 Also, a group needs to be able to reference itself so members of the group can "talk" to each other.

Errors:

  * 6 error(s) occurred:

* Self reference: aws_security_group.zookeeper (destroy tainted)
* Self reference: aws_security_group.elasticsearch_master
* Self reference: aws_security_group.elasticsearch_data
* Self reference: aws_security_group.zookeeper
* Self reference: aws_security_group.elasticsearch_master (destroy tainted)
* Self reference: aws_security_group.elasticsearch_data (destroy tainted)

@franklinwise

I think the reason this feature has not been addressed is that it would create a cycle in the DAG. One solution is to introduce a third resource, as in inversion of control: a proxy resource that is mostly a facade and breaks the dependency cycle.

Example:

Today:
GroupA --> GroupB
GroupB --> GroupA

Solution:
GroupA --> GroupB
GroupA --> GroupAProxy
GroupB --> GroupAProxy

  1. When GroupAProxy gets "created", it creates GroupA on AWS without any rules.
  2. When GroupB gets "created", it points to the actual GroupA created by the GroupAProxy on AWS.
    In addition, it "registers" itself with the GroupAProxy object as a delegated group.
  3. When GroupA gets "created", it sees that it was already created on AWS and checks whether it is connected to any proxy resources; if so, it accepts that it has been created. Then it creates its rules, and the ones that point to the Proxy object ask the Proxy object for the ID of the "other" group.

This really is the only way to allow a circular dependency, which is done all the time in code using interfaces. The only other solution is to not use a DAG, or at least not to use a DAG specifically for security groups.

@franklinwise

After looking at the code, it seems like a time-consuming problem to solve. For now I guess I can create a dummy empty group as a workaround.

GroupA --> GroupB
GroupB --> EmptyGroup

instanceA has:
GroupA
EmptyGroup

instanceB has:
GroupB
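
(A minimal HCL sketch of that dummy-group workaround; the group names and the port are hypothetical, and the empty group exists only to break the cycle:)

resource "aws_security_group" "empty" {
  name        = "empty"
  description = "Empty marker group used only to break the cycle"
}

resource "aws_security_group" "group_b" {
  name = "group_b"

  # GroupB trusts the marker group instead of referencing GroupA directly
  ingress {
    from_port       = 22
    to_port         = 22
    protocol        = "tcp"
    security_groups = ["${aws_security_group.empty.id}"]
  }
}

resource "aws_security_group" "group_a" {
  name = "group_a"

  ingress {
    from_port       = 22
    to_port         = 22
    protocol        = "tcp"
    security_groups = ["${aws_security_group.group_b.id}"]
  }
}

# instanceA carries the marker group so that group_b's rule covers it;
# instanceB would carry only group_b (ami/instance_type omitted for brevity)
resource "aws_instance" "instance_a" {
  security_groups = [
    "${aws_security_group.group_a.id}",
    "${aws_security_group.empty.id}",
  ]
}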

@franklinwise

After working with the problem for a few days, I decided to just restructure how I use groups. It has some pros and cons. Here's an example for others, in the hope that it helps.

A "server group" exposes server ports to a "client marker group", which is just an empty security group that identifies a client to that specific "server group".

Intra-node communication, since there's no "self" currently supported in the security group, is done with a separate "internal group" exposing the intra-node ports to the "server group".

Here's an example with ZooKeeper. I kind of like it better than connecting my "kafka group" to my "zookeeper group".

resource "aws_security_group" "zookeeper_client" {
    vpc_id = "${aws_vpc.main.id}"
    name = "zookeeper_client"
    description = "Zookeeper Client - Marker Group"
}

resource "aws_security_group" "zookeeper_server" {
    vpc_id = "${aws_vpc.main.id}"
    name = "zookeeper_server"
    description = "Zookeeper Server"

    # client interface
    ingress {
        from_port = 2181
        to_port = 2181
        protocol = "tcp"
        cidr_blocks = ["${aws_security_group.zookeeper_client.id}"] 
    }
}

resource "aws_security_group" "zookeeper_internal" {
    vpc_id = "${aws_vpc.main.id}"
    name = "zookeeper_internal"
    description = "Zookeeper intra-node communication"

    ingress {
        from_port = 2181
        to_port = 2181
        protocol = "tcp"
        cidr_blocks = ["${aws_security_group.zookeeper_server.id}"] 
    }

    ingress {
        from_port = 2888
        to_port = 2888
        protocol = "tcp"
        cidr_blocks = ["${aws_security_group.zookeeper_server.id}"] 
    }

    ingress {
        from_port = 3888
        to_port = 3888
        protocol = "tcp"
        cidr_blocks = ["$aws_security_group.zookeeper_server.id}"] 
    }
}

@catsby catsby self-assigned this Apr 21, 2015
@geofffranks

+1

I have a cyclical module dependency issue caused by security group creation/rule definitions, and it seems like adding an additional rule resource would resolve this for me. Any ETA on it being merged and released?

@rhartkopf

I tested with my 11 groups and 50+ rules and #1620 fixes my issue. Many thanks, this was a big roadblock for us!

@phinze
Contributor

phinze commented Jun 4, 2015

Fix released! Still iterating on some details over in #2081, but let's call this one closed.

@phinze phinze closed this as completed Jun 4, 2015
@vlerenc

vlerenc commented Oct 22, 2015

Can this issue be reopened? The solution that was implemented supports only AWS, but as the reporter mentioned, the problem is a general one across many platforms ("this probably applies to other platforms"). We have the same issue with OpenStack, where security group rules are not yet top-level resources.

@apparentlymart
Contributor

@vlerenc I think it's better to have a separate issue for each provider this affects (as you did for OpenStack), since that makes it much clearer when each issue is closed, vs. having a potentially-never-ending issue with a bunch of different provider pull requests hanging from it.

@vlerenc

vlerenc commented Oct 22, 2015

Sure, thank you, I opened #3601 for OpenStack.

@Morriz

Morriz commented Mar 2, 2016

This should not be closed, as the rule fix introduced the non-idempotency issue (#2366) that should have been caught. It comes down to AWS merging the rules into the security group after submission, mutating the security group state, which makes the next plan/apply want to change the security group.

So we have an issue here, no? Either all rules need to go into the security group and cyclic deps must be solved differently, or we find a (probably dirty and hacky) way to allow this mutation and not act upon it.

Maybe I am behind on some of the work being done at the moment, but I am stuck on this bit...

@phinze
Contributor

phinze commented Mar 2, 2016

Hi @Morriz - can you explain a bit more about your issue? As far as I can tell, the issue described in #2366 is solved.

@phinze
Contributor

phinze commented Mar 2, 2016

Whoops hit send early! Was going to continue:

I believe that, provided you don't mix and match nested rules and top-level rules, everything should work A-OK with v0.6.12. 👍
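
(To illustrate the mix-and-match pitfall with a hypothetical group: the config below keeps one rule inline and one as a top-level resource for the same group. The inline form treats itself as the authoritative, complete rule set for the group, so on each plan the two forms try to undo each other's rules.)

resource "aws_security_group" "web" {
  name = "web"

  # inline rule: this form claims ownership of ALL of the group's rules
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# top-level rule for the same group: the inline form above will see this
# rule in AWS on the next refresh and plan to remove it, and vice versa
resource "aws_security_group_rule" "web_https" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = "${aws_security_group.web.id}"
}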

@Morriz

Morriz commented Mar 2, 2016

Aha, so either I move ALL *gress rules to their own top-level resources, or I put them all in the sec group?


@phinze
Contributor

phinze commented Mar 2, 2016

@Morriz yep! Otherwise the two forms will fight with each other.

@Morriz

Morriz commented Mar 2, 2016

Alrighty, changing stuff around now... thanks for the heads up. Will it end up in the docs soon?


@phinze
Contributor

phinze commented Mar 3, 2016

@Morriz

Morriz commented Mar 3, 2016

@jmstone617
Contributor

Is either solution (all rules in the security_group, or individual security_group_rules) equally acceptable? When I first set up our config, I was able to mix and match in order to resolve the dependency issue. After upgrading to 0.6.13, plan now wants to kill off all of my security_group_rules.

It would seem I can roll them into my security_group resources now to prevent that from happening (presumably because the dependent groups already exist?), but if I were to use this plan for a brand new set of infrastructure, does TF now resolve the dependencies internally? While it's annoying to have to add a bunch of individual security_group_rule resources, I'd much rather do that if it works equally well for new infrastructure as for updates to existing infrastructure.

@StoyanIvanovI

StoyanIvanovI commented May 6, 2016

The solution provided is far from actually being usable. I ended up putting the desired security group IDs in variables and circumvented the dependency clash that way. Far from ideal...
(I am sorry about the ranting tone of my comment, but this has really caused us trouble: our system has quite a few rules that go between two groups, and using security_group_rules in a real-world production environment would be tedious to maintain, to say the least. Looking forward to a proper fix of the problem.)

@davyt10

davyt10 commented Jun 10, 2016

Is anyone able to provide an example of how they have used the aws_security_group_rule resource to mitigate the cycle issue? I have about 14 SGs per environment on AWS, and many of the security groups are nested inside each other. When I run terraform apply, I am continually forced to comment out the references to the SGs for which cycle warnings are returned. Eventually I get to the point where all SGs exist in AWS, but even then I have to reference some SGs by their sg-id, as using interpolation fails with similar cycle warnings. It's not a show stopper, but it makes for a clunky experience when trying to lay down my security groups for a new environment.

Example error:

  • Cycle: aws_security_group.SG_EUW1_QA03_BESV, aws_security_group.SG_EUW1_QA03_SMED

Here are a couple of my SGs with nested SGs:

resource "aws_security_group" "SG_EUW1_QA03_SMED" {
name = "SG_EUW1_QA03_SMED"
description = "SG for web"
vpc_id = "${var.aws_vpc_id}"

ingress {
from_port   = "445"
to_port     = "445"
protocol    = "tcp"
security_groups = ["${aws_security_group.SG_EUW1_QA03_IWEB.id}"]
self = "true"

}

ingress {
from_port   = "443"
to_port     = "443"
protocol    = "tcp"
cidr_blocks = ["10.23.0.0/16", "10.29.0.0/16","10.49.0.0/16","10.52.0.0/16"]
security_groups = [
"${aws_security_group.SG_EUW1_QA03_BESV.id}",
"${aws_security_group.SG_EUW1_QA03_SLRM.id}",
"${aws_security_group.SG_EUW1_QA03_SLRS.id}",
"${aws_security_group.SG_EUW1_QA03_MGMT.id}",
"${aws_security_group.SG_EUW1_QA03_PWEB.id}",
"${aws_security_group.SG_EUW1_QA03_SMED_ELB.id}"]

}

ingress{
from_port = "555"
to_port = "555"
protocol = "tcp"
security_groups = ["sg-dcbb84b8"] /SG_EUW1_QA03_BESV/
}
ingress{
from_port = "135"
to_port = "135"
protocol = "tcp"
security_groups = ["sg-a1bb84c5"] /SG_EUW1_QA03_DATABASE/

}

ingress{

from_port   = "5000"
to_port     = "5020"
protocol    = "tcp"
security_groups = ["sg-a1bb84c5"] /*SG_EUW1_QA03_DATABASE*/

}

ingress{
from_port = "8000"
to_port = "8000"
protocol = "tcp"
security_groups = ["${aws_security_group.SG_EUW1_QA03_BESV.id}"]
}

ingress{
from_port = "11025"
to_port = "11025"
protocol = "tcp"
security_groups = ["${aws_security_group.SG_EUW1_QA03_BESV.id}"]
}

egress {
from_port = "0"
to_port = "0"
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}

tags { 
Name = "SG_EUW1_QA03_SMED"
Environment = "${var.environment}"
Role = "ServiceMediation"

}
}

resource "aws_security_group" "SG_EUW1_QA03_BESV" {
name = "SG_EUW1_QA03_BESV"
description = "SG for Backend Services role"
vpc_id = "${var.aws_vpc_id}"

ingress {
from_port = "5058"
to_port = "5058"
protocol = "tcp"
cidr_blocks = ["10.52.0.0/16","10.49.66.0/24"]
security_groups = ["sg-dfbb84bb"]/SG_EUW1_QA01_SMED/
}

   ingress {
from_port   = "5080"
to_port     = "5080"
protocol    = "tcp"
security_groups = ["${aws_security_group.SG_EUW1_QA03_SMED.id}"

}

ingress {
from_port = "8080"
to_port = "8080"
protocol = "tcp"
cidr_blocks = ["10.52.0.0/16"]
security_groups = ["${aws_security_group.SG_EUW1_QA03_MGMT.id}"]

}
ingress {
from_port = "555"
to_port = "555"
protocol = "tcp"
cidr_blocks = ["10.52.0.0/16"]
security_groups = ["${aws_security_group.SG_EUW1_QA03_MGMT.id}"]

}

   ingress {
from_port   = "8081"
to_port     = "8081"
protocol    = "tcp"
security_groups = ["${aws_security_group.SG_EUW1_QA03_SMED.id}"]

}

ingress{
from_port   = "1801"
to_port     = "1801"
protocol    = "tcp"
security_groups = ["sg-dfbb84bb"] /*SG_EUW1_QA03_SMED*/

}

  ingress{
from_port   = "80"
to_port     = "80"
protocol    = "tcp"
security_groups = ["sg-dfbb84bb"]/*SG_EUW1_QA03_SMED*/

}

egress {
from_port = "0"
to_port = "0"
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}

tags { 
Name = "SG_EUW1_QA03_BESV"
Environment = "${var.environment}"
Role ="BackendServices"

}
}

@Morriz

Morriz commented Jun 10, 2016

Just DON'T put any rules in your group and you'll be fine. Every rule should be its own explicit aws_security_group_rule.


@davyt10

davyt10 commented Jun 10, 2016

So each Security group resource gets a dedicated aws_security_group_rule with its rules defined within it?

@Morriz

Morriz commented Jun 10, 2016

Yeah, one rule per aws_security_group_rule.

Check out my setup: https://github.com/Morriz/k8sdemo-infra/blob/master/terraform/sg-acc.tf


@davyt10

davyt10 commented Jun 10, 2016

OK, thanks. I have 18 SGs per environment, most with 10-12 rules each. I'm not sure this approach is viable for me, since I'd have to create a few hundred rules, but I'll investigate nonetheless. Thanks for your input.

@jasonmoo

:(

@jasonkuehl

jasonkuehl commented Jan 10, 2018

Getting the same issue:

  • Cycle: module.sg.aws_security_group.emr_masters, module.sg.aws_security_group.emr_slaves

resource "aws_security_group" "emr_slaves" {
  name        = "emr_slaves"
  vpc_id      = "${var.vpc_id}"
  description = "emr_slaves"

  ingress {
    from_port       = 0
    to_port         = 65535
    protocol        = "tcp"
    security_groups = ["${aws_security_group.emr_masters.id}"]
  }

  ingress {
    from_port       = 0
    to_port         = 65535
    protocol        = "udp"
    security_groups = ["${aws_security_group.emr_masters.id}"]
  }

  ingress {
    from_port       = 8443
    to_port         = 8443
    protocol        = "tcp"
    security_groups = ["${aws_security_group.emr_masters.id}, ${aws_security_group.emr_service.id}"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "emr_masters" {
  name        = "emr_master"
  vpc_id      = "${var.vpc_id}"
  description = "emr_master"

  ingress {
    from_port       = 0
    to_port         = 65535
    protocol        = "tcp"
    security_groups = ["${aws_security_group.emr_slaves.id}"]
  }

  ingress {
    from_port       = 0
    to_port         = 65535
    protocol        = "udp"
    security_groups = ["${aws_security_group.emr_slaves.id}"]
  }

  ingress {
    from_port       = 8443
    to_port         = 8443
    protocol        = "tcp"
    security_groups = ["${aws_security_group.emr_slaves.id}, ${aws_security_group.emr_service.id}"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

@jasonkuehl

FYI, this is how I solved my issue. I then did the same thing for the other SG (masters). You could also add a depends_on to each aws_security_group_rule pointing to its aws_security_group, but I had no issue at run time.

resource "aws_security_group" "emr_slaves" {
  name        = "emr_slaves"
  vpc_id      = "${var.vpc_id}"
  description = "emr_slaves"

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group_rule" "emr_slaves_tcp" {
  type                     = "ingress"
  from_port                = 0
  to_port                  = 65535
  protocol                 = "tcp"
  security_group_id        = "${aws_security_group.emr_slaves.id}"
  source_security_group_id = "${aws_security_group.emr_masters.id}"
}

resource "aws_security_group_rule" "emr_slaves_udp" {
  type                     = "ingress"
  from_port                = 0
  to_port                  = 65535
  protocol                 = "udp"
  security_group_id        = "${aws_security_group.emr_slaves.id}"
  source_security_group_id = "${aws_security_group.emr_masters.id}"
}

resource "aws_security_group_rule" "emr_slaves_service1" {
  type                     = "ingress"
  from_port                = 8443
  to_port                  = 8443
  protocol                 = "tcp"
  security_group_id        = "${aws_security_group.emr_slaves.id}"
  source_security_group_id = "${aws_security_group.emr_service.id}"
}

resource "aws_security_group_rule" "emr_slaves_service2" {
  type                     = "ingress"
  from_port                = 8443
  to_port                  = 8443
  protocol                 = "tcp"
  security_group_id        = "${aws_security_group.emr_slaves.id}"
  source_security_group_id = "${aws_security_group.emr_masters.id}"
}
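
(On the depends_on remark: the interpolations in each rule above already give it an implicit dependency on the referenced groups, so an explicit depends_on is redundant here, though harmless. In the 0.x string-list form it would look like this, shown on a hypothetical copy of the first rule:)

resource "aws_security_group_rule" "emr_slaves_tcp" {
  type                     = "ingress"
  from_port                = 0
  to_port                  = 65535
  protocol                 = "tcp"
  security_group_id        = "${aws_security_group.emr_slaves.id}"
  source_security_group_id = "${aws_security_group.emr_masters.id}"

  # redundant: the two references above already imply this ordering
  depends_on = ["aws_security_group.emr_slaves", "aws_security_group.emr_masters"]
}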

@binarymist

The following PRs fix this... when they're merged:
#1824
#9032

@ghost

ghost commented Sep 7, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Sep 7, 2019