Security Groups need to be able to depend on each other #539
Comments
This was brought up in #530 but this is not the self-referential problem. |
+1 for this problem to be resolved as I sense it must be a common case |
+1 |
The proposal is solid. |
Thinking more about it, it probably needs to be broken into 4 steps:
The order is up for debate, as it depends on whether you're trying to minimize issues caused by switching access around (removing access before adding the new access can cause problems, especially if the process fails part way through) or to minimize running into AWS limits (the issue I'm most concerned with, as we run into the group size limits all the time). |
@sethvargo should be tagged bug imho. It is not an Enhancement per se; it breaks most multi-tier infrastructure setups in AWS. |
Hi, the only way to overcome this problem is to create lots of "allow_xxx" security groups. So instead of having a structure with one security group and one webserver assignment, such as:

```hcl
resource "aws_security_group" "sg1" {
  # ...
}

resource "aws_instance" "instance1" {
  # ...
}
```

we need to create another security group called "allow_sg2" and assign two security groups to the instance:

```hcl
resource "aws_security_group" "allow_sg2" {
  # ...
}

resource "aws_instance" "instance1" {
  # ...
}
```

Despite this being a solution, I would say it is not the most elegant. |
Similar to the thoughts before, I think to fix this the creation and setup of the security group must be separated. Most resources do both at the same time, since it's generally the sensible thing to do. However, we cannot introduce cycles in the dependency graph, and having security groups that depend on each other is causing an impossible situation. Instead, if we maybe have "aws_security_group_rules" distinct from "aws_security_group", then a structure like this is possible:
This allows the creation of foo and bar to happen in parallel without a cycle, and then the provisioning of the two can again happen in parallel, depending only on the creation. This will solve the cycle issue as well. The downside is that you now need to use the special "aws_security_group_rules" resource when you have a cycle situation. It's maybe less than intuitive. Thoughts? |
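A rough sketch of the split being described, written against the `aws_security_group_rule` resource that was eventually shipped (the comment proposes a plural `aws_security_group_rules`; the group names, ports, and interpolation syntax below are illustrative, not from the original):

```hcl
# Create both groups first, with no inline rules, so there is no cycle.
resource "aws_security_group" "foo" {
  name = "foo"
}

resource "aws_security_group" "bar" {
  name = "bar"
}

# Then attach the cross-referencing rules as separate resources.
# Each rule depends only on the two already-created groups.
resource "aws_security_group_rule" "foo_from_bar" {
  type                     = "ingress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  security_group_id        = "${aws_security_group.foo.id}"
  source_security_group_id = "${aws_security_group.bar.id}"
}

resource "aws_security_group_rule" "bar_from_foo" {
  type                     = "ingress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  security_group_id        = "${aws_security_group.bar.id}"
  source_security_group_id = "${aws_security_group.foo.id}"
}
```

Both rules point at both groups, but neither group's creation depends on the other, so the graph stays acyclic.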
I don't think we should burden the DSL by introducing another type. |
I'd argue that a cyclic dependency is impossible to break only when entities are created in a single atomic operation and their attributes/properties cannot be altered afterwards; that is not the case with security groups, though.
|
The problem is that Terraform has a separation of core from providers. The core doesn't know anything about the semantics of resources; it knows how to manage dependency graphs, parallelism, lifecycle, etc. The providers know the semantics and CRUD APIs. Supporting this sort of "multiple entry" into a provider, where the create is split between creation and configuration, dramatically complicates the interaction between providers and the core. Adding another resource, however, doesn't require any changes to core and allows this cycle to be broken pretty simply. So my opposition to the OP's proposal is not one of UX; I think that is actually nicer than what I'm proposing. But we need to balance that with the complexity we add to the core and to the provider APIs. |
+1 |
+1 I encountered an issue where Security Group A depended on Security Group B. Terraform attempted to delete and replace Security Group B to make a change, but AWS wouldn't allow it to be deleted because Security Group A depended on it. This put Terraform into an indefinite loop. The fix here is to recognize when there's a dependency chain like this and incorporate that into the create/destroy cycle. |
After creating a pretty large proof of concept (vs CloudFormation) I really like the idea of
Which seems to be exactly what @psa ran into. As it stands right now, you either have to have all ingress/egress rules reference the subnet CIDR instead of the security group ID, or create all security groups in one module and not have any SGs reference each other. |
@mitchellh are there plans to move forward with |
+1 It should be possible to create the security_groups before adding all the rules. |
+1 to this being ticked off :) will be great for multi-tier AWS topologies |
+1, I'm attempting to capture our AWS environment for future deployments and would rather not have to whiteboard my security groups all over again 😄 As an aside, it would be great if the rules are transparent to terraform graph, otherwise graphs will be unreadable. |
+1 Also, a group needs to be able to reference itself so members of a group can "talk" to each other.
|
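For the self-referencing case, the rule resource that eventually shipped supports a `self` flag; a minimal sketch, assuming a group named "cluster" (name and ports are illustrative):

```hcl
resource "aws_security_group" "cluster" {
  name = "cluster"
}

# self = true makes the group itself the source of the rule, so
# members of "cluster" can talk to each other on these ports.
resource "aws_security_group_rule" "intra_cluster" {
  type              = "ingress"
  from_port         = 0
  to_port           = 65535
  protocol          = "tcp"
  self              = true
  security_group_id = "${aws_security_group.cluster.id}"
}
```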
I think the reason this feature has not been addressed is that it would create a cycle in the DAG. One solution is to create a third resource, in the spirit of inversion of control: a proxy resource that is mostly a facade and breaks the dependency cycle.
This really is the only way to allow a circular dependency, which is done all the time in code using interfaces. The only other solution is to not use a DAG or just not use a DAG specifically for SecurityGroups. |
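The proxy/facade idea might look roughly like this sketch (the original "Today"/"Solution" examples were not preserved; group names and ports are invented): instead of A and B referencing each other directly, both reference a third, empty group, which keeps the graph acyclic:

```hcl
# Empty proxy group; it references nothing, so nothing can cycle through it.
resource "aws_security_group" "proxy" {
  name = "proxy"
}

resource "aws_security_group" "group_a" {
  name = "group_a"
  ingress {
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = ["${aws_security_group.proxy.id}"]
  }
}

resource "aws_security_group" "group_b" {
  name = "group_b"
  ingress {
    from_port       = 8080
    to_port         = 8080
    protocol        = "tcp"
    security_groups = ["${aws_security_group.proxy.id}"]
  }
}
```

Instances that need access to group_a or group_b members are given the proxy group as a marker, rather than having the two groups grant access to each other.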
After looking at the code, it seems like a time-consuming problem to solve. For now I guess I can create a dummy empty group as a workaround.

GroupA --> GroupB

instanceA has:
instanceB has: |
After working with the problem for a few days, I decided to just restructure how I use groups. It has some pros and cons. Here's an example for others, in the hope that it helps. A "server group" exposes server ports to a "client marker group", which is just an empty security group that identifies a client to that specific "server group". Intra-node communication, since there's no "self" currently supported in the security group, is done with a separate "internal group" exposing the intra-node ports to the "server group". Here's an example with ZooKeeper. I kind of like it better than connecting my "kafka group" to my "zookeeper group".
|
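The ZooKeeper example itself was lost in extraction; the pattern described reads roughly like this sketch (group names and ports are assumptions):

```hcl
# Empty "client marker" group: attach it to any instance that should be
# allowed to talk to the ZooKeeper servers.
resource "aws_security_group" "zookeeper_client" {
  name = "zookeeper-client"
}

# Server group: exposes the client port only to holders of the marker group.
resource "aws_security_group" "zookeeper" {
  name = "zookeeper"
  ingress {
    from_port       = 2181
    to_port         = 2181
    protocol        = "tcp"
    security_groups = ["${aws_security_group.zookeeper_client.id}"]
  }
}

# Internal group: exposes peer/election ports to the server group itself,
# standing in for the missing "self" support.
resource "aws_security_group" "zookeeper_internal" {
  name = "zookeeper-internal"
  ingress {
    from_port       = 2888
    to_port         = 3888
    protocol        = "tcp"
    security_groups = ["${aws_security_group.zookeeper.id}"]
  }
}
```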
+1 I have a cyclical module dependency issue caused by security group creation/rule definitions, and it seems like adding an additional rule resource would resolve this for me. Any ETA on it being merged and released? |
I tested with my 11 groups and 50+ rules and #1620 fixes my issue. Many thanks, this was a big roadblock for us! |
Fix released! Still working on iterating on some details over in #2081, but let's call this one closed. |
Can this issue be reopened? The solution that was implemented supports only AWS, but as the reporter mentioned, the problem is a general one across many platforms ("this probably applies to other platforms"). We have the same issue with OpenStack, and there security group rules are not top-level resources yet. |
@vlerenc I think it's better to have a separate issue for each provider this affects (as you did for OpenStack), since that makes it much clearer when each issue is closed, vs. having a potentially-never-ending issue with a bunch of different provider pull requests hanging from it. |
Sure, thank you, I opened #3601 for OpenStack. |
This should not be closed, as the rule fix introduced a non-idempotency issue (#2366) that should have been caught. It comes down to AWS merging the rules into the security group after submission, mutating the security group state, which makes the next plan/apply step want to change the security group. So we have an issue here, no? Either all rules need to go into the security group, and cyclic deps must be solved differently, or we find a (probably dirty and hacky) way to allow this mutation and not act upon it. Maybe I am behind on some of the work being done at the moment, but I am stuck with this bit... |
Whoops hit send early! Was going to continue: I believe that provided you don't mix and match nested rules and top-level rules everything should work AOK with v0.6.12. 👍 |
Aha, so I either move ALL *gress rules into their own top-level resources, or I put them all in the security group?
|
@Morriz yep! Otherwise the two forms will fight with each other. |
Alrighty, changing stuff around now....tnx for the heads up. Will it end up in the docs soon?
|
It shows up as a |
Pfffff....my bad
|
Is either solution (all in |
The solution provided is far from actually being usable. I ended up defining variables holding the desired security group IDs and circumvented the dependency clash that way. Far from ideal... |
Is anyone able to provide an example of how they have used the aws_security_group_rule resource to mitigate the cycle issue? I have about 14 SGs per environment on AWS, and many of the security groups are nested inside each other. When I run terraform apply I am continually forced to comment out the references to the SGs where cycle warnings are returned. Eventually I get to the point where all SGs exist in AWS, but even then I have to reference some SGs by their sg-id, as using interpolation fails with similar cycle warnings. It's not a showstopper, but it makes for a clunky experience trying to lay down my security groups for a new environment. Example error |
Example of a couple of my SGs with nested SGs:

```hcl
resource "aws_security_group" "SG_EUW1_QA03_SMED" {
  # ...
  ingress {
    # ...
  }
  ingress {
    # ...
  }
  ingress {
    # ...
  }
  egress {
    # ...
  }
}

resource "aws_security_group" "SG_EUW1_QA03_BESV" {
  ingress {
    # ...
  }
  ingress {
    # ...
  }
  egress {
    # ...
  }
}
```
|
Just DON'T put any rules in your group and you'll be fine. Every rule should be its own explicit aws_security_group_rule.
|
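That pattern, sketched with illustrative names and ports (not from the original setup):

```hcl
# Group with no inline rules at all.
resource "aws_security_group" "web" {
  name = "web"
}

# Every rule lives in its own aws_security_group_rule resource.
resource "aws_security_group_rule" "web_http_in" {
  type              = "ingress"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = "${aws_security_group.web.id}"
}

resource "aws_security_group_rule" "web_all_out" {
  type              = "egress"
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = "${aws_security_group.web.id}"
}
```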
So each Security group resource gets a dedicated aws_security_group_rule with its rules defined within it? |
Yeah, one rule per aws_security_group_rule. Check out my setup: https://github.com/Morriz/k8sdemo-infra/blob/master/terraform/sg-acc.tf
|
OK, thanks. I have 18 SGs per environment, most with 10-12 rules each. Not sure if this approach is viable for me, as I will have to create a few hundred rules, but I'll investigate nonetheless. Thanks for your input. |
:( |
Getting the same issue
|
FYI, this is how I solved my issue. I then did the same thing for the other SG (masters). You could also add a depends_on from each "aws_security_group_rule" back to the "aws_security_group", but I had no issue at runtime.
|
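The snippet itself was not preserved; the approach described (groups with no inline rules, rules as separate resources, optionally pinned with depends_on) might look like this sketch, with invented names and ports:

```hcl
resource "aws_security_group" "workers" {
  name = "workers"
}

resource "aws_security_group" "masters" {
  name = "masters"
}

# Masters may reach workers; the rule references both groups, but
# neither group's creation depends on the other.
resource "aws_security_group_rule" "workers_from_masters" {
  type                     = "ingress"
  from_port                = 10250
  to_port                  = 10250
  protocol                 = "tcp"
  security_group_id        = "${aws_security_group.workers.id}"
  source_security_group_id = "${aws_security_group.masters.id}"

  # Optional: make the ordering explicit, as mentioned above.
  depends_on = ["aws_security_group.workers"]
}
```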
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further. |
Overview
Sometimes it is necessary to have security groups (we use AWS, but this probably applies to other platforms) which depend on each other.
The examples below are stripped-down versions of what we need in order to migrate: all our machines are in the default group, which allows them to connect to Puppet on our admin servers, and our admin servers are permitted to SSH to any instance. Similar issues exist with monitoring (connecting out to instances for external checks, as well as allowing instances to connect to the message queue on the monitoring server to submit data).
Current Behaviour
Given two security groups that depend on each other, Terraform currently fails with a cyclic dependency.
Here's an example configuration:
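The original snippet was lost in extraction; a minimal configuration that reproduces the cycle, based on the Puppet/SSH scenario above, looks something like this (group names and ports are illustrative):

```hcl
# Admin servers may SSH into app servers...
resource "aws_security_group" "app" {
  name = "app"
  ingress {
    from_port       = 22
    to_port         = 22
    protocol        = "tcp"
    security_groups = ["${aws_security_group.admin.id}"]
  }
}

# ...and app servers may reach Puppet on the admin servers.
resource "aws_security_group" "admin" {
  name = "admin"
  ingress {
    from_port       = 8140
    to_port         = 8140
    protocol        = "tcp"
    security_groups = ["${aws_security_group.app.id}"]
  }
}
```

Each group references the other, so the dependency graph contains a cycle.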
This generates a cyclic dependency error.
Desired Behaviour
I suspect the best way to allow this to work is to split group add/deletion from other group operations in the graph.
Here are Graphviz digraphs for what I'm proposing:
What it does today and breaks:
What it should do: