Feat/instance profile iam #187
Conversation
/test all
Enable trigger to new resources associated
Co-authored-by: nitrocode <nitrocode@users.noreply.github.com>
```hcl
name               = "${module.this.id}-eb-service"
assume_role_policy = data.aws_iam_policy_document.service.json
tags               = module.this.tags

lifecycle {
  create_before_destroy = true
}
```
If these resources are created before being destroyed, wouldn't there be a name conflict?
Why were these lifecycles added?
The role should never change except if we rename it, so it's safe to add.
I added it because I ran into a stuck Beanstalk environment while refreshing it. Imagine you are switching from the default aws_iam_role to service_role_name, but for some reason Beanstalk fails during the process. Beanstalk will try to roll back, but if Terraform has already removed the original role, the environment becomes hard-stuck and requires advanced debugging, possibly via the CLI or by recreating the whole environment.
By ensuring we destroy this resource at the end of the Beanstalk deployment, we make sure Beanstalk is able to roll back to its previous configuration in case of failure.
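A minimal sketch of how the name conflict raised above could be avoided while keeping the lifecycle: switch from `name` to `name_prefix` so the replacement role gets a unique generated suffix. This is an illustration against assumed module internals, not the module's actual code:

```hcl
resource "aws_iam_role" "service" {
  # name_prefix lets AWS append a unique suffix, so the replacement
  # role can coexist with the old one during create_before_destroy.
  name_prefix        = "${module.this.id}-eb-service-"
  assume_role_policy = data.aws_iam_policy_document.service.json
  tags               = module.this.tags

  lifecycle {
    # Create the new role before destroying the old one, so Beanstalk
    # can still roll back to its previous configuration on failure.
    create_before_destroy = true
  }
}
```

The trade-off is that the role name is no longer fully predictable, which matters if anything references it by exact name.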
> I added it because I ran into a stuck Beanstalk environment while refreshing it. Imagine you are switching from the default aws_iam_role to service_role_name, but for some reason Beanstalk fails during the process.

This sounds like a corner case. Is that right?
The create_before_destroy lifecycle sounds like it could cause other problems. I'll defer to my teammates to see if they have any issues with it.
Adding some insight to this discussion.
The current default IAM role and policy grant broad rights for launching anything through Beanstalk. The goal here is to let the user define a completely new instance profile from scratch.
Since this is Beanstalk, a single missing right can easily break an environment. Someone who switches from the module's default IAM role to a custom one may get their environment stuck, since we currently attach so many policies.
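To make the "from scratch" idea concrete, here is a hedged sketch of the kind of user-side override being discussed. The resource and role names are hypothetical; the attached policy ARN is the real AWS-managed `AWSElasticBeanstalkWebTier` policy, shown as an example of the baseline rights an instance profile typically still needs:

```hcl
# Hypothetical user-defined instance profile replacing the module default.
# Omitting baseline policies like this one is exactly the kind of
# "missing right" that can wedge a Beanstalk deployment mid-rollout.
resource "aws_iam_role" "custom_ec2" {
  name = "my-eb-ec2-role" # hypothetical name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "web_tier" {
  role       = aws_iam_role.custom_ec2.name
  policy_arn = "arn:aws:iam::aws:policy/AWSElasticBeanstalkWebTier"
}

resource "aws_iam_instance_profile" "custom" {
  name = "my-eb-instance-profile" # hypothetical name
  role = aws_iam_role.custom_ec2.name
}
```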
A missing right? What do you mean? Do you mean a missing resource?
Why should it matter how many policies we apply? That shouldn't cause issues.
When I talk about missing rights, it's for the case where the user supplies their own instance profile or service role for Beanstalk.
In the module defaults, we already define the basic requirements that let Beanstalk report application health status and deploy.
But when a user chooses to manage their own rights, we should consider the possibility that they define them incorrectly and omit some rights they thought were not needed, which can lead to Beanstalk failing the deployment and then rolling back.
I'm not sure I'm being clear enough; maybe we could talk on Slack if you prefer?
The create_before_destroy lifecycle sounds like it could cause other problems. I'll defer to my teammates to see if they have any issues with it. cc: @aknysh @jamengual
Added count condition to it
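A sketch of what such a count condition typically looks like in this module family. The variable name `service_role_arn` and the exact condition are assumptions for illustration, not the PR's actual code:

```hcl
resource "aws_iam_role" "service" {
  # Create the module-managed role only when the module is enabled and
  # the user has not supplied their own service role; otherwise the
  # user-provided role is used and this resource is skipped entirely.
  count = module.this.enabled && var.service_role_arn == null ? 1 : 0

  name               = "${module.this.id}-eb-service"
  assume_role_policy = data.aws_iam_policy_document.service.json
  tags               = module.this.tags

  lifecycle {
    create_before_destroy = true
  }
}
```

Gating the resource on a count like this limits the create_before_destroy behavior to the default-role path, so users bringing their own role are unaffected.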
/test all
Refresh code according to update Refresh example
/test all
Hello @nitrocode, I just fixed some things according to your suggestions. Is the security groups feature still blocking merges in this repository?
This pull request is now in conflict. Could you fix it @florian0410? 🙏
@florian0410 please resolve the conflicts |
@florian0410 please resolve the conflicts! |
Please resolve the conflicts! We need this.
@lbeltramino-uala @lbeltramino @damiromero-uala -- At this point, I think @florian0410 is likely too busy and isn't likely to pick this one up. I would highly suggest that one of you take his work and create a new branch, work through the conflicts, and PR that. I would be happy to review, so please add me as a reviewer if you choose to do so. Thanks!
what

- Add `service_role_name` as another 'override', like `instance_role_name` is in the original PR.

why

references

Mentions

I reused the propositions from #113 and #107 for this PR, with some rebasing. Thank you to @bstascavage and @JBarna.