Improvement/named iam roles cleanup #1285

Merged
merged 4 commits into kubernetes-incubator:master from c-knowles:improvement/named-iam-roles-cleanup on May 21, 2018

Conversation

5 participants
@c-knowles
Collaborator

c-knowles commented May 6, 2018

Roles used to include the stack name, but since that was removed they can clash for clusters with mirrored config in the same account+region. Fixes most of #698.

Documentation updates fix comments from #753 (comment).

Release note:

  • You need to update the trust relationships of the roles assumed by kube2iam or kiam before running kube-aws update. We recommend bringing up a new cluster for the kube-aws upgrade so the change cannot affect your prod workloads, but it is worth noting anyway.
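
For illustration, the trust relationship in question is the assumed role's trust policy, whose Principal must point at the renamed worker IAM role. A minimal sketch in CloudFormation-style YAML; the account ID, cluster name, and pool name below are made up:

```yaml
# Hypothetical trust policy for a role assumed via kube2iam/kiam.
# After this change the worker role name includes the cluster name,
# so the Principal ARN must be updated accordingly.
Version: "2012-10-17"
Statement:
- Effect: Allow
  Principal:
    AWS: "arn:aws:iam::111122223333:role/mycluster-pool1-worker"  # made-up ARN
  Action: "sts:AssumeRole"
```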

@c-knowles c-knowles referenced this pull request May 6, 2018

Open

Remove deprecated keys in cluster.yaml #753

25 of 30 tasks complete
@codecov-io

codecov-io commented May 6, 2018

Codecov Report

Merging #1285 into master will not change coverage.
The diff coverage is 60%.

Impacted file tree graph

@@           Coverage Diff           @@
##           master    #1285   +/-   ##
=======================================
  Coverage   37.08%   37.08%           
=======================================
  Files          64       64           
  Lines        3958     3958           
=======================================
  Hits         1468     1468           
  Misses       2275     2275           
  Partials      215      215
Impacted Files Coverage Δ
core/nodepool/config/config.go 32.28% <0%> (ø) ⬆️
core/controlplane/config/config.go 62.97% <0%> (ø) ⬆️
cfnresource/naming.go 100% <100%> (ø) ⬆️

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update f40057f...7b35c6f. Read the comment docs.

@c-knowles

Collaborator

c-knowles commented May 6, 2018

@Fsero it might be worth you having a quick look, as this will affect your use case, though only in a very basic way; the names are still stable.

c-knowles added some commits May 6, 2018

Re-add cluster name to managed IAM roles.
Roles used to include the stack name but since removing that they can clash for clusters with mirrored config in the same account+region.

Fixes most of #698.
-# # When enabled it will grant sts:assumeRole permission to the IAM roles for worker nodes in this pool.
-# # This is intended to be used in combination with managedIamRoleName. See #297 for more information.
+# # When enabled this will grant sts:assumeRole permission to the IAM roles for worker nodes in this pool.
+# # This is intended to be used in combination with nodePools[].iam.role.name. See #297 for more information.
 # kube2IamSupport:
 #   enabled: true

@c-knowles

c-knowles May 11, 2018

Collaborator

@mumoshu do you know why we have kube2IamSupport on node pools? I thought it was to allow only some node pools to have it enabled, but having looked at it again I cannot see anything that prevents the DaemonSet from running across everything.

@mumoshu

mumoshu May 13, 2018

Collaborator

@c-knowles Wow, good catch! I can't imagine how I missed that until now, but we probably need node affinity:

              affinity:
                nodeAffinity:
                  requiredDuringSchedulingIgnoredDuringExecution:
                    nodeSelectorTerms:
                    - matchExpressions:
                      - key: "kube-aws.coreos.com/assume-role-supported"
                        operator: "In"
                        values:
                        - "true"

and corresponding node labels?
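
To make the affinity above select anything, the label would also have to be applied to the intended pools. A rough sketch of what that could look like in cluster.yaml, assuming the pool supports a node-label map (the pool name and the exact location of the nodeLabels key are hypothetical):

```yaml
# Hypothetical cluster.yaml fragment: label only the pools that should
# run the kube2iam DaemonSet, matching the proposed nodeAffinity key.
worker:
  nodePools:
  - name: kube2iam-pool            # made-up pool name
    nodeLabels:
      kube-aws.coreos.com/assume-role-supported: "true"
```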

@c-knowles

c-knowles May 21, 2018

Collaborator

@mumoshu yeah, something like that; as far as I can tell it has been missing since the beginning of this feature.

@@ -349,7 +349,7 @@
"Resource": "arn:{{.Region.Partition}}:s3:::{{ .KubeResourcesAutosave.S3Path }}/*"
},
{{end}}
-{{if or .Kube2IamSupport.Enabled .KIAMSupport.Enabled }}
+{{if .Kube2IamSupport.Enabled }}
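
In other words, after this change the statement guarded by the condition is rendered only when kube2iam support is enabled on the pool. Roughly, the rendered statement looks like the following sketch (shown as YAML; the resource scope is an assumption and the real template may differ):

```yaml
# Hypothetical IAM policy statement emitted when Kube2IamSupport is enabled.
Effect: Allow
Action: "sts:AssumeRole"
Resource: "*"   # assumed scope; the actual template may restrict this
```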

@mumoshu

mumoshu May 13, 2018

Collaborator

Ah, yes. We have a nodeSelector to stick the kiam server to controller nodes, so we don't need to consult .KIAMSupport.Enabled here. Good catch!
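
For reference, the pinning described above would look roughly like this in the kiam server manifest; the label key is an assumption, so check the generated manifest for the exact selector:

```yaml
# Hypothetical excerpt of the kiam-server spec: the nodeSelector keeps the
# server pods on controller nodes, so worker pools never need kiamSupport.
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""   # assumed controller-node label
```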

# # It will also grant sts:assumeRole permission to the IAM role for worker nodes in this pool.
# # This will use SSL certificates generated during `kube-aws render credentials`, ensure this has been run with kube-aws v0.9.9+
# kiamSupport:
#   enabled: true

@mumoshu

mumoshu May 13, 2018

Collaborator

Ah, yes. Currently we have a nodeSelector to stick the kiam server to controller nodes, so there's no need for kiamSupport in worker node pools. Good catch 👍

@mumoshu

Collaborator

mumoshu commented May 13, 2018

@c-knowles Should we add a note about a breaking change for this? You probably need to update the trust relationships of the roles assumed by kube2iam or kiam before running kube-aws update. (I'd recommend bringing up a new cluster for a kube-aws upgrade, but it's worth noting anyway.)

@Fsero

Contributor

Fsero commented May 13, 2018

@mumoshu I think it's a good idea to note the breaking change in the release notes, yes.
@c-knowles thanks for the refactor. I think it makes sense, and it does not affect me because I'm handling IAM outside kube-aws and instructing kube-aws to use an existing instance profile, since my IAM role scheme is pretty specific to our use case.

@mumoshu

Collaborator

mumoshu commented May 21, 2018

@Fsero Thanks for the confirmation 👍 Will add a release note.

@mumoshu

Collaborator

mumoshu commented May 21, 2018

LGTM. We should probably fix the node affinity later.

@mumoshu mumoshu merged commit e0283a4 into kubernetes-incubator:master May 21, 2018

2 checks passed

cla/linuxfoundation: c-knowles authorized
continuous-integration/travis-ci/pr: The Travis CI build passed

@mumoshu mumoshu added this to the v0.11.0 milestone May 21, 2018

@c-knowles c-knowles deleted the c-knowles:improvement/named-iam-roles-cleanup branch May 27, 2018
