Add kube-aws:version to stacktags #1409

Merged
merged 3 commits into kubernetes-incubator:master Jul 19, 2018

Conversation

5 participants
@davidmccormick
Contributor

davidmccormick commented Jul 16, 2018

We don't currently tag our stack or instances with the version of kube-aws that created them. This change adds a 'kube-aws:version' tag to each stack, including the root stack, and fixes a bug where tags are not updated when stacks are updated. These stack tags are also inherited by instances, so all EC2 instances also receive the tag. This makes it very easy to use the AWS API/CLI/GUI to check which version of kube-aws was used to deploy a cluster.

This idea was mentioned in #1405, where it was suggested as a good idea - giving us less reason for further requests to embed it in the cluster.yaml.
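A minimal sketch of the tagging idea described above: merge a kube-aws:version tag into the user-supplied stack tags before stack creation. The names here (withVersionTag, VERSION, the example values) are illustrative, not the PR's actual implementation.

```go
package main

import "fmt"

// VERSION stands in for the kube-aws build version (in practice this
// would be set at build time); the value here is hypothetical.
var VERSION = "v0.10.2"

// withVersionTag returns a copy of the user-supplied stack tags with
// the kube-aws:version tag added, so every stack (root included)
// carries the version of kube-aws that created it.
func withVersionTag(stackTags map[string]string) map[string]string {
	out := map[string]string{}
	for k, v := range stackTags {
		out[k] = v
	}
	out["kube-aws:version"] = VERSION
	return out
}

func main() {
	fmt.Println(withVersionTag(map[string]string{"team": "platform"}))
}
```

Once the stacks carry the tag and instances inherit it, something like `aws ec2 describe-tags --filters Name=key,Values=kube-aws:version` should read the version back from the CLI.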

@c-knowles
Collaborator

c-knowles commented Jul 16, 2018

Awesome! Does this also work with spot instances? They do not inherit the stack tags, unfortunately, and kube-aws supports some other attributes for that, although I kind of wish we had connected them to the stack tags since AWS doesn't.

@codecov-io

codecov-io commented Jul 16, 2018

Codecov Report

Merging #1409 into master will increase coverage by 0.03%.
The diff coverage is 50%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #1409      +/-   ##
==========================================
+ Coverage   37.53%   37.57%   +0.03%     
==========================================
  Files          74       74              
  Lines        4534     4546      +12     
==========================================
+ Hits         1702     1708       +6     
- Misses       2597     2603       +6     
  Partials      235      235
Impacted Files Coverage Δ
cfnstack/provisioner.go 0% <0%> (ø) ⬆️
core/nodepool/cluster/cluster.go 46.51% <100%> (+1.27%) ⬆️
core/controlplane/cluster/cluster.go 50% <100%> (+0.69%) ⬆️

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 77c43b5...14b4462. Read the comment docs.


@davidmccormick
Contributor

davidmccormick commented Jul 16, 2018

Hi, I haven't tested spot instances, but it looks as though they are responsible for setting their own tags, and the stack tags are propagated to them through their cloud-config files, so they should also set this new tag!

I've added the new 'kube-aws:version' tag to StackTags so it behaves like any other StackTag - and so I'm quite confident that it will be propagated down. However, I am a little less confident that this is the behavior that you desire (hopefully it is)!

I have tested it rolling out to ASG-style clusters and updating as the version of kube-aws is changed.

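The propagation described above happens when kube-aws renders the StackTags map into the instance-tagging script via Go templates. A minimal standalone sketch, mirroring the `{{range $k, $v := .StackTags}}` loop used in kube-aws's cloud-config (the version value is hypothetical):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// tagLoop mirrors the StackTags range loop from the cloud-config
// template: each map entry becomes a Key/Value pair appended to the
// TAGS shell variable.
const tagLoop = `{{range $k, $v := .StackTags -}}
TAGS="${TAGS}Key=\"{{$k}}\",Value=\"{{$v}}\" "
{{end -}}`

// renderTagLoop executes the template against a StackTags map and
// returns the rendered shell fragment.
func renderTagLoop(stackTags map[string]string) string {
	t := template.Must(template.New("tags").Parse(tagLoop))
	var buf bytes.Buffer
	if err := t.Execute(&buf, struct{ StackTags map[string]string }{stackTags}); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Print(renderTagLoop(map[string]string{"kube-aws:version": "v0.10.2"}))
}
```

Because the new tag is just another StackTags entry, it flows through this loop like any user-defined tag.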

@mumoshu
Collaborator

mumoshu commented Jul 17, 2018

We have a script/unit that lets spot instances tag themselves!

{{if .SpotFleet.Enabled}}
- path: /opt/bin/tag-spot-instance
  owner: root:root
  permissions: 0700
  content: |
    #!/bin/bash -e
    instance_id=$(curl http://169.254.169.254/latest/meta-data/instance-id)
    TAGS=""
    TAGS="${TAGS}Key=\"kubernetes.io/cluster/{{ .ClusterName }}\",Value=\"owned\" "
    TAGS="${TAGS}Key=\"kube-aws:node-pool:name\",Value=\"{{.NodePoolName}}\" "
    TAGS="${TAGS}Key=\"Name\",Value=\"{{.ClusterName}}-{{.StackName}}-kube-aws-worker\" "
    {{if .Autoscaling.ClusterAutoscaler.Enabled -}}
    TAGS="${TAGS}Key=\"{{.Autoscaling.ClusterAutoscaler.AutoDiscoveryTagKey}}\",Value=\"\" "
    {{end -}}
    {{range $k, $v := .StackTags -}}
    TAGS="${TAGS}Key=\"{{$k}}\",Value=\"{{$v}}\" "
    {{end -}}
    {{range $k, $v := .InstanceTags -}}
    TAGS="${TAGS}Key=\"{{$k}}\",Value=\"{{$v}}\" "
    {{end -}}
    echo Tagging this EC2 instance with: "$TAGS"
    rkt run \
      --volume=ssl,kind=host,source=/etc/kubernetes/ssl,readOnly=false \
      --mount=volume=ssl,target=/etc/kubernetes/ssl \
      --uuid-file-save=/var/run/coreos/tag-spot-instance.uuid \
      --volume=dns,kind=host,source=/etc/resolv.conf,readOnly=true --mount volume=dns,target=/etc/resolv.conf \
      --net=host \
      --trust-keys-from-https \
      --insecure-options=ondisk \
      {{.AWSCliImage.Options}}{{.AWSCliImage.RktRepo}} --exec=/bin/bash -- \
        -vxec \
        'echo tagging this spot instance
         instance_id="'$instance_id'"
         /usr/bin/aws \
           --region {{.Region}} ec2 create-tags \
           --resource $instance_id \
           --tags '"$TAGS"'
         echo done.'
    rkt rm --uuid-file=/var/run/coreos/tag-spot-instance.uuid || :

It would be nice if we could add the support for it later.


@mumoshu
Collaborator

mumoshu commented Jul 17, 2018

Oh, it seems like we can remove the script/unit in favor of the official cfn support for spotfleet tag specification as per #901!

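For reference, a hypothetical sketch of the CloudFormation-native direction from #901: an AWS::EC2::SpotFleet launch specification carrying a TagSpecifications entry. All names and values below are placeholders, and this fragment is an illustration of the approach, not code from this PR.

```yaml
# Sketch only - placeholder values, not from this PR.
SpotFleet:
  Type: AWS::EC2::SpotFleet
  Properties:
    SpotFleetRequestConfigData:
      IamFleetRole: <fleet-role-arn>
      TargetCapacity: 1
      LaunchSpecifications:
        - ImageId: <ami-id>
          InstanceType: t2.medium
          TagSpecifications:
            - ResourceType: instance
              Tags:
                - Key: kube-aws:version
                  Value: <kube-aws-version>
```

With this, the tags would be applied by CloudFormation at launch, removing the need for the self-tagging script.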

@davidmccormick
Contributor

davidmccormick commented Jul 17, 2018

Could changing the implementation of setting spot tags be another issue/PR? I can't properly test it here.


@c-knowles
Collaborator

c-knowles commented Jul 17, 2018

My vote goes for a separate PR; it's a bit delicate, as kube-aws also supports separate tags per node pool.


@mumoshu

LGTM. I really appreciate your continuous effort to improve kube-aws 🙇

@mumoshu mumoshu merged commit 28e06e6 into kubernetes-incubator:master Jul 19, 2018

2 checks passed

cla/linuxfoundation davidmccormick authorized
Details
continuous-integration/travis-ci/pr The Travis CI build passed
Details

@mumoshu mumoshu added this to the v0.11.0 milestone Jul 19, 2018
