
Recognize instance-type node label when EC2 metadata isn't available #1060

Merged: 1 commit into kubernetes-sigs:master on Sep 13, 2021

Conversation

rifelpet (Contributor) commented on Sep 12, 2021

Is this a bug fix or adding a new feature?
/kind bug

What is this PR about? / Why do we need it?
This fixes volume limits on nodes that don't have the EC2 metadata endpoint available.

kOps is experiencing volume limit test failures, and I believe this is because the limit is being set incorrectly.

The nodes are t3.medium instances, which should have a limit of 28 volumes.

The csi-node pod logs report that EC2 metadata is not available, so we fall back to metadata from the k8s API, which doesn't set an instance type.

With an empty instance type, the volume limit code doesn't match the Nitro regex and defaults to the non-Nitro limit of 39 volumes, which is higher than the t3.medium's actual limit of 28:

func (d *nodeService) getVolumesLimit() int64 {
    // A user-configured attach limit always takes precedence.
    if d.driverOptions.volumeAttachLimit >= 0 {
        return d.driverOptions.volumeAttachLimit
    }
    ebsNitroInstanceTypeRegex := "^[cmr]5.*|t3|z1d"
    instanceType := d.metadata.GetInstanceType()
    // When IMDS is unavailable, instanceType is empty and never matches,
    // so the function falls through to the higher non-Nitro default.
    if ok, _ := regexp.MatchString(ebsNitroInstanceTypeRegex, instanceType); ok {
        return defaultMaxEBSNitroVolumes
    }
    return defaultMaxEBSVolumes
}
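
To see the failure mode in isolation, here is a minimal standalone sketch (not driver code) of how the regex treats the empty instance type reported when IMDS is unavailable, versus a real t3.medium:

package main

import (
    "fmt"
    "regexp"
)

func main() {
    re := "^[cmr]5.*|t3|z1d"
    // "" is what the Kubernetes-API fallback effectively reports today;
    // it fails the match, so the driver uses the non-Nitro default of 39.
    for _, instanceType := range []string{"", "t3.medium", "m4.large"} {
        ok, _ := regexp.MatchString(re, instanceType)
        fmt.Printf("%q -> Nitro match: %v\n", instanceType, ok)
    }
}

Running this prints false for the empty string and true for t3.medium, which is exactly the mismatch the test output below shows.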

The test output confirms that it incorrectly uses a limit of 39 volumes:
Sep 12 01:37:14.436: INFO: Node ip-172-20-34-51.ap-northeast-2.compute.internal can handle 39 volumes of driver ebs.csi.aws.com
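
A minimal sketch of the approach in this PR's title: when IMDS is unavailable, read the instance type from the Node's well-known labels instead. The package, helper name, and wiring here are illustrative, not the driver's actual code:

package metadata // illustrative placement

import (
    corev1 "k8s.io/api/core/v1"
)

// instanceTypeFromNodeLabels reads the instance type from the Node object
// that the Kubernetes-API fallback already fetches.
func instanceTypeFromNodeLabels(node *corev1.Node) string {
    // Well-known label set by the kubelet/cloud provider on modern clusters.
    if t := node.Labels["node.kubernetes.io/instance-type"]; t != "" {
        return t
    }
    // Older clusters may only carry the deprecated beta label.
    return node.Labels["beta.kubernetes.io/instance-type"]
}

With the label recognized, GetInstanceType() returns "t3.medium" on these nodes and the Nitro regex matches, so the driver advertises the lower Nitro default instead of 39.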

What testing is done?

@k8s-ci-robot added the kind/bug and cncf-cla: yes labels on Sep 12, 2021
@k8s-ci-robot added the size/S label (10-29 lines changed, ignoring generated files) on Sep 12, 2021
This fixes volume limits on nodes that don't have the EC2 metadata endpoint available.
wongma7 (Contributor) commented on Sep 13, 2021

/lgtm
/approve

makes perfect sense, thanks.

I suppose we also ought to reduce the default from 39 to 27 (28 minus 1 ENI), as I predict Nitro usage will overtake non-Nitro if it hasn't already. But there are still plenty of exceptions (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_limits.html#instance-type-volume-limits), and the ENI+EBS attachment-sharing limit is still an issue, so reducing the default limit is just best-effort: we can't guarantee the limit won't get hit, even when we DO know the instance type.
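
For concreteness, the arithmetic behind the suggested 27, as a runnable sketch under the comment's own assumptions (the constants are illustrative, not driver code):

package main

import "fmt"

// On most Nitro instances, ENIs and EBS volumes share a pool of 28
// attachment slots, so reserving one slot for the primary ENI leaves 27.
const (
    nitroSharedAttachmentSlots = 28
    reservedForPrimaryENI      = 1
)

func main() {
    fmt.Println(nitroSharedAttachmentSlots - reservedForPrimaryENI) // 27, best-effort only
}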

@k8s-ci-robot added the lgtm label on Sep 13, 2021
k8s-ci-robot (Contributor) commented

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: rifelpet, wongma7

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved label on Sep 13, 2021
@k8s-ci-robot k8s-ci-robot merged commit adfec04 into kubernetes-sigs:master Sep 13, 2021