
fix(hetzner): missing error return in scale up/down #6750

Open · wants to merge 1 commit into base: master
Conversation

apricote (Member)

What type of PR is this?

/kind bug

What this PR does / why we need it:

There is no Node Group/Autoscaling Group concept in the Hetzner Cloud API, so the Hetzner provider implements node groups by manually creating as many servers as needed.

The current code did not return any of the errors that could occur while doing so. Without any returned errors, cluster-autoscaler assumed that everything was fine with the Node Group.

When there is a temporary issue with one of the node groups (e.g. the location is unavailable, or there is no leftover capacity for the requested server type), cluster-autoscaler should take this into account and try to scale up a different Node Group. This happens automatically once we return an error, as cluster-autoscaler backs off from scaling Node Groups that have recently returned errors.
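The shape of the fix can be sketched as follows. This is an illustrative Go snippet, not the actual provider code: `createServer` and `scaleUp` are hypothetical stand-ins for the Hetzner provider's per-server creation loop, with capacity modeled as a simple counter. The point is that the first creation failure is propagated to the caller instead of being swallowed, which is what lets cluster-autoscaler mark the node group as errored and back off.

```go
package main

import (
	"errors"
	"fmt"
)

// createServer is a hypothetical stand-in for the Hetzner Cloud API call
// that provisions one server. It fails once the (simulated) capacity for
// the requested server type is exhausted.
func createServer(available int, created *int) error {
	if *created >= available {
		return errors.New("resource_unavailable: no capacity left for requested server type")
	}
	*created++
	return nil
}

// scaleUp sketches the fixed behavior: each server creation is checked,
// and the first error is returned (wrapped) instead of being ignored, so
// the caller can put the node group into an error state.
func scaleUp(delta, available int) error {
	created := 0
	for i := 0; i < delta; i++ {
		if err := createServer(available, &created); err != nil {
			return fmt.Errorf("failed to create server: %w", err)
		}
	}
	return nil
}

func main() {
	// Enough capacity: scale-up succeeds.
	fmt.Println(scaleUp(2, 5))
	// Capacity exhausted mid-loop: the error now reaches the caller.
	fmt.Println(scaleUp(3, 1))
}
```

Before the fix, the equivalent of `scaleUp` discarded `err`, so both calls would have looked identical to cluster-autoscaler; with the error returned, the second call triggers its back-off logic and it tries a different Node Group.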

Which issue(s) this PR fixes:

Fixes #6240

Special notes for your reviewer:

Does this PR introduce a user-facing change?

Fixed exhausted node groups not backing off for the Hetzner provider

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

@k8s-ci-robot added the kind/bug label (Categorizes issue or PR as related to a bug.) on Apr 24, 2024
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: apricote

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot requested a review from x13n on April 24, 2024 09:40
@k8s-ci-robot added the following labels on Apr 24, 2024:
- area/provider/hetzner — Issues or PRs related to Hetzner provider
- approved — Indicates a PR has been approved by an approver from all required OWNERS files.
- cncf-cla: yes — Indicates the PR's author has signed the CNCF CLA.
- size/M — Denotes a PR that changes 30-99 lines, ignoring generated files.
@apricote (Member, Author)

@elohmeier Could you test this PR to see if it fixes your issue? In my tests I have observed that the Node Group is now properly marked in an error state, but I do not have a production-like setup to test this with.

Labels
- approved — Indicates a PR has been approved by an approver from all required OWNERS files.
- area/cluster-autoscaler
- area/provider/hetzner — Issues or PRs related to Hetzner provider
- cncf-cla: yes — Indicates the PR's author has signed the CNCF CLA.
- kind/bug — Categorizes issue or PR as related to a bug.
- size/M — Denotes a PR that changes 30-99 lines, ignoring generated files.
Development

Successfully merging this pull request may close these issues.

Cluster Autoscaler not backing off exhausted node group
2 participants