
[autoscaler][kubernetes] autoscaling hotfix #14024

Merged: 1 commit merged into ray-project:master on Feb 10, 2021

Conversation

@DmitriGekhtman (Contributor) commented Feb 10, 2021

Why are these changes needed?

Right now, the Kubernetes fill_out_available_node_type_resources logic fills in "GPU": 0 for non-GPU nodes. This interacts badly with the resource demand scheduler's GPU conservation logic and prevents autoscaling on K8s.
This PR fixes the KubernetesNodeProvider's resource-filling logic so that it does not fill in resource fields with value 0.
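A minimal sketch of the intended behavior (illustrative names only, not the exact Ray source): when filling out a node type's resources, skip any field whose value is 0, so a non-GPU node never advertises "GPU": 0.

```python
# Illustrative sketch of the fix (hypothetical helper, not the actual
# KubernetesNodeProvider code): drop zero-valued resource fields so that
# non-GPU nodes don't advertise "GPU": 0 and trip the scheduler's GPU
# conservation logic.
def fill_out_resources(detected: dict) -> dict:
    return {name: value for name, value in detected.items() if value > 0}

# Example: a CPU-only pod spec keeps CPU and memory but omits GPU.
print(fill_out_resources({"CPU": 4, "GPU": 0, "memory": 8_000_000_000}))
# -> {'CPU': 4, 'memory': 8000000000}
```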

Related issue number

Checks

  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
  • I've made sure the tests are passing. Note that there might be a few flaky tests; see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

Did a quick manual check that this fixes the problem.
Will add more test logic to the K8s operator unit test (not currently in CI) later; a rough sketch of such a test follows.
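A hypothetical unit-test sketch (illustrative only, not the actual K8s operator test in Ray CI), checking that zero-valued resource fields are omitted:

```python
# Hypothetical test sketch; fill_out_resources is the illustrative helper
# from the snippet above, repeated here so the example is self-contained.
import unittest

def fill_out_resources(detected: dict) -> dict:
    return {name: value for name, value in detected.items() if value > 0}

class TestResourceFilling(unittest.TestCase):
    def test_zero_valued_resources_are_omitted(self):
        filled = fill_out_resources({"CPU": 2, "GPU": 0})
        self.assertEqual(filled, {"CPU": 2})
        self.assertNotIn("GPU", filled)

if __name__ == "__main__":
    unittest.main()
```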

@ericl ericl merged commit 8ca0a32 into ray-project:master Feb 10, 2021
@DmitriGekhtman DmitriGekhtman deleted the k8s-autoscaler-hotfix branch February 10, 2021 21:04
fishbone pushed a commit to fishbone/ray that referenced this pull request Feb 16, 2021
fishbone added a commit to fishbone/ray that referenced this pull request Feb 16, 2021
fishbone added a commit to fishbone/ray that referenced this pull request Feb 16, 2021
Labels: none yet
Projects: none yet
Development: Successfully merging this pull request may close these issues. (None yet)
2 participants