
Enable the user scheduler to pay attention to CSI volume count #1699

Merged 2 commits on Aug 25, 2020



I've been getting the 0.9.0 chart running on an autoscaling cluster on Digital Ocean. Most things are working, but the scheduler would try to place more than seven pods with volumes on a given node. (Digital Ocean has a relatively low limit of 7 attached volumes per node.) This only happened when using the userScheduler; if that was disabled, the limits were respected.

Digital Ocean support pointed out that the scheduler was missing the "MaxCSIVolumeCountPred" predicate, which is used to count attached volumes. This appears to be a more general predicate that will replace the provider-specific ones we currently have. (See here.)
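For reference, with the legacy kube-scheduler policy configuration, a predicate like this is enabled by listing it in a Policy object. The sketch below is illustrative only; the predicate names are upstream Kubernetes names, but the exact set of predicates and how the policy is wired into this chart's scheduler deployment may differ:

```yaml
# Sketch: legacy kube-scheduler Policy config that includes
# MaxCSIVolumeCountPred, which enforces the per-node attachable
# volume limits reported by CSI drivers.
kind: Policy
apiVersion: v1
predicates:
  - name: PodFitsResources
  - name: NoVolumeZoneConflict
  - name: MaxCSIVolumeCountPred   # counts attached CSI volumes against node limits
priorities:
  - name: LeastRequestedPriority
    weight: 1
```

Without MaxCSIVolumeCountPred in the predicate list, the scheduler happily binds pods to nodes that have already hit the provider's attachment limit, which matches the behavior seen above.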

I could only get this to work with an updated version of the kube-scheduler image. I don't know what was going wrong with the current version, and I also don't know if it needs to be updated this far. If we're worried about upgrading this much, I can check some of the previous versions and report back.
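If anyone wants to test intermediate versions, the user scheduler image can be overridden from the chart's values. This is a hypothetical values.yaml snippet; the key names and the image name/tag shown are assumptions to verify against your chart version:

```yaml
scheduling:
  userScheduler:
    enabled: true
    image:
      # Hypothetical override; check the exact keys and available
      # tags for your chart and Kubernetes version.
      name: k8s.gcr.io/kube-scheduler
      tag: v1.16.15
```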

In my testing, I was not able to get this working on the existing
kube-scheduler image. I don't know if updating this much was necessary.
This works in testing on Digital Ocean; let's see if it passes CI.
Contributor Author

It's working on 1.16 and 1.18. Not sure what 1.17 is unhappy about.


minrk commented Aug 25, 2020

Just had to restart 1.17; it turns out that was an intermittent CI error, not a true failure.


@minrk minrk merged commit 25782ea into jupyterhub:master Aug 25, 2020
Contributor Author

Thanks for finding this one; I had forgotten about it!


consideRatio commented Sep 8, 2020

Removed this comment in favor of #1773.
