Recently, Binder ran into surprising autoscaling behavior when it hit a 110 pod-per-node limit we didn't know about. Yesterday, I ran into the fact that GKE only allows 16 persistent volumes per node. This is a known limitation with a planned alpha fix in Kubernetes 1.11.
We've accounted for CPU and RAM limits in our capacity-planning docs, but there are other limits we aren't covering and should.
All the exhaustible resources I'm aware of right now:
- CPU
- RAM
- pods (110 per node by default)
- persistent volumes (16 per node; sometimes overridable, but not on managed providers like GKE)
Some deployments may have to account for GPUs, etc.
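For auditing what a cluster actually exposes, here's a minimal sketch using the official `kubernetes` Python client to print each node's allocatable resources. It assumes a working kubeconfig; note that the keys under `status.allocatable` (e.g. `attachable-volumes-gce-pd`) vary by provider and Kubernetes version, so treat the field names beyond `cpu`, `memory`, and `pods` as illustrative.

```python
from kubernetes import client, config

# Assumes a kubeconfig is available (e.g. via gcloud container clusters get-credentials).
config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    alloc = node.status.allocatable  # dict of resource name -> quantity string
    print(node.metadata.name)
    print("  cpu:", alloc.get("cpu"))
    print("  memory:", alloc.get("memory"))
    print("  pods:", alloc.get("pods"))  # typically 110 by default
    # Attachable-volume counts are only reported on some providers/versions:
    for key, value in alloc.items():
        if key.startswith("attachable-volumes-"):
            print(f"  {key}: {value}")
```

Something like this could feed a capacity-planning check: for each exhaustible resource, compare planned per-user usage times expected users against the per-node allocatable value, not just cluster totals.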
This is still relevant, but I'm embedding and summarizing this task into another issue to get the number of issues in this repo down to a manageable size.