
Kubernetes maximum 50-pod limit on latest 2.0.1.0 Docker Edge #3453

Closed
bgehman opened this issue Jan 11, 2019 · 9 comments

Comments

@bgehman

commented Jan 11, 2019

  • I have tried with the latest version of my channel (Stable or Edge)
  • I have uploaded Diagnostics
  • Diagnostics ID:

Expected behavior

Able to deploy as many Kubernetes pods as I want.

Actual behavior

Hitting what seems to be a 50-pod limit. Exactly 50 pods reach the Running state; additional ones show this Kubernetes event (from kubectl describe pod -n <namespace> <podname>):

Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  1m (x92 over 16m)  default-scheduler  0/1 nodes are available: 1 Insufficient pods.
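
For reference, the scheduler's "0/1 nodes are available: 1 Insufficient pods" message reflects the node's advertised pod capacity, which can be read directly. A minimal check, assuming a single-node cluster such as Docker Desktop's (the 50 shown is what this report implies):

$ # Read the schedulable pod capacity of the first (only) node
$ kubectl get nodes -o jsonpath='{.items[0].status.capacity.pods}'
50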

Information

  • macOS Version: 10.14.2
  • Docker Edge version: 2.0.1.0 (30090)

Steps to reproduce the behavior

Try to deploy more than 50 pods; only 50 reach Running:

$ kubectl get pods --all-namespaces | grep Running | wc -l  
      50
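
Of those 50, the system pods in kube-system count against the same per-node limit; they can be tallied the same way. A sketch in the same style (the figure of 9 matches what a later comment in this thread reports):

$ # System pods in kube-system consume part of the same per-node budget
$ kubectl get pods -n kube-system | grep Running | wc -l
       9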

@guillaumerose

Member

commented Jan 11, 2019

Interesting! Have you tried adding more CPU to your VM?

@bgehman

Author

commented Jan 11, 2019

@guillaumerose It is not a CPU or RAM limitation -- it seems to be a hard-coded pod limit. I found this article: https://prefetch.net/blog/2018/02/10/the-kubernetes-110-pod-limit-per-node/ , but I don't think we can override the kubelet --max-pods setting given how Docker bundles Kubernetes (there is no option in the preferences).

I'm downgrading to 2.0.0.0, where this was working fine prior to the upgrade.
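
For context, on a cluster where the kubelet is directly configurable (Docker Desktop's preferences expose no such option), the per-node limit would normally be raised with the kubelet's --max-pods flag, or the maxPods field of a KubeletConfiguration file. A sketch only, not something Docker Desktop supports:

$ # Illustrative: raise the per-node pod limit on a kubelet you control
$ kubelet --max-pods=200 <other kubelet flags>
$ # or set "maxPods: 200" in the file passed to the kubelet's --config flag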

@guillaumerose

Member

commented Jan 11, 2019

What would be a good limit for you? I'm guessing 200 is enough.

@bgehman

Author

commented Jan 11, 2019

@guillaumerose Unlimited :). But yeah, 200 should be plenty to stay out of the way. 50 is just too low -- especially as the internal Kubernetes pods take up 9 of them, leaving only 41 for user-land. Thanks.

@bgehman

Author

commented Jan 11, 2019

Just to confirm: I am seeing a 50-pod limit in v2.0.1.0 (which is just too low).
I do not see that lower limit in v2.0.0.0 -- I suspect it is 110 in that version...

Either the default of 110 or an increase to 200 would be sufficient from my POV. Thanks.
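
The value the kubelet is actually running with can be confirmed through the API server's proxy to the kubelet's configz endpoint. A sketch (the "maxPods":50 output is illustrative of what v2.0.1.0 would report):

$ NODE=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
$ kubectl get --raw "/api/v1/nodes/$NODE/proxy/configz" | grep -o '"maxPods":[0-9]*'
"maxPods":50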

@bgehman

Author

commented Jan 15, 2019

@guillaumerose Any explanation for why this bug was closed? Thanks.

@guillaumerose

Member

commented Jan 15, 2019

I merged the fix in our private repo. This issue was closed automatically, sorry.

The limit override is removed; we now keep the default value. It will be in the next release.

@bgehman

Author

commented Feb 6, 2019

Confirmed fixed in edge version v2.0.2.0 (30972).
Thanks again, @guillaumerose 👍
