
If there are too many pods (>100), the kubectl proxy UI does not work #1954

Closed
jam182 opened this issue May 15, 2017 · 3 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@jam182

jam182 commented May 15, 2017

Steps to reproduce

1. Deploy more than 100 pods (say, around 200).
2. Try to access the Dashboard through `kubectl proxy` on localhost (see the sketch below).
3. Scroll through the pages in the pods list section.
4. You get the error.
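
For reference, a minimal sketch of the access path used above, assuming a default Dashboard install in kube-system; the `/ui` path is how Dashboard v1.6.0 is typically reached through the API server proxy, and the port is `kubectl proxy`'s default:

```sh
# Start a local proxy to the API server (listens on 127.0.0.1:8001 by default).
kubectl proxy

# In another terminal (or a browser), open the Dashboard through the proxy.
curl http://localhost:8001/ui
```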

Environment
Installation method: 
Kubernetes version:
Dashboard version: v1.6.0
Commit: bfab10151f012d1acc5dfb1979f3172e2400aa3c
Observed result

Dashboard reported Service Unavailable (503):

```
No error data available
```


@maciaszczykm
Member

Can you paste the results of `kubectl describe` on your Dashboard deployment and `kubectl logs` from its pod? It is probably an out-of-memory error; we have increased the limits recently.
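
For concreteness, the requested diagnostics would look roughly like this; the pod name is a placeholder to fill in from the `get pods` output:

```sh
# Check limits, restart count and the last termination reason.
kubectl -n kube-system describe deployment kubernetes-dashboard
kubectl -n kube-system get pods -l k8s-app=kubernetes-dashboard
kubectl -n kube-system describe pod <dashboard-pod-name>

# Fetch logs, including from the previously terminated (OOMKilled) container.
kubectl -n kube-system logs <dashboard-pod-name>
kubectl -n kube-system logs <dashboard-pod-name> --previous
```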

maciaszczykm added kind/bug and priority/P1 labels May 15, 2017
@jam182
Author

jam182 commented May 15, 2017

Yeah, you are right, it gets OOMKilled:

➜  ~ kubectl -n kube-system describe pod kubernetes-dashboard-490794276-wm538
Name:		kubernetes-dashboard-490794276-wm538
Namespace:	kube-system
Node:		gke-xxxxx-xxxxxxxxxxxxxxxxxxx
Start Time:	Wed, 10 May 2017 11:29:06 +0100
Labels:		k8s-app=kubernetes-dashboard
		pod-template-hash=xxxxxx
Annotations:	kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"kubernetes-dashboard-xxx","uid":"dxxxxxxxx...
		scheduler.alpha.kubernetes.io/critical-pod=
Status:		Running
IP:		xx.xx.1.135
Controllers:	ReplicaSet/kubernetes-dashboard-xxxxxxxx
Containers:
  kubernetes-dashboard:
    Container ID:	docker://xxxxx
    Image:		xxxx/google_containers/kubernetes-dashboard-amd64:v1.6.0
    Image ID:		docker://sha256:xxxxxxxxxxxxxx
    Port:		9090/TCP
    State:		Running
      Started:		Mon, 15 May 2017 14:57:30 +0100
    Last State:		Terminated
      Reason:		OOMKilled
      Exit Code:	137
      Started:		Mon, 01 Jan 0001 00:00:00 +0000
      Finished:		Mon, 15 May 2017 14:57:29 +0100
    Ready:		True
    Restart Count:	10
    Limits:
      cpu:	100m
      memory:	50Mi
    Requests:
      cpu:		100m
      memory:		50Mi
    Liveness:		http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:	<none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xxxx (ro)
Conditions:
  Type		Status
  Initialized 	True 
  Ready 	True 
  PodScheduled 	True 
Volumes:
  default-token-xxxxx:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-xxxx
    Optional:	false
QoS Class:	Guaranteed
Node-Selectors:	<none>
Tolerations:	CriticalAddonsOnly=:Exists
		node.alpha.kubernetes.io/notReady=:Exists:NoExecute for 300s
		node.alpha.kubernetes.io/unreachable=:Exists:NoExecute for 300s
Events:
  FirstSeen	LastSeen	Count	From						SubObjectPath				Type		Reason	Message
  ---------	--------	-----	----						-------------				--------	------	-------
  4d		6m		10	kubelet, xxxxxxxx	spec.containers{kubernetes-dashboard}	Normal		Pulled	Container image "xxxxxx/google_containers/kubernetes-dashboard-amd64:v1.6.0" already present on machine
  6m		6m		1	kubelet, gke-xxxxx-xxxxxxxxxxxxxxxxxxx	Normal		Created	Created container with id xxxxxx
  6m		6m		1	kubelet, gke-xxxxx-xxxxxxxxxxxxxxxxxxx	spec.containers{kubernetes-dashboard}	Normal		Started	Started container with id xxxxx
➜  ~

@maciaszczykm
Member

You should increase the limits and requests for it. We've done that recently in kubernetes/kubernetes#44712.

You can use the same values. I will close this issue; please reopen it if that doesn't help.
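
A minimal sketch of one way to apply that, assuming the Dashboard runs as the kubernetes-dashboard deployment in kube-system; the values below are only illustrative, not necessarily the ones from kubernetes/kubernetes#44712:

```sh
# Raise the requests/limits on the Dashboard deployment (example values).
kubectl -n kube-system set resources deployment kubernetes-dashboard \
  --requests=cpu=100m,memory=100Mi \
  --limits=cpu=100m,memory=300Mi

# Alternatively, edit the deployment and change its resources block by hand:
#   kubectl -n kube-system edit deployment kubernetes-dashboard

# Watch the rollout and confirm the new values.
kubectl -n kube-system rollout status deployment kubernetes-dashboard
kubectl -n kube-system describe deployment kubernetes-dashboard
```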
