
[e2e] validates resource limits of pods that are allowed to run failed due to incorrect cpu calculation #47627

Closed
bizhao opened this issue Jun 16, 2017 · 4 comments · Fixed by #47628
Labels
sig/scheduling, sig/testing

Comments

@bizhao
Contributor

bizhao commented Jun 16, 2017

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/.):
No.
Note: Please file issues for subcomponents under the appropriate repo

| Component | Repo |
| --- | --- |
| kubectl | kubernetes/kubectl |
| kubeadm | kubernetes/kubeadm |

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.):
validates resource limits of pods that are allowed to run

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Kubernetes version (use kubectl version):
1.6.1
kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"351eadd265c6b752c600fd53f4c15da379cdada2", GitTreeState:"clean", BuildDate:"2017-06-01T06:22:44Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"351eadd265c6b752c600fd53f4c15da379cdada2", GitTreeState:"clean", BuildDate:"2017-06-01T06:22:44Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration:
    Local Setup

  • OS (e.g. from /etc/os-release):
    VMware Photon 1.0"

  • Kernel (e.g. uname -a):
    Linux k8s-master-0-ebbe850c-0a28-4fd8-b416-03084eb816ed 4.4.41-1.ph1-esx #1-photon SMP Tue Jan 10 23:46:44 UTC 2017 x86_64 GNU/Linux

  • Install tools:

  • Others:

What happened:
This e2e test (validates resource limits of pods that are allowed to run) failed.

What you expected to happen:
Should always pass.

How to reproduce it (as minimally and precisely as possible):
Run this test in a setup where a node's CPU capacity differs from its CPU allocatable.

Anything else we need to know:
I will submit a fix for this issue.

Details about capacity and allocatable:
https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node-allocatable.md

In the local setup, capacity = 2000m and allocatable = 1900m.
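
For context, here is a minimal sketch of the calculation involved (illustrative names and the present-day import path; not the actual test/e2e/scheduling/predicates.go code). The test derives each node's remaining CPU budget by subtracting the CPU already requested by pods on that node from a per-node total, and the bug is that this total came from Capacity rather than Allocatable, which is what the scheduler actually budgets against.

```go
// Hedged sketch of the calculation this issue is about; not the actual
// test/e2e/scheduling/predicates.go code. The function name is illustrative
// and the import path is the present-day one.
package sketch

import v1 "k8s.io/api/core/v1"

// schedulableMilliCPU returns how much CPU (in milli-CPU) the scheduler will
// still place on a node, given the milli-CPU already requested by the pods
// bound to it. The scheduler budgets against Allocatable, so the e2e test must
// too; budgeting against Capacity (the bug reported here) over-counts by
// whatever is reserved for system daemons: 2000m vs 1900m in this setup.
func schedulableMilliCPU(node *v1.Node, requestedMilliCPU int64) int64 {
	// Buggy variant described in this issue:
	//   cpu := node.Status.Capacity[v1.ResourceCPU]
	cpu := node.Status.Allocatable[v1.ResourceCPU]
	return cpu.MilliValue() - requestedMilliCPU
}
```

With the numbers from the log below, the test concludes there is room for nine additional 500m pods (1660m + 1510m + 1700m computed from Capacity), while against Allocatable the scheduler can only place eight: on k8s-node-1, for example, the existing pods request 490m, leaving 1900m - 490m = 1410m, room for two more 500m pods rather than the three that 1510m suggests. The ninth pod, overcommit-8, therefore stays Pending with "Insufficient cpu" and the test times out waiting for nine running pods.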

root@k8s-master-0-ebbe850c-0a28-4fd8-b416-03084eb816ed [ ~ ]# kubectl describe node k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed
Name: k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed
Role:
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
failure-domain.beta.kubernetes.io/region=SDDC
failure-domain.beta.kubernetes.io/zone=nova
kubernetes.io/hostname=k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed
node-role.kubernetes.io/node=true
Annotations: node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:
CreationTimestamp: Tue, 13 Jun 2017 10:46:07 +0000
Phase:
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message


OutOfDisk False Thu, 15 Jun 2017 06:51:47 +0000 Tue, 13 Jun 2017 10:46:07 +0000 KubeletHasSufficientDisk kubelet has su
MemoryPressure False Thu, 15 Jun 2017 06:51:47 +0000 Tue, 13 Jun 2017 10:46:07 +0000 KubeletHasSufficientMemory kubelet has su
DiskPressure False Thu, 15 Jun 2017 06:51:47 +0000 Tue, 13 Jun 2017 10:46:07 +0000 KubeletHasNoDiskPressure kubelet has no
Ready True Thu, 15 Jun 2017 06:51:47 +0000 Tue, 13 Jun 2017 10:46:07 +0000 KubeletReady kubelet is pos
Addresses: 192.168.0.4,k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed
Capacity:
cpu: 2
memory: 4048232Ki
pods: 110
Allocatable:
cpu: 1900m
memory: 3445832Ki
pods: 110
System Info:
Machine ID: 8e025a21a4254e11b028584d9d8b12c4
System UUID: 4217DB4D-954B-7D1A-014B-CB31E02B0679
Boot ID: eabbc849-5c82-453a-9e03-4d6dbe5018fc
Kernel Version: 4.4.41-1.ph1-esx
OS Image: Debian GNU/Linux 8 (jessie)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://1.12.1
Kubelet Version: v1.6.1
Kube-Proxy Version: v1.6.1
ExternalID: e7d14427-d6ea-4d62-958a-a70ba79054fa
Non-terminated Pods: (4 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory


kube-system dnsmasq-z77g3 70m (3%) 100m (5%) 70Mi (2%) 170Mi
kube-system flannel-k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed 150m (7%) 300m (15%) 64M (1%) 500M (
kube-system kube-proxy-k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed 150m (7%) 500m (26%) 64M (1%) 2G (56
kube-system kubedns-autoscaler-1428750645-btwl6 20m (1%) 0 (0%) 10Mi (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits


390m (20%) 900m (47%) 206920Ki (6%) 2678257920 (75%)
Events:

Full output of the test case:
04:31:52 [It] validates resource limits of pods that are allowed to run [Conformance]
04:31:52 /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:216
04:31:52 Jun 13 11:37:49.256: INFO: Pod dnsmasq-1508732111-sk20q requesting resource cpu=40m on Node k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:31:52 Jun 13 11:37:49.256: INFO: Pod dnsmasq-autoscaler-3605072793-45s0t requesting resource cpu=20m on Node k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:31:52 Jun 13 11:37:49.256: INFO: Pod flannel-k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed requesting resource cpu=150m on Node k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:31:52 Jun 13 11:37:49.256: INFO: Pod flannel-k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed requesting resource cpu=150m on Node k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:31:52 Jun 13 11:37:49.256: INFO: Pod flannel-k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed requesting resource cpu=150m on Node k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:31:52 Jun 13 11:37:49.256: INFO: Pod kube-proxy-k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed requesting resource cpu=150m on Node k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:31:52 Jun 13 11:37:49.256: INFO: Pod kube-proxy-k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed requesting resource cpu=150m on Node k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:31:52 Jun 13 11:37:49.256: INFO: Pod kube-proxy-k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed requesting resource cpu=150m on Node k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:31:52 Jun 13 11:37:49.256: INFO: Pod kubedns-1482036730-6d6k7 requesting resource cpu=150m on Node k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:31:52 Jun 13 11:37:49.256: INFO: Pod kubedns-autoscaler-1428750645-btwl6 requesting resource cpu=20m on Node k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:31:52 Jun 13 11:37:49.256: INFO: Using pod capacity: 500m
04:31:52 Jun 13 11:37:49.256: INFO: Node: k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed has cpu capacity: 1660m
04:31:52 Jun 13 11:37:49.256: INFO: Node: k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed has cpu capacity: 1510m
04:31:52 Jun 13 11:37:49.256: INFO: Node: k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed has cpu capacity: 1700m
04:31:52 STEP: Starting additional 9 Pods to fully saturate the cluster CPU and trying to start another one
04:31:53 Jun 13 11:37:50.582: INFO: Waiting for running...
04:41:53 Jun 13 11:47:50.611: INFO: Unexpected error occurred: Error waiting for 9 pods to be running - probably a timeout: Timeout while waiting for pods with labels "startPodsID=ba603655-502c-11e7-b75f-0242ac110003" to be running
04:41:53 [AfterEach] [k8s.io] SchedulerPredicates [Serial]
04:41:53 /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
04:41:53 STEP: Collecting events from namespace "e2e-tests-sched-pred-jhgng".
04:41:54 STEP: Found 35 events.
04:41:54 Jun 13 11:47:50.792: INFO: At 2017-06-13 11:37:59.651185657 +0000 UTC - event for overcommit-0: {default-scheduler } Scheduled: Successfully assigned overcommit-0 to k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:54 Jun 13 11:47:50.793: INFO: At 2017-06-13 11:37:59.775804379 +0000 UTC - event for overcommit-1: {default-scheduler } Scheduled: Successfully assigned overcommit-1 to k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:54 Jun 13 11:47:50.793: INFO: At 2017-06-13 11:37:59.893304345 +0000 UTC - event for overcommit-2: {default-scheduler } Scheduled: Successfully assigned overcommit-2 to k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:54 Jun 13 11:47:50.793: INFO: At 2017-06-13 11:38:00 +0000 UTC - event for overcommit-8: {default-scheduler } FailedScheduling: No nodes are available that match all of the following predicates:: Insufficient cpu (3).
04:41:54 Jun 13 11:47:50.793: INFO: At 2017-06-13 11:38:00.015292995 +0000 UTC - event for overcommit-0: {kubelet k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed} Pulled: Container image "gcr.io/google_containers/pause-amd64:3.0" already present on machine
04:41:54 Jun 13 11:47:50.793: INFO: At 2017-06-13 11:38:00.019736997 +0000 UTC - event for overcommit-3: {default-scheduler } Scheduled: Successfully assigned overcommit-3 to k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:54 Jun 13 11:47:50.793: INFO: At 2017-06-13 11:38:00.050906811 +0000 UTC - event for overcommit-0: {kubelet k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed} Created: Created container with docker id bc88097573f4; Security:[seccomp=unconfined]
04:41:54 Jun 13 11:47:50.794: INFO: At 2017-06-13 11:38:00.142455157 +0000 UTC - event for overcommit-0: {kubelet k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed} Started: Started container with docker id bc88097573f4
04:41:54 Jun 13 11:47:50.794: INFO: At 2017-06-13 11:38:00.157982586 +0000 UTC - event for overcommit-4: {default-scheduler } Scheduled: Successfully assigned overcommit-4 to k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:54 Jun 13 11:47:50.794: INFO: At 2017-06-13 11:38:00.166539231 +0000 UTC - event for overcommit-1: {kubelet k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed} Pulled: Container image "gcr.io/google_containers/pause-amd64:3.0" already present on machine
04:41:54 Jun 13 11:47:50.794: INFO: At 2017-06-13 11:38:00.204710213 +0000 UTC - event for overcommit-1: {kubelet k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed} Created: Created container with docker id e8e873bd8c84; Security:[seccomp=unconfined]
04:41:54 Jun 13 11:47:50.794: INFO: At 2017-06-13 11:38:00.294017361 +0000 UTC - event for overcommit-5: {default-scheduler } Scheduled: Successfully assigned overcommit-5 to k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:54 Jun 13 11:47:50.794: INFO: At 2017-06-13 11:38:00.338767642 +0000 UTC - event for overcommit-1: {kubelet k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed} Started: Started container with docker id e8e873bd8c84
04:41:54 Jun 13 11:47:50.794: INFO: At 2017-06-13 11:38:00.419582613 +0000 UTC - event for overcommit-6: {default-scheduler } Scheduled: Successfully assigned overcommit-6 to k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:54 Jun 13 11:47:50.795: INFO: At 2017-06-13 11:38:00.463381364 +0000 UTC - event for overcommit-3: {kubelet k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed} Pulled: Container image "gcr.io/google_containers/pause-amd64:3.0" already present on machine
04:41:54 Jun 13 11:47:50.795: INFO: At 2017-06-13 11:38:00.555049661 +0000 UTC - event for overcommit-3: {kubelet k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed} Created: Created container with docker id c8a4c8b52203; Security:[seccomp=unconfined]
04:41:54 Jun 13 11:47:50.795: INFO: At 2017-06-13 11:38:00.555079061 +0000 UTC - event for overcommit-7: {default-scheduler } Scheduled: Successfully assigned overcommit-7 to k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:54 Jun 13 11:47:50.795: INFO: At 2017-06-13 11:38:00.598968664 +0000 UTC - event for overcommit-4: {kubelet k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed} Pulled: Container image "gcr.io/google_containers/pause-amd64:3.0" already present on machine
04:41:54 Jun 13 11:47:50.795: INFO: At 2017-06-13 11:38:00.653063316 +0000 UTC - event for overcommit-4: {kubelet k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed} Created: Created container with docker id 4fce6687e8a4; Security:[seccomp=unconfined]
04:41:54 Jun 13 11:47:50.795: INFO: At 2017-06-13 11:38:00.674731813 +0000 UTC - event for overcommit-3: {kubelet k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed} Started: Started container with docker id c8a4c8b52203
04:41:54 Jun 13 11:47:50.795: INFO: At 2017-06-13 11:38:00.785977451 +0000 UTC - event for overcommit-4: {kubelet k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed} Started: Started container with docker id 4fce6687e8a4
04:41:54 Jun 13 11:47:50.795: INFO: At 2017-06-13 11:38:00.882331297 +0000 UTC - event for overcommit-6: {kubelet k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed} Pulled: Container image "gcr.io/google_containers/pause-amd64:3.0" already present on machine
04:41:54 Jun 13 11:47:50.796: INFO: At 2017-06-13 11:38:00.882933189 +0000 UTC - event for overcommit-2: {kubelet k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed} Pulling: pulling image "gcr.io/google_containers/pause-amd64:3.0"
04:41:54 Jun 13 11:47:50.796: INFO: At 2017-06-13 11:38:00.92095361 +0000 UTC - event for overcommit-6: {kubelet k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed} Created: Created container with docker id e7f67991a9f7; Security:[seccomp=unconfined]
04:41:54 Jun 13 11:47:50.796: INFO: At 2017-06-13 11:38:00.971598439 +0000 UTC - event for overcommit-7: {kubelet k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed} Pulled: Container image "gcr.io/google_containers/pause-amd64:3.0" already present on machine
04:41:54 Jun 13 11:47:50.796: INFO: At 2017-06-13 11:38:00.976781249 +0000 UTC - event for overcommit-5: {kubelet k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed} Pulling: pulling image "gcr.io/google_containers/pause-amd64:3.0"
04:41:54 Jun 13 11:47:50.796: INFO: At 2017-06-13 11:38:00.989614179 +0000 UTC - event for overcommit-7: {kubelet k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed} Created: Created container with docker id a4bd102d342b; Security:[seccomp=unconfined]
04:41:54 Jun 13 11:47:50.796: INFO: At 2017-06-13 11:38:01.011193603 +0000 UTC - event for overcommit-6: {kubelet k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed} Started: Started container with docker id e7f67991a9f7
04:41:54 Jun 13 11:47:50.796: INFO: At 2017-06-13 11:38:01.096810602 +0000 UTC - event for overcommit-7: {kubelet k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed} Started: Started container with docker id a4bd102d342b
04:41:54 Jun 13 11:47:50.796: INFO: At 2017-06-13 11:38:06.662955962 +0000 UTC - event for overcommit-2: {kubelet k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed} Pulled: Successfully pulled image "gcr.io/google_containers/pause-amd64:3.0"
04:41:54 Jun 13 11:47:50.797: INFO: At 2017-06-13 11:38:06.689096418 +0000 UTC - event for overcommit-2: {kubelet k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed} Created: Created container with docker id 6720d8dd2600; Security:[seccomp=unconfined]
04:41:54 Jun 13 11:47:50.797: INFO: At 2017-06-13 11:38:06.8270745 +0000 UTC - event for overcommit-2: {kubelet k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed} Started: Started container with docker id 6720d8dd2600
04:41:54 Jun 13 11:47:50.797: INFO: At 2017-06-13 11:38:09.572915768 +0000 UTC - event for overcommit-5: {kubelet k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed} Pulled: Successfully pulled image "gcr.io/google_containers/pause-amd64:3.0"
04:41:54 Jun 13 11:47:50.797: INFO: At 2017-06-13 11:38:09.597306711 +0000 UTC - event for overcommit-5: {kubelet k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed} Created: Created container with docker id f2950a4e9469; Security:[seccomp=unconfined]
04:41:54 Jun 13 11:47:50.797: INFO: At 2017-06-13 11:38:09.691636257 +0000 UTC - event for overcommit-5: {kubelet k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed} Started: Started container with docker id f2950a4e9469
04:41:54 Jun 13 11:47:51.055: INFO: POD NODE PHASE GRACE CONDITIONS
04:41:54 Jun 13 11:47:51.055: INFO: overcommit-0 k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 11:37:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 11:38:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 11:37:59 +0000 UTC }]
04:41:54 Jun 13 11:47:51.055: INFO: overcommit-1 k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 11:37:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 11:38:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 11:37:59 +0000 UTC }]
04:41:54 Jun 13 11:47:51.055: INFO: overcommit-2 k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 11:37:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 11:38:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 11:37:59 +0000 UTC }]
04:41:54 Jun 13 11:47:51.056: INFO: overcommit-3 k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 11:37:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 11:38:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 11:38:00 +0000 UTC }]
04:41:54 Jun 13 11:47:51.056: INFO: overcommit-4 k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 11:37:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 11:38:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 11:38:00 +0000 UTC }]
04:41:54 Jun 13 11:47:51.056: INFO: overcommit-5 k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 11:38:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 11:38:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 11:38:00 +0000 UTC }]
04:41:54 Jun 13 11:47:51.056: INFO: overcommit-6 k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 11:38:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 11:38:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 11:38:00 +0000 UTC }]
04:41:54 Jun 13 11:47:51.056: INFO: overcommit-7 k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 11:38:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 11:38:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 11:38:00 +0000 UTC }]
04:41:54 Jun 13 11:47:51.056: INFO: overcommit-8 Pending [{PodScheduled False 0001-01-01 00:00:00 +0000 UTC 2017-06-13 11:38:00.678382931 +0000 UTC Unschedulable No nodes are available that match all of the following predicates:: Insufficient cpu (3).}]
04:41:54 Jun 13 11:47:51.056: INFO: dnsmasq-1508732111-sk20q k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:46:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:46:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:46:36 +0000 UTC }]
04:41:54 Jun 13 11:47:51.056: INFO: dnsmasq-autoscaler-3605072793-45s0t k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:46:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:47:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:46:39 +0000 UTC }]
04:41:54 Jun 13 11:47:51.056: INFO: flannel-k8s-master-0-ebbe850c-0a28-4fd8-b416-03084eb816ed k8s-master-0-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:37 +0000 UTC }]
04:41:54 Jun 13 11:47:51.056: INFO: flannel-k8s-master-1-ebbe850c-0a28-4fd8-b416-03084eb816ed k8s-master-1-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:37 +0000 UTC }]
04:41:54 Jun 13 11:47:51.056: INFO: flannel-k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:38 +0000 UTC }]
04:41:54 Jun 13 11:47:51.056: INFO: flannel-k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:37 +0000 UTC }]
04:41:54 Jun 13 11:47:51.056: INFO: flannel-k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:55:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:56:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:56:10 +0000 UTC }]
04:41:54 Jun 13 11:47:51.056: INFO: kube-apiserver-k8s-master-0-ebbe850c-0a28-4fd8-b416-03084eb816ed k8s-master-0-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:57 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:46:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:56 +0000 UTC }]
04:41:54 Jun 13 11:47:51.056: INFO: kube-apiserver-k8s-master-1-ebbe850c-0a28-4fd8-b416-03084eb816ed k8s-master-1-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:46:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:56 +0000 UTC }]
04:41:54 Jun 13 11:47:51.056: INFO: kube-controller-manager-k8s-master-0-ebbe850c-0a28-4fd8-b416-03084eb816ed k8s-master-0-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:46:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:46:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:46:11 +0000 UTC }]
04:41:54 Jun 13 11:47:51.056: INFO: kube-controller-manager-k8s-master-1-ebbe850c-0a28-4fd8-b416-03084eb816ed k8s-master-1-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:46:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:46:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:46:11 +0000 UTC }]
04:41:54 Jun 13 11:47:51.056: INFO: kube-proxy-k8s-master-0-ebbe850c-0a28-4fd8-b416-03084eb816ed k8s-master-0-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:40 +0000 UTC }]
04:41:54 Jun 13 11:47:51.057: INFO: kube-proxy-k8s-master-1-ebbe850c-0a28-4fd8-b416-03084eb816ed k8s-master-1-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:41 +0000 UTC }]
04:41:54 Jun 13 11:47:51.057: INFO: kube-proxy-k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:40 +0000 UTC }]
04:41:54 Jun 13 11:47:51.057: INFO: kube-proxy-k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:45:39 +0000 UTC }]
04:41:54 Jun 13 11:47:51.057: INFO: kube-proxy-k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:54:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:56:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:56:10 +0000 UTC }]
04:41:54 Jun 13 11:47:51.057: INFO: kube-scheduler-k8s-master-0-ebbe850c-0a28-4fd8-b416-03084eb816ed k8s-master-0-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:46:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:46:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:46:12 +0000 UTC }]
04:41:54 Jun 13 11:47:51.057: INFO: kube-scheduler-k8s-master-1-ebbe850c-0a28-4fd8-b416-03084eb816ed k8s-master-1-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:46:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:46:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:46:12 +0000 UTC }]
04:41:54 Jun 13 11:47:51.057: INFO: kubedns-1482036730-6d6k7 k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:47:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:47:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:47:08 +0000 UTC }]
04:41:54 Jun 13 11:47:51.057: INFO: kubedns-autoscaler-1428750645-btwl6 k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:47:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:47:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-13 10:47:10 +0000 UTC }]
04:41:54 Jun 13 11:47:51.057: INFO:
04:41:54 Jun 13 11:47:51.198: INFO:
04:41:54 Logging node info for node k8s-master-0-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:54 Jun 13 11:47:51.320: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-master-0-ebbe850c-0a28-4fd8-b416-03084eb816ed,GenerateName:,Namespace:,SelfLink:/api/v1/nodesk8s-master-0-ebbe850c-0a28-4fd8-b416-03084eb816ed,UID:7e91cd56-5025-11e7-9536-fa163e83263e,ResourceVersion:8623,Generation:0,CreationTimestamp:2017-06-13 10:46:02.558599399 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: SDDC,failure-domain.beta.kubernetes.io/zone: nova,kubernetes.io/hostname: k8s-master-0-ebbe850c-0a28-4fd8-b416-03084eb816ed,node-role.kubernetes.io/master: true,},Annotations:map[string]string{node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,},Spec:NodeSpec{PodCIDR:,ExternalID:1e16518a-7d19-4ef8-b2ab-32bd905ba736,ProviderID:openstack:///1e16518a-7d19-4ef8-b2ab-32bd905ba736,Unschedulable:true,Taints:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},memory: {{4145389568 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1900 -3} {} 1900m DecimalSI},memory: {{3528531968 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2017-06-13 11:47:58 +0000 UTC 2017-06-13 10:46:02 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2017-06-13 11:47:58 +0000 UTC 2017-06-13 10:46:02 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2017-06-13 11:47:58 +0000 UTC 2017-06-13 10:46:02 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {Ready True 2017-06-13 11:47:58 +0000 UTC 2017-06-13 10:46:02 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 192.168.0.6} {ExternalIP 10.111.89.136} {Hostname k8s-master-0-ebbe850c-0a28-4fd8-b416-03084eb816ed}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8e025a21a4254e11b028584d9d8b12c4,SystemUUID:4217B741-B992-F08D-976D-EA97EF8D988D,BootID:9cdf95ac-6927-407a-9b79-242495ae45f1,KernelVersion:4.4.41-1.ph1-esx,OSImage:Debian GNU/Linux 8 (jessie),ContainerRuntimeVersion:docker://1.12.1,KubeletVersion:v1.6.1,KubeProxyVersion:v1.6.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[10.111.89.130:11000/coreos/hyperkube@sha256:44e441510dfed56ff6f3b4fb7e229b2e95cc3e526816710fcf68a1c49016b9ec 10.111.89.130:11000/coreos/hyperkube:v1.6.1_coreos.0] 855664488} {[10.111.89.130:11000/nginx@sha256:0c17c3a8e78a256677b3d9a3cc68e8e8e4244dafc7bd9ca0dfa96759a7047f69 10.111.89.130:11000/nginx:1.11.4] 181346890} {[10.111.89.130:11000/google_containers/kubedns-amd64@sha256:04a394cffc5d338b074e663825110b5695fc3efadabebf93a8dd48c443843e37 10.111.89.130:11000/google_containers/kubedns-amd64:1.7] 55064828} {[10.111.89.130:11000/coreos/flannel@sha256:75acf97abe273247770342054dbeba7803c866cc25dd7075c224e483cfcc65d7 10.111.89.130:11000/coreos/flannel:v0.6.2] 27885643} {[10.111.89.130:11000/google_containers/exechealthz-amd64@sha256:b9b6093890bc39f15f9c77dd1e44250efb21c1b0ace334ac6e61c708fa5fa810 10.111.89.130:11000/google_containers/exechealthz-amd64:1.1] 8332223} {[10.111.89.130:11000/google_containers/kube-dnsmasq-amd64@sha256:5c6b7b984c71a44c5045b41041ff94ccb8078feccffd10ea93b0626b60d784d3 10.111.89.130:11000/google_containers/kube-dnsmasq-amd64:1.3] 
5125973} {[10.111.89.130:11000/busybox@sha256:68effe31a4ae8312e47f54bec52d1fc925908009ce7e6f734e1b54a4169081c5 10.111.89.130:11000/busybox:latest] 1109996} {[10.111.89.130:11000/google_containers/pause-amd64@sha256:7b23a11e164b0cfa08188d9f976da9f890464cdeb81c1f7c8ef008f03df3681e 10.111.89.130:11000/google_containers/pause-amd64:3.0] 746888}],VolumesInUse:[],VolumesAttached:[],},}
04:41:54 Jun 13 11:47:51.321: INFO:
04:41:54 Logging kubelet events for node k8s-master-0-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:54 Jun 13 11:47:51.442: INFO:
04:41:54 Logging pods the kubelet thinks is on node k8s-master-0-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:54 Jun 13 11:47:51.738: INFO: flannel-k8s-master-0-ebbe850c-0a28-4fd8-b416-03084eb816ed started at (0+0 container statuses recorded)
04:41:54 Jun 13 11:47:51.739: INFO: kube-proxy-k8s-master-0-ebbe850c-0a28-4fd8-b416-03084eb816ed started at (0+0 container statuses recorded)
04:41:54 Jun 13 11:47:51.739: INFO: kube-apiserver-k8s-master-0-ebbe850c-0a28-4fd8-b416-03084eb816ed started at (0+0 container statuses recorded)
04:41:54 Jun 13 11:47:51.739: INFO: kube-controller-manager-k8s-master-0-ebbe850c-0a28-4fd8-b416-03084eb816ed started at (0+0 container statuses recorded)
04:41:54 Jun 13 11:47:51.739: INFO: kube-scheduler-k8s-master-0-ebbe850c-0a28-4fd8-b416-03084eb816ed started at (0+0 container statuses recorded)
04:41:55 W0613 11:47:51.870248 275 metrics_grabber.go:74] Master node is not registered. Grabbing metrics from Scheduler and ControllerManager is disabled.
04:41:55 Jun 13 11:47:52.279: INFO:
04:41:55 Latency metrics for node k8s-master-0-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:55 Jun 13 11:47:52.280: INFO:
04:41:55 Logging node info for node k8s-master-1-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:55 Jun 13 11:47:52.390: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-master-1-ebbe850c-0a28-4fd8-b416-03084eb816ed,GenerateName:,Namespace:,SelfLink:/api/v1/nodesk8s-master-1-ebbe850c-0a28-4fd8-b416-03084eb816ed,UID:7e7ad1a2-5025-11e7-b8fd-fa163e30e2f5,ResourceVersion:8620,Generation:0,CreationTimestamp:2017-06-13 10:46:02.407975439 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: SDDC,failure-domain.beta.kubernetes.io/zone: nova,kubernetes.io/hostname: k8s-master-1-ebbe850c-0a28-4fd8-b416-03084eb816ed,node-role.kubernetes.io/master: true,},Annotations:map[string]string{node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,},Spec:NodeSpec{PodCIDR:,ExternalID:da819334-c340-4be7-893c-7b5fe4261b61,ProviderID:openstack:///da819334-c340-4be7-893c-7b5fe4261b61,Unschedulable:true,Taints:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},memory: {{4145389568 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1900 -3} {} 1900m DecimalSI},memory: {{3528531968 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2017-06-13 11:47:56 +0000 UTC 2017-06-13 10:46:02 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2017-06-13 11:47:56 +0000 UTC 2017-06-13 10:46:02 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2017-06-13 11:47:56 +0000 UTC 2017-06-13 10:46:02 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {Ready True 2017-06-13 11:47:56 +0000 UTC 2017-06-13 10:46:02 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 192.168.0.5} {Hostname k8s-master-1-ebbe850c-0a28-4fd8-b416-03084eb816ed}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8e025a21a4254e11b028584d9d8b12c4,SystemUUID:42179DBD-7BE0-07B4-49EF-DFFE428E61C9,BootID:8bcb6d48-afe5-4d6b-8de4-24c219b4773b,KernelVersion:4.4.41-1.ph1-esx,OSImage:Debian GNU/Linux 8 (jessie),ContainerRuntimeVersion:docker://1.12.1,KubeletVersion:v1.6.1,KubeProxyVersion:v1.6.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[10.111.89.130:11000/coreos/hyperkube@sha256:44e441510dfed56ff6f3b4fb7e229b2e95cc3e526816710fcf68a1c49016b9ec 10.111.89.130:11000/coreos/hyperkube:v1.6.1_coreos.0] 855664488} {[10.111.89.130:11000/nginx@sha256:0c17c3a8e78a256677b3d9a3cc68e8e8e4244dafc7bd9ca0dfa96759a7047f69 10.111.89.130:11000/nginx:1.11.4] 181346890} {[10.111.89.130:11000/google_containers/kubedns-amd64@sha256:04a394cffc5d338b074e663825110b5695fc3efadabebf93a8dd48c443843e37 10.111.89.130:11000/google_containers/kubedns-amd64:1.7] 55064828} {[10.111.89.130:11000/coreos/flannel@sha256:75acf97abe273247770342054dbeba7803c866cc25dd7075c224e483cfcc65d7 10.111.89.130:11000/coreos/flannel:v0.6.2] 27885643} {[10.111.89.130:11000/google_containers/exechealthz-amd64@sha256:b9b6093890bc39f15f9c77dd1e44250efb21c1b0ace334ac6e61c708fa5fa810 10.111.89.130:11000/google_containers/exechealthz-amd64:1.1] 8332223} {[10.111.89.130:11000/google_containers/kube-dnsmasq-amd64@sha256:5c6b7b984c71a44c5045b41041ff94ccb8078feccffd10ea93b0626b60d784d3 10.111.89.130:11000/google_containers/kube-dnsmasq-amd64:1.3] 5125973} 
{[10.111.89.130:11000/busybox@sha256:68effe31a4ae8312e47f54bec52d1fc925908009ce7e6f734e1b54a4169081c5 10.111.89.130:11000/busybox:latest] 1109996} {[10.111.89.130:11000/google_containers/pause-amd64@sha256:7b23a11e164b0cfa08188d9f976da9f890464cdeb81c1f7c8ef008f03df3681e 10.111.89.130:11000/google_containers/pause-amd64:3.0] 746888}],VolumesInUse:[],VolumesAttached:[],},}
04:41:55 Jun 13 11:47:52.391: INFO:
04:41:55 Logging kubelet events for node k8s-master-1-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:55 Jun 13 11:47:52.504: INFO:
04:41:55 Logging pods the kubelet thinks is on node k8s-master-1-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:56 Jun 13 11:47:52.766: INFO: kube-scheduler-k8s-master-1-ebbe850c-0a28-4fd8-b416-03084eb816ed started at (0+0 container statuses recorded)
04:41:56 Jun 13 11:47:52.766: INFO: flannel-k8s-master-1-ebbe850c-0a28-4fd8-b416-03084eb816ed started at (0+0 container statuses recorded)
04:41:56 Jun 13 11:47:52.766: INFO: kube-proxy-k8s-master-1-ebbe850c-0a28-4fd8-b416-03084eb816ed started at (0+0 container statuses recorded)
04:41:56 Jun 13 11:47:52.766: INFO: kube-apiserver-k8s-master-1-ebbe850c-0a28-4fd8-b416-03084eb816ed started at (0+0 container statuses recorded)
04:41:56 Jun 13 11:47:52.766: INFO: kube-controller-manager-k8s-master-1-ebbe850c-0a28-4fd8-b416-03084eb816ed started at (0+0 container statuses recorded)
04:41:56 W0613 11:47:52.881229 275 metrics_grabber.go:74] Master node is not registered. Grabbing metrics from Scheduler and ControllerManager is disabled.
04:41:56 Jun 13 11:47:53.251: INFO:
04:41:56 Latency metrics for node k8s-master-1-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:56 Jun 13 11:47:53.251: INFO:
04:41:56 Logging node info for node k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:56 Jun 13 11:47:53.367: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed,GenerateName:,Namespace:,SelfLink:/api/v1/nodesk8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed,UID:81576249-5025-11e7-b8fd-fa163e30e2f5,ResourceVersion:8615,Generation:0,CreationTimestamp:2017-06-13 10:46:07.208911577 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: SDDC,failure-domain.beta.kubernetes.io/zone: nova,kubernetes.io/hostname: k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed,node-role.kubernetes.io/node: true,},Annotations:map[string]string{node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,},Spec:NodeSpec{PodCIDR:,ExternalID:e7d14427-d6ea-4d62-958a-a70ba79054fa,ProviderID:openstack:///e7d14427-d6ea-4d62-958a-a70ba79054fa,Unschedulable:false,Taints:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},memory: {{4145389568 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1900 -3} {} 1900m DecimalSI},memory: {{3528531968 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2017-06-13 11:47:53 +0000 UTC 2017-06-13 10:46:07 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2017-06-13 11:47:53 +0000 UTC 2017-06-13 10:46:07 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2017-06-13 11:47:53 +0000 UTC 2017-06-13 10:46:07 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {Ready True 2017-06-13 11:47:53 +0000 UTC 2017-06-13 10:46:07 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 192.168.0.4} {Hostname k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8e025a21a4254e11b028584d9d8b12c4,SystemUUID:4217DB4D-954B-7D1A-014B-CB31E02B0679,BootID:eabbc849-5c82-453a-9e03-4d6dbe5018fc,KernelVersion:4.4.41-1.ph1-esx,OSImage:Debian GNU/Linux 8 (jessie),ContainerRuntimeVersion:docker://1.12.1,KubeletVersion:v1.6.1,KubeProxyVersion:v1.6.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[10.111.89.130:11000/coreos/hyperkube@sha256:44e441510dfed56ff6f3b4fb7e229b2e95cc3e526816710fcf68a1c49016b9ec 10.111.89.130:11000/coreos/hyperkube:v1.6.1_coreos.0] 855664488} {[gcr.io/google-samples/gb-frontend@sha256:d44e7d7491a537f822e7fe8615437e4a8a08f3a7a1d7d4cb9066b92f7556ba6d gcr.io/google-samples/gb-frontend:v4] 512107183} {[10.111.89.130:11000/nginx@sha256:0c17c3a8e78a256677b3d9a3cc68e8e8e4244dafc7bd9ca0dfa96759a7047f69 10.111.89.130:11000/nginx:1.11.4] 181346890} {[gcr.io/google_samples/gb-redisslave@sha256:90f62695e641e1a27d1a5e0bbb8b622205a48e18311b51b0da419ffad24b9016 gcr.io/google_samples/gb-redisslave:v1] 109462535} {[10.111.89.130:11000/google_containers/kubedns-amd64@sha256:04a394cffc5d338b074e663825110b5695fc3efadabebf93a8dd48c443843e37 10.111.89.130:11000/google_containers/kubedns-amd64:1.7] 55064828} {[gcr.io/google_containers/cluster-proportional-autoscaler-amd64@sha256:e6e071f2e598a13caece1a39c7f166fc574a3837b791eba71631960e026c70aa gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.1.1] 48156375} 
{[10.111.89.130:11000/coreos/flannel@sha256:75acf97abe273247770342054dbeba7803c866cc25dd7075c224e483cfcc65d7 10.111.89.130:11000/coreos/flannel:v0.6.2] 27885643} {[10.111.89.130:11000/google_containers/exechealthz-amd64@sha256:b9b6093890bc39f15f9c77dd1e44250efb21c1b0ace334ac6e61c708fa5fa810 10.111.89.130:11000/google_containers/exechealthz-amd64:1.1] 8332223} {[gcr.io/google_containers/netexec@sha256:56c53846f44ea214e4aa5df37c9c50331f0b09e64a32cc7cf17c7e1808d38eef gcr.io/google_containers/netexec:1.7] 8016035} {[gcr.io/google_containers/serve_hostname@sha256:a49737ee84a3b94f0b977f32e60c5daf11f0b5636f1f7503a2981524f351c57a gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[10.111.89.130:11000/google_containers/kube-dnsmasq-amd64@sha256:5c6b7b984c71a44c5045b41041ff94ccb8078feccffd10ea93b0626b60d784d3 10.111.89.130:11000/google_containers/kube-dnsmasq-amd64:1.3] 5125973} {[gcr.io/google_containers/update-demo@sha256:915a51866b3b77de77c833ac69c699b88d060312d41fb878fab7428b69ef382c gcr.io/google_containers/update-demo:kitten] 4549069} {[gcr.io/google_containers/mounttest@sha256:c4dcedb26013ab4231a2b2aaa4eebd5c2a44d5c597fa0613c9ff8bde4fb9fe02 gcr.io/google_containers/mounttest:0.7] 2052704} {[gcr.io/google_containers/portforwardtester@sha256:306879729d3eff635a11b89f3e62e440c9f2fe4dabdfb9ef02bc67f2275f67ab gcr.io/google_containers/portforwardtester:1.2] 1892642} {[10.111.89.130:11000/busybox@sha256:68effe31a4ae8312e47f54bec52d1fc925908009ce7e6f734e1b54a4169081c5 10.111.89.130:11000/busybox:latest] 1109996} {[10.111.89.130:11000/google_containers/pause-amd64@sha256:7b23a11e164b0cfa08188d9f976da9f890464cdeb81c1f7c8ef008f03df3681e gcr.io/google_containers/pause-amd64@sha256:163ac025575b775d1c0f9bf0bdd0f086883171eb475b5068e7defa4ca9e76516 10.111.89.130:11000/google_containers/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0] 746888}],VolumesInUse:[],VolumesAttached:[],},}
04:41:56 Jun 13 11:47:53.368: INFO:
04:41:56 Logging kubelet events for node k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:56 Jun 13 11:47:53.489: INFO:
04:41:56 Logging pods the kubelet thinks is on node k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:57 Jun 13 11:47:53.776: INFO: flannel-k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed started at (0+0 container statuses recorded)
04:41:57 Jun 13 11:47:53.776: INFO: kube-proxy-k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed started at (0+0 container statuses recorded)
04:41:57 Jun 13 11:47:53.777: INFO: dnsmasq-autoscaler-3605072793-45s0t started at 2017-06-13 10:46:39 +0000 UTC (0+1 container statuses recorded)
04:41:57 Jun 13 11:47:53.777: INFO: Container autoscaler ready: true, restart count 0
04:41:57 Jun 13 11:47:53.777: INFO: kubedns-autoscaler-1428750645-btwl6 started at 2017-06-13 10:47:10 +0000 UTC (0+1 container statuses recorded)
04:41:57 Jun 13 11:47:53.777: INFO: Container autoscaler ready: true, restart count 0
04:41:57 Jun 13 11:47:53.777: INFO: overcommit-1 started at 2017-06-13 11:37:59 +0000 UTC (0+1 container statuses recorded)
04:41:57 Jun 13 11:47:53.777: INFO: Container overcommit-1 ready: true, restart count 0
04:41:57 Jun 13 11:47:53.778: INFO: overcommit-3 started at 2017-06-13 11:37:59 +0000 UTC (0+1 container statuses recorded)
04:41:57 Jun 13 11:47:53.778: INFO: Container overcommit-3 ready: true, restart count 0
04:41:57 Jun 13 11:47:53.778: INFO: overcommit-7 started at 2017-06-13 11:38:00 +0000 UTC (0+1 container statuses recorded)
04:41:57 Jun 13 11:47:53.778: INFO: Container overcommit-7 ready: true, restart count 0
04:41:57 W0613 11:47:53.900245 275 metrics_grabber.go:74] Master node is not registered. Grabbing metrics from Scheduler and ControllerManager is disabled.
04:41:57 Jun 13 11:47:54.336: INFO:
04:41:57 Latency metrics for node k8s-node-0-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:57 Jun 13 11:47:54.336: INFO:
04:41:57 Logging node info for node k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:57 Jun 13 11:47:54.454: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed,GenerateName:,Namespace:,SelfLink:/api/v1/nodesk8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed,UID:7e339da2-5025-11e7-9536-fa163e83263e,ResourceVersion:8626,Generation:0,CreationTimestamp:2017-06-13 10:46:01.941336963 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: SDDC,failure-domain.beta.kubernetes.io/zone: nova,kubernetes.io/hostname: k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed,node-role.kubernetes.io/node: true,},Annotations:map[string]string{node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,},Spec:NodeSpec{PodCIDR:,ExternalID:36389ef9-f999-4cee-8c49-84838de3f2d6,ProviderID:openstack:///36389ef9-f999-4cee-8c49-84838de3f2d6,Unschedulable:false,Taints:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},memory: {{4145389568 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1900 -3} {} 1900m DecimalSI},memory: {{3528531968 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2017-06-13 11:48:01 +0000 UTC 2017-06-13 10:46:01 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2017-06-13 11:48:01 +0000 UTC 2017-06-13 10:46:01 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2017-06-13 11:48:01 +0000 UTC 2017-06-13 10:46:01 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {Ready True 2017-06-13 11:48:01 +0000 UTC 2017-06-13 10:46:01 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 192.168.0.3} {Hostname k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8e025a21a4254e11b028584d9d8b12c4,SystemUUID:4217527B-BE59-4714-EEC8-8FCFC4392E6F,BootID:c1a665bd-458f-4e70-ab5d-67d01d65d803,KernelVersion:4.4.41-1.ph1-esx,OSImage:Debian GNU/Linux 8 (jessie),ContainerRuntimeVersion:docker://1.12.1,KubeletVersion:v1.6.1,KubeProxyVersion:v1.6.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[10.111.89.130:11000/coreos/hyperkube@sha256:44e441510dfed56ff6f3b4fb7e229b2e95cc3e526816710fcf68a1c49016b9ec 10.111.89.130:11000/coreos/hyperkube:v1.6.1_coreos.0] 855664488} {[gcr.io/google-samples/gb-frontend@sha256:d44e7d7491a537f822e7fe8615437e4a8a08f3a7a1d7d4cb9066b92f7556ba6d gcr.io/google-samples/gb-frontend:v4] 512107183} {[10.111.89.130:11000/nginx@sha256:0c17c3a8e78a256677b3d9a3cc68e8e8e4244dafc7bd9ca0dfa96759a7047f69 10.111.89.130:11000/nginx:1.11.4] 181346890} {[10.111.89.130:11000/google_containers/kubedns-amd64@sha256:04a394cffc5d338b074e663825110b5695fc3efadabebf93a8dd48c443843e37 10.111.89.130:11000/google_containers/kubedns-amd64:1.7] 55064828} {[10.111.89.130:11000/google_containers/kubedns-amd64@sha256:1b21c69cd89b9bb47879ef94f03be2b0db194c7c04af4faa781cdd47474b88ec 10.111.89.130:11000/google_containers/kubedns-amd64:1.9] 46998769} {[10.111.89.130:11000/coreos/flannel@sha256:75acf97abe273247770342054dbeba7803c866cc25dd7075c224e483cfcc65d7 10.111.89.130:11000/coreos/flannel:v0.6.2] 27885643} 
{[10.111.89.130:11000/google_containers/exechealthz-amd64@sha256:b9b6093890bc39f15f9c77dd1e44250efb21c1b0ace334ac6e61c708fa5fa810 10.111.89.130:11000/google_containers/exechealthz-amd64:1.1] 8332223} {[gcr.io/google_containers/netexec@sha256:56c53846f44ea214e4aa5df37c9c50331f0b09e64a32cc7cf17c7e1808d38eef gcr.io/google_containers/netexec:1.7] 8016035} {[10.111.89.130:11000/andyshinn/dnsmasq@sha256:be348473129bd3c7f7253ac4c93eff0e582b878cffb5553b91b83855886e0f99 10.111.89.130:11000/andyshinn/dnsmasq:2.72] 6268826} {[10.111.89.130:11000/google_containers/kube-dnsmasq-amd64@sha256:5c6b7b984c71a44c5045b41041ff94ccb8078feccffd10ea93b0626b60d784d3 10.111.89.130:11000/google_containers/kube-dnsmasq-amd64:1.3] 5125973} {[gcr.io/google_containers/update-demo@sha256:89ac104fa7c43880d2324f377b79be95b0b2b3fb32e4bd03b8d1e6d91a41f009 gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/mounttest@sha256:c4dcedb26013ab4231a2b2aaa4eebd5c2a44d5c597fa0613c9ff8bde4fb9fe02 gcr.io/google_containers/mounttest:0.7] 2052704} {[gcr.io/google_containers/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff gcr.io/google_containers/busybox:1.24] 1113554} {[10.111.89.130:11000/busybox@sha256:68effe31a4ae8312e47f54bec52d1fc925908009ce7e6f734e1b54a4169081c5 10.111.89.130:11000/busybox:latest] 1109996} {[10.111.89.130:11000/google_containers/pause-amd64@sha256:7b23a11e164b0cfa08188d9f976da9f890464cdeb81c1f7c8ef008f03df3681e gcr.io/google_containers/pause-amd64@sha256:163ac025575b775d1c0f9bf0bdd0f086883171eb475b5068e7defa4ca9e76516 10.111.89.130:11000/google_containers/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0] 746888}],VolumesInUse:[],VolumesAttached:[],},}
04:41:57 Jun 13 11:47:54.454: INFO:
04:41:57 Logging kubelet events for node k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:57 Jun 13 11:47:54.568: INFO:
04:41:57 Logging pods the kubelet thinks is on node k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:58 Jun 13 11:47:54.812: INFO: overcommit-5 started at 2017-06-13 11:38:00 +0000 UTC (0+1 container statuses recorded)
04:41:58 Jun 13 11:47:54.812: INFO: Container overcommit-5 ready: true, restart count 0
04:41:58 Jun 13 11:47:54.813: INFO: flannel-k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed started at (0+0 container statuses recorded)
04:41:58 Jun 13 11:47:54.813: INFO: kube-proxy-k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed started at (0+0 container statuses recorded)
04:41:58 Jun 13 11:47:54.813: INFO: dnsmasq-1508732111-sk20q started at 2017-06-13 10:46:36 +0000 UTC (0+1 container statuses recorded)
04:41:58 Jun 13 11:47:54.813: INFO: Container dnsmasq ready: true, restart count 0
04:41:58 Jun 13 11:47:54.813: INFO: kubedns-1482036730-6d6k7 started at 2017-06-13 10:47:08 +0000 UTC (0+3 container statuses recorded)
04:41:58 Jun 13 11:47:54.813: INFO: Container dnsmasq ready: true, restart count 0
04:41:58 Jun 13 11:47:54.813: INFO: Container healthz ready: true, restart count 0
04:41:58 Jun 13 11:47:54.813: INFO: Container kubedns ready: true, restart count 0
04:41:58 Jun 13 11:47:54.813: INFO: overcommit-2 started at 2017-06-13 11:37:59 +0000 UTC (0+1 container statuses recorded)
04:41:58 Jun 13 11:47:54.813: INFO: Container overcommit-2 ready: true, restart count 0
04:41:58 W0613 11:47:54.937187 275 metrics_grabber.go:74] Master node is not registered. Grabbing metrics from Scheduler and ControllerManager is disabled.
04:41:58 Jun 13 11:47:55.330: INFO:
04:41:58 Latency metrics for node k8s-node-1-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:58 Jun 13 11:47:55.331: INFO:
04:41:58 Logging node info for node k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:58 Jun 13 11:47:55.455: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed,GenerateName:,Namespace:,SelfLink:/api/v1/nodesk8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed,UID:a06c4659-5026-11e7-b8fd-fa163e30e2f5,ResourceVersion:8632,Generation:0,CreationTimestamp:2017-06-13 10:54:08.851921562 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: SDDC,failure-domain.beta.kubernetes.io/zone: nova,kubernetes.io/hostname: k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed,node-role.kubernetes.io/node: true,},Annotations:map[string]string{node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,},Spec:NodeSpec{PodCIDR:,ExternalID:c9b7505c-7a1f-4ed2-bc75-337f4ad40238,ProviderID:openstack:///c9b7505c-7a1f-4ed2-bc75-337f4ad40238,Unschedulable:false,Taints:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},memory: {{4145389568 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1900 -3} {} 1900m DecimalSI},memory: {{3528531968 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2017-06-13 11:48:05 +0000 UTC 2017-06-13 10:54:08 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2017-06-13 11:48:05 +0000 UTC 2017-06-13 10:54:08 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2017-06-13 11:48:05 +0000 UTC 2017-06-13 10:54:08 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {Ready True 2017-06-13 11:48:05 +0000 UTC 2017-06-13 10:54:08 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 192.168.0.9} {Hostname k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8e025a21a4254e11b028584d9d8b12c4,SystemUUID:42178CEB-F2F4-C246-9DC8-EB18C5318F31,BootID:a5483589-fbd2-432c-b2d4-a8a5f7998c04,KernelVersion:4.4.41-1.ph1-esx,OSImage:Debian GNU/Linux 8 (jessie),ContainerRuntimeVersion:docker://1.12.1,KubeletVersion:v1.6.1,KubeProxyVersion:v1.6.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[10.111.89.130:11000/coreos/hyperkube@sha256:44e441510dfed56ff6f3b4fb7e229b2e95cc3e526816710fcf68a1c49016b9ec 10.111.89.130:11000/coreos/hyperkube:v1.6.1_coreos.0] 855664488} {[gcr.io/google-samples/gb-frontend@sha256:d44e7d7491a537f822e7fe8615437e4a8a08f3a7a1d7d4cb9066b92f7556ba6d gcr.io/google-samples/gb-frontend:v4] 512107183} {[gcr.io/google_containers/redis@sha256:f066bcf26497fbc55b9bf0769cb13a35c0afa2aa42e737cc46b7fb04b23a2f25 gcr.io/google_containers/redis:e2e] 418929769} {[10.111.89.130:11000/nginx@sha256:0c17c3a8e78a256677b3d9a3cc68e8e8e4244dafc7bd9ca0dfa96759a7047f69 10.111.89.130:11000/nginx:1.11.4] 181346890} {[gcr.io/google_samples/gb-redisslave@sha256:90f62695e641e1a27d1a5e0bbb8b622205a48e18311b51b0da419ffad24b9016 gcr.io/google_samples/gb-redisslave:v1] 109462535} {[gcr.io/google_containers/nginx-slim@sha256:dd4efd4c13bec2c6f3fe855deeab9524efe434505568421d4f31820485b3a795 gcr.io/google_containers/nginx-slim:0.7] 86838142} {[10.111.89.130:11000/google_containers/kubedns-amd64@sha256:04a394cffc5d338b074e663825110b5695fc3efadabebf93a8dd48c443843e37 
10.111.89.130:11000/google_containers/kubedns-amd64:1.7] 55064828} {[10.111.89.130:11000/coreos/flannel@sha256:75acf97abe273247770342054dbeba7803c866cc25dd7075c224e483cfcc65d7 10.111.89.130:11000/coreos/flannel:v0.6.2] 27885643} {[gcr.io/google_containers/hostexec@sha256:cab8d4e2526f8f767c64febe4ce9e0f0e58cd35fdff81b3aadba4dd041ba9f00 gcr.io/google_containers/hostexec:1.2] 13185747} {[10.111.89.130:11000/google_containers/exechealthz-amd64@sha256:b9b6093890bc39f15f9c77dd1e44250efb21c1b0ace334ac6e61c708fa5fa810 10.111.89.130:11000/google_containers/exechealthz-amd64:1.1] 8332223} {[gcr.io/google_containers/netexec@sha256:56c53846f44ea214e4aa5df37c9c50331f0b09e64a32cc7cf17c7e1808d38eef gcr.io/google_containers/netexec:1.7] 8016035} {[gcr.io/google_containers/serve_hostname@sha256:a49737ee84a3b94f0b977f32e60c5daf11f0b5636f1f7503a2981524f351c57a gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[10.111.89.130:11000/google_containers/kube-dnsmasq-amd64@sha256:5c6b7b984c71a44c5045b41041ff94ccb8078feccffd10ea93b0626b60d784d3 10.111.89.130:11000/google_containers/kube-dnsmasq-amd64:1.3] 5125973} {[gcr.io/google_containers/update-demo@sha256:89ac104fa7c43880d2324f377b79be95b0b2b3fb32e4bd03b8d1e6d91a41f009 gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/update-demo@sha256:915a51866b3b77de77c833ac69c699b88d060312d41fb878fab7428b69ef382c gcr.io/google_containers/update-demo:kitten] 4549069} {[gcr.io/google_containers/test-webserver@sha256:f804e8837490d1dfdb5002e073f715fd0a08115de74e5a4847ca952315739372 gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/liveness@sha256:90994881062c7de7bb1761f2f3d020fe9aa3d332a90e00ebd3ca9dcc1ed74f1c gcr.io/google_containers/liveness:e2e] 4387474} {[gcr.io/google_containers/eptest@sha256:bb088b26ed78613cce171420168db9a6c62a8dbea17d7be13077e7010bae162f gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/mounttest@sha256:c4dcedb26013ab4231a2b2aaa4eebd5c2a44d5c597fa0613c9ff8bde4fb9fe02 gcr.io/google_containers/mounttest:0.7] 2052704} {[gcr.io/google_containers/mounttest@sha256:bec3122ddcf8bd999e44e46e096659f31241d09f5236bc3dc212ea584ca06856 gcr.io/google_containers/mounttest:0.8] 1450761} {[gcr.io/google_containers/mounttest-user@sha256:5487c126b03abf4119a8f7950cd5f591f72dbe4ab15623f3387d3917e1268b4e gcr.io/google_containers/mounttest-user:0.5] 1450761} {[gcr.io/google_containers/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff gcr.io/google_containers/busybox:1.24] 1113554} {[10.111.89.130:11000/busybox@sha256:68effe31a4ae8312e47f54bec52d1fc925908009ce7e6f734e1b54a4169081c5 10.111.89.130:11000/busybox:latest] 1109996} {[10.111.89.130:11000/google_containers/pause-amd64@sha256:7b23a11e164b0cfa08188d9f976da9f890464cdeb81c1f7c8ef008f03df3681e gcr.io/google_containers/pause-amd64@sha256:163ac025575b775d1c0f9bf0bdd0f086883171eb475b5068e7defa4ca9e76516 10.111.89.130:11000/google_containers/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0] 746888}],VolumesInUse:[],VolumesAttached:[],},}
04:41:58 Jun 13 11:47:55.455: INFO:
04:41:58 Logging kubelet events for node k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:58 Jun 13 11:47:55.569: INFO:
04:41:58 Logging pods the kubelet thinks is on node k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:59 Jun 13 11:47:55.819: INFO: kube-proxy-k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed started at (0+0 container statuses recorded)
04:41:59 Jun 13 11:47:55.819: INFO: overcommit-0 started at 2017-06-13 11:37:59 +0000 UTC (0+1 container statuses recorded)
04:41:59 Jun 13 11:47:55.819: INFO: Container overcommit-0 ready: true, restart count 0
04:41:59 Jun 13 11:47:55.819: INFO: overcommit-4 started at 2017-06-13 11:37:59 +0000 UTC (0+1 container statuses recorded)
04:41:59 Jun 13 11:47:55.819: INFO: Container overcommit-4 ready: true, restart count 0
04:41:59 Jun 13 11:47:55.820: INFO: overcommit-6 started at 2017-06-13 11:38:00 +0000 UTC (0+1 container statuses recorded)
04:41:59 Jun 13 11:47:55.820: INFO: Container overcommit-6 ready: true, restart count 0
04:41:59 Jun 13 11:47:55.820: INFO: flannel-k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed started at (0+0 container statuses recorded)
04:41:59 W0613 11:47:55.943635 275 metrics_grabber.go:74] Master node is not registered. Grabbing metrics from Scheduler and ControllerManager is disabled.
04:41:59 Jun 13 11:47:56.375: INFO:
04:41:59 Latency metrics for node k8s-node-2-ebbe850c-0a28-4fd8-b416-03084eb816ed
04:41:59 STEP: Dumping a list of prepulled images on each node
04:41:59 Jun 13 11:47:56.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
04:41:59 STEP: Destroying namespace "e2e-tests-sched-pred-jhgng" for this suite.
04:42:29 Jun 13 11:48:25.881: INFO: namespace: e2e-tests-sched-pred-jhgng, resource: bindings, ignored listing per whitelist
04:42:29 [AfterEach] [k8s.io] SchedulerPredicates [Serial]
04:42:29 /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
04:42:29
04:42:29 • Failure [700.011 seconds]
04:42:29 [k8s.io] SchedulerPredicates [Serial]
04:42:29 /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:650
04:42:29 validates resource limits of pods that are allowed to run [Conformance] [It]
04:42:29 /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:216
04:42:29
04:42:29 Expected error:
04:42:29 <*errors.errorString | 0xc421280120>: {
04:42:29 s: "Error waiting for 9 pods to be running - probably a timeout: Timeout while waiting for pods with labels "startPodsID=ba603655-502c-11e7-b75f-0242ac110003" to be running",
04:42:29 }
04:42:29 Error waiting for 9 pods to be running - probably a timeout: Timeout while waiting for pods with labels "startPodsID=ba603655-502c-11e7-b75f-0242ac110003" to be running
04:42:29 not to have occurred
04:42:29
04:42:29 /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:202
04:42:29 ------------------------------
04:42:29 S

@k8s-github-robot

@bizhao There are no sig labels on this issue. Please add a sig label by:
(1) mentioning a sig: @kubernetes/sig-<team-name>-misc
(2) specifying the label manually: /sig <label>

Note: method (1) will trigger a notification to the team. You can find the team list here and label list here

@k8s-github-robot k8s-github-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jun 16, 2017
@bizhao
Contributor Author

bizhao commented Jun 16, 2017

/sig scheduling

@k8s-ci-robot k8s-ci-robot added the sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. label Jun 16, 2017
@bizhao
Contributor Author

bizhao commented Jun 16, 2017

/sig testing

@k8s-ci-robot k8s-ci-robot added the sig/testing Categorizes an issue or PR as relevant to SIG Testing. label Jun 16, 2017
@bizhao
Contributor Author

bizhao commented Jun 16, 2017

@kubernetes/sig-testing-misc

@k8s-github-robot k8s-github-robot removed the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jun 16, 2017
k8s-github-robot pushed a commit that referenced this issue Jun 28, 2017
Automatic merge from submit-queue (batch tested with PRs 45610, 47628)

Replace capacity with allocatable to calculate pod resource

It is not accurate to use capacity for this calculation: part of a node's capacity is reserved for system daemons, so only the allocatable amount is actually schedulable (see the illustrative sketch after the PR description below).



**What this PR does / why we need it**:
The current CPU resource calculation for a pod in the end-to-end test is incorrect.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
fixes #47627

**Special notes for your reviewer**:
More details about capacity and allocatable:
https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node-allocatable.md

**Release note**:

NONE
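For illustration only, here is a minimal Go sketch of the idea behind the fix, not the actual change in #47628: sum CPU from each node's `Status.Allocatable` rather than `Status.Capacity` when deciding how much CPU the test may request. The helper name (`allocatableMilliCPU`), the fallback to capacity, and the import paths (which postdate the 1.6-era package layout) are assumptions for this sketch.

```go
// Sketch: size test pods from allocatable CPU, not capacity, so that the
// scheduler can actually place them on nodes that reserve resources for the
// system (e.g. capacity 2000m vs allocatable 1900m, as in the logs above).
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// allocatableMilliCPU returns the total CPU (in millicores) the scheduler can
// place pods onto, falling back to capacity when allocatable is not reported.
func allocatableMilliCPU(nodes []v1.Node) int64 {
	var total int64
	for _, node := range nodes {
		cpu, ok := node.Status.Allocatable[v1.ResourceCPU]
		if !ok {
			// Hypothetical fallback; some clusters may not report allocatable.
			cpu = node.Status.Capacity[v1.ResourceCPU]
		}
		total += cpu.MilliValue()
	}
	return total
}

func main() {
	// A made-up node mirroring the issue: capacity 2000m, allocatable 1900m.
	node := v1.Node{}
	node.Status.Capacity = v1.ResourceList{v1.ResourceCPU: resource.MustParse("2")}
	node.Status.Allocatable = v1.ResourceList{v1.ResourceCPU: resource.MustParse("1900m")}

	// Two such nodes give 3800m of schedulable CPU, not 4000m.
	fmt.Println(allocatableMilliCPU([]v1.Node{node, node}))
}
```

With nodes like the one in the trace, pods sized from the capacity figure request more CPU than the scheduler will admit, so at least one pod stays Pending until the test times out; sizing them from allocatable avoids that.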