ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new: broken test run #38469

Closed
k8s-github-robot opened this issue Dec 9, 2016 · 44 comments
Labels: kind/flake (Categorizes issue or PR as related to a flaky test.)

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/43/

Multiple broken tests:

Failed: Deferred TearDown {e2e.go}

Terminate testing after 15m after 10h0m0s timeout during teardown

Issues about this test specifically: #35658

Failed: DumpClusterLogs {e2e.go}

Terminate testing after 15m after 10h0m0s timeout during dump cluster logs

Issues about this test specifically: #33722 #37578 #37974 #38206

Failed: TearDown {e2e.go}

Terminate testing after 15m after 10h0m0s timeout during teardown

Issues about this test specifically: #34118 #34795 #37058 #38207
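
All three timeout failures above have the same shape: the run exhausted its overall 10h0m0s budget, and each remaining cleanup step (teardown, cluster-log dump) was then cut off after a 15m grace period. Below is a minimal sketch of that two-stage timeout, assuming the runner invokes each cleanup step as a child process; it is an illustration, not the actual e2e.go runner, and the cleanup script name is hypothetical.

```go
// Hedged reading of "Terminate testing after 15m after 10h0m0s timeout":
// once the overall 10h budget is gone, each remaining cleanup step still
// runs, but only under a 15m grace period before it is killed.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// runWithGrace gives a cleanup step a bounded grace period and terminates
// it when the period expires.
func runWithGrace(grace time.Duration, name string, args ...string) error {
	ctx, cancel := context.WithTimeout(context.Background(), grace)
	defer cancel()
	if err := exec.CommandContext(ctx, name, args...).Run(); err != nil {
		return fmt.Errorf("terminate %s after %v: %v", name, grace, err)
	}
	return nil
}

func main() {
	// 15m grace per step, matching the log lines above.
	for _, step := range []string{"teardown", "dump-cluster-logs"} {
		if err := runWithGrace(15*time.Minute, "./cleanup.sh", step); err != nil { // hypothetical script
			fmt.Println(err)
		}
	}
}
```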

Failed: DiffResources {e2e.go}

Error: 18 leaked resources
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-c38e57a3  n1-standard-2               2016-12-07T15:28:41.936-08:00
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-b68604d4-grp  us-central1-a  zone   bootstrap-e2e  Yes      3
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
+gke-bootstrap-e2e-default-pool-b68604d4-5z24  us-central1-a  n1-standard-2               10.240.0.3   130.211.225.32   RUNNING
+gke-bootstrap-e2e-default-pool-b68604d4-8w8a  us-central1-a  n1-standard-2               10.240.0.4   104.198.65.185   RUNNING
+gke-bootstrap-e2e-default-pool-b68604d4-wfo0  us-central1-a  n1-standard-2               10.240.0.2   104.197.101.240  RUNNING
+NAME                                          ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-default-pool-b68604d4-5z24  us-central1-a  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-b68604d4-8w8a  us-central1-a  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-b68604d4-wfo0  us-central1-a  100      pd-standard  READY
+gke-bootstrap-e2e-854608d0-24e56f92-bcf4-11e6-94e7-42010af0003b  bootstrap-e2e  10.96.5.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-b68604d4-8w8a  1000
+gke-bootstrap-e2e-854608d0-6e95e366-bcd6-11e6-94e7-42010af0003b  bootstrap-e2e  10.96.4.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-b68604d4-wfo0  1000
+gke-bootstrap-e2e-854608d0-e2aa326d-bcd5-11e6-94e7-42010af0003b  bootstrap-e2e  10.96.3.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-b68604d4-5z24  1000
+gke-bootstrap-e2e-854608d0-all           bootstrap-e2e  10.96.0.0/14        tcp,udp,icmp,esp,ah,sctp
+gke-bootstrap-e2e-854608d0-ssh           bootstrap-e2e  104.154.182.176/32  tcp:22                                  gke-bootstrap-e2e-854608d0-node
+gke-bootstrap-e2e-854608d0-vms           bootstrap-e2e  10.240.0.0/16       icmp,tcp:1-65535,udp:1-65535            gke-bootstrap-e2e-854608d0-node

Issues about this test specifically: #33373 #33416 #34060
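
The DiffResources check compares resource listings captured before and after the run; every line that exists only afterwards is printed with a leading "+" and counted as a leak, which is where "Error: 18 leaked resources" comes from. A minimal sketch of that diff-and-count idea (an illustration of the output above, not the actual hack/e2e.go code):

```go
// leakedResources returns lines present in `after` but not in `before`,
// prefixed with "+" the way the DiffResources failure output shows them.
package main

import (
	"fmt"
	"strings"
)

func leakedResources(before, after string) []string {
	seen := make(map[string]bool)
	for _, line := range strings.Split(before, "\n") {
		seen[strings.TrimSpace(line)] = true
	}
	var leaks []string
	for _, line := range strings.Split(after, "\n") {
		line = strings.TrimSpace(line)
		if line != "" && !seen[line] {
			leaks = append(leaks, "+"+line)
		}
	}
	return leaks
}

func main() {
	before := "instance-a\ninstance-b"
	after := "instance-a\ninstance-b\ngke-bootstrap-e2e-default-pool-c38e57a3"
	leaks := leakedResources(before, after)
	fmt.Printf("Error: %d leaked resources\n", len(leaks))
	for _, l := range leaks {
		fmt.Println(l)
	}
}
```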

k8s-github-robot added the kind/flake and priority/P2 labels on Dec 9, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/48/

Multiple broken tests:

Failed: DiffResources {e2e.go}

Error: 18 leaked resources
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-dd1149a1  n1-standard-2               2016-12-09T08:50:02.847-08:00
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-fa618b98-grp  us-central1-a  zone   bootstrap-e2e  Yes      3
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
+gke-bootstrap-e2e-default-pool-fa618b98-fout  us-central1-a  n1-standard-2               10.240.0.2   35.184.32.172   RUNNING
+gke-bootstrap-e2e-default-pool-fa618b98-ipn6  us-central1-a  n1-standard-2               10.240.0.4   104.197.64.121  RUNNING
+gke-bootstrap-e2e-default-pool-fa618b98-m9c2  us-central1-a  n1-standard-2               10.240.0.3   108.59.85.126   RUNNING
+NAME                                          ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-default-pool-fa618b98-fout  us-central1-a  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-fa618b98-ipn6  us-central1-a  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-fa618b98-m9c2  us-central1-a  100      pd-standard  READY
+gke-bootstrap-e2e-94b2b228-65fecda3-be30-11e6-a641-42010af00035  bootstrap-e2e  10.96.3.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-fa618b98-ipn6  1000
+gke-bootstrap-e2e-94b2b228-70f950b1-be46-11e6-aef9-42010af00019  bootstrap-e2e  10.96.0.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-fa618b98-fout  1000
+gke-bootstrap-e2e-94b2b228-ddb3cc59-be30-11e6-a641-42010af00035  bootstrap-e2e  10.96.4.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-fa618b98-m9c2  1000
+gke-bootstrap-e2e-94b2b228-all           bootstrap-e2e  10.96.0.0/14     esp,ah,sctp,tcp,udp,icmp
+gke-bootstrap-e2e-94b2b228-ssh           bootstrap-e2e  35.184.58.18/32  tcp:22                                  gke-bootstrap-e2e-94b2b228-node
+gke-bootstrap-e2e-94b2b228-vms           bootstrap-e2e  10.240.0.0/16    icmp,tcp:1-65535,udp:1-65535            gke-bootstrap-e2e-94b2b228-node

Issues about this test specifically: #33373 #33416 #34060

Failed: Deferred TearDown {e2e.go}

Terminate testing after 15m after 10h0m0s timeout during teardown

Issues about this test specifically: #35658

Failed: DumpClusterLogs {e2e.go}

Terminate testing after 15m after 10h0m0s timeout during dump cluster logs

Issues about this test specifically: #33722 #37578 #37974 #38206

Failed: TearDown {e2e.go}

Terminate testing after 15m after 10h0m0s timeout during teardown

Issues about this test specifically: #34118 #34795 #37058 #38207

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/51/

Multiple broken tests:

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc422661550>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27470 #30156 #34304 #37620
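
Every remaining failure in this run shares the same root cause: the suite's pre-test check requires all kube-system pods to be Running and Ready within 5m, and fluentd-cloud-logging on node gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 stays Pending with ContainersNotReady, so each [Serial] test fails before it starts. A minimal client-go sketch of such a readiness poll follows; it approximates the framework helper behind these errors and is not the actual implementation.

```go
// Poll until every pod in kube-system is Running and Ready, or give up
// after the same 5m budget the failures above report.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func podReady(pod *v1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == v1.PodReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

// waitForSystemPods reports which pods are not Running and Ready once the
// 5m budget is spent, mirroring the error text above.
func waitForSystemPods(cs kubernetes.Interface) error {
	return wait.PollImmediate(10*time.Second, 5*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, nil // retry on transient API errors
		}
		notReady := 0
		for i := range pods.Items {
			p := &pods.Items[i]
			if p.Status.Phase != v1.PodRunning || !podReady(p) {
				notReady++
				fmt.Printf("%s on %s is %s and not Ready\n", p.Name, p.Spec.NodeName, p.Status.Phase)
			}
		}
		return notReady == 0, nil
	})
}
```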

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422cce640>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42167a670>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421f60a40>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420e0e740>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420350560>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Dec 10 16:58:12.887: At least one pod wasn't running and ready or succeeded at test start.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:93

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420ef5f30>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc421f9e9a0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42066caf0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421f4b6e0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4217f5b80>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4225c1e60>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420f37110>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421f58870>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42066c490>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4209accd0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4223611f0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 12:52:35 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4215f5cc0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4212223e0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 gke-bootstrap-e2e-default-pool-bc24da2a-xwc1 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-10 11:18:24 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #31918

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/52/

Multiple broken tests:

Failed: DiffResources {e2e.go}

Error: 18 leaked resources
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-6dfac8ab  n1-standard-2               2016-12-10T20:41:28.380-08:00
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-30afd041-grp  us-central1-a  zone   bootstrap-e2e  Yes      3
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
+gke-bootstrap-e2e-default-pool-30afd041-e3cb  us-central1-a  n1-standard-2               10.240.0.2   130.211.229.0   RUNNING
+gke-bootstrap-e2e-default-pool-30afd041-hv1n  us-central1-a  n1-standard-2               10.240.0.4   104.154.144.80  RUNNING
+gke-bootstrap-e2e-default-pool-30afd041-rpql  us-central1-a  n1-standard-2               10.240.0.3   130.211.233.6   RUNNING
+NAME                                          ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-default-pool-30afd041-e3cb  us-central1-a  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-30afd041-hv1n  us-central1-a  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-30afd041-rpql  us-central1-a  100      pd-standard  READY
+gke-bootstrap-e2e-36fba380-1c192789-bf6e-11e6-8564-42010af0002c  bootstrap-e2e  10.96.5.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-30afd041-e3cb  1000
+gke-bootstrap-e2e-36fba380-65fa28a7-bf5d-11e6-8564-42010af0002c  bootstrap-e2e  10.96.3.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-30afd041-rpql  1000
+gke-bootstrap-e2e-36fba380-e6780193-bf5c-11e6-8564-42010af0002c  bootstrap-e2e  10.96.2.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-30afd041-hv1n  1000
+gke-bootstrap-e2e-36fba380-all           bootstrap-e2e  10.96.0.0/14       tcp,udp,icmp,esp,ah,sctp
+gke-bootstrap-e2e-36fba380-ssh           bootstrap-e2e  146.148.69.230/32  tcp:22                                  gke-bootstrap-e2e-36fba380-node
+gke-bootstrap-e2e-36fba380-vms           bootstrap-e2e  10.240.0.0/16      udp:1-65535,icmp,tcp:1-65535            gke-bootstrap-e2e-36fba380-node

Issues about this test specifically: #33373 #33416 #34060

Failed: Deferred TearDown {e2e.go}

Terminate testing after 15m after 10h0m0s timeout during teardown

Issues about this test specifically: #35658

Failed: DumpClusterLogs {e2e.go}

Terminate testing after 15m after 10h0m0s timeout during dump cluster logs

Issues about this test specifically: #33722 #37578 #37974 #38206

Failed: TearDown {e2e.go}

Terminate testing after 15m after 10h0m0s timeout during teardown

Issues about this test specifically: #34118 #34795 #37058 #38207

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/53/

Multiple broken tests:

Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:222
Expected success, but got an error:
    <*errors.errorString | 0xc4203fb690>: {
        s: "http2: no cached connection was available",
    }
    http2: no cached connection was available
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:732

Issues about this test specifically: #27957

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc4203fba10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:135
Expected error:
    <*errors.errorString | 0xc4203fba10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #32467 #36276

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc4203fba10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:265

Issues about this test specifically: #32584

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc4209775d0>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: total pods available: 14, less than the min required: 18",
    }
    error waiting for deployment "nginx" status to match expectation: total pods available: 14, less than the min required: 18
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1120

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458
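
The rollout stalls because the number of available pods (14) is below the minimum a rolling update is allowed to drop to, replicas minus maxUnavailable. A small sketch of that arithmetic; the concrete replica counts below are assumptions chosen to reproduce the 18 in the error, not values read from the test.

```go
// Rolling-update availability floor: a deployment must keep at least
// replicas - maxUnavailable pods available while it rolls.
package main

import "fmt"

func minAvailable(replicas, maxUnavailable int) int {
	return replicas - maxUnavailable
}

func main() {
	replicas, maxUnavailable := 20, 2 // assumed values
	fmt.Println(minAvailable(replicas, maxUnavailable)) // 18
	// 14 available < 18 required, so the deployment never satisfies its
	// status expectation and the test times out waiting for it.
}
```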

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:334
Dec 11 12:52:05.421: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1098
Expected error:
    <*errors.errorString | 0xc4221c8340>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #26172

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:324
Dec 11 15:30:32.871: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/55/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421f337a0>: {
        s: "Namespace e2e-tests-services-bf75s is active",
    }
    Namespace e2e-tests-services-bf75s is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918
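
The SchedulerPredicates failures in this run are collateral damage: the framework refuses to start a [Serial] test while namespaces left over from earlier tests still exist, and e2e-tests-services-bf75s (likely from the apiserver-restart failure below) was never cleaned up. A minimal client-go sketch of that kind of leftover-namespace check, assuming only the "e2e-tests-" naming convention visible in the error, not the framework's actual logic:

```go
// List namespaces and report any leftover test namespace, producing
// messages of the same shape as "Namespace e2e-tests-services-bf75s is active".
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func leftoverTestNamespaces(cs kubernetes.Interface) ([]string, error) {
	nsList, err := cs.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var active []string
	for _, ns := range nsList.Items {
		if strings.HasPrefix(ns.Name, "e2e-tests-") {
			active = append(active, fmt.Sprintf("Namespace %s is %s",
				ns.Name, strings.ToLower(string(ns.Status.Phase))))
		}
	}
	return active, nil
}
```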

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4219698d0>: {
        s: "Namespace e2e-tests-services-bf75s is active",
    }
    Namespace e2e-tests-services-bf75s is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4214b4a60>: {
        s: "Namespace e2e-tests-services-bf75s is active",
    }
    Namespace e2e-tests-services-bf75s is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc421546ff0>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 69, 38],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 35.184.69.38:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
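
The dial error means the test reached the master at 35.184.69.38:443 before the restarted apiserver was listening again. A hedged sketch of the obvious mitigation, retrying the dial for a bounded period rather than failing on the first "connection refused"; the real test drives this through the framework client in test/e2e/service.go rather than a raw dial.

```go
// Retry a TCP dial to the apiserver until it accepts connections or the
// deadline passes.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForAPIServer(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("apiserver %s not reachable: %v", addr, err)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	// Address taken from the failure above.
	if err := waitForAPIServer("35.184.69.38:443", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```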

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421870e20>: {
        s: "Namespace e2e-tests-services-bf75s is active",
    }
    Namespace e2e-tests-services-bf75s is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4210559c0>: {
        s: "Namespace e2e-tests-services-bf75s is active",
    }
    Namespace e2e-tests-services-bf75s is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4215400b0>: {
        s: "Namespace e2e-tests-services-bf75s is active",
    }
    Namespace e2e-tests-services-bf75s is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421978af0>: {
        s: "Namespace e2e-tests-services-bf75s is active",
    }
    Namespace e2e-tests-services-bf75s is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421979120>: {
        s: "Namespace e2e-tests-services-bf75s is active",
    }
    Namespace e2e-tests-services-bf75s is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421aadad0>: {
        s: "Namespace e2e-tests-services-bf75s is active",
    }
    Namespace e2e-tests-services-bf75s is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4215377c0>: {
        s: "Namespace e2e-tests-services-bf75s is active",
    }
    Namespace e2e-tests-services-bf75s is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42127ff30>: {
        s: "Namespace e2e-tests-services-bf75s is active",
    }
    Namespace e2e-tests-services-bf75s is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:81
Expected error:
    <*errors.errorString | 0xc4216c71a0>: {
        s: "failed to get logs from pod-38bc0b3a-c04f-11e6-8c25-0242ac110008 for test-container: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-38bc0b3a-c04f-11e6-8c25-0242ac110008)",
    }
    failed to get logs from pod-38bc0b3a-c04f-11e6-8c25-0242ac110008 for test-container: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-38bc0b3a-c04f-11e6-8c25-0242ac110008)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29224 #32008 #37564

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421aace20>: {
        s: "Namespace e2e-tests-services-bf75s is active",
    }
    Namespace e2e-tests-services-bf75s is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142
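
All of the scheduler failures above share one root cause: the [Serial] SchedulerPredicates suite has a precondition at scheduler_predicates.go:78 that waits for leftover e2e namespaces to finish deleting before each case, and the leaked e2e-tests-services-bf75s namespace trips it every time. A minimal sketch of that style of gate, standard library only, with hypothetical helper names rather than the framework's real code:

```go
package main

import (
	"fmt"
	"time"
)

// listActiveNamespaces is a hypothetical stand-in for the client-go call
// the framework makes; it returns e2e namespaces not yet fully deleted.
func listActiveNamespaces() []string {
	return []string{"e2e-tests-services-bf75s"} // leaked by a prior services test
}

// waitForNoLeftoverNamespaces polls until every leftover namespace is
// gone; if the deadline passes it returns the same kind of
// "Namespace X is active" error seen in the failures above.
func waitForNoLeftoverNamespaces(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if len(listActiveNamespaces()) == 0 {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	if active := listActiveNamespaces(); len(active) > 0 {
		return fmt.Errorf("Namespace %s is active", active[0])
	}
	return nil
}

func main() {
	if err := waitForNoLeftoverNamespaces(6 * time.Second); err != nil {
		fmt.Println("precondition failed:", err)
	}
}
```

One stuck namespace therefore cascades into a failure for every subsequent SchedulerPredicates case, which is why the same namespace name appears in all of them.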

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/58/

Multiple broken tests:

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 09:00:27.328: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214ee000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34372
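
This "All nodes should be ready after test" message is the framework's post-test health check (framework.go:438), not the test body itself: after each spec it lists the cluster's nodes and fails the spec if any node's Ready condition is not True. A rough, self-contained sketch of what that check computes, with a simplified node type standing in for api.Node (the unready node name is from this run; the healthy peer is hypothetical):

```go
package main

import "fmt"

// node is a simplified stand-in for api.Node: just a name and whether
// its NodeReady condition is True.
type node struct {
	name  string
	ready bool
}

// notReadyNodes filters the cluster's nodes down to the unready ones,
// which is what the "Not ready nodes: ..." message above is printing.
func notReadyNodes(nodes []node) []node {
	var out []node
	for _, n := range nodes {
		if !n.ready {
			out = append(out, n)
		}
	}
	return out
}

func main() {
	nodes := []node{
		{"gke-bootstrap-e2e-default-pool-c3a502c2-pd6l", false}, // unready node from this run
		{"gke-bootstrap-e2e-default-pool-c3a502c2-aaaa", true},  // hypothetical healthy peer
	}
	if bad := notReadyNodes(nodes); len(bad) > 0 {
		fmt.Printf("All nodes should be ready after test, Not ready nodes: %v\n", bad)
	}
}
```

Because the check runs in teardown, the specs themselves may have passed; a single node that stays unready fails every test that completes while it is down, which accounts for the long list of otherwise unrelated failures below.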

Failed: [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 05:22:12.366: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42207aa00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36183

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 08:10:04.568: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ffd400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 06:01:50.680: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fd8000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37071

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 08:35:07.080: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42069c000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36950

Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 08:40:08.828: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42140c000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26127 #28081

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420ac6b50>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-c3a502c2-pd6l gke-bootstrap-e2e-default-pool-c3a502c2-pd6l Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:23:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:33:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-13 04:22:37 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-c3a502c2-pd6l            gke-bootstrap-e2e-default-pool-c3a502c2-pd6l Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:23:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:40:25 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-13 04:22:37 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-c3a502c2-pd6l gke-bootstrap-e2e-default-pool-c3a502c2-pd6l Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:23:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:33:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-13 04:22:37 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-c3a502c2-pd6l            gke-bootstrap-e2e-default-pool-c3a502c2-pd6l Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:23:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:40:25 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-13 04:22:37 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #36914
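
The "2 / 11 pods in namespace kube-system are NOT in RUNNING and READY state in 5m0s" variant is the suite's other precondition (scheduler_predicates.go:93): before each case it waits up to five minutes for every kube-system pod to be both Running and Ready. A sketch of the shape of that wait, with a hypothetical lister; the two unready pod names are shortened from the table above and the nine healthy pods are omitted:

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// podStatus is a minimal stand-in for what the framework inspects:
// the pod's phase plus its Ready condition.
type podStatus struct {
	name  string
	phase string
	ready bool
}

// listKubeSystemPods is hypothetical; the real check lists the pods in
// the kube-system namespace via the API server.
func listKubeSystemPods() []podStatus {
	return []podStatus{
		{"fluentd-cloud-logging-...-c3a502c2-pd6l", "Running", false},
		{"kube-proxy-...-c3a502c2-pd6l", "Running", false},
	}
}

// waitKubeSystemReady reproduces the shape of the error above: after the
// timeout it reports how many pods never became Running and Ready.
func waitKubeSystemReady(total int, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		var bad []string
		for _, p := range listKubeSystemPods() {
			if p.phase != "Running" || !p.ready {
				bad = append(bad, p.name)
			}
		}
		if len(bad) == 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("%d / %d pods in namespace %q are NOT in RUNNING and READY state in %v\n%s",
				len(bad), total, "kube-system", timeout, strings.Join(bad, "\n"))
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	fmt.Println(waitKubeSystemReady(11, 4*time.Second))
}
```

Note that both unready pods sit on the same node that the teardown checks flag, so this is the same underlying node problem surfacing through a different precondition.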

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422201d90>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-c3a502c2-pd6l gke-bootstrap-e2e-default-pool-c3a502c2-pd6l Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:23:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:33:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-13 04:22:37 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-c3a502c2-pd6l            gke-bootstrap-e2e-default-pool-c3a502c2-pd6l Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:23:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:40:25 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-13 04:22:37 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-c3a502c2-pd6l gke-bootstrap-e2e-default-pool-c3a502c2-pd6l Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:23:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:33:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-13 04:22:37 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-c3a502c2-pd6l            gke-bootstrap-e2e-default-pool-c3a502c2-pd6l Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:23:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:40:25 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-13 04:22:37 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422159100>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-c3a502c2-pd6l gke-bootstrap-e2e-default-pool-c3a502c2-pd6l Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:23:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:33:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-13 04:22:37 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-c3a502c2-pd6l            gke-bootstrap-e2e-default-pool-c3a502c2-pd6l Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:23:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:40:25 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-13 04:22:37 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-c3a502c2-pd6l gke-bootstrap-e2e-default-pool-c3a502c2-pd6l Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:23:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:33:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-13 04:22:37 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-c3a502c2-pd6l            gke-bootstrap-e2e-default-pool-c3a502c2-pd6l Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:23:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:40:25 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-13 04:22:37 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420ac6db0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-c3a502c2-pd6l gke-bootstrap-e2e-default-pool-c3a502c2-pd6l Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:23:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:33:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-13 04:22:37 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-c3a502c2-pd6l            gke-bootstrap-e2e-default-pool-c3a502c2-pd6l Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:23:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:40:25 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-13 04:22:37 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-c3a502c2-pd6l gke-bootstrap-e2e-default-pool-c3a502c2-pd6l Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:23:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:33:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-13 04:22:37 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-c3a502c2-pd6l            gke-bootstrap-e2e-default-pool-c3a502c2-pd6l Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:23:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:40:25 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-13 04:22:37 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 08:19:41.213: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42164f400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29467

Failed: [k8s.io] Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 07:48:18.261: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f18a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29511 #29987 #30238 #38364

Failed: [k8s.io] InitContainer should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 05:55:13.275: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42134aa00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31936

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 08:06:25.215: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211bc000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29221

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 07:52:02.528: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4227d0000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28003

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 08:57:11.193: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4227d0a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 06:49:30.685: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421030000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35256

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 08:13:16.794: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422c99400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 06:19:44.496: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42207aa00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 05:48:48.092: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4223a4a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31502 #32947 #38646

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 07:27:17.292: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422742000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28106 #35197 #37482

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 09:18:02.170: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c03400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32753 #34676

Failed: [k8s.io] Deployment paused deployment should be ignored by the controller {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 07:44:54.121: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e0ca00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28067 #28378 #32692 #33256 #34654

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421f3a240>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-c3a502c2-pd6l gke-bootstrap-e2e-default-pool-c3a502c2-pd6l Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:23:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:33:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-13 04:22:37 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-c3a502c2-pd6l            gke-bootstrap-e2e-default-pool-c3a502c2-pd6l Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:23:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:40:25 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-13 04:22:37 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-c3a502c2-pd6l gke-bootstrap-e2e-default-pool-c3a502c2-pd6l Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:23:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:33:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-13 04:22:37 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-c3a502c2-pd6l            gke-bootstrap-e2e-default-pool-c3a502c2-pd6l Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:23:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 23:40:25 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-13 04:22:37 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 05:52:00.258: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4220e6000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 05:33:26.251: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b59400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30263

Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 06:16:15.380: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421418a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 06:46:03.433: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4221e8000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc42269c570>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-c3a502c2-pd6l boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-c3a502c2-pd6l boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552
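
The Restart test reboots every node and then waits for each node's boot ID, reported in Status.NodeInfo.BootID, to change as proof the reboot completed; the error above means ...-pd6l never came back with a new boot ID. A sketch of that wait, with getBootID as a hypothetical stand-in for the API read:

```go
package main

import (
	"fmt"
	"time"
)

// getBootID is hypothetical; the framework reads the node's
// Status.NodeInfo.BootID, which changes on every reboot.
func getBootID(node string) (string, error) {
	return "c0ffee", nil // stuck at the pre-reboot value in this failure
}

// waitForBootIDChange polls until the node reports a boot ID different
// from the one recorded before the restart was issued.
func waitForBootIDChange(node, oldID string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		id, err := getBootID(node)
		if err == nil && id != oldID {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("error waiting for node %s boot ID to change: timed out waiting for the condition", node)
}

func main() {
	err := waitForBootIDChange("gke-bootstrap-e2e-default-pool-c3a502c2-pd6l", "c0ffee", 6*time.Second)
	fmt.Println(err)
}
```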

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 08:50:24.404: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219eea00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 06:23:00.694: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421216000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31938

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 06:42:31.613: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e2ea00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37259

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 08:47:12.249: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4227d0a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29629 #36270 #37462

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 08:43:43.005: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215b6000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30632

Failed: [k8s.io] HostPath should support subPath {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 05:30:14.902: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421dad400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 08:53:39.956: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212f6a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27014 #27834

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 05:58:36.469: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421603400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26138 #28429 #28737 #38064

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:270
Expected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            s: "Resource usage on node \"gke-bootstrap-e2e-default-pool-c3a502c2-pd6l\" is not ready yet",
        },
    ]
    Resource usage on node "gke-bootstrap-e2e-default-pool-c3a502c2-pd6l" is not ready yet
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209
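
The errors.aggregate here is a list of per-node errors: the resource tracker could not collect usage samples from the same unready node. A small sketch of how such an aggregate flattens into the single message shown (this mimics the shape of the output, not the framework's exact aggregate type):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// aggregate joins one error per failing node into a single error, the
// way the len:1 aggregate above carries its one node-level message.
func aggregate(errs []error) error {
	if len(errs) == 0 {
		return nil
	}
	msgs := make([]string, len(errs))
	for i, e := range errs {
		msgs[i] = e.Error()
	}
	return errors.New(strings.Join(msgs, "; "))
}

func main() {
	err := aggregate([]error{
		fmt.Errorf("Resource usage on node %q is not ready yet",
			"gke-bootstrap-e2e-default-pool-c3a502c2-pd6l"),
	})
	fmt.Println(err)
}
```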

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 13 05:27:02.746: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421419400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38254

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/64/

Multiple broken tests:

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:153
Expected error:
    <*errors.errorString | 0xc42038ecf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #28462 #33782 #34014 #37374
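
The bare "timed out waiting for the condition" comes from the pod-startup wait at framework/pods.go:67: a condition function is polled until it reports the pod Running, and when the timeout fires only this generic message survives, with no hint of which condition failed. The underlying pattern, simplified to the standard library:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errTimedOut carries the exact generic message seen above; the
// framework's wait utilities return it when a condition never holds.
var errTimedOut = errors.New("timed out waiting for the condition")

// pollUntil runs cond every interval until it returns true or the
// timeout elapses. This mirrors the wait.Poll pattern the framework
// builds on (simplified, standard library only).
func pollUntil(interval, timeout time.Duration, cond func() bool) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if cond() {
			return nil
		}
		time.Sleep(interval)
	}
	return errTimedOut
}

func main() {
	// Hypothetical condition: the test pod never reaches Running.
	podRunning := func() bool { return false }
	fmt.Println(pollUntil(time.Second, 3*time.Second, podRunning))
}
```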

Failed: [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:69
Expected error:
    <*errors.errorString | 0xc421356050>: {
        s: "expected pod \"pod-4c125730-c2dc-11e6-af02-0242ac110009\" success: gave up waiting for pod 'pod-4c125730-c2dc-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-4c125730-c2dc-11e6-af02-0242ac110009" success: gave up waiting for pod 'pod-4c125730-c2dc-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #36183
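
The volume tests in this run then fail the same way at util.go:2167: the test pod is created and the framework polls for up to five minutes for it to reach phase Succeeded or Failed before checking its output, and "gave up waiting" generally means the pod was stuck before its container ever ran. A sketch of that wait, with getPodPhase as a hypothetical stand-in for the API call and the pod name taken from the HostPath failure below:

```go
package main

import (
	"fmt"
	"time"
)

// phase is a stand-in for the pod's status phase.
type phase string

// getPodPhase is hypothetical; the framework fetches the pod from the
// API server on each iteration. Here the pod is permanently stuck.
func getPodPhase(pod string) phase { return "Pending" }

// waitForPodSuccessOrFailure reproduces the failure mode above: poll the
// pod's phase for up to timeout, and give up with the same message if it
// never terminates.
func waitForPodSuccessOrFailure(pod string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		switch getPodPhase(pod) {
		case "Succeeded", "Failed":
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("gave up waiting for pod '%s' to be 'success or failure' after %v", pod, timeout)
}

func main() {
	fmt.Println(waitForPodSuccessOrFailure("pod-host-path-test", 6*time.Second))
}
```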

Failed: [k8s.io] HostPath should support subPath {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:113
Expected error:
    <*errors.errorString | 0xc421d213f0>: {
        s: "expected pod \"pod-host-path-test\" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-host-path-test" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4218c9790>: {
        s: "Namespace e2e-tests-services-3dctv is active",
    }
    Namespace e2e-tests-services-3dctv is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421d20f00>: {
        s: "Namespace e2e-tests-services-3dctv is active",
    }
    Namespace e2e-tests-services-3dctv is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421591450>: {
        s: "Namespace e2e-tests-services-3dctv is active",
    }
    Namespace e2e-tests-services-3dctv is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:42
Expected error:
    <*errors.errorString | 0xc421016ca0>: {
        s: "expected pod \"pod-configmaps-bded02c4-c2ef-11e6-af02-0242ac110009\" success: gave up waiting for pod 'pod-configmaps-bded02c4-c2ef-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-bded02c4-c2ef-11e6-af02-0242ac110009" success: gave up waiting for pod 'pod-configmaps-bded02c4-c2ef-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #34827

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
Expected error:
    <*errors.errorString | 0xc421ff8370>: {
        s: "expected pod \"pod-secrets-10ec1d13-c2d4-11e6-af02-0242ac110009\" success: gave up waiting for pod 'pod-secrets-10ec1d13-c2d4-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-10ec1d13-c2d4-11e6-af02-0242ac110009" success: gave up waiting for pod 'pod-secrets-10ec1d13-c2d4-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29221

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4220d6ff0>: {
        s: "Namespace e2e-tests-services-3dctv is active",
    }
    Namespace e2e-tests-services-3dctv is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:77
Expected error:
    <*errors.errorString | 0xc4220d73c0>: {
        s: "expected pod \"pod-secrets-d2d8640c-c2dd-11e6-af02-0242ac110009\" success: gave up waiting for pod 'pod-secrets-d2d8640c-c2dd-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-d2d8640c-c2dd-11e6-af02-0242ac110009" success: gave up waiting for pod 'pod-secrets-d2d8640c-c2dd-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37525

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:73
Expected error:
    <*errors.errorString | 0xc4218c94b0>: {
        s: "expected pod \"pod-bd7c062f-c2df-11e6-af02-0242ac110009\" success: gave up waiting for pod 'pod-bd7c062f-c2df-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-bd7c062f-c2df-11e6-af02-0242ac110009" success: gave up waiting for pod 'pod-bd7c062f-c2df-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37500

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:77
Expected error:
    <*errors.errorString | 0xc421ce12d0>: {
        s: "expected pod \"pod-1ac589be-c2d5-11e6-af02-0242ac110009\" success: gave up waiting for pod 'pod-1ac589be-c2d5-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-1ac589be-c2d5-11e6-af02-0242ac110009" success: gave up waiting for pod 'pod-1ac589be-c2d5-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #31400

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:68
Expected error:
    <*errors.errorString | 0xc421cdbdc0>: {
        s: "expected pod \"downwardapi-volume-8b3374ee-c2db-11e6-af02-0242ac110009\" success: gave up waiting for pod 'downwardapi-volume-8b3374ee-c2db-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-8b3374ee-c2db-11e6-af02-0242ac110009" success: gave up waiting for pod 'downwardapi-volume-8b3374ee-c2db-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37423

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc42184ddb0>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 130, 211, 212, 205],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 130.211.212.205:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
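
The apiserver-restart failure is more direct: after restarting the apiserver the test re-dials the master endpoint, and getsockopt: connection refused (errno 0x6f, i.e. 111/ECONNREFUSED) means nothing was listening on 130.211.212.205:443 within the allowed window. A sketch of that reachability probe using only net and time:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer dials addr until the TCP connect succeeds or the
// deadline passes; "connection refused" is what the log above shows
// while the restarted apiserver is not yet listening.
func waitForAPIServer(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	var lastErr error
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		lastErr = err
		time.Sleep(2 * time.Second)
	}
	return lastErr
}

func main() {
	// Address from the failure above; expect an error unless it is reachable.
	fmt.Println(waitForAPIServer("130.211.212.205:443", 5*time.Second))
}
```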

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:40
Expected error:
    <*errors.errorString | 0xc421d1bf40>: {
        s: "expected pod \"pod-secrets-13711e8a-c2dd-11e6-af02-0242ac110009\" success: gave up waiting for pod 'pod-secrets-13711e8a-c2dd-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-13711e8a-c2dd-11e6-af02-0242ac110009" success: gave up waiting for pod 'pod-secrets-13711e8a-c2dd-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #35256

Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:56
Expected error:
    <*errors.errorString | 0xc4218c9dc0>: {
        s: "expected pod \"pod-host-path-test\" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-host-path-test" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #32122 #38040

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42177da00>: {
        s: "Namespace e2e-tests-services-3dctv is active",
    }
    Namespace e2e-tests-services-3dctv is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] Downward API volume should provide container's cpu limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:162
Expected error:
    <*errors.errorString | 0xc421ce0500>: {
        s: "expected pod \"downwardapi-volume-9e2d6679-c2ed-11e6-af02-0242ac110009\" success: gave up waiting for pod 'downwardapi-volume-9e2d6679-c2ed-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-9e2d6679-c2ed-11e6-af02-0242ac110009" success: gave up waiting for pod 'downwardapi-volume-9e2d6679-c2ed-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #36694

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:269
Expected error:
    <*errors.errorString | 0xc421ce10f0>: {
        s: "expected pod \"pod-configmaps-8418c967-c2f0-11e6-af02-0242ac110009\" success: gave up waiting for pod 'pod-configmaps-8418c967-c2f0-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-8418c967-c2f0-11e6-af02-0242ac110009" success: gave up waiting for pod 'pod-configmaps-8418c967-c2f0-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37515

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4215bba80>: {
        s: "Namespace e2e-tests-services-3dctv is active",
    }
    Namespace e2e-tests-services-3dctv is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:101
Expected error:
    <*errors.errorString | 0xc421d1a4b0>: {
        s: "expected pod \"pod-cd972429-c2ee-11e6-af02-0242ac110009\" success: gave up waiting for pod 'pod-cd972429-c2ee-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-cd972429-c2ee-11e6-af02-0242ac110009" success: gave up waiting for pod 'pod-cd972429-c2ee-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37439

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421326140>: {
        s: "Namespace e2e-tests-services-3dctv is active",
    }
    Namespace e2e-tests-services-3dctv is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:124
Expected error:
    <*errors.errorString | 0xc42038ecf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #28416 #31055 #33627 #33725 #34206 #37456

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:196
Expected error:
    <*errors.errorString | 0xc421356ca0>: {
        s: "expected pod \"downwardapi-volume-e26ab655-c2e4-11e6-af02-0242ac110009\" success: gave up waiting for pod 'downwardapi-volume-e26ab655-c2e4-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-e26ab655-c2e4-11e6-af02-0242ac110009" success: gave up waiting for pod 'downwardapi-volume-e26ab655-c2e4-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Downward API volume should provide container's cpu request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:180
Expected error:
    <*errors.errorString | 0xc4220d7fc0>: {
        s: "expected pod \"downwardapi-volume-626adbbb-c2e2-11e6-af02-0242ac110009\" success: gave up waiting for pod 'downwardapi-volume-626adbbb-c2e2-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-626adbbb-c2e2-11e6-af02-0242ac110009" success: gave up waiting for pod 'downwardapi-volume-626adbbb-c2e2-11e6-af02-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421cdaad0>: {
        s: "Namespace e2e-tests-services-3dctv is active",
    }
    Namespace e2e-tests-services-3dctv is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/65/

Multiple broken tests:

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:153
Expected error:
    <*errors.errorString | 0xc42038ce50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:93
Expected error:
    <*errors.errorString | 0xc422529b40>: {
        s: "expected pod \"pod-769b87c7-c32d-11e6-ab88-0242ac110002\" success: gave up waiting for pod 'pod-769b87c7-c32d-11e6-ab88-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-769b87c7-c32d-11e6-ab88-0242ac110002" success: gave up waiting for pod 'pod-769b87c7-c32d-11e6-ab88-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:48
Expected error:
    <*errors.errorString | 0xc4217d1090>: {
        s: "expected pod \"downwardapi-volume-35cb516a-c32f-11e6-ab88-0242ac110002\" success: gave up waiting for pod 'downwardapi-volume-35cb516a-c32f-11e6-ab88-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-35cb516a-c32f-11e6-ab88-0242ac110002" success: gave up waiting for pod 'downwardapi-volume-35cb516a-c32f-11e6-ab88-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #31836

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:37
Expected error:
    <*errors.errorString | 0xc421a24300>: {
        s: "expected pod \"pod-configmaps-ce98e00a-c331-11e6-ab88-0242ac110002\" success: gave up waiting for pod 'pod-configmaps-ce98e00a-c331-11e6-ab88-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-ce98e00a-c331-11e6-ab88-0242ac110002" success: gave up waiting for pod 'pod-configmaps-ce98e00a-c331-11e6-ab88-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29052

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:51
Expected error:
    <*errors.errorString | 0xc422470c00>: {
        s: "expected pod \"pod-secrets-a4e54780-c338-11e6-ab88-0242ac110002\" success: gave up waiting for pod 'pod-secrets-a4e54780-c338-11e6-ab88-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-a4e54780-c338-11e6-ab88-0242ac110002" success: gave up waiting for pod 'pod-secrets-a4e54780-c338-11e6-ab88-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:154
Expected error:
    <*errors.errorString | 0xc42038ce50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30352 #38166

Failed: [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:47
Expected error:
    <*errors.errorString | 0xc4225b58a0>: {
        s: "expected pod \"pod-secrets-b71abdb6-c32c-11e6-ab88-0242ac110002\" success: gave up waiting for pod 'pod-secrets-b71abdb6-c32c-11e6-ab88-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-b71abdb6-c32c-11e6-ab88-0242ac110002" success: gave up waiting for pod 'pod-secrets-b71abdb6-c32c-11e6-ab88-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:73
Expected error:
    <*errors.errorString | 0xc421f58120>: {
        s: "expected pod \"pod-f15e5ba2-c33c-11e6-ab88-0242ac110002\" success: gave up waiting for pod 'pod-f15e5ba2-c33c-11e6-ab88-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-f15e5ba2-c33c-11e6-ab88-0242ac110002" success: gave up waiting for pod 'pod-f15e5ba2-c33c-11e6-ab88-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37500

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:40
Expected error:
    <*errors.errorString | 0xc4221100f0>: {
        s: "expected pod \"pod-secrets-c0ac3061-c33d-11e6-ab88-0242ac110002\" success: gave up waiting for pod 'pod-secrets-c0ac3061-c33d-11e6-ab88-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-c0ac3061-c33d-11e6-ab88-0242ac110002" success: gave up waiting for pod 'pod-secrets-c0ac3061-c33d-11e6-ab88-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #35256

Failed: [k8s.io] Downward API volume should provide container's cpu limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:162
Expected error:
    <*errors.errorString | 0xc421a3f9a0>: {
        s: "expected pod \"downwardapi-volume-a4921919-c339-11e6-ab88-0242ac110002\" success: gave up waiting for pod 'downwardapi-volume-a4921919-c339-11e6-ab88-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-a4921919-c339-11e6-ab88-0242ac110002" success: gave up waiting for pod 'downwardapi-volume-a4921919-c339-11e6-ab88-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #36694

Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:56
Expected error:
    <*errors.errorString | 0xc4222ac080>: {
        s: "expected pod \"pod-host-path-test\" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-host-path-test" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #32122 #38040

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:105
Expected error:
    <*errors.errorString | 0xc4224657a0>: {
        s: "expected pod \"pod-1a3c7daf-c333-11e6-ab88-0242ac110002\" success: gave up waiting for pod 'pod-1a3c7daf-c333-11e6-ab88-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-1a3c7daf-c333-11e6-ab88-0242ac110002" success: gave up waiting for pod 'pod-1a3c7daf-c333-11e6-ab88-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #26780

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:77
Expected error:
    <*errors.errorString | 0xc422b345f0>: {
        s: "expected pod \"pod-5f26c34d-c33a-11e6-ab88-0242ac110002\" success: gave up waiting for pod 'pod-5f26c34d-c33a-11e6-ab88-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-5f26c34d-c33a-11e6-ab88-0242ac110002" success: gave up waiting for pod 'pod-5f26c34d-c33a-11e6-ab88-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #31400
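
Every failure in this run shares one signature: a test pod never reached a terminal phase within the framework's five-minute budget, so util.go:2167 reports "gave up waiting". A minimal sketch of that wait loop, written against current client-go and not the framework's exact code:

```go
package e2esketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSuccessOrFailure polls a pod until it reaches a terminal phase,
// mirroring the shape of the check these tests timed out on.
func waitForPodSuccessOrFailure(c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case v1.PodSucceeded:
			return true, nil // test payload ran to completion
		case v1.PodFailed:
			return false, fmt.Errorf("pod %q failed", name)
		default:
			return false, nil // still Pending/Running; keep polling
		}
	})
}
```

When the poll expires, wait returns "timed out waiting for the condition", which the framework wraps into the "gave up waiting ... after 5m0s" strings quoted above.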

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/66/

Multiple broken tests:

Failed: [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 04:44:05.159: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e22000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37056
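
"All nodes should be ready after test" is the framework's post-test health gate, and a single node stuck NotReady fails every test that finishes after it, which is why so many unrelated tests in this run carry the same message. A hedged sketch of the per-node predicate (the real check is stricter and also considers schedulability and network conditions):

```go
package e2esketch

import v1 "k8s.io/api/core/v1"

// nodeIsReady reports whether a node's Ready condition is True.
func nodeIsReady(node *v1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == v1.NodeReady {
			return cond.Status == v1.ConditionTrue
		}
	}
	return false // a node reporting no Ready condition counts as not ready
}
```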

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:132
Expected error:
    <*errors.errorString | 0xc4203aaca0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34104

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc421ef3b70>: {
        s: "service verification failed for: 10.99.248.103\nexpected [service1-f54dn service1-jw2fp service1-tjfc6]\nreceived []",
    }
    service verification failed for: 10.99.248.103
    expected [service1-f54dn service1-jw2fp service1-tjfc6]
    received []
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:328

Issues about this test specifically: #26128 #26685 #33408 #36298
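
"service verification failed" means the test repeatedly queried the service's ClusterIP and never saw every expected backend answer with its hostname; "received []" means no backend answered at all. Roughly, with the endpoint path and timeout as illustrative assumptions rather than the framework's exact values:

```go
package e2esketch

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// verifyServiceBackends polls a service's ClusterIP until every expected
// backend pod has answered with its hostname, or the budget runs out.
func verifyServiceBackends(serviceIP string, expected []string) error {
	seen := map[string]bool{}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := http.Get("http://" + serviceIP + "/hostName") // assumed endpoint
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			seen[strings.TrimSpace(string(body))] = true
		}
		missing := false
		for _, name := range expected {
			if !seen[name] {
				missing = true
				break
			}
		}
		if !missing {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("service verification failed: expected %v, saw %v", expected, seen)
}
```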

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 05:20:08.705: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421760a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 03:22:19.446: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4225ef400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 03:29:56.927: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ee0a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27195

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 04:39:20.893: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421140000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30851

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 05:10:31.689: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421eff400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37500

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 04:50:30.085: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420da0a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28337

Failed: [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 05:13:43.865: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421d62000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29461

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:96
Expected error:
    <*errors.errorString | 0xc4203aaca0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #36178

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 05:23:20.845: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422385400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32467 #36276

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 03:36:22.481: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ee5400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 04:02:24.305: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421461400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34226

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 03:26:31.680: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421df4a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32639

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 04:47:15.354: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4222ca000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35601

Failed: [k8s.io] Services should use same NodePort with same port but different protocols {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 05:03:35.672: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b19400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 05:29:44.683: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42220c000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26172

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 03:59:12.100: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421d92a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420d61f00>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-68d5a15f-drqg gke-bootstrap-e2e-default-pool-68d5a15f-drqg Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 01:37:20 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 01:38:00 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 01:37:20 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-68d5a15f-drqg            gke-bootstrap-e2e-default-pool-68d5a15f-drqg Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 01:37:20 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 01:37:21 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 01:37:20 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-68d5a15f-drqg gke-bootstrap-e2e-default-pool-68d5a15f-drqg Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 01:37:20 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 01:38:00 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 01:37:20 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-68d5a15f-drqg            gke-bootstrap-e2e-default-pool-68d5a15f-drqg Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 01:37:20 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 01:37:21 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 01:37:20 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #35279

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 05:59:45.610: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421892000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34367

Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 03:19:06.187: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219b6000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36242

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 06:03:35.654: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4225c0a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 05:52:02.124: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e22000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 05:45:39.655: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f34a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32949

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 05:07:19.426: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a8ca00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 04:56:52.636: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42140b400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32122 #38040

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 05:48:49.961: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ffc000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35297

Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:36
Dec 16 04:19:57.753: Failed after retrying 0 times for cadvisor to be healthy on all nodes. Errors:
[an error on the server ("Error: 'ssh: rejected: connect failed (Connection refused)'\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-68d5a15f-drqg:10250/stats/?timeout=5m0s'") has prevented the request from succeeding]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:86

Issues about this test specifically: #32371

Failed: [k8s.io] HostPath should support r/w {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 06:06:49.866: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a5c000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc4203aaca0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32830

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 05:00:10.446: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422e7ea00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 05:56:14.391: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421bb0a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 04:18:57.428: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4222ca000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] Deployment deployment should support rollback {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 05:42:27.431: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420266a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28348 #36703

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 04:53:40.404: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218c1400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37435

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc421ef3b60>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-68d5a15f-drqg boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-68d5a15f-drqg boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552
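
The restart test records each node's boot ID before rebooting it and polls until the kubelet reports a new one; a node that never reboots, or never rejoins, produces exactly this timeout. A sketch under those assumptions (the interval and timeout here are illustrative):

```go
package e2esketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForBootIDChange polls a node until its reported BootID differs from
// the one recorded before the restart.
func waitForBootIDChange(c kubernetes.Interface, nodeName, oldBootID string) error {
	return wait.PollImmediate(10*time.Second, 5*time.Minute, func() (bool, error) {
		node, err := c.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			return false, nil // node may be unreachable mid-reboot; keep polling
		}
		return node.Status.NodeInfo.BootID != oldBootID, nil
	})
}
```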

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 16 03:33:07.359: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219b6000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27507 #28275 #38583

@k8s-github-robot


https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/71/

Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Dec 17 14:33:29.203: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 10.96.3.240 8081
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:141
Dec 17 12:39:30.265: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.3.220:8080/dial?request=hostName&protocol=udp&host=10.99.243.250&port=90&tries=1'
retrieved map[]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #34064
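
The Granular Checks failures compare the set of hostnames retrieved through the test image's /dial endpoint against the expected netserver pods; "retrieved map[]" means no endpoint answered at all, while a shorter map means some answered and some did not. A sketch of that comparison, assuming the {"responses": [...]} shape the dial endpoint returns:

```go
package e2esketch

import "encoding/json"

// dialResponse assumes the netexec test image's /dial reply shape.
type dialResponse struct {
	Responses []string `json:"responses"`
}

// missingEndpoints returns the expected hostnames that never answered.
func missingEndpoints(raw []byte, expected []string) ([]string, error) {
	var dr dialResponse
	if err := json.Unmarshal(raw, &dr); err != nil {
		return nil, err
	}
	got := map[string]bool{}
	for _, host := range dr.Responses {
		got[host] = true
	}
	var missing []string
	for _, want := range expected {
		if !got[want] {
			missing = append(missing, want)
		}
	}
	return missing, nil
}
```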

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:96
Dec 17 09:38:30.041: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.5.41:8080/dial?request=hostName&protocol=http&host=10.99.252.205&port=80&tries=1'
retrieved map[netserver-1:{} netserver-2:{}]
expected map[netserver-1:{} netserver-2:{} netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #36178

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:105
Dec 17 14:55:27.083: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.2.217:8080/dial?request=hostName&protocol=udp&host=10.99.240.88&port=90&tries=1'
retrieved map[netserver-0:{} netserver-1:{}]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #34317

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Dec 17 15:36:58.657: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.2.248:8080/dial?request=hostName&protocol=udp&host=10.96.3.249&port=8081&tries=1'
retrieved map[]
expected map[netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32830

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc421be05d0>: {
        s: "service verification failed for: 10.99.253.53\nexpected [service1-j0vct service1-mtcbq service1-x936l]\nreceived [service1-mtcbq service1-x936l]",
    }
    service verification failed for: 10.99.253.53
    expected [service1-j0vct service1-mtcbq service1-x936l]
    received [service1-mtcbq service1-x936l]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:428

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:59
Expected error:
    <*errors.StatusError | 0xc422127080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'ssh: rejected: connect failed (Connection timed out)'\\nTrying to reach: 'http://10.96.3.15:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'\") has prevented the request from succeeding (post services rs-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rs-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'http://10.96.3.15:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'http://10.96.3.15:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'") has prevented the request from succeeding (post services rs-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:227

Issues about this test specifically: #27397 #27917 #31592
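
Both HPA failures here are the same fault: the load-driving POST to the resource-consumer controller service goes through the API server's SSH-tunneled node proxy, and that hop, not the consumer itself, returned the 503. Issued directly, the request is just an ordinary POST (address and parameters copied from the log, illustrative only):

```go
package e2esketch

import "net/http"

// consumeMem replays the ConsumeMem call from the log, addressed to the
// consumer pod directly rather than through the API server's node proxy.
func consumeMem() error {
	resp, err := http.Post(
		"http://10.96.3.15:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100",
		"text/plain", nil)
	if err != nil {
		return err
	}
	return resp.Body.Close()
}
```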

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.StatusError | 0xc421366800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'ssh: rejected: connect failed (Connection timed out)'\\nTrying to reach: 'http://10.96.3.236:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'\") has prevented the request from succeeding (post services rc-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rc-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'http://10.96.3.236:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'http://10.96.3.236:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'") has prevented the request from succeeding (post services rc-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:227

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:40
Dec 17 09:16:16.552: Did not get expected responses within the timeout period of 120.00 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:149

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: http [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:175
Dec 17 14:06:47.380: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://104.155.128.126:32270/hostName
retrieved map[netserver-0:{} netserver-1:{}]
expected map[netserver-2:{} netserver-0:{} netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #33730 #37417

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:152
Dec 17 09:42:22.485: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.5.44:8080/dial?request=hostName&protocol=http&host=10.99.253.27&port=80&tries=1'
retrieved map[netserver-2:{} netserver-0:{}]
expected map[netserver-2:{} netserver-0:{} netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #33887

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:114
Dec 17 09:28:26.319: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://104.155.128.126:31271/hostName
retrieved map[netserver-0:{} netserver-2:{}]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #32684 #36278 #37948

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Dec 17 13:46:45.220: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Dec 17 15:56:04.352: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Dec 17 16:23:46.870: Did not get expected responses within the timeout period of 120.00 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:163

Issues about this test specifically: #32023

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc421743b70>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1067

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Dec 17 14:49:35.079: timeout waiting 15m0s for pods size to be 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:123
Dec 17 11:13:33.147: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 104.155.128.126 32216
retrieved map[netserver-2:{} netserver-0:{}]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #36271

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/75/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420c87ba0>: {
        s: "Namespace e2e-tests-services-6s64c is active",
    }
    Namespace e2e-tests-services-6s64c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509
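
Nearly every failure in this run is secondary: the [Serial] scheduler tests refuse to start while any other e2e namespace is still alive, and e2e-tests-services-6s64c, named again in the apiserver-restart failure below, never finished deleting. A sketch of that gate:

```go
package e2esketch

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkNoLingeringNamespaces fails if any e2e-tests-* namespace other than
// the test's own still exists, matching the "Namespace ... is active" errors.
func checkNoLingeringNamespaces(c kubernetes.Interface, own string) error {
	nsList, err := c.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, ns := range nsList.Items {
		if ns.Name != own && strings.HasPrefix(ns.Name, "e2e-tests-") {
			return fmt.Errorf("Namespace %s is active", ns.Name)
		}
	}
	return nil
}
```

The "unexpected EOF" in the apiserver-restart failure below shows the scale-down of that namespace's replication controller dying mid-request, which is consistent with the namespace being left behind.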

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4215dda80>: {
        s: "Namespace e2e-tests-services-6s64c is active",
    }
    Namespace e2e-tests-services-6s64c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422235030>: {
        s: "Namespace e2e-tests-services-6s64c is active",
    }
    Namespace e2e-tests-services-6s64c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc4211db930>: {
        s: "error while stopping RC: service2: Scaling the resource failed with: Put https://104.154.72.137/api/v1/namespaces/e2e-tests-services-6s64c/replicationcontrollers/service2: unexpected EOF; Current resource version 13831",
    }
    error while stopping RC: service2: Scaling the resource failed with: Put https://104.154.72.137/api/v1/namespaces/e2e-tests-services-6s64c/replicationcontrollers/service2: unexpected EOF; Current resource version 13831
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422257560>: {
        s: "Namespace e2e-tests-services-6s64c is active",
    }
    Namespace e2e-tests-services-6s64c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421d0e8e0>: {
        s: "Namespace e2e-tests-services-6s64c is active",
    }
    Namespace e2e-tests-services-6s64c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42202f390>: {
        s: "Namespace e2e-tests-services-6s64c is active",
    }
    Namespace e2e-tests-services-6s64c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4224f0f20>: {
        s: "Namespace e2e-tests-services-6s64c is active",
    }
    Namespace e2e-tests-services-6s64c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:314
Dec 18 17:47:26.982: Node gke-bootstrap-e2e-default-pool-bb55aebe-3bxu did not become ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:291

Issues about this test specifically: #37259

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422605a00>: {
        s: "Namespace e2e-tests-services-6s64c is active",
    }
    Namespace e2e-tests-services-6s64c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4220f6f40>: {
        s: "Namespace e2e-tests-services-6s64c is active",
    }
    Namespace e2e-tests-services-6s64c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4215ecfe0>: {
        s: "Namespace e2e-tests-services-6s64c is active",
    }
    Namespace e2e-tests-services-6s64c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42200f030>: {
        s: "Namespace e2e-tests-services-6s64c is active",
    }
    Namespace e2e-tests-services-6s64c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421ab9110>: {
        s: "Namespace e2e-tests-services-6s64c is active",
    }
    Namespace e2e-tests-services-6s64c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/79/

Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:123
Dec 20 07:25:34.246: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 10.99.246.100 90
retrieved map[netserver-0:{} netserver-1:{}]
expected map[netserver-1:{} netserver-2:{} netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #36271

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Dec 20 07:29:00.077: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://10.96.1.226:8080/hostName
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc4203cf520>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: http [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:175
Dec 20 08:11:58.455: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://104.197.139.1:31387/hostName
retrieved map[netserver-0:{} netserver-2:{}]
expected map[netserver-2:{} netserver-0:{} netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #33730 #37417

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:49
Dec 20 07:11:54.448: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #30317 #31591 #37163

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/80/

Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:358
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc422770160>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:326

Issues about this test specifically: #37479

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:128
Expected error:
    <*errors.errorString | 0xc42038c7c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:73

Issues about this test specifically: #28346

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc422200f20>: {
        s: "Only 1 pods started out of 3",
    }
    Only 1 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:370

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:81
Expected error:
    <*errors.errorString | 0xc42038c7c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #30981

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
    <*errors.errorString | 0xc42038c7c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #32023

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:317
Expected error:
    <*errors.errorString | 0xc420c86150>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:300

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:47
Expected error:
    <*errors.errorString | 0xc42038c7c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:140

Issues about this test specifically: #32087

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:40
Expected error:
    <*errors.errorString | 0xc42038c7c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:140

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc421530050>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:328

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc422238a90>: {
        s: "Only 1 pods started out of 3",
    }
    Only 1 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:419

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc422757400>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:315

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:314
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc422200000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:261

Issues about this test specifically: #37259

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/81/
Multiple broken tests:

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:220
Dec 20 20:29:10.292: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:184

Issues about this test specifically: #26955

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:34
Expected error:
    <*errors.errorString | 0xc42248e090>: {
        s: "expected pod \"client-containers-09443bd2-c73e-11e6-8ed0-0242ac110009\" success: gave up waiting for pod 'client-containers-09443bd2-c73e-11e6-8ed0-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "client-containers-09443bd2-c73e-11e6-8ed0-0242ac110009" success: gave up waiting for pod 'client-containers-09443bd2-c73e-11e6-8ed0-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #34520

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:64
Expected error:
    <*errors.errorString | 0xc421ef8710>: {
        s: "expected pod \"client-containers-2ebfd212-c754-11e6-8ed0-0242ac110009\" success: gave up waiting for pod 'client-containers-2ebfd212-c754-11e6-8ed0-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "client-containers-2ebfd212-c754-11e6-8ed0-0242ac110009" success: gave up waiting for pod 'client-containers-2ebfd212-c754-11e6-8ed0-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29467

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/84/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:105
Dec 21 19:49:49.913: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.3.50:8080/dial?request=hostName&protocol=udp&host=10.99.253.83&port=90&tries=1'
retrieved map[]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #34317
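
(Triage note: the Granular Checks failures in this run all come from the same probe — the framework curls a webserver pod's /dial endpoint, which fans the request out to the service under test and reports which backends answered; `retrieved map[]` means no backend replied. A minimal standalone sketch of that probe, assuming the netexec test image's /dial query parameters and `{"responses": [...]}` JSON reply — not the e2e framework code itself:)

```go
// Sketch of the /dial probe behind the "Failed to find expected endpoints"
// errors above. Assumes the netexec image's JSON response format.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// dial asks the prober pod to contact targetHost:targetPort over protocol
// and returns the set of backend hostnames that responded.
func dial(proberPodIP, protocol, targetHost string, targetPort, tries int) (map[string]struct{}, error) {
	url := fmt.Sprintf("http://%s:8080/dial?request=hostName&protocol=%s&host=%s&port=%d&tries=%d",
		proberPodIP, protocol, targetHost, targetPort, tries)
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	// Assumed reply shape: {"responses": ["netserver-0", ...]}
	var body struct {
		Responses []string `json:"responses"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return nil, err
	}

	hosts := map[string]struct{}{}
	for _, r := range body.Responses {
		hosts[r] = struct{}{}
	}
	return hosts, nil
}

func main() {
	// IPs/port taken from the failing run above; any live prober pod works.
	got, err := dial("10.96.3.50", "udp", "10.99.253.83", 90, 1)
	fmt.Println(got, err)
}
```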

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc421a4e540>: {
        s: "service verification failed for: 10.99.253.170\nexpected [service1-h1dsc service1-mvvht service1-zgwcl]\nreceived [service1-h1dsc service1-zgwcl]",
    }
    service verification failed for: 10.99.253.170
    expected [service1-h1dsc service1-mvvht service1-zgwcl]
    received [service1-h1dsc service1-zgwcl]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:328

Issues about this test specifically: #26128 #26685 #33408 #36298
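
(For context on "service verification failed": the test hits the service ClusterIP repeatedly and records which backend pod names answer; the failure means one expected pod never responded within the retry budget. A hedged sketch of that loop — the /hostName path, port 80, and attempt count are assumptions, not the framework's exact values:)

```go
// Sketch of service verification: poll the ClusterIP until every expected
// backend pod has answered at least once, or give up.
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func verifyService(url string, expected []string, attempts int) error {
	seen := map[string]bool{}
	for i := 0; i < attempts; i++ {
		if resp, err := http.Get(url); err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			seen[strings.TrimSpace(string(body))] = true // pod's hostname
		}
		missing := 0
		for _, name := range expected {
			if !seen[name] {
				missing++
			}
		}
		if missing == 0 {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("service verification failed: expected %v, saw %v", expected, seen)
}

func main() {
	// ClusterIP and pod names taken from the failing run above.
	err := verifyService("http://10.99.253.170:80/hostName",
		[]string{"service1-h1dsc", "service1-mvvht", "service1-zgwcl"}, 30)
	fmt.Println(err)
}
```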

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc420fd9940>: {
        s: "service verification failed for: 10.99.253.211\nexpected [service1-566ml service1-lrpnx service1-wt3mh]\nreceived [service1-lrpnx service1-wt3mh]",
    }
    service verification failed for: 10.99.253.211
    expected [service1-566ml service1-lrpnx service1-wt3mh]
    received [service1-lrpnx service1-wt3mh]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:387

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:123
Dec 21 22:31:07.218: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 10.99.240.45 90
retrieved map[netserver-0:{} netserver-2:{}]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #36271
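
(The node-Service and nodePort UDP checks use a raw UDP probe rather than /dial — the `echo 'hostName' | nc -w 1 -u ...` command above. A minimal Go equivalent, for reproducing the check by hand; the 2-second read deadline mirrors the nc timeout and is an assumption:)

```go
// Sketch of the raw-UDP check: send "hostName" to the service address and
// read back the hostname of whichever backend pod answers. A read timeout
// is exactly the "missing endpoint" case in the failures above.
package main

import (
	"fmt"
	"net"
	"time"
)

func udpHostName(addr string) (string, error) {
	conn, err := net.Dial("udp", addr)
	if err != nil {
		return "", err
	}
	defer conn.Close()

	if _, err := conn.Write([]byte("hostName")); err != nil {
		return "", err
	}
	conn.SetReadDeadline(time.Now().Add(2 * time.Second))

	buf := make([]byte, 1024)
	n, err := conn.Read(buf)
	if err != nil {
		return "", err // timeout: no backend answered
	}
	return string(buf[:n]), nil
}

func main() {
	// Address taken from the failing run above.
	fmt.Println(udpHostName("10.99.240.45:90"))
}
```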

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:132
Dec 21 23:50:59.062: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.0.196:8080/dial?request=hostName&protocol=http&host=10.99.247.1&port=80&tries=1'
retrieved map[netserver-1:{} netserver-0:{}]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #34104

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:485
Dec 21 20:09:03.617: Could not reach HTTP service through 130.211.153.53:30307 after 5m0s: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2443

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Dec 22 01:12:54.343: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc42038cbb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:187
Dec 21 20:26:00.631: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 130.211.153.53 32335
retrieved map[netserver-0:{} netserver-2:{}]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #33285

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Dec 21 21:58:33.831: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.3.232:8080/dial?request=hostName&protocol=http&host=10.96.0.163&port=8080&tries=1'
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32375

Failed: [k8s.io] Services should only allow access from service loadbalancer source ranges [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1165
Expected error:
    <*errors.errorString | 0xc42038cbb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2850

Issues about this test specifically: #38174

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Dec 22 03:13:11.006: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 10.96.3.221 8081
retrieved map[]
expected map[netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc4221777c0>: {
        s: "service verification failed for: 10.99.245.150\nexpected [service1-39sk5 service1-dvdq5 service1-t9qss]\nreceived [service1-39sk5 service1-t9qss]",
    }
    service verification failed for: 10.99.245.150
    expected [service1-39sk5 service1-dvdq5 service1-t9qss]
    received [service1-39sk5 service1-t9qss]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:428

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc421801800>: {
        s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-21 23:47:28 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-21 23:48:00 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-21 23:47:28 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.2 PodIP:10.96.3.45 StartTime:2016-12-21 23:47:28 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc42149f1f0} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://e12495e0403e3d0b547b614835c78722d3a6b9b2881a09668e3d9fb009952f39}]}",
    }
    pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-21 23:47:28 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-21 23:48:00 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-21 23:47:28 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.2 PodIP:10.96.3.45 StartTime:2016-12-21 23:47:28 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc42149f1f0} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://e12495e0403e3d0b547b614835c78722d3a6b9b2881a09668e3d9fb009952f39}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:49
Dec 22 00:42:35.563: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:96
Dec 22 01:26:00.885: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.5.114:8080/dial?request=hostName&protocol=http&host=10.99.241.184&port=80&tries=1'
retrieved map[netserver-0:{} netserver-2:{}]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #36178

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:170
Failed waiting for pod wrapped-volume-race-b72a2865-c7fc-11e6-9435-0242ac110006-2blfr to enter running state
Expected error:
    <*errors.errorString | 0xc42038cbb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:383

Issues about this test specifically: #32945

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:114
Dec 21 23:44:58.893: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://10.99.241.93:80/hostName
retrieved map[netserver-1:{} netserver-2:{}]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #32684 #36278 #37948

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:141
Dec 22 03:04:19.315: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.3.209:8080/dial?request=hostName&protocol=udp&host=10.99.242.48&port=90&tries=1'
retrieved map[]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #34064

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/86/
Multiple broken tests:

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:89
Expected error:
    <*errors.errorString | 0xc422638330>: {
        s: "expected pod \"pod-553b1882-c8aa-11e6-a4ed-0242ac11000a\" success: gave up waiting for pod 'pod-553b1882-c8aa-11e6-a4ed-0242ac11000a' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-553b1882-c8aa-11e6-a4ed-0242ac11000a" success: gave up waiting for pod 'pod-553b1882-c8aa-11e6-a4ed-0242ac11000a' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #30851

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:85
Expected error:
    <*errors.errorString | 0xc4225e2a20>: {
        s: "expected pod \"pod-fecea4a8-c8b8-11e6-a4ed-0242ac11000a\" success: gave up waiting for pod 'pod-fecea4a8-c8b8-11e6-a4ed-0242ac11000a' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-fecea4a8-c8b8-11e6-a4ed-0242ac11000a" success: gave up waiting for pod 'pod-fecea4a8-c8b8-11e6-a4ed-0242ac11000a' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #34658

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:109
Expected error:
    <*errors.errorString | 0xc4223e31d0>: {
        s: "expected pod \"pod-4740ebcb-c8bb-11e6-a4ed-0242ac11000a\" success: gave up waiting for pod 'pod-4740ebcb-c8bb-11e6-a4ed-0242ac11000a' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-4740ebcb-c8bb-11e6-a4ed-0242ac11000a" success: gave up waiting for pod 'pod-4740ebcb-c8bb-11e6-a4ed-0242ac11000a' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37071

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:81
Expected error:
    <*errors.errorString | 0xc421b57b80>: {
        s: "expected pod \"pod-35e113dd-c8a0-11e6-a4ed-0242ac11000a\" success: gave up waiting for pod 'pod-35e113dd-c8a0-11e6-a4ed-0242ac11000a' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-35e113dd-c8a0-11e6-a4ed-0242ac11000a" success: gave up waiting for pod 'pod-35e113dd-c8a0-11e6-a4ed-0242ac11000a' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29224 #32008 #37564

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:117
Expected error:
    <*errors.errorString | 0xc4219f4c90>: {
        s: "expected pod \"pod-ea31773f-c8a8-11e6-a4ed-0242ac11000a\" success: gave up waiting for pod 'pod-ea31773f-c8a8-11e6-a4ed-0242ac11000a' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-ea31773f-c8a8-11e6-a4ed-0242ac11000a" success: gave up waiting for pod 'pod-ea31773f-c8a8-11e6-a4ed-0242ac11000a' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/87/
Multiple broken tests:

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 22 23:18:20.022: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42110e278), (*api.Node)(0xc42110e4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35601

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:270
Expected error:
    <errors.aggregate | len:2, cap:2>: [
        {
            s: "Resource usage on node \"gke-bootstrap-e2e-default-pool-6c24a35a-fnhp\" is not ready yet",
        },
        {
            s: "Resource usage on node \"gke-bootstrap-e2e-default-pool-6c24a35a-rmw6\" is not ready yet",
        },
    ]
    [Resource usage on node "gke-bootstrap-e2e-default-pool-6c24a35a-fnhp" is not ready yet, Resource usage on node "gke-bootstrap-e2e-default-pool-6c24a35a-rmw6" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 05:21:32.278: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c5cc78), (*api.Node)(0xc421c5cef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37774

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 00:42:02.394: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42026ec78), (*api.Node)(0xc42026eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31635 #38387

Failed: [k8s.io] Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 03:30:27.084: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c0b678), (*api.Node)(0xc420c0b8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31938

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 03:27:10.892: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fecc78), (*api.Node)(0xc420fecef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Dec 22 20:45:22.606: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 05:33:16.972: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421bb7678), (*api.Node)(0xc421bb78f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Generated release_1_5 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 02:06:44.766: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212cec78), (*api.Node)(0xc4212ceef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 04:46:41.850: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42133ec78), (*api.Node)(0xc42133eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30632

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 02:31:31.202: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211b2278), (*api.Node)(0xc4211b24f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38439

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 03:51:48.213: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206c9678), (*api.Node)(0xc4206c98f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 04:31:48.364: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212e2278), (*api.Node)(0xc4212e24f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32053 #32758

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should update the taint on a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 02:34:43.014: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ad4278), (*api.Node)(0xc420ad44f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27976 #29503

Failed: [k8s.io] V1Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 03:17:30.448: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420722278), (*api.Node)(0xc4207224f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 04:15:21.530: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42026ec78), (*api.Node)(0xc42026eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30264

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 02:03:31.061: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421470c78), (*api.Node)(0xc421470ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] HostPath should support r/w {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 04:38:25.392: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421122c78), (*api.Node)(0xc421122ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 22 21:46:35.094: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421355678), (*api.Node)(0xc4213558f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32025 #36823

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4213321f0>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 04:50:03.993: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421010c78), (*api.Node)(0xc421010ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27232

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 22 21:26:18.039: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d3f678), (*api.Node)(0xc420d3f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 22 21:43:22.870: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421456278), (*api.Node)(0xc4214564f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28507 #29315 #35595

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 03:47:35.879: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42091f678), (*api.Node)(0xc42091f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32185 #32372 #36494

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 22 21:23:05.818: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216a7678), (*api.Node)(0xc4216a78f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27245

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 22 23:11:26.560: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212f3678), (*api.Node)(0xc4212f38f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37531

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Scheduler. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 04:04:48.393: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420cfb678), (*api.Node)(0xc420cfb8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 22 21:49:47.252: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206bb678), (*api.Node)(0xc4206bb8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #33987

Failed: [k8s.io] InitContainer should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 03:33:40.424: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a02278), (*api.Node)(0xc420a024f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31873

Failed: [k8s.io] Sysctls should reject invalid sysctls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 04:11:18.092: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d19678), (*api.Node)(0xc420d198f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Expected error:
    <*errors.errorString | 0xc421332cd0>: {
        s: "Only 2 pods started out of 5",
    }
    Only 2 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:346

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 02:23:23.663: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420722c78), (*api.Node)(0xc420722ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30981

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 03:58:27.822: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206f2278), (*api.Node)(0xc4206f24f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26955

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 00:45:56.819: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d19678), (*api.Node)(0xc420d198f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:77
Requires at least 2 nodes
Expected
    <int>: 1
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:71

Issues about this test specifically: #26127 #28081

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:132
Expected error:
    <*errors.errorString | 0xc4203aae90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34104

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 04:08:07.963: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420dd0c78), (*api.Node)(0xc420dd0ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 03:20:46.436: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420daec78), (*api.Node)(0xc420daeef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 22 23:31:14.142: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d1d678), (*api.Node)(0xc420d1d8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:77
Requires at least 2 nodes
Expected
    <int>: 1
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:71

Issues about this test specifically: #28283

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 02:26:35.889: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4209bb678), (*api.Node)(0xc4209bb8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32753 #34676

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 01:53:20.873: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213aec78), (*api.Node)(0xc4213aeef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc4203aae90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 03:40:50.531: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206c8278), (*api.Node)(0xc4206c84f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32584

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 04:43:01.476: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42076cc78), (*api.Node)(0xc42076cef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 03:37:08.999: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421451678), (*api.Node)(0xc4214518f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28774 #31429

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 02:16:16.828: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206bac78), (*api.Node)(0xc4206baef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35422

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 02:19:46.425: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d26278), (*api.Node)(0xc420d264f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 03:14:16.241: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420eb9678), (*api.Node)(0xc420eb98f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34827

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 02:09:56.039: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420eb0c78), (*api.Node)(0xc420eb0ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36649

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 05:26:48.504: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fc0c78), (*api.Node)(0xc420fc0ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38083

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 02:13:06.574: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d19678), (*api.Node)(0xc420d198f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 22 21:54:23.952: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212f6278), (*api.Node)(0xc4212f64f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 03:23:58.662: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420da8278), (*api.Node)(0xc420da84f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34226

Failed: [k8s.io] Stateful Set recreate should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 22 21:16:29.310: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42131a278), (*api.Node)(0xc42131a4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] PrivilegedPod should test privileged pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 22 23:14:39.682: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42077ec78), (*api.Node)(0xc42077eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29519 #32451

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 22 21:58:27.309: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42144ec78), (*api.Node)(0xc42144eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421296320>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4212ae000>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 03:55:15.361: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42146cc78), (*api.Node)(0xc42146cef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27079

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 05:05:25.901: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420773678), (*api.Node)(0xc4207738f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37479

Failed: [k8s.io] Services should only allow access from service loadbalancer source ranges [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 22 21:33:49.841: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4209ba278), (*api.Node)(0xc4209ba4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38174

Failed: [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 22 21:40:10.132: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420db4c78), (*api.Node)(0xc420db4ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 22 23:21:32.184: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42026f678), (*api.Node)(0xc42026f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29221

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 04:35:13.129: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ddcc78), (*api.Node)(0xc420ddcef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30441

Failed: DiffResources {e2e.go}

Error: 2 leaked resources
[ disks ]
+NAME                                                             ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-f1b6-pvc-5b4af7a6-c8c8-11e6-ba39-42010af0001b  us-central1-a  1        pd-standard  READY

Issues about this test specifically: #33373 #33416 #34060

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 22 21:19:53.591: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42072b678), (*api.Node)(0xc42072b8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28416 #31055 #33627 #33725 #34206 #37456

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 05:30:00.720: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4209a6c78), (*api.Node)(0xc4209a6ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26780

Failed: [k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 01:59:58.843: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206ef678), (*api.Node)(0xc4206ef8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32054 #36010
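
Most of the failures in this run are not test-specific: they share the `All nodes should be ready after test` message emitted by the framework's post-test node check, and the two `*api.Node` pointers in each message are the two nodes whose Ready condition was not True when the test finished. A minimal sketch of what such a check does, assuming a current client-go API rather than the framework's actual implementation:

```go
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isNodeReady reports whether the node's Ready condition is True.
func isNodeReady(node *v1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == v1.NodeReady {
			return cond.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Any node listed here would appear as "Not ready" in the messages above.
	for i := range nodes.Items {
		if !isNodeReady(&nodes.Items[i]) {
			fmt.Printf("not ready: %s\n", nodes.Items[i].Name)
		}
	}
}
```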

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/88/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4212bf960>: {
        s: "Namespace e2e-tests-services-58hth is active",
    }
    Namespace e2e-tests-services-58hth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340
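
The `Namespace e2e-tests-services-58hth is active` errors in this run are a setup failure: the SchedulerPredicates suite refuses to start while a namespace leaked by an earlier test still exists. A hypothetical helper with the same shape (`findActiveE2ENamespaces` and the `e2e-tests-` prefix are illustrative assumptions, not the framework's code):

```go
package e2echeck

import (
	"context"
	"fmt"
	"strings"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// findActiveE2ENamespaces returns an error naming the first leftover e2e
// test namespace that is still Active -- the shape of the
// "Namespace ... is active" failures above.
func findActiveE2ENamespaces(ctx context.Context, cs kubernetes.Interface) error {
	nsList, err := cs.CoreV1().Namespaces().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, ns := range nsList.Items {
		if strings.HasPrefix(ns.Name, "e2e-tests-") && ns.Status.Phase == v1.NamespaceActive {
			return fmt.Errorf("Namespace %s is active", ns.Name)
		}
	}
	return nil
}
```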

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:163
Dec 23 13:04:04.248: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.2.163:8080/dial?request=hostName&protocol=udp&host=10.99.255.249&port=90&tries=1'
retrieved map[netserver-2:{}]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #34250
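
The granular endpoint checks work by asking one pod to dial the service and report which backends answered; the failure above means only `netserver-2` ever responded over UDP. A sketch of that probe, assuming the `/dial` endpoint returns JSON of the form `{"responses": [...]}` (an assumption for illustration, not a verified contract of the test image):

```go
package e2echeck

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// dialResponse is the assumed JSON shape of the netserver /dial reply,
// e.g. {"responses": ["netserver-2"]}.
type dialResponse struct {
	Responses []string `json:"responses"`
}

// probeEndpoints issues one dial request (the curl shown in the log) and
// fails if any expected backend did not answer.
func probeEndpoints(dialURL string, expected map[string]struct{}) error {
	resp, err := http.Get(dialURL)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	var dr dialResponse
	if err := json.NewDecoder(resp.Body).Decode(&dr); err != nil {
		return err
	}

	retrieved := make(map[string]struct{}, len(dr.Responses))
	for _, name := range dr.Responses {
		retrieved[name] = struct{}{}
	}
	for name := range expected {
		if _, ok := retrieved[name]; !ok {
			return fmt.Errorf("retrieved %v, expected %v", retrieved, expected)
		}
	}
	return nil
}
```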

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:170
Failed waiting for pod wrapped-volume-race-bdf23bcf-c956-11e6-b14e-0242ac110004-0r3jt to enter running state
Expected error:
    <*errors.errorString | 0xc4203d4f60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:383

Issues about this test specifically: #32945

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42174b4a0>: {
        s: "Namespace e2e-tests-services-58hth is active",
    }
    Namespace e2e-tests-services-58hth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421788180>: {
        s: "Namespace e2e-tests-services-58hth is active",
    }
    Namespace e2e-tests-services-58hth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:808
Dec 23 12:20:24.519: Could not reach HTTP service through 104.197.155.29:32594 after 5m0s: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2443

Issues about this test specifically: #26134

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc4209ba540>: {
        s: "error while stopping RC: service2: timed out waiting for \"service2\" to be synced",
    }
    error while stopping RC: service2: timed out waiting for "service2" to be synced
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Dec 23 12:53:27.849: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27406 #27669 #29770 #32642
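
The HPA failures time out waiting for the replica count to settle at the target. A hypothetical equivalent of that wait loop (`waitForPodCount` is illustrative, not the actual e2e helper):

```go
package e2echeck

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodCount polls the pods matching selector until their number
// equals want, failing after timeout -- the same shape as the
// "timeout waiting 15m0s for pods size to be 3" messages above.
func waitForPodCount(ctx context.Context, cs kubernetes.Interface, ns, selector string, want int, timeout time.Duration) error {
	return wait.PollImmediate(10*time.Second, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		return len(pods.Items) == want, nil
	})
}
```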

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216d2540>: {
        s: "Namespace e2e-tests-services-58hth is active",
    }
    Namespace e2e-tests-services-58hth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:49
Dec 23 13:20:10.169: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216b2da0>: {
        s: "Namespace e2e-tests-services-58hth is active",
    }
    Namespace e2e-tests-services-58hth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42176a150>: {
        s: "Namespace e2e-tests-services-58hth is active",
    }
    Namespace e2e-tests-services-58hth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc4203d4f60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571
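
The recurring `timed out waiting for the condition` string is likewise not test-specific: it is the error returned by the apimachinery wait helpers whenever a polled condition never becomes true, e.g.:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// PollImmediate runs the condition every interval until the timeout.
	// When the condition never returns true, it gives back
	// wait.ErrWaitTimeout, whose message is exactly the string seen in
	// the failures above.
	err := wait.PollImmediate(time.Second, 3*time.Second, func() (bool, error) {
		return false, nil // condition that never succeeds
	})
	fmt.Println(err) // prints: timed out waiting for the condition
}
```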

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc4225ebb70>: {
        s: "service verification failed for: 10.99.255.85\nexpected [service1-k6rcx service1-kts8k service1-n1l1h]\nreceived [service1-kts8k]",
    }
    service verification failed for: 10.99.255.85
    expected [service1-k6rcx service1-kts8k service1-n1l1h]
    received [service1-kts8k]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:387

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4215f2590>: {
        s: "Namespace e2e-tests-services-58hth is active",
    }
    Namespace e2e-tests-services-58hth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:65
Expected error:
    <*errors.errorString | 0xc421462c40>: {
        s: "error waiting for deployment \"test-rolling-update-deployment\" status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment "test-rolling-update-deployment" status to match expectation: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:333

Issues about this test specifically: #31075 #36286 #38041

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421f3c0e0>: {
        s: "Namespace e2e-tests-services-58hth is active",
    }
    Namespace e2e-tests-services-58hth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422f539f0>: {
        s: "Namespace e2e-tests-services-58hth is active",
    }
    Namespace e2e-tests-services-58hth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4212bf250>: {
        s: "Namespace e2e-tests-services-58hth is active",
    }
    Namespace e2e-tests-services-58hth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422b106f0>: {
        s: "Namespace e2e-tests-services-58hth is active",
    }
    Namespace e2e-tests-services-58hth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:943
Dec 23 12:31:22.099: Verified 0 of 1 pods, error: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:293

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:89
Expected error:
    <*errors.errorString | 0xc422c1ed40>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:953

Issues about this test specifically: #29629 #36270 #37462

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421362050>: {
        s: "Namespace e2e-tests-services-58hth is active",
    }
    Namespace e2e-tests-services-58hth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:141
Dec 23 12:23:23.674: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.4.52:8080/dial?request=hostName&protocol=udp&host=10.99.243.225&port=90&tries=1'
retrieved map[netserver-2:{} netserver-0:{}]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #34064

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420a6b550>: {
        s: "Namespace e2e-tests-services-58hth is active",
    }
    Namespace e2e-tests-services-58hth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/91/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Dec 24 09:24:39.808: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:141
Dec 24 09:44:21.189: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.3.241:8080/dial?request=hostName&protocol=udp&host=10.99.254.208&port=90&tries=1'
retrieved map[netserver-1:{} netserver-0:{}]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #34064

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:485
Dec 24 08:10:11.113: Could not reach HTTP service through 104.197.223.162:30015 after 5m0s: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2443

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Dec 24 08:51:31.496: Cannot add new entry in 180 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1585

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Dec 24 11:09:26.617: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.4.69:8080/dial?request=hostName&protocol=http&host=10.96.3.242&port=8080&tries=1'
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32375

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: http [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:175
Dec 24 11:31:24.231: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://104.197.223.162:32765/hostName
retrieved map[netserver-1:{} netserver-0:{}]
expected map[netserver-1:{} netserver-2:{} netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #33730 #37417

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:167
validating pre-stop.
Expected error:
    <*errors.errorString | 0xc4203ace10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:159

Issues about this test specifically: #30287 #35953

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:59
Dec 24 11:25:32.542: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] Services should only allow access from service loadbalancer source ranges [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1165
Expected error:
    <*errors.errorString | 0xc4203ace10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2850

Issues about this test specifically: #38174

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/95/
Multiple broken tests:

Failed: [k8s.io] DNS config map should be able to change configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_configmap.go:66
Expected error:
    <*errors.errorString | 0xc42038ad00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_configmap.go:283

Issues about this test specifically: #37144

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:324
Dec 25 22:45:18.762: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1098
Expected error:
    <*errors.errorString | 0xc420b92060>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #26172

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:74
Expected error:
    <*errors.errorString | 0xc4225742f0>: {
        s: "want pod 'test-webserver-ff665b51-cb24-11e6-b0ea-0242ac110007' on 'gke-bootstrap-e2e-default-pool-3f1efcde-gf5x' to be 'Running' but was 'Pending'",
    }
    want pod 'test-webserver-ff665b51-cb24-11e6-b0ea-0242ac110007' on 'gke-bootstrap-e2e-default-pool-3f1efcde-gf5x' to be 'Running' but was 'Pending'
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:56

Issues about this test specifically: #29521

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:135
Expected error:
    <*errors.errorString | 0xc42038ad00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #32467 #36276

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:334
Dec 25 21:48:18.676: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc420d83d40>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: total pods available: 15, less than the min required: 18",
    }
    error waiting for deployment "nginx" status to match expectation: total pods available: 15, less than the min required: 18
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1120

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/97/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420d7aa80>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:32:40 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:32:40 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #36914
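
All of the scheduler failures in this run are the same setup problem: the suite waits up to 5m0s for every kube-system pod to be Running and Ready, and the fluentd-cloud-logging pod stays Pending with ContainersNotReady. A sketch of that readiness predicate (hypothetical helper, assuming current client-go types):

```go
package e2echeck

import (
	v1 "k8s.io/api/core/v1"
)

// podRunningAndReady mirrors the property the setup demands of every
// kube-system pod: phase Running with a Ready condition of True. The
// fluentd pod above fails both checks (Pending, Ready=False).
func podRunningAndReady(pod *v1.Pod) bool {
	if pod.Status.Phase != v1.PodRunning {
		return false
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == v1.PodReady {
			return cond.Status == v1.ConditionTrue
		}
	}
	return false
}
```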

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420ab01e0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:32:40 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:32:40 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4221ab5f0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 13:44:50 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 13:44:50 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28019

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421db9d20>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 12:21:15 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 12:21:15 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422228ba0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 13:44:50 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 13:44:50 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #33883

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc422331bf0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 13:44:50 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 13:44:50 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42271e2c0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 13:44:50 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 13:44:50 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422229df0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 12:21:15 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 12:21:15 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4218b84e0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 13:44:50 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 13:44:50 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421061b10>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:32:40 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:32:40 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42178a0e0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:32:40 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:32:40 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216a3d60>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 12:21:15 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 12:21:15 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421919900>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:32:40 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:32:40 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42184d110>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:32:40 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:32:40 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42066e7b0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:32:40 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:32:40 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42271f730>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 13:44:50 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 13:44:50 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc422651f30>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 13:44:50 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 13:44:50 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4220c09c0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 12:21:15 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 12:21:15 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421659280>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:32:40 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9a8be591-hnjl gke-bootstrap-e2e-default-pool-9a8be591-hnjl Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:15:26 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:31:26 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-26 08:32:40 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #31918
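
Every failure in this run trips the same gate: the suite refuses to start a scheduler-predicate test until all kube-system pods are Running and Ready within 5m0s, and the fluentd-cloud-logging pod on node `-hnjl` never left Pending. Below is a minimal diagnostic sketch of that readiness check, assuming a modern client-go (the 1.5-era suite used an equivalent helper in test/e2e/framework, so names here are illustrative, not the suite's code):

```go
// List every pod in kube-system and report any that is not Running with
// Ready=True -- the same condition the "1 / 11 pods ... NOT in RUNNING and
// READY state" message is checking.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if p.Status.Phase != corev1.PodRunning || !ready {
			// Prints pods like the stuck fluentd-cloud-logging one above.
			fmt.Printf("NOT ready: %s phase=%s\n", p.Name, p.Status.Phase)
		}
	}
}
```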

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/100/
Multiple broken tests:

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc421899530>: {
        s: "error while stopping RC: service2: timed out waiting for \"service2\" to be synced",
    }
    error while stopping RC: service2: timed out waiting for "service2" to be synced
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e6ca30>: {
        s: "Namespace e2e-tests-services-pwnbh is active",
    }
    Namespace e2e-tests-services-pwnbh is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422622b90>: {
        s: "Namespace e2e-tests-services-pwnbh is active",
    }
    Namespace e2e-tests-services-pwnbh is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422429730>: {
        s: "Namespace e2e-tests-services-pwnbh is active",
    }
    Namespace e2e-tests-services-pwnbh is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4226f48f0>: {
        s: "Namespace e2e-tests-services-pwnbh is active",
    }
    Namespace e2e-tests-services-pwnbh is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42261f330>: {
        s: "Namespace e2e-tests-services-pwnbh is active",
    }
    Namespace e2e-tests-services-pwnbh is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42244b190>: {
        s: "Namespace e2e-tests-services-pwnbh is active",
    }
    Namespace e2e-tests-services-pwnbh is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421daaa30>: {
        s: "Namespace e2e-tests-services-pwnbh is active",
    }
    Namespace e2e-tests-services-pwnbh is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4224a9230>: {
        s: "Namespace e2e-tests-services-pwnbh is active",
    }
    Namespace e2e-tests-services-pwnbh is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4227afd10>: {
        s: "Namespace e2e-tests-services-pwnbh is active",
    }
    Namespace e2e-tests-services-pwnbh is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42247bc20>: {
        s: "Namespace e2e-tests-services-pwnbh is active",
    }
    Namespace e2e-tests-services-pwnbh is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421dd26b0>: {
        s: "Namespace e2e-tests-services-pwnbh is active",
    }
    Namespace e2e-tests-services-pwnbh is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e6a540>: {
        s: "Namespace e2e-tests-services-pwnbh is active",
    }
    Namespace e2e-tests-services-pwnbh is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42247b2c0>: {
        s: "Namespace e2e-tests-services-pwnbh is active",
    }
    Namespace e2e-tests-services-pwnbh is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279
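
The recurring "Namespace e2e-tests-services-pwnbh is active" errors all come from the pre-flight assertion at scheduler_predicates.go:78: each [Serial] predicate test first verifies that no namespace from an earlier test is still around, so a single namespace leaked by the disruptive Services test fails every predicate test that follows it. A rough reconstruction of that check, assuming modern client-go (this is a sketch, not the suite's code):

```go
// Poll until no leftover e2e-tests-* namespace exists; a namespace stuck
// in Terminating (as after an apiserver restart) keeps this failing.
package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForNoE2ENamespaces(cs kubernetes.Interface) error {
	return wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
		nss, err := cs.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		for _, ns := range nss.Items {
			if strings.HasPrefix(ns.Name, "e2e-tests-") {
				// Same message as the failures above.
				fmt.Printf("Namespace %s is active\n", ns.Name)
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if err := waitForNoE2ENamespaces(kubernetes.NewForConfigOrDie(cfg)); err != nil {
		panic(err)
	}
}
```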

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/106/
Multiple broken tests:

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:205
Expected error:
    <*errors.errorString | 0xc4203ab940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:186

Issues about this test specifically: #28283

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:432
Dec 29 06:46:20.057: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37774

Failed: [k8s.io] Pod Disks should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:257
Expected error:
    <*errors.errorString | 0xc4203ab940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:246

Issues about this test specifically: #29752 #36837

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:309
Expected error:
    <*errors.errorString | 0xc4203ab940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:298

Issues about this test specifically: #28297 #37101 #38201

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:71
Expected error:
    <*errors.errorString | 0xc422b80010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:423

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Dec 29 05:55:32.085: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc422644010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:547

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:141
Dec 29 11:44:59.778: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37361 #37919

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:104
Expected error:
    <*errors.errorString | 0xc420fa1720>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:22, Replicas:3, UpdatedReplicas:3, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63618623663, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63618623663, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}, extensions.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63618623698, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63618623661, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:\"NewReplicaSetAvailable\", Message:\"Replica set \\\"nginx-641789310\\\" has successfully progressed.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:22, Replicas:3, UpdatedReplicas:3, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63618623663, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63618623663, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, extensions.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63618623698, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63618623661, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:"NewReplicaSetAvailable", Message:"Replica set \"nginx-641789310\" has successfully progressed."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1465

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc42204c110>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1067

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458
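
The deployment failures in this run are all non-convergence of DeploymentStatus; in the iterative-rollouts dump above, UpdatedReplicas is 3 but AvailableReplicas is stuck at 2. A sketch of the completion condition such waits boil down to (apps/v1 types here; the 1.5 suite used the extensions/v1beta1 equivalents, so this is an approximation):

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

// deploymentComplete reports whether a Deployment has fully rolled out:
// every replica updated, available, and the controller caught up.
func deploymentComplete(d *appsv1.Deployment) bool {
	return d.Status.UpdatedReplicas == *d.Spec.Replicas &&
		d.Status.Replicas == *d.Spec.Replicas &&
		d.Status.AvailableReplicas == *d.Spec.Replicas &&
		d.Status.ObservedGeneration >= d.Generation
}

func main() {
	replicas := int32(3)
	d := &appsv1.Deployment{}
	d.Spec.Replicas = &replicas
	d.Status.UpdatedReplicas = 3
	d.Status.Replicas = 3
	d.Status.AvailableReplicas = 2 // one pod unavailable, as in the log above
	fmt.Println("rolled out:", deploymentComplete(d)) // false
}
```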

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/107/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216fcb20>: {
        s: "Namespace e2e-tests-services-twl0p is active",
    }
    Namespace e2e-tests-services-twl0p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42266d180>: {
        s: "Namespace e2e-tests-services-twl0p is active",
    }
    Namespace e2e-tests-services-twl0p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420dc5a50>: {
        s: "Namespace e2e-tests-services-twl0p is active",
    }
    Namespace e2e-tests-services-twl0p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422465dc0>: {
        s: "Namespace e2e-tests-services-twl0p is active",
    }
    Namespace e2e-tests-services-twl0p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420797a30>: {
        s: "Namespace e2e-tests-services-twl0p is active",
    }
    Namespace e2e-tests-services-twl0p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421252780>: {
        s: "Namespace e2e-tests-services-twl0p is active",
    }
    Namespace e2e-tests-services-twl0p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42168ce30>: {
        s: "Namespace e2e-tests-services-twl0p is active",
    }
    Namespace e2e-tests-services-twl0p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:34
Expected error:
    <*errors.errorString | 0xc422464420>: {
        s: "expected pod \"client-containers-85586d65-ce37-11e6-a13a-0242ac110007\" success: gave up waiting for pod 'client-containers-85586d65-ce37-11e6-a13a-0242ac110007' to be 'success or failure' after 5m0s",
    }
    expected pod "client-containers-85586d65-ce37-11e6-a13a-0242ac110007" success: gave up waiting for pod 'client-containers-85586d65-ce37-11e6-a13a-0242ac110007' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #34520

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216f6a90>: {
        s: "Namespace e2e-tests-services-twl0p is active",
    }
    Namespace e2e-tests-services-twl0p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42166fcd0>: {
        s: "Namespace e2e-tests-services-twl0p is active",
    }
    Namespace e2e-tests-services-twl0p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:43
Expected error:
    <*errors.errorString | 0xc420a4fb90>: {
        s: "expected pod \"client-containers-dde5c0f0-ce27-11e6-a13a-0242ac110007\" success: gave up waiting for pod 'client-containers-dde5c0f0-ce27-11e6-a13a-0242ac110007' to be 'success or failure' after 5m0s",
    }
    expected pod "client-containers-dde5c0f0-ce27-11e6-a13a-0242ac110007" success: gave up waiting for pod 'client-containers-dde5c0f0-ce27-11e6-a13a-0242ac110007' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #36706

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421979180>: {
        s: "Namespace e2e-tests-services-twl0p is active",
    }
    Namespace e2e-tests-services-twl0p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a30740>: {
        s: "Namespace e2e-tests-services-twl0p is active",
    }
    Namespace e2e-tests-services-twl0p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42129e700>: {
        s: "Namespace e2e-tests-services-twl0p is active",
    }
    Namespace e2e-tests-services-twl0p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc420e30870>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 154, 151, 40],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 104.154.151.40:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
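
In the net.OpError dump above, the 16-byte IP is just the IPv4-mapped form of 104.154.151.40, and `Err: 0x6f` is errno 111 (ECONNREFUSED): the test dialed the master while the restarted apiserver was not yet listening again. A sketch of a tolerant probe for that window (address and timeouts are illustrative):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer retries a plain TCP dial until the endpoint accepts
// connections or the deadline passes.
func waitForAPIServer(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		// e.g. "connection refused" while the apiserver restarts
		fmt.Println("still down:", err)
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver %s not reachable after %v", addr, timeout)
}

func main() {
	if err := waitForAPIServer("104.154.151.40:443", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```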

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/114/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4227f75e0>: {
        s: "Namespace e2e-tests-services-m3tdg is active",
    }
    Namespace e2e-tests-services-m3tdg is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc4219a7c90>: {
        s: "error waiting for deployment \"test-rollover-deployment\" status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment "test-rollover-deployment" status to match expectation: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:598

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc4227a1cc0>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 154, 151, 40],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 104.154.151.40:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:287
Jan  1 00:14:54.803: Failed to read from kubectl port-forward stdout: EOF
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:154

Issues about this test specifically: #27680 #38211
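
"Failed to read from kubectl port-forward stdout: EOF" means the test's `kubectl port-forward` child process exited before printing its "Forwarding from ..." line, so the first read hit EOF. A stripped-down sketch of that interaction (pod name and arguments are illustrative):

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
)

func main() {
	// ":80" asks kubectl to pick a random local port and print it.
	cmd := exec.Command("kubectl", "port-forward", "pod/pfpod", ":80")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	line, err := bufio.NewReader(stdout).ReadString('\n')
	if err != nil {
		// The failure mode in the log above: kubectl died silently.
		fmt.Println("failed to read from kubectl port-forward stdout:", err)
		return
	}
	fmt.Println("port-forward reported:", line)
	cmd.Wait()
}
```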

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42287a670>: {
        s: "Namespace e2e-tests-services-m3tdg is active",
    }
    Namespace e2e-tests-services-m3tdg is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422415c30>: {
        s: "Namespace e2e-tests-services-m3tdg is active",
    }
    Namespace e2e-tests-services-m3tdg is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/118/
Multiple broken tests:

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203d2fb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #34212

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203d2fb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #28503

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203d2fb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #36178

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421d409d0>: {
        s: "Namespace e2e-tests-services-rcn6d is active",
    }
    Namespace e2e-tests-services-rcn6d is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42237d270>: {
        s: "Namespace e2e-tests-services-rcn6d is active",
    }
    Namespace e2e-tests-services-rcn6d is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420ab5d90>: {
        s: "Namespace e2e-tests-services-rcn6d is active",
    }
    Namespace e2e-tests-services-rcn6d is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420f7d030>: {
        s: "Namespace e2e-tests-services-rcn6d is active",
    }
    Namespace e2e-tests-services-rcn6d is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203d2fb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #37361 #37919

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421d93160>: {
        s: "Namespace e2e-tests-services-rcn6d is active",
    }
    Namespace e2e-tests-services-rcn6d is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203d2fb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420697b00>: {
        s: "Namespace e2e-tests-services-rcn6d is active",
    }
    Namespace e2e-tests-services-rcn6d is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4223bb500>: {
        s: "Namespace e2e-tests-services-rcn6d is active",
    }
    Namespace e2e-tests-services-rcn6d is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4225de650>: {
        s: "Namespace e2e-tests-services-rcn6d is active",
    }
    Namespace e2e-tests-services-rcn6d is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203d2fb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421d40a30>: {
        s: "Namespace e2e-tests-services-rcn6d is active",
    }
    Namespace e2e-tests-services-rcn6d is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203d2fb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #32025 #36823

Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203d2fb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #29512

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42276fa20>: {
        s: "Namespace e2e-tests-services-rcn6d is active",
    }
    Namespace e2e-tests-services-rcn6d is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42284cdb0>: {
        s: "Namespace e2e-tests-services-rcn6d is active",
    }
    Namespace e2e-tests-services-rcn6d is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] Deployment deployment should support rollback when there's replica set with no revision {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203d2fb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #34687 #38442

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203d2fb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #35279

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc42001a190>: {s: "unexpected EOF"}
    unexpected EOF
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42180c350>: {
        s: "Namespace e2e-tests-services-rcn6d is active",
    }
    Namespace e2e-tests-services-rcn6d is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42128ada0>: {
        s: "Namespace e2e-tests-services-rcn6d is active",
    }
    Namespace e2e-tests-services-rcn6d is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] Sysctls should reject invalid sysctls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203d2fb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42237d640>: {
        s: "Namespace e2e-tests-services-rcn6d is active",
    }
    Namespace e2e-tests-services-rcn6d is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918
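
The framework.go:141/229 failures in this run never reach the test body: per-test setup creates a fresh namespace and then polls until its "default" service account exists, and that poll is what returns "timed out waiting for the condition". A sketch of that setup step, assuming modern client-go (helper and namespace names are illustrative):

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDefaultServiceAccount mirrors the framework's per-test setup: a
// fresh namespace is unusable until the "default" service account appears,
// so setup polls for it. wait.PollImmediate's timeout error carries exactly
// the message seen above: "timed out waiting for the condition".
func waitForDefaultServiceAccount(cs kubernetes.Interface, ns string) error {
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().ServiceAccounts(ns).Get(context.TODO(), "default", metav1.GetOptions{})
		return err == nil, nil // keep polling while it is absent
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	err = waitForDefaultServiceAccount(kubernetes.NewForConfigOrDie(cfg), "e2e-tests-example")
	fmt.Println("setup result:", err)
}
```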

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/121/
Multiple broken tests:

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421ce0120>: {
        s: "Namespace e2e-tests-services-xv2l7 is active",
    }
    Namespace e2e-tests-services-xv2l7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421992540>: {
        s: "Namespace e2e-tests-services-xv2l7 is active",
    }
    Namespace e2e-tests-services-xv2l7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420d6c710>: {
        s: "Namespace e2e-tests-services-xv2l7 is active",
    }
    Namespace e2e-tests-services-xv2l7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422cbaf50>: {
        s: "Namespace e2e-tests-services-xv2l7 is active",
    }
    Namespace e2e-tests-services-xv2l7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4211c3fa0>: {
        s: "Namespace e2e-tests-services-xv2l7 is active",
    }
    Namespace e2e-tests-services-xv2l7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421ad3a30>: {
        s: "Namespace e2e-tests-services-xv2l7 is active",
    }
    Namespace e2e-tests-services-xv2l7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4227075c0>: {
        s: "Namespace e2e-tests-services-xv2l7 is active",
    }
    Namespace e2e-tests-services-xv2l7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc420ca1040>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 30, 36],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 35.184.30.36:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
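
The `Err: 0x6f` in the `*net.OpError` above is errno 111 (`ECONNREFUSED`): the test dialed the master's external IP on port 443 while the apiserver was still restarting and nothing was listening yet. The restart helper is expected to keep dialing until the port answers again; a minimal sketch of that wait, assuming a plain TCP probe (the name `waitForAPIServer` is hypothetical):

```go
package e2eutil

import (
	"net"
	"time"
)

// waitForAPIServer dials host:443 until the TCP connect succeeds or the
// deadline passes. A connection-refused result, as in the failure
// above, only means the apiserver has not finished restarting yet.
func waitForAPIServer(host string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	var lastErr error
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "443"), 5*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		lastErr = err
		time.Sleep(2 * time.Second)
	}
	return lastErr
}
```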

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420ff8220>: {
        s: "Namespace e2e-tests-services-xv2l7 is active",
    }
    Namespace e2e-tests-services-xv2l7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4224eb0f0>: {
        s: "Namespace e2e-tests-services-xv2l7 is active",
    }
    Namespace e2e-tests-services-xv2l7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422c32040>: {
        s: "Namespace e2e-tests-services-xv2l7 is active",
    }
    Namespace e2e-tests-services-xv2l7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42293f890>: {
        s: "Namespace e2e-tests-services-xv2l7 is active",
    }
    Namespace e2e-tests-services-xv2l7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4214e2a30>: {
        s: "Namespace e2e-tests-services-xv2l7 is active",
    }
    Namespace e2e-tests-services-xv2l7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4220974c0>: {
        s: "Namespace e2e-tests-services-xv2l7 is active",
    }
    Namespace e2e-tests-services-xv2l7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421992ad0>: {
        s: "Namespace e2e-tests-services-xv2l7 is active",
    }
    Namespace e2e-tests-services-xv2l7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4218baae0>: {
        s: "Namespace e2e-tests-services-xv2l7 is active",
    }
    Namespace e2e-tests-services-xv2l7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42265b000>: {
        s: "Namespace e2e-tests-services-xv2l7 is active",
    }
    Namespace e2e-tests-services-xv2l7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421674120>: {
        s: "Namespace e2e-tests-services-xv2l7 is active",
    }
    Namespace e2e-tests-services-xv2l7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/122/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421f4b960>: {
        s: "Namespace e2e-tests-pet-set-recreate-8qd2w is active",
    }
    Namespace e2e-tests-pet-set-recreate-8qd2w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e44630>: {
        s: "Namespace e2e-tests-pet-set-recreate-8qd2w is active",
    }
    Namespace e2e-tests-pet-set-recreate-8qd2w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4219702d0>: {
        s: "Namespace e2e-tests-pet-set-recreate-8qd2w is active",
    }
    Namespace e2e-tests-pet-set-recreate-8qd2w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:246
Jan  3 14:55:50.745: Pods on node gke-bootstrap-e2e-default-pool-2b038bfd-c06t did not become ready and running within 2m0s: gave up waiting for matching pods to be 'Running and Ready' after 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:235

Issues about this test specifically: #36794

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42191aa10>: {
        s: "Namespace e2e-tests-pet-set-recreate-8qd2w is active",
    }
    Namespace e2e-tests-pet-set-recreate-8qd2w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42119a1c0>: {
        s: "Namespace e2e-tests-pet-set-recreate-8qd2w is active",
    }
    Namespace e2e-tests-pet-set-recreate-8qd2w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420faa790>: {
        s: "Namespace e2e-tests-pet-set-recreate-8qd2w is active",
    }
    Namespace e2e-tests-pet-set-recreate-8qd2w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420fab420>: {
        s: "Namespace e2e-tests-pet-set-recreate-8qd2w is active",
    }
    Namespace e2e-tests-pet-set-recreate-8qd2w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420ed7660>: {
        s: "Namespace e2e-tests-pet-set-recreate-8qd2w is active",
    }
    Namespace e2e-tests-pet-set-recreate-8qd2w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4206d9d40>: {
        s: "Namespace e2e-tests-pet-set-recreate-8qd2w is active",
    }
    Namespace e2e-tests-pet-set-recreate-8qd2w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42166f930>: {
        s: "Namespace e2e-tests-pet-set-recreate-8qd2w is active",
    }
    Namespace e2e-tests-pet-set-recreate-8qd2w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421dbf820>: {
        s: "Namespace e2e-tests-pet-set-recreate-8qd2w is active",
    }
    Namespace e2e-tests-pet-set-recreate-8qd2w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4218ff4a0>: {
        s: "Namespace e2e-tests-pet-set-recreate-8qd2w is active",
    }
    Namespace e2e-tests-pet-set-recreate-8qd2w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422258ee0>: {
        s: "Namespace e2e-tests-pet-set-recreate-8qd2w is active",
    }
    Namespace e2e-tests-pet-set-recreate-8qd2w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422172400>: {
        s: "Namespace e2e-tests-pet-set-recreate-8qd2w is active",
    }
    Namespace e2e-tests-pet-set-recreate-8qd2w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421dbf920>: {
        s: "Namespace e2e-tests-pet-set-recreate-8qd2w is active",
    }
    Namespace e2e-tests-pet-set-recreate-8qd2w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] Stateful Set recreate should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:501
Jan  3 09:33:10.598: Pod test-pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:456

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/123/
Multiple broken tests:

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc420348670>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26168 #27450
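
The bare "timed out waiting for the condition" here (and in the other DNS and kube-proxy failures in this run) is the message of `wait.ErrWaitTimeout` from `k8s.io/apimachinery/pkg/util/wait`: a polled condition never returned true before the timeout, and the error does not say which condition. A minimal sketch of the polling shape these tests use (`waitForDNSAnswer` and its probe are hypothetical names):

```go
package e2eutil

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForDNSAnswer polls a lookup probe until it reports success.
// On timeout, wait.Poll returns wait.ErrWaitTimeout, whose Error()
// string is exactly "timed out waiting for the condition".
func waitForDNSAnswer(lookup wait.ConditionFunc) error {
	if err := wait.Poll(5*time.Second, 5*time.Minute, lookup); err != nil {
		return fmt.Errorf("DNS probe never succeeded: %v", err)
	}
	return nil
}
```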

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4219653e0>: {
        s: "Namespace e2e-tests-services-3g5w4 is active",
    }
    Namespace e2e-tests-services-3g5w4 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4207c04b0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28853 #31585
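
Note the different line number: these failures come from `scheduler_predicates.go:93`, a second pre-test gate that waits up to 5m0s for every pod in `kube-system` to be Running and Ready. The single wedged `fluentd-cloud-logging` pod on node `...473e8e4d-s0cq` (Pending, `ContainersNotReady`) therefore fails that gate for every `[Serial]` test in the rest of the run. A minimal sketch of the gate, again assuming modern client-go (names are hypothetical):

```go
package e2eutil

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForKubeSystemReady polls until every kube-system pod is Running
// and Ready, mirroring the 5-minute gate that failed above.
func waitForKubeSystemReady(ctx context.Context, c kubernetes.Interface) error {
	return wait.PollImmediate(10*time.Second, 5*time.Minute, func() (bool, error) {
		pods, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return false, nil // treat API hiccups as transient and keep polling
		}
		for i := range pods.Items {
			pod := &pods.Items[i]
			if pod.Status.Phase != corev1.PodRunning || !isPodReady(pod) {
				return false, nil
			}
		}
		return true, nil
	})
}

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}
```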

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421acd490>: {
        s: "Namespace e2e-tests-services-3g5w4 is active",
    }
    Namespace e2e-tests-services-3g5w4 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42219c150>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 19:03:13 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 19:03:13 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420e68da0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc4225d59f0>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 36, 127],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 35.184.36.127:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:417

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:205
Expected error:
    <*errors.errorString | 0xc420348670>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420e27d50>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #31918

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc420348670>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #28337

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421ad25f0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 19:03:13 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 19:03:13 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421ad3190>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 19:03:13 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 19:03:13 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #35279

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc420fac830>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4214a16f0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42157d440>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 19:03:13 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 19:03:13 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420713800>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421c46930>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 19:03:13 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 19:03:13 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:270
Jan  4 00:48:36.530: CPU usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-473e8e4d-s0cq:
 container "runtime": expected 95th% usage < 0.200; got 0.539
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:188

Issues about this test specifically: #26784 #28384 #31935 #33023
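
This one is a genuine resource regression rather than a gating artifact: the Kubelet resource-tracking test samples each monitored container's CPU usage over the run and asserts the 95th percentile stays under a fixed per-container budget; here the "runtime" container's 95th percentile was 0.539 cores against a 0.200 limit. A minimal nearest-rank percentile check of the kind involved (the helper below is illustrative, not the suite's code):

```go
package e2eutil

import "sort"

// percentile returns the p-th percentile (0 < p <= 100) of CPU samples
// in cores, using the nearest-rank method.
func percentile(samples []float64, p float64) float64 {
	if len(samples) == 0 {
		return 0
	}
	sorted := append([]float64(nil), samples...)
	sort.Float64s(sorted)
	rank := int(float64(len(sorted))*p/100+0.5) - 1
	if rank < 0 {
		rank = 0
	}
	if rank >= len(sorted) {
		rank = len(sorted) - 1
	}
	return sorted[rank]
}
```

The failing assertion above is then effectively `percentile(usage["runtime"], 95) < 0.200`.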

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420e0e280>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 19:03:13 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 19:03:13 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc420cede90>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421d1ab20>: {
        s: "Namespace e2e-tests-services-3g5w4 is active",
    }
    Namespace e2e-tests-services-3g5w4 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc420348670>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:265

Issues about this test specifically: #32584

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421262170>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 19:03:13 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 19:03:13 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:85
Expected error:
    <*errors.errorString | 0xc420bd45b0>: {
        s: "expected pod \"pod-a055052d-d25e-11e6-b7ab-0242ac110009\" success: gave up waiting for pod 'pod-a055052d-d25e-11e6-b7ab-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-a055052d-d25e-11e6-b7ab-0242ac110009" success: gave up waiting for pod 'pod-a055052d-d25e-11e6-b7ab-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #34658
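
"To be 'success or failure'" means the framework polled the test pod's phase, waiting for it to reach either `Succeeded` or `Failed` within 5m0s; a pod stuck in `Pending` or `Running` (for example, one scheduled to an unhealthy node) trips this generic timeout. A sketch of the phase condition involved (hypothetical helper name):

```go
package e2eutil

import (
	corev1 "k8s.io/api/core/v1"
)

// podFinished is a poll condition that is done once the pod has run to
// completion, one way or the other; the caller then checks which.
func podFinished(pod *corev1.Pod) (done bool, failed bool) {
	switch pod.Status.Phase {
	case corev1.PodSucceeded:
		return true, false
	case corev1.PodFailed:
		return true, true
	}
	return false, false
}
```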

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc420348670>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:113
Expected error:
    <*errors.errorString | 0xc420fe40d0>: {
        s: "expected pod \"pod-37c9262d-d24a-11e6-b7ab-0242ac110009\" success: gave up waiting for pod 'pod-37c9262d-d24a-11e6-b7ab-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-37c9262d-d24a-11e6-b7ab-0242ac110009" success: gave up waiting for pod 'pod-37c9262d-d24a-11e6-b7ab-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #34226

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4220f77e0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 19:03:13 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 19:03:13 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420b34e30>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-473e8e4d-s0cq gke-bootstrap-e2e-default-pool-473e8e4d-s0cq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 16:25:17 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/125/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422461e80>: {
        s: "Namespace e2e-tests-services-g2z42 is active",
    }
    Namespace e2e-tests-services-g2z42 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422338160>: {
        s: "Namespace e2e-tests-services-g2z42 is active",
    }
    Namespace e2e-tests-services-g2z42 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc4225271d0>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 3, 99],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 35.184.3.99:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
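
The *net.OpError above is just a refused TCP connection to the master while the apiserver restarts. A standalone probe that reproduces the same error shape (master IP taken from the failure; run it from a machine with network access to the master):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the apiserver's secure port; while the apiserver is down,
        // the connection attempt surfaces as "connection refused".
        conn, err := net.DialTimeout("tcp", "35.184.3.99:443", 5*time.Second)
        if err != nil {
            fmt.Println("dial failed:", err) // e.g. dial tcp ...:443: connection refused
            return
        }
        defer conn.Close()
        fmt.Println("apiserver port reachable")
    }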

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422eac1b0>: {
        s: "Namespace e2e-tests-services-g2z42 is active",
    }
    Namespace e2e-tests-services-g2z42 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42294e240>: {
        s: "Namespace e2e-tests-services-g2z42 is active",
    }
    Namespace e2e-tests-services-g2z42 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421d8f5c0>: {
        s: "Namespace e2e-tests-services-g2z42 is active",
    }
    Namespace e2e-tests-services-g2z42 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/136/
Multiple broken tests:

Failed: [k8s.io] Pods Delete Grace Period should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 03:19:59.278: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421575678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36564
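
Nearly every failure in run 136 below is the same post-test assertion from framework.go:438: after the spec finishes, every node must report Ready. That check reduces to inspecting the node's NodeReady condition; a hedged sketch of the idea (client-go style types, not the framework's exact helper):

    package nodecheck

    import v1 "k8s.io/api/core/v1"

    // nodeReady mirrors the "All nodes should be ready after test"
    // assertion: a node passes only when its NodeReady condition is True.
    func nodeReady(n *v1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == v1.NodeReady {
                return c.Status == v1.ConditionTrue
            }
        }
        return false
    }

One node going NotReady mid-run (see the Cadvisor ssh failure against gke-bootstrap-e2e-default-pool-48a9a65f-5wfm below) is enough to fail dozens of otherwise-passing specs this way.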

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 03:40:11.052: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421276c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35422

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 01:25:45.517: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42178f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 09:30:04.129: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213af400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 03:37:00.838: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42126c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:317
Expected error:
    <*errors.errorString | 0xc42144c990>: {
        s: "timeout waiting 10m0s for cluster size to be 2",
    }
    timeout waiting 10m0s for cluster size to be 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:308

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:177
Waiting for pods in namespace "e2e-tests-disruption-k0tq2" to be ready
Expected error:
    <*errors.errorString | 0xc4203fd780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:247

Issues about this test specifically: #32644

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 03:16:42.418: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a2f678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37361 #37919

Failed: [k8s.io] Stateful Set recreate should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 08:37:03.945: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ec2a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 04:17:58.441: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212bc278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28984 #33827 #36917

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc4203fd780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 02:22:20.756: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420eb78f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31938

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 09:33:22.260: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f7d400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 04:05:20.471: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421098c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31836

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 01:17:29.264: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211f44f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 03:10:25.688: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420bdf678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 01:49:41.317: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c88ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30352 #38166

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 08:17:50.366: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42177ea00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 09:19:49.579: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213b9400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 08:43:59.184: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42110d400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26780

Failed: [k8s.io] Deployment deployment should support rollback {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 08:02:41.052: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42023b400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28348 #36703

Failed: [k8s.io] Services should prevent NodePort collisions {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 07:24:37.975: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42069d400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31575 #32756

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4211808a0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-48a9a65f-5wfm gke-bootstrap-e2e-default-pool-48a9a65f-5wfm Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 23:46:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:02:34 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:01:51 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-48a9a65f-5wfm            gke-bootstrap-e2e-default-pool-48a9a65f-5wfm Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:01:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:01:52 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:01:51 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-48a9a65f-5wfm gke-bootstrap-e2e-default-pool-48a9a65f-5wfm Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 23:46:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:02:34 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:01:51 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-48a9a65f-5wfm            gke-bootstrap-e2e-default-pool-48a9a65f-5wfm Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:01:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:01:52 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:01:51 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 09:23:00.843: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210d8000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30263

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 04:11:07.843: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420371678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 08:30:37.317: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421104a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27195

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 01:36:07.512: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b1aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27503

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 07:21:12.746: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214ed400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 03:58:56.093: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420eb7678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29052

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 05:25:56.865: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420df4000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 06:27:17.397: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421098a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28416 #31055 #33627 #33725 #34206 #37456

Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:36
Jan  8 01:30:02.054: Failed after retrying 0 times for cadvisor to be healthy on all nodes. Errors:
[an error on the server ("Error: 'ssh: rejected: connect failed (Connection refused)'\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-48a9a65f-5wfm:10250/stats/?timeout=5m0s'") has prevented the request from succeeding]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:86

Issues about this test specifically: #32371

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 08:21:02.581: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211e6000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29221

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 04:02:08.243: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217b7678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29994

Failed: [k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 05:29:08.093: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213cb400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #33008

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 01:52:57.479: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420eceef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34372

Failed: [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 07:49:58.290: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a57400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37314

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4211b9c10>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-48a9a65f-5wfm gke-bootstrap-e2e-default-pool-48a9a65f-5wfm Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 23:46:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:02:34 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:01:51 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-48a9a65f-5wfm            gke-bootstrap-e2e-default-pool-48a9a65f-5wfm Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:01:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:01:52 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:01:51 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-48a9a65f-5wfm gke-bootstrap-e2e-default-pool-48a9a65f-5wfm Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 23:46:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:02:34 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:01:51 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-48a9a65f-5wfm            gke-bootstrap-e2e-default-pool-48a9a65f-5wfm Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:01:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:01:52 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:01:51 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 01:12:58.895: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210384f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37373

Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 07:11:36.295: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42155e000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 01:46:17.512: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4204daef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26134

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 09:12:31.332: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b1aa00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29521

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 01:59:29.850: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4208bb8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35473

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 07:41:02.191: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4208af400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27673

Failed: [k8s.io] HostPath should support r/w {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 08:09:55.786: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a82a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:187
Expected error:
    <*errors.errorString | 0xc4203fd780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33285

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc4203fd780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32375

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 03:46:33.476: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420e28278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32949

Failed: [k8s.io] MetricsGrabber should grab all metrics from API server. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 03:43:21.295: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420bb5678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29513

Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 08:52:37.667: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4200f6a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27957

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 01:56:19.560: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4203718f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26138 #28429 #28737 #38064

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 05:43:51.219: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4204cf400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 05:59:54.119: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420e14000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc4213dda10>: {
        s: "at least one node failed to be ready",
    }
    at least one node failed to be ready
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:80

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] Downward API volume should provide container's cpu limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 08:27:26.986: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42177f400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36694

Failed: [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 01:22:13.376: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42194eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37056

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 09:16:02.371: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b47400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 07:44:16.388: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b14000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28003

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 01:29:01.746: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216f0ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 04:21:46.116: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216a6278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36794

Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 09:26:43.097: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214d6a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32122 #38040

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 06:07:38.412: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42069d400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 06:03:06.373: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42101a000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37423

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 05:56:07.733: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216b6a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 07:05:13.880: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420bb4a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 05:39:48.221: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c24a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27502 #28722 #32037 #38168

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 02:05:54.220: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214744f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30851

Failed: [k8s.io] V1Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 09:37:10.433: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42152d400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30216 #31031 #32086

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:177
Expected error:
    <*errors.errorString | 0xc4203fd780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:143

Issues about this test specifically: #32639

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 08:47:11.374: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421583400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29467

Failed: [k8s.io] Deployment deployment should create new pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 06:42:40.770: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219f9400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35579

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 06:30:34.941: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421536000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27532 #34567

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc4203fd780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 08:13:22.073: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420e1f400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31158 #34303

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421768b00>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-48a9a65f-5wfm gke-bootstrap-e2e-default-pool-48a9a65f-5wfm Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 23:46:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:02:34 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:01:51 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-48a9a65f-5wfm            gke-bootstrap-e2e-default-pool-48a9a65f-5wfm Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:01:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:01:52 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:01:51 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-48a9a65f-5wfm gke-bootstrap-e2e-default-pool-48a9a65f-5wfm Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 23:46:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:02:34 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:01:51 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-48a9a65f-5wfm            gke-bootstrap-e2e-default-pool-48a9a65f-5wfm Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:01:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:01:52 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 00:01:51 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] HostPath should support subPath {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 02:02:42.003: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213e6ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 06:24:01.102: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d34000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37500

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 08:40:31.986: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420eb7400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #33985

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 05:35:47.586: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f6a000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37529

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 08:06:41.569: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4200f6a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 06:58:27.473: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b97400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Downward API should provide pod IP as an env var [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 05:32:35.359: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42110ea00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 08:33:47.604: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216a7400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35601

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc421648360>: {
        s: "timeout waiting 10m0s for cluster size to be 4",
    }
    timeout waiting 10m0s for cluster size to be 4
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:336

Issues about this test specifically: #27470 #30156 #34304 #37620
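
The two resize timeouts in this run ("timeout waiting 10m0s for cluster size to be N") come from polling the registered node count after the node group is resized. A minimal sketch of that wait, again assuming a pre-context client-go API rather than the test's exact helper:

    package resize

    import (
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForClusterSize polls until exactly `size` nodes are registered,
    // giving up after the 10-minute budget seen in the failures above.
    func waitForClusterSize(cs kubernetes.Interface, size int) error {
        return wait.PollImmediate(20*time.Second, 10*time.Minute, func() (bool, error) {
            nodes, err := cs.CoreV1().Nodes().List(metav1.ListOptions{})
            if err != nil {
                return false, nil // retry through transient API errors
            }
            return len(nodes.Items) == size, nil
        })
    }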

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc421db15b0>: {
        s: "service verification failed for: 10.99.247.23\nexpected [service1-7042s service1-f4gl3 service1-p749q]\nreceived []",
    }
    service verification failed for: 10.99.247.23
    expected [service1-7042s service1-f4gl3 service1-p749q]
    received []
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:328

Issues about this test specifically: #26128 #26685 #33408 #36298
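
The "service verification failed" message means no backend ever answered at the service VIP: the check repeatedly queries the VIP expecting to collect every endpoint pod's name, and here it received []. A rough stand-in for that loop; it must run from inside the cluster, and the port and path are assumptions since the test's backends answer with their pod name:

    package main

    import (
        "fmt"
        "io/ioutil"
        "net/http"
        "strings"
        "time"
    )

    func main() {
        const vip = "http://10.99.247.23:80/" // VIP from the failure; port/path assumed
        seen := map[string]bool{}
        for i := 0; i < 30; i++ {
            if resp, err := http.Get(vip); err == nil {
                body, _ := ioutil.ReadAll(resp.Body)
                resp.Body.Close()
                seen[strings.TrimSpace(string(body))] = true
            }
            time.Sleep(time.Second)
        }
        // Expected roughly {service1-7042s, service1-f4gl3, service1-p749q};
        // the failing run collected nothing at all.
        fmt.Println("backends seen:", seen)
    }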

Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 06:55:03.511: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420eb6000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32025 #36823

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:141
Expected error:
    <*errors.errorString | 0xc4203fd780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34064

Failed: [k8s.io] Services should use same NodePort with same port but different protocols {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 07:08:24.073: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42104c000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/142/
Multiple broken tests:

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc421786be0>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: total pods available: 17, less than the min required: 18",
    }
    error waiting for deployment "nginx" status to match expectation: total pods available: 17, less than the min required: 18
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1120

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:133
Expected error:
    <*errors.StatusError | 0xc42183db00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "ResourceQuota \"test-quota\" is invalid: [spec.hard[requests.storage]: Invalid value: \"requests.storage\": must be a standard resource type or fully qualified, spec.hard[requests.storage]: Invalid value: \"requests.storage\": must be a standard resource for quota]",
            Reason: "Invalid",
            Details: {
                Name: "test-quota",
                Group: "",
                Kind: "ResourceQuota",
                Causes: [
                    {
                        Type: "FieldValueInvalid",
                        Message: "Invalid value: \"requests.storage\": must be a standard resource type or fully qualified",
                        Field: "spec.hard[requests.storage]",
                    },
                    {
                        Type: "FieldValueInvalid",
                        Message: "Invalid value: \"requests.storage\": must be a standard resource for quota",
                        Field: "spec.hard[requests.storage]",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 422,
        },
    }
    ResourceQuota "test-quota" is invalid: [spec.hard[requests.storage]: Invalid value: "requests.storage": must be a standard resource type or fully qualified, spec.hard[requests.storage]: Invalid value: "requests.storage": must be a standard resource for quota]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:103

Issues about this test specifically: #32053 #32758
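
This 422 looks like version skew rather than an ordinary flake: the newer e2e test creates a quota for requests.storage, a quota resource the 1.3 apiserver does not yet accept as standard, so validation rejects the object outright. A sketch of the rejected request (pre-context client-go; the test's exact spec is not shown here):

    package quota

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func createStorageQuota(cs kubernetes.Interface) error {
        quota := &v1.ResourceQuota{
            ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
            Spec: v1.ResourceQuotaSpec{
                Hard: v1.ResourceList{
                    // "requests.storage" is what the 1.3 master rejects as
                    // "must be a standard resource for quota".
                    v1.ResourceName("requests.storage"): resource.MustParse("10Gi"),
                },
            },
        }
        _, err := cs.CoreV1().ResourceQuotas("default").Create(quota)
        return err // 422 Invalid on a 1.3 apiserver
    }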

Failed: UpgradeTest {e2e.go}

exit status 1

Issues about this test specifically: #37745

Failed: [k8s.io] Sysctls should not launch unsafe, but not explicitly enabled sysctls on the node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:232
Expected
    <*api.Event | 0x0>: nil
not to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:230

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc421584900>: {
        s: "error waiting for deployment \"test-rollover-deployment\" status to match expectation: total pods available: 1, less than the min required: 3",
    }
    error waiting for deployment "test-rollover-deployment" status to match expectation: total pods available: 1, less than the min required: 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:598

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:272
error waiting for daemon pods to be running on no nodes
Expected error:
    <*errors.errorString | 0xc4203300a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:249

Issues about this test specifically: #30441

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:347
Expected error:
    <*errors.StatusError | 0xc4219c1c00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:287

Issues about this test specifically: #38083

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:432
Expected error:
    <*errors.StatusError | 0xc42135a100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:407

Issues about this test specifically: #37774

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc4203300a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:270
Expected error:
    <*errors.StatusError | 0xc422a03380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:222

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:42
Expected error:
    <*errors.errorString | 0xc421ed2420>: {
        s: "expected \"mode of file \\\"/etc/configmap-volume/data-1\\\": -r--------\" in container output: Expected\n    <string>: mode of file \"/etc/configmap-volume/data-1\": -rw-r--r--\n    content of file \"/etc/configmap-volume/data-1\": value-1\n    \nto contain substring\n    <string>: mode of file \"/etc/configmap-volume/data-1\": -r--------",
    }
    expected "mode of file \"/etc/configmap-volume/data-1\": -r--------" in container output: Expected
        <string>: mode of file "/etc/configmap-volume/data-1": -rw-r--r--
        content of file "/etc/configmap-volume/data-1": value-1
        
    to contain substring
        <string>: mode of file "/etc/configmap-volume/data-1": -r--------
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #34827
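
The ConfigMap, Secret, and Downward API "mode of file" failures in this run share one signature: the test asks for mode 0400 (-r--------) but the file comes back 0644 (-rw-r--r--), meaning the kubelet serving the pod ignored the requested mode, consistent with a not-yet-upgraded 1.3 node. A sketch of the field in question, with placeholder names; the same DefaultMode *int32 exists on SecretVolumeSource and DownwardAPIVolumeSource:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// 0400 is what the test expects to see as -r--------; a kubelet that
	// predates DefaultMode mounts the file with the old 0644 default, which
	// is the -rw-r--r-- in the output above. Names are placeholders.
	mode := int32(0400)
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
				DefaultMode:          &mode,
			},
		},
	}
	fmt.Printf("defaultMode: %o\n", *vol.VolumeSource.ConfigMap.DefaultMode)
}
```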

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:358
Pod was not deleted during network partition.
Expected
    <nil>: nil
to equal
    <*errors.errorString | 0xc4203300a0>: {
        s: "timed out waiting for the condition",
    }
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:347

Issues about this test specifically: #37479

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:89
Expected error:
    <*errors.StatusError | 0xc421015400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "ResourceQuota \"test-quota\" is invalid: [spec.hard[requests.storage]: Invalid value: \"requests.storage\": must be a standard resource type or fully qualified, spec.hard[requests.storage]: Invalid value: \"requests.storage\": must be a standard resource for quota]",
            Reason: "Invalid",
            Details: {
                Name: "test-quota",
                Group: "",
                Kind: "ResourceQuota",
                Causes: [
                    {
                        Type: "FieldValueInvalid",
                        Message: "Invalid value: \"requests.storage\": must be a standard resource type or fully qualified",
                        Field: "spec.hard[requests.storage]",
                    },
                    {
                        Type: "FieldValueInvalid",
                        Message: "Invalid value: \"requests.storage\": must be a standard resource for quota",
                        Field: "spec.hard[requests.storage]",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 422,
        },
    }
    ResourceQuota "test-quota" is invalid: [spec.hard[requests.storage]: Invalid value: "requests.storage": must be a standard resource type or fully qualified, spec.hard[requests.storage]: Invalid value: "requests.storage": must be a standard resource for quota]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:61

Issues about this test specifically: #29040 #35756

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:68
Expected error:
    <*errors.errorString | 0xc420e468f0>: {
        s: "expected \"mode of file \\\"/etc/podname\\\": -r--------\" in container output: Expected\n    <string>: mode of file \"/etc/podname\": -rw-r--r--\n    \nto contain substring\n    <string>: mode of file \"/etc/podname\": -r--------",
    }
    expected "mode of file \"/etc/podname\": -r--------" in container output: Expected
        <string>: mode of file "/etc/podname": -rw-r--r--
        
    to contain substring
        <string>: mode of file "/etc/podname": -r--------
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37423

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:136
Expected error:
    <*errors.StatusError | 0xc42138b700>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:125

Issues about this test specifically: #32185 #32372 #36494

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:268
Expected error:
    <*errors.StatusError | 0xc421f70a00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "ResourceQuota \"test-quota\" is invalid: [spec.hard[requests.storage]: Invalid value: \"requests.storage\": must be a standard resource type or fully qualified, spec.hard[requests.storage]: Invalid value: \"requests.storage\": must be a standard resource for quota]",
            Reason: "Invalid",
            Details: {
                Name: "test-quota",
                Group: "",
                Kind: "ResourceQuota",
                Causes: [
                    {
                        Type: "FieldValueInvalid",
                        Message: "Invalid value: \"requests.storage\": must be a standard resource type or fully qualified",
                        Field: "spec.hard[requests.storage]",
                    },
                    {
                        Type: "FieldValueInvalid",
                        Message: "Invalid value: \"requests.storage\": must be a standard resource for quota",
                        Field: "spec.hard[requests.storage]",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 422,
        },
    }
    ResourceQuota "test-quota" is invalid: [spec.hard[requests.storage]: Invalid value: "requests.storage": must be a standard resource type or fully qualified, spec.hard[requests.storage]: Invalid value: "requests.storage": must be a standard resource for quota]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:240

Issues about this test specifically: #34372

Failed: [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:47
Expected error:
    <*errors.errorString | 0xc4230255f0>: {
        s: "expected \"mode of file \\\"/etc/secret-volume/data-1\\\": -r--r-----\" in container output: Expected\n    <string>: mode of file \"/etc/secret-volume/data-1\": grwxrwxrwx\n    content of file \"/etc/secret-volume/data-1\": value-1\n    \n    \nto contain substring\n    <string>: mode of file \"/etc/secret-volume/data-1\": -r--r-----",
    }
    expected "mode of file \"/etc/secret-volume/data-1\": -r--r-----" in container output: Expected
        <string>: mode of file "/etc/secret-volume/data-1": grwxrwxrwx
        content of file "/etc/secret-volume/data-1": value-1
        
        
    to contain substring
        <string>: mode of file "/etc/secret-volume/data-1": -r--r-----
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Deployment paused deployment should be able to scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Expected error:
    <*errors.errorString | 0xc421bcb560>: {
        s: "Expected 5 replicas for the new replica set, got 3",
    }
    Expected 5 replicas for the new replica set, got 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1037

Issues about this test specifically: #29828

Failed: [k8s.io] Services should only allow access from service loadbalancer source ranges [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1165
Expected error:
    <*errors.errorString | 0xc4203300a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2850

Issues about this test specifically: #38174

Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:77
Jan 10 06:20:11.624: unable to create test secret : Secret "secret-test-e593c5df-d73f-11e6-b592-0242ac110009" is invalid: data[this_should_not_match_content_of_other_secret]: Invalid value: "this_should_not_match_content_of_other_secret": must have at most 253 characters and match regex \.?[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:74

Issues about this test specifically: #37525
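
This one looks like validation skew rather than a flake: the key this_should_not_match_content_of_other_secret contains underscores, which the regex quoted in the failure rejects; later releases relaxed secret key validation to allow them. A sketch of the object the test builds, with the generated name shortened to a placeholder:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The underscores in this key are the whole story: the regex quoted in
	// the failure message has no "_" class, so a 1.3-era validator rejects
	// the secret outright. Name and value are placeholders.
	secret := corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
		Data: map[string][]byte{
			"this_should_not_match_content_of_other_secret": []byte("value"),
		},
	}
	for k := range secret.Data {
		fmt.Println(k)
	}
}
```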

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:177
Expected error:
    <*errors.StatusError | 0xc4202eb980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:194

Issues about this test specifically: #32644

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:177
Expected error:
    <*errors.StatusError | 0xc421429100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:194

Issues about this test specifically: #32753 #34676

Failed: [k8s.io] Sysctls should support sysctls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:122
Expected
    <string>: kernel.shm_rmid_forced = 0
    
to contain substring
    <string>: kernel.shm_rmid_forced = 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:121
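
The sysctl failures in this run fit the same skew pattern: in this release the pod-level sysctl API was an alpha annotation, and a kubelet that predates it simply ignores the annotation, so the container reads the node default (0) instead of 1, and the reject/should-not-launch cases never see the expected event. A sketch, assuming the alpha annotation key from that era (security.alpha.kubernetes.io/sysctls):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A 1.3 kubelet does not act on this annotation, so the container still
	// sees kernel.shm_rmid_forced = 0, as in the failures above. The
	// annotation key is the alpha-era one and is an assumption here.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "sysctl-test",
			Annotations: map[string]string{
				"security.alpha.kubernetes.io/sysctls": "kernel.shm_rmid_forced=1",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "test",
				Image:   "busybox",
				Command: []string{"/bin/sysctl", "kernel.shm_rmid_forced"},
			}},
		},
	}
	fmt.Println(pod.Annotations)
}
```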

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:141
Expected error:
    <*errors.StatusError | 0xc4206f3800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:114

Issues about this test specifically: #37361 #37919
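
Every petset.go failure in this run returns the bare 404 "the server could not find the requested resource", which is what a client asking for apps/v1beta1 statefulsets gets from a master that does not serve that group/version (1.3 shipped pet sets under an earlier alpha group). A sketch that reproduces the symptom, using current client-go signatures; the kubeconfig path is a placeholder:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig; the path is an assumption.
	config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Against a master that does not serve apps/v1beta1 statefulsets, this
	// returns the same NotFound seen throughout the petset.go failures above.
	_, err = clientset.AppsV1beta1().StatefulSets("default").List(context.TODO(), metav1.ListOptions{})
	fmt.Println(err)
}
```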

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:475
Pod was not deleted during network partition.
Expected
    <nil>: nil
to equal
    <*errors.errorString | 0xc4203300a0>: {
        s: "timed out waiting for the condition",
    }
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:464

Issues about this test specifically: #36950

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:40
Expected error:
    <*errors.errorString | 0xc421610060>: {
        s: "expected \"mode of file \\\"/etc/secret-volume/data-1\\\": -r--------\" in container output: Expected\n    <string>: mode of file \"/etc/secret-volume/data-1\": -rw-r--r--\n    content of file \"/etc/secret-volume/data-1\": value-1\n    \n    \nto contain substring\n    <string>: mode of file \"/etc/secret-volume/data-1\": -r--------",
    }
    expected "mode of file \"/etc/secret-volume/data-1\": -r--------" in container output: Expected
        <string>: mode of file "/etc/secret-volume/data-1": -rw-r--r--
        content of file "/etc/secret-volume/data-1": value-1
        
        
    to contain substring
        <string>: mode of file "/etc/secret-volume/data-1": -r--------
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #35256

Failed: list nodes {e2e.go}

exit status 1

Issues about this test specifically: #38667

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:334
Jan 10 07:17:18.053: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:205
Expected
    <float64>: 3542
to be <
    <int>: 60
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:204

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:203
Expected error:
    <*errors.errorString | 0xc421ed65c0>: {
        s: "expected \"[1-9]\" in container output: Expected\n    <string>: content of file \"/etc/memory_limit\": 0\n    \nto match regular expression\n    <string>: [1-9]",
    }
    expected "[1-9]" in container output: Expected
        <string>: content of file "/etc/memory_limit": 0
        
    to match regular expression
        <string>: [1-9]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37531

Failed: [k8s.io] ReplicationController should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:51
Expected error:
    <*errors.errorString | 0xc420b7ca60>: {
        s: "rc manager never added the failure condition for rc \"condition-test\": []api.ReplicationControllerCondition(nil)",
    }
    rc manager never added the failure condition for rc "condition-test": []api.ReplicationControllerCondition(nil)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:206

Issues about this test specifically: #37027

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:177
Expected error:
    <*errors.StatusError | 0xc421422980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:194

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:461
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:930

Issues about this test specifically: #31918

Failed: [k8s.io] DisruptionController should create a PodDisruptionBudget {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:52
Expected error:
    <*errors.StatusError | 0xc422b4ee00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:194

Issues about this test specifically: #37017
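
The DisruptionController failures are the same 404, this time for the policy/v1beta1 group that backs PodDisruptionBudgets and the eviction subresource; a master without that group fails both the create and the later eviction calls. Shape of the object the tests create, sketched against current import paths; the name and selector are placeholders:

```go
package main

import (
	"fmt"

	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// On a master without policy/v1beta1, creating this object yields the
	// NotFound above. Name, selector, and minAvailable are placeholders.
	minAvailable := intstr.FromInt(2)
	pdb := policyv1beta1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "foo"},
		Spec: policyv1beta1.PodDisruptionBudgetSpec{
			MinAvailable: &minAvailable,
			Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"foo": "bar"}},
		},
	}
	fmt.Printf("%+v\n", pdb.Spec)
}
```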

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:104
Expected error:
    <*errors.errorString | 0xc422b724e0>: {
        s: "deployment \"nginx\" never updated with the desired condition and reason: []",
    }
    deployment "nginx" never updated with the desired condition and reason: []
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1468

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] DisruptionController evictions: no PDB => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:177
Expected error:
    <*errors.errorString | 0xc4203300a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:175

Issues about this test specifically: #32646

Failed: kubectl version {e2e.go}

exit status 1

Issues about this test specifically: #34378

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:56
Expected error:
    <*errors.errorString | 0xc420c1b5c0>: {
        s: "expected \"mode of file \\\"/etc/secret-volume/new-path-data-1\\\": -r--------\" in container output: Expected\n    <string>: mode of file \"/etc/secret-volume/new-path-data-1\": -rw-r--r--\n    content of file \"/etc/secret-volume/new-path-data-1\": value-1\n    \n    \nto contain substring\n    <string>: mode of file \"/etc/secret-volume/new-path-data-1\": -r--------",
    }
    expected "mode of file \"/etc/secret-volume/new-path-data-1\": -r--------" in container output: Expected
        <string>: mode of file "/etc/secret-volume/new-path-data-1": -rw-r--r--
        content of file "/etc/secret-volume/new-path-data-1": value-1
        
        
    to contain substring
        <string>: mode of file "/etc/secret-volume/new-path-data-1": -r--------
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37529

Failed: [k8s.io] Sysctls should support unsafe sysctls which are actually whitelisted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:163
Expected
    <string>: kernel.shm_rmid_forced = 0
    
to contain substring
    <string>: kernel.shm_rmid_forced = 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:162

Failed: [k8s.io] Stateful Set recreate should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:501
Expected error:
    <*errors.StatusError | 0xc420af7080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:452

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:71
Expected error:
    <*errors.StatusError | 0xc423009800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:194

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:310
Jan 10 08:10:48.308: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:101
Expected error:
    <*errors.errorString | 0xc420ee0470>: {
        s: "deployment \"nginx\" never updated with the desired condition and reason: []",
    }
    deployment "nginx" never updated with the desired condition and reason: []
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1323

Issues about this test specifically: #31697 #36574

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Expected error:
    <*errors.StatusError | 0xc421fb2080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:391

Issues about this test specifically: #37373

Failed: [k8s.io] Services should use same NodePort with same port but different protocols {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:855
Jan 10 02:53:46.793: should use same NodePort for new service: &TypeMeta{Kind:,APIVersion:,}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:853

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:98
Failed to update the second deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc4203300a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1223

Issues about this test specifically: #31502 #32947 #38646

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:240
Expected error:
    <*errors.StatusError | 0xc42183cc00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "Pod \"node-problem-detector-ef803910-d738-11e6-b592-0242ac110009\" is invalid: spec.containers[0].env[0].valueFrom.fieldRef.fieldPath: Unsupported value: \"spec.nodeName\": supported values: metadata.name, metadata.namespace, status.podIP",
            Reason: "Invalid",
            Details: {
                Name: "node-problem-detector-ef803910-d738-11e6-b592-0242ac110009",
                Group: "",
                Kind: "Pod",
                Causes: [
                    {
                        Type: "FieldValueNotSupported",
                        Message: "Unsupported value: \"spec.nodeName\": supported values: metadata.name, metadata.namespace, status.podIP",
                        Field: "spec.containers[0].env[0].valueFrom.fieldRef.fieldPath",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 422,
        },
    }
    Pod "node-problem-detector-ef803910-d738-11e6-b592-0242ac110009" is invalid: spec.containers[0].env[0].valueFrom.fieldRef.fieldPath: Unsupported value: "spec.nodeName": supported values: metadata.name, metadata.namespace, status.podIP
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:230

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145
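
Unlike the flakes, this failure is deterministic under skew: the quoted 422 lists exactly the fieldRef paths the 1.3 apiserver validates (metadata.name, metadata.namespace, status.podIP), and spec.nodeName is newer than that list. The env var in question, sketched with a placeholder name:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A 1.3 apiserver only accepts metadata.name, metadata.namespace, and
	// status.podIP for fieldRef, hence the 422 above; spec.nodeName arrived
	// in a later release. The env var name is a placeholder.
	env := corev1.EnvVar{
		Name: "NODE_NAME",
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "spec.nodeName"},
		},
	}
	fmt.Printf("%+v\n", env)
}
```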

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:58
Expected error:
    <*errors.errorString | 0xc420ed81d0>: {
        s: "expected \"mode of file \\\"/etc/podname\\\": -r--------\" in container output: Expected\n    <string>: mode of file \"/etc/podname\": -rw-r--r--\n    \nto contain substring\n    <string>: mode of file \"/etc/podname\": -r--------",
    }
    expected "mode of file \"/etc/podname\": -r--------" in container output: Expected
        <string>: mode of file "/etc/podname": -rw-r--r--
        
    to contain substring
        <string>: mode of file "/etc/podname": -r--------
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #36300

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:211
Expected error:
    <*errors.StatusError | 0xc420a40e80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:183

Issues about this test specifically: #38439

Failed: [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:312
Jan 10 02:16:55.748: Failed to query for cronJobs: the server could not find the requested resource
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:269

Issues about this test specifically: #37428

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:176
Expected error:
    <*errors.StatusError | 0xc421384d00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:149

Issues about this test specifically: #38254

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:233
Expected error:
    <*errors.StatusError | 0xc42179d580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "ResourceQuota \"test-quota\" is invalid: [spec.hard[requests.storage]: Invalid value: \"requests.storage\": must be a standard resource type or fully qualified, spec.hard[requests.storage]: Invalid value: \"requests.storage\": must be a standard resource for quota]",
            Reason: "Invalid",
            Details: {
                Name: "test-quota",
                Group: "",
                Kind: "ResourceQuota",
                Causes: [
                    {
                        Type: "FieldValueInvalid",
                        Message: "Invalid value: \"requests.storage\": must be a standard resource type or fully qualified",
                        Field: "spec.hard[requests.storage]",
                    },
                    {
                        Type: "FieldValueInvalid",
                        Message: "Invalid value: \"requests.storage\": must be a standard resource for quota",
                        Field: "spec.hard[requests.storage]",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 422,
        },
    }
    ResourceQuota "test-quota" is invalid: [spec.hard[requests.storage]: Invalid value: "requests.storage": must be a standard resource type or fully qualified, spec.hard[requests.storage]: Invalid value: "requests.storage": must be a standard resource for quota]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:205

Issues about this test specifically: #34367

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:196
Expected error:
    <*errors.errorString | 0xc420bfc310>: {
        s: "expected \"[1-9]\" in container output: Expected\n    <string>: content of file \"/etc/cpu_limit\": 0\n    \nto match regular expression\n    <string>: [1-9]",
    }
    expected "[1-9]" in container output: Expected
        <string>: content of file "/etc/cpu_limit": 0
        
    to match regular expression
        <string>: [1-9]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:493
Expected
    <int>: 1
to equal
    <int>: 42
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:463

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:306
Expected error:
    <*errors.StatusError | 0xc422b29000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "ResourceQuota \"test-quota\" is invalid: [spec.hard[requests.storage]: Invalid value: \"requests.storage\": must be a standard resource type or fully qualified, spec.hard[requests.storage]: Invalid value: \"requests.storage\": must be a standard resource for quota]",
            Reason: "Invalid",
            Details: {
                Name: "test-quota",
                Group: "",
                Kind: "ResourceQuota",
                Causes: [
                    {
                        Type: "FieldValueInvalid",
                        Message: "Invalid value: \"requests.storage\": must be a standard resource type or fully qualified",
                        Field: "spec.hard[requests.storage]",
                    },
                    {
                        Type: "FieldValueInvalid",
                        Message: "Invalid value: \"requests.storage\": must be a standard resource for quota",
                        Field: "spec.hard[requests.storage]",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 422,
        },
    }
    ResourceQuota "test-quota" is invalid: [spec.hard[requests.storage]: Invalid value: "requests.storage": must be a standard resource type or fully qualified, spec.hard[requests.storage]: Invalid value: "requests.storage": must be a standard resource for quota]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:275

Issues about this test specifically: #34212

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:64
Expected error:
    <*errors.errorString | 0xc422ac3690>: {
        s: "expected \"mode of file \\\"/etc/configmap-volume/path/to/data-2\\\": -r--------\" in container output: Expected\n    <string>: mode of file \"/etc/configmap-volume/path/to/data-2\": -rw-r--r--\n    content of file \"/etc/configmap-volume/path/to/data-2\": value-2\n    \nto contain substring\n    <string>: mode of file \"/etc/configmap-volume/path/to/data-2\": -r--------",
    }
    expected "mode of file \"/etc/configmap-volume/path/to/data-2\": -r--------" in container output: Expected
        <string>: mode of file "/etc/configmap-volume/path/to/data-2": -rw-r--r--
        content of file "/etc/configmap-volume/path/to/data-2": value-2
        
    to contain substring
        <string>: mode of file "/etc/configmap-volume/path/to/data-2": -r--------
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #35790

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.StatusError | 0xc420f9f780>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "Service \"dns-test-service-3\" is invalid: [spec.ports: Required value, spec.type: Unsupported value: \"ExternalName\": supported values: ClusterIP, LoadBalancer, NodePort]",
            Reason: "Invalid",
            Details: {
                Name: "dns-test-service-3",
                Group: "",
                Kind: "Service",
                Causes: [
                    {
                        Type: "FieldValueRequired",
                        Message: "Required value",
                        Field: "spec.ports",
                    },
                    {
                        Type: "FieldValueNotSupported",
                        Message: "Unsupported value: \"ExternalName\": supported values: ClusterIP, LoadBalancer, NodePort",
                        Field: "spec.type",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 422,
        },
    }
    Service "dns-test-service-3" is invalid: [spec.ports: Required value, spec.type: Unsupported value: "ExternalName": supported values: ClusterIP, LoadBalancer, NodePort]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:445

Issues about this test specifically: #32584
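
Another deterministic skew failure: ExternalName is a service type added in the 1.5 cycle, and the quoted 1.3 validation enumerates only ClusterIP, NodePort, and LoadBalancer, hence the 422. A sketch of the service the test creates; the target hostname is illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// An ExternalName service needs no ports; a validator that predates the
	// type rejects it with the 422 above. The hostname is illustrative.
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "foo.example.com",
		},
	}
	fmt.Println(svc.Spec.Type, svc.Spec.ExternalName)
}
```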

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:54
Expected error:
    <*errors.StatusError | 0xc421af2500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "ResourceQuota \"test-quota\" is invalid: [spec.hard[requests.storage]: Invalid value: \"requests.storage\": must be a standard resource type or fully qualified, spec.hard[requests.storage]: Invalid value: \"requests.storage\": must be a standard resource for quota]",
            Reason: "Invalid",
            Details: {
                Name: "test-quota",
                Group: "",
                Kind: "ResourceQuota",
                Causes: [
                    {
                        Type: "FieldValueInvalid",
                        Message: "Invalid value: \"requests.storage\": must be a standard resource type or fully qualified",
                        Field: "spec.hard[requests.storage]",
                    },
                    {
                        Type: "FieldValueInvalid",
                        Message: "Invalid value: \"requests.storage\": must be a standard resource for quota",
                        Field: "spec.hard[requests.storage]",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 422,
        },
    }
    ResourceQuota "test-quota" is invalid: [spec.hard[requests.storage]: Invalid value: "requests.storage": must be a standard resource type or fully qualified, spec.hard[requests.storage]: Invalid value: "requests.storage": must be a standard resource for quota]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:47

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:564
Not scheduled Pods: []api.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:930

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] Sysctls should reject invalid sysctls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:205
Expected
    <nil>: nil
not to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:200

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:69
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:60

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:177
Expected error:
    <*errors.StatusError | 0xc4219e5300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:194

Issues about this test specifically: #32639

Failed: [k8s.io] ReplicaSet should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:92
Expected error:
    <*errors.errorString | 0xc4222871f0>: {
        s: "rs controller never added the failure condition for replica set \"condition-test\": []extensions.ReplicaSetCondition(nil)",
    }
    rs controller never added the failure condition for replica set "condition-test": []extensions.ReplicaSetCondition(nil)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:221

Issues about this test specifically: #36554

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:198
Expected error:
    <*errors.StatusError | 0xc4211d4400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "ResourceQuota \"test-quota\" is invalid: [spec.hard[requests.storage]: Invalid value: \"requests.storage\": must be a standard resource type or fully qualified, spec.hard[requests.storage]: Invalid value: \"requests.storage\": must be a standard resource for quota]",
            Reason: "Invalid",
            Details: {
                Name: "test-quota",
                Group: "",
                Kind: "ResourceQuota",
                Causes: [
                    {
                        Type: "FieldValueInvalid",
                        Message: "Invalid value: \"requests.storage\": must be a standard resource type or fully qualified",
                        Field: "spec.hard[requests.storage]",
                    },
                    {
                        Type: "FieldValueInvalid",
                        Message: "Invalid value: \"requests.storage\": must be a standard resource for quota",
                        Field: "spec.hard[requests.storage]",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 422,
        },
    }
    ResourceQuota "test-quota" is invalid: [spec.hard[requests.storage]: Invalid value: "requests.storage": must be a standard resource type or fully qualified, spec.hard[requests.storage]: Invalid value: "requests.storage": must be a standard resource for quota]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:140

Issues about this test specifically: #38516

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/143/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421b10370>: {
        s: "Namespace e2e-tests-services-tnk33 is active",
    }
    Namespace e2e-tests-services-tnk33 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421cbe6b0>: {
        s: "Namespace e2e-tests-services-tnk33 is active",
    }
    Namespace e2e-tests-services-tnk33 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc4215e0fa0>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 198, 145, 211],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 104.198.145.211:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4221b6aa0>: {
        s: "Namespace e2e-tests-services-tnk33 is active",
    }
    Namespace e2e-tests-services-tnk33 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:101
Expected error:
    <*errors.errorString | 0xc42275ca70>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63619690770, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63619690770, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, extensions.DeploymentCondition{Type:\"Progressing\", Status:\"False\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63619690871, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63619690871, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:\"ProgressDeadlineExceeded\", Message:\"Replica set \\\"nginx-3837372172\\\" has timed out progressing.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63619690770, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63619690770, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, extensions.DeploymentCondition{Type:"Progressing", Status:"False", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63619690871, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63619690871, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:"ProgressDeadlineExceeded", Message:"Replica set \"nginx-3837372172\" has timed out progressing."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1335

Issues about this test specifically: #31697 #36574

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421dd0310>: {
        s: "Namespace e2e-tests-services-tnk33 is active",
    }
    Namespace e2e-tests-services-tnk33 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422a95250>: {
        s: "Namespace e2e-tests-services-tnk33 is active",
    }
    Namespace e2e-tests-services-tnk33 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:71
Expected error:
    <*errors.errorString | 0xc422aa8010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:423

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a53440>: {
        s: "Namespace e2e-tests-services-tnk33 is active",
    }
    Namespace e2e-tests-services-tnk33 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:347
Jan 10 17:46:53.453: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38083

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4215ce2a0>: {
        s: "Namespace e2e-tests-services-tnk33 is active",
    }
    Namespace e2e-tests-services-tnk33 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1087
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.145.211 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-mbvhm] []  <nil> Created e2e-test-nginx-rc-543de7dc52ad5a6bec36f573501e39c4\nScaling up e2e-test-nginx-rc-543de7dc52ad5a6bec36f573501e39c4 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-543de7dc52ad5a6bec36f573501e39c4 up to 1\n error: timed out waiting for any update progress to be made\n [] <nil> 0xc4203d6e40 exit status 1 <nil> <nil> true [0xc42028a758 0xc42028a780 0xc42028a7a0] [0xc42028a758 0xc42028a780 0xc42028a7a0] [0xc42028a770 0xc42028a798] [0x970e80 0x970e80] 0xc42147b7a0 <nil>}:\nCommand stdout:\nCreated e2e-test-nginx-rc-543de7dc52ad5a6bec36f573501e39c4\nScaling up e2e-test-nginx-rc-543de7dc52ad5a6bec36f573501e39c4 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-543de7dc52ad5a6bec36f573501e39c4 up to 1\n\nstderr:\nerror: timed out waiting for any update progress to be made\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.145.211 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-mbvhm] []  <nil> Created e2e-test-nginx-rc-543de7dc52ad5a6bec36f573501e39c4
    Scaling up e2e-test-nginx-rc-543de7dc52ad5a6bec36f573501e39c4 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-543de7dc52ad5a6bec36f573501e39c4 up to 1
     error: timed out waiting for any update progress to be made
     [] <nil> 0xc4203d6e40 exit status 1 <nil> <nil> true [0xc42028a758 0xc42028a780 0xc42028a7a0] [0xc42028a758 0xc42028a780 0xc42028a7a0] [0xc42028a770 0xc42028a798] [0x970e80 0x970e80] 0xc42147b7a0 <nil>}:
    Command stdout:
    Created e2e-test-nginx-rc-543de7dc52ad5a6bec36f573501e39c4
    Scaling up e2e-test-nginx-rc-543de7dc52ad5a6bec36f573501e39c4 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-543de7dc52ad5a6bec36f573501e39c4 up to 1
    
    stderr:
    error: timed out waiting for any update progress to be made
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:169

Issues about this test specifically: #26138 #28429 #28737 #38064

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e19770>: {
        s: "Namespace e2e-tests-services-tnk33 is active",
    }
    Namespace e2e-tests-services-tnk33 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Jan 10 15:20:23.658: timeout waiting 15m0s for pods size to be 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:211
Jan 10 16:35:34.239: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38439

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Jan 10 14:57:11.457: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421396420>: {
        s: "Namespace e2e-tests-services-tnk33 is active",
    }
    Namespace e2e-tests-services-tnk33 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:68
Expected error:
    <*errors.errorString | 0xc422aa8010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:371

Issues about this test specifically: #27232

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4222c2b70>: {
        s: "Namespace e2e-tests-services-tnk33 is active",
    }
    Namespace e2e-tests-services-tnk33 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1253
Jan 10 18:36:13.137: Failed getting pod e2e-test-nginx-pod: Timeout while waiting for pods with labels "run=e2e-test-nginx-pod" to be running
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1231

Issues about this test specifically: #29834 #35757

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Jan 10 17:10:13.983: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422ac18f0>: {
        s: "Namespace e2e-tests-services-tnk33 is active",
    }
    Namespace e2e-tests-services-tnk33 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420c341b0>: {
        s: "Namespace e2e-tests-services-tnk33 is active",
    }
    Namespace e2e-tests-services-tnk33 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e18370>: {
        s: "Namespace e2e-tests-services-tnk33 is active",
    }
    Namespace e2e-tests-services-tnk33 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422adbbe0>: {
        s: "Namespace e2e-tests-services-tnk33 is active",
    }
    Namespace e2e-tests-services-tnk33 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e53b10>: {
        s: "Namespace e2e-tests-services-tnk33 is active",
    }
    Namespace e2e-tests-services-tnk33 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/153/
Multiple broken tests:

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 04:31:49.972: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421758000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:36
Jan 14 04:25:37.561: Failed after retrying 0 times for cadvisor to be healthy on all nodes. Errors:
[an error on the server ("Error: 'ssh: rejected: connect failed (Connection refused)'\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb:10250/stats/?timeout=5m0s'") has prevented the request from succeeding]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:86

Issues about this test specifically: #32371
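
Nearly every failure in this run is the same post-test assertion: the framework refuses to pass any spec while a node is NotReady, and the node it keeps reporting is gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb, whose kubelet is unreachable above ("ssh: rejected: connect failed (Connection refused)" on port 10250) and whose boot ID never changes in the Restart test further down. A minimal sketch of what such a readiness gate checks, again assuming a pre-0.18 client-go; notReadyNodes and its exact shape are illustrative, not the framework's code:

```go
// List all nodes and report any without an explicit NodeReady=True
// condition, mirroring the "All nodes should be ready after test" check.
package e2eutil

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func notReadyNodes(c kubernetes.Interface) ([]string, error) {
	nodes, err := c.CoreV1().Nodes().List(metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var notReady []string
	for _, n := range nodes.Items {
		ready := false
		for _, cond := range n.Status.Conditions {
			// A node counts as ready only with NodeReady reported True.
			if cond.Type == v1.NodeReady && cond.Status == v1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			notReady = append(notReady, n.Name)
		}
	}
	return notReady, nil
}
```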

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 01:38:00.291: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42031cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 04:24:37.255: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421758a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37526

Failed: [k8s.io] V1Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 04:45:09.161: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212c2a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 02:04:00.707: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206ea278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421037fa0>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:31 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  }]\nkube-dns-4101612645-vlj4g                                          gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:15 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:45 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:15 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb            gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:50 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:31 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  }]
    kube-dns-4101612645-vlj4g                                          gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:15 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:45 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:15 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb            gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:50 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 02:25:37.120: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42174e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29511 #29987 #30238 #38364

Failed: [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 03:04:32.039: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421096000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37428

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4209eb650>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:31 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  }]\nkube-dns-4101612645-vlj4g                                          gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:15 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:45 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:15 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb            gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:50 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:31 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  }]
    kube-dns-4101612645-vlj4g                                          gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:15 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:45 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:15 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb            gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:50 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 03:50:01.820: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421096000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34827

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 13 22:36:09.037: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421106c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31158 #34303

Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 04:51:34.753: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421774000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32122 #38040

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 04:21:05.447: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420183400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 04:48:22.559: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4203e9400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27680 #38211

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:187
Expected error:
    <*errors.errorString | 0xc4204594b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33285

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 02:00:00.603: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206ea278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28003

Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 00:26:50.602: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42171cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 13 23:42:18.206: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212c6c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 00:16:23.300: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42187a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28283

Failed: [k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 01:56:31.352: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421830278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #33008

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 04:38:14.320: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206ea000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29224 #32008 #37564

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 04:41:54.997: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421758a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 03:33:35.549: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c21400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Downward API volume should provide container's cpu request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 00:09:43.678: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421932278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 04:05:36.727: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42102aa00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 04:54:45.760: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421089400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28416 #31055 #33627 #33725 #34206 #37456

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 13 22:32:42.796: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c42c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30441

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 01:09:20.113: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f28c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37259

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 13 23:59:25.951: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4203e9678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 04:35:02.112: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42150ea00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30851

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 01:31:39.736: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f9c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32467 #36276

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 00:33:11.121: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421267678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27503

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 04:08:48.935: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4203e9400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35790

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 05:16:05.910: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42144aa00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 02:16:45.440: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420709678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37529

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 03:40:27.234: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421758000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28984 #33827 #36917

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 02:22:12.825: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f6d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:132
Expected error:
    <*errors.errorString | 0xc4204594b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34104

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 02:07:12.908: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421396278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29221

Failed: [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 13 23:48:44.608: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421287678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37531

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 00:30:00.875: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421027678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35422

Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 00:23:10.374: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218c5678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 03:07:42.333: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4203e9400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29710

Failed: [k8s.io] Services should prevent NodePort collisions {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 13 23:17:15.120: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f9d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31575 #32756

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc4204594b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 02:29:19.311: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211eac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37071

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:317
Expected error:
    <*errors.errorString | 0xc4218fa030>: {
        s: "timeout waiting 10m0s for cluster size to be 2",
    }
    timeout waiting 10m0s for cluster size to be 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:308

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 00:02:56.141: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f0b678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29521

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 13 23:45:30.404: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a60c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34226

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 01:47:35.328: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214bac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27524 #32057

Failed: [k8s.io] Networking should provide unchanging, static URL paths for kubernetes api services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 01:15:57.601: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42101cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36109

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 01:53:20.284: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42174ec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32945

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 13 22:51:44.066: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212c3678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc421886bd0>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 00:06:29.479: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211eac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 01:12:47.329: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214f4278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37423

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420b3cd20>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:31 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  }]\nkube-dns-4101612645-vlj4g                                          gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:15 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:45 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:15 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb            gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:50 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:31 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  }]
    kube-dns-4101612645-vlj4g                                          gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:15 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:45 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:15 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb            gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:50 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 01:41:12.505: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4209c6278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37439

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421810310>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:31 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  }]\nkube-dns-4101612645-vlj4g                                          gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:15 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:45 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:15 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb            gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:50 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:31 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  }]
    kube-dns-4101612645-vlj4g                                          gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:15 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:45 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:15 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb            gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:50 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27655 #33876
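
The SchedulerPredicates failures are a different flavor: the precondition at scheduler_predicates.go:93 waits up to 5m for every kube-system pod to be Running and Ready, and fluentd, kube-dns, and kube-proxy on the one bad node never got there. A sketch of the same check, under the same kubeconfig and API-version assumptions as the sketch above:

```go
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == v1.PodReady && c.Status == v1.ConditionTrue {
				ready = true
			}
		}
		// Mirrors the "NOT in RUNNING and READY state" criterion in the logs.
		if p.Status.Phase != v1.PodRunning || !ready {
			fmt.Printf("%-60s %-40s phase=%s ready=%v\n",
				p.Name, p.Spec.NodeName, p.Status.Phase, ready)
		}
	}
}
```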

Failed: [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 04:57:57.981: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42144aa00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29461

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc4204594b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32375
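
"timed out waiting for the condition" is not specific to networking: it is the sentinel error (`wait.ErrWaitTimeout`) that `k8s.io/apimachinery/pkg/util/wait` returns whenever a polled condition never becomes true — here from the connectivity probe in networking_utils.go:520. A minimal reproduction of the polling pattern, with a hypothetical pod endpoint standing in for the intra-pod HTTP check:

```go
package main

import (
	"fmt"
	"net"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Hypothetical pod IP:port standing in for the intra-pod HTTP probe.
	addr := "10.96.3.7:8080"

	err := wait.Poll(2*time.Second, 30*time.Second, func() (bool, error) {
		conn, dialErr := net.DialTimeout("tcp", addr, time.Second)
		if dialErr != nil {
			return false, nil // not reachable yet; keep polling
		}
		conn.Close()
		return true, nil
	})
	// If the endpoint never answers, err is wait.ErrWaitTimeout, whose
	// message is exactly "timed out waiting for the condition".
	if err != nil {
		fmt.Println(err)
	}
}
```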

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 13 23:56:13.761: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42159b678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 13 22:48:31.851: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f9d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34520

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 03:43:39.476: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216a5400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32949

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 00:19:55.185: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421681678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 04:02:24.529: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42144a000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29467

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 01:19:16.707: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212c3678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] DNS config map should be able to change configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 00:36:38.704: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f9e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37144

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4214c21c0>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:31 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  }]\nkube-dns-4101612645-vlj4g                                          gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:15 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:45 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:15 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb            gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:50 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:31 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  }]
    kube-dns-4101612645-vlj4g                                          gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:15 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:45 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:47:15 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb            gke-bootstrap-e2e-default-pool-a3d30c2d-b5tb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:50 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-13 19:46:49 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 13 23:52:45.295: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42151ec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27502 #28722 #32037 #38168

Failed: [k8s.io] Services should provide secure master service [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 05:07:28.331: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c21400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 14 01:44:24.716: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216a4c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/163/
Multiple broken tests:

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 17 12:13:01.032: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421330278), (*api.Node)(0xc4213304f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37526

Failed: [k8s.io] MetricsGrabber should grab all metrics from API server. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 17 13:52:29.236: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422278c78), (*api.Node)(0xc422278ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29513

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422026e50>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876
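
"Waiting for terminating namespaces to be deleted timed out" comes from the other precondition, at scheduler_predicates.go:78: namespaces left over from earlier specs were still stuck in phase Terminating when this one started. A sketch of that wait, with the same client bootstrap and API-version caveats as above:

```go
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll until no namespace is stuck in the Terminating phase.
	err = wait.Poll(5*time.Second, 5*time.Minute, func() (bool, error) {
		nsList, listErr := cs.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
		if listErr != nil {
			return false, listErr
		}
		for _, ns := range nsList.Items {
			if ns.Status.Phase == v1.NamespaceTerminating {
				fmt.Println("still terminating:", ns.Name)
				return false, nil
			}
		}
		return true, nil
	})
	if err != nil {
		fmt.Println("gave up:", err)
	}
}
```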

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 17 13:49:19.003: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421524c78), (*api.Node)(0xc421524ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31158 #34303

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 17 13:45:52.758: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b96c78), (*api.Node)(0xc421b96ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31400

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 17 12:35:16.838: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c96278), (*api.Node)(0xc421c964f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 17 14:02:04.102: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421fff678), (*api.Node)(0xc421fff8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26172

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 17 13:42:40.578: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421fc3678), (*api.Node)(0xc421fc38f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34658

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
[ disks ]
+gke-bootstrap-e2e-3360-pvc-bd2e1431-dcea-11e6-b0ff-42010af00015  us-central1-a  1        pd-standard  READY

Issues about this test specifically: #33373 #33416 #34060
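
The leaked resource this time is a dynamically provisioned PD whose PVC was never cleaned up. A sketch of manual cleanup, shelling out to gcloud from Go — the disk name and zone are copied from the diff above; verify nothing still mounts the disk before deleting:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Name and zone taken from the DiffResources output above.
	disk := "gke-bootstrap-e2e-3360-pvc-bd2e1431-dcea-11e6-b0ff-42010af00015"
	zone := "us-central1-a"

	// --quiet suppresses the interactive confirmation prompt.
	out, err := exec.Command("gcloud", "compute", "disks", "delete",
		disk, "--zone", zone, "--quiet").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("delete failed:", err)
	}
}
```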

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:141
Expected error:
    <*errors.errorString | 0xc4203a6f40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34064

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Jan 17 11:41:53.461: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Jan 17 10:11:21.881: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082
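
The HPA failure is the resize never converging: autoscaling_utils.go polls the controller's pod count for 15m and then gives up. An equivalent standalone check, counting pods by label — the `name=rc-light` selector and `default` namespace here are illustrative, not the test's actual generated values:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const want = 3
	sel := "name=rc-light" // hypothetical; the real test derives this per run
	err = wait.Poll(10*time.Second, 15*time.Minute, func() (bool, error) {
		pods, listErr := cs.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: sel})
		if listErr != nil {
			return false, listErr
		}
		fmt.Printf("have %d pods, want %d\n", len(pods.Items), want)
		return len(pods.Items) == want, nil
	})
	if err != nil {
		fmt.Println("timed out waiting for pod count:", err)
	}
}
```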

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 17 12:31:49.674: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fc5678), (*api.Node)(0xc420fc58f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 17 12:18:33.201: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42174ac78), (*api.Node)(0xc42174aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32945

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 17 14:20:37.930: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4209fcc78), (*api.Node)(0xc4209fcef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37479

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 17 14:40:31.308: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421017678), (*api.Node)(0xc4210178f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] Deployment deployment should support rollback {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 17 14:44:03.529: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b64278), (*api.Node)(0xc421b644f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28348 #36703

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/169/
Multiple broken tests:

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv4 should be mountable for NFSv4 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 19 15:58:42.026: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d884f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36970

Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 19 15:07:39.942: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b44ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36242

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 19 16:05:04.489: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f2eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421761570>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-422febbd-ss7v gke-bootstrap-e2e-default-pool-422febbd-ss7v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:32:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:33:03 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 12:40:09 -0800 PST  }]\nkube-dns-4101612645-rgfzm                                          gke-bootstrap-e2e-default-pool-422febbd-ss7v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 11:59:56 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-19 12:40:09 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 11:59:56 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-422febbd-ss7v            gke-bootstrap-e2e-default-pool-422febbd-ss7v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:32:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:32:33 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 12:40:09 -0800 PST  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-422febbd-ss7v gke-bootstrap-e2e-default-pool-422febbd-ss7v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:32:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:33:03 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 12:40:09 -0800 PST  }]
    kube-dns-4101612645-rgfzm                                          gke-bootstrap-e2e-default-pool-422febbd-ss7v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 11:59:56 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-19 12:40:09 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 11:59:56 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-422febbd-ss7v            gke-bootstrap-e2e-default-pool-422febbd-ss7v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:32:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:32:33 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 12:40:09 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28019

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 19 14:24:08.814: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b2b8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37373

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 19 16:19:25.147: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a9eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27503

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421629f50>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-422febbd-ss7v gke-bootstrap-e2e-default-pool-422febbd-ss7v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:32:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:33:03 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 12:40:09 -0800 PST  }]\nkube-dns-4101612645-rgfzm                                          gke-bootstrap-e2e-default-pool-422febbd-ss7v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 11:59:56 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-19 12:40:09 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 11:59:56 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-422febbd-ss7v            gke-bootstrap-e2e-default-pool-422febbd-ss7v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:32:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:32:33 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 12:40:09 -0800 PST  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-422febbd-ss7v gke-bootstrap-e2e-default-pool-422febbd-ss7v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:32:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:33:03 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 12:40:09 -0800 PST  }]
    kube-dns-4101612645-rgfzm                                          gke-bootstrap-e2e-default-pool-422febbd-ss7v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 11:59:56 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-19 12:40:09 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 11:59:56 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-422febbd-ss7v            gke-bootstrap-e2e-default-pool-422febbd-ss7v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:32:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:32:33 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 12:40:09 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 19 16:40:40.508: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4220faef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 19 16:34:06.059: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213da4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 19 16:53:03.407: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a9f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32025 #36823

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 19 16:24:40.619: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421682ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38254

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 19 15:04:29.729: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214a2ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34367

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc421f85960>: {
        s: "timeout waiting 10m0s for cluster size to be 4",
    }
    timeout waiting 10m0s for cluster size to be 4
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:336

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4218558b0>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-422febbd-ss7v gke-bootstrap-e2e-default-pool-422febbd-ss7v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:32:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:33:03 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 12:40:09 -0800 PST  }]\nkube-dns-4101612645-rgfzm                                          gke-bootstrap-e2e-default-pool-422febbd-ss7v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 11:59:56 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-19 12:40:09 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 11:59:56 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-422febbd-ss7v            gke-bootstrap-e2e-default-pool-422febbd-ss7v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:32:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:32:33 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 12:40:09 -0800 PST  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-422febbd-ss7v gke-bootstrap-e2e-default-pool-422febbd-ss7v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:32:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:33:03 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 12:40:09 -0800 PST  }]
    kube-dns-4101612645-rgfzm                                          gke-bootstrap-e2e-default-pool-422febbd-ss7v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 11:59:56 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-19 12:40:09 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 11:59:56 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-422febbd-ss7v            gke-bootstrap-e2e-default-pool-422febbd-ss7v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:32:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:32:33 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 12:40:09 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #35279

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc421cf69a0>: {
        s: "error waiting for deployment \"test-rollover-deployment\" status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment "test-rollover-deployment" status to match expectation: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:598

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879
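
The rollover failure is the deployment-status wait at deployment.go:598 timing out before the new ReplicaSet finished replacing the old one. The shape of that wait, sketched against a hypothetical deployment name and namespace (the test generates both per run), again with current-era client-go signatures:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	name, ns := "test-rollover-deployment", "default" // illustrative values
	err = wait.Poll(5*time.Second, 5*time.Minute, func() (bool, error) {
		d, getErr := cs.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if getErr != nil {
			return false, getErr
		}
		want := int32(1)
		if d.Spec.Replicas != nil {
			want = *d.Spec.Replicas
		}
		// Rollout is done when every replica is updated and available
		// and the controller has observed the latest spec.
		done := d.Status.UpdatedReplicas == want &&
			d.Status.AvailableReplicas == want &&
			d.Status.ObservedGeneration >= d.Generation
		return done, nil
	})
	if err != nil {
		fmt.Println("rollout did not complete:", err)
	}
}
```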

Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 19 15:22:02.697: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211838f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27079

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 19 16:01:52.317: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b5c4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37435

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4205dd360>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-422febbd-ss7v gke-bootstrap-e2e-default-pool-422febbd-ss7v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:32:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:33:03 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 12:40:09 -0800 PST  }]\nkube-dns-4101612645-rgfzm                                          gke-bootstrap-e2e-default-pool-422febbd-ss7v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 11:59:56 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-19 12:40:09 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 11:59:56 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-422febbd-ss7v            gke-bootstrap-e2e-default-pool-422febbd-ss7v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:32:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:32:33 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 12:40:09 -0800 PST  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-422febbd-ss7v gke-bootstrap-e2e-default-pool-422febbd-ss7v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:32:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:33:03 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 12:40:09 -0800 PST  }]
    kube-dns-4101612645-rgfzm                                          gke-bootstrap-e2e-default-pool-422febbd-ss7v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 11:59:56 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-19 12:40:09 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 11:59:56 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-422febbd-ss7v            gke-bootstrap-e2e-default-pool-422febbd-ss7v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:32:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-19 07:32:33 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-19 12:40:09 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #36914

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:114
Expected error:
    <*errors.errorString | 0xc420415410>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32684 #36278 #37948

Failed: [k8s.io] V1Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 19 16:27:58.726: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42215aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29657

Failed: [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 19 16:56:15.588: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42143c4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #33987

Failed: [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 19 15:53:58.464: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206c84f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37314

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/187/
Multiple broken tests:

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 14:56:24.500: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b4d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28416 #31055 #33627 #33725 #34206 #37456

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 15:39:04.504: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421448278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32467 #36276

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 13:46:58.001: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210fe278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35601

Failed: [k8s.io] Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 13:53:38.310: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d1d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31938

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 15:24:38.976: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421576278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37526

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 15:28:57.272: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f44c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32639

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 11:17:14.118: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420127678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 15:17:57.143: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c38278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 13:11:41.053: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421866278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 15:00:57.779: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214bb678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 12:00:55.098: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a60278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 14:06:57.997: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211cb678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37439

Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 09:18:05.654: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218b9678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27079

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420d56000>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 14:15:21.059: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4209b6278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38439

Failed: [k8s.io] Deployment deployment should create new pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 14:03:45.786: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c26c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35579

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 14:18:33.246: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213fc278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27245

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 15:35:52.305: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42189e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28283

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 10:53:03.702: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42090cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 11:30:47.228: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421846278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26224 #34354

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 09:10:43.691: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d3ec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27014 #27834

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Scheduler. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 15:21:22.263: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c87678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:187
Expected error:
    <*errors.errorString | 0xc4203ace20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33285

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 11:48:06.469: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421038c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 09:24:46.901: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c87678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32185 #32372 #36494

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Jan 22 08:39:37.440: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 10:56:15.888: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c76278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 10:42:38.151: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421389678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29066 #30592 #31065 #33171

Failed: [k8s.io] Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 11:27:01.055: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218c2c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29511 #29987 #30238 #38364

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc4203ace20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 14:53:08.505: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c87678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 13:40:15.160: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211f8278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37479

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
[ disks ]
+gke-bootstrap-e2e-a8ed-pvc-1b215651-e0bf-11e6-9b62-42010af0002c  us-central1-a  1        pd-standard  READY

Issues about this test specifically: #33373 #33416 #34060
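
The leaked resource here is the GCE PD backing a dynamically provisioned PersistentVolume (hence the pvc-... disk name) that outlived cluster teardown. A hedged sketch of scanning for such leftovers with the GCE Go client, filtering client-side by name prefix; the project, zone, and prefix values are placeholders, and compute.NewService with Application Default Credentials assumes a current google.golang.org/api release (older releases construct the client differently):

    package main

    import (
        "context"
        "fmt"
        "strings"

        compute "google.golang.org/api/compute/v1"
    )

    func main() {
        ctx := context.Background()
        // Uses Application Default Credentials from the environment.
        svc, err := compute.NewService(ctx)
        if err != nil {
            panic(err)
        }
        const project, zone, prefix = "my-project", "us-central1-a", "gke-bootstrap-e2e-"
        // NextPageToken handling omitted for brevity.
        list, err := svc.Disks.List(project, zone).Do()
        if err != nil {
            panic(err)
        }
        for _, d := range list.Items {
            // A disk with the e2e name prefix and no attached instances
            // is a leak candidate once the cluster is gone.
            if strings.HasPrefix(d.Name, prefix) && len(d.Users) == 0 {
                fmt.Printf("possible leaked disk: %s (%d GB, %s)\n", d.Name, d.SizeGb, d.Status)
            }
        }
    }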

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420d54000>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 11:57:27.889: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c13678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37259

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 11:35:22.398: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420df4c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 11:20:26.725: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f86c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28507 #29315 #35595

Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 12:04:07.323: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420efb678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37525

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 11:41:59.106: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d42278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 13:50:14.148: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218c0c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28006 #28866 #29613 #36224

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 09:27:59.098: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210a0278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34658

Failed: [k8s.io] PrivilegedPod should test privileged pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 13:56:51.388: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d71678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29519 #32451

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 15:54:24.130: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212a4278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36950

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 15:04:10.735: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213a8278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] Services should only allow access from service loadbalancer source ranges [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 11:00:47.761: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f06278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38174

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 14:47:01.172: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4209c6278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 09:14:38.482: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42119e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31408

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 09:31:13.189: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42166c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28774 #31429

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 13:43:47.751: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421014278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 22 14:10:10.785: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421279678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37502

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/188/
Multiple broken tests:

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:272
0: path /api/v1/namespaces/e2e-tests-proxy-clm5f/pods/http:proxy-service-sjccq-bw2td:1080/proxy/ gave error: Get https://35.184.23.203/api/v1/namespaces/e2e-tests-proxy-clm5f/pods/http:proxy-service-sjccq-bw2td:1080/proxy/: http2: no cached connection was available
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:270

Issues about this test specifically: #26164 #26210 #33998 #37158
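
"http2: no cached connection was available" comes out of Go's HTTP/2 client transport, not out of the apiserver or the proxy test itself; it was a known client-side flake in the x/net/http2 layer of that era. If HTTP/2 is the suspect, a Go client can be pinned to HTTP/1.1 by giving its transport a non-nil, empty TLSNextProto map; this is a workaround sketch, not what the e2e suite does:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
    )

    func main() {
        // A non-nil, empty TLSNextProto map disables the automatic HTTP/2
        // upgrade, forcing HTTP/1.1 even over TLS.
        transport := &http.Transport{
            TLSNextProto: map[string]func(string, *tls.Conn) http.RoundTripper{},
        }
        client := &http.Client{Transport: transport}

        resp, err := client.Get("https://example.com/")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println("protocol:", resp.Proto) // HTTP/1.1
    }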

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:358
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421ef0010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:326

Issues about this test specifically: #37479

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:437
Expected error:
    <*errors.errorString | 0xc4203b09e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #33985

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:40
Expected error:
    <*errors.errorString | 0xc4203b09e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:140

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:47
Expected error:
    <*errors.errorString | 0xc4203b09e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:140

Issues about this test specifically: #32087

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:128
Expected error:
    <*errors.errorString | 0xc4203b09e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:73

Issues about this test specifically: #28346

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc423568da0>: {
        s: "Only 0 pods started out of 3",
    }
    Only 0 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:419

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc42228a030>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:328

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:314
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421fcc020>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:261

Issues about this test specifically: #37259

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc423538a60>: {
        s: "Only 0 pods started out of 3",
    }
    Only 0 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:315

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:81
Expected error:
    <*errors.errorString | 0xc4203b09e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #30981

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc4217fab40>: {
        s: "Only 1 pods started out of 3",
    }
    Only 1 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:370

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:317
Expected error:
    <*errors.errorString | 0xc423532000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:300

Issues about this test specifically: #27233 #36204

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/190/
Multiple broken tests:

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:51
Expected error:
    <*errors.errorString | 0xc4211a3f90>: {
        s: "expected pod \"pod-secrets-3c875fb7-e1ac-11e6-a411-0242ac11000b\" success: gave up waiting for pod 'pod-secrets-3c875fb7-e1ac-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-3c875fb7-e1ac-11e6-a411-0242ac11000b" success: gave up waiting for pod 'pod-secrets-3c875fb7-e1ac-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167
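
Every failure in this run has the same shape: a test pod never reached a terminal phase within the five-minute limit, so the framework "gave up waiting for pod ... to be 'success or failure'". A sketch of that terminal-phase wait against current client-go; the framework's own helper at util.go:2167 differs in detail, and the package name and 2-second interval are assumptions:

    package e2esketch

    import (
        "context"
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodSuccessOrFailure polls the pod until it reaches a terminal
    // phase, returning an error on failure or timeout.
    func waitForPodSuccessOrFailure(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        err := wait.Poll(2*time.Second, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            switch pod.Status.Phase {
            case v1.PodSucceeded:
                return true, nil
            case v1.PodFailed:
                return false, fmt.Errorf("pod %q failed", name)
            default:
                return false, nil // still Pending or Running; keep polling
            }
        })
        if err != nil {
            return fmt.Errorf("gave up waiting for pod %q to be 'success or failure': %v", name, err)
        }
        return nil
    }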

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:196
Expected error:
    <*errors.errorString | 0xc421b0ccf0>: {
        s: "expected pod \"downwardapi-volume-1c205e48-e1bd-11e6-a411-0242ac11000b\" success: gave up waiting for pod 'downwardapi-volume-1c205e48-e1bd-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-1c205e48-e1bd-11e6-a411-0242ac11000b" success: gave up waiting for pod 'downwardapi-volume-1c205e48-e1bd-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
Expected error:
    <*errors.errorString | 0xc422327d30>: {
        s: "expected pod \"pod-secrets-71cb71ac-e1ba-11e6-a411-0242ac11000b\" success: gave up waiting for pod 'pod-secrets-71cb71ac-e1ba-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-71cb71ac-e1ba-11e6-a411-0242ac11000b" success: gave up waiting for pod 'pod-secrets-71cb71ac-e1ba-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29221

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Downward API volume should provide container's memory request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:189
Expected error:
    <*errors.errorString | 0xc42217a230>: {
        s: "expected pod \"downwardapi-volume-34887234-e1d2-11e6-a411-0242ac11000b\" success: gave up waiting for pod 'downwardapi-volume-34887234-e1d2-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-34887234-e1d2-11e6-a411-0242ac11000b" success: gave up waiting for pod 'downwardapi-volume-34887234-e1d2-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:150
Expected error:
    <*errors.errorString | 0xc422af8f60>: {
        s: "expected pod \"pod-secrets-78d8e943-e1d8-11e6-a411-0242ac11000b\" success: gave up waiting for pod 'pod-secrets-78d8e943-e1d8-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-78d8e943-e1d8-11e6-a411-0242ac11000b" success: gave up waiting for pod 'pod-secrets-78d8e943-e1d8-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:37
Expected error:
    <*errors.errorString | 0xc42133b620>: {
        s: "expected pod \"pod-configmaps-c20fae40-e1b3-11e6-a411-0242ac11000b\" success: gave up waiting for pod 'pod-configmaps-c20fae40-e1b3-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-c20fae40-e1b3-11e6-a411-0242ac11000b" success: gave up waiting for pod 'pod-configmaps-c20fae40-e1b3-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29052

Failed: [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:69
Expected error:
    <*errors.errorString | 0xc42240ef00>: {
        s: "expected pod \"pod-b5307c20-e1d6-11e6-a411-0242ac11000b\" success: gave up waiting for pod 'pod-b5307c20-e1d6-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-b5307c20-e1d6-11e6-a411-0242ac11000b" success: gave up waiting for pod 'pod-b5307c20-e1d6-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #36183

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:240
Expected error:
    <*errors.errorString | 0xc4213d8670>: {
        s: "expected pod \"\" success: gave up waiting for pod 'pod-service-account-37726fca-e1ab-11e6-a411-0242ac11000b-sjg24' to be 'success or failure' after 5m0s",
    }
    expected pod "" success: gave up waiting for pod 'pod-service-account-37726fca-e1ab-11e6-a411-0242ac11000b-sjg24' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37526

Failed: [k8s.io] Downward API volume should provide container's cpu request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:180
Expected error:
    <*errors.errorString | 0xc421b0c520>: {
        s: "expected pod \"downwardapi-volume-95accebe-e1bb-11e6-a411-0242ac11000b\" success: gave up waiting for pod 'downwardapi-volume-95accebe-e1bb-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-95accebe-e1bb-11e6-a411-0242ac11000b" success: gave up waiting for pod 'downwardapi-volume-95accebe-e1bb-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:97
Expected error:
    <*errors.errorString | 0xc42269b810>: {
        s: "expected pod \"pod-35fdc82e-e1d4-11e6-a411-0242ac11000b\" success: gave up waiting for pod 'pod-35fdc82e-e1d4-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-35fdc82e-e1d4-11e6-a411-0242ac11000b" success: gave up waiting for pod 'pod-35fdc82e-e1d4-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:269
Expected error:
    <*errors.errorString | 0xc42187a9a0>: {
        s: "expected pod \"pod-configmaps-ae791992-e1b7-11e6-a411-0242ac11000b\" success: gave up waiting for pod 'pod-configmaps-ae791992-e1b7-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-ae791992-e1b7-11e6-a411-0242ac11000b" success: gave up waiting for pod 'pod-configmaps-ae791992-e1b7-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37515

Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:101
Expected error:
    <*errors.errorString | 0xc422327bd0>: {
        s: "expected pod \"pod-b7397c0f-e1b9-11e6-a411-0242ac11000b\" success: gave up waiting for pod 'pod-b7397c0f-e1b9-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-b7397c0f-e1b9-11e6-a411-0242ac11000b" success: gave up waiting for pod 'pod-b7397c0f-e1b9-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37439

Failed: [k8s.io] HostPath should support r/w {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:82
Expected error:
    <*errors.errorString | 0xc42203cc30>: {
        s: "expected pod \"pod-host-path-test\" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-host-path-test" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:171
Expected error:
    <*errors.errorString | 0xc421911e90>: {
        s: "expected pod \"downwardapi-volume-3623ef94-e1af-11e6-a411-0242ac11000b\" success: gave up waiting for pod 'downwardapi-volume-3623ef94-e1af-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-3623ef94-e1af-11e6-a411-0242ac11000b" success: gave up waiting for pod 'downwardapi-volume-3623ef94-e1af-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:51
Expected error:
    <*errors.errorString | 0xc4220ec050>: {
        s: "expected pod \"pod-configmaps-27af804b-e1d5-11e6-a411-0242ac11000b\" success: gave up waiting for pod 'pod-configmaps-27af804b-e1d5-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-27af804b-e1d5-11e6-a411-0242ac11000b" success: gave up waiting for pod 'pod-configmaps-27af804b-e1d5-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #27245

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:153
Expected error:
    <*errors.errorString | 0xc4203d1050>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:77
Expected error:
    <*errors.errorString | 0xc421b0d2c0>: {
        s: "expected pod \"pod-secrets-e963a5c0-e1b8-11e6-a411-0242ac11000b\" success: gave up waiting for pod 'pod-secrets-e963a5c0-e1b8-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-e963a5c0-e1b8-11e6-a411-0242ac11000b" success: gave up waiting for pod 'pod-secrets-e963a5c0-e1b8-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37525

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:68
Expected error:
    <*errors.errorString | 0xc42173e3e0>: {
        s: "expected pod \"pod-configmaps-5d2139c7-e1bc-11e6-a411-0242ac11000b\" success: gave up waiting for pod 'pod-configmaps-5d2139c7-e1bc-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-5d2139c7-e1bc-11e6-a411-0242ac11000b" success: gave up waiting for pod 'pod-configmaps-5d2139c7-e1bc-11e6-a411-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:154
Expected error:
    <*errors.errorString | 0xc4203d1050>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30352 #38166

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/192/
Multiple broken tests:

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Jan 24 08:23:11.245: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:176
Jan 24 10:09:14.270: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38254

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:432
Jan 24 11:15:42.681: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37774

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:89
Expected error:
    <*errors.errorString | 0xc4226de000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:953

Issues about this test specifically: #29629 #36270 #37462

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/193/
Multiple broken tests:

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 20:10:04.301: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421d98a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 18:43:04.241: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421851400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35473

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 20:52:19.636: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421712a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38511

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:163
Expected error:
    <*errors.errorString | 0xc4203d10e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34250

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 20:39:20.922: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fbd400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32644

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4211e90d0>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-1552b54a-nkjx gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:03:34 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  }]\nkube-dns-4101612645-mdhbx                                          gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:21 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:14 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-1552b54a-nkjx            gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-1552b54a-nkjx gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:03:34 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  }]
    kube-dns-4101612645-mdhbx                                          gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:21 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:14 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-1552b54a-nkjx            gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #33883
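
The scheduler-predicate failures in this run all trip the same precondition: before the test proper, the suite requires every kube-system pod to be Running and Ready, and here three system pods on gke-bootstrap-e2e-default-pool-1552b54a-nkjx were stuck with Ready=False. A sketch of that kind of namespace scan (current client-go; the suite's own check sits at scheduler_predicates.go:93, and the package name is an assumption):

    package e2esketch

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // notReadyPods lists pods in the namespace that are not both Running
    // and Ready, mirroring the "NOT in RUNNING and READY state" report.
    func notReadyPods(cs kubernetes.Interface, namespace string) ([]string, error) {
        pods, err := cs.CoreV1().Pods(namespace).List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return nil, err
        }
        var bad []string
        for _, pod := range pods.Items {
            ready := false
            for _, c := range pod.Status.Conditions {
                if c.Type == v1.PodReady && c.Status == v1.ConditionTrue {
                    ready = true
                    break
                }
            }
            if pod.Status.Phase != v1.PodRunning || !ready {
                bad = append(bad, fmt.Sprintf("%s on %s (phase %s)", pod.Name, pod.Spec.NodeName, pod.Status.Phase))
            }
        }
        return bad, nil
    }

With all three stuck pods sitting on the same node, the precondition failures above are one node-health problem surfacing as several test failures.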

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Expected error:
    <*errors.errorString | 0xc421be9da0>: {
        s: "at least one node failed to be ready",
    }
    at least one node failed to be ready
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:397

Issues about this test specifically: #37373

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420a24e10>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-1552b54a-nkjx gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:03:34 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  }]\nkube-dns-4101612645-mdhbx                                          gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:21 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:14 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-1552b54a-nkjx            gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-1552b54a-nkjx gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:03:34 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  }]
    kube-dns-4101612645-mdhbx                                          gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:21 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:14 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-1552b54a-nkjx            gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 19:35:24.358: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420127400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31635 #38387

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc42162ac60>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-1552b54a-nkjx boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-1552b54a-nkjx boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552
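
The restart test confirms a node actually rebooted by watching its reported boot ID (Node.Status.NodeInfo.BootID) change; here the node never came back with a new boot ID before the poll expired. A sketch of that check (current client-go; the suite's version lives at restart.go:98, and the package name and 10-second interval are assumptions):

    package e2esketch

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForBootIDChange polls until the node reports a boot ID different
    // from oldBootID, i.e. until the kernel has actually restarted.
    func waitForBootIDChange(cs kubernetes.Interface, nodeName, oldBootID string, timeout time.Duration) error {
        return wait.Poll(10*time.Second, timeout, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
            if err != nil {
                return false, nil // node may be briefly unreachable mid-reboot; keep polling
            }
            return node.Status.NodeInfo.BootID != oldBootID, nil
        })
    }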

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 18:15:18.027: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421cd0a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28984 #33827 #36917

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 18:46:20.411: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420127400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API volume should provide container's cpu limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 20:06:52.105: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420bca000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36694

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 19:31:43.169: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422244a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 18:21:44.892: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421480a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28337

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 20:55:41.702: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42139d400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42190e530>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-1552b54a-nkjx gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:03:34 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  }]\nkube-dns-4101612645-mdhbx                                          gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:21 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:14 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-1552b54a-nkjx            gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-1552b54a-nkjx gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:03:34 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  }]
    kube-dns-4101612645-mdhbx                                          gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:21 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:14 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-1552b54a-nkjx            gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 20:35:47.667: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420868a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28084

Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 19:47:47.024: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421602000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37439

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 19:01:01.570: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420dc2000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32945

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 19:05:09.786: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420dc3400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31697 #36574 #39785

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 18:27:16.175: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421956000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28283

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 18:39:38.929: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215d2000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35793

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 20:21:38.526: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421843400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 20:28:15.658: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c6b400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420dfe2b0>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-1552b54a-nkjx gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:03:34 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  }]\nkube-dns-4101612645-mdhbx                                          gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:21 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:14 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-1552b54a-nkjx            gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-1552b54a-nkjx gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:03:34 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  }]
    kube-dns-4101612645-mdhbx                                          gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:21 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:14 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-1552b54a-nkjx            gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93
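
The SchedulerPredicates suite refuses to start until every pod in kube-system is running and ready; here fluentd, kube-dns, and kube-proxy stayed Ready=False for 5m0s — all three on the same node (…-1552b54a-nkjx), which points at one unhealthy node rather than three independent pod failures. An equivalent standalone precondition check (illustrative sketch, not the test's code; assumes a recent client-go):

```go
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	notReady := 0
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == v1.PodReady && c.Status == v1.ConditionTrue {
				ready = true
			}
		}
		// The suite requires phase Running AND condition Ready=True.
		if p.Status.Phase != v1.PodRunning || !ready {
			notReady++
			fmt.Printf("%s on %s phase=%s ready=%v\n", p.Name, p.Spec.NodeName, p.Status.Phase, ready)
		}
	}
	fmt.Printf("%d / %d kube-system pods not running and ready\n", notReady, len(pods.Items))
}
```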

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 20:31:37.514: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421730000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:114
Expected error:
    <*errors.errorString | 0xc4203d10e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32684 #36278 #37948

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:123
Expected error:
    <*errors.errorString | 0xc4203d10e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #36271
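
The bare "timed out waiting for the condition" string in both node-Service failures is the generic timeout error from the Kubernetes wait utilities (k8s.io/apimachinery/pkg/util/wait today; pkg/util/wait in the 1.5-era tree), surfaced when the networking utilities' reachability probe never succeeds before the deadline. A minimal reproduction of the pattern — the probe function here is a stand-in, not the real check, which curls the node port from a pod inside the cluster:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// neverReachable stands in for the real endpoint probe and
	// pretends the endpoint never answers.
	neverReachable := func() (bool, error) {
		return false, nil
	}
	// wait.Poll retries the condition every interval until the timeout,
	// then returns an error whose message is exactly
	// "timed out waiting for the condition".
	err := wait.Poll(2*time.Second, 10*time.Second, neverReachable)
	fmt.Println(err)
}
```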

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421abeeb0>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-1552b54a-nkjx gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:03:34 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  }]\nkube-dns-4101612645-mdhbx                                          gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:21 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:14 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-1552b54a-nkjx            gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-1552b54a-nkjx gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:03:34 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  }]
    kube-dns-4101612645-mdhbx                                          gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:21 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:14 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-1552b54a-nkjx            gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] InitContainer should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 18:18:31.115: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4201f9400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31936

Failed: [k8s.io] DisruptionController should create a PodDisruptionBudget {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 20:16:26.647: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b32000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37017

Failed: [k8s.io] Deployment deployment should support rollback {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 20:00:29.821: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420dc3400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28348 #36703

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4222df550>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-1552b54a-nkjx gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:03:34 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  }]\nkube-dns-4101612645-mdhbx                                          gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:21 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:14 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-1552b54a-nkjx            gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-1552b54a-nkjx gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:03:34 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  }]
    kube-dns-4101612645-mdhbx                                          gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:21 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 15:08:14 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-1552b54a-nkjx            gke-bootstrap-e2e-default-pool-1552b54a-nkjx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 12:02:36 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/197/
Multiple broken tests:

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:74
Expected error:
    <*errors.errorString | 0xc421abe050>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:477

Issues about this test specifically: #28339 #36379

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1051
Jan 26 05:32:59.106: Pods for rc e2e-test-nginx-rc were not ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1044

Issues about this test specifically: #28507 #29315 #35595

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:101
Expected error:
    <*errors.errorString | 0xc420e3ba70>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63621027999, nsec:0, loc:(*time.Location)(0x3cee280)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63621027999, nsec:0, loc:(*time.Location)(0x3cee280)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, extensions.DeploymentCondition{Type:\"Progressing\", Status:\"False\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63621028104, nsec:0, loc:(*time.Location)(0x3cee280)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63621028104, nsec:0, loc:(*time.Location)(0x3cee280)}}, Reason:\"ProgressDeadlineExceeded\", Message:\"Replica set \\\"nginx-3837372172\\\" has timed out progressing.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63621027999, nsec:0, loc:(*time.Location)(0x3cee280)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63621027999, nsec:0, loc:(*time.Location)(0x3cee280)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, extensions.DeploymentCondition{Type:"Progressing", Status:"False", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63621028104, nsec:0, loc:(*time.Location)(0x3cee280)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63621028104, nsec:0, loc:(*time.Location)(0x3cee280)}}, Reason:"ProgressDeadlineExceeded", Message:"Replica set \"nginx-3837372172\" has timed out progressing."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1335

Issues about this test specifically: #31697 #36574 #39785
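
This is the one entry where the deployment controller's own diagnosis is visible: Progressing=False with reason ProgressDeadlineExceeded, i.e. replica set "nginx-3837372172" never became available within the progress deadline, after which the test failed waiting for the status to match its expectation. Reading those conditions outside the test looks roughly like this (a sketch against apps/v1 with a recent client-go; the objects in the dump above are the 1.5-era extensions.DeploymentStatus, and the namespace/name here are placeholders):

```go
package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	d, err := cs.AppsV1().Deployments("default").Get(context.TODO(), "nginx", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// A stalled rollout shows up as a Progressing condition with
	// reason ProgressDeadlineExceeded, as in the status dump above.
	for _, c := range d.Status.Conditions {
		if c.Type == appsv1.DeploymentProgressing && c.Reason == "ProgressDeadlineExceeded" {
			fmt.Printf("deployment %s stalled: %s\n", d.Name, c.Message)
		}
	}
}
```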

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:211
Jan 26 04:26:59.127: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38439

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:176
Jan 26 06:02:40.462: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38254

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc4227d6000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:547

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
[ disks ]
+gke-bootstrap-e2e-8d2a-pvc-34b1c734-e3dd-11e6-a537-42010af00003  us-central1-a  1        pd-standard  READY

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454
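
The leaked disk's name embeds a PVC UID (…-pvc-34b1c734-…), i.e. a dynamically provisioned PersistentVolume disk that outlived cluster teardown. A sketch of how such strays can be found with the GCE Go client (assumes a current google.golang.org/api and Application Default Credentials; the project and zone values are placeholders for the CI project's own):

```go
package main

import (
	"context"
	"fmt"
	"strings"

	compute "google.golang.org/api/compute/v1"
)

func main() {
	ctx := context.Background()
	svc, err := compute.NewService(ctx) // uses Application Default Credentials
	if err != nil {
		panic(err)
	}
	// Placeholder project/zone; the CI job would use its own.
	list, err := svc.Disks.List("my-gce-project", "us-central1-a").Do()
	if err != nil {
		panic(err)
	}
	for _, d := range list.Items {
		// Dynamically provisioned PVC-backed disks carry "-pvc-" in
		// their names, like the one leaked above.
		if strings.Contains(d.Name, "-pvc-") {
			fmt.Printf("possible leaked PVC disk: %s (%s, %dGB)\n", d.Name, d.Status, d.SizeGb)
		}
	}
}
```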

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1253
Jan 26 07:06:08.774: Failed getting pod e2e-test-nginx-pod: Timeout while waiting for pods with labels "run=e2e-test-nginx-pod" to be running
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1231

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:432
Jan 26 04:02:37.759: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37774

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:68
Expected error:
    <*errors.errorString | 0xc421502010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:371

Issues about this test specifically: #27232

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/204/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Jan 28 18:03:42.965: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:59
Expected error:
    <*errors.StatusError | 0xc422887500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'ssh: rejected: connect failed (Connection timed out)'\\nTrying to reach: 'http://10.96.4.131:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'\") has prevented the request from succeeding (post services rs-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rs-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'http://10.96.4.131:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'http://10.96.4.131:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'") has prevented the request from succeeding (post services rs-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:227

Issues about this test specifically: #27397 #27917 #31592
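
The 503 here is not the resource consumer itself failing: the request travels test→apiserver→node through the apiserver's service proxy ("post services rs-ctrl"), and what refused is the apiserver's SSH tunnel to the node ("ssh: rejected: connect failed") — the mechanism GKE masters of this era used to reach nodes. The test drives the consumer with POSTs through that proxy; a rough sketch of the same proxy path using client-go's convenience wrapper (ProxyGet issues a GET rather than the test's POST, and the namespace is a placeholder; service name, port, path, and parameters are taken from the log):

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Proxy a request to the rs-ctrl service via the apiserver. A broken
	// apiserver->node tunnel surfaces here as a 503, as in the log above.
	body, err := cs.CoreV1().Services("e2e-tests-hpa").ProxyGet(
		"http", "rs-ctrl", "8080", "/ConsumeMem",
		map[string]string{"megabytes": "0", "durationSec": "30", "requestSizeMegabytes": "100"},
	).DoRaw(context.TODO())
	if err != nil {
		fmt.Println("proxy request failed:", err)
		return
	}
	fmt.Println(string(body))
}
```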

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:310
Jan 28 15:58:32.492: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:358
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc422ecaca0>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:326

Issues about this test specifically: #37479

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:81
Jan 28 17:47:35.114: Did not get expected responses within the timeout period of 120.00 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:163

Issues about this test specifically: #30981

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:304
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.227.202 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-dqgvm execpod-sourceip-gke-bootstrap-e2e-default-pool-e01188bf-pj5vp3 -- /bin/sh -c wget -T 30 -qO- 10.99.251.185:8080 | grep client_address] []  <nil>  wget: download timed out\n [] <nil> 0xc42129d020 exit status 1 <nil> <nil> true [0xc4214d2098 0xc4214d20b8 0xc4214d20d0] [0xc4214d2098 0xc4214d20b8 0xc4214d20d0] [0xc4214d20b0 0xc4214d20c8] [0x9728b0 0x9728b0] 0xc4211fccc0 <nil>}:\nCommand stdout:\n\nstderr:\nwget: download timed out\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.227.202 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-dqgvm execpod-sourceip-gke-bootstrap-e2e-default-pool-e01188bf-pj5vp3 -- /bin/sh -c wget -T 30 -qO- 10.99.251.185:8080 | grep client_address] []  <nil>  wget: download timed out
     [] <nil> 0xc42129d020 exit status 1 <nil> <nil> true [0xc4214d2098 0xc4214d20b8 0xc4214d20d0] [0xc4214d2098 0xc4214d20b8 0xc4214d20d0] [0xc4214d20b0 0xc4214d20c8] [0x9728b0 0x9728b0] 0xc4211fccc0 <nil>}:
    Command stdout:
    
    stderr:
    wget: download timed out
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:89
Expected error:
    <*errors.errorString | 0xc422996900>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:953

Issues about this test specifically: #29629 #36270 #37462

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc421b54d50>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:328

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:47
Jan 28 18:25:38.650: Did not get expected responses within the timeout period of 120.00 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:149

Issues about this test specifically: #32087

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/208/
Multiple broken tests:

Failed: kubectl version {e2e.go}

exit status 1

Issues about this test specifically: #34378

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: list nodes {e2e.go}

exit status 1

Issues about this test specifically: #38667

Failed: UpgradeTest {e2e.go}

exit status 1

Issues about this test specifically: #37745 #40486
