
kubernetes-e2e-gke-container_vm-1.4-container_vm-1.5-upgrade-master: broken test run #37758

Closed · k8s-github-robot opened this issue Dec 1, 2016 · 6 comments
Labels: area/test-infra, kind/flake, priority/backlog

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-container_vm-1.4-container_vm-1.5-upgrade-master/122/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821619790>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #36914
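
Every SchedulerPredicates failure in this run fails the same precondition at scheduler_predicates.go:226: waiting for the kube-system pods to settle times out after 5m0s before the actual predicate is exercised. A minimal diagnostic sketch, assuming kubectl is pointed at the affected cluster; the pod name is a placeholder:

    # List kube-system pods together with the nodes they were scheduled to
    kubectl get pods --namespace=kube-system -o wide
    # Describe any pod that is not Running/Ready to see its events (name is a placeholder)
    kubectl describe pod <pod-name> --namespace=kube-system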

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820c31d70>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #31918

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:219
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.208.184 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-uelmn] []  0xc821f12d20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821f13360 exit status 1 <nil> true [0xc8211f40b8 0xc8211f40e0 0xc8211f40f0] [0xc8211f40b8 0xc8211f40e0 0xc8211f40f0] [0xc8211f40c0 0xc8211f40d8 0xc8211f40e8] [0xafa5c0 0xafa720 0xafa720] 0xc821bb74a0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.208.184 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-uelmn] []  0xc821f12d20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821f13360 exit status 1 <nil> true [0xc8211f40b8 0xc8211f40e0 0xc8211f40f0] [0xc8211f40b8 0xc8211f40e0 0xc8211f40f0] [0xc8211f40c0 0xc8211f40d8 0xc8211f40e8] [0xafa5c0 0xafa720 0xafa720] 0xc821bb74a0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941
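
The Kubectl client failures in this run all abort during cleanup with the same stderr: the skewed kubectl refuses delete --grace-period=0 unless --force is also given. A minimal sketch of the invocation the error message asks for, reusing the namespace from this run and assuming the same manifest is piped on stdin:

    # Newer kubectl requires --force when deleting with --grace-period=0
    kubectl delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-uelmn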

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820176b80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177
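
The Job failure is a bare timeout ("timed out waiting for the condition") at job.go:197, so the dump above does not say which condition was missed. A minimal follow-up sketch, with the job and namespace names as placeholders since they do not appear in the log:

    # Show job status and recent events (job and namespace names are placeholders)
    kubectl describe job <job-name> --namespace=<test-namespace>
    # Pods created by a Job carry the job-name label
    kubectl get pods -l job-name=<job-name> --namespace=<test-namespace>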

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.208.184 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-e9jns] []  0xc8211d4b20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8211d5240 exit status 1 <nil> true [0xc820b8a128 0xc820b8a1d8 0xc820b8a1f0] [0xc820b8a128 0xc820b8a1d8 0xc820b8a1f0] [0xc820b8a138 0xc820b8a1c0 0xc820b8a1e0] [0xafa5c0 0xafa720 0xafa720] 0xc8216679e0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.208.184 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-e9jns] []  0xc8211d4b20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8211d5240 exit status 1 <nil> true [0xc820b8a128 0xc820b8a1d8 0xc820b8a1f0] [0xc820b8a128 0xc820b8a1d8 0xc820b8a1f0] [0xc820b8a138 0xc820b8a1c0 0xc820b8a1e0] [0xafa5c0 0xafa720 0xafa720] 0xc8216679e0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc820176b80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820de1b00>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28019

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:792
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.208.184 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-ole5a] []  0xc821926e80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8219274c0 exit status 1 <nil> true [0xc8211f4908 0xc8211f4930 0xc8211f4948] [0xc8211f4908 0xc8211f4930 0xc8211f4948] [0xc8211f4910 0xc8211f4928 0xc8211f4940] [0xafa5c0 0xafa720 0xafa720] 0xc82160c720}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.208.184 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-ole5a] []  0xc821926e80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8219274c0 exit status 1 <nil> true [0xc8211f4908 0xc8211f4930 0xc8211f4948] [0xc8211f4908 0xc8211f4930 0xc8211f4948] [0xc8211f4910 0xc8211f4928 0xc8211f4940] [0xafa5c0 0xafa720 0xafa720] 0xc82160c720}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820e7c360>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8208f76c0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.208.184 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-soeqp] []  0xc820d8a5e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820d8b040 exit status 1 <nil> true [0xc82017add0 0xc82017ae38 0xc82017ae68] [0xc82017add0 0xc82017ae38 0xc82017ae68] [0xc82017ade8 0xc82017ae28 0xc82017ae58] [0xafa5c0 0xafa720 0xafa720] 0xc821a7a5a0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.208.184 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-soeqp] []  0xc820d8a5e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820d8b040 exit status 1 <nil> true [0xc82017add0 0xc82017ae38 0xc82017ae68] [0xc82017add0 0xc82017ae38 0xc82017ae68] [0xc82017ade8 0xc82017ae28 0xc82017ae58] [0xafa5c0 0xafa720 0xafa720] 0xc821a7a5a0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8209f4a30>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #35279

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:478
Expected error:
    <*errors.errorString | 0xc821a3c2f0>: {
        s: "timeout waiting 10m0s for cluster size to be 4",
    }
    timeout waiting 10m0s for cluster size to be 4
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:471

Issues about this test specifically: #27470 #30156 #34304 #37620
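
The Resize failure is a 10m0s timeout waiting for the cluster to reach 4 nodes, meaning the added node never registered with the API server in time. A quick check against the expected size, assuming access to the same cluster:

    # Count the nodes currently registered and compare with the expected cluster size (4)
    kubectl get nodes --no-headers | wc -l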

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820837e80>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28091

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8209b2ee0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821085fe0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820eebf70>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820cfb400>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820be5cf0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8217bca40>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82139bf20>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.208.184 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-372ct] []  0xc821496360  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821497340 exit status 1 <nil> true [0xc82017a7e0 0xc82017a810 0xc82017a838] [0xc82017a7e0 0xc82017a810 0xc82017a838] [0xc82017a7e8 0xc82017a808 0xc82017a820] [0xafa5c0 0xafa720 0xafa720] 0xc821daf4a0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.208.184 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-372ct] []  0xc821496360  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821497340 exit status 1 <nil> true [0xc82017a7e0 0xc82017a810 0xc82017a838] [0xc82017a7e0 0xc82017a810 0xc82017a838] [0xc82017a7e8 0xc82017a808 0xc82017a820] [0xafa5c0 0xafa720 0xafa720] 0xc821daf4a0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.208.184 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-88bp3] []  0xc821783380  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8217839c0 exit status 1 <nil> true [0xc82017a178 0xc82017a208 0xc82017a298] [0xc82017a178 0xc82017a208 0xc82017a298] [0xc82017a180 0xc82017a200 0xc82017a248] [0xafa5c0 0xafa720 0xafa720] 0xc82167baa0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.208.184 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-88bp3] []  0xc821783380  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8217839c0 exit status 1 <nil> true [0xc82017a178 0xc82017a208 0xc82017a298] [0xc82017a178 0xc82017a208 0xc82017a298] [0xc82017a180 0xc82017a200 0xc82017a248] [0xafa5c0 0xafa720 0xafa720] 0xc82167baa0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.208.184 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-jq9b9] []  0xc821d836e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821d83da0 exit status 1 <nil> true [0xc8213021b0 0xc821302220 0xc821302250] [0xc8213021b0 0xc821302220 0xc821302250] [0xc8213021c0 0xc821302200 0xc821302240] [0xafa5c0 0xafa720 0xafa720] 0xc8218d0c00}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.208.184 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-jq9b9] []  0xc821d836e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821d83da0 exit status 1 <nil> true [0xc8213021b0 0xc821302220 0xc821302250] [0xc8213021b0 0xc821302220 0xc821302250] [0xc8213021c0 0xc821302200 0xc821302240] [0xafa5c0 0xafa720 0xafa720] 0xc8218d0c00}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.208.184 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-zix2d] []  0xc820c6c300  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820c6cc60 exit status 1 <nil> true [0xc8200c2440 0xc8200c2468 0xc8200c2480] [0xc8200c2440 0xc8200c2468 0xc8200c2480] [0xc8200c2448 0xc8200c2460 0xc8200c2478] [0xafa5c0 0xafa720 0xafa720] 0xc820c1bd40}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.208.184 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-zix2d] []  0xc820c6c300  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820c6cc60 exit status 1 <nil> true [0xc8200c2440 0xc8200c2468 0xc8200c2480] [0xc8200c2440 0xc8200c2468 0xc8200c2480] [0xc8200c2448 0xc8200c2460 0xc8200c2478] [0xafa5c0 0xafa720 0xafa720] 0xc820c1bd40}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.208.184 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-aoykd -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"resourceVersion\":\"3581\", \"creationTimestamp\":\"2016-11-29T19:28:18Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-aoykd\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-aoykd/services/redis-master\", \"uid\":\"fb31e654-b669-11e6-8a16-42010af0001f\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"port\":6379, \"targetPort\":\"redis-server\", \"protocol\":\"TCP\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.244.226\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc820741cc0 exit status 1 <nil> true [0xc8200374e8 0xc820037500 0xc820037518] [0xc8200374e8 0xc820037500 0xc820037518] [0xc8200374f8 0xc820037510] [0xafa720 0xafa720] 0xc820eca8a0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"resourceVersion\":\"3581\", \"creationTimestamp\":\"2016-11-29T19:28:18Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-aoykd\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-aoykd/services/redis-master\", \"uid\":\"fb31e654-b669-11e6-8a16-42010af0001f\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"port\":6379, \"targetPort\":\"redis-server\", \"protocol\":\"TCP\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.244.226\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.208.184 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-aoykd -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"resourceVersion":"3581", "creationTimestamp":"2016-11-29T19:28:18Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-aoykd", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-aoykd/services/redis-master", "uid":"fb31e654-b669-11e6-8a16-42010af0001f"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"port":6379, "targetPort":"redis-server", "protocol":"TCP"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.244.226", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc820741cc0 exit status 1 <nil> true [0xc8200374e8 0xc820037500 0xc820037518] [0xc8200374e8 0xc820037500 0xc820037518] [0xc8200374f8 0xc820037510] [0xafa720 0xafa720] 0xc820eca8a0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"resourceVersion":"3581", "creationTimestamp":"2016-11-29T19:28:18Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-aoykd", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-aoykd/services/redis-master", "uid":"fb31e654-b669-11e6-8a16-42010af0001f"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"port":6379, "targetPort":"redis-server", "protocol":"TCP"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.244.226", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741
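
The Kubectl apply failure differs from the cleanup errors elsewhere in this run: the object dumped by the jsonpath error shows the redis-master service with type ClusterIP, and .spec.ports[0].nodePort is only populated for NodePort and LoadBalancer services, so the template has nothing to read. A minimal sketch of that check, reusing the namespace from this run:

    # nodePort is only set for NodePort/LoadBalancer services; this run shows type ClusterIP
    kubectl get service redis-master --namespace=e2e-tests-kubectl-aoykd -o jsonpath='{.spec.type}'
    kubectl get service redis-master --namespace=e2e-tests-kubectl-aoykd -o jsonpath='{.spec.ports[0]}'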

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820fbacf0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820e7c2d0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820bfdc20>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:430
Nov 29 17:33:41.219: Couldn't restore the original cluster size: timeout waiting 10m0s for cluster size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:421

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:756
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.208.184 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-airk1] []  0xc821bcaa80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821bcb160 exit status 1 <nil> true [0xc8211f40b8 0xc8211f40e0 0xc8211f40f0] [0xc8211f40b8 0xc8211f40e0 0xc8211f40f0] [0xc8211f40c0 0xc8211f40d8 0xc8211f40e8] [0xafa5c0 0xafa720 0xafa720] 0xc8213c7860}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.208.184 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-airk1] []  0xc821bcaa80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821bcb160 exit status 1 <nil> true [0xc8211f40b8 0xc8211f40e0 0xc8211f40f0] [0xc8211f40b8 0xc8211f40e0 0xc8211f40f0] [0xc8211f40c0 0xc8211f40d8 0xc8211f40e8] [0xafa5c0 0xafa720 0xafa720] 0xc8213c7860}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.208.184 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-zj6wb] []  0xc820cd9620  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820d68080 exit status 1 <nil> true [0xc8200c29d0 0xc8200c29f8 0xc8200c2a08] [0xc8200c29d0 0xc8200c29f8 0xc8200c2a08] [0xc8200c29d8 0xc8200c29f0 0xc8200c2a00] [0xafa5c0 0xafa720 0xafa720] 0xc820c1aea0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.208.184 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-zj6wb] []  0xc820cd9620  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820d68080 exit status 1 <nil> true [0xc8200c29d0 0xc8200c29f8 0xc8200c2a08] [0xc8200c29d0 0xc8200c29f8 0xc8200c2a08] [0xc8200c29d8 0xc8200c29f0 0xc8200c2a00] [0xafa5c0 0xafa720 0xafa720] 0xc820c1aea0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820176b80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-container_vm-1.4-container_vm-1.5-upgrade-master/124/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.253.167 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-zyci7] []  0xc820ba6880  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820ba7120 exit status 1 <nil> true [0xc8211b2120 0xc8211b2148 0xc8211b2158] [0xc8211b2120 0xc8211b2148 0xc8211b2158] [0xc8211b2128 0xc8211b2140 0xc8211b2150] [0xafa5c0 0xafa720 0xafa720] 0xc82124c2a0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.253.167 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-zyci7] []  0xc820ba6880  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820ba7120 exit status 1 <nil> true [0xc8211b2120 0xc8211b2148 0xc8211b2158] [0xc8211b2120 0xc8211b2148 0xc8211b2158] [0xc8211b2128 0xc8211b2140 0xc8211b2150] [0xafa5c0 0xafa720 0xafa720] 0xc82124c2a0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:756
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.253.167 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-wtsuz] []  0xc820852a20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820853160 exit status 1 <nil> true [0xc820036310 0xc820036338 0xc820036348] [0xc820036310 0xc820036338 0xc820036348] [0xc820036318 0xc820036330 0xc820036340] [0xafa5c0 0xafa720 0xafa720] 0xc820bad320}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.253.167 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-wtsuz] []  0xc820852a20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820853160 exit status 1 <nil> true [0xc820036310 0xc820036338 0xc820036348] [0xc820036310 0xc820036338 0xc820036348] [0xc820036318 0xc820036330 0xc820036340] [0xafa5c0 0xafa720 0xafa720] 0xc820bad320}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82083e1e0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821185080>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820849690>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #35279

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:430
Expected error:
    <*errors.errorString | 0xc821d135d0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:427

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.253.167 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-0qdc1] []  0xc820bc5ae0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8209a8320 exit status 1 <nil> true [0xc82017c230 0xc82017c260 0xc82017c280] [0xc82017c230 0xc82017c260 0xc82017c280] [0xc82017c238 0xc82017c258 0xc82017c270] [0xafa5c0 0xafa720 0xafa720] 0xc8205eea20}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.253.167 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-0qdc1] []  0xc820bc5ae0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8209a8320 exit status 1 <nil> true [0xc82017c230 0xc82017c260 0xc82017c280] [0xc82017c230 0xc82017c260 0xc82017c280] [0xc82017c238 0xc82017c258 0xc82017c270] [0xafa5c0 0xafa720 0xafa720] 0xc8205eea20}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28371 #29604 #37496
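
The kubectl teardown failures in this run all hit the same version-skew behavior: the newer client refuses `--grace-period=0` on its own, exactly as the stderr says. The immediate-deletion form it expects looks roughly like the sketch below; `$TEST_NAMESPACE` is a stand-in for the per-test namespace (e.g. e2e-tests-kubectl-0qdc1 above), and the manifest is fed on stdin in the real test.

```sh
# Immediate deletion now requires --force alongside --grace-period=0;
# without it kubectl exits 1 with the error quoted above.
kubectl delete -f - --grace-period=0 --force --namespace="$TEST_NAMESPACE"
```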

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820e19b50>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821844d20>: {
        s: "Namespace e2e-tests-services-u91vq is active",
    }
    Namespace e2e-tests-services-u91vq is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #30078 #30142
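
The "Namespace e2e-tests-services-u91vq is active" variants are a different pre-test gate: SchedulerPredicates first waits for namespaces left behind by earlier tests to finish terminating. A quick way to spot such leftovers by hand (a sketch; namespace names differ per run) is:

```sh
# e2e namespaces that are still Active or stuck in Terminating will keep
# the scheduler predicate tests from starting.
kubectl get namespaces | grep e2e-tests-
```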

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82201eaa0>: {
        s: "Namespace e2e-tests-services-u91vq is active",
    }
    Namespace e2e-tests-services-u91vq is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82179fbb0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28091

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821969950>: {
        s: "Namespace e2e-tests-services-u91vq is active",
    }
    Namespace e2e-tests-services-u91vq is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821185bf0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #34223

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.253.167 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-3nhsj] []  0xc82163fc00  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc822042200 exit status 1 <nil> true [0xc8202ea8b8 0xc8202ea938 0xc8202ea970] [0xc8202ea8b8 0xc8202ea938 0xc8202ea970] [0xc8202ea8d8 0xc8202ea920 0xc8202ea950] [0xafa5c0 0xafa720 0xafa720] 0xc821fa9560}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.253.167 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-3nhsj] []  0xc82163fc00  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc822042200 exit status 1 <nil> true [0xc8202ea8b8 0xc8202ea938 0xc8202ea970] [0xc8202ea8b8 0xc8202ea938 0xc8202ea970] [0xc8202ea8d8 0xc8202ea920 0xc8202ea950] [0xafa5c0 0xafa720 0xafa720] 0xc821fa9560}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc820174a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821adcc40>: {
        s: "Namespace e2e-tests-services-u91vq is active",
    }
    Namespace e2e-tests-services-u91vq is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #29516

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820174a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:430
Expected error:
    <*errors.errorString | 0xc82195a850>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:427

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.253.167 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-x896a -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.253.194\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"uid\":\"77accd6b-b6d3-11e6-913d-42010af00031\", \"resourceVersion\":\"28705\", \"creationTimestamp\":\"2016-11-30T08:03:24Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-x896a\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-x896a/services/redis-master\"}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc821e9a620 exit status 1 <nil> true [0xc8200361f0 0xc820036240 0xc8200362b0] [0xc8200361f0 0xc820036240 0xc8200362b0] [0xc820036218 0xc820036290] [0xafa720 0xafa720] 0xc821ac4240}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.253.194\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"uid\":\"77accd6b-b6d3-11e6-913d-42010af00031\", \"resourceVersion\":\"28705\", \"creationTimestamp\":\"2016-11-30T08:03:24Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-x896a\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-x896a/services/redis-master\"}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.253.167 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-x896a -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.253.194", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"uid":"77accd6b-b6d3-11e6-913d-42010af00031", "resourceVersion":"28705", "creationTimestamp":"2016-11-30T08:03:24Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-x896a", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-x896a/services/redis-master"}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc821e9a620 exit status 1 <nil> true [0xc8200361f0 0xc820036240 0xc8200362b0] [0xc8200361f0 0xc820036240 0xc8200362b0] [0xc820036218 0xc820036290] [0xafa720 0xafa720] 0xc821ac4240}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.253.194", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"uid":"77accd6b-b6d3-11e6-913d-42010af00031", "resourceVersion":"28705", "creationTimestamp":"2016-11-30T08:03:24Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-x896a", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-x896a/services/redis-master"}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741
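
Unlike the deletion failures, this one is a data mismatch: the redis-master Service in the dump is type ClusterIP, so `{.spec.ports[0].nodePort}` has nothing to resolve. Re-running the same jsonpath queries by hand (service and namespace names taken from the log above) shows the shape of the problem:

```sh
# A ClusterIP service exposes no nodePort, so the second query fails the
# same way the test's kubectl invocation did.
kubectl get service redis-master --namespace=e2e-tests-kubectl-x896a -o jsonpath='{.spec.type}'
kubectl get service redis-master --namespace=e2e-tests-kubectl-x896a -o jsonpath='{.spec.ports[0].nodePort}'
```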

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82155f5b0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28019

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.253.167 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-1pxhp] []  0xc821b5d4a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821b5db40 exit status 1 <nil> true [0xc82017c1a0 0xc82017c210 0xc82017c230] [0xc82017c1a0 0xc82017c210 0xc82017c230] [0xc82017c1a8 0xc82017c208 0xc82017c228] [0xafa5c0 0xafa720 0xafa720] 0xc820ff8240}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.253.167 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-1pxhp] []  0xc821b5d4a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821b5db40 exit status 1 <nil> true [0xc82017c1a0 0xc82017c210 0xc82017c230] [0xc82017c1a0 0xc82017c210 0xc82017c230] [0xc82017c1a8 0xc82017c208 0xc82017c228] [0xafa5c0 0xafa720 0xafa720] 0xc820ff8240}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.253.167 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-o3gss] []  0xc821c65b80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821e9a220 exit status 1 <nil> true [0xc8202ea8d0 0xc8202ea908 0xc8202ea920] [0xc8202ea8d0 0xc8202ea908 0xc8202ea920] [0xc8202ea8d8 0xc8202ea900 0xc8202ea918] [0xafa5c0 0xafa720 0xafa720] 0xc82228f140}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.253.167 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-o3gss] []  0xc821c65b80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821e9a220 exit status 1 <nil> true [0xc8202ea8d0 0xc8202ea908 0xc8202ea920] [0xc8202ea8d0 0xc8202ea908 0xc8202ea920] [0xc8202ea8d8 0xc8202ea900 0xc8202ea918] [0xafa5c0 0xafa720 0xafa720] 0xc82228f140}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82115deb0>: {
        s: "Namespace e2e-tests-services-u91vq is active",
    }
    Namespace e2e-tests-services-u91vq is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821bd13b0>: {
        s: "Namespace e2e-tests-services-u91vq is active",
    }
    Namespace e2e-tests-services-u91vq is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82186bbc0>: {
        s: "Namespace e2e-tests-services-u91vq is active",
    }
    Namespace e2e-tests-services-u91vq is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82194f6e0>: {
        s: "Namespace e2e-tests-services-u91vq is active",
    }
    Namespace e2e-tests-services-u91vq is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821d497f0>: {
        s: "Namespace e2e-tests-services-u91vq is active",
    }
    Namespace e2e-tests-services-u91vq is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:219
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.253.167 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-g2i1a] []  0xc820bd4e20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820bd5a60 exit status 1 <nil> true [0xc82017c128 0xc82017c198 0xc82017c1a8] [0xc82017c128 0xc82017c198 0xc82017c1a8] [0xc82017c148 0xc82017c190 0xc82017c1a0] [0xafa5c0 0xafa720 0xafa720] 0xc8214c9680}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.253.167 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-g2i1a] []  0xc820bd4e20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820bd5a60 exit status 1 <nil> true [0xc82017c128 0xc82017c198 0xc82017c1a8] [0xc82017c128 0xc82017c198 0xc82017c1a8] [0xc82017c148 0xc82017c190 0xc82017c1a0] [0xafa5c0 0xafa720 0xafa720] 0xc8214c9680}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820fa9da0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:792
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.253.167 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-tsfdx] []  0xc821c1fda0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8219bc3a0 exit status 1 <nil> true [0xc82017cfd8 0xc82017d008 0xc82017d018] [0xc82017cfd8 0xc82017d008 0xc82017d018] [0xc82017cfe0 0xc82017cff8 0xc82017d010] [0xafa5c0 0xafa720 0xafa720] 0xc820fce000}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.253.167 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-tsfdx] []  0xc821c1fda0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8219bc3a0 exit status 1 <nil> true [0xc82017cfd8 0xc82017d008 0xc82017d018] [0xc82017cfd8 0xc82017d008 0xc82017d018] [0xc82017cfe0 0xc82017cff8 0xc82017d010] [0xafa5c0 0xafa720 0xafa720] 0xc820fce000}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.253.167 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-0edkp] []  0xc820599a80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820a501e0 exit status 1 <nil> true [0xc8211b2130 0xc8211b2158 0xc8211b2170] [0xc8211b2130 0xc8211b2158 0xc8211b2170] [0xc8211b2138 0xc8211b2150 0xc8211b2168] [0xafa5c0 0xafa720 0xafa720] 0xc820f37a40}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.253.167 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-0edkp] []  0xc820599a80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820a501e0 exit status 1 <nil> true [0xc8211b2130 0xc8211b2158 0xc8211b2170] [0xc8211b2130 0xc8211b2158 0xc8211b2170] [0xc8211b2138 0xc8211b2150 0xc8211b2168] [0xafa5c0 0xafa720 0xafa720] 0xc820f37a40}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:428
Expected error:
    <*errors.errorString | 0xc821246cc0>: {
        s: "error while stopping RC: service2: Scaling the resource failed with: client: etcd cluster is unavailable or misconfigured; Current resource version 26501",
    }
    error while stopping RC: service2: Scaling the resource failed with: client: etcd cluster is unavailable or misconfigured; Current resource version 26501
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:419

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
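
Here the RC scale-down failed because etcd was unreachable while the apiserver was restarting. A coarse post-restart health probe for clusters of this vintage (a sketch, not part of the test) is:

```sh
# Confirm the control-plane components, including etcd, report Healthy
# before retrying the scale operation.
kubectl get componentstatuses
```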

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820174a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.253.167 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-by462] []  0xc821be3340  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821be3940 exit status 1 <nil> true [0xc820036708 0xc820036788 0xc8200367c0] [0xc820036708 0xc820036788 0xc8200367c0] [0xc820036718 0xc820036758 0xc8200367a8] [0xafa5c0 0xafa720 0xafa720] 0xc8224d0540}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.253.167 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-by462] []  0xc821be3340  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821be3940 exit status 1 <nil> true [0xc820036708 0xc820036788 0xc8200367c0] [0xc820036708 0xc820036788 0xc8200367c0] [0xc820036718 0xc820036758 0xc8200367a8] [0xafa5c0 0xafa720 0xafa720] 0xc8224d0540}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #31151 #35586

@k8s-github-robot k8s-github-robot added kind/flake Categorizes issue or PR as related to a flaky test. priority/backlog Higher priority than priority/awaiting-more-evidence. area/test-infra labels Dec 1, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-container_vm-1.4-container_vm-1.5-upgrade-master/125/

Multiple broken tests:

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc820174b80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.130.253 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-fjsqr] []  0xc821e8f200  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821e8f9e0 exit status 1 <nil> true [0xc8211be1e0 0xc8211be240 0xc8211be250] [0xc8211be1e0 0xc8211be240 0xc8211be250] [0xc8211be200 0xc8211be238 0xc8211be248] [0xafa5c0 0xafa720 0xafa720] 0xc8216b0120}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.130.253 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-fjsqr] []  0xc821e8f200  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821e8f9e0 exit status 1 <nil> true [0xc8211be1e0 0xc8211be240 0xc8211be250] [0xc8211be1e0 0xc8211be240 0xc8211be250] [0xc8211be200 0xc8211be238 0xc8211be248] [0xafa5c0 0xafa720 0xafa720] 0xc8216b0120}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.130.253 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-xs1vw] []  0xc820c31860  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820c31e60 exit status 1 <nil> true [0xc8202d0ec0 0xc8202d0ef0 0xc8202d0f00] [0xc8202d0ec0 0xc8202d0ef0 0xc8202d0f00] [0xc8202d0ec8 0xc8202d0ee8 0xc8202d0ef8] [0xafa5c0 0xafa720 0xafa720] 0xc821144ba0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.130.253 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-xs1vw] []  0xc820c31860  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820c31e60 exit status 1 <nil> true [0xc8202d0ec0 0xc8202d0ef0 0xc8202d0f00] [0xc8202d0ec0 0xc8202d0ef0 0xc8202d0f00] [0xc8202d0ec8 0xc8202d0ee8 0xc8202d0ef8] [0xafa5c0 0xafa720 0xafa720] 0xc821144ba0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:219
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.130.253 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-82qjd] []  0xc8207742e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820774f60 exit status 1 <nil> true [0xc820037508 0xc820037550 0xc820037570] [0xc820037508 0xc820037550 0xc820037570] [0xc820037510 0xc820037540 0xc820037560] [0xafa5c0 0xafa720 0xafa720] 0xc820785560}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.130.253 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-82qjd] []  0xc8207742e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820774f60 exit status 1 <nil> true [0xc820037508 0xc820037550 0xc820037570] [0xc820037508 0xc820037550 0xc820037570] [0xc820037510 0xc820037540 0xc820037560] [0xafa5c0 0xafa720 0xafa720] 0xc820785560}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:756
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.130.253 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-g33k3] []  0xc821aa4d80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821aa5420 exit status 1 <nil> true [0xc8200362c8 0xc820036308 0xc820036330] [0xc8200362c8 0xc820036308 0xc820036330] [0xc8200362d0 0xc8200362e8 0xc820036310] [0xafa5c0 0xafa720 0xafa720] 0xc822354780}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.130.253 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-g33k3] []  0xc821aa4d80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821aa5420 exit status 1 <nil> true [0xc8200362c8 0xc820036308 0xc820036330] [0xc8200362c8 0xc820036308 0xc820036330] [0xc8200362d0 0xc8200362e8 0xc820036310] [0xafa5c0 0xafa720 0xafa720] 0xc822354780}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:452
Expected error:
    <*errors.errorString | 0xc820ad5a40>: {
        s: "failed to wait for pods responding: pod with UID a39d37cb-b6fd-11e6-8c45-42010af00039 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-fzw6f/pods 10517} [{{ } {my-hostname-delete-node-7x95t my-hostname-delete-node- e2e-tests-resize-nodes-fzw6f /api/v1/namespaces/e2e-tests-resize-nodes-fzw6f/pods/my-hostname-delete-node-7x95t a39d1925-b6fd-11e6-8c45-42010af00039 10184 0 2016-11-30 05:05:16 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-fzw6f\",\"name\":\"my-hostname-delete-node\",\"uid\":\"a39a3daf-b6fd-11e6-8c45-42010af00039\",\"apiVersion\":\"v1\",\"resourceVersion\":\"10166\"}}\n] [{v1 ReplicationController my-hostname-delete-node a39a3daf-b6fd-11e6-8c45-42010af00039 0xc8212c37f7}] [] } {[{default-token-c0xj7 {<nil> <nil> <nil> <nil> <nil> 0xc82139d170 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-c0xj7 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8212c3910 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-694a42a4-m17b 0xc821373000 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 05:05:16 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 05:05:18 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 05:05:16 -0800 PST  }]   10.240.0.7 10.124.2.27 2016-11-30 05:05:16 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821421720 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://e13763243368f958e4b26c90a2863b5c44faa6e9be28c20dbc9a23569e1bc572}]}} {{ } {my-hostname-delete-node-d5dfg my-hostname-delete-node- e2e-tests-resize-nodes-fzw6f /api/v1/namespaces/e2e-tests-resize-nodes-fzw6f/pods/my-hostname-delete-node-d5dfg a39cdae0-b6fd-11e6-8c45-42010af00039 10182 0 2016-11-30 05:05:16 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-fzw6f\",\"name\":\"my-hostname-delete-node\",\"uid\":\"a39a3daf-b6fd-11e6-8c45-42010af00039\",\"apiVersion\":\"v1\",\"resourceVersion\":\"10166\"}}\n] [{v1 ReplicationController my-hostname-delete-node a39a3daf-b6fd-11e6-8c45-42010af00039 0xc8212c3c87}] [] } {[{default-token-c0xj7 {<nil> <nil> <nil> <nil> <nil> 0xc82139d1d0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-c0xj7 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8212c3e20 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-694a42a4-rpsp 0xc821373140 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 05:05:16 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 
2016-11-30 05:05:18 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 05:05:16 -0800 PST  }]   10.240.0.6 10.124.1.50 2016-11-30 05:05:16 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821421740 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://a563438ca006f664ab53bac1880365678e3bb1647ab39a7e4c0a523b0a1cb239}]}} {{ } {my-hostname-delete-node-kf3fz my-hostname-delete-node- e2e-tests-resize-nodes-fzw6f /api/v1/namespaces/e2e-tests-resize-nodes-fzw6f/pods/my-hostname-delete-node-kf3fz e09092a9-b6fd-11e6-8c45-42010af00039 10357 0 2016-11-30 05:06:59 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-fzw6f\",\"name\":\"my-hostname-delete-node\",\"uid\":\"a39a3daf-b6fd-11e6-8c45-42010af00039\",\"apiVersion\":\"v1\",\"resourceVersion\":\"10254\"}}\n] [{v1 ReplicationController my-hostname-delete-node a39a3daf-b6fd-11e6-8c45-42010af00039 0xc820d52107}] [] } {[{default-token-c0xj7 {<nil> <nil> <nil> <nil> <nil> 0xc82139d230 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-c0xj7 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820d52210 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-694a42a4-rpsp 0xc8213732c0 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 05:06:59 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 05:07:01 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 05:06:59 -0800 PST  }]   10.240.0.6 10.124.1.51 2016-11-30 05:06:59 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821421760 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://03f44c37abc20072e431bc65b45c364a29f165b36d7c60333f5dc76563c2732a}]}}]}",
    }
    failed to wait for pods responding: pod with UID a39d37cb-b6fd-11e6-8c45-42010af00039 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-fzw6f/pods 10517} [{{ } {my-hostname-delete-node-7x95t my-hostname-delete-node- e2e-tests-resize-nodes-fzw6f /api/v1/namespaces/e2e-tests-resize-nodes-fzw6f/pods/my-hostname-delete-node-7x95t a39d1925-b6fd-11e6-8c45-42010af00039 10184 0 2016-11-30 05:05:16 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-fzw6f","name":"my-hostname-delete-node","uid":"a39a3daf-b6fd-11e6-8c45-42010af00039","apiVersion":"v1","resourceVersion":"10166"}}
    ] [{v1 ReplicationController my-hostname-delete-node a39a3daf-b6fd-11e6-8c45-42010af00039 0xc8212c37f7}] [] } {[{default-token-c0xj7 {<nil> <nil> <nil> <nil> <nil> 0xc82139d170 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-c0xj7 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8212c3910 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-694a42a4-m17b 0xc821373000 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 05:05:16 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 05:05:18 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 05:05:16 -0800 PST  }]   10.240.0.7 10.124.2.27 2016-11-30 05:05:16 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821421720 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://e13763243368f958e4b26c90a2863b5c44faa6e9be28c20dbc9a23569e1bc572}]}} {{ } {my-hostname-delete-node-d5dfg my-hostname-delete-node- e2e-tests-resize-nodes-fzw6f /api/v1/namespaces/e2e-tests-resize-nodes-fzw6f/pods/my-hostname-delete-node-d5dfg a39cdae0-b6fd-11e6-8c45-42010af00039 10182 0 2016-11-30 05:05:16 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-fzw6f","name":"my-hostname-delete-node","uid":"a39a3daf-b6fd-11e6-8c45-42010af00039","apiVersion":"v1","resourceVersion":"10166"}}
    ] [{v1 ReplicationController my-hostname-delete-node a39a3daf-b6fd-11e6-8c45-42010af00039 0xc8212c3c87}] [] } {[{default-token-c0xj7 {<nil> <nil> <nil> <nil> <nil> 0xc82139d1d0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-c0xj7 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8212c3e20 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-694a42a4-rpsp 0xc821373140 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 05:05:16 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 05:05:18 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 05:05:16 -0800 PST  }]   10.240.0.6 10.124.1.50 2016-11-30 05:05:16 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821421740 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://a563438ca006f664ab53bac1880365678e3bb1647ab39a7e4c0a523b0a1cb239}]}} {{ } {my-hostname-delete-node-kf3fz my-hostname-delete-node- e2e-tests-resize-nodes-fzw6f /api/v1/namespaces/e2e-tests-resize-nodes-fzw6f/pods/my-hostname-delete-node-kf3fz e09092a9-b6fd-11e6-8c45-42010af00039 10357 0 2016-11-30 05:06:59 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-fzw6f","name":"my-hostname-delete-node","uid":"a39a3daf-b6fd-11e6-8c45-42010af00039","apiVersion":"v1","resourceVersion":"10254"}}
    ] [{v1 ReplicationController my-hostname-delete-node a39a3daf-b6fd-11e6-8c45-42010af00039 0xc820d52107}] [] } {[{default-token-c0xj7 {<nil> <nil> <nil> <nil> <nil> 0xc82139d230 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-c0xj7 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820d52210 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-694a42a4-rpsp 0xc8213732c0 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 05:06:59 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 05:07:01 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 05:06:59 -0800 PST  }]   10.240.0.6 10.124.1.51 2016-11-30 05:06:59 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821421760 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://03f44c37abc20072e431bc65b45c364a29f165b36d7c60333f5dc76563c2732a}]}}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:451

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:792
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.130.253 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-spwrm] []  0xc820e0a3e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820e0aea0 exit status 1 <nil> true [0xc8211be2e0 0xc8211be320 0xc8211be340] [0xc8211be2e0 0xc8211be320 0xc8211be340] [0xc8211be2e8 0xc8211be318 0xc8211be338] [0xafa5c0 0xafa720 0xafa720] 0xc820a43e60}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.130.253 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-spwrm] []  0xc820e0a3e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820e0aea0 exit status 1 <nil> true [0xc8211be2e0 0xc8211be320 0xc8211be340] [0xc8211be2e0 0xc8211be320 0xc8211be340] [0xc8211be2e8 0xc8211be318 0xc8211be338] [0xafa5c0 0xafa720 0xafa720] 0xc820a43e60}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576
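
Most of the kubectl failures in this run share the same root cause, stated in the stderr above: the skewed kubectl refuses `--grace-period=0` unless `--force` is also passed. As a rough illustration only (this is not the e2e framework's own helper; the server address, kubeconfig path, and namespace are copied from the failing command purely as placeholders), a Go sketch of the same delete invocation with `--force` added:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Flags mirror the failing command from the log; --force is the addition
	// that the newer kubectl requires alongside --grace-period=0.
	args := []string{
		"--server=https://104.197.130.253",     // placeholder taken from the log
		"--kubeconfig=/workspace/.kube/config", // placeholder taken from the log
		"delete", "--grace-period=0", "--force",
		"-f", "-",
		"--namespace=e2e-tests-kubectl-spwrm", // placeholder taken from the log
	}
	cmd := exec.Command("/workspace/kubernetes_skew/cluster/kubectl.sh", args...)
	cmd.Stdin = strings.NewReader("") // the manifest under test would be piped in here
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\nerr: %v\n", out, err)
}
```

Against a real cluster this would delete whatever manifest is piped on stdin immediately; the point here is only the flag combination the newer kubectl insists on.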

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.130.253 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-h4pm4] []  0xc82156e900  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82156ef20 exit status 1 <nil> true [0xc8202d04b0 0xc8202d0520 0xc8202d0560] [0xc8202d04b0 0xc8202d0520 0xc8202d0560] [0xc8202d04c0 0xc8202d04f8 0xc8202d0548] [0xafa5c0 0xafa720 0xafa720] 0xc821289740}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.130.253 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-h4pm4] []  0xc82156e900  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82156ef20 exit status 1 <nil> true [0xc8202d04b0 0xc8202d0520 0xc8202d0560] [0xc8202d04b0 0xc8202d0520 0xc8202d0560] [0xc8202d04c0 0xc8202d04f8 0xc8202d0548] [0xafa5c0 0xafa720 0xafa720] 0xc821289740}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820174b80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820174b80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.130.253 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-l7m0q] []  0xc8215d0e80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8215d17a0 exit status 1 <nil> true [0xc820037068 0xc8200370a8 0xc8200370b8] [0xc820037068 0xc8200370a8 0xc8200370b8] [0xc820037078 0xc8200370a0 0xc8200370b0] [0xafa5c0 0xafa720 0xafa720] 0xc82114f3e0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.130.253 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-l7m0q] []  0xc8215d0e80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8215d17a0 exit status 1 <nil> true [0xc820037068 0xc8200370a8 0xc8200370b8] [0xc820037068 0xc8200370a8 0xc8200370b8] [0xc820037078 0xc8200370a0 0xc8200370b0] [0xafa5c0 0xafa720 0xafa720] 0xc82114f3e0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.130.253 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-819z7] []  0xc820b2b2e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820b2bd20 exit status 1 <nil> true [0xc8200b6810 0xc8200b6838 0xc8200b6848] [0xc8200b6810 0xc8200b6838 0xc8200b6848] [0xc8200b6818 0xc8200b6830 0xc8200b6840] [0xafa5c0 0xafa720 0xafa720] 0xc820a43320}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.130.253 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-819z7] []  0xc820b2b2e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820b2bd20 exit status 1 <nil> true [0xc8200b6810 0xc8200b6838 0xc8200b6848] [0xc8200b6810 0xc8200b6838 0xc8200b6848] [0xc8200b6818 0xc8200b6830 0xc8200b6840] [0xafa5c0 0xafa720 0xafa720] 0xc820a43320}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.130.253 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-k410s] []  0xc82163a6a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82163ad00 exit status 1 <nil> true [0xc8211be9c0 0xc8211be9e8 0xc8211be9f8] [0xc8211be9c0 0xc8211be9e8 0xc8211be9f8] [0xc8211be9c8 0xc8211be9e0 0xc8211be9f0] [0xafa5c0 0xafa720 0xafa720] 0xc820faf740}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.130.253 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-k410s] []  0xc82163a6a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82163ad00 exit status 1 <nil> true [0xc8211be9c0 0xc8211be9e8 0xc8211be9f8] [0xc8211be9c0 0xc8211be9e8 0xc8211be9f8] [0xc8211be9c8 0xc8211be9e0 0xc8211be9f0] [0xafa5c0 0xafa720 0xafa720] 0xc820faf740}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.130.253 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-gqtw2] []  0xc8205d90a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8205d9de0 exit status 1 <nil> true [0xc820179268 0xc820179290 0xc8201792a0] [0xc820179268 0xc820179290 0xc8201792a0] [0xc820179270 0xc820179288 0xc820179298] [0xafa5c0 0xafa720 0xafa720] 0xc8209c8cc0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.130.253 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-gqtw2] []  0xc8205d90a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8205d9de0 exit status 1 <nil> true [0xc820179268 0xc820179290 0xc8201792a0] [0xc820179268 0xc820179290 0xc8201792a0] [0xc820179270 0xc820179288 0xc820179298] [0xafa5c0 0xafa720 0xafa720] 0xc8209c8cc0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.130.253 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-056kb -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-11-30T17:22:38Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-056kb\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-056kb/services/redis-master\", \"uid\":\"97835b0a-b721-11e6-b375-42010af00039\", \"resourceVersion\":\"43344\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.127.250.4\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8214f4600 exit status 1 <nil> true [0xc8202d16a8 0xc8202d16c0 0xc8202d16d8] [0xc8202d16a8 0xc8202d16c0 0xc8202d16d8] [0xc8202d16b8 0xc8202d16d0] [0xafa720 0xafa720] 0xc82104ff80}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-11-30T17:22:38Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-056kb\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-056kb/services/redis-master\", \"uid\":\"97835b0a-b721-11e6-b375-42010af00039\", \"resourceVersion\":\"43344\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.127.250.4\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.130.253 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-056kb -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-11-30T17:22:38Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-056kb", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-056kb/services/redis-master", "uid":"97835b0a-b721-11e6-b375-42010af00039", "resourceVersion":"43344"}, "spec":map[string]interface {}{"clusterIP":"10.127.250.4", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8214f4600 exit status 1 <nil> true [0xc8202d16a8 0xc8202d16c0 0xc8202d16d8] [0xc8202d16a8 0xc8202d16c0 0xc8202d16d8] [0xc8202d16b8 0xc8202d16d0] [0xafa720 0xafa720] 0xc82104ff80}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-11-30T17:22:38Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-056kb", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-056kb/services/redis-master", "uid":"97835b0a-b721-11e6-b375-42010af00039", "resourceVersion":"43344"}, "spec":map[string]interface {}{"clusterIP":"10.127.250.4", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741
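
The jsonpath failure is a different mode: the `redis-master` service dumped above is `type: ClusterIP`, and a ClusterIP service never allocates `.spec.ports[0].nodePort`, so the template has nothing to resolve. A minimal, self-contained sketch (decoding a pared-down copy of the service JSON from the log, purely for illustration) showing why the lookup comes back empty:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Pared-down copy of the service object printed by the failing test.
	raw := `{"spec":{"type":"ClusterIP","ports":[{"protocol":"TCP","port":6379,"targetPort":"redis-server"}]}}`

	var svc struct {
		Spec struct {
			Type  string `json:"type"`
			Ports []struct {
				NodePort int `json:"nodePort"`
			} `json:"ports"`
		} `json:"spec"`
	}
	if err := json.Unmarshal([]byte(raw), &svc); err != nil {
		panic(err)
	}

	// A ClusterIP service never carries a nodePort, which is exactly what
	// `error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found` reports.
	fmt.Printf("type=%s nodePort=%d (zero value means the field is absent)\n",
		svc.Spec.Type, svc.Spec.Ports[0].NodePort)
}
```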

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-container_vm-1.4-container_vm-1.5-upgrade-master/126/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.183.158 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-tkh3g -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-tkh3g\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-tkh3g/services/redis-master\", \"uid\":\"b56a4dde-b732-11e6-b62c-42010af0002e\", \"resourceVersion\":\"6949\", \"creationTimestamp\":\"2016-11-30T19:25:10Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.127.248.113\", \"type\":\"ClusterIP\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc820ecbdc0 exit status 1 <nil> true [0xc821058748 0xc821058760 0xc821058778] [0xc821058748 0xc821058760 0xc821058778] [0xc821058758 0xc821058770] [0xafa720 0xafa720] 0xc820aaab40}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-tkh3g\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-tkh3g/services/redis-master\", \"uid\":\"b56a4dde-b732-11e6-b62c-42010af0002e\", \"resourceVersion\":\"6949\", \"creationTimestamp\":\"2016-11-30T19:25:10Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.127.248.113\", \"type\":\"ClusterIP\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.183.158 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-tkh3g -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-tkh3g", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-tkh3g/services/redis-master", "uid":"b56a4dde-b732-11e6-b62c-42010af0002e", "resourceVersion":"6949", "creationTimestamp":"2016-11-30T19:25:10Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.127.248.113", "type":"ClusterIP"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc820ecbdc0 exit status 1 <nil> true [0xc821058748 0xc821058760 0xc821058778] [0xc821058748 0xc821058760 0xc821058778] [0xc821058758 0xc821058770] [0xafa720 0xafa720] 0xc820aaab40}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-tkh3g", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-tkh3g/services/redis-master", "uid":"b56a4dde-b732-11e6-b62c-42010af0002e", "resourceVersion":"6949", "creationTimestamp":"2016-11-30T19:25:10Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.127.248.113", "type":"ClusterIP"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:452
Expected error:
    <*errors.errorString | 0xc8214cb5c0>: {
        s: "failed to wait for pods responding: pod with UID 38d5052b-b75b-11e6-b62c-42010af0002e is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-2qlfj/pods 42321} [{{ } {my-hostname-delete-node-6vlnk my-hostname-delete-node- e2e-tests-resize-nodes-2qlfj /api/v1/namespaces/e2e-tests-resize-nodes-2qlfj/pods/my-hostname-delete-node-6vlnk 38d576ce-b75b-11e6-b62c-42010af0002e 41990 0 2016-11-30 16:15:10 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-2qlfj\",\"name\":\"my-hostname-delete-node\",\"uid\":\"38cd3eb7-b75b-11e6-b62c-42010af0002e\",\"apiVersion\":\"v1\",\"resourceVersion\":\"41972\"}}\n] [{v1 ReplicationController my-hostname-delete-node 38cd3eb7-b75b-11e6-b62c-42010af0002e 0xc8214d30a7}] [] } {[{default-token-f325p {<nil> <nil> <nil> <nil> <nil> 0xc821aab560 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-f325p true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8214d31d0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-eddb0a97-a1qk 0xc822087180 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 16:15:10 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 16:15:12 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 16:15:10 -0800 PST  }]   10.240.0.5 10.124.2.14 2016-11-30 16:15:10 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc82167fd00 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://2acf4cea1f9c1f431837c7047cbebc320f28df6851724519a36994b8fed32878}]}} {{ } {my-hostname-delete-node-k9x78 my-hostname-delete-node- e2e-tests-resize-nodes-2qlfj /api/v1/namespaces/e2e-tests-resize-nodes-2qlfj/pods/my-hostname-delete-node-k9x78 38d45ed0-b75b-11e6-b62c-42010af0002e 41992 0 2016-11-30 16:15:10 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-2qlfj\",\"name\":\"my-hostname-delete-node\",\"uid\":\"38cd3eb7-b75b-11e6-b62c-42010af0002e\",\"apiVersion\":\"v1\",\"resourceVersion\":\"41972\"}}\n] [{v1 ReplicationController my-hostname-delete-node 38cd3eb7-b75b-11e6-b62c-42010af0002e 0xc8214d3467}] [] } {[{default-token-f325p {<nil> <nil> <nil> <nil> <nil> 0xc821aab5c0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-f325p true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8214d3570 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-eddb0a97-a1qk 0xc822087240 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 16:15:10 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 
2016-11-30 16:15:12 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 16:15:10 -0800 PST  }]   10.240.0.5 10.124.2.13 2016-11-30 16:15:10 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc82167fd20 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://3ed0feee92e3426e34a7b3490cc6bd4307e52f08f9986702390d27845357e523}]}} {{ } {my-hostname-delete-node-xb9k5 my-hostname-delete-node- e2e-tests-resize-nodes-2qlfj /api/v1/namespaces/e2e-tests-resize-nodes-2qlfj/pods/my-hostname-delete-node-xb9k5 7487163b-b75b-11e6-b62c-42010af0002e 42159 0 2016-11-30 16:16:50 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-2qlfj\",\"name\":\"my-hostname-delete-node\",\"uid\":\"38cd3eb7-b75b-11e6-b62c-42010af0002e\",\"apiVersion\":\"v1\",\"resourceVersion\":\"42063\"}}\n] [{v1 ReplicationController my-hostname-delete-node 38cd3eb7-b75b-11e6-b62c-42010af0002e 0xc8214d39d7}] [] } {[{default-token-f325p {<nil> <nil> <nil> <nil> <nil> 0xc821aab620 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-f325p true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8214d3b10 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-eddb0a97-88tr 0xc822087300 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 16:16:50 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 16:16:51 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 16:16:50 -0800 PST  }]   10.240.0.7 10.124.1.34 2016-11-30 16:16:50 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc82167fd60 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://f6c719da93147633fa8a6d5aab5577723a8769bc929d1604b91e52de85239c24}]}}]}",
    }
    failed to wait for pods responding: pod with UID 38d5052b-b75b-11e6-b62c-42010af0002e is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-2qlfj/pods 42321} [{{ } {my-hostname-delete-node-6vlnk my-hostname-delete-node- e2e-tests-resize-nodes-2qlfj /api/v1/namespaces/e2e-tests-resize-nodes-2qlfj/pods/my-hostname-delete-node-6vlnk 38d576ce-b75b-11e6-b62c-42010af0002e 41990 0 2016-11-30 16:15:10 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-2qlfj","name":"my-hostname-delete-node","uid":"38cd3eb7-b75b-11e6-b62c-42010af0002e","apiVersion":"v1","resourceVersion":"41972"}}
    ] [{v1 ReplicationController my-hostname-delete-node 38cd3eb7-b75b-11e6-b62c-42010af0002e 0xc8214d30a7}] [] } {[{default-token-f325p {<nil> <nil> <nil> <nil> <nil> 0xc821aab560 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-f325p true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8214d31d0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-eddb0a97-a1qk 0xc822087180 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 16:15:10 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 16:15:12 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 16:15:10 -0800 PST  }]   10.240.0.5 10.124.2.14 2016-11-30 16:15:10 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc82167fd00 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://2acf4cea1f9c1f431837c7047cbebc320f28df6851724519a36994b8fed32878}]}} {{ } {my-hostname-delete-node-k9x78 my-hostname-delete-node- e2e-tests-resize-nodes-2qlfj /api/v1/namespaces/e2e-tests-resize-nodes-2qlfj/pods/my-hostname-delete-node-k9x78 38d45ed0-b75b-11e6-b62c-42010af0002e 41992 0 2016-11-30 16:15:10 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-2qlfj","name":"my-hostname-delete-node","uid":"38cd3eb7-b75b-11e6-b62c-42010af0002e","apiVersion":"v1","resourceVersion":"41972"}}
    ] [{v1 ReplicationController my-hostname-delete-node 38cd3eb7-b75b-11e6-b62c-42010af0002e 0xc8214d3467}] [] } {[{default-token-f325p {<nil> <nil> <nil> <nil> <nil> 0xc821aab5c0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-f325p true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8214d3570 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-eddb0a97-a1qk 0xc822087240 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 16:15:10 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 16:15:12 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 16:15:10 -0800 PST  }]   10.240.0.5 10.124.2.13 2016-11-30 16:15:10 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc82167fd20 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://3ed0feee92e3426e34a7b3490cc6bd4307e52f08f9986702390d27845357e523}]}} {{ } {my-hostname-delete-node-xb9k5 my-hostname-delete-node- e2e-tests-resize-nodes-2qlfj /api/v1/namespaces/e2e-tests-resize-nodes-2qlfj/pods/my-hostname-delete-node-xb9k5 7487163b-b75b-11e6-b62c-42010af0002e 42159 0 2016-11-30 16:16:50 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-2qlfj","name":"my-hostname-delete-node","uid":"38cd3eb7-b75b-11e6-b62c-42010af0002e","apiVersion":"v1","resourceVersion":"42063"}}
    ] [{v1 ReplicationController my-hostname-delete-node 38cd3eb7-b75b-11e6-b62c-42010af0002e 0xc8214d39d7}] [] } {[{default-token-f325p {<nil> <nil> <nil> <nil> <nil> 0xc821aab620 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-f325p true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8214d3b10 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-eddb0a97-88tr 0xc822087300 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 16:16:50 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 16:16:51 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 16:16:50 -0800 PST  }]   10.240.0.7 10.124.1.34 2016-11-30 16:16:50 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc82167fd60 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://f6c719da93147633fa8a6d5aab5577723a8769bc929d1604b91e52de85239c24}]}}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:451

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.183.158 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-r98jb] []  0xc820e67d80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820c323a0 exit status 1 <nil> true [0xc8204e8190 0xc8204e8200 0xc8204e8248] [0xc8204e8190 0xc8204e8200 0xc8204e8248] [0xc8204e81a8 0xc8204e81f8 0xc8204e8230] [0xafa5c0 0xafa720 0xafa720] 0xc820c476e0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.183.158 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-r98jb] []  0xc820e67d80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820c323a0 exit status 1 <nil> true [0xc8204e8190 0xc8204e8200 0xc8204e8248] [0xc8204e8190 0xc8204e8200 0xc8204e8248] [0xc8204e81a8 0xc8204e81f8 0xc8204e8230] [0xafa5c0 0xafa720 0xafa720] 0xc820c476e0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:756
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.183.158 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-bn9jh] []  0xc821ffc600  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821ffcc00 exit status 1 <nil> true [0xc821baa7e0 0xc821baa820 0xc821baa830] [0xc821baa7e0 0xc821baa820 0xc821baa830] [0xc821baa7f0 0xc821baa818 0xc821baa828] [0xafa5c0 0xafa720 0xafa720] 0xc8211143c0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.183.158 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-bn9jh] []  0xc821ffc600  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821ffcc00 exit status 1 <nil> true [0xc821baa7e0 0xc821baa820 0xc821baa830] [0xc821baa7e0 0xc821baa820 0xc821baa830] [0xc821baa7f0 0xc821baa818 0xc821baa828] [0xafa5c0 0xafa720 0xafa720] 0xc8211143c0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.183.158 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-2hlsd] []  0xc821a93320  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821a93920 exit status 1 <nil> true [0xc8200c2ea0 0xc8200c2ef0 0xc8200c2f08] [0xc8200c2ea0 0xc8200c2ef0 0xc8200c2f08] [0xc8200c2eb0 0xc8200c2ee0 0xc8200c2f00] [0xafa5c0 0xafa720 0xafa720] 0xc8216f4120}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.183.158 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-2hlsd] []  0xc821a93320  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821a93920 exit status 1 <nil> true [0xc8200c2ea0 0xc8200c2ef0 0xc8200c2f08] [0xc8200c2ea0 0xc8200c2ef0 0xc8200c2f08] [0xc8200c2eb0 0xc8200c2ee0 0xc8200c2f00] [0xafa5c0 0xafa720 0xafa720] 0xc8216f4120}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc8200ed7d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.183.158 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-mgmf3] []  0xc821774c60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8217754a0 exit status 1 <nil> true [0xc8200c2230 0xc8200c2258 0xc8200c2270] [0xc8200c2230 0xc8200c2258 0xc8200c2270] [0xc8200c2238 0xc8200c2250 0xc8200c2268] [0xafa5c0 0xafa720 0xafa720] 0xc821004900}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.183.158 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-mgmf3] []  0xc821774c60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8217754a0 exit status 1 <nil> true [0xc8200c2230 0xc8200c2258 0xc8200c2270] [0xc8200c2230 0xc8200c2258 0xc8200c2270] [0xc8200c2238 0xc8200c2250 0xc8200c2268] [0xafa5c0 0xafa720 0xafa720] 0xc821004900}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.183.158 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-ps459] []  0xc820a70fa0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820a71660 exit status 1 <nil> true [0xc8204d6920 0xc8204d69c8 0xc8204d6a00] [0xc8204d6920 0xc8204d69c8 0xc8204d6a00] [0xc8204d6948 0xc8204d69b0 0xc8204d69d8] [0xafa5c0 0xafa720 0xafa720] 0xc82095dd40}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.183.158 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-ps459] []  0xc820a70fa0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820a71660 exit status 1 <nil> true [0xc8204d6920 0xc8204d69c8 0xc8204d6a00] [0xc8204d6920 0xc8204d69c8 0xc8204d6a00] [0xc8204d6948 0xc8204d69b0 0xc8204d69d8] [0xafa5c0 0xafa720 0xafa720] 0xc82095dd40}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:219
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.183.158 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-xpprx] []  0xc822020940  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc822020f40 exit status 1 <nil> true [0xc8204e9590 0xc8204e95f0 0xc8204e9618] [0xc8204e9590 0xc8204e95f0 0xc8204e9618] [0xc8204e95a0 0xc8204e95d8 0xc8204e9600] [0xafa5c0 0xafa720 0xafa720] 0xc821d1ec00}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.183.158 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-xpprx] []  0xc822020940  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc822020f40 exit status 1 <nil> true [0xc8204e9590 0xc8204e95f0 0xc8204e9618] [0xc8204e9590 0xc8204e95f0 0xc8204e9618] [0xc8204e95a0 0xc8204e95d8 0xc8204e9600] [0xafa5c0 0xafa720 0xafa720] 0xc821d1ec00}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200ed7d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200ed7d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177
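
Both job tests above fail with the same wait timeout: roughly, each creates a job whose pods cannot succeed and then polls for the job to report failure, which never happened within the allotted time. Below is a minimal, hypothetical sketch (not the e2e fixture itself; name, image and deadline are assumptions for illustration) of a job that can only fail, plus the kind of condition check the test appears to be waiting on:

# Hypothetical job that always fails; activeDeadlineSeconds caps the run so the
# controller marks it Failed instead of retrying indefinitely.
kubectl create --namespace=<ns> -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: fail-job-sketch
spec:
  activeDeadlineSeconds: 30
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: fail
        image: busybox
        command: ["sh", "-c", "exit 1"]
EOF
# The timeout above means a check along these lines never returned "True":
kubectl get job fail-job-sketch --namespace=<ns> \
  -o jsonpath='{.status.conditions[?(@.type=="Failed")].status}'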

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.183.158 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-5lvnf] []  0xc82186b6e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82186bce0 exit status 1 <nil> true [0xc8204e8980 0xc8204e8a48 0xc8204e8a80] [0xc8204e8980 0xc8204e8a48 0xc8204e8a80] [0xc8204e89a8 0xc8204e8a28 0xc8204e8a78] [0xafa5c0 0xafa720 0xafa720] 0xc821bd90e0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.183.158 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-5lvnf] []  0xc82186b6e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82186bce0 exit status 1 <nil> true [0xc8204e8980 0xc8204e8a48 0xc8204e8a80] [0xc8204e8980 0xc8204e8a48 0xc8204e8a80] [0xc8204e89a8 0xc8204e8a28 0xc8204e8a78] [0xafa5c0 0xafa720 0xafa720] 0xc821bd90e0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.183.158 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-03ml8] []  0xc820dba000  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820dba2e0 exit status 1 <nil> true [0xc8204d7080 0xc8204d70d0 0xc8204d70e0] [0xc8204d7080 0xc8204d70d0 0xc8204d70e0] [0xc8204d70a0 0xc8204d70c8 0xc8204d70d8] [0xafa5c0 0xafa720 0xafa720] 0xc820c7c6c0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.183.158 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-03ml8] []  0xc820dba000  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820dba2e0 exit status 1 <nil> true [0xc8204d7080 0xc8204d70d0 0xc8204d70e0] [0xc8204d7080 0xc8204d70d0 0xc8204d70e0] [0xc8204d70a0 0xc8204d70c8 0xc8204d70d8] [0xafa5c0 0xafa720 0xafa720] 0xc820c7c6c0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.183.158 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-h7409] []  0xc820a433e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820a439e0 exit status 1 <nil> true [0xc8200c22c0 0xc8200c2300 0xc8200c2338] [0xc8200c22c0 0xc8200c2300 0xc8200c2338] [0xc8200c22c8 0xc8200c22f0 0xc8200c2328] [0xafa5c0 0xafa720 0xafa720] 0xc820dc5aa0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.183.158 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-h7409] []  0xc820a433e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820a439e0 exit status 1 <nil> true [0xc8200c22c0 0xc8200c2300 0xc8200c2338] [0xc8200c22c0 0xc8200c2300 0xc8200c2338] [0xc8200c22c8 0xc8200c22f0 0xc8200c2328] [0xafa5c0 0xafa720 0xafa720] 0xc820dc5aa0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:792
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.183.158 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-jxcbt] []  0xc821373c80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8213ca560 exit status 1 <nil> true [0xc821032100 0xc821032128 0xc821032138] [0xc821032100 0xc821032128 0xc821032138] [0xc821032108 0xc821032120 0xc821032130] [0xafa5c0 0xafa720 0xafa720] 0xc821c85680}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.183.158 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-jxcbt] []  0xc821373c80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8213ca560 exit status 1 <nil> true [0xc821032100 0xc821032128 0xc821032138] [0xc821032100 0xc821032128 0xc821032138] [0xc821032108 0xc821032120 0xc821032130] [0xafa5c0 0xafa720 0xafa720] 0xc821c85680}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-container_vm-1.4-container_vm-1.5-upgrade-master/127/

Multiple broken tests:

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc82007fa60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.135.126 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-ztfvb -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"uid\":\"8f6e69b6-b78f-11e6-8045-42010af0001a\", \"resourceVersion\":\"34744\", \"creationTimestamp\":\"2016-12-01T06:29:49Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-ztfvb\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-ztfvb/services/redis-master\"}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.127.241.253\", \"type\":\"ClusterIP\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc82199f8a0 exit status 1 <nil> true [0xc8200e0380 0xc8200e03a0 0xc8200e03d8] [0xc8200e0380 0xc8200e03a0 0xc8200e03d8] [0xc8200e0398 0xc8200e03c8] [0xafa720 0xafa720] 0xc821f3a780}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"uid\":\"8f6e69b6-b78f-11e6-8045-42010af0001a\", \"resourceVersion\":\"34744\", \"creationTimestamp\":\"2016-12-01T06:29:49Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-ztfvb\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-ztfvb/services/redis-master\"}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.127.241.253\", \"type\":\"ClusterIP\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.135.126 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-ztfvb -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"uid":"8f6e69b6-b78f-11e6-8045-42010af0001a", "resourceVersion":"34744", "creationTimestamp":"2016-12-01T06:29:49Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-ztfvb", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-ztfvb/services/redis-master"}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.127.241.253", "type":"ClusterIP"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc82199f8a0 exit status 1 <nil> true [0xc8200e0380 0xc8200e03a0 0xc8200e03d8] [0xc8200e0380 0xc8200e03a0 0xc8200e03d8] [0xc8200e0398 0xc8200e03c8] [0xafa720 0xafa720] 0xc821f3a780}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"uid":"8f6e69b6-b78f-11e6-8045-42010af0001a", "resourceVersion":"34744", "creationTimestamp":"2016-12-01T06:29:49Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-ztfvb", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-ztfvb/services/redis-master"}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.127.241.253", "type":"ClusterIP"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741
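
For context, the service object echoed back above is type ClusterIP, so its ports carry no nodePort field and the jsonpath query has nothing to resolve. A minimal reproduction sketch (namespace placeholder assumed; the patch only illustrates when the query would start succeeding, not what the test does):

kubectl get service redis-master --namespace=<e2e-namespace> -o jsonpath='{.spec.type}'
# A ClusterIP service has no nodePort on its ports, so this is the failing lookup:
kubectl get service redis-master --namespace=<e2e-namespace> -o jsonpath='{.spec.ports[0].nodePort}'
# The field only exists once the service is type NodePort (or LoadBalancer), e.g.:
kubectl patch service redis-master --namespace=<e2e-namespace> -p '{"spec":{"type":"NodePort"}}'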

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:452
Expected error:
    <*errors.errorString | 0xc820c848c0>: {
        s: "failed to wait for pods responding: pod with UID 4df52c75-b76a-11e6-8045-42010af0001a is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-vgv14/pods 3711} [{{ } {my-hostname-delete-node-jxr5j my-hostname-delete-node- e2e-tests-resize-nodes-vgv14 /api/v1/namespaces/e2e-tests-resize-nodes-vgv14/pods/my-hostname-delete-node-jxr5j 4df3a9c2-b76a-11e6-8045-42010af0001a 3385 0 2016-11-30 18:03:08 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-vgv14\",\"name\":\"my-hostname-delete-node\",\"uid\":\"4df180a7-b76a-11e6-8045-42010af0001a\",\"apiVersion\":\"v1\",\"resourceVersion\":\"3365\"}}\n] [{v1 ReplicationController my-hostname-delete-node 4df180a7-b76a-11e6-8045-42010af0001a 0xc821515137}] [] } {[{default-token-q9rxz {<nil> <nil> <nil> <nil> <nil> 0xc8214b70e0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-q9rxz true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821515230 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-88e7761c-uvxe 0xc8213f6ac0 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 18:03:08 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 18:03:11 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 18:03:08 -0800 PST  }]   10.240.0.7 10.124.0.22 2016-11-30 18:03:08 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc820d02620 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://ac7be79f221f3f578ed9946fa9b5f367f4599a3e8cc4b278642df956fe68d62d}]}} {{ } {my-hostname-delete-node-rbnld my-hostname-delete-node- e2e-tests-resize-nodes-vgv14 /api/v1/namespaces/e2e-tests-resize-nodes-vgv14/pods/my-hostname-delete-node-rbnld 87b086c0-b76a-11e6-8045-42010af0001a 3548 0 2016-11-30 18:04:45 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-vgv14\",\"name\":\"my-hostname-delete-node\",\"uid\":\"4df180a7-b76a-11e6-8045-42010af0001a\",\"apiVersion\":\"v1\",\"resourceVersion\":\"3457\"}}\n] [{v1 ReplicationController my-hostname-delete-node 4df180a7-b76a-11e6-8045-42010af0001a 0xc8215154c7}] [] } {[{default-token-q9rxz {<nil> <nil> <nil> <nil> <nil> 0xc8214b7140 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-q9rxz true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8215155c0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-88e7761c-uvxe 0xc8213f6b80 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 18:04:45 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 
2016-11-30 18:04:47 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 18:04:45 -0800 PST  }]   10.240.0.7 10.124.0.23 2016-11-30 18:04:45 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc820d02640 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://9317525099a2c22a75c0d380aad47a09c7aeeb5426b0972a4e77841a6b6b9407}]}} {{ } {my-hostname-delete-node-w5cnd my-hostname-delete-node- e2e-tests-resize-nodes-vgv14 /api/v1/namespaces/e2e-tests-resize-nodes-vgv14/pods/my-hostname-delete-node-w5cnd 4df37bad-b76a-11e6-8045-42010af0001a 3381 0 2016-11-30 18:03:08 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-vgv14\",\"name\":\"my-hostname-delete-node\",\"uid\":\"4df180a7-b76a-11e6-8045-42010af0001a\",\"apiVersion\":\"v1\",\"resourceVersion\":\"3365\"}}\n] [{v1 ReplicationController my-hostname-delete-node 4df180a7-b76a-11e6-8045-42010af0001a 0xc821515857}] [] } {[{default-token-q9rxz {<nil> <nil> <nil> <nil> <nil> 0xc8214b71a0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-q9rxz true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821515950 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-88e7761c-x20b 0xc8213f6c40 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 18:03:08 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 18:03:10 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 18:03:08 -0800 PST  }]   10.240.0.5 10.124.1.18 2016-11-30 18:03:08 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc820d02660 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://bd6b07904ef2dd321076a62d37b8fd2428ebda2fdbf243100ca810d9e032ac0f}]}}]}",
    }
    failed to wait for pods responding: pod with UID 4df52c75-b76a-11e6-8045-42010af0001a is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-vgv14/pods 3711} [{{ } {my-hostname-delete-node-jxr5j my-hostname-delete-node- e2e-tests-resize-nodes-vgv14 /api/v1/namespaces/e2e-tests-resize-nodes-vgv14/pods/my-hostname-delete-node-jxr5j 4df3a9c2-b76a-11e6-8045-42010af0001a 3385 0 2016-11-30 18:03:08 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-vgv14","name":"my-hostname-delete-node","uid":"4df180a7-b76a-11e6-8045-42010af0001a","apiVersion":"v1","resourceVersion":"3365"}}
    ] [{v1 ReplicationController my-hostname-delete-node 4df180a7-b76a-11e6-8045-42010af0001a 0xc821515137}] [] } {[{default-token-q9rxz {<nil> <nil> <nil> <nil> <nil> 0xc8214b70e0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-q9rxz true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821515230 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-88e7761c-uvxe 0xc8213f6ac0 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 18:03:08 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 18:03:11 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 18:03:08 -0800 PST  }]   10.240.0.7 10.124.0.22 2016-11-30 18:03:08 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc820d02620 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://ac7be79f221f3f578ed9946fa9b5f367f4599a3e8cc4b278642df956fe68d62d}]}} {{ } {my-hostname-delete-node-rbnld my-hostname-delete-node- e2e-tests-resize-nodes-vgv14 /api/v1/namespaces/e2e-tests-resize-nodes-vgv14/pods/my-hostname-delete-node-rbnld 87b086c0-b76a-11e6-8045-42010af0001a 3548 0 2016-11-30 18:04:45 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-vgv14","name":"my-hostname-delete-node","uid":"4df180a7-b76a-11e6-8045-42010af0001a","apiVersion":"v1","resourceVersion":"3457"}}
    ] [{v1 ReplicationController my-hostname-delete-node 4df180a7-b76a-11e6-8045-42010af0001a 0xc8215154c7}] [] } {[{default-token-q9rxz {<nil> <nil> <nil> <nil> <nil> 0xc8214b7140 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-q9rxz true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8215155c0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-88e7761c-uvxe 0xc8213f6b80 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 18:04:45 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 18:04:47 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 18:04:45 -0800 PST  }]   10.240.0.7 10.124.0.23 2016-11-30 18:04:45 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc820d02640 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://9317525099a2c22a75c0d380aad47a09c7aeeb5426b0972a4e77841a6b6b9407}]}} {{ } {my-hostname-delete-node-w5cnd my-hostname-delete-node- e2e-tests-resize-nodes-vgv14 /api/v1/namespaces/e2e-tests-resize-nodes-vgv14/pods/my-hostname-delete-node-w5cnd 4df37bad-b76a-11e6-8045-42010af0001a 3381 0 2016-11-30 18:03:08 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-vgv14","name":"my-hostname-delete-node","uid":"4df180a7-b76a-11e6-8045-42010af0001a","apiVersion":"v1","resourceVersion":"3365"}}
    ] [{v1 ReplicationController my-hostname-delete-node 4df180a7-b76a-11e6-8045-42010af0001a 0xc821515857}] [] } {[{default-token-q9rxz {<nil> <nil> <nil> <nil> <nil> 0xc8214b71a0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-q9rxz true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821515950 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-88e7761c-x20b 0xc8213f6c40 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 18:03:08 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 18:03:10 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 18:03:08 -0800 PST  }]   10.240.0.5 10.124.1.18 2016-11-30 18:03:08 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc820d02660 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://bd6b07904ef2dd321076a62d37b8fd2428ebda2fdbf243100ca810d9e032ac0f}]}}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:451

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc82007fa60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc82007fa60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-container_vm-1.4-container_vm-1.5-upgrade-master/128/

Multiple broken tests:

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc82019c890>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc82019c890>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.135.126 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-3ptpz -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.242.131\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-3ptpz/services/redis-master\", \"uid\":\"19f4d521-b7b1-11e6-9b2a-42010af0002c\", \"resourceVersion\":\"12114\", \"creationTimestamp\":\"2016-12-01T10:29:55Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-3ptpz\"}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc820dfde60 exit status 1 <nil> true [0xc8202ea340 0xc8202ea388 0xc8202ea3b8] [0xc8202ea340 0xc8202ea388 0xc8202ea3b8] [0xc8202ea378 0xc8202ea3a0] [0xafa720 0xafa720] 0xc8213f3200}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.242.131\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-3ptpz/services/redis-master\", \"uid\":\"19f4d521-b7b1-11e6-9b2a-42010af0002c\", \"resourceVersion\":\"12114\", \"creationTimestamp\":\"2016-12-01T10:29:55Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-3ptpz\"}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.135.126 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-3ptpz -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.242.131", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-3ptpz/services/redis-master", "uid":"19f4d521-b7b1-11e6-9b2a-42010af0002c", "resourceVersion":"12114", "creationTimestamp":"2016-12-01T10:29:55Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-3ptpz"}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc820dfde60 exit status 1 <nil> true [0xc8202ea340 0xc8202ea388 0xc8202ea3b8] [0xc8202ea340 0xc8202ea388 0xc8202ea3b8] [0xc8202ea378 0xc8202ea3a0] [0xafa720 0xafa720] 0xc8213f3200}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.242.131", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-3ptpz/services/redis-master", "uid":"19f4d521-b7b1-11e6-9b2a-42010af0002c", "resourceVersion":"12114", "creationTimestamp":"2016-12-01T10:29:55Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-3ptpz"}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:452
Expected error:
    <*errors.errorString | 0xc8223db120>: {
        s: "failed to wait for pods responding: pod with UID 0da9ff8f-b7d8-11e6-b0bd-42010af00014 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-2jq3n/pods 48376} [{{ } {my-hostname-delete-node-jgfzf my-hostname-delete-node- e2e-tests-resize-nodes-2jq3n /api/v1/namespaces/e2e-tests-resize-nodes-2jq3n/pods/my-hostname-delete-node-jgfzf 0daa33fe-b7d8-11e6-b0bd-42010af00014 48104 0 2016-12-01 07:08:45 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-2jq3n\",\"name\":\"my-hostname-delete-node\",\"uid\":\"0da7a14d-b7d8-11e6-b0bd-42010af00014\",\"apiVersion\":\"v1\",\"resourceVersion\":\"48090\"}}\n] [{v1 ReplicationController my-hostname-delete-node 0da7a14d-b7d8-11e6-b0bd-42010af00014 0xc821dbd687}] [] } {[{default-token-dlq31 {<nil> <nil> <nil> <nil> <nil> 0xc821c8ba70 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-dlq31 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821dbd780 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-e34f7d60-yt2g 0xc821ef2fc0 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 07:08:45 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 07:08:46 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 07:08:45 -0800 PST  }]   10.240.0.5 10.124.0.111 2016-12-01 07:08:45 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821160ba0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://cee2b646e32b7841ad4da852b625584c523425b3e477185d820228d79237f125}]}} {{ } {my-hostname-delete-node-t3sn0 my-hostname-delete-node- e2e-tests-resize-nodes-2jq3n /api/v1/namespaces/e2e-tests-resize-nodes-2jq3n/pods/my-hostname-delete-node-t3sn0 0daa67c5-b7d8-11e6-b0bd-42010af00014 48108 0 2016-12-01 07:08:45 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-2jq3n\",\"name\":\"my-hostname-delete-node\",\"uid\":\"0da7a14d-b7d8-11e6-b0bd-42010af00014\",\"apiVersion\":\"v1\",\"resourceVersion\":\"48090\"}}\n] [{v1 ReplicationController my-hostname-delete-node 0da7a14d-b7d8-11e6-b0bd-42010af00014 0xc821dbda97}] [] } {[{default-token-dlq31 {<nil> <nil> <nil> <nil> <nil> 0xc821c8bad0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-dlq31 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821dbdbc0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-e34f7d60-rctl 0xc821ef3080 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 07:08:45 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 
2016-12-01 07:08:46 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 07:08:45 -0800 PST  }]   10.240.0.7 10.124.2.151 2016-12-01 07:08:45 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821160bc0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://abc0e53cdf532835dfb9e45454311e3dcd443d22c0dfad765bc3ce0a3e54df69}]}} {{ } {my-hostname-delete-node-zkxj6 my-hostname-delete-node- e2e-tests-resize-nodes-2jq3n /api/v1/namespaces/e2e-tests-resize-nodes-2jq3n/pods/my-hostname-delete-node-zkxj6 3f496e88-b7d8-11e6-b0bd-42010af00014 48229 0 2016-12-01 07:10:08 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-2jq3n\",\"name\":\"my-hostname-delete-node\",\"uid\":\"0da7a14d-b7d8-11e6-b0bd-42010af00014\",\"apiVersion\":\"v1\",\"resourceVersion\":\"48173\"}}\n] [{v1 ReplicationController my-hostname-delete-node 0da7a14d-b7d8-11e6-b0bd-42010af00014 0xc821dbdea7}] [] } {[{default-token-dlq31 {<nil> <nil> <nil> <nil> <nil> 0xc821c8bb30 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-dlq31 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821dbdfd0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-e34f7d60-yt2g 0xc821ef3140 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 07:10:08 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 07:10:09 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 07:10:08 -0800 PST  }]   10.240.0.5 10.124.0.112 2016-12-01 07:10:08 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821160be0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://9e2e992101c76f3b44b3da6d82f8f5496ca36a6272e96a17c95c08e7f5781578}]}}]}",
    }
    failed to wait for pods responding: pod with UID 0da9ff8f-b7d8-11e6-b0bd-42010af00014 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-2jq3n/pods 48376} [{{ } {my-hostname-delete-node-jgfzf my-hostname-delete-node- e2e-tests-resize-nodes-2jq3n /api/v1/namespaces/e2e-tests-resize-nodes-2jq3n/pods/my-hostname-delete-node-jgfzf 0daa33fe-b7d8-11e6-b0bd-42010af00014 48104 0 2016-12-01 07:08:45 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-2jq3n","name":"my-hostname-delete-node","uid":"0da7a14d-b7d8-11e6-b0bd-42010af00014","apiVersion":"v1","resourceVersion":"48090"}}
    ] [{v1 ReplicationController my-hostname-delete-node 0da7a14d-b7d8-11e6-b0bd-42010af00014 0xc821dbd687}] [] } {[{default-token-dlq31 {<nil> <nil> <nil> <nil> <nil> 0xc821c8ba70 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-dlq31 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821dbd780 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-e34f7d60-yt2g 0xc821ef2fc0 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 07:08:45 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 07:08:46 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 07:08:45 -0800 PST  }]   10.240.0.5 10.124.0.111 2016-12-01 07:08:45 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821160ba0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://cee2b646e32b7841ad4da852b625584c523425b3e477185d820228d79237f125}]}} {{ } {my-hostname-delete-node-t3sn0 my-hostname-delete-node- e2e-tests-resize-nodes-2jq3n /api/v1/namespaces/e2e-tests-resize-nodes-2jq3n/pods/my-hostname-delete-node-t3sn0 0daa67c5-b7d8-11e6-b0bd-42010af00014 48108 0 2016-12-01 07:08:45 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-2jq3n","name":"my-hostname-delete-node","uid":"0da7a14d-b7d8-11e6-b0bd-42010af00014","apiVersion":"v1","resourceVersion":"48090"}}
    ] [{v1 ReplicationController my-hostname-delete-node 0da7a14d-b7d8-11e6-b0bd-42010af00014 0xc821dbda97}] [] } {[{default-token-dlq31 {<nil> <nil> <nil> <nil> <nil> 0xc821c8bad0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-dlq31 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821dbdbc0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-e34f7d60-rctl 0xc821ef3080 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 07:08:45 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 07:08:46 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 07:08:45 -0800 PST  }]   10.240.0.7 10.124.2.151 2016-12-01 07:08:45 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821160bc0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://abc0e53cdf532835dfb9e45454311e3dcd443d22c0dfad765bc3ce0a3e54df69}]}} {{ } {my-hostname-delete-node-zkxj6 my-hostname-delete-node- e2e-tests-resize-nodes-2jq3n /api/v1/namespaces/e2e-tests-resize-nodes-2jq3n/pods/my-hostname-delete-node-zkxj6 3f496e88-b7d8-11e6-b0bd-42010af00014 48229 0 2016-12-01 07:10:08 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-2jq3n","name":"my-hostname-delete-node","uid":"0da7a14d-b7d8-11e6-b0bd-42010af00014","apiVersion":"v1","resourceVersion":"48173"}}
    ] [{v1 ReplicationController my-hostname-delete-node 0da7a14d-b7d8-11e6-b0bd-42010af00014 0xc821dbdea7}] [] } {[{default-token-dlq31 {<nil> <nil> <nil> <nil> <nil> 0xc821c8bb30 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-dlq31 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821dbdfd0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-e34f7d60-yt2g 0xc821ef3140 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 07:10:08 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 07:10:09 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 07:10:08 -0800 PST  }]   10.240.0.5 10.124.0.112 2016-12-01 07:10:08 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821160be0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://9e2e992101c76f3b44b3da6d82f8f5496ca36a6272e96a17c95c08e7f5781578}]}}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:451

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc82019c890>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-container_vm-1.4-container_vm-1.5-upgrade-master/129/

Multiple broken tests:

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc8200e3650>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200e3650>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.191.207 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-39dwf -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-01T22:12:00Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-39dwf\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-39dwf/services/redis-master\", \"uid\":\"2e565865-b813-11e6-a39e-42010af00014\", \"resourceVersion\":\"47799\"}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.254.186\", \"type\":\"ClusterIP\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc821ddd4a0 exit status 1 <nil> true [0xc82017a418 0xc82017a458 0xc82017a488] [0xc82017a418 0xc82017a458 0xc82017a488] [0xc82017a448 0xc82017a480] [0xafa720 0xafa720] 0xc821bda780}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-01T22:12:00Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-39dwf\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-39dwf/services/redis-master\", \"uid\":\"2e565865-b813-11e6-a39e-42010af00014\", \"resourceVersion\":\"47799\"}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.254.186\", \"type\":\"ClusterIP\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.191.207 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-39dwf -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-01T22:12:00Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-39dwf", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-39dwf/services/redis-master", "uid":"2e565865-b813-11e6-a39e-42010af00014", "resourceVersion":"47799"}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.254.186", "type":"ClusterIP"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc821ddd4a0 exit status 1 <nil> true [0xc82017a418 0xc82017a458 0xc82017a488] [0xc82017a418 0xc82017a458 0xc82017a488] [0xc82017a448 0xc82017a480] [0xafa720 0xafa720] 0xc821bda780}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-01T22:12:00Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-39dwf", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-39dwf/services/redis-master", "uid":"2e565865-b813-11e6-a39e-42010af00014", "resourceVersion":"47799"}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.254.186", "type":"ClusterIP"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741 #37820
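
The jsonpath lookup fails because the service shown in the dump is still type ClusterIP, so .spec.ports[0].nodePort is simply not set. A minimal reproduction sketch of the failing query, assuming kubectl access to the test cluster; the server, namespace, and service name below are copied from the error output above:

    # The failing query, as run by the test; it exits non-zero whenever the
    # Service is not of type NodePort, because nodePort is then unset.
    kubectl --server=https://104.198.191.207 --kubeconfig=/workspace/.kube/config \
      get service redis-master --namespace=e2e-tests-kubectl-39dwf \
      -o jsonpath='{.spec.ports[0].nodePort}'
    # Checking the Service type first makes the cause visible (ClusterIP here).
    kubectl --server=https://104.198.191.207 --kubeconfig=/workspace/.kube/config \
      get service redis-master --namespace=e2e-tests-kubectl-39dwf \
      -o jsonpath='{.spec.type}'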

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200e3650>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177
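
"timed out waiting for the condition" means the Job never reached the failed state the test polls for before the timeout expired. A minimal diagnostic sketch, assuming kubectl access to the test cluster; the namespace below is a placeholder, since the real e2e namespace name is not included in the dump above:

    # Hypothetical commands; e2e-tests-job-xxxxx is a placeholder namespace.
    # Check the Job's status and pod history in the test namespace.
    kubectl --namespace=e2e-tests-job-xxxxx get jobs
    kubectl --namespace=e2e-tests-job-xxxxx describe jobs
    kubectl --namespace=e2e-tests-job-xxxxx get pods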
