kubernetes-e2e-gke-gci-1.4-gci-1.5-upgrade-master: broken test run #37763

Closed

k8s-github-robot opened this issue Dec 1, 2016 · 7 comments

Labels: area/test-infra, kind/flake, priority/backlog

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.4-gci-1.5-upgrade-master/138/

Multiple broken tests:

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:478
Expected error:
    <*errors.errorString | 0xc82154c780>: {
        s: "timeout waiting 10m0s for cluster size to be 4",
    }
    timeout waiting 10m0s for cluster size to be 4
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:471

Issues about this test specifically: #27470 #30156 #34304 #37620
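
Triage note: both Resize failures in this run are size-convergence timeouts. A quick manual check during the resize window can confirm whether the node group ever reached the target size; a minimal sketch with stock kubectl/gcloud (no job-specific group or zone names are assumed, since none appear in this log):

```sh
# Count registered nodes the same way the test's size check does:
kubectl get nodes --no-headers | wc -l

# On GKE, compare against the managed instance group's target size
# (group/zone flags depend on the cluster under test):
gcloud compute instance-groups managed list
```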

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82122efa0>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #34223
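
Triage note: every SchedulerPredicates failure in this run is the same precondition timeout; the framework waits up to 5m0s for the kube-system pods to be running and ready before the test body starts, then gives up. To see which system pods were stuck, standard kubectl (not the test's own tooling) is enough:

```sh
# Mirror the POD / NODE / PHASE columns from the error message:
kubectl get pods --namespace=kube-system -o wide

# Events explain pods stuck in Pending or crash-looping:
kubectl describe pods --namespace=kube-system
```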

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:219
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.150.195 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-v0tn4] []  0xc82143b080  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82143b6c0 exit status 1 <nil> true [0xc820c89a88 0xc820c89ab0 0xc820c89ac0] [0xc820c89a88 0xc820c89ab0 0xc820c89ac0] [0xc820c89a90 0xc820c89aa8 0xc820c89ab8] [0xafa5c0 0xafa720 0xafa720] 0xc8218065a0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.150.195 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-v0tn4] []  0xc82143b080  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82143b6c0 exit status 1 <nil> true [0xc820c89a88 0xc820c89ab0 0xc820c89ac0] [0xc820c89a88 0xc820c89ab0 0xc820c89ac0] [0xc820c89a90 0xc820c89aa8 0xc820c89ab8] [0xafa5c0 0xafa720 0xafa720] 0xc8218065a0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941
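
Triage note: every Kubectl-client failure in this run is the same version-skew problem. The test code passes `--grace-period=0` alone, and the kubectl binary it invokes (the newer, 1.5-side client in this skew job) refuses that without `--force`, exactly as the stderr says. A hedged sketch of the two invocations (the namespace is a placeholder, not taken from this log):

```sh
# What the 1.4-era test runs (rejected by the newer kubectl):
kubectl delete --grace-period=0 -f - --namespace=<test-namespace>

# What the newer kubectl requires for immediate, unconfirmed deletion:
kubectl delete --grace-period=0 --force -f - --namespace=<test-namespace>
```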

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821a6e390>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821542be0>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8212b4ab0>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820a16330>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28071

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.150.195 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-0c76u] []  0xc820ffb620  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820ffbe80 exit status 1 <nil> true [0xc8200dc1f0 0xc8200dc400 0xc8200dc438] [0xc8200dc1f0 0xc8200dc400 0xc8200dc438] [0xc8200dc210 0xc8200dc3f8 0xc8200dc408] [0xafa5c0 0xafa720 0xafa720] 0xc82172d9e0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.150.195 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-0c76u] []  0xc820ffb620  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820ffbe80 exit status 1 <nil> true [0xc8200dc1f0 0xc8200dc400 0xc8200dc438] [0xc8200dc1f0 0xc8200dc400 0xc8200dc438] [0xc8200dc210 0xc8200dc3f8 0xc8200dc408] [0xafa5c0 0xafa720 0xafa720] 0xc82172d9e0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82148bd70>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820d2d4d0>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820c4ad10>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:756
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.150.195 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-r6gja] []  0xc820f59d20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82031e780 exit status 1 <nil> true [0xc8200dcaa0 0xc8200dcac8 0xc8200dcad8] [0xc8200dcaa0 0xc8200dcac8 0xc8200dcad8] [0xc8200dcaa8 0xc8200dcac0 0xc8200dcad0] [0xafa5c0 0xafa720 0xafa720] 0xc820ffcc00}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.150.195 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-r6gja] []  0xc820f59d20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82031e780 exit status 1 <nil> true [0xc8200dcaa0 0xc8200dcac8 0xc8200dcad8] [0xc8200dcaa0 0xc8200dcac8 0xc8200dcad8] [0xc8200dcaa8 0xc8200dcac0 0xc8200dcad0] [0xafa5c0 0xafa720 0xafa720] 0xc820ffcc00}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8210f3b00>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821a42b70>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.150.195 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-p1knh] []  0xc820ec6260  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820ec68a0 exit status 1 <nil> true [0xc820092c38 0xc820092c80 0xc820092c98] [0xc820092c38 0xc820092c80 0xc820092c98] [0xc820092c40 0xc820092c78 0xc820092c90] [0xafa5c0 0xafa720 0xafa720] 0xc8211dab40}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.150.195 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-p1knh] []  0xc820ec6260  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820ec68a0 exit status 1 <nil> true [0xc820092c38 0xc820092c80 0xc820092c98] [0xc820092c38 0xc820092c80 0xc820092c98] [0xc820092c40 0xc820092c78 0xc820092c90] [0xafa5c0 0xafa720 0xafa720] 0xc8211dab40}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8211bd5d0>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82137eaf0>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821848a60>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28091

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.150.195 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-17sav] []  0xc821ea2760  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821ea2ee0 exit status 1 <nil> true [0xc820d62058 0xc820d62080 0xc820d62090] [0xc820d62058 0xc820d62080 0xc820d62090] [0xc820d62060 0xc820d62078 0xc820d62088] [0xafa5c0 0xafa720 0xafa720] 0xc821ed05a0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.150.195 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-17sav] []  0xc821ea2760  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821ea2ee0 exit status 1 <nil> true [0xc820d62058 0xc820d62080 0xc820d62090] [0xc820d62058 0xc820d62080 0xc820d62090] [0xc820d62060 0xc820d62078 0xc820d62088] [0xafa5c0 0xafa720 0xafa720] 0xc821ed05a0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc8201c0760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8201c0760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177
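
Triage note: both "should fail a job" timeouts (this one and the V1Job variant below) are the test giving up while waiting for the Job to report failure. If reproducing by hand, the Job's status conditions are the first thing to check (the names below are placeholders):

```sh
# The test waits for a failure condition to appear on the Job:
kubectl describe job <job-name> --namespace=<test-namespace>

# Or pull the conditions directly:
kubectl get job <job-name> --namespace=<test-namespace> \
  -o jsonpath='{.status.conditions}'
```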

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82154db90>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #33883

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.150.195 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-l0wgy -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"metadata\":map[string]interface {}{\"uid\":\"fa299489-b63d-11e6-a0be-42010af00028\", \"resourceVersion\":\"27477\", \"creationTimestamp\":\"2016-11-29T14:13:18Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-l0wgy\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-l0wgy/services/redis-master\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.127.248.137\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\"}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc82174de40 exit status 1 <nil> true [0xc820bd6530 0xc820bd6560 0xc820bd6578] [0xc820bd6530 0xc820bd6560 0xc820bd6578] [0xc820bd6558 0xc820bd6570] [0xafa720 0xafa720] 0xc820c2b8c0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"metadata\":map[string]interface {}{\"uid\":\"fa299489-b63d-11e6-a0be-42010af00028\", \"resourceVersion\":\"27477\", \"creationTimestamp\":\"2016-11-29T14:13:18Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-l0wgy\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-l0wgy/services/redis-master\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.127.248.137\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\"}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.150.195 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-l0wgy -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"metadata":map[string]interface {}{"uid":"fa299489-b63d-11e6-a0be-42010af00028", "resourceVersion":"27477", "creationTimestamp":"2016-11-29T14:13:18Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-l0wgy", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-l0wgy/services/redis-master"}, "spec":map[string]interface {}{"clusterIP":"10.127.248.137", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1"}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc82174de40 exit status 1 <nil> true [0xc820bd6530 0xc820bd6560 0xc820bd6578] [0xc820bd6530 0xc820bd6560 0xc820bd6578] [0xc820bd6558 0xc820bd6570] [0xafa720 0xafa720] 0xc820c2b8c0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"metadata":map[string]interface {}{"uid":"fa299489-b63d-11e6-a0be-42010af00028", "resourceVersion":"27477", "creationTimestamp":"2016-11-29T14:13:18Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-l0wgy", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-l0wgy/services/redis-master"}, "spec":map[string]interface {}{"clusterIP":"10.127.248.137", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1"}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741
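
Triage note: this one is not a jsonpath syntax problem; the Service object dumped above is `"type":"ClusterIP"`, so `.spec.ports[0].nodePort` is legitimately absent, and the apply step that should have produced a NodePort service is what needs investigating. The failing query can be checked by hand (names copied from the log, flags are standard kubectl):

```sh
# A ClusterIP service has no nodePort, so confirm the type first:
kubectl get service redis-master --namespace=e2e-tests-kubectl-l0wgy \
  -o jsonpath='{.spec.type}'

# This only succeeds once the service is NodePort (or LoadBalancer):
kubectl get service redis-master --namespace=e2e-tests-kubectl-l0wgy \
  -o jsonpath='{.spec.ports[0].nodePort}'
```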

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.150.195 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-bctiq] []  0xc8214aeb20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8214af1e0 exit status 1 <nil> true [0xc8200372e8 0xc820037310 0xc820037320] [0xc8200372e8 0xc820037310 0xc820037320] [0xc8200372f0 0xc820037308 0xc820037318] [0xafa5c0 0xafa720 0xafa720] 0xc8218770e0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.150.195 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-bctiq] []  0xc8214aeb20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8214af1e0 exit status 1 <nil> true [0xc8200372e8 0xc820037310 0xc820037320] [0xc8200372e8 0xc820037310 0xc820037320] [0xc8200372f0 0xc820037308 0xc820037318] [0xafa5c0 0xafa720 0xafa720] 0xc8218770e0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820d47ab0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #35279

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:452
Expected error:
    <*errors.errorString | 0xc821ad1fc0>: {
        s: "timeout waiting 10m0s for cluster size to be 2",
    }
    timeout waiting 10m0s for cluster size to be 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:447

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98
Expected error:
    <*errors.errorString | 0xc820f64590>: {
        s: "couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 2 (20.009619986s elapsed)",
    }
    couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 2 (20.009619986s elapsed)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:56

Issues about this test specifically: #26744 #26929

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Expected error:
    <*errors.errorString | 0xc8217b6f80>: {
        s: "Only 4 pods started out of 5",
    }
    Only 4 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:351

Issues about this test specifically: #27406 #27669 #29770 #32642
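
Triage note: the HPA failure happens during setup in autoscaling_utils.go, before any scaling assertion runs; the resource-consumer RC only got 4 of its 5 pods running. Events in the test namespace usually show why the fifth pod never started (namespace is a placeholder, stock kubectl):

```sh
# RC status and pod phases in the e2e namespace:
kubectl get rc,pods --namespace=<e2e-hpa-namespace>

# Scheduling or image-pull failures show up as events:
kubectl get events --namespace=<e2e-hpa-namespace> --sort-by=.lastTimestamp
```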

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821604640>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.150.195 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-9v3fl] []  0xc82178b0e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82178b960 exit status 1 <nil> true [0xc8214b2dd0 0xc8214b2df8 0xc8214b2e08] [0xc8214b2dd0 0xc8214b2df8 0xc8214b2e08] [0xc8214b2dd8 0xc8214b2df0 0xc8214b2e00] [0xafa5c0 0xafa720 0xafa720] 0xc8213cdce0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.150.195 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-9v3fl] []  0xc82178b0e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82178b960 exit status 1 <nil> true [0xc8214b2dd0 0xc8214b2df8 0xc8214b2e08] [0xc8214b2dd0 0xc8214b2df8 0xc8214b2e08] [0xc8214b2dd8 0xc8214b2df0 0xc8214b2e00] [0xafa5c0 0xafa720 0xafa720] 0xc8213cdce0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.150.195 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-teepw] []  0xc8217c2ae0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8217c3620 exit status 1 <nil> true [0xc820d620e8 0xc820d62130 0xc820d62140] [0xc820d620e8 0xc820d62130 0xc820d62140] [0xc820d620f0 0xc820d62118 0xc820d62138] [0xafa5c0 0xafa720 0xafa720] 0xc8218506c0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.150.195 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-teepw] []  0xc8217c2ae0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8217c3620 exit status 1 <nil> true [0xc820d620e8 0xc820d62130 0xc820d62140] [0xc820d620e8 0xc820d62130 0xc820d62140] [0xc820d620f0 0xc820d62118 0xc820d62138] [0xafa5c0 0xafa720 0xafa720] 0xc8218506c0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:792
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.150.195 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-dlxuu] []  0xc820bff980  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8208f22a0 exit status 1 <nil> true [0xc820d63c80 0xc820d63ca8 0xc820d63cb8] [0xc820d63c80 0xc820d63ca8 0xc820d63cb8] [0xc820d63c88 0xc820d63ca0 0xc820d63cb0] [0xafa5c0 0xafa720 0xafa720] 0xc821f2a3c0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.150.195 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-dlxuu] []  0xc820bff980  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8208f22a0 exit status 1 <nil> true [0xc820d63c80 0xc820d63ca8 0xc820d63cb8] [0xc820d63c80 0xc820d63ca8 0xc820d63cb8] [0xc820d63c88 0xc820d63ca0 0xc820d63cb0] [0xafa5c0 0xafa720 0xafa720] 0xc821f2a3c0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.150.195 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-wz5p8] []  0xc8217c2820  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8217c2fc0 exit status 1 <nil> true [0xc8219705a8 0xc8219705d0 0xc8219705e0] [0xc8219705a8 0xc8219705d0 0xc8219705e0] [0xc8219705b0 0xc8219705c8 0xc8219705d8] [0xafa5c0 0xafa720 0xafa720] 0xc821f47740}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.150.195 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-wz5p8] []  0xc8217c2820  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8217c2fc0 exit status 1 <nil> true [0xc8219705a8 0xc8219705d0 0xc8219705e0] [0xc8219705a8 0xc8219705d0 0xc8219705e0] [0xc8219705b0 0xc8219705c8 0xc8219705d8] [0xafa5c0 0xafa720 0xafa720] 0xc821f47740}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8210fd580>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8201c0760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.4-gci-1.5-upgrade-master/139/

Multiple broken tests:

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:76
Requires at least 2 nodes
Expected
    <int>: 1
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:70

Issues about this test specifically: #28297 #37101

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.232.209 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-h1cy8] []  0xc820942360  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820942b20 exit status 1 <nil> true [0xc820038898 0xc8200388c0 0xc8200388d0] [0xc820038898 0xc8200388c0 0xc8200388d0] [0xc8200388a0 0xc8200388b8 0xc8200388c8] [0xafa5c0 0xafa720 0xafa720] 0xc8212933e0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.232.209 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-h1cy8] []  0xc820942360  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820942b20 exit status 1 <nil> true [0xc820038898 0xc8200388c0 0xc8200388d0] [0xc820038898 0xc8200388c0 0xc8200388d0] [0xc8200388a0 0xc8200388b8 0xc8200388c8] [0xafa5c0 0xafa720 0xafa720] 0xc8212933e0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.232.209 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-y1e3y] []  0xc8209293a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820929b80 exit status 1 <nil> true [0xc820039190 0xc8200391b8 0xc8200391c8] [0xc820039190 0xc8200391b8 0xc8200391c8] [0xc820039198 0xc8200391b0 0xc8200391c0] [0xafa5c0 0xafa720 0xafa720] 0xc820dcaba0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.232.209 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-y1e3y] []  0xc8209293a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820929b80 exit status 1 <nil> true [0xc820039190 0xc8200391b8 0xc8200391c8] [0xc820039190 0xc8200391b8 0xc8200391c8] [0xc820039198 0xc8200391b0 0xc8200391c0] [0xafa5c0 0xafa720 0xafa720] 0xc820dcaba0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82147ef30>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:756
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.232.209 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-adfwv] []  0xc820b8d5c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820b8dce0 exit status 1 <nil> true [0xc820ec4428 0xc820ec4468 0xc820ec4478] [0xc820ec4428 0xc820ec4468 0xc820ec4478] [0xc820ec4430 0xc820ec4458 0xc820ec4470] [0xafa5c0 0xafa720 0xafa720] 0xc8210848a0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.232.209 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-adfwv] []  0xc820b8d5c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820b8dce0 exit status 1 <nil> true [0xc820ec4428 0xc820ec4468 0xc820ec4478] [0xc820ec4428 0xc820ec4468 0xc820ec4478] [0xc820ec4430 0xc820ec4458 0xc820ec4470] [0xafa5c0 0xafa720 0xafa720] 0xc8210848a0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82142d3e0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820d04f80>: {
        s: "0 / 10 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 10 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8214af010>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #31918

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.232.209 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-y2npa] []  0xc821a73fa0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821a669c0 exit status 1 <nil> true [0xc821493190 0xc8214931b8 0xc8214931c8] [0xc821493190 0xc8214931b8 0xc8214931c8] [0xc821493198 0xc8214931b0 0xc8214931c0] [0xafa5c0 0xafa720 0xafa720] 0xc821a5c420}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.232.209 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-y2npa] []  0xc821a73fa0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821a669c0 exit status 1 <nil> true [0xc821493190 0xc8214931b8 0xc8214931c8] [0xc821493190 0xc8214931b8 0xc8214931c8] [0xc821493198 0xc8214931b0 0xc8214931c0] [0xafa5c0 0xafa720 0xafa720] 0xc821a5c420}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821b2f9c0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82180f200>: {
        s: "0 / 12 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 12 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc82019e760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947
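
"timed out waiting for the condition" is the stock timeout error from the Kubernetes wait utilities (wait.ErrWaitTimeout, now housed in k8s.io/apimachinery), so it only says that a poll never observed the expected state, here the overlapping-annotation update landing on the first deployment. A tiny runnable sketch that reproduces the exact string:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// A condition that never becomes true makes Poll return
	// wait.ErrWaitTimeout: "timed out waiting for the condition".
	err := wait.Poll(50*time.Millisecond, 200*time.Millisecond, func() (bool, error) {
		return false, nil
	})
	fmt.Println(err)
}
```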

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820c6db70>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #33883

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:219
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.232.209 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-5rqqd] []  0xc8222d8380  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8222d89a0 exit status 1 <nil> true [0xc821cce270 0xc821cce2a0 0xc821cce2c0] [0xc821cce270 0xc821cce2a0 0xc821cce2c0] [0xc821cce278 0xc821cce298 0xc821cce2b0] [0xafa5c0 0xafa720 0xafa720] 0xc820b85620}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.232.209 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-5rqqd] []  0xc8222d8380  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8222d89a0 exit status 1 <nil> true [0xc821cce270 0xc821cce2a0 0xc821cce2c0] [0xc821cce270 0xc821cce2a0 0xc821cce2c0] [0xc821cce278 0xc821cce298 0xc821cce2b0] [0xafa5c0 0xafa720 0xafa720] 0xc820b85620}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:792
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.232.209 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-tfmdi] []  0xc820dd6820  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820dd70e0 exit status 1 <nil> true [0xc8200c5100 0xc8200c5128 0xc8200c5138] [0xc8200c5100 0xc8200c5128 0xc8200c5138] [0xc8200c5108 0xc8200c5120 0xc8200c5130] [0xafa5c0 0xafa720 0xafa720] 0xc8214219e0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.232.209 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-tfmdi] []  0xc820dd6820  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820dd70e0 exit status 1 <nil> true [0xc8200c5100 0xc8200c5128 0xc8200c5138] [0xc8200c5100 0xc8200c5128 0xc8200c5138] [0xc8200c5108 0xc8200c5120 0xc8200c5130] [0xafa5c0 0xafa720 0xafa720] 0xc8214219e0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821adf570>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82180e5d0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.232.209 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-87gbc] []  0xc820a0fde0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8209c1720 exit status 1 <nil> true [0xc8202e44b8 0xc8202e4520 0xc8202e4568] [0xc8202e44b8 0xc8202e4520 0xc8202e4568] [0xc8202e44e0 0xc8202e4518 0xc8202e4528] [0xafa5c0 0xafa720 0xafa720] 0xc82147c900}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.232.209 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-87gbc] []  0xc820a0fde0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8209c1720 exit status 1 <nil> true [0xc8202e44b8 0xc8202e4520 0xc8202e4568] [0xc8202e44b8 0xc8202e4520 0xc8202e4568] [0xc8202e44e0 0xc8202e4518 0xc8202e4528] [0xafa5c0 0xafa720 0xafa720] 0xc82147c900}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820c2a670>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.232.209 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-qj1o4 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-qj1o4\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-qj1o4/services/redis-master\", \"uid\":\"3df0ff70-b68a-11e6-825a-42010af0002c\", \"resourceVersion\":\"28773\", \"creationTimestamp\":\"2016-11-29T23:19:14Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.253.226\", \"type\":\"ClusterIP\"}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc82084bea0 exit status 1 <nil> true [0xc821cce158 0xc821cce170 0xc821cce188] [0xc821cce158 0xc821cce170 0xc821cce188] [0xc821cce168 0xc821cce180] [0xafa720 0xafa720] 0xc821b1d9e0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-qj1o4\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-qj1o4/services/redis-master\", \"uid\":\"3df0ff70-b68a-11e6-825a-42010af0002c\", \"resourceVersion\":\"28773\", \"creationTimestamp\":\"2016-11-29T23:19:14Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.253.226\", \"type\":\"ClusterIP\"}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.232.209 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-qj1o4 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-qj1o4", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-qj1o4/services/redis-master", "uid":"3df0ff70-b68a-11e6-825a-42010af0002c", "resourceVersion":"28773", "creationTimestamp":"2016-11-29T23:19:14Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.253.226", "type":"ClusterIP"}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc82084bea0 exit status 1 <nil> true [0xc821cce158 0xc821cce170 0xc821cce188] [0xc821cce158 0xc821cce170 0xc821cce188] [0xc821cce168 0xc821cce180] [0xafa720 0xafa720] 0xc821b1d9e0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-qj1o4", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-qj1o4/services/redis-master", "uid":"3df0ff70-b68a-11e6-825a-42010af0002c", "resourceVersion":"28773", "creationTimestamp":"2016-11-29T23:19:14Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.253.226", "type":"ClusterIP"}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741
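
This failure is more informative than the rest: the service object echoed back is "type":"ClusterIP" with no nodePort on its port, so the template {.spec.ports[0].nodePort} has nothing to resolve; the re-applied manifest evidently produced a ClusterIP service instead of keeping a NodePort one. The same error can be reproduced with client-go's jsonpath package; the object literal below is a trimmed stand-in for the service dump above:

```go
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/util/jsonpath"
)

func main() {
	// Trimmed stand-in for the ClusterIP service the test got back.
	svc := map[string]interface{}{
		"spec": map[string]interface{}{
			"type": "ClusterIP",
			"ports": []interface{}{
				map[string]interface{}{"port": 6379}, // no nodePort on a ClusterIP service
			},
		},
	}
	jp := jsonpath.New("nodePort")
	if err := jp.Parse("{.spec.ports[0].nodePort}"); err != nil {
		panic(err)
	}
	// Fails with "nodePort is not found", mirroring the e2e error above.
	if err := jp.Execute(os.Stdout, svc); err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
	}
}
```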

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc82019e760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.232.209 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-i7xox] []  0xc82145eb40  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82145f180 exit status 1 <nil> true [0xc820ec4120 0xc820ec4148 0xc820ec4158] [0xc820ec4120 0xc820ec4148 0xc820ec4158] [0xc820ec4128 0xc820ec4140 0xc820ec4150] [0xafa5c0 0xafa720 0xafa720] 0xc820b84de0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.232.209 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-i7xox] []  0xc82145eb40  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82145f180 exit status 1 <nil> true [0xc820ec4120 0xc820ec4148 0xc820ec4158] [0xc820ec4120 0xc820ec4148 0xc820ec4158] [0xc820ec4128 0xc820ec4140 0xc820ec4150] [0xafa5c0 0xafa720 0xafa720] 0xc820b84de0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc82019e760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:478
Expected error:
    <*errors.errorString | 0xc821d7e040>: {
        s: "timeout waiting 10m0s for cluster size to be 4",
    }
    timeout waiting 10m0s for cluster size to be 4
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:471

Issues about this test specifically: #27470 #30156 #34304 #37620
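
The resize timeouts come from a different wait loop: the test polls the node list until the cluster reaches the target size and gives up after 10m0s. A hedged sketch of such a poll against current client-go (the helper name and cadence are assumptions; the real framework also filters out unschedulable and not-ready nodes, which this sketch omits):

```go
package e2esketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForClusterSize polls the node count until it matches size,
// returning an error shaped like the one in the logs on timeout.
func waitForClusterSize(c kubernetes.Interface, size int, timeout time.Duration) error {
	for start := time.Now(); time.Since(start) < timeout; time.Sleep(20 * time.Second) {
		nodes, err := c.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			continue // tolerate transient listing errors and retry
		}
		if len(nodes.Items) == size {
			return nil
		}
	}
	return fmt.Errorf("timeout waiting %v for cluster size to be %d", timeout, size)
}
```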

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82078a780>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820c48bd0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28091

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821597f90>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.232.209 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-zemx0] []  0xc821a344a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821a34bc0 exit status 1 <nil> true [0xc8217c6140 0xc8217c6168 0xc8217c6178] [0xc8217c6140 0xc8217c6168 0xc8217c6178] [0xc8217c6148 0xc8217c6160 0xc8217c6170] [0xafa5c0 0xafa720 0xafa720] 0xc8220f9c80}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.232.209 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-zemx0] []  0xc821a344a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821a34bc0 exit status 1 <nil> true [0xc8217c6140 0xc8217c6168 0xc8217c6178] [0xc8217c6140 0xc8217c6168 0xc8217c6178] [0xc8217c6148 0xc8217c6160 0xc8217c6170] [0xafa5c0 0xafa720 0xafa720] 0xc8220f9c80}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820e347e0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820b39860>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820d0a8b0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28019

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:452
Expected error:
    <*errors.errorString | 0xc821a8b350>: {
        s: "timeout waiting 10m0s for cluster size to be 2",
    }
    timeout waiting 10m0s for cluster size to be 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:447

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821bbed30>: {
        s: "0 / 10 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 10 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.232.209 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-zwmnd] []  0xc821832520  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821832b40 exit status 1 <nil> true [0xc8220f21a8 0xc8220f21d0 0xc8220f21e0] [0xc8220f21a8 0xc8220f21d0 0xc8220f21e0] [0xc8220f21b0 0xc8220f21c8 0xc8220f21d8] [0xafa5c0 0xafa720 0xafa720] 0xc821826b40}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.232.209 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-zwmnd] []  0xc821832520  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821832b40 exit status 1 <nil> true [0xc8220f21a8 0xc8220f21d0 0xc8220f21e0] [0xc8220f21a8 0xc8220f21d0 0xc8220f21e0] [0xc8220f21b0 0xc8220f21c8 0xc8220f21d8] [0xafa5c0 0xafa720 0xafa720] 0xc821826b40}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

@k8s-github-robot added area/test-infra kind/flake priority/backlog labels Dec 1, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.4-gci-1.5-upgrade-master/141/

Multiple broken tests:

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc820018b30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:756
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-lrk77] []  0xc820404ac0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82035d260 exit status 1 <nil> true [0xc820037400 0xc820037440 0xc820037450] [0xc820037400 0xc820037440 0xc820037450] [0xc820037408 0xc820037438 0xc820037448] [0xafa5c0 0xafa720 0xafa720] 0xc820d1d1a0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-lrk77] []  0xc820404ac0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82035d260 exit status 1 <nil> true [0xc820037400 0xc820037440 0xc820037450] [0xc820037400 0xc820037440 0xc820037450] [0xc820037408 0xc820037438 0xc820037448] [0xafa5c0 0xafa720 0xafa720] 0xc820d1d1a0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:219
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-a4ppp] []  0xc822027ce0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821ae84e0 exit status 1 <nil> true [0xc8200dc8f0 0xc8200dc948 0xc8200dc980] [0xc8200dc8f0 0xc8200dc948 0xc8200dc980] [0xc8200dc900 0xc8200dc930 0xc8200dc960] [0xafa5c0 0xafa720 0xafa720] 0xc821b83a40}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-a4ppp] []  0xc822027ce0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821ae84e0 exit status 1 <nil> true [0xc8200dc8f0 0xc8200dc948 0xc8200dc980] [0xc8200dc8f0 0xc8200dc948 0xc8200dc980] [0xc8200dc900 0xc8200dc930 0xc8200dc960] [0xafa5c0 0xafa720 0xafa720] 0xc821b83a40}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8215852c0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28071

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-gpb06] []  0xc8207190e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820719960 exit status 1 <nil> true [0xc820e50110 0xc820e50158 0xc820e50168] [0xc820e50110 0xc820e50158 0xc820e50168] [0xc820e50120 0xc820e50150 0xc820e50160] [0xafa5c0 0xafa720 0xafa720] 0xc820560fc0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-gpb06] []  0xc8207190e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820719960 exit status 1 <nil> true [0xc820e50110 0xc820e50158 0xc820e50168] [0xc820e50110 0xc820e50158 0xc820e50168] [0xc820e50120 0xc820e50150 0xc820e50160] [0xafa5c0 0xafa720 0xafa720] 0xc820560fc0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:430
Expected error:
    <*errors.errorString | 0xc821832280>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:427

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820cde280>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8217f5aa0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821195330>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:430
Expected error:
    <*errors.errorString | 0xc8210dab50>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:427

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-scuvp] []  0xc8209e64a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8209de0a0 exit status 1 <nil> true [0xc820276200 0xc8202762a0 0xc8202762c0] [0xc820276200 0xc8202762a0 0xc8202762c0] [0xc820276218 0xc820276258 0xc8202762a8] [0xafa5c0 0xafa720 0xafa720] 0xc820c8b380}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-scuvp] []  0xc8209e64a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8209de0a0 exit status 1 <nil> true [0xc820276200 0xc8202762a0 0xc8202762c0] [0xc820276200 0xc8202762a0 0xc8202762c0] [0xc820276218 0xc820276258 0xc8202762a8] [0xafa5c0 0xafa720 0xafa720] 0xc820c8b380}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820018b30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-mx9uh -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-mx9uh/services/redis-master\", \"uid\":\"3fd37454-b6ec-11e6-8d62-42010af0001a\", \"resourceVersion\":\"48088\", \"creationTimestamp\":\"2016-11-30T11:00:48Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-mx9uh\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.247.41\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc821b4b100 exit status 1 <nil> true [0xc820e50b70 0xc820e50b88 0xc820e50ba8] [0xc820e50b70 0xc820e50b88 0xc820e50ba8] [0xc820e50b80 0xc820e50b98] [0xafa720 0xafa720] 0xc821a9fc80}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-mx9uh/services/redis-master\", \"uid\":\"3fd37454-b6ec-11e6-8d62-42010af0001a\", \"resourceVersion\":\"48088\", \"creationTimestamp\":\"2016-11-30T11:00:48Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-mx9uh\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.247.41\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-mx9uh -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-mx9uh/services/redis-master", "uid":"3fd37454-b6ec-11e6-8d62-42010af0001a", "resourceVersion":"48088", "creationTimestamp":"2016-11-30T11:00:48Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-mx9uh"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.247.41", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc821b4b100 exit status 1 <nil> true [0xc820e50b70 0xc820e50b88 0xc820e50ba8] [0xc820e50b70 0xc820e50b88 0xc820e50ba8] [0xc820e50b80 0xc820e50b98] [0xafa720 0xafa720] 0xc821a9fc80}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-mx9uh/services/redis-master", "uid":"3fd37454-b6ec-11e6-8d62-42010af0001a", "resourceVersion":"48088", "creationTimestamp":"2016-11-30T11:00:48Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-mx9uh"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.247.41", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741
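
Triage note: the object dumped above is a ClusterIP service, and ClusterIP ports carry no nodePort field, so the jsonpath lookup fails legitimately -- the apply appears not to have preserved the NodePort type. A quick first check, as a sketch (<namespace> is a placeholder for the generated test namespace):

    # Confirm the service type before reading the port allocation;
    # only NodePort/LoadBalancer services populate .spec.ports[0].nodePort
    kubectl get service redis-master --namespace=<namespace> \
      -o jsonpath='{.spec.type}'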

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821bdbde0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #29516
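
Triage note: the SchedulerPredicates failures in this run all share this setup error -- the framework's (awkwardly worded) message that kube-system pods were not all in the desired state within 5m0s, with an empty offender table. A first diagnostic step, as a sketch:

    # Show kube-system pods with phase and node placement to spot the
    # system pod blocking the readiness gate
    kubectl get pods --namespace=kube-system -o wide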

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82171e200>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820018b30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-tseaa] []  0xc821a035c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821a03c20 exit status 1 <nil> true [0xc820e50330 0xc820e503a0 0xc820e503d8] [0xc820e50330 0xc820e503a0 0xc820e503d8] [0xc820e50340 0xc820e50390 0xc820e503c0] [0xafa5c0 0xafa720 0xafa720] 0xc8216f48a0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-tseaa] []  0xc821a035c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821a03c20 exit status 1 <nil> true [0xc820e50330 0xc820e503a0 0xc820e503d8] [0xc820e50330 0xc820e503a0 0xc820e503d8] [0xc820e50340 0xc820e50390 0xc820e503c0] [0xafa5c0 0xafa720 0xafa720] 0xc8216f48a0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26324 #27715 #28845
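
Triage note: this stderr is the 1.5 kubectl client refusing a bare --grace-period=0; the same failure repeats across the Kubectl client tests in these runs because the skewed test binary still issues the old form. The form the newer client accepts, as a sketch (<namespace> stands in for the generated test namespace):

    # kubectl 1.5 requires --force alongside --grace-period=0
    kubectl delete -f - --namespace=<namespace> --grace-period=0 --force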

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82194c3b0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8204d9af0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82108ed40>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #31918

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-69gp7] []  0xc820e78c20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820e79380 exit status 1 <nil> true [0xc820e501a8 0xc820e501d0 0xc820e501e0] [0xc820e501a8 0xc820e501d0 0xc820e501e0] [0xc820e501b0 0xc820e501c8 0xc820e501d8] [0xafa5c0 0xafa720 0xafa720] 0xc8211629c0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-69gp7] []  0xc820e78c20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820e79380 exit status 1 <nil> true [0xc820e501a8 0xc820e501d0 0xc820e501e0] [0xc820e501a8 0xc820e501d0 0xc820e501e0] [0xc820e501b0 0xc820e501c8 0xc820e501d8] [0xafa5c0 0xafa720 0xafa720] 0xc8211629c0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820f36780>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8213ee7a0>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821a47040>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820901240>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8219d2c90>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #34223

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:792
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-69lb7] []  0xc820fbec40  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820fbf2c0 exit status 1 <nil> true [0xc82130c928 0xc82130c978 0xc82130c998] [0xc82130c928 0xc82130c978 0xc82130c998] [0xc82130c938 0xc82130c968 0xc82130c988] [0xafa5c0 0xafa720 0xafa720] 0xc821731e60}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-69lb7] []  0xc820fbec40  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820fbf2c0 exit status 1 <nil> true [0xc82130c928 0xc82130c978 0xc82130c998] [0xc82130c928 0xc82130c978 0xc82130c998] [0xc82130c938 0xc82130c968 0xc82130c988] [0xafa5c0 0xafa720 0xafa720] 0xc821731e60}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82114d910>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28091

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820fde630>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #33883

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-ygkgz] []  0xc821277d20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820faa4e0 exit status 1 <nil> true [0xc820aba428 0xc820aba468 0xc820aba478] [0xc820aba428 0xc820aba468 0xc820aba478] [0xc820aba438 0xc820aba460 0xc820aba470] [0xafa5c0 0xafa720 0xafa720] 0xc82174d5c0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-ygkgz] []  0xc821277d20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820faa4e0 exit status 1 <nil> true [0xc820aba428 0xc820aba468 0xc820aba478] [0xc820aba428 0xc820aba468 0xc820aba478] [0xc820aba438 0xc820aba460 0xc820aba470] [0xafa5c0 0xafa720 0xafa720] 0xc82174d5c0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821285890>: {
        s: "0 / 14 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\n",
    }
    0 / 14 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD NODE PHASE GRACE CONDITIONS
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-lp702] []  0xc8216fee60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8216ff560 exit status 1 <nil> true [0xc8202761d8 0xc820276258 0xc8202762a8] [0xc8202761d8 0xc820276258 0xc8202762a8] [0xc8202761f0 0xc820276240 0xc8202762a0] [0xafa5c0 0xafa720 0xafa720] 0xc821730ae0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-lp702] []  0xc8216fee60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8216ff560 exit status 1 <nil> true [0xc8202761d8 0xc820276258 0xc8202762a8] [0xc8202761d8 0xc820276258 0xc8202762a8] [0xc8202761f0 0xc820276240 0xc8202762a0] [0xafa5c0 0xafa720 0xafa720] 0xc821730ae0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-osu8j] []  0xc8212764c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821276b60 exit status 1 <nil> true [0xc821334700 0xc821334728 0xc821334738] [0xc821334700 0xc821334728 0xc821334738] [0xc821334708 0xc821334720 0xc821334730] [0xafa5c0 0xafa720 0xafa720] 0xc8214496e0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-osu8j] []  0xc8212764c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821276b60 exit status 1 <nil> true [0xc821334700 0xc821334728 0xc821334738] [0xc821334700 0xc821334728 0xc821334738] [0xc821334708 0xc821334720 0xc821334730] [0xafa5c0 0xafa720 0xafa720] 0xc8214496e0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #31151 #35586

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.4-gci-1.5-upgrade-master/142/

Multiple broken tests:

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc8201be5d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947
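
Triage note: here the test could not update the first deployment's overlapping annotation before the wait at deployment.go:1244 expired. One way to see which annotations actually landed, as a sketch (<namespace> is a placeholder):

    # Dump each deployment's annotations to check for the overlap marker
    kubectl get deployments --namespace=<namespace> \
      -o jsonpath='{range .items[*]}{.metadata.name}: {.metadata.annotations}{"\n"}{end}'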

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-b67v8] []  0xc820955aa0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8208585c0 exit status 1 <nil> true [0xc820036a60 0xc820036aa0 0xc820036ab8] [0xc820036a60 0xc820036aa0 0xc820036ab8] [0xc820036a78 0xc820036a98 0xc820036ab0] [0xafa5c0 0xafa720 0xafa720] 0xc821027f80}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-b67v8] []  0xc820955aa0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8208585c0 exit status 1 <nil> true [0xc820036a60 0xc820036aa0 0xc820036ab8] [0xc820036a60 0xc820036aa0 0xc820036ab8] [0xc820036a78 0xc820036a98 0xc820036ab0] [0xafa5c0 0xafa720 0xafa720] 0xc821027f80}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-2nqmm] []  0xc821e46680  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821e46d40 exit status 1 <nil> true [0xc8209d88a0 0xc8209d88e0 0xc8209d8910] [0xc8209d88a0 0xc8209d88e0 0xc8209d8910] [0xc8209d88b0 0xc8209d88d8 0xc8209d88f0] [0xafa5c0 0xafa720 0xafa720] 0xc821b0e240}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-2nqmm] []  0xc821e46680  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821e46d40 exit status 1 <nil> true [0xc8209d88a0 0xc8209d88e0 0xc8209d8910] [0xc8209d88a0 0xc8209d88e0 0xc8209d8910] [0xc8209d88b0 0xc8209d88d8 0xc8209d88f0] [0xafa5c0 0xafa720 0xafa720] 0xc821b0e240}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-7pz2p] []  0xc820ddc0a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820ddc760 exit status 1 <nil> true [0xc8200c2000 0xc8200c21a8 0xc8200c21d8] [0xc8200c2000 0xc8200c21a8 0xc8200c21d8] [0xc8200c2118 0xc8200c2198 0xc8200c21c8] [0xafa5c0 0xafa720 0xafa720] 0xc820dfa2a0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-7pz2p] []  0xc820ddc0a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820ddc760 exit status 1 <nil> true [0xc8200c2000 0xc8200c21a8 0xc8200c21d8] [0xc8200c2000 0xc8200c21a8 0xc8200c21d8] [0xc8200c2118 0xc8200c2198 0xc8200c21c8] [0xafa5c0 0xafa720 0xafa720] 0xc820dfa2a0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8201be5d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8201be5d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-ksx7v -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-ksx7v\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-ksx7v/services/redis-master\", \"uid\":\"dfd8f53e-b704-11e6-8b11-42010af0002f\", \"resourceVersion\":\"10883\", \"creationTimestamp\":\"2016-11-30T13:57:04Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.250.15\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8211b1540 exit status 1 <nil> true [0xc8200cc2b0 0xc8200cc2f8 0xc8200cc318] [0xc8200cc2b0 0xc8200cc2f8 0xc8200cc318] [0xc8200cc2f0 0xc8200cc310] [0xafa720 0xafa720] 0xc820b82900}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-ksx7v\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-ksx7v/services/redis-master\", \"uid\":\"dfd8f53e-b704-11e6-8b11-42010af0002f\", \"resourceVersion\":\"10883\", \"creationTimestamp\":\"2016-11-30T13:57:04Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.250.15\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-ksx7v -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-ksx7v", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-ksx7v/services/redis-master", "uid":"dfd8f53e-b704-11e6-8b11-42010af0002f", "resourceVersion":"10883", "creationTimestamp":"2016-11-30T13:57:04Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}}, "spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.250.15", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8211b1540 exit status 1 <nil> true [0xc8200cc2b0 0xc8200cc2f8 0xc8200cc318] [0xc8200cc2b0 0xc8200cc2f8 0xc8200cc318] [0xc8200cc2f0 0xc8200cc310] [0xafa720 0xafa720] 0xc820b82900}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-ksx7v", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-ksx7v/services/redis-master", "uid":"dfd8f53e-b704-11e6-8b11-42010af0002f", "resourceVersion":"10883", "creationTimestamp":"2016-11-30T13:57:04Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}}, "spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.250.15", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:792
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-qs0zt] []  0xc821825200  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821825860 exit status 1 <nil> true [0xc8209d80f8 0xc8209d8190 0xc8209d81a0] [0xc8209d80f8 0xc8209d8190 0xc8209d81a0] [0xc8209d8118 0xc8209d8180 0xc8209d8198] [0xafa5c0 0xafa720 0xafa720] 0xc821dc8480}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-qs0zt] []  0xc821825200  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821825860 exit status 1 <nil> true [0xc8209d80f8 0xc8209d8190 0xc8209d81a0] [0xc8209d80f8 0xc8209d8190 0xc8209d81a0] [0xc8209d8118 0xc8209d8180 0xc8209d8198] [0xafa5c0 0xafa720 0xafa720] 0xc821dc8480}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:756
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-g3whk] []  0xc8207237c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820512060 exit status 1 <nil> true [0xc820037348 0xc820037388 0xc820037398] [0xc820037348 0xc820037388 0xc820037398] [0xc820037360 0xc820037378 0xc820037390] [0xafa5c0 0xafa720 0xafa720] 0xc820dfbda0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-g3whk] []  0xc8207237c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820512060 exit status 1 <nil> true [0xc820037348 0xc820037388 0xc820037398] [0xc820037348 0xc820037388 0xc820037398] [0xc820037360 0xc820037378 0xc820037390] [0xafa5c0 0xafa720 0xafa720] 0xc820dfbda0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:219
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-hh53f] []  0xc8218096e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821809d00 exit status 1 <nil> true [0xc8210c07e0 0xc8210c0828 0xc8210c0840] [0xc8210c07e0 0xc8210c0828 0xc8210c0840] [0xc8210c07f8 0xc8210c0820 0xc8210c0830] [0xafa5c0 0xafa720 0xafa720] 0xc821acade0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-hh53f] []  0xc8218096e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821809d00 exit status 1 <nil> true [0xc8210c07e0 0xc8210c0828 0xc8210c0840] [0xc8210c07e0 0xc8210c0828 0xc8210c0840] [0xc8210c07f8 0xc8210c0820 0xc8210c0830] [0xafa5c0 0xafa720 0xafa720] 0xc821acade0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-6hdb1] []  0xc820aaa3c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820aaaa40 exit status 1 <nil> true [0xc8210c0050 0xc8210c0078 0xc8210c0088] [0xc8210c0050 0xc8210c0078 0xc8210c0088] [0xc8210c0058 0xc8210c0070 0xc8210c0080] [0xafa5c0 0xafa720 0xafa720] 0xc821a7f5c0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-6hdb1] []  0xc820aaa3c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820aaaa40 exit status 1 <nil> true [0xc8210c0050 0xc8210c0078 0xc8210c0088] [0xc8210c0050 0xc8210c0078 0xc8210c0088] [0xc8210c0058 0xc8210c0070 0xc8210c0080] [0xafa5c0 0xafa720 0xafa720] 0xc821a7f5c0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-x542l] []  0xc820aa49a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820aa51a0 exit status 1 <nil> true [0xc820f32110 0xc820f32150 0xc820f32160] [0xc820f32110 0xc820f32150 0xc820f32160] [0xc820f32130 0xc820f32148 0xc820f32158] [0xafa5c0 0xafa720 0xafa720] 0xc820d48360}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-x542l] []  0xc820aa49a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820aa51a0 exit status 1 <nil> true [0xc820f32110 0xc820f32150 0xc820f32160] [0xc820f32110 0xc820f32150 0xc820f32160] [0xc820f32130 0xc820f32148 0xc820f32158] [0xafa5c0 0xafa720 0xafa720] 0xc820d48360}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-dqk5q] []  0xc8213473c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821347ac0 exit status 1 <nil> true [0xc8200368e0 0xc820036948 0xc820036978] [0xc8200368e0 0xc820036948 0xc820036978] [0xc820036910 0xc820036940 0xc820036958] [0xafa5c0 0xafa720 0xafa720] 0xc82114fda0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-dqk5q] []  0xc8213473c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821347ac0 exit status 1 <nil> true [0xc8200368e0 0xc820036948 0xc820036978] [0xc8200368e0 0xc820036948 0xc820036978] [0xc820036910 0xc820036940 0xc820036958] [0xafa5c0 0xafa720 0xafa720] 0xc82114fda0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-m052h] []  0xc82087f7c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82087fe20 exit status 1 <nil> true [0xc820036648 0xc8200366e8 0xc820036710] [0xc820036648 0xc8200366e8 0xc820036710] [0xc820036660 0xc8200366a0 0xc820036708] [0xafa5c0 0xafa720 0xafa720] 0xc821007a40}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.132.176 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-m052h] []  0xc82087f7c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82087fe20 exit status 1 <nil> true [0xc820036648 0xc8200366e8 0xc820036710] [0xc820036648 0xc8200366e8 0xc820036710] [0xc820036660 0xc8200366a0 0xc820036708] [0xafa5c0 0xafa720 0xafa720] 0xc821007a40}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531
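
The bare "Expected <int>: 0 to equal <int>: 1" is Gomega's Equal-matcher mismatch format, produced by the assertion at rescheduler.go:59: the test observed 0 of something it requires exactly 1 of (going by the test name, presumably the critical pod it expected to find scheduled). A standalone sketch that reproduces the message shape, with hypothetical values and a hand-rolled fail handler instead of a test runner:

    package main

    import (
        "fmt"

        "github.com/onsi/gomega"
    )

    func main() {
        // Route assertion failures to stdout instead of a testing.T.
        gomega.RegisterFailHandler(func(message string, _ ...int) {
            fmt.Println(message)
        })
        // Prints the same shape as the log:
        //   Expected
        //       <int>: 0
        //   to equal
        //       <int>: 1
        gomega.Expect(0).To(gomega.Equal(1))
    }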

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.4-gci-1.5-upgrade-master/143/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-w7g3g -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-01T01:44:42Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-w7g3g\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-w7g3g/services/redis-master\", \"uid\":\"baf602e0-b767-11e6-8af4-42010af00014\", \"resourceVersion\":\"48256\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"targetPort\":\"redis-server\", \"protocol\":\"TCP\", \"port\":6379}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.245.83\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8212f8dc0 exit status 1 <nil> true [0xc8200363e0 0xc820036400 0xc820036418] [0xc8200363e0 0xc820036400 0xc820036418] [0xc8200363f8 0xc820036410] [0xafa720 0xafa720] 0xc821481c20}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-01T01:44:42Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-w7g3g\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-w7g3g/services/redis-master\", \"uid\":\"baf602e0-b767-11e6-8af4-42010af00014\", \"resourceVersion\":\"48256\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"targetPort\":\"redis-server\", \"protocol\":\"TCP\", \"port\":6379}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.245.83\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-w7g3g -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-01T01:44:42Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-w7g3g", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-w7g3g/services/redis-master", "uid":"baf602e0-b767-11e6-8af4-42010af00014", "resourceVersion":"48256"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"targetPort":"redis-server", "protocol":"TCP", "port":6379}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.245.83", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8212f8dc0 exit status 1 <nil> true [0xc8200363e0 0xc820036400 0xc820036418] [0xc8200363e0 0xc820036400 0xc820036418] [0xc8200363f8 0xc820036410] [0xafa720 0xafa720] 0xc821481c20}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-01T01:44:42Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-w7g3g", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-w7g3g/services/redis-master", "uid":"baf602e0-b767-11e6-8af4-42010af00014", "resourceVersion":"48256"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"targetPort":"redis-server", "protocol":"TCP", "port":6379}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.245.83", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741
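
The apply test fails for a different reason than the delete failures: the Service dumped above has "type":"ClusterIP", and ClusterIP services carry no nodePort field, so the {.spec.ports[0].nodePort} template has nothing to resolve; either the applied manifest lost its NodePort type or the query raced the type change. A small reproduction against a trimmed copy of the dumped object, assuming the k8s.io/client-go/util/jsonpath package (the engine behind kubectl -o jsonpath in current trees; in the 1.4/1.5 tree it lived under k8s.io/kubernetes/pkg/util/jsonpath):

    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/util/jsonpath"
    )

    func main() {
        // Trimmed copy of the dumped Service: a ClusterIP service whose
        // ports carry no nodePort field at all.
        svc := map[string]interface{}{
            "spec": map[string]interface{}{
                "type": "ClusterIP",
                "ports": []interface{}{
                    map[string]interface{}{"protocol": "TCP", "port": 6379},
                },
            },
        }
        jp := jsonpath.New("nodePort")
        if err := jp.Parse("{.spec.ports[0].nodePort}"); err != nil {
            panic(err)
        }
        // Fails with "nodePort is not found", matching the stderr above.
        if err := jp.Execute(os.Stdout, svc); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }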

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc820180a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947
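
This deployment failure and the Job/V1Job failures below all bottom out in the same string, "timed out waiting for the condition": that is the fixed message of the poll-timeout error in Kubernetes' wait utility, meaning the framework polled the object's state until the deadline and gave up, and the dump itself says nothing about why the condition never held. A minimal sketch of the pattern, assuming the current k8s.io/apimachinery/pkg/util/wait import path (in the 1.4/1.5 tree the package lived under k8s.io/kubernetes/pkg/util/wait):

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // Poll a condition that never becomes true; at the deadline Poll
        // returns wait.ErrWaitTimeout, whose message is exactly the string
        // seen in these failures.
        err := wait.Poll(10*time.Millisecond, 50*time.Millisecond,
            func() (bool, error) { return false, nil })
        fmt.Println(err) // timed out waiting for the condition
    }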

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-q6w0n] []  0xc820e1a0c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820e1a820 exit status 1 <nil> true [0xc8200c21e0 0xc8200c2930 0xc8200c2960] [0xc8200c21e0 0xc8200c2930 0xc8200c2960] [0xc8200c2318 0xc8200c2498 0xc8200c2950] [0xafa5c0 0xafa720 0xafa720] 0xc8211ec2a0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-q6w0n] []  0xc820e1a0c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820e1a820 exit status 1 <nil> true [0xc8200c21e0 0xc8200c2930 0xc8200c2960] [0xc8200c21e0 0xc8200c2930 0xc8200c2960] [0xc8200c2318 0xc8200c2498 0xc8200c2950] [0xafa5c0 0xafa720 0xafa720] 0xc8211ec2a0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:756
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-9zvm5] []  0xc820f9ac80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820f9b460 exit status 1 <nil> true [0xc8213f4330 0xc8213f4358 0xc8213f4368] [0xc8213f4330 0xc8213f4358 0xc8213f4368] [0xc8213f4338 0xc8213f4350 0xc8213f4360] [0xafa5c0 0xafa720 0xafa720] 0xc821947260}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-9zvm5] []  0xc820f9ac80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820f9b460 exit status 1 <nil> true [0xc8213f4330 0xc8213f4358 0xc8213f4368] [0xc8213f4330 0xc8213f4358 0xc8213f4368] [0xc8213f4338 0xc8213f4350 0xc8213f4360] [0xafa5c0 0xafa720 0xafa720] 0xc821947260}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-wfgzq] []  0xc820f59c60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82157a4e0 exit status 1 <nil> true [0xc820482478 0xc8204824f8 0xc820482580] [0xc820482478 0xc8204824f8 0xc820482580] [0xc820482480 0xc8204824e8 0xc820482528] [0xafa5c0 0xafa720 0xafa720] 0xc8212f49c0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-wfgzq] []  0xc820f59c60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82157a4e0 exit status 1 <nil> true [0xc820482478 0xc8204824f8 0xc820482580] [0xc820482478 0xc8204824f8 0xc820482580] [0xc820482480 0xc8204824e8 0xc820482528] [0xafa5c0 0xafa720 0xafa720] 0xc8212f49c0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-pp31f] []  0xc821cda0a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821cda880 exit status 1 <nil> true [0xc820036a28 0xc820036a50 0xc820036a60] [0xc820036a28 0xc820036a50 0xc820036a60] [0xc820036a30 0xc820036a48 0xc820036a58] [0xafa5c0 0xafa720 0xafa720] 0xc820ff58c0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-pp31f] []  0xc821cda0a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821cda880 exit status 1 <nil> true [0xc820036a28 0xc820036a50 0xc820036a60] [0xc820036a28 0xc820036a50 0xc820036a60] [0xc820036a30 0xc820036a48 0xc820036a58] [0xafa5c0 0xafa720 0xafa720] 0xc820ff58c0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:219
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-qfmzl] []  0xc821b41440  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821b41e00 exit status 1 <nil> true [0xc8213f4590 0xc8213f45d0 0xc8213f4620] [0xc8213f4590 0xc8213f45d0 0xc8213f4620] [0xc8213f45a0 0xc8213f45c8 0xc8213f45f8] [0xafa5c0 0xafa720 0xafa720] 0xc8210ee3c0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-qfmzl] []  0xc821b41440  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821b41e00 exit status 1 <nil> true [0xc8213f4590 0xc8213f45d0 0xc8213f4620] [0xc8213f4590 0xc8213f45d0 0xc8213f4620] [0xc8213f45a0 0xc8213f45c8 0xc8213f45f8] [0xafa5c0 0xafa720 0xafa720] 0xc8210ee3c0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-4rs62] []  0xc820e0a140  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820e0a9c0 exit status 1 <nil> true [0xc820b4eb08 0xc820b4eb30 0xc820b4eb40] [0xc820b4eb08 0xc820b4eb30 0xc820b4eb40] [0xc820b4eb10 0xc820b4eb28 0xc820b4eb38] [0xafa5c0 0xafa720 0xafa720] 0xc820bbe600}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-4rs62] []  0xc820e0a140  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820e0a9c0 exit status 1 <nil> true [0xc820b4eb08 0xc820b4eb30 0xc820b4eb40] [0xc820b4eb08 0xc820b4eb30 0xc820b4eb40] [0xc820b4eb10 0xc820b4eb28 0xc820b4eb38] [0xafa5c0 0xafa720 0xafa720] 0xc820bbe600}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-48wjq] []  0xc820886220  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820886b40 exit status 1 <nil> true [0xc8200c2b68 0xc8200c20f8 0xc8200c2108] [0xc8200c2b68 0xc8200c20f8 0xc8200c2108] [0xc8200c2b78 0xc8200c20e8 0xc8200c2100] [0xafa5c0 0xafa720 0xafa720] 0xc820c837a0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-48wjq] []  0xc820886220  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820886b40 exit status 1 <nil> true [0xc8200c2b68 0xc8200c20f8 0xc8200c2108] [0xc8200c2b68 0xc8200c20f8 0xc8200c2108] [0xc8200c2b78 0xc8200c20e8 0xc8200c2100] [0xafa5c0 0xafa720 0xafa720] 0xc820c837a0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-8k9m3] []  0xc82226f3a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82226fac0 exit status 1 <nil> true [0xc821084088 0xc8210840b0 0xc8210840c8] [0xc821084088 0xc8210840b0 0xc8210840c8] [0xc821084090 0xc8210840a8 0xc8210840b8] [0xafa5c0 0xafa720 0xafa720] 0xc820bc3b00}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-8k9m3] []  0xc82226f3a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82226fac0 exit status 1 <nil> true [0xc821084088 0xc8210840b0 0xc8210840c8] [0xc821084088 0xc8210840b0 0xc8210840c8] [0xc821084090 0xc8210840a8 0xc8210840b8] [0xafa5c0 0xafa720 0xafa720] 0xc820bc3b00}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820180a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:792
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-541gw] []  0xc820a79e60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820bda620 exit status 1 <nil> true [0xc82017adb0 0xc82017add8 0xc82017ade8] [0xc82017adb0 0xc82017add8 0xc82017ade8] [0xc82017adb8 0xc82017add0 0xc82017ade0] [0xafa5c0 0xafa720 0xafa720] 0xc820c27380}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-541gw] []  0xc820a79e60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820bda620 exit status 1 <nil> true [0xc82017adb0 0xc82017add8 0xc82017ade8] [0xc82017adb0 0xc82017add8 0xc82017ade8] [0xc82017adb8 0xc82017add0 0xc82017ade0] [0xafa5c0 0xafa720 0xafa720] 0xc820c27380}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820180a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-9sjcd] []  0xc820fa1200  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820fa1860 exit status 1 <nil> true [0xc8200c2478 0xc8200c24c0 0xc8200c2500] [0xc8200c2478 0xc8200c24c0 0xc8200c2500] [0xc8200c2480 0xc8200c24b8 0xc8200c24d0] [0xafa5c0 0xafa720 0xafa720] 0xc820806de0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-9sjcd] []  0xc820fa1200  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820fa1860 exit status 1 <nil> true [0xc8200c2478 0xc8200c24c0 0xc8200c2500] [0xc8200c2478 0xc8200c24c0 0xc8200c2500] [0xc8200c2480 0xc8200c24b8 0xc8200c24d0] [0xafa5c0 0xafa720 0xafa720] 0xc820806de0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:452
Expected error:
    <*errors.errorString | 0xc820984e60>: {
        s: "failed to wait for pods responding: pod with UID 12e35199-b742-11e6-a7de-42010af00014 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-d372b/pods 19145} [{{ } {my-hostname-delete-node-d2qqw my-hostname-delete-node- e2e-tests-resize-nodes-d372b /api/v1/namespaces/e2e-tests-resize-nodes-d372b/pods/my-hostname-delete-node-d2qqw 487bfabc-b742-11e6-a7de-42010af00014 18990 0 2016-11-30 13:16:39 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-d372b\",\"name\":\"my-hostname-delete-node\",\"uid\":\"12e14676-b742-11e6-a7de-42010af00014\",\"apiVersion\":\"v1\",\"resourceVersion\":\"18908\"}}\n] [{v1 ReplicationController my-hostname-delete-node 12e14676-b742-11e6-a7de-42010af00014 0xc821a156b7}] [] } {[{default-token-p422z {<nil> <nil> <nil> <nil> <nil> 0xc8217ef7d0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-p422z true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821a157b0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-541db8eb-exb3 0xc821e30cc0 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 13:16:39 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 13:16:40 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 13:16:39 -0800 PST  }]   10.240.0.2 10.124.2.48 2016-11-30 13:16:39 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc820e243a0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://2091d07bb6813413ed803c569479363f3d0dea73c0cf4c4c5299970afbe98467}]}} {{ } {my-hostname-delete-node-pstc8 my-hostname-delete-node- e2e-tests-resize-nodes-d372b /api/v1/namespaces/e2e-tests-resize-nodes-d372b/pods/my-hostname-delete-node-pstc8 12e3309e-b742-11e6-a7de-42010af00014 18828 0 2016-11-30 13:15:09 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-d372b\",\"name\":\"my-hostname-delete-node\",\"uid\":\"12e14676-b742-11e6-a7de-42010af00014\",\"apiVersion\":\"v1\",\"resourceVersion\":\"18815\"}}\n] [{v1 ReplicationController my-hostname-delete-node 12e14676-b742-11e6-a7de-42010af00014 0xc821a15a47}] [] } {[{default-token-p422z {<nil> <nil> <nil> <nil> <nil> 0xc8217ef830 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-p422z true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821a15b40 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-541db8eb-9cyf 0xc821e30d80 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 13:15:09 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 
2016-11-30 13:15:10 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 13:15:09 -0800 PST  }]   10.240.0.4 10.124.0.171 2016-11-30 13:15:09 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc820e243c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://e0cbbade5748116eeb96e522ae45a241275d146d33cbc44766d4d4fa2cc7cf5c}]}} {{ } {my-hostname-delete-node-tdrd3 my-hostname-delete-node- e2e-tests-resize-nodes-d372b /api/v1/namespaces/e2e-tests-resize-nodes-d372b/pods/my-hostname-delete-node-tdrd3 12e309e0-b742-11e6-a7de-42010af00014 18832 0 2016-11-30 13:15:09 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-d372b\",\"name\":\"my-hostname-delete-node\",\"uid\":\"12e14676-b742-11e6-a7de-42010af00014\",\"apiVersion\":\"v1\",\"resourceVersion\":\"18815\"}}\n] [{v1 ReplicationController my-hostname-delete-node 12e14676-b742-11e6-a7de-42010af00014 0xc821a15dd7}] [] } {[{default-token-p422z {<nil> <nil> <nil> <nil> <nil> 0xc8217ef890 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-p422z true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821a15ed0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-541db8eb-exb3 0xc821e30e40 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 13:15:09 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 13:15:10 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 13:15:09 -0800 PST  }]   10.240.0.2 10.124.2.47 2016-11-30 13:15:09 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc820e243e0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://c3540c8ca4c46f4a449ad1f63b89362bc854e4d9b88d547c5e00273f88b0fd0b}]}}]}",
    }
    failed to wait for pods responding: pod with UID 12e35199-b742-11e6-a7de-42010af00014 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-d372b/pods 19145} [{{ } {my-hostname-delete-node-d2qqw my-hostname-delete-node- e2e-tests-resize-nodes-d372b /api/v1/namespaces/e2e-tests-resize-nodes-d372b/pods/my-hostname-delete-node-d2qqw 487bfabc-b742-11e6-a7de-42010af00014 18990 0 2016-11-30 13:16:39 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-d372b","name":"my-hostname-delete-node","uid":"12e14676-b742-11e6-a7de-42010af00014","apiVersion":"v1","resourceVersion":"18908"}}
    ] [{v1 ReplicationController my-hostname-delete-node 12e14676-b742-11e6-a7de-42010af00014 0xc821a156b7}] [] } {[{default-token-p422z {<nil> <nil> <nil> <nil> <nil> 0xc8217ef7d0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-p422z true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821a157b0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-541db8eb-exb3 0xc821e30cc0 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 13:16:39 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 13:16:40 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 13:16:39 -0800 PST  }]   10.240.0.2 10.124.2.48 2016-11-30 13:16:39 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc820e243a0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://2091d07bb6813413ed803c569479363f3d0dea73c0cf4c4c5299970afbe98467}]}} {{ } {my-hostname-delete-node-pstc8 my-hostname-delete-node- e2e-tests-resize-nodes-d372b /api/v1/namespaces/e2e-tests-resize-nodes-d372b/pods/my-hostname-delete-node-pstc8 12e3309e-b742-11e6-a7de-42010af00014 18828 0 2016-11-30 13:15:09 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-d372b","name":"my-hostname-delete-node","uid":"12e14676-b742-11e6-a7de-42010af00014","apiVersion":"v1","resourceVersion":"18815"}}
    ] [{v1 ReplicationController my-hostname-delete-node 12e14676-b742-11e6-a7de-42010af00014 0xc821a15a47}] [] } {[{default-token-p422z {<nil> <nil> <nil> <nil> <nil> 0xc8217ef830 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-p422z true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821a15b40 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-541db8eb-9cyf 0xc821e30d80 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 13:15:09 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 13:15:10 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 13:15:09 -0800 PST  }]   10.240.0.4 10.124.0.171 2016-11-30 13:15:09 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc820e243c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://e0cbbade5748116eeb96e522ae45a241275d146d33cbc44766d4d4fa2cc7cf5c}]}} {{ } {my-hostname-delete-node-tdrd3 my-hostname-delete-node- e2e-tests-resize-nodes-d372b /api/v1/namespaces/e2e-tests-resize-nodes-d372b/pods/my-hostname-delete-node-tdrd3 12e309e0-b742-11e6-a7de-42010af00014 18832 0 2016-11-30 13:15:09 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-d372b","name":"my-hostname-delete-node","uid":"12e14676-b742-11e6-a7de-42010af00014","apiVersion":"v1","resourceVersion":"18815"}}
    ] [{v1 ReplicationController my-hostname-delete-node 12e14676-b742-11e6-a7de-42010af00014 0xc821a15dd7}] [] } {[{default-token-p422z {<nil> <nil> <nil> <nil> <nil> 0xc8217ef890 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-p422z true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821a15ed0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-541db8eb-exb3 0xc821e30e40 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 13:15:09 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 13:15:10 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 13:15:09 -0800 PST  }]   10.240.0.2 10.124.2.47 2016-11-30 13:15:09 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc820e243e0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://c3540c8ca4c46f4a449ad1f63b89362bc854e4d9b88d547c5e00273f88b0fd0b}]}}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:451

Issues about this test specifically: #27233 #36204

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.4-gci-1.5-upgrade-master/144/

Multiple broken tests:

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:428
Expected error:
    <*net.OpError | 0xc82018bcc0>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\x9a\xc62",
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 104.154.198.50:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:419

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
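
The OpError dump is easier to read once decoded: the Addr.IP bytes are the 16-byte IPv4-mapped form of the master address, and Err: 0x6f is errno 111, ECONNREFUSED on Linux; in other words the apiserver was simply unreachable mid-restart, consistent with the rendered last line of the error. A short decoding sketch:

    package main

    import (
        "fmt"
        "net"
        "syscall"
    )

    func main() {
        // Addr.IP from the dump: 16 bytes, i.e. ::ffff:104.154.198.50.
        ip := net.IP("\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\x9a\xc62")
        fmt.Println(ip) // 104.154.198.50

        // Err: 0x6f == 111 == ECONNREFUSED (on Linux).
        fmt.Println(syscall.Errno(0x6f)) // connection refused
    }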

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82153d5d0>: {
        s: "Namespace e2e-tests-services-3zl1n is active",
    }
    Namespace e2e-tests-services-3zl1n is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #33883
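
This SchedulerPredicates failure and the two others below it (InterPodAntiAffinity, MaxPods) look like collateral damage rather than scheduler bugs: the [Serial] predicates tests first wait for leftover e2e test namespaces to finish terminating, and e2e-tests-services-3zl1n, left behind by the disruptive Services test above, never went away within the timeout. A sketch of that kind of precondition check, assuming a recent client-go Clientset; staleE2ENamespaces is a hypothetical helper, not the framework's actual code:

    package main

    import (
        "context"
        "fmt"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // staleE2ENamespaces lists e2e-tests-* namespaces still present; a
    // [Serial] test treats any hit as a reason not to start yet.
    func staleE2ENamespaces(c kubernetes.Interface) ([]string, error) {
        nsList, err := c.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return nil, err
        }
        var stale []string
        for _, ns := range nsList.Items {
            if strings.HasPrefix(ns.Name, "e2e-tests-") {
                stale = append(stale, ns.Name)
            }
        }
        return stale, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        c, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        stale, err := staleE2ENamespaces(c)
        if err != nil {
            panic(err)
        }
        fmt.Println("stale namespaces:", stale) // e.g. [e2e-tests-services-3zl1n]
    }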

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820d060f0>: {
        s: "Namespace e2e-tests-services-3zl1n is active",
    }
    Namespace e2e-tests-services-3zl1n is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820018c00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc820018c00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-zj4sw -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-zj4sw\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-zj4sw/services/redis-master\", \"uid\":\"9f2d2150-b779-11e6-a22b-42010af00039\", \"resourceVersion\":\"14942\", \"creationTimestamp\":\"2016-12-01T03:52:47Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.243.239\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8212d1b00 exit status 1 <nil> true [0xc820038350 0xc820038370 0xc8200383b0] [0xc820038350 0xc820038370 0xc8200383b0] [0xc820038368 0xc820038388] [0xafa720 0xafa720] 0xc821ae5f80}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-zj4sw\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-zj4sw/services/redis-master\", \"uid\":\"9f2d2150-b779-11e6-a22b-42010af00039\", \"resourceVersion\":\"14942\", \"creationTimestamp\":\"2016-12-01T03:52:47Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.243.239\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.198.50 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-zj4sw -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-zj4sw", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-zj4sw/services/redis-master", "uid":"9f2d2150-b779-11e6-a22b-42010af00039", "resourceVersion":"14942", "creationTimestamp":"2016-12-01T03:52:47Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}}, "spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.243.239", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8212d1b00 exit status 1 <nil> true [0xc820038350 0xc820038370 0xc8200383b0] [0xc820038350 0xc820038370 0xc8200383b0] [0xc820038368 0xc820038388] [0xafa720 0xafa720] 0xc821ae5f80}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-zj4sw", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-zj4sw/services/redis-master", "uid":"9f2d2150-b779-11e6-a22b-42010af00039", "resourceVersion":"14942", "creationTimestamp":"2016-12-01T03:52:47Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}}, "spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.243.239", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741
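
In every one of these nodePort failures the object handed to the jsonpath engine is a `type: ClusterIP` service, and a ClusterIP service simply has no `spec.ports[].nodePort` field, so the template cannot resolve; presumably the test read the service before it had been switched to `type: NodePort`. The error text is produced by the Kubernetes jsonpath package itself. A minimal sketch that reproduces it, assuming the modern `k8s.io/client-go/util/jsonpath` import path (1.4/1.5-era trees carried it under `k8s.io/kubernetes/pkg/util/jsonpath`) and an illustrative pared-down service map:

```go
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/util/jsonpath"
)

func main() {
	// Pared-down ClusterIP service mirroring the dump above: the port entry
	// carries protocol/port/targetPort but, being ClusterIP, no nodePort.
	svc := map[string]interface{}{
		"spec": map[string]interface{}{
			"type": "ClusterIP",
			"ports": []interface{}{
				map[string]interface{}{"protocol": "TCP", "port": 6379, "targetPort": "redis-server"},
			},
		},
	}

	jp := jsonpath.New("nodeport")
	if err := jp.Parse("{.spec.ports[0].nodePort}"); err != nil {
		panic(err)
	}
	// Execute fails with "nodePort is not found" -- the same message kubectl
	// relays in the stderr sections of the runs above.
	if err := jp.Execute(os.Stdout, svc); err != nil {
		fmt.Fprintf(os.Stderr, "error: error executing jsonpath %q: %v\n", "{.spec.ports[0].nodePort}", err)
		os.Exit(1)
	}
}
```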

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821221020>: {
        s: "Namespace e2e-tests-services-3zl1n is active",
    }
    Namespace e2e-tests-services-3zl1n is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820018c00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177
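
"timed out waiting for the condition" is the generic error the polling helpers in the `wait` utilities return once their deadline passes without the condition ever reporting true; the job tests poll for the job to reach its expected state and give up, and the leftover-namespace failure above (`Namespace e2e-tests-services-3zl1n is active`) stalls in the same kind of setup wait. A hedged sketch of the pattern, using modern client-go/apimachinery paths and an illustrative helper name — this is not the e2e framework's own code:

```go
package e2esketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNamespaceGone polls until the named namespace disappears. If the
// timeout elapses first, wait.Poll returns the generic
// "timed out waiting for the condition" error seen in the runs above.
func waitForNamespaceGone(c kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.Poll(2*time.Second, timeout, func() (bool, error) {
		_, err := c.CoreV1().Namespaces().Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // namespace is gone: condition met
		}
		if err != nil {
			return false, err // unexpected API error aborts the wait early
		}
		return false, nil // still present: keep polling
	})
}
```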

@k8s-github-robot (Author) commented:

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.4-gci-1.5-upgrade-master/145/

Multiple broken tests:

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531
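
The Rescheduler failure, by contrast, is a plain Gomega equality assertion: rescheduler.go:59 counts something (presumably scheduled critical-pod replicas) and expects exactly 1 but finds 0. The `Expected <int>: 0 to equal <int>: 1` shape is Gomega's standard failure rendering; a minimal sketch of the assertion style the e2e suite uses, with an illustrative function and parameter name:

```go
package e2esketch

import (
	. "github.com/onsi/gomega"
)

// expectCriticalPodScheduled asserts that exactly one replica is running.
// With replicas == 0, Gomega fails with precisely the output quoted above:
//
//	Expected
//	    <int>: 0
//	to equal
//	    <int>: 1
func expectCriticalPodScheduled(replicas int) {
	Expect(replicas).To(Equal(1))
}
```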

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8201c2760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc8201c2760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947
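
The overlapping-deployment failure happens one step earlier than the fight itself: deployment.go:1244 keeps trying to update the first deployment's overlapping annotation and the write never sticks within the poll window. A read-modify-write with conflict retries is the usual shape of such an update; a hedged sketch using client-go's retry helper (modern API paths, illustrative helper name — not the test's own code):

```go
package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// setDeploymentAnnotation re-reads the deployment on each attempt and retries
// the update on write conflicts; if the object can never be fetched or updated
// in time, the caller's surrounding poll times out as in the run above.
func setDeploymentAnnotation(c kubernetes.Interface, ns, name, key, value string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		d, err := c.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if d.Annotations == nil {
			d.Annotations = map[string]string{}
		}
		d.Annotations[key] = value
		_, err = c.AppsV1().Deployments(ns).Update(context.TODO(), d, metav1.UpdateOptions{})
		return err
	})
}
```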

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8201c2760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.155.155.43 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-3xw3g -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"resourceVersion\":\"1162\", \"creationTimestamp\":\"2016-12-01T08:53:48Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-3xw3g\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-3xw3g/services/redis-master\", \"uid\":\"ac5fd0cd-b7a3-11e6-bc7e-42010af00013\"}, \"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.250.107\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8209d31c0 exit status 1 <nil> true [0xc820094898 0xc8200948b8 0xc8200948d0] [0xc820094898 0xc8200948b8 0xc8200948d0] [0xc8200948b0 0xc8200948c8] [0xafa720 0xafa720] 0xc820bd19e0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"resourceVersion\":\"1162\", \"creationTimestamp\":\"2016-12-01T08:53:48Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-3xw3g\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-3xw3g/services/redis-master\", \"uid\":\"ac5fd0cd-b7a3-11e6-bc7e-42010af00013\"}, \"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.250.107\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.155.155.43 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-3xw3g -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"resourceVersion":"1162", "creationTimestamp":"2016-12-01T08:53:48Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-3xw3g", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-3xw3g/services/redis-master", "uid":"ac5fd0cd-b7a3-11e6-bc7e-42010af00013"}, "spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.250.107", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8209d31c0 exit status 1 <nil> true [0xc820094898 0xc8200948b8 0xc8200948d0] [0xc820094898 0xc8200948b8 0xc8200948d0] [0xc8200948b0 0xc8200948c8] [0xafa720 0xafa720] 0xc820bd19e0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"resourceVersion":"1162", "creationTimestamp":"2016-12-01T08:53:48Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-3xw3g", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-3xw3g/services/redis-master", "uid":"ac5fd0cd-b7a3-11e6-bc7e-42010af00013"}, "spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.250.107", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741 #37820

@k8s-github-robot (Author) commented:

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.4-gci-1.5-upgrade-master/146/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.136.165 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-xxf08 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.255.171\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"uid\":\"f7aee3c7-b7e0-11e6-b537-42010af0001e\", \"resourceVersion\":\"5383\", \"creationTimestamp\":\"2016-12-01T16:12:33Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-xxf08\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-xxf08/services/redis-master\"}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8208b6be0 exit status 1 <nil> true [0xc8202787d8 0xc8202787f0 0xc820278810] [0xc8202787d8 0xc8202787f0 0xc820278810] [0xc8202787e8 0xc820278808] [0xafa720 0xafa720] 0xc821245680}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.255.171\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"uid\":\"f7aee3c7-b7e0-11e6-b537-42010af0001e\", \"resourceVersion\":\"5383\", \"creationTimestamp\":\"2016-12-01T16:12:33Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-xxf08\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-xxf08/services/redis-master\"}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.136.165 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-xxf08 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.255.171", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"uid":"f7aee3c7-b7e0-11e6-b537-42010af0001e", "resourceVersion":"5383", "creationTimestamp":"2016-12-01T16:12:33Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-xxf08", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-xxf08/services/redis-master"}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8208b6be0 exit status 1 <nil> true [0xc8202787d8 0xc8202787f0 0xc820278810] [0xc8202787d8 0xc8202787f0 0xc820278810] [0xc8202787e8 0xc820278808] [0xafa720 0xafa720] 0xc821245680}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.255.171", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"uid":"f7aee3c7-b7e0-11e6-b537-42010af0001e", "resourceVersion":"5383", "creationTimestamp":"2016-12-01T16:12:33Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-xxf08", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-xxf08/services/redis-master"}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8201b4a50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8201b4a50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc8201b4a50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947
