
ci-kubernetes-e2e-kops-aws: broken test run #42334

Closed
k8s-github-robot opened this issue Mar 1, 2017 · 12 comments
Assignees
Labels
area/test-infra kind/flake Categorizes issue or PR as related to a flaky test.
Milestone

Comments

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws/4875/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1120
Mar  1 07:13:07.713: Pods for rc e2e-test-nginx-rc were not ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1113

Issues about this test specifically: #28507 #29315 #35595

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc4203fff40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #35283 #36867
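Most of the failures in this run share the same generic error string, "timed out waiting for the condition". That message comes from the condition-polling helpers the e2e framework uses (in the Go suite, the `wait` utilities), so it tells you a readiness check expired, not which check. The sketch below is an illustrative Python re-creation of that polling pattern, not the actual framework code:

```python
import time


def wait_for_condition(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns True or `timeout` elapses.

    Illustrative stand-in for the Go wait helpers used by the e2e
    framework, which surface the generic message
    "timed out waiting for the condition" on expiry.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError("timed out waiting for the condition")
```

With a condition that never becomes true and a short timeout, the raised message matches the one repeated throughout the logs above, which is why so many distinct tests report an identical error.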

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc420452fe0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32830

Failed: install_gcloud {PRE-SETUP}


      
      

Your current Cloud SDK version is: 145.0.0
Installing components from version: 145.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
..............failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'
      
      

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227
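The install_gcloud failure above is not a Kubernetes flake: the traceback shows a version skew between the SDK's bootstrapping script and the core it activates. `install.py` passes a `completion_update` keyword that the installed `UpdateRC()` does not accept, so Python raises a `TypeError` before installation completes. A minimal reproduction of that failure mode, with purely illustrative parameter names rather than the real gcloud signature:

```python
def update_rc(bash_completion=None, path_update=None, rc_path=None):
    """Stand-in for an older UpdateRC() signature that predates the
    `completion_update` keyword. Names here are hypothetical."""
    return {"bash_completion": bash_completion,
            "path_update": path_update,
            "rc_path": rc_path}


def call_with_skew():
    """Simulate a newer caller passing a keyword the older function
    does not define; returns the TypeError message it triggers."""
    try:
        update_rc(completion_update=True)
    except TypeError as exc:
        return str(exc)
    return None
```

The resulting message has the same shape as the one in the log ("got an unexpected keyword argument 'completion_update'"), which points at mismatched SDK component versions rather than anything in the test suite itself.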

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:316
Mar  1 07:14:00.519: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2016

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Mar  1 07:17:09.755: Couldn't delete ns: "e2e-tests-disruption-ktjr5": namespace e2e-tests-disruption-ktjr5 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-ktjr5 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:276

Issues about this test specifically: #32639

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1009
Expected error:
    <*errors.errorString | 0xc420441370>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:3837

Issues about this test specifically: #37274

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:108
Expected error:
    <*errors.errorString | 0xc4203a0320>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:95

Issues about this test specifically: #28003

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc4203c6310>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc420401020>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32375

Failed: [k8s.io] Garbage collector should orphan pods created by rc if delete options say so {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/garbage_collector.go:312
Mar  1 07:08:10.679: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/garbage_collector.go:296

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:187
Waiting for pods in namespace "e2e-tests-disruption-4ddgq" to be ready
Expected error:
    <*errors.errorString | 0xc4203d2d70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:257

Issues about this test specifically: #32753 #34676

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:802
Mar  1 07:14:48.838: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:299

Issues about this test specifically: #28774 #31429

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Mar  1 07:19:12.281: Couldn't delete ns: "e2e-tests-disruption-smszt": namespace e2e-tests-disruption-smszt was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-smszt was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:276

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:568
Expected error:
    <*errors.errorString | 0xc4203ea810>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Issues about this test specifically: #30263

Previous issues for this suite: #37891

@k8s-github-robot k8s-github-robot added kind/flake Categorizes issue or PR as related to a flaky test. priority/P2 labels Mar 1, 2017
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws/4972/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:316
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg190557002 get pods update-demo-nautilus-06mnm -o template --template={{if (exists . \"status\" \"containerStatuses\")}}{{range .status.containerStatuses}}{{if (and (eq .name \"update-demo\") (exists . \"state\" \"running\"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w9h58] []  <nil>  Unable to connect to the server: dial tcp 54.174.28.81:443: i/o timeout\n [] <nil> 0xc4215405d0 exit status 1 <nil> <nil> true [0xc42071e088 0xc42071e1a8 0xc42071e1f0] [0xc42071e088 0xc42071e1a8 0xc42071e1f0] [0xc42071e158 0xc42071e1d8] [0xca1b20 0xca1b20] 0xc4208ca180 <nil>}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: dial tcp 54.174.28.81:443: i/o timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg190557002 get pods update-demo-nautilus-06mnm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w9h58] []  <nil>  Unable to connect to the server: dial tcp 54.174.28.81:443: i/o timeout
     [] <nil> 0xc4215405d0 exit status 1 <nil> <nil> true [0xc42071e088 0xc42071e1a8 0xc42071e1f0] [0xc42071e088 0xc42071e1a8 0xc42071e1f0] [0xc42071e158 0xc42071e1d8] [0xca1b20 0xca1b20] 0xc4208ca180 <nil>}:
    Command stdout:
    
    stderr:
    Unable to connect to the server: dial tcp 54.174.28.81:443: i/o timeout
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2098

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:568
Mar  3 09:33:03.635: Failed to open websocket to wss://api.e2e-kops-aws.test-aws.k8s.io:443/api/v1/namespaces/e2e-tests-pods-kcg79/pods/pod-logs-websocket-276f411d-0037-11e7-9b2c-0242ac11000a/log?container=main: websocket.Dial wss://api.e2e-kops-aws.test-aws.k8s.io:443/api/v1/namespaces/e2e-tests-pods-kcg79/pods/pod-logs-websocket-276f411d-0037-11e7-9b2c-0242ac11000a/log?container=main: dial tcp 54.174.28.81:443: getsockopt: connection timed out
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:548

Issues about this test specifically: #30263

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:386
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg190557002 delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-dzqzz] []  0xc4213349c0  warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\nerror: error when stopping \"STDIN\": Get https://api.e2e-kops-aws.test-aws.k8s.io/api/v1/namespaces/e2e-tests-kubectl-dzqzz/pods/nginx: dial tcp 54.174.28.81:443: i/o timeout\n [] <nil> 0xc421233320 exit status 1 <nil> <nil> true [0xc4213a4058 0xc4213a4080 0xc4213a4090] [0xc4213a4058 0xc4213a4080 0xc4213a4090] [0xc4213a4060 0xc4213a4078 0xc4213a4088] [0xca1a20 0xca1b20 0xca1b20] 0xc42121eba0 <nil>}:\nCommand stdout:\n\nstderr:\nwarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\nerror: error when stopping \"STDIN\": Get https://api.e2e-kops-aws.test-aws.k8s.io/api/v1/namespaces/e2e-tests-kubectl-dzqzz/pods/nginx: dial tcp 54.174.28.81:443: i/o timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg190557002 delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-dzqzz] []  0xc4213349c0  warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
    error: error when stopping "STDIN": Get https://api.e2e-kops-aws.test-aws.k8s.io/api/v1/namespaces/e2e-tests-kubectl-dzqzz/pods/nginx: dial tcp 54.174.28.81:443: i/o timeout
     [] <nil> 0xc421233320 exit status 1 <nil> <nil> true [0xc4213a4058 0xc4213a4080 0xc4213a4090] [0xc4213a4058 0xc4213a4080 0xc4213a4090] [0xc4213a4060 0xc4213a4078 0xc4213a4088] [0xca1a20 0xca1b20 0xca1b20] 0xc42121eba0 <nil>}:
    Command stdout:
    
    stderr:
    warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
    error: error when stopping "STDIN": Get https://api.e2e-kops-aws.test-aws.k8s.io/api/v1/namespaces/e2e-tests-kubectl-dzqzz/pods/nginx: dial tcp 54.174.28.81:443: i/o timeout
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2098

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:340
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg190557002 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-h7779] []  <nil>  Unable to connect to the server: dial tcp 54.174.28.81:443: i/o timeout\n [] <nil> 0xc420bb6540 exit status 1 <nil> <nil> true [0xc42137bf48 0xc42137bf60 0xc42137bf78] [0xc42137bf48 0xc42137bf60 0xc42137bf78] [0xc42137bf58 0xc42137bf70] [0xca1b20 0xca1b20] 0xc420bb4120 <nil>}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: dial tcp 54.174.28.81:443: i/o timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg190557002 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-h7779] []  <nil>  Unable to connect to the server: dial tcp 54.174.28.81:443: i/o timeout
     [] <nil> 0xc420bb6540 exit status 1 <nil> <nil> true [0xc42137bf48 0xc42137bf60 0xc42137bf78] [0xc42137bf48 0xc42137bf60 0xc42137bf78] [0xc42137bf58 0xc42137bf70] [0xca1b20 0xca1b20] 0xc420bb4120 <nil>}:
    Command stdout:
    
    stderr:
    Unable to connect to the server: dial tcp 54.174.28.81:443: i/o timeout
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2098

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:499
Mar  3 09:34:53.462: Failed to read from kubectl port-forward stdout: EOF
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:188

Failed: install_gcloud {PRE-SETUP}


      
      

Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
...........failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'
      
      

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet_etc_hosts.go:55
failed to execute command in pod test-pod, container busybox-2: error sending request: Post https://api.e2e-kops-aws.test-aws.k8s.io/api/v1/namespaces/e2e-tests-e2e-kubelet-etc-hosts-pjbfh/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true: dial tcp 54.174.28.81:443: getsockopt: connection timed out
Expected error:
    <*errors.errorString | 0xc4213d4070>: {
        s: "error sending request: Post https://api.e2e-kops-aws.test-aws.k8s.io/api/v1/namespaces/e2e-tests-e2e-kubelet-etc-hosts-pjbfh/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true: dial tcp 54.174.28.81:443: getsockopt: connection timed out",
    }
    error sending request: Post https://api.e2e-kops-aws.test-aws.k8s.io/api/v1/namespaces/e2e-tests-e2e-kubelet-etc-hosts-pjbfh/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true: dial tcp 54.174.28.81:443: getsockopt: connection timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/exec_util.go:107

Issues about this test specifically: #37502

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:334
Expected error:
    <*errors.errorString | 0xc420408240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:333

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:979
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg190557002 logs redis-master-0r6p9 redis-master --namespace=e2e-tests-kubectl-9lt92] []  <nil>  Unable to connect to the server: dial tcp 54.174.28.81:443: i/o timeout\n [] <nil> 0xc4210f2210 exit status 1 <nil> <nil> true [0xc4202c0818 0xc4202c0830 0xc4202c0848] [0xc4202c0818 0xc4202c0830 0xc4202c0848] [0xc4202c0828 0xc4202c0840] [0xca1b20 0xca1b20] 0xc421003140 <nil>}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: dial tcp 54.174.28.81:443: i/o timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg190557002 logs redis-master-0r6p9 redis-master --namespace=e2e-tests-kubectl-9lt92] []  <nil>  Unable to connect to the server: dial tcp 54.174.28.81:443: i/o timeout
     [] <nil> 0xc4210f2210 exit status 1 <nil> <nil> true [0xc4202c0818 0xc4202c0830 0xc4202c0848] [0xc4202c0818 0xc4202c0830 0xc4202c0848] [0xc4202c0828 0xc4202c0840] [0xca1b20 0xca1b20] 0xc421003140 <nil>}:
    Command stdout:
    
    stderr:
    Unable to connect to the server: dial tcp 54.174.28.81:443: i/o timeout
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2098

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

@calebamiles calebamiles modified the milestone: v1.6 Mar 3, 2017
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws/5026/
Multiple broken tests:

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc4203fe410>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:258
Mar  4 18:49:50.710: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:301

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc4201e0ea0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:798
Mar  4 18:44:25.882: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:299

Issues about this test specifically: #28774 #31429

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:318
Expected error:
    <*errors.errorString | 0xc420dda7f0>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:277

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:48
Expected error:
    <*errors.errorString | 0xc4203c8160>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:47

Issues about this test specifically: #31938

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:103
Expected error:
    <*errors.errorString | 0xc420bbea40>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:18, Replicas:6, UpdatedReplicas:6, ReadyReplicas:5, AvailableReplicas:5, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{sec:63624278961, nsec:221325626, loc:(*time.Location)(0x4db8a20)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63624278961, nsec:221325796, loc:(*time.Location)(0x4db8a20)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}, v1beta1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{sec:63624278966, nsec:758913694, loc:(*time.Location)(0x4db8a20)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63624278919, nsec:649109012, loc:(*time.Location)(0x4db8a20)}}, Reason:\"NewReplicaSetAvailable\", Message:\"ReplicaSet \\\"nginx-4178072188\\\" has successfully progressed.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:18, Replicas:6, UpdatedReplicas:6, ReadyReplicas:5, AvailableReplicas:5, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63624278961, nsec:221325626, loc:(*time.Location)(0x4db8a20)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63624278961, nsec:221325796, loc:(*time.Location)(0x4db8a20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1beta1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63624278966, nsec:758913694, loc:(*time.Location)(0x4db8a20)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63624278919, nsec:649109012, loc:(*time.Location)(0x4db8a20)}}, Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"nginx-4178072188\" has successfully progressed."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1375

Issues about this test specifically: #36265 #36353 #36628

Failed: install_gcloud {PRE-SETUP}


      
      

Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
...........failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227
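The `install_gcloud` traceback above ends in a `TypeError` because `install.py` passes `UpdateRC()` a keyword argument the installed copy of the function does not declare — a signature skew between the bootstrap script and the SDK it installs. A minimal sketch of that error class (the function name and parameters here are illustrative, not the SDK's actual code):

```python
# Stand-in for an older UpdateRC() signature that predates the
# 'completion_update' parameter. All names here are hypothetical.
def update_rc(bash_completion=True, path_update=True, rc_path=None):
    return {"bash_completion": bash_completion,
            "path_update": path_update,
            "rc_path": rc_path}

try:
    # A caller built against a newer signature passes an extra kwarg,
    # which Python rejects at call time:
    update_rc(completion_update=True, path_update=True)
except TypeError as e:
    # prints: update_rc() got an unexpected keyword argument 'completion_update'
    print(e)
```

This is why the failure is intermittent across runs: it only appears when the bootstrap script and the staged SDK components come from mismatched versions.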

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects a client request should support a client that connects, sends no data, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:478
Mar  4 18:51:54.396: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:270

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:243
wait for pod "pod-service-account-3756c3dc-0150-11e7-b1fd-0242ac110007-hkq4k" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4201e0ea0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #37526
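Most of the failures in this run share the same `timed out waiting for the condition` string: it comes from a generic poll-until-timeout helper in the e2e framework's wait utilities, so many unrelated tests surface the identical message when a pod or resource never reaches the expected state. A self-contained sketch of the pattern (the real framework is Go; this is an illustrative Python analogue, not its code):

```python
import time

class WaitTimeoutError(Exception):
    """Raised when the condition never becomes true before the deadline."""

def poll_until(interval, timeout, condition):
    """Re-check `condition` every `interval` seconds until it returns True
    or `timeout` seconds elapse. Generic sketch of the wait pattern."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return
        time.sleep(interval)
    raise WaitTimeoutError("timed out waiting for the condition")

# A condition that never becomes true, e.g. a pod that never reaches Running:
try:
    poll_until(0.01, 0.05, lambda: False)
except WaitTimeoutError as e:
    print(e)  # timed out waiting for the condition
```

Because the message carries no context of its own, the file/line reference under each failure (e.g. `pods.go:146` vs `networking_utils.go:550`) is what distinguishes which wait actually expired.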

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1345
Expected error:
    <*errors.errorString | 0xc4216ae140>: {
        s: "timed out waiting for command &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg963398028 --namespace=e2e-tests-kubectl-xh3km run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc4212eb880   [] <nil> 0xc42114fc80 <nil> <nil> <nil> true [0xc420464f80 0xc420464fc8 0xc420464fe0] [0xc420464f80 0xc420464fc8 0xc420464fe0] [0xc420464f88 0xc420464fc0 0xc420464fd0] [0xcace40 0xcacf40 0xcacf40] 0xc420c08240 <nil>}:\nCommand stdout:\n\nstderr:\n\n",
    }
    timed out waiting for command &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg963398028 --namespace=e2e-tests-kubectl-xh3km run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc4212eb880   [] <nil> 0xc42114fc80 <nil> <nil> <nil> true [0xc420464f80 0xc420464fc8 0xc420464fe0] [0xc420464f80 0xc420464fc8 0xc420464fe0] [0xc420464f88 0xc420464fc0 0xc420464fd0] [0xcace40 0xcacf40 0xcacf40] 0xc420c08240 <nil>}:
    Command stdout:
    
    stderr:
    
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2098

Issues about this test specifically: #26728 #28266 #30340 #32405

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:94
Expected error:
    <*errors.errorString | 0xc4209cb640>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:976

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:382
wait for pod "pod-configmaps-c6035cfa-014f-11e7-8e3e-0242ac110007" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420374910>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #27079

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Mar  4 18:43:31.074: Couldn't delete ns: "e2e-tests-disruption-hrzs1": namespace e2e-tests-disruption-hrzs1 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-hrzs1 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:276

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc4203c73f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32830

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:85
Expected error:
    <*errors.errorString | 0xc4203c5ab0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:84

Issues about this test specifically: #31498 #33896 #35507

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc42043ea90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] InitContainer should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Mar  4 18:42:15.794: Couldn't delete ns: "e2e-tests-init-container-0qdx9": namespace e2e-tests-init-container-0qdx9 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-init-container-0qdx9 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:276

Issues about this test specifically: #31873

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:340
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg963398028 rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-c0324] []  0xc4212b6520 Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\n error: timed out waiting for \"update-demo-nautilus\" to be synced\n [] <nil> 0xc420892cf0 exit status 1 <nil> <nil> true [0xc42073b630 0xc42073b660 0xc42073b678] [0xc42073b630 0xc42073b660 0xc42073b678] [0xc42073b638 0xc42073b650 0xc42073b668] [0xcace40 0xcacf40 0xcacf40] 0xc4212b4600 <nil>}:\nCommand stdout:\nCreated update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\n\nstderr:\nerror: timed out waiting for \"update-demo-nautilus\" to be synced\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg963398028 rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-c0324] []  0xc4212b6520 Created update-demo-kitten
    Scaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling update-demo-kitten up to 1
    Scaling update-demo-nautilus down to 1
    Scaling update-demo-kitten up to 2
    Scaling update-demo-nautilus down to 0
     error: timed out waiting for "update-demo-nautilus" to be synced
     [] <nil> 0xc420892cf0 exit status 1 <nil> <nil> true [0xc42073b630 0xc42073b660 0xc42073b678] [0xc42073b630 0xc42073b660 0xc42073b678] [0xc42073b638 0xc42073b650 0xc42073b668] [0xcace40 0xcacf40 0xcacf40] 0xc4212b4600 <nil>}:
    Command stdout:
    Created update-demo-kitten
    Scaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling update-demo-kitten up to 1
    Scaling update-demo-nautilus down to 1
    Scaling update-demo-kitten up to 2
    Scaling update-demo-nautilus down to 0
    
    stderr:
    error: timed out waiting for "update-demo-nautilus" to be synced
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2098

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:374
Expected error:
    <*errors.errorString | 0xc421b1ea20>: {
        s: "Only 43 pods started out of 50",
    }
    Only 43 pods started out of 50
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:346

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:49
wait for pod "downwardapi-volume-8d6b69f4-014d-11e7-8e3e-0242ac110007" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420374910>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] Projected should set mode on item file [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:832
wait for pod "downwardapi-volume-b8db1f5f-014c-11e7-a02e-0242ac110007" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420450600>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:187
Waiting for pods in namespace "e2e-tests-disruption-g98mp" to be ready
Expected error:
    <*errors.errorString | 0xc4203fecb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:257

Issues about this test specifically: #32644

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet_etc_hosts.go:55
Expected error:
    <*errors.errorString | 0xc42043e560>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Issues about this test specifically: #37502

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:67
Expected error:
    <*errors.errorString | 0xc4203acd00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:66

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:363
Expected error:
    <*errors.errorString | 0xc420450600>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:247

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:70
Expected error:
    <*errors.errorString | 0xc420bb5cc0>: {
        s: "error waiting for deployment \"test-recreate-deployment\" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:3, UpdatedReplicas:3, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{sec:63624278378, nsec:908047029, loc:(*time.Location)(0x4db8a20)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63624278378, nsec:908047181, loc:(*time.Location)(0x4db8a20)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}}}",
    }
    error waiting for deployment "test-recreate-deployment" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:3, UpdatedReplicas:3, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{sec:63624278378, nsec:908047029, loc:(*time.Location)(0x4db8a20)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63624278378, nsec:908047181, loc:(*time.Location)(0x4db8a20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:344

Issues about this test specifically: #29197 #36289 #36598 #38528

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws/5065/
Multiple broken tests:

Failed: [k8s.io] Projected updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Mar  5 18:28:59.054: Couldn't delete ns: "e2e-tests-projected-z0g7h": namespace e2e-tests-projected-z0g7h was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-projected-z0g7h was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:276

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Mar  5 18:29:28.677: Couldn't delete ns: "e2e-tests-events-2q856": namespace e2e-tests-events-2q856 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace e2e-tests-events-2q856 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:276

Issues about this test specifically: #28346

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:94
Expected error:
    <*errors.errorString | 0xc4212f6580>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:3, Replicas:20, UpdatedReplicas:20, ReadyReplicas:18, AvailableReplicas:18, UnavailableReplicas:2, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{sec:63624364598, nsec:796641896, loc:(*time.Location)(0x4db8a20)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63624364598, nsec:796642038, loc:(*time.Location)(0x4db8a20)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:3, Replicas:20, UpdatedReplicas:20, ReadyReplicas:18, AvailableReplicas:18, UnavailableReplicas:2, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63624364598, nsec:796641896, loc:(*time.Location)(0x4db8a20)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63624364598, nsec:796642038, loc:(*time.Location)(0x4db8a20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1029

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:131
Mar  5 18:33:15.088: Timed out waiting for service endpoint-test2 in namespace e2e-tests-services-whbdv to expose endpoints map[pod1:[80] pod2:[80]] (1m0s elapsed)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:1028

Issues about this test specifically: #26678 #29318

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:103
Expected error:
    <*errors.errorString | 0xc4201daaf0>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:15, Replicas:5, UpdatedReplicas:5, ReadyReplicas:4, AvailableReplicas:4, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{sec:63624364674, nsec:162346485, loc:(*time.Location)(0x4db8a20)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63624364674, nsec:162346646, loc:(*time.Location)(0x4db8a20)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}, v1beta1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{sec:63624364678, nsec:146352287, loc:(*time.Location)(0x4db8a20)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63624364642, nsec:699002750, loc:(*time.Location)(0x4db8a20)}}, Reason:\"NewReplicaSetAvailable\", Message:\"ReplicaSet \\\"nginx-2660925710\\\" has successfully progressed.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:15, Replicas:5, UpdatedReplicas:5, ReadyReplicas:4, AvailableReplicas:4, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63624364674, nsec:162346485, loc:(*time.Location)(0x4db8a20)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63624364674, nsec:162346646, loc:(*time.Location)(0x4db8a20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1beta1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63624364678, nsec:146352287, loc:(*time.Location)(0x4db8a20)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63624364642, nsec:699002750, loc:(*time.Location)(0x4db8a20)}}, Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"nginx-2660925710\" has successfully progressed."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1375

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Mar  5 18:31:39.342: Couldn't delete ns: "e2e-tests-services-6nr7l": namespace e2e-tests-services-6nr7l was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-services-6nr7l was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:276

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:206
Expected error:
    <*errors.errorString | 0xc4203ae320>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:101
wait for pod "var-expansion-3a63329a-0214-11e7-a33e-0242ac110004" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420419770>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Mar  5 18:28:24.660: Couldn't delete ns: "e2e-tests-pods-63djg": namespace e2e-tests-pods-63djg was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-pods-63djg was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:276

Issues about this test specifically: #30263

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:163
Mar  5 18:39:10.589: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:301

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc42034a5d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Generated release_1_5 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:189
Expected error:
    <*errors.errorString | 0xc420415400>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:170

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc4203fc880>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32830

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:372
Expected error:
    <*errors.errorString | 0xc420478320>: {
        s: "Timeout while waiting for pods with labels \"app=guestbook,tier=frontend\" to be running",
    }
    Timeout while waiting for pods with labels "app=guestbook,tier=frontend" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1662

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] InitContainer should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:198
Expected
    <*errors.errorString | 0xc420433ea0>: {
        s: "timed out waiting for the condition",
    }
to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:176

Issues about this test specifically: #31873

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1008
Mar  5 18:32:05.815: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:299

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:198
Mar  5 18:34:41.424: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:301

Failed: install_gcloud {PRE-SETUP}



Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
...........failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'
      
      

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227
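The install_gcloud traceback above is a plain Python version-skew failure: a newer `install.py` calls `UpdateRC()` with a keyword argument that the older bundled implementation does not define. A minimal sketch of that failure mode (the function and parameter names here are assumed for illustration, not the real gcloud signatures):

```python
# Hedged sketch of the version skew behind the install_gcloud failure:
# a newer caller passes a keyword argument the older function signature
# does not accept, so Python raises TypeError before install completes.

def update_rc(bash_completion=None, path_update=None, rc_path=None,
              sdk_root=None):
    """Stand-in for an older UpdateRC() signature (parameters are guesses)."""
    return sdk_root

try:
    # The newer bootstrapping script adds completion_update=..., which the
    # old signature above does not know about.
    update_rc(sdk_root="//google-cloud-sdk", completion_update=False)
except TypeError as err:
    print(err)  # "... got an unexpected keyword argument 'completion_update'"
```

Because the exception is raised at call time, the install aborts regardless of what the post-processing step would have done; pinning the SDK and its bootstrapping scripts to the same release avoids the mismatch.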

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws/5130/
Multiple broken tests:

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:502
Mar  7 14:07:04.162: Expected "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" from server, got ""
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:357

Failed: [k8s.io] Generated release_1_5 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:190
Mar  7 14:11:17.958: Failed to observe pod deletion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:118

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: install_gcloud {PRE-SETUP}

(Same Cloud SDK 146.0.0 post-processing failure as in the run above: install.py crashes with `TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'`.)

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws/5170/
Multiple broken tests:

Failed: [k8s.io] Generated release_1_5 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:190
Mar  8 17:09:53.895: Failed to observe pod deletion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:118

Issues about this test specifically: #42724

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:374
Expected error:
    <*errors.errorString | 0xc42043e220>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:368

Issues about this test specifically: #28106 #35197 #37482

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: install_gcloud {PRE-SETUP}

(Same Cloud SDK 146.0.0 post-processing failure as in the runs above: install.py crashes with `TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'`.)

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws/5223/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc4203b33b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32830

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:81
Expected error:
    <*errors.errorString | 0xc42043e340>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #30981

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:383
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:382

Issues about this test specifically: #31151 #35586

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Projected optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:382
Timed out after 240.001s.
Expected
    <string>: Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    
to contain substring
    <string>: value-1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:379
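The "no such file or directory, retrying" lines above come from the test container's read-retry loop: it repeatedly re-reads the projected path until the kubelet materializes the secret, and the test fails when the file never appears within the timeout. A hedged Python sketch of that loop (function name and retry parameters are assumptions, not the real mounttest implementation):

```python
import os
import tempfile
import time


def read_with_retry(path, attempts=5, delay=0.01):
    """Re-read path until it exists, logging each miss like the test container."""
    for _ in range(attempts):
        try:
            with open(path) as f:
                return f.read()
        except OSError as e:
            print(f"Error reading file {path}: {e}, retrying")
            time.sleep(delay)
    return None  # the e2e test fails here: file never materialized


# Simulate the kubelet writing the projected volume late.
tmpdir = tempfile.mkdtemp()
target = os.path.join(tmpdir, "data-1")
print(read_with_retry(target, attempts=2))  # misses: file not written yet
with open(target, "w") as f:
    f.write("value-1")
print(read_with_retry(target))  # succeeds once the file exists
```

In the failing run, the equivalent of the second, successful read never happened, which points at the kubelet's volume manager rather than at the test itself.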

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:187
Waiting for pods in namespace "e2e-tests-disruption-2db1h" to be ready
Expected error:
    <*errors.errorString | 0xc4203d80d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:257

Issues about this test specifically: #32644

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:337
Mar 10 05:58:08.629: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:301

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc4203d87b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32436 #37267

Failed: install_gcloud {PRE-SETUP}

(Same Cloud SDK 146.0.0 post-processing failure as in the runs above: install.py crashes with `TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'`.)

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1154
Expected error:
    <*errors.errorString | 0xc4213b88f0>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1075

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:374
Expected error:
    <*errors.errorString | 0xc421499100>: {
        s: "Only 45 pods started out of 50",
    }
    Only 45 pods started out of 50
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:346

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:206
Expected error:
    <*errors.errorString | 0xc4203d9360>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:83

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet_etc_hosts.go:55
Expected error:
    <*errors.errorString | 0xc4203ce090>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:83

Issues about this test specifically: #37502

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:69
Expected error:
    <*errors.errorString | 0xc4202532d0>: {
        s: "error waiting for deployment \"test-recreate-deployment\" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:3, UpdatedReplicas:3, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{sec:63624750301, nsec:209572138, loc:(*time.Location)(0x496a8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63624750301, nsec:209572265, loc:(*time.Location)(0x496a8e0)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}}}",
    }
    error waiting for deployment "test-recreate-deployment" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:3, UpdatedReplicas:3, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{sec:63624750301, nsec:209572138, loc:(*time.Location)(0x496a8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63624750301, nsec:209572265, loc:(*time.Location)(0x496a8e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:314

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 10 06:07:57.809: Couldn't delete ns: "e2e-tests-disruption-x3gzb": namespace e2e-tests-disruption-x3gzb was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-x3gzb was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:270

Issues about this test specifically: #32639

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:85
Expected error:
    <*errors.errorString | 0xc4203ce090>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:84

Issues about this test specifically: #31498 #33896 #35507

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc42041e640>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32375

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:61
wait for pod "pod-configmaps-2ae79503-0598-11e7-b7ce-0242ac110007" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420430bf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:148

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 10 05:53:07.314: Couldn't delete ns: "e2e-tests-disruption-2rlmh": namespace e2e-tests-disruption-2rlmh was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-2rlmh was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:270

Issues about this test specifically: #32668 #35405

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws/5283/
Multiple broken tests:

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 11 17:57:01.039: Couldn't delete ns: "e2e-tests-port-forwarding-6kl1h": namespace e2e-tests-port-forwarding-6kl1h was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-port-forwarding-6kl1h was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:270

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:411
Expected error:
    <*errors.errorString | 0xc420452b70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:247

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:187
Waiting for pods in namespace "e2e-tests-disruption-vfbv1" to be ready
Expected error:
    <*errors.errorString | 0xc420408ac0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:257

Issues about this test specifically: #32644

Failed: [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:331
Expected
    <*errors.errorString | 0xc4203f3b10>: {
        s: "timed out waiting for the condition",
    }
to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:313

Issues about this test specifically: #31408

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc42043f8b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:125
Timed out after 120.001s.
Expected
    <string>: content of file "/etc/labels": key1="value1"
    key2="value2"
    [... the same two lines repeated on every poll until the 120s timeout ...]
    
to contain substring
    <string>: key3="value3"
    
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:124

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:372
Expected error:
    <*errors.errorString | 0xc4210bc1c0>: {
        s: "Timeout while waiting for pods with labels \"app=guestbook,tier=frontend\" to be running",
    }
    Timeout while waiting for pods with labels "app=guestbook,tier=frontend" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1672

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:128
Expected error:
    <*errors.errorString | 0xc42080a160>: {
        s: "failed to execute ls -idlh /data, error: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg354032218 exec --namespace=e2e-tests-statefulset-7mgqz ss-0 -- /bin/sh -c ls -idlh /data] []  <nil>  Error from server: \n [] <nil> 0xc420be2f30 exit status 1 <nil> <nil> true [0xc4203846e8 0xc420384750 0xc420384798] [0xc4203846e8 0xc420384750 0xc420384798] [0xc420384738 0xc420384788] [0xc45f00 0xc45f00] 0xc42131c0c0 <nil>}:\nCommand stdout:\n\nstderr:\nError from server: \n\nerror:\nexit status 1\n",
    }
    failed to execute ls -idlh /data, error: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg354032218 exec --namespace=e2e-tests-statefulset-7mgqz ss-0 -- /bin/sh -c ls -idlh /data] []  <nil>  Error from server: 
     [] <nil> 0xc420be2f30 exit status 1 <nil> <nil> true [0xc4203846e8 0xc420384750 0xc420384798] [0xc4203846e8 0xc420384750 0xc420384798] [0xc420384738 0xc420384788] [0xc45f00 0xc45f00] 0xc42131c0c0 <nil>}:
    Command stdout:
    
    stderr:
    Error from server: 
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:123

Failed: install_gcloud {PRE-SETUP}

Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
..............failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc42043fde0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32375

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 11 17:57:14.671: Couldn't delete ns: "e2e-tests-disruption-scdl7": namespace e2e-tests-disruption-scdl7 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-scdl7 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:270

Issues about this test specifically: #32668 #35405

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws/5293/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:383
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:382

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Projected should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:52
wait for pod "pod-projected-secrets-0d6df144-06fc-11e7-8bf5-0242ac11000a" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420431290>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:148

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc4203f55e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32830

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:41
Expected error:
    <*errors.errorString | 0xc420432b70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:141

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc4203f55e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:69
Expected error:
    <*errors.errorString | 0xc420e6c060>: {
        s: "error waiting for deployment \"test-recreate-deployment\" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:3, UpdatedReplicas:3, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{sec:63624903297, nsec:44613650, loc:(*time.Location)(0x4978b00)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63624903297, nsec:44613786, loc:(*time.Location)(0x4978b00)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}}}",
    }
    error waiting for deployment "test-recreate-deployment" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:3, UpdatedReplicas:3, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{sec:63624903297, nsec:44613650, loc:(*time.Location)(0x4978b00)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63624903297, nsec:44613786, loc:(*time.Location)(0x4978b00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:314

Issues about this test specifically: #29197 #36289 #36598 #38528
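The RecreateDeployment failure above is more informative than the generic timeouts: the status dump shows 3 updated replicas but only 2 available, which trips the `MinimumReplicasUnavailable` condition. A hedged sketch of the availability check implied by that message (simplified; for this Recreate test the minimum is assumed to be the full desired count, whereas real deployments subtract maxUnavailable):

```go
package main

import "fmt"

// deploymentStatus mirrors the fields printed in the failure log (simplified).
type deploymentStatus struct {
	Replicas, UpdatedReplicas, ReadyReplicas, AvailableReplicas int32
}

// hasMinimumAvailability sketches the check behind the
// "MinimumReplicasUnavailable" condition: the deployment is Available only
// when at least minAvailable replicas are available.
func hasMinimumAvailability(s deploymentStatus, minAvailable int32) bool {
	return s.AvailableReplicas >= minAvailable
}

func main() {
	// Status from the log above: 3 replicas updated, but only 2 ready/available.
	s := deploymentStatus{Replicas: 3, UpdatedReplicas: 3, ReadyReplicas: 2, AvailableReplicas: 2}
	fmt.Println(hasMinimumAvailability(s, 3)) // false: one replica never became available
}
```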

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:128
Mar 12 00:23:55.234: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:301

Failed: [k8s.io] Pods should support remote command execution over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:511
Mar 12 00:15:09.704: Failed to open websocket to wss://api.e2e-kops-aws.test-aws.k8s.io:443/api/v1/namespaces/e2e-tests-pods-zwlsl/pods/pod-exec-websocket-b809bee7-06fb-11e7-b478-0242ac11000a/exec?command=cat&command=%2Fetc%2Fresolv.conf&container=main&stderr=1&stdout=1: websocket.Dial wss://api.e2e-kops-aws.test-aws.k8s.io:443/api/v1/namespaces/e2e-tests-pods-zwlsl/pods/pod-exec-websocket-b809bee7-06fb-11e7-b478-0242ac11000a/exec?command=cat&command=%2Fetc%2Fresolv.conf&container=main&stderr=1&stdout=1: bad status
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:481

Issues about this test specifically: #38308
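The websocket failure above reports only "bad status" from the dial, but the URL in the log shows the shape of the exec subresource request: each argv element becomes a separate repeated `command` query parameter. A sketch of how such a URL is assembled with `net/url` (host, namespace, and pod names below are stand-ins, not taken from any real cluster):

```go
package main

import (
	"fmt"
	"net/url"
)

// buildExecURL sketches the websocket exec URL the test dials. Note that
// each command argument is added as its own repeated "command" parameter,
// matching the query string in the failure log.
func buildExecURL(host, ns, pod, container string, argv []string) string {
	u := url.URL{
		Scheme: "wss",
		Host:   host,
		Path:   fmt.Sprintf("/api/v1/namespaces/%s/pods/%s/exec", ns, pod),
	}
	q := url.Values{}
	for _, a := range argv {
		q.Add("command", a) // repeated key, one per argv element
	}
	q.Set("container", container)
	q.Set("stderr", "1")
	q.Set("stdout", "1")
	u.RawQuery = q.Encode() // keys are emitted in sorted order
	return u.String()
}

func main() {
	fmt.Println(buildExecURL("api.example.test:443", "e2e-tests-pods",
		"pod-exec-websocket", "main", []string{"cat", "/etc/resolv.conf"}))
}
```

A "bad status" here means the server rejected the HTTP upgrade itself (e.g. the pod was gone or the kubelet connection failed) before any exec output was exchanged.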

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:421
Mar 12 00:42:17.041: Pod test-pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:376

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:266
Expected error:
    <*errors.errorString | 0xc42032fb20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:664

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] Projected should be consumable from pods in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:387
wait for pod "pod-projected-configmaps-ffaccea7-06fb-11e7-9b81-0242ac11000a" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203f1ee0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:148

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:374
Expected error:
    <*errors.errorString | 0xc421847f30>: {
        s: "Only 49 pods started out of 50",
    }
    Only 49 pods started out of 50
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:346

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc420b2af90>: {
        s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-12 00:11:45 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-12 00:12:17 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-12 00:11:45 -0800 PST Reason: Message:}] Message: Reason: HostIP:172.20.54.2 PodIP:100.96.2.53 StartTime:2017-03-12 00:11:45 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:2017-03-12 00:12:16 -0800 PST,ContainerID:docker://b8996bd48a0136031fdea28dacc6d16a8b581e1d22f5fe2a2e8bfa400be7054d,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker-pullable://gcr.io/google_containers/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff ContainerID:docker://b8996bd48a0136031fdea28dacc6d16a8b581e1d22f5fe2a2e8bfa400be7054d}] QOSClass:BestEffort}",
    }
    pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-12 00:11:45 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-12 00:12:17 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-12 00:11:45 -0800 PST Reason: Message:}] Message: Reason: HostIP:172.20.54.2 PodIP:100.96.2.53 StartTime:2017-03-12 00:11:45 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:2017-03-12 00:12:16 -0800 PST,ContainerID:docker://b8996bd48a0136031fdea28dacc6d16a8b581e1d22f5fe2a2e8bfa400be7054d,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker-pullable://gcr.io/google_containers/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff ContainerID:docker://b8996bd48a0136031fdea28dacc6d16a8b581e1d22f5fe2a2e8bfa400be7054d}] QOSClass:BestEffort}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:129
Expected error:
    <*errors.errorString | 0xc420453610>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:74

Issues about this test specifically: #28346

Failed: install_gcloud {PRE-SETUP}

(Identical install_gcloud failure to the first occurrence above: post-processing failed, followed by `TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'`.)

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:206
Expected error:
    <*errors.errorString | 0xc42043c5c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:83

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:372
Expected error:
    <*errors.errorString | 0xc42074a0a0>: {
        s: "Timeout while waiting for pods with labels \"app=guestbook,tier=frontend\" to be running",
    }
    Timeout while waiting for pods with labels "app=guestbook,tier=frontend" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1672

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc420401d80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #33631 #33995 #34970

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws/5302/
Multiple broken tests:

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 12 08:36:27.437: Couldn't delete ns: "e2e-tests-limitrange-vs8xn": namespace e2e-tests-limitrange-vs8xn was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-limitrange-vs8xn was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:270

Issues about this test specifically: #27503

Failed: install_gcloud {PRE-SETUP}

(Identical install_gcloud failure to the first occurrence above: post-processing failed, followed by `TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'`.)

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:316
Mar 12 08:36:03.222: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2013

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:75
Expected error:
    <*errors.errorString | 0xc421047c30>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:422

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc4203adf60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:330
Mar 12 08:48:44.038: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2013

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:187
Waiting for pods in namespace "e2e-tests-disruption-rv6zq" to be ready
Expected error:
    <*errors.errorString | 0xc4203ac470>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:257

Issues about this test specifically: #32644

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:128
Mar 12 08:53:39.651: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:301

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc420455d20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32830

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:374
Expected error:
    <*errors.errorString | 0xc4213bbbf0>: {
        s: "Only 41 pods started out of 50",
    }
    Only 41 pods started out of 50
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:346

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:494
wait for pod "pod-configmaps-0f4cb730-073b-11e7-8988-0242ac110007" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42043e490>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:148

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:93
Expected error:
    <*errors.errorString | 0xc421f98280>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:946

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc42043e490>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:69
Expected error:
    <*errors.errorString | 0xc421019510>: {
        s: "error waiting for deployment \"test-recreate-deployment\" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:3, UpdatedReplicas:3, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{sec:63624929510, nsec:467123065, loc:(*time.Location)(0x4978b00)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63624929510, nsec:467123187, loc:(*time.Location)(0x4978b00)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}}}",
    }
    error waiting for deployment "test-recreate-deployment" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:3, UpdatedReplicas:3, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{sec:63624929510, nsec:467123065, loc:(*time.Location)(0x4978b00)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63624929510, nsec:467123187, loc:(*time.Location)(0x4978b00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:314

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc42037d3a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32375

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:85
Expected error:
    <*errors.errorString | 0xc4203b2860>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:84

Issues about this test specifically: #31498 #33896 #35507

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects a client request should support a client that connects, sends no data, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 12 08:36:03.310: Couldn't delete ns: "e2e-tests-port-forwarding-7ttpd": namespace e2e-tests-port-forwarding-7ttpd was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-port-forwarding-7ttpd was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:270

Failed: [k8s.io] Projected should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:82
wait for pod "pod-projected-secrets-10cfa291-0739-11e7-a1c1-0242ac110007" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203b2860>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:148

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:383
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:382

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:507
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg307067214 --namespace=e2e-tests-kubectl-xtts2 run -i --image=gcr.io/google_containers/busybox:1.24 --restart=Never success -- /bin/sh -c exit 0] []  <nil>  error: watch closed before Until timeout\n [] <nil> 0xc4209d4ba0 exit status 1 <nil> <nil> true [0xc42064c938 0xc42064c950 0xc42064c968] [0xc42064c938 0xc42064c950 0xc42064c968] [0xc42064c948 0xc42064c960] [0xc45f00 0xc45f00] 0xc420d85ce0 <nil>}:\nCommand stdout:\n\nstderr:\nerror: watch closed before Until timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg307067214 --namespace=e2e-tests-kubectl-xtts2 run -i --image=gcr.io/google_containers/busybox:1.24 --restart=Never success -- /bin/sh -c exit 0] []  <nil>  error: watch closed before Until timeout
     [] <nil> 0xc4209d4ba0 exit status 1 <nil> <nil> true [0xc42064c938 0xc42064c950 0xc42064c968] [0xc42064c938 0xc42064c950 0xc42064c968] [0xc42064c948 0xc42064c960] [0xc45f00 0xc45f00] 0xc420d85ce0 <nil>}:
    Command stdout:
    
    stderr:
    error: watch closed before Until timeout
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:481

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Generated release_1_5 clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:213
Expected error:
    <*errors.errorString | 0xc42043e490>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:200

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1355
Expected error:
    <*errors.errorString | 0xc421030160>: {
        s: "timed out waiting for command &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg307067214 --namespace=e2e-tests-kubectl-tpr7w run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc420adc080   [] <nil> 0xc420da3200 <nil> <nil> <nil> true [0xc42037ed48 0xc42037edb8 0xc42037ee30] [0xc42037ed48 0xc42037edb8 0xc42037ee30] [0xc42037ed58 0xc42037eda0 0xc42037ede8] [0xc45e00 0xc45f00 0xc45f00] 0xc4207ca3c0 <nil>}:\nCommand stdout:\n\nstderr:\n\n",
    }
    timed out waiting for command &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg307067214 --namespace=e2e-tests-kubectl-tpr7w run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc420adc080   [] <nil> 0xc420da3200 <nil> <nil> <nil> true [0xc42037ed48 0xc42037edb8 0xc42037ee30] [0xc42037ed48 0xc42037edb8 0xc42037ee30] [0xc42037ed58 0xc42037eda0 0xc42037ede8] [0xc45e00 0xc45f00 0xc45f00] 0xc4207ca3c0 <nil>}:
    Command stdout:
    
    stderr:
    
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2095

Issues about this test specifically: #26728 #28266 #30340 #32405

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:411
Expected error:
    <*errors.errorString | 0xc4203ac470>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:230

Issues about this test specifically: #26168 #27450

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws/5338/
Multiple broken tests:

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:274
Expected error:
    <*errors.errorString | 0xc4213b2890>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:150

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1162
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg501718590 rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-5f5hg] []  <nil> Created e2e-test-nginx-rc-32acf23b6b4ea661036d44fce5f9d8ef\nScaling up e2e-test-nginx-rc-32acf23b6b4ea661036d44fce5f9d8ef from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-32acf23b6b4ea661036d44fce5f9d8ef up to 1\n error: timed out waiting for any update progress to be made\n [] <nil> 0xc421383740 exit status 1 <nil> <nil> true [0xc4203e0d88 0xc4203e0da0 0xc4203e0db8] [0xc4203e0d88 0xc4203e0da0 0xc4203e0db8] [0xc4203e0d98 0xc4203e0db0] [0xc45f00 0xc45f00] 0xc42123b8c0 <nil>}:\nCommand stdout:\nCreated e2e-test-nginx-rc-32acf23b6b4ea661036d44fce5f9d8ef\nScaling up e2e-test-nginx-rc-32acf23b6b4ea661036d44fce5f9d8ef from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-32acf23b6b4ea661036d44fce5f9d8ef up to 1\n\nstderr:\nerror: timed out waiting for any update progress to be made\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg501718590 rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-5f5hg] []  <nil> Created e2e-test-nginx-rc-32acf23b6b4ea661036d44fce5f9d8ef
    Scaling up e2e-test-nginx-rc-32acf23b6b4ea661036d44fce5f9d8ef from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-32acf23b6b4ea661036d44fce5f9d8ef up to 1
     error: timed out waiting for any update progress to be made
     [] <nil> 0xc421383740 exit status 1 <nil> <nil> true [0xc4203e0d88 0xc4203e0da0 0xc4203e0db8] [0xc4203e0d88 0xc4203e0da0 0xc4203e0db8] [0xc4203e0d98 0xc4203e0db0] [0xc45f00 0xc45f00] 0xc42123b8c0 <nil>}:
    Command stdout:
    Created e2e-test-nginx-rc-32acf23b6b4ea661036d44fce5f9d8ef
    Scaling up e2e-test-nginx-rc-32acf23b6b4ea661036d44fce5f9d8ef from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-32acf23b6b4ea661036d44fce5f9d8ef up to 1
    
    stderr:
    error: timed out waiting for any update progress to be made
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:174

Issues about this test specifically: #26138 #28429 #28737 #38064

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:206
Expected error:
    <*errors.errorString | 0xc4203da290>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:83

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:69
Expected error:
    <*errors.errorString | 0xc4213ef2c0>: {
        s: "error waiting for deployment \"test-recreate-deployment\" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:3, UpdatedReplicas:3, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{sec:63625010593, nsec:318626236, loc:(*time.Location)(0x4978b00)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63625010593, nsec:318626357, loc:(*time.Location)(0x4978b00)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}}}",
    }
    error waiting for deployment "test-recreate-deployment" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:3, UpdatedReplicas:3, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{sec:63625010593, nsec:318626236, loc:(*time.Location)(0x4978b00)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63625010593, nsec:318626357, loc:(*time.Location)(0x4978b00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:314

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:85
Expected error:
    <*errors.errorString | 0xc4203d6210>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:84

Issues about this test specifically: #31498 #33896 #35507

Failed: install_gcloud {PRE-SETUP}

Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
.............failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227
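The traceback above is a version-skew failure, not an e2e flake: the bundled `install.py` calls `UpdateRC()` with a keyword argument that the `UpdateRC()` shipped in this SDK build does not accept, so Python raises `TypeError` at the call site before any installation work runs. A minimal, hypothetical stand-in (these parameter names are illustrative; this is not the real gcloud code):

```python
def UpdateRC(bash_completion=True, path_update=True, rc_path=None, sdk_root=None):
    # Stand-in for the SDK-side helper. Note there is no `completion_update`
    # parameter -- mirroring the skew between the installer script and the
    # SDK version it was run against.
    return sdk_root

def run_installer():
    # Mirrors the shape of the failing call site in install.py's main():
    # passing a keyword the callee's signature lacks raises TypeError.
    return UpdateRC(completion_update=True, sdk_root="/google-cloud-sdk")
```

Because the mismatch is in the function signature itself, the failure is deterministic for that installer/SDK pairing, which fits the "PRE-SETUP" classification rather than a flaky test.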

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc4203d6210>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:266
Expected error:
    <*errors.errorString | 0xc4203f2110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:3902

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 13 07:01:42.565: Couldn't delete ns: "e2e-tests-disruption-r6lrv": namespace e2e-tests-disruption-r6lrv was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-r6lrv was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:270

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:156
Expected error:
    <*errors.errorString | 0xc420415fe0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:83

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:75
Waiting for pods in namespace "e2e-tests-disruption-hftwx" to be ready
Expected error:
    <*errors.errorString | 0xc420433620>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:257

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc4203ff1f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #33631 #33995 #34970

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:372
Expected error:
    <*errors.errorString | 0xc420494690>: {
        s: "Timeout while waiting for pods with labels \"app=guestbook,tier=frontend\" to be running",
    }
    Timeout while waiting for pods with labels "app=guestbook,tier=frontend" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1672

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:187
Waiting for pods in namespace "e2e-tests-disruption-hbp4f" to be ready
Expected error:
    <*errors.errorString | 0xc4203cf630>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:257

Issues about this test specifically: #32644

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects no client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:508
Mar 13 07:01:41.732: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:214

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc42041bb50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32375

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet_etc_hosts.go:55
failed to execute command in pod test-pod, container busybox-1: 
Expected error:
    <*errors.StatusError | 0xc42141a200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "",
            Message: "",
            Reason: "",
            Details: nil,
            Code: 0,
        },
    }
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/exec_util.go:107

Issues about this test specifically: #37502

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:374
Expected error:
    <*errors.errorString | 0xc4204bbc00>: {
        s: "Only 46 pods started out of 50",
    }
    Only 46 pods started out of 50
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:346

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] Projected should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:967
wait for pod "downwardapi-volume-1acd131c-07f6-11e7-9d3b-0242ac11000a" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42041c2c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:148

Failed: [k8s.io] Generated release_1_5 clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:213
Expected error:
    <*errors.errorString | 0xc4203a45f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:200

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc4203fe460>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:318
Expected error:
    <*errors.errorString | 0xc420dc6920>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:277

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:144
Expected error:
    <*errors.errorString | 0xc4203d1bf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:83

Issues about this test specifically: #33008

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:507
Expected
    <int>: 1
to equal
    <int>: 42
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:487

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 13 07:00:46.761: Couldn't delete ns: "e2e-tests-services-tln5g": namespace e2e-tests-services-tln5g was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-services-tln5g was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:270

Issues about this test specifically: #26678 #29318

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:316
Mar 13 07:00:34.820: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2013

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost should support forwarding over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:513
Mar 13 06:57:44.931: Failed to open websocket to wss://api.e2e-kops-aws.test-aws.k8s.io:443/api/v1/namespaces/e2e-tests-port-forwarding-pwbks/pods/pfpod/portforward?ports=80: websocket.Dial wss://api.e2e-kops-aws.test-aws.k8s.io:443/api/v1/namespaces/e2e-tests-port-forwarding-pwbks/pods/pfpod/portforward?ports=80: bad status
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:406

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc42041bb50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32830

@grodrigues3 grodrigues3 assigned justinsb and unassigned rmmh Mar 13, 2017
@grodrigues3 grodrigues3 added the sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. label Mar 13, 2017
@bowei
bowei commented Mar 13, 2017

close, looks like gcloud command failed

@justinsb justinsb removed the sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. label Mar 14, 2017
@justinsb

Tracking gcloud problem here: kubernetes/test-infra#2236
