ci-kubernetes-e2e-gci-gke-prod-parallel: broken test run #39233

Closed
k8s-github-robot opened this issue Dec 26, 2016 · 9 comments
Labels: kind/flake (Categorizes issue or PR as related to a flaky test.)
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/2474/
Multiple broken tests:

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc8200ef7b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1071
Dec 26 14:22:54.192: expected un-ready endpoint for Service webserver within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1069

Issues about this test specifically: #26172

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Dec 26 14:26:51.125: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1513

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc8200fd6a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:50
Expected error:
    <*errors.errorString | 0xc8207c22f0>: {
        s: "pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:2016-12-26 14:13:24 -0800 PST FinishedAt:2016-12-26 14:13:54 -0800 PST ContainerID:docker://4dfa53de16108b169b2c0577dd78213f2f4ae0adaba16b077ec25aa70865147b}",
    }
    pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:2016-12-26 14:13:24 -0800 PST FinishedAt:2016-12-26 14:13:54 -0800 PST ContainerID:docker://4dfa53de16108b169b2c0577dd78213f2f4ae0adaba16b077ec25aa70865147b}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Dec 26 14:29:01.033: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:284

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc8200e97b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:542
Dec 26 14:16:34.387: Missing KubeDNS in kubectl cluster-info
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:539

Issues about this test specifically: #28420 #36122

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc8201a6760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #28337

Previous issues for this suite: #37173 #37815 #38395

k8s-github-robot added the kind/flake (Categorizes issue or PR as related to a flaky test.) and priority/P2 labels on Dec 26, 2016
@k8s-github-robot (author) commented:

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/2890/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Services should provide secure master service [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan  2 02:14:10.540: Couldn't delete ns: "e2e-tests-services-xzwyj": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-services-xzwyj/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-services-xzwyj/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8209440a0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan  2 02:14:11.097: Couldn't delete ns: "e2e-tests-containers-njjes": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-containers-njjes/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-containers-njjes/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820c8d270), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #34520

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan  2 02:14:12.458: Couldn't delete ns: "e2e-tests-container-probe-0ktui": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-container-probe-0ktui/jobs\"") has prevented the request from succeeding (get jobs.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-container-probe-0ktui/jobs\\\"\") has prevented the request from succeeding (get jobs.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8204fb540), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #38511

@k8s-github-robot (author) commented:

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/3161/
Multiple broken tests:

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc82019e870>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:111
Expected error:
    <*errors.errorString | 0xc8200e57b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:266
Expected error:
    <*errors.errorString | 0xc8200fd6a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:207

Issues about this test specifically: #26224 #34354

Failed: [k8s.io] Deployment paused deployment should be ignored by the controller {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:74
Jan  6 08:00:13.290: Err : timed out waiting for the condition
. Failed to remove deployment test-paused-deployment pods : &{TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink:/api/v1/namespaces/e2e-tests-deployment-h941z/pods ResourceVersion:3518} Items:[{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:test-paused-deployment-965846846-0xq4l GenerateName:test-paused-deployment-965846846- Namespace:e2e-tests-deployment-h941z SelfLink:/api/v1/namespaces/e2e-tests-deployment-h941z/pods/test-paused-deployment-965846846-0xq4l UID:0d24331d-d429-11e6-a580-42010af000fc ResourceVersion:2526 Generation:0 CreationTimestamp:2017-01-06 07:59:06 -0800 PST DeletionTimestamp:2017-01-06 07:59:32 -0800 PST DeletionGracePeriodSeconds:0xc8207438d0 Labels:map[name:nginx pod-template-hash:965846846] Annotations:map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"e2e-tests-deployment-h941z","name":"test-paused-deployment-965846846","uid":"0d22cf37-d429-11e6-a580-42010af000fc","apiVersion":"extensions","resourceVersion":"2426"}}
] OwnerReferences:[] Finalizers:[] ClusterName:} Spec:{Volumes:[{Name:default-token-6hcpw VolumeSource:{HostPath:<nil> EmptyDir:<nil> GCEPersistentDisk:<nil> AWSElasticBlockStore:<nil> GitRepo:<nil> Secret:0xc820e49290 NFS:<nil> ISCSI:<nil> Glusterfs:<nil> PersistentVolumeClaim:<nil> RBD:<nil> Quobyte:<nil> FlexVolume:<nil> Cinder:<nil> CephFS:<nil> Flocker:<nil> DownwardAPI:<nil> FC:<nil> AzureFile:<nil> ConfigMap:<nil> VsphereVolume:<nil> AzureDisk:<nil>}}] InitContainers:[] Containers:[{Name:nginx Image:gcr.io/google_containers/nginx-slim:0.7 Command:[] Args:[] WorkingDir: Ports:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-6hcpw ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath:}] LivenessProbe:<nil> ReadinessProbe:<nil> Lifecycle:<nil> TerminationMessagePath:/dev/termination-log ImagePullPolicy:IfNotPresent SecurityContext:<nil> Stdin:false StdinOnce:false TTY:false}] RestartPolicy:Always TerminationGracePeriodSeconds:0xc820743a60 ActiveDeadlineSeconds:<nil> DNSPolicy:ClusterFirst NodeSelector:map[] ServiceAccountName:default NodeName:gke-bootstrap-e2e-default-pool-0bccf4e0-cjh9 SecurityContext:0xc8209b3b40 ImagePullSecrets:[] Hostname: Subdomain:} Status:{Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-06 07:59:06 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-06 07:59:06 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [nginx]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-06 07:59:06 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.2 PodIP: StartTime:2017-01-06 07:59:06 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:nginx State:{Waiting:0xc820e6a1a0 Running:<nil> Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> 
Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/nginx-slim:0.7 ImageID: ContainerID:}]}}]}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:249

Issues about this test specifically: #28067 #28378 #32692 #33256 #34654

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot (author) commented:

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/3415/
Multiple broken tests:

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 10 10:02:24.776: Couldn't delete ns: "e2e-tests-proxy-v35i5": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-proxy-v35i5/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-proxy-v35i5/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820aeaaa0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #32936

Failed: [k8s.io] Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:137
Expected error:
    <*errors.errorString | 0xc8200db790>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:136

Issues about this test specifically: #29511 #29987 #30238 #38364

Failed: [k8s.io] Pods should support remote command execution over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 10 10:02:24.774: Couldn't delete ns: "e2e-tests-pods-a8unj": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-pods-a8unj/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-pods-a8unj/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8208013b0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #38308

Failed: [k8s.io] Deployment paused deployment should be able to scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:86
Expected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: ""},
                Status: "Failure",
                Message: "an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-fqmc3/replicasets/nginx-deployment-3837372172\\\"\") has prevented the request from succeeding (get replicasets.extensions nginx-deployment-3837372172)",
                Reason: "InternalError",
                Details: {
                    Name: "nginx-deployment-3837372172",
                    Group: "extensions",
                    Kind: "replicasets",
                    Causes: [
                        {
                            Type: "UnexpectedServerResponse",
                            Message: "Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-fqmc3/replicasets/nginx-deployment-3837372172\"",
                            Field: "",
                        },
                    ],
                    RetryAfterSeconds: 0,
                },
                Code: 500,
            },
        },
    ]
    an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-fqmc3/replicasets/nginx-deployment-3837372172\"") has prevented the request from succeeding (get replicasets.extensions nginx-deployment-3837372172)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:202

Issues about this test specifically: #29828

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:46
Expected error:
    <*errors.errorString | 0xc820dbfd20>: {
        s: "failed to get logs from pod-configmaps-e9b06d5b-d75e-11e6-bd00-0242ac110006 for configmap-volume-test: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-configmap-ve20f/pods/pod-configmaps-e9b06d5b-d75e-11e6-bd00-0242ac110006/log?container=configmap-volume-test&previous=false\\\"\") has prevented the request from succeeding (get pods pod-configmaps-e9b06d5b-d75e-11e6-bd00-0242ac110006)",
    }
    failed to get logs from pod-configmaps-e9b06d5b-d75e-11e6-bd00-0242ac110006 for configmap-volume-test: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-configmap-ve20f/pods/pod-configmaps-e9b06d5b-d75e-11e6-bd00-0242ac110006/log?container=configmap-volume-test&previous=false\"") has prevented the request from succeeding (get pods pod-configmaps-e9b06d5b-d75e-11e6-bd00-0242ac110006)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2319

Issues about this test specifically: #27245

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:240
Expected error:
    <*errors.errorString | 0xc820c65850>: {
        s: "failed to get logs from pod-service-account-e9f0a9d0-d75e-11e6-b66f-0242ac110006 for token-test: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-svcaccounts-l1ptz/pods/pod-service-account-e9f0a9d0-d75e-11e6-b66f-0242ac110006/log?container=token-test&previous=false\\\"\") has prevented the request from succeeding (get pods pod-service-account-e9f0a9d0-d75e-11e6-b66f-0242ac110006)",
    }
    failed to get logs from pod-service-account-e9f0a9d0-d75e-11e6-b66f-0242ac110006 for token-test: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-svcaccounts-l1ptz/pods/pod-service-account-e9f0a9d0-d75e-11e6-b66f-0242ac110006/log?container=token-test&previous=false\"") has prevented the request from succeeding (get pods pod-service-account-e9f0a9d0-d75e-11e6-b66f-0242ac110006)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2319

Issues about this test specifically: #37526

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:88
Expected error:
    <*errors.errorString | 0xc8207a89a0>: {
        s: "failed to get logs from pod-secrets-e95258b6-d75e-11e6-981c-0242ac110006 for secret-volume-test: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-secrets-dowc8/pods/pod-secrets-e95258b6-d75e-11e6-981c-0242ac110006/log?container=secret-volume-test&previous=false\\\"\") has prevented the request from succeeding (get pods pod-secrets-e95258b6-d75e-11e6-981c-0242ac110006)",
    }
    failed to get logs from pod-secrets-e95258b6-d75e-11e6-981c-0242ac110006 for secret-volume-test: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-secrets-dowc8/pods/pod-secrets-e95258b6-d75e-11e6-981c-0242ac110006/log?container=secret-volume-test&previous=false\"") has prevented the request from succeeding (get pods pod-secrets-e95258b6-d75e-11e6-981c-0242ac110006)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2319

Issues about this test specifically: #29221

Failed: [k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 10 10:02:24.780: Couldn't delete ns: "e2e-tests-pods-6sxph": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-pods-6sxph\"") has prevented the request from succeeding (delete namespaces e2e-tests-pods-6sxph) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-pods-6sxph\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-pods-6sxph)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820cd3400), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #33008

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:306
Expected error:
    <*errors.StatusError | 0xc820529700>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-resourcequota-4d5b1/persistentvolumeclaims/test-claim\\\"\") has prevented the request from succeeding (delete persistentVolumeClaims test-claim)",
            Reason: "InternalError",
            Details: {
                Name: "test-claim",
                Group: "",
                Kind: "persistentVolumeClaims",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-resourcequota-4d5b1/persistentvolumeclaims/test-claim\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-resourcequota-4d5b1/persistentvolumeclaims/test-claim\"") has prevented the request from succeeding (delete persistentVolumeClaims test-claim)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:299

Issues about this test specifically: #34212

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:583
Expected error:
    <*errors.StatusError | 0xc8204fae80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-pods-us6iy/pods/pod-logs-websocket-e9c7b850-d75e-11e6-82e4-0242ac110006\\\"\") has prevented the request from succeeding (get pods pod-logs-websocket-e9c7b850-d75e-11e6-82e4-0242ac110006)",
            Reason: "InternalError",
            Details: {
                Name: "pod-logs-websocket-e9c7b850-d75e-11e6-82e4-0242ac110006",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-pods-us6iy/pods/pod-logs-websocket-e9c7b850-d75e-11e6-82e4-0242ac110006\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-pods-us6iy/pods/pod-logs-websocket-e9c7b850-d75e-11e6-82e4-0242ac110006\"") has prevented the request from succeeding (get pods pod-logs-websocket-e9c7b850-d75e-11e6-82e4-0242ac110006)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:60

Issues about this test specifically: #30263

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1185
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.226.194 --kubeconfig=/workspace/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-05uwb -o json] []  <nil>  Error from server: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-05uwb/pods/e2e-test-nginx-pod\\\"\") has prevented the request from succeeding (get pods e2e-test-nginx-pod)\n [] <nil> 0xc820a737a0 exit status 1 <nil> true [0xc8204beae0 0xc8204beaf8 0xc8204beb10] [0xc8204beae0 0xc8204beaf8 0xc8204beb10] [0xc8204beaf0 0xc8204beb08] [0xafae20 0xafae20] 0xc820aac240}:\nCommand stdout:\n\nstderr:\nError from server: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-05uwb/pods/e2e-test-nginx-pod\\\"\") has prevented the request from succeeding (get pods e2e-test-nginx-pod)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.226.194 --kubeconfig=/workspace/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-05uwb -o json] []  <nil>  Error from server: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-05uwb/pods/e2e-test-nginx-pod\"") has prevented the request from succeeding (get pods e2e-test-nginx-pod)
     [] <nil> 0xc820a737a0 exit status 1 <nil> true [0xc8204beae0 0xc8204beaf8 0xc8204beb10] [0xc8204beae0 0xc8204beaf8 0xc8204beb10] [0xc8204beaf0 0xc8204beb08] [0xafae20 0xafae20] 0xc820aac240}:
    Command stdout:
    
    stderr:
    Error from server: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-05uwb/pods/e2e-test-nginx-pod\"") has prevented the request from succeeding (get pods e2e-test-nginx-pod)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2219

Issues about this test specifically: #29834 #35757

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 10 10:02:24.776: Couldn't delete ns: "e2e-tests-proxy-1dnxg": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-proxy-1dnxg/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-proxy-1dnxg/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820bba0a0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:268
Expected error:
    <*errors.StatusError | 0xc82008cc00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-resourcequota-qbhld/replicationcontrollers/test-rc\\\"\") has prevented the request from succeeding (delete replicationControllers test-rc)",
            Reason: "InternalError",
            Details: {
                Name: "test-rc",
                Group: "",
                Kind: "replicationControllers",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-resourcequota-qbhld/replicationcontrollers/test-rc\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-resourcequota-qbhld/replicationcontrollers/test-rc\"") has prevented the request from succeeding (delete replicationControllers test-rc)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:262

Issues about this test specifically: #34372

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:233
Expected error:
    <*errors.StatusError | 0xc8200d4e00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-resourcequota-451jp/configmaps/test-configmap\\\"\") has prevented the request from succeeding (delete configmaps test-configmap)",
            Reason: "InternalError",
            Details: {
                Name: "test-configmap",
                Group: "",
                Kind: "configmaps",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-resourcequota-451jp/configmaps/test-configmap\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-resourcequota-451jp/configmaps/test-configmap\"") has prevented the request from succeeding (delete configmaps test-configmap)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:227

Issues about this test specifically: #34367
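The `*errors.StatusError` dumps above all share the same shape: `Reason: InternalError`, `Code: 500`, and a message of the form `an error on the server ("…") has prevented the request from succeeding (verb kind name)`. The sketch below is a hypothetical, stripped-down stand-in for that type (the real one lives in the Kubernetes API machinery, not here) showing how the escaped-quote message text in these logs is assembled from the status fields:

```go
package main

import "fmt"

// statusError is an illustrative stand-in for the *errors.StatusError
// values in the failure dumps above; all field names here are assumptions
// chosen for the sketch, not the real API types.
type statusError struct {
	ServerMessage string // e.g. `Internal Server Error: "/api/v1/..."`
	Verb          string // e.g. "delete"
	Kind          string // e.g. "configmaps"
	Name          string // e.g. "test-configmap"
	Code          int    // e.g. 500
}

// Error renders the message in the same form seen in the logs; %q quotes
// ServerMessage and backslash-escapes its inner quotes, producing the
// `(\"Internal Server Error: \\\"...\\\"\")` nesting visible above.
func (e *statusError) Error() string {
	return fmt.Sprintf(
		"an error on the server (%q) has prevented the request from succeeding (%s %s %s)",
		e.ServerMessage, e.Verb, e.Kind, e.Name)
}

func main() {
	err := &statusError{
		ServerMessage: `Internal Server Error: "/api/v1/namespaces/ns/configmaps/test-configmap"`,
		Verb:          "delete",
		Kind:          "configmaps",
		Name:          "test-configmap",
		Code:          500,
	}
	fmt.Println(err)
}
```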

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/3779/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:59
Expected error:
    <*errors.errorString | 0xc82099d720>: {
        s: "error waiting for deployment \"test-rolling-update-deployment\" status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment "test-rolling-update-deployment" status to match expectation: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:323

Issues about this test specifically: #31075 #36286 #38041

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:117
Expected error:
    <*errors.errorString | 0xc820c390b0>: {
        s: "expected container test-container success: gave up waiting for pod 'pod-79a1b91a-dc13-11e6-a0a3-0242ac11000a' to be 'success or failure' after 5m0s",
    }
    expected container test-container success: gave up waiting for pod 'pod-79a1b91a-dc13-11e6-a0a3-0242ac11000a' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2319

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:243
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.61.252 --kubeconfig=/workspace/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-xt8z4] []  0xc820dd2220 Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\n error: timed out waiting for any update progress to be made\n [] <nil> 0xc820dd2920 exit status 1 <nil> true [0xc820848790 0xc8208487b8 0xc8208487c8] [0xc820848790 0xc8208487b8 0xc8208487c8] [0xc820848798 0xc8208487b0 0xc8208487c0] [0xafacc0 0xafae20 0xafae20] 0xc820dd6180}:\nCommand stdout:\nCreated update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\n\nstderr:\nerror: timed out waiting for any update progress to be made\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.61.252 --kubeconfig=/workspace/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-xt8z4] []  0xc820dd2220 Created update-demo-kitten
    Scaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling update-demo-kitten up to 1
     error: timed out waiting for any update progress to be made
     [] <nil> 0xc820dd2920 exit status 1 <nil> true [0xc820848790 0xc8208487b8 0xc8208487c8] [0xc820848790 0xc8208487b8 0xc8208487c8] [0xc820848798 0xc8208487b0 0xc8208487c0] [0xafacc0 0xafae20 0xafae20] 0xc820dd6180}:
    Command stdout:
    Created update-demo-kitten
    Scaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling update-demo-kitten up to 1
    
    stderr:
    error: timed out waiting for any update progress to be made
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2219

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:359
Jan 16 09:49:09.533: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:312

Issues about this test specifically: #27673

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:666
Jan 16 09:50:34.221: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:202

Issues about this test specifically: #28774 #31429

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:167
waiting for server pod to start
Expected error:
    <*errors.errorString | 0xc820190b90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:65

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:65
Expected error:
    <*errors.errorString | 0xc820864960>: {
        s: "expected container test-container success: gave up waiting for pod 'pod-97370fed-dc13-11e6-aac1-0242ac11000a' to be 'success or failure' after 5m0s",
    }
    expected container test-container success: gave up waiting for pod 'pod-97370fed-dc13-11e6-aac1-0242ac11000a' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2319

Issues about this test specifically: #33987

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:97
Expected error:
    <*errors.errorString | 0xc8200e77b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:361

Issues about this test specifically: #32375

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:71
Expected error:
    <*errors.errorString | 0xc820728040>: {
        s: "error waiting for deployment \"test-rollover-deployment\" status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment "test-rollover-deployment" status to match expectation: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:595

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/3783/
Multiple broken tests:

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:89
Expected error:
    <*errors.errorString | 0xc821101ba0>: {
        s: "Observed 17 available replicas, less than min required 18",
    }
    Observed 17 available replicas, less than min required 18
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1168

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] Deployment paused deployment should be ignored by the controller {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:74
Jan 16 12:05:11.163: Err : timed out waiting for the condition
. Failed to remove deployment test-paused-deployment pods : &{TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink:/api/v1/namespaces/e2e-tests-deployment-g26uw/pods ResourceVersion:3070} Items:[{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:test-paused-deployment-965846846-x6p2n GenerateName:test-paused-deployment-965846846- Namespace:e2e-tests-deployment-g26uw SelfLink:/api/v1/namespaces/e2e-tests-deployment-g26uw/pods/test-paused-deployment-965846846-x6p2n UID:ee058560-dc26-11e6-aeaf-42010a8000c8 ResourceVersion:1293 Generation:0 CreationTimestamp:2017-01-16 12:04:04 -0800 PST DeletionTimestamp:2017-01-16 12:04:30 -0800 PST DeletionGracePeriodSeconds:0xc8205dc9b0 Labels:map[pod-template-hash:965846846 name:nginx] Annotations:map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"e2e-tests-deployment-g26uw","name":"test-paused-deployment-965846846","uid":"edee7944-dc26-11e6-aeaf-42010a8000c8","apiVersion":"extensions","resourceVersion":"1081"}}
] OwnerReferences:[] Finalizers:[] ClusterName:} Spec:{Volumes:[{Name:default-token-4cuhv VolumeSource:{HostPath:<nil> EmptyDir:<nil> GCEPersistentDisk:<nil> AWSElasticBlockStore:<nil> GitRepo:<nil> Secret:0xc820ea36b0 NFS:<nil> ISCSI:<nil> Glusterfs:<nil> PersistentVolumeClaim:<nil> RBD:<nil> Quobyte:<nil> FlexVolume:<nil> Cinder:<nil> CephFS:<nil> Flocker:<nil> DownwardAPI:<nil> FC:<nil> AzureFile:<nil> ConfigMap:<nil> VsphereVolume:<nil> AzureDisk:<nil>}}] InitContainers:[] Containers:[{Name:nginx Image:gcr.io/google_containers/nginx-slim:0.7 Command:[] Args:[] WorkingDir: Ports:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-4cuhv ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath:}] LivenessProbe:<nil> ReadinessProbe:<nil> Lifecycle:<nil> TerminationMessagePath:/dev/termination-log ImagePullPolicy:IfNotPresent SecurityContext:<nil> Stdin:false StdinOnce:false TTY:false}] RestartPolicy:Always TerminationGracePeriodSeconds:0xc8205dcae0 ActiveDeadlineSeconds:<nil> DNSPolicy:ClusterFirst NodeSelector:map[] ServiceAccountName:default NodeName:gke-bootstrap-e2e-default-pool-3c759f83-jhhg SecurityContext:0xc820e95100 ImagePullSecrets:[] Hostname: Subdomain:} Status:{Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-16 12:04:04 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-16 12:04:04 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [nginx]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-16 12:04:04 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.3 PodIP: StartTime:2017-01-16 12:04:04 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:nginx State:{Waiting:0xc820ec6160 Running:<nil> Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> 
Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/nginx-slim:0.7 ImageID: ContainerID:}]}}]}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:249

Issues about this test specifically: #28067 #28378 #32692 #33256 #34654

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc82141e110>: {
        s: "Only 0 pods started out of 2",
    }
    Only 0 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:345

Issues about this test specifically: #27196 #28998 #32403 #33341

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/3892/
Multiple broken tests:

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:71
Expected error:
    <*errors.errorString | 0xc820cdff80>: {
        s: "error waiting for deployment \"test-rollover-deployment\" status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment "test-rollover-deployment" status to match expectation: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:595

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:235
Jan 18 09:52:09.801: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:199

Issues about this test specifically: #26955

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc820198a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:265

Issues about this test specifically: #32584

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc8200e57b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc8200e97b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #28337

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:281
Expected error:
    <*errors.errorString | 0xc8200ff6a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2378

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:59
Expected error:
    <*errors.errorString | 0xc820faa6f0>: {
        s: "error waiting for deployment \"test-rolling-update-deployment\" status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment "test-rolling-update-deployment" status to match expectation: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:323

Issues about this test specifically: #31075 #36286 #38041

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:118
Expected error:
    <*errors.errorString | 0xc82017ca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:361

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc8200e97b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:745
Jan 18 09:57:39.321: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:202

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:462
Jan 18 09:49:38.006: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2180

Issues about this test specifically: #28064 #28569 #34036

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc820b22670>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:345

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:104
Expected error:
    <*errors.errorString | 0xc8200e57b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:361

Issues about this test specifically: #32830

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet_etc_hosts.go:52
Expected error:
    <*errors.errorString | 0xc8200e97b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:57

Issues about this test specifically: #27023 #34604 #38550

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/3894/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc82080da90>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:345

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.errorString | 0xc8208cc970>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:345

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc8200fd6a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26168 #27450

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/3895/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc820ea1ab0>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:345

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 18 11:48:50.845: Couldn't delete ns: "e2e-tests-kubectl-wq10v": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-kubectl-wq10v/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-kubectl-wq10v/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8208ee370), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:175
starting pod liveness-http in namespace e2e-tests-container-probe-n5lmt
Expected error:
    <*errors.errorString | 0xc8201016a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:334

Issues about this test specifically: #38511

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:359
Jan 18 11:48:09.822: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:312

Issues about this test specifically: #27673

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/4419/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.StatusError | 0xc421247680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"unknown\") has prevented the request from succeeding (post services rc-light-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rc-light-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "unknown",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("unknown") has prevented the request from succeeding (post services rc-light-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:243

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc420c499b0>: {
        s: "Observed 6 available replicas, less than min required 8",
    }
    Observed 6 available replicas, less than min required 8
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1090

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1253
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.243.164 --kubeconfig=/workspace/.kube/config replace -f - --namespace=e2e-tests-kubectl-dd5vm] []  0xc420c940e0  Unable to connect to the server: dial tcp 104.197.243.164:443: i/o timeout\n [] <nil> 0xc4210f1dd0 exit status 1 <nil> <nil> true [0xc4202f6e98 0xc4202f6ec0 0xc4202f6ed0] [0xc4202f6e98 0xc4202f6ec0 0xc4202f6ed0] [0xc4202f6ea0 0xc4202f6eb8 0xc4202f6ec8] [0x9727b0 0x9728b0 0x9728b0] 0xc4210f6d80 <nil>}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: dial tcp 104.197.243.164:443: i/o timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.243.164 --kubeconfig=/workspace/.kube/config replace -f - --namespace=e2e-tests-kubectl-dd5vm] []  0xc420c940e0  Unable to connect to the server: dial tcp 104.197.243.164:443: i/o timeout
     [] <nil> 0xc4210f1dd0 exit status 1 <nil> <nil> true [0xc4202f6e98 0xc4202f6ec0 0xc4202f6ed0] [0xc4202f6e98 0xc4202f6ec0 0xc4202f6ed0] [0xc4202f6ea0 0xc4202f6eb8 0xc4202f6ec8] [0x9727b0 0x9728b0 0x9728b0] 0xc4210f6d80 <nil>}:
    Command stdout:
    
    stderr:
    Unable to connect to the server: dial tcp 104.197.243.164:443: i/o timeout
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2067

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Jan 27 17:43:53.373: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1166

Issues about this test specifically: #26172
