
kubernetes-e2e-gke: broken test run #26742

Closed
k8s-github-robot opened this issue Jun 2, 2016 · 63 comments
Labels: kind/flake (Categorizes issue or PR as related to a flaky test.), priority/critical-urgent (Highest priority. Must be actively worked on as someone's top priority right now.)

@k8s-github-robot
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/8317/

Multiple broken tests:

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:286
Expected error:
    <*errors.errorString | 0xc8200db060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26194 #26338

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:483
Jun  2 13:51:19.964: Missing KubeDNS in kubectl cluster-info

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:55
Expected error:
    <*errors.errorString | 0xc820933120>: {
        s: "pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-06-02 13:49:24 -0700 PDT} FinishedAt:{Time:2016-06-02 13:49:54 -0700 PDT} ContainerID:docker://282c3f5e4fa1e7484ca4e5388059af6f22755ff2dc2d4f4a2fde418ea76eb555}",
    }
    pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-06-02 13:49:24 -0700 PDT} FinishedAt:{Time:2016-06-02 13:49:54 -0700 PDT} ContainerID:docker://282c3f5e4fa1e7484ca4e5388059af6f22755ff2dc2d4f4a2fde418ea76eb555}
not to have occurred

Issues about this test specifically: #26171

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:334
Expected error:
    <*errors.errorString | 0xc8200f20b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26168

Failed: [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dashboard.go:87
Expected error:
    <*errors.errorString | 0xc82088ca30>: {
        s: "error waiting for service kube-system/kubernetes-dashboard to appear: timed out waiting for the condition",
    }
    error waiting for service kube-system/kubernetes-dashboard to appear: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26191

@k8s-github-robot added the kind/flake and area/test-infra labels on Jun 2, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/8320/

Multiple broken tests:

Failed: [k8s.io] Addon update should propagate add-on file changes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc8209b6580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-addon-update-test-spccu/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26125
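All of the Forbidden failures in this run share one shape: the apiserver answers with an HTTP 403 carrying a Status object (the ErrStatus dumps above), which the client wraps in a *errors.StatusError. A minimal sketch of recognizing that payload, with field names taken from the dumps above (the helper is illustrative, not the real client-go API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// status mirrors the fields of the apiserver Status object dumped above.
type status struct {
	Status  string `json:"status"`
	Message string `json:"message"`
	Reason  string `json:"reason"`
	Code    int    `json:"code"`
}

// isForbidden reports whether a raw apiserver error body is a 403 Forbidden
// Status, the pattern every StatusError in this run matches, and returns its
// human-readable message.
func isForbidden(body []byte) (bool, string) {
	var s status
	if err := json.Unmarshal(body, &s); err != nil {
		return false, ""
	}
	return s.Code == 403 && s.Reason == "Forbidden", s.Message
}

func main() {
	body := []byte(`{"status":"Failure","message":"the server does not allow access to the requested resource (get serviceAccounts)","reason":"Forbidden","code":403}`)
	forbidden, msg := isForbidden(body)
	fmt.Println(forbidden, msg)
}
```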

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc8205e1180>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-deployment-kckkq/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:67
Expected error:
    <*errors.StatusError | 0xc820ade180>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post replicasets.extensions)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "extensions",
                Kind: "replicasets",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-a8jl4/replicasets\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post replicasets.extensions)
not to have occurred

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820831700>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-services-yp4a1/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc8208bd880>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-pods-nd62z/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820b30a00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-resourcequota-fkkp9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc82027d400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-emptydir-s57s0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:223
Expected error:
    <*errors.errorString | 0xc820b12750>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.197.14 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-56dyv] []  0xc820b1e960  error validating \"STDIN\": error validating data: the server does not allow access to the requested resource; if you choose to ignore these errors, turn validation off with --validate=false\n [] <nil> 0xc820b1f060 exit status 1 <nil> true [0xc8200f4008 0xc8200f4078 0xc8200f4090] [0xc8200f4008 0xc8200f4078 0xc8200f4090] [0xc8200f4018 0xc8200f4068 0xc8200f4080] [0xbc7ae0 0xbc7c40 0xbc7c40] 0xc820b3c300}:\nCommand stdout:\n\nstderr:\nerror validating \"STDIN\": error validating data: the server does not allow access to the requested resource; if you choose to ignore these errors, turn validation off with --validate=false\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.197.14 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-56dyv] []  0xc820b1e960  error validating "STDIN": error validating data: the server does not allow access to the requested resource; if you choose to ignore these errors, turn validation off with --validate=false
     [] <nil> 0xc820b1f060 exit status 1 <nil> true [0xc8200f4008 0xc8200f4078 0xc8200f4090] [0xc8200f4008 0xc8200f4078 0xc8200f4090] [0xc8200f4018 0xc8200f4068 0xc8200f4080] [0xbc7ae0 0xbc7c40 0xbc7c40] 0xc820b3c300}:
    Command stdout:

    stderr:
    error validating "STDIN": error validating data: the server does not allow access to the requested resource; if you choose to ignore these errors, turn validation off with --validate=false

    error:
    exit status 1

not to have occurred
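The "Error running &{...}" dumps above come from the e2e suite shelling out to the kubectl binary and wrapping a non-zero exit status together with the captured streams, which is why each failure shows separate "Command stdout:" and "stderr:" sections. A hedged sketch of that wrapper pattern (the function name is illustrative, and a plain shell command stands in for kubectl):

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// runCmd runs a command, captures stdout and stderr separately, and on a
// non-zero exit wraps both streams into the error, mirroring the
// "Command stdout:/stderr:/error:" layout in the dumps above.
func runCmd(name string, args ...string) (string, string, error) {
	cmd := exec.Command(name, args...)
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	err := cmd.Run()
	if err != nil {
		err = fmt.Errorf("Error running %v:\nCommand stdout:\n%s\nstderr:\n%s\nerror:\n%v",
			cmd.Args, stdout.String(), stderr.String(), err)
	}
	return stdout.String(), stderr.String(), err
}

func main() {
	// Simulate kubectl failing against a forbidden apiserver resource.
	_, _, err := runCmd("sh", "-c", "echo 'the server does not allow access' >&2; exit 1")
	fmt.Println(err)
}
```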

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/8322/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:672
Expected error:
    <*errors.errorString | 0xc820764180>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.129.194 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-1aqcp] []  0xc82032d6a0  Error from server: error when stopping \"STDIN\": the server does not allow access to the requested resource (delete pods pause)\n [] <nil> 0xc82032dd80 exit status 1 <nil> true [0xc8200aa590 0xc8200aa5b8 0xc8200aa5c8] [0xc8200aa590 0xc8200aa5b8 0xc8200aa5c8] [0xc8200aa598 0xc8200aa5b0 0xc8200aa5c0] [0xbc7ae0 0xbc7c40 0xbc7c40] 0xc820827140}:\nCommand stdout:\n\nstderr:\nError from server: error when stopping \"STDIN\": the server does not allow access to the requested resource (delete pods pause)\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.129.194 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-1aqcp] []  0xc82032d6a0  Error from server: error when stopping "STDIN": the server does not allow access to the requested resource (delete pods pause)
     [] <nil> 0xc82032dd80 exit status 1 <nil> true [0xc8200aa590 0xc8200aa5b8 0xc8200aa5c8] [0xc8200aa590 0xc8200aa5b8 0xc8200aa5c8] [0xc8200aa598 0xc8200aa5b0 0xc8200aa5c0] [0xbc7ae0 0xbc7c40 0xbc7c40] 0xc820827140}:
    Command stdout:

    stderr:
    Error from server: error when stopping "STDIN": the server does not allow access to the requested resource (delete pods pause)

    error:
    exit status 1

not to have occurred

Failed: [k8s.io] Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  2 15:26:09.274: Couldn't delete ns "e2e-tests-job-b1plz": the server does not allow access to the requested resource (delete namespaces e2e-tests-job-b1plz)

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:445
Expected error:
    <*errors.errorString | 0xc820a37550>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.129.194 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-d5kh5] []  0xc8209000a0  Error from server: error when creating \"STDIN\": the server does not allow access to the requested resource (post replicationcontrollers)\n [] <nil> 0xc820900780 exit status 1 <nil> true [0xc8203f0010 0xc8203f01c0 0xc8203f01d0] [0xc8203f0010 0xc8203f01c0 0xc8203f01d0] [0xc8203f01a0 0xc8203f01b8 0xc8203f01c8] [0xbc7ae0 0xbc7c40 0xbc7c40] 0xc82043db00}:\nCommand stdout:\n\nstderr:\nError from server: error when creating \"STDIN\": the server does not allow access to the requested resource (post replicationcontrollers)\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.129.194 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-d5kh5] []  0xc8209000a0  Error from server: error when creating "STDIN": the server does not allow access to the requested resource (post replicationcontrollers)
     [] <nil> 0xc820900780 exit status 1 <nil> true [0xc8203f0010 0xc8203f01c0 0xc8203f01d0] [0xc8203f0010 0xc8203f01c0 0xc8203f01d0] [0xc8203f01a0 0xc8203f01b8 0xc8203f01c8] [0xbc7ae0 0xbc7c40 0xbc7c40] 0xc82043db00}:
    Command stdout:

    stderr:
    Error from server: error when creating "STDIN": the server does not allow access to the requested resource (post replicationcontrollers)

    error:
    exit status 1

not to have occurred

Failed: [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc8207c9680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-emptydir-b56y2/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:276
Expected error:
    <*errors.StatusError | 0xc820ac8900>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get services service1)",
            Reason: "Forbidden",
            Details: {
                Name: "service1",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-services-ir1bq/services/service1\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get services service1)
not to have occurred

Issues about this test specifically: #26128 #26685

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc8205e2080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-pods-ufye9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Mesos schedules pods annotated with roles on correct slaves {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820c0a800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-pods-mcjil/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1039
Expected error:
    <*errors.errorString | 0xc8205f88e0>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.129.194 --kubeconfig=/workspace/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=gcr.io/google_containers/nginx:1.7.9 --namespace=e2e-tests-kubectl-9f6bu] []  <nil>  Error from server: the server does not allow access to the requested resource (post pods)\n [] <nil> 0xc820ae4be0 exit status 1 <nil> true [0xc8201a6098 0xc8201a60b0 0xc8201a60f8] [0xc8201a6098 0xc8201a60b0 0xc8201a60f8] [0xc8201a60a8 0xc8201a60d8] [0xbc7c40 0xbc7c40] 0xc82094e1e0}:\nCommand stdout:\n\nstderr:\nError from server: the server does not allow access to the requested resource (post pods)\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.129.194 --kubeconfig=/workspace/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=gcr.io/google_containers/nginx:1.7.9 --namespace=e2e-tests-kubectl-9f6bu] []  <nil>  Error from server: the server does not allow access to the requested resource (post pods)
     [] <nil> 0xc820ae4be0 exit status 1 <nil> true [0xc8201a6098 0xc8201a60b0 0xc8201a60f8] [0xc8201a6098 0xc8201a60b0 0xc8201a60f8] [0xc8201a60a8 0xc8201a60d8] [0xbc7c40 0xbc7c40] 0xc82094e1e0}:
    Command stdout:

    stderr:
    Error from server: the server does not allow access to the requested resource (post pods)

    error:
    exit status 1

not to have occurred

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:376
Expected error:
    <*errors.StatusError | 0xc820a06f80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post resourceQuotas)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "resourceQuotas",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-resourcequota-0cm0c/resourcequotas\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post resourceQuotas)
not to have occurred

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:65
Expected error:
    <*errors.StatusError | 0xc8200cc900>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/nodes/gke-jenkins-e2e-default-pool-258ba87d-bt97/proxy/logs/\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource
not to have occurred

Failed: [k8s.io] Pods should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  2 15:26:45.662: Couldn't delete ns "e2e-tests-pods-2ms5x": the server does not allow access to the requested resource (delete namespaces e2e-tests-pods-2ms5x)

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:167
getting pod info
Expected error:
    <*errors.StatusError | 0xc820869200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get pods server)",
            Reason: "Forbidden",
            Details: {
                Name: "server",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-prestop-5n6e6/pods/server\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get pods server)
not to have occurred

Failed: [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820266500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-deployment-ugh88/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:64
Expected error:
    <*errors.errorString | 0xc820bad870>: {
        s: "failed to wait for pods responding: Unable to get server version: the server has asked for the client to provide credentials",
    }
    failed to wait for pods responding: Unable to get server version: the server has asked for the client to provide credentials
not to have occurred
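This failure is worth distinguishing from the rest of the run: "the server has asked for the client to provide credentials" is the client-side message for an HTTP 401 (authentication failed), whereas the Forbidden errors elsewhere in this run are 403 (the authenticated caller lacks permission). A small sketch of that mapping, using the two message strings exactly as they appear in these logs (the function itself is illustrative, not real client code):

```go
package main

import "fmt"

// authErrorMessage maps the two HTTP auth failure codes seen in this run to
// the client messages they produce: 401 means the request was not
// authenticated at all; 403 means it was authenticated but not authorized.
func authErrorMessage(code int) string {
	switch code {
	case 401:
		return "the server has asked for the client to provide credentials"
	case 403:
		return "the server does not allow access to the requested resource"
	default:
		return fmt.Sprintf("unexpected status code %d", code)
	}
}

func main() {
	fmt.Println(authErrorMessage(401))
}
```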

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820ab0200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-port-forwarding-mq8b8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:82
Expected error:
    <*errors.StatusError | 0xc820a39680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post jobs.batch)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "batch",
                Kind: "jobs",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/apis/batch/v1/namespaces/e2e-tests-v1job-coyca/jobs\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post jobs.batch)
not to have occurred

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:185
Expected error:
    <*errors.StatusError | 0xc820402680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post jobs.extensions)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "extensions",
                Kind: "jobs",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/apis/extensions/v1beta1/namespaces/e2e-tests-job-4elu3/jobs\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post jobs.extensions)
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:931
Expected error:
    <*errors.errorString | 0xc820c08170>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.129.194 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx:1.7.9 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-f1bzu] []  <nil>  Error from server: the server does not allow access to the requested resource (get replicationControllers e2e-test-nginx-rc-5a821d51b9e6c27152e262ac18d80de5)\n [] <nil> 0xc8202ce920 exit status 1 <nil> true [0xc82004e0a8 0xc82004e0c8 0xc82004e0e8] [0xc82004e0a8 0xc82004e0c8 0xc82004e0e8] [0xc82004e0c0 0xc82004e0e0] [0xbc7c40 0xbc7c40] 0xc82022fa40}:\nCommand stdout:\n\nstderr:\nError from server: the server does not allow access to the requested resource (get replicationControllers e2e-test-nginx-rc-5a821d51b9e6c27152e262ac18d80de5)\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.129.194 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx:1.7.9 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-f1bzu] []  <nil>  Error from server: the server does not allow access to the requested resource (get replicationControllers e2e-test-nginx-rc-5a821d51b9e6c27152e262ac18d80de5)
     [] <nil> 0xc8202ce920 exit status 1 <nil> true [0xc82004e0a8 0xc82004e0c8 0xc82004e0e8] [0xc82004e0a8 0xc82004e0c8 0xc82004e0e8] [0xc82004e0c0 0xc82004e0e0] [0xbc7c40 0xbc7c40] 0xc82022fa40}:
    Command stdout:

    stderr:
    Error from server: the server does not allow access to the requested resource (get replicationControllers e2e-test-nginx-rc-5a821d51b9e6c27152e262ac18d80de5)

    error:
    exit status 1

not to have occurred

Issues about this test specifically: #26138

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a nodePort service updated to loadBalancer. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:311
Expected error:
    <*errors.StatusError | 0xc82095a580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post resourceQuotas)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "resourceQuotas",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-resourcequota-o7yhy/resourcequotas\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post resourceQuotas)
not to have occurred

Failed: [k8s.io] Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:137
Expected error:
    <*errors.errorString | 0xc820095f90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1063
Expected error:
    <*errors.errorString | 0xc820b9e220>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.129.194 --kubeconfig=/workspace/.kube/config --namespace= run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc820938fe0  Error from server: the server does not allow access to the requested resource (get pods)\n [] <nil> 0xc820939700 exit status 1 <nil> true [0xc820502938 0xc820502960 0xc820502970] [0xc820502938 0xc820502960 0xc820502970] [0xc820502940 0xc820502958 0xc820502968] [0xbc7ae0 0xbc7c40 0xbc7c40] 0xc820829800}:\nCommand stdout:\n\nstderr:\nError from server: the server does not allow access to the requested resource (get pods)\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.129.194 --kubeconfig=/workspace/.kube/config --namespace= run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc820938fe0  Error from server: the server does not allow access to the requested resource (get pods)
     [] <nil> 0xc820939700 exit status 1 <nil> true [0xc820502938 0xc820502960 0xc820502970] [0xc820502938 0xc820502960 0xc820502970] [0xc820502940 0xc820502958 0xc820502968] [0xbc7ae0 0xbc7c40 0xbc7c40] 0xc820829800}:
    Command stdout:

    stderr:
    Error from server: the server does not allow access to the requested resource (get pods)

    error:
    exit status 1

not to have occurred

Issues about this test specifically: #26728

Failed: [k8s.io] EmptyDir wrapper volumes should becomes running {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:179
Jun  2 15:28:23.649: unable to create test secret : the server does not allow access to the requested resource (post secrets)

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820f10d80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-pods-emg1d/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] SSH should SSH to all nodes and run commands {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820cede80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-ssh-diy4f/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26129

Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:839
Expected error:
    <*errors.StatusError | 0xc820c72100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post services)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-services-yrs5l/services\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post services)
not to have occurred

Failed: [k8s.io] Downward API volume should provide container's memory limit {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc82033f180>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-downward-api-uvl5b/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820cb2200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-containers-mmrvu/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:196
Jun  2 15:26:20.686: Pod did not start running: the server does not allow access to the requested resource (get pods)

Failed: [k8s.io] Addon update should propagate add-on file changes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc82094a600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-addon-update-test-pgqi9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26125

Failed: [k8s.io] Downward API should provide pod IP as an env var {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820958600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-downward-api-5kq2m/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a nodePort service. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc8209a0880>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-resourcequota-gwzl1/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820a37380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-configmap-rbhii/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820536200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-svcaccounts-dqmb1/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  2 15:26:05.657: Couldn't delete ns "e2e-tests-v1job-8akf1": the server does not allow access to the requested resource (delete namespaces e2e-tests-v1job-8akf1)

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820800b00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-container-probe-xij4v/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820bac200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-viyq2/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Pods should support remote command execution over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1243
Jun  2 15:28:35.539: Failed to create pod: the server does not allow access to the requested resource (post pods)

Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:61
Expected error:
    <*errors.StatusError | 0xc8207a2e00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/proxy/nodes/gke-jenkins-e2e-default-pool-258ba87d-bt97/logs/\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource
not to have occurred

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:493
Expected error:
    <*errors.errorString | 0xc820a77e00>: {
        s: "failed to update pod: the server does not allow access to the requested resource (put pods pod-update-e06b981f-2910-11e6-93fb-0242ac11000f)",
    }
    failed to update pod: the server does not allow access to the requested resource (put pods pod-update-e06b981f-2910-11e6-93fb-0242ac11000f)
not to have occurred

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc8205da580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-volume-provisioning-spwog/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26682

Failed: [k8s.io] MetricsGrabber should grab all metrics from API server. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820736500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-h77s0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred
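Nearly every failure in this run is the same 403 from the apiserver, differing only in the denied verb and resource (`get serviceAccounts`, `post jobs.batch`, `post secrets`, …), which points at a cluster-wide auth breakage rather than per-test flakes. A quick triage sketch, assuming you have the raw build log as text — the regex below is derived from the denial messages quoted above, and the sample log here is an abbreviated stand-in for the real output:

```python
import re
from collections import Counter

# Abbreviated stand-in for the raw e2e build log; in practice, read the
# full console output from gubernator/Jenkins instead.
LOG = """
the server does not allow access to the requested resource (get serviceAccounts)
the server does not allow access to the requested resource (post jobs.batch)
the server does not allow access to the requested resource (post jobs.extensions)
the server does not allow access to the requested resource (post resourceQuotas)
the server does not allow access to the requested resource (post services)
the server does not allow access to the requested resource (get pods)
the server does not allow access to the requested resource (post secrets)
the server does not allow access to the requested resource (get serviceAccounts)
"""

# Matches the "(verb resource)" suffix the apiserver appends to Forbidden
# errors, e.g. "(get serviceAccounts)" or "(post jobs.batch)".
DENIAL = re.compile(
    r"the server does not allow access to the requested resource "
    r"\((\w+) ([\w.]+)\)"
)

def tally_denials(text):
    """Count (verb, resource) pairs across a blob of e2e failure output."""
    return Counter(DENIAL.findall(text))

if __name__ == "__main__":
    # If one credential/authorizer problem is the root cause, the tally is
    # dominated by a handful of verb/resource pairs across many tests.
    for (verb, resource), n in tally_denials(LOG).most_common():
        print(f"{n:3d}  {verb:6s} {resource}")
```

If the tally shows broad denials across unrelated resources, the next step is checking the master's authorization setup rather than the individual tests.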

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/8323/

Multiple broken tests:

Failed: [k8s.io] Downward API volume should provide container's cpu limit {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820ae2580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-downward-api-596ak/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820c36500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-pods-15mfn/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26131

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:96
Expected error:
    <*errors.StatusError | 0xc820aa0900>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post jobs.extensions)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "extensions",
                Kind: "jobs",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/apis/extensions/v1beta1/namespaces/e2e-tests-job-t5128/jobs\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post jobs.extensions)
not to have occurred

Failed: [k8s.io] Deployment deployment should support rollback {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:76
Expected error:
    <*errors.StatusError | 0xc820ac0100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post deployments.extensions test-rollback-deployment)",
            Reason: "Forbidden",
            Details: {
                Name: "test-rollback-deployment",
                Group: "extensions",
                Kind: "deployments",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-6krcy/deployments/test-rollback-deployment/rollback\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post deployments.extensions test-rollback-deployment)
not to have occurred

Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820427000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-services-9yfsk/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/configmap.go:53
Jun  2 16:06:37.570: unable to create test configMap : the server does not allow access to the requested resource (post configmaps)

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820326200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubelet-rydko/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:286
Expected error:
    <*errors.StatusError | 0xc8203b0380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get pods)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-dns-9yk15/pods?fieldSelector=metadata.name%3Ddns-test-863da4fb-2916-11e6-b191-0242ac11001e\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get pods)
not to have occurred

Issues about this test specifically: #26194 #26338

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:97
Expected error:
    <*errors.StatusError | 0xc820afe980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post pods)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-downward-api-ypaar/pods\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post pods)
not to have occurred

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820998480>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-volume-provisioning-1fa86/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26682

Failed: [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc8200d9e00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-emptydir-vmc8t/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820763180>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-containers-vv36l/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred
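Every failure above carries the same apiserver error shape: a v1 `Status` object with `code: 403`, `reason: Forbidden`, and a single `UnexpectedServerResponse` cause naming the rejected path. A minimal sketch of pulling the useful fields out of such a payload — the JSON below is illustrative (the namespace path is elided), with field names taken from the `Status` objects dumped in this report:

```python
import json

# An illustrative 403 Status payload in the shape seen in the failures
# above (field names follow the v1 Status API object; the namespace
# path is a placeholder, not a real namespace from this run).
payload = json.loads("""
{
  "kind": "Status",
  "apiVersion": "v1",
  "status": "Failure",
  "message": "the server does not allow access to the requested resource (get serviceAccounts)",
  "reason": "Forbidden",
  "details": {
    "kind": "serviceAccounts",
    "causes": [
      {
        "reason": "UnexpectedServerResponse",
        "message": "Forbidden: \\"/api/v1/watch/namespaces/.../serviceaccounts\\""
      }
    ]
  },
  "code": 403
}
""")

def summarize(status):
    """Return '<code> <reason>: <first cause message>' for a Status dict."""
    causes = status.get("details", {}).get("causes") or [{}]
    return f"{status['code']} {status['reason']}: {causes[0].get('message', '')}"

print(summarize(payload))
```

Grouping the failures by this summary line makes it obvious they share one root cause (blanket 403s from the apiserver) rather than being independent test bugs.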

Failed: [k8s.io] V1Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:189
Expected error:
    <*errors.StatusError | 0xc8207f3880>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post jobs.batch)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "batch",
                Kind: "jobs",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/apis/batch/v1/namespaces/e2e-tests-v1job-axipa/jobs\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post jobs.batch)
not to have occurred

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:659
Expected error:
    <*errors.StatusError | 0xc8208f8600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post pods)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-resourcequota-xvzqy/pods\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post pods)
not to have occurred

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:133
Expected error:
    <*errors.StatusError | 0xc820988c80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get resourceQuotas test-quota)",
            Reason: "Forbidden",
            Details: {
                Name: "test-quota",
                Group: "",
                Kind: "resourceQuotas",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-resourcequota-313xb/resourcequotas/test-quota\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get resourceQuotas test-quota)
not to have occurred

Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:162
Expected error:
    <*errors.StatusError | 0xc820957700>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get pods)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-job-l9hs4/pods?labelSelector=job%3Dscale-down\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get pods)
not to have occurred

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc8200fdb00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-emptydir-g4i2s/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Networking should function for intra-pod communication [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820a13080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-nettest-v7asm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:47
Expected error:
    <*errors.StatusError | 0xc8208e6080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get pods)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-replicaset-4u7b7/pods?fieldSelector=metadata.name%3Dmy-hostname-private-90a6b4dd-2916-11e6-9c1e-0242ac11001e-nwoxk\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get pods)
not to have occurred

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820542880>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-replication-controller-7y8hi/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Pods should not be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc8209d1380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-pods-fcsx5/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:371
Expected error:
    <*errors.StatusError | 0xc82027e880>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post services)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-dns-udg6r/services\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post services)
not to have occurred

Issues about this test specifically: #26180

Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/configmap.go:202
Jun  2 16:08:35.558: Failed to create pod: the server does not allow access to the requested resource (post pods)

Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc8205e8a80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-cadvisor-by9t6/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:209
Expected error:
    <*errors.errorString | 0xc820385460>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.129.194 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-yknto] []  0xc820b1f8a0  error validating \"STDIN\": error validating data: the server does not allow access to the requested resource; if you choose to ignore these errors, turn validation off with --validate=false\n [] <nil> 0xc820b1fe00 exit status 1 <nil> true [0xc8202b8490 0xc8202b84d8 0xc8202b84e8] [0xc8202b8490 0xc8202b84d8 0xc8202b84e8] [0xc8202b8498 0xc8202b84c8 0xc8202b84e0] [0xbc7ae0 0xbc7c40 0xbc7c40] 0xc820731080}:\nCommand stdout:\n\nstderr:\nerror validating \"STDIN\": error validating data: the server does not allow access to the requested resource; if you choose to ignore these errors, turn validation off with --validate=false\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.129.194 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-yknto] []  0xc820b1f8a0  error validating "STDIN": error validating data: the server does not allow access to the requested resource; if you choose to ignore these errors, turn validation off with --validate=false
     [] <nil> 0xc820b1fe00 exit status 1 <nil> true [0xc8202b8490 0xc8202b84d8 0xc8202b84e8] [0xc8202b8490 0xc8202b84d8 0xc8202b84e8] [0xc8202b8498 0xc8202b84c8 0xc8202b84e0] [0xbc7ae0 0xbc7c40 0xbc7c40] 0xc820731080}:
    Command stdout:

    stderr:
    error validating "STDIN": error validating data: the server does not allow access to the requested resource; if you choose to ignore these errors, turn validation off with --validate=false

    error:
    exit status 1

not to have occurred

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820882d00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-emptydir-6n9y4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Generated release_1_3 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:244
Jun  2 16:08:37.103: Failed to set up watch: the server does not allow access to the requested resource (get pods)

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:260
0: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:portname1/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:portname1/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/https:proxy-service-63ufm-ck3cr:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/https:proxy-service-63ufm-ck3cr:443/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/proxy-service-63ufm-ck3cr/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/proxy-service-63ufm-ck3cr/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
3: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/http:proxy-service-63ufm-ck3cr:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/http:proxy-service-63ufm-ck3cr:160/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:81/ took 38.878058899s > 30s
0: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/http:proxy-service-63ufm-ck3cr:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/http:proxy-service-63ufm-ck3cr:160/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/proxy-service-63ufm-ck3cr:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/proxy-service-63ufm-ck3cr:162/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/https:proxy-service-63ufm:443/ took 34.753249455s > 30s
0: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/services/proxy-service-63ufm:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-o4vwi/services/proxy-service-63ufm:portname1/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
3: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:portname1/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/http:proxy-service-63ufm-ck3cr:80/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/http:proxy-service-63ufm-ck3cr:80/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/http:proxy-service-63ufm-ck3cr:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/http:proxy-service-63ufm-ck3cr:162/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/https:proxy-service-63ufm:tlsportname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/https:proxy-service-63ufm:tlsportname1/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/proxy-service-63ufm-ck3cr:80/proxy/ took 31.113995943s > 30s
0: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/https:proxy-service-63ufm:tlsportname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/https:proxy-service-63ufm:tlsportname2/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:80/ took 36.799020448s > 30s
1: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/http:proxy-service-63ufm-ck3cr:80/proxy/ took 32.744181106s > 30s
0: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/pods/http:proxy-service-63ufm-ck3cr:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/pods/http:proxy-service-63ufm-ck3cr:160/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/https:proxy-service-63ufm-ck3cr:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/https:proxy-service-63ufm-ck3cr:460/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/proxy-service-63ufm-ck3cr:80/proxy/ took 40.050279815s > 30s
0: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/proxy-service-63ufm:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/proxy-service-63ufm:portname2/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/https:proxy-service-63ufm:444/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/https:proxy-service-63ufm:444/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/https:proxy-service-63ufm:444/ took 39.047344231s > 30s
0: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/pods/proxy-service-63ufm-ck3cr:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/pods/proxy-service-63ufm-ck3cr:162/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/pods/http:proxy-service-63ufm-ck3cr:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/pods/http:proxy-service-63ufm-ck3cr:160/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/https:proxy-service-63ufm-ck3cr:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/https:proxy-service-63ufm-ck3cr:460/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/services/https:proxy-service-63ufm:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-o4vwi/services/https:proxy-service-63ufm:tlsportname1/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/proxy-service-63ufm-ck3cr:162/proxy/ took 36.941456761s > 30s
2: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/proxy-service-63ufm:portname1/ took 32.153597987s > 30s
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/proxy-service-63ufm:80/ took 39.67416006s > 30s
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:portname2/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:portname2/proxy/ took 37.320257222s > 30s
1: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:portname1/proxy/ took 39.847440481s > 30s
1: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/services/proxy-service-63ufm:portname2/proxy/ took 40.057078951s > 30s
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/pods/proxy-service-63ufm-ck3cr:162/ took 43.053170376s > 30s
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:portname1/ took 40.126752088s > 30s
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/proxy-service-63ufm:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/proxy-service-63ufm:81/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/https:proxy-service-63ufm:tlsportname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/https:proxy-service-63ufm:tlsportname1/\"" field:"" > retryAfterSeconds:0  Code:403}
2: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/http:proxy-service-63ufm-ck3cr:160/proxy/ took 36.666055006s > 30s
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/pods/proxy-service-63ufm-ck3cr:80/ took 40.77463287s > 30s
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/pods/http:proxy-service-63ufm-ck3cr:80/ took 42.620727694s > 30s
1: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/services/proxy-service-63ufm:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-o4vwi/services/proxy-service-63ufm:portname1/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/http:proxy-service-63ufm-ck3cr:162/proxy/ took 40.08511061s > 30s
1: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/proxy-service-63ufm-ck3cr/proxy/ took 42.111810476s > 30s
1: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/proxy-service-63ufm-ck3cr:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/proxy-service-63ufm-ck3cr:160/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
2: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:portname1/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/https:proxy-service-63ufm-ck3cr:462/proxy/ took 44.224988172s > 30s
2: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:81/\"" field:"" > retryAfterSeconds:0  Code:403}
2: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/pods/proxy-service-63ufm-ck3cr:80/ took 39.7609785s > 30s
2: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/proxy-service-63ufm-ck3cr:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/proxy-service-63ufm-ck3cr:162/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:81/ took 44.166424881s > 30s
2: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/proxy-service-63ufm:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/proxy-service-63ufm:81/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/https:proxy-service-63ufm:tlsportname2/ took 43.105450176s > 30s
2: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/services/proxy-service-63ufm:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-o4vwi/services/proxy-service-63ufm:portname1/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
2: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/https:proxy-service-63ufm:tlsportname2/ took 39.884800937s > 30s
2: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:portname2/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
2: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/http:proxy-service-63ufm-ck3cr:80/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/http:proxy-service-63ufm-ck3cr:80/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/proxy-service-63ufm-ck3cr:160/proxy/ took 50.705741215s > 30s
2: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/http:proxy-service-63ufm-ck3cr:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/http:proxy-service-63ufm-ck3cr:162/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
2: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/services/https:proxy-service-63ufm:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-o4vwi/services/https:proxy-service-63ufm:tlsportname2/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
2: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/proxy-service-63ufm:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/proxy-service-63ufm:portname2/\"" field:"" > retryAfterSeconds:0  Code:403}
2: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/proxy-service-63ufm-ck3cr/proxy/ took 42.510731104s > 30s
2: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/https:proxy-service-63ufm-ck3cr:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/https:proxy-service-63ufm-ck3cr:460/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/pods/http:proxy-service-63ufm-ck3cr:162/ took 50.369050311s > 30s
4: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:portname1/proxy/ took 31.053918222s > 30s
4: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/proxy-service-63ufm-ck3cr:162/proxy/ took 30.904273562s > 30s
3: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/proxy-service-63ufm-ck3cr:162/proxy/ took 40.698754853s > 30s
4: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/https:proxy-service-63ufm:tlsportname2/ took 30.994713659s > 30s
4: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:portname1/ took 40.505964567s > 30s
5: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:portname1/\"" field:"" > retryAfterSeconds:0  Code:403}
5: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/https:proxy-service-63ufm:tlsportname2/ took 39.437966318s > 30s
5: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/proxy-service-63ufm-ck3cr:162/proxy/ took 46.871260976s > 30s
6: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/https:proxy-service-63ufm:444/ took 35.408228506s > 30s
6: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/https:proxy-service-63ufm:tlsportname1/ took 38.41795073s > 30s
6: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/https:proxy-service-63ufm:tlsportname2/ took 38.91040855s > 30s
6: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/services/proxy-service-63ufm:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
6: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/http:proxy-service-63ufm-ck3cr:80/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
6: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/pods/proxy-service-63ufm-ck3cr:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
7: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:81/ took 32.183231572s > 30s
6: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/https:proxy-service-63ufm-ck3cr:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
7: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/pods/proxy-service-63ufm-ck3cr:80/ took 33.04351682s > 30s
8: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:portname1/proxy/ took 32.297320689s > 30s
8: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/pods/http:proxy-service-63ufm-ck3cr:80/ took 34.592263567s > 30s
6: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:portname2/proxy/ took 45.807521728s > 30s
6: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/pods/http:proxy-service-63ufm-ck3cr:160/ took 46.487221787s > 30s
8: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/proxy-service-63ufm-ck3cr:162/proxy/ took 37.681600871s > 30s
8: path /api/v1/proxy/namespaces/e2e-tests-proxy-o4vwi/services/http:proxy-service-63ufm:portname1/ took 39.366758535s > 30s
8: path /api/v1/namespaces/e2e-tests-proxy-o4vwi/pods/http:proxy-service-63ufm-ck3cr:160/proxy/ took 49.841747546s > 30s

Issues about this test specifically: #26164 #26210

Failed: [k8s.io] Downward API volume should provide container's cpu request {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc8208f5100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-downward-api-m48x4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.StatusError | 0xc82088a180>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post jobs.extensions)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "extensions",
                Kind: "jobs",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/apis/extensions/v1beta1/namespaces/e2e-tests-job-ntgol/jobs\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post jobs.extensions)
not to have occurred

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/8326/

Multiple broken tests:

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  2 17:16:10.304: Couldn't delete ns "e2e-tests-port-forwarding-pfab1": the server does not allow access to the requested resource (delete namespaces e2e-tests-port-forwarding-pfab1)

Failed: [k8s.io] EmptyDir wrapper volumes should becomes running {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:179
Jun  2 17:15:30.282: unable to delete git server pod git-server-0f625a43-2920-11e6-b7d2-0242ac110014: the server does not allow access to the requested resource (delete pods git-server-0f625a43-2920-11e6-b7d2-0242ac110014)

Failed: [k8s.io] Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc8208bf080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-job-asocs/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:196
Jun  2 17:15:38.794: Error retrieving logs: the server does not allow access to the requested resource (get pods pfpod)

Failed: [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820cc4200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-deployment-v0bmt/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:276
Expected error:
    <*errors.errorString | 0xc820664510>: {
        s: "error while stopping RC: service1: error getting replication controllers: error getting replication controllers: the server does not allow access to the requested resource (get replicationControllers)",
    }
    error while stopping RC: service1: error getting replication controllers: error getting replication controllers: the server does not allow access to the requested resource (get replicationControllers)
not to have occurred

Issues about this test specifically: #26128 #26685

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:55
Expected error:
    <*errors.errorString | 0xc820778510>: {
        s: "pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-06-02 17:17:17 -0700 PDT} FinishedAt:{Time:2016-06-02 17:17:47 -0700 PDT} ContainerID:docker://db0c8ce469fa0e47b31ed2f0652f9151515d7df78aa18557d6b4a5154cd8f635}",
    }
    pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-06-02 17:17:17 -0700 PDT} FinishedAt:{Time:2016-06-02 17:17:47 -0700 PDT} ContainerID:docker://db0c8ce469fa0e47b31ed2f0652f9151515d7df78aa18557d6b4a5154cd8f635}
not to have occurred

Issues about this test specifically: #26171

Failed: ThirdParty resources Simple Third Party creating/deleting thirdparty objects works [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/third-party.go:176
Jun  2 17:16:12.212: expected:
&e2e.Foo{TypeMeta:unversioned.TypeMeta{Kind:"Foo", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"foo", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil)}, SomeField:"bar", OtherField:10}
saw:
&e2e.Foo{TypeMeta:unversioned.TypeMeta{Kind:"Status", APIVersion:"v1"}, ObjectMeta:api.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil)}, SomeField:"", OtherField:0}
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"thirdpartyresourcedatas.extensions \"foo\" not found","reason":"NotFound","details":{"name":"foo","group":"extensions","kind":"thirdpartyresourcedatas"},"code":404}


Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/configmap.go:53
Jun  2 17:14:43.866: unable to delete configMap configmap-test-volume-map-fb4800db-291f-11e6-b585-0242ac110014: the server does not allow access to the requested resource (delete configmaps configmap-test-volume-map-fb4800db-291f-11e6-b585-0242ac110014)

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:334
Expected error:
    <*errors.errorString | 0xc820015180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26168

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:371
Expected error:
    <*errors.errorString | 0xc8200fa0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26180

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:265
Jun  2 17:30:46.180: Frontend service did not start serving content in 600 seconds.

Issues about this test specifically: #26175

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:657
Expected error:
    <*errors.errorString | 0xc820ec2300>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.200.107 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-2xpkl] []  0xc820984000  Error from server: error when creating \"STDIN\": the server does not allow access to the requested resource (post replicationcontrollers)\n [] <nil> 0xc820984660 exit status 1 <nil> true [0xc820eee048 0xc820eee070 0xc820eee080] [0xc820eee048 0xc820eee070 0xc820eee080] [0xc820eee050 0xc820eee068 0xc820eee078] [0xbc7ae0 0xbc7c40 0xbc7c40] 0xc820ec91a0}:\nCommand stdout:\n\nstderr:\nError from server: error when creating \"STDIN\": the server does not allow access to the requested resource (post replicationcontrollers)\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.200.107 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-2xpkl] []  0xc820984000  Error from server: error when creating "STDIN": the server does not allow access to the requested resource (post replicationcontrollers)
     [] <nil> 0xc820984660 exit status 1 <nil> true [0xc820eee048 0xc820eee070 0xc820eee080] [0xc820eee048 0xc820eee070 0xc820eee080] [0xc820eee050 0xc820eee068 0xc820eee078] [0xbc7ae0 0xbc7c40 0xbc7c40] 0xc820ec91a0}:
    Command stdout:

    stderr:
    Error from server: error when creating "STDIN": the server does not allow access to the requested resource (post replicationcontrollers)

    error:
    exit status 1

not to have occurred

Issues about this test specifically: #26209

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820a24300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-proxy-gxy2x/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26164 #26210

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820348680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-services-a3t4h/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26678

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc8207be200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-deployment-cxp6q/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26509

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:286
Expected error:
    <*errors.errorString | 0xc8201060b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26194 #26338

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:185
Expected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: ""},
                Status: "Failure",
                Message: "the server does not allow access to the requested resource (delete pods foo-kstxc)",
                Reason: "Forbidden",
                Details: {
                    Name: "foo-kstxc",
                    Group: "",
                    Kind: "pods",
                    Causes: [
                        {
                            Type: "UnexpectedServerResponse",
                            Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-job-54mkr/pods/foo-kstxc\"",
                            Field: "",
                        },
                    ],
                    RetryAfterSeconds: 0,
                },
                Code: 403,
            },
        },
    ]
    the server does not allow access to the requested resource (delete pods foo-kstxc)
not to have occurred

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:960
Jun  2 17:19:08.949: expected un-ready endpoint for Service webserver within 5m0s, stdout: 

Issues about this test specifically: #26172

Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  2 17:14:55.926: Couldn't delete ns "e2e-tests-job-vy10h": the server does not allow access to the requested resource (delete namespaces e2e-tests-job-vy10h)

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:401
Expected
    <*errors.StatusError | 0xc820996080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (delete jobs.extensions run-test)",
            Reason: "Forbidden",
            Details: {
                Name: "run-test",
                Group: "extensions",
                Kind: "jobs",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-r0fqi/jobs/run-test\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
to be nil

Issues about this test specifically: #26324

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  2 17:14:18.962: Couldn't delete ns "e2e-tests-services-hwrm9": the server does not allow access to the requested resource (delete namespaces e2e-tests-services-hwrm9)

Failed: [k8s.io] Deployment deployment should create new pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:55
Expected error:
    <*errors.StatusError | 0xc821090100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post deployments.extensions)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "extensions",
                Kind: "deployments",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-rnam1/deployments\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post deployments.extensions)
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc8204fe680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-47jyc/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/8335/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1063
Expected error:
    <*errors.errorString | 0xc820cbc2a0>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.170.13 --kubeconfig=/workspace/.kube/config --namespace= run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc820b076e0 abcd1234stdin closed\n Error from server: the server does not allow access to the requested resource (get jobs.batch e2e-test-rm-busybox-job)\n [] <nil> 0xc820b07d00 exit status 1 <nil> true [0xc82004e2a0 0xc82004e440 0xc82004e458] [0xc82004e2a0 0xc82004e440 0xc82004e458] [0xc82004e2d0 0xc82004e430 0xc82004e448] [0xbc87e0 0xbc8940 0xbc8940] 0xc820749d40}:\nCommand stdout:\nabcd1234stdin closed\n\nstderr:\nError from server: the server does not allow access to the requested resource (get jobs.batch e2e-test-rm-busybox-job)\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.170.13 --kubeconfig=/workspace/.kube/config --namespace= run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc820b076e0 abcd1234stdin closed
     Error from server: the server does not allow access to the requested resource (get jobs.batch e2e-test-rm-busybox-job)
     [] <nil> 0xc820b07d00 exit status 1 <nil> true [0xc82004e2a0 0xc82004e440 0xc82004e458] [0xc82004e2a0 0xc82004e440 0xc82004e458] [0xc82004e2d0 0xc82004e430 0xc82004e448] [0xbc87e0 0xbc8940 0xbc8940] 0xc820749d40}:
    Command stdout:
    abcd1234stdin closed

    stderr:
    Error from server: the server does not allow access to the requested resource (get jobs.batch e2e-test-rm-busybox-job)

    error:
    exit status 1

not to have occurred

Issues about this test specifically: #26728

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  2 20:20:46.342: Couldn't delete ns "e2e-tests-pods-7vp4u": the server does not allow access to the requested resource (delete namespaces e2e-tests-pods-7vp4u)

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:123
Expected error:
    <*errors.StatusError | 0xc820340980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (delete pods pvc-volume-tester-c0vtq)",
            Reason: "Forbidden",
            Details: {
                Name: "pvc-volume-tester-c0vtq",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-volume-provisioning-4qg7g/pods/pvc-volume-tester-c0vtq\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (delete pods pvc-volume-tester-c0vtq)
not to have occurred

Issues about this test specifically: #26682

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/configmap.go:153
Expected error:
    <*errors.StatusError | 0xc820aaa580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get pods)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-configmap-39skz/pods?fieldSelector=metadata.name%3Dpod-configmaps-fdca1a1a-2939-11e6-b5a1-0242ac110012\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get pods)
not to have occurred

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/8339/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:657
Expected error:
    <*errors.StatusError | 0xc8207f6380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get endpoints rm2)",
            Reason: "Forbidden",
            Details: {
                Name: "rm2",
                Group: "",
                Kind: "endpoints",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-kubectl-xcv2h/endpoints/rm2\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get endpoints rm2)
not to have occurred

Issues about this test specifically: #26209

Failed: [k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  2 21:39:16.162: Couldn't delete ns "e2e-tests-pods-lpg9g": the server does not allow access to the requested resource (delete namespaces e2e-tests-pods-lpg9g)

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  2 21:39:24.101: Couldn't delete ns "e2e-tests-downward-api-rxvjm": the server does not allow access to the requested resource (delete namespaces e2e-tests-downward-api-rxvjm)

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820bce200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-replicaset-rvk65/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:134
Expected error:
    <*errors.StatusError | 0xc820377b00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post pods)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-downward-api-gns89/pods\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post pods)
not to have occurred

Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:902
Expected error:
    <*errors.StatusError | 0xc82026b300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get pods)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-services-5imkf/pods?fieldSelector=metadata.name%3Dhostexec\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get pods)
not to have occurred

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  2 21:39:57.418: Couldn't delete ns "e2e-tests-e2e-kubelet-etc-hosts-6aobs": the server does not allow access to the requested resource (delete namespaces e2e-tests-e2e-kubelet-etc-hosts-6aobs)

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/8340/

Multiple broken tests:

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:70
Expected
    <int32>: 4
to be <
    <int32>: 4

Issues about this test specifically: #26509

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820e56380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-downward-api-gzyb1/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:787
Jun  2 22:08:15.676: Verified 0 of 1 pods , error : the server does not allow access to the requested resource (get pods)

Issues about this test specifically: #26126

Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820cdcb00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-services-89mau/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:61
Expected error:
    <*errors.StatusError | 0xc820d5ce80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post replicasets.extensions)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "extensions",
                Kind: "replicasets",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-rl2ew/replicasets\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post replicasets.extensions)
not to have occurred

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:101
Expected error:
    <*errors.StatusError | 0xc820e37b80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post horizontalPodAutoscalers.autoscaling)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "autoscaling",
                Kind: "horizontalPodAutoscalers",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/apis/autoscaling/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-et3q4/horizontalpodautoscalers\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post horizontalPodAutoscalers.autoscaling)
not to have occurred

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  2 22:07:16.261: All nodes should be ready after test, the server has asked for the client to provide credentials (get nodes)

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820980880>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-z8rzh/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26175

Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  2 22:05:46.787: Couldn't delete ns "e2e-tests-proxy-6rivw": the server does not allow access to the requested resource (delete namespaces e2e-tests-proxy-6rivw)

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:260
19: path /api/v1/namespaces/e2e-tests-proxy-w7bds/services/proxy-service-woffs:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
19: path /api/v1/proxy/namespaces/e2e-tests-proxy-w7bds/services/https:proxy-service-woffs:443/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
19: path /api/v1/proxy/namespaces/e2e-tests-proxy-w7bds/services/proxy-service-woffs:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
19: path /api/v1/proxy/namespaces/e2e-tests-proxy-w7bds/services/https:proxy-service-woffs:tlsportname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
19: path /api/v1/proxy/namespaces/e2e-tests-proxy-w7bds/pods/http:proxy-service-woffs-g1l7g:80/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
19: path /api/v1/proxy/namespaces/e2e-tests-proxy-w7bds/services/proxy-service-woffs:80/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
19: path /api/v1/namespaces/e2e-tests-proxy-w7bds/services/http:proxy-service-woffs:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
19: path /api/v1/proxy/namespaces/e2e-tests-proxy-w7bds/pods/proxy-service-woffs-g1l7g:80/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
19: path /api/v1/proxy/namespaces/e2e-tests-proxy-w7bds/pods/proxy-service-woffs-g1l7g:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
19: path /api/v1/namespaces/e2e-tests-proxy-w7bds/pods/proxy-service-woffs-g1l7g:80/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
19: path /api/v1/namespaces/e2e-tests-proxy-w7bds/pods/https:proxy-service-woffs-g1l7g:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
19: path /api/v1/proxy/namespaces/e2e-tests-proxy-w7bds/services/http:proxy-service-woffs:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
19: path /api/v1/proxy/namespaces/e2e-tests-proxy-w7bds/pods/http:proxy-service-woffs-g1l7g:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
19: path /api/v1/namespaces/e2e-tests-proxy-w7bds/pods/https:proxy-service-woffs-g1l7g:443/proxy/ took 38.876578889s > 30s
19: path /api/v1/proxy/namespaces/e2e-tests-proxy-w7bds/services/proxy-service-woffs:81/ took 55.109110882s > 30s

Issues about this test specifically: #26164 #26210

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:403
Expected error:
    <*errors.StatusError | 0xc820abf000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get pods)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-services-hqfw2/pods?fieldSelector=metadata.name%3Dhostexec\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get pods)
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:276
Expected error:
    <*errors.errorString | 0xc820a04170>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.206.85 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-42nzk] []  0xc820b31460  error: You must be logged in to the server (the server has asked for the client to provide credentials)\n [] <nil> 0xc820b31b60 exit status 1 <nil> true [0xc8200aa8b8 0xc8200aa8e0 0xc8200aa8f0] [0xc8200aa8b8 0xc8200aa8e0 0xc8200aa8f0] [0xc8200aa8c0 0xc8200aa8d8 0xc8200aa8e8] [0xbc87e0 0xbc8940 0xbc8940] 0xc820b2df80}:\nCommand stdout:\n\nstderr:\nerror: You must be logged in to the server (the server has asked for the client to provide credentials)\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.206.85 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-42nzk] []  0xc820b31460  error: You must be logged in to the server (the server has asked for the client to provide credentials)
     [] <nil> 0xc820b31b60 exit status 1 <nil> true [0xc8200aa8b8 0xc8200aa8e0 0xc8200aa8f0] [0xc8200aa8b8 0xc8200aa8e0 0xc8200aa8f0] [0xc8200aa8c0 0xc8200aa8d8 0xc8200aa8e8] [0xbc87e0 0xbc8940 0xbc8940] 0xc820b2df80}:
    Command stdout:

    stderr:
    error: You must be logged in to the server (the server has asked for the client to provide credentials)

    error:
    exit status 1

not to have occurred

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.StatusError | 0xc820fe6080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post jobs.batch)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "batch",
                Kind: "jobs",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/apis/batch/v1/namespaces/e2e-tests-v1job-dlacg/jobs\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post jobs.batch)
not to have occurred

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:89
Expected error:
    <*errors.errorString | 0xc820aaa340>: {
        s: "error while stopping RC: rc-light: error getting replication controllers: error getting replication controllers: the server does not allow access to the requested resource (get replicationControllers)",
    }
    error while stopping RC: rc-light: error getting replication controllers: error getting replication controllers: the server does not allow access to the requested resource (get replicationControllers)
not to have occurred

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:123
Failed to create pod: the server does not allow access to the requested resource (post pods)
Expected error:
    <*errors.StatusError | 0xc820bc6100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post pods)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-volume-provisioning-nk7o8/pods\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post pods)
not to have occurred

Issues about this test specifically: #26682

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:47
Expected error:
    <*errors.StatusError | 0xc8200cc200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post replicasets.extensions)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "extensions",
                Kind: "replicasets",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/apis/extensions/v1beta1/namespaces/e2e-tests-replicaset-fc3gg/replicasets\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post replicasets.extensions)
not to have occurred

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication on a single node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:232
Expected error:
    <*errors.StatusError | 0xc820c12d80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get pods)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-nettest-tqcbq/pods?fieldSelector=metadata.name%3Dsame-node-webserver\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get pods)
not to have occurred

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/8346/

Multiple broken tests:

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:128
Expected
    <int>: 0
to equal
    <int>: 1

Failed: [k8s.io] Pods should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1045
getting pod liveness-exec in namespace e2e-tests-pods-blh4g
Expected error:
    <*errors.StatusError | 0xc8200b8700>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get pods liveness-exec)",
            Reason: "Forbidden",
            Details: {
                Name: "liveness-exec",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-pods-blh4g/pods/liveness-exec\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get pods liveness-exec)
not to have occurred

Failed: [k8s.io] Services should prevent NodePort collisions {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:771
Expected error:
    <*errors.StatusError | 0xc82112e100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post services)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-services-e8oz6/services\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post services)
not to have occurred

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:89
Expected error:
    <*errors.StatusError | 0xc82025ba80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (delete services test-service)",
            Reason: "Forbidden",
            Details: {
                Name: "test-service",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-resourcequota-ikgj2/services/test-service\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (delete services test-service)
not to have occurred

@j3ffml

j3ffml commented Jun 3, 2016

The majority of the above failures are from IAM problems which should be resolved now (see #26639). Leaving open for now to see what else the bot finds.
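The dumps above mix two distinct apiserver errors: 401 Unauthorized ("the server has asked for the client to provide credentials") and 403 Forbidden ("the server does not allow access to the requested resource"). A minimal sketch of how to triage them from the quoted `Status` objects — illustrative only, not part of the e2e suite; the dict shape just mirrors the `Reason`/`Code` fields in the dumps:

```python
def classify_failure(status):
    """Coarsely diagnose a Kubernetes Status error object from the logs above."""
    code = status.get("Code")
    reason = status.get("Reason")
    if code == 401 or reason == "Unauthorized":
        # "the server has asked for the client to provide credentials"
        return "authentication"  # missing/expired client credentials
    if code == 403 or reason == "Forbidden":
        # "the server does not allow access to the requested resource"
        return "authorization"   # IAM / policy denial, as diagnosed above
    return "other"

print(classify_failure({"Reason": "Forbidden", "Code": 403}))     # authorization
print(classify_failure({"Reason": "Unauthorized", "Code": 401}))  # authentication
```

Under this reading, the 403s across the run point at the IAM regression, while the 401s are the same credential breakage seen by `kubectl`.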

@j3ffml j3ffml closed this as completed Jun 3, 2016
@j3ffml j3ffml reopened this Jun 3, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/8437/

Multiple broken tests:

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:06:04.073: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-13daad16-cx8m   /api/v1/nodes/gke-jenkins-e2e-default-pool-13daad16-cx8m c7ac9a69-2a4b-11e6-a1ac-42010af00006 5625 0 {2016-06-04 04:59:24 -0700 PDT} <nil> <nil> map[failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-13daad16-cx8m beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1] map[scheduler.alpha.kubernetes.io/taints:[] volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.1.0/24 9116807320666340564 gce://k8s-jkns-e2e-gke-ci/us-central1-f/gke-jenkins-e2e-default-pool-13daad16-cx8m false} {map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-06-04 05:00:23 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 04:59:24 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.5} {ExternalIP 104.154.41.205}] {{10250}} { 1CD6EDAB-1E44-454A-FADD-939C53DA68EE c5c59d30-22a5-415c-83a4-fe5f81056d7d 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.9.1 
v1.3.0-alpha.5.98+57125d81e16caf v1.3.0-alpha.5.98+57125d81e16caf linux amd64} [{[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/fluentd-gcp:1.18] 411450900} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/kube-proxy:26ab9cd156f43141c509e57067ae0825] 178177849} {[gcr.io/google_containers/dnsutils:e2e] 141895666} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[<none>:<none> <none>@<none>] 125051065} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/nettest:1.8] 25164808} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google_containers/pause:0.8.0] 241656}] []}}]

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Jun  4 05:09:22.136: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state

Issues about this test specifically: #26425 #26715

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:05:55.348: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-13daad16-cx8m   /api/v1/nodes/gke-jenkins-e2e-default-pool-13daad16-cx8m c7ac9a69-2a4b-11e6-a1ac-42010af00006 5625 0 {2016-06-04 04:59:24 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-13daad16-cx8m] map[volumes.kubernetes.io/controller-managed-attach-detach:true scheduler.alpha.kubernetes.io/taints:[]] [] []} {10.180.1.0/24 9116807320666340564 gce://k8s-jkns-e2e-gke-ci/us-central1-f/gke-jenkins-e2e-default-pool-13daad16-cx8m false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-06-04 05:00:23 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 04:59:24 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.5} {ExternalIP 104.154.41.205}] {{10250}} { 1CD6EDAB-1E44-454A-FADD-939C53DA68EE c5c59d30-22a5-415c-83a4-fe5f81056d7d 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.9.1 
v1.3.0-alpha.5.98+57125d81e16caf v1.3.0-alpha.5.98+57125d81e16caf linux amd64} [{[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/fluentd-gcp:1.18] 411450900} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/kube-proxy:26ab9cd156f43141c509e57067ae0825] 178177849} {[gcr.io/google_containers/dnsutils:e2e] 141895666} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[<none>:<none> <none>@<none>] 125051065} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/nettest:1.8] 25164808} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google_containers/pause:0.8.0] 241656}] []}}]

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:11:53.497: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-13daad16-cx8m   /api/v1/nodes/gke-jenkins-e2e-default-pool-13daad16-cx8m c7ac9a69-2a4b-11e6-a1ac-42010af00006 5625 0 {2016-06-04 04:59:24 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-13daad16-cx8m beta.kubernetes.io/arch:amd64] map[scheduler.alpha.kubernetes.io/taints:[] volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.1.0/24 9116807320666340564 gce://k8s-jkns-e2e-gke-ci/us-central1-f/gke-jenkins-e2e-default-pool-13daad16-cx8m false} {map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}] map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-06-04 05:00:23 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 04:59:24 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.5} {ExternalIP 104.154.41.205}] {{10250}} { 1CD6EDAB-1E44-454A-FADD-939C53DA68EE c5c59d30-22a5-415c-83a4-fe5f81056d7d 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.9.1 
v1.3.0-alpha.5.98+57125d81e16caf v1.3.0-alpha.5.98+57125d81e16caf linux amd64} [{[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/fluentd-gcp:1.18] 411450900} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/kube-proxy:26ab9cd156f43141c509e57067ae0825] 178177849} {[gcr.io/google_containers/dnsutils:e2e] 141895666} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[<none>:<none> <none>@<none>] 125051065} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/nettest:1.8] 25164808} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google_containers/pause:0.8.0] 241656}] []}}]

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:09:26.960: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-13daad16-cx8m   /api/v1/nodes/gke-jenkins-e2e-default-pool-13daad16-cx8m c7ac9a69-2a4b-11e6-a1ac-42010af00006 5625 0 {2016-06-04 04:59:24 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-13daad16-cx8m beta.kubernetes.io/arch:amd64] map[scheduler.alpha.kubernetes.io/taints:[] volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.1.0/24 9116807320666340564 gce://k8s-jkns-e2e-gke-ci/us-central1-f/gke-jenkins-e2e-default-pool-13daad16-cx8m false} {map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}] map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-06-04 05:00:23 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 04:59:24 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.5} {ExternalIP 104.154.41.205}] {{10250}} { 1CD6EDAB-1E44-454A-FADD-939C53DA68EE c5c59d30-22a5-415c-83a4-fe5f81056d7d 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.9.1 
v1.3.0-alpha.5.98+57125d81e16caf v1.3.0-alpha.5.98+57125d81e16caf linux amd64} [{[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/fluentd-gcp:1.18] 411450900} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/kube-proxy:26ab9cd156f43141c509e57067ae0825] 178177849} {[gcr.io/google_containers/dnsutils:e2e] 141895666} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[<none>:<none> <none>@<none>] 125051065} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/nettest:1.8] 25164808} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google_containers/pause:0.8.0] 241656}] []}}]
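Every failure in this run dumps the same node with its `Ready` condition stuck at `Unknown` (reason `NodeStatusUnknown`, "Kubelet stopped posting node status."), which is why the framework's after-test check reports the node as not ready. A minimal sketch of that readiness decision, using hypothetical sample data mirroring the conditions block in the dump above (not the actual e2e framework code):

```python
# Sample conditions mirroring the flaking node's dump: Ready is Unknown
# because the kubelet stopped posting status.
conditions = [
    {"type": "NetworkUnavailable", "status": "False"},
    {"type": "OutOfDisk", "status": "Unknown"},
    {"type": "MemoryPressure", "status": "False"},
    {"type": "Ready", "status": "Unknown",
     "reason": "NodeStatusUnknown",
     "message": "Kubelet stopped posting node status."},
]

def is_ready(conds):
    # A node counts as ready only if its Ready condition exists and is "True";
    # both "False" and "Unknown" (kubelet silent past the grace period) fail.
    ready = next((c for c in conds if c["type"] == "Ready"), None)
    return ready is not None and ready["status"] == "True"

print(is_ready(conditions))  # False: Ready is Unknown, so the after-test check fails
```

So each test below likely passed or failed on its own merits, but the shared not-ready node turns the whole run red.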

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:09:51.006: Couldn't delete ns "e2e-tests-kubectl-fokwz": namespace e2e-tests-kubectl-fokwz was not deleted within limit: timed out waiting for the condition, pods remaining: [frontend-708336848-f5rzp]

Issues about this test specifically: #26175

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication on a single node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:14:12.317: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-13daad16-cx8m   /api/v1/nodes/gke-jenkins-e2e-default-pool-13daad16-cx8m c7ac9a69-2a4b-11e6-a1ac-42010af00006 5625 0 {2016-06-04 04:59:24 -0700 PDT} <nil> <nil> map[cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-13daad16-cx8m beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true scheduler.alpha.kubernetes.io/taints:[]] [] []} {10.180.1.0/24 9116807320666340564 gce://k8s-jkns-e2e-gke-ci/us-central1-f/gke-jenkins-e2e-default-pool-13daad16-cx8m false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-06-04 05:00:23 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 04:59:24 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.5} {ExternalIP 104.154.41.205}] {{10250}} { 1CD6EDAB-1E44-454A-FADD-939C53DA68EE c5c59d30-22a5-415c-83a4-fe5f81056d7d 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.9.1 
v1.3.0-alpha.5.98+57125d81e16caf v1.3.0-alpha.5.98+57125d81e16caf linux amd64} [{[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/fluentd-gcp:1.18] 411450900} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/kube-proxy:26ab9cd156f43141c509e57067ae0825] 178177849} {[gcr.io/google_containers/dnsutils:e2e] 141895666} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[<none>:<none> <none>@<none>] 125051065} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/nettest:1.8] 25164808} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google_containers/pause:0.8.0] 241656}] []}}]

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:14:45.038: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-13daad16-cx8m   /api/v1/nodes/gke-jenkins-e2e-default-pool-13daad16-cx8m c7ac9a69-2a4b-11e6-a1ac-42010af00006 5625 0 {2016-06-04 04:59:24 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-13daad16-cx8m] map[scheduler.alpha.kubernetes.io/taints:[] volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.1.0/24 9116807320666340564 gce://k8s-jkns-e2e-gke-ci/us-central1-f/gke-jenkins-e2e-default-pool-13daad16-cx8m false} {map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}] map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-06-04 05:00:23 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 04:59:24 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.5} {ExternalIP 104.154.41.205}] {{10250}} { 1CD6EDAB-1E44-454A-FADD-939C53DA68EE c5c59d30-22a5-415c-83a4-fe5f81056d7d 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.9.1 
v1.3.0-alpha.5.98+57125d81e16caf v1.3.0-alpha.5.98+57125d81e16caf linux amd64} [{[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/fluentd-gcp:1.18] 411450900} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/kube-proxy:26ab9cd156f43141c509e57067ae0825] 178177849} {[gcr.io/google_containers/dnsutils:e2e] 141895666} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[<none>:<none> <none>@<none>] 125051065} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/nettest:1.8] 25164808} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google_containers/pause:0.8.0] 241656}] []}}]

Issues about this test specifically: #26171

Failed: [k8s.io] V1Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:13:58.475: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-13daad16-cx8m   /api/v1/nodes/gke-jenkins-e2e-default-pool-13daad16-cx8m c7ac9a69-2a4b-11e6-a1ac-42010af00006 5625 0 {2016-06-04 04:59:24 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-13daad16-cx8m] map[volumes.kubernetes.io/controller-managed-attach-detach:true scheduler.alpha.kubernetes.io/taints:[]] [] []} {10.180.1.0/24 9116807320666340564 gce://k8s-jkns-e2e-gke-ci/us-central1-f/gke-jenkins-e2e-default-pool-13daad16-cx8m false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-06-04 05:00:23 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 04:59:24 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.5} {ExternalIP 104.154.41.205}] {{10250}} { 1CD6EDAB-1E44-454A-FADD-939C53DA68EE c5c59d30-22a5-415c-83a4-fe5f81056d7d 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.9.1 
v1.3.0-alpha.5.98+57125d81e16caf v1.3.0-alpha.5.98+57125d81e16caf linux amd64} [{[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/fluentd-gcp:1.18] 411450900} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/kube-proxy:26ab9cd156f43141c509e57067ae0825] 178177849} {[gcr.io/google_containers/dnsutils:e2e] 141895666} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[<none>:<none> <none>@<none>] 125051065} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/nettest:1.8] 25164808} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google_containers/pause:0.8.0] 241656}] []}}]

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:12:47.663: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-13daad16-cx8m   /api/v1/nodes/gke-jenkins-e2e-default-pool-13daad16-cx8m c7ac9a69-2a4b-11e6-a1ac-42010af00006 5625 0 {2016-06-04 04:59:24 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-13daad16-cx8m beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2] map[scheduler.alpha.kubernetes.io/taints:[] volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.1.0/24 9116807320666340564 gce://k8s-jkns-e2e-gke-ci/us-central1-f/gke-jenkins-e2e-default-pool-13daad16-cx8m false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-06-04 05:00:23 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 04:59:24 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.5} {ExternalIP 104.154.41.205}] {{10250}} { 1CD6EDAB-1E44-454A-FADD-939C53DA68EE c5c59d30-22a5-415c-83a4-fe5f81056d7d 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.9.1 
v1.3.0-alpha.5.98+57125d81e16caf v1.3.0-alpha.5.98+57125d81e16caf linux amd64} [{[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/fluentd-gcp:1.18] 411450900} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/kube-proxy:26ab9cd156f43141c509e57067ae0825] 178177849} {[gcr.io/google_containers/dnsutils:e2e] 141895666} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[<none>:<none> <none>@<none>] 125051065} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/nettest:1.8] 25164808} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google_containers/pause:0.8.0] 241656}] []}}]

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:13:46.637: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-13daad16-cx8m   /api/v1/nodes/gke-jenkins-e2e-default-pool-13daad16-cx8m c7ac9a69-2a4b-11e6-a1ac-42010af00006 5625 0 {2016-06-04 04:59:24 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-13daad16-cx8m] map[volumes.kubernetes.io/controller-managed-attach-detach:true scheduler.alpha.kubernetes.io/taints:[]] [] []} {10.180.1.0/24 9116807320666340564 gce://k8s-jkns-e2e-gke-ci/us-central1-f/gke-jenkins-e2e-default-pool-13daad16-cx8m false} {map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-06-04 05:00:23 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 04:59:24 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.5} {ExternalIP 104.154.41.205}] {{10250}} { 1CD6EDAB-1E44-454A-FADD-939C53DA68EE c5c59d30-22a5-415c-83a4-fe5f81056d7d 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.9.1 
v1.3.0-alpha.5.98+57125d81e16caf v1.3.0-alpha.5.98+57125d81e16caf linux amd64} [{[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/fluentd-gcp:1.18] 411450900} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/kube-proxy:26ab9cd156f43141c509e57067ae0825] 178177849} {[gcr.io/google_containers/dnsutils:e2e] 141895666} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[<none>:<none> <none>@<none>] 125051065} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/nettest:1.8] 25164808} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google_containers/pause:0.8.0] 241656}] []}}]

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:08:29.938: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-13daad16-cx8m   /api/v1/nodes/gke-jenkins-e2e-default-pool-13daad16-cx8m c7ac9a69-2a4b-11e6-a1ac-42010af00006 5625 0 {2016-06-04 04:59:24 -0700 PDT} <nil> <nil> map[failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-13daad16-cx8m beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool] map[scheduler.alpha.kubernetes.io/taints:[] volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.1.0/24 9116807320666340564 gce://k8s-jkns-e2e-gke-ci/us-central1-f/gke-jenkins-e2e-default-pool-13daad16-cx8m false} {map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-06-04 05:00:23 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 04:59:24 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.5} {ExternalIP 104.154.41.205}] {{10250}} { 1CD6EDAB-1E44-454A-FADD-939C53DA68EE c5c59d30-22a5-415c-83a4-fe5f81056d7d 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.9.1 
v1.3.0-alpha.5.98+57125d81e16caf v1.3.0-alpha.5.98+57125d81e16caf linux amd64} [{[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/fluentd-gcp:1.18] 411450900} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/kube-proxy:26ab9cd156f43141c509e57067ae0825] 178177849} {[gcr.io/google_containers/dnsutils:e2e] 141895666} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[<none>:<none> <none>@<none>] 125051065} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/nettest:1.8] 25164808} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google_containers/pause:0.8.0] 241656}] []}}]

Issues about this test specifically: #26682

Failed: [k8s.io] V1Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:12:47.258: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-13daad16-cx8m   /api/v1/nodes/gke-jenkins-e2e-default-pool-13daad16-cx8m c7ac9a69-2a4b-11e6-a1ac-42010af00006 5625 0 {2016-06-04 04:59:24 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-13daad16-cx8m] map[scheduler.alpha.kubernetes.io/taints:[] volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.1.0/24 9116807320666340564 gce://k8s-jkns-e2e-gke-ci/us-central1-f/gke-jenkins-e2e-default-pool-13daad16-cx8m false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-06-04 05:00:23 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 04:59:24 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.5} {ExternalIP 104.154.41.205}] {{10250}} { 1CD6EDAB-1E44-454A-FADD-939C53DA68EE c5c59d30-22a5-415c-83a4-fe5f81056d7d 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.9.1 
v1.3.0-alpha.5.98+57125d81e16caf v1.3.0-alpha.5.98+57125d81e16caf linux amd64} [{[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/fluentd-gcp:1.18] 411450900} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/kube-proxy:26ab9cd156f43141c509e57067ae0825] 178177849} {[gcr.io/google_containers/dnsutils:e2e] 141895666} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[<none>:<none> <none>@<none>] 125051065} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/nettest:1.8] 25164808} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google_containers/pause:0.8.0] 241656}] []}}]

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:13:09.513: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-13daad16-cx8m   /api/v1/nodes/gke-jenkins-e2e-default-pool-13daad16-cx8m c7ac9a69-2a4b-11e6-a1ac-42010af00006 5625 0 {2016-06-04 04:59:24 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-13daad16-cx8m] map[scheduler.alpha.kubernetes.io/taints:[] volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.1.0/24 9116807320666340564 gce://k8s-jkns-e2e-gke-ci/us-central1-f/gke-jenkins-e2e-default-pool-13daad16-cx8m false} {map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}] map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-06-04 05:00:23 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 04:59:24 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.5} {ExternalIP 104.154.41.205}] {{10250}} { 1CD6EDAB-1E44-454A-FADD-939C53DA68EE c5c59d30-22a5-415c-83a4-fe5f81056d7d 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.9.1 
v1.3.0-alpha.5.98+57125d81e16caf v1.3.0-alpha.5.98+57125d81e16caf linux amd64} [{[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/fluentd-gcp:1.18] 411450900} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/kube-proxy:26ab9cd156f43141c509e57067ae0825] 178177849} {[gcr.io/google_containers/dnsutils:e2e] 141895666} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[<none>:<none> <none>@<none>] 125051065} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/nettest:1.8] 25164808} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google_containers/pause:0.8.0] 241656}] []}}]

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:06:57.734: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-13daad16-cx8m   /api/v1/nodes/gke-jenkins-e2e-default-pool-13daad16-cx8m c7ac9a69-2a4b-11e6-a1ac-42010af00006 5625 0 {2016-06-04 04:59:24 -0700 PDT} <nil> <nil> map[failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-13daad16-cx8m beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool] map[scheduler.alpha.kubernetes.io/taints:[] volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.1.0/24 9116807320666340564 gce://k8s-jkns-e2e-gke-ci/us-central1-f/gke-jenkins-e2e-default-pool-13daad16-cx8m false} {map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-06-04 05:00:23 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 04:59:24 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.5} {ExternalIP 104.154.41.205}] {{10250}} { 1CD6EDAB-1E44-454A-FADD-939C53DA68EE c5c59d30-22a5-415c-83a4-fe5f81056d7d 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.9.1 
v1.3.0-alpha.5.98+57125d81e16caf v1.3.0-alpha.5.98+57125d81e16caf linux amd64} [{[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/fluentd-gcp:1.18] 411450900} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/kube-proxy:26ab9cd156f43141c509e57067ae0825] 178177849} {[gcr.io/google_containers/dnsutils:e2e] 141895666} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[<none>:<none> <none>@<none>] 125051065} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/nettest:1.8] 25164808} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google_containers/pause:0.8.0] 241656}] []}}]

Failed: [k8s.io] Pods should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:07:44.719: All nodes should be ready after test, Not ready nodes: [gke-jenkins-e2e-default-pool-13daad16-cx8m, Ready Unknown, NodeStatusUnknown: Kubelet stopped posting node status. (full node status dump identical to the first failure above)]

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:13:00.957: All nodes should be ready after test, Not ready nodes: [gke-jenkins-e2e-default-pool-13daad16-cx8m, Ready Unknown, NodeStatusUnknown: Kubelet stopped posting node status. (full node status dump identical to the first failure above)]

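The three "All nodes should be ready after test" failures above all reduce to the same node condition: the `Ready` condition flipped to `Unknown` with reason `NodeStatusUnknown` after the kubelet stopped posting status. A minimal sketch (not the actual e2e framework code) of the readiness check that trips here, using the conditions copied from the node dump above:

```python
# Sketch of the "node is ready" decision the e2e check makes: a node is
# ready only when its Ready condition has status "True"; "Unknown"
# (kubelet stopped posting status) also counts as not ready.

def not_ready_reasons(conditions):
    """Return (reason, message) pairs explaining why a node is not ready."""
    return [
        (c["reason"], c["message"])
        for c in conditions
        if c["type"] == "Ready" and c["status"] != "True"
    ]

# Conditions taken from the node status dump in this issue:
conds = [
    {"type": "NetworkUnavailable", "status": "False",
     "reason": "RouteCreated",
     "message": "RouteController created a route"},
    {"type": "OutOfDisk", "status": "Unknown",
     "reason": "NodeStatusUnknown",
     "message": "Kubelet stopped posting node status."},
    {"type": "MemoryPressure", "status": "False",
     "reason": "KubeletHasSufficientMemory",
     "message": "kubelet has sufficient memory available"},
    {"type": "Ready", "status": "Unknown",
     "reason": "NodeStatusUnknown",
     "message": "Kubelet stopped posting node status."},
]

print(not_ready_reasons(conds))
# -> [('NodeStatusUnknown', 'Kubelet stopped posting node status.')]
```

This matches the pattern in the dumps: every subsequent test in the run fails its teardown check because the same node never recovered.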
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1063
Expected error:
    <*errors.errorString | 0xc82097c150>: {
        s: "Timed out waiting for command &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://107.178.223.97 --kubeconfig=/workspace/.kube/config --namespace= run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc82051bac0 Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod 
default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\n[... "Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false" repeated until the timeout ...]\n  [] <nil> 0xc820782200 <nil> <nil> true [0xc8207ae5e0 0xc8207ae608 0xc8207ae618] [0xc8207ae5e0 0xc8207ae608 0xc8207ae618] [0xc8207ae5e8 0xc8207ae600 0xc8207ae610] [0xa66530 0xa66690 0xa66690] 0xc8208deea0}:\nCommand stdout:\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\n[... same line repeated ...]\nWaiting for pod 
default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: 
false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is 
Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be 
running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\nWaiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false\n\nstderr:\n\n",
    }
    Timed out waiting for command &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://107.178.223.97 --kubeconfig=/workspace/.kube/config --namespace= run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc82051bac0 Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    [identical "Waiting for pod" lines trimmed]
      [] <nil> 0xc820782200 <nil> <nil> true [0xc8207ae5e0 0xc8207ae608 0xc8207ae618] [0xc8207ae5e0 0xc8207ae608 0xc8207ae618] [0xc8207ae5e8 0xc8207ae600 0xc8207ae610] [0xa66530 0xa66690 0xa66690] 0xc8208deea0}:
    Command stdout:
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    Waiting for pod default/e2e-test-rm-busybox-job-dq01k to be running, status is Pending, pod ready: false
    (previous line repeated while the pod remained Pending until the wait timed out)

    stderr:


not to have occurred

Issues about this test specifically: #26728

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:10:59.429: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-13daad16-cx8m   /api/v1/nodes/gke-jenkins-e2e-default-pool-13daad16-cx8m c7ac9a69-2a4b-11e6-a1ac-42010af00006 5625 0 {2016-06-04 04:59:24 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-13daad16-cx8m beta.kubernetes.io/arch:amd64] map[scheduler.alpha.kubernetes.io/taints:[] volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.1.0/24 9116807320666340564 gce://k8s-jkns-e2e-gke-ci/us-central1-f/gke-jenkins-e2e-default-pool-13daad16-cx8m false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-06-04 05:00:23 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 04:59:24 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.5} {ExternalIP 104.154.41.205}] {{10250}} { 1CD6EDAB-1E44-454A-FADD-939C53DA68EE c5c59d30-22a5-415c-83a4-fe5f81056d7d 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.9.1 
v1.3.0-alpha.5.98+57125d81e16caf v1.3.0-alpha.5.98+57125d81e16caf linux amd64} [{[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/fluentd-gcp:1.18] 411450900} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/kube-proxy:26ab9cd156f43141c509e57067ae0825] 178177849} {[gcr.io/google_containers/dnsutils:e2e] 141895666} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[<none>:<none> <none>@<none>] 125051065} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/nettest:1.8] 25164808} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google_containers/pause:0.8.0] 241656}] []}}]

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:14:01.617: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-13daad16-cx8m   /api/v1/nodes/gke-jenkins-e2e-default-pool-13daad16-cx8m c7ac9a69-2a4b-11e6-a1ac-42010af00006 5625 0 {2016-06-04 04:59:24 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-13daad16-cx8m beta.kubernetes.io/arch:amd64] map[scheduler.alpha.kubernetes.io/taints:[] volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.1.0/24 9116807320666340564 gce://k8s-jkns-e2e-gke-ci/us-central1-f/gke-jenkins-e2e-default-pool-13daad16-cx8m false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-06-04 05:00:23 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 04:59:24 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.5} {ExternalIP 104.154.41.205}] {{10250}} { 1CD6EDAB-1E44-454A-FADD-939C53DA68EE c5c59d30-22a5-415c-83a4-fe5f81056d7d 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.9.1 
v1.3.0-alpha.5.98+57125d81e16caf v1.3.0-alpha.5.98+57125d81e16caf linux amd64} [{[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/fluentd-gcp:1.18] 411450900} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/kube-proxy:26ab9cd156f43141c509e57067ae0825] 178177849} {[gcr.io/google_containers/dnsutils:e2e] 141895666} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[<none>:<none> <none>@<none>] 125051065} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/nettest:1.8] 25164808} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google_containers/pause:0.8.0] 241656}] []}}]

Failed: [k8s.io] Generated release_1_3 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:08:19.486: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-13daad16-cx8m   /api/v1/nodes/gke-jenkins-e2e-default-pool-13daad16-cx8m c7ac9a69-2a4b-11e6-a1ac-42010af00006 5625 0 {2016-06-04 04:59:24 -0700 PDT} <nil> <nil> map[failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-13daad16-cx8m beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1] map[scheduler.alpha.kubernetes.io/taints:[] volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.1.0/24 9116807320666340564 gce://k8s-jkns-e2e-gke-ci/us-central1-f/gke-jenkins-e2e-default-pool-13daad16-cx8m false} {map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-06-04 05:00:23 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 04:59:24 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.5} {ExternalIP 104.154.41.205}] {{10250}} { 1CD6EDAB-1E44-454A-FADD-939C53DA68EE c5c59d30-22a5-415c-83a4-fe5f81056d7d 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.9.1 
v1.3.0-alpha.5.98+57125d81e16caf v1.3.0-alpha.5.98+57125d81e16caf linux amd64} [{[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/fluentd-gcp:1.18] 411450900} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/kube-proxy:26ab9cd156f43141c509e57067ae0825] 178177849} {[gcr.io/google_containers/dnsutils:e2e] 141895666} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[<none>:<none> <none>@<none>] 125051065} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/nettest:1.8] 25164808} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google_containers/pause:0.8.0] 241656}] []}}]

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:13:37.699: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-13daad16-cx8m   /api/v1/nodes/gke-jenkins-e2e-default-pool-13daad16-cx8m c7ac9a69-2a4b-11e6-a1ac-42010af00006 5625 0 {2016-06-04 04:59:24 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-13daad16-cx8m] map[scheduler.alpha.kubernetes.io/taints:[] volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.1.0/24 9116807320666340564 gce://k8s-jkns-e2e-gke-ci/us-central1-f/gke-jenkins-e2e-default-pool-13daad16-cx8m false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-06-04 05:00:23 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 04:59:24 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.5} {ExternalIP 104.154.41.205}] {{10250}} { 1CD6EDAB-1E44-454A-FADD-939C53DA68EE c5c59d30-22a5-415c-83a4-fe5f81056d7d 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.9.1 
v1.3.0-alpha.5.98+57125d81e16caf v1.3.0-alpha.5.98+57125d81e16caf linux amd64} [{[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/fluentd-gcp:1.18] 411450900} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/kube-proxy:26ab9cd156f43141c509e57067ae0825] 178177849} {[gcr.io/google_containers/dnsutils:e2e] 141895666} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[<none>:<none> <none>@<none>] 125051065} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/nettest:1.8] 25164808} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google_containers/pause:0.8.0] 241656}] []}}]

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:260
18: path /api/v1/proxy/namespaces/e2e-tests-proxy-bp9vo/services/proxy-service-ypf4r:portname1/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "proxy-service-ypf4r:portname1" Reason:ServiceUnavailable Details:<nil> Code:503}
18: path /api/v1/namespaces/e2e-tests-proxy-bp9vo/services/https:proxy-service-ypf4r:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-ypf4r:tlsportname1" Reason:ServiceUnavailable Details:<nil> Code:503}
18: path /api/v1/proxy/namespaces/e2e-tests-proxy-bp9vo/services/https:proxy-service-ypf4r:444/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-ypf4r:444" Reason:ServiceUnavailable Details:<nil> Code:503}
19: path /api/v1/proxy/namespaces/e2e-tests-proxy-bp9vo/services/proxy-service-ypf4r:81/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "proxy-service-ypf4r:81" Reason:ServiceUnavailable Details:<nil> Code:503}
19: path /api/v1/proxy/namespaces/e2e-tests-proxy-bp9vo/services/https:proxy-service-ypf4r:443/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-ypf4r:443" Reason:ServiceUnavailable Details:<nil> Code:503}
19: path /api/v1/proxy/namespaces/e2e-tests-proxy-bp9vo/services/https:proxy-service-ypf4r:tlsportname2/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-ypf4r:tlsportname2" Reason:ServiceUnavailable Details:<nil> Code:503}
19: path /api/v1/proxy/namespaces/e2e-tests-proxy-bp9vo/services/http:proxy-service-ypf4r:80/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "http:proxy-service-ypf4r:80" Reason:ServiceUnavailable Details:<nil> Code:503}
19: path /api/v1/proxy/namespaces/e2e-tests-proxy-bp9vo/services/proxy-service-ypf4r:portname1/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "proxy-service-ypf4r:portname1" Reason:ServiceUnavailable Details:<nil> Code:503}
19: path /api/v1/namespaces/e2e-tests-proxy-bp9vo/services/https:proxy-service-ypf4r:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-ypf4r:tlsportname1" Reason:ServiceUnavailable Details:<nil> Code:503}
19: path /api/v1/proxy/namespaces/e2e-tests-proxy-bp9vo/services/https:proxy-service-ypf4r:444/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-ypf4r:444" Reason:ServiceUnavailable Details:<nil> Code:503}
19: path /api/v1/proxy/namespaces/e2e-tests-proxy-bp9vo/services/http:proxy-service-ypf4r:portname2/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "http:proxy-service-ypf4r:portname2" Reason:ServiceUnavailable Details:<nil> Code:503}
19: path /api/v1/proxy/namespaces/e2e-tests-proxy-bp9vo/services/proxy-service-ypf4r:portname2/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "proxy-service-ypf4r:portname2" Reason:ServiceUnavailable Details:<nil> Code:503}
19: path /api/v1/proxy/namespaces/e2e-tests-proxy-bp9vo/services/https:proxy-service-ypf4r:tlsportname1/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-ypf4r:tlsportname1" Reason:ServiceUnavailable Details:<nil> Code:503}
19: path /api/v1/namespaces/e2e-tests-proxy-bp9vo/services/proxy-service-ypf4r:portname1/proxy/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "proxy-service-ypf4r:portname1" Reason:ServiceUnavailable Details:<nil> Code:503}
19: path /api/v1/namespaces/e2e-tests-proxy-bp9vo/services/http:proxy-service-ypf4r:portname2/proxy/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "http:proxy-service-ypf4r:portname2" Reason:ServiceUnavailable Details:<nil> Code:503}
19: path /api/v1/proxy/namespaces/e2e-tests-proxy-bp9vo/services/http:proxy-service-ypf4r:81/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "http:proxy-service-ypf4r:81" Reason:ServiceUnavailable Details:<nil> Code:503}
19: path /api/v1/namespaces/e2e-tests-proxy-bp9vo/services/proxy-service-ypf4r:portname2/proxy/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "proxy-service-ypf4r:portname2" Reason:ServiceUnavailable Details:<nil> Code:503}
19: path /api/v1/proxy/namespaces/e2e-tests-proxy-bp9vo/services/proxy-service-ypf4r:80/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "proxy-service-ypf4r:80" Reason:ServiceUnavailable Details:<nil> Code:503}
19: path /api/v1/proxy/namespaces/e2e-tests-proxy-bp9vo/services/http:proxy-service-ypf4r:portname1/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "http:proxy-service-ypf4r:portname1" Reason:ServiceUnavailable Details:<nil> Code:503}
19: path /api/v1/namespaces/e2e-tests-proxy-bp9vo/services/http:proxy-service-ypf4r:portname1/proxy/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "http:proxy-service-ypf4r:portname1" Reason:ServiceUnavailable Details:<nil> Code:503}
19: path /api/v1/namespaces/e2e-tests-proxy-bp9vo/services/https:proxy-service-ypf4r:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-ypf4r:tlsportname2" Reason:ServiceUnavailable Details:<nil> Code:503}

Issues about this test specifically: #26164 #26210

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:20:23.704: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-13daad16-cx8m   /api/v1/nodes/gke-jenkins-e2e-default-pool-13daad16-cx8m c7ac9a69-2a4b-11e6-a1ac-42010af00006 5625 0 {2016-06-04 04:59:24 -0700 PDT} <nil> <nil> map[kubernetes.io/hostname:gke-jenkins-e2e-default-pool-13daad16-cx8m beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f] map[scheduler.alpha.kubernetes.io/taints:[] volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.1.0/24 9116807320666340564 gce://k8s-jkns-e2e-gke-ci/us-central1-f/gke-jenkins-e2e-default-pool-13daad16-cx8m false} {map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-06-04 05:00:23 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 04:59:24 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.5} {ExternalIP 104.154.41.205}] {{10250}} { 1CD6EDAB-1E44-454A-FADD-939C53DA68EE c5c59d30-22a5-415c-83a4-fe5f81056d7d 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.9.1 
v1.3.0-alpha.5.98+57125d81e16caf v1.3.0-alpha.5.98+57125d81e16caf linux amd64} [{[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/fluentd-gcp:1.18] 411450900} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/kube-proxy:26ab9cd156f43141c509e57067ae0825] 178177849} {[gcr.io/google_containers/dnsutils:e2e] 141895666} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[<none>:<none> <none>@<none>] 125051065} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/nettest:1.8] 25164808} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google_containers/pause:0.8.0] 241656}] []}}]
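Every "All nodes should be ready after test" failure in this run traces back to the same node dump: the `Ready` condition flipped to `Unknown` with reason `NodeStatusUnknown` ("Kubelet stopped posting node status."). A minimal sketch of the readiness check these failures reflect (illustrative Python, not the framework's actual Go code): only `Ready` with status `"True"` counts, so both `"False"` and `"Unknown"` mark the node not ready.

```python
def node_is_ready(conditions):
    """Return True only if the node's Ready condition has status "True".

    "Unknown" (kubelet stopped posting status) and "False" both mean the
    node is not ready.
    """
    for cond in conditions:
        if cond["type"] == "Ready":
            return cond["status"] == "True"
    return False  # no Ready condition reported at all

# Conditions reduced from the dump above: Ready is Unknown because the
# kubelet stopped posting node status.
conds = [
    {"type": "NetworkUnavailable", "status": "False"},
    {"type": "OutOfDisk", "status": "Unknown"},
    {"type": "MemoryPressure", "status": "False"},
    {"type": "Ready", "status": "Unknown",
     "reason": "NodeStatusUnknown",
     "message": "Kubelet stopped posting node status."},
]
```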

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:10:24.024: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-13daad16-cx8m   /api/v1/nodes/gke-jenkins-e2e-default-pool-13daad16-cx8m c7ac9a69-2a4b-11e6-a1ac-42010af00006 5625 0 {2016-06-04 04:59:24 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-13daad16-cx8m beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2] map[scheduler.alpha.kubernetes.io/taints:[] volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.1.0/24 9116807320666340564 gce://k8s-jkns-e2e-gke-ci/us-central1-f/gke-jenkins-e2e-default-pool-13daad16-cx8m false} {map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-06-04 05:00:23 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 04:59:24 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.5} {ExternalIP 104.154.41.205}] {{10250}} { 1CD6EDAB-1E44-454A-FADD-939C53DA68EE c5c59d30-22a5-415c-83a4-fe5f81056d7d 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.9.1 
v1.3.0-alpha.5.98+57125d81e16caf v1.3.0-alpha.5.98+57125d81e16caf linux amd64} [{[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/fluentd-gcp:1.18] 411450900} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/kube-proxy:26ab9cd156f43141c509e57067ae0825] 178177849} {[gcr.io/google_containers/dnsutils:e2e] 141895666} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[<none>:<none> <none>@<none>] 125051065} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/nettest:1.8] 25164808} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google_containers/pause:0.8.0] 241656}] []}}]

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:10:38.274: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-13daad16-cx8m   /api/v1/nodes/gke-jenkins-e2e-default-pool-13daad16-cx8m c7ac9a69-2a4b-11e6-a1ac-42010af00006 5625 0 {2016-06-04 04:59:24 -0700 PDT} <nil> <nil> map[failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-13daad16-cx8m beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool] map[scheduler.alpha.kubernetes.io/taints:[] volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.1.0/24 9116807320666340564 gce://k8s-jkns-e2e-gke-ci/us-central1-f/gke-jenkins-e2e-default-pool-13daad16-cx8m false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-06-04 05:00:23 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 04:59:24 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.5} {ExternalIP 104.154.41.205}] {{10250}} { 1CD6EDAB-1E44-454A-FADD-939C53DA68EE c5c59d30-22a5-415c-83a4-fe5f81056d7d 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.9.1 
v1.3.0-alpha.5.98+57125d81e16caf v1.3.0-alpha.5.98+57125d81e16caf linux amd64} [{[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/fluentd-gcp:1.18] 411450900} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/kube-proxy:26ab9cd156f43141c509e57067ae0825] 178177849} {[gcr.io/google_containers/dnsutils:e2e] 141895666} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[<none>:<none> <none>@<none>] 125051065} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/nettest:1.8] 25164808} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google_containers/pause:0.8.0] 241656}] []}}]

Issues about this test specifically: #26224

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:12:11.838: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-13daad16-cx8m   /api/v1/nodes/gke-jenkins-e2e-default-pool-13daad16-cx8m c7ac9a69-2a4b-11e6-a1ac-42010af00006 5625 0 {2016-06-04 04:59:24 -0700 PDT} <nil> <nil> map[failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-13daad16-cx8m beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool] map[scheduler.alpha.kubernetes.io/taints:[] volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.1.0/24 9116807320666340564 gce://k8s-jkns-e2e-gke-ci/us-central1-f/gke-jenkins-e2e-default-pool-13daad16-cx8m false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-06-04 05:00:23 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 04:59:24 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.5} {ExternalIP 104.154.41.205}] {{10250}} { 1CD6EDAB-1E44-454A-FADD-939C53DA68EE c5c59d30-22a5-415c-83a4-fe5f81056d7d 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.9.1 
v1.3.0-alpha.5.98+57125d81e16caf v1.3.0-alpha.5.98+57125d81e16caf linux amd64} [{[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/fluentd-gcp:1.18] 411450900} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/kube-proxy:26ab9cd156f43141c509e57067ae0825] 178177849} {[gcr.io/google_containers/dnsutils:e2e] 141895666} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[<none>:<none> <none>@<none>] 125051065} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/nettest:1.8] 25164808} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google_containers/pause:0.8.0] 241656}] []}}]

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:09:19.986: Couldn't delete ns "e2e-tests-container-probe-hy8s9": namespace e2e-tests-container-probe-hy8s9 was not deleted within limit: timed out waiting for the condition, pods remaining: [test-webserver-5ed318e0-2a4c-11e6-ac7c-0242ac11000c]
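The "timed out waiting for the condition" and "was not deleted within limit" messages both come from the same pattern: poll a condition until a deadline, then give up. A minimal sketch of that wait loop (illustrative, assuming nothing beyond the error text; the framework's real implementation is Go with backoff):

```python
import time

def wait_for(condition, timeout=60.0, interval=1.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll condition() until it returns True or timeout elapses.

    Raises TimeoutError with the message the e2e framework reports when
    the deadline passes first.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if condition():
            return
        sleep(interval)
    raise TimeoutError("timed out waiting for the condition")
```

In the namespace-deletion case the condition is "namespace no longer exists"; the leftover `test-webserver-...` pod kept it false until the limit expired.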

Failed: [k8s.io] Pods should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:13:20.168: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-13daad16-cx8m   /api/v1/nodes/gke-jenkins-e2e-default-pool-13daad16-cx8m c7ac9a69-2a4b-11e6-a1ac-42010af00006 5625 0 {2016-06-04 04:59:24 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-13daad16-cx8m] map[scheduler.alpha.kubernetes.io/taints:[] volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.1.0/24 9116807320666340564 gce://k8s-jkns-e2e-gke-ci/us-central1-f/gke-jenkins-e2e-default-pool-13daad16-cx8m false} {map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}] map[memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-06-04 05:00:23 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 04:59:24 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.5} {ExternalIP 104.154.41.205}] {{10250}} { 1CD6EDAB-1E44-454A-FADD-939C53DA68EE c5c59d30-22a5-415c-83a4-fe5f81056d7d 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.9.1 
v1.3.0-alpha.5.98+57125d81e16caf v1.3.0-alpha.5.98+57125d81e16caf linux amd64} [{[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/fluentd-gcp:1.18] 411450900} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/kube-proxy:26ab9cd156f43141c509e57067ae0825] 178177849} {[gcr.io/google_containers/dnsutils:e2e] 141895666} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[<none>:<none> <none>@<none>] 125051065} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/nettest:1.8] 25164808} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google_containers/pause:0.8.0] 241656}] []}}]

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:11:16.844: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-13daad16-cx8m   /api/v1/nodes/gke-jenkins-e2e-default-pool-13daad16-cx8m c7ac9a69-2a4b-11e6-a1ac-42010af00006 5625 0 {2016-06-04 04:59:24 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-13daad16-cx8m] map[scheduler.alpha.kubernetes.io/taints:[] volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.1.0/24 9116807320666340564 gce://k8s-jkns-e2e-gke-ci/us-central1-f/gke-jenkins-e2e-default-pool-13daad16-cx8m false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-06-04 05:00:23 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 04:59:24 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.5} {ExternalIP 104.154.41.205}] {{10250}} { 1CD6EDAB-1E44-454A-FADD-939C53DA68EE c5c59d30-22a5-415c-83a4-fe5f81056d7d 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.9.1 
v1.3.0-alpha.5.98+57125d81e16caf v1.3.0-alpha.5.98+57125d81e16caf linux amd64} [{[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/fluentd-gcp:1.18] 411450900} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/kube-proxy:26ab9cd156f43141c509e57067ae0825] 178177849} {[gcr.io/google_containers/dnsutils:e2e] 141895666} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[<none>:<none> <none>@<none>] 125051065} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/nettest:1.8] 25164808} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google_containers/pause:0.8.0] 241656}] []}}]

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:12:28.908: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-13daad16-cx8m   /api/v1/nodes/gke-jenkins-e2e-default-pool-13daad16-cx8m c7ac9a69-2a4b-11e6-a1ac-42010af00006 5625 0 {2016-06-04 04:59:24 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-13daad16-cx8m] map[scheduler.alpha.kubernetes.io/taints:[] volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.1.0/24 9116807320666340564 gce://k8s-jkns-e2e-gke-ci/us-central1-f/gke-jenkins-e2e-default-pool-13daad16-cx8m false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-06-04 05:00:23 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 04:59:24 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.5} {ExternalIP 104.154.41.205}] {{10250}} { 1CD6EDAB-1E44-454A-FADD-939C53DA68EE c5c59d30-22a5-415c-83a4-fe5f81056d7d 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.9.1 
v1.3.0-alpha.5.98+57125d81e16caf v1.3.0-alpha.5.98+57125d81e16caf linux amd64} [{[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/fluentd-gcp:1.18] 411450900} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/kube-proxy:26ab9cd156f43141c509e57067ae0825] 178177849} {[gcr.io/google_containers/dnsutils:e2e] 141895666} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[<none>:<none> <none>@<none>] 125051065} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/nettest:1.8] 25164808} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google_containers/pause:0.8.0] 241656}] []}}]

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 05:12:30.348: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-13daad16-cx8m   /api/v1/nodes/gke-jenkins-e2e-default-pool-13daad16-cx8m c7ac9a69-2a4b-11e6-a1ac-42010af00006 5625 0 {2016-06-04 04:59:24 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-13daad16-cx8m] map[volumes.kubernetes.io/controller-managed-attach-detach:true scheduler.alpha.kubernetes.io/taints:[]] [] []} {10.180.1.0/24 9116807320666340564 gce://k8s-jkns-e2e-gke-ci/us-central1-f/gke-jenkins-e2e-default-pool-13daad16-cx8m false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-06-04 05:00:23 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 04:59:24 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-06-04 05:04:10 -0700 PDT} {2016-06-04 05:04:54 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.5} {ExternalIP 104.154.41.205}] {{10250}} { 1CD6EDAB-1E44-454A-FADD-939C53DA68EE c5c59d30-22a5-415c-83a4-fe5f81056d7d 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.9.1 
v1.3.0-alpha.5.98+57125d81e16caf v1.3.0-alpha.5.98+57125d81e16caf linux amd64} [{[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/fluentd-gcp:1.18] 411450900} {[gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/kube-proxy:26ab9cd156f43141c509e57067ae0825] 178177849} {[gcr.io/google_containers/dnsutils:e2e] 141895666} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[<none>:<none> <none>@<none>] 125051065} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/nettest:1.8] 25164808} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google_containers/pause:0.8.0] 241656}] []}}]

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/8472/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <*errors.errorString | 0xc8208422a0>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.141.216 --kubeconfig=/workspace/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-pitjz] []  0xc820a984a0 Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\n error: timed out waiting for any update progress to be made\n [] <nil> 0xc820a990e0 exit status 1 <nil> true [0xc8200e2220 0xc8200e2668 0xc8200e26b0] [0xc8200e2220 0xc8200e2668 0xc8200e26b0] [0xc8200e2628 0xc8200e2660 0xc8200e2698] [0xa66c10 0xa66d70 0xa66d70] 0xc820aab080}:\nCommand stdout:\nCreated update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\n\nstderr:\nerror: timed out waiting for any update progress to be made\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.141.216 --kubeconfig=/workspace/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-pitjz] []  0xc820a984a0 Created update-demo-kitten
    Scaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling update-demo-kitten up to 1
    Scaling update-demo-nautilus down to 1
    Scaling update-demo-kitten up to 2
     error: timed out waiting for any update progress to be made
     [] <nil> 0xc820a990e0 exit status 1 <nil> true [0xc8200e2220 0xc8200e2668 0xc8200e26b0] [0xc8200e2220 0xc8200e2668 0xc8200e26b0] [0xc8200e2628 0xc8200e2660 0xc8200e2698] [0xa66c10 0xa66d70 0xa66d70] 0xc820aab080}:
    Command stdout:
    Created update-demo-kitten
    Scaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling update-demo-kitten up to 1
    Scaling update-demo-nautilus down to 1
    Scaling update-demo-kitten up to 2

    stderr:
    error: timed out waiting for any update progress to be made

    error:
    exit status 1

not to have occurred

Issues about this test specifically: #26425 #26715

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 16:30:20.117: Couldn't delete ns "e2e-tests-kubectl-d3es0": namespace e2e-tests-kubectl-d3es0 was not deleted within limit: timed out waiting for the condition, pods remaining: [e2e-test-nginx-deployment-1517792476-xstgv]

Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 16:35:28.407: Couldn't delete ns "e2e-tests-services-5q9qs": Operation cannot be fulfilled on namespaces "e2e-tests-services-5q9qs": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system.

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:255
Expected error:
    <*errors.errorString | 0xc8201100b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication on a single node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:232
Expected error:
    <*errors.errorString | 0xc8201080b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 16:30:02.056: Couldn't delete ns "e2e-tests-kubectl-5wdav": namespace e2e-tests-kubectl-5wdav was not deleted within limit: timed out waiting for the condition, pods remaining: [redis-master-ans4q]

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:58
Expected error:
    <*errors.errorString | 0xc820707a20>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 16:37:58.336: Couldn't delete ns "e2e-tests-job-ln9hr": namespace e2e-tests-job-ln9hr was not deleted within limit: timed out waiting for the condition, pods remaining: []

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 16:30:32.520: Couldn't delete ns "e2e-tests-kubectl-d3es0": Operation cannot be fulfilled on namespaces "e2e-tests-kubectl-d3es0": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system.

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 16:31:31.532: Couldn't delete ns "e2e-tests-kubectl-r3h4p": namespace e2e-tests-kubectl-r3h4p was not deleted within limit: timed out waiting for the condition, pods remaining: [frontend-708336848-xlpm8 redis-slave-109403812-fzibm]

Issues about this test specifically: #26175 #26846

Failed: [k8s.io] Networking should function for intra-pod communication [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:213
Expected error:
    <*errors.errorString | 0xc8200ed060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 16:32:10.996: Couldn't delete ns "e2e-tests-kubectl-r3h4p": Operation cannot be fulfilled on namespaces "e2e-tests-kubectl-r3h4p": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system.

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:276
Expected error:
    <*errors.errorString | 0xc8207ab420>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred

Issues about this test specifically: #26128 #26685

@k8s-github-robot
Copy link
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/8487/

Multiple broken tests:

Failed: [k8s.io] V1Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 21:59:05.452: Couldn't delete ns "e2e-tests-v1job-nr2tx": the server does not allow access to the requested resource (delete namespaces e2e-tests-v1job-nr2tx)

Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc82028aa80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-downward-api-qp8o7/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 21:59:15.007: Couldn't delete ns "e2e-tests-pods-8nbce": the server does not allow access to the requested resource (delete namespaces e2e-tests-pods-8nbce)

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820a28500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-replication-controller-goq0j/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 21:59:04.710: Couldn't delete ns "e2e-tests-job-3tltu": the server does not allow access to the requested resource (delete namespaces e2e-tests-job-3tltu)

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:105
Jun  4 21:58:55.858: Failed to create pod: the server does not allow access to the requested resource (post pods)

Failed: [k8s.io] Addon update should propagate add-on file changes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820e1c080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-addon-update-test-rc9q0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26125

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:185
Expected error:
    <*errors.StatusError | 0xc820a65400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get pods)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-job-jdi2u/pods?labelSelector=job%3Dfoo\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get pods)
not to have occurred

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:304
Jun  4 21:59:14.907: Failed to get pod : the server does not allow access to the requested resource (get pods dns-test-08ae86fc-2ada-11e6-9432-0242ac11000b)

Issues about this test specifically: #26194 #26338

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  4 21:59:35.864: Couldn't delete ns "e2e-tests-job-mb5p2": the server does not allow access to the requested resource (delete namespaces e2e-tests-job-mb5p2)

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:407
kubelet never observed the termination notice
Expected error:
    <*errors.errorString | 0xc8200c3060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26224

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:276
Expected error:
    <*errors.StatusError | 0xc820be2700>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (delete services service1)",
            Reason: "Forbidden",
            Details: {
                Name: "service1",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-services-hc7aa/services/service1\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (delete services service1)
not to have occurred

Issues about this test specifically: #26128 #26685

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:110
Expected error:
    <*errors.StatusError | 0xc820952180>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get nodes gke-jenkins-e2e-default-pool-ceaa76df-eymf:10250)",
            Reason: "Forbidden",
            Details: {
                Name: "gke-jenkins-e2e-default-pool-ceaa76df-eymf:10250",
                Group: "",
                Kind: "nodes",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/proxy/nodes/gke-jenkins-e2e-default-pool-ceaa76df-eymf:10250/metrics\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get nodes gke-jenkins-e2e-default-pool-ceaa76df-eymf:10250)
not to have occurred

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/8496/

Multiple broken tests:

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:276
Expected error:
    <*errors.errorString | 0xc82025c2e0>: {
        s: "service verification failed for: 10.183.252.218\nexpected [service1-lwaxz service1-opmmr service1-susv5]\nreceived [wget: download timed out]",
    }
    service verification failed for: 10.183.252.218
    expected [service1-lwaxz service1-opmmr service1-susv5]
    received [wget: download timed out]
not to have occurred

Issues about this test specifically: #26128 #26685

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:304
Expected error:
    <*errors.errorString | 0xc8200db060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26194 #26338

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc8200db060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26168

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:960
Jun  5 01:13:37.029: expected un-ready endpoint for Service webserver within 5m0s, stdout: 

Issues about this test specifically: #26172

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:55
Expected error:
    <*errors.errorString | 0xc820b31740>: {
        s: "pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-06-05 01:07:34 -0700 PDT} FinishedAt:{Time:2016-06-05 01:08:04 -0700 PDT} ContainerID:docker://c4519357f9d4a56bccc0cab36ca6a134b8e9cfed3fc3935d2cc5e37bbc28187f}",
    }
    pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-06-05 01:07:34 -0700 PDT} FinishedAt:{Time:2016-06-05 01:08:04 -0700 PDT} ContainerID:docker://c4519357f9d4a56bccc0cab36ca6a134b8e9cfed3fc3935d2cc5e37bbc28187f}
not to have occurred

Issues about this test specifically: #26171

Failed: [k8s.io] Networking should function for intra-pod communication [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:213
Jun  5 01:06:25.878: Failed on attempt 38. Cleaning up. Details:
{
    "Hostname": "nettest-w0ixp",
    "Sent": {
        "nettest-mjuei": 15,
        "nettest-w0ixp": 15,
        "nettest-ypt1e": 5
    },
    "Received": {
        "nettest-mjuei": 15,
        "nettest-w0ixp": 15
    },
    "Errors": null,
    "Log": [
        "e2e-tests-nettest-2k3xo/nettest has 0 endpoints ([]), which is less than 3 as expected. Waiting for all endpoints to come up.",
        "e2e-tests-nettest-2k3xo/nettest has 1 endpoints ([http://10.180.0.9:8080]), which is less than 3 as expected. Waiting for all endpoints to come up.",
        "Attempting to contact http://10.180.0.9:8080",
        "Attempting to contact http://10.180.1.8:8080",
        "Attempting to contact http://10.180.2.9:8080",
        "Attempting to contact http://10.180.1.8:8080",
        "Attempting to contact http://10.180.2.9:8080",
        "Attempting to contact http://10.180.0.9:8080",
        "Attempting to contact http://10.180.2.9:8080",
        "Attempting to contact http://10.180.0.9:8080",
        "Attempting to contact http://10.180.1.8:8080",
        "Attempting to contact http://10.180.0.9:8080",
        "Attempting to contact http://10.180.1.8:8080",
        "Attempting to contact http://10.180.2.9:8080",
        "Attempting to contact http://10.180.0.9:8080",
        "Attempting to contact http://10.180.1.8:8080",
        "Attempting to contact http://10.180.2.9:8080",
        "Attempting to contact http://10.180.0.9:8080",
        "Attempting to contact http://10.180.1.8:8080",
        "Attempting to contact http://10.180.0.9:8080",
        "Attempting to contact http://10.180.1.8:8080",
        "Attempting to contact http://10.180.0.9:8080",
        "Attempting to contact http://10.180.1.8:8080",
        "Attempting to contact http://10.180.0.9:8080",
        "Attempting to contact http://10.180.1.8:8080",
        "Attempting to contact http://10.180.0.9:8080",
        "Attempting to contact http://10.180.1.8:8080",
        "Attempting to contact http://10.180.0.9:8080",
        "Attempting to contact http://10.180.1.8:8080",
        "Attempting to contact http://10.180.1.8:8080",
        "Attempting to contact http://10.180.0.9:8080",
        "Attempting to contact http://10.180.0.9:8080",
        "Attempting to contact http://10.180.1.8:8080",
        "Attempting to contact http://10.180.1.8:8080",
        "Attempting to contact http://10.180.0.9:8080",
        "Attempting to contact http://10.180.0.9:8080",
        "Attempting to contact http://10.180.1.8:8080",
        "Declaring failure for e2e-tests-nettest-2k3xo/nettest with 3 sent and 2 received and 3 peers"
    ],
    "StillContactingPeers": false
}

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/8511/

Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication on a single node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:232
Expected error:
    <*errors.errorString | 0xc820097f90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:265
Jun  5 05:42:57.832: Cannot added new entry in 180 seconds.

Issues about this test specifically: #26175 #26846

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:257
Jun  5 05:44:15.272: Pod did not start running: timed out waiting for the condition

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:389
Expected error:
    <*errors.errorString | 0xc820095fa0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26180

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:82
Expected error:
    <*errors.errorString | 0xc820be4630>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred

Failed: [k8s.io] Networking should function for intra-pod communication [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:213
Expected error:
    <*errors.errorString | 0xc8200fc0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:660
Expected error:
    <*errors.errorString | 0xc820095f90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:62
Expected error:
    <*errors.errorString | 0xc820468a40>: {
        s: "gave up waiting for pod 'client-containers-6e76f531-2b1a-11e6-9254-0242ac110007' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'client-containers-6e76f531-2b1a-11e6-9254-0242ac110007' to be 'success or failure' after 5m0s
not to have occurred

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:101
Expected error:
    <*errors.errorString | 0xc820dc5b50>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:276
Expected error:
    <*errors.errorString | 0xc8202d6070>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred

Issues about this test specifically: #26128 #26685

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/8555/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:483
Jun  5 19:40:40.864: Missing KubeDNS in kubectl cluster-info

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc820095f90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26168

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:304
Expected error:
    <*errors.errorString | 0xc8200f5060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26194 #26338

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:389
Expected error:
    <*errors.errorString | 0xc82010e0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26180

Failed: [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dashboard.go:87
Expected error:
    <*errors.errorString | 0xc820b12a20>: {
        s: "error waiting for service kube-system/kubernetes-dashboard to appear: timed out waiting for the condition",
    }
    error waiting for service kube-system/kubernetes-dashboard to appear: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26191

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/8562/

Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:89
Expected error:
    <*errors.errorString | 0xc820cb24e0>: {
        s: "error while stopping RC: rc-light: error getting replication controllers: error getting replication controllers: the server does not allow access to the requested resource (get replicationControllers)",
    }
    error while stopping RC: rc-light: error getting replication controllers: error getting replication controllers: the server does not allow access to the requested resource (get replicationControllers)
not to have occurred

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:115
Not all RC/pod/service trials succeeded: got 33 errors
Tail (99 percentile) latency should be less than 50s
50, 90, 99 percentiles: 1.497894238s 2.199578566s 52.34710506s

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:123
Expected error:
    <*errors.StatusError | 0xc820d56280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server cannot complete the requested operation at this time, try again later (delete persistentVolumeClaims pvc-y3r2n)",
            Reason: "ServerTimeout",
            Details: {
                Name: "pvc-y3r2n",
                Group: "",
                Kind: "persistentVolumeClaims",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "{\"ErrStatus\":{\"metadata\":{},\"status\":\"Failure\",\"message\":\"The  operation against  could not be completed at this time, please try again.\",\"reason\":\"ServerTimeout\",\"details\":{},\"code\":500}}",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 504,
        },
    }
    the server cannot complete the requested operation at this time, try again later (delete persistentVolumeClaims pvc-y3r2n)
not to have occurred

Issues about this test specifically: #26682

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  5 22:00:48.093: All nodes should be ready after test, the server has asked for the client to provide credentials (get nodes)

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820b76b00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-dns-acb11/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26168

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:111
Jun  5 22:00:27.920: expecting wait timeout error but got: the server has asked for the client to provide credentials (get pods test-webserver-46650a4e-2ba3-11e6-864a-0242ac110005)

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:265
Expected error:
    <*errors.errorString | 0xc820a99e70>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.156.175 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-tyuzj] []  0xc820788f20  error validating \"STDIN\": error validating data: the server does not allow access to the requested resource (get .extensions); if you choose to ignore these errors, turn validation off with --validate=false\n [] <nil> 0xc820789580 exit status 1 <nil> true [0xc820c483a0 0xc820c483c8 0xc820c483d8] [0xc820c483a0 0xc820c483c8 0xc820c483d8] [0xc820c483a8 0xc820c483c0 0xc820c483d0] [0xa66c10 0xa66d70 0xa66d70] 0xc820fa1380}:\nCommand stdout:\n\nstderr:\nerror validating \"STDIN\": error validating data: the server does not allow access to the requested resource (get .extensions); if you choose to ignore these errors, turn validation off with --validate=false\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.156.175 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-tyuzj] []  0xc820788f20  error validating "STDIN": error validating data: the server does not allow access to the requested resource (get .extensions); if you choose to ignore these errors, turn validation off with --validate=false
     [] <nil> 0xc820789580 exit status 1 <nil> true [0xc820c483a0 0xc820c483c8 0xc820c483d8] [0xc820c483a0 0xc820c483c8 0xc820c483d8] [0xc820c483a8 0xc820c483c0 0xc820c483d0] [0xa66c10 0xa66d70 0xa66d70] 0xc820fa1380}:
    Command stdout:

    stderr:
    error validating "STDIN": error validating data: the server does not allow access to the requested resource (get .extensions); if you choose to ignore these errors, turn validation off with --validate=false

    error:
    exit status 1

not to have occurred

Issues about this test specifically: #26175 #26846

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:825
Expected error:
    <*errors.errorString | 0xc820d92940>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.156.175 --kubeconfig=/workspace/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-kf0kt] []  <nil>  Error from server: the server does not allow access to the requested resource (get replicasets.extensions)\n [] <nil> 0xc8210be560 exit status 1 <nil> true [0xc8200aa260 0xc8200aa278 0xc8200aa2a8] [0xc8200aa260 0xc8200aa278 0xc8200aa2a8] [0xc8200aa270 0xc8200aa290] [0xa66d70 0xa66d70] 0xc820e39f80}:\nCommand stdout:\n\nstderr:\nError from server: the server does not allow access to the requested resource (get replicasets.extensions)\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.156.175 --kubeconfig=/workspace/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-kf0kt] []  <nil>  Error from server: the server does not allow access to the requested resource (get replicasets.extensions)
     [] <nil> 0xc8210be560 exit status 1 <nil> true [0xc8200aa260 0xc8200aa278 0xc8200aa2a8] [0xc8200aa260 0xc8200aa278 0xc8200aa2a8] [0xc8200aa270 0xc8200aa290] [0xa66d70 0xa66d70] 0xc820e39f80}:
    Command stdout:

    stderr:
    Error from server: the server does not allow access to the requested resource (get replicasets.extensions)

    error:
    exit status 1

not to have occurred

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  5 21:59:48.082: Couldn't delete ns "e2e-tests-job-esuk6": the server does not allow access to the requested resource (delete namespaces e2e-tests-job-esuk6)

Failed: [k8s.io] Downward API volume should provide container's cpu request {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:161
Jun  5 21:59:29.325: Failed to create pod: the server does not allow access to the requested resource (post pods)

Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820286580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-secrets-2ezuw/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc8205d8580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-resourcequota-91qjh/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820e54200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-2baew/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Pods should not be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun  5 22:00:48.268: All nodes should be ready after test, the server has asked for the client to provide credentials (get nodes)

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:304
Jun  5 22:00:01.845: Failed to get pod : the server does not allow access to the requested resource (get pods dns-test-5fe1c406-2ba3-11e6-befa-0242ac110005)

Issues about this test specifically: #26194 #26338

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/8608/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:223
Jun  6 13:02:10.369: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:403
Jun  6 12:58:44.127: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:276
Expected error:
    <*errors.errorString | 0xc8212399f0>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred

Issues about this test specifically: #26128 #26685

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:308
Jun  6 13:01:15.681: Pod did not start running: timed out waiting for the condition

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:304
Expected error:
    <*errors.errorString | 0xc8201060b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26194 #26338

Failed: [k8s.io] EmptyDir wrapper volumes should becomes running {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:179
Expected error:
    <*errors.errorString | 0xc820015180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] PrivilegedPod should test privileged pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/privileged.go:67
Expected error:
    <*errors.errorString | 0xc820095f90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/8705/

Multiple broken tests:

Failed: [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820e26800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-emptydir-r68jl/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:260
0: path /api/v1/namespaces/e2e-tests-proxy-ewrzd/pods/http:proxy-service-t4vqa-qd6tj:80/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server has prevented the request from succeeding Reason:InternalError Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Error: 'EOF'\nTrying to reach: 'http://10.180.1.8:80/'" field:"" > retryAfterSeconds:0  Code:503}

Issues about this test specifically: #26164 #26210

Failed: [k8s.io] Downward API volume should provide container's cpu limit {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820c55e00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-downward-api-ve12p/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820ebc380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-horizontal-pod-autoscaling-xui6g/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:167
validating pre-stop.
Expected error:
    <*errors.errorString | 0xc8200f40b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] Kibana Logging Instances Is Alive should check that the Kibana logging instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820c25780>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kibana-logging-ykg8q/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc820efd500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-olz25/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/8929/

Multiple broken tests:

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 10 20:29:21.073: Couldn't delete ns "e2e-tests-proxy-6ws2o": the server does not allow access to the requested resource (delete namespaces e2e-tests-proxy-6ws2o)

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:708
Expected error:
    <*errors.errorString | 0xc820a3cea0>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.145.242 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-eq20f] []  0xc82089d7e0  Error from server: error when stopping \"STDIN\": the server does not allow access to the requested resource (get replicationControllers redis-master)\n [] <nil> 0xc82089dee0 exit status 1 <nil> true [0xc8200d2c68 0xc8200d2c90 0xc8200d2ca0] [0xc8200d2c68 0xc8200d2c90 0xc8200d2ca0] [0xc8200d2c70 0xc8200d2c88 0xc8200d2c98] [0xa63c50 0xa63db0 0xa63db0] 0xc820b09620}:\nCommand stdout:\n\nstderr:\nError from server: error when stopping \"STDIN\": the server does not allow access to the requested resource (get replicationControllers redis-master)\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.145.242 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-eq20f] []  0xc82089d7e0  Error from server: error when stopping "STDIN": the server does not allow access to the requested resource (get replicationControllers redis-master)
     [] <nil> 0xc82089dee0 exit status 1 <nil> true [0xc8200d2c68 0xc8200d2c90 0xc8200d2ca0] [0xc8200d2c68 0xc8200d2c90 0xc8200d2ca0] [0xc8200d2c70 0xc8200d2c88 0xc8200d2c98] [0xa63c50 0xa63db0 0xa63db0] 0xc820b09620}:
    Command stdout:

    stderr:
    Error from server: error when stopping "STDIN": the server does not allow access to the requested resource (get replicationControllers redis-master)

    error:
    exit status 1

not to have occurred

Issues about this test specifically: #26139

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:134
Expected error:
    <*errors.StatusError | 0xc820dcc480>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get pods annotationupdate93c998b9-2f84-11e6-80e5-0242ac110004)",
            Reason: "Forbidden",
            Details: {
                Name: "annotationupdate93c998b9-2f84-11e6-80e5-0242ac110004",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-downward-api-k68yw/pods/annotationupdate93c998b9-2f84-11e6-80e5-0242ac110004\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get pods annotationupdate93c998b9-2f84-11e6-80e5-0242ac110004)
not to have occurred

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:97
Expected error:
    <*errors.StatusError | 0xc820c12100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get pods labelsupdate980db5ae-2f84-11e6-842f-0242ac110004)",
            Reason: "Forbidden",
            Details: {
                Name: "labelsupdate980db5ae-2f84-11e6-842f-0242ac110004",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-downward-api-kte5c/pods/labelsupdate980db5ae-2f84-11e6-842f-0242ac110004\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get pods labelsupdate980db5ae-2f84-11e6-842f-0242ac110004)
not to have occurred

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.StatusError | 0xc820c7cb80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get pods different-node-webserver)",
            Reason: "Forbidden",
            Details: {
                Name: "different-node-webserver",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-nettest-16wzw/pods/different-node-webserver\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get pods different-node-webserver)
not to have occurred

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/9252/

Multiple broken tests:

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:389
Expected error:
    <*errors.errorString | 0xc820075f80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:130
Jun 15 13:07:47.804: Couldn't delete ns "e2e-tests-kubectl-c1oka": namespace e2e-tests-kubectl-c1oka was not deleted within limit: timed out waiting for the condition, pods remaining: [redis-master-u4vpz]

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:483
Jun 15 12:58:12.405: Missing KubeDNS in kubectl cluster-info

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc8200b9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:261
0: path /api/v1/namespaces/e2e-tests-proxy-ia8fj/pods/proxy-service-2p1x9-bdta0/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server has prevented the request from succeeding Reason:InternalError Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Error: 'EOF'\nTrying to reach: 'http://10.180.2.8:80/'" field:"" > retryAfterSeconds:0  Code:503}

Issues about this test specifically: #26164 #26210

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:960
Jun 15 13:04:50.739: expected un-ready endpoint for Service webserver within 5m0s, stdout: 

Issues about this test specifically: #26172

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Jun 15 13:17:32.279: timeout waiting 15m0s for pods size to be 2

Issues about this test specifically: #27443

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:265
Jun 15 13:10:17.955: Frontend service did not start serving content in 600 seconds.

Issues about this test specifically: #26175 #26846 #27334

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:130
Jun 15 13:08:35.810: Couldn't delete ns "e2e-tests-kubectl-c1oka": Operation cannot be fulfilled on namespaces "e2e-tests-kubectl-c1oka": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system.

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:55
Expected error:
    <*errors.errorString | 0xc82027c800>: {
        s: "pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-06-15 13:04:57 -0700 PDT} FinishedAt:{Time:2016-06-15 13:05:27 -0700 PDT} ContainerID:docker://b7d49add3170081a596a2a33300b47608531005f8069a807933f9d364dd29c5a}",
    }
    pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-06-15 13:04:57 -0700 PDT} FinishedAt:{Time:2016-06-15 13:05:27 -0700 PDT} ContainerID:docker://b7d49add3170081a596a2a33300b47608531005f8069a807933f9d364dd29c5a}
not to have occurred

Issues about this test specifically: #26171

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:304
Expected error:
    <*errors.errorString | 0xc820075f80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26194 #26338

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/9261/

Multiple broken tests:

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc8200d60b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:389
Expected error:
    <*errors.errorString | 0xc8200e00b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:304
Expected error:
    <*errors.errorString | 0xc8200e20b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26194 #26338

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:55
Expected error:
    <*errors.errorString | 0xc820b11480>: {
        s: "pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-06-15 16:37:53 -0700 PDT} FinishedAt:{Time:2016-06-15 16:38:23 -0700 PDT} ContainerID:docker://1ae8e23568489e1de28ab594d9df7f0f79882ac12d890c765d803c50930a7d7e}",
    }
    pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-06-15 16:37:53 -0700 PDT} FinishedAt:{Time:2016-06-15 16:38:23 -0700 PDT} ContainerID:docker://1ae8e23568489e1de28ab594d9df7f0f79882ac12d890c765d803c50930a7d7e}
not to have occurred

Issues about this test specifically: #26171

Failed: [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dashboard.go:87
Expected error:
    <*errors.errorString | 0xc820a9b430>: {
        s: "error waiting for service kube-system/kubernetes-dashboard to appear: timed out waiting for the condition",
    }
    error waiting for service kube-system/kubernetes-dashboard to appear: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26191

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:483
Jun 15 16:37:30.562: Missing KubeDNS in kubectl cluster-info

Failed: [k8s.io] PrivilegedPod should test privileged pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/privileged.go:67
Expected error:
    <*errors.errorString | 0xc8200d60b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

@k8s-github-robot k8s-github-robot added the priority/critical-urgent Highest priority. Must be actively worked on as someone's top priority right now. label Jun 16, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/9276/

Multiple broken tests:

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:261
0: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/pods/http:proxy-service-x33w5-u3rao:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/pods/http:proxy-service-x33w5-u3rao:160/\"" field:"" > retryAfterSeconds:0  Code:403}
3: path /api/v1/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:tlsportname1/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/pods/http:proxy-service-x33w5-u3rao:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/pods/http:proxy-service-x33w5-u3rao:1080/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/namespaces/e2e-tests-proxy-4vr04/pods/http:proxy-service-x33w5-u3rao:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-4vr04/pods/http:proxy-service-x33w5-u3rao:1080/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/namespaces/e2e-tests-proxy-4vr04/pods/proxy-service-x33w5-u3rao:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-4vr04/pods/proxy-service-x33w5-u3rao:160/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/proxy-service-x33w5:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/proxy-service-x33w5:portname2/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:tlsportname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:tlsportname1/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/namespaces/e2e-tests-proxy-4vr04/pods/https:proxy-service-x33w5-u3rao:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-4vr04/pods/https:proxy-service-x33w5-u3rao:443/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/proxy-service-x33w5:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/proxy-service-x33w5:81/\"" field:"" > retryAfterSeconds:0  Code:403}
3: path /api/v1/namespaces/e2e-tests-proxy-4vr04/pods/http:proxy-service-x33w5-u3rao:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-4vr04/pods/http:proxy-service-x33w5-u3rao:162/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/namespaces/e2e-tests-proxy-4vr04/services/http:proxy-service-x33w5:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-4vr04/services/http:proxy-service-x33w5:portname2/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/pods/proxy-service-x33w5-u3rao:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/pods/proxy-service-x33w5-u3rao:1080/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/pods/proxy-service-x33w5-u3rao:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/pods/proxy-service-x33w5-u3rao:160/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/namespaces/e2e-tests-proxy-4vr04/pods/http:proxy-service-x33w5-u3rao:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-4vr04/pods/http:proxy-service-x33w5-u3rao:162/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/namespaces/e2e-tests-proxy-4vr04/pods/https:proxy-service-x33w5-u3rao:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-4vr04/pods/https:proxy-service-x33w5-u3rao:460/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/proxy-service-x33w5:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/proxy-service-x33w5:portname1/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:tlsportname2/ took 39.69543422s > 30s
0: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:444/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:444/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/namespaces/e2e-tests-proxy-4vr04/services/proxy-service-x33w5:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-4vr04/services/proxy-service-x33w5:portname2/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/proxy-service-x33w5:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/proxy-service-x33w5:portname2/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/http:proxy-service-x33w5:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/http:proxy-service-x33w5:portname2/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/http:proxy-service-x33w5:80/ took 34.5393539s > 30s
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/pods/proxy-service-x33w5-u3rao:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/pods/proxy-service-x33w5-u3rao:162/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/namespaces/e2e-tests-proxy-4vr04/pods/http:proxy-service-x33w5-u3rao:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-4vr04/pods/http:proxy-service-x33w5-u3rao:1080/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/proxy-service-x33w5:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/proxy-service-x33w5:81/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/namespaces/e2e-tests-proxy-4vr04/services/http:proxy-service-x33w5:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-4vr04/services/http:proxy-service-x33w5:portname2/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/pods/proxy-service-x33w5-u3rao:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/pods/proxy-service-x33w5-u3rao:1080/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/proxy-service-x33w5:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/proxy-service-x33w5:portname1/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:444/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:444/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/namespaces/e2e-tests-proxy-4vr04/pods/http:proxy-service-x33w5-u3rao:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-4vr04/pods/http:proxy-service-x33w5-u3rao:162/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/namespaces/e2e-tests-proxy-4vr04/pods/https:proxy-service-x33w5-u3rao:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-4vr04/pods/https:proxy-service-x33w5-u3rao:460/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
2: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/pods/http:proxy-service-x33w5-u3rao:162/ took 36.635952764s > 30s
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:443/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:443/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/namespaces/e2e-tests-proxy-4vr04/pods/proxy-service-x33w5-u3rao/proxy/ took 39.138221727s > 30s
3: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:444/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:444/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:tlsportname1/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/namespaces/e2e-tests-proxy-4vr04/pods/https:proxy-service-x33w5-u3rao:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-4vr04/pods/https:proxy-service-x33w5-u3rao:462/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
2: path /api/v1/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:tlsportname2/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/http:proxy-service-x33w5:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/http:proxy-service-x33w5:portname2/\"" field:"" > retryAfterSeconds:0  Code:403}
2: path /api/v1/namespaces/e2e-tests-proxy-4vr04/services/http:proxy-service-x33w5:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
2: path /api/v1/namespaces/e2e-tests-proxy-4vr04/pods/proxy-service-x33w5-u3rao:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
2: path /api/v1/namespaces/e2e-tests-proxy-4vr04/pods/proxy-service-x33w5-u3rao:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
2: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:443/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
2: path /api/v1/namespaces/e2e-tests-proxy-4vr04/services/proxy-service-x33w5:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
2: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/pods/http:proxy-service-x33w5-u3rao:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
2: path /api/v1/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
4: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:tlsportname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:tlsportname2/\"" field:"" > retryAfterSeconds:0  Code:403}
2: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/pods/http:proxy-service-x33w5-u3rao:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
2: path /api/v1/namespaces/e2e-tests-proxy-4vr04/pods/https:proxy-service-x33w5-u3rao:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
2: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/proxy-service-x33w5:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
4: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:444/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:444/\"" field:"" > retryAfterSeconds:0  Code:403}
2: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/http:proxy-service-x33w5:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
2: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/pods/proxy-service-x33w5-u3rao:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
4: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/proxy-service-x33w5:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/proxy-service-x33w5:portname1/\"" field:"" > retryAfterSeconds:0  Code:403}
2: path /api/v1/namespaces/e2e-tests-proxy-4vr04/pods/http:proxy-service-x33w5-u3rao:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
2: path /api/v1/namespaces/e2e-tests-proxy-4vr04/pods/proxy-service-x33w5-u3rao:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
2: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/proxy-service-x33w5:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
2: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/pods/proxy-service-x33w5-u3rao:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
2: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/pods/proxy-service-x33w5-u3rao:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
2: path /api/v1/namespaces/e2e-tests-proxy-4vr04/pods/https:proxy-service-x33w5-u3rao:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
2: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/proxy-service-x33w5:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
2: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:tlsportname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
2: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:444/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
2: path /api/v1/namespaces/e2e-tests-proxy-4vr04/pods/http:proxy-service-x33w5-u3rao:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
3: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/proxy-service-x33w5:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/proxy-service-x33w5:81/\"" field:"" > retryAfterSeconds:0  Code:403}
2: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/http:proxy-service-x33w5:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:tlsportname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:tlsportname1/\"" field:"" > retryAfterSeconds:0  Code:403}
4: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/pods/http:proxy-service-x33w5-u3rao:160/ took 39.423056538s > 30s
2: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:tlsportname1/ took 50.858383644s > 30s
4: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/proxy-service-x33w5:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/proxy-service-x33w5:portname2/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/pods/http:proxy-service-x33w5-u3rao:160/ took 59.251979917s > 30s
4: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/pods/proxy-service-x33w5-u3rao:162/ took 45.911559521s > 30s
5: path /api/v1/namespaces/e2e-tests-proxy-4vr04/services/http:proxy-service-x33w5:portname2/proxy/ took 39.555051663s > 30s
5: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:444/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:444/\"" field:"" > retryAfterSeconds:0  Code:403}
2: path /api/v1/namespaces/e2e-tests-proxy-4vr04/services/http:proxy-service-x33w5:portname2/proxy/ took 1m4.18989825s > 30s
6: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:444/ took 38.854933475s > 30s
4: path /api/v1/namespaces/e2e-tests-proxy-4vr04/services/http:proxy-service-x33w5:portname2/proxy/ took 55.794011988s > 30s
7: path /api/v1/proxy/namespaces/e2e-tests-proxy-4vr04/services/https:proxy-service-x33w5:444/ took 36.383793805s > 30s

Issues about this test specifically: #26164 #26210

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:276
Expected error:
    <*errors.StatusError | 0xc820be6100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post pods)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-services-eb0un/pods\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post pods)
not to have occurred

Issues about this test specifically: #26128 #26685

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:100
Jun 15 22:00:26.956: Failed to create pod: the server does not allow access to the requested resource (post pods)

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Jun 15 22:00:28.660: Error creating a pod: the server does not allow access to the requested resource (post pods)

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:86
pod never became ready
Expected error:
    <*errors.StatusError | 0xc820b89b00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server has asked for the client to provide credentials (get pods test-webserver-0066b3fe-337f-11e6-8f15-0242ac110003)",
            Reason: "Unauthorized",
            Details: {
                Name: "test-webserver-0066b3fe-337f-11e6-8f15-0242ac110003",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Unauthorized",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 401,
        },
    }
    the server has asked for the client to provide credentials (get pods test-webserver-0066b3fe-337f-11e6-8f15-0242ac110003)
not to have occurred

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1316
Jun 15 22:00:07.966: Failed to open websocket to wss://130.211.165.234:443/api/v1/namespaces/e2e-tests-pods-61qrj/pods/pod-logs-websocket-1532c930-337f-11e6-92e7-0242ac110003/log?container=main: websocket.Dial wss://130.211.165.234:443/api/v1/namespaces/e2e-tests-pods-61qrj/pods/pod-logs-websocket-1532c930-337f-11e6-92e7-0242ac110003/log?container=main: bad status

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:279
Expected error:
    <*errors.errorString | 0xc820884450>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.165.234 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-4tymp] []  0xc820a59540  Error from server: error when stopping \"STDIN\": the server does not allow access to the requested resource (delete pods nginx)\n [] <nil> 0xc820a59d80 exit status 1 <nil> true [0xc82022e150 0xc82022e178 0xc82022e188] [0xc82022e150 0xc82022e178 0xc82022e188] [0xc82022e158 0xc82022e170 0xc82022e180] [0xa798d0 0xa79a30 0xa79a30] 0xc8200d6300}:\nCommand stdout:\n\nstderr:\nError from server: error when stopping \"STDIN\": the server does not allow access to the requested resource (delete pods nginx)\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.165.234 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-4tymp] []  0xc820a59540  Error from server: error when stopping "STDIN": the server does not allow access to the requested resource (delete pods nginx)
     [] <nil> 0xc820a59d80 exit status 1 <nil> true [0xc82022e150 0xc82022e178 0xc82022e188] [0xc82022e150 0xc82022e178 0xc82022e188] [0xc82022e158 0xc82022e170 0xc82022e180] [0xa798d0 0xa79a30 0xa79a30] 0xc8200d6300}:
    Command stdout:

    stderr:
    Error from server: error when stopping "STDIN": the server does not allow access to the requested resource (delete pods nginx)

    error:
    exit status 1

not to have occurred

Failed: [k8s.io] Pods should not be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1076
getting pod liveness-exec
Expected error:
    <*errors.StatusError | 0xc82082e600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server has asked for the client to provide credentials (get pods liveness-exec)",
            Reason: "Unauthorized",
            Details: {
                Name: "liveness-exec",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Unauthorized",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 401,
        },
    }
    the server has asked for the client to provide credentials (get pods liveness-exec)
not to have occurred

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:111
Jun 15 22:00:19.856: expecting wait timeout error but got: the server has asked for the client to provide credentials (get pods test-webserver-14ac8136-337f-11e6-8101-0242ac110003)

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:47
Expected error:
    <*errors.StatusError | 0xc8208b2080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get pods)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-replicaset-tjqc8/pods?labelSelector=name%3Dmy-hostname-private-1923476e-337f-11e6-869a-0242ac110003\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get pods)
not to have occurred

Failed: [k8s.io] Downward API volume should provide container's memory request {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:129
Expected error:
    <*errors.StatusError | 0xc820089e00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-downward-api-4nabx/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:664
Jun 15 22:00:14.707: Failed to create serverPod: the server does not allow access to the requested resource (post pods)
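A triage note on the two server responses repeated throughout these logs: `401 Unauthorized` ("the server has asked for the client to provide credentials") means the apiserver did not accept the presented credentials at all, while `403 Forbidden` ("the server does not allow access to the requested resource") means the credentials were recognized but the request was denied, which is why both appear when master credentials rotate mid-run. A minimal stdlib-Go sketch of that distinction (illustrative only, not the e2e framework's code):

```go
package main

import (
	"fmt"
	"net/http"
)

// classify maps an apiserver HTTP status code to the meaning seen in the
// failure messages above. 401 and 403 look similar in the logs but point
// at different problems: missing/stale credentials vs. denied access.
func classify(code int) string {
	switch code {
	case http.StatusUnauthorized: // 401
		return "Unauthorized: client must provide credentials"
	case http.StatusForbidden: // 403
		return "Forbidden: server does not allow access"
	default:
		return fmt.Sprintf("other status: %d", code)
	}
}

func main() {
	for _, c := range []int{401, 403, 200} {
		fmt.Println(classify(c))
	}
}
```

Seeing a run flip from 401s to 403s, as in the proxy failures above, is consistent with credentials being rotated or invalidated while tests are in flight.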

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/9278/

Multiple broken tests:

Failed: [k8s.io] V1Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:129
Expected error:
    <*errors.StatusError | 0xc82085a400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-v1job-yg11y/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:129
Expected error:
    <*errors.StatusError | 0xc820adbd80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-gw9t4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Pods should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1049
getting pod liveness-exec in namespace e2e-tests-pods-tby6z
Expected error:
    <*errors.StatusError | 0xc820fd0680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get pods liveness-exec)",
            Reason: "Forbidden",
            Details: {
                Name: "liveness-exec",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-pods-tby6z/pods/liveness-exec\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get pods liveness-exec)
not to have occurred

Failed: [k8s.io] Downward API volume should provide container's memory request {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:129
Expected error:
    <*errors.StatusError | 0xc820a24f00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-downward-api-u4y36/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:664
Jun 15 22:48:20.814: Failed to create serverPod: the server does not allow access to the requested resource (post pods)

Failed: ThirdParty resources Simple Third Party creating/deleting thirdparty objects works [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:129
Expected error:
    <*errors.StatusError | 0xc8208d2500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-thirdparty-uuqad/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:129
Expected error:
    <*errors.StatusError | 0xc820251600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-emptydir-roifj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Networking should provide unchanging, static URL paths for kubernetes api services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:129
Expected error:
    <*errors.StatusError | 0xc820938980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-nettest-f8fh1/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26838

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/9319/

Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication on a single node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:244
Expected error:
    <*errors.errorString | 0xc8200b9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:109
Expected error:
    <*errors.errorString | 0xc820370a70>: {
        s: "gave up waiting for pod 'pod-87c2fe79-3401-11e6-bdaa-0242ac110003' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'pod-87c2fe79-3401-11e6-bdaa-0242ac110003' to be 'success or failure' after 5m0s
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:583
Jun 16 13:37:42.509: Verified 0 of 1 pods , error : timed out waiting for the condition

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:194
Expected success, but got an error:
    <*errors.errorString | 0xc8200de0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:89
Expected error:
    <*errors.errorString | 0xc82052c700>: {
        s: "gave up waiting for pod 'pod-7863dba5-3401-11e6-8b10-0242ac110003' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'pod-7863dba5-3401-11e6-8b10-0242ac110003' to be 'success or failure' after 5m0s
not to have occurred

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:70
Expected error:
    <*errors.errorString | 0xc820973840>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred

Issues about this test specifically: #26509 #26834

Failed: [k8s.io] PrivilegedPod should test privileged pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/privileged.go:67
Expected error:
    <*errors.errorString | 0xc8200d5060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/9409/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1063
Expected error:
    <*errors.errorString | 0xc8207cae50>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.154.74 --kubeconfig=/workspace/.kube/config --namespace= run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc820ac38c0  Error from server: the server does not allow access to the requested resource (get pods e2e-test-rm-busybox-job-ndhc9)\n [] <nil> 0xc820ac3ee0 exit status 1 <nil> true [0xc8202ce080 0xc8202ce0a8 0xc8202ce0b8] [0xc8202ce080 0xc8202ce0a8 0xc8202ce0b8] [0xc8202ce088 0xc8202ce0a0 0xc8202ce0b0] [0xa7e5a0 0xa7e700 0xa7e700] 0xc820a821e0}:\nCommand stdout:\n\nstderr:\nError from server: the server does not allow access to the requested resource (get pods e2e-test-rm-busybox-job-ndhc9)\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.154.74 --kubeconfig=/workspace/.kube/config --namespace= run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc820ac38c0  Error from server: the server does not allow access to the requested resource (get pods e2e-test-rm-busybox-job-ndhc9)
     [] <nil> 0xc820ac3ee0 exit status 1 <nil> true [0xc8202ce080 0xc8202ce0a8 0xc8202ce0b8] [0xc8202ce080 0xc8202ce0a8 0xc8202ce0b8] [0xc8202ce088 0xc8202ce0a0 0xc8202ce0b0] [0xa7e5a0 0xa7e700 0xa7e700] 0xc820a821e0}:
    Command stdout:

    stderr:
    Error from server: the server does not allow access to the requested resource (get pods e2e-test-rm-busybox-job-ndhc9)

    error:
    exit status 1

not to have occurred

Issues about this test specifically: #26728

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:841
Expected error:
    <*errors.errorString | 0xc82094c070>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.154.74 --kubeconfig=/workspace/.kube/config run e2e-test-nginx-deployment --image=gcr.io/google_containers/nginx:1.7.9 --namespace=e2e-tests-kubectl-ns9ia] []  <nil>  error: failed to discover supported resources: the server does not allow access to the requested resource\n [] <nil> 0xc82081bc60 exit status 1 <nil> true [0xc820290fd0 0xc820290fe8 0xc820291000] [0xc820290fd0 0xc820290fe8 0xc820291000] [0xc820290fe0 0xc820290ff8] [0xa7e700 0xa7e700] 0xc820743a40}:\nCommand stdout:\n\nstderr:\nerror: failed to discover supported resources: the server does not allow access to the requested resource\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.154.74 --kubeconfig=/workspace/.kube/config run e2e-test-nginx-deployment --image=gcr.io/google_containers/nginx:1.7.9 --namespace=e2e-tests-kubectl-ns9ia] []  <nil>  error: failed to discover supported resources: the server does not allow access to the requested resource
     [] <nil> 0xc82081bc60 exit status 1 <nil> true [0xc820290fd0 0xc820290fe8 0xc820291000] [0xc820290fd0 0xc820290fe8 0xc820291000] [0xc820290fe0 0xc820290ff8] [0xa7e700 0xa7e700] 0xc820743a40}:
    Command stdout:

    stderr:
    error: failed to discover supported resources: the server does not allow access to the requested resource

    error:
    exit status 1

not to have occurred

Issues about this test specifically: #27014

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:279
Expected error:
    <*errors.errorString | 0xc82080ded0>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.154.74 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-dga84] []  0xc8207c6600  Error from server: error when stopping \"STDIN\": the server does not allow access to the requested resource (delete pods nginx)\n [] <nil> 0xc8207c6ce0 exit status 1 <nil> true [0xc82002e070 0xc82002e098 0xc82002e0a8] [0xc82002e070 0xc82002e098 0xc82002e0a8] [0xc82002e078 0xc82002e090 0xc82002e0a0] [0xa7e5a0 0xa7e700 0xa7e700] 0xc82020ec60}:\nCommand stdout:\n\nstderr:\nError from server: error when stopping \"STDIN\": the server does not allow access to the requested resource (delete pods nginx)\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.154.74 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-dga84] []  0xc8207c6600  Error from server: error when stopping "STDIN": the server does not allow access to the requested resource (delete pods nginx)
     [] <nil> 0xc8207c6ce0 exit status 1 <nil> true [0xc82002e070 0xc82002e098 0xc82002e0a8] [0xc82002e070 0xc82002e098 0xc82002e0a8] [0xc82002e078 0xc82002e090 0xc82002e0a0] [0xa7e5a0 0xa7e700 0xa7e700] 0xc82020ec60}:
    Command stdout:

    stderr:
    Error from server: error when stopping "STDIN": the server does not allow access to the requested resource (delete pods nginx)

    error:
    exit status 1

not to have occurred

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:407
Expected error:
    <*errors.StatusError | 0xc820c50300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (delete pods terminating-pod)",
            Reason: "Forbidden",
            Details: {
                Name: "terminating-pod",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-resourcequota-h4rdf/pods/terminating-pod\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (delete pods terminating-pod)
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820860700>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-pds9k/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26139

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:261
0: path /api/v1/proxy/namespaces/e2e-tests-proxy-6kyzs/services/proxy-service-wx0g8:81/ took 31.305458781s > 30s
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-6kyzs/pods/proxy-service-wx0g8-2ncqz:160/ took 30.969007599s > 30s
0: path /api/v1/namespaces/e2e-tests-proxy-6kyzs/services/http:proxy-service-wx0g8:portname2/proxy/ took 38.572140698s > 30s
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-6kyzs/services/proxy-service-wx0g8:81/ took 33.482255093s > 30s
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-6kyzs/pods/http:proxy-service-wx0g8-2ncqz:162/ took 32.184647174s > 30s
0: path /api/v1/proxy/namespaces/e2e-tests-proxy-6kyzs/services/http:proxy-service-wx0g8:portname1/ took 36.76187045s > 30s
0: path /api/v1/namespaces/e2e-tests-proxy-6kyzs/services/https:proxy-service-wx0g8:tlsportname1/proxy/ took 43.004474247s > 30s
1: path /api/v1/namespaces/e2e-tests-proxy-6kyzs/pods/http:proxy-service-wx0g8-2ncqz:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-6kyzs/pods/http:proxy-service-wx0g8-2ncqz:160/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
0: path /api/v1/namespaces/e2e-tests-proxy-6kyzs/services/proxy-service-wx0g8:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-6kyzs/services/proxy-service-wx0g8:portname1/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-6kyzs/services/proxy-service-wx0g8:80/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-6kyzs/services/proxy-service-wx0g8:80/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-6kyzs/services/http:proxy-service-wx0g8:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-6kyzs/services/http:proxy-service-wx0g8:portname1/\"" field:"" > retryAfterSeconds:0  Code:403}
2: path /api/v1/namespaces/e2e-tests-proxy-6kyzs/pods/http:proxy-service-wx0g8-2ncqz:162/proxy/ took 30.653018526s > 30s
1: path /api/v1/namespaces/e2e-tests-proxy-6kyzs/services/http:proxy-service-wx0g8:portname2/proxy/ took 44.128055261s > 30s
3: path /api/v1/proxy/namespaces/e2e-tests-proxy-6kyzs/services/http:proxy-service-wx0g8:portname1/ took 40.333576595s > 30s
2: path /api/v1/namespaces/e2e-tests-proxy-6kyzs/pods/https:proxy-service-wx0g8-2ncqz:462/proxy/ took 50.675561651s > 30s
2: path /api/v1/namespaces/e2e-tests-proxy-6kyzs/pods/http:proxy-service-wx0g8-2ncqz:160/proxy/ took 52.967786772s > 30s
2: path /api/v1/proxy/namespaces/e2e-tests-proxy-6kyzs/pods/proxy-service-wx0g8-2ncqz:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-6kyzs/pods/proxy-service-wx0g8-2ncqz:1080/\"" field:"" > retryAfterSeconds:0  Code:403}

Issues about this test specifically: #26164 #26210

Failed: [k8s.io] Deployment deployment should create new pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:55
Expected error:
    <*errors.StatusError | 0xc820b5e400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (delete deployments.extensions test-new-deployment)",
            Reason: "Forbidden",
            Details: {
                Name: "test-new-deployment",
                Group: "extensions",
                Kind: "deployments",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-sdot3/deployments/test-new-deployment\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (delete deployments.extensions test-new-deployment)
not to have occurred

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/9538/

Multiple broken tests:

Failed: [k8s.io] Pods should not be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1076
starting pod liveness-exec in namespace e2e-tests-pods-gbm5h
Expected error:
    <*errors.StatusError | 0xc820895500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get pods)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-pods-gbm5h/pods?fieldSelector=metadata.name%3Dliveness-exec\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get pods)
not to have occurred

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:261
0: path /api/v1/proxy/namespaces/e2e-tests-proxy-u61vj/services/http:proxy-service-hqm5j:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-u61vj/services/http:proxy-service-hqm5j:81/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-u61vj/services/https:proxy-service-hqm5j:tlsportname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/proxy/namespaces/e2e-tests-proxy-u61vj/services/https:proxy-service-hqm5j:tlsportname1/\"" field:"" > retryAfterSeconds:0  Code:403}
1: path /api/v1/namespaces/e2e-tests-proxy-u61vj/services/https:proxy-service-hqm5j:tlsportname2/proxy/ took 35.309312481s > 30s
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-u61vj/pods/proxy-service-hqm5j-jgtw9:1080/ took 35.152384956s > 30s
2: path /api/v1/namespaces/e2e-tests-proxy-u61vj/services/proxy-service-hqm5j:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server does not allow access to the requested resource Reason:Forbidden Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Forbidden: \"/api/v1/namespaces/e2e-tests-proxy-u61vj/services/proxy-service-hqm5j:portname2/proxy/\"" field:"" > retryAfterSeconds:0  Code:403}
2: path /api/v1/proxy/namespaces/e2e-tests-proxy-u61vj/pods/http:proxy-service-hqm5j-jgtw9:162/ took 35.511356479s > 30s
2: path /api/v1/proxy/namespaces/e2e-tests-proxy-u61vj/services/http:proxy-service-hqm5j:81/ took 42.00524836s > 30s

Issues about this test specifically: #26164 #26210

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Jun 19 21:18:37.127: Couldn't delete ns "e2e-tests-kubectl-2wnfg": the server does not allow access to the requested resource (delete namespaces e2e-tests-kubectl-2wnfg)

Issues about this test specifically: #27507

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:64
Expected error:
    <*errors.StatusError | 0xc82080b500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get pods)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-deployment-u3kx9/pods?labelSelector=name%3Dsample-pod-3\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get pods)
not to have occurred

Failed: [k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:227
Expected error:
    <*errors.StatusError | 0xc8207e7380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get pods)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-pods-nnp0o/pods?fieldSelector=metadata.name%3Dpod-hostip-ecc1e1d3-369d-11e6-bf59-0242ac110006\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get pods)
not to have occurred

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:407
Expected error:
    <*errors.StatusError | 0xc8209eae00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get resourceQuotas quota-not-terminating)",
            Reason: "Forbidden",
            Details: {
                Name: "quota-not-terminating",
                Group: "",
                Kind: "resourceQuotas",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-resourcequota-u5a0v/resourcequotas/quota-not-terminating\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get resourceQuotas quota-not-terminating)
not to have occurred

Failed: [k8s.io] EmptyDir wrapper volumes should becomes running {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:179
Jun 19 21:18:27.562: unable to create test git server service : the server does not allow access to the requested resource (post services)

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/9540/

Multiple broken tests:

Failed: [k8s.io] Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:137
Expected error:
    <*errors.StatusError | 0xc820b90d80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get pods)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-job-pp0li/pods?labelSelector=job%3Dscale-up\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get pods)
not to have occurred

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Jun 19 22:03:24.764: Couldn't delete ns "e2e-tests-v1job-r58pe": the server does not allow access to the requested resource (delete namespaces e2e-tests-v1job-r58pe)

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8209af400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-pods-3hq8e/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #27465

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:185
Expected error:
    <*errors.StatusError | 0xc820b47400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (delete jobs.batch foo)",
            Reason: "Forbidden",
            Details: {
                Name: "foo",
                Group: "batch",
                Kind: "jobs",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/apis/batch/v1/namespaces/e2e-tests-job-c5fky/jobs/foo\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (delete jobs.batch foo)
not to have occurred

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/9541/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:276
Expected error:
    <*errors.errorString | 0xc8209ae8b0>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.235.16 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-bc8m0] []  0xc8208e6920  Error from server: error when creating \"STDIN\": the server does not allow access to the requested resource (post pods)\n [] <nil> 0xc8208e6fe0 exit status 1 <nil> true [0xc8200cde60 0xc8200cde88 0xc8200cde98] [0xc8200cde60 0xc8200cde88 0xc8200cde98] [0xc8200cde68 0xc8200cde80 0xc8200cde90] [0xa845d0 0xa84730 0xa84730] 0xc820c93aa0}:\nCommand stdout:\n\nstderr:\nError from server: error when creating \"STDIN\": the server does not allow access to the requested resource (post pods)\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.235.16 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-bc8m0] []  0xc8208e6920  Error from server: error when creating "STDIN": the server does not allow access to the requested resource (post pods)
     [] <nil> 0xc8208e6fe0 exit status 1 <nil> true [0xc8200cde60 0xc8200cde88 0xc8200cde98] [0xc8200cde60 0xc8200cde88 0xc8200cde98] [0xc8200cde68 0xc8200cde80 0xc8200cde90] [0xa845d0 0xa84730 0xa84730] 0xc820c93aa0}:
    Command stdout:

    stderr:
    Error from server: error when creating "STDIN": the server does not allow access to the requested resource (post pods)

    error:
    exit status 1

not to have occurred

Issues about this test specifically: #27156

Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:162
Expected error:
    <*errors.StatusError | 0xc82081cc80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get pods)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-job-9pwem/pods?labelSelector=job%3Dscale-down\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get pods)
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc82025a800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-diz4y/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Jun 19 22:24:50.412: Couldn't delete ns "e2e-tests-v1job-usxur": the server does not allow access to the requested resource (delete namespaces e2e-tests-v1job-usxur)

Failed: [k8s.io] Downward API should provide pod IP as an env var {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:83
Jun 19 22:25:14.832: Failed to create pod: the server does not allow access to the requested resource (post pods)

Failed: [k8s.io] Pods should support remote command execution over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc82086ad00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-pods-yfy7h/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] V1Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:189
Expected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: ""},
                Status: "Failure",
                Message: "the server does not allow access to the requested resource (delete pods foo-7v1mk)",
                Reason: "Forbidden",
                Details: {
                    Name: "foo-7v1mk",
                    Group: "",
                    Kind: "pods",
                    Causes: [
                        {
                            Type: "UnexpectedServerResponse",
                            Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-v1job-99w5d/pods/foo-7v1mk\"",
                            Field: "",
                        },
                    ],
                    RetryAfterSeconds: 0,
                },
                Code: 403,
            },
        },
    ]
    the server does not allow access to the requested resource (delete pods foo-7v1mk)
not to have occurred

Failed: [k8s.io] MetricsGrabber should grab all metrics from API server. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820cab580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-smmg8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8202bfc00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-grqlv/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:137
Expected error:
    <*errors.StatusError | 0xc820ec2d00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get pods)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-job-3un4p/pods?labelSelector=job%3Dscale-up\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get pods)
not to have occurred

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/9542/

Multiple broken tests:

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:276
Expected error:
    <*errors.StatusError | 0xc820097d80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get services service1)",
            Reason: "Forbidden",
            Details: {
                Name: "service1",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-services-q1h6b/services/service1\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get services service1)
not to have occurred

Issues about this test specifically: #26128 #26685

Failed: [k8s.io] Generated release_1_2 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Jun 19 22:44:47.950: Couldn't delete ns "e2e-tests-clientset-udm2a": the server does not allow access to the requested resource (delete namespaces e2e-tests-clientset-udm2a)

Failed: [k8s.io] Pods should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1168
starting pod liveness-http in namespace e2e-tests-pods-bub0p
Expected error:
    <*errors.StatusError | 0xc8205ca900>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get pods)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-pods-bub0p/pods?fieldSelector=metadata.name%3Dliveness-http\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get pods)
not to have occurred

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8200abd80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-v1job-hcsvj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Jun 19 22:45:39.277: Couldn't delete ns "e2e-tests-emptydir-bzs1c": the server does not allow access to the requested resource (delete namespaces e2e-tests-emptydir-bzs1c)

Failed: [k8s.io] hostPath should give a volume the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Jun 19 22:45:23.125: Couldn't delete ns "e2e-tests-hostpath-8a6pb": the server does not allow access to the requested resource (delete namespaces e2e-tests-hostpath-8a6pb)

Failed: [k8s.io] Mesos starts static pods on every node in the mesos cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820191480>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-pods-hvx28/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.errorString | 0xc8209363c0>: {
        s: "Error creating replication controller: the server does not allow access to the requested resource (post replicationControllers)",
    }
    Error creating replication controller: the server does not allow access to the requested resource (post replicationControllers)
not to have occurred

Issues about this test specifically: #27443

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:759
Expected error:
    <*errors.errorString | 0xc8207d4b20>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.20.76 --kubeconfig=/workspace/.kube/config log redis-master-si700 redis-master --namespace=e2e-tests-kubectl-3vd42] []  <nil>  Error from server: the server does not allow access to the requested resource\n [] <nil> 0xc8209e4940 exit status 1 <nil> true [0xc820722100 0xc820722128 0xc820722140] [0xc820722100 0xc820722128 0xc820722140] [0xc820722120 0xc820722138] [0xa84730 0xa84730] 0xc8207123c0}:\nCommand stdout:\n\nstderr:\nError from server: the server does not allow access to the requested resource\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.20.76 --kubeconfig=/workspace/.kube/config log redis-master-si700 redis-master --namespace=e2e-tests-kubectl-3vd42] []  <nil>  Error from server: the server does not allow access to the requested resource
     [] <nil> 0xc8209e4940 exit status 1 <nil> true [0xc820722100 0xc820722128 0xc820722140] [0xc820722100 0xc820722128 0xc820722140] [0xc820722120 0xc820722138] [0xa84730 0xa84730] 0xc8207123c0}:
    Command stdout:

    stderr:
    Error from server: the server does not allow access to the requested resource

    error:
    exit status 1

not to have occurred

Issues about this test specifically: #26139

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:577
Expected error:
    <*errors.StatusError | 0xc8202b2f80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get pods)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-pods-nhiz2/pods?fieldSelector=metadata.name%3Dpod-update-activedeadlineseconds-ec794049-36a9-11e6-b4ba-0242ac110006\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get pods)
not to have occurred

Failed: [k8s.io] Services should prevent NodePort collisions {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820ab6300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-services-fu8v5/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Scheduler. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820906700>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-nqb6i/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Deployment deployment should create new pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:55
Expected error:
    <*errors.errorString | 0xc8207dd4f0>: {
        s: "deployment test-new-deployment failed to create new RS: the server does not allow access to the requested resource (get replicasets.extensions)",
    }
    deployment test-new-deployment failed to create new RS: the server does not allow access to the requested resource (get replicasets.extensions)
not to have occurred

Failed: [k8s.io] V1Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Jun 19 22:44:47.625: Couldn't delete ns "e2e-tests-v1job-0bh8u": the server does not allow access to the requested resource (delete namespaces e2e-tests-v1job-0bh8u)

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:466
Expected error:
    <*errors.errorString | 0xc8207d3b00>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.20.76 --kubeconfig=/workspace/.kube/config apply -f - --namespace=e2e-tests-kubectl-6isrb] []  0xc820983720  proto: no encoder for TypeMeta unversioned.TypeMeta [GetProperties]\nproto: tag has too few fields: \"-\"\nproto: no coders for struct *reflect.rtype\nproto: no encoder for sec int64 [GetProperties]\nproto: no encoder for nsec int32 [GetProperties]\nproto: no encoder for loc *time.Location [GetProperties]\nproto: no encoder for Time time.Time [GetProperties]\nproto: no coders for intstr.Type\nproto: no encoder for Type intstr.Type [GetProperties]\nError from server: error when applying patch:\n{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"kind\\\":\\\"Service\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"metadata\\\":{\\\"name\\\":\\\"redis-master\\\",\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"app\\\":\\\"redis\\\",\\\"role\\\":\\\"master\\\"}},\\\"spec\\\":{\\\"ports\\\":[{\\\"port\\\":6379,\\\"targetPort\\\":\\\"redis-server\\\"}],\\\"selector\\\":{\\\"app\\\":\\\"redis\\\",\\\"role\\\":\\\"master\\\"}},\\\"status\\\":{\\\"loadBalancer\\\":{}}}\"},\"creationTimestamp\":null}}\nto:\n&{0xc820141500 0xc820278230 e2e-tests-kubectl-6isrb redis-master STDIN TypeMeta:<kind:\"Service\" apiVersion:\"v1\" > metadata:<name:\"redis-master\" generateName:\"\" namespace:\"\" selfLink:\"\" uid:\"\" resourceVersion:\"\" generation:0 creationTimestamp:<0001-01-01T00:00:00Z> labels:<key:\"app\" value:\"redis\" > labels:<key:\"role\" value:\"master\" > annotations:<key:\"kubectl.kubernetes.io/last-applied-configuration\" value:\"\" > > spec:<ports:<name:\"\" protocol:\"\" port:6379 targetPort:<type:1 intVal:0 strVal:\"redis-server\" > nodePort:0 > selector:<key:\"app\" value:\"redis\" > selector:<key:\"role\" value:\"master\" > clusterIP:\"\" type:\"\" sessionAffinity:\"\" loadBalancerIP:\"\" > status:<loadBalancer:<> 
>  kind:\"\" apiVersion:\"\"  353 false}\nfor: \"STDIN\": the server does not allow access to the requested resource (patch services redis-master)\n [] <nil> 0xc820983dc0 exit status 1 <nil> true [0xc8203fe048 0xc8203fe070 0xc8203fe080] [0xc8203fe048 0xc8203fe070 0xc8203fe080] [0xc8203fe050 0xc8203fe068 0xc8203fe078] [0xa845d0 0xa84730 0xa84730] 0xc8208f8900}:\nCommand stdout:\n\nstderr:\nproto: no encoder for TypeMeta unversioned.TypeMeta [GetProperties]\nproto: tag has too few fields: \"-\"\nproto: no coders for struct *reflect.rtype\nproto: no encoder for sec int64 [GetProperties]\nproto: no encoder for nsec int32 [GetProperties]\nproto: no encoder for loc *time.Location [GetProperties]\nproto: no encoder for Time time.Time [GetProperties]\nproto: no coders for intstr.Type\nproto: no encoder for Type intstr.Type [GetProperties]\nError from server: error when applying patch:\n{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"kind\\\":\\\"Service\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"metadata\\\":{\\\"name\\\":\\\"redis-master\\\",\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"app\\\":\\\"redis\\\",\\\"role\\\":\\\"master\\\"}},\\\"spec\\\":{\\\"ports\\\":[{\\\"port\\\":6379,\\\"targetPort\\\":\\\"redis-server\\\"}],\\\"selector\\\":{\\\"app\\\":\\\"redis\\\",\\\"role\\\":\\\"master\\\"}},\\\"status\\\":{\\\"loadBalancer\\\":{}}}\"},\"creationTimestamp\":null}}\nto:\n&{0xc820141500 0xc820278230 e2e-tests-kubectl-6isrb redis-master STDIN TypeMeta:<kind:\"Service\" apiVersion:\"v1\" > metadata:<name:\"redis-master\" generateName:\"\" namespace:\"\" selfLink:\"\" uid:\"\" resourceVersion:\"\" generation:0 creationTimestamp:<0001-01-01T00:00:00Z> labels:<key:\"app\" value:\"redis\" > labels:<key:\"role\" value:\"master\" > annotations:<key:\"kubectl.kubernetes.io/last-applied-configuration\" value:\"\" > > spec:<ports:<name:\"\" protocol:\"\" port:6379 targetPort:<type:1 intVal:0 strVal:\"redis-server\" > nodePort:0 > 
selector:<key:\"app\" value:\"redis\" > selector:<key:\"role\" value:\"master\" > clusterIP:\"\" type:\"\" sessionAffinity:\"\" loadBalancerIP:\"\" > status:<loadBalancer:<> >  kind:\"\" apiVersion:\"\"  353 false}\nfor: \"STDIN\": the server does not allow access to the requested resource (patch services redis-master)\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.20.76 --kubeconfig=/workspace/.kube/config apply -f - --namespace=e2e-tests-kubectl-6isrb] []  0xc820983720  proto: no encoder for TypeMeta unversioned.TypeMeta [GetProperties]
    proto: tag has too few fields: "-"
    proto: no coders for struct *reflect.rtype
    proto: no encoder for sec int64 [GetProperties]
    proto: no encoder for nsec int32 [GetProperties]
    proto: no encoder for loc *time.Location [GetProperties]
    proto: no encoder for Time time.Time [GetProperties]
    proto: no coders for intstr.Type
    proto: no encoder for Type intstr.Type [GetProperties]
    Error from server: error when applying patch:
    {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"kind\":\"Service\",\"apiVersion\":\"v1\",\"metadata\":{\"name\":\"redis-master\",\"creationTimestamp\":null,\"labels\":{\"app\":\"redis\",\"role\":\"master\"}},\"spec\":{\"ports\":[{\"port\":6379,\"targetPort\":\"redis-server\"}],\"selector\":{\"app\":\"redis\",\"role\":\"master\"}},\"status\":{\"loadBalancer\":{}}}"},"creationTimestamp":null}}
    to:
    &{0xc820141500 0xc820278230 e2e-tests-kubectl-6isrb redis-master STDIN TypeMeta:<kind:"Service" apiVersion:"v1" > metadata:<name:"redis-master" generateName:"" namespace:"" selfLink:"" uid:"" resourceVersion:"" generation:0 creationTimestamp:<0001-01-01T00:00:00Z> labels:<key:"app" value:"redis" > labels:<key:"role" value:"master" > annotations:<key:"kubectl.kubernetes.io/last-applied-configuration" value:"" > > spec:<ports:<name:"" protocol:"" port:6379 targetPort:<type:1 intVal:0 strVal:"redis-server" > nodePort:0 > selector:<key:"app" value:"redis" > selector:<key:"role" value:"master" > clusterIP:"" type:"" sessionAffinity:"" loadBalancerIP:"" > status:<loadBalancer:<> >  kind:"" apiVersion:""  353 false}
    for: "STDIN": the server does not allow access to the requested resource (patch services redis-master)
     [] <nil> 0xc820983dc0 exit status 1 <nil> true [0xc8203fe048 0xc8203fe070 0xc8203fe080] [0xc8203fe048 0xc8203fe070 0xc8203fe080] [0xc8203fe050 0xc8203fe068 0xc8203fe078] [0xa845d0 0xa84730 0xa84730] 0xc8208f8900}:
    Command stdout:

    stderr:
    proto: no encoder for TypeMeta unversioned.TypeMeta [GetProperties]
    proto: tag has too few fields: "-"
    proto: no coders for struct *reflect.rtype
    proto: no encoder for sec int64 [GetProperties]
    proto: no encoder for nsec int32 [GetProperties]
    proto: no encoder for loc *time.Location [GetProperties]
    proto: no encoder for Time time.Time [GetProperties]
    proto: no coders for intstr.Type
    proto: no encoder for Type intstr.Type [GetProperties]
    Error from server: error when applying patch:
    {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"kind\":\"Service\",\"apiVersion\":\"v1\",\"metadata\":{\"name\":\"redis-master\",\"creationTimestamp\":null,\"labels\":{\"app\":\"redis\",\"role\":\"master\"}},\"spec\":{\"ports\":[{\"port\":6379,\"targetPort\":\"redis-server\"}],\"selector\":{\"app\":\"redis\",\"role\":\"master\"}},\"status\":{\"loadBalancer\":{}}}"},"creationTimestamp":null}}
    to:
    &{0xc820141500 0xc820278230 e2e-tests-kubectl-6isrb redis-master STDIN TypeMeta:<kind:"Service" apiVersion:"v1" > metadata:<name:"redis-master" generateName:"" namespace:"" selfLink:"" uid:"" resourceVersion:"" generation:0 creationTimestamp:<0001-01-01T00:00:00Z> labels:<key:"app" value:"redis" > labels:<key:"role" value:"master" > annotations:<key:"kubectl.kubernetes.io/last-applied-configuration" value:"" > > spec:<ports:<name:"" protocol:"" port:6379 targetPort:<type:1 intVal:0 strVal:"redis-server" > nodePort:0 > selector:<key:"app" value:"redis" > selector:<key:"role" value:"master" > clusterIP:"" type:"" sessionAffinity:"" loadBalancerIP:"" > status:<loadBalancer:<> >  kind:"" apiVersion:""  353 false}
    for: "STDIN": the server does not allow access to the requested resource (patch services redis-master)

    error:
    exit status 1

not to have occurred

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Jun 19 22:45:31.794: Couldn't delete ns "e2e-tests-emptydir-mprub": the server does not allow access to the requested resource (delete namespaces e2e-tests-emptydir-mprub)

Failed: [k8s.io] Pods should not be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1076
getting pod liveness-exec
Expected error:
    <*errors.StatusError | 0xc8209a1100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server has asked for the client to provide credentials (get pods liveness-exec)",
            Reason: "Unauthorized",
            Details: {
                Name: "liveness-exec",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Unauthorized",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 401,
        },
    }
    the server has asked for the client to provide credentials (get pods liveness-exec)
not to have occurred
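Note that this run mixes two distinct apiserver responses: most failures are 403 Forbidden ("the server does not allow access to the requested resource"), while the liveness-exec failure above is a 401 Unauthorized ("the server has asked for the client to provide credentials"). As a minimal triage sketch (not part of the test suite — the `classify` helper and its message matching are illustrative assumptions, not client-go API), the two phrasings can be mapped back to their HTTP status codes like this:

```go
package main

import (
	"fmt"
	"strings"
)

// classify maps the two apiserver error phrasings seen in these runs to
// their HTTP status semantics: 403 Forbidden vs 401 Unauthorized.
// Returns 0 for messages that match neither pattern.
func classify(msg string) int {
	switch {
	case strings.Contains(msg, "does not allow access"):
		return 403 // Forbidden: authenticated, but RBAC/authz denied the verb
	case strings.Contains(msg, "provide credentials"):
		return 401 // Unauthorized: credentials missing or no longer accepted
	default:
		return 0
	}
}

func main() {
	fmt.Println(classify("the server does not allow access to the requested resource (get pods)"))
	fmt.Println(classify("the server has asked for the client to provide credentials (get pods liveness-exec)"))
}
```

Seeing both codes in one run is consistent with the client's credentials changing out from under the tests (e.g. a master upgrade or certificate rotation mid-run), rather than a genuine authorization-policy problem in any single test.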

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:265
Expected error:
    <*errors.errorString | 0xc8209d8160>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.20.76 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-xh70h] []  0xc820176240  Error from server: the server does not allow access to the requested resource\n [] <nil> 0xc820176800 exit status 1 <nil> true [0xc82002e9b0 0xc82002e9d8 0xc82002e9e8] [0xc82002e9b0 0xc82002e9d8 0xc82002e9e8] [0xc82002e9b8 0xc82002e9d0 0xc82002e9e0] [0xa845d0 0xa84730 0xa84730] 0xc820a32ba0}:\nCommand stdout:\n\nstderr:\nError from server: the server does not allow access to the requested resource\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.20.76 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-xh70h] []  0xc820176240  Error from server: the server does not allow access to the requested resource
     [] <nil> 0xc820176800 exit status 1 <nil> true [0xc82002e9b0 0xc82002e9d8 0xc82002e9e8] [0xc82002e9b0 0xc82002e9d8 0xc82002e9e8] [0xc82002e9b8 0xc82002e9d0 0xc82002e9e0] [0xa845d0 0xa84730 0xa84730] 0xc820a32ba0}:
    Command stdout:

    stderr:
    Error from server: the server does not allow access to the requested resource

    error:
    exit status 1

not to have occurred

Issues about this test specifically: #26175 #26846 #27334

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:40
Expected error:
    <*errors.StatusError | 0xc820965480>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get pods)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-replicaset-okv81/pods?fieldSelector=metadata.name%3Dmy-hostname-basic-ee822c89-36a9-11e6-8be8-0242ac110006-f1pld\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get pods)
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:445
Expected error:
    <*errors.errorString | 0xc8208baf60>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.20.76 --kubeconfig=/workspace/.kube/config apply -f - --namespace=e2e-tests-kubectl-itvc7] []  0xc820900e10  proto: no encoder for TypeMeta unversioned.TypeMeta [GetProperties]\nproto: tag has too few fields: \"-\"\nproto: no coders for struct *reflect.rtype\nproto: no encoder for sec int64 [GetProperties]\nproto: no encoder for nsec int32 [GetProperties]\nproto: no encoder for loc *time.Location [GetProperties]\nproto: no encoder for Time time.Time [GetProperties]\nproto: no encoder for i resource.int64Amount [GetProperties]\nproto: no encoder for d resource.infDecAmount [GetProperties]\nproto: no encoder for s string [GetProperties]\nproto: no encoder for Format resource.Format [GetProperties]\nproto: no encoder for InitContainers []v1.Container [GetProperties]\nproto: no coders for intstr.Type\nproto: no encoder for Type intstr.Type [GetProperties]\nError from server: error when retrieving current configuration of:\n&{0xc8201d0480 0xc820354930 e2e-tests-kubectl-itvc7 redis-master STDIN TypeMeta:<kind:\"ReplicationController\" apiVersion:\"v1\" > metadata:<name:\"redis-master\" generateName:\"\" namespace:\"\" selfLink:\"\" uid:\"\" resourceVersion:\"\" generation:0 creationTimestamp:<0001-01-01T00:00:00Z> labels:<key:\"app\" value:\"redis\" > labels:<key:\"kubectl.kubernetes.io/apply-test\" value:\"ADDED\" > labels:<key:\"role\" value:\"master\" > annotations:<key:\"kubectl.kubernetes.io/last-applied-configuration\" value:\"\" > > spec:<replicas:1 selector:<key:\"app\" value:\"redis\" > selector:<key:\"kubectl.kubernetes.io/apply-test\" value:\"ADDED\" > selector:<key:\"role\" value:\"master\" > template:<metadata:<name:\"\" generateName:\"\" namespace:\"\" selfLink:\"\" uid:\"\" resourceVersion:\"\" generation:0 creationTimestamp:<0001-01-01T00:00:00Z> labels:<key:\"app\" value:\"redis\" > labels:<key:\"kubectl.kubernetes.io/apply-test\" 
value:\"ADDED\" > labels:<key:\"role\" value:\"master\" > > spec:<containers:<name:\"redis-master\" image:\"gcr.io/google_containers/redis:e2e\" workingDir:\"\" ports:<name:\"redis-server\" hostPort:0 containerPort:6379 protocol:\"\" hostIP:\"\" > resources:<> terminationMessagePath:\"\" imagePullPolicy:\"\" stdin:false stdinOnce:false tty:false > restartPolicy:\"\" dnsPolicy:\"\" serviceAccountName:\"\" serviceAccount:\"\" nodeName:\"\" hostNetwork:false hostPID:false hostIPC:false hostname:\"\" subdomain:\"\" > > > status:<replicas:0 fullyLabeledReplicas:0 observedGeneration:0 >  kind:\"\" apiVersion:\"\"   false}\nfrom server for: \"STDIN\": the server does not allow access to the requested resource (get replicationcontrollers redis-master)\n [] <nil> 0xc8208c4420 exit status 1 <nil> true [0xc82002fbe0 0xc82002fc08 0xc82002fc18] [0xc82002fbe0 0xc82002fc08 0xc82002fc18] [0xc82002fbe8 0xc82002fc00 0xc82002fc10] [0xa845d0 0xa84730 0xa84730] 0xc8207e74a0}:\nCommand stdout:\n\nstderr:\nproto: no encoder for TypeMeta unversioned.TypeMeta [GetProperties]\nproto: tag has too few fields: \"-\"\nproto: no coders for struct *reflect.rtype\nproto: no encoder for sec int64 [GetProperties]\nproto: no encoder for nsec int32 [GetProperties]\nproto: no encoder for loc *time.Location [GetProperties]\nproto: no encoder for Time time.Time [GetProperties]\nproto: no encoder for i resource.int64Amount [GetProperties]\nproto: no encoder for d resource.infDecAmount [GetProperties]\nproto: no encoder for s string [GetProperties]\nproto: no encoder for Format resource.Format [GetProperties]\nproto: no encoder for InitContainers []v1.Container [GetProperties]\nproto: no coders for intstr.Type\nproto: no encoder for Type intstr.Type [GetProperties]\nError from server: error when retrieving current configuration of:\n&{0xc8201d0480 0xc820354930 e2e-tests-kubectl-itvc7 redis-master STDIN TypeMeta:<kind:\"ReplicationController\" apiVersion:\"v1\" > metadata:<name:\"redis-master\" 
generateName:\"\" namespace:\"\" selfLink:\"\" uid:\"\" resourceVersion:\"\" generation:0 creationTimestamp:<0001-01-01T00:00:00Z> labels:<key:\"app\" value:\"redis\" > labels:<key:\"kubectl.kubernetes.io/apply-test\" value:\"ADDED\" > labels:<key:\"role\" value:\"master\" > annotations:<key:\"kubectl.kubernetes.io/last-applied-configuration\" value:\"\" > > spec:<replicas:1 selector:<key:\"app\" value:\"redis\" > selector:<key:\"kubectl.kubernetes.io/apply-test\" value:\"ADDED\" > selector:<key:\"role\" value:\"master\" > template:<metadata:<name:\"\" generateName:\"\" namespace:\"\" selfLink:\"\" uid:\"\" resourceVersion:\"\" generation:0 creationTimestamp:<0001-01-01T00:00:00Z> labels:<key:\"app\" value:\"redis\" > labels:<key:\"kubectl.kubernetes.io/apply-test\" value:\"ADDED\" > labels:<key:\"role\" value:\"master\" > > spec:<containers:<name:\"redis-master\" image:\"gcr.io/google_containers/redis:e2e\" workingDir:\"\" ports:<name:\"redis-server\" hostPort:0 containerPort:6379 protocol:\"\" hostIP:\"\" > resources:<> terminationMessagePath:\"\" imagePullPolicy:\"\" stdin:false stdinOnce:false tty:false > restartPolicy:\"\" dnsPolicy:\"\" serviceAccountName:\"\" serviceAccount:\"\" nodeName:\"\" hostNetwork:false hostPID:false hostIPC:false hostname:\"\" subdomain:\"\" > > > status:<replicas:0 fullyLabeledReplicas:0 observedGeneration:0 >  kind:\"\" apiVersion:\"\"   false}\nfrom server for: \"STDIN\": the server does not allow access to the requested resource (get replicationcontrollers redis-master)\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.20.76 --kubeconfig=/workspace/.kube/config apply -f - --namespace=e2e-tests-kubectl-itvc7] []  0xc820900e10  proto: no encoder for TypeMeta unversioned.TypeMeta [GetProperties]
    proto: tag has too few fields: "-"
    proto: no coders for struct *reflect.rtype
    proto: no encoder for sec int64 [GetProperties]
    proto: no encoder for nsec int32 [GetProperties]
    proto: no encoder for loc *time.Location [GetProperties]
    proto: no encoder for Time time.Time [GetProperties]
    proto: no encoder for i resource.int64Amount [GetProperties]
    proto: no encoder for d resource.infDecAmount [GetProperties]
    proto: no encoder for s string [GetProperties]
    proto: no encoder for Format resource.Format [GetProperties]
    proto: no encoder for InitContainers []v1.Container [GetProperties]
    proto: no coders for intstr.Type
    proto: no encoder for Type intstr.Type [GetProperties]
    Error from server: error when retrieving current configuration of:
    &{0xc8201d0480 0xc820354930 e2e-tests-kubectl-itvc7 redis-master STDIN TypeMeta:<kind:"ReplicationController" apiVersion:"v1" > metadata:<name:"redis-master" generateName:"" namespace:"" selfLink:"" uid:"" resourceVersion:"" generation:0 creationTimestamp:<0001-01-01T00:00:00Z> labels:<key:"app" value:"redis" > labels:<key:"kubectl.kubernetes.io/apply-test" value:"ADDED" > labels:<key:"role" value:"master" > annotations:<key:"kubectl.kubernetes.io/last-applied-configuration" value:"" > > spec:<replicas:1 selector:<key:"app" value:"redis" > selector:<key:"kubectl.kubernetes.io/apply-test" value:"ADDED" > selector:<key:"role" value:"master" > template:<metadata:<name:"" generateName:"" namespace:"" selfLink:"" uid:"" resourceVersion:"" generation:0 creationTimestamp:<0001-01-01T00:00:00Z> labels:<key:"app" value:"redis" > labels:<key:"kubectl.kubernetes.io/apply-test" value:"ADDED" > labels:<key:"role" value:"master" > > spec:<containers:<name:"redis-master" image:"gcr.io/google_containers/redis:e2e" workingDir:"" ports:<name:"redis-server" hostPort:0 containerPort:6379 protocol:"" hostIP:"" > resources:<> terminationMessagePath:"" imagePullPolicy:"" stdin:false stdinOnce:false tty:false > restartPolicy:"" dnsPolicy:"" serviceAccountName:"" serviceAccount:"" nodeName:"" hostNetwork:false hostPID:false hostIPC:false hostname:"" subdomain:"" > > > status:<replicas:0 fullyLabeledReplicas:0 observedGeneration:0 >  kind:"" apiVersion:""   false}
    from server for: "STDIN": the server does not allow access to the requested resource (get replicationcontrollers redis-master)
     [] <nil> 0xc8208c4420 exit status 1 <nil> true [0xc82002fbe0 0xc82002fc08 0xc82002fc18] [0xc82002fbe0 0xc82002fc08 0xc82002fc18] [0xc82002fbe8 0xc82002fc00 0xc82002fc10] [0xa845d0 0xa84730 0xa84730] 0xc8207e74a0}:
    Command stdout:

    stderr:
    proto: no encoder for TypeMeta unversioned.TypeMeta [GetProperties]
    proto: tag has too few fields: "-"
    proto: no coders for struct *reflect.rtype
    proto: no encoder for sec int64 [GetProperties]
    proto: no encoder for nsec int32 [GetProperties]
    proto: no encoder for loc *time.Location [GetProperties]
    proto: no encoder for Time time.Time [GetProperties]
    proto: no encoder for i resource.int64Amount [GetProperties]
    proto: no encoder for d resource.infDecAmount [GetProperties]
    proto: no encoder for s string [GetProperties]
    proto: no encoder for Format resource.Format [GetProperties]
    proto: no encoder for InitContainers []v1.Container [GetProperties]
    proto: no coders for intstr.Type
    proto: no encoder for Type intstr.Type [GetProperties]
    Error from server: error when retrieving current configuration of:
    &{0xc8201d0480 0xc820354930 e2e-tests-kubectl-itvc7 redis-master STDIN TypeMeta:<kind:"ReplicationController" apiVersion:"v1" > metadata:<name:"redis-master" generateName:"" namespace:"" selfLink:"" uid:"" resourceVersion:"" generation:0 creationTimestamp:<0001-01-01T00:00:00Z> labels:<key:"app" value:"redis" > labels:<key:"kubectl.kubernetes.io/apply-test" value:"ADDED" > labels:<key:"role" value:"master" > annotations:<key:"kubectl.kubernetes.io/last-applied-configuration" value:"" > > spec:<replicas:1 selector:<key:"app" value:"redis" > selector:<key:"kubectl.kubernetes.io/apply-test" value:"ADDED" > selector:<key:"role" value:"master" > template:<metadata:<name:"" generateName:"" namespace:"" selfLink:"" uid:"" resourceVersion:"" generation:0 creationTimestamp:<0001-01-01T00:00:00Z> labels:<key:"app" value:"redis" > labels:<key:"kubectl.kubernetes.io/apply-test" value:"ADDED" > labels:<key:"role" value:"master" > > spec:<containers:<name:"redis-master" image:"gcr.io/google_containers/redis:e2e" workingDir:"" ports:<name:"redis-server" hostPort:0 containerPort:6379 protocol:"" hostIP:"" > resources:<> terminationMessagePath:"" imagePullPolicy:"" stdin:false stdinOnce:false tty:false > restartPolicy:"" dnsPolicy:"" serviceAccountName:"" serviceAccount:"" nodeName:"" hostNetwork:false hostPID:false hostIPC:false hostname:"" subdomain:"" > > > status:<replicas:0 fullyLabeledReplicas:0 observedGeneration:0 >  kind:"" apiVersion:""   false}
    from server for: "STDIN": the server does not allow access to the requested resource (get replicationcontrollers redis-master)

    error:
    exit status 1

not to have occurred

Issues about this test specifically: #27524

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:89
Expected error:
    <*errors.StatusError | 0xc8202ad180>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post services)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-resourcequota-7faxl/services\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post services)
not to have occurred

Failed: ThirdParty resources Simple Third Party creating/deleting thirdparty objects works [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/third-party.go:176
Jun 19 22:45:22.606: failed to decode: &json.SyntaxError{msg:"invalid character 'F' looking for beginning of value", Offset:1}

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:47
Expected error:
    <*errors.StatusError | 0xc820930e00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post replicasets.extensions)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "extensions",
                Kind: "replicasets",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/apis/extensions/v1beta1/namespaces/e2e-tests-replicaset-ptngl/replicasets\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post replicasets.extensions)
not to have occurred

Failed: [k8s.io] Ubernetes Lite should spread the pods of a service across zones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8207bc480>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-ubernetes-lite-f2xwh/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:55
Expected error:
    <*errors.StatusError | 0xc82025ad80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post pods)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-nettest-cj48t/pods\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post pods)
not to have occurred

Issues about this test specifically: #26171

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820ae0580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-uyqfj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26425 #26715

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/9570/

Multiple broken tests:

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8208a4080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-dns-qc1ta/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.errorString | 0xc82027cd20>: {
        s: "error while stopping RC: rc-light-ctrl: the server has asked for the client to provide credentials (get replicationControllers rc-light-ctrl)",
    }
    error while stopping RC: rc-light-ctrl: the server has asked for the client to provide credentials (get replicationControllers rc-light-ctrl)
not to have occurred

Issues about this test specifically: #27443

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.StatusError | 0xc82089de00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server has asked for the client to provide credentials (get jobs.extensions foo)",
            Reason: "Unauthorized",
            Details: {
                Name: "foo",
                Group: "extensions",
                Kind: "jobs",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Unauthorized",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 401,
        },
    }
    the server has asked for the client to provide credentials (get jobs.extensions foo)
not to have occurred

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*url.Error | 0xc820e5c270>: {
        Op: "Get",
        URL: "https://104.197.139.240/api/v1/watch/namespaces/e2e-tests-job-n7qe3/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {Op: "remote error", Net: "", Source: nil, Addr: nil, Err: 20},
    }
    Get https://104.197.139.240/api/v1/watch/namespaces/e2e-tests-job-n7qe3/serviceaccounts?fieldSelector=metadata.name%3Ddefault: remote error: bad record MAC
not to have occurred

Failed: [k8s.io] Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820938080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-job-x943g/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820bc9a80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-i4nja/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820b7c600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-volume-provisioning-c04pc/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26682

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820d34c00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-downward-api-zu5qf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:111
Jun 20 08:25:50.893: expecting wait timeout error but got: the server has asked for the client to provide credentials (get pods test-webserver-ca3eda7e-36fa-11e6-9de6-0242ac110004)

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820e1c200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-resourcequota-biids/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820a5c380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-acza7/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:131
Jun 20 08:25:01.614: Failed to create pod: the server does not allow access to the requested resource (post pods)

Failed: [k8s.io] PrivilegedPod should test privileged pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820520e80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server has asked for the client to provide credentials (get serviceAccounts)",
            Reason: "Unauthorized",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Unauthorized",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 401,
        },
    }
    the server has asked for the client to provide credentials (get serviceAccounts)
not to have occurred

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820b9d400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-emptydir-uukpv/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:841
Expected error:
    <*errors.errorString | 0xc8207f92e0>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.139.240 --kubeconfig=/workspace/.kube/config run e2e-test-nginx-deployment --image=gcr.io/google_containers/nginx:1.7.9 --namespace=e2e-tests-kubectl-ffuow] []  <nil>  Error from server: the server does not allow access to the requested resource (post deployments.extensions)\n [] <nil> 0xc820e0d520 exit status 1 <nil> true [0xc820196358 0xc820196370 0xc820196388] [0xc820196358 0xc820196370 0xc820196388] [0xc820196368 0xc820196380] [0xa84730 0xa84730] 0xc8211b14a0}:\nCommand stdout:\n\nstderr:\nError from server: the server does not allow access to the requested resource (post deployments.extensions)\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.139.240 --kubeconfig=/workspace/.kube/config run e2e-test-nginx-deployment --image=gcr.io/google_containers/nginx:1.7.9 --namespace=e2e-tests-kubectl-ffuow] []  <nil>  Error from server: the server does not allow access to the requested resource (post deployments.extensions)
     [] <nil> 0xc820e0d520 exit status 1 <nil> true [0xc820196358 0xc820196370 0xc820196388] [0xc820196358 0xc820196370 0xc820196388] [0xc820196368 0xc820196380] [0xa84730 0xa84730] 0xc8211b14a0}:
    Command stdout:

    stderr:
    Error from server: the server does not allow access to the requested resource (post deployments.extensions)

    error:
    exit status 1

not to have occurred

Issues about this test specifically: #27014

Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820976d00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-downward-api-8kcx8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820a6ce00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-job-0y5ew/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc82050af80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-emptydir-5k7b8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.StatusError | 0xc820be8700>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server cannot complete the requested operation at this time, try again later (delete services rc-light)",
            Reason: "ServerTimeout",
            Details: {
                Name: "rc-light",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "{\"ErrStatus\":{\"metadata\":{},\"status\":\"Failure\",\"message\":\"The  operation against  could not be completed at this time, please try again.\",\"reason\":\"ServerTimeout\",\"details\":{},\"code\":500}}",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 504,
        },
    }
    the server cannot complete the requested operation at this time, try again later (delete services rc-light)
not to have occurred

Issues about this test specifically: #27196

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Jun 20 08:25:18.712: Couldn't delete ns "e2e-tests-emptydir-epnnp": the server does not allow access to the requested resource (delete namespaces e2e-tests-emptydir-epnnp)

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8202ec980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-containers-wywv2/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820d86900>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-v1job-4fhqg/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:403
Jun 20 08:24:19.052: Failed to created RC "nodeport-test": the server does not allow access to the requested resource (post replicationControllers)

Failed: [k8s.io] Generated release_1_3 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:244
Jun 20 08:23:30.305: Failed to set up watch: the server does not allow access to the requested resource (get pods)

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820b9c200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-prestop-3k4ak/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:261
7: path /api/v1/namespaces/e2e-tests-proxy-fuysd/services/http:proxy-service-vyv57:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
8: path /api/v1/namespaces/e2e-tests-proxy-fuysd/pods/https:proxy-service-vyv57-ot5he:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
8: path /api/v1/proxy/namespaces/e2e-tests-proxy-fuysd/services/https:proxy-service-vyv57:443/ took 30.249842892s > 30s
8: path /api/v1/proxy/namespaces/e2e-tests-proxy-fuysd/pods/proxy-service-vyv57-ot5he:160/ took 30.328593654s > 30s
8: path /api/v1/proxy/namespaces/e2e-tests-proxy-fuysd/services/https:proxy-service-vyv57:444/ took 30.240834559s > 30s
8: path /api/v1/namespaces/e2e-tests-proxy-fuysd/services/proxy-service-vyv57:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
8: path /api/v1/proxy/namespaces/e2e-tests-proxy-fuysd/pods/proxy-service-vyv57-ot5he:1080/ took 30.311268547s > 30s
8: path /api/v1/namespaces/e2e-tests-proxy-fuysd/services/https:proxy-service-vyv57:tlsportname2/proxy/ took 30.105814118s > 30s
8: path /api/v1/proxy/namespaces/e2e-tests-proxy-fuysd/services/proxy-service-vyv57:portname2/ took 30.395860681s > 30s
8: path /api/v1/proxy/namespaces/e2e-tests-proxy-fuysd/pods/http:proxy-service-vyv57-ot5he:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
8: path /api/v1/proxy/namespaces/e2e-tests-proxy-fuysd/services/http:proxy-service-vyv57:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
8: path /api/v1/namespaces/e2e-tests-proxy-fuysd/services/https:proxy-service-vyv57:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
8: path /api/v1/namespaces/e2e-tests-proxy-fuysd/pods/proxy-service-vyv57-ot5he:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
8: path /api/v1/proxy/namespaces/e2e-tests-proxy-fuysd/services/proxy-service-vyv57:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
8: path /api/v1/proxy/namespaces/e2e-tests-proxy-fuysd/services/proxy-service-vyv57:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
8: path /api/v1/namespaces/e2e-tests-proxy-fuysd/pods/proxy-service-vyv57-ot5he/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
8: path /api/v1/namespaces/e2e-tests-proxy-fuysd/pods/proxy-service-vyv57-ot5he:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
8: path /api/v1/proxy/namespaces/e2e-tests-proxy-fuysd/services/https:proxy-service-vyv57:tlsportname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
8: path /api/v1/namespaces/e2e-tests-proxy-fuysd/services/http:proxy-service-vyv57:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
8: path /api/v1/namespaces/e2e-tests-proxy-fuysd/pods/proxy-service-vyv57-ot5he:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
8: path /api/v1/proxy/namespaces/e2e-tests-proxy-fuysd/services/proxy-service-vyv57:80/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
8: path /api/v1/namespaces/e2e-tests-proxy-fuysd/services/http:proxy-service-vyv57:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
8: path /api/v1/namespaces/e2e-tests-proxy-fuysd/pods/http:proxy-service-vyv57-ot5he:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
8: path /api/v1/proxy/namespaces/e2e-tests-proxy-fuysd/services/http:proxy-service-vyv57:80/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
9: path /api/v1/namespaces/e2e-tests-proxy-fuysd/pods/proxy-service-vyv57-ot5he:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
8: path /api/v1/proxy/namespaces/e2e-tests-proxy-fuysd/services/https:proxy-service-vyv57:tlsportname2/ took 47.195707378s > 30s
8: path /api/v1/namespaces/e2e-tests-proxy-fuysd/pods/https:proxy-service-vyv57-ot5he:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}
8: path /api/v1/namespaces/e2e-tests-proxy-fuysd/pods/http:proxy-service-vyv57-ot5he:1080/proxy/ took 51.393752194s > 30s
8: path /api/v1/proxy/namespaces/e2e-tests-proxy-fuysd/services/http:proxy-service-vyv57:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:the server has asked for the client to provide credentials Reason:Unauthorized Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Unauthorized" field:"" > retryAfterSeconds:0  Code:401}

Issues about this test specifically: #26164 #26210

Failed: [k8s.io] Pods should not be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1076
getting pod liveness-exec
Expected error:
    <*errors.StatusError | 0xc820d14200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server has asked for the client to provide credentials (get pods liveness-exec)",
            Reason: "Unauthorized",
            Details: {
                Name: "liveness-exec",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Unauthorized",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 401,
        },
    }
    the server has asked for the client to provide credentials (get pods liveness-exec)
not to have occurred

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/9571/

Multiple broken tests:

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.StatusError | 0xc8208d2400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server has asked for the client to provide credentials (get jobs.batch foo)",
            Reason: "Unauthorized",
            Details: {
                Name: "foo",
                Group: "batch",
                Kind: "jobs",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Unauthorized",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 401,
        },
    }
    the server has asked for the client to provide credentials (get jobs.batch foo)
not to have occurred

Issues about this test specifically: #27704

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.StatusError | 0xc820a0d780>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server has asked for the client to provide credentials (get jobs.extensions foo)",
            Reason: "Unauthorized",
            Details: {
                Name: "foo",
                Group: "extensions",
                Kind: "jobs",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Unauthorized",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 401,
        },
    }
    the server has asked for the client to provide credentials (get jobs.extensions foo)
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:265
Expected error:
    <*errors.errorString | 0xc8207d8150>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.192.105 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-j3f5c] []  0xc8208fc4e0  Error from server: error when creating \"STDIN\": the server does not allow access to the requested resource (post services)\n [] <nil> 0xc8208fcbc0 exit status 1 <nil> true [0xc820268a98 0xc820268b90 0xc820268be0] [0xc820268a98 0xc820268b90 0xc820268be0] [0xc820268ad0 0xc820268b68 0xc820268bc8] [0xa845d0 0xa84730 0xa84730] 0xc8209fb380}:\nCommand stdout:\n\nstderr:\nError from server: error when creating \"STDIN\": the server does not allow access to the requested resource (post services)\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.192.105 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-j3f5c] []  0xc8208fc4e0  Error from server: error when creating "STDIN": the server does not allow access to the requested resource (post services)
     [] <nil> 0xc8208fcbc0 exit status 1 <nil> true [0xc820268a98 0xc820268b90 0xc820268be0] [0xc820268a98 0xc820268b90 0xc820268be0] [0xc820268ad0 0xc820268b68 0xc820268bc8] [0xa845d0 0xa84730 0xa84730] 0xc8209fb380}:
    Command stdout:

    stderr:
    Error from server: error when creating "STDIN": the server does not allow access to the requested resource (post services)

    error:
    exit status 1

not to have occurred

Issues about this test specifically: #26175 #26846 #27334

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:115
Not all RC/pod/service trials succeeded: got 7 errors
50, 90, 99 percentiles: 23.818511ms 2.208000243s 29.697711156s

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/9581/

Multiple broken tests:

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Jun 20 12:30:22.839: Couldn't delete ns "e2e-tests-deployment-htg9y": the server does not allow access to the requested resource (delete namespaces e2e-tests-deployment-htg9y)

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.StatusError | 0xc8208e8980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server has asked for the client to provide credentials (get jobs.batch foo)",
            Reason: "Unauthorized",
            Details: {
                Name: "foo",
                Group: "batch",
                Kind: "jobs",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Unauthorized",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 401,
        },
    }
    the server has asked for the client to provide credentials (get jobs.batch foo)
not to have occurred

Issues about this test specifically: #27704

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc8209dd4d0>: {
        s: "error while stopping RC: rc-light: error getting replication controllers: error getting replication controllers: the server does not allow access to the requested resource (get replicationControllers)",
    }
    error while stopping RC: rc-light: error getting replication controllers: error getting replication controllers: the server does not allow access to the requested resource (get replicationControllers)
not to have occurred

Issues about this test specifically: #27196

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:133
Expected error:
    <*errors.StatusError | 0xc821008b80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post secrets)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "secrets",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-resourcequota-ubqdr/secrets\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post secrets)
not to have occurred

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:276
Expected error:
    <*errors.StatusError | 0xc820704080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get services service1)",
            Reason: "Forbidden",
            Details: {
                Name: "service1",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-services-0v3iy/services/service1\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get services service1)
not to have occurred

Issues about this test specifically: #26128 #26685

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820bab780>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-proxy-ih3ge/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

j3ffml commented Jun 21, 2016

The internal error that was being surfaced as a 403 was changed to 500, so transient failures that were causing these errors should now be retried.
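
The distinction matters because clients generally treat server-side errors (5xx) and throttling (429) as transient and retryable, while a 403 Forbidden is a terminal authorization decision that no retry will fix. A minimal sketch of that classification (`isRetryable` is illustrative, not the actual client helper):

```go
package main

import (
	"fmt"
	"net/http"
)

// isRetryable reports whether a failed API call is worth retrying.
// 5xx responses and 429 throttling are typically transient; a 403 is an
// authorization decision, so retrying cannot help. Surfacing the internal
// error as 500 instead of 403 moves it into the retryable bucket.
func isRetryable(statusCode int) bool {
	switch {
	case statusCode == http.StatusTooManyRequests: // 429
		return true
	case statusCode >= 500: // 500, 503, 504, ...
		return true
	default:
		return false
	}
}

func main() {
	for _, code := range []int{403, 500, 504} {
		fmt.Printf("HTTP %d retryable: %v\n", code, isRetryable(code))
	}
}
```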
