
ci-kubernetes-e2e-gke-1.4-1.5-cvm-kubectl-skew: broken test run #40599

Closed
k8s-github-robot opened this issue Jan 27, 2017 · 6 comments
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-1.4-1.5-cvm-kubectl-skew/4550/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:875
Jan 27 05:54:51.576: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:202

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:666
Jan 27 05:54:52.558: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:202

Issues about this test specifically: #28774 #31429

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should update the taint on a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1298
Jan 27 05:49:51.518: Failed to find kubernetes.io/e2e-taint-key-78d30853-e497-11e6-91b2-0242ac110005=testing-taint-value:NoSchedule in Name:			gke-bootstrap-e2e-default-pool-933ca23e-bb8d
Role:			
Labels:			beta.kubernetes.io/arch=amd64
			beta.kubernetes.io/instance-type=n1-standard-2
			beta.kubernetes.io/os=linux
			cloud.google.com/gke-nodepool=default-pool
			failure-domain.beta.kubernetes.io/region=us-central1
			failure-domain.beta.kubernetes.io/zone=us-central1-c
			kubernetes.io/hostname=gke-bootstrap-e2e-default-pool-933ca23e-bb8d
Taints:			kubernetes.io/e2e-taint-key-78e3b7a1-e497-11e6-bc18-0242ac110005=testing-taint-value:NoSchedule
CreationTimestamp:	Fri, 27 Jan 2017 05:46:29 -0800
Phase:			
Conditions:
  Type			Status	LastHeartbeatTime			LastTransitionTime			Reason				Message
  ----			------	-----------------			------------------			------				-------
  NetworkUnavailable 	False 	Fri, 27 Jan 2017 05:49:46 -0800 	Fri, 27 Jan 2017 05:49:46 -0800 	RouteCreated 			RouteController created a route
  OutOfDisk 		False 	Fri, 27 Jan 2017 05:49:50 -0800 	Fri, 27 Jan 2017 05:46:29 -0800 	KubeletHasSufficientDisk 	kubelet has sufficient disk space available
  MemoryPressure 	False 	Fri, 27 Jan 2017 05:49:50 -0800 	Fri, 27 Jan 2017 05:46:29 -0800 	KubeletHasSufficientMemory 	kubelet has sufficient memory available
  DiskPressure 		False 	Fri, 27 Jan 2017 05:49:50 -0800 	Fri, 27 Jan 2017 05:46:29 -0800 	KubeletHasNoDiskPressure 	kubelet has no disk pressure
  Ready 		True 	Fri, 27 Jan 2017 05:49:50 -0800 	Fri, 27 Jan 2017 05:47:00 -0800 	KubeletReady 			kubelet is posting ready status. WARNING: CPU hardcapping unsupported
Addresses:		10.240.0.2,130.211.229.133
Capacity:
 alpha.kubernetes.io/nvidia-gpu:	0
 cpu:					2
 memory:				7679792Ki
 pods:					110
Allocatable:
 alpha.kubernetes.io/nvidia-gpu:	0
 cpu:					2
 memory:				7679792Ki
 pods:					110
System Info:
 Machine ID:			
 System UUID:			26E72360-ABA2-288A-DBA5-36FD7666F413
 Boot ID:			24aac6ce-ae02-4988-bd34-29134cd960a8
 Kernel Version:		3.16.0-4-amd64
 OS Image:			Debian GNU/Linux 7 (wheezy)
 Operating System:		linux
 Architecture:			amd64
 Container Runtime Version:	docker://1.11.2
 Kubelet Version:		v1.4.9-beta.0.12+95952b366dc81c
 Kube-Proxy Version:		v1.4.9-beta.0.12+95952b366dc81c
PodCIDR:			10.188.2.0/24
ExternalID:			1114391063187353642
Non-terminated Pods:		(3 in total)
  Namespace			Name										CPU Requests	CPU Limits	Memory Requests	Memory Limits
  ---------			----										------------	----------	---------------	-------------
  kube-system			fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-933ca23e-bb8d		100m (5%)	0 (0%)		200Mi (2%)	200Mi (2%)
  kube-system			kube-dns-v20-8iafq								110m (5%)	0 (0%)		120Mi (1%)	220Mi (2%)
  kube-system			kube-proxy-gke-bootstrap-e2e-default-pool-933ca23e-bb8d				100m (5%)	0 (0%)		0 (0%)		0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.
  CPU Requests	CPU Limits	Memory Requests	Memory Limits
  ------------	----------	---------------	-------------
  310m (15%)	0 (0%)		320Mi (4%)	420Mi (5%)
Events:
  FirstSeen	LastSeen	Count	From								SubObjectPath	Type		Reason		Message
  ---------	--------	-----	----								-------------	--------	------		-------
  2m		2m		1	{kube-proxy gke-bootstrap-e2e-default-pool-933ca23e-bb8d}			Normal		Starting	Starting kube-proxy.
  2m		2m		1	{kubelet gke-bootstrap-e2e-default-pool-933ca23e-bb8d}				Normal		NodeReady	Node gke-bootstrap-e2e-default-pool-933ca23e-bb8d status is now: NodeReady

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1432

Issues about this test specifically: #27976 #29503

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Previous issues for this suite: #38466

@k8s-github-robot k8s-github-robot added kind/flake Categorizes issue or PR as related to a flaky test. priority/P2 labels Jan 27, 2017
@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-1.4-1.5-cvm-kubectl-skew/4748/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1212
Expected
    <string>: Waiting for pod e2e-tests-kubectl-enctu/e2e-test-rm-busybox-job-h5i6p to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-enctu/e2e-test-rm-busybox-job-h5i6p to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-enctu/e2e-test-rm-busybox-job-h5i6p to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-enctu/e2e-test-rm-busybox-job-h5i6p to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-enctu/e2e-test-rm-busybox-job-h5i6p to be running, status is Pending, pod ready: false
    job "e2e-test-rm-busybox-job" deleted
    
to contain substring
    <string>: abcd1234
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1205

Issues about this test specifically: #26728 #28266 #30340 #32405

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 30 00:27:26.340: Couldn't delete ns: "e2e-tests-kubectl-ez3qn": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-ez3qn/replicationcontrollers\"") has prevented the request from succeeding (get replicationcontrollers.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-ez3qn/replicationcontrollers\\\"\") has prevented the request from succeeding (get replicationcontrollers.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820e18730), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 30 00:27:07.520: Couldn't delete ns: "e2e-tests-kubectl-fmd2i": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-fmd2i/endpoints\"") has prevented the request from succeeding (get endpoints) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-fmd2i/endpoints\\\"\") has prevented the request from succeeding (get endpoints)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820a9af50), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #35473

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should remove all the taints with the same key off a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 30 00:27:10.035: Couldn't delete ns: "e2e-tests-kubectl-prcgt": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-prcgt/deployments\"") has prevented the request from succeeding (get deployments.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-prcgt/deployments\\\"\") has prevented the request from succeeding (get deployments.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820898ff0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #31066 #31967 #32219 #32535

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820d6e400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-sphv3/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-sphv3/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-sphv3/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should update the taint on a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 30 00:27:08.425: Couldn't delete ns: "e2e-tests-kubectl-57o4h": an error on the server ("Internal Server Error: \"/apis/batch/v1/namespaces/e2e-tests-kubectl-57o4h/jobs\"") has prevented the request from succeeding (get jobs.batch) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/batch/v1/namespaces/e2e-tests-kubectl-57o4h/jobs\\\"\") has prevented the request from succeeding (get jobs.batch)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820a4b720), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #27976 #29503

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 30 00:27:07.471: Couldn't delete ns: "e2e-tests-kubectl-1j9v4": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-kubectl-1j9v4/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-kubectl-1j9v4/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8209f1d10), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:847
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.151.64 --kubeconfig=/workspace/.kube/config logs redis-master-c46l3 redis-master --namespace=e2e-tests-kubectl-ulgv6] []  <nil>  Error from server (InternalError): an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-ulgv6/pods/redis-master-c46l3\\\"\") has prevented the request from succeeding (get pods redis-master-c46l3)\n [] <nil> 0xc820d695c0 exit status 1 <nil> true [0xc8200baa78 0xc8200baa90 0xc8200baaa8] [0xc8200baa78 0xc8200baa90 0xc8200baaa8] [0xc8200baa88 0xc8200baaa0] [0xafae20 0xafae20] 0xc820d66720}:\nCommand stdout:\n\nstderr:\nError from server (InternalError): an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-ulgv6/pods/redis-master-c46l3\\\"\") has prevented the request from succeeding (get pods redis-master-c46l3)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.151.64 --kubeconfig=/workspace/.kube/config logs redis-master-c46l3 redis-master --namespace=e2e-tests-kubectl-ulgv6] []  <nil>  Error from server (InternalError): an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-ulgv6/pods/redis-master-c46l3\"") has prevented the request from succeeding (get pods redis-master-c46l3)
     [] <nil> 0xc820d695c0 exit status 1 <nil> true [0xc8200baa78 0xc8200baa90 0xc8200baaa8] [0xc8200baa78 0xc8200baa90 0xc8200baaa8] [0xc8200baa88 0xc8200baaa0] [0xafae20 0xafae20] 0xc820d66720}:
    Command stdout:
    
    stderr:
    Error from server (InternalError): an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-ulgv6/pods/redis-master-c46l3\"") has prevented the request from succeeding (get pods redis-master-c46l3)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2219

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:402
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.151.64 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-p54wg exec nginx -- /bin/sh -c exit 0] []  <nil>  Error from server: \n [] <nil> 0xc820b07740 exit status 1 <nil> true [0xc8200e04d0 0xc8200e04e8 0xc8200e0500] [0xc8200e04d0 0xc8200e04e8 0xc8200e0500] [0xc8200e04e0 0xc8200e04f8] [0xafae20 0xafae20] 0xc820980de0}:\nCommand stdout:\n\nstderr:\nError from server: \n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.151.64 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-p54wg exec nginx -- /bin/sh -c exit 0] []  <nil>  Error from server: 
     [] <nil> 0xc820b07740 exit status 1 <nil> true [0xc8200e04d0 0xc8200e04e8 0xc8200e0500] [0xc8200e04d0 0xc8200e04e8 0xc8200e0500] [0xc8200e04e0 0xc8200e04f8] [0xafae20 0xafae20] 0xc820980de0}:
    Command stdout:
    
    stderr:
    Error from server: 
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:664

Issues about this test specifically: #31151 #35586

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 30 00:27:07.448: Couldn't delete ns: "e2e-tests-kubectl-9smrh": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-9smrh/replicationcontrollers\"") has prevented the request from succeeding (get replicationcontrollers.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-9smrh/replicationcontrollers\\\"\") has prevented the request from succeeding (get replicationcontrollers.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8208b44b0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #29050

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820a07a80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-q4ecp/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-q4ecp/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-q4ecp/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #29710

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:219
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.151.64 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-07jkw] []  0xc820caeec0  Error from server (InternalError): error when stopping \"STDIN\": an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-07jkw/replicationcontrollers/update-demo-nautilus\\\"\") has prevented the request from succeeding (get replicationcontrollers update-demo-nautilus)\n [] <nil> 0xc820caf560 exit status 1 <nil> true [0xc820039708 0xc820039730 0xc820039740] [0xc820039708 0xc820039730 0xc820039740] [0xc820039710 0xc820039728 0xc820039738] [0xafacc0 0xafae20 0xafae20] 0xc820a2a960}:\nCommand stdout:\n\nstderr:\nError from server (InternalError): error when stopping \"STDIN\": an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-07jkw/replicationcontrollers/update-demo-nautilus\\\"\") has prevented the request from succeeding (get replicationcontrollers update-demo-nautilus)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.151.64 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-07jkw] []  0xc820caeec0  Error from server (InternalError): error when stopping "STDIN": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-07jkw/replicationcontrollers/update-demo-nautilus\"") has prevented the request from succeeding (get replicationcontrollers update-demo-nautilus)
     [] <nil> 0xc820caf560 exit status 1 <nil> true [0xc820039708 0xc820039730 0xc820039740] [0xc820039708 0xc820039730 0xc820039740] [0xc820039710 0xc820039728 0xc820039738] [0xafacc0 0xafae20 0xafae20] 0xc820a2a960}:
    Command stdout:
    
    stderr:
    Error from server (InternalError): error when stopping "STDIN": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-07jkw/replicationcontrollers/update-demo-nautilus\"") has prevented the request from succeeding (get replicationcontrollers update-demo-nautilus)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2219

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82070eb80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-jdck9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-jdck9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-jdck9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #26138 #28429 #28737 #38064

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1043
Expected error:
    <*errors.errorString | 0xc8208b2a40>: {
        s: "kubectl delete failed output: , err: error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.151.64 --kubeconfig=/workspace/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-7glft] []  <nil>  Error from server (InternalError): an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-7glft/replicasets/e2e-test-nginx-deployment-4272272891\\\"\") has prevented the request from succeeding (delete replicasets.extensions e2e-test-nginx-deployment-4272272891)\n [] <nil> 0xc820712c60 exit status 1 <nil> true [0xc8202bc4b0 0xc8202bc4c8 0xc8202bc4e0] [0xc8202bc4b0 0xc8202bc4c8 0xc8202bc4e0] [0xc8202bc4c0 0xc8202bc4d8] [0xafae20 0xafae20] 0xc820c17080}:\nCommand stdout:\n\nstderr:\nError from server (InternalError): an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-7glft/replicasets/e2e-test-nginx-deployment-4272272891\\\"\") has prevented the request from succeeding (delete replicasets.extensions e2e-test-nginx-deployment-4272272891)\n\nerror:\nexit status 1\n",
    }
    kubectl delete failed output: , err: error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.151.64 --kubeconfig=/workspace/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-7glft] []  <nil>  Error from server (InternalError): an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-7glft/replicasets/e2e-test-nginx-deployment-4272272891\"") has prevented the request from succeeding (delete replicasets.extensions e2e-test-nginx-deployment-4272272891)
     [] <nil> 0xc820712c60 exit status 1 <nil> true [0xc8202bc4b0 0xc8202bc4c8 0xc8202bc4e0] [0xc8202bc4b0 0xc8202bc4c8 0xc8202bc4e0] [0xc8202bc4c0 0xc8202bc4d8] [0xafae20 0xafae20] 0xc820c17080}:
    Command stdout:
    
    stderr:
    Error from server (InternalError): an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-7glft/replicasets/e2e-test-nginx-deployment-4272272891\"") has prevented the request from succeeding (delete replicasets.extensions e2e-test-nginx-deployment-4272272891)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1042

Issues about this test specifically: #27532 #34567

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-1.4-1.5-cvm-kubectl-skew/4761/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:745
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.213.47 --kubeconfig=/workspace/.kube/config logs redis-master-d11kc redis-master --namespace=e2e-tests-kubectl-41hhc] []  <nil>  Error from server (InternalError): an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-41hhc/pods/redis-master-d11kc\\\"\") has prevented the request from succeeding (get pods redis-master-d11kc)\n [] <nil> 0xc820a242a0 exit status 1 <nil> true [0xc820038428 0xc820038450 0xc820038468] [0xc820038428 0xc820038450 0xc820038468] [0xc820038448 0xc820038460] [0xafae20 0xafae20] 0xc820f0bf80}:\nCommand stdout:\n\nstderr:\nError from server (InternalError): an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-41hhc/pods/redis-master-d11kc\\\"\") has prevented the request from succeeding (get pods redis-master-d11kc)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.213.47 --kubeconfig=/workspace/.kube/config logs redis-master-d11kc redis-master --namespace=e2e-tests-kubectl-41hhc] []  <nil>  Error from server (InternalError): an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-41hhc/pods/redis-master-d11kc\"") has prevented the request from succeeding (get pods redis-master-d11kc)
     [] <nil> 0xc820a242a0 exit status 1 <nil> true [0xc820038428 0xc820038450 0xc820038468] [0xc820038428 0xc820038450 0xc820038468] [0xc820038448 0xc820038460] [0xafae20 0xafae20] 0xc820f0bf80}:
    Command stdout:
    
    stderr:
    Error from server (InternalError): an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-41hhc/pods/redis-master-d11kc\"") has prevented the request from succeeding (get pods redis-master-d11kc)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2219

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 30 06:27:57.706: Couldn't delete ns: "e2e-tests-kubectl-4hxel": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-4hxel/ingresses\"") has prevented the request from succeeding (get ingresses.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-4hxel/ingresses\\\"\") has prevented the request from succeeding (get ingresses.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82042d5e0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should remove all the taints with the same key off a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1346
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.213.47 --kubeconfig=/workspace/.kube/config describe node gke-bootstrap-e2e-default-pool-9fb9c9d6-94jc] []  <nil>  Error from server (InternalError): an error on the server (\"Internal Server Error: \\\"/api/v1/pods?fieldSelector=spec.nodeName%3Dgke-bootstrap-e2e-default-pool-9fb9c9d6-94jc%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded\\\"\") has prevented the request from succeeding (get pods)\n [] <nil> 0xc8205668a0 exit status 1 <nil> true [0xc820da2020 0xc820da2038 0xc820da2050] [0xc820da2020 0xc820da2038 0xc820da2050] [0xc820da2030 0xc820da2048] [0xafae20 0xafae20] 0xc820da93e0}:\nCommand stdout:\n\nstderr:\nError from server (InternalError): an error on the server (\"Internal Server Error: \\\"/api/v1/pods?fieldSelector=spec.nodeName%3Dgke-bootstrap-e2e-default-pool-9fb9c9d6-94jc%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded\\\"\") has prevented the request from succeeding (get pods)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.213.47 --kubeconfig=/workspace/.kube/config describe node gke-bootstrap-e2e-default-pool-9fb9c9d6-94jc] []  <nil>  Error from server (InternalError): an error on the server ("Internal Server Error: \"/api/v1/pods?fieldSelector=spec.nodeName%3Dgke-bootstrap-e2e-default-pool-9fb9c9d6-94jc%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded\"") has prevented the request from succeeding (get pods)
     [] <nil> 0xc8205668a0 exit status 1 <nil> true [0xc820da2020 0xc820da2038 0xc820da2050] [0xc820da2020 0xc820da2038 0xc820da2050] [0xc820da2030 0xc820da2048] [0xafae20 0xafae20] 0xc820da93e0}:
    Command stdout:
    
    stderr:
    Error from server (InternalError): an error on the server ("Internal Server Error: \"/api/v1/pods?fieldSelector=spec.nodeName%3Dgke-bootstrap-e2e-default-pool-9fb9c9d6-94jc%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded\"") has prevented the request from succeeding (get pods)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:167

Issues about this test specifically: #31066 #31967 #32219 #32535
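For anyone triaging the `describe node` failure above: the field selector in the failing apiserver request is URL-encoded, which makes it hard to read in the log. A small illustrative snippet (not part of the test code) decodes it to show the ordinary pod query the request was making:

```python
# Decode the URL-encoded field selector seen in the failing
# /api/v1/pods request from the "Kubectl taint" test above.
from urllib.parse import unquote

encoded = ("spec.nodeName%3Dgke-bootstrap-e2e-default-pool-9fb9c9d6-94jc"
           "%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded")

decoded = unquote(encoded)
print(decoded)
# spec.nodeName=gke-bootstrap-e2e-default-pool-9fb9c9d6-94jc,status.phase!=Failed,status.phase!=Succeeded
```

In other words, `kubectl describe node` was listing the non-terminated pods on that node, and the apiserver answered with a 500; the failure is in the server, not in the selector itself.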

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

@calebamiles modified the milestone: v1.6 Mar 3, 2017
@calebamiles modified the milestones: v1.6, v1.5 Mar 13, 2017
@k8s-github-robot
Author

This Issue hasn't been active in 97 days. Closing this Issue. Please reopen if you would like to work towards merging this change, if/when the Issue is ready for the next round of review.

cc @k8s-merge-robot @rmmh

You can add 'keep-open' label to prevent this from happening again, or add a comment to keep it open another 90 days

@k8s-github-robot
Author

This Issue hasn't been active in 98 days. Closing this Issue. Please reopen if you would like to work towards merging this change, if/when the Issue is ready for the next round of review.

cc @k8s-merge-robot @rmmh

You can add 'keep-open' label to prevent this from happening again, or add a comment to keep it open another 90 days


Labels
kind/flake Categorizes issue or PR as related to a flaky test.