ci-kubernetes-e2e-gke-serial: broken test run #43261

Closed
k8s-github-robot opened this issue Mar 17, 2017 · 5 comments

Labels: kind/flake (Categorizes issue or PR as related to a flaky test.)
Milestone: v1.6

Comments

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/965/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:319
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421dfa5b0>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:266

Issues about this test specifically: #37259
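
Most of the "timed out waiting for the condition" failures in this run share one shape: the e2e framework polls each pod until it reaches the Running phase, and the generic timeout text is produced by the polling helper, not by the test itself. Below is a minimal sketch of that pattern using client-go's wait package; the namespace, pod name, and intervals are illustrative, the context-taking Get signature assumes a recent client-go, and this is not the framework's actual helper.

```go
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodRunning polls until the pod reaches phase Running. On
// timeout, wait.PollImmediate returns the generic
// "timed out waiting for the condition" error seen throughout this run.
func waitForPodRunning(c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat errors as transient and keep polling
		}
		return pod.Status.Phase == v1.PodRunning, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if err := waitForPodRunning(kubernetes.NewForConfigOrDie(cfg), "default", "test-pod"); err != nil {
		fmt.Println("pod did not start:", err)
	}
}
```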

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner Default should be disabled by changing the default annotation[Slow] [Serial] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:325
Expected error:
    <*errors.StatusError | 0xc420e1eb00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "storageclasses.storage.k8s.io \"default\" not found",
            Reason: "NotFound",
            Details: {
                Name: "default",
                Group: "storage.k8s.io",
                Kind: "storageclasses",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    storageclasses.storage.k8s.io "default" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:355

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:270
Expected error:
    <*errors.errorString | 0xc422763b80>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:253

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:296
Expected error:
    <*errors.errorString | 0xc4214f2e80>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:281

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] Daemon set [Serial] Should not update pod when spec was updated and update strategy is OnDelete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:271
Expected error:
    <*errors.errorString | 0xc4203f1880>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:262

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner Default should be disabled by removing the default annotation[Slow] [Serial] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:349
Expected error:
    <*errors.StatusError | 0xc420c89300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "storageclasses.storage.k8s.io \"default\" not found",
            Reason: "NotFound",
            Details: {
                Name: "default",
                Group: "storage.k8s.io",
                Kind: "storageclasses",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    storageclasses.storage.k8s.io "default" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:355

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:374
Expected error:
    <*errors.errorString | 0xc4213c6610>: {
        s: "Only 0 pods started out of 3",
    }
    Only 0 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:334

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421df07c0>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331

Issues about this test specifically: #37479

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Expected error:
    <*errors.errorString | 0xc421ec8a40>: {
        s: "Only 0 pods started out of 3",
    }
    Only 0 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:387

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Previous issues for this suite: #37162 #37931 #40468

@k8s-github-robot k8s-github-robot added kind/flake Categorizes issue or PR as related to a flaky test. priority/P2 labels Mar 17, 2017
@calebamiles calebamiles added this to the v1.6 milestone Mar 17, 2017
@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/966/
Multiple broken tests:

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner Default should be disabled by removing the default annotation[Slow] [Serial] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:349
Expected error:
    <*errors.StatusError | 0xc421f0e980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "storageclasses.storage.k8s.io \"default\" not found",
            Reason: "NotFound",
            Details: {
                Name: "default",
                Group: "storage.k8s.io",
                Kind: "storageclasses",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    storageclasses.storage.k8s.io "default" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:355

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:312
Expected error:
    <*errors.StatusError | 0xc421784600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "Operation cannot be fulfilled on daemonsets.extensions \"daemon-set\": the object has been modified; please apply your changes to the latest version and try again",
            Reason: "Conflict",
            Details: {Name: "daemon-set", Group: "extensions", Kind: "daemonsets", Causes: nil, RetryAfterSeconds: 0},
            Code: 409,
        },
    }
    Operation cannot be fulfilled on daemonsets.extensions "daemon-set": the object has been modified; please apply your changes to the latest version and try again
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:298
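
For context on the 409 Conflict above (it recurs in the next run's report below): this is the API server's optimistic-concurrency check rejecting an update whose resourceVersion is stale, which is why the message says to re-apply against the latest version. client-go ships a standard helper for exactly this, retry.RetryOnConflict. A minimal sketch follows; the names and namespace are illustrative, and it uses apps/v1 where the test at the time used the extensions group.

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// updateDaemonSetImage re-reads the object on every attempt so each
// Update carries a fresh resourceVersion; RetryOnConflict retries only
// when the server answers 409 Conflict, as in the failure above.
func updateDaemonSetImage(c kubernetes.Interface, ns, name, image string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ds, err := c.AppsV1().DaemonSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		ds.Spec.Template.Spec.Containers[0].Image = image
		_, err = c.AppsV1().DaemonSets(ns).Update(context.TODO(), ds, metav1.UpdateOptions{})
		return err
	})
}
```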

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner Default should be disabled by changing the default annotation[Slow] [Serial] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:325
Expected error:
    <*errors.StatusError | 0xc42036ca00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "storageclasses.storage.k8s.io \"default\" not found",
            Reason: "NotFound",
            Details: {
                Name: "default",
                Group: "storage.k8s.io",
                Kind: "storageclasses",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    storageclasses.storage.k8s.io "default" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:355

@jsafrane (Member)

I have a fix for `storageclasses.storage.k8s.io "default" not found` in #43285. I did not analyze the other errors in this test run!

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/968/
Multiple broken tests:

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner Default should be disabled by changing the default annotation[Slow] [Serial] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:325
Expected error:
    <*errors.StatusError | 0xc421c4a300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "storageclasses.storage.k8s.io \"default\" not found",
            Reason: "NotFound",
            Details: {
                Name: "default",
                Group: "storage.k8s.io",
                Kind: "storageclasses",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    storageclasses.storage.k8s.io "default" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:355

Failed: [k8s.io] Daemon set [Serial] Should not update pod when spec was updated and update strategy is OnDelete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:271
Expected error:
    <*errors.StatusError | 0xc42045be80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "Operation cannot be fulfilled on daemonsets.extensions \"daemon-set\": the object has been modified; please apply your changes to the latest version and try again",
            Reason: "Conflict",
            Details: {Name: "daemon-set", Group: "extensions", Kind: "daemonsets", Causes: nil, RetryAfterSeconds: 0},
            Code: 409,
        },
    }
    Operation cannot be fulfilled on daemonsets.extensions "daemon-set": the object has been modified; please apply your changes to the latest version and try again
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:257

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner Default should be disabled by removing the default annotation[Slow] [Serial] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:349
Expected error:
    <*errors.StatusError | 0xc420beba00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "storageclasses.storage.k8s.io \"default\" not found",
            Reason: "NotFound",
            Details: {
                Name: "default",
                Group: "storage.k8s.io",
                Kind: "storageclasses",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    storageclasses.storage.k8s.io "default" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:355

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

@marun (Contributor) commented Mar 17, 2017

@janetkuo is working on a fix for the daemonset failure; see the linked issue.

k8s-github-robot pushed a commit that referenced this issue Mar 17, 2017
Automatic merge from submit-queue (batch tested with PRs 42869, 43298, 43285)

Fix default storage class tests

The name of the default storage class is not "default"; it must be discovered dynamically.

```release-note
NONE
```

This fixes the `storageclasses.storage.k8s.io "default" not found` flake in #43261.
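
To make the fix's point concrete: rather than fetching a StorageClass literally named "default" (which GKE does not guarantee to exist), the test has to scan for whichever class is marked default via annotation. A minimal sketch of that lookup follows; it uses the GA annotation key, while around the time of this issue the beta key `storageclass.beta.kubernetes.io/is-default-class` was in use, and this is an illustration, not the code from #43285.

```go
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// GA annotation key; the beta key storageclass.beta.kubernetes.io/is-default-class
// was the one in use when this issue was filed.
const isDefaultAnnotation = "storageclass.kubernetes.io/is-default-class"

// defaultStorageClassName returns the name of whichever class is marked
// default, instead of assuming a class literally named "default" exists.
func defaultStorageClassName(c kubernetes.Interface) (string, error) {
	scs, err := c.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return "", err
	}
	for _, sc := range scs.Items {
		if sc.Annotations[isDefaultAnnotation] == "true" {
			return sc.Name, nil
		}
	}
	return "", fmt.Errorf("no StorageClass is annotated as default")
}
```
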
@ethernetdan (Contributor)

Looks stable other than known issues affecting multiple suites.
