
[k8s.io] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite} #26544

Closed
k8s-github-robot opened this issue May 30, 2016 · 4 comments
Labels: kind/flake (Categorizes issue or PR as related to a flaky test.)

Comments

@k8s-github-robot

https://storage.googleapis.com/kubernetes-jenkins/logs/kubernetes-e2e-gce-scalability/7993/

Failed: [k8s.io] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/load.go:182
creating rc load-big-rc-3
Expected error:
    <*errors.errorString | 0xc8283a7fb0>: {
        s: "Number of reported pods for load-big-rc-3 changed: 248 vs 250",
    }
    Number of reported pods for load-big-rc-3 changed: 248 vs 250
not to have occurred
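
For reference, this failure means the number of pods the apiserver reported for load-big-rc-3 drifted from the expected replica count (248 observed vs 250 expected) while the test was watching the RC. Below is a minimal, illustrative sketch of that kind of check, written against a current client-go API rather than the 2016 test/e2e code; the function names are made up for this example.

    // Illustrative sketch, not the actual test/e2e/load.go code: list the pods
    // selected by a replication controller and compare the observed count with
    // the expected replica count, producing the same style of error as above.
    package loadcheck

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/labels"
        "k8s.io/client-go/kubernetes"
    )

    // countRCPods returns how many pods the apiserver currently reports for the
    // given replication controller.
    func countRCPods(ctx context.Context, c kubernetes.Interface, ns, rcName string) (int, error) {
        rc, err := c.CoreV1().ReplicationControllers(ns).Get(ctx, rcName, metav1.GetOptions{})
        if err != nil {
            return 0, err
        }
        selector := labels.SelectorFromSet(rc.Spec.Selector)
        pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector.String()})
        if err != nil {
            return 0, err
        }
        return len(pods.Items), nil
    }

    // checkPodCount fails when the observed count drifts from the expected one,
    // e.g. "Number of reported pods for load-big-rc-3 changed: 248 vs 250".
    func checkPodCount(ctx context.Context, c kubernetes.Interface, ns, rcName string, expected int) error {
        observed, err := countRCPods(ctx, c, ns, rcName)
        if err != nil {
            return err
        }
        if observed != expected {
            return fmt.Errorf("Number of reported pods for %s changed: %d vs %d", rcName, observed, expected)
        }
        return nil
    }
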
k8s-github-robot added the kind/flake label on May 30, 2016
@k8s-github-robot
Author

https://storage.googleapis.com/kubernetes-jenkins/logs/kubernetes-e2e-gce-scalability/7995/

Failed: [k8s.io] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/load.go:71
Expected
    <int>: 15
not to be >
    <int>: 0
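
This is a Gomega assertion of the form Expect(n).NotTo(BeNumerically(">", 0)) failing because n was 15. In the scalability suites a check of this shape typically guards the number of API calls whose latency exceeded the allowed threshold, which would match the "metrics are ~100x higher" observation below, but the exact meaning of the value at load.go:71 cannot be recovered from this log. A hypothetical, standalone reproduction of the assertion shape:

    // Hypothetical test reproducing the shape of the failure
    // "Expected <int>: 15 not to be > <int>: 0"; the variable name is
    // illustrative, not taken from load.go.
    package loadcheck

    import (
        "testing"

        "github.com/onsi/gomega"
    )

    func TestNoHighLatencyRequests(t *testing.T) {
        g := gomega.NewWithT(t)
        highLatencyRequests := 15 // value reported in the failed run
        // Fails with: Expected <int>: 15 not to be > <int>: 0
        g.Expect(highLatencyRequests).NotTo(gomega.BeNumerically(">", 0))
    }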

@yujuhong
Contributor

/cc @kubernetes/sig-scalability

@wojtek-t
Member

The second run is just ridiculous - metrics are ~100x higher than they usually are - this had to be some GCE-related flake.

In the first one, it looks like something bad happened to either the apiserver or the nodes - I will take a look later this week.

@wojtek-t
Member

wojtek-t commented Jun 1, 2016

Unfortunately, we don't have any logs from that run, so we can't debug it further.

But we have thousands of logs like this one:

"May 30 13:58:07.305: INFO: Error while reading data from e2e-scalability-minion-group-gs4m: Unable to get server version: Get https://104.197.116.46/version: dial tcp 104.197.116.46:443: i/o timeout"

which suggests that something bad happened to the apiserver.
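
For reference, "Unable to get server version" is what a Kubernetes client reports when its GET against the apiserver's /version endpoint fails (the URL in the quoted log is literally https://104.197.116.46/version); an i/o timeout there means the apiserver, or the network path to it, was unreachable. A minimal sketch of such a probe with current client-go; the function name is illustrative.

    // Illustrative sketch of the /version probe whose failure produces
    // "Unable to get server version: ... dial tcp ...:443: i/o timeout".
    package loadcheck

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    func checkAPIServerReachable(c kubernetes.Interface) error {
        // ServerVersion performs a GET against https://<apiserver>/version.
        v, err := c.Discovery().ServerVersion()
        if err != nil {
            return fmt.Errorf("Unable to get server version: %v", err)
        }
        fmt.Printf("apiserver reachable, version %s\n", v.GitVersion)
        return nil
    }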

My strong feeling is that this was caused by #26563, since that could have exactly the effect we saw here.

So I'm closing this as a duplicate of #26563

wojtek-t closed this as completed on Jun 1, 2016