Horizontal pod autoscaling tests don't work #14667

Closed
piosz opened this issue Sep 28, 2015 · 6 comments
Labels: area/test, kind/flake, priority/critical-urgent, sig/autoscaling

Comments

piosz (Member) commented Sep 28, 2015

Horizontal pod autoscaling [Skipped][Autoscaling Suite] should scale from 5 pods to 3 pods and from 3 to 1 (scale resource: Memory)

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:149
Sep 28 09:06:15.915: timeout waiting 10m0s for pods size to be 3

Horizontal pod autoscaling [Skipped][Autoscaling Suite] should scale from 1 pod to 3 pods (scale resource: CPU)

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:42
Sep 28 09:40:07.245: timeout waiting 10m0s for pods size to be 3
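
For context, the "timeout waiting 10m0s for pods size to be N" message comes from the e2e test polling the cluster until the scaled workload reaches the expected replica count. A minimal sketch of that kind of wait loop, written against a recent client-go API rather than the actual 2015 e2e helper; the namespace, controller name, and poll interval below are illustrative assumptions:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForReplicas polls the ReplicationController backing the test workload
// until it reports the desired number of replicas or the timeout expires.
func waitForReplicas(c kubernetes.Interface, namespace, rcName string, want int, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		rc, err := c.CoreV1().ReplicationControllers(namespace).Get(context.TODO(), rcName, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if int(rc.Status.Replicas) == want {
			return nil
		}
		time.Sleep(20 * time.Second) // poll interval; the real test uses its own timing
	}
	return fmt.Errorf("timeout waiting %v for pods size to be %d", timeout, want)
}

func main() {
	// Build a client from the local kubeconfig (illustrative; the e2e
	// framework wires up its own client).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	if err := waitForReplicas(client, "e2e-tests-horizontal-pod-autoscaling", "rc", 3, 10*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
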
piosz added the area/test, priority/important-soon, and sig/autoscaling labels Sep 28, 2015
jszczepkowski (Contributor) commented:

Duplicate of #14656.

piosz (Member, Author) commented Oct 5, 2015

The test failed 9 times in the last 30 runs. Reopening.

piosz reopened this Oct 5, 2015
piosz added the kind/flake label Oct 5, 2015
piosz added the priority/critical-urgent label and removed the priority/important-soon label Oct 5, 2015
piosz (Member, Author) commented Oct 5, 2015

The test also failed during the last 3 runs, so I am increasing the priority.

brendandburns (Contributor) commented:

Two more failures today. Is this being actively worked on?

piosz (Member, Author) commented Oct 7, 2015

Yes, both @jszczepkowski and I are working on it. I believe we are close to a solution.

jszczepkowski added a commit to jszczepkowski/kubernetes that referenced this issue Oct 7, 2015
Increased memory limit for horizontal pod autoscaler e2e test. Fixes kubernetes#14667.
jszczepkowski (Contributor) commented:

There are two problems here:

  1. Resource consumers sometimes get silently killed (for unknown reasons).
  2. Memory limits for the resource consumer were too low.

My PR (Increased memory limit for horizontal pod autoscaler e2e test. #15218) solved the second problem, which is enough to make the test stable; a sketch of that kind of change follows below. The first problem is still a mystery to us.
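
For illustration, the fix described above amounts to giving the resource-consumer container a larger memory limit so it is not OOM-killed while generating load. A minimal sketch, where the field layout follows the standard Kubernetes API types but the concrete values are assumptions rather than the exact figures from #15218:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// consumerResources returns requests/limits for the resource-consumer pod.
// The 200Mi memory limit is an illustrative bump over a previously tighter
// limit; the exact values used by the e2e test may differ.
func consumerResources() v1.ResourceRequirements {
	return v1.ResourceRequirements{
		Requests: v1.ResourceList{
			v1.ResourceCPU:    resource.MustParse("500m"),
			v1.ResourceMemory: resource.MustParse("200Mi"),
		},
		Limits: v1.ResourceList{
			v1.ResourceCPU:    resource.MustParse("500m"),
			v1.ResourceMemory: resource.MustParse("200Mi"), // raised so the consumer is not OOM-killed under load
		},
	}
}

func main() {
	fmt.Printf("%+v\n", consumerResources())
}
```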

RichieEscarez pushed a commit to RichieEscarez/kubernetes that referenced this issue Dec 4, 2015
Increased memory limit for horizontal pod autoscaler e2e test. Fixes kubernetes#14667.