
[k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite} #31591

Closed
k8s-github-robot opened this issue Aug 28, 2016 · 6 comments

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-serial/2033/

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:49
Aug 28 09:38:39.413: timeout waiting 15m0s for pods size to be 5
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:284

Previous issues for this test: #30317
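For context, the "timeout waiting 15m0s for pods size to be 5" failure above comes from the e2e framework polling the scale target's replica count until a fixed deadline. A minimal sketch of that polling pattern, assuming a simple sleep loop (the helper name and 10s interval are illustrative, not the exact code in test/e2e/autoscaling_utils.go):

```go
// Illustrative sketch of the wait loop behind the failure message;
// waitForReplicas and its polling interval are assumptions, not the
// actual implementation in autoscaling_utils.go.
package e2e

import (
	"fmt"
	"time"
)

func waitForReplicas(desired int, getReplicas func() (int, error)) error {
	const (
		interval = 10 * time.Second
		timeout  = 15 * time.Minute // matches the deadline in the logs
	)
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if n, err := getReplicas(); err == nil && n == desired {
			return nil
		}
		time.Sleep(interval)
	}
	// This is the error surfaced in the failure output above.
	return fmt.Errorf("timeout waiting %v for pods size to be %d", timeout, desired)
}
```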

@k8s-github-robot added the priority/backlog and kind/flake labels on Aug 28, 2016
@dchen1107 (Member)

xref: #31448

@j3ffml (Contributor) commented Aug 29, 2016

@rmmh do you know why I got auto-assigned this? I think the owner should be the same as for the other HPA test, since the only difference is that one uses a Deployment and the other a ReplicationController (#30571).

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gke/8544/

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:49
Aug 30 09:05:22.867: timeout waiting 15m0s for pods size to be 5
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:284

@bprashanth (Contributor)

@jszczepkowski I'm seeing a bunch of OOM kills in the kern.log for the "consume-cpu" process, which I'm guessing is from this test:

Aug 30 13:15:05 gke-jenkins-e2e-default-pool-8cc0a277-dk7i kernel: [ 3043.435251] memory: usage 98440kB, limit 102400kB, failcnt 1031
Aug 30 13:15:05 gke-jenkins-e2e-default-pool-8cc0a277-dk7i kernel: [ 3043.442888] memory+swap: usage 0kB, limit 18014398509481983kB, failcnt 0
Aug 30 13:15:05 gke-jenkins-e2e-default-pool-8cc0a277-dk7i kernel: [ 3043.451589] kmem: usage 0kB, limit 18014398509481983kB, failcnt 0
Aug 30 13:15:05 gke-jenkins-e2e-default-pool-8cc0a277-dk7i kernel: [ 3043.458582] Memory cgroup stats for /c22c7425a0623a522f6385e71309c648b3bd145d43c1085472986b7913c83d15: cache:8KB rss:98688KB rss_huge:0KB mapped_file:0KB writeback:0KB inactive_anon:0KB active_anon:98708KB inactive_file:0KB active_file:0KB unevictable:0KB
Aug 30 13:15:05 gke-jenkins-e2e-default-pool-8cc0a277-dk7i kernel: [ 3043.495645] [ pid ]   uid  tgid total_vm      rss nr_ptes swapents oom_score_adj name
Aug 30 13:15:05 gke-jenkins-e2e-default-pool-8cc0a277-dk7i kernel: [ 3043.503674] [17485]     0 17485     3319     2149      11        0          -998 consumer
Aug 30 13:15:05 gke-jenkins-e2e-default-pool-8cc0a277-dk7i kernel: [ 3043.513300] [30083]     0 30083     2418     1734       9        0          -998 consume-cpu
Aug 30 13:15:05 gke-jenkins-e2e-default-pool-8cc0a277-dk7i kernel: [ 3043.523266] [30084]     0 30084     2418     1667       9        0          -998 consume-cpu
Aug 30 13:15:05 gke-jenkins-e2e-default-pool-8cc0a277-dk7i kernel: [ 3043.533222] [30085]     0 30085     2418     1737       9        0          -998 consume-cpu
Aug 30 13:15:05 gke-jenkins-e2e-default-pool-8cc0a277-dk7i kernel: [ 3043.543207] [30086]     0 30086     2418     1737       9        0          -998 consume-cpu
Aug 30 13:15:05 gke-jenkins-e2e-default-pool-8cc0a277-dk7i kernel: [ 3043.553265] [30096]     0 30096     2418     1734       9        0          -998 consume-cpu
Aug 30 13:15:05 gke-jenkins-e2e-default-pool-8cc0a277-dk7i kernel: [ 3043.561843] [30103]     0 30103     2418     1735       9        0          -998 consume-cpu
Aug 30 13:15:05 gke-jenkins-e2e-default-pool-8cc0a277-dk7i kernel: [ 3043.572060] [30107]     0 30107     2418     1738       9        0          -998 consume-cpu
Aug 30 13:15:05 gke-jenkins-e2e-default-pool-8cc0a277-dk7i kernel: [ 3043.583111] [30108]     0 30108     2418     1668       9        0          -998 consume-cpu
Aug 30 13:15:05 gke-jenkins-e2e-default-pool-8cc0a277-dk7i kernel: [ 3043.591782] [30112]     0 30112     2418     1658       9        0          -998 consume-cpu
Aug 30 13:15:05 gke-jenkins-e2e-default-pool-8cc0a277-dk7i kernel: [ 3043.601776] [30115]     0 30115     2418     1737       9        0          -998 consume-cpu
Aug 30 13:15:05 gke-jenkins-e2e-default-pool-8cc0a277-dk7i kernel: [ 3043.611733] [30117]     0 30117     2418     1732       9        0          -998 consume-cpu
Aug 30 13:15:05 gke-jenkins-e2e-default-pool-8cc0a277-dk7i kernel: [ 3043.621687] [30122]     0 30122     2418     1722       9        0          -998 consume-cpu
Aug 30 13:15:05 gke-jenkins-e2e-default-pool-8cc0a277-dk7i kernel: [ 3043.631640] [30130]     0 30130     2418     1735       9        0          -998 consume-cpu
Aug 30 13:15:05 gke-jenkins-e2e-default-pool-8cc0a277-dk7i kernel: [ 3043.641605] [30131]     0 30131     2418     1653       9        0          -998 consume-cpu
Aug 30 13:15:05 gke-jenkins-e2e-default-pool-8cc0a277-dk7i kernel: [ 3043.651564] [30139]     0 30139     2418     1671       9        0          -998 consume-cpu
Aug 30 13:15:05 gke-jenkins-e2e-default-pool-8cc0a277-dk7i kernel: [ 3043.661519] [30147]     0 30147     2418     1723       8        0          -998 consume-cpu
Aug 30 13:15:05 gke-jenkins-e2e-default-pool-8cc0a277-dk7i kernel: [ 3043.671473] [30151]     0 30151     2418     1728       9        0          -998 consume-cpu
Aug 30 13:15:05 gke-jenkins-e2e-default-pool-8cc0a277-dk7i kernel: [ 3043.681431] [30152]     0 30152     2418     1656       9        0          -998 consume-cpu
Aug 30 13:15:05 gke-jenkins-e2e-default-pool-8cc0a277-dk7i kernel: [ 3043.690005] Memory cgroup out of memory: Kill process 30152 (consume-cpu) score 0 or sacrifice child
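
Note on reading the table: total_vm and rss in the OOM-killer output are counted in pages, not kilobytes, so each consume-cpu process at ~1700 pages holds roughly 6.8 MiB resident (assuming the common 4 KiB page size), and the cgroup's usage of 98440kB is right at its 102400kB (100 MB) limit. A quick conversion sketch:

```go
// Helper for reading the OOM-killer table above: rss is in pages, so
// e.g. pid 30083's rss of 1734 pages is ~6.8 MiB. The 4 KiB page size
// is an assumption (check with `getconf PAGESIZE`).
package oomlog

const pageSize = 4096 // bytes per page, assumed

// RSSBytes converts a resident-set size in pages to bytes.
func RSSBytes(rssPages int64) int64 {
	return rssPages * pageSize
}
```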

@jszczepkowski (Contributor)

@bprashanth Thanks for noticing that, I'll bump limits.

jszczepkowski added a commit to jszczepkowski/kubernetes that referenced this issue Sep 2, 2016
Bumped memory limit for resource consumer from 100 MB to 200 MB. Fixes kubernetes#31591.
k8s-github-robot pushed a commit that referenced this issue Sep 5, 2016
Automatic merge from submit-queue

Bumped memory limit for resource consumer. Fixes #31591.

Bumped memory limit for resource consumer from 100 MB to 200 MB, increased request sizes so that the number of consumers will be smaller. Fixes #31591.
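
For reference, the fix amounts to doubling the memory limit on the resource-consumer container. A hypothetical sketch of the relevant spec fields (the container name, variable layout, and modern k8s.io/api import paths are assumptions, not the actual diff):

```go
// Hypothetical sketch of the bumped limit (100Mi -> 200Mi) on the
// resource consumer; not the actual change in the referenced commit.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	consumer := v1.Container{
		Name: "consumer", // name taken from the OOM log above
		Resources: v1.ResourceRequirements{
			Limits: v1.ResourceList{
				// Was 100Mi, which the kernel log shows being exhausted.
				v1.ResourceMemory: resource.MustParse("200Mi"),
			},
		},
	}
	fmt.Println("memory limit:", consumer.Resources.Limits.Memory())
}
```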
@davidopp added this to the v1.4 milestone on Sep 18, 2016