Heapster sizing adjustments #22940
Conversation
Labelling this PR as size/M
GCE e2e build/test passed for commit 07d5e67.
{% set num_nodes = pillar.get('num_nodes', -1) -%}
{% if num_nodes >= 0 -%}
{% set heapster_memory = (200 + num_nodes * 3)|string + "Mi" -%}
{% set metrics_memory = (200 + num_nodes * 3)|string + "Mi" -%}
{% set eventer_memory = (200000 + num_nodes * 200)|string + "Ki" -%}
gmarek
Mar 14, 2016
Member
s/200000/200 * 1024/
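Applied to the eventer line above, the suggested substitution would read roughly as follows (a sketch of the reviewer's suggestion, not the final diff):

{% set eventer_memory = (200 * 1024 + num_nodes * 200)|string + "Ki" -%}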
GCE e2e build/test passed for commit 798f0bd.
please ping this thread when you think it is ready to merge
{% set num_nodes = pillar.get('num_nodes', -1) -%}
{% if num_nodes >= 0 -%}
{% set heapster_memory = (200 + num_nodes * 3)|string + "Mi" -%}
{% set metrics_memory = (200 + num_nodes * 4)|string + "Mi" -%}
vishh
Mar 14, 2016
Member
Are we increasing the memory requirement for heapster pod as a whole?
mwielgus
Mar 14, 2016
Author
Contributor
Yes. Just in case.
For a 1000-node cluster we are able to operate without any sink at 2-2.3 GB (with 30k pause and 2k system pods). GKE http output creation consumes +400 MB. That brings us to 2.75. If a couple of events occur at once: scraping, GKE scraping and pod relist, then the temporary memory consumption can be bigger. So to be on the VERY safe side I decided to go with the 4 multiplier.
As we generated a lot of noise (=events) during the tests it became apparent that Eventer should also have more memory. The list that happens inside of watch consumes LOTS of memory (we will try to get rid of it just after 1.2).
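For illustration, the proposed formulas evaluate as follows for a 1000-node cluster (a sketch only; the real values come from the Salt pillar):

{# Hypothetical evaluation for num_nodes = 1000 #}
{% set num_nodes = 1000 -%}
{% set heapster_memory = (200 + num_nodes * 3)|string + "Mi" -%}  {# "3200Mi" #}
{% set metrics_memory = (200 + num_nodes * 4)|string + "Mi" -%}   {# "4200Mi", about 4.1 GiB, above the ~2.75 GB observed plus transient spikes #}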
Also bumping the image to beta2.
TeamCity OSS :: Kubernetes Mesos :: 4 - Smoke Tests Build 19062 outcome was SUCCESS
GCE e2e build/test passed for commit f5f6a80.
GCE e2e build/test passed for commit 6123df9.
LGTM
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
{% set num_nodes = pillar.get('num_nodes', -1) -%}
{% if num_nodes >= 0 -%}
{% set heapster_memory = (200 + num_nodes * 3)|string + "Mi" -%}
{% set metrics_memory = (200 + num_nodes * 4)|string + "Mi" -%}
{% set eventer_memory = (200000 + num_nodes * 500)|string + "Ki" -%}
piosz
Mar 14, 2016
Member
/s/200000/200*1024
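Applied here, the same substitution would make the 200 Mi base explicit (a sketch of the suggestion; 200 * 1024 Ki is exactly 200 MiB, versus the original 200000 Ki):

{% set eventer_memory = (200 * 1024 + num_nodes * 500)|string + "Ki" -%}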
GCE e2e build/test passed for commit 6123df9.
Automatic merge from submit-queue
Auto commit by PR queue bot
Merged commit f899a40 into kubernetes:master
PR description still says "do not merge yet".
Not good for release-note generation
And please ensure all PRs have a description or at least a reference to an issue.
Auto commit by PR queue bot (cherry picked from commit f899a40)
Commit be2ad9e found in the "release-1.2" branch appears to be this PR. Removing the "cherrypick-candidate" label. If this is an error, find help to get your PR picked.
@bgrant0607 Please merge it to 1.2.
@mwielgus this is already merged into the 1.2 branch.
Do not merge yet.