elastic-internal-init-keystore initcontainer does not specify resources #2660

Closed
primeroz opened this issue Mar 3, 2020 · 2 comments · Fixed by #3193
Labels
>enhancement Enhancement of existing functionality

Comments


primeroz commented Mar 3, 2020

Bug Report

What did you do?
I added a secureSettings secret to my Elasticsearch resource.

The elastic-internal-init-keystore init container was added to the StatefulSet, but with no resources specified.

In a cluster with resource quotas enabled (and no default LimitRange for CPU and memory), this caused the rolling restart of the StatefulSet to hang with:

pods "elastic-es-main-2" is forbidden: failed quota: default: must specify limits.cpu,limits.memory,requests.cpu,requests.memory

This works fine for the other init container, elastic-internal-init-filesystem, which does have CPU and memory resources set.

Environment

  • ECK version: 1.0.1

  • Kubernetes information:

    • Cloud: GKE 1.14.11-gke.17

Current Workaround
Use a LimitRange with default requests and limits for CPU and memory.
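
For reference, a minimal sketch of such a LimitRange built with the Kubernetes Go API types (the object name, namespace and default values below are illustrative assumptions, not values taken from this cluster):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A LimitRange that injects default requests/limits into any container
	// (init containers included) that does not declare its own resources,
	// which is enough to satisfy a namespace ResourceQuota.
	lr := corev1.LimitRange{
		ObjectMeta: metav1.ObjectMeta{Name: "default-container-resources", Namespace: "default"},
		Spec: corev1.LimitRangeSpec{
			Limits: []corev1.LimitRangeItem{{
				Type: corev1.LimitTypeContainer,
				// Applied as the container's limits when none are set.
				Default: corev1.ResourceList{
					corev1.ResourceCPU:    resource.MustParse("500m"),
					corev1.ResourceMemory: resource.MustParse("256Mi"),
				},
				// Applied as the container's requests when none are set.
				DefaultRequest: corev1.ResourceList{
					corev1.ResourceCPU:    resource.MustParse("100m"),
					corev1.ResourceMemory: resource.MustParse("128Mi"),
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(lr, "", "  ")
	fmt.Println(string(out))
}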


sebgl commented Mar 3, 2020

Sounds like a good idea. Let's do it!

sebgl added the >enhancement label on Mar 3, 2020

barkbay commented Jun 5, 2020

I investigated this one. We create init containers in a few different places:

Elasticsearch

From the bin/elasticsearch-cli script:

# use a small heap size for the CLI tools, and thus the serial collector to
# avoid stealing many CPU cycles; a user can override by setting ES_JAVA_OPTS
ES_JAVA_OPTS="-Xms4m -Xmx64m -XX:+UseSerialGC ${ES_JAVA_OPTS}"
> /usr/bin/time -f "mem=%K RSS=%M elapsed=%E cpu.sys=%S .user=%U" /usr/share/elasticsearch/bin/elasticsearch-keystore create
Created elasticsearch keystore in /usr/share/elasticsearch/config/elasticsearch.keystore
mem=0 RSS=116300 elapsed=0:04.33 cpu.sys=0.25 .user=1.86
> /usr/bin/time -f "mem=%K RSS=%M elapsed=%E cpu.sys=%S .user=%U" /usr/share/elasticsearch/bin/elasticsearch-keystore add-file foo /tmp/bar
mem=0 RSS=118648 elapsed=0:02.13 cpu.sys=0.24 .user=1.78

I think that 196MB of memory and 500m of CPU should be enough.
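
As a rough sketch only (not actual ECK code; the variable name and the requests == limits choice are assumptions on my side), the proposed values would translate to:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// Proposed resources for the elastic-internal-init-keystore init container,
// based on the measurements above: 196Mi of memory and 500m of CPU,
// with requests and limits set to the same values.
var keystoreInitContainerResources = corev1.ResourceRequirements{
	Requests: corev1.ResourceList{
		corev1.ResourceMemory: resource.MustParse("196Mi"),
		corev1.ResourceCPU:    resource.MustParse("500m"),
	},
	Limits: corev1.ResourceList{
		corev1.ResourceMemory: resource.MustParse("196Mi"),
		corev1.ResourceCPU:    resource.MustParse("500m"),
	},
}

func main() {
	fmt.Printf("memory=%s cpu=%s\n",
		keystoreInitContainerResources.Requests.Memory(),
		keystoreInitContainerResources.Requests.Cpu())
}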

Kibana

Node.js is called behind the scenes:

NODE="${DIR}/node/bin/node"
test -x "$NODE"
if [ ! -x "$NODE" ]; then
  echo "unable to find usable node.js executable."
  exit 1
fi

"${NODE}" "${DIR}/src/cli_keystore" "$@"
> /usr/bin/time -f "mem=%K RSS=%M elapsed=%E cpu.sys=%S .user=%U" /usr/share/kibana/bin/kibana-keystore create
Created Kibana keystore in /usr/share/kibana/data/kibana.keystore
mem=0 RSS=52864 elapsed=0:00.48 cpu.sys=0.05 .user=0.43
> /usr/bin/time -f "mem=%K RSS=%M elapsed=%E cpu.sys=%S .user=%U" /usr/share/kibana/bin/kibana-keystore add "logging.verbose" --stdin < /tmp/true
mem=0 RSS=52040 elapsed=0:00.48 cpu.sys=0.05 .user=0.43

APM Server

Keystore entries are loaded with the Go binary /usr/share/apm-server/apm-server:

> /usr/bin/time -f "mem=%K RSS=%M elapsed=%E cpu.sys=%S .user=%U" /usr/share/apm-server/apm-server keystore create --force
Created apm-server keystore
mem=0 RSS=49608 elapsed=0:00.07 cpu.sys=0.01 .user=0.07
> /usr/bin/time -f "mem=%K RSS=%M elapsed=%E cpu.sys=%S .user=%U" /usr/share/apm-server/apm-server keystore add test --stdin < /tmp/foo
Successfully updated the keystore
mem=0 RSS=49144 elapsed=0:00.07 cpu.sys=0.02 .user=0.06

For both Kibana and APM Server, the invocations seem to be lightweight and use ~50 MB of memory. I think we could set the memory requirement to 128MB and the CPU to 100m.
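
Again only as a sketch, assuming the defaults get attached directly to the keystore init container spec (the image is a placeholder and the exact wiring in the operator may differ):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Proposed defaults for the Kibana and APM Server keystore init containers,
	// based on the ~50MB measurements above: 128Mi of memory and 100m of CPU.
	defaults := corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceMemory: resource.MustParse("128Mi"),
			corev1.ResourceCPU:    resource.MustParse("100m"),
		},
		Limits: corev1.ResourceList{
			corev1.ResourceMemory: resource.MustParse("128Mi"),
			corev1.ResourceCPU:    resource.MustParse("100m"),
		},
	}

	initContainer := corev1.Container{
		Name:      "elastic-internal-init-keystore", // name from this issue, for illustration
		Image:     "placeholder",
		Resources: defaults,
	}
	fmt.Printf("%s: cpu=%s memory=%s\n",
		initContainer.Name,
		initContainer.Resources.Requests.Cpu(),
		initContainer.Resources.Requests.Memory())
}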
