
Consider raising default memory limit for the operator #3025

Closed
david-kow opened this issue May 4, 2020 · 3 comments · Fixed by #3046
Labels
>enhancement Enhancement of existing functionality

Comments

david-kow (Contributor)
ECK memory usage scales not only with the number of resources it manages, but also with the overall number and size of resources in the K8s cluster, because we are not able to filter watched resources by label: the client caches include all Pods, Secrets and any other resources of the types that ECK watches. #2981 shows this causing the operator pod to be OOMKilled.

As the memory limit is fairly low right now (150Mi) and insignificant compared to the memory requirements of most ES clusters, we should consider raising it so that larger clusters work out of the box.
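For reference, the limit in question is set on the operator container in the installation manifest. A minimal sketch of the relevant stanza with today's default, assuming a typical operator StatefulSet layout (container name and surrounding fields are illustrative, not copied from the actual ECK manifest):

```yaml
# Illustrative sketch only: where the 150Mi default limit mentioned above is set.
# Field layout follows a typical operator StatefulSet; names are not taken
# from the actual ECK manifest.
spec:
  template:
    spec:
      containers:
        - name: manager
          resources:
            limits:
              memory: 150Mi
```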

david-kow added the >enhancement label on May 4, 2020
sebgl self-assigned this on May 12, 2020
sebgl (Contributor) commented on May 12, 2020

It's a bit arbitrary, but based on user feedback 512Mi seems to be a fair memory limit.

barkbay (Contributor) commented on May 12, 2020

> It's a bit arbitrary, but based on user feedback 512Mi seems to be a fair memory limit.

👍 Also, to be consistent, maybe increase the request? Half of the limit looks reasonable, I guess.

sebgl (Contributor) commented on May 12, 2020

> 👍 Also, to be consistent, maybe increase the request? Half of the limit looks reasonable, I guess.

We could do 150Mi for the memory requests?
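Putting the two suggestions together, the operator container's resources stanza would look roughly like this (a sketch of the values discussed in this thread only, not necessarily what lands in #3046):

```yaml
# Sketch of the values discussed here: raise the limit to 512Mi
# and set the memory request to 150Mi.
resources:
  requests:
    memory: 150Mi
  limits:
    memory: 512Mi
```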
