Replace kamaji-etcd with aenix-io/etcd-operator #95
Conversation
| etcdOperator.readinessProbe.periodSeconds | int | `10` | |
| etcdOperator.resources.limits.cpu | string | `"500m"` | |
| etcdOperator.resources.limits.memory | string | `"128Mi"` | |
| etcdOperator.resources.requests.cpu | string | `"100m"` | |
Why is the CPU request 100m? That's quite a lot; I think you should start with 20-40m, though at minimum you need to know how much etcd actually consumes.
| etcdOperator.readinessProbe.initialDelaySeconds | int | `5` | |
| etcdOperator.readinessProbe.periodSeconds | int | `10` | |
| etcdOperator.resources.limits.cpu | string | `"500m"` | |
| etcdOperator.resources.limits.memory | string | `"128Mi"` | |
The memory limit is very small: an average k8s cluster with 100 pods will push etcd to use up to 300MB per instance. I think it should be increased to 384-512MB.
| kubeRbacProxy.readinessProbe | object | `{}` | |
| kubeRbacProxy.resources.limits.cpu | string | `"500m"` | |
| kubeRbacProxy.resources.limits.memory | string | `"128Mi"` | |
| kubeRbacProxy.resources.requests.cpu | string | `"100m"` | |
This is quite a lot; I think you should start with 20-40m.
| kubeRbacProxy.livenessProbe | object | `{}` | |
| kubeRbacProxy.readinessProbe | object | `{}` | |
| kubeRbacProxy.resources.limits.cpu | string | `"500m"` | |
| kubeRbacProxy.resources.limits.memory | string | `"128Mi"` | |
It's hard to say here, but the limit is worth reviewing; you really don't want to hit an OOM kill.
@themoriarti, these are the defaults provided by the etcd-operator maintainers. We can consider other values, but I think it would be better to address this in the upstream project.
We are waiting for a response from the etcd-operator maintainers in aenix-io/etcd-operator#199; if they make changes there, it will simplify the integration.
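In the meantime, the chart defaults can be overridden per installation without waiting for upstream. A minimal sketch, assuming a standard Helm install of this chart; the value paths come from the tables quoted above, while the file name and the concrete numbers (the reviewer's suggestions) are only examples:

```yaml
# values-override.yaml -- hypothetical override file with the numbers
# suggested in this review; value paths match the tables above.
etcdOperator:
  resources:
    requests:
      cpu: 40m         # reviewer suggestion: 20-40m instead of 100m
    limits:
      cpu: 500m
      memory: 512Mi    # reviewer suggestion: 384-512MB instead of 128Mi
kubeRbacProxy:
  resources:
    requests:
      cpu: 40m         # same reasoning as for the operator container
    limits:
      cpu: 500m
      memory: 128Mi    # revisit if the proxy ever gets OOM-killed
```

Applied with something like `helm upgrade --install <release> <chart> -f values-override.yaml`, this changes only the local deployment and leaves the chart defaults untouched.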
resources:
  limits:
    cpu: 500m
    memory: 128Mi
This needs more memory, up to 512MB.
    cpu: 500m
    memory: 128Mi
  requests:
    cpu: 100m
For startup, etcd doesn't need more than a 40m CPU reservation.
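Both threads come down to the same point: the right numbers depend on what etcd actually consumes, so it is worth measuring before locking anything in. A minimal sketch, assuming metrics-server is running in the cluster; the namespace and label selector are placeholders, not values from this chart:

```shell
# Point-in-time CPU and memory usage of the etcd pods (requires metrics-server).
kubectl top pods -n <etcd-namespace> -l <etcd-pod-selector>
```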
@kvaps Can you add default values for requests and resource limits when creating an etcd cluster?
At this point we have gone through all the discussions and decided not to set any additional limits; after the release we will monitor usage and come back to the defaults in a follow-up discussion. Fly, fly, our good etcd.
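For anyone who does want explicit values, resources can still be set per cluster rather than in the operator. A sketch, assuming the v1alpha1 EtcdCluster CRD accepts a pod-template style override with a container named `etcd`; the field layout is an assumption based on the operator's API conventions, not taken from this PR, so verify it against the CRD shipped with your operator version:

```yaml
apiVersion: etcd.aenix.io/v1alpha1
kind: EtcdCluster
metadata:
  name: example              # hypothetical cluster name
spec:
  replicas: 3
  podTemplate:               # assumed field; verify against your CRD version
    spec:
      containers:
        - name: etcd
          resources:
            requests:
              cpu: 40m       # startup needs, per the comment above
              memory: 256Mi
            limits:
              memory: 512Mi  # headroom, per the earlier sizing comment
```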
This change replaces kamaji-etcd with our own etcd-operator, which provides a robust environment for running etcd clusters.
The previous kamaji-etcd has not been removed yet, so it will continue to operate for compatibility purposes.
fixes: #33