6 changes: 6 additions & 0 deletions CHANGELOG.md
@@ -4,6 +4,12 @@ All notable changes to this project will be documented in this file.
 
 ## [Unreleased]
 
+### Added
+
+- Default resource requests (memory and cpu) for ZooKeeper pods ([#563]).
+
+[#563]: https://github.com/stackabletech/zookeeper-operator/pull/563
+
 ## [0.11.0] - 2022-09-06
 
 ### Changed
39 changes: 13 additions & 26 deletions docs/modules/ROOT/pages/usage.adoc
@@ -178,30 +178,14 @@ In the above example, all ZooKeeper nodes in the default group will store data (
 
 By default, in case nothing is configured in the custom resource for a certain role group, each Pod will have a `1Gi` large local volume mount for the data location.
 
-=== Memory requests
+=== Resource Requests
 
-You can request a certain amount of memory for each individual role group as shown below:
+// The "nightly" version is needed because the "include" directive searches for
+// files in the "stable" version by default.
+// TODO: remove the "nightly" version after the next platform release (current: 22.09)
+include::nightly@home:concepts:stackable_resource_requests.adoc[]
 
-[source,yaml]
-----
-servers:
-  roleGroups:
-    default:
-      config:
-        resources:
-          memory:
-            limit: '2Gi'
-----
-
-In this example, each ZooKeeper container in the "default" group will have a maximum of 2 gigabytes of memory. To be more precise, these memory limits apply to the containers running the ZooKeeper daemons but not to any sidecar containers that are part of the pod.
-
-Setting this property will also automatically set the maximum Java heap size for the corresponding process to 80% of the available memory. Be aware that if the memory constraint is too low, the cluster might fail to start. If pods terminate with an 'OOMKilled' status and the cluster doesn't start, try increasing the memory limit.
-
-For more details regarding Kubernetes memory requests and limits see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/[Assign Memory Resources to Containers and Pods].
-
-=== CPU requests
-
-Similarly to memory resources, you can also configure CPU limits, as shown below:
+If no resource requests are configured explicitly, the ZooKeeper operator uses the following defaults:
 
 [source,yaml]
 ----
@@ -210,9 +194,12 @@ servers:
     default:
       config:
         resources:
+          memory:
+            limit: '512Mi'
           cpu:
-            max: '500m'
-            min: '250m'
+            max: '4'
+            min: '500m'
+          storage:
+            data:
+              capacity: '1Gi'
 ----
-
-For more details regarding Kubernetes CPU limits see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/[Assign CPU Resources to Containers and Pods].
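
For context, these defaults can still be overridden per role group using the same `resources` schema. A minimal sketch with illustrative values (the field names are taken from the diff above; the values themselves are arbitrary, not recommendations):

[source,yaml]
----
servers:
  roleGroups:
    default:
      config:
        resources:
          memory:
            limit: '2Gi' # per the old docs text, the Java heap is sized to 80% of this limit
          cpu:
            min: '1'
            max: '2'
          storage:
            data:
              capacity: '2Gi'
----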
6 changes: 3 additions & 3 deletions rust/crd/src/lib.rs
@@ -485,11 +485,11 @@ impl ZookeeperCluster {
     fn default_resources() -> Resources<Storage, NoRuntimeLimits> {
         Resources {
             cpu: CpuLimits {
-                min: None,
-                max: None,
+                min: Some(Quantity("500m".to_owned())),
+                max: Some(Quantity("4".to_owned())),
             },
             memory: MemoryLimits {
-                limit: None,
+                limit: Some(Quantity("512Mi".to_owned())),
                 runtime_limits: NoRuntimeLimits {},
             },
             storage: Storage {
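
Assuming the operator framework maps `cpu.min` to the container's CPU request, `cpu.max` to its CPU limit, and applies the memory limit as both request and limit (an assumption about the Stackable framework's behavior, not something shown in this diff), an unconfigured ZooKeeper container should end up with resources roughly like the following:

[source,yaml]
----
# Sketch of the expected container resources under the new defaults.
# Assumes min -> request and max/limit -> limit; not actual operator output.
resources:
  requests:
    cpu: 500m
    memory: 512Mi
  limits:
    cpu: '4'
    memory: 512Mi
----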