This repository has been archived by the owner on Feb 14, 2023. It is now read-only.

Set memory and cpu requests and limit values for all containers #65

Merged (2 commits, Sep 16, 2020)
18 changes: 18 additions & 0 deletions templates/api_server_deployment.yml
@@ -39,8 +39,10 @@ spec:
         imagePullPolicy: Always
         resources:
           requests:
+            cpu: 500m
             memory: 300Mi
           limits:
+            cpu: 1000m
             memory: 1.2Gi
Member commented:
I know you didn't change this, but @cloudfoundry/cf-capi, how did we choose this memory limit?

On BOSH the limit is ~4GB:
https://github.com/cloudfoundry/capi-release/blob/6f8899976561e64eab8c9804b1ce772083bbf68c/jobs/cloud_controller_ng/spec#L878-L886

In production envs I've seen a well-used CF API settle in at around 2.5-3GB (occasionally bursting higher for certain requests), so I think we should make it match the capi-release limits.
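
For concreteness, matching the linked capi-release default would look roughly like this in the container spec (a sketch only; the 4Gi figure follows the BOSH property linked above and is not a value this PR sets):

    resources:
      requests:
        cpu: 500m
        memory: 300Mi
      limits:
        cpu: 1000m
        memory: 4Gi  #! illustrative: roughly the capi-release default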

@cwlbraa (Contributor) commented on Aug 28, 2020:

It was probably a raw multiple of the initially-deployed memory usage. We set these to keep the API from getting OOM killed during tests, not based on any experiment designed to produce memory bloat or "realistic" workloads.
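
For anyone reproducing that OOM behavior: one quick way to confirm a container was OOM killed is to read its last terminated state; the pod and container names below are placeholders, not names from this repo:

    kubectl get pod <api-server-pod> \
      -o jsonpath='{.status.containerStatuses[?(@.name=="<container>")].lastState.terminated.reason}'

That prints OOMKilled when the kernel killed the container for exceeding its memory limit.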

Member commented:

That's good context, thanks @cwlbraa.

I wonder if we should just make these very forgiving for now (either meeting or exceeding the BOSH defaults). Their main use is to be a good neighbor and avoid starving other Pods of resources, but if we're targeting a "developer edition" 1.0, limits that are too strict might be more frustrating than helpful.
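
One way to keep them forgiving without baking in a guess would be to surface the limits as ytt data values so operators can override them later; a minimal sketch, assuming hypothetical keys that this repo does not define today:

    #@ load("@ytt:data", "data")
    #! hypothetical keys; defaults could meet or exceed the BOSH values
    resources:
      requests:
        cpu: 500m
        memory: 300Mi
      limits:
        cpu: #@ data.values.api_server.cpu_limit
        memory: #@ data.values.api_server.memory_limit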

         volumeMounts:
         - #@ template.replace(shared_config_volume_mounts())
@@ -67,8 +69,10 @@ spec:
           value: #@ ccng_secrets_mount_path
         resources:
           requests:
+            cpu: 100m
             memory: 300Mi
           limits:
+            cpu: 500m
             memory: 1.2Gi
         volumeMounts:
         - #@ template.replace(shared_config_volume_mounts())
@@ -86,6 +90,13 @@ spec:
           httpGet:
             port: 80
             path: "/healthz"
+        resources:
+          requests:
+            cpu: 100m
+            memory: 300Mi
+          limits:
+            cpu: 500m
+            memory: 1Gi
         volumeMounts:
         - name: nginx
           mountPath: /etc/nginx
@@ -101,6 +112,13 @@ spec:
         - containerPort: 9102
         image: #@ data.values.images.statsd_exporter
         imagePullPolicy: Always
+        resources:
+          requests:
+            cpu: 100m
+            memory: 300Mi
+          limits:
+            cpu: 500m
+            memory: 1Gi
       serviceAccountName: cf-api-server-service-account
       volumes:
       - #@ template.replace(shared_config_volumes())
2 changes: 2 additions & 0 deletions templates/clock_deployment.yml
@@ -37,8 +37,10 @@ spec:
           value: #@ ccng_secrets_mount_path
         resources:
           requests:
+            cpu: 300m
             memory: 300Mi
           limits:
+            cpu: 1000m
             memory: 1Gi
         readinessProbe:
           tcpSocket:
2 changes: 2 additions & 0 deletions templates/deployment_updater_deployment.yml
@@ -37,8 +37,10 @@ spec:
           value: #@ ccng_secrets_mount_path
         resources:
           requests:
+            cpu: 300m
             memory: 300Mi
           limits:
+            cpu: 1000m
             memory: 1Gi
         readinessProbe:
           tcpSocket:
2 changes: 2 additions & 0 deletions templates/worker_deployment.yml
@@ -33,8 +33,10 @@ spec:
           value: #@ ccng_secrets_mount_path
         resources:
           requests:
+            cpu: 300m
             memory: 300Mi
           limits:
+            cpu: 1000m
             memory: 1Gi
         readinessProbe:
           tcpSocket: