helm: Remove cpu limits from all pods #13722

Merged (1 commit, Feb 8, 2024)
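Every hunk in this PR applies the same pattern: the `cpu` key is dropped from `limits` while the `cpu` request is kept, so pods still get guaranteed CPU for scheduling but are no longer throttled by a CFS quota cap. An illustrative sketch of the before/after shape (example values, not the defaults of any one chart):

```yaml
resources:
  limits:
    # cpu: "500m"     <- removed by this PR; no more hard CPU cap
    memory: "1Gi"      # memory limit is retained
  requests:
    cpu: "500m"        # cpu request is retained for scheduling
    memory: "512Mi"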
1 change: 0 additions & 1 deletion Documentation/CRDs/Cluster/ceph-cluster-crd.md
@@ -667,7 +667,6 @@ spec:
   - name: "172.17.4.201"
     resources:
       limits:
-        cpu: "2"
         memory: "4096Mi"
       requests:
         cpu: "2"
1 change: 0 additions & 1 deletion Documentation/CRDs/Cluster/pvc-cluster.md
@@ -85,7 +85,6 @@ spec:
     portable: false
     resources:
       limits:
-        cpu: "500m"
         memory: "4Gi"
       requests:
         cpu: "500m"
1 change: 0 additions & 1 deletion Documentation/CRDs/Object-Storage/ceph-object-store-crd.md
@@ -58,7 +58,6 @@ spec:
   # topologySpreadConstraints:
   resources:
   # limits:
-  #   cpu: "500m"
   #   memory: "1024Mi"
   # requests:
   #   cpu: "500m"
@@ -55,7 +55,6 @@ spec:
   # topologySpreadConstraints:
   resources:
   # limits:
-  #   cpu: "500m"
   #   memory: "1024Mi"
   # requests:
   #   cpu: "500m"
1 change: 0 additions & 1 deletion Documentation/CRDs/ceph-nfs-crd.md
@@ -42,7 +42,6 @@ spec:
 
   resources:
     limits:
-      cpu: "3"
       memory: "8Gi"
     requests:
       cpu: "3"
2 changes: 1 addition & 1 deletion Documentation/Helm-Charts/ceph-cluster-chart.md
@@ -84,7 +84,7 @@ The following table lists the configurable parameters of the rook-operator chart
 | `toolbox.enabled` | Enable Ceph debugging pod deployment. See [toolbox](../Troubleshooting/ceph-toolbox.md) | `false` |
 | `toolbox.image` | Toolbox image, defaults to the image used by the Ceph cluster | `nil` |
 | `toolbox.priorityClassName` | Set the priority class for the toolbox if desired | `nil` |
-| `toolbox.resources` | Toolbox resources | `{"limits":{"cpu":"500m","memory":"1Gi"},"requests":{"cpu":"100m","memory":"128Mi"}}` |
+| `toolbox.resources` | Toolbox resources | `{"limits":{"memory":"1Gi"},"requests":{"cpu":"100m","memory":"128Mi"}}` |
 | `toolbox.tolerations` | Toolbox tolerations | `[]` |
 
 ### **Ceph Cluster Spec**
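In `values.yaml` form, the new toolbox default from the table above would look roughly like this (a sketch reconstructed from the table's JSON, not copied from the chart):

```yaml
toolbox:
  enabled: true        # assumption: toolbox enabled for illustration
  resources:
    limits:
      memory: "1Gi"    # memory limit kept; cpu limit removed
    requests:
      cpu: "100m"
      memory: "128Mi"
```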
2 changes: 1 addition & 1 deletion Documentation/Helm-Charts/operator-chart.md
@@ -151,7 +151,7 @@ The following table lists the configurable parameters of the rook-operator chart
 | `pspEnable` | If true, create & use PSP resources | `false` |
 | `rbacAggregate.enableOBCs` | If true, create a ClusterRole aggregated to [user facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) for objectbucketclaims | `false` |
 | `rbacEnable` | If true, create & use RBAC resources | `true` |
-| `resources` | Pod resource requests & limits | `{"limits":{"cpu":"1500m","memory":"512Mi"},"requests":{"cpu":"200m","memory":"128Mi"}}` |
+| `resources` | Pod resource requests & limits | `{"limits":{"memory":"512Mi"},"requests":{"cpu":"200m","memory":"128Mi"}}` |
 | `scaleDownOperator` | If true, scale down the rook operator. This is useful for administrative actions where the rook operator must be scaled down, while using gitops style tooling to deploy your helm charts. | `false` |
 | `tolerations` | List of Kubernetes [`tolerations`](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) to add to the Deployment. | `[]` |
 | `unreachableNodeTolerationSeconds` | Delay to use for the `node.kubernetes.io/unreachable` pod failure toleration to override the Kubernetes default of 5 minutes | `5` |
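Operators that must still enforce CPU limits (for example under a namespace `LimitRange` policy) can restore one through a values override; a minimal sketch, assuming the `resources` key documented in the table above:

```yaml
# my-values.yaml -- apply with e.g. `helm upgrade ... -f my-values.yaml`
resources:
  limits:
    cpu: "1500m"       # re-adds the cap that this PR removes from the default
    memory: 512Mi
  requests:
    cpu: 200m
    memory: 128Mi
```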
@@ -168,7 +168,6 @@ spec:
         imagePullPolicy: Always
         resources:
           limits:
-            cpu: 100m
             memory: 100Mi
         env:
           # Configuration reference: https://docs.docker.com/registry/configuration/
11 changes: 0 additions & 11 deletions deploy/charts/rook-ceph-cluster/values.yaml
@@ -40,7 +40,6 @@ toolbox:
   # -- Toolbox resources
   resources:
     limits:
-      cpu: "500m"
       memory: "1Gi"
     requests:
       cpu: "100m"
@@ -281,21 +280,18 @@ cephClusterSpec:
   resources:
     mgr:
       limits:
-        cpu: "1000m"
         memory: "1Gi"
       requests:
         cpu: "500m"
         memory: "512Mi"
     mon:
       limits:
-        cpu: "2000m"
         memory: "2Gi"
       requests:
         cpu: "1000m"
         memory: "1Gi"
     osd:
       limits:
-        cpu: "2000m"
         memory: "4Gi"
       requests:
         cpu: "1000m"
@@ -314,35 +310,30 @@ cephClusterSpec:
         memory: "50Mi"
     mgr-sidecar:
       limits:
-        cpu: "500m"
         memory: "100Mi"
       requests:
         cpu: "100m"
         memory: "40Mi"
     crashcollector:
       limits:
-        cpu: "500m"
         memory: "60Mi"
       requests:
         cpu: "100m"
         memory: "60Mi"
     logcollector:
       limits:
-        cpu: "500m"
         memory: "1Gi"
       requests:
         cpu: "100m"
         memory: "100Mi"
     cleanup:
       limits:
-        cpu: "500m"
         memory: "1Gi"
       requests:
         cpu: "500m"
         memory: "100Mi"
     exporter:
       limits:
-        cpu: "250m"
         memory: "128Mi"
       requests:
         cpu: "50m"
@@ -522,7 +513,6 @@ cephFileSystems:
     activeStandby: true
     resources:
       limits:
-        cpu: "2000m"
         memory: "4Gi"
       requests:
         cpu: "1000m"
@@ -596,7 +586,6 @@ cephObjectStores:
       port: 80
     resources:
       limits:
-        cpu: "2000m"
         memory: "2Gi"
       requests:
         cpu: "1000m"
27 changes: 0 additions & 27 deletions deploy/charts/rook-ceph/values.yaml
@@ -23,7 +23,6 @@ crds:
 # -- Pod resource requests & limits
 resources:
   limits:
-    cpu: 1500m
     memory: 512Mi
   requests:
     cpu: 200m
@@ -218,55 +217,47 @@ csi:
           cpu: 100m
         limits:
           memory: 256Mi
-          cpu: 200m
     - name : csi-resizer
       resource:
         requests:
           memory: 128Mi
           cpu: 100m
         limits:
           memory: 256Mi
-          cpu: 200m
     - name : csi-attacher
       resource:
         requests:
           memory: 128Mi
           cpu: 100m
         limits:
           memory: 256Mi
-          cpu: 200m
     - name : csi-snapshotter
       resource:
         requests:
           memory: 128Mi
           cpu: 100m
         limits:
           memory: 256Mi
-          cpu: 200m
     - name : csi-rbdplugin
       resource:
         requests:
           memory: 512Mi
           cpu: 250m
         limits:
           memory: 1Gi
-          cpu: 500m
     - name : csi-omap-generator
       resource:
         requests:
           memory: 512Mi
           cpu: 250m
         limits:
           memory: 1Gi
-          cpu: 500m
     - name : liveness-prometheus
       resource:
         requests:
           memory: 128Mi
           cpu: 50m
         limits:
           memory: 256Mi
-          cpu: 100m
 
   # -- CEPH CSI RBD plugin resource requirement list
   # @default -- see values.yaml
@@ -278,23 +269,20 @@ csi:
           cpu: 50m
         limits:
           memory: 256Mi
-          cpu: 100m
     - name : csi-rbdplugin
       resource:
         requests:
           memory: 512Mi
           cpu: 250m
         limits:
           memory: 1Gi
-          cpu: 500m
     - name : liveness-prometheus
       resource:
         requests:
           memory: 128Mi
           cpu: 50m
         limits:
           memory: 256Mi
-          cpu: 100m
 
   # -- CEPH CSI CephFS provisioner resource requirement list
   # @default -- see values.yaml
@@ -306,47 +294,41 @@ csi:
           cpu: 100m
         limits:
           memory: 256Mi
-          cpu: 200m
     - name : csi-resizer
       resource:
         requests:
           memory: 128Mi
           cpu: 100m
         limits:
           memory: 256Mi
-          cpu: 200m
     - name : csi-attacher
       resource:
         requests:
           memory: 128Mi
           cpu: 100m
         limits:
           memory: 256Mi
-          cpu: 200m
     - name : csi-snapshotter
       resource:
         requests:
           memory: 128Mi
           cpu: 100m
         limits:
           memory: 256Mi
-          cpu: 200m
     - name : csi-cephfsplugin
       resource:
         requests:
           memory: 512Mi
           cpu: 250m
         limits:
           memory: 1Gi
-          cpu: 500m
     - name : liveness-prometheus
       resource:
         requests:
           memory: 128Mi
           cpu: 50m
         limits:
           memory: 256Mi
-          cpu: 100m
 
   # -- CEPH CSI CephFS plugin resource requirement list
   # @default -- see values.yaml
@@ -358,23 +340,20 @@ csi:
           cpu: 50m
         limits:
           memory: 256Mi
-          cpu: 100m
     - name : csi-cephfsplugin
       resource:
         requests:
           memory: 512Mi
           cpu: 250m
         limits:
           memory: 1Gi
-          cpu: 500m
     - name : liveness-prometheus
       resource:
         requests:
           memory: 128Mi
           cpu: 50m
         limits:
           memory: 256Mi
-          cpu: 100m
 
   # -- CEPH CSI NFS provisioner resource requirement list
   # @default -- see values.yaml
@@ -386,23 +365,20 @@ csi:
           cpu: 100m
         limits:
           memory: 256Mi
-          cpu: 200m
     - name : csi-nfsplugin
       resource:
         requests:
           memory: 512Mi
           cpu: 250m
         limits:
           memory: 1Gi
-          cpu: 500m
     - name : csi-attacher
       resource:
         requests:
           memory: 512Mi
           cpu: 250m
         limits:
           memory: 1Gi
-          cpu: 500m
 
   # -- CEPH CSI NFS plugin resource requirement list
   # @default -- see values.yaml
@@ -414,15 +390,13 @@ csi:
           cpu: 50m
         limits:
           memory: 256Mi
-          cpu: 100m
     - name : csi-nfsplugin
       resource:
         requests:
           memory: 512Mi
           cpu: 250m
         limits:
           memory: 1Gi
-          cpu: 500m
 
   # Set provisionerTolerations and provisionerNodeAffinity for provisioner pod.
   # The CSI provisioner would be best to start on the same nodes as other ceph daemons.
@@ -629,7 +603,6 @@ discover:
   # -- Add resources to discover daemon pods
   resources:
   # - limits:
-  #     cpu: 500m
   #     memory: 512Mi
   # - requests:
   #     cpu: 100m
6 changes: 1 addition & 5 deletions deploy/examples/cluster-on-local-pvc.yaml
@@ -226,7 +226,6 @@ spec:
     resources:
       # These are the OSD daemon limits. For OSD prepare limits, see the separate section below for "prepareosd" resources
       # limits:
-      #   cpu: "500m"
       #   memory: "4Gi"
       # requests:
       #   cpu: "500m"
@@ -250,10 +249,7 @@ spec:
   onlyApplyOSDPlacement: false
   resources:
     # prepareosd:
-    #   limits:
-    #     cpu: "200m"
-    #     memory: "200Mi"
-    #   requests:
+    #   requests:
     #     cpu: "200m"
     #     memory: "200Mi"
   priorityClassNames:
6 changes: 1 addition & 5 deletions deploy/examples/cluster-on-pvc.yaml
@@ -115,7 +115,6 @@ spec:
     resources:
       # These are the OSD daemon limits. For OSD prepare limits, see the separate section below for "prepareosd" resources
      # limits:
-      #   cpu: "500m"
       #   memory: "4Gi"
       # requests:
       #   cpu: "500m"
@@ -167,10 +166,7 @@ spec:
   onlyApplyOSDPlacement: false
   resources:
     # prepareosd:
-    #   limits:
-    #     cpu: "200m"
-    #     memory: "200Mi"
-    #   requests:
+    #   requests:
     #     cpu: "200m"
     #     memory: "200Mi"
   priorityClassNames: