From 050a5c833621b0c439a6fd71e771eb8d3b1c0498 Mon Sep 17 00:00:00 2001
From: Varalakshmi Kumar <102720382+vara2504@users.noreply.github.com>
Date: Wed, 10 Apr 2024 05:45:13 -0700
Subject: [PATCH] OSS configure resource request changes (#1409)

---
 .../configure-resources.mdx | 664 ++++++++++++++++++
 .../configure-resources.mdx | 215 +++---
 calico/reference/configure-resources.mdx | 262 +++++++
 sidebars-calico-cloud.js | 1 +
 sidebars-calico.js | 1 +
 5 files changed, 1033 insertions(+), 110 deletions(-)
 create mode 100644 calico-cloud/reference/component-resources/configure-resources.mdx
 create mode 100644 calico/reference/configure-resources.mdx

diff --git a/calico-cloud/reference/component-resources/configure-resources.mdx b/calico-cloud/reference/component-resources/configure-resources.mdx
new file mode 100644
index 0000000000..cc9f490c03
--- /dev/null
+++ b/calico-cloud/reference/component-resources/configure-resources.mdx
@@ -0,0 +1,664 @@
+---
+description: Configure resource requests and limits.
+---
+
+# Configure resource requests and limits
+
+## Big picture
+
+Resource requests and limits are essential configurations for managing resource allocation and ensuring optimal performance of Kubernetes workloads. In {{prodname}}, these configurations can be customized using custom resources to meet specific requirements and optimize resource utilization.
+
+:::note
+The CPU and memory values used in the examples are for demonstration purposes only and should be adjusted based on individual system requirements. To find the list of all applicable containers for a component, refer to its specification.
+:::
+
+## APIServer custom resource
+
+The [APIServer](../../reference/installation/api.mdx#operator.tigera.io/v1.APIServer) CR provides a way to configure resources for the APIServerDeployment. The following sections provide example configurations for this CR.
+
+### APIServerDeployment
+
+To configure the resource specification for the [APIServerDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.APIServerDeployment), patch the APIServer CR using the following command:
+
+```bash
+$ kubectl patch apiserver tigera-secure --type=merge --patch='{"spec": {"apiServerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"calico-apiserver","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}},{"name":"tigera-queryserver","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}'
+```
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 Mebibytes (MiB), while the CPU limit is set to 1 CPU and the memory limit to 1000 Mebibytes (MiB).
+
+#### Verification
+
+You can verify the configured resources using the following command:
+
+```bash
+$ kubectl get deployment.apps/tigera-apiserver -n tigera-system -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
+```
+
+This command will output the configured resource requests and limits for the Calico APIServerDeployment component in JSON format.
+
+```bash
+{
+  "name": "calico-apiserver",
+  "resources": {
+    "limits": {
+      "cpu": "1",
+      "memory": "1000Mi"
+    },
+    "requests": {
+      "cpu": "100m",
+      "memory": "100Mi"
+    }
+  }
+}
+{
+  "name": "tigera-queryserver",
+  "resources": {
+    "limits": {
+      "cpu": "1",
+      "memory": "1000Mi"
+    },
+    "requests": {
+      "cpu": "100m",
+      "memory": "100Mi"
+    }
+  }
+}
+```
+
+## ApplicationLayer custom resource
+
+The [ApplicationLayer](../../reference/installation/api.mdx#operator.tigera.io/v1.ApplicationLayer) CR provides a way to configure resources for the L7LogCollectorDaemonSet. The following sections provide example configurations for this CR.
+
+Example Configurations:
+
+### L7LogCollectorDaemonSet
+
+To configure the resource specification for the [L7LogCollectorDaemonSet](../../reference/installation/api.mdx#operator.tigera.io/v1.L7LogCollectorDaemonSet), patch the ApplicationLayer CR using the following command:
+
+```bash
+$ kubectl patch applicationlayer tigera-secure --type=merge --patch='{"spec": {"l7LogCollectorDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"l7-collector","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}},{"name":"envoy-proxy","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}'
+```
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 Mebibytes (MiB), while the CPU limit is set to 1 CPU and the memory limit to 1000 Mebibytes (MiB).
+
+#### Verification
+
+You can verify the configured resources using the following command:
+
+```bash
+$ kubectl get daemonset.apps/l7-log-collector -n calico-system -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
+```
+
+This command will output the configured resource requests and limits for the Calico L7LogCollectorDaemonSet component in JSON format.
+
+```bash
+{
+  "name": "envoy-proxy",
+  "resources": {
+    "limits": {
+      "cpu": "1",
+      "memory": "1000Mi"
+    },
+    "requests": {
+      "cpu": "100m",
+      "memory": "100Mi"
+    }
+  }
+}
+{
+  "name": "l7-collector",
+  "resources": {
+    "limits": {
+      "cpu": "1",
+      "memory": "1000Mi"
+    },
+    "requests": {
+      "cpu": "100m",
+      "memory": "100Mi"
+    }
+  }
+}
+```
+
+## Compliance custom resource
+
+The [Compliance](../../reference/installation/api.mdx#operator.tigera.io/v1.Compliance) CR provides a way to configure resources for the ComplianceControllerDeployment, ComplianceSnapshotterDeployment, ComplianceBenchmarkerDaemonSet, ComplianceServerDeployment, and ComplianceReporterPodTemplate. The following sections provide example configurations for this CR.
+
+Example Configurations:
+
+### ComplianceControllerDeployment
+
+To configure the resource specification for the [ComplianceControllerDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.ComplianceControllerDeployment), patch the Compliance CR using the following command:
+
+```bash
+kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceControllerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"compliance-controller","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}'
+```
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 Mebibytes (MiB), while the CPU limit is set to 1 CPU and the memory limit to 1000 Mebibytes (MiB).
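+
+For long patches like this one, you can also keep the JSON in a file and pass it to `kubectl patch` with `--patch-file` (available in recent kubectl versions). This is a minimal sketch using the same values as above; the file name is illustrative:
+
+```bash
+# Same merge patch as above, read from a file instead of an inline string.
+cat > compliance-controller-resources.json <<'EOF'
+{"spec": {"complianceControllerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"compliance-controller","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}
+EOF
+kubectl patch compliance tigera-secure --type=merge --patch-file=compliance-controller-resources.json
+```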
+
+#### Verification
+
+You can verify the configured resources using the following command:
+
+```bash
+kubectl get deployment.apps/compliance-controller -n tigera-compliance -o json|jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
+```
+
+This command will output the configured resource requests and limits for the ComplianceControllerDeployment component in JSON format.
+
+```bash
+{
+  "name": "compliance-controller",
+  "resources": {
+    "limits": {
+      "cpu": "1",
+      "memory": "1000Mi"
+    },
+    "requests": {
+      "cpu": "100m",
+      "memory": "100Mi"
+    }
+  }
+}
+```
+
+### ComplianceSnapshotterDeployment
+
+To configure the resource specification for the [ComplianceSnapshotterDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.ComplianceSnapshotterDeployment), patch the Compliance CR using the following command:
+
+```bash
+kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceSnapshotterDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"compliance-snapshotter","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}'
+```
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 Mebibytes (MiB), while the CPU limit is set to 1 CPU and the memory limit to 1000 Mebibytes (MiB).
+
+#### Verification
+
+You can verify the configured resources using the following command:
+
+```bash
+kubectl get deployment.apps/compliance-snapshotter -n tigera-compliance -o json|jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
+```
+
+This command will output the configured resource requests and limits for the ComplianceSnapshotterDeployment in JSON format.
+
+```bash
+{
+  "name": "compliance-snapshotter",
+  "resources": {
+    "limits": {
+      "cpu": "1",
+      "memory": "1000Mi"
+    },
+    "requests": {
+      "cpu": "100m",
+      "memory": "100Mi"
+    }
+  }
+}
+```
+
+### ComplianceBenchmarkerDaemonSet
+
+To configure the resource specification for the [ComplianceBenchmarkerDaemonSet](../../reference/installation/api.mdx#operator.tigera.io/v1.ComplianceBenchmarkerDaemonSet), patch the Compliance CR using the following command:
+
+```bash
+kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceBenchmarkerDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"compliance-benchmarker","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}'
+```
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 Mebibytes (MiB), while the CPU limit is set to 1 CPU and the memory limit to 1000 Mebibytes (MiB).
+
+#### Verification
+
+You can verify the configured resources using the following command:
+
+```bash
+kubectl get daemonset.apps/compliance-benchmarker -n tigera-compliance -o json |jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
+```
+
+This command will output the configured resource requests and limits for the ComplianceBenchmarkerDaemonSet in JSON format.
+
+```bash
+{
+  "name": "compliance-benchmarker",
+  "resources": {
+    "limits": {
+      "cpu": "1",
+      "memory": "1000Mi"
+    },
+    "requests": {
+      "cpu": "100m",
+      "memory": "100Mi"
+    }
+  }
+}
+```
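+
+If you later need to return a component to the operator's default resources, a JSON merge patch that sets the override to `null` removes it. A minimal sketch for the benchmarker override shown above (verify the behavior on your own cluster before relying on it):
+
+```bash
+# Remove the resource override so the operator falls back to its defaults.
+kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceBenchmarkerDaemonSet": null}}'
+```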
+
+### ComplianceServerDeployment
+
+To configure the resource specification for the [ComplianceServerDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.ComplianceServerDeployment), patch the Compliance CR using the following command:
+
+```bash
+kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceServerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"compliance-server","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}'
+```
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 Mebibytes (MiB), while the CPU limit is set to 1 CPU and the memory limit to 1000 Mebibytes (MiB).
+
+#### Verification
+
+You can verify the configured resources using the following command:
+
+```bash
+kubectl get deployment.apps/compliance-server -n tigera-compliance -o json| jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
+```
+
+This command will output the configured resource requests and limits for the ComplianceServerDeployment in JSON format.
+
+```bash
+{
+  "name": "compliance-server",
+  "resources": {
+    "limits": {
+      "cpu": "1",
+      "memory": "1000Mi"
+    },
+    "requests": {
+      "cpu": "100m",
+      "memory": "100Mi"
+    }
+  }
+}
+```
+
+### ComplianceReporterPodTemplate
+
+To configure the resource specification for the [ComplianceReporterPodTemplate](../../reference/installation/api.mdx#operator.tigera.io/v1.ComplianceReporterPodTemplate), patch the Compliance CR using the following command:
+
+```bash
+kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceReporterPodTemplate": {"template": {"spec": {"containers":[{"name":"reporter","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}'
+```
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 Mebibytes (MiB), while the CPU limit is set to 1 CPU and the memory limit to 1000 Mebibytes (MiB).
+
+#### Verification
+
+You can verify the configured resources using the following command:
+
+```bash
+kubectl get podtemplates tigera.io.report -n tigera-compliance -o json | jq '.template.spec.containers[] | {name: .name, resources: .resources}'
+```
+
+This command will output the configured resource requests and limits for the ComplianceReporterPodTemplate component in JSON format.
+
+```bash
+{
+  "name": "reporter",
+  "resources": {
+    "limits": {
+      "cpu": "1",
+      "memory": "1000Mi"
+    },
+    "requests": {
+      "cpu": "100m",
+      "memory": "100Mi"
+    }
+  }
+}
+```
+
+## Installation custom resource
+
+The [Installation CR](../../reference/installation/api.mdx) provides a way to configure resources for various {{prodname}} components, including the TyphaDeployment, CalicoNodeDaemonSet, CalicoNodeWindowsDaemonSet, CSINodeDriverDaemonSet, and CalicoKubeControllersDeployment. The following sections provide example configurations for this CR.
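+
+Because these overrides are stored in the Installation resource itself, you can review what is currently set before or after patching. A quick sketch (the jsonpath fields shown are just examples):
+
+```bash
+# Print any overrides currently present for the node DaemonSet and Typha Deployment.
+kubectl get installations default -o jsonpath='{.spec.calicoNodeDaemonSet}{"\n"}{.spec.typhaDeployment}{"\n"}'
+```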
+
+Example Configurations:
+
+### TyphaDeployment
+
+To configure the resource specification for the [TyphaDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.TyphaDeployment), patch the installation CR using the following command:
+
+```bash
+kubectl patch installations default --type=merge --patch='{"spec": {"typhaDeployment": {"spec": {"template": {"spec": {"containers": [{"name": "calico-typha", "resources": {"requests": {"cpu": "100m", "memory": "100Mi"}, "limits": {"cpu": "1", "memory": "1000Mi"}}}]}}}}}}'
+```
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 Mebibytes (MiB), while the CPU limit is set to 1 CPU and the memory limit to 1000 Mebibytes (MiB).
+
+#### Verification
+
+You can verify the configured resources using the following command:
+
+```bash
+$ kubectl get deployment.apps/calico-typha -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}'
+```
+
+This command will output the configured resource requests and limits for the Calico TyphaDeployment component in JSON format.
+
+```bash
+{
+  "name": "calico-typha",
+  "resources": {
+    "limits": {
+      "cpu": "1",
+      "memory": "1000Mi"
+    },
+    "requests": {
+      "cpu": "100m",
+      "memory": "100Mi"
+    }
+  }
+}
+```
+
+### CalicoNodeDaemonSet
+
+To configure resource requests for the [calicoNodeDaemonSet](../../reference/installation/api.mdx#operator.tigera.io/v1.calicoNodeDaemonSet) component, patch the installation CR using the following command:
+
+```bash
+$ kubectl patch installations default --type=merge --patch='{"spec": {"calicoNodeDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"calico-node","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}'
+```
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 Mebibytes (MiB), while the CPU limit is set to 1 CPU and the memory limit to 1000 Mebibytes (MiB).
+
+#### Verification
+
+You can verify the configured resources using the following command:
+
+```bash
+$ kubectl get daemonset.apps/calico-node -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}'
+```
+
+This command will output the configured resource requests and limits for the Calico calicoNodeDaemonSet component in JSON format.
+
+```bash
+{
+  "name": "calico-node",
+  "resources": {
+    "limits": {
+      "cpu": "1",
+      "memory": "1000Mi"
+    },
+    "requests": {
+      "cpu": "100m",
+      "memory": "100Mi"
+    }
+  }
+}
+```
+
+### CalicoNodeWindowsDaemonSet
+
+To configure resource requests for the [calicoNodeWindowsDaemonSet](../../reference/installation/api.mdx#operator.tigera.io/v1.calicoNodeWindowsDaemonSet) component, patch the installation CR using the following command:
+
+```bash
+$ kubectl patch installations default --type=merge --patch='{"spec": {"calicoNodeWindowsDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"calico-node-windows","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}'
+```
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 Mebibytes (MiB), while the CPU limit is set to 1 CPU and the memory limit to 1000 Mebibytes (MiB).
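+
+The Windows DaemonSet is only created on clusters that have Windows nodes, so this patch has no visible effect otherwise. If in doubt, check for the DaemonSet first (the name below is the one the operator typically creates and should be verified on your cluster):
+
+```bash
+# Confirm the Windows DaemonSet exists before expecting the override to appear.
+kubectl get daemonset calico-node-windows -n calico-system
+```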
+
+#### Verification
+
+You can verify the configured resources using the following command:
+
+```bash
+$ kubectl get daemonset.apps/calico-node-windows -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}'
+```
+
+This command will output the configured resource requests and limits for the Calico calicoNodeWindowsDaemonSet component in JSON format.
+
+```bash
+{
+  "name": "calico-node-windows",
+  "resources": {
+    "limits": {
+      "cpu": "1",
+      "memory": "1000Mi"
+    },
+    "requests": {
+      "cpu": "100m",
+      "memory": "100Mi"
+    }
+  }
+}
+```
+
+### CalicoKubeControllersDeployment
+
+To configure resource requests for the [CalicoKubeControllersDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.CalicoKubeControllersDeployment) component, patch the installation CR using the following command:
+
+```bash
+$ kubectl patch installations default --type=merge --patch='{"spec": {"calicoKubeControllersDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"calico-kube-controllers","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}'
+```
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 Mebibytes (MiB), while the CPU limit is set to 1 CPU and the memory limit to 1000 Mebibytes (MiB).
+
+#### Verification
+
+You can verify the configured resources using the following command:
+
+```bash
+$ kubectl get deployment.apps/calico-kube-controllers -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}'
+```
+
+This command will output the configured resource requests and limits for the Calico CalicoKubeControllersDeployment component in JSON format.
+
+```bash
+{
+  "name": "calico-kube-controllers",
+  "resources": {
+    "limits": {
+      "cpu": "1",
+      "memory": "1000Mi"
+    },
+    "requests": {
+      "cpu": "100m",
+      "memory": "100Mi"
+    }
+  }
+}
+```
+
+### CSINodeDriverDaemonSet
+
+To configure resource requests for the [CSINodeDriverDaemonSet](../../reference/installation/api.mdx#operator.tigera.io/v1.CSINodeDriverDaemonSet) component, patch the installation CR using the following command:
+
+```bash
+$ kubectl patch installations default --type=merge --patch='{"spec": {"csiNodeDriverDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"calico-csi","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}},{"name":"csi-node-driver-registrar","resources":{"requests":{"cpu":"50m", "memory":"50Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}'
+```
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 Mebibytes (MiB) for the calico-csi container, and 50 mCPU and 50 MiB for the csi-node-driver-registrar container, while the CPU limit for both is set to 1 CPU and the memory limit to 1000 Mebibytes (MiB).
+
+#### Verification
+
+You can verify the configured resources using the following command:
+
+```bash
+$ kubectl get daemonset.apps/csi-node-driver -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}'
+```
+
+This command will output the configured resource requests and limits for the Calico CSINodeDriverDaemonSet component in JSON format.
+
+```bash
+{
+  "name": "calico-csi",
+  "resources": {
+    "limits": {
+      "cpu": "1",
+      "memory": "1000Mi"
+    },
+    "requests": {
+      "cpu": "100m",
+      "memory": "100Mi"
+    }
+  }
+}
+{
+  "name": "csi-node-driver-registrar",
+  "resources": {
+    "limits": {
+      "cpu": "1",
+      "memory": "1000Mi"
+    },
+    "requests": {
+      "cpu": "50m",
+      "memory": "50Mi"
+    }
+  }
+}
+```
+
+## IntrusionDetection custom resource
+
+The [IntrusionDetection](../../reference/installation/api.mdx#operator.tigera.io/v1.IntrusionDetection) CR provides a way to configure resources for the IntrusionDetectionControllerDeployment. The following sections provide example configurations for this CR.
+
+### IntrusionDetectionControllerDeployment
+
+To configure the resource specification for the [IntrusionDetectionControllerDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.IntrusionDetectionControllerDeployment), patch the IntrusionDetection CR using the following command:
+
+```bash
+$ kubectl patch intrusiondetection tigera-secure --type=merge --patch='{"spec": {"intrusionDetectionControllerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"webhooks-processor","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"1000Mi"}}},{"name":"controller","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"1000Mi"}}}]}}}}}}'
+```
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 1000 Mebibytes (MiB) for both containers, while the CPU limit is set to 1 CPU and the memory limit to 1000 Mebibytes (MiB).
+
+#### Verification
+
+You can verify the configured resources using the following command:
+
+```bash
+$ kubectl get deployment.apps/intrusion-detection-controller -n tigera-intrusion-detection -o json|jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
+```
+
+This command will output the configured resource requests and limits for the IntrusionDetectionControllerDeployment in JSON format.
+
+```bash
+{
+  "name": "controller",
+  "resources": {
+    "limits": {
+      "cpu": "1",
+      "memory": "1000Mi"
+    },
+    "requests": {
+      "cpu": "100m",
+      "memory": "1000Mi"
+    }
+  }
+}
+{
+  "name": "webhooks-processor",
+  "resources": {
+    "limits": {
+      "cpu": "1",
+      "memory": "1000Mi"
+    },
+    "requests": {
+      "cpu": "100m",
+      "memory": "1000Mi"
+    }
+  }
+}
+```
+
+## LogCollector custom resource
+
+The [LogCollector](../../reference/installation/api.mdx#operator.tigera.io/v1.LogCollector) CR provides a way to configure resources for the FluentdDaemonSet and EKSLogForwarderDeployment. The following sections provide example configurations for this CR.
+
+### FluentdDaemonSet
+
+To configure the resource specification for the [FluentdDaemonSet](../../reference/installation/api.mdx#operator.tigera.io/v1.FluentdDaemonSet), patch the LogCollector CR using the following command:
+
+```bash
+$ kubectl patch logcollector tigera-secure --type=merge --patch='{"spec": {"fluentdDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"fluentd","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}'
+```
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 Mebibytes (MiB), while the CPU limit is set to 1 CPU and the memory limit to 1000 Mebibytes (MiB).
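+
+Changing resources triggers a rolling update of the fluentd pods. Before verifying, you can wait for the rollout to complete; a small sketch using the DaemonSet name from the verification step below:
+
+```bash
+# Wait for the fluentd DaemonSet to finish rolling out the new resource settings.
+kubectl rollout status daemonset/fluentd-node -n tigera-fluentd
+```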
+
+#### Verification
+
+You can verify the configured resources using the following command:
+
+```bash
+$ kubectl get daemonset.apps/fluentd-node -n tigera-fluentd -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
+```
+
+This command will output the configured resource requests and limits for the FluentdDaemonSet in JSON format.
+
+```bash
+{
+  "name": "fluentd",
+  "resources": {
+    "limits": {
+      "cpu": "1",
+      "memory": "1000Mi"
+    },
+    "requests": {
+      "cpu": "100m",
+      "memory": "100Mi"
+    }
+  }
+}
+```
+
+### EKSLogForwarderDeployment
+
+To configure the resource specification for the [EKSLogForwarderDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.EKSLogForwarderDeployment), patch the LogCollector CR using the following command:
+
+```bash
+$ kubectl patch logcollector tigera-secure --type=merge --patch='{"spec": {"eksLogForwarderDeployment": {"spec": {"template": {"spec": {"containers":[{"name":"eks-log-forwarder","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}'
+```
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 Mebibytes (MiB), while the CPU limit is set to 1 CPU and the memory limit to 1000 Mebibytes (MiB).
+
+#### Verification
+
+You can verify the configured resources using the following command:
+
+```bash
+$ kubectl get deployment.apps/eks-log-forwarder -n tigera-fluentd -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
+```
+
+This command will output the configured resource requests and limits for the EKSLogForwarderDeployment in JSON format.
+
+```bash
+{
+  "name": "eks-log-forwarder",
+  "resources": {
+    "limits": {
+      "cpu": "1",
+      "memory": "1000Mi"
+    },
+    "requests": {
+      "cpu": "100m",
+      "memory": "100Mi"
+    }
+  }
+}
+```
+
+## ManagementClusterConnection custom resource
+
+The [ManagementClusterConnection](../../reference/installation/api.mdx#operator.tigera.io/v1.ManagementClusterConnection) CR provides a way to configure resources for the GuardianDeployment. The following sections provide example configurations for this CR.
+
+### GuardianDeployment
+
+To configure the resource specification for the [GuardianDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.GuardianDeployment), patch the ManagementClusterConnection CR using the following command:
+
+```bash
+$ kubectl patch managementclusterconnection tigera-secure --type=merge --patch='{"spec": {"guardianDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-guardian","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}'
+```
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 Mebibytes (MiB), while the CPU limit is set to 1 CPU and the memory limit to 1000 Mebibytes (MiB).
+
+#### Verification
+
+You can verify the configured resources using the following command:
+
+```bash
+$ kubectl get deployment.apps/tigera-guardian -n tigera-guardian -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
+```
+
+This command will output the configured resource requests and limits for the GuardianDeployment in JSON format.
+ +```bash +{ + "limits": { + "cpu": "1", + "memory": "1000Mi" + }, + "requests": { + "cpu": "100m", + "memory": "100Mi" + } +} +``` \ No newline at end of file diff --git a/calico-enterprise/reference/component-resources/configure-resources.mdx b/calico-enterprise/reference/component-resources/configure-resources.mdx index 153755be7a..1d3a5d8df5 100644 --- a/calico-enterprise/reference/component-resources/configure-resources.mdx +++ b/calico-enterprise/reference/component-resources/configure-resources.mdx @@ -9,7 +9,7 @@ description: Configure Resource requests and limits. Resource requests and limits are essential configurations for managing resource allocation and ensuring optimal performance of Kubernetes workloads. In {{prodname}}, these configurations can be customized using custom resources to meet specific requirements and optimize resource utilization. :::note -It's important to note that the CPU and memory values used in the examples are for demonstration purposes and should be adjusted based on individual system requirements. +It's important to note that the CPU and memory values used in the examples are for demonstration purposes and should be adjusted based on individual system requirements. To find the list of all applicable containers for a component, please refer to its specification. ::: ## APIServer custom resource @@ -21,8 +21,7 @@ The [APIServer](../../reference/installation/api.mdx#operator.tigera.io/v1.APISe To configure resource specification for the [APIServerDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.APIServerDeployment), patch the installation CR using the below command: ```bash -$ kubectl patch apiserver tigera-secure --type=merge --patch='{"spec": {"apiServerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"calico-apiserver","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}},{"name":"tigera-queryserver","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -apiserver.operator.tigera.io/tigera-secure patched +kubectl patch apiserver tigera-secure --type=merge --patch='{"spec": {"apiServerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"calico-apiserver","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}},{"name":"tigera-queryserver","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' ``` This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). @@ -31,7 +30,7 @@ This command sets the CPU request to 100 milliCPU (mCPU) and the memory request You can verify the configured resources using the following command: ```bash -$ kubectl get deployment.apps/tigera-apiserver -n tigera-system -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +kubectl get deployment.apps/tigera-apiserver -n tigera-system -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' ``` This command will output the configured resource requests and limits for the Calico APIServerDeployment component in JSON format. 
@@ -76,8 +75,7 @@ Example Configurations: To configure resource specification for the [L7LogCollectorDaemonSet](../../reference/installation/api.mdx#operator.tigera.io/v1.L7LogCollectorDaemonSet), patch the ApplicationLayer CR using the below command: ```bash -$ kubectl patch applicationlayer tigera-secure --type=merge --patch='{"spec": {"l7LogCollectorDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"l7-collector","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}},{"name":"envoy-proxy","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -applicationlayer.operator.tigera.io/tigera-secure patched +kubectl patch applicationlayer tigera-secure --type=merge --patch='{"spec": {"l7LogCollectorDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"l7-collector","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}},{"name":"envoy-proxy","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' ``` This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). @@ -86,7 +84,7 @@ This command sets the CPU request to 100 milliCPU (mCPU) and the memory request You can verify the configured resources using the following command: ```bash -$ kubectl get daemonset.apps/l7-log-collector -n calico-system -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +kubectl get daemonset.apps/l7-log-collector -n calico-system -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' ``` This command will output the configured resource requests and limits for the Calico L7LogCollectorDaemonSet component in JSON format. @@ -131,8 +129,7 @@ Example Configurations: To configure resource specification for the [DexDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.DexDeployment), patch the Authentication CR using the below command: ```bash -$ kubectl patch authentication tigera-secure --type=merge --patch='{"spec": {"dexDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-dex","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -authentication.operator.tigera.io/tigera-secure patched +kubectl patch authentication tigera-secure --type=merge --patch='{"spec": {"dexDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-dex","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' ``` This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). 
@@ -141,7 +138,7 @@ This command sets the CPU request to 100 milliCPU (mCPU) and the memory request You can verify the configured resources using the following command: ```bash -$ kubectl get deployment.apps/tigera-dex -n tigera-dex -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +kubectl get deployment.apps/tigera-dex -n tigera-dex -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' ``` This command will output the configured resource requests and limits for the Calico DexDeployment component in JSON format. @@ -173,8 +170,7 @@ Example Configurations: To configure resource specification for the [ComplianceControllerDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.ComplianceControllerDeployment), patch the Compliance CR using the below command: ```bash -$ kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceControllerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"compliance-controller","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -compliance.operator.tigera.io/tigera-secure patched +kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceControllerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"compliance-controller","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' ``` This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). @@ -183,7 +179,7 @@ This command sets the CPU request to 100 milliCPU (mCPU) and the memory request You can verify the configured resources using the following command: ```bash -$ kubectl get deployment.apps/compliance-controller -n tigera-compliance -o json|jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +kubectl get deployment.apps/compliance-controller -n tigera-compliance -o json|jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' ``` This command will output the configured resource requests and limits for the ComplianceControllerDeployment component in JSON format. @@ -209,8 +205,7 @@ This command will output the configured resource requests and limits for the Com To configure resource specification for the [ComplianceSnapshotterDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.ComplianceSnapshotterDeployment), patch the Compliance CR using the below command: ```bash -$ kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceSnapshotterDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"compliance-snapshotter","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -compliance.operator.tigera.io/tigera-secure patched +kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceSnapshotterDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"compliance-snapshotter","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' ``` This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). 
@@ -219,7 +214,7 @@ This command sets the CPU request to 100 milliCPU (mCPU) and the memory request You can verify the configured resources using the following command: ```bash -$ kubectl get deployment.apps/compliance-snapshotter -n tigera-compliance -o json|jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +kubectl get deployment.apps/compliance-snapshotter -n tigera-compliance -o json|jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' ``` This command will output the configured resource requests and limits for the ComplianceSnapshotterDeployment in JSON format. @@ -245,8 +240,7 @@ This command will output the configured resource requests and limits for the Com To configure resource specification for the [ComplianceBenchmarkerDaemonSet](../../reference/installation/api.mdx#operator.tigera.io/v1.ComplianceBenchmarkerDaemonSet), patch the Compliance CR using the below command: ```bash -$ kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceBenchmarkerDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"compliance-benchmarker","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -compliance.operator.tigera.io/tigera-secure patched +kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceBenchmarkerDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"compliance-benchmarker","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' ``` This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). @@ -255,7 +249,7 @@ This command sets the CPU request to 100 milliCPU (mCPU) and the memory request You can verify the configured resources using the following command: ```bash -$ kubectl get daemonset.apps/compliance-benchmarker -n tigera-compliance -o json |jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +kubectl get daemonset.apps/compliance-benchmarker -n tigera-compliance -o json |jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' ``` ```bash @@ -281,9 +275,7 @@ This command will output the configured resource requests and limits for the Com To configure resource specification for the [ComplianceServerDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.ComplianceServerDeployment), patch the Compliance CR using the below command: ```bash -$ kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceServerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"compliance-server","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -compliance.operator.tigera.io/tigera-secure patched - +kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceServerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"compliance-server","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' ``` This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). 
@@ -292,7 +284,7 @@ This command sets the CPU request to 100 milliCPU (mCPU) and the memory request You can verify the configured resources using the following command: ```bash -$ kubectl get deployment.apps/compliance-server -n tigera-compliance -o json| jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +kubectl get deployment.apps/compliance-server -n tigera-compliance -o json| jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' ``` This command will output the configured resource requests and limits for the ComplianceServerDeployment in JSON format. @@ -318,8 +310,7 @@ This command will output the configured resource requests and limits for the Com To configure resource specification for the [ComplianceReporterPodTemplate](../../reference/installation/api.mdx#operator.tigera.io/v1.ComplianceReporterPodTemplate), patch the Compliance CR using the below command: ```bash -$ kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceReporterPodTemplate": {"template": {"spec": {"containers":[{"name":"reporter","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}' -compliance.operator.tigera.io/tigera-secure patched +kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceReporterPodTemplate": {"template": {"spec": {"containers":[{"name":"reporter","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}' ``` This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). @@ -328,7 +319,7 @@ This command sets the CPU request to 100 milliCPU (mCPU) and the memory request You can verify the configured resources using the following command: ```bash -$ kubectl get Podtemplates tigera.io.report -n tigera-compliance -o json | jq '.template.spec.containers[] | {name: .name, resources: .resources}' +kubectl get Podtemplates tigera.io.report -n tigera-compliance -o json | jq '.template.spec.containers[] | {name: .name, resources: .resources}' ``` This command will output the configured resource requests and limits for the ComplianceReporterPodTemplate component in JSON format. @@ -361,7 +352,6 @@ To configure resource specification for the [TyphaDeployment](../../reference/in ```bash kubectl patch installations default --type=merge --patch='{"spec": {"typhaDeployment": {"spec": {"template": {"spec": {"containers": [{"name": "calico-typha", "resources": {"requests": {"cpu": "100m", "memory": "100Mi"}, "limits": {"cpu": "1", "memory": "1000Mi"}}]}}}}}}' -installation.operator.tigera.io/default patched ``` This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). 
@@ -370,7 +360,7 @@ This command sets the CPU request to 100 milliCPU (mCPU) and the memory request You can verify the configured resources using the following command: ```bash -$ kubectl get deployment.apps/calico-typha -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}' +kubectl get deployment.apps/calico-typha -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}' ``` This command will output the configured resource requests and limits for the Calico TyphaDeployment component in JSON format. @@ -396,8 +386,7 @@ This command will output the configured resource requests and limits for the Cal To configure resource requests for the [calicoNodeDaemonSet](../../reference/installation/api.mdx#operator.tigera.io/v1.calicoNodeDaemonSet) component, patch the installation CR using the below command: ```bash -$ kubectl patch installations default --type=merge --patch='{"spec": {"calicoNodeDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"calico-node","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}' -installation.operator.tigera.io/default patched +kubectl patch installations default --type=merge --patch='{"spec": {"calicoNodeDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"calico-node","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}' ``` This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). @@ -406,7 +395,7 @@ This command sets the CPU request to 100 milliCPU (mCPU) and the memory request You can verify the configured resources using the following command: ```bash -$ kubectl get daemonset.apps/calico-node -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}' +kubectl get daemonset.apps/calico-node -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}' ``` This command will output the configured resource requests and limits for the Calico calicoNodeDaemonSet component in JSON format. @@ -432,8 +421,7 @@ This command will output the configured resource requests and limits for the Cal To configure resource requests for the [calicoNodeWindowsDaemonSet](../../reference/installation/api.mdx#operator.tigera.io/v1.calicoNodeWindowsDaemonSet) component, patch the installation CR using the below command: ```bash -$ kubectl patch installations default --type=merge --patch='{"spec": {"calicoNodeWindowsDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"calico-node-windows","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}' -installation.operator.tigera.io/default patched +kubectl patch installations default --type=merge --patch='{"spec": {"calicoNodeWindowsDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"calico-node-windows","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}' ``` This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). 
@@ -442,7 +430,7 @@ This command sets the CPU request to 100 milliCPU (mCPU) and the memory request You can verify the configured resources using the following command: ```bash -$ kubectl get daemonset.apps/calico-node -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}' +kubectl get daemonset.apps/calico-node -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}' ``` This command will output the configured resource requests and limits for the Calico calicoNodeWindowsDaemonSet component in JSON format. @@ -468,8 +456,7 @@ This command will output the configured resource requests and limits for the Cal To configure resource requests for the [CalicoKubeControllersDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.CalicoKubeControllersDeployment) component, patch the installation CR using the below command: ```bash -$ kubectl patch installations default --type=merge --patch='{"spec": {"calicoKubeControllersDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"calico-kube-controllers","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}' -installation.operator.tigera.io/default patched +kubectl patch installations default --type=merge --patch='{"spec": {"calicoKubeControllersDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"calico-kube-controllers","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}' ``` This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). @@ -478,7 +465,7 @@ This command sets the CPU request to 100 milliCPU (mCPU) and the memory request You can verify the configured resources using the following command: ```bash -$ kubectl get deployment.apps/calico-kube-controllers -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}' +kubectl get deployment.apps/calico-kube-controllers -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}' ``` This command will output the configured resource requests and limits for the Calico CalicoKubeControllersDeployment component in JSON format. @@ -500,6 +487,54 @@ This command will output the configured resource requests and limits for the Cal ``` +### CSINodeDriverDaemonSet + +To configure resource requests for the [CSINodeDriverDaemonSet](../../reference/installation/api.mdx#operator.tigera.io/v1.CSINodeDriverDaemonSet) component, patch the installation CR using the below command: + +```bash +kubectl patch installations default --type=merge --patch='{"spec": {"csiNodeDriverDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"calico-csi","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}},{"name":"csi-node-driver-registrar","resources":{"requests":{"cpu":"50m", "memory":"50Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}' +``` +This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). 
+ +#### Verification + +You can verify the configured resources using the following command: + +```bash +kubectl get daemonset.apps/csi-node-driver -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}' +``` + +This command will output the configured resource requests and limits for the Calico calicoNodeDaemonSet component in JSON format. + +```bash +{ + "name": "calico-csi", + "resources": { + "limits": { + "cpu": "1", + "memory": "1000Mi" + }, + "requests": { + "cpu": "100m", + "memory": "100Mi" + } + } +} +{ + "name": "csi-node-driver-registrar", + "resources": { + "limits": { + "cpu": "1", + "memory": "1000Mi" + }, + "requests": { + "cpu": "50m", + "memory": "50Mi" + } + } +} +``` + ## IntrusionDetection custom resource The [IntrusionDetection](../../reference/installation/api.mdx#operator.tigera.io/v1.IntrusionDetection) CR provides a way to configure resources for IntrusionDetectionControllerDeployment. The following sections provide example configurations for this CR. @@ -509,8 +544,7 @@ The [IntrusionDetection](../../reference/installation/api.mdx#operator.tigera.io To configure resource specification for the [IntrusionDetectionControllerDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.IntrusionDetectionControllerDeployment), patch the IntrusionDetection CR using the below command: ```bash -$ kubectl patch intrusiondetection tigera-secure --type=merge --patch='{"spec": {"intrusionDetectionControllerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"webhooks-processor","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"1000Mi"}}},{"name":"controller","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"1000Mi"}}}]}}}}}}' -intrusiondetection.operator.tigera.io/tigera-secure patched +kubectl patch intrusiondetection tigera-secure --type=merge --patch='{"spec": {"intrusionDetectionControllerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"webhooks-processor","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"1000Mi"}}},{"name":"controller","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"1000Mi"}}}]}}}}}}' ``` This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). @@ -519,7 +553,7 @@ This command sets the CPU request to 100 milliCPU (mCPU) and the memory request You can verify the configured resources using the following command: ```bash -$ kubectl get deployment.apps/intrusion-detection-controller -n tigera-intrusion-detection -o json|jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +kubectl get deployment.apps/intrusion-detection-controller -n tigera-intrusion-detection -o json|jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' ``` This command will output the configured resource requests and limits for the IntrusionDetectionControllerDeployment in JSON format. 
@@ -563,7 +597,6 @@ To configure resource specification for the [FluentdDaemonSet](../../reference/i ```bash $ kubectl patch logcollector tigera-secure --type=merge --patch='{"spec": {"fluentdDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"fluentd","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -logcollector.operator.tigera.io/tigera-secure patched ``` This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). @@ -572,7 +605,7 @@ This command sets the CPU request to 100 milliCPU (mCPU) and the memory request You can verify the configured resources using the following command: ```bash -$ kubectl get daemonset.apps/fluentd-node -n tigera-fluentd -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +kubectl get daemonset.apps/fluentd-node -n tigera-fluentd -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' ``` This command will output the configured resource requests and limits for the FluentdDaemonSet in JSON format. @@ -599,7 +632,6 @@ To configure resource specification for the [EKSLogForwarderDeployment](../../re ```bash $ kubectl patch logcollector tigera-secure --type=merge --patch='{"spec": {"eksLogForwarderDeployment": {"spec": {"template": {"spec": {"containers":[{"name":"eks-log-forwarder","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -logcollector.operator.tigera.io/tigera-secure patched ``` This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). @@ -608,7 +640,7 @@ This command sets the CPU request to 100 milliCPU (mCPU) and the memory request You can verify the configured resources using the following command: ```bash -$ kubectl get deployment.apps/eks-log-forwarder -n tigera-fluentd -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +kubectl get deployment.apps/eks-log-forwarder -n tigera-fluentd -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' ``` This command will output the configured resource requests and limits for the EKSLogForwarderDeployment in JSON format. 
@@ -638,8 +670,7 @@ The [LogStorage](../../reference/installation/api.mdx#operator.tigera.io/v1.LogS To configure resource specification for the [ECKOperatorStatefulSet](../../reference/installation/api.mdx#operator.tigera.io/v1.ECKOperatorStatefulSet), patch the LogStorage CR using the below command: ```bash -$ kubectl patch logstorage tigera-secure --type=merge --patch='{"spec": {"eckOperatorStatefulSet":{"spec": {"template": {"spec": {"containers":[{"name":"manager","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -logstorage.operator.tigera.io/tigera-secure patched +kubectl patch logstorage tigera-secure --type=merge --patch='{"spec": {"eckOperatorStatefulSet":{"spec": {"template": {"spec": {"containers":[{"name":"manager","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' ``` This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). @@ -648,7 +679,7 @@ This command sets the CPU request to 100 milliCPU (mCPU) and the memory request You can verify the configured resources using the following command: ```bash -$ kubectl get statefulset.apps/elastic-operator -n tigera-eck-operator -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +kubectl get statefulset.apps/elastic-operator -n tigera-eck-operator -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' ``` This command will output the configured resource requests and limits for the ECKOperatorStatefulSet in JSON format. @@ -674,8 +705,7 @@ This command will output the configured resource requests and limits for the ECK To configure resource specification for the [Kibana](../../reference/installation/api.mdx#operator.tigera.io/v1.Kibana), patch the LogStorage CR using the below command: ```bash -$ kubectl patch logstorage tigera-secure --type=merge --patch='{"spec": {"kibana":{"spec": {"template": {"spec": {"containers":[{"name":"kibana","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -logstorage.operator.tigera.io/tigera-secure patched +kubectl patch logstorage tigera-secure --type=merge --patch='{"spec": {"kibana":{"spec": {"template": {"spec": {"containers":[{"name":"kibana","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' ``` This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). @@ -684,7 +714,7 @@ This command sets the CPU request to 100 milliCPU (mCPU) and the memory request You can verify the configured resources using the following command: ```bash -$ kubectl get deployment.apps/tigera-secure-kb -n tigera-kibana -o json| jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +kubectl get deployment.apps/tigera-secure-kb -n tigera-kibana -o json| jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' ``` This command will output the configured resource requests and limits for the Kibana in JSON format. 
@@ -710,8 +740,7 @@ This command will output the configured resource requests and limits for the Kib To configure resource specification for the [LinseedDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.LinseedDeployment), patch the LogStorage CR using the below command: ```bash -$ kubectl patch logstorage tigera-secure --type=merge --patch='{"spec": {"linseedDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-linseed","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -logstorage.operator.tigera.io/tigera-secure patched +kubectl patch logstorage tigera-secure --type=merge --patch='{"spec": {"linseedDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-linseed","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' ``` This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). @@ -720,7 +749,7 @@ This command sets the CPU request to 100 milliCPU (mCPU) and the memory request You can verify the configured resources using the following command: ```bash -$ kubectl get deployment.apps/tigera-linseed -n tigera-elasticsearch -o json| jq '.spec.template.spec.containers[] | {name: .name, resources: .resources} +kubectl get deployment.apps/tigera-linseed -n tigera-elasticsearch -o json| jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' ``` This command will output the configured resource requests and limits for the LinseedDeployment in JSON format. @@ -746,8 +775,7 @@ This command will output the configured resource requests and limits for the Lin To configure resource specification for the [ElasticsearchMetricsDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.ElasticsearchMetricsDeployment), patch the LogStorage CR using the below command: ```bash -$ kubectl patch logstorage tigera-secure --type=merge --patch='{"spec": {"elasticsearchMetricsDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-elasticsearch-metrics","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"1000Mi"}}}]}}}}}}' -logstorage.operator.tigera.io/tigera-secure patched +kubectl patch logstorage tigera-secure --type=merge --patch='{"spec": {"elasticsearchMetricsDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-elasticsearch-metrics","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' ``` This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). @@ -756,7 +784,7 @@ This command sets the CPU request to 100 milliCPU (mCPU) and the memory request You can verify the configured resources using the following command: ```bash -$ kubectl get deployment.apps/tigera-elasticsearch-metrics -n tigera-elasticsearch -o json| jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +kubectl get deployment.apps/tigera-elasticsearch-metrics -n tigera-elasticsearch -o json| jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' ``` This command will output the configured resource requests and limits for the ElasticsearchMetricsDeployment in JSON format.
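
When several LogStorage components are tuned at once, it can be convenient to check them together. This small sketch simply reuses the jq filter from the verification steps above for both Deployments in the tigera-elasticsearch namespace:

```bash
# Sketch: check both LogStorage Deployments in one pass, reusing the jq filter shown above.
for d in tigera-linseed tigera-elasticsearch-metrics; do
  echo "== $d =="
  kubectl get deployment.apps/"$d" -n tigera-elasticsearch -o json \
    | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
done
```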
@@ -781,13 +809,12 @@ This command will output the configured resource requests and limits for the Ela The [ManagementClusterConnection](../../reference/installation/api.mdx#operator.tigera.io/v1.ManagementClusterConnection) CR provides a way to configure resources for GuardianDeployment. The following sections provide example configurations for this CR. -### GuardianDeployment. +### GuardianDeployment To configure resource specification for the [GuardianDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.GuardianDeployment), patch the ManagementClusterConnection CR using the below command: ```bash -$ kubectl patch managementclusterconnection tigera-secure --type=merge --patch='{"spec": {"guardianDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-guardian","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -managementclusterconnection.operator.tigera.io/tigera-secure patched +kubectl patch managementclusterconnection tigera-secure --type=merge --patch='{"spec": {"guardianDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-guardian","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' ``` This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). @@ -796,7 +823,7 @@ This command sets the CPU request to 100 milliCPU (mCPU) and the memory request You can verify the configured resources using the following command: ```bash -$ kubectl get deployment.apps/tigera-guardian -n tigera-guardian -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +kubectl get deployment.apps/tigera-guardian -n tigera-guardian -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' ``` This command will output the configured resource requests and limits for the GuardianDeployment in JSON format. @@ -816,15 +843,14 @@ This command will output the configured resource requests and limits for the Gua ## Manager custom resource -The [Manager](../../reference/installation/api.mdx#operator.tigera.io/v1.Manager) CR provides a way to configure resources for GuardianDeployment. The following sections provide example configurations for this CR. +The [Manager](../../reference/installation/api.mdx#operator.tigera.io/v1.Manager) CR provides a way to configure resources for ManagerDeployment. The following sections provide example configurations for this CR. -### ManagerDeployment. 
+### ManagerDeployment To configure resource specification for the [ManagerDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.ManagerDeployment), patch the Manager CR using the below command: ```bash -$ kubectl patch managementclusterconnection tigera-secure --type=merge --patch='{"spec": {"guardianDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-guardian","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -managementclusterconnection.operator.tigera.io/tigera-secure patched +kubectl patch manager tigera-secure --type=merge --patch='{"spec": {"managerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-voltron","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}},{"name":"tigera-es-proxy","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}},{"name":"tigera-manager","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' ``` This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). @@ -833,7 +859,7 @@ This command sets the CPU request to 100 milliCPU (mCPU) and the memory request You can verify the configured resources using the following command: ```bash -$ kubectl get deployment.apps/tigera-manager -n tigera-manager -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +kubectl get deployment.apps/tigera-manager -n tigera-manager -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' ``` This command will output the configured resource requests and limits for the ManagerDeployment in JSON format. @@ -890,7 +916,7 @@ The [Monitor](../../reference/installation/api.mdx#operator.tigera.io/v1.Monitor To configure resource specification for the [Prometheus](../../reference/installation/api.mdx#operator.tigera.io/v1.Prometheus), resources for the default container "prometheus" can be configured using the "resources" field under "commonPrometheusFields". For all other injected containers, such as "authn-proxy", resource configuration can be set using the "containers" struct, as shown in the patch command below. ```bash -$ kubectl patch monitor tigera-secure --type=merge --patch='{ +kubectl patch monitor tigera-secure --type=merge --patch='{ "spec": { "prometheus": { "spec": { @@ -934,7 +960,7 @@ This command sets the CPU request to 100 milliCPU (mCPU) and the memory request You can verify the configured resources using the following command: ```bash -$ kubectl get statefulset.apps/prometheus-calico-node-prometheus -n tigera-prometheus -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +kubectl get statefulset.apps/prometheus-calico-node-prometheus -n tigera-prometheus -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' ``` This command will output the configured resource requests and limits for the Prometheus in JSON format.
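
Because the StatefulSet above is generated by the Prometheus operator, it can also be worth confirming that the values reached the running pod. This is only a sketch: it assumes the first replica (ordinal 0) of the StatefulSet named in the verification command above.

```bash
# Sketch: inspect the effective resources on the running Prometheus pod
# (assumes replica 0 of the StatefulSet shown above).
kubectl get pod prometheus-calico-node-prometheus-0 -n tigera-prometheus -o json \
  | jq '.spec.containers[] | {name: .name, resources: .resources}'
```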
@@ -990,8 +1016,7 @@ The "config-reloader" container has default resource values set based by the Pro To configure resource specification for the [AlertManager](../../reference/installation/api.mdx#operator.tigera.io/v1.AlertManager), you can set resources for the default container "alertmanager" using the "resources" field in the AlertManager spec. For all other injected containers, like "authn-proxy", resource configuration can be set using the "containers" struct, as shown in the patch command below. ```bash -$ kubectl patch monitor tigera-secure --type=merge --patch='{"spec": {"alertManager": {"spec": {"resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}}}}' -monitor.operator.tigera.io/tigera-secure patched +kubectl patch monitor tigera-secure --type=merge --patch='{"spec": {"alertManager": {"spec": {"resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}}}}' ``` This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). @@ -1000,7 +1025,7 @@ This command sets the CPU request to 100 milliCPU (mCPU) and the memory request You can verify the configured resources using the following command: ```bash -$ kubectl get statefulset.apps/alertmanager-calico-node-alertmanager -n tigera-prometheus -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +kubectl get statefulset.apps/alertmanager-calico-node-alertmanager -n tigera-prometheus -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' ``` This command will output the configured resource requests and limits for the AlertManager in JSON format. @@ -1042,13 +1067,12 @@ The "config-reloader" container has default resource values set by the AlertMana The [PolicyRecommendation](../../reference/installation/api.mdx#operator.tigera.io/v1.PolicyRecommendation) CR provides a way to configure resources for PolicyRecommendation. The following sections provide example configurations for this CR. -### PolicyRecommendationDeployment. +### PolicyRecommendationDeployment To configure resource specification for the [PolicyRecommendationDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.PolicyRecommendationDeployment), patch the PolicyRecommendation CR using the below command: ```bash -$ kubectl patch managementclusterconnection tigera-secure --type=merge --patch='{"spec": {"guardianDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-guardian","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -managementclusterconnection.operator.tigera.io/tigera-secure patched +kubectl patch policyrecommendation tigera-secure --type=merge --patch='{"spec": {"policyRecommendationDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"policy-recommendation-controller","resources":{"requests":{"cpu":"100m", "memory":"100Mi"},"limits":{"cpu":"1", "memory":"512Mi"}}}]}}}}}}' ``` This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 512 Mebibytes (MiB).
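
If you later want to fall back to the operator defaults, an override like this can be cleared with another merge patch; JSON merge patch (the semantics behind `--type=merge`) treats a null value as "remove this key". A minimal sketch against the same CR:

```bash
# Sketch: revert to operator defaults by clearing the override.
# With --type=merge (JSON merge patch), setting a field to null removes it.
kubectl patch policyrecommendation tigera-secure --type=merge --patch='{"spec": {"policyRecommendationDeployment": null}}'
```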
@@ -1057,49 +1081,20 @@ This command sets the CPU request to 100 milliCPU (mCPU) and the memory request You can verify the configured resources using the following command: ```bash -$ kubectl get deployment.apps/tigera-manager -n tigera-manager -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +kubectl get deployment.apps/tigera-policy-recommendation -n tigera-policy-recommendation -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' ``` This command will output the configured resource requests and limits for the PolicyRecommendationDeployment in JSON format. ```bash { - "name": "tigera-es-proxy", - "resources": { - "limits": { - "cpu": "1", - "memory": "1000Mi" - }, - "requests": { - "cpu": "100m", - "memory": "100Mi" - } - } -} -{ - "name": "tigera-voltron", - "resources": { - "limits": { - "cpu": "1", - "memory": "1000Mi" - }, - "requests": { - "cpu": "100m", - "memory": "100Mi" - } - } -} -{ - "name": "tigera-manager", - "resources": { - "limits": { - "cpu": "1", - "memory": "1000Mi" - }, - "requests": { - "cpu": "100m", - "memory": "100Mi" - } + "name": "policy-recommendation-controller", + "resources": { + "limits": { + "cpu": "1", + "memory": "512Mi" + }, + "requests": { + "cpu": "100m", + "memory": "100Mi" + } } } ``` diff --git a/calico/reference/configure-resources.mdx b/calico/reference/configure-resources.mdx new file mode 100644 index 0000000000..5835425e08 --- /dev/null +++ b/calico/reference/configure-resources.mdx @@ -0,0 +1,262 @@ +--- +description: Configure Resource requests and limits. +--- + +# Configure resource requests and limits + +## Big picture + +Resource requests and limits are essential configurations for managing resource allocation and ensuring optimal performance of Kubernetes workloads. In {{prodname}}, these configurations can be customized using custom resources to meet specific requirements and optimize resource utilization. + +:::note +It's important to note that the CPU and memory values used in the examples are for demonstration purposes and should be adjusted based on individual system requirements. To find the list of all applicable containers for a component, please refer to its specification. +::: + +## APIServer custom resource + +The [APIServer](../reference/installation/api.mdx#operator.tigera.io/v1.APIServer) CR provides a way to configure APIServerDeployment. The following sections provide example configurations for this CR. + +### APIServerDeployment + +To configure resource specification for the [APIServerDeployment](../reference/installation/api.mdx#operator.tigera.io/v1.APIServerDeployment), patch the APIServer CR using the below command: + +```bash +kubectl patch apiserver tigera-secure --type=merge --patch='{"spec": {"apiServerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"calico-apiserver","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}},{"name":"tigera-queryserver","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' +``` +This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB).
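
Because the operator reconciles these settings into the Deployment, the new values may take a moment to roll out. As a quick sanity check after patching (a sketch only; the tigerastatus object and Deployment name match the verification step below):

```bash
# Sketch: confirm the operator reconciled the change and the API server rolled out cleanly.
kubectl get tigerastatus apiserver
kubectl rollout status deployment/tigera-apiserver -n tigera-system
```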
+ +#### Verification + +You can verify the configured resources using the following command: + +```bash +kubectl get deployment.apps/tigera-apiserver -n tigera-system -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +``` + +This command will output the configured resource requests and limits for the Calico APIServerDeployment component in JSON format. + +```bash +{ + "name": "calico-apiserver", + "resources": { + "limits": { + "cpu": "1", + "memory": "1000Mi" + }, + "requests": { + "cpu": "100m", + "memory": "100Mi" + } + } +} +{ + "name": "tigera-queryserver", + "resources": { + "limits": { + "cpu": "1", + "memory": "1000Mi" + }, + "requests": { + "cpu": "100m", + "memory": "100Mi" + } + } +} +``` + +## Installation custom resource + +The [Installation CR](../reference/installation/api.mdx) provides a way to configure resources for various {{prodname}} components, including TyphaDeployment, CalicoNodeDaemonSet, CalicoNodeWindowsDaemonSet, CSINodeDriverDaemonSet, and CalicoKubeControllersDeployment. The following sections provide example configurations for this CR. + +Example Configurations: + + +### TyphaDeployment + +To configure resource specification for the [TyphaDeployment](../reference/installation/api.mdx#operator.tigera.io/v1.TyphaDeployment), patch the installation CR using the below command: + +```bash +kubectl patch installations default --type=merge --patch='{"spec": {"typhaDeployment": {"spec": {"template": {"spec": {"containers": [{"name": "calico-typha", "resources": {"requests": {"cpu": "100m", "memory": "100Mi"}, "limits": {"cpu": "1", "memory": "1000Mi"}}}]}}}}}}' +``` +This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). + +#### Verification + +You can verify the configured resources using the following command: + +```bash +kubectl get deployment.apps/calico-typha -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}' +``` + +This command will output the configured resource requests and limits for the Calico TyphaDeployment component in JSON format. + +```bash +{ + "name": "calico-typha", + "resources": { + "limits": { + "cpu": "1", + "memory": "1000Mi" + }, + "requests": { + "cpu": "100m", + "memory": "100Mi" + } + } +} +``` + + +### CalicoNodeDaemonSet + +To configure resource requests for the [CalicoNodeDaemonSet](../reference/installation/api.mdx#operator.tigera.io/v1.CalicoNodeDaemonSet) component, patch the installation CR using the below command: + +```bash +kubectl patch installations default --type=merge --patch='{"spec": {"calicoNodeDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"calico-node","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}' +``` +This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). + +#### Verification + +You can verify the configured resources using the following command: + +```bash +kubectl get daemonset.apps/calico-node -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}' +``` + +This command will output the configured resource requests and limits for the CalicoNodeDaemonSet component in JSON format.
+ +```bash +{ + "name": "calico-node", + "resources": { + "limits": { + "cpu": "1", + "memory": "1000Mi" + }, + "requests": { + "cpu": "100m", + "memory": "100Mi" + } + } +} +``` + +### CalicoNodeWindowsDaemonSet + +To configure resource requests for the [CalicoNodeWindowsDaemonSet](../reference/installation/api.mdx#operator.tigera.io/v1.CalicoNodeWindowsDaemonSet) component, patch the installation CR using the below command: + +```bash +kubectl patch installations default --type=merge --patch='{"spec": {"calicoNodeWindowsDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"calico-node-windows","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}' +``` +This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). + +#### Verification + +You can verify the configured resources using the following command: + +```bash +kubectl get daemonset.apps/calico-node-windows -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}' +``` + +This command will output the configured resource requests and limits for the CalicoNodeWindowsDaemonSet component in JSON format. + +```bash +{ + "name": "calico-node-windows", + "resources": { + "limits": { + "cpu": "1", + "memory": "1000Mi" + }, + "requests": { + "cpu": "100m", + "memory": "100Mi" + } + } +} +``` + +### CalicoKubeControllersDeployment + +To configure resource requests for the [CalicoKubeControllersDeployment](../reference/installation/api.mdx#operator.tigera.io/v1.CalicoKubeControllersDeployment) component, patch the installation CR using the below command: + +```bash +kubectl patch installations default --type=merge --patch='{"spec": {"calicoKubeControllersDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"calico-kube-controllers","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}' +``` +This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). + +#### Verification + +You can verify the configured resources using the following command: + +```bash +kubectl get deployment.apps/calico-kube-controllers -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}' +``` + +This command will output the configured resource requests and limits for the CalicoKubeControllersDeployment component in JSON format.
+ +```bash +{ + "name": "calico-kube-controllers", + "resources": { + "limits": { + "cpu": "1", + "memory": "1000Mi" + }, + "requests": { + "cpu": "100m", + "memory": "100Mi" + } + } +} +``` + +### CSINodeDriverDaemonSet + +To configure resource requests for the [CSINodeDriverDaemonSet](../reference/installation/api.mdx#operator.tigera.io/v1.CSINodeDriverDaemonSet) component, patch the installation CR using the below command: + +```bash +kubectl patch installations default --type=merge --patch='{"spec": {"csiNodeDriverDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"calico-csi","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}},{"name":"csi-node-driver-registrar","resources":{"requests":{"cpu":"50m", "memory":"50Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}' +``` +This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 Mebibytes (MiB) for the calico-csi container, sets a 50 mCPU CPU request and a 50 MiB memory request for the csi-node-driver-registrar container, and sets the CPU limit to 1 CPU and the memory limit to 1000 Mebibytes (MiB) for both containers. + +#### Verification + +You can verify the configured resources using the following command: + +```bash +kubectl get daemonset.apps/csi-node-driver -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}' +``` + +This command will output the configured resource requests and limits for the CSINodeDriverDaemonSet component in JSON format. + +```bash +{ + "name": "calico-csi", + "resources": { + "limits": { + "cpu": "1", + "memory": "1000Mi" + }, + "requests": { + "cpu": "100m", + "memory": "100Mi" + } + } +} +{ + "name": "csi-node-driver-registrar", + "resources": { + "limits": { + "cpu": "1", + "memory": "1000Mi" + }, + "requests": { + "cpu": "50m", + "memory": "50Mi" + } + } +} +``` diff --git a/sidebars-calico-cloud.js b/sidebars-calico-cloud.js index 099ea37a25..124101cc5a 100644 --- a/sidebars-calico-cloud.js +++ b/sidebars-calico-cloud.js @@ -536,6 +536,7 @@ module.exports = { link: {type: 'doc', id: 'reference/component-resources/index'}, items: [ 'reference/component-resources/configuration', + 'reference/component-resources/configure-resources', { type: 'category', label: 'Calico Cloud Kubernetes controllers', diff --git a/sidebars-calico.js b/sidebars-calico.js index cd2de296d2..e9f5b9f4a6 100644 --- a/sidebars-calico.js +++ b/sidebars-calico.js @@ -614,6 +614,7 @@ module.exports = { ], }, 'reference/configure-calico-node', + 'reference/configure-resources', { type: 'category', label: 'Felix',