From 1180e4f9e5e036c064d7338609d818fa56c8dd17 Mon Sep 17 00:00:00 2001 From: Abigail McCarthy <20771501+a-mccarthy@users.noreply.github.com> Date: Wed, 21 Feb 2024 12:17:25 -0500 Subject: [PATCH 01/23] Update link to new dashboard --- content/en/docs/contribute/analytics.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/en/docs/contribute/analytics.md b/content/en/docs/contribute/analytics.md index f910c1d91b28a..57dea1f3fb3ad 100644 --- a/content/en/docs/contribute/analytics.md +++ b/content/en/docs/contribute/analytics.md @@ -14,9 +14,9 @@ This page contains information about the kubernetes.io analytics dashboard. -[View the dashboard](https://datastudio.google.com/reporting/fede2672-b2fd-402a-91d2-7473bdb10f04). +[View the dashboard](https://lookerstudio.google.com/u/0/reporting/fe615dc5-59b0-4db5-8504-ef9eacb663a9/page/4VDGB/). -This dashboard is built using Google Data Studio and shows information collected on kubernetes.io using Google Analytics. +This dashboard is built using [Google Looker Studio](https://lookerstudio.google.com/overview) and shows information collected on kubernetes.io using Google Analytics 4 since August 2022. 
### Using the dashboard From b39e01b971757ffcc8d74afa4cdb00737806d35a Mon Sep 17 00:00:00 2001 From: Tim Bannister Date: Sun, 18 Feb 2024 14:59:21 +0000 Subject: [PATCH 02/23] Add concept page about cluster autoscaling Co-Authored-By: Niranjan Darshann --- .../en/docs/concepts/architecture/nodes.md | 2 + .../concepts/cluster-administration/_index.md | 1 + .../concepts/cluster-administration/addons.md | 2 +- .../cluster-autoscaling.md | 117 ++++++++++++++++++ .../en/docs/concepts/workloads/autoscaling.md | 10 +- .../setup/best-practices/cluster-large.md | 4 +- .../setup/production-environment/_index.md | 12 +- .../horizontal-pod-autoscale.md | 5 +- 8 files changed, 131 insertions(+), 22 deletions(-) create mode 100644 content/en/docs/concepts/cluster-administration/cluster-autoscaling.md diff --git a/content/en/docs/concepts/architecture/nodes.md b/content/en/docs/concepts/architecture/nodes.md index c0bcecd3df4ac..ead3b9101d2f8 100644 --- a/content/en/docs/concepts/architecture/nodes.md +++ b/content/en/docs/concepts/architecture/nodes.md @@ -578,6 +578,8 @@ Learn more about the following: * [API definition for Node](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core). * [Node](https://git.k8s.io/design-proposals-archive/architecture/architecture.md#the-kubernetes-node) section of the architecture design document. +* [Cluster autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling/) to + manage the number and size of nodes in your cluster. * [Taints and Tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/). * [Node Resource Managers](/docs/concepts/policy/node-resource-managers/). * [Resource Management for Windows nodes](/docs/concepts/configuration/windows-resource-management/). 
diff --git a/content/en/docs/concepts/cluster-administration/_index.md b/content/en/docs/concepts/cluster-administration/_index.md index 2c7baf1d9871e..456069c980175 100644 --- a/content/en/docs/concepts/cluster-administration/_index.md +++ b/content/en/docs/concepts/cluster-administration/_index.md @@ -52,6 +52,7 @@ Before choosing a guide, here are some considerations: ## Managing a cluster * Learn how to [manage nodes](/docs/concepts/architecture/nodes/). + * Read about [cluster autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling/). * Learn how to set up and manage the [resource quota](/docs/concepts/policy/resource-quotas/) for shared clusters. diff --git a/content/en/docs/concepts/cluster-administration/addons.md b/content/en/docs/concepts/cluster-administration/addons.md index 7c2ab269e4a63..4bbfbb7d263f1 100644 --- a/content/en/docs/concepts/cluster-administration/addons.md +++ b/content/en/docs/concepts/cluster-administration/addons.md @@ -1,7 +1,7 @@ --- title: Installing Addons content_type: concept -weight: 120 +weight: 150 --- diff --git a/content/en/docs/concepts/cluster-administration/cluster-autoscaling.md b/content/en/docs/concepts/cluster-administration/cluster-autoscaling.md new file mode 100644 index 0000000000000..495943b65a27a --- /dev/null +++ b/content/en/docs/concepts/cluster-administration/cluster-autoscaling.md @@ -0,0 +1,117 @@ +--- +title: Cluster Autoscaling +linkTitle: Cluster Autoscaling +description: >- + Automatically manage the nodes in your cluster to adapt to demand. +content_type: concept +weight: 120 +--- + + + +Kubernetes requires {{< glossary_tooltip text="nodes" term_id="node" >}} in your cluster to +run {{< glossary_tooltip text="pods" term_id="pod" >}}. This means providing capacity for +the workload Pods and for Kubernetes itself. + +You can adjust the amount of resources available in your cluster automatically: +_node autoscaling_. 
You can either change the number of nodes, or change the capacity +that nodes provide. The first approach is referred to as _horizontal scaling_, while the +second is referred to as _vertical scaling_. + +Kubernetes can even provide multidimensional automatic scaling for nodes. + + + +## Manual node management + +You can manually manage node-level capacity, where you configure a fixed number of nodes; +you can use this approach even if the provisioning (the process to set up, manage, and +decommission) for these nodes is automated. + +This page is about taking the next step, and automating management of the amount of +node capacity (CPU, memory, and other node resources) available in your cluster. + +## Automatic horizontal scaling {#autoscaling-horizontal} + +### Cluster Autoscaler + +You can use the [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) to manage the scale of your nodes automatically. +The cluster autoscaler can integrate with a cloud provider, or with Kubernetes' +[cluster API](https://github.com/kubernetes/autoscaler/blob/c6b754c359a8563050933a590f9a5dece823c836/cluster-autoscaler/cloudprovider/clusterapi/README.md), +to achieve the actual node management that's needed. + +The cluster autoscaler adds nodes when there are unschedulable Pods, and +removes nodes when those nodes are empty. + +#### Cloud provider integrations {#cluster-autoscaler-providers} + +The [README](https://github.com/kubernetes/autoscaler/tree/c6b754c359a8563050933a590f9a5dece823c836/cluster-autoscaler#readme) +for the cluster autoscaler lists some of the cloud provider integrations +that are available. + +## Cost-aware multidimensional scaling {#autoscaling-multi-dimension} + +### Karpenter {#autoscaler-karpenter} + +[Karpenter](https://karpenter.sh/) supports direct node management, via +plugins that integrate with specific cloud providers, and can manage nodes +for you whilst optimizing for overall cost.
+ +> Karpenter automatically launches just the right compute resources to +> handle your cluster's applications. It is designed to let you take +> full advantage of the cloud with fast and simple compute provisioning +> for Kubernetes clusters. + +The Karpenter tool is designed to integrate with a cloud provider that +provides API-driven server management, and where the price information for +available servers is also available via a web API. + +For example, if you start some more Pods in your cluster, the Karpenter +tool might buy a new node that is larger than one of the nodes you are +already using, and then shut down an existing node once the new node +is in service. + +#### Cloud provider integrations {#karpenter-providers} + +{{% thirdparty-content vendor="true" %}} + +There are integrations available between Karpenter's core and the following +cloud providers: + +- [Amazon Web Services](https://github.com/aws/karpenter-provider-aws) +- [Azure](https://github.com/Azure/karpenter-provider-azure) + + +## Related components + +### Descheduler + +The [descheduler](https://github.com/kubernetes-sigs/descheduler) can help you +consolidate Pods onto a smaller number of nodes, to help with automatic scale down +when the cluster has spare capacity. + +### Sizing a workload based on cluster size + +#### Cluster proportional autoscaler + +For workloads that need to be scaled based on the size of the cluster (for example +`cluster-dns` or other system components), you can use the +[_Cluster Proportional Autoscaler_](https://github.com/kubernetes-sigs/cluster-proportional-autoscaler).
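As a sketch of how such proportional scaling is commonly configured, the Cluster Proportional Autoscaler can read its scaling parameters for a `linear` control mode from a ConfigMap. The ConfigMap name, namespace, and numbers below are illustrative assumptions, not recommendations:

```yaml
# Illustrative parameters for the Cluster Proportional Autoscaler's linear mode.
# Roughly: one replica per 16 schedulable nodes or per 256 cores, whichever
# results in more replicas, with at least one replica at all times.
apiVersion: v1
kind: ConfigMap
metadata:
  name: dns-autoscaler
  namespace: kube-system
data:
  linear: |-
    {
      "coresPerReplica": 256,
      "nodesPerReplica": 16,
      "min": 1
    }
```

The autoscaler is then pointed at this ConfigMap and at the target workload (for example, a Deployment) through its command-line flags.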
+ +The Cluster Proportional Autoscaler watches the number of schedulable nodes +and cores, and scales the number of replicas of the target workload accordingly. + +#### Cluster proportional vertical autoscaler + +If the number of replicas should stay the same, you can scale your workloads vertically according to the cluster size using +the [_Cluster Proportional Vertical Autoscaler_](https://github.com/kubernetes-sigs/cluster-proportional-vertical-autoscaler). +This project is in **beta** and can be found on GitHub. + +While the Cluster Proportional Autoscaler scales the number of replicas of a workload, the Cluster Proportional Vertical Autoscaler +adjusts the resource requests for a workload (for example a Deployment or DaemonSet) based on the number of nodes and/or cores +in the cluster. + + +## {{% heading "whatsnext" %}} + +- Read about [workload-level autoscaling](/docs/concepts/workloads/autoscaling/) diff --git a/content/en/docs/concepts/workloads/autoscaling.md b/content/en/docs/concepts/workloads/autoscaling.md index 5ecd2755e23cd..1c9dc162ec7c3 100644 --- a/content/en/docs/concepts/workloads/autoscaling.md +++ b/content/en/docs/concepts/workloads/autoscaling.md @@ -129,13 +129,8 @@ its [`Cron` scaler](https://keda.sh/docs/2.13/scalers/cron/). The `Cron` scaler If scaling workloads isn't enough to meet your needs, you can also scale your cluster infrastructure itself. Scaling the cluster infrastructure normally means adding or removing {{< glossary_tooltip text="nodes" term_id="node" >}}. -This can be done using one of two available autoscalers: - -- [**Cluster Autoscaler**](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) -- [**Karpenter**](https://github.com/kubernetes-sigs/karpenter?tab=readme-ov-file) - -Both scalers work by watching for pods marked as _unschedulable_ or _underutilized_ nodes and then adding or -removing nodes as needed. 
+Read [cluster autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling/) +for more information. ## {{% heading "whatsnext" %}} @@ -144,3 +139,4 @@ removing nodes as needed. - [HorizontalPodAutoscaler Walkthrough](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) - [Resize Container Resources In-Place](/docs/tasks/configure-pod-container/resize-container-resources/) - [Autoscale the DNS Service in a Cluster](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/) +- Learn about [cluster autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling/) diff --git a/content/en/docs/setup/best-practices/cluster-large.md b/content/en/docs/setup/best-practices/cluster-large.md index 808a1c47510a3..f5f4292b44c3a 100644 --- a/content/en/docs/setup/best-practices/cluster-large.md +++ b/content/en/docs/setup/best-practices/cluster-large.md @@ -121,9 +121,7 @@ Learn more about [Vertical Pod Autoscaler](https://github.com/kubernetes/autosca and how you can use it to scale cluster components, including cluster-critical addons. -* The [cluster autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#readme) -integrates with a number of cloud providers to help you run the right number of -nodes for the level of resource demand in your cluster. +* Read about [cluster autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling/) * The [addon resizer](https://github.com/kubernetes/autoscaler/tree/master/addon-resizer#readme) helps you in resizing the addons automatically as your cluster's scale changes. diff --git a/content/en/docs/setup/production-environment/_index.md b/content/en/docs/setup/production-environment/_index.md index 7aeb4eb1919bf..02332ca4e0e98 100644 --- a/content/en/docs/setup/production-environment/_index.md +++ b/content/en/docs/setup/production-environment/_index.md @@ -183,15 +183,9 @@ simply as *nodes*). 
to help determine how many nodes you need, based on the number of pods and containers you need to run. If you are managing nodes yourself, this can mean purchasing and installing your own physical equipment. -- *Autoscale nodes*: Most cloud providers support - [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#readme) - to replace unhealthy nodes or grow and shrink the number of nodes as demand requires. See the - [Frequently Asked Questions](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md) - for how the autoscaler works and - [Deployment](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#deployment) - for how it is implemented by different cloud providers. For on-premises, there - are some virtualization platforms that can be scripted to spin up new nodes - based on demand. +- *Autoscale nodes*: Read [Cluster Autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling) to learn about the + tools available to automatically manage your nodes and the capacity they + provide. - *Set up node health checks*: For important workloads, you want to make sure that the nodes and pods running on those nodes are healthy. Using the [Node Problem Detector](/docs/tasks/debug/debug-cluster/monitor-node-health/) diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md index 9461ad4ae4ae9..e3b736a8205d8 100644 --- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md +++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md @@ -596,8 +596,9 @@ guidelines, which cover this exact use case. ## {{% heading "whatsnext" %}} -If you configure autoscaling in your cluster, you may also want to consider running a -cluster-level autoscaler such as [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler). 
+If you configure autoscaling in your cluster, you may also want to consider using +[cluster autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling/) +to ensure you are running the right number of nodes. For more information on HorizontalPodAutoscaler: From faeb20fab9104c5cd1dfbd1156a732993abede2e Mon Sep 17 00:00:00 2001 From: Charles Uneze Date: Mon, 11 Mar 2024 17:06:57 +0100 Subject: [PATCH 03/23] Update resource-usage-monitoring.md --- .../docs/tasks/debug/debug-cluster/resource-usage-monitoring.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/tasks/debug/debug-cluster/resource-usage-monitoring.md b/content/en/docs/tasks/debug/debug-cluster/resource-usage-monitoring.md index eea8419e7f169..be1c512d42787 100644 --- a/content/en/docs/tasks/debug/debug-cluster/resource-usage-monitoring.md +++ b/content/en/docs/tasks/debug/debug-cluster/resource-usage-monitoring.md @@ -84,7 +84,7 @@ solutions. The choice of monitoring platform depends heavily on your needs, budget, and technical resources. Kubernetes does not recommend any specific metrics pipeline; [many options](https://landscape.cncf.io/?group=projects-and-products&view-mode=card#observability-and-analysis--monitoring) are available. Your monitoring system should be capable of handling the [OpenMetrics](https://openmetrics.io/) metrics -transmission standard, and needs to chosen to best fit in to your overall design and deployment of +transmission standard, and needs to be chosen to best fit into your overall design and deployment of your infrastructure platform. 
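To make the OpenMetrics requirement concrete, a minimal exposition in that text format looks roughly like the following; the metric name and sample values are illustrative only:

```
# HELP http_requests_total Total number of HTTP requests served.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027
http_requests_total{method="get",code="500"} 3
# EOF
```

The trailing `# EOF` marker is part of the OpenMetrics text format; a monitoring system that understands this format can scrape any compliant endpoint.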
From d77e68f2fd8593b2b896e7396f901d5c257104eb Mon Sep 17 00:00:00 2001 From: Charles Uneze Date: Tue, 19 Mar 2024 18:26:54 +0100 Subject: [PATCH 04/23] Update resource-usage-monitoring.md --- .../docs/tasks/debug/debug-cluster/resource-usage-monitoring.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/tasks/debug/debug-cluster/resource-usage-monitoring.md b/content/en/docs/tasks/debug/debug-cluster/resource-usage-monitoring.md index be1c512d42787..618458dcad0c7 100644 --- a/content/en/docs/tasks/debug/debug-cluster/resource-usage-monitoring.md +++ b/content/en/docs/tasks/debug/debug-cluster/resource-usage-monitoring.md @@ -84,7 +84,7 @@ solutions. The choice of monitoring platform depends heavily on your needs, budget, and technical resources. Kubernetes does not recommend any specific metrics pipeline; [many options](https://landscape.cncf.io/?group=projects-and-products&view-mode=card#observability-and-analysis--monitoring) are available. Your monitoring system should be capable of handling the [OpenMetrics](https://openmetrics.io/) metrics -transmission standard, and needs to be chosen to best fit into your overall design and deployment of +transmission standard and needs to be chosen to best fit into your overall design and deployment of your infrastructure platform. 
From f1777283c9142eb19518dd7b22e526faa5063955 Mon Sep 17 00:00:00 2001 From: Prashant Rewar <108176843+prashantrewar@users.noreply.github.com> Date: Tue, 19 Mar 2024 22:59:23 +0530 Subject: [PATCH 05/23] Move the introduction page from SIG CLI guides to the kubectl reference Signed-off-by: Prashant Rewar <108176843+prashantrewar@users.noreply.github.com> --- .../en/docs/reference/kubectl/introduction.md | 71 +++++++++++++++++++ 1 file changed, 71 insertions(+) create mode 100644 content/en/docs/reference/kubectl/introduction.md diff --git a/content/en/docs/reference/kubectl/introduction.md b/content/en/docs/reference/kubectl/introduction.md new file mode 100644 index 0000000000000..ac0aff2b590d6 --- /dev/null +++ b/content/en/docs/reference/kubectl/introduction.md @@ -0,0 +1,71 @@ +--- +title: "Introduction To Kubectl" +content_type: concept +weight: 1 +--- + +Kubectl is the Kubernetes CLI version of a Swiss army knife, and can do many things. + +While this Book is focused on using Kubectl to declaratively manage Applications in Kubernetes, it +also covers other Kubectl functions. + +## Command Families + +Most Kubectl commands typically fall into one of a few categories: + +| Type | Used For | Description | +|----------------------------------------|----------------------------|----------------------------------------------------| +| Declarative Resource Management | Deployment and Operations (e.g.
GitOps) | Declaratively manage Kubernetes Workloads using Resource Config | +| Imperative Resource Management | Development Only | Run commands to manage Kubernetes Workloads using Command Line arguments and flags | +| Printing Workload State | Debugging | Print information about Workloads | +| Interacting with Containers | Debugging | Exec, Attach, Cp, Logs | +| Cluster Management | Cluster Ops | Drain and Cordon Nodes | + +## Declarative Application Management + +The preferred approach for managing Resources is through +declarative files called Resource Config used with the Kubectl *Apply* command. +This command reads a local (or remote) file structure and modifies cluster state to +reflect the declared intent. + +{{< alert color="success" title="Apply" >}} +Apply is the preferred mechanism for managing Resources in a Kubernetes cluster. +{{< /alert >}} + +## Printing state about Workloads + +Users will need to view Workload state. + +- Printing summarized state and information about Resources +- Printing complete state and information about Resources +- Printing specific fields from Resources +- Querying Resources matching labels + +## Debugging Workloads + +Kubectl supports debugging by providing commands for: + +- Printing Container logs +- Printing cluster events +- Exec or attaching to a Container +- Copying files from Containers in the cluster to a user's filesystem + +## Cluster Management + +On occasion, users may need to perform operations on the Nodes of a cluster. Kubectl supports +commands to drain Workloads from a Node so that it can be decommissioned or debugged. + +## Porcelain + +Users may find using Resource Config overly verbose for *Development* and prefer to work with +the cluster *imperatively* with a shell-like workflow. Kubectl offers porcelain commands for +generating and modifying Resources.
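As a concrete sketch of the declarative flow described under *Declarative Application Management*, a hypothetical Resource Config file (all names and values here are illustrative) could look like:

```yaml
# deployment.yaml - an example Resource Config file managed with Apply.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```

Running `kubectl apply -f deployment.yaml` reconciles the cluster toward this declared state, whereas the porcelain commands described below modify Resources imperatively, without leaving a declarative record.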
+ +- Generating + creating Resources such as Deployments, StatefulSets, Services, ConfigMaps, etc +- Setting fields on Resources +- Editing (live) Resources in a text editor + +{{< alert color="warning" title="Porcelain For Dev Only" >}} +Porcelain commands are time saving for experimenting with workloads in a dev cluster, but shouldn't +be used for production. +{{< /alert >}} From c5828df71273e7f5189003a8db361c589976b386 Mon Sep 17 00:00:00 2001 From: Amim Knabben Date: Tue, 19 Mar 2024 20:55:12 -0300 Subject: [PATCH 06/23] Enhancing troubleshoot section with Windows Operational Readiness --- content/en/docs/concepts/windows/intro.md | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-) diff --git a/content/en/docs/concepts/windows/intro.md b/content/en/docs/concepts/windows/intro.md index 5bb8c60fe6d95..dcf4db95b9910 100644 --- a/content/en/docs/concepts/windows/intro.md +++ b/content/en/docs/concepts/windows/intro.md @@ -408,6 +408,17 @@ reported previously and comment with your experience on the issue and add additi logs. SIG Windows channel on the Kubernetes Slack is also a great avenue to get some initial support and troubleshooting ideas prior to creating a ticket. +### Validating the Windows cluster operability + +The Kubernetes project provides a _Windows Operational Readiness_ specification, +accompanied by a structured test suite. This suite is split into two sets of tests, +core and extended, each containing categories aimed at testing specific areas. +It can be used to validate all the functionalities of a Windows and hybrid system +(mixed with Linux nodes) with full coverage. + +To set up the project on a newly created cluster, refer to the instructions in the +[project guide](https://github.com/kubernetes-sigs/windows-operational-readiness/blob/main/README.md). 
+ ## Deployment tools The kubeadm tool helps you to deploy a Kubernetes cluster, providing the control @@ -422,4 +433,4 @@ For a detailed explanation of Windows distribution channels see the Information on the different Windows Server servicing channels including their support models can be found at -[Windows Server servicing channels](https://docs.microsoft.com/en-us/windows-server/get-started/servicing-channels-comparison). +[Windows Server servicing channels](https://docs.microsoft.com/en-us/windows-server/get-started/servicing-channels-comparison). \ No newline at end of file From 3ee904cfae1718f356469396b9c5c288f8b479b2 Mon Sep 17 00:00:00 2001 From: mahmut Date: Thu, 21 Mar 2024 10:51:49 +0800 Subject: [PATCH 07/23] Update container-runtimes.md --- .../docs/setup/production-environment/container-runtimes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/zh-cn/docs/setup/production-environment/container-runtimes.md b/content/zh-cn/docs/setup/production-environment/container-runtimes.md index eb42a5a20722f..0586c60dc419e 100644 --- a/content/zh-cn/docs/setup/production-environment/container-runtimes.md +++ b/content/zh-cn/docs/setup/production-environment/container-runtimes.md @@ -359,7 +359,7 @@ Return to this step once you've created a valid `config.toml` configuration file 要在系统上安装 containerd,请按照[开始使用 containerd](https://github.com/containerd/containerd/blob/main/docs/getting-started.md) 的说明进行操作。创建有效的 `config.toml` 配置文件后返回此步骤。 -{{< tabs name="找到 config.toml 文件" >}} +{{< tabs name="finding-your-config-toml-file" >}} {{% tab name="Linux" %}} + {{- if ne $feed nil -}} {{ time.Format "Mon, 02 Jan 2006 15:04:05 -0700" $feed._kubernetes_io.updated_at | safeHTML }} + {{- end -}} {{ with .OutputFormats.Get "RSS" -}} {{ printf "" .Permalink .MediaType | safeHTML }} {{ end -}} diff --git a/layouts/shortcodes/cve-feed.html b/layouts/shortcodes/cve-feed.html index 8b829079fba89..02a191886451a 100644 --- a/layouts/shortcodes/cve-feed.html +++ 
b/layouts/shortcodes/cve-feed.html @@ -1,7 +1,39 @@ -{{ $feed := getJSON .Site.Params.cveFeedBucket }} -{{ if ne $feed.version "https://jsonfeed.org/version/1.1" }} - {{ warnf "CVE feed shortcode. KEP-3203: CVE feed does not comply with JSON feed v1.1." }} -{{ end }} +{{- $url := .Site.Params.cveFeedBucket }} +{{- $feed := "" -}} + +{{- with resources.GetRemote $url -}} + {{- if .Err -}} + + {{- $message := printf "Failed to retrieve CVE data: %s" .Err -}} + {{- if eq hugo.Environment "production" -}} + {{- errorf $message -}} + {{- else -}} + {{- warnf $message -}} + {{- end -}} + {{- else -}} + + {{- $feed = .Content | transform.Unmarshal -}} + {{- if ne $feed.version "https://jsonfeed.org/version/1.1" -}} + {{- $warningMessage := "CVE feed shortcode. KEP-3203: CVE feed does not comply with JSON feed v1.1." -}} + {{- if eq hugo.Environment "production" -}} + {{- errorf $warningMessage -}} + {{- else -}} + {{- warnf $warningMessage -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- else -}} + + {{- $message := printf "Unable to fetch CVE data from the specified URL: %q" $url -}} + {{- if eq hugo.Environment "production" -}} + {{- errorf $message -}} + {{- else -}} + {{- warnf $message -}} + {{- end -}} +{{- end -}} + + +{{ if ne $feed nil }} @@ -21,3 +53,4 @@ {{ end }}
{{ T "cve_table" }} {{ printf (T "cve_table_date_format_string") ($feed._kubernetes_io.updated_at | time.Format (T "cve_table_date_format")) }}
+{{- end -}} diff --git a/layouts/shortcodes/release-binaries.html b/layouts/shortcodes/release-binaries.html index 6eef11dda22b0..f34b6ac16dbeb 100644 --- a/layouts/shortcodes/release-binaries.html +++ b/layouts/shortcodes/release-binaries.html @@ -1,6 +1,29 @@ -{{ $response := getJSON "https://raw.githubusercontent.com/kubernetes-sigs/downloadkubernetes/master/dist/release_binaries.json" }} +{{- $url := "https://raw.githubusercontent.com/kubernetes-sigs/downloadkubernetes/master/dist/release_binaries.json" }} +{{- $response := "" }} +{{- with resources.GetRemote $url -}} + {{- if .Err -}} + {{- $message := printf "Failed to retrieve release binaries data: %s" .Err -}} + {{- if eq hugo.Environment "production" -}} + {{- errorf $message -}} + {{- else -}} + {{- warnf $message -}} + {{- end -}} + {{- else -}} + {{- $response = .Content | transform.Unmarshal }} + {{- end -}} +{{- else -}} + {{ $message := printf "Unable to fetch release binaries data from the specified URL: %q" $url -}} + {{- if eq hugo.Environment "production" -}} + {{- errorf $message -}} + {{- else -}} + {{- warnf $message -}} + {{- end -}} +{{- end -}} + + +{{ if ne $response nil }} {{ $currentVersion := site.Params.version }} {{ $Binaries := slice }} @@ -127,4 +150,5 @@ - \ No newline at end of file + +{{- end -}} \ No newline at end of file From 27b286157948f8a4b69c13d72f5fcea5c3e952b8 Mon Sep 17 00:00:00 2001 From: xin gu <418294249@qq.com> Date: Fri, 22 Mar 2024 10:42:21 +0800 Subject: [PATCH 11/23] sync container-runtimes coarse-parallel-processing-work-queue fine-parallel-processing-work-queue --- .../production-environment/container-runtimes.md | 11 +++++------ .../job/coarse-parallel-processing-work-queue.md | 2 +- .../job/fine-parallel-processing-work-queue.md | 15 ++++++++++++--- 3 files changed, 18 insertions(+), 10 deletions(-) diff --git a/content/zh-cn/docs/setup/production-environment/container-runtimes.md b/content/zh-cn/docs/setup/production-environment/container-runtimes.md 
index eb42a5a20722f..223b9cae319aa 100644 --- a/content/zh-cn/docs/setup/production-environment/container-runtimes.md +++ b/content/zh-cn/docs/setup/production-environment/container-runtimes.md @@ -570,10 +570,10 @@ This config option supports live configuration reload to apply this change: `sys {{< note >}} -以下操作假设你使用 [`cri-dockerd`](https://github.com/Mirantis/cri-dockerd) 适配器来将 +以下操作假设你使用 [`cri-dockerd`](https://mirantis.github.io/cri-dockerd/) 适配器来将 Docker Engine 与 Kubernetes 集成。 {{< /note >}} @@ -585,10 +585,9 @@ Docker Engine 与 Kubernetes 集成。 指南为你的 Linux 发行版安装 Docker。 -2. 按照源代码仓库中的说明安装 [`cri-dockerd`](https://github.com/Mirantis/cri-dockerd)。 +2. 请按照文档中的安装部分指示来安装 [`cri-dockerd`](https://mirantis.github.io/cri-dockerd/usage/install)。 +要启动一个 Redis 实例,你需要创建 Redis Pod 和 Redis 服务: + +```shell +kubectl apply -f https://k8s.io/examples/application/job/redis/redis-pod.yaml +kubectl apply -f https://k8s.io/examples/application/job/redis/redis-service.yaml +``` + 在这个例子中,每个 Pod 处理了队列中的多个项目,直到队列中没有项目时便退出。 @@ -286,8 +296,7 @@ the other pods to complete too. 
这依赖于工作程序在完成工作时发出信号。 工作程序以成功退出的形式发出信号表示工作队列已经为空。 所以,只要有**任意**一个工作程序成功退出,控制器就知道工作已经完成了,所有的 Pod 将很快会退出。 -因此,我们需要将 Job 的完成计数(Completion Count)设置为 1。 -尽管如此,Job 控制器还是会等待其它 Pod 完成。 +因此,你不需要设置 Job 的完成次数。Job 控制器还是会等待其它 Pod 完成。 + +### 验证 Windows 集群的操作性 {#validating-windows-cluster-operability} + +Kubernetes 项目提供了 **Windows 操作准备** 规范,配备了结构化的测试套件。 +这个套件分为两组测试:核心和扩展。每组测试都包含了针对特定场景的分类测试。 +它可以用来验证 Windows 和混合系统(混合了 Linux 节点)的所有功能,实现全面覆盖。 + +要在新创建的集群上搭建此项目, +请参考[项目指南](https://github.com/kubernetes-sigs/windows-operational-readiness/blob/main/README.md)中的说明。 + 本部分包含以下有关节点的参考主题: * Kubelet 的 [Checkpoint API](/zh-cn/docs/reference/node/kubelet-checkpoint-api/) * 一系列[关于 dockershim 移除和使用兼容 CRI 运行时的文章](/zh-cn/docs/reference/node/topics-on-dockershim-and-cri-compatible-runtimes/) + +* [Kubelet 设备管理器 API 版本](/zh-cn/docs/reference/node/device-plugin-api-versions) + +* [由 kubelet 填充的节点标签](/zh-cn/docs/reference/node/node-labels) + * [节点 `.status` 信息](/zh-cn/docs/reference/node/node-status/) * + + 你还可以从 Kubernetes 文档的其他地方阅读节点的详细参考信息,包括: * [节点指标数据](/zh-cn/docs/reference/instrumentation/node-metrics)。 -* [CRI Pod & 容器指标](/docs/reference/instrumentation/cri-pod-container-metrics). 
+ +* [CRI Pod & 容器指标](/zh-cn/docs/reference/instrumentation/cri-pod-container-metrics)。 From 1f41bea3c7d34e47bad15d98a686895bc3c00b66 Mon Sep 17 00:00:00 2001 From: ydFu Date: Fri, 22 Mar 2024 14:53:29 +0800 Subject: [PATCH 16/23] Update error rendering in node-lable.md Signed-off-by: ydFu --- content/zh-cn/docs/reference/node/node-labels.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/content/zh-cn/docs/reference/node/node-labels.md b/content/zh-cn/docs/reference/node/node-labels.md index 543d1aafedc2a..7a27954c9c567 100644 --- a/content/zh-cn/docs/reference/node/node-labels.md +++ b/content/zh-cn/docs/reference/node/node-labels.md @@ -1,7 +1,8 @@ +--- content_type: "reference" title: 由 kubelet 填充的节点标签 weight: 40 - +--- -## 预设标签 +## 预设标签 {#preset-labels} Kubernetes 在节点上设置的预设标签有: From c1e7578efbcc300aa38909abdc64a96928c7b51f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Manuel=20R=C3=BCger?= Date: Mon, 11 Mar 2024 22:46:30 +0100 Subject: [PATCH 17/23] Add section about kube-state-metrics --- .../concepts/cluster-administration/addons.md | 4 ++ .../kube-state-metrics.md | 46 +++++++++++++++++++ 2 files changed, 50 insertions(+) create mode 100644 content/en/docs/concepts/cluster-administration/kube-state-metrics.md diff --git a/content/en/docs/concepts/cluster-administration/addons.md b/content/en/docs/concepts/cluster-administration/addons.md index 2b5de17c8c348..b34fb3752623e 100644 --- a/content/en/docs/concepts/cluster-administration/addons.md +++ b/content/en/docs/concepts/cluster-administration/addons.md @@ -109,6 +109,10 @@ installation instructions. The list does not try to be exhaustive. [Events](/docs/reference/kubernetes-api/cluster-resources/event-v1/) or [Node conditions](/docs/concepts/architecture/nodes/#condition). 
+## Instrumentation + +* [kube-state-metrics](/docs/concepts/cluster-administration/kube-state-metrics) + ## Legacy Add-ons There are several other add-ons documented in the deprecated diff --git a/content/en/docs/concepts/cluster-administration/kube-state-metrics.md b/content/en/docs/concepts/cluster-administration/kube-state-metrics.md new file mode 100644 index 0000000000000..4be0318fdf867 --- /dev/null +++ b/content/en/docs/concepts/cluster-administration/kube-state-metrics.md @@ -0,0 +1,46 @@ +--- +title: Metrics for Kubernetes Object States +content_type: concept +weight: 75 +description: >- + kube-state-metrics, an add-on agent to generate and expose cluster-level metrics. +--- + +The state of Kubernetes objects in the Kubernetes API can be exposed as metrics. +An add-on agent called [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics) can connect to the Kubernetes API server and expose an HTTP endpoint with metrics generated from the state of individual objects in the cluster. +It exposes various information about the state of objects like labels and annotations, startup and termination times, status or the phase the object currently is in. +For example, containers running in pods create a `kube_pod_container_info` metric. +This includes the name of the container, the name of the pod it is part of, the {{< glossary_tooltip text="namespace" term_id="namespace" >}} the pod is running in, the name of the container image, the ID of the image, the image name from the spec of the container, the ID of the running container and the ID of the pod as labels. + +{{% thirdparty-content single="true" %}} + +An external component that is able to scrape the endpoint of kube-state-metrics (for example via Prometheus) can now be used to enable the following use cases.
+
+## Example: using metrics from kube-state-metrics to query the cluster state {#example-kube-state-metrics-query-1}
+
+Metric series generated by kube-state-metrics are helpful for gaining further insight into the state of the cluster, as they can be used for querying.
+
+If you use Prometheus or another tool that uses the same query language, the following PromQL query returns a series for each pod that is not ready:
+
+```
+count(kube_pod_status_ready{condition="false"}) by (namespace, pod)
+```
+
+## Example: alerting based on metrics from kube-state-metrics {#example-kube-state-metrics-alert-1}
+
+Metrics generated from kube-state-metrics also allow for alerting on issues in the cluster.
+
+If you use Prometheus or a similar tool that uses the same alert rule language, the following alert will fire if there are pods that have been in a `Terminating` state for more than 5 minutes:
+
+```yaml
+groups:
+- name: Pod state
+  rules:
+  - alert: PodsBlockedInTerminatingState
+    expr: count(kube_pod_deletion_timestamp) by (namespace, pod) * count(kube_pod_status_reason{reason="NodeLost"} == 0) by (namespace, pod) > 0
+    for: 5m
+    labels:
+      severity: page
+    annotations:
+      summary: Pod {{$labels.namespace}}/{{$labels.pod}} blocked in Terminating state.
+``` From 8c2fc0e57988a93d4e004b8e964c107d373b9d8f Mon Sep 17 00:00:00 2001 From: Arhell Date: Sat, 23 Mar 2024 00:14:31 +0200 Subject: [PATCH 18/23] [es] Update Weave Net link to newly supported fork --- content/es/docs/concepts/cluster-administration/addons.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/cluster-administration/addons.md b/content/es/docs/concepts/cluster-administration/addons.md index 85a9ced075a91..4d0488caa887c 100644 --- a/content/es/docs/concepts/cluster-administration/addons.md +++ b/content/es/docs/concepts/cluster-administration/addons.md @@ -74,7 +74,7 @@ En esta página se listan algunos de los complementos disponibles con sus respec Pods y entornos no Kubernetes con visibilidad y supervisión de la seguridad. * [Romana](https://github.com/romana) es una solución de red de capa 3 para las redes de Pods que también son compatibles con la API de [NetworkPolicy](/docs/concepts/services-networking/network-policies/). -* [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) +* [Weave Net](https://github.com/rajch/weave#using-weave-on-kubernetes) proporciona redes y políticas de red, funciona en ambos lados de una partición de red y no requiere una base de datos externa. From ab13afe034de6fcdcb2184de71ca778574ef95bc Mon Sep 17 00:00:00 2001 From: Michal Srb Date: Sat, 23 Mar 2024 21:47:21 +0100 Subject: [PATCH 19/23] Fix typo Missing space. 
--- content/en/docs/reference/labels-annotations-taints/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/reference/labels-annotations-taints/_index.md b/content/en/docs/reference/labels-annotations-taints/_index.md index 4849f60f9aabb..1e4fd86c149db 100644 --- a/content/en/docs/reference/labels-annotations-taints/_index.md +++ b/content/en/docs/reference/labels-annotations-taints/_index.md @@ -54,7 +54,7 @@ Type: Label Example: `app.kubernetes.io/created-by: "controller-manager"` -Used on: All Objects (typically used on[workload resources](/docs/reference/kubernetes-api/workload-resources/)). +Used on: All Objects (typically used on [workload resources](/docs/reference/kubernetes-api/workload-resources/)). The controller/user who created this resource. From a3662828dd9c4de2edcb7d053d60811159f22edb Mon Sep 17 00:00:00 2001 From: Arhell Date: Sun, 24 Mar 2024 10:02:42 +0200 Subject: [PATCH 20/23] [zh] Update Weave Net link to newly supported fork --- content/zh-cn/docs/concepts/cluster-administration/addons.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/zh-cn/docs/concepts/cluster-administration/addons.md b/content/zh-cn/docs/concepts/cluster-administration/addons.md index 4e8f9f96f4513..3cbfb67451180 100644 --- a/content/zh-cn/docs/concepts/cluster-administration/addons.md +++ b/content/zh-cn/docs/concepts/cluster-administration/addons.md @@ -148,7 +148,7 @@ Add-on 扩展了 Kubernetes 的功能。 * [Spiderpool](https://github.com/spidernet-io/spiderpool) is an underlay and RDMA networking solution for Kubernetes. Spiderpool is supported on bare metal, virtual machines, and public cloud environments. -* [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) +* [Weave Net](https://github.com/rajch/weave#using-weave-on-kubernetes) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database. 
--> @@ -161,7 +161,7 @@ Add-on 扩展了 Kubernetes 的功能。 [NetworkPolicy](/zh-cn/docs/concepts/services-networking/network-policies/) API。 * [Spiderpool](https://github.com/spidernet-io/spiderpool) 为 Kubernetes 提供了下层网络和 RDMA 高速网络解决方案,兼容裸金属、虚拟机和公有云等运行环境。 -* [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) +* [Weave Net](https://github.com/rajch/weave#using-weave-on-kubernetes) 提供在网络分组两端参与工作的联网和网络策略,并且不需要额外的数据库。 上述例子中 `effect` 使用的值为 `NoSchedule`,你也可以使用另外一个值 `PreferNoSchedule`。 @@ -389,7 +389,7 @@ are true. The following taints are built in: * `node.kubernetes.io/network-unavailable`: Node's network is unavailable. * `node.kubernetes.io/unschedulable`: Node is unschedulable. * `node.cloudprovider.kubernetes.io/uninitialized`: When the kubelet is started - with "external" cloud provider, this taint is set on a node to mark it + with an "external" cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint. --> diff --git a/content/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints.md b/content/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints.md index ef6cf4e31b4d4..3a3db98d464a5 100644 --- a/content/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints.md +++ b/content/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints.md @@ -496,7 +496,7 @@ can use a manifest similar to: @@ -981,7 +981,7 @@ section of the enhancement proposal about Pod topology spread constraints. because, in this case, those topology domains won't be considered until there is at least one node in them. - You can work around this by using an cluster autoscaling tool that is aware of + You can work around this by using a cluster autoscaling tool that is aware of Pod topology spread constraints and is also aware of the overall set of topology domains. 
--> From 009cb6eae8e557a41019e76b381c404cd8abf54f Mon Sep 17 00:00:00 2001 From: Arhell Date: Mon, 25 Mar 2024 00:14:29 +0200 Subject: [PATCH 22/23] [ru] Update Weave Net link to newly supported fork --- content/ru/docs/concepts/cluster-administration/addons.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/ru/docs/concepts/cluster-administration/addons.md b/content/ru/docs/concepts/cluster-administration/addons.md index 53be662b177dc..cccba3562a665 100644 --- a/content/ru/docs/concepts/cluster-administration/addons.md +++ b/content/ru/docs/concepts/cluster-administration/addons.md @@ -30,7 +30,7 @@ content_type: concept * [NSX-T](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html) плагин для контейнера (NCP) обеспечивающий интеграцию между VMware NSX-T и контейнерами оркестраторов, таких как Kubernetes, а так же интеграцию между NSX-T и контейнеров на основе платформы CaaS/PaaS, таких как Pivotal Container Service (PKS) и OpenShift. * [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) - эта платформа SDN, которая обеспечивает сетевое взаимодействие на основе политик между Kubernetes подами и не Kubernetes окружением, с отображением и мониторингом безопасности. * [Romana](https://github.com/romana/romana) - это сетевое решение уровня 3 для сетей подов, которое также поддерживает [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Подробности установки Kubeadm доступны [здесь](https://github.com/romana/romana/tree/master/containerize). -* [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) предоставляет сеть и обеспечивает сетевую политику, будет работать на обеих сторонах сетевого раздела и не требует внешней базы данных. 
+* [Weave Net](https://github.com/rajch/weave#using-weave-on-kubernetes) предоставляет сеть и обеспечивает сетевую политику, будет работать на обеих сторонах сетевого раздела и не требует внешней базы данных. ## Обнаружение служб From a6332247d9e857a8dad3b3b0d674a193d11a0d03 Mon Sep 17 00:00:00 2001 From: ydFu Date: Mon, 25 Mar 2024 17:11:01 +0800 Subject: [PATCH 23/23] [zh] sync contribute\analytics.md Signed-off-by: ydFu --- content/zh-cn/docs/contribute/analytics.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/content/zh-cn/docs/contribute/analytics.md b/content/zh-cn/docs/contribute/analytics.md index 9050cf0cf5be9..1a8d5f3c41798 100644 --- a/content/zh-cn/docs/contribute/analytics.md +++ b/content/zh-cn/docs/contribute/analytics.md @@ -26,13 +26,13 @@ This page contains information about the kubernetes.io analytics dashboard. -[查看仪表板](https://datastudio.google.com/reporting/fede2672-b2fd-402a-91d2-7473bdb10f04)。 +[查看仪表板](https://lookerstudio.google.com/u/0/reporting/fe615dc5-59b0-4db5-8504-ef9eacb663a9/page/4VDGB/)。 -此仪表板使用 Google Data Studio 构建,显示使用 Google Analytics 在 kubernetes.io 上收集的信息。 +此仪表板使用 [Google Looker Studio](https://lookerstudio.google.com/overview) 构建,并显示自 2022 年 8 月以来使用 Google Analytics 4 在 kubernetes.io 上收集的信息。 -### 使用仪表板 +### 使用仪表板 {#using-the-dashboard} 默认情况下,仪表板显示过去 30 天收集的所有分析。 使用日期选择器查看来自不同日期范围的数据。