diff --git a/content/en/blog/_posts/2024-03-19-go-workspaces.md b/content/en/blog/_posts/2024-03-19-go-workspaces.md deleted file mode 100644 index ee246e0a67c82..0000000000000 --- a/content/en/blog/_posts/2024-03-19-go-workspaces.md +++ /dev/null @@ -1,210 +0,0 @@ ---- -layout: blog -title: 'Using Go workspaces in Kubernetes' -date: 2024-03-19T08:30:00-08:00 -slug: go-workspaces-in-kubernetes -canonicalUrl: https://www.kubernetes.dev/blog/2024/03/19/go-workspaces-in-kubernetes/ ---- - -**Author:** Tim Hockin (Google) - -The [Go programming language](https://go.dev/) has played a huge role in the -success of Kubernetes. As Kubernetes has grown, matured, and pushed the bounds -of what "regular" projects do, the Go project team has also grown and evolved -the language and tools. In recent releases, Go introduced a feature called -"workspaces" which was aimed at making projects like Kubernetes easier to -manage. - -We've just completed a major effort to adopt workspaces in Kubernetes, and the -results are great. Our codebase is simpler and less error-prone, and we're no -longer off on our own technology island. - -## GOPATH and Go modules - -Kubernetes is one of the most visible open source projects written in Go. The -earliest versions of Kubernetes, dating back to 2014, were built with Go 1.3. -Today, 10 years later, Go is up to version 1.22 — and let's just say that a -_whole lot_ has changed. - -In 2014, Go development was entirely based on -[`GOPATH`](https://go.dev/wiki/GOPATH). As a Go project, Kubernetes lived by the -rules of `GOPATH`. In the buildup to Kubernetes 1.4 (mid 2016), we introduced a -directory tree called `staging`. This allowed us to pretend to be multiple -projects, but still exist within one git repository (which had advantages for -development velocity). The magic of `GOPATH` allowed this to work. - -Kubernetes depends on several code-generation tools which have to find, read, -and write Go code packages. Unsurprisingly, those tools grew to rely on -`GOPATH`. This all worked pretty well until Go introduced modules in Go 1.11 -(mid 2018). - -Modules were an answer to many issues around `GOPATH`. They gave more control to -projects on how to track and manage dependencies, and were overall a great step -forward. Kubernetes adopted them. However, modules had one major drawback — -most Go tools could not work on multiple modules at once. This was a problem -for our code-generation tools and scripts. - -Thankfully, Go offered a way to temporarily disable modules (`GO111MODULE` to -the rescue). We could get the dependency tracking benefits of modules, but the -flexibility of `GOPATH` for our tools. We even wrote helper tools to create fake -`GOPATH` trees and played tricks with symlinks in our vendor directory (which -holds a snapshot of our external dependencies), and we made it all work. - -And for the last 5 years it _has_ worked pretty well. That is, it worked well -unless you looked too closely at what was happening. Woe be upon you if you -had the misfortune to work on one of the code-generation tools, or the build -system, or the ever-expanding suite of bespoke shell scripts we use to glue -everything together. - -## The problems - -Like any large software project, we Kubernetes developers have all learned to -deal with a certain amount of constant low-grade pain. Our custom `staging` -mechanism let us bend the rules of Go; it was a little clunky, but when it -worked (which was most of the time) it worked pretty well. 
When it failed, the -errors were inscrutable and un-Googleable — nobody else was doing the silly -things we were doing. Usually the fix was to re-run one or more of the `update-*` -shell scripts in our aptly named `hack` directory. - -As time went on we drifted farther and farher from "regular" Go projects. At -the same time, Kubernetes got more and more popular. For many people, -Kubernetes was their first experience with Go, and it wasn't always a good -experience. - -Our eccentricities also impacted people who consumed some of our code, such as -our client library and the code-generation tools (which turned out to be useful -in the growing ecosystem of custom resources). The tools only worked if you -stored your code in a particular `GOPATH`-compatible directory structure, even -though `GOPATH` had been replaced by modules more than four years prior. - -This state persisted because of the confluence of three factors: -1. Most of the time it only hurt a little (punctuated with short moments of - more acute pain). -1. Kubernetes was still growing in popularity - we all had other, more urgent - things to work on. -1. The fix was not obvious, and whatever we came up with was going to be both - hard and tedious. - -As a Kubernetes maintainer and long-timer, my fingerprints were all over the -build system, the code-generation tools, and the `hack` scripts. While the pain -of our mess may have been low _on_average_, I was one of the people who felt it -regularly. - -## Enter workspaces - -Along the way, the Go language team saw what we (and others) were doing and -didn't love it. They designed a new way of stitching multiple modules together -into a new _workspace_ concept. Once enrolled in a workspace, Go tools had -enough information to work in any directory structure and across modules, -without `GOPATH` or symlinks or other dirty tricks. - -When I first saw this proposal I knew that this was the way out. This was how -to break the logjam. If workspaces was the technical solution, then I would -put in the work to make it happen. - -## The work - -Adopting workspaces was deceptively easy. I very quickly had the codebase -compiling and running tests with workspaces enabled. I set out to purge the -repository of anything `GOPATH` related. That's when I hit the first real bump - -the code-generation tools. - -We had about a dozen tools, totalling several thousand lines of code. All of -them were built using an internal framework called -[gengo](https://github.com/kubernetes/gengo), which was built on Go's own -parsing libraries. There were two main problems: - -1. Those parsing libraries didn't understand modules or workspaces. -1. `GOPATH` allowed us to pretend that Go _package paths_ and directories on - disk were interchangeable in trivial ways. They are not. - -Switching to a -[modules- and workspaces-aware parsing](https://pkg.go.dev/golang.org/x/tools/go/packages) -library was the first step. Then I had to make a long series of changes to -each of the code-generation tools. Critically, I had to find a way to do it -that was possible for some other person to review! I knew that I needed -reviewers who could cover the breadth of changes and reviewers who could go -into great depth on specific topics like gengo and Go's module semantics. 
-Looking at the history for the areas I was touching, I asked Joe Betz and Alex -Zielenski (SIG API Machinery) to go deep on gengo and code-generation, Jordan -Liggitt (SIG Architecture and all-around wizard) to cover Go modules and -vendoring and the `hack` scripts, and Antonio Ojea (wearing his SIG Testing -hat) to make sure the whole thing made sense. We agreed that a series of small -commits would be easiest to review, even if the codebase might not actually -work at each commit. - -Sadly, these were not mechanical changes. I had to dig into each tool to -figure out where they were processing disk paths versus where they were -processing package names, and where those were being conflated. I made -extensive use of the [delve](https://github.com/go-delve/delve) debugger, which -I just can't say enough good things about. - -One unfortunate result of this work was that I had to break compatibility. The -gengo library simply did not have enough information to process packages -outside of GOPATH. After discussion with gengo and Kubernetes maintainers, we -agreed to make [gengo/v2](https://github.com/kubernetes/gengo/tree/master/v2). -I also used this as an opportunity to clean up some of the gengo APIs and the -tools' CLIs to be more understandable and not conflate packages and -directories. For example you can't just string-join directory names and -assume the result is a valid package name. - -Once I had the code-generation tools converted, I shifted attention to the -dozens of scripts in the `hack` directory. One by one I had to run them, debug, -and fix failures. Some of them needed minor changes and some needed to be -rewritten. - -Along the way we hit some cases that Go did not support, like workspace -vendoring. Kubernetes depends on vendoring to ensure that our dependencies are -always available, even if their source code is removed from the internet (it -has happened more than once!). After discussing with the Go team, and looking -at possible workarounds, they decided the right path was to -[implement workspace vendoring](https://github.com/golang/go/issues/60056). - -The eventual Pull Request contained over 200 individual commits. - -## Results - -Now that this work has been merged, what does this mean for Kubernetes users? -Pretty much nothing. No features were added or changed. This work was not -about fixing bugs (and hopefully none were introduced). - -This work was mainly for the benefit of the Kubernetes project, to help and -simplify the lives of the core maintainers. In fact, it would not be a lie to -say that it was rather self-serving - my own life is a little bit better now. - -This effort, while unusually large, is just a tiny fraction of the overall -maintenance work that needs to be done. Like any large project, we have lots of -"technical debt" — tools that made point-in-time assumptions and need -revisiting, internal APIs whose organization doesn't make sense, code which -doesn't follow conventions which didn't exist at the time, and tests which -aren't as rigorous as they could be, just to throw out a few examples. This -work is often called "grungy" or "dirty", but in reality it's just an -indication that the project has grown and evolved. I love this stuff, but -there's far more than I can ever tackle on my own, which makes it an -interesting way for people to get involved. As our unofficial motto goes: -"chop wood and carry water". 
- -Kubernetes used to be a case-study of how _not_ to do large-scale Go -development, but now our codebase is simpler (and in some cases faster!) and -more consistent. Things that previously seemed like they _should_ work, but -didn't, now behave as expected. - -Our project is now a little more "regular". Not completely so, but we're -getting closer. - -## Thanks - -This effort would not have been possible without tons of support. - -First, thanks to the Go team for hearing our pain, taking feedback, and solving -the problems for us. - -Special mega-thanks goes to Michael Matloob, on the Go team at Google, who -designed and implemented workspaces. He guided me every step of the way, and -was very generous with his time, answering all my questions, no matter how -dumb. - -Writing code is just half of the work, so another special thanks to my -reviewers: Jordan Liggitt, Joe Betz, Alexander Zielenski, and Antonio Ojea. -These folks brought a wealth of expertise and attention to detail, and made -this work smarter and safer. diff --git a/content/en/docs/concepts/architecture/nodes.md b/content/en/docs/concepts/architecture/nodes.md index 344469b25faaa..62bf7842bc2cd 100644 --- a/content/en/docs/concepts/architecture/nodes.md +++ b/content/en/docs/concepts/architecture/nodes.md @@ -607,6 +607,8 @@ Learn more about the following: * [API definition for Node](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core). * [Node](https://git.k8s.io/design-proposals-archive/architecture/architecture.md#the-kubernetes-node) section of the architecture design document. +* [Cluster autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling/) to + manage the number and size of nodes in your cluster. * [Taints and Tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/). * [Node Resource Managers](/docs/concepts/policy/node-resource-managers/). * [Resource Management for Windows nodes](/docs/concepts/configuration/windows-resource-management/). diff --git a/content/en/docs/concepts/cluster-administration/_index.md b/content/en/docs/concepts/cluster-administration/_index.md index 2c7baf1d9871e..456069c980175 100644 --- a/content/en/docs/concepts/cluster-administration/_index.md +++ b/content/en/docs/concepts/cluster-administration/_index.md @@ -52,6 +52,7 @@ Before choosing a guide, here are some considerations: ## Managing a cluster * Learn how to [manage nodes](/docs/concepts/architecture/nodes/). + * Read about [cluster autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling/). * Learn how to set up and manage the [resource quota](/docs/concepts/policy/resource-quotas/) for shared clusters. diff --git a/content/en/docs/concepts/cluster-administration/addons.md b/content/en/docs/concepts/cluster-administration/addons.md index 2b5de17c8c348..c46fc6940ec51 100644 --- a/content/en/docs/concepts/cluster-administration/addons.md +++ b/content/en/docs/concepts/cluster-administration/addons.md @@ -1,7 +1,7 @@ --- title: Installing Addons content_type: concept -weight: 120 +weight: 150 --- @@ -109,6 +109,10 @@ installation instructions. The list does not try to be exhaustive. [Events](/docs/reference/kubernetes-api/cluster-resources/event-v1/) or [Node conditions](/docs/concepts/architecture/nodes/#condition). 
+## Instrumentation + +* [kube-state-metrics](/docs/concepts/cluster-administration/kube-state-metrics) + ## Legacy Add-ons There are several other add-ons documented in the deprecated diff --git a/content/en/docs/concepts/cluster-administration/cluster-autoscaling.md b/content/en/docs/concepts/cluster-administration/cluster-autoscaling.md new file mode 100644 index 0000000000000..495943b65a27a --- /dev/null +++ b/content/en/docs/concepts/cluster-administration/cluster-autoscaling.md @@ -0,0 +1,117 @@ +--- +title: Cluster Autoscaling +linkTitle: Cluster Autoscaling +description: >- + Automatically manage the nodes in your cluster to adapt to demand. +content_type: concept +weight: 120 +--- + + + +Kubernetes requires {{< glossary_tooltip text="nodes" term_id="node" >}} in your cluster to +run {{< glossary_tooltip text="pods" term_id="pod" >}}. This means providing capacity for +the workload Pods and for Kubernetes itself. + +You can adjust the amount of resources available in your cluster automatically: +_node autoscaling_. You can either change the number of nodes, or change the capacity +that nodes provide. The first approach is referred to as _horizontal scaling_, while the +second is referred to as _vertical scaling_. + +Kubernetes can even provide multidimensional automatic scaling for nodes. + + + +## Manual node management + +You can manually manage node-level capacity, where you configure a fixed amount of nodes; +you can use this approach even if the provisioning (the process to set up, manage, and +decommission) for these nodes is automated. + +This page is about taking the next step, and automating management of the amount of +node capacity (CPU, memory, and other node resources) available in your cluster. + +## Automatic horizontal scaling {#autoscaling-horizontal} + +### Cluster Autoscaler + +You can use the [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) to manage the scale of your nodes automatically. +The cluster autoscaler can integrate with a cloud provider, or with Kubernetes' +[cluster API](https://github.com/kubernetes/autoscaler/blob/c6b754c359a8563050933a590f9a5dece823c836/cluster-autoscaler/cloudprovider/clusterapi/README.md), +to achieve the actual node management that's needed. + +The cluster autoscaler adds nodes when there are unschedulable Pods, and +removes nodes when those nodes are empty. + +#### Cloud provider integrations {#cluster-autoscaler-providers} + +The [README](https://github.com/kubernetes/autoscaler/tree/c6b754c359a8563050933a590f9a5dece823c836/cluster-autoscaler#readme) +for the cluster autoscaler lists some of the cloud provider integrations +that are available. + +## Cost-aware multidimensional scaling {#autoscaling-multi-dimension} + +### Karpenter {#autoscaler-karpenter} + +[Karpenter](https://karpenter.sh/) supports direct node management, via +plugins that integrate with specific cloud providers, and can manage nodes +for you whilst optimizing for overall cost. + +> Karpenter automatically launches just the right compute resources to +> handle your cluster's applications. It is designed to let you take +> full advantage of the cloud with fast and simple compute provisioning +> for Kubernetes clusters. + +The Karpenter tool is designed to integrate with a cloud provider that +provides API-driven server management, and where the price information for +available servers is also available via a web API. 
+
+For example, if you start some more Pods in your cluster, the Karpenter
+tool might buy a new node that is larger than one of the nodes you are
+already using, and then shut down an existing node once the new node
+is in service.
+
+#### Cloud provider integrations {#karpenter-providers}
+
+{{% thirdparty-content vendor="true" %}}
+
+There are integrations available between Karpenter's core and the following
+cloud providers:
+
+- [Amazon Web Services](https://github.com/aws/karpenter-provider-aws)
+- [Azure](https://github.com/Azure/karpenter-provider-azure)
+
+
+## Related components
+
+### Descheduler
+
+The [descheduler](https://github.com/kubernetes-sigs/descheduler) can help you
+consolidate Pods onto a smaller number of nodes, to help with automatic scale down
+when the cluster has spare capacity.
+
+### Sizing a workload based on cluster size
+
+#### Cluster proportional autoscaler
+
+For workloads that need to be scaled based on the size of the cluster (for example
+`cluster-dns` or other system components), you can use the
+[_Cluster Proportional Autoscaler_](https://github.com/kubernetes-sigs/cluster-proportional-autoscaler).
+The Cluster Proportional Autoscaler watches the number of schedulable nodes
+and cores, and scales the number of replicas of the target workload accordingly.
+
+#### Cluster proportional vertical autoscaler
+
+If the number of replicas should stay the same, you can scale your workloads vertically according to the cluster size using
+the [_Cluster Proportional Vertical Autoscaler_](https://github.com/kubernetes-sigs/cluster-proportional-vertical-autoscaler).
+This project is in **beta** and can be found on GitHub.
+
+While the Cluster Proportional Autoscaler scales the number of replicas of a workload, the Cluster Proportional Vertical Autoscaler
+adjusts the resource requests for a workload (for example a Deployment or DaemonSet) based on the number of nodes and/or cores
+in the cluster.
+
+
+## {{% heading "whatsnext" %}}
+
+- Read about [workload-level autoscaling](/docs/concepts/workloads/autoscaling/)
diff --git a/content/en/docs/concepts/cluster-administration/kube-state-metrics.md b/content/en/docs/concepts/cluster-administration/kube-state-metrics.md
new file mode 100644
index 0000000000000..4be0318fdf867
--- /dev/null
+++ b/content/en/docs/concepts/cluster-administration/kube-state-metrics.md
@@ -0,0 +1,46 @@
+---
+title: Metrics for Kubernetes Object States
+content_type: concept
+weight: 75
+description: >-
+  kube-state-metrics, an add-on agent to generate and expose cluster-level metrics.
+---
+
+The state of Kubernetes objects in the Kubernetes API can be exposed as metrics.
+An add-on agent called [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics) can connect to the Kubernetes API server and expose an HTTP endpoint with metrics generated from the state of individual objects in the cluster.
+It exposes various information about the state of objects, such as labels and annotations, startup and termination times, status, or the phase the object is currently in.
+For example, containers running in pods create a `kube_pod_container_info` metric.
+This includes the name of the container, the name of the pod it is part of, the {{< glossary_tooltip text="namespace" term_id="namespace" >}} the pod is running in, the name of the container image, the ID of the image, the image name from the spec of the container, the ID of the running container and the ID of the pod as labels.
+
+{{% thirdparty-content single="true" %}}
+
+An external component that is able to scrape the endpoint of kube-state-metrics (for example via Prometheus) can now be used to enable the following use cases.
+
+## Example: using metrics from kube-state-metrics to query the cluster state {#example-kube-state-metrics-query-1}
+
+Metric series generated by kube-state-metrics are helpful for gathering further insights into the cluster, as they can be used for querying.
+
+If you use Prometheus or another tool that uses the same query language, the following PromQL query returns the number of pods that are not ready:
+
+```
+count(kube_pod_status_ready{condition="false"}) by (namespace, pod)
+```
+
+## Example: alerting based on metrics from kube-state-metrics {#example-kube-state-metrics-alert-1}
+
+Metrics generated from kube-state-metrics also allow for alerting on issues in the cluster.
+ +If you use Prometheus or a similar tool that uses the same alert rule language, the following alert will fire if there are pods that have been in a `Terminating` state for more than 5 minutes: + +```yaml +groups: +- name: Pod state + rules: + - alert: PodsBlockedInTerminatingState + expr: count(kube_pod_deletion_timestamp) by (namespace, pod) * count(kube_pod_status_reason{reason="NodeLost"} == 0) by (namespace, pod) > 0 + for: 5m + labels: + severity: page + annotations: + summary: Pod {{$labels.namespace}}/{{$labels.pod}} blocked in Terminating state. +``` diff --git a/content/en/docs/concepts/windows/intro.md b/content/en/docs/concepts/windows/intro.md index 5bb8c60fe6d95..dcf4db95b9910 100644 --- a/content/en/docs/concepts/windows/intro.md +++ b/content/en/docs/concepts/windows/intro.md @@ -408,6 +408,17 @@ reported previously and comment with your experience on the issue and add additi logs. SIG Windows channel on the Kubernetes Slack is also a great avenue to get some initial support and troubleshooting ideas prior to creating a ticket. +### Validating the Windows cluster operability + +The Kubernetes project provides a _Windows Operational Readiness_ specification, +accompanied by a structured test suite. This suite is split into two sets of tests, +core and extended, each containing categories aimed at testing specific areas. +It can be used to validate all the functionalities of a Windows and hybrid system +(mixed with Linux nodes) with full coverage. + +To set up the project on a newly created cluster, refer to the instructions in the +[project guide](https://github.com/kubernetes-sigs/windows-operational-readiness/blob/main/README.md). + ## Deployment tools The kubeadm tool helps you to deploy a Kubernetes cluster, providing the control @@ -422,4 +433,4 @@ For a detailed explanation of Windows distribution channels see the Information on the different Windows Server servicing channels including their support models can be found at -[Windows Server servicing channels](https://docs.microsoft.com/en-us/windows-server/get-started/servicing-channels-comparison). +[Windows Server servicing channels](https://docs.microsoft.com/en-us/windows-server/get-started/servicing-channels-comparison). \ No newline at end of file diff --git a/content/en/docs/concepts/workloads/autoscaling.md b/content/en/docs/concepts/workloads/autoscaling.md index 49691d702569e..ff154d71599be 100644 --- a/content/en/docs/concepts/workloads/autoscaling.md +++ b/content/en/docs/concepts/workloads/autoscaling.md @@ -129,13 +129,8 @@ its [`Cron` scaler](https://keda.sh/docs/2.13/scalers/cron/). The `Cron` scaler If scaling workloads isn't enough to meet your needs, you can also scale your cluster infrastructure itself. Scaling the cluster infrastructure normally means adding or removing {{< glossary_tooltip text="nodes" term_id="node" >}}. -This can be done using one of two available autoscalers: - -- [**Cluster Autoscaler**](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) -- [**Karpenter**](https://github.com/kubernetes-sigs/karpenter?tab=readme-ov-file) - -Both scalers work by watching for pods marked as _unschedulable_ or _underutilized_ nodes and then adding or -removing nodes as needed. +Read [cluster autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling/) +for more information. ## {{% heading "whatsnext" %}} @@ -144,3 +139,4 @@ removing nodes as needed. 
- [HorizontalPodAutoscaler Walkthrough](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/)
- [Resize Container Resources In-Place](/docs/tasks/configure-pod-container/resize-container-resources/)
- [Autoscale the DNS Service in a Cluster](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/)
+- Learn about [cluster autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling/)
diff --git a/content/en/docs/contribute/analytics.md b/content/en/docs/contribute/analytics.md
index f910c1d91b28a..57dea1f3fb3ad 100644
--- a/content/en/docs/contribute/analytics.md
+++ b/content/en/docs/contribute/analytics.md
@@ -14,9 +14,9 @@ This page contains information about the kubernetes.io analytics dashboard.
-[View the dashboard](https://datastudio.google.com/reporting/fede2672-b2fd-402a-91d2-7473bdb10f04).
+[View the dashboard](https://lookerstudio.google.com/u/0/reporting/fe615dc5-59b0-4db5-8504-ef9eacb663a9/page/4VDGB/).
-This dashboard is built using Google Data Studio and shows information collected on kubernetes.io using Google Analytics.
+This dashboard is built using [Google Looker Studio](https://lookerstudio.google.com/overview) and shows information collected on kubernetes.io using Google Analytics 4 since August 2022.
### Using the dashboard
diff --git a/content/en/docs/reference/kubectl/introduction.md b/content/en/docs/reference/kubectl/introduction.md
new file mode 100644
index 0000000000000..ac0aff2b590d6
--- /dev/null
+++ b/content/en/docs/reference/kubectl/introduction.md
@@ -0,0 +1,71 @@
+---
+title: "Introduction To Kubectl"
+content_type: concept
+weight: 1
+---
+
+Kubectl is the Kubernetes CLI version of a Swiss Army knife, and can do many things.
+
+While this section is focused on using Kubectl to declaratively manage Applications in Kubernetes, it
+also covers other Kubectl functions.
+
+## Command Families
+
+Most Kubectl commands typically fall into one of a few categories:
+
+| Type | Used For | Description |
+|----------------------------------------|----------------------------|----------------------------------------------------|
+| Declarative Resource Management | Deployment and Operations (e.g. GitOps) | Declaratively manage Kubernetes Workloads using Resource Config |
+| Imperative Resource Management | Development Only | Run commands to manage Kubernetes Workloads using Command Line arguments and flags |
+| Printing Workload State | Debugging | Print information about Workloads |
+| Interacting with Containers | Debugging | Exec, Attach, Cp, Logs |
+| Cluster Management | Cluster Ops | Drain and Cordon Nodes |
+
+## Declarative Application Management
+
+The preferred approach for managing Resources is through
+declarative files called Resource Config used with the Kubectl *Apply* command.
+This command reads a local (or remote) file structure and modifies cluster state to
+reflect the declared intent.
+
+{{< alert color="success" title="Apply" >}}
+Apply is the preferred mechanism for managing Resources in a Kubernetes cluster.
+{{< /alert >}}
+
+## Printing state about Workloads
+
+Users will need to view Workload state.
+
+- Printing summarized state and information about Resources
+- Printing complete state and information about Resources
+- Printing specific fields from Resources
+- Querying Resources matching labels
+
+## Debugging Workloads
+
+Kubectl supports debugging by providing commands for:
+
+- Printing Container logs
+- Printing cluster events
+- Exec or attaching to a Container
+- Copying files from Containers in the cluster to a user's filesystem
+
+## Cluster Management
+
+On occasion, users may need to perform operations on the Nodes of a cluster. Kubectl supports
+commands to drain Workloads from a Node so that it can be decommissioned or debugged.
+
+## Porcelain
+
+Users may find using Resource Config overly verbose for *Development* and prefer to work with
+the cluster *imperatively* with a shell-like workflow. Kubectl offers porcelain commands for
+generating and modifying Resources.
+
+- Generating + creating Resources such as Deployments, StatefulSets, Services, ConfigMaps, etc.
+- Setting fields on Resources
+- Editing (live) Resources in a text editor
+
+{{< alert color="warning" title="Porcelain For Dev Only" >}}
+Porcelain commands are time-saving for experimenting with workloads in a dev cluster, but shouldn't
+be used for production.
+{{< /alert >}}
diff --git a/content/en/docs/reference/kubectl/quick-reference.md b/content/en/docs/reference/kubectl/quick-reference.md
index d0af256b4d2a1..59251382f1a4f 100644
--- a/content/en/docs/reference/kubectl/quick-reference.md
+++ b/content/en/docs/reference/kubectl/quick-reference.md
@@ -371,6 +371,7 @@ kubectl port-forward my-pod 5000:6000 # Listen on port 5000 on the
kubectl exec my-pod -- ls /                       # Run command in existing pod (1 container case)
kubectl exec --stdin --tty my-pod -- /bin/sh      # Interactive shell access to a running pod (1 container case)
kubectl exec my-pod -c my-container -- ls /       # Run command in existing pod (multi-container case)
+kubectl top pod                                   # Show metrics for all pods in the default namespace
kubectl top pod POD_NAME --containers             # Show metrics for a given pod and its containers
kubectl top pod POD_NAME --sort-by=cpu            # Show metrics for a given pod and sort it by 'cpu' or 'memory'
```
@@ -411,6 +412,7 @@ kubectl exec deploy/my-deployment -- ls # run command in first
kubectl cordon my-node                            # Mark my-node as unschedulable
kubectl drain my-node                             # Drain my-node in preparation for maintenance
kubectl uncordon my-node                          # Mark my-node as schedulable
+kubectl top node                                  # Show metrics for all nodes
kubectl top node my-node                          # Show metrics for a given node
kubectl cluster-info                              # Display addresses of the master and services
kubectl cluster-info dump                         # Dump current cluster state to stdout
diff --git a/content/en/docs/reference/labels-annotations-taints/_index.md b/content/en/docs/reference/labels-annotations-taints/_index.md
index e525235cfeb87..a549b8d8737b9 100644
--- a/content/en/docs/reference/labels-annotations-taints/_index.md
+++ b/content/en/docs/reference/labels-annotations-taints/_index.md
@@ -54,7 +54,7 @@ Type: Label
Example: `app.kubernetes.io/created-by: "controller-manager"`
-Used on: All Objects (typically used on[workload resources](/docs/reference/kubernetes-api/workload-resources/)).
+Used on: All Objects (typically used on [workload resources](/docs/reference/kubernetes-api/workload-resources/)).
The controller/user who created this resource.
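Because `app.kubernetes.io/created-by` is an ordinary label, objects that carry it can be selected with a standard label selector. A minimal sketch, reusing the `controller-manager` value from the example above; the resource types queried here are only illustrative:

```shell
# List Deployments and Pods that declare they were created by "controller-manager",
# filtering on the recommended app.kubernetes.io/created-by label.
kubectl get deployments,pods --all-namespaces \
  --selector app.kubernetes.io/created-by=controller-manager
```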
diff --git a/content/en/docs/setup/best-practices/cluster-large.md b/content/en/docs/setup/best-practices/cluster-large.md index 808a1c47510a3..f5f4292b44c3a 100644 --- a/content/en/docs/setup/best-practices/cluster-large.md +++ b/content/en/docs/setup/best-practices/cluster-large.md @@ -121,9 +121,7 @@ Learn more about [Vertical Pod Autoscaler](https://github.com/kubernetes/autosca and how you can use it to scale cluster components, including cluster-critical addons. -* The [cluster autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#readme) -integrates with a number of cloud providers to help you run the right number of -nodes for the level of resource demand in your cluster. +* Read about [cluster autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling/) * The [addon resizer](https://github.com/kubernetes/autoscaler/tree/master/addon-resizer#readme) helps you in resizing the addons automatically as your cluster's scale changes. diff --git a/content/en/docs/setup/production-environment/_index.md b/content/en/docs/setup/production-environment/_index.md index 7aeb4eb1919bf..02332ca4e0e98 100644 --- a/content/en/docs/setup/production-environment/_index.md +++ b/content/en/docs/setup/production-environment/_index.md @@ -183,15 +183,9 @@ simply as *nodes*). to help determine how many nodes you need, based on the number of pods and containers you need to run. If you are managing nodes yourself, this can mean purchasing and installing your own physical equipment. -- *Autoscale nodes*: Most cloud providers support - [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#readme) - to replace unhealthy nodes or grow and shrink the number of nodes as demand requires. See the - [Frequently Asked Questions](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md) - for how the autoscaler works and - [Deployment](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#deployment) - for how it is implemented by different cloud providers. For on-premises, there - are some virtualization platforms that can be scripted to spin up new nodes - based on demand. +- *Autoscale nodes*: Read [Cluster Autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling) to learn about the + tools available to automatically manage your nodes and the capacity they + provide. - *Set up node health checks*: For important workloads, you want to make sure that the nodes and pods running on those nodes are healthy. Using the [Node Problem Detector](/docs/tasks/debug/debug-cluster/monitor-node-health/) diff --git a/content/en/docs/tasks/debug/debug-cluster/resource-usage-monitoring.md b/content/en/docs/tasks/debug/debug-cluster/resource-usage-monitoring.md index eea8419e7f169..618458dcad0c7 100644 --- a/content/en/docs/tasks/debug/debug-cluster/resource-usage-monitoring.md +++ b/content/en/docs/tasks/debug/debug-cluster/resource-usage-monitoring.md @@ -84,7 +84,7 @@ solutions. The choice of monitoring platform depends heavily on your needs, budget, and technical resources. Kubernetes does not recommend any specific metrics pipeline; [many options](https://landscape.cncf.io/?group=projects-and-products&view-mode=card#observability-and-analysis--monitoring) are available. 
Your monitoring system should be capable of handling the [OpenMetrics](https://openmetrics.io/) metrics -transmission standard, and needs to chosen to best fit in to your overall design and deployment of +transmission standard and needs to be chosen to best fit into your overall design and deployment of your infrastructure platform. diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md index 6108c9bf94c1a..f25317fef9a0d 100644 --- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md +++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md @@ -596,8 +596,9 @@ guidelines, which cover this exact use case. ## {{% heading "whatsnext" %}} -If you configure autoscaling in your cluster, you may also want to consider running a -cluster-level autoscaler such as [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler). +If you configure autoscaling in your cluster, you may also want to consider using +[cluster autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling/) +to ensure you are running the right number of nodes. For more information on HorizontalPodAutoscaler: diff --git a/content/en/docs/tasks/tools/included/verify-kubectl.md b/content/en/docs/tasks/tools/included/verify-kubectl.md index b4eb0fe08d2a3..f7d2ad94cbc32 100644 --- a/content/en/docs/tasks/tools/included/verify-kubectl.md +++ b/content/en/docs/tasks/tools/included/verify-kubectl.md @@ -31,7 +31,7 @@ The connection to the server was refused - did you specify th ``` For example, if you are intending to run a Kubernetes cluster on your laptop (locally), -you will need a tool like Minikube to be installed first and then re-run the commands stated above. +you will need a tool like [Minikube](https://minikube.sigs.k8s.io/docs/start/) to be installed first and then re-run the commands stated above. If kubectl cluster-info returns the url response but you can't access your cluster, to check whether it is configured properly, use: diff --git a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html index 3c3a27afa0617..3411ce8f54d0c 100644 --- a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html +++ b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html @@ -153,7 +153,7 @@

View the app

export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
echo Name of the Pod: $POD_NAME

You can access the Pod through the proxied API, by running:

-

curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/

+

curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME:8080/proxy/

In order for the new Deployment to be accessible without using the proxy, a Service is required which will be explained in Module 4.
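Putting the steps above together, a minimal sketch of the whole flow; it assumes `kubectl proxy` is running in a second terminal (it serves on localhost:8001 by default) and that the container listens on port 8080, as in the proxy URL above:

```shell
# In a second terminal: open a proxy from your local machine to the cluster's API server.
kubectl proxy

# Back in the first terminal: look up the Pod name, then request the app through the proxy.
export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME:8080/proxy/
```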

diff --git a/content/es/docs/concepts/cluster-administration/addons.md b/content/es/docs/concepts/cluster-administration/addons.md index 85a9ced075a91..4d0488caa887c 100644 --- a/content/es/docs/concepts/cluster-administration/addons.md +++ b/content/es/docs/concepts/cluster-administration/addons.md @@ -74,7 +74,7 @@ En esta página se listan algunos de los complementos disponibles con sus respec Pods y entornos no Kubernetes con visibilidad y supervisión de la seguridad. * [Romana](https://github.com/romana) es una solución de red de capa 3 para las redes de Pods que también son compatibles con la API de [NetworkPolicy](/docs/concepts/services-networking/network-policies/). -* [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) +* [Weave Net](https://github.com/rajch/weave#using-weave-on-kubernetes) proporciona redes y políticas de red, funciona en ambos lados de una partición de red y no requiere una base de datos externa. diff --git a/content/ru/docs/concepts/cluster-administration/addons.md b/content/ru/docs/concepts/cluster-administration/addons.md index 53be662b177dc..cccba3562a665 100644 --- a/content/ru/docs/concepts/cluster-administration/addons.md +++ b/content/ru/docs/concepts/cluster-administration/addons.md @@ -30,7 +30,7 @@ content_type: concept * [NSX-T](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html) плагин для контейнера (NCP) обеспечивающий интеграцию между VMware NSX-T и контейнерами оркестраторов, таких как Kubernetes, а так же интеграцию между NSX-T и контейнеров на основе платформы CaaS/PaaS, таких как Pivotal Container Service (PKS) и OpenShift. * [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) - эта платформа SDN, которая обеспечивает сетевое взаимодействие на основе политик между Kubernetes подами и не Kubernetes окружением, с отображением и мониторингом безопасности. * [Romana](https://github.com/romana/romana) - это сетевое решение уровня 3 для сетей подов, которое также поддерживает [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Подробности установки Kubeadm доступны [здесь](https://github.com/romana/romana/tree/master/containerize). -* [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) предоставляет сеть и обеспечивает сетевую политику, будет работать на обеих сторонах сетевого раздела и не требует внешней базы данных. +* [Weave Net](https://github.com/rajch/weave#using-weave-on-kubernetes) предоставляет сеть и обеспечивает сетевую политику, будет работать на обеих сторонах сетевого раздела и не требует внешней базы данных. ## Обнаружение служб diff --git a/content/zh-cn/docs/concepts/cluster-administration/addons.md b/content/zh-cn/docs/concepts/cluster-administration/addons.md index 4e8f9f96f4513..3cbfb67451180 100644 --- a/content/zh-cn/docs/concepts/cluster-administration/addons.md +++ b/content/zh-cn/docs/concepts/cluster-administration/addons.md @@ -148,7 +148,7 @@ Add-on 扩展了 Kubernetes 的功能。 * [Spiderpool](https://github.com/spidernet-io/spiderpool) is an underlay and RDMA networking solution for Kubernetes. Spiderpool is supported on bare metal, virtual machines, and public cloud environments. -* [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) +* [Weave Net](https://github.com/rajch/weave#using-weave-on-kubernetes) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database. 
--> @@ -161,7 +161,7 @@ Add-on 扩展了 Kubernetes 的功能。 [NetworkPolicy](/zh-cn/docs/concepts/services-networking/network-policies/) API。 * [Spiderpool](https://github.com/spidernet-io/spiderpool) 为 Kubernetes 提供了下层网络和 RDMA 高速网络解决方案,兼容裸金属、虚拟机和公有云等运行环境。 -* [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) +* [Weave Net](https://github.com/rajch/weave#using-weave-on-kubernetes) 提供在网络分组两端参与工作的联网和网络策略,并且不需要额外的数据库。 上述例子中 `effect` 使用的值为 `NoSchedule`,你也可以使用另外一个值 `PreferNoSchedule`。 @@ -389,7 +389,7 @@ are true. The following taints are built in: * `node.kubernetes.io/network-unavailable`: Node's network is unavailable. * `node.kubernetes.io/unschedulable`: Node is unschedulable. * `node.cloudprovider.kubernetes.io/uninitialized`: When the kubelet is started - with "external" cloud provider, this taint is set on a node to mark it + with an "external" cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint. --> diff --git a/content/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints.md b/content/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints.md index ef6cf4e31b4d4..3a3db98d464a5 100644 --- a/content/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints.md +++ b/content/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints.md @@ -496,7 +496,7 @@ can use a manifest similar to: @@ -981,7 +981,7 @@ section of the enhancement proposal about Pod topology spread constraints. because, in this case, those topology domains won't be considered until there is at least one node in them. - You can work around this by using an cluster autoscaling tool that is aware of + You can work around this by using a cluster autoscaling tool that is aware of Pod topology spread constraints and is also aware of the overall set of topology domains. --> diff --git a/content/zh-cn/docs/concepts/windows/intro.md b/content/zh-cn/docs/concepts/windows/intro.md index c67b4abac2bf4..013d37286dde6 100644 --- a/content/zh-cn/docs/concepts/windows/intro.md +++ b/content/zh-cn/docs/concepts/windows/intro.md @@ -772,7 +772,28 @@ troubleshooting ideas prior to creating a ticket. 
并随附日志信息。Kubernetes Slack 上的 SIG Windows 频道也是一个很好的途径, 可以在创建工单之前获得一些初始支持和故障排查思路。 -## {{% heading "whatsnext" %}} + + +### 验证 Windows 集群的操作性 {#validating-windows-cluster-operability} + +Kubernetes 项目提供了 **Windows 操作准备** 规范,配备了结构化的测试套件。 +这个套件分为两组测试:核心和扩展。每组测试都包含了针对特定场景的分类测试。 +它可以用来验证 Windows 和混合系统(混合了 Linux 节点)的所有功能,实现全面覆盖。 + +要在新创建的集群上搭建此项目, +请参考[项目指南](https://github.com/kubernetes-sigs/windows-operational-readiness/blob/main/README.md)中的说明。 + -[查看仪表板](https://datastudio.google.com/reporting/fede2672-b2fd-402a-91d2-7473bdb10f04)。 +[查看仪表板](https://lookerstudio.google.com/u/0/reporting/fe615dc5-59b0-4db5-8504-ef9eacb663a9/page/4VDGB/)。 -此仪表板使用 Google Data Studio 构建,显示使用 Google Analytics 在 kubernetes.io 上收集的信息。 +此仪表板使用 [Google Looker Studio](https://lookerstudio.google.com/overview) 构建,并显示自 2022 年 8 月以来使用 Google Analytics 4 在 kubernetes.io 上收集的信息。 -### 使用仪表板 +### 使用仪表板 {#using-the-dashboard} 默认情况下,仪表板显示过去 30 天收集的所有分析。 使用日期选择器查看来自不同日期范围的数据。 diff --git a/content/zh-cn/docs/reference/node/_index.md b/content/zh-cn/docs/reference/node/_index.md index e2f4aa17486d3..93a394fb0d390 100644 --- a/content/zh-cn/docs/reference/node/_index.md +++ b/content/zh-cn/docs/reference/node/_index.md @@ -15,21 +15,34 @@ This section contains the following reference topics about nodes: * the kubelet's [checkpoint API](/docs/reference/node/kubelet-checkpoint-api/) * a list of [Articles on dockershim Removal and on Using CRI-compatible Runtimes](/docs/reference/node/topics-on-dockershim-and-cri-compatible-runtimes/) -* [Node `.status` information](/docs/reference/node/node-status/) -You can also read node reference details from elsewhere in the -Kubernetes documentation, including: +* [Kubelet Device Manager API Versions](/docs/reference/node/device-plugin-api-versions) -* [Node Metrics Data](/docs/reference/instrumentation/node-metrics). -* [CRI Pod & Container Metrics](/docs/reference/instrumentation/cri-pod-container-metrics). +* [Node Labels Populated By The Kubelet](/docs/reference/node/node-labels) + +* [Node `.status` information](/docs/reference/node/node-status/) --> 本部分包含以下有关节点的参考主题: * Kubelet 的 [Checkpoint API](/zh-cn/docs/reference/node/kubelet-checkpoint-api/) * 一系列[关于 dockershim 移除和使用兼容 CRI 运行时的文章](/zh-cn/docs/reference/node/topics-on-dockershim-and-cri-compatible-runtimes/) + +* [Kubelet 设备管理器 API 版本](/zh-cn/docs/reference/node/device-plugin-api-versions) + +* [由 kubelet 填充的节点标签](/zh-cn/docs/reference/node/node-labels) + * [节点 `.status` 信息](/zh-cn/docs/reference/node/node-status/) * + + 你还可以从 Kubernetes 文档的其他地方阅读节点的详细参考信息,包括: * [节点指标数据](/zh-cn/docs/reference/instrumentation/node-metrics)。 -* [CRI Pod & 容器指标](/docs/reference/instrumentation/cri-pod-container-metrics). 
+ +* [CRI Pod & 容器指标](/zh-cn/docs/reference/instrumentation/cri-pod-container-metrics)。 diff --git a/content/zh-cn/docs/reference/node/node-labels.md b/content/zh-cn/docs/reference/node/node-labels.md index 543d1aafedc2a..7a27954c9c567 100644 --- a/content/zh-cn/docs/reference/node/node-labels.md +++ b/content/zh-cn/docs/reference/node/node-labels.md @@ -1,7 +1,8 @@ +--- content_type: "reference" title: 由 kubelet 填充的节点标签 weight: 40 - +--- -## 预设标签 +## 预设标签 {#preset-labels} Kubernetes 在节点上设置的预设标签有: diff --git a/content/zh-cn/docs/setup/production-environment/container-runtimes.md b/content/zh-cn/docs/setup/production-environment/container-runtimes.md index eb42a5a20722f..778865eb39275 100644 --- a/content/zh-cn/docs/setup/production-environment/container-runtimes.md +++ b/content/zh-cn/docs/setup/production-environment/container-runtimes.md @@ -359,7 +359,7 @@ Return to this step once you've created a valid `config.toml` configuration file 要在系统上安装 containerd,请按照[开始使用 containerd](https://github.com/containerd/containerd/blob/main/docs/getting-started.md) 的说明进行操作。创建有效的 `config.toml` 配置文件后返回此步骤。 -{{< tabs name="找到 config.toml 文件" >}} +{{< tabs name="finding-your-config-toml-file" >}} {{% tab name="Linux" %}} -以下操作假设你使用 [`cri-dockerd`](https://github.com/Mirantis/cri-dockerd) 适配器来将 +以下操作假设你使用 [`cri-dockerd`](https://mirantis.github.io/cri-dockerd/) 适配器来将 Docker Engine 与 Kubernetes 集成。 {{< /note >}} @@ -585,10 +585,9 @@ Docker Engine 与 Kubernetes 集成。 指南为你的 Linux 发行版安装 Docker。 -2. 按照源代码仓库中的说明安装 [`cri-dockerd`](https://github.com/Mirantis/cri-dockerd)。 +2. 请按照文档中的安装部分指示来安装 [`cri-dockerd`](https://mirantis.github.io/cri-dockerd/usage/install)。 +要启动一个 Redis 实例,你需要创建 Redis Pod 和 Redis 服务: + +```shell +kubectl apply -f https://k8s.io/examples/application/job/redis/redis-pod.yaml +kubectl apply -f https://k8s.io/examples/application/job/redis/redis-service.yaml +``` + 在这个例子中,每个 Pod 处理了队列中的多个项目,直到队列中没有项目时便退出。 @@ -286,8 +296,7 @@ the other pods to complete too. 这依赖于工作程序在完成工作时发出信号。 工作程序以成功退出的形式发出信号表示工作队列已经为空。 所以,只要有**任意**一个工作程序成功退出,控制器就知道工作已经完成了,所有的 Pod 将很快会退出。 -因此,我们需要将 Job 的完成计数(Completion Count)设置为 1。 -尽管如此,Job 控制器还是会等待其它 Pod 完成。 +因此,你不需要设置 Job 的完成次数。Job 控制器还是会等待其它 Pod 完成。 + {{- if ne $feed nil -}} {{ time.Format "Mon, 02 Jan 2006 15:04:05 -0700" $feed._kubernetes_io.updated_at | safeHTML }} + {{- end -}} {{ with .OutputFormats.Get "RSS" -}} {{ printf "" .Permalink .MediaType | safeHTML }} {{ end -}} diff --git a/layouts/shortcodes/cve-feed.html b/layouts/shortcodes/cve-feed.html index 8b829079fba89..02a191886451a 100644 --- a/layouts/shortcodes/cve-feed.html +++ b/layouts/shortcodes/cve-feed.html @@ -1,7 +1,39 @@ -{{ $feed := getJSON .Site.Params.cveFeedBucket }} -{{ if ne $feed.version "https://jsonfeed.org/version/1.1" }} - {{ warnf "CVE feed shortcode. KEP-3203: CVE feed does not comply with JSON feed v1.1." }} -{{ end }} +{{- $url := .Site.Params.cveFeedBucket }} +{{- $feed := "" -}} + +{{- with resources.GetRemote $url -}} + {{- if .Err -}} + + {{- $message := printf "Failed to retrieve CVE data: %s" .Err -}} + {{- if eq hugo.Environment "production" -}} + {{- errorf $message -}} + {{- else -}} + {{- warnf $message -}} + {{- end -}} + {{- else -}} + + {{- $feed = .Content | transform.Unmarshal -}} + {{- if ne $feed.version "https://jsonfeed.org/version/1.1" -}} + {{- $warningMessage := "CVE feed shortcode. KEP-3203: CVE feed does not comply with JSON feed v1.1." 
-}} + {{- if eq hugo.Environment "production" -}} + {{- errorf $warningMessage -}} + {{- else -}} + {{- warnf $warningMessage -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- else -}} + + {{- $message := printf "Unable to fetch CVE data from the specified URL: %q" $url -}} + {{- if eq hugo.Environment "production" -}} + {{- errorf $message -}} + {{- else -}} + {{- warnf $message -}} + {{- end -}} +{{- end -}} + + +{{ if ne $feed nil }} @@ -21,3 +53,4 @@ {{ end }}
{{ T "cve_table" }} {{ printf (T "cve_table_date_format_string") ($feed._kubernetes_io.updated_at | time.Format (T "cve_table_date_format")) }}
+{{- end -}} diff --git a/layouts/shortcodes/release-binaries.html b/layouts/shortcodes/release-binaries.html index 6eef11dda22b0..f34b6ac16dbeb 100644 --- a/layouts/shortcodes/release-binaries.html +++ b/layouts/shortcodes/release-binaries.html @@ -1,6 +1,29 @@ -{{ $response := getJSON "https://raw.githubusercontent.com/kubernetes-sigs/downloadkubernetes/master/dist/release_binaries.json" }} +{{- $url := "https://raw.githubusercontent.com/kubernetes-sigs/downloadkubernetes/master/dist/release_binaries.json" }} +{{- $response := "" }} +{{- with resources.GetRemote $url -}} + {{- if .Err -}} + {{- $message := printf "Failed to retrieve release binaries data: %s" .Err -}} + {{- if eq hugo.Environment "production" -}} + {{- errorf $message -}} + {{- else -}} + {{- warnf $message -}} + {{- end -}} + {{- else -}} + {{- $response = .Content | transform.Unmarshal }} + {{- end -}} +{{- else -}} + {{ $message := printf "Unable to fetch release binaries data from the specified URL: %q" $url -}} + {{- if eq hugo.Environment "production" -}} + {{- errorf $message -}} + {{- else -}} + {{- warnf $message -}} + {{- end -}} +{{- end -}} + + +{{ if ne $response nil }} {{ $currentVersion := site.Params.version }} {{ $Binaries := slice }} @@ -127,4 +150,5 @@ - \ No newline at end of file + +{{- end -}} \ No newline at end of file