diff --git a/content/en/docs/concepts/architecture/control-plane-node-communication.md b/content/en/docs/concepts/architecture/control-plane-node-communication.md index df384800e9606..785040cda316e 100644 --- a/content/en/docs/concepts/architecture/control-plane-node-communication.md +++ b/content/en/docs/concepts/architecture/control-plane-node-communication.md @@ -33,7 +33,7 @@ are allowed. Nodes should be provisioned with the public root certificate for the cluster such that they can connect securely to the API server along with valid client credentials. A good approach is that the client credentials provided to the kubelet are in the form of a client certificate. See -[kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) +[kubelet TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) for automated provisioning of kubelet client certificates. Pods that wish to connect to the API server can do so securely by leveraging a service account so diff --git a/content/en/docs/concepts/architecture/nodes.md b/content/en/docs/concepts/architecture/nodes.md index 2321fc6474ff9..f57179df8ea57 100644 --- a/content/en/docs/concepts/architecture/nodes.md +++ b/content/en/docs/concepts/architecture/nodes.md @@ -479,29 +479,24 @@ these pods will be stuck in terminating status on the shutdown node forever. To mitigate the above situation, a user can manually add the taint `node kubernetes.io/out-of-service` with either `NoExecute` or `NoSchedule` effect to a Node marking it out-of-service. -If the `NodeOutOfServiceVolumeDetach` [feature gate](/docs/reference/ -command-line-tools-reference/feature-gates/) is enabled on -`kube-controller-manager`, and a Node is marked out-of-service with this taint, the -pods on the node will be forcefully deleted if there are no matching tolerations on -it and volume detach operations for the pods terminating on the node will happen -immediately. This allows the Pods on the out-of-service node to recover quickly on a -different node. +If the `NodeOutOfServiceVolumeDetach` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) +is enabled on `kube-controller-manager`, and a Node is marked out-of-service with this taint, the +pods on the node will be forcefully deleted if there are no matching tolerations on it and volume +detach operations for the pods terminating on the node will happen immediately. This allows the +Pods on the out-of-service node to recover quickly on a different node. During a non-graceful shutdown, Pods are terminated in the two phases: 1. Force delete the Pods that do not have matching `out-of-service` tolerations. 2. Immediately perform detach volume operation for such pods. - {{< note >}} - Before adding the taint `node.kubernetes.io/out-of-service` , it should be verified -that the node is already in shutdown or power off state (not in the middle of -restarting). + that the node is already in shutdown or power off state (not in the middle of + restarting). - The user is required to manually remove the out-of-service taint after the pods are -moved to a new node and the user has checked that the shutdown node has been -recovered since the user was the one who originally added the taint. - - + moved to a new node and the user has checked that the shutdown node has been + recovered since the user was the one who originally added the taint.
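+
+  For example, assuming a placeholder node name `my-node` (the taint value shown is
+  illustrative; the key and the effect are what matter), the taint could be added and,
+  once the node has recovered, removed again as follows:
+
+  ```shell
+  # mark the shut-down node as out of service
+  kubectl taint nodes my-node node.kubernetes.io/out-of-service=nodeshutdown:NoExecute
+  # remove the taint after the pods have moved and the node has recovered
+  # (the trailing "-" removes a taint)
+  kubectl taint nodes my-node node.kubernetes.io/out-of-service=nodeshutdown:NoExecute-
+  ```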
{{< /note >}} ### Pod Priority based graceful node shutdown {#pod-priority-graceful-node-shutdown} diff --git a/content/en/docs/concepts/cluster-administration/_index.md b/content/en/docs/concepts/cluster-administration/_index.md index ace5297b330cf..d5d6a273e23af 100644 --- a/content/en/docs/concepts/cluster-administration/_index.md +++ b/content/en/docs/concepts/cluster-administration/_index.md @@ -11,31 +11,37 @@ no_list: true --- + The cluster administration overview is for anyone creating or administering a Kubernetes cluster. It assumes some familiarity with core Kubernetes [concepts](/docs/concepts/). - + ## Planning a cluster -See the guides in [Setup](/docs/setup/) for examples of how to plan, set up, and configure Kubernetes clusters. The solutions listed in this article are called *distros*. +See the guides in [Setup](/docs/setup/) for examples of how to plan, set up, and configure +Kubernetes clusters. The solutions listed in this article are called *distros*. - {{< note >}} - Not all distros are actively maintained. Choose distros which have been tested with a recent version of Kubernetes. - {{< /note >}} +{{< note >}} +Not all distros are actively maintained. Choose distros which have been tested with a recent +version of Kubernetes. +{{< /note >}} Before choosing a guide, here are some considerations: - - Do you want to try out Kubernetes on your computer, or do you want to build a high-availability, multi-node cluster? Choose distros best suited for your needs. - - Will you be using **a hosted Kubernetes cluster**, such as [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/), or **hosting your own cluster**? - - Will your cluster be **on-premises**, or **in the cloud (IaaS)**? Kubernetes does not directly support hybrid clusters. Instead, you can set up multiple clusters. - - **If you are configuring Kubernetes on-premises**, consider which [networking model](/docs/concepts/cluster-administration/networking/) fits best. - - Will you be running Kubernetes on **"bare metal" hardware** or on **virtual machines (VMs)**? - - Do you **want to run a cluster**, or do you expect to do **active development of Kubernetes project code**? If the - latter, choose an actively-developed distro. Some distros only use binary releases, but - offer a greater variety of choices. - - Familiarize yourself with the [components](/docs/concepts/overview/components/) needed to run a cluster. - +- Do you want to try out Kubernetes on your computer, or do you want to build a high-availability, + multi-node cluster? Choose distros best suited for your needs. +- Will you be using **a hosted Kubernetes cluster**, such as + [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/), or **hosting your own cluster**? +- Will your cluster be **on-premises**, or **in the cloud (IaaS)**? Kubernetes does not directly + support hybrid clusters. Instead, you can set up multiple clusters. +- **If you are configuring Kubernetes on-premises**, consider which + [networking model](/docs/concepts/cluster-administration/networking/) fits best. +- Will you be running Kubernetes on **"bare metal" hardware** or on **virtual machines (VMs)**? +- Do you **want to run a cluster**, or do you expect to do **active development of Kubernetes project code**? + If the latter, choose an actively-developed distro. Some distros only use binary releases, but + offer a greater variety of choices. +- Familiarize yourself with the [components](/docs/concepts/overview/components/) needed to run a cluster. 
## Managing a cluster @@ -45,29 +51,43 @@ Before choosing a guide, here are some considerations: ## Securing a cluster -* [Generate Certificates](/docs/tasks/administer-cluster/certificates/) describes the steps to generate certificates using different tool chains. +* [Generate Certificates](/docs/tasks/administer-cluster/certificates/) describes the steps to + generate certificates using different tool chains. -* [Kubernetes Container Environment](/docs/concepts/containers/container-environment/) describes the environment for Kubelet managed containers on a Kubernetes node. +* [Kubernetes Container Environment](/docs/concepts/containers/container-environment/) describes + the environment for Kubelet managed containers on a Kubernetes node. -* [Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access) describes how Kubernetes implements access control for its own API. +* [Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access) describes + how Kubernetes implements access control for its own API. -* [Authenticating](/docs/reference/access-authn-authz/authentication/) explains authentication in Kubernetes, including the various authentication options. +* [Authenticating](/docs/reference/access-authn-authz/authentication/) explains authentication in + Kubernetes, including the various authentication options. -* [Authorization](/docs/reference/access-authn-authz/authorization/) is separate from authentication, and controls how HTTP calls are handled. +* [Authorization](/docs/reference/access-authn-authz/authorization/) is separate from + authentication, and controls how HTTP calls are handled. -* [Using Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/) explains plug-ins which intercepts requests to the Kubernetes API server after authentication and authorization. +* [Using Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/) + explains plug-ins which intercept requests to the Kubernetes API server after authentication + and authorization. -* [Using Sysctls in a Kubernetes Cluster](/docs/tasks/administer-cluster/sysctl-cluster/) describes to an administrator how to use the `sysctl` command-line tool to set kernel parameters . +* [Using Sysctls in a Kubernetes Cluster](/docs/tasks/administer-cluster/sysctl-cluster/) + describes how an administrator can use the `sysctl` command-line tool to set + kernel parameters. -* [Auditing](/docs/tasks/debug/debug-cluster/audit/) describes how to interact with Kubernetes' audit logs. +* [Auditing](/docs/tasks/debug/debug-cluster/audit/) describes how to interact with Kubernetes' + audit logs. ### Securing the kubelet - * [Control Plane-Node communication](/docs/concepts/architecture/control-plane-node-communication/) - * [TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) - * [Kubelet authentication/authorization](/docs/reference/acess-authn-authz/kubelet-authn-authz/) + +* [Control Plane-Node communication](/docs/concepts/architecture/control-plane-node-communication/) +* [TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) +* [Kubelet authentication/authorization](/docs/reference/access-authn-authz/kubelet-authn-authz/) ## Optional Cluster Services -* [DNS Integration](/docs/concepts/services-networking/dns-pod-service/) describes how to resolve a DNS name directly to a Kubernetes service.
+* [DNS Integration](/docs/concepts/services-networking/dns-pod-service/) describes how to resolve + a DNS name directly to a Kubernetes service. + +* [Logging and Monitoring Cluster Activity](/docs/concepts/cluster-administration/logging/) + explains how logging in Kubernetes works and how to implement it. -* [Logging and Monitoring Cluster Activity](/docs/concepts/cluster-administration/logging/) explains how logging in Kubernetes works and how to implement it. diff --git a/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md b/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md index ee83692419ee7..8e4214991c7ac 100644 --- a/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md +++ b/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md @@ -8,21 +8,29 @@ card: --- -This page explains how Kubernetes objects are represented in the Kubernetes API, and how you can express them in `.yaml` format. - +This page explains how Kubernetes objects are represented in the Kubernetes API, and how you can +express them in `.yaml` format. ## Understanding Kubernetes objects {#kubernetes-objects} -*Kubernetes objects* are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster. Specifically, they can describe: +*Kubernetes objects* are persistent entities in the Kubernetes system. Kubernetes uses these +entities to represent the state of your cluster. Specifically, they can describe: * What containerized applications are running (and on which nodes) * The resources available to those applications * The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance -A Kubernetes object is a "record of intent"--once you create the object, the Kubernetes system will constantly work to ensure that object exists. By creating an object, you're effectively telling the Kubernetes system what you want your cluster's workload to look like; this is your cluster's *desired state*. +A Kubernetes object is a "record of intent"--once you create the object, the Kubernetes system +will constantly work to ensure that object exists. By creating an object, you're effectively +telling the Kubernetes system what you want your cluster's workload to look like; this is your +cluster's *desired state*. -To work with Kubernetes objects--whether to create, modify, or delete them--you'll need to use the [Kubernetes API](/docs/concepts/overview/kubernetes-api/). When you use the `kubectl` command-line interface, for example, the CLI makes the necessary Kubernetes API calls for you. You can also use the Kubernetes API directly in your own programs using one of the [Client Libraries](/docs/reference/using-api/client-libraries/). +To work with Kubernetes objects--whether to create, modify, or delete them--you'll need to use the +[Kubernetes API](/docs/concepts/overview/kubernetes-api/). When you use the `kubectl` command-line +interface, for example, the CLI makes the necessary Kubernetes API calls for you. You can also use +the Kubernetes API directly in your own programs using one of the +[Client Libraries](/docs/reference/using-api/client-libraries/). ### Object Spec and Status @@ -48,11 +56,17 @@ the status to match your spec. If any of those instances should fail between spec and status by making a correction--in this case, starting a replacement instance. 
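+
+As an illustration, here is a trimmed, hypothetical fragment of a Deployment object during
+such a correction (real objects carry many more fields; the values below are made up):
+
+```yaml
+spec:
+  replicas: 3            # desired state: the three replicas you asked for
+status:
+  availableReplicas: 2   # observed state: one instance is currently down
+```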
-For more information on the object spec, status, and metadata, see the [Kubernetes API Conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md). +For more information on the object spec, status, and metadata, see the +[Kubernetes API Conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md). ### Describing a Kubernetes object -When you create an object in Kubernetes, you must provide the object spec that describes its desired state, as well as some basic information about the object (such as a name). When you use the Kubernetes API to create the object (either directly or via `kubectl`), that API request must include that information as JSON in the request body. **Most often, you provide the information to `kubectl` in a .yaml file.** `kubectl` converts the information to JSON when making the API request. +When you create an object in Kubernetes, you must provide the object spec that describes its +desired state, as well as some basic information about the object (such as a name). When you use +the Kubernetes API to create the object (either directly or via `kubectl`), that API request must +include that information as JSON in the request body. **Most often, you provide the information to +`kubectl` in a .yaml file.** `kubectl` converts the information to JSON when making the API +request. Here's an example `.yaml` file that shows the required fields and object spec for a Kubernetes Deployment: @@ -81,7 +95,9 @@ In the `.yaml` file for the Kubernetes object you want to create, you'll need to * `metadata` - Data that helps uniquely identify the object, including a `name` string, `UID`, and optional `namespace` * `spec` - What state you desire for the object -The precise format of the object `spec` is different for every Kubernetes object, and contains nested fields specific to that object. The [Kubernetes API Reference](/docs/reference/kubernetes-api/) can help you find the spec format for all of the objects you can create using Kubernetes. +The precise format of the object `spec` is different for every Kubernetes object, and contains +nested fields specific to that object. The [Kubernetes API Reference](/docs/reference/kubernetes-api/) +can help you find the spec format for all of the objects you can create using Kubernetes. For example, see the [`spec` field](/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec) for the Pod API reference. @@ -103,5 +119,3 @@ detail the structure of that `.status` field, and its content for each different * Learn about [controllers](/docs/concepts/architecture/controller/) in Kubernetes. * [Using the Kubernetes API](/docs/reference/using-api/) explains some more API concepts. - - diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md index 470a5e5024150..a282b8455a4a0 100644 --- a/content/en/docs/concepts/workloads/controllers/replicaset.md +++ b/content/en/docs/concepts/workloads/controllers/replicaset.md @@ -13,9 +13,6 @@ weight: 20 A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods. - - - ## How a ReplicaSet works @@ -26,14 +23,14 @@ it should create to meet the number of replicas criteria. A ReplicaSet then fulf and deleting Pods as needed to reach the desired number. When a ReplicaSet needs to create new Pods, it uses its Pod template. 
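+
+As a sketch (the names and image below are placeholders, distinct from the examples used
+later in this page), the selector, replica count, and Pod template fit together like this:
+
+```yaml
+apiVersion: apps/v1
+kind: ReplicaSet
+metadata:
+  name: example-replicaset
+spec:
+  replicas: 3          # how many Pods to maintain
+  selector:
+    matchLabels:
+      app: example     # which Pods this ReplicaSet may acquire
+  template:            # the Pod template used when new Pods are needed
+    metadata:
+      labels:
+        app: example   # must match .spec.selector
+    spec:
+      containers:
+      - name: app
+        image: registry.example/app:v1   # placeholder image
+```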
-A ReplicaSet is linked to its Pods via the Pods' [metadata.ownerReferences](/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents) +A ReplicaSet is linked to its Pods via the Pods' [metadata.ownerReferences](/docs/concepts/architecture/garbage-collection/#owners-and-dependents) field, which specifies what resource the current object is owned by. All Pods acquired by a ReplicaSet have their owning ReplicaSet's identifying information within their ownerReferences field. It's through this link that the ReplicaSet knows of the state of the Pods it is maintaining and plans accordingly. -A ReplicaSet identifies new Pods to acquire by using its selector. If there is a Pod that has no OwnerReference or the -OwnerReference is not a {{< glossary_tooltip term_id="controller" >}} and it matches a ReplicaSet's selector, it will be immediately acquired by said -ReplicaSet. +A ReplicaSet identifies new Pods to acquire by using its selector. If there is a Pod that has no +OwnerReference or the OwnerReference is not a {{< glossary_tooltip term_id="controller" >}} and it +matches a ReplicaSet's selector, it will be immediately acquired by said ReplicaSet. ## When to use a ReplicaSet @@ -253,7 +250,9 @@ In the ReplicaSet, `.spec.template.metadata.labels` must match `spec.selector`, be rejected by the API. {{< note >}} -For 2 ReplicaSets specifying the same `.spec.selector` but different `.spec.template.metadata.labels` and `.spec.template.spec` fields, each ReplicaSet ignores the Pods created by the other ReplicaSet. +For 2 ReplicaSets specifying the same `.spec.selector` but different +`.spec.template.metadata.labels` and `.spec.template.spec` fields, each ReplicaSet ignores the +Pods created by the other ReplicaSet. {{< /note >}} ### Replicas @@ -267,11 +266,14 @@ If you do not specify `.spec.replicas`, then it defaults to 1. ### Deleting a ReplicaSet and its Pods -To delete a ReplicaSet and all of its Pods, use [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). The [Garbage collector](/docs/concepts/workloads/controllers/garbage-collection/) automatically deletes all of the dependent Pods by default. +To delete a ReplicaSet and all of its Pods, use +[`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). The +[Garbage collector](/docs/concepts/architecture/garbage-collection/) automatically deletes all of +the dependent Pods by default. + +When using the REST API or the `client-go` library, you must set `propagationPolicy` to +`Background` or `Foreground` in the `-d` option. For example: -When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Background` or `Foreground` in -the -d option. -For example: ```shell kubectl proxy --port=8080 curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \ @@ -281,9 +283,12 @@ curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/fron ### Deleting just a ReplicaSet -You can delete a ReplicaSet without affecting any of its Pods using [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) with the `--cascade=orphan` option. +You can delete a ReplicaSet without affecting any of its Pods using +[`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) +with the `--cascade=orphan` option. When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Orphan`. 
For example: + ```shell kubectl proxy --port=8080 curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \ @@ -295,7 +300,8 @@ Once the original is deleted, you can create a new ReplicaSet to replace it. As as the old and new `.spec.selector` are the same, then the new one will adopt the old Pods. However, it will not make any effort to make existing Pods match a new, different pod template. To update Pods to a new spec in a controlled way, use a -[Deployment](/docs/concepts/workloads/controllers/deployment/#creating-a-deployment), as ReplicaSets do not support a rolling update directly. +[Deployment](/docs/concepts/workloads/controllers/deployment/#creating-a-deployment), as +ReplicaSets do not support a rolling update directly. ### Isolating Pods from a ReplicaSet @@ -310,17 +316,19 @@ ensures that a desired number of Pods with a matching label selector are availab When scaling down, the ReplicaSet controller chooses which pods to delete by sorting the available pods to prioritize scaling down pods based on the following general algorithm: - 1. Pending (and unschedulable) pods are scaled down first - 2. If `controller.kubernetes.io/pod-deletion-cost` annotation is set, then - the pod with the lower value will come first. - 3. Pods on nodes with more replicas come before pods on nodes with fewer replicas. - 4. If the pods' creation times differ, the pod that was created more recently - comes before the older pod (the creation times are bucketed on an integer log scale - when the `LogarithmicScaleDown` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled) + +1. Pending (and unschedulable) pods are scaled down first +1. If `controller.kubernetes.io/pod-deletion-cost` annotation is set, then + the pod with the lower value will come first. +1. Pods on nodes with more replicas come before pods on nodes with fewer replicas. +1. If the pods' creation times differ, the pod that was created more recently + comes before the older pod (the creation times are bucketed on an integer log scale + when the `LogarithmicScaleDown` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled) If all of the above match, then selection is random. ### Pod deletion cost + {{< feature-state for_k8s_version="v1.22" state="beta" >}} Using the [`controller.kubernetes.io/pod-deletion-cost`](/docs/reference/labels-annotations-taints/#pod-deletion-cost) @@ -344,6 +352,7 @@ This feature is beta and enabled by default. You can disable it using the {{< /note >}} #### Example Use Case + The different pods of an application could have different utilization levels. On scale down, the application may prefer to remove the pods with lower utilization. To avoid frequently updating the pods, the application should update `controller.kubernetes.io/pod-deletion-cost` once before issuing a scale down (setting the @@ -387,12 +396,17 @@ As such, it is recommended to use Deployments when you want ReplicaSets. ### Bare Pods -Unlike the case where a user directly created Pods, a ReplicaSet replaces Pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicaSet even if your application requires only a single Pod. Think of it similarly to a process supervisor, only it supervises multiple Pods across multiple nodes instead of individual processes on a single node. 
A ReplicaSet delegates local container restarts to some agent on the node such as Kubelet. +Unlike the case where a user directly created Pods, a ReplicaSet replaces Pods that are deleted or +terminated for any reason, such as in the case of node failure or disruptive node maintenance, +such as a kernel upgrade. For this reason, we recommend that you use a ReplicaSet even if your +application requires only a single Pod. Think of it similarly to a process supervisor, only it +supervises multiple Pods across multiple nodes instead of individual processes on a single node. A +ReplicaSet delegates local container restarts to some agent on the node such as Kubelet. ### Job -Use a [`Job`](/docs/concepts/workloads/controllers/job/) instead of a ReplicaSet for Pods that are expected to terminate on their own -(that is, batch jobs). +Use a [`Job`](/docs/concepts/workloads/controllers/job/) instead of a ReplicaSet for Pods that are +expected to terminate on their own (that is, batch jobs). ### DaemonSet @@ -402,12 +416,12 @@ to a machine lifetime: the Pod needs to be running on the machine before other P safe to terminate when the machine is otherwise ready to be rebooted/shutdown. ### ReplicationController -ReplicaSets are the successors to [_ReplicationControllers_](/docs/concepts/workloads/controllers/replicationcontroller/). + +ReplicaSets are the successors to [ReplicationControllers](/docs/concepts/workloads/controllers/replicationcontroller/). The two serve the same purpose, and behave similarly, except that a ReplicationController does not support set-based selector requirements as described in the [labels user guide](/docs/concepts/overview/working-with-objects/labels/#label-selectors). As such, ReplicaSets are preferred over ReplicationControllers - ## {{% heading "whatsnext" %}} * Learn about [Pods](/docs/concepts/workloads/pods). @@ -419,3 +433,4 @@ As such, ReplicaSets are preferred over ReplicationControllers object definition to understand the API for replica sets. * Read about [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/) and how you can use it to manage application availability during disruptions. + diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md index fdb117c5d545b..124d1ddfd22ea 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md @@ -6,7 +6,9 @@ title: kubeadm init content_type: concept weight: 20 --- + + This command initializes a Kubernetes control-plane node. @@ -26,12 +28,12 @@ following steps: 1. Generates a self-signed CA to set up identities for each component in the cluster. The user can provide their own CA cert and/or key by dropping it in the cert directory configured via `--cert-dir` (`/etc/kubernetes/pki` by default). - The APIServer certs will have additional SAN entries for any `--apiserver-cert-extra-sans` arguments, lowercased if necessary. + The APIServer certs will have additional SAN entries for any `--apiserver-cert-extra-sans` + arguments, lowercased if necessary. -1. Writes kubeconfig files in `/etc/kubernetes/` for - the kubelet, the controller-manager and the scheduler to use to connect to the - API server, each with its own identity, as well as an additional - kubeconfig file for administration named `admin.conf`. +1. 
Writes kubeconfig files in `/etc/kubernetes/` for the kubelet, the controller-manager and the + scheduler to use to connect to the API server, each with its own identity, as well as an + additional kubeconfig file for administration named `admin.conf`. 1. Generates static Pod manifests for the API server, controller-manager and scheduler. In case an external etcd is not provided, @@ -76,10 +78,12 @@ following steps: Kubeadm allows you to create a control-plane node in phases using the `kubeadm init phase` command. -To view the ordered list of phases and sub-phases you can call `kubeadm init --help`. The list will be located at the top of the help screen and each phase will have a description next to it. +To view the ordered list of phases and sub-phases you can call `kubeadm init --help`. The list +will be located at the top of the help screen and each phase will have a description next to it. Note that by calling `kubeadm init` all of the phases and sub-phases will be executed in this exact order. -Some phases have unique flags, so if you want to have a look at the list of available options add `--help`, for example: +Some phases have unique flags, so if you want to have a look at the list of available options add +`--help`, for example: ```shell sudo kubeadm init phase control-plane controller-manager --help @@ -91,7 +95,8 @@ You can also use `--help` to see the list of sub-phases for a certain parent pha sudo kubeadm init phase control-plane --help ``` -`kubeadm init` also exposes a flag called `--skip-phases` that can be used to skip certain phases. The flag accepts a list of phase names and the names can be taken from the above ordered list. +`kubeadm init` also exposes a flag called `--skip-phases` that can be used to skip certain phases. +The flag accepts a list of phase names and the names can be taken from the above ordered list. An example: @@ -102,7 +107,10 @@ sudo kubeadm init phase etcd local --config=configfile.yaml sudo kubeadm init --skip-phases=control-plane,etcd --config=configfile.yaml ``` -What this example would do is write the manifest files for the control plane and etcd in `/etc/kubernetes/manifests` based on the configuration in `configfile.yaml`. This allows you to modify the files and then skip these phases using `--skip-phases`. By calling the last command you will create a control plane node with the custom manifest files. +What this example would do is write the manifest files for the control plane and etcd in +`/etc/kubernetes/manifests` based on the configuration in `configfile.yaml`. This allows you to +modify the files and then skip these phases using `--skip-phases`. By calling the last command you +will create a control plane node with the custom manifest files. {{< feature-state for_k8s_version="v1.22" state="beta" >}} @@ -249,7 +257,7 @@ To set a custom image for these you need to configure this in your to use the image. Consult the documentation for your container runtime to find out how to change this setting; for selected container runtimes, you can also find advice within the -[Container Runtimes]((/docs/setup/production-environment/container-runtimes/) topic. +[Container Runtimes](/docs/setup/production-environment/container-runtimes/) topic. ### Uploading control-plane certificates to the cluster @@ -284,30 +292,35 @@ and certificate renewal. ### Managing the kubeadm drop-in file for the kubelet {#kubelet-drop-in} -The `kubeadm` package ships with a configuration file for running the `kubelet` by `systemd`. 
Note that the kubeadm CLI never touches this drop-in file. This drop-in file is part of the kubeadm DEB/RPM package. +The `kubeadm` package ships with a configuration file for running the `kubelet` by `systemd`. +Note that the kubeadm CLI never touches this drop-in file. This drop-in file is part of the kubeadm +DEB/RPM package. -For further information, see [Managing the kubeadm drop-in file for systemd](/docs/setup/production-environment/tools/kubeadm/kubelet-integration/#the-kubelet-drop-in-file-for-systemd). +For further information, see +[Managing the kubeadm drop-in file for systemd](/docs/setup/production-environment/tools/kubeadm/kubelet-integration/#the-kubelet-drop-in-file-for-systemd). ### Use kubeadm with CRI runtimes -By default kubeadm attempts to detect your container runtime. For more details on this detection, see -the [kubeadm CRI installation guide](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime). +By default kubeadm attempts to detect your container runtime. For more details on this detection, +see the [kubeadm CRI installation guide](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime). ### Setting the node name -By default, `kubeadm` assigns a node name based on a machine's host address. You can override this setting with the `--node-name` flag. +By default, `kubeadm` assigns a node name based on a machine's host address. +You can override this setting with the `--node-name` flag. The flag passes the appropriate [`--hostname-override`](/docs/reference/command-line-tools-reference/kubelet/#options) value to the kubelet. -Be aware that overriding the hostname can [interfere with cloud providers](https://github.com/kubernetes/website/pull/8873). +Be aware that overriding the hostname can +[interfere with cloud providers](https://github.com/kubernetes/website/pull/8873). ### Automating kubeadm Rather than copying the token you obtained from `kubeadm init` to each node, as -in the [basic kubeadm tutorial](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/), you can parallelize the -token distribution for easier automation. To implement this automation, you must -know the IP address that the control-plane node will have after it is started, -or use a DNS name or an address of a load balancer. +in the [basic kubeadm tutorial](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/), +you can parallelize the token distribution for easier automation. To implement this automation, +you must know the IP address that the control-plane node will have after it is started, or use a +DNS name or an address of a load balancer. 1. Generate a token. This token must have the form `<6 character string>.<16 character string>`. More formally, it must match the regex: @@ -341,7 +354,11 @@ provisioned). 
For details, see the [kubeadm join](/docs/reference/setup-tools/ku ## {{% heading "whatsnext" %}} * [kubeadm init phase](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) to understand more about -`kubeadm init` phases -* [kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join/) to bootstrap a Kubernetes worker node and join it to the cluster -* [kubeadm upgrade](/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/) to upgrade a Kubernetes cluster to a newer version -* [kubeadm reset](/docs/reference/setup-tools/kubeadm/kubeadm-reset/) to revert any changes made to this host by `kubeadm init` or `kubeadm join` + `kubeadm init` phases +* [kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join/) to bootstrap a Kubernetes + worker node and join it to the cluster +* [kubeadm upgrade](/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/) to upgrade a Kubernetes + cluster to a newer version +* [kubeadm reset](/docs/reference/setup-tools/kubeadm/kubeadm-reset/) to revert any changes made + to this host by `kubeadm init` or `kubeadm join` + diff --git a/content/en/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice.md b/content/en/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice.md index dccf214c473a4..8055f6774eba6 100644 --- a/content/en/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice.md +++ b/content/en/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice.md @@ -6,34 +6,63 @@ weight: 10 -In this tutorial you will learn how and why to externalize your microservice’s configuration. Specifically, you will learn how to use Kubernetes ConfigMaps and Secrets to set environment variables and then consume them using MicroProfile Config. +In this tutorial you will learn how and why to externalize your microservice’s configuration. +Specifically, you will learn how to use Kubernetes ConfigMaps and Secrets to set environment +variables and then consume them using MicroProfile Config. ## {{% heading "prerequisites" %}} ### Creating Kubernetes ConfigMaps & Secrets -There are several ways to set environment variables for a Docker container in Kubernetes, including: Dockerfile, kubernetes.yml, Kubernetes ConfigMaps, and Kubernetes Secrets. In the tutorial, you will learn how to use the latter two for setting your environment variables whose values will be injected into your microservices. One of the benefits for using ConfigMaps and Secrets is that they can be re-used across multiple containers, including being assigned to different environment variables for the different containers. -ConfigMaps are API Objects that store non-confidential key-value pairs. In the Interactive Tutorial you will learn how to use a ConfigMap to store the application's name. For more information regarding ConfigMaps, you can find the documentation [here](/docs/tasks/configure-pod-container/configure-pod-configmap/). +There are several ways to set environment variables for a Docker container in Kubernetes, +including: Dockerfile, kubernetes.yml, Kubernetes ConfigMaps, and Kubernetes Secrets. In the +tutorial, you will learn how to use the latter two for setting your environment variables whose +values will be injected into your microservices. One of the benefits for using ConfigMaps and +Secrets is that they can be re-used across multiple containers, including being assigned to +different environment variables for the different containers. 
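+
+As a quick sketch (the names and values are placeholders, not the ones used in the
+Interactive Tutorial), a ConfigMap and a Secret could be created like this:
+
+```shell
+# a ConfigMap for a non-sensitive setting
+kubectl create configmap app-config --from-literal=APP_NAME=my-app
+# a Secret for a credential; the value is stored base64-encoded
+kubectl create secret generic app-credentials --from-literal=username=admin
+```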
-Although Secrets are also used to store key-value pairs, they differ from ConfigMaps in that they're intended for confidential/sensitive information and are stored using Base64 encoding. This makes secrets the appropriate choice for storing such things as credentials, keys, and tokens, the former of which you'll do in the Interactive Tutorial. For more information on Secrets, you can find the documentation [here](/docs/concepts/configuration/secret/). +ConfigMaps are API Objects that store non-confidential key-value pairs. In the Interactive +Tutorial you will learn how to use a ConfigMap to store the application's name. For more +information regarding ConfigMaps, you can find the documentation +[here](/docs/tasks/configure-pod-container/configure-pod-configmap/). + +Although Secrets are also used to store key-value pairs, they differ from ConfigMaps in that +they're intended for confidential/sensitive information and are stored using Base64 encoding. +This makes secrets the appropriate choice for storing such things as credentials, keys, and +tokens, the former of which you'll do in the Interactive Tutorial. For more information on +Secrets, you can find the documentation [here](/docs/concepts/configuration/secret/). ### Externalizing Config from Code -Externalized application configuration is useful because configuration usually changes depending on your environment. In order to accomplish this, we'll use Java's Contexts and Dependency Injection (CDI) and MicroProfile Config. MicroProfile Config is a feature of MicroProfile, a set of open Java technologies for developing and deploying cloud-native microservices. -CDI provides a standard dependency injection capability enabling an application to be assembled from collaborating, loosely-coupled beans. MicroProfile Config provides apps and microservices a standard way to obtain config properties from various sources, including the application, runtime, and environment. Based on the source's defined priority, the properties are automatically combined into a single set of properties that the application can access via an API. Together, CDI & MicroProfile will be used in the Interactive Tutorial to retrieve the externally provided properties from the Kubernetes ConfigMaps and Secrets and get injected into your application code. +Externalized application configuration is useful because configuration usually changes depending +on your environment. In order to accomplish this, we'll use Java's Contexts and Dependency +Injection (CDI) and MicroProfile Config. MicroProfile Config is a feature of MicroProfile, a set +of open Java technologies for developing and deploying cloud-native microservices. + +CDI provides a standard dependency injection capability enabling an application to be assembled +from collaborating, loosely-coupled beans. MicroProfile Config provides apps and microservices a +standard way to obtain config properties from various sources, including the application, runtime, +and environment. Based on the source's defined priority, the properties are automatically +combined into a single set of properties that the application can access via an API. Together, +CDI & MicroProfile will be used in the Interactive Tutorial to retrieve the externally provided +properties from the Kubernetes ConfigMaps and Secrets and get injected into your application code. -Many open source frameworks and runtimes implement and support MicroProfile Config. 
Throughout the interactive tutorial, you'll be using Open Liberty, a flexible open-source Java runtime for building and running cloud-native apps and microservices. However, any MicroProfile compatible runtime could be used instead. +Many open source frameworks and runtimes implement and support MicroProfile Config. Throughout +the interactive tutorial, you'll be using Open Liberty, a flexible open-source Java runtime for +building and running cloud-native apps and microservices. However, any MicroProfile compatible +runtime could be used instead. ## {{% heading "objectives" %}} * Create a Kubernetes ConfigMap and Secret * Inject microservice configuration using MicroProfile Config - ## Example: Externalizing config using MicroProfile, ConfigMaps and Secrets -### [Start Interactive Tutorial](/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive/) + +[Start Interactive Tutorial](/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive/) + diff --git a/content/en/docs/tutorials/security/cluster-level-pss.md b/content/en/docs/tutorials/security/cluster-level-pss.md index 8a303af651dd0..3b662efc602cd 100644 --- a/content/en/docs/tutorials/security/cluster-level-pss.md +++ b/content/en/docs/tutorials/security/cluster-level-pss.md @@ -17,7 +17,8 @@ created. This tutorial shows you how to enforce the `baseline` Pod Security Standard at the cluster level which applies a standard configuration to all namespaces in a cluster. -To apply Pod Security Standards to specific namespaces, refer to [Apply Pod Security Standards at the namespace level](/docs/tutorials/security/ns-level-pss). +To apply Pod Security Standards to specific namespaces, refer to +[Apply Pod Security Standards at the namespace level](/docs/tutorials/security/ns-level-pss). If you are running a version of Kubernetes other than v{{< skew currentVersion >}}, check the documentation for that version. diff --git a/content/en/docs/tutorials/security/ns-level-pss.md b/content/en/docs/tutorials/security/ns-level-pss.md index 4a20895df73a1..43a48d0932d27 100644 --- a/content/en/docs/tutorials/security/ns-level-pss.md +++ b/content/en/docs/tutorials/security/ns-level-pss.md @@ -17,7 +17,7 @@ one namespace at a time. You can also apply Pod Security Standards to multiple namespaces at once at the cluster level. For instructions, refer to -[Apply Pod Security Standards at the cluster level](/docs/tutorials/security/cluster-level-pss). +[Apply Pod Security Standards at the cluster level](/docs/tutorials/security/cluster-level-pss/). ## {{% heading "prerequisites" %}}