diff --git a/content/en/docs/tasks/access-application-cluster/access-cluster.md b/content/en/docs/tasks/access-application-cluster/access-cluster.md
index 39ad8b4b7e472..d05de37f34099 100644
--- a/content/en/docs/tasks/access-application-cluster/access-cluster.md
+++ b/content/en/docs/tasks/access-application-cluster/access-cluster.md
@@ -8,9 +8,6 @@ content_type: concept

This topic discusses multiple ways to interact with clusters.

-
-
-
## Accessing for the first time with kubectl

@@ -29,8 +26,9 @@ Check the location and credentials that kubectl knows about with this command:
kubectl config view
```

-Many of the [examples](/docs/user-guide/kubectl-cheatsheet) provide an introduction to using
-kubectl and complete documentation is found in the [kubectl manual](/docs/user-guide/kubectl-overview).
+Many of the [examples](/docs/reference/kubectl/cheatsheet/) provide an introduction to using
+kubectl, and complete documentation is found in the
+[kubectl manual](/docs/reference/kubectl/overview/).

## Directly accessing the REST API

@@ -165,7 +163,7 @@ client libraries.

* To get the library, run the following command: `go get k8s.io/client-go@kubernetes-<kubernetes-version>`, see [INSTALL.md](https://github.com/kubernetes/client-go/blob/master/INSTALL.md#for-the-casual-user) for detailed installation instructions. See [https://github.com/kubernetes/client-go](https://github.com/kubernetes/client-go#compatibility-matrix) to see which versions are supported.
* Write an application atop of the client-go clients. Note that client-go defines its own API objects, so if needed, please import API definitions from client-go rather than from the main repository, e.g., `import "k8s.io/client-go/kubernetes"` is correct.

-The Go client can use the same [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/)
+The Go client can use the same [kubeconfig file](/docs/concepts/configuration/organize-cluster-access-kubeconfig/)
as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://git.k8s.io/client-go/examples/out-of-cluster-client-configuration/main.go).

If the application is deployed as a Pod in the cluster, please refer to the [next section](#accessing-the-api-from-a-pod).

@@ -174,7 +172,7 @@ If the application is deployed as a Pod in the cluster, please refer to the [nex

To use [Python client](https://github.com/kubernetes-client/python), run the following command: `pip install kubernetes`. See [Python Client Library page](https://github.com/kubernetes-client/python) for more installation options.

-The Python client can use the same [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/)
+The Python client can use the same [kubeconfig file](/docs/concepts/configuration/organize-cluster-access-kubeconfig/)
as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://github.com/kubernetes-client/python/tree/master/examples).

### Other languages

@@ -219,7 +217,9 @@ In each case, the credentials of the pod are used to communicate securely with t

The previous section was about connecting the Kubernetes API server. This section is about
connecting to other services running on Kubernetes cluster.
In Kubernetes, the -[nodes](/docs/admin/node), [pods](/docs/user-guide/pods) and [services](/docs/user-guide/services) all have +[nodes](/docs/concepts/architecture/nodes/), +[pods](/docs/concepts/workloads/pods/) and +[services](/docs/concepts/services-networking/service/) all have their own IPs. In many cases, the node IPs, pod IPs, and some service IPs on a cluster will not be routable, so they will not be reachable from a machine outside the cluster, such as your desktop machine. @@ -230,7 +230,7 @@ You have several options for connecting to nodes, pods and services from outside - Access services through public IPs. - Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside - the cluster. See the [services](/docs/user-guide/services) and + the cluster. See the [services](/docs/concepts/services-networking/service/) and [kubectl expose](/docs/reference/generated/kubectl/kubectl-commands/#expose) documentation. - Depending on your cluster environment, this may just expose the service to your corporate network, or it may expose it to the internet. Think about whether the service being exposed is secure. diff --git a/content/en/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md b/content/en/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md index 1d00516d28996..95066ac6121a6 100644 --- a/content/en/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md +++ b/content/en/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md @@ -7,19 +7,14 @@ weight: 110 This page shows how to use a Volume to communicate between two Containers running -in the same Pod. See also how to allow processes to communicate by [sharing process namespace](/docs/tasks/configure-pod-container/share-process-namespace/) between containers. - - - +in the same Pod. See also how to allow processes to communicate by +[sharing process namespace](/docs/tasks/configure-pod-container/share-process-namespace/) +between containers. ## {{% heading "prerequisites" %}} - {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} - - - ## Creating a Pod that runs two Containers @@ -103,14 +98,15 @@ The output is similar to this: Recall that the debian Container created the `index.html` file in the nginx root directory. Use `curl` to send a GET request to the nginx server: - root@two-containers:/# curl localhost +``` +root@two-containers:/# curl localhost +``` The output shows that nginx serves a web page written by the debian container: - Hello from the debian container - - - +``` +Hello from the debian container +``` @@ -128,20 +124,14 @@ The Volume in this exercise provides a way for Containers to communicate during the life of the Pod. If the Pod is deleted and recreated, any data stored in the shared Volume is lost. - - - ## {{% heading "whatsnext" %}} -* Learn more about -[patterns for composite containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns). +* Learn more about [patterns for composite containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns). -* Learn about -[composite containers for modular architecture](http://www.slideshare.net/Docker/slideshare-burns). +* Learn about [composite containers for modular architecture](https://www.slideshare.net/Docker/slideshare-burns). -* See -[Configuring a Pod to Use a Volume for Storage](/docs/tasks/configure-pod-container/configure-volume-storage/). 
+* See [Configuring a Pod to Use a Volume for Storage](/docs/tasks/configure-pod-container/configure-volume-storage/). * See [Configure a Pod to share process namespace between containers in a Pod](/docs/tasks/configure-pod-container/share-process-namespace/) @@ -149,7 +139,3 @@ the shared Volume is lost. * See [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core). - - - - diff --git a/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md b/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md index 0ce827185ce21..725afbfb89836 100644 --- a/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md +++ b/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md @@ -11,33 +11,21 @@ microservice. The backend microservice is a hello greeter. The frontend and backend are connected using a Kubernetes {{< glossary_tooltip term_id="service" >}} object. - - - ## {{% heading "objectives" %}} - * Create and run a microservice using a {{< glossary_tooltip term_id="deployment" >}} object. * Route traffic to the backend using a frontend. * Use a Service object to connect the frontend application to the backend application. - - - ## {{% heading "prerequisites" %}} +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} - -* This task uses - [Services with external load balancers](/docs/tasks/access-application-cluster/create-external-load-balancer/), which - require a supported environment. If your environment does not - support this, you can use a Service of type - [NodePort](/docs/concepts/services-networking/service/#nodeport) instead. - - - +This task uses +[Services with external load balancers](/docs/tasks/access-application-cluster/create-external-load-balancer/), which +require a supported environment. If your environment does not support this, you can use a Service of type +[NodePort](/docs/concepts/services-networking/service/#nodeport) instead. @@ -153,8 +141,8 @@ service/frontend created ``` {{< note >}} -The nginx configuration is baked into the [container -image](/examples/service/access/Dockerfile). A better way to do this would +The nginx configuration is baked into the +[container image](/examples/service/access/Dockerfile). A better way to do this would be to use a [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/), so that you can change the configuration more easily. 
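+
+As a sketch of that approach (the ConfigMap name and the local `nginx.conf` file below are
+illustrative, not part of this task), you would create a ConfigMap from the configuration
+file and mount it into the frontend Pod in place of the baked-in copy:
+
+```shell
+# Create a ConfigMap (hypothetical name) from a local nginx.conf; the frontend
+# Deployment can then mount it as a volume at the path nginx reads its config from.
+kubectl create configmap frontend-nginx-conf --from-file=nginx.conf
+```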
@@ -203,27 +191,22 @@ The output shows the message generated by the backend: {"message":"Hello"} ``` - - ## {{% heading "cleanup" %}} - To delete the Services, enter this command: - kubectl delete services frontend hello +```shell +kubectl delete services frontend hello +``` To delete the Deployments, the ReplicaSets and the Pods that are running the backend and frontend applications, enter this command: - kubectl delete deployment frontend hello - - +```shell +kubectl delete deployment frontend hello +``` ## {{% heading "whatsnext" %}} - * Learn more about [Services](/docs/concepts/services-networking/service/) * Learn more about [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/) - - - diff --git a/content/en/docs/tasks/access-application-cluster/list-all-running-container-images.md b/content/en/docs/tasks/access-application-cluster/list-all-running-container-images.md index d1e1ba1568c72..3a8983eec888c 100644 --- a/content/en/docs/tasks/access-application-cluster/list-all-running-container-images.md +++ b/content/en/docs/tasks/access-application-cluster/list-all-running-container-images.md @@ -9,15 +9,10 @@ weight: 100 This page shows how to use kubectl to list all of the Container images for Pods running in a cluster. - - ## {{% heading "prerequisites" %}} - {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} - - In this exercise you will use kubectl to fetch all of the Pods @@ -30,14 +25,14 @@ of Containers for each. - Format the output to include only the list of Container image names using `-o jsonpath={..image}`. This will recursively parse out the `image` field from the returned json. - - See the [jsonpath reference](/docs/user-guide/jsonpath/) + - See the [jsonpath reference](/docs/reference/kubectl/jsonpath/) for further information on how to use jsonpath. - Format the output using standard tools: `tr`, `sort`, `uniq` - Use `tr` to replace spaces with newlines - Use `sort` to sort the results - Use `uniq` to aggregate image counts -```sh +```shell kubectl get pods --all-namespaces -o jsonpath="{..image}" |\ tr -s '[[:space:]]' '\n' |\ sort |\ @@ -52,7 +47,7 @@ field within the Pod. This ensures the correct field is retrieved even when the field name is repeated, e.g. many fields are called `name` within a given item: -```sh +```shell kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" ``` @@ -74,7 +69,7 @@ Pod is returned instead of a list of items. The formatting can be controlled further by using the `range` operation to iterate over elements individually. -```sh +```shell kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' |\ sort ``` @@ -84,7 +79,7 @@ sort To target only Pods matching a specific label, use the -l flag. The following matches only Pods with labels matching `app=nginx`. -```sh +```shell kubectl get pods --all-namespaces -o=jsonpath="{..image}" -l app=nginx ``` @@ -93,7 +88,7 @@ kubectl get pods --all-namespaces -o=jsonpath="{..image}" -l app=nginx To target only pods in a specific namespace, use the namespace flag. The following matches only Pods in the `kube-system` namespace. 
-```sh +```shell kubectl get pods --namespace kube-system -o jsonpath="{..image}" ``` @@ -102,27 +97,14 @@ kubectl get pods --namespace kube-system -o jsonpath="{..image}" As an alternative to jsonpath, Kubectl supports using [go-templates](https://golang.org/pkg/text/template/) for formatting the output: - -```sh +```shell kubectl get pods --all-namespaces -o go-template --template="{{range .items}}{{range .spec.containers}}{{.image}} {{end}}{{end}}" ``` - - - - - - - - ## {{% heading "whatsnext" %}} - ### Reference -* [Jsonpath](/docs/user-guide/jsonpath/) reference guide +* [Jsonpath](/docs/reference/kubectl/jsonpath/) reference guide * [Go template](https://golang.org/pkg/text/template/) reference guide - - - diff --git a/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md b/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md index 6d7c1cced2f9d..7bfcf03ebdc1e 100644 --- a/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md +++ b/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md @@ -14,15 +14,19 @@ card: -Dashboard is a web-based Kubernetes user interface. You can use Dashboard to deploy containerized applications to a Kubernetes cluster, troubleshoot your containerized application, and manage the cluster resources. You can use Dashboard to get an overview of applications running on your cluster, as well as for creating or modifying individual Kubernetes resources (such as Deployments, Jobs, DaemonSets, etc). For example, you can scale a Deployment, initiate a rolling update, restart a pod or deploy new applications using a deploy wizard. +Dashboard is a web-based Kubernetes user interface. +You can use Dashboard to deploy containerized applications to a Kubernetes cluster, +troubleshoot your containerized application, and manage the cluster resources. +You can use Dashboard to get an overview of applications running on your cluster, +as well as for creating or modifying individual Kubernetes resources +(such as Deployments, Jobs, DaemonSets, etc). +For example, you can scale a Deployment, initiate a rolling update, restart a pod +or deploy new applications using a deploy wizard. Dashboard also provides information on the state of Kubernetes resources in your cluster and on any errors that may have occurred. ![Kubernetes Dashboard UI](/images/docs/ui-dashboard.png) - - - ## Deploying the Dashboard UI @@ -35,8 +39,10 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/a ## Accessing the Dashboard UI - -To protect your cluster data, Dashboard deploys with a minimal RBAC configuration by default. Currently, Dashboard only supports logging in with a Bearer Token. To create a token for this demo, you can follow our guide on [creating a sample user](https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md). +To protect your cluster data, Dashboard deploys with a minimal RBAC configuration by default. +Currently, Dashboard only supports logging in with a Bearer Token. +To create a token for this demo, you can follow our guide on +[creating a sample user](https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md). {{< warning >}} The sample user created in the tutorial will have administrative privileges and is for educational purposes only. 
@@ -59,13 +65,17 @@ Kubeconfig Authentication method does NOT support external identity providers or ## Welcome view -When you access Dashboard on an empty cluster, you'll see the welcome page. This page contains a link to this document as well as a button to deploy your first application. In addition, you can view which system applications are running by default in the `kube-system` [namespace](/docs/tasks/administer-cluster/namespaces/) of your cluster, for example the Dashboard itself. +When you access Dashboard on an empty cluster, you'll see the welcome page. +This page contains a link to this document as well as a button to deploy your first application. +In addition, you can view which system applications are running by default in the `kube-system` +[namespace](/docs/tasks/administer-cluster/namespaces/) of your cluster, for example the Dashboard itself. ![Kubernetes Dashboard welcome page](/images/docs/ui-dashboard-zerostate.png) ## Deploying containerized applications -Dashboard lets you create and deploy a containerized application as a Deployment and optional Service with a simple wizard. You can either manually specify application details, or upload a YAML or JSON file containing application configuration. +Dashboard lets you create and deploy a containerized application as a Deployment and optional Service with a simple wizard. +You can either manually specify application details, or upload a YAML or JSON file containing application configuration. Click the **CREATE** button in the upper right corner of any page to begin. @@ -73,17 +83,29 @@ Click the **CREATE** button in the upper right corner of any page to begin. The deploy wizard expects that you provide the following information: -- **App name** (mandatory): Name for your application. A [label](/docs/concepts/overview/working-with-objects/labels/) with the name will be added to the Deployment and Service, if any, that will be deployed. +- **App name** (mandatory): Name for your application. + A [label](/docs/concepts/overview/working-with-objects/labels/) with the name will be + added to the Deployment and Service, if any, that will be deployed. - The application name must be unique within the selected Kubernetes [namespace](/docs/tasks/administer-cluster/namespaces/). It must start with a lowercase character, and end with a lowercase character or a number, and contain only lowercase letters, numbers and dashes (-). It is limited to 24 characters. Leading and trailing spaces are ignored. + The application name must be unique within the selected Kubernetes [namespace](/docs/tasks/administer-cluster/namespaces/). + It must start with a lowercase character, and end with a lowercase character or a number, + and contain only lowercase letters, numbers and dashes (-). It is limited to 24 characters. + Leading and trailing spaces are ignored. -- **Container image** (mandatory): The URL of a public Docker [container image](/docs/concepts/containers/images/) on any registry, or a private image (commonly hosted on the Google Container Registry or Docker Hub). The container image specification must end with a colon. +- **Container image** (mandatory): + The URL of a public Docker [container image](/docs/concepts/containers/images/) on any registry, + or a private image (commonly hosted on the Google Container Registry or Docker Hub). + The container image specification must end with a colon. -- **Number of pods** (mandatory): The target number of Pods you want your application to be deployed in. The value must be a positive integer. 
+- **Number of pods** (mandatory): The target number of Pods you want your application to be deployed in. + The value must be a positive integer. - A [Deployment](/docs/concepts/workloads/controllers/deployment/) will be created to maintain the desired number of Pods across your cluster. + A [Deployment](/docs/concepts/workloads/controllers/deployment/) will be created to + maintain the desired number of Pods across your cluster. -- **Service** (optional): For some parts of your application (e.g. frontends) you may want to expose a [Service](/docs/concepts/services-networking/service/) onto an external, maybe public IP address outside of your cluster (external Service). +- **Service** (optional): For some parts of your application (e.g. frontends) you may want to expose a + [Service](/docs/concepts/services-networking/service/) onto an external, + maybe public IP address outside of your cluster (external Service). {{< note >}} For external Services, you may need to open up one or more ports to do so. @@ -91,13 +113,22 @@ The deploy wizard expects that you provide the following information: Other Services that are only visible from inside the cluster are called internal Services. - Irrespective of the Service type, if you choose to create a Service and your container listens on a port (incoming), you need to specify two ports. The Service will be created mapping the port (incoming) to the target port seen by the container. This Service will route to your deployed Pods. Supported protocols are TCP and UDP. The internal DNS name for this Service will be the value you specified as application name above. + Irrespective of the Service type, if you choose to create a Service and your container listens + on a port (incoming), you need to specify two ports. + The Service will be created mapping the port (incoming) to the target port seen by the container. + This Service will route to your deployed Pods. Supported protocols are TCP and UDP. + The internal DNS name for this Service will be the value you specified as application name above. If needed, you can expand the **Advanced options** section where you can specify more settings: -- **Description**: The text you enter here will be added as an [annotation](/docs/concepts/overview/working-with-objects/annotations/) to the Deployment and displayed in the application's details. +- **Description**: The text you enter here will be added as an + [annotation](/docs/concepts/overview/working-with-objects/annotations/) + to the Deployment and displayed in the application's details. -- **Labels**: Default [labels](/docs/concepts/overview/working-with-objects/labels/) to be used for your application are application name and version. You can specify additional labels to be applied to the Deployment, Service (if any), and Pods, such as release, environment, tier, partition, and release track. +- **Labels**: Default [labels](/docs/concepts/overview/working-with-objects/labels/) to be used + for your application are application name and version. + You can specify additional labels to be applied to the Deployment, Service (if any), and Pods, + such as release, environment, tier, partition, and release track. Example: @@ -108,65 +139,110 @@ If needed, you can expand the **Advanced options** section where you can specify track=stable ``` -- **Namespace**: Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called [namespaces](/docs/tasks/administer-cluster/namespaces/). 
They let you partition resources into logically named groups. +- **Namespace**: Kubernetes supports multiple virtual clusters backed by the same physical cluster. + These virtual clusters are called [namespaces](/docs/tasks/administer-cluster/namespaces/). + They let you partition resources into logically named groups. - Dashboard offers all available namespaces in a dropdown list, and allows you to create a new namespace. The namespace name may contain a maximum of 63 alphanumeric characters and dashes (-) but can not contain capital letters. - Namespace names should not consist of only numbers. If the name is set as a number, such as 10, the pod will be put in the default namespace. + Dashboard offers all available namespaces in a dropdown list, and allows you to create a new namespace. + The namespace name may contain a maximum of 63 alphanumeric characters and dashes (-) but can not contain capital letters. + Namespace names should not consist of only numbers. + If the name is set as a number, such as 10, the pod will be put in the default namespace. - In case the creation of the namespace is successful, it is selected by default. If the creation fails, the first namespace is selected. + In case the creation of the namespace is successful, it is selected by default. + If the creation fails, the first namespace is selected. -- **Image Pull Secret**: In case the specified Docker container image is private, it may require [pull secret](/docs/concepts/configuration/secret/) credentials. +- **Image Pull Secret**: + In case the specified Docker container image is private, it may require + [pull secret](/docs/concepts/configuration/secret/) credentials. - Dashboard offers all available secrets in a dropdown list, and allows you to create a new secret. The secret name must follow the DNS domain name syntax, for example `new.image-pull.secret`. The content of a secret must be base64-encoded and specified in a [`.dockercfg`](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) file. The secret name may consist of a maximum of 253 characters. + Dashboard offers all available secrets in a dropdown list, and allows you to create a new secret. + The secret name must follow the DNS domain name syntax, for example `new.image-pull.secret`. + The content of a secret must be base64-encoded and specified in a + [`.dockercfg`](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) file. + The secret name may consist of a maximum of 253 characters. In case the creation of the image pull secret is successful, it is selected by default. If the creation fails, no secret is applied. -- **CPU requirement (cores)** and **Memory requirement (MiB)**: You can specify the minimum [resource limits](/docs/tasks/configure-pod-container/limit-range/) for the container. By default, Pods run with unbounded CPU and memory limits. +- **CPU requirement (cores)** and **Memory requirement (MiB)**: + You can specify the minimum [resource limits](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/) + for the container. By default, Pods run with unbounded CPU and memory limits. -- **Run command** and **Run command arguments**: By default, your containers run the specified Docker image's default [entrypoint command](/docs/tasks/inject-data-application/define-command-argument-container/). You can use the command options and arguments to override the default. 
+- **Run command** and **Run command arguments**:
+  By default, your containers run the specified Docker image's default
+  [entrypoint command](/docs/tasks/inject-data-application/define-command-argument-container/).
+  You can use the command options and arguments to override the default.

-- **Run as privileged**: This setting determines whether processes in [privileged containers](/docs/user-guide/pods/#privileged-mode-for-pod-containers) are equivalent to processes running as root on the host. Privileged containers can make use of capabilities like manipulating the network stack and accessing devices.
+- **Run as privileged**: This setting determines whether processes in
+  [privileged containers](/docs/concepts/workloads/pods/#privileged-mode-for-containers)
+  are equivalent to processes running as root on the host.
+  Privileged containers can make use of capabilities like manipulating the network stack and accessing devices.

-- **Environment variables**: Kubernetes exposes Services through [environment variables](/docs/tasks/inject-data-application/environment-variable-expose-pod-information/). You can compose environment variable or pass arguments to your commands using the values of environment variables. They can be used in applications to find a Service. Values can reference other variables using the `$(VAR_NAME)` syntax.
+- **Environment variables**: Kubernetes exposes Services through
+  [environment variables](/docs/tasks/inject-data-application/environment-variable-expose-pod-information/).
+  You can compose environment variables or pass arguments to your commands using the values of environment variables.
+  They can be used in applications to find a Service.
+  Values can reference other variables using the `$(VAR_NAME)` syntax.

### Uploading a YAML or JSON file

-Kubernetes supports declarative configuration. In this style, all configuration is stored in YAML or JSON configuration files using the Kubernetes [API](/docs/concepts/overview/kubernetes-api/) resource schemas.
+Kubernetes supports declarative configuration.
+In this style, all configuration is stored in YAML or JSON configuration files
+using the Kubernetes [API](/docs/concepts/overview/kubernetes-api/) resource schemas.

-As an alternative to specifying application details in the deploy wizard, you can define your application in YAML or JSON files, and upload the files using Dashboard.
+As an alternative to specifying application details in the deploy wizard,
+you can define your application in YAML or JSON files, and upload the files using Dashboard.

## Using Dashboard

Following sections describe views of the Kubernetes Dashboard UI; what they provide and how can they be used.

### Navigation

-When there are Kubernetes objects defined in the cluster, Dashboard shows them in the initial view. By default only objects from the _default_ namespace are shown and this can be changed using the namespace selector located in the navigation menu.
+When there are Kubernetes objects defined in the cluster, Dashboard shows them in the initial view.
+By default, only objects from the _default_ namespace are shown and
+this can be changed using the namespace selector located in the navigation menu.

Dashboard shows most Kubernetes object kinds and groups them in a few menu categories.

#### Admin Overview

-For cluster and namespace administrators, Dashboard lists Nodes, Namespaces and Persistent Volumes and has detail views for them. Node list view contains CPU and memory usage metrics aggregated across all Nodes.
The details view shows the metrics for a Node, its specification, status, allocated resources, events and pods running on the node.
+For cluster and namespace administrators, Dashboard lists Nodes, Namespaces and Persistent Volumes and has detail views for them.
+Node list view contains CPU and memory usage metrics aggregated across all Nodes.
+The details view shows the metrics for a Node, its specification, status,
+allocated resources, events and pods running on the node.

#### Workloads

-Shows all applications running in the selected namespace. The view lists applications by workload kind (e.g., Deployments, Replica Sets, Stateful Sets, etc.) and each workload kind can be viewed separately. The lists summarize actionable information about the workloads, such as the number of ready pods for a Replica Set or current memory usage for a Pod.
-Detail views for workloads show status and specification information and surface relationships between objects. For example, Pods that Replica Set is controlling or New Replica Sets and Horizontal Pod Autoscalers for Deployments.
+Shows all applications running in the selected namespace.
+The view lists applications by workload kind (e.g., Deployments, Replica Sets, Stateful Sets, etc.)
+and each workload kind can be viewed separately.
+The lists summarize actionable information about the workloads,
+such as the number of ready pods for a Replica Set or current memory usage for a Pod.
+
+Detail views for workloads show status and specification information and
+surface relationships between objects.
+For example, the Pods that a Replica Set is controlling, or the new Replica Sets and Horizontal Pod Autoscalers for Deployments.

#### Services

-Shows Kubernetes resources that allow for exposing services to external world and discovering them within a cluster. For that reason, Service and Ingress views show Pods targeted by them, internal endpoints for cluster connections and external endpoints for external users.
+
+Shows Kubernetes resources that allow for exposing services to the external world and
+discovering them within a cluster.
+For that reason, Service and Ingress views show Pods targeted by them,
+internal endpoints for cluster connections and external endpoints for external users.

#### Storage
+
Storage view shows Persistent Volume Claim resources which are used by applications for storing data.

#### Config Maps and Secrets

-Shows all Kubernetes resources that are used for live configuration of applications running in clusters. The view allows for editing and managing config objects and displays secrets hidden by default.
-#### Logs viewer
-Pod lists and detail pages link to a logs viewer that is built into Dashboard. The viewer allows for drilling down logs from containers belonging to a single Pod.
+Shows all Kubernetes resources that are used for live configuration of applications running in clusters.
+The view allows for editing and managing config objects, and displays secrets, whose values are hidden by default.
-![Logs viewer](/images/docs/ui-dashboard-logs-view.png)
+#### Logs viewer
+Pod lists and detail pages link to a logs viewer that is built into Dashboard.
+The viewer allows for drilling down logs from containers belonging to a single Pod.
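+
+The same container logs are also available from the command line if you prefer; for example
+(the pod and container names here are placeholders):
+
+```shell
+# Tail the most recent log lines from one container of a pod.
+kubectl logs my-pod -c my-container --tail=50
+```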
+![Logs viewer](/images/docs/ui-dashboard-logs-view.png) ## {{% heading "whatsnext" %}} diff --git a/content/en/docs/tasks/administer-cluster/access-cluster-api.md b/content/en/docs/tasks/administer-cluster/access-cluster-api.md index 659c8d777c4ba..5c94dceffc6ab 100644 --- a/content/en/docs/tasks/administer-cluster/access-cluster-api.md +++ b/content/en/docs/tasks/administer-cluster/access-cluster-api.md @@ -6,13 +6,10 @@ content_type: task This page shows how to access clusters using the Kubernetes API. - ## {{% heading "prerequisites" %}} - {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} - ## Accessing the Kubernetes API @@ -170,7 +167,7 @@ client-go defines its own API objects, so if needed, import API definitions from {{< /note >}} -The Go client can use the same [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/) +The Go client can use the same [kubeconfig file](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) as the kubectl CLI does to locate and authenticate to the API server. See this [example](https://git.k8s.io/client-go/examples/out-of-cluster-client-configuration/main.go): ```golang @@ -199,7 +196,7 @@ If the application is deployed as a Pod in the cluster, see [Accessing the API f To use [Python client](https://github.com/kubernetes-client/python), run the following command: `pip install kubernetes` See [Python Client Library page](https://github.com/kubernetes-client/python) for more installation options. -The Python client can use the same [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/) +The Python client can use the same [kubeconfig file](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) as the kubectl CLI does to locate and authenticate to the API server. See this [example](https://github.com/kubernetes-client/python/blob/master/examples/out_of_cluster_config.py): ```python @@ -229,7 +226,7 @@ mvn install See [https://github.com/kubernetes-client/java/releases](https://github.com/kubernetes-client/java/releases) to see which versions are supported. -The Java client can use the same [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/) +The Java client can use the same [kubeconfig file](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) as the kubectl CLI does to locate and authenticate to the API server. See this [example](https://github.com/kubernetes-client/java/blob/master/examples/src/main/java/io/kubernetes/client/examples/KubeConfigFileClientExample.java): ```java @@ -283,7 +280,7 @@ public class KubeConfigFileClientExample { To use [dotnet client](https://github.com/kubernetes-client/csharp), run the following command: `dotnet add package KubernetesClient --version 1.6.1` See [dotnet Client Library page](https://github.com/kubernetes-client/csharp) for more installation options. See [https://github.com/kubernetes-client/csharp/releases](https://github.com/kubernetes-client/csharp/releases) to see which versions are supported. -The dotnet client can use the same [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/) +The dotnet client can use the same [kubeconfig file](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) as the kubectl CLI does to locate and authenticate to the API server. 
See this [example](https://github.com/kubernetes-client/csharp/blob/master/examples/simple/PodList.cs): ```csharp @@ -318,7 +315,7 @@ namespace simple To install [JavaScript client](https://github.com/kubernetes-client/javascript), run the following command: `npm install @kubernetes/client-node`. See [https://github.com/kubernetes-client/javascript/releases](https://github.com/kubernetes-client/javascript/releases) to see which versions are supported. -The JavaScript client can use the same [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/) +The JavaScript client can use the same [kubeconfig file](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) as the kubectl CLI does to locate and authenticate to the API server. See this [example](https://github.com/kubernetes-client/javascript/blob/master/examples/example.js): ```javascript @@ -338,7 +335,7 @@ k8sApi.listNamespacedPod('default').then((res) => { See [https://github.com/kubernetes-client/haskell/releases](https://github.com/kubernetes-client/haskell/releases) to see which versions are supported. -The [Haskell client](https://github.com/kubernetes-client/haskell) can use the same [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/) +The [Haskell client](https://github.com/kubernetes-client/haskell) can use the same [kubeconfig file](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) as the kubectl CLI does to locate and authenticate to the API server. See this [example](https://github.com/kubernetes-client/haskell/blob/master/kubernetes-client/example/App.hs): ```haskell @@ -388,7 +385,7 @@ While running in a Pod, the Kubernetes apiserver is accessible via a Service nam do this automatically. The recommended way to authenticate to the API server is with a -[service account](/docs/user-guide/service-accounts) credential. By default, a Pod +[service account](/docs/tasks/configure-pod-container/configure-service-account/) credential. By default, a Pod is associated with a service account, and a credential (token) for that service account is placed into the filesystem tree of each container in that Pod, at `/var/run/secrets/kubernetes.io/serviceaccount/token`. diff --git a/content/en/docs/tasks/administer-cluster/access-cluster-services.md b/content/en/docs/tasks/administer-cluster/access-cluster-services.md index 979a75a162eaa..c318a3df35388 100644 --- a/content/en/docs/tasks/administer-cluster/access-cluster-services.md +++ b/content/en/docs/tasks/administer-cluster/access-cluster-services.md @@ -17,7 +17,8 @@ This page shows how to connect to services running on the Kubernetes cluster. ## Accessing services running on the cluster -In Kubernetes, [nodes](/docs/admin/node), [pods](/docs/user-guide/pods) and [services](/docs/user-guide/services) all have +In Kubernetes, [nodes](/docs/concepts/architecture/nodes/), +[pods](/docs/concepts/workloads/pods/) and [services](/docs/concepts/services-networking/service/) all have their own IPs. In many cases, the node IPs, pod IPs, and some service IPs on a cluster will not be routable, so they will not be reachable from a machine outside the cluster, such as your desktop machine. @@ -28,7 +29,7 @@ You have several options for connecting to nodes, pods and services from outside - Access services through public IPs. - Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside - the cluster. 
See the [services](/docs/user-guide/services) and
+   the cluster. See the [services](/docs/concepts/services-networking/service/) and
    [kubectl expose](/docs/reference/generated/kubectl/kubectl-commands/#expose) documentation.
  - Depending on your cluster environment, this may just expose the service to your corporate network,
    or it may expose it to the internet. Think about whether the service being exposed is secure.
diff --git a/content/en/docs/tasks/administer-cluster/cluster-management.md b/content/en/docs/tasks/administer-cluster/cluster-management.md
index 7cbab3aa2cb2e..ecbae2a4b3c13 100644
--- a/content/en/docs/tasks/administer-cluster/cluster-management.md
+++ b/content/en/docs/tasks/administer-cluster/cluster-management.md
@@ -13,9 +13,6 @@ upgrading your cluster's master and worker nodes, performing node maintenance
(e.g. kernel upgrades), and upgrading the Kubernetes API version of a running
cluster.

-
-
-
## Creating and configuring a Cluster

@@ -81,24 +78,33 @@ Different providers, and tools, will manage upgrades differently. It is recomme
* [Digital Rebar](https://provision.readthedocs.io/en/tip/doc/content-packages/krib.html)
* ...

-To upgrade a cluster on a platform not mentioned in the above list, check the order of component upgrade on the [Skewed versions](/docs/setup/release/version-skew-policy/#supported-component-upgrade-order) page.
+To upgrade a cluster on a platform not mentioned in the above list, check the order of component upgrade on the
+[Skewed versions](/docs/setup/release/version-skew-policy/#supported-component-upgrade-order) page.

## Resizing a cluster

-If your cluster runs short on resources you can easily add more machines to it if your cluster is running in [Node self-registration mode](/docs/admin/node/#self-registration-of-nodes).
-If you're using GCE or Google Kubernetes Engine it's done by resizing the Instance Group managing your Nodes. It can be accomplished by modifying number of instances on `Compute > Compute Engine > Instance groups > your group > Edit group` [Google Cloud Console page](https://console.developers.google.com) or using gcloud CLI:
+If your cluster runs short on resources, you can easily add more machines to it if your cluster
+is running in [Node self-registration mode](/docs/concepts/architecture/nodes/#self-registration-of-nodes).
+If you're using GCE or Google Kubernetes Engine, it's done by resizing the Instance Group managing your Nodes.
+It can be accomplished by modifying the number of instances on the
+`Compute > Compute Engine > Instance groups > your group > Edit group`
+[Google Cloud Console page](https://console.developers.google.com) or using the gcloud CLI:

```shell
gcloud compute instance-groups managed resize kubernetes-node-pool --size=42 --zone=$ZONE
```

-The Instance Group will take care of putting appropriate image on new machines and starting them, while the Kubelet will register its Node with the API server to make it available for scheduling. If you scale the instance group down, system will randomly choose Nodes to kill.
+The Instance Group will take care of putting the appropriate image on new machines and starting them,
+while the Kubelet will register its Node with the API server to make it available for scheduling.
+If you scale the instance group down, the system will randomly choose Nodes to kill.

In other environments you may need to configure the machine yourself and tell the Kubelet on which machine API server is running.
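+
+As a minimal sketch of that manual path (the server address, certificate paths, and names
+below are illustrative, and a real node needs additional kubelet configuration), you can
+build a kubeconfig for the kubelet and point the kubelet at it:
+
+```shell
+# Record the cluster's API server endpoint and CA in a kubeconfig file.
+kubectl config set-cluster my-cluster --server=https://203.0.113.10:6443 \
+  --certificate-authority=/etc/kubernetes/pki/ca.crt \
+  --kubeconfig=/var/lib/kubelet/kubeconfig
+# Add the node's client credentials and a context, and select that context.
+kubectl config set-credentials my-node \
+  --client-certificate=/var/lib/kubelet/pki/kubelet.crt \
+  --client-key=/var/lib/kubelet/pki/kubelet.key \
+  --kubeconfig=/var/lib/kubelet/kubeconfig
+kubectl config set-context default --cluster=my-cluster --user=my-node \
+  --kubeconfig=/var/lib/kubelet/kubeconfig
+kubectl config use-context default --kubeconfig=/var/lib/kubelet/kubeconfig
+# Start the kubelet with that kubeconfig; it registers the Node with the API server.
+kubelet --kubeconfig=/var/lib/kubelet/kubeconfig
+```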
### Resizing an Azure Kubernetes Service (AKS) cluster -Azure Kubernetes Service enables user-initiated resizing of the cluster from either the CLI or the Azure Portal and is described in the [Azure AKS documentation](https://docs.microsoft.com/en-us/azure/aks/scale-cluster). +Azure Kubernetes Service enables user-initiated resizing of the cluster from either the CLI or +the Azure Portal and is described in the +[Azure AKS documentation](https://docs.microsoft.com/en-us/azure/aks/scale-cluster). ### Cluster autoscaling @@ -106,7 +112,8 @@ Azure Kubernetes Service enables user-initiated resizing of the cluster from eit If you are using GCE or Google Kubernetes Engine, you can configure your cluster so that it is automatically rescaled based on pod needs. -As described in [Compute Resource](/docs/concepts/configuration/manage-compute-resources-container/), users can reserve how much CPU and memory is allocated to pods. +As described in [Compute Resource](/docs/concepts/configuration/manage-resources-containers/), +users can reserve how much CPU and memory is allocated to pods. This information is used by the Kubernetes scheduler to find a place to run the pod. If there is no node that has enough free capacity (or doesn't match other pod requirements) then the pod has to wait until some pods are terminated or a new node is added. @@ -185,7 +192,8 @@ kubectl uncordon $NODENAME If you deleted the node's VM instance and created a new one, then a new schedulable node resource will be created automatically (if you're using a cloud provider that supports -node discovery; currently this is only Google Compute Engine, not including CoreOS on Google Compute Engine using kube-register). See [Node](/docs/admin/node) for more details. +node discovery; currently this is only Google Compute Engine, not including CoreOS on Google Compute Engine using kube-register). +See [Node](/docs/concepts/architecture/nodes/) for more details. ## Advanced Topics diff --git a/content/en/docs/tasks/administer-cluster/dns-custom-nameservers.md b/content/en/docs/tasks/administer-cluster/dns-custom-nameservers.md index 9fb0452ddc55a..437e58f39e4a7 100644 --- a/content/en/docs/tasks/administer-cluster/dns-custom-nameservers.md +++ b/content/en/docs/tasks/administer-cluster/dns-custom-nameservers.md @@ -50,7 +50,7 @@ and more. For more information, see [DNS for Services and Pods](/docs/concepts/s If a Pod's `dnsPolicy` is set to `default`, it inherits the name resolution configuration from the node that the Pod runs on. The Pod's DNS resolution should behave the same as the node. -But see [Known issues](/docs/tasks/debug-application-cluster/dns-debugging-resolution/#known-issues). +But see [Known issues](/docs/tasks/administer-cluster/dns-debugging-resolution/#known-issues). If you don't want this, or if you want a different DNS config for pods, you can use the kubelet's `--resolv-conf` flag. Set this flag to "" to prevent Pods from diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md index 2c4c3d135e41e..a763f36a5802a 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md @@ -26,11 +26,8 @@ The upgrade workflow at high level is the following: 1. Upgrade additional control plane nodes. 1. Upgrade worker nodes. - - ## {{% heading "prerequisites" %}} - - You need to have a kubeadm Kubernetes cluster running version 1.17.0 or later. 
- [Swap must be disabled](https://serverfault.com/questions/684771/best-way-to-disable-swap-in-linux). - The cluster should use a static control plane and etcd pods or external etcd. @@ -45,8 +42,6 @@ The upgrade workflow at high level is the following: or between PATCH versions of the same MINOR. That is, you cannot skip MINOR versions when you upgrade. For example, you can upgrade from 1.y to 1.y+1, but not from 1.y to 1.y+2. - - ## Determine which version to upgrade to @@ -445,3 +440,4 @@ and post-upgrade manifest file for a certain component, a backup file for it wil - Fetches the kubeadm `ClusterConfiguration` from the cluster. - Upgrades the kubelet configuration for this node. + diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy.md index df6adcd39fec6..93312e199e64c 100644 --- a/content/en/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy.md @@ -10,14 +10,9 @@ weight: 40 This page shows how to use Romana for NetworkPolicy. - - ## {{% heading "prerequisites" %}} - -Complete steps 1, 2, and 3 of the [kubeadm getting started guide](/docs/getting-started-guides/kubeadm/). - - +Complete steps 1, 2, and 3 of the [kubeadm getting started guide](/docs/reference/setup-tools/kubeadm/kubeadm/). @@ -33,13 +28,10 @@ To apply network policies use one of the following: * [Example of Romana network policy](https://github.com/romana/core/blob/master/doc/policy.md). * The NetworkPolicy API. - - ## {{% heading "whatsnext" %}} - -Once you have installed Romana, you can follow the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) to try out Kubernetes NetworkPolicy. - - +Once you have installed Romana, you can follow the +[Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) +to try out Kubernetes NetworkPolicy. diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy.md index a9d15f40a6ba2..b6b562620aa24 100644 --- a/content/en/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy.md @@ -10,14 +10,10 @@ weight: 50 This page shows how to use Weave Net for NetworkPolicy. - - ## {{% heading "prerequisites" %}} - -You need to have a Kubernetes cluster. Follow the [kubeadm getting started guide](/docs/getting-started-guides/kubeadm/) to bootstrap one. - - +You need to have a Kubernetes cluster. Follow the +[kubeadm getting started guide](/docs/reference/setup-tools/kubeadm/kubeadm/) to bootstrap one. @@ -25,7 +21,10 @@ You need to have a Kubernetes cluster. Follow the [kubeadm getting started guide Follow the [Integrating Kubernetes via the Addon](https://www.weave.works/docs/net/latest/kube-addon/) guide. -The Weave Net addon for Kubernetes comes with a [Network Policy Controller](https://www.weave.works/docs/net/latest/kube-addon/#npc) that automatically monitors Kubernetes for any NetworkPolicy annotations on all namespaces and configures `iptables` rules to allow or block traffic as directed by the policies. 
+The Weave Net addon for Kubernetes comes with a
+[Network Policy Controller](https://www.weave.works/docs/net/latest/kube-addon/#npc)
+that automatically monitors Kubernetes for any NetworkPolicy annotations on all
+namespaces and configures `iptables` rules to allow or block traffic as directed by the policies.

## Test the installation

@@ -49,13 +48,10 @@ weave-net-pmw8w 2/2 Running 0 9d

Each Node has a weave Pod, and all Pods are `Running` and `2/2 READY`. (`2/2`
means that each Pod has `weave` and `weave-npc`.)

-
-
## {{% heading "whatsnext" %}}

-
-Once you have installed the Weave Net addon, you can follow the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) to try out Kubernetes NetworkPolicy. If you have any question, contact us at [#weave-community on Slack or Weave User Group](https://github.com/weaveworks/weave#getting-help).
-
-
-
+Once you have installed the Weave Net addon, you can follow the
+[Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/)
+to try out Kubernetes NetworkPolicy. If you have any questions, contact us at
+[#weave-community on Slack or Weave User Group](https://github.com/weaveworks/weave#getting-help).
diff --git a/content/en/docs/tasks/administer-cluster/out-of-resource.md b/content/en/docs/tasks/administer-cluster/out-of-resource.md
index a9d2ee37025db..10ef986b8c469 100644
--- a/content/en/docs/tasks/administer-cluster/out-of-resource.md
+++ b/content/en/docs/tasks/administer-cluster/out-of-resource.md
@@ -16,9 +16,6 @@ are low. This is especially important when dealing with incompressible compute
resources, such as memory or disk space. If such resources are exhausted, nodes
become unstable.

-
-
-
## Eviction Policy

@@ -53,8 +50,7 @@ like `free -m`. This is important because `free -m` does not work in a
container, and if users use the [node allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable) feature, out of resource decisions
are made local to the end user Pod part of the cgroup hierarchy as well as the
-root node. This
-[script](/docs/tasks/administer-cluster/out-of-resource/memory-available.sh)
+root node. This [script](/docs/tasks/administer-cluster/memory-available.sh)
reproduces the same set of steps that the `kubelet` performs to calculate
`memory.available`. The `kubelet` excludes inactive_file (i.e. # of bytes of
file-backed memory on inactive LRU list) from its calculation as it assumes that
diff --git a/content/en/docs/tasks/administer-cluster/securing-a-cluster.md b/content/en/docs/tasks/administer-cluster/securing-a-cluster.md
index 323f5b0a48f8e..090e292966871 100644
--- a/content/en/docs/tasks/administer-cluster/securing-a-cluster.md
+++ b/content/en/docs/tasks/administer-cluster/securing-a-cluster.md
@@ -56,7 +56,9 @@ an integrated [Role-Based Access Control (RBAC)](/docs/reference/access-authn-au
set of permissions bundled into roles. These permissions combine verbs (get, create, delete)
with resources (pods, services, nodes) and can be namespace or cluster scoped. A set of out of the box
roles are provided that offer reasonable default separation of responsibility depending on what
-actions a client might want to perform.
It is recommended that you use the +[Node](/docs/reference/access-authn-authz/node/) and +[RBAC](/docs/reference/access-authn-authz/rbac/) authorizers together, in combination with the [NodeRestriction](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) admission plugin. As with authentication, simple and broad roles may be appropriate for smaller clusters, but as @@ -79,7 +81,7 @@ Kubelets expose HTTPS endpoints which grant powerful control over the node and c Production clusters should enable Kubelet authentication and authorization. -Consult the [Kubelet authentication/authorization reference](/docs/admin/kubelet-authentication-authorization) for more information. +Consult the [Kubelet authentication/authorization reference](/docs/reference/command-line-tools-reference/kubelet-authentication-authorization) for more information. ## Controlling the capabilities of a workload or user at runtime @@ -252,9 +254,8 @@ are not encrypted or an attacker gains read access to etcd. ### Receiving alerts for security updates and reporting vulnerabilities Join the [kubernetes-announce](https://groups.google.com/forum/#!forum/kubernetes-announce) -group for emails about security announcements. See the [security reporting](/security/) +group for emails about security announcements. See the +[security reporting](/docs/reference/issues-security/security/) page for more on how to report vulnerabilities. - - diff --git a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md index 163070818245e..6d5363ee58910 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md +++ b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md @@ -8,7 +8,7 @@ weight: 110 This page shows how to configure liveness, readiness and startup probes for containers. -The [kubelet](/docs/admin/kubelet/) uses liveness probes to know when to +The [kubelet](/docs/reference/command-line-tools-reference/kubelet/) uses liveness probes to know when to restart a container. For example, liveness probes could catch a deadlock, where an application is running, but unable to make progress. Restarting a container in such a state can help to make the application more available diff --git a/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md b/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md index 6ff6c21530244..60e45804c9efb 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md +++ b/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md @@ -29,13 +29,11 @@ PersistentVolume. {{< glossary_tooltip text="kubectl" term_id="kubectl" >}} command-line tool must be configured to communicate with your cluster. If you do not already have a single-node cluster, you can create one by using -[Minikube](/docs/getting-started-guides/minikube). +[Minikube](/docs/setup/learning-environment/minikube/). * Familiarize yourself with the material in [Persistent Volumes](/docs/concepts/storage/persistent-volumes/). 
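+
+If you do not have a cluster yet, one quick way to satisfy the single-node requirement
+(assuming Minikube is installed) is:
+
+```shell
+# Start a local single-node cluster, then confirm the node reports Ready.
+minikube start
+kubectl get nodes
+```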
- - ## Create an index.html file on your Node diff --git a/content/en/docs/tasks/configure-pod-container/configure-service-account.md b/content/en/docs/tasks/configure-pod-container/configure-service-account.md index f1b1e22db9545..e3f97dd5ce872 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-service-account.md +++ b/content/en/docs/tasks/configure-pod-container/configure-service-account.md @@ -39,10 +39,14 @@ When they do, they are authenticated as a particular Service Account (for exampl When you create a pod, if you do not specify a service account, it is automatically assigned the `default` service account in the same namespace. -If you get the raw json or yaml for a pod you have created (for example, `kubectl get pods/ -o yaml`), you can see the `spec.serviceAccountName` field has been [automatically set](/docs/user-guide/working-with-resources/#resources-are-automatically-modified). - -You can access the API from inside a pod using automatically mounted service account credentials, as described in [Accessing the Cluster](/docs/user-guide/accessing-the-cluster/#accessing-the-api-from-a-pod). -The API permissions of the service account depend on the [authorization plugin and policy](/docs/reference/access-authn-authz/authorization/#authorization-modules) in use. +If you get the raw json or yaml for a pod you have created (for example, `kubectl get pods/ -o yaml`), +you can see the `spec.serviceAccountName` field has been +[automatically set](/docs/concepts/overview/working-with-objects/object-management/). + +You can access the API from inside a pod using automatically mounted service account credentials, as described in +[Accessing the Cluster](/docs/tasks/access-application-cluster/access-cluster). +The API permissions of the service account depend on the +[authorization plugin and policy](/docs/reference/access-authn-authz/authorization/#authorization-modules) in use. In version 1.6+, you can opt out of automounting API credentials for a service account by setting `automountServiceAccountToken: false` on the service account: diff --git a/content/en/docs/tasks/configure-pod-container/security-context.md b/content/en/docs/tasks/configure-pod-container/security-context.md index db9a0aa96f400..9f69fdccc6ef6 100644 --- a/content/en/docs/tasks/configure-pod-container/security-context.md +++ b/content/en/docs/tasks/configure-pod-container/security-context.md @@ -243,7 +243,7 @@ exit ## Set capabilities for a Container -With [Linux capabilities](http://man7.org/linux/man-pages/man7/capabilities.7.html), +With [Linux capabilities](https://man7.org/linux/man-pages/man7/capabilities.7.html), you can grant certain privileges to a process without granting all the privileges of the root user. To add or remove Linux capabilities for a Container, include the `capabilities` field in the `securityContext` section of the Container manifest. diff --git a/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md b/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md index b14777111e182..b0e272afa0c83 100644 --- a/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md +++ b/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md @@ -9,7 +9,8 @@ weight: 20 -Kubernetes ships with a default scheduler that is described [here](/docs/admin/kube-scheduler/). +Kubernetes ships with a default scheduler that is described +[here](/docs/reference/command-line-tools-reference/kube-scheduler/). 
If the default scheduler does not suit your needs you can implement your own scheduler. Not just that, you can even run multiple schedulers simultaneously alongside the default scheduler and instruct Kubernetes what scheduler to use for each of your pods. Let's @@ -20,16 +21,10 @@ document. Please refer to the kube-scheduler implementation in [pkg/scheduler](https://github.com/kubernetes/kubernetes/tree/{{< param "githubbranch" >}}/pkg/scheduler) in the Kubernetes source directory for a canonical example. - - - ## {{% heading "prerequisites" %}} - {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} - - ## Package the scheduler @@ -83,7 +78,7 @@ Note also that we created a dedicated service account `my-scheduler` and bind th `system:kube-scheduler` to it so that it can acquire the same privileges as `kube-scheduler`. Please see the -[kube-scheduler documentation](/docs/admin/kube-scheduler/) for +[kube-scheduler documentation](/docs/reference/command-line-tools-reference/kube-scheduler/) for detailed description of other command line arguments. ## Run the second scheduler in the cluster @@ -100,6 +95,7 @@ Verify that the scheduler pod is running: ```shell kubectl get pods --namespace=kube-system ``` + ``` NAME READY STATUS RESTARTS AGE .... @@ -125,8 +121,10 @@ The control plane creates the lock objects for you, but the namespace must alrea You can use the `kube-system` namespace. {{< /note >}} -If RBAC is enabled on your cluster, you must update the `system:kube-scheduler` cluster role. Add your scheduler name to the resourceNames of the rule applied for `endpoints` and `leases` resources, as in the following example: -``` +If RBAC is enabled on your cluster, you must update the `system:kube-scheduler` cluster role. +Add your scheduler name to the resourceNames of the rule applied for `endpoints` and `leases` resources, as in the following example: + +```shell kubectl edit clusterrole system:kube-scheduler ``` @@ -134,10 +132,11 @@ kubectl edit clusterrole system:kube-scheduler ## Specify schedulers for pods -Now that our second scheduler is running, let's create some pods, and direct them to be scheduled by either the default scheduler or the one we just deployed. In order to schedule a given pod using a specific scheduler, we specify the name of the +Now that our second scheduler is running, let's create some pods, and direct them +to be scheduled by either the default scheduler or the one we just deployed. +In order to schedule a given pod using a specific scheduler, we specify the name of the scheduler in that pod spec. Let's look at three examples. - - Pod spec without any scheduler name {{< codenew file="admin/sched/pod1.yaml" >}} @@ -147,9 +146,9 @@ scheduler in that pod spec. Let's look at three examples. Save this file as `pod1.yaml` and submit it to the Kubernetes cluster. -```shell -kubectl create -f pod1.yaml -``` + ```shell + kubectl create -f pod1.yaml + ``` - Pod spec with `default-scheduler` @@ -160,9 +159,9 @@ kubectl create -f pod1.yaml Save this file as `pod2.yaml` and submit it to the Kubernetes cluster. -```shell -kubectl create -f pod2.yaml -``` + ```shell + kubectl create -f pod2.yaml + ``` - Pod spec with `my-scheduler` @@ -174,17 +173,15 @@ kubectl create -f pod2.yaml Save this file as `pod3.yaml` and submit it to the Kubernetes cluster. -```shell -kubectl create -f pod3.yaml -``` + ```shell + kubectl create -f pod3.yaml + ``` Verify that all three pods are running. 
-```shell
-kubectl get pods
-```
-
-
+  ```shell
+  kubectl get pods
+  ```

@@ -206,4 +203,3 @@ verify that the pods were scheduled by the desired schedulers.
kubectl get events
```

-
diff --git a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md
index 1ad8e084af16e..09ab41ea4c417 100644
--- a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md
+++ b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md
@@ -13,19 +13,15 @@ This page explains how to add versioning information to
[CustomResourceDefinitions](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#customresourcedefinition-v1beta1-apiextensions), to indicate the stability
level of your CustomResourceDefinitions or advance your API to a new version with conversion
between API representations. It also describes how to upgrade an object from one version to another.

-
-
## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}}

-You should have a initial understanding of [custom resources](/docs/concepts/api-extension/custom-resources/).
+You should have an initial understanding of [custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).

{{< version-check >}}

-
-
## Overview

@@ -291,7 +287,9 @@ conversions that call an external service in case a conversion is required. For
* Watch is created in one version but the changed object is stored in another version.
* custom resource PUT request is in a different version than storage version.

-To cover all of these cases and to optimize conversion by the API server, the conversion requests may contain multiple objects in order to minimize the external calls. The webhook should perform these conversions independently.
+To cover all of these cases and to optimize conversion by the API server,
+the conversion requests may contain multiple objects in order to minimize the external calls.
+The webhook should perform these conversions independently.

### Write a conversion webhook server

@@ -302,7 +300,12 @@ that is validated in a Kubernetes e2e test. The webhook handles the
results wrapped in `ConversionResponse`. Note that the request contains a list of custom resources
that need to be converted independently without changing the order of objects.

-The example server is organized in a way to be reused for other conversions. Most of the common code are located in the [framework file](https://github.com/kubernetes/kubernetes/tree/v1.15.0/test/images/crd-conversion-webhook/converter/framework.go) that leaves only [one function](https://github.com/kubernetes/kubernetes/blob/v1.15.0/test/images/crd-conversion-webhook/converter/example_converter.go#L29-L80) to be implemented for different conversions.
+The example server is organized in a way to be reused for other conversions.
+Most of the common code is located in the
+[framework file](https://github.com/kubernetes/kubernetes/tree/v1.15.0/test/images/crd-conversion-webhook/converter/framework.go),
+which leaves only
+[one function](https://github.com/kubernetes/kubernetes/blob/v1.15.0/test/images/crd-conversion-webhook/converter/example_converter.go#L29-L80)
+to be implemented for different conversions.
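+
+As a hedged sketch of how such a webhook is eventually wired into a
+CustomResourceDefinition (the service name, namespace, and path match the
+deployment assumed later on this page; the `caBundle` value is a placeholder
+you must replace):
+
+```yaml
+# Excerpt from a CustomResourceDefinition manifest (apiextensions.k8s.io/v1beta1).
+conversion:
+  strategy: Webhook
+  webhookClientConfig:
+    service:
+      namespace: default
+      name: example-conversion-webhook-server
+      path: /crdconvert
+    # Base64-encoded PEM bundle used to validate the webhook's serving certificate.
+    caBundle: "<base64-encoded-ca-bundle>"
+```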
{{< note >}}
The example conversion webhook server leaves the `ClientAuth` field
@@ -315,12 +318,17 @@ how to [authenticate API servers](/docs/reference/access-authn-authz/extensible-

#### Permissible mutations

-A conversion webhook must not mutate anything inside of `metadata` of the converted object other than `labels` and `annotations`. Attempted changes to `name`, `UID` and `namespace` are rejected and fail the request which caused the conversion. All other changes are just ignored.
+A conversion webhook must not mutate anything inside of `metadata` of the converted object
+other than `labels` and `annotations`.
+Attempted changes to `name`, `UID` and `namespace` are rejected and fail the request
+that caused the conversion. All other changes are ignored.

### Deploy the conversion webhook service

-Documentation for deploying the conversion webhook is the same as for the [admission webhook example service](/docs/reference/access-authn-authz/extensible-admission-controllers/#deploy_the_admission_webhook_service).
-The assumption for next sections is that the conversion webhook server is deployed to a service named `example-conversion-webhook-server` in `default` namespace and serving traffic on path `/crdconvert`.
+Documentation for deploying the conversion webhook is the same as for the
+[admission webhook example service](/docs/reference/access-authn-authz/extensible-admission-controllers/#deploy_the_admission_webhook_service).
+The next sections assume that the conversion webhook server is deployed to a service
+named `example-conversion-webhook-server` in the `default` namespace and serving traffic on the path `/crdconvert`.

{{< note >}}
When the webhook server is deployed into the Kubernetes cluster as a
diff --git a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md
index 78b55b58dc659..834c6983f712a 100644
--- a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md
+++ b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md
@@ -16,7 +16,6 @@ This page shows how to install a
into the Kubernetes API by creating a
[CustomResourceDefinition](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#customresourcedefinition-v1beta1-apiextensions).

-
## {{% heading "prerequisites" %}}

@@ -24,9 +23,7 @@ into the Kubernetes API by creating a
* Make sure your Kubernetes cluster has a master
  version of 1.16.0 or higher to use `apiextensions.k8s.io/v1`, or 1.7.0 or higher for
  `apiextensions.k8s.io/v1beta1`.

-* Read about [custom resources](/docs/concepts/api-extension/custom-resources/).
-
-
+* Read about [custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).

@@ -427,7 +424,9 @@ spec:

The field `someRandomField` has been pruned.

-Note that the `kubectl create` call uses `--validate=false` to skip client-side validation. Because the [OpenAPI validation schemas are also published](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#publish-validation-schema-in-openapi-v2) to kubectl, it will also check for unknown fields and reject those objects long before they are sent to the API server.
+Note that the `kubectl create` call uses `--validate=false` to skip client-side validation.
+Because the [OpenAPI validation schemas are also published](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#publish-validation-schema-in-openapi-v2)
+to kubectl, it will also check for unknown fields and reject those objects long before they are sent to the API server.

### Controlling pruning

@@ -533,11 +532,14 @@ allOf:

With one of those specification, both an integer and a string validate.

-In [Validation Schema Publishing](/docs/tasks/extend-kubernetes/custom-resources/extend-api-custom-resource-definitions/#publish-validation-schema-in-openapi-v2), `x-kubernetes-int-or-string: true` is unfolded to one of the two patterns shown above.
+In [Validation Schema Publishing](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#publish-validation-schema-in-openapi-v2),
+`x-kubernetes-int-or-string: true` is unfolded to one of the two patterns shown above.

### RawExtension

-RawExtensions (as in `runtime.RawExtension` defined in [k8s.io/apimachinery](https://github.com/kubernetes/apimachinery/blob/03ac7a9ade429d715a1a46ceaa3724c18ebae54f/pkg/runtime/types.go#L94)) holds complete Kubernetes objects, i.e. with `apiVersion` and `kind` fields.
+RawExtensions (as in `runtime.RawExtension` defined in
+[k8s.io/apimachinery](https://github.com/kubernetes/apimachinery/blob/03ac7a9ade429d715a1a46ceaa3724c18ebae54f/pkg/runtime/types.go#L94))
+hold complete Kubernetes objects, i.e. objects with `apiVersion` and `kind` fields.

It is possible to specify those embedded objects (both completely without constraints or partially specified) by setting `x-kubernetes-embedded-resource: true`. For example:

@@ -569,8 +571,6 @@ See [Custom resource definition versioning](/docs/tasks/extend-kubernetes/custom
for more information about serving multiple versions of your
CustomResourceDefinition and migrating your objects from one version to another.

-
-
## Advanced topics

diff --git a/content/en/docs/tasks/inject-data-application/define-environment-variable-container.md b/content/en/docs/tasks/inject-data-application/define-environment-variable-container.md
index d75d930c56fae..cbc3c45260b11 100644
--- a/content/en/docs/tasks/inject-data-application/define-environment-variable-container.md
+++ b/content/en/docs/tasks/inject-data-application/define-environment-variable-container.md
@@ -9,17 +9,10 @@ weight: 20

This page shows how to define environment variables for a container
in a Kubernetes Pod.

-
-
-
## {{% heading "prerequisites" %}}

-
{{< include "task-tutorial-prereqs.md" >}}

-
-
-
## Define an environment variable for a container

@@ -123,13 +116,10 @@ spec:

Upon creation, the command `echo Warm greetings to The Most Honorable Kubernetes`
is run on the container.

-
-
## {{% heading "whatsnext" %}}

-
* Learn more about [environment variables](/docs/tasks/inject-data-application/environment-variable-expose-pod-information/).
-* Learn about [using secrets as environment variables](/docs/user-guide/secrets/#using-secrets-as-environment-variables).
+* Learn about [using secrets as environment variables](/docs/concepts/configuration/secret/#using-secrets-as-environment-variables); a brief sketch follows this list.
* See [EnvVarSource](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#envvarsource-v1-core).
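+
+As a brief sketch of the secret-as-environment-variable pattern linked above
+(the secret name `backend-user` and key `username` are illustrative assumptions,
+not taken from this page):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: env-from-secret-demo
+spec:
+  containers:
+  - name: demo
+    image: busybox
+    command: ["/bin/sh", "-c", "echo $SECRET_USERNAME && sleep 3600"]
+    env:
+    - name: SECRET_USERNAME
+      valueFrom:
+        secretKeyRef:
+          name: backend-user
+          key: username
+```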
diff --git a/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md b/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md index e5f0d3a6b7afa..693e730a093ff 100644 --- a/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md +++ b/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md @@ -128,8 +128,8 @@ You can read more about removing jobs in [garbage collection](/docs/concepts/wor ## Writing a Cron Job Spec As with all other Kubernetes configs, a cron job needs `apiVersion`, `kind`, and `metadata` fields. For general -information about working with config files, see [deploying applications](/docs/user-guide/deploying-applications), -and [using kubectl to manage resources](/docs/user-guide/working-with-resources) documents. +information about working with config files, see [deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/), +and [using kubectl to manage resources](/docs/concepts/overview/working-with-objects/object-management/) documents. A cron job config also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status). @@ -142,7 +142,8 @@ All modifications to a cron job, especially its `.spec`, are applied only to the The `.spec.schedule` is a required field of the `.spec`. It takes a [Cron](https://en.wikipedia.org/wiki/Cron) format string, such as `0 * * * *` or `@hourly`, as schedule time of its jobs to be created and executed. -The format also includes extended `vixie cron` step values. As explained in the [FreeBSD manual](https://www.freebsd.org/cgi/man.cgi?crontab%285%29): +The format also includes extended `vixie cron` step values. As explained in the +[FreeBSD manual](https://www.freebsd.org/cgi/man.cgi?crontab%285%29): > Step values can be used in conjunction with ranges. Following a range > with `/` specifies skips of the number's value through the diff --git a/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md b/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md index 346fbdda8d1bd..1bbb49a256cd3 100644 --- a/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md +++ b/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md @@ -17,25 +17,20 @@ from a task queue, completes it, deletes it from the queue, and exits. Here is an overview of the steps in this example: 1. **Start a message queue service.** In this example, we use RabbitMQ, but you could use another - one. In practice you would set up a message queue service once and reuse it for many jobs. + one. In practice you would set up a message queue service once and reuse it for many jobs. 1. **Create a queue, and fill it with messages.** Each message represents one task to be done. In - this example, a message is just an integer that we will do a lengthy computation on. + this example, a message is just an integer that we will do a lengthy computation on. 1. **Start a Job that works on tasks from the queue**. The Job starts several pods. Each pod takes - one task from the message queue, processes it, and repeats until the end of the queue is reached. - - - + one task from the message queue, processes it, and repeats until the end of the queue is reached. ## {{% heading "prerequisites" %}} Be familiar with the basic, -non-parallel, use of [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/). +non-parallel, use of [Job](/docs/concepts/workloads/controllers/job/). 
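+
+For orientation, a minimal non-parallel Job looks roughly like this sketch (the
+image and command are illustrative assumptions):
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: hello-once
+spec:
+  template:
+    spec:
+      containers:
+      - name: hello
+        image: busybox
+        command: ["echo", "hello from a one-off Job"]
+      # A Job's pod template must use a restart policy of Never or OnFailure.
+      restartPolicy: Never
+```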
{{< include "task-tutorial-prereqs.md" >}} - - ## Starting a message queue service @@ -304,7 +299,7 @@ do not need to modify your "worker" program to be aware that there is a work que It does require that you run a message queue service. If running a queue service is inconvenient, you may -want to consider one of the other [job patterns](/docs/concepts/jobs/run-to-completion-finite-workloads/#job-patterns). +want to consider one of the other [job patterns](/docs/concepts/workloads/controllers/job/#job-patterns). This approach creates a pod for every work item. If your work items only take a few seconds, though, creating a Pod for every work item may add a lot of overhead. Consider another diff --git a/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md b/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md index f502113c8ffda..7f3c30121edec 100644 --- a/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md +++ b/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md @@ -16,31 +16,24 @@ from a task queue, processes it, and repeats until the end of the queue is reach Here is an overview of the steps in this example: 1. **Start a storage service to hold the work queue.** In this example, we use Redis to store - our work items. In the previous example, we used RabbitMQ. In this example, we use Redis and - a custom work-queue client library because AMQP does not provide a good way for clients to - detect when a finite-length work queue is empty. In practice you would set up a store such - as Redis once and reuse it for the work queues of many jobs, and other things. + our work items. In the previous example, we used RabbitMQ. In this example, we use Redis and + a custom work-queue client library because AMQP does not provide a good way for clients to + detect when a finite-length work queue is empty. In practice you would set up a store such + as Redis once and reuse it for the work queues of many jobs, and other things. 1. **Create a queue, and fill it with messages.** Each message represents one task to be done. In - this example, a message is just an integer that we will do a lengthy computation on. + this example, a message is just an integer that we will do a lengthy computation on. 1. **Start a Job that works on tasks from the queue**. The Job starts several pods. Each pod takes - one task from the message queue, processes it, and repeats until the end of the queue is reached. - - - + one task from the message queue, processes it, and repeats until the end of the queue is reached. ## {{% heading "prerequisites" %}} {{< include "task-tutorial-prereqs.md" >}} - - Be familiar with the basic, -non-parallel, use of [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/). - - +non-parallel, use of [Job](/docs/concepts/workloads/controllers/job/). @@ -227,14 +220,13 @@ Working on lemon As you can see, one of our pods worked on several work units. - - ## Alternatives If running a queue service or modifying your containers to use a work queue is inconvenient, you may -want to consider one of the other [job patterns](/docs/concepts/jobs/run-to-completion-finite-workloads/#job-patterns). +want to consider one of the other +[job patterns](/docs/concepts/workloads/controllers/job/#job-patterns). 
If you have a continuous stream of background processing work to run, then consider running your background workers with a `ReplicaSet` instead, diff --git a/content/en/docs/tasks/job/parallel-processing-expansion.md b/content/en/docs/tasks/job/parallel-processing-expansion.md index 3477be2650905..e92fa9f5bb657 100644 --- a/content/en/docs/tasks/job/parallel-processing-expansion.md +++ b/content/en/docs/tasks/job/parallel-processing-expansion.md @@ -17,12 +17,10 @@ The sample Jobs process each item simply by printing a string then pausing. See [using Jobs in real workloads](#using-jobs-in-real-workloads) to learn about how this pattern fits more realistic use cases. - ## {{% heading "prerequisites" %}} - You should be familiar with the basic, -non-parallel, use of [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/). +non-parallel, use of [Job](/docs/concepts/workloads/controllers/job/). {{< include "task-tutorial-prereqs.md" >}} @@ -33,12 +31,11 @@ To follow the advanced templating example, you need a working installation of library for Python. Once you have Python set up, you can install Jinja2 by running: + ```shell pip install --user jinja2 ``` - - ## Create Jobs based on a template @@ -305,7 +302,7 @@ If you plan to create a large number of Job objects, you may find that: on Jobs: the API server permanently rejects some of your requests when you create a great deal of work in one batch. -There are other [job patterns](/docs/concepts/jobs/run-to-completion-finite-workloads/#job-patterns) +There are other [job patterns](/docs/concepts/workloads/controllers/job/#job-patterns) that you can use to process large amounts of work without creating very many Job objects. diff --git a/content/en/docs/tasks/manage-daemon/update-daemon-set.md b/content/en/docs/tasks/manage-daemon/update-daemon-set.md index b9168ed098121..f9e35cb0f55c9 100644 --- a/content/en/docs/tasks/manage-daemon/update-daemon-set.md +++ b/content/en/docs/tasks/manage-daemon/update-daemon-set.md @@ -10,17 +10,10 @@ weight: 10 This page shows how to perform a rolling update on a DaemonSet. - - - ## {{% heading "prerequisites" %}} - * The DaemonSet rolling update feature is only supported in Kubernetes version 1.6 or later. - - - ## DaemonSet Update Strategy @@ -164,7 +157,7 @@ make room for new DaemonSet pods. {{< note >}} This will cause service disruption when deleted pods are not controlled by any controllers or pods are not -replicated. This does not respect [PodDisruptionBudget](/docs/tasks/configure-pod-container/configure-pod-disruption-budget/) +replicated. This does not respect [PodDisruptionBudget](/docs/tasks/run-application/configure-pdb/) either. {{< /note >}} diff --git a/content/en/docs/tasks/tls/manual-rotation-of-ca-certificates.md b/content/en/docs/tasks/tls/manual-rotation-of-ca-certificates.md index a55ff3b5fdd6c..9d7516aeef030 100644 --- a/content/en/docs/tasks/tls/manual-rotation-of-ca-certificates.md +++ b/content/en/docs/tasks/tls/manual-rotation-of-ca-certificates.md @@ -14,53 +14,54 @@ This page shows how to manually rotate the certificate authority (CA) certificat - For more information about authentication in Kubernetes, see [Authenticating](/docs/reference/access-authn-authz/authentication). -- For more information about best practices for CA certificates, see [Single root CA](docs/setup/best-practices/certificates/#single-root-ca). +- For more information about best practices for CA certificates, see [Single root CA](/docs/setup/best-practices/certificates/#single-root-ca). 
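+
+Before rotating, it can help to confirm which CA certificate the cluster currently
+uses and when it expires. A minimal sketch (the path assumes a kubeadm-style
+certificate layout; adjust it to your certificates directory):
+
+```shell
+# Print the subject and expiry date of the CA certificate currently in use.
+openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -subject -enddate
+```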
## Rotate the CA certificates manually

{{< caution >}}
-
Make sure to back up your certificate directory along with configuration files and any other necessary files.

-This approach assumes operation of the Kubernetes control plane in a HA configuration with multiple API servers. Graceful termination of the API server is also assumed so clients can cleanly disconnect from one API server and reconnect to another.
+This approach assumes operation of the Kubernetes control plane in an HA configuration with multiple API servers.
+Graceful termination of the API server is also assumed so clients can cleanly disconnect from one API server and reconnect to another.

Configurations with a single API server will experience unavailability while the API server is being restarted.

-
{{< /caution >}}

-1. Distribute the new CA certificates and private keys (ex: `ca.crt`, `ca.key`, `front-proxy-ca.crt`, and `front-proxy-ca.key`) to all your control plane nodes in the Kubernetes certificates directory.
+1. Distribute the new CA certificates and private keys
+   (ex: `ca.crt`, `ca.key`, `front-proxy-ca.crt`, and `front-proxy-ca.key`)
+   to all your control plane nodes in the Kubernetes certificates directory.

1. Update *Kubernetes controller manager's* `--root-ca-file` to include both old and new CA and restart controller manager.

-   Any service account created after this point will get secrets that include both old and new CAs.
-
-   {{< note >}}
+   Any service account created after this point will get secrets that include both old and new CAs.

-   Remove the flag `--client-ca-file` from the *Kubernetes controller manager* configuration. You can also replace the existing client CA file or change this configuration item to reference a new, updated CA. [Issue 1350](https://github.com/kubernetes/kubeadm/issues/1350) tracks an issue with *Kubernetes controller manager* being unable to accept a CA bundle.
-
-   {{< /note >}}
+   {{< note >}}
+   Remove the flag `--client-ca-file` from the *Kubernetes controller manager* configuration.
+   You can also replace the existing client CA file or change this configuration item to reference a new, updated CA.
+   [Issue 1350](https://github.com/kubernetes/kubeadm/issues/1350) tracks an issue with *Kubernetes controller manager* being unable to accept a CA bundle.
+   {{< /note >}}

1. Update all service account tokens to include both old and new CA certificates.

-   If any pods are started before new CA is used by API servers, they will get this update and trust both old and new CAs.
+   If any pods are started before the new CA is used by the API servers, they will get this update and trust both old and new CAs.

-   ```shell
-   base64_encoded_ca="$(base64 )"
+   ```shell
+   base64_encoded_ca="$(base64 )"

-   for namespace in $(kubectl get ns --no-headers | awk '{print $1}'); do
-   for token in $(kubectl get secrets --namespace "$namespace" --field-selector type=kubernetes.io/service-account-token -o name); do
-   kubectl get $token --namespace "$namespace" -o yaml | \
-   /bin/sed "s/\(ca.crt:\).*/\1 ${base64_encoded_ca}" | \
-   kubectl apply -f -
-   done
-   done
-   ```
+   for namespace in $(kubectl get ns --no-headers | awk '{print $1}'); do
+       for token in $(kubectl get secrets --namespace "$namespace" --field-selector type=kubernetes.io/service-account-token -o name); do
+           kubectl get $token --namespace "$namespace" -o yaml | \
+               /bin/sed "s/\(ca.crt:\).*/\1 ${base64_encoded_ca}/" | \
+               kubectl apply -f -
+       done
+   done
+   ```

1. Restart all pods using in-cluster configs (ex: kube-proxy, coredns, etc) so they can use the updated certificate authority data from *ServiceAccount* secrets.

-    * Make sure coredns, kube-proxy and other pods using in-cluster configs are working as expected.
+   * Make sure coredns, kube-proxy and other pods using in-cluster configs are working as expected.

1. Append the both old and new CA to the file against `--client-ca-file` and `--kubelet-certificate-authority` flag in the `kube-apiserver` configuration.

@@ -68,77 +69,88 @@ Configurations with a single API server will experience unavailability while the

1. Update certificates for user accounts by replacing the content of `client-certificate-data` and `client-key-data` respectively.

-    For information about creating certificates for individual user accounts, see [Configure certificates for user accounts](/docs/setup/best-practices/certificates/#configure-certificates-for-user-accounts).
+   For information about creating certificates for individual user accounts, see
+   [Configure certificates for user accounts](/docs/setup/best-practices/certificates/#configure-certificates-for-user-accounts).

-    Additionally, update the `certificate-authority-data` section in the kubeconfig files, respectively with Base64-encoded old and new certificate authority data
+   Additionally, update the `certificate-authority-data` section in the kubeconfig files,
+   respectively with Base64-encoded old and new certificate authority data.

1. Follow below steps in a rolling fashion.

-    1. Restart any other *[aggregated api servers](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)* or *webhook handlers* to trust the new CA certificates.
-
-    1. Restart the kubelet by update the file against `clientCAFile` in kubelet configuration and `certificate-authority-data` in kubelet.conf to use both the old and new CA on all nodes.
+   1. Restart any other *[aggregated api servers](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)*
+      or *webhook handlers* to trust the new CA certificates.

-        If your kubelet is not using client certificate rotation update `client-certificate-data` and `client-key-data` in kubelet.conf on all nodes along with the kubelet client certificate file usually found in `/var/lib/kubelet/pki`.
+   1. Restart the kubelet by updating the file against `clientCAFile` in the kubelet configuration and
+      `certificate-authority-data` in kubelet.conf to use both the old and new CA on all nodes.
+
+      If your kubelet is not using client certificate rotation, update `client-certificate-data` and
+      `client-key-data` in kubelet.conf on all nodes along with the kubelet client certificate file
+      usually found in `/var/lib/kubelet/pki`.

-    1. Restart API servers with the certificates (`apiserver.crt`, `apiserver-kubelet-client.crt` and `front-proxy-client.crt`) signed by new CA. You can use the existing private keys or new private keys. If you changed the private keys then update these in the Kubernetes certificates directory as well.
-
-        Since the pod trusts both old and new CAs, there will be a momentarily disconnection after which the pod's kube client will reconnect to the new API server that uses the certificate signed by the new CA.
+   1. Restart API servers with the certificates (`apiserver.crt`, `apiserver-kubelet-client.crt` and
+      `front-proxy-client.crt`) signed by the new CA.
+      You can use the existing private keys or new private keys.
+      If you changed the private keys then update these in the Kubernetes certificates directory as well.

-    * Restart Scheduler to use the new CAs.
+      Since the pod trusts both old and new CAs, there will be a momentary disconnection
+      after which the pod's kube client will reconnect to the new API server
+      that uses the certificate signed by the new CA.

-    * Make sure control plane components logs no TLS errors.
+      * Restart the scheduler to use the new CAs.

-    {{< note >}}
+      * Make sure the control plane components log no TLS errors.

-    To generate certificates and private keys for your cluster using the `openssl` command line tool, see [Certificates (`openssl`)](/docs/concepts/cluster-administration/certificates/#openssl).
-    You can also use [`cfssl`](/docs/concepts/cluster-administration/certificates/#cfssl).
+      {{< note >}}
+      To generate certificates and private keys for your cluster using the `openssl` command line tool, see [Certificates (`openssl`)](/docs/concepts/cluster-administration/certificates/#openssl).
+      You can also use [`cfssl`](/docs/concepts/cluster-administration/certificates/#cfssl).
+      {{< /note >}}

-    {{< /note >}}
+   1. Annotate any Daemonsets and Deployments to trigger pod replacement in a safer rolling fashion.

-    1. Annotate any Daemonsets and Deployments to trigger pod replacement in a safer rolling fashion.
+      Example:

-        Example:
+      ```shell
+      for namespace in $(kubectl get namespace -o jsonpath='{.items[*].metadata.name}'); do
+          for name in $(kubectl get deployments -n $namespace -o jsonpath='{.items[*].metadata.name}'); do
+              kubectl patch deployment -n ${namespace} ${name} -p '{"spec":{"template":{"metadata":{"annotations":{"ca-rotation": "1"}}}}}';
+          done
+          for name in $(kubectl get daemonset -n $namespace -o jsonpath='{.items[*].metadata.name}'); do
+              kubectl patch daemonset -n ${namespace} ${name} -p '{"spec":{"template":{"metadata":{"annotations":{"ca-rotation": "1"}}}}}';
+          done
+      done
+      ```

-    ```shell
-    for namespace in $(kubectl get namespace -o jsonpath='{.items[*].metadata.name}'); do
-    for name in $(kubectl get deployments -n $namespace -o jsonpath='{.items[*].metadata.name}'); do
-    kubectl patch deployment -n ${namespace} ${name} -p '{"spec":{"template":{"metadata":{"annotations":{"ca-rotation": "1"}}}}}';
-    done
-    for name in $(kubectl get daemonset -n $namespace -o jsonpath='{.items[*].metadata.name}'); do
-    kubectl patch daemonset -n ${namespace} ${name} -p '{"spec":{"template":{"metadata":{"annotations":{"ca-rotation": "1"}}}}}';
-    done
-    done
-    ```
-
-    {{< note >}}
-
-    To limit the number of concurrent disruptions that your application experiences, see [configure pod disruption budget](docs/tasks/run-application/configure-pdb/).
-
-    {{< /note >}}
+      {{< note >}}
+      To limit the number of concurrent disruptions that your application experiences,
+      see [configure pod disruption budget](/docs/tasks/run-application/configure-pdb/).
+      {{< /note >}}

1. If your cluster is using bootstrap tokens to join nodes, update the ConfigMap `cluster-info` in the `kube-public` namespace with new CA.

-    ```shell
-    base64_encoded_ca="$(base64 /etc/kubernetes/pki/ca.crt)"
+   ```shell
+   base64_encoded_ca="$(base64 /etc/kubernetes/pki/ca.crt)"

-    kubectl get cm/cluster-info --namespace kube-public -o yaml | \
-    /bin/sed "s/\(certificate-authority-data:\).*/\1 ${base64_encoded_ca}" | \
-    kubectl apply -f -
-    ```
+   kubectl get cm/cluster-info --namespace kube-public -o yaml | \
+       /bin/sed "s/\(certificate-authority-data:\).*/\1 ${base64_encoded_ca}/" | \
+       kubectl apply -f -
+   ```

1. Verify the cluster functionality.

-    1. Validate the logs from control plane components, along with the kubelet and the kube-proxy are not throwing any tls errors, see [looking at the logs](/docs/tasks/debug-application-cluster/debug-cluster/#looking-at-logs).
+   1. Validate that the logs from the control plane components, the kubelet, and
+      kube-proxy show no TLS errors; see
+      [looking at the logs](/docs/tasks/debug-application-cluster/debug-cluster/#looking-at-logs).

-    1. Validate logs from any aggregated api servers and pods using in-cluster config.
+   1. Validate logs from any aggregated api servers and pods using in-cluster config.

1. Once the cluster functionality is successfully verified:

-    1. Update all service account tokens to include new CA certificate only.
+   1. Update all service account tokens to include the new CA certificate only.
+
+      * All pods using an in-cluster kubeconfig will eventually need to be restarted to pick up the new SA secret for the old CA to be completely untrusted.

-        * All pods using an in-cluster kubeconfig will eventually need to be restarted to pick up the new SA secret for the old CA to be completely untrusted.
+   1. Restart the control plane components by removing the old CA from the kubeconfig files and the files against the `--client-ca-file` and `--root-ca-file` flags respectively.

-    1. Restart the control plane components by removing the old CA from the kubeconfig files and the files against `--client-ca-file`, `--root-ca-file` flags resp.
+   1. Restart the kubelet by removing the old CA from the file against the `clientCAFile` flag and the kubelet kubeconfig file.

-    1. Restart kubelet by removing the old CA from file against the `clientCAFile` flag and kubelet kubeconfig file.
diff --git a/content/en/docs/tasks/tools/install-kubectl.md b/content/en/docs/tasks/tools/install-kubectl.md
index 22b960751f23c..e3d6c0aa9c0e0 100644
--- a/content/en/docs/tasks/tools/install-kubectl.md
+++ b/content/en/docs/tasks/tools/install-kubectl.md
@@ -310,7 +310,12 @@ You can install kubectl as part of the Google Cloud SDK.

## Verifying kubectl configuration

-In order for kubectl to find and access a Kubernetes cluster, it needs a [kubeconfig file](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/), which is created automatically when you create a cluster using [kube-up.sh](https://github.com/kubernetes/kubernetes/blob/master/cluster/kube-up.sh) or successfully deploy a Minikube cluster. By default, kubectl configuration is located at `~/.kube/config`.
+In order for kubectl to find and access a Kubernetes cluster, it needs a
+[kubeconfig file](/docs/concepts/configuration/organize-cluster-access-kubeconfig/),
+which is created automatically when you create a cluster using
+[kube-up.sh](https://github.com/kubernetes/kubernetes/blob/master/cluster/kube-up.sh)
+or successfully deploy a Minikube cluster.
+By default, kubectl configuration is located at `~/.kube/config`.

Check that kubectl is properly configured by getting the cluster state:

@@ -518,5 +523,7 @@ compinit

* [Install Minikube](/docs/tasks/tools/install-minikube/)
* See the [getting started guides](/docs/setup/) for more about creating clusters.
* [Learn how to launch and expose your application.](/docs/tasks/access-application-cluster/service-access-application-cluster/)
-* If you need access to a cluster you didn't create, see the [Sharing Cluster Access document](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).
+* If you need access to a cluster you didn't create, see the + [Sharing Cluster Access document](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/). * Read the [kubectl reference docs](/docs/reference/kubectl/kubectl/) +