From ecc27bbbe70f92d031fa52be76bb9471b2e83152 Mon Sep 17 00:00:00 2001 From: Karen Bradshaw Date: Sat, 30 May 2020 15:10:23 -0400 Subject: [PATCH] add en pages --- content/en/docs/concepts/_index.md | 15 +++++----- .../concepts/architecture/cloud-controller.md | 14 ++++----- .../control-plane-node-communication.md | 8 ++--- .../docs/concepts/architecture/controller.md | 15 +++++----- .../en/docs/concepts/architecture/nodes.md | 15 +++++----- .../concepts/cluster-administration/addons.md | 10 +++---- .../cluster-administration/certificates.md | 10 +++---- .../cluster-administration/cloud-providers.md | 10 +++---- .../cluster-administration-overview.md | 10 +++---- .../cluster-administration/flow-control.md | 14 ++++----- .../kubelet-garbage-collection.md | 15 +++++----- .../cluster-administration/logging.md | 10 +++---- .../manage-deployment.md | 15 +++++----- .../cluster-administration/monitoring.md | 15 +++++----- .../cluster-administration/networking.md | 15 +++++----- .../cluster-administration/proxies.md | 10 +++---- .../docs/concepts/configuration/configmap.md | 15 +++++----- .../manage-resources-containers.md | 15 +++++----- .../organize-cluster-access-kubeconfig.md | 15 +++++----- .../docs/concepts/configuration/overview.md | 10 +++---- .../concepts/configuration/pod-overhead.md | 15 +++++----- .../configuration/pod-priority-preemption.md | 15 +++++----- .../configuration/resource-bin-packing.md | 10 +++---- .../en/docs/concepts/configuration/secret.md | 8 ++--- .../containers/container-environment.md | 15 +++++----- .../containers/container-lifecycle-hooks.md | 15 +++++----- content/en/docs/concepts/containers/images.md | 10 +++---- .../en/docs/concepts/containers/overview.md | 15 +++++----- .../docs/concepts/containers/runtime-class.md | 15 +++++----- .../docs/concepts/example-concept-template.md | 15 +++++----- .../api-extension/apiserver-aggregation.md | 15 +++++----- .../api-extension/custom-resources.md | 15 +++++----- .../compute-storage-net/device-plugins.md | 15 +++++----- .../compute-storage-net/network-plugins.md | 15 +++++----- .../extend-kubernetes/extend-cluster.md | 15 +++++----- .../concepts/extend-kubernetes/operator.md | 14 ++++----- .../poseidon-firmament-alternate-scheduler.md | 15 +++++----- .../extend-kubernetes/service-catalog.md | 15 +++++----- .../en/docs/concepts/overview/components.md | 15 +++++----- .../docs/concepts/overview/kubernetes-api.md | 16 +++++----- .../concepts/overview/what-is-kubernetes.md | 15 +++++----- .../working-with-objects/annotations.md | 15 +++++----- .../working-with-objects/common-labels.md | 10 +++---- .../kubernetes-objects.md | 15 +++++----- .../overview/working-with-objects/labels.md | 10 +++---- .../overview/working-with-objects/names.md | 15 +++++----- .../working-with-objects/namespaces.md | 15 +++++----- .../working-with-objects/object-management.md | 15 +++++----- .../en/docs/concepts/policy/limit-range.md | 15 +++++----- .../concepts/policy/pod-security-policy.md | 15 +++++----- .../docs/concepts/policy/resource-quotas.md | 15 +++++----- .../scheduling-eviction/assign-pod-node.md | 15 +++++----- .../scheduling-eviction/kube-scheduler.md | 15 +++++----- .../scheduler-perf-tuning.md | 10 +++---- .../scheduling-framework.md | 9 +++--- .../taint-and-toleration.md | 15 +++++----- content/en/docs/concepts/security/overview.md | 15 +++++----- .../security/pod-security-standards.md | 10 +++---- ...ries-to-pod-etc-hosts-with-host-aliases.md | 10 +++---- .../connect-applications-service.md | 15 +++++----- 
.../services-networking/dns-pod-service.md | 14 ++++----- .../services-networking/dual-stack.md | 15 +++++----- .../services-networking/endpoint-slices.md | 15 +++++----- .../ingress-controllers.md | 15 +++++----- .../concepts/services-networking/ingress.md | 15 +++++----- .../services-networking/network-policies.md | 15 +++++----- .../services-networking/service-topology.md | 15 +++++----- .../concepts/services-networking/service.md | 15 +++++----- .../concepts/storage/dynamic-provisioning.md | 10 +++---- .../concepts/storage/persistent-volumes.md | 14 ++++----- .../docs/concepts/storage/storage-classes.md | 10 +++---- .../docs/concepts/storage/storage-limits.md | 10 +++---- .../concepts/storage/volume-pvc-datasource.md | 10 +++---- .../storage/volume-snapshot-classes.md | 10 +++---- .../docs/concepts/storage/volume-snapshots.md | 10 +++---- content/en/docs/concepts/storage/volumes.md | 13 ++++---- .../workloads/controllers/cron-jobs.md | 15 +++++----- .../workloads/controllers/daemonset.md | 10 +++---- .../workloads/controllers/deployment.md | 10 +++---- .../controllers/garbage-collection.md | 15 +++++----- .../controllers/jobs-run-to-completion.md | 10 +++---- .../workloads/controllers/replicaset.md | 10 +++---- .../controllers/replicationcontroller.md | 10 +++---- .../workloads/controllers/statefulset.md | 15 +++++----- .../workloads/controllers/ttlafterfinished.md | 15 +++++----- .../concepts/workloads/pods/disruptions.md | 15 +++++----- .../workloads/pods/ephemeral-containers.md | 10 +++---- .../workloads/pods/init-containers.md | 15 +++++----- .../concepts/workloads/pods/pod-lifecycle.md | 15 +++++----- .../concepts/workloads/pods/pod-overview.md | 15 +++++----- .../pods/pod-topology-spread-constraints.md | 10 +++---- .../en/docs/concepts/workloads/pods/pod.md | 10 +++---- .../docs/concepts/workloads/pods/podpreset.md | 15 +++++----- content/en/docs/contribute/_index.md | 10 +++---- content/en/docs/contribute/advanced.md | 10 +++---- .../generate-ref-docs/contribute-upstream.md | 20 +++++++------ .../contribute/generate-ref-docs/kubectl.md | 20 +++++++------ .../generate-ref-docs/kubernetes-api.md | 20 +++++++------ .../kubernetes-components.md | 20 +++++++------ .../generate-ref-docs/quickstart.md | 20 +++++++------ content/en/docs/contribute/localization.md | 15 +++++----- .../new-content/blogs-case-studies.md | 15 +++++----- .../contribute/new-content/new-features.md | 9 +++--- .../docs/contribute/new-content/open-a-pr.md | 15 +++++----- .../docs/contribute/new-content/overview.md | 10 +++---- content/en/docs/contribute/participating.md | 15 +++++----- content/en/docs/contribute/review/_index.md | 8 ++--- .../docs/contribute/review/for-approvers.md | 9 +++--- .../docs/contribute/review/reviewing-prs.md | 9 +++--- .../en/docs/contribute/style/content-guide.md | 15 +++++----- .../contribute/style/content-organization.md | 15 +++++----- .../contribute/style/hugo-shortcodes/index.md | 15 +++++----- .../docs/contribute/style/page-templates.md | 21 ++++++------- .../en/docs/contribute/style/style-guide.md | 15 +++++----- .../docs/contribute/style/write-new-topic.md | 20 +++++++------ .../contribute/suggesting-improvements.md | 10 +++---- .../en/docs/home/supported-doc-versions.md | 10 +++---- content/en/docs/reference/_index.md | 10 +++---- .../docs/reference/access-authn-authz/abac.md | 10 +++---- .../admission-controllers.md | 10 +++---- .../access-authn-authz/authentication.md | 10 +++---- .../access-authn-authz/authorization.md | 15 +++++----- 
.../access-authn-authz/bootstrap-tokens.md | 10 +++---- .../certificate-signing-requests.md | 15 +++++----- .../access-authn-authz/controlling-access.md | 10 +++---- .../extensible-admission-controllers.md | 9 +++--- .../docs/reference/access-authn-authz/node.md | 10 +++---- .../docs/reference/access-authn-authz/rbac.md | 10 +++---- .../service-accounts-admin.md | 10 +++---- .../reference/access-authn-authz/webhook.md | 10 +++---- .../cloud-controller-manager.md | 10 ++++--- .../feature-gates.md | 15 +++++----- .../kube-apiserver.md | 10 ++++--- .../kube-controller-manager.md | 10 ++++--- .../kube-proxy.md | 10 ++++--- .../kube-scheduler.md | 10 ++++--- .../kubelet-tls-bootstrapping.md | 10 +++---- .../command-line-tools-reference/kubelet.md | 10 ++++--- .../reference/issues-security/security.md | 10 +++---- .../en/docs/reference/kubectl/cheatsheet.md | 15 +++++----- .../en/docs/reference/kubectl/conventions.md | 10 +++---- .../kubectl/docker-cli-to-kubectl.md | 10 +++---- content/en/docs/reference/kubectl/jsonpath.md | 10 +++---- content/en/docs/reference/kubectl/kubectl.md | 15 ++++++---- content/en/docs/reference/kubectl/overview.md | 15 +++++----- .../labels-annotations-taints.md | 10 +++---- .../en/docs/reference/scheduling/policies.md | 15 +++++----- .../en/docs/reference/scheduling/profiles.md | 15 +++++----- .../kubeadm/implementation-details.md | 10 +++---- .../setup-tools/kubeadm/kubeadm-config.md | 15 +++++----- .../setup-tools/kubeadm/kubeadm-init.md | 15 +++++----- .../setup-tools/kubeadm/kubeadm-join.md | 15 +++++----- .../setup-tools/kubeadm/kubeadm-reset.md | 15 +++++----- .../setup-tools/kubeadm/kubeadm-token.md | 15 +++++----- .../setup-tools/kubeadm/kubeadm-upgrade.md | 15 +++++----- .../setup-tools/kubeadm/kubeadm-version.md | 10 +++---- content/en/docs/reference/tools.md | 10 +++---- .../docs/reference/using-api/api-concepts.md | 8 ++--- .../docs/reference/using-api/api-overview.md | 8 ++--- .../reference/using-api/client-libraries.md | 10 +++---- .../reference/using-api/deprecation-policy.md | 10 +++---- content/en/docs/setup/_index.md | 10 +++---- .../docs/setup/best-practices/certificates.md | 10 +++---- .../setup/best-practices/multiple-zones.md | 10 +++---- .../docs/setup/learning-environment/kind.md | 10 +++---- .../setup/learning-environment/minikube.md | 10 +++---- .../container-runtimes.md | 10 +++---- .../on-premises-vm/cloudstack.md | 10 +++---- .../on-premises-vm/dcos.md | 10 +++---- .../on-premises-vm/ovirt.md | 10 +++---- .../production-environment/tools/kops.md | 20 +++++++------ .../tools/kubeadm/control-plane-flags.md | 10 +++---- .../tools/kubeadm/create-cluster-kubeadm.md | 19 ++++++------ .../tools/kubeadm/ha-topology.md | 15 +++++----- .../tools/kubeadm/high-availability.md | 15 +++++----- .../tools/kubeadm/install-kubeadm.md | 17 ++++++----- .../tools/kubeadm/kubelet-integration.md | 10 +++---- .../tools/kubeadm/self-hosting.md | 10 +++---- .../kubeadm/setup-ha-etcd-with-kubeadm.md | 20 +++++++------ .../tools/kubeadm/troubleshooting-kubeadm.md | 10 +++---- .../production-environment/tools/kubespray.md | 14 ++++----- .../production-environment/turnkey/aws.md | 15 +++++----- .../production-environment/turnkey/gce.md | 15 +++++----- .../windows/intro-windows-in-kubernetes.md | 15 +++++----- .../windows/user-guide-windows-containers.md | 10 +++---- .../docs/setup/release/version-skew-policy.md | 8 ++--- content/en/docs/tasks/_index.md | 15 +++++----- .../access-cluster.md | 9 +++--- ...icate-containers-same-pod-shared-volume.md | 24 ++++++++------- 
.../configure-access-multiple-clusters.md | 20 +++++++------ .../configure-cloud-provider-firewall.md | 15 +++++----- .../configure-dns-cluster.md | 10 +++---- .../connecting-frontend-backend.md | 30 +++++++++++-------- .../create-external-load-balancer.md | 15 +++++----- .../ingress-minikube.md | 20 +++++++------ .../list-all-running-container-images.md | 24 ++++++++------- ...port-forward-access-application-cluster.md | 24 ++++++++------- .../service-access-application-cluster.md | 30 +++++++++++-------- .../web-ui-dashboard.md | 15 +++++----- .../configure-aggregation-layer.md | 20 +++++++------ .../custom-resource-definition-versioning.md | 15 +++++----- .../custom-resource-definitions.md | 23 +++++++------- .../http-proxy-access-api.md | 20 +++++++------ .../setup-extension-api-server.md | 20 +++++++------ .../administer-cluster/access-cluster-api.md | 15 +++++----- .../access-cluster-services.md | 15 +++++----- .../change-default-storage-class.md | 20 +++++++------ .../change-pv-reclaim-policy.md | 20 +++++++------ .../administer-cluster/cluster-management.md | 10 +++---- .../configure-multiple-schedulers.md | 19 ++++++------ .../configure-upgrade-etcd.md | 15 +++++----- .../docs/tasks/administer-cluster/coredns.md | 20 +++++++------ .../cpu-management-policies.md | 15 +++++----- .../declare-network-policy.md | 15 +++++----- .../developing-cloud-controller-manager.md | 10 +++---- .../dns-custom-nameservers.md | 19 ++++++------ .../dns-debugging-resolution.md | 15 +++++----- .../dns-horizontal-autoscaling.md | 24 ++++++++------- .../enabling-endpointslices.md | 18 ++++++----- .../enabling-service-topology.md | 18 ++++++----- .../tasks/administer-cluster/encrypt-data.md | 15 +++++----- .../extended-resource-node.md | 20 +++++++------ ...aranteed-scheduling-critical-addon-pods.md | 10 +++---- .../highly-available-master.md | 19 ++++++------ .../tasks/administer-cluster/ip-masq-agent.md | 19 ++++++------ .../tasks/administer-cluster/kms-provider.md | 15 +++++----- .../kubeadm/adding-windows-nodes.md | 25 +++++++++------- .../kubeadm/kubeadm-certs.md | 15 +++++----- .../kubeadm/kubeadm-upgrade.md | 15 +++++----- .../kubeadm/upgrading-windows-nodes.md | 15 +++++----- .../administer-cluster/kubelet-config-file.md | 19 ++++++------ .../limit-storage-consumption.md | 19 ++++++------ .../cpu-constraint-namespace.md | 20 +++++++------ .../manage-resources/cpu-default-namespace.md | 20 +++++++------ .../memory-constraint-namespace.md | 20 +++++++------ .../memory-default-namespace.md | 20 +++++++------ .../quota-memory-cpu-namespace.md | 20 +++++++------ .../manage-resources/quota-pod-namespace.md | 20 +++++++------ .../namespaces-walkthrough.md | 15 +++++----- .../tasks/administer-cluster/namespaces.md | 24 ++++++++------- .../calico-network-policy.md | 20 +++++++------ .../cilium-network-policy.md | 24 ++++++++------- .../kube-router-network-policy.md | 20 +++++++------ .../romana-network-policy.md | 20 +++++++------ .../weave-network-policy.md | 20 +++++++------ .../tasks/administer-cluster/nodelocaldns.md | 17 ++++++----- .../administer-cluster/out-of-resource.md | 10 +++---- .../administer-cluster/quota-api-object.md | 20 +++++++------ .../administer-cluster/reconfigure-kubelet.md | 23 +++++++------- .../reserve-compute-resources.md | 18 +++++------ .../running-cloud-controller.md | 15 +++++----- .../administer-cluster/safely-drain-node.md | 20 +++++++------ .../administer-cluster/securing-a-cluster.md | 15 +++++----- .../administer-cluster/sysctl-cluster.md | 19 ++++++------ 
.../administer-cluster/topology-manager.md | 15 +++++----- .../assign-cpu-resource.md | 20 +++++++------ .../assign-memory-resource.md | 20 +++++++------ .../assign-pods-nodes-using-node-affinity.md | 20 +++++++------ .../assign-pods-nodes.md | 20 +++++++------ .../attach-handler-lifecycle-event.md | 24 ++++++++------- .../configure-pod-container/configure-gmsa.md | 15 +++++----- ...igure-liveness-readiness-startup-probes.md | 20 +++++++------ .../configure-persistent-volume-storage.md | 24 ++++++++------- .../configure-pod-configmap.md | 24 ++++++++------- .../configure-pod-initialization.md | 20 +++++++------ .../configure-projected-volume-storage.md | 20 +++++++------ .../configure-runasusername.md | 19 ++++++------ .../configure-service-account.md | 20 +++++++------ .../configure-volume-storage.md | 20 +++++++------ .../extended-resource.md | 20 +++++++------ .../pull-image-private-registry.md | 20 +++++++------ .../quality-service-pod.md | 20 +++++++------ .../security-context.md | 20 +++++++------ .../share-process-namespace.md | 19 ++++++------ .../configure-pod-container/static-pod.md | 15 +++++----- .../translate-compose-kubernetes.md | 19 ++++++------ .../tasks/debug-application-cluster/audit.md | 15 +++++----- .../tasks/debug-application-cluster/crictl.md | 19 ++++++------ .../debug-application-introspection.md | 15 +++++----- .../debug-application.md | 15 +++++----- .../debug-cluster.md | 10 +++---- .../debug-init-containers.md | 19 ++++++------ .../debug-pod-replication-controller.md | 15 +++++----- .../debug-running-pod.md | 15 +++++----- .../debug-service.md | 15 +++++----- .../debug-stateful-set.md | 20 +++++++------ .../determine-reason-pod-failure.md | 20 +++++++------ .../events-stackdriver.md | 10 +++---- .../tasks/debug-application-cluster/falco.md | 10 +++---- .../get-shell-running-container.md | 24 ++++++++------- .../local-debugging.md | 20 +++++++------ .../logging-elasticsearch-kibana.md | 15 +++++----- .../logging-stackdriver.md | 10 +++---- .../monitor-node-health.md | 19 ++++++------ .../resource-metrics-pipeline.md | 10 +++---- .../resource-usage-monitoring.md | 10 +++---- .../troubleshooting.md | 10 +++---- .../en/docs/tasks/example-task-template.md | 23 +++++++------- .../tasks/extend-kubectl/kubectl-plugins.md | 20 +++++++------ .../define-command-argument-container.md | 20 +++++++------ .../define-environment-variable-container.md | 20 +++++++------ .../distribute-credentials-secure.md | 20 +++++++------ ...nward-api-volume-expose-pod-information.md | 24 ++++++++------- ...ronment-variable-expose-pod-information.md | 20 +++++++------ .../inject-data-application/podpreset.md | 15 +++++----- .../job/automated-tasks-with-cron-jobs.md | 15 +++++----- .../coarse-parallel-processing-work-queue.md | 19 ++++++------ .../fine-parallel-processing-work-queue.md | 23 +++++++------- .../job/parallel-processing-expansion.md | 19 ++++++------ .../manage-daemon/rollback-daemon-set.md | 19 ++++++------ .../tasks/manage-daemon/update-daemon-set.md | 20 +++++++------ .../docs/tasks/manage-gpus/scheduling-gpus.md | 10 +++---- .../manage-hugepages/scheduling-hugepages.md | 15 +++++----- .../declarative-config.md | 18 ++++++----- .../imperative-command.md | 20 +++++++------ .../imperative-config.md | 20 +++++++------ .../kustomization.md | 20 +++++++------ .../update-api-object-kubectl-patch.md | 20 +++++++------ .../docs/tasks/network/validate-dual-stack.md | 15 +++++----- .../tasks/run-application/configure-pdb.md | 19 ++++++------ .../run-application/delete-stateful-set.md | 
20 +++++++------ .../force-delete-stateful-set-pod.md | 20 +++++++------ .../horizontal-pod-autoscale-walkthrough.md | 19 ++++++------ .../horizontal-pod-autoscale.md | 15 +++++----- .../run-replicated-stateful-application.md | 30 +++++++++++-------- ...un-single-instance-stateful-application.md | 25 +++++++++------- .../run-stateless-application-deployment.md | 25 +++++++++------- .../run-application/scale-stateful-set.md | 20 +++++++------ .../install-service-catalog-using-helm.md | 20 +++++++------ .../install-service-catalog-using-sc.md | 20 +++++++------ .../setup-konnectivity/setup-konnectivity.md | 14 ++++----- .../en/docs/tasks/tls/certificate-rotation.md | 15 +++++----- .../tasks/tls/managing-tls-in-a-cluster.md | 15 +++++----- .../en/docs/tasks/tools/install-kubectl.md | 20 +++++++------ .../en/docs/tasks/tools/install-minikube.md | 20 +++++++------ content/en/docs/tutorials/_index.md | 15 +++++----- .../en/docs/tutorials/clusters/apparmor.md | 25 +++++++++------- .../configure-redis-using-configmap.md | 25 +++++++++------- content/en/docs/tutorials/hello-minikube.md | 25 +++++++++------- .../en/docs/tutorials/services/source-ip.md | 30 +++++++++++-------- .../basic-stateful-set.md | 25 +++++++++------- .../stateful-application/cassandra.md | 30 +++++++++++-------- .../mysql-wordpress-persistent-volume.md | 30 +++++++++++-------- .../stateful-application/zookeeper.md | 25 +++++++++------- .../expose-external-ip-address.md | 30 +++++++++++-------- .../guestbook-logs-metrics-with-elk.md | 29 ++++++++++-------- .../stateless-application/guestbook.md | 30 +++++++++++-------- 347 files changed, 2900 insertions(+), 2537 deletions(-) diff --git a/content/en/docs/concepts/_index.md b/content/en/docs/concepts/_index.md index 0cb970fd66e09..ae9ed7545d94e 100644 --- a/content/en/docs/concepts/_index.md +++ b/content/en/docs/concepts/_index.md @@ -1,17 +1,17 @@ --- title: Concepts main_menu: true -content_template: templates/concept +content_type: concept weight: 40 --- -{{% capture overview %}} + The Concepts section helps you learn about the parts of the Kubernetes system and the abstractions Kubernetes uses to represent your {{< glossary_tooltip text="cluster" term_id="cluster" length="all" >}}, and helps you obtain a deeper understanding of how Kubernetes works. -{{% /capture %}} -{{% capture body %}} + + ## Overview @@ -60,12 +60,13 @@ The Kubernetes master is responsible for maintaining the desired state for your The nodes in a cluster are the machines (VMs, physical servers, etc) that run your applications and cloud workflows. The Kubernetes master controls each node; you'll rarely interact with nodes directly. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + If you would like to write a concept page, see [Using Page Templates](/docs/home/contribute/page-templates/) for information about the concept page type and the concept template. -{{% /capture %}} + diff --git a/content/en/docs/concepts/architecture/cloud-controller.md b/content/en/docs/concepts/architecture/cloud-controller.md index 31c0ad9d549af..9a731b684a6f8 100644 --- a/content/en/docs/concepts/architecture/cloud-controller.md +++ b/content/en/docs/concepts/architecture/cloud-controller.md @@ -1,10 +1,10 @@ --- title: Cloud Controller Manager -content_template: templates/concept +content_type: concept weight: 40 --- -{{% capture overview %}} + {{< feature-state state="beta" for_k8s_version="v1.11" >}} @@ -17,9 +17,9 @@ components. 
The cloud-controller-manager is structured using a plugin mechanism that allows different cloud providers to integrate their platforms with Kubernetes. -{{% /capture %}} -{{% capture body %}} + + ## Design @@ -200,8 +200,9 @@ rules: - update ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + [Cloud Controller Manager Administration](/docs/tasks/administer-cluster/running-cloud-controller/#cloud-controller-manager) has instructions on running and managing the cloud controller manager. @@ -212,4 +213,3 @@ The cloud controller manager uses Go interfaces to allow implementations from an The implementation of the shared controllers highlighted in this document (Node, Route, and Service), and some scaffolding along with the shared cloudprovider interface, is part of the Kubernetes core. Implementations specific to cloud providers are outside the core of Kubernetes and implement the `CloudProvider` interface. For more information about developing plugins, see [Developing Cloud Controller Manager](/docs/tasks/administer-cluster/developing-cloud-controller-manager/). -{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/concepts/architecture/control-plane-node-communication.md b/content/en/docs/concepts/architecture/control-plane-node-communication.md index 5e85302c384ad..ac901abdab963 100644 --- a/content/en/docs/concepts/architecture/control-plane-node-communication.md +++ b/content/en/docs/concepts/architecture/control-plane-node-communication.md @@ -3,19 +3,19 @@ reviewers: - dchen1107 - liggitt title: Control Plane-Node Communication -content_template: templates/concept +content_type: concept weight: 20 aliases: - master-node-communication --- -{{% capture overview %}} + This document catalogs the communication paths between the control plane (really the apiserver) and the Kubernetes cluster. The intent is to allow users to customize their installation to harden the network configuration such that the cluster can be run on an untrusted network (or on fully public IPs on a cloud provider). -{{% /capture %}} -{{% capture body %}} + + ## Node to Control Plane All communication paths from the nodes to the control plane terminate at the apiserver (none of the other master components are designed to expose remote services). In a typical deployment, the apiserver is configured to listen for remote connections on a secure HTTPS port (443) with one or more forms of client [authentication](/docs/reference/access-authn-authz/authentication/) enabled. diff --git a/content/en/docs/concepts/architecture/controller.md b/content/en/docs/concepts/architecture/controller.md index 2872959bacfa0..547a624a94660 100644 --- a/content/en/docs/concepts/architecture/controller.md +++ b/content/en/docs/concepts/architecture/controller.md @@ -1,10 +1,10 @@ --- title: Controllers -content_template: templates/concept +content_type: concept weight: 30 --- -{{% capture overview %}} + In robotics and automation, a _control loop_ is a non-terminating loop that regulates the state of a system. @@ -18,10 +18,10 @@ closer to the desired state, by turning equipment on or off. {{< glossary_definition term_id="controller" length="short">}} -{{% /capture %}} -{{% capture body %}} + + ## Controller pattern @@ -150,11 +150,12 @@ You can run your own controller as a set of Pods, or externally to Kubernetes. What fits best will depend on what that particular controller does. 
-{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Read about the [Kubernetes control plane](/docs/concepts/#kubernetes-control-plane) * Discover some of the basic [Kubernetes objects](/docs/concepts/#kubernetes-objects) * Learn more about the [Kubernetes API](/docs/concepts/overview/kubernetes-api/) * If you want to write your own controller, see [Extension Patterns](/docs/concepts/extend-kubernetes/extend-cluster/#extension-patterns) in Extending Kubernetes. -{{% /capture %}} + diff --git a/content/en/docs/concepts/architecture/nodes.md b/content/en/docs/concepts/architecture/nodes.md index 32274f5a3b305..516e4eb6d977e 100644 --- a/content/en/docs/concepts/architecture/nodes.md +++ b/content/en/docs/concepts/architecture/nodes.md @@ -3,11 +3,11 @@ reviewers: - caesarxuchao - dchen1107 title: Nodes -content_template: templates/concept +content_type: concept weight: 10 --- -{{% capture overview %}} + Kubernetes runs your workload by placing containers into Pods to run on _Nodes_. A node may be a virtual or physical machine, depending on the cluster. Each node @@ -23,9 +23,9 @@ The [components](/docs/concepts/overview/components/#node-components) on a node {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}, and the {{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}}. -{{% /capture %}} -{{% capture body %}} + + ## Management @@ -332,12 +332,13 @@ the kubelet can use topology hints when making resource assignment decisions. See [Control Topology Management Policies on a Node](/docs/tasks/administer-cluster/topology-manager/) for more information. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn about the [components](/docs/concepts/overview/components/#node-components) that make up a node. * Read the [API definition for Node](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core). * Read the [Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node) section of the architecture design document. * Read about [taints and tolerations](/docs/concepts/configuration/taint-and-toleration/). * Read about [cluster autoscaling](/docs/tasks/administer-cluster/cluster-management/#cluster-autoscaling). -{{% /capture %}} + diff --git a/content/en/docs/concepts/cluster-administration/addons.md b/content/en/docs/concepts/cluster-administration/addons.md index 0347327f13015..5b5110ec92f3f 100644 --- a/content/en/docs/concepts/cluster-administration/addons.md +++ b/content/en/docs/concepts/cluster-administration/addons.md @@ -1,9 +1,9 @@ --- title: Installing Addons -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + Add-ons extend the functionality of Kubernetes. @@ -12,10 +12,10 @@ This page lists some of the available add-ons and links to their respective inst Add-ons in each section are sorted alphabetically - the ordering does not imply any preferential status. -{{% /capture %}} -{{% capture body %}} + + ## Networking and Network Policy @@ -55,4 +55,4 @@ There are several other add-ons documented in the deprecated [cluster/addons](ht Well-maintained ones should be linked to here. PRs welcome! 
-{{% /capture %}} + diff --git a/content/en/docs/concepts/cluster-administration/certificates.md b/content/en/docs/concepts/cluster-administration/certificates.md index 052e7b9aa5b66..8cc45252ecdbd 100644 --- a/content/en/docs/concepts/cluster-administration/certificates.md +++ b/content/en/docs/concepts/cluster-administration/certificates.md @@ -1,19 +1,19 @@ --- title: Certificates -content_template: templates/concept +content_type: concept weight: 20 --- -{{% capture overview %}} + When using client certificate authentication, you can generate certificates manually through `easyrsa`, `openssl` or `cfssl`. -{{% /capture %}} -{{% capture body %}} + + ### easyrsa @@ -249,4 +249,4 @@ You can use the `certificates.k8s.io` API to provision x509 certificates to use for authentication as documented [here](/docs/tasks/tls/managing-tls-in-a-cluster). -{{% /capture %}} + diff --git a/content/en/docs/concepts/cluster-administration/cloud-providers.md b/content/en/docs/concepts/cluster-administration/cloud-providers.md index 7d2f2a0b669ee..4f49e7bc4232a 100644 --- a/content/en/docs/concepts/cluster-administration/cloud-providers.md +++ b/content/en/docs/concepts/cluster-administration/cloud-providers.md @@ -1,16 +1,16 @@ --- title: Cloud Providers -content_template: templates/concept +content_type: concept weight: 30 --- -{{% capture overview %}} + This page explains how to manage Kubernetes running on a specific cloud provider. -{{% /capture %}} -{{% capture body %}} + + ### kubeadm [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) is a popular option for creating kubernetes clusters. kubeadm has configuration options to specify configuration information for cloud providers. For example a typical @@ -363,7 +363,7 @@ Kubernetes network plugin and should appear in the `[Route]` section of the [kubenet]: /docs/concepts/cluster-administration/network-plugins/#kubenet -{{% /capture %}} + ## OVirt diff --git a/content/en/docs/concepts/cluster-administration/cluster-administration-overview.md b/content/en/docs/concepts/cluster-administration/cluster-administration-overview.md index 5ba0bb30d856a..fc2f55fbcd034 100644 --- a/content/en/docs/concepts/cluster-administration/cluster-administration-overview.md +++ b/content/en/docs/concepts/cluster-administration/cluster-administration-overview.md @@ -3,16 +3,16 @@ reviewers: - davidopp - lavalamp title: Cluster Administration Overview -content_template: templates/concept +content_type: concept weight: 10 --- -{{% capture overview %}} + The cluster administration overview is for anyone creating or administering a Kubernetes cluster. It assumes some familiarity with core Kubernetes [concepts](/docs/concepts/). -{{% /capture %}} -{{% capture body %}} + + ## Planning a cluster See the guides in [Setup](/docs/setup/) for examples of how to plan, set up, and configure Kubernetes clusters. The solutions listed in this article are called *distros*. @@ -68,6 +68,6 @@ Note: Not all distros are actively maintained. Choose distros which have been te * [Logging and Monitoring Cluster Activity](/docs/concepts/cluster-administration/logging/) explains how logging in Kubernetes works and how to implement it. 
-{{% /capture %}} + diff --git a/content/en/docs/concepts/cluster-administration/flow-control.md b/content/en/docs/concepts/cluster-administration/flow-control.md index aa6b0c04673c8..26fc1194df008 100644 --- a/content/en/docs/concepts/cluster-administration/flow-control.md +++ b/content/en/docs/concepts/cluster-administration/flow-control.md @@ -1,10 +1,10 @@ --- title: API Priority and Fairness -content_template: templates/concept +content_type: concept min-kubernetes-server-version: v1.18 --- -{{% capture overview %}} + {{< feature-state state="alpha" for_k8s_version="v1.18" >}} @@ -33,9 +33,9 @@ the `--max-requests-inflight` flag without the API Priority and Fairness feature enabled. {{< /caution >}} -{{% /capture %}} -{{% capture body %}} + + ## Enabling API Priority and Fairness @@ -366,13 +366,13 @@ poorly-behaved workloads that may be harming system health. request and the PriorityLevel to which it was assigned. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + For background information on design details for API priority and fairness, see the [enhancement proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/20190228-priority-and-fairness.md). You can make suggestions and feature requests via [SIG API Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery). -{{% /capture %}} diff --git a/content/en/docs/concepts/cluster-administration/kubelet-garbage-collection.md b/content/en/docs/concepts/cluster-administration/kubelet-garbage-collection.md index eb41a01cfe944..1590561cc9d1b 100644 --- a/content/en/docs/concepts/cluster-administration/kubelet-garbage-collection.md +++ b/content/en/docs/concepts/cluster-administration/kubelet-garbage-collection.md @@ -1,20 +1,20 @@ --- reviewers: title: Configuring kubelet Garbage Collection -content_template: templates/concept +content_type: concept weight: 70 --- -{{% capture overview %}} + Garbage collection is a helpful function of kubelet that will clean up unused images and unused containers. Kubelet will perform garbage collection for containers every minute and garbage collection for images every five minutes. External garbage collection tools are not recommended as these tools can potentially break the behavior of kubelet by removing containers expected to exist. -{{% /capture %}} -{{% capture body %}} + + ## Image Collection @@ -77,10 +77,11 @@ Including: | `--low-diskspace-threshold-mb` | `--eviction-hard` or `eviction-soft` | eviction generalizes disk thresholds to other resources | | `--outofdisk-transition-frequency` | `--eviction-pressure-transition-period` | eviction generalizes disk pressure transition to other resources | -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + See [Configuring Out Of Resource Handling](/docs/tasks/administer-cluster/out-of-resource/) for more details. -{{% /capture %}} + diff --git a/content/en/docs/concepts/cluster-administration/logging.md b/content/en/docs/concepts/cluster-administration/logging.md index e464a2869e486..399f8f16ccb8c 100644 --- a/content/en/docs/concepts/cluster-administration/logging.md +++ b/content/en/docs/concepts/cluster-administration/logging.md @@ -3,20 +3,20 @@ reviewers: - piosz - x13n title: Logging Architecture -content_template: templates/concept +content_type: concept weight: 60 --- -{{% capture overview %}} + Application and systems logs can help you understand what is happening inside your cluster. 
The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications have some kind of logging mechanism; as such, most container engines are likewise designed to support some kind of logging. The easiest and most embraced logging method for containerized applications is to write to the standard output and standard error streams. However, the native functionality provided by a container engine or runtime is usually not enough for a complete logging solution. For example, if a container crashes, a pod is evicted, or a node dies, you'll usually still want to access your application's logs. As such, logs should have a separate storage and lifecycle independent of nodes, pods, or containers. This concept is called _cluster-level-logging_. Cluster-level logging requires a separate backend to store, analyze, and query logs. Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes cluster. -{{% /capture %}} -{{% capture body %}} + + Cluster-level logging architectures are described in assumption that a logging backend is present inside or outside of your cluster. If you're @@ -267,4 +267,4 @@ You can implement cluster-level logging by exposing or pushing logs directly fro every application; however, the implementation for such a logging mechanism is outside the scope of Kubernetes. -{{% /capture %}} + diff --git a/content/en/docs/concepts/cluster-administration/manage-deployment.md b/content/en/docs/concepts/cluster-administration/manage-deployment.md index 6b246ec3b6d24..b052dd3a15767 100644 --- a/content/en/docs/concepts/cluster-administration/manage-deployment.md +++ b/content/en/docs/concepts/cluster-administration/manage-deployment.md @@ -2,18 +2,18 @@ reviewers: - janetkuo title: Managing Resources -content_template: templates/concept +content_type: concept weight: 40 --- -{{% capture overview %}} + You've deployed your application and exposed it via a service. Now what? Kubernetes provides a number of tools to help you manage your application deployment, including scaling and updating. Among the features that we will discuss in more depth are [configuration files](/docs/concepts/configuration/overview/) and [labels](/docs/concepts/overview/working-with-objects/labels/). -{{% /capture %}} -{{% capture body %}} + + ## Organizing resource configurations @@ -449,11 +449,12 @@ kubectl edit deployment/my-nginx That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scene. It ensures that only a certain number of old replicas may be down while they are being updated, and only a certain number of new replicas may be created above the desired number of pods. To learn more details about it, visit [Deployment page](/docs/concepts/workloads/controllers/deployment/). -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + - Learn about [how to use `kubectl` for application introspection and debugging](/docs/tasks/debug-application-cluster/debug-application-introspection/). - See [Configuration Best Practices and Tips](/docs/concepts/configuration/overview/). 
-{{% /capture %}} + diff --git a/content/en/docs/concepts/cluster-administration/monitoring.md b/content/en/docs/concepts/cluster-administration/monitoring.md index e02ac8231cad8..fbea5e69c184b 100644 --- a/content/en/docs/concepts/cluster-administration/monitoring.md +++ b/content/en/docs/concepts/cluster-administration/monitoring.md @@ -4,21 +4,21 @@ reviewers: - brancz - logicalhan - RainbowMango -content_template: templates/concept +content_type: concept weight: 60 aliases: - controller-metrics.md --- -{{% capture overview %}} + System component metrics can give a better look into what is happening inside them. Metrics are particularly useful for building dashboards and alerts. Metrics in Kubernetes control plane are emitted in [prometheus format](https://prometheus.io/docs/instrumenting/exposition_formats/) and are human readable. -{{% /capture %}} -{{% capture body %}} + + ## Metrics in Kubernetes @@ -124,10 +124,11 @@ cloudprovider_gce_api_request_duration_seconds { request = "detach_disk"} cloudprovider_gce_api_request_duration_seconds { request = "list_disk"} ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Read about the [Prometheus text format](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format) for metrics * See the list of [stable Kubernetes metrics](https://github.com/kubernetes/kubernetes/blob/master/test/instrumentation/testdata/stable-metrics-list.yaml) * Read about the [Kubernetes deprecation policy](https://kubernetes.io/docs/reference/using-api/deprecation-policy/#deprecating-a-feature-or-behavior ) -{{% /capture %}} + diff --git a/content/en/docs/concepts/cluster-administration/networking.md b/content/en/docs/concepts/cluster-administration/networking.md index c260963d87485..29044be250136 100644 --- a/content/en/docs/concepts/cluster-administration/networking.md +++ b/content/en/docs/concepts/cluster-administration/networking.md @@ -2,11 +2,11 @@ reviewers: - thockin title: Cluster Networking -content_template: templates/concept +content_type: concept weight: 50 --- -{{% capture overview %}} + Networking is a central part of Kubernetes, but it can be challenging to understand exactly how it is expected to work. There are 4 distinct networking problems to address: @@ -17,10 +17,10 @@ problems to address: 3. Pod-to-Service communications: this is covered by [services](/docs/concepts/services-networking/service/). 4. External-to-Service communications: this is covered by [services](/docs/concepts/services-networking/service/). -{{% /capture %}} -{{% capture body %}} + + Kubernetes is all about sharing machines between applications. Typically, sharing machines requires ensuring that two applications do not try to use the @@ -312,12 +312,13 @@ Weave Net runs as a [CNI plug-in](https://www.weave.works/docs/net/latest/cni-pl or stand-alone. In either version, it doesn't require any configuration or extra code to run, and in both cases, the network provides one IP address per pod - as is standard for Kubernetes. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + The early design of the networking model and its rationale, and some future plans are described in more detail in the [networking design document](https://git.k8s.io/community/contributors/design-proposals/network/networking.md). 
-{{% /capture %}} + diff --git a/content/en/docs/concepts/cluster-administration/proxies.md b/content/en/docs/concepts/cluster-administration/proxies.md index 8e03334d12b8e..9bf204bd9f246 100644 --- a/content/en/docs/concepts/cluster-administration/proxies.md +++ b/content/en/docs/concepts/cluster-administration/proxies.md @@ -1,14 +1,14 @@ --- title: Proxies in Kubernetes -content_template: templates/concept +content_type: concept weight: 90 --- -{{% capture overview %}} + This page explains proxies used with Kubernetes. -{{% /capture %}} -{{% capture body %}} + + ## Proxies @@ -62,6 +62,6 @@ will typically ensure that the latter types are setup correctly. Proxies have replaced redirect capabilities. Redirects have been deprecated. -{{% /capture %}} + diff --git a/content/en/docs/concepts/configuration/configmap.md b/content/en/docs/concepts/configuration/configmap.md index 92348f36b767f..3e9ddf718f7b1 100644 --- a/content/en/docs/concepts/configuration/configmap.md +++ b/content/en/docs/concepts/configuration/configmap.md @@ -1,10 +1,10 @@ --- title: ConfigMaps -content_template: templates/concept +content_type: concept weight: 20 --- -{{% capture overview %}} + {{< glossary_definition term_id="configmap" prepend="A ConfigMap is" length="all" >}} @@ -15,9 +15,9 @@ If the data you want to store are confidential, use a or use additional (third party) tools to keep your data private. {{< /caution >}} -{{% /capture %}} -{{% capture body %}} + + ## Motivation Use a ConfigMap for setting configuration data separately from application code. @@ -243,12 +243,13 @@ Existing Pods maintain a mount point to the deleted ConfigMap - it is recommende these pods. {{< /note >}} -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Read about [Secrets](/docs/concepts/configuration/secret/). * Read [Configure a Pod to Use a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/). * Read [The Twelve-Factor App](https://12factor.net/) to understand the motivation for separating code from configuration. -{{% /capture %}} + diff --git a/content/en/docs/concepts/configuration/manage-resources-containers.md b/content/en/docs/concepts/configuration/manage-resources-containers.md index 69ea4a255d9bc..f8989c4a5dd81 100644 --- a/content/en/docs/concepts/configuration/manage-resources-containers.md +++ b/content/en/docs/concepts/configuration/manage-resources-containers.md @@ -1,6 +1,6 @@ --- title: Managing Resources for Containers -content_template: templates/concept +content_type: concept weight: 40 feature: title: Automatic bin packing @@ -8,7 +8,7 @@ feature: Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability. Mix critical and best-effort workloads in order to drive up utilization and save even more resources. --- -{{% capture overview %}} + When you specify a {{< glossary_tooltip term_id="pod" >}}, you can optionally specify how much of each resource a {{< glossary_tooltip text="Container" term_id="container" >}} needs. @@ -21,10 +21,10 @@ allowed to use more of that resource than the limit you set. The kubelet also re at least the _request_ amount of that system resource specifically for that container to use. 
-{{% /capture %}} -{{% capture body %}} + + ## Requests and limits @@ -740,10 +740,11 @@ You can see that the Container was terminated because of `reason:OOM Killed`, wh -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Get hands-on experience [assigning Memory resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/). @@ -758,4 +759,4 @@ You can see that the Container was terminated because of `reason:OOM Killed`, wh * Read about [project quotas](http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) in XFS -{{% /capture %}} + diff --git a/content/en/docs/concepts/configuration/organize-cluster-access-kubeconfig.md b/content/en/docs/concepts/configuration/organize-cluster-access-kubeconfig.md index 480b708018c9a..df767bbc3e7a9 100644 --- a/content/en/docs/concepts/configuration/organize-cluster-access-kubeconfig.md +++ b/content/en/docs/concepts/configuration/organize-cluster-access-kubeconfig.md @@ -1,10 +1,10 @@ --- title: Organizing Cluster Access Using kubeconfig Files -content_template: templates/concept +content_type: concept weight: 60 --- -{{% capture overview %}} + Use kubeconfig files to organize information about clusters, users, namespaces, and authentication mechanisms. The `kubectl` command-line tool uses kubeconfig files to @@ -25,10 +25,10 @@ variable or by setting the For step-by-step instructions on creating and specifying kubeconfig files, see [Configure Access to Multiple Clusters](/docs/tasks/access-application-cluster/configure-access-multiple-clusters). -{{% /capture %}} -{{% capture body %}} + + ## Supporting multiple clusters, users, and authentication mechanisms @@ -143,14 +143,15 @@ File references on the command line are relative to the current working director In `$HOME/.kube/config`, relative paths are stored relatively, and absolute paths are stored absolutely. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [Configure Access to Multiple Clusters](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) * [`kubectl config`](/docs/reference/generated/kubectl/kubectl-commands#config) -{{% /capture %}} + diff --git a/content/en/docs/concepts/configuration/overview.md b/content/en/docs/concepts/configuration/overview.md index b7b7b829db700..fe8cd3002dbfa 100644 --- a/content/en/docs/concepts/configuration/overview.md +++ b/content/en/docs/concepts/configuration/overview.md @@ -2,17 +2,17 @@ reviewers: - mikedanese title: Configuration Best Practices -content_template: templates/concept +content_type: concept weight: 10 --- -{{% capture overview %}} + This document highlights and consolidates configuration best practices that are introduced throughout the user guide, Getting Started documentation, and examples. This is a living document. If you think of something that is not on this list but might be useful to others, please don't hesitate to file an issue or submit a PR. -{{% /capture %}} -{{% capture body %}} + + ## General Configuration Tips - When defining configurations, specify the latest stable API version. @@ -105,5 +105,5 @@ The caching semantics of the underlying image provider make even `imagePullPolic - Use `kubectl run` and `kubectl expose` to quickly create single-container Deployments and Services. See [Use a Service to Access an Application in a Cluster](/docs/tasks/access-application-cluster/service-access-application-cluster/) for an example. 
-{{% /capture %}} + diff --git a/content/en/docs/concepts/configuration/pod-overhead.md b/content/en/docs/concepts/configuration/pod-overhead.md index 9661264820354..7057383dacdd4 100644 --- a/content/en/docs/concepts/configuration/pod-overhead.md +++ b/content/en/docs/concepts/configuration/pod-overhead.md @@ -4,11 +4,11 @@ reviewers: - egernst - tallclair title: Pod Overhead -content_template: templates/concept +content_type: concept weight: 50 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.18" state="beta" >}} @@ -19,10 +19,10 @@ _Pod Overhead_ is a feature for accounting for the resources consumed by the Pod on top of the container requests & limits. -{{% /capture %}} -{{% capture body %}} + + In Kubernetes, the Pod's overhead is set at [admission](/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks) @@ -188,11 +188,12 @@ running with a defined Overhead. This functionality is not available in the 1.9 kube-state-metrics, but is expected in a following release. Users will need to build kube-state-metrics from source in the meantime. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [RuntimeClass](/docs/concepts/containers/runtime-class/) * [PodOverhead Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md) -{{% /capture %}} + diff --git a/content/en/docs/concepts/configuration/pod-priority-preemption.md b/content/en/docs/concepts/configuration/pod-priority-preemption.md index c9bddd7e3eb90..9bfc514257f40 100644 --- a/content/en/docs/concepts/configuration/pod-priority-preemption.md +++ b/content/en/docs/concepts/configuration/pod-priority-preemption.md @@ -3,11 +3,11 @@ reviewers: - davidopp - wojtek-t title: Pod Priority and Preemption -content_template: templates/concept +content_type: concept weight: 70 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.14" state="stable" >}} @@ -16,9 +16,9 @@ importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the scheduler tries to preempt (evict) lower priority Pods to make scheduling of the pending Pod possible. -{{% /capture %}} -{{% capture body %}} + + {{< warning >}} @@ -407,7 +407,8 @@ usage does not exceed their requests. If a Pod with lower priority is not exceeding its requests, it won't be evicted. Another Pod with higher priority that exceeds its requests may be evicted. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Read about using ResourceQuotas in connection with PriorityClasses: [limit Priority Class consumption by default](/docs/concepts/policy/resource-quotas/#limit-priority-class-consumption-by-default) -{{% /capture %}} + diff --git a/content/en/docs/concepts/configuration/resource-bin-packing.md b/content/en/docs/concepts/configuration/resource-bin-packing.md index 0d475791ce5ba..5d030d94e590b 100644 --- a/content/en/docs/concepts/configuration/resource-bin-packing.md +++ b/content/en/docs/concepts/configuration/resource-bin-packing.md @@ -4,19 +4,19 @@ reviewers: - k82cn - ahg-g title: Resource Bin Packing for Extended Resources -content_template: templates/concept +content_type: concept weight: 50 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.16" state="alpha" >}} The kube-scheduler can be configured to enable bin packing of resources along with extended resources using `RequestedToCapacityRatioResourceAllocation` priority function. 
Priority functions can be used to fine-tune the kube-scheduler as per custom needs. -{{% /capture %}} -{{% capture body %}} + + ## Enabling Bin Packing using RequestedToCapacityRatioResourceAllocation @@ -194,4 +194,4 @@ NodeScore = (5 * 5) + (7 * 1) + (10 * 3) / (5 + 1 + 3) ``` -{{% /capture %}} + diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md index d6c898ae9ca8e..8da65eafbc81f 100644 --- a/content/en/docs/concepts/configuration/secret.md +++ b/content/en/docs/concepts/configuration/secret.md @@ -2,7 +2,7 @@ reviewers: - mikedanese title: Secrets -content_template: templates/concept +content_type: concept feature: title: Secret and configuration management description: > @@ -10,16 +10,16 @@ feature: weight: 30 --- -{{% capture overview %}} + Kubernetes Secrets let you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys. Storing confidential information in a Secret is safer and more flexible than putting it verbatim in a {{< glossary_tooltip term_id="pod" >}} definition or in a {{< glossary_tooltip text="container image" term_id="image" >}}. See [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md) for more information. -{{% /capture %}} -{{% capture body %}} + + ## Overview of Secrets diff --git a/content/en/docs/concepts/containers/container-environment.md b/content/en/docs/concepts/containers/container-environment.md index 86b595661d3cf..a57ac2181af39 100644 --- a/content/en/docs/concepts/containers/container-environment.md +++ b/content/en/docs/concepts/containers/container-environment.md @@ -3,18 +3,18 @@ reviewers: - mikedanese - thockin title: Container Environment -content_template: templates/concept +content_type: concept weight: 20 --- -{{% capture overview %}} + This page describes the resources available to Containers in the Container environment. -{{% /capture %}} -{{% capture body %}} + + ## Container environment @@ -53,12 +53,13 @@ FOO_SERVICE_PORT= Services have dedicated IP addresses and are available to the Container via DNS, if [DNS addon](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/) is enabled.  -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [Container lifecycle hooks](/docs/concepts/containers/container-lifecycle-hooks/). * Get hands-on experience [attaching handlers to Container lifecycle events](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/). -{{% /capture %}} + diff --git a/content/en/docs/concepts/containers/container-lifecycle-hooks.md b/content/en/docs/concepts/containers/container-lifecycle-hooks.md index fe810d23c5ce9..386e4d00bb436 100644 --- a/content/en/docs/concepts/containers/container-lifecycle-hooks.md +++ b/content/en/docs/concepts/containers/container-lifecycle-hooks.md @@ -3,19 +3,19 @@ reviewers: - mikedanese - thockin title: Container Lifecycle Hooks -content_template: templates/concept +content_type: concept weight: 30 --- -{{% capture overview %}} + This page describes how kubelet managed Containers can use the Container lifecycle hook framework to run code triggered by events during their management lifecycle. 
-{{% /capture %}} -{{% capture body %}} + + ## Overview @@ -112,12 +112,13 @@ Events: 1m 22s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Warning FailedPostStartHook ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about the [Container environment](/docs/concepts/containers/container-environment/). * Get hands-on experience [attaching handlers to Container lifecycle events](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/). -{{% /capture %}} + diff --git a/content/en/docs/concepts/containers/images.md b/content/en/docs/concepts/containers/images.md index 3d27355e3a4f2..b5f9e7641f9e0 100644 --- a/content/en/docs/concepts/containers/images.md +++ b/content/en/docs/concepts/containers/images.md @@ -3,20 +3,20 @@ reviewers: - erictune - thockin title: Images -content_template: templates/concept +content_type: concept weight: 10 --- -{{% capture overview %}} + You create your Docker image and push it to a registry before referring to it in a Kubernetes pod. The `image` property of a container supports the same syntax as the `docker` command does, including private registries and tags. -{{% /capture %}} -{{% capture body %}} + + ## Updating Images @@ -370,4 +370,4 @@ common use cases and suggested solutions. If you need access to multiple registries, you can create one secret for each registry. Kubelet will merge any `imagePullSecrets` into a single virtual `.docker/config.json` -{{% /capture %}} + diff --git a/content/en/docs/concepts/containers/overview.md b/content/en/docs/concepts/containers/overview.md index 49162710d7462..1d996b8b930b6 100644 --- a/content/en/docs/concepts/containers/overview.md +++ b/content/en/docs/concepts/containers/overview.md @@ -3,11 +3,11 @@ reviewers: - erictune - thockin title: Containers overview -content_template: templates/concept +content_type: concept weight: 1 --- -{{% capture overview %}} + Containers are a technology for packaging the (compiled) code for an application along with the dependencies it needs at run time. Each @@ -18,10 +18,10 @@ run it. Containers decouple applications from underlying host infrastructure. This makes deployment easier in different cloud or OS environments. -{{% /capture %}} -{{% capture body %}} + + ## Container images A [container image](/docs/concepts/containers/images/) is a ready-to-run @@ -38,8 +38,9 @@ the change, then recreate the container to start from the updated image. {{< glossary_definition term_id="container-runtime" length="all" >}} -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Read about [container images](/docs/concepts/containers/images/) * Read about [Pods](/docs/concepts/workloads/pods/) -{{% /capture %}} + diff --git a/content/en/docs/concepts/containers/runtime-class.md b/content/en/docs/concepts/containers/runtime-class.md index dca6f2d0a86e0..d1857f3807a81 100644 --- a/content/en/docs/concepts/containers/runtime-class.md +++ b/content/en/docs/concepts/containers/runtime-class.md @@ -3,11 +3,11 @@ reviewers: - tallclair - dchen1107 title: Runtime Class -content_template: templates/concept +content_type: concept weight: 20 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.14" state="beta" >}} @@ -16,10 +16,10 @@ This page describes the RuntimeClass resource and runtime selection mechanism. RuntimeClass is a feature for selecting the container runtime configuration. The container runtime configuration is used to run a Pod's containers. 
-{{% /capture %}} -{{% capture body %}} + + ## Motivation @@ -180,12 +180,13 @@ Pod overhead is defined in RuntimeClass through the `overhead` fields. Through t you can specify the overhead of running pods utilizing this RuntimeClass and ensure these overheads are accounted for in Kubernetes. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + - [RuntimeClass Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/runtime-class.md) - [RuntimeClass Scheduling Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/runtime-class-scheduling.md) - Read about the [Pod Overhead](/docs/concepts/configuration/pod-overhead/) concept - [PodOverhead Feature Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md) -{{% /capture %}} + diff --git a/content/en/docs/concepts/example-concept-template.md b/content/en/docs/concepts/example-concept-template.md index 26ce263ef4059..d5dfd52be1118 100644 --- a/content/en/docs/concepts/example-concept-template.md +++ b/content/en/docs/concepts/example-concept-template.md @@ -2,11 +2,11 @@ title: Example Concept Template reviewers: - chenopis -content_template: templates/concept +content_type: concept toc_hide: true --- -{{% capture overview %}} + {{< note >}} Be sure to also [create an entry in the table of contents](/docs/home/contribute/write-new-topic/#creating-an-entry-in-the-table-of-contents) for your new document. @@ -14,9 +14,9 @@ Be sure to also [create an entry in the table of contents](/docs/home/contribute This page explains ... -{{% /capture %}} -{{% capture body %}} + + ## Understanding ... @@ -26,15 +26,16 @@ Kubernetes provides ... To use ... -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + **[Optional Section]** * Learn more about [Writing a New Topic](/docs/home/contribute/write-new-topic/). * See [Using Page Templates - Concept template](/docs/home/contribute/page-templates/#concept_template) for how to use this template. -{{% /capture %}} + diff --git a/content/en/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md b/content/en/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md index 8bc6e22861761..9efee5b311cf8 100644 --- a/content/en/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md +++ b/content/en/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md @@ -4,20 +4,20 @@ reviewers: - lavalamp - cheftako - chenopis -content_template: templates/concept +content_type: concept weight: 20 --- -{{% capture overview %}} + The aggregation layer allows Kubernetes to be extended with additional APIs, beyond what is offered by the core Kubernetes APIs. The additional APIs can either be ready-made solutions such as [service-catalog](/docs/concepts/extend-kubernetes/service-catalog/), or APIs that you develop yourself. The aggregation layer is different from [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/), which are a way to make the {{< glossary_tooltip term_id="kube-apiserver" text="kube-apiserver" >}} recognise new kinds of object. -{{% /capture %}} -{{% capture body %}} + + ## Aggregation layer @@ -34,13 +34,14 @@ If your extension API server cannot achieve that latency requirement, consider m `EnableAggregatedDiscoveryTimeout=false` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) on the kube-apiserver to disable the timeout restriction. 
This deprecated feature gate will be removed in a future release. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * To get the aggregator working in your environment, [configure the aggregation layer](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/). * Then, [setup an extension api-server](/docs/tasks/access-kubernetes-api/setup-extension-api-server/) to work with the aggregation layer. * Also, learn how to [extend the Kubernetes API using Custom Resource Definitions](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/). * Read the specification for [APIService](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#apiservice-v1-apiregistration-k8s-io) -{{% /capture %}} + diff --git a/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md b/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md index b1ca7f610a9fe..ea52f6e44b249 100644 --- a/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md +++ b/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md @@ -3,19 +3,19 @@ title: Custom Resources reviewers: - enisoc - deads2k -content_template: templates/concept +content_type: concept weight: 10 --- -{{% capture overview %}} + *Custom resources* are extensions of the Kubernetes API. This page discusses when to add a custom resource to your Kubernetes cluster and when to use a standalone service. It describes the two methods for adding custom resources and how to choose between them. -{{% /capture %}} -{{% capture body %}} + + ## Custom resources A *resource* is an endpoint in the [Kubernetes API](/docs/reference/using-api/api-overview/) that stores a collection of @@ -246,12 +246,13 @@ When you add a custom resource, you can access it using: - A REST client that you write. - A client generated using [Kubernetes client generation tools](https://github.com/kubernetes/code-generator) (generating one is an advanced undertaking, but some projects may provide a client along with the CRD or AA). -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn how to [Extend the Kubernetes API with the aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/). * Learn how to [Extend the Kubernetes API with CustomResourceDefinition](/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/). -{{% /capture %}} + diff --git a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md index 23f64628b55a5..d27dddd384964 100644 --- a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md +++ b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md @@ -2,11 +2,11 @@ reviewers: title: Device Plugins description: Use the Kubernetes device plugin framework to implement plugins for GPUs, NICs, FPGAs, InfiniBand, and similar resources that require vendor-specific setup. 
-content_template: templates/concept +content_type: concept weight: 20 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.10" state="beta" >}} Kubernetes provides a [device plugin framework](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-management/device-plugin.md) @@ -19,9 +19,9 @@ The targeted devices include GPUs, high-performance NICs, FPGAs, InfiniBand adap and other similar computing resources that may require vendor-specific initialization and setup. -{{% /capture %}} -{{% capture body %}} + + ## Device plugin registration @@ -225,12 +225,13 @@ Here are some examples of device plugin implementations: * The [SR-IOV Network device plugin](https://github.com/intel/sriov-network-device-plugin) * The [Xilinx FPGA device plugins](https://github.com/Xilinx/FPGA_as_a_Service/tree/master/k8s-fpga-device-plugin/trunk) for Xilinx FPGA devices -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn about [scheduling GPU resources](/docs/tasks/manage-gpus/scheduling-gpus/) using device plugins * Learn about [advertising extended resources](/docs/tasks/administer-cluster/extended-resource-node/) on a node * Read about using [hardware acceleration for TLS ingress](https://kubernetes.io/blog/2019/04/24/hardware-accelerated-ssl/tls-termination-in-ingress-controllers-using-kubernetes-device-plugins-and-runtimeclass/) with Kubernetes * Learn about the [Topology Manager](/docs/tasks/administer-cluster/topology-manager/) -{{% /capture %}} + diff --git a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md index 2ff4ae23778ab..b32bce83dd1f4 100644 --- a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md +++ b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md @@ -4,12 +4,12 @@ reviewers: - freehan - thockin title: Network Plugins -content_template: templates/concept +content_type: concept weight: 10 --- -{{% capture overview %}} + {{< feature-state state="alpha" >}} {{< caution >}}Alpha features can change rapidly. {{< /caution >}} @@ -19,9 +19,9 @@ Network plugins in Kubernetes come in a few flavors: * CNI plugins: adhere to the appc/CNI specification, designed for interoperability. * Kubenet plugin: implements basic `cbr0` using the `bridge` and `host-local` CNI plugins -{{% /capture %}} -{{% capture body %}} + + ## Installation @@ -166,8 +166,9 @@ This option is provided to the network-plugin; currently **only kubenet supports * `--network-plugin=kubenet` specifies that we use the `kubenet` network plugin with CNI `bridge` and `host-local` plugins placed in `/opt/cni/bin` or `cni-bin-dir`. * `--network-plugin-mtu=9001` specifies the MTU to use, currently only used by the `kubenet` network plugin. -{{% /capture %}} -{{% capture whatsnext %}} -{{% /capture %}} +## {{% heading "whatsnext" %}} + + + diff --git a/content/en/docs/concepts/extend-kubernetes/extend-cluster.md b/content/en/docs/concepts/extend-kubernetes/extend-cluster.md index 2b5aa1b67678f..7914b1cab579a 100644 --- a/content/en/docs/concepts/extend-kubernetes/extend-cluster.md +++ b/content/en/docs/concepts/extend-kubernetes/extend-cluster.md @@ -5,11 +5,11 @@ reviewers: - lavalamp - cheftako - chenopis -content_template: templates/concept +content_type: concept weight: 10 --- -{{% capture overview %}} + Kubernetes is highly configurable and extensible.
As a result, there is rarely a need to fork or submit patches to the Kubernetes @@ -22,10 +22,10 @@ their work environment. Developers who are prospective {{< glossary_tooltip text useful as an introduction to what extension points and patterns exist, and their trade-offs and limitations. -{{% /capture %}} -{{% capture body %}} + + ## Overview @@ -194,10 +194,11 @@ The scheduler also supports a that permits a webhook backend (scheduler extension) to filter and prioritize the nodes chosen for a pod. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [Custom Resources](/docs/concepts/api-extension/custom-resources/) * Learn about [Dynamic admission control](/docs/reference/access-authn-authz/extensible-admission-controllers/) @@ -207,4 +208,4 @@ the nodes chosen for a pod. * Learn about [kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/) * Learn about the [Operator pattern](/docs/concepts/extend-kubernetes/operator/) -{{% /capture %}} + diff --git a/content/en/docs/concepts/extend-kubernetes/operator.md b/content/en/docs/concepts/extend-kubernetes/operator.md index eb56d5475a44f..dda8f0020b8bc 100644 --- a/content/en/docs/concepts/extend-kubernetes/operator.md +++ b/content/en/docs/concepts/extend-kubernetes/operator.md @@ -1,20 +1,20 @@ --- title: Operator pattern -content_template: templates/concept +content_type: concept weight: 30 --- -{{% capture overview %}} + Operators are software extensions to Kubernetes that make use of [custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/) to manage applications and their components. Operators follow Kubernetes principles, notably the [control loop](/docs/concepts/#kubernetes-control-plane). -{{% /capture %}} -{{% capture body %}} + + ## Motivation @@ -113,9 +113,10 @@ Operator. You also implement an Operator (that is, a Controller) using any language / runtime that can act as a [client for the Kubernetes API](/docs/reference/using-api/client-libraries/). -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/) * Find ready-made operators on [OperatorHub.io](https://operatorhub.io/) to suit your use case @@ -129,4 +130,3 @@ that can act as a [client for the Kubernetes API](/docs/reference/using-api/clie * Read [CoreOS' original article](https://coreos.com/blog/introducing-operators.html) that introduced the Operator pattern * Read an [article](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps) from Google Cloud about best practices for building Operators -{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/concepts/extend-kubernetes/poseidon-firmament-alternate-scheduler.md b/content/en/docs/concepts/extend-kubernetes/poseidon-firmament-alternate-scheduler.md index 4c5ab12c03aae..7f81439c417b8 100644 --- a/content/en/docs/concepts/extend-kubernetes/poseidon-firmament-alternate-scheduler.md +++ b/content/en/docs/concepts/extend-kubernetes/poseidon-firmament-alternate-scheduler.md @@ -1,18 +1,18 @@ --- title: Poseidon-Firmament Scheduler -content_template: templates/concept +content_type: concept weight: 80 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.6" state="alpha" >}} The Poseidon-Firmament scheduler is an alternate scheduler that can be deployed alongside the default Kubernetes scheduler. 
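As a hedged sketch, once the alternate scheduler is deployed, a Pod opts into it through `spec.schedulerName`; the name `poseidon` below is an assumption taken from the project's sample deployments:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: batch-task                 # placeholder name
spec:
  schedulerName: poseidon          # assumption: the name the deployed scheduler registers under
  containers:
  - name: worker
    image: busybox
    command: ["sleep", "3600"]
```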
-{{% /capture %}} -{{% capture body %}} + + ## Introduction @@ -102,10 +102,11 @@ Pod-by-pod schedulers, such as the Kubernetes default scheduler, process Pods in These downsides of pod-by-pod schedulers are addressed by batching or bulk scheduling in Poseidon-Firmament scheduler. Processing several pods in a batch allows the scheduler to jointly consider their placement, and thus to find the best trade-off for the whole batch instead of one pod. At the same time it amortizes work across pods resulting in much higher throughput. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * See [Poseidon-Firmament](https://github.com/kubernetes-sigs/poseidon#readme) on GitHub for more information. * See the [design document](https://github.com/kubernetes-sigs/poseidon/blob/master/docs/design/README.md) for Poseidon. * Read [Firmament: Fast, Centralized Cluster Scheduling at Scale](https://www.usenix.org/system/files/conference/osdi16/osdi16-gog.pdf), the academic paper on the Firmament scheduling design. * If you'd like to contribute to Poseidon-Firmament, refer to the [developer setup instructions](https://github.com/kubernetes-sigs/poseidon/blob/master/docs/devel/README.md). -{{% /capture %}} + diff --git a/content/en/docs/concepts/extend-kubernetes/service-catalog.md b/content/en/docs/concepts/extend-kubernetes/service-catalog.md index 35d181d9986ba..b40ca7ee143f1 100644 --- a/content/en/docs/concepts/extend-kubernetes/service-catalog.md +++ b/content/en/docs/concepts/extend-kubernetes/service-catalog.md @@ -2,11 +2,11 @@ title: Service Catalog reviewers: - chenopis -content_template: templates/concept +content_type: concept weight: 40 --- -{{% capture overview %}} + {{< glossary_definition term_id="service-catalog" length="all" prepend="Service Catalog is" >}} A service broker, as defined by the [Open service broker API spec](https://github.com/openservicebrokerapi/servicebroker/blob/v2.13/spec.md), is an endpoint for a set of managed services offered and maintained by a third-party, which could be a cloud provider such as AWS, GCP, or Azure. @@ -14,10 +14,10 @@ Some examples of managed services are Microsoft Azure Cloud Queue, Amazon Simple Using Service Catalog, a {{< glossary_tooltip text="cluster operator" term_id="cluster-operator" >}} can browse the list of managed services offered by a service broker, provision an instance of a managed service, and bind with it to make it available to an application in the Kubernetes cluster. -{{% /capture %}} -{{% capture body %}} + + ## Example use case An {{< glossary_tooltip text="application developer" term_id="application-developer" >}} wants to use message queuing as part of their application running in a Kubernetes cluster. @@ -222,16 +222,17 @@ The following example describes how to map secret values into application enviro key: topic ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * If you are familiar with {{< glossary_tooltip text="Helm Charts" term_id="helm-chart" >}}, [install Service Catalog using Helm](/docs/tasks/service-catalog/install-service-catalog-using-helm/) into your Kubernetes cluster. Alternatively, you can [install Service Catalog using the SC tool](/docs/tasks/service-catalog/install-service-catalog-using-sc/). * View [sample service brokers](https://github.com/openservicebrokerapi/servicebroker/blob/master/gettingStarted.md#sample-service-brokers). 
* Explore the [kubernetes-incubator/service-catalog](https://github.com/kubernetes-incubator/service-catalog) project. * View [svc-cat.io](https://svc-cat.io/docs/). -{{% /capture %}} + diff --git a/content/en/docs/concepts/overview/components.md b/content/en/docs/concepts/overview/components.md index 04c4bbe805f2e..f83f00683e227 100644 --- a/content/en/docs/concepts/overview/components.md +++ b/content/en/docs/concepts/overview/components.md @@ -2,14 +2,14 @@ reviewers: - lavalamp title: Kubernetes Components -content_template: templates/concept +content_type: concept weight: 20 card: name: concepts weight: 20 --- -{{% capture overview %}} + When you deploy Kubernetes, you get a cluster. {{< glossary_definition term_id="cluster" length="all" prepend="A Kubernetes cluster consists of">}} @@ -20,9 +20,9 @@ Here's the diagram of a Kubernetes cluster with all the components tied together ![Components of Kubernetes](/images/docs/components-of-kubernetes.png) -{{% /capture %}} -{{% capture body %}} + + ## Control Plane Components The control plane's components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events (for example, starting up a new {{< glossary_tooltip text="pod" term_id="pod">}} when a deployment's `replicas` field is unsatisfied). @@ -122,10 +122,11 @@ about containers in a central database, and provides a UI for browsing that data A [cluster-level logging](/docs/concepts/cluster-administration/logging/) mechanism is responsible for saving container logs to a central log store with search/browsing interface. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn about [Nodes](/docs/concepts/architecture/nodes/) * Learn about [Controllers](/docs/concepts/architecture/controller/) * Learn about [kube-scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/) * Read etcd's official [documentation](https://etcd.io/docs/) -{{% /capture %}} + diff --git a/content/en/docs/concepts/overview/kubernetes-api.md b/content/en/docs/concepts/overview/kubernetes-api.md index bbdef8495868f..a82359072f146 100644 --- a/content/en/docs/concepts/overview/kubernetes-api.md +++ b/content/en/docs/concepts/overview/kubernetes-api.md @@ -2,14 +2,14 @@ reviewers: - chenopis title: The Kubernetes API -content_template: templates/concept +content_type: concept weight: 30 card: name: concepts weight: 30 --- -{{% capture overview %}} + The core of Kubernetes' {{< glossary_tooltip text="control plane" term_id="control-plane" >}} is the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}. The API server @@ -21,9 +21,10 @@ The Kubernetes API lets you query and manipulate the state of objects in the Kub API endpoints, resource types and samples are described in the [API Reference](/docs/reference/kubernetes-api/). -{{% /capture %}} -{{% capture body %}} + + + ## API changes @@ -166,8 +167,9 @@ For example: to enable deployments and daemonsets, set Kubernetes stores its serialized state in terms of the API resources by writing them into {{< glossary_tooltip term_id="etcd" >}}. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + [Controlling API Access](/docs/reference/access-authn-authz/controlling-access/) describes how the cluster manages authentication and authorization for API access. @@ -176,5 +178,3 @@ Overall API conventions are described in the document. 
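Those conventions give every object the same envelope; a minimal sketch (using a hypothetical Deployment named `example`) looks like:

```yaml
apiVersion: apps/v1      # API group and version
kind: Deployment         # resource type
metadata:                # identifying metadata: name, namespace, labels, ...
  name: example          # placeholder name
spec:                    # desired state, written by the client
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example
        image: nginx
# status (observed state) is filled in by the control plane, so it is omitted here
```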
API endpoints, resource types and samples are described in the [API Reference](/docs/reference/kubernetes-api/). - -{{% /capture %}} diff --git a/content/en/docs/concepts/overview/what-is-kubernetes.md b/content/en/docs/concepts/overview/what-is-kubernetes.md index fbe74e4337921..5b30c8e66edfd 100644 --- a/content/en/docs/concepts/overview/what-is-kubernetes.md +++ b/content/en/docs/concepts/overview/what-is-kubernetes.md @@ -5,18 +5,18 @@ reviewers: title: What is Kubernetes? description: > Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available. -content_template: templates/concept +content_type: concept weight: 10 card: name: concepts weight: 10 --- -{{% capture overview %}} + This page is an overview of Kubernetes. -{{% /capture %}} -{{% capture body %}} + + Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available. The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes combines [over 15 years of Google's experience](/blog/2015/04/borg-predecessor-to-kubernetes/) running production workloads at scale with best-of-breed ideas and practices from the community. @@ -86,9 +86,10 @@ Kubernetes: * Does not provide or adopt any comprehensive machine configuration, maintenance, management, or self-healing systems. * Additionally, Kubernetes is not a mere orchestration system. In fact, it eliminates the need for orchestration. The technical definition of orchestration is execution of a defined workflow: first do A, then B, then C. In contrast, Kubernetes comprises a set of independent, composable control processes that continuously drive the current state towards the provided desired state. It shouldn’t matter how you get from A to C. Centralized control is also not required. This results in a system that is easier to use and more powerful, robust, resilient, and extensible. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Take a look at the [Kubernetes Components](/docs/concepts/overview/components/) * Ready to [Get Started](/docs/setup/)? -{{% /capture %}} + diff --git a/content/en/docs/concepts/overview/working-with-objects/annotations.md b/content/en/docs/concepts/overview/working-with-objects/annotations.md index f88c6a0003d90..d440d2965e36d 100644 --- a/content/en/docs/concepts/overview/working-with-objects/annotations.md +++ b/content/en/docs/concepts/overview/working-with-objects/annotations.md @@ -1,15 +1,15 @@ --- title: Annotations -content_template: templates/concept +content_type: concept weight: 50 --- -{{% capture overview %}} + You can use Kubernetes annotations to attach arbitrary non-identifying metadata to objects. Clients such as tools and libraries can retrieve this metadata. -{{% /capture %}} -{{% capture body %}} + + ## Attaching metadata to objects You can use either labels or annotations to attach metadata to Kubernetes @@ -88,10 +88,11 @@ spec: ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + Learn more about [Labels and Selectors](/docs/concepts/overview/working-with-objects/labels/).
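As a short recap sketch (the keys and values below are illustrative; prefixed keys such as `example.com/...` follow the annotation naming rules), annotations live under `metadata.annotations`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotations-demo                       # placeholder name
  annotations:
    imageregistry: "https://hub.docker.com/"   # arbitrary non-identifying metadata
    example.com/build: "1234"                  # hypothetical prefixed key
spec:
  containers:
  - name: app
    image: nginx
```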
-{{% /capture %}} + diff --git a/content/en/docs/concepts/overview/working-with-objects/common-labels.md b/content/en/docs/concepts/overview/working-with-objects/common-labels.md index d360d7d28418a..11e8944c8aded 100644 --- a/content/en/docs/concepts/overview/working-with-objects/common-labels.md +++ b/content/en/docs/concepts/overview/working-with-objects/common-labels.md @@ -1,18 +1,18 @@ --- title: Recommended Labels -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + You can visualize and manage Kubernetes objects with more tools than kubectl and the dashboard. A common set of labels allows tools to work interoperably, describing objects in a common manner that all tools can understand. In addition to supporting tooling, the recommended labels describe applications in a way that can be queried. -{{% /capture %}} -{{% capture body %}} + + The metadata is organized around the concept of an _application_. Kubernetes is not a platform as a service (PaaS) and doesn't have or enforce a formal notion of an application. Instead, applications are informal and described with metadata. The definition of @@ -170,4 +170,4 @@ metadata: With the MySQL `StatefulSet` and `Service`, you'll notice that information about both MySQL and WordPress, the broader application, is included. -{{% /capture %}} + diff --git a/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md b/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md index b9df009db7cf4..1f4f4e7509b5f 100644 --- a/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md +++ b/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md @@ -1,17 +1,17 @@ --- title: Understanding Kubernetes Objects -content_template: templates/concept +content_type: concept weight: 10 card: name: concepts weight: 40 --- -{{% capture overview %}} + This page explains how Kubernetes objects are represented in the Kubernetes API, and how you can express them in `.yaml` format. -{{% /capture %}} -{{% capture body %}} + + ## Understanding Kubernetes objects {#kubernetes-objects} *Kubernetes objects* are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster. Specifically, they can describe: @@ -87,12 +87,13 @@ For example, the `spec` format for a Pod can be found in and the `spec` format for a Deployment can be found in [DeploymentSpec v1 apps](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#deploymentspec-v1-apps). -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [Kubernetes API overview](/docs/reference/using-api/api-overview/) explains some more API concepts * Learn about the most important basic Kubernetes objects, such as [Pod](/docs/concepts/workloads/pods/pod-overview/). * Learn about [controllers](/docs/concepts/architecture/controller/) in Kubernetes -{{% /capture %}} + diff --git a/content/en/docs/concepts/overview/working-with-objects/labels.md b/content/en/docs/concepts/overview/working-with-objects/labels.md index f08daf323bea5..e995db10a5ddd 100644 --- a/content/en/docs/concepts/overview/working-with-objects/labels.md +++ b/content/en/docs/concepts/overview/working-with-objects/labels.md @@ -2,11 +2,11 @@ reviewers: - mikedanese title: Labels and Selectors -content_template: templates/concept +content_type: concept weight: 40 --- -{{% capture overview %}} + _Labels_ are key/value pairs that are attached to objects, such as pods.
Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system. @@ -24,10 +24,10 @@ Each object can have a set of key/value labels defined. Each Key must be unique Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using [annotations](/docs/concepts/overview/working-with-objects/annotations/). -{{% /capture %}} -{{% capture body %}} + + ## Motivation @@ -228,4 +228,4 @@ selector: One use case for selecting over labels is to constrain the set of nodes onto which a pod can schedule. See the documentation on [node selection](/docs/concepts/scheduling-eviction/assign-pod-node/) for more information. -{{% /capture %}} + diff --git a/content/en/docs/concepts/overview/working-with-objects/names.md b/content/en/docs/concepts/overview/working-with-objects/names.md index 01bb53b56d5ff..9831f7335c752 100644 --- a/content/en/docs/concepts/overview/working-with-objects/names.md +++ b/content/en/docs/concepts/overview/working-with-objects/names.md @@ -3,11 +3,11 @@ reviewers: - mikedanese - thockin title: Object Names and IDs -content_template: templates/concept +content_type: concept weight: 20 --- -{{% capture overview %}} + Each object in your cluster has a [_Name_](#names) that is unique for that type of resource. Every Kubernetes object also has a [_UID_](#uids) that is unique across your whole cluster. @@ -16,9 +16,9 @@ For example, you can only have one Pod named `myapp-1234` within the same [names For non-unique user-provided attributes, Kubernetes provides [labels](/docs/concepts/overview/working-with-objects/labels/) and [annotations](/docs/concepts/overview/working-with-objects/annotations/). -{{% /capture %}} -{{% capture body %}} + + ## Names @@ -81,8 +81,9 @@ Some resource types have additional restrictions on their names. Kubernetes UIDs are universally unique identifiers (also known as UUIDs). UUIDs are standardized as ISO/IEC 9834-8 and as ITU-T X.667. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Read about [labels](/docs/concepts/overview/working-with-objects/labels/) in Kubernetes. * See the [Identifiers and Names in Kubernetes](https://git.k8s.io/community/contributors/design-proposals/architecture/identifiers.md) design document. -{{% /capture %}} + diff --git a/content/en/docs/concepts/overview/working-with-objects/namespaces.md b/content/en/docs/concepts/overview/working-with-objects/namespaces.md index 8d6e907afd593..30285e6fbfe1d 100644 --- a/content/en/docs/concepts/overview/working-with-objects/namespaces.md +++ b/content/en/docs/concepts/overview/working-with-objects/namespaces.md @@ -4,19 +4,19 @@ reviewers: - mikedanese - thockin title: Namespaces -content_template: templates/concept +content_type: concept weight: 30 --- -{{% capture overview %}} + Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces. -{{% /capture %}} -{{% capture body %}} + + ## When to Use Multiple Namespaces @@ -112,11 +112,12 @@ kubectl api-resources --namespaced=true kubectl api-resources --namespaced=false ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [creating a new namespace](/docs/tasks/administer-cluster/namespaces/#creating-a-new-namespace). 
* Learn more about [deleting a namespace](/docs/tasks/administer-cluster/namespaces/#deleting-a-namespace). -{{% /capture %}} + diff --git a/content/en/docs/concepts/overview/working-with-objects/object-management.md b/content/en/docs/concepts/overview/working-with-objects/object-management.md index 288be6a684ec1..97f57ff27597a 100644 --- a/content/en/docs/concepts/overview/working-with-objects/object-management.md +++ b/content/en/docs/concepts/overview/working-with-objects/object-management.md @@ -1,17 +1,17 @@ --- title: Kubernetes Object Management -content_template: templates/concept +content_type: concept weight: 15 --- -{{% capture overview %}} + The `kubectl` command-line tool supports several different ways to create and manage Kubernetes objects. This document provides an overview of the different approaches. Read the [Kubectl book](https://kubectl.docs.kubernetes.io) for details on managing objects with kubectl. -{{% /capture %}} -{{% capture body %}} + + ## Management techniques @@ -173,9 +173,10 @@ Disadvantages compared to imperative object configuration: - Declarative object configuration is harder to debug and understand results when they are unexpected. - Partial updates using diffs create complex merge and patch operations. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + - [Managing Kubernetes Objects Using Imperative Commands](/docs/tasks/manage-kubernetes-objects/imperative-command/) - [Managing Kubernetes Objects Using Object Configuration (Imperative)](/docs/tasks/manage-kubernetes-objects/imperative-config/) @@ -185,4 +186,4 @@ Disadvantages compared to imperative object configuration: - [Kubectl Book](https://kubectl.docs.kubernetes.io) - [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) -{{% /capture %}} + diff --git a/content/en/docs/concepts/policy/limit-range.md b/content/en/docs/concepts/policy/limit-range.md index 8bea6c88e7670..cf0ad16783d6b 100644 --- a/content/en/docs/concepts/policy/limit-range.md +++ b/content/en/docs/concepts/policy/limit-range.md @@ -2,20 +2,20 @@ reviewers: - nelvadas title: Limit Ranges -content_template: templates/concept +content_type: concept weight: 10 --- -{{% capture overview %}} + By default, containers run with unbounded [compute resources](/docs/user-guide/compute-resources) on a Kubernetes cluster. With resource quotas, cluster administrators can restrict resource consumption and creation on a {{< glossary_tooltip text="namespace" term_id="namespace" >}} basis. Within a namespace, a Pod or Container can consume as much CPU and memory as defined by the namespace's resource quota. There is a concern that one Pod or Container could monopolize all available resources. A LimitRange is a policy to constrain resource allocations (to Pods or Containers) in a namespace. -{{% /capture %}} -{{% capture body %}} + + A _LimitRange_ provides constraints that can: @@ -56,9 +56,10 @@ there may be contention for resources. In this case, the Containers or Pods will Neither contention nor changes to a LimitRange will affect already created resources. -{{% /capture %}} -{{% capture whatsnext %}} + Refer to the [LimitRanger design document](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_limit_range.md) for more information.
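Concretely, a minimal sketch of such a policy (the numbers are illustrative) caps container resources in a namespace and fills in defaults for containers that specify none:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-mem-limit-range   # placeholder name
spec:
  limits:
  - type: Container
    max:                      # no container may exceed these limits
      cpu: "2"
      memory: 1Gi
    default:                  # applied as the limit when a container sets none
      cpu: 500m
      memory: 256Mi
    defaultRequest:           # applied as the request when a container sets none
      cpu: 250m
      memory: 128Mi
```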
@@ -72,4 +73,4 @@ For examples on using limits, see: - a [detailed example on configuring quota per namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/). -{{% /capture %}} + diff --git a/content/en/docs/concepts/policy/pod-security-policy.md b/content/en/docs/concepts/policy/pod-security-policy.md index 52aa593e6fc28..c8d072fe70477 100644 --- a/content/en/docs/concepts/policy/pod-security-policy.md +++ b/content/en/docs/concepts/policy/pod-security-policy.md @@ -3,21 +3,21 @@ reviewers: - pweil- - tallclair title: Pod Security Policies -content_template: templates/concept +content_type: concept weight: 20 --- -{{% capture overview %}} + {{< feature-state state="beta" >}} Pod Security Policies enable fine-grained authorization of pod creation and updates. -{{% /capture %}} -{{% capture body %}} + + ## What is a Pod Security Policy? @@ -631,12 +631,13 @@ By default, all safe sysctls are allowed. Refer to the [Sysctl documentation]( /docs/concepts/cluster-administration/sysctl-cluster/#podsecuritypolicy). -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + See [Pod Security Standards](/docs/concepts/security/pod-security-standards/) for policy recommendations. Refer to [Pod Security Policy Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy) for the api details. -{{% /capture %}} + diff --git a/content/en/docs/concepts/policy/resource-quotas.md b/content/en/docs/concepts/policy/resource-quotas.md index 39f51bf2d7cbd..4fb3f17a38595 100644 --- a/content/en/docs/concepts/policy/resource-quotas.md +++ b/content/en/docs/concepts/policy/resource-quotas.md @@ -2,21 +2,21 @@ reviewers: - derekwaynecarr title: Resource Quotas -content_template: templates/concept +content_type: concept weight: 10 --- -{{% capture overview %}} + When several users or teams share a cluster with a fixed number of nodes, there is a concern that one team could use more than its fair share of resources. Resource quotas are a tool for administrators to address this concern. -{{% /capture %}} -{{% capture body %}} + + A resource quota, defined by a `ResourceQuota` object, provides constraints that limit aggregate resource consumption per namespace. It can limit the quantity of objects that can @@ -596,10 +596,11 @@ See [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765) and See a [detailed example for how to use resource quota](/docs/tasks/administer-cluster/quota-api-object/). -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + See [ResourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md) for more information. -{{% /capture %}} + diff --git a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md index 79a9487c6089e..009c0d9276519 100644 --- a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md +++ b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md @@ -4,12 +4,12 @@ reviewers: - kevin-wangzefeng - bsalamat title: Assigning Pods to Nodes -content_template: templates/concept +content_type: concept weight: 50 --- -{{% capture overview %}} + You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} to only be able to run on particular {{< glossary_tooltip text="Node(s)" term_id="node" >}}, or to prefer to run on particular nodes. 
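The simplest form of this constraint is `nodeSelector`; as a hedged sketch, assuming some node has been labeled `disktype=ssd` (for example with `kubectl label nodes <node-name> disktype=ssd`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeSelector:
    disktype: ssd        # only nodes carrying this label are eligible
  containers:
  - name: nginx
    image: nginx
```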
@@ -21,9 +21,9 @@ but there are some circumstances where you may want more control on a node where that a pod ends up on a machine with an SSD attached to it, or to co-locate pods from two different services that communicate a lot into the same availability zone. -{{% /capture %}} -{{% capture body %}} + + ## nodeSelector @@ -388,9 +388,10 @@ spec: The above pod will run on the node kube-01. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + [Taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) allow a Node to *repel* a set of Pods. @@ -402,4 +403,4 @@ Once a Pod is assigned to a Node, the kubelet runs the Pod and allocates node-lo The [topology manager](/docs/tasks/administer-cluster/topology-manager/) can take part in node-level resource allocation decisions. -{{% /capture %}} + diff --git a/content/en/docs/concepts/scheduling-eviction/kube-scheduler.md b/content/en/docs/concepts/scheduling-eviction/kube-scheduler.md index 2fea98bfb4c0d..406c3f974baab 100644 --- a/content/en/docs/concepts/scheduling-eviction/kube-scheduler.md +++ b/content/en/docs/concepts/scheduling-eviction/kube-scheduler.md @@ -1,18 +1,18 @@ --- title: Kubernetes Scheduler -content_template: templates/concept +content_type: concept weight: 10 --- -{{% capture overview %}} + In Kubernetes, _scheduling_ refers to making sure that {{< glossary_tooltip text="Pods" term_id="pod" >}} are matched to {{< glossary_tooltip text="Nodes" term_id="node" >}} so that {{< glossary_tooltip term_id="kubelet" >}} can run them. -{{% /capture %}} -{{% capture body %}} + + ## Scheduling overview {#scheduling} @@ -86,12 +86,13 @@ of the scheduler: `QueueSort`, `Filter`, `Score`, `Bind`, `Reserve`, `Permit`, and others. You can also configure the kube-scheduler to run different profiles. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Read about [scheduler performance tuning](/docs/concepts/scheduling-eviction/scheduler-perf-tuning/) * Read about [Pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/) * Read the [reference documentation](/docs/reference/command-line-tools-reference/kube-scheduler/) for kube-scheduler * Learn about [configuring multiple schedulers](/docs/tasks/administer-cluster/configure-multiple-schedulers/) * Learn about [topology management policies](/docs/tasks/administer-cluster/topology-manager/) * Learn about [Pod Overhead](/docs/concepts/configuration/pod-overhead/) -{{% /capture %}} + diff --git a/content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md b/content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md index e3d4b168617b1..06f535a57454b 100644 --- a/content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md +++ b/content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md @@ -2,11 +2,11 @@ reviewers: - bsalamat title: Scheduler Performance Tuning -content_template: templates/concept +content_type: concept weight: 70 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.14" state="beta" >}} @@ -24,9 +24,9 @@ in a process called _Binding_. This page explains performance tuning optimizations that are relevant for large Kubernetes clusters. 
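The main knob discussed on this page is `percentageOfNodesToScore`; a sketch of setting it, assuming the alpha `kubescheduler.config.k8s.io/v1alpha1` component config served around this release:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha1   # assumption: alpha config API at this release
kind: KubeSchedulerConfiguration
algorithmSource:
  provider: DefaultProvider
percentageOfNodesToScore: 50   # stop searching once 50% of the cluster's nodes are found feasible
```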
-{{% /capture %}} -{{% capture body %}} + + In large clusters, you can tune the scheduler's behavior, balancing scheduling outcomes between latency (new Pods are placed quickly) and @@ -164,4 +164,4 @@ Node 1, Node 5, Node 2, Node 6, Node 3, Node 4 After going over all the Nodes, it goes back to Node 1. -{{% /capture %}} + diff --git a/content/en/docs/concepts/scheduling-eviction/scheduling-framework.md b/content/en/docs/concepts/scheduling-eviction/scheduling-framework.md index d1123b72e15cf..5798b0579f092 100644 --- a/content/en/docs/concepts/scheduling-eviction/scheduling-framework.md +++ b/content/en/docs/concepts/scheduling-eviction/scheduling-framework.md @@ -2,11 +2,11 @@ reviewers: - ahg-g title: Scheduling Framework -content_template: templates/concept +content_type: concept weight: 60 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.15" state="alpha" >}} @@ -20,9 +20,9 @@ framework. [kep]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20180409-scheduling-framework.md -{{% /capture %}} -{{% capture body %}} + + ## Framework workflow @@ -239,4 +239,3 @@ If you are using Kubernetes v1.18 or later, you can configure a set of plugins as a scheduler profile and then define multiple profiles to fit various kinds of workload. Learn more at [multiple profiles](/docs/reference/scheduling/profiles/#multiple-profiles). -{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md b/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md index c803676d3a10d..89a7eca7b1982 100644 --- a/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md +++ b/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md @@ -4,12 +4,12 @@ reviewers: - kevin-wangzefeng - bsalamat title: Taints and Tolerations -content_template: templates/concept +content_type: concept weight: 40 --- -{{% capture overview %}} + [_Node affinity_](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) is a property of {{< glossary_tooltip text="Pods" term_id="pod" >}} that *attracts* them to a set of {{< glossary_tooltip text="nodes" term_id="node" >}} (either as a preference or a @@ -22,9 +22,9 @@ Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints. -{{% /capture %}} -{{% capture body %}} + + ## Concepts @@ -282,9 +282,9 @@ tolerations to all daemons, to prevent DaemonSets from breaking. Adding these tolerations ensures backward compatibility. You can also add arbitrary tolerations to DaemonSets.
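As a minimal sketch of the two halves working together, assuming a node has been tainted with `kubectl taint nodes node1 key1=value1:NoSchedule`, a Pod tolerates that taint like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  tolerations:
  - key: "key1"            # matches the taint's key
    operator: "Equal"
    value: "value1"        # ... its value
    effect: "NoSchedule"   # ... and its effect
  containers:
  - name: nginx
    image: nginx
```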
-{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Read about [out of resource handling](/docs/tasks/administer-cluster/out-of-resource/) and how you can configure it * Read about [pod priority](/docs/concepts/configuration/pod-priority-preemption/) -{{% /capture %}} + diff --git a/content/en/docs/concepts/security/overview.md b/content/en/docs/concepts/security/overview.md index 20ba25503979e..ed3ba48eb4599 100644 --- a/content/en/docs/concepts/security/overview.md +++ b/content/en/docs/concepts/security/overview.md @@ -2,13 +2,13 @@ reviewers: - zparnold title: Overview of Cloud Native Security -content_template: templates/concept +content_type: concept weight: 1 --- {{< toc >}} -{{% capture overview %}} + Kubernetes Security (and security in general) is an immense topic that has many highly interrelated parts. In today's era where open source software is integrated into many of the systems that help web applications run, @@ -17,9 +17,9 @@ think about security holistically. This guide will define a mental model for some general concepts surrounding Cloud Native Security. The mental model is completely arbitrary and you should only use it if it helps you think about where to secure your software stack. -{{% /capture %}} -{{% capture body %}} + + ## The 4C's of Cloud Native Security Let's start with a diagram that may help you understand how you can think about security in layers. @@ -153,12 +153,13 @@ Most of the above mentioned suggestions can actually be automated in your code delivery pipeline as part of a series of checks in security. To learn about a more "Continuous Hacking" approach to software delivery, [this article](https://thenewstack.io/beyond-ci-cd-how-continuous-hacking-of-docker-containers-and-pipeline-driven-security-keeps-ygrene-secure/) provides more detail. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Read about [network policies for Pods](/docs/concepts/services-networking/network-policies/) * Read about [securing your cluster](/docs/tasks/administer-cluster/securing-a-cluster/) * Read about [API access control](/docs/reference/access-authn-authz/controlling-access/) * Read about [data encryption in transit](/docs/tasks/tls/managing-tls-in-a-cluster/) for the control plane * Read about [data encryption at rest](/docs/tasks/administer-cluster/encrypt-data/) * Read about [Secrets in Kubernetes](/docs/concepts/configuration/secret/) -{{% /capture %}} + diff --git a/content/en/docs/concepts/security/pod-security-standards.md b/content/en/docs/concepts/security/pod-security-standards.md index ffe1aa45f2faa..b75d2e0504d6b 100644 --- a/content/en/docs/concepts/security/pod-security-standards.md +++ b/content/en/docs/concepts/security/pod-security-standards.md @@ -2,11 +2,11 @@ reviewers: - tallclair title: Pod Security Standards -content_template: templates/concept +content_type: concept weight: 10 --- -{{% capture overview %}} + Security settings for Pods are typically applied by using [security contexts](/docs/tasks/configure-pod-container/security-context/). Security Contexts allow for the @@ -21,9 +21,9 @@ However, numerous means of policy enforcement have arisen that augment or replac PodSecurityPolicy. The intent of this page is to detail recommended Pod security profiles, decoupled from any specific instantiation. -{{% /capture %}} -{{% capture body %}} + + ## Policy Types @@ -322,4 +322,4 @@ kernel. 
This allows for workloads requiring heightened permissions to still be isolated. Additionally, the protection of sandboxed workloads is highly dependent on the method of sandboxing. As such, no single ‘recommended’ policy can cover all sandboxed workloads. -{{% /capture %}} + diff --git a/content/en/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md b/content/en/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md index 6f931a8531579..05a6a8bc85a58 100644 --- a/content/en/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md +++ b/content/en/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md @@ -3,19 +3,19 @@ reviewers: - rickypai - thockin title: Adding entries to Pod /etc/hosts with HostAliases -content_template: templates/concept +content_type: concept weight: 60 --- {{< toc >}} -{{% capture overview %}} + Adding entries to a Pod's /etc/hosts file provides Pod-level override of hostname resolution when DNS and other options are not applicable. Since Kubernetes 1.7, users can add these custom entries with the HostAliases field in PodSpec. Modification not using HostAliases is not suggested because the file is managed by the kubelet and can be overwritten during Pod creation/restart. -{{% /capture %}} -{{% capture body %}} + + ## Default Hosts File Content @@ -125,5 +125,5 @@ overwritten whenever the `hosts` file is remounted by Kubelet in the event of a container restart or a Pod reschedule. Thus, it is not suggested to modify the contents of the file. -{{% /capture %}} + diff --git a/content/en/docs/concepts/services-networking/connect-applications-service.md b/content/en/docs/concepts/services-networking/connect-applications-service.md index 50c012ffc62c5..831ab8384c2ec 100644 --- a/content/en/docs/concepts/services-networking/connect-applications-service.md +++ b/content/en/docs/concepts/services-networking/connect-applications-service.md @@ -4,12 +4,12 @@ reviewers: - lavalamp - thockin title: Connecting Applications with Services -content_template: templates/concept +content_type: concept weight: 30 --- -{{% capture overview %}} + ## The Kubernetes model for connecting containers @@ -21,9 +21,9 @@ Coordinating port allocations across multiple developers or teams that provide c This guide uses a simple nginx server to demonstrate proof of concept. -{{% /capture %}} -{{% capture body %}} + + ## Exposing pods to the cluster @@ -418,12 +418,12 @@ LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.el ...
``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [Using a Service to Access an Application in a Cluster](/docs/tasks/access-application-cluster/service-access-application-cluster/) * Learn more about [Connecting a Front End to a Back End Using a Service](/docs/tasks/access-application-cluster/connecting-frontend-backend/) * Learn more about [Creating an External Load Balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/) -{{% /capture %}} + diff --git a/content/en/docs/concepts/services-networking/dns-pod-service.md b/content/en/docs/concepts/services-networking/dns-pod-service.md index 9cba1841681fc..280d7601932e2 100644 --- a/content/en/docs/concepts/services-networking/dns-pod-service.md +++ b/content/en/docs/concepts/services-networking/dns-pod-service.md @@ -3,14 +3,14 @@ reviewers: - davidopp - thockin title: DNS for Services and Pods -content_template: templates/concept +content_type: concept weight: 20 --- -{{% capture overview %}} + This page provides an overview of DNS support by Kubernetes. -{{% /capture %}} -{{% capture body %}} + + ## Introduction @@ -262,11 +262,11 @@ The availability of Pod DNS Config and DNS Policy "`None`" is shown as below. | 1.10 | Beta (on by default)| | 1.9 | Alpha | -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + For guidance on administering DNS configurations, check [Configure DNS Service](/docs/tasks/administer-cluster/dns-custom-nameservers/) -{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/concepts/services-networking/dual-stack.md b/content/en/docs/concepts/services-networking/dual-stack.md index c753c17cc1520..aa249566b99f0 100644 --- a/content/en/docs/concepts/services-networking/dual-stack.md +++ b/content/en/docs/concepts/services-networking/dual-stack.md @@ -9,11 +9,11 @@ feature: description: > Allocation of IPv4 and IPv6 addresses to Pods and Services -content_template: templates/concept +content_type: concept weight: 70 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.16" state="alpha" >}} @@ -21,9 +21,9 @@ weight: 70 If you enable IPv4/IPv6 dual-stack networking for your Kubernetes cluster, the cluster will support the simultaneous assignment of both IPv4 and IPv6 addresses. -{{% /capture %}} -{{% capture body %}} + + ## Supported Features @@ -103,10 +103,11 @@ The use of publicly routable and non-publicly routable IPv6 address blocks is ac * Kubenet forces IPv4,IPv6 positional reporting of IPs (--cluster-cidr) -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [Validate IPv4/IPv6 dual-stack](/docs/tasks/network/validate-dual-stack) networking -{{% /capture %}} + diff --git a/content/en/docs/concepts/services-networking/endpoint-slices.md b/content/en/docs/concepts/services-networking/endpoint-slices.md index 940374ae5204d..7c66ce0072157 100644 --- a/content/en/docs/concepts/services-networking/endpoint-slices.md +++ b/content/en/docs/concepts/services-networking/endpoint-slices.md @@ -2,12 +2,12 @@ reviewers: - freehan title: EndpointSlices -content_template: templates/concept +content_type: concept weight: 15 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.17" state="beta" >}} @@ -15,9 +15,9 @@ _EndpointSlices_ provide a simple way to track network endpoints within a Kubernetes cluster. They offer a more scalable and extensible alternative to Endpoints. 
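As a hedged sketch of the beta `discovery.k8s.io/v1beta1` shape (addresses and names are illustrative; slices are normally created and managed by the control plane on behalf of a Service):

```yaml
apiVersion: discovery.k8s.io/v1beta1
kind: EndpointSlice
metadata:
  name: example-abc                        # placeholder; usually a generated name
  labels:
    kubernetes.io/service-name: example    # ties the slice to its owning Service
addressType: IPv4
ports:
- name: http
  protocol: TCP
  port: 80
endpoints:
- addresses:
  - "10.1.2.3"                             # illustrative Pod IP
  conditions:
    ready: true
  topology:
    kubernetes.io/hostname: node-1         # illustrative node name
```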
-{{% /capture %}} -{{% capture body %}} + + ## Motivation @@ -175,11 +175,12 @@ necessary soon anyway. Rolling updates of Deployments also provide a natural repacking of EndpointSlices with all pods and their corresponding endpoints getting replaced. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [Enabling EndpointSlices](/docs/tasks/administer-cluster/enabling-endpointslices) * Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/) -{{% /capture %}} + diff --git a/content/en/docs/concepts/services-networking/ingress-controllers.md b/content/en/docs/concepts/services-networking/ingress-controllers.md index efeb327049891..2c363ce7dc4b8 100644 --- a/content/en/docs/concepts/services-networking/ingress-controllers.md +++ b/content/en/docs/concepts/services-networking/ingress-controllers.md @@ -1,11 +1,11 @@ --- title: Ingress Controllers reviewers: -content_template: templates/concept +content_type: concept weight: 40 --- -{{% capture overview %}} + In order for the Ingress resource to work, the cluster must have an ingress controller running. @@ -16,9 +16,9 @@ that best fits your cluster. Kubernetes as a project currently supports and maintains [GCE](https://git.k8s.io/ingress-gce/README.md) and [nginx](https://git.k8s.io/ingress-nginx/README.md) controllers. -{{% /capture %}} -{{% capture body %}} + + ## Additional controllers @@ -64,11 +64,12 @@ controllers operate slightly differently. Make sure you review your ingress controller's documentation to understand the caveats of choosing it. {{< /note >}} -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [Ingress](/docs/concepts/services-networking/ingress/). * [Set up Ingress on Minikube with the NGINX Controller](/docs/tasks/access-application-cluster/ingress-minikube). 
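When several controllers are installed, the convention at this release is to pick one per Ingress with the `kubernetes.io/ingress.class` annotation; a sketch, assuming an nginx controller and placeholder host/Service names:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"   # assumption: an nginx controller is installed
spec:
  rules:
  - host: example.local                    # placeholder host
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service     # placeholder backend Service
          servicePort: 80
```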
-{{% /capture %}} + diff --git a/content/en/docs/concepts/services-networking/ingress.md b/content/en/docs/concepts/services-networking/ingress.md index 062dc14f66daf..430ee3c72d551 100644 --- a/content/en/docs/concepts/services-networking/ingress.md +++ b/content/en/docs/concepts/services-networking/ingress.md @@ -2,16 +2,16 @@ reviewers: - bprashanth title: Ingress -content_template: templates/concept +content_type: concept weight: 40 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.1" state="beta" >}} {{< glossary_definition term_id="ingress" length="all" >}} -{{% /capture %}} -{{% capture body %}} + + ## Terminology @@ -542,10 +542,11 @@ You can expose a Service in multiple ways that don't directly involve the Ingres * Use [Service.Type=LoadBalancer](/docs/concepts/services-networking/service/#loadbalancer) * Use [Service.Type=NodePort](/docs/concepts/services-networking/service/#nodeport) -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn about the [Ingress API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ingress-v1beta1-networking-k8s-io) * Learn about [Ingress Controllers](/docs/concepts/services-networking/ingress-controllers/) * [Set up Ingress on Minikube with the NGINX Controller](/docs/tasks/access-application-cluster/ingress-minikube) -{{% /capture %}} + diff --git a/content/en/docs/concepts/services-networking/network-policies.md b/content/en/docs/concepts/services-networking/network-policies.md index 795969757d65c..9f29405ae7543 100644 --- a/content/en/docs/concepts/services-networking/network-policies.md +++ b/content/en/docs/concepts/services-networking/network-policies.md @@ -4,20 +4,20 @@ reviewers: - caseydavenport - danwinship title: Network Policies -content_template: templates/concept +content_type: concept weight: 50 --- {{< toc >}} -{{% capture overview %}} + A network policy is a specification of how groups of {{< glossary_tooltip text="pods" term_id="pod">}} are allowed to communicate with each other and other network endpoints. NetworkPolicy resources use {{< glossary_tooltip text="labels" term_id="label">}} to select pods and define rules which specify what traffic is allowed to the selected pods. -{{% /capture %}} -{{% capture body %}} + + ## Prerequisites Network policies are implemented by the [network plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/). To use network policies, you must be using a networking solution which supports NetworkPolicy. Creating a NetworkPolicy resource without a controller that implements it will have no effect. @@ -215,12 +215,13 @@ You must be using a {{< glossary_tooltip text="CNI" term_id="cni" >}} plugin tha {{< /note >}} -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + - See the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) walkthrough for further examples. - See more [recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for common scenarios enabled by the NetworkPolicy resource. -{{% /capture %}} + diff --git a/content/en/docs/concepts/services-networking/service-topology.md b/content/en/docs/concepts/services-networking/service-topology.md index 7b3c58a84a546..d36b76f55f003 100644 --- a/content/en/docs/concepts/services-networking/service-topology.md +++ b/content/en/docs/concepts/services-networking/service-topology.md @@ -8,12 +8,12 @@ feature: description: > Routing of service traffic based upon cluster topology. 
-content_template: templates/concept +content_type: concept weight: 10 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.17" state="alpha" >}} @@ -22,9 +22,9 @@ topology of the cluster. For example, a service can specify that traffic be preferentially routed to endpoints that are on the same Node as the client, or in the same availability zone. -{{% /capture %}} -{{% capture body %}} + + ## Introduction @@ -192,11 +192,12 @@ spec: ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Read about [enabling Service Topology](/docs/tasks/administer-cluster/enabling-service-topology) * Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/) -{{% /capture %}} + diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md index e97d80db21d66..2ae49ac2704d5 100644 --- a/content/en/docs/concepts/services-networking/service.md +++ b/content/en/docs/concepts/services-networking/service.md @@ -7,12 +7,12 @@ feature: description: > No need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them. -content_template: templates/concept +content_type: concept weight: 10 --- -{{% capture overview %}} + {{< glossary_definition term_id="service" length="short" >}} @@ -20,9 +20,9 @@ With Kubernetes you don't need to modify your application to use an unfamiliar s Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them. -{{% /capture %}} -{{% capture body %}} + + ## Motivation @@ -1227,12 +1227,13 @@ SCTP is not supported on Windows based nodes. The kube-proxy does not support the management of SCTP associations when it is in userspace mode. {{< /warning >}} -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/) * Read about [Ingress](/docs/concepts/services-networking/ingress/) * Read about [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/) -{{% /capture %}} + diff --git a/content/en/docs/concepts/storage/dynamic-provisioning.md b/content/en/docs/concepts/storage/dynamic-provisioning.md index 77885981f7cac..dc82e5c2c8290 100644 --- a/content/en/docs/concepts/storage/dynamic-provisioning.md +++ b/content/en/docs/concepts/storage/dynamic-provisioning.md @@ -5,11 +5,11 @@ reviewers: - thockin - msau42 title: Dynamic Volume Provisioning -content_template: templates/concept +content_type: concept weight: 40 --- -{{% capture overview %}} + Dynamic volume provisioning allows storage volumes to be created on-demand. Without dynamic provisioning, cluster administrators have to manually make @@ -19,10 +19,10 @@ to represent them in Kubernetes. The dynamic provisioning feature eliminates the need for cluster administrators to pre-provision storage. Instead, it automatically provisions storage when it is requested by users. -{{% /capture %}} -{{% capture body %}} + + ## Background @@ -133,4 +133,4 @@ Zones in a Region. Single-Zone storage backends should be provisioned in the Zon Pods are scheduled. This can be accomplished by setting the [Volume Binding Mode](/docs/concepts/storage/storage-classes/#volume-binding-mode). 
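As a rough sketch of how the pieces fit together: an administrator defines a StorageClass, and a user then requests it by name from a PersistentVolumeClaim, which triggers provisioning. The class name, provisioner, and sizes below are illustrative assumptions only.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast                        # illustrative class name
provisioner: kubernetes.io/gce-pd   # assumes a cluster where this provisioner is available
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast            # requesting this class triggers dynamic provisioning
  resources:
    requests:
      storage: 30Gi
```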
-{{% /capture %}} + diff --git a/content/en/docs/concepts/storage/persistent-volumes.md b/content/en/docs/concepts/storage/persistent-volumes.md index c365e02171a9e..2c3140de83021 100644 --- a/content/en/docs/concepts/storage/persistent-volumes.md +++ b/content/en/docs/concepts/storage/persistent-volumes.md @@ -11,18 +11,18 @@ feature: description: > Automatically mount the storage system of your choice, whether from local storage, a public cloud provider such as GCP or AWS, or a network storage system such as NFS, iSCSI, Gluster, Ceph, Cinder, or Flocker. -content_template: templates/concept +content_type: concept weight: 20 --- -{{% capture overview %}} + This document describes the current state of _persistent volumes_ in Kubernetes. Familiarity with [volumes](/docs/concepts/storage/volumes/) is suggested. -{{% /capture %}} -{{% capture body %}} + + ## Introduction @@ -746,8 +746,9 @@ and need persistent storage, it is recommended that you use the following patter dynamic storage support (in which case the user should create a matching PV) or the cluster has no storage system (in which case the user cannot deploy config requiring PVCs). -{{% /capture %}} - {{% capture whatsnext %}} + + ## {{% heading "whatsnext" %}} + * Learn more about [Creating a PersistentVolume](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume). * Learn more about [Creating a PersistentVolumeClaim](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolumeclaim). @@ -759,4 +760,3 @@ and need persistent storage, it is recommended that you use the following patter * [PersistentVolumeSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumespec-v1-core) * [PersistentVolumeClaim](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaim-v1-core) * [PersistentVolumeClaimSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaimspec-v1-core) -{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/concepts/storage/storage-classes.md b/content/en/docs/concepts/storage/storage-classes.md index 1ea7c236d9cdf..d6b3a9e3322a8 100644 --- a/content/en/docs/concepts/storage/storage-classes.md +++ b/content/en/docs/concepts/storage/storage-classes.md @@ -5,19 +5,19 @@ reviewers: - thockin - msau42 title: Storage Classes -content_template: templates/concept +content_type: concept weight: 30 --- -{{% capture overview %}} + This document describes the concept of a StorageClass in Kubernetes. Familiarity with [volumes](/docs/concepts/storage/volumes/) and [persistent volumes](/docs/concepts/storage/persistent-volumes) is suggested. -{{% /capture %}} -{{% capture body %}} + + ## Introduction @@ -821,4 +821,4 @@ Delaying volume binding allows the scheduler to consider all of a Pod's scheduling constraints when choosing an appropriate PersistentVolume for a PersistentVolumeClaim. 
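A minimal sketch of a class that opts into delayed binding might look like the following; the class name and provisioner are assumptions, and any topology-aware provisioner could stand in.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-delayed                   # illustrative name
provisioner: kubernetes.io/gce-pd          # assumed provisioner
volumeBindingMode: WaitForFirstConsumer    # bind/provision only once a Pod uses the claim
```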
-{{% /capture %}}
+

diff --git a/content/en/docs/concepts/storage/storage-limits.md b/content/en/docs/concepts/storage/storage-limits.md
index 295ed467a2b3d..fb6cffed9c9f6 100644
--- a/content/en/docs/concepts/storage/storage-limits.md
+++ b/content/en/docs/concepts/storage/storage-limits.md
@@ -5,10 +5,10 @@ reviewers:
- thockin
- msau42
title: Node-specific Volume Limits
-content_template: templates/concept
+content_type: concept
---

-{{% capture overview %}}
+

This page describes the maximum number of volumes that can be attached
to a Node for various cloud providers.

@@ -18,9 +18,9 @@ how many volumes can be attached to a Node. It is important for Kubernetes to
respect those limits. Otherwise, Pods scheduled on a Node could get stuck
waiting for volumes to attach.

-{{% /capture %}}

-{{% capture body %}}
+
+

## Kubernetes default limits

@@ -78,4 +78,4 @@ Refer to the [CSI specifications](https://github.com/container-storage-interface
* For volumes managed by in-tree plugins that have been migrated to a CSI driver, the maximum
  number of volumes will be the one reported by the CSI driver.

-{{% /capture %}}
+

diff --git a/content/en/docs/concepts/storage/volume-pvc-datasource.md b/content/en/docs/concepts/storage/volume-pvc-datasource.md
index 2f29fb9bb907a..ac8d16041da71 100644
--- a/content/en/docs/concepts/storage/volume-pvc-datasource.md
+++ b/content/en/docs/concepts/storage/volume-pvc-datasource.md
@@ -5,18 +5,18 @@ reviewers:
- thockin
- msau42
title: CSI Volume Cloning
-content_template: templates/concept
+content_type: concept
weight: 30
---

-{{% capture overview %}}
+

This document describes the concept of cloning existing CSI Volumes in Kubernetes. Familiarity with [Volumes](/docs/concepts/storage/volumes) is suggested.

-{{% /capture %}}

-{{% capture body %}}
+
+

## Introduction

@@ -70,4 +70,4 @@ The result is a new PVC with the name `clone-of-pvc-1` that has the exact same c

Upon availability of the new PVC, the cloned PVC is consumed the same as any other PVC. It's also expected at this point that the newly created PVC is an independent object. It can be consumed, cloned, snapshotted, or deleted independently and without consideration for its original dataSource PVC. This also implies that the source is not linked in any way to the newly created clone; it may be modified or deleted without affecting the clone.

-{{% /capture %}}
+

diff --git a/content/en/docs/concepts/storage/volume-snapshot-classes.md b/content/en/docs/concepts/storage/volume-snapshot-classes.md
index dcb9516519973..f50db195209fe 100644
--- a/content/en/docs/concepts/storage/volume-snapshot-classes.md
+++ b/content/en/docs/concepts/storage/volume-snapshot-classes.md
@@ -7,20 +7,20 @@ reviewers:
- xing-yang
- yuxiangqian
title: Volume Snapshot Classes
-content_template: templates/concept
+content_type: concept
weight: 30
---

-{{% capture overview %}}
+

This document describes the concept of `VolumeSnapshotClass` in Kubernetes. Familiarity
with [volume snapshots](/docs/concepts/storage/volume-snapshots/) and
[storage classes](/docs/concepts/storage/storage-classes) is suggested.

-{{% /capture %}}

-{{% capture body %}}
+
+

## Introduction

@@ -69,4 +69,4 @@ Volume snapshot classes have parameters that describe volume snapshots belonging
the volume snapshot class. Different parameters may be accepted depending on
the `driver`.
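For orientation, a minimal VolumeSnapshotClass sketch could look like the following; the class name, driver, and (empty) parameters are placeholder assumptions for whatever CSI driver actually serves the cluster.

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snapclass   # illustrative name
driver: hostpath.csi.k8s.io      # must match the CSI driver that takes the snapshots
deletionPolicy: Delete           # Delete or Retain controls what happens to the backing snapshot
parameters: {}                   # driver-specific keys, if any
```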
-{{% /capture %}} + diff --git a/content/en/docs/concepts/storage/volume-snapshots.md b/content/en/docs/concepts/storage/volume-snapshots.md index 0ad66e75ae7f3..a6cc1220866f7 100644 --- a/content/en/docs/concepts/storage/volume-snapshots.md +++ b/content/en/docs/concepts/storage/volume-snapshots.md @@ -7,19 +7,19 @@ reviewers: - xing-yang - yuxiangqian title: Volume Snapshots -content_template: templates/concept +content_type: concept weight: 20 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.17" state="beta" >}} In Kubernetes, a _VolumeSnapshot_ represents a snapshot of a volume on a storage system. This document assumes that you are already familiar with Kubernetes [persistent volumes](/docs/concepts/storage/persistent-volumes/). -{{% /capture %}} -{{% capture body %}} + + ## Introduction @@ -154,4 +154,4 @@ the *dataSource* field in the `PersistentVolumeClaim` object. For more details, see [Volume Snapshot and Restore Volume from Snapshot](/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support). -{{% /capture %}} + diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md index 7930bf0fe64f6..fe71c2e86e6d8 100644 --- a/content/en/docs/concepts/storage/volumes.md +++ b/content/en/docs/concepts/storage/volumes.md @@ -5,11 +5,11 @@ reviewers: - thockin - msau42 title: Volumes -content_template: templates/concept +content_type: concept weight: 10 --- -{{% capture overview %}} + On-disk files in a Container are ephemeral, which presents some problems for non-trivial applications when running in Containers. First, when a Container @@ -20,10 +20,10 @@ Kubernetes `Volume` abstraction solves both of these problems. Familiarity with [Pods](/docs/user-guide/pods) is suggested. -{{% /capture %}} -{{% capture body %}} + + ## Background @@ -1481,6 +1481,7 @@ sudo systemctl restart docker -{{% capture whatsnext %}} +## {{% heading "whatsnext" %}} + * Follow an example of [deploying WordPress and MySQL with Persistent Volumes](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/). -{{% /capture %}} + diff --git a/content/en/docs/concepts/workloads/controllers/cron-jobs.md b/content/en/docs/concepts/workloads/controllers/cron-jobs.md index 233e0ca66147f..aca2996147772 100644 --- a/content/en/docs/concepts/workloads/controllers/cron-jobs.md +++ b/content/en/docs/concepts/workloads/controllers/cron-jobs.md @@ -4,11 +4,11 @@ reviewers: - soltysh - janetkuo title: CronJob -content_template: templates/concept +content_type: concept weight: 80 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.8" state="beta" >}} @@ -33,8 +33,8 @@ append 11 characters to the job name provided and there is a constraint that the maximum length of a Job name is no more than 63 characters. -{{% /capture %}} -{{% capture body %}} + + ## CronJob @@ -82,12 +82,13 @@ be down for the same period as the previous example (`08:29:00` to `10:21:00`,) The CronJob is only responsible for creating Jobs that match its schedule, and the Job in turn is responsible for the management of the Pods it represents. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + [Cron expression format](https://pkg.go.dev/github.com/robfig/cron?tab=doc#hdr-CRON_Expression_Format) documents the format of CronJob `schedule` fields. 
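As a quick illustration of the `schedule` field, a minimal CronJob might look like the sketch below; the name, image, and five-minute cadence are assumptions chosen for the example.

```yaml
apiVersion: batch/v1beta1          # CronJob is still beta at this point
kind: CronJob
metadata:
  name: hello                      # illustrative name
spec:
  schedule: "*/5 * * * *"          # standard five-field cron expression
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox         # assumed image
            args: ["/bin/sh", "-c", "date; echo Hello from the CronJob"]
          restartPolicy: OnFailure
```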
For instructions on creating and working with cron jobs, and for an example of a CronJob
manifest, see [Running automated tasks with cron jobs](/docs/tasks/job/automated-tasks-with-cron-jobs).

-{{% /capture %}}
+

diff --git a/content/en/docs/concepts/workloads/controllers/daemonset.md b/content/en/docs/concepts/workloads/controllers/daemonset.md
index e7ac6139f8844..7f1b5c46303e2 100644
--- a/content/en/docs/concepts/workloads/controllers/daemonset.md
+++ b/content/en/docs/concepts/workloads/controllers/daemonset.md
@@ -6,11 +6,11 @@ reviewers:
- janetkuo
- kow3ns
title: DaemonSet
-content_template: templates/concept
+content_type: concept
weight: 50
---

-{{% capture overview %}}
+

A _DaemonSet_ ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the
cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage
@@ -26,10 +26,10 @@ In a simple case, one DaemonSet, covering all nodes, would be used for each type
A more complex setup might use multiple DaemonSets for a single type of daemon, but with
different flags and/or different memory and CPU requests for different hardware types.

-{{% /capture %}}

-{{% capture body %}}
+
+

## Writing a DaemonSet Spec

@@ -229,4 +229,4 @@ number of replicas and rolling out updates are more important than controlling e
the Pod runs on. Use a DaemonSet when it is important that a copy of a Pod always run on
all or certain hosts, and when it needs to start before other Pods.

-{{% /capture %}}
+

diff --git a/content/en/docs/concepts/workloads/controllers/deployment.md b/content/en/docs/concepts/workloads/controllers/deployment.md
index 2610380641781..6287c0d98edce 100644
--- a/content/en/docs/concepts/workloads/controllers/deployment.md
+++ b/content/en/docs/concepts/workloads/controllers/deployment.md
@@ -7,11 +7,11 @@ feature:
  description: >
    Kubernetes progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it doesn't kill all your instances at the same time. If something goes wrong, Kubernetes will roll back the change for you. Take advantage of a growing ecosystem of deployment solutions.
-content_template: templates/concept
+content_type: concept
weight: 30
---

-{{% capture overview %}}
+

A _Deployment_ provides declarative updates for [Pods](/docs/concepts/workloads/pods/pod/)
and [ReplicaSets](/docs/concepts/workloads/controllers/replicaset/).

@@ -22,10 +22,10 @@ You describe a _desired state_ in a Deployment, and the Deployment {{< glossary_
Do not manage ReplicaSets owned by a Deployment. Consider opening an issue in the main Kubernetes repository if your use case is not covered below.
{{< /note >}}

-{{% /capture %}}

-{{% capture body %}}
+
+

## Use Case

@@ -1166,4 +1166,4 @@ a paused Deployment and one that is not paused, is that any changes into the Pod
Deployment will not trigger new rollouts as long as it is paused. A Deployment is not
paused by default when it is created.
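The paused behavior described above maps to a single field in the spec. Here is a minimal sketch of a Deployment created in the paused state; the names and image are illustrative assumptions.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment    # illustrative name
spec:
  paused: true              # PodTemplate changes won't trigger a rollout while true
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2 # assumed image
```

Clearing the field (or running `kubectl rollout resume`) applies any template changes accumulated while paused as a single rollout.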
-{{% /capture %}} + diff --git a/content/en/docs/concepts/workloads/controllers/garbage-collection.md b/content/en/docs/concepts/workloads/controllers/garbage-collection.md index 45303b66e8d00..c11386bc1c1da 100644 --- a/content/en/docs/concepts/workloads/controllers/garbage-collection.md +++ b/content/en/docs/concepts/workloads/controllers/garbage-collection.md @@ -1,18 +1,18 @@ --- title: Garbage Collection -content_template: templates/concept +content_type: concept weight: 60 --- -{{% capture overview %}} + The role of the Kubernetes garbage collector is to delete certain objects that once had an owner, but no longer have an owner. -{{% /capture %}} -{{% capture body %}} + + ## Owners and dependents @@ -168,16 +168,17 @@ See [kubeadm/#149](https://github.com/kubernetes/kubeadm/issues/149#issuecomment Tracked at [#26120](https://github.com/kubernetes/kubernetes/issues/26120) -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + [Design Doc 1](https://git.k8s.io/community/contributors/design-proposals/api-machinery/garbage-collection.md) [Design Doc 2](https://git.k8s.io/community/contributors/design-proposals/api-machinery/synchronous-garbage-collection.md) -{{% /capture %}} + diff --git a/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md b/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md index aef6b556a37e4..11751dd1664c8 100644 --- a/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md +++ b/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md @@ -3,7 +3,7 @@ reviewers: - erictune - soltysh title: Jobs - Run to Completion -content_template: templates/concept +content_type: concept feature: title: Batch execution description: > @@ -11,7 +11,7 @@ feature: weight: 70 --- -{{% capture overview %}} + A Job creates one or more Pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number @@ -24,10 +24,10 @@ due to a node hardware failure or a node reboot). You can also use a Job to run multiple Pods in parallel. -{{% /capture %}} -{{% capture body %}} + + ## Running an example Job @@ -478,4 +478,4 @@ object, but maintains complete control over what Pods are created and how work i You can use a [`CronJob`](/docs/concepts/workloads/controllers/cron-jobs/) to create a Job that will run at specified times/dates, similar to the Unix tool `cron`. -{{% /capture %}} + diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md index 92cbe60a335a2..ef2a069ca1c69 100644 --- a/content/en/docs/concepts/workloads/controllers/replicaset.md +++ b/content/en/docs/concepts/workloads/controllers/replicaset.md @@ -4,19 +4,19 @@ reviewers: - bprashanth - madhusudancs title: ReplicaSet -content_template: templates/concept +content_type: concept weight: 10 --- -{{% capture overview %}} + A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods. -{{% /capture %}} -{{% capture body %}} + + ## How a ReplicaSet works @@ -366,4 +366,4 @@ The two serve the same purpose, and behave similarly, except that a ReplicationC selector requirements as described in the [labels user guide](/docs/concepts/overview/working-with-objects/labels/#label-selectors). 
As such, ReplicaSets are preferred over ReplicationControllers.

-{{% /capture %}}
+

diff --git a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md
index fe20980ce6e6f..2cc828494075f 100644
--- a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md
+++ b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md
@@ -9,11 +9,11 @@ feature:
  description: >
    Restarts containers that fail, replaces and reschedules containers when nodes die, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.
-content_template: templates/concept
+content_type: concept
weight: 20
---

-{{% capture overview %}}
+

{{< note >}}
A [`Deployment`](/docs/concepts/workloads/controllers/deployment/) that configures a [`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is now the recommended way to set up replication.
@@ -23,10 +23,10 @@ A _ReplicationController_ ensures that a specified number of pod replicas are ru
time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is
always up and available.

-{{% /capture %}}

-{{% capture body %}}
+
+

## How a ReplicationController Works

@@ -285,4 +285,4 @@ safe to terminate when the machine is otherwise ready to be rebooted/shutdown.

Read [Run Stateless AP Replication Controller](/docs/tutorials/stateless-application/run-stateless-ap-replication-controller/).

-{{% /capture %}}
+

diff --git a/content/en/docs/concepts/workloads/controllers/statefulset.md b/content/en/docs/concepts/workloads/controllers/statefulset.md
index 661955cb48c82..4f8429d668b9a 100644
--- a/content/en/docs/concepts/workloads/controllers/statefulset.md
+++ b/content/en/docs/concepts/workloads/controllers/statefulset.md
@@ -7,18 +7,18 @@ reviewers:
- kow3ns
- smarterclayton
title: StatefulSets
-content_template: templates/concept
+content_type: concept
weight: 40
---

-{{% capture overview %}}
+

StatefulSet is the workload API object used to manage stateful applications.

{{< glossary_definition term_id="statefulset" length="all" >}}

-{{% /capture %}}

-{{% capture body %}}
+
+

## Using StatefulSets

@@ -270,12 +270,13 @@ After reverting the template, you must also delete any Pods that StatefulSet had
already attempted to run with the bad configuration.
StatefulSet will then begin to recreate the Pods using the reverted template.

-{{% /capture %}}

-{{% capture whatsnext %}}
+
+## {{% heading "whatsnext" %}}
+

* Follow an example of [deploying a stateful application](/docs/tutorials/stateful-application/basic-stateful-set/).
* Follow an example of [deploying Cassandra with Stateful Sets](/docs/tutorials/stateful-application/cassandra/).
* Follow an example of [running a replicated stateful application](/docs/tasks/run-application/run-replicated-stateful-application/).
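To make the stable-identity machinery concrete, here is a minimal StatefulSet sketch; the `web`/`nginx` names, image, and storage size are illustrative assumptions, and the referenced headless Service is assumed to exist already.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                    # Pods get ordinal names: web-0, web-1, ...
spec:
  serviceName: "nginx"         # assumed headless Service that provides network identity
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8   # assumed image
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:        # each replica gets its own PVC that survives rescheduling
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```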
-{{% /capture %}} + diff --git a/content/en/docs/concepts/workloads/controllers/ttlafterfinished.md b/content/en/docs/concepts/workloads/controllers/ttlafterfinished.md index c5b88198f4e78..0d2657d8cea53 100644 --- a/content/en/docs/concepts/workloads/controllers/ttlafterfinished.md +++ b/content/en/docs/concepts/workloads/controllers/ttlafterfinished.md @@ -2,11 +2,11 @@ reviewers: - janetkuo title: TTL Controller for Finished Resources -content_template: templates/concept +content_type: concept weight: 65 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.12" state="alpha" >}} @@ -21,12 +21,12 @@ Alpha Disclaimer: this feature is currently alpha, and can be enabled with both `TTLAfterFinished`. -{{% /capture %}} -{{% capture body %}} + + ## TTL Controller @@ -78,12 +78,13 @@ In Kubernetes, it's required to run NTP on all nodes to avoid time skew. Clocks aren't always correct, but the difference should be very small. Please be aware of this risk when setting a non-zero TTL. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + [Clean up Jobs automatically](/docs/concepts/workloads/controllers/jobs-run-to-completion/#clean-up-finished-jobs-automatically) [Design doc](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/0026-ttl-after-finish.md) -{{% /capture %}} + diff --git a/content/en/docs/concepts/workloads/pods/disruptions.md b/content/en/docs/concepts/workloads/pods/disruptions.md index 9983a67fc81a5..589bde56685e6 100644 --- a/content/en/docs/concepts/workloads/pods/disruptions.md +++ b/content/en/docs/concepts/workloads/pods/disruptions.md @@ -4,11 +4,11 @@ reviewers: - foxish - davidopp title: Disruptions -content_template: templates/concept +content_type: concept weight: 60 --- -{{% capture overview %}} + This guide is for application owners who want to build highly available applications, and thus need to understand what types of Disruptions can happen to Pods. @@ -16,10 +16,10 @@ what types of Disruptions can happen to Pods. It is also for Cluster Administrators who want to perform automated cluster actions, like upgrading and autoscaling clusters. -{{% /capture %}} -{{% capture body %}} + + ## Voluntary and Involuntary Disruptions @@ -262,13 +262,14 @@ the nodes in your cluster, such as a node or system software upgrade, here are s disruptions largely overlaps with work to support autoscaling and tolerating involuntary disruptions. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Follow steps to protect your application by [configuring a Pod Disruption Budget](/docs/tasks/run-application/configure-pdb/). * Learn more about [draining nodes](/docs/tasks/administer-cluster/safely-drain-node/) -{{% /capture %}} + diff --git a/content/en/docs/concepts/workloads/pods/ephemeral-containers.md b/content/en/docs/concepts/workloads/pods/ephemeral-containers.md index c6506df69c2c2..c1852df707f4b 100644 --- a/content/en/docs/concepts/workloads/pods/ephemeral-containers.md +++ b/content/en/docs/concepts/workloads/pods/ephemeral-containers.md @@ -3,11 +3,11 @@ reviewers: - verb - yujuhong title: Ephemeral Containers -content_template: templates/concept +content_type: concept weight: 80 --- -{{% capture overview %}} + {{< feature-state state="alpha" for_k8s_version="v1.16" >}} @@ -23,9 +23,9 @@ clusters. In accordance with the [Kubernetes Deprecation Policy]( significantly in the future or be removed entirely. 
{{< /warning >}} -{{% /capture %}} -{{% capture body %}} + + ## Understanding ephemeral containers @@ -192,4 +192,4 @@ example: kubectl attach -it example-pod -c debugger ``` -{{% /capture %}} + diff --git a/content/en/docs/concepts/workloads/pods/init-containers.md b/content/en/docs/concepts/workloads/pods/init-containers.md index 2cf2bf85b53fc..6e67a9e0ca4f1 100644 --- a/content/en/docs/concepts/workloads/pods/init-containers.md +++ b/content/en/docs/concepts/workloads/pods/init-containers.md @@ -2,20 +2,20 @@ reviewers: - erictune title: Init Containers -content_template: templates/concept +content_type: concept weight: 40 --- -{{% capture overview %}} + This page provides an overview of init containers: specialized containers that run before app containers in a {{< glossary_tooltip text="Pod" term_id="pod" >}}. Init containers can contain utilities or setup scripts not present in an app image. You can specify init containers in the Pod specification alongside the `containers` array (which describes app containers). -{{% /capture %}} -{{% capture body %}} + + ## Understanding init containers @@ -317,12 +317,13 @@ reasons: forcing a restart, and the init container completion record has been lost due to garbage collection. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Read about [creating a Pod that has an init container](/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container) * Learn how to [debug init containers](/docs/tasks/debug-application-cluster/debug-init-containers/) -{{% /capture %}} + diff --git a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md index 74031a372292d..1ce43f6dd9c27 100644 --- a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md +++ b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md @@ -1,20 +1,20 @@ --- title: Pod Lifecycle -content_template: templates/concept +content_type: concept weight: 30 --- -{{% capture overview %}} + {{< comment >}}Updated: 4/14/2015{{< /comment >}} {{< comment >}}Edited and moved to Concepts section: 2/2/17{{< /comment >}} This page describes the lifecycle of a Pod. -{{% /capture %}} -{{% capture body %}} + + ## Pod phase @@ -390,10 +390,11 @@ spec: * Node controller sets Pod `phase` to Failed. * If running under a controller, Pod is recreated elsewhere. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Get hands-on experience [attaching handlers to Container lifecycle events](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/). @@ -403,7 +404,7 @@ spec: * Learn more about [Container lifecycle hooks](/docs/concepts/containers/container-lifecycle-hooks/). -{{% /capture %}} + diff --git a/content/en/docs/concepts/workloads/pods/pod-overview.md b/content/en/docs/concepts/workloads/pods/pod-overview.md index 2bc29512596b4..e963b7ace67c6 100644 --- a/content/en/docs/concepts/workloads/pods/pod-overview.md +++ b/content/en/docs/concepts/workloads/pods/pod-overview.md @@ -2,19 +2,19 @@ reviewers: - erictune title: Pod Overview -content_template: templates/concept +content_type: concept weight: 10 card: name: concepts weight: 60 --- -{{% capture overview %}} + This page provides an overview of `Pod`, the smallest deployable object in the Kubernetes object model. 
-{{% /capture %}} -{{% capture body %}} + + ## Understanding Pods A *Pod* is the basic execution unit of a Kubernetes application--the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents processes running on your {{< glossary_tooltip term_id="cluster" text="cluster" >}}. @@ -111,12 +111,13 @@ For example, a Deployment controller ensures that the running Pods match the cur On Nodes, the {{< glossary_tooltip term_id="kubelet" text="kubelet" >}} does not directly observe or manage any of the details around pod templates and updates; those details are abstracted away. That abstraction and separation of concerns simplifies system semantics, and makes it feasible to extend the cluster's behavior without changing existing code. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [Pods](/docs/concepts/workloads/pods/pod/) * [The Distributed System Toolkit: Patterns for Composite Containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns) explains common layouts for Pods with more than one container * Learn more about Pod behavior: * [Pod Termination](/docs/concepts/workloads/pods/pod/#termination-of-pods) * [Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/) -{{% /capture %}} + diff --git a/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md b/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md index 6e6f87844990a..afa52fa53239e 100644 --- a/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md +++ b/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md @@ -1,18 +1,18 @@ --- title: Pod Topology Spread Constraints -content_template: templates/concept +content_type: concept weight: 50 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.18" state="beta" >}} You can use _topology spread constraints_ to control how {{< glossary_tooltip text="Pods" term_id="Pod" >}} are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains. This can help to achieve high availability as well as efficient resource utilization. -{{% /capture %}} -{{% capture body %}} + + ## Prerequisites @@ -246,4 +246,4 @@ As of 1.18, at which this feature is Beta, there are some known limitations: - Scaling down a Deployment may result in imbalanced Pods distribution. - Pods matched on tainted nodes are respected. See [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921) -{{% /capture %}} + diff --git a/content/en/docs/concepts/workloads/pods/pod.md b/content/en/docs/concepts/workloads/pods/pod.md index d64227be48edc..d87dc92cb2c65 100644 --- a/content/en/docs/concepts/workloads/pods/pod.md +++ b/content/en/docs/concepts/workloads/pods/pod.md @@ -1,19 +1,19 @@ --- reviewers: title: Pods -content_template: templates/concept +content_type: concept weight: 20 --- -{{% capture overview %}} + _Pods_ are the smallest deployable units of computing that can be created and managed in Kubernetes. -{{% /capture %}} -{{% capture body %}} + + ## What is a Pod? @@ -206,4 +206,4 @@ describes the object in detail. When creating the manifest for a Pod object, make sure the name specified is a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). 
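A minimal Pod manifest illustrating a DNS-subdomain-compliant name follows; the name and image are illustrative only.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo        # lowercase alphanumerics and '-': a valid DNS subdomain name
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2   # assumed image
    ports:
    - containerPort: 80
```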
-{{% /capture %}}
+

diff --git a/content/en/docs/concepts/workloads/pods/podpreset.md b/content/en/docs/concepts/workloads/pods/podpreset.md
index a1906c8b99337..f77e34a3f9115 100644
--- a/content/en/docs/concepts/workloads/pods/podpreset.md
+++ b/content/en/docs/concepts/workloads/pods/podpreset.md
@@ -2,20 +2,20 @@ reviewers:
- jessfraz
title: Pod Preset
-content_template: templates/concept
+content_type: concept
weight: 50
---

-{{% capture overview %}}
+

{{< feature-state for_k8s_version="v1.6" state="alpha" >}}

This page provides an overview of PodPresets, which are objects for injecting
certain information into pods at creation time. The information can include
secrets, volumes, volume mounts, and environment variables.

-{{% /capture %}}

-{{% capture body %}}
+
+

## Understanding Pod presets

A PodPreset is an API resource for injecting additional runtime requirements
@@ -82,12 +82,13 @@ There may be instances where you wish for a Pod to not be altered by any Pod
Preset mutations. In these cases, you can add an annotation in the Pod Spec
of the form: `podpreset.admission.kubernetes.io/exclude: "true"`.

-{{% /capture %}}

-{{% capture whatsnext %}}
+
+## {{% heading "whatsnext" %}}
+

See [Injecting data into a Pod using PodPreset](/docs/tasks/inject-data-application/podpreset/).

For more information about the background, see the [design proposal for PodPreset](https://git.k8s.io/community/contributors/design-proposals/service-catalog/pod-preset.md).

-{{% /capture %}}
+

diff --git a/content/en/docs/contribute/_index.md b/content/en/docs/contribute/_index.md
index c6aa34812560d..e518b1f975435 100644
--- a/content/en/docs/contribute/_index.md
+++ b/content/en/docs/contribute/_index.md
@@ -1,5 +1,5 @@
---
-content_template: templates/concept
+content_type: concept
title: Contribute to Kubernetes docs
linktitle: Contribute
main_menu: true
@@ -10,7 +10,7 @@ card:
  title: Start contributing
---

-{{% capture overview %}}
+

This website is maintained by [Kubernetes SIG Docs](/docs/contribute/#get-involved-with-sig-docs).

@@ -23,9 +23,9 @@ Kubernetes documentation contributors:

Kubernetes documentation welcomes improvements from all contributors, new and experienced!

-{{% /capture %}}

-{{% capture body %}}
+
+

## Getting started

@@ -75,4 +75,4 @@ SIG Docs communicates with different methods:
- Read the [contributor cheatsheet](https://github.com/kubernetes/community/tree/master/contributors/guide/contributor-cheatsheet) to get involved with Kubernetes feature development.
- Submit a [blog post or case study](/docs/contribute/new-content/blogs-case-studies/).

-{{% /capture %}}
+

diff --git a/content/en/docs/contribute/advanced.md b/content/en/docs/contribute/advanced.md
index 2ed3a4afd6316..9cf6a6588385f 100644
--- a/content/en/docs/contribute/advanced.md
+++ b/content/en/docs/contribute/advanced.md
@@ -1,11 +1,11 @@
---
title: Advanced contributing
slug: advanced
-content_template: templates/concept
+content_type: concept
weight: 98
---

-{{% capture overview %}}
+

This page assumes that you understand how to
[contribute to new content](/docs/contribute/new-content/overview) and
@@ -13,9 +13,9 @@ This page assumes that you understand how to
to learn about more ways to contribute. You need to use the Git command line
client and other tools for some of these tasks.

-{{% /capture %}}

-{{% capture body %}}
+
+

## Be the PR Wrangler for a week

@@ -245,4 +245,4 @@ When you’re ready to stop recording, click Stop.

The video uploads automatically to YouTube.
-{{% /capture %}} + diff --git a/content/en/docs/contribute/generate-ref-docs/contribute-upstream.md b/content/en/docs/contribute/generate-ref-docs/contribute-upstream.md index 6c4d93cd401ed..5f4edbcc7726f 100644 --- a/content/en/docs/contribute/generate-ref-docs/contribute-upstream.md +++ b/content/en/docs/contribute/generate-ref-docs/contribute-upstream.md @@ -1,10 +1,10 @@ --- title: Contributing to the Upstream Kubernetes Code -content_template: templates/task +content_type: task weight: 20 --- -{{% capture overview %}} + This page shows how to contribute to the upstream `kubernetes/kubernetes` project. You can fix bugs found in the Kubernetes API documentation or the content of @@ -16,9 +16,10 @@ API or the `kube-*` components from the upstream code, see the following instruc - [Generating Reference Documentation for the Kubernetes API](/docs/contribute/generate-ref-docs/kubernetes-api/) - [Generating Reference Documentation for the Kubernetes Components and Tools](/docs/contribute/generate-ref-docs/kubernetes-components/) -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + - You need to have these tools installed: @@ -35,9 +36,9 @@ API or the `kube-*` components from the upstream code, see the following instruc For more information, see [Creating a Pull Request](https://help.github.com/articles/creating-a-pull-request/) and [GitHub Standard Fork & Pull Request Workflow](https://gist.github.com/Chaser324/ce0505fbed06b947d962). -{{% /capture %}} -{{% capture steps %}} + + ## The big picture @@ -230,12 +231,13 @@ the API reference documentation. You are now ready to follow the [Generating Reference Documentation for the Kubernetes API](/docs/contribute/generate-ref-docs/kubernetes-api/) guide to generate the [published Kubernetes API reference documentation](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/). -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [Generating Reference Documentation for the Kubernetes API](/docs/contribute/generate-ref-docs/kubernetes-api/) * [Generating Reference Docs for Kubernetes Components and Tools](/docs/contribute/generate-ref-docs/kubernetes-components/) * [Generating Reference Documentation for kubectl Commands](/docs/contribute/generate-ref-docs/kubectl/) -{{% /capture %}} + diff --git a/content/en/docs/contribute/generate-ref-docs/kubectl.md b/content/en/docs/contribute/generate-ref-docs/kubectl.md index 5930a1f452304..f057ce6800aea 100644 --- a/content/en/docs/contribute/generate-ref-docs/kubectl.md +++ b/content/en/docs/contribute/generate-ref-docs/kubectl.md @@ -1,10 +1,10 @@ --- title: Generating Reference Documentation for kubectl Commands -content_template: templates/task +content_type: task weight: 90 --- -{{% capture overview %}} + This page shows how to generate the `kubectl` command reference. @@ -21,15 +21,16 @@ reference page, see [Generating Reference Pages for Kubernetes Components and Tools](/docs/home/contribute/generated-reference/kubernetes-components/). {{< /note >}} -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "prerequisites-ref-docs.md" >}} -{{% /capture %}} -{{% capture steps %}} + + ## Setting up the local repositories @@ -253,12 +254,13 @@ A few minutes after your pull request is merged, your updated reference topics will be visible in the [published documentation](/docs/home). 
-{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [Generating Reference Documentation Quickstart](/docs/contribute/generate-ref-docs/quickstart/) * [Generating Reference Documentation for Kubernetes Components and Tools](/docs/contribute/generate-ref-docs/kubernetes-components/) * [Generating Reference Documentation for the Kubernetes API](/docs/contribute/generate-ref-docs/kubernetes-api/) -{{% /capture %}} + diff --git a/content/en/docs/contribute/generate-ref-docs/kubernetes-api.md b/content/en/docs/contribute/generate-ref-docs/kubernetes-api.md index 5060d3b6e077b..10482eda9759d 100644 --- a/content/en/docs/contribute/generate-ref-docs/kubernetes-api.md +++ b/content/en/docs/contribute/generate-ref-docs/kubernetes-api.md @@ -1,10 +1,10 @@ --- title: Generating Reference Documentation for the Kubernetes API -content_template: templates/task +content_type: task weight: 50 --- -{{% capture overview %}} + This page shows how to update the Kubernetes API reference documentation. @@ -18,15 +18,16 @@ If you find bugs in the generated documentation, you need to If you need only to regenerate the reference documentation from the [OpenAPI](https://github.com/OAI/OpenAPI-Specification) spec, continue reading this page. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "prerequisites-ref-docs.md" >}} -{{% /capture %}} -{{% capture steps %}} + + ## Setting up the local repositories @@ -194,12 +195,13 @@ Submit your changes as a Monitor your pull request, and respond to reviewer comments as needed. Continue to monitor your pull request until it has been merged. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [Generating Reference Documentation Quickstart](/docs/contribute/generate-ref-docs/quickstart/) * [Generating Reference Docs for Kubernetes Components and Tools](/docs/contribute/generate-ref-docs/kubernetes-components/) * [Generating Reference Documentation for kubectl Commands](/docs/contribute/generate-ref-docs/kubectl/) -{{% /capture %}} + diff --git a/content/en/docs/contribute/generate-ref-docs/kubernetes-components.md b/content/en/docs/contribute/generate-ref-docs/kubernetes-components.md index f71db7afb1ae7..be84beeb084dc 100644 --- a/content/en/docs/contribute/generate-ref-docs/kubernetes-components.md +++ b/content/en/docs/contribute/generate-ref-docs/kubernetes-components.md @@ -1,34 +1,36 @@ --- title: Generating Reference Pages for Kubernetes Components and Tools -content_template: templates/task +content_type: task weight: 120 --- -{{% capture overview %}} + This page shows how to build the Kubernetes component and tool reference pages. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + Start with the [Prerequisites section](/docs/contribute/generate-ref-docs/quickstart/#before-you-begin) in the Reference Documentation Quickstart guide. -{{% /capture %}} -{{% capture steps %}} + + Follow the [Reference Documentation Quickstart](/docs/contribute/generate-ref-docs/quickstart/) to generate the Kubernetes component and tool reference pages. 
-{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [Generating Reference Documentation Quickstart](/docs/contribute/generate-ref-docs/quickstart/) * [Generating Reference Documentation for kubectl Commands](/docs/contribute/generate-ref-docs/kubectl/) * [Generating Reference Documentation for the Kubernetes API](/docs/contribute/generate-ref-docs/kubernetes-api/) * [Contributing to the Upstream Kubernetes Project for Documentation](/docs/contribute/generate-ref-docs/contribute-upstream/) -{{% /capture %}} + diff --git a/content/en/docs/contribute/generate-ref-docs/quickstart.md b/content/en/docs/contribute/generate-ref-docs/quickstart.md index 9645c641707f1..df5cdbb95f55d 100644 --- a/content/en/docs/contribute/generate-ref-docs/quickstart.md +++ b/content/en/docs/contribute/generate-ref-docs/quickstart.md @@ -1,24 +1,25 @@ --- title: Quickstart -content_template: templates/task +content_type: task weight: 40 --- -{{% capture overview %}} + This page shows how to use the `update-imported-docs` script to generate the Kubernetes reference documentation. The script automates the build setup and generates the reference documentation for a release. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "prerequisites-ref-docs.md" >}} -{{% /capture %}} -{{% capture steps %}} + + ## Getting the docs repository @@ -246,9 +247,10 @@ A few minutes after your pull request is merged, your updated reference topics will be visible in the [published documentation](/docs/home/). -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + To generate the individual reference documentation by manually setting up the required build repositories and running the build targets, see the following guides: @@ -257,4 +259,4 @@ running the build targets, see the following guides: * [Generating Reference Documentation for kubectl Commands](/docs/contribute/generate-ref-docs/kubectl/) * [Generating Reference Documentation for the Kubernetes API](/docs/contribute/generate-ref-docs/kubernetes-api/) -{{% /capture %}} + diff --git a/content/en/docs/contribute/localization.md b/content/en/docs/contribute/localization.md index cb3cf0318704e..0c698305b95a3 100644 --- a/content/en/docs/contribute/localization.md +++ b/content/en/docs/contribute/localization.md @@ -1,6 +1,6 @@ --- title: Localizing Kubernetes documentation -content_template: templates/concept +content_type: concept approvers: - remyleone - rlenferink @@ -12,13 +12,13 @@ card: title: Translating the docs --- -{{% capture overview %}} + This page shows you how to [localize](https://blog.mozilla.org/l10n/2011/12/14/i18n-vs-l10n-whats-the-diff/) the docs for a different language. -{{% /capture %}} -{{% capture body %}} + + ## Getting started @@ -279,13 +279,14 @@ SIG Docs welcomes upstream contributions and corrections to the English source. You can also help add or improve content to an existing localization. Join the [Slack channel](https://kubernetes.slack.com/messages/C1J0BPD2M/) for the localization, and start opening PRs to help. Please limit pull requests to a single localization since pull requests that change content in multiple localizations could be difficult to review. 
-{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + Once a localization meets requirements for workflow and minimum output, SIG docs will: - Enable language selection on the website - Publicize the localization's availability through [Cloud Native Computing Foundation](https://www.cncf.io/about/) (CNCF) channels, including the [Kubernetes blog](https://kubernetes.io/blog/). -{{% /capture %}} + diff --git a/content/en/docs/contribute/new-content/blogs-case-studies.md b/content/en/docs/contribute/new-content/blogs-case-studies.md index 90c50ae6e125b..76acbd2d410e4 100644 --- a/content/en/docs/contribute/new-content/blogs-case-studies.md +++ b/content/en/docs/contribute/new-content/blogs-case-studies.md @@ -2,19 +2,19 @@ title: Submitting blog posts and case studies linktitle: Blogs and case studies slug: blogs-case-studies -content_template: templates/concept +content_type: concept weight: 30 --- -{{% capture overview %}} + Anyone can write a blog post and submit it for review. Case studies require extensive review before they're approved. -{{% /capture %}} -{{% capture body %}} + + ## Write a blog post @@ -52,8 +52,9 @@ Have a look at the source for the Refer to the [case study guidelines](https://github.com/cncf/foundation/blob/master/case-study-guidelines.md) and submit your request as outlined in the guidelines. -{{% /capture %}} -{{% capture whatsnext %}} -{{% /capture %}} +## {{% heading "whatsnext" %}} + + + diff --git a/content/en/docs/contribute/new-content/new-features.md b/content/en/docs/contribute/new-content/new-features.md index 68087a2a797c0..54db84da8f6c8 100644 --- a/content/en/docs/contribute/new-content/new-features.md +++ b/content/en/docs/contribute/new-content/new-features.md @@ -1,7 +1,7 @@ --- title: Documenting a feature for a release linktitle: Documenting for a release -content_template: templates/concept +content_type: concept main_menu: true weight: 20 card: @@ -9,7 +9,7 @@ card: weight: 45 title: Documenting a feature for a release --- -{{% capture overview %}} + Each major Kubernetes release introduces new features that require documentation. New releases also bring updates to existing features and documentation (such as upgrading a feature from alpha to beta). @@ -19,9 +19,9 @@ feature as a pull request to the appropriate development branch of the editorial feedback or edits the draft directly. This section covers the branching conventions and process used during a release by both groups. -{{% /capture %}} -{{% capture body %}} + + ## For documentation contributors @@ -131,4 +131,3 @@ add it to [Alpha/Beta Feature gates](/docs/reference/command-line-tools-referenc as part of your pull request. If your feature is moving out of Alpha, make sure to remove it from that table. 
-{{% /capture %}}
\ No newline at end of file

diff --git a/content/en/docs/contribute/new-content/open-a-pr.md b/content/en/docs/contribute/new-content/open-a-pr.md
index 4407568aff09c..05a576a74b6f9 100644
--- a/content/en/docs/contribute/new-content/open-a-pr.md
+++ b/content/en/docs/contribute/new-content/open-a-pr.md
@@ -1,14 +1,14 @@
---
title: Opening a pull request
slug: new-content
-content_template: templates/concept
+content_type: concept
weight: 10
card:
  name: contribute
  weight: 40
---

-{{% capture overview %}}
+

{{< note >}}
**Code developers**: If you are documenting a new feature for an
@@ -22,9 +22,9 @@ If your change is small, or you're unfamiliar with git, read [Changes using GitH
If your changes are large, read [Work from a local fork](#fork-the-repo) to learn how to make changes locally on your computer.

-{{% /capture %}}

-{{% capture body %}}
+
+

## Changes using GitHub

@@ -475,10 +475,11 @@ Most repositories use issue and PR templates. Have a look through some open
issues and PRs to get a feel for that team's processes. Make sure to fill out
the templates with as much detail as possible when you file issues or PRs.

-{{% /capture %}}

-{{% capture whatsnext %}}
+
+## {{% heading "whatsnext" %}}
+

- Read [Reviewing](/docs/contribute/review/reviewing-prs) to learn more about the review process.

-{{% /capture %}}
+

diff --git a/content/en/docs/contribute/new-content/overview.md b/content/en/docs/contribute/new-content/overview.md
index 11f4c067d7522..cdb7174b2a07f 100644
--- a/content/en/docs/contribute/new-content/overview.md
+++ b/content/en/docs/contribute/new-content/overview.md
@@ -1,19 +1,19 @@
---
title: Contributing new content overview
linktitle: Overview
-content_template: templates/concept
+content_type: concept
main_menu: true
weight: 5
---

-{{% capture overview %}}
+

This section contains information you should know before contributing new content.

-{{% /capture %}}

-{{% capture body %}}
+
+

## Contributing basics

@@ -58,4 +58,4 @@ Limit pull requests to one language per PR. If you need to make an identical cha
The [doc contributors tools](https://github.com/kubernetes/website/tree/master/content/en/docs/doc-contributor-tools) directory in the `kubernetes/website` repository contains tools to help your contribution journey go more smoothly.

-{{% /capture %}}
+

diff --git a/content/en/docs/contribute/participating.md b/content/en/docs/contribute/participating.md
index 3f491dc856b5c..681c53f9940c9 100644
--- a/content/en/docs/contribute/participating.md
+++ b/content/en/docs/contribute/participating.md
@@ -1,13 +1,13 @@
---
title: Participating in SIG Docs
-content_template: templates/concept
+content_type: concept
weight: 60
card:
  name: contribute
  weight: 60
---

-{{% capture overview %}}
+

SIG Docs is one of the
[special interest groups](https://github.com/kubernetes/community/blob/master/sig-list.md)
@@ -30,9 +30,9 @@ The rest of this document outlines some unique ways these roles function within
SIG Docs, which is responsible for maintaining one of the most public-facing
aspects of Kubernetes -- the Kubernetes website and documentation.

-{{% /capture %}}

-{{% capture body %}}
+
+

## Roles and responsibilities

@@ -302,9 +302,10 @@ SIG Docs approvers. Here's how it works.
specific roles, such as
[PR Wrangler](/docs/contribute/advanced#be-the-pr-wrangler-for-a-week) or
[SIG Docs chairperson](#sig-docs-chairperson).
-{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + For more information about contributing to the Kubernetes documentation, see: @@ -312,4 +313,4 @@ For more information about contributing to the Kubernetes documentation, see: - [Reviewing content](/docs/contribute/review/reviewing-prs) - [Documentation style guide](/docs/contribute/style/) -{{% /capture %}} + diff --git a/content/en/docs/contribute/review/_index.md b/content/en/docs/contribute/review/_index.md index bc70e3c6f1b62..d2a1a5c9067c3 100644 --- a/content/en/docs/contribute/review/_index.md +++ b/content/en/docs/contribute/review/_index.md @@ -3,12 +3,12 @@ title: Reviewing changes weight: 30 --- -{{% capture overview %}} + This section describes how to review content. -{{% /capture %}} -{{% capture body %}} -{{% /capture %}} + + + diff --git a/content/en/docs/contribute/review/for-approvers.md b/content/en/docs/contribute/review/for-approvers.md index dccc6cfe38524..0cddbcba6aa5e 100644 --- a/content/en/docs/contribute/review/for-approvers.md +++ b/content/en/docs/contribute/review/for-approvers.md @@ -2,11 +2,11 @@ title: Reviewing for approvers and reviewers linktitle: For approvers and reviewers slug: for-approvers -content_template: templates/concept +content_type: concept weight: 20 --- -{{% capture overview %}} + SIG Docs [Reviewers](/docs/contribute/participating/#reviewers) and [Approvers](/docs/contribute/participating/#approvers) do a few extra things when reviewing a change. @@ -19,10 +19,10 @@ requests (PRs) that are not already under active review. In addition to the rotation, a bot assigns reviewers and approvers for the PR based on the owners for the affected files. -{{% /capture %}} -{{% capture body %}} + + ## Reviewing a PR @@ -224,4 +224,3 @@ If this is a documentation issue, please re-open this issue. ``` -{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/contribute/review/reviewing-prs.md b/content/en/docs/contribute/review/reviewing-prs.md index cb432a97ba510..11a56b17c85a1 100644 --- a/content/en/docs/contribute/review/reviewing-prs.md +++ b/content/en/docs/contribute/review/reviewing-prs.md @@ -1,11 +1,11 @@ --- title: Reviewing pull requests -content_template: templates/concept +content_type: concept main_menu: true weight: 10 --- -{{% capture overview %}} + Anyone can review a documentation pull request. Visit the [pull requests](https://github.com/kubernetes/website/pulls) section in the Kubernetes website repository to see open pull requests. @@ -19,9 +19,9 @@ Before reviewing, it's a good idea to: [style guide](/docs/contribute/style/style-guide/) so you can leave informed comments. - Understand the different [roles and responsibilities](/docs/contribute/participating/#roles-and-responsibilities) in the Kubernetes documentation community. -{{% /capture %}} -{{% capture body %}} + + ## Before you begin @@ -95,4 +95,3 @@ When reviewing, use the following as a starting point. For small issues with a PR, like typos or whitespace, prefix your comments with `nit:`. This lets the author know the issue is non-critical. 
-{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/contribute/style/content-guide.md b/content/en/docs/contribute/style/content-guide.md index b5d8ed5d02a4e..2f367c9a8102d 100644 --- a/content/en/docs/contribute/style/content-guide.md +++ b/content/en/docs/contribute/style/content-guide.md @@ -1,11 +1,11 @@ --- title: Documentation Content Guide linktitle: Content guide -content_template: templates/concept +content_type: concept weight: 10 --- -{{% capture overview %}} + This page contains guidelines for Kubernetes documentation. @@ -17,9 +17,9 @@ You can register for Kubernetes Slack at http://slack.k8s.io/. For information on creating new content for the Kubernetes docs, follow the [style guide](/docs/contribute/style/style-guide). -{{% /capture %}} -{{% capture body %}} + + ## Overview @@ -69,10 +69,11 @@ ask for help in [#sig-docs on Kubernetes Slack](https://kubernetes.slack.com/mes If you have questions about allowed content, join the [Kubernetes Slack](http://slack.k8s.io/) #sig-docs channel and ask! -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Read the [Style guide](/docs/contribute/style/style-guide). -{{% /capture %}} + diff --git a/content/en/docs/contribute/style/content-organization.md b/content/en/docs/contribute/style/content-organization.md index e93cf8126edc4..249bebf0fb7ec 100644 --- a/content/en/docs/contribute/style/content-organization.md +++ b/content/en/docs/contribute/style/content-organization.md @@ -1,17 +1,17 @@ --- title: Content organization -content_template: templates/concept +content_type: concept weight: 40 --- -{{% capture overview %}} + This site uses Hugo. In Hugo, [content organization](https://gohugo.io/content-management/organization/) is a core concept. -{{% /capture %}} -{{% capture body %}} + + {{% note %}} **Hugo Tip:** Start Hugo with `hugo server --navigateToChanged` for content edit-sessions. @@ -126,12 +126,13 @@ Some important notes to the files in the bundles: The [SASS](https://sass-lang.com/) source of the stylesheets for this site is stored in `assets/sass` and is automatically built by Hugo. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn about [custom Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/) * Learn about the [Style guide](/docs/contribute/style/style-guide) * Learn about the [Content guide](/docs/contribute/style/content-guide) -{{% /capture %}} + diff --git a/content/en/docs/contribute/style/hugo-shortcodes/index.md b/content/en/docs/contribute/style/hugo-shortcodes/index.md index 60479c7fec3eb..87033f15a5424 100644 --- a/content/en/docs/contribute/style/hugo-shortcodes/index.md +++ b/content/en/docs/contribute/style/hugo-shortcodes/index.md @@ -2,16 +2,16 @@ approvers: - chenopis title: Custom Hugo Shortcodes -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + This page explains the custom Hugo shortcodes that can be used in Kubernetes markdown documentation. Read more about shortcodes in the [Hugo documentation](https://gohugo.io/content-management/shortcodes). -{{% /capture %}} -{{% capture body %}} + + ## Feature state @@ -235,12 +235,13 @@ Renders to: {{< tab name="JSON File" include="podtemplate" />}} {{< /tabs >}} -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn about [Hugo](https://gohugo.io/). * Learn about [writing a new topic](/docs/home/contribute/write-new-topic/). 
* Learn about [using page templates](/docs/home/contribute/page-templates/). * Learn about [staging your changes](/docs/home/contribute/stage-documentation-changes/) * Learn about [creating a pull request](/docs/home/contribute/create-pull-request/). -{{% /capture %}} + diff --git a/content/en/docs/contribute/style/page-templates.md b/content/en/docs/contribute/style/page-templates.md index 7521ee3ecba1f..7c0616e1070be 100644 --- a/content/en/docs/contribute/style/page-templates.md +++ b/content/en/docs/contribute/style/page-templates.md @@ -1,13 +1,13 @@ --- title: Using Page Templates -content_template: templates/concept +content_type: concept weight: 30 card: name: contribute weight: 30 --- -{{% capture overview %}} + When contributing new topics, apply one of the following templates to them. This standardizes the user experience of a given page. @@ -24,10 +24,10 @@ template to use for a new topic, start with the {{< /note >}} -{{% /capture %}} -{{% capture body %}} + + ## Concept template @@ -41,7 +41,7 @@ tutorials. To write a new concept page, create a Markdown file in a subdirectory of the `/content/en/docs/concepts` directory, with the following characteristics: -- In the page's YAML front-matter, set `content_template: templates/concept`. +- In the page's YAML front-matter, set `content_type: concept`. - In the page's body, set the required `capture` variables and any optional ones you want to include: @@ -85,7 +85,7 @@ to conceptual topics that provide related background and knowledge. To write a new task page, create a Markdown file in a subdirectory of the `/content/en/docs/tasks` directory, with the following characteristics: -- In the page's YAML front-matter, set `content_template: templates/task`. +- In the page's YAML front-matter, set `content_type: task`. - In the page's body, set the required `capture` variables and any optional ones you want to include: @@ -150,7 +150,7 @@ for deep explanations. To write a new tutorial page, create a Markdown file in a subdirectory of the `/content/en/docs/tutorials` directory, with the following characteristics: -- In the page's YAML front-matter, set `content_template: templates/tutorial`. +- In the page's YAML front-matter, set `content_type: tutorial`. - In the page's body, set the required `capture` variables and any optional ones you want to include: @@ -211,12 +211,13 @@ To write a new tutorial page, create a Markdown file in a subdirectory of the An example of a published topic that uses the tutorial template is [Running a Stateless Application Using a Deployment](/docs/tutorials/stateless-application/run-stateless-application-deployment/). -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + - Learn about the [Style guide](/docs/contribute/style/style-guide/) - Learn about the [Content guide](/docs/contribute/style/content-guide/) - Learn about [content organization](/docs/contribute/style/content-organization/) -{{% /capture %}} + diff --git a/content/en/docs/contribute/style/style-guide.md b/content/en/docs/contribute/style/style-guide.md index 34dce6adaca38..64b2ec0705a2b 100644 --- a/content/en/docs/contribute/style/style-guide.md +++ b/content/en/docs/contribute/style/style-guide.md @@ -1,11 +1,11 @@ --- title: Documentation Style Guide linktitle: Style guide -content_template: templates/concept +content_type: concept weight: 10 --- -{{% capture overview %}} + This page gives writing style guidelines for the Kubernetes documentation. These are guidelines, not rules. 
Use your best judgment, and feel free to propose changes to this document in a pull request. @@ -18,9 +18,9 @@ Changes to the style guide are made by SIG Docs as a group. To propose a change or addition, [add it to the agenda](https://docs.google.com/document/d/1ddHwLK3kUMX1wVFIwlksjTk0MsqitBnWPe1LRa1Rx5A/edit) for an upcoming SIG Docs meeting, and attend the meeting to participate in the discussion. -{{% /capture %}} -{{% capture body %}} + + {{< note >}} Kubernetes documentation uses [Blackfriday Markdown Renderer](https://github.com/russross/blackfriday) along with a few [Hugo Shortcodes](/docs/home/contribute/includes/) to support glossary entries, tabs, @@ -585,13 +585,14 @@ The Federation feature provides ... | The new Federation feature provides ... {{< /table >}} -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn about [writing a new topic](/docs/contribute/style/write-new-topic/). * Learn about [using page templates](/docs/contribute/style/page-templates/). * Learn about [staging your changes](/docs/contribute/stage-documentation-changes/) * Learn about [creating a pull request](/docs/contribute/start/#submit-a-pull-request/). -{{% /capture %}} + diff --git a/content/en/docs/contribute/style/write-new-topic.md b/content/en/docs/contribute/style/write-new-topic.md index 65dca22f1aef1..a6b9e187a18a0 100644 --- a/content/en/docs/contribute/style/write-new-topic.md +++ b/content/en/docs/contribute/style/write-new-topic.md @@ -1,19 +1,20 @@ --- title: Writing a new topic -content_template: templates/task +content_type: task weight: 20 --- -{{% capture overview %}} + This page shows how to create a new topic for the Kubernetes docs. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + Create a fork of the Kubernetes documentation repository as described in [Open a PR](/docs/new-content/open-a-pr/). -{{% /capture %}} -{{% capture steps %}} + + ## Choosing a page type @@ -159,9 +160,10 @@ For an example of a topic that uses this technique, see Put image files in the `/images` directory. The preferred image format is SVG. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn about [using page templates](/docs/contribute/page-templates/). * Learn about [creating a pull request](/docs/contribute/new-content/open-a-pr/). -{{% /capture %}} + diff --git a/content/en/docs/contribute/suggesting-improvements.md b/content/en/docs/contribute/suggesting-improvements.md index 19133f379bd08..e48c2915b9a2c 100644 --- a/content/en/docs/contribute/suggesting-improvements.md +++ b/content/en/docs/contribute/suggesting-improvements.md @@ -1,14 +1,14 @@ --- title: Suggesting content improvements slug: suggest-improvements -content_template: templates/concept +content_type: concept weight: 10 card: name: contribute weight: 20 --- -{{% capture overview %}} + If you notice an issue with Kubernetes documentation, or have an idea for new content, then open an issue. All you need is a [GitHub account](https://github.com/join) and a web browser. @@ -16,9 +16,9 @@ In most cases, new work on Kubernetes documentation begins with an issue in GitH then review, categorize and tag issues as needed. Next, you or another member of the Kubernetes community open a pull request with changes to resolve the issue. -{{% /capture %}} -{{% capture body %}} + + ## Opening an issue @@ -62,4 +62,4 @@ Keep the following in mind when filing an issue: fellow contributors. 
For example, "The docs are terrible" is not helpful or polite feedback. -{{% /capture %}} + diff --git a/content/en/docs/home/supported-doc-versions.md b/content/en/docs/home/supported-doc-versions.md index 45a6012eaa148..bd368b2b54823 100644 --- a/content/en/docs/home/supported-doc-versions.md +++ b/content/en/docs/home/supported-doc-versions.md @@ -1,20 +1,20 @@ --- title: Supported Versions of the Kubernetes Documentation -content_template: templates/concept +content_type: concept card: name: about weight: 10 title: Supported Versions of the Documentation --- -{{% capture overview %}} + This website contains documentation for the current version of Kubernetes and the four previous versions of Kubernetes. -{{% /capture %}} -{{% capture body %}} + + ## Current version @@ -25,6 +25,6 @@ The current version is {{< versions-other >}} -{{% /capture %}} + diff --git a/content/en/docs/reference/_index.md b/content/en/docs/reference/_index.md index 8b0faf5e91f8a..619430875ec1a 100644 --- a/content/en/docs/reference/_index.md +++ b/content/en/docs/reference/_index.md @@ -5,16 +5,16 @@ approvers: linkTitle: "Reference" main_menu: true weight: 70 -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + This section of the Kubernetes documentation contains references. -{{% /capture %}} -{{% capture body %}} + + ## API Reference @@ -52,4 +52,4 @@ client libraries: An archive of the design docs for Kubernetes functionality. Good starting points are [Kubernetes Architecture](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md) and [Kubernetes Design Overview](https://git.k8s.io/community/contributors/design-proposals). -{{% /capture %}} + diff --git a/content/en/docs/reference/access-authn-authz/abac.md b/content/en/docs/reference/access-authn-authz/abac.md index 40c56a985caf1..3810942660c04 100644 --- a/content/en/docs/reference/access-authn-authz/abac.md +++ b/content/en/docs/reference/access-authn-authz/abac.md @@ -5,15 +5,15 @@ reviewers: - deads2k - liggitt title: Using ABAC Authorization -content_template: templates/concept +content_type: concept weight: 80 --- -{{% capture overview %}} + Attribute-based access control (ABAC) defines an access control paradigm whereby access rights are granted to users through the use of policies which combine attributes together. -{{% /capture %}} -{{% capture body %}} + + ## Policy File Format To enable `ABAC` mode, specify `--authorization-policy-file=SOME_FILENAME` and `--authorization-mode=ABAC` on startup. @@ -152,5 +152,5 @@ privilege to the API using ABAC, you would add this line to your policy file: The apiserver will need to be restarted to pickup the new policy lines. -{{% /capture %}} + diff --git a/content/en/docs/reference/access-authn-authz/admission-controllers.md b/content/en/docs/reference/access-authn-authz/admission-controllers.md index a254e43a84bb5..ffb71a601891f 100644 --- a/content/en/docs/reference/access-authn-authz/admission-controllers.md +++ b/content/en/docs/reference/access-authn-authz/admission-controllers.md @@ -7,15 +7,15 @@ reviewers: - janetkuo - thockin title: Using Admission Controllers -content_template: templates/concept +content_type: concept weight: 30 --- -{{% capture overview %}} + This page provides an overview of Admission Controllers. -{{% /capture %}} -{{% capture body %}} + + ## What are they? An admission controller is a piece of code that intercepts requests to the @@ -773,4 +773,4 @@ in the mutating phase. 
For earlier versions, there was no concept of validating versus mutating and the admission controllers ran in the exact order specified. -{{% /capture %}} + diff --git a/content/en/docs/reference/access-authn-authz/authentication.md b/content/en/docs/reference/access-authn-authz/authentication.md index b240fb4e223aa..8cb8013c76e4b 100644 --- a/content/en/docs/reference/access-authn-authz/authentication.md +++ b/content/en/docs/reference/access-authn-authz/authentication.md @@ -6,15 +6,15 @@ reviewers: - deads2k - liggitt title: Authenticating -content_template: templates/concept +content_type: concept weight: 10 --- -{{% capture overview %}} + This page provides an overview of authenticating. -{{% /capture %}} -{{% capture body %}} + + ## Users in Kubernetes All Kubernetes clusters have two categories of users: service accounts managed @@ -860,4 +860,4 @@ RFC3339 timestamp. Presence or absence of an expiry has the following impact: } } ``` -{{% /capture %}} + diff --git a/content/en/docs/reference/access-authn-authz/authorization.md b/content/en/docs/reference/access-authn-authz/authorization.md index 3a942266fcdc6..74c433b8eea2d 100644 --- a/content/en/docs/reference/access-authn-authz/authorization.md +++ b/content/en/docs/reference/access-authn-authz/authorization.md @@ -5,16 +5,16 @@ reviewers: - deads2k - liggitt title: Authorization Overview -content_template: templates/concept +content_type: concept weight: 60 --- -{{% capture overview %}} + Learn more about Kubernetes authorization, including details about creating policies using the supported authorization modules. -{{% /capture %}} -{{% capture body %}} + + In Kubernetes, you must be authenticated (logged in) before your request can be authorized (granted permission to access). For information about authentication, see [Controlling Access to the Kubernetes API](/docs/reference/access-authn-authz/controlling-access/). @@ -197,9 +197,10 @@ namespace can: read all secrets in the namespace; read all config maps in the namespace; and impersonate any service account in the namespace and take any action the account could take. This applies regardless of authorization mode. {{< /caution >}} -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * To learn more about Authentication, see **Authentication** in [Controlling Access to the Kubernetes API](/docs/reference/access-authn-authz/controlling-access/). * To learn more about Admission Control, see [Using Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/). -{{% /capture %}} + diff --git a/content/en/docs/reference/access-authn-authz/bootstrap-tokens.md b/content/en/docs/reference/access-authn-authz/bootstrap-tokens.md index c8c55c08d68a1..542b5267be852 100644 --- a/content/en/docs/reference/access-authn-authz/bootstrap-tokens.md +++ b/content/en/docs/reference/access-authn-authz/bootstrap-tokens.md @@ -2,11 +2,11 @@ reviewers: - jbeda title: Authenticating with Bootstrap Tokens -content_template: templates/concept +content_type: concept weight: 20 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.18" state="stable" >}} @@ -16,9 +16,9 @@ to support [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/), but can be u for users that wish to start clusters without `kubeadm`. It is also built to work, via RBAC policy, with the [Kubelet TLS Bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) system. 
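For orientation, a bootstrap token is stored as a Secret of type `bootstrap.kubernetes.io/token` in the `kube-system` namespace, named `bootstrap-token-<token-id>`. A minimal sketch, with placeholder token values:

```yaml
# Sketch of a bootstrap token Secret; the token-id/token-secret pair
# and the expiration timestamp are illustrative placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-07401b
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: "07401b"
  token-secret: "f395accd246ae52d"
  expiration: "2020-12-31T03:22:11Z"
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
```

The full token a joining node presents is the two parts joined with a dot, here `07401b.f395accd246ae52d`.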
-{{% /capture %}} -{{% capture body %}} + + ## Bootstrap Tokens Overview Bootstrap Tokens are defined with a specific type @@ -188,4 +188,4 @@ client relying on the signature to bootstrap TLS trust. Consult the [kubeadm implementation details](/docs/reference/setup-tools/kubeadm/implementation-details/) section for more information. -{{% /capture %}} + diff --git a/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md b/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md index 3e81215dd818b..fea62e545e414 100644 --- a/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md +++ b/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md @@ -4,11 +4,11 @@ reviewers: - mikedanese - munnerz title: Certificate Signing Requests -content_template: templates/concept +content_type: concept weight: 20 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.18" state="beta" >}} @@ -21,9 +21,9 @@ A CertificateSigningRequest (CSR) resource is used to request that a certificate by a denoted signer, after which the request may be approved or denied before finally being signed. -{{% /capture %}} -{{% capture body %}} + + ## Request signing process The _CertificateSigningRequest_ resource type allows a client to ask for an X.509 certificate @@ -317,9 +317,10 @@ subresource of the CSR to be signed. As part of this request, the `status.certificate` field should be set to contain the signed certificate. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Read [Manage TLS Certificates in a Cluster](https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/) * View the source code for the kube-controller-manager built in [signer](https://github.com/kubernetes/kubernetes/blob/32ec6c212ec9415f604ffc1f4c1f29b782968ff1/pkg/controller/certificates/signer/cfssl_signer.go) @@ -327,4 +328,4 @@ signed certificate. * For details of X.509 itself, refer to [RFC 5280](https://tools.ietf.org/html/rfc5280#section-3.1) section 3.1 * For information on the syntax of PKCS#10 certificate signing requests, refer to [RFC 2986](https://tools.ietf.org/html/rfc2986) -{{% /capture %}} + diff --git a/content/en/docs/reference/access-authn-authz/controlling-access.md b/content/en/docs/reference/access-authn-authz/controlling-access.md index 21c08447ffdaf..f83982c8c177e 100644 --- a/content/en/docs/reference/access-authn-authz/controlling-access.md +++ b/content/en/docs/reference/access-authn-authz/controlling-access.md @@ -3,15 +3,15 @@ reviewers: - erictune - lavalamp title: Controlling Access to the Kubernetes API -content_template: templates/concept +content_type: concept weight: 5 --- -{{% capture overview %}} + This page provides an overview of controlling access to the Kubernetes API. -{{% /capture %}} -{{% capture body %}} + + Users [access the API](/docs/tasks/access-application-cluster/access-cluster/) using `kubectl`, client libraries, or by making REST requests. Both human users and [Kubernetes service accounts](/docs/tasks/configure-pod-container/configure-service-account/) can be @@ -161,4 +161,4 @@ When the cluster is created by `kube-up.sh`, on Google Compute Engine (GCE), and on several other cloud providers, the API server serves on port 443. On GCE, a firewall rule is configured on the project to allow external HTTPS access to the API. Other cluster setup methods vary. 
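Returning to the CertificateSigningRequest flow described above: a client submits a CSR object whose `spec.request` field carries a base64-encoded PKCS#10 signing request. A rough sketch, assuming the v1beta1 API, with a placeholder request blob and name:

```yaml
# Sketch of a CSR object; <base64 PKCS#10 CSR> stands in for real data.
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: my-client-csr
spec:
  request: <base64 PKCS#10 CSR>
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
```

Once the request is approved and signed, the issued certificate appears in `status.certificate` on the same object.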
-{{% /capture %}} + diff --git a/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md b/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md index 40d00cddf4c12..718c9d11477ab 100644 --- a/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md +++ b/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md @@ -7,18 +7,17 @@ reviewers: - liggitt - jpbetz title: Dynamic Admission Control -content_template: templates/concept +content_type: concept weight: 40 --- -{{% capture overview %}} + In addition to [compiled-in admission plugins](/docs/reference/access-authn-authz/admission-controllers/), admission plugins can be developed as extensions and run as webhooks configured at runtime. This page describes how to build, configure, use, and monitor admission webhooks. -{{% /capture %}} -{{% capture body %}} + ## What are admission webhooks? Admission webhooks are HTTP callbacks that receive admission requests and do @@ -1589,4 +1588,4 @@ If your admission webhooks don't intend to modify the behavior of the Kubernetes plane, exclude the `kube-system` namespace from being intercepted using a [`namespaceSelector`](#matching-requests-namespaceselector). -{{% /capture %}} + diff --git a/content/en/docs/reference/access-authn-authz/node.md b/content/en/docs/reference/access-authn-authz/node.md index 6c0e2f3e9969f..439d97ff84691 100644 --- a/content/en/docs/reference/access-authn-authz/node.md +++ b/content/en/docs/reference/access-authn-authz/node.md @@ -5,15 +5,15 @@ reviewers: - liggitt - ericchiang title: Using Node Authorization -content_template: templates/concept +content_type: concept weight: 90 --- -{{% capture overview %}} + Node authorization is a special-purpose authorization mode that specifically authorizes API requests made by kubelets. -{{% /capture %}} -{{% capture body %}} + + ## Overview The Node authorizer allows a kubelet to perform API operations. This includes: @@ -96,4 +96,4 @@ In 1.8, the binding will not be created at all. When using RBAC, the `system:node` cluster role will continue to be created, for compatibility with deployment methods that bind other users or groups to that role. -{{% /capture %}} + diff --git a/content/en/docs/reference/access-authn-authz/rbac.md b/content/en/docs/reference/access-authn-authz/rbac.md index 15a4347fea0a1..20b1224e59e9f 100644 --- a/content/en/docs/reference/access-authn-authz/rbac.md +++ b/content/en/docs/reference/access-authn-authz/rbac.md @@ -4,17 +4,17 @@ reviewers: - deads2k - liggitt title: Using RBAC Authorization -content_template: templates/concept +content_type: concept aliases: [/rbac/] weight: 70 --- -{{% capture overview %}} + Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization. -{{% /capture %}} -{{% capture body %}} + + RBAC authorization uses the `rbac.authorization.k8s.io` {{< glossary_tooltip text="API group" term_id="api-group" >}} to drive authorization decisions, allowing you to dynamically configure policies through the Kubernetes API. @@ -1209,5 +1209,3 @@ kubectl create clusterrolebinding permissive-binding \ After you have transitioned to use RBAC, you should adjust the access controls for your cluster to ensure that these meet your information security needs. 
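To make the RBAC model concrete, a namespaced grant pairs a Role with a RoleBinding. A minimal sketch; the `pod-reader` Role, the `read-pods` binding, and the user `jane` are placeholder names:

```yaml
# Sketch: allow one user to read Pods in the "default" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]            # "" selects the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```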
- -{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/reference/access-authn-authz/service-accounts-admin.md b/content/en/docs/reference/access-authn-authz/service-accounts-admin.md index 5c2dd3ddc52f1..6d2cf765731c9 100644 --- a/content/en/docs/reference/access-authn-authz/service-accounts-admin.md +++ b/content/en/docs/reference/access-authn-authz/service-accounts-admin.md @@ -5,19 +5,19 @@ reviewers: - lavalamp - liggitt title: Managing Service Accounts -content_template: templates/concept +content_type: concept weight: 50 --- -{{% capture overview %}} + This is a Cluster Administrator guide to service accounts. It assumes knowledge of the [User Guide to Service Accounts](/docs/user-guide/service-accounts). Support for authorization and user accounts is planned but incomplete. Sometimes incomplete features are referred to in order to better describe service accounts. -{{% /capture %}} -{{% capture body %}} + + ## User accounts versus service accounts Kubernetes distinguishes between the concept of a user account and a service account @@ -115,4 +115,4 @@ kubectl delete secret mysecretname Service Account Controller manages ServiceAccount inside namespaces, and ensures a ServiceAccount named "default" exists in every active namespace. -{{% /capture %}} + diff --git a/content/en/docs/reference/access-authn-authz/webhook.md b/content/en/docs/reference/access-authn-authz/webhook.md index 3f667fa5eff48..cf5944d9a1f96 100644 --- a/content/en/docs/reference/access-authn-authz/webhook.md +++ b/content/en/docs/reference/access-authn-authz/webhook.md @@ -5,15 +5,15 @@ reviewers: - deads2k - liggitt title: Webhook Mode -content_template: templates/concept +content_type: concept weight: 95 --- -{{% capture overview %}} + A WebHook is an HTTP callback: an HTTP POST that occurs when something happens; a simple event-notification via HTTP POST. A web application implementing WebHooks will POST a message to a URL when certain things happen. -{{% /capture %}} -{{% capture body %}} + + When specified, mode `Webhook` causes Kubernetes to query an outside REST service when determining user privileges. @@ -174,6 +174,6 @@ to the REST api. For further documentation refer to the authorization.v1beta1 API objects and [webhook.go](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/staging/src/k8s.io/apiserver/plugin/pkg/authorizer/webhook/webhook.go). -{{% /capture %}} + diff --git a/content/en/docs/reference/command-line-tools-reference/cloud-controller-manager.md b/content/en/docs/reference/command-line-tools-reference/cloud-controller-manager.md index ee8d7f1ed14b4..7cafd5ba06352 100644 --- a/content/en/docs/reference/command-line-tools-reference/cloud-controller-manager.md +++ b/content/en/docs/reference/command-line-tools-reference/cloud-controller-manager.md @@ -4,7 +4,8 @@ content_template: templates/tool-reference weight: 30 --- -{{% capture synopsis %}} +## {{% heading "synopsis" %}} + The Cloud controller manager is a daemon that embeds @@ -14,9 +15,10 @@ the cloud specific control loops shipped with Kubernetes. 
cloud-controller-manager [flags] ``` -{{% /capture %}} -{{% capture options %}} + +## {{% heading "options" %}} + @@ -534,5 +536,5 @@ cloud-controller-manager [flags] -{{% /capture %}} + diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index 7716c7be7d2ff..78784eace7371 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -1,17 +1,17 @@ --- weight: 10 title: Feature Gates -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + This page contains an overview of the various feature gates an administrator can specify on different Kubernetes components. See [feature stages](#feature-stages) for an explanation of the stages for a feature. -{{% /capture %}} -{{% capture body %}} + + ## Overview Feature gates are a set of key=value pairs that describe Kubernetes features. @@ -511,8 +511,9 @@ Each feature gate is designed for enabling/disabling a specific feature: - `WinDSR`: Allows kube-proxy to create DSR loadbalancers for Windows. - `WinOverlay`: Allows kube-proxy to run in overlay mode for Windows. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * The [deprecation policy](/docs/reference/using-api/deprecation-policy/) for Kubernetes explains the project's approach to removing features and components. -{{% /capture %}} + diff --git a/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md b/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md index 6e9454dc492ae..05dbcf4c3c019 100644 --- a/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md +++ b/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md @@ -4,7 +4,8 @@ content_template: templates/tool-reference weight: 30 --- -{{% capture synopsis %}} +## {{% heading "synopsis" %}} + The Kubernetes API server validates and configures data @@ -16,9 +17,10 @@ cluster's shared state through which all other components interact. kube-apiserver [flags] ``` -{{% /capture %}} -{{% capture options %}} + +## {{% heading "options" %}} +
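Feature gates, described above as key=value pairs, are usually passed to a component as `--feature-gates=Gate1=true,Gate2=false`. For the kubelet they can also be set declaratively; a sketch assuming the `KubeletConfiguration` config-file API, where the gate named here is only an example:

```yaml
# Sketch: setting a feature gate via the kubelet config file instead
# of the --feature-gates command-line flag.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  RotateKubeletServerCertificate: true
```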
@@ -1082,5 +1084,5 @@ kube-apiserver [flags] -{{% /capture %}} + diff --git a/content/en/docs/reference/command-line-tools-reference/kube-controller-manager.md b/content/en/docs/reference/command-line-tools-reference/kube-controller-manager.md index 69be42b17e3fb..f25129d1873fc 100644 --- a/content/en/docs/reference/command-line-tools-reference/kube-controller-manager.md +++ b/content/en/docs/reference/command-line-tools-reference/kube-controller-manager.md @@ -4,7 +4,8 @@ content_template: templates/tool-reference weight: 30 --- -{{% capture synopsis %}} +## {{% heading "synopsis" %}} + The Kubernetes controller manager is a daemon that embeds @@ -20,9 +21,10 @@ controller, and serviceaccounts controller. kube-controller-manager [flags] ``` -{{% /capture %}} -{{% capture options %}} + +## {{% heading "options" %}} +
@@ -897,5 +899,5 @@ kube-controller-manager [flags] -{{% /capture %}} + diff --git a/content/en/docs/reference/command-line-tools-reference/kube-proxy.md b/content/en/docs/reference/command-line-tools-reference/kube-proxy.md index 1ad3f6ee15568..c888f2bfff8f9 100644 --- a/content/en/docs/reference/command-line-tools-reference/kube-proxy.md +++ b/content/en/docs/reference/command-line-tools-reference/kube-proxy.md @@ -4,7 +4,8 @@ content_template: templates/tool-reference weight: 30 --- -{{% capture synopsis %}} +## {{% heading "synopsis" %}} + The Kubernetes network proxy runs on each node. This @@ -19,9 +20,10 @@ with the apiserver API to configure the proxy. kube-proxy [flags] ``` -{{% /capture %}} -{{% capture options %}} + +## {{% heading "options" %}} +
@@ -336,5 +338,5 @@ kube-proxy [flags] -{{% /capture %}} + diff --git a/content/en/docs/reference/command-line-tools-reference/kube-scheduler.md b/content/en/docs/reference/command-line-tools-reference/kube-scheduler.md index f807c5d0241b3..e276535500929 100644 --- a/content/en/docs/reference/command-line-tools-reference/kube-scheduler.md +++ b/content/en/docs/reference/command-line-tools-reference/kube-scheduler.md @@ -4,7 +4,8 @@ content_template: templates/tool-reference weight: 30 --- -{{% capture synopsis %}} +## {{% heading "synopsis" %}} + The Kubernetes scheduler is a policy-rich, topology-aware, @@ -20,9 +21,10 @@ for more information about scheduling and the kube-scheduler component. kube-scheduler [flags] ``` -{{% /capture %}} -{{% capture options %}} + +## {{% heading "options" %}} +
@@ -512,5 +514,5 @@ kube-scheduler [flags] -{{% /capture %}} + diff --git a/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md b/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md index 6269a3ec5ae07..562ec5b867cac 100644 --- a/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md +++ b/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md @@ -5,10 +5,10 @@ reviewers: - smarterclayton - awly title: TLS bootstrapping -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + In a Kubernetes cluster, the components on the worker nodes - kubelet and kube-proxy - need to communicate with Kubernetes master components, specifically kube-apiserver. In order to ensure that communication is kept private, not interfered with, and ensure that each component of the cluster is talking to another trusted component, we strongly @@ -24,9 +24,9 @@ found [here](https://github.com/kubernetes/kubernetes/pull/20439). This document describes the process of node initialization, how to set up TLS client certificate bootstrapping for kubelets, and how it works. -{{% /capture %}} -{{% capture body %}} + + ## Initialization Process When a worker node starts up, the kubelet does the following: @@ -454,4 +454,4 @@ An issue is open referencing this [here](https://github.com/kubernetes/kubernete -{{% /capture %}} + diff --git a/content/en/docs/reference/command-line-tools-reference/kubelet.md b/content/en/docs/reference/command-line-tools-reference/kubelet.md index 595ef138fc50f..28f13458f1be6 100644 --- a/content/en/docs/reference/command-line-tools-reference/kubelet.md +++ b/content/en/docs/reference/command-line-tools-reference/kubelet.md @@ -4,7 +4,8 @@ content_template: templates/tool-reference weight: 28 --- -{{% capture synopsis %}} +## {{% heading "synopsis" %}} + The kubelet is the primary "node agent" that runs on each node. It can register the node with the apiserver using one of: the hostname; a flag to override the hostname; or specific logic for a cloud provider. @@ -24,10 +25,11 @@ HTTP server: The kubelet can also listen for HTTP and respond to a simple API (u kubelet [flags] ``` -{{% /capture %}} -{{% capture options %}} + +## {{% heading "options" %}} +
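The TLS bootstrapping flow above hinges on a bootstrap kubeconfig that the kubelet is pointed at (for example via `--bootstrap-kubeconfig`) and that carries a bootstrap token. A minimal sketch; the server address, CA path, and token value are placeholders:

```yaml
# Sketch of a kubelet bootstrap kubeconfig with placeholder values.
apiVersion: v1
kind: Config
clusters:
- name: bootstrap
  cluster:
    certificate-authority: /var/lib/kubernetes/ca.pem
    server: https://203.0.113.10:6443
users:
- name: kubelet-bootstrap
  user:
    token: 07401b.f395accd246ae52d
contexts:
- name: bootstrap
  context:
    cluster: bootstrap
    user: kubelet-bootstrap
current-context: bootstrap
```

Using the token for initial authentication, the kubelet submits a CSR and then switches to the signed client certificate for all further API traffic.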
@@ -1265,4 +1267,4 @@ kubelet [flags]
-{{% /capture %}} + diff --git a/content/en/docs/reference/issues-security/security.md b/content/en/docs/reference/issues-security/security.md index d162e5c18cb5e..b9b1ce7c3719f 100644 --- a/content/en/docs/reference/issues-security/security.md +++ b/content/en/docs/reference/issues-security/security.md @@ -6,15 +6,15 @@ reviewers: - erictune - philips - jessfraz -content_template: templates/concept +content_type: concept weight: 20 --- -{{% capture overview %}} + This page describes Kubernetes security and disclosure information. -{{% /capture %}} -{{% capture body %}} + + ## Security Announcements Join the [kubernetes-security-announce](https://groups.google.com/forum/#!forum/kubernetes-security-announce) group for emails about security and major API announcements. @@ -56,4 +56,4 @@ As the security issue moves from triage, to identified fix, to release planning ## Public Disclosure Timing A public disclosure date is negotiated by the Kubernetes Product Security Committee and the bug submitter. We prefer to fully disclose the bug as soon as possible once a user mitigation is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for vendor coordination. The timeframe for disclosure is from immediate (especially if it's already publicly known) to a few weeks. For a vulnerability with a straightforward mitigation, we expect report date to disclosure date to be on the order of 7 days. The Kubernetes Product Security Committee holds the final say when setting a disclosure date. -{{% /capture %}} + diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md index dbf0e2dc63121..23d074456c971 100644 --- a/content/en/docs/reference/kubectl/cheatsheet.md +++ b/content/en/docs/reference/kubectl/cheatsheet.md @@ -4,21 +4,21 @@ reviewers: - erictune - krousey - clove -content_template: templates/concept +content_type: concept card: name: reference weight: 30 --- -{{% capture overview %}} + See also: [Kubectl Overview](/docs/reference/kubectl/overview/) and [JsonPath Guide](/docs/reference/kubectl/jsonpath). This page is an overview of the `kubectl` command. -{{% /capture %}} -{{% capture body %}} + + # kubectl - Cheat Sheet @@ -382,9 +382,10 @@ Verbosity | Description `--v=8` | Display HTTP request contents. `--v=9` | Display HTTP request contents without truncation of contents. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [Overview of kubectl](/docs/reference/kubectl/overview/). @@ -394,4 +395,4 @@ Verbosity | Description * See more community [kubectl cheatsheets](https://github.com/dennyzhang/cheatsheet-kubernetes-A4). -{{% /capture %}} + diff --git a/content/en/docs/reference/kubectl/conventions.md b/content/en/docs/reference/kubectl/conventions.md index c4bdd59ec5f3e..062847c4856ec 100644 --- a/content/en/docs/reference/kubectl/conventions.md +++ b/content/en/docs/reference/kubectl/conventions.md @@ -2,14 +2,14 @@ title: kubectl Usage Conventions reviewers: - janetkuo -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + Recommended usage conventions for `kubectl`. -{{% /capture %}} -{{% capture body %}} + + ## Using `kubectl` in Reusable Scripts @@ -59,4 +59,4 @@ You can generate the following resources with a kubectl command, `kubectl create * You can use `kubectl apply` to create or update resources. 
For more information about using kubectl apply to update resources, see [Kubectl Book](https://kubectl.docs.kubernetes.io). -{{% /capture %}} + diff --git a/content/en/docs/reference/kubectl/docker-cli-to-kubectl.md b/content/en/docs/reference/kubectl/docker-cli-to-kubectl.md index 7def04e04c13d..9cda6064aab76 100644 --- a/content/en/docs/reference/kubectl/docker-cli-to-kubectl.md +++ b/content/en/docs/reference/kubectl/docker-cli-to-kubectl.md @@ -1,16 +1,16 @@ --- title: kubectl for Docker Users -content_template: templates/concept +content_type: concept reviewers: - brendandburns - thockin --- -{{% capture overview %}} + You can use the Kubernetes command line tool kubectl to interact with the API Server. Using kubectl is straightforward if you are familiar with the Docker command line tool. However, there are a few differences between the docker commands and the kubectl commands. The following sections show a docker sub-command and describe the equivalent kubectl command. -{{% /capture %}} -{{% capture body %}} + + ## docker run To run an nginx Deployment and expose the Deployment, see [kubectl run](/docs/reference/generated/kubectl/kubectl-commands/#run). @@ -361,4 +361,4 @@ Grafana is running at https://203.0.113.141/api/v1/namespaces/kube-system/servic Heapster is running at https://203.0.113.141/api/v1/namespaces/kube-system/services/monitoring-heapster/proxy InfluxDB is running at https://203.0.113.141/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy ``` -{{% /capture %}} + diff --git a/content/en/docs/reference/kubectl/jsonpath.md b/content/en/docs/reference/kubectl/jsonpath.md index 731af0004e09f..50c051c9f4cff 100644 --- a/content/en/docs/reference/kubectl/jsonpath.md +++ b/content/en/docs/reference/kubectl/jsonpath.md @@ -1,14 +1,14 @@ --- title: JSONPath Support -content_template: templates/concept +content_type: concept weight: 25 --- -{{% capture overview %}} + Kubectl supports JSONPath template. -{{% /capture %}} -{{% capture body %}} + + JSONPath template is composed of JSONPath expressions enclosed by curly braces {}. Kubectl uses JSONPath expressions to filter on specific fields in the JSON object and format the output. @@ -98,4 +98,4 @@ kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.name}{\"\t\"}{.status. ``` {{< /note >}} -{{% /capture %}} + diff --git a/content/en/docs/reference/kubectl/kubectl.md b/content/en/docs/reference/kubectl/kubectl.md index 6342de0008d5c..f7e9a0f9347b5 100644 --- a/content/en/docs/reference/kubectl/kubectl.md +++ b/content/en/docs/reference/kubectl/kubectl.md @@ -4,7 +4,8 @@ content_template: templates/tool-reference weight: 30 --- -{{% capture synopsis %}} +## {{% heading "synopsis" %}} + kubectl controls the Kubernetes cluster manager. @@ -15,9 +16,10 @@ kubectl controls the Kubernetes cluster manager. 
kubectl [flags] ``` -{{% /capture %}} -{{% capture options %}} + +## {{% heading "options" %}} + @@ -521,9 +523,10 @@ kubectl [flags] -{{% /capture %}} -{{% capture seealso %}} + +## {{% heading "seealso" %}} + * [kubectl alpha](/docs/reference/generated/kubectl/kubectl-commands#alpha) - Commands for features in alpha * [kubectl annotate](/docs/reference/generated/kubectl/kubectl-commands#annotate) - Update the annotations on a resource @@ -569,5 +572,5 @@ kubectl [flags] * [kubectl version](/docs/reference/generated/kubectl/kubectl-commands#version) - Print the client and server version information * [kubectl wait](/docs/reference/generated/kubectl/kubectl-commands#wait) - Experimental: Wait for a specific condition on one or many resources. -{{% /capture %}} + diff --git a/content/en/docs/reference/kubectl/overview.md b/content/en/docs/reference/kubectl/overview.md index 556842d4fd2b3..84e2272dca92c 100644 --- a/content/en/docs/reference/kubectl/overview.md +++ b/content/en/docs/reference/kubectl/overview.md @@ -2,21 +2,21 @@ reviewers: - hw-qiaolei title: Overview of kubectl -content_template: templates/concept +content_type: concept weight: 20 card: name: reference weight: 20 --- -{{% capture overview %}} + Kubectl is a command line tool for controlling Kubernetes clusters. `kubectl` looks for a file named config in the $HOME/.kube directory. You can specify other [kubeconfig](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) files by setting the KUBECONFIG environment variable or by setting the [`--kubeconfig`](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) flag. This overview covers `kubectl` syntax, describes the command operations, and provides common examples. For details about each command, including all the supported flags and subcommands, see the [kubectl](/docs/reference/generated/kubectl/kubectl-commands/) reference documentation. For installation instructions see [installing kubectl](/docs/tasks/kubectl/install/). -{{% /capture %}} -{{% capture body %}} + + ## Syntax @@ -488,10 +488,11 @@ Current user: plugins-user To find out more about plugins, take a look at the [example cli plugin](https://github.com/kubernetes/sample-cli-plugin). -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + Start using the [kubectl](/docs/reference/generated/kubectl/kubectl-commands/) commands. -{{% /capture %}} + diff --git a/content/en/docs/reference/kubernetes-api/labels-annotations-taints.md b/content/en/docs/reference/kubernetes-api/labels-annotations-taints.md index e1f1e9a8012aa..d1faa51a8876b 100644 --- a/content/en/docs/reference/kubernetes-api/labels-annotations-taints.md +++ b/content/en/docs/reference/kubernetes-api/labels-annotations-taints.md @@ -1,18 +1,18 @@ --- title: Well-Known Labels, Annotations and Taints -content_template: templates/concept +content_type: concept weight: 60 --- -{{% capture overview %}} + Kubernetes reserves all labels and annotations in the kubernetes.io namespace. This document serves both as a reference to the values and as a coordination point for assigning values. -{{% /capture %}} -{{% capture body %}} + + ## kubernetes.io/arch @@ -130,4 +130,4 @@ If `PersistentVolumeLabel` does not support automatic labeling of your Persisten adding the labels manually (or adding support for `PersistentVolumeLabel`). With `PersistentVolumeLabel`, the scheduler prevents Pods from mounting volumes in a different zone. 
If your infrastructure doesn't have this constraint, you don't need to add the zone labels to the volumes at all. -{{% /capture %}} + diff --git a/content/en/docs/reference/scheduling/policies.md b/content/en/docs/reference/scheduling/policies.md index 0bf6e030b031f..67d34e59f78e0 100644 --- a/content/en/docs/reference/scheduling/policies.md +++ b/content/en/docs/reference/scheduling/policies.md @@ -1,10 +1,10 @@ --- title: Scheduling Policies -content_template: templates/concept +content_type: concept weight: 10 --- -{{% capture overview %}} + A scheduling Policy can be used to specify the *predicates* and *priorities* that the {{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}} @@ -16,9 +16,9 @@ You can set a scheduling policy by running `kube-scheduler --policy-configmap ` and using the [Policy type](https://pkg.go.dev/k8s.io/kube-scheduler@v0.18.0/config/v1?tab=doc#Policy). -{{% /capture %}} -{{% capture body %}} + + ## Predicates @@ -117,9 +117,10 @@ The following *priorities* implement scoring: - `EvenPodsSpreadPriority`: Implements preferred [pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/). -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn about [scheduling](/docs/concepts/scheduling-eviction/kube-scheduler/) * Learn about [kube-scheduler profiles](/docs/reference/scheduling/profiles/) -{{% /capture %}} + diff --git a/content/en/docs/reference/scheduling/profiles.md b/content/en/docs/reference/scheduling/profiles.md index 48fa961b2e73c..fe28d10bd1dab 100644 --- a/content/en/docs/reference/scheduling/profiles.md +++ b/content/en/docs/reference/scheduling/profiles.md @@ -1,10 +1,10 @@ --- title: Scheduling Profiles -content_template: templates/concept +content_type: concept weight: 20 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.18" state="alpha" >}} @@ -20,9 +20,9 @@ or [`v1alpha2`](https://pkg.go.dev/k8s.io/kube-scheduler@{{< param "fullversion" The `v1alpha2` API allows you to configure kube-scheduler to run [multiple profiles](#multiple-profiles). -{{% /capture %}} -{{% capture body %}} + + ## Extension points @@ -174,8 +174,9 @@ the same configuration parameters (if applicable). This is because the scheduler only has one pending pods queue. {{< /note >}} -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn about [scheduling](/docs/concepts/scheduling-eviction/kube-scheduler/) -{{% /capture %}} + diff --git a/content/en/docs/reference/setup-tools/kubeadm/implementation-details.md b/content/en/docs/reference/setup-tools/kubeadm/implementation-details.md index a0aa3042174a2..cb42a34df99d0 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/implementation-details.md +++ b/content/en/docs/reference/setup-tools/kubeadm/implementation-details.md @@ -3,10 +3,10 @@ reviewers: - luxas - jbeda title: Implementation details -content_template: templates/concept +content_type: concept weight: 100 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.10" state="stable" >}} @@ -14,9 +14,9 @@ weight: 100 However, it might not be obvious _how_ kubeadm does that. This document provides additional details on what happen under the hood, with the aim of sharing knowledge on Kubernetes cluster best practices. -{{% /capture %}} -{{% capture body %}} + + ## Core design principles The cluster that `kubeadm init` and `kubeadm join` set up should be: @@ -531,4 +531,4 @@ Please note that: 1. 
To make dynamic kubelet configuration work, flag `--dynamic-config-dir=/var/lib/kubelet/config/dynamic` should be specified in `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` -{{% /capture %}} + diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-config.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-config.md index c918cd55807d2..a4b0e501d8bfb 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-config.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-config.md @@ -3,10 +3,10 @@ reviewers: - luxas - jbeda title: kubeadm config -content_template: templates/concept +content_type: concept weight: 50 --- -{{% capture overview %}} + During `kubeadm init`, kubeadm uploads the `ClusterConfiguration` object to your cluster in a ConfigMap called `kubeadm-config` in the `kube-system` namespace. This configuration is then read during `kubeadm join`, `kubeadm reset` and `kubeadm upgrade`. To view this ConfigMap call `kubeadm config view`. @@ -19,9 +19,9 @@ In Kubernetes v1.13.0 and later to list/pull kube-dns images instead of the Core the `--config` method described [here](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-addon) has to be used. -{{% /capture %}} -{{% capture body %}} + + ## kubeadm config view {#cmd-config-view} {{< include "generated/kubeadm_config_view.md" >}} @@ -40,8 +40,9 @@ has to be used. ## kubeadm config images pull {#cmd-config-images-pull} {{< include "generated/kubeadm_config_images_pull.md" >}} -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [kubeadm upgrade](/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/) to upgrade a Kubernetes cluster to a newer version -{{% /capture %}} + diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md index 7103b39d42fce..54729065c6415 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md @@ -3,14 +3,14 @@ reviewers: - luxas - jbeda title: kubeadm init -content_template: templates/concept +content_type: concept weight: 20 --- -{{% capture overview %}} + This command initializes a Kubernetes control-plane node. -{{% /capture %}} -{{% capture body %}} + + {{< include "generated/kubeadm_init.md" >}} @@ -255,12 +255,13 @@ it does not allow the root CA hash to be validated with `--discovery-token-ca-cert-hash` (since it's not generated when the nodes are provisioned). For details, see the [kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join/). 
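For orientation, the `ClusterConfiguration` that `kubeadm init` accepts via `--config`, and later uploads to the `kubeadm-config` ConfigMap, is a small YAML document. A sketch in which every field value is a placeholder:

```yaml
# Sketch of a minimal ClusterConfiguration for kubeadm init --config.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
controlPlaneEndpoint: "203.0.113.10:6443"
networking:
  podSubnet: "10.244.0.0/16"
```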
-{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [kubeadm init phase](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) to understand more about `kubeadm init` phases * [kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join/) to bootstrap a Kubernetes worker node and join it to the cluster * [kubeadm upgrade](/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/) to upgrade a Kubernetes cluster to a newer version * [kubeadm reset](/docs/reference/setup-tools/kubeadm/kubeadm-reset/) to revert any changes made to this host by `kubeadm init` or `kubeadm join` -{{% /capture %}} + diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md index 1e99d1682bdd7..abceaf5f70a4d 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md @@ -3,14 +3,14 @@ reviewers: - luxas - jbeda title: kubeadm join -content_template: templates/concept +content_type: concept weight: 30 --- -{{% capture overview %}} + This command initializes a Kubernetes worker node and joins it to the cluster. -{{% /capture %}} -{{% capture body %}} + + {{< include "generated/kubeadm_join.md" >}} ### The join workflow {#join-workflow} @@ -276,10 +276,11 @@ kubeadm config print join-defaults For details on individual fields in `JoinConfiguration` see [the godoc](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#JoinConfiguration). -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [kubeadm init](/docs/reference/setup-tools/kubeadm/kubeadm-init/) to bootstrap a Kubernetes control-plane node * [kubeadm token](/docs/reference/setup-tools/kubeadm/kubeadm-token/) to manage tokens for `kubeadm join` * [kubeadm reset](/docs/reference/setup-tools/kubeadm/kubeadm-reset/) to revert any changes made to this host by `kubeadm init` or `kubeadm join` -{{% /capture %}} + diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-reset.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-reset.md index 7185a514757cb..2664283daa6c9 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-reset.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-reset.md @@ -3,14 +3,14 @@ reviewers: - luxas - jbeda title: kubeadm reset -content_template: templates/concept +content_type: concept weight: 60 --- -{{% capture overview %}} + Performs a best effort revert of changes made by `kubeadm init` or `kubeadm join`. -{{% /capture %}} -{{% capture body %}} + + {{< include "generated/kubeadm_reset.md" >}} ### Reset workflow {#reset-workflow} @@ -35,9 +35,10 @@ etcdctl del "" --prefix ``` See the [etcd documentation](https://github.com/coreos/etcd/tree/master/etcdctl) for more information. 
-{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [kubeadm init](/docs/reference/setup-tools/kubeadm/kubeadm-init/) to bootstrap a Kubernetes control-plane node * [kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join/) to bootstrap a Kubernetes worker node and join it to the cluster -{{% /capture %}} + diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-token.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-token.md index a8e9c7cd995ad..92a187bb926e8 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-token.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-token.md @@ -3,10 +3,10 @@ reviewers: - luxas - jbeda title: kubeadm token -content_template: templates/concept +content_type: concept weight: 70 --- -{{% capture overview %}} + Bootstrap tokens are used for establishing bidirectional trust between a node joining the cluster and a control-plane node, as described in [authenticating with bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/). @@ -14,9 +14,9 @@ the cluster and a control-plane node, as described in [authenticating with boots `kubeadm init` creates an initial token with a 24-hour TTL. The following commands allow you to manage such a token and also to create and manage new ones. -{{% /capture %}} -{{% capture body %}} + + ## kubeadm token create {#cmd-token-create} {{< include "generated/kubeadm_token_create.md" >}} @@ -28,8 +28,9 @@ such a token and also to create and manage new ones. ## kubeadm token list {#cmd-token-list} {{< include "generated/kubeadm_token_list.md" >}} -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join/) to bootstrap a Kubernetes worker node and join it to the cluster -{{% /capture %}} + diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-upgrade.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-upgrade.md index 31c2f11d9c0c4..71483aa1d6b49 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-upgrade.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-upgrade.md @@ -3,15 +3,15 @@ reviewers: - luxas - jbeda title: kubeadm upgrade -content_template: templates/concept +content_type: concept weight: 40 --- -{{% capture overview %}} + `kubeadm upgrade` is a user-friendly command that wraps complex upgrading logic behind one command, with support for both planning an upgrade and actually performing it. -{{% /capture %}} -{{% capture body %}} + + ## kubeadm upgrade guidance @@ -46,8 +46,9 @@ reports of unexpected results. 
## kubeadm upgrade node {#cmd-upgrade-node} {{< include "generated/kubeadm_upgrade_node.md" >}} -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [kubeadm config](/docs/reference/setup-tools/kubeadm/kubeadm-config/) if you initialized your cluster using kubeadm v1.7.x or lower, to configure your cluster for `kubeadm upgrade` -{{% /capture %}} + diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-version.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-version.md index 5da4209f3e33f..a4b57e796c224 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-version.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-version.md @@ -3,13 +3,13 @@ reviewers: - luxas - jbeda title: kubeadm version -content_template: templates/concept +content_type: concept weight: 80 --- -{{% capture overview %}} + This command prints the version of kubeadm. -{{% /capture %}} -{{% capture body %}} + + {{< include "generated/kubeadm_version.md" >}} -{{% /capture %}} + diff --git a/content/en/docs/reference/tools.md b/content/en/docs/reference/tools.md index 349ce58f2c372..ef210f2b07ad6 100644 --- a/content/en/docs/reference/tools.md +++ b/content/en/docs/reference/tools.md @@ -2,14 +2,14 @@ reviewers: - janetkuo title: Tools -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + Kubernetes contains several built-in tools to help you work with the Kubernetes system. -{{% /capture %}} -{{% capture body %}} + + ## Kubectl [`kubectl`](/docs/tasks/tools/install-kubectl/) is the command line tool for Kubernetes. It controls the Kubernetes cluster manager. @@ -51,4 +51,4 @@ Use Kompose to: * Translate a Docker Compose file into Kubernetes objects * Go from local Docker development to managing your application via Kubernetes * Convert v1 or v2 Docker Compose `yaml` files or [Distributed Application Bundles](https://docs.docker.com/compose/bundles/) -{{% /capture %}} + diff --git a/content/en/docs/reference/using-api/api-concepts.md b/content/en/docs/reference/using-api/api-concepts.md index 0c3f1b23415c0..f83c43c00f9f3 100644 --- a/content/en/docs/reference/using-api/api-concepts.md +++ b/content/en/docs/reference/using-api/api-concepts.md @@ -4,15 +4,15 @@ reviewers: - smarterclayton - lavalamp - liggitt -content_template: templates/concept +content_type: concept weight: 20 --- -{{% capture overview %}} + This page describes common concepts in the Kubernetes API. -{{% /capture %}} -{{% capture body %}} + + The Kubernetes API is a resource-based (RESTful) programmatic interface provided via HTTP. It supports retrieving, creating, updating, and deleting primary resources via the standard HTTP verbs (POST, PUT, PATCH, DELETE, GET), includes additional subresources for many objects that allow fine grained authorization (such as binding a pod to a node), and can accept and serve those resources in different representations for convenience or efficiency. It also supports efficient change notifications on resources via "watches" and consistent lists to allow other components to effectively cache and synchronize the state of resources. 
diff --git a/content/en/docs/reference/using-api/api-overview.md b/content/en/docs/reference/using-api/api-overview.md index 3820085e6bb35..cfba8b9f19a69 100644 --- a/content/en/docs/reference/using-api/api-overview.md +++ b/content/en/docs/reference/using-api/api-overview.md @@ -4,7 +4,7 @@ reviewers: - erictune - lavalamp - jbeda -content_template: templates/concept +content_type: concept weight: 10 card: name: reference @@ -12,11 +12,11 @@ card: title: Overview of API --- -{{% capture overview %}} + This page provides an overview of the Kubernetes API. -{{% /capture %}} -{{% capture body %}} + + The REST API is the fundamental fabric of Kubernetes. All operations and communications between components, and external user commands are REST API calls that the API Server handles. Consequently, everything in the Kubernetes platform is treated as an API object and has a corresponding entry in the [API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/). diff --git a/content/en/docs/reference/using-api/client-libraries.md b/content/en/docs/reference/using-api/client-libraries.md index 0d8af9394d07b..1531b2c5df982 100644 --- a/content/en/docs/reference/using-api/client-libraries.md +++ b/content/en/docs/reference/using-api/client-libraries.md @@ -2,16 +2,16 @@ title: Client Libraries reviewers: - ahmetb -content_template: templates/concept +content_type: concept weight: 30 --- -{{% capture overview %}} + This page contains an overview of the client libraries for using the Kubernetes API from various programming languages. -{{% /capture %}} -{{% capture body %}} + + To write applications using the [Kubernetes REST API](/docs/reference/using-api/api-overview/), you do not need to implement the API calls and request/response types yourself. You can use a client library for the programming language you are using. @@ -75,6 +75,6 @@ their authors, not the Kubernetes team. | DotNet (RestSharp) | [github.com/masroorhasan/Kubernetes.DotNet](https://github.com/masroorhasan/Kubernetes.DotNet) | | Elixir | [github.com/obmarg/kazan](https://github.com/obmarg/kazan/) | | Elixir | [github.com/coryodaniel/k8s](https://github.com/coryodaniel/k8s) | -{{% /capture %}} + diff --git a/content/en/docs/reference/using-api/deprecation-policy.md b/content/en/docs/reference/using-api/deprecation-policy.md index f55438cd18c1e..a21d0887ba758 100644 --- a/content/en/docs/reference/using-api/deprecation-policy.md +++ b/content/en/docs/reference/using-api/deprecation-policy.md @@ -4,15 +4,15 @@ reviewers: - lavalamp - thockin title: Kubernetes Deprecation Policy -content_template: templates/concept +content_type: concept weight: 40 --- -{{% capture overview %}} + This document details the deprecation policy for various facets of the system. -{{% /capture %}} -{{% capture body %}} + + Kubernetes is a large system with many components and many contributors. As with any such software, the feature set naturally evolves over time, and sometimes a feature may need to be removed. This could include an API, a flag, @@ -425,4 +425,4 @@ leaders to find the best solutions for those specific cases, always bearing in mind that Kubernetes is committed to being a stable system that, as much as possible, never breaks users. Exceptions will always be announced in all relevant release notes. 
-{{% /capture %}} + diff --git a/content/en/docs/setup/_index.md b/content/en/docs/setup/_index.md index 16702b40f5ad9..91b734953c31d 100644 --- a/content/en/docs/setup/_index.md +++ b/content/en/docs/setup/_index.md @@ -7,7 +7,7 @@ no_issue: true title: Getting started main_menu: true weight: 20 -content_template: templates/concept +content_type: concept card: name: setup weight: 20 @@ -18,7 +18,7 @@ card: title: Production environment --- -{{% capture overview %}} + This section covers different options to set up and run Kubernetes. @@ -28,9 +28,9 @@ You can deploy a Kubernetes cluster on a local machine, cloud, on-prem datacente More simply, you can create a Kubernetes cluster in learning and production environments. -{{% /capture %}} -{{% capture body %}} + + ## Learning environment @@ -51,4 +51,4 @@ When evaluating a solution for a production environment, consider which aspects [Kubernetes Partners](https://kubernetes.io/partners/#conformance) includes a list of [Certified Kubernetes](https://github.com/cncf/k8s-conformance/#certified-kubernetes) providers. -{{% /capture %}} + diff --git a/content/en/docs/setup/best-practices/certificates.md b/content/en/docs/setup/best-practices/certificates.md index 6169b3f87266a..ce7939bc4dd5f 100644 --- a/content/en/docs/setup/best-practices/certificates.md +++ b/content/en/docs/setup/best-practices/certificates.md @@ -2,20 +2,20 @@ title: PKI certificates and requirements reviewers: - sig-cluster-lifecycle -content_template: templates/concept +content_type: concept weight: 40 --- -{{% capture overview %}} + Kubernetes requires PKI certificates for authentication over TLS. If you install Kubernetes with [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/), the certificates that your cluster requires are automatically generated. You can also generate your own certificates -- for example, to keep your private keys more secure by not storing them on the API server. This page explains the certificates that your cluster requires. -{{% /capture %}} -{{% capture body %}} + + ## How certificates are used by your cluster @@ -164,4 +164,4 @@ These files are used as follows: [kubeadm]: /docs/reference/setup-tools/kubeadm/kubeadm/ [proxy]: /docs/tasks/access-kubernetes-api/configure-aggregation-layer/ -{{% /capture %}} + diff --git a/content/en/docs/setup/best-practices/multiple-zones.md b/content/en/docs/setup/best-practices/multiple-zones.md index ba58df028f55f..ab61c839a94ae 100644 --- a/content/en/docs/setup/best-practices/multiple-zones.md +++ b/content/en/docs/setup/best-practices/multiple-zones.md @@ -5,16 +5,16 @@ reviewers: - quinton-hoole title: Running in multiple zones weight: 10 -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + This page describes how to run a cluster in multiple zones. 
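For orientation, zone membership is visible on the nodes themselves; a minimal sketch, assuming a cloud provider that sets the zone label in use at the time of writing:

```shell
# Nodes in a multi-zone cluster carry a zone label applied by the cloud provider.
kubectl get nodes --show-labels

# Show the zone for each node as its own column.
kubectl get nodes -L failure-domain.beta.kubernetes.io/zone
```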
-{{% /capture %}} -{{% capture body %}} + + ## Introduction @@ -401,4 +401,4 @@ KUBERNETES_PROVIDER=aws KUBE_USE_EXISTING_MASTER=true KUBE_AWS_ZONE=us-west-2b k KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a kubernetes/cluster/kube-down.sh ``` -{{% /capture %}} + diff --git a/content/en/docs/setup/learning-environment/kind.md b/content/en/docs/setup/learning-environment/kind.md index e476d220d0ec0..ac355bd157507 100644 --- a/content/en/docs/setup/learning-environment/kind.md +++ b/content/en/docs/setup/learning-environment/kind.md @@ -1,22 +1,22 @@ --- title: Installing Kubernetes with Kind weight: 40 -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + Kind is a tool for running local Kubernetes clusters using Docker container "nodes". -{{% /capture %}} -{{% capture body %}} + + ## Installation See [Installing Kind](https://kind.sigs.k8s.io/docs/user/quick-start/). -{{% /capture %}} + diff --git a/content/en/docs/setup/learning-environment/minikube.md b/content/en/docs/setup/learning-environment/minikube.md index e314d566086a5..7b480f5d568e5 100644 --- a/content/en/docs/setup/learning-environment/minikube.md +++ b/content/en/docs/setup/learning-environment/minikube.md @@ -5,16 +5,16 @@ reviewers: - aaron-prindle title: Installing Kubernetes with Minikube weight: 30 -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a Virtual Machine (VM) on your laptop for users looking to try out Kubernetes or develop with it day-to-day. -{{% /capture %}} -{{% capture body %}} + + ## Minikube Features @@ -509,4 +509,4 @@ For more information about Minikube, see the [proposal](https://git.k8s.io/commu Contributions, questions, and comments are all welcomed and encouraged! Minikube developers hang out on [Slack](https://kubernetes.slack.com) in the #minikube channel (get an invitation [here](http://slack.kubernetes.io/)). We also have the [kubernetes-dev Google Groups mailing list](https://groups.google.com/forum/#!forum/kubernetes-dev). If you are posting to the list please prefix your subject with "minikube: ". -{{% /capture %}} + diff --git a/content/en/docs/setup/production-environment/container-runtimes.md b/content/en/docs/setup/production-environment/container-runtimes.md index 7db25e022bc32..14c8053efb145 100644 --- a/content/en/docs/setup/production-environment/container-runtimes.md +++ b/content/en/docs/setup/production-environment/container-runtimes.md @@ -3,17 +3,17 @@ reviewers: - vincepri - bart0sh title: Container runtimes -content_template: templates/concept +content_type: concept weight: 10 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.6" state="stable" >}} To run containers in Pods, Kubernetes uses a container runtime. Here are the installation instructions for various runtimes. -{{% /capture %}} -{{% capture body %}} + + {{< caution >}} @@ -402,4 +402,4 @@ When using kubeadm, manually configure the Refer to the [Frakti QuickStart guide](https://github.com/kubernetes/frakti#quickstart) for more information. 
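As one concrete example of the kind of runtime preparation these instructions cover, here is a sketch of pointing Docker at the systemd cgroup driver so that the kubelet and the runtime agree (paths and settings assume a systemd-based Linux distribution):

```shell
# Set up the Docker daemon to use systemd for cgroup management.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

# Restart Docker so the new configuration takes effect.
systemctl daemon-reload
systemctl restart docker
```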
-{{% /capture %}} + diff --git a/content/en/docs/setup/production-environment/on-premises-vm/cloudstack.md b/content/en/docs/setup/production-environment/on-premises-vm/cloudstack.md index e85953dd86590..1f7d1fd81fbb4 100644 --- a/content/en/docs/setup/production-environment/on-premises-vm/cloudstack.md +++ b/content/en/docs/setup/production-environment/on-premises-vm/cloudstack.md @@ -2,10 +2,10 @@ reviewers: - thockin title: Cloudstack -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + [CloudStack](https://cloudstack.apache.org/) is software for building public and private clouds based on hardware virtualization principles (traditional IaaS). To deploy Kubernetes on CloudStack there are several possibilities depending on the cloud being used and what images are made available. CloudStack also has a Vagrant plugin available, so Vagrant can be used to deploy Kubernetes either using the existing shell provisioner or using new Salt-based recipes. @@ -13,9 +13,9 @@ content_template: templates/concept This guide uses a single [Ansible playbook](https://github.com/apachecloudstack/k8s), which is completely automated and can deploy Kubernetes on a CloudStack-based cloud using CoreOS images. The playbook creates an SSH key pair, creates a security group and associated rules, and finally starts CoreOS instances configured via cloud-init. -{{% /capture %}} -{{% capture body %}} + + ## Prerequisites @@ -118,4 +118,4 @@ IaaS Provider | Config. Mgmt | OS | Networking | Docs CloudStack | Ansible | CoreOS | flannel | [docs](/docs/setup/production-environment/on-premises-vm/cloudstack/) | | Community ([@Guiques](https://github.com/ltupin/)) -{{% /capture %}} + diff --git a/content/en/docs/setup/production-environment/on-premises-vm/dcos.md b/content/en/docs/setup/production-environment/on-premises-vm/dcos.md index 12e47948e2a6a..e4b310902c979 100644 --- a/content/en/docs/setup/production-environment/on-premises-vm/dcos.md +++ b/content/en/docs/setup/production-environment/on-premises-vm/dcos.md @@ -2,10 +2,10 @@ reviewers: - smugcloud title: Kubernetes on DC/OS -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + Mesosphere provides an easy option to provision Kubernetes onto [DC/OS](https://mesosphere.com/product/), offering: @@ -14,12 +14,12 @@ Mesosphere provides an easy option to provision Kubernetes onto [DC/OS](https:// * Highly available and secure by default * Kubernetes running alongside fast-data platforms (e.g. Akka, Cassandra, Kafka, Spark) -{{% /capture %}} -{{% capture body %}} + + ## Official Mesosphere Guide The canonical source for getting started on DC/OS is located in the [quickstart repo](https://github.com/mesosphere/dcos-kubernetes-quickstart). -{{% /capture %}} + diff --git a/content/en/docs/setup/production-environment/on-premises-vm/ovirt.md b/content/en/docs/setup/production-environment/on-premises-vm/ovirt.md index be6f3b8e77984..1d57b6f7eb352 100644 --- a/content/en/docs/setup/production-environment/on-premises-vm/ovirt.md +++ b/content/en/docs/setup/production-environment/on-premises-vm/ovirt.md @@ -3,16 +3,16 @@ reviewers: - caesarxuchao - erictune title: oVirt -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + oVirt is a virtual datacenter manager that delivers powerful management of multiple virtual machines on multiple hosts.
Using KVM and libvirt, oVirt can be installed on Fedora, CentOS, or Red Hat Enterprise Linux hosts to set up and manage your virtual data center. -{{% /capture %}} -{{% capture body %}} + + ## oVirt Cloud Provider Deployment @@ -69,4 +69,4 @@ IaaS Provider | Config. Mgmt | OS | Networking | Docs oVirt | | | | [docs](/docs/setup/production-environment/on-premises-vm/ovirt/) | | Community ([@simon3z](https://github.com/simon3z)) -{{% /capture %}} + diff --git a/content/en/docs/setup/production-environment/tools/kops.md b/content/en/docs/setup/production-environment/tools/kops.md index 10ae6dfa65285..338dbee0e5c30 100644 --- a/content/en/docs/setup/production-environment/tools/kops.md +++ b/content/en/docs/setup/production-environment/tools/kops.md @@ -1,10 +1,10 @@ --- title: Installing Kubernetes with kops -content_template: templates/task +content_type: task weight: 20 --- -{{% capture overview %}} + This quickstart shows you how to easily install a Kubernetes cluster on AWS. It uses a tool called [`kops`](https://github.com/kubernetes/kops). @@ -18,9 +18,10 @@ kops is an automated provisioning system: * High-Availability support - see the [high_availability.md](https://github.com/kubernetes/kops/blob/master/docs/operations/high_availability.md) * Can directly provision, or generate terraform manifests - see the [terraform.md](https://github.com/kubernetes/kops/blob/master/docs/terraform.md) -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * You must have [kubectl](/docs/tasks/tools/install-kubectl/) installed. @@ -28,9 +29,9 @@ kops is an automated provisioning system: * You must have an [AWS account](https://docs.aws.amazon.com/polly/latest/dg/setting-up.html), generate [IAM keys](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys) and [configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#cli-quick-configuration) them. -{{% /capture %}} -{{% capture steps %}} + + ## Creating a cluster @@ -225,13 +226,14 @@ See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to expl * To delete your cluster: `kops delete cluster useast1.dev.example.com --yes` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/user-guide/kubectl-overview/). * Learn more about `kops` [advanced usage](https://kops.sigs.k8s.io/) for tutorials, best practices and advanced configuration options. 
* Follow `kops` community discussions on Slack: [community discussions](https://github.com/kubernetes/kops#other-ways-to-communicate-with-the-contributors) * Contribute to `kops` by addressing or raising an issue on [GitHub Issues](https://github.com/kubernetes/kops/issues) -{{% /capture %}} + diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md b/content/en/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md index e2ae7267bc113..1bcdad0092915 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md @@ -2,11 +2,11 @@ reviewers: - sig-cluster-lifecycle title: Customizing control plane configuration with kubeadm -content_template: templates/concept +content_type: concept weight: 40 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.12" state="stable" >}} @@ -30,9 +30,9 @@ For more details on each field in the configuration you can navigate to our You can generate a `ClusterConfiguration` object with default values by running `kubeadm config print init-defaults` and saving the output to a file of your choice. {{< /note >}} -{{% /capture %}} -{{% capture body %}} + + ## APIServer flags @@ -83,4 +83,4 @@ scheduler: kubeconfig: /home/johndoe/kubeconfig.yaml ``` -{{% /capture %}} + diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md index 2d386663863fb..f986031911e6f 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md @@ -2,11 +2,11 @@ reviewers: - sig-cluster-lifecycle title: Creating a single control-plane cluster with kubeadm -content_template: templates/task +content_type: task weight: 30 --- -{{% capture overview %}} + The `kubeadm` tool helps you bootstrap a minimum viable Kubernetes cluster that conforms to best practices. In fact, you can use `kubeadm` to set up a cluster that will pass the [Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification). `kubeadm` also supports other cluster @@ -24,9 +24,10 @@ of cloud servers, a Raspberry Pi, and more. Whether you're deploying into the cloud or on-premises, you can integrate `kubeadm` into provisioning systems such as Ansible or Terraform.
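In outline, the bootstrap flow looks like this (a sketch only; the pod CIDR must match the network add-on you pick, and the join token and hash come from the `kubeadm init` output):

```shell
# On the control-plane node:
kubeadm init --pod-network-cidr=192.168.0.0/16

# Make kubectl usable for your regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a pod network add-on, then on each worker node:
kubeadm join <control-plane-host>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```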
-{{% /capture %}} + diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/ha-topology.md b/content/en/docs/setup/production-environment/tools/kubeadm/ha-topology.md index ec05ee12db46c..53b1f380248f3 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/ha-topology.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/ha-topology.md @@ -2,11 +2,11 @@ reviewers: - sig-cluster-lifecycle title: Options for Highly Available topology -content_template: templates/concept +content_type: concept weight: 50 --- -{{% capture overview %}} + This page explains the two options for configuring the topology of your highly available (HA) Kubernetes clusters. @@ -22,9 +22,9 @@ kubeadm bootstraps the etcd cluster statically. Read the etcd [Clustering Guide] for more details. {{< /note >}} -{{% /capture %}} -{{% capture body %}} + + ## Stacked etcd topology @@ -67,10 +67,11 @@ A minimum of three hosts for control plane nodes and three hosts for etcd nodes ![External etcd topology](/images/kubeadm/kubeadm-ha-topology-external-etcd.svg) -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + - [Set up a highly available cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/) -{{% /capture %}} + diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/high-availability.md b/content/en/docs/setup/production-environment/tools/kubeadm/high-availability.md index 162e60e175be1..436f4e3573957 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/high-availability.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/high-availability.md @@ -2,11 +2,11 @@ reviewers: - sig-cluster-lifecycle title: Creating Highly Available clusters with kubeadm -content_template: templates/task +content_type: task weight: 60 --- -{{% capture overview %}} + This page explains two different approaches to setting up a highly available Kubernetes cluster using kubeadm: @@ -30,9 +30,10 @@ environment, neither approach documented here works with Service objects of type LoadBalancer, or with dynamic PersistentVolumes. {{< /caution >}} -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + For both methods you need this infrastructure: @@ -50,9 +51,9 @@ For the external etcd cluster only, you also need: - Three additional machines for etcd members -{{% /capture %}} -{{% capture steps %}} + + ## First steps for both methods @@ -373,4 +374,4 @@ SSH is required if you want to control all nodes from a single machine. # Quote this line if you are using external etcd mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key ``` -{{% /capture %}} + diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md index 9438e8614050c..e06918d7b84d0 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md @@ -1,6 +1,6 @@ --- title: Installing kubeadm -content_template: templates/task +content_type: task weight: 10 card: name: setup @@ -8,14 +8,15 @@ card: title: Install the kubeadm setup tool --- -{{% capture overview %}} + This page shows how to install the `kubeadm` toolbox. 
For information on how to create a cluster with kubeadm once you have performed this installation process, see the [Using kubeadm to Create a Cluster](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) page. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * One or more machines running one of: - Ubuntu 16.04+ @@ -32,9 +33,9 @@ For information how to create a cluster with kubeadm once you have performed thi * Certain ports are open on your machines. See [here](#check-required-ports) for more details. * Swap disabled. You **MUST** disable swap in order for the kubelet to work properly. -{{% /capture %}} -{{% capture steps %}} + + ## Verify the MAC address and product_uuid are unique for every node {#verify-mac-address} @@ -301,8 +302,8 @@ like CRI-O and containerd is work in progress. If you are running into difficulties with kubeadm, please consult our [troubleshooting docs](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/). -{{% capture whatsnext %}} +## {{% heading "whatsnext" %}} + * [Using kubeadm to Create a Cluster](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) -{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md b/content/en/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md index 070dbd72740de..8dfcb250cecb6 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md @@ -2,11 +2,11 @@ reviewers: - sig-cluster-lifecycle title: Configuring each kubelet in your cluster using kubeadm -content_template: templates/concept +content_type: concept weight: 80 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.11" state="stable" >}} @@ -26,9 +26,9 @@ characteristics of a given machine (such as OS, storage, and networking). You ca of your kubelets manually, but kubeadm now provides a `KubeletConfiguration` API type for [managing your kubelet configurations centrally](#configure-kubelets-using-kubeadm). -{{% /capture %}} -{{% capture body %}} + + ## Kubelet configuration patterns @@ -203,4 +203,4 @@ The DEB and RPM packages shipped with the Kubernetes releases are: | `kubernetes-cni` | Installs the official CNI binaries into the `/opt/cni/bin` directory. | | `cri-tools` | Installs the `/usr/bin/crictl` binary from the [cri-tools git repository](https://github.com/kubernetes-incubator/cri-tools). | -{{% /capture %}} + diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/self-hosting.md b/content/en/docs/setup/production-environment/tools/kubeadm/self-hosting.md index 84c98ebe9c708..334e2266f25aa 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/self-hosting.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/self-hosting.md @@ -2,11 +2,11 @@ reviewers: - sig-cluster-lifecycle title: Configuring your Kubernetes cluster to self-host the control plane -content_template: templates/concept +content_type: concept weight: 100 --- -{{% capture overview %}} + ### Self-hosting the Kubernetes control plane {#self-hosting} @@ -19,9 +19,9 @@ configured in the kubelet via static files. To create a self-hosted cluster see the [kubeadm alpha selfhosting pivot](/docs/reference/setup-tools/kubeadm/kubeadm-alpha/#cmd-selfhosting) command.
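A minimal sketch of that workflow, assuming a cluster already bootstrapped with `kubeadm init` (alpha functionality, not recommended for production):

```shell
# Pivot the static-Pod control plane to a self-hosted one.
kubeadm alpha selfhosting pivot

# Afterwards the control plane components run as DaemonSet-managed Pods:
kubectl -n kube-system get daemonsets
```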
-{{% /capture %}} -{{% capture body %}} + + #### Caveats @@ -67,4 +67,4 @@ In summary, `kubeadm alpha selfhosting` works as follows: 1. When the original static control plane stops, the new self-hosted control plane is able to bind to listening ports and become active. -{{% /capture %}} + diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm.md index 708e10569fc4e..739b405d14267 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm.md @@ -2,11 +2,11 @@ reviewers: - sig-cluster-lifecycle title: Set up a High Availability etcd cluster with kubeadm -content_template: templates/task +content_type: task weight: 70 --- -{{% capture overview %}} + {{< note >}} While kubeadm is being used as the management tool for external etcd nodes @@ -23,9 +23,10 @@ becoming unavailable. This task walks through the process of creating a high availability etcd cluster of three members that can be used as an external etcd when using kubeadm to set up a Kubernetes cluster. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * Three hosts that can talk to each other over ports 2379 and 2380. This document assumes these default ports. However, they are configurable through @@ -36,9 +37,9 @@ when using kubeadm to set up a kubernetes cluster. [toolbox]: /docs/setup/production-environment/tools/kubeadm/install-kubeadm/ -{{% /capture %}} -{{% capture steps %}} + + ## Setting up the cluster @@ -264,12 +265,13 @@ this example. - Set `${ETCD_TAG}` to the version tag of your etcd image. For example `3.4.3-0`. To see the etcd image and tag that kubeadm uses execute `kubeadm config images list --kubernetes-version ${K8S_VERSION}`, where `${K8S_VERSION}` is for example `v1.17.0` - Set `${HOST0}` to the IP address of the host you are testing. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + Once you have a working 3-member etcd cluster, you can continue setting up a highly available control plane using the [external etcd method with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/). -{{% /capture %}} + diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md index 054f4b28fb8a1..0294284c9a5f6 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md @@ -1,10 +1,10 @@ --- title: Troubleshooting kubeadm -content_template: templates/concept +content_type: concept weight: 20 --- -{{% capture overview %}} + As with any program, you might run into an error installing or running kubeadm. This page lists some common failure scenarios and provides steps that can help you understand and fix the problem. @@ -18,9 +18,9 @@ If your problem is not listed below, please follow the following steps: - If you are unsure about how kubeadm works, you can ask on [Slack](http://slack.k8s.io/) in #kubeadm, or open a question on [StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes). Please include relevant tags like `#kubernetes` and `#kubeadm` so folks can help you.
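Before asking for help, it is usually worth gathering basic state from the affected node; a short sketch of commonly useful commands (none specific to any one failure below):

```shell
# Versions and node status first.
kubeadm version
kubectl get nodes -o wide
kubectl -n kube-system get pods

# kubeadm failures often surface first in the kubelet log.
journalctl -xeu kubelet | tail -n 100
```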
-{{% /capture %}} -{{% capture body %}} + + ## Not possible to join a v1.18 Node to a v1.17 cluster due to missing RBAC @@ -404,4 +404,4 @@ nodeRegistration: Alternatively, you can modify `/etc/fstab` to make the `/usr` mount writeable, but please be advised that this is modifying a design principle of the Linux distribution. -{{% /capture %}} + diff --git a/content/en/docs/setup/production-environment/tools/kubespray.md b/content/en/docs/setup/production-environment/tools/kubespray.md index ae323d38cff43..07c0b3c574523 100644 --- a/content/en/docs/setup/production-environment/tools/kubespray.md +++ b/content/en/docs/setup/production-environment/tools/kubespray.md @@ -1,10 +1,10 @@ --- title: Installing Kubernetes with Kubespray -content_template: templates/concept +content_type: concept weight: 30 --- -{{% capture overview %}} + This quickstart helps you install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-sigs/kubespray). @@ -23,9 +23,9 @@ Kubespray is a composition of [Ansible](http://docs.ansible.com/) playbooks, [in To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to [kubeadm](/docs/admin/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/). -{{% /capture %}} -{{% capture body %}} + + ## Creating a cluster @@ -113,10 +113,10 @@ When running the reset playbook, be sure not to accidentally target your product * Slack Channel: [#kubespray](https://kubernetes.slack.com/messages/kubespray/) (You can get your invite [here](http://slack.k8s.io/)) * [GitHub Issues](https://github.com/kubernetes-sigs/kubespray/issues) -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + Check out planned work on Kubespray's [roadmap](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/roadmap.md). -{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/setup/production-environment/turnkey/aws.md b/content/en/docs/setup/production-environment/turnkey/aws.md index 922f4a3eb9ac0..92dd18075c589 100644 --- a/content/en/docs/setup/production-environment/turnkey/aws.md +++ b/content/en/docs/setup/production-environment/turnkey/aws.md @@ -3,16 +3,17 @@ reviewers: - justinsb - clove title: Running Kubernetes on AWS EC2 -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This page describes how to install a Kubernetes cluster on AWS. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + To create a Kubernetes cluster on AWS, you will need an Access Key ID and a Secret Access Key from AWS. @@ -28,9 +29,9 @@ To create a Kubernetes cluster on AWS, you will need an Access Key ID and a Secr * [KubeOne](https://github.com/kubermatic/kubeone) is an open source cluster lifecycle management tool that creates, upgrades and manages highly available Kubernetes clusters. -{{% /capture %}} -{{% capture steps %}} + + ## Getting started with your cluster @@ -90,4 +91,4 @@ AWS | KubeOne | Ubuntu, CoreOS, CentOS | canal, weave Please see the [Kubernetes docs](/docs/) for more details on administering and using a Kubernetes cluster.
-{{% /capture %}} + diff --git a/content/en/docs/setup/production-environment/turnkey/gce.md b/content/en/docs/setup/production-environment/turnkey/gce.md index 7ec902d10b787..60c4e690d962b 100644 --- a/content/en/docs/setup/production-environment/turnkey/gce.md +++ b/content/en/docs/setup/production-environment/turnkey/gce.md @@ -5,16 +5,17 @@ reviewers: - mikedanese - thockin title: Running Kubernetes on Google Compute Engine -content_template: templates/task +content_type: task --- -{{% capture overview %}} + The example below creates a Kubernetes cluster with 3 worker node Virtual Machines and a master Virtual Machine (i.e. 4 VMs in your cluster). This cluster is set up and controlled from your workstation (or wherever you find it convenient). -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + If you want a simplified getting started experience and GUI for managing clusters, please consider trying [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) for hosted cluster installation and management. @@ -36,9 +37,9 @@ If you want to use custom binaries or pure open source Kubernetes, please contin 1. Make sure you can start up a GCE VM from the command line. At least make sure you can do the [Create an instance](https://cloud.google.com/compute/docs/instances/#startinstancegcloud) part of the GCE Quickstart. 1. Make sure you can SSH into the VM without interactive prompts. See the [Log in to the instance](https://cloud.google.com/compute/docs/instances/#sshing) part of the GCE Quickstart. -{{% /capture %}} -{{% capture steps %}} + + ## Starting a cluster @@ -225,4 +226,4 @@ GCE | Saltstack | Debian | GCE | [docs](/docs/setup/ Please see the [Kubernetes docs](/docs/) for more details on administering and using a Kubernetes cluster. -{{% /capture %}} + diff --git a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md index 78e61d45888c2..09a74d14505f9 100644 --- a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md +++ b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md @@ -3,17 +3,17 @@ reviewers: - michmike - patricklang title: Intro to Windows support in Kubernetes -content_template: templates/concept +content_type: concept weight: 65 --- -{{% capture overview %}} + Windows applications constitute a large portion of the services and applications that run in many organizations. [Windows containers](https://aka.ms/windowscontainers) provide a modern way to encapsulate processes and package dependencies, making it easier to use DevOps practices and follow cloud native patterns for Windows applications. Kubernetes has become the de facto standard container orchestrator, and the release of Kubernetes 1.14 includes production support for scheduling Windows containers on Windows nodes in a Kubernetes cluster, enabling a vast ecosystem of Windows applications to leverage the power of Kubernetes. Organizations with investments in Windows-based applications and Linux-based applications don't have to look for separate orchestrators to manage their workloads, leading to increased operational efficiencies across their deployments, regardless of operating system.
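In day-to-day use, keeping Windows workloads on Windows nodes comes down to node selection; a minimal sketch (the Pod name and image are illustrative, not from this page):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: iis-example            # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/os: windows  # steer the Pod to Windows nodes
  containers:
  - name: iis
    image: mcr.microsoft.com/windows/servercore/iis
EOF
```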
-{{% /capture %}} -{{% capture body %}} + + ## Windows containers in Kubernetes @@ -584,9 +584,10 @@ If filing a bug, please include detailed information about how to reproduce the * [Relevant logs](https://github.com/kubernetes/community/blob/master/sig-windows/CONTRIBUTING.md#gathering-logs) * Tag the issue sig/windows by commenting on the issue with `/sig windows` to bring it to a SIG-Windows member's attention -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + We have a lot of features in our roadmap. An abbreviated high level list is included below, but we encourage you to view our [roadmap project](https://github.com/orgs/kubernetes/projects/8) and help us make Windows support better by [contributing](https://github.com/kubernetes/community/blob/master/sig-windows/). @@ -638,4 +639,4 @@ properly provisioned. * More CNIs * More Storage Plugins -{{% /capture %}} + diff --git a/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md b/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md index aa1c1f3783ce9..e28afeb9f211b 100644 --- a/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md +++ b/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md @@ -3,17 +3,17 @@ reviewers: - michmike - patricklang title: Guide for scheduling Windows containers in Kubernetes -content_template: templates/concept +content_type: concept weight: 75 --- -{{% capture overview %}} + Windows applications constitute a large portion of the services and applications that run in many organizations. This guide walks you through the steps to configure and deploy a Windows container in Kubernetes. -{{% /capture %}} -{{% capture body %}} + + ## Objectives @@ -245,6 +245,6 @@ spec: ``` -{{% /capture %}} + [RuntimeClass]: https://kubernetes.io/docs/concepts/containers/runtime-class/ diff --git a/content/en/docs/setup/release/version-skew-policy.md b/content/en/docs/setup/release/version-skew-policy.md index f01b084448d95..cc506352d33da 100644 --- a/content/en/docs/setup/release/version-skew-policy.md +++ b/content/en/docs/setup/release/version-skew-policy.md @@ -7,16 +7,16 @@ reviewers: - sig-node - sig-release title: Kubernetes version and version skew support policy -content_template: templates/concept +content_type: concept weight: 30 --- -{{% capture overview %}} + This document describes the maximum version skew supported between various Kubernetes components. Specific cluster deployment tools may place additional restrictions on version skew. -{{% /capture %}} -{{% capture body %}} + + ## Supported versions diff --git a/content/en/docs/tasks/_index.md b/content/en/docs/tasks/_index.md index 1dee1f38f1d02..504ec1dd89094 100644 --- a/content/en/docs/tasks/_index.md +++ b/content/en/docs/tasks/_index.md @@ -2,20 +2,20 @@ title: Tasks main_menu: true weight: 50 -content_template: templates/concept +content_type: concept --- {{< toc >}} -{{% capture overview %}} + This section of the Kubernetes documentation contains pages that show how to do individual tasks. A task page shows how to do a single thing, typically by giving a short sequence of steps. -{{% /capture %}} -{{% capture body %}} + + ## Web UI (Dashboard) @@ -73,11 +73,12 @@ Configure and schedule NVIDIA GPUs for use as a resource by nodes in a cluster. Configure and schedule huge pages as a schedulable resource in a cluster. 
-{{% /capture %}} + +## {{% heading "whatsnext" %}} + If you would like to write a task page, see [Creating a Documentation Pull Request](/docs/home/contribute/create-pull-request/). -{{% /capture %}} + diff --git a/content/en/docs/tasks/access-application-cluster/access-cluster.md b/content/en/docs/tasks/access-application-cluster/access-cluster.md index 05835f2b08a00..39ad8b4b7e472 100644 --- a/content/en/docs/tasks/access-application-cluster/access-cluster.md +++ b/content/en/docs/tasks/access-application-cluster/access-cluster.md @@ -1,17 +1,17 @@ --- title: Accessing Clusters weight: 20 -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + This topic discusses multiple ways to interact with clusters. -{{% /capture %}} -{{% capture body %}} + + ## Accessing for the first time with kubectl @@ -376,4 +376,3 @@ There are several different proxies you may encounter when using Kubernetes: Kubernetes users will typically not need to worry about anything other than the first two types. The cluster admin will typically ensure that the latter types are set up correctly. -{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md b/content/en/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md index 33547cdca6f91..1d00516d28996 100644 --- a/content/en/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md +++ b/content/en/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md @@ -1,25 +1,26 @@ --- title: Communicate Between Containers in the Same Pod Using a Shared Volume -content_template: templates/task +content_type: task weight: 110 --- -{{% capture overview %}} + This page shows how to use a Volume to communicate between two Containers running in the same Pod. See also how to allow processes to communicate by [sharing process namespace](/docs/tasks/configure-pod-container/share-process-namespace/) between containers. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Creating a Pod that runs two Containers @@ -108,10 +109,10 @@ The output shows that nginx serves a web page written by the debian container: Hello from the debian container -{{% /capture %}} -{{% capture discussion %}} + + ## Discussion @@ -127,10 +128,11 @@ The Volume in this exercise provides a way for Containers to communicate during the life of the Pod. If the Pod is deleted and recreated, any data stored in the shared Volume is lost. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [patterns for composite containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns). @@ -147,7 +149,7 @@ the shared Volume is lost. * See [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core).
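In outline, the pattern this task builds is one Pod whose two containers mount the same `emptyDir`; a compressed sketch consistent with the page's example:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}               # shared scratch space, deleted with the Pod
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh", "-c", "echo Hello from the debian container > /pod-data/index.html"]
EOF
```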
-{{% /capture %}} + diff --git a/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md b/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md index acd023548a9cf..79abf9f16310e 100644 --- a/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md +++ b/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md @@ -1,6 +1,6 @@ --- title: Configure Access to Multiple Clusters -content_template: templates/task +content_type: task weight: 30 card: name: tasks @@ -8,7 +8,7 @@ card: --- -{{% capture overview %}} + This page shows how to configure access to multiple clusters by using configuration files. After your clusters, users, and contexts are defined in @@ -21,15 +21,16 @@ a *kubeconfig file*. This is a generic way of referring to configuration files. It does not mean that there is a file named `kubeconfig`. {{< /note >}} -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Define clusters, users, and contexts @@ -369,14 +370,15 @@ export KUBECONFIG=$KUBECONFIG_SAVED $Env:KUBECONFIG=$ENV:KUBECONFIG_SAVED ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [Organizing Cluster Access Using kubeconfig Files](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) * [kubectl config](/docs/reference/generated/kubectl/kubectl-commands#config) -{{% /capture %}} + diff --git a/content/en/docs/tasks/access-application-cluster/configure-cloud-provider-firewall.md b/content/en/docs/tasks/access-application-cluster/configure-cloud-provider-firewall.md index 0ab9428a36098..385f226a98a38 100644 --- a/content/en/docs/tasks/access-application-cluster/configure-cloud-provider-firewall.md +++ b/content/en/docs/tasks/access-application-cluster/configure-cloud-provider-firewall.md @@ -3,27 +3,28 @@ reviewers: - bprashanth - davidopp title: Configure Your Cloud Provider's Firewalls -content_template: templates/task +content_type: task weight: 90 --- -{{% capture overview %}} + Many cloud providers (e.g. Google Compute Engine) define firewalls that help prevent inadvertent exposure to the internet. When exposing a service to the external world, you may need to open up one or more ports in these firewalls to serve traffic. This document describes this process, as well as any provider specific details that may be necessary. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Restrict Access For LoadBalancer Service @@ -106,4 +107,4 @@ the wilds of the internet. {{< /note >}} -{{% /capture %}} + diff --git a/content/en/docs/tasks/access-application-cluster/configure-dns-cluster.md b/content/en/docs/tasks/access-application-cluster/configure-dns-cluster.md index 4c17d3128d45a..3535fdb8bcdf8 100644 --- a/content/en/docs/tasks/access-application-cluster/configure-dns-cluster.md +++ b/content/en/docs/tasks/access-application-cluster/configure-dns-cluster.md @@ -1,13 +1,13 @@ --- title: Configure DNS for a Cluster weight: 120 -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + Kubernetes offers a DNS cluster addon, which most of the supported environments enable by default. 
In Kubernetes version 1.11 and later, CoreDNS is recommended and is installed by default with kubeadm. -{{% /capture %}} -{{% capture body %}} + + For more information on how to configure CoreDNS for a Kubernetes cluster, see the [Customizing DNS Service](/docs/tasks/administer-cluster/dns-custom-nameservers/). For an example demonstrating how to use Kubernetes DNS with kube-dns, see the [Kubernetes DNS sample plugin](https://github.com/kubernetes/examples/tree/master/staging/cluster-dns). -{{% /capture %}} + diff --git a/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md b/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md index 264d930d5fd4e..0ce827185ce21 100644 --- a/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md +++ b/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md @@ -1,30 +1,32 @@ --- title: Connect a Front End to a Back End Using a Service -content_template: templates/tutorial +content_type: tutorial weight: 70 --- -{{% capture overview %}} + This task shows how to create a frontend and a backend microservice. The backend microservice is a hello greeter. The frontend and backend are connected using a Kubernetes {{< glossary_tooltip term_id="service" >}} object. -{{% /capture %}} -{{% capture objectives %}} + +## {{% heading "objectives" %}} + * Create and run a microservice using a {{< glossary_tooltip term_id="deployment" >}} object. * Route traffic to the backend using a frontend. * Use a Service object to connect the frontend application to the backend application. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} @@ -34,10 +36,10 @@ frontend and backend are connected using a Kubernetes support this, you can use a Service of type [NodePort](/docs/concepts/services-networking/service/#nodeport) instead. -{{% /capture %}} -{{% capture lessoncontent %}} + + ## Creating the backend using a Deployment @@ -201,9 +203,10 @@ The output shows the message generated by the backend: {"message":"Hello"} ``` -{{% /capture %}} -{{% capture cleanup %}} + +## {{% heading "cleanup" %}} + To delete the Services, enter this command: @@ -213,13 +216,14 @@ To delete the Deployments, the ReplicaSets and the Pods that are running the bac kubectl delete deployment frontend hello -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [Services](/docs/concepts/services-networking/service/) * Learn more about [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/access-application-cluster/create-external-load-balancer.md b/content/en/docs/tasks/access-application-cluster/create-external-load-balancer.md index 720203d60d93e..7dcc613232364 100644 --- a/content/en/docs/tasks/access-application-cluster/create-external-load-balancer.md +++ b/content/en/docs/tasks/access-application-cluster/create-external-load-balancer.md @@ -1,11 +1,11 @@ --- title: Create an External Load Balancer -content_template: templates/task +content_type: task weight: 80 --- -{{% capture overview %}} + This page shows how to create an External Load Balancer. @@ -24,15 +24,16 @@ services externally-reachable URLs, load balance the traffic, terminate SSL etc., please check the [Ingress](/docs/concepts/services-networking/ingress/) documentation.
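In the common case, creating the load balancer reduces to a Service of `type: LoadBalancer`; a minimal sketch (the Deployment name and image are placeholders, and an external IP is only provisioned on a supported cloud provider):

```shell
kubectl create deployment example --image=nginx
kubectl expose deployment example --name=example-service \
    --port=80 --target-port=80 --type=LoadBalancer

# EXTERNAL-IP shows <pending> until the cloud provider allocates an address.
kubectl get service example-service --watch
```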
-{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Configuration file @@ -199,4 +200,4 @@ Once the external load balancers provide weights, this functionality can be adde Internal pod to pod traffic should behave similar to ClusterIP services, with equal probability across all pods. -{{% /capture %}} + diff --git a/content/en/docs/tasks/access-application-cluster/ingress-minikube.md b/content/en/docs/tasks/access-application-cluster/ingress-minikube.md index 0a16c71064c0d..9288ec3064a54 100644 --- a/content/en/docs/tasks/access-application-cluster/ingress-minikube.md +++ b/content/en/docs/tasks/access-application-cluster/ingress-minikube.md @@ -1,25 +1,26 @@ --- title: Set up Ingress on Minikube with the NGINX Ingress Controller -content_template: templates/task +content_type: task weight: 100 --- -{{% capture overview %}} + An [Ingress](/docs/concepts/services-networking/ingress/) is an API object that defines rules which allow external access to services in a cluster. An [Ingress controller](/docs/concepts/services-networking/ingress-controllers/) fulfills the rules set in the Ingress. This page shows you how to set up a simple Ingress which routes requests to Service web or web2 depending on the HTTP URI. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Create a Minikube cluster @@ -275,13 +276,14 @@ The following file is an Ingress resource that sends traffic to your Service via {{< note >}}If you are running Minikube locally, you can visit hello-world.info and hello-world.info/v2 from your browser.{{< /note >}} -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Read more about [Ingress](/docs/concepts/services-networking/ingress/) * Read more about [Ingress Controllers](/docs/concepts/services-networking/ingress-controllers/) * Read more about [Services](/docs/concepts/services-networking/service/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/access-application-cluster/list-all-running-container-images.md b/content/en/docs/tasks/access-application-cluster/list-all-running-container-images.md index b3fb886d1143a..d1e1ba1568c72 100644 --- a/content/en/docs/tasks/access-application-cluster/list-all-running-container-images.md +++ b/content/en/docs/tasks/access-application-cluster/list-all-running-container-images.md @@ -1,23 +1,24 @@ --- title: List All Container Images Running in a Cluster -content_template: templates/task +content_type: task weight: 100 --- -{{% capture overview %}} + This page shows how to use kubectl to list all of the Container images for Pods running in a cluster. 
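The heart of the technique is a jsonpath (or go-template) query over Pod specs; a minimal sketch of the jsonpath variant:

```shell
# Print every container image in the cluster, one per line, with usage counts.
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" \
  | tr -s '[[:space:]]' '\n' \
  | sort \
  | uniq -c
```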
-{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + In this exercise you will use kubectl to fetch all of the Pods running in a cluster, and format the output to pull out the list @@ -108,19 +109,20 @@ kubectl get pods --all-namespaces -o go-template --template="{{range .items}}{{r -{{% /capture %}} -{{% capture discussion %}} -{{% /capture %}} + + + + +## {{% heading "whatsnext" %}} -{{% capture whatsnext %}} ### Reference * [Jsonpath](/docs/user-guide/jsonpath/) reference guide * [Go template](https://golang.org/pkg/text/template/) reference guide -{{% /capture %}} + diff --git a/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md b/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md index fc24022d0c6aa..a6c2e217a50e2 100644 --- a/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md +++ b/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md @@ -1,29 +1,30 @@ --- title: Use Port Forwarding to Access Applications in a Cluster -content_template: templates/task +content_type: task weight: 40 min-kubernetes-server-version: v1.10 --- -{{% capture overview %}} + This page shows how to use `kubectl port-forward` to connect to a Redis server running in a Kubernetes cluster. This type of connection can be useful for database debugging. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} * Install [redis-cli](http://redis.io/topics/rediscli). -{{% /capture %}} -{{% capture steps %}} + + ## Creating Redis deployment and service @@ -179,10 +180,10 @@ for database debugging. PONG ``` -{{% /capture %}} -{{% capture discussion %}} + + ## Discussion @@ -196,9 +197,10 @@ The support for UDP protocol is tracked in [issue 47862](https://github.com/kubernetes/kubernetes/issues/47862). {{< /note >}} -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + Learn more about [kubectl port-forward](/docs/reference/generated/kubectl/kubectl-commands/#port-forward). -{{% /capture %}} + diff --git a/content/en/docs/tasks/access-application-cluster/service-access-application-cluster.md b/content/en/docs/tasks/access-application-cluster/service-access-application-cluster.md index af5eb2db86993..fe90981432a02 100644 --- a/content/en/docs/tasks/access-application-cluster/service-access-application-cluster.md +++ b/content/en/docs/tasks/access-application-cluster/service-access-application-cluster.md @@ -1,35 +1,37 @@ --- title: Use a Service to Access an Application in a Cluster -content_template: templates/tutorial +content_type: tutorial weight: 60 --- -{{% capture overview %}} + This page shows how to create a Kubernetes Service object that external clients can use to access an application running in a cluster. The Service provides load balancing for an application that has two running instances. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture objectives %}} + +## {{% heading "objectives" %}} + * Run two instances of a Hello World application. * Create a Service object that exposes a node port. * Use the Service object to access the running application. 
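In outline, those objectives map to a handful of commands (the names and image here are assumptions, not taken from the tutorial):

```shell
# Run two instances of the application.
kubectl create deployment hello-world --image=gcr.io/google-samples/node-hello:1.0
kubectl scale deployment hello-world --replicas=2

# Expose it on a port of every node.
kubectl expose deployment hello-world --type=NodePort --name=example-service --port=8080

# Note the assigned NodePort, then try: curl http://<node-ip>:<node-port>
kubectl get service example-service
```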
-{{% /capture %}} -{{% capture lessoncontent %}} + + ## Creating a service for an application running in two pods @@ -130,10 +132,11 @@ As an alternative to using `kubectl expose`, you can use a [service configuration file](/docs/concepts/services-networking/service/) to create a Service. -{{% /capture %}} -{{% capture cleanup %}} + +## {{% heading "cleanup" %}} + To delete the Service, enter this command: @@ -144,11 +147,12 @@ the Hello World application, enter this command: kubectl delete deployment hello-world -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + Learn more about [connecting applications with services](/docs/concepts/services-networking/connect-applications-service/). -{{% /capture %}} + diff --git a/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md b/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md index 88132f52180ef..4da7cdf3d61b7 100644 --- a/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md +++ b/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md @@ -4,7 +4,7 @@ reviewers: - mikedanese - rf232 title: Web UI (Dashboard) -content_template: templates/concept +content_type: concept weight: 10 card: name: tasks @@ -12,7 +12,7 @@ card: title: Use the Web UI Dashboard --- -{{% capture overview %}} + Dashboard is a web-based Kubernetes user interface. You can use Dashboard to deploy containerized applications to a Kubernetes cluster, troubleshoot your containerized application, and manage the cluster resources. You can use Dashboard to get an overview of applications running on your cluster, as well as for creating or modifying individual Kubernetes resources (such as Deployments, Jobs, DaemonSets, etc). For example, you can scale a Deployment, initiate a rolling update, restart a pod or deploy new applications using a deploy wizard. @@ -20,10 +20,10 @@ Dashboard also provides information on the state of Kubernetes resources in your ![Kubernetes Dashboard UI](/images/docs/ui-dashboard.png) -{{% /capture %}} -{{% capture body %}} + + ## Deploying the Dashboard UI @@ -162,11 +162,12 @@ Pod lists and detail pages link to a logs viewer that is built into Dashboard. T ![Logs viewer](/images/docs/ui-dashboard-logs-view.png) -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + For more information, see the [Kubernetes Dashboard project page](https://github.com/kubernetes/dashboard). -{{% /capture %}} + diff --git a/content/en/docs/tasks/access-kubernetes-api/configure-aggregation-layer.md b/content/en/docs/tasks/access-kubernetes-api/configure-aggregation-layer.md index 9a77378d5cf04..b6c71d0eee4e1 100644 --- a/content/en/docs/tasks/access-kubernetes-api/configure-aggregation-layer.md +++ b/content/en/docs/tasks/access-kubernetes-api/configure-aggregation-layer.md @@ -4,17 +4,18 @@ reviewers: - lavalamp - cheftako - chenopis -content_template: templates/task +content_type: task weight: 10 --- -{{% capture overview %}} + Configuring the [aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) allows the Kubernetes apiserver to be extended with additional APIs, which are not part of the core Kubernetes APIs. 
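Concretely, enabling the aggregation layer comes down to a set of kube-apiserver flags; a sketch of the flag group this page walks through (the certificate paths are assumptions for your environment):

```shell
kube-apiserver \
  --requestheader-client-ca-file=<path to aggregator CA cert> \
  --requestheader-allowed-names=front-proxy-client \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --proxy-client-cert-file=<path to aggregator proxy cert> \
  --proxy-client-key-file=<path to aggregator proxy key>
```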
-{{% /capture %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} @@ -26,9 +27,9 @@ Reusing the same CA for different client types can negatively impact the cluster {{< /caution >}} {{< /note >}} -{{% /capture %}} -{{% capture steps %}} + + ## Authentication Flow @@ -222,7 +223,7 @@ If you are not running kube-proxy on a host running the API server, then you mus --enable-aggregator-routing=true -{{% /capture %}} + ### Register APIService objects @@ -275,11 +276,12 @@ spec: ... ``` -{{% capture whatsnext %}} +## {{% heading "whatsnext" %}} + * [Setup an extension api-server](/docs/tasks/access-kubernetes-api/setup-extension-api-server/) to work with the aggregation layer. * For a high level overview, see [Extending the Kubernetes API with the aggregation layer](/docs/concepts/api-extension/apiserver-aggregation/). * Learn how to [Extend the Kubernetes API Using Custom Resource Definitions](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/). -{{% /capture %}} + diff --git a/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md b/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md index ec35dd88e8e41..6eaf0cdd3ae6b 100644 --- a/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md +++ b/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md @@ -3,19 +3,20 @@ title: Versions in CustomResourceDefinitions reviewers: - sttts - liggitt -content_template: templates/task +content_type: task weight: 30 min-kubernetes-server-version: v1.16 --- -{{% capture overview %}} + This page explains how to add versioning information to [CustomResourceDefinitions](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#customresourcedefinition-v1beta1-apiextensions), to indicate the stability level of your CustomResourceDefinitions or advance your API to a new version with conversion between API representations. It also describes how to upgrade an object from one version to another. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} You should have an initial understanding of [custom resources](/docs/concepts/api-extension/custom-resources/). {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Overview @@ -961,4 +962,4 @@ The following is an example procedure to upgrade from `v1beta1` to `v1`. storage version, which is `v1`. 2. Remove `v1beta1` from the CustomResourceDefinition `status.storedVersions` field.
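For orientation, the versioning machinery lives in the CRD's `spec.versions` list; a trimmed sketch of a two-version CRD where `v1` is the storage version (the group, kind, and schema here are illustrative):

```shell
kubectl apply -f - <<EOF
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com
spec:
  group: example.com
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  scope: Namespaced
  versions:
  - name: v1beta1
    served: true        # still readable and writable by clients
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            x-kubernetes-preserve-unknown-fields: true
  - name: v1
    served: true
    storage: true       # new objects are persisted as v1
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            x-kubernetes-preserve-unknown-fields: true
EOF
```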
-{{% /capture %}} + diff --git a/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions.md b/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions.md index 4fcd389ba2869..d2b7d76d9ba24 100644 --- a/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions.md +++ b/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions.md @@ -6,18 +6,19 @@ reviewers: - liggitt - roycaihw - sttts -content_template: templates/task +content_type: task weight: 20 --- -{{% capture overview %}} + This page shows how to install a [custom resource](/docs/concepts/extend-kubernetes/api-extension/custom-resources/) into the Kubernetes API by creating a [CustomResourceDefinition](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#customresourcedefinition-v1beta1-apiextensions). -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} @@ -25,9 +26,9 @@ into the Kubernetes API by creating a * Read about [custom resources](/docs/concepts/api-extension/custom-resources/). -{{% /capture %}} -{{% capture steps %}} + + ## Create a CustomResourceDefinition @@ -568,9 +569,9 @@ See [Custom resource definition versioning](/docs/tasks/access-kubernetes-api/cu for more information about serving multiple versions of your CustomResourceDefinition and migrating your objects from one version to another. -{{% /capture %}} -{{% capture discussion %}} + + ## Advanced topics ### Finalizers @@ -1448,13 +1449,13 @@ NAME AGE crontabs/my-new-cron-object 3s ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * See [CustomResourceDefinition](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#customresourcedefinition-v1-apiextensions-k8s-io). * Serve [multiple versions](/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning/) of a CustomResourceDefinition. -{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/tasks/access-kubernetes-api/http-proxy-access-api.md b/content/en/docs/tasks/access-kubernetes-api/http-proxy-access-api.md index be282a29c1d1a..695ed5b6c015d 100644 --- a/content/en/docs/tasks/access-kubernetes-api/http-proxy-access-api.md +++ b/content/en/docs/tasks/access-kubernetes-api/http-proxy-access-api.md @@ -1,14 +1,15 @@ --- title: Use an HTTP Proxy to Access the Kubernetes API -content_template: templates/task +content_type: task weight: 40 --- -{{% capture overview %}} + This page shows how to use an HTTP proxy to access the Kubernetes API. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} @@ -19,9 +20,9 @@ a Hello world application by entering this command: kubectl run node-hello --image=gcr.io/google-samples/node-hello:1.0 --port=8080 ``` -{{% /capture %}} -{{% capture steps %}} + + ## Using kubectl to start a proxy server @@ -81,10 +82,11 @@ The output should look similar to this: ... } -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + Learn more about [kubectl proxy](/docs/reference/generated/kubectl/kubectl-commands#proxy). 
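For example, the basic flow this page walks through looks roughly like the following; the port number is arbitrary:

```shell
# Start a proxy to the Kubernetes API server in the background
kubectl proxy --port=8080 &

# The API is now reachable over plain HTTP on localhost
curl http://localhost:8080/api/
```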
-{{% /capture %}} + diff --git a/content/en/docs/tasks/access-kubernetes-api/setup-extension-api-server.md b/content/en/docs/tasks/access-kubernetes-api/setup-extension-api-server.md index 71c6059eecd6c..adf93732d33e5 100644 --- a/content/en/docs/tasks/access-kubernetes-api/setup-extension-api-server.md +++ b/content/en/docs/tasks/access-kubernetes-api/setup-extension-api-server.md @@ -4,25 +4,26 @@ reviewers: - lavalamp - cheftako - chenopis -content_template: templates/task +content_type: task weight: 15 --- -{{% capture overview %}} + Setting up an extension API server to work with the aggregation layer allows the Kubernetes apiserver to be extended with additional APIs, which are not part of the core Kubernetes APIs. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} * You must [configure the aggregation layer](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/) and enable the apiserver flags. -{{% /capture %}} -{{% capture steps %}} + + ## Setup an extension api-server to work with the aggregation layer @@ -46,15 +47,16 @@ Alternatively, you can use an existing 3rd party solution, such as [apiserver-bu 1. Create a Kubernetes apiservice. The CA cert above should be base64 encoded, stripped of newlines and used as the spec.caBundle in the apiservice. This should not be namespaced. If using the [kube-aggregator API](https://github.com/kubernetes/kube-aggregator/), only pass in the PEM encoded CA bundle because the base64 encoding is done for you. 1. Use kubectl to get your resource. It should return "No resources found.", which means that everything worked but you currently have no objects of that resource type created yet. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * If you haven't already, [configure the aggregation layer](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/) and enable the apiserver flags. * For a high level overview, see [Extending the Kubernetes API with the aggregation layer](/docs/concepts/api-extension/apiserver-aggregation). * Learn how to [Extend the Kubernetes API Using Custom Resource Definitions](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/). -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/access-cluster-api.md b/content/en/docs/tasks/administer-cluster/access-cluster-api.md index 520dd949cdff3..659c8d777c4ba 100644 --- a/content/en/docs/tasks/administer-cluster/access-cluster-api.md +++ b/content/en/docs/tasks/administer-cluster/access-cluster-api.md @@ -1,18 +1,19 @@ --- title: Access Clusters Using the Kubernetes API -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This page shows how to access clusters using the Kubernetes API.
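As a preview of the approach described below, direct access without `kubectl proxy` typically looks something like this sketch; the secret name is a placeholder for your service account token secret:

```shell
# Server address from the current kubeconfig
APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')

# Decode a service account token (secret name is hypothetical)
TOKEN=$(kubectl get secret default-token-xxxxx -o jsonpath='{.data.token}' | base64 --decode)

# --insecure skips server certificate verification; do not use it outside testing
curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
```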
-{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Accessing the Kubernetes API @@ -449,5 +450,5 @@ The output will be similar to this: } ``` -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/access-cluster-services.md b/content/en/docs/tasks/administer-cluster/access-cluster-services.md index 57cdc835de8cb..979a75a162eaa 100644 --- a/content/en/docs/tasks/administer-cluster/access-cluster-services.md +++ b/content/en/docs/tasks/administer-cluster/access-cluster-services.md @@ -1,18 +1,19 @@ --- title: Access Services Running on Clusters -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This page shows how to connect to services running on the Kubernetes cluster. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Accessing services running on the cluster @@ -132,6 +133,6 @@ You may be able to put an apiserver proxy URL into the address bar of a browser. - Some web apps may not work, particularly those with client side javascript that construct URLs in a way that is unaware of the proxy path prefix. -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/change-default-storage-class.md b/content/en/docs/tasks/administer-cluster/change-default-storage-class.md index a2070bcfe3fc0..453cfef22103c 100644 --- a/content/en/docs/tasks/administer-cluster/change-default-storage-class.md +++ b/content/en/docs/tasks/administer-cluster/change-default-storage-class.md @@ -1,21 +1,22 @@ --- title: Change the default StorageClass -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This page shows how to change the default Storage Class that is used to provision volumes for PersistentVolumeClaims that have no special requirements. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Why change the default storage class? @@ -93,10 +94,11 @@ for details about addon manager and how to disable individual addons. gold (default) kubernetes.io/gce-pd 1d ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [PersistentVolumes](/docs/concepts/storage/persistent-volumes/). -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md b/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md index a7ac4d80c918f..729c7bde4fc41 100644 --- a/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md +++ b/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md @@ -1,20 +1,21 @@ --- title: Change the Reclaim Policy of a PersistentVolume -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This page shows how to change the reclaim policy of a Kubernetes PersistentVolume. 
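The core of the task is a single `kubectl patch` invocation along these lines, where `<your-pv-name>` stands in for the PersistentVolume you want to change:

```shell
# Switch the reclaim policy of an existing PersistentVolume to Retain
kubectl patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```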
-{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Why change reclaim policy of a PersistentVolume @@ -80,9 +81,10 @@ kubectl patch pv -p "{\"spec\":{\"persistentVolumeReclaimPolicy\" `default/claim3` has reclaim policy `Retain`. It will not be automatically deleted when a user deletes claim `default/claim3`. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [PersistentVolumes](/docs/concepts/storage/persistent-volumes/). * Learn more about [PersistentVolumeClaims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims). @@ -91,6 +93,6 @@ kubectl patch pv -p "{\"spec\":{\"persistentVolumeReclaimPolicy\" * [PersistentVolume](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolume-v1-core) * [PersistentVolumeClaim](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaim-v1-core) * See the `persistentVolumeReclaimPolicy` field of [PersistentVolumeSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaim-v1-core). -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/cluster-management.md b/content/en/docs/tasks/administer-cluster/cluster-management.md index 65728ec4ee9b5..7cbab3aa2cb2e 100644 --- a/content/en/docs/tasks/administer-cluster/cluster-management.md +++ b/content/en/docs/tasks/administer-cluster/cluster-management.md @@ -3,20 +3,20 @@ reviewers: - lavalamp - thockin title: Cluster Management -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + This document describes several topics related to the lifecycle of a cluster: creating a new cluster, upgrading your cluster's master and worker nodes, performing node maintenance (e.g. kernel upgrades), and upgrading the Kubernetes API version of a running cluster. -{{% /capture %}} -{{% capture body %}} + + ## Creating and configuring a Cluster @@ -224,4 +224,4 @@ kubectl convert -f pod.yaml --output-version v1 For more options, please refer to the usage of [kubectl convert](/docs/reference/generated/kubectl/kubectl-commands#convert) command. -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/configure-multiple-schedulers.md b/content/en/docs/tasks/administer-cluster/configure-multiple-schedulers.md index 436584ad14ca0..e4b58b70e39c6 100644 --- a/content/en/docs/tasks/administer-cluster/configure-multiple-schedulers.md +++ b/content/en/docs/tasks/administer-cluster/configure-multiple-schedulers.md @@ -3,10 +3,10 @@ reviewers: - davidopp - madhusudancs title: Configure Multiple Schedulers -content_template: templates/task +content_type: task --- -{{% capture overview %}} + Kubernetes ships with a default scheduler that is described [here](/docs/admin/kube-scheduler/). If the default scheduler does not suit your needs you can implement your own scheduler. @@ -19,16 +19,17 @@ document. Please refer to the kube-scheduler implementation in [pkg/scheduler](https://github.com/kubernetes/kubernetes/tree/{{< param "githubbranch" >}}/pkg/scheduler) in the Kubernetes source directory for a canonical example. 
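Once a second scheduler is deployed, a Pod opts into it via `spec.schedulerName`. A minimal sketch, assuming your scheduler registered itself under the name `my-scheduler`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-second-scheduler
spec:
  # Pods without this field keep using the default scheduler
  schedulerName: my-scheduler
  containers:
  - name: pod-container
    image: k8s.gcr.io/pause:2.0
```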
-{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Package the scheduler @@ -219,9 +220,9 @@ kubectl create -f pod3.yaml kubectl get pods ``` -{{% /capture %}} -{{% capture discussion %}} + + ### Verifying that the pods were scheduled using the desired schedulers @@ -241,4 +242,4 @@ verify that the pods were scheduled by the desired schedulers. kubectl get events ``` -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md index 73cecd999b5a6..91661d235fb86 100644 --- a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md +++ b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md @@ -3,23 +3,24 @@ reviewers: - mml - wojtek-t title: Operating etcd clusters for Kubernetes -content_template: templates/task +content_type: task --- -{{% capture overview %}} + {{< glossary_definition term_id="etcd" length="all" prepend="etcd is a ">}} -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Prerequisites @@ -238,4 +239,4 @@ To urgently fix this bug for Kubernetes 1.15 or earlier, build a custom kube-api See ["kube-apiserver 1.13.x refuses to work when first etcd-server is not available"](https://github.com/kubernetes/kubernetes/issues/72102). -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/coredns.md b/content/en/docs/tasks/administer-cluster/coredns.md index 2e50d54f06a2c..32d4f7d7ecfe9 100644 --- a/content/en/docs/tasks/administer-cluster/coredns.md +++ b/content/en/docs/tasks/administer-cluster/coredns.md @@ -3,18 +3,19 @@ reviewers: - johnbelamaric title: Using CoreDNS for Service Discovery min-kubernetes-server-version: v1.9 -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This page describes the CoreDNS upgrade process and how to install CoreDNS instead of kube-dns. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## About CoreDNS @@ -89,14 +90,15 @@ There is a helpful [guideline and walkthrough](https://github.com/coredns/deploy When resource utilisation is a concern, it may be useful to tune the configuration of CoreDNS. For more details, check out the [documentation on scaling CoreDNS](https://github.com/coredns/deployment/blob/master/kubernetes/Scaling_CoreDNS.md). -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + You can configure [CoreDNS](https://coredns.io) to support many more use cases than kube-dns by modifying the `Corefile`. For more information, see the [CoreDNS site](https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/). 
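For reference, a typical kubeadm-installed `Corefile` has roughly this shape; details vary by CoreDNS version, so treat this as a sketch:

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
```

Custom behavior, such as forwarding a private zone to a separate resolver, is added by editing this file in the `coredns` ConfigMap in the `kube-system` namespace.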
-{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/cpu-management-policies.md b/content/en/docs/tasks/administer-cluster/cpu-management-policies.md index 9568843e87bb1..1b29abf17c58f 100644 --- a/content/en/docs/tasks/administer-cluster/cpu-management-policies.md +++ b/content/en/docs/tasks/administer-cluster/cpu-management-policies.md @@ -4,10 +4,10 @@ reviewers: - sjenning - ConnorDoyle - balajismaniam -content_template: templates/task +content_type: task --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.12" state="beta" >}} @@ -18,16 +18,17 @@ acceptably. The kubelet provides methods to enable more complex workload placement policies while keeping the abstraction free from explicit placement directives. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## CPU Management Policies @@ -211,4 +212,4 @@ and `requests` are set equal to `limits` when not explicitly specified. And the container's resource limit for the CPU resource is an integer greater than or equal to one. The `nginx` container is granted 2 exclusive CPUs. -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/declare-network-policy.md b/content/en/docs/tasks/administer-cluster/declare-network-policy.md index 1b6a706934e3b..61add5312ad47 100644 --- a/content/en/docs/tasks/administer-cluster/declare-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/declare-network-policy.md @@ -4,13 +4,14 @@ reviewers: - danwinship title: Declare Network Policy min-kubernetes-server-version: v1.8 -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This document helps you get started using the Kubernetes [NetworkPolicy API](/docs/concepts/services-networking/network-policies/) to declare network policies that govern how pods communicate with each other. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} @@ -25,9 +26,9 @@ Make sure you've configured a network provider with network policy support. Ther {{< note >}} The above list is sorted alphabetically by product name, not by recommendation or preference. This example is valid for a Kubernetes cluster using any of these providers. {{< /note >}} -{{% /capture %}} -{{% capture steps %}} + + ## Create an `nginx` deployment and expose it via a service @@ -146,4 +147,4 @@ Connecting to nginx (10.100.0.16:80) remote file exists ``` -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/developing-cloud-controller-manager.md b/content/en/docs/tasks/administer-cluster/developing-cloud-controller-manager.md index 0e80a018c43dc..0f6579d915c1e 100644 --- a/content/en/docs/tasks/administer-cluster/developing-cloud-controller-manager.md +++ b/content/en/docs/tasks/administer-cluster/developing-cloud-controller-manager.md @@ -4,18 +4,18 @@ reviewers: - thockin - wlan0 title: Developing Cloud Controller Manager -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.11" state="beta" >}} {{< glossary_definition term_id="cloud-controller-manager" length="all" prepend="The cloud-controller-manager is">}} -{{% /capture %}} -{{% capture body %}} + + ## Background @@ -41,4 +41,4 @@ controller manager as your starting point. 
For in-tree cloud providers, you can run the in-tree cloud controller manager as a {{< glossary_tooltip term_id="daemonset" >}} in your cluster. See [Cloud Controller Manager Administration](/docs/tasks/administer-cluster/running-cloud-controller/) for more details. -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/dns-custom-nameservers.md b/content/en/docs/tasks/administer-cluster/dns-custom-nameservers.md index f3101bf6c91fe..f5e1e93239dc9 100644 --- a/content/en/docs/tasks/administer-cluster/dns-custom-nameservers.md +++ b/content/en/docs/tasks/administer-cluster/dns-custom-nameservers.md @@ -3,24 +3,25 @@ reviewers: - bowei - zihongz title: Customizing DNS Service -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This page explains how to configure your DNS Pod and customize the DNS resolution process. In Kubernetes version 1.11 and later, CoreDNS is at GA and is installed by default with kubeadm. See [CoreDNS ConfigMap options](#coredns-configmap-options) and [Using CoreDNS for Service Discovery](/docs/tasks/administer-cluster/coredns/). -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} * Kubernetes version 1.6 or later. To work with CoreDNS, version 1.9 or later. * The appropriate add-on: kube-dns or CoreDNS. To install with kubeadm, see [the kubeadm reference documentation](/docs/reference/setup-tools/kubeadm/kubeadm-alpha/#cmd-phase-addon). -{{% /capture %}} -{{% capture steps %}} + + ## Introduction @@ -213,9 +214,9 @@ their destination DNS servers: See [ConfigMap options](#configmap-options) for details about the configuration option format. -{{% /capture %}} -{{% capture discussion %}} + + #### Effects on Pods @@ -302,7 +303,7 @@ data: ["172.16.0.1"] ``` -{{% /capture %}} + ## CoreDNS configuration equivalent to kube-dns diff --git a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md index 3a69bd84ec73b..26aa9688557d4 100644 --- a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md +++ b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md @@ -3,20 +3,21 @@ reviewers: - bowei - zihongz title: Debugging DNS Resolution -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This page provides hints on diagnosing DNS problems. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} * Kubernetes version 1.6 and above. * The cluster must be configured to use the `coredns` (or `kube-dns`) addons. -{{% /capture %}} -{{% capture steps %}} + + ### Create a simple Pod to use as a test environment @@ -273,5 +274,5 @@ for more information. ## What's next - [Autoscaling the DNS Service in a Cluster](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/). 
-{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md b/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md index 5d5dc98ade845..6fd887bd8f4ee 100644 --- a/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md +++ b/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md @@ -1,14 +1,15 @@ --- title: Autoscale the DNS Service in a Cluster -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This page shows how to enable and configure autoscaling of the DNS service in your Kubernetes cluster. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} @@ -16,9 +17,9 @@ your Kubernetes cluster. * Make sure [Kubernetes DNS](/docs/concepts/services-networking/dns-pod-service/) is enabled. -{{% /capture %}} -{{% capture steps %}} + + ## Determine whether DNS horizontal autoscaling is already enabled {#determining-whether-dns-horizontal-autoscaling-is-already-enabled} @@ -201,9 +202,9 @@ The common path for this dns-autoscaler is: After the manifest file is deleted, the Addon Manager will delete the dns-autoscaler Deployment. -{{% /capture %}} -{{% capture discussion %}} + + ## Understanding how DNS horizontal autoscaling works @@ -226,10 +227,11 @@ the autoscaler Pod. * The autoscaler provides a controller interface to support two control patterns: *linear* and *ladder*. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Read about [Guaranteed Scheduling For Critical Add-On Pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/). * Learn more about the [implementation of cluster-proportional-autoscaler](https://github.com/kubernetes-incubator/cluster-proportional-autoscaler). -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/enabling-endpointslices.md b/content/en/docs/tasks/administer-cluster/enabling-endpointslices.md index b8e4cf900da31..b9e389ead7e90 100644 --- a/content/en/docs/tasks/administer-cluster/enabling-endpointslices.md +++ b/content/en/docs/tasks/administer-cluster/enabling-endpointslices.md @@ -3,19 +3,20 @@ reviewers: - bowei - freehan title: Enabling EndpointSlices -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This page provides an overview of enabling EndpointSlices in Kubernetes. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Introduction @@ -55,9 +56,10 @@ existing Endpoints functionality, EndpointSlices include new bits of information such as topology. They will allow for greater scalability and extensibility of network endpoints in your cluster. 
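Assuming the `EndpointSlice` feature gate has been turned on for the relevant components as described above, you can confirm that slices are being produced; this check is a sketch and assumes at least one Service exists:

```shell
# EndpointSlices are mirrored automatically from Services and their Pods
kubectl get endpointslices --all-namespaces
```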
-{{% capture whatsnext %}} +## {{% heading "whatsnext" %}} + * Read about [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/) * Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/enabling-service-topology.md b/content/en/docs/tasks/administer-cluster/enabling-service-topology.md index c39b9b366de81..998bb8b2e5739 100644 --- a/content/en/docs/tasks/administer-cluster/enabling-service-topology.md +++ b/content/en/docs/tasks/administer-cluster/enabling-service-topology.md @@ -4,19 +4,20 @@ reviewers: - johnbelamaric - imroc title: Enabling Service Topology -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This page provides an overview of enabling Service Topology in Kubernetes. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Introduction @@ -45,10 +46,11 @@ To enable service topology, enable the `ServiceTopology` and `EndpointSlice` fea ``` -{{% capture whatsnext %}} +## {{% heading "whatsnext" %}} + * Read about the [Service Topology](/docs/concepts/services-networking/service-topology) concept * Read about [Endpoint Slices](/docs/concepts/services-networking/endpoint-slices) * Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/encrypt-data.md b/content/en/docs/tasks/administer-cluster/encrypt-data.md index b96f0349635d7..8499855bb0fd0 100644 --- a/content/en/docs/tasks/administer-cluster/encrypt-data.md +++ b/content/en/docs/tasks/administer-cluster/encrypt-data.md @@ -2,23 +2,24 @@ reviewers: - smarterclayton title: Encrypting Secret Data at Rest -content_template: templates/task +content_type: task min-kubernetes-server-version: 1.13 --- -{{% capture overview %}} + This page shows how to enable and configure encryption of secret data at rest. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} * etcd v3.0 or later is required -{{% /capture %}} -{{% capture steps %}} + + ## Configuration and determining whether encryption at rest is already enabled @@ -215,4 +216,4 @@ kubectl get secrets --all-namespaces -o json | kubectl replace -f - ``` to force all secrets to be decrypted. -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/extended-resource-node.md b/content/en/docs/tasks/administer-cluster/extended-resource-node.md index 49e491d251386..07d8fea616722 100644 --- a/content/en/docs/tasks/administer-cluster/extended-resource-node.md +++ b/content/en/docs/tasks/administer-cluster/extended-resource-node.md @@ -1,26 +1,27 @@ --- title: Advertise Extended Resources for a Node -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This page shows how to specify extended resources for a Node. Extended resources allow cluster administrators to advertise node-level resources that would otherwise be unknown to Kubernetes. 
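The advertisement itself is an HTTP PATCH against the Node's status, sent through `kubectl proxy`, as the steps below develop. Here `<your-node-name>` and the `example.com/dongle` resource are placeholders, and `~1` encodes the `/` in the resource name:

```shell
# In one terminal
kubectl proxy

# In another: advertise four example.com/dongle resources on the node
curl --header "Content-Type: application/json-patch+json" \
  --request PATCH \
  --data '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "4"}]' \
  http://localhost:8001/api/v1/nodes/<your-node-name>/status
```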
-{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Get the names of your Nodes @@ -189,10 +190,11 @@ kubectl describe node | grep dongle (you should not see any output) -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + ### For application developers @@ -204,4 +206,4 @@ kubectl describe node | grep dongle * [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md b/content/en/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md index 0b00eed1257ce..0d5b6d4ebe93a 100644 --- a/content/en/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md +++ b/content/en/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md @@ -4,10 +4,10 @@ reviewers: - filipg - piosz title: Guaranteed Scheduling For Critical Add-On Pods -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + In addition to Kubernetes core components like api-server, scheduler, controller-manager running on a master machine there are a number of add-ons which, for various reasons, must run on a regular cluster node (rather than the Kubernetes master). @@ -19,14 +19,14 @@ vacated by the evicted critical add-on pod or the amount of resources available Note that marking a pod as critical is not meant to prevent evictions entirely; it only prevents the pod from becoming permanently unavailable. For static pods, this means it can't be evicted, but for non-static pods, it just means they will always be rescheduled. -{{% /capture %}} -{{% capture body %}} + + ### Marking pod as critical To mark a Pod as critical, set priorityClassName for that Pod to `system-cluster-critical` or `system-node-critical`. `system-node-critical` is the highest available priority, even higher than `system-cluster-critical`. -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/highly-available-master.md b/content/en/docs/tasks/administer-cluster/highly-available-master.md index e5529da7c74da..e2a582f8b2791 100644 --- a/content/en/docs/tasks/administer-cluster/highly-available-master.md +++ b/content/en/docs/tasks/administer-cluster/highly-available-master.md @@ -2,26 +2,27 @@ reviewers: - jszczepkowski title: Set up High-Availability Kubernetes Masters -content_template: templates/task +content_type: task --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.5" state="alpha" >}} You can replicate Kubernetes masters in `kube-up` or `kube-down` scripts for Google Compute Engine. This document describes how to use kube-up/down scripts to manage highly available (HA) masters and how HA masters are implemented for use with GCE. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Starting an HA-compatible cluster @@ -118,9 +119,9 @@ If the cluster is large, it may take a long time to duplicate its state. This operation may be sped up by migrating etcd data directory, as described [here](https://coreos.com/etcd/docs/latest/admin_guide.html#member-migration) (we are considering adding support for etcd data dir migration in future). 
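For illustration, replicating an existing master with the GCE kube-up scripts looks roughly like this; the zone value is an example:

```shell
# Adds a replica of an existing master in the given zone (sketch)
KUBE_GCE_ZONE=europe-west1-c KUBE_REPLICATE_EXISTING_MASTER=true ./cluster/kube-up.sh
```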
-{{% /capture %}} -{{% capture discussion %}} + + ## Implementation notes @@ -173,4 +174,4 @@ To make such deployment secure, communication between etcd instances is authoriz [Automated HA master deployment - design doc](https://git.k8s.io/community/contributors/design-proposals/cluster-lifecycle/ha_master.md) -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/ip-masq-agent.md b/content/en/docs/tasks/administer-cluster/ip-masq-agent.md index bdc871ddd9e5c..9c2e1d3d5d479 100644 --- a/content/en/docs/tasks/administer-cluster/ip-masq-agent.md +++ b/content/en/docs/tasks/administer-cluster/ip-masq-agent.md @@ -1,19 +1,20 @@ --- title: IP Masquerade Agent User Guide -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This page shows how to configure and enable the ip-masq-agent. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture discussion %}} + + ## IP Masquerade Agent User Guide The ip-masq-agent configures iptables rules to hide a pod's IP address behind the cluster node's IP address. This is typically done when sending traffic to destinations outside the cluster's pod [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) range. @@ -53,9 +54,9 @@ MASQUERADE all -- anywhere anywhere /* ip-masq-agent: By default, in GCE/Google Kubernetes Engine starting with Kubernetes version 1.7.0, if network policy is enabled or you are using a cluster CIDR not in the 10.0.0.0/8 range, the ip-masq-agent will run in your cluster. If you are running in another environment, you can add the ip-masq-agent [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) to your cluster: -{{% /capture %}} -{{% capture steps %}} + + ## Create an ip-masq-agent To create an ip-masq-agent, run the following kubectl command: @@ -110,4 +111,4 @@ nonMasqueradeCIDRs: resyncInterval: 60s masqLinkLocal: true ``` -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/kms-provider.md b/content/en/docs/tasks/administer-cluster/kms-provider.md index d90ca853cfb95..34cc1d6b66198 100644 --- a/content/en/docs/tasks/administer-cluster/kms-provider.md +++ b/content/en/docs/tasks/administer-cluster/kms-provider.md @@ -2,13 +2,14 @@ reviewers: - smarterclayton title: Using a KMS provider for data encryption -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This page shows how to configure a Key Management Service (KMS) provider and plugin to enable secret data encryption. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} @@ -18,9 +19,9 @@ This page shows how to configure a Key Management Service (KMS) provider and plu {{< feature-state for_k8s_version="v1.12" state="beta" >}} -{{% /capture %}} -{{% capture steps %}} + + The KMS encryption provider uses an envelope encryption scheme to encrypt data in etcd. The data is encrypted using a data encryption key (DEK); a new DEK is generated for each encryption. The DEKs are encrypted with a key encryption key (KEK) that is stored and managed in a remote KMS. The KMS provider uses gRPC to communicate with a specific KMS plugin. The KMS plugin, which is implemented as a gRPC server and deployed on the same host(s) as the Kubernetes master(s), is responsible for all communication with the remote KMS. 
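Concretely, the apiserver is pointed at the plugin through an `EncryptionConfiguration` resembling the sketch below; the plugin name and socket path are placeholders for your deployment:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          name: myKmsPlugin                     # hypothetical plugin name
          endpoint: unix:///tmp/socketfile.sock # gRPC socket served by the plugin
          cachesize: 100
          timeout: 3s
      - identity: {}                            # fallback for reading unencrypted data
```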
@@ -183,4 +184,4 @@ To disable encryption at rest: ``` kubectl get secrets --all-namespaces -o json | kubectl replace -f - ``` -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes.md b/content/en/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes.md index 28df69c13aecb..e82c53f3a6a66 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes.md @@ -4,20 +4,21 @@ reviewers: - patricklang title: Adding Windows nodes min-kubernetes-server-version: 1.17 -content_template: templates/tutorial +content_type: tutorial weight: 30 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.18" state="beta" >}} You can use Kubernetes to run a mixture of Linux and Windows nodes, so you can mix Pods that run on Linux with Pods that run on Windows. This page shows how to register Windows nodes to your cluster. -{{% /capture %}} -{{% capture prerequisites %}} {{< version-check >}} + +## {{% heading "prerequisites" %}} + {{< version-check >}} * Obtain a [Windows Server 2019 license](https://www.microsoft.com/en-us/cloud-platform/windows-server-pricing) (or higher) in order to configure the Windows node that hosts Windows containers. If you are using VXLAN/Overlay networking you must also have [KB4489899](ht * A Linux-based Kubernetes kubeadm cluster in which you have access to the control plane (see [Creating a single control-plane cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/)). -{{% /capture %}} -{{% capture objectives %}} + +## {{% heading "objectives" %}} + * Register a Windows node to the cluster * Configure networking so Pods and Services on Linux and Windows can communicate with each other -{{% /capture %}} -{{% capture lessoncontent %}} + + ## Getting Started: Adding a Windows Node to Your Cluster @@ -176,10 +178,11 @@ kubectl -n kube-system get pods -l app=flannel Once the flannel Pod is running, your node should enter the `Ready` state and then be available to handle workloads. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + - [Upgrading Windows kubeadm nodes](/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes) -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md index 6329c4a3959b4..54f43b840b95a 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md @@ -2,25 +2,26 @@ reviewers: - sig-cluster-lifecycle title: Certificate Management with kubeadm -content_template: templates/task +content_type: task weight: 10 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.15" state="stable" >}} Client certificates generated by [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) expire after 1 year. This page explains how to manage certificate renewals with kubeadm. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + You should be familiar with [PKI certificates and requirements in Kubernetes](/docs/setup/best-practices/certificates/).
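Before renewing anything, it is useful to see when the current certificates expire; in the kubeadm releases this page targets, that is done with:

```shell
# Lists each certificate managed by kubeadm and its expiration date
kubeadm alpha certs check-expiration
```

In newer kubeadm versions the command has moved out of `alpha`, so the exact invocation may differ.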
-{{% /capture %}} -{{% capture steps %}} + + ## Using custom certificates {#custom-certificates} @@ -242,4 +243,4 @@ After a certificate is signed using your preferred method, the certificate and t [cert-cas]: /docs/setup/best-practices/certificates/#single-root-ca [cert-table]: /docs/setup/best-practices/certificates/#all-certificates -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md index f0368ecaf9179..bb7f67ae5ff26 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md @@ -2,12 +2,12 @@ reviewers: - sig-cluster-lifecycle title: Upgrading kubeadm clusters -content_template: templates/task +content_type: task weight: 20 min-kubernetes-server-version: 1.18 --- -{{% capture overview %}} + This page explains how to upgrade a Kubernetes cluster created with kubeadm from version 1.17.x to version 1.18.x, and from version 1.18.x to 1.18.y (where `y > x`). @@ -26,9 +26,10 @@ The upgrade workflow at high level is the following: 1. Upgrade additional control plane nodes. 1. Upgrade worker nodes. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + - You need to have a kubeadm Kubernetes cluster running version 1.17.0 or later. - [Swap must be disabled](https://serverfault.com/questions/684771/best-way-to-disable-swap-in-linux). @@ -44,9 +45,9 @@ The upgrade workflow at high level is the following: or between PATCH versions of the same MINOR. That is, you cannot skip MINOR versions when you upgrade. For example, you can upgrade from 1.y to 1.y+1, but not from 1.y to 1.y+2. -{{% /capture %}} -{{% capture steps %}} + + ## Determine which version to upgrade to @@ -395,7 +396,7 @@ kubectl get nodes The `STATUS` column should show `Ready` for all your nodes, and the version number should be updated. -{{% /capture %}} + ## Recovering from a failure state diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes.md b/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes.md index a6c626a627799..35857d09a0205 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes.md @@ -1,29 +1,30 @@ --- title: Upgrading Windows nodes min-kubernetes-server-version: 1.17 -content_template: templates/task +content_type: task weight: 40 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.18" state="beta" >}} This page explains how to upgrade a Windows node [created with kubeadm](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes). -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} * Familiarize yourself with [the process for upgrading the rest of your kubeadm cluster](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade). You will want to upgrade the control plane nodes before upgrading your Windows nodes. 
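On the control plane side, that upgrade typically follows the usual kubeadm sequence, sketched here with an illustrative version number:

```shell
# On a control plane node: preview the upgrade, then apply it
kubeadm upgrade plan
kubeadm upgrade apply v1.18.0
```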
-{{% /capture %}} -{{% capture steps %}} + + ## Upgrading worker nodes @@ -90,4 +91,4 @@ again replacing {{< param "fullversion" >}} with your desired version: ``` -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/kubelet-config-file.md b/content/en/docs/tasks/administer-cluster/kubelet-config-file.md index 6ffe290a1921e..54cd8373705b2 100644 --- a/content/en/docs/tasks/administer-cluster/kubelet-config-file.md +++ b/content/en/docs/tasks/administer-cluster/kubelet-config-file.md @@ -3,10 +3,10 @@ reviewers: - mtaufen - dawnchen title: Set Kubelet parameters via a config file -content_template: templates/task +content_type: task --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.10" state="beta" >}} A subset of the Kubelet's configuration parameters may be @@ -16,15 +16,16 @@ This functionality is considered beta in v1.10. Providing parameters via a config file is the recommended approach because it simplifies node deployment and configuration management. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + - A v1.10 or higher Kubelet binary must be installed for beta functionality. -{{% /capture %}} -{{% capture steps %}} + + ## Create the config file @@ -67,9 +68,9 @@ If `--config` is provided and the values are not specified via the command line, defaults for the `KubeletConfiguration` version apply. In the above example, this version is `kubelet.config.k8s.io/v1beta1`. -{{% /capture %}} -{{% capture discussion %}} + + ## Relationship to Dynamic Kubelet Config @@ -77,6 +78,6 @@ If you are using the [Dynamic Kubelet Configuration](/docs/tasks/administer-clus feature, the combination of configuration provided via `--config` and any flags which override these values is considered the default "last known good" configuration by the automatic rollback mechanism. -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/limit-storage-consumption.md b/content/en/docs/tasks/administer-cluster/limit-storage-consumption.md index 83ec069915518..13dec384ea6e6 100644 --- a/content/en/docs/tasks/administer-cluster/limit-storage-consumption.md +++ b/content/en/docs/tasks/administer-cluster/limit-storage-consumption.md @@ -1,9 +1,9 @@ --- title: Limit Storage Consumption -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This example demonstrates an easy way to limit the amount of storage consumed in a namespace. The following resources are used in the demonstration: [ResourceQuota](/docs/con [LimitRange](/docs/tasks/administer-cluster/memory-default-namespace/), and [PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/). -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Scenario: Limiting Storage Consumption The cluster-admin is operating a cluster on behalf of a user population and the admin wants to control @@ -77,9 +78,9 @@ spec: requests.storage: "5Gi" ``` -{{% /capture %}} -{{% capture discussion %}} + + ## Summary @@ -87,6 +88,6 @@ A limit range can put a ceiling on how much storage is requested while a resourc consumed by a namespace through claim counts and cumulative storage capacity. This allows a cluster-admin to plan their cluster's storage budget without risk of any one project going over their allotment.
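To make the summary concrete, a per-claim ceiling and floor look like the following LimitRange sketch; the values are illustrative:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: storagelimits
spec:
  limits:
    - type: PersistentVolumeClaim
      max:
        storage: 2Gi   # no single claim may request more than this
      min:
        storage: 1Gi   # nor less than this
```

Paired with a `requests.storage` quota like the one shown above, this bounds both the size of any one claim and the namespace's cumulative total.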
-{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace.md b/content/en/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace.md index a1d4c786c63b2..d3d1541d270d2 100644 --- a/content/en/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace.md +++ b/content/en/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace.md @@ -1,11 +1,11 @@ --- title: Configure Minimum and Maximum CPU Constraints for a Namespace -content_template: templates/task +content_type: task weight: 40 --- -{{% capture overview %}} + This page shows how to set minimum and maximum values for the CPU resources used by Containers and Pods in a namespace. You specify minimum and maximum CPU values in a @@ -13,19 +13,20 @@ and Pods in a namespace. You specify minimum and maximum CPU values in a object. If a Pod does not meet the constraints imposed by the LimitRange, it cannot be created in the namespace. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} Your cluster must have at least 1 CPU available for use to run the task examples. -{{% /capture %}} -{{% capture steps %}} + + ## Create a namespace @@ -239,9 +240,10 @@ Delete your namespace: kubectl delete namespace constraints-cpu-example ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + ### For cluster administrators @@ -266,7 +268,7 @@ kubectl delete namespace constraints-cpu-example * [Configure Quality of Service for Pods](/docs/tasks/configure-pod-container/quality-service-pod/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace.md b/content/en/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace.md index 65a91a3538ed7..d2e15c91da0c6 100644 --- a/content/en/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace.md +++ b/content/en/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace.md @@ -1,10 +1,10 @@ --- title: Configure Default CPU Requests and Limits for a Namespace -content_template: templates/task +content_type: task weight: 20 --- -{{% capture overview %}} + This page shows how to configure default CPU requests and limits for a namespace. A Kubernetes cluster can be divided into namespaces. If a Container is created in a namespace @@ -12,14 +12,15 @@ that has a default CPU limit, and the Container does not specify its own CPU lim the Container is assigned the default CPU limit. Kubernetes assigns a default CPU request under certain conditions that are explained later in this topic. 
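The default that gets applied comes from a LimitRange in the namespace; the one used in this task looks essentially like this:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
spec:
  limits:
    - default:
        cpu: 1        # default CPU limit for Containers that specify none
      defaultRequest:
        cpu: 0.5      # default CPU request for Containers that specify none
      type: Container
```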
-{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Create a namespace @@ -163,9 +164,10 @@ Delete your namespace: kubectl delete namespace default-cpu-example ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + ### For cluster administrators @@ -189,6 +191,6 @@ kubectl delete namespace default-cpu-example * [Configure Quality of Service for Pods](/docs/tasks/configure-pod-container/quality-service-pod/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md b/content/en/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md index e6a6e1c2b0d39..a5ad383e784b3 100644 --- a/content/en/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md +++ b/content/en/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md @@ -1,11 +1,11 @@ --- title: Configure Minimum and Maximum Memory Constraints for a Namespace -content_template: templates/task +content_type: task weight: 30 --- -{{% capture overview %}} + This page shows how to set minimum and maximum values for memory used by Containers running in a namespace. You specify minimum and maximum memory values in a @@ -13,19 +13,20 @@ running in a namespace. You specify minimum and maximum memory values in a object. If a Pod does not meet the constraints imposed by the LimitRange, it cannot be created in the namespace. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} Each node in your cluster must have at least 1 GiB of memory. -{{% /capture %}} -{{% capture steps %}} + + ## Create a namespace @@ -239,9 +240,10 @@ Delete your namespace: kubectl delete namespace constraints-mem-example ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + ### For cluster administrators @@ -265,7 +267,7 @@ kubectl delete namespace constraints-mem-example * [Configure Quality of Service for Pods](/docs/tasks/configure-pod-container/quality-service-pod/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/memory-default-namespace.md b/content/en/docs/tasks/administer-cluster/manage-resources/memory-default-namespace.md index bb5070bc98d92..df7fce39f2ee8 100644 --- a/content/en/docs/tasks/administer-cluster/manage-resources/memory-default-namespace.md +++ b/content/en/docs/tasks/administer-cluster/manage-resources/memory-default-namespace.md @@ -1,27 +1,28 @@ --- title: Configure Default Memory Requests and Limits for a Namespace -content_template: templates/task +content_type: task weight: 10 --- -{{% capture overview %}} + This page shows how to configure default memory requests and limits for a namespace. If a Container is created in a namespace that has a default memory limit, and the Container does not specify its own memory limit, then the Container is assigned the default memory limit. Kubernetes assigns a default memory request under certain conditions that are explained later in this topic. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} Each node in your cluster must have at least 2 GiB of memory. 
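For orientation, the LimitRange this task creates is essentially:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
    - default:
        memory: 512Mi   # default memory limit
      defaultRequest:
        memory: 256Mi   # default memory request
      type: Container
```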
-{{% /capture %}} -{{% capture steps %}} + + ## Create a namespace @@ -170,9 +171,10 @@ Delete your namespace: kubectl delete namespace default-mem-example ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + ### For cluster administrators @@ -196,6 +198,6 @@ kubectl delete namespace default-mem-example * [Configure Quality of Service for Pods](/docs/tasks/configure-pod-container/quality-service-pod/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace.md b/content/en/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace.md index 9558766410663..d69e3d29d6c3c 100644 --- a/content/en/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace.md +++ b/content/en/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace.md @@ -1,30 +1,31 @@ --- title: Configure Memory and CPU Quotas for a Namespace -content_template: templates/task +content_type: task weight: 50 --- -{{% capture overview %}} + This page shows how to set quotas for the total amount of memory and CPU that can be used by all Containers running in a namespace. You specify quotas in a [ResourceQuota](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcequota-v1-core) object. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} Each node in your cluster must have at least 1 GiB of memory. -{{% /capture %}} -{{% capture steps %}} + + ## Create a namespace @@ -146,9 +147,10 @@ Delete your namespace: kubectl delete namespace quota-mem-cpu-example ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + ### For cluster administrators @@ -172,7 +174,7 @@ kubectl delete namespace quota-mem-cpu-example * [Configure Quality of Service for Pods](/docs/tasks/configure-pod-container/quality-service-pod/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace.md b/content/en/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace.md index 31cac82cf1016..c44a07681fe5a 100644 --- a/content/en/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace.md +++ b/content/en/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace.md @@ -1,28 +1,29 @@ --- title: Configure a Pod Quota for a Namespace -content_template: templates/task +content_type: task weight: 60 --- -{{% capture overview %}} + This page shows how to set a quota for the total number of Pods that can run in a namespace. You specify quotas in a [ResourceQuota](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcequota-v1-core) object.
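A Pod-count quota is about as small as a ResourceQuota gets; a sketch of the object this task creates:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-demo
spec:
  hard:
    pods: "2"    # at most two Pods may exist in the namespace
```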
-{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Create a namespace @@ -107,9 +108,10 @@ Delete your namespace: kubectl delete namespace quota-pod-example ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + ### For cluster administrators @@ -133,7 +135,7 @@ kubectl delete namespace quota-pod-example * [Configure Quality of Service for Pods](/docs/tasks/configure-pod-container/quality-service-pod/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md index 9e3f4d6371aa0..36f056a61af11 100644 --- a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md +++ b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md @@ -3,10 +3,10 @@ reviewers: - derekwaynecarr - janetkuo title: Namespaces Walkthrough -content_template: templates/task +content_type: task --- -{{% capture overview %}} + Kubernetes {{< glossary_tooltip text="namespaces" term_id="namespace" >}} help different projects, teams, or customers to share a Kubernetes cluster. @@ -19,16 +19,17 @@ Use of multiple namespaces is optional. This example demonstrates how to use Kubernetes namespaces to subdivide your cluster. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Prerequisites @@ -295,4 +296,4 @@ At this point, it should be clear that the resources users create in one namespa As the policy support in Kubernetes evolves, we will extend this scenario to show how you can provide different authorization rules for each namespace. -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/namespaces.md b/content/en/docs/tasks/administer-cluster/namespaces.md index 076f81d9b9525..39a3bcbaa3fad 100644 --- a/content/en/docs/tasks/administer-cluster/namespaces.md +++ b/content/en/docs/tasks/administer-cluster/namespaces.md @@ -3,19 +3,20 @@ reviewers: - derekwaynecarr - janetkuo title: Share a Cluster with Namespaces -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This page shows how to view, work in, and delete {{< glossary_tooltip text="namespaces" term_id="namespace" >}}. The page also shows how to use Kubernetes namespaces to subdivide your cluster. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * Have an [existing Kubernetes cluster](/docs/setup/). * Have a basic understanding of Kubernetes _[Pods](/docs/concepts/workloads/pods/pod/)_, _[Services](/docs/concepts/services-networking/service/)_, and _[Deployments](/docs/concepts/workloads/controllers/deployment/)_. -{{% /capture %}} -{{% capture steps %}} + + ## Viewing namespaces @@ -252,9 +253,9 @@ At this point, it should be clear that the resources users create in one namespa As the policy support in Kubernetes evolves, we will extend this scenario to show how you can provide different authorization rules for each namespace. -{{% /capture %}} -{{% capture discussion %}} + + ## Understanding the motivation for using namespaces @@ -304,12 +305,13 @@ is local to a namespace. This is useful for using the same configuration across multiple namespaces such as Development, Staging and Production. 
If you want to reach across namespaces, you need to use the fully qualified domain name (FQDN). -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [setting the namespace preference](/docs/concepts/overview/working-with-objects/namespaces/#setting-the-namespace-preference). * Learn more about [setting the namespace for a request](/docs/concepts/overview/working-with-objects/namespaces/#setting-the-namespace-for-a-request) * See [namespaces design](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/design-proposals/architecture/namespaces.md). -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md index 7046752a5f79e..9efdccfb6e242 100644 --- a/content/en/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md @@ -2,19 +2,20 @@ reviewers: - caseydavenport title: Use Calico for NetworkPolicy -content_template: templates/task +content_type: task weight: 10 --- -{{% capture overview %}} + This page shows a couple of quick ways to create a Calico cluster on Kubernetes. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + Decide whether you want to deploy a [cloud](#creating-a-calico-cluster-with-google-kubernetes-engine-gke) or [local](#creating-a-local-calico-cluster-with-kubeadm) cluster. -{{% /capture %}} -{{% capture steps %}} + + ## Creating a Calico cluster with Google Kubernetes Engine (GKE) **Prerequisite**: [gcloud](https://cloud.google.com/sdk/docs/quickstarts). @@ -44,10 +45,11 @@ Decide whether you want to deploy a [cloud](#creating-a-calico-cluster-with-goog To get a local single-host Calico cluster in fifteen minutes using kubeadm, refer to the [Calico Quickstart](https://docs.projectcalico.org/latest/getting-started/kubernetes/). -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + Once your cluster is running, you can follow the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) to try out Kubernetes NetworkPolicy. -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md index cca685d39524b..95912f4f885c2 100644 --- a/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md @@ -3,23 +3,24 @@ reviewers: - danwent - aanm title: Use Cilium for NetworkPolicy -content_template: templates/task +content_type: task weight: 20 --- -{{% capture overview %}} + This page shows how to use Cilium for NetworkPolicy. For background on Cilium, read the [Introduction to Cilium](https://docs.cilium.io/en/stable/intro). 
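Whichever provider page you follow, the Declare Network Policy task linked above exercises standard NetworkPolicy objects once the provider is installed. A minimal default-deny sketch (name and namespace are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress   # illustrative name
  namespace: default
spec:
  podSelector: {}              # an empty selector matches every Pod in the namespace
  policyTypes:
  - Ingress                    # no ingress rules are listed, so all inbound traffic is blocked
```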
-{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Deploying Cilium on Minikube for Basic Testing To get familiar with Cilium easily you can follow the @@ -75,9 +76,9 @@ For detailed instructions around deploying Cilium for production, see: This documentation includes detailed requirements, instructions and example production DaemonSet files. -{{% /capture %}} -{{% capture discussion %}} + + ## Understanding Cilium components Deploying a cluster with Cilium adds Pods to the `kube-system` namespace. To see @@ -98,14 +99,15 @@ cilium-6rxbd 1/1 Running 0 1m A `cilium` Pod runs on each node in your cluster and enforces network policy on the traffic to/from Pods on that node using Linux BPF. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + Once your cluster is running, you can follow the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) to try out Kubernetes NetworkPolicy with Cilium. Have fun, and if you have questions, contact us using the [Cilium Slack Channel](https://cilium.herokuapp.com/). -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/kube-router-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/kube-router-network-policy.md index 0111f6c21f8ab..673118e312b51 100644 --- a/content/en/docs/tasks/administer-cluster/network-policy-provider/kube-router-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/kube-router-network-policy.md @@ -2,25 +2,27 @@ reviewers: - murali-reddy title: Use Kube-router for NetworkPolicy -content_template: templates/task +content_type: task weight: 30 --- -{{% capture overview %}} + This page shows how to use [Kube-router](https://github.com/cloudnativelabs/kube-router) for NetworkPolicy. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + You need to have a Kubernetes cluster running. If you do not already have a cluster, you can create one by using any of the cluster installers like Kops, Bootkube, Kubeadm etc. -{{% /capture %}} -{{% capture steps %}} + + ## Installing Kube-router addon The Kube-router Addon comes with a Network Policy Controller that watches Kubernetes API server for any NetworkPolicy and pods updated and configures iptables rules and ipsets to allow or block traffic as directed by the policies. Please follow the [trying Kube-router with cluster installers](https://www.kube-router.io/docs/user-guide/#try-kube-router-with-cluster-installers) guide to install Kube-router addon. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + Once you have installed the Kube-router addon, you can follow the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) to try out Kubernetes NetworkPolicy. 
-{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy.md index 42577dae85952..df6adcd39fec6 100644 --- a/content/en/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy.md @@ -2,23 +2,24 @@ reviewers: - chrismarino title: Romana for NetworkPolicy -content_template: templates/task +content_type: task weight: 40 --- -{{% capture overview %}} + This page shows how to use Romana for NetworkPolicy. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + Complete steps 1, 2, and 3 of the [kubeadm getting started guide](/docs/getting-started-guides/kubeadm/). -{{% /capture %}} -{{% capture steps %}} + + ## Installing Romana with kubeadm @@ -32,12 +33,13 @@ To apply network policies use one of the following: * [Example of Romana network policy](https://github.com/romana/core/blob/master/doc/policy.md). * The NetworkPolicy API. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + Once you have installed Romana, you can follow the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) to try out Kubernetes NetworkPolicy. -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy.md index 0fcb4ea1070f7..a9d15f40a6ba2 100644 --- a/content/en/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy.md @@ -2,23 +2,24 @@ reviewers: - bboreham title: Weave Net for NetworkPolicy -content_template: templates/task +content_type: task weight: 50 --- -{{% capture overview %}} + This page shows how to use Weave Net for NetworkPolicy. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + You need to have a Kubernetes cluster. Follow the [kubeadm getting started guide](/docs/getting-started-guides/kubeadm/) to bootstrap one. -{{% /capture %}} -{{% capture steps %}} + + ## Install the Weave Net addon @@ -48,12 +49,13 @@ weave-net-pmw8w 2/2 Running 0 9d Each Node has a weave Pod, and all Pods are `Running` and `2/2 READY`. (`2/2` means that each Pod has `weave` and `weave-npc`.) -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + Once you have installed the Weave Net addon, you can follow the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) to try out Kubernetes NetworkPolicy. If you have any question, contact us at [#weave-community on Slack or Weave User Group](https://github.com/weaveworks/weave#getting-help). 
-{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/nodelocaldns.md b/content/en/docs/tasks/administer-cluster/nodelocaldns.md index cb033f2925085..8aa6b9249b55d 100644 --- a/content/en/docs/tasks/administer-cluster/nodelocaldns.md +++ b/content/en/docs/tasks/administer-cluster/nodelocaldns.md @@ -4,21 +4,22 @@ reviewers: - zihongz - sftim title: Using NodeLocal DNSCache in Kubernetes clusters -content_template: templates/task +content_type: task --- - -{{% capture overview %}} + + {{< feature-state for_k8s_version="v1.18" state="stable" >}} This page provides an overview of NodeLocal DNSCache feature in Kubernetes. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} - {{% capture steps %}} + + ## Introduction @@ -88,4 +89,4 @@ This feature can be enabled using the following steps: Once enabled, node-local-dns Pods will run in the kube-system namespace on each of the cluster nodes. This Pod runs [CoreDNS](https://github.com/coredns/coredns) in cache mode, so all CoreDNS metrics exposed by the different plugins will be available on a per-node basis. You can disable this feature by removing the DaemonSet, using `kubectl delete -f ` . You should also revert any changes you made to the kubelet configuration. - {{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/out-of-resource.md b/content/en/docs/tasks/administer-cluster/out-of-resource.md index c52415f4c64fb..a9d2ee37025db 100644 --- a/content/en/docs/tasks/administer-cluster/out-of-resource.md +++ b/content/en/docs/tasks/administer-cluster/out-of-resource.md @@ -4,10 +4,10 @@ reviewers: - vishh - timstclair title: Configure Out of Resource Handling -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + This page explains how to configure out of resource handling with `kubelet`. @@ -16,10 +16,10 @@ are low. This is especially important when dealing with incompressible compute resources, such as memory or disk space. If such resources are exhausted, nodes become unstable. -{{% /capture %}} -{{% capture body %}} + + ## Eviction Policy @@ -372,4 +372,4 @@ to prevent system OOMs, and promote eviction of workloads so cluster state can r The Pod eviction may evict more Pods than needed due to stats collection timing gap. This can be mitigated by adding the ability to get root container stats on an on-demand basis [(https://github.com/google/cadvisor/issues/1247)](https://github.com/google/cadvisor/issues/1247) in the future. -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/quota-api-object.md b/content/en/docs/tasks/administer-cluster/quota-api-object.md index faf7210384f03..1fb48c7a2b911 100644 --- a/content/en/docs/tasks/administer-cluster/quota-api-object.md +++ b/content/en/docs/tasks/administer-cluster/quota-api-object.md @@ -1,10 +1,10 @@ --- title: Configure Quotas for API Objects -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This page shows how to configure quotas for API objects, including PersistentVolumeClaims and Services. A quota restricts the number of @@ -13,17 +13,18 @@ You specify quotas in a [ResourceQuota](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcequota-v1-core) object. 
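A sketch of the object-count quota this task builds, with illustrative limits (check the task's own manifest for the exact values):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-quota-demo
  namespace: quota-object-example
spec:
  hard:
    persistentvolumeclaims: "1"   # at most one PVC in the namespace
    services.loadbalancers: "2"   # at most two Services of type LoadBalancer
    services.nodeports: "0"       # Services of type NodePort are not allowed
```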
-{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Create a namespace @@ -140,9 +141,10 @@ Delete your namespace: kubectl delete namespace quota-object-example ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + ### For cluster administrators @@ -167,7 +169,7 @@ kubectl delete namespace quota-object-example * [Configure Quality of Service for Pods](/docs/tasks/configure-pod-container/quality-service-pod/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md b/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md index af5696d622290..1e9715e8bf1d2 100644 --- a/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md +++ b/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md @@ -3,11 +3,11 @@ reviewers: - mtaufen - dawnchen title: Reconfigure a Node's Kubelet in a Live Cluster -content_template: templates/task +content_type: task min-kubernetes-server-version: v1.11 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.11" state="beta" >}} [Dynamic Kubelet Configuration](https://github.com/kubernetes/enhancements/issues/281) @@ -25,9 +25,10 @@ of nodes before rolling them out cluster-wide. Advice on configuring specific fields is available in the inline `KubeletConfiguration` [type documentation](https://github.com/kubernetes/kubernetes/blob/release-1.11/pkg/kubelet/apis/kubeletconfig/v1beta1/types.go). {{< /warning >}} -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + You need to have a Kubernetes cluster. You also need kubectl v1.11 or higher, configured to communicate with your cluster. {{< version-check >}} @@ -43,9 +44,9 @@ because there are manual alternatives. For each node that you're reconfiguring, you must set the kubelet `--dynamic-config-dir` flag to a writable directory. -{{% /capture %}} -{{% capture steps %}} + + ## Reconfiguring the kubelet on a running node in your cluster @@ -311,9 +312,9 @@ empty, since all config sources have been reset to `nil`, which indicates that the local default config is `assigned`, `active`, and `lastKnownGood`, and no error is reported. -{{% /capture %}} -{{% capture discussion %}} + + ## `kubectl patch` example You can change a Node's configSource using several different mechanisms. @@ -374,9 +375,9 @@ internal failure, see Kubelet log for details | The kubelet encountered some int {{< /table >}} -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + - For more information on configuring the kubelet via a configuration file, see [Set kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file). 
- See the reference documentation for [`NodeConfigSource`](https://kubernetes.io/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#nodeconfigsource-v1-core) -{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md b/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md index c78c9edb42cc0..4f00675c3732f 100644 --- a/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md +++ b/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md @@ -4,11 +4,11 @@ reviewers: - derekwaynecarr - dashpole title: Reserve Compute Resources for System Daemons -content_template: templates/task +content_type: task min-kubernetes-server-version: 1.8 --- -{{% capture overview %}} + Kubernetes nodes can be scheduled to `Capacity`. Pods can consume all the available capacity on a node by default. This is an issue because nodes @@ -22,19 +22,20 @@ compute resources for system daemons. Kubernetes recommends cluster administrators to configure `Node Allocatable` based on their workload density on each node. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} Your Kubernetes server must be at or later than version 1.17 to use the kubelet command line option `--reserved-cpus` to set an [explicitly reserved CPU list](#explicitly-reserved-cpu-list). -{{% /capture %}} -{{% capture steps %}} + + ## Node Allocatable @@ -226,9 +227,9 @@ more features are added. Over time, kubernetes project will attempt to bring down utilization of node system daemons, but that is not a priority as of now. So expect a drop in `Allocatable` capacity in future releases. -{{% /capture %}} -{{% capture discussion %}} + + ## Example Scenario @@ -251,4 +252,3 @@ If `kube-reserved` and/or `system-reserved` is not enforced and system daemons exceed their reservation, `kubelet` evicts pods whenever the overall node memory usage is higher than `31.5Gi` or `storage` is greater than `90Gi` -{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/tasks/administer-cluster/running-cloud-controller.md b/content/en/docs/tasks/administer-cluster/running-cloud-controller.md index 71cc28ff40bc5..aa01c902e4e66 100644 --- a/content/en/docs/tasks/administer-cluster/running-cloud-controller.md +++ b/content/en/docs/tasks/administer-cluster/running-cloud-controller.md @@ -4,10 +4,10 @@ reviewers: - thockin - wlan0 title: Cloud Controller Manager Administration -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + {{< feature-state state="beta" for_k8s_version="v1.11" >}} @@ -15,10 +15,10 @@ Since cloud providers develop and release at a different pace compared to the Ku The `cloud-controller-manager` can be linked to any cloud provider that satisfies [cloudprovider.Interface](https://github.com/kubernetes/cloud-provider/blob/master/cloud.go). For backwards compatibility, the [cloud-controller-manager](https://github.com/kubernetes/kubernetes/tree/master/cmd/cloud-controller-manager) provided in the core Kubernetes project uses the same cloud libraries as `kube-controller-manager`. Cloud providers already supported in Kubernetes core are expected to use the in-tree cloud-controller-manager to transition out of Kubernetes core. -{{% /capture %}} -{{% capture body %}} + + ## Administration @@ -82,9 +82,10 @@ A good example of this is the TLS bootstrapping feature in the Kubelet. 
TLS boot As this initiative evolves, changes will be made to address these issues in upcoming releases. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + To build and develop your own cloud controller manager, read [Developing Cloud Controller Manager](/docs/tasks/administer-cluster/developing-cloud-controller-manager/). -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/safely-drain-node.md b/content/en/docs/tasks/administer-cluster/safely-drain-node.md index 29006ff754318..e18b2ed87d322 100644 --- a/content/en/docs/tasks/administer-cluster/safely-drain-node.md +++ b/content/en/docs/tasks/administer-cluster/safely-drain-node.md @@ -5,14 +5,15 @@ reviewers: - foxish - kow3ns title: Safely Drain a Node while Respecting the PodDisruptionBudget -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This page shows how to safely drain a node, respecting the PodDisruptionBudget you have defined. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + This task assumes that you have met the following prerequisites: @@ -24,9 +25,9 @@ This task assumes that you have met the following prerequisites: and [Configured PodDisruptionBudgets](/docs/tasks/run-application/configure-pdb/) for applications that need them. -{{% /capture %}} -{{% capture steps %}} + + ## Use `kubectl drain` to remove a node from service @@ -151,13 +152,14 @@ In this case, there are two potential solutions: Kubernetes does not specify what the behavior should be in this case; it is up to the application owners and cluster owners to establish an agreement on behavior in these cases. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Follow steps to protect your application by [configuring a Pod Disruption Budget](/docs/tasks/run-application/configure-pdb/). * Learn more about [maintenance on a node](/docs/tasks/administer-cluster/cluster-management/#maintenance-on-a-node). -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/securing-a-cluster.md b/content/en/docs/tasks/administer-cluster/securing-a-cluster.md index d2d58ae702c0c..7e558fb48fe77 100644 --- a/content/en/docs/tasks/administer-cluster/securing-a-cluster.md +++ b/content/en/docs/tasks/administer-cluster/securing-a-cluster.md @@ -5,23 +5,24 @@ reviewers: - ericchiang - destijl title: Securing a Cluster -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This document covers topics related to protecting a cluster from accidental or malicious access and provides recommendations on overall security. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Controlling access to the Kubernetes API @@ -254,6 +255,6 @@ Join the [kubernetes-announce](https://groups.google.com/forum/#!forum/kubernete group for emails about security announcements. See the [security reporting](/security/) page for more on how to report vulnerabilities. 
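The drain task above only evicts Pods whose PodDisruptionBudget allows it. For orientation, a minimal PDB sketch (the label, count, and the `policy/v1beta1` API version reflect assumptions current when this patch was written):

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb                # illustrative name
spec:
  minAvailable: 2             # kubectl drain blocks evictions that would go below this
  selector:
    matchLabels:
      app: zookeeper          # illustrative label
```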
-{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/sysctl-cluster.md b/content/en/docs/tasks/administer-cluster/sysctl-cluster.md index f96c066dd5e0d..56a398c35fd4a 100644 --- a/content/en/docs/tasks/administer-cluster/sysctl-cluster.md +++ b/content/en/docs/tasks/administer-cluster/sysctl-cluster.md @@ -2,25 +2,26 @@ title: Using sysctls in a Kubernetes Cluster reviewers: - sttts -content_template: templates/task +content_type: task --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.12" state="beta" >}} This document describes how to configure and use kernel parameters within a Kubernetes cluster using the {{< glossary_tooltip term_id="sysctl" >}} interface. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Listing all Sysctl Parameters @@ -140,9 +141,9 @@ spec: value: "65536" ... ``` -{{% /capture %}} -{{% capture discussion %}} + + {{< warning >}} Due to their nature of being _unsafe_, the use of _unsafe_ sysctls @@ -210,4 +211,4 @@ spec: ... ``` -{{% /capture %}} + diff --git a/content/en/docs/tasks/administer-cluster/topology-manager.md b/content/en/docs/tasks/administer-cluster/topology-manager.md index 156146e1c2d69..8455bb0d8d146 100644 --- a/content/en/docs/tasks/administer-cluster/topology-manager.md +++ b/content/en/docs/tasks/administer-cluster/topology-manager.md @@ -8,11 +8,11 @@ reviewers: - nolancon - bg-chun -content_template: templates/task +content_type: task min-kubernetes-server-version: v1.18 --- -{{% capture overview %}} + {{< feature-state state="beta" for_k8s_version="v1.18" >}} @@ -22,15 +22,16 @@ In order to extract the best performance, optimizations related to CPU isolation _Topology Manager_ is a Kubelet component that aims to co-ordinate the set of components that are responsible for these optimizations. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## How Topology Manager Works @@ -216,4 +217,4 @@ Using this information the Topology Manager calculates the optimal hint for the 3. The Device Manager and the CPU Manager are the only components to adopt the Topology Manager's HintProvider interface. This means that NUMA alignment can only be achieved for resources managed by the CPU Manager and the Device Manager. Memory or Hugepages are not considered by the Topology Manager for NUMA alignment. -{{% /capture %}} + diff --git a/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md b/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md index 181b92b8e1eae..5e79704cc48c9 100644 --- a/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md +++ b/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md @@ -1,20 +1,21 @@ --- title: Assign CPU Resources to Containers and Pods -content_template: templates/task +content_type: task weight: 20 --- -{{% capture overview %}} + This page shows how to assign a CPU *request* and a CPU *limit* to a container. Containers cannot use more CPU than the configured limit. Provided the system has CPU time free, a container is guaranteed to be allocated as much CPU as it requests. 
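A sketch of the request/limit shape this task configures (image and values are illustrative; the task's own manifest may differ):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
  namespace: cpu-example
spec:
  containers:
  - name: cpu-demo-ctr
    image: nginx        # illustrative image
    resources:
      requests:
        cpu: "0.5"      # the scheduler guarantees this much CPU when time is free
      limits:
        cpu: "1"        # the container is throttled beyond one CPU
```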
-{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} @@ -48,10 +49,10 @@ NAME v1beta1.metrics.k8s.io ``` -{{% /capture %}} -{{% capture steps %}} + + ## Create a namespace @@ -239,9 +240,10 @@ Delete your namespace: kubectl delete namespace cpu-example ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + ### For app developers @@ -266,4 +268,4 @@ kubectl delete namespace cpu-example * [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md b/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md index e8f7d8073ae9c..394f435d1250a 100644 --- a/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md +++ b/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md @@ -1,19 +1,20 @@ --- title: Assign Memory Resources to Containers and Pods -content_template: templates/task +content_type: task weight: 10 --- -{{% capture overview %}} + This page shows how to assign a memory *request* and a memory *limit* to a Container. A Container is guaranteed to have as much memory as it requests, but is not allowed to use more memory than its limit. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} @@ -46,9 +47,9 @@ NAME v1beta1.metrics.k8s.io ``` -{{% /capture %}} -{{% capture steps %}} + + ## Create a namespace @@ -330,9 +331,10 @@ Delete your namespace. This deletes all the Pods that you created for this task: kubectl delete namespace mem-example ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + ### For app developers @@ -356,7 +358,7 @@ kubectl delete namespace mem-example * [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md b/content/en/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md index 16773cd215ba0..8306724c1d900 100644 --- a/content/en/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md +++ b/content/en/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md @@ -1,22 +1,23 @@ --- title: Assign Pods to Nodes using Node Affinity min-kubernetes-server-version: v1.10 -content_template: templates/task +content_type: task weight: 120 --- -{{% capture overview %}} + This page shows how to assign a Kubernetes Pod to a particular node using Node Affinity in a Kubernetes cluster. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Add a label to a node @@ -112,9 +113,10 @@ This means that the pod will prefer a node that has a `disktype=ssd` label. nginx 1/1 Running 0 13s 10.200.0.4 worker0 ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + Learn more about [Node Affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity). 
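A sketch of the preferred node-affinity rule this task builds, assuming the `disktype=ssd` label added to the node earlier in the page:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd       # the scheduler prefers, but does not require, a matching node
  containers:
  - name: nginx
    image: nginx
```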
-{{% /capture %}} + diff --git a/content/en/docs/tasks/configure-pod-container/assign-pods-nodes.md b/content/en/docs/tasks/configure-pod-container/assign-pods-nodes.md index b5f6876e6b668..f1e6e6e9efe53 100644 --- a/content/en/docs/tasks/configure-pod-container/assign-pods-nodes.md +++ b/content/en/docs/tasks/configure-pod-container/assign-pods-nodes.md @@ -1,21 +1,22 @@ --- title: Assign Pods to Nodes -content_template: templates/task +content_type: task weight: 120 --- -{{% capture overview %}} + This page shows how to assign a Kubernetes Pod to a particular node in a Kubernetes cluster. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Add a label to a node @@ -94,10 +95,11 @@ You can also schedule a pod to one specific node via setting `nodeName`. Use the configuration file to create a pod that will get scheduled on `foo-node` only. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [labels and selectors](/docs/concepts/overview/working-with-objects/labels/). * Learn more about [nodes](/docs/concepts/architecture/nodes/). -{{% /capture %}} + diff --git a/content/en/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md b/content/en/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md index 57b5fad6c69dc..f5116e76917da 100644 --- a/content/en/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md +++ b/content/en/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md @@ -1,27 +1,28 @@ --- title: Attach Handlers to Container Lifecycle Events -content_template: templates/task +content_type: task weight: 140 --- -{{% capture overview %}} + This page shows how to attach handlers to Container lifecycle events. Kubernetes supports the postStart and preStop events. Kubernetes sends the postStart event immediately after a Container is started, and it sends the preStop event immediately before the Container is terminated. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Define postStart and preStop handlers @@ -56,11 +57,11 @@ The output shows the text written by the postStart handler: Hello from the postStart handler -{{% /capture %}} -{{% capture discussion %}} + + ## Discussion @@ -82,10 +83,11 @@ This means that the preStop hook is not invoked when the Pod is *completed*. This limitation is tracked in [issue #55087](https://github.com/kubernetes/kubernetes/issues/55807). {{< /note >}} -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [Container lifecycle hooks](/docs/concepts/containers/container-lifecycle-hooks/). * Learn more about the [lifecycle of a Pod](/docs/concepts/workloads/pods/pod-lifecycle/). 
@@ -97,6 +99,6 @@ This limitation is tracked in [issue #55087](https://github.com/kubernetes/kuber * [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core) * See `terminationGracePeriodSeconds` in [PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core) -{{% /capture %}} + diff --git a/content/en/docs/tasks/configure-pod-container/configure-gmsa.md b/content/en/docs/tasks/configure-pod-container/configure-gmsa.md index 8045ae9a02265..82d3d87498a5b 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-gmsa.md +++ b/content/en/docs/tasks/configure-pod-container/configure-gmsa.md @@ -1,10 +1,10 @@ --- title: Configure GMSA for Windows Pods and containers -content_template: templates/task +content_type: task weight: 20 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.18" state="stable" >}} @@ -12,9 +12,10 @@ This page shows how to configure [Group Managed Service Accounts](https://docs.m In Kubernetes, GMSA credential specs are configured at a Kubernetes cluster-wide scope as Custom Resources. Windows Pods, as well as individual containers within a Pod, can be configured to use a GMSA for domain based functions (e.g. Kerberos authentication) when interacting with other Windows services. As of v1.16, the Docker runtime supports GMSA for Windows workloads. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + You need to have a Kubernetes cluster and the `kubectl` command-line tool must be configured to communicate with your cluster. The cluster is expected to have Windows worker nodes. This section covers a set of initial steps required once for each cluster: @@ -43,9 +44,9 @@ A [script](https://github.com/kubernetes-sigs/windows-gmsa/blob/master/admission The [YAML template](https://github.com/kubernetes-sigs/windows-gmsa/blob/master/admission-webhook/deploy/gmsa-webhook.yml.tpl) used by the script may also be used to deploy the webhooks and associated objects manually (with appropriate substitutions for the parameters) -{{% /capture %}} -{{% capture steps %}} + + ## Configure GMSAs and Windows nodes in Active Directory Before Pods in Kubernetes can be configured to use GMSAs, the desired GMSAs need to be provisioned in Active Directory as described in the [Windows GMSA documentation](https://docs.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/getting-started-with-group-managed-service-accounts#BKMK_Step1). Windows worker nodes (that are part of the Kubernetes cluster) need to be configured in Active Directory to access the secret credentials associated with the desired GMSA as described in the [Windows GMSA documentation](https://docs.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/getting-started-with-group-managed-service-accounts#to-add-member-hosts-using-the-set-adserviceaccount-cmdlet) @@ -252,4 +253,4 @@ If the above command corrects the error, you can automate the step by adding the If you add the `lifecycle` section show above to your Pod spec, the Pod will execute the commands listed to restart the `netlogon` service until the `nltest.exe /query` command exits without error. 
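Both the lifecycle-events task earlier in this patch and the GMSA note above hang handlers off the container `lifecycle` field. A minimal postStart/preStop sketch (the commands are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "nginx -s quit; while killall -0 nginx; do sleep 1; done"]
```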
-{{% /capture %}} + diff --git a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md index 19b077ab35ef9..ed5aa2404408f 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md +++ b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md @@ -1,10 +1,10 @@ --- title: Configure Liveness, Readiness and Startup Probes -content_template: templates/task +content_type: task weight: 110 --- -{{% capture overview %}} + This page shows how to configure liveness, readiness and startup probes for containers. @@ -25,15 +25,16 @@ it succeeds, making sure those probes don't interfere with the application start This can be used to adopt liveness checks on slow starting containers, avoiding them getting killed by the kubelet before they are up and running. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Define a liveness command @@ -360,9 +361,10 @@ For a TCP probe, the kubelet makes the probe connection at the node, not in the means that you can not use a service name in the `host` parameter since the kubelet is unable to resolve it. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [Container Probes](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes). @@ -373,6 +375,6 @@ You can also read the API references for: * [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core) * [Probe](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#probe-v1-core) -{{% /capture %}} + diff --git a/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md b/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md index 024e6929c74f3..6ff6c21530244 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md +++ b/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md @@ -1,10 +1,10 @@ --- title: Configure a Pod to Use a PersistentVolume for Storage -content_template: templates/task +content_type: task weight: 60 --- -{{% capture overview %}} + This page shows you how to configure a Pod to use a {{< glossary_tooltip text="PersistentVolumeClaim" term_id="persistent-volume-claim" >}} @@ -20,9 +20,10 @@ PersistentVolume. 1. You create a Pod that uses the above PersistentVolumeClaim for storage. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * You need to have a Kubernetes cluster that has only one Node, and the {{< glossary_tooltip text="kubectl" term_id="kubectl" >}} @@ -33,9 +34,9 @@ do not already have a single-node cluster, you can create one by using * Familiarize yourself with the material in [Persistent Volumes](/docs/concepts/storage/persistent-volumes/). -{{% /capture %}} -{{% capture steps %}} + + ## Create an index.html file on your Node @@ -237,10 +238,10 @@ sudo rmdir /mnt/data You can now close the shell to your Node. -{{% /capture %}} -{{% capture discussion %}} + + ## Access control @@ -270,10 +271,11 @@ When a Pod consumes a PersistentVolume, the GIDs associated with the PersistentVolume are not present on the Pod resource itself. 
{{< /note >}} -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [PersistentVolumes](/docs/concepts/storage/persistent-volumes/). * Read the [Persistent Storage design document](https://git.k8s.io/community/contributors/design-proposals/storage/persistent-storage.md). @@ -285,6 +287,6 @@ PersistentVolume are not present on the Pod resource itself. * [PersistentVolumeClaim](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaim-v1-core) * [PersistentVolumeClaimSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaimspec-v1-core) -{{% /capture %}} + diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md index c7f80b0fad5a8..42eff59db08e7 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -1,24 +1,25 @@ --- title: Configure a Pod to Use a ConfigMap -content_template: templates/task +content_type: task weight: 150 card: name: tasks weight: 50 --- -{{% capture overview %}} + ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable. This page provides a series of usage examples demonstrating how to create ConfigMaps and configure Pods using data stored in ConfigMaps. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Create a ConfigMap @@ -628,9 +629,9 @@ When a ConfigMap already being consumed in a volume is updated, projected keys a A container using a ConfigMap as a [subPath](/docs/concepts/storage/volumes/#using-subpath) volume will not receive ConfigMap updates. {{< /note >}} -{{% /capture %}} -{{% capture discussion %}} + + ## Understanding ConfigMaps and Pods @@ -680,9 +681,10 @@ data: - You can't use ConfigMaps for {{< glossary_tooltip text="static pods" term_id="static-pod" >}}, because the Kubelet does not support this. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Follow a real world example of [Configuring Redis using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/). -{{% /capture %}} + diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-initialization.md b/content/en/docs/tasks/configure-pod-container/configure-pod-initialization.md index a418a8d7c0bc6..9a8a33f655a97 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-pod-initialization.md +++ b/content/en/docs/tasks/configure-pod-container/configure-pod-initialization.md @@ -1,22 +1,23 @@ --- title: Configure Pod Initialization -content_template: templates/task +content_type: task weight: 130 --- -{{% capture overview %}} + This page shows how to use an Init Container to initialize a Pod before an application Container runs. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Create a Pod that has an Init Container @@ -78,9 +79,10 @@ The output shows that nginx is serving the web page that was written by the init
container:

Kubernetes is open source giving you the freedom to take advantage ...

... -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [communicating between Containers running in the same Pod](/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/). @@ -88,6 +90,6 @@ The output shows that nginx is serving the web page that was written by the init * Learn more about [Volumes](/docs/concepts/storage/volumes/). * Learn more about [Debugging Init Containers](/docs/tasks/debug-application-cluster/debug-init-containers/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/configure-pod-container/configure-projected-volume-storage.md b/content/en/docs/tasks/configure-pod-container/configure-projected-volume-storage.md index ec6f2d9528f03..ad99a05c27a98 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-projected-volume-storage.md +++ b/content/en/docs/tasks/configure-pod-container/configure-projected-volume-storage.md @@ -3,11 +3,11 @@ reviewers: - jpeeler - pmorie title: Configure a Pod to Use a Projected Volume for Storage -content_template: templates/task +content_type: task weight: 70 --- -{{% capture overview %}} + This page shows how to use a [`projected`](/docs/concepts/storage/volumes/#projected) Volume to mount several existing volume sources into the same directory. Currently, `secret`, `configMap`, `downwardAPI`, and `serviceAccountToken` volumes can be projected. @@ -15,13 +15,14 @@ and `serviceAccountToken` volumes can be projected. {{< note >}} `serviceAccountToken` is not a volume type. {{< /note >}} -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Configure a projected volume for a pod In this exercise, you create username and password {{< glossary_tooltip text="Secrets" term_id="secret" >}} from local files. You then create a Pod that runs one container, using a [`projected`](/docs/concepts/storage/volumes/#projected) Volume to mount the Secrets into the same shared directory. @@ -77,9 +78,10 @@ kubectl delete pod test-projected-volume kubectl delete secret user pass ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [`projected`](/docs/concepts/storage/volumes/#projected) volumes. * Read the [all-in-one volume](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/design-proposals/node/all-in-one-volume.md) design document. -{{% /capture %}} + diff --git a/content/en/docs/tasks/configure-pod-container/configure-runasusername.md b/content/en/docs/tasks/configure-pod-container/configure-runasusername.md index ac912327f321e..12c10a9ddf24d 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-runasusername.md +++ b/content/en/docs/tasks/configure-pod-container/configure-runasusername.md @@ -1,24 +1,25 @@ --- title: Configure RunAsUserName for Windows pods and containers -content_template: templates/task +content_type: task weight: 20 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.18" state="stable" >}} This page shows how to use the `runAsUserName` setting for Pods and containers that will run on Windows nodes. This is roughly equivalent of the Linux-specific `runAsUser` setting, allowing you to run applications in a container as a different username than the default. 
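A sketch of the `runAsUserName` setting just described, applied at the Pod level (the username and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: run-as-username-demo
spec:
  securityContext:
    windowsOptions:
      runAsUserName: "ContainerUser"   # applies to every container in the Pod
  containers:
  - name: run-as-username-demo
    image: mcr.microsoft.com/windows/servercore:ltsc2019   # illustrative Windows image
    command: ["ping", "-t", "localhost"]
  nodeSelector:
    kubernetes.io/os: windows
```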
-{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + You need to have a Kubernetes cluster and the kubectl command-line tool must be configured to communicate with your cluster. The cluster is expected to have Windows worker nodes where pods with containers running Windows workloads will get scheduled. -{{% /capture %}} -{{% capture steps %}} + + ## Set the Username for a Pod @@ -114,12 +115,12 @@ Examples of acceptable values for the `runAsUserName` field: `ContainerAdministr For more information about these limitations, check [here](https://support.microsoft.com/en-us/help/909264/naming-conventions-in-active-directory-for-computers-domains-sites-and) and [here](https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.localaccounts/new-localuser?view=powershell-5.1). -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [Guide for scheduling Windows containers in Kubernetes](/docs/setup/production-environment/windows/user-guide-windows-containers/) * [Managing Workload Identity with Group Managed Service Accounts (GMSA)](/docs/setup/production-environment/windows/user-guide-windows-containers/#managing-workload-identity-with-group-managed-service-accounts) * [Configure GMSA for Windows pods and containers](/docs/tasks/configure-pod-container/configure-gmsa/) -{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/tasks/configure-pod-container/configure-service-account.md b/content/en/docs/tasks/configure-pod-container/configure-service-account.md index 021a8feb22d6b..eaaabb9e94e37 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-service-account.md +++ b/content/en/docs/tasks/configure-pod-container/configure-service-account.md @@ -4,11 +4,11 @@ reviewers: - liggitt - thockin title: Configure Service Accounts for Pods -content_template: templates/task +content_type: task weight: 90 --- -{{% capture overview %}} + A service account provides an identity for processes that run in a Pod. {{< note >}} This document is a user introduction to Service Accounts and describes how service accounts behave in a cluster set up as recommended by the Kubernetes project. Your cluster administrator may have customized the behavior in your cluster, in which case this documentation may not apply. {{< /note >}} When you (a human) access the cluster (for example, using `kubectl`), you are authenticated by the apiserver as a particular User Account (currently this is usually `admin`, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, `default`).
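The task that follows attaches an identity to a Pod through `spec.serviceAccountName`. A minimal sketch, assuming a ServiceAccount named `build-robot` already exists in the namespace:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot   # must exist before the Pod is created
  containers:
  - name: main
    image: nginx                    # illustrative image
```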
-{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + See also: @@ -380,4 +382,4 @@ See also: - [Service Account Signing Key Retrieval KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/20190730-oidc-discovery.md) - [OIDC Discovery Spec](https://openid.net/specs/openid-connect-discovery-1_0.html) -{{% /capture %}} + diff --git a/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md b/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md index bec97a2975bde..69e665b42ea43 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md +++ b/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md @@ -1,10 +1,10 @@ --- title: Configure a Pod to Use a Volume for Storage -content_template: templates/task +content_type: task weight: 50 --- -{{% capture overview %}} + This page shows how to configure a Pod to use a Volume for storage. @@ -14,15 +14,16 @@ consistent storage that is independent of the Container, you can use a [Volume](/docs/concepts/storage/volumes/). This is especially important for stateful applications, such as key-value stores (such as Redis) and databases. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Configure a volume for a Pod @@ -126,9 +127,10 @@ of `Always`. kubectl delete pod redis ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * See [Volume](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volume-v1-core). @@ -140,6 +142,6 @@ GCE and EBS on EC2, which are preferred for critical data and will handle details such as mounting and unmounting the devices on the nodes. See [Volumes](/docs/concepts/storage/volumes/) for more details. -{{% /capture %}} + diff --git a/content/en/docs/tasks/configure-pod-container/extended-resource.md b/content/en/docs/tasks/configure-pod-container/extended-resource.md index 36d957ca01581..25fa11b0d9f6b 100644 --- a/content/en/docs/tasks/configure-pod-container/extended-resource.md +++ b/content/en/docs/tasks/configure-pod-container/extended-resource.md @@ -1,19 +1,20 @@ --- title: Assign Extended Resources to a Container -content_template: templates/task +content_type: task weight: 40 --- -{{% capture overview %}} + {{< feature-state state="stable" >}} This page shows how to assign extended resources to a Container. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} @@ -21,10 +22,10 @@ Before you do this exercise, do the exercise in [Advertise Extended Resources for a Node](/docs/tasks/administer-cluster/extended-resource-node/). That will configure one of your Nodes to advertise a dongle resource. 
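A sketch of requesting the advertised `example.com/dongle` resource (the count is illustrative; extended resources only accept integer amounts):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: extended-resource-demo
spec:
  containers:
  - name: extended-resource-demo-ctr
    image: nginx                # illustrative image
    resources:
      requests:
        example.com/dongle: 3   # scheduled only onto a node advertising at least 3 dongles
      limits:
        example.com/dongle: 3
```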
-{{% /capture %}} -{{% capture steps %}} + + ## Assign an extended resource to a Pod @@ -127,9 +128,10 @@ kubectl delete pod extended-resource-demo kubectl delete pod extended-resource-demo-2 ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + ### For application developers @@ -140,4 +142,4 @@ kubectl delete pod extended-resource-demo-2 * [Advertise Extended Resources for a Node](/docs/tasks/administer-cluster/extended-resource-node/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md index 9184883003168..ce0b5b3656a4c 100644 --- a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md +++ b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md @@ -1,26 +1,27 @@ --- title: Pull an Image from a Private Registry -content_template: templates/task +content_type: task weight: 100 --- -{{% capture overview %}} + This page shows how to create a Pod that uses a Secret to pull an image from a private Docker registry or repository. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} * To do this exercise, you need a [Docker ID](https://docs.docker.com/docker-id/) and password. -{{% /capture %}} -{{% capture steps %}} + + ## Log in to Docker @@ -200,9 +201,10 @@ kubectl apply -f my-private-reg-pod.yaml kubectl get pod private-reg ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [Secrets](/docs/concepts/configuration/secret/). * Learn more about [using a private registry](/docs/concepts/containers/images/#using-a-private-registry). @@ -211,5 +213,5 @@ kubectl get pod private-reg * See [Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core). * See the `imagePullSecrets` field of [PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core). -{{% /capture %}} + diff --git a/content/en/docs/tasks/configure-pod-container/quality-service-pod.md b/content/en/docs/tasks/configure-pod-container/quality-service-pod.md index cd9edd0410945..dec9e8db9160e 100644 --- a/content/en/docs/tasks/configure-pod-container/quality-service-pod.md +++ b/content/en/docs/tasks/configure-pod-container/quality-service-pod.md @@ -1,27 +1,28 @@ --- title: Configure Quality of Service for Pods -content_template: templates/task +content_type: task weight: 30 --- -{{% capture overview %}} + This page shows how to configure Pods so that they will be assigned particular Quality of Service (QoS) classes. Kubernetes uses QoS classes to make decisions about scheduling and evicting Pods. 
-{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## QoS classes @@ -235,9 +236,10 @@ Delete your namespace: kubectl delete namespace qos-example ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + ### For app developers @@ -263,7 +265,7 @@ kubectl delete namespace qos-example * [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/) * [Control Topology Management policies on a node](/docs/tasks/administer-cluster/topology-manager/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/configure-pod-container/security-context.md b/content/en/docs/tasks/configure-pod-container/security-context.md index 0c2bb05d0c9fd..38662760b7b0b 100644 --- a/content/en/docs/tasks/configure-pod-container/security-context.md +++ b/content/en/docs/tasks/configure-pod-container/security-context.md @@ -4,11 +4,11 @@ reviewers: - mikedanese - thockin title: Configure a Security Context for a Pod or Container -content_template: templates/task +content_type: task weight: 80 --- -{{% capture overview %}} + A security context defines privilege and access control settings for a Pod or Container. Security context settings include, but are not limited to: @@ -37,15 +37,16 @@ for a comprehensive list. For more information about security mechanisms in Linux, see [Overview of Linux Kernel Security Features](https://www.linux.com/learn/overview-linux-kernel-security-features) -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Set the security context for a Pod @@ -409,9 +410,10 @@ kubectl delete pod security-context-demo-3 kubectl delete pod security-context-demo-4 ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [PodSecurityContext](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritycontext-v1-core) * [SecurityContext](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#securitycontext-v1-core) @@ -423,4 +425,4 @@ kubectl delete pod security-context-demo-4 document](https://git.k8s.io/community/contributors/design-proposals/auth/no-new-privs.md) -{{% /capture %}} + diff --git a/content/en/docs/tasks/configure-pod-container/share-process-namespace.md b/content/en/docs/tasks/configure-pod-container/share-process-namespace.md index ee227d3f9b98e..dfb8e40906af5 100644 --- a/content/en/docs/tasks/configure-pod-container/share-process-namespace.md +++ b/content/en/docs/tasks/configure-pod-container/share-process-namespace.md @@ -5,11 +5,11 @@ reviewers: - verb - yujuhong - dchen1107 -content_template: templates/task +content_type: task weight: 160 --- -{{% capture overview %}} + {{< feature-state state="stable" for_k8s_version="v1.17" >}} @@ -21,15 +21,16 @@ You can use this feature to configure cooperating containers, such as a log handler sidecar container, or to troubleshoot container images that don't include debugging utilities like a shell. 
-{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Configure a Pod @@ -93,9 +94,9 @@ events { worker_connections 1024; ``` -{{% /capture %}} -{{% capture discussion %}} + + ## Understanding Process Namespace Sharing @@ -117,6 +118,6 @@ containers, though, so it's important to understand these differences: `/proc/$pid/root` link.** This makes debugging easier, but it also means that filesystem secrets are protected only by filesystem permissions. -{{% /capture %}} + diff --git a/content/en/docs/tasks/configure-pod-container/static-pod.md b/content/en/docs/tasks/configure-pod-container/static-pod.md index fc31526348609..5189fdb882454 100644 --- a/content/en/docs/tasks/configure-pod-container/static-pod.md +++ b/content/en/docs/tasks/configure-pod-container/static-pod.md @@ -3,10 +3,10 @@ reviewers: - jsafrane title: Create static Pods weight: 170 -content_template: templates/task +content_type: task --- -{{% capture overview %}} + *Static Pods* are managed directly by the kubelet daemon on a specific node, @@ -30,9 +30,10 @@ Pods to run a Pod on every node, you should probably be using a instead. {{< /note >}} -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} @@ -41,10 +42,10 @@ and that your nodes are running the Fedora operating system. Instructions for other distributions or Kubernetes installations may vary. -{{% /capture %}} -{{% capture steps %}} + + ## Create a static pod {#static-pod-creation} @@ -236,4 +237,4 @@ CONTAINER ID IMAGE COMMAND CREATED ... e7a62e3427f1 nginx:latest "nginx -g 'daemon of 27 seconds ago ``` -{{% /capture %}} + diff --git a/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md b/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md index 847d76f25c369..4fadbb3f42ddb 100644 --- a/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md +++ b/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md @@ -2,26 +2,27 @@ reviewers: - cdrage title: Translate a Docker Compose File to Kubernetes Resources -content_template: templates/task +content_type: task weight: 200 --- -{{% capture overview %}} + What's Kompose? It's a conversion tool for all things compose (namely Docker Compose) to container orchestrators (Kubernetes or OpenShift). More information can be found on the Kompose website at [http://kompose.io](http://kompose.io). -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Install Kompose @@ -200,9 +201,9 @@ you need is an existing `docker-compose.yml` file. $ curl http://192.0.2.89 ``` -{{% /capture %}} -{{% capture discussion %}} + + ## User Guide @@ -606,4 +607,4 @@ Kompose supports Docker Compose versions: 1, 2 and 3. We have limited support on A full list of compatibility between all three versions is available in our [conversion document](https://github.com/kubernetes/kompose/blob/master/docs/conversion.md), including a list of all incompatible Docker Compose keys.
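As a quick illustration of the conversion workflow above, here is a sketch with a hypothetical single-service Compose file; the file contents and the resulting manifest names are assumptions, not part of this patch:

```shell
cat <<EOF > docker-compose.yml
version: "3"
services:
  web:
    image: nginx
    ports:
      - "8080:80"
EOF

# Kompose typically emits one Deployment and one Service per Compose service:
kompose convert -f docker-compose.yml
kubectl apply -f web-deployment.yaml -f web-service.yaml
```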
-{{% /capture %}} + diff --git a/content/en/docs/tasks/debug-application-cluster/audit.md b/content/en/docs/tasks/debug-application-cluster/audit.md index 5a57779b6c761..acde29fdabdc1 100644 --- a/content/en/docs/tasks/debug-application-cluster/audit.md +++ b/content/en/docs/tasks/debug-application-cluster/audit.md @@ -3,11 +3,11 @@ reviewers: - soltysh - sttts - ericchiang -content_template: templates/concept +content_type: concept title: Auditing --- -{{% capture overview %}} + Kubernetes auditing provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system, initiated by individual users, administrators @@ -22,10 +22,10 @@ answer the following questions: - from where was it initiated? - to where was it going? -{{% /capture %}} -{{% capture body %}} + + [Kube-apiserver][kube-apiserver] performs auditing. Each request on each stage of its execution generates an event, which is then pre-processed according to @@ -503,12 +503,13 @@ plugin which supports full-text search and analytics. [logstash_install_doc]: https://www.elastic.co/guide/en/logstash/current/installing-logstash.html [kube-aggregator]: /docs/concepts/api-extension/apiserver-aggregation -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + Visit [Auditing with Falco](/docs/tasks/debug-application-cluster/falco). Learn about [Mutating webhook auditing annotations](/docs/reference/access-authn-authz/extensible-admission-controllers/#mutating-webhook-auditing-annotations). -{{% /capture %}} + diff --git a/content/en/docs/tasks/debug-application-cluster/crictl.md b/content/en/docs/tasks/debug-application-cluster/crictl.md index f7bfec87fffa5..a047f194e9512 100644 --- a/content/en/docs/tasks/debug-application-cluster/crictl.md +++ b/content/en/docs/tasks/debug-application-cluster/crictl.md @@ -4,11 +4,11 @@ reviewers: - feiskyer - mrunalp title: Debugging Kubernetes nodes with crictl -content_template: templates/task +content_type: task --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.11" state="stable" >}} @@ -17,15 +17,16 @@ You can use it to inspect and debug container runtimes and applications on a Kubernetes node. `crictl` and its source are hosted in the [cri-tools](https://github.com/kubernetes-incubator/cri-tools) repository. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + `crictl` requires a Linux operating system with a CRI runtime. -{{% /capture %}} -{{% capture steps %}} + + ## Installing crictl @@ -347,12 +348,12 @@ CONTAINER ID IMAGE CREATED STATE 3e025dd50a72d busybox About a minute ago Running busybox 0 ``` -{{% /capture %}} -{{% capture discussion %}} + + See [kubernetes-incubator/cri-tools](https://github.com/kubernetes-incubator/cri-tools) for more information. -{{% /capture %}} + diff --git a/content/en/docs/tasks/debug-application-cluster/debug-application-introspection.md b/content/en/docs/tasks/debug-application-cluster/debug-application-introspection.md index 2f5d6e7eda3bc..e0ce8166b0aaa 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-application-introspection.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-application-introspection.md @@ -2,20 +2,20 @@ reviewers: - janetkuo - thockin -content_template: templates/concept +content_type: concept title: Application Introspection and Debugging --- -{{% capture overview %}} + Once your application is running, you'll inevitably need to debug problems with it.
Earlier we described how you can use `kubectl get pods` to retrieve simple status information about your pods. But there are a number of ways to get even more information about your application. -{{% /capture %}} -{{% capture body %}} + + ## Using `kubectl describe pod` to fetch details about pods @@ -387,9 +387,10 @@ status: systemUUID: ABE5F6B4-D44B-108B-C46A-24CCE16C8B6E ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + Learn about additional debugging tools, including: @@ -400,4 +401,4 @@ Learn about additional debugging tools, including: * [Connecting to containers via port forwarding](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) * [Inspect Kubernetes node with crictl](/docs/tasks/debug-application-cluster/crictl/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/debug-application-cluster/debug-application.md b/content/en/docs/tasks/debug-application-cluster/debug-application.md index 08f0fad008ba4..a5c37541c3d60 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-application.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-application.md @@ -3,19 +3,19 @@ reviewers: - mikedanese - thockin title: Troubleshoot Applications -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + This guide is to help users debug applications that are deployed into Kubernetes and not behaving correctly. This is *not* a guide for people who want to debug their cluster. For that you should check out [this guide](/docs/admin/cluster-troubleshooting). -{{% /capture %}} -{{% capture body %}} + + ## Diagnosing the problem @@ -161,12 +161,13 @@ check: * Can you connect to your pods directly? Get the IP address for the Pod, and try to connect directly to that IP. * Is your application serving on the port that you configured? Kubernetes doesn't do port remapping, so if your application serves on 8080, the `containerPort` field needs to be 8080. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + If none of the above solves your problem, follow the instructions in [Debugging Service document](/docs/user-guide/debugging-services) to make sure that your `Service` is running, has `Endpoints`, and your `Pods` are actually serving; you have DNS working, iptables rules installed, and kube-proxy does not seem to be misbehaving. You may also visit [troubleshooting document](/docs/troubleshooting/) for more information. -{{% /capture %}} + diff --git a/content/en/docs/tasks/debug-application-cluster/debug-cluster.md b/content/en/docs/tasks/debug-application-cluster/debug-cluster.md index 473f364361e58..0a66bed195e08 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-cluster.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-cluster.md @@ -2,20 +2,20 @@ reviewers: - davidopp title: Troubleshoot Clusters -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + This doc is about cluster troubleshooting; we assume you have already ruled out your application as the root cause of the problem you are experiencing. See the [application troubleshooting guide](/docs/tasks/debug-application-cluster/debug-application) for tips on application debugging. You may also visit [troubleshooting document](/docs/troubleshooting/) for more information. 
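The usual first step when troubleshooting a cluster, sketched with standard kubectl commands (`<node-name>` is a placeholder):

```shell
# Check that every node you expect is registered and Ready:
kubectl get nodes
kubectl describe node <node-name>

# Snapshot overall cluster state for offline inspection:
kubectl cluster-info dump --output-directory=/tmp/cluster-state
```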
-{{% /capture %}} -{{% capture body %}} + + ## Listing your cluster @@ -124,4 +124,4 @@ This is an incomplete list of things that could go wrong, and how to adjust your - Mitigates: Node shutdown - Mitigates: Kubelet software fault -{{% /capture %}} + diff --git a/content/en/docs/tasks/debug-application-cluster/debug-init-containers.md b/content/en/docs/tasks/debug-application-cluster/debug-init-containers.md index 296f0a064873c..a6a3a44d98095 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-init-containers.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-init-containers.md @@ -8,19 +8,20 @@ reviewers: - kow3ns - smarterclayton title: Debug Init Containers -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This page shows how to investigate problems related to the execution of Init Containers. The example command lines below refer to the Pod as `<pod-name>` and the Init Containers as `<init-container-1>` and `<init-container-2>`. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} @@ -28,9 +29,9 @@ Init Containers. The example command lines below refer to the Pod as [Init Containers](/docs/concepts/abstractions/init-containers/). * You should have [Configured an Init Container](/docs/tasks/configure-pod-container/configure-pod-initialization/#creating-a-pod-that-has-an-init-container/). -{{% /capture %}} -{{% capture steps %}} + + ## Checking the status of Init Containers @@ -113,9 +114,9 @@ Init Containers that run a shell script print commands as they're executed. For example, you can do this in Bash by running `set -x` at the beginning of the script. -{{% /capture %}} -{{% capture discussion %}} + + ## Understanding Pod status @@ -131,7 +132,7 @@ Status | Meaning `Pending` | The Pod has not yet begun executing Init Containers. `PodInitializing` or `Running` | The Pod has already finished executing Init Containers. -{{% /capture %}} + diff --git a/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md b/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md index 28c9885e57c82..9793b472e0c9f 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md @@ -2,25 +2,26 @@ reviewers: - bprashanth title: Debug Pods and ReplicationControllers -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This page shows how to debug Pods and ReplicationControllers. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} * You should be familiar with the basics of [Pods](/docs/concepts/workloads/pods/pod/) and [Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/). -{{% /capture %}} -{{% capture steps %}} + + ## Debugging Pods @@ -106,4 +107,4 @@ or they can't. If they can't create pods, then please refer to the You can also use `kubectl describe rc ${CONTROLLER_NAME}` to inspect events related to the replication controller.
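A sketch of pulling the event stream for a single Pod, which often surfaces scheduling and image-pull failures faster than reading the full describe output; `my-pod` is a placeholder name:

```shell
kubectl get events --field-selector involvedObject.name=my-pod

# Or inspect the Pod exactly as the API server stores it:
kubectl get pod my-pod -o yaml
```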
-{{% /capture %}} + diff --git a/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md b/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md index a812640555aad..5e6758570575a 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md @@ -3,16 +3,17 @@ reviewers: - verb - soltysh title: Debug Running Pods -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This page explains how to debug Pods running (or crashing) on a Node. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * Your {{< glossary_tooltip text="Pod" term_id="pod" >}} should already be scheduled and running. If your Pod is not yet running, start with [Troubleshoot @@ -21,9 +22,9 @@ This page explains how to debug Pods running (or crashing) on a Node. Pod is running and have shell access to run commands on that Node. You don't need that access to run the standard debug steps that use `kubectl`. -{{% /capture %}} -{{% capture steps %}} + + ## Examining pod logs {#examine-pod-logs} @@ -187,4 +188,4 @@ given tools in the Kubernetes API. Therefore, if you find yourself needing to ssh into a machine, please file a feature request on GitHub describing your use case and why these tools are insufficient. -{{% /capture %}} + diff --git a/content/en/docs/tasks/debug-application-cluster/debug-service.md b/content/en/docs/tasks/debug-application-cluster/debug-service.md index 8656f3ae7e70d..c4e12042fb97e 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-service.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-service.md @@ -2,21 +2,21 @@ reviewers: - thockin - bowei -content_template: templates/concept +content_type: concept title: Debug Services --- -{{% capture overview %}} + An issue that comes up rather frequently for new installations of Kubernetes is that a Service is not working properly. You've run your Pods through a Deployment (or other workload controller) and created a Service, but you get no response when you try to access it. This document will hopefully help you to figure out what's going wrong. -{{% /capture %}} -{{% capture body %}} + + ## Running commands in a Pod @@ -728,10 +728,11 @@ Contact us on [Forum](https://discuss.kubernetes.io) or [GitHub](https://github.com/kubernetes/kubernetes). -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + Visit [troubleshooting document](/docs/troubleshooting/) for more information. -{{% /capture %}} + diff --git a/content/en/docs/tasks/debug-application-cluster/debug-stateful-set.md b/content/en/docs/tasks/debug-application-cluster/debug-stateful-set.md index 8bf56bb10ccf5..755c9b725e3dc 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-stateful-set.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-stateful-set.md @@ -8,23 +8,24 @@ reviewers: - kow3ns - smarterclayton title: Debug a StatefulSet -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This task shows you how to debug a StatefulSet. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. * You should have a StatefulSet running that you want to investigate. 
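A sketch of the first checks for an unresponsive Service, assuming a placeholder Service named `hostnames`:

```shell
kubectl get service hostnames
# An empty ENDPOINTS column here usually means the selector matches no ready Pods:
kubectl get endpoints hostnames

# Confirm in-cluster DNS resolution from a throwaway Pod:
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup hostnames
```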
-{{% /capture %}} -{{% capture steps %}} + + ## Debugging a StatefulSet @@ -41,12 +42,13 @@ instructions on how to deal with them. You can debug individual Pods in a StatefulSet using the [Debugging Pods](/docs/tasks/debug-application-cluster/debug-pod-replication-controller/) guide. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + Learn more about [debugging an init-container](/docs/tasks/debug-application-cluster/debug-init-containers/). -{{% /capture %}} + diff --git a/content/en/docs/tasks/debug-application-cluster/determine-reason-pod-failure.md b/content/en/docs/tasks/debug-application-cluster/determine-reason-pod-failure.md index 6910b25ce0f91..44dcf0e90986f 100644 --- a/content/en/docs/tasks/debug-application-cluster/determine-reason-pod-failure.md +++ b/content/en/docs/tasks/debug-application-cluster/determine-reason-pod-failure.md @@ -1,9 +1,9 @@ --- title: Determine the Reason for Pod Failure -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This page shows how to write and read a Container termination message. @@ -16,17 +16,18 @@ put in a termination message should also be written to the general [Kubernetes logs](/docs/concepts/cluster-administration/logging/). -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Writing and reading a termination message @@ -110,16 +111,17 @@ to use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * See the `terminationMessagePath` field in [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core). * Learn about [retrieving logs](/docs/concepts/cluster-administration/logging/). * Learn about [Go templates](https://golang.org/pkg/text/template/). -{{% /capture %}} + diff --git a/content/en/docs/tasks/debug-application-cluster/events-stackdriver.md b/content/en/docs/tasks/debug-application-cluster/events-stackdriver.md index d852ed3cf95bc..859c163307eb0 100644 --- a/content/en/docs/tasks/debug-application-cluster/events-stackdriver.md +++ b/content/en/docs/tasks/debug-application-cluster/events-stackdriver.md @@ -2,11 +2,11 @@ reviewers: - piosz - x13n -content_template: templates/concept +content_type: concept title: Events in Stackdriver --- -{{% capture overview %}} + Kubernetes events are objects that provide insight into what is happening inside a cluster, such as what decisions were made by the scheduler or why some @@ -34,10 +34,10 @@ of the potential inaccuracy.
{{< /note >}} -{{% /capture %}} -{{% capture body %}} + + ## Deployment @@ -91,4 +91,4 @@ jsonPayload.involvedObject.name:"nginx-deployment" {{< figure src="/images/docs/stackdriver-event-exporter-filter.png" alt="Filtered events in the Stackdriver Logging interface" width="500" >}} -{{% /capture %}} + diff --git a/content/en/docs/tasks/debug-application-cluster/falco.md b/content/en/docs/tasks/debug-application-cluster/falco.md index 003b28760244f..f5f67406a9a22 100644 --- a/content/en/docs/tasks/debug-application-cluster/falco.md +++ b/content/en/docs/tasks/debug-application-cluster/falco.md @@ -3,19 +3,19 @@ reviewers: - soltysh - sttts - ericchiang -content_template: templates/concept +content_type: concept title: Auditing with Falco --- -{{% capture overview %}} + ### Use Falco to collect audit events [Falco](https://falco.org/) is an open source project for intrusion and abnormality detection for Cloud Native platforms. This section describes how to set up Falco, how to send audit events to the Kubernetes Audit endpoint exposed by Falco, and how Falco applies a set of rules to automatically detect suspicious behavior. -{{% /capture %}} -{{% capture body %}} + + #### Install Falco @@ -118,4 +118,4 @@ For further details, see [Kubernetes Audit Events][falco_ka_docs] in the Falco d [falco_installation]: https://falco.org/docs/installation [falco_helm_chart]: https://github.com/helm/charts/tree/master/stable/falco -{{% /capture %}} + diff --git a/content/en/docs/tasks/debug-application-cluster/get-shell-running-container.md b/content/en/docs/tasks/debug-application-cluster/get-shell-running-container.md index f3ff92c1964b2..12502ef1022e0 100644 --- a/content/en/docs/tasks/debug-application-cluster/get-shell-running-container.md +++ b/content/en/docs/tasks/debug-application-cluster/get-shell-running-container.md @@ -3,25 +3,26 @@ reviewers: - caesarxuchao - mikedanese title: Get a Shell to a Running Container -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This page shows how to use `kubectl exec` to get a shell to a running Container. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Getting a shell to a Container @@ -122,9 +123,9 @@ kubectl exec shell-demo ls / kubectl exec shell-demo cat /proc/1/mounts ``` -{{% /capture %}} -{{% capture discussion %}} + + ## Opening a shell when a Pod has more than one Container @@ -138,14 +139,15 @@ shell to the main-app Container. kubectl exec -it my-pod --container main-app -- /bin/bash ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [kubectl exec](/docs/reference/generated/kubectl/kubectl-commands/#exec) -{{% /capture %}} + diff --git a/content/en/docs/tasks/debug-application-cluster/local-debugging.md b/content/en/docs/tasks/debug-application-cluster/local-debugging.md index 9cfc216ee4b59..d00c1398ebe9e 100644 --- a/content/en/docs/tasks/debug-application-cluster/local-debugging.md +++ b/content/en/docs/tasks/debug-application-cluster/local-debugging.md @@ -1,9 +1,9 @@ --- title: Developing and debugging services locally -content_template: templates/task +content_type: task --- -{{% capture overview %}} + Kubernetes applications usually consist of multiple, separate services, each running in its own container. 
Developing and debugging these services on a remote Kubernetes cluster can be cumbersome, requiring you to [get a shell on a running container](/docs/tasks/debug-application-cluster/get-shell-running-container/) and run your tools inside the remote shell. @@ -12,17 +12,18 @@ Kubernetes applications usually consist of multiple, separate services, each run This document describes using `telepresence` to develop and debug services running on a remote cluster locally. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * Kubernetes cluster is installed * `kubectl` is configured to communicate with the cluster * [Telepresence](https://www.telepresence.io/reference/install) is installed -{{% /capture %}} -{{% capture steps %}} + + ## Getting a shell on a remote cluster @@ -46,9 +47,10 @@ where $DEPLOYMENT_NAME is the name of your existing deployment. Running this command spawns a shell. In the shell, start your service. You can then make edits to the source code locally, save, and see the changes take effect immediately. You can also run your service in a debugger, or any other local development tool. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + If you're interested in a hands-on tutorial, check out [this tutorial](https://cloud.google.com/community/tutorials/developing-services-with-k8s) that walks through locally developing the Guestbook application on Google Kubernetes Engine. @@ -56,4 +58,4 @@ Telepresence has [numerous proxying options](https://www.telepresence.io/referen For further reading, visit the [Telepresence website](https://www.telepresence.io). -{{% /capture %}} + diff --git a/content/en/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana.md b/content/en/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana.md index 327bfdf9253e2..c47b1173918f8 100644 --- a/content/en/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana.md +++ b/content/en/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana.md @@ -2,11 +2,11 @@ reviewers: - piosz - x13n -content_template: templates/concept +content_type: concept title: Logging Using Elasticsearch and Kibana --- -{{% capture overview %}} + On the Google Compute Engine (GCE) platform, the default logging support targets [Stackdriver Logging](https://cloud.google.com/logging/), which is described in detail @@ -21,9 +21,9 @@ Stackdriver Logging when running on GCE. You cannot automatically deploy Elasticsearch and Kibana in the Kubernetes cluster hosted on Google Kubernetes Engine. You have to deploy them manually. {{< /note >}} -{{% /capture %}} -{{% capture body %}} + + To use Elasticsearch and Kibana for cluster logging, you should set the following environment variable as shown below when creating your cluster with @@ -114,11 +114,12 @@ Here is a typical view of ingested logs from the Kibana viewer: ![Kibana logs](/images/docs/kibana-logs.png) -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + Kibana opens up all sorts of powerful options for exploring your logs! For some ideas on how to dig into it, check out [Kibana's documentation](https://www.elastic.co/guide/en/kibana/current/discover.html).
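Assuming the cluster addon created the usual `elasticsearch-logging` and `kibana-logging` Services in `kube-system` (an assumption, not something this patch guarantees), a sketch of reaching Kibana through the API server proxy:

```shell
kubectl get services -n kube-system elasticsearch-logging kibana-logging

kubectl proxy &
# Kibana is then available at:
# http://localhost:8001/api/v1/namespaces/kube-system/services/kibana-logging/proxy
```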
-{{% /capture %}} + diff --git a/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md b/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md index a60ceeedfb046..be80133d34aa8 100644 --- a/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md +++ b/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md @@ -3,10 +3,10 @@ reviewers: - piosz - x13n title: Logging Using Stackdriver -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + Before reading this page, it's highly recommended to familiarize yourself with the [overview of logging in Kubernetes](/docs/concepts/cluster-administration/logging). @@ -18,10 +18,10 @@ see the [sidecar approach](/docs/concepts/cluster-administration/logging#sidecar in the Kubernetes logging overview. {{< /note >}} -{{% /capture %}} -{{% capture body %}} + + ## Deploying @@ -368,4 +368,4 @@ with minor changes: Then run `make build push` from this directory. After updating `DaemonSet` to pick up the new image, you can use the plugin you installed in the fluentd configuration. -{{% /capture %}} + diff --git a/content/en/docs/tasks/debug-application-cluster/monitor-node-health.md b/content/en/docs/tasks/debug-application-cluster/monitor-node-health.md index f434adb17e7e6..9ebeeeddaddbf 100644 --- a/content/en/docs/tasks/debug-application-cluster/monitor-node-health.md +++ b/content/en/docs/tasks/debug-application-cluster/monitor-node-health.md @@ -2,11 +2,11 @@ reviewers: - Random-Liu - dchen1107 -content_template: templates/task +content_type: task title: Monitor Node Health --- -{{% capture overview %}} + *Node problem detector* is a [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) monitoring the node health. It collects node problems from various daemons and reports them @@ -23,15 +23,16 @@ introduced to deal with node problems. See more information [here](https://github.com/kubernetes/node-problem-detector). -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Limitations @@ -162,9 +163,9 @@ Kernel monitor uses [`Translator`](https://github.com/kubernetes/node-problem-de plugin to translate the kernel log into its internal data structure. It is easy to implement a new translator for a new log format. -{{% /capture %}} -{{% capture discussion %}} + + ## Caveats @@ -177,4 +178,4 @@ resource overhead on each node. Usually this is fine, because: * Even under high load, the resource usage is acceptable. (see [benchmark result](https://github.com/kubernetes/node-problem-detector/issues/2#issuecomment-220255629)) -{{% /capture %}} + diff --git a/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md b/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md index 547790e5b051b..dbd4aa6cf4771 100644 --- a/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md +++ b/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md @@ -3,20 +3,20 @@ reviewers: - fgrzadkowski - piosz title: Resource metrics pipeline -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + Resource usage metrics, such as container CPU and memory usage, are available in Kubernetes through the Metrics API.
These metrics can be either accessed directly by the user, for example by using the `kubectl top` command, or used by a controller in the cluster, e.g. the Horizontal Pod Autoscaler, to make decisions. -{{% /capture %}} -{{% capture body %}} + + ## The Metrics API @@ -61,4 +61,4 @@ Metrics Server is registered with the main API server through Learn more about the metrics server in [the design doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/metrics-server.md). -{{% /capture %}} + diff --git a/content/en/docs/tasks/debug-application-cluster/resource-usage-monitoring.md b/content/en/docs/tasks/debug-application-cluster/resource-usage-monitoring.md index de8c538118685..6cb716da9c8fb 100644 --- a/content/en/docs/tasks/debug-application-cluster/resource-usage-monitoring.md +++ b/content/en/docs/tasks/debug-application-cluster/resource-usage-monitoring.md @@ -1,11 +1,11 @@ --- reviewers: - mikedanese -content_template: templates/concept +content_type: concept title: Tools for Monitoring Resources --- -{{% capture overview %}} + To scale an application and provide a reliable service, you need to understand how the application behaves when it is deployed. You can examine @@ -16,9 +16,9 @@ information about an application's resource usage at each of these levels. This information allows you to evaluate your application's performance and where bottlenecks can be removed to improve overall performance. -{{% /capture %}} -{{% capture body %}} + + In Kubernetes, application monitoring does not depend on a single monitoring solution. On new clusters, you can use [resource metrics](#resource-metrics-pipeline) or [full metrics](#full-metrics-pipeline) pipelines to collect monitoring statistics. @@ -55,4 +55,4 @@ then exposes them to Kubernetes via an adapter by implementing either the [Prometheus](https://prometheus.io), a CNCF project, can natively monitor Kubernetes, nodes, and Prometheus itself. Full metrics pipeline projects that are not part of the CNCF are outside the scope of Kubernetes documentation. -{{% /capture %}} + diff --git a/content/en/docs/tasks/debug-application-cluster/troubleshooting.md b/content/en/docs/tasks/debug-application-cluster/troubleshooting.md index 1ed0f5aa5b0fb..82301275ce4b5 100644 --- a/content/en/docs/tasks/debug-application-cluster/troubleshooting.md +++ b/content/en/docs/tasks/debug-application-cluster/troubleshooting.md @@ -2,11 +2,11 @@ reviewers: - brendandburns - davidopp -content_template: templates/concept +content_type: concept title: Troubleshooting --- -{{% capture overview %}} + Sometimes things go wrong. This guide is aimed at making them right. It has two sections: @@ -17,10 +17,10 @@ two sections: You should also check the known issues for the [release](https://github.com/kubernetes/kubernetes/releases) you're using.
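Both access paths to the Metrics API mentioned above can be sketched as follows, assuming metrics-server is installed and serving the `metrics.k8s.io` API:

```shell
kubectl top nodes
kubectl top pods --all-namespaces

# The same data, fetched as raw API resources:
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
```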
-{{% /capture %}} -{{% capture body %}} + + ## Getting help @@ -104,4 +104,4 @@ problem, such as: * Cloud provider, OS distro, network configuration, and Docker version * Steps to reproduce the problem -{{% /capture %}} + diff --git a/content/en/docs/tasks/example-task-template.md b/content/en/docs/tasks/example-task-template.md index c723460fc01fd..b3dd5e8e43c90 100644 --- a/content/en/docs/tasks/example-task-template.md +++ b/content/en/docs/tasks/example-task-template.md @@ -2,11 +2,11 @@ title: Example Task Template reviewers: - chenopis -content_template: templates/task +content_type: task toc_hide: true --- -{{% capture overview %}} + {{< note >}} Be sure to also [create an entry in the table of contents](/docs/contribute/style/write-new-topic/#placing-your-topic-in-the-table-of-contents) for your new document. @@ -14,39 +14,40 @@ Be sure to also [create an entry in the table of contents](/docs/contribute/styl This page shows how to ... -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} * Do this. * Do this too. -{{% /capture %}} -{{% capture steps %}} + + ## Doing ... 1. Do this. 1. Do this next. Possibly read this [related explanation](#). -{{% /capture %}} -{{% capture discussion %}} + + ## Understanding ... **[Optional Section]** Here's an interesting thing to know about the steps you just did. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + **[Optional Section]** * Learn more about [Writing a New Topic](/docs/home/contribute/write-new-topic/). * See [Using Page Templates - Task template](/docs/home/contribute/page-templates/#task_template) for how to use this template. -{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/tasks/extend-kubectl/kubectl-plugins.md b/content/en/docs/tasks/extend-kubectl/kubectl-plugins.md index dcc315871b58b..7d14b86f24856 100644 --- a/content/en/docs/tasks/extend-kubectl/kubectl-plugins.md +++ b/content/en/docs/tasks/extend-kubectl/kubectl-plugins.md @@ -4,23 +4,24 @@ reviewers: - juanvallejo - soltysh description: With kubectl plugins, you can extend the functionality of the kubectl command by adding new subcommands. -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This guide demonstrates how to install and write extensions for [kubectl](/docs/reference/kubectl/kubectl/). By thinking of core `kubectl` commands as essential building blocks for interacting with a Kubernetes cluster, a cluster administrator can think of plugins as a means of utilizing these building blocks to create more complex behavior. Plugins extend `kubectl` with new sub-commands, allowing for new and custom features not included in the main distribution of `kubectl`. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + You need to have a working `kubectl` binary installed. -{{% /capture %}} -{{% capture steps %}} + + ## Installing kubectl plugins @@ -375,9 +376,10 @@ set up a build environment (if it needs compiling), and deploy the plugin. If you also make compiled packages available, or use Krew, that will make installs easier. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Check the Sample CLI Plugin repository for a [detailed example](https://github.com/kubernetes/sample-cli-plugin) of a @@ -386,4 +388,4 @@ installs easier. [SIG CLI team](https://github.com/kubernetes/community/tree/master/sig-cli). 
* Read about [Krew](https://krew.dev/), a package manager for kubectl plugins. -{{% /capture %}} + diff --git a/content/en/docs/tasks/inject-data-application/define-command-argument-container.md b/content/en/docs/tasks/inject-data-application/define-command-argument-container.md index 66ebd69c134ab..faaffc52a2e57 100644 --- a/content/en/docs/tasks/inject-data-application/define-command-argument-container.md +++ b/content/en/docs/tasks/inject-data-application/define-command-argument-container.md @@ -1,25 +1,26 @@ --- title: Define a Command and Arguments for a Container -content_template: templates/task +content_type: task weight: 10 --- -{{% capture overview %}} + This page shows how to define commands and arguments when you run a container in a {{< glossary_tooltip term_id="pod" >}}. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Define a command and arguments when you create a Pod @@ -145,14 +146,15 @@ Here are some examples: | `[/ep-1]` | `[foo bar]` | `[/ep-2]` | `[zoo boo]` | `[ep-2 zoo boo]` | -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [configuring pods and containers](/docs/tasks/). * Learn more about [running commands in a container](/docs/tasks/debug-application-cluster/get-shell-running-container/). * See [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core). -{{% /capture %}} + diff --git a/content/en/docs/tasks/inject-data-application/define-environment-variable-container.md b/content/en/docs/tasks/inject-data-application/define-environment-variable-container.md index 5dd5aa92e04ce..5b115993af570 100644 --- a/content/en/docs/tasks/inject-data-application/define-environment-variable-container.md +++ b/content/en/docs/tasks/inject-data-application/define-environment-variable-container.md @@ -1,25 +1,26 @@ --- title: Define Environment Variables for a Container -content_template: templates/task +content_type: task weight: 20 --- -{{% capture overview %}} + This page shows how to define environment variables for a container in a Kubernetes Pod. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} -{{% /capture %}} -{{% capture steps %}} + + ## Define an environment variable for a container @@ -117,12 +118,13 @@ spec: Upon creation, the command `echo Warm greetings to The Most Honorable Kubernetes` is run on the container. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [environment variables](/docs/tasks/inject-data-application/environment-variable-expose-pod-information/). * Learn about [using secrets as environment variables](/docs/user-guide/secrets/#using-secrets-as-environment-variables). * See [EnvVarSource](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#envvarsource-v1-core). 
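A combined sketch of the two mechanisms just covered: an explicit container command whose argument names an environment variable defined in the same spec. All names and values here are illustrative:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: command-env-sketch
spec:
  restartPolicy: Never
  containers:
  - name: app
    image: debian
    command: ["printenv"]
    args: ["GREETING"]
    env:
    - name: GREETING
      value: "Warm greetings to Kubernetes"
EOF

# The container prints the variable's value and exits:
kubectl logs command-env-sketch
```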
-{{% /capture %}} + diff --git a/content/en/docs/tasks/inject-data-application/distribute-credentials-secure.md b/content/en/docs/tasks/inject-data-application/distribute-credentials-secure.md index 2fb15aa3b2eb8..de4d32d7b9333 100644 --- a/content/en/docs/tasks/inject-data-application/distribute-credentials-secure.md +++ b/content/en/docs/tasks/inject-data-application/distribute-credentials-secure.md @@ -1,22 +1,23 @@ --- title: Distribute Credentials Securely Using Secrets -content_template: templates/task +content_type: task weight: 50 min-kubernetes-server-version: v1.6 --- -{{% capture overview %}} + This page shows how to securely inject sensitive data, such as passwords and encryption keys, into Pods. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} -{{% /capture %}} -{{% capture steps %}} + + ## Convert your secret data to a base-64 representation @@ -243,9 +244,10 @@ This functionality is available in Kubernetes v1.6 and later. password: 39528$vdg7Jb ```` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [Secrets](/docs/concepts/configuration/secret/). * Learn about [Volumes](/docs/concepts/storage/volumes/). @@ -256,5 +258,5 @@ This functionality is available in Kubernetes v1.6 and later. * [Volume](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volume-v1-core) * [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core) -{{% /capture %}} + diff --git a/content/en/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md b/content/en/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md index a24aba65b6bb2..4ab41f2a2376d 100644 --- a/content/en/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md +++ b/content/en/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md @@ -1,25 +1,26 @@ --- title: Expose Pod Information to Containers Through Files -content_template: templates/task +content_type: task weight: 40 --- -{{% capture overview %}} + This page shows how a Pod can use a DownwardAPIVolumeFile to expose information about itself to Containers running in the Pod. A DownwardAPIVolumeFile can expose Pod fields and Container fields. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## The Downward API @@ -189,9 +190,9 @@ In your shell, view the `cpu_limit` file: You can use similar commands to view the `cpu_request`, `mem_limit` and `mem_request` files. -{{% /capture %}} -{{% capture discussion %}} + + ## Capabilities of the Downward API @@ -249,10 +250,11 @@ application, but that is tedious and error prone, and it violates the goal of lo coupling. A better option would be to use the Pod's name as an identifier, and inject the Pod's name into the well-known environment variable. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core) * [Volume](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volume-v1-core) @@ -260,7 +262,7 @@ inject the Pod's name into the well-known environment variable. 
* [DownwardAPIVolumeFile](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#downwardapivolumefile-v1-core) * [ResourceFieldSelector](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcefieldselector-v1-core) -{{% /capture %}} + diff --git a/content/en/docs/tasks/inject-data-application/environment-variable-expose-pod-information.md b/content/en/docs/tasks/inject-data-application/environment-variable-expose-pod-information.md index c23b3ba75aabd..2b59921c6e367 100644 --- a/content/en/docs/tasks/inject-data-application/environment-variable-expose-pod-information.md +++ b/content/en/docs/tasks/inject-data-application/environment-variable-expose-pod-information.md @@ -1,26 +1,27 @@ --- title: Expose Pod Information to Containers Through Environment Variables -content_template: templates/task +content_type: task weight: 30 --- -{{% capture overview %}} + This page shows how a Pod can use environment variables to expose information about itself to Containers running in the Pod. Environment variables can expose Pod fields and Container fields. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## The Downward API @@ -154,9 +155,10 @@ The output shows the values of selected environment variables: 67108864 ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [Defining Environment Variables for a Container](/docs/tasks/inject-data-application/define-environment-variable-container/) * [PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core) @@ -166,5 +168,5 @@ The output shows the values of selected environment variables: * [ObjectFieldSelector](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#objectfieldselector-v1-core) * [ResourceFieldSelector](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcefieldselector-v1-core) -{{% /capture %}} + diff --git a/content/en/docs/tasks/inject-data-application/podpreset.md b/content/en/docs/tasks/inject-data-application/podpreset.md index dcf159acf514b..6533629ce4c91 100644 --- a/content/en/docs/tasks/inject-data-application/podpreset.md +++ b/content/en/docs/tasks/inject-data-application/podpreset.md @@ -3,26 +3,27 @@ reviewers: - jessfraz title: Inject Information into Pods Using a PodPreset min-kubernetes-server-version: v1.6 -content_template: templates/task +content_type: task weight: 60 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.6" state="alpha" >}} This page shows how to use PodPreset objects to inject information like {{< glossary_tooltip text="Secrets" term_id="secret" >}}, volume mounts, and {{< glossary_tooltip text="environment variables" term_id="container-env-variables" >}} into Pods at creation time. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one using [Minikube](/docs/setup/learning-environment/minikube/). Make sure that you have [enabled PodPreset](/docs/concepts/workloads/pods/podpreset/#enable-pod-preset) in your cluster. 
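Because PodPreset is an alpha API, the following is only a sketch and assumes the `settings.k8s.io/v1alpha1` API and its admission controller are enabled; all names are illustrative:

```shell
kubectl apply -f - <<EOF
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: preset-sketch
spec:
  selector:
    matchLabels:
      role: frontend
  env:
  - name: DB_PORT
    value: "6379"
  volumeMounts:
  - mountPath: /cache
    name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
EOF
```

Any Pod created afterwards with the label `role: frontend` would receive the environment variable, volume, and mount automatically.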
-{{% /capture %}} -{{% capture steps %}} + + ## Use Pod presets to inject environment variables and volumes @@ -321,4 +322,4 @@ The output shows that the PodPreset was deleted: podpreset "allow-database" deleted ``` -{{% /capture %}} + diff --git a/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md b/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md index ae5b6633ad8f4..602ad8482d8ab 100644 --- a/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md +++ b/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md @@ -3,11 +3,11 @@ title: Running Automated Tasks with a CronJob min-kubernetes-server-version: v1.8 reviewers: - chenopis -content_template: templates/task +content_type: task weight: 10 --- -{{% capture overview %}} + You can use a {{< glossary_tooltip text="CronJob" term_id="cronjob" >}} to run {{< glossary_tooltip text="Jobs" term_id="job" >}} on a time-based schedule. These automated jobs run like [Cron](https://en.wikipedia.org/wiki/Cron) tasks on a Linux or UNIX system. @@ -21,15 +21,16 @@ Therefore, jobs should be idempotent. For more limitations, see [CronJobs](/docs/concepts/workloads/controllers/cron-jobs). -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * {{< include "task-tutorial-prereqs.md" >}} -{{% /capture %}} -{{% capture steps %}} + + ## Creating a Cron Job @@ -207,4 +208,4 @@ The `.spec.successfulJobsHistoryLimit` and `.spec.failedJobsHistoryLimit` fields These fields specify how many completed and failed jobs should be kept. By default, they are set to 3 and 1 respectively. Setting a limit to `0` corresponds to keeping none of the corresponding kind of jobs after they finish. -{{% /capture %}} + diff --git a/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md b/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md index 707c5b985044f..346fbdda8d1bd 100644 --- a/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md +++ b/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md @@ -1,12 +1,12 @@ --- title: Coarse Parallel Processing Using a Work Queue min-kubernetes-server-version: v1.8 -content_template: templates/task +content_type: task weight: 30 --- -{{% capture overview %}} + In this example, we will run a Kubernetes Job with multiple parallel worker processes. @@ -23,19 +23,20 @@ Here is an overview of the steps in this example: 1. **Start a Job that works on tasks from the queue**. The Job starts several pods. Each pod takes one task from the message queue, processes it, and repeats until the end of the queue is reached. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + Be familiar with the basic, non-parallel, use of [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/). {{< include "task-tutorial-prereqs.md" >}} -{{% /capture %}} -{{% capture steps %}} + + ## Starting a message queue service @@ -292,9 +293,9 @@ Events: All our pods succeeded. Yay. -{{% /capture %}} -{{% capture discussion %}} + + ## Alternatives @@ -331,4 +332,4 @@ exits with success, or if the node crashes before the kubelet is able to post th back to the api-server, then the Job will not appear to be complete, even though all items in the queue have been processed. 
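A minimal sketch of a CronJob of this era (`batch/v1beta1`) that prints a message once a minute; the name and image are illustrative:

```shell
kubectl apply -f - <<EOF
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello-sketch
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo Hello from the Kubernetes cluster"]
EOF

kubectl get cronjob hello-sketch
kubectl get jobs --watch
```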
-{{% /capture %}} + diff --git a/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md b/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md index 26fbbacaa782b..f502113c8ffda 100644 --- a/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md +++ b/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md @@ -1,11 +1,11 @@ --- title: Fine Parallel Processing Using a Work Queue -content_template: templates/task +content_type: task min-kubernetes-server-version: v1.8 weight: 40 --- -{{% capture overview %}} + In this example, we will run a Kubernetes Job with multiple parallel worker processes in a given pod. @@ -25,23 +25,24 @@ Here is an overview of the steps in this example: 1. **Start a Job that works on tasks from the queue**. The Job starts several pods. Each pod takes one task from the message queue, processes it, and repeats until the end of the queue is reached. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} -{{% /capture %}} -{{% capture steps %}} + + Be familiar with the basic, non-parallel, use of [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/). -{{% /capture %}} -{{% capture steps %}} + + ## Starting Redis @@ -226,9 +227,9 @@ Working on lemon As you can see, one of our pods worked on several work units. -{{% /capture %}} -{{% capture discussion %}} + + ## Alternatives @@ -240,4 +241,4 @@ consider running your background workers with a `ReplicaSet` instead, and consider running a background processing library such as [https://github.com/resque/resque](https://github.com/resque/resque). -{{% /capture %}} + diff --git a/content/en/docs/tasks/job/parallel-processing-expansion.md b/content/en/docs/tasks/job/parallel-processing-expansion.md index e2d0975a70891..3477be2650905 100644 --- a/content/en/docs/tasks/job/parallel-processing-expansion.md +++ b/content/en/docs/tasks/job/parallel-processing-expansion.md @@ -1,11 +1,11 @@ --- title: Parallel Processing using Expansions -content_template: templates/task +content_type: task min-kubernetes-server-version: v1.8 weight: 20 --- -{{% capture overview %}} + This task demonstrates running multiple {{< glossary_tooltip text="Jobs" term_id="job" >}} based on a common template. You can use this approach to process batches of work in @@ -16,9 +16,10 @@ The sample Jobs process each item simply by printing a string then pausing. See [using Jobs in real workloads](#using-jobs-in-real-workloads) to learn about how this pattern fits more realistic use cases. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + You should be familiar with the basic, non-parallel, use of [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/). @@ -35,10 +36,10 @@ Once you have Python set up, you can install Jinja2 by running: ```shell pip install --user jinja2 ``` -{{% /capture %}} -{{% capture steps %}} + + ## Create Jobs based on a template @@ -252,8 +253,8 @@ Kubernetes accepts and runs the Jobs you created. kubectl delete job -l jobgroup=jobexample ``` -{{% /capture %}} -{{% capture discussion %}} + + ## Using Jobs in real workloads @@ -310,4 +311,4 @@ objects. You could also consider writing your own [controller](/docs/concepts/architecture/controller/) to manage Job objects automatically. 
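The expansion technique itself fits in a few lines of shell, sketched here assuming a `job-tmpl.yaml` that uses the placeholder `$ITEM` and labels its Jobs with `jobgroup=jobexample` as described above:

```shell
mkdir -p ./jobs
for item in apple banana cherry; do
  sed "s/\$ITEM/${item}/g" job-tmpl.yaml > "./jobs/job-${item}.yaml"
done

kubectl create -f ./jobs

# Later, remove the whole group via its shared label:
kubectl delete job -l jobgroup=jobexample
```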
-{{% /capture %}} + diff --git a/content/en/docs/tasks/manage-daemon/rollback-daemon-set.md b/content/en/docs/tasks/manage-daemon/rollback-daemon-set.md index 4b1d4240665a9..2d6dc9d0d64f4 100644 --- a/content/en/docs/tasks/manage-daemon/rollback-daemon-set.md +++ b/content/en/docs/tasks/manage-daemon/rollback-daemon-set.md @@ -2,28 +2,29 @@ reviewers: - janetkuo title: Perform a Rollback on a DaemonSet -content_template: templates/task +content_type: task weight: 20 --- -{{% capture overview %}} + This page shows how to perform a rollback on a DaemonSet. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * The DaemonSet rollout history and DaemonSet rollback features are only supported in `kubectl` in Kubernetes version 1.7 or later. * Make sure you know how to [perform a rolling update on a DaemonSet](/docs/tasks/manage-daemon/update-daemon-set/). -{{% /capture %}} -{{% capture steps %}} + + ## Performing a Rollback on a DaemonSet @@ -104,10 +105,10 @@ When the rollback is complete, the output is similar to this: daemonset "<daemonset-name>" successfully rolled out ``` -{{% /capture %}} -{{% capture discussion %}} + + ## Understanding DaemonSet Revisions @@ -154,6 +155,6 @@ have revision 1 and 2 in the system, and roll back from revision 2 to revision * See [troubleshooting DaemonSet rolling update](/docs/tasks/manage-daemon/update-daemon-set/#troubleshooting). -{{% /capture %}} + diff --git a/content/en/docs/tasks/manage-daemon/update-daemon-set.md b/content/en/docs/tasks/manage-daemon/update-daemon-set.md index 8e32763e018ab..b9168ed098121 100644 --- a/content/en/docs/tasks/manage-daemon/update-daemon-set.md +++ b/content/en/docs/tasks/manage-daemon/update-daemon-set.md @@ -2,25 +2,26 @@ reviewers: - janetkuo title: Perform a Rolling Update on a DaemonSet -content_template: templates/task +content_type: task weight: 10 --- -{{% capture overview %}} + This page shows how to perform a rolling update on a DaemonSet. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * The DaemonSet rolling update feature is only supported in Kubernetes version 1.6 or later. -{{% /capture %}} -{{% capture steps %}} + + ## DaemonSet Update Strategy @@ -190,13 +191,14 @@ Delete DaemonSet from a namespace : kubectl delete ds fluentd-elasticsearch -n kube-system ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * See [Task: Performing a rollback on a DaemonSet](/docs/tasks/manage-daemon/rollback-daemon-set/) * See [Concepts: Creating a DaemonSet to adopt existing DaemonSet pods](/docs/concepts/workloads/controllers/daemonset/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/manage-gpus/scheduling-gpus.md b/content/en/docs/tasks/manage-gpus/scheduling-gpus.md index 4c0b9f9bc376b..63c798afd6975 100644 --- a/content/en/docs/tasks/manage-gpus/scheduling-gpus.md +++ b/content/en/docs/tasks/manage-gpus/scheduling-gpus.md @@ -1,11 +1,11 @@ --- reviewers: - vishh -content_template: templates/concept +content_type: concept title: Schedule GPUs --- -{{% capture overview %}} + {{< feature-state state="beta" for_k8s_version="v1.10" >}} @@ -15,10 +15,10 @@ Kubernetes includes **experimental** support for managing AMD and NVIDIA GPUs This page describes how users can consume GPUs across different Kubernetes versions and the current limitations.
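A sketch of GPU consumption, assuming the relevant vendor device plugin is already installed on the node; the Pod name and image are illustrative:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: gpu-sketch
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvidia/cuda:10.0-base
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1  # GPUs are specified only as limits; requests default to match
EOF
```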
-{{% /capture %}} -{{% capture body %}} + + ## Using device plugins @@ -216,4 +216,4 @@ spec: This will ensure that the Pod will be scheduled to a node that has the GPU type you specified. -{{% /capture %}} + diff --git a/content/en/docs/tasks/manage-hugepages/scheduling-hugepages.md b/content/en/docs/tasks/manage-hugepages/scheduling-hugepages.md index ad6b969c87ba5..b01ae06df2f63 100644 --- a/content/en/docs/tasks/manage-hugepages/scheduling-hugepages.md +++ b/content/en/docs/tasks/manage-hugepages/scheduling-hugepages.md @@ -2,19 +2,20 @@ reviewers: - derekwaynecarr title: Manage HugePages -content_template: templates/task +content_type: task --- -{{% capture overview %}} + {{< feature-state state="stable" >}} Kubernetes supports the allocation and consumption of pre-allocated huge pages by applications in a Pod as a **GA** feature. This page describes how users can consume huge pages and the current limitations. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + 1. Kubernetes nodes must pre-allocate huge pages in order for the node to report its huge page capacity. A node can pre-allocate huge pages for multiple @@ -23,9 +24,9 @@ can consume huge pages and the current limitations. The nodes will automatically discover and report all huge page resources as schedulable resources. -{{% /capture %}} -{{% capture steps %}} + + ## API @@ -125,5 +126,5 @@ term_id="kube-apiserver" >}} (`--feature-gates=HugePageStorageMediumSize=true`). - NUMA locality guarantees as a feature of quality of service. - LimitRange support. -{{% /capture %}} + diff --git a/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md b/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md index f82e54d364e47..308a4cf9b822a 100644 --- a/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md +++ b/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md @@ -1,27 +1,28 @@ --- title: Declarative Management of Kubernetes Objects Using Configuration Files -content_template: templates/task +content_type: task weight: 10 --- -{{% capture overview %}} + Kubernetes objects can be created, updated, and deleted by storing multiple object configuration files in a directory and using `kubectl apply` to recursively create and update those objects as needed. This method retains writes made to live objects without merging the changes back into the object configuration files. `kubectl diff` also gives you a preview of what changes `apply` will make. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + Install [`kubectl`](/docs/tasks/tools/install-kubectl/). 
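The declarative round trip described above, sketched against a placeholder directory of manifests:

```shell
# Preview what apply would change, then apply the whole tree recursively:
kubectl diff -f configs/ -R
kubectl apply -f configs/ -R

# Live state, including fields set by controllers, remains inspectable:
kubectl get -f configs/ -R -o yaml
```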
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Trade-offs @@ -999,11 +1000,12 @@ template: controller-selector: "apps/v1/deployment/nginx" ``` -{{% capture whatsnext %}} +## {{% heading "whatsnext" %}} + * [Managing Kubernetes Objects Using Imperative Commands](/docs/tasks/manage-kubernetes-objects/imperative-command/) * [Imperative Management of Kubernetes Objects Using Configuration Files](/docs/tasks/manage-kubernetes-objects/imperative-config/) * [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl/) * [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md b/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md index 6b1357a133bb2..dd8b6b0f532b9 100644 --- a/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md +++ b/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md @@ -1,23 +1,24 @@ --- title: Managing Kubernetes Objects Using Imperative Commands -content_template: templates/task +content_type: task weight: 30 --- -{{% capture overview %}} + Kubernetes objects can quickly be created, updated, and deleted directly using imperative commands built into the `kubectl` command-line tool. This document explains how those commands are organized and how to use them to manage live objects. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + Install [`kubectl`](/docs/tasks/tools/install-kubectl/). {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Trade-offs @@ -159,13 +160,14 @@ kubectl create --edit -f /tmp/srv.yaml 1. The `kubectl create service` command creates the configuration for the Service and saves it to `/tmp/srv.yaml`. 1. The `kubectl create --edit` command opens the configuration file for editing before it creates the object. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [Managing Kubernetes Objects Using Object Configuration (Imperative)](/docs/tasks/manage-kubernetes-objects/imperative-config/) * [Managing Kubernetes Objects Using Object Configuration (Declarative)](/docs/tasks/manage-kubernetes-objects/declarative-config/) * [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl/) * [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md b/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md index ec6057cd68e25..97b62e6f0f1b8 100644 --- a/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md +++ b/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md @@ -1,24 +1,25 @@ --- title: Imperative Management of Kubernetes Objects Using Configuration Files -content_template: templates/task +content_type: task weight: 40 --- -{{% capture overview %}} + Kubernetes objects can be created, updated, and deleted by using the `kubectl` command-line tool along with an object configuration file written in YAML or JSON. This document explains how to define and manage objects using configuration files. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + Install [`kubectl`](/docs/tasks/tools/install-kubectl/). 
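As a preview of the imperative object configuration style covered below, each operation names both a verb and a file. A sketch, assuming a hypothetical `nginx.yaml` manifest:

```shell
# Create the objects defined in the file
kubectl create -f nginx.yaml

# Replace the live objects with the file's contents
# (overwrites changes made outside the file)
kubectl replace -f nginx.yaml

# Delete the objects defined in the file
kubectl delete -f nginx.yaml
```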
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Trade-offs @@ -142,13 +143,14 @@ template: controller-selector: "apps/v1/deployment/nginx" ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [Managing Kubernetes Objects Using Imperative Commands](/docs/tasks/manage-kubernetes-objects/imperative-command/) * [Managing Kubernetes Objects Using Object Configuration (Declarative)](/docs/tasks/manage-kubernetes-objects/declarative-config/) * [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl/) * [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/manage-kubernetes-objects/kustomization.md b/content/en/docs/tasks/manage-kubernetes-objects/kustomization.md index c74374a0dc06c..a7d887da3b8a8 100644 --- a/content/en/docs/tasks/manage-kubernetes-objects/kustomization.md +++ b/content/en/docs/tasks/manage-kubernetes-objects/kustomization.md @@ -1,10 +1,10 @@ --- title: Declarative Management of Kubernetes Objects Using Kustomize -content_template: templates/task +content_type: task weight: 20 --- -{{% capture overview %}} + [Kustomize](https://github.com/kubernetes-sigs/kustomize) is a standalone tool to customize Kubernetes objects @@ -24,17 +24,18 @@ To apply those Resources, run `kubectl apply` with `--kustomize` or `-k` flag: kubectl apply -k ``` -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + Install [`kubectl`](/docs/tasks/tools/install-kubectl/). {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Overview of Kustomize @@ -824,13 +825,14 @@ deployment.apps "dev-my-nginx" deleted | configurations | []string | Each entry in this list should resolve to a file containing [Kustomize transformer configurations](https://github.com/kubernetes-sigs/kustomize/tree/master/examples/transformerconfigs) | | crds | []string | Each entry in this list should resolve to an OpenAPI definition file for Kubernetes types | -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [Kustomize](https://github.com/kubernetes-sigs/kustomize) * [Kubectl Book](https://kubectl.docs.kubernetes.io) * [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl/) * [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md b/content/en/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md index 84f86495ae9f5..60d6ae8099487 100644 --- a/content/en/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md +++ b/content/en/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md @@ -1,25 +1,26 @@ --- title: Update API Objects in Place Using kubectl patch description: Use kubectl patch to update Kubernetes API objects in place. Do a strategic merge patch or a JSON merge patch. -content_template: templates/task +content_type: task weight: 50 --- -{{% capture overview %}} + This task shows how to use `kubectl patch` to update an API object in place. The exercises in this task demonstrate a strategic merge patch and a JSON merge patch. 
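Both patch types go through the same `kubectl patch` command; only the `--type` flag and the merge semantics differ. A minimal sketch, assuming a hypothetical Deployment named `patch-demo`:

```shell
# Strategic merge patch (the default type): lists are merged
# according to each field's Kubernetes patch strategy
kubectl patch deployment patch-demo -p '{"spec": {"replicas": 3}}'

# JSON merge patch: lists are replaced wholesale rather than merged
kubectl patch deployment patch-demo --type merge -p '{"spec": {"replicas": 3}}'
```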
-{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Use a strategic merge patch to update a Deployment @@ -330,14 +331,15 @@ create the Deployment object. Other commands for updating API objects include and [kubectl apply](/docs/reference/generated/kubectl/kubectl-commands/#apply). -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [Kubernetes Object Management](/docs/concepts/overview/working-with-objects/object-management/) * [Managing Kubernetes Objects Using Imperative Commands](/docs/tasks/manage-kubernetes-objects/imperative-command/) * [Imperative Management of Kubernetes Objects Using Configuration Files](/docs/tasks/manage-kubernetes-objects/imperative-config/) * [Declarative Management of Kubernetes Objects Using Configuration Files](/docs/tasks/manage-kubernetes-objects/declarative-config/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/network/validate-dual-stack.md b/content/en/docs/tasks/network/validate-dual-stack.md index 0e6d586bea89f..1e21af226d819 100644 --- a/content/en/docs/tasks/network/validate-dual-stack.md +++ b/content/en/docs/tasks/network/validate-dual-stack.md @@ -4,14 +4,15 @@ reviewers: - khenidak min-kubernetes-server-version: v1.16 title: Validate IPv4/IPv6 dual-stack -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This document shares how to validate IPv4/IPv6 dual-stack enabled Kubernetes clusters. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * Provider support for dual-stack networking (Cloud provider or otherwise must be able to provide Kubernetes nodes with routable IPv4/IPv6 network interfaces) * A [network plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) that supports dual-stack (such as Kubenet or Calico) @@ -20,9 +21,9 @@ This document shares how to validate IPv4/IPv6 dual-stack enabled Kubernetes clu {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Validate addressing @@ -158,4 +159,4 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S my-service ClusterIP fe80:20d::d06b 2001:db8:f100:4002::9d37:c0d7 80:31868/TCP 30s ``` -{{% /capture %}} + diff --git a/content/en/docs/tasks/run-application/configure-pdb.md b/content/en/docs/tasks/run-application/configure-pdb.md index d98538c262523..d00ad62e47329 100644 --- a/content/en/docs/tasks/run-application/configure-pdb.md +++ b/content/en/docs/tasks/run-application/configure-pdb.md @@ -1,10 +1,10 @@ --- title: Specifying a Disruption Budget for your Application -content_template: templates/task +content_type: task weight: 110 --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.5" state="beta" >}} @@ -13,9 +13,10 @@ that your application experiences, allowing for higher availability while permitting the cluster administrator to manage the cluster's nodes. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * You are the owner of an application running on a Kubernetes cluster that requires high availability. * You should know how to deploy [Replicated Stateless Applications](/docs/tasks/run-application/run-stateless-application-deployment/) @@ -23,9 +24,9 @@ nodes. * You should have read about [Pod Disruptions](/docs/concepts/workloads/pods/disruptions/).
* You should confirm with your cluster owner or service provider that they respect Pod Disruption Budgets. -{{% /capture %}} -{{% capture steps %}} + + ## Protecting an Application with a PodDisruptionBudget @@ -34,9 +35,9 @@ nodes. 1. Create a PDB definition as a YAML file. 1. Create the PDB object from the YAML file. -{{% /capture %}} -{{% capture discussion %}} + + ## Identify an Application to Protect @@ -238,6 +239,6 @@ You can use a selector which selects a subset or superset of the pods belonging controller. However, when there are multiple PDBs in a namespace, you must be careful not to create PDBs whose selectors overlap. -{{% /capture %}} + diff --git a/content/en/docs/tasks/run-application/delete-stateful-set.md b/content/en/docs/tasks/run-application/delete-stateful-set.md index d37e3ba7a0c46..7a4a94fab4537 100644 --- a/content/en/docs/tasks/run-application/delete-stateful-set.md +++ b/content/en/docs/tasks/run-application/delete-stateful-set.md @@ -6,23 +6,24 @@ reviewers: - janetkuo - smarterclayton title: Delete a StatefulSet -content_template: templates/task +content_type: task weight: 60 --- -{{% capture overview %}} + This task shows you how to delete a {{< glossary_tooltip term_id="StatefulSet" >}}. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * This task assumes you have an application running on your cluster represented by a StatefulSet. -{{% /capture %}} -{{% capture steps %}} + + ## Deleting a StatefulSet @@ -81,12 +82,13 @@ In the example above, the Pods have the label `app=myapp`; substitute your own l If you find that some pods in your StatefulSet are stuck in the 'Terminating' or 'Unknown' states for an extended period of time, you may need to manually intervene to forcefully delete the pods from the apiserver. This is a potentially dangerous task. Refer to [Force Delete StatefulSet Pods](/docs/tasks/run-application/force-delete-stateful-set-pod/) for details. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + Learn more about [force deleting StatefulSet Pods](/docs/tasks/run-application/force-delete-stateful-set-pod/). -{{% /capture %}} + diff --git a/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md b/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md index b2b364f5f962f..48a61a260d964 100644 --- a/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md +++ b/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md @@ -5,22 +5,23 @@ reviewers: - foxish - smarterclayton title: Force Delete StatefulSet Pods -content_template: templates/task +content_type: task weight: 70 --- -{{% capture overview %}} + This page shows how to delete Pods which are part of a {{< glossary_tooltip text="stateful set" term_id="StatefulSet" >}}, and explains the considerations to keep in mind when doing so. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * This is a fairly advanced task and has the potential to violate some of the properties inherent to StatefulSet. * Before proceeding, make yourself familiar with the considerations enumerated below. -{{% /capture %}} -{{% capture steps %}} + + ## StatefulSet considerations @@ -74,10 +75,11 @@ kubectl patch pod -p '{"metadata":{"finalizers":null}}' Always perform force deletion of StatefulSet Pods carefully and with complete knowledge of the risks involved. 
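For reference, the force-deletion command itself is short; the caution lies in deciding when to run it. A sketch, assuming a hypothetical stuck Pod named `web-0`:

```shell
# Delete the Pod object immediately, skipping graceful termination
kubectl delete pods web-0 --grace-period=0 --force
```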
-{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + Learn more about [debugging a StatefulSet](/docs/tasks/debug-application-cluster/debug-stateful-set/). -{{% /capture %}} + diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md index cab3e0af7f708..7f3b046b6838e 100644 --- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md +++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md @@ -5,11 +5,11 @@ reviewers: - justinsb - directxman12 title: Horizontal Pod Autoscaler Walkthrough -content_template: templates/task +content_type: task weight: 100 --- -{{% capture overview %}} + Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization @@ -17,11 +17,12 @@ in a replication controller, deployment, replica set or stateful set based on ob This document walks you through an example of enabling Horizontal Pod Autoscaler for the php-apache server. For more information on how Horizontal Pod Autoscaler behaves, see the [Horizontal Pod Autoscaler user guide](/docs/tasks/run-application/horizontal-pod-autoscale/). -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + This example requires a running Kubernetes cluster and kubectl, version 1.2 or later. [metrics-server](https://github.com/kubernetes-incubator/metrics-server/) monitoring needs to be deployed in the cluster @@ -35,9 +36,9 @@ not related to any Kubernetes object you must have a Kubernetes cluster at versi you must be able to communicate with the API server that provides the external metrics API. See the [Horizontal Pod Autoscaler user guide](/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics) for more details. -{{% /capture %}} -{{% capture steps %}} + + ## Run & expose php-apache server @@ -181,9 +182,9 @@ Here CPU utilization dropped to 0, and so HPA autoscaled the number of replicas Autoscaling the replicas may take a few minutes. {{< /note >}} -{{% /capture %}} -{{% capture discussion %}} + + ## Autoscaling on multiple metrics and custom metrics @@ -483,4 +484,4 @@ kubectl create -f https://k8s.io/examples/application/hpa/php-apache.yaml horizontalpodautoscaler.autoscaling/php-apache created ``` -{{% /capture %}} + diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md index f6852845ec18d..dc6681063fb78 100644 --- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md +++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md @@ -9,11 +9,11 @@ feature: description: > Scale your application up and down with a simple command, with a UI, or automatically based on CPU usage. -content_template: templates/concept +content_type: concept weight: 90 --- -{{% capture overview %}} + The Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization (or, with @@ -26,10 +26,10 @@ The resource determines the behavior of the controller. The controller periodically adjusts the number of replicas in a replication controller or deployment to match the observed average CPU utilization to the target specified by user. 
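The quickest way to put this controller to work on an existing workload is `kubectl autoscale`. A sketch, assuming a hypothetical Deployment named `php-apache` is already running:

```shell
# Target 50% average CPU utilization, scaling between 1 and 10 replicas
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10

# Watch the autoscaler adjust the replica count as load changes
kubectl get hpa php-apache --watch
```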
-{{% /capture %}} -{{% capture body %}} + + ## How does the Horizontal Pod Autoscaler work? @@ -431,12 +431,13 @@ behavior: selectPolicy: Disabled ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Design documentation: [Horizontal Pod Autoscaling](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md). * kubectl autoscale command: [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale). * Usage example of [Horizontal Pod Autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/). -{{% /capture %}} + diff --git a/content/en/docs/tasks/run-application/run-replicated-stateful-application.md b/content/en/docs/tasks/run-application/run-replicated-stateful-application.md index 7a85a740149a1..2a7d255c2b859 100644 --- a/content/en/docs/tasks/run-application/run-replicated-stateful-application.md +++ b/content/en/docs/tasks/run-application/run-replicated-stateful-application.md @@ -7,11 +7,11 @@ reviewers: - kow3ns - smarterclayton title: Run a Replicated Stateful Application -content_template: templates/tutorial +content_type: tutorial weight: 30 --- -{{% capture overview %}} + This page shows how to run a replicated stateful application using a [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) controller. @@ -23,9 +23,10 @@ asynchronous replication. on general patterns for running stateful applications in Kubernetes. {{< /note >}} -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} * {{< include "default-storage-class-prereqs.md" >}} @@ -38,18 +39,19 @@ on general patterns for running stateful applications in Kubernetes. * Some familiarity with MySQL helps, but this tutorial aims to present general patterns that should be useful for other systems. -{{% /capture %}} -{{% capture objectives %}} + +## {{% heading "objectives" %}} + * Deploy a replicated MySQL topology with a StatefulSet controller. * Send MySQL client traffic. * Observe resistance to downtime. * Scale the StatefulSet up and down. -{{% /capture %}} -{{% capture lessoncontent %}} + + ## Deploy MySQL @@ -479,9 +481,10 @@ kubectl delete pvc data-mysql-3 kubectl delete pvc data-mysql-4 ``` -{{% /capture %}} -{{% capture cleanup %}} + +## {{% heading "cleanup" %}} + 1. Cancel the `SELECT @@server_id` loop by pressing **Ctrl+C** in its terminal, or running the following from another terminal: @@ -522,9 +525,10 @@ kubectl delete pvc data-mysql-4 Some dynamic provisioners (such as those for EBS and PD) also release the underlying resources upon deleting the PersistentVolumes. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [scaling a StatefulSet](/docs/tasks/run-application/scale-stateful-set/). * Learn more about [debugging a StatefulSet](/docs/tasks/debug-application-cluster/debug-stateful-set/). * Learn more about [deleting a StatefulSet](/docs/tasks/run-application/delete-stateful-set/). @@ -532,7 +536,7 @@ kubectl delete pvc data-mysql-4 * Look in the [Helm Charts repository](https://github.com/kubernetes/charts) for other stateful application examples. 
-{{% /capture %}} + diff --git a/content/en/docs/tasks/run-application/run-single-instance-stateful-application.md b/content/en/docs/tasks/run-application/run-single-instance-stateful-application.md index 777265c68bedf..4c43948a215c8 100644 --- a/content/en/docs/tasks/run-application/run-single-instance-stateful-application.md +++ b/content/en/docs/tasks/run-application/run-single-instance-stateful-application.md @@ -1,37 +1,39 @@ --- title: Run a Single-Instance Stateful Application -content_template: templates/tutorial +content_type: tutorial weight: 20 --- -{{% capture overview %}} + This page shows you how to run a single-instance stateful application in Kubernetes using a PersistentVolume and a Deployment. The application is MySQL. -{{% /capture %}} -{{% capture objectives %}} + +## {{% heading "objectives" %}} + * Create a PersistentVolume referencing a disk in your environment. * Create a MySQL Deployment. * Expose MySQL to other pods in the cluster at a known DNS name. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} * {{< include "default-storage-class-prereqs.md" >}} -{{% /capture %}} -{{% capture lessoncontent %}} + + ## Deploy MySQL @@ -180,10 +182,11 @@ PersistentVolume when it sees that you deleted the PersistentVolumeClaim. Some dynamic provisioners (such as those for EBS and PD) also release the underlying resource upon deleting the PersistentVolume. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [Deployment objects](/docs/concepts/workloads/controllers/deployment/). @@ -193,6 +196,6 @@ underlying resource upon deleting the PersistentVolume. * [Volumes](/docs/concepts/storage/volumes/) and [Persistent Volumes](/docs/concepts/storage/persistent-volumes/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/run-application/run-stateless-application-deployment.md b/content/en/docs/tasks/run-application/run-stateless-application-deployment.md index 68d41b5a837e2..9e6ed4a25e0f7 100644 --- a/content/en/docs/tasks/run-application/run-stateless-application-deployment.md +++ b/content/en/docs/tasks/run-application/run-stateless-application-deployment.md @@ -1,34 +1,36 @@ --- title: Run a Stateless Application Using a Deployment min-kubernetes-server-version: v1.9 -content_template: templates/tutorial +content_type: tutorial weight: 10 --- -{{% capture overview %}} + This page shows how to run an application using a Kubernetes Deployment object. -{{% /capture %}} -{{% capture objectives %}} + +## {{% heading "objectives" %}} + * Create an nginx deployment. * Use kubectl to list information about the deployment. * Update the deployment. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture lessoncontent %}} + + ## Creating and exploring an nginx deployment @@ -146,13 +148,14 @@ which in turn uses a ReplicaSet. Before the Deployment and ReplicaSet were added to Kubernetes, replicated applications were configured using a [ReplicationController](/docs/concepts/workloads/controllers/replicationcontroller/). -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [Deployment objects](/docs/concepts/workloads/controllers/deployment/). 
-{{% /capture %}} + diff --git a/content/en/docs/tasks/run-application/scale-stateful-set.md b/content/en/docs/tasks/run-application/scale-stateful-set.md index 462025836dafe..6e34babf9d14d 100644 --- a/content/en/docs/tasks/run-application/scale-stateful-set.md +++ b/content/en/docs/tasks/run-application/scale-stateful-set.md @@ -8,15 +8,16 @@ reviewers: - kow3ns - smarterclayton title: Scale a StatefulSet -content_template: templates/task +content_type: task weight: 50 --- -{{% capture overview %}} + This task shows how to scale a StatefulSet. Scaling a StatefulSet refers to increasing or decreasing the number of replicas. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * StatefulSets are only available in Kubernetes version 1.5 or later. To check your version of Kubernetes, run `kubectl version`. @@ -26,9 +27,9 @@ This task shows how to scale a StatefulSet. Scaling a StatefulSet refers to incr * You should perform scaling only when you are confident that your stateful application cluster is completely healthy. -{{% /capture %}} -{{% capture steps %}} + + ## Scaling StatefulSets @@ -90,10 +91,11 @@ to reason about scaling operations at the application level in these cases, and perform scaling only when you are sure that your stateful application cluster is completely healthy. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [deleting a StatefulSet](/docs/tasks/run-application/delete-stateful-set/). -{{% /capture %}} + diff --git a/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md b/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md index 73268ff71417f..499bc1fa90803 100644 --- a/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md +++ b/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md @@ -2,18 +2,19 @@ title: Install Service Catalog using Helm reviewers: - chenopis -content_template: templates/task +content_type: task --- -{{% capture overview %}} + {{< glossary_definition term_id="service-catalog" length="all" prepend="Service Catalog is" >}} Use [Helm](https://helm.sh/) to install Service Catalog on your Kubernetes cluster. Up to date information on this process can be found at the [kubernetes-sigs/service-catalog](https://github.com/kubernetes-sigs/service-catalog/blob/master/docs/install.md) repo. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * Understand the key concepts of [Service Catalog](/docs/concepts/service-catalog/). * Service Catalog requires a Kubernetes cluster running version 1.7 or higher. * You must have a Kubernetes cluster with cluster DNS enabled. @@ -24,10 +25,10 @@ Use [Helm](https://helm.sh/) to install Service Catalog on your Kubernetes clust * Follow the [Helm install instructions](https://helm.sh/docs/intro/install/). * If you already have an appropriate version of Helm installed, execute `helm init` to install Tiller, the server-side component of Helm. 
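Since this task assumes Helm 2, you can confirm the client is present and install Tiller with a short sequence. A sketch:

```shell
# Verify the locally installed Helm client (this task assumes Helm 2)
helm version --client

# Install Tiller, Helm 2's server-side component, into the cluster
helm init
```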
-{{% /capture %}} -{{% capture steps %}} + + ## Add the service-catalog Helm repository Once Helm is installed, add the *service-catalog* Helm repository to your local machine by executing the following command: @@ -105,11 +106,12 @@ helm install svc-cat/catalog --name catalog --namespace catalog ``` {{% /tab %}} {{< /tabs >}} -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * View [sample service brokers](https://github.com/openservicebrokerapi/servicebroker/blob/master/gettingStarted.md#sample-service-brokers). * Explore the [kubernetes-sigs/service-catalog](https://github.com/kubernetes-sigs/service-catalog) project. -{{% /capture %}} + diff --git a/content/en/docs/tasks/service-catalog/install-service-catalog-using-sc.md b/content/en/docs/tasks/service-catalog/install-service-catalog-using-sc.md index 2a50ca2ff8d07..a45474e297bb3 100644 --- a/content/en/docs/tasks/service-catalog/install-service-catalog-using-sc.md +++ b/content/en/docs/tasks/service-catalog/install-service-catalog-using-sc.md @@ -2,10 +2,10 @@ title: Install Service Catalog using SC reviewers: - chenopis -content_template: templates/task +content_type: task --- -{{% capture overview %}} + {{< glossary_definition term_id="service-catalog" length="all" prepend="Service Catalog is" >}} You can use the GCP [Service Catalog Installer](https://github.com/GoogleCloudPlatform/k8s-service-catalog#installation) @@ -14,10 +14,11 @@ Google Cloud projects. Service Catalog itself can work with any kind of managed service, not just Google Cloud. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + * Understand the key concepts of [Service Catalog](/docs/concepts/service-catalog/). * Install [Go 1.6+](https://golang.org/dl/) and set the `GOPATH`. * Install the [cfssl](https://github.com/cloudflare/cfssl) tool needed for generating SSL artifacts. @@ -27,10 +28,10 @@ Service Catalog itself can work with any kind of managed service, not just Googl kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user= -{{% /capture %}} -{{% capture steps %}} + + ## Install `sc` in your local environment The installer runs on your local computer as a CLI tool named `sc`. @@ -71,11 +72,12 @@ If you would like to uninstall Service Catalog from your Kubernetes cluster usin sc uninstall ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * View [sample service brokers](https://github.com/openservicebrokerapi/servicebroker/blob/master/gettingStarted.md#sample-service-brokers). * Explore the [kubernetes-incubator/service-catalog](https://github.com/kubernetes-incubator/service-catalog) project. -{{% /capture %}} + diff --git a/content/en/docs/tasks/setup-konnectivity/setup-konnectivity.md b/content/en/docs/tasks/setup-konnectivity/setup-konnectivity.md index b5dbd052152b5..da91611e1768a 100644 --- a/content/en/docs/tasks/setup-konnectivity/setup-konnectivity.md +++ b/content/en/docs/tasks/setup-konnectivity/setup-konnectivity.md @@ -1,23 +1,24 @@ --- title: Set up Konnectivity service -content_template: templates/task +content_type: task weight: 70 --- -{{% capture overview %}} + The Konnectivity service provides TCP level proxy for the Master → Cluster communication. 
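At a high level, the setup below points the kube-apiserver at an egress selector configuration that routes cluster-bound traffic through the Konnectivity server. A minimal sketch of such a configuration; treat the field values, socket path, and API version as assumptions to verify against your Kubernetes release:

```yaml
apiVersion: apiserver.k8s.io/v1beta1   # assumed API version for this release
kind: EgressSelectorConfiguration
egressSelections:
- name: cluster                        # traffic destined for the cluster network
  connection:
    proxyProtocol: GRPC                # assumed; HTTPConnect is another option
    transport:
      uds:
        udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket  # assumed socket path
```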
-{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} -{{% /capture %}} -{{% capture steps %}} + + ## Configure the Konnectivity service @@ -49,4 +50,3 @@ Last, if RBAC is enabled in your cluster, create the relevant RBAC rules: {{< codenew file="admin/konnectivity/konnectivity-rbac.yaml" >}} -{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/tasks/tls/certificate-rotation.md b/content/en/docs/tasks/tls/certificate-rotation.md index 3cf55db335af0..f7c6b55a36363 100644 --- a/content/en/docs/tasks/tls/certificate-rotation.md +++ b/content/en/docs/tasks/tls/certificate-rotation.md @@ -3,22 +3,23 @@ reviewers: - jcbsmpsn - mikedanese title: Certificate Rotation -content_template: templates/task +content_type: task --- -{{% capture overview %}} + This page shows how to enable and configure certificate rotation for the kubelet. -{{% /capture %}} + {{< feature-state for_k8s_version="v1.8" state="beta" >}} -{{% capture prerequisites %}} +## {{% heading "prerequisites" %}} + * Kubernetes version 1.8.0 or later is required -{{% /capture %}} -{{% capture steps %}} + + ## Overview @@ -77,6 +78,6 @@ kubelet will retrieve the new signed certificate from the Kubernetes API and write that to disk. Then it will update the connections it has to the Kubernetes API to reconnect using the new certificate. -{{% /capture %}} + diff --git a/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md b/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md index 7cd4cc8be5866..5098d353d8262 100644 --- a/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md +++ b/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md @@ -1,13 +1,13 @@ --- title: Manage TLS Certificates in a Cluster -content_template: templates/task +content_type: task reviewers: - mikedanese - beacham - liggit --- -{{% capture overview %}} + Kubernetes provides a `certificates.k8s.io` API, which lets you provision TLS certificates signed by a Certificate Authority (CA) that you control. These CA @@ -23,16 +23,17 @@ CA for this purpose, but you should never rely on this. Do not assume that these certificates will validate against the cluster root CA. {{< /note >}} -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -{{% /capture %}} -{{% capture steps %}} + + ## Trusting TLS in a Cluster @@ -222,4 +223,4 @@ enable it, pass the `--cluster-signing-cert-file` and `--cluster-signing-key-file` parameters to the controller manager with paths to your Certificate Authority's keypair. -{{% /capture %}} + diff --git a/content/en/docs/tasks/tools/install-kubectl.md b/content/en/docs/tasks/tools/install-kubectl.md index 131362109a034..6dcad6b39c030 100644 --- a/content/en/docs/tasks/tools/install-kubectl.md +++ b/content/en/docs/tasks/tools/install-kubectl.md @@ -2,7 +2,7 @@ reviewers: - mikedanese title: Install and Set Up kubectl -content_template: templates/task +content_type: task weight: 10 card: name: tasks @@ -10,15 +10,16 @@ card: title: Install kubectl --- -{{% capture overview %}} + The Kubernetes command-line tool, [kubectl](/docs/user-guide/kubectl/), allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs. For a complete list of kubectl operations, see [Overview of kubectl](/docs/reference/kubectl/overview/). 
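Throughout this page you can confirm which client version you have installed with a single command:

```shell
kubectl version --client
```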
-{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + You must use a kubectl version that is within one minor version difference of your cluster. For example, a v1.2 client should work with v1.1, v1.2, and v1.3 master. Using the latest version of kubectl helps avoid unforeseen issues. -{{% /capture %}} -{{% capture steps %}} + + ## Install kubectl on Linux @@ -508,12 +509,13 @@ compinit {{< /tabs >}} -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [Install Minikube](/docs/tasks/tools/install-minikube/) * See the [getting started guides](/docs/setup/) for more about creating clusters. * [Learn how to launch and expose your application.](/docs/tasks/access-application-cluster/service-access-application-cluster/) * If you need access to a cluster you didn't create, see the [Sharing Cluster Access document](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/). * Read the [kubectl reference docs](/docs/reference/kubectl/kubectl/) -{{% /capture %}} + diff --git a/content/en/docs/tasks/tools/install-minikube.md b/content/en/docs/tasks/tools/install-minikube.md index 50e4436dec88a..84c6dd03410b7 100644 --- a/content/en/docs/tasks/tools/install-minikube.md +++ b/content/en/docs/tasks/tools/install-minikube.md @@ -1,19 +1,20 @@ --- title: Install Minikube -content_template: templates/task +content_type: task weight: 20 card: name: tasks weight: 10 --- -{{% capture overview %}} + This page shows you how to install [Minikube](/docs/tutorials/hello-minikube), a tool that runs a single-node Kubernetes cluster in a virtual machine on your personal computer. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< tabs name="minikube_before_you_begin" >}} {{% tab name="Linux" %}} @@ -53,9 +54,9 @@ Hyper-V Requirements: A hypervisor has been detected. Features required for {{% /tab %}} {{< /tabs >}} -{{% /capture %}} -{{% capture steps %}} + + # Installing minikube @@ -200,13 +201,14 @@ To install Minikube manually on Windows, download [`minikube-windows-amd64`](htt {{< /tabs >}} -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * [Running Kubernetes Locally via Minikube](/docs/setup/learning-environment/minikube/) -{{% /capture %}} + ## Confirm Installation diff --git a/content/en/docs/tutorials/_index.md b/content/en/docs/tutorials/_index.md index 9f8de2129e658..95b8ec9e1fc25 100644 --- a/content/en/docs/tutorials/_index.md +++ b/content/en/docs/tutorials/_index.md @@ -2,10 +2,10 @@ title: Tutorials main_menu: true weight: 60 -content_template: templates/concept +content_type: concept --- -{{% capture overview %}} + This section of the Kubernetes documentation contains tutorials. A tutorial shows how to accomplish a goal that is larger than a single @@ -14,9 +14,9 @@ each of which has a sequence of steps. Before walking through each tutorial, you may want to bookmark the [Standardized Glossary](/docs/reference/glossary/) page for later references. -{{% /capture %}} -{{% capture body %}} + + ## Basics @@ -64,12 +64,13 @@ Before walking through each tutorial, you may want to bookmark the * [Using Source IP](/docs/tutorials/services/source-ip/) -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + If you would like to write a tutorial, see [Using Page Templates](/docs/home/contribute/page-templates/) for information about the tutorial page type and the tutorial template. 
-{{% /capture %}} + diff --git a/content/en/docs/tutorials/clusters/apparmor.md b/content/en/docs/tutorials/clusters/apparmor.md index ae1de98ab27ca..d791a57e33ae6 100644 --- a/content/en/docs/tutorials/clusters/apparmor.md +++ b/content/en/docs/tutorials/clusters/apparmor.md @@ -2,10 +2,10 @@ reviewers: - stclair title: AppArmor -content_template: templates/tutorial +content_type: tutorial --- -{{% capture overview %}} + {{< feature-state for_k8s_version="v1.4" state="beta" >}} @@ -24,9 +24,10 @@ that AppArmor is not a silver bullet and can only do so much to protect against application code. It is important to provide good, restrictive profiles, and harden your applications and cluster from other angles as well. -{{% /capture %}} -{{% capture objectives %}} + +## {{% heading "objectives" %}} + * See an example of how to load a profile on a node * Learn how to enforce the profile on a Pod @@ -34,9 +35,10 @@ applications and cluster from other angles as well. * See what happens when a profile is violated * See what happens when a profile cannot be loaded -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + Make sure: @@ -111,9 +113,9 @@ gke-test-default-pool-239f5d02-x1kf: kubelet is posting ready status. AppArmor e gke-test-default-pool-239f5d02-xwux: kubelet is posting ready status. AppArmor enabled ``` -{{% /capture %}} -{{% capture lessoncontent %}} + + ## Securing a Pod @@ -458,13 +460,14 @@ Specifying the list of profiles Pod containers is allowed to specify: - Although an escaped comma is a legal character in a profile name, it cannot be explicitly allowed here. -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + Additional resources: * [Quick guide to the AppArmor profile language](https://gitlab.com/apparmor/apparmor/wikis/QuickProfileLanguage) * [AppArmor core policy reference](https://gitlab.com/apparmor/apparmor/wikis/Policy_Layout) -{{% /capture %}} + diff --git a/content/en/docs/tutorials/configuration/configure-redis-using-configmap.md b/content/en/docs/tutorials/configuration/configure-redis-using-configmap.md index 7ae7fb087b906..37f6f9e014b59 100644 --- a/content/en/docs/tutorials/configuration/configure-redis-using-configmap.md +++ b/content/en/docs/tutorials/configuration/configure-redis-using-configmap.md @@ -3,16 +3,17 @@ reviewers: - eparis - pmorie title: Configuring Redis using a ConfigMap -content_template: templates/tutorial +content_type: tutorial --- -{{% capture overview %}} + This page provides a real world example of how to configure Redis using a ConfigMap and builds upon the [Configure Containers Using a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) task. -{{% /capture %}} -{{% capture objectives %}} + +## {{% heading "objectives" %}} + * Create a `kustomization.yaml` file containing: * a ConfigMap generator @@ -20,18 +21,19 @@ This page provides a real world example of how to configure Redis using a Config * Apply the directory by running `kubectl apply -k ./` * Verify that the configuration was correctly applied. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} * The example shown on this page works with `kubectl` 1.14 and above. * Understand [Configure Containers Using a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/). 
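To preview what the lesson builds, the Redis ConfigMap is produced by a generator rather than written by hand. A minimal sketch of such a `kustomization.yaml`, assuming a local `redis-config` file holds the Redis settings:

```yaml
configMapGenerator:
- name: example-redis-config   # assumed name; the generated ConfigMap gains a content-hash suffix
  files:
  - redis-config               # assumed local file containing the Redis configuration
```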
-{{% /capture %}} -{{% capture lessoncontent %}} + + ## Real World Example: Configuring Redis using a ConfigMap @@ -105,12 +107,13 @@ Delete the created pod: kubectl delete pod redis ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/). -{{% /capture %}} + diff --git a/content/en/docs/tutorials/hello-minikube.md b/content/en/docs/tutorials/hello-minikube.md index de6875b582a22..9ba2de1abfbb3 100644 --- a/content/en/docs/tutorials/hello-minikube.md +++ b/content/en/docs/tutorials/hello-minikube.md @@ -1,6 +1,6 @@ --- title: Hello Minikube -content_template: templates/tutorial +content_type: tutorial weight: 5 menu: main: @@ -13,7 +13,7 @@ card: weight: 10 --- -{{% capture overview %}} + This tutorial shows you how to run a sample app on Kubernetes using [Minikube](/docs/setup/learning-environment/minikube) and Katacoda. @@ -23,23 +23,25 @@ Katacoda provides a free, in-browser Kubernetes environment. You can also follow this tutorial if you've installed [Minikube locally](/docs/tasks/tools/install-minikube/). {{< /note >}} -{{% /capture %}} -{{% capture objectives %}} + +## {{% heading "objectives" %}} + * Deploy a sample application to Minikube. * Run the app. * View application logs. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + This tutorial provides a container image that uses NGINX to echo back all the requests. -{{% /capture %}} -{{% capture lessoncontent %}} + + ## Create a Minikube cluster @@ -272,12 +274,13 @@ Optionally, delete the Minikube VM: minikube delete ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [Deployment objects](/docs/concepts/workloads/controllers/deployment/). * Learn more about [Deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/). * Learn more about [Service objects](/docs/concepts/services-networking/service/). -{{% /capture %}} + diff --git a/content/en/docs/tutorials/services/source-ip.md b/content/en/docs/tutorials/services/source-ip.md index ca3a2bb409095..03a9bb097c658 100644 --- a/content/en/docs/tutorials/services/source-ip.md +++ b/content/en/docs/tutorials/services/source-ip.md @@ -1,19 +1,20 @@ --- title: Using Source IP -content_template: templates/tutorial +content_type: tutorial min-kubernetes-server-version: v1.5 --- -{{% capture overview %}} + Applications running in a Kubernetes cluster find and communicate with each other, and the outside world, through the Service abstraction. This document explains what happens to the source IP of packets sent to different types of Services, and how you can toggle this behavior according to your needs. 
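The main toggle discussed below is the Service's `spec.externalTrafficPolicy` field. For instance, switching an existing Service to preserve client source IPs looks like this (a sketch, assuming a NodePort Service named `nodeport` as in the exercises that follow):

```shell
kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```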
-{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + ### Terminology @@ -54,18 +55,19 @@ The output is: deployment.apps/source-ip-app created ``` -{{% /capture %}} -{{% capture objectives %}} + +## {{% heading "objectives" %}} + * Expose a simple application through various types of Services * Understand how each Service type handles source IP NAT * Understand the tradeoffs involved in preserving source IP -{{% /capture %}} -{{% capture lessoncontent %}} + + ## Source IP for Services with `Type=ClusterIP` @@ -423,9 +425,10 @@ Load balancers in the second category can leverage the feature described above by creating an HTTP health check pointing at the port stored in the `service.spec.healthCheckNodePort` field on the Service. -{{% /capture %}} -{{% capture cleanup %}} + +## {{% heading "cleanup" %}} + Delete the Services: @@ -439,10 +442,11 @@ Delete the Deployment, ReplicaSet and Pod: kubectl delete deployment source-ip-app ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [connecting applications via services](/docs/concepts/services-networking/connect-applications-service/) * Read how to [Create an External Load Balancer](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/) -{{% /capture %}} + diff --git a/content/en/docs/tutorials/stateful-application/basic-stateful-set.md b/content/en/docs/tutorials/stateful-application/basic-stateful-set.md index e8f3156694996..235de6cfaa0c0 100644 --- a/content/en/docs/tutorials/stateful-application/basic-stateful-set.md +++ b/content/en/docs/tutorials/stateful-application/basic-stateful-set.md @@ -7,17 +7,18 @@ reviewers: - kow3ns - smarterclayton title: StatefulSet Basics -content_template: templates/tutorial +content_type: tutorial weight: 10 --- -{{% capture overview %}} + This tutorial provides an introduction to managing applications with [StatefulSets](/docs/concepts/workloads/controllers/statefulset/). It demonstrates how to create, delete, scale, and update the Pods of StatefulSets. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + Before you begin this tutorial, you should familiarize yourself with the following Kubernetes concepts. @@ -33,9 +34,10 @@ This tutorial assumes that your cluster is configured to dynamically provision PersistentVolumes. If your cluster is not configured to do so, you will have to manually provision two 1 GiB volumes prior to starting this tutorial. -{{% /capture %}} -{{% capture objectives %}} + +## {{% heading "objectives" %}} + StatefulSets are intended to be used with stateful applications and distributed systems. However, the administration of stateful applications and distributed systems on Kubernetes is a broad, complex topic. In order to @@ -49,9 +51,9 @@ After this tutorial, you will be familiar with the following. * How to delete a StatefulSet * How to scale a StatefulSet * How to update a StatefulSet's Pods -{{% /capture %}} -{{% capture lessoncontent %}} + + ## Creating a StatefulSet Begin by creating a StatefulSet using the example below. It is similar to the @@ -1035,13 +1037,14 @@ Service. ```shell kubectl delete svc nginx ``` -{{% /capture %}} -{{% capture cleanup %}} + +## {{% heading "cleanup" %}} + You will need to delete the persistent storage media for the PersistentVolumes used in this tutorial. 
Follow the necessary steps, based on your environment, storage configuration, and provisioning method, to ensure that all storage is reclaimed. -{{% /capture %}} + diff --git a/content/en/docs/tutorials/stateful-application/cassandra.md b/content/en/docs/tutorials/stateful-application/cassandra.md index f55a852abb96e..3fa56b26eaef3 100644 --- a/content/en/docs/tutorials/stateful-application/cassandra.md +++ b/content/en/docs/tutorials/stateful-application/cassandra.md @@ -2,11 +2,11 @@ title: "Example: Deploying Cassandra with a StatefulSet" reviewers: - ahmetb -content_template: templates/tutorial +content_type: tutorial weight: 30 --- -{{% capture overview %}} + This tutorial shows you how to run [Apache Cassandra](http://cassandra.apache.org/) on Kubernetes. Cassandra, a database, needs persistent storage to provide data durability (application _state_). In this example, a custom Cassandra seed provider lets the database discover new Cassandra instances as they join the Cassandra cluster. *StatefulSets* make it easier to deploy stateful applications into your Kubernetes cluster. For more information on the features used in this tutorial, see [StatefulSet](/docs/concepts/workloads/controllers/statefulset/). @@ -23,17 +23,19 @@ nodes in the ring. This tutorial deploys a custom Cassandra seed provider that lets the database discover new Cassandra Pods as they appear inside your Kubernetes cluster. {{< /note >}} -{{% /capture %}} -{{% capture objectives %}} + +## {{% heading "objectives" %}} + * Create and validate a Cassandra headless {{< glossary_tooltip text="Service" term_id="service" >}}. * Use a {{< glossary_tooltip term_id="StatefulSet" >}} to create a Cassandra ring. * Validate the StatefulSet. * Modify the StatefulSet. * Delete the StatefulSet and its {{< glossary_tooltip text="Pods" term_id="pod" >}}. -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} To complete this tutorial, you should already have a basic familiarity with {{< glossary_tooltip text="Pods" term_id="pod" >}}, {{< glossary_tooltip text="Services" term_id="service" >}}, and {{< glossary_tooltip text="StatefulSets" term_id="StatefulSet" >}}. @@ -48,9 +50,9 @@ minikube start --memory 5120 --cpus=4 ``` {{< /caution >}} -{{% /capture %}} -{{% capture lessoncontent %}} + + ## Creating a headless Service for Cassandra {#creating-a-cassandra-headless-service} In Kubernetes, a {{< glossary_tooltip text="Service" term_id="service" >}} describes a set of {{< glossary_tooltip text="Pods" term_id="pod" >}} that perform the same task. @@ -219,9 +221,10 @@ Use `kubectl edit` to modify the size of a Cassandra StatefulSet. cassandra 4 4 36m ``` -{{% /capture %}} -{{% capture cleanup %}} + +## {{% heading "cleanup" %}} + Deleting or scaling a StatefulSet down does not delete the volumes associated with the StatefulSet. This setting is for your safety because your data is more valuable than automatically purging all related StatefulSet resources. {{< warning >}} @@ -261,12 +264,13 @@ By using environment variables you can change values that are inserted into `cas | `CASSANDRA_RPC_ADDRESS` | `0.0.0.0` | -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn how to [Scale a StatefulSet](/docs/tasks/run-application/scale-stateful-set/). 
* Learn more about the [*KubernetesSeedProvider*](https://github.com/kubernetes/examples/blob/master/cassandra/java/src/main/java/io/k8s/cassandra/KubernetesSeedProvider.java) * See more custom [Seed Provider Configurations](https://git.k8s.io/examples/cassandra/java/README.md) -{{% /capture %}} + diff --git a/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md b/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md index 0f97c2160b61c..eb389abf36f4c 100644 --- a/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md +++ b/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md @@ -2,7 +2,7 @@ title: "Example: Deploying WordPress and MySQL with Persistent Volumes" reviewers: - ahmetb -content_template: templates/tutorial +content_type: tutorial weight: 20 card: name: tutorials @@ -10,7 +10,7 @@ card: title: "Stateful Example: Wordpress with Persistent Volumes" --- -{{% capture overview %}} + This tutorial shows you how to deploy a WordPress site and a MySQL database using Minikube. Both applications use PersistentVolumes and PersistentVolumeClaims to store data. A [PersistentVolume](/docs/concepts/storage/persistent-volumes/) (PV) is a piece of storage in the cluster that has been manually provisioned by an administrator, or dynamically provisioned by Kubernetes using a [StorageClass](/docs/concepts/storage/storage-classes). A [PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) (PVC) is a request for storage by a user that can be fulfilled by a PV. PersistentVolumes and PersistentVolumeClaims are independent from Pod lifecycles and preserve data through restarting, rescheduling, and even deleting Pods. @@ -23,9 +23,10 @@ This deployment is not suitable for production use cases, as it uses single inst The files provided in this tutorial are using GA Deployment APIs and are specific to kubernetes version 1.9 and later. If you wish to use this tutorial with an earlier version of Kubernetes, please update the API version appropriately, or reference earlier versions of this tutorial. {{< /note >}} -{{% /capture %}} -{{% capture objectives %}} + +## {{% heading "objectives" %}} + * Create PersistentVolumeClaims and PersistentVolumes * Create a `kustomization.yaml` with * a Secret generator @@ -34,9 +35,10 @@ The files provided in this tutorial are using GA Deployment APIs and are specifi * Apply the kustomization directory by `kubectl apply -k ./` * Clean up -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} The example shown on this page works with `kubectl` 1.14 and above. @@ -47,9 +49,9 @@ Download the following configuration files: 1. [wordpress-deployment.yaml](/examples/application/wordpress/wordpress-deployment.yaml) -{{% /capture %}} -{{% capture lessoncontent %}} + + ## Create PersistentVolumeClaims and PersistentVolumes @@ -218,9 +220,10 @@ Now you can verify that all objects exist. Do not leave your WordPress installation on this page. If another user finds it, they can set up a website on your instance and use it to serve malicious content.

Either install WordPress by creating a username and password or delete your instance. {{< /warning >}} -{{% /capture %}} -{{% capture cleanup %}} + +## {{% heading "cleanup" %}} + 1. Run the following command to delete your Secret, Deployments, Services and PersistentVolumeClaims: @@ -228,14 +231,15 @@ Do not leave your WordPress installation on this page. If another user finds it, kubectl delete -k ./ ``` -{{% /capture %}} -{{% capture whatsnext %}} + +## {{% heading "whatsnext" %}} + * Learn more about [Introspection and Debugging](/docs/tasks/debug-application-cluster/debug-application-introspection/) * Learn more about [Jobs](/docs/concepts/workloads/controllers/jobs-run-to-completion/) * Learn more about [Port Forwarding](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) * Learn how to [Get a Shell to a Container](/docs/tasks/debug-application-cluster/get-shell-running-container/) -{{% /capture %}} + diff --git a/content/en/docs/tutorials/stateful-application/zookeeper.md b/content/en/docs/tutorials/stateful-application/zookeeper.md index ee58827f83a39..3bed3e059c0f1 100644 --- a/content/en/docs/tutorials/stateful-application/zookeeper.md +++ b/content/en/docs/tutorials/stateful-application/zookeeper.md @@ -8,18 +8,19 @@ reviewers: - kow3ns - smarterclayton title: Running ZooKeeper, A Distributed System Coordinator -content_template: templates/tutorial +content_type: tutorial weight: 40 --- -{{% capture overview %}} + This tutorial demonstrates running [Apache Zookeeper](https://zookeeper.apache.org) on Kubernetes using [StatefulSets](/docs/concepts/workloads/controllers/statefulset/), [PodDisruptionBudgets](/docs/concepts/workloads/pods/disruptions/#specifying-a-poddisruptionbudget), and [PodAntiAffinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature). -{{% /capture %}} -{{% capture prerequisites %}} + +## {{% heading "prerequisites" %}} + Before starting this tutorial, you should be familiar with the following Kubernetes concepts. @@ -40,18 +41,19 @@ This tutorial assumes that you have configured your cluster to dynamically provi PersistentVolumes. If your cluster is not configured to do so, you will have to manually provision three 20 GiB volumes before starting this tutorial. -{{% /capture %}} -{{% capture objectives %}} + +## {{% heading "objectives" %}} + After this tutorial, you will know the following. - How to deploy a ZooKeeper ensemble using StatefulSet. - How to consistently configure the ensemble using ConfigMaps. - How to spread the deployment of ZooKeeper servers in the ensemble. - How to use PodDisruptionBudgets to ensure service availability during planned maintenance. - {{% /capture %}} + -{{% capture lessoncontent %}} + ### ZooKeeper Basics @@ -1090,9 +1092,10 @@ node "kubernetes-node-ixsl" uncordoned You can use `kubectl drain` in conjunction with `PodDisruptionBudgets` to ensure that your services remain available during maintenance. If drain is used to cordon nodes and evict pods prior to taking the node offline for maintenance, services that express a disruption budget will have that budget respected. You should always allocate additional capacity for critical services so that their Pods can be immediately rescheduled. -{{% /capture %}} -{{% capture cleanup %}} + +## {{% heading "cleanup" %}} + - Use `kubectl uncordon` to uncordon all the nodes in your cluster. 
diff --git a/content/en/docs/tutorials/stateless-application/expose-external-ip-address.md b/content/en/docs/tutorials/stateless-application/expose-external-ip-address.md
index 4f4dbda986caa..2974c77c94966 100644
--- a/content/en/docs/tutorials/stateless-application/expose-external-ip-address.md
+++ b/content/en/docs/tutorials/stateless-application/expose-external-ip-address.md
@@ -1,18 +1,19 @@
---
title: Exposing an External IP Address to Access an Application in a Cluster
-content_template: templates/tutorial
+content_type: tutorial
weight: 10
---

-{{% capture overview %}}
+
This page shows how to create a Kubernetes Service object that exposes an external IP address.

-{{% /capture %}}

-{{% capture prerequisites %}}
+
+## {{% heading "prerequisites" %}}
+
* Install [kubectl](/docs/tasks/tools/install-kubectl/).
@@ -24,19 +25,20 @@ external IP address.
* Configure `kubectl` to communicate with your Kubernetes API server. For instructions, see the documentation for your cloud provider.

-{{% /capture %}}

-{{% capture objectives %}}
+
+## {{% heading "objectives" %}}
+
* Run five instances of a Hello World application.
* Create a Service object that exposes an external IP address.
* Use the Service object to access the running application.

-{{% /capture %}}

-{{% capture lessoncontent %}}
+
+

## Creating a service for an application running in five pods

@@ -148,10 +150,11 @@ The preceding command creates a

Hello Kubernetes!

-{{% /capture %}}

-{{% capture cleanup %}}
+
+## {{% heading "cleanup" %}}
+
To delete the Service, enter this command:

@@ -162,11 +165,12 @@ the Hello World application, enter this command:

kubectl delete deployment hello-world

-{{% /capture %}}

-{{% capture whatsnext %}}
+
+## {{% heading "whatsnext" %}}
+
Learn more about [connecting applications with services](/docs/concepts/services-networking/connect-applications-service/).

-{{% /capture %}}
+
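The tutorial converted above builds its Service imperatively; a declarative equivalent would look roughly like the following sketch, where the Service name, the `app: hello-world` selector, and port 8080 are assumptions for illustration rather than values taken from the patch:

```yaml
# Sketch of a Service of type LoadBalancer: the cloud provider assigns
# an external IP and forwards traffic to matching Pods on port 8080.
# metadata.name, the selector, and the ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: hello-world
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
```

Once the cloud provider provisions the load balancer, `kubectl get services` shows the assigned address in the `EXTERNAL-IP` column.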
diff --git a/content/en/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk.md b/content/en/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk.md
index bc991098d5b1b..0c4964a17fc2a 100644
--- a/content/en/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk.md
+++ b/content/en/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk.md
@@ -2,7 +2,7 @@
title: "Example: Add logging and metrics to the PHP / Redis Guestbook example"
reviewers:
- sftim
-content_template: templates/tutorial
+content_type: tutorial
weight: 21
card:
  name: tutorials
@@ -10,7 +10,7 @@ card:
  title: "Example: Add logging and metrics to the PHP / Redis Guestbook example"
---

-{{% capture overview %}}
+
This tutorial builds upon the [PHP Guestbook with Redis](/docs/tutorials/stateless-application/guestbook) tutorial. *Beats*, Elastic's lightweight, open source shippers for log, metric, and network data, are deployed in the same Kubernetes cluster as the guestbook. The Beats collect, parse, and index the data into Elasticsearch so that you can view and analyze the resulting operational information in Kibana.

This example consists of the following components:

* A running instance of the [PHP Guestbook with Redis tutorial](/docs/tutorials/stateless-application/guestbook)
@@ -19,17 +19,19 @@ This tutorial builds upon the [PHP Guestbook with Redis](/docs/tutorials/statele
* Metricbeat
* Packetbeat

-{{% /capture %}}

-{{% capture objectives %}}
+
+## {{% heading "objectives" %}}
+
* Start up the PHP Guestbook with Redis.
* Install kube-state-metrics.
* Create a Kubernetes secret.
* Deploy the Beats.
* View dashboards of your logs and metrics.

-{{% /capture %}}

-{{% capture prerequisites %}}
+
+## {{% heading "prerequisites" %}}
+
{{< include "task-tutorial-prereqs.md" >}}
{{< version-check >}}

@@ -40,9 +42,9 @@ Additionally you need:
* A running Elasticsearch and Kibana deployment. You can use [Elasticsearch Service in Elastic Cloud](https://cloud.elastic.co), run the [download files](https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-elastic-stack.html) on your workstation or servers, or the [Elastic Helm Charts](https://github.com/elastic/helm-charts).

-{{% /capture %}}

-{{% capture lessoncontent %}}
+
+

## Start up the PHP Guestbook with Redis

This tutorial builds on the [PHP Guestbook with Redis](/docs/tutorials/stateless-application/guestbook) tutorial. If you have the guestbook application running, then you can monitor that. If you do not have it running, then follow the instructions to deploy the guestbook and do not perform the **Cleanup** steps. Come back to this page when you have the guestbook running.

@@ -366,9 +368,10 @@ kubectl scale --replicas=3 deployment/frontend
See the screenshot: add the indicated filters and then add the columns to the view. You can see the marked ScalingReplicaSet entry; following the events from there to the top of the list shows the image being pulled, the volumes being mounted, and the Pod starting.

![Kibana Discover](https://raw.githubusercontent.com/elastic/examples/master/beats-k8s-send-anywhere/scaling-up.png)

-{{% /capture %}}

-{{% capture cleanup %}}
+
+## {{% heading "cleanup" %}}
+
Deleting the Deployments and Services also deletes any running Pods. Use labels to delete multiple resources with one command.

1. Run the following commands to delete all Pods, Deployments, and Services.

@@ -396,11 +399,11 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels
No resources found.
```

-{{% /capture %}}

-{{% capture whatsnext %}}
+
+## {{% heading "whatsnext" %}}
+
* Learn about [tools for monitoring resources](/docs/tasks/debug-application-cluster/resource-usage-monitoring/)
* Read more about [logging architecture](/docs/concepts/cluster-administration/logging/)
* Read more about [application introspection and debugging](/docs/tasks/debug-application-cluster/)
* Read more about [troubleshooting applications](/docs/tasks/debug-application-cluster/resource-usage-monitoring/)
-{{% /capture %}}
\ No newline at end of file

diff --git a/content/en/docs/tutorials/stateless-application/guestbook.md b/content/en/docs/tutorials/stateless-application/guestbook.md
index e8c71bc61351c..f321d5391a556 100644
--- a/content/en/docs/tutorials/stateless-application/guestbook.md
+++ b/content/en/docs/tutorials/stateless-application/guestbook.md
@@ -2,7 +2,7 @@
title: "Example: Deploying PHP Guestbook application with Redis"
reviewers:
- ahmetb
-content_template: templates/tutorial
+content_type: tutorial
weight: 20
card:
  name: tutorials
@@ -10,32 +10,34 @@ card:
  title: "Stateless Example: PHP Guestbook with Redis"
---

-{{% capture overview %}}
+
This tutorial shows you how to build and deploy a simple, multi-tier web application using Kubernetes and [Docker](https://www.docker.com/). This example consists of the following components:

* A single-instance [Redis](https://redis.io/) master to store guestbook entries
* Multiple [replicated Redis](https://redis.io/topics/replication) instances to serve reads
* Multiple web frontend instances

-{{% /capture %}}

-{{% capture objectives %}}
+
+## {{% heading "objectives" %}}
+
* Start up a Redis master.
* Start up Redis slaves.
* Start up the guestbook frontend.
* Expose and view the Frontend Service.
* Clean up.

-{{% /capture %}}

-{{% capture prerequisites %}}
+
+## {{% heading "prerequisites" %}}
+
{{< include "task-tutorial-prereqs.md" >}}
{{< version-check >}}

-{{% /capture %}}

-{{% capture lessoncontent %}}
+
+

## Start up the Redis Master

@@ -321,9 +323,10 @@ Scaling up or down is easy because your servers are defined as a Service that us
redis-slave-2005841000-phfv9   1/1   Running   0   1h
```

-{{% /capture %}}

-{{% capture cleanup %}}
+
+## {{% heading "cleanup" %}}
+
Deleting the Deployments and Services also deletes any running Pods. Use labels to delete multiple resources with one command.

1. Run the following commands to delete all Pods, Deployments, and Services.

@@ -358,12 +361,13 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels
No resources found.
```

-{{% /capture %}}

-{{% capture whatsnext %}}
+
+## {{% heading "whatsnext" %}}
+
* Add [ELK logging and monitoring](../guestbook-logs-metrics-with-elk/) to your Guestbook application
* Complete the [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) Interactive Tutorials
* Use Kubernetes to create a blog using [Persistent Volumes for MySQL and WordPress](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog)
* Read more about [connecting applications](/docs/concepts/services-networking/connect-applications-service/)
* Read more about [Managing Resources](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively)
-{{% /capture %}}
+
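As a closing illustration of the label-based cleanup that both guestbook hunks describe ("Use labels to delete multiple resources with one command"), the deletion might look like this — a sketch assuming the manifests label their resources with `app=redis` and `app=guestbook`:

```shell
# Hypothetical label-based cleanup: delete Deployments and Services by
# label selector instead of one resource name at a time. The selectors
# are assumptions about how the guestbook manifests are labeled.
kubectl delete deployment -l app=redis
kubectl delete service    -l app=redis
kubectl delete deployment -l app=guestbook
kubectl delete service    -l app=guestbook

# Confirm that nothing is left; the expected response is
# "No resources found."
kubectl get pods
```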