
removing toc shortcode. (#10720)

MengZn authored and k8s-ci-robot committed Oct 25, 2018
1 parent 8f2bb3d commit 04163e9a7c45aa447581bc567e9c86e555f7dab1
Showing with 183 additions and 289 deletions.
  1. +14 −15 content/en/docs/concepts/architecture/cloud-controller.md
  2. +2 −3 content/en/docs/concepts/architecture/master-node-communication.md
  3. +5 −6 content/en/docs/concepts/architecture/nodes.md
  4. +1 −2 content/en/docs/concepts/cluster-administration/addons.md
  5. +5 −6 content/en/docs/concepts/cluster-administration/certificates.md
  6. +5 −6 content/en/docs/concepts/cluster-administration/cloud-providers.md
  7. +0 −1 content/en/docs/concepts/cluster-administration/kubelet-garbage-collection.md
  8. +0 −1 content/en/docs/concepts/cluster-administration/logging.md
  9. +0 −1 content/en/docs/concepts/cluster-administration/manage-deployment.md
  10. +8 −9 content/en/docs/concepts/cluster-administration/networking.md
  11. +8 −9 content/en/docs/concepts/configuration/assign-pod-node.md
  12. +3 −4 content/en/docs/concepts/configuration/secret.md
  13. +0 −1 content/en/docs/concepts/configuration/taint-and-toleration.md
  14. +0 −3 content/en/docs/concepts/containers/container-environment-variables.md
  15. +0 −3 content/en/docs/concepts/containers/container-lifecycle-hooks.md
  16. +0 −1 content/en/docs/concepts/containers/images.md
  17. +0 −1 content/en/docs/concepts/containers/runtime-class.md
  18. +0 −1 content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md
  19. +0 −1 content/en/docs/concepts/overview/kubernetes-api.md
  20. +1 −2 content/en/docs/concepts/overview/working-with-objects/labels.md
  21. +1 −2 content/en/docs/concepts/overview/working-with-objects/names.md
  22. +0 −1 content/en/docs/concepts/overview/working-with-objects/namespaces.md
  23. +8 −9 content/en/docs/concepts/policy/pod-security-policy.md
  24. +0 −1 content/en/docs/concepts/policy/resource-quotas.md
  25. +0 −2 content/en/docs/concepts/services-networking/connect-applications-service.md
  26. +3 −5 content/en/docs/concepts/services-networking/service.md
  27. +0 −3 content/en/docs/concepts/storage/dynamic-provisioning.md
  28. +1 −2 content/en/docs/concepts/storage/persistent-volumes.md
  29. +1 −2 content/en/docs/concepts/storage/volume-snapshot-classes.md
  30. +2 −3 content/en/docs/concepts/storage/volume-snapshots.md
  31. +8 −11 content/en/docs/concepts/storage/volumes.md
  32. +2 −3 content/en/docs/concepts/workloads/controllers/cron-jobs.md
  33. +1 −2 content/en/docs/concepts/workloads/controllers/daemonset.md
  34. +6 −7 content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md
  35. +1 −2 content/en/docs/concepts/workloads/controllers/ttlafterfinished.md
  36. +0 −4 content/en/docs/concepts/workloads/pods/disruptions.md
  37. +0 −4 content/en/docs/concepts/workloads/pods/init-containers.md
  38. +0 −3 content/en/docs/concepts/workloads/pods/pod-overview.md
  39. +0 −1 content/en/docs/concepts/workloads/pods/pod.md
  40. +3 −6 content/en/docs/concepts/workloads/pods/podpreset.md
  41. +0 −1 content/en/docs/contribute/localization.md
  42. +2 −4 content/en/docs/contribute/style/content-organization.md
  43. +30 −32 content/en/docs/contribute/style/page-templates.md
  44. +9 −10 content/en/docs/tasks/access-application-cluster/access-cluster.md
  45. +0 −1 content/en/docs/tasks/access-application-cluster/configure-cloud-provider-firewall.md
  46. +0 −1 content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md
  47. +0 −1 content/en/docs/tasks/administer-cluster/cluster-management.md
  48. +0 −1 content/en/docs/tasks/administer-cluster/configure-multiple-schedulers.md
  49. +0 −1 content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md
  50. +3 −4 content/en/docs/tasks/administer-cluster/cpu-management-policies.md
  51. +0 −1 content/en/docs/tasks/administer-cluster/developing-cloud-controller-manager.md
  52. +1 −2 content/en/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md
  53. +0 −1 content/en/docs/tasks/administer-cluster/highly-available-master.md
  54. +0 −1 content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md
  55. +6 −7 content/en/docs/tasks/administer-cluster/out-of-resource.md
  56. +0 −1 content/en/docs/tasks/administer-cluster/reserve-compute-resources.md
  57. +0 −1 content/en/docs/tasks/administer-cluster/running-cloud-controller.md
  58. +1 −2 content/en/docs/tasks/administer-cluster/static-pod.md
  59. +0 −1 content/en/docs/tasks/administer-federation/events.md
  60. +0 −1 content/en/docs/tasks/administer-federation/secret.md
  61. +1 −4 content/en/docs/tasks/configure-pod-container/configure-service-account.md
  62. +23 −24 content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md
  63. +0 −1 content/en/docs/tasks/debug-application-cluster/audit.md
  64. +0 −1 content/en/docs/tasks/debug-application-cluster/core-metrics-pipeline.md
  65. +1 −2 content/en/docs/tasks/debug-application-cluster/crictl.md
  66. +0 −1 content/en/docs/tasks/debug-application-cluster/debug-application-introspection.md
  67. +0 −1 content/en/docs/tasks/debug-application-cluster/debug-application.md
  68. +0 −1 content/en/docs/tasks/debug-application-cluster/debug-cluster.md
  69. +0 −1 content/en/docs/tasks/debug-application-cluster/debug-service.md
  70. +0 −1 content/en/docs/tasks/debug-application-cluster/events-stackdriver.md
  71. +2 −3 content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md
  72. +1 −2 content/en/docs/tasks/debug-application-cluster/troubleshooting.md
  73. +11 −12 content/en/docs/tasks/federation/set-up-cluster-federation-kubefed.md
  74. +1 −3 content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md
  75. +0 −1 content/en/docs/tasks/job/fine-parallel-processing-work-queue.md
  76. +0 −1 content/en/docs/tasks/job/parallel-processing-expansion.md
  77. +1 −2 content/en/docs/tasks/manage-gpus/scheduling-gpus.md
  78. +0 −1 content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
  79. +1 −2 content/en/docs/tasks/run-application/horizontal-pod-autoscale.md
  80. +0 −1 content/en/docs/tasks/run-application/rolling-update-replication-controller.md
  81. +0 −1 content/en/docs/tasks/tls/managing-tls-in-a-cluster.md
@@ -18,7 +18,6 @@ Here's the architecture of a Kubernetes cluster without the cloud controller man
{{% /capture %}}
{{< toc >}}
{{% capture body %}}
@@ -46,16 +45,16 @@ The CCM breaks away some of the functionality of Kubernetes controller manager (
In version 1.9, the CCM runs the following controllers from the preceding list:
* Node controller
* Route controller
* Service controller
Additionally, it runs another controller called the PersistentVolumeLabels controller. This controller is responsible for setting the zone and region labels on PersistentVolumes created in GCP and AWS clouds.
{{< note >}}
**Note:** The Volume controller was deliberately chosen not to be a part of the CCM. Due to the complexity involved and the existing efforts to abstract away vendor-specific volume logic, it was decided that the volume controller will not be moved to the CCM.
{{< /note >}}
The original plan to support volumes using CCM was to use Flex volumes to support pluggable volumes. However, a competing effort known as CSI is being planned to replace Flex.
Considering these dynamics, we decided to have an intermediate stopgap measure until CSI becomes ready.
@@ -68,7 +67,7 @@ The CCM inherits its functions from components of Kubernetes that are dependent
The majority of the CCM's functions are derived from the KCM. As mentioned in the previous section, the CCM runs the following control loops:
* Node controller
* Route controller
* Service controller
* PersistentVolumeLabels controller
@@ -92,15 +91,15 @@ The Service controller is responsible for listening to service create, update, a
#### PersistentVolumeLabels controller
The PersistentVolumeLabels controller applies labels on AWS EBS/GCE PD volumes when they are created. This removes the need for users to manually set the labels on these volumes.
These labels are essential for the scheduling of pods as these volumes are constrained to work only within the region/zone that they are in. Any Pod using these volumes needs to be scheduled in the same region/zone.
The PersistentVolumeLabels controller was created specifically for the CCM; that is, it did not exist before the CCM was created. This was done to move the PV labelling logic in the Kubernetes API server (it was an admission controller) to the CCM. It does not run on the KCM.
### 2. Kubelet
The Node controller contains the cloud-dependent functionality of the kubelet. Prior to the introduction of the CCM, the kubelet was responsible for initializing a node with cloud-specific details such as IP addresses, region/zone labels and instance type information. The introduction of the CCM has moved this initialization operation from the kubelet into the CCM.
In this new model, the kubelet initializes a node without cloud-specific information. However, it adds a taint to the newly created node that makes the node unschedulable until the CCM initializes the node with cloud-specific information. It then removes this taint.
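As an illustrative sketch of that mechanism (the taint key shown is the one used for externally managed cloud providers; the node name is a placeholder), the taint on the newly registered Node object looks roughly like:

```yaml
# Sketch: the taint a kubelet started with --cloud-provider=external
# adds to its Node; the CCM removes it after cloud initialization.
apiVersion: v1
kind: Node
metadata:
  name: example-node        # placeholder name
spec:
  taints:
  - key: node.cloudprovider.kubernetes.io/uninitialized
    value: "true"
    effect: NoSchedule
```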
@@ -118,13 +117,13 @@ For more information about developing plugins, see [Developing Cloud Controller
## Authorization
This section breaks down the access required on various API objects by the CCM to perform its operations.
### Node Controller
The Node controller only works with Node objects. It requires full access to get, list, create, update, patch, watch, and delete Node objects.
v1/Node:
- Get
- List
@@ -136,17 +135,17 @@ v1/Node:
### Route controller
The route controller listens to Node object creation and configures routes appropriately. It requires get access to Node objects.
v1/Node:
- Get
### Service controller
The service controller listens to Service object create, update and delete events and then configures endpoints for those Services appropriately.
To access Services, it requires list and watch access. To update Services, it requires patch and update access.
To set up endpoints for the Services, it requires access to create, list, get, watch, and update.
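Expressed as an RBAC rules fragment (a sketch only — the verb lists follow the access described above, and the resources are assumed to live in the core API group):

```yaml
# Sketch of ClusterRole rules matching the service controller's access.
- apiGroups: [""]
  resources: ["services"]
  verbs: ["list", "watch", "patch", "update"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["create", "get", "list", "watch", "update"]
```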
@@ -249,7 +248,7 @@ rules:
## Vendor Implementations
The following cloud providers have implemented CCMs:
* [Digital Ocean](https://github.com/digitalocean/digitalocean-cloud-controller-manager)
* [Oracle](https://github.com/oracle/oci-cloud-controller-manager)
@@ -18,7 +18,6 @@ cloud provider).
{{% /capture %}}
{{< toc >}}
{{% capture body %}}
@@ -67,9 +66,9 @@ The connections from the apiserver to the kubelet are used for:
* Fetching logs for pods.
* Attaching (through kubectl) to running pods.
* Providing the kubelet's port-forwarding functionality.
These connections terminate at the kubelet's HTTPS endpoint. By default,
the apiserver does not verify the kubelet's serving certificate,
which makes the connection subject to man-in-the-middle attacks, and
**unsafe** to run over untrusted and/or public networks.
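One common hardening step (a sketch; the CA bundle path shown is a placeholder, and the other apiserver flags are omitted) is to give the apiserver a root certificate bundle it can use to verify the kubelet's serving certificate:

```shell
# Sketch: have the apiserver verify kubelet serving certificates.
# The CA bundle path is illustrative; remaining apiserver flags omitted.
kube-apiserver --kubelet-certificate-authority=/etc/kubernetes/pki/kubelet-ca.crt
```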
@@ -18,7 +18,6 @@ architecture design doc for more details.
{{% /capture %}}
{{< toc >}}
{{% capture body %}}
@@ -76,18 +75,18 @@ the `Terminating` or `Unknown` state. In cases where Kubernetes cannot deduce fr
permanently left a cluster, the cluster administrator may need to delete the node object by hand. Deleting the node object from
Kubernetes causes all the Pod objects running on the node to be deleted from the apiserver, and frees up their names.
In version 1.12, the `TaintNodesByCondition` feature was promoted to beta, so the node lifecycle controller automatically creates
[taints](/docs/concepts/configuration/taint-and-toleration/) that represent conditions.
Similarly, the scheduler ignores conditions when considering a Node; instead
it looks at the Node's taints and a Pod's tolerations.
Now users can choose between the old scheduling model and a new, more flexible scheduling model.
A Pod that does not have any tolerations gets scheduled according to the old model. But a Pod that
tolerates the taints of a particular Node can be scheduled on that Node.
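For example (a hedged sketch — the taint key shown is one of the standard node-condition taints), such a toleration in a Pod spec might look like:

```yaml
# Sketch: Pod spec fragment tolerating the not-ready node-condition
# taint for up to five minutes before eviction.
tolerations:
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300
```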
{{< caution >}}
**Caution:** Enabling this feature creates a small delay between the
time when a condition is observed and when a taint is created. This delay is usually less than one second, but it can increase the number of Pods that are successfully scheduled but rejected by the kubelet.
{{< /caution >}}
### Capacity
@@ -127,7 +126,7 @@ a node from the following content:
Kubernetes creates a node object internally (the representation), and
validates the node by health checking based on the `metadata.name` field. If the node is valid -- that is, if all necessary
services are running -- it is eligible to run a pod. Otherwise, it is
ignored for any cluster activity until it becomes valid.
{{< note >}}
**Note:** Kubernetes keeps the object for the invalid node and keeps checking to see whether it becomes valid.
@@ -14,7 +14,6 @@ Add-ons in each section are sorted alphabetically - the ordering does not imply
{{% /capture %}}
{{< toc >}}
{{% capture body %}}
@@ -30,7 +29,7 @@ Add-ons in each section are sorted alphabetically - the ordering does not imply
* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md) is an overlay network provider that can be used with Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) is a network solution supporting multiple networking in Kubernetes.
* [Multus](https://github.com/Intel-Corp/multus-cni) is a Multi plugin for multiple network support in Kubernetes to support all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV, DPDK, OVS-DPDK and VPP based workloads in Kubernetes.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and Openshift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes Pods and non-Kubernetes environments with visibility and security monitoring.
* [Romana](http://romana.io) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Kubeadm add-on installation details available [here](https://github.com/romana/romana/tree/master/containerize).
* [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database.
@@ -12,7 +12,6 @@ manually through `easyrsa`, `openssl` or `cfssl`.
{{% /capture %}}
{{< toc >}}
{{% capture body %}}
@@ -81,18 +80,18 @@ manually through `easyrsa`, `openssl` or `cfssl`.
default_md = sha256
req_extensions = req_ext
distinguished_name = dn
[ dn ]
C = <country>
ST = <state>
L = <city>
O = <organization>
OU = <organization unit>
CN = <MASTER_IP>
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
@@ -101,7 +100,7 @@ manually through `easyrsa`, `openssl` or `cfssl`.
DNS.5 = kubernetes.default.svc.cluster.local
IP.1 = <MASTER_IP>
IP.2 = <MASTER_CLUSTER_IP>
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
@@ -213,7 +212,7 @@ Finally, add the same parameters into the API server start parameters.
"O": "<organization>",
"OU": "<organization unit>"
}]
}
1. Generate the key and certificate for the API server, which are by default
saved into file `server-key.pem` and `server.pem` respectively:
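This step is typically a cfssl invocation along these lines (a sketch only — the `ca-config.json`, `server-csr.json`, and profile name are assumptions based on the surrounding steps):

```shell
# Sketch: generate the API server key and certificate with cfssl.
# Input file names and the profile name are assumptions.
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \
  --config=ca-config.json --profile=kubernetes \
  server-csr.json | cfssljson -bare server
```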
@@ -9,12 +9,11 @@ This page explains how to manage Kubernetes running on a specific
cloud provider.
{{% /capture %}}
{{< toc >}}
{{% capture body %}}
### kubeadm
[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) is a popular option for creating Kubernetes clusters.
kubeadm has configuration options to specify configuration information for cloud providers. For example, a typical
in-tree cloud provider can be configured using kubeadm as shown below:
```yaml
@@ -214,7 +213,7 @@ file:
connotation, a deployment can use a geographical name for a region identifier
such as `us-east`. Available regions are found under the `/v3/regions`
endpoint of the Keystone API.
* `ca-file` (Optional): Used to specify the path to your custom CA file.
When using Keystone V3 - which changes tenant to project - the `tenant-id` value
@@ -361,12 +360,12 @@ Note that the Kubernetes Node name must match the Photon VM name (or if `overrid
The VSphere cloud provider uses the hostname of the node (as determined by the kubelet or overridden with `--hostname-override`) as the name of the Kubernetes Node object.
## IBM Cloud Kubernetes Service
### Compute nodes
By using the IBM Cloud Kubernetes Service provider, you can create clusters with a mixture of virtual and physical (bare metal) nodes in a single zone or across multiple zones in a region. For more information, see [Planning your cluster and worker node setup](https://console.bluemix.net/docs/containers/cs_clusters_planning.html#plan_clusters).
The name of the Kubernetes Node object is the private IP address of the IBM Cloud Kubernetes Service worker node instance.
### Networking
The IBM Cloud Kubernetes Service provider provides VLANs for quality network performance and network isolation for nodes. You can set up custom firewalls and Calico network policies to add an extra layer of security for your cluster, or connect your cluster to your on-prem data center via VPN. For more information, see [Planning in-cluster and private networking](https://console.bluemix.net/docs/containers/cs_network_cluster.html#planning).
@@ -14,7 +14,6 @@ External garbage collection tools are not recommended as these tools can potenti
{{% /capture %}}
{{< toc >}}
{{% capture body %}}
@@ -15,7 +15,6 @@ However, the native functionality provided by a container engine or runtime is u
{{% /capture %}}
{{< toc >}}
{{% capture body %}}
@@ -14,7 +14,6 @@ You've deployed your application and exposed it via a service. Now what? Kuberne
{{% /capture %}}
{{< toc >}}
{{% capture body %}}
