
Commit

[zh] Fix links in setup section (2)
There are some style corrections in the minikube page as well.
tengqm committed Sep 7, 2020
1 parent 069aeec commit 73415d9
Showing 5 changed files with 280 additions and 284 deletions.
43 changes: 24 additions & 19 deletions content/zh/docs/setup/best-practices/certificates.md
@@ -6,24 +6,25 @@ content_type: concept
weight: 40
---
<!--
---
title: PKI certificates and requirements
reviewers:
- sig-cluster-lifecycle
content_type: concept
weight: 40
---
-->

<!-- overview -->

<!--
Kubernetes requires PKI certificates for authentication over TLS.
If you install Kubernetes with [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/), the certificates that your cluster requires are automatically generated.
You can also generate your own certificates -- for example, to keep your private keys more secure by not storing them on the API server.
You can also generate your own certificates - for example, to keep your private keys more secure by not storing them on the API server.
This page explains the certificates that your cluster requires.
-->
Kubernetes 需要 PKI 证书才能进行基于 TLS 的身份验证。如果您是使用 [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) 安装的 Kubernetes,则会自动生成集群所需的证书。您还可以生成自己的证书。例如,不将私钥存储在 API 服务器上,可以让私钥更加安全。此页面说明了集群必需的证书。
Kubernetes 需要 PKI 证书才能进行基于 TLS 的身份验证。如果你是使用
[kubeadm](/zh/docs/reference/setup-tools/kubeadm/kubeadm/) 安装的 Kubernetes,
则会自动生成集群所需的证书。你还可以生成自己的证书。
例如,不将私钥存储在 API 服务器上,可以让私钥更加安全。此页面说明了集群必需的证书。
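For reference, the certificates that kubeadm generates can be inspected directly on a control-plane node. This is a minimal sketch, assuming the default `/etc/kubernetes/pki` layout; it is not part of this commit:

```shell
# List the certificates and keys kubeadm generates (default layout)
ls /etc/kubernetes/pki
# Print the subject and validity window of the API server certificate
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -subject -dates
```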



@@ -57,11 +58,13 @@ Kubernetes 需要 PKI 才能执行以下操作:
* 调度器的客户端证书/kubeconfig,用于和 API server 的会话
* [前端代理](/zh/docs/tasks/extend-kubernetes/configure-aggregation-layer/) 的客户端及服务端证书

{{< note >}}
<!--
`front-proxy` certificates are required only if you run kube-proxy to support [an extension API server](/docs/tasks/access-kubernetes-api/setup-extension-api-server/).
-->
只有当您运行 kube-proxy 并要支持[扩展 API 服务器](/docs/tasks/access-kubernetes-api/setup-extension-api-server/)时,才需要 `front-proxy` 证书
{{< note >}}
只有当你运行 kube-proxy 并要支持
[扩展 API 服务器](/zh/docs/tasks/extend-kubernetes/setup-extension-api-server/)
时,才需要 `front-proxy` 证书
{{< /note >}}
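For context, the front-proxy CA and client pair feed the API server's request-header flags. A sketch of the relevant `kube-apiserver` flags follows; file paths assume kubeadm's defaults, other required flags are omitted, and kubeadm normally wires these up itself:

```shell
# Front-proxy CA and client certificate as consumed by kube-apiserver
# (paths assume kubeadm defaults; illustrative, not a complete command line)
kube-apiserver \
  --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt \
  --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt \
  --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
```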

<!--
@@ -146,9 +149,12 @@ Required certificates:
where `kind` maps to one or more of the [x509 key usage][usage] types:
-->
[1]: 用来连接到集群的不同 IP 或 DNS 名(就像 [kubeadm][kubeadm] 为负载均衡所使用的固定 IP 或 DNS 名,`kubernetes`、`kubernetes.default`、`kubernetes.default.svc`、`kubernetes.default.svc.cluster`、`kubernetes.default.svc.cluster.local`)。
[1]: 用来连接到集群的不同 IP 或 DNS 名
(就像 [kubeadm](/zh/docs/reference/setup-tools/kubeadm/kubeadm/) 为负载均衡所使用的固定
IP 或 DNS 名,`kubernetes`、`kubernetes.default`、`kubernetes.default.svc`、
`kubernetes.default.svc.cluster`、`kubernetes.default.svc.cluster.local`)。

其中,`kind` 对应一种或多种类型的 [x509 密钥用途][usage]
其中,`kind` 对应一种或多种类型的 [x509 密钥用途](https://godoc.org/k8s.io/api/certificates/v1beta1#KeyUsage):

<!--
| kind | Key usage |
@@ -227,20 +233,21 @@ You must manually configure these administrator account and service accounts:
-->
## 为用户帐户配置证书

您必须手动配置以下管理员帐户和服务帐户
你必须手动配置以下管理员帐户和服务帐户

| 文件名 | 凭据名称 | 默认 CN | O (位于 Subject 中) |
|-------------------------|----------------------------|--------------------------------|----------------|
| admin.conf | default-admin | kubernetes-admin | system:masters |
| kubelet.conf | default-auth | system:node:`<nodeName>` (see note) | system:nodes |
| controller-manager.conf | default-controller-manager | system:kube-controller-manager | |
| scheduler.conf | default-scheduler | system:kube-scheduler | |
| 文件名 | 凭据名称 | 默认 CN | O (位于 Subject 中) |
|-------------------------|----------------------------|--------------------------------|---------------------|
| admin.conf | default-admin | kubernetes-admin | system:masters |
| kubelet.conf | default-auth | system:node:`<nodeName>` (参阅注释) | system:nodes |
| controller-manager.conf | default-controller-manager | system:kube-controller-manager | |
| scheduler.conf | default-scheduler | system:kube-scheduler | |

{{< note >}}
<!--
The value of `<nodeName>` for `kubelet.conf` **must** match precisely the value of the node name provided by the kubelet as it registers with the apiserver. For further details, read the [Node Authorization](/docs/reference/access-authn-authz/node/).
-->
`kubelet.conf` 中 `<nodeName>` 的值 **必须** 与 kubelet 向 apiserver 注册时提供的节点名称的值完全匹配。有关更多详细信息,请阅读[节点授权](/docs/reference/access-authn-authz/node/)。
{{< note >}}
`kubelet.conf` 中 `<nodeName>` 的值 **必须** 与 kubelet 向 apiserver 注册时提供的节点名称的值完全匹配。
有关更多详细信息,请阅读[节点授权](/zh/docs/reference/access-authn-authz/node/)
{{< /note >}}
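To make the account table above concrete, here is a hedged sketch of issuing one of these client credentials by hand with `openssl`; the CN comes from the table, while the file names and validity period are only examples:

```shell
# Illustrative only: create a scheduler client certificate signed by the
# cluster CA, using the CN from the table above (file names are examples)
openssl genrsa -out kube-scheduler.key 2048
openssl req -new -key kube-scheduler.key \
  -subj "/CN=system:kube-scheduler" -out kube-scheduler.csr
openssl x509 -req -in kube-scheduler.csr \
  -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key \
  -CAcreateserial -out kube-scheduler.crt -days 365
```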

<!--
@@ -278,5 +285,3 @@ These files are used as follows:
| controller-manager.conf | kube-controller-manager | 必须添加到 `manifests/kube-controller-manager.yaml` 清单中 |
| scheduler.conf | kube-scheduler | 必须添加到 `manifests/kube-scheduler.yaml` 清单中 |

[usage]: https://godoc.org/k8s.io/api/certificates/v1beta1#KeyUsage
[kubeadm]: /docs/reference/setup-tools/kubeadm/kubeadm/
27 changes: 16 additions & 11 deletions content/zh/docs/setup/best-practices/cluster-large.md
@@ -42,7 +42,9 @@ A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by the control plane.
<!--
Normally the number of nodes in a cluster is controlled by the value `NUM_NODES` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/gce/config-default.sh)).
-->
通常,集群中的节点数由特定于云平台的配置文件 `config-default.sh`(可以参考 [GCE 平台的 `config-default.sh`](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/gce/config-default.sh))中的 `NUM_NODES` 参数控制。
通常,集群中的节点数由特定于云平台的配置文件 `config-default.sh`
(可以参考 [GCE 平台的 `config-default.sh`](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/gce/config-default.sh))
中的 `NUM_NODES` 参数控制。
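As a sketch of how that variable is typically used with the legacy `cluster/` scripts (GCE shown; the exact environment variables vary by provider and are not part of this commit):

```shell
# Override the default node count before bringing the cluster up
export NUM_NODES=300
./cluster/kube-up.sh
```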

<!--
Simply changing that value to something very large, however, may cause the setup script to fail for many cloud providers. A GCE deployment, for example, will run in to quota issues and fail to bring the cluster up.
@@ -175,7 +177,9 @@ On AWS, master node sizes are currently set at cluster startup time and do not change.
<!--
To prevent memory leaks or other resource issues in [cluster addons](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and Memory resources they can consume (See PR [#10653](http://pr.k8s.io/10653/files) and [#10778](http://pr.k8s.io/10778/files)).
-->
为了防止内存泄漏或 [集群插件](https://releases.k8s.io/{{<param "githubbranch" >}}/cluster/addons) 中的其它资源问题导致节点上所有可用资源被消耗,Kubernetes 限制了插件容器可以消耗的 CPU 和内存资源(请参阅 PR [#10653](http://pr.k8s.io/10653/files)[#10778](http://pr.k8s.io/10778/files))。
为了防止内存泄漏或 [集群插件](https://releases.k8s.io/{{<param "githubbranch" >}}/cluster/addons)
中的其它资源问题导致节点上所有可用资源被消耗,Kubernetes 限制了插件容器可以消耗的 CPU 和内存资源
(请参阅 PR [#10653](http://pr.k8s.io/10653/files)[#10778](http://pr.k8s.io/10778/files))。

例如:

@@ -211,33 +215,34 @@ To avoid running into cluster addon resource issues, when creating a cluster with many nodes, consider the following:
* [FluentD with GCP Plugin](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-gcp/fluentd-gcp-ds.yaml)
-->
* 根据集群的规模,如果使用了以下插件,提高其内存和 CPU 上限(每个插件都有一个副本处理整个群集,因此内存和 CPU 使用率往往与集群的规模/负载成比例增长) :
* [InfluxDB 和 Grafana](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
* [kubedns、dnsmasq 和 sidecar](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/kube-dns/kube-dns.yaml.in)
* [Kibana](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/kibana-deployment.yaml)
* [InfluxDB 和 Grafana](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
* [kubedns、dnsmasq 和 sidecar](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/kube-dns/kube-dns.yaml.in)
* [Kibana](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/kibana-deployment.yaml)
* 根据集群的规模,如果使用了以下插件,调整其副本数量(每个插件都有多个副本,增加副本数量有助于处理增加的负载,但是,由于每个副本的负载也略有增加,因此也请考虑增加 CPU/内存限制):
* [elasticsearch](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/es-statefulset.yaml)
* [elasticsearch](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/es-statefulset.yaml)
* 根据集群的规模,如果使用了以下插件,限制其内存和 CPU 上限(这些插件在每个节点上都有一个副本,但是 CPU/内存使用量也会随集群负载/规模而略有增加):
* [FluentD 和 ElasticSearch 插件](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml)
* [FluentD 和 GCP 插件](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-gcp/fluentd-gcp-ds.yaml)
* [FluentD 和 ElasticSearch 插件](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml)
* [FluentD 和 GCP 插件](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-gcp/fluentd-gcp-ds.yaml)
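If an addon from the list above does need more headroom on a running cluster, one hedged way to apply it is with `kubectl`; the object names and namespace below are examples and depend on which addon manifests are in use:

```shell
# Raise the limits of a singleton addon (kube-dns shown as an example)
kubectl -n kube-system set resources deployment kube-dns \
  --limits=cpu=200m,memory=512Mi
# Scale a replicated addon with cluster size (name is illustrative)
kubectl -n kube-system scale statefulset elasticsearch-logging --replicas=3
```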

<!--
Heapster's resource limits are set dynamically based on the initial size of your cluster (see [#16185](http://issue.k8s.io/16185)
and [#22940](http://issue.k8s.io/22940)). If you find that Heapster is running
out of resources, you should adjust the formulas that compute heapster memory request (see those PRs for details).
-->
Heapster 的资源限制与您集群的初始大小有关(请参阅 [#16185](http://issue.k8s.io/16185)
Heapster 的资源限制与您集群的初始大小有关(请参阅 [#16185](https://issue.k8s.io/16185)
[#22940](http://issue.k8s.io/22940))。如果您发现 Heapster 资源不足,您应该调整堆内存请求的计算公式(有关详细信息,请参阅相关 PR)。

<!--
For directions on how to detect if addon containers are hitting resource limits, see the [Troubleshooting section of Compute Resources](/docs/concepts/configuration/manage-compute-resources-container/#troubleshooting).
-->
关于如何检测插件容器是否达到资源限制,参见 [计算资源的故障排除](/docs/concepts/configuration/manage-compute-resources-container/#troubleshooting) 部分。
关于如何检测插件容器是否达到资源限制,参见
[计算资源的故障排除](/zh/docs/concepts/configuration/manage-resources-containers/#troubleshooting) 部分。

<!--
In the [future](http://issue.k8s.io/13048), we anticipate to set all cluster addon resource limits based on cluster size, and to dynamically adjust them if you grow or shrink your cluster.
We welcome PRs that implement those features.
-->
[未来](http://issue.k8s.io/13048),我们期望根据集群规模大小来设置所有群集附加资源限制,并在集群扩缩容时动态调整它们。
[未来](https://issue.k8s.io/13048),我们期望根据集群规模大小来设置所有群集附加资源限制,并在集群扩缩容时动态调整它们。
我们欢迎您来实现这些功能。

<!--
