Commit 58a60d7

[zh-cn]sync nodes assign-pod-node source-ip kubeadm_config_print_reset-defaults

Signed-off-by: xin.li <xin.li@daocloud.io>
my-git9 committed Mar 2, 2024
1 parent b4dc25c commit 58a60d7
Showing 4 changed files with 47 additions and 54 deletions.
18 changes: 9 additions & 9 deletions content/zh-cn/docs/concepts/architecture/nodes.md
@@ -342,7 +342,7 @@ For nodes there are two forms of heartbeats:

Kubernetes 节点发送的心跳帮助你的集群确定每个节点的可用性,并在检测到故障时采取行动。

- 对于节点,有两种形式的心跳:
+ 对于节点,有两种形式的心跳:

<!--
* Updates to the [`.status`](/docs/reference/node/node-status/) of a Node.
@@ -442,7 +442,7 @@ the same time:
- Otherwise, the eviction rate is reduced to `--secondary-node-eviction-rate`
(default 0.01) per second.
-->
- - 如果不健康节点的比例超过 `--unhealthy-zone-threshold` (默认为 0.55),
+ - 如果不健康节点的比例超过 `--unhealthy-zone-threshold`(默认为 0.55),
驱逐速率将会降低。
- 如果集群较小(意即小于等于 `--large-cluster-size-threshold` 个节点 - 默认为 50),
驱逐操作将会停止。
@@ -534,7 +534,7 @@ If you want to explicitly reserve resources for non-Pod processes, see
-->
## 节点拓扑 {#node-topology}

- {{< feature-state state="stable" for_k8s_version="v1.27" >}}
+ {{< feature-state feature_gate_name="TopologyManager" >}}

<!--
If you have enabled the `TopologyManager`
@@ -552,7 +552,7 @@ for more information.
-->
## 节点体面关闭 {#graceful-node-shutdown}

- {{< feature-state state="beta" for_k8s_version="v1.21" >}}
+ {{< feature-state feature_gate_name="GracefulNodeShutdown" >}}

<!--
The kubelet attempts to detect node system shutdown and terminates pods running on the node.
@@ -707,7 +707,7 @@ Message: Pod was terminated in response to imminent node shutdown.
-->
### 基于 Pod 优先级的节点体面关闭 {#pod-priority-graceful-node-shutdown}

- {{< feature-state state="beta" for_k8s_version="v1.24" >}}
+ {{< feature-state feature_gate_name="GracefulNodeShutdownBasedOnPodPriority" >}}

<!--
To provide more flexibility during graceful node shutdown around the ordering
@@ -868,7 +868,7 @@ kubelet 子系统中会生成 `graceful_shutdown_start_time_seconds` 和
-->
## 处理节点非体面关闭 {#non-graceful-node-shutdown}

- {{< feature-state state="stable" for_k8s_version="v1.28" >}}
+ {{< feature-state feature_gate_name="NodeOutOfServiceVolumeDetach" >}}

<!--
A node shutdown action may not be detected by kubelet's Node Shutdown Manager,
@@ -955,7 +955,7 @@ During a non-graceful shutdown, Pods are terminated in the two phases:
-->
## 交换内存管理 {#swap-memory}

- {{< feature-state state="beta" for_k8s_version="v1.28" >}}
+ {{< feature-state feature_gate_name="NodeSwap" >}}

<!--
To enable swap on a node, the `NodeSwap` feature gate must be enabled on
@@ -979,7 +979,7 @@ of Secret objects that were written to tmpfs now could be swapped to disk.
A user can also optionally configure `memorySwap.swapBehavior` in order to
specify how a node will use swap memory. For example,
-->
- 用户还可以选择配置 `memorySwap.swapBehavior` 以指定节点使用交换内存的方式。例如:
+ 用户还可以选择配置 `memorySwap.swapBehavior` 以指定节点使用交换内存的方式。例如:

```yaml
memorySwap:
@@ -1051,7 +1051,7 @@ see the blog-post about [Kubernetes 1.28: NodeSwap graduates to Beta1](/blog/202
[KEP-2400](https://github.com/kubernetes/enhancements/issues/4128) and its
[design proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md).
-->
- 只有 **cgroup v2** 支持交换空间,cgroup v1 不支持。
+ 只有 **Cgroup v2** 支持交换空间,Cgroup v1 不支持。

如需了解更多信息、协助测试和提交反馈,请参阅关于
[Kubernetes 1.28:NodeSwap 进阶至 Beta1](/zh-cn/blog/2023/08/24/swap-linux-beta/) 的博客文章、
46 changes: 24 additions & 22 deletions content/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node.md
@@ -29,7 +29,7 @@ the Pod deploys to, for example, to ensure that a Pod ends up on a node with an
or to co-locate Pods from two different services that communicate a lot into the same availability zone.
-->
你可以约束一个 {{< glossary_tooltip text="Pod" term_id="pod" >}}
- 以便 **限制** 其只能在特定的{{< glossary_tooltip text="节点" term_id="node" >}}上运行,
+ 以便**限制**其只能在特定的{{< glossary_tooltip text="节点" term_id="node" >}}上运行,
或优先在特定的节点上运行。有几种方法可以实现这点,推荐的方法都是用
[标签选择算符](/zh-cn/docs/concepts/overview/working-with-objects/labels/)来进行选择。
通常这样的约束不是必须的,因为调度器将自动进行合理的放置(比如,将 Pod 分散到节点上,
@@ -278,7 +278,7 @@ to repel Pods from specific nodes.
If you specify both `nodeSelector` and `nodeAffinity`, *both* must be satisfied
for the Pod to be scheduled onto a node.
-->
- 如果你同时指定了 `nodeSelector` 和 `nodeAffinity`,**两者** 必须都要满足,
+ 如果你同时指定了 `nodeSelector` 和 `nodeAffinity`,**两者**必须都要满足,
才能将 Pod 调度到候选节点上。
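As a hedged sketch of this rule (not part of this commit; the `disktype` label and zone values are illustrative assumptions), a Pod that combines both mechanisms might look like:

```yaml
# Sketch: this Pod must satisfy BOTH the nodeSelector and the nodeAffinity term.
apiVersion: v1
kind: Pod
metadata:
  name: both-selector-and-affinity
spec:
  nodeSelector:
    disktype: ssd                  # hard requirement 1 (illustrative label)
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone   # hard requirement 2
            operator: In
            values:
            - zone-a
            - zone-b
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```

Both conditions are hard requirements here; a node lacking either label is excluded from scheduling.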

<!--
@@ -676,7 +676,7 @@ null `namespaceSelector` matches the namespace of the Pod where the rule is defi

#### matchLabelKeys

- {{< feature-state for_k8s_version="v1.29" state="alpha" >}}
+ {{< feature-state feature_gate_name="MatchLabelKeysInPodAffinity" >}}

{{< note >}}
<!-- UPDATE THIS WHEN PROMOTING TO BETA -->
@@ -730,26 +730,27 @@ metadata:
...
spec:
template:
-    affinity:
-      podAffinity:
-        requiredDuringSchedulingIgnoredDuringExecution:
-        - labelSelector:
-            matchExpressions:
-            - key: app
-              operator: In
-              values:
-              - database
-          topologyKey: topology.kubernetes.io/zone
-          # 只有在计算 Pod 亲和性时,才考虑指定上线的 Pod。
-          # 如果你更新 Deployment,替代的 Pod 将遵循它们自己的亲和性规则
-          # (如果在新的 Pod 模板中定义了任何规则)。
-          matchLabelKeys:
-          - pod-template-hash
+    spec:
+      affinity:
+        podAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchExpressions:
+              - key: app
+                operator: In
+                values:
+                - database
+            topologyKey: topology.kubernetes.io/zone
+            # 只有在计算 Pod 亲和性时,才考虑指定上线的 Pod。
+            # 如果你更新 Deployment,替代的 Pod 将遵循它们自己的亲和性规则
+            # (如果在新的 Pod 模板中定义了任何规则)。
+            matchLabelKeys:
+            - pod-template-hash
```

#### mismatchLabelKeys

- {{< feature-state for_k8s_version="v1.29" state="alpha" >}}
+ {{< feature-state feature_gate_name="MatchLabelKeysInPodAffinity" >}}

{{< note >}}
<!-- UPDATE THIS WHEN PROMOTING TO BETA -->
@@ -773,7 +774,7 @@ One example use case is to ensure Pods go to the topology domain (node, zone, et
In other words, you want to avoid running Pods from two different tenants on the same topology domain at the same time.
-->
Kubernetes 为 Pod 亲和性或反亲和性提供了一个可选的 `mismatchLabelKeys` 字段。
- 此字段指定了在满足 Pod(反)亲和性时,**不** 应与传入 Pod 的标签匹配的键。
+ 此字段指定了在满足 Pod(反)亲和性时,**不**应与传入 Pod 的标签匹配的键。

一个示例用例是确保 Pod 进入指定的拓扑域(节点、区域等),在此拓扑域中只调度来自同一租户或团队的 Pod。
换句话说,你想要避免在同一拓扑域中同时运行来自两个不同租户的 Pod。
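The tenant-isolation use case described above can be sketched as follows; this mirrors the upstream docs' pattern rather than anything changed in this commit, and the `tenant` label and `node-pool` topology key are assumed names:

```yaml
# Sketch: schedule each tenant's Pods only into topology domains (node pools)
# that run no Pods belonging to a different tenant.
apiVersion: v1
kind: Pod
metadata:
  name: tenant-a-app
  labels:
    tenant: tenant-a
spec:
  affinity:
    podAffinity:
      # matchLabelKeys: land this Pod where Pods with the SAME "tenant" value run.
      requiredDuringSchedulingIgnoredDuringExecution:
      - matchLabelKeys:
        - tenant
        topologyKey: node-pool
    podAntiAffinity:
      # mismatchLabelKeys: repel domains where Pods carry a DIFFERENT "tenant" value.
      requiredDuringSchedulingIgnoredDuringExecution:
      - mismatchLabelKeys:
        - tenant
        labelSelector:
          # Restrict the rule to Pods that actually have the tenant label,
          # so unlabeled Pods (e.g. from DaemonSets) are not repelled.
          matchExpressions:
          - key: tenant
            operator: Exists
        topologyKey: node-pool
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```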
@@ -976,7 +977,7 @@ where each web server is co-located with a cache, on three separate nodes.
The overall effect is that each cache instance is likely to be accessed by a single client, that
is running on the same node. This approach aims to minimize both skew (imbalanced load) and latency.
-->
- 总体效果是每个缓存实例都非常可能被在同一个节点上运行的某个客户端访问,
+ 总体效果是每个缓存实例都非常可能被在同一个节点上运行的某个客户端访问。
这种方法旨在最大限度地减少偏差(负载不平衡)和延迟。
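The co-location pattern just summarized can be sketched as a Deployment; this is a hedged adaptation in which the `web-store`/`store` label values and the image are illustrative, not taken from this commit:

```yaml
# Sketch: each web server repels other web servers (anti-affinity) but
# requires a cache Pod labeled app=store on the same node (affinity).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-store
  template:
    metadata:
      labels:
        app: web-store
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - web-store
            topologyKey: kubernetes.io/hostname
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: kubernetes.io/hostname
      containers:
      - name: web-app
        image: nginx:1.16-alpine
```

With three nodes and a matching three-replica cache Deployment, each web server lands next to exactly one cache instance.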

<!--
@@ -1027,7 +1028,8 @@ Some of the limitations of using `nodeName` to select nodes are:
<!--
`nodeName` is intended for use by custom schedulers or advanced use cases where
you need to bypass any configured schedulers. Bypassing the schedulers might lead to
- failed Pods if the assigned Nodes get oversubscribed. You can use [node affinity](#node-affinity) or the [`nodeselector` field](#nodeselector) to assign a Pod to a specific Node without bypassing the schedulers.
+ failed Pods if the assigned Nodes get oversubscribed. You can use [node affinity](#node-affinity) or the
+ [`nodeselector` field](#nodeselector) to assign a Pod to a specific Node without bypassing the schedulers.
-->
`nodeName` 旨在供自定义调度器或需要绕过任何已配置调度器的高级场景使用。
如果已分配的 Node 负载过重,绕过调度器可能会导致 Pod 失败。
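As a sketch of the scheduler-bypassing alternative being warned about here (the node name `kube-01` is a placeholder):

```yaml
# Sketch: pin a Pod directly to a node via nodeName, bypassing the scheduler.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeName: kube-01   # placeholder; must match an existing Node's name exactly
  containers:
  - name: nginx
    image: nginx
```

If `kube-01` does not exist or lacks resources, the Pod fails rather than being rescheduled, which is why node affinity or `nodeSelector` is preferred.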
@@ -29,29 +29,13 @@ kubeadm config print reset-defaults [flags]
-->
### 选项

<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>

-<tr>
-<td colspan="2">--component-configs strings</td>
-</tr>
-<tr>
-<td></td><td style="line-height: 130%; word-wrap: break-word;">
-<p>
-<!--
-A comma-separated list for component config API objects to print the default values for. Available values: [KubeProxyConfiguration KubeletConfiguration]. If this flag is not set, no component configs will be printed.
--->
-组件配置 API 对象的逗号分隔列表,打印其默认值。
-可用值:[KubeProxyConfiguration KubeletConfiguration]
-如果此参数未被设置,则不会打印任何组件配置。
-</p>
-</td>
-</tr>

<tr>
<td colspan="2">-h, --help</td>
</tr>
@@ -74,7 +58,7 @@ reset-defaults 操作的帮助命令。
-->
### 从父命令继承的选项

<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
17 changes: 12 additions & 5 deletions content/zh-cn/docs/tutorials/services/source-ip.md
@@ -43,19 +43,19 @@ the target localization.

<!--
[NAT](https://en.wikipedia.org/wiki/Network_address_translation)
- : network address translation
+ : Network address translation
[Source NAT](https://en.wikipedia.org/wiki/Network_address_translation#SNAT)
- : replacing the source IP on a packet; in this page, that usually means replacing with the IP address of a node.
+ : Replacing the source IP on a packet; in this page, that usually means replacing with the IP address of a node.
[Destination NAT](https://en.wikipedia.org/wiki/Network_address_translation#DNAT)
- : replacing the destination IP on a packet; in this page, that usually means replacing with the IP address of a {{< glossary_tooltip term_id="pod" >}}
+ : Replacing the destination IP on a packet; in this page, that usually means replacing with the IP address of a {{< glossary_tooltip term_id="pod" >}}
[VIP](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies)
- : a virtual IP address, such as the one assigned to every {{< glossary_tooltip text="Service" term_id="service" >}} in Kubernetes
+ : A virtual IP address, such as the one assigned to every {{< glossary_tooltip text="Service" term_id="service" >}} in Kubernetes
[kube-proxy](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies)
- : a network daemon that orchestrates Service VIP management on every node
+ : A network daemon that orchestrates Service VIP management on every node
-->
[NAT](https://zh.wikipedia.org/wiki/%E7%BD%91%E7%BB%9C%E5%9C%B0%E5%9D%80%E8%BD%AC%E6%8D%A2)
: 网络地址转换
@@ -89,6 +89,7 @@ IP of requests it receives through an HTTP header. You can create it as follows:
```shell
kubectl create deployment source-ip-app --image=registry.k8s.io/echoserver:1.4
```

<!--
The output is:
-->
@@ -130,6 +131,7 @@ kube-proxy,则从集群内发送到 ClusterIP 的数据包永远不会进行
```console
kubectl get nodes
```

<!--
The output is similar to this:
-->
@@ -341,6 +343,7 @@ Visually:
* Pod 的回复被发送回给客户端

用图表示:

{{< figure src="/zh-cn/docs/images/tutor-service-nodePort-fig01.svg" alt="图 1:源 IP NodePort" class="diagram-large" caption="如图。使用 SNAT 的源 IP(Type=NodePort)" link="https://mermaid.live/edit#pako:eNqNkV9rwyAUxb-K3LysYEqS_WFYKAzat9GHdW9zDxKvi9RoMIZtlH732ZjSbE970cu5v3s86hFqJxEYfHjRNeT5ZcUtIbXRaMNN2hZ5vrYRqt52cSXV-4iMSuwkZiYtyX739EqWaahMQ-V1qPxDVLNOvkYrO6fj2dupWMR2iiT6foOKdEZoS5Q2hmVSStoH7w7IMqXUVOefWoaG3XVftHbGeZYVRbH6ZXJ47CeL2-qhxvt_ucTe1SUlpuMN6CX12XeGpLdJiaMMFFr0rdAyvvfxjHEIDbbIgcVSohKDCRy4PUV06KQIuJU6OA9MCdMjBTEEt_-2NbDgB7xAGy3i97VJPP0ABRmcqg" >}}

<!--
@@ -368,6 +371,7 @@ Set the `service.spec.externalTrafficPolicy` field as follows:
```shell
kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```

<!--
The output is:
-->
@@ -385,6 +389,7 @@ Now, re-run the test:
```shell
for node in $NODES; do curl --connect-timeout 1 -s $node:$NODEPORT | grep -i client_address; done
```

<!--
The output is similar to:
-->
@@ -447,6 +452,7 @@ You can test this by exposing the source-ip-app through a load balancer:
```shell
kubectl expose deployment source-ip-app --name=loadbalancer --port=80 --target-port=8080 --type=LoadBalancer
```

<!--
The output is:
-->
@@ -550,6 +556,7 @@ serving the health check at `/healthz`. You can test this:
```shell
kubectl get pod -o wide -l app=source-ip-app
```

<!--
The output is similar to this:
-->
