[zh-cn]sync manage-resources-containers garbage-collection nodes logging job

Signed-off-by: xin.li <xin.li@daocloud.io>
my-git9 committed Jun 4, 2023
1 parent a72cd0c commit 0dc7f17
Showing 6 changed files with 59 additions and 52 deletions.
content/zh-cn/docs/concepts/architecture/garbage-collection.md
@@ -127,7 +127,7 @@ two types of cascading deletion, as follows:

Kubernetes checks for and deletes objects that no longer have owner references, such as
the Pods left behind when you delete a ReplicaSet. When you delete an object, you can
control whether Kubernetes deletes the object's dependents automatically, in a process
called **cascading deletion**. There are two types of cascading deletion, as follows:

* Foreground cascading deletion
@@ -236,12 +236,12 @@ break the kubelet behavior and remove containers that should exist.
To configure options for unused container and image garbage collection, tune the
kubelet using a [configuration file](/docs/tasks/administer-cluster/kubelet-config-file/)
and change the parameters related to garbage collection using the
-[`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
+[`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/)
resource type.
-->
To configure options for garbage collection of unused containers and images, use a
[configuration file](/zh-cn/docs/tasks/administer-cluster/kubelet-config-file/) and adjust
the garbage-collection-related kubelet behavior via the
-[`KubeletConfiguration`](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
+[`KubeletConfiguration`](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/)
resource type.
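For illustration (not part of this commit), a minimal `KubeletConfiguration` sketch that
tunes image garbage collection could look like the following; the field values are
assumptions that happen to match the documented defaults:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Unused images become garbage-collection candidates only after this age.
imageMinimumGCAge: "2m"
# Start deleting images once disk usage exceeds the high threshold,
# and stop once usage falls below the low threshold.
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
```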

<!--
12 changes: 6 additions & 6 deletions content/zh-cn/docs/concepts/architecture/nodes.md
@@ -142,7 +142,7 @@ register itself with the API server. This is the preferred pattern, used by most
For self-registration, the kubelet is started with the following options:
-->
### Self-registration of Nodes {#self-registration-of-nodes}

When the kubelet flag `--register-node` is true (the default), the kubelet attempts to
register itself with the API server. This is the preferred pattern, used by most distros.
@@ -942,10 +942,10 @@ in a cluster,
|`regular/unset` | 0 |

<!--
-Within the [kubelet configuration](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
+Within the [kubelet configuration](/docs/reference/config-api/kubelet-config.v1beta1/)
the settings for `shutdownGracePeriodByPodPriority` could look like:
-->
-Within the [kubelet configuration](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration),
+Within the [kubelet configuration](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/),
the settings for `shutdownGracePeriodByPodPriority` could look like:

| Pod priority class value | Shutdown period |
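For illustration (not part of this commit), a sketch of the corresponding kubelet
configuration fragment; the priority cutoffs and grace periods below are illustrative
values, and the graceful node shutdown feature is assumed to be enabled:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Pods at or above a priority value receive the matching shutdown grace period.
shutdownGracePeriodByPodPriority:
  - priority: 100000
    shutdownGracePeriodSeconds: 10
  - priority: 10000
    shutdownGracePeriodSeconds: 180
  - priority: 1000
    shutdownGracePeriodSeconds: 120
  - priority: 0
    shutdownGracePeriodSeconds: 60
```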
@@ -1073,7 +1073,7 @@ VolumeAttachments will not be deleted from the original shutdown node so the volumes
used by these pods cannot be attached to a new running node. As a result, the
application running on the StatefulSet cannot function properly. If the original
shutdown node comes up, the pods will be deleted by kubelet and new pods will be
created on a different running node. If the original shutdown node does not come up,
these pods will be stuck in terminating status on the shutdown node forever.
-->
When a node is shut down but the kubelet's node shutdown manager does not detect the event,
@@ -1150,12 +1150,12 @@ onwards, swap memory support can be enabled on a per-node basis.
<!--
To enable swap on a node, the `NodeSwap` feature gate must be enabled on
the kubelet, and the `--fail-swap-on` command line flag or `failSwapOn`
-[configuration setting](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
+[configuration setting](/docs/reference/config-api/kubelet-config.v1beta1/)
must be set to false.
-->
To enable swap memory on a node, the kubelet's `NodeSwap` feature gate must be enabled,
and the `--fail-swap-on` command line flag or the `failSwapOn`
-[configuration setting](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) must be set to false.
+[configuration setting](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/) must be set to false.
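A minimal sketch of that configuration (not part of this commit); the optional
`memorySwap.swapBehavior` field is an assumption included for illustration:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false      # equivalent to passing --fail-swap-on=false
featureGates:
  NodeSwap: true       # feature gate required for swap support
# Optional: constrain how workloads may use swap (assumed v1beta1 field).
memorySwap:
  swapBehavior: LimitedSwap
```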

{{< warning >}}
<!--
33 changes: 19 additions & 14 deletions content/zh-cn/docs/concepts/cluster-administration/logging.md
@@ -132,9 +132,10 @@ See the [`kubectl logs` documentation](/docs/reference/generated/kubectl/kubectl
![Node level logging](/images/docs/user-guide/logging/logging-node-level.png)
-A container runtime handles and redirects any output generated to a containerized application's `stdout` and `stderr` streams.
-Different container runtimes implement this in different ways; however, the integration with the kubelet is standardized
-as the _CRI logging format_.
+A container runtime handles and redirects any output generated to a containerized
+application's `stdout` and `stderr` streams.
+Different container runtimes implement this in different ways; however, the integration
+with the kubelet is standardized as the _CRI logging format_.
-->
### How nodes handle container logs {#how-nodes-handle-container-logs}

@@ -144,11 +145,11 @@ as the _CRI logging format_.
Different container runtimes implement this in different ways; however, the integration with the kubelet is standardized as the **CRI logging format**.

<!--
-By default, if a container restarts, the kubelet keeps one terminated container with its logs. If a pod is evicted from the node,
-all corresponding containers are also evicted, along with their logs.
+By default, if a container restarts, the kubelet keeps one terminated container with its logs.
+If a pod is evicted from the node, all corresponding containers are also evicted, along with their logs.
-The kubelet makes logs available to clients via a special feature of the Kubernetes API. The usual way to access this is
-by running `kubectl logs`.
+The kubelet makes logs available to clients via a special feature of the Kubernetes API.
+The usual way to access this is by running `kubectl logs`.
-->
By default, if a container restarts, the kubelet keeps one terminated container with its logs.
If a Pod is evicted from the node, all its corresponding containers are also evicted, along with their logs.
@@ -176,7 +177,7 @@ and the runtime writes the container logs to the given location.
The kubelet (using the CRI) sends this information to the container runtime, and the runtime writes the container logs to the given location.

<!--
-You can configure two kubelet [configuration settings](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration),
+You can configure two kubelet [configuration settings](/docs/reference/config-api/kubelet-config.v1beta1/),
`containerLogMaxSize` and `containerLogMaxFiles`,
using the [kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file/).
These settings let you configure the maximum size for each log file and the maximum number of files allowed for each container respectively.
@@ -185,7 +186,7 @@ When you run [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands
the basic logging example, the kubelet on the node handles the request and
reads directly from the log file. The kubelet returns the content of the log file.
-->
-You can use the [kubelet configuration file](/zh-cn/docs/tasks/administer-cluster/kubelet-config-file/) to configure two
+You can use the [kubelet configuration file](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/) to configure two
kubelet [configuration settings](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration):
`containerLogMaxSize` and `containerLogMaxFiles`.
These settings let you configure the maximum size for each log file and the maximum number of files allowed for each container, respectively.
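For illustration (not part of this commit), a short sketch of those two settings in a
kubelet configuration file; the values shown mirror the documented defaults:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: "10Mi"  # rotate a container's log file once it reaches this size
containerLogMaxFiles: 5      # keep at most this many log files per container
```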
@@ -353,7 +354,8 @@ as your responsibility.
<!--
## Cluster-level logging architectures
-While Kubernetes does not provide a native solution for cluster-level logging, there are several common approaches you can consider. Here are some options:
+While Kubernetes does not provide a native solution for cluster-level logging, there are
+several common approaches you can consider. Here are some options:
* Use a node-level logging agent that runs on every node.
* Include a dedicated sidecar container for logging in an application pod.
@@ -378,9 +380,12 @@ While Kubernetes does not provide a native solution for cluster-level logging, t
![Using a node-level logging agent](/images/docs/user-guide/logging/logging-with-node-agent.png)

<!--
-You can implement cluster-level logging by including a _node-level logging agent_ on each node. The logging agent is a dedicated tool that exposes logs or pushes logs to a backend. Commonly, the logging agent is a container that has access to a directory with log files from all of the application containers on that node.
+You can implement cluster-level logging by including a _node-level logging agent_ on each node.
+The logging agent is a dedicated tool that exposes logs or pushes logs to a backend.
+Commonly, the logging agent is a container that has access to a directory with log files from all of the
+application containers on that node.
-->
You can implement cluster-level logging by including a **node-level logging agent** on each node.
The logging agent is a dedicated tool that exposes logs or pushes logs to a backend.
Commonly, the logging agent is a container that has access to a directory with log files from all of the application containers on that node.
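As a hedged sketch of this pattern (not part of this commit), a DaemonSet running an agent
container with read access to the node's log directory might look like the following; the
name `log-agent` and the agent image are assumptions, not a recommendation from this page:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent            # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: log-agent
  template:
    metadata:
      labels:
        name: log-agent
    spec:
      containers:
      - name: agent
        image: fluent/fluent-bit:2.1   # illustrative agent image
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true               # the agent only reads node logs
      volumes:
      - name: varlog
        hostPath:
          path: /var/log               # directory holding container log files
```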

@@ -395,7 +400,8 @@ Node-level logging creates only one agent per node and doesn't require any chang
Node-level logging creates only one agent per node and doesn't require any changes to the applications running on the node.

<!--
-Containers write to stdout and stderr, but with no agreed format. A node-level agent collects these logs and forwards them for aggregation.
+Containers write to stdout and stderr, but with no agreed format. A node-level agent collects
+these logs and forwards them for aggregation.
-->
Containers write to stdout and stderr, but with no agreed format.
A node-level agent collects these logs and forwards them for aggregation.
@@ -654,4 +660,3 @@ Cluster-logging that exposes or pushes logs directly from every application is o
* Read about [Kubernetes system logs](/zh-cn/docs/concepts/cluster-administration/system-logs/)
* Learn more about [traces for Kubernetes system components](/zh-cn/docs/concepts/cluster-administration/system-traces/)
* Learn how to [customize the termination message that Kubernetes records](/zh-cn/docs/tasks/debug/debug-application/determine-reason-pod-failure/#customizing-the-termination-message) when a Pod fails

content/zh-cn/docs/concepts/configuration/manage-resources-containers.md
@@ -22,27 +22,27 @@ feature:
<!-- overview -->

<!--
-When you specify a {{< glossary_tooltip term_id="pod" >}}, you can optionally specify how
-much of each resource a {{< glossary_tooltip text="container" term_id="container" >}} needs.
-The most common resources to specify are CPU and memory (RAM); there are others.
+When you specify a {{< glossary_tooltip term_id="pod" >}}, you can optionally specify how much of each resource a
+{{< glossary_tooltip text="container" term_id="container" >}} needs. The most common resources to specify are CPU and memory
+(RAM); there are others.
-When you specify the resource _request_ for containers in a Pod, the
-{{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}} uses this
-information to decide which node to place the Pod on. When you specify a resource _limit_
-for a container, the kubelet enforces those limits so that the running container is not
-allowed to use more of that resource than the limit you set. The kubelet also reserves
-at least the _request_ amount of that system resource specifically for that container
-to use.
+When you specify the resource _request_ for containers in a Pod, the
+{{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}} uses this information to decide which node to place the Pod on.
+When you specify a resource _limit_ for a container, the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} enforces those
+limits so that the running container is not allowed to use more of that resource
+than the limit you set. The kubelet also reserves at least the _request_ amount of
+that system resource specifically for that container to use.
-->
When you specify a {{< glossary_tooltip text="Pod" term_id="pod" >}}, you can optionally specify how much
of each resource a {{< glossary_tooltip text="container" term_id="container" >}} needs.
The most common resources to specify are CPU and memory (RAM); there are others.

When you specify the resource **request** for containers in a Pod, the
{{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}}
uses this information to decide which node to place the Pod on.
When you specify a resource **limit** for a container, the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}
ensures that the running container does not use more of that resource than the limit you set.
The kubelet also reserves at least the **requested** amount of that system resource for the container to use.
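To make the request/limit distinction concrete (not part of this commit), a minimal Pod
sketch; the name, image, and quantities are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend                       # hypothetical name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # placeholder image
    resources:
      requests:
        cpu: "250m"      # kube-scheduler uses requests to pick a node
        memory: "64Mi"
      limits:
        cpu: "500m"      # kubelet enforces limits at runtime
        memory: "128Mi"
```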

<!-- body -->

28 changes: 19 additions & 9 deletions content/zh-cn/docs/concepts/workloads/controllers/job.md
@@ -448,12 +448,6 @@ Jobs with _fixed completion count_ - that is, jobs that have non null
the deterministic hostnames to address each other via DNS. For more information about
how to configure this, see [Job with Pod-to-Pod Communication](/docs/tasks/job/job-with-pod-to-pod-communication/).
- From the containerized task, in the environment variable `JOB_COMPLETION_INDEX`.
-The Job is considered complete when there is one successfully completed Pod
-for each index. For more information about how to use this mode, see
-[Indexed Job for Parallel Processing with Static Work Assignment](/docs/tasks/job/indexed-parallel-processing-static/).
-Note that, although rare, more than one Pod could be started for the same
-index, but only one of them will count towards the completion count.
-->
- `NonIndexed` (default): the Job is considered complete when the number of successfully
  completed Pods reaches the value of `.spec.completions`. In other words, each Job
  completion event is independent of and homologous to the others.
@@ -467,11 +461,27 @@ Jobs with _fixed completion count_ - that is, jobs that have non null
  For more information about how to configure this, see
  [Job with Pod-to-Pod Communication](/zh-cn/docs/tasks/job/job-with-pod-to-pod-communication/).
- From the containerized task, in the environment variable `JOB_COMPLETION_INDEX`.

<!--
The Job is considered complete when there is one successfully completed Pod
for each index. For more information about how to use this mode, see
[Indexed Job for Parallel Processing with Static Work Assignment](/docs/tasks/job/indexed-parallel-processing-static/).
-->
The Job is considered complete when there is one successfully completed Pod for each index.
For more information about how to use this mode, see
[Indexed Job for Parallel Processing with Static Work Assignment](/zh-cn/docs/tasks/job/indexed-parallel-processing-static/).
-Note that, although rare, more than one Pod could be started for the same index,
-but only one of them will count towards the completion count.

{{< note >}}
<!--
Although rare, more than one Pod could be started for the same index (due to various reasons such as node failures,
kubelet restarts, or Pod evictions). In this case, only the first Pod that completes successfully will
count towards the completion count and update the status of the Job. The other Pods that are running
or completed for the same index will be deleted by the Job controller once they are detected.
-->
Although rare, more than one Pod could be started for the same index (due to various
reasons such as node failures, kubelet restarts, or Pod evictions). In this case, only
the first Pod that completes successfully will count towards the completion count and
update the status of the Job. The other Pods that are running or completed for the same
index will be deleted by the Job controller once they are detected.
{{< /note >}}
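For illustration (not part of this commit), a minimal sketch of an `Indexed` Job that
reads its index from `JOB_COMPLETION_INDEX`; the name, image, and counts are illustrative:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-demo           # hypothetical name
spec:
  completions: 5               # indexes 0 through 4, one success required per index
  parallelism: 2
  completionMode: Indexed
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.36    # illustrative image
        command: ["sh", "-c", "echo processing shard $JOB_COMPLETION_INDEX"]
```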

<!--
## Handling Pod and container failures
@@ -697,7 +707,7 @@ the Job is marked as failed. The specific rules for the `main` container are as follows:
- Exit code 42 means the **entire Job** failed
- Any other exit code means the container failed, and hence the whole Pod. The Pod is
  re-created if the total number of restarts is below `backoffLimit`; once the
  `backoffLimit` is reached, the **entire Job** fails.
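For illustration (not part of this commit), the rules above correspond to a Job
`podFailurePolicy`; a hedged sketch with hypothetical Job name and placeholder image:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: exit-code-demo             # hypothetical name
spec:
  completions: 1
  backoffLimit: 6
  podFailurePolicy:
    rules:
    - action: FailJob              # exit code 42 fails the whole Job immediately
      onExitCodes:
        containerName: main
        operator: In
        values: [42]
  template:
    spec:
      restartPolicy: Never         # required when using a Pod failure policy
      containers:
      - name: main
        image: example.com/task:1.0  # placeholder image
```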

{{< note >}}
<!--
8 changes: 0 additions & 8 deletions content/zh-cn/docs/reference/tools/map-crictl-dockercli.md
@@ -12,13 +12,6 @@ weight: 10

{{% thirdparty-content %}}

-{{<note>}}
-<!--
-This page is deprecated and will be removed in Kubernetes 1.27.
--->
-This page is deprecated and will be removed in Kubernetes 1.27.
-{{</note>}}

<!--
`crictl` is a command-line interface for {{<glossary_tooltip term_id="cri" text="CRI">}}-compatible container runtimes.
You can use it to inspect and debug container runtimes and applications on a
@@ -151,4 +144,3 @@ crictl | Description
`rmp` | Remove one or more Pods
`stopp` | Stop one or more running Pods
{{< /table >}}
