Merge pull request #35582 from krol3/merged-main-dev-1.25
Merge main branch into dev-1.25
k8s-ci-robot committed Aug 1, 2022
2 parents 552925f + d4fb248 commit acdef19
Showing 227 changed files with 28,846 additions and 5,079 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -29,6 +29,7 @@ nohup.out
# Hugo output
public/
resources/
.hugo_build.lock

# Netlify Functions build output
package-lock.json
4 changes: 2 additions & 2 deletions content/de/_index.html
@@ -42,12 +42,12 @@ <h2>Die Herausforderungen bei der Migration von über 150 Microservices auf Kubernetes</h2>
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Video ansehen</button>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe-2022/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu22" button id="desktopKCButton">Besuche die KubeCon Europe vom 16. bis 20. Mai 2022</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/" button id="desktopKCButton">Besuchen die KubeCon North America vom 24. bis 28. Oktober 2022</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccncna22" button id="desktopKCButton">Besuchen die KubeCon North America vom 24. bis 28. Oktober 2022</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">Besuche die KubeCon Europe vom 17. bis 21. April 2023</a>
</div>
<div id="videoPlayer">
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>
6 changes: 3 additions & 3 deletions content/en/_index.html
@@ -16,7 +16,7 @@
{{% blocks/feature image="scalable" %}}
#### Planet Scale

Designed on the same principles that allows Google to run billions of containers a week, Kubernetes can scale without increasing your ops team.
Designed on the same principles that allow Google to run billions of containers a week, Kubernetes can scale without increasing your operations team.

{{% /blocks/feature %}}

@@ -43,12 +43,12 @@ <h2>The Challenges of Migrating 150+ Microservices to Kubernetes</h2>
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Watch Video</button>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccncna22" button id="desktopKCButton">Attend KubeCon North America on October 24-28, 2022</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america" button id="desktopKCButton">Attend KubeCon North America on October 24-28, 2022</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe-2023/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu23" button id="desktopKCButton">Attend KubeCon Europe on April 17-21, 2023</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">Attend KubeCon Europe on April 17-21, 2023</a>
</div>
<div id="videoPlayer">
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>
@@ -67,7 +67,7 @@ Let's see an example of a cluster to understand this API.
As the feature name "PodTopologySpread" implies, the basic usage of this feature
is to run your workload with an absolute even manner (maxSkew=1), or relatively
even manner (maxSkew>=2). See the [official
document](/docs/concepts/workloads/pods/pod-topology-spread-constraints/)
document](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
for more details.
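
For instance, here is a minimal sketch of a Pod that asks for a relatively even spread (maxSkew=2) across zones; the name, label, and image are illustrative assumptions rather than values from the document:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: web                 # assumed label, referenced by the constraint below
spec:
  topologySpreadConstraints:
  - maxSkew: 2                                  # allow a relatively even spread
    topologyKey: topology.kubernetes.io/zone    # spread across zones
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: web
  containers:
  - name: web
    image: nginx:1.21        # illustrative image
```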

In addition to this basic usage, there are some advanced usage examples that
@@ -70,7 +70,7 @@ To correct the latter issue, we now employ a "hunt and peck" approach to removin
### 1. Upgrade to kubernetes 1.18 and make use of Pod Topology Spread Constraints

While this seems like it could have been the perfect solution, at the time of writing Kubernetes 1.18 was unavailable on the two most common managed Kubernetes services in public cloud, EKS and GKE.
Furthermore, [pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/) were still a [beta feature in 1.18](https://v1-18.docs.kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/) which meant that it [wasn't guaranteed to be available in managed clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters#kubernetes_feature_choices) even when v1.18 became available.
Furthermore, [pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/) were still a beta feature in 1.18 which meant that it [wasn't guaranteed to be available in managed clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters#kubernetes_feature_choices) even when v1.18 became available.
The entire endeavour was concerningly reminiscent of checking [caniuse.com](https://caniuse.com/) when Internet Explorer 8 was still around.

### 2. Deploy a statefulset _per zone_.
@@ -84,7 +84,7 @@ As stated earlier, there are several guides about
You can start with [Finding what container runtime are on your nodes](/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/).
If your nodes are using dockershim, there are other possible Docker Engine dependencies such as
Pods or third-party tools executing Docker commands or private registries in the Docker configuration file. You can follow the
[Check whether Dockershim deprecation affects you](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/) guide to review possible
[Check whether Dockershim removal affects you](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/) guide to review possible
Docker Engine dependencies. Before upgrading to v1.24, you decide to either remain using Docker Engine and
[Migrate Docker Engine nodes from dockershim to cri-dockerd](/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd/) or migrate to a CRI-compatible runtime. Here's a guide to
[change the container runtime on a node from Docker Engine to containerd](/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/).
2 changes: 1 addition & 1 deletion content/en/docs/concepts/configuration/overview.md
@@ -63,7 +63,7 @@ DNS server watches the Kubernetes API for new `Services` and creates a set of DN

## Using Labels

- Define and use [labels](/docs/concepts/overview/working-with-objects/labels/) that identify __semantic attributes__ of your application or Deployment, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. You can use these labels to select the appropriate Pods for other resources; for example, a Service that selects all `tier: frontend` Pods, or all `phase: test` components of `app: myapp`. See the [guestbook](https://github.com/kubernetes/examples/tree/master/guestbook/) app for examples of this approach.
- Define and use [labels](/docs/concepts/overview/working-with-objects/labels/) that identify __semantic attributes__ of your application or Deployment, such as `{ app.kubernetes.io/name: MyApp, tier: frontend, phase: test, deployment: v3 }`. You can use these labels to select the appropriate Pods for other resources; for example, a Service that selects all `tier: frontend` Pods, or all `phase: test` components of `app.kubernetes.io/name: MyApp`. See the [guestbook](https://github.com/kubernetes/examples/tree/master/guestbook/) app for examples of this approach.

A Service can be made to span multiple Deployments by omitting release-specific labels from its selector. When you need to update a running service without downtime, use a [Deployment](/docs/concepts/workloads/controllers/deployment/).
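
As a small illustrative sketch (the Service name and ports are assumptions), a Service that selects the frontend tier of `app.kubernetes.io/name: MyApp` while omitting release-specific labels such as `phase` and `deployment` could look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-frontend             # assumed name
spec:
  selector:
    app.kubernetes.io/name: MyApp
    tier: frontend                 # release-specific labels (phase, deployment) are omitted
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080               # assumed container port
```

Because `phase` and `deployment` are not part of the selector, the same Service keeps routing traffic as new Deployments are rolled out.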

2 changes: 1 addition & 1 deletion content/en/docs/concepts/containers/runtime-class.md
@@ -116,7 +116,7 @@ Runtime handlers are configured through containerd's configuration at
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.${HANDLER_NAME}]
```

See containerd's [config documentation](https://github.com/containerd/cri/blob/master/docs/config.md)
See containerd's [config documentation](https://github.com/containerd/containerd/blob/main/docs/cri/config.md)
for more details:
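
For illustration only, a RuntimeClass that references a containerd handler named `my-handler` (a hypothetical name standing in for `${HANDLER_NAME}`) could be defined like this:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: my-handler       # hypothetical RuntimeClass name
handler: my-handler      # must match ${HANDLER_NAME} in the containerd config above
```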

#### {{< glossary_tooltip term_id="cri-o" >}}
1 change: 1 addition & 0 deletions content/en/docs/concepts/scheduling-eviction/_index.md
@@ -23,6 +23,7 @@ of terminating one or more Pods on Nodes.
* [Kubernetes Scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/)
* [Assigning Pods to Nodes](/docs/concepts/scheduling-eviction/assign-pod-node/)
* [Pod Overhead](/docs/concepts/scheduling-eviction/pod-overhead/)
* [Pod Topology Spread Constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
* [Taints and Tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/)
* [Scheduling Framework](/docs/concepts/scheduling-eviction/scheduling-framework)
* [Scheduler Performance Tuning](/docs/concepts/scheduling-eviction/scheduler-perf-tuning/)
51 changes: 35 additions & 16 deletions content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
@@ -11,24 +11,27 @@ weight: 20

<!-- overview -->

You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it can only run on particular set of
{{< glossary_tooltip text="node(s)" term_id="node" >}}.
You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it is
_restricted_ to run on particular {{< glossary_tooltip text="node(s)" term_id="node" >}},
or to _prefer_ to run on particular nodes.
There are several ways to do this and the recommended approaches all use
[label selectors](/docs/concepts/overview/working-with-objects/labels/) to facilitate the selection.
Generally such constraints are unnecessary, as the scheduler will automatically do a reasonable placement
Often, you do not need to set any such constraints; the
{{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}} will automatically do a reasonable placement
(for example, spreading your Pods across nodes so as not place Pods on a node with insufficient free resources).
However, there are some circumstances where you may want to control which node
the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it, or to co-locate Pods from two different
services that communicate a lot into the same availability zone.
the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it,
or to co-locate Pods from two different services that communicate a lot into the same availability zone.

<!-- body -->

You can use any of the following methods to choose where Kubernetes schedules
specific Pods:
specific Pods:

* [nodeSelector](#nodeselector) field matching against [node labels](#built-in-node-labels)
* [Affinity and anti-affinity](#affinity-and-anti-affinity)
* [nodeName](#nodename) field
* [Pod topology spread constraints](#pod-topology-spread-constraints)
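
As a quick sketch of the first item in that list, a Pod that uses `nodeSelector` to target nodes carrying an assumed `disktype=ssd` label might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeSelector:
    disktype: ssd        # assumed node label; add it with: kubectl label nodes <node-name> disktype=ssd
  containers:
  - name: nginx
    image: nginx
```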

## Node labels {#built-in-node-labels}

@@ -170,7 +173,7 @@ For example, consider the following Pod spec:
{{< codenew file="pods/pod-with-affinity-anti-affinity.yaml" >}}

If there are two possible nodes that match the
`requiredDuringSchedulingIgnoredDuringExecution` rule, one with the
`preferredDuringSchedulingIgnoredDuringExecution` rule, one with the
`label-1:key-1` label and another with the `label-2:key-2` label, the scheduler
considers the `weight` of each node and adds the weight to the other scores for
that node, and schedules the Pod onto the node with the highest final score.
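
A minimal sketch of such a spec, with two weighted `preferredDuringSchedulingIgnoredDuringExecution` terms (the label keys and values are the illustrative ones from the surrounding text, and the Pod name and image are assumptions), could be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-weighted-affinity   # hypothetical name
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1                    # small bonus for nodes with label-1=key-1
        preference:
          matchExpressions:
          - key: label-1
            operator: In
            values:
            - key-1
      - weight: 50                   # much larger bonus for nodes with label-2=key-2
        preference:
          matchExpressions:
          - key: label-2
            operator: In
            values:
            - key-2
  containers:
  - name: app
    image: nginx                     # illustrative image
```

Other things being equal, a node carrying `label-2=key-2` receives the larger bonus and is favored.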
@@ -337,13 +340,15 @@ null `namespaceSelector` matches the namespace of the Pod where the rule is defi
Inter-pod affinity and anti-affinity can be even more useful when they are used with higher
level collections such as ReplicaSets, StatefulSets, Deployments, etc. These
rules allow you to configure that a set of workloads should
be co-located in the same defined topology, eg., the same node.
be co-located in the same defined topology; for example, preferring to place two related
Pods onto the same node.

Take, for example, a three-node cluster running a web application with an
in-memory cache like redis. You could use inter-pod affinity and anti-affinity
to co-locate the web servers with the cache as much as possible.
For example: imagine a three-node cluster. You use the cluster to run a web application
and also an in-memory cache (such as Redis). For this example, also assume that latency between
the web application and the memory cache should be as low as is practical. You could use inter-pod
affinity and anti-affinity to co-locate the web servers with the cache as much as possible.

In the following example Deployment for the redis cache, the replicas get the label `app=store`. The
In the following example Deployment for the Redis cache, the replicas get the label `app=store`. The
`podAntiAffinity` rule tells the scheduler to avoid placing multiple replicas
with the `app=store` label on a single node. This creates each cache in a
separate node.
@@ -378,10 +383,10 @@ spec:
image: redis:3.2-alpine
```

The following Deployment for the web servers creates replicas with the label `app=web-store`. The
Pod affinity rule tells the scheduler to place each replica on a node that has a
Pod with the label `app=store`. The Pod anti-affinity rule tells the scheduler
to avoid placing multiple `app=web-store` servers on a single node.
The following example Deployment for the web servers creates replicas with the label `app=web-store`.
The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod
with the label `app=store`. The Pod anti-affinity rule tells the scheduler never to place
multiple `app=web-store` servers on a single node.

```yaml
apiVersion: apps/v1
@@ -430,6 +435,10 @@ where each web server is co-located with a cache, on three separate nodes.
| *webserver-1* | *webserver-2* | *webserver-3* |
| *cache-1* | *cache-2* | *cache-3* |

The overall effect is that each cache instance is likely to be accessed by a single client, that
is running on the same node. This approach aims to minimize both skew (imbalanced load) and latency.

You might have other reasons to use Pod anti-affinity.
See the [ZooKeeper tutorial](/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure)
for an example of a StatefulSet configured with anti-affinity for high
availability, using the same technique as this example.
@@ -468,6 +477,16 @@ spec:

The above Pod will only run on the node `kube-01`.

## Pod topology spread constraints

You can use _topology spread constraints_ to control how {{< glossary_tooltip text="Pods" term_id="Pod" >}}
are spread across your cluster among failure-domains such as regions, zones, nodes, or among any other
topology domains that you define. You might do this to improve performance, expected availability, or
overall utilization.

Read [Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
to learn more about how these work.
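
As a brief, assumed sketch of how that looks in practice, a Deployment could spread its replicas across individual nodes like this (names and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # assumed name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname   # spread across individual nodes
        whenUnsatisfiable: ScheduleAnyway     # prefer spreading, but do not block scheduling
        labelSelector:
          matchLabels:
            app: web
      containers:
      - name: web
        image: nginx        # illustrative image
```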

## {{% heading "whatsnext" %}}

* Read more about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/) .
@@ -83,7 +83,7 @@ of the scheduler:
## {{% heading "whatsnext" %}}

* Read about [scheduler performance tuning](/docs/concepts/scheduling-eviction/scheduler-perf-tuning/)
* Read about [Pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/)
* Read about [Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
* Read the [reference documentation](/docs/reference/command-line-tools-reference/kube-scheduler/) for kube-scheduler
* Read the [kube-scheduler config (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/) reference
* Learn about [configuring multiple schedulers](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/)
@@ -91,9 +91,9 @@ Some kubelet garbage collection features are deprecated in favor of eviction:
| ------------- | -------- | --------- |
| `--image-gc-high-threshold` | `--eviction-hard` or `--eviction-soft` | existing eviction signals can trigger image garbage collection |
| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | eviction reclaims achieve the same behavior |
| `--maximum-dead-containers` | | deprecated once old logs are stored outside of container's context |
| `--maximum-dead-containers-per-container` | | deprecated once old logs are stored outside of container's context |
| `--minimum-container-ttl-duration` | | deprecated once old logs are stored outside of container's context |
| `--maximum-dead-containers` | - | deprecated once old logs are stored outside of container's context |
| `--maximum-dead-containers-per-container` | - | deprecated once old logs are stored outside of container's context |
| `--minimum-container-ttl-duration` | - | deprecated once old logs are stored outside of container's context |
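
For example, a hedged sketch of eviction thresholds expressed in a KubeletConfiguration (the specific values below are assumptions, not recommendations):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "100Mi"     # hard-evict when available memory drops below 100Mi
  nodefs.available: "10%"       # hard-evict when node filesystem free space drops below 10%
  imagefs.available: "15%"      # this signal also triggers image garbage collection
evictionMinimumReclaim:
  nodefs.available: "500Mi"     # reclaim at least this much extra when evicting for nodefs pressure
```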

### Eviction thresholds

Expand Down Expand Up @@ -216,7 +216,7 @@ the kubelet frees up disk space in the following order:
If the kubelet's attempts to reclaim node-level resources don't bring the eviction
signal below the threshold, the kubelet begins to evict end-user pods.

The kubelet uses the following parameters to determine pod eviction order:
The kubelet uses the following parameters to determine the pod eviction order:

1. Whether the pod's resource usage exceeds requests
1. [Pod Priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/)
Expand Down Expand Up @@ -319,7 +319,7 @@ The kubelet sets an `oom_score_adj` value for each container based on the QoS fo

{{<note>}}
The kubelet also sets an `oom_score_adj` value of `-997` for containers in Pods that have
`system-node-critical` {{<glossary_tooltip text="Priority" term_id="pod-priority">}}
`system-node-critical` {{<glossary_tooltip text="Priority" term_id="pod-priority">}}.
{{</note>}}

If the kubelet can't reclaim memory before a node experiences OOM, the
Expand Down Expand Up @@ -401,7 +401,7 @@ counted as `active_file`. If enough of these kernel block buffers are on the
active LRU list, the kubelet is liable to observe this as high resource use and
taint the node as experiencing memory pressure - triggering pod eviction.

For more more details, see [https://github.com/kubernetes/kubernetes/issues/43916](https://github.com/kubernetes/kubernetes/issues/43916)
For more details, see [https://github.com/kubernetes/kubernetes/issues/43916](https://github.com/kubernetes/kubernetes/issues/43916)

You can work around that behavior by setting the memory limit and memory request
the same for containers likely to perform intensive I/O activity. You will need