Commit

Fix spelling mistake in scheduling section
alexis974 committed Feb 19, 2024
1 parent 33dcba8 commit e839bf7
Showing 10 changed files with 29 additions and 29 deletions.
12 changes: 6 additions & 6 deletions content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
@@ -254,13 +254,13 @@ the node label that the system uses to denote the domain. For examples, see
[Well-Known Labels, Annotations and Taints](/docs/reference/labels-annotations-taints/).

{{< note >}}
Inter-pod affinity and anti-affinity require substantial amount of
Inter-pod affinity and anti-affinity require substantial amounts of
processing which can slow down scheduling in large clusters significantly. We do
not recommend using them in clusters larger than several hundred nodes.
{{< /note >}}

{{< note >}}
Pod anti-affinity requires nodes to be consistently labelled, in other words,
Pod anti-affinity requires nodes to be consistently labeled, in other words,
every node in the cluster must have an appropriate label matching `topologyKey`.
If some or all nodes are missing the specified `topologyKey` label, it can lead
to unintended behavior.
@@ -364,7 +364,7 @@ null `namespaceSelector` matches the namespace of the Pod where the rule is defi

{{< note >}}
<!-- UPDATE THIS WHEN PROMOTING TO BETA -->
The `matchLabelKeys` field is a alpha-level field and is disabled by default in
The `matchLabelKeys` field is an alpha-level field and is disabled by default in
Kubernetes {{< skew currentVersion >}}.
When you want to use it, you have to enable it via the
`MatchLabelKeysInPodAffinity` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
@@ -415,7 +415,7 @@ spec:

{{< note >}}
<!-- UPDATE THIS WHEN PROMOTING TO BETA -->
The `mismatchLabelKeys` field is a alpha-level field and is disabled by default in
The `mismatchLabelKeys` field is an alpha-level field and is disabled by default in
Kubernetes {{< skew currentVersion >}}.
When you want to use it, you have to enable it via the
`MatchLabelKeysInPodAffinity` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
@@ -561,7 +561,7 @@ where each web server is co-located with a cache, on three separate nodes.
| *webserver-1* | *webserver-2* | *webserver-3* |
| *cache-1* | *cache-2* | *cache-3* |

The overall effect is that each cache instance is likely to be accessed by a single client, that
The overall effect is that each cache instance is likely to be accessed by a single client that
is running on the same node. This approach aims to minimize both skew (imbalanced load) and latency.
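
The Deployment manifests that produce this layout sit above the visible context of this hunk; a hedged sketch of the affinity stanza behind the pattern, assuming the cache Pods carry the label `app: store` and the web servers `app: web-store`, might look like:

```yaml
# Sketch only: excerpt of a web-server Pod template. podAffinity pulls each replica
# onto a node that already runs a cache Pod (app: store), while podAntiAffinity keeps
# web-server replicas (app: web-store) on separate nodes. Both labels are assumptions.
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - store
      topologyKey: "kubernetes.io/hostname"
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - web-store
      topologyKey: "kubernetes.io/hostname"
```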

You might have other reasons to use Pod anti-affinity.
@@ -589,7 +589,7 @@ Some of the limitations of using `nodeName` to select nodes are:
{{< note >}}
`nodeName` is intended for use by custom schedulers or advanced use cases where
you need to bypass any configured schedulers. Bypassing the schedulers might lead to
failed Pods if the assigned Nodes get oversubscribed. You can use [node affinity](#node-affinity) or a the [`nodeselector` field](#nodeselector) to assign a Pod to a specific Node without bypassing the schedulers.
failed Pods if the assigned Nodes get oversubscribed. You can use the [node affinity](#node-affinity) or the [`nodeselector` field](#nodeselector) to assign a Pod to a specific Node without bypassing the schedulers.
{{</ note >}}

Here is an example of a Pod spec using the `nodeName` field:
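
The manifest itself is collapsed out of this hunk; a minimal sketch of such a Pod spec, with the node name `kube-01` as a placeholder, might be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  # Bypasses the scheduler entirely: the kubelet on kube-01 runs this Pod directly.
  nodeName: kube-01
```
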
@@ -41,14 +41,14 @@ ResourceClass
driver.

ResourceClaim
: Defines a particular resource instances that is required by a
: Defines a particular resource instance that is required by a
workload. Created by a user (lifecycle managed manually, can be shared
between different Pods) or for individual Pods by the control plane based on
a ResourceClaimTemplate (automatic lifecycle, typically used by just one
Pod).

ResourceClaimTemplate
: Defines the spec and some meta data for creating
: Defines the spec and some metadata for creating
ResourceClaims. Created by a user when deploying a workload.

PodSchedulingContext
@@ -171,7 +171,7 @@ The kubelet has the following default hard eviction thresholds:
- `nodefs.inodesFree<5%` (Linux nodes)

These default values of hard eviction thresholds will only be set if none
of the parameters is changed. If you changed the value of any parameter,
of the parameters is changed. If you change the value of any parameter,
then the values of other parameters will not be inherited as the default
values and will be set to zero. In order to provide custom values, you
should provide all the thresholds respectively.
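
In practice that means restating every threshold whenever you override one; a hedged sketch using the `KubeletConfiguration` API (only `memory.available` is customized here, the rest restate the documented defaults) might be:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "500Mi"   # custom value (illustrative)
  nodefs.available: "10%"     # restated default
  imagefs.available: "15%"    # restated default
  nodefs.inodesFree: "5%"     # restated default
```
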
@@ -182,8 +182,8 @@ When Pod priority is enabled, the scheduler orders pending Pods by
their priority and a pending Pod is placed ahead of other pending Pods
with lower priority in the scheduling queue. As a result, the higher
priority Pod may be scheduled sooner than Pods with lower priority if
its scheduling requirements are met. If such Pod cannot be scheduled,
scheduler will continue and tries to schedule other lower priority Pods.
its scheduling requirements are met. If such Pod cannot be scheduled, the
scheduler will continue and try to schedule other lower priority Pods.
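
As a brief illustration outside this diff, Pods opt into a priority by referencing a PriorityClass through `priorityClassName`; a hedged sketch of such a class (name and value are placeholders) could be:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority        # placeholder name
value: 1000000               # higher values are placed ahead of lower ones in the queue
globalDefault: false
description: "Pending Pods using this class are ordered ahead of lower-priority Pods."
```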

## Preemption

@@ -199,7 +199,7 @@ the Pods are gone, P can be scheduled on the Node.
### User exposed information

When Pod P preempts one or more Pods on Node N, `nominatedNodeName` field of Pod
P's status is set to the name of Node N. This field helps scheduler track
P's status is set to the name of Node N. This field helps the scheduler track
resources reserved for Pod P and also gives users information about preemptions
in their clusters.

@@ -209,8 +209,8 @@ After victim Pods are preempted, they get their graceful termination period. If
another node becomes available while scheduler is waiting for the victim Pods to
terminate, scheduler may use the other node to schedule Pod P. As a result
`nominatedNodeName` and `nodeName` of Pod spec are not always the same. Also, if
scheduler preempts Pods on Node N, but then a higher priority Pod than Pod P
arrives, scheduler may give Node N to the new higher priority Pod. In such a
the scheduler preempts Pods on Node N, but then a higher priority Pod than Pod P
arrives, the scheduler may give Node N to the new higher priority Pod. In such a
case, scheduler clears `nominatedNodeName` of Pod P. By doing this, scheduler
makes Pod P eligible to preempt Pods on another Node.

@@ -288,7 +288,7 @@ enough demand and if we find an algorithm with reasonable performance.

## Troubleshooting

Pod priority and pre-emption can have unwanted side effects. Here are some
Pod priority and preemption can have unwanted side effects. Here are some
examples of potential problems and ways to deal with them.

### Pods are preempted unnecessarily
@@ -59,7 +59,7 @@ The output is:
```

To inform scheduler this Pod is ready for scheduling, you can remove its `schedulingGates` entirely
by re-applying a modified manifest:
by reapplying a modified manifest:

{{% code_sample file="pods/pod-without-scheduling-gates.yaml" %}}
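
The referenced sample file isn't rendered in this diff; a hedged sketch of what such a modified manifest might contain (Pod name and image are placeholders) is:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  # The schedulingGates list from the gated manifest is simply omitted here,
  # signalling that the Pod is now ready to be considered for scheduling.
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.6
```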

@@ -57,9 +57,9 @@ the `NodeResourcesFit` score function can be controlled by the
Within the `scoringStrategy` field, you can configure two parameters: `requestedToCapacityRatio` and
`resources`. The `shape` in the `requestedToCapacityRatio`
parameter allows the user to tune the function as least requested or most
requested based on `utilization` and `score` values. The `resources` parameter
consists of `name` of the resource to be considered during scoring and `weight`
specify the weight of each resource.
requested based on `utilization` and `score` values. The `resources` parameter
comprises both the `name` of the resource to be considered during scoring and
its corresponding `weight`, which specifies the weight of each resource.
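
The full example referenced just below is collapsed out of this hunk; a hedged sketch of a `KubeSchedulerConfiguration` that shapes the score toward bin packing (weights and shape points here are illustrative) might look like:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  pluginConfig:
  - name: NodeResourcesFit
    args:
      scoringStrategy:
        type: RequestedToCapacityRatio
        requestedToCapacityRatio:
          shape:                  # score rises with utilization => most-requested (bin packing)
          - utilization: 0
            score: 0
          - utilization: 100
            score: 10
        resources:
        - name: intel.com/foo     # extended resource; weight is illustrative
          weight: 3
        - name: intel.com/bar
          weight: 3
```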

Below is an example configuration that sets
the bin packing behavior for extended resources `intel.com/foo` and `intel.com/bar`
@@ -77,7 +77,7 @@ If you don't specify a threshold, Kubernetes calculates a figure using a
linear formula that yields 50% for a 100-node cluster and yields 10%
for a 5000-node cluster. The lower bound for the automatic value is 5%.

This means that, the kube-scheduler always scores at least 5% of your cluster no
This means that the kube-scheduler always scores at least 5% of your cluster no
matter how large the cluster is, unless you have explicitly set
`percentageOfNodesToScore` to be smaller than 5.
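
To make the tuning concrete, a hedged sketch of a scheduler configuration that pins the percentage explicitly (the value 30 is only an illustration) could be:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
# In a 1000-node cluster, the scheduler stops searching for feasible nodes
# once it has found 300 (30%) of them, and scores only those.
percentageOfNodesToScore: 30
```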

@@ -113,7 +113,7 @@ called for that node. Nodes may be evaluated concurrently.

### PostFilter {#post-filter}

These plugins are called after Filter phase, but only when no feasible nodes
These plugins are called after the Filter phase, but only when no feasible nodes
were found for the pod. Plugins are called in their configured order. If
any postFilter plugin marks the node as `Schedulable`, the remaining plugins
will not be called. A typical PostFilter implementation is preemption, which
@@ -84,7 +84,7 @@ An empty `effect` matches all effects with key `key1`.

{{< /note >}}

The above example used `effect` of `NoSchedule`. Alternatively, you can use `effect` of `PreferNoSchedule`.
The above example used the `effect` of `NoSchedule`. Alternatively, you can use the `effect` of `PreferNoSchedule`.
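
As a hedged sketch of that alternative, reusing the `key1`/`value1` pair from the example above, only the `effect` in the toleration changes:

```yaml
tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "PreferNoSchedule"   # soft preference rather than a hard NoSchedule rule
```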


The allowed values for the `effect` field are:
@@ -227,7 +227,7 @@ are true. The following taints are built in:
* `node.kubernetes.io/network-unavailable`: Node's network is unavailable.
* `node.kubernetes.io/unschedulable`: Node is unschedulable.
* `node.cloudprovider.kubernetes.io/uninitialized`: When the kubelet is started
with "external" cloud provider, this taint is set on a node to mark it
with an "external" cloud provider, this taint is set on a node to mark it
as unusable. After a controller from the cloud-controller-manager initializes
this node, the kubelet removes this taint.

@@ -71,7 +71,7 @@ spec:
```

You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints` or
refer to [scheduling](/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling) section of the API reference for Pod.
refer to the [scheduling](/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling) section of the API reference for Pod.

### Spread constraint definition

@@ -254,7 +254,7 @@ follows the API definition of the field; however, the behavior is more likely to
confusing and troubleshooting is less straightforward.

You need a mechanism to ensure that all the nodes in a topology domain (such as a
cloud provider region) are labelled consistently.
cloud provider region) are labeled consistently.
To avoid you needing to manually label nodes, most clusters automatically
populate well-known labels such as `kubernetes.io/hostname`. Check whether
your cluster supports this.
@@ -263,7 +263,7 @@ your cluster supports this.

### Example: one topology spread constraint {#example-one-topologyspreadconstraint}

Suppose you have a 4-node cluster where 3 Pods labelled `foo: bar` are located in
Suppose you have a 4-node cluster where 3 Pods labeled `foo: bar` are located in
node1, node2 and node3 respectively:

{{<mermaid>}}
@@ -290,7 +290,7 @@ can use a manifest similar to:
{{% code_sample file="pods/topology-spread-constraints/one-constraint.yaml" %}}
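
The referenced `one-constraint.yaml` isn't rendered in this diff; a hedged sketch of such a manifest (Pod name and image are placeholders) might be:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1                      # zones may differ by at most one matching Pod
    topologyKey: zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.8
```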

From that manifest, `topologyKey: zone` implies the even distribution will only be applied
to nodes that are labelled `zone: <any value>` (nodes that don't have a `zone` label
to nodes that are labeled `zone: <any value>` (nodes that don't have a `zone` label
are skipped). The field `whenUnsatisfiable: DoNotSchedule` tells the scheduler to let the
incoming Pod stay pending if the scheduler can't find a way to satisfy the constraint.

@@ -494,7 +494,7 @@ There are some implicit conventions worth noting here:
above example, if you remove the incoming Pod's labels, it can still be placed onto
nodes in zone `B`, since the constraints are still satisfied. However, after that
placement, the degree of imbalance of the cluster remains unchanged - it's still zone `A`
having 2 Pods labelled as `foo: bar`, and zone `B` having 1 Pod labelled as
having 2 Pods labeled as `foo: bar`, and zone `B` having 1 Pod labeled as
`foo: bar`. If this is not what you expect, update the workload's
`topologySpreadConstraints[*].labelSelector` to match the labels in the pod template.

@@ -618,7 +618,7 @@ section of the enhancement proposal about Pod topology spread constraints.
because, in this case, those topology domains won't be considered until there is
at least one node in them.

You can work around this by using an cluster autoscaling tool that is aware of
You can work around this by using a cluster autoscaling tool that is aware of
Pod topology spread constraints and is also aware of the overall set of topology
domains.
