Update Topology Manager documentation to include the scope feature
Signed-off-by: Krzysztof Wiatrzyk <k.wiatrzyk@samsung.com>
k-wiatrzyk committed Nov 13, 2020
1 parent ed07d3f commit ee5daf2
Showing 1 changed file with 42 additions and 0 deletions.
42 changes: 42 additions & 0 deletions content/en/docs/tasks/administer-cluster/topology-manager.md
@@ -51,6 +51,44 @@ The hint is then stored in the Topology Manager for use by the *Hint Providers*

Support for the Topology Manager requires the `TopologyManager` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled. It is enabled by default starting with Kubernetes 1.18.
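As a minimal, illustrative sketch (only needed on clusters older than 1.18, where the gate is not on by default), the gate can be switched on through the kubelet's `--feature-gates` flag:

```shell
# Illustrative only: explicitly enable the TopologyManager feature gate.
# From Kubernetes 1.18 onward the gate is already enabled by default.
kubelet --feature-gates=TopologyManager=true
```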

### Topology Manager Scopes

The Topology Manager can align resources in two distinct scopes:

* `container` (default)
* `pod`

Either option can be selected at kubelet startup with the `--topology-manager-scope` flag.
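For example, a minimal sketch of selecting the pod scope at kubelet startup; any other flags or configuration the kubelet needs are assumed to come from your existing setup:

```shell
# Illustrative only: run the kubelet with the Topology Manager pod scope.
# Omitting the flag (or passing "container") keeps the default container scope.
kubelet --topology-manager-scope=pod
```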

### container scope

This scope was available before any other scope was implemented, and it remains the default setting in the kubelet.

Within this scope, the Topology Manager performs a number of sequential resource alignments; that is, a separate alignment is computed for each container in a pod. In other words, for this particular scope there is no notion of grouping the containers to a specific set of NUMA nodes. In effect, the Topology Manager performs an arbitrary alignment of individual containers to NUMA nodes.

The notion of grouping containers was deliberately introduced and implemented in the next scope, namely the pod scope.

### pod scope

This scope allows grouping all containers in a pod onto a common set of NUMA nodes. That is, the Topology Manager treats a pod as a whole and attempts to allocate the entire pod (all containers) to either a single NUMA node or a common set of NUMA nodes. The following examples illustrate the alignments the Topology Manager produces on different occasions:

* all containers can be and are allocated to a single NUMA node;
* all containers can be and are allocated to a shared set of NUMA nodes.

The total amount of a particular resource demanded for the entire pod is calculated according to the [effective requests/limits](/docs/concepts/workloads/pods/init-containers/#resources) formula. For each resource, that total is the maximum of:

* the sum of all app container requests,
* the maximum of all init container requests.

A worked example is sketched below.
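A minimal sketch of that calculation, assuming a hypothetical pod with one init container and two app containers (all names, images, and request values below are invented purely for illustration):

```yaml
# Hypothetical pod used only to illustrate the effective request calculation.
apiVersion: v1
kind: Pod
metadata:
  name: effective-request-example
spec:
  initContainers:
  - name: init
    image: registry.example/init:1.0   # placeholder image
    resources:
      requests:
        cpu: "2"        # maximum of init container requests = 2 CPUs
  containers:
  - name: app-1
    image: registry.example/app:1.0    # placeholder image
    resources:
      requests:
        cpu: "1"
  - name: app-2
    image: registry.example/app:1.0    # placeholder image
    resources:
      requests:
        cpu: "2"
# Sum of app container CPU requests = 1 + 2 = 3 CPUs; maximum of init
# container requests = 2 CPUs; effective CPU request = max(3, 2) = 3 CPUs.
```

With the pod scope, it is this effective value (3 CPUs in the sketch), rather than each container's individual request, that the Topology Manager tries to satisfy from a common set of NUMA nodes.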

This scope, in tandem with the `single-numa-node` Topology Manager policy, is specifically valuable for high-performance applications. By combining both options, you are able to place all containers in a pod on a single NUMA node; hence, the inter-NUMA communication overhead can be eliminated for that pod.
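A minimal sketch of that combination of kubelet flags (the policy flag shown here, `--topology-manager-policy`, is the kubelet's standard way of selecting a Topology Manager policy):

```shell
# Illustrative only: align every container of a pod onto a single NUMA node.
kubelet --topology-manager-scope=pod \
        --topology-manager-policy=single-numa-node
```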

In the case of the `single-numa-node` policy, a pod is accepted only if a suitable set of NUMA nodes is present among the possible allocations. Reconsider the examples above:

* a set containing only a single NUMA node leads to the pod being admitted,
* whereas a set containing more than one NUMA node results in pod rejection (because instead of one NUMA node, two or more NUMA nodes are required to satisfy the allocation).

To recap, the Topology Manager first computes a set of NUMA nodes and then tests it against the Topology Manager policy, which leads either to rejection or admission of the pod.

### Topology Manager Policies

The Topology Manager currently:
@@ -73,6 +111,10 @@ There are four supported policies:
* `restricted`
* `single-numa-node`

{{< note >}}
If the Topology Manager is configured with the **pod** scope, the container that is considered by the policy reflects the requirements of the entire pod, and thus each container from the pod results in **the same** topology alignment decision.
{{< /note >}}

### none policy {#policy-none}

This is the default policy and does not perform any topology alignment.
