Clarify conceptual docs around exists operator
jonathan-innis committed Apr 21, 2024
1 parent bc653f0 commit 3135a47
Showing 6 changed files with 366 additions and 42 deletions.
68 changes: 61 additions & 7 deletions website/content/en/docs/concepts/scheduling.md
@@ -626,19 +626,73 @@ If using Gt/Lt operators, make sure to use values under the actual label values
The `Exists` operator can be used on a NodePool to provide workload segregation across nodes.

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
spec:
  template:
    spec:
      requirements:
      - key: company.com/team
        operator: Exists
...
```

With this requirement on the NodePool, workloads can specify the same key (e.g. `company.com/team`) with custom values (e.g. `team-a`, `team-b`, etc.) as a required `nodeAffinity` or `nodeSelector`. Karpenter will then apply the key/value pair to nodes it launches dynamically based on the pod's node requirements.

If each set of pods that can schedule with this NodePool specifies this key in its `nodeAffinity` or `nodeSelector`, you can isolate pods onto different nodes based on their values. This provides a way to more dynamically isolate workloads without requiring a unique NodePool for each workload subset.

For example, providing the following `nodeSelectors` would isolate the pods for each of these deployments on different nodes.

#### Team A Deployment

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: team-a-deployment
spec:
  replicas: 5
  template:
    spec:
      nodeSelector:
        company.com/team: team-a
```

#### Team A Node

```yaml
apiVersion: v1
kind: Node
metadata:
  labels:
    company.com/team: team-a
```

#### Team B Deployment

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: team-b-deployment
spec:
  replicas: 5
  template:
    spec:
      nodeSelector:
        company.com/team: team-b
```

#### Team B Node

```yaml
apiVersion: v1
kind: Node
metadata:
  labels:
    company.com/team: team-b
```
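
The same constraint can also be written as a required node affinity rather than a `nodeSelector`. As a minimal sketch (assuming the Team A pod template above), the pod spec could instead include:

```yaml
# Illustrative alternative to the nodeSelector shown above: a required node
# affinity that constrains pods to nodes labeled company.com/team=team-a.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: company.com/team
          operator: In
          values: ["team-a"]
```

Either form gives Karpenter the same node requirement, so it applies the corresponding `company.com/team` label to the nodes it launches for these pods.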

{{% alert title="Note" color="primary" %}}
If a workload matches the NodePool but doesn't specify a value for this label, Karpenter will generate a random value for the label on the node it launches.
{{% /alert %}}
68 changes: 61 additions & 7 deletions website/content/en/preview/concepts/scheduling.md
@@ -626,19 +626,73 @@ If using Gt/Lt operators, make sure to use values under the actual label values
The `Exists` operator can be used on a NodePool to provide workload segregation across nodes.

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
spec:
  template:
    spec:
      requirements:
      - key: company.com/team
        operator: Exists
...
```

With this requirement on the NodePool, workloads can specify the same key (e.g. `company.com/team`) with custom values (e.g. `team-a`, `team-b`, etc.) as a required `nodeAffinity` or `nodeSelector`. Karpenter will then apply the key/value pair to nodes it launches dynamically based on the pod's node requirements.

If each set of pods that can schedule with this NodePool specifies this key in its `nodeAffinity` or `nodeSelector`, you can isolate pods onto different nodes based on their values. This provides a way to more dynamically isolate workloads without requiring a unique NodePool for each workload subset.

For example, providing the following `nodeSelectors` would isolate the pods for each of these deployments on different nodes.

#### Team A Deployment

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: team-a-deployment
spec:
  replicas: 5
  template:
    spec:
      nodeSelector:
        company.com/team: team-a
```

#### Team A Node

```yaml
apiVersion: v1
kind: Node
metadata:
  labels:
    company.com/team: team-a
```

#### Team B Deployment

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: team-b-deployment
spec:
  replicas: 5
  template:
    spec:
      nodeSelector:
        company.com/team: team-b
```

#### Team B Node

```yaml
apiVersion: v1
kind: Node
metadata:
  labels:
    company.com/team: team-b
```
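
The same constraint can also be written as a required node affinity rather than a `nodeSelector`. As a minimal sketch (assuming the Team A pod template above), the pod spec could instead include:

```yaml
# Illustrative alternative to the nodeSelector shown above: a required node
# affinity that constrains pods to nodes labeled company.com/team=team-a.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: company.com/team
          operator: In
          values: ["team-a"]
```

Either form gives Karpenter the same node requirement, so it applies the corresponding `company.com/team` label to the nodes it launches for these pods.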

{{% alert title="Note" color="primary" %}}
If a workload matches the NodePool but doesn't specify a value for this label, Karpenter will generate a random value for the label on the node it launches.
{{% /alert %}}
68 changes: 61 additions & 7 deletions website/content/en/v0.32/concepts/scheduling.md
@@ -625,19 +625,73 @@ If using Gt/Lt operators, make sure to use values under the actual label values
The `Exists` operator can be used on a NodePool to provide workload segregation across nodes.

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
spec:
  template:
    spec:
      requirements:
      - key: company.com/team
        operator: Exists
...
```

With this requirement on the NodePool, workloads can specify the same key (e.g. `company.com/team`) with custom values (e.g. `team-a`, `team-b`, etc.) as a required `nodeAffinity` or `nodeSelector`. Karpenter will then apply the key/value pair to nodes it launches dynamically based on the pod's node requirements.

If each set of pods that can schedule with this NodePool specifies this key in its `nodeAffinity` or `nodeSelector`, you can isolate pods onto different nodes based on their values. This provides a way to more dynamically isolate workloads without requiring a unique NodePool for each workload subset.

For example, providing the following `nodeSelectors` would isolate the pods for each of these deployments on different nodes.

#### Team A Deployment

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: team-a-deployment
spec:
  replicas: 5
  template:
    spec:
      nodeSelector:
        company.com/team: team-a
```

#### Team A Node

```yaml
apiVersion: v1
kind: Node
metadata:
  labels:
    company.com/team: team-a
```

#### Team B Deployment

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: team-b-deployment
spec:
  replicas: 5
  template:
    spec:
      nodeSelector:
        company.com/team: team-b
```

#### Team B Node

```yaml
apiVersion: v1
kind: Node
metadata:
  labels:
    company.com/team: team-b
```
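
The same constraint can also be written as a required node affinity rather than a `nodeSelector`. As a minimal sketch (assuming the Team A pod template above), the pod spec could instead include:

```yaml
# Illustrative alternative to the nodeSelector shown above: a required node
# affinity that constrains pods to nodes labeled company.com/team=team-a.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: company.com/team
          operator: In
          values: ["team-a"]
```

Either form gives Karpenter the same node requirement, so it applies the corresponding `company.com/team` label to the nodes it launches for these pods.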

{{% alert title="Note" color="primary" %}}
If a workload matches the NodePool but doesn't specify a value for this label, Karpenter will generate a random value for the label on the node it launches.
{{% /alert %}}
68 changes: 61 additions & 7 deletions website/content/en/v0.34/concepts/scheduling.md
@@ -625,19 +625,73 @@ If using Gt/Lt operators, make sure to use values under the actual label values
The `Exists` operator can be used on a NodePool to provide workload segregation across nodes.

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
spec:
  template:
    spec:
      requirements:
      - key: company.com/team
        operator: Exists
...
```

With this requirement on the NodePool, workloads can specify the same key (e.g. `company.com/team`) with custom values (e.g. `team-a`, `team-b`, etc.) as a required `nodeAffinity` or `nodeSelector`. Karpenter will then apply the key/value pair to nodes it launches dynamically based on the pod's node requirements.

If each set of pods that can schedule with this NodePool specifies this key in its `nodeAffinity` or `nodeSelector`, you can isolate pods onto different nodes based on their values. This provides a way to more dynamically isolate workloads without requiring a unique NodePool for each workload subset.

For example, providing the following `nodeSelectors` would isolate the pods for each of these deployments on different nodes.

#### Team A Deployment

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: team-a-deployment
spec:
  replicas: 5
  template:
    spec:
      nodeSelector:
        company.com/team: team-a
```

#### Team A Node

```yaml
apiVersion: v1
kind: Node
metadata:
  labels:
    company.com/team: team-a
```

#### Team B Deployment

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: team-b-deployment
spec:
  replicas: 5
  template:
    spec:
      nodeSelector:
        company.com/team: team-b
```

#### Team B Node

```yaml
apiVersion: v1
kind: Node
metadata:
  labels:
    company.com/team: team-b
```
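
The same constraint can also be written as a required node affinity rather than a `nodeSelector`. As a minimal sketch (assuming the Team A pod template above), the pod spec could instead include:

```yaml
# Illustrative alternative to the nodeSelector shown above: a required node
# affinity that constrains pods to nodes labeled company.com/team=team-a.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: company.com/team
          operator: In
          values: ["team-a"]
```

Either form gives Karpenter the same node requirement, so it applies the corresponding `company.com/team` label to the nodes it launches for these pods.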

{{% alert title="Note" color="primary" %}}
If a workload matches the NodePool but doesn't specify a value for this label, Karpenter will generate a random value for the label on the node it launches.
{{% /alert %}}