[DOCFIX] Update k8s doc with new Helm additions
### What changes were proposed in this pull request?
This PR proposes some additions to the [Alluxio k8s
guide](https://docs.alluxio.io/os/user/stable/en/deploy/Running-Alluxio-On-Kubernetes.html)
to reflect some of the recent additions to the Alluxio Helm chart:

- `serviceAccount`: #13297
- `tolerations`: #13214
- `hostAliases`: #13226
- `strategy`: #13423

### Why are the changes needed?
Doc validation

### Does this PR introduce _any_ user-facing change?
Yes, the documentation.

pr-link: #13579
change-id: cid-cd456e708547bb121cdab5df1de7599860e6e089
ZhuTopher committed Jun 8, 2021
1 parent 85724a0 commit 5c95498
168 changes: 166 additions & 2 deletions docs/en/deploy/Running-Alluxio-On-Kubernetes.md
@@ -1364,9 +1364,15 @@ and `volumeMounts` of each container if existing.
{% endnavtab %}
{% endnavtabs %}

### Kubernetes Configuration Options

The following options are provided in our Helm chart as additional
parameters for experienced Kubernetes users.

#### ServiceAccounts

[By default](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server)
Kubernetes will assign the namespace's `default` ServiceAccount
to new pods in a namespace. You can instead configure the Alluxio pods
to use any existing ServiceAccount in your cluster as follows:
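For instance, you could set the top-level Helm value `serviceAccount`
(a minimal sketch; `alluxio-sa` is a hypothetical ServiceAccount that
must already exist in the namespace):

```properties
serviceAccount: alluxio-sa
```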
@@ -1409,6 +1415,164 @@ spec:
{% endnavtab %}
{% endnavtabs %}

#### Node Selectors & Tolerations

Kubernetes provides many options to control how pods are scheduled
onto nodes in the cluster. The most direct of these is a
[node selector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector).

However, Kubernetes will not schedule pods onto tainted nodes unless
the pods tolerate the corresponding taints. To allow certain pods to be
scheduled on such nodes, you can specify tolerations for those taints. See
[the Kubernetes documentation on taints and tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/)
for more details.
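For reference, a taint like the `env=prod` one tolerated in the examples
in this section could be applied to a node like so (a sketch; `node1` is
a hypothetical node name):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: node1
spec:
  taints:
  # Pods without a matching toleration will not be scheduled on this node
  - key: env
    value: prod
    effect: NoSchedule
```

Equivalently, `kubectl taint nodes node1 env=prod:NoSchedule`.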

{% navtabs selectorsTolerations %}
{% navtab helm %}

You may specify a node selector, in JSON-style YAML, as the top-level
Helm value `nodeSelector`, which applies to all pods in the chart.
Similarly, you may specify a list of tolerations as the top-level Helm
value `tolerations`, which also applies to all pods in the chart.
```properties
nodeSelector: {"app": "alluxio"}

tolerations: [ {"key": "env", "operator": "Equal", "value": "prod", "effect": "NoSchedule"} ]
```

You can **override** the top-level `nodeSelector` by specifying a value
for the specific component's `nodeSelector`.
```properties
master:
  nodeSelector: {"app": "alluxio-master"}

worker:
  nodeSelector: {"app": "alluxio-worker"}
```

You can **append** to the top-level `tolerations` by specifying a value
for the specific component's `tolerations`.
```properties
logserver:
  tolerations: [ {"key": "app", "operator": "Equal", "value": "logging", "effect": "NoSchedule"} ]
```

{% endnavtab %}
{% navtab kubectl %}

You may add `nodeSelector` and `tolerations` fields to any of the Alluxio Pod template
specs. For example:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: alluxio-master
spec:
  template:
    spec:
      nodeSelector:
        app: alluxio
      tolerations:
      - effect: NoSchedule
        key: env
        operator: Equal
        value: prod
```

{% endnavtab %}
{% endnavtabs %}

#### Host Aliases

To add to or override hostname resolution in the pods,
Kubernetes exposes each container's `/etc/hosts` file via
[host aliases](https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/).
This can be particularly useful for resolving hostnames of
services not managed by Kubernetes, such as HDFS.

{% navtabs hostAliases %}
{% navtab helm %}

You may specify a top-level Helm value `hostAliases` which will
apply to the Master and Worker pods in the chart.
```properties
hostAliases:
- ip: "127.0.0.1"
  hostnames:
  - "foo.local"
  - "bar.local"
- ip: "10.1.2.3"
  hostnames:
  - "foo.remote"
  - "bar.remote"
```
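With the values above, Kubernetes appends entries along these lines to
each container's `/etc/hosts` (illustrative):

```
# Entries added by HostAliases.
127.0.0.1    foo.local    bar.local
10.1.2.3     foo.remote   bar.remote
```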

{% endnavtab %}
{% navtab kubectl %}

You may add the `hostAliases` field to any of the Alluxio Pod template
specs. For example:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: alluxio-master
spec:
  template:
    spec:
      hostAliases:
      - ip: "127.0.0.1"
        hostnames:
        - "foo.local"
        - "bar.local"
      - ip: "10.1.2.3"
        hostnames:
        - "foo.remote"
        - "bar.remote"
```

{% endnavtab %}
{% endnavtabs %}

#### Deployment Strategy

[By default](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy)
Kubernetes uses the `RollingUpdate` deployment strategy to progressively
replace Pods when changes are detected.
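This default is equivalent to explicitly setting the following (using
the Kubernetes defaults for `maxUnavailable` and `maxSurge`):

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 25%
    maxSurge: 25%
```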

{% navtabs deployStrategy %}
{% navtab helm %}

The Helm chart currently only supports `strategy` for the logging server deployment:
```properties
logserver:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 1
```

{% endnavtab %}
{% navtab kubectl %}

You may add a `strategy` field to the `spec` of any of the Alluxio
Deployments (note that StatefulSets use `updateStrategy` instead).
For example:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: alluxio-logserver
spec:
  strategy:
    type: Recreate
```

{% endnavtab %}
{% endnavtabs %}

## Troubleshooting

{% accordion worker_host %}
