Add instructions about taints and tolerations to the install instructions #870

Merged
23 changes: 23 additions & 0 deletions site/content/en/docs/Installation/_index.md
@@ -8,6 +8,10 @@ description: >

In this quickstart, we will create a Kubernetes cluster, and populate it with the resource types that power Agones.

_When running in production, Agones should be scheduled on a dedicated pool of nodes, distinct from where Game Servers
are scheduled for better isolation and resiliency. See the sections below for instructions on how to do this in your
preferred environment._

## Usage Requirements

- Kubernetes cluster version 1.11
@@ -109,6 +113,25 @@ Flag explanations:
* num-nodes: The number of nodes to be created in each of the cluster's zones. Default: 3
* machine-type: The type of machine to use for nodes. Default: n1-standard-2. Depending on the needs of your game, you may wish to [use bigger machines](https://cloud.google.com/compute/docs/machine-types).

By default, Agones prefers to be scheduled on nodes labeled with
`stable.agones.dev/agones-system=true` and tolerates the node taint `stable.agones.dev/agones-system=true:NoExecute`.
If no dedicated nodes are available, Agones will run on regular nodes, but that is not recommended for production use.
To create a dedicated node pool for Agones on GKE, run the following command before installing Agones:

```bash
gcloud container node-pools create agones-system \
--cluster=[CLUSTER_NAME] \
--node-taints stable.agones.dev/agones-system=true:NoExecute \
--node-labels stable.agones.dev/agones-system=true \
--num-nodes=1
```

Flag explanations:

* cluster: The name of the cluster in which the node pool is created.
* node-taints: The Kubernetes taints to automatically apply to nodes in this node pool.
* node-labels: The Kubernetes labels to automatically apply to nodes in this node pool.
* num-nodes: The Agones system controllers only require a single node of capacity to run. For faster recovery time in the event of a node failure, you can increase the size to 2.
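If you are not using GKE node pools, the same taint and label can be applied to existing nodes by hand — a sketch using `kubectl`, where `[NODE_NAME]` is a placeholder for the node you want to dedicate:

```shell
# Taint the node so that only pods tolerating the Agones system taint
# remain scheduled on it (NoExecute evicts non-tolerating pods).
kubectl taint nodes [NODE_NAME] stable.agones.dev/agones-system=true:NoExecute

# Label the node so the Agones controllers' scheduling preference can match it.
kubectl label nodes [NODE_NAME] stable.agones.dev/agones-system=true
```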

Finally, let's tell `gcloud` that we are speaking with this cluster, and get auth credentials for `kubectl` to use.

```bash
12 changes: 2 additions & 10 deletions site/content/en/docs/Installation/helm.md
@@ -29,16 +29,8 @@ _We recommend installing Agones in its own namespace (like `agones-system`, as shown above);
you can use the helm `--namespace` parameter to specify a different namespace._
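As an illustration of the `--namespace` parameter, an install might look like this — a sketch using Helm 2 syntax, assuming the Agones chart repository has already been added under the name `agones` and using a hypothetical release name `my-release`:

```shell
# Install the Agones chart into a dedicated namespace (Helm 2 syntax).
helm install --name my-release --namespace agones-system agones/agones
```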

When running in production, Agones should be scheduled on a dedicated pool of nodes, distinct from where Game Servers are scheduled for better isolation and resiliency. By default Agones prefers to be scheduled on nodes labeled with `stable.agones.dev/agones-system=true` and tolerates node taint `stable.agones.dev/agones-system=true:NoExecute`. If no dedicated nodes are available, Agones will
run on regular nodes, but that's not recommended for production use.

As an example, to set up a dedicated node pool for Agones on GKE, run the following command before installing Agones. Alternatively, you can taint and label nodes manually.

```bash
gcloud container node-pools create agones-system --cluster=... --zone=... \
--node-taints stable.agones.dev/agones-system=true:NoExecute \
--node-labels stable.agones.dev/agones-system=true \
--num-nodes=1
```
run on regular nodes, but that's not recommended for production use. For instructions on setting up a dedicated node
pool for Agones, see the [Agones installation instructions]({{< relref "../_index.md" >}}) for your preferred environment.
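After installation, you can verify where the Agones pods were scheduled — an optional sanity check, assuming the `agones-system` namespace suggested above:

```shell
# List the Agones pods along with the node each one is running on.
kubectl get pods --namespace agones-system -o wide
```

Each pod should show a node from the dedicated pool in the `NODE` column.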

The command deploys Agones on the Kubernetes cluster with the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.
