
Add an example of how to attach labels to nodes and use nodeSelectors so... #4088

Merged
merged 1 commit into from Feb 4, 2015
120 changes: 120 additions & 0 deletions examples/node-selection/README.md
@@ -0,0 +1,120 @@
## Node selection example
Contributor
We should also mention that pinning a pod to a specific machine using this approach is discouraged. This is a scheduling problem, not something the user needs to take care of. The plan is to add more resource types for nodes, and more resource requirements for pods.


This example shows how to assign a pod to a specific node or to one of a set of nodes using node labels and the nodeSelector field in a pod specification. Generally this is unnecessary, as the scheduler will take care of things for you, but you may want to do so in certain circumstances, such as ensuring that your pod ends up on a machine with an SSD attached to it.

### Step Zero: Prerequisites

This example assumes that you have a basic understanding of Kubernetes pods and that you have [turned up a Kubernetes cluster](https://github.com/GoogleCloudPlatform/kubernetes#documentation).

### Step One: Attach label to the node

Run `kubectl get nodes` to get the names of the nodes. Pick out the one that you want to add a label to. Note that label keys must be in the form of DNS labels (as described in the [identifiers doc](/docs/design/identifiers.md)), meaning that they are not allowed to contain any upper-case letters. Then run `kubectl get node <node-name> -o yaml > node.yaml`. The contents of the file should look something like this:

<pre>
apiVersion: v1beta1
creationTimestamp: 2015-02-03T01:16:46Z
hostIP: 104.154.60.112
id: <node-name>
kind: Node
resourceVersion: 12
resources:
  capacity:
    cpu: "1"
    memory: 4.0265318e+09
selfLink: /api/v1beta1/minions/<node-name>
status:
  conditions:
  - kind: Ready
    lastTransitionTime: null
    status: Full
uid: 526a4156-ab42-11e4-9817-42010af0258d
</pre>

Add the labels that you want to the file like this:

<pre>
apiVersion: v1beta1
creationTimestamp: 2015-02-03T01:16:46Z
hostIP: 104.154.60.112
id: <node-name>
kind: Node
<b>labels:
  disktype: ssd</b>
resourceVersion: 12
resources:
  capacity:
    cpu: "1"
    memory: 4.0265318e+09
selfLink: /api/v1beta1/minions/<node-name>
status:
  conditions:
  - kind: Ready
    lastTransitionTime: null
    status: Full
uid: 526a4156-ab42-11e4-9817-42010af0258d
</pre>
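If you'd rather script the edit than open the file by hand, a `sed` splice can insert the labels (a sketch with a minimal stand-in file; GNU `sed` is assumed, as is the fact that `labels:` belongs right after the `kind: Node` line, as in the file above):

```shell
# Stand-in for the file saved by `kubectl get node <node-name> -o yaml`;
# the name my-node is hypothetical.
cat > node.yaml <<'EOF'
apiVersion: v1beta1
id: my-node
kind: Node
resourceVersion: 12
EOF

# Splice a labels: block in right after the kind: Node line (GNU sed).
sed -i 's/^kind: Node$/kind: Node\nlabels:\n  disktype: ssd/' node.yaml
cat node.yaml
```

The edited file can then be passed to `kubectl update -f node.yaml` as described below.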

Then update the node by running `kubectl update -f node.yaml`. Make sure that the resourceVersion you use in your update call is the same as the resourceVersion returned by the get call. If something about the node changes between your get and your update, the update will fail because the resourceVersion will have changed.

Note that as of 2015-02-03 there are a couple of open issues that prevent this from working without modification. Due to [issue #3005](https://github.com/GoogleCloudPlatform/kubernetes/issues/3005), you have to remove all status-related fields from the file: everything under the `status` field as well as the `hostIP` field (removing `hostIP` isn't required in v1beta3). Due to [issue #4041](https://github.com/GoogleCloudPlatform/kubernetes/issues/4041), you may have to rewrite the resource capacity numbers as integers. Both issues are temporary, and fixes are being worked on. In the meantime, you would actually call `kubectl update -f node.yaml` with a file that looks like this:

<pre>
apiVersion: v1beta1
creationTimestamp: 2015-02-03T01:16:46Z
id: <node-name>
kind: Node
<b>labels:
  disktype: ssd</b>
resourceVersion: 12
resources:
  capacity:
    cpu: "1"
    memory: 4026531800
selfLink: /api/v1beta1/minions/<node-name>
uid: 526a4156-ab42-11e4-9817-42010af0258d
</pre>
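The status-stripping workaround can also be scripted. Here is a sketch using a stand-in file (the node name `my-node` and the `awk` filter are illustrative, not part of the example): it drops the `hostIP` line and the whole indented `status:` block while keeping the top-level keys that follow it.

```shell
# Stand-in for the file saved by `kubectl get node <node-name> -o yaml`.
cat > node.yaml <<'EOF'
apiVersion: v1beta1
hostIP: 104.154.60.112
id: my-node
kind: Node
resourceVersion: 12
status:
  conditions:
  - kind: Ready
    status: Full
uid: 526a4156-ab42-11e4-9817-42010af0258d
EOF

# Drop hostIP and the status: block (everything indented under it),
# keeping later top-level keys such as uid.
awk '/^hostIP:/ {next}
     /^status:/ {skip=1; next}
     skip && /^[ \t]/ {next}
     {skip=0; print}' node.yaml > node-clean.yaml
cat node-clean.yaml
```

The cleaned file is what you would hand to `kubectl update -f`.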


### Step Two: Add a nodeSelector field to your pod configuration

Take whatever pod config file you want to run and add a nodeSelector section to it. For example, if this is your pod config:

<pre>
apiVersion: v1beta1
desiredState:
  manifest:
    containers:
      - image: nginx
        name: nginx
    id: nginx
    version: v1beta1
id: nginx
kind: Pod
labels:
  env: test
</pre>

Then add a nodeSelector like so:

<pre>
apiVersion: v1beta1
desiredState:
  manifest:
    containers:
      - image: nginx
        name: nginx
    id: nginx
    version: v1beta1
id: nginx
kind: Pod
labels:
  env: test
<b>nodeSelector:
  disktype: ssd</b>
</pre>

When you then run `kubectl create -f pod.yaml`, the pod will get scheduled on the node that you attached the label to! You can verify that it worked by running `kubectl get pods` and looking at the "host" that the pod was assigned to.
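Before creating the pod, it can be worth a quick sanity check that the selector actually made it into the file and mirrors a label you attached in Step One. A minimal sketch (the cluster-facing `kubectl` calls are shown as comments since they need a running cluster):

```shell
# The pod config from above, with the nodeSelector added.
cat > pod.yaml <<'EOF'
apiVersion: v1beta1
desiredState:
  manifest:
    containers:
      - image: nginx
        name: nginx
    id: nginx
    version: v1beta1
id: nginx
kind: Pod
labels:
  env: test
nodeSelector:
  disktype: ssd
EOF

# Confirm the selector is present and names the label you expect.
grep -q '^nodeSelector:' pod.yaml && grep -q '  disktype: ssd' pod.yaml

# Then, against a live cluster:
# kubectl create -f pod.yaml
# kubectl get pods   # the "host" column shows the node the pod landed on
```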

### Conclusion

While this example only covered one node, you can attach labels to as many nodes as you want. Then when you schedule a pod with a nodeSelector, it can be scheduled on any of the nodes that satisfy that nodeSelector. Be careful that the nodeSelector matches at least one node, however; if it doesn't, the pod won't be scheduled at all.
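Labeling several nodes is just the Step One flow in a loop. A sketch (the node names are hypothetical, the cluster-facing `kubectl` calls are commented out, and a `printf` stand-in takes the place of the real `get` output for illustration):

```shell
# Apply the same disktype=ssd label to a set of nodes.
for node in node-1 node-2; do
  # kubectl get node "$node" -o yaml > "$node.yaml"
  printf 'id: %s\nkind: Node\n' "$node" > "$node.yaml"   # stand-in for the get

  # Same label splice as in Step One (GNU sed).
  sed -i 's/^kind: Node$/kind: Node\nlabels:\n  disktype: ssd/' "$node.yaml"

  # kubectl update -f "$node.yaml"
done
```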