Update nodelocaldns.md with config steps. #18716

Merged: 1 commit, Mar 10, 2020
49 changes: 38 additions & 11 deletions content/en/docs/tasks/administer-cluster/nodelocaldns.md
@@ -2,6 +2,7 @@
reviewers:
- bowei
- zihongz
- sftim
title: Using NodeLocal DNSCache in Kubernetes clusters
content_template: templates/task
---
@@ -47,18 +48,44 @@
This is the path followed by DNS queries after NodeLocal DNSCache is enabled:
{{< figure src="/images/docs/nodelocaldns.jpg" alt="NodeLocal DNSCache flow" title="Nodelocal DNSCache flow" caption="This image shows how NodeLocal DNSCache handles DNS queries." >}}

## Configuration

This feature can be enabled using the command:

`KUBE_ENABLE_NODELOCAL_DNS=true kubetest --up`

This works for e2e clusters created on GCE. On all other environments, the following steps will set up NodeLocal DNSCache:

* A yaml manifest similar to [this one](https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml) can be applied using the `kubectl create -f` command.
* There is no need to modify the `--cluster-dns` flag, since NodeLocal DNSCache listens on both the kube-dns service IP as well as a link-local IP (169.254.20.10 by default).
{{< note >}} The local listen IP address for NodeLocal DNSCache can be any IP in the 169.254.0.0/16 link-local space, or any other IP address that is guaranteed not to collide with an existing IP. This document uses 169.254.20.10 as an example.
{{< /note >}}

This feature can be enabled using the following steps:

Contributor:

@prameshj, the sed commands are okay. I was wondering why the manifest required further configuration.
It would be nice to indent the variables (lines 59-65) under the second list item.

Contributor Author:

I have indented lines 59-65, could you take a look again?
The manifest needs further configuration because some variables differ depending on the environment it is deployed in. There are developer scripts that substitute the variables in some environments; this doc provides the background to identify what to substitute.

Contributor Author:

Thanks again for the review, @kbhawkey.

* Prepare a manifest similar to the sample [`nodelocaldns.yaml`](https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml) and save it as `nodelocaldns.yaml`.
* Substitute the variables in the manifest with the right values:

* kubedns=`kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}`

* domain=`<cluster-domain>`

* localdns=`<node-local-address>`

`<cluster-domain>` is "cluster.local" by default. `<node-local-address>` is the local listen IP address chosen for NodeLocal DNSCache.
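For reference, here is a minimal shell sketch of setting these variables before running the `sed` commands below; the domain and local address shown are assumptions (the defaults used in this doc), so adjust them for your cluster:

``` bash
# Cluster IP of the in-cluster DNS service
kubedns=$(kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP})

# Cluster DNS domain; "cluster.local" is the default (assumption: no custom domain)
domain=cluster.local

# Local listen address chosen for NodeLocal DNSCache (example value from this doc)
localdns=169.254.20.10
```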
Contributor:

@prameshj, looking good. Double check that the list items and sub-items line up.

Contributor:

Looks better, thanks!

Contributor Author:

Thanks Karen! Would you be able to approve this PR?


* If kube-proxy is running in IPTABLES mode:

``` bash
sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__/$kubedns/g" nodelocaldns.yaml
```

`__PILLAR__CLUSTER__DNS__` and `__PILLAR__UPSTREAM__SERVERS__` will be populated by the node-local-dns pods.
In this mode, node-local-dns pods listen on both the kube-dns service IP as well as `<node-local-address>`, so pods can look up DNS records using either IP address.
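As a quick sanity check (a sketch only; the busybox image tag and addresses are assumptions based on the defaults in this doc), you can confirm that lookups through the local address work from a test pod:

``` bash
# Query the node-local cache directly via the link-local listen address
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local 169.254.20.10
```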

* If kube-proxy is running in IPVS mode:

``` bash
sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__//g; s/__PILLAR__CLUSTER__DNS__/$kubedns/g" nodelocaldns.yaml
```
In this mode, node-local-dns pods listen only on `<node-local-address>`. The node-local-dns interface cannot bind the kube-dns cluster IP, since the interface used for IPVS load balancing already uses this address.
`__PILLAR__UPSTREAM__SERVERS__` will be populated by the node-local-dns pods.

* Run `kubectl create -f nodelocaldns.yaml`
* If using kube-proxy in IPVS mode, the kubelet's `--cluster-dns` flag needs to be modified to use the `<node-local-address>` that NodeLocal DNSCache is listening on (a sketch of this change follows the list).
Otherwise, there is no need to modify the value of the `--cluster-dns` flag, since NodeLocal DNSCache listens on both the kube-dns service IP as well as `<node-local-address>`.
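A hedged sketch of that kubelet change for IPVS mode; the exact mechanism for passing kubelet flags varies by environment, and `/etc/default/kubelet` with `KUBELET_EXTRA_ARGS` is an assumption matching kubeadm-style setups:

``` bash
# Point kubelet at the NodeLocal DNSCache listen address instead of the kube-dns service IP
echo 'KUBELET_EXTRA_ARGS="--cluster-dns=169.254.20.10"' | sudo tee /etc/default/kubelet
sudo systemctl restart kubelet
```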

Once enabled, node-local-dns Pods will run in the kube-system namespace on each of the cluster nodes. This Pod runs [CoreDNS](https://github.com/coredns/coredns) in cache mode, so all CoreDNS metrics exposed by the different plugins will be available on a per-node basis.
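To verify that the cache is running and to look at its metrics, something like the following can be used; the `k8s-app=node-local-dns` label and the Prometheus port 9253 come from the sample manifest linked above, so confirm them against the manifest you actually applied:

``` bash
# Check that the DaemonSet pods are running on every node
kubectl get pods -n kube-system -l k8s-app=node-local-dns -o wide

# From a node: node-local-dns uses hostNetwork, so CoreDNS metrics are reachable locally
curl -s http://localhost:9253/metrics | grep coredns_cache_hits_total
```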

The feature can be disabled by removing the DaemonSet, using the `kubectl delete -f` command. On e2e clusters created on GCE, the DaemonSet can be removed by deleting the node-local-dns yaml from `/etc/kubernetes/addons/0-dns/nodelocaldns.yaml`.

You can disable this feature by removing the DaemonSet, using `kubectl delete -f <manifest>`. You should also revert any changes you made to the kubelet configuration.
{{% /capture %}}