28 changes: 14 additions & 14 deletions Documentation/generic.md
@@ -2,32 +2,32 @@

This guide is for running kube-router as the [CNI](https://github.com/containernetworking) network provider for on-premises and/or bare-metal clusters outside of a cloud provider's environment. It assumes the initial cluster is bootstrapped and a networking provider needs configuration.

All pod networking CIDRs are allocated by kube-controller-manager. Kube-router provides service/pod networking, a network policy firewall, and a high performance IPVS/LVS based service proxy. The network policy firewall and service proxy are both optional but recommended.

All pod networking [CIDRs](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) are allocated by kube-controller-manager. Kube-router provides service/pod networking, a network policy firewall, and a high performance [IPVS/LVS](http://www.linuxvirtualserver.org/software/ipvs.html) based service proxy. The network policy firewall and service proxy are both optional but recommended.

### Configuring the Kubelet

Ensure each kubelet is configured with the following options:
Ensure each [Kubelet](https://kubernetes.io/docs/reference/generated/kubelet/) is configured with the following options:

--network-plugin=cni
--cni-conf-dir=/etc/cni/net.d
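
As one sketch of where these flags can live: on a systemd-managed node with a kubeadm-style unit that expands `$KUBELET_EXTRA_ARGS`, a drop-in file would work (the drop-in path is illustrative; adapt to your init system):

    # Hypothetical drop-in; assumes the kubelet unit expands $KUBELET_EXTRA_ARGS.
    cat <<'EOF' > /etc/systemd/system/kubelet.service.d/10-cni.conf
    [Service]
    Environment="KUBELET_EXTRA_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d"
    EOF
    systemctl daemon-reload && systemctl restart kubelet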

If a previous CNI provider (e.g. weave-net, calico, or flannel) was used, remove old configurations from `/etc/cni/net.d` on each kubelet.
If running Kubelet containerised, make sure `/etc/cni/net.d` is mapped to the host's `/etc/cni/net.d`.
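
For a containerised Kubelet the bind mount might look like the following (a minimal sketch; the image name and kubelet flags are placeholders, only the `-v` mapping is the point):

    # Hypothetical invocation: expose the host's CNI config dir to the kubelet.
    docker run -d --net=host --privileged \
      -v /etc/cni/net.d:/etc/cni/net.d \
      example.com/kubelet:v1.10 \
      kubelet --network-plugin=cni --cni-conf-dir=/etc/cni/net.d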

**Note: Switching CNI providers on a running cluster requires re-creating all pods to pick up new pod IPs**
If a previous CNI provider (e.g. weave-net, calico, or flannel) was used, remove old configurations from `/etc/cni/net.d` on each kubelet.

_**Note: Switching CNI providers on a running cluster requires re-creating all pods to pick up new pod IPs**_
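
One blunt way to trigger that re-creation is to delete all pods and let their controllers recreate them (a sketch; standalone pods without a controller will not come back):

    # Controller-managed pods return with IPs from the new provider.
    kubectl delete pods --all --all-namespaces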

### Configuring kube-controller-manager

The following options are mandatory for kube-controller-manager:
The following options are mandatory for [kube-controller-manager](https://kubernetes.io/docs/reference/generated/kube-controller-manager/):

--cluster-cidr=${POD_NETWORK} # for example 10.32.0.0/12
--service-cluster-ip-range=${SERVICE_IP_RANGE} # for example 10.50.0.0/22
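
To double-check the values on a running control plane, one option is to inspect the live process (a sketch; assumes shell access to the node running kube-controller-manager as a host process or static pod):

    # Print just these two flags from the running process.
    ps -ef | grep '[k]ube-controller-manager' | tr ' ' '\n' | \
      grep -E 'cluster-cidr|service-cluster-ip-range'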


## Running kube-router with everything

This runs kube-router with pod/service networking, the network policy firewall, and service proxy to replace kube-proxy. The example command uses `10.32.0.0/12` as the pod CIDR address range and `https://cluster01.int.domain.com:6443` as the apiserver address. Please change these to suit your cluster.
This runs kube-router with pod/service networking, the network policy firewall, and service proxy to replace kube-proxy. The example command uses `10.32.0.0/12` as the pod CIDR address range and `https://cluster01.int.domain.com:6443` as the [apiserver](https://kubernetes.io/docs/reference/generated/kube-apiserver/) address. Please change these to suit your cluster.

CLUSTERCIDR=10.32.0.0/12 \
APISERVER=https://cluster01.int.domain.com:6443 \
@@ -37,7 +37,7 @@ This runs kube-router with pod/service networking, the network policy firewall,

### Removing a previous kube-proxy

If kube-proxy was never deployed to the cluster, this can likely be skipped.
If [kube-proxy](https://kubernetes.io/docs/reference/generated/kube-proxy/) was never deployed to the cluster, this can likely be skipped.

Remove any previously running kube-proxy and all iptables rules it created. Start by deleting the kube-proxy daemonset:
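
For a typical deployment this is (name and namespace below are the common defaults; adjust to match your cluster):

    kubectl -n kube-system delete daemonset kube-proxy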

@@ -50,10 +50,10 @@ Any iptables rules kube-proxy left around will also need to be cleaned up. This
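
kube-proxy can remove its own rules when run once with its cleanup flag. A sketch, assuming the previously deployed version is available as an image (match the tag to what the cluster actually ran; older releases spell the flag `--cleanup-iptables`):

    # Runs in the host network namespace so it can flush the host's rules.
    docker run --privileged --net=host \
      k8s.gcr.io/kube-proxy-amd64:v1.10.2 kube-proxy --cleanup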

## Running kube-router without the service proxy

This runs kube-router with pod/service networking and the network policy firewall. The service proxy is disabled. Don't forget to update the cluster CIDR and apiserver addresses to match your cluster.
This runs kube-router with pod/service networking and the network policy firewall. The service proxy is disabled.

CLUSTERCIDR=10.32.0.0/12 \
APISERVER=https://cluster01.int.domain.com:6443 \
sh -c 'curl https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/generic-kuberouter.yaml -o - | \
sed -e "s;%APISERVER%;$APISERVER;g" -e "s;%CLUSTERCIDR%;$CLUSTERCIDR;g"' | \
kubectl apply -f -
kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/generic-kuberouter.yaml

In this mode kube-router relies on a separate proxy implementation, for example [kube-proxy](https://kubernetes.io/docs/reference/generated/kube-proxy/), to provide service networking.

When the service proxy is disabled, kube-router will use [in-cluster configuration](https://github.com/kubernetes/client-go/tree/master/examples/in-cluster-client-configuration) to access the apiserver through its cluster IP. Service networking must therefore be set up before deploying kube-router.
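
Once the daemonset is applied, it is worth confirming a kube-router pod is running on every node (a quick check; pod names come from the example manifests):

    kubectl -n kube-system get pods -o wide | grep kube-router
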
@@ -25,7 +25,7 @@ data:
- name: cluster
cluster:
certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
server: https://%APISERVER%
server: %APISERVER%
users:
- name: kube-router
user:
2 changes: 1 addition & 1 deletion daemonset/generic-kuberouter-all-features.yaml
@@ -25,7 +25,7 @@ data:
- name: cluster
cluster:
certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
server: https://%APISERVER%
server: %APISERVER%
users:
- name: kube-router
user:
35 changes: 1 addition & 34 deletions daemonset/generic-kuberouter.yaml
@@ -17,25 +17,6 @@ data:
"type":"host-local"
}
}
kubeconfig: |
apiVersion: v1
kind: Config
clusterCIDR: %CLUSTERCIDR%
clusters:
- name: cluster
cluster:
certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
server: %APISERVER%
users:
- name: kube-router
user:
tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
contexts:
- context:
cluster: cluster
user: kube-router
name: kube-router-context
current-context: kube-router-context

---
apiVersion: extensions/v1beta1
@@ -64,7 +45,6 @@ spec:
- "--run-router=true"
- "--run-firewall=true"
- "--run-service-proxy=false"
- "--kubeconfig=/var/lib/kube-router/kubeconfig"
env:
- name: NODE_NAME
valueFrom:
@@ -82,9 +62,6 @@ spec:
readOnly: true
- name: cni-conf-dir
mountPath: /etc/cni/net.d
- name: kubeconfig
mountPath: /var/lib/kube-router
readOnly: true
initContainers:
- name: install-cni
image: busybox
@@ -97,19 +74,12 @@ spec:
TMP=/etc/cni/net.d/.tmp-kuberouter-cfg;
cp /etc/kube-router/cni-conf.json ${TMP};
mv ${TMP} /etc/cni/net.d/10-kuberouter.conf;
fi;
if [ ! -f /var/lib/kube-router/kubeconfig ]; then
TMP=/var/lib/kube-router/.tmp-kubeconfig;
cp /etc/kube-router/kubeconfig ${TMP};
mv ${TMP} /var/lib/kube-router/kubeconfig;
fi
volumeMounts:
- mountPath: /etc/cni/net.d
name: cni-conf-dir
- mountPath: /etc/kube-router
name: kube-router-cfg
- name: kubeconfig
mountPath: /var/lib/kube-router
name: kube-router-cfg
hostNetwork: true
tolerations:
- key: CriticalAddonsOnly
@@ -127,9 +97,6 @@ spec:
- name: kube-router-cfg
configMap:
name: kube-router-cfg
- name: kubeconfig
hostPath:
path: /var/lib/kube-router

---
apiVersion: v1