Chorus kube-router is a plugin that establishes routes between Kubernetes cluster nodes and non-Kubernetes nodes. This is useful for Ingress, where service IPs (pod IPs) can be configured on the Ingress device for load balancing front-end applications; kube-router can be used to create a route between Kubernetes and the Ingress device. A service of type NodePort or LoadBalancer can reach the pods irrespective of network and subnet, but such a solution reduces performance because of translation and load balancing at multiple levels. Customers therefore typically prefer the Ingress mechanism [reaching the pods directly from the external load balancer] to expose a route to the service. Chorus kube-router creates the network connectivity for such deployments. kube-router is a microservice provided by Chorus that helps to create a network between the Kubernetes cluster nodes and devices that are not Kubernetes aware [F5, A10]. kube-router tracks networking changes on Kubernetes and produces a ConfigMap output that vendors can use to establish routes from their devices to the Kubernetes cluster.
- Overview
- Supported platforms
- Architecture
- How it works
- Get started
- Support Matrix
- Issues
- Code of conduct
- License
In Kubernetes environments, when you expose services for external access through an Ingress device, you need to appropriately configure the network between the Kubernetes nodes and the Ingress device so that traffic can be routed into the cluster. Configuring the network is challenging because the pods use private IP addresses assigned by the CNI framework. Without proper network configuration, the Ingress device cannot reach these private IP addresses, and manually configuring the network to ensure such reachability is cumbersome in Kubernetes environments.
Chorus provides a microservice plugin called kube-router that you can use to create the network between the cluster and the Ingress devices.
kube-router is supported on the following platforms:
- Kubernetes v1.10 and later
- Red Hat OpenShift version 3.11 and later
The following diagram provides the high-level architecture of the kube-router:
kube-router creates a separate network for external devices and generates a ConfigMap with the network details. It does the following:
- Manages a separate subnet for nodes that are not Kubernetes aware
- Creates VXLAN overlays for the external, non-Kubernetes-aware nodes
- Generates a ConfigMap that can be used for creating overlays on the other endpoints
kube-router can create an overlay between Kubernetes cluster nodes and non-Kubernetes-aware nodes. Two types of overlay are supported: one based on VXLAN and one based on IPIP.
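One practical difference between the two encapsulations is per-packet header overhead. The sketch below is an illustration using standard header sizes, not values taken from kube-router itself: VXLAN adds roughly 50 bytes (inner Ethernet + outer IPv4 + UDP + VXLAN headers), while IPIP adds only a 20-byte outer IPv4 header.

```go
package main

import "fmt"

// Standard header sizes for the two encapsulation types kube-router supports.
// These are protocol constants, not values read from kube-router.
const (
	vxlanOverhead = 14 + 20 + 8 + 8 // inner Ethernet + outer IPv4 + UDP + VXLAN
	ipipOverhead  = 20              // outer IPv4 header only
)

// effectiveMTU returns the largest inner payload that fits on a link
// of the given MTU after encapsulation.
func effectiveMTU(linkMTU, overhead int) int {
	return linkMTU - overhead
}

func main() {
	link := 1500 // typical Ethernet MTU
	fmt.Printf("VXLAN effective MTU: %d\n", effectiveMTU(link, vxlanOverhead))
	fmt.Printf("IPIP effective MTU: %d\n", effectiveMTU(link, ipipOverhead))
}
```

This is why overlay CNIs commonly run pods with an MTU of 1450 on a 1500-byte network when VXLAN is in use.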
kube-router creates a route entry on each node in the Kubernetes cluster. When a node leaves the cluster, kube-router removes the route entry for that node. This information is kept in a ConfigMap, which can be used for extending the route to other nodes. The ConfigMap, containing the endpoint details, can be found in the kube-system namespace.
```
MacBook-Pro:kube-chorus-router$ kubectl get cm -n kube-system kube-chorus-router -o json
{
    "apiVersion": "v1",
    "data": {
        "CNI-10.106.170.62": "10.244.1.1/24",
        "CNI-10.106.170.63": "10.244.6.1/24",
        "EndpointIP": "192.168.1.254",
        "Host-cb716e61-cab6-437e-a84a-d26a908260bc": "10.106.170.62",
        "Host-d666ca12-5b2e-4716-a243-ece13e780122": "10.106.170.63",
        "Interface-10.106.170.62": "192.168.254.1",
        "Interface-10.106.170.63": "192.168.254.2",
        "Mac-10.106.170.62": "76:13:e1:c7:4b:f6",
        "Mac-10.106.170.63": "b2:30:00:b1:88:49",
        "Node-10.106.170.62": "10.106.170.62",
        "Node-10.106.170.63": "10.106.170.63"
    },
    "kind": "ConfigMap",
    "metadata": {
        "creationTimestamp": "2019-09-25T12:02:26Z",
        "name": "kube-chorus-router",
        "namespace": "kube-system",
        "resourceVersion": "5439136",
        "selfLink": "/api/v1/namespaces/kube-system/configmaps/kube-chorus-router",
        "uid": "20f8407b-3871-4595-9bc4-d4eb65fb80b8"
    }
}
MacBook-Pro:kube-chorus-router$
```
There are two hosts in this cluster, identified by the Host keys (`Host-cb716e61-cab6-437e-a84a-d26a908260bc` and `Host-d666ca12-5b2e-4716-a243-ece13e780122`) with values 10.106.170.62 and 10.106.170.63 respectively; these are the node IPs in the cluster. kube-router creates an interface on each node with the addresses 192.168.254.1 and 192.168.254.2, which map to the CNI subnets 10.244.1.1/24 and 10.244.6.1/24 respectively.
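Because the ConfigMap stores everything as flat `Prefix-<node IP>` keys, it is straightforward to decode programmatically. The sketch below is an illustration (not part of kube-router): it parses the `data` section shown above and pairs each node IP with its overlay interface address and CNI subnet.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// Sample data taken from the ConfigMap output shown above (trimmed to the
// keys this sketch decodes).
const data = `{
  "CNI-10.106.170.62": "10.244.1.1/24",
  "CNI-10.106.170.63": "10.244.6.1/24",
  "Interface-10.106.170.62": "192.168.254.1",
  "Interface-10.106.170.63": "192.168.254.2"
}`

// routeEntry pairs a node IP with its overlay interface address and CNI subnet.
type routeEntry struct {
	Node, Interface, CNISubnet string
}

// parseEntries decodes the flat key/value map into one routeEntry per node.
func parseEntries(raw string) (map[string]*routeEntry, error) {
	var kv map[string]string
	if err := json.Unmarshal([]byte(raw), &kv); err != nil {
		return nil, err
	}
	entries := map[string]*routeEntry{}
	get := func(node string) *routeEntry {
		if entries[node] == nil {
			entries[node] = &routeEntry{Node: node}
		}
		return entries[node]
	}
	for k, v := range kv {
		switch {
		case strings.HasPrefix(k, "CNI-"):
			get(strings.TrimPrefix(k, "CNI-")).CNISubnet = v
		case strings.HasPrefix(k, "Interface-"):
			get(strings.TrimPrefix(k, "Interface-")).Interface = v
		}
	}
	return entries, nil
}

func main() {
	entries, err := parseEntries(data)
	if err != nil {
		panic(err)
	}
	for _, node := range []string{"10.106.170.62", "10.106.170.63"} {
		e := entries[node]
		fmt.Printf("node %s: interface %s -> CNI subnet %s\n", e.Node, e.Interface, e.CNISubnet)
	}
}
```

An external device's automation could use the same decoding to program one route per node: CNI subnet via the node's overlay interface address.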
kube-router creates an IPIP tunnel endpoint on all the Kubernetes nodes for the eth0 IP.
Chorus kube-router can be used in the following two ways:
- In-cluster configuration: kube-router runs as a microservice.
- Out-of-cluster configuration: kube-router runs as a standalone process.
kube-router can also be used as a package by importing it. Two APIs are available: Create() and Delete(). Inputs must be given in the form of environment variables.
Before you deploy the kube-router package, ensure that you have installed the Go binary required to run kube-router.
Perform the following:
- Download or clone the `kube-router` package.
- Navigate to the build directory.
- Start `kube-router` using `make run`.
Refer to the deployment page for running kube-router as a microservice inside the Kubernetes cluster.
The following table lists the Container network interfaces (CNIs) supported by chorus-kube-router:
| Container network interface (CNI) | kube-router versions |
| --- | --- |
| Flannel | chorus-kube-router:2.0.0 and later |
| Calico | chorus-kube-router:2.0.0 and later |
| Canal | chorus-kube-router:2.0.0 and later |
Use the GitHub issue template to report any bug. Describe the bug in detail, capture the logs, and share them.
This project adheres to the Kubernetes Community Code of Conduct. By participating in this project you agree to abide by its terms.