restructuring documentation
Murali Reddy committed Jun 22, 2017
1 parent 0f6066e commit b001331
Showing 3 changed files with 213 additions and 97 deletions.
60 changes: 59 additions & 1 deletion CONTRIBUTING.md
@@ -1 +1,59 @@
TODO

# Contributing to Kube-router

## Summary

This document covers how to contribute to the kube-router project. Kube-router uses GitHub PRs to manage contributions, which could be anything from documentation, bug fixes, or manifests.

Please read the [user guide](./Documentation/README.md#user-guide) and [developer guide](./Documentation/README.md#developer-guide) to learn about the functionality and internals of kube-router.

## Filing issues

If you have a question about kube-router or a problem using it, please start by contacting us on the [community forum](https://gitter.im/kube-router/Lobby) for quick help. If that doesn't answer your questions, or if you think you have found a bug, please [file an issue](https://github.com/cloudnativelabs/kube-router/issues).

## Submit PR

### Fork the code

Navigate to [https://github.com/cloudnativelabs/kube-router](https://github.com/cloudnativelabs/kube-router) and fork the repository.

Follow these steps to set up a local repository for working on kube-router:

``` bash
$ git clone https://github.com/YOUR_ACCOUNT/kube-router.git
$ cd kube-router
$ git remote add upstream https://github.com/cloudnativelabs/kube-router
$ git checkout master
$ git fetch upstream
$ git rebase upstream/master
```

### Making changes and raising a PR

Create a new branch and make your changes on that branch:

``` bash
$ git checkout -b feature_x
(make your changes)
$ git status
$ git add .
$ git commit -a -m "descriptive commit message for your changes"
```
Next, get updates from upstream and rebase your branch on top of them:

``` bash
$ git checkout master
$ git fetch upstream
$ git rebase upstream/master
$ git checkout feature_x
$ git rebase master
```

Now your `feature_x` branch is up to date with all the code in `upstream/master`, so push it to your fork:

``` bash
$ git push origin master
$ git push origin feature_x
```

Now that the `feature_x` branch has been pushed to your GitHub repository, you can initiate the pull request.
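The whole fork-and-rebase workflow can be rehearsed end to end with purely local repositories before touching GitHub. In this sketch the directories `upstream` and `fork` stand in for the real remotes, and the user name and email exist only so the commits can be created:

``` bash
#!/bin/sh
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Stand-in for the cloudnativelabs/kube-router repository
git init -q upstream
cd upstream
git symbolic-ref HEAD refs/heads/master   # fixed branch name regardless of git defaults
git config user.email "you@example.com"
git config user.name "You"
echo "v1" > README
git add README
git commit -q -m "initial commit"
cd ..

# Your fork, with upstream wired up as a second remote
git clone -q upstream fork
cd fork
git config user.email "you@example.com"
git config user.name "You"
git remote add upstream ../upstream

# Work happens on a feature branch
git checkout -q -b feature_x
echo "feature" > feature.txt
git add feature.txt
git commit -q -m "feature work"

# Meanwhile upstream moves ahead...
echo "v2" > ../upstream/README
git -C ../upstream commit -q -am "upstream change"

# ...so fetch and rebase (the condensed form of the steps above)
git fetch -q upstream
git rebase -q upstream/master
git log --format=%s
```

After the rebase, `feature_x` sits on top of the latest `upstream/master`, which is exactly the state you want before pushing and opening the pull request.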
147 changes: 143 additions & 4 deletions Documentation/README.md
@@ -1,12 +1,151 @@
# Kube-router Documentation

## Getting Started
## General overview

Kube-router consists of three core components:

* [Network Services Controller](#network-services-controller)
* [Network Policy Controller](#network-policy-controller)
* [Network Routes Controller](#network-routes-controller)

#### Network Services Controller

Network services controller is responsible for reading services and endpoints information from the Kubernetes API server and configuring IPVS on each cluster node accordingly.

Please read the blog post for design details, and for the pros and cons compared to the iptables-based kube-proxy:
https://cloudnativelabs.github.io/post/2017-05-10-kube-network-service-proxy/

Demo of kube-router's IPVS-based Kubernetes network service proxy:

[![asciicast](https://asciinema.org/a/120312.png)](https://asciinema.org/a/120312)

Features:
- round-robin load balancing
- client-IP based session persistence
- source IP is preserved if the service controller is used in conjunction with the network routes controller (kube-router with the --run-router flag)
- option to explicitly masquerade (SNAT) with the --masquerade-all flag
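
As a quick sanity check on a node (a sketch; it assumes the `ipvsadm` tool is installed, and the output will vary by cluster), you can list the virtual servers kube-router has programmed and the scheduler each one uses:

```
# each service should appear as a virtual server using the rr (round robin) scheduler
ipvsadm -L -n
```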

#### Network Policy Controller

Network policy controller is responsible for reading namespace, network policy, and pod information from the Kubernetes API server and configuring iptables accordingly to provide ingress filtering for the pods.

Please read the blog post for design details of the network policy controller:
https://cloudnativelabs.github.io/post/2017-05-1-kube-network-policies/

Demo of kube-router's iptables-based implementation of network policies:

[![asciicast](https://asciinema.org/a/120735.png)](https://asciinema.org/a/120735)

#### Network Routes Controller

Network routes controller is responsible for reading the pod CIDR allocated to the node by the controller manager and advertising routes to the rest of the nodes in the cluster (BGP peers). The use of BGP is transparent to the user for basic pod-to-pod networking.

[![asciicast](https://asciinema.org/a/120885.png)](https://asciinema.org/a/120885)

However, BGP can be leveraged for other use cases, such as advertising the cluster IP or routable pod IPs. Only in such use cases is an understanding of BGP and its configuration required. Please see the demo below of how kube-router advertises cluster IPs and pod CIDRs to an external BGP router:

[![asciicast](https://asciinema.org/a/121635.png)](https://asciinema.org/a/121635)
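
For example, using only the flags documented in the command line options below (the addresses and ASNs here are illustrative, not a recommended configuration), a node could be told to advertise cluster IPs to an external router like this:

```
kube-router --master=http://192.168.1.99:8080/ --run-router=true \
    --advertise-cluster-ip=true --cluster-asn=64512 \
    --peer-router=192.168.1.1 --peer-asn=64513
```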

## User Guide

### Try Kube-router with cluster installers

The best way to get started is to deploy Kubernetes with kube-router through a cluster installer.

#### kops
Please see the [steps](https://github.com/cloudnativelabs/kube-router/blob/master/Documentation/kops.md) to deploy Kubernetes cluster with Kube-router using [Kops](https://github.com/kubernetes/kops)

#### bootkube
Please see the [steps](https://github.com/cloudnativelabs/kube-router/tree/master/contrib/bootkube) to deploy Kubernetes cluster with Kube-router using [bootkube](https://github.com/kubernetes-incubator/bootkube)

### deployment

Depending on what functionality of kube-router you want to use, multiple deployment options are possible. You can use the flags `--run-firewall`, `--run-router`, `--run-service-proxy` to selectively enable only required functionality of kube-router.

You can also choose to run kube-router as an agent running on each cluster node. Alternatively, you can run kube-router as a pod on each node through a daemonset.

### command line options

```
--run-firewall If false, kube-router won't setup iptables to provide ingress firewall for pods. true by default.
--run-router If true, each node advertises routes to the rest of the nodes and learns the routes for the pods. false by default
--run-service-proxy If false, kube-router won't setup IPVS for services proxy. true by default.
--cleanup-config If true cleanup iptables rules, ipvs, ipset configuration and exit.
--masquerade-all SNAT all traffic to cluster IP/node port. False by default
--cluster-cidr CIDR range of pods in the cluster. If specified, external traffic from the pods will be masqueraded
--config-sync-period duration How often configuration from the apiserver is refreshed. Must be greater than 0. (default 1m0s)
--iptables-sync-period duration The maximum interval of how often iptables rules are refreshed (e.g. '5s', '1m'). Must be greater than 0. (default 1m0s)
--ipvs-sync-period duration The maximum interval of how often ipvs config is refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0. (default 1m0s)
--kubeconfig string Path to kubeconfig file with authorization information (the master location is set by the master flag).
--master string The address of the Kubernetes API server (overrides any value in kubeconfig)
--routes-sync-period duration The maximum interval of how often routes are advertised and learned (e.g. '5s', '1m', '2h22m'). Must be greater than 0. (default 1m0s)
--advertise-cluster-ip If true then cluster IP will be added into the RIB and will be advertised to the peers. False by default.
--cluster-asn ASN number under which cluster nodes will run iBGP
--peer-asn ASN number of the BGP peer to which cluster nodes will advertise cluster ip and node's pod cidr
--peer-router The ip address of the external router to which all nodes will peer and advertise the cluster ip and pod cidr's
--nodes-full-mesh When enabled, each node in the cluster will set up a BGP peering with the rest of the nodes. True by default
--hostname-override If non-empty, this string will be used as identification of node name instead of the actual hostname.
```

### requirements

- Kube-router needs access to the Kubernetes API server to get information on pods, services, endpoints, network policies, etc. The bare minimum it requires is the details of where to reach the Kubernetes API server. This can be passed as `kube-router --master=http://192.168.1.99:8080/` or `kube-router --kubeconfig=<path to kubeconfig file>`. If neither the `--master` nor the `--kubeconfig` option is specified, kube-router will look for a kubeconfig at `/var/lib/kube-router/kubeconfig`.

- If you run kube-router as an agent on the node, the ipset package must be installed on each of the nodes (when run as a daemonset, the container image comes prepackaged with ipset).

- If you choose to use kube-router for pod-to-pod network connectivity, the Kubernetes controller manager needs to be configured to allocate pod CIDRs by passing the `--allocate-node-cidrs=true` flag and providing a cluster CIDR (e.g. `--cluster-cidr=10.1.0.0/16`).

- If you choose to run kube-router as a daemonset, both kube-apiserver and kubelet must be run with the `--allow-privileged=true` option.

- If you choose to use kube-router for pod-to-pod network connectivity, the Kubernetes cluster must be configured to use CNI network plugins. On each node, a CNI conf file is expected to be present at `/etc/cni/net.d/10-kuberouter.conf`. The `bridge` CNI plugin and `host-local` for IPAM should be used. A sample conf file can be downloaded with `wget -O /etc/cni/net.d/10-kuberouter.conf https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/cni/10-kuberouter.conf`.
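
For reference, a conf file of the shape described above might look like the following sketch (the field values here are illustrative; use the downloadable sample above as the authoritative version):

```json
{
  "name": "mynet",
  "type": "bridge",
  "bridge": "kube-bridge",
  "isDefaultGateway": true,
  "ipam": {
    "type": "host-local"
  }
}
```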

### running as daemonset

This is the quickest way to deploy kube-router (**don't forget to ensure the requirements above**). Just run

```
kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kube-router-all-service-daemonset.yaml
```

The above will run kube-router as a pod on each node automatically. You can change the arguments in the daemonset definition as required to suit your needs. Samples with different arguments selecting the set of services kube-router should run can be found at https://github.com/cloudnativelabs/kube-router/tree/master/daemonset.
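
Once applied, you can check that a kube-router pod is running on every node (a sketch; the pod name prefix and namespace come from the daemonset manifest and may differ):

```
kubectl get pods --all-namespaces -o wide | grep kube-router
```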

### running as agent

You can choose to run kube-router as an agent running on each node. For example, if you just want kube-router to provide ingress firewalling for the pods, you can start kube-router as
```
kube-router --master=http://192.168.1.99:8080/ --run-firewall=true --run-service-proxy=false --run-router=false
```

### cleanup configuration

You can clean up all of the configuration done by kube-router on the node (to ipvs, iptables, and ip routes) by running
```
kube-router --cleanup-config
```

### trying kube-router as alternative to kube-proxy

If you have kube-proxy in use and want to try kube-router just for the service proxy, you can run
```
kube-proxy --cleanup-iptables
```
followed by
```
kube-router --master=http://192.168.1.99:8080/ --run-service-proxy=true --run-firewall=false --run-router=false
```
and if you want to move back to kube-proxy, clean up the configuration done by kube-router by running
```
kube-router --cleanup-config
```
and run kube-proxy with the configuration you have.
- [General Setup](/README.md#getting-started)

## Deploying through cluster installers
- [Bootkube Deployment](bootkube.md)
- [Kops deployment](kops.md)
## Developer Guide

**Go version 1.7 or above is required to build kube-router**

All the dependencies are vendored already, so just run `make build` or `go build -o kube-router kube-router.go` to build.

Alternatively, you can download a prebuilt binary from https://github.com/cloudnativelabs/kube-router/releases

## BGP configuration

103 changes: 11 additions & 92 deletions README.md
@@ -19,104 +19,25 @@ We have Kube-proxy which provides service proxy and load balancer. We have sever

<a href="https://asciinema.org/a/118056" target="_blank"><img src="https://asciinema.org/a/118056.png" /></a>

## Getting Started

### building

**Go version 1.7 or above is required to build kube-router**

All the dependencies are vendored already, so just run `make build` or `go build -o kube-router kube-router.go` to build.

Alternatively, you can download a prebuilt binary from https://github.com/cloudnativelabs/kube-router/releases

### command line options

```
--run-firewall If false, kube-router won't setup iptables to provide ingress firewall for pods. true by default.
--run-router If true, each node advertises routes to the rest of the nodes and learns the routes for the pods. false by default
--run-service-proxy If false, kube-router won't setup IPVS for services proxy. true by default.
--cleanup-config If true cleanup iptables rules, ipvs, ipset configuration and exit.
--masquerade-all SNAT all traffic to cluster IP/node port. False by default
--cluster-cidr CIDR range of pods in the cluster. If specified, external traffic from the pods will be masqueraded
--config-sync-period duration How often configuration from the apiserver is refreshed. Must be greater than 0. (default 1m0s)
--iptables-sync-period duration The maximum interval of how often iptables rules are refreshed (e.g. '5s', '1m'). Must be greater than 0. (default 1m0s)
--ipvs-sync-period duration The maximum interval of how often ipvs config is refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0. (default 1m0s)
--kubeconfig string Path to kubeconfig file with authorization information (the master location is set by the master flag).
--master string The address of the Kubernetes API server (overrides any value in kubeconfig)
--routes-sync-period duration The maximum interval of how often routes are advertised and learned (e.g. '5s', '1m', '2h22m'). Must be greater than 0. (default 1m0s)
--advertise-cluster-ip If true then cluster IP will be added into the RIB and will be advertised to the peers. False by default.
--cluster-asn ASN number under which cluster nodes will run iBGP
--peer-asn ASN number of the BGP peer to which cluster nodes will advertise cluster ip and node's pod cidr
--peer-router The ip address of the external router to which all nodes will peer and advertise the cluster ip and pod cidr's
--nodes-full-mesh When enabled, each node in the cluster will set up a BGP peering with the rest of the nodes. True by default
--hostname-override If non-empty, this string will be used as identification of node name instead of the actual hostname.
```

### Try Kube-router with cluster installers

#### kops
Please see the [steps](https://github.com/cloudnativelabs/kube-router/blob/master/Documentation/kops.md) to deploy Kubernetes cluster with Kube-router using [Kops](https://github.com/kubernetes/kops)

#### bootkube
Please see the [steps](https://github.com/cloudnativelabs/kube-router/tree/master/contrib/bootkube) to deploy Kubernetes cluster with Kube-router using [bootkube](https://github.com/kubernetes-incubator/bootkube)

### deployment

Depending on what functionality of kube-router you want to use, multiple deployment options are possible. You can use the flags `--run-firewall`, `--run-router`, `--run-service-proxy` to selectively enable only required functionality of kube-router.
## Project status

You can also choose to run kube-router as an agent running on each cluster node. Alternatively, you can run kube-router as a pod on each node through a daemonset.
The project is in alpha stage. We are working towards the beta release [milestone](https://github.com/cloudnativelabs/kube-router/milestone/2) and are actively incorporating user feedback.

### requirements
## Support & Feedback

- Kube-router needs access to the Kubernetes API server to get information on pods, services, endpoints, network policies, etc. The bare minimum it requires is the details of where to reach the Kubernetes API server. This can be passed as `kube-router --master=http://192.168.1.99:8080/` or `kube-router --kubeconfig=<path to kubeconfig file>`. If neither the `--master` nor the `--kubeconfig` option is specified, kube-router will look for a kubeconfig at `/var/lib/kube-router/kubeconfig`.
If you experience any problems, please reach out to us on the gitter [community forum](https://gitter.im/kube-router/Lobby) for quick help. Feel free to leave feedback or raise questions at any time by opening an issue [here](https://github.com/cloudnativelabs/kube-router/issues).

- If you run kube-router as an agent on the node, the ipset package must be installed on each of the nodes (when run as a daemonset, the container image comes prepackaged with ipset).

- If you choose to use kube-router for pod-to-pod network connectivity, the Kubernetes controller manager needs to be configured to allocate pod CIDRs by passing the `--allocate-node-cidrs=true` flag and providing a cluster CIDR (e.g. `--cluster-cidr=10.1.0.0/16`).

- If you choose to run kube-router as a daemonset, both kube-apiserver and kubelet must be run with the `--allow-privileged=true` option.

- If you choose to use kube-router for pod-to-pod network connectivity, the Kubernetes cluster must be configured to use CNI network plugins. On each node, a CNI conf file is expected to be present at `/etc/cni/net.d/10-kuberouter.conf`. The `bridge` CNI plugin and `host-local` for IPAM should be used. A sample conf file can be downloaded with `wget -O /etc/cni/net.d/10-kuberouter.conf https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/cni/10-kuberouter.conf`.

### running as daemonset

This is the quickest way to deploy kube-router (**don't forget to ensure the requirements above**). Just run

```
kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kube-router-all-service-daemonset.yaml
```

The above will run kube-router as a pod on each node automatically. You can change the arguments in the daemonset definition as required to suit your needs. Samples with different arguments selecting the set of services kube-router should run can be found at https://github.com/cloudnativelabs/kube-router/tree/master/daemonset.

### running as agent

You can choose to run kube-router as an agent running on each node. For example, if you just want kube-router to provide ingress firewalling for the pods, you can start kube-router as
```
kube-router --master=http://192.168.1.99:8080/ --run-firewall=true --run-service-proxy=false --run-router=false
```
## Getting Started

### cleanup configuration
Use the guides below to get started.

You can clean up all of the configuration done by kube-router on the node (to ipvs, iptables, and ip routes) by running
```
kube-router --cleanup-config
```
- [User Guide](./Documentation/README.md#user-guide)
- [Developer Guide](./Documentation/README.md#developer-guide)

### trying kube-router as alternative to kube-proxy
## Contribution

If you have kube-proxy in use and want to try kube-router just for the service proxy, you can run
```
kube-proxy --cleanup-iptables
```
followed by
```
kube-router --master=http://192.168.1.99:8080/ --run-service-proxy=true --run-firewall=false --run-router=false
```
and if you want to move back to kube-proxy, clean up the configuration done by kube-router by running
```
kube-router --cleanup-config
```
and run kube-proxy with the configuration you have.
We encourage all kinds of contributions, be they documentation, code, fixing typos, tests — anything at all. Please
read the [contribution guide](./CONTRIBUTING.md).

## Theory of Operation

@@ -211,5 +132,3 @@ Kube-router builds upon the following libraries:
- Ipset: https://github.com/janeczku/go-ipset
- IPVS: https://github.com/mqliang/libipvs

## Feedback
Kube-router is in active development; the most up-to-date version is HEAD. There are many more things to explore around IPVS and monitoring. If you experience any problems, feel free to leave feedback or raise questions at any time by opening an issue [here](https://github.com/cloudnativelabs/kube-router/issues).
