Cluster architecture

This page describes some technical aspects of a Kubernetes cluster deployed using kOVHernetes.

Contents

  1. Overview
  2. Topology
  3. Networking
  4. Security

Overview

[Diagram: kOVHernetes cluster architecture]

Topology

Every cluster is composed of one master and a number of nodes (workers), as defined on the command line.

  • The master instance runs the "master" Kubernetes components (API server, scheduler, controller manager) and an instance of the etcd key-value store, together with the "node" Kubernetes components (kubelet, proxy). As deployed by kOVHernetes, the master instance is therefore both the cluster master and a worker node.
  • The node* instances run only the "node" Kubernetes components (kubelet, proxy).

Networking

Three different networks are involved in a typical Kubernetes cluster.

Host network

| CIDR           | Description                                                                |
|----------------|----------------------------------------------------------------------------|
| 192.168.0.0/27 | Private network (OVH vRack) to which all cluster instances are connected. |

A reserved and predictable IP address is assigned to each cluster instance during the bootstrap process. Each instance acquires its network configuration from the DHCP server (backed by OpenStack Neutron) at boot time.
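As a rough illustration of what "reserved and predictable" means on a /27 network, the Python sketch below enumerates the usable host addresses and hands them out to the instances in order. It is not kOVHernetes code, and the exact offsets kOVHernetes reserves are an assumption here.

```python
# Illustrative only: enumerate the /27 host network and hand out
# predictable addresses (master first, then node01, node02, ...).
# The actual offsets reserved by kOVHernetes may differ.
from ipaddress import ip_network

host_net = ip_network("192.168.0.0/27")
usable = list(host_net.hosts())          # 30 usable addresses in a /27

instances = ["master"] + [f"node{i:02d}" for i in range(1, 4)]
# Skip the first address, commonly taken by the gateway/DHCP agent (assumption).
assignments = dict(zip(instances, usable[1:]))

for name, addr in assignments.items():
    print(f"{name}: {addr}")
```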

Pod network

| CIDR          | Description                                                                  |
|---------------|------------------------------------------------------------------------------|
| 172.17.0.0/16 | Overlay network from which each Kubernetes Pod gets assigned an IP address. |

This network is managed by Flannel, configured with the VXLAN backend. Each node is allocated a /24 subnet from this network, assigned to its flannel.1 VXLAN interface.
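The subnet arithmetic behind those per-node leases can be sketched as follows (plain Python for illustration; Flannel performs the real allocation and records it in its datastore):

```python
# Illustrative subnet math only; Flannel performs the real allocation.
from ipaddress import ip_network

pod_net = ip_network("172.17.0.0/16")
node_subnets = pod_net.subnets(new_prefix=24)   # generator of the 256 possible /24 leases

nodes = ["master", "node01", "node02"]
leases = {node: next(node_subnets) for node in nodes}

for node, subnet in leases.items():
    # Each node's flannel.1 interface carries an address from its /24 lease.
    print(f"{node}: {subnet} (e.g. first host {next(subnet.hosts())})")
```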

Service network

| CIDR        | Description                                                               |
|-------------|---------------------------------------------------------------------------|
| 10.0.0.0/16 | Virtual network from which Kubernetes Services receive their cluster IP. |

The Kubernetes proxy implements the service abstraction using iptables on each instance it runs on.
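The toy model below captures the idea in Python: a Service fronts a virtual cluster IP taken from 10.0.0.0/16, and traffic to that IP is redirected to one of the backing Pod endpoints. kube-proxy realises this with iptables DNAT rules, not userspace code; the endpoint addresses shown are made up.

```python
# Toy model of the Service abstraction: a cluster IP from the service
# network fronting several Pod endpoints (endpoint addresses are made up).
import random
from ipaddress import ip_network

service_net = ip_network("10.0.0.0/16")
cluster_ip = str(next(service_net.hosts()))     # 10.0.0.1, conventionally the 'kubernetes' Service

endpoints = {cluster_ip: ["172.17.8.2:443", "172.17.14.5:443"]}

def resolve(virtual_ip: str) -> str:
    """Pick one real Pod endpoint behind a virtual cluster IP, like an iptables DNAT rule would."""
    return random.choice(endpoints[virtual_ip])

print(resolve(cluster_ip))
```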

Security

Security within a cluster is enforced at multiple levels.

Transport

All inter-component communication within the cluster happens over TLS connections, including:

  • Kubernetes components -> Kubernetes API server
  • System pods (Flannel, add-ons) -> Kubernetes API server
  • Kubernetes API server -> etcd

The only exception is the Kubernetes API server, which also exposes an unsecured, unauthenticated endpoint reachable only from localhost on TCP port 8080 (master instance).
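A minimal sketch of what such a mutually-authenticated TLS call looks like from a client component, assuming the conventional secure port 6443 and the certificate paths shown (both assumptions; the document only specifies the insecure localhost:8080 listener):

```python
# Minimal sketch of a mutually-authenticated TLS call to the API server.
# Port 6443 and the certificate paths are assumptions for illustration.
import http.client
import ssl

ctx = ssl.create_default_context(cafile="/etc/kubernetes/ssl/ca.pem")
ctx.load_cert_chain(certfile="/etc/kubernetes/ssl/client.pem",
                    keyfile="/etc/kubernetes/ssl/client-key.pem")

conn = http.client.HTTPSConnection("master", 6443, context=ctx)
conn.request("GET", "/healthz")
print(conn.getresponse().read())      # b'ok' when the API server is healthy

# By contrast, the unsecured listener is plain HTTP on the master itself:
# http.client.HTTPConnection("127.0.0.1", 8080)
```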

Authentication

kOVHernetes generates a Certificate Authority and a set of X.509 certificates during the bootstrap process. These certificates are used to authenticate client components against server components. The matrix below describes these interactions:

🔑 = X.509 auth

| Client \ Server | kube-api | kubelet | kube-* | etcd |
|-----------------|----------|---------|--------|------|
| kube-api        | -        | 🔑      |        | 🔑   |
| kubelet         | 🔑       | -       |        |      |
| kube-*          | 🔑       |         | -      |      |
| etcd            |          |         |        | -    |

\* kube-* includes kube-scheduler, kube-controller-manager and kube-proxy
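The API server derives the client identity from the certificate subject (CN becomes the user name, O entries become groups). A hedged sketch of inspecting one of the generated certificates, assuming the third-party `cryptography` package and a hypothetical file path:

```python
# Inspect which identity a generated client certificate carries.
# Requires the third-party 'cryptography' package; the path is hypothetical.
from cryptography import x509
from cryptography.x509.oid import NameOID

with open("/etc/kubernetes/ssl/kubelet.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

cn = cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
orgs = [a.value for a in cert.subject.get_attributes_for_oid(NameOID.ORGANIZATION_NAME)]
print(f"user: {cn}, groups: {orgs}")   # the API server maps CN -> user, O -> groups
```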

System applications running inside the cluster, such as Flannel and Kubernetes add-ons, get authenticated using ServiceAccount tokens signed by the controller-manager.
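From inside a pod, that token is mounted at the standard ServiceAccount path and presented as a bearer token. A minimal sketch (the in-cluster `kubernetes.default.svc` hostname and mount path are standard Kubernetes behaviour, not kOVHernetes specifics):

```python
# Minimal sketch of how a system pod authenticates with its ServiceAccount token.
import http.client
import ssl

SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"
with open(f"{SA_DIR}/token") as f:
    token = f.read()

ctx = ssl.create_default_context(cafile=f"{SA_DIR}/ca.crt")
conn = http.client.HTTPSConnection("kubernetes.default.svc", 443, context=ctx)
conn.request("GET", "/api", headers={"Authorization": f"Bearer {token}"})
print(conn.getresponse().status)       # 200 once the token is accepted
```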

Authorization

Kubernetes

The Kubernetes API authorization mode is left to its default of AlwaysAllow. In this mode:

  • any authenticated user (X.509/token/basic) can perform any action on the API
  • anonymous requests are always rejected

After the cluster is provisioned, the API server can be manually configured to enable other authorization plug-ins (RBAC, ABAC, ...).

etcd

While client certificate authentication is enforced for both client and peer communications (see above), authenticated users are systematically granted full access to the etcd v3 API. The v3 authorization mechanism is still in a design phase upstream.