[RA2] Convert to rst (#2733)

* [RA2] Markdown corrections

- Removal of raw HTML elements
- Removal of ToC
- Removal of bogometers
- Adding new line before and after lists
- Harmonizing bulleted lists to use *
- Adding new line before and after headings

Signed-off-by: Gergely Csatari <gergely.csatari@nokia.com>

* [RA2]: Adding back main ToC and figure captions

Signed-off-by: Gergely Csatari <gergely.csatari@nokia.com>

* [RA2]: Removal of chapter numbers from links

Signed-off-by: Gergely Csatari <gergely.csatari@nokia.com>

* [RA2]: Fixing bogus links

Signed-off-by: Gergely Csatari <gergely.csatari@nokia.com>

* [RA2]: Convert to rst

Signed-off-by: Gergely Csatari <gergely.csatari@nokia.com>

* [RA]: Undoing changes not strictly in RA2 scope

- Conversion of abbreviations is undone
- Creation of index.rst for RA1 and RA2 undone
- Removal of README.md undone

Signed-off-by: Gergely Csatari <gergely.csatari@nokia.com>
CsatariGergely committed Jan 27, 2022
1 parent 95bbae4 commit f0600be
Showing 17 changed files with 2,958 additions and 815 deletions.
54 changes: 0 additions & 54 deletions doc/ref_arch/kubernetes/README.md

This file was deleted.

68 changes: 68 additions & 0 deletions doc/ref_arch/kubernetes/README.rst
@@ -0,0 +1,68 @@
Kubernetes based Reference Architecture
=======================================

This is the Kubernetes based Reference Architecture (RA-2).

Release Information
-------------------

**Bundle: 7**

**Version: 0**

**Release Date: 4th Jan 2022**

Bundle/Version History
----------------------

============== ================= ===============
Bundle.Version Date Note
============== ================= ===============
1.0-alpha 10th January 2020 Snezka Release
3.0 15th May 2020 Baldy Release
4.0 25th Sep 2020 Baraque Release
5.0 29th Jan 2021 Elbrus Release
6.0 1st Jul 2021 Kali Release
7.0 4th Jan 2022 Lakelse Release
============== ================= ===============

Overall Status
--------------

=================================================================================================== ========================
Chapter Status
=================================================================================================== ========================
Chapter 01 Complete
Chapter 02 Lots of SME feedback
Chapter 03 Lots of SME feedback
Chapter 04 Lots of SME feedback
Chapter 05 Lots of SME feedback
Chapter 06 Still developing content
Chapter 07 Lots of SME feedback
Appendix A - Guidance For workload isolation (Multitenancy) with Kubernetes for application vendors Still developing content
=================================================================================================== ========================

Table of Contents
-----------------

.. toctree::
:numbered:
:maxdepth: 1

chapters/chapter01
chapters/chapter02
chapters/chapter03
chapters/chapter04
chapters/chapter05
chapters/chapter06
chapters/chapter07
chapters/appendix-a

Required versions of the most important components
--------------------------------------------------

========== ===================
Component Required version(s)
========== ===================
Kubernetes 1.22
========== ===================
@@ -1,35 +1,29 @@
[<< Back](../../kubernetes)
Appendix A - Guidance For workload isolation (Multitenancy) with Kubernetes for application vendors
===================================================================================================

# Appendix A - Guidance For workload isolation (Multitenancy) with Kubernetes for application vendors
Overview
--------

<p align="right"><img src="../figures/bogo_lsf.png" alt="scope" title="Scope" width="35%"/></p>

## Table of Contents <!-- omit in toc -->

- [Appendix A - Guidance For workload isolation (Multitenancy) with Kubernetes for application vendors](#appendix-a---guidance-for-workload-isolation-multitenancy-with-kubernetes-for-application-vendors)
- [A.1 Overview](#a1-overview)
- [A.2 Solution Areas](#a2-solution-areas)
- [A.3 Multitenancy Models](#a3-multitenancy-models)
- [A.3.1 Soft Multitenancy with Kubernetes Namespaces per tenant](#a31-soft-multitenancy-with-kubernetes-namespaces-per-tenant)
- [A.3.2 Hard Multitenancy with dedicated Kubernetes clusters per tenant](#a32-hard-multitenancy-with-dedicated-kubernetes-clusters-per-tenant)

## A.1 Overview

Problem statement: A single Kubernetes Cluster does not provide hard multitenancy* by design. Within a Cluster, Kubernetes Namespace is a mechanism to provide Soft isolation multitenancy.
Problem statement: A single Kubernetes Cluster does not provide hard multitenancy\* by design. Within a Cluster, Kubernetes Namespace is a mechanism to provide Soft isolation multitenancy.
A Kubernetes Namespace does provide isolation by means of role based access control (RBAC), Resource Isolation and Network Policy; however, Namespaces still belong to the same trust domain, and a potential breach of the Cluster Admin Role could spread the blast radius across the entire Cluster and all its Kubernetes Namespaces.
There is therefore a need to define the various use cases or ways to build Multitenancy Deployment Models, and to define the Best Practices to secure each Model.
A Kubernetes Namespace is a logical representation of a namespace (a boundary for resources) within the Kubernetes Cluster.
This is different from the [linux namespaces](https://en.wikipedia.org/wiki/Linux_namespaces) which are defined at the operating system kernel level.
This is different from the `linux namespaces <https://en.wikipedia.org/wiki/Linux_namespaces>`__ which are defined at the operating system kernel level.

.. image:: ../figures/Model2-cluster-isolation.png
:alt: "Cluster Isolation"

.. image:: ../figures/Model1-ns.png
:alt: "Network Service"

<p align="left"><img src="../figures/Model2-cluster-isolation.png" alt="scope" title="Scope" width="50%"/></p>
<p align="left"><img src="../figures/Model1-ns.png" alt="scope" title="Scope" width="50%"/></p>

Use cases:

1. Two CNFs which are in the same trust domain (e.g.: they are from the same vendor) are running in a container infrastructure
2. Two CNFs which are in different trust domains (e.g.: they are from different vendors) are running in a container infrastructure

## A.2 Solution Areas
Solution Areas
--------------

The scope is to identify the solution areas that are needed to secure the CNF workloads. Securing the platform might happen as part of this, but it is not directly the focus or objective here.

@@ -40,34 +34,38 @@
5. RBAC rules and secrets Management for each tenant
6. Separate Isolated view of Logging, Monitoring, Alerting and Tracing per tenant

## A.3 Multitenancy Models
Multitenancy Models
-------------------

Solution Models:

1. **Soft Multitenancy**: Separate Kubernetes Namespace per tenant within a Single Kubernetes Cluster. The same Kubernetes Cluster and its control plane are shared between multiple tenants.
2. **Hard Multitenancy**: Separate Kubernetes Cluster per tenant.
The Kubernetes Clusters can be created using Baremetal Nodes or Virtual Machines, either on Private or Public Cloud.
The workloads do not share the same resources and Clusters are isolated.

### A.3.1 Soft Multitenancy with Kubernetes Namespaces per tenant
Soft Multitenancy with Kubernetes Namespaces per tenant
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Soft multitenancy with Namespaces per tenant can be implemented, resulting in a multi-tenant cluster, where multiple trusted workloads share a cluster and its control plane.
This is mainly recommended to reduce management overhead when the tenants belong to the same trust domain, and have the same Cluster configuration requirements (incl. release, add-ons, etc.).

The tenants will share the cluster control plane and API, including all add-ons, extensions, CNIs, CSIs, and any Custom Resources and Controllers.

To manage access control, the Kubernetes RBAC must be configured with rules to allow specific tenants to access only the objects within their own Namespace, using a `Role` Object to group the resources within a namespace, and a `RoleBinding` Object to assign it to a user or a group of users within a Namespace.
To manage access control, the Kubernetes RBAC must be configured with rules to allow specific tenants to access only the objects within their own Namespace, using a ``Role`` Object to group the resources within a namespace, and a ``RoleBinding`` Object to assign it to a user or a group of users within a Namespace.
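
As an illustration only, a minimal ``Role``/``RoleBinding`` pair scoped to a hypothetical ``tenant-a`` Namespace could look like the following sketch (the Namespace, group and resource names are assumptions, not values prescribed by this RA):

.. code-block:: yaml

   apiVersion: rbac.authorization.k8s.io/v1
   kind: Role
   metadata:
     name: tenant-a-edit        # illustrative Role name
     namespace: tenant-a        # hypothetical tenant Namespace
   rules:
     - apiGroups: ["", "apps"]  # core and apps API groups
       resources: ["pods", "services", "deployments"]
       verbs: ["get", "list", "watch", "create", "update", "delete"]
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: RoleBinding
   metadata:
     name: tenant-a-edit-binding
     namespace: tenant-a
   subjects:
     - kind: Group
       name: tenant-a-users     # assumed tenant user group
       apiGroup: rbac.authorization.k8s.io
   roleRef:
     kind: Role
     name: tenant-a-edit
     apiGroup: rbac.authorization.k8s.io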

In order to prevent (or allow) network traffic between Pods belonging to different Namespaces, `NetworkPolicy` must be created as well.
In order to prevent (or allow) network traffic between Pods belonging to different Namespaces, ``NetworkPolicy`` must be created as well.
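
As a sketch, a ``NetworkPolicy`` that admits ingress traffic only from Pods in the same Namespace (a common default-deny pattern; the Namespace name is again illustrative) could be:

.. code-block:: yaml

   apiVersion: networking.k8s.io/v1
   kind: NetworkPolicy
   metadata:
     name: allow-same-namespace   # illustrative name
     namespace: tenant-a          # hypothetical tenant Namespace
   spec:
     podSelector: {}              # applies to every Pod in this Namespace
     policyTypes:
       - Ingress
     ingress:
       - from:
           - podSelector: {}      # allows traffic only from Pods in this same Namespace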

Resource quotas enable the cluster administrator to allocate CPU and Memory to each Namespace, limiting the amount of resources the objects belonging to that Namespace can consume. This may be configured in order to ensure that all tenants use no more than the resources they are assigned.
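
For instance, a ``ResourceQuota`` capping what a tenant Namespace may request could be sketched as follows (the figures are arbitrary placeholders, not recommendations):

.. code-block:: yaml

   apiVersion: v1
   kind: ResourceQuota
   metadata:
     name: tenant-a-quota     # illustrative name
     namespace: tenant-a      # hypothetical tenant Namespace
   spec:
     hard:
       requests.cpu: "16"     # total CPU requests allowed in the Namespace
       requests.memory: 64Gi  # total memory requests allowed
       limits.cpu: "32"
       limits.memory: 128Gi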

By default, the Kubernetes scheduler will run pods belonging to any namespace on any cluster node. If it is required that pods from different tenants are run on different hosts, then affinity rules should be created by using the desired Node Labels on the Pod definition. Alternatively, Node taints can be used to reserve certain nodes for a predefined tenant.
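
As one possible sketch, a Pod could be pinned to nodes reserved for its tenant by combining a node label selector with a toleration for a tenant taint (the ``tenant`` label/taint key and the image are assumptions for illustration):

.. code-block:: yaml

   apiVersion: v1
   kind: Pod
   metadata:
     name: tenant-a-workload  # illustrative name
     namespace: tenant-a
   spec:
     nodeSelector:
       tenant: tenant-a       # assumes nodes were labelled "tenant=tenant-a"
     tolerations:
       - key: tenant          # assumes nodes were tainted "tenant=tenant-a:NoSchedule"
         operator: Equal
         value: tenant-a
         effect: NoSchedule
     containers:
       - name: app
         image: registry.example.com/tenant-a/app:1.0  # placeholder image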

### A.3.2 Hard Multitenancy with dedicated Kubernetes clusters per tenant
Hard Multitenancy with dedicated Kubernetes clusters per tenant
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When tenants do not belong to the same trust domain, or the requirements on the cluster setup and configuration are irreconcilable, Hard Multitenancy must be implemented by creating multiple Kubernetes Clusters, one for each tenant or group of compatible tenants.

All the default design decisions for Kubernetes Clusters apply in this case, and no special segregation or capacity management needs to be set up within the clusters.

From an operational perspective, the increased computational and operational overhead and the Cluster LCM (incl. Cluster provisioning automation) are the most impactful aspects.
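
Cluster provisioning tooling is not prescribed by this RA, but as one hedged illustration, a dedicated per-tenant cluster declared through Cluster API might start from a sketch like the following (Cluster API is an assumption, and the referenced control plane and infrastructure objects are placeholders):

.. code-block:: yaml

   apiVersion: cluster.x-k8s.io/v1beta1
   kind: Cluster
   metadata:
     name: tenant-a-cluster   # one dedicated cluster per tenant
     namespace: tenants       # hypothetical Namespace in the management cluster
   spec:
     controlPlaneRef:         # placeholder control plane provider object
       apiVersion: controlplane.cluster.x-k8s.io/v1beta1
       kind: KubeadmControlPlane
       name: tenant-a-control-plane
     infrastructureRef:       # placeholder infrastructure provider object
       apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
       kind: DockerCluster
       name: tenant-a-infra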

43 changes: 14 additions & 29 deletions doc/ref_arch/kubernetes/chapters/chapter01.md
@@ -1,23 +1,7 @@
[<< Back](../../kubernetes)

# 1. Overview
# Overview

<p align="right"><img src="../figures/bogo_com.png" alt="scope" title="Scope" width="35%"/></p>

## Table of Contents <!-- omit in toc -->

- [1. Overview](#1-overview)
- [1.1 Introduction](#11-introduction)
- [1.2 Terminology](#12-terminology)
- [1.3 Principles](#13-principles)
- [1.3.1 Cloud Native Principles](#131-cloud-native-principles)
- [1.3.2 Exceptions](#132-exceptions)
- [1.3.2.1 Technology Exceptions](#1321-technology-exceptions)
- [1.3.2.2 Requirements Exceptions](#1322-requirements-exceptions)
- [1.4 Scope](#14-scope)
- [1.5 Approach](#15-approach)

## 1.1 Introduction
## Introduction

The intention of this Reference Architecture is to develop a usable Kubernetes based platform for the Telecom operator community. The RA will be based on the standard Kubernetes platform wherever possible. This Reference Architecture for Kubernetes will describe the high level system components and their interactions, taking the [goals and requirements](../../../common/chapter00.md) and mapping them to real-world Kubernetes (and related) components. This document needs to be sufficiently detailed and robust such that it can be used to guide the production deployment of Kubernetes within an operator, whilst being flexible enough to evolve with and remain aligned with the wider Kubernetes ecosystem outside of Telco.

@@ -27,15 +11,15 @@

The Kubernetes Reference Architecture will be used to determine a Kubernetes Reference Implementation. The Kubernetes Reference Implementation would then also be used to test and validate the supportability of, and compatibility with, the Kubernetes-based Network Function workloads and the Kubernetes cluster lifecycle management of interest to the Anuket community. It is expected that the Kubernetes Reference Architecture, Reference Implementation, and Reference Conformance will be developed building on the work already in place for OpenStack in the Anuket project. The intention is to reuse and expand as much of the existing test frameworks as possible for the verification and conformance testing of Kubernetes-based workloads and Kubernetes cluster lifecycle management.

### 1.2 Terminology
### Terminology

For terminology used in this document refer to the [glossary](../../../common/glossary.md).

## 1.3 Principles
## Principles

This Reference Architecture conforms with the principles defined [here](../../../common/chapter00.md#2.0).

### 1.3.1 Cloud Native Principles
### Cloud Native Principles

The definition for Cloud Native is somewhat controversial. For the purposes of this document, the CNCF TOC's (Technical Oversight Committee) definition of Cloud Native will be used:
>CNCF Cloud Native Definition v1.0
@@ -62,29 +46,29 @@
- **robust automation**
- **high-impact changes frequently and predictably**

### 1.3.2 Exceptions
### Exceptions

Anuket specification define certain policies and [principles](../../../common/chapter00.md#2.0) and strives to coalesce the industry towards conformant Cloud Infrastructure technologies and configurations. With the currently available technology options, incompatibilities, performance and operator constraints (including costs), these policies and principles may not always be achievable and, thus, require an exception process. These policies describe how to handle [non-conforming technologies](../../../common/policies.md#cntt-policies-for-managing-non-conforming-technologies). In general, non-conformance with policies is handled through a set of exceptions (please also see [Exception Types](../../../gov/chapters/chapter09.md#942-exception-types)).
Anuket specifications define certain policies and [principles](../../../common/chapter00.md#2.0) and strive to coalesce the industry towards conformant Cloud Infrastructure technologies and configurations. With the currently available technology options, incompatibilities, performance and operator constraints (including costs), these policies and principles may not always be achievable and thus require an exception process. These policies describe how to handle [non-conforming technologies](../../../common/policies.md#cntt-policies-for-managing-non-conforming-technologies). In general, non-conformance with policies is handled through a set of exceptions (please also see [Exception Types](../../../gov/chapters/chapter09.md#exception-types)).

The following sub-sections list the exceptions to the principles of Anuket specifications and shall be updated whenever technology choices, versions and requirements change. The Exceptions have an associated period of validity and this period shall include time for transitioning.

#### 1.3.2.1 Technology Exceptions
#### Technology Exceptions

The list of Technology Exceptions will be updated or removed when alternative technologies, aligned with the principles of Anuket specifications, develop and mature.

| Ref | Name | Description | Valid Until | Rationale | Implication |
|-----|------|-------------|-------------|-----------|-------------|
| ra2.exc.tec.001 | SR-IOV | This exception allows workloads to use SR-IOV over PCI-PassThrough technology. | TBD | Emulation of virtual devices for each virtual machine creates an I/O bottleneck resulting in poor performance and limits the number of virtual machines a physical server can support. SR-IOV implements virtual devices in hardware, and by avoiding the use of a switch, near maximal performance can be achieved. For containerisation, the downside of creating dependencies on hardware is reduced, as Kubernetes nodes are either physical or, if virtual, have no need to "live migrate" as a VNF VM might. | |

#### 1.3.2.2 Requirements Exceptions
#### Requirements Exceptions

The Requirements Exceptions section lists the Reference Model (RM) requirements and/or Reference Architecture (RA) requirements that will either be waived or be only partially implemented in this version of the RA. The exception list will be updated to allow for a period of transitioning as and when requirements change.

| Ref | Name | Description | Valid Until | Rationale | Implication |
|-----|------|-------------|-------------|-----------|-------------|
| ra1.exc.req.001 | Requirement | xxx | xxxxxxxxxxxxx. | | |

## 1.4 Scope
## Scope

The scope of this particular Reference Architecture can be described as follows (the capabilities themselves will be listed and described in subsequent chapters), also shown in Figure 1-1:

@@ -97,10 +81,11 @@

- **Kubernetes-based Application / VNF Management**: similar to VNFM, this is an application layer capability that is out of scope of Anuket. This includes Kubernetes-based Application Package Management, such as Helm, as this is a client application and set of libraries that would be part of a modern/cloud native VNFM, not part of the infrastructure itself.

<p align="center"><img src="../figures/ch01_scope_k8s.png" alt="Kubernetes Reference Architecture scope" title="Kubernetes Reference Architecture scope" width="100%"/></p>
<p align="center"><b>Figure 1-1:</b> Kubernetes Reference Architecture scope</p>
![**Figure 1-1:** Kubernetes Reference Architecture scope](../figures/ch01_scope_k8s.png) <!-- width="100%" -->

**Figure 1-1:** Kubernetes Reference Architecture scope

## 1.5 Approach
## Approach

The approach taken in this Reference Architecture is to start as simply as possible (i.e. with a basic Kubernetes architecture), and then add detail and additional features/extensions as is required to meet the requirements of the Reference Model and the functional and non-functional requirements of common cloud native network functions.
