
[RA2 Ch08] Initial content for Ch08 (Gap analysis) #672

Merged 7 commits on Jan 8, 2020 (changes shown from 6 commits)
58 changes: 57 additions & 1 deletion doc/ref_arch/kubernetes/chapters/chapter08.md
@@ -16,10 +16,66 @@ While this Reference Architecture is being developed, Gaps will be identified th
<a name="8.2"></a>
## 8.2 Gap analysis

- Container Run-time Interfaces towards NFVI resources
- Multi-tenancy within Kubernetes
- Kubernetes as a VM-based VNF Orchestrator
- Multiple network interfaces on Pods
- Dynamic network management
- Control plane efficiency
- Interoperability with VNF-based networking

<a name="8.2.1"></a>
### 8.2.1 Container Run-time Interfaces towards NFVI resources
**Collaborator:** Since the Container Run-time Interface (CRI) is well-defined, I think in this context we need to use a different term. Is this the OCI runtime spec?


> This refers to the southbound infrastructure resources that the IaaS provider presents to the container platform.

> E.g. the network interface type that is presented to a running container.
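For illustration, the closest existing mechanism is the Kubernetes device plugin API, through which an infrastructure network resource such as an SR-IOV virtual function can be requested per Pod. A minimal sketch, assuming a device plugin that advertises the hypothetical resource name `intel.com/sriov_netdevice`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cnf-example
spec:
  containers:
  - name: app
    image: example.com/cnf:latest        # hypothetical image
    resources:
      requests:
        # Hypothetical resource name; the real name is whatever the
        # deployed SR-IOV device plugin advertises to the kubelet.
        intel.com/sriov_netdevice: "1"
      limits:
        intel.com/sriov_netdevice: "1"
```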


<a name="8.2.2"></a>
### 8.2.2 Multi-tenancy within Kubernetes

> Today, Kubernetes lacks hard multi-tenancy capabilities.<sup>citations</sup>

> The ability to allow untrusted tenants to share infrastructure resources is needed.
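For context, the soft multi-tenancy that Kubernetes does offer today is built from namespaces and resource quotas; it bounds resource consumption but does not isolate untrusted tenants from each other or from the control plane. A minimal sketch, with tenant names purely illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a                 # one namespace per tenant (soft isolation only)
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "8"            # cap on the tenant's aggregate CPU requests
    requests.memory: 16Gi
    pods: "50"
```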


<a name="8.2.3"></a>
### 8.2.3 Kubernetes as a VM-based VNF Orchestrator

> This is needed in order to support a transition from VNF-only deployments to VNFs and CNFs running in the same environment.
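One open-source project exploring this space is KubeVirt, which represents VMs as Kubernetes custom resources so that the same cluster can schedule both Pods and VMs. A simplified sketch of a KubeVirt VirtualMachineInstance, with the disk image hypothetical:

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  name: vnf-example
spec:
  domain:
    devices:
      disks:
      - name: rootdisk
        disk:
          bus: virtio            # paravirtualised disk presented to the guest
    resources:
      requests:
        memory: 1Gi
  volumes:
  - name: rootdisk
    containerDisk:
      image: example.com/vnf-disk:latest   # hypothetical VM disk image
```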


<a name="8.2.4"></a>
### 8.2.4 Multiple network interfaces on Pods

> As well as having multiple network interfaces on Pods (e.g. Multus), there is a need to support different network interfaces in different Pods using different CNI plugins within the same cluster.
**Collaborator:** We should also have the same features available for all network interfaces.

**Author:** Agree.

**Collaborator:** Sentence and a typo. Suggest "As well as having multiple network interfaces on Pods (e.g. Multus), need to support different network interfaces in different Pods using different CNI plugins within the same cluster."

**Author:** Fixed.
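For reference, Multus models each additional interface as a NetworkAttachmentDefinition custom resource which Pods reference by annotation; different definitions can delegate to different CNI plugins. A minimal sketch, with the interface name and subnet purely illustrative:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net              # hypothetical secondary network
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "ipam": { "type": "host-local", "subnet": "192.168.10.0/24" }
    }'
---
apiVersion: v1
kind: Pod
metadata:
  name: multi-net-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-net   # attach the secondary interface
spec:
  containers:
  - name: app
    image: example.com/cnf:latest              # hypothetical image
```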



<a name="8.2.5"></a>
### 8.2.5 Dynamic network management

> Today this is done with, for example, Netconf-based integration with SDN controllers.

> E.g. connecting individual VPNs (such as an L3VPN) onto the CNF, on demand.

> The aim is to enable this via a standard API.
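No such standard API exists today; purely as a hypothetical illustration, an on-demand attachment might be declared as a custom resource along these lines (every field below is invented, not an existing CRD):

```yaml
# Hypothetical custom resource, shown only to illustrate the gap;
# no such standard API exists today.
apiVersion: networking.example.org/v1alpha1
kind: VPNAttachment
metadata:
  name: signalling-l3vpn
spec:
  vpnType: l3vpn                 # the VPN technology to stitch in
  routeTarget: "64512:100"       # identifies the VPN on the gateway router
  podSelector:
    matchLabels:
      app: cnf-signalling        # which Pods receive the attachment
```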


<a name="8.2.6"></a>
### 8.2.6 Control Plane Efficiency

> For example, in situations where multiple sites / availability zones exist, an operator may choose to run multiple Kubernetes clusters, not only for security/multitenancy reasons but also fault, resilience, latency, etc.

> This produces an overhead of Kubernetes Masters. Is there a way of making this more efficient whilst still being able to meet the operator's non-functional requirements (fault tolerance, resilience, latency, etc.)?


<a name="8.2.7"></a>
### 8.2.7 Interoperability with VNF-based networking

> For example, in existing networks today, L3 VPNs are commonly used for traffic separation (e.g. a separate L3 VPN for signalling, charging, LI, O&M, etc.). CNFs will have to interwork with existing network elements, and therefore a K8s Pod will somehow need to be connected to an L3 VPN. Today this is only possible via Multus (or DANM); however, there is typically a network orchestration responsibility to connect the network interface to a gateway router (where the L3 VPN is terminated). This network orchestration is not taken care of by K8s, nor is there a production-grade solution in the open-source space to take care of it.
>
> Note: with an underlying IaaS this is possible, but it then introduces an (undesirable) dependency between workload orchestration in K8s and infrastructure orchestration in the IaaS.
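As an illustration of where the gap sits: the Pod side of such a connection can be expressed with a Multus annotation (assuming a pre-created NetworkAttachmentDefinition, here the hypothetical `l3vpn-signalling`), while the router-side step of terminating that network in the right VRF/L3 VPN remains outside Kubernetes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cnf-signalling
  annotations:
    # Attaches a secondary interface on a network that reaches the gateway
    # router. Mapping that network into the correct L3 VPN (VRF) on the
    # router is a separate orchestration step that K8s does not perform.
    k8s.v1.cni.cncf.io/networks: l3vpn-signalling
spec:
  containers:
  - name: signalling
    image: example.com/signalling-cnf:latest   # hypothetical image
```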


<a name="8.3"></a>
## 8.3 Proposals & Resolution


<a name="8.4"></a>
## 8.4 Development Efforts