
[RA2 Core]: Define the place of Kubernetes in the ETSI NFV MANO stack and document the result #450

Closed
CsatariGergely opened this issue Oct 23, 2019 · 14 comments

@CsatariGergely
Collaborator

In the discussions of #383 there was no consensus on the place of Kubernetes in the ETSI NFV MANO stack. The resolution of this issue shall build consensus on the functional areas Kubernetes covers in the ETSI NFV MANO stack and document the result.

@TamasZsiros
Collaborator

I thought there was already a definition of CISM in ETSI NFV MANO. I would prefer that we refer to ETSI rather than try to define things in parallel.

@CsatariGergely
Collaborator Author

> I thought there was already a definition of CISM in ETSI NFV MANO. I would prefer that we refer to ETSI rather than try to define things in parallel.

I agree that we should refer to existing definitions if there are any, but I think it is more difficult than just pointing to the CISM definition.

In my opinion it depends on which functionalities of Kubernetes we are looking at and in which context. The [IC]aaS functionalities are either NFVI/VIM or CISM, depending on whether we want to model their relation to a possible other NFVI/VIM, while the application management functionalities (scaling, config management, operators) are VNFM.
My 2 cents.

@TamasZsiros
Collaborator

I would like to make use of K8s' capabilities when it comes to scaling, LCM, operators, etc., so that we do not promote or suggest re-implementing all of those in today's VNFMs.

Would it be reasonable to say that when we map implementation to architecture, K8s/CaaS implements some of the functionality that is covered by the VNFM? I know it's not as clean-cut as if CaaS were only the CISM, but the interface between the two does exist today (the K8s APIs). And a generic guideline could be to always use what K8s provides (for scaling, LCM, etc.). Then the entry point to those operations is still whatever VNFM implementation there is today (and so Or-Vnfm stays as intact as possible).

Then if someone does not want ETSI/MANO, they can skip the VNFM altogether, but I don't expect that to be the mainstream use case.
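
For illustration only (not from the original thread): a minimal sketch of how a VNFM could delegate scaling of a CNF component to Kubernetes rather than re-implementing it, assuming the official `kubernetes` Python client; the helper name, Deployment name, and namespace are hypothetical.

```python
# Sketch only: a VNFM-side action that delegates VNFC/CNF scaling to the
# Kubernetes API instead of re-implementing LCM in the VNFM.
from kubernetes import client, config


def scale_cnf_component(name: str, namespace: str, replicas: int) -> None:
    """Scale the Deployment that backs a CNF component to the requested size."""
    config.load_kube_config()  # or config.load_incluster_config()
    apps = client.AppsV1Api()
    # Patch only the scale subresource; Kubernetes performs the actual scaling.
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )


# Hypothetical usage: the VNFM receives a scale request over Or-Vnfm and
# simply forwards the target size to Kubernetes.
# scale_cnf_component("example-cnf-worker", "example-namespace", replicas=5)
```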

@pgoyal01
Collaborator

@TamasZsiros ETSI IFA029 V0.20.0 (2019-08) specifies three different models for CISM, each with its own implications for the VNFM and VIM. If the VNFM together with the NFVO is already implementing, say, LCM operations, then changes would be required in these systems to accommodate Kubernetes.

If we are going to adopt the IFA029 recommendations, then there are also implications for defining the NFVI -- it includes the CIS (see below) -- and hence changes to the Reference Model (RM) chapters as well.

BTW, for the convenience of others, I have included the ETSI IFA029 definitions of CIS, CISM and CISI here:

Container Infrastructure Service: Service that provides runtime environment for one or more container virtualization technologies. NOTE 4: Container Infrastructure Service can run on top of a bare metal or hypervisor-based virtualization.

Container Infrastructure Service Management: a function that manages one or more Container Infrastructure Services. NOTE 5: It provides mechanisms for lifecycle management of the containers, which are hosting application components as services or functions.

Container Infrastructure Service Instance: an instance providing runtime execution environment for container. EXAMPLE 2: In Kubernetes® this is a node, which consists of three components: container runtime, kubelet and kube-proxy. In OpenStack® Zun this is a compute node, which consists of two components: container runtime, Zun-compute. NOTE 6: Kubernetes® is a registered trademark of The Linux Foundation®. This information is given for the convenience of users of the present document and does not constitute an endorsement by ETSI of the product named.

@tomkivlin
Collaborator

Hi all - I've had a read through IFA029 and understand it a little better now. Some terms have specific Kubernetes examples, which is useful, but some don't.

Those with clear examples:

- Container Infrastructure Service Instance: Kubernetes worker node (runtime, kubelet, kube-proxy)
- Managed Container Infrastructure Object: Pods, services, deployments, etc.
- Managed Container Infrastructure Object Package: Helm chart, etc.

Some don't - here's how I think they would map to capabilities we've been discussing:

- Container Infrastructure Service: this sounds like a Kubernetes / CaaS cluster
- Container Infrastructure Service Management: this sounds like the CaaS Manager component, managing multiple Kubernetes clusters

Do you agree?
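
(Not part of IFA029 or the discussion above, just a hedged illustration of the CISI mapping: a short sketch, assuming the official `kubernetes` Python client, that lists exactly the three per-node components the IFA029 CISI example names.)

```python
# Sketch only: show the per-node components that make up a CISI in the
# Kubernetes case (container runtime, kubelet, kube-proxy).
from kubernetes import client, config

config.load_kube_config()
for node in client.CoreV1Api().list_node().items:
    info = node.status.node_info
    print(
        node.metadata.name,
        info.container_runtime_version,  # container runtime
        info.kubelet_version,            # kubelet
        info.kube_proxy_version,         # kube-proxy
    )
```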

@tomkivlin
Collaborator

tomkivlin commented Oct 28, 2019

In terms of where CISM sits wrt MANO, section 7.1.1 has the three options @pgoyal01 mentions:

  1. PaaS services are modelled as VNFs (e.g. the VNFM provides the CISM/Kubernetes capabilities)
  2. PaaS services are modelled as a new type of NFVI resources (e.g. the VIM provides the CISM/Kubernetes capabilities)
  3. PaaS services are modelled as a new type of object specific to the PaaS layer

My view is that option 3 is the correct architectural choice, as this is a description of functional capability, and both options 1 and 2 would drive supply-chain decisions, which an architecture shouldn't do. Option 3 still allows the VIM vendor to supply the CISM/Kubernetes, but it isn't forced by the architecture.

My main concern, which I can't see mentioned in IFA029, is that we should avoid the PaaS Manager / CISM / CaaS Manager having to call a northbound API on the NFVO. For example, when scaling a cluster I would expect the PaaS Manager / CISM / CaaS Manager to be able to do so without a "grant" from the NFVO; however, I can see that a VNFM could/should still get a grant to scale the VNF/CNF via the SOL003 interface.

Another question then arises - should that grant be for a specific number of replicas of a VNFC/pod, or for a range, allowing for HPA to be used, for example?
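
(Illustration only, not from the thread: a minimal sketch of what a "range" grant could look like in practice, assuming the official `kubernetes` Python client and the autoscaling/v1 HorizontalPodAutoscaler; all names and bounds below are hypothetical.)

```python
# Sketch only: express a scaling grant as a range by creating an HPA, so that
# Kubernetes may scale the Deployment anywhere between the granted bounds.
from kubernetes import client, config


def grant_scaling_range(name: str, namespace: str, min_rep: int, max_rep: int) -> None:
    """Create an autoscaling/v1 HPA covering the granted replica range."""
    config.load_kube_config()
    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name=name, namespace=namespace),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name=name
            ),
            min_replicas=min_rep,
            max_replicas=max_rep,
            target_cpu_utilization_percentage=70,  # illustrative trigger
        ),
    )
    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace=namespace, body=hpa
    )


# Hypothetical usage: a single grant covering 2..10 replicas of a VNFC.
# grant_scaling_range("example-cnf-worker", "example-namespace", 2, 10)
```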

But then I think: does this have any impact on the scope of this RA? It sounds, from other meeting minutes, like we should be focussing on the capabilities provided to the VNF/CNF - in other words the container runtime, etc., rather than the capabilities provided to/by the VNFM, such as the Kubernetes control plane.

@tomkivlin
Collaborator

I'm also concerned that IFA029 seems to consider Kubernetes as a CISM, rather than a CIS? "Prominent CISM/COE implementations, at the time of writing the present document, are Kubernetes®, Apache Marathon with Mesos and Docker™ (Swarm)."

@pgoyal01
Collaborator

@tomkivlin IFA029 says "CIS provides the container runtime environment (e.g. Docker™)". Also, IFA029 7.2.1.1.2 implies that the CIS is responsible for "infrastructure resources".

@tomkivlin
Collaborator

OK, in which case that implies that a CIS is a subset of a Container Infrastructure Service Instance, as the example in the doc for a CISI is "an instance providing runtime execution environment for container. EXAMPLE 2: In Kubernetes® this is a node, which consists of three components: container runtime, kubelet and kube-proxy."?

It also implies there is no equivalent to the CaaS Manager in IFA029, i.e. the component that manages the lifecycle of multiple Kubernetes clusters - e.g. create, scale, heal, delete.

@CsatariGergely
Collaborator Author

> Hi all - I've had a read through IFA029 and understand it a little better now. Some terms have specific Kubernetes examples, which is useful, but some don't.
>
> Those with clear examples:
>
> - Container Infrastructure Service Instance: Kubernetes worker node (runtime, kubelet, kube-proxy)
> - Managed Container Infrastructure Object: Pods, services, deployments, etc.
> - Managed Container Infrastructure Object Package: Helm chart, etc.
>
> Some don't - here's how I think they would map to capabilities we've been discussing:
>
> - Container Infrastructure Service: this sounds like a Kubernetes / CaaS cluster
> - Container Infrastructure Service Management: this sounds like the CaaS Manager component, managing multiple Kubernetes clusters
>
> Do you agree?

The CIS is the set of worker nodes, while the CISM is the set of controller nodes.

@CsatariGergely
Collaborator Author

> In terms of where CISM sits wrt MANO, section 7.1.1 has the three options @pgoyal01 mentions:
>
> 1. PaaS services are modelled as VNFs (e.g. the VNFM provides the CISM/Kubernetes capabilities)
> 2. PaaS services are modelled as a new type of NFVI resources (e.g. the VIM provides the CISM/Kubernetes capabilities)
> 3. PaaS services are modelled as a new type of object specific to the PaaS layer
>
> My view is that option 3 is the correct architectural choice, as this is a description of functional capability, and both options 1 and 2 would drive supply-chain decisions, which an architecture shouldn't do. Option 3 still allows the VIM vendor to supply the CISM/Kubernetes, but it isn't forced by the architecture.
>
> My main concern, which I can't see mentioned in IFA029, is that we should avoid the PaaS Manager / CISM / CaaS Manager having to call a northbound API on the NFVO. For example, when scaling a cluster I would expect the PaaS Manager / CISM / CaaS Manager to be able to do so without a "grant" from the NFVO; however, I can see that a VNFM could/should still get a grant to scale the VNF/CNF via the SOL003 interface.
>
> Another question then arises - should that grant be for a specific number of replicas of a VNFC/pod, or for a range, allowing for HPA to be used, for example?
>
> But then I think: does this have any impact on the scope of this RA? It sounds, from other meeting minutes, like we should be focussing on the capabilities provided to the VNF/CNF - in other words the container runtime, etc., rather than the capabilities provided to/by the VNFM, such as the Kubernetes control plane.

I think we should not look into the PaaS part of IFA029, but rather at the "container part". The options there are listed in Chapter 7.2, but all of these should be read with caution, as IFA029 is a study for which the normative work is only just starting.

In my view, in IFA029 terms Kubernetes is the CIS+CISM, while in pre-IFA029 terms it can be an NFVI+VIM (with some limitations on the hardware management and multitenancy fronts), and in both cases it has some VNFM functionality.

@CsatariGergely
Collaborator Author

> OK, in which case that implies that a CIS is a subset of a Container Infrastructure Service Instance, as the example in the doc for a CISI is "an instance providing runtime execution environment for container. EXAMPLE 2: In Kubernetes® this is a node, which consists of three components: container runtime, kubelet and kube-proxy."?
>
> It also implies there is no equivalent to the CaaS Manager in IFA029, i.e. the component that manages the lifecycle of multiple Kubernetes clusters - e.g. create, scale, heal, delete.

There is no equivalent of the CaaS Manager in IFA029. ETSI NFV does not cover the management of the infrastructure itself.

@tomkivlin
Collaborator

From the 31.10 meeting: the CNTT RA2 Core WS will describe the Kubernetes architecture as it relates to ETSI NFV/MANO, for feedback via RA2 Dev if required.
Work is likely to be heavily linked to #450.

@tomkivlin self-assigned this Oct 31, 2019
@rabiabdel added this to the Snezka milestone Nov 5, 2019
@rabi-abdel added this to To do in old-RA2 Nov 5, 2019
@tomkivlin removed this from the Snezka milestone Nov 15, 2019
@tomkivlin moved this from To do to Backlog in old-RA2 Nov 18, 2019
@rabi-abdel removed the RA 2 label Feb 25, 2020
@rabi-abdel modified the milestones: M3 (Freeze Contributions), Backlog Feb 25, 2020
@rabi-abdel removed this from the Backlog milestone May 15, 2020
@tomkivlin
Collaborator

Closed due to inactivity.

old-RA2 automation moved this from Backlog to Done Oct 1, 2020
@tomkivlin removed the Backlog label Oct 1, 2020
@project-bot moved this from Done to To do in old-RA2 Oct 1, 2020
@tomkivlin moved this from To do to Done in old-RA2 Oct 1, 2020