
SLM: Start implementation for first year PoC #23

Closed · mpeuster opened this issue Mar 10, 2016 · 32 comments
@mpeuster (Contributor)

The service lifecycle manager plugin: http://wiki.sonata-nfv.eu/index.php/T4.2_%2B_T4.3_Orchestrator_kernel_%2B_Plugins#Deploy_a_Service

There is an empty skeleton for it: https://github.com/sonata-nfv/son-mano-framework/tree/master/plugins/son-mano-service-lifecycle-management

@mpeuster (Contributor, Author)

Problem: The original SLM MSC is huge: http://wiki.sonata-nfv.eu/index.php/NS_LifeCycle_Mgr_MSC

I was wondering if we should start with an (extremely!) simplified version of this (to get something running):

Simplified US:

  1. GK sends a "service start" event to SLM, containing the service's NSD, etc.
  2. SLM translates our descriptors to a deployable HEAT template
  3. SLM passes the HEAT template to OpenStack (through the infrastructure abstraction, if it's available in the Y1 prototype)
  4. We have a running service. Be happy.
  5. Create the NSR in the repo and fill it with the runtime data of the created service

This simplification omits the placement and FLM components in our design in order to make the implementation as simple as possible.

But this is something I would consider as a realistic goal for the given timeframe.

What do you think?

@tsoenen @jbonnet @shuaibsiddiqui @smendel @felipevicens @mehraghdam @stevenvanrossem
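For illustration, the five steps above could be sketched roughly as below. This is a hedged sketch only: the function names (`handle_service_start`, `translate_to_heat`), the descriptor fields, and the injected `deploy`/`store_nsr` callables are all hypothetical, not the actual plugin API.

```python
# Hypothetical sketch of the simplified Y1 service-start flow.
# Field names ("vnfs", "image", "flavor") are assumptions, not the real NSD schema.

def translate_to_heat(nsd):
    """Step 2: naively map each VNF referenced in the NSD to a HEAT server resource."""
    resources = {}
    for vnf in nsd.get("vnfs", []):
        resources["server_" + vnf["id"]] = {
            "type": "OS::Nova::Server",
            "properties": {"image": vnf["image"],
                           "flavor": vnf.get("flavor", "m1.small")},
        }
    return {"heat_template_version": "2015-04-30", "resources": resources}

def handle_service_start(nsd, deploy, store_nsr):
    """Steps 1-5: receive NSD, translate, deploy, record the NSR."""
    template = translate_to_heat(nsd)       # step 2
    runtime_info = deploy(template)         # step 3: OpenStack behind the abstraction
    nsr = {"descriptor_ref": nsd["id"],     # step 5: build and store the record
           "status": "running",
           "runtime": runtime_info}
    store_nsr(nsr)
    return nsr
```

The `deploy` and `store_nsr` parameters stand in for the infrastructure abstraction and the repository, so the flow can be exercised before either exists.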

@shuaibsiddiqui

Hi Manuel,

We can skip step 2; within step 1, when the GK sends the "start" event to the SLM, it can include the NSD (already fetched from the Catalogue).

I agree with the rest of the steps.

@mpeuster (Contributor, Author)

@shuaibsiddiqui you are right 👍

@mehraghdam

I agree with this approach. Once the "simple" functionality is there we can think about transferring the pieces to the right plugin as far as time allows.

@mpeuster mpeuster added this to the Full review/demo of what is done, difficulties,... milestone Mar 10, 2016
@DarioValocchi (Contributor)

A small comment on the file the SLM should pass to the Adaptor. The translation between the SD and the HEAT template should be done by the VIM wrapper, and only if the VIM is OpenStack and HEAT is available. We need a standard, abstract way to send data and commands to the adaptor, and we should discuss this further.
In my view, the SLM cannot assume that HEAT is there to receive the file, and probably it shouldn't.
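One way to picture this separation is an abstract wrapper interface, where only the OpenStack wrapper knows about HEAT. This is an illustrative sketch, not the adaptor's real design: `VimWrapper`, `deploy_service`, and `OpenStackHeatWrapper` are invented names.

```python
# Hypothetical sketch: the SLM talks to the adaptor in a VIM-agnostic format;
# SD-to-HEAT translation happens only inside the OpenStack wrapper.
from abc import ABC, abstractmethod

class VimWrapper(ABC):
    """Abstract wrapper: receives a VIM-agnostic request from the adaptor."""

    @abstractmethod
    def deploy_service(self, request: dict) -> dict:
        """Deploy from an abstract request; return runtime info for the NSR."""

class OpenStackHeatWrapper(VimWrapper):
    """Only this wrapper knows about HEAT."""

    def deploy_service(self, request: dict) -> dict:
        template = self._to_heat(request)
        # A real wrapper would push `template` to the HEAT API here.
        return {"vim": "openstack", "resources": list(template["resources"])}

    def _to_heat(self, request: dict) -> dict:
        resources = {
            "server_" + vnf["id"]: {"type": "OS::Nova::Server",
                                    "properties": {"image": vnf["image"]}}
            for vnf in request.get("vnfs", [])
        }
        return {"heat_template_version": "2015-04-30", "resources": resources}
```

A wrapper for another VIM would implement the same `deploy_service` signature without any HEAT knowledge, which is the point of keeping the translation out of the SLM.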

@smendel

smendel commented Mar 10, 2016

Step 3: should the SLM be the one translating the NSD to HEAT?
Perhaps we should use a translator called by the SLM? This is perhaps just a technical detail that doesn't really change the flow, but I think we should keep the translator(s) outside the plugins, because in the future we will want a different SLM implementation (e.g., one that includes a fancy placement algorithm while the NSD-to-HEAT translation stays the same).

@smendel

smendel commented Mar 10, 2016

(Just after posting I noticed @DarioValocchi's comment...)

@mpeuster (Contributor, Author)

@DarioValocchi Fully agreed! A final SLM should use an internal abstract representation to talk to the VIM adapter, not HEAT. I am just proposing HEAT in the first prototype (30th April!) to get a system which can deploy something in a real cloud testbed (we have OpenStack and HEAT in place in our infrastructure). To get our Y1 story working. Not more.

@smendel Sure, the SLM could call an external translator. If we could reuse an existing translator, it's even better, because that can again save some work for us.

In general: don't take my proposal as a final solution for our system. It is just to build an initial - and working - version of our orchestrator within the remaining 4-5 weeks ;-)

@smendel

smendel commented Mar 10, 2016

What about registering the instance in the relevant repositories once it is created?
I think this is very basic functionality we should also consider including.
Otherwise, the running instance will only be visible in OpenStack.

@mpeuster (Contributor, Author)

True! This should be integrated in such a simple prototype. I'll edit the steps in the original post.

@mehraghdam

If the catalogues and repositories are ready to use and the SLM can talk to them to do things like registering the instance records, then we can also keep step 2 as Manuel initially proposed it. It is not only the NSD that should be sent to the SLM; other descriptors need to be fetched as well.

@shuaibsiddiqui

Well, the GK can fetch all that is necessary for the instantiation of the service from Catalogue (or what the SLM requires to instantiate the service) before sending 'start' to the SLM.

@mpeuster (Contributor, Author)

@shuaibsiddiqui @mehraghdam Yes, no matter where it comes from (for now). Let's not open a further GK vs. Catalogue discussion in this thread. We assume here that the SLM can access all needed descriptors and artifacts. :)
@shuaibsiddiqui You mentioned a colleague working on the SLM at i2CAT during today's call? Is he able to see this discussion?

@shuaibsiddiqui

+1 for no more GK vs Catalogue (at least not for today :)

I already forwarded the link of this Github issue to him and told him the conversation here can give him a jump start on the SLM :) ..

@mehraghdam

Sorry, my bad, had it differently in mind from the MSCs, but it is actually as Shuaib says there too :)

@DarioValocchi (Contributor)

@mpeuster sorry for the late reply 😃 I agree with you that we need to speed up the process to have a working prototype very soon, but maybe we can have the same result moving the "translation" code in the OpenStackWrapper (the first to be implemented) so we can kill two birds with one stone: we don't break the model and we have it working soon.
I believe that the first thing to settle on this topic is the kind of descriptor (a subset of the NSD) the adaptor should receive from the SLM, whose creation should happen inside the SLM itself. What do you think?

@mpeuster (Contributor, Author)

+1 Moving the translation to the adaptor is definitely the cleanest solution.

For the SLM this means we have to agree on ...

  • inputs (NSD, etc from GK / responses from infra adaptor / ...)
  • outputs (request towards infra adaptor / NSR, etc. written back to repos)
  • translation NSD to infra adaptor format

Since the NSD, NSR, etc. will be defined by other sub-teams (e.g., the repo team), we should focus on:

  • SLM <-> infra adaptor part
  • we could simply "fake" the repo API for now (e.g., assume an HTTP POST to store the NSR)

@DarioValocchi When you say a subset of the NSD is pushed to the infra adaptor, why don't we forward the entire NSD in our first prototype? Wouldn't this be the most pragmatic solution?
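Faking the repo API could look something like the sketch below. Everything here is an assumption for illustration: the endpoint URL, the payload shape, and the injectable `opener` used so the fake can be exercised without a running repository.

```python
# Hypothetical sketch of "faking" the repository API via a plain HTTP POST.
# REPO_URL and the NSR payload shape are made up, not the real repo interface.
import json
import urllib.request

REPO_URL = "http://localhost:4002/records/nsr"  # placeholder endpoint

def store_nsr(nsr, opener=urllib.request.urlopen):
    """POST the NSR as JSON; `opener` is injectable so tests can fake the repo."""
    req = urllib.request.Request(
        REPO_URL,
        data=json.dumps(nsr).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return opener(req)
```

Once the real repository API is defined, only `REPO_URL` and the payload mapping would need to change, which keeps the SLM side stable.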

@smendel

smendel commented Mar 11, 2016

I agree we should work with "dummies" for now. We should at least try to keep them somehow aligned with the real interfaces that will exist eventually.
We should create issues in the relevant repositories in order to sync the interfaces with the relevant teams. @jbonnet @shuaibsiddiqui - what do you think, as you are responsible for the relevant repositories?

@shuaibsiddiqui

+1 for creating issues in relevant repositories to sync the interfaces ..
@smendel We had a long dedicated discussion yesterday on the GK, Catalogues and Repositories APIs and their basic interactions for Y1. Once I gather the input from everyone, I will send out an email.

@mpeuster (Contributor, Author)

@smendel +1

@smendel

smendel commented Mar 11, 2016

Is there a place for the 'GK-Plugins interfaces' discussion in the discussion mentioned by @shuaibsiddiqui? Or should we open a separate discussion?

Regarding the issues in the relevant repositories - I will create them and tag people from this discussion.
Feel free to change / add / etc.

@jbonnet (Member)

jbonnet commented Mar 11, 2016

@smendel Gatekeeper's API started here, but now I'll write per user story like in here. There are changes to this API/USs that I haven't yet incorporated.

@mpeuster (Contributor, Author)

The following MSC tries to capture the interactions needed for the simple first implementation planned in this issue: http://wiki.sonata-nfv.eu/index.php/NS_LifeCycle_Mgr_MSC_Simplified

@shuaibsiddiqui

Moving from Gitter to Github Issues :)

The SLM does the NSD-to-HEAT translation, right? We can add that step before step 5. Or is it done by the Infrastructure Adaptor?

@mpeuster (Contributor, Author)

Ah, OK, I missed that you also posted here. The plan was to do it in the infrastructure adaptor, so that we can have a well-defined, non-HEAT-based interface between the SLM and the infra. adaptor:

@mpeuster sorry for the late reply 😃 I agree with you that we need to speed up the process to have a working prototype very soon, but maybe we can have the same result moving the "translation" code in the OpenStackWrapper (the first to be implemented) so we can kill two birds with one stone: we don't break the model and we have it working soon.
I believe that the first thing to settle on this topic is the kind of descriptor (a subset of the NSD) the adaptor should receive from the SLM and its creation should be inside the SLM itself.

+1 Moving the translation to the adaptor is definitely the cleanest solution.
@DarioValocchi When you say a subset of the NSD is pushed to the infra adaptor, why don't we forward the entire NSD in our first prototype? Wouldn't this be the most pragmatic solution?

@shuaibsiddiqui

@mpeuster Many thanks for clarifying!

@DarioValocchi (Contributor)

@mpeuster, yes, definitely. The more I proceed with the adaptor, the more I see this point. 👍
As for your MSC, there's a crucial bit I would like to understand better: the adaptor (or maybe the VIM wrapper inside the adaptor) should download the VNF images via HTTP. Is the URL of each image contained in the NSD? (I lost some bits of the NSD evolution over the last weeks.)

@mpeuster (Contributor, Author)

@DarioValocchi As far as I know (and I also might have missed something), the NSD references the VNFDs and the VNFDs should somehow reference the images (URLs?). I guess we can clarify this during the call mentioned in Gitter tomorrow morning.

Maybe the SLM should send the Infra. Adaptor a simpler version of the information (only URLs and service graph or something like this, I see many possibilities for this).

@tsoenen (Member)

tsoenen commented Mar 16, 2016

Hi all! Sorry for joining the party late, I was otherwise occupied for the last 7 days.

@DarioValocchi and @mpeuster: since in the fully integrated version the infra. adaptor will not see the NSD, and the SLM will be fetching the VNFDs from the catalogue, @mpeuster's last proposition seems the best, I think. After the request from the GK arrives, the SLM collects the VNFDs and sends the service graph and the VNFD image URLs to the infra. adaptor.

In this approach, the infra. Adaptor will need to handle the service graph, which will not be the case in the fully integrated version I think, so maybe we need another 'station' in between (SLM - FLM interaction?) to prevent developing functionality for the infra. Adaptor that will be useless in two months? But this will make the first version more complex.
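The "collect VNFDs, forward only graph and image URLs" idea could be sketched as below. All descriptor field names (`network_functions`, `vnfd_id`, `virtual_links`, `image_url`) and the `fetch_vnfd` callable are assumptions, since the real NSD/VNFD schemas were still being defined at this point.

```python
# Hypothetical sketch: the SLM resolves VNFD references from the NSD and
# forwards only what the infra. adaptor needs. Field names are invented.

def build_infra_request(nsd, fetch_vnfd):
    """Resolve each VNFD reference (e.g. via a catalogue lookup) and keep
    only the image URLs plus the service graph for the infra. adaptor."""
    images = {}
    for ref in nsd.get("network_functions", []):
        vnfd = fetch_vnfd(ref["vnfd_id"])               # catalogue lookup stand-in
        images[ref["vnfd_id"]] = vnfd["image_url"]      # VNF id -> HTTP image URL
    return {
        "images": images,
        "service_graph": nsd.get("virtual_links", []),  # chaining information
    }
```

Whether `service_graph` belongs in this request at all is exactly the open question in this comment: if chaining later moves to an SLM-FLM interaction, only the `images` part of this request would survive.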

@mpeuster (Contributor, Author)

@tsoenen Right, sending the full NSD will not be needed later. It was just a simplification, because directly forwarding it is a one-liner ;-) But for the service graph I am not sure. Doesn't the infra. adaptor need it to do the final chain setup, e.g. by using an SDN controller?

@tsoenen (Member)

tsoenen commented Mar 16, 2016

@mpeuster In the fully integrated version, the SLM contacts the FLM for every VNF in the service graph to trigger the VNF's deployment/lifecycle. I was under the impression that this communication also included how to connect them with each other, but sending the connection details to the infra. adaptor instead seems cleaner.

@mpeuster (Contributor, Author)

@tsoenen Yes, this was also my first impression but the more I think about it the more I get the feeling that the infra. abstraction might need more of the "complete picture". But still, not sure what is the best way to go here.
