Migrate Emulator Pilot to native K8s #72

Closed · 14 tasks done
mpeuster opened this issue Jan 18, 2019 · 11 comments
mpeuster (Collaborator) commented Jan 18, 2019

This is an intermediate step before the pilot is deployed on K8s using the 5GTANGO SP: we first want a version that is deployed directly on K8s without 5GTANGO.

TODOs

See the corresponding wiki entry.

  • adapt VNFs
  • prepare K8s deployment descriptors (see the Deployment/Service sketch after this list)
    • Deployments for starting the containers
    • Deployment of each CDU in a separate pod
    • Services for accessing the containers (from outside, e.g., for the eae)
    • Ingress for simplified external access? Or simply setting a fixed nodePort? Optional for now.
  • prepare service descriptors and figure out the interconnection of CDUs
    • Test the connection from the MDC to the CC processor via the CC broker using the mosquitto CLI
    • Integrate and test all CNFs of NS1 and NS2 to work together
  • move the DT outside of NS2 and k8s and connect it to the MDC from the outside
  • combine all CDUs of one CNF in one deployment and pod that's reachable through one k8s service
  • add new CC CDUs: Prometheus DB and exporter
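
As a rough illustration of what the per-CDU descriptors could look like, here is a minimal sketch of a Deployment plus Service; the names, labels, and image are hypothetical and not taken from the pilot repository:

```yaml
# One CDU per pod: a Deployment runs the container, a Service exposes it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cc-broker-deployment       # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      cdu: cc-broker
  template:
    metadata:
      labels:
        cdu: cc-broker
    spec:
      containers:
        - name: cc-broker
          image: example/cc-broker:k8s   # hypothetical image
          ports:
            - containerPort: 1883        # MQTT
---
# ClusterIP Service so other pods can reach the broker by name
apiVersion: v1
kind: Service
metadata:
  name: cc-broker
spec:
  selector:
    cdu: cc-broker
  ports:
    - port: 1883
      targetPort: 1883
```
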
@mpeuster mpeuster self-assigned this Jan 18, 2019
@mpeuster mpeuster added this to To do in K8s-based Pilot via automation Jan 18, 2019
mpeuster pushed a commit to mpeuster/tng-industrial-pilot that referenced this issue Jan 28, 2019
Signed-off-by: peusterm <manuel.peuster@uni-paderborn.de>
@mpeuster mpeuster moved this from To do to In progress in K8s-based Pilot Jan 28, 2019
@stefanbschneider stefanbschneider self-assigned this Feb 7, 2019
stefanbschneider (Member) commented:

Started working on deploying NS1 and NS2 on k8s: https://github.com/sonata-nfv/tng-industrial-pilot/wiki/Deploying-the-service-on-Kubernetes

To ensure all VNF containers keep running in k8s, they need a different start.sh than the vim-emu VNFs, one that doesn't run its commands in the background. I adjusted all relevant VNFs to use start.vimemu.sh for the :vimemu container and start.sh for :k8s (similar to Dockerfile.vimemu and Dockerfile), as sketched below.
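
The difference is roughly the following (a sketch, assuming the broker CDU runs mosquitto; the actual scripts in the repo may differ):

```sh
# start.vimemu.sh -- vim-emu variant: background the process so the
# script returns and the emulator can continue its own setup
mosquitto -c /etc/mosquitto/mosquitto.conf &

# start.sh -- k8s variant: exec in the foreground so the container's
# PID 1 stays alive; if it exited, k8s would consider the pod dead
exec mosquitto -c /etc/mosquitto/mosquitto.conf
```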

Once all VNFs run on k8s, the next step is to configure the networking and connection of the VNFs within k8s.

stefanbschneider (Member) commented Feb 7, 2019

Currently, all pods are deployed fine at the beginning but then keep crashing every now and then:

```
$ kubectl get pods
NAME                                 READY   STATUS             RESTARTS   AGE
sm-ns1-deployment-5564965dcc-n5d57   2/3     CrashLoopBackOff   13         76m
sm-ns2-deployment-67c8f6574f-9zb82   2/2     Running            0          76m
```

It seems like this is the CC processor: it terminated with exit code 137.
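
Exit code 137 is 128 + 9, i.e. the process was killed with SIGKILL, which in k8s usually points to the OOM killer or a failed liveness probe. A debugging sketch (the container name cc-processor is a guess):

```sh
# show the restart reason and last state of the crashing container
kubectl describe pod sm-ns1-deployment-5564965dcc-n5d57

# logs of the previous (crashed) instance of that container
kubectl logs sm-ns1-deployment-5564965dcc-n5d57 -c cc-processor --previous
```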

stefanbschneider (Member) commented Feb 15, 2019

Connecting the CC processor to the CC broker within NS1 worked: both run inside the same pod, and the processor can simply subscribe to the broker at localhost:1883 as before.

Connecting the MDC in NS2 to the CC broker in NS1 doesn't work yet because they run in different pods. Next time, I'll have to follow the debug guide here to figure out what the problem is.
My current guess is that we need to change the service to type: ClusterIP (simply remove the type field, since ClusterIP is the default), but let's see.

Testing the connection works best by logging into the CC broker container and into the other VNF (e.g., the MDC) and using the mosquitto_sub CLI to test subscribing to the broker. If that works, the connection should be fine. The next step would be pushing something from the MDC via the broker to the CC processor.
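
For example (a sketch; the Service name cc-broker and the topics are hypothetical):

```sh
# inside the MDC container: subscribe to the broker through its k8s Service
mosquitto_sub -h cc-broker -p 1883 -t 'tango/#' -v

# in another container (or a second shell): publish a test message
mosquitto_pub -h cc-broker -p 1883 -t 'tango/test' -m 'hello'
```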

stefanbschneider (Member) commented Feb 20, 2019

Update/Todo: Put all containers/CDUs in separate pods so that they can be scaled independently. This also makes it easy for the SP to map descriptors to pods, using VDU = pod.

For example: CC could have 3x broker CDU, 5x processor CDU, and 1x DB CDU (see the scaling sketch below).

Apparently, this means I have to split the Kubernetes descriptors into separate Deployments and Services for each CDU: https://stackoverflow.com/a/43220561/2745116
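
With one Deployment per CDU, the example above becomes a per-CDU replica count (a sketch; the deployment names are hypothetical):

```sh
# each CDU scales independently of the others
kubectl scale deployment cc-broker-deployment --replicas=3
kubectl scale deployment cc-processor-deployment --replicas=5
kubectl scale deployment cc-db-deployment --replicas=1
```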

stefanbschneider (Member) commented Feb 22, 2019

The connection between the containers works now (with microk8s DNS enabled; see the note after the list below). MQTT also works when tested manually. Next:

  • Test and debug NS1: Connect CC processor and broker
  • NS2: Connect MDC to CC broker
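
For reference: service-name resolution between pods requires the DNS add-on in microk8s (a sketch of the 2019-era CLI):

```sh
# enable the DNS add-on (kube-dns/CoreDNS) in microk8s
microk8s.enable dns
```

Afterwards, a Service named cc-broker is reachable from other pods as cc-broker (same namespace) or cc-broker.default.svc.cluster.local.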

stefanbschneider (Member) commented Feb 27, 2019

Next up: Move the digital twin outside of NS2 and even outside of the Kubernetes deployment. Start it as a normal Docker container that accesses the MDC running inside the k8s deployment, similar to a real machine that would use NS2 from the outside.

Discuss with Manuel how to start it in privileged mode so that the Samba file share works.

On the k8s side, use a NodePort service for the MDC to expose its Samba host at a fixed port of the cluster, where it can be accessed from the outside. Pass the cluster address and the exposed node port as environment variables to the DT Docker container (see the sketch below).
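
Such a Service could look roughly like this (a sketch; the name, label, and the choice of Samba port to expose are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mdc-service            # hypothetical name
spec:
  type: NodePort
  selector:
    cdu: mdc                   # assumed pod label
  ports:
    - name: smb
      port: 445                # Samba/SMB over TCP
      targetPort: 445
      nodePort: 30445          # must lie in the default range 30000-32767
```

The DT container would then receive the node IP and the nodePort via docker run -e flags (variable names hypothetical).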

stefanbschneider (Member) commented Mar 1, 2019

Started a wiki page on how to deploy the DT as a Docker container outside of K8s (and later also on how to connect it): https://github.com/sonata-nfv/tng-industrial-pilot/wiki/Connecting-the-digital-twin

Problem: The current NodePort service maps the MDC's ports to different external ports for outside access (30000+). I'm not sure whether this easily works with the DT; it would be better to somehow expose the MDC's normal ports to the outside. I have to google again how that works.

Note: The docker run command didn't work on Windows for some reason but worked on Ubuntu. This was due to Windows line endings in start.sh; after changing them to Unix line endings (in Notepad++), it works.
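
On the command line, the same fix can be applied with, e.g. (a sketch):

```sh
# convert Windows (CRLF) to Unix (LF) line endings
sed -i 's/\r$//' start.sh
# or, if installed:
dos2unix start.sh
```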

mpeuster (Collaborator, Author) commented Mar 5, 2019

CC CDU03 (mqttexporter) and CC CDU04 (Prometheus DB) are ready to be picked up for the K8s deployment: #92

I still face some issues with the DT and MDC in the vim-emu case, which might also happen in the K8s case: #106

stefanbschneider (Member) commented Mar 15, 2019

I figured out how to assign and expose fixed ports using k8s' NodePort: see the updated wiki page.

Unfortunately, it still doesn't seem to work. The MDC still says "Error no EM63 response file was found! Retry!".

I'm not sure if this is because the k8s service isn't working properly or due to some other problem, and I'm also not sure how to debug it.

Wait: I forgot to set the Samba host IP when starting the DT! By default, this points to a fixed IP in the emulator. It should be the IP of the cluster node that exposes the MDC's ports (kubectl cluster-info).
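
That is, roughly (a sketch; the environment variable names, IP, port, and image are hypothetical):

```sh
# find the address of the cluster node that exposes the NodePort
kubectl cluster-info

# restart the DT with the correct Samba host and port
docker run -d --privileged \
  -e SMB_HOST=192.168.1.42 \
  -e SMB_PORT=30445 \
  example/dt
```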

stefanbschneider (Member) commented:

Updated the todos above according to our call and updated the wiki page: https://github.com/sonata-nfv/tng-industrial-pilot/wiki/Integration-with-SP

stefanbschneider (Member) commented:

Using a NodePort service and starting minikube with a custom port range works fine, but this doesn't seem to work for microk8s...

The workaround is to install a load balancer, as described in the attached guide:
Microk8s-guide-V2.pdf
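
For reference, the minikube variant looks roughly like this (a sketch; the range is just an example):

```sh
# start minikube with a custom NodePort range so that fixed, low ports
# can be exposed directly via NodePort services
minikube start --extra-config=apiserver.service-node-port-range=1-65535
```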

K8s-based Pilot automation moved this from In progress to Done Mar 19, 2019