
Detach monitor functionality to other container with separate supervisor #51

Closed
vkotronis opened this issue Feb 4, 2019 · 7 comments

vkotronis (Member) commented Feb 4, 2019

Is your feature request related to a problem? Please describe.
The taps on the backend compete for resources with the detection and database. We need to check if we can separate them in a different container for scalability reasons.

Describe the solution you'd like
Without changing anything in the configuration logic and control mechanisms (e.g., the UI monitor on/off switch), we need to check whether we can detach the monitor.py taps from the backend and clone listener.py and the supervisor into a new "taps" container.


pgigis (Member) commented Jun 13, 2019

Service discovery tool: https://www.consul.io/discovery.html

vkotronis (Member Author) commented:
And the Github repo: https://github.com/hashicorp/consul
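As a rough illustration of how a microservice could announce itself to Consul, here is a minimal sketch against Consul's HTTP agent API (`PUT /v1/agent/service/register`), using only the Python standard library. The agent address, the example service name, and the `build_registration`/`register` helpers are assumptions for illustration, not project code.

```python
import json
import urllib.request

CONSUL_AGENT = "http://localhost:8500"  # assumed default Consul agent address


def build_registration(name, address, port):
    """Build a Consul service-registration payload (v1 agent API)."""
    return {
        "Name": name,                       # logical service name, e.g. "monitor"
        "ID": f"{name}-{address}:{port}",   # unique per-instance id
        "Address": address,
        "Port": port,
    }


def register(service):
    """PUT the payload to the local Consul agent (requires a running agent)."""
    req = urllib.request.Request(
        f"{CONSUL_AGENT}/v1/agent/service/register",
        data=json.dumps(service).encode(),
        method="PUT",
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)


if __name__ == "__main__":
    print(build_registration("monitor", "10.0.0.5", 3000))
```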

vkotronis (Member Author) commented Jun 13, 2019

Sample mermaid diagram for system arch (https://mermaidjs.github.io/mermaid-live-editor/#/edit/eyJjb2RlIjoiZ3JhcGggVERcbkFbQmFja2VuZCBTZXJ2aWNlIE1hbmFnZXJdLS0-QltYTUxSUEMgU3VwZXJWaXNvciBFbmRwb2ludCAjMV1cbkEtLT5DWy4uLl0gXG5BLS0-RFtYTUxSUEMgU3VwZXJWaXNvciBFbmRwb2ludCAjTl1cbkItLT5FW0NvbnRhaW5lciBNaWNybyBTZXJ2aWNlcyAjMV1cbkMtLT5GWy4uLl1cbkQtLT5HW0NvbnRhaW5lciBNaWNybyBTZXJ2aWNlcyAjTl1cbkUtLT5IW0JhY2tlbmQgU2VydmljZSBEaXNjb3ZlcnldXG5GLS0-SFxuRy0tPkhcbkgtLT5BXG5cbiIsIm1lcm1haWQiOnsidGhlbWUiOiJkZWZhdWx0In19):

graph TD
A[Backend Service Manager]-->B[XMLRPC SuperVisor Endpoint #1]
A-->C[...] 
A-->D[XMLRPC SuperVisor Endpoint #N]
B-->E[Container Micro Services #1]
C-->F[...]
D-->G[Container Micro Services #N]
E-->H[Backend Service Discovery]
F-->H
G-->H
H-->A


The service manager communicates with a DNS-based service discovery module (Consul) to register the microservices of the different containers. Discovery happens at microservice boot time. The manager can then be instructed by the frontend or other modules to control (start/stop/restart) the disaggregated microservices by addressing their supervisor endpoints via XMLRPC. Note that the user does not change anything in their custom supervisor configuration; the 5 detectors (for example) will simply run in their own separate container. If this approach is fine, we can start disaggregating more and more microservices in stages (e.g., detection, then db, etc.).
Btw, all microservices should be registered on the rmq bus (not shown in the figure for simplicity), including the service manager and discovery modules.
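The manager-to-supervisor control path described above could be sketched roughly as follows. `supervisor.startProcess`/`supervisor.stopProcess` are part of supervisord's XML-RPC API; the `endpoint_url`/`control` helpers and port 9001 (a common `inet_http_server` port) are illustrative assumptions.

```python
from xmlrpc.client import ServerProxy

SUPERVISOR_PORT = 9001  # assumed supervisord XML-RPC port inside each container


def endpoint_url(instance):
    """Build the XML-RPC endpoint URL for one container instance."""
    return f"http://{instance}:{SUPERVISOR_PORT}/RPC2"


def control(instance, action, process):
    """Start/stop/restart a process on a remote container's supervisord."""
    server = ServerProxy(endpoint_url(instance))
    if action == "start":
        server.supervisor.startProcess(process)
    elif action == "stop":
        server.supervisor.stopProcess(process)
    elif action == "restart":
        server.supervisor.stopProcess(process)
        server.supervisor.startProcess(process)
    else:
        raise ValueError(f"unknown action: {action}")
```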

vkotronis (Member Author) commented:
Note that since with this architecture we will have a supervisor running per container, we need to break the initial supervisor conf down into several files and map them to the different containers. So we will need a parser and conf generator for supervisor confs.
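A sketch of what such a parser/generator could look like, assuming the supervisor conf is standard INI syntax. `split_supervisor_conf` is a hypothetical helper; interpolation is disabled so supervisord-style `%(...)s` expansions survive untouched.

```python
import configparser
import io


def split_supervisor_conf(conf_text):
    """Split a monolithic supervisor conf into one conf text per program.

    Returns {program_name: conf_text}. Only [program:*] sections are split
    out; interpolation is disabled to preserve %(...)s expansions.
    """
    parser = configparser.ConfigParser(interpolation=None)
    parser.read_string(conf_text)
    per_service = {}
    for section in parser.sections():
        if not section.startswith("program:"):
            continue
        name = section.split(":", 1)[1]
        out = configparser.ConfigParser(interpolation=None)
        out.add_section(section)
        for key, value in parser.items(section):
            out.set(section, key, value)
        buf = io.StringIO()
        out.write(buf)
        per_service[name] = buf.getvalue()
    return per_service
```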

vkotronis (Member Author) commented:
After discussion with @slowr @pgigis, the following steps need to be taken:

  1. Create dedicated supervisor files per microservice (e.g., monitor, detection). The user will no longer edit the services.d/supervisor.conf file, but will simply set scaling numbers in docker-compose.
  2. Pass the basic name of the microservice in the respective container as a variable in the docker-compose.yaml.
  3. When a container comes up, it sends the following string to redis: <container_name>_<hash_from_/etc/hostname>. Note that the hash from /etc/hostname is pingable by other containers.
  4. Redis registers the new service (e.g., in a set indexed by the basic name of the service).
  5. When the user needs to control services via the UI, redis is consulted and the corresponding services are controlled via their supervisor endpoints (answering at the respective hash names). Note that if a certain service does not respond, it can be considered "down" and be removed from redis.

@slowr if possible check if you can create a prototype of this discussed workflow for a single container (e.g., by isolating/detaching detection or monitor; you can start with monitor).
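The registration flow of steps 3-5 above could be sketched like this, assuming the redis-py client is available. `service_key`, `register_instance`, and `live_instances` are illustrative names, not project code.

```python
# Minimal sketch of steps 3-5: redis-py is an assumed dependency; if it is
# not installed, only the pure key construction below can be exercised.
try:
    import redis
except ImportError:
    redis = None


def service_key(container_name, hostname_hash):
    """Compose the registration string a container sends to redis on boot."""
    return f"{container_name}_{hostname_hash}"


def register_instance(r, container_name, hostname_hash):
    """Add this instance to the set indexed by the service's basic name."""
    r.sadd(container_name, service_key(container_name, hostname_hash))


def live_instances(r, container_name):
    """Look up all registered instances for a service (consulted by the UI)."""
    return {m.decode() for m in r.smembers(container_name)}
```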

@vkotronis vkotronis changed the title Check detaching of monitor functionality to other container Detach monitor functionality to other container with separate supervisor Jul 30, 2019
@slowr slowr mentioned this issue Jul 31, 2019
vkotronis (Member Author) commented:
@slowr Since the first part is done, but the redis queries are pending, I am moving this to 1.4.0 for completion.

@slowr slowr removed the monitoring label Sep 21, 2019
@vkotronis vkotronis removed this from the release-1.4.0 milestone Oct 4, 2019
@vkotronis vkotronis added p/medium Medium priority and removed p/high High priority labels Oct 11, 2019
@vkotronis vkotronis added this to the release-1.5.0 milestone Nov 16, 2019
slowr (Member) commented Dec 30, 2019

I am going to close this ticket, as the first iteration has already happened. Splitting the services into separate containers needs a bigger core change.

@slowr slowr closed this as completed Dec 30, 2019