
[Feat] Daemon for interaction with OS and hardware #2849

Open
2 tasks done
dnmeid opened this issue Apr 27, 2024 · 7 comments
Labels
area/backend Something in the core of companion area/surface Something to do with a control surface (eg Streamdeck) Enhancement New feature or request Idea

Comments

@dnmeid
Member

dnmeid commented Apr 27, 2024

Is this a feature relevant to companion itself, and not a module?

  • I believe this to be a feature for companion, not a module

Is there an existing issue for this?

  • I have searched the existing issues

Describe the feature

Today we have little to no ability to interact with hardware connected to the computer, or with the OS itself. The only exception is surfaces, which are handled by core or via Satellite, but with some limitations.
The proposed feature is to have a daemon per OS which handles all that. That means:

  • USB Surface communication is handled by the daemon and not by core, so the daemon replaces Satellite. The daemon can offload rendering stuff from core.
  • daemon provides access to OS stuff like:
    • MIDI Interfaces
    • Serial ports
    • Printer ports
    • GPIOs
    • other interesting USB devices, like HID, Gamecontrollers, DMX
    • Bluetooth
    • Sensors
    • Keyboard(s)
    • Mouse(s)
    • Running processes (count, names, windows, control, hooks)
    • Probably injection of native libraries used by modules
    • Filesystem
  • it should be possible to install the daemon with one installer together with Companion core
  • daemons on the same or on remote computers should be automatically detected and connected, to avoid adding another layer of complexity for the user
  • one instance of Companion should be able to use the resources from multiple daemons at the same time
  • multiple instances of Companion should be able to use the resources from one daemon at the same time with exclusive access as an option
  • daemon should be performant and lightweight, probably not electron based

That way Companion core can be a network-only application and the whole system can be more distributed and flexible.
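To make the sharing rules above concrete, here is a minimal sketch (all names hypothetical, not an existing Companion API) of how a daemon could track claims on its resources from multiple Companion instances, with exclusive access as an option:

```javascript
// Hypothetical sketch: a daemon-side registry that lets multiple Companion
// instances share a resource, or one instance claim it exclusively.
class ResourceRegistry {
  constructor() {
    this.claims = new Map() // resourceId -> { exclusive: bool, owners: Set<companionId> }
  }

  claim(resourceId, companionId, { exclusive = false } = {}) {
    const entry = this.claims.get(resourceId)
    if (!entry) {
      this.claims.set(resourceId, { exclusive, owners: new Set([companionId]) })
      return true
    }
    // Refuse if the resource is already held exclusively,
    // or if the new claim wants exclusivity over an already-shared resource
    if (entry.exclusive || exclusive) return false
    entry.owners.add(companionId)
    return true
  }

  release(resourceId, companionId) {
    const entry = this.claims.get(resourceId)
    if (!entry) return
    entry.owners.delete(companionId)
    if (entry.owners.size === 0) this.claims.delete(resourceId)
  }
}
```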

Usecases

  • Using surfaces connected at local PC
  • Using surfaces connected at remote PC
  • Triggering Keypresses at local or remote PC
  • Accessing processes at local or remote PC
  • Accessing USB Hardware or ports at local or remote PC
  • ...
@dnmeid dnmeid added Enhancement New feature or request Idea area/surface Something to do with a control surface (eg Streamdeck) area/backend Something in the core of companion labels Apr 27, 2024
@Julusian
Member

I have more thoughts, which I will write out later.

But a big one is, should modules run in companion or the daemon?
Let's assume that we could auto deploy modules to the daemon (for the v3 ones, we definitely could)
I think some modules will need to run on this daemon, for example one which needs a serial port. Because the module could be using an existing npm library which doesn't give us the flexibility of injecting a proxy for the serial port access, or simply would end up with terrible performance due to waiting for promises which are now dependent on network latency.
I feel like we could say that modules always run in a daemon, which by default would be the local one.

@dnmeid
Member Author

dnmeid commented Apr 27, 2024

But a big one is, should modules run in companion or the daemon?

My first answer would be in Companion, but I'm open to any discussion.
If the module runs in the daemon, the daemon would need to be able to run javascript and whatever we may support in the future. I don't have a strong aversion to this, but it will definitely make the daemon less lightweight.
If the module runs in the daemon, one module can only use the resources of the daemon it is running on. Again, I don't feel this is a big con.
In my opinion the proxying part is a big pro of it: making stuff available to more than one module.
If the module is fine with using a resource by an interface provided by the daemon, what would be the pros/cons of running it in Companion or the daemon?
If the module relies on a library which uses a resource out of our control, what would be the pros/cons then?
One con of running the complete module in the daemon may be that the module then needs to be OS dependent, where one of my goals is to make as much as possible OS independent in Companion and provide it to the modules in a standardized way. The OS dependent stuff should be done only once, for the daemon itself.
I would like to have some functionality for modules that need to use OS dependent stuff and manage it themselves, e.g. using an SDK, but I feel like this will apply to only a small number of modules.

@Julusian
Member

Another feature that I think should be supported from the beginning:

  • Network traffic between the daemons and Companion should be encrypted and authenticated.
  • Not something to do or really think about initially, but it would be good if this could be possible to use over the internet without port forwarding, perhaps as part of companion cloud.

daemon provides access to OS stuff like:

It sounds like you want to provide an abstraction/friendly api over these things.
For many of these, I'm not sure how worthwhile that will be for us to do, given that some modules will be using libraries that expect direct access. But this is similar to the TCPHelper and related classes: modules without an existing library use these, and any module that does use a library will not.

I'm not entirely opposed to the idea, but I don't think it will work in every scenario. And I would still like to allow modules to be written in other languages one day, I have no idea how these abstractions would impact that.


daemons on the same or on remote computers should be automatically detected and connected, to avoid adding another layer of complexity for the user

daemons on the same computer should be auto-connected to.
daemons on remote computers should be listed in the ui as available to be connected to, but I don't think it should auto-connect.
We definitely need to have some form of security in the protocol they speak, otherwise it is a bigger security hole than companion is today. And connecting to everything will often be undesirable; in my experience it is common for every operator to have their own companion install on a shared network. Having every companion be automatically linked would be annoying.
But this could definitely be made simple to 'pair', such as the user entering the 4 digit code provided by the daemon.


daemon should be performant and lightweight, probably not electron based

I am open to this, but if not nodejs what languages would you consider?

For those unaware, the current model of companion is:

  • the launcher window is electron
  • the launcher runs a nodejs child process which is companion
  • companion runs each connection/instance as its own child process, which today are all nodejs

If you run headless companion (eg companionpi), then the launcher layer is skipped and systemd runs the nodejs process which is companion.
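For illustration only, a headless setup along those lines might use a systemd unit roughly like this (paths and names are made up, not the actual companionpi unit file):

```ini
# Illustrative sketch, not the real companionpi unit.
[Unit]
Description=Bitfocus Companion (headless)
After=network-online.target

[Service]
# systemd replaces the electron launcher layer and runs nodejs directly
ExecStart=/usr/bin/node /opt/companion/main.js
Restart=on-failure
User=companion

[Install]
WantedBy=multi-user.target
```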

@Julusian
Member

Another reason I have for running the modules in the daemon is that it allows for a better distribution of processing.
Instead of needing a single large powerful companion machine which runs everything, you can push the modules out to smaller 'compute' nodes. And you could run these daemons close to what they are controlling.
It will also allow for scenarios like the following:

Perhaps you are doing a show which has some vmix in aws, an atem and other stuff in the studio and another vmix and streamdecks with the operator at home.
You could run companion in aws, with the built in daemon talking to the aws vmix. Add another daemon in the studio which talks to the stuff there, and a third daemon with the operator which will talk to their vmix and add their streamdecks to the system.
With this model, only the minimal amount of data will need to be pushed between the daemons and companion. Not the full vmix data each poll, just whatever the module wants to report back. And no risk of the atem connection timing out due to network latency, as that part happens locally.

And by running the modules in the daemon, if you connected two different companions to the daemon, they could both be able to run actions on the running connections/instances simultaneously. So this would give the benefit of sharing a limited network resource with multiple companions.


If the module runs in the daemon, the daemon would need to be able to run javascript and whatever we may support in the future.

Yes it would, but only when the user specifies a connection/instance should be run on that daemon.
The v3 module system was designed from the beginning so that modules don't have to be nodejs, and don't have to be the same version of nodejs. Which I think will work in our favour, as I think it means the same could be said for the other direction too (other than one small issue which could be resolved).

If the module runs in the daemon, one module can only use the resources of the daemon it is running on. Again, I don't feel this is a big con.

I agree. To clarify, I think that a particular connection/instance of a module should be run on a single daemon. So if that resource limitation is a problem, then I would question why a connection/instance needs physical presence on two separate machines; to me that sounds like an odd scenario.

In my opinion the proxying part is a big pro of it: making stuff available to more than one module.
If the module is fine with using a resource by an interface provided by the daemon, what would be the pros/cons of running it in Companion or the daemon?

The only pro/con I have currently is in favour of the daemon: it makes things more predictable.
For example:

await streamdeck.drawKey(0, ....)
await streamdeck.drawKey(1, ....)

This is a very simple case that expects to draw to a streamdeck as fast as possible. But if the hid proxy it is using is being run over a vpn, then a drawKey call which locally would take 5ms might now take 105ms.
Let's ignore whether that has any real impact on how the first write behaves; that depends on what the source of this was. But that second write is waiting for the first to complete before it begins, so it is now starting to execute 100ms later than it would have.

Unless usages of these proxies are carefully crafted to consider this latency, they will have a notable impact on how they 'perform'. For a streamdeck, this could worst case manifest as the drawing being slow with a rolling effect as our code iterates through all the buttons. Or best case it would mean that we are limited to a much slower fps of drawing each button.

But if the proxies are being used only locally, then I doubt there is a significant enough impact to care about.
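As an illustration of crafting usage to tolerate that latency, the two writes can be issued concurrently instead of awaited in turn; with a simulated 100ms round-trip, the concurrent version takes roughly one round-trip instead of two (toy code, not the real streamdeck API):

```javascript
// Toy illustration: a fake proxy where every call costs a fixed round-trip.
const LATENCY_MS = 100
const log = []

function drawKey(key) {
  log.push(`start ${key}`)
  return new Promise((resolve) =>
    setTimeout(() => {
      log.push(`done ${key}`)
      resolve()
    }, LATENCY_MS)
  )
}

async function drawSequential() {
  // Two awaited calls: total time is roughly 2x the latency
  await drawKey(0)
  await drawKey(1)
}

async function drawConcurrent() {
  // Both writes are in flight at once: total time is roughly 1x the latency
  await Promise.all([drawKey(0), drawKey(1)])
}
```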

If the module relies on a library which uses a resource out of our control, what would be the pros/cons then?

Yeah, in this case a module will only be able to access the resources on the same machine. And it means that the rules enforced by the OS on ownership/exclusivity would have to be respected.

To me, the value in these proxies is to:

  • simplify/expose the api for modules to use without needing to reinvent everything themselves. This only makes sense if it is used by multiple modules
  • share resources which have exclusivity that we disagree with (eg Shared UDP Listener #2399)
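
As a sketch of the second point, a daemon-side proxy could bind a UDP port once and fan incoming packets out to every module that subscribed (all names hypothetical; the socket factory is injected so the real dgram I/O stays in the daemon):

```javascript
// Hypothetical sketch of the proxy idea behind "Shared UDP Listener" (#2399):
// one real socket per port, fanned out to every module that asked for it.
class SharedUdpListener {
  constructor(createSocket) {
    // createSocket(port, onMessage) -> socket with a close() method
    this.createSocket = createSocket
    this.ports = new Map() // port -> { socket, handlers: Set }
  }

  subscribe(port, handler) {
    let entry = this.ports.get(port)
    if (!entry) {
      // Only the first subscriber actually binds the port
      const e = { socket: null, handlers: new Set() }
      e.socket = this.createSocket(port, (msg, rinfo) => {
        for (const h of e.handlers) h(msg, rinfo)
      })
      this.ports.set(port, e)
      entry = e
    }
    entry.handlers.add(handler)
    // Returned function unsubscribes; the last one out closes the socket
    return () => {
      entry.handlers.delete(handler)
      if (entry.handlers.size === 0) {
        entry.socket.close()
        this.ports.delete(port)
      }
    }
  }
}
```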

One con of running the complete module in the daemon may be that the module then needs to be OS dependent, where one of my goals is to make as much as possible OS independent in Companion and provide it to the modules in a standardized way. The OS dependent stuff should be done only once, for the daemon itself.

Yes, but I think the current model works sufficiently well for this.
We require the native libraries which interact with these things to be written such that they are OS independent, then we ship all of those as part of the module.
This has resulted in me doing some work to make a native library conform to this structure, but some have been fine without needing any work.

@dnmeid
Member Author

dnmeid commented Apr 28, 2024

  • Agree, encryption/authentication is a bonus.
  • Agree, automatic connect to local, manual to remote.

The languages I'd consider are C++/Qt, Rust, or Go.

Yes, I have some abstraction for this stuff in mind. I think there will be more or less one generic module using each resource, e.g. generic-midi, generic-keyboard (replacing vicreo-listener). But these modules will be used a lot. Some resources may be used by a few more modules, like serial, but I guess most of them will be able to use the provided abstraction.

My first idea was to make this daemon lightweight, but more powerful than e.g. vicreo-listener. So it seems to me that the main drawback of allowing modules and a lot of other stuff to be computed in the daemon is the buildup of size and complexity. I'm fine with that if it will still be possible to run the daemon on a Raspberry Pi with maybe three Streamdeck XLs and GPIO, as it is also meant to replace Satellite.
The advantages of running modules in the daemon seem reasonable to me.

@Julusian
Member

I would rather not use C++/Qt, I do enough C++ that it is a language I wouldn't choose for anything new.
I would be happy with rust, and I could accept go. I've not used either of them for something like this though, but I'm happy to give it a try.

Since I did the split of companion from the launcher, I have been thinking that it would be a good idea to rewrite the launcher so that it isn't electron, but I haven't been able to justify the effort or the need for another language in companion to myself yet.
But that should probably follow whatever decision gets made for this. Potentially it could be used as a test project to make sure we don't hate the chosen framework; longer term, having them match would be good for consistency and ease of maintenance.

I would be tempted to use a similar architecture in the daemon with a 'launcher' process, and then the main process being nodejs. Even if we don't use nodejs for that, unless we want to switch to a rust/go/whatever streamdeck library, then in a majority of cases this daemon will still need nodejs to be able to do streamdecks.


Yes, I have some abstraction for this stuff in mind. I think there will be more or less one generic module using each resource

I think it depends. For something like keyboard, then yeah I doubt there will be more modules which use that.
For midi, I can imagine that various midi modules might be useful. I know that some old Yamaha desks only support physical midi as a control protocol, so it wouldn't be entirely unreasonable to have a module which exposes that in a nicer way than would be possible through generic-midi.

So I am still a bit unsure on what can/should be abstracted, but I am open to it. It also sounds to me like we don't need to conclude on that before the rest of this can be started, as those abstractions will most likely become additions to the module-api, and don't have to be done at the same time as the daemon is created.
And I guess that as part of this daemon, #2041 should also be done, so that the code for the surface handling is packaged and distributable similarly to, or the same way as, the modules.


Some other slightly more UX thoughts:

  • The daemon should probably have a ui which shows:
    • companion installations which have access/permissions to the daemon, and whether they are currently connected
    • the connections currently running on this daemon, possibly with the full management of those connections ui.
    • show what modules are installed on the daemon, and manage them (install from local zip, install from 'store', remove, update etc)
    • show the surfaces connected to the daemon, and manage what companion they are assigned to
  • I expect that a lot of this ui will also want to exist inside of companion, to get a better overview across all of the daemons.

When I list it like that, it feels like a lot of work, and some duplicated work. But I maintain that this would be a good architecture. Essentially it boils down to this: the daemon acts as the io layer; without any companions connected it should be able to remember the connections it runs and keep running them.
Then Companion connects to these daemons and provides the mapping of this io to the buttons/grid and triggers, and is still the brains.

@phillipivan

Sorry to butt into this conversation. However, I can see another potential advantage / use case of running the modules on said daemon which I'd like to submit for consideration (apart from the hardware interaction)...

Occasionally a protocol will involve broadcast or multicast messaging, in which case, short of some fairly involved network trickery, it is necessary to be in the same network segment as the devices you are talking to. Similarly, it is sometimes necessary to locate control in the same network segment as the controlled device because (a) the protocol specifies no keep-alive (and thus is poorly suited to routing across firewalls) or (b) the device cannot have a gateway configured (I've seen this most on devices with multiple NICs, where only one of those NICs can have a gateway configured).

Running modules in these daemons may open avenues for working with these devices in more complex network architectures, since the daemon could be located logically proximate to the devices and communicate back to the companion host via a more sane TCP connection.
