High level architecture #2

Closed
refack opened this Issue May 25, 2017 · 22 comments

@refack
Member

refack commented May 25, 2017

Follow up to #1

Given the use cases brought up in Step 1) Figure out use cases, what should the high-level architecture be?
Let's try to consider the pros and cons and not just bikeshed 😉

Some options that were brought up in nodejs/node#13143:

  1. Multithreading - nodejs/node#2133 and node-webworker-threads
  2. Multi process with mutable shared memory - nodejs/node#13143 (comment)
  3. Multi process with immutable shared memory and serialized communication - nodejs/node#13143 (comment)
  4. Multi process with only serialized communication (roughly what `child_process.fork()` gives today; see the sketch below)
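
For a concrete baseline, option (4) is more or less what `child_process.fork()` already provides: separate processes, with every message serialized over an IPC channel. A minimal sketch (file names are illustrative):

```js
// parent.js: option (4) as it exists today; every message is serialized over IPC
const { fork } = require('child_process');

const worker = fork('./worker.js');

worker.on('message', (result) => {
  console.log('result from worker:', result);
  worker.kill();
});

// The object below is copied (serialized), never shared.
worker.send({ cmd: 'sum', data: [1, 2, 3, 4] });
```

```js
// worker.js
process.on('message', ({ cmd, data }) => {
  if (cmd === 'sum') {
    process.send(data.reduce((a, b) => a + b, 0));
  }
});
```

Option (3) would keep this process layout but add a read-only shared-memory region alongside the serialized channel.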
@refack

Member

refack commented May 25, 2017

Considering @bnoordhuis's comments on the complexity of multithreading, and based on personal experience, I'm pro multi-process. IMHO it seems like the more natural solution for JS in general and nodejs in particular.
Intuitively I would vote for option (3), but I'm not sure how well it serves the "parallelize heavy computation" use case over option (2)...

Pros for (3):

  • IMHO simplest to implement
  • Covers the "utilize multi core" requirement
  • Could fulfill the "prioritize main thread over Workers" requirement
  • Immutable shared memory fulfills the "efficiently share large amount of data" requirement (although only one way)

Cons for (3):

  • Depends on OS for IPC and shared memory (platform specific code fragmentation)
  • Necessitates a mechanism for loading/sharing code
  • Multithreading could be considered simpler to grok and use
  • Necessitates implementing immutability of shared memory and an efficient communication protocol.
  • Probably more memory intensive.

I'll be happy to add to this list based on future comments.

Some biased refs off the top of my head:

  • Python's GIL
  • Chromium's multi-process architecture

@refack

Member

refack commented May 25, 2017

@addaleax too soon?

@addaleax

Member

addaleax commented May 25, 2017

I think a more reasonable next step would be to figure out whether what we're going for is exposing a full Node API, or just a reasonable minimum that doesn't include things like I/O (which is part of the reason why I was starting by asking for use cases).

If we want a full Node API in Workers, yes, multi-process probably makes the most sense. But I'm not sure whether that's a good idea; I could very well imagine that using parallel workers to do more I/O would be considered an anti-pattern. It would also make life easier for those who want to use Workers for script isolation.

@refack

Member

refack commented May 25, 2017

Ack. That's why I didn't call this "Step 2".
But it was on my mind, and was discussed in nodejs/node#13143...

@vkurchatkin

Member

vkurchatkin commented May 25, 2017

I think a more reasonable next step would be to figure out whether what we're going for is exposing a full Node API, or just a reasonable minimum that doesn't include things like I/O (which is part of the reason why I was starting by asking for use cases).

I think we need both modes.

@refack

Member

refack commented May 25, 2017

Your comments made me realize I had a hidden assumption that we are planning to comply with the Web Worker API. But I realize that's TBD as well. Thank you 👍

@icodeforlove

icodeforlove commented May 26, 2017

@refack I think it would be great if we could conform to the Web Worker API; it would be less of a learning curve, and they have already solved most of the things we would be doing. Like you mentioned before, we already have non-standard things like cluster and fork.

The big advantage of Workers would be a standardized, performant way of utilizing all cores and interfacing with large ArrayBuffers.
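
For reference, this is what the browser's Web Worker API already provides: `postMessage` with a transfer list hands an `ArrayBuffer` to the worker without copying it. A browser-style sketch (nothing here is Node API today):

```js
// main thread (browser-style Web Worker API)
const worker = new Worker('worker.js');

const buf = new ArrayBuffer(64 * 1024 * 1024); // 64 MB
// Listing `buf` in the transfer list moves it instead of copying;
// after this call buf.byteLength is 0 in this thread.
worker.postMessage({ cmd: 'process', buf }, [buf]);

worker.onmessage = (e) => console.log('done:', e.data);
```

```js
// worker.js
onmessage = (e) => {
  const view = new Uint8Array(e.data.buf);
  // ...heavy computation over `view`...
  postMessage({ bytes: view.length });
};
```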

@addaleax

Member

addaleax commented May 26, 2017

Note that providing the WebWorker API and a full Node API aren't necessarily mutually exclusive; I agree, having the former is very likely a good idea.

@vkurchatkin

Member

vkurchatkin commented May 26, 2017

IMO implementing WebWorker API should be a non-goal. It could be easily implemented in userland on top.
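
To illustrate the "userland on top" idea: assuming core only exposed some low-level primitive (a hypothetical `createWorker()` returning a message-port-like object; both the module name and the API are made up for this sketch), a Web Worker-shaped wrapper is mostly glue:

```js
// Hypothetical low-level core API: createWorker(filename) returns an object
// with postMessage(), on('message'), and terminate(). None of this exists yet.
const { createWorker } = require('hypothetical-core-workers');

class WebWorkerShim {
  constructor(filename) {
    this._worker = createWorker(filename);
    this._worker.on('message', (data) => {
      if (typeof this.onmessage === 'function') this.onmessage({ data });
    });
  }
  postMessage(data, transferList) {
    this._worker.postMessage(data, transferList);
  }
  terminate() {
    this._worker.terminate();
  }
}

module.exports = WebWorkerShim;
```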

@domenic

Member

domenic commented May 26, 2017

FWIW the language spec these days has a built-in model of "agents" and "agent clusters" which are meant to represent threads and processes. SharedArrayBuffers can be shared between agents, but not between agent clusters.
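
Concretely: within one agent cluster (e.g. the threads of a single process), posting a `SharedArrayBuffer` gives both sides views over the same memory, with no copy and no transfer; across agent clusters (separate processes) the spec forbids it. A browser-style sketch of the in-cluster case:

```js
// main agent
const worker = new Worker('worker.js');
const sab = new SharedArrayBuffer(1024);
const shared = new Int32Array(sab);

// Not copied and not transferred: both agents now reference the same memory.
worker.postMessage(sab);

setTimeout(() => console.log(Atomics.load(shared, 0)), 100); // observes the worker's write
```

```js
// worker.js (same agent cluster as the main agent)
onmessage = (e) => {
  const shared = new Int32Array(e.data);
  Atomics.store(shared, 0, 42);
};
```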

@NawarA

NawarA commented Jun 14, 2017

I believe we should allow for an architecture that allows for horizontal, peer to peer messaging. Basically, let's have the ability to postMessage to any event loop. Today, we can send a message from Master to worker, but workers should be able to communicate directly with other workers.

If we can do this, and add shared memory between event loops using the shared array buffer (available in 6.0), I think that's a big win for the kind of software architectures we'll enable others to create. It'd be a big deal.
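
The browser side of this already exists in the form of `MessageChannel`: the parent creates a channel and transfers one port to each worker, and from then on the two workers message each other directly. A browser-style sketch of that pattern:

```js
// parent: wire two workers together so they can talk without going through the parent
const workerA = new Worker('a.js');
const workerB = new Worker('b.js');

const { port1, port2 } = new MessageChannel();
workerA.postMessage({ peer: port1 }, [port1]);
workerB.postMessage({ peer: port2 }, [port2]);
```

```js
// a.js: messages peer worker B directly
onmessage = (e) => {
  const peer = e.data.peer;
  peer.onmessage = (msg) => console.log('from peer:', msg.data);
  peer.postMessage('hello from A');
};
```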

@refack

Member

refack commented Jun 14, 2017

It'd be a big deal

It's definitely worth thinking about. I'm just wondering how the workers will identify each other 🤔 (know if the other side is there/responsive/alive)...
@NawarA Do you have a use case in mind? (Re #1 )

@rtfeldman

rtfeldman commented Jul 10, 2017

Anecdotally, I'm using child_process.fork for the elm-test CLI, and avoiding the per-message IPC cost would be a big deal to me!

For my use case, any design that allows computation across multiple cores with less message-passing cost than IPC is 😍, and anything with the same (or more) message-passing cost than IPC means I'd just keep using child_process.fork. 😄

Not having to spawn separate processes (which only (1) would avoid) would be nice, as spawning processes contributes to overall execution time whenever someone runs elm-test, but it's not a huge deal.

@devsnek

Member

devsnek commented Jul 18, 2017

I think this is a big opportunity to do the work needed to switch to a multithreaded model, rather than trying to get around that. Implementing all the fun of mutable shared memory and atomics and whatnot would be a huge step forward for what is possible to do in node, and I think it would totally be worth the work of converting the codebase to play well with multithreading.

@bnoordhuis

Member

bnoordhuis commented Jul 18, 2017

@devsnek Atomics and shared memory are possible in both multi-process and multi-thread mode.
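
To make that concrete: `Atomics` only needs an integer typed-array view over memory that both sides can see; it does not care whether that memory is shared between threads or mapped into several processes. A sketch of the basic wait/notify handshake (assuming `sharedInt32` is such a view; in browsers, `Atomics.wait` is only allowed off the main thread):

```js
// Consumer: block until slot 0 changes away from 0, then read the payload in slot 1.
function waitForSignal(sharedInt32) {
  Atomics.wait(sharedInt32, 0, 0); // sleeps only while sharedInt32[0] === 0
  return Atomics.load(sharedInt32, 1);
}

// Producer: publish a value and wake one waiter.
function signal(sharedInt32, value) {
  Atomics.store(sharedInt32, 1, value);
  Atomics.store(sharedInt32, 0, 1);
  Atomics.notify(sharedInt32, 0, 1);
}
```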

@devsnek

Member

devsnek commented Jul 18, 2017

that wasn't my point but ok

@bnoordhuis

Member

bnoordhuis commented Jul 18, 2017

Then what is your point?

@p3x-robot

p3x-robot commented Aug 27, 2017

See my #4 (comment)

@rektide

rektide commented Nov 29, 2017

I'd like to see this built around a very conventional UNIX model, if at all possible. Sending file descriptors over (unix domain, &c) sockets would be ideal; that would mean something like usocket. I don't know much about how memory is allocated to back something like a SharedArrayBuffer, but if we can use memfd to allocate that memory, we can pass the file descriptor to other processes.

There's a non-JS example of this kind of thing here. This author also seals the memfd before sending, making it immutable, but I'm 95% confident that's optional.

This is more or less the standard way of passing data among processes on Unix, & it would be baller if WebWorkers backed onto this common, well-known architecture for data-sharing in a multi-process environment. Creating a SharedArrayBuffer backed by memory allocated via memfd would be the major core demonstrator, proving out the viability.
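
Node already has a narrow version of this plumbing for sockets and servers: `subprocess.send(message, handle)` passes the underlying handle (a file descriptor) to the child over the IPC channel. memfd itself isn't reachable from JS, so this sketch only shows the existing handle-passing half that a memfd-backed SharedArrayBuffer would build on:

```js
// parent.js: hand a listening socket's fd to a child process
const { fork } = require('child_process');
const net = require('net');

const child = fork('./child.js');
const server = net.createServer().listen(0, () => {
  // The second argument passes the underlying handle (fd), not a copy of any data.
  child.send('server', server);
});
```

```js
// child.js
process.on('message', (msg, server) => {
  if (msg === 'server') {
    server.on('connection', (socket) => socket.end('handled by child\n'));
  }
});
```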

@p3x-robot

p3x-robot commented May 3, 2018

I am using unix sockets with redis; it is much faster than TCP, by about 50% :)

@addaleax

Member

addaleax commented Oct 3, 2018

I'm closing the existing issues here. If you have feedback about the existing Workers implementation in Node.js 10+, please use #6 for that!
