Why is it so hard to add threads to Node.js? Or web workers? #4

Closed
p3x-robot opened this Issue Jun 4, 2017 · 93 comments

@p3x-robot

p3x-robot commented Jun 4, 2017

I tried a C++ async addon, and it took me one day (plus a cancel-thread function).
Why is it so hard to add a thread to Node.js?

@refack

Member

refack commented Jun 5, 2017

Ref: nodejs/node#13143 (comment)
I believe it's not hard to implement; what might be hard is closing all the edge cases. It's just that this is a big code base that was written and tested as a single-(execution-)thread program.
IMHO that is why we are trying to understand what the use cases are, and whether the cost will be worth the benefit. Just as an example: if a multi-process architecture can improve 80% of the use-cases for 20% of the cost, then that's a good investment of everybody's time 👍

@p3x-robot

p3x-robot commented Jun 5, 2017

But V8 is already there, and Chakra has web workers. Isn't that all?
We keep the non-blocking single-threaded event loop and just add web workers, which already exist, via threads. We don't need anything else, like atomics, since it would work like web workers: closed off, and we can get data via events, the default way web workers work.
Maybe later shared arrays, WebAssembly, etc...

It looks like nothing much.
Writing async functions in a C++ addon is simple, but for 75% of use cases we don't even need C++; JS is enough.

Only for AI might we need horsepower, like facial recognition or speech.

So what is the extra we need to implement, when it is in Chrome and Chakra already?

@addaleax

Member

addaleax commented Jun 5, 2017

we don't need anything else, like atomics, since it would work like web workers

Well, having shared memory available would be a huge plus for Workers. You can already get acceptable event-based process coordination using process.on('message') and process.send(), so there are use cases already covered.

But V8 is already there, and Chakra has web workers. Isn't that all?

If you want simple WebWorkers, yes, that would be comparatively easy. If you want to provide the full Node.js API (including things such as require()), it gets tricky, because a lot of the current code assumes isolation by process, so we’d likely prefer a multi-process model. Implementing shared memory support for that would be tricky.

Also, just to give everybody a status update, I’ll try to draft an EP text this week.

@p3x-robot

p3x-robot commented Jun 5, 2017

I guess require should be isolated, keeping it simple like Node.js. Programmers can choose how to work with shared memory: today we use multiple servers/cores anyway, so it can be left to the developers. Myself, I use Redis as shared memory, but you could use a cluster, or one shared process with event-based communication.
So while shared memory can be tricky, it can be provided via npm instead of Node.js itself, by anyone else. :)

Don't you think? Once we have web workers, shared memory can be implemented by many solutions, like express, connect, koa, etc...

@p3x-robot

p3x-robot commented Jun 5, 2017

Also, don't you think we should not try to implement it right away? It will take a few versions and refactors to make it mature.

With an isolated require plus native web workers, we would have something awesome for things like calculations.

Why do you think IO should not be included?

@p3x-robot

p3x-robot commented Jun 5, 2017

If you want simple WebWorkers, yes, that would be comparatively easy. If you want to provide the full Node.js API (including things such as require()), it gets tricky, because a lot of the current code assumes isolation by process, so we’d likely prefer a multi-process model. Implementing shared memory support for that would be tricky.

So we need require in isolation and we are done? require('something') in a thread will not be the same as require('something') in the main thread; they will have different memory and variables. So are we done?

Later we can add shared data, Atomics, etc. as well, but not everything right away :)

@addaleax

Member

addaleax commented Jun 5, 2017

So we need require in isolation and we are done?

That depends on what you mean by “in isolation”. What can you require in Workers? Built-in modules? Not everything would work outright, like fs, but right now require depends on fs working fine. Other things (e.g. everything in the v8 or vm modules) should probably be available in some way anyway, using require or not.

Why do you think IO should not be included?

I’m not saying it should not be included, I’m saying we might need a few tricks to get it to work. Or generally: The more of Node’s internals we want to expose, the more carefully one needs to think about what could go wrong.

Don't you think we should not try to implement it right away? It will take a few versions and refactors to make it mature?

Because I’d like to make sure that we come up with an API that addresses everybody’s use cases, and doesn’t just take the path of least resistance. In particular, I would really not want to get us locked into a specific implementation and look back in a few months and have to say “yeah, this was a bad idea” (for example, it’s tempting to go for multi-thread support rather than multi-process support, but there are very valid points for not doing that).

@p3x-robot

p3x-robot commented Jun 5, 2017

Isn't it just, in versions:

  1. the default V8/Chakra web worker standard
  2. importScripts => require
  3. passing shared buffers (like the web worker API)

What other use cases are there?
Why is require('fs') tricky, when the require cache is isolated, so the main process and every thread each have their own require?

@refack

Member

refack commented Jun 5, 2017

But V8 is already there, and Chakra has web workers. Isn't that all?

Just to clarify my comment: the big codebase I was referring to is Node's. Even though V8 and Chakra support multithreading, we have never tested (as multithreaded) the codebase that wraps those engines and turns them into Node.

@p3x-robot

p3x-robot commented Jun 5, 2017

That's true: if we leave out require, it works out of the box.
But if we definitely need require, then we have to be careful about what we load with it; I guess every thread should have its own require cache.
Looks easy, but easier said than done.

@p3x-robot

p3x-robot commented Jun 5, 2017

But guys, I can see there is code going in, so you are already working on it?

@p3x-robot

p3x-robot commented Jun 5, 2017

Besides, process.nextTick can work like a thread, can't it? Threads are overrated.
Take IntelliJ IDEA: one process, like 1000 threads, and when it's indexing 30 projects at once the GUI is frozen. If the indexing ran in a different process, it would never block the GUI, but they don't like multiple processes, so IDEA is always blocked, even though I use it for everything (CLion, IntelliJ Ultimate).

I guess web workers are good, but overrated if we use process.nextTick.

@bmeck

Member

bmeck commented Jun 13, 2017

The main problems generally come from the idea that a Worker should contain require and process; those are leaky abstractions tied to process-level shared data (process.env, cwd, and require.cache). If instead a Worker relies on passing transferables and async communication, it is pretty simple.

I don't think the hacky solutions to these problems are good, and I really hope we go the route of not trying to create lightweight processes.

@p3x-robot

p3x-robot commented Jun 13, 2017

Wouldn't the Worker's require and process each have their own state, not shared at all (even though it's a thread)?

@bmeck

Member

bmeck commented Jun 13, 2017

@p3x-robot You can't decouple them; they use OS-level shared data.

@bmeck

Member

bmeck commented Jun 13, 2017

I don't understand the question.

@p3x-robot

p3x-robot commented Jun 13, 2017

@bmeck

Member

bmeck commented Jun 13, 2017

@p3x-robot To a limited extent you can share data via transferring between threads; however, some data like cwd and env come from the OS and are not thread-local. For V8, you cannot share JS objects between Isolates/threads, so you have to implement a layer like transferables. Even with that, syscalls like cwd are generally not thread-safe to mutate, which people do with standard Node APIs.

I am stating that certain Node APIs are not safe to reify in workers and workers must pass messages to the main thread in order to coordinate with them. This is similar to how the browser does not expose all the DOM APIs to workers due to threading issues.

@daviddias

daviddias commented Jul 22, 2017

Subscribing to this issue. Having this in Node.js would enable me to develop apps that leverage WebWorkers to offload CPU-intensive operations (e.g. crypto) and make them work the same way in the browser and Node.js. I would use it for js-ipfs and js-libp2p.

@p3x-robot

p3x-robot commented Jul 23, 2017

For now, there are so many issues and it would be so slow that I am just using processes; even Chrome uses a process for everything. So you can easily use https://www.npmjs.com/package/tiny-worker, check it out!
Although with C++ you can do a lot as well. Lots of examples for threads from Node.js/V8...

@pemrouz

pemrouz commented Aug 26, 2017

In order to move this forward, I think it might be good for people familiar with the internals to specify:

  • Which APIs definitely cannot be shared between threads (+ some explanation of why)
  • Which APIs would need some work (+ some explanation of what would be required)
  • Which APIs can easily be shared between threads

This way we can build up a list, discuss and converge on an outline of the end goal.

@pemrouz

pemrouz commented Aug 30, 2017

Food for thought: https://webkit.org/blog/7846/concurrent-javascript-it-can-work/

@MylesBorins

Member

MylesBorins commented Dec 7, 2017

@AngelAngelov

That comment is not in line with our code of conduct. I'd like to kindly ask that you consider removing it.

@evanlucas

Member

evanlucas commented Dec 7, 2017

Can we please keep comments in this thread relevant?

@oe

oe commented Dec 7, 2017

@p3x-robot @AngelAngelov Personal attacks are considered a code of conduct offense, therefore I'd like to ask you two to remove or otherwise edit your comments.

@stevenvachon

stevenvachon commented Dec 7, 2017

What happened? Comments appear to be removed.

@p3x-robot

p3x-robot commented Dec 24, 2017

@AngelAngelov I think https://github.com/Microsoft/napajs/blob/master/docs/api/node-api.md will be complete at some point, and then you can use threads at will without web workers as well.

Merry Xmas

@aaronleeatali

aaronleeatali commented Mar 26, 2018

I wish you guys would take a look at this: https://github.com/alibaba/AliOS-nodejs/wiki/Workers-in-Node.js-based-on-multithreaded-V8
We propose a system for asynchronous multithreading with an API conforming to the Web Worker standard in Node.js, and maybe the performance improvements show that it's worth the effort.

@p3x-robot

p3x-robot commented Jun 28, 2018

Closing this, given that Node.js 10.5 implements worker threads.

@p3x-robot p3x-robot closed this Jun 28, 2018

@p3x-robot

p3x-robot commented Jul 6, 2018

@addaleax do worker threads work with Atomics? I am testing; just asking if you have any info on it... Thanks!

@addaleax

Member

addaleax commented Jul 6, 2018

@p3x-robot Yes, it does. :)

@p3x-robot

p3x-robot commented Aug 8, 2018

Hey guys! How are you?

Have you started using threads in a Node.js that has worker threads enabled? What kinds of use cases are you using them for?
Thank you very much.

@a1xon

a1xon commented Aug 9, 2018

It would be great to have some rough numbers on when it's worth using workers at all. E.g.: is it possible to speed up functions that take only 10ms? Or more general information about the overhead that occurs when using workers.

I think the docs are really vague here.

Workers are useful for performing CPU-intensive JavaScript operations; do not use them for I/O, since Node.js’s built-in mechanisms for performing operations asynchronously already treat it more efficiently than Worker threads can.

@addaleax

Member

addaleax commented Aug 9, 2018

Is it possible to speed up functions that take only 10ms?

That depends – do you spin up a Worker for every invocation? No, that’s not going to help here, it would normally take longer than 10 ms to do so. But if you use shared memory, or at least MessagePorts, to communicate the tasks to a Worker and back? Then, yes, that might be very much possible.

Maybe it’s worth doing more advertisement for using a worker thread pool in the docs?

Or more general information about the overhead that will occur when using workers.

Fwiw, some reasons why no specific overhead measurements are mentioned in the docs: this is going to depend on the actual machine the code is running on, and we're actively working on improving startup time, both for Node.js itself and for Workers.

@davisjam

davisjam commented Aug 9, 2018

this is going to depend on the actual machine the code is running on

Of course, but we could generate some performance information to give folks an indication of the pros and cons. @addaleax Do you know whether this is something the benchmarking WG is considering?

@addaleax

Member

addaleax commented Aug 9, 2018

@davisjam Not that I know of. At this point the only benchmark we have is one for passing messages between workers (the MessageChannel approach)…

@a1xon

a1xon commented Aug 9, 2018

@addaleax

Maybe it’s worth doing more advertisement for using a worker thread pool in the docs?

Yes, good idea. Maybe an example would be nice too.
I feel a little scared to leave the safe harbor of the single thread, and I fear a big explosion of speed and memory leaks 💀
Thanks for your awesome work, btw!

@a1xon

a1xon commented Sep 1, 2018

@addaleax 50%+ speedup for my algorithm! Workers rock!
Loops that needed ~5ms before workers now need ~2ms on a 6-core machine. I'm using channels for messaging and promises to gather the results. I'm also creating only (CPU cores - 1) workers for the logic, so I have one free core left to handle new requests etc. Does this make sense? Am I right that every worker gets its own V8 instance, which gets optimized only for the functions that worker handles?

@addaleax

Member

addaleax commented Sep 1, 2018

Does this make sense? Am I right that every worker gets his own V8 instance that gets optimized only for the functions that the worker handles?

Yup – as far as V8/all JS stuff is concerned, every Worker is an independent instance. :)

@a1xon

a1xon commented Sep 8, 2018

I had problems with messages coming back from workers when the main function was called at a high frequency and the workers needed different amounts of time to return results. A classic async problem, I guess: old worker returns were smearing results into later-called functions. I couldn't find any solution online (the simple subchannel example in the Node docs produces the same error). I guess this is an edge case anyway, but it produces surprisingly strange results.

My workaround: I create a couple of subchannels for each worker, send each worker its ports, and tell each worker on every call which port it should use to communicate, always iterating through the array of ports. Receiving specific function results on the parent looks like this now:
worker.subChannels[worker.currentPort].port2.once("message", value => {});
I call it the "Ports-Merry-Go-Round". Not an official term yet? It's official now.
I've had no problems since implementing that idea, and once V8 is warmed up it's a beast on multi-core.
Was this "workaround" intended by you guys anyway? Is it a stupid idea? Pull no punches, please.

@addaleax

Member

addaleax commented Sep 9, 2018

@a1xon Just for clarification … is the issue that it’s not obvious how to tie requests to workers back to the responses if they can arrive out-of-order?

@a1xon

a1xon commented Sep 9, 2018

@addaleax Yes, I couldn't find anything online or in the docs to tackle that :\

@p3x-robot

p3x-robot commented Sep 9, 2018

A little later, I still think JS is a functional language and not about horsepower. For parallelism, you want a language like C++ and below, down to assembly or the video card. Node.js will never be good for processing. It's like unicorn Node.js vs. hydrogen-bomb assembly. Totally different approaches.

@p3x-robot p3x-robot reopened this Sep 9, 2018

@addaleax

Member

addaleax commented Sep 9, 2018

@a1xon I’d say it isn’t a problem that’s unique to workers – adding something like a request ID to the passed message could help here, so that you don’t have to maintain multiple ports?

I (and this is really just a personal opinion) think this is a kind of problem that we’d only tackle with a built-in solution if we were to provide some kind of built-in worker pool?

@p3x-robot Keep in mind that JS can definitely perform on the same order of magnitude as native languages, and there are things like WebAssembly that have a significant impact on the performance-in-JS world as well.
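The request-ID approach suggested above can be sketched without any Worker at all: a pending map plus a counter is the whole trick. The fakeWorker below is a stand-in I made up for a real Worker's message port, so replies can arrive out of order just as in the problem described:

```javascript
// Correlating out-of-order responses with a per-request ID: the caller keeps
// a map from id → resolve, and the reply carries the id back.
const pending = new Map();
let nextId = 0;

function request(target, payload) {
  const id = ++nextId; // counter-based IDs can never collide
  return new Promise((resolve) => {
    pending.set(id, resolve);
    target.postMessage({ id, payload });
  });
}

function onResponse({ id, result }) {
  pending.get(id)(result);
  pending.delete(id);
}

// Stand-in for a real Worker: echoes doubled payloads after a random delay,
// so replies can come back in any order.
const fakeWorker = {
  postMessage({ id, payload }) {
    setTimeout(() => onResponse({ id, result: payload * 2 }), Math.random() * 20);
  },
};
```

With a real Worker, onResponse would simply be the parent's 'message' handler; the correlation logic is identical.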

@p3x-robot

p3x-robot commented Sep 9, 2018

Of course, just an opinion. WebAssembly is awesome, just like addons or, at last, worker threads. I still haven't found a good use case; we always use native bindings or native C/C++-based processes. But what we have not found yet, it is coming...

@davisjam

davisjam commented Sep 9, 2018

@a1xon

adding something like a request ID to the passed message could help here

Agreed.

@p3x-robot

Node.js will never be good for processing

I think the question is "Is it fast enough for my purposes, and is it the bottleneck of the system on which I am processing?"

@p3x-robot

p3x-robot commented Sep 9, 2018

@davisjam In fact, I use a requestId a lot with Redis or socket.io for passing one-time events.

As for bottlenecks, we use things like ImageMagick, or some C or C++, for horsepower. I wanted threads so much, and still I don't actually need them. Of course a worker thread can use a different core, so for whoever finds a good use case it will be totally awesome.

Threads can block each other if, for some weird reason, they end up on the same core, but worker threads are a feature I wanted so much, and now it is in my palm. Happy!

@a1xon

a1xon commented Sep 9, 2018

@addaleax @davisjam I tried to do it with only IDs in the first place, but in 1 of 1000 cases the workers were called 2 or 3 times with the same randomly generated ID. So I switched to rotating ports and it works like a charm now; I'm also still using IDs at the moment. Are you interested in the code? I can try to pretty it up a little.
@p3x-robot I am developing on my own, and Node.js + V8 is a huge blessing for me. The speed I can achieve now with workers is a lot higher than what I could achieve with naively written C/C++ code. The JIT optimizers are way better than my C/C++ skills.

@p3x-robot

p3x-robot commented Sep 9, 2018

That is great information, because I am sure I will face this use case at some point. Thanks so much.

@addaleax

Member

addaleax commented Sep 9, 2018

@a1xon Do you think a counter or something like that would work, to avoid collisions? But either way, as long as you found something that works… :)

You can feel free to share code if you think it contains feedback that we can apply to the Workers implementation, or that could be looked at for developing benchmarks/tests/etc. if you think that makes sense.

@a1xon

a1xon commented Sep 9, 2018

workers were called 2 or 3 times with the same randomly generated ID

@addaleax Sorry, that was misleading.
The workers returned their results a couple of times; they were only called once. It was difficult for me to keep track of the current function results while not binning the old ones. Will open a small repo and push the code later. Thanks for your help :)

@a1xon

a1xon commented Sep 9, 2018

Just uploaded the code. Hope I didn't strip too much from it.
https://github.com/a1xon/Node-Workers-Example
Does this look reasonable?

@addaleax

Member

addaleax commented Sep 10, 2018

@a1xon I’d be a bit wary of using Math.random() to generate IDs… it might do fine, but it’s probably safer (and a bit easier?) to use a counter, like this:

let counter = 0;  // Use 0n if you want to be absolutely sure and use BigInt instead

const main = async someArgument => {
  …
  let requestID = ++counter;
  …
};

That way you can avoid collisions and don’t need to plan for them, even if they come with a very low frequency.

@bmeck

Member

bmeck commented Sep 10, 2018

If you need a shared counter for p2p workers without going through a coordination thread, you can also create a SharedArrayBuffer and pass it around; anyone wishing to use it can lock the structure and then increment the counter.
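Such a shared counter can be sketched with a one-slot Int32Array over a SharedArrayBuffer. For this simple case Atomics.add already makes the increment race-free, so no explicit lock is needed; the sketch runs single-threaded for brevity, but the same calls work after posting sab to a Worker (nextSharedId is a name I made up):

```javascript
// A shared counter backed by a SharedArrayBuffer. Atomics.add performs an
// atomic read-modify-write, so concurrent increments from several threads
// can never produce duplicate IDs.
const sab = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT);
const counter = new Int32Array(sab);

function nextSharedId() {
  // Atomics.add returns the value *before* the increment.
  return Atomics.add(counter, 0, 1) + 1;
}
```

A Worker that receives sab via postMessage builds its own Int32Array view over it and calls the same Atomics.add; both sides then draw from one ID sequence.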

@addaleax

Member

addaleax commented Oct 3, 2018

I’m closing the existing issues here. If you have feedback about the existing Workers implementation in Node.js 10+, please use #6 for that!

@addaleax addaleax closed this Oct 3, 2018
