please develop native threads that can load native modules with require and share/lock objects #31
Will you fund the development of this? 😉 Seriously though this would be a gigantic undertaking. |
I'm personally against adding shared-memory threading to Node. I think having stuff like lightweight WebWorkers is pretty alright, and probably requires a lot less work -- but the implications on adding shared-memory threading to a language that is woefully unprepared for it would be... A ton of work for something not very great which is not that much better than existing alternatives. Please remember that Node, through libuv, already parallelizes a number of I/O-related tasks at the C++ level, and does it without compromising the thread safety of user-level code. I believe this is already hugely beneficial to a platform like Node and adds to the pile of reasons to think more complicated threading will just result in minimal benefits for the user. Please keep in mind that child process workers are relatively easy to spawn, and often serve the use cases in question well enough. |
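For illustration, here is a minimal sketch of the child-process-worker approach mentioned above, using `child_process.fork()`; the file names and the message shape are made up for the example.

```js
// worker.js (hypothetical): runs in its own process and does the CPU-heavy part.
process.on('message', (task) => {
  // Stand-in for real computation.
  const result = task.numbers.reduce((sum, n) => sum + n, 0);
  process.send({ id: task.id, result });
});

// main.js (hypothetical): spawns the worker and talks to it over the IPC channel.
const { fork } = require('child_process');

const worker = fork('./worker.js');

worker.on('message', (msg) => {
  console.log(`task ${msg.id} finished with result ${msg.result}`);
  worker.kill();
});

// Messages are serialized when they cross the process boundary,
// which is exactly the overhead discussed in the following comments.
worker.send({ id: 1, numbers: [1, 2, 3, 4] });
```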
What I had a problem with is the JSON encoding/decoding CPU cost of the RPC, when the child is working on huge data for the main thread. In my project I solved it awkwardly by keeping the same data on both the child and the main process and sending a changeset each time -- the arguments of a splice. I wished for a better way.
I can't fund it. I'm just using the opportunity to talk about it -- gambling for free on a positive result. Maybe someone here is looking to make something very ambitious, like it was with the previous fork. Frankly, I did not understand the direction of this fork, besides finally being nice to each other, which is important by itself.
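A minimal sketch of the changeset workaround described above: both sides keep their own copy of the data and only the arguments of each splice cross the IPC channel. The helper names are illustrative, and the two "sides" are shown in one file for brevity; in practice the change objects would travel over process.send() and child.on('message').

```js
// Both the parent and the child keep a copy of the same array; only small
// changesets (the arguments of a splice) are exchanged and replayed.
const childCopy = [1, 2, 3, 4, 5];
const mainCopy  = [1, 2, 3, 4, 5];   // both sides start from the same data

// Child side: mutate locally and produce a small change description.
function spliceAndDescribe(arr, ...args) {
  arr.splice(...args);
  return { op: 'splice', args };      // this is what would be sent over IPC
}

// Main side: replay the change on the local copy.
function applyChange(arr, change) {
  if (change.op === 'splice') arr.splice(...change.args);
}

const change = spliceAndDescribe(childCopy, 1, 2, 'a', 'b'); // remove 2 items, insert 'a', 'b'
applyChange(mainCopy, change);

console.log(childCopy); // [ 1, 'a', 'b', 4, 5 ]
console.log(mainCopy);  // [ 1, 'a', 'b', 4, 5 ]
```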
|
You can close this if you think it is unreasonable. I'm ok with it. |
I think for encoding/decoding overhead, you'd get more mileage just using streamable encoders and decoders. The marshalling and unmarshalling required to take a block of JS data and pass it through native code to another thread would most likely outweigh the existing encoding/decoding overhead anyway... |
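As an illustration of the streaming idea (not a recommendation of any specific library), newline-delimited JSON over a child's stdout lets the receiver decode record by record instead of parsing one huge blob at the end. The file names are made up for the example.

```js
// producer.js (hypothetical): emit many small newline-delimited JSON records
// instead of one giant JSON document.
for (let i = 0; i < 1e6; i++) {
  process.stdout.write(JSON.stringify({ i, value: i * i }) + '\n');
}

// parent.js (hypothetical): decode records as they arrive.
const { spawn } = require('child_process');
const readline = require('readline');

const child = spawn(process.execPath, ['./producer.js'], {
  stdio: ['ignore', 'pipe', 'inherit'],
});

const rl = readline.createInterface({ input: child.stdout });
rl.on('line', (line) => {
  const record = JSON.parse(line);   // each record is small, so parsing stays cheap
  // ...consume the record incrementally...
});
```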
There are some areas where threading, or rather full or partial workers, would be useful. Partial workers -- that is, computation-only workers without I/O access -- are probably reasonable to implement and would be fairly useful for, say, off-thread template rendering. |
I'm pretty onboard with some sort of native WebWorker thing being added. Those could actually be super handy, and they're shared-nothing so 💯 |
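For reference, the shared-nothing WebWorker pattern being alluded to looks roughly like this in the browser. Node had no equivalent API at the time, so this shows only the model, not an existing Node interface.

```js
// main.js -- browser-style usage: data is copied (structured clone), never shared.
const worker = new Worker('worker.js');
worker.onmessage = (event) => {
  console.log('worker replied:', event.data);
};
worker.postMessage({ numbers: [1, 2, 3, 4] });

// worker.js -- runs in its own thread with its own global scope.
self.onmessage = (event) => {
  const sum = event.data.numbers.reduce((a, b) => a + b, 0);
  self.postMessage({ sum });
};
```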
As one of the solutions to the problem I had, I thought it might be useful to develop a module providing a shared-memory object or array. If the threads are different processes, this shared object would have to be named in shm. I meant sharing C++ objects with accessors (good enough for use); maybe that could be simple. |
It's not quite so simple. Sharing V8 objects between different isolates is not safe, so any data you want to move around needs to be marshalled and unmarshalled at the isolate barriers, which has rather substantial overhead. The V8 serialization API is exposed in recent versions of node, which helps, but there's still a noticeable performance penalty. https://nodejs.org/api/v8.html#v8_serialization_api |
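A small sketch of that serialization API (available since Node 8), which round-trips structured-clone-compatible values through a Buffer:

```js
const v8 = require('v8');

const original = { nested: { list: [1, 2, 3] }, when: new Date() };

const buf = v8.serialize(original);      // Buffer containing the serialized form
const copy = v8.deserialize(buf);        // reconstructed value on the other side

console.log(copy.nested.list);           // [ 1, 2, 3 ]
console.log(copy.when instanceof Date);  // true -- unlike JSON, Dates survive
```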
@petkaantonov wrote a pull request for Node.js but got burned out on it in the middle. You can take a look at petkaantonov/io.js@ea143f7 |
Fwiw, I'm still planning on working on this in the near future, with the goal of getting something along the lines of Petka's work going. Also, re: shared memory -- I think that's not what @zkat was referring to, and I don't exactly have much background knowledge here. (Like others said, shared memory access to JS objects is pretty much out of the question, for multiple reasons.) EDIT: I'd say, expect a PR by around this weekend :) |
Hi all, Sorry in advance if I'm mistaken, but couldn't WebKit's work on threading be useful for achieving the goal of this issue? Here is a link to what I'm referring to: https://webkit.org/blog/7846/concurrent-javascript-it-can-work/ Regards, |
@tagkiller - we could already do this two years ago (nodejs/node#1159) - the collaborator got tired of not having enough feedback from core and it eventually got closed (nodejs/node#2133). This is still entirely possible regardless of WebKit. |
Yeah, I should clarify my comments above: threading is definitely possible, and there's been a bunch of work on that in the past. Threading for the purpose of offloading JSON serialization/deserialization however, likely has little benefit due to overhead of moving data between threads in a safe way. |
I have a couple questions:
I would personally vote to land #58 soon for a couple reasons:
|
First of all, remember we are all volunteers here. Nobody that I know of is paying @addaleax to write Worker, and as such there is no deadline attached. That said, I personally expect that it'll still be a while before it gets landed. Much code review is needed, and as of now there hasn't been much of it.
Technically there is no barrier for this to be merged into Node.js, as our code has not yet diverged from Node.js'. However, it's up to @addaleax if she would like to take the time to upstream this and go through their review process (see #40 (comment)). If not, someone else would have to. I am not familiar with node-chakracore, but from what I know they will have to at least do some work to support certain features (like ArrayBuffer transfers) in their V8-to-ChakraCore API bridge (chakrashim), or maybe even ChakraCore itself. |