worker: initial implementation #20876

Closed · wants to merge 26 commits into base: master

Conversation

@addaleax
Member

addaleax commented May 22, 2018

Hi everyone! 👋

This PR adds threading support to Node.js. I realize that this is not exactly a small PR and is going to take a while to review, so: I appreciate comments, questions (any kind, as long as it’s somewhat related 😺), partial reviews and all other feedback from anybody, not just Node.js core collaborators.

The super-high-level description of the implementation here is that Workers can share and transfer memory, but not JS objects (they have to be cloned for transferring), and not yet handles like network sockets.

FAQ

See https://gist.github.com/benjamingr/3d5e86e2fb8ae4abe2ab98ffe4758665

Example usage

const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
  module.exports = async function parseJSAsync(script) {
    return new Promise((resolve, reject) => {
      const worker = new Worker(__filename, {
        workerData: script
      });
      worker.on('message', resolve);
      worker.on('error', reject);
      worker.on('exit', (code) => {
        if (code !== 0)
          reject(new Error(`Worker stopped with exit code ${code}`));
      });
    });
  };
} else {
  const { parse } = require('some-js-parsing-library');
  const script = workerData;
  parentPort.postMessage(parse(script));
}

Feature set

The communication between threads largely builds on the MessageChannel Web API. Transferring ArrayBuffers and sharing memory through SharedArrayBuffers is supported.
Almost the entire Node.js core API is require()able or importable.
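
For illustration, here is a minimal sketch of transferring an ArrayBuffer and sharing a SharedArrayBuffer (it assumes the worker_threads module name and the parentPort object used elsewhere in this thread, so treat it as an approximation rather than the exact API in this PR):

const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
  const shared = new SharedArrayBuffer(4);   // shared: both threads see the same memory
  const transferred = new ArrayBuffer(4);    // transferred: ownership moves to the worker
  const worker = new Worker(__filename, { workerData: { shared } });
  worker.postMessage({ transferred }, [transferred]);
  // `transferred` is now detached in this thread.
  worker.on('message', () => {
    console.log(new Int32Array(shared)[0]);  // 42, written by the worker
  });
} else {
  parentPort.once('message', ({ transferred }) => {
    new Int32Array(transferred)[0] = 1;          // the worker owns this buffer now
    new Int32Array(workerData.shared)[0] = 42;   // visible to the main thread as well
    parentPort.postMessage('done');
  });
}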

Some notable differences:

  • stdio streams may be captured by the parent thread.
  • Some functions, e.g. process.chdir(), don’t exist in worker threads.
  • Native addons are not loadable from worker threads (yet).
  • No inspector support (yet).

(Keep in mind that PRs can change significantly based on reviews.)

Comparison with child_process and cluster

Workers are conceptually very similar to child_process and cluster.
Some of the key differences are:

  • Communication between Workers is different: Unlike child_process IPC, we don’t use JSON, but rather do the same thing that postMessage() does in browsers (see the sketch after this list).
    • This isn’t necessarily faster, although it can be and there might be more room for optimization. (Keep in mind how long JSON has been around and how much work has therefore been put into making it fast.)
    • The serialized data doesn’t actually need to leave the process, so overall there’s less overhead in communication involved.
    • Memory in the form of typed arrays can be transferred or shared between Workers and/or the main thread, which enables really fast communication for specific use cases.
    • Handles, like network sockets, can not be transferred or shared (yet).
  • There are some limitations on the usable API within workers, since parts of it (e.g. process.chdir()) affect per-process state, loading native addons, etc.
  • Each worker has its own event loop, but some resources are shared between workers (e.g. the libuv thread pool for file system work).
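
As a sketch of that difference (again assuming the eventual worker_threads/parentPort names), structured cloning can carry values that JSON.stringify() cannot, such as Maps and circular references:

const { Worker, isMainThread, parentPort } = require('worker_threads');

if (isMainThread) {
  const worker = new Worker(__filename);
  const circular = { name: 'root' };
  circular.self = circular;                // JSON.stringify(circular) would throw here
  worker.postMessage({ map: new Map([['answer', 42]]), circular });
  worker.on('message', (reply) => {
    console.log(reply);                    // 42
  });
} else {
  parentPort.once('message', ({ map, circular }) => {
    // Both the Map and the circular structure survive the clone.
    parentPort.postMessage(circular.self === circular ? map.get('answer') : -1);
  });
}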

Benchmarks

    $ ./node benchmark/cluster/echo.js
    cluster/echo.js n=100000 sendsPerBroadcast=1 payload="string" workers=1: 33,647.30473442063
    cluster/echo.js n=100000 sendsPerBroadcast=10 payload="string" workers=1: 12,927.907405288383
    cluster/echo.js n=100000 sendsPerBroadcast=1 payload="object" workers=1: 28,496.37373941151
    cluster/echo.js n=100000 sendsPerBroadcast=10 payload="object" workers=1: 8,975.53747186485
    $ ./node --experimental-worker benchmark/worker/echo.js
    worker/echo.js n=100000 sendsPerBroadcast=1 payload="string" workers=1: 88,044.32902365089
    worker/echo.js n=100000 sendsPerBroadcast=10 payload="string" workers=1: 39,873.33697018837
    worker/echo.js n=100000 sendsPerBroadcast=1 payload="object" workers=1: 64,451.29132425621
    worker/echo.js n=100000 sendsPerBroadcast=10 payload="object" workers=1: 22,325.635443739284

A caveat here is that startup performance for Workers using this model is still relatively slow (I don’t have exact numbers, but there’s definitely overhead).
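
One way to amortize that startup cost is to keep a small pool of long-lived workers and dispatch tasks to them, rather than spawning a worker per task. A minimal, illustrative sketch (assuming the eventual worker_threads API; pool-worker.js is a hypothetical worker script that replies to every message it receives, and error handling is omitted for brevity):

const { Worker } = require('worker_threads');
const os = require('os');

class WorkerPool {
  constructor(file, size = os.cpus().length) {
    // Pay the worker startup cost once per thread.
    this.idle = Array.from({ length: size }, () => new Worker(file));
    this.queue = [];
  }
  run(task) {
    return new Promise((resolve) => {
      this.queue.push({ task, resolve });
      this._dispatch();
    });
  }
  _dispatch() {
    if (this.idle.length === 0 || this.queue.length === 0) return;
    const worker = this.idle.pop();
    const { task, resolve } = this.queue.shift();
    worker.once('message', (result) => {
      this.idle.push(worker);   // hand the worker back to the pool
      resolve(result);
      this._dispatch();
    });
    worker.postMessage(task);
  }
}

// Usage: const pool = new WorkerPool('./pool-worker.js'); pool.run({ n: 40 }).then(console.log);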

Regarding semverness:

The only breaking change here is the introduction of a new top-level module. The name is currently worker; it is not under a scope as suggested in nodejs/TSC#389. It seems like the most natural name for this by far.

I’ve reached out to the owner of the worker module on npm, who declined to provide the name for this purpose – the package has 57 downloads/week, so whether we consider this semver-major because of that is probably a judgement call.

Alternatively, I’d suggest using workers – it’s not quite what we’re used to in core (e.g. child_process), but the corresponding npm package is essentially just a placeholder.

Acknowledgements

People I’d like to thank for their code, comments and reviews for this work in its original form, in no particular order:

… and finally @petkaantonov for a lot of inspiration and the ability to compare with previous work on this topic.

Individual commits

src: cleanup per-isolate state on platform on isolate unregister

Clean up once all references to an Isolate* are gone from the
NodePlatform, rather than waiting for the PerIsolatePlatformData
struct to be deleted since there may be cyclic references between
that struct and the individual tasks.

src: fix MallocedBuffer move assignment operator

src: break out of timers loop if !can_call_into_js()

Otherwise, this turns into an infinite loop.

src: simplify handle closing

Remove one extra closing state and use a smart pointer for
deleting HandleWraps.

worker: implement MessagePort and MessageChannel

Implement MessagePort and MessageChannel along the lines of
the DOM classes of the same names. MessagePorts initially
support transferring only ArrayBuffers.

worker: support MessagePort passing in messages

Support passing MessagePort instances through other MessagePorts,
as expected by the MessagePort spec.
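
A brief sketch of what these two commits enable (assuming the MessageChannel export and parentPort names of the eventual public API): creating a custom channel and transferring one of its ports to a worker.

const { Worker, isMainThread, parentPort, MessageChannel } = require('worker_threads');

if (isMainThread) {
  const { port1, port2 } = new MessageChannel();
  const worker = new Worker(__filename);
  worker.postMessage({ port: port2 }, [port2]);   // port2 is transferred, not cloned
  port1.on('message', (msg) => {
    console.log(msg);                             // 'hello over the custom channel'
    port1.close();
  });
} else {
  parentPort.once('message', ({ port }) => {
    port.postMessage('hello over the custom channel');
    port.close();
  });
}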

worker: add SharedArrayBuffer sharing

Logic is added to the MessagePort mechanism that attaches hidden objects
to SharedArrayBuffer instances when they are transferred; these objects
track the buffer’s lifetime and maintain a reference count, to make
sure that the memory is freed at the appropriate times.
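
A sketch of the kind of sharing this enables (again assuming the eventual worker_threads names): both threads operate on the same memory and can coordinate via Atomics, while the reference counting described above keeps the underlying memory alive until both sides are done with it.

const { Worker, isMainThread, workerData } = require('worker_threads');

if (isMainThread) {
  const shared = new Int32Array(new SharedArrayBuffer(4));
  new Worker(__filename, { workerData: shared });
  Atomics.wait(shared, 0, 0);              // block until the worker stores a value
  console.log(Atomics.load(shared, 0));    // 123
} else {
  Atomics.store(workerData, 0, 123);
  Atomics.notify(workerData, 0, 1);        // wake the waiting main thread
}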

src: add Env::profiler_idle_notifier_started()

src: move DeleteFnPtr into util.h

This is more generally useful than just in a crypto context.

worker: initial implementation

Implement multi-threading support for most of the API.

test: add test against unsupported worker features

worker: restrict supported extensions

Only allow .js and .mjs extensions to provide future-proofing
for file type detection.

src: enable stdio for workers

Provide stdin, stdout and stderr options for the Worker
constructor, and make these available to the worker thread
under their usual names.

The default for stdin is an empty stream, the default for
stdout and stderr is redirecting to the parent thread’s
corresponding stdio streams.
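
For example (a sketch using the option and stream names described in this commit; the exact shape may have changed since):

const { Worker, isMainThread } = require('worker_threads');

if (isMainThread) {
  // stdout: true stops the worker's stdout from being forwarded to the
  // parent's stdout; it becomes readable as worker.stdout instead.
  const worker = new Worker(__filename, { stdout: true });
  worker.stdout.on('data', (chunk) => {
    console.log(`[worker] ${chunk.toString().trim()}`);
  });
} else {
  console.log('hello from the worker thread');
}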

benchmark: port cluster/echo to worker

worker: improve error (de)serialization

Rather than passing errors using some sort of string representation,
do a best effort for faithful serialization/deserialization of
uncaught exception objects.
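
In other words (minimal sketch, assuming the eventual worker_threads module name), an uncaught exception inside the worker reaches the parent's 'error' event as a reconstructed Error object rather than a string:

const { Worker, isMainThread } = require('worker_threads');

if (isMainThread) {
  const worker = new Worker(__filename);
  worker.on('error', (err) => {
    // The worker's uncaught exception arrives with its original message and stack.
    console.error(err instanceof Error, err.message);
  });
} else {
  throw new Error('boom');
}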

test,tools: enable running tests under workers

Enable running tests inside workers by passing --worker
to tools/test.py. A number of tests are marked as skipped,
or have been slightly altered to fit the different environment.

Other work

I know that teams from Microsoft (/cc @fs-eire @helloshuangzi) and Alibaba (/cc @aaronleeatali) have been working on forms of multithreading that have higher degrees of interaction between threads, such as sharing code and JS objects. I’d love it if you could take a look at this PR and see how well it aligns with your own work, and what conflicts there might be. (From what I’ve seen of the other code, I’m actually quite optimistic that this PR is just going to help everybody.)

Checklist
  • make -j4 test (UNIX), or vcbuild test (Windows) passes
  • tests and/or benchmarks are included
  • documentation is changed or added
  • commit message follows commit guidelines
@addaleax

Member

addaleax commented May 22, 2018

CI: https://ci.nodejs.org/job/node-test-commit/18659/
CITGM: https://ci.nodejs.org/view/Node.js-citgm/job/citgm-smoker/1422/

Edit: Will go to sleep now and look at comments tomorrow. Cheers!

@devsnek

Member

devsnek commented May 22, 2018

i'm so heckin' happy to see this 🎉 🎉 🎉 🎉

regarding the whole name thing, why not use @nodejs/workers? we're practically being shoved into it due to the owner of workers not wanting to give that name up, and we've been looking for excuses to namespace builtin modules anyway.

@@ -1204,6 +1213,8 @@ console.log(process.getgroups()); // [ 27, 30, 46, 1000 ]
This function is only available on POSIX platforms (i.e. not Windows or
Android).

This feature is not available in [`Worker`][] threads.

@devsnek

devsnek May 22, 2018

Member

maybe we should set up some sort of doc macro for this so that we can keep them all in sync if we decide to change the formatting of it.

@addaleax

addaleax May 22, 2018

Member

Do we know how to do that?

@cedric05

cedric05 May 23, 2018

Does Node.js run on Android?
Is it experimental?
Where can I read more about it?

@vsemozhetbyt

vsemozhetbyt May 23, 2018

Member

@cedric05 see BUILDING.md#androidandroid-based-devices-eg-firefox-os for building on Android and package-manager/#android for installing precompiled app.


if (!process.binding('config').experimentalWorker) {
// TODO(addaleax): Is this the right way to do this?
// eslint-disable-next-line no-restricted-syntax

@devsnek

devsnek May 22, 2018

Member

i would try and gate it in the nativemodule loader instead

@@ -1375,6 +1375,8 @@ def BuildOptions():
help="Expect test cases to fail", default=False, action="store_true")
result.add_option("--valgrind", help="Run tests through valgrind",
default=False, action="store_true")
result.add_option("--worker", help="Run parallel tests inside a worker context",

This comment was marked as resolved.

@devsnek

devsnek May 22, 2018

Member

❤️

<a id="ERR_CLOSED_MESSAGE_PORT"></a>
### ERR_CLOSED_MESSAGE_PORT

Used when there was an attempt to use a `MessagePort` instance in a closed

@Trott

Trott May 22, 2018

Member

Nit: Remove Used when.

<a id="ERR_MISSING_PLATFORM_FOR_WORKER"></a>
### ERR_MISSING_PLATFORM_FOR_WORKER

The V8 platform used by this instance of Node does not support creating Workers.

@Trott

Trott May 22, 2018

Member

Nit: Node -> Node.js

<a id="ERR_WORKER_DOMAIN"></a>
### ERR_WORKER_DOMAIN

Used when trying to access the `domain` module inside of a worker thread.

@Trott

Trott May 22, 2018

Member

Remove Used when and reword:

The `domain` module cannot be used inside of a worker thread.

@addaleax

addaleax May 22, 2018

Member

This was one of two outdated error entries anyway. :) It might be cool if our linter could check that entries in this markdown file are also present in lib/internal/errors.h or src/node_errors.h

<a id="ERR_WORKER_NEED_ABSOLUTE_PATH"></a>
### ERR_WORKER_NEED_ABSOLUTE_PATH

Used when the path for the main script of a worker is not an absolute path.

@Trott

Trott May 22, 2018

Member

Remove Used when

<a id="ERR_WORKER_OUT_OF_MEMORY"></a>
### ERR_WORKER_OUT_OF_MEMORY

Used when a worker hits its memory limit.

@Trott

Trott May 22, 2018

Member
Suggestion: replace “Used when a worker hits its memory limit.” with “A worker has exceeded its memory limit.”

<a id="ERR_WORKER_UNAVAILABLE_FEATURE"></a>
### ERR_WORKER_UNAVAILABLE_FEATURE

@Trott

Trott May 22, 2018

Member

Missing description?

<a id="ERR_WORKER_UNSERIALIZABLE_ERROR"></a>
### ERR_WORKER_UNSERIALIZABLE_ERROR

Used when all attempts at serializing an uncaught exception from a worker fail.

@Trott

Trott May 22, 2018

Member
All attempts to serialize an uncaught exception from a worker failed.

...or perhaps...

All attempts to serialize an uncaught worker exception failed.

<a id="ERR_WORKER_UNSUPPORTED_EXTENSION"></a>
### ERR_WORKER_UNSUPPORTED_EXTENSION

Used when the pathname used for the main script of a worker has an

@Trott

Trott May 22, 2018

Member

Remove Used when

@@ -918,6 +922,8 @@ console.log(process.env.test);
// => 1
```

*Note*: `process.env` is read-only in [`Worker`][] threads.

@Trott

Trott May 22, 2018

Member

My opinion only but I prefer we leave out *Note*: in nearly all cases.

@ChALkeR

ChALkeR May 22, 2018

Member

I believe *Note*: prefixes were eradicated earlier from the docs.

@@ -1030,6 +1036,9 @@ If it is necessary to terminate the Node.js process due to an error condition,
throwing an *uncaught* error and allowing the process to terminate accordingly
is safer than calling `process.exit()`.

*Note*: in [`Worker`][] threads, this function stops the current thread rather

@Trott

Trott May 22, 2018

Member

My opinion only but I prefer we leave out *Note*: in nearly all cases.

added: REPLACEME
-->

Opposite of `unref`, calling `ref` on a previously `unref`d worker will *not*

@Trott

Trott May 22, 2018

Member

Nit: Add parentheses, so unref() and ref(). Also elsewhere in this doc.

Nit: Change comma to period on this line.

Nit: We use `unref`ed rather than `unref'd` in existing documentation so use that here and in line 97, 105, and anywhere else for consistency.


Opposite of `unref`, calling `ref` on a previously `unref`d worker will *not*
let the program exit if it's the only active handle left (the default behavior).
If the worker is `ref`d calling `ref` again will have no effect.

@Trott

Trott May 22, 2018

Member

Add comma before "calling".

* Returns: {undefined}

Disables further sending of messages on either side of the connection.
This this method can be called once you know that no further communication

@Trott

Trott May 22, 2018

Member

Typo: "This this"

behind this API, see the [serialization API of the `v8` module][v8.serdes].

*Note*: Because the object cloning uses the structured clone algorithm,
non-enumberable properties, property accessors, and object prototypes are

@Trott

Trott May 22, 2018

Member

Typo: non-enumberable -> non-enumerable

- [`process.chdir()`][] as well as `process` methods that set group or user ids
are not available.
- [`process.env`][] is a read-only reference to the environment variables.
- [`process.title`][] can not be modified.

@Trott

Trott May 22, 2018

Member

can not -> cannot

child thread.

To create custom messaging channels (which is encouraged over using the default
global channel because it facilitates seperation of concerns), users can create

@Trott

Trott May 22, 2018

Member

Typo: seperation -> separation

* data {any} Any JavaScript value that will be cloned and made
available as [`require('worker').workerData`][]. The cloning will occur as
described in the [HTML structured clone algorithm][], and an error will be
thrown if the object can not be cloned (e.g. because it contains

@Trott

Trott May 22, 2018

Member

can not -> cannot

-->

The `'error'` event is emitted if the worker thread throws an uncaught
expection. In that case, the worker will be terminated.

@Trott

Trott May 22, 2018

Member

Typo: expection -> exception

@idibidiart

idibidiart commented Jun 21, 2018

<<Native addons are not loadable from worker threads (yet).>>

Electron docs:

<<Most existing native modules have been written assuming single-threaded environment, using them in Web Workers will lead to crashes and memory corruptions. Note that even if a native Node.js module is thread-safe it's still not safe to load it in a Web Worker because the process.dlopen function is not thread safe.>>

I assume you'll have a follow-up PR for process.dlopen?

How about require()-ing built-in modules like zlib and fs? Mind my lack of knowledge here, but are they thread-safe?

Thank you for this. It's about time Node has built-in threading support...

EDIT:

Also, how about the File object and JS objects in general? Latest browsers allow us to send JS objects between the main thread and a worker thread without ceremony, i.e. there is no longer a need to specify them as transferable objects. For the File object, I believe the reference to the file on disk is copied, not the entire content of the file. Can we hope for similar functionality in node workers?

@TimothyGu

Member

TimothyGu commented Jun 21, 2018

I assume you'll have a follow-up PR for process.dlopen?

Yes, there will be.

How about require()-ing built-in modules like zlib and fs? mind my lack of knowledge here but are they thread safe?

They have been made to be.

Latest browsers allow us to send JS objects between main thread and worker thread without ceremony, i.e. no longer need to specify as transferrable objects.

The transfer mechanism for general JavaScript objects is implemented identically to how browsers do it.

For File object, I believe the reference to the file on disk is copied, not the entire content of the file. Can we hope for similar functionality in node workers?

There has been interest in implementing File and Blob classes as well as the rest of the File API in Node.js, but no concrete plan has emerged yet. Stay tuned.


We will be happy to answer any follow-up question if you file an issue at https://github.com/nodejs/help. Thanks for your interest in worker threads!

@idibidiart

idibidiart commented Jun 21, 2018

Fantastic! really important work you’re doing! Thank you!!

@PazzaVlad

PazzaVlad commented Jun 24, 2018

Thank you, really cool stuff!

@pavel-sindelka

pavel-sindelka commented Jun 25, 2018

Hello guys! Am I able to use multi-threading with Puppeteer for parallel testing?
https://github.com/GoogleChrome/puppeteer

@benjamingr

Member

benjamingr commented Jun 25, 2018

@pavel-sindelka Puppeteer is typically not CPU-bound and runs an out-of-process Chrome, so that use case would not be a good fit for worker threads.

@p3x-robot

p3x-robot commented Jun 26, 2018

@addaleax so is cluster obsolete? I think cluster is still good, right?

@idibidiart

idibidiart commented Jun 26, 2018

With a cluster you can use a load balancer, but there is no shared memory out of the box, and a process is heavier than a thread... each process can then have multiple threads with shared memory... I think the use case for threads is CPU-bound work (vertical scaling), while clusters are for horizontal scaling...

would be interested in other opinions

@p3x-robot

p3x-robot commented Jun 26, 2018

@idibidiart don't you think that HTTPS, MongoDB and Redis are better used via cluster, and that threads are good for CPU processing? Look at Java: so slow with threads over HTTPS connections?

@idibidiart

idibidiart commented Jun 26, 2018

I would use threads where shared memory is a plus, not a minus, and where the work should not be done in the event loop (i.e. any CPU-bound work), so yes, I'd agree with your statement.

@p3x-robot

p3x-robot commented Jun 26, 2018

Thanks for your response.

@p3x-robot

p3x-robot commented Jun 27, 2018

Guys, do you know how I can fix this error?

memory leak detected

I use it like this:

require('events').EventEmitter.prototype._maxListeners = 100;
require('events').EventEmitter.defaultMaxListeners = 100;
process.setMaxListeners(100)

In both worker and main thread as well.

C:\Users\patrikx3\Projects\patrikx3\play\scripts\worker-thread>node-thread main.js
Spawning thread 1
Spawning thread 2
Spawning thread 3
Spawning thread 4
Spawning thread 5
Spawning thread 6
Spawning thread 7
Spawning thread 8
Spawning thread 9
Spawning thread 10
Spawning thread 11
Spawning thread 12
Spawning thread 13
Spawning thread 14
Spawning thread 15
Spawning thread 16
Spawning thread 17
Spawning thread 18
Spawning thread 19
Spawning thread 20
(node:9816) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 error listeners added. Use emitter.setMaxListeners() to increase limit
(node:9816) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 21 error listeners added. Use emitter.setMaxListeners() to increase limit
2018-6-27 08:19:11: Instance 1
2018-6-27 08:19:12: Instance 2
2018-6-27 08:19:13: Instance 3
2018-6-27 08:19:13: Instance 4
2018-6-27 08:19:14: Instance 5
2018-6-27 08:19:15: Instance 6
2018-6-27 08:19:15: Instance 7
2018-6-27 08:19:16: Instance 8
2018-6-27 08:19:17: Instance 9
2018-6-27 08:19:17: Instance 10
2018-6-27 08:19:18: Instance 11
2018-6-27 08:19:19: Instance 12
2018-6-27 08:19:20: Instance 13
2018-6-27 08:19:20: Instance 14
2018-6-27 08:19:21: Instance 15
2018-6-27 08:19:22: Instance 16
2018-6-27 08:19:22: Instance 17
2018-6-27 08:19:23: Instance 18
2018-6-27 08:19:23: Instance 19
2018-6-27 08:19:24: Instance 20
Stopped thread instance 1
This is the from main thread instance 1: 102334155
Stopped thread instance 1
Stopped thread instance 2
This is the from main thread instance 2: 102334155
Stopped thread instance 2
Stopped thread instance 3
This is the from main thread instance 3: 102334155
Stopped thread instance 3
Stopped thread instance 4
This is the from main thread instance 4: 102334155
Stopped thread instance 4
Stopped thread instance 5
This is the from main thread instance 5: 102334155
Stopped thread instance 5
2018-6-27 08:19:28: Instance 6
2018-6-27 08:19:28: Instance 7
2018-6-27 08:19:28: Instance 8
2018-6-27 08:19:28: Instance 9
2018-6-27 08:19:28: Instance 10
2018-6-27 08:19:28: Instance 11
2018-6-27 08:19:28: Instance 12
2018-6-27 08:19:28: Instance 13
2018-6-27 08:19:28: Instance 14
2018-6-27 08:19:28: Instance 15
2018-6-27 08:19:28: Instance 16
2018-6-27 08:19:28: Instance 17
2018-6-27 08:19:28: Instance 18
2018-6-27 08:19:28: Instance 19
2018-6-27 08:19:28: Instance 20
Stopped thread instance 6
This is the from main thread instance 6: 102334155
Stopped thread instance 6
Stopped thread instance 7
This is the from main thread instance 7: 102334155
Stopped thread instance 7
Stopped thread instance 8
This is the from main thread instance 8: 102334155
Stopped thread instance 8
Stopped thread instance 9
This is the from main thread instance 9: 102334155
Stopped thread instance 9
Stopped thread instance 10
This is the from main thread instance 10: 102334155
Stopped thread instance 10
Stopped thread instance 11
This is the from main thread instance 11: 102334155
Stopped thread instance 11
Stopped thread instance 12
This is the from main thread instance 12: 102334155
Stopped thread instance 12
Stopped thread instance 13
This is the from main thread instance 13: 102334155
Stopped thread instance 13
Stopped thread instance 14
This is the from main thread instance 14: 102334155
Stopped thread instance 14
Stopped thread instance 15
This is the from main thread instance 15: 102334155
Stopped thread instance 15
Stopped thread instance 16
This is the from main thread instance 16: 102334155
Stopped thread instance 16
Stopped thread instance 17
This is the from main thread instance 17: 102334155
Stopped thread instance 17
Stopped thread instance 18
This is the from main thread instance 18: 102334155
Stopped thread instance 18
Stopped thread instance 19
This is the from main thread instance 19: 102334155
Stopped thread instance 19
Stopped thread instance 20
This is the from main thread instance 20: 102334155
Stopped thread instance 20

The code:

main.js

require('events').EventEmitter.prototype._maxListeners = 100;
require('events').EventEmitter.defaultMaxListeners = 100;
process.setMaxListeners(100)
const {
    Worker, isMainThread, parentPort, workerData
} = require('worker_threads');

const intervalCounter = {}

let instance = 0

const clearMainThread = (thisInstance) => {
    clearInterval(intervalCounter[thisInstance])
    console.log(`Stopped thread instance ${thisInstance}`)
}

const threads = () => {
    return new Promise((resolve, reject) => {
        const thisInstance = ++instance

        intervalCounter[thisInstance] = setInterval(() => {
            console.log(`${new Date().toLocaleString()}: Instance ${thisInstance}`)
        }, 1000)


        const worker = new Worker(`${__dirname}/thread.js`, {
            workerData: {
                instance: thisInstance
            },
        });
        worker.on('message', (data) => {
            clearMainThread(thisInstance)
            console.log(data)
            resolve(data)
        });
        worker.on('error', (err) => {
            clearMainThread(thisInstance)
            console.log(err)
            reject(err)
        });
        worker.on('exit', (code) => {
            clearMainThread(thisInstance)
            if (code !== 0)
                reject(new Error(`Worker stopped with exit code ${code}`));
        })
    })
}

const threadCounts = 20
for (let threadCount = 0; threadCount < threadCounts; threadCount++) {
    console.log(`Spawning thread ${instance + 1}`)
    threads()
}

thread.js

require('events').EventEmitter.prototype._maxListeners = 100;
require('events').EventEmitter.defaultMaxListeners = 100;
process.setMaxListeners(100)
const {
    Worker, isMainThread, parentPort, workerData
} = require('worker_threads');

function fib(n) {
    //  console.log(`Count instance ${workerData.instance}: fib(${n}) `)
    if (n > 1) {
        return fib(n - 1) + fib(n - 2)
    } else {
        return n;
    }
}
const fibResult = fib(40)
//console.log(`This is the from thread: ${fibResult}`)
parentPort.postMessage(`This is the from main thread instance ${workerData.instance}: ${fibResult}`);

What is weird is that if I change the setInterval ping from 1000 to 2000 ms, it shows no leak:

C:\Users\patrikx3\Projects\patrikx3\play\scripts\worker-thread>node-thread main.js
Spawning thread 1
Spawning thread 2
Spawning thread 3
Spawning thread 4
Spawning thread 5
Spawning thread 6
Spawning thread 7
Spawning thread 8
Spawning thread 9
Spawning thread 10
Spawning thread 11
Spawning thread 12
Spawning thread 13
Spawning thread 14
Spawning thread 15
Spawning thread 16
Spawning thread 17
Spawning thread 18
Spawning thread 19
Spawning thread 20
2018-6-27 08:26:26: Instance 1
2018-6-27 08:26:28: Instance 2
2018-6-27 08:26:29: Instance 3
2018-6-27 08:26:30: Instance 4
2018-6-27 08:26:31: Instance 5
2018-6-27 08:26:32: Instance 6
2018-6-27 08:26:33: Instance 7
2018-6-27 08:26:34: Instance 8
2018-6-27 08:26:35: Instance 9
2018-6-27 08:26:36: Instance 10
2018-6-27 08:26:36: Instance 11
2018-6-27 08:26:37: Instance 12
2018-6-27 08:26:37: Instance 13
2018-6-27 08:26:38: Instance 14
2018-6-27 08:26:38: Instance 15
2018-6-27 08:26:39: Instance 16
2018-6-27 08:26:39: Instance 17
2018-6-27 08:26:39: Instance 18
2018-6-27 08:26:40: Instance 19
2018-6-27 08:26:40: Instance 20
2018-6-27 08:26:40: Instance 1
2018-6-27 08:26:40: Instance 2
2018-6-27 08:26:40: Instance 3
2018-6-27 08:26:40: Instance 4
2018-6-27 08:26:40: Instance 5
2018-6-27 08:26:40: Instance 6
2018-6-27 08:26:40: Instance 7
2018-6-27 08:26:40: Instance 8
2018-6-27 08:26:40: Instance 9
2018-6-27 08:26:40: Instance 10
2018-6-27 08:26:40: Instance 11
2018-6-27 08:26:40: Instance 12
2018-6-27 08:26:40: Instance 13
This is the from main thread instance 1: 102334155
This is the from main thread instance 2: 102334155
This is the from main thread instance 3: 102334155
This is the from main thread instance 5: 102334155
This is the from main thread instance 4: 102334155
This is the from main thread instance 6: 102334155
This is the from main thread instance 7: 102334155
This is the from main thread instance 8: 102334155
This is the from main thread instance 9: 102334155
This is the from main thread instance 10: 102334155
This is the from main thread instance 11: 102334155
This is the from main thread instance 12: 102334155
This is the from main thread instance 13: 102334155
This is the from main thread instance 14: 102334155
This is the from main thread instance 15: 102334155
This is the from main thread instance 16: 102334155
This is the from main thread instance 17: 102334155
This is the from main thread instance 18: 102334155
This is the from main thread instance 19: 102334155
This is the from main thread instance 20: 102334155
@p3x-robot

p3x-robot commented Jun 27, 2018

I tested a lot, but at about 50 threads it freezes Windows.
I think threads are still bad; I still prefer cluster and processes...
It blocks the whole system; maybe on Linux it might be better.
Though I was using fibonacci(50), which is huge, but yes, it uses all cores in one process.

I think the Java and C# people will understand threads are kaka...

@p3x-robot

p3x-robot commented Jun 27, 2018

I have tested many benchmarks, and threads are faster for computing 🥇. Not faster with many threads, but the memory usage is smaller, so threads are totally viable. 💯

@larshp larshp referenced this pull request Jun 27, 2018

Open

node, threads vs cluster #216

@idibidiart

idibidiart commented Jun 28, 2018

As the author noted, this is work in progress and still in the experimental stage... you can certainly have more threads than the much heavier Node.js processes/instances on any given machine... threads give you shared memory, with its pros and cons... without threads, Node.js is designed for I/O-bound work like fetching something from a db... with threads it can do CPU-bound work (potentially in parallel, if you design it that way) outside the main thread, so Node can remain responsive to I/O while processing in the background... but even threads are limited by the number of cores and the memory available...

@ChALkeR

Member

ChALkeR commented Jul 23, 2018

@addaleax This landed without proper error documentation.
ERR_TRANSFERRING_EXTERNALIZED_SHAREDARRAYBUFFER is undocumented, but is implemented and used.

@benjamingr

Member

benjamingr commented Jul 23, 2018

@ChALkeR good catch #21947

@iravishah

iravishah commented Sep 24, 2018

When can I expect workers in an LTS release? Eagerly waiting...

@jasnell

Member

jasnell commented Sep 24, 2018

Workers are likely to be experimental for a while. They'll be in 10.x after it goes LTS but still as experimental.

@iravishah

iravishah commented Sep 24, 2018

Thanks 👍

blattersturm added a commit to citizenfx/node that referenced this pull request Nov 3, 2018

src: cleanup per-isolate state on platform on isolate unregister
Clean up once all references to an `Isolate*` are gone from the
`NodePlatform`, rather than waiting for the `PerIsolatePlatformData`
struct to be deleted since there may be cyclic references between
that struct and the individual tasks.

PR-URL: nodejs#20876
Reviewed-By: Gireesh Punathil <gpunathi@in.ibm.com>
Reviewed-By: Benjamin Gruenbaum <benjamingr@gmail.com>
Reviewed-By: Shingo Inoue <leko.noor@gmail.com>
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
Reviewed-By: Tiancheng "Timothy" Gu <timothygu99@gmail.com>
Reviewed-By: John-David Dalton <john.david.dalton@gmail.com>
Reviewed-By: Gus Caplan <me@gus.host>

blattersturm added a commit to citizenfx/node that referenced this pull request Nov 3, 2018

src: remove unused fields isolate_
Currently the following compiler warnings are generated:

In file included from ../src/node_platform.cc:1:
../src/node_platform.h:83:16:
warning: private field 'isolate_' is not used [-Wunused-private-field]
  v8::Isolate* isolate_;
               ^
1 warning generated.

This commit removes this unused private member.

PR-URL: nodejs#20876
Reviewed-By: Gireesh Punathil <gpunathi@in.ibm.com>
Reviewed-By: Benjamin Gruenbaum <benjamingr@gmail.com>
Reviewed-By: Shingo Inoue <leko.noor@gmail.com>
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
Reviewed-By: Tiancheng "Timothy" Gu <timothygu99@gmail.com>
Reviewed-By: John-David Dalton <john.david.dalton@gmail.com>
Reviewed-By: Gus Caplan <me@gus.host>
Reviewed-By: Gus Caplan <me@gus.host>