lib,src: eagerly exit process on unhandled promise rejections #12734

Closed
@Fishrock123
Member

Fishrock123 commented Apr 28, 2017

This PR is an alternate form of #12010, which takes a different approach to providing a good default for unhandled promise rejections.

This makes unhandled promise rejections exit the process through the usual exit handler if a promise is not handled by the time the 'unhandledRejection' event occurs AND no event handler exists for 'unhandledRejection'.

This is as per the current deprecation message.
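As a sketch of the proposed contract (a hypothetical illustration of the PR's semantics, not code from the patch): installing an 'unhandledRejection' listener opts a program out of the fatal default.

```javascript
// Hypothetical illustration of the proposed default: with no
// 'unhandledRejection' listener, a rejection still unhandled at the end
// of the tick would exit the process. Installing a listener opts out.
const seen = [];

process.on('unhandledRejection', (reason) => {
  // The process keeps running; we merely observe the rejection.
  seen.push(reason.message);
});

Promise.reject(new Error('boom')); // no .catch(), never awaited

setImmediate(() => {
  // The event fires once the microtask queue has drained.
  console.log('observed rejections:', seen);
});
```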

This no longer waits, or attempts to wait, for GC. The reasons are summed up in detail in @chrisdickinson's lengthy post in the previous thread.

The change is also partially due to GC handlers having potential issues with emitting the process 'exit' event back to JavaScript for regular exception cleanup. This approach avoids that problem.

In addition, this publicly exposes (docs & naming pending) process.promiseFatal(), allowing custom promise implementations to implement identical behavior when a promise should exit the process.
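A rough sketch of how a third-party promise library might use the proposed hook. process.promiseFatal() is the API this PR adds; its name, docs, and exact signature were still pending and it does not exist in released Node.js, so this sketch feature-detects it and the argument it passes is an assumption.

```javascript
// Sketch only: process.promiseFatal() is the hook proposed by this PR
// and is not present in released Node.js, so we feature-detect it.
function onLibraryRejectionUnhandled(reason) {
  if (typeof process.promiseFatal === 'function') {
    process.promiseFatal(reason); // exit through the usual fatal path
    return 'fatal';
  }
  // Fallback for runtimes without the proposed API.
  console.error('unhandled rejection:', reason.message);
  return 'reported';
}

console.log(onLibraryRejectionUnhandled(new Error('library rejection')));
```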

Edit: as a note I am not opposed to keeping the warnings behavior as an option, e.g. with a flag.

Refs: #12010
Refs: #5292
Refs: nodejs/promises#26
Refs: #6355
PR-URL: #6375

Once again, cc @nodejs/ctc, @chrisdickinson, & @benjamingr.

CI: https://ci.nodejs.org/job/node-test-pull-request/7738/

Checklist
  • make -j4 test (UNIX), or vcbuild test (Windows) passes
  • tests and/or benchmarks are included
  • documentation is changed or added
  • commit message follows commit guidelines
Affected core subsystem(s)

lib, src, promise

+
+ Local<Promise> promise = args[0].As<Promise>();
+
+ CHECK(promise->State() == Promise::PromiseState::kRejected);


@addaleax

addaleax Apr 29, 2017

Member

tiny nit: CHECK_EQ? :)


+void InternalFatalException(v8::Isolate* isolate,
+ v8::Local<v8::Value> error,
+ v8::Local<v8::Message> message,
+ bool from_promise);


@addaleax

addaleax Apr 29, 2017

Member

This isn’t used outside of node.cc, right?


+ gc();
+ gc();
+ gc();
+ /* eslint-enable no-undef */


@addaleax

addaleax Apr 29, 2017

Member

Same comment as in the other PR(s?): Do you need the linter comments if you use global.gc() instead?
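For reference, the distinction the comment is pointing at (assuming the test runs under --expose-gc, as GC-related tests do):

```javascript
// With --expose-gc, V8 exposes a global gc() function. Referencing it
// as a bare identifier trips eslint's no-undef rule, hence the
// disable/enable comments; going through the 'global' object avoids them.
if (typeof global.gc === 'function') {
  global.gc(); // no lint directive needed
} else {
  console.log('start node with --expose-gc to enable global.gc()');
}
```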


@addaleax


addaleax Apr 29, 2017

Member

Edit: as a note I am not opposed to keeping the warnings behavior as an option, e.g. with a flag.

I think I would like that, yes. It doesn’t seem like it would be hard to edit this PR for that?


@jasnell


jasnell Apr 29, 2017

Member

I'm liking this approach better but I'll have to dig through the details a bit later. In general, +1 on this tho.


@domenic


domenic Apr 29, 2017

Member

I don't want to get re-involved in this whole promises-in-node thing very much, but I hope I can drop some words here and maybe influence some people who will continue this discussion.

I am very against this, as it misunderstands the idea that promise exception handling can happen asynchronously. There is no good analogy here with sync exception handling. Of course it's reasonable to abort the process if a sync exception isn't handled synchronously; that's the only way it could be handled. But it's not reasonable to abort the process if an async exception isn't handled synchronously.

Collection on GC seemed like a great way to go for me, since that's when you actually know something is not handled. The objections against it are not very strong, IMO. You can make it opt-in if there's a performance concern. The GC thing doesn't work for other promise libraries, but neither does this PR, so I don't understand that argument at all. (It just provides a utility function for them, but that can be done in both cases.)

The last thing I want to draw attention to is that this essentially breaks any attempts at doing parallelism with async/await, as illustrated in #12010 (comment). This seems pretty terrible to me, as that coding pattern is a great one that people will expect to work across all environments and will use increasingly as async/await gains ground.

I think this crash thing might be fine as an opt-in, so people who really want it can use it instead of installing an npm package that gives you the right hook. But not as a default; it just breaks the model of promises too much.
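The parallelism pattern at issue can be reduced to the following hypothetical sketch (not code from #12010; the 'unhandledRejection' listener is installed only so the demo survives under a crash-on-unhandled default, using the opt-out the PR provides):

```javascript
// Start two operations in parallel, then await them in sequence. pb
// rejects while we are still awaiting pa, so it is briefly "unhandled"
// even though the code does handle it. Under this PR's default that
// window would be fatal, so we install a listener to keep running.
process.on('unhandledRejection', () => { /* observed; keep running */ });

function a() { return new Promise((res) => setTimeout(() => res('a'), 50)); }
function b() { return Promise.reject(new Error('b failed')); }

async function main() {
  const pa = a();
  const pb = b();      // rejected immediately, awaited only later
  const ra = await pa; // pb has no handler for the duration of this await
  let msg;
  try { await pb; } catch (err) { msg = err.message; }
  return [ra, msg];
}

main().then((out) => console.log(out)); // → [ 'a', 'b failed' ]
```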


@Jamesernator


Jamesernator Apr 30, 2017

I agree with @domenic; exiting because a handler wasn't added synchronously doesn't really make sense. There are lots of times you want to handle a rejection asynchronously, for example:

const request = require('request-promise')

async function concurrentRequest(urls, concurrency=5) {
    // Start off the first batch of requests;
    // we don't care about errors yet
    const urlQueue = [...urls]
    const promises = urlQueue.splice(0, concurrency).map(request)

    const results = []
    for (const url of urls) {
        try {
            // wait for a promise, now is when we actually care if it failed
            // or not
            const data = await promises.shift()
            results.push({
                url,
                data,
                isError: false
            })
        } catch (error) {
            results.push({
                url,
                error,
                isError: true
            })
        }
        // Put another promise on the queue
        if (urlQueue.length) {
            promises.push(request(urlQueue.shift()))
        }
    }
    return results
}
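The queue pattern above can be exercised without request-promise by stubbing the network call (fakeRequest below is an illustrative stand-in, not part of the original example):

```javascript
// Self-contained sketch of the queue pattern above, with the network
// call stubbed out ('fakeRequest' stands in for request-promise).
function fakeRequest(url) {
  return url.includes('bad')
    ? Promise.reject(new Error(`failed: ${url}`))
    : Promise.resolve(`body of ${url}`);
}

async function concurrentRequest(urls, concurrency = 2) {
  const urlQueue = [...urls];
  const promises = urlQueue.splice(0, concurrency).map(fakeRequest);
  const results = [];
  for (const url of urls) {
    try {
      // Await in order; only now do we care whether a request failed.
      const data = await promises.shift();
      results.push({ url, data, isError: false });
    } catch (error) {
      results.push({ url, error, isError: true });
    }
    // Keep the queue topped up to the concurrency limit.
    if (urlQueue.length) {
      promises.push(fakeRequest(urlQueue.shift()));
    }
  }
  return results;
}

concurrentRequest(['a', 'bad-b', 'c']).then((results) => {
  console.log(results.map((r) => r.isError)); // → [ false, true, false ]
});
```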

However with this change I'd need to add empty .catch handlers everywhere I create a Promise for absolutely no good reason:

const request = require('request-promise')

const pointlessNoopOnlyForNode = _ => {}

async function concurrentRequest(urls, concurrency=5) {
    // Start of the first lot of requests
    // we don't care about errors yet
    const urlQueue = [...urls]
    const promises = urlQueue.splice(0, concurrency).map(request)
        .map(promise => promise.catch(pointlessNoopOnlyForNode))

    const results = []
    for (const url of urls) {
        try {
            // wait for a promise, now is when we actually care if it failed
            // or not
            const data = await promises.shift()
            results.push({
                url,
                data,
                isError: false
            })
        } catch (error) {
            results.push({
                url,
                error,
                isError: true
            })
        }
        // Put another promise on the queue
        if (urlQueue.length) {
            promises.push(
                request(urlQueue.shift())
                .catch(pointlessNoopOnlyForNode)
            )
        }
    }
    return results
}

And personally I think it's even worse than that: by adding a noop handler prematurely you might actually mask bugs that the GC approach could have caught. If someone adds a noop handler and then genuinely forgets to handle the rejection, it'll just be swallowed anyway, whereas if we rely on garbage collection to check, a rejection that is never actually handled will still surface as an error.


Another point I'd like to make is that checking synchronously after a Promise rejection makes no sense, because someone could still add a handler asynchronously as long as they happen to do it before the rejection occurs.

For example, this will fail only ~50% of the time, whereas based on this solution I'd expect it to fail 100% of the time. Essentially, all this behavior does is add bizarre race conditions that aren't intuitive, predictable, or useful.

function delay(time) {
    return new Promise(resolve => setTimeout(resolve, time))
}


async function example() {
    const p = new Promise((_, reject) => {
        setTimeout(reject, Math.random()*2000)
    })
    await delay(1000)
    // This will definitely cause bugs: sometimes asynchronous handling
    // will work perfectly fine. This isn't a real example, but this sort
    // of race condition will definitely happen in real code.
    p.catch(_ => {})
    console.log("Reached")
}

example()

@ljharb


ljharb Apr 30, 2017

This will break the ecosystem. Unhandled rejections are normal and should not exit the process.


@benjamingr


benjamingr Apr 30, 2017

Member

@ljharb note that this is only on GC of an unhandled rejection.


@addaleax


addaleax Apr 30, 2017

Member

@benjamingr That is the case for #12010, this PR is the nextTick one ;)


@benjamingr


benjamingr Apr 30, 2017

Member

@addaleax Oh, then I agree with @domenic, exiting by default on unhandled rejections is not the way to go, although admittedly not doing so would be worse for post-mortem reasons.

I think that we should give the ecosystem some time to stabilize now that async/await is out, focus on making our own APIs work with async/await (like util.promisify) and see how common/uncommon unhandled-in-a-microtick rejections can be.

If people complain a lot about concurrency, we might even have to relax detection - but I assume we won't.

The only positive thing about this PR is that users can easily opt out of it by adding a handler. It would probably work for my code base (I don't buy the particular examples above, since no one uses .catch inside an async function like that) - I think it won't work for some others, though.


@Jamesernator


Jamesernator May 1, 2017

@benjamingr Can you explain what you mean when you say it would be good for post-mortem reasons? Because in the case of GC you definitely know the value wasn't used.

Regardless of whether you use async functions, you can essentially imagine Promises as having an implied stack trace (there are even ideas on how to formalize this), but what's important to realize is that values propagate in very different ways. A synchronous exception propagates up the stack until either it's handled or it reaches the top of the stack (in the case of Node this causes a process exit, because it clearly wasn't handled).

With Promises, however, you can listen to a value any time you want, and this is often useful. Sure, my final example in the post above isn't realistic in async functions, but the first one is (I just wanted to give the absolute minimal example to demonstrate that you get race conditions where you're actually racing the runtime, not even your own logic).

The first example I provided is mostly real: it's a simplified version of a piece of code from a scraper I wrote, which is perfectly sound and reasonable, yet this pull request would make it not so. Sure, I can just add .catch(pointlessNoopToSatisfyNode) to prevent that behavior, but it doesn't really make sense to me why I'd need to add that in lots of places just to get concurrency. Now, if one of those requests were accidentally skipped somehow, it would actually be helpful to know it had simply vanished (which did happen in initial versions: the logic wasn't quite a zip, so the last few requests were accidentally skipped).

The thing is, Promises (and async functions) aren't just about turning callbacks into synchronous-looking code, despite people often saying "Now you can write async code like sync code" (I'm not saying anyone here is saying that, but it's certainly a mindset I've seen arising). If that were true, it'd be safe to actually block for real on a Promise. Within some async functions that might actually be fine, but for the most part you probably want to use Promises in an environment where lots of tasks are running concurrently, and in such cases it's important that consumers of Promises can decide to consume them when they're ready.


The worst bit for me about this pull request is that I actually thought the deprecation warning was always referring to garbage collection, given that the message states promise rejections that are not handled will terminate the Node.js process - which I took to mean that I simply had to add a handler at some point. I only learnt today that there's even another warning if you add the handler after the promise has already been rejected, because I've never actually hit it (for example, in the case of the web scraper, the extra work it does is a lot faster than the time a request takes to finish, so I managed to successfully race the warning Promise rejection was handled asynchronously).


@benjamingr


benjamingr May 1, 2017

Member

@Jamesernator

@benjamingr Can you explain what you mean when you say it would be good for post-mortem reasons? Because in the case of GC you definitely know the value wasn't used.

Good question, and sure.

When you exit after a microtick (rather than wait for GC) then I/O has no chance to come in and "corrupt" the heap. When you wait for GC, an arbitrary amount of modifications can happen in the meantime - so the core dump you might get at the end is a lot less useful.

That said, I am not in favor of breaking the abstraction in order to do this.


@benjamingr


benjamingr May 1, 2017

Member

Also, I've been meaning to suggest changing the deprecation error myself; the naming of it is very poor - "unhandled rejections" aren't "deprecated". This is my mistake twice over: once for not arguing for a better message when the error was introduced in 6.6, and once for not fixing it yet even though @domenic has pointed out it's a problematic message (which I agree with).


@mcollina

Really LGTM.

@Fishrock123


Fishrock123 May 1, 2017

Member

@benjamingr So are you saying your position has changed since #12010 (comment)? (It's fine if it has)... or else I am misunderstanding something.


@chrisdickinson


chrisdickinson May 1, 2017

Contributor

This is the approach I'd like to see Node take with regards to promises.

Every time I've had an unhandled rejection log in one of our services, it represented a programmer error. The service ends up in a bad state — my desire would be for Node to crash (within a tick) with a log of the reason so that upstart can bring the process back up. I do not need the stack trace intact in the core (or really, the core itself!)

It is always possible to write code using promises such that it doesn't trip the unhandled rejection handler, even in async functions of high concurrency. Either the author can explicitly mark the promise as handled, by creating a no-op derived promise via .catch(() => {}), or the author can await Promise.all(<all concurrent operations>). Aesthetics aside, this approach is feasible and does not reduce the expressiveness of Promises or async functions. This approach doesn't misunderstand that Promises can be asynchronously handled, rather, it accepts that and asks that authors take the necessary steps to indicate which promises they expect to handle at a later time.
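The two idioms mentioned can be sketched like this (illustrative values only; the error-mapping .catch is one way to make the no-op derived promise useful):

```javascript
// Idiom 1: explicitly mark each promise as handled by deriving a
// .catch() promise; the derived promise never rejects, so nothing
// trips the unhandled-rejection check, and the batch stays awaitable
// one at a time.
const tasks = [
  Promise.resolve(1),
  Promise.reject(new Error('nope')),
  Promise.resolve(3),
];
const marked = tasks.map((p) => p.catch((err) => ({ failed: err.message })));

Promise.all(marked).then((out) => console.log(out));
// → [ 1, { failed: 'nope' }, 3 ]

// Idiom 2: await the batch as a whole; Promise.all attaches a handler
// to every task synchronously, so no individual rejection goes unhandled.
Promise.all(tasks).catch((err) => console.log('batch failed:', err.message));
```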


@ljharb


ljharb May 1, 2017

However, that's not a step that is required in the JavaScript language, nor in browsers - so this is asking everybody who wants to write interoperable code - even non-node users - to complete extra steps. This will also mean that code written with the browser in mind that already works in node, might suddenly not work in node without modification. Please don't undersell the interoperability and maintenance burden that this change will impose, even if node ends up deciding to do it.


@chrisdickinson


chrisdickinson May 2, 2017

Contributor

Please don't undersell the interoperability and maintenance burden that this change will impose, even if node ends up deciding to do it.

I wasn't trying to! Just because we disagree, it doesn't mean I'm trying to sell you a monorail.

FWIW, I think this interoperability difference is a reasonable ask, given how close it is to the status quo: it's perfectly cromulent to write browser code that throws exceptions with the expectation that other operations on the page will continue to work — which won't work in Node without installing a process.on('uncaughtException') handler. If you want to rely on unhandledRejection in your program, you can add process.on('unhandledRejection'). In this fashion we start Node at safe, crash-early behavior and allow authors to change the behavior if the tradeoffs better suit them.
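The analogy can be demonstrated directly (a sketch; the timer delays and messages are illustrative):

```javascript
// A thrown exception with no 'uncaughtException' listener is fatal in
// Node; installing a listener opts out. The PR proposes the same
// contract for 'unhandledRejection'.
const caught = [];
process.on('uncaughtException', (err) => {
  caught.push(err.message); // process survives because a listener exists
});

setTimeout(() => { throw new Error('kaboom'); }, 10);

setTimeout(() => {
  console.log('caught:', caught);
}, 50);
```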


@ljharb


ljharb May 2, 2017

The reality, however, is that the vast majority of users (who don't care about the debugging/stack trace concerns motivating this PR) who see that error message (likely due to a dep, or due to their own code) will either:

  1. if the message is helpful, they will blindly add process.on('unhandledRejection'), or will just always blindly start node with ~--no-exit-on-unhandled-rejection, or will add .catch() to the promise in question
  2. if the message is unhelpful, they will google it and end up on an SO question whose answer tells them to blindly add process.on('unhandledRejection'), or to just always blindly start node with ~--no-exit-on-unhandled-rejection, or to add .catch() to the promise in question

It is highly unlikely that the majority of users will actually fix the problem, as opposed to above where they'll simply work around it.

Conversely, if this PR does not go in, and an error occurs that logs a slightly-unhelpful stack trace but does not exit the process, the users who wish to debug this can enable --exit-on-unhandled-rejection or similar, and opt in to this behavior for when the error reoccurs. I would wager that most users who will care about this situation will enable the flag blindly anyways - but in that scenario, the users who blindly enable the flag will be knowingly opting themselves into a world where previously-interoperable JS code breaks in node.

I don't think there's any disagreement that in either scenario ("exit on unhandled+GC" or "exit on unhandled") there should be an inverse command line option slash code workaround - I'm suggesting that as long as both options are possible, the default should be the thing that makes the most sense for the majority of users.

My position is that what makes the most sense is "the way JS is supposed to/intended to operate, and already operates everywhere", not "the way a group of vocal node users wish Promises would have behaved in the first place".

ljharb commented May 2, 2017

The reality, however, is that the vast majority of users who see that error message (likely due to a dep, or to their own code), and who don't care about the debugging/stack-trace concerns motivating this PR, will either:

  1. if the message is helpful, will blindly add process.on('unhandledRejection') or will just always blindly start node with ~--no-exit-on-unhandled-rejection, or will add .catch() to the promise in question
  2. if the message is unhelpful, they will google it, and end up on an SO question where the answer will tell them to blindly add process.on('unhandledRejection') or will just always blindly start node with ~--no-exit-on-unhandled-rejection, or will add .catch() to the promise in question

It is highly unlikely that the majority of users will actually fix the problem, as opposed to above where they'll simply work around it.

Conversely, if this PR does not go in, and an error occurs that logs a slightly-unhelpful stack trace but does not exit the process, the users who wish to debug this can enable --exit-on-unhandled-rejection or similar, and opt in to this behavior for when the error reoccurs. I would wager that most users who will care about this situation will enable the flag blindly anyways - but in that scenario, the users who blindly enable the flag will be knowingly opting themselves into a world where previously-interoperable JS code breaks in node.

I don't think there's any disagreement that in either scenario ("exit on unhandled+GC" or "exit on unhandled") there should be an inverse command line option slash code workaround - I'm suggesting that as long as both options are possible, the default should be the thing that makes the most sense for the majority of users.

My position is that what makes the most sense is "the way JS is supposed to/intended to operate, and already operates everywhere", not "the way a group of vocal node users wish Promises would have behaved in the first place".

@Jamesernator

Jamesernator commented May 2, 2017

Yeah I'd be happy to see this as opt-in behaviour.

Something I'm still not clear on, given the deprecation message about adding handlers asynchronously: with this PR, would this be sufficient to let me handle Promises asynchronously without adding a .catch handler to every single Promise I want to handle async?

For example:

process.on('unhandledRejection', () => {})

const reject = Promise.reject(3)

setTimeout(_ => {
    reject.catch(err => console.log("Hello"))
}, 1000)

Currently with that code I get this:

(node:19937) PromiseRejectionHandledWarning: Promise rejection was handled asynchronously (rejection id: 1)
Hello

With this PR will this continue to print Hello (and perhaps get rid of the warning too)? If so then it's not nearly as terrible as I think it is currently (still a bit of a pain, but manageable as I could just shove that in to stop sudden exits from both my own code and libraries too).

@mcollina

Member

mcollina commented May 2, 2017

TL;DR Eagerly exiting the process on unhandled rejection is critical to guarantee that all error conditions are handled for every single user that a server is handling.

My position is that what makes the most sense is "the way JS is supposed to/intended to operate, and already operates everywhere", not "the way a group of vocal node users wish Promises would have behaved in the first place".

There is a fundamental difference between JavaScript on the browser and JavaScript on the server. JavaScript on the browser deals with one single user at a time: the user will refresh the page often in most cases, and even if the application leaks memory it is not a huge deal. JavaScript on the server can deal with thousands of concurrent users, and the process is long lived. Understanding the difference between these two contexts is critical to developing reliable and resilient isomorphic applications.

Promises were developed with the browser in mind. In the browser, even throw new Error('eeehi') will not stop your application. If you click on a different button, it will keep going. Node.js is already different, because it has been proven that continuing operations on the server after an exception is a bad idea. If you throw new Error('eeehi'), your program will crash: this PR (or an equivalent one) makes promises equal to throw. If you want to ignore an error, you will have to be explicit about it.

I think a switch to --no-exit-on-unhandled-rejection is good to have for interoperability.

@mikeal

Member

mikeal commented May 2, 2017

In the browser, even throw new Error('eeehi') will not stop your application. If you click on a different button, it will keep going. Node.js is already different, because it has been proven that continuing operations on the servers in case of an exception is a bad idea.

This is all true, but it also shows that we are discussing these Promise features differently than we discussed similar features in the past.

The fact that throw new Error() kills the process is a feature of Node.js that we've added.

The fact that emit('error', new Error()) will convert to a throw if unhandled is a feature of Node.js that we've added.

Yet, when it comes to Promises, we keep arguing about what the behavior should be in terms of "correct and incorrect." We are throwing use cases back and forth in terms of being in-line with Node.js current behavior as if that behavior is the default for JavaScript, which it is not.

If we want to change the default behavior of unhandled errors in Promises, that's a feature of Node.js, not a change to the language. If people want it to behave differently they should be able to disable it (similar to how you can trap process errors and avoid exiting if you want that behavior to be different than the JS default).

We don't need to re-litigate "who Promises were written for" or what the default behavior of the browser "should have been" if we concede that Node.js routinely adds features for error handling that don't exist in the browser. In fact, you could argue that proper support for Promises in Node.js must include features for errors similar to those we've already added. If we didn't add features and behaviors like this Promises would be second-class compared to what we've added for standard errors already.

@domenic

Member

domenic commented May 2, 2017

I feel like a point is being missed here in all this analogizing, which is that the analogy doesn't hold. Crashing the process on promise rejections that aren't handled synchronously is not analogous to the feature Node.js added where you crash the process on exceptions that aren't handled synchronously. As I said earlier,

There is no good analogy here with sync exception handling. Of course it's reasonable to abort the process if a sync exception isn't handled synchronously; that's the only way it could be handled. But it's not reasonable to abort the process if an async exception isn't handled synchronously.

An attempt to port the intuition about synchronous exceptions into the world of asynchronous rejections is just conceptually wrong, and misunderstands how promises work and are meant to work, including their lineage, which goes back to many languages beyond JavaScript.

If the goal is to crash the process when nothing in the program has handled an error, then the correct analogy is on GC, because that's the only time you actually know for a fact nothing has handled it. If you artificially constrain programs to react to promises synchronously instead of asynchronously, you're not creating an analogous system for promises, you're just creating an entirely new system that doesn't make sense for promises at all.

@mcollina

Member

mcollina commented May 2, 2017

@mikeal thanks, I'm with you 100%.

If the goal is to crash the process when nothing in the program has handled an error, then the correct analogy is on GC, because that's the only time you actually know for a fact nothing has handled it. If you artificially constrain programs to react to promises synchronously instead of asynchronously, you're not creating an analogous system for promises, you're just creating an entirely new system that doesn't make sense for promises at all.

I would highly prefer to have that on the next tick, but I'm actually ok in having it at GC time if it is simpler to agree on. I think exiting on 'unhandledRejection' is something we must ship asap.

@benjamingr

Member

benjamingr commented May 2, 2017

I'd like to point out that .NET changed this behavior from "throw on GC" to "emit an event but don't throw" (http://stackoverflow.com/questions/21648094/task-unhandled-exceptions):

In .NET 4:

If you do not wait on a task that propagates an exception, or access its Exception property, the exception is escalated according to the .NET exception policy when the task is garbage-collected.

In .NET 4.5, after adding async/await, which introduces problematic use cases like:

async function two() {
  let first = asyncFn();
  let two = await asyncFn2();
  if (two !== 2) return "nope";
  try {
    let three = await asyncFn3();
    let one = await first;
    return "something";
  } catch (e) {
    console.log("OOPS");
  }
}

Where functions might never be awaited, or awaited much later:

To make it easier for developers to write asynchronous code based on Tasks, .NET 4.5 changes the default exception behavior for unobserved exceptions. While unobserved exceptions will still cause the UnobservedTaskException event to be raised (not doing so would be a breaking change), the process will not crash by default. Rather, the exception will end up getting eaten after the event is raised, regardless of whether an event handler observes the exception. This behavior can be configured, though.

So basically, "unhandledRejection" is raised on GC - but the process doesn't exit.


That said, I'm not sure Node should follow the footsteps of other languages that have added async/await and there might be merit to setting up a default and easy to opt out of "unhandledRejection" exit handler. This would also help with the post-mortem debugging flow.

I disagree people add an empty event listener for unhandledRejection when they run into it - based on answering questions on SO and discussing this with people at meetups - process.on("unhandledRejection", e => { throw e; }) is far more common. No popular libraries emit unhandled rejections because of coding style ATM and it's a style enforcement Node gets to make. Also, I do agree that this is analogous to the different behavior in uncaught exceptions where Node crashes and the browser does not.
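The rethrow idiom mentioned above can be spelled out in full. This is a sketch of the common pattern, not code from this PR: rethrowing from the 'unhandledRejection' handler escalates the rejection into an uncaught exception, so the process crashes under Node's existing uncaughtException default instead of continuing silently.

```javascript
// Common userland idiom: escalate every unhandled rejection to a crash.
// Throwing here surfaces as an uncaught exception, which terminates the
// process unless an 'uncaughtException' handler is also installed.
process.on('unhandledRejection', (err) => {
  throw err;
});
```

This is effectively a userland version of the default this PR proposes, which is part of the argument that crash-on-unhandled is what users already reach for.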

@benjamingr

Member

benjamingr commented May 2, 2017

Aside from my comments above, I think the path forward is to bring the behavior to a CTC vote (hopefully with @domenic agreeing to participate, which would be a nice addition to the discussion) - a vote is the quickest way to resolve this.

I think @Fishrock123, @addaleax are doing a great job pushing for more complete promise support in Node - and I'd love for them to attend that meeting too (that is, let's not set it up on a date where either won't be able to attend).

@chrisdickinson

Contributor

chrisdickinson commented May 2, 2017

FWIW, my feelings on crash on GC behavior remain from over here: #12010 (comment)

@Jamesernator:

Re: your example could be rewritten as follows:

process.on('unhandledRejection', () => {})

const reject = Promise.reject(3)

reject.catch(() => {}) // "I intend to add a catch handler later."

setTimeout(_ => {
    reject.catch(err => console.log("Hello"))
}, 1000)

It would no longer hit unhandledRejection, but the rest of the program would work as desired.

(This is what I mean by "marking" a promise as "to be handled later" — a separate promise is created that catches until the primary catch handler is added asynchronously. This ensures that only truly unexpected unhandled rejections bubble to top level.)
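The "marking" idea could be wrapped in a tiny helper. The name `expectLater` below is purely illustrative (not an API from this PR or from Node): it attaches a no-op catch branch so the rejection counts as handled now, and returns the original promise so a real handler can be added asynchronously.

```javascript
// Hypothetical helper for the "mark for later handling" pattern described
// above. The no-op catch creates a separate handled branch; the original
// promise is returned so it can still be caught for real later.
function expectLater(promise) {
  promise.catch(() => {}); // swallow on this branch: "I'll handle it later"
  return promise;
}

const p = expectLater(Promise.reject(new Error('later')));

setTimeout(() => {
  p.catch((err) => console.log('handled asynchronously:', err.message));
}, 100);
```

With this, only promises that were never marked and never caught would reach the top level, which is the property the comment above is after.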

@ljharb

ljharb commented May 3, 2017

@mikeal how could that behavior be kept out of code that's compiled for the browser? Most of that is browserified or webpacked node modules. Are you suggesting that the extra .catch()es that this approach necessitates would be able to be statically compiled out by a browser bundler?

@mikeal

Member

mikeal commented May 3, 2017

@ljharb maybe I'm missing something, but all of the code in this PR is in process handlers and node.cc, none of which are packaged up when code is browserified.

Sure, when the same code runs both in the browser and in Node.js the error behavior will be different. As it has been noted many times, so is the treatment of regular errors between Node.js and the browser.

If you want your code to execute in Node.js you'll need to consider how it handles errors, but nothing should prevent people who depend on the old behavior from continuing to package up those modules for the browser as-is. Additionally, if they want to rely on the old behavior and execute in Node.js they can turn off the feature, but that is not a requirement for people simply packaging modules through Node.js for browser execution.

@ljharb

ljharb commented May 3, 2017

Ah, thanks for clarifying, I'd misunderstood.

The hazard here is indeed about code that works both in node and in the browser, which is quite a bit - I agree that code that is only sent to the browser won't be affected by any node runtime changes.

@benjamingr

Member

benjamingr commented May 3, 2017

@ljharb

@mikeal how could that behavior be kept out of code that's compiled for the browser?

It's pretty simple to do, given that browsers have an onunhandledrejection hook.

@mikeal

What I don't seem to be creative enough to understand is where you need the late rejection behavior to enable specific and necessary patterns for an application.

Let's say I have an HTTP request: I want to fetch the data about the user from the cache, fetch the data about the user's comment from the database, and return them.

async function handleRequest(req) {
  const userData = getUserData(req.params.userid); // note no await, because we want to fetch concurrently
  const commentData = getCommentData(req.params.commentid); 
  let user = await userData;
  if(user.likesPie) {
    return {user, comment: await commentData }
  } else {
    return "invalid user";
  }
}

Here, in one branch the comments aren't even awaited since we don't care about the result. This is not a made-up use case either - I do this in C# pretty often.

To be fair, C# Tasks have SynchronizationContexts and when the synchronization context has ended the task ends.

This might be a viable solution, if we can get all promises bound to a domain (gasp!) or something parallel to domains without the error handling semantics - then we could know when to raise the exception (when the request is done).

@mikeal

Member

mikeal commented May 3, 2017

@benjamingr can't this be accomplished with a try/catch instead of an isValid check? I thought that one of the benefits of moving to await is the use of try/catch?

@benjamingr

Member

benjamingr commented May 3, 2017

@benjamingr can't this be accomplished with a try/catch instead of an isValid check? I thought that one of the benefits of moving to await is the use of try/catch?

In this case user.isValid (the user exists in the cache and is valid) and the user not existing (or the request being invalid, meaning an operational exception) are two different things, but for posterity I've amended the example above to use user.likesPie so it's clear it's not directly related to exceptions.

I'd like to clarify that the above can be fixed with an empty .catch, but it's not a very rare use case or one that requires advanced usage.

I'd also like to add that I'm happily running with "throw on unhandled rejections" as the behavior for my code for over 3 years now, across several apps, and it has yet to bite me (even with JSDOM). I'm just not sure it's the right default for our users.
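For concreteness, here is the empty-.catch fix mentioned above applied to the earlier handleRequest example. getUserData and getCommentData are illustrative stubs added here (the originals were left undefined), with getCommentData made to reject so the fix has something to do:

```javascript
// Stubs standing in for the fetchers from the earlier example; the comment
// fetch is made to fail to exercise the unawaited-rejection case.
const getUserData = async (id) => ({ likesPie: false });
const getCommentData = async (id) => { throw new Error('db down'); };

async function handleRequest(req) {
  const userData = getUserData(req.params.userid);      // no await: fetch concurrently
  const commentData = getCommentData(req.params.commentid);
  commentData.catch(() => {}); // mark: this branch may legitimately go unawaited
  const user = await userData;
  if (user.likesPie) {
    return { user, comment: await commentData };
  }
  return 'invalid user'; // commentData's rejection was already marked handled
}

handleRequest({ params: { userid: 1, commentid: 2 } })
  .then((result) => console.log(result)); // prints "invalid user"
```

The one-line marker is all that changes relative to the original example; without it, the non-pie branch would leave commentData's rejection unhandled and, under this PR, exit the process.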

@Jamesernator

Jamesernator May 3, 2017

@chrisdickinson

The browser will terminate the current stack, but the installed handlers will continue to run. Compare to a Node program, where a crash in one stack will bring down the process in the absence of an uncaughtException handler.

Yes, but that's because browsers like to recover from errors as much as possible; personally I'd still consider an exception that went uncaught in such a way to almost certainly be a bug that needs fixing. With Promises in the browser, not handling a rejection doesn't kill the whole script, because it might still be handled later and lead to completely consistent, error-free code.

If you state that all promises should be immediately handled (even if it's to mark it for later handling), then you can reliably say that a promise won't be handled. If a promise generates an unhandled rejection, the process should crash. In the case that a promise will be asynchronously handled later, the promise can be marked as such at that point.

But it doesn't hold that just because a Promise had a .catch handler attached it'll actually be properly handled; what happens if you then actually forget to asynchronously handle it? All that would be done is that now the Promise has some cruft handler attached to it and you have just as little information about Promise rejection as before.

Further, if you are relying on this behavior, there exists a class of programs that cannot consume your module because they will install a process.on('unhandledRejection', e => { throw e }) handler.

That's self-fulfilling, without this pull request it won't exit though. The thing with this compared to GC is that responsibility shifts to whoever is highest up in the chain, whereas with this approach everyone at every point of the chain needs to add pointless handlers to be able to handle asynchronously.

The cost is that authors must indicate ahead of time that they intend to handle any potential rejections their promises must generate. This aligns nicely with the expectation we already have around event emitters: one must install .on('error') on their emitters, or else the emitter could bring down the process.

I've never used event emitters, but the reason for .on('error') is obvious to me: an emitter doesn't have the ability to pull values from it, whereas a Promise is both a push and pull interface, so I don't think the analogy holds. With a Promise I can always attach a handler later (or many handlers), but I wouldn't be able to do that with an event emitter because the event is simply gone; this simply isn't a point in favor of exiting on unhandledRejection, because I can always register later safely. If event emitters buffered their output then I don't think you'd need to exit on error, because you could always subscribe later and get all the data you missed, but because that isn't the case the error is simply lost (probably a mistake).

And I think it's important to note that Promises are not events; they might indicate an event, but all they really represent is a value that may not be ready yet, and the whole point of creating one is that you intend to use it. Not using a Promise is probably an error (not one I'd like to enforce), but the fact that it can happen asynchronously is irrelevant. It's not like a callback/event emitter where it's either ready or you miss it.

Now the spec doesn't define what an unhandledRejection should do, true, but it does certainly suggest what a typical implementation would be:

A typical implementation of HostPromiseRejectionTracker might try to notify developers of unhandled rejections, while also being careful to notify them if such previous notifications are later invalidated by new handlers being attached.

Which, while I appreciate it only says might, indicates exactly what I think should happen with unhandledRejections: they're not truly unhandled if they get a handler later, hence the suggestion to notify that the previous notification is now invalid. Instead of seeing unhandledRejection as "everything is on fire, we need to panic now", you should see it as "maybe this won't be handled, I'll keep an eye out, but if it does get handled then I can forget about it".
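Node already exposes the event pair needed for this "keep an eye out" interpretation. A rough userland sketch, built on the existing 'unhandledRejection' and 'rejectionHandled' process events (the bookkeeping here is illustrative, not part of this PR):

```javascript
// Userland sketch of "notify, but invalidate if a handler arrives later",
// using Node's existing 'unhandledRejection'/'rejectionHandled' events.
const maybeUnhandled = new Map();

process.on('unhandledRejection', (reason, promise) => {
  maybeUnhandled.set(promise, reason); // provisional notification
});

process.on('rejectionHandled', (promise) => {
  maybeUnhandled.delete(promise); // a handler showed up: invalidate it
});

process.on('exit', () => {
  // only rejections that never got a handler survive to this point
  for (const reason of maybeUnhandled.values()) {
    console.error('Truly unhandled rejection:', reason);
  }
});
```

With this in place, a rejection that picks up a handler later is forgotten, and only rejections that remain unhandled for the whole process lifetime are reported.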


@mikeal I wasn't suggesting that this was being pushed for browsers, but rather that I can't now simply run my Ava tests in Node and get the consistent results that I would in the browser; a good chunk of code doesn't require firing up a browser at all (I haven't tried headless Chrome yet, but personally I don't think I should need to if I'm simply running a few unit tests that don't use any browser APIs).


@benjamingr has an interesting point about how C# works, and actually that SynchronizationContext thing reminds me a lot of a (currently) Stage 0 proposal by @domenic called Zones. I think it's worth taking a look, given that one of the use cases specifically mentions dealing with unhandled Promise rejections. Most importantly, I think, is the fact that each async "stack" (if you can call it that) can be perfectly well defined by wrapping it in a Zone. So you could prevent a lot of the issues I have with this pull request by wrapping things (like async functions) within Zones (perhaps @domenic can explain the ideas behind Zones further?). I definitely think they should be looked into as part of this discussion.


@benjamingr

benjamingr May 3, 2017

Member

@Jamesernator

That's self-fulfilling, without this pull request it won't exit though. The thing with this compared to GC is that responsibility shifts to whoever is highest up in the chain, whereas with this approach everyone at every point of the chain needs to add pointless handlers to be able to handle asynchronously.

To be fair, from my experience it appears to already be the case in the Node ecosystem by a large margin. I've been running this way for years and so far so good.

But it doesn't hold that just because a Promise had a .catch handler attached it'll actually be properly handled; what happens if you then actually forget to asynchronously handle it? All that would be done is that now the Promise has some cruft handler attached to it and you have just as little information about Promise rejection as before.

That's the point of making it the default though, you can still opt out of unhandled rejection crashing with this PR - it's just meant to be a better default than swallowing errors.

I've never used event emitters but the reason for .on('error') is obvious to me, an emitter doesn't have the ability to pull values from it, a Promise is both a push and pull interface so I don't think the analogy holds

What events do in Node is actually very similar to this behavior (that is, no error handler makes it throw); the key difference is that if an error event isn't handled it can't be handled later, even if an error handler is attached afterwards.

Promises are a specific kind of event though. They're as "push and pull" as event emitters - just for one value. The caching is the behavior that's causing the difference here, since, as you later mention, errors in event emitters can't be handled later.


The key point is that this is a valid choice Node gets to make, it doesn't violate a spec, it is not conceptually unsound, and there is no "grave misunderstanding" here.

IMO Node shouldn't opt to throw eagerly here because of how async/await changed how people write asynchronous code. C# changed the behavior because of async/await. That said, C# is not Node and throwing is pretty consistent with Node's behavior in other areas.

SynchronizationContext thing reminds a lot of a (currently) Stage 0 proposal by @domenic called Zones

Not quite but it's possible to build on top of! I had Zones in mind when I wrote that comment.

Basically, you subclass Promise.

  • in the constructor you add this to the Map of promises on the current zone.
  • the request has access to the said map (and the zone), and when it ends - if there are rejected ones the process exits with the error. Pending promises that reject after the zone ended cause a process exit too.

Although, what I'd really want to build this on top of is a scheduler (like Bluebird.setScheduler), which would enable strictly more expressive promise use cases, and this as a side perk - but that's me dreaming.
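A rough sketch of what that subclass might look like, under many simplifying assumptions (Zone and ZonePromise are hypothetical names, not the Zones proposal itself; chained promises deliberately degrade to plain Promise via Symbol.species so the internal tracking handler doesn't recurse into the constructor):

```javascript
// Hypothetical sketch of the "subclass Promise, track rejections per zone"
// idea. Names and semantics are made up for illustration.
class Zone {
  static current = null;
  constructor() { this.rejected = new Map(); } // promise -> reason
  run(fn) {
    const prev = Zone.current;
    Zone.current = this;
    try { return fn(); } finally { Zone.current = prev; }
  }
}

class ZonePromise extends Promise {
  // derived promises (from .then/.catch) become plain Promises, so the
  // internal tracking handler below doesn't re-enter this constructor
  static get [Symbol.species]() { return Promise; }

  constructor(executor) {
    super(executor);
    this._zone = Zone.current;
    this._handled = false;
    if (this._zone) {
      const zone = this._zone;
      // record the rejection in the creating zone unless user code
      // attached its own rejection handler first
      super.then(undefined, (reason) => {
        if (!this._handled) zone.rejected.set(this, reason);
      });
    }
  }

  then(onFulfilled, onRejected) {
    if (onRejected) {
      this._handled = true; // user code is taking responsibility
      if (this._zone) this._zone.rejected.delete(this);
    }
    return super.then(onFulfilled, onRejected);
  }
}
```

When the request's zone ends, anything left in zone.rejected would then trigger the exit-with-error behaviour described above.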


@chrisdickinson

chrisdickinson May 3, 2017

Contributor

@Jamesernator:

I understand that a promise may be handled in the future; the root of our disagreement is that I think module authors should indicate their intention to handle promises later down the line when they need that functionality. In that case, the .catch() is not a pointless handler: it's preserving modularity by not hitting a global error handler.

The problem with "notifying developers of possible errors" is that Node doesn't have a side channel for this. It has to take over stderr, which is often meaningful for Node programs. If something appears there, it gives the indication of an error. Tracking it down the line to discard the possibility that the promise remains unhandled adds extra work to a manual process that doesn't bear extra work well. This is the status quo, and while it's better than when we didn't log anything at all, it's still not ideal — hence the two approaches proposed.

@mikeal:

With async functions, you have two options to avoid an unhandled rejection: either you await all extant promises at once with Promise.all(), or you mark them as "handled" with a no-op .catch() until they're awaited.

// the problem:
async function example (req) {
  const getA = getPromiseA()
  const getB = getPromiseB()

  try {
    const a = await getA
    // if "getB" rejected before "await getB", it will hit the unhandledRejection handler
    // once it hits "await getB", it will hit the 'rejectionHandled' handler.
    const b = await getB
  } catch (err) {
     // err could be from "a" or "b"
  }
}

// solution 1:
async function example (req) {
  const getA = getPromiseA()
  const getB = getPromiseB()

  try {
    const [a, b] = await Promise.all([getA, getB])
  } catch (err) {
     // err could be from "a" or "b"
  }
}

// solution 2:
async function example (req) {
  const getA = getPromiseA()
  const getB = getPromiseB()
  getB.catch(() => {}) // "await getB" will still throw!
  try {
    const a = await getA
  } catch (err) {
    // err is from "a"
  }

  try {
    const b = await getB
  } catch (err) {
     // err is from "b"
  }
}
@Jamesernator

Jamesernator May 3, 2017

@benjamingr

Basically, you subclass Promise. in the constructor you add this to the Map of promises on the current zone.
the request has access to the said map (and the zone), and when it ends - if there are rejected ones the process exits with the error. Pending promises that reject after the zone ended cause a process exit too.

Yeah, this would be a massive improvement to this PR because it means that simple concurrent work within functions is still perfectly well defined; in fact the example @chrisdickinson gave could work like this without the user having to do anything:

async function example (req) {
  // New Zone is created as we enter the "example" function
  
  // Promises created within the Zone fork from the zone so that
  // unhandledRejections bubble up into this Zone
  const getA = getPromiseA()
  const getB = getPromiseB()
  
  try {
    // getA and getB continue in the normal way until they reach this point
    const a = await getA
    const b = await getB
  } catch (err) {
     // Errors are caught normally
  }
  // Now we exit the zone, because all unhandledRejections are truly handled
  // within the Zone we don't need to progress any further
}

// In the case of a fail case:

async function failExample(req) {
    // Create a Zone for failExample same as example
    const getA = getPromiseA()
    const getB = getPromiseB()
    
    try {
        await null
        // Even if getA rejected early it still won't bubble out, as the
        // Zone captures it
        await getA
    } catch (err) {
        
    }
    // Now we're at the end of the Zone if getB is unhandled we'll bubble
    // up the error to the parent Zone
}

I think that approach seems quite elegant: it still allows unhandledRejections to bubble up the async stack, doesn't prevent using promises concurrently, allows installing custom unhandledRejection behaviour at any point within the process (not just arbitrarily at the top level), and doesn't force consumers using parallelism to add an arbitrary .catch handler.

It's great because it works with plain Promises, and callbacks as well! It's a powerful solution which I think would give great debuggability, given that things that previously couldn't be given stack traces could now have well-defined ones.


@ChALkeR

ChALkeR May 7, 2017

Member

@jasnell To further explain my point that I mentioned earlier:

Let's label noticing an unhandled rejection on next tick as TICK and noticing a garbage collection of an unhandled rejection as GC.

I think that, in the current situation:

  1. The current UnhandledPromiseRejectionWarning warning (and the corresponding event) should be moved from TICK to GC.
  2. There should be a flag that makes the abovementioned GC warning a crash (process exit) — either on or off by default (no specific opinion on that).
  3. It might be reasonable to introduce a separately named event for TICK, for those people who absolutely wish to crash/debug there. This is likely to get broken by thirdparty modules in this setup, though.
  4. Under a separate flag, preserve the required info for core dumps on TICK and use that on GC to produce meaningful core dumps and debug info.
  5. It might make sense to trigger a gc() on TICK — that would make the process exit faster in some situations. It is not going to be reliable, though, and might require significant logic, e.g. rate limiting. It could improve the average situation a bit — but that requires testing.

I am aware of the concerns about delaying to GC and that Chromium displays a warning immediately (like TICK), but the concerns here are that:

  1. Per the spec, that is the allowed behaviour. There were mentioned use cases that are affected by crashing/warnings on TICK. I am aware that there are workarounds for those, but nevertheless — we are going to break the spec if we crash on TICK, and that is very, very bad.
  2. In Chromium, only developers are going to see those warnings, and the page is still going to be usable, and that is true no matter how exactly those warnings look, as long as they are printed only to the developer console. This significantly differs from users seeing those warnings in the console (which is what is going to happen with Node.js).
  3. In Chromium, those warnings don't stop any execution — nothing would change in the program logic or observed behaviour if those warnings weren't printed to the console. We are considering crashing the whole process, which is a very different thing.
  4. That affects portable code which is not expecting Node.js Promise implementation to differ from the spec.
  5. Not only the top-level programs are going to be affected — undesired behaviour (that is correct by spec but will trigger a crash) could be in some third-party library, e.g. a dependency from npm or even some library that was originally written for browsers and is operating, for example, with jsdom.
    Btw, this is the reason why even a separate event on TICK is controversial — the end developer could have no direct control of all the modules that could be triggering it in their valid code, so there could always be a chance that the situation is benign.

That said, I see an alternative path forward, given the current actions on the Chromium side.

An alternative way would be to somehow change the spec to directly forbid that behaviour.
Then:

  1. Cooperate with v8/Chromium devs to push a spec change.
  2. Leave the current default behaviour as is until the spec is changed, possibly adding an opt-in flag for crashing on unhandled rejections on TICK.
  3. Once the spec change lands, reverse the flag to an opt-out in the next major version, making the process crash on TICK by default.

This would perhaps be the better choice for the Node.js core than the first one (less logic, less changes, less flags, better debugging), but it requires a spec change.

/cc @ljharb, @domenic , perhaps?


In short: I think that violating the spec is worse than all the other concerns against delaying to GC.
I would be in favor of crashing on TICK if there is going to be a path forward to a spec change.

I have no idea at the moment how exactly that spec change should look in order to cover both the browser and Node.js behaviour without leaving undefined behaviour there.

Member

ChALkeR commented May 7, 2017

@jasnell To further explain my point that I mentioned earlier:

Let's label noticing an unhandled rejection on next tick as TICK and noticing a garbage collection of an unhandled rejection as GC.

I think that, in the current situation:

  1. The current UnhandledPromiseRejectionWarning warning (and the corresponding event) should be moved from TICK to GC.
  2. There should be a flag that makes the abovementioned GC warning a crash (process exit) — either on or off by default (no specific opinion on that).
  3. It might be reasonable to introduce a separately named event for TICK, for those people who absolutely wish to crash/debug there. This is likely to get broken by thirdparty modules in this setup, though.
  4. Under a separate flag, preserve the required info for core dumps on TICK and use that on GC to produce meaningful core dumps and debug info.
  5. It might make sense to trigger a gc() on TICK — that would make the process exit faster in some situations. It is not going to be reliable, though, and might require significant logic, e.g. ratelimiting. It could improve the average situation a bit, though — but that requires testing.

I am aware of the concerns about delaying to GC and that Chromium displays a warning immediately (like TICK), but the concerns here are that:

  1. Per the spec, that is the allowed behaviour. There were mentioned usecases that are affected by crashing/warnings on TICK. I am aware that there are workarounds for those, but neverthenless — we are going to break the spec if we crash on TICK, and that is very very bad.
  2. In Chromium, only developers are going to see those warnings, and the page is going to still be usable, and that is true no matter how exactly those warnings would look like as long as they are prited only to the developers console. This significantly differs to users seing those warnings in the console (which is going to happen with Node.js).
  3. In Chromium, those warnings don't stop any execution — nothing would change in the program logic or observed behaviour if those warnings weren't printed to the console. We are considering crashing the whole process, which is a very different thing.
  4. That affects portable code which does not expect Node.js's Promise implementation to differ from the spec.
  5. Not only top-level programs would be affected: the undesired behaviour (correct by spec but triggering a crash) could live in some third-party library, e.g. a dependency from npm, or even a library originally written for browsers and running, for example, under jsdom.
    Btw, this is the reason why even a separate event on TICK is controversial: the end developer may have no direct control over all the modules that could trigger it in their valid code, so there is always a chance that the situation is benign.
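To illustrate concern 1, a minimal sketch: a rejection that is only handled after a timeout is perfectly valid per the spec, yet a crash-on-TICK policy would terminate the process before the handler is ever attached.

```javascript
// A rejection handled only after a timeout: spec-valid "late handling",
// but a crash-on-TICK policy would terminate the process long before
// the .catch() below ever runs.
const pending = Promise.reject(new Error('transient failure'));

setTimeout(() => {
  // The rejection was briefly "unhandled", but is ultimately dealt with.
  pending.catch((err) => console.log('recovered:', err.message));
}, 100);
```

In current Node.js this prints an UnhandledPromiseRejectionWarning on the next tick and then a 'rejectionHandled'-related notice, but the program itself completes normally.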

That said, I see an alternative path forward, given the current actions on the Chromium side.

An alternative way would be to somehow change the spec to directly forbid that behaviour.
Then:

  1. Cooperate with v8/Chromium devs to push a spec change.
  2. Leave the current default behaviour as is until the spec is changed, possibly adding an opt-in flag for crashing on unhandled rejections on TICK.
  3. Once the spec change lands, reverse the flag to an opt-out in the next major version, making the process crash on TICK by default.

This would perhaps be the better choice for Node.js core than the first one (less logic, fewer changes, fewer flags, better debugging), but it requires a spec change.

/cc @ljharb, @domenic , perhaps?


In short: I think that violating the spec is worse than all the other concerns against delaying to GC.
I would be in favor of crashing on TICK if there is going to be a path forward to a spec change.

I have no idea at the moment how exactly that spec change should look to cover both the browser and Node.js behaviour without any undefined behaviour.

@benjamingr


benjamingr May 7, 2017

Member

@ChALkeR well written.

I think a problematic point is that GC does not work for global objects, module-level objects, and other objects that are still referenced in places that can't be GC'd quickly or at all. A single false negative could leave the server in an inconsistent state and swallow an error. That's why I suggested "exit on GC, warn on TICK".

Note that in practice, I've done "exit on tick" and no library ever broke because of it. I've found that few libraries "in the wild" really use this capability, and the ones that do opt out explicitly (bluebird has suppressUnhandledRejections, and with native promises it's typically .catch(() => {})).
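The native-promise opt-out mentioned above looks like this (a minimal sketch):

```javascript
// Explicit opt-out with native promises: attaching a no-op handler marks
// the rejection as handled, so no 'unhandledRejection' event is emitted.
const expected = Promise.reject(new Error('expected failure'));
expected.catch(() => {}); // deliberately swallow this rejection

process.on('unhandledRejection', () => {
  console.error('never fires for `expected`');
});
```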

That said, I think the real path forward is a spec change, introducing synchronization contexts on top of Zones; this is similar to what C# and other languages with tasks and async/await do. If you're unfamiliar with Zones, think "domains without the error handling bits".

  • When a promise is created in a Zone, it gets assigned to the Zone and added to a WeakMap (or similarly, to a map of pointers on the C++ side).
  • When a zone "ends" (nothing else is set to run on its context), any unhandled rejections attached to it cause the process to abort.
  • When the process exits, any promises attached to it that are unhandled are logged to stderr (or otherwise).
  • unhandledRejection is still made available to override the above functionality, but no longer logs on tick. Unhandled rejections in the global zone do log on TICK. GC still crashes the Zone.
  • It is impossible for an unhandled rejection to recover from the zone - even if a catch handler is added to the promise. This is far easier than exposing them via zone.exceptions or something like that.
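A rough sketch of how such a zone could track unhandled rejections using the existing process events. The Zone class and its end() hook here are illustrative assumptions, not a real API:

```javascript
// Illustrative sketch only: a "zone" that collects promises reported by
// 'unhandledRejection' and removes them again on 'rejectionHandled'.
class Zone {
  constructor(name) {
    this.name = name;
    this.unhandled = new Set();
    process.on('unhandledRejection', (reason, promise) => {
      this.unhandled.add(promise);
    });
    process.on('rejectionHandled', (promise) => {
      this.unhandled.delete(promise);
    });
  }

  // Called when nothing else is scheduled to run in this zone's context.
  end() {
    if (this.unhandled.size > 0) {
      console.error(`Zone "${this.name}" ended with ` +
                    `${this.unhandled.size} unhandled rejection(s)`);
      process.exit(1); // abort, per the proposal above
    }
  }
}
```

A framework would create one such zone per unit of work (e.g. a request/response pair) and call end() when that unit finishes.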

Alternatively, allowing the user to set the promise scheduler solves all this too, but gives users a lot of power I'm not sure we should give them (this can be done at the platform level, without spec changes).

Splitting the app into zones this way allows for clear "context"s the library/framework defines for when something won't be handled.

Some examples:

  • An express/koa/hapi/restify app would likely set the zone on every request/response pair. So when the response ends any unhandled rejections throw.
  • A careful library would set its bounds in its own code.
  • A user would set zones wherever they make sense.

This is a little bit like domains, but without the domain problems with error recovery and handling - since we're not recovering from errors - just detecting errors people forgot to listen to.


@jasnell


jasnell May 9, 2017

Member

@benjamingr ... while I have many concerns around zones, at an abstract level what you describe is certainly ideal... that is, the idea that some additional spec changes that provide clear hook points for all of this would be absolutely fantastic... and that is something that I can bring back to TC-39 to discuss. It would be excellent.

At this point, I think there are still too many open questions and potential issues for us to come up with a reasonable default behavior. I think we need to take a more iterative approach. At the collaborator summit we talked about a flag-driven option that would allow users to opt in. It would also allow us to experiment with various options to see which is the best strategy. So here's what I would recommend:

We add a --on-unhandled-rejection={action} command line argument where {action} can be one of: ignore, warn-on-tick, warn-on-gc, throw-on-tick, or throw-on-gc. With the default option being the current behavior of warning on gc.

I know that this is not an ideal solution because it kicks the ball down the road a bit, and adding multi-valued command line arguments is always irritating and causes its own set of problems, but going with this approach would allow us collectively to experiment over a longer period of time with a range of possible strategies. It also gives us time to go back to TC-39 or the VM implementors to work through possible spec modifications that can provide a significantly better solution.


@addaleax


addaleax May 9, 2017

Member

With the default option being the current behavior of warning on gc.

Huh – just to be clear, the current behavior is to warn on nextTick. And if we want to leave the option of either warning or crashing on nextTick open, then we should keep that, because it’s much easier to move the warning/crash to a later point in time than to an earlier one.


@jasnell


jasnell May 9, 2017

Member

oh.. right sorry, I get those mixed up. Yes that, keep the current default but allow a flag to enable the other behaviors


@mcollina


mcollina May 9, 2017

Member

@jasnell just to add one note: we also discussed printing the stack trace of the offending exception as well. I would prefer to do so for every 'unhandledRejection'.


@benjamingr


benjamingr May 9, 2017

Member

@mcollina

@jasnell just do add one note: we also discussed about printing the stack trace of the offending exception as well. I would prefer to do so for every 'unhandledRejection'.

The stack trace of the offending exception is already printed if --trace-warnings is on (and it should probably be on), although if we want to change it to trace by default I'd be in favor.


@benjamingr


benjamingr May 9, 2017

Member

@jasnell I'm not sure what a command line switch would get us - users can define an "unhandledRejection" handler themselves.
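For reference, the userland equivalent of the strictest proposed flag already fits in a few lines; this crashes on TICK, which is exactly the behavior being debated above:

```javascript
// Userland opt-in to crashing on unhandled rejections at TICK,
// without any new command line flag.
process.on('unhandledRejection', (reason) => {
  console.error('Unhandled rejection:', reason);
  process.exit(1);
});
```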

The default behavior gave users a bad experience before warnings were added - I think they really helped. I also think that async/await seeing wide-scale adoption will change how users feel about this issue in the coming months.

I think adding a flag won't really solve much - I think Node should lead the push for solving scheduling and unhandled promise rejections with technology - I'll hopefully set up a prototype next month (super busy :( ) that:

  • Uses the Zone polyfill (https://github.com/angular/zone.js)
  • Whenever a promise is created (by subclassing Promise and then maybe overriding the global) - add the promise to the zone in an array.
  • Collect unhandledRejections in a WeakMap.
  • When a zone is "done" - if there are any promises in the zone that are in the map - throw and terminate the Node process.
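The subclassing step could look roughly like this; ZonedPromise, currentZone, and the WeakMap wiring are assumptions sketching the plan above, not an existing implementation:

```javascript
// Sketch of the prototype plan: a Promise subclass that registers itself
// with the active zone and records its rejection reason in a WeakMap.
const rejectionReasons = new WeakMap();
const currentZone = { promises: [] }; // stand-in for a real zone object

class ZonedPromise extends Promise {
  // Derived promises (from .then/.catch) stay plain Promises so the
  // constructor below does not recurse while tracking.
  static get [Symbol.species]() { return Promise; }

  constructor(executor) {
    super(executor);
    currentZone.promises.push(this); // track in the active zone
    this.then(undefined, (reason) => {
      rejectionReasons.set(this, reason); // record rejections
    });
  }
}
```

When the zone is "done", its promises array can be checked against the WeakMap to decide whether to throw and terminate the process.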
@jasnell


jasnell May 9, 2017

Member

What the command line switch does is provide a more opinionated view of what the options are. Yes, users can handle unhandledRejection already, but the command line options would provide a more narrow focus on what we think the right things to do may be, rather than leaving it completely wide open. Having the multiple values allows us to experiment between very specific choices.


@benjamingr


benjamingr May 9, 2017

Member

the command line options would provide a more narrow focus on what we think the right things to do may be

I think providing multiple options is a recipe for confusion - saying "we're not sure what the right thing to do is, here are some options" in core is something I'd rather avoid. We're aware of better alternatives for the culture, and what we currently do works pretty well and solves the hardest problem (swallowing rejections) already for us.

I'm definitely in favor of throwing on GC and keeping the warnings on tick in place. I think it's good middle ground until we build synchronization contexts or schedulers to address this (what other platforms do).


@Fishrock123 Fishrock123 removed the ctc-agenda label May 10, 2017

@mcollina


mcollina May 10, 2017

Member

@benjamingr

The stack trace of the offending exception is already printed if --trace-warnings is on (and it should probably be on), although if we want to change it to trace by default I'd be in favor.

That would print the stacktrace of the warning, not the original stacktrace that caused the warning. I propose to print the stacktrace of every error that reaches the unhandledRejection.

More or less, make this the default behavior, while maintaining the warning:

process.on('unhandledRejection', function (err) {
  console.log(err.stack);
})

Promise.reject(new Error('kaboom'))
@@ -673,6 +673,8 @@ asyncTest('Throwing an error inside a rejectionHandled handler goes to' +
tearDownException();
done();
});
+ // Prevent fatal unhandled error.
+ process.on('unhandledRejection', common.noop);


@Trott

Trott May 10, 2017

Member

Nit: common.mustCall() instead of common.noop?


@benjamingr


benjamingr May 11, 2017

Member

@mcollina yes, that makes sense


@jasnell jasnell referenced this pull request May 11, 2017

Closed

promises: improve unhandledrejection warnings #12982

@mhdawson


mhdawson May 16, 2017

Member

I think discussion in the last CTC meeting was that we should pull together the group of interested people to start working on post-mortem/promises. I think @jasnell took the action to push that forward. I'm available to help with that as well if necessary.


@benjamingr


benjamingr May 16, 2017

Member

I would definitely be interested in being a member of such a group @jasnell


@medikoo


medikoo Sep 12, 2017

Does anyone know the current status of this (and #12010)?

Is this going to be taken in at some point?

The situation is that, without this, the ecosystem is really broken. Just recently I approached a popular project (19k+ stars, 1.9k+ forks, 100+ contributors) which had failing tests in master and nobody knew about it, purely because unhandled rejections are not exposed as exceptions.
More on that here: serverless/serverless#4139


@benjamingr


benjamingr Sep 12, 2017

Member

There is a newer one by @BridgeAR that's fairly recent #15126 (comment)

Contribution and involvement are welcome


@BridgeAR


BridgeAR Sep 20, 2017

Member

@Fishrock123 I am closing this for now. I am going to proceed with my PR soon. If you would like to continue working on this, I would very much appreciate that as well (especially as C++ is hard for me; I'm learning it while working on the code).


@BridgeAR BridgeAR closed this Sep 20, 2017

@bminer bminer referenced this pull request in nodejs/promises Dec 4, 2017

Closed

Default Unhandled Rejection Behavior #26
