Disable "uncaughtException" handler for tests #782

Closed
kanongil opened this issue Nov 1, 2017 · 11 comments · Fixed by #843
Labels: feature (New functionality or improvement) · Milestone: 16.0.0 · Assignee: geek

kanongil (Contributor) commented Nov 1, 2017

In my "exiting" module I have code that triggers on uncaughtException, which I want to test. This works in lab v14, but fails in v15.

Maybe add an option to disable the built-in uncaughtException handling?

geek (Member) commented Nov 1, 2017

@kanongil can you remove all listeners before adding your listener... or remove all listeners but yours in the tests?

kanongil (Contributor, Author) commented Nov 4, 2017

Yeah, I guess I can do a process.removeAllListeners('uncaughtException'); before my listener is attached.
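
For illustration, a minimal sketch of that workaround inside a lab test (the save/restore step and the direct process.emit() are assumptions to keep the example self-contained; emitting the event stands in for a real crash):

it('invokes my uncaughtException handler', () => {

    // Save lab's handlers, then clear them so only ours is attached
    const saved = process.listeners('uncaughtException');
    process.removeAllListeners('uncaughtException');

    let seen = null;
    process.once('uncaughtException', (err) => {

        seen = err;
    });

    // process is an EventEmitter, so this invokes the handler synchronously
    process.emit('uncaughtException', new Error('boom'));

    // Reattach lab's handlers before the test ends
    saved.forEach((listener) => process.on('uncaughtException', listener));

    expect(seen).to.be.an.error('boom');
});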

dominykas (Contributor) commented Jun 11, 2018

It would be handy to have this built in, so that you could do something like this:

it('rethrows a system error to crash the process', async (flags) => {

    const team = new Teamwork();                // from the 'teamwork' module
    flags.onUnhandledRejection((err) => {       // proposed API, not in lab yet

        expect(err).to.be.an.error('TypeError: obj.noSuchMethod is not a function');
        team.attend();
    });

    myModule.asyncOperation();                  // triggers the unhandled rejection

    await team.work;                            // resolves once attend() is called
});

I'm thinking of releasing a helper (as suggested - by removing lab's listeners during the test) to deal with the rejections/exceptions, something like this:

afterEach(() => {

    processErrorHandlers.restore();
});

it('rethrows a system error to crash the process', async (flags) => {

    processErrorHandlers.intercept(); // records existing handlers and attaches our own

    myModule.asyncOperation();

    // (in practice you would first need to wait for the rejection to be recorded)
    expect(processErrorHandlers.rejections[0]).to.be.an.error('TypeError: obj.noSuchMethod is not a function');
});

However, if something goes wrong and the handlers are not reattached, then the process will crash, lab will stop intercepting the errors, and things will not look pretty.
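
A minimal sketch of what such a helper could look like for rejections, assuming the intercept()/restore()/rejections API from the snippet above (processErrorHandlers is the hypothetical helper being proposed, not a published module):

const processErrorHandlers = {

    rejections: [],
    _saved: null,
    _recorder: null,

    intercept() {

        // Record lab's current handlers and replace them with a recorder
        this.rejections = [];
        this._saved = process.listeners('unhandledRejection');
        process.removeAllListeners('unhandledRejection');

        this._recorder = (err) => this.rejections.push(err);
        process.on('unhandledRejection', this._recorder);
    },

    restore() {

        if (!this._saved) {
            return;
        }

        // Detach the recorder and reattach lab's original handlers
        process.removeListener('unhandledRejection', this._recorder);
        this._saved.forEach((listener) => process.on('unhandledRejection', listener));
        this._saved = null;
    }
};

The afterEach() above is what guards against the failure mode described: even if a test throws before calling restore(), the hook still reattaches lab's handlers.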

dominykas (Contributor) commented Jun 11, 2018

Now that I think of it, maybe the second approach is a nicer API, so lab could have something like

flags.recordUnhandledRejections(3); // after three rejections - fail the same way as currently happens

expect(flags.rejections[0]).to.be.an.error('TypeError: obj.noSuchMethod is not a function');

and would restore/assert the behavior automatically?

Edit: then again, this probably needs something like await flags.rejections(3) too...
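
A sketch of that awaitable variant (recordUnhandledRejections is a hypothetical name, not lab's API):

const recordUnhandledRejections = (count) => {

    return new Promise((resolve) => {

        const seen = [];
        const onRejection = (err) => {

            seen.push(err);
            if (seen.length === count) {
                process.removeListener('unhandledRejection', onRejection);
                resolve(seen);      // hand the errors back for assertions
            }
        };

        process.on('unhandledRejection', onRejection);
    });
};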

Happy to PR.

geek (Member) commented Jun 12, 2018

@dominykas can you wrap your operation in a try/catch or create a catch function to trap the error?

dominykas (Contributor) commented Jun 12, 2018

@geek I suppose it's always possible to rearchitect the code so that all promises/errors thrown are handled or bubble up to a single "main" app run() promise, but it does get tricky in certain situations, e.g. setInterval(async () => { /* process stuff */ }) or even setTimeout(() => { throw new Error('synchronously') }). In those situations you basically need an explicit way to "reject" the main app, or, if you're a library, you need to create some context and emit an error there.

An actual use case - I was testing how my lib behaves if the consumer does not attach an error event handler (I do manipulate the errors and I intend it to crash things).
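
To make the tricky case concrete, a standalone example (not from the thread) showing why a try/catch around the call cannot trap such an error:

process.on('uncaughtException', (err) => {

    console.log('only reachable globally:', err.message);
});

try {
    setTimeout(() => {

        throw new Error('synchronously');   // thrown on a later tick
    }, 10);
}
catch (err) {
    console.log('never reached');           // the try block has already returned
}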

geek (Member) commented Jun 12, 2018

We might be able to do something with onCleanup (https://github.com/hapijs/lab/blob/master/lib/runner.js#L511) receiving the exception, then it can determine if the exception is expected or not. I don't think we want to set expectations on how many exceptions should be thrown.

Another potential solution is flags.errorAssertion being set to a function that is passed the caught error.

I'm open to possible solutions.

dominykas (Contributor) commented

> I don't think we want to set expectations on how many exceptions should be thrown.

Maybe not an expectation, but you do need a way to "enable" this tracking, because you need to wait for the error to happen before you finish the test.

> Another potential solution is flags.errorAssertion being set to a function that is passed the caught error.

Explicit flags.onUnhandledRejection / flags.onUncaughtException? And then when a global exception/rejection happens:

  1. If no flag present - maintain existing behavior (i.e. report it and fail the test run)
  2. If the flag is present - call the callback with the error object; if callback rejects or throws - fail the test, otherwise assume the error was "handled" (the dev can team.attend() inside or whatever)
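
A usage sketch of that flag, roughly the shape that later landed via #843 (lab assigns the handler as a property; treat the exact semantics here as illustrative):

it('rethrows a system error to crash the process', async (flags) => {

    const team = new Teamwork();

    // Opting in replaces lab's default fail-on-uncaught behavior for this
    // test; a throw inside the handler (e.g. a failed expect) fails the test
    flags.onUncaughtException = (err) => {

        expect(err).to.be.an.error('boom');
        team.attend();
    };

    setTimeout(() => {

        throw new Error('boom');
    });

    await team.work;
});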

geek (Member) commented Jun 14, 2018

I think those flags are a decent option. In order to simplify things we can also restrict them to synchronous behavior and simply wrap them in try/catch. Are you interested in creating a PR?
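
A sketch of what that synchronous wrapping might look like on the runner side (flags and failTest are hypothetical stand-ins for lab internals):

const onProcessError = (err) => {

    if (typeof flags.onUncaughtException === 'function') {
        try {
            flags.onUncaughtException(err);     // the test inspects the error
            return;                             // handler returned: treated as handled
        }
        catch (assertionErr) {
            return failTest(assertionErr);      // assertion in the handler failed
        }
    }

    return failTest(err);                       // no flag: keep current behavior
};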

dominykas (Contributor) commented

Sure, I'll try to get to this over the next few weeks.

geek closed this as completed in #843 on Jul 25, 2018
geek added this to the 16.0.0 milestone on Jul 25, 2018
geek self-assigned this on Jul 25, 2018
geek added the feature (New functionality or improvement) label on Jul 25, 2018
lock bot commented Jan 9, 2020

This thread has been automatically locked due to inactivity. Please open a new issue for related bugs or questions following the new issue template instructions.

lock bot locked as resolved and limited conversation to collaborators on Jan 9, 2020