
TypeError: Cannot destructure property 'error' of 'results[(results.length - 1)]' as it is undefined. [Suspense] #554

Closed
ntucker opened this issue Jan 24, 2021 · 64 comments
Labels
bug Something isn't working

Comments

ntucker (Contributor) commented Jan 24, 2021

  • react-hooks-testing-library version: 5.0.2
  • react version: 17.0.1
  • react-dom version (if applicable): 17.0.1
  • react-test-renderer version (if applicable): 17.0.1
  • node version: 14.13.1
  • npm (or yarn) version: 1.22.5

Relevant code or config:

expect(result.error).toBe(null);

What you did:

Accessed result.error directly after the renderHook() call, with a hook function that throws a Promise.

What happened:

TypeError: Cannot destructure property 'error' of 'results[(results.length - 1)]' as it is undefined.

Reproduction:

Access result.error just after rendering a hook that suspends (without awaiting the resolution of the suspense).

result.current was fixed in dc21e59 but result.error was not.
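The failure mode in the title can be reproduced with plain JavaScript. This is a minimal sketch (not the library's actual internals) of what goes wrong when the last element of an empty results array is destructured:

```javascript
// Minimal sketch of the reported TypeError (not the library's actual code):
// before the first render has produced anything, the internal results array
// is empty, so its last element is undefined and cannot be destructured.
const results = [];
let thrown = null;
try {
  const { error } = results[results.length - 1]; // results[-1] is undefined
  console.log(error);
} catch (e) {
  thrown = e; // TypeError: Cannot destructure property 'error' of ...
}
console.log(thrown instanceof TypeError); // true
```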

Problem description:

Suspense is the most important use of this library

Suggested solution:

Apply the fix from dc21e59 to all members of the result object.

ntucker added the bug label Jan 24, 2021
ntucker (Contributor, Author) commented Jan 24, 2021

If something does not suspend, is it still undefined? If so, this is a really bad change, as one can no longer test for suspense.

ntucker (Contributor, Author) commented Jan 24, 2021

Please tell me you set current to some other value if it does not suspend.

mpeyper (Member) commented Jan 24, 2021

Hi @ntucker,

You're correct that the same fix should have been applied to error and was missed. My apologies for that.

With respect to your other points, I think you are misunderstanding the role of the results array and what current would be in a standard, non-suspense call.

I'll get a patch out for error (unless someone beats me to it) and leave a more comprehensive comment for the other points once my kids are in bed (about 2 hours from now).

mpeyper (Member) commented Jan 24, 2021

Fix has gone out in version 5.0.3.

I have also updated the release notes of version 5.0.0 with a Breaking Change note about the change in functionality around both result.current and result.error (despite result.error not actually being usable until now).

mpeyper (Member) commented Jan 24, 2021

Ok, now to your other points:

The changes were undocumented in 3.6 because they were never meant to affect the functionality of suspending hooks (i.e. it was a bug in that version, not a breaking change that should have made it version 4.0). The tests that would have caught that break only went out with 5.0, when I identified the issue with result.current and put the fix in for it, with a test to make sure it does not break again. I missed result.error at the time by accident, as it was a minor fix during a larger feature build and was not my priority in the moment. Perhaps I should have come back and checked more in the area, but I did not, and I can't change history on that one.

"Why was there no test to catch it prior to 3.6?" I hear you ask. Well, prior to 3.6, there was no special handling required to cater for a hook that had not returned a result (or error) in the first pass. In fact, I'd go so far as to say that "seeing" null in that circumstance was unintended behaviour from when suspense functionality was first added and the fact that it did not throw and returned something made it go unnoticed for well over a year. This probably isn't helped by the fact that many people use null and undefined more or less interchangeably to mean "no value"

The reason I say that null results were unintended is that when result.current or result.error get set, the other is set to undefined (commit before the result.all changes were made), meaning a non-suspending renderHook call would never see the null: by the time it had returned, they would either have a value or have been replaced with undefined. This is likely also why my fix in 5.0 had these values returning undefined instead of the original null value. You must also remember that a non-trivial amount of time had passed between the original bug and the fix, so much of the context had been lost by then.

I actually believe undefined is the more correct value to use for a value that has never been set, so it's for this reason that I've decided to document the break and continue with it for version 5 onwards.
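The null-versus-undefined distinction here can be seen directly in plain JavaScript (a sketch, not the library's code): a value that has never been assigned reads as undefined, whereas null only appears when someone explicitly assigns it.

```javascript
// A never-assigned variable or property is undefined by default...
let current;
const result = {};
console.log(current === undefined);      // true
console.log(result.error === undefined); // true

// ...whereas null must be set deliberately, making it an arbitrary sentinel
// for "no value yet".
result.error = null;
console.log(result.error === null); // true
```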

So in summary, when the changes were made in 3.6, I believed they were non-breaking as for a non-suspending hook call, the values are effectively internal variables and never get exposed. Obviously I failed to consider the suspending hook cases and a hole in our test suite (I'm aware your test suite did catch the bug) let it through unnoticed. I am sorry about that.

I hope that addresses your concerns about non-suspending hooks, the lack of documented breaks, and the reasons it broke in the first place. If I have missed anything, please leave a comment and I will do my best to address it as well.

Finally, I want to address one final thing:

Suspense is the most important use of this library

I think you may be conflating your most important use of this library with everyone's most important use of this library. Handling suspending hooks is something we support and not something we would ever purposely break, but it's only one part of what this library offers. I honestly couldn't tell you what the most important feature we offer is, other than perhaps simply users being able to call a hook without defining a component themselves, as we have many users, each with their own most important use. In fact, until 5.0 we'd had many people calling out for the ability to test their hooks with server rendering, which does not support suspense at all.

I guess what I'm trying to say is that comments like these leave me very disheartened, given the amount of care and effort I put into supporting all of our users the best way we can. I'm genuinely sorry that the break happened and has caused you the headaches it has. I'm also genuinely appreciative of the effort you put into identifying what the issue was and when it broke.

mpeyper (Member) commented Jan 24, 2021

Note: I've also updated the release notes for version 3.6. I was still learning semantic-release at the time and had messed it up. I did not realise the release notes had not captured the commit (feat was in the commit description and not the summary, so the commit was ignored).

I'll close this now as the issue has been resolved but please feel free to continue the discussion.

mpeyper closed this as completed Jan 24, 2021
ntucker (Contributor, Author) commented Jan 24, 2021

Thanks for the quick response, and explanation!

I can see how undefined makes sense, though the slight disadvantage is that functions that don't return anything (which is really easy to happen) will return undefined, and in those cases suspense will be indistinguishable from a normal return. On the other hand, it would be clear that this is the expectation, whereas null is somewhat arbitrary and still has (although less likely) a potential conflict with a real return value. Perhaps it might be better to have an additional value explicitly for suspense, but I think this is fine for now, especially since there are tests.
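The ambiguity described above is easy to demonstrate in plain JavaScript (the hook name below is hypothetical): a hook that returns nothing yields undefined, which is exactly the value reported while suspended.

```javascript
// Hypothetical effect-only hook: it returns nothing, i.e. undefined.
function useEffectOnlyHook() {
  // performs side effects, no return value
}

const valueFromVoidHook = useEffectOnlyHook();
const valueWhileSuspended = undefined; // what result.current reports before suspense resolves

// The two cases cannot be told apart by value alone.
console.log(valueFromVoidHook === valueWhileSuspended); // true
```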

Library usage of suspense and other hooks

I can definitely see, at least in the current world, other hooks cases having utility without suspense. And obviously there's really no objective way to measure 'total impact' worldwide easily or accurately. For me, having done both cases, the unique thing about Suspense is how incredibly convoluted testing was before this library, whereas testing other hooks in components, while not ideal, was not at this level of difficulty. So from my experience it was the value of the use case, given that you had it, rather than the likelihood of a given person having the use case.

Of course this is all sort of silly if both can be supported well. I was kind of surprised this didn't have good test coverage; as an early adopter (#27) of the library for the suspense use case, I was under the impression at that time that it was a strong use case.

My concern for depending on the library

@rest-hooks/test uses @testing-library/react-hooks as a dependency. This is a testing util built on top of this testing util. Because of that, it's not simply a matter of fixing up my own tests.

What I realized yesterday is that since December 7th, anyone installing my library for the first time would experience a completely broken library. npm or yarn would resolve the ^3.2.1 to 3.6.0, which is broken, and they would experience those errors. My test suite currently only uses the library to run its tests, so locking with yarn means I would not notice when minor/patch releases absolutely break things, since it will stay on 3.2.1. This is me relying on semver in an external package.
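The resolution behaviour being described follows from how caret ranges work. A hand-rolled sketch (not npm's actual implementation, and ignoring pre-release tags) of why ^3.2.1 pulls in 3.6.0 on a fresh install:

```javascript
// Hand-rolled caret-range check (assumption: simple x.y.z versions only,
// no pre-release tags; npm's real matcher is more involved).
// ^3.2.1 means >= 3.2.1 and < 4.0.0, so a fresh install gets the newest 3.x.
function satisfiesCaret(version, base) {
  const [vMaj, vMin, vPat] = version.split('.').map(Number);
  const [bMaj, bMin, bPat] = base.split('.').map(Number);
  if (vMaj !== bMaj) return false;       // caret never crosses a major
  if (vMin !== bMin) return vMin > bMin; // any newer minor is accepted
  return vPat >= bPat;                   // within the same minor, any newer patch
}

console.log(satisfiesCaret('3.6.0', '3.2.1')); // true: the broken release is picked up
console.log(satisfiesCaret('3.2.0', '3.2.1')); // false: older than the base
console.log(satisfiesCaret('4.0.0', '3.2.1')); // false: next major excluded
```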

Now I have a few choices. I can release a fix version where I get rid of the ^. This would at least solve it for new users, but not for users who had an existing version and shuffle their installs. So I'm stuck with several versions of my library that worked fine when released but are now broken.

A better world would be where 3.6.1 was released with this fix, so the entire 3 line wasn't destroyed.

If this doesn't happen, I'm stuck in a situation where my trust in this library has let down the trust of my users. For the short term I can absolutely pin the version, which isn't ideal because I won't get any improvements, but at least I won't fear things breaking.

So that's the context of that comment - the thinking I have to do to not let down my users. My intention was not for you to read it, as it certainly sounds harsh. I do appreciate the efforts you have put into this library, which is why I originally came to mark it as a dependency.

I expect most people are simply adding this as a devDependency, and so these situations are no more than a minor annoyance.

But if you can release a fix to the 3 line, I would very much appreciate it. Thanks again for the fix to the 5 line.

ntucker (Contributor, Author) commented Jan 24, 2021

(Although any fix to the 3 line would also need to keep null so it doesn't break)

mpeyper (Member) commented Jan 24, 2021

Yes, I can see the issue you face. For what it's worth, it only would affect people asserting the pre-resolved values which in my experience, most people don't do (they only assert after the wait resolves), and I'm assuming you haven't seen any issues about it in the past month and a bit, so perhaps the issue isn't as widespread in actual usage as you're implying. I don't discount the risk of potential issues though.

If it helps, fixing the version to ~3.5.0 for your current version and then upgrading to ^5.0.0 in a major release would be fairly safe. No further improvements are planned for the older version so the risk of missing out on something is minimal and upgrading to the latest major of your library is always an option to get the shiny new features.

That said, if it's still an issue for you, I'm happy to patch the older version for you to retain the null, and apply it to the ^4.0.0 branch as well if that helps. It should be possible without breaking the non-suspense cases, but I'll let you know if I run into any issues.

I think you might be wrestling with what it actually means to have a dependency that is exposed to your users as well. I don't know exactly how we are exposed through your library (I haven't looked through your docs/code), but I cannot guarantee we will never ship another bug. I wish I could, but I can't. There will always be inherent risk when you don't own the code yourself, and the best you can hope for is that the dependencies you do have are actively maintained and are quick to push out patches when bugs are identified.

If that risk is more than you're willing to accept, then perhaps forking and maintaining it yourself is the more appropriate action to take. You can always synchronise with the mainline when new versions are pushed out. Just be sure to PR any of your own fixes and features back for consideration 😉

ntucker (Contributor, Author) commented Jan 25, 2021

Any idea why reactive/data-client#497 is failing now? This only happens in particular cases when I'm looping over providers (and obviously only since the upgrade to 5).

mpeyper (Member) commented Jan 25, 2021

I'm not sure off-hand. I'll take a look tonight for you.

mpeyper (Member) commented Jan 25, 2021

@ntucker it feels like you've got some leaky tests that are leaving promises that end up resolving after the test has finished. I can't quite put my finger on where the leaks are coming from, whether from the source code or particular tests, but in many cases if you run the failing tests in isolation they pass, and it's only when the whole file is run that the issues appear. This appears to be the case for endpoint types › [makeExternal] should enforce defined types › should pass with exact params and useResource() › should throw error when response is array when expecting entity.

I have some theories why these leaks may have been hidden before:

  1. we removed flush-microtasks (feat(cleanup): remove unnecessary flush microtasks as already handled by act #511, released in 3.7.0) as this was supposedly being handled by act already, so it's possible your tests are hitting cleanup a tick or two sooner than before
  2. the async utils got completely refactored (Refactor async utils #537, released in 5.0.0) to have interval and timeout options set as default and the changes to these functions were significant and it's possible we messed something up (although generally the code is cleaner and the changes actually found some issues with our tests of the old utils so overall I believe them to be better now)
  3. we introduced an error boundary (feat: use error boundary to capture useEffect errors #539, released in 5.0.0) to capture errors being thrown in useEffect calls

useResource() › should throw error when response is bad (on mount) is a bit different, as when debugging the test there is only ever a single call of the hook callback, meaning there is nothing to trigger waitForNextUpdate to resolve. I did try to scan through the code, and I'm still quite unfamiliar with it so may have just missed it, but I could not see anything (e.g. retry logic) that would make me think the code should actually rerender here. The test also does not seem to expect the values to change, so I'm unsure whether the test is correct or not.

Ok, I've done some digging with local builds of older versions (I hacked them to ensure undefined is returned prior to suspense resolving):

  1. everything passes with version 3.6.0
    • this version was the last before flush-microtasks was removed
  2. endpoint types › [makeExternal] should enforce defined types › should fail with improperly typed param fails in 3.7.0
    • this version removed flush-microtasks
  3. endpoint types › [makeExternal] should enforce defined types › should fail with improperly typed param fails in 5.0.0-beta.9
    • this version was the last before async utils were refactored
  4. endpoint types › [makeExternal] should enforce defined types › should fail with improperly typed param fails in 5.0.0-beta.10
    • this version refactored async utils
  5. endpoint types › [makeExternal] should enforce defined types › should fail with improperly typed param fails in 5.0.0-beta.11
    • this version was last before error boundary was introduced
  6. all 3 tests fail in 5.0.0-beta.12
    • this version introduced the error boundary

So it appears to be a combination of removing flush-microtasks and the introduction of the error boundary that has made these issues appear.

My battery is on 1%. I'll post this now and make some edits on my phone.

I'm not sure where this leaves us. Do either of those changes make sense to you as to why these tests would now be failing?

The error boundary is a case where we plugged a hole in our existing result.error functionality. I'd be interested in following that thread to find out why your code didn't get caught in the old try/catch block but is getting caught in the error boundary (any throws in a useEffect would be high on the list of likely culprits). If this is legitimate we might need to think about how to refactor those tests for the new world.

The flush-microtasks one is a bit more unknown to me. Unlike react-testing-library, who replaced theirs with an act call, we always had the act call and the flush. I'll be honest and say that I never fully understood what that code did or how it worked; I just lifted it wholesale from react-testing-library, so when they replaced it with act I assumed that it was unnecessary and effectively a no-op. So it's interesting that your test seems to be reliant on it. I suspect it might have introduced a few extra ticks for your promises to resolve before cleanup occurred, but I'm not sure.

I'm hesitant to undo either of these changes until we understand better the precise cause of the issue. Your usage of this library would be close to the most complex I've seen to date and I'm not sure whether the root cause lies in our changes, your test code, or your production code (I hope it's not this one).

ntucker (Contributor, Author) commented Jan 25, 2021

Hmm, the failing tests go from suspense to throwing an error. They never return a real value. The waitForNextUpdate is waiting for a thrown error rather than a return value. Perhaps catching in the error boundary doesn't resolve waitForNextUpdate?

ntucker (Contributor, Author) commented Jan 25, 2021

I think I have an idea for should throw error when response is bad (on mount). ErrorBoundaries have this annoying problem where once they catch, they just block the entire tree below. If one was added, it would be preventing re-render of this component that will only throw errors. Before, it would simply run again. This was the test to make sure that even without an error boundary to stop it, there wouldn't be an infinite loop.

Perhaps there needs to be a mechanism to force reset the error boundary?

ntucker (Contributor, Author) commented Jan 25, 2021

Since the other issue is with another component that goes from suspending to throwing errors, perhaps this is also related to the error boundary change

mpeyper (Member) commented Jan 25, 2021

We do force the error boundary to reset when rerender is called (or any time we reconstruct the test harness).

Is calling rerender an option?

I have also contemplated whether we should have options for renderHook to remove the Suspense and ErrorBoundary wrappers, but haven't worked out what that would mean for result.error.

ntucker (Contributor, Author) commented Jan 25, 2021

rerender and then checking that it's a new error works for this test. However, it's still breaking the other test should throw error when response is array when expecting entity - which only breaks when should throw error when response is bad (on mount) runs.

Is there some cleanup that might fail if the component is stuck in errorboundary?

mpeyper (Member) commented Jan 25, 2021

Not that I'm aware of. We unmount the test harness in an afterEach and construct a new one for every renderHook call. The error boundaries should be completely isolated.

mpeyper (Member) commented Jan 25, 2021

Actually, I've got an idea, but no time to investigate. I'll try to look later today.

mpeyper (Member) commented Jan 26, 2021

So my idea turned out to be nothing. The error boundary and the test harness are behaving as expected, so no leads there.

I've discovered a few things though:

  1. Commenting out renderRestHook.cleanup(); in afterEach allows the tests to pass (I'm guessing this is undesirable)
  2. should throw error when response is array when expecting entity makes it as far as waitForNextUpdate, then afterEach is called
    • The test just ends there; the await neither resolves nor rejects
    • jest is producing the following warning

      A worker process has failed to exit gracefully and has been force exited. This is likely caused by tests leaking due to improper teardown. Try running with --detectOpenHandles to find leaks

    • running with --detectOpenHandles doesn't give any more information
  3. Updating should throw error when response is bad (on mount) to use rerender instead of waitForNextUpdate (should still be valid, right?) with a patched version to use the old try/catch instead of the error boundary produces the same result
  4. Commenting out should throw error when response is array when expecting entity just causes should throw error when response is {} when expecting entity to fail in the same way
  5. Commenting out should throw error when response is bad (on mount) allows all other tests in the file to pass

All in all, I'm completely thrown by this issue. I've got no idea why the cleanup for that test is literally killing the test worker. I don't think it's related to any of the version 5 changes; however, I have not run the same debugging in an older version to rule it out (and my laptop battery is dangerously low again, so I'll have to pick it up tomorrow now). All I can think is that by using waitForNextUpdate instead of rerender, jest was somehow able to deal with the forced rejections, but by making it synchronous it's rejecting it for the whole test worker. I'm grasping at straws here though, and I really don't know enough about the internals of your library, jest, or whatever else is going on here to do better than that right now.

mpeyper (Member) commented Jan 28, 2021

All in all, I'm completely thrown by this issue.

Lol, I didn't notice my pun until just now... Unfortunately it's still true, I've got no idea why the test worker is just dying on should throw error when response is array when expecting entity and I don't know your codebase well enough to offer any more clues.

All I do know is that our library is behaving as I expect, so I'm not sure where we go from here.

ntucker (Contributor, Author) commented Jan 28, 2021

Thanks so much for the investigation - this looks to be super helpful. Unfortunately I have to wrap up some other stuff for now so I have to put a pause on this til probably end of the week. I'll update on whatever I find there. Thanks again!

mpeyper (Member) commented Jan 28, 2021

No worries. I'll leave this open for now until we're certain it's not an issue on our end.

mpeyper (Member) commented Jan 28, 2021

Actually, on second thought, I'll close this, as the original issue (the one in the title) has been resolved. Feel free to comment your findings here, and if there is another issue, we'll raise a new ticket for it.

mpeyper closed this as completed Jan 28, 2021
ntucker (Contributor, Author) commented Jan 31, 2021

Hmm, errors existed in this lib before you added the error boundary - how did it work before?

ntucker (Contributor, Author) commented Jan 31, 2021

rerender() is what I can use to reset the error boundary?

mpeyper (Member) commented Jan 31, 2021

It was just a try/catch block around the inner render function (where the hook got called). We needed to change it because errors in useEffect were not getting caught, because effects get called after rendering, though still synchronously within the component lifecycle.
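The distinction can be sketched without React (a plain-JS analogy, not the library's implementation): a try/catch around the render call finishes before deferred effect callbacks run, so only a handler wrapped around the whole lifecycle, which is the role the error boundary plays, sees their errors.

```javascript
// Plain-JS analogy (not the actual library code): "render" succeeds but queues
// an effect that throws later, the way useEffect callbacks run after rendering.
function render(effectQueue) {
  effectQueue.push(() => { throw new Error('effect failed'); });
  return 'rendered';
}

const effectQueue = [];
let caughtByRenderTryCatch = null;
try {
  render(effectQueue); // the render call itself does not throw
} catch (e) {
  caughtByRenderTryCatch = e;
}

// A boundary-style handler wraps the effect phase too, so it catches the error.
let caughtByBoundary = null;
for (const effect of effectQueue) {
  try { effect(); } catch (e) { caughtByBoundary = e; }
}

console.log(caughtByRenderTryCatch);   // null: the render-only try/catch missed it
console.log(caughtByBoundary.message); // 'effect failed'
```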

ntucker (Contributor, Author) commented Jan 31, 2021

Hmm, this could be problematic... for more complex test cases, the Provider (passed in wrapper) is expected to stay around as things evolve between renders. Users are expected to have error boundaries before the provider in actual applications... so catching errors above the wrapper (where I set the provider) doesn't really allow testing the normal flow.

The previous behavior (while it broke effects, though I wasn't testing those) did catch them before the wrapper. What about moving the error boundary below the wrapper? Is there any problem with that?

ntucker (Contributor, Author) commented Jan 31, 2021

If other people expect wrapper to be below....could there be two options for above and below? (also I would expect this to be around suspense as well)

mpeyper (Member) commented Jan 31, 2021

I'm not sure there's a rule about where exactly providers and error boundaries should belong in a real application, as it would depend on many factors, but I do appreciate that top-level providers are very common.

I don't think it would semantically change anything for us to move the error boundary within the wrapper, except that it won't catch provider errors (which the old version failed to catch as well).

Are you able to change the order in node_modules locally and see if it helps? If you're unsure I can paste some code in here and line numbers to replace.

mpeyper (Member) commented Jan 31, 2021

If other people expect wrapper to be below....could there be two options for above and below? (also I would expect this to be around suspense as well)

Sorry, you expect the provider or the error boundary to be above suspense as well?

Fundamentally, I'm open to moving the wrapper to the top level, as we are about testing the hooks, not providers (despite their common coupling), and users can add their own suspense and error boundaries to the wrapper if it's really required.

ntucker (Contributor, Author) commented Jan 31, 2021

I'm pretty sure I'm on the right track for the last one. It has to do with ExternalCacheProvider. I have two providers: one that hooks completely into react state, and the other that uses redux (an external store). I'm currently looping over the same tests to run them on both providers. Doing just one or the other makes it all pass.

I would expect Wrapper -> Suspense -> error boundary -> hook.

Though I think I assumed react would not set everything back to its existing state after you un-fallback (I assumed it would run all mounting operations again?). Maybe this is wrong.

mpeyper (Member) commented Jan 31, 2021

Though I think I assumed react would not set everything back to its existing state after you un-fallback (I assumed it would run all mounting operations again?). Maybe this is wrong.

I would have assumed this too. Can we test this?

ntucker (Contributor, Author) commented Jan 31, 2021

I've narrowed it down to this:

      it('should error on invalid params', async () => {
        const { result, waitForNextUpdate, rerender } = renderRestHook(() => {
          return useFetcher(TypedArticleResource.update());
        });
        try {
          console.log('before');
          // @ts-expect-error
          await expect(result.current({ id: 'hi' }, { title: 'hi' })).rejects;
          console.log('first reject');
          await waitForNextUpdate();
          console.log('update finished');
        } catch (e) {
          console.error('oh no', e);
        }
        rerender();
      });

This is the last test in the file. I realized it was this after noticing the first or second test of the second run would always fail.

Once I added the await waitForNextUpdate(), the last test always fails, because it actually waits for the rejection to occur.

PS) Of all my logs, only 'before' and 'first reject' are output. 'update finished' and 'oh no' never display.

ntucker (Contributor, Author) commented Jan 31, 2021

Maybe the reason there's no stack trace is that all the handlers (in the provider) were removed by the time the rejection occurred, and with no handlers JavaScript just pops up the error as its own thing.
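That theory matches how Node reports rejections nobody is listening to (a sketch of the platform behaviour, not this test suite): with no .catch attached, the rejection surfaces via the unhandledRejection hook, detached from any test stack.

```javascript
// If no handler is ever attached to a rejected promise, the rejection is
// reported through process's unhandledRejection hook rather than any
// try/catch, which is consistent with an error popping up "as its own thing".
let reportedMessage = null;
process.on('unhandledRejection', (reason) => {
  reportedMessage = reason.message;
});

Promise.reject(new Error('NetworkError: Not Found')); // nobody calls .catch
// (the hook fires after the current turn of the event loop completes)
```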

ntucker (Contributor, Author) commented Jan 31, 2021

Oh wait, this shouldn't actually throw in the hook since the hook just returns the dispatch function

ntucker (Contributor, Author) commented Jan 31, 2021

Hmm, it's unmounting the wrapper/provider before await expect(result.current({ id: 'hi' }, { title: 'hi' })).rejects; finishes

mpeyper (Member) commented Jan 31, 2021

Could that be the error boundary unmounting it?

ntucker (Contributor, Author) commented Jan 31, 2021

No components should throw errors, but I need to confirm

ntucker (Contributor, Author) commented Jan 31, 2021

result.error is undefined - so error boundary would not be hit, right?

mpeyper (Member) commented Jan 31, 2021

Correct... unless undefined was thrown (can you even do that?)

mpeyper (Member) commented Jan 31, 2021

Or it did rerender and now there is a result that cleared the error. You could log result.all to see every captured result.

ntucker (Contributor, Author) commented Jan 31, 2021

Only one result - the function as expected

ntucker (Contributor, Author) commented Jan 31, 2021

If the error came from the wrapper it would also show up - right?

mpeyper (Member) commented Jan 31, 2021

At the moment, yes, assuming you meant thrown during rendering of the wrapper. If it's putting a function into context that the hook is returning, and the error is thrown when that function is called, then no.

We were proposing to put the wrapper outside the error boundary so that might be changing at some point.

ntucker (Contributor, Author) commented Jan 31, 2021

Hmm, just realized the unmount is actually from the last test. Do they not get unmounted after every test?

  1. last test start message
  2. unmounting the previous test (second to last)
  3. await expect(result.current({ id: 'hi' }, { title: 'hi' })).reject completes
  4. NetworkError: Not Found (the thing showing as failed) gets thrown

mpeyper (Member) commented Jan 31, 2021

Do they not get unmounted after every test

Yes they do. As part of the initial render we register a cleanup to unmount it that gets called in an afterEach.

ntucker (Contributor, Author) commented Jan 31, 2021

I was able to verify the sequencing by outputting the state of the provider when it was unmounting. So there must be something going wrong because it unmounts after the next test run

mpeyper (Member) commented Jan 31, 2021

Our cleanup is async, and the jest docs do say they support that, so I'm not sure why unmounting would be occurring when the next test runs.

I'll run some experiments when my kids go to bed.

mpeyper (Member) commented Jan 31, 2021

@ntucker

I.... I think I found it 😱

Try changing these lines like so:

- // @ts-expect-error
- await expect(result.current({ id: 'hi' }, { title: 'hi' })).rejects;
- console.log('post reject');
- await waitForNextUpdate();
- console.log('post update');
+ await expect(
+   // @ts-expect-error
+   result.current({ id: 'hi' }, { title: 'hi' }),
+ ).rejects.toEqual(expect.any(Error));
+ console.log('post reject');

Not sure if you can make the matcher more appropriate for the test than just expect.any(Error), but this worked for me.

[screenshot: test run output showing the suite passing]

ntucker (Contributor, Author) commented Jan 31, 2021

What does this mean? It doesn't actually wait forever if you don't compare? Or is there some special path of cleanup? This worked for me too.

mpeyper (Member) commented Jan 31, 2021

I think it's just that expect(result.current({ id: 'hi' }, { title: 'hi' })).rejects doesn't return a promise to be awaited, so by normal JS rules it just flies past that call without waiting. JS being the single-threaded beast it is, it won't show the effect of whatever that call was supposed to do (reject on the Network manager?) until the thread is relinquished, either when the test is waiting for the next update with your added waitForNextUpdate, or when the next test suspends execution for some reason. It used to work because our cleanup had a few extra ticks of waiting thanks to flush-microtasks.

There is a bit of speculation here as I'm not confident what the result of not awaiting that call actually is, and it sure did manifest in a strange way that was full of red herrings.
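The "flies past" behaviour comes down to a JavaScript rule worth seeing in isolation (a sketch; the plain object below merely stands in for whatever expect(...).rejects returns when no matcher method is called): awaiting a non-thenable resolves immediately with the value itself, so nothing is actually waited on.

```javascript
// A plain object standing in for `expect(promise).rejects` with no matcher
// called on it: it is not a thenable, so `await` has nothing to wait for.
const rejectsLikeObject = { note: 'not a thenable' };

let slowWorkFinished = false;
setTimeout(() => { slowWorkFinished = true; }, 50); // the "real" pending work

async function run() {
  const awaited = await rejectsLikeObject; // resolves immediately with the object
  // Execution continues before the pending work has finished.
  return { awaited, workFinishedAtAwait: slowWorkFinished };
}

run().then(({ awaited, workFinishedAtAwait }) => {
  console.log(awaited === rejectsLikeObject); // true
  console.log(workFinishedAtAwait);           // false
});
```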

ntucker (Contributor, Author) commented Jan 31, 2021

Oh wow, I didn't realize it wasn't actually a promise. I kinda expected TypeScript to complain about awaiting something that isn't a promise. Well, thanks for all your help - upgrade complete! :)

mpeyper (Member) commented Jan 31, 2021

Yeah, it's because JS supports awaiting anything and just passes non-promises through. I found it because vscode was giving me a warning (eslint, I think) but not an error.
