
Add failing test for Suspense bug #13832

Closed
wants to merge 1 commit

Conversation

@gaearon (Collaborator) commented Oct 11, 2018

I expect that a maxDuration larger than both the actual waiting time and the current expiration context timeout should be equivalent to not specifying maxDuration at all. But that doesn't seem to be what happens.

A more concrete example:

```jsx
import React, { lazy, unstable_Suspense as Suspense } from "react";
import { unstable_createRoot as createRoot } from "react-dom";

let done = false;
let promise;
function Delay({ ms, children }) {
  if (done) {
    return children;
  }
  if (!promise) {
    promise = new Promise(resolve => {
      setTimeout(() => {
        done = true;
        resolve();
      }, ms);
    });
  }
  throw promise;
}

class Spinner extends React.Component {
  componentDidMount() {
    alert("spin");
  }
  render() {
    console.log("spinner");
    return <h1>🌀 'Loading....'</h1>;
  }
}

class App extends React.Component {
  state = {};
  render() {
    return (
      <>
        <h1 onClick={() => this.setState({ showStuff: true })}>Suspense</h1>
        {this.state.showStuff && (
          <Suspense maxDuration={2000} fallback={<Spinner />}>
            <Delay ms={10}>Ready</Delay>
          </Suspense>
        )}
      </>
    );
  }
}

createRoot(document.getElementById("root")).render(<App />);
```

I'd expect Spinner to never be shown — or at least the behavior to be consistent between maxDuration={2000} and missing maxDuration.

@gaearon gaearon changed the title Add test for Suspense bug Add failing test for Suspense bug Oct 11, 2018

```js
// This is a regression test for a Suspense bug.
it('specifying larger maxDuration is equivalent to omitting it', () => {
  [100, 500, 1000, 2000, 5000, 10000, undefined].forEach(maxDuration => {
```
@gaearon (Collaborator, Author) commented:

I don't know if all of these are a bug. At the very least [2000, undefined] seems to demonstrate a bug to me. The others might fail for another (maybe legitimate?) reason, although I don't see it immediately.

@sizebot commented Oct 11, 2018

Details of bundled changes.

Comparing: 0af8199...9fe64c8

react-test-renderer

| File | Filesize Diff | Gzip Diff | Prev Size | Current Size | Prev Gzip | Current Gzip | ENV |
| --- | --- | --- | --- | --- | --- | --- | --- |
| react-test-renderer.development.js | +0.4% | +0.3% | 390.79 KB | 392.33 KB | 85.36 KB | 85.62 KB | UMD_DEV |
| react-test-renderer.production.min.js | 🔺+0.4% | 🔺+0.5% | 54.26 KB | 54.45 KB | 16.58 KB | 16.66 KB | UMD_PROD |
| react-test-renderer.development.js | +0.4% | +0.3% | 386.36 KB | 387.91 KB | 84.24 KB | 84.5 KB | NODE_DEV |
| react-test-renderer.production.min.js | 🔺+0.4% | 🔺+0.5% | 53.96 KB | 54.16 KB | 16.39 KB | 16.48 KB | NODE_PROD |
| ReactTestRenderer-dev.js | +0.4% | +0.3% | 390.42 KB | 391.98 KB | 82.97 KB | 83.23 KB | FB_WWW_DEV |

scheduler

| File | Filesize Diff | Gzip Diff | Prev Size | Current Size | Prev Gzip | Current Gzip | ENV |
| --- | --- | --- | --- | --- | --- | --- | --- |
| scheduler.development.js | n/a | n/a | 0 B | 19.17 KB | 0 B | 5.74 KB | UMD_DEV |
| scheduler.production.min.js | n/a | n/a | 0 B | 3.16 KB | 0 B | 1.53 KB | UMD_PROD |

Generated by 🚫 dangerJS

@acdlite (Collaborator) commented Oct 11, 2018

This is a known behavior for updates at user-blocking (née interactive) priority, the consequence of an intentional performance/modeling trade-off.

They time out effectively immediately, because our heuristic for computing the elapsed time is to subtract the normal expiration time offset from the render expiration time. Because the user-blocking expiration offset is ~1 second and the normal expiration offset is ~5 seconds, the computed elapsed time of a user-blocking update is always ~4 seconds higher than the real one. The code is here:

```js
// This suspend happened outside of any already timed-out
// placeholders. We don't know exactly when the update was scheduled,
// but we can infer an approximate start time from the expiration
// time. First, find the earliest uncommitted expiration time in the
// tree, including work that is suspended. Then subtract the offset
// used to compute an async update's expiration time. This will cause
// high priority (interactive) work to expire earlier than necessary,
// but we can account for this by adjusting for the Just Noticeable
// Difference.
const earliestExpirationTime = findEarliestOutstandingPriorityLevel(
  root,
  renderExpirationTime,
);
const earliestExpirationTimeMs = expirationTimeToMs(
  earliestExpirationTime,
);
startTimeMs = earliestExpirationTimeMs - LOW_PRIORITY_EXPIRATION;
```
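A minimal sketch of that arithmetic (the constant names and millisecond values are illustrative stand-ins for the ~1s and ~5s offsets described above, not React's actual constants):

```js
// Illustrative offsets: a user-blocking update expires ~1s out,
// a normal (async) update ~5s out.
const HIGH_PRIORITY_EXPIRATION = 1000; // hypothetical, ~1s
const LOW_PRIORITY_EXPIRATION = 5000;  // hypothetical, ~5s

// The heuristic infers the start time by subtracting the *normal*
// offset from the expiration time, regardless of actual priority.
function inferredElapsedMs(nowMs, expirationTimeMs) {
  const inferredStartMs = expirationTimeMs - LOW_PRIORITY_EXPIRATION;
  return nowMs - inferredStartMs;
}

// A user-blocking update scheduled right now:
const now = 100000;
const userBlockingExpiration = now + HIGH_PRIORITY_EXPIRATION;

// Real elapsed time is 0ms, but the heuristic reports 4000ms, so any
// maxDuration under ~4s appears to have already timed out.
console.log(inferredElapsedMs(now, userBlockingExpiration)); // 4000
```

This is why a maxDuration of 2000 in the repro above times out immediately: the inferred elapsed time already exceeds it before any real time has passed.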

The reason we made this trade-off was so we could avoid storing the start time of every pending priority level. It seemed reasonable because if you want to suspend, you probably should be deferring the update to normal priority, anyway.

We might want to consider revisiting this decision. I think it's probably fine for now. There are other compelling reasons to go back to tracking all the pending levels, so if we do, we should definitely pick this task back up at that point.

@acdlite (Collaborator) commented Oct 11, 2018

Follow-up: I think we can fix this without needing to track every expiration time. Once we start tracking the current priority level with Scheduler, we can use that to subtract the correct offset. The Scheduler priority integration (which we were already planning to do for other reasons) replaces the need to maintain our own list of tasks per root, since Scheduler is already doing that.

@acdlite (Collaborator) commented Oct 11, 2018

Oh right, past Andrew also left this comment:

> This will cause high priority (interactive) work to expire earlier than necessary, but we can account for this by adjusting for the Just Noticeable Difference.

In other words, if we're in Concurrent Mode and we get a timeout of 0ms, we can round that up to the nearest noticeable threshold, say 250ms. That should avoid the pathological case.
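A rough sketch of that rounding idea (the 250ms bucket and the function name are illustrative, not React's actual JND implementation):

```js
// Round a computed timeout up to the next "just noticeable
// difference" bucket, so a pathological 0ms timeout becomes 250ms.
const JND_MS = 250; // illustrative threshold from the comment above

function ceilToJnd(timeoutMs) {
  if (timeoutMs <= 0) {
    return JND_MS;
  }
  return Math.ceil(timeoutMs / JND_MS) * JND_MS;
}

console.log(ceilToJnd(0));   // 250
console.log(ceilToJnd(120)); // 250
console.log(ceilToJnd(300)); // 500
```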

That'll be an easier fix for now, since Scheduler integration is a larger effort.

@koba04 (Contributor) commented Nov 9, 2018

I've also struggled with this. I understand the reason now, but the behavior seems confusing for many developers: when they specify maxDuration, they expect it to be treated as the timeout.

I think it would be nice to print a warning in development mode if the timeout fires earlier than maxDuration.
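A rough sketch of what such a development-only warning could look like (the function name, parameters, and warning text are hypothetical, not an actual React API):

```js
// Hypothetical dev-mode check: warn when the fallback was committed
// sooner than the maxDuration the developer asked for. Returns
// whether the timeout fired early, for illustration.
function maybeWarnEarlyTimeout(maxDurationMs, actualWaitMs) {
  const firedEarly = maxDurationMs != null && actualWaitMs < maxDurationMs;
  if (process.env.NODE_ENV !== 'production' && firedEarly) {
    console.warn(
      `Suspense timed out after ${actualWaitMs}ms, earlier than the ` +
      `specified maxDuration of ${maxDurationMs}ms. High-priority ` +
      'updates may time out sooner than maxDuration.'
    );
  }
  return firedEarly;
}
```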

@gaearon (Collaborator, Author) commented Nov 9, 2018

@koba04 I don't think what you're describing is necessarily related to this PR btw. It is going to be necessary to address this in the docs though.

@necolas necolas added the React Core Team Opened by a member of the React Core Team label Jan 8, 2020
@gaearon (Collaborator, Author) commented Jan 23, 2020

I don't know if this is still relevant. maxDuration is dead but maybe this heuristic can still bite us in some way?

@gaearon gaearon closed this Feb 11, 2021
Labels: CLA Signed, React Core Team (Opened by a member of the React Core Team)