React Context value propagation performance #13739

Open
alexreardon opened this Issue Sep 27, 2018 · 30 comments

alexreardon commented Sep 27, 2018

Hi there!

I have observed a performance issue with context providers and value updates. I have spoken with @gaearon about this on Twitter, so he will have a bit more context.

Let's say you have a provider:

const CounterContext = React.createContext(1);
const Provider = CounterContext.Provider;

And you update the value passed to the provider:

render() {
  return (
    <Provider value={this.state.value}>{this.props.children}</Provider>
  );
}

All good so far.

Let's say you want to pass this value down the tree, but for performance reasons you do not want to re-render the whole tree. The only components you want to re-render are your consumer components (in our case CounterContext.Consumer).

A naive way would be to do something like this:

class Blocker extends React.Component {
  shouldComponentUpdate() {
    return false;
  }
  render() {
    return this.props.children;
  }
}

// ...

render() {
  return (
    <Provider value={this.state.value}>
      <Blocker>
        {this.props.children}
      </Blocker>
    </Provider>
  );
}

Even though no components in the tree are rendered except for the consumers, the update itself is very expensive. I suspect that the tree walking algorithm takes a while to run.
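
For completeness, a matching consumer somewhere deep inside the blocked tree would look something like this (illustrative):

// The only components that should re-render on a value update are consumers like this one.
const Counter = () => (
  <CounterContext.Consumer>
    {(value) => <span>Count: {value}</span>}
  </CounterContext.Consumer>
);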

Standalone example

https://codesandbox.io/s/61jnr811kr

This example has about a 20-30ms render time for a simple counter update. In a production app with a list of 500 nodes (~500 × 10 components, i.e. ~5000 components) we were looking at update times similar to that of rendering the whole tree (150ms+).

A bit more context

I was trying to replace react-redux with a root StateProvider that would create a subscription to a store and pass the latest state into the context. The consumers would then pick up this update, run a selector, and re-render if the resulting value had changed. I had this all working in react-beautiful-dnd, but I found the updates through the context itself were too slow for usage (you can see the relevant files here and here).
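
For illustration, a minimal sketch of that pattern (names are assumed, not the actual react-beautiful-dnd code):

const StateContext = React.createContext(null);

class StateProvider extends React.Component {
  state = { value: this.props.store.getState() };

  componentDidMount() {
    this.unsubscribe = this.props.store.subscribe(() => {
      this.setState({ value: this.props.store.getState() });
    });
  }
  componentWillUnmount() {
    this.unsubscribe();
  }
  render() {
    return (
      <StateContext.Provider value={this.state.value}>
        {this.props.children}
      </StateContext.Provider>
    );
  }
}

// A connect-like helper: the consumer picks up every new state, runs a selector,
// and the PureComponent wrapper bails out when the selected props are unchanged.
function connectLike(selector, WrappedComponent) {
  class Selected extends React.PureComponent {
    render() {
      return <WrappedComponent {...this.props} />;
    }
  }
  return (ownProps) => (
    <StateContext.Consumer>
      {(state) => <Selected {...ownProps} {...selector(state, ownProps)} />}
    </StateContext.Consumer>
  );
}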

trueadm commented Sep 27, 2018

I looked into this and running your CodeSandbox example with production mode flags makes the update time take around ~1ms for me: https://codesandbox.io/s/7kw8qozz9x.

Did you try running with the production flags to see if your performance issue went away? Looking into the development bundle, 10ms of the time taken seems to be due to performance.measure markers being really bad for performance (something the React team has already found in the past).

gaearon commented Sep 27, 2018

Looks like @alexreardon might be assuming

import "react/cjs/react.production.min.js";
import "react-dom/cjs/react-dom.production.min.js";

import React from "react";
import ReactDOM from "react-dom";

would switch on production mode — I don't think that's correct.

That said, I'd still appreciate it if @trueadm could look into this locally with a prod build (and maybe fewer performance.now calls). In our previous discussion, @alexreardon said these issues occur in prod builds too, even without measurements.

trueadm commented Sep 27, 2018

@gaearon I added them in (well, Ivan did) after I asked how to enable production mode on CodeSandbox, and this was the recommended way for now. I'll keep digging into the issue though and see if something comes up.

alexreardon commented Sep 27, 2018

The issue still existed, to a reduced extent, with production builds. In the real-world example I listed (a 500-item list), the time to hit all the consumers was still 20-30ms with production builds. Compare this against something like react-redux, which uses an event emitter to pass values down the tree, where the cost to hit a consumer is ~0ms.

alexreardon commented Sep 27, 2018

I made the CodeSandbox afterwards to illustrate the issue, and I did not try to get it working with production builds.

markerikson commented Sep 27, 2018

This would kinda be a big deal for React-Redux, actually, because we're trying to switch to using createContext in version 6 instead of having all connected components be separate subscribers.

This also ties into the performance benchmarks we've been trying to do with our WIP branches.

alexreardon commented Sep 27, 2018

I thought the sandbox showed the issue well enough, but if more clarity is needed I can try to gather more numbers.

markerikson commented Sep 27, 2018

Yes, I'd certainly appreciate any further details we can gather on this.

I'm also curious whether observedBits comes into play with this at all. If consumers are marked with bitmasks that shouldn't require an update, does React still have to traverse the entire tree, or does that speed things up?
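
For reference, the opt-in API being referred to (unstable at the time) looks roughly like this; the bit layout below is just an example:

// Rough shape of the observedBits opt-in (unstable API at the time); illustrative only.
const CountersContext = React.createContext(
  { a: 0, b: 0 },
  (prev, next) => {
    // Return a bitmask describing which parts of the value changed.
    let changedBits = 0;
    if (prev.a !== next.a) changedBits |= 0b01;
    if (prev.b !== next.b) changedBits |= 0b10;
    return changedBits;
  }
);

// This consumer only observes bit 0b01, so it should only re-render when `a` changes.
const OnlyA = () => (
  <CountersContext.Consumer unstable_observedBits={0b01}>
    {(value) => <span>{value.a}</span>}
  </CountersContext.Consumer>
);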

gaearon commented Sep 27, 2018

The sandbox isn't helpful by itself because it doesn't run in production mode and includes sandbox-specific overhead (e.g. they inject code into every loop to check if it's infinite).

To get these issues sorted, we'll definitely need repro cases as plain HTML files working against UMD production builds of React and ReactDOM.

@markerikson Let's leave observedBits out of this for now, I think it derails the issue a little bit. The issue is specifically about the default update performance, and if there's some extra overhead we'd want to look at it regardless of opt-in optimization APIs. In fact talking about these APIs here distracts us from fixing the bigger issue because it's tempting to say "we just need this opt-in thing to fix it".

trueadm commented Sep 27, 2018

Yeah, a self-contained HTML version using UMD builds would be a big help. I also noticed that there's another ReactDOM being used to render the CodeSandbox UI, and that probably affected profiling a bit too.
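
For anyone putting one together, a minimal skeleton for that kind of repro might look like this (the unpkg URLs, tree depth, and timing approach are just assumptions):

<!DOCTYPE html>
<html>
  <body>
    <div id="root"></div>
    <!-- UMD production builds of the React version discussed in this thread -->
    <script src="https://unpkg.com/react@16.5.2/umd/react.production.min.js"></script>
    <script src="https://unpkg.com/react-dom@16.5.2/umd/react-dom.production.min.js"></script>
    <script>
      const e = React.createElement;
      const Ctx = React.createContext(0);

      // Build a deep tree once, with a single consumer at the bottom. Reusing the same
      // element between renders means only the consumer should re-render on value updates.
      let tree = e(Ctx.Consumer, null, (value) => e('span', null, String(value)));
      for (let i = 0; i < 1000; i++) {
        tree = e('div', null, tree);
      }

      class App extends React.Component {
        constructor(props) {
          super(props);
          this.state = { value: 0 };
        }
        componentDidMount() {
          setInterval(() => {
            const start = performance.now();
            this.setState({ value: this.state.value + 1 }, () => {
              console.log('context update took', performance.now() - start, 'ms');
            });
          }, 1000);
        }
        render() {
          return e(Ctx.Provider, { value: this.state.value }, tree);
        }
      }

      ReactDOM.render(e(App), document.getElementById('root'));
    </script>
  </body>
</html>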

alexreardon commented Sep 27, 2018

I did a bit more digging. It turns out things are not as bad as I first observed, which is good! I must have been testing against dev builds at some point.

Tip: if you want to run the CodeSandbox examples locally as a React developer, you can just copy-paste them into babel-standalone/dev in the React repo. Then you can run them against your own custom builds of React (cheers @gaearon for the tip).

I have simplified my real world use case for clarity. You can find it here:

branch: state-provider-simplier (it will not allow parent renders through at all so reordering is broken, but it simplifies the use case)

Results (with production build of React 16.5.2):

  • with react-redux and critical render: ~3ms
  • using Context and critical render: ~5ms

(a ~40% slowdown)

Critical render: when dragging an item we only render that Draggable and not the rest of the tree

Which sort of makes sense at face value. Context updates walk the tree, whereas react-redux has a direct subscription, so its state update propagation is closer to 0ms.

It looks like context updates would be sufficient for most use cases, although it is a significant slowdown compared to an event emitter model.

For react-beautiful-dnd we push for extremely fast updates, so a 2ms (~40%) slowdown is still significant for us.

alexreardon commented Sep 27, 2018

This is relevant for react-redux, @markerikson, as moving to leaning on context for propagating state values would lead to a slowdown compared to the current subscription pattern.

theKashey commented Sep 28, 2018

Looks like the zero-subscription-cost model used by the Context API is not absolutely free.

markerikson commented Sep 28, 2018

Ironically, one of the reasons we're moving to createContext is that I had hoped switching to a single subscription and passing the state down via context, instead of N subscriptions, would be faster. But React-Redux v5's ability to do its memoization checks and bail out before even asking React to update does have an advantage over immediately calling setState() at the top of the component tree and letting React walk over everything.

FWIW, our current benchmarks indicate that the "single subscription + createContext" approach winds up being just a bit slower than v5's "multiple subscriptions" approach in artificial stress test scenarios.

It would be nice if we could quantify exactly what the cost is inside React in this case.
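
To make the comparison concrete, here is an illustrative sketch only (heavily simplified, not the actual React-Redux source) of the two strategies being compared:

// v5-style: each connected component subscribes directly to the store and bails out
// before React is ever involved when its selected slice is unchanged.
function subscribeConnected(store, selector, component) {
  let lastSelected = selector(store.getState());
  return store.subscribe(() => {
    const next = selector(store.getState());
    if (next === lastSelected) return; // no setState, no React work at all
    lastSelected = next;
    component.setState({ selected: next });
  });
}

// v6-style: a single subscription at the root pushes the whole store state into a
// Provider, so every store update starts a React update from the top of the tree
// and context propagation has to go find the consumers.
function subscribeProvider(store, providerComponent) {
  return store.subscribe(() => {
    providerComponent.setState({ storeState: store.getState() });
  });
}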

theKashey commented Sep 28, 2018

Yep, with "old subscription" mechanics we could use "observedBits" on the sender side, not the receiver.

And I still don't get the real problem with "tearing". Is it actually so bad?

trueadm commented Sep 28, 2018

@alexreardon I also dug more into it last night and this morning. The new context will never be as fast as the way Redux updates state, simply because the new context model does more work behind the scenes and thus incurs more overhead (it has to traverse the entire tree, going through one fiber at a time). There might be things we can do to tweak the implementation of this function, however, so I'll dig into that today.

gaearon commented Sep 28, 2018

@trueadm I think in the beginning we thought we would cache the traversal path between updates but then put it out of scope of the initial implementation. Maybe we can revisit this. The problem is then we’ll need to somehow invalidate that cache or keep it consistent.

gaearon commented Sep 28, 2018

We could also have a linked list of all context consumers. That’s still a traversal to do but it’s less than the whole tree.

alexreardon commented Sep 28, 2018

markerikson commented Sep 28, 2018

@trueadm @gaearon: Yeah, that's about what I expected. I knew calling setState() at the top of the tree on each Redux update would cause React to have to re-evaluate things, which is more work than just bailing out early. That's one of the reasons why I've been so interested in the potential of this observedBits thing to skip some of that work. If there's any way the context implementation can be sped up (and especially cases where only context updates need to be propagated to the tree due to use of shouldComponentUpdate-type blocking), it would be very helpful.

theKashey commented Sep 28, 2018

Something like obtaining a ref to the Provider and calling a value update outside of a React render would be great. We almost got unstable_read, so why not unstable_write?

markerikson commented Sep 29, 2018

@theKashey : see #13293 :)

theKashey commented Sep 30, 2018

@markerikson - that would only work for client-side stuff, where a “default” value could work for everyone, but not for SSR, where different clients share the same context, but not the same value.

Andarist commented Oct 27, 2018

simply because the new context model does more work behind the scenes and thus incurs more overhead (it has to traverse the entire tree, going through one fiber at a time).

@trueadm why is that? How is the current context model different from consumers subscribing to the provider directly? Rendering a consumer (or using a hook 😉) has to traverse the tree upwards (which should be fairly lightweight) to find the provider it has subscribed to, but other than that I would imagine that providing a new value should have roughly the same cost as in the Redux case. Is there any particular reason why the provider doesn't maintain a registry of consumers (I guess it would have to be non-flat, to trigger consumers top-down rather than in insertion order)?

probablyup commented Nov 15, 2018

I think we're seeing a similar issue over at styled-components, but with use of the context consumer components (there's a repro sandbox in that issue): styled-components/styled-components#2215

In a wide tree of the same component rendered many times, it's quite slow and the vast majority of the scripting time is in React-land. In styled-components v4, the stack of each component looks like this:

ForwardRef > ReactClass > StylesheetConsumer > ThemeConsumer* > Element

* for static components with no function interpolations, we skip the ThemeConsumer

markerikson commented Nov 15, 2018

I have to say this is still my biggest concern about moving React-Redux to use createContext internally. Still not sure exactly how much it's going to affect things in real-world apps, but it does seem like there's definite overhead involved in traversing the tree to find consumers.

alansouzati commented Nov 20, 2018

I'm curious why the traversal would affect the initial mount. Maybe I'm missing something, but it seems that we would only face performance issues with updates. Can someone clarify this?

alansouzati commented Nov 22, 2018

One thing I've noticed is that this performance issue only exists in development mode. If I build a production distribution and do the same performance tests, the issue simply goes away.

Does anyone have an idea why this is slow just in dev mode? I can take a stab at investigating that, but I'm not very familiar with the React code base.

Here is yet another codesandbox: https://codesandbox.io/s/xv1rz10p8q.

It wraps a bunch of consumers in a loop. This is where I started observing the slow-to-render issue.
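
Roughly the shape of what that sandbox does, as an illustration (the component names and count are made up):

// Illustrative only: many consumers rendered side by side under one provider.
const ThemeContext = React.createContext('light');

const Row = () => (
  <ThemeContext.Consumer>
    {(theme) => <div className={theme}>row</div>}
  </ThemeContext.Consumer>
);

const List = ({ count = 1000 }) => (
  <ThemeContext.Provider value="dark">
    {Array.from({ length: count }, (_, i) => (
      <Row key={i} />
    ))}
  </ThemeContext.Provider>
);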

alansouzati commented Nov 30, 2018

Hey, can anyone on the React core team try to answer this? I could definitely try to help, but I'm afraid it will take me forever to understand what is going on. Please? 🙏 FYI @gaearon

abritinthebay commented Dec 5, 2018

We could also have a linked list of all context consumers. That’s still a traversal to do but it’s less than the whole tree.

Is this something that is likely to be on the roadmap soon? Given that React-Redux has just released its new context-based subscription model, I imagine you'll be hearing more about perf issues related to this...
