
Performance considerations #10

Closed
viskin opened this issue Jul 11, 2016 · 23 comments



viskin commented Jul 11, 2016

I'm evaluating this library as an architecture option for our Chrome extension. Conceptually, your work looks fantastic.
I just wonder whether it provides adequate performance.

If I get it right, a background page and a content script communicate using Chrome's port.postMessage. What happens is that on each state change, the whole store's state is transferred from the background to the content script.

Here's an excerpt from wrapStore.js that shows the (possible) performance bottleneck:

const sendState = () => {
  // the entire store state is serialized and sent on every change
  port.postMessage({
    type: STATE_TYPE,
    payload: store.getState()
  });
};

AFAIK port.postMessage serializes the data it sends, so as the state grows and the number of small state manipulations increases, the performance hit of serialization should become noticeable.

Another, even bigger issue is that each time the state changes, it is fully re-created in the content script, including parts of the state that weren't modified. This will cause a re-render of all components. Normally, unmodified parts keep the same place in memory (the same references), so only components that depend on the changed parts of the state re-render.

So the question is: do you experience any performance issues in extensions that use react-chrome-redux?

If you do, using a diff-patching algorithm such as jsondiffpatch and passing only the change delta over the port could, in theory, improve performance.
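
For illustration, here is a minimal sketch of what a diff-based update could look like, assuming jsondiffpatch is used on both ends; the DIFF_TYPE message type and the localState variable are hypothetical names, not part of the current library:

import * as jsondiffpatch from 'jsondiffpatch';

const differ = jsondiffpatch.create();

// Background side: send only the delta between the previous and next state.
let previousState = store.getState();

const sendStateDiff = () => {
  const nextState = store.getState();
  const delta = differ.diff(previousState, nextState);
  previousState = nextState;

  if (delta !== undefined) { // undefined means nothing changed
    port.postMessage({
      type: DIFF_TYPE,
      payload: delta
    });
  }
};

// Content-script side: apply the delta to the local copy of the state.
port.onMessage.addListener((message) => {
  if (message.type === DIFF_TYPE) {
    // jsondiffpatch.patch mutates its target in place, so a real
    // implementation would need to rebuild changed branches immutably
    // to preserve reference equality for unchanged parts.
    localState = differ.patch(localState, message.payload);
  }
});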

What do you think?


viskin commented Jul 11, 2016

And some afterthoughts.
Why do you need to move state from one part of the extension to another? Wouldn't it be more performant to just run the reducers on both stores as each action arrives? No unneeded re-renders, the stores stay synchronized (reducers are deterministic), and a minimal amount of data is passed through Chrome's port.
Even more, a content script could define a sub-store, e.g. for settings, and register only the settings reducers on it. All actions would be passed to it, but it would only change when 'settings' actions arrive, ignoring all others!
Even if the state is big, this should work very fast, as reducer functions are usually very fast.

@tshaddix

Hi, @viskin! Thanks for all this input. I'll try to answer/respond to your questions and suggestions in order:

  1. Performance Issues: I have yet to see performance issues from the implementations of this library that I've been involved in. That said, all of the implementations I've been part of have been relatively small and simple projects.
  2. Transferring the whole state: You're completely right! The current implementation is naive in how it updates the state (by just transferring and replacing the whole thing). I fully expected to need to come back and optimize this part of the package, and I think your suggestion is a wonderful one. It would be a good idea to send diffs to UI pages and content scripts, as long as you made sure to send the complete state to newly initialized content scripts and UI pages (see the sketch after this list).
  3. Why do you need to move state from one part of the extension to another: Personally, I wouldn't say we "move state from one part of the extension to another". We transfer actions in one direction and state updates in the other. The current implementation provides a single source of truth (at least in terms of logic implementation) as well as a modular code base. I explain most of my reasoning here.
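
As a rough sketch of how point 2 could look inside wrapStore, each newly connected port could receive the full state once and diffs afterwards; the DIFF_TYPE message type and the computeDiff helper are hypothetical stand-ins for whatever diffing strategy gets chosen:

chrome.runtime.onConnect.addListener((port) => {
  let lastSentState = store.getState();

  // A newly initialized content script or UI page gets the complete state once.
  port.postMessage({
    type: STATE_TYPE,
    payload: lastSentState
  });

  // After that, only send what changed.
  const unsubscribe = store.subscribe(() => {
    const nextState = store.getState();
    const delta = computeDiff(lastSentState, nextState); // hypothetical helper
    lastSentState = nextState;

    if (delta !== undefined) {
      port.postMessage({
        type: DIFF_TYPE,
        payload: delta
      });
    }
  });

  // Stop sending updates once the page goes away.
  port.onDisconnect.addListener(unsubscribe);
});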

I'm very interested in moving forward with your suggestion of diffing state referenced in 2. I'd be happy to work on it in the near-ish future, but would also be thrilled to look over a PR if you have one. Thanks, again!


viskin commented Jul 11, 2016

Great.
I don't have a PR now, I'm just sniffing around :)

  1. Calculating a diff on each state change can be a pretty costly operation by itself, but maybe less costly than serializing/deserializing and then re-rendering everything. It seems relatively easy to implement and measure the difference.
  2. Yeah, maybe it's a good thing that the Proxy Store doesn't need to know what the reducers are, etc. And your mental model and code are beautiful. I think that for the large-state extension we are planning, performance will be a big issue. From a performance point of view (and only from it), it seems the following implementation would be more appropriate for us, IMHO. It's just an alternative solution that seems feasible (but is kind of less beautiful and has more moving parts):
  • Each proxy store is implemented internally as a real Redux store. It registers the same root reducer as the background store.
  • At the beginning, proxy stores get the initial state from the background store (though this step may turn out to be redundant).
  • Each action (not thunks etc., only plain actions) is passed to all stores, no matter which store initiated it.
  • Each store updates its state by running reducer(state, action). As the reducers are the same, this operation produces the same state in all stores.
  • Everything else (aliases, thunks, etc.) remains the same as you implemented it; those are really good things.

The hope is that running reducers is a very cheap operation, so running them multiple times (multiplied by the number of stores) should also be cheap.
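
As a rough sketch of that idea (not this library's API), the content-script side could look something like this, assuming the background rebroadcasts every plain action to all connected pages; the 'BROADCAST_ACTION' message type is an illustrative name:

import { createStore } from 'redux';
import rootReducer from './reducers'; // the same root reducer the background uses

const port = chrome.runtime.connect({ name: 'proxy-store' });
const store = createStore(rootReducer);

// Replay every plain action broadcast by the background page.
port.onMessage.addListener((message) => {
  if (message.type === 'BROADCAST_ACTION') {
    store.dispatch(message.action);
  }
});

// Locally initiated plain actions are only sent to the background; the
// background reduces them itself and rebroadcasts them to every connected
// page (including this one), so each action is reduced exactly once here.
const dispatchGlobally = (action) => {
  port.postMessage({
    type: 'BROADCAST_ACTION',
    action
  });
};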

It's up to you what to implement, of course; I'm just proposing another possible approach to the problem :)

@tshaddix

Completely understandable! It sounds like you and your team have done your due diligence and figured out a great way to handle large-scale applications (in terms of state size, that is). If you end up abstracting your code into a stand-alone package, I'd love to see it :)

I love the concept you've described. My hesitation with implementing it in this library is that it would really change the API and the overall structure of projects using this library. For developers who are not experiencing the performance issues you are expecting, I'd like to keep the general structure consistent. It sounds like you and your team have a great concept for an alternative package that can handle large-scale applications!


viskin commented Jul 14, 2016

Cool. I think we'll start by using your implementation, since it's great and simple.
If we reach the point where we need more performance, we'll enhance the library a bit.
Thank you 👍

viskin closed this as completed Jul 14, 2016
@tshaddix

@viskin Sounds great! Thank you for your in-depth and valuable feedback. I appreciate you taking the time to outline suggestions and possible alternatives!


vhmth commented Jul 18, 2016

I have a Chrome extension that at times sends state updates very often (an audio visualizer whose data comes through the background during a recording). I'll keep my eye on this thread and weigh in if I hit a bottleneck.

@tshaddix

@vhmth Sounds good!


tyao1 commented Aug 15, 2016

I forked a boilerplate in which I implemented something similar to what @viskin suggested. The action is sent to the background, and the background sends it to every other store's reducers; it works well in the TodoMVC example for now. https://github.com/xiaobuu/react-chrome-extension-boilerplate


vhmth commented Aug 15, 2016

Hey @XiaoBuu awesome!

Which commits should we be looking at? I'm assuming everything up to 5f3e838ffc312c439c9fbecb50ea2821b241630f?

Would you be cool with opening a PR against your own repo so we can more easily see which changes you made? Have you tested the perf of this? If not, I'd be game for you landing it in a separate branch here, and I can build some perf tests on top (see #12).

@tshaddix

I'm not seeing a straightforward merge between @XiaoBuu's project and this one at this time. There is a pretty strong shift in philosophy between the two approaches. I'd love to see some stats if you come upon them, @XiaoBuu.

Has anyone experienced performance issues with the current methodology? I think we're still looking at a theoretical limit, not one observed in practice. @viskin, have you seen anything in your project? @vhmth?


tyao1 commented Aug 16, 2016

@vhmth Hi, I've made the PR: https://github.com/HeartRunner/react-chrome-extension-boilerplate/pull/1/files
@vhmth @tshaddix I haven't tested the perf. I don't know how to do a perf test 😂.


vhmth commented Aug 16, 2016

I have not started running perf tests (actually, I haven't yet built the audio bar widget in our new rewrite with this lib). We're currently in a pretty crunched iteration mode. I was planning on looking into perf testing this weekend (just got back from a friend's wedding last weekend, haha). That being said, the performance is still very acceptable from what I've been able to eyeball so far.

@XiaoBuu I'm starting to see how your approach works. If you could confirm this is correct, that would be great:

  1. Page A creates an action and updates its local store.
  2. Page A tells the background store about the action, and the background dispatches the action to every foreground store if the global key is set on the action (sketched after this list).
  3. Each foreground store receives the action and updates its own view of the state of the world.
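
If that reading is right, the background-side relay might look roughly like this; the global flag and the message shape are assumptions drawn from the description above, not necessarily what the boilerplate actually does:

const ports = [];

chrome.runtime.onConnect.addListener((port) => {
  ports.push(port);

  port.onDisconnect.addListener(() => {
    ports.splice(ports.indexOf(port), 1);
  });

  port.onMessage.addListener((message) => {
    const action = message.action;

    // The background store always reduces the action itself.
    store.dispatch(action);

    // If the action is marked global, fan it out to every other
    // connected foreground store so they can reduce it too.
    if (action.global) {
      ports
        .filter((p) => p !== port)
        .forEach((p) => p.postMessage({ action }));
    }
  });
});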

I'm a little wary about this approach since, any time you introduce a sync step that's governed by an asynchronous mechanism, you introduce complexity. The nice thing about having a single source of truth is that you can reason about the state of your application (especially applications that span several pages) without worrying that the state is incorrect. This has been absolutely key when writing our current Chrome extension.

Another data point: we dealt with a lot of these kinds of syncing issues at my previous company, and it quickly turned into a nightmare.

That being said, this is probably going to be much better performance-wise for most apps (since most apps really do concern themselves only with the current tab when the extension icon is activated). You never serialize the whole state, so you're not bound by its size.

The competing idea would obviously be to diff the state. We would certainly need perf tests, as well as tests that ensure the code that actually dispatches to each connected foreground window doesn't regress (I'm thinking about all the cases where a port gets disconnected, a state update doesn't get received, and the app experience breaks).

I'll let @tshaddix chime in here to see if he's cool with continuing down this path, since this is his baby. My vote would be to try this out on a separate branch with clearly documented modules and a wiki of the architecture, no global flag in actions, and full tests (especially on the port connectivity logic). I'm cool with trying new things. I'd be happy to then build the Opentest extension on top and see how it fares perf-wise.


tyao1 commented Aug 16, 2016

@vhmth That's correct. I totally agree with you; my approach can cause problems as the project grows. Theoretically, if every action is dispatched correctly to the reducers, the reducers will end up with the same state. But my approach can cause problems, since mixing global and local actions can leave some parts of the state different between stores. I did this purposefully because I want different tabs to have some state of their own, but I didn't think through the drawbacks.

I think my approach suits some use cases, but overall the current architecture is more elegant and easier to reason about, and should be continued.


vhmth commented Aug 16, 2016

I definitely think this is something worth looking into (that's my vote anyway). :-P Didn't mean to discourage you! Actually, perhaps we could introduce these optimizations as middleware? That way you could choose to use a diffing middleware or a localStore middleware.
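
For example, a diffing optimization could in principle be packaged as an opt-in Redux middleware on the background store; this is only a sketch, with computeDiff, the 'PATCH_STATE' type, and the single-port wiring as illustrative placeholders:

// Runs after the reducers (via next), compares the old and new state,
// and sends only the delta over the port.
const createDiffingMiddleware = (port, computeDiff) =>
  (store) => (next) => (action) => {
    const prevState = store.getState();
    const result = next(action); // reducers run here
    const delta = computeDiff(prevState, store.getState());

    if (delta !== undefined) {
      port.postMessage({
        type: 'PATCH_STATE',
        payload: delta
      });
    }

    return result;
  };

// Opt in when creating the background store:
// createStore(rootReducer, applyMiddleware(createDiffingMiddleware(port, computeDiff)));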


viskin commented Aug 16, 2016

@XiaoBuu I thought about the possibility that the background page would hold an aggregated state that includes the state of each foreground page, something like this:

  {
    foregroundStates: [
        { /* state of tab 0 */ },
        { /* state of tab 1 */ },
    ],
    commonState: { /* common state */ }
  }

And each foreground page would only be synced with the data relevant to it, like this:

/* I'm tab 0, here's my state */
{
   foregroundState: { /* state of tab 0 */ },
   commonState: { /* common state */ }
}

Each action should be provided to all the reducers, so that the states stay in sync.

The good thing is that such an approach is memory-friendly and should perform really fast.
The problem is that the state now has a different structure on each page, while the Redux architecture assumes there is only one state in the application, which is the source of truth. But maybe we can see each page as a separate application, with these applications syncing their states between them?
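
A rough sketch of how the background's root reducer and the per-tab sync slice might be shaped, keyed by tab id rather than an array for simplicity; the tabReducer and commonState reducers and the action's tabId field are illustrative placeholders:

import { combineReducers } from 'redux';

// Placeholder reducers for the sketch.
const tabReducer = (state = {}, action) => state;   // per-tab state
const commonState = (state = {}, action) => state;  // shared state

// Keep one slice per tab, keyed by the tab id carried on the action.
const foregroundStates = (state = {}, action) =>
  action.tabId !== undefined
    ? Object.assign({}, state, {
        [action.tabId]: tabReducer(state[action.tabId], action)
      })
    : state;

const rootReducer = combineReducers({
  foregroundStates, // { [tabId]: state of that tab }
  commonState       // common to all pages
});

// Only this subset would be synced to a given tab.
const stateForTab = (state, tabId) => ({
  foregroundState: state.foregroundStates[tabId],
  commonState: state.commonState
});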


viskin commented Aug 16, 2016

One more thing: IMHO it's never worth optimizing something without first measuring that the optimization is indeed needed.
Maybe Chrome is smart enough to, say, perform copy-on-write when the state is sent through the message channel between pages. In that case it would perform crazy fast, and any optimization over the current architecture of react-chrome-redux would simply be unnecessary.
It will be interesting to see how fast a big state moves between pages.
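
One cheap way to get a first number, before doing anything fancy, would be to time how long it takes to round-trip a large state through JSON (postMessage actually uses structured clone, which may behave differently, so this is only a rough proxy):

// Build an artificially large state and measure serialize + deserialize time.
const bigState = {
  items: Array.from({ length: 50000 }, (_, i) => ({
    id: i,
    label: 'item-' + i,
    done: i % 2 === 0
  }))
};

const t0 = performance.now();
const copy = JSON.parse(JSON.stringify(bigState));
const t1 = performance.now();

console.log('Round-tripped ' + copy.items.length + ' items in ' + (t1 - t0).toFixed(1) + ' ms');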


tyao1 commented Aug 16, 2016

@viskin I think each page can be thought of as a different application, since they may vary a lot.
I just came up with one test case in my head: keep an input's state in the reducers, like redux-form does. If the input doesn't lag when the state tree is big, there would be no need to worry about the performance of the current implementation.


tshaddix commented Sep 2, 2016

@viskin I agree heavily with all of your points. Just as you pointed out (and as described in the Introduction), this project takes the approach that your extension is one application with many UI components, each offering a different way to use or manipulate the application data.

@XiaoBuu it sounds like you would rather view your extension as multiple applications that use the Chrome extension as a common platform. In that case, it would make sense to take an alternative approach such as the one @viskin is suggesting, as it is both efficient and much more along the lines of your thinking.

Good conversation going on here!


vhmth commented Sep 2, 2016

Guys, I thought about this some more and am leaning heavily toward @tshaddix's and @viskin's view of only pursuing these optimizations when concrete concerns are raised. I will certainly raise any issues I see when we start implementing the audio bar visualizer I mentioned before.


vhmth commented Sep 2, 2016

Wow can't believe this recording made it through on this airplane wifi. 💯

https://www.opentest.co/share/1b133350715911e6aafc697df8fe7c98


vhmth commented Sep 2, 2016

Apologies for the shoddy mic in my headphones. Forgot that I broke them last weekend. :-P


tshaddix commented Sep 4, 2016

@vhmth You were totally understandable once I turned the volume up a bit! 🔈

Sounds good! Let's continue to operate under the "optimize as needed" philosophy. Once we have a good report of performance issues we can revisit this.
