
Orders of magnitude slower than mobx #440

Closed
knoopx opened this issue Oct 9, 2017 · 64 comments
Labels: enhancement (Possible enhancement)

Comments

@knoopx commented Oct 9, 2017

Hi, I'm migrating an app from mobx to mobx-state-tree and I'm running into some performance issues.

Here is the simplest example case:

https://mattiamanzati.github.io/mobx-state-tree-playground/#src=import%20%7B%20observable%20%7D%20from%20'mobx'%0Aimport%20%7B%20types%20%7D%20from%20%22mobx-state-tree%22%0A%0Aconst%20MSTStore%20%3D%20types.model(%7B%0A%20%20%20%20items%3A%20types.optional(types.array(types.string)%2C%20%5B%5D)%0A%7D).actions(self%20%3D%3E%20(%7B%0A%20%20%20%20setItems(items)%7B%0A%20%20%20%20%20%20%20%20self.items%20%3D%20items%0A%20%20%20%20%7D%0A%7D))%0A%0Aclass%20MobxStore%20%7B%0A%20%20%20%20constructor()%7B%0A%20%20%20%20%20%20%20%20this.items%20%3D%20observable(%5B%5D)%20%0A%20%20%20%20%7D%0A%20%20%20%20setItems(items)%7B%0A%20%20%20%20%20%20%20%20this.items%20%3D%20items%0A%20%20%20%20%7D%0A%7D%20%0A%0Aconst%20items%20%3D%20Array.from(Array(100000).keys()).map(x%20%3D%3E%20%60Item%20%24%7Bx%7D%60)%0A%0Aconst%20mstStore%20%3D%20MSTStore.create()%0A%0Aconsole.time(%22mst%22)%0AmstStore.setItems(items)%0Aconsole.timeEnd(%22mst%22)%0A%0Aconst%20mobxStore%20%3D%20new%20MobxStore()%0A%0Aconsole.time(%22mobx%22)%0AmobxStore.setItems(items)%0Aconsole.timeEnd(%22mobx%22)
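
For reference, here is the snippet encoded in that playground link, decoded for readability:

import { observable } from 'mobx'
import { types } from "mobx-state-tree"

const MSTStore = types.model({
    items: types.optional(types.array(types.string), [])
}).actions(self => ({
    setItems(items) {
        self.items = items
    }
}))

class MobxStore {
    constructor() {
        this.items = observable([])
    }
    setItems(items) {
        this.items = items
    }
}

const items = Array.from(Array(100000).keys()).map(x => `Item ${x}`)

const mstStore = MSTStore.create()

console.time("mst")
mstStore.setItems(items)
console.timeEnd("mst")

const mobxStore = new MobxStore()

console.time("mobx")
mobxStore.setItems(items)
console.timeEnd("mobx")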

Replacing an observable array with 100,000 items takes ~5 s when using MST and just 0.02 s when using plain mobx. The worst part is that it completely blocks the UI while doing so. Is there anything I could do to improve the speed?

EDIT: Running with NODE_ENV=production does not improve things much.

development
mst: 744.248291015625ms
mobx: 0.052001953125ms
production
mst: 645.447021484375ms
mobx: 0.064208984375ms
@mattiamanzati added the enhancement label on Oct 9, 2017
@mattiamanzati (Contributor)

We haven't included many perf improvements yet; we're working mostly on stability right now. But is that many items in an array a real use case? :)

@knoopx (Author) commented Oct 9, 2017

Not really. I'm dealing with ~15,000 items, but with far more properties than just a string. Here's the actual model: https://github.com/knoopx/plex-music/blob/mst/src/store/album.js

The app is an Electron-based music player that holds all my music from my Plex Media Server in memory. I know the use case may not be realistic, but performance is perfectly fine with plain mobx (just a few hundred megabytes of memory).

@mattiamanzati (Contributor)

Hmm, can you please try with MST version 1.0.0?
I changed the array reconciler, and I am curious to see whether there is any actual performance difference.

@knoopx (Author) commented Oct 9, 2017

Results running the above example under mst@1.0.0:

diff --git a/package.json b/package.json
index ef4d331..23fb1aa 100644
--- a/package.json
+++ b/package.json
@@ -55,7 +55,7 @@
     "lodash": "^4.17.4",
     "mobx": "^3.3.1",
     "mobx-react": "^4.3.3",
-    "mobx-state-tree": "^1.0.1",
+    "mobx-state-tree": "1.0.0",

development
mst: 3577.899169921875ms
mobx: 0.063720703125ms
production
mst: 3780.85986328125ms
mobx: 0.02099609375ms

@knoopx (Author) commented Oct 9, 2017

I would say there's no real perf difference between 1.0.0 and 1.0.1 when just replacing an empty array with 100,000 items.

@mattiamanzati (Contributor)

1.0.0 (development):
mst: 3577.899169921875ms
mobx: 0.063720703125ms

1.0.1 (development):
mst: 744.248291015625ms
mobx: 0.052001953125ms

Hmm, so 1.0.1 is indeed faster.

@adamsanderson

Anecdotally, initializing a store of about 800 models, each with five or six simple fields (two strings, a number, and an ID), takes around 200 ms with MST 1.0.1. It's not terrible, but it's definitely chewing up some of our perf budget before we've really gotten to the complex stuff ;)

@mattiamanzati (Contributor)

@adamsanderson Does it perform better with 1.0.0? :)

@adamsanderson commented Oct 11, 2017 via email

@evanblack

We've run into this with our tree data as well. We're currently on 1.0.0, so we'll see if the upgrade to 1.0.1 helps and report back.

@mackross

Me too; it's crashing Chrome before I can load ~400 complex objects.

@mattiamanzati (Contributor)

As said before, we haven't looked at performance improvements yet. If someone wants to profile it a bit to understand which functions are taking so long, we can improve accordingly :)

@mackross commented Oct 12, 2017

I've attached a profile, and it looks as if most of the time is spent in the nested object creation during the array replace, but I'm not that familiar with JS optimization.
Profile-20171011T202546.zip
I had to reduce it to 10 objects to run the profile, as otherwise the page crashes with the profiler running.

@sanketsahu (Collaborator)

We can help. This would be really critical for us.

@skellock (Contributor)

^ I'm with my fellow React Native brethren, Sanket, here. I'd love to help out in any way.

One very corner-case thing I've noticed when using frozen is that the act of freezing objects seems to incur a lot of overhead, especially on Android devices.

I know of other libs that opt out of freezing in prod builds.

I'd love to see some kind of types.yolo() that would be frozen minus the freeze. Or preferably, some kind of switch to turn off freeze/isFrozen in prod?
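
A minimal sketch of that kind of switch (the devFreeze helper is hypothetical, not an MST API):

// Hypothetical helper: only deep-freeze values outside production builds,
// since Object.freeze is costly on some JS engines (notably older Android).
function devFreeze(value) {
  if (process.env.NODE_ENV === "production") return value
  if (value !== null && typeof value === "object" && !Object.isFrozen(value)) {
    Object.freeze(value)
  }
  return value
}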

@mattiamanzati (Contributor)

Yeah, that was already a TODO in the code, and it would be a good starting point for performance improvements for sure. node.ts and mst-operations.ts contain a lot of those perf TODOs in comments :)

https://github.com/mobxjs/mobx-state-tree/blob/master/src/core/node.ts#L269

Yeah, I'd opt to not freeze at all in prod mode :)

@mweststrate (Member)

Performance PRs are welcome!

Note there are quite a few // optimization comments in the code that indicate clear opportunities. Without really testing, I think the following things might help big time:

    1. Node is quite a complicated and generic object, but for immutable values (primitives) we should probably create a cheaper and much simpler version of it (ImmutableNode) that doesn't support attaching onPatch / onSnapshot handlers etc. This one might not be trivial, btw.
    2. Caching middleware listeners will probably make action invocations significantly cheaper (right now they are collected on every call). Make sure to invalidate when adding listeners or moving nodes around.
    3. Skipping freeze, at least in prod mode, as suggested by @skellock, sounds like a good idea.
    4. I think more properties of nodes can be cached (@computed), like root, environment, etc.
    5. Nodes make a lot of object allocations that are just empty arrays etc. These could be initialized as null instead of [], which makes object creation cheaper but requires a bit more pre-condition checking in the code. Shouldn't be too complicated either (see the sketch after this list).
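
As an illustration of point 5 (a sketch only; the class and field names are made up, not the actual MST internals), lazily allocating listener storage could look like this:

class NodeSketch {
  constructor() {
    // Allocate nothing up front; most nodes never get a patch listener.
    this._patchListeners = null
  }

  addPatchListener(listener) {
    if (this._patchListeners === null) this._patchListeners = []
    this._patchListeners.push(listener)
  }

  emitPatch(patch) {
    if (this._patchListeners === null) return // cheap fast path
    for (const listener of this._patchListeners) listener(patch)
  }
}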

@mattiamanzati (Contributor)

Another possible improvement would be a computed value, derived from the parents, that signals whether any ancestor has a listener for onPatch/onSnapshot/onAction, and not bubbling the event up at all when none is present! :)
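
A rough sketch of that short-circuit in plain JavaScript (the node shape used here, parent and patchListeners, is hypothetical):

// Walk up the tree and check whether any ancestor registered a listener;
// if none did, skip the bubbling work entirely.
function anyAncestorListens(node) {
  for (let current = node; current; current = current.parent) {
    if (current.patchListeners && current.patchListeners.length > 0) {
      return true
    }
  }
  return false
}

function maybeEmitPatch(node, patch, emit) {
  if (!anyAncestorListens(node)) return // nobody is listening, do nothing
  emit(node, patch) // otherwise build the JSON patch and bubble it up
}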

@evanblack commented Oct 12, 2017

So after some testing today, our issue seems to stem from how deeply nested our tree goes. We have a Stage model that has children, and those children can have children. In practice I don't think we go more than a couple of levels deep, but root creation seems to get exponentially worse with the depth of the data. I'm also not sure yet what impact the recursive types are having here.

// Button, Checkbox, Container, etc. are MST model types imported elsewhere in the app.
const ChildTypes = () => {
  return types.array(
    types.union(
      snapshot => {
        const getModel = Type => {
          const modelTypes = {
            'Button': () => Button,
            'Checkbox': () => Checkbox,
            'Container': () => Container,
            'Dropdown': () => Dropdown,
            'Embed': () => Embed,
            'Input': () => Input,
            'List': () => List,
            'ListItem': () => ListItem,
            'Radio': () => Radio,
            'Slider': () => Slider,
            'Sprite': () => Sprite,
            'Text': () => Text,
            'Video': () => Video,
            'default': () => null,
          }
          return (modelTypes[Type] || modelTypes['default'])()
        }
        return getModel(snapshot.Type)
      },
      Button,
      Checkbox,
      Container,
      Dropdown,
      Embed,
      Input,
      List,
      ListItem,
      Radio,
      Slider,
      Sprite,
      Text,
      Video
    )
  )
}

const stage = types.model('Stage', {
  children: types.late(ChildTypes)
})

@mattiamanzati (Contributor)

@evanblack Yeah, that might confirm my hypothesis about resources being spent bubbling up actions, snapshots and patches.
I am also not sure whether avoiding that long call stack with a while loop at the root might improve performance.

@sanketsahu (Collaborator)

Good to see you here too @skellock! 🙌 Let's see how we can optimize this.

I will get some time next week to look into this. Points noted, @mweststrate @mattiamanzati.

@mattiamanzati (Contributor)

@sanketsahusoft @skellock Feel free to ping me on the mobx-state-tree Gitter channel with any questions! I will be a little busy next week, but I should have time to reply :)

@mackross

Thanks for having a look at this, guys. For my use case, I got around the performance problems by managing a compact vs. expanded state (lazily setting deeply nested fields in MST). Obviously this won't work for everyone, but it let me continue on and get my app deployed 🎉
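
A hypothetical sketch of that compact-vs-expanded workaround (model and field names are made up, not from the thread):

import { types } from "mobx-state-tree"

const TrackDetails = types.model("TrackDetails", {
  lyrics: types.optional(types.string, ""),
  credits: types.optional(types.array(types.string), [])
})

const Track = types
  .model("Track", {
    title: types.string,
    // Compact by default: the expensive, deeply nested part stays absent.
    details: types.maybe(TrackDetails)
  })
  .actions(self => ({
    expand(detailsSnapshot) {
      // Only instantiate the nested subtree when the track is actually opened.
      self.details = detailsSnapshot
    }
  }))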

@mweststrate (Member)

Version 1.3.1 should be a lot faster. Closing this issue for now, although I don't doubt that many more improvements are still possible :). In that case, more measurement tests would be appreciated, so that we optimize the right thing :)

@dnakov (Contributor) commented Feb 7, 2018

@mweststrate @mattiamanzati

I've been toying around with lazily initializing MST nodes, as my app started to take over 10 s to initialize my MST store of nested hell, and I came up with a hacky solution to my problem: don't instantiate a node until something actually cares about its value.

This is still all synchronous and brought me back to load times in the hundreds of milliseconds. It makes me wonder whether this would be a better default for MST.

An array with 50000 items with a single string property:

(full) init: 2976.90185546875ms
(full) read all items: 76.2509765625ms

(lazy) init: 140.288818359375ms
(lazy) read all items: 1957.93408203125ms

Code Sandbox

Would love to hear from you as to how to achieve this properly!
Thanks!!

@dnakov (Contributor) commented Feb 11, 2018

Here's my latest implementation, now running in production. Everything seems to be working so far.

import { extras } from 'mobx'
import { types } from 'mobx-state-tree'

// Wraps an MST type so that the real node is only instantiated the first time
// something reads its value; until then, only the raw snapshot is kept around.
export default subType => {
  const BlankType = types.model({})
  class LazyType extends BlankType.constructor {
    constructor (opts) {
      super(opts)
      // Create an empty placeholder node and stash the snapshot on it.
      let instantiate = this.instantiate.bind(this)
      this.instantiate = (parent, subpath, environment, value) => {
        let node = instantiate(parent, subpath, environment, {})
        node._value = value
        return node
      }
      this.finalizeNewInstance = () => {}
      this.createNewInstance = () => ({})

      // Resolve children against the lazily materialized node.
      this.getChildNode = (node, key) => {
        return this.initNode(node).getChildNode(key)
      }
    }

    // Materialize the real node from the stored snapshot on first access.
    initNode (node) {
      let instance

      // Temporarily reset MobX's computationDepth so that instantiation is
      // allowed even when it is triggered from inside a computed value.
      let cd = extras.getGlobalState().computationDepth
      extras.getGlobalState().computationDepth = 0
      instance = subType.instantiate(
        node.parent,
        node.subpath,
        node.environment,
        node._value || {}
      )
      node.storedValue = instance.storedValue
      node.type = subType
      extras.getGlobalState().computationDepth = cd

      return instance
    }

    getValue (node) {
      return this.initNode(node).storedValue
    }

    getSnapshot (node) {
      // The snapshot is simply the stored value, untouched.
      return node._value
    }

    isValidSnapshot (value, context) {
      return subType.isValidSnapshot(value, context)
    }
  }
  return new LazyType({ name: subType.name })
}

@mweststrate (Member)

@ConneXNL I think your analysis is correct. 100K objects is definitely not what MST is designed for (see also: https://github.com/mobxjs/mobx-state-tree#when-not-to-use-mst). Even if we could fix the performance, it would be very expensive memory-wise, because we bind all actions (the self thing). That is developer friendly, but it means an additional 100K closures per action. The same holds for many other features in MST. For example, by default (at least outside prod) it will typecheck every single instance to see if the JSON is valid, etc. The impact of that grows quickly at these data volumes.

With plain MobX you can choose to store actions on the prototype instead (which is the default, unless you use action.bound), which is way more efficient. So for your use case, or at least for the data type of which you have 100K instances, MobX, maybe combined with something like serializr, indeed sounds like the much more efficient solution.
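
A minimal sketch of that contrast (the Album model is illustrative, not from the thread; plain MobX API circa v3/v4):

import { extendObservable, action } from "mobx"
import { types } from "mobx-state-tree"

// Plain MobX: setTitle lives once on Album.prototype and is shared by every
// instance, so creating 100K albums adds no per-instance closures.
class Album {
  constructor(title) {
    extendObservable(this, { title })
  }
}
Album.prototype.setTitle = action(function (title) {
  this.title = title
})

// MST: the actions initializer closes over self, so every node created from
// this model gets its own setTitle closure (plus middleware wrapping).
const AlbumModel = types
  .model("Album", { title: types.optional(types.string, "") })
  .actions(self => ({
    setTitle(title) {
      self.title = title
    }
  }))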

@KBorders01 commented Mar 4, 2018

@mweststrate Is there any plan to support storing actions on the prototype for MST in the future, or would this not be possible due to the way MST works? Alternatively, would you recommend using a single action function that takes a sub-action name as an argument and calls other statically defined functions, to reduce the number of closures per object instance?
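
A rough sketch of that single-dispatch idea (the names are illustrative, not an established MST pattern):

import { types } from "mobx-state-tree"

// Statically defined, shared mutator functions: created once, not per instance.
const mutators = {
  setTitle(self, title) {
    self.title = title
  },
  setYear(self, year) {
    self.year = year
  }
}

// One bound action per instance instead of one per method.
const Album = types
  .model("Album", {
    title: types.optional(types.string, ""),
    year: types.optional(types.number, 0)
  })
  .actions(self => ({
    dispatch(name, ...args) {
      return mutators[name](self, ...args)
    }
  }))

// Usage: Album.create({}).dispatch("setTitle", "OK Computer")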

@jayarjo (Member) commented Mar 19, 2018

@mweststrate While wrapping all actions is helpful, some apps could easily do without it (if it affects performance dramatically). Maybe make this behaviour optional/configurable instead?

@wbercx commented Mar 20, 2018

Performance is becoming an issue for me too, particularly with React Native on Android, where a Nexus 7 (2013) takes close to 16 seconds to restore the tree from snapshots (written to disk) and become interactive during a cold start. The snapshots contain ~400 models in total and are spread across 8 files, one for each domain store in my tree. It's not a large tree by any means.

@skellock (Contributor)

@wbercx even in production mode?

@wbercx commented Mar 20, 2018

@skellock Even in production mode. Even when I don't render a FlatList ;)

@robinfehr mentioned this issue Mar 21, 2018
@KBorders01 commented Mar 21, 2018

@robinfehr @mweststrate, do you have any thoughts on how we would create a prototype object for actions and where it would be stored? If not, I can try to look at the code to see how mobx does it and propose a solution.

@mweststrate (Member)

@wbercx Your project sounds like a nice real-life benchmark given its size and complexity. Could you create a sample repo or a PR with a benchmark test so that we can investigate further?

@wbercx commented Apr 2, 2018

@mweststrate Sorry Michel, completely missed this. I probably can, but I don't think I can get around to it for another month or so.

I'm not actually convinced that MST is to blame for the bulk of the loading time. I suspect a lot of it is related to Android.

It's quite hard to profile at the moment as some of the tools just don't work anymore with recent React Native versions. And as soon as you try to do any sort of remote debugging or profiling, everything's lightning fast as all of the JS then runs on the local machine rather than the device.

@mweststrate (Member)

See also: #664 (comment) for some tests

@aripekkako

We're running into performance problems too. We have an e-commerce app with quite a big product object. Creating 1,000 of the simplest kind of product objects in tests takes around 5 seconds on my MacBook Pro (Mid 2015). The end result is render blocking in the UI when loading product lists (60 objects).

Our product models generate 25+ closures (views + actions) each. What would be the best way to optimize object creation? I've gone through all the issues and docs, and the only thing I could find was @KBorders01's suggestion of using a single action function (#440 (comment)). Do you think it would be feasible to add support for prototypal inheritance of some sort, @mweststrate?

@dualscyther

I agree, storing actions on the prototype seems like the way to go. Maybe this could be made configurable or added as a new feature so that the API doesn't break?

Additionally, lazily binding the actions to each object seems like a good way to proceed in the meantime.

@mweststrate (Member)

Thanks to #810, MST 3 will be significantly faster; closing this issue for now. Best to open a new one if follow-up is needed after MST 3 is released.

@ejabu commented Jul 29, 2018

@mweststrate how are you getting a 6x bump? I'm seeing 15-25% improvement

[Dev] Initializing MST spreadsheet...: 743.10107421875ms
[Prod] Initializing MST spreadsheet...: 645.926025390625ms

I randomly tried the sandbox today; this is the result:

Creating sample data...: 0.5999999993946403ms
10000
Initializing MST spreadsheet...: 0.19999999494757503ms
Initializing mobx spreadsheet...: 0.29999999969732016ms

What confuses me are the dependencies. The sandbox states:

mobx 3.5.1
mobx-state-tree 1.2.1

I thought the performance had been improved by the MST update, but it seems that version 1 already performed well.

@markhu commented Dec 6, 2018

I'm not sure how to find out what version the playground is running, but it still seems to impose a 1000x+ speed penalty, #810 notwithstanding.

@yang commented Mar 7, 2019

@ejabu You may be looking at the CodeSandbox console, which also reports tiny numbers for me. Look at the Chrome DevTools console instead.

For posterity: I just forked @mweststrate's sandbox and upgraded to MST 3.10.2 and mobx 5.9.0. Here are the updated times; MST is apparently considerably faster than MobX for this construction benchmark. I'm not sure whether these numbers are suspicious (or even how interesting a construction benchmark is).

Initializing MST spreadsheet...: 65.73388671875ms
Initializing mobx spreadsheet...: 130.56787109375ms

Link to sandbox: https://codesandbox.io/s/6w01k206yw

@jrmyio (Contributor) commented Mar 10, 2019

Although object creation became very fast, reading is still very slow (because the lazy initialization is triggered at that point).

I edited the sandbox: https://codesandbox.io/s/l9n5mo4rjl

As mentioned before, use the Chrome DevTools console rather than the CodeSandbox console to measure. Here are my results:

Initializing MST spreadsheet...: 36.467041015625ms
Reading MST spreadsheet...: 429.496826171875ms
Initializing mobx spreadsheet...: 71.212890625ms
Reading mobx spreadsheet...: 10.56005859375ms

For some use cases fast initialization is all you need; for others, for example when you want to aggregate values across your models, you always need to add the read time on top of the creation time.

Combining both times, we are still looking at a 5x+ multiplier, and this is a very simple MST model. If your model has embedded/child models, the difference will most likely grow even further.

If I recall correctly, @mweststrate, MST is slower because it doesn't utilize JS prototypes and instead creates copies of each function? Is there any way to change or improve this?

@jayarjo (Member) commented Mar 18, 2019

@ConneXNL Making a copy of each action for every model instance is at the core of the MST architecture; changing that would require substantial refactoring.

@dualscyther

@jayarjo Not being too familiar with this codebase: is there actually a reason for creating a copy per model, or was it just something that happened at the time?

@aripekkako

@dualscyther This is by design and is discussed in this comment #440 (comment)

@jrmyio (Contributor) commented Mar 18, 2019

According to the FAQ:

"MST provides access to snapshots, patches and interceptable actions. Also, it fixes the this problem."

I wonder how much of the overhead comes from "fixes the this problem", and what exactly is the this problem? Wouldn't using = () => {} instead of () {} fix it? Or is there more to it?
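
For context, a minimal illustration of the this problem in plain JavaScript (not MST code):

class Store {
  items = []
  addItem(item) {
    this.items.push(item)
  }
}

const store = new Store()
const add = store.addItem
// add("x") would throw: `this` is lost once the method is detached.

// Arrow-function class fields fix that by binding per instance, which is
// exactly the closure-per-instance cost discussed above:
class BoundStore {
  items = []
  addItem = item => {
    this.items.push(item)
  }
}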
