MVP #1

mottosso opened this Issue Mar 18, 2016 · 0 comments





Cache, apply the cache, and store a serialised .atom file of the user-animatable attributes on nodes located in a pre-defined objectSet into the resulting cache node.

The data should then be retrievable by reading the serialised .atom file back from the node, writing it out to disk, and importing it onto the same nodes in the objectSet.
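As a rough sketch of that round trip in plain Python (no Maya here; the `node_attrs` dict stands in for a string attribute on the cache node, and all names are hypothetical — in Maya this would presumably go through `addAttr`/`setAttr` with `dataType="string"`):

```python
import os
import tempfile

# Stand-in for a string attribute on the cache node.
node_attrs = {}

def store_atom(node, atom_path):
    """Read an exported .atom file and keep its text on the node."""
    with open(atom_path) as f:
        node_attrs[(node, "atomData")] = f.read()

def restore_atom(node, out_path):
    """Write the stored .atom text back to disk, ready for re-import."""
    with open(out_path, "w") as f:
        f.write(node_attrs[(node, "atomData")])

# Round trip: the restored file matches the original export.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "export.atom")
    dst = os.path.join(d, "restore.atom")
    with open(src, "w") as f:
        f.write("atomVersion 1.0;\n")  # placeholder content
    store_atom("cacheNode1", src)
    restore_atom("cacheNode1", dst)
    with open(dst) as f:
        assert f.read() == "atomVersion 1.0;\n"
```

The point is only that the serialised text survives the trip through the node unchanged; the actual export and import would be handled by Maya's ATOM plug-in.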

By serialising and storing in the scene, we gain two things.

  1. We can associate a state with a node by storing it in a string attribute.
  2. We avoid file management by sticking the data directly into the scene file.

Considering that caches can grow large, into the megabyte range, we are likely looking at a significant file size increase. The good news is that once the cache nodes are removed, the data is removed with them. This also makes it a good place to start looking into optimisations.
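One obvious optimisation candidate (my assumption, not something decided in this issue) is compressing the serialised text before storing it: .atom files are plain, repetitive text and compress well. A minimal sketch using only the standard library, with base64 so the result stays safe to put in a string attribute:

```python
import base64
import zlib

def pack(atom_text):
    """Compress .atom text and encode it for storage in a string attribute."""
    return base64.b64encode(
        zlib.compress(atom_text.encode("utf-8"))
    ).decode("ascii")

def unpack(packed):
    """Recover the original .atom text from a packed string."""
    return zlib.decompress(base64.b64decode(packed)).decode("utf-8")

# Repetitive text, loosely resembling serialised animation data.
sample = "setAttr .translateX 1.0;\n" * 1000
packed = pack(sample)

assert unpack(packed) == sample        # lossless round trip
assert len(packed) < len(sample)       # substantial size reduction
```

The base64 step costs ~33% on top of the compressed size, but keeps the stored value printable; whether that trade-off is worth it depends on how Maya handles binary data in string attributes.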


When working on simulations, it is important to relate a cache to how it was made (1).

Traditionally, this has required great discipline: saving a scene for each cache made and somehow associating the files with the scene that created them, typically by name, e.g. cache_v058 comes from scene_v058.

Losing track of the source scene, or not having one in the first place, has disastrous consequences: you can no longer perform tweaks to the version that ends up being preferred.

With memorycache, each cache preserves the exact circumstances under which it was created, without user intervention, and a link between cache and scene is maintained automatically.

(1) The reason being that you might be asked to continue working on v65, even though v68 is your latest.
