
[MRG] Caching and small run preparation improvements #832

Merged
merged 45 commits into master from stateupdate_caching on Sep 8, 2017

Conversation

mstimberg
Member

I got around to working a bit on the caching stuff we have been discussing for a long time (in particular, see #124), to avoid the overhead of repeated preparations when doing multiple runs.

For somewhat extreme cases (running a short simulation for a HH model in a loop), this PR has a great impact on performance (e.g. running a 20ms simulation of 100 HH neurons took ~2s instead of ~9s). Simulations with a single run are only affected in a minor way, and of course all this is only relevant for the preparation period, so it only matters when the run time for your actual simulation is short.

Contrary to the title of #124, I did not use joblib to do the caching, because I did not really see what it would buy us. The caching is only done in memory with simple dictionaries; I did not try to implement caching on disk (for which joblib would of course be helpful). The main reason is that I see a lot of potential problems (e.g., we fix a bug in a state updater or in the code generation, and users still access an old cached version) and not much benefit.

Caching is done on multiple levels. The lowest levels are helpful even for single runs:

  • sympy_to_str and str_to_sympy cache their results (see the sketch after this list) -- the mapping between the two representations never changes. I see only one hypothetical problem: if the user changes a function in DEFAULT_FUNCTIONS so that it maps to another sympy name, this would not be taken into account. But I don't see why a user would do this in the first place, and even less so between two runs. I also made some minor tweaks to the two functions, making them slightly faster for non-cached calls.
  • Equations.get_substituted_expressions is cached. This is called several times during the preparation period and since Equations are immutable by design, caching is safe here.
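
A minimal sketch of this kind of memoization, assuming a process-lifetime dictionary cache (names are illustrative; _convert is a hypothetical stand-in for the actual expensive parsing):

_str_to_sympy_cache = {}

def str_to_sympy(expr_str):
    # Return the previously computed result for an already-seen string
    if expr_str in _str_to_sympy_cache:
        return _str_to_sympy_cache[expr_str]
    result = _convert(expr_str)  # hypothetical stand-in for the real conversion
    _str_to_sympy_cache[expr_str] = result
    return result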

Then there is caching on the abstract code level:

  • The state updater application (same equations, same variables, same method) is cached in apply_stateupdater, which also means that it remembers which method was chosen if none was specified explicitly. For complex models and/or integration methods, this part is where most of the time was spent during run preparation, so this has a big influence on the run time for repeated runs. To make this work, I had to make CodeString, SingleEquation, and Equations properly comparable/hashable, so that they compare equal when their content is equal (it probably would have worked with a simpler identity-based comparison as well, since we don't recreate these objects normally, but well).
  • The transformation of code into a series of statements (i.e. make_statements) is cached as well. Since we introduced optimizations, this has become a very costly operation, too.

Finally, there is caching on the target-specific code generation level: if a CodeObject is created for the same code, same variables, same template, same target, etc., then it is re-used instead of re-created. Re-creating these objects was also taking a significant time. However, this caching was a bit trickier to get right, since previously the transformation from the variable dictionary to the namespace used for the CodeObject was done during object initialization. We have to do this for every run, though, because things like the num_... variables for synapses have to be updated in case they changed between runs. This resulted in the changes you see in the last commit.

I tried to err on the side of caution, and much of this relies on the way Variable objects are compared/hashed. Standard variables (arrays) etc. are compared by identity -- if the object is the same, it is considered the same from the code generation point of view (of course the values it refers to may have changed, but our generated code only references them). The same is true for user-defined functions; however, there it comes with a very minor disadvantage: user-defined Python-only functions (i.e., just defined with check_units) will create a new Function object every time, so code referring to them will not benefit from caching. I don't think that is a problem that needs fixing, though. Finally, Constant objects are compared by their actual value -- we re-create them at every run and codegen might have hard-coded them into the code, so we want to make sure not to hide changes in constant values by caching, while not making our caching completely useless at the same time :)
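
Schematically, the two comparison strategies look like this (illustrative classes, not the actual brian2 implementation):

# Identity-based: two array variables are "the same" for code generation
# only if they are literally the same object
class ArrayVariableSketch(object):
    def __eq__(self, other):
        return self is other
    def __ne__(self, other):
        return not self == other
    def __hash__(self):
        return id(self)

# Value-based: constants are re-created at every run and may be hard-coded
# into generated code, so their value has to be part of the comparison
class ConstantSketch(object):
    def __init__(self, name, value):
        self.name = name
        self.value = value
    def __eq__(self, other):
        return (isinstance(other, ConstantSketch) and
                (self.name, self.value) == (other.name, other.value))
    def __ne__(self, other):
        return not self == other
    def __hash__(self):
        return hash((self.name, self.value))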

That's it, it seems to be working fine as far as I can tell, so good to merge from my side.

By design, these objects are immutable, therefore we can use them e.g. as keys in dictionaries. This will be useful for caching.
Now the cache uses the actual object identity (we normally pass references to the objects around) for Variables, except for Constant objects (which will be re-generated for each run)
This avoids having a transformation string->string that then has to be eval'ed
…ect` might be reused due to caching, but e.g. the size of a dynamic variable might have changed)
@mstimberg
Member Author

That's it, it seems to be working fine as far as I can tell, so good to merge from my side.

Hah, the latest mail on the mailing list made me realize there's actually a problem with my current implementation that we need to fix first: all the caches hold strong references to the Variable objects, so this means that memory will not be freed if you get rid of objects...

@mstimberg changed the title from "[MRG] Caching and small run preparation improvements" to "[WIP] Caching and small run preparation improvements" on Apr 5, 2017
@thesamovar
Member

This is the sort of change that needs careful thought and testing, so let's not rush but... great work! It will be fantastic to have this in Brian.

I would argue that there is a benefit to caching to disk (e.g. people who run a script multiple times for parameter search rather than doing multiple run calls from within a single run of a script), and we can potentially make this safe. In the key for the (on-disk) dictionary, we can include Brian's version number (or better, a hash of the source code, if this is reasonably quick to compute once at the beginning). Another alternative is to include in the install script something that deletes the cache. There'll be some users who could shoot themselves in the foot, i.e. those who run from git and have their python path point directly to the git directory rather than installing. But that's probably only developers!
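
A hypothetical sketch of such a versioned key (not part of this PR; disk_cache_key is an invented name):

import hashlib

import brian2

def disk_cache_key(abstract_code):
    # Mixing Brian's version into the key means that upgrading Brian
    # automatically invalidates all old on-disk cache entries
    h = hashlib.sha1()
    h.update(brian2.__version__.encode('utf-8'))
    h.update(abstract_code.encode('utf-8'))
    return h.hexdigest()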

The other really huge benefit is to standalone mode. There are a few things we could do to improve this. For example, if we detect that the code we are about to write to a file is the same as the code already in that file, we can just choose not to write to it (see the sketch below). Like that, the compiler knows that the file is already compiled and doesn't have to recompile it. Hmm, not sure how much benefit we get from that though, given that the objects.h file is the most likely to change, and almost all files depend on it. Maybe there's a way to reorganise to avoid this problem though? Presumably the current caching of the code generation pathway will already help for standalone. Since we don't yet cache to disk this is of relatively minimal use right now, but with caching to disk we'd already get some benefit there. Combining these different bits of caching could make a huge difference to standalone.
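
As a minimal sketch of the idea (write_if_changed is an illustrative name; the actual brian2 implementation is linked further down):

import os

def write_if_changed(filename, code):
    # If the file already contains exactly this code, leave it (and its
    # timestamp) alone so the build system does not recompile it
    if os.path.isfile(filename):
        with open(filename) as f:
            if f.read() == code:
                return
    with open(filename, 'w') as f:
        f.write(code)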

Last (technical) point: for DEFAULT_FUNCTIONS, if there is no conceivable need to change it, and changing it can only cause problems, why not make it a frozen dict after it's created?
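
For illustration, one way to freeze it in Python 3 would be a read-only mapping view (an assumption of how it could be done, not something brian2 currently does):

import types

_functions = {'exp': object(), 'sin': object()}  # placeholder contents
DEFAULT_FUNCTIONS = types.MappingProxyType(_functions)

# DEFAULT_FUNCTIONS['cos'] = object()  # would raise TypeError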

@thesamovar
Member

Oh, and it might take me a bit of time to get around to testing and reviewing this because it's a biggie, but it's also important so will try to get to it as soon as possible.

@mstimberg
Member Author

This is the sort of change that needs careful thought and testing, so let's not rush

Yes, agreed! I will probably also add a few more tests with potential problems in mind (changing stuff between runs). Apart from that, there's the memory problem and apparently the test server failed for Python 3 and standalone, so there's still some work to do ;-)

I would argue that there is a benefit to caching to disk (e.g. people who run a script multiple times for parameter search rather than doing multiple run calls from within a single run of a script), and we can potentially make this safe.

I have to say I am quite hesitant to add on-disk caching for now. This is really a non-trivial problem to get right:

  1. it has to be safe for multiprocessing (joblib could take care of this, though)
  2. Brian's version/source code is potentially not enough, we'd also need to take into account dependencies like sympy (an install script would not help here)
  3. Packages like Brian2GeNN would also need to be taken into account, or we'd need a whole system where packages can register their relevance for caching, or we'd need to version CodeObject/Generator classes or something like that

Standalone is also a whole different beast; for stuff like parameter explorations, I think it would make sense there to have a more explicit system that does not rely on automatic caching. Something like the parameter exploration stuff we discussed previously (run an already compiled simulation but inject different parameters).

How about first getting the memory-based caching right and giving it some exposure? We can then tackle the disk-based cache as a new issue.

For example, if we detect that the code we are about to write to a file is the same as the code already in that file, we can just choose not to write to it.

Dan-of-the-past already implemented this ~3 years ago 😄 https://github.com/brian-team/brian2/blob/master/brian2/devices/cpp_standalone/device.py#L112

For DEFAULT_FUNCTIONS, if there is no conceivable need to change it, and changing it can only cause problems, why not make it a frozen dict after it's created?

Possibly. I wonder whether adding new functions would not be something reasonable to allow. If it is done at the beginning of a script, that should not lead to any problem (assuming only in-memory caching, of course). Another easy fix would be to simply use the DEFAULT_FUNCTIONS dictionary as part of the cache key, so changes would invalidate the cache automatically.

@thesamovar
Member

Brian's version/source code is potentially not enough, we'd also need to take into account dependencies like sympy (an install script would not help here)

This probably shouldn't matter though, right? In this case, certainly a version number for sympy would do the trick.

Packages like Brian2GeNN would also need to be taken into account, or we'd need a whole system where packages can register their relevance for caching, or we'd need to version CodeObject/Generator classes or something like that

Agreed that we don't want to get into that! Is it definitely necessary though, since those libraries won't use the caching mechanism by default? Or maybe they will, because they'll be called by Brian, which will do the caching.

Another idea. We make the caching to disk off by default and you need to explicitly enable it, either with a preference or maybe (even safer) with something in the code (e.g. set_brian_disk_cache('directory_name')). Like that you're reminded about it. We could then transition to default-on caching after some testing and resolving some of these issues. The benefit is that this gives a way for people who want to do multiple runs to get the speed improvements, even if it's off by default.

Standalone is also a whole different beast; for stuff like parameter explorations, I think it would make sense there to have a more explicit system that does not rely on automatic caching. Something like the parameter exploration stuff we discussed previously (run an already compiled simulation but inject different parameters).

Yes, but if we could - at relatively low effort - get an intermediate solution that improves it a bit, that would still be good, because the compilation times for standalone can be pretty long and negate a lot of the benefit.

Dan-of-the-past already implemented this ~3 years ago

I'm getting to be like one of those old men who keep telling the same stories as if you'd never heard them before.

In any case, apparently that alone isn't enough to stop recompilation. I might think about that idea for optimising the header files to save on recompilations.

Another easy fix would be to simply use the DEFAULT_FUNCTIONS dictionary as part of the cache key, so changes would invalidate the cache automatically.

Yeah that's probably good, although you don't necessarily want to have to hash that dictionary every time you access the cache?

@mstimberg
Member Author

Brian's version/source code is potentially not enough, we'd also need to take into account dependencies like sympy (an install script would not help here)

This probably shouldn't matter though, right? In this case, certainly a version number for sympy would do the trick.

Sure, that was just to point out that there are many potential pitfalls that are not obvious at first sight.

Agreed that we don't want to get into that! Is it definitely necessary though, since those libraries won't use the caching mechanism by default? Or maybe they will, because they'll be called by Brian, which will do the caching.

Yes, the way it is currently implemented, a GeNNCodeObject for example would be cached by Brian.

Another idea. We make the caching to disk off by default and you need to explicitly enable it, either with a preference or maybe (even safer) with something in the code (e.g. set_brian_disk_cache('directory_name')). Like that you're reminded about it.

Something like this might be a good solution (or a preference or something like that). However, I'd still prefer to finish/merge the memory-based caching before going the next step. I also wonder whether a proper explicit Network.write_to_disk and Network.load_from_disk (which would store all the caches on disk as well) would not be worth it. For runtime mode, the caching does not help you much for repeated runs if your code spends a lot of time generating synapses, for example. Ok, you can do that part with store/restore already actually. It still feels like a waste to do all the object creation etc. again, but maybe this is not really an issue if all the computationally demanding stuff is cached.

In any case, apparently that alone isn't enough to stop recompilation. I might think about that idea for optimising the header files to save on recompilations.

Right, that's actually a bug. It should not recompile if you run exactly the same script twice (assuming that you used set_device(...) or build(...) with clean=False, which is not the default -- maybe it should be?). I think the only reason why it does this right now is that in some places we did not fix the sorting of elements, so the contents of a dictionary are printed in an order that can change between runs, triggering the re-compile. As soon as a single file changes, we have to link all the object files again, which seems to take most of the time of the compilation process.
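
For illustration, the fix boils down to iterating dictionaries in a fixed order whenever their contents end up in generated files:

variables = {'v': 'double', 'ge': 'double', 'N': 'int'}  # example contents

# Sorting makes the generated output byte-identical across runs, so
# unchanged files keep their timestamps and are not recompiled
declarations = ['%s %s;' % (dtype, name)
                for name, dtype in sorted(variables.items())]
print('\n'.join(declarations))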

Yeah that's probably good, although you don't necessarily want to have to hash that dictionary every time you access the cache?

I was worried about this for the Variables dictionary as well, but hashing is really fast. I just tested hashing DEFAULT_FUNCTIONS and it takes 1.3us, so I think this is negligible.
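
A quick check along those lines (assuming the key is built as a hashable tuple of the dictionary's items):

import timeit

from brian2.core.functions import DEFAULT_FUNCTIONS

# Build a hashable stand-in for the dictionary and time hashing it
key = tuple(sorted(DEFAULT_FUNCTIONS.items()))
per_call = timeit.timeit(lambda: hash(key), number=100000) / 100000
print('%.2f us per hash' % (per_call * 1e6))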

@thesamovar
Member

Something like this might be a good solution (or a preference or something like that). However, I'd still prefer to finish/merge the memory-based caching before going the next step.

That seems reasonable. Maybe let's not close #124 when this is merged then?

I also wonder whether a proper explicit Network.write_to_disk and Network.load_from_disk (which would store all the caches on disk as well) would not be worth it. For runtime mode, the caching does not help you much for repeated runs if your code spends a lot of time generating synapses, for example. Ok, you can do that part with store/restore already actually. It still feels like a waste to do all the object creation etc. again, but maybe this is not really an issue if all the computationally demanding stuff is cached.

Indeed! That would be even better.

Right, that's actually a bug.

OK, will open a separate issue for that. Yeah, maybe clean=False by default is a good idea!

Let me know when you've got this to a state where it's mergeable and I'll do a more detailed review?

@mstimberg
Member Author

That seems reasonable. Maybe let's not close #124 when this is merged then?

Sure. It will be closed automatically due to my issue comment, but we can of course reopen it.

Let me know when you've got this to a state where it's mergeable and I'll do a more detailed review?

Will do!

@mstimberg
Member Author

I reverted the caching of CodeGenerator.translate, so please have another look!

@mstimberg changed the title from "[WIP] Caching and small run preparation improvements" to "[MRG] Caching and small run preparation improvements" on May 2, 2017
@thesamovar
Member

I haven't forgotten! I promise.

@thesamovar
Member

I'm going to start reviewing this now. Might take a few days for me to get it done. Also, I'm planning to ask a few more questions about particular lines of code than I normally would so as to be sure that everything is safe.

@thesamovar (Member) left a comment

It all looks good. My main concern is still the creation of the keys for caching. I've put in a few suggestions for how this might be automated to make it safer, which we can discuss. There are also a couple of questions about particular code changes. All in all, I think this is pretty close to ready to go. All the tests (long and standalone) pass on my machine too.

cache_key = (code, _hashable(variables), dtype, optimise, blockname)
if cache_key in _make_statements_cache:
    return _make_statements_cache[cache_key]

thesamovar (Member)

Idea for improvement. What about making this into a decorator that puts each argument through _hashable? Like that, if we ever change the signature of this function it automatically ensures that the cache is updated too. Also, might save on copy/pasted code elsewhere too.
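
Roughly, such a decorator could look like this (a sketch; _hashable is the helper already used in the cache-key snippets in this review):

import functools

def cached(func):
    cache = {}

    @functools.wraps(func)
    def wrapper(*args):
        # Every argument goes through _hashable, so adding an argument to
        # the function automatically makes it part of the cache key
        key = tuple(_hashable(arg) for arg in args)
        if key not in cache:
            cache[key] = func(*args)
        return cache[key]
    return wrapper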

mstimberg (Member Author)

Ok, I did this (I'll push the commit in a minute) -- we still have to be careful whenever we add arguments and think about how they relate to the caching, but I think the most common issue that we could have is that the cache no longer works.

thesamovar (Member)

Yeah that's ideal if we make a change and it causes an error rather than silently making a mistake.

@@ -274,6 +275,9 @@ def __repr__(self):
constant=repr(self.constant),
read_only=repr(self.read_only))

_state_tuple = property(lambda self: (self.dim, self.dtype, self.scalar,
                                      self.constant, self.read_only,
                                      self.dynamic, self.name))

thesamovar (Member)

Similar comment to above: is there any way we can make this safer against our own future changes by automating the definition of _state_tuple?

mstimberg (Member Author)

I don't think we can automate the _state_tuple here: its very point is to make the distinction between attributes that matter for caching and those that don't. E.g., for arrays, all describing attributes (dtype, dim, etc.) matter, but the value does not. For a Constant, on the other hand, the value matters (because it might be used with its value in the generated code).

thesamovar (Member)

You mention this a bit later, but what about a system where all attributes are assumed to matter unless we explicitly exclude them? Like that, if we change the class in the future we are forced to make a decision about the new attributes. I feel like this isn't a lot more work and minimises the chance of silent errors in the future.

mstimberg (Member Author)

Ah, sorry, overlooked this comment... Hmm, I guess we could use __dict__ to get all attributes and do something along these lines. I'll give it a go!

for name, value in code_object.variables.iteritems():
    if isinstance(value, np.ndarray):
        self.static_arrays[name] = value

thesamovar (Member)

Is the idea here that previously we looked for static arrays, but since the only way they can be accessed is via a function, we can just insert the function namespaces instead?

mstimberg (Member Author)

More or less: it is part of a fundamental change in how function namespaces (the most important use case is TimedArray) were added. Previously, we added them to the original variables dictionary at some point. This screwed up caching, because the variables were used as part of the cache key earlier and therefore should not change. For this specific bit of code, we therefore no longer search for those arrays in the variables dictionary, but explicitly add them from the Function.

thesamovar (Member)

OK!


#: Set of identifiers in the code string
self.identifiers = get_identifiers(code)

code = property(lambda self: self._code,
                doc='The code string')

thesamovar (Member)

Is there a reason for changing code from an attribute to a property?

mstimberg (Member Author)

It's just to make the immutability of the CodeString very clear: code is a read-only attribute and should never be changed. We use this idiom in a few places and it seemed to make sense to use here as well.

thesamovar (Member)

OK!

self.expr, tuple(self.flags)),
doc='A tuple representing the full state of this '
    'object, used for hashing and equality '
    'testing.')

thesamovar (Member)

Same comment again. Here we could automate the creation of _state_tuple from the arguments passed to __init__: either put a decorator there, or have it derive from some class that automates this. This solution could probably also be used for Variable, actually.

mstimberg (Member Author)

As mentioned above, I don't think we can automate this in a straightforward way (we could have a function where we define which attributes should be considered for caching and which not, but then I don't think this would make things clearer).

thesamovar (Member)

As above - it's not about making it clearer, it's about protecting ourselves from our future selves. I think this is particularly important for this caching stuff, because it would be so easy to make a mistake here that we have to go to some effort to avoid doing so.

mstimberg (Member Author)

I see your point, but any change to those core classes has to be done very carefully -- there are a lot of other ways to break this apart from the _state_tuple. I don't see a straightforward way to make this specific thing safer with regard to future changes; IMO, having extensive tests is the most important thing...

elif op_name == 'Not':
    return sympy.Not(self.render_node(node.operand))
else:
    raise ValueError('Unknown unary operator: ' + op_name)

thesamovar (Member)

This is all rewritten to avoid going to a string and then turning it into a sympy object, right? That seems sensible, but I'm just wondering why it was necessary.

mstimberg (Member Author)

Yes, before we took a string, parsed its AST, created a string from it, and then eval'ed that, which again parsed its AST and created the sympy objects. Since this stuff is called a lot of times (less with the caching now, though), it made a measurable time difference. It also feels more logical to do it this way instead of making the string roundtrip. Oh, and I think it might even prevent some loss of precision due to converting floats to strings and back, but I'm not sure that was actually an issue.
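
A toy version of the direct AST-to-sympy rendering (much simpler than the real renderer, just to illustrate the approach):

import ast

import sympy

def render_node(node):
    # Turn a parsed Python expression directly into sympy objects, without
    # rendering back to a string and eval'ing it again
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
        return sympy.Add(render_node(node.left), render_node(node.right))
    elif isinstance(node, ast.BinOp) and isinstance(node.op, ast.Mult):
        return sympy.Mul(render_node(node.left), render_node(node.right))
    elif isinstance(node, ast.Name):
        return sympy.Symbol(node.id)
    elif isinstance(node, ast.Num):  # ast.Constant in current Python versions
        return sympy.Number(node.n)
    else:
        raise ValueError('Unsupported node: ' + node.__class__.__name__)

expr = render_node(ast.parse('v + g_e * 2.0', mode='eval').body)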

thesamovar (Member)

Great!

@@ -148,7 +141,11 @@ def sympy_to_str(sympy_expr):
str_expr : str
A string representing the sympy expression.
'''

if sympy_expr in _sympy_to_str_cache:
    return _sympy_to_str_cache[sympy_expr]

thesamovar (Member)

How safe is this in the case of a change in the version of sympy?

mstimberg (Member Author)

Not sure what you mean -- this is still all about caching in memory, so I don't see how version changes can come into play.

thesamovar (Member)

Ah good point!

cache_key = (equations, _hashable(variables), method_key)
if cache_key in StateUpdateMethod._state_update_cache:
    return StateUpdateMethod._state_update_cache[cache_key]

thesamovar (Member)

This looks like another case where wrapping __init__ or the class could work to automate this.

mstimberg (Member Author)

yep, see my upcoming commit.

import collections


class _CacheStatistics(object):

thesamovar (Member)

Do we use this, or is it just there for potential future performance measurement/debugging?

mstimberg (Member Author)

I used this just for seeing that the caching actually worked. We could potentially use this in tests I guess?

thesamovar (Member)

Yep - fine to leave it there anyway. We might find a use for it one way or the other.

@thesamovar
Member

That's all looking great - I would also add the system we mentioned for marking some variables/attributes as not part of the cache for Variable and derivatives (maybe we can add arguments to the @cached decorator to list ignored function arguments?), and then it's probably good to go. I already like the new version with your latest commit much better - I find it much cleaner.

@@ -165,6 +164,16 @@ def __init__(self, name, dimensions=DIMENSIONLESS, owner=None, dtype=None,
#: Whether the variable is an array
self.array = array

#: Mark the list of attributes that should not be considered for caching
#: of state update code, etc.
self._cache_irrelevant_attributes = ['owner']

thesamovar (Member)

This could be a class attribute maybe? Not sure if it's worth doing but maybe slightly more logical.

mstimberg (Member Author)

Indeed! It would even simplify the code a bit, since it wouldn't appear in __dict__ itself!

@property
def _state_tuple(self):
    return tuple(value for key, value in self.__dict__.iteritems()
                 if key not in (self._cache_irrelevant_attributes +

thesamovar (Member)

Do we need to sort the items in a dict or is the order deterministic?

mstimberg (Member Author)

It shouldn't matter here since the order is arbitrary but deterministic, but I'm not 100% sure there are no corner cases. So, using sorted sounds reasonable.

hashing and equality testing.'''
return tuple(value for key, value in self.__dict__.iteritems()
             if key not in (self._cache_irrelevant_attributes +
                            ['_cache_irrelevant_attributes']))

thesamovar (Member)

This could potentially be refactored since it's the same code as before?

mstimberg (Member Author)

I considered it, but given that the function is almost trivial, I'm not sure it's worth introducing a common super-class for SingleEquation and Variable given that the classes don't have much in common... Hmm, maybe something like CacheKey could make sense, I'll have a look.

thesamovar (Member)

Yeah I was thinking a sort of mixin class.

@thesamovar
Member

I like the new changes, this is feeling much safer and cleaner to me. Hope you didn't find that too tedious to do. I made a couple of non-essential suggestions. Let me know when you're ready and I can do a final review.

@mstimberg
Member Author

Ok, moved stuff into a CacheKey mixin class -- I agree that it's cleaner this way! Ready for final review I guess :)
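
In outline, such a mixin looks something like this (a sketch of the idea, not the verbatim brian2 code):

class CacheKey(object):
    # Subclasses list attribute names that must not enter the cache key
    _cache_irrelevant_attributes = set()

    @property
    def _state_tuple(self):
        return tuple(value for key, value in sorted(self.__dict__.items())
                     if key not in self._cache_irrelevant_attributes)

    def __eq__(self, other):
        return (type(self) is type(other) and
                self._state_tuple == other._state_tuple)

    def __ne__(self, other):
        return not self == other

    def __hash__(self):
        return hash(self._state_tuple)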

@thesamovar (Member) left a comment

Code looks great and all tests passing on my machine. Good to go! Hurray! Sorry for the epic delay in getting around to this.

@mstimberg
Member Author

No worries, it was quite a substantial change and I think the code improved quite a bit during the discussion.
I might delay the merge a bit until @CharleeSF's PR is in; I could imagine that it leads to quite a few merge conflicts...

@thesamovar
Member

Sure!

@mstimberg mentioned this pull request on Sep 5, 2017
@mstimberg merged commit e1fa9e9 into master on Sep 8, 2017
@mstimberg deleted the stateupdate_caching branch on September 8, 2017 09:41