Modifying presynaptic variables doesn't work using numpy #435
This is obviously a serious bug. The "good" news: this is not a regression; the same problem occurs with the first alpha version.
This is actually quite tricky. We have a complicated algorithm for repeated indices on the post-synaptic side, but nothing equivalent for the pre-synaptic indices. To do this properly, we'd need to check whether pre- or post-synaptic variables are used in the code and accordingly partition synapses according to pre- or post-synaptic indices, or both. This might get quite complicated... Probably it is better to go directly for a solution using `add.at`.

I think we could come up with something reasonable that covers the corner cases but is not necessarily the most performant solution possible, e.g. we could make new copies whenever a variable is used on the RHS, so the generated code would become something like:

```python
A_pre = _array_neurongroup_A[_presynaptic_idx]
w = _array_synapses_w[_idx]
# This operation marks A_pre as "dirty", so it has to be reloaded when it is used later
add.at(_array_neurongroup_A, _presynaptic_idx, A_pre + w)
A_pre = _array_neurongroup_A[_presynaptic_idx]
add.at(_array_synapses_w, _idx, A_pre)
```

What do you think?
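For reference, the core numpy behaviour under discussion can be demonstrated directly; the variable names below are illustrative, not Brian's actual generated code:

```python
import numpy as np

# Ten synapses all originating from pre-synaptic neuron 0
A = np.zeros(3)
pre_idx = np.zeros(10, dtype=int)
w = np.ones(10)

# Buggy vectorisation: fancy indexing with += buffers the result,
# so each repeated index is applied only once
A_wrong = A.copy()
A_wrong[pre_idx] += w
print(A_wrong[0])  # 1.0

# Correct vectorisation: ufunc.at performs an unbuffered in-place
# operation, accumulating over repeated indices
A_right = A.copy()
np.add.at(A_right, pre_idx, w)
print(A_right[0])  # 10.0
```

This buffering of fancy-indexed in-place operations is documented numpy behaviour, which is why `np.add.at` is needed whenever the index array may contain repeats.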
Does this problem also occur in Brian 1? If so, do we need to also fix the problem there and/or issue an advisory warning?
I agree it would be good to get a general solution using `add.at`.
Who should work on this, by the way? We need to decide soon because I'm on holiday next week and may not have internet access, but I have some long train journeys at the beginning and end where I could work on it.
Is this actually ill-defined? The approach I outlined above would interpret this as:

```python
v = _array_neurongroup_v[some_indices]  # this is a copy
add.at(_array_neurongroup_v, some_indices, f(v + w))
```

Isn't this the correct interpretation, or am I missing something?
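A minimal self-contained sketch of this gather-copy-then-scatter interpretation, with a hypothetical elementwise function `f` and made-up data (none of these values come from the original discussion):

```python
import numpy as np

def f(x):
    # hypothetical elementwise function standing in for the RHS expression
    return 2 * x

v_arr = np.array([1.0, 2.0, 3.0])
some_indices = np.array([0, 0, 2])  # note the repeated index 0
w = np.array([10.0, 20.0, 30.0])

# The gather produces a copy, so the values read here are the
# pre-update values even after the in-place scatter below
v = v_arr[some_indices]
np.add.at(v_arr, some_indices, f(v + w))
print(v_arr)  # [65.  2. 69.]
```

Both contributions to index 0 (2*(1+10) and 2*(1+20)) are accumulated, which is exactly what a per-synapse loop would compute.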
I'd be happy for you to work on this, since you already did quite a bit of work in this direction.
Oh, and about Brian 1: it suffers from the same problem... For that, I'd probably just raise an error if pre-synaptic variables are changed in synaptic pre/post statements (and we should probably do a bug-fix release at some point...).
The new/old interpretation is one possible interpretation, and it would work for numpy, but it wouldn't work for weave (without having to make a copy of the arrays). I'm also not sure that it's consistent with our general semantics for abstract code, which is that a reference to a variable gives its current value.

OK for Brian 1: the error it raises could say "Please use Brian 2" once we've solved it for Brian 2. We should do an announcement though (even though it's painful to do so), maybe today actually.
Ah ok, now I'm getting it -- I didn't really have the situation of repeated indices in mind, but this is of course what this is all about. I'm happy with any solution that correctly addresses the most common use case and in some way deals with every other case. From my side, I'd even be happy with a solution that just raises an error (instead of doing an inefficient loop). Can we find some general criterion for statements that are unproblematic? We could be very strict and only allow statements that do not depend on the order of evaluation (e.g. the order of synapses).
Given that the bug has been around for quite a while and that changing pre-synaptic variables in a synaptic operation is quite rare, I'm not sure we really need an emergency announcement...
We have two issues: one is whether a statement is meaningful at all, the other is whether or not it can be efficiently implemented. One of the complexities is that people might write the same update in several equivalent forms that are not equally easy to analyse.

I do think we should make an announcement about the bug, since it silently gives wrong results. We can say that it is unlikely to affect anyone. We've done this twice before: https://groups.google.com/forum/#!searchin/briansupport/bug$20dan$20goodman$20-%22re%22/briansupport/R6Bs4EDxuNM/reMIFFLkLNQJ and https://groups.google.com/forum/#!searchin/briansupport/bug$20dan$20goodman$20-%22re%22/briansupport/vOGwh3YHrbY/Hey9qXpNA2wJ
I'll do the announcement later unless you have strong feelings against it. |
No, of course not, please go ahead. |
Done! I meant to add: I don't think we should check the indices. |
OK, I'm trying to work out what the conditions should be for meaningfulness and for vectorisability with `add.at`.

Let I_syn be the synaptic indices, and I_pre, I_post the corresponding pre- and post-synaptic neuron indices. The definition of meaningfulness of an abstract code block is that permuting I_syn doesn't change the results. Now look at each line of abstract code, which has the form `var op expr`.

If var is a synaptic variable, then expr must be made up of constants, synaptic variables, and non-synaptic variables whose values are permutation-independent. For example, a non-synaptic variable that is read but never written to is permutation-independent.

If var is a non-synaptic variable, there are two cases depending on the operation. First case: the operation is commutative and associative (e.g. `+=`). In this case it can be vectorised with `add.at`, and the result is permutation-independent.

Second case: the operation is non-commutative (i.e. op is `=`). In this case, expr needs to consist only of constants or variables that have the same index as var and are permutation-independent. The code is then meaningful (it's permutation-independent because it only depends on indices belonging to the variable), and it can be vectorised in a separate block from the synapses. This case also leaves var permutation-independent.

I think that covers all cases, but the logic is quite difficult to work through, so I'd appreciate it if you could go through it carefully and check whether you agree. If I'm right, then I think there is a reasonably straightforward algorithm for checking whether code is meaningful, and for vectorising it based on the reasoning above. Do we want to allow for optimisations in the case that something can be efficiently vectorised but isn't meaningful?
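The permutation criterion can be checked empirically. The sketch below uses made-up synapse data and the single-loop reference semantics mentioned later in the thread: a commutative update (`+=`) gives the same result under any synapse ordering, while a plain assignment with repeated post-synaptic indices does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# 8 synapses targeting two post-synaptic neurons (hypothetical data)
post_idx = np.array([0, 1, 0, 1, 0, 1, 0, 1])
w = rng.standard_normal(8)

def apply_add(v, order):
    # reference semantics: a single loop over synapses, in the given order
    for s in order:
        v[post_idx[s]] += w[s]
    return v

def apply_assign(v, order):
    for s in order:
        v[post_idx[s]] = w[s]  # non-commutative: the last writer wins
    return v

forward = np.arange(8)
backward = forward[::-1]

# '+=' is permutation-independent: any synapse order gives the same result
same = np.allclose(apply_add(np.zeros(2), forward),
                   apply_add(np.zeros(2), backward))

# '=' with repeated indices is not: the result depends on the order
different = not np.allclose(apply_assign(np.zeros(2), forward),
                            apply_assign(np.zeros(2), backward))

print(same, different)  # True True
```

Only the first kind of statement can be handed to `add.at` without further analysis; the second kind needs the restrictions described above.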
There are some cases that are missed out by the analysis above. So the question is: do we want to worry about any of this or not? My feeling is no; the remaining examples are contrived, and it's fine to raise an error for them.

So I think the reasoning in my previous post gives the condition for vectorisability of meaningful code using `add.at`.
I need a bit of time to think about this in detail. Actually, maybe something like a wiki page might be better to put the information together? For now, I have the feeling that it might be better to come up with something less general and more restrictive, and rather enhance the scope when we find new use cases (but then again, one of the ideas behind Brian is that we support things that we didn't think of, so...).

A more general thought: maybe we should allow statements that depend on the order of synapses? I don't quite remember where, but I do remember that we once had a discussion about the order of synapses and decided to follow some simple rules (something like: connect statements with concrete source/target arrays will create synapses in the specified order, statements with string expressions will create synapses in some specified order (ordered by pre or by post), and several connect calls will simply result in concatenated synapses). This means that abstract code is slightly less abstract than we want it to be in general, and that synaptic propagation cannot be easily parallelised for OpenMP/GPU, but maybe this is more robust and easier to explain than a complex set of rules such as the ones you outline above? The reference implementation would be a single loop over the indices, i.e. what weave does. We'd make sure that numpy matches this.

I'm not sure about this, but I have brian2hears in mind, where linked variables are used in a way that depends on the order of execution, which is not well-defined according to a strict interpretation of abstract code. The issue is basically the same as for synapses: you can re-create synaptic pathway statements (except for delays) with linked variables and custom operations, without using any `Synapses` object.
I think it would be good to have buffering be more explicit for Brian Hears 2, like in BH1. Let's discuss that another time though.

I quite like having the freedom to reorder synapses if necessary (and this could be very necessary on GPU as you say, and on SpiNNaker we can't make any guarantees about the order in which synapses are considered). So I think I'd prefer to stick to the current approach of warning if the ordering makes a difference, but not insisting on it and not guaranteeing that they'll be evaluated in a particular order.

I've started work on this in a new branch. I'll make a pull request for discussing the implementation.
Bug reported by Owen Mackwood on the Brian development list:
If you increment a presynaptic variable in the Synapse pre/post code, it is only incremented once per presynaptic neuron, rather than once per synapse (as it presumably should). The following code exhibits this behaviour:
The plot should show increments of 9, whether `on_pre` is True or False. The numpy version exhibits increments of 1 in both cases.
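A plain-numpy sketch of the reported symptom (the original Brian script is not reproduced here), assuming a single presynaptic neuron with 9 outgoing synapses:

```python
import numpy as np

# One presynaptic neuron connected to 9 targets: its index is
# repeated 9 times in the synaptic pre-synaptic index array
presynaptic_idx = np.zeros(9, dtype=int)

# What the buggy numpy code generation effectively did:
# repeated indices collapse, so the variable increments by 1
A = np.zeros(1)
A[presynaptic_idx] += 1.0
print(A[0])  # 1.0  (the reported wrong result)

# The expected behaviour: one increment per synapse
A_fixed = np.zeros(1)
np.add.at(A_fixed, presynaptic_idx, 1.0)
print(A_fixed[0])  # 9.0
```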