Potentially confusing Node behaviour #626
Cute. I've run into the same problem with the following:

```python
data = []
def node_function(t, x):
    data.append(x)
node = nengo.Node(node_function, size_in=1)
```

I've gotten into the habit of doing this instead:

```python
data = []
def node_function(t, x):
    data.append(np.copy(x))
node = nengo.Node(node_function, size_in=1)
```

Not sure what to do about it in general. I still forget and get annoyed by this now and then. My instinct is that the overhead involved in doing (1) or (4) is pretty high, but it's something that probably should be benchmarked and checked. If it is actually a significant cost, then my vote is (2).
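The failure mode above can be shown without nengo at all. A minimal sketch in plain NumPy, where `buf` stands in for the simulator-owned memory behind `x`:

```python
import numpy as np

buf = np.zeros(1)          # stands in for the simulator-owned memory behind x
data = []
for step in range(3):
    buf[0] = step          # the simulator updates the signal in place
    data.append(buf)       # every entry is the SAME array object
print([float(d[0]) for d in data])    # [2.0, 2.0, 2.0] -- all aliased

data = []
for step in range(3):
    buf[0] = step
    data.append(np.copy(buf))         # snapshot the current value instead
print([float(d[0]) for d in data])    # [0.0, 1.0, 2.0]
```

The first loop stores three references to the same array, so all entries show the final value; the `np.copy` version records the value at each step.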
I would probably have an option on the node for whether to copy `x`.
One thing to keep in mind is whether this is a backend-specific option or something that all backends support. Right now in other backends, I've never run into this problem with `t`, but `x` behaves similarly to the reference backend. I'd lean towards checking the speed hit, and if it is negligible, then just always passing in a copy (fewer parameters).
I guess I was assuming that backends can always make a copy if they want, or at least wrap the Python function so that a copy is passed in. Since this is just a performance question, it's not a problem if they have to make a copy (i.e. they don't need to support the no-copy option, I'd even support silent failure in that situation).
Another benefit to this is that people can't accidentally change something inside their simulation (e.g. by writing into `x`).
I meant that if there's a parameter that specifies which behaviour to use, then that would mean that backends would also have to support the non-copy option, with its strange side effects. That could be awkward for the backend to implement.
Yikes. Hadn't even thought of that. That makes me even more convinced that the right answer is "always copy".
I guess I had assumed that this was all undesirable behaviour, so that it wouldn't matter if it was replicated. But I guess that could be the source of annoying bugs, if someone expects one behaviour and gets the other. I'm fine with always copying. As I said, I'm guessing the speed hit will be minimal (though perhaps someone should check that).
+1 copying
I wrote the following script to test:

```python
import sys
import timeit

import numpy as np

import nengo

args = sys.argv[1:]
copy_tx = len(args) > 0 and '--copy' in args

nx = 10
nnode = 100
rng = np.random.RandomState(9)

def node_fn(t, x):
    if copy_tx:
        t = np.array(t)
        x = np.array(x)
    return x + t

probes = []
with nengo.Network() as model:
    u = nengo.Node(output=rng.uniform(-1, 1, size=nx))
    for i in range(nnode):
        a = nengo.Node(output=node_fn, size_in=nx)
        nengo.Connection(u, a)
        probes.append(nengo.Probe(a))

sim = nengo.Simulator(model)
print(copy_tx)
print(timeit.timeit('sim.run(10.)', 'from __main__ import sim', number=3))
```

And got the following results with copying:
and without copying
So the difference is about a 9% slowdown, for this simulation of only nodes. For a similar script but with a 10-neuron, 10-dimensional ensemble for each node, the slowdown is 3%. If I make them 100-neuron populations, then it was actually faster when I copied (just because of random fluctuations, I'm sure, but this indicates to me that the difference is negligible). One final thing to note is that array copying time is not linear in array size: there is a lot of overhead that seems to account for most of the time.
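The last point, that copy time is dominated by fixed per-call overhead rather than array size, is easy to check with a micro-benchmark. Exact numbers are machine-dependent, so treat this as a sketch:

```python
import timeit

import numpy as np

times = {}
for n in (1, 10, 100, 1000):
    x = np.zeros(n)
    # Time a full copy via np.array(x), as in the script above.
    times[n] = timeit.timeit(lambda: np.array(x), number=100_000)

for n, t in times.items():
    print(f"n={n:5d}: {t:.4f} s")
```

If copying were linear in size, the n=1000 time would be roughly 1000x the n=1 time; in practice the per-call overhead dominates for small arrays, so the ratio is far smaller.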
Interesting results! Especially about setting the …
I think not copying …
Node functions now receive `t` as a float (as opposed to a zero-dimensional array), and `x` as a readonly view onto the simulator data. This addresses #626.
I realized putting together a PR that we can make the readonly view onto |
But then we can't modify it on each timestep? Oh, or I suppose the builder has a view that allows for writing. Nice 🌈
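A sketch of how the readonly-view trick can work in plain NumPy (the names here are illustrative, not nengo internals): the simulator keeps the writeable base array, and the node function receives a view with the `writeable` flag cleared.

```python
import numpy as np

signal = np.zeros(3)            # simulator-owned, writeable base array
view = signal[:]                # the array a node function would receive
view.flags.writeable = False    # lock the view, not the base

signal[:] = [1.0, 2.0, 3.0]     # the builder/simulator still writes the base
print(view)                     # the view sees the update: [1. 2. 3.]

try:
    view[0] = 9.9               # user code cannot write through the view
except ValueError as err:
    print("blocked:", err)
```

This gets the safety of a copy without paying for one: the base stays writeable for the simulator, while user code writing through the view raises a `ValueError`.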
Pulling a Travis here and submitting bugs in my code as Nengo bugs, but just spent a little while tracking this down and thought I should document it. Here's a minimal example:
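A hypothetical reconstruction of such an example, consistent with the output described below (plain NumPy standing in for the simulator loop; a 0-d array for `t` and dt = 0.25 are assumptions):

```python
import numpy as np

t = np.array(0.0)            # how the old simulator represented time: 0-d ndarray
tmp = 0.0

for step in range(5):
    t[...] = 0.25 * step     # the simulator advances time in place
    if t == 0.5:
        tmp = t              # intent: snapshot the time... but this aliases t
    print(float(tmp))        # prints 0.0, 0.0, 0.5, 0.75, 1.0
```

With a real float `t`, `tmp = t` would bind an immutable value and the loop would print 0.0, 0.0, 0.5, 0.5, 0.5 instead.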
I think most people would expect that to print out "0.0, 0.0, 0.5, 0.5, 0.5, ...", but in reality it prints out "0.0, 0.0, 0.5, 0.75, 1.0, ...". Clearly (in hindsight) the problem is that `t` is an ndarray, not a float, so `tmp` and `t` get aliased. But the problem is that `t` looks and acts, in basically all other respects, as if it is a float (e.g., `t == 0.5` passes, `t[0]` fails, etc.).

Potential solutions:

- Cast `t` to a float before passing it to the `Node` function (cleanest solution, but pay the cost of the type cast)
- Document the behaviour on `Node` (hope people read it)
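The float-like-but-not-quite behaviour of a zero-dimensional array is easy to demonstrate in pure NumPy, independent of nengo:

```python
import numpy as np

t = np.array(0.5)         # 0-d ndarray, like the t the old simulator passed

print(t == 0.5)           # comparisons behave like a float's: True
print(t + 0.25)           # so does arithmetic: 0.75

try:
    t[0]                  # ...but indexing fails: 0-d arrays take no indices
except IndexError as err:
    print("IndexError:", err)

tmp = t                   # and assignment binds the same object: no copy made
t[...] = 1.0              # an in-place update...
print(float(tmp))         # ...is visible through tmp: prints 1.0, not 0.5
```

So every read of `t` looks like a float, and the aliasing only surfaces once the value is stored and the simulator mutates it in place, which is exactly why the bug is hard to spot.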