
Reproducing the sinusoid oscillator from “Supervised learning in spiking neural networks with FORCE training” #1445

Merged — 2 commits, Dec 12, 2022

Conversation

schmitts (Contributor)

No description provided.

@mstimberg (Member)

Many thanks (again 😊)! I wonder whether we could replace the two network operations with a "creative use" of synapses – this might be a nice showcase for doing so, and it would make everything more efficient (and enable us to use standalone mode!). It will make things quite a bit less readable, though, so I'm not sure...

@schmitts (Contributor, author) commented Dec 2, 2022

Yes, I thought the same thing. I wasn't really happy with the explicit matrix operations either; on the other hand, they make the math more accessible and direct. Do you want to draft a version with Synapses operations? It's easier to discuss and compare with a concrete example at hand.

@mstimberg (Member)

I have hacked together a solution that only uses Synapses, summed variables, and run_regularly operations, but… it isn't pretty. In particular, we don't have a nice syntax for setting the timing of several summed-variable operations, and here they need to execute in a specific order to faithfully reproduce the algorithm in your network_operation. Actually, the result doesn't seem to be exactly the same, so there might be a bug as well – but either way, I don't think this should go into an example like this. You can compare the two implementations here: schmitts/brian2@Nicola_Clopath_2017...brian-team:brian2:Nicola_Clopath_2017_no_network_op

The best solution is probably to go with your implementation for now, but also think about how we can improve syntax to make these things more straightforward in the future. For example, I had one idea that would work quite nicely as an extension of our current syntax, I think: we could extend (summed) to NeuronGroups, to make it possible to sum expressions into a shared variable. This would allow users to replace this

# linear readout
@network_operation(dt=defaultclock.dt)
def readout(t):
    neurons.z = np.dot(neurons.BPhi, neurons.r)

by:

"""...
z = BPhi * r : 1 (summed, shared)
"""

in the equations.
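To make the equivalence concrete, here is a minimal numpy sketch (not Brian code; the names `BPhi`, `r`, and `z` are taken from the example above, and the shapes are invented for illustration). It shows that the `network_operation` readout and the proposed `(summed, shared)` line compute the same quantity: a per-neuron product summed into one shared scalar.

```python
import numpy as np

# Hypothetical sizes/values, just for illustration:
# N neurons, learned decoder BPhi, filtered spike traces r.
rng = np.random.default_rng(0)
N = 5
BPhi = rng.standard_normal(N)
r = rng.standard_normal(N)

# What the network_operation computes each time step:
z_netop = np.dot(BPhi, r)

# What "z = BPhi * r : 1 (summed, shared)" would mean: sum the
# per-neuron products BPhi_i * r_i into a single shared variable.
z_summed = np.sum(BPhi * r)

assert np.isclose(z_netop, z_summed)
```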

@schmitts (Contributor, author) commented Dec 6, 2022

Hi!

Thanks for your effort! Did you do some benchmarking between the two implementations?

Allowing for

"""...
z = BPhi * r : 1 (summed, shared)
"""

would be cool!

Cheers,

Sebastian

@mstimberg (Member)

> Did you do some benchmarking between the two implementations?

It was faster, but not dramatically so: instead of 1m20s, it ran in about 42s. With C++ standalone mode, it went down further, to 22s. But I think the network_operation version can be sped up slightly as well; I'll try out a few things.

@mstimberg (Member)

I tried a few things, and nothing makes it significantly faster. In general, I'd recommend using e.g. neurons.v[:] instead of neurons.v when passing a Brian variable to a function, since indexing like neurons.v[index] adds quite a bit of overhead to support string-based indexing, etc. In this case, the numpy functions handle the variable view reasonably well, and it only shaved off a second or two here and there. For simplicity, I'll therefore leave it as it is and go ahead with the merge.
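The pattern behind this advice can be sketched without Brian: a toy stand-in for a variable view whose `__getitem__` carries extra dispatch logic, where slicing once with `[:]` hands downstream numpy calls a plain ndarray. This is purely illustrative; `VariableViewSketch` is invented here and is not Brian's actual implementation.

```python
import numpy as np

class VariableViewSketch:
    """Toy stand-in for a Brian-style variable view (illustration only)."""
    def __init__(self, values):
        self._values = values

    def __getitem__(self, item):
        # The real view also supports string-based indexing and similar
        # conveniences; that extra dispatch is where per-access
        # overhead comes from.
        if isinstance(item, str):
            raise NotImplementedError("string indexing not sketched here")
        return self._values[item]

v = VariableViewSketch(np.arange(5.0))

# v[:] extracts the plain ndarray once, so subsequent numpy
# operations bypass the wrapper entirely:
plain = v[:]
assert isinstance(plain, np.ndarray)
assert np.dot(plain, plain) == 30.0  # 0 + 1 + 4 + 9 + 16
```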

We can use this example as a use case for the syntax I mentioned earlier (more general discussion: #180).


@mstimberg mstimberg left a comment


Thanks again 🙏

@mstimberg mstimberg merged commit 77e6f7f into brian-team:master Dec 12, 2022