
Inconsistency about refractory period with Brian Simulator #137

Closed
ghost opened this issue Oct 1, 2018 · 6 comments

@ghost

ghost commented Oct 1, 2018

In my opinion, the implementation of the refractory period in the Brian simulator makes more sense.

Assume refrac_period = 1

BindsNET
t=0: mem+=input --> neuron spikes --> refrac = 1
t=1: refrac = 0 --> mem+=input --> neuron spikes 

Ref:
https://github.com/Hananel-Hazan/bindsnet/blob/6d4a7a7980080556c79c34d2a603bada12dc78d0/bindsnet/network/nodes.py#L324-L334

Brian simulator
t=0: mem+=input --> neuron spikes --> t_spike = 0
t=1: t_current - t_spike < refrac_period  --> do nothing
t=2: t_current - t_spike > refrac_period --> mem+=input --> neuron spikes --> ...

Ref:
https://github.com/brian-team/brian2/blob/2b8e459798bd84be1c01e707d74993f2f260b5ce/brian2/groups/neurongroup.py#L121
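The two timelines above can be reproduced with a toy single-neuron simulation. This is an illustrative sketch, not BindsNET's or Brian's actual code: the names (`v`, `thresh`, `refrac_count`, `t_spike`) and the constant suprathreshold input are assumptions, `dt = 1`, and `refrac_period = 1` as in the example.

```python
def simulate_bindsnet(steps, refrac_period=1):
    """Decrement the refractory counter *before* integrating input
    (the ordering reported for BindsNET above)."""
    v, thresh, refrac_count, spike_times = 0.0, 1.0, 0, []
    for t in range(steps):
        if refrac_count > 0:        # counter decremented first ...
            refrac_count -= 1
        if refrac_count == 0:       # ... so the neuron is free again one step later
            v += 2.0                # constant input that always exceeds threshold
        if v >= thresh:
            v, refrac_count = 0.0, refrac_period
            spike_times.append(t)
    return spike_times

def simulate_brian(steps, refrac_period=1):
    """Suppress integration until a full refractory period has elapsed
    since the last spike (the Brian-style timeline above)."""
    v, thresh, t_spike, spike_times = 0.0, 1.0, None, []
    for t in range(steps):
        if t_spike is not None and t - t_spike <= refrac_period:
            continue                # still refractory: do nothing this step
        v += 2.0
        if v >= thresh:
            v, t_spike = 0.0, t
            spike_times.append(t)
    return spike_times

print(simulate_bindsnet(4))  # [0, 1, 2, 3]: spikes every step
print(simulate_brian(4))     # [0, 2]: one full silent step after each spike
```

With a one-step refractory period, the BindsNET-style ordering never actually silences the neuron, while the Brian-style rule enforces one silent step per spike.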

@djsaunde
Collaborator

djsaunde commented Oct 2, 2018

Good point. How would you fix this? Feel free to open a PR.

@ghost
Author

ghost commented Oct 2, 2018

Since there is a definition of t in network/__init__, one possible way is to add t as an additional argument to the step function. Inside the step function, introduce a new variable to store the last spike time (say t_spike). Every time the step function is called, check whether t - t_spike is greater than the refractory period and act accordingly.
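A rough sketch of this proposal, with hypothetical names (the class, its attributes, and the `step(inpt, t)` signature are not BindsNET's actual API):

```python
class LIFNodesSketch:
    """Minimal single-value LIF node tracking an absolute last-spike time."""

    def __init__(self, thresh=1.0, refrac=1.0):
        self.v = 0.0
        self.thresh = thresh
        self.refrac = refrac
        self.t_spike = float("-inf")  # sentinel meaning "never spiked"

    def step(self, inpt, t):
        # Integrate only once a full refractory period has elapsed.
        if t - self.t_spike > self.refrac:
            self.v += inpt
        spiked = self.v >= self.thresh
        if spiked:
            self.v = 0.0
            self.t_spike = t
        return spiked
```

Note this only works across sequential simulations if t keeps increasing monotonically; if the network resets t between runs, the stored t_spike becomes meaningless, which is exactly the concern raised in the next comment.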

@djsaunde
Collaborator

djsaunde commented Oct 3, 2018

Hm, that could be problematic when doing multiple simulations in sequence. For example:

network.run(inpts, time=100)
network.run(inpts, time=100)  # Forgets refractory neurons from end of previous simulation!

Alternatively, one could just reverse the ordering, from:

# Decrement refractory counters.
self.refrac_count[self.refrac_count != 0] -= dt 

# Integrate inputs. 
self.v += (self.refrac_count == 0).float() * inpts 
  
# Check for spiking neurons.
self.s = self.v >= self.thresh

# Refractoriness and voltage reset. 
self.refrac_count.masked_fill_(self.s, self.refrac) 

to

# Integrate inputs. 
self.v += (self.refrac_count == 0).float() * inpts 
  
# Decrement refractory counters.
self.refrac_count[self.refrac_count != 0] -= dt 

# Check for spiking neurons.
self.s = self.v >= self.thresh

# Refractoriness and voltage reset. 
self.refrac_count.masked_fill_(self.s, self.refrac) 

Does this make sense? Does this appear to solve the problem?
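The reordered update can be checked with a scalar re-implementation (illustrative only; BindsNET's real code operates on tensors with `masked_fill_`, and the names here mirror the snippet above):

```python
def step_reordered(state, inpt, dt=1.0, thresh=1.0, refrac=1.0):
    """One step of the reordered update: integrate, then decrement,
    then threshold, then reset."""
    v, refrac_count = state
    # Integrate inputs (only if not refractory).
    if refrac_count == 0:
        v += inpt
    # Decrement refractory counters.
    if refrac_count != 0:
        refrac_count = max(0.0, refrac_count - dt)
    # Check for spiking neurons.
    spiked = v >= thresh
    # Refractoriness and voltage reset.
    if spiked:
        v, refrac_count = 0.0, refrac
    return (v, refrac_count), spiked

state, spike_times = (0.0, 0.0), []
for t in range(4):
    state, spiked = step_reordered(state, 2.0)
    if spiked:
        spike_times.append(t)
print(spike_times)  # [0, 2]
```

Because the integration gate is evaluated before the counter is decremented, a neuron that spiked at t stays silent at t + 1, reproducing the Brian-style timing from the original report.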

@ghost
Author

ghost commented Oct 3, 2018

Yes, that will be better than my solution.

@ghost
Author

ghost commented Oct 3, 2018

You may need to be careful when changing the order: there are models (like this) that check self.refrac_count == 0 more than once.

@djsaunde
Collaborator

djsaunde commented Oct 5, 2018

Solved by #136.

@djsaunde djsaunde closed this as completed Oct 5, 2018