
Compute exponential decays only once #231

Merged · 2 commits · Apr 22, 2019
Conversation

@Huizerd (Collaborator) commented Apr 16, 2019

So while this would be the best solution to #229, the fact that layer.dt is only assigned after layer creation means the exponential decay cannot (as of now) be computed in __init__.

https://github.com/Hananel-Hazan/bindsnet/blob/020ba619deeea61e9dcedaf5601037bdf2431bb8/bindsnet/network/__init__.py#L109

How would you prefer this to be solved?
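For context, a minimal sketch of the ordering problem (both classes are heavily simplified and the attribute names are illustrative): dt only reaches the layer in add_layer, after __init__ has already run.

```python
import torch

class LIFNodes:
    def __init__(self, tc_decay: float = 100.0):
        self.tc_decay = tc_decay
        # self.dt does not exist yet, so torch.exp(-dt / tc_decay)
        # cannot be evaluated at this point.

class Network:
    def __init__(self, dt: float = 1.0):
        self.dt = dt
        self.layers = {}

    def add_layer(self, layer, name: str):
        self.layers[name] = layer
        layer.dt = self.dt  # dt only reaches the layer here
```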

@djsaunde (Collaborator) commented Apr 16, 2019

Why don't we do self.decay = decay in __init__, then compute layer.decay = torch.exp(-self.dt / layer.tc_trace) in the Network.add_layer function? That way, we keep track of the decay argument that was passed in, and compute the exact solution when we learn the global timestep. Layers aren't meant to be used outside of Networks, so there shouldn't be a problem.
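A minimal sketch of what this proposal could look like, following the names used in the comment above (the Network and Nodes classes are heavily simplified, and anything not named in the comment is illustrative):

```python
import torch

class Nodes:
    def __init__(self, decay=None, tc_trace: float = 20.0):
        self.decay = decay        # keep track of the argument that was passed in
        self.tc_trace = tc_trace  # trace time constant

class Network:
    def __init__(self, dt: float = 1.0):
        self.dt = dt
        self.layers = {}

    def add_layer(self, layer, name: str):
        self.layers[name] = layer
        layer.dt = self.dt
        # Exact-solution decay, computed once the global timestep is known.
        layer.decay = torch.exp(torch.tensor(-self.dt / layer.tc_trace))
```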

@Hananel-Hazan (Collaborator) commented

After discussing with @dee0512, I think we need to add a set_dt function to each node and call it in Network.add_layer. That way, each type of node can set its own decay (or not), and we don't need to remember to set the decay every time we set the dt.
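A minimal sketch of this set_dt alternative, assuming only the method name proposed above (the classes and attributes shown are illustrative):

```python
import torch

class Nodes:
    def set_dt(self, dt: float) -> None:
        self.dt = dt

class LIFNodes(Nodes):
    def __init__(self, tc_decay: float = 100.0):
        self.tc_decay = tc_decay
        self.decay = None  # filled in once dt is known

    def set_dt(self, dt: float) -> None:
        super().set_dt(dt)
        # Each node type recomputes only the decays it actually owns.
        self.decay = torch.exp(torch.tensor(-dt / self.tc_decay))

# Network.add_layer would then call layer.set_dt(self.dt)
# instead of assigning layer.dt directly.
```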

@djsaunde (Collaborator) commented

@Hananel-Hazan Nodes don't have a dt, only the Network does. What we're proposing is that the decay constant is computed when the layer is added to the Network.

@dee0512 (Collaborator) commented Apr 16, 2019

If you are suggesting that we also remove the line layer.dt = self.dt and set the decay there, that makes sense. But if nodes will still have a dt, like they do now, then it would make sense to have a function to set it.

@djsaunde (Collaborator) commented

Nodes don't have a dt independent of their parent Network. We only set layer.dt = self.dt because it is more convenient than setting layer.network = self and referring to self.network.dt in Nodes objects. The dt is set only once, when the Network is created.

@Huizerd (Collaborator, Author) commented Apr 17, 2019

> Why don't we do self.decay = decay in __init__, then compute layer.decay = torch.exp(-self.dt / layer.tc_trace) in the Network.add_layer function? That way, we keep track of the decay argument that was passed in, and compute the exact solution when we learn the global timestep. Layers aren't meant to be used outside of Networks, so there shouldn't be a problem.

The problem with this is that different node types have different decay variables. While all of them have self.trace_decay (we would still need to check for traces == True in add_layer), LIFNodes have only self.decay, while e.g. AdaptiveLIFNodes also have self.theta_decay. So I'm wondering whether this can be done in add_layer without a whole bunch of if-statements and isinstance() checks.

A solution might be an additional compute_decays method on the Nodes class and on child classes like LIFNodes, where the appropriate decays are computed for each node type specifically. We would then simply set all decays to None in the __init__ of both Nodes and e.g. LIFNodes, and call layer.compute_decays() in the add_layer function.
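A minimal sketch of that compute_decays idea, assuming the attribute names discussed above (trace_decay, decay; AdaptiveLIFNodes would override compute_decays once more to add theta_decay). The merged implementation may differ in detail:

```python
import torch

class Nodes:
    def __init__(self, traces: bool = False, tc_trace: float = 20.0):
        self.traces = traces
        self.tc_trace = tc_trace
        self.trace_decay = None  # set later, once dt is known

    def compute_decays(self) -> None:
        if self.traces:
            self.trace_decay = torch.exp(torch.tensor(-self.dt / self.tc_trace))

class LIFNodes(Nodes):
    def __init__(self, tc_decay: float = 100.0, **kwargs):
        super().__init__(**kwargs)
        self.tc_decay = tc_decay
        self.decay = None

    def compute_decays(self) -> None:
        super().compute_decays()
        self.decay = torch.exp(torch.tensor(-self.dt / self.tc_decay))

# Network.add_layer would then do:
#     layer.dt = self.dt
#     layer.compute_decays()
```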

@Hananel-Hazan (Collaborator) commented

Yes, I agree @Huizerd.

@djsaunde (Collaborator) commented

I agree too! Maybe make it a "private" method (_compute_decays instead of compute_decays is the recommended syntax).

@djsaunde merged commit bd5f836 into BindsNET:master on Apr 22, 2019