
Alpha filter #4
Merged (2 commits), Sep 18, 2020
Conversation

@hunse (Contributor) commented Sep 11, 2020

Add Alpha filter implementation
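For context, an alpha filter (impulse response proportional to `(t / tau**2) * exp(-t / tau)`) can be realized as two identical first-order lowpass stages in cascade. The sketch below is an illustration of that idea only, not the PR's actual implementation; the discretization (a zero-order-hold-style per-step coefficient) and the parameter values are assumptions.

```python
import numpy as np

def alpha_filter(x, tau, dt=0.001):
    """Apply an alpha filter as two identical lowpass stages in cascade.

    This per-step coefficient is an approximate discretization; the PR's
    implementation may discretize the filter differently.
    """
    a = 1.0 - np.exp(-dt / tau)  # per-step lowpass coefficient
    y1 = y2 = 0.0
    out = np.empty(len(x))
    for t, xt in enumerate(x):
        y1 += a * (xt - y1)  # first lowpass stage
        y2 += a * (y1 - y2)  # second lowpass stage
        out[t] = y2
    return out

# step response: rises smoothly from near-zero slope and settles at 1
step = alpha_filter(np.ones(5000), tau=0.01)
```

Compared to a single lowpass stage, the cascade gives a smoother step response that starts with a small initial slope rather than jumping immediately.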

@drasmuss (Member) left a comment:

Made various fixups; more details are in the commit messages. Let me know if those changes look good to you, and then I will merge!

@hunse (Contributor, Author) commented Sep 16, 2020

The problem with using just one step in test_lowpass_trainable is that all the system really needs to learn is the initial filter state; it doesn't need to learn tau. I set trainable=False for tau and found that this test still passes.

I would recommend using multiple steps and checking explicitly that tau has changed, just to make sure it's training properly, as I did in the modified test. Unfortunately, that probably means using looser tolerances: once the system is optimizing over multiple timesteps, it will choose a larger initial_value so that, after decaying for a few timesteps, the output is still near the target value of 1.

It would be nice if there was a way we could have really tight tolerances and be sure that tau is training properly. The only way I can think to do that would be to have the target be a lowpass-filtered version of the input, with e.g. a tau that's twice as long as the initial tau. That way, the target should be perfectly learnable. (In the alpha filter test, I just use all ones, which it can never learn perfectly since it would need an infinite tau; hence the looser tolerances.)
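The "perfectly learnable target" idea above can be checked outside of any training framework: build the target by lowpass-filtering the input with twice the initial tau, and the error is exactly zero at the true tau but nonzero at the initial one. A minimal NumPy sketch (the filter discretization, seed, and tau values are hypothetical, and this is not the actual test from the PR):

```python
import numpy as np

def lowpass(x, tau, dt=0.001):
    # discrete first-order lowpass: y[t] = y[t-1] + a * (x[t] - y[t-1])
    a = 1.0 - np.exp(-dt / tau)
    y, yt = np.empty(len(x)), 0.0
    for t, xt in enumerate(x):
        yt += a * (xt - yt)
        y[t] = yt
    return y

rng = np.random.default_rng(seed=0)
x = rng.normal(size=1000)
tau0 = 0.01  # hypothetical initial filter time constant

# target generated with twice the initial tau, so tau = 2 * tau0
# reproduces it exactly; training must move tau to drive the error to zero
target = lowpass(x, 2 * tau0)

err_at_init = np.mean((lowpass(x, tau0) - target) ** 2)      # nonzero
err_at_true = np.mean((lowpass(x, 2 * tau0) - target) ** 2)  # exactly zero
```

Because the only way to close the remaining error gap is to change tau, this style of target supports tight tolerances while still verifying that tau itself is being trained.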

@drasmuss (Member) commented:
Updated the test in the other PR (as it worked better after those changes), see 9decbd0

----------
states : `~numpy.ndarray`
Optional state array that can be used to override the values returned by
`.SpikingActivationCell.get_initial_state`.
@hunse (Contributor, Author) commented on this diff:
This should be ``cell.get_initial_state`` or something like that.

@hunse (Contributor, Author) commented Sep 17, 2020

The new test looks good. So I'm fine to merge this as-is (with the minor fix to the docstring I mentioned above), and then the test will get fixed when we merge #5.

Also, it might be a good idea to double-check the docstrings for the base classes, since I obviously didn't do a great job updating them when I reduced the code redundancy.

@drasmuss merged commit 8f51dc7 into master on Sep 18, 2020
@drasmuss deleted the alpha-filter branch on Sep 18, 2020 at 18:32