added memory_d>1 functionality #40

Merged
merged 3 commits from multi-dim-memory into nengo:master on Jun 16, 2021

Conversation

@NarsimhaChilkuri

LMUFFT should now be able to handle memory_d > 1, i.e., the input need not be projected down to a scalar before being fed into the delay layer.
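
A minimal sketch of what this enables, assuming the keras_lmu API around the time of this PR; all parameter values below are illustrative, not taken from the change itself:

```python
import tensorflow as tf
import keras_lmu

# memory_d > 1: the input is encoded into a 4-dimensional memory rather
# than being projected down to a scalar first
layer = keras_lmu.LMUFFT(
    memory_d=4,
    order=16,
    theta=25,
    hidden_cell=tf.keras.layers.SimpleRNNCell(32),
)

x = tf.random.uniform((1, 100, 8))  # (batch, timesteps, input_d)
y = layer(x)
```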

@arvoelke (Contributor)

Awesome! This will be very useful to have. The code in LMU that delegates to this implementation can also have its memory_d check removed. It would be good to have a unit test too; one of the ones already in here could probably be extended.
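
A hedged sketch of the kind of test being suggested (names and values are illustrative, and this assumes that hidden_cell=None makes the layer return the flattened memory; the test that was ultimately added is described later in this thread):

```python
import numpy as np
import keras_lmu

def test_fft_multidim_memory():
    layer = keras_lmu.LMUFFT(
        memory_d=3, order=4, theta=12, hidden_cell=None, return_sequences=True
    )
    x = np.random.uniform(-1, 1, size=(2, 10, 5)).astype("float32")
    y = layer(x)
    # with hidden_cell=None the output is the memory itself, so the last
    # dimension should be memory_d * order
    assert y.shape == (2, 10, 3 * 4)
```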

@tbekolay (Member) commented Apr 28, 2021

Thanks @NarsimhaChilkuri! Can you sign the CAA by adding yourself to this page?

@hunse force-pushed the multi-dim-memory branch 2 times, most recently from 9d61854 to 7aeb03a on June 15, 2021 at 15:18
@hunse (Contributor) left a comment

I've got this fixed up and ready to go.

I had to add a has_input_kernel option to allow turning off the input kernel in LMUFFT.delay_layer, so that test_layer_vs_cell can pass (otherwise there was a mismatch in the number of weights). This is a feature I figured we'd want anyway, since a lot of our recent work uses LMUs with no dense encoding layer.
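
A sketch of the option described above; has_input_kernel is the flag this PR adds, the other values are illustrative, and the shape constraint noted in the comment is my assumption about how the layer would have to behave:

```python
import tensorflow as tf
import keras_lmu

layer = keras_lmu.LMUFFT(
    memory_d=4,
    order=16,
    theta=25,
    hidden_cell=tf.keras.layers.SimpleRNNCell(32),
    has_input_kernel=False,  # no dense encoding; input feeds the delay layer directly
)

# Without an input kernel nothing projects the input into the memory, so
# presumably the input dimensionality must equal memory_d here.
x = tf.random.uniform((1, 100, 4))
y = layer(x)
```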

I combined the test that @NarsimhaChilkuri added into our existing test_fft; the main difference is that it now covers a few more cases. Otherwise, I just added a few comments and a changelog entry.

It looks like TravisCI doesn't run on outside contributor branches, so I made a copy of this at origin/multi-dim-memory, and the build is running there. (EDIT: That's funny... it wouldn't start a build when I pushed to Nani's branch, but when the build passed on the copy that I pushed to origin, GitHub was able to identify that a build passed on the same commit and update the check badges below.)

@drasmuss (Member) left a comment

Just some minor comments below, looks good overall!

Resolved review threads on CHANGES.rst and keras_lmu/layers.py (outdated)
@hunse mentioned this pull request on Jun 15, 2021
@hunse (Contributor) commented Jun 15, 2021

OK, I've addressed your comments @drasmuss.

I also noticed that test_multivariate_lmu was failing on my machine, though it appears unrelated to these changes (it also failed on master for me); upping the tolerances a hair fixed it for me. (I tried explicitly setting all the dtypes to float32, since it seems like it might be some sort of typing issue, but that didn't fix it. 🤷)
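
Illustrative only (y_ref and y_new are hypothetical stand-ins, and the actual test and tolerance values differ): the kind of tolerance bump described above looks like this with np.allclose:

```python
import numpy as np

rng = np.random.default_rng(0)
y_ref = rng.standard_normal((4, 10)).astype(np.float32)
y_new = y_ref + 1e-6  # stand-in for float32 round-off between implementations

# default tolerances (rtol=1e-5, atol=1e-8) can fail near zero:
print(np.allclose(y_ref, y_new))             # may be False
# upping them "a hair" makes the comparison robust to round-off:
print(np.allclose(y_ref, y_new, atol=2e-6))  # True
```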

@drasmuss (Member) left a comment

LGTM

hunse and others added 3 commits on June 16, 2021 at 10:46
@drasmuss merged commit 74e9b03 into nengo:master on Jun 16, 2021