
Please expand time series prediction examples #46

Open
jabowery opened this issue Nov 9, 2021 · 3 comments


jabowery commented Nov 9, 2021

LMU hasn't received the attention due it since its introduction. Nor has Nengo's spiking approach received due attention. IMHO, ABR's priorities have put the spiking cart before the LMU horse. LMU's superiority to LSTM is a strength of that horse that may well draw greater attention to spiking deployments.

Since the LMU's primary strength is dynamical systems modeling (and having casually worked with the LMU code off and on for a couple of years), one thing that has become apparent to me as a casual user is the need for more examples involving dynamics and, more specifically, time series prediction. The most obvious omission is the original repository's Mackey-Glass example. But even that example, although it demonstrates superiority to LSTM (other than in the hybrid architecture), doesn't really get to the heart of the dynamical systems identification at which the LMU is likely to really shine:

Online identification of nonstationary dynamical systems.

Something that would accomplish this is an algorithm generating multiple, dynamically interdependent waveforms (generated on the CPU and fed to the GPU(s)), with dependency parameters changing continuously in time, for the LMU to learn online on the GPU/TPU and predict.
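To make the suggestion concrete, here is a minimal NumPy sketch of such a data generator. All names and parameter values are hypothetical illustrations, not part of any existing keras-lmu example: two waveforms where the second depends on the first through a coupling coefficient k(t) that drifts continuously, so the joint dynamics are nonstationary and have to be identified online.

```python
import numpy as np

def make_coupled_waveforms(n_steps=10000, dt=0.01, seed=0):
    """Generate two interdependent waveforms with a drifting coupling.

    Hypothetical example: y1 is an autonomous noisy oscillator, and y2
    is driven by y1 through a coupling coefficient k(t) that changes
    continuously over the run (the nonstationarity to be learned).
    """
    rng = np.random.default_rng(seed)
    t = np.arange(n_steps) * dt

    # Slowly drifting dependency parameter (one full cycle per run).
    k = 0.5 + 0.4 * np.sin(2 * np.pi * t / (n_steps * dt))

    # Driving waveform: sinusoid plus observation noise.
    y1 = np.sin(2 * np.pi * 0.7 * t) + 0.05 * rng.standard_normal(n_steps)

    # Driven waveform: leaky integrator of y1, scaled by k(t).
    y2 = np.zeros(n_steps)
    for i in range(1, n_steps):
        y2[i] = 0.95 * y2[i - 1] + k[i] * y1[i - 1] * dt * 10

    return np.stack([y1, y2], axis=-1), k  # waves: (n_steps, 2)

waves, coupling = make_coupled_waveforms()
# Online one-step-ahead prediction task: feed waves[:-1] as inputs
# and waves[1:] as targets, updating the model at every step.
```

The CPU loop above can run as a generator feeding batches to the accelerator, so the model never sees a fixed, pre-chopped dataset.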

Particular attention to illustrating the function of LMU-unique parameters (theta, etc.) -- especially in contrast to the LSTM in this environment -- would help the outreach a great deal.

PS: Something to avoid in this kind of outreach is reliance on interpolative test sets -- that is to say, avoid the normal Keras training/testing mode involving chopping up time series data into training and test sets where what the model actually learns to do is interpolate rather than extrapolate.
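The distinction in the postscript can be sketched in a few lines of NumPy (helper names are made up for illustration): split the raw series in time first and window each half afterward, so test targets lie strictly in the model's future, rather than windowing then shuffling, which reduces testing to interpolation.

```python
import numpy as np

def make_windows(x, window, horizon):
    """Slice a 1-D series into (input window, future target) pairs."""
    xs, ys = [], []
    for i in range(len(x) - window - horizon + 1):
        xs.append(x[i : i + window])
        ys.append(x[i + window + horizon - 1])
    return np.array(xs), np.array(ys)

series = np.sin(0.02 * np.arange(5000))  # stand-in time series

# Extrapolative evaluation: temporal split FIRST, windowing second.
split = int(0.8 * len(series))
train_X, train_y = make_windows(series[:split], window=100, horizon=1)
test_X, test_y = make_windows(series[split:], window=100, horizon=1)

# Anti-pattern (interpolation): windowing the whole series and then
# shuffling windows before splitting leaks near-duplicate neighboring
# timesteps across the train/test boundary.
```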


stwrn commented Dec 27, 2022

Completely agree with the post above. I'm also looking forward to TF 2.10 support in version 0.4.3. 🙏
Thanks for LMU, it's great!

drasmuss (Member) commented

Hi @stwrn, FYI we just did a new release that includes TF 2.10/11 support!


stwrn commented Feb 10, 2023

@drasmuss
Wow! Thank you very much for your work!
