How to do Stacked LSTM with attention using this framework? #30
OK, it was simple:
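(The code in this comment was posted as a screenshot and is not preserved in the thread. Below is a minimal sketch of the idea, assuming tf.keras and a Permute/Dense-softmax attention block in the style of this repo; layer sizes and input dimensions are hypothetical.)

```python
import tensorflow as tf
from tensorflow.keras.layers import (Input, LSTM, Dense, Permute,
                                     Multiply, Flatten)
from tensorflow.keras.models import Model

TIME_STEPS, N_FEATURES = 20, 7  # hypothetical dimensions

def attention_3d_block(x):
    # Dense softmax over the time axis, in the style of the repo's block.
    a = Permute((2, 1))(x)                         # (batch, features, time)
    a = Dense(TIME_STEPS, activation='softmax')(a)
    a = Permute((2, 1))(a)                         # (batch, time, features)
    return Multiply()([x, a])

inputs = Input(shape=(TIME_STEPS, N_FEATURES))
# Stacking is just a matter of every LSTM returning full sequences,
# so the attention block still receives a (batch, time, units) tensor.
x = LSTM(64, return_sequences=True)(inputs)
x = LSTM(32, return_sequences=True)(x)
x = attention_3d_block(x)
x = Flatten()(x)
outputs = Dense(1, activation='sigmoid')(x)
model = Model(inputs, outputs)
model.summary()
```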
By the way, I tried using a Conv1D layer in the attention block so that the neighborhood of each time step (its "neighbors length", set by the kernel size) contributes to that step's importance, and the results improved.
This way you also do not need to permute: the attention vector is built over time steps rather than over variables, without any Permute layer.
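(The Conv1D variant was also posted as an image. A minimal sketch under the same assumptions, where the kernel size sets how many neighboring steps feed into each step's weight:)

```python
import tensorflow as tf
from tensorflow.keras.layers import (Input, LSTM, Conv1D, Softmax,
                                     Lambda, Dense)
from tensorflow.keras.models import Model

TIME_STEPS, N_FEATURES = 20, 7  # hypothetical dimensions

def conv1d_attention(x, kernel_size=3):
    # Conv1D slides along the time axis, so each time step is scored from
    # its kernel_size neighborhood and no Permute is needed.
    scores = Conv1D(filters=1, kernel_size=kernel_size,
                    padding='same')(x)             # (batch, time, 1)
    weights = Softmax(axis=1)(scores)              # normalize over time steps
    # Weight each time step and sum, producing one context vector.
    return Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([x, weights])

inputs = Input(shape=(TIME_STEPS, N_FEATURES))
x = LSTM(64, return_sequences=True)(inputs)
context = conv1d_attention(x, kernel_size=3)
outputs = Dense(1, activation='sigmoid')(context)
model = Model(inputs, outputs)
model.summary()
```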
@rjpg thanks! The attention block got updated, so this may be deprecated now.
Hello,
I have run your code successfully.
I have also included a stacked LSTM in your code:
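(The stacked-LSTM attempt was posted as an image as well. Purely as a guess, it may have looked like the repo's attention-before-LSTM example with a second LSTM appended, roughly as below; it runs, but the attention weights are computed from the raw inputs rather than from the LSTM states, which may be the source of the doubt in the next line. All sizes are made up.)

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, Dense, Permute, Multiply
from tensorflow.keras.models import Model

TIME_STEPS, N_FEATURES = 20, 7  # hypothetical dimensions

inputs = Input(shape=(TIME_STEPS, N_FEATURES))
# Attention applied to the raw inputs (attention-before-LSTM variant)...
a = Permute((2, 1))(inputs)
a = Dense(TIME_STEPS, activation='softmax')(a)
a = Permute((2, 1))(a)
x = Multiply()([inputs, a])
# ...followed by two stacked LSTMs; only the first returns sequences.
x = LSTM(64, return_sequences=True)(x)
x = LSTM(32)(x)
outputs = Dense(1, activation='sigmoid')(x)
model = Model(inputs, outputs)
model.summary()
```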
But maybe this is not the correct way to apply a stacked LSTM with attention, right?
My ultimate goal is to include attention in this code (classification of multivariate time series):
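(The referenced classification code is not shown in the thread either. A hedged end-to-end sketch of what stacked LSTMs plus the same attention block could look like for multivariate time-series classification; the class count and dimensions are made up.)

```python
import tensorflow as tf
from tensorflow.keras.layers import (Input, LSTM, Dense, Permute,
                                     Multiply, Flatten)
from tensorflow.keras.models import Model

TIME_STEPS, N_FEATURES, N_CLASSES = 50, 10, 3  # hypothetical dimensions

inputs = Input(shape=(TIME_STEPS, N_FEATURES))
x = LSTM(64, return_sequences=True)(inputs)
x = LSTM(32, return_sequences=True)(x)
# Attention over time steps, then flatten and classify.
a = Permute((2, 1))(x)
a = Dense(TIME_STEPS, activation='softmax')(a)
a = Permute((2, 1))(a)
x = Multiply()([x, a])
x = Flatten()(x)
outputs = Dense(N_CLASSES, activation='softmax')(x)
model = Model(inputs, outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```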
Thanks in advance for any info.