Hi @saxenarohit,
There is just one layer in the example, but multiple layers can be used with ease.
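If it helps, here is a minimal PyTorch sketch (not the repo's TensorFlow code; all names and dimensions are made up for illustration) of how stacking bidirectional GRU layers reduces to raising `num_layers`:

```python
import torch
import torch.nn as nn

# Hypothetical dimensions, not taken from this repo.
embed_dim, hidden_dim, num_layers = 100, 150, 2

# A single nn.GRU with num_layers > 1 stacks bidirectional GRU layers;
# each layer consumes the concatenated fwd/bwd outputs of the layer below.
encoder = nn.GRU(embed_dim, hidden_dim, num_layers=num_layers,
                 bidirectional=True, batch_first=True)

words = torch.randn(32, 40, embed_dim)   # (batch, seq_len, embed_dim)
outputs, _ = encoder(words)              # (batch, seq_len, 2 * hidden_dim)
print(outputs.shape)                     # torch.Size([32, 40, 300])
```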
The encoder is word-level, indeed. What's wrong with that? The overall idea is to apply attention to specific words, so it makes sense to work at the word level rather than the character or sentence level (the latter is possible, though, for large texts).
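As a rough illustration of that idea, generic additive-attention pooling over word-level encoder outputs might look like the sketch below. `WordAttention`, its dimensions, and the projection names are all hypothetical, not taken from this repo:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WordAttention(nn.Module):
    """Additive attention pooling over per-word encoder outputs.
    A generic sketch, not this repo's exact implementation."""
    def __init__(self, enc_dim, attn_dim):
        super().__init__()
        self.proj = nn.Linear(enc_dim, attn_dim)            # u_t = tanh(W h_t + b)
        self.context = nn.Linear(attn_dim, 1, bias=False)   # score_t = u_t . u_w

    def forward(self, h):                                   # h: (batch, seq_len, enc_dim)
        scores = self.context(torch.tanh(self.proj(h)))     # (batch, seq_len, 1)
        alpha = F.softmax(scores, dim=1)                    # one weight per word
        return (alpha * h).sum(dim=1), alpha                # weighted sum -> sentence vector

h = torch.randn(32, 40, 300)                  # e.g. Bi-GRU outputs from above
sent_vec, weights = WordAttention(300, 100)(h)
print(sent_vec.shape)                         # torch.Size([32, 300])
```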
I see what you mean; it's not an exact implementation of the algorithm from "Hierarchical Attention Networks". It's a simpler example of how attention can be used.
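For contrast, the full HAN from the paper adds a second, sentence-level Bi-GRU with its own attention on top of the word level. A rough sketch of that hierarchy, reusing the hypothetical `WordAttention` above with assumed dimensions throughout:

```python
import torch
import torch.nn as nn

# Rough HAN-style hierarchy (assumed dims; WordAttention defined above).
word_enc = nn.GRU(100, 150, bidirectional=True, batch_first=True)
sent_enc = nn.GRU(300, 150, bidirectional=True, batch_first=True)
word_attn, sent_attn = WordAttention(300, 100), WordAttention(300, 100)

docs = torch.randn(8, 10, 40, 100)            # (docs, sents, words, embed)
sent_vecs = torch.stack([
    word_attn(word_enc(doc)[0])[0]            # attend over words in each sentence
    for doc in docs.unbind(0)
])                                            # (docs, sents, 300)
doc_vecs, _ = sent_attn(sent_enc(sent_vecs)[0])   # attend over sentences
print(doc_vecs.shape)                         # torch.Size([8, 300])
```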
I can only see the word-level encoder. Am I missing something? There should be two Bi-GRU layers, right?