Hi, I'm trying to implement the paper in PyTorch, but my model seems to ignore the recurrent states. Is the sliding-window attention over blocks of tokens mandatory for the recurrence to work, or can the model be trained with regular attention? That's the only piece I haven't implemented; I've already added the special gate initialization, but still no luck, and I'm trying to identify what's actually missing to get the model to attend to its recurrent states.
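For reference, here's a minimal sketch of the recurrent layer I have so far. To be clear, the number of state vectors, the gate formulation (a sigmoid over a learned per-channel bias), and the bias init value are my own guesses at what the paper intends, not taken verbatim from it:

```python
import torch
import torch.nn as nn

class RecurrentCrossAttentionBlock(nn.Module):
    """Sketch of a recurrent block: tokens cross-attend to a set of
    recurrent state vectors, and the states are updated through a gate.
    Hyperparameters below are assumptions, not the paper's exact values."""

    def __init__(self, dim: int, num_states: int = 512, num_heads: int = 8):
        super().__init__()
        # Learned initial state vectors (my assumption for how the state starts).
        self.init_state = nn.Parameter(torch.randn(num_states, dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.state_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.state_proj = nn.Linear(dim, dim)
        # "Special" gate init: a per-channel bias pushed through a sigmoid.
        # A large positive bias makes g ~ 1, so the old state is carried
        # through almost unchanged early in training and updates are damped.
        self.gate_bias = nn.Parameter(torch.full((dim,), 5.0))

    def forward(self, x, state=None):
        # x: (batch, block_len, dim); state: (batch, num_states, dim)
        if state is None:
            state = self.init_state.unsqueeze(0).expand(x.size(0), -1, -1)
        # 1) Tokens read from the recurrent state via cross-attention.
        read, _ = self.cross_attn(query=x, key=state, value=state)
        x = x + read
        # 2) The state reads from the tokens, producing a candidate update.
        update, _ = self.state_attn(query=state, key=x, value=x)
        candidate = self.state_proj(update)
        # 3) Gated update: with g ~ 1 at init, the old state dominates.
        g = torch.sigmoid(self.gate_bias)
        new_state = g * state + (1.0 - g) * candidate
        return x, new_state
```

During training I carry the state across consecutive blocks of the same document, roughly like this:

```python
block = RecurrentCrossAttentionBlock(dim=256)
x = torch.randn(2, 4 * 128, 256)  # (batch, seq_len, dim)
state = None
for chunk in x.split(128, dim=1):
    # Optionally state.detach() here for truncated backprop through time.
    chunk_out, state = block(chunk, state)
```

Despite this, the gate stays near 1 and the cross-attention to the state seems to contribute nothing, which is why I suspect the block-wise sliding-window attention matters.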
Also, would it be possible to fine-tune an existing model to use the recurrent layer?