@PhilippMarquardt Hey! If you read the Vision Transformers paper, they actually try to make the positional embeddings generalize to different sizes by interpolating them. It depends on what your goal is.
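For reference, the ViT-style interpolation mentioned above can be sketched roughly like this (a minimal NumPy sketch, not the repo's code; `interpolate_pos_embed` and its shapes are illustrative assumptions, and ViT itself does this on 2D grids of patch embeddings, typically with bicubic interpolation):

```python
import numpy as np

def interpolate_pos_embed(pos, new_len):
    """Linearly resample a learned positional embedding table
    from (old_len, dim) to (new_len, dim), channel by channel."""
    old_len, dim = pos.shape
    old_x = np.linspace(0.0, 1.0, old_len)
    new_x = np.linspace(0.0, 1.0, new_len)
    # np.interp is 1D, so interpolate each embedding channel separately.
    return np.stack(
        [np.interp(new_x, old_x, pos[:, d]) for d in range(dim)],
        axis=1,
    )

pos = np.random.default_rng(0).standard_normal((16, 8))
resized = interpolate_pos_embed(pos, 24)
print(resized.shape)  # (24, 8)
```

The idea is that the embedding table is treated as a signal over positions, so a model trained at one resolution can be evaluated at another by resampling that signal.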
Hi,
once again, thanks for your great work! Since I want to use the axial attention with positional embeddings for unknown image sizes (though I know the maximum size), I was wondering whether you think changing https://github.com/lucidrains/axial-attention/blob/master/axial_attention/axial_attention.py#L104 to
does the right thing. I can now do this
I think that would make it easier to integrate into fully convolutional nets for multi-scale training.
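The approach described above, allocating axial positional embeddings at a known maximum size and slicing them to the actual input size at forward time, can be sketched as follows (a hypothetical NumPy illustration under assumed names `max_h`, `max_w`, `dim`; the real repo uses PyTorch parameters):

```python
import numpy as np

# Assumed maximum spatial extent and embedding dimension (illustrative).
max_h, max_w, dim = 64, 64, 8

rng = np.random.default_rng(0)
# Axial factorization: one embedding table per spatial axis,
# rather than one entry per (h, w) position.
pos_h = rng.standard_normal((max_h, dim))
pos_w = rng.standard_normal((max_w, dim))

def add_axial_pos(x):
    """Add axial positional embeddings to x of shape (h, w, dim),
    for any h <= max_h and w <= max_w, by slicing the max-size tables."""
    h, w, d = x.shape
    assert h <= max_h and w <= max_w and d == dim
    # Broadcast the row embedding over columns and vice versa.
    return x + pos_h[:h, None, :] + pos_w[None, :w, :]

x = np.zeros((16, 24, dim))
out = add_axial_pos(x)
print(out.shape)  # (16, 24, 8)
```

Because only the leading slice of each table is used, the same parameters serve every input size up to the maximum, which is what makes this convenient for multi-scale training in fully convolutional settings.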