
About 3D transformer #7

Closed
vv123-star opened this issue Apr 17, 2021 · 1 comment

Comments

@vv123-star

Hello, I ran the TransBTS code on 3D medical images, and the shape of the generated attention weights is (batch_size, num_heads, n_patches, n_patches). Is this not applicable to 3D images?
Thank you in advance!

@Rubics-Xuan
Owner

Thanks for your question. To be honest, I am not entirely sure what you are asking. After downsampling, we reshape each volume into a sequence of feature vectors rather than patches. I think you might be wondering whether the patch-style approach is applicable to 3D images. You can find more details in our paper.
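For illustration, here is a minimal sketch (not the repository's exact code) of why the attention weights come out as (batch_size, num_heads, n_patches, n_patches) when a downsampled 3D feature map is flattened into tokens. The tensor sizes (C=128, an 8x8x8 feature grid) and the single attention layer are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

B, C, H, W, D = 2, 128, 8, 8, 8           # downsampled 3D feature map (assumed sizes)
feat = torch.randn(B, C, H, W, D)

# Flatten the spatial grid: every voxel of the downsampled volume becomes
# one token (feature vector), instead of cutting the volume into patches.
tokens = feat.flatten(2).transpose(1, 2)  # (B, N, C) with N = H*W*D

num_heads = 8
head_dim = C // num_heads
qkv = nn.Linear(C, 3 * C)
q, k, v = qkv(tokens).chunk(3, dim=-1)    # each (B, N, C)

def split_heads(x):
    # (B, N, C) -> (B, num_heads, N, head_dim)
    return x.view(B, -1, num_heads, head_dim).transpose(1, 2)

q, k, v = map(split_heads, (q, k, v))

# Attention weights: (B, num_heads, N, N) -- the shape asked about,
# where n_patches = N = H*W*D tokens of the 3D volume.
attn = (q @ k.transpose(-2, -1)) * head_dim ** -0.5
attn = attn.softmax(dim=-1)
out = (attn @ v).transpose(1, 2).reshape(B, -1, C)  # back to (B, N, C)

print(attn.shape)  # torch.Size([2, 8, 512, 512]), since N = 8*8*8 = 512
```

So a 4D attention-weight tensor is expected for 3D inputs as well: the two trailing dimensions index the flattened voxel tokens, not 2D patches.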
