
Training my own dataset with axial attention network #9

Closed
ysl2 opened this issue May 6, 2022 · 2 comments

ysl2 commented May 6, 2022

I would like to know which parameters in the axial attention network need to be modified for a custom dataset. Your dataset is (128, 128, 128) while mine is (24, 64, 80), and I don't know which parts of the network have to be changed to handle this shape. Thanks!

ysl2 (Author) commented May 6, 2022

(screenshot attached)

rixez (Owner) commented May 6, 2022

Hi @ysl2,
Sorry for the problem. It seems you have already checked the other issue about the shape problem and none of the options there work for you. Since your dataset is quite different from the BraTS dataset, which leads to a different preprocessing pipeline and network architecture, I recommend manually checking the shape used for the axial attention embedding. You can modify the embedding shape in this part:

emb_shape = (self.volume_shape/(2**d)).astype(np.int16)
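For reference, here is a minimal standalone sketch (not the repository's exact code; volume_shape and the number of levels are assumptions) showing how that line produces the embedding shape at each downsampling depth d for a (24, 64, 80) volume:

import numpy as np

# Hypothetical stand-in for self.volume_shape on the model class:
volume_shape = np.array([24, 64, 80])
num_levels = 4  # assumed number of encoder downsampling levels

for d in range(num_levels):
    # Same arithmetic as the quoted line: halve each spatial axis d times.
    emb_shape = (volume_shape / (2 ** d)).astype(np.int16)
    print(d, emb_shape.tolist())
# d=0 -> [24, 64, 80], d=1 -> [12, 32, 40],
# d=2 -> [6, 16, 20],  d=3 -> [3, 8, 10]

Note that 24 stops dividing evenly after three halvings, so if the network downsamples further (or expects even extents at every level), you may need to pad that axis (for example to 32) or crop, and make sure emb_shape matches the actual feature-map shape at each depth.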

@ysl2 ysl2 closed this as completed May 17, 2022