CUDA out of memory #31
Comments
You can try "tf_efficientnet_b5_ns", and num_features should be 2048 if I remember correctly.
You also need to adapt the skip_input= argument for all the upsampling layers: change the "+ 224" to a different number so that it matches the encoder feature dimension.
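For reference, a decoder upsampling block of the kind being discussed might look roughly like the following. This is a minimal sketch assuming an AdaBins-style UpSampleBN; the class and argument names are assumptions and may not match this repo exactly:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpSampleBN(nn.Module):
    """Upsample the decoder feature, concatenate an encoder skip feature, then conv."""
    def __init__(self, skip_input, output_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(skip_input, output_features, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(output_features),
            nn.LeakyReLU(),
        )

    def forward(self, x, concat_with):
        # Bring x to the skip feature's spatial size, then concatenate on channels.
        up_x = F.interpolate(x, size=concat_with.shape[2:], mode="bilinear", align_corners=True)
        # skip_input must equal x's channel count plus concat_with's channel count.
        return self.net(torch.cat([up_x, concat_with], dim=1))
```

In this reading, the "+ 224" is the channel count of the encoder skip feature concatenated at that stage; it changes when you change the backbone, which is why skip_input has to be recomputed.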
I have tried it, but the error is: RuntimeError: Given groups=1, weight of size [1024, 2272, 3, 3], expected input[1, 2224, 30, 40] to have 2272 channels, but got 2224 channels instead.
How do I work out what to add to skip_input?
From the error message, the input dim is 2224, so you need to change the skip_input value to features + some number so that it matches 2224. With features=2048 when you use the b5, that number should be 2224 - 2048 = 176. You should do the same for the other upsampling layers.
Sorry, I still don't understand. Is there any approach to reduce the CUDA memory usage? I have used 5 GPUs, but it still doesn't work.
You can simply set skip_input=2224 for that layer, as in the sketch below.
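For illustration, the change might look like this. It is a sketch that reuses the hypothetical UpSampleBN block above; the stage name up16 and the output_features expression are assumptions, and only the skip_input=2224 value comes from the discussion:

```python
import torch.nn as nn

class DecoderBN(nn.Module):
    def __init__(self, features=2048):
        super().__init__()
        # First upsampling stage with the b5 encoder:
        # 2048 bottleneck channels + 176 skip channels = 2224.
        self.up16 = UpSampleBN(skip_input=2224, output_features=features // 2)
        # For the later stages, run the model once, read the
        # "expected input [...] to have N channels" part of the error,
        # and set that stage's skip_input to the reported N in the same way.
```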
For the others (up8, up4, ...) you can read the error message and change them accordingly.
Another, simpler approach is to keep the b7 and set the features variable to a smaller number, for example:
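Something along these lines; this is only a sketch, the variable names follow the thread's wording rather than the repo's exact code, and the value 1024 is an arbitrary choice to illustrate shrinking the decoder:

```python
basemodel_name = "tf_efficientnet_b7_ns"  # keep the b7 encoder
num_features = 2560                       # b7 bottleneck channels, unchanged
features = 1024                           # smaller decoder width: less memory, lower capacity
```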
I solved it, but I still get CUDA out of memory. Is there any approach to reduce it?
It also didn't work.
The model uses a full 32 GB GPU, so you need to reduce the size of the network even more to fit into 14 GB. You can also reduce the input image size by half; be careful to scale the 3D -> 2D projection accordingly.
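If you go the half-resolution route, the camera intrinsics used for the 3D -> 2D projection have to be scaled together with the image. A minimal sketch, assuming a standard 3x3 pinhole intrinsics matrix K; the actual projection code in this repo may be organized differently:

```python
import numpy as np

def scale_intrinsics(K: np.ndarray, scale: float = 0.5) -> np.ndarray:
    """Scale fx, fy, cx, cy when the input image is resized by `scale`."""
    K = K.copy()
    K[0, 0] *= scale  # fx
    K[1, 1] *= scale  # fy
    K[0, 2] *= scale  # cx
    K[1, 2] *= scale  # cy
    return K
```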
Thanks. I used B1 and reduced the image_size by half; now it works.
Great! I'm glad to hear that.
Dear author, you said to use a smaller 2D backbone by changing the basemodel_name and num_features.
The pretrained model name is here. You said EfficientNet B5 can reduce the memory; I want to know the B5 weights and the value of num_features.