No grad back-propagated to EMAU.conv1? #14
No bug here. It is due to Line 227 in f7d7b47 (the 'with torch.no_grad():' statement).
Can you provide a model to check? I think that conv1 may not be included in your final model. Without a gradient from the loss, it is pruned by PyTorch.
You can train a model without Line 227 in f7d7b47.
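For illustration, here is a minimal sketch of an EMAU-like block (not the repository's exact code; the class name `SimplifiedEMAU`, the channel/base sizes, and the exact EM update are assumptions). It shows how wrapping the EM iterations in 'with torch.no_grad():' leaves no autograd path from the loss back to conv1:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimplifiedEMAU(nn.Module):
    """Minimal EMAU-like block for illustration only (not the repo's exact code)."""

    def __init__(self, c=64, k=8, stage_num=3):
        super().__init__()
        self.stage_num = stage_num
        self.conv1 = nn.Conv2d(c, c, 1)            # the conv1 discussed in this issue
        self.conv2 = nn.Conv2d(c, c, 1)
        self.register_buffer('mu', F.normalize(torch.randn(1, c, k), dim=1))

    def forward(self, x):
        idn = x
        x = self.conv1(x)                          # conv1 output feeds the EM step
        b, c, h, w = x.size()
        x = x.view(b, c, h * w)                    # b * c * n
        mu = self.mu.repeat(b, 1, 1)               # b * c * k
        with torch.no_grad():                      # <-- the statement in question:
            for _ in range(self.stage_num):        # nothing inside records autograd
                z = torch.bmm(x.permute(0, 2, 1), mu)          # b * n * k (E-step)
                z = F.softmax(z, dim=2)
                z_ = z / (1e-6 + z.sum(dim=1, keepdim=True))
                mu = F.normalize(torch.bmm(x, z_), dim=1)      # b * c * k (M-step)
        # mu and z carry no grad history, so the gradient of the loss cannot
        # flow through the reconstruction below back into conv1.
        x = torch.bmm(mu, z.permute(0, 2, 1)).view(b, c, h, w)
        x = self.conv2(F.relu(x, inplace=True))
        return F.relu(x + idn, inplace=True)
```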
I don't agree with you. Without a gradient from the loss, PyTorch does not update its parameters, but it still works in the forward pass.
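A quick check of this point, assuming the SimplifiedEMAU sketch above: the forward pass runs as usual, but conv1 receives no gradient from the loss, so the optimizer never updates it.

```python
import torch

# Assumes the SimplifiedEMAU sketch from the comment above.
m = SimplifiedEMAU(c=64, k=8, stage_num=3)
out = m(torch.randn(2, 64, 16, 16))      # forward pass works normally
out.mean().backward()
print(m.conv1.weight.grad)               # None: no gradient reaches conv1
print(m.conv2.weight.grad is not None)   # True: conv2 is still trained
```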
But I think that if the parameters are not updated, conv1 makes no sense. Will it work properly using only the initial parameters of conv1? By the way, have you tried testing the performance of the model after removing conv1?
Yes, I have. With the 'no_grad' setting, the only function of conv1 is to map the distribution of the input feature maps from R^+ to R.
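A tiny illustration of that mapping (hypothetical shapes and channel counts): the feature maps entering the EMAU are non-negative, e.g. after a ReLU, and a 1x1 conv with no activation behind it can output values on the whole real line.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

conv1 = nn.Conv2d(64, 64, 1)                # stand-in for EMAU.conv1
feat = F.relu(torch.randn(2, 64, 16, 16))   # input feature maps live in R^+
mapped = conv1(feat)                        # conv1 output lives in R
print(feat.min().item())                    # >= 0
print(mapped.min().item())                  # typically < 0
```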
@XiaLiPKU Thanks for your quick reply. So the performance is a bit worse? Can you provide the concrete value?
I forget the concrete value here, but from memory, deleting the 'with torch.no_grad():' will decrease mIoU by around 0.5.
@XiaLiPKU Thanks for your detailed explanation. Great work!
Excuse me, I cannot find the gradient flowing back to conv1. Is there a bug?