Sign error in CNN Layer Visualization #16
Holy! I was sure I fixed that! I will look into it asap!
There is another problem that I have with version 0.4.0. With model.eval(), my model doesn't learn anything and stays the same throughout all the epochs, for both the CNN visualization and generating class-specific samples. Even if I change it to train() and add another sample to make sure it works, the input doesn't change either.
@herandy I think you are confusing concepts. model.eval() switches the model into prediction mode (the weights are not updated during visualization). To train you need to call model.train(). If you have dropout or batchnorm (or any custom layer that behaves that way) in the model and get visualizations while the model is in train mode, you will get a new visualization every time. For some of the discussion see:
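A minimal sketch (illustrative, not code from this repo) of why train mode makes visualizations non-deterministic: dropout resamples its mask on every forward pass in train mode, while eval mode disables it.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Tiny stand-in model with a dropout layer
model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))
x = torch.ones(1, 4)

model.train()
out_a = model(x)
out_b = model(x)  # dropout resamples its mask, so these typically differ

model.eval()
out_c = model(x)
out_d = model(x)  # dropout is disabled, so these are identical

print(torch.equal(out_c, out_d))  # True in eval mode
```

The same reasoning applies to batchnorm, which uses batch statistics in train mode and running statistics in eval mode.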
Then what is the purpose of having 150 epochs in the generate class specific samples?
It is to optimize the image, not the model. See what is being optimized:
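A hypothetical sketch (names are illustrative, not the repo's code) of what those 150 epochs do: the optimizer is given the *input image*, not the model parameters, so gradient steps change the pixels while the weights stay fixed.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Frozen stand-in model: its weights are never optimized
model = nn.Sequential(nn.Conv2d(1, 2, 3), nn.ReLU())
for p in model.parameters():
    p.requires_grad_(False)
weights_before = model[0].weight.clone()

# The image itself is the parameter being optimized
image = torch.randn(1, 1, 8, 8, requires_grad=True)
optimizer = torch.optim.SGD([image], lr=0.1)

for step in range(150):  # the "150 epochs" update only the image
    optimizer.zero_grad()
    activation = model(image)
    loss = -activation.mean()  # minimizing -mean maximizes activation
    loss.backward()
    optimizer.step()

# The model is unchanged; only the image moved
print(torch.equal(weights_before, model[0].weight))  # True
```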
Thank you.
Fixed the error, thanks again for notifying! |
Hi @utkuozbulak, I don't understand the method you use to visualize convolutional neural network filters.
So you mean the network minimizes the negative mean of the output?
The loss function aims to maximize the activation of a specific filter with loss = -torch.mean(self.conv_output), and updates the input image so that the average activation of the specified conv output is (hopefully) higher at each step.
Yes, I know that. So at the end, the input image is changed into an image that maximizes the activation of a specific filter?
Yes. |
There is only a very minor sign error in the code. In the hook version you correctly minimize the negative mean, hence, maximizing the mean.
In the unhooked version, however, the minus sign is missing, computing the input that activates the feature the least (line 102):
loss = torch.mean(self.conv_output)
Additionally, the code has to be modified to run on the latest PyTorch 0.4, where a zero-dimensional tensor can no longer be indexed (lines 101 and 63).
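A runnable sketch of both fixes (conv_output here is a stand-in tensor, not the repo's self.conv_output):

```python
import torch

torch.manual_seed(0)
conv_output = torch.randn(2, 3)  # stand-in for self.conv_output

# Sign fix: minimizing the *negative* mean maximizes the activation
loss = -torch.mean(conv_output)

# PyTorch >= 0.4 fix: a zero-dimensional tensor can no longer be
# indexed with loss.data[0]; use loss.item() instead
print('Loss value:', loss.item())
```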