
Sign error in CNN Layer Visualization #16

Closed
McLawrence opened this issue Jun 22, 2018 · 11 comments

@McLawrence

There is a very minor sign error in the code. In the hook version you correctly minimize the negative mean, hence maximizing the mean.

In the unhooked version, however, the minus sign is missing, computing the input that activates the feature the least (line 102):

loss = torch.mean(self.conv_output)

Additionally, the code has to be modified to run on the latest PyTorch, 0.4, where zero-dimensional tensors can no longer be indexed (lines 101 and 63).
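The effect of the missing minus sign can be sketched with a toy, framework-free stand-in for the mean activation (the function and names here are illustrative, not the repository's code): gradient descent on the positive mean drives the activation down, while descent on the negative mean drives it up.

```python
# Toy stand-in for a filter's mean activation, peaking at x = 3.
def mean_act(x):
    return -(x - 3.0) ** 2

def optimize(sign, steps=200, lr=0.1):
    # Plain gradient descent on loss = sign * mean_act(x), starting at x = 1.
    x = 1.0
    for _ in range(steps):
        grad = sign * (-2.0) * (x - 3.0)  # d/dx of sign * mean_act(x)
        x -= lr * grad
    return mean_act(x)

# With the missing minus (loss = +mean) the activation is driven DOWN;
# with loss = -mean, as in the hook version, it is driven UP to the maximum.
assert optimize(sign=-1.0) > optimize(sign=+1.0)
```

The PyTorch 0.4 issue is separate: a zero-dimensional tensor such as a scalar loss is read with loss.item() rather than the old loss.data[0] indexing.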

@utkuozbulak
Owner

Holy! I was sure I fixed that! I will look into it asap!

@herandy

herandy commented Jul 17, 2018

There is another problem I have with version 0.4.0. With model.eval() set, my model doesn't learn anything and stays the same throughout all the epochs, both for CNN layer visualization and for generating class-specific samples. Even if I change that to train() and add another sample to make sure it works, the input doesn't change either.

@utkuozbulak
Owner

@herandy I think you are confusing concepts. model.eval() switches the model into prediction mode (layers like dropout and batchnorm stop behaving stochastically); it does not train anything. To train you need to call model.train().

If you have dropout or batchnorm (or any custom layer that behaves that way) in the model and generate visualizations while the model is in train mode, you will get a new visualization every time.

For some of the discussion see:
https://stackoverflow.com/questions/48146926/whats-the-meaning-of-function-eval-in-torch-nn-module
pytorch/pytorch#5406
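The train/eval difference can be illustrated with a toy dropout layer (a minimal sketch, not nn.Dropout itself): in train mode the output is random, while in eval mode the layer is a deterministic pass-through, which is why visualizations are only repeatable under model.eval().

```python
import random

class ToyDropout:
    """Minimal stand-in for dropout: random in train mode, identity in eval."""
    def __init__(self, p=0.5):
        self.p = p
        self.training = True

    def eval(self):
        self.training = False
        return self

    def train(self):
        self.training = True
        return self

    def __call__(self, xs):
        if not self.training:
            return list(xs)  # eval mode: deterministic pass-through
        scale = 1.0 / (1.0 - self.p)
        return [0.0 if random.random() < self.p else v * scale for v in xs]

layer = ToyDropout().eval()
x = [1.0, 2.0, 3.0]
assert layer(x) == x  # eval mode gives the same output on every call
```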

@herandy

herandy commented Jul 19, 2018

Then what is the purpose of having 150 epochs in the generate class specific samples?

@utkuozbulak
Owner

It is to optimize the image, not the model. See what is being optimized:

optimizer = Adam([self.processed_image], lr=0.1, weight_decay=1e-6)
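The same idea in a framework-free toy (illustrative names, not the repository's code): only the "image" variable is updated across the epochs, while a frozen "weight" plays the role of the trained model.

```python
# Toy: a frozen model "weight" w, and an input x standing in for
# self.processed_image. Only x is updated, mirroring
# Adam([self.processed_image], ...), which lists just the image as a parameter.
w = 2.0        # frozen model weight -- never touched by the updates
x = 0.0        # the "image" being optimized
target = 10.0  # stand-in for the activation we want w * x to reach
lr = 0.05

for _ in range(300):
    pred = w * x
    grad_x = 2.0 * (pred - target) * w  # d/dx of (w * x - target) ** 2
    x -= lr * grad_x                    # update the input, not the weight

assert abs(w * x - target) < 1e-3  # the image converged...
assert w == 2.0                    # ...while the model never changed
```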

@herandy

herandy commented Jul 19, 2018

Thank you.

@utkuozbulak
Owner

Fixed the error, thanks again for notifying!

@roachsinai

roachsinai commented Nov 7, 2018

Hi @utkuozbulak, I don't understand the method you use to visualize convolutional neural network filters:

CNN filters can be visualized when we optimize the input image with respect to output of the specific convolution operation.

So you mean that, as the network minimizes the negative mean of the output, self.created_image = recreate_image(self.processed_image) is the texture of the CNN filter? And is the reason that the image texture which best matches the filter gets the biggest activation from that filter?

@utkuozbulak
Owner

utkuozbulak commented Nov 7, 2018

The loss function aims to maximize the activation of a specific filter with

loss = -torch.mean(self.conv_output)

and updates the input image so that the average activation of the specified conv output is (hopefully) higher at each step.
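The step-by-step effect can be sketched with a toy concave "mean activation" (hypothetical and framework-free, not the repository's code): each update on the negated activation raises the activation itself.

```python
# Toy gradient-ascent loop mirroring loss = -torch.mean(self.conv_output):
# every step should raise the (stand-in) mean activation.
def mean_activation(x):
    return -(x - 5.0) ** 2  # peak activation at x = 5

x, lr = 0.0, 0.1
history = []
for _ in range(50):
    history.append(mean_activation(x))
    grad_loss = 2.0 * (x - 5.0)  # derivative of -mean_activation(x)
    x -= lr * grad_loss          # minimizing the negative == ascending

# The activation grows monotonically toward its maximum.
assert all(a < b for a, b in zip(history, history[1:]))
```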

@roachsinai

Yes, I know that. So at the end, the input image has changed into an image that maximizes the activation of a specific filter?

@utkuozbulak
Owner

Yes.
