
register_backward_hook #51

Closed
maryjis opened this issue Aug 15, 2019 · 4 comments

Comments

maryjis commented Aug 15, 2019

Good day! I am fine-tuning a ResNeXt-101 model loaded from a .pth file. GradCam works well with my model, but when I try to use GuidedBackprop I get this error:

-> 80 gradients_as_arr = self.gradients.data.numpy()[0]
81 return gradients_as_arr
82

AttributeError: 'NoneType' object has no attribute 'data'

The error occurs because hook_function is never called.

def hook_layers(self):
    def hook_function(module, grad_in, grad_out):
        print("Vaxx")
        self.gradients = grad_in[0]

Could you help me understand why it is not called?

utkuozbulak (Owner) commented

The hook is probably not getting registered in the first place. Check the next two lines, which register the hook on the first layer:

first_layer = list(self.model.features._modules.items())[0][1]
first_layer.register_backward_hook(hook_function)
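
If you want a quick sanity check that the hook fires at all, something like this standalone sketch should do it (resnet18, the dummy input and the fired list are just illustrative stand-ins, not part of this repo):

import torch
import torchvision.models as models

# Standalone sanity check: register a backward hook on the first submodule of a
# stock model and confirm it is called during backward().
model = models.resnet18()
model.eval()

fired = []

def hook_function(module, grad_in, grad_out):
    fired.append(type(module).__name__)

# Grab the first submodule and hook it (the repo does the same via model.features).
# Newer PyTorch versions prefer register_full_backward_hook for this.
first_layer = list(model._modules.items())[0][1]
first_layer.register_backward_hook(hook_function)

x = torch.randn(1, 3, 224, 224, requires_grad=True)
out = model(x)
out[0, 0].backward()

print("hook fired on:", fired)  # expected: ['Conv2d']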

maryjis commented Aug 16, 2019

I think they are registered. Here is my GuidedBackprop class:

class GuidedBackprop():
    def __init__(self, model):
        self.model = model
        self.gradients = None
        self.forward_relu_outputs = []
        # Put model in evaluation mode
        self.model.eval()
        self.update_relus()
        self.hook_layers()

    def hook_layers(self):
        def hook_function(module, grad_in, grad_out):
            print("Vaxx")
            self.gradients = grad_in[0]
        # Register hook to the first layer
        first_layer = list(self.model.features._modules.items())[0][1]
        first_layer.register_backward_hook(hook_function)

    def generate_gradients(self, input_image, target_class):
        # Forward pass
        model_output = self.model(input_image)
        # Zero gradients
        self.model.zero_grad()
        # Target for backprop
        one_hot_output = torch.FloatTensor(1, model_output.size()[-1]).zero_()
        one_hot_output[0][target_class] = 1
        # Backward pass
        print(one_hot_output)
        model_output.backward(gradient=one_hot_output)
        # Convert Pytorch variable to numpy array
        # [0] to get rid of the first channel (1,3,224,224)
        gradients_as_arr = self.gradients.data.numpy()[0]
        return gradients_as_arr

Here is the function where I use GuidedBackprop:

    def guided_grad_cam(self):
        batch = next(iter(self.test_loader))
        img_paths, img_tensors, labels = batch
        img_path = img_paths[0]
        print(img_tensors.shape)
        prep_img = img_tensors[0].unsqueeze(0)

        # Grad cam
        gcv2 = GradCam(self.model, target_layer=7)
        # Generate cam mask
        cam = gcv2.generate_cam(prep_img, labels[0].data.numpy())
        print('Grad cam completed')

        # Guided backprop
        GBP = GuidedBackprop(self.model)
        # Get gradients
        print(labels[0].data.numpy())
        guided_grads = GBP.generate_gradients(prep_img, labels[0].data.numpy())
        print('Guided backpropagation completed')

        # Guided Grad cam
        cam_gb = guided_grad_cam(cam, guided_grads)

        file_name_to_export = os.path.join("cnn_visual", self.name)

        save_gradient_images(cam_gb, file_name_to_export + '_GGrad_Cam')
        grayscale_cam_gb = convert_to_grayscale(cam_gb)
        save_gradient_images(grayscale_cam_gb, file_name_to_export + '_GGrad_Cam_gray')

maryjis commented Aug 19, 2019

I have solved this problem by unfreezing all layers in the model.

for name, param in self.resnet.named_parameters():
    param.requires_grad = True

utkuozbulak (Owner) commented

If requires_grad is False, the backward hook never fires, because no gradient is computed for that layer. Glad you were able to solve it.
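
For anyone who lands here later, a minimal standalone sketch of the situation and the fix; resnet18 and the frozen layer are only stand-ins for the fine-tuned ResNeXt:

import torchvision.models as models

model = models.resnet18()

# Simulate a fine-tuning setup where the first layer was frozen.
for p in model.conv1.parameters():
    p.requires_grad = False

# With the first layer frozen (and an input that does not require grad),
# no gradient is computed for it, so a backward hook on it never runs.
frozen = [name for name, p in model.named_parameters() if not p.requires_grad]
print("frozen parameters:", frozen)  # e.g. ['conv1.weight']

# The fix from this thread: unfreeze everything before running GuidedBackprop.
for p in model.parameters():
    p.requires_grad = True

assert all(p.requires_grad for p in model.parameters())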
