
Class autograd.Function in module cause autoreload fail in jupyter lab #12553

Open

cnjackhu opened this issue Sep 14, 2020 · 1 comment
cnjackhu commented Sep 14, 2020

I have a `foo.py` file and a Jupyter Lab notebook that imports the class `CNN` from it. In `foo.py`, I first define the class `CNN` and then the class `fun`:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# thresh and lens are used below but were not shown in the original post;
# assumed here to be module-level constants.
thresh = 0.5
lens = 0.5


class CNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, 1, 1)
        print('pytorch')
        print('ok')


class fun(torch.autograd.Function):

    @staticmethod
    def forward(ctx, input):
        ctx.save_for_backward(input)
        return input.gt(thresh).float()

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_tensors
        grad_input = grad_output.clone()
        # Surrogate gradient: only pass the gradient through near the threshold.
        temp = abs(input - thresh) < lens
        return grad_input * temp.float()
```

In Jupyter Lab, I enable autoreload:

```
%load_ext autoreload
%autoreload 2
```

and the autoreload magic works. However, if I swap the order of the class definitions in `foo.py`, i.e. define class `fun` first and class `CNN` second, autoreload stops working in my notebook:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Same assumed constants as above (not shown in the original post).
thresh = 0.5
lens = 0.5


class fun(torch.autograd.Function):

    @staticmethod
    def forward(ctx, input):
        ctx.save_for_backward(input)
        return input.gt(thresh).float()

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_tensors
        grad_input = grad_output.clone()
        temp = abs(input - thresh) < lens
        return grad_input * temp.float()


class CNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 2, 3, 1, 1)
        print('pytorch')
        print('ok')
```

In this case, if I modify `__init__` in class `CNN`, autoreload does not pick up the change.
Can anyone explain why?
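
For reference, the notebook side of the reproduction looks roughly like this (a sketch of the steps described above, with `foo.py` being the second variant):

```python
# Notebook cells (sketch of the reproduction described above).
%load_ext autoreload
%autoreload 2

from foo import CNN   # foo.py with class fun defined before class CNN
model = CNN()         # prints 'pytorch' / 'ok'

# ... now edit CNN.__init__ in foo.py (e.g. change one of the prints) ...

model = CNN()         # expected: new output; observed: the edit is ignored
```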

I also posted this on the PyTorch forum:
https://discuss.pytorch.org/t/class-autograd-function-in-module-cause-autoreload-fail-in-jupyter-lab/96250

Carreau (Member) commented Sep 17, 2020

(Auto)reload is really tricky; in general, reloading code is not possible, and autoreload is just a bunch of hacks.
This is likely an edge case of the autoreload code. It is odd, as those two classes don't depend on each other, but I'm not surprised it doesn't work well either. Classes in particular are hard to reload, since we have to find all instances of the previous definition and update them, and that can easily corrupt the interpreter's internal state.
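
For context, here is a simplified sketch of the mechanism described above. This is not IPython's actual code (the real `superreload` machinery in `IPython/extensions/autoreload.py` handles many more cases); it only illustrates the idea of patching the old class object in place after the module is re-executed, so that existing instances see the new definitions:

```python
# Simplified sketch of class patching during autoreload (assumed behaviour,
# not IPython's real implementation). After the module is re-executed,
# `old` is the class object existing instances still point to and `new`
# is the freshly defined one.
def update_class(old, new):
    for key, value in new.__dict__.items():
        try:
            # Rebind methods/attributes on the old class so existing
            # instances pick up the edited behaviour.
            setattr(old, key, value)
        except (AttributeError, TypeError):
            # Some attributes cannot be reassigned, e.g. descriptors or
            # attributes of classes backed by C-level machinery (such as
            # torch.autograd.Function subclasses); a failure here is one
            # way a reload can silently stop propagating changes.
            pass
```

If patching fails for one object in the module, later changes may not propagate to the others, which could be why the definition order matters in this report.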
