
Error on running on my Jupyter notebook #76

Open · salimcodes opened this issue Jun 27, 2022 · 7 comments

Comments

@salimcodes

When I tried to run the example notebook, it raised an error, specifically on the x_stats = dec(z).float() line.

@Arkitu

Arkitu commented Jun 28, 2022

I have the same error: 'Upsample' object has no attribute 'recompute_scale_factor'

@duanjiding

me too

@hejonathan

me too

@alosdiallo

I am getting something similar, but with warnings first:
:28: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
img = TF.resize(img, s, interpolation=PIL.Image.LANCZOS)
/Users/adiallo/opt/miniconda3/lib/python3.8/site-packages/torchvision/transforms/functional.py:417: UserWarning: Argument 'interpolation' of type int is deprecated since 0.13 and will be removed in 0.15. Please use InterpolationMode enum.
warnings.warn(

Then I get: AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'
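
As an aside, both warnings point to the same deprecated interpolation constants. A minimal sketch of the enum-based call that avoids them, assuming torchvision >= 0.13 (the img and s stand-ins below are illustrative, not from the notebook):

# Minimal sketch, assuming torchvision >= 0.13: pass the InterpolationMode
# enum instead of the deprecated PIL integer constant.
import PIL.Image
import torchvision.transforms.functional as TF
from torchvision.transforms import InterpolationMode

img = PIL.Image.new('RGB', (512, 512))  # stand-in for the notebook's image
s = 256                                  # stand-in for the target size
img = TF.resize(img, s, interpolation=InterpolationMode.LANCZOS)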

@wzhao6898

Error message I got:
AttributeError Traceback (most recent call last)
Input In [15], in <cell line: 7>()
4 z = torch.argmax(z_logits, axis=1)
5 z = F.one_hot(z, num_classes=enc.vocab_size).permute(0, 3, 1, 2).float()
----> 7 x_stats = dec(z).float()
8 x_rec = unmap_pixels(torch.sigmoid(x_stats[:, :3]))
9 x_rec = T.ToPILImage(mode='RGB')(x_rec[0])

File ~\Anaconda3\lib\site-packages\torch\nn\modules\module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []

File ~\Anaconda3\lib\site-packages\dall_e\decoder.py:94, in Decoder.forward(self, x)
91 if x.dtype != torch.float32:
92 raise ValueError('input must have dtype torch.float32')
---> 94 return self.blocks(x)

File ~\Anaconda3\lib\site-packages\torch\nn\modules\module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []

File ~\Anaconda3\lib\site-packages\torch\nn\modules\container.py:139, in Sequential.forward(self, input)
137 def forward(self, input):
138 for module in self:
--> 139 input = module(input)
140 return input

File ~\Anaconda3\lib\site-packages\torch\nn\modules\module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []

File ~\Anaconda3\lib\site-packages\torch\nn\modules\container.py:139, in Sequential.forward(self, input)
137 def forward(self, input):
138 for module in self:
--> 139 input = module(input)
140 return input

File ~\Anaconda3\lib\site-packages\torch\nn\modules\module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []

File ~\Anaconda3\lib\site-packages\torch\nn\modules\upsampling.py:154, in Upsample.forward(self, input)
152 def forward(self, input: Tensor) -> Tensor:
153 return F.interpolate(input, self.size, self.scale_factor, self.mode, self.align_corners,
--> 154 recompute_scale_factor=self.recompute_scale_factor)

File ~\Anaconda3\lib\site-packages\torch\nn\modules\module.py:1207, in Module.__getattr__(self, name)
1205 if name in modules:
1206     return modules[name]
-> 1207 raise AttributeError("'{}' object has no attribute '{}'".format(
1208     type(self).__name__, name))

AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'

@Naphier

Naphier commented Aug 19, 2022

ultralytics/yolov5#6948 (comment)

Find E:\condaaa\Lib\site-packages\torch\nn\modules\upsampling.py and change Upsample.forward to this:

def forward(self, input: Tensor) -> Tensor:
    return F.interpolate(input, self.size, self.scale_factor, self.mode, self.align_corners,
                         #recompute_scale_factor=self.recompute_scale_factor
                         )

Commenting out this optional parameter was a quick fix for me, but I think pinning the versions like this:

pip install torchvision==0.10.1
pip install torch==1.9.1

is what's really appropriate. I'll make a PR with the versions pinned after testing.
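
A less invasive alternative (a sketch, not from the thread): apply the same change at runtime by monkey-patching Upsample.forward, so nothing under site-packages is edited and the fix travels with the notebook:

# Sketch: override Upsample.forward to omit recompute_scale_factor, which
# Upsample modules pickled under torch <= 1.10 do not carry.
import torch
import torch.nn.functional as F

def _upsample_forward(self, input):
    return F.interpolate(input, self.size, self.scale_factor, self.mode, self.align_corners)

torch.nn.Upsample.forward = _upsample_forward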

@digitalShaman

The error
'Upsample' object has no attribute 'recompute_scale_factor'

is related to a change in the torch Upsample class from 1.10 to 1.11.

It appears that 'old' Upsample objects are saved within the model after this line of code:
model = load_model("https://cdn.openai.com/dall-e/decoder.pkl", 'cuda')
I used the following code immediately after the load_model call to patch this:

# Patch for torch 1.11 and higher: replace each old Upsample object with a
# new instance that exposes recompute_scale_factor
for group in [model.blocks.group_1, model.blocks.group_2, model.blocks.group_3]:
    old = group.upsample
    group.upsample = torch.nn.Upsample(scale_factor=old.scale_factor, mode=old.mode)

and it's running fine with torch 1.12.1!
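
An even smaller variant (my sketch, not tested against every checkpoint): instead of rebuilding the modules, walk the loaded model and give each old Upsample the attribute it is missing; F.interpolate treats recompute_scale_factor=None as the default, so behavior is unchanged:

# Sketch: set the missing attribute to None on every pickled Upsample module;
# None is F.interpolate's default for recompute_scale_factor.
for module in model.modules():
    if isinstance(module, torch.nn.Upsample) and not hasattr(module, 'recompute_scale_factor'):
        module.recompute_scale_factor = None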
