Problem with inference UFONE class #2

Open
Djdefrag opened this issue Oct 25, 2023 · 8 comments

@Djdefrag

Hi, thank you for the wonderful work :)

Trying to run inference on some images: some succeed, while others raise an error in the `UFONE` class's `forward` function. The error is:

```
local_features = x.view(B, C, H//self.patch_size, self.patch_size, W//self.patch_size, self.patch_size)
RuntimeError: shape '[1, 60, 52, 8, 31, 8]' is invalid for input of size 6380100
```

In particular, it seems to happen with images whose dimensions are not multiples of patch_size = 8.
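For context, the numbers in the traceback are consistent with an input whose spatial size is not a multiple of 8. A minimal arithmetic sketch (H=417 and W=255 are inferred from the reported numbers, so treat them as an assumption):

```python
# Sizes inferred from the traceback (an assumption, not taken from the code):
# 6380100 / 60 = 106335 = 417 * 255, and 417 // 8 = 52, 255 // 8 = 31.
B, C, H, W, patch = 1, 60, 417, 255, 8

numel = B * C * H * W                                          # elements actually in the tensor
target = B * C * (H // patch) * patch * (W // patch) * patch   # elements view() asks for

print(numel)   # 6380100
print(target)  # 6190080 -> mismatch, so view() raises RuntimeError
```

Because `H // patch` and `W // patch` truncate, the requested shape only covers the whole tensor when both dimensions are exact multiples of the patch size.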

Thank you :)

@yongliuy
Owner

Hi, we use the function `check_image_size` in the DITN class to ensure that the image size matches the model. Please check whether it is used in your code.

@Djdefrag
Author

Djdefrag commented Oct 28, 2023

Hi, yes it is used, but the error is still present.

```python
from math import gcd as math_gcd  # needed for the LCM computation below
from torch.nn.functional import pad as torch_functional_pad

def check_image_size(self, x):
    _, _, h, w = x.size()
    # wsize = least common multiple of all patch sizes
    wsize = self.patch_sizes[0]
    for i in range(1, len(self.patch_sizes)):
        wsize = wsize * self.patch_sizes[i] // math_gcd(wsize, self.patch_sizes[i])
    # pad height and width up to the next multiple of wsize
    mod_pad_h = (wsize - h % wsize) % wsize
    mod_pad_w = (wsize - w % wsize) % wsize
    x = torch_functional_pad(x, (0, mod_pad_w, 0, mod_pad_h), 'reflect')
    return x

def forward(self, input_image):
    _, _, old_h, old_w = input_image.shape
    input_image = self.check_image_size(input_image)
    sft = self.sft(input_image)

    local_features = self.UFONE(sft)

    local_features = self.conv_after_body(local_features)
    out_dec_level1 = self.upsample(local_features + sft)

    # crop the padding back off at the upscaled resolution
    return out_dec_level1[:, :, 0:old_h * self.scale, 0:old_w * self.scale]
```
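As a sanity check on the padding math in `check_image_size`, here is the same LCM computation as a standalone plain-Python sketch (the patch sizes `[8, 8]` are an assumption; the real values come from the model config):

```python
from math import gcd

def padded_size(h, w, patch_sizes):
    # Least common multiple of all patch sizes, mirroring check_image_size.
    wsize = patch_sizes[0]
    for p in patch_sizes[1:]:
        wsize = wsize * p // gcd(wsize, p)
    # Round height and width up to the next multiple of wsize.
    return h + (wsize - h % wsize) % wsize, w + (wsize - w % wsize) % wsize

print(padded_size(571, 447, [8, 8]))  # (576, 448) -- both multiples of 8
```

With patch sizes of 8, a 571x447 image should therefore come out of the padding step as 576x448.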

@yongliuy
Owner

What is the size of your input image? Or how should I reproduce your problem?

@Djdefrag
Author

Hi, for example this image
width: 447
height: 571

(attached screenshot: Immagine 2023-10-29 080610)

@yongliuy
Owner

Hi, I can successfully run the program using the picture you provided and cannot reproduce the problem you describe.
Here are the details:

```python
class DITN_Real(nn.Module):
    def forward(self, inp_img):
        _, _, old_h, old_w = inp_img.shape
        ## torch.Size([1, 3, 571, 447])

        inp_img = self.check_image_size(inp_img)
        ## torch.Size([1, 3, 576, 448])

class UFONE(nn.Module):
    def forward(self, x):
        B, C, H, W = x.data.shape
        ## torch.Size([1, 60, 576, 448])

        local_features = x.view(B, C, H//self.patch_size, self.patch_size, W//self.patch_size, self.patch_size)
        ## torch.Size([1, 60, 72, 8, 56, 8])
```

I suggest you re-download the code from our repository and run it again to make sure your code has not been modified incorrectly~

@Djdefrag
Author

Thank you, I will try :)

@Djdefrag
Author

I also want to add that I am using torch 1.13.1.

@Djdefrag
Author

Djdefrag commented Oct 29, 2023

For the specified image, using your code without any changes:

```
Image pre check_image_size():  torch.Size([1, 3, 571, 447])
Image post check_image_size(): torch.Size([1, 3, 572, 452])
```
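Assuming the UFONE patch size is 8 (an assumption based on the traceback earlier in the thread), this padded size looks suspicious: 572 and 452 are multiples of 4 but not of 8, whereas the 576x448 reported by the owner is divisible by 8. A quick check:

```python
patch = 8  # assumed UFONE patch size
for h, w in [(572, 452), (576, 448)]:
    ok = h % patch == 0 and w % patch == 0
    print(h, w, "divisible by 8" if ok else "NOT divisible by 8")
# 572 452 NOT divisible by 8
# 576 448 divisible by 8
```

That would suggest the two runs are padding to different multiples, e.g. because `self.patch_sizes` differs between the two setups.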
