
Issues about why it always skips the images that I upload #3

Closed · HengyiWang opened this issue Sep 19, 2020 · 7 comments
Labels: good first issue (Good for newcomers)


HengyiWang commented Sep 19, 2020

Hi, recently I tried your Colab demo to restore some of my old images, but I ran into an issue. When I follow the steps and run the code, it always skips my uploaded images, as the following output shows:

Running Stage 1: Overall restoration
initializing the dataloader
model weights loaded
directory of testing image: /content/photo_restoration/test_images/upload
processing testScratch.png
You are using NL + Res
Now you are processing testScratch.png
Skip testScratch.png
Finish Stage 1 ...

Running Stage 2: Face Detection
Finish Stage 2 ...

Running Stage 3: Face Enhancement
The main GPU is
0
dataset [FaceTestDataset] of size 0 was created
The size of the latent vector size is [8,8]
Network [SPADEGenerator] was created. Total number of parameters: 92.1 million. To see the architecture, do print(network).
hi :)
Finish Stage 3 ...

Running Stage 4: Blending
Finish Stage 4 ...

All the processing is done. Please check the results.

Therefore, I'd like to know whether there are any requirements the uploaded images have to satisfy, or whether I made a mistake somewhere. Thanks a lot.

raywzy (Collaborator) commented Sep 19, 2020

Hi,
I think the problem arises from the large input size. The partial non-local block is a memory-consuming operation. One solution is to scale down your input in advance, keeping the original aspect ratio. We also implement this option in the code, but it will cost you some time to slightly modify run.py. Hence, just try to scale down the input and see if it works :)
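
For reference, a quick way to do that pre-scaling in Python with Pillow before running the demo (the 1200-pixel cap below is just a value picked for illustration, not a documented limit of the repository):

from PIL import Image

# Illustrative cap on the longer side; pick a value your GPU can handle.
MAX_SIDE = 1200

img = Image.open("test_images/upload/testScratch.png")
img.thumbnail((MAX_SIDE, MAX_SIDE))  # shrinks in place, keeps the aspect ratio, never enlarges
img.save("test_images/upload/testScratch.png")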

@HengyiWang (Author)

Thanks a lot, it works well after I reduced the input size!


loretoparisi commented Sep 19, 2020

@HengyiWang what is the maximum image size in bytes that can be handled?
I have noticed that using the option --with_scratch will also skip images of less than 1 MB:

EMILIA 55.png(image/png) - 936193 bytes, last modified: 19/9/2020 - 100% done
Saving EMILIA 55.png to EMILIA 55.png

while without the option it works for that size:

Running Stage 1: Overall restoration
initializing the dataloader
model weights loaded
directory of testing image: /content/photo_restoration/test_images/upload
processing EMILIA 55.png
You are using NL + Res
Now you are processing EMILIA 55.png
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:3121: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  "See the documentation of nn.Upsample for details.".format(mode))
Skip EMILIA 55.png
Finish Stage 1 ...


Running Stage 2: Face Detection
Finish Stage 2 ...


Running Stage 3: Face Enhancement
The main GPU is 
0
dataset [FaceTestDataset] of size 0 was created
The size of the latent vector size is [8,8]
Network [SPADEGenerator] was created. Total number of parameters: 92.1 million. To see the architecture, do print(network).
hi :)
Finish Stage 3 ...


Running Stage 4: Blending
Finish Stage 4 ...


All the processing is done. Please check the results.

while if the size is < 500KB it works:

EMILIA 55.png(image/png) - 462560 bytes, last modified: 19/9/2020 - 100% done
Saving EMILIA 55.png to EMILIA 55.png

@elfsmelf

I need to test it more. It might just be a file-size thing, but I found that it works as long as one of the dimensions is either 256 or 512 (divisible by 256), e.g. 512 px wide by 683 px high. 1024 seems to be too high a resolution for Colab.

@HengyiWang (Author)

I reduced my images to 288 × 368 and 480 × 656 (less than 200 KB) for --with_scratch, and it works well. By the way, I do not think the file size matters; I think the key is to reduce the width and height, because the images become matrices in the code. So just try reducing the total number of pixels and see if it works :)
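
If it helps, here is a rough sketch of the "reduce the total pixels" idea (the 500,000-pixel budget is just a number picked for illustration, not something from the repository):

import math
from PIL import Image

MAX_PIXELS = 500_000  # illustrative budget; tune it for the GPU Colab gives you

def cap_total_pixels(path, max_pixels=MAX_PIXELS):
    # Shrink the image so that width * height stays under the budget,
    # keeping the original aspect ratio.
    img = Image.open(path)
    w, h = img.size
    if w * h > max_pixels:
        scale = math.sqrt(max_pixels / (w * h))
        img = img.resize((max(1, int(w * scale)), max(1, int(h * scale))), Image.LANCZOS)
        img.save(path)

cap_total_pixels("test_images/upload/testScratch.png")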

zhangmozhe added the good first issue label on Sep 28, 2020

witzatom commented Sep 28, 2020

I added a log to the "Skip *.jpg" output, and the reason the model fails is that it's trying to allocate more memory than is available in Colab. For example:

CUDA out of memory. Tried to allocate 168.00 MiB (GPU 0; 7.43 GiB total capacity; 6.42 GiB already allocated; 88.94 MiB free; 6.75 GiB reserved in total by PyTorch)

So from that I'm pretty sure the issue is not the file size but rather the number of pixels in the image. Since JPEG is compressed, the input file size can be deceiving, and the image will most likely be upscaled to some power of 2. Based on trial and error I was, for example, able to process 1127x742 but not 1278x842 (without scratch; scratch seems to increase the memory requirements significantly).
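
For anyone who wants the same visibility, this is roughly the shape of the logging I added (an illustrative helper under my own naming, not the actual code in run.py):

import traceback
import torch

def run_with_oom_report(fn, *args, **kwargs):
    # Call a restoration step and, instead of skipping silently, print why it failed.
    # Illustrative only; variable and function names differ in the repository.
    try:
        with torch.no_grad():
            return fn(*args, **kwargs)
    except RuntimeError:
        # A CUDA OOM surfaces as a RuntimeError, e.g.
        # "CUDA out of memory. Tried to allocate 168.00 MiB ..."
        traceback.print_exc()
        return None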

@loretoparisi

I confirm that using the option --with_scratch significantly increases the memory pressure, so the input size needs to be reduced accordingly.
