How much memory needed to run inference? #4

Open

KyriaAnnwyn opened this issue May 26, 2022 · 7 comments

@KyriaAnnwyn

I get a GPU OOM error when running test.py. I currently have 16 GB. Is this not enough?

@chxy95
Member

chxy95 commented May 27, 2022

@KyriaAnnwyn What are your specific settings? GPU OOM may occur when the input size is too large, especially for HAT-L on SRx2.

@KyriaAnnwyn
Author

I tried SRx2 and SRx4 on 512x512 images. Both led to GPU OOM. The CPU ran OK, but it took a lot of time.

@chxy95
Member

chxy95 commented May 27, 2022

@KyriaAnnwyn 512x512 is a really large input size, which may cost about 20 GB of memory for HAT-L on SRx2. With limited GPU resources, you might consider testing the image in overlapping patches and then merging them together.
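(For reference, a rough, generic PyTorch sketch of that patch-then-merge idea, not the repository's own code: tiled_sr, tile, and pad are illustrative names, and it assumes the model accepts arbitrary patch sizes, i.e. any window-size padding is handled inside the model.)

# Minimal sketch of patch-wise SR testing with overlap (not HAT's implementation).
import torch

@torch.no_grad()
def tiled_sr(model, lq, scale=2, tile=256, pad=32):
    # lq: (1, C, H, W) low-quality tensor -> (1, C, H*scale, W*scale) output
    _, c, h, w = lq.shape
    out = lq.new_zeros(1, c, h * scale, w * scale)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            # Extend the input patch by `pad` pixels on each side (clamped at the borders).
            y0, y1 = max(y - pad, 0), min(y + tile + pad, h)
            x0, x1 = max(x - pad, 0), min(x + tile + pad, w)
            sr_patch = model(lq[:, :, y0:y1, x0:x1])
            # Keep only the un-padded tile region of the upscaled patch when stitching.
            ty, tx = (y - y0) * scale, (x - x0) * scale
            th = (min(y + tile, h) - y) * scale
            tw = (min(x + tile, w) - x) * scale
            out[:, :, y * scale:y * scale + th, x * scale:x * scale + tw] = \
                sr_patch[:, :, ty:ty + th, tx:tx + tw]
    return out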

@chxy95
Member

chxy95 commented May 27, 2022

I will test the memory requirements of the models and provide a solution for testing with limited GPU resources.

@KyriaAnnwyn
Author

@chxy95 Thank you

@chxy95
Member

chxy95 commented Sep 24, 2022

Tile mode is now provided for testing with limited GPU memory. The settings can be configured as follows:

tile:  # use tile mode for testing with limited GPU memory.
  tile_size: 256  # the smaller the tile size, the less GPU memory is used; must be an integer multiple of the window size.
  tile_pad: 32  # overlap between adjacent patches; must be an integer multiple of the window size.
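(To make the multiple-of-window-size constraint concrete, a tiny hypothetical check; the window size of 16 below is an assumption about the released HAT models, not a value stated in this thread.)

# Hypothetical helper mirroring the constraint in the comments above.
def check_tile_opts(tile_size=256, tile_pad=32, window_size=16):
    assert tile_size % window_size == 0, "tile_size must be a multiple of window_size"
    assert tile_pad % window_size == 0, "tile_pad must be a multiple of window_size"

check_tile_opts()  # passes for the example values 256 / 32 / 16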

@BitCalSaul

Hello @chxy95, I've trained a super-resolution model with a scaling factor of 1, setting the gt_size parameter to 64, despite my dataset comprising images of (512, 512) dimensions. I believe the DataLoader automatically crops these images to the specified gt_size of 64. My query pertains to the inference process using hat/test.py. Specifically, does the script perform inference on individual (64, 64) segments of the larger (512, 512) images and then stitch these segments back together to reconstruct the full (512, 512) image? Any clarification on this would be greatly appreciated.
