Reduce memory consumption in blob_* functions with preallocation #6597
Conversation
Preallocating `image_cube` reduces the max memory consumption by 30% (depending on dims and num_sigma), which helped me run the code on a weaker machine. By preallocating the output array, we avoid using `np.stack`.

Testing:

```python
from skimage import data
from skimage.feature import blob_log
from skimage.color import rgb2gray
import numpy as np

image = data.hubble_deep_field()[:700, :700]
image_gray = rgb2gray(image)

# Stack the 2D image into a 3D volume to stress memory usage.
Ds = 20
image_grayD = np.stack([image_gray for i in range(Ds)], 2)
print(image_grayD.shape)

blobs_log = blob_log(image_grayD, max_sigma=40, min_sigma=5,
                     num_sigma=20, threshold=.01)
```

Max RAM:
Without change - 4.707GB
With change - 3.247GB
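A minimal sketch of the two approaches being compared (the helper names `gaussian_filter_stack` and `gaussian_filter_prealloc` are hypothetical illustrations, not the actual functions in `skimage/feature/blob.py`):

```python
import numpy as np
from scipy import ndimage as ndi


def gaussian_filter_stack(image, sigma_list):
    # Stack-based approach: every filtered scale is held in a temporary
    # list, then np.stack copies them all into a fresh array, so the
    # peak memory is roughly twice the size of the final cube.
    slices = [ndi.gaussian_filter(image, s) for s in sigma_list]
    return np.stack(slices, axis=-1)


def gaussian_filter_prealloc(image, sigma_list):
    # Preallocated approach: write each filtered scale directly into
    # its slice of the output array, avoiding the extra copy.
    cube = np.empty(image.shape + (len(sigma_list),), dtype=image.dtype)
    for i, s in enumerate(sigma_list):
        cube[..., i] = ndi.gaussian_filter(image, s)
    return cube
```

Both return the same cube; only the peak memory during construction differs.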
Thank you @orena1 for working on these memory improvements. I took the liberty to change a few minor things and merged our main branch into this PR. The latter hopefully fixes the CI.
@jarrodmillman, @stefanv Do you see an immediate answer to why the build fails on the Linux CI? It doesn't seem to detect ninja although ninja 1.11.1 is present.
I did not know that the default dtype of `np.empty` is float64; it is more efficient to initialize it with the same dtype as the image
Hi @lagru, I added another change. I did not know that the default dtype of `np.empty` is float64; the image can sometimes be float32, so it is better to initialize `np.empty` with the correct dtype.
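A quick illustration of the dtype point, using NumPy's documented default:

```python
import numpy as np

# np.empty defaults to float64 regardless of the source image's dtype.
cube_default = np.empty((4, 4, 3))
print(cube_default.dtype)  # float64

# Passing the image's dtype keeps memory proportional to the input.
image = np.zeros((4, 4), dtype=np.float32)
cube_typed = np.empty(image.shape + (3,), dtype=image.dtype)
print(cube_typed.dtype)  # float32

# The float32 cube is half the size of the float64 one.
print(cube_default.nbytes, cube_typed.nbytes)  # 384 192
```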
Apologies for not seeing this before opening #6621. We had implemented the changes independently upstream in rapidsai/cucim#413 when porting the blob detectors there, and I was backporting that here and did not properly check for an open PR first. Can you pull the … Alternatively, I can also just add co-authored-by credit for you in the other PR.
reduce memory consumption by preallocating image_cube
Thanks for updating this @orena1, it looks good to me.
@lagru, please give a second look when you get a chance. I think it is ready to merge.
My small suggestion is not worth holding this up until I have the chance to verify it myself. Otherwise this looks good so I'll approve. Feel free to address my suggestion, or ignore it and merge. 👍
Thanks @orena1!
To hide flake8 complaints about too-long lines in those docstrings due to long URLs.
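For reference, flake8's per-line suppression of the line-length check (E501) looks like this; the URL and variable name below are generic placeholders, not the actual docstring from this PR:

```python
# A long URL can exceed flake8's default 79-character line limit
# (error E501). A trailing "# noqa: E501" suppresses only that check
# on that line, leaving the rest of the file linted as usual.
REFERENCE_URL = "https://example.com/a/very/long/path/that/would/otherwise/trip/the/line-length/check"  # noqa: E501
```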
Great, thanks!
Description
Preallocating `image_cube` reduces the max memory consumption by 30% (depending on dims and num_sigma), which helped me run the code on a weaker machine. By preallocating the output array, we avoid using `np.stack`.
Testing:
Max RAM:
Without change - 4.707GB
With change - 3.247GB