`hessian_matrix_eigenvalues` has an efficient, analytical code path for 2D, but for nD the current implementation is very memory-hungry. Internally, arrays of shape `image.shape + (image.ndim, image.ndim)` get passed to `cp.linalg.eigvalsh`, so memory requirements grow rapidly with `ndim`.
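To make the scaling concrete, here is a quick back-of-the-envelope estimate of the size of that stacked-matrices array (assuming float64; the helper name is just for illustration):

```python
import numpy as np

def hessian_matrix_bytes(shape, dtype=np.float64):
    """Bytes needed for the image.shape + (ndim, ndim) array of
    per-pixel Hessian matrices passed to the eigenvalue solver."""
    ndim = len(shape)
    return int(np.prod(shape)) * ndim * ndim * np.dtype(dtype).itemsize

# A 512**3 volume needs 9 GiB just for the stacked matrices,
# before the solver allocates any workspace of its own.
gib = hessian_matrix_bytes((512, 512, 512)) / 2**30
print(gib)  # 9.0
```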
The same underlying inefficiency applies to several other functions relying on the same symmetric eigenvalue computations:
- `skimage.feature.blob_doh`
- `skimage.feature.multiscale_basic_features`
- `skimage.feature.shape_index`
- `skimage.feature.structure_tensor_eigenvalues`
- `skimage.filters.frangi`
- `skimage.filters.hessian`
- `skimage.filters.meijering`
- `skimage.filters.sato`
For the 3D case, I recently implemented a memory-efficient analytical solution downstream in cuCIM. The analytical solution is also much faster than relying on a general eigenvalue solver.
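For reference, a closed-form symmetric 3×3 eigenvalue computation along these lines can be sketched with the trigonometric method (this is a hedged NumPy illustration of the general technique, not the cuCIM implementation; the function name and argument layout are my own). It operates directly on the six unique Hessian components, so no `(..., 3, 3)` array is ever formed:

```python
import numpy as np

def eigvals_symmetric_3x3(a00, a01, a02, a11, a12, a22):
    """Closed-form eigenvalues of elementwise symmetric 3x3 matrices,
    via the trigonometric method. Inputs may be scalars or arrays of
    the six unique matrix entries; returns (largest, middle, smallest).
    """
    q = (a00 + a11 + a22) / 3.0
    p1 = a01**2 + a02**2 + a12**2
    p2 = (a00 - q)**2 + (a11 - q)**2 + (a22 - q)**2 + 2.0 * p1
    p = np.sqrt(p2 / 6.0)
    p_safe = np.where(p2 > 0, p, 1.0)  # avoid 0/0 for scalar matrices
    # B = (A - q*I) / p, kept as individual components
    b00, b11, b22 = (a00 - q) / p_safe, (a11 - q) / p_safe, (a22 - q) / p_safe
    b01, b02, b12 = a01 / p_safe, a02 / p_safe, a12 / p_safe
    # r = det(B) / 2, clipped against round-off before arccos
    r = (b00 * (b11 * b22 - b12**2)
         - b01 * (b01 * b22 - b12 * b02)
         + b02 * (b01 * b12 - b11 * b02)) / 2.0
    phi = np.arccos(np.clip(r, -1.0, 1.0)) / 3.0
    e1 = q + 2.0 * p * np.cos(phi)                       # largest
    e3 = q + 2.0 * p * np.cos(phi + 2.0 * np.pi / 3.0)   # smallest
    e2 = 3.0 * q - e1 - e3                               # trace identity
    return e1, e2, e3
```

Because every intermediate is the same shape as the image, peak memory stays at a small constant multiple of the image size.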
For the nD case, I don't know of any analytical solution. I think the easiest workaround there is to divide the eigenvalue computation so it only operates on a subset of the image at a time, reducing peak memory use, e.g. via `dask.array.map_blocks`. Each pixel's eigenvalues are computed independently of the others, so the blocks do not need to overlap.
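The blockwise idea can be sketched in plain NumPy (this is an illustration of what `dask.array.map_blocks` would automate, not a proposed implementation; the helper name and `block` parameter are hypothetical):

```python
import numpy as np

def blockwise_eigvalsh(hessian, block=4096):
    """Per-pixel symmetric eigenvalues, processed in flat blocks.

    `hessian` has shape image.shape + (d, d). Only one block of
    matrices is handed to np.linalg.eigvalsh at a time, bounding the
    solver's workspace; dask.array.map_blocks would additionally keep
    the Hessian itself out of memory by building it lazily per block.
    """
    d = hessian.shape[-1]
    spatial = hessian.shape[:-2]
    out = np.empty(spatial + (d,), dtype=hessian.dtype)
    flat_h = hessian.reshape(-1, d, d)
    flat_out = out.reshape(-1, d)
    for start in range(0, flat_h.shape[0], block):
        stop = start + block
        flat_out[start:stop] = np.linalg.eigvalsh(flat_h[start:stop])
    return out  # shape image.shape + (d,), eigenvalues ascending
```

Since pixels are independent, no overlap (halo) between blocks is needed, which makes this embarrassingly parallel.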