I’m really interested in getting GPU Richardson-Lucy deconvolution working on my machine (Windows 10, Quadro M2200 4GB) and am running through the CElegans example notebook, but I get an out-of-memory error when I run the deconvolution cell. I’m wondering whether this is a true error (does the GPU require more than 4GB of memory for this example?) or whether there is a configuration issue I should be looking into?
The fix in this case was to change the padding mode used on the images to avoid resizing them to the next highest power of 2 along each dimension:
from flowdec import restoration as fd_restoration

# Use pad_mode='none' instead of the default 'log2'
algo = fd_restoration.RichardsonLucyDeconvolver(n_dims=3, pad_mode='none').initialize()
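For a sense of scale, here is a minimal sketch of how much 'log2' padding alone can inflate the memory footprint. The volume shape and the single complex64 buffer are illustrative assumptions, not values taken from the example:

import numpy as np

def next_pow2(n):
    # Smallest power of 2 that is >= n
    return 1 << (int(n) - 1).bit_length()

shape = (104, 712, 672)                      # hypothetical Z, Y, X volume
padded = tuple(next_pow2(d) for d in shape)  # -> (128, 1024, 1024)

# Assume one complex64 FFT buffer at 8 bytes per voxel
orig_mb = np.prod(shape) * 8 / 1e6
pad_mb = np.prod(padded) * 8 / 1e6
print(f"original: {orig_mb:.0f} MB, padded: {pad_mb:.0f} MB ({pad_mb / orig_mb:.1f}x)")

FFT-based Richardson-Lucy keeps several buffers of this size alive per iteration, so even the roughly 2.7x per-buffer increase in this sketch can be the difference between fitting in 4GB and not.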
Additionally, setting the options below overrides TensorFlow's default behavior of attempting to preallocate nearly 100% of GPU memory for every Python process (or Jupyter kernel) using it, which can be problematic even with a single process:
import tensorflow as tf

session_config = tf.ConfigProto()
# allow_growth=True will allocate memory as needed rather than preemptively
session_config.gpu_options.allow_growth = True
session_config.gpu_options.per_process_gpu_memory_fraction = 1.0

res = {ch: algo.run(acqs[ch], niter=100, session_config=session_config) for ch in acqs}
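Putting both fixes together, a minimal end-to-end sketch might look like the following. The file names, channel names, and the use of skimage to load the stacks are illustrative assumptions; Acquisition comes from flowdec's data module:

import tensorflow as tf
from skimage import io
from flowdec import data as fd_data
from flowdec import restoration as fd_restoration

# Build per-channel acquisitions from hypothetical image and PSF stacks
acqs = {
    ch: fd_data.Acquisition(
        data=io.imread(f'channel_{ch}.tif'),
        kernel=io.imread(f'psf_{ch}.tif'))
    for ch in ['CY3', 'DAPI', 'FITC']
}

# pad_mode='none' keeps the original dimensions rather than padding each
# one to the next power of 2
algo = fd_restoration.RichardsonLucyDeconvolver(
    n_dims=3, pad_mode='none').initialize()

# allow_growth=True makes TF allocate GPU memory on demand
session_config = tf.ConfigProto()
session_config.gpu_options.allow_growth = True

res = {ch: algo.run(acqs[ch], niter=100, session_config=session_config)
       for ch in acqs}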
via email from Samantha Esteves