Hi, sorry if posting this question here is abusing the purpose of GitHub's issues feature, but I didn't see any information in the documentation about where to go or who to contact if we have questions about cellpose.
I'm using the Jupyter notebook that's provided for running cellpose, and I'm noticing that it loads all the images into memory and then runs model.eval, io.masks_flow_to_seg, etc. on these lists. Do these functions take lists of images as a convenience for batch processing, or do they actually aggregate information across multiple images when doing the segmentation? I'm wondering if there would be a downside to running these functions on each image one at a time, or at least on smaller batches, as there are too many images in our experiments to fit all of them in memory at the same time.
No, this is a good question. We are not running batches that contain multiple different images, so there is no change in performance whether you pass a list of images or single images as inputs. I've modified the example notebook accordingly to show both running in a loop and running images as a list.
The only time you would need to provide a single list is if you wanted cellpose to stitch together 2D images into a 3D volume with the stitch_threshold parameter activated.
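Since results are computed independently per image (unless stitch_threshold is in play), the all-at-once call in the notebook can be replaced with a loop over small batches so that only a few images are in memory at a time. A minimal sketch of that pattern, assuming a hypothetical `chunked` helper; the cellpose calls are shown only as comments and mirror the function names mentioned in the question, not a verified API:

```python
def chunked(items, size):
    """Yield successive lists of at most `size` items, so only one
    small batch needs to be held in memory at a time."""
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:  # emit any leftover partial batch
        yield batch

# Sketch of the per-batch loop (cellpose calls commented out so the
# snippet is self-contained; argument lists are illustrative only):
# for batch_paths in chunked(all_image_paths, 8):
#     imgs = [io.imread(p) for p in batch_paths]
#     results = model.eval(imgs)
#     ... save masks/flows per image here (e.g. io.masks_flow_to_seg) ...
```

Since segmentation quality is identical either way, the batch size here only trades peak memory against per-call overhead.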