### Partition the subvolumes
- Evenly divides the data in `pred_file` into subvolumes, with the constraint that no dimension of any subvolume is longer than `max_len`.
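The even-partition constraint described above can be sketched as follows. This is a minimal illustration, not the repository's actual implementation; the function name `partition_volume` and its return format are assumptions:

```python
import math
import numpy as np

def partition_volume(shape, max_len):
    """Split a 3D volume of the given shape into evenly sized subvolumes,
    none of whose dimensions exceeds max_len.  Returns a list of
    (start_indices, end_indices) tuples, one per subvolume.
    (Illustrative sketch only -- not the repository's API.)"""
    # number of pieces needed along each axis
    n_splits = [math.ceil(s / max_len) for s in shape]
    axis_bounds = []
    for size, n in zip(shape, n_splits):
        # evenly spaced cut points, so piece sizes differ by at most one voxel
        cuts = np.linspace(0, size, n + 1).astype(int)
        axis_bounds.append(list(zip(cuts[:-1], cuts[1:])))
    subvols = []
    for z0, z1 in axis_bounds[0]:
        for y0, y1 in axis_bounds[1]:
            for x0, x1 in axis_bounds[2]:
                subvols.append(((z0, y0, x0), (z1, y1, x1)))
    return subvols
```

For example, a `(100, 100, 100)` volume with `max_len=60` is cut into 2 pieces per axis, i.e. 8 subvolumes of side 50.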
### Zwatershed the subvolumes
`eval_with_spark(partition_data[0])`
- evaluates with Spark

`eval_with_par_map(partition_data[0], NUM_WORKERS)`
- evaluates with Python's multiprocessing `map`
- After evaluating, the subvolumes are saved into the `out_folder` directory, each named by its smallest index in each dimension (e.g. `path/to/out_folder/0_0_0_vol`).
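The multiprocessing variant can be sketched like this. This is an assumed shape for the behavior described above, not the library's actual code; the worker body is left as a stub:

```python
from multiprocessing import Pool

def zwatershed_subvol(bounds):
    """Hypothetical worker: run watershed on one subvolume and save it
    under the name formed from its smallest index in each dimension,
    e.g. 0_0_0_vol.  The actual segmentation call is omitted here."""
    starts, ends = bounds
    name = "_".join(str(s) for s in starts) + "_vol"
    # ... run zwatershed on the subvolume and write the result
    #     to out_folder/<name> (omitted in this sketch) ...
    return name

def eval_with_par_map(subvols, num_workers):
    """Map the worker over all subvolumes with a process pool."""
    with Pool(num_workers) as pool:
        return pool.map(zwatershed_subvol, subvols)
```

The Spark variant would apply the same worker via an RDD `map` instead of a process pool.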
### Stitch the subvolumes together
`stitch_and_save(partition_data, outname)`
- stitches together the subvolumes in `partition_data`
- saves the result to the HDF5 file `outname`, with datasets:
  - `outname['starts']`: the minimum indices of each subvolume
  - `outname['ends']`: the maximum indices of each subvolume
  - `outname['seg']`: the full stitched segmentation
  - `outname['seg_sizes']`: the size of each segment
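The stitching step can be sketched in pure NumPy. This is a minimal illustration of the assumed behavior (paste each labeled subvolume back at its start indices, offsetting labels so segment ids from different subvolumes do not collide); the real `stitch_and_save` would additionally merge segments across subvolume seams and write the HDF5 datasets listed above:

```python
import numpy as np

def stitch_subvols(subvols, full_shape):
    """Sketch of stitching: subvols is a list of
    (start_indices, end_indices, labels) triples.  Returns the full
    segmentation plus the starts, ends, and per-segment sizes.
    (Illustrative only -- seam merging is omitted.)"""
    seg = np.zeros(full_shape, dtype=np.uint64)
    offset = 0
    starts_list, ends_list = [], []
    for starts, ends, labels in subvols:
        region = tuple(slice(a, b) for a, b in zip(starts, ends))
        shifted = labels.astype(np.uint64)
        shifted[shifted > 0] += offset   # keep 0 as background
        seg[region] = shifted
        offset = int(seg.max())          # next subvolume's labels start here
        starts_list.append(starts)
        ends_list.append(ends)
    # per-segment voxel counts, ignoring background
    _, sizes = np.unique(seg[seg > 0], return_counts=True)
    return seg, starts_list, ends_list, sizes
```

The returned arrays correspond to the `seg`, `starts`, `ends`, and `seg_sizes` datasets described above.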