Hi @CallShaul, thank you for this suggestion. Can you illustrate the issue that you are solving with some lines of code?
An example can help us better understand the problem 😉
ds_block = (10, 10)
# vid: uint8 video array of shape (x, y, time)
img = skm.block_reduce(vid[:, :, k], ds_block, func=np.mean)
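To make the artifact reproducible without the video file, here is a minimal sketch with a constant-valued frame (the `frame` array stands in for one slice of `vid`). Since `block_reduce` zero-pads images whose dimensions don't divide evenly by the block size (default `cval=0`), the padded edge blocks average in zeros and come out darker:

```python
import numpy as np
import skimage.measure as skm

# A 15x15 frame of constant value 100: 15 is not divisible by 10,
# so block_reduce zero-pads before averaging.
frame = np.full((15, 15), 100, dtype=np.uint8)
out = skm.block_reduce(frame, (10, 10), func=np.mean)

# The top-left block covers only real pixels -> 100.0, but the
# right/bottom blocks mix in padded zeros and are darker.
print(out)  # [[100.  50.], [ 50.  25.]]
```

This is exactly the darker bottom row visible in the screenshot: those blocks are averaged over mostly-zero padding.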
As you can see in the image below, the lowest row has lower gray-level values, since the image's dimensions aren't evenly divisible by ds_block.
I've worked around the issue by slicing off the extra rows / columns:
spare_rows = np.mod(vid.shape[0], ds_block[0])
spare_cols = np.mod(vid.shape[1], ds_block[1])
vid = vid[:vid.shape[0] - spare_rows]
vid = vid[:, :vid.shape[1] - spare_cols]
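The same workaround can be wrapped as a small helper (`crop_to_blocks` is just a name I've made up for this sketch) that crops the trailing rows/columns so the first two dimensions divide evenly by the block size:

```python
import numpy as np

def crop_to_blocks(arr, block):
    """Crop trailing rows/cols so arr's first two dims divide evenly by block."""
    rows = arr.shape[0] - arr.shape[0] % block[0]
    cols = arr.shape[1] - arr.shape[1] % block[1]
    return arr[:rows, :cols]

cropped = crop_to_blocks(np.zeros((15, 17, 3)), (10, 10))
print(cropped.shape)  # -> (10, 10, 3)
```

Every block then covers real pixels only, so no edge block is darkened by padding.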
It would be more elegant if block_reduce could accept a Boolean argument indicating whether those spare rows / columns should be removed, or perhaps even up-sampled by averaging, so that the image fits a perfect block division.
I hope I was clear enough; I'm available for any further discussion. Thanks a lot!
A suggestion:
Add a Boolean option that shrinks the output array when the input dimensions aren't an exact multiple of the down-sampling factor (block_size).
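For illustration, the proposed option could behave like the hypothetical wrapper below. Neither `block_reduce_trim` nor the `trim` parameter exists in scikit-image; this is only a sketch of the suggested behavior:

```python
import numpy as np
import skimage.measure as skm

def block_reduce_trim(image, block_size, func=np.mean, trim=True):
    # Hypothetical API sketch: 'trim' is the proposed Boolean, not a real
    # scikit-image parameter. When True, spare rows/cols are discarded
    # before reduction instead of being zero-padded.
    if trim:
        rows = image.shape[0] - image.shape[0] % block_size[0]
        cols = image.shape[1] - image.shape[1] % block_size[1]
        image = image[:rows, :cols]
    return skm.block_reduce(image, block_size, func=func)

# With trim=True only the one full 10x10 block survives, with no
# darkening from padded zeros.
out = block_reduce_trim(np.full((15, 15), 100, dtype=np.uint8), (10, 10))
print(out)  # [[100.]]
```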