If I have a PNG with an alpha channel (e.g., periphery pixels I'd like to ignore), is it better for the DCT pHash to leave the alpha as-is, flatten it to white, or flatten it to black? (For example, a bunch of "circular" imagery where we're trying to find the best match.) Or, because of the DCT transform, does it not matter, since the transparent areas form contiguous groups of identical pixels?

Thanks for this awesome code, BTW!

I'm evolving an AR iOS app that matches a camera image against pre-photographed clay statues and sends the user the best match. In testing, pHash DCT is beating dHash as well as the more brute-force/simpler MAE, MSE, and RMSE comparisons :)
You shouldn't have to do anything. The library greyscales the image automatically (ignoring alpha values) for DCT hashing. See greyscale_pixels_rgba_32_32 in OSFastGraphics.m for implementation details.
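For anyone wondering what "ignoring alpha values" means in practice, here is a minimal Swift sketch. It is not the library's actual code (see `greyscale_pixels_rgba_32_32` in OSFastGraphics.m for that), and the Rec. 601 luma weights are just an assumed choice for illustration; the point is that the alpha byte is simply skipped when each RGBA pixel is reduced to a single greyscale value, so nothing needs to be flattened beforehand.

```swift
import Foundation

// Illustrative sketch only (not the library's implementation): convert
// interleaved RGBA8 pixels to greyscale using the RGB channels alone,
// so the alpha byte has no influence on the values the DCT hash sees.
// Rec. 601 luma weights are assumed here; the library may weight
// channels differently.
func greyscaleIgnoringAlpha(rgba: [UInt8]) -> [UInt8] {
    precondition(rgba.count % 4 == 0, "expected interleaved RGBA8 data")
    var luma = [UInt8]()
    luma.reserveCapacity(rgba.count / 4)
    var i = 0
    while i < rgba.count {
        let r = Double(rgba[i])
        let g = Double(rgba[i + 1])
        let b = Double(rgba[i + 2])
        // rgba[i + 3] (alpha) is deliberately skipped.
        let y = 0.299 * r + 0.587 * g + 0.114 * b
        luma.append(UInt8(y.rounded()))
        i += 4
    }
    return luma
}

// A fully transparent white pixel and a fully opaque white pixel produce
// the same greyscale value, so the hash input is unaffected by alpha.
let pixels: [UInt8] = [255, 255, 255, 0,    // transparent white
                       255, 255, 255, 255]  // opaque white
print(greyscaleIgnoringAlpha(rgba: pixels)) // [255, 255]
```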