This code assumes that the input image is in RGBA8888 pixel format, specifically that the R, G, and B component values are at offsets 0, 1, and 2:

```swift
r += basis * sRGBToLinear(buffer[bytesPerPixel * x + pixelOffset + 0 + y * bytesPerRow])
g += basis * sRGBToLinear(buffer[bytesPerPixel * x + pixelOffset + 1 + y * bytesPerRow])
b += basis * sRGBToLinear(buffer[bytesPerPixel * x + pixelOffset + 2 + y * bytesPerRow])
```
This isn't always true, and when the assumption fails, the resulting blurHashes have the wrong hue.
The blurHash(numberOfComponents:) method verifies the number of components. At the very least, it should also verify that the alphaInfo is .premultipliedLast. However, that will reject many images. Better would be to remove this assumption from the encoder.
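One way to drop the RGBA-only assumption would be to derive the byte offsets of R, G, and B from the image's pixel layout instead of hard-coding 0, 1, 2. A minimal sketch of the idea (the `PixelLayout` enum and `rgbOffsets` helper are hypothetical, not part of the library; a real encoder would derive the layout from the `CGImage`'s `bitmapInfo`/`alphaInfo`):

```swift
// Hypothetical sketch: derive R/G/B byte offsets from the pixel layout
// instead of assuming RGBA8888.
enum PixelLayout {
    case rgba  // R at 0, G at 1, B at 2
    case bgra  // B at 0, G at 1, R at 2
    case argb  // A at 0, R at 1, G at 2, B at 3
}

func rgbOffsets(for layout: PixelLayout) -> (r: Int, g: Int, b: Int) {
    switch layout {
    case .rgba: return (0, 1, 2)
    case .bgra: return (2, 1, 0)
    case .argb: return (1, 2, 3)
    }
}
```

The inner loop would then index `buffer[bytesPerPixel * x + pixelOffset + offsets.r + y * bytesPerRow]` and so on, rather than the fixed 0/1/2.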
Additionally, the Swift encoder invokes the sRGBToLinear() function NxM times per pixel for a blurHash with (N, M) components. For a WxH image (width, height), that is NxMxWxH calls in total. Performance would improve dramatically by breaking this into two steps: convert all pixels to linear once, then compute the factors from the converted buffer.
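The two-step approach might look something like this. A hedged sketch only: `sRGBToLinear` is a standalone re-implementation of the standard sRGB transfer function, and the tightly packed 3-bytes-per-pixel RGB layout is an assumption for illustration, not the encoder's actual buffer format:

```swift
import Foundation

// Standard sRGB-to-linear transfer function.
func sRGBToLinear(_ value: UInt8) -> Float {
    let v = Float(value) / 255
    return v <= 0.04045 ? v / 12.92 : pow((v + 0.055) / 1.055, 2.4)
}

// Pass 1: one conversion per byte (W*H*3 calls instead of N*M*W*H).
func linearizePixels(_ rgb: [UInt8]) -> [Float] {
    rgb.map(sRGBToLinear)
}

// Pass 2: compute one (i, j) factor from the precomputed linear buffer.
// Assumes tightly packed RGB, 3 floats per pixel.
func factor(linear: [Float], width: Int, height: Int,
            i: Int, j: Int) -> (r: Float, g: Float, b: Float) {
    var r: Float = 0, g: Float = 0, b: Float = 0
    for y in 0..<height {
        for x in 0..<width {
            let basis = cos(Float.pi * Float(i) * Float(x) / Float(width))
                      * cos(Float.pi * Float(j) * Float(y) / Float(height))
            let p = 3 * (x + y * width)
            r += basis * linear[p]
            g += basis * linear[p + 1]
            b += basis * linear[p + 2]
        }
    }
    return (r, g, b)
}
```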
Additionally, given the low fidelity of blurHashes, it probably doesn't make sense to sample every pixel. Another significant performance improvement would be to use a "stride" to sample only every Nth pixel horizontally and vertically.
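A sketch of the stride idea, shown on a single channel for brevity (`averageWithStride` is a made-up helper; the real encoder would apply the same stride inside its basis loops, and the averaging divisor must be the sampled count, not W*H):

```swift
// Hypothetical stride sampling: visit every `stride`-th pixel in both
// directions instead of all W*H pixels.
func averageWithStride(linear: [Float], width: Int, height: Int,
                       stride: Int) -> Float {
    var sum: Float = 0
    var count = 0
    for y in Swift.stride(from: 0, to: height, by: stride) {
        for x in Swift.stride(from: 0, to: width, by: stride) {
            sum += linear[x + y * width]
            count += 1
        }
    }
    // Divide by the number of samples actually taken.
    return sum / Float(count)
}
```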
The correct way to call it is to first scale the image down to a small size, such as 32x32, and then convert that. That handles any possible slowness, and gives much better results than just skipping pixels.
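Scaling first acts as a low-pass filter over all pixels, so no detail is simply discarded the way stride sampling would discard it. A minimal single-channel sketch of the idea (`boxDownscale` is a hypothetical helper that assumes the dimensions are exact multiples of the scale factor; in practice you would let CoreGraphics do the resize by drawing the image into a small CGContext):

```swift
// Hypothetical box-filter downscale: each output pixel is the average of
// a factor x factor block of input pixels.
func boxDownscale(_ pixels: [Float], width: Int, height: Int,
                  factor: Int) -> [Float] {
    let ow = width / factor, oh = height / factor
    var out = [Float](repeating: 0, count: ow * oh)
    for oy in 0..<oh {
        for ox in 0..<ow {
            var sum: Float = 0
            for dy in 0..<factor {
                for dx in 0..<factor {
                    sum += pixels[(ox * factor + dx) + (oy * factor + dy) * width]
                }
            }
            out[ox + oy * ow] = sum / Float(factor * factor)
        }
    }
    return out
}
```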
Pushed a change to convert images to a known sRGB format in the Swift encoders now. It seems to lead to slightly nicer colours in the test app too, so I guess those images weren't in sRGB as I had expected.
@DagAgren Doesn't the sRGBToLinear still get called NxMxWxH times or am I missing something?
The Swift decoder is surprisingly slow compared to the Kotlin implementation: benchmarking with a decode width and height of 400, Swift takes longer than 4 seconds while Kotlin (Android) finishes in under 100ms.