Swift encoder assumes pixel format, has inefficient implementation. #19

Closed
charlesmchen-signal opened this issue Oct 8, 2019 · 3 comments

Comments

@charlesmchen-signal

This code assumes that the input image is in RGBA8888 pixel format, specifically that the R, G, B component values are at offsets 0, 1, and 2:

                r += basis * sRGBToLinear(buffer[bytesPerPixel * x + pixelOffset + 0 + y * bytesPerRow])
                g += basis * sRGBToLinear(buffer[bytesPerPixel * x + pixelOffset + 1 + y * bytesPerRow])
                b += basis * sRGBToLinear(buffer[bytesPerPixel * x + pixelOffset + 2 + y * bytesPerRow])

This isn't always true. When the assumption doesn't hold, the encoder yields blurHashes whose hue is wrong.

The blurHash(numberOfComponents:) method validates the number of components. At the very least, it should also verify that the image's alphaInfo is .premultipliedLast; however, that would reject many images. A better fix would be to remove this assumption from the encoder entirely.
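For illustration, such a pre-flight check could look roughly like this (the helper name is hypothetical, and the layout conditions are one reading of "R, G, B at offsets 0, 1, 2"):

    import CoreGraphics

    // Hypothetical pre-flight check: reject images whose byte layout the
    // encoder can't handle, rather than silently producing wrong hues.
    // 32-bit pixels with alpha last and big-endian/default byte order put
    // R, G, B at offsets 0, 1, 2, matching the encoder's assumption.
    func hasAssumedRGBALayout(_ cgImage: CGImage) -> Bool {
        let alphaLast = cgImage.alphaInfo == .premultipliedLast
            || cgImage.alphaInfo == .last
            || cgImage.alphaInfo == .noneSkipLast
        let bigEndian = cgImage.byteOrderInfo == .orderDefault
            || cgImage.byteOrderInfo == .order32Big
        return cgImage.bitsPerPixel == 32 && alphaLast && bigEndian
    }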

Additionally, the Swift encoder invokes the sRGBToLinear() method for every pixel NxM times for blurHashes with (N, M) components. If the image is WxH (width x height), the function is called NxMxWxH times. Breaking this into two steps would dramatically improve performance: convert all pixels to linear values once, then compute the factors.
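A sketch of that two-step split, reusing the encoder's existing sRGBToLinear helper (the function and parameter names mirror the snippet above and are illustrative, not the library's API):

    import Foundation

    // Illustrative first pass: linearise every pixel exactly once, so
    // sRGBToLinear runs W*H*3 times instead of running inside the N*M
    // factor loops.
    func linearizedPixels(buffer: UnsafePointer<UInt8>, width: Int, height: Int,
                          bytesPerRow: Int, bytesPerPixel: Int, pixelOffset: Int) -> [Float] {
        var linear = [Float](repeating: 0, count: width * height * 3)
        for y in 0 ..< height {
            for x in 0 ..< width {
                let src = bytesPerPixel * x + pixelOffset + y * bytesPerRow
                let dst = 3 * (y * width + x)
                linear[dst + 0] = sRGBToLinear(buffer[src + 0])
                linear[dst + 1] = sRGBToLinear(buffer[src + 1])
                linear[dst + 2] = sRGBToLinear(buffer[src + 2])
            }
        }
        return linear
    }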

Additionally, given the low fidelity of blurHashes, it probably doesn't make sense to sample every pixel. Another significant performance improvement would be to use a "stride", sampling only every Nth pixel vertically and horizontally.
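The factor accumulation could then look roughly like this with a stride (the step parameter and the normalisation by sample count are assumptions, not the shipped code):

    import Foundation

    // Illustrative only: accumulate one cosine-basis factor while visiting
    // every `step`-th pixel in both directions. `linear` is the per-pixel
    // linear RGB array from the first pass above.
    func factor(linear: [Float], width: Int, height: Int,
                i: Int, j: Int, step: Int) -> (Float, Float, Float) {
        var r: Float = 0, g: Float = 0, b: Float = 0
        var samples: Float = 0
        for y in stride(from: 0, to: height, by: step) {
            for x in stride(from: 0, to: width, by: step) {
                let basis = cos(Float.pi * Float(i) * Float(x) / Float(width))
                          * cos(Float.pi * Float(j) * Float(y) / Float(height))
                let p = 3 * (y * width + x)
                r += basis * linear[p + 0]
                g += basis * linear[p + 1]
                b += basis * linear[p + 2]
                samples += 1
            }
        }
        // Normalise by the number of samples actually visited, since fewer
        // pixels contribute than in the full-resolution version.
        let normalisation: Float = (i == 0 && j == 0) ? 1 : 2
        let scale = normalisation / samples
        return (r * scale, g * scale, b * scale)
    }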

@DagAgren
Collaborator

The correct way to call it is to first scale the image down to a small size, such as 32x32, and then convert that. This handles any possible slowness and gives much better results than just skipping pixels.
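In other words, something along these lines, assuming the library's UIImage.blurHash(numberOfComponents:) extension (the wrapper itself is hypothetical):

    import UIKit

    // Minimal sketch: scale down first, then encode the small image.
    // 32x32 is the size suggested above.
    func blurHash(for image: UIImage, components: (Int, Int)) -> String? {
        let targetSize = CGSize(width: 32, height: 32)
        let renderer = UIGraphicsImageRenderer(size: targetSize)
        let small = renderer.image { _ in
            image.draw(in: CGRect(origin: .zero, size: targetSize))
        }
        return small.blurHash(numberOfComponents: components)
    }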

@DagAgren
Collaborator

Pushed a change to convert images to a known sRGB format in the Swift encoders now. It also seems to lead to slightly nicer colours in the test app, so I guess those images weren't in sRGB as I had expected.
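For readers following along, the rough shape of such a conversion might be (a sketch, not necessarily what the pushed commit does; the helper name and the premultiplied-alpha choice are assumptions):

    import CoreGraphics

    // Sketch: redraw the image into a bitmap context with a fixed sRGB
    // RGBA8888 layout, so the encoder's offset assumptions always hold.
    func normalizedSRGBPixels(_ cgImage: CGImage) -> [UInt8]? {
        let width = cgImage.width, height = cgImage.height
        var pixels = [UInt8](repeating: 0, count: width * height * 4)
        guard let colorSpace = CGColorSpace(name: CGColorSpace.sRGB) else { return nil }
        let drawn = pixels.withUnsafeMutableBytes { raw -> Bool in
            guard let context = CGContext(data: raw.baseAddress, width: width, height: height,
                                          bitsPerComponent: 8, bytesPerRow: width * 4,
                                          space: colorSpace,
                                          bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
            else { return false }
            context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
            return true
        }
        return drawn ? pixels : nil
    }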

@mrousavy
Contributor

@DagAgren Doesn't sRGBToLinear still get called NxMxWxH times, or am I missing something?

The Swift decoder is also surprisingly slow compared to the Kotlin implementation: I benchmarked both with a decode width and height of 400, and Swift takes longer than 4 seconds while Kotlin (Android) is done in less than 100 ms.
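For context, a measurement along these lines could reproduce the comparison (the hash string is an example from the project README; UIImage(blurHash:size:) is the library's decoder initializer):

    import UIKit

    // Rough timing harness: decode one hash at 400x400 and time it.
    let start = CFAbsoluteTimeGetCurrent()
    let decoded = UIImage(blurHash: "LEHV6nWB2yk8pyo0adR*.7kCMdnj",
                          size: CGSize(width: 400, height: 400))
    print("decoded \(decoded == nil ? "nothing" : "image") in \(CFAbsoluteTimeGetCurrent() - start) s")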
