Performance [suggestion] #30

Closed
eltonmesquita opened this issue Nov 3, 2017 · 8 comments

Comments

@eltonmesquita (Contributor) commented Nov 3, 2017

Hello!
Recently I've used RGBaster in a project and had some performance issues. In my case, the images being processed were fairly big and I couldn't resize them because that would break the layout.

While digging through the page in the debugger, looking for ways to optimize the script, I changed the image-data loop's increment from 4 to 128 (i.e. sampling one pixel out of every 32) and, as expected, got a considerable performance increase. No more long frames and, in my case, no visible impact on the results.

My suggestion is to provide a way to trade precision for performance when needed. It might help a lot in some cases and be of great use to other developers.

If needed, I can provide some measurements and some examples for testing purposes.

@briangonzalez (Owner)

Great idea, @eltonmesquita.

I'll find some time soon to add in the suggestion.

@AlfredJKwack (Collaborator)

We should give some thought to the implementation approach perhaps.

Skipping pixels in the data loop directly has some side effects.

Essentially, the data is read one line of pixels at a time, from the top left of the image to the bottom right. Arbitrarily setting the number of pixels to skip at that point in the code means the palette ends up being built from slices taken across the image.

For instance, setting the skip from 4 to 128 means you're reading 1 out of every 32 pixels. The slices themselves will be oriented depending on the width of the image: if the width happens to be a multiple of 32, the slices will be nice and vertical; if the width is a factor of 32, you'd get horizontal slices; in all other cases you'd get some form of diagonal slice.
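A minimal sketch of that loop-skip sampling, for illustration only (the function and variable names here are mine, not the library's actual code):

// Sketch: sample the flat RGBA array from getImageData with a configurable stride.
// pixelSkip = 1 reads every pixel; pixelSkip = 32 reads 1 out of every 32 pixels.
function countSampledColors(data, pixelSkip) {
  var counts = {};
  var stride = 4 * pixelSkip; // 4 array entries (R, G, B, A) per pixel
  for (var i = 0; i < data.length; i += stride) {
    var key = data[i] + ',' + data[i + 1] + ',' + data[i + 2];
    counts[key] = (counts[key] || 0) + 1;
  }
  return counts;
}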

Another approach would be to take the center of a 3x3 (or 5x5, etc.) square of pixels each time and go through the entire image that way. You could use a formula like (2n+1)^2, where n is your coarseness factor; it ensures you always have a square with a central pixel. The approach is a little less flexible but seems more logical somehow.
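A rough sketch of that block-sampling idea, assuming an RGBA ImageData object and a coarseness factor n (names are illustrative, not existing library code):

// Sketch: read the central pixel of each (2n+1) x (2n+1) block of the image.
function sampleBlockCenters(imageData, n) {
  var size = 2 * n + 1; // block edge length, always odd, so a central pixel exists
  var data = imageData.data;
  var samples = [];
  for (var y = n; y < imageData.height; y += size) {
    for (var x = n; x < imageData.width; x += size) {
      var i = 4 * (y * imageData.width + x); // index of the block's central pixel
      samples.push([data[i], data[i + 1], data[i + 2]]);
    }
  }
  return samples;
}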

Thoughts? Are there any other approaches we could take? What do you think would yield the best result in terms of speed?

@eltonmesquita (Contributor, Author)

Sorry for the late answer.
You are completely right, @AlfredJKwack. Messing with the loop can be really troublesome, but it's still one of the most effective ways to speed up something as compute-intensive as image analysis.

I'm interested in the center-square idea, but would it make it easier to provide a "precision option"? I'm a newbie at image analysis, so pardon me if I'm asking something silly.

The only other alternative I can think of right now is to provide an option for the script to scale down the image, so it would have a lot less work to do. I don't know if it's a good idea, but it might be an option.

@AlfredJKwack (Collaborator) commented Mar 9, 2018

Hi @eltonmesquita

That alternative is actually quite feasible as well.

The drawImage() function allows for scaling the image. Modifying the equivalent call at RGBaster L25 to take some form of "optimizer" factor would be trivial. Doing so affects getImageData insofar as it then returns fewer pixels for RGBaster to analyze.

A quick and dirty implementation for testing and timing could look like this.

imgObj.onload = function () {
    var optimizer = 12; // scaling factor

    var t0 = performance.now(); // timer start

    var context = getContext(imgObj.width / optimizer, imgObj.height / optimizer);
    context.drawImage(imgObj, 0, 0, imgObj.width / optimizer, imgObj.height / optimizer);
    var imageData = context.getImageData(0, 0, imgObj.width / optimizer, imgObj.height / optimizer);

    var t1 = performance.now(); // timer end
    console.log("Call canvas work took: " + (t1 - t0) + " milliseconds.");
    console.log("W:" + imgObj.width + " H:" + imgObj.height);
    console.log("W*H:" + imgObj.width * imgObj.height);
    console.log("imageData.W*H:" + imageData.data.length / 4);

    loaded && loaded(imageData.data);
};

A quick test on a modest image showed the following promising results.

optimizer set at 12

"Call canvas work took: 0.29999999969732016 milliseconds."
"W:298 H:380"
"W*H:113240"
"imageData.W*H:744"

optimizer set at 1

"Call canvas work took: 5.799999998998828 milliseconds."
"W:298 H:380"
"W*H:113240"
"imageData.W*H:113240"

I think the next step would be to compare this approach with the originally proposed one, to see if letting the browser do the scaling is more efficient than iterating over fewer elements in the ImageData.data array.

Another thing worth checking out is CanvasRenderingContext2D.imageSmoothingEnabled (which defaults to true), as it may also have an effect on performance.
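Toggling it for a test is a one-liner on the 2D context; this sketch reuses the imgObj and optimizer variables from the snippet above:

var canvas = document.createElement('canvas');
canvas.width = imgObj.width / optimizer;
canvas.height = imgObj.height / optimizer;
var ctx = canvas.getContext('2d');
ctx.imageSmoothingEnabled = false; // compare palette output and timing against the default (true)
ctx.drawImage(imgObj, 0, 0, canvas.width, canvas.height);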

@eltonmesquita would you be willing to have a go at providing a set of images and a test harness for the different options?
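If it helps as a starting point, a small harness along these lines could average repeated runs; runOnce is a placeholder callback that should wrap whichever variant (loop skip, scaling, or the current code) is being measured:

// Sketch: time an asynchronous analysis function over several runs and report the average.
function timeRuns(label, runOnce, iterations, done) {
  var times = [];
  (function next(i) {
    if (i === iterations) {
      var avg = times.reduce(function (a, b) { return a + b; }, 0) / times.length;
      console.log(label + ': ' + avg.toFixed(4) + ' ms average over ' + iterations + ' runs');
      if (done) done(avg);
      return;
    }
    var t0 = performance.now();
    runOnce(function () { // runOnce must call back once the analysis has finished
      times.push(performance.now() - t0);
      next(i + 1);
    });
  })(0);
}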

@eltonmesquita (Contributor, Author)

Yep, I've done some quick tests and the performance gains when resizing are impressive, without much loss in the precision of the script. About CanvasRenderingContext2D.imageSmoothingEnabled: I think it affects not only the performance but also the end result, since when scaling down, similar neighboring colors may get blended together, which can be a good thing in some cases. One more option to toggle it might be a good idea.

In one of my tests the script ran in about 40ms, down from the original ~400ms, on a 1082x394 image scaled down 6 times. I even have this version, with images being scaled down, running live on a fairly heavy-traffic e-commerce site, and things have been pretty good so far. You can see it here on the main banner (it sets the background based on the image's predominant color).

I do have a set of about 26 images with different patterns for testing, but it's highly focused on my case, so all the images have nearly the same resolution. I'll try to put together a proper test case next week and post it here, so we can have something more general and get some proper metrics.

@AlfredJKwack (Collaborator)

Sounds promising. I'm in transit and will have a further look when I next can.

A little off topic but have you considered responsive images at all?

@AlfredJKwack (Collaborator) commented Mar 9, 2018

I ran a bunch more tests, alternately skipping 8 pixels in the loop or scaling the image down by a factor of 8, and compared that against normal runs of the library as it stands today. I ran the same image through a dozen or so times by invoking the library from a script.

Results

Test image: 500 x 718
Browser: Chrome Version 64.0.3282.186 (Official Build) (64-bit)
OS: MacOS 10.13.3 (17D102)

Run    Variable      Average runtime
Run 1  Scaling: 1    3.6100 ms
Run 1  LoopSkip: 1   298.6500 ms
Run 2  Scaling: 1    3.9222 ms
Run 2  LoopSkip: 8   54.8667 ms
Run 3  Scaling: 8    1.7000 ms
Run 3  LoopSkip: 1   11.1200 ms

Based on these results I ran some further tests with a 3264 × 2448 image. There's no comparison. Clearly the drawImage() scaling wins out in terms of efficiency. What I do find odd though is that scaling the image is faster than drawing the image normally.

AlfredJKwack pushed a commit that referenced this issue Mar 9, 2018
#30
Implements a new optional feature to speed up processing when dealing
with large images at the cost of some precision.
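
For anyone following along, usage of such an option might look something like the sketch below; the option name "scale" and its default are my assumptions, not necessarily what the PR actually implements:

// Hypothetical usage sketch; 'scale' is a placeholder for whatever the merged option is called.
RGBaster.colors(img, {
  scale: 8, // downscale factor, trading a little precision for speed
  success: function (payload) {
    console.log('Dominant color:', payload.dominant);
  }
});
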
@eltonmesquita (Contributor, Author)

Hello @AlfredJKwack! I don't think a test is needed anymore, right? ;D
I saw the PR and the implementation looks nice. Hoping to see it merged soon.
As the issue is solved and there's a PR already, I'm closing this to keep the repo clean.

What I do find odd though is that scaling the image is faster than drawing the image normally.

Yeah... That's weird. But browser engines are so well optimized these days that it doesn't come as much of a surprise.

About this:

A little off topic but have you considered responsive images at all?

Yeah, but the platform used is a legacy one that doesn't offer much control over the HTML, so I had to resort to some major hacks to load a dedicated image for large screens and another for small screens. Currently it's not possible to use responsive images on this platform, unfortunately. :(

briangonzalez mentioned this issue Nov 28, 2018