The canvas clearing test is a complete fantasy as long as it never actually draws anything. It's trivial to optimize clearing an already cleared render target (modern GPUs make a clear inexpensive if not free) and the fact that you never draw anything to it after the clear means that browsers can trivially identify that they have no need to do any actual work either. Furthermore, you're not doing anything with the results of the rendering operation, and some browsers will trivially optimize rendering operations that are never presented (for example, rendering to an offscreen canvas and never using the output can be deferred in some Chrome configurations).
This test should not be included or scored until it's real.
A trick I use quite often is to get the value of a pixel at the end of the operation. This flushes the pending painting operations. It may have negative side effects, e.g. preventing an otherwise useful optimization.
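A minimal sketch of that pixel-readback trick, under the assumption of a 2D context `ctx` (the function name `flushCanvas` is made up for illustration):

```javascript
// Force the browser to flush pending canvas paints by reading one pixel back.
// Reading pixels can only happen after all queued drawing completes, so a
// timer stopped after this call includes the real cost of the clear.
// Caveat: getImageData can itself be slow (e.g. GPU readback), which may
// skew the measurement or defeat otherwise useful optimizations.
function flushCanvas(ctx) {
  // Read a single pixel; the returned data is deliberately ignored.
  ctx.getImageData(0, 0, 1, 1);
}

// Browser-only usage sketch: clear, then flush before stopping the timer.
if (typeof document !== 'undefined') {
  const canvas = document.createElement('canvas');
  const ctx = canvas.getContext('2d');
  const start = performance.now();
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  flushCanvas(ctx);
  const elapsed = performance.now() - start;
}
```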
Yeah, it's understandable to avoid things like getImageData or whatever. I would recommend just ensuring that a visible canvas is in the DOM and rendering the cleared canvas into it. That will flush any pending paint operations without requiring any slow paths (like GPU readback).
Note that a sufficiently clever browser could still optimize that, though. Not sure how much you can do to make it completely real, but that would be a start.
A quick test at my end suggests that the browser is doing a full clear of the context irrespective of whether anything has been drawn into it or not. Might be worth checking that yourself to see if you're seeing the same thing.
Right now, not all browser implementations of the canvas element are hardware accelerated. So while I totally take your point that clearing is an optimised path in the GPU context, especially when nothing has been drawn, there are still some implementations for which this is a pain point.
Currently we don't have a pre-test function that could write something into the canvas, so we'd end up polluting the test with draw calls right now. We could either look at whether it's doable to add a pre-test, or accept that, as browser vendors switch to hardware-accelerated canvases, this test becomes irrelevant. I would personally rather do the former, as I know many developers currently run up against canvas context clearing speed and typically have to do their own dirty region management or pull other stunts.
Edit: Also, forgot to say, the canvas is in the DOM, it's appended to the container element as part of the setUp() call.
Ah, if the canvas is in the DOM, that makes it more likely that the rendering operations will at least actually occur. That's good.
Even if this theoretically represents a real issue (i.e. that the cost of clear operations matters to developers, and I can believe that), it still arguably doesn't make sense to clear after performing no rendering. In the real world, there would be no reason to do dirty rectangle management or anything else if your goal was to have a blank canvas. A test like this doesn't need to be completely real-world, but it should at least do a small amount of rendering so that the cost of an actual clear operation can be measured. The cost of the rendering can be factored out by measuring it on its own, if people are worried about it skewing the scores.
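Factoring out the render cost could look roughly like this hypothetical harness (assumes a 2D context `ctx`; `timePerCall` and `estimateClearCost` are made-up names, and a real run would also need a flush between samples):

```javascript
// Time fn over n iterations, returning average milliseconds per call.
function timePerCall(fn, n) {
  const t0 = performance.now();
  for (let i = 0; i < n; i++) fn();
  return (performance.now() - t0) / n;
}

// Estimate the cost of clearRect alone: measure draw-plus-clear, measure
// the draw by itself, and subtract. The small draw ensures the backing
// store exists and the clear can't be skipped as redundant.
function estimateClearCost(ctx, n) {
  const draw = () => ctx.fillRect(0, 0, 8, 8); // small, cheap draw
  const drawAndClear = () => {
    draw();
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  };
  return timePerCall(drawAndClear, n) - timePerCall(draw, n);
}
```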
I should also note that optimizing clears of an already cleared context can be done even in software, so if this benchmark becomes important to browser developers you could see this test turned into a no-op even in software rendering paths.
The dirty rectangle management thing is a little weird to me also. If the cost of clearing a surface is so significant that you're applying dirty rectangles to the problem, your actual rendering operations seem like they would cost even more, unless you're repainting a canvas that is largely empty. I've seen clearRect show up in my profiles, but I've also yet to find a faster alternative, and simply not clearing the canvas is not an option unless what you're painting is opaque.
It's important to understand what's actually being measured here and compare that with what you're actually intending to measure.
If your goal is to specifically measure the cost of clear operations, you should compare them with alternatives: compare clearRect against fillRect, against resizing the canvas, and against creating a new canvas.
If your goal is to try and measure the cost of a clear as part of a larger drawing operation, you need to control factors here: Like I mentioned above, a mostly-empty scene with a small object in it will obviously be dominated by time spent in clearRect, but most real world scenes would likely not be so empty, unless the application is making unusual use of canvases for layering and compositing (in which case arguably it is just an unoptimized application).
If your goal is to measure pure GPU pixel throughput, benchmarking clears is a poor choice and it would instead make sense to benchmark doing a full-canvas fill with a different color each time, and you probably would want to disable alpha blending (though you should note that doing that currently deoptimizes canvas in some versions of Firefox). On the other hand, GPUs and software implementations could easily optimize a solid color fill or full screen primitive just as well as a clear. You might have to resort to blitting images to measure throughput (which sucks, since that would also be factoring in texture read performance).
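A sketch of the varying-fill idea (the `alpha: false` context option requests an opaque backing store; `fillFrame` is a made-up name, and the loop is browser-only):

```javascript
// Fill the whole canvas with a different color each frame, so neither a
// GPU nor a software path can skip the work as redundant.
function fillFrame(ctx, frame) {
  // Vary the hue per frame so consecutive fills are never identical.
  ctx.fillStyle = `hsl(${frame % 360}, 100%, 50%)`;
  ctx.fillRect(0, 0, ctx.canvas.width, ctx.canvas.height);
}

if (typeof document !== 'undefined') {
  // As noted above, alpha: false may deoptimize canvas in some versions
  // of Firefox, so it would be worth measuring both ways.
  const canvas = document.createElement('canvas');
  const ctx = canvas.getContext('2d', { alpha: false });
  for (let frame = 0; frame < 100; frame++) fillFrame(ctx, frame);
}
```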
So there are a few things here:
I wrote the test to show that clearing the context was a disproportionately expensive operation when compared to relatively simple drawing, i.e. that clearing the canvas just takes too long. (This, of course, is a subjective issue as to what we classify as simple.) My concern about putting something in the canvas is that it potentially skews the results, but equally I see the concern that right now it's easy to game, or it's tapping into a special case.
Since implementations and optimisations have improved, I'm personally happy to see this test go. That said, I wouldn't want to remove it if devs are still hitting this problem in their work, but from your feedback and others' in #43 that seems to not be the case. If it is the case, then I think it's more about finding a way to adequately prove that clearing is an unusually slow op.
In Gecko, we actually have two separate canvas implementations. (The details of why are not relevant.) Both lazily create the graphics object that backs the canvas. Because you've never drawn anything to the canvas, that graphics object does not exist, and we end up taking the early returns at http://hg.mozilla.org/mozilla-central/annotate/654489f4be25/content/canvas/src/nsCanvasRenderingContext2D.cpp#l2165 or http://hg.mozilla.org/mozilla-central/annotate/654489f4be25/content/canvas/src/nsCanvasRenderingContext2DAzure.cpp#l1969. Thus all this tests is how fast we can call into C++ and return to JS.
@khuey OK, gotcha. Any suggestions on how we can ensure that we are specifically testing the speed of clearing the context?
@paullewis You want to make sure you draw something into the canvas before clearing it. Ideally before every clear call.
@bzbarsky OK, fine with that. If everyone else is happy that it's more representative I'll add that in.
@paullewis Ideally, what we would use here is an actual real-life example where clearRect performance is a problem, as described in your "few things here" comment above.
@bzbarsky Definitely, although separating out the drawing part from the clearing part is my concern here. The only solution I can think of is to see if a pre-test call can be used to populate the canvas, but failing that I will see about dropping the test. Unless anyone has better ideas.
If this originates with applications that were impacted by the comparatively large cost of clearRect in software canvas backends, then it actually makes way more sense to just measure the cost of a realistic canvas workload that does the things those apps did - clear the canvas, blit some images to it. That's something almost any application will want to do. It's true that you aren't measuring a single API call anymore, but that's fine, because a single API call is meaningless in many cases - a clear of an already cleared canvas is meaningless, and so is blitting an image on top of itself if there's no alpha channel involved. Drawing a complete scene multiple times is a lot better, and will provide a meaningful benchmark value regardless of how the scene actually gets rendered.
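That kind of realistic per-frame workload could be as simple as this sketch (`renderScene` is a hypothetical name; `sprites` would be preloaded Image or Canvas objects in a real app):

```javascript
// Render one "scene" the way a typical canvas app would: clear the whole
// surface, then blit some sprites on top. Benchmarking repeated calls to
// this measures clear cost in context rather than as an isolated no-op.
function renderScene(ctx, sprites) {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  for (const s of sprites) {
    ctx.drawImage(s.image, s.x, s.y);
  }
}
```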
@kevingadd The problem with that approach, if I understand you correctly, would be that we would be deriving very broad conclusions, i.e. "canvas drawing is slow" when, in fact, as you say, there are specific operations that are more costly and therefore cause pain points for developers. That's the origin of this particular test, as I said, that historically clearing a context has been expensive (when the back end is in software) to the degree that developers have been forced to come up with alternative approaches.
The reason I think this particular test is interesting and important is that sometimes one has no choice but to clear the canvas. If the canvas is composited on top of other elements, filling it with a colour isn't going to work. In those situations there is no meaningful workaround, which means the developer must use the clearRect call, and if clearing the context is slow, then it's a pain point. Conversely, when it comes to drawing, they can potentially draw less or draw differently. That is, they have options.
I'm really encouraged to see such a great discussion about how to improve the test.
I'm not an expert in this area, but would, say, drawing a diagonal line from one side of the canvas to the other and then calling clear address the problem in a way that existing browser implementations wouldn't be able to optimize? That's a simple enough draw call that hopefully it wouldn't dilute the clearRect part of the performance too much.
@paullewis You're basically asking "ok, the microbenchmark is currently flawed; how can we fix it?" The answer is "you can't". It's an inherent problem with microbenchmarks. They are occasionally useful, but a hopeless foundation for a general performance benchmark suite. See Issue #67 for more.
If the claim is that all the tests in this suite are based on actual "pain points" that exist in real web applications, I would be very curious to see what real code has clearing a canvas like this on its critical path.
If such code does exist, can we just make that the benchmark?
If such code doesn't exist, then doesn't this benchmark fail the (too permissive, see #67) basic criterion for inclusion in this suite of benchmarks?