
Canvas2D Layers #7329

Closed
Juanmihd opened this issue Nov 12, 2021 · 17 comments
Labels
addition/proposal New features or enhancements needs implementer interest Moving the issue forward requires implementers to express interest topic: canvas

Comments

@Juanmihd

A javascript interface for using layers within canvas.

Provide beginLayer() and endLayer() functions that open and close layers on a canvas. These methods offer a way to apply a filter, shadow, or compositing to a set of drawing operations (as opposed to a single one), without having to create a temporary canvas and then draw that temporary canvas into the intended one.

Working proposal: https://github.com/fserb/canvas2D/blob/master/spec/layers.md

(cc @whatwg/canvas)

@Kaiido
Member

Kaiido commented Nov 13, 2021

I am very supportive of introducing a "layers" concept in the canvas API, and I'm glad to see this proposal.
I'd like to challenge a bit the proposed model though.

Currently the idea is to use the same model as the current save() / restore() one.
I believe a different model using a CanvasLayer interface would be both more powerful and easier to use.

Even though the proposal aims at allowing nested layers, the current beginLayer() / endLayer() model actually forces authors to put all their layer definitions in the same place. Code can thus become very confusing, with no easy means to tell which layer we are really in, since everything sits at the same level.

// draw something on the default layer
ctx.globalCompositeOperation = "source-in";
ctx.beginLayer();
// dozens of lines of code here
// ...
ctx.beginLayer();
// more lines here
// ...
ctx.endLayer();
// wait, which layer is this already?

It is thus also very easy to miss one call to endLayer(), leaving the context completely broken, with no possibility to paint anything on it anymore.
Also, with this model there is no easy way to reuse a layer later on, which makes it a lot less attractive, e.g. for GUIs.
And it is quite disturbing that we call the context's drawing methods but nothing is actually drawn yet; for instance, calls to getImageData() and ctx.drawImage(ctx.canvas) from inside a layer are quite complicated to conceptualize.

And while this is less about the general model: as currently written, every time we call beginLayer() only some properties of the canvas state are saved and reset, until we call endLayer(), where they get restored. I don't see why only a few properties are saved and reset; e.g. if a layer changes fillStyle, we need to set it back to what it was, and thus we need to store that previous fillStyle value before changing it.


So I'd like to bring an alternative model into the discussion, actually based on another proposal in the same repository: Recorded Pictures, which we could rename CanvasLayer.

The idea would be to have a new CanvasLayer interface, including all of the CanvasRenderingContext2D methods except CanvasUserInterface and CanvasImageData, and to add a new method (maybe on the CanvasState mixin) renderLayer(CanvasLayer layer). This CanvasLayer would record a series of commands to be executed by the context in a single pass in the renderLayer() call, which would follow the same rules a drawImage() call follows (i.e. it would be affected by all composite modes and even image smoothing).

So to be exhaustive and in IDL words:

interface mixin CanvasState {
(+)  undefined renderLayer(CanvasLayer layer);
}

interface CanvasLayer {
  constructor();
  CanvasLayer clone();
}
CanvasLayer includes CanvasState;
CanvasLayer includes CanvasTransform;
CanvasLayer includes CanvasCompositing;
CanvasLayer includes CanvasImageSmoothing;
CanvasLayer includes CanvasFillStrokeStyles;
CanvasLayer includes CanvasShadowStyles;
CanvasLayer includes CanvasFilters;
CanvasLayer includes CanvasRect;
CanvasLayer includes CanvasDrawPath;
CanvasLayer includes CanvasText;
CanvasLayer includes CanvasDrawImage;
CanvasLayer includes CanvasPathDrawingStyles;
CanvasLayer includes CanvasTextDrawingStyles;
CanvasLayer includes CanvasPath;

Now we can have in a module something like

const avatarLayer = new CanvasLayer();
avatarLayer.fill(somePath);
// dozens of lines of code

export { avatarLayer };

and in another module

import { avatarLayer } from "./avatar.mjs";
// ... import more layers
const userLayer = new CanvasLayer();
userLayer.globalAlpha = 0.8;
userLayer.renderLayer(avatarLayer);
export { userLayer };

to finally do in the main script

import { userLayer } from "./user.mjs";
// ...
ctx.globalCompositeOperation = "lighter";
ctx.renderLayer(userLayer);

Now the code can be easily and clearly segmented, it's easy to understand which layer we're in and what rules will apply on the rendered layer, and it's easy to understand that calling layer.drawImage(ctx.canvas) will not include the drawing commands that have been executed on the layer thus far.

It may be confusing to think about when the drawings actually occur; e.g. with layer.drawImage(video, x, y) one may think the video frame is grabbed at that moment, when it will actually only be grabbed at the final ctx.renderLayer(layer). But I believe this is still less confusing than the beginLayer() / endLayer() model.

The clone() method is added so that it's possible to "fork" a layer into several branches (cloning all the drawing operations stored until clone() is called). This is something the beginLayer()/endLayer() model can't do: there, we'd have to repeat the code for every fork. Truly recursive layers (layerA.renderLayer(layerB); layerB.renderLayer(layerA)) would throw, which I think is fine, since we can clone them to avoid the issue.
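A minimal user-land sketch of this forking idea (the MiniLayer class below is a hypothetical stand-in for the proposed CanvasLayer, reduced to a bare command recorder):

```javascript
// Hypothetical sketch, not the proposed native API: a layer records commands
// in an array, and clone() copies the commands recorded so far, letting
// later drawing "fork" into independent branches.
class MiniLayer {
  constructor(commands = []) {
    this.commands = commands;
  }
  record(name, ...args) {
    this.commands.push([name, args]);
    return this;
  }
  clone() {
    // Copy the commands stored until now; the two layers diverge afterwards.
    return new MiniLayer(this.commands.slice());
  }
}

const base = new MiniLayer().record("fillRect", 0, 0, 10, 10);
const forkA = base.clone().record("fillText", "A", 0, 0);
const forkB = base.clone().record("fillText", "B", 0, 0);
// base is untouched by either fork
console.log(base.commands.length, forkA.commands.length, forkB.commands.length); // 1 2 2
```

With beginLayer()/endLayer() the shared prefix (here, the fillRect) would have to be re-issued for each fork.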

The main pain point of this model (more in #7329 (comment)) is that it may be too close to the Canvas2D API: one may end up writing code forgetting that drawing new commands to the same CanvasLayer without calling .reset() piles up all the commands. Though that still sounds like a minor pain point compared to those of the other proposal.

@annevk annevk added addition/proposal New features or enhancements needs implementer interest Moving the issue forward requires implementers to express interest labels Nov 17, 2021
@fserb
Contributor

fserb commented Nov 23, 2021

@Kaiido thanks for the great comment. Let me try to address the top comments first, and I'll write a second reply about the other proposal.

And it is quite disturbing to think that we call the context's drawing method, but that nothing is actually drawn yet, for instance calls to getImageData() and ctx.drawImage(ctx.canvas) from inside a layer are quite complicated to conceptualize.

Do you think it would be easier if endLayer() was called renderLayer() instead? The model becomes clearer to think about when you consider that the layer is only rendered at the endLayer() point.

And while less about the general model, as currently written, every time we call beginLayer() only some properties of the canvas state are both saved and reset, until we call endLayer() where they'll get restored. I don't see why only a few properties are saved and reset, e.g if a layer changes fillStyle we need to set it back to what it was, and thus we need to store that previous fillStyle value before changing it.

I think this was a confusion with the explainer. All state is saved/restored (identically to save()/restore()), but only the state that is applied to the layer is reset inside the layer. This seems to be the more intuitive approach, as:

ctx.globalAlpha = 0.5;
ctx.fillStyle = 'red';
ctx.beginLayer();
ctx.fillRect(0, 0, 10, 10);
ctx.fillStyle = 'blue';
ctx.fillRect(10, 10, 10, 10);
ctx.endLayer();

produces the same result as (and leaves the context in the same state as):

ctx.fillStyle = 'red';
ctx.globalAlpha = 0.5;
ctx.save();
ctx.fillRect(0, 0, 10, 10);
ctx.fillStyle = 'blue';
ctx.fillRect(10, 10, 10, 10);
ctx.restore();

This seems to be clear to me, but maybe I'm having some extra assumption that is not obvious.

I see your point that one problem with beginLayer/endLayer is that the drawing commands after beginLayer are for the layer, even though they present themselves as for the canvas, and that could potentially lead to confusion. I'd argue that there's also an upside of this, as everything behaves the same as normal drawing (not having to worry about CTM or sizes). But I agree that "when this is going to be rendered" is a potential confusion point.

@Kaiido
Member

Kaiido commented Nov 24, 2021

Do you think it would be easier if the EndLayer was called RenderLayer instead? Because the model becomes more clear to think about when you consider that the layer is only rendered at the EndLayer point.

Not really. As you note in your post-scriptum, my point is more that it's confusing to call the drawing methods on the context directly.
For instance if we read something like,

ctx.fillStyle = "#00F";
ctx.fillRect(0, 0, 10, 10);
const px = ctx.getImageData(0, 0, 1, 1).data;

one would assume that px will hold the values [0, 0, 255, 255], but this will not be the case in a layer. And once again, it can be quite hard to notice that we're in a layer in large scripts, since the layer code is at the same "level" as the rest of the code. (Sure, one could create {} blocks to better indent their layers, but that sounds kind of hackish.) So whether the method is called endLayer() or renderLayer() doesn't change much in this case, and I find renderLayer() to be even less clear regarding the separation of layers.

I think that by acting on two different objects it becomes clearer that we're not getting the layer's pixels.

layer.fillStyle = "#00F";
layer.fillRect(0, 0, 10, 10);
const px = ctx.getImageData(0, 0, 1, 1).data;

I think this was a confusion with the explainer.

You are entirely right, somehow I still had my first reading of the first alpha-version of the proposal in mind when writing that. The new wording is a lot clearer in this regard.
However I am still not sure that resetting only the "layer rendering attributes" is the best move here.
I guess this boils down to whether users will expect to start from a fresh state or if they'd expect to start from the current context's state. Personally, I would expect a fresh state, probably because that's what the current detached-canvas workaround got me used to. But I can see why one would prefer the alternative too.

At least one point that should be clarified, then, is how ctx.reset() will behave in a layer.

  • Going the "layer rendering attributes" way, I think it would be best if it reset only the properties of the context for that layer, without clearing the canvas buffer. (By the way, what would ctx.clearRect() do in a layer?)
  • Going the "fresh state" way, I think it should ignore that we are in a layer and behave as it currently does, i.e. auto-closing the layer and hard-resetting everything.

Also I assume that the current default path would be temporarily "sand-boxed", right? So doing

ctx.rect(0, 0, 50, 50);
ctx.beginLayer();
ctx.rect(50, 50, 20, 20);
ctx.stroke();
ctx.endLayer();

would only draw the 20 x 20 rectangle in the layer and ignore the one from the parent layer. This current default path is not part of CanvasState, so it would need special care.
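The "sandboxed" default path described above could be sketched as follows (a user-land illustration of the assumed behavior, not spec text): beginLayer() pushes the current path and starts a fresh one, and endLayer() restores the parent layer's path.

```javascript
// Hypothetical sketch of per-layer default-path sandboxing.
const pathStack = [];
let currentPath = [];

function rect(...args) { currentPath.push(["rect", args]); }
function beginLayer() { pathStack.push(currentPath); currentPath = []; }
function endLayer() { currentPath = pathStack.pop(); }

rect(0, 0, 50, 50);       // sub-path on the parent layer
beginLayer();
rect(50, 50, 20, 20);     // sub-path on the inner layer
console.log(currentPath.length); // 1 — only the layer's own rect is visible
endLayer();
console.log(currentPath.length); // 1 — the parent's rect is back
```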

And I'd like to propose also adding the imageSmoothing... attributes to this list; another quite common use case where we currently have to use detached canvases is rendering zoom-ins of small "pixel-perfect" drawings.


I'd argue that there's also an upside of this, as everything behaves the same as normal drawing (not having to worry about CTM or sizes)

I am not sure I entirely follow here. What would be different between this model and the CanvasLayer one with regard to the CTM? The CTM is part of the attributes that get reset, and any transformation that occurs in the layer gets multiplied with the CTM of the parent layer, right? I think CanvasLayers would do the same, and I actually find it less confusing that doing

layer.setTransform(1, 0, 0, 1, 50, 50);
layer.fillRect(0, 0, 50, 50);
ctx.setTransform(1, 0, 0, 1, -50, -50);
ctx.renderLayer(layer);

results in the rectangle being drawn from the context's coords (0, 0) to (50, 50), rather than

ctx.setTransform(1, 0, 0, 1, -50, -50);
ctx.beginLayer();
ctx.setTransform(1, 0, 0, 1, 50, 50);
ctx.fillRect(0, 0, 50, 50);
ctx.endLayer();

doing the same.

Regarding the size, I think I can see the point: you mean if we prepare a CanvasLayer to draw at the bottom-right corner of the canvas, but then the canvas is shrunk and our layer ends up off-screen, because canvas.width was read before the shrinking happened? That's a good point, which can be solved either by updating the layers on resize (which would happen every frame in the beginLayer/endLayer model anyway), or by using the CTM to place the layer, basically like one would do with the current detached-canvas workarounds.

@fserb
Contributor

fserb commented Nov 24, 2021

I do like the RecordedPicture proposal quite a lot (no surprise there, given that I proposed it in the first place ;) ), and I agree that it does address a similar issue to the layer proposal (whatever name we want to call it).

I'm not against an alternative proposal, and my initial thinking was very similar to yours (that RecordedPicture was more generic, and Layer was too specific and a bit confusing). Over time I convinced myself that the layer is probably a better solution, and I'll try to go over some of the arguments that moved me there (and that were brought up when I presented RecordedPicture to other folks). But I welcome more conversations over this.

(I'm using RecordedPicture nomenclature here instead of your proposed CanvasLayer just to keep it less confusing).

  1. One problem we had with RecordedPicture was that it was not clear whether we would have to make a copy of it every time drawPicture() is called. The alternatives were to make it immutable (with a .freeze() method, which looks ugly) or to make a copy at every drawPicture(). Neither looked very attractive, and it was not clear at that point whether a copy-on-write mechanic could be implemented at all to address this transparently.

  2. There was also the issue with drawImage, as you mentioned: it was not clear whether the intuitive (and/or useful) behavior is to use the image at layer construction time or at layer render time.

  3. Another problem was the behavior of canvas state changes: what subset of the state gets used inside the picture? The reason this differs from layers is that display lists may not guarantee that all the operations are applied at the same point rather than in sequence. If we force them to be so (i.e., drawPicture is actually beginLayer(); drawPicture(); endLayer();), we may lose all the performance gains of having a display-list primitive (like the original proposal), as this is much more expensive than simply attaching a display list. We also end up in the same situation of having to decide which state gets used (what you call 'same rule as drawImage', which is, in a way, the same thing the beginLayer/endLayer proposal has).

  4. It was not very clear whether all implementations could make this RecordedPicture completely canvas-independent (i.e., usable on multiple canvases) in an easy way.

  5. I remember some folks also not being super happy about having another object that mirrors all the functions of a canvas context without being one, but that was a minor thing.
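Point 1 above can be illustrated with a naive array-backed recorder (hypothetical code, not the RecordedPicture API): if drawPicture() only keeps a reference, later mutations of the picture retroactively change what was already "drawn", while copying on every call is safe but costs a full copy per draw.

```javascript
// Sketch of the copy-vs-reference ambiguity in point 1 (hypothetical).
const picture = [];
picture.push(["fillRect", [0, 0, 10, 10]]);

const drawnByReference = picture;    // cheap, but aliases the live picture
const drawnByCopy = picture.slice(); // safe snapshot, but a full copy per draw

picture.push(["fillRect", [20, 20, 10, 10]]); // the author keeps drawing

console.log(drawnByReference.length); // 2 — the "drawn" list changed under us
console.log(drawnByCopy.length);      // 1 — the snapshot is preserved
```

Copy-on-write would give the snapshot semantics without the eager copy, at the cost of implementation complexity.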

Those are the ones I remember offhand; I'd have to dig into some past discussions to find the other arguments. Again, I'm not against it. I'm just bringing up some potential downsides of this other approach, so we can compare them with the slight confusion of layers.

Also, it's interesting to note that the semantics of other 2D APIs match the layer one (CoreGraphics's BeginTransparencyLayer, Skia's saveLayer, etc.).

@Kaiido
Member

Kaiido commented Nov 25, 2021

Interesting points, thanks for them.

I should probably add a reminder that I am not an implementer; I only had the ergonomics as a user and JS dev in mind when I came to the same idea as yours here. That's also why I took care not to mention the potential performance advantages of the RecordedPicture, since I really have no idea whether it would have any.

Naively, I thought of it only as something that would store a list of method names and arguments that would then get called on the target context. As a very schematic JS implementation:

const actions = new WeakMap(); // stores an array of recorded commands per picture
// keeping RecordedPicture instead of CanvasLayer for clarity
RecordedPicture.prototype.fillRect = function(...args) { actions.get(this).push(["fillRect", args]); };
// repeat for every method, and do something more complicated for setters & getters
CanvasRenderingContext2D.prototype.drawPicture = function(picture) {
  const { width, height } = magicallyGetTheSizeOfPicture(picture);
  const detachedCanvas = new OffscreenCanvas(width, height);
  const detachedCtx = detachedCanvas.getContext("2d");
  actions.get(picture).forEach(([method, args]) => detachedCtx[method](...args));
  this.drawImage(detachedCanvas, 0, 0);
};

So regarding the first point, with this kind of implementation in mind, I am not sure what would need to be copied. For me everything would just be reapplied on the context at rendering, and stored as JS values (or seamlessly so, at least).
This would also force drawImage to use the state of the source at the time of rendering, which we agree might be a point of confusion for authors.

But it seems you had a very different idea of how this would be implemented, and I am unfortunately unable to get a grasp of it. As such, the third point is quite opaque to me; I am really not sure what kind of performance advantages either solution would offer, and I will take your word for it. For the state of the canvas: yes, it would probably be the same list as the "layer rendering attributes" list, but here there is no save()/reset()/restore() mechanism implied.

@Juanmihd
Author

Thanks @Kaiido for the comments and suggestions! And thanks @fserb as well for your inputs!

Going back to the original proposal, I made a quick change trying to better model the behavior when the layer stays open at the end of the frame.
At the end of the frame, a layer that was not closed will be rasterized, and in the next frame the layer starts empty and can still be used (and closed). This behaves as if, at the end of the frame, the layer had been closed and reopened, while keeping the same state as the original one.

We tried to model the layers proposal to follow the current save/restore behavior closely. This way it will be more consistent with what some web developers are already used to, they will know how to work with it, and it will also be an incremental change.


Regarding RecordedPicture, I also like that idea a lot (I was not around when it was initially proposed), but I agree with @fserb on the issues with that proposal. I also see how, even though it solves the same problem as beginLayer/endLayer, it solves many other problems as well, which brings extra challenges to define it properly in the spec.
It could be worth exploring it again as a separate proposal?

@Kaiido
Member

Kaiido commented Dec 1, 2021

(I commented on the "auto-endLayer" idea on your commit directly, to avoid making this already quite big thread even bigger.)

It could be worth exploring it again as a separate proposal?

I fear that if a layer feature is already available, there won't be as much traction for a RecordedPicture proposal, since the layering abilities it offers seem to be its most noticeable improvement.

@Juanmihd
Author

Juanmihd commented Dec 7, 2021

From what fserb mentioned, I still don't think there is much traction for RecordedPicture, as it was discarded in the past.
I see how these two features could exclude each other but, on the other hand, they also solve different problems, so I think it is still valuable to explore them on separate paths.

I still think it'd be better to keep this issue's discussion focused on the current proposal, and probably create a new one for RecordedPicture; it would be hard to discuss two different proposals in a single issue.

@Juanmihd
Author

Juanmihd commented Dec 9, 2021

cc @jdashg @litherum, could you please take a look at this issue about adding beginLayer and endLayer, and at the linked explainer, and share your thoughts? Thanks!

@kdashg

kdashg commented Dec 9, 2021

Why not continue to use separate canvases? What are the benefits of this versus the status quo?
It does look implementable, but it does add complexity to an already-very-complex API. Simplicity and composability being a feature, I want a compelling case for adding complexity here.

@Juanmihd
Author

Juanmihd commented Dec 9, 2021

The two main reasons, from the browser developers' side, are:

  1. The browser can now take care of deciding the best dimensions for the temporary surface, given the current transform/clip. The developer does not have to worry about how big the canvas has to be.
  2. The browser can also find ways to implement this without necessarily creating an auxiliary canvas. That offers benefits beyond the size one mentioned in 1): it allows browsers to potentially improve memory usage (no auxiliary canvas to allocate) and performance (as there is no need to draw things into one canvas and then draw that canvas into the other; the draw operations could all be done directly on the intended canvas).

For web developers who only need an auxiliary canvas for one specific drawing, it will reduce name pollution (avoiding aux_canvas, canvas2, aux_ctx, ctx2...), without making things too complex.

This beginLayer/endLayer could follow the same logic as save/restore with regard to the full current state of the canvas, making it easier to reason about both for browsers and for web developers.

@kdashg

kdashg commented Dec 10, 2021

How do we address the portability concerns of browsers choosing sizes? Generally this causes calcification of heuristics as we eventually standardize on the de facto implementation, as alternative browsers respond to web-compat pressures.

@Juanmihd
Author

The way I phrased point one is indeed misleading. The point probably makes more sense as:

  1. The developer does not have to create a temporary canvas or specify any size. The browser will produce the drawing equivalent to having a temporary canvas with the minimum size required, given the current transform/clip.

The idea of this addition to the spec is precisely to allow browsers to do that, as mentioned in point 2.

The layer should nevertheless behave in the same way as creating the smallest canvas possible that would allow drawing its contents to the intended canvas.
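As an illustration of what "minimum size required, given the current transform/clip" could mean, here is a rough sketch (assumed math, not spec text): map the layer's drawn bounds through the current transform matrix and intersect the result with the clip rectangle.

```javascript
// Hypothetical sketch of computing a layer's minimal backing surface.
function transformedBounds({ x, y, w, h }, [a, b, c, d, e, f]) {
  // Map the four corners through the matrix [a b c d e f], take the AABB.
  const corners = [
    [x, y], [x + w, y], [x, y + h], [x + w, y + h],
  ].map(([px, py]) => [a * px + c * py + e, b * px + d * py + f]);
  const xs = corners.map((p) => p[0]);
  const ys = corners.map((p) => p[1]);
  return {
    x: Math.min(...xs),
    y: Math.min(...ys),
    w: Math.max(...xs) - Math.min(...xs),
    h: Math.max(...ys) - Math.min(...ys),
  };
}

function intersect(r1, r2) {
  const x = Math.max(r1.x, r2.x);
  const y = Math.max(r1.y, r2.y);
  const w = Math.max(0, Math.min(r1.x + r1.w, r2.x + r2.w) - x);
  const h = Math.max(0, Math.min(r1.y + r1.h, r2.y + r2.h) - y);
  return { x, y, w, h };
}

// A 100x100 drawing, scaled by 2, clipped to the 150x150 top-left corner:
const bounds = transformedBounds({ x: 0, y: 0, w: 100, h: 100 }, [2, 0, 0, 2, 0, 0]);
const clipped = intersect(bounds, { x: 0, y: 0, w: 150, h: 150 });
console.log(clipped); // { x: 0, y: 0, w: 150, h: 150 }
```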

@Juanmihd Juanmihd mentioned this issue Jan 27, 2022
@Kaiido
Member

Kaiido commented Feb 11, 2022

I think we should really continue discussing the API design before going forward with this, and per the new features guidelines, I believe this is actually the right place to do so.

The recent PR and the current implementation in Canary only cement my initial doubts regarding this design. And to be clear, I don't doubt at all the use case, nor the need for a solution to it.

The main grievance I have against the beginLayer()/endLayer() design (on top of the ones I already raised before) is that many points are based only on non-obvious empirical choices. For instance, why are not all the properties of the context reset after beginLayer()? Why is imageSmoothingEnabled not in the list of layer drawing atributes(sic)? Why does ctx.reset() break the layer entirely1? Why doesn't beginLayer()/endLayer() touch the current sub-path at all1? Why does calling drawImage(canvas)1 while defining a layer make the layer get rendered up to that point, then resumed, but now with the half-baked layer already rendered2?
I'm not saying that all these choices are necessarily bad; my point is that it's very unfortunate that we have to make such choices backed basically only by gut feelings, for every situation. It's almost certain that we won't handle all the possible edge cases by doing so. For instance, the current PR is quite unclear as to what calling .restore() from inside a layer should produce, and indeed in the current Canary implementation, save()/restore() from inside layers is broken.
We also end up inventing new concepts like this undefined "frame" idea, which is apparently1 made up of the next event-loop iteration, canvas-used-as-source-image, getImageData, toBlob and toDataURL, and, more surprisingly, putImageData.

For every such choice we make here, we will need thorough and extensive docs for users to be able to know what will happen, since they won't be able to deduce it logically.

I'm pretty confident that we can come up with an alternative design where all these questions wouldn't even need answers because the design itself would force a logical behavior.

I probably didn't see it all through with my proposed CanvasLayer interface either, but from here it seems that at least all these points would get clear answers, and that it would get us closer to the goal stated in the PR's note:

"When rendering the Canvas layer it has to behave the same way as if creating an auxiliary canvas with the content of the layer, and drawing it to the original canvas with the drawImage() method."

Footnotes

  1. Based on tests made on the current Canary implementation; the current spec PR doesn't handle these points explicitly yet.

  2. Based on my reading of the PR; Canary actually renders the "remaining" of the layer as if it wasn't in a layer at all.

@Juanmihd
Author

After some deliberation and re-prioritization, I think we will stop pursuing this spec change for the moment.

It's clear that there are still many open questions before arriving at a consensus, and this proposal is probably not yet fully shaped and defined. The current prototype in Chrome has some issues that make layers close unexpectedly at times, which actually makes it more confusing to argue about.

We may bring this idea of layers back in the future, and we can probably get to a better proposal by taking into account all of Kaiido's suggestions, and work together on something better and more beneficial for the web :)

Thanks for all the reviews and comments!

@Kaiido
Member

Kaiido commented May 20, 2022

In the hope it helps the discussion, I built a user-land prototype of my CanvasLayer interface proposal.
Source is available here, with a very simple playground here and a more complex demo here.

Doing so, I must admit that I found a few issues with this proposal that I wasn't envisioning at first:

Implementation

This is a JS implementation, dealing only with what the current API offers, so I expect it to be far more complex than what native implementations would look like. Still, while building it I discovered a few pitfalls:

Getters and setters are relatively awkward with this model.

Indeed, getters ought to work synchronously, so that layer.fillStyle = gradient; layer.fillStyle.addColorStop(1, 'red'); just works. This doesn't concern only the attributes: it also applies to methods like setLineDash(), and even to all the CTM-related methods, so that getTransform() works.
This means that all these "setters" must actually be called twice: once synchronously and once in renderLayer(). (In my implementation they're even called a third time, when computing the layer's bounding box.)
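The "called twice" observation can be sketched with a toy recorder (hypothetical code; a real implementation would apply state to an actual scratch context): each property set is applied synchronously to a scratch object, so reads work immediately, and is also recorded for replay at render time.

```javascript
// Hypothetical sketch of "setters run twice" in a recording layer.
function makeLayer() {
  const scratch = {};   // synchronous state, so layer.fillStyle reads back
  const commands = [];
  const layer = new Proxy({}, {
    set(_, prop, value) {
      scratch[prop] = value;        // 1st call: synchronous, for getters
      commands.push([prop, value]); // recorded for the 2nd call at render time
      return true;
    },
    get(_, prop) { return scratch[prop]; },
  });
  const replayOn = (ctx) => { for (const [p, v] of commands) ctx[p] = v; };
  return { layer, replayOn };
}

const { layer, replayOn } = makeLayer();
layer.fillStyle = "#00F";
console.log(layer.fillStyle); // "#00F" — readable before any rendering

const fakeCtx = {};
replayOn(fakeCtx);            // 2nd call, at renderLayer() time
console.log(fakeCtx.fillStyle); // "#00F"
```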
Then come relative units, in font and in some filter values. With something like

layer.font = "1vmin sans-serif";
layer.strokeRect(x, y, layer.measureText(text).width, 2);
layer.fillText(text, x, y);

the stroked rect would use the size of the window when the CanvasLayer was built, while the text would use the size of the window when the layer is rendered onto a canvas. Arguably, these relative units are already a mess, so I personally don't think this is a big drawback.

I'm not sure what implementers and spec editors will think of all that, but I believe that, from a user perspective, having these setters work is a must.

Auto-sizing was hard (in user-land).

Automagically finding the size of the layer based on the drawing input was really hard in this user-land implementation. I had to replay all the saved commands in a first batch (without the painting ones) to determine the bounding box of the current layer. From the comments, I assume that at least in Skia there is something that would help here, but I'm not sure whether this is applicable to all engines.
While fighting with this, I kept thinking that new CanvasLayer(width, height) could be an acceptable compromise with this model: it would help determine what size the layer would end up at on the target, without preventing the engines that can from performing more clever optimizations.

Usability

This is a different model than a 2D context but maybe a bit too similar.

While making the demo animation, I shot myself in the foot by not calling layer.reset() at the beginning of each frame. Since the whole context gets wiped when we call ctx.reset(), my implementation removes all the commands previously stored in the layer when layer.reset() is called. Given that ctx.reset() is relatively recent (and not yet implemented everywhere), I still have the habit of clearing my context with a simple ctx.resetTransform(); ctx.clearRect(0, 0, width, height); ctx.beginPath(); at every frame. And so I did the same here with a CanvasLayer... The result was that I was adding new commands to the CanvasLayer every frame, and in no time my computer's fans were trying to make it fly.
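The pitfall above can be reproduced with a bare-bones recorder (hypothetical code, not the prototype's actual source): without reset(), the command list keeps growing across frames.

```javascript
// Hypothetical sketch of the per-frame reset() pitfall.
class RecorderLayer {
  constructor() { this.commands = []; }
  fillRect(...args) { this.commands.push(["fillRect", args]); }
  reset() { this.commands.length = 0; }
}

const leaky = new RecorderLayer();
for (let frame = 0; frame < 3; frame++) {
  // Forgetting leaky.reset() here means 3 frames pile up 3 commands:
  leaky.fillRect(0, 0, 10, 10);
}
console.log(leaky.commands.length); // 3 — one leftover command per frame

const clean = new RecorderLayer();
for (let frame = 0; frame < 3; frame++) {
  clean.reset(); // correct per-frame pattern: start each frame empty
  clean.fillRect(0, 0, 10, 10);
}
console.log(clean.commands.length); // 1
```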


Apart from these few points, I still believe we absolutely need a layering API. I still think this CanvasLayer interface model requires fewer arbitrary decisions and is easier to reason about than the beginLayer()/endLayer() one. I also still think it's far easier to use and allows writing cleaner code.
But I also think we need more discussion about it, at least to overcome these few issues. And I would be very glad if this prototype could help you all make the best choices on this path.

@Kaiido
Member

Kaiido commented Aug 26, 2022

Playing a bit more with my prototype, I faced another case that I wasn't expecting and that neither the beginLayer()/endLayer() model nor mine was handling (until now): cyclic layers.
In some cases it's useful to start drawing on a layer and then create forks from it.
With the beginLayer()/endLayer() model, we'd need to call all the drawing commands again for each fork. With the CanvasLayer interface we could face recursion issues (a layer trying to render itself).
I thus added a new .clone() method to my prototype to allow this use case. The idea is that all the commands stored on the layer until .clone() is called are copied to the new CanvasLayer instance. This doesn't prevent cyclic layers, but makes it possible to work around them: cyclic layers are now detected in renderLayer(), which throws a TypeError.
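A user-land sketch of this behavior (hypothetical Layer and render names, mirroring the prototype's idea rather than the proposed native API): rendering walks nested layers with a set of layers currently on the stack, throwing a TypeError when it meets one of them again, while clone() lets you snapshot a layer before a back-reference would create a cycle.

```javascript
// Hypothetical sketch of cycle detection plus the clone() workaround.
class Layer {
  constructor(commands = []) { this.commands = commands; }
  fillRect(...args) { this.commands.push(["fillRect", args]); }
  renderLayer(other) { this.commands.push(["renderLayer", [other]]); }
  clone() { return new Layer(this.commands.slice()); } // snapshot so far
}

function render(layer, rendering = new Set()) {
  if (rendering.has(layer)) throw new TypeError("cyclic layer");
  rendering.add(layer);
  let ops = 0; // count primitive operations instead of actually painting
  for (const [name, args] of layer.commands) {
    ops += name === "renderLayer" ? render(args[0], rendering) : 1;
  }
  rendering.delete(layer);
  return ops;
}

// Workaround for a would-be cycle: snapshot `a` with clone() before making
// `a` render `b`, so `b` holds a copy with no back-reference.
const a = new Layer();
a.fillRect(0, 0, 10, 10);
const b = new Layer();
b.renderLayer(a.clone()); // copies a's commands recorded so far
a.renderLayer(b);         // a direct b.renderLayer(a) here would be cyclic
console.log(render(a));   // 2: a's own rect + the snapshot's rect via b
```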

I edited my original comment to add this method in the IDL.
