
Questions #3

Open
raizam opened this issue Jun 19, 2017 · 26 comments

Comments

@raizam

raizam commented Jun 19, 2017

I have a couple of questions; sorry if some of them seem stupid, but maybe you can clear things up for me:

  1. What type of matrix should be supplied to ApplyTransform (3x2, 4x4)?

  2. Is the supplied matrix global to the IRenderer, or relative to the current state in the state stack (PushState/PopState methods)?

  3. What are the pros/cons of using Shape instances? Are they faster? Are they suited only for static shapes, or can they be animated?

  4. Is this library suited for complex animations (dynamic shapes + dynamic colors)?

  5. Is it possible to rasterize a Shape into a texture?

  6. Are the shape vertices stored in the vertex buffer, or are they just quads, with the shaders responsible for drawing the outline?

  7. What is the best approach to implement colliders/mouse input detection?

Thank you

@jdryg
Owner

jdryg commented Jun 19, 2017

  1. It's a 3x2 matrix; the same matrix you'd use in NanoVG (in case you've worked with NanoVG before).

  2. Relative to the current state.

  3. Shapes are supposed to be static. When you build a shape, all the commands are recorded in a command list. Later, you just submit the shape to the renderer instead of executing all the path commands. If your paths are animated, you might be better off executing the commands manually, because there's overhead in recreating the command list every frame.

Submitting a shape is faster than executing the individual commands every frame in the following cases:

  • If the average scale of the transformation matrix is constant between frames, the generated geometry is cached and just copied into the frame's VB/IB. This requires VG_CONFIG_ENABLE_SHAPE_CACHING to be enabled.
  • The shape has concave polygons. In contrast to NanoVG, which uses the stencil buffer to render concave polygons correctly, BGFXVGRenderer decomposes the polygon into convex parts. The decomposition step is costly, so caching the resulting convex polygons in the shape helps.
  4. It depends. Performance mostly scales with the number of vertices your paths have and the types of strokes (joins/caps) you are using. AA also affects performance because it generates a lot more vertices. But if all those factors stay constant over time, animating your paths/control points shouldn't affect performance much. E.g. animating a bezier from a straight line to a quarter circle will increase the number of generated vertices, but I don't expect that to affect performance much.

In other words, if you can render a static frame of your scene fast enough, animating it shouldn't make a big difference. Culling (on your side) might also help but again it really depends on the use case.

  5. Currently no, but I guess that defeats the purpose of vector graphics, doesn't it? :) It might help in certain cases if it were handled internally by the renderer, but it's not currently implemented.
    You can try it out on your own if you want, by submitting all the draw calls to an off-screen render target/view.

  6. All vertices are stored in vertex/index buffers. The shaders just render the geometry; there's no outline tracing in the shaders.

  7. Unfortunately, I cannot comment on this one. I understand that it might be useful to have the resulting/rendered geometry as collision geometry to pass to your physics engine, but there's currently no such functionality. Your best bet at the moment is to approximate the final shape of your paths and give those approximations to the physics engine. E.g. a 100px x 100px quad with a 5px stroke has an outer size of 105x105 (half the stroke ends up inside the quad and the other half on the outside).

Hope I answered your questions adequately :) If not, say so and I'll try to give more detailed answers.

@jdryg
Owner

jdryg commented Nov 20, 2017

Hello again,

Don't know if you ever used the code and/or are still using it, but here goes...

Regarding q7 of your list: I just committed some changes to the experimental branch. The path functionality has been moved out of the renderer into a separate class (path.cpp/.h). It might be helpful for generating paths for collision and mouse picking.

Instead of executing the path commands on the renderer instance, create a vg::Path, execute the commands on it, and then get back the generated vertices. You can treat each sub-path as a concave polygon which you can pass to your physics engine of choice or perform point-in-polygon tests for mouse picking.

I also plan on moving the stroke and fill functions out of the renderer, to clean things up a bit more. The stroker might be more useful for generating collision geometry if you use large stroke widths.

@raizam
Author

raizam commented Nov 23, 2017

Hello. I'm not currently using it; all I have done is export C functions so it can be used from C#, but no further work on it. I probably will in the future.

I feel like these changes are heading in the right direction.
To give my opinion on this: this library would be really useful as a raw triangulation library, without any dependency, not even bgfx. A dependency-free SVG triangulation lib would be very popular, I believe.

@jdryg
Owner

jdryg commented Nov 24, 2017

To give my opinion on this, this library would be really useful as a raw triangulation library, without any dependency, not even bgfx.

If I manage to separate the stroke/fill functions from the renderer, it'll probably be close to what you describe. The only dependency I'll probably keep is bx, because I like the AllocatorI interface (I'm currently using it in my code) and the idea of being able to easily compile the code without linking to the CRT (I haven't tried that, but I think it's an option for most of the code I'm using).

@raizam
Author

raizam commented Nov 24, 2017

Yeah, bx is a cool lib. I think it's header-only, so it's not a real dependency; you could add it as a submodule.
This project is getting exciting :)

@raizam
Author

raizam commented Nov 24, 2017

You could get some inspiration from the Nuklear and ImGui libraries, both dependency-free and shipped without a renderer.
For example, you'll need to expose a vertex type like:

struct vg_vertex {
	float position[3];
	uint8_t col[4]; // packed RGBA
	float uv[2];
};

@jdryg
Owner

jdryg commented Nov 24, 2017

Problem with such a struct is that:

  1. It's not SIMD-friendly. I don't remember the details, but I think I saw some perf gains from having separate arrays for pos/col/uv.
  2. Most of the time UVs are constant (even if they are specified per vertex), because solid-color strokes and fills use UVs inside a small white rect at the top left of the texture atlas.

I think it might be better to keep the current design (separate pos/col/uv streams), because you can set all the colors and UVs faster in some cases.

Either way, I hope to find the time to make the changes. One other thing I'm considering is completely removing the IRenderer interface. This means you won't be able to switch between the current renderer and NanoVG. I haven't thought about it much, but it feels like having such an interface limits the things I can try out.

@hugoam
Contributor

hugoam commented Dec 31, 2017

Hi, reading this, I realized another possible advantage of this lib over nanovg might be that the stencil buffer is not used? Does that mean it should be much easier to implement clipping to an arbitrary shape, since we could use the stencil for that? I struggled to find a way to do rounded-rect clipping in nanovg and never found a proper solution.

Also, I second removing the interface. Simple code is the best code. If someone wants to compare nanovg and this library, they can write their own interface; I don't think it belongs in the library itself. (Maybe in a test wrapper?)

@jdryg
Owner

jdryg commented Jan 1, 2018

Hello and happy new year!

Hi, reading this, I realized another possible advantage of this lib over nanovg might be that the stencil buffer is not used? Does that mean it should be much easier to implement clipping to an arbitrary shape, since we could use stencil for that?

It should be possible, yes. It's on my list of things to try at some point. I don't really have a use for it at the moment, which is why I haven't done it yet. The only reason I thought about it is that I wanted to replicate fastuidraw's clipIn/clipOut functionality and its painter-cells demo (a rotated grid of 100 individually rotated rectangular cells, each one clipping an image and a string of text). I know it won't be as fast as fastuidraw (~2ms, 1 draw call on my machine); I just want to get an idea of how much slower it'll be (see this for a comparison with other libs).

[image: painter_cells]

Also I second removing the interface. Simple code is the best code. If someone wants to compare nanovg and this library, they can write their own interface. I don't think it belongs in the library itself. (Maybe in a test wrapper?)

If I remove the interface, it will be because I want to try out things not supported by NanoVG. This means the comparison won't be possible even with a custom wrapper. In the meantime, nothing stops you from using BGFXVGRenderer directly instead of the interface.

@jdryg
Owner

jdryg commented Jan 28, 2018

@hugoam I uploaded a proof of concept (i.e. haven't tested it thoroughly) to the experimental branch.

Currently limited to 254 different clip regions, because I cannot clear the stencil buffer mid-frame (0 is the initial stencil value, and each clip region gets +1). More than that requires a separate view.

Instead of following the canvas API, I decided to use Begin/End pairs for specifying clip regions. This means that you can also clip something to the stroke of another shape or independently transform the clip paths.

Example:

vg::IRenderer* vgr = ...;

// Start a new clip region with 2 rects
// Every path until EndClip() is added to the clip region.
vgr->BeginClip(ClipRule::In);
{
  // Rotate the clip paths
  vgr->PushState();
  vgr->Rotate(bx::toRad(45.0f));

  // Render the clip paths
  vgr->BeginPath();
  vgr->Rect(0.0f, 0.0f, 10.0f, 10.0f);
  vgr->Rect(20.0f, 20.0f, 10.0f, 10.0f);
  vgr->FillConvexPath(vg::ColorRGBA::Black, false); // Actual values don't matter

  // Restore state so the paths below will be rendered with the identity transform.
  vgr->PopState();
}
vgr->EndClip();

// Render all shapes you want to clip with the above clip region
vgr->BeginPath();
vgr->Rect(0.0f, 0.0f, 100.0f, 100.0f);
vgr->FillConvexPath(vg::ColorRGBA::Red, true);

// Clear the current clip region
vgr->ResetClip();

@hugoam
Contributor

hugoam commented Feb 7, 2018

So I have almost completely (and successfully) migrated to your renderer; it was a breeze given how similar both APIs are.

A few issues I encountered:

  • Member m_RecordClipCommands in src\vg\bgfxvg_renderer.cpp is uninitialized (so in MSVC it ended up true, which causes wrong rendering). I also suggest initializing default values at the point of declaration whenever the value is absolute/never depends on the constructor. I think it's less error-prone and also shorter (and goodbye initialization-order warnings). (Like so: https://gist.github.com/hugoam/9cd2c94fb7fc974f881bb90ebfc71c9b#file-src-vg-bgfxvg_renderer-cpp-L238)
  • At line 1570 in src\vg\stroker.cpp you are missing an expandIB(3); call, which causes a crash.
  • There is no stroke-with-gradient variant? This was quite handy in nanovg, e.g. for shiny graph editors. Would it be a quickly solved issue, or does it involve some complex things in the 'backend'?
  • Do you think it would be possible to add some functions to get some of the current state data, especially the current scissor? I am not duplicating the current clip/scissor rect on my side; I like to get it from the renderer state so I can simply not render the parts of the item tree that are completely outside. This was not in vanilla nanovg (only the transform could be read), but I added it to my fork, since it is quite handy for UI rendering.

Apart from that, props for the cool library. I didn't get around to measuring performance or trying the experimental clipping functionality yet, but I'll get back to you when I fix the remaining issues I have.

@jdryg
Owner

jdryg commented Feb 7, 2018

First of all, thanks for trying out the code and for the feedback. Having said that, I hope you kept your nanovg code around in case you end up deciding this lib doesn't work for you :)

  1. I'd fixed that in my working copy but hadn't managed to upload it yet; I didn't know if anyone was using the code (especially the experimental branch). I've just uploaded that plus a couple of other fixes.
  2. I don't know how I missed that for so long! Thanks. Fixed.
  3. I haven't tried that in nanovg, so I don't know how it's supposed to look. Do you have a screenshot handy? The easy way would be to have a strokePath(GradientHandle, ...) function which calls createDrawCommand_Gradient(), just like fillConvexPath(GradientHandle) does (https://github.com/jdryg/vg-renderer/blob/experimental/src/vg/bgfxvg_renderer.cpp#L1557), instead of createDrawCommand_VertexColor(). I'm 95% sure it will work, but I'm not really sure that's what you are after :)
  4. Yes, that's possible. If you want a binary result (i.e. zero scissor rect or not), you can also use the value returned by intersectScissor().

After adding the clipping functionality, I'm seriously thinking of dropping the IRenderer interface and just keeping BGFXVGRenderer. I haven't touched NanoVGRenderer in a while and I'm pretty sure it won't work except for basic stuff (no shapes/caching, no scissor clipping, etc.).

The problem is that I'm currently reimplementing my UI, and until I manage to fully migrate my code to it, I'm not going to touch vg-renderer (except for fixing bugs, of course).

@hugoam
Contributor

hugoam commented Feb 7, 2018

I solved the second-to-last issue I had, which is that gradient and solid-color fills are not rendered in the proper order (which should be submission order).
The problem is that bgfx sorts draw calls for you, and one of the criteria is the program. Since you have two different shader programs for solid fills and gradient fills, it renders all solids first, then all gradients. I think this is definitely wrong with respect to the behavior of a canvas API.

bgfx has a sequential mode to solve that issue, so you need to add bgfx::setViewMode(m_ViewID, bgfx::ViewMode::Sequential); at line 1200 in src\vg\bgfxvg_renderer.cpp.
If this proves to be a performance issue and we want to reduce state changes in the future, I guess one way would be to merge the shaders into one.

Picture of the issue for reference:
[image: mudui]

@hugoam
Contributor

hugoam commented Feb 7, 2018

Now I realize the last remaining issue I had was actually an error in my code, so that's a wrap: the migration is complete!
Considering I started the migration yesterday at the exact same time, I'm pretty satisfied with the operation. And don't worry, I kept the nanovg renderer in case this fine library suddenly disappears!

To respond to your previous answers:
3. The principle is really simple; it's the same as a gradient fill: you specify the gradient position on the canvas, and the path is stroked using the colors from the gradient thus positioned instead of a solid color. Here is how it looks with my nanovg renderer:
[image: gradient]
4. That's perfect, actually! I didn't realize IntersectScissor returned a value, since I was just using it as the equivalent of the nanovg function. This actually simplifies my own API, since I was calling the clip test and this function at the exact same location, so I don't need a function to get the current scissor after all. Good thing we've both been developing a UI library; we have the same needs, apparently ^^

That's about it. I just have another possible feature request, but I might implement it myself and send you a pull request.
To be able to implement a color picker, I think gradients could have two modes: RGB mode and HSV mode, where the gradient is interpolated in HSV space instead of RGB space.

This would make it possible to implement something like a hue selector or a color wheel. The former would require concatenating many gradients with the current RGB gradients, and the latter is almost impossible:
https://camo.githubusercontent.com/4ff70a9655a6b6823985bbceb410abc75f94e541/687474703a2f2f692e696d6775722e636f6d2f6f6271774378702e706e67
https://cdn.instructables.com/F3T/5MG1/IO4LWWKN/F3T5MG1IO4LWWKN.LARGE.jpg

@jdryg
Owner

jdryg commented Feb 8, 2018

Now I realized the last remaining issue I had was actually an error in my code, so that's a wrap, the migration is complete!

👍

Considering I started the migration yesterday at the exact same time, I'm pretty satisfied with the operation. And don't worry, I kept the nanovg renderer in case this fine library suddenly disappears!

Don't worry, it won't disappear :) I'm more concerned that you will end up needing something nanovg already supports that is hard to implement in vg-renderer. If you don't see any perf difference between the two and you cannot do what you want, it's logical to return to nanovg asap. That's why I mentioned it.

Bgfx has a sequential mode to solve that issue, so you need to add bgfx::setViewMode(m_ViewID, bgfx::ViewMode::Sequential); at line 1200 in src\vg\bgfxvg_renderer.cpp

Yes, you need sequential mode, the same way you need it in nanovg. I'm setting the mode outside of the renderer, but since this is bgfx-specific I might as well do it in the constructor.

If this proves to be a performance issue and we wanted to reduce state change in the future I guess one way would be to merge the shaders in one.

Actually, in my case, the perf issue was the large number of uniforms uploaded. If you have a single program for all cases, you have to upload the uniforms for each draw call. Since you cannot reduce the number of draw calls, you have to upload the uniforms of the shader path which uses the most of them. Solid-color paths need far fewer uniforms than gradients, so you end up uploading the same info over and over even though it's not used by the selected shader path. E.g. see the numbers from this: https://twitter.com/jdryg/status/834491103680331776

  3. The principle is really simple, it's the same as for a gradient fill: you specify the gradient position on the canvas and the path is just stroked using the colors from the gradient thus positioned instead of a solid color.

That should be relatively easy to implement, as I described above. I hope to be able to try it sometime today and get back to you.

Now that's about it. I just have another possible feature request, but I might implement it myself and send you a pull request.
To be able to implement a color picker, I think gradients could have two modes : RGB mode, and HSV mode, where gradient is actually interpolated on the HSV spectrum instead of the RGB spectrum.

The way the nanovg sample draws the color "circle" is by using 6 arcs to make a circle, with each arc using one linear gradient. I haven't implemented arcs yet, but for a linear hue selector you can try that out with boxes.

The color circle is indeed hard (impossible?) to implement with gradients. You can try baking the gradient into a texture and using that to fill a circle. It should work.

A couple of things to keep in mind regarding performance (some of them are obvious, and some are already described in the readme):

  • A new draw call is created whenever one of the following is true:
    • The scissor rect changes (*)
    • The clip path changes
    • The path uses gradients/custom images
  • Antialiasing creates a lot more geometry. If you draw a stroke around a filled path, avoid turning AA on for the fill; prefer to AA only the stroke.
  • Concave shapes are triangulated on the CPU. If you have a lot of them, it might be wise to think about cached shapes. If that's not possible, you unfortunately have to live with the perf penalty.
  • Text rendering is slow (especially kerning). If you can, either use a monospace font most of the time, or (in case you are mostly rendering ASCII text and have the RAM) enable FONS_GLYPH_INDEX_ARRAY and FONS_GLYPH_KERN_ARRAY (https://github.com/jdryg/vg-renderer/blob/experimental/src/nanovg/fontstash.h#L61)

(*) Regarding scissor rects: I tried passing the scissor rect to the fragment shader and discarding pixels based on it, to be able to merge more draw calls together. It was actually worse in my case, so I haven't uploaded the code. I might reimplement the idea in the future with a compile-time flag to turn it on/off on demand. If this turns out to matter for you, say so and I'll try to implement it sooner.

PS. Don't use vg::String! It'll probably be removed in the near future.

@jdryg
Owner

jdryg commented Feb 8, 2018

I just committed the changes for strokes with gradients. It doesn't support AA, because the gradient shader currently doesn't take per-vertex colors into account.

Actually, none of the gradient/image functions (fillConvex/fillConcave) take AA into account. I should fix those at some point; I just didn't have a use for them, which is why I postponed it for so long. I guess it's time to fix that, and also to add strokes with images and concave paths with images/gradients for completeness.

It will take a bit more time to fix them all, so if you have any particular need, I'll be glad to give it priority.

Simple demo:
[image: gradient_stroke]

@hugoam
Contributor

hugoam commented Mar 2, 2018

So I'm afraid I must reopen the IntersectScissor topic.
I realized it doesn't fit my use case, since IntersectScissor actually modifies the scissor region.
I need a way to check for intersection for widgets that don't themselves clip. So maybe there could be a checkIntersectScissor function?
As a workaround I could Scissor and then un-Scissor, but that's a bit ugly.

@jdryg
Owner

jdryg commented Mar 2, 2018

Wouldn't a getScissor() function be better for this? That way you can read the current scissor rect at any time and perform whatever operation you want with it. What do you think?

Also, it might be more helpful to move discussions about specific features into their own issues, to keep track of them more easily.

@hugoam
Contributor

hugoam commented Mar 2, 2018

I actually tried implementing a getScissor() function, but it got trickier, because the scissor is stored in global coordinates, whereas I need the current/local one. So getScissor() would then need to transform it back to local space, and for that you need the inverse transform... That ended up trickier than implementing a checkIntersect function like so:

bool checkIntersectScissor(Context* ctx, float x, float y, float w, float h)
{
	State* state = getState(ctx);
	const float* stateTransform = state->m_TransformMtx;
	const float* scissorRect = state->m_ScissorRect;

	// Bring the local-space rect into the same (global) space as the scissor rect.
	float pos[2], size[2];
	vgutil::transformPos2D(x, y, stateTransform, &pos[0]);
	vgutil::transformVec2D(w, h, stateTransform, &size[0]);

	// A zero-sized scissor rect means "no scissor"; everything passes.
	if (scissorRect[2] == 0.f || scissorRect[3] == 0.f)
		return true;

	// Standard AABB overlap test between the transformed rect and the scissor rect.
	return !(scissorRect[0] > pos[0] + size[0]
		  || scissorRect[1] > pos[1] + size[1]
		  || scissorRect[0] + scissorRect[2] < pos[0]
		  || scissorRect[1] + scissorRect[3] < pos[1]);
}

@jdryg
Owner

jdryg commented Mar 2, 2018

I just added both getScissor() and getTransform(), before reading your comment. Can you implement the function you posted in your code using those two for now?

Keep in mind that when you are using command lists, all transformations and scissor rects are just recorded into the command list's buffer. No command is applied until you submit the command list for rendering. So in order to perform those tests in your UI while using command lists, you have to keep track of the state hierarchy on your own.

Please make 100% sure you actually need such a function by implementing it on your side using getScissor()/getTransform(); if the overhead of calling them starts to affect performance, I will add it to the library.

@hugoam
Contributor

hugoam commented Mar 20, 2018

I finally got to try the gradient stroke you implemented weeks ago, so I'm finally getting back to you. It's perfect! Those are some really nice-looking gradients! They look better than the NanoVG ones, actually (almost as if they were interpolated in sRGB space). EDIT: forget it, I was looking at gradients between different colors, hence why I thought the NanoVG one was worse. Never mind!

[image: gradients]

@carloscm
Copy link

Since I can see this is where the clip feature was born, and the issue is not closed, I will comment on my changes here. I'm not opening a pull request, since I believe the changes are too special-cased for my needs, but I wanted to share what I did anyway. Here is the commit: carloscm@09795c7

I needed to draw this:

[image: Annotation 2019-06-27 205516 copy]

Basically, one or more circles whose areas are fused together when touching, with alpha blending and without overdrawing them.

With a bit of out-of-the-box thinking, I managed to do the fill by drawing the circles as the clipping shapes in In mode and then drawing a fullscreen translucent quad.

The stroke part was impossible. The stencil buffer from the previous part was perfect, but issuing a new draw for the stroked circles didn't work, since I needed Out mode for them. So I needed to redraw the clip shapes just to change the mode to Out, and if I did that, the filled areas of the previous call disappeared. It appears it was somehow drawing out of order with respect to the moment its stencil pass was valid, or maybe the stencil write doesn't happen when the stencil buffer already has a value; I don't know the low-level details.

My change sidesteps both possibilities by adding a "hold" version of the clip modes. This simply means the stencil value is not incremented after the stencil draw commands are done. The stencil contents are shared with the next draw by virtue of it using the same stencil value, and most importantly, the next draw will also stencil-test against that same value.

This is still a bad design, since even if the stencil and its value are shared, some kind of clip shape has to be submitted, otherwise vg does not enable the stencil test. In my user code I am just doing offscreen draws to force it.

I believe it would be a better idea to expose a limited stencil API like BeginStencil/EndStencil, and then allow some kind of stencil-test mode flag in the calls that can issue draw commands, like stroke/fill. Maybe even user control of the stencil value, for more flexibility; there's space in the u32 flags.

Anyway, just my brain dump on this. vg-renderer rocks and I am a very happy user; thank you so much for it!

@jdryg
Owner

jdryg commented Jul 13, 2019

Thanks for using vg-renderer and sorry for the delayed reply.

I could change the BeginClip()/EndClip() API to return some kind of handle (similar to images and gradients). You'd then be able to use this handle with the appropriate clipping mode for each draw command.

E.g.

vg::beginClip(ctx);
// Submit clip shapes
vg::ClipHandle clipHandle = vg::endClip(ctx);

// Later
vg::setClip(ctx, clipHandle, vg::ClipRule::In);
// Draw shapes
vg::setClip(ctx, clipHandle, vg::ClipRule::Out);
// Draw shapes
vg::resetClip(ctx);

Issues:

  1. Should the user be able to clip stencil commands (i.e. vg::setClip() inside vg::beginClip()/vg::endClip() blocks)?
  2. Should vg::beginClip() or vg::endClip() return the ClipHandle? It will probably be generated in beginClip(), but returning it there might imply it's ready to use. Returning it from endClip() forces you to call endClip() before using the ClipHandle.
  3. Should ClipHandles be per layer or global? Since layers might draw to different bgfx viewIDs, they should be per viewID. On the other hand, two different layers might share the same bgfx viewID.

@carloscm

That would indeed allow for great flexibility; it would be a more useful API than the existing one or my hack. The concern I can see is the fact that there is only one stencil buffer, and the user can make perfectly API-legal calls to beginClip/endClip that overwrite stencil values from previous calls. Due to the name and usage, users may think they are dealing with a geometry-level clipping API rather than a stencil-based one; that's why I proposed beginStencil/endStencil earlier. But it all boils down to documentation and examples in the end.
My two cents on the issues:

  1. That sounds very powerful; it would boil down to writing into the stencil with the stencil test enabled, wouldn't it?
  2. I vote for endClip(); it's the safer/least surprising option.
  3. It's still the same underlying stencil buffer for all of them, which is not going to be cleared between layer draws unless the user really goes out of their way to do so. But I agree it introduces a wrinkle in the symmetry of the API. I don't really know about this one.

@jdryg
Owner

jdryg commented Jul 15, 2019

I think the availability of stencil values depends on the framebuffers you are rendering to.

The current layer implementation (layers branch) accepts a different bgfx viewID per layer. vg-renderer doesn't know whether two different viewIDs refer to the same framebuffer or not. E.g. if one viewID draws to the window back buffer and another to an offscreen buffer with a depth-stencil texture attached, then there are twice as many available stencil values, BUT they are not shared between the two buffers/layers.

If, on the other hand, you have 2 layers with 2 different viewIDs, both drawing to the window back buffer, then the available stencil values are only those of the back buffer's stencil range and they are shared between the layers.

There are ways to overcome the limited stencil range by clearing the stencil mid-frame (e.g. drawing a fullscreen quad that replaces existing stencil values with 0), but that complicates things a bit more.

Also, regarding no. 1, it's a bit more complicated than I initially thought, because there's only one stencil ref value, which must be used both for testing and for calculating the next value. The next stencil value can only be 1 greater (inc) or 1 less (dec) than the ref value. So if you are clipping a clip mask, you have to use the stencil value of the clip mask as the ref to test against, which in turn means you have to use inc/dec as the stencil op to generate the new, clipped clip mask.

Hope the above makes sense :) English is not my native language.

Either way, some corners must be cut because a generic clipping API with unlimited clipping masks and layers is too complicated. I have to think about it a bit more.

@carloscm

I wasn't aware of the extra complexity in the layers branch. It then makes sense to pick a simpler design that fits it well, since it's the current development branch: no nesting, attached to whatever is the closest representation of the underlying stencil (I guess that would be the viewID), etc.
