Anti-aliasing #185

Closed · 2 of 4 tasks
nical opened this issue Oct 13, 2017 · 7 comments

nical commented Oct 13, 2017

Right now lyon doesn't deal with rendering, so it can't really implement anti-aliasing itself. But it is important to make sure that the tessellators provide enough information to efficiently implement various anti-aliasing approaches.

  • MSAA: nothing to do.
  • temporal reprojection: nothing to do.
  • vertex-aa: needs to be able to generate a strip of triangles along the contour of the shape, with normals. It should be possible to render the shape and the AA fringe in separate passes (a sketch follows this list).
  • post-processing: most post-processing approaches run filters over the screen to detect where the edges are. In our case we can skip that step if we have the same contour information required for vertex-aa.
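As an illustration of the vertex-aa idea above, here is a minimal sketch of the fringe geometry a renderer could emit, assuming the tessellator provides per-vertex outward normals. The AaVertex type, aa_width parameter, and index layout are hypothetical, not lyon API:

```rust
// Hypothetical sketch of a vertex-aa fringe; none of this is lyon API.
#[derive(Copy, Clone)]
struct AaVertex {
    position: [f32; 2],
    alpha: f32, // 1.0 on the shape boundary, 0.0 on the outer edge of the fringe
}

/// For one contour edge (a, b) with outward unit normals (na, nb), append a
/// quad that fades from opaque to transparent over `aa_width` pixels.
fn push_fringe_quad(
    vertices: &mut Vec<AaVertex>,
    indices: &mut Vec<u32>,
    a: [f32; 2], b: [f32; 2],
    na: [f32; 2], nb: [f32; 2],
    aa_width: f32,
) {
    let base = vertices.len() as u32;
    vertices.push(AaVertex { position: a, alpha: 1.0 });
    vertices.push(AaVertex { position: b, alpha: 1.0 });
    vertices.push(AaVertex {
        position: [a[0] + na[0] * aa_width, a[1] + na[1] * aa_width],
        alpha: 0.0,
    });
    vertices.push(AaVertex {
        position: [b[0] + nb[0] * aa_width, b[1] + nb[1] * aa_width],
        alpha: 0.0,
    });
    // Two triangles covering the fringe quad.
    indices.extend_from_slice(&[base, base + 1, base + 2, base + 2, base + 1, base + 3]);
}
```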

I think that the first step is to experiment with:

  • implement vertex normals in the fill tessellator.
  • output contours in the fill tessellator: it's probably enough to extend the GeometryBuilder API or add a ContourBuilder that takes edges represented as pairs of vertex ids (the same vertex ids passed to the GeometryBuilder); a sketch follows this list.
  • make a demo that looks nice and smooth on top of that to validate the idea.
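As a rough illustration of the ContourBuilder idea in the list above (purely hypothetical, not part of lyon's API), assuming VertexId is the id type already handed to the GeometryBuilder:

```rust
// Hypothetical sketch; this trait does not exist in lyon.
use lyon_tessellation::VertexId;

/// Receives the boundary edges of the tessellated shape, expressed with the
/// same vertex ids that are passed to the GeometryBuilder.
pub trait ContourBuilder {
    /// Called once for each edge on the contour of the shape.
    fn add_contour_edge(&mut self, from: VertexId, to: VertexId);
}
```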
nical added this to the 1.0 milestone Oct 13, 2017

nical commented Jul 12, 2018

Yeah, that's what I call vertex-aa. Skia also has this feature built into their tessellator, for an open-source reference. Right now Lyon's tessellator can compute vertex normals but doesn't retain information about the connectivity around the contour, which would be needed to generate the extra geometry. It's a little tricky to do that properly with self-intersections, but it's doable.

nical commented Jan 12, 2020

I don't have the time to implement something as complicated as vertex-aa in the fill tessellator, and other anti-aliasing methods would have to be implemented in code that uses lyon rather than in lyon itself, so I'll close this for now.

If anyone has the courage to implement vertex-aa in the tessellator I'll happily take pull requests, though!

nical commented Apr 15, 2020

I'm going to declare anti-aliasing out of scope for now (as far as the tessellation algorithms are concerned). This doesn't mean AA can't be implemented by users of lyon on top of the tessellation (via msaa, taa, post-processing or some other way).

nical closed this as completed Apr 15, 2020

mxsdev commented Jun 15, 2023

@nical

Hi! First of all, thanks for all of your great work on the library, as well as the resources you've provided for learning about it. I've learned a ton about computational geometry just from this repo :)

I really want to implement this feature, but I have been struggling. You said:

Right now Lyon's tessellator can compute vertex normals but doesn't retain information about the connectivity around the contour, which would be needed to generate the extra geometry. It's a little tricky to do that properly with self-intersections, but it's doable.

Can you elaborate on this? In particular, do you have some ideas on doing this efficiently?

I am assuming you are describing an approach where you determine a list of boundary contours, and then inset/outset as a separate pass. But this is challenging, since each Span needs to be aware of which boundary contour(s) it's a part of (since a single VertexId can participate in multiple contours). This seems difficult to do during the scan, since as intersections are discovered, contours will split, and we would potentially need to go back and reassign Spans to contours. Regardless, it's not clear to me how to determine which contour each monotone poly is a part of - neither during the scan nor in a separate pass afterwards.

If you could just point me in a direction (and/or provide some reading materials), I am more than happy to put in a lot of effort to get this working.

For reference, here's what else I've tried:

  1. Anti-aliasing monotone polygon contours. This seems to be what they did in Skia; however, it requires at least two additional passes and a sort: one pass to inset/outset along normals, and a second pass to eliminate redundant triangles along diagonal lines (this requires a sort to find and eliminate duplicated vertices). I have to eliminate redundant tris since otherwise we will alpha blend two linear gradients across diagonals, creating ugly gaps. In the presentation, they actually send the monotone polygons back through the algorithm to do this, but I feel like this is pretty inefficient...
  2. Generating a list of non-intersecting boundary contours. I'm assuming this is what you were suggesting above, but as I described, generating the list of boundary contours is not enough: you also have to determine which Span is associated with which boundary contour(s), and it's unclear to me how to do this. It's also unclear to me how you would know the right direction of each contour for the fill mode - such information is immediate for monotone polys (go along the left boundary chain then the right boundary chain), but for arbitrary contours, we would need to determine "inside" vs "outside," which again, it's unclear how to do efficiently as part of a single pass.
  3. Re-constructing boundary contours from monotone contours. The idea here is to remember which edges are diagonals (which I think you can do by keeping track of merge/split vertices), and then re-combine monotone polys based on mutual diagonal lines. Then you would have a list of non-disjoint components consisting of monotone polys, from which you could extract boundary contours easily and in linear time. Moreover, it would be clear which polys belong to which contours.
  4. Doing multiple passes. Basically, we would do the intersection identification first (Bentley–Ottmann?) so that all pre-existing contours are already connected.

I was able to have the most success with approach 1 - see below for an example. I used the approach given in the skia presentation. I had to implement miter limiting since they use a miter join, which was causing extreme results for sharp angles.
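For context, here is a minimal sketch of the kind of miter limiting described above (an illustration, not the exact code behind the screenshot below); n_prev and n_next are assumed to be the unit normals of the two edges meeting at the join:

```rust
// Illustrative only. The miter length grows as 1 / cos(theta / 2), so it is
// clamped to `miter_limit` to avoid spikes at sharp angles.
fn join_offset(n_prev: [f32; 2], n_next: [f32; 2], aa_width: f32, miter_limit: f32) -> [f32; 2] {
    // Average the two edge normals to get the join (miter) direction.
    let mut m = [n_prev[0] + n_next[0], n_prev[1] + n_next[1]];
    let len = (m[0] * m[0] + m[1] * m[1]).sqrt();
    if len < 1e-6 {
        // Nearly opposite normals (the edges almost fold back on themselves):
        // fall back to one of the edge normals.
        return [n_prev[0] * aa_width, n_prev[1] * aa_width];
    }
    m = [m[0] / len, m[1] / len];
    // cos(theta / 2) = dot(m, n_prev); the unclamped miter length is aa_width / cos(theta / 2).
    let cos_half = (m[0] * n_prev[0] + m[1] * n_prev[1]).max(1e-3);
    let scale = (1.0 / cos_half).min(miter_limit);
    [m[0] * aa_width * scale, m[1] * aa_width * scale]
}
```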

Once again, if you could just point me in the right direction, that would be awesome! I already read Chapter 3 of de Berg, Cheong, van Kreveld & Overmars (Computational Geometry: Algorithms and Applications), and it helped a lot in understanding the codebase.

Thanks!

[Screenshot from 2023-06-14 showing the result of approach 1]

nical commented Jun 15, 2023

Can you elaborate on this? In particular, do you have some ideas on doing this efficiently?

Since I wrote that, things have become even a bit more complicated: Lyon's tessellator now only creates a single vertex at any given position. This means that at a self-intersection the normal isn't well defined. Older versions of the tessellator would place two vertices at self-intersections so they could have normals that somewhat make sense. Normals were unfortunately lost in the process of that change, but the simplification made it possible to greatly improve the robustness of the algorithm.

This seems difficult to do during the scan, since as intersections are discovered, contours will split, and we would potentially need to go back and reassign Spans to contours. Regardless, it's not clear to me how to determine which contour each monotone poly is a part of - neither during the scan nor in a separate pass afterwards.

You got the idea. Lyon's tessellator does everything in a single pass, which I'm proud of because it helps a lot with making it fast, but it also complicates things quite a bit. This approach avoids having to create a data structure that represents the boundary of the polygon, which means that information about the contour's connectivity is never retained.

If I really wanted to do vertex-aa with a tessellator, I think that I'd probably start over and write a slower tessellator more similar to skia's, in that it would have a data structure that represents contours, remembers which edges are diagonals, etc. The algorithm would be done in multiple passes, typically doing polygon simplification first to avoid having to deal with the geometry changing from under me during subsequent passes (what you suggest in (4)). The first pass could also cache normals and mark for each edge which side is in and which side is out to make the work of subsequent passes easier. That tessellator would probably exist beside the existing one and people would get to choose between speed and vertex-aa.
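To make the idea of a retained contour representation a bit more concrete, here is a purely hypothetical sketch of what such an edge record could store (none of these types exist in lyon):

```rust
// Illustrative only: a possible edge record built by the simplification pass
// of a hypothetical multi-pass, vertex-aa-aware tessellator.
#[derive(Copy, Clone)]
struct EdgeId(u32);

struct ContourEdge {
    from: [f32; 2],
    to: [f32; 2],
    // Links to the previous/next edges along the simplified contour.
    prev: EdgeId,
    next: EdgeId,
    // Outward normal, cached during the first pass.
    normal: [f32; 2],
    // Resolved by the winding pass: which side of the edge is inside the fill.
    inside_is_left: bool,
    // True for diagonals introduced by the monotone decomposition; those
    // edges should not receive an anti-aliasing fringe.
    is_diagonal: bool,
}
```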

I don't see myself going in that direction in the foreseeable future, mostly because I have much less spare time than I used to and I think that other approaches are more promising. I do have plans and prototypes for anti-aliased path rendering, though. In short, it is a tiling approach similar to pathfinder, which I described in a blog post a while back. The tiles are rasterized differently, but the general idea of tiling paths is there. My hope is to make enough progress there to make it part of lyon towards the end of the year, but I already thought I would get there by the end of last year, so no promises. So that's something we can also discuss if you are interested.

For what it's worth, skia is moving away from tessellated meshes, probably because the CPU tends to be the bottleneck and there are simpler approaches with less work happening on the CPU (often at the cost of doing more on the GPU).
I think that the biggest advantage of tessellated meshes is how easy they are to integrate in GPU renderers, but they are actually harder and more costly to produce than a lot of other methods (assuming you want to be able to produce new geometry every frame).

I was able to have the most success with approach 1 - see below for an example. I used the approach given in the skia presentation. I had to implement miter limiting since they use a miter join, which was causing extreme results for sharp angles.

That's pretty cool! Did you write that from scratch or did you base it off of lyon's tessellator? Is the performance comparable to the non-aa tessellator?

I'm not sure how to help you further. I think that I would tend to go for an approach that works on the whole contour rather than try to stitch together the contours of separate monotone polygons, but that's more instinct than experience, and at this point you probably have more field knowledge than I do on the specifics of making vertex-aa work.

As far as reading material is concerned, the computational geometry book and the skia presentations are unfortunately all I could find. The code of skia's tessellator is also a good reference.

mxsdev commented Jun 17, 2023

Thanks so much for the detailed response!

I don't see myself going in that direction in the foreseeable future, mostly because I have much less spare time than I used to and I think that other approaches are more promising. I do have plans and prototypes for anti-aliased path rendering, though. In short, it is a tiling approach similar to pathfinder, which I described in a blog post a while back. The tiles are rasterized differently, but the general idea of tiling paths is there. My hope is to make enough progress there to make it part of lyon towards the end of the year, but I already thought I would get there by the end of last year, so no promises. So that's something we can also discuss if you are interested.

For what it's worth, skia is moving away from tessellated meshes, probably because the CPU tends to be the bottleneck and there are simpler approaches with less work happening on the CPU (often at the cost of doing more on the GPU).
I think that the biggest advantage of tessellated meshes is how easy they are to integrate in GPU renderers, but they are actually harder and more costly to produce than a lot of other methods (assuming you want to be able to produce new geometry every frame).

Ah, I see. I have read a lot about pathfinder - sounds like going this direction would be another re-write for lyon?

I am interested in tessellation because, as you say, it's rather straightforward to implement in a GPU pipeline. The pathfinder approach seems to require binding float textures and then copying them over to framebuffers, which I feel would make batch optimization more complex, and batch optimization hurts my brain haha. But the algorithm behind pathfinder is incredibly elegant, much more so than most GPU vector rendering schemes I've seen.

Would your approach basically be a more broadly compatible version of pathfinder? (That project uses geometry shaders and doesn't expose a graphics-API-agnostic interface.) If so, that is very exciting!

That's pretty cool! Did you write that from scratch or did you base it off of lyon's tessellator? Is the performance comparable to the non-aa tessellator?

I built it on top of lyon's tessellator. I essentially made a separate GeometryBuilder struct for anti-aliasing, which keeps track of contour information and does its anti-aliasing in a separate pass at the end.

It also had a custom AntiAliasConstructor for anti-aliasing, so that users would first convert a FillVertex to an IntermediateVertex (which would store colors, etc.). Then another method would provide the position data and a reference to the appropriate IntermediateVertex, as well as a boolean for inside vs outside (opaque or transparent), from which the final output vertex is created. I thought this was a nice solution since it still gives users complete control over everything except position data.
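Roughly, that two-step API could look something like this (a simplified sketch reconstructed from the description above; only FillVertex is lyon's type, the other names are illustrative):

```rust
// A sketch of the two-step constructor described above; only FillVertex is a
// real lyon type, everything else is illustrative.
use lyon_tessellation::FillVertex;

pub trait AntiAliasConstructor {
    /// Per-vertex attributes other than position (colors, uvs, ...).
    type Intermediate;
    /// Final vertex type written to the user's buffers.
    type Output;

    /// Step 1: capture everything except the final position from the fill vertex.
    fn intermediate(&mut self, vertex: FillVertex) -> Self::Intermediate;

    /// Step 2: called for each output vertex (both the contour vertex and its
    /// extruded copy) with the resolved position and whether it lies on the
    /// opaque (inside) or transparent (outside) side of the fringe.
    fn output(&mut self, position: [f32; 2], attrs: &Self::Intermediate, inside: bool) -> Self::Output;
}
```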

I didn't do any benchmarks haha. It won't deal well with non-opaque colors which makes it unusable for my purposes. I was mostly interested in dealing with the extrusion/intrusion logic, which isn't that hard, but had some gotchas like nearly parallel edges and sharp joints. Not a tremendous accomplishment or anything, but I was happy to at least get something that looked nice :)

If I really wanted to do vertex-aa with a tessellator, I think that I'd probably start over and write a slower tessellator more similar to skia's, in that it would have a data structure that represents contours, remembers which edges are diagonals, etc. The algorithm would be done in multiple passes, typically doing polygon simplification first to avoid having to deal with the geometry changing from under me during subsequent passes (what you suggest in (4)). The first pass could also cache normals and mark for each edge which side is in and which side is out to make the work of subsequent passes easier. That tessellator would probably exist beside the existing one and people would get to choose between speed and vertex-aa.

Yeah, after further reflection, this seems to be the most promising approach for tessellation. I think that if the "vertex-aa" version of the tessellator was basically like the current tessellator, except that it didn't merge vertices and instead created multiple vertices at intersections (one per contour), maybe the pre-processing could still be only one pass? Since then at least there is a bijection from VertexId to points on contours.

For keeping track of inside/outside, is keeping track of the candidate edges to the left and counting them (signed by winding order, for the non-zero fill rule) what you have in mind?

If I was able to get this approach to work well (maybe with an extra pass), and submitted a PR, is this something you would consider merging? Or would you rule that out in favor of the pathfinder approach? Like I said I'm definitely interested in making it work. But if you feel it will be re-written soon, then I understand wanting to hold off.

nical commented Jun 17, 2023

sounds like going this direction would be another re-write for lyon?

Not really, the tessellator is useful to enough people. I'd leave lyon_tessellation more or less in its current form, and evolve it as needed without drastically changing what it does and how. The plan is to add other lyon_ crates on the side for new things with different approaches.

which I feel would make batch optimization more complex

What I'm looking into right now rasterizes 16x16 pixel masks into an atlas (a regular 8-bit alpha texture, as opposed to pathfinder's float texture), and the main pass reads from it. Batching is not fundamentally different from the use of tessellated meshes.

Would your approach basically be a more broadly compatible version of pathfinder?

Yes, it targets a GLES 3.0 level of features. It can probably support older devices; the most advanced thing it does is read from a texture in vertex shaders. Otherwise it's pretty simple vertex and fragment shaders.

The prototype is at https://github.com/nical/misc/tree/master/path_renderer/demo. You can run it locally (cargo run --release tiger.svg) and inspect it with renderdoc or the GPU tool of your choice to see how it works; you'll see there is a fairly small number of batches. If you press j or k you can switch between tiling, very basic tessellation, and very basic stencil-and-cover to render the same scene. I'm using this proto as a testing ground, but I like how it is evolving, so I am hoping to clean it up, add some features, and eventually include it in lyon as a lyon_renderer crate which would use lyon_tiling, etc.

I think that if the "vertex-aa" version of the tessellator was basically like the current tessellator, except that it didn't merge vertices and instead created multiple vertices at intersections (one per contour), maybe the pre-processing could still be only one pass?

Maybe it is possible; I think one has to try and see what problems arise in practice. I don't think I would have the time and willpower to push through making it robust against floating point precision errors. Making lyon's tessellator robust is hands down the hardest thing I have ever programmed. It took me many rewrites over many years.

For keeping track of inside/outside, is keeping track of the candidate edges to the left and counting them (signed by winding order, for the non-zero fill rule) what you have in mind?

Bentley-Ottmann is a typical sweep-line algorithm (just like monotone polygon decomposition, which is what makes it possible to merge them into a single pass), so you have to traverse your edges from top to bottom and maintain a sorted active edge list. At each iteration, instead of just checking intersections, the first pass could scan the active edge list, count the winding number, and mark for each edge the winding number on its left and right (or directly resolve the winding numbers and mark whether each side is in or out). If that pass creates a retained and connected representation of the contour (edges have links to previous and next edges), then you can skip scanning most of the time, since you can deduce the winding number from the connected edge directly above an edge when inserting it (you probably only need to scan for "start" events when there isn't an edge directly above, and at complex intersections where more than two edges intersect).
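As a rough sketch of that first pass's winding scan, assuming an active edge list that is already sorted from left to right at the current sweep position (the ActiveEdge type and its fields are illustrative, not lyon's internals):

```rust
// Illustrative only: resolving winding numbers along a sorted active edge list.
struct ActiveEdge {
    // +1 if the path edge crosses the sweep line downward, -1 if upward.
    winding_sign: i16,
    // Filled in by this pass: winding numbers immediately left/right of the edge.
    winding_left: i16,
    winding_right: i16,
}

/// Walk the active edge list from left to right and record, for each edge, the
/// winding numbers on both sides. Each side can then be classified as inside or
/// outside for the chosen fill rule (non-zero: inside <=> winding != 0).
fn resolve_windings(active_edges: &mut [ActiveEdge]) {
    let mut winding: i16 = 0;
    for edge in active_edges.iter_mut() {
        edge.winding_left = winding;
        winding += edge.winding_sign;
        edge.winding_right = winding;
    }
}
```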

I was mostly interested in dealing with the extrusion/intrusion logic, which isn't that hard, but had some gotchas like nearly parallel edges and sharp joints.

There are a few other tricky bits, like how thin geometry (for example a sort of line that is less than 1px wide) can "flip" when the intrusion moves each side by half a pixel (a generalization of the "small features" slide of the skia presentation).
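One way to detect that flip, sketched under the assumption that both endpoints of an edge have already been moved inward (names are illustrative):

```rust
// Illustrative sketch: after insetting both sides of a thin feature, an edge
// that ends up pointing against its original direction has collapsed through
// itself (the feature was thinner than the inset distance).
fn inset_flips_edge(a: [f32; 2], b: [f32; 2], a_inset: [f32; 2], b_inset: [f32; 2]) -> bool {
    let original = [b[0] - a[0], b[1] - a[1]];
    let inset = [b_inset[0] - a_inset[0], b_inset[1] - a_inset[1]];
    // Negative dot product: the inset edge points the opposite way.
    original[0] * inset[0] + original[1] * inset[1] < 0.0
}
```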

Not a tremendous accomplishment or anything

I'll at least congratulate you for finding your way into lyon's tessellator's code without any assistance and modifying it to produce what you were after. Not an easy task.

If I was able to get this approach to work well (maybe with an extra pass), and submitted a PR, is this something you would consider merging? Or would you rule that out in favor of the pathfinder approach?

Yes, I'd consider it for sure. I'd love to have a good implementation of that, and I'm certainly not planning to rewrite or remove the fill tessellator, even when I'm ready to add alternatives to the project. One big constraint is that I don't want to make modifications to the current fill tessellator in ways that would penalize the performance of the existing common cases, so a separate implementation is probably the simplest way forward.
