
Basic GPU stroke rendering with support for caps and joins #408

Merged: @armansito merged 12 commits into main from gpu-shitty-strokes on Nov 14, 2023

Conversation

@armansito (Collaborator) commented on Nov 8, 2023

This PR implements GPU-side stroke expansion with initial support for caps and joins. This is further progress towards implementing the design outlined in #303.

  • The flatten stage implements rudimentary stroke offsetting by displacing the endpoints of the lines that result from flattening the baseline curve along their (transformed) normal vectors. This results in only a crude subdivision estimate for the offset curves, but it is acceptable for now since it unblocks follow-up stroke work. @dragostis suggested a solution that sounds promising, which we could use until Euler-spiral-based offsetting arrives.

  • The scheme described in Stroke rework #303 involving a "stroke cap marker" segment to detect whether to render a start or an end cap on the GPU has been implemented with a slight tweak (see the sketch after this list). For an open path, the start position and tangent require two points to be encoded in the marker, which cannot be done with a lineTo without first inserting a moveTo. Since inserting a moveTo would terminate the subpath prematurely, and because we want to preserve the invariant that only one segment has the SUBPATH_END_BIT set, the marker for an open path is instead encoded as a quadTo. The marker for a closed path is encoded as a lineTo, since the final control point of the preceding segment (i.e. the close) already matches the start control point.

    Since we can distinguish between an open and a closed path based on whether the marker is encoded as a quadTo or a lineTo, adding a separate "closed vs. open" bit to the encoding wasn't necessary.

  • Currently only butt caps and bevel joins are supported. Support for other stroke styles will be added as a follow-up.

  • PathEncoder has some new handling for zero-length segments to avoid encoding a cap marker with a degenerate start tangent. If the lengths of all candidate tangent vectors fall below an epsilon threshold (currently 1e-12), the segment is dropped. This will be extended to all segments in a follow-up PR, to avoid encoding zero-length segments for strokes in general.

  • Added additional stroke test cases for closed paths, including a single curve with coincident start and end control points.
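
To make the cap marker scheme concrete, here is a minimal sketch of how an encoder might emit the marker. It is a hedged illustration only: the `Encoder` struct, its fields, and the `encode_cap_marker` helper are hypothetical stand-ins rather than the actual vello `PathEncoder` API; only the lineTo/quadTo convention and the SUBPATH_END_BIT invariant come from the description above.

```rust
// Hypothetical, simplified encoder state (not the real vello PathEncoder).
struct Encoder {
    tags: Vec<u8>,    // one verb tag per segment
    points: Vec<f32>, // flattened (x, y) coordinate pairs
}

// Illustrative tag values; the real encoding differs.
const LINE_TO: u8 = 1; // 1 additional point (2 f32s)
const QUAD_TO: u8 = 2; // 2 additional points (4 f32s)
const SUBPATH_END_BIT: u8 = 4;

impl Encoder {
    /// Emit the stroke cap marker for the subpath that just ended.
    ///
    /// `first_pt` is the subpath's first on-curve point; `first_tangent_pt`
    /// defines the start tangent (e.g. the first control point of the first
    /// segment).
    fn encode_cap_marker(
        &mut self,
        is_closed: bool,
        first_pt: [f32; 2],
        first_tangent_pt: [f32; 2],
    ) {
        if is_closed {
            // The preceding "closepath" lineTo already ends at `first_pt`,
            // so one extra point suffices: encode the marker as a lineTo.
            self.points.extend_from_slice(&first_pt);
            self.tags.push(LINE_TO | SUBPATH_END_BIT);
        } else {
            // An open path must carry both the start tangent and the start
            // point (4 f32s), which fits a quadTo without inserting a moveTo
            // that would terminate the subpath early.
            self.points.extend_from_slice(&first_tangent_pt);
            self.points.extend_from_slice(&first_pt);
            self.tags.push(QUAD_TO | SUBPATH_END_BIT);
        }
    }
}
```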

Implemented a stroking scheme in the flatten stage in which every
flattened line segment gets offset along the curve normal by the line
width. The line vertices are offset _after_ subdivision, so this does
not try to accurately estimate the subdivisions needed for the offset
curve.

Cusps are not detected and are prone to failure. Regions with high
curvature (especially around self-intersections) are handled by simply
boosting the subdivision count using the curve offset, which is not
correct but avoids grossly undersampling when the tolerance is set too
low.

This is intended as a placeholder but is the first step towards
implementing correct strokes without optimizing for performance.
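
As a rough illustration of this placeholder approach (a sketch only; the real implementation lives in the flatten shader, and the function name and parameter convention here are assumptions), offsetting one flattened line segment along its normal could look like this:

```rust
/// Offset both endpoints of an already-flattened line segment along the
/// segment's unit normal. This mirrors the "offset after subdivision" idea
/// described above; it makes no attempt to re-estimate subdivision for the
/// offset curve.
fn offset_line(p0: [f32; 2], p1: [f32; 2], offset: f32) -> ([f32; 2], [f32; 2]) {
    let dx = p1[0] - p0[0];
    let dy = p1[1] - p0[1];
    let len = (dx * dx + dy * dy).sqrt();
    // A degenerate segment has no well-defined normal; leave it untouched.
    if len < 1e-12 {
        return (p0, p1);
    }
    // Unit normal: rotate the unit tangent by 90 degrees.
    let nx = -dy / len;
    let ny = dx / len;
    (
        [p0[0] + offset * nx, p0[1] + offset * ny],
        [p1[0] + offset * nx, p1[1] + offset * ny],
    )
}
```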
- Implemented the stroke cap marker segment encoding scheme outlined in
  #303. The proposed scheme has been slightly modified such that instead
  of an open/closed bit, the segment is encoded as a lineto or quadto
  to distinguish between the different states.

  Encoding the marker segment as a quadto for open segments also
  handles a limitation of using only linetos: for an open path, the
  marker must encode both the start tangent and the coordinates of the
  first control point to correctly position a start cap. Encoding these
  as 4 f32s would require an implicit moveto before the lineto, which
  would terminate the preceding subpath.

  Encoding this as a quadto preserves the invariant that every stroked
  subpath only contains one segment with the SUBPATH_END_BIT set to 1,
  which now serves as the stroke cap marker. For a closed path, a lineto
  is sufficient since the preceding coordinate (from the lineto encoded
  for a "closepath" verb) must already equal the subpath's first control
  point.

- Shitty stroking continues to work by detecting and ignoring the
  stroke cap marker.

The flatten stage now makes use of the stroke cap marker encoding scheme
to draw caps and joins at the right locations. At this stage only the
butt cap and bevel join styles are supported.
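
For reference, the geometry of the two supported styles is straightforward: a butt cap closes the stroke outline with a straight line between the two offset endpoints at a subpath end, and a bevel join bridges the offset endpoints of adjacent segments with a straight line on each side. A small sketch (the types and function names are illustrative, not the shader's):

```rust
/// A line segment in the stroke outline (illustrative representation).
struct Line {
    p0: [f32; 2],
    p1: [f32; 2],
}

/// Butt cap: a straight line connecting the left and right offset endpoints
/// at the start or end of an open subpath.
fn butt_cap(left_end: [f32; 2], right_end: [f32; 2]) -> Line {
    Line { p0: left_end, p1: right_end }
}

/// Bevel join: a straight line from the offset endpoint of the incoming
/// segment to the offset start point of the outgoing segment, emitted for
/// each side of the stroke.
fn bevel_join(incoming_end: [f32; 2], outgoing_start: [f32; 2]) -> Line {
    Line { p0: incoming_end, p1: outgoing_start }
}
```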
Make sure that a path start segment has a non-zero-length tangent.

- Add versions of the cubics without coincident start and end points
- Reposition the closed paths so they fit within the view
Make it so that tangent vectors are NOT normalized and normals
are normalized. Vector helpers that derive normals from tangents now
always return normalized vectors.

This keeps things internally consistent.
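
A minimal sketch of the convention this commit describes (tangents may be unnormalized, derived normals are always unit length); the helper name is hypothetical:

```rust
/// Derive a unit-length normal from a (possibly unnormalized) tangent by
/// rotating it 90 degrees and normalizing. The caller must ensure the
/// tangent has non-zero length.
fn normal_from_tangent(tangent: [f32; 2]) -> [f32; 2] {
    let len = (tangent[0] * tangent[0] + tangent[1] * tangent[1]).sqrt();
    debug_assert!(len > 0.0, "tangent must have non-zero length");
    [-tangent[1] / len, tangent[0] / len]
}
```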
@raphlinus (Contributor) left a comment:

I've done a quick pass, may come back later with a deeper look. Overall it's what I expected to see. Thanks! I'll go on to 410 next.

/// segment tells the GPU stroker whether to draw a cap or a join based on the topology of the
/// path:
///
/// 1. This marker segment is encoded as a `quad-to` for an open path and a `line-to` for a
Contributor:

Is it worth mentioning the fact that quad-to is just a convention to make the number of encoded coordinate pairs come out right? Could be as simple as saying (2 additional points) and (1 additional point) after the verb.

Collaborator (author):

Done.

const EPS: f32 = 1e-12;
if (x - x0).abs() < EPS && (y - y0).abs() < EPS {
// Drop the segment if its length is zero
// TODO: do this for all segments, not just the start?
Contributor:

Can take the question mark off :)

(x2, y2)
} else {
// Drop the segment if its length is zero
// TODO: do this for all segments, not just the start?
Contributor:

Same.

(x3, y3)
} else {
// Drop the segment if its length is zero
// TODO: do this for all segments, not just the start?
Contributor:

Same. I'm also wondering if there might be an opportunity to factor for DRY.

Collaborator (author):

Done. I initially thought of using kurbo::BezPath::tangents here (which does the same thing) but instead ended up writing a simpler helper for just the start tangent. Let me know if you'd rather use the kurbo utility here.
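
For context, the DRY factoring discussed above boils down to a single helper that picks the first control point not coincident with the segment start. A sketch under the same 1e-12 epsilon, with a hypothetical name and signature (not the actual PathEncoder code):

```rust
const EPS: f32 = 1e-12;

/// Return the first candidate control point whose offset from the start
/// point (x0, y0) is non-degenerate, or None if every candidate is within
/// EPS of the start (in which case the segment would be dropped).
fn first_nondegenerate_point(
    (x0, y0): (f32, f32),
    candidates: [(f32, f32); 3], // e.g. (x1, y1), (x2, y2), (x3, y3) of a cubic
) -> Option<(f32, f32)> {
    candidates
        .into_iter()
        .find(|&(x, y)| (x - x0).abs() >= EPS || (y - y0).abs() >= EPS)
}
```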


// HACK: this increases the subdivision count as a function of the stroke width for shitty strokes.
var tol = sqrt(REM_ACCURACY);
if cubic.flags == 1u {
Contributor:

Name the flag rather than having a magic constant?

Collaborator (author):

Done. With the way the code is evolving, I may want to make some changes to the Cubic struct (such as removing the stroke field). That may be an opportunity to more strictly re-define the flags field.

p3 = read_f32_point(tm.pathseg_offset + 6u);
}
struct CubicPoints {
p0: vec2f,
Contributor:

Just noticing that we're not being consistent about whether to use vec2f or vec2. I'm generally in favor of being more concise.

@armansito (Collaborator, author) commented on Nov 13, 2023:

vec2f is a predeclared alias for vec2<f32> in type declarations. vec2 seems to work as a constructor expression where the element type gets inferred from the parameters. I've been using vec2f in new type declarations and I tried to match vec2(...) in constructor expressions since that's what the existing code did. I'd be happy to standardize on vec2f though.

@@ -21,6 +21,7 @@ let PATH_TAG_TRANSFORM = 0x20u;
let PATH_TAG_PATH = 0x10u;
let PATH_TAG_STYLE = 0x40u;
#endif
let PATH_TAG_SUBPATH_END_BIT = 4u;
Contributor:

We're not using _BIT convention for other flags, but that's a minor style nit. Also a self-reminder, at some point I'd like to pack this more tightly so a single tag word can have path, transform, and style set along with the other content.

Collaborator (author):

Done.

* Fix the phrasing in a comment regarding subdivision count
* Use the vec4f and vec2f predeclared aliases for vec4<f32> and vec2<f32>
  in `flatten`. I've been consistent with this style for new code in
  this shader and I may make the other shaders use the aliases in a
  follow-up since it's more concise.
@armansito merged commit e6c5bcf into main on Nov 14, 2023. 4 checks passed.
@armansito deleted the gpu-shitty-strokes branch on November 14, 2023 at 00:29.