Add an experimental geometry api #1461
Conversation
@@ -22,6 +22,7 @@ fnv = "1.0"
 gleam = "0.4.7"
 lazy_static = "0.2"
 log = "0.3"
+nanovg = {version = "0.3", features = ["gl2"]}
jrmuizel
Jul 7, 2017
Contributor
This will need to be an optional dependency because we won't be using it in Gecko
stshine
Jul 8, 2017
Author
Yep, and we need to decide at compile time whether to use OpenGL or OpenGL ES, because only one of them can be enabled at a time. Will try to fix.
@@ -445,6 +471,23 @@ impl ResourceCache {
 }
 }

 pub fn request_geometry(&mut self,
nical
Jul 8, 2017
Collaborator
I think it would be preferable if the resource cache treated this as a regular image (just like blob images are treated as regular images).
stshine
Jul 8, 2017
Author
It does; the difference is that for vector images we need to record the dimensions at which we ask the SVG renderer to rasterize, and if different dimensions are requested, webrender should treat the result as a new regular image. Otherwise it behaves just like a regular image.
kvark
Jul 10, 2017
Member
I see the reasoning for treating the SVG dimensions, coming from the primitive rectangle, as a part of the key. I don't think we should do that though - it's breaking the primitive abstraction a bit. Instead, I'd like to see the rasterized dimensions being passed to update_geometry. This way we'd have:
- geometry being rendered (on screen) via the existing image API, as @nical suggested
- better support for transformed SVG primitives
stshine
Jul 10, 2017
Author
I'm not sure I understand your point -- would you like to elaborate a bit?
There is a possible situation where the dimensions change while the geometry data doesn't, so if we pass the dimensions through update_geometry we may also have to re-send the geometry data unnecessarily.
kvark
Jul 10, 2017
Member
we could have a method like update_geometry_dimensions in order to preserve the data
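A minimal sketch of kvark's suggestion, with invented names (this is not the actual webrender `ResourceCache` API): keep the dimensions out of the geometry key, and offer a separate dimensions-only update so the path data is not re-sent when only the rasterized size changes.

```rust
use std::collections::HashMap;

// Hypothetical key/entry types for illustration only.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct GeometryKey(u64);

#[derive(Clone, Debug, PartialEq)]
struct GeometryEntry {
    path_data: Vec<u8>,     // serialized path commands
    dimensions: (u32, u32), // last requested raster size
}

#[derive(Default)]
struct ResourceCacheSketch {
    geometries: HashMap<GeometryKey, GeometryEntry>,
}

impl ResourceCacheSketch {
    // Full update: geometry data and dimensions together.
    fn update_geometry(&mut self, key: GeometryKey, path_data: Vec<u8>, dimensions: (u32, u32)) {
        self.geometries.insert(key, GeometryEntry { path_data, dimensions });
    }

    // Only the raster size changed: no need to re-send the path data.
    fn update_geometry_dimensions(&mut self, key: GeometryKey, dimensions: (u32, u32)) {
        if let Some(entry) = self.geometries.get_mut(&key) {
            entry.dimensions = dimensions;
        }
    }
}

fn main() {
    let mut cache = ResourceCacheSketch::default();
    let key = GeometryKey(1);
    cache.update_geometry(key, vec![1, 2, 3], (100, 100));
    // Layout asks for a different size; the path data stays untouched.
    cache.update_geometry_dimensions(key, (200, 200));
    let entry = &cache.geometries[&key];
    assert_eq!(entry.dimensions, (200, 200));
    assert_eq!(entry.path_data, vec![1, 2, 3]);
}
```

This directly addresses the situation stshine raises above: dimensions can change without the geometry data changing.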
 }
 }
 vg.end_frame();
 context.gl().read_pixels(0,
nical
Jul 8, 2017
Collaborator
Doing this read-back right after rendering the path defeats the purpose of painting it on the GPU. In most cases it'll be faster to just paint on the CPU.
stshine
Jul 8, 2017
Author
Well, I am not familiar with OpenGL and it took me some time to work this out, so I chose the easiest approach. Do you have a suggestion for improving this? Or maybe we can do it later; for now, when the requested dimensions change I simply delete the previous texture from the cache and insert a new one, so it may be beneficial to use the ExternalImage API or wait for #1367 to work out. (I guess this won't be slower than Skia? I don't know.)
nical
Jul 10, 2017
Collaborator
We could use rust-azure/skia and render the path into an image on the CPU. This would avoid adding a dependency on a new rendering library to Servo. The operation of reading the pixels back from the GPU is almost always going to be slower than just rendering a single path on the CPU (unfortunately).
The other (longer term but harder) way is to render the path on the GPU in the renderer. This is an area @pcwalton is actively working on. The idea would be that the resource_cache would transform the paths into geometry that the renderer would be able to process directly.
Either way it is good to put in place the infrastructure to add and remove path objects in the resource cache.
pcwalton
Jul 10, 2017
Collaborator
> The idea would be that the resource_cache would transform the paths into geometry that the renderer would be able to process directly.
As I mentioned before, I don't think that's a given. I think it'd be more robust to have the client of WebRender submit B-meshes directly. That way they can be cached and hung off the <svg:path> render objects.
(force-pushed from c0b810d to 8da644f)
Thank you, I'm happy to see the geometry API! My attempt to use […] For your PR, looks like you'll need to replace […]
 local_clip: &LocalClip,
 geometry_key: GeometryKey) {
 let prim_cpu = GeometryPrimitiveCpu {
 geometry_key: geometry_key,
 fn write_gpu_blocks(&self, mut request: GpuDataRequest) {
 let gpu_blocks = [ [self.dimensions.width, self.dimensions.height, 0.0, 0.0].into(),
 TexelRect::invalid().into() ];
 request.extend_from_slice(&gpu_blocks);
kvark
Jul 10, 2017
Member
nit: could look simpler with just 2 push calls. The extend_from_slice is mainly for cases where you already have a list stored somewhere.
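A toy illustration of this nit, with a stand-in type (not the real `GpuDataRequest`): when there are exactly two blocks, two `push` calls read more directly than building a temporary array for `extend_from_slice`.

```rust
// Stand-in for a 4-float GPU data block.
struct GpuBlock([f32; 4]);

// Stand-in for webrender's GpuDataRequest, for illustration only.
struct GpuDataRequestSketch {
    blocks: Vec<GpuBlock>,
}

impl GpuDataRequestSketch {
    fn push(&mut self, block: GpuBlock) {
        self.blocks.push(block);
    }
}

fn main() {
    let mut request = GpuDataRequestSketch { blocks: Vec::new() };
    // Instead of: request.extend_from_slice(&[dimensions_block, texel_block]);
    request.push(GpuBlock([32.0, 16.0, 0.0, 0.0])); // dimensions block
    request.push(GpuBlock([0.0, 0.0, 0.0, 0.0]));   // invalid texel rect
    assert_eq!(request.blocks.len(), 2);
}
```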
@@ -1042,6 +1076,13 @@ impl PrimitiveStore {
 text.render_mode,
 text.glyph_options);
 }
 PrimitiveKind::Geometry => {
 let geometry_cpu = &mut self.cpu_geometries[metadata.cpu_prim_index.0];
 let width = (geometry_cpu.dimensions.width * device_pixel_ratio) as u32;
@@ -70,6 +70,7 @@ mod render_task;
 mod resource_cache;
 mod scene;
 mod spring;
+mod svg_renderer;
So if this is needed in the short term, I guess I'm fine with it, but this is all going to have to be rewritten when Pathfinder lands. And by that I mean that the interface is going to have to change too, as SVG is the wrong abstraction for clients to deal with. Moveto/lineto/curveto style commands are inferior to meshes, because meshes allow for maximum parallelism during fills and PostScript style commands don't. The right approach, in my view, is to have the client of WebRender (i.e. Servo) be responsible for converting SVG paths into meshes, and WR would not know about moveto/lineto/curveto style commands at all (except maybe when rasterizing glyphs). This way, the conversion can be done once and cached on the Servo render object corresponding to the […].
Honestly, if we want SVG in the short term, I would prefer to see Servo just rasterize SVG on the CPU and submit the SVG as a raster image to WebRender. It'll be faster than the readback in this patch anyway.
Could you elaborate on this point? I wouldn't expect parsing of the path to be nearly as time-consuming as its rasterization. Besides, isn't it up to the API of the backing CPU solution anyway (e.g. Skia/FreeType/etc)?
We still have to maintain the cache mapping of multiple other things (images, glyphs, etc), so I don't see much of a problem (in terms of implementation) here. Even if Pathfinder gets integrated and proves to be an ultimate replacement for CPU-based rasterizers (which we'd arguably want to keep for a while as a backup anyway), there would still be a point to doing the path->mesh conversion inside WR, because both Gecko and Servo need SVG rendering.
I think we all agree on that :)
The basic idea is that with winding rules you can't determine whether a pixel is filled or not without looking at all the path intersections on the scanline prior to that pixel, which is inherently sequential due to being a global operation, while with a mesh of primitives you can fill every primitive independently without having to perform a global scan (in other words: every pixel belongs to precisely one primitive and so all pixels can be easily drawn in parallel).
I guess either doing the path-to-mesh conversion inside Servo or doing it inside WR can be made to work. My general feeling is that PostScript-style moveto/lineto/curveto commands are bad in general and it would be nice for WebRender to present a more sensible mesh-based interface. I also just think it's simpler to hang meshes directly off the SVG path objects. But it would probably work fine either way, I suppose.
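Pcwalton's parallelism argument above can be made concrete with a toy comparison (illustrative code, not webrender's): with winding rules, deciding whether pixel x is filled requires the running winding count of every crossing to its left (a prefix sum, inherently sequential per scanline), whereas with a mesh each pixel can be tested against its primitive independently.

```rust
// Winding-rule fill for one scanline: sequential scan over sorted edge
// crossings, where `crossings` holds (x, direction) pairs.
fn fill_scanline_winding(crossings: &[(i32, i32)], width: i32) -> Vec<bool> {
    let mut filled = vec![false; width as usize];
    let mut winding = 0;
    let mut next = 0;
    for x in 0..width {
        while next < crossings.len() && crossings[next].0 <= x {
            winding += crossings[next].1; // depends on all prior crossings
            next += 1;
        }
        filled[x as usize] = winding != 0; // non-zero winding rule
    }
    filled
}

// Mesh fill: each pixel-vs-triangle test is independent, hence trivially
// parallel across pixels and primitives.
fn point_in_triangle(p: (f32, f32), tri: &[(f32, f32); 3]) -> bool {
    let sign = |a: (f32, f32), b: (f32, f32), c: (f32, f32)| {
        (a.0 - c.0) * (b.1 - c.1) - (b.0 - c.0) * (a.1 - c.1)
    };
    let d1 = sign(p, tri[0], tri[1]);
    let d2 = sign(p, tri[1], tri[2]);
    let d3 = sign(p, tri[2], tri[0]);
    let has_neg = d1 < 0.0 || d2 < 0.0 || d3 < 0.0;
    let has_pos = d1 > 0.0 || d2 > 0.0 || d3 > 0.0;
    !(has_neg && has_pos)
}

fn main() {
    // A shape entered at x=2 and exited at x=6 on this scanline.
    let filled = fill_scanline_winding(&[(2, 1), (6, -1)], 8);
    assert_eq!(filled, vec![false, false, true, true, true, true, false, false]);

    let tri = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)];
    assert!(point_in_triangle((1.0, 1.0), &tri));
    assert!(!point_in_triangle((3.5, 3.5), &tri));
}
```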
That was my first approach; however, the webrender image API currently is not resizable, so if the dimensions of an SVG change during layout I need to delete the previous image and generate a new key. And I couldn't make this work, because the layout object in Servo is refcounted and I couldn't find a way to tell webrender to drop the rasterized image when the SVG fragment goes out of scope. If there is a way to solve this, please tell me :) Instead, I went for this geometry API mainly for exploration purposes. Here are some observations:
I would really like to know in what form we should hand an SVG to webrender, especially considering the goal of adding native support for vector graphics rasterization in webrender. Should we add path display items to the existing ones and reuse them for SVG, or should we create brand new ones? Should the whole SVG be stored in a vector with all the paths serialized in it, or be tree-like with all the path objects cached in webrender? I understand how rough my design could be and I don't mind rewriting it at all. I am also preparing the SVG pull request for Servo and would like to submit it soon for a panorama. Feel free to criticize and share your opinions :)
Sorry, I haven't looked through this in detail yet. Certainly, I think it would be ideal if we could make this work through the blob / external image APIs - then we can use nanovg experimentally here (external images can provide a raw GL texture), and make it easy to iterate on this without introducing dependencies into WR itself. Perhaps we need to resolve the image resizing question first, and then this might be fairly straightforward to convert to that? [Apologies if I'm missing context from not having read all the other comments yet, and thanks for working on it.]
For path objects, I think the important part is that they are retained in webrender with a cache for the tessellated representation, like images and font data. For the representation itself, I have a slight preference towards keeping a PostScript-like representation in the display list, but we could make it work either way. Off the top of my head, some pros/cons of sending tessellated trapezoid meshes to webrender: pros: […] cons: […]
On the topic of representing SVG documents, we need new display items for the SVG stuff that is not yet possible to express with the current set of display items (for example filling and stroking paths), and use regular display items for the rest, just like Gecko does. Looks like Gecko only has 3 SVG-specific display items: SVG_OUTER_SVG, SVG_GEOMETRY and SVG_TEXT.
@glennw Yeah, if we use the blob image API we would move the renderer part to Servo.

I have submitted servo/servo#17681, FWIW.

OK, the lazy tessellation argument is convincing. I'm fine with submitting high-level path commands.
@nical About the representation of SVG elements: note that, of course, we can reuse the structures in […]
I believe the answers to both questions are the latter. First, considered from the rendering perspective, an SVG is simply treated as an image inside the CSS content, and there is no interference between them. Hence: […]

Lastly, the first part should not matter much until we start to work on text, images and effects; before then, the interface should be easy to change.
SVG's drawing model is very close to the rest of HTML's. It's pretty much a superset, so everything we already have in webrender has to be implemented for SVG. Applying/animating transforms, rendering (rounded) rects, images, filters, shadows, group opacity, etc. is mostly the same at the spec level and it should desugar to the same thing at webrender's level. Looking at display items is just the visible part of the iceberg. There is also managing fonts and caching glyphs, and more importantly we want SVG primitives to benefit from the same batched rendering architecture as other primitives in webrender: the logic that chooses what can be drawn directly or in an intermediate target, finds the best order to draw things, uses the z-buffer to reduce overdraw, etc. If rendering SVG elements is separated from the rest, a ton of optimization is lost.
The internal representation of effects should not differ heavily (they don't in Gecko, for example). I'm not sure what you have in mind for the bounding rect, but this is negligible compared to reimplementing almost all of webrender in a separate SVG renderer. In general, bounding rects are super useful for culling items that are out of view, so offhand I don't see any reason not to have them.
Not having incremental display lists is not a good reason to re-implement webrender's functionality separately. Just wait for incremental display lists to be implemented if you believe they are a blocker (I don't think they are).
We don't need to rebuild display lists when animated properties change. Webrender has a concept of animated properties that can be modified separately from the display list, specifically to avoid rebuilding display lists. That's not a problem specific to SVG. Caching the display list of an SVG document can also be done without having a separate representation and rendering infrastructure. And having display lists refer to other display lists is also already possible: you can create a pipeline, put the display list of an SVG in there, and link this pipeline to the rest of the page the same way we do it for iframes. I'm not saying this is what we should do for SVG, just that webrender already has the flexibility to express that.
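The last point (an SVG getting its own display list and being linked into the page by pipeline id, the way iframes work) can be sketched with simplified stand-in types; none of these names are the real webrender builder API.

```rust
use std::collections::HashMap;

// Simplified stand-ins, for illustration only.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct PipelineId(u32);

#[derive(Debug)]
enum DisplayItem {
    Rect { w: f32, h: f32 },
    // An embedded pipeline, drawn where this item appears (iframe-style).
    Iframe(PipelineId),
}

#[derive(Default)]
struct Scene {
    display_lists: HashMap<PipelineId, Vec<DisplayItem>>,
}

impl Scene {
    fn set_display_list(&mut self, id: PipelineId, items: Vec<DisplayItem>) {
        self.display_lists.insert(id, items);
    }

    // Count leaf items reachable from a root pipeline, following iframe links.
    fn count_items(&self, id: PipelineId) -> usize {
        self.display_lists.get(&id).map_or(0, |items| {
            items.iter().map(|item| match item {
                DisplayItem::Iframe(child) => self.count_items(*child),
                _ => 1,
            }).sum()
        })
    }
}

fn main() {
    let page = PipelineId(0);
    let svg = PipelineId(1);
    let mut scene = Scene::default();
    // The SVG document gets its own display list under its own pipeline...
    scene.set_display_list(svg, vec![
        DisplayItem::Rect { w: 10.0, h: 10.0 },
        DisplayItem::Rect { w: 5.0, h: 5.0 },
    ]);
    // ...and the page's display list links to it like an iframe.
    scene.set_display_list(page, vec![
        DisplayItem::Rect { w: 100.0, h: 100.0 },
        DisplayItem::Iframe(svg),
    ]);
    assert_eq!(scene.count_items(page), 3);
}
```

The design point being illustrated: the SVG's display list can be cached and replaced independently of the page's, without a separate rendering infrastructure.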
Once #1490 lands, will that simplify moving the code to the blob / external image API?
Since SVG is a huge and complicated feature, I believe that if we want native SVG support in webrender, experimenting with and iterating on the geometry API in Servo is necessary to make it practical, so I still propose adding it. But this is your decision.
(force-pushed from 3ab5118 to 0888765)
We discussed this on IRC, and have decided to experiment with something similar in design to the blob renderer. That is, we can start adding an experimental geometry API, but the implementation is a pluggable trait provided by the caller. That should allow us to experiment with the API for various backends, while iterating on the implementation(s) externally.
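A rough sketch of that plan, with invented trait and type names (not the interface that eventually landed): the geometry API lives in webrender, but rasterization is a trait the embedder implements, just like the blob image renderer.

```rust
// Hypothetical key type, for illustration only.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct GeometryKey(u64);

struct RasterizedGeometry {
    width: u32,
    height: u32,
    rgba: Vec<u8>,
}

// The embedder (Servo, a test harness, a nanovg- or Pathfinder-backed
// renderer, ...) provides this trait implementation to webrender.
trait GeometryRasterizer {
    fn rasterize(&mut self, key: GeometryKey, width: u32, height: u32) -> RasterizedGeometry;
}

// A trivial stand-in backend that fills everything with opaque white.
struct SolidFillRasterizer;

impl GeometryRasterizer for SolidFillRasterizer {
    fn rasterize(&mut self, _key: GeometryKey, width: u32, height: u32) -> RasterizedGeometry {
        RasterizedGeometry {
            width,
            height,
            rgba: vec![255; (width * height * 4) as usize],
        }
    }
}

fn main() {
    // Webrender would hold the backend as a trait object and stay agnostic
    // about which library actually rasterizes the geometry.
    let mut backend: Box<dyn GeometryRasterizer> = Box::new(SolidFillRasterizer);
    let image = backend.rasterize(GeometryKey(0), 4, 2);
    assert_eq!(image.rgba.len(), 4 * 2 * 4);
}
```

Swapping backends then means swapping one trait object, without touching webrender itself.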
My home network went out of service recently; I will get back to this ASAP.
Add types for displayable SVG items. Currently it only supports basic shapes and filling with solid colors.
Add a vector graphics rasterizer using nanovg. Currently it is single-threaded; it rasterizes incoming geometries into bitmap images and sends the results back through a channel.
Add the geometry API. Currently it stores the geometries in the SVG renderer, rasterizes them when needed, and simply treats them as bitmap images in the renderer.
@stshine Are you still looking to work on this in the future? Perhaps we should close this for now until you have time to work on it again, or someone else picks it up?

My personal feeling is that Pathfinder is so close to ready that there's little point in using NanoVG for this, even as a temporary stopgap. The code necessary to hook up Pathfinder in Loop Blinn SSAA mode is comparable to this.

Closing for now due to inactivity. Thanks for the work on this so far! And please re-open if more work is planned for this branch.
stshine commented Jul 7, 2017
edited by larsbergstrom
This pull request adds a geometry API for vector graphics to webrender. It adds a channel to the resource cache that is connected to an SVG rendering thread, accepts SVG geometries from the API, rasterizes them into bitmaps using nanovg when required, and in the renderer they are simply treated as raster images.
Currently it only supports basic shapes and filling with solid colors, but I believe it is worth reviewing.
Fixes #402.
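The channel-plus-thread arrangement the description outlines can be sketched minimally as follows; all names here are invented for illustration and the dummy solid fill stands in for nanovg.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical message types: a request for a rasterized geometry, and the
// resulting bitmap sent back to the resource cache.
struct GeometryRequest {
    key: u64,
    width: u32,
    height: u32,
}

struct RasterizedImage {
    key: u64,
    rgba: Vec<u8>,
}

fn main() {
    let (req_tx, req_rx) = mpsc::channel::<GeometryRequest>();
    let (res_tx, res_rx) = mpsc::channel::<RasterizedImage>();

    // The "SVG rendering thread": receives requests, rasterizes them (here a
    // dummy opaque-white fill), and sends the bitmap back over a channel.
    let worker = thread::spawn(move || {
        for req in req_rx {
            let rgba = vec![255u8; (req.width * req.height * 4) as usize];
            res_tx.send(RasterizedImage { key: req.key, rgba }).unwrap();
        }
    });

    req_tx.send(GeometryRequest { key: 1, width: 8, height: 8 }).unwrap();
    drop(req_tx); // close the channel so the worker loop exits

    let image = res_rx.recv().unwrap();
    assert_eq!(image.key, 1);
    assert_eq!(image.rgba.len(), 8 * 8 * 4);
    worker.join().unwrap();
}
```

The resulting bitmaps can then be handed to the renderer as ordinary raster images, as the description says.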