
RmlUi 5.0 and 6.0 - Progress and feedback #307

Closed · mikke89 opened this issue May 13, 2022 · 28 comments · Fixed by #594

Labels: discussion (Meta talk and feedback)

mikke89 (Owner) commented May 13, 2022

Edit 2022-12-11: The backends concept has been completed and released together with RmlUi 5.0! There has been a lot of progress on the filters and effects part of this post, but it has not yet been merged into master. Development continues in the filter effects branch and is now targeting RmlUi 6.0.


Hi all!

I wanted to discuss some ideas and gather feedback from everyone ahead of an eventual RmlUi 5.0 release. I have been working in the backends and filter branches, which I intend to eventually merge into master. There are two main changes:

Backends

This is a change I've wanted to make for a long time. It replaces the current sample shell with a multitude of backends split into renderers and platforms (loosely inspired by how Dear ImGui organizes things). This way, it should be a lot easier to add new backends and maintain existing ones. See details and currently implemented backends here. This already includes a more modern OpenGL renderer, which has been repeatedly requested since the very beginning of this library (#261), and also Emscripten support, so RmlUi even runs in web browsers now (which I find kind of ironic).

This organization allows users to more easily pick and choose their desired backend. Ideally, the renderers and platforms are sufficiently general that users can use them directly in their own projects. That way, any improvements to the renderer and platform can trivially be ported upstream to everyone's benefit.

Another advantage of this approach is that we can now easily switch between different backends and test them on all the samples. This has already revealed several limitations and bugs in the backends ported from the SDL and SFML samples, so these are already better supported.

Filters, effects, and shaders

This is probably the biggest change yet to the visual aspect of RmlUi. Users have long asked me to add support for more advanced rendering features such as custom shaders, filters, and mask images. In fact, this goes all the way back to the very first issue, #1. I have been working on exactly these features in the filter branch.

The new rendering features allow us to implement CSS properties including filter, backdrop-filter, mask-image, and box-shadow. In fact, I already have these properties working, although all of this is still a work in progress. Filters and backdrop filters are implemented to support all effects in CSS. Mask image is designed such that any decorator can be used as a mask. Box shadows are fully featured with blur, offsets, and insets. I have also added support for linear-gradient as a decorator, now with arbitrary rotation and color stops. And yes, this can be used as a mask!

I'm considering renaming decorator to background, leaving the decorator term as more of an implementation detail which incorporates all of filters, masks, and backgrounds. This would bring everything more in line with CSS, e.g. background: linear-gradient(#fff, #000). I've also added a shader decorator which makes adding a custom shader effect in RCSS as simple as background/decorator: shader(my-custom-shader).
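To make the syntax concrete, here is a hypothetical RCSS snippet combining these properties. The selectors, values, and shader name are illustrative only; the exact syntax is still subject to change:

```css
/* Hypothetical RCSS; selectors and values are illustrative, not final syntax. */
div#panel {
	decorator: linear-gradient(#fff, #000);
	filter: blur(5px) drop-shadow(#000 2px 2px);
	backdrop-filter: blur(10px);
	box-shadow: #0008 4px 4px 10px 0px;
}
div#custom {
	/* Invokes a user-compiled shader by name. */
	decorator: shader(my-custom-shader);
}
```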

Now naturally, all of this comes with the caveat of a more complex renderer. This is also why the backends change is important, because then people can more easily reuse the included renderers and contribute to fix issues. Importantly though, everything will mostly be backward compatible, so if someone does not need support for these advanced effects they can simply skip implementing the new rendering functions to keep their simple renderer and everything will work just like before. Furthermore, these changes add surprisingly little complexity to the core library so it shouldn't bring any bloat for users not wanting it.

When starting this work I figured I wanted to implement all of these rendering features simultaneously instead of taking it in steps, mainly to understand and guide the necessary abstraction layer for the render interface. Here are the new render interface functions being added:

enum class StencilCommand { None, Clear, WriteValue, WriteIncrement, WriteDisable, TestEqual, TestDisable };
enum class RenderCommand { None, StackPush, StackPop, StackToTexture, StackToFilter, FilterToStack, StackToMask };

class RenderInterface {
	/* ... */
	virtual bool ExecuteStencilCommand(StencilCommand command, int value = 0, int mask = 0xff);

	virtual TextureHandle ExecuteRenderCommand(RenderCommand command, Vector2i offset = {}, Vector2i dimensions = {});

	virtual CompiledEffectHandle CompileEffect(const String& name, const Dictionary& parameters);
	virtual TextureHandle RenderEffect(CompiledEffectHandle effect, CompiledGeometryHandle geometry = {}, Vector2f translation = {});
	virtual void ReleaseCompiledEffect(CompiledEffectHandle effect);
};
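To illustrate the backward-compatibility point, the sketch below mocks the interface with the new functions given no-op defaults, so a simple renderer can skip them entirely. The handle typedefs, helper structs, and default return values here are my assumptions for illustration, not the actual library code:

```cpp
#include <cstdint>
#include <string>

// Assumed stand-ins for the real RmlUi types, for illustration only.
using TextureHandle = std::uintptr_t;
using CompiledGeometryHandle = std::uintptr_t;
using CompiledEffectHandle = std::uintptr_t;
struct Vector2i { int x = 0, y = 0; };
struct Vector2f { float x = 0, y = 0; };
struct Dictionary {};
using String = std::string;

enum class StencilCommand { None, Clear, WriteValue, WriteIncrement, WriteDisable, TestEqual, TestDisable };
enum class RenderCommand { None, StackPush, StackPop, StackToTexture, StackToFilter, FilterToStack, StackToMask };

class RenderInterface {
public:
	virtual ~RenderInterface() = default;
	// The new functions have non-abstract no-op defaults (assumed values), so a
	// renderer that does not need advanced effects never has to implement them.
	virtual bool ExecuteStencilCommand(StencilCommand /*command*/, int /*value*/ = 0, int /*mask*/ = 0xff) { return false; }
	virtual TextureHandle ExecuteRenderCommand(RenderCommand /*command*/, Vector2i /*offset*/ = {}, Vector2i /*dimensions*/ = {}) { return 0; }
	virtual CompiledEffectHandle CompileEffect(const String& /*name*/, const Dictionary& /*parameters*/) { return 0; }
	virtual TextureHandle RenderEffect(CompiledEffectHandle /*effect*/, CompiledGeometryHandle /*geometry*/ = {}, Vector2f /*translation*/ = {}) { return 0; }
	virtual void ReleaseCompiledEffect(CompiledEffectHandle /*effect*/) {}
};

// A simple renderer overriding none of the effect functions still works.
class SimpleRenderer : public RenderInterface {};
```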

This is very much a work-in-progress, there are still questions and improvements to be made. I'd love to hear some feedback in this regard.

Some notes and open questions regarding the interface and implementation:

  • We do need some sort of rendering stack like this to enable compositing effects on top of each other.
  • Filters generally work by applying some shader effects to an input texture. If the rendering scene is multisampled, we have to resolve this to a texture first. This is partly what the StackToFilter and FilterToStack rendering commands are used for.
  • Generally, I want to avoid going too low-level in order to give flexibility to the renderer implementation. Not sure I succeeded, it becomes increasingly difficult with more advanced effects.
  • Box shadows in particular make extensive use of low-level stencil commands. Are there perhaps better approaches here?
  • Filters and transforms are a bit tricky. Currently, we just apply the filter to the rendered and transformed element. Ideally, though, we would apply the filter in element space. That is not so easy, as it would require changes to our transform chaining, and we would need to figure out which parts of the element are visible.
  • Some effects may need to sample from neighboring pixels such as drop-shadow, box-shadow, and blur. These may look strange near window edges or other clip edges since we don't have any rendered information outside these edges. Not sure if this is acceptable or something we need to deal with. Probably requires implementing the previous point to deal with this.
  • Should filter effects be applied as (clipped) fullscreen filters or rendered using geometry?
    • The tricky part for rendering geometry here is handling filters that need to render outside the normal geometry of the element, in particular drop-shadow and blur filters. We also need to support rendering to lower resolutions, especially for blur.
    • Right now, filters use fullscreen rendering, clipped with scissor regions for performance's sake, and then applied to the scene while clipping with stencils. This feels complicated and error-prone.
    • On the other hand, background decorators (e.g. linear-gradients) render using a geometry. This makes the API a bit weird, though: sometimes the effect takes a geometry and sometimes not.
  • I find it difficult to judge how involved the library should be on deciding whether to render some effect to a static texture or not.
    • For example, box-shadow is currently forced to render to a static texture, and then this texture is rendered on subsequent calls. This is quite natural because we know exactly when the background changes and needs to be updated.
    • On the other hand, all filter effects are currently rendered on every rendering loop, because it is impossible for the library to know if there are visual changes requiring it to be updated. However, the user might know?
    • What about linear-gradients? I feel like the renderer should be able to decide whether this should be rendered to a static texture.
  • These changes also allow us to fix some rendering issues, such as clipping children to the border-radius of a parent. See "border-radius does not respect overflow: hidden" #253 (comment) for an example.

Apologies, this became longer than intended and reads more like a combined blog entry and notes to myself; I'm not sure these notes make sense to anyone else. In any case, some general feedback would be highly appreciated.


I'll leave you with a screenshot which showcases many of the filters and effects. The really cool part is that all of these can be composited and stacked as desired by the user. Note that there are only two bitmaps used in this whole sample, the little invader saucer and the RmlUi logo.

[Screenshot: the effects sample]

@mikke89 mikke89 added the discussion Meta talk and feedback label May 13, 2022
@mikke89 mikke89 added this to the 5.0 milestone May 13, 2022
xland commented May 13, 2022

All the work you're doing is amazing!
Is there a pre-release branch for testing?

@paulocoutinhox commented

Hi,

I'm trying to run it on iOS and macOS, but I can't run it with Metal.

#error "Only the opengl sdl backend is supported."

Can you add support for Metal?

Thanks for this great library!

mikke89 (Owner) commented May 13, 2022

Thanks guys! Yeah, you can find these changes in the backends and filter branches. Regarding Metal support, see #193.

xland commented May 17, 2022

@mikke89

What's the difference between backends branch and filter branch?
If I want to try the new features, which branch should I choose?
Will there be a 5.0 branch in the future?

mikke89 (Owner) commented May 17, 2022

@xland Good question, I could have been more clear here.

The backends branch implements the backends concept described in the original post. The filter branch is based on top of the backends branch and contains the advanced graphical features discussed.

I intend to merge backends to master soon, as I think this is getting close to some sort of stability. The filter branch on the other hand is more experimental and might change a lot, so I intend to keep it in a separate branch for a while.

The filter branch is where you find the latest and greatest, but consider it highly experimental :) You'll find the new effect sample there which is shown in the screenshot.

@stephenap07 commented
I think the Effect addition to the API would probably work well. ExecuteRenderCommand sounds generic enough to apply to all rendering, though. Would these render commands only be applied for filters?
I personally use RmlUi to render in a game world in 3D space without framebuffers. It just looks better to use individual mipmapped textures for rendering the GUI. I haven't tested yet, but I suspect SDF fonts will also look better at various distances with this method. I use stencil tests instead of scissor regions for this reason. Since this API will make extensive use of stencils, how much control over the mask values will the client have?

Good work, by the way.

mikke89 (Owner) commented May 19, 2022

@stephenap07 Yeah, that's a good point. Do I understand you correctly that you find it better to render the RmlUi geometry directly into the 3D world, rather than rendering to a flat texture and then placing that texture in the 3D world? That is indeed a use case I need to keep in mind; I think it should currently be compatible, though.

This is a similar situation to what happens internally when we have filter + transforms applied, and I'm honestly not sure what the best approach is.

One change I've currently implemented is that the stencil command is automatically used instead of the scissor region whenever a transform is applied. In your case, you would still have to direct the scissor region to stencil operations manually, I guess. Alternatively, you could add a transform to the document itself, placing it into the world. This way, you could just use the commands as usual, with no need for another manual transformation/projection step.

Yeah, you make a good point that we should accommodate client-side use of the stencil buffer as well. I was originally thinking that we would require an 8-bit stencil buffer (and thereby mask), and never go above this. But I guess stencil buffers are often limited to 8 bits, so maybe we should reserve, say, the lowest 4 bits of that for the RmlUi commands and let the client use the rest?
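For the record, such a split could look something like the sketch below, where masked stencil writes keep the library's low 4 bits and the client's high 4 bits from stepping on each other (in GL terms, via the glStencilMask write mask). The constant and function names are invented for illustration:

```cpp
#include <cstdint>

// Hypothetical partition of an 8-bit stencil buffer: RmlUi keeps the low
// 4 bits, the client keeps the high 4 bits. Names are invented here.
constexpr std::uint8_t kLibraryStencilMask = 0x0F; // bits reserved for RmlUi commands
constexpr std::uint8_t kClientStencilMask  = 0xF0; // bits left to the client

// Writing library bits must not disturb client bits, and vice versa.
constexpr std::uint8_t WriteLibraryBits(std::uint8_t stencil, std::uint8_t value)
{
	return static_cast<std::uint8_t>((stencil & kClientStencilMask) | (value & kLibraryStencilMask));
}
constexpr std::uint8_t WriteClientBits(std::uint8_t stencil, std::uint8_t value)
{
	return static_cast<std::uint8_t>((stencil & kLibraryStencilMask) | (value & kClientStencilMask));
}
```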

Yeah, ExecuteRenderCommand is a bit general; do you have a suggestion for a better name or other functions? You can see the commands in the enum; they mostly have to do with the render stack. Implementing this will be completely optional and only needed to support the new properties: filter, backdrop-filter, mask-image, and box-shadow.

xland commented May 31, 2022

@mikke89

Is there a plan for RmlUi 5.0?
I'll start a new project in the middle of June.
Should I wait for RmlUi 5.0?

RmlUi is a very nice project.
Thanks very much.

mikke89 (Owner) commented May 31, 2022

I don't have any dates in mind yet; it will probably be some time. I don't expect any major breaking changes, so you should be fine starting with RmlUi 4 now, or even using the master branch, and then upgrading once the new version is out.

Thanks for the kind words!

mikke89 (Owner) commented Jun 1, 2022

The backends branch has now been merged into master :)

The filter branch is very much still experimental and open to API changes.

@paulocoutinhox commented
Hi,

Is this working for a Metal backend?

Thanks.

mikke89 (Owner) commented Jun 3, 2022

No, there is no work being done towards a Metal backend right now. That would be entirely up to contributions from users. As previously noted, there is already #193 for discussion on this topic.

@enesaltinkaya commented
Any chance of using this library in plain C?

mikke89 (Owner) commented Oct 4, 2022

We don't have any official C bindings so you would have to make your own.

@enesaltinkaya commented
Alright, thanks.

YTN0 commented Oct 8, 2022

Just wanted to see what the progress on this was. I am specifically interested in the custom shaders aspect.

Would there be a way to specify additional or custom metadata to an object / tag for use by custom shaders? E.g. for an image / texture display, I might want to attach a normal map or other maps (occlusion etc) for use by the custom shader that will render it.

mikke89 (Owner) commented Oct 9, 2022

Hey, appreciate the interest. It has come a long way, I spent a lot of time trying to arrive at an API I'm happy with. There have been a lot of changes in this regard, and it seems I'm converging on something I quite like. In particular, you can see the latest render interface here.

There are quite a lot of changes, so I want to make sure to get things right before starting to merge. I'm currently considering making an RmlUi 5.0 release without the advanced effects, as there have already been quite a lot of useful additions and fixes in the library. Then we can bring in the advanced effects later on.

Right now I've implemented a "shader" decorator which takes a single string as a parameter. That way you can, in principle, specify whatever you wish in RCSS, but you'll have to parse it yourself. Do you think this is suitable for your case?
For more detailed control, you can make your own decorator. That said, I do think this will be a rather common use case, so perhaps there are some things we could do to make it require less setup.
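As an example of what client-side parsing of that single string could look like, here is a hypothetical key=value;... scheme, e.g. decorator: shader(normal_map=rock_n.png;occlusion=rock_ao.png). The scheme itself and the helper name are invented, not an RmlUi convention:

```cpp
#include <map>
#include <sstream>
#include <string>

// Hypothetical client-side parsing of the shader decorator's string parameter
// into a key/value map. Entries are separated by ';', keys and values by '='.
std::map<std::string, std::string> ParseShaderParams(const std::string& value)
{
	std::map<std::string, std::string> params;
	std::istringstream stream(value);
	std::string entry;
	while (std::getline(stream, entry, ';'))
	{
		const auto split = entry.find('=');
		if (split != std::string::npos)
			params[entry.substr(0, split)] = entry.substr(split + 1);
	}
	return params;
}
```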

YTN0 commented Oct 10, 2022

Thanks for the update. I think that could work. I agree that a flexible shader decorator would be best, as opposed to having to go the custom route. I suspect the shader decorator would get a fair amount of use.

Let me know when you have something testable, and I'd be happy to give it a whirl.

mikke89 (Owner) commented Oct 10, 2022

Alright, sounds good. I'll probably put up a pull request once I'm ready for feedback, I'll try to make sure to ping you :)

YTN0 commented Oct 10, 2022

👍

@mikke89 mikke89 modified the milestones: 5.0, 6.0 Dec 11, 2022
@mikke89 mikke89 changed the title RmlUi 5.0 - Progress and feedback RmlUi 5.0 and 6.0 - Progress and feedback Dec 11, 2022
mikke89 (Owner) commented Dec 11, 2022

Hey, just an update now that RmlUi 5.0 has been released. The backends concept has been fully merged, and released with the new version! There has been a lot of progress on the filters and effects part as well, but it has not yet been merged into master. Development continues in the filter branch and is now targeting RmlUi 6.0.

YTN0 commented Dec 11, 2022

@mikke89 Appreciate the ping. Will definitely give this a try. Thanks for your continued work on this!

@andreasschultes (Contributor) commented
I see that Context* CreateContext(const String& name, const Vector2i dimensions, RenderInterface* custom_render_interface) was removed in the last commits. I currently use different contexts with different render interfaces because I render to different render targets. For example, rendering to a texture and then using that texture on some rendered objects, or showing blurred UI on screen alongside non-blurred UI.
I'm not sure how such cases will be handled in the future.

mikke89 (Owner) commented Apr 24, 2023

Yeah, you're right. I decided to make this change for the reasons outlined in the commit message: 0bbd019.

I also work with render-to-texture and effects similar to what you describe. For me, it's mostly about setting up the render state before issuing the call to Context::Render. I hope you'll find a way to work with these changes.

SirNate0 commented Dec 3, 2023

@mikke89, from the documentation it seems that PopLayer will be used to actually render the layer, rather than just removing it from the layer stack. I am wondering if ApplyLayer might be a better name (and then perhaps CreateLayer instead of PushLayer)? That said, I see that it is also supposed to return something, but it seems not to; if the documentation is just out of date, sorry.

/// Called by RmlUi when...
/// @return A handle to the resulting render texture, or zero if the render target is not a render texture.
/// @note Should render the current layer to the target specified using the given blend mode.
/// @note Should apply mask image, and then clear these attachments.
/// @note Render texture targets should be extracted from the bounds of the active scissor.
/// @note Affected by transform: No. Affected by scissor: Yes. Affected by clip mask: Yes.
virtual void PopLayer(BlendMode blend_mode, const FilterHandleList& filters);

mikke89 (Owner) commented Dec 3, 2023

@SirNate0, the development has moved to the effects branch and has seen many updates since. I understand this wasn't very clear from the initial post, so I updated that too. I intend to put up a pull request relatively soon, and I'm looking forward to any feedback on that.

With that said, the functions you refer to have the same names, so your feedback is very much still applicable:

/// Called by RmlUi when it wants to push a new layer onto the render stack.
/// @param[in] layer_fill Specifies how the color data of the new layer should be filled.
virtual void PushLayer(LayerFill layer_fill);
/// Called by RmlUi when it wants to pop the render layer stack, after applying filters to the top layer and blending it into the layer below.
/// @param[in] blend_mode The mode used to blend the top layer into the one below.
/// @param[in] filters A list of compiled filters which should be applied to the top layer before blending.
virtual void PopLayer(BlendMode blend_mode, const FilterHandleList& filters);

I agree that Create/ApplyLayer communicates much better that rendering is occurring. One challenge I see is that normally we blend onto the layer below while popping. However, without push/pop, it isn't clear that this is supposed to be a stack of layers, and the term "layer below" doesn't really have a meaning. Some alternatives we might want to consider:

  1. Maybe keep push/pop, but make a separate Blend/CompositeLayers function.
  2. Drop the concept of the stack entirely, use CreateLayer and let it return a user handle for that new layer. Then, let the apply/blend/composite function use the layer handles. I guess we would also need a ReleaseLayer.

Both of these might complicate implementations a bit, especially the latter one. But I think it's something worth investigating. Any thoughts? Also, the current PushLayer has different ways of initializing the layer (clearing or cloning the current layer), not sure how this fits into the picture?
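To make alternative 2 a bit more concrete, here is a rough sketch of a handle-based layer registry. All names, signatures, and the bookkeeping details are invented purely for discussion, not a proposed RmlUi API:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <map>

// Sketch of alternative 2: explicit layer handles instead of an implicit
// push/pop stack. Compositing takes arbitrary source/destination handles.
using LayerHandle = std::uint64_t;

class LayerRegistry {
public:
	LayerHandle CreateLayer()
	{
		const LayerHandle handle = next_handle_++;
		layers_[handle] = true; // a real renderer would allocate a render target here
		return handle;
	}
	void ReleaseLayer(LayerHandle handle) { layers_.erase(handle); }

	// Blend `source` onto `destination`; filters and blend mode omitted for brevity.
	void CompositeLayers(LayerHandle source, LayerHandle destination)
	{
		assert(layers_.count(source) && layers_.count(destination));
		// ... apply filters to source, then blend it into destination ...
	}
	std::size_t ActiveLayerCount() const { return layers_.size(); }

private:
	LayerHandle next_handle_ = 1;
	std::map<LayerHandle, bool> layers_;
};
```

One visible cost of this design is lifetime management: the client renderer must keep targets alive until ReleaseLayer, which is the extra implementation burden mentioned above.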

mikke89 (Owner) commented Feb 6, 2024

A pull request is now out, implementing all of the effects mentioned here, and some more: #594

I hope you'll like it :) And I'd appreciate any help with testing it.

mikke89 (Owner) commented Mar 30, 2024

The PR has been merged, and I just wanted to give an update on the feedback by @SirNate0.

  1. Maybe keep push/pop, but make a separate Blend/CompositeLayers function.

I decided to go with this approach in the end, and I think it helped a lot: both to simplify the API usage, since we can now composite two arbitrary layers, and because it works a lot better semantically in my view. I kept the push/pop render stack, since it simplifies things to have one main active (top) layer that all draw functions render to, and the push/pop nature plays nicely with how the library uses layers generally. Thanks for the feedback, it definitely helped to make the interface better.
