Editor-Ready UI #254
I wanted to share a few quick thoughts on UI systems in games. I'm a gamedev professionally; I read this list of requirements and had a bit of a knee-jerk reaction. These are good features from a basic point of view, but they sort of miss the boat on what I would consider the harder questions around UI:
I'm happy to elaborate on any of these, but primarily I'd recommend trying to learn from where Unity is at the moment. They started with an immediate-mode UI engine for both the Editor and the Runtime. Later, because of performance and flexibility, they adopted a retained-mode UI for the Runtime. Now, as of two years ago, they've finally been building a (really fantastic) unified UI system for both the Editor and Runtime, open source. This is a great talk on the new design (UIElements/UIToolkit): https://www.youtube.com/watch?v=zeCdVmfGUN0

I'd highly recommend taking as much learning as you can from their path. Frankly, both previous systems were a total mess. They seemed great in the moment, but just collapsed under any sort of complexity. What makes UI challenging is that it's very hard to build incrementally; you have to plan a lot of these things in from the start. I don't think Bevy UI should be nearly as complicated as the latest Unity UI, but it's really worth learning from, and not repeating, their mistakes.
Also, if the intention is for Bevy to start off as a hobbyist engine, incremental is perfectly fine. I would just expect to have to rewrite the UI a few times as you want to attract larger projects.
Some early distillation of thoughts is happening here: At the moment anyone is free to add their particular domain experience. This is a stopgap - the suggestion is that Cart takes ownership of the hackmd, or that it's used as ingest for a GitHub wiki/markdown page.
Would it be worth also looking at SwiftUI as an example of prior art?
Add your write-up of it to the doc if you’d like 👍
Can we add "screen-reader friendly" and "multiple input types" (keyboard, mouse) as hard requirements for any UI framework / UI primitives?
At the risk of being very ignorant, is screen-reader friendliness a high priority for a visual editor? I have a feeling visually impaired folks will have difficulty getting use out of the editor. Making a visual scene editor friendly to visually impaired users sounds like an entire research project. Mouse + keyboard navigation is a clear win for accessibility, so I think I'd want to focus efforts there, given our limited resources. I'm certainly not saying we shouldn't make ourselves screen-reader friendly, as that will absolutely be useful in other apps; I'm just questioning whether it should be an Editor-UI priority. I'm curious what the use cases would be for screen readers + visual editors. Please tell me if I'm wrong here.
Yeah no worries, it was a genuine question. I'm not sure how well screen readers are supported in other editors; I am coming from a context of developing on the open web, and it is really important there. I imagine native software is a lot more complicated. The extent to which it matters comes down to how visual the editor is. If there are lots of text inputs (e.g. a way to change transform values of an entity numerically rather than just using click + drag) then I think screen reader support would be something to consider. If, at least in the short term, we are looking at a more purely graphical interface, then it is less important.
Also for the record, I don't use screen readers, so I don't actually know what users would prefer in this case. Maybe folks would rather just edit the RON files directly. Let's not worry about it too much. :-)
Have been looking at SwiftUI, and one of the nice things about the declarative approach is that it means your data is well structured for adding things like that in the future. The general concepts they use are really elegant and I suspect would translate well to Rust, although it may be an "all or nothing" commitment to go that route. The main principle is defining a tree of very lightweight units composed together to create consistent and predictable results, with surprisingly little boilerplate and a small number of primitive/leaf components. Data dependencies are all declared up front so the framework knows what to update as and when the underlying data changes, and there's an "environment" concept so that data can be passed down the hierarchy without needing to be passed through every hop. I quite like the bounds calculation process too, where the parent gives its children size "suggestions", they respond with their desired size, and the parent determines ultimate positioning (but has to respect each child's desired size). Worth watching the WWDC '19 & '20 talks on it all for ideas.
@tektrip-biggles: FYI, @raphlinus has done a bunch of great research into this whole topic, as well as comparisons to existing systems like SwiftUI: https://raphlinus.github.io/rust/druid/2019/10/31/rust-2020.html
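To make the bounds-negotiation idea above concrete, here is a minimal plain-Rust sketch of the "parent proposes, child responds" protocol. All names (`View`, `Fixed`, `VStack`, `measure`) are hypothetical stand-ins for illustration, not SwiftUI or Bevy API:

```rust
// Sketch of SwiftUI-style size negotiation (hypothetical types):
// the parent proposes a size, each child answers with the size it
// actually wants, and the parent must respect that answer.

#[derive(Clone, Copy, Debug, PartialEq)]
struct Size { w: f32, h: f32 }

trait View {
    /// The parent proposes `proposal`; the view returns its desired size.
    fn measure(&self, proposal: Size) -> Size;
}

/// A fixed-size leaf view, e.g. an icon: ignores the proposal entirely.
struct Fixed(Size);
impl View for Fixed {
    fn measure(&self, _proposal: Size) -> Size { self.0 }
}

/// A vertical stack: offers each child the remaining space, then
/// reports the union of what its children asked for.
struct VStack(Vec<Box<dyn View>>);
impl View for VStack {
    fn measure(&self, proposal: Size) -> Size {
        let mut h = 0.0_f32;
        let mut w = 0.0_f32;
        for child in &self.0 {
            let child_size = child.measure(Size { w: proposal.w, h: proposal.h - h });
            h += child_size.h;
            w = w.max(child_size.w);
        }
        Size { w, h }
    }
}

fn demo() -> Size {
    let stack = VStack(vec![
        Box::new(Fixed(Size { w: 100.0, h: 20.0 })),
        Box::new(Fixed(Size { w: 80.0, h: 30.0 })),
    ]);
    stack.measure(Size { w: 300.0, h: 300.0 })
}

fn main() {
    // The stack reports the union of its children: 100 wide, 50 tall.
    println!("{:?}", demo());
}
```

The key property is that layout logic lives in each container's `measure`, so new containers compose with existing leaves without either knowing about the other.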
A few thoughts:

Screen reading: If the goal is for Bevy UI to support in-game UI as well as the editor, that means the UI will need to support gamepad-only and keyboard-only navigation from the beginning. So in a lot of ways, you're already halfway to basic screen-reading support. The bigger question is one of UX design (i.e. does the Editor require the mouse?) rather than implementation.

FRP: Modern FRP-based approaches work fantastically for the web. I think they're really strong when you have a flat UI that is structured as a hierarchy. Game UI can be much more complex. In-game UI often isn't a flat plane, nor a tree. There might be a myriad of text cards, health bars, holographic displays, etc. Depending on the game, it may be hard to treat this as a single tree. Additionally, there are entire projects currently figuring out FRP in Rust (Yew, etc.). It's been a massive undertaking, and most require lots of macros, templates, generics, etc. And that's without having to build a renderer. So I worry about complexity and scope here.

What I'd favor is a general, high-performance UI rendering engine, integrated with the ECS. An FRP crate could be built on top of it, but wouldn't be explicitly baked into the solution. That would allow UI-heavy and 2D games to use FRP as needed, without requiring all other use cases to be jammed inside of it.
I think Godot can serve as a nice project to look at while developing Bevy UI.
I think that's right. Especially the part about complexity and scope. I really, really like developing UI in the FRP style and would love it if Bevy supported it, but it's a mountain of work that we don't need to take on right now. I think really nailing a UI rendering engine and ECS would give us a good foundation to build different higher-level UI experiments on. I could see a React-like UI system someday that treats the ECS UI the way React treats the DOM.
I 100% agree with this @ncallaway, and I've had similar thoughts over the last few days. The high-level API is something that can - perhaps should - be developed only once the underlying system has been established and is found to be performant and fit for purpose. If it means writing very verbose or repetitive code for the time being, that is a worthwhile trade-off IMO.
Also agree with @Kleptine that the high-level stuff can be user-land crates for the time being.
One point @Kleptine
Why not start in-code and then add some optional CSS-like solution later? (I'm thinking something like how CSS-in-JS works, i.e. it mostly compiles to the target language at build time so everything is static at runtime.)
Precisely my thoughts as well. The ECS would be a great way to store this information in a flat structure. You should take a look at the way Unity stores their UI components in a raw buffer (in the UIElements design talk). It's fairly similar.
I think that could work out. Unity used in-code styling for their IMGUI implementation. One of the challenges was just that it was cumbersome to thread all of the different styles through all parts of your program. It might be better, though, if styles could somehow be stored as constants and exported statically. So I think some more succinct form of styling would be nice. A CSS-like alternative could be added as a crate, although it might make the API a little more challenging to design. But I agree it's a fairly big task, probably too big for now. Personally, I would be averse to anything other than a CSS subset. There are enough UI styling languages out there that there's no need to reinvent the wheel.

Edit: Another downside of code-only styling is that you need to recompile to see changes. Bevy is already fast to recompile, but you might still have to spend a few minutes replaying the game to get back to the UI you were looking at. It'd be ideal if styling were an asset that could be hot-reloaded in place, just like any other asset.
Anyone please correct me if I'm wrong, but I think styling as it currently works in Bevy UI can be saved/loaded from scene assets at runtime.
I was assuming that this would be the case, or at least that UI styling would be built up out of composable functions (i.e. "mixins"). I think this would already be quite easy to do with Bevy UI, but I haven't actually tried it.
I wrote in Discord about the possibility of adopting the Every Layout (https://every-layout.dev/layouts/) primitives as our "building blocks" for layout. I think we could potentially just copy the CSS from the Every Layout components into Bevy UI as some sort of "style mixins". I'm a big fan of their composable approach to flexbox layout primitives, and since Bevy UI currently uses the flexbox model anyway, this would be a good fit: https://every-layout.dev/rudiments/composition/ P.S. you have to pay for access to the book, but the CSS itself is not patented, so we can use it.
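A minimal sketch of what the "style mixins" idea from the last two comments could look like: plain functions that transform a style value, composed at the call site. The `Style` struct here is a simplified stand-in for illustration, not Bevy's actual `bevy::ui::Style`, and `stack`/`boxed` are loose analogues of Every Layout's Stack and Box primitives:

```rust
// Illustrative "style mixins": plain functions `Style -> Style`,
// composed in order. `Style` is a simplified stand-in, not Bevy's.

#[derive(Clone, Copy, Debug, Default, PartialEq)]
struct Style {
    direction_vertical: bool,
    gap: f32,
    padding: f32,
}

type Mixin = fn(Style) -> Style;

/// Analogue of Every Layout's "Stack": vertical flow with consistent spacing.
fn stack(mut s: Style) -> Style {
    s.direction_vertical = true;
    s.gap = 8.0;
    s
}

/// Analogue of Every Layout's "Box": uniform padding around content.
fn boxed(mut s: Style) -> Style {
    s.padding = 16.0;
    s
}

/// Apply mixins left to right; later mixins win on conflicting fields.
fn compose(mixins: &[Mixin]) -> Style {
    mixins.iter().fold(Style::default(), |s, m| m(s))
}

fn demo() -> Style {
    compose(&[stack, boxed])
}

fn main() {
    println!("{:?}", demo());
}
```

Because mixins are just functions, they can live as `const`-adjacent library code now, and a CSS-like asset layer could later compile down to the same composition.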
As to screen reader support in the editor/UI, I can speak a bit to that. I'm a blind game developer working on adding accessibility to Godot. It may not be as relevant here as it is in Godot since I gather that more code-based workflows are first-class here in a way they aren't with Godot, but a few use cases I have in mind:
I'm running up against some Godot limits that make accessibility challenging to implement, so tentatively pencil me in as willing to help with Bevy UI accessibility. My big condition for helping out is that it be baked directly into the UI (i.e. separate components are fine, but I'd want to ship it as a UI requirement that someone might choose to disable for whatever reason, rather than as a third-party crate). I'd also like for it to be as integrated with the repo as the UI crate is, such that CI failures breaking accessibility are blockers. In other words, I'm fine with people not launching the screen reader system if they'd rather not, but I'd want UI consumers to automatically have it and be assured that it works with the most recent UI. Hope that's acceptable. In terms of making my job easier, here are two bits of advice:
Anyhow, hope that helps. Sorry for going long. I'm about to launch an accessible Godot game, but am happy to try Bevy for my next title and work on accessibility. One good aspect of audio-only games is that they don't exactly require lots of resources: a blank screen and good audio are usually enough. :)
I'd vote for accessibility / screen-reader friendly support as part of Editor Ready UI too. I think @ndarilek lays out great reasons why accessibility is important in the Editor itself. The other reasons why I'd vote to tackle it as early as possible (even if it does expand scope somewhat) are:
I don't necessarily think we'll be able to get the first version of the editor to a place where it integrates with popular screen readers on all platforms, but I would really like the core of the UI system to at least have all the pieces in place, so that if someone wanted to add screen-reader support, the UI system doesn't get in the way.
To clarify, as much as I don't like this route, I don't believe screen reader support is the way to go. Instead, I'd advocate for the approach I took with godot-accessibility: build a screen reader into the engine. Each platform has its own accessibility API, so screen reader support means supporting that entire wide surface area. Additionally, I'm not sure that natively-rendered UIs can export accessibility information on Android. It's certainly possible to improve the accessibility of custom widgets, though I'm not sure that support extends to arbitrarily-drawn pixels on a canvas. By contrast, the custom screen reader approach just requires platform-specific text-to-speech, for which I've already developed the tts crate, which supports all major platforms except for Android.

I don't advocate this approach generally, nor would I oppose eventually putting in the work to bridge to platform-specific accessibility APIs. Most of the challenge in game accessibility seems to involve changes to the game rules and world itself (UI-heavy games notwithstanding), and I don't perceive that game UIs are necessarily as expansive as, say, a web browser or office suite. An embedded screen reader might not get us 100% of the way there, but it'd get us close enough for all intents and purposes, and would be much easier to implement.
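The embedded screen reader idea above could be structured so the speech backend stays swappable (and disableable). This is a hypothetical design sketch, not an existing Bevy API: UI systems push announcements into a queue, and a backend such as the `tts` crate mentioned above would consume them once per frame. A recording backend stands in for real text-to-speech here so the sketch is self-contained:

```rust
// Sketch of an engine-embedded screen reader layer (hypothetical
// design, not Bevy API). UI code announces; a backend speaks.

use std::collections::VecDeque;

trait SpeechBackend {
    fn speak(&mut self, text: &str, interrupt: bool);
}

/// Test/no-op backend that records what would have been spoken.
/// A real backend would wrap a text-to-speech library instead.
#[derive(Default)]
struct RecordingBackend { spoken: Vec<String> }

impl SpeechBackend for RecordingBackend {
    fn speak(&mut self, text: &str, _interrupt: bool) {
        self.spoken.push(text.to_string());
    }
}

/// Announcement queue the UI writes to, e.g. on focus change.
#[derive(Default)]
struct ScreenReader { queue: VecDeque<String> }

impl ScreenReader {
    fn announce(&mut self, text: impl Into<String>) {
        self.queue.push_back(text.into());
    }

    /// Drain queued announcements into the backend once per frame.
    fn flush(&mut self, backend: &mut impl SpeechBackend) {
        while let Some(text) = self.queue.pop_front() {
            backend.speak(&text, false);
        }
    }
}

fn demo() -> Vec<String> {
    let mut reader = ScreenReader::default();
    let mut backend = RecordingBackend::default();
    reader.announce("Settings menu");
    reader.announce("Enable vsync: checkbox, unchecked");
    reader.flush(&mut backend);
    backend.spoken
}

fn main() {
    println!("{:?}", demo());
}
```

Keeping the queue in the engine while hiding the backend behind a trait is what lets accessibility ship as a built-in requirement (as requested above) while still allowing a game to disable or replace the speech layer.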
Thank you so much @ndarilek!! This is really insightful. Ideally, and especially because we have prior art in the form of Godot, we shouldn’t need to rely on blind/partially sighted contributors to implement this. So sign me up as well.
Copying @ncallaway's comment from Discord.
Based on discussion on Discord, I think #195 should be a priority right now. I'm keen to get going on some of the styling stuff, but I don't want to have to re-do too many bits for DPI scaling.
Pretty sure it should work with wgpu; Skia is renderer-agnostic, like e.g. SDL2, there are safe Rust bindings, and a WebGPU backend is currently in progress. You can read more about it here: skia-safe. But the thing is, Skia isn't a C library but C++, and it looks like even C++ apps like Aseprite ship a prebuilt version of it...
Nice!
I see. The same argument applies to any non-Rust language, though.
We are currently shipping prebuilt versions of shaderc. The problem is that this requires a version for every supported target. For example, we currently don't ship anything for arm32/aarch64 Linux, despite Bevy fully working on those platforms. There is a limit on the number of targets for which we can ship prebuilt versions, due to the crate size limit on crates.io; Cart already had to ask for it to be increased.
Yeah, I'm pretty hesitant to complicate our builds with more C or C++ deps after we've done so much work to remove things like shaderc, bevy_glsl_to_spirv, and spirv_reflect from our tree (in the upcoming Bevy 0.6). Non-Rust dependencies have a habit of making some of our users' lives very difficult.
Perhaps Pathfinder is worth looking at. It's a Rust library that implements a subset of the HTML canvas API on GPU, aiming to be very fast for use in Firefox.
A lot of hard work seems to have gone into getting very nice, high-performance GPU vector graphics working, but I don't know how well it would play with ECS or how easy it would be to drop into the editor. I've found @cart commenting on a Pathfinder GitHub issue, so I'm sure this is already known about, but I thought it was worth mentioning in this thread. @cart, are you currently expecting Bevy to implement its own canvas, or to use a library for this?
My take is that Bevy should implement its own canvas API and render things using
That makes a lot of sense, but it sounds like a pretty significant undertaking if the amount of work and expertise that's gone into Pathfinder is any indication. What's the plan for getting this done - are you going to lead that effort yourself?
It's definitely significant, but not nearly as much work as something like Pathfinder. They have pretty significantly different use cases, and Bevy can take advantage of simplifying assumptions that a more general rasterizer like Pathfinder can't. That said, it doesn't make a lot of sense to re-implement something like font rendering, so there's a line to be drawn as to what should be used as a dependency. If Pathfinder is sufficiently flexible, I think it could make sense to use it as a tool, just not as the basis for the entire stack.
For font rendering/rasterization, you could use fontdue, and Bevy's renderer for everything else, instead of a huge library made for rendering in the browser. 🤔
I don't have experience with Pathfinder, but I have some with lyon. The only problem I found using lyon was that the tessellator emits polygons in clockwise order, and I think it's not configurable, so the rendering pipeline must be configured to accommodate that.
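If a tessellator's winding order is fixed, as described for lyon above, one option is to detect and normalize it yourself rather than reconfiguring the pipeline. The standard tool is the shoelace formula: a sketch, assuming a y-up coordinate system where counter-clockwise polygons have positive signed area:

```rust
// Shoelace formula: signed area of a simple polygon. Positive means
// counter-clockwise in a y-up coordinate system, negative means
// clockwise.
fn signed_area(points: &[(f32, f32)]) -> f32 {
    let mut sum = 0.0;
    for i in 0..points.len() {
        let (x0, y0) = points[i];
        let (x1, y1) = points[(i + 1) % points.len()];
        sum += x0 * y1 - x1 * y0;
    }
    sum / 2.0
}

/// Reverse the vertex order if the polygon is clockwise, so the
/// pipeline's front-face convention is always satisfied.
fn ensure_ccw(points: &mut Vec<(f32, f32)>) {
    if signed_area(points) < 0.0 {
        points.reverse();
    }
}

fn main() {
    // A clockwise triangle (y-up): signed area is negative.
    let mut tri = vec![(0.0, 0.0), (0.0, 1.0), (1.0, 0.0)];
    assert!(signed_area(&tri) < 0.0);
    ensure_ccw(&mut tri);
    assert!(signed_area(&tri) > 0.0);
    println!("normalized: {:?}", tri);
}
```

Alternatively, most GPU APIs let you flip the front-face convention (e.g. wgpu's `FrontFace::Cw` vs `FrontFace::Ccw`), which avoids touching the vertex data at all.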
I'm not experienced in this area, but I believe that tessellation-based solutions don't play particularly nicely with AA and other subtle subpixel effects, or animated transitions. Looking at projects like Pathfinder, a lot of work seems to go into custom rasterisation for high-quality results at higher performance. I would assume that Bevy wants its core canvas - used in both the editor and games - to be capable of those types of ultra-crisp looks.
I actually discovered a subset of Skia called tiny-skia. It's implemented in pure Rust; I'm using it as the UI backend for my Aseprite-style tilemap editor and it works perfectly fine. I also saw that someone uses it as a backend for desktop widgets on Linux.
Instead of playing around with a dead project, a better idea could be using eframe by the egui author; you can use it natively and, through WASM, as an editor for web games.
You don't need eframe; there's a bevy_egui plugin.
Interesting. Thanks for the link.
I was getting frustrated with trying to build UIs with all the boilerplate it involves, so I made myself a rudimentary way to style Bevy UI elements with CSS (the current implementation is basically a subset of web styling anyway). It's been sitting on my PC for 6 months, and I've found it useful, so maybe someone else will too. https://github.com/sharky-david/bevy_prototype_css I'd be very interested in what others think.
I am with the others who think CSS has no place in a modern world.
I have been using Blender + Armory for the past couple of months; using Blender as an editor is an absolutely amazing experience. Unbeatable, hands down, in every aspect from management and organization to workflow. I strongly recommend that Cart consider Blender as the editor and improve upon Armory's approach with Bevy's ECS.
I don't think that you fully understand the scope of the editor for a game engine. It's not only 3D editing but also 2D, visual UI design, storytelling, quests, asset management, potential visual scripting with node graphs, and many other tools, plugins, etc. For example, Guerrilla Games uses Autodesk Maya with a custom renderer for editing the game's world, while their editor is used for visual scripting, UI design, quests, and much more technical stuff. The problem with Blender is that you cannot just plug Bevy's renderer into it... so you would have baked lights and a lot of other hardcoded data, while you still need an editor for stuff unrelated to rendering.
Yes, Armory has all of that. I suggest you give it a look. It heavily extends Blender by adding a lot of extra features. It is a full-featured editor and game engine.
I think it is possible to integrate Bevy with Blender. Blender's add-on API does support custom renderers (even Cycles is written as just an add-on). Blender's add-on system is incredibly powerful, and since it's possible to call C / Rust code from Python, I don't see any technical reason why it can't be done. This sort of integration is actually quite common; it's fairly standard nowadays. For example, BabylonJS is a JS 3D game engine which has integrations with 3DS Max, Blender, Cheetah, and Maya. And Unreal Engine also has Blender add-ons which provide integration with Unreal. It would be fairly easy to create a Blender add-on which exports a Blender scene into the Bevy editor.
I agree that the older parts of the spec have their issues, but the modern features, i.e. Grid and Flexbox, are super powerful. Unity has resolved to use a CSS subset in their "Singular UI Solution" moving forward.
I think one of the huge benefits that we gain by going with Blender is, obviously, that we develop less code. The maintenance is done for us, especially with Armory also doing stuff for us. Another huge benefit that I see is familiarity: Blender is used and loved worldwide; there are literally tens of thousands of people who can already use 'our editor'. I think this deserves consideration.
I understand the "Blender-integrated engine" value prop. I have used Armory a reasonable amount and have read a good portion of their code. However, my stance on this hasn't changed. Bevy's Editor should be built using Bevy. I won't restate every reason I've stated above and elsewhere, but in short: Blender has its own way to represent data, extend the editor (Python), represent scenes, handle UX, etc. This means that developing the editor experience would require Bevy developers to learn a brand new skill set. Bevy users that want to tinker on the Bevy Editor couldn't rely on their existing Bevy experience to build the tooling; they would need to learn a brand new set of skills. This raises the barrier to entry. The Bevy<->Blender translation layer would also introduce significant complexity (and almost certainly some runtime overhead). Building the Bevy Editor in Bevy means that we can use the same toolsets, the same skills, and the same paradigms everywhere. And it forces us to dogfood Bevy Engine features, ensuring Bevy is a good option for building "engine tooling". Additionally, while Blender is a top-of-class 3D editor, they aren't optimizing their experiences (or data layers) for real-time game development (anymore). If Bevy's needs are ever mismatched with Blender's, or our users need something that Blender's developers aren't willing to accommodate, then we would just be stuck. If the Bevy Org owns the editor in its entirety, we can provide a holistic experience that optimizes for the exact needs of our community.
Just wanna highlight again that an editor-ready UI crate already exists - it is
This is a dump of my personal "UI is usable" checklist: Essential
Nice to have
@mrDIMAS, do you have any examples of using it with the Bevy engine? I'm using bevy-egui currently for my projects, but don't mind exploring other options.
FYI, I don't think that
Related to this UI project thingy and regarding these points from the top post:
I'd like to just drop some of my experience using Bevy UI so far and how I've built a little widget system on top of it. I'm curious whether anyone else has done something similar (surely someone has?) and whether this is an approach that could be built into Bevy (or maybe used in the editor or something). Or maybe this is not desirable for some reason? I haven't explored this approach a lot, so perhaps there are some disadvantages I'm not seeing yet.

I think a big issue with Bevy UI at the moment is that it's very verbose to define UI and it's cumbersome to compartmentalize. It's also not intuitive how a "widget" (or element, or whatever you want to call it) can have control over where to place its internals (i.e. its children UI elements). After experimenting a bit, I ended up writing this couple of traits:

```rust
/// Types that can spawn new entities.
pub(crate) trait Spawner<'w, 's> {
    /// Spawns a new entity with the given bundle of components.
    ///
    /// Returns an [`EntityCommands`] to enable further commands on the entity.
    fn spawn<B: Bundle>(&mut self, bundle: B) -> EntityCommands<'w, 's, '_>;
}

impl<'w, 's> Spawner<'w, 's> for Commands<'w, 's> {
    fn spawn<B: Bundle>(&mut self, bundle: B) -> EntityCommands<'w, 's, '_> {
        self.spawn(bundle)
    }
}

impl<'w, 's> Spawner<'w, 's> for ChildBuilder<'w, 's, '_> {
    fn spawn<B: Bundle>(&mut self, bundle: B) -> EntityCommands<'w, 's, '_> {
        self.spawn(bundle)
    }
}
```

```rust
/// A trait for widgets that lets them be spawned neatly.
pub(crate) trait Widget<'w, 's> {
    /// Spawns the widget with the given spawner.
    fn spawn<S: Spawner<'w, 's>>(self, spawner: &mut S) -> EntityCommands<'w, 's, '_>;
}
```

The `Spawner` trait abstracts over `Commands` and `ChildBuilder`, so a widget can be spawned both at the top level and as a child of another entity. The `Widget` trait is then implemented for each widget type. For example:

```rust
/// The menu widget consists of a container with a title.
pub(crate) struct MenuWidget<F: FnOnce(&mut ChildBuilder)> {
    /// The title is displayed at the top of the menu.
    title: &'static str,
    /// The content of the menu is filled by this closure.
    content: F,
}

impl<F: FnOnce(&mut ChildBuilder)> MenuWidget<F> {
    /// Creates a new menu with the given title and content.
    pub(crate) fn new(title: &'static str, content: F) -> Self {
        MenuWidget { title, content }
    }
}

impl<'w, 's, F> Widget<'w, 's> for MenuWidget<F>
where
    F: FnOnce(&mut ChildBuilder),
{
    fn spawn<S: Spawner<'w, 's>>(self, spawner: &mut S) -> EntityCommands<'w, 's, '_> {
        let mut root = spawner.spawn(menu_body());
        root.with_children(|root| {
            root.spawn(menu_background()).with_children(|background| {
                background.spawn(
                    TextBundle::from_section(
                        self.title,
                        TextStyle {
                            font_size: 72.0,
                            color: Color::WHITE,
                            ..default()
                        },
                    )
                    .with_style(Style {
                        margin: UiRect::all(Val::Px(48.0)),
                        ..default()
                    }),
                );
                (self.content)(background)
            });
        });
        root
    }
}
```

Note how this allows the widget to control where its children are placed: the `content` closure is handed the inner `background` node rather than the root. Spawning a whole settings menu then looks like this:

```rust
fn setup_settings_menu(mut cmds: Commands, settings: Res<Settings>) {
    info!("Setting up settings menu...");
    MenuWidget::new("Settings", |menu| {
        CheckboxWidget::new("Enable vsync:", settings.vsync, Action::ToggleVsync).spawn(menu);
        CheckboxWidget::new(
            "Enable development mode:",
            settings.dev_mode == DevMode::On,
            Action::ToggleDevMode,
        )
        .spawn(menu);
        ButtonWidget::new("Back", |world| {
            world
                .resource_mut::<NextState<AppState>>()
                .set(AppState::Menu(MenuState::Main));
        })
        .spawn(menu);
    })
    .spawn(&mut cmds)
    // Insert a marker component to easily despawn the menu later.
    .insert(SettingsMenu);
}
```

I find the above quite neat, as each widget is self-contained. I only have to give the widget its required inputs and it'll take care of the rest. In the above code, buttons take a label and an action (a closure run on the `World`). The point is that each widget is in control of what inputs it gets and what it uses those inputs for in order to construct the UI. It definitely still requires some additional systems to make these widgets do things, but I thought I'd still share this approach in case it is interesting to anyone.
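The pattern above can be distilled independently of Bevy's types. The essential idea is a trait abstracting over "things that can spawn a node", so the same widget works at the top level and nested inside another widget. A minimal stand-in sketch (all types hypothetical, not Bevy API; a recorder replaces `Commands`/`ChildBuilder` so it runs standalone):

```rust
// The Spawner/Widget pattern with stand-in types (not Bevy API):
// widgets depend only on "something that can spawn a node".

trait Spawner {
    fn spawn(&mut self, label: &str);
}

/// Stand-in for Bevy's Commands/ChildBuilder: records spawned nodes.
#[derive(Default)]
struct Recorder { nodes: Vec<String> }

impl Spawner for Recorder {
    fn spawn(&mut self, label: &str) {
        self.nodes.push(label.to_string());
    }
}

trait Widget {
    fn spawn(self, spawner: &mut dyn Spawner);
}

/// A container widget whose body is filled by a closure, like MenuWidget.
struct Menu<F: FnOnce(&mut dyn Spawner)> {
    title: &'static str,
    content: F,
}

impl<F: FnOnce(&mut dyn Spawner)> Widget for Menu<F> {
    fn spawn(self, spawner: &mut dyn Spawner) {
        spawner.spawn(&format!("menu: {}", self.title));
        // Hand the body closure a spawner, mirroring how MenuWidget
        // hands `content` its inner background node.
        (self.content)(spawner);
    }
}

/// A leaf widget, like ButtonWidget.
struct Button { label: &'static str }

impl Widget for Button {
    fn spawn(self, spawner: &mut dyn Spawner) {
        spawner.spawn(&format!("button: {}", self.label));
    }
}

fn demo() -> Vec<String> {
    let mut ui = Recorder::default();
    Menu {
        title: "Settings",
        content: |menu: &mut dyn Spawner| {
            Button { label: "Back" }.spawn(menu);
        },
    }
    .spawn(&mut ui);
    ui.nodes
}

fn main() {
    println!("{:?}", demo());
}
```

Because widgets close over their inputs and only see the `Spawner` abstraction, composition falls out for free; the Bevy version adds lifetimes and `EntityCommands`, but the shape is the same.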
This is a Focus Area tracking issue
Before we can start work on the Bevy Editor, we need a solid UI implementation. Bevy UI already has nice "flexbox" layout, and we already have a first stab at buttons and interaction events. But Bevy UI still needs a lot more experimentation if we're going to find the "right" patterns and paradigms. Editor-Ready UI has the following requirements:
Active Crates / Repos
No active crates or repos. Feel free to make one. Link to it in this issue and I'll add it here!
Sub Issues
No active issues discussing subtopics for this focus area. If you would like to discuss a particular topic, look for a pre-existing issue in this repo. If you can't find one, feel free to make one! Link to it in this issue and I'll add it to the index.
Documents