[RFC] Amethyst UI #10

jojolepro opened this Issue Oct 27, 2018 · 20 comments


jojolepro commented Oct 27, 2018

Ui RFC

Here we go...

Warning: Things might be out of order or otherwise hard to understand. Don't be afraid to jump between sections to get a better view.

So here, I will be describing the requirements, the concepts, the choices offered and the tradeoffs as well as the list of tasks.

Surprisingly enough, most games actually have more work in their ui than in the actual gameplay. Even for really small projects (anything that isn't a prototype), games will have a ui for the menu, in-game information display, settings screen, etc.

Let's start by finding use cases, as they will give us an objective to reach.
I will be using huge games to be sure not to miss anything.

Since the use cases are taken from actual games and I will be listing only the re-usable/common components in them, nobody can complain that the scope is too big. :)

Use cases: Images

(Screenshots referenced: Endless Space, Path of Exile, World of Warcraft, Call of Duty: Black Ops 4.)

Use cases: Text

So now, let's extract the use cases from those pictures.

  • 2d text
  • 3d ui (render to texture)
  • 3d positioned flat text
  • 3d positioned text with depth
  • 3d text can be visible through 3d elements or occluded by them (including partial occlusion)
  • 2d world text (billboard)
  • images
  • color box
  • color patterns (gradient)
  • color filter (change saturation, grayscale, alpha, etc)
  • multiple color in same text vs multiple text segments aligned
  • locale support
  • display data
  • clicky buttons
  • clicky checkboxes
  • draggable elements (free positioning)
  • draggable elements (constrained) (can be used to scroll through scroll views, or to move sliders heads)
  • drag and drop (with constraints on what can be dropped where)
  • Scroll views
  • Sliders
  • tab view
  • menu bar
  • editable text
  • focusable elements
  • keyboard, mouse, controller, touchscreen, rc remote, wii remote inputs
  • on screen keyboard (for use with mouse or controller)
  • overlays (a.k.a help bubbles/popups)
  • automatic layouting
  • auto-resizable text
  • auto-resizable images/color boxes
  • circular progress bars
  • adapts to different screensizes + auto resizes content
  • reactive (past a minimal size, content rearranges itself according to other layout rules)
  • transparency settings for all elements
  • glow effect
  • transitions/animation(fade in, fade out, movements, scaling, etc..)
  • draw lines (straight + curve)
  • non-rectangular ui elements or triggers
  • ui scaling, working in scroll views (path of exile picture)
  • draw a texture (including dynamic) to screen (also allows showing 3d objects on ui, but requires rendering in a different world)
  • occlusion pattern (example: make a square image just show as a circle, removing the corners. also affecting the event triggers)
  • Progress bar (gradient/image + partial display + background)
  • Scrolling text (or view)
  • Input fields (edit text + background + focus + keyboard handling)
  • Growable lists
  • Different presentation when selected
  • Different presentation depending on set condition (have enough money to buy this upgrade ? white : gray)
  • Can select multiple elements at once (lists)
  • Expandable views (bottom button closes the window)
  • Graphs!
  • play sound when hovering or clicking
  • change texture when hovering or clicking, or animate, or apply effect, or trigger a custom side effect in the world (click a button -> spawn an "explosion" entity in the world, triggering a state Trans)
  • Links (opens browser)
  • Theming (changing the color of all links to red, changing the margin or padding for some elements, etc)
  • tables
  • a simple in-game console to enter commands that get sent to amethyst_terminal (when it exists)

Use cases: Conclusion

There are a lot of use cases, and many of them are really complex. It would be easy to do what every other engine does: provide only the basic elements and let the game devs build their own custom elements. Think about it, however: if we, the ones creating the engine, are not able to provide those elements, can we honestly expect game developers to build them from the outside?

Also, if we can implement those using reusable components and systems, and make all of that data oriented, I think we will be able to cover 99.9% of all the use cases of the ui.

Big Categories

Let's create some categories to know which parts will need to be implemented and what can be done when.

I'll be listing some use cases on each to act as a "description". The lists are non-exhaustive.

Eventing

  • User input
  • Selecting
  • Drag and drop
  • Event chaining and side effects

Layouting

  • Loading layout definitions
  • Resize elements
  • Ordering elements
  • Dynamic sizes (lists)
  • Min/Max/Preferred sizes

Rendering

  • Animation
  • Show text (2d, 3d billboard, 3d with rotation)
  • Gradients
  • Drawing renders (camera) on other textures (not specifically related to ui but required)

Partial Solutions / Implementation Details

Here is a list of design solutions for some of the use cases. Some are pretty much ready, some require more thinking, and others are just pieces of solutions that need more work.

Note: A lot are missing, so feel free to write on the discord server or reply on github with more designs. Contributions are greatly appreciated!

Here we go!

Drag

Add events to the UiEvent enum. The UiEvent enum already exists and is responsible for notifying the engine about which user events (inputs) happened on which ui elements.

pub struct UiEvent {
    target: Entity,
    event_type: UiEventType,
}

enum UiEventType {
    Click, // Happens when ClickStop is triggered on the same element ClickStart was originally.
    ClickStart,
    ClickStop,
    ClickHold, // Only emitted after ClickStart, before ClickStop, and only when hovering.
    HoverStart,
    HoverStop,
    Hovering,
    Dragged{element_offset: Vec2}, // Element offset is the offset between ClickStart and the element's middle position.
    Dropped{dropped_on: Entity},
}

Only entities having the "Draggable" component can be dragged.

#[derive(Component)]
struct Draggable<I> {
    keep_original: bool, // When dragging an entity, the original entity can optionally be made invisible for the duration of the grab.
    clone_original: bool, // Don't remove the original when dragging. If you drop, it will create a cloned entity.
    constraint_x: Axis2Range, // Constrains how much on the x axis you can move the dragged entity.
    constraint_y: Axis2Range, // Constrains how much on the y axis you can move the dragged entity.
    ghost_alpha: f32,
    obj_type: I, // Used in conjunction with DropZone to limit which draggable can be dropped where.
}

Dragging an entity can cause a ghost entity to appear (a semi-transparent clone of the original entity moving with the mouse, using element_offset).
When hovering over draggable elements, your mouse optionally changes to a grab icon.
The dragged ghost can have a DragGhost component to identify it.

#[derive(Component)]
struct DropZone<I> {
    accepted_types: Vec<I>, // The list of user-defined types that can be dropped here.
}
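
As a sketch of how a drop system might pair these two components, here is the acceptance check in plain Rust. The stripped-down structs mirror the ones above, but `ItemType` and `can_drop` are illustrative assumptions, not an actual amethyst_ui API:

```rust
// Illustrative only: a minimal model of the Draggable/DropZone pairing.
// `ItemType` and `can_drop` are assumptions, not real amethyst_ui items.
#[derive(Debug, Clone, PartialEq)]
pub enum ItemType {
    Weapon,
    Potion,
}

pub struct Draggable<I> {
    pub obj_type: I,
}

pub struct DropZone<I> {
    pub accepted_types: Vec<I>,
}

// A drop is accepted only when the zone lists the draggable's type.
pub fn can_drop<I: PartialEq>(drag: &Draggable<I>, zone: &DropZone<I>) -> bool {
    zone.accepted_types.contains(&drag.obj_type)
}

fn main() {
    let sword = Draggable { obj_type: ItemType::Weapon };
    let weapon_slot = DropZone { accepted_types: vec![ItemType::Weapon] };
    let potion_belt = DropZone { accepted_types: vec![ItemType::Potion] };
    assert!(can_drop(&sword, &weapon_slot));
    assert!(!can_drop(&sword, &potion_belt));
    println!("drop rules hold");
}
```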

Event chains/re-triggers/re-emitters

The point of this is to generate either more events, or side effects from previously emitted events.

Here's an example of an event chain:

  • User clicks on the screen -> a device event is emitted from winit
  • The ui system catches that event and checks if any interactable ui element was located there. It finds one and emits a UiEvent for that entity with event_type: Click
  • The EventRetriggerSystem catches that event (as does State::handle_event and any custom user-defined systems!), and checks whether there was an EventRetrigger component on that entity. It does find one. This particular EventRetrigger was configured to create a Trans event that gets added to the TransQueue
  • The main execution loop of Amethyst catches that Trans event and applies the changes to the StateMachine. (PR currently opened for this.)

This can basically be re-used for everything that makes more sense to be event-driven instead of data-driven (user-input, network Future calls, etc).

The implementation for this is still unfinished. Here's a gist of what I had in mind:

Note: You can have multiple EventRetrigger components on your entity, provided they have unique In, Out types.

// The component
pub trait EventRetrigger: Component {
    type In;
    type Out;
    // Maps an incoming event to zero or more outgoing events.
    fn apply(&self, event: &Self::In) -> Vec<Self::Out>;
}

// The system
// You need one per EventRetrigger type you are using.
pub struct EventRetriggerSystem<T: EventRetrigger>;
impl<'a, T: EventRetrigger> System<'a> for EventRetriggerSystem<T> {
    type SystemData = (
        Read<'a, EventChannel<T::In>>,
        Write<'a, EventChannel<T::Out>>,
        ReadStorage<'a, T>,
    );
    // fn run: read the incoming events, call `apply` on each matching
    // component, and write the resulting events to the output channel.
}
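
To make the shape of the idea concrete outside of specs, here is a plain-Rust sketch of the retrigger concept. The event variants and the `click_to_trans` mapping are invented for illustration; in the real design this logic would live inside an EventRetriggerSystem as sketched above:

```rust
// Plain-Rust sketch of event retriggering, with no specs dependency.
// All names here (UiEventType, Trans, click_to_trans) are illustrative.
#[derive(Debug, PartialEq)]
pub enum UiEventType {
    Click,
    HoverStart,
}

#[derive(Debug, PartialEq)]
pub enum Trans {
    Push,
}

// The "func" of one particular retrigger: clicks push a new state,
// everything else produces no output events.
pub fn click_to_trans(event: &UiEventType) -> Vec<Trans> {
    match event {
        UiEventType::Click => vec![Trans::Push],
        _ => vec![],
    }
}

// What EventRetriggerSystem::run would do: drain the input channel,
// apply the mapping, and collect the outputs for the output channel.
pub fn retrigger_all(inputs: &[UiEventType]) -> Vec<Trans> {
    inputs.iter().flat_map(click_to_trans).collect()
}

fn main() {
    let inputs = vec![UiEventType::HoverStart, UiEventType::Click];
    assert_eq!(retrigger_all(&inputs), vec![Trans::Push]);
    println!("retrigger ok");
}
```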

Edit text

Currently, the edit text behaviour is

  1. Hardcoded in the pass.
  2. Partially duplicated in another file.

All the event handling, the rendering and the selection have dedicated code only for the text.

The plan here is to decompose all of this into various re-usable parts.
The edit text could either be composed of multiple sub-entities (one per letter), or just be one single text entity with extra components.

Depending on the choice made, there are different paths we can take for the event handling.

The selection should be managed by a SelectionSystem, which would be the same for all ui elements (tab moves to the next element, shift-tab moves back, UiEventType::ClickStart on an element selects it, etc...)

The rendering should also be divided into multiple parts.
There is:

  • The text
  • The vertical cursor or the horizontal bar at the bottom (insert mode)
  • The selected text overlay

Each of those should be managed by a specific system.
For example, the CursorSystem should move a child entity of the editable text according to the current position.
The blinking of the cursor would happen by using a Blinking component with a rate: f32 field in conjunction with a BlinkSystem that would be adding and removing a HiddenComponent over time.
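
A minimal sketch of that blink timing (the Blinking/BlinkSystem names come from the paragraph above, but the phase math here is an assumption):

```rust
// Sketch of what BlinkSystem could compute each frame: given the total
// elapsed time and the `rate` field (seconds per phase), decide whether
// the cursor entity should currently carry the Hidden component.
pub fn cursor_hidden(elapsed_seconds: f32, rate: f32) -> bool {
    // Even phases are visible, odd phases are hidden.
    (elapsed_seconds / rate) as u64 % 2 == 1
}

fn main() {
    let rate = 0.5; // toggle every half second
    assert!(!cursor_hidden(0.1, rate)); // phase 0: visible
    assert!(cursor_hidden(0.6, rate)); // phase 1: hidden
    assert!(!cursor_hidden(1.1, rate)); // phase 2: visible again
    println!("blink phases ok");
}
```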

Selection

I already wrote quite a bit on selection in previous sections, and I didn't fully think about all the ways you can select something, so I will skip the algorithm here and just show the data.

#[derive(Component)]
struct Selectable<G: PartialEq> {
    order: i32,
    multi_select_group: Option<G>, // If this is Some, you can select multiple entities at once within the same select group.
    auto_multi_select: bool, // Disables the need to hold shift or control when multi-selecting. Useful when clicking multiple choices in a list of options.
}

#[derive(Component)]
struct Selected;
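
Since the algorithm itself is skipped above, here is one hedged guess at the core decision a SelectionSystem would make with this data. The function and its rules are assumptions inferred from the field comments:

```rust
// Illustrative model of the multi-select rule implied by the fields above:
// a click extends the existing selection only when both entities share the
// same multi_select_group, and either auto_multi_select is set or a
// modifier key (shift/ctrl) is held. Not a real amethyst_ui API.
#[derive(Clone)]
pub struct Selectable {
    pub multi_select_group: Option<u32>,
    pub auto_multi_select: bool,
}

pub fn extends_selection(clicked: &Selectable, current: &Selectable, modifier_held: bool) -> bool {
    match (clicked.multi_select_group, current.multi_select_group) {
        (Some(a), Some(b)) if a == b => clicked.auto_multi_select || modifier_held,
        _ => false,
    }
}

fn main() {
    let list_item = Selectable { multi_select_group: Some(1), auto_multi_select: true };
    let other_item = Selectable { multi_select_group: Some(1), auto_multi_select: false };
    let lone_button = Selectable { multi_select_group: None, auto_multi_select: false };
    assert!(extends_selection(&list_item, &other_item, false)); // auto multi-select
    assert!(extends_selection(&other_item, &list_item, true)); // shift-click
    assert!(!extends_selection(&lone_button, &list_item, true)); // no group
    println!("selection rules ok");
}
```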

Element re-use

A lot of what is currently in amethyst_ui looks a lot like other components that are already defined.

UiTransform::local + global positions should be decomposed to use Transform+GlobalTransform instead and
GlobalTransform should have its matrix4 decomposed into translation, rotation, scale, cached_matrix.

UiTransform::id should go in Named

UiTransform::width + height should go into a Dimension component (or other name), if they are deemed necessary.

UiTransform::tab_order should go into the Selectable component.

UiTransform::scale_mode should go into whatever component is used with the new layouting logic.

UiTransform::opaque should probably be implicitly indicated by the Interactable component.

I'm also trying to think of a way of having the ui elements be sprites and use the DrawSprite pass.

Defining complex/composed ui elements

Once we are able to define recursive prefabs with child overrides, we will be able to define the most complex elements (the entire scene) as a composition of simpler elements.

Let's take a button for example.
It is composed of: A background image and a foreground text.
It is possible to interact with it in multiple ways: Selecting (tab key, or mouse), clicking, holding, hovering, etc.

Here is an example of what the base prefab could look like for a button:

// Background image
(
    transform: (
        y: -75.,
        width: 1000.,
        height: 75.,
        tab_order: 1,
        anchor: Middle,
    ),
    named: "button_background",
    background: (
        image: Data(Rgba((0.09, 0.02, 0.25, 1.0), (channel: Srgb))),
    ),
    selectable: (order: 1),
    interactable: (),
),
// Foreground text
(
    transform: (
        width: 1000.,
        height: 75.,
        tab_order: 1,
        anchor: Middle,
        stretch: XY(x_margin: 0., y_margin: 0.),
        opaque: false, // Let the events go through to the background.
    ),
    named: "button_text",
    text: (
        text: "pass",
        font: File("assets/base/font/arial.ttf", Ttf, ()),
        font_size: 45.,
        color: (0.2, 0.2, 1.0, 1.0),
        align: Middle,
        password: true,
    ),
    parent: 0, // Points to first entity in list
),

And its usage:

// My custom button
(
    subprefab: (
        load_from: (
            // path: "", // you can load from path
            predefined: ButtonPrefab, // or from pre-defined prefabs
        ),
        overrides: [
            // Overrides of sub entity 0, a.k.a background
            (
                named: "my_background_name",
            ),
            // Overrides of sub entity 1
            (
                text: (
                    text: "Hi!",
                    // ... pretend I copy-pasted the remainder of the prefab, or that we can actually override at a field level
                ),
            ),
        ],
    ),
),
                

Ui Editor

Since we have such a focus on being data-oriented and data-driven, it only makes sense to have the ui be the same way. As such, making a ui editor is as simple as making the prefab editor, with a bit of extra work on the front-end.

The bulk of the work will be making the prefab editor. I'm not sure how this will be done yet.
A temporary solution was proposed by @randomPoison until a clean design is found: run a dummy game with the prefab types getting serialized and sent to the editor, edit the data in the editor, and export that into json.
Basically, we create json templates that we fill in using a pretty interface.

Long-Term Requirements

  • Draw text on sprites
  • Draw sprites on 3d textures
  • Asset caching
  • Good eventing system (in progress)
  • Recursive prefabs

Crate Separation

A lot of things we make here could be re-usable for other rust projects.
It could be a good idea to make some crates for everyone to use.

One for the layouting; this is quite obvious.
Probably one describing the different ui events and elements from a data standpoint (with a dependency on specs).
And then the one in amethyst_ui to integrate the other two and make them compatible with the prefabs.

Remaining Questions

  • Multiple colors in same text component VS multiple text with layout so they look like a single string
  • Display data: Data binding? User defined system? impl SyncToText: Component?
  • Which layout algorithm will we use? Should it be externally defined? If so, how to define default components?
  • How to define occlusion patterns (pictures with alpha?). How to do the render for those?
  • How to make circular filling animations?
  • Theming?
  • How to integrate the locales with the text?
  • Make implementation designs for everything that wasn't covered yet

If you are not good with code, you can still help with the design of the api and the data layouts.
If you are good with code, you can implement said designs into the engine.

As a rule of thumb for the designs, try to make the Systems as small as possible and the components as re-usable as possible, while staying self-contained (and small).

Imgur Image Collection

Tags explanation:

  • Diff Hard: The different parts aren't all hard, but the whole thing is complex.
  • Priority Important: Some things aren't super important and are only there to improve the visuals; others are important improvements to the api that we can't avoid.
  • Status Ready: Some parts are ready to be implemented (at least as prototypes), mostly the design section.
  • Project Ui: This is ui.
  • RFC Discussing: Discussions and new designs will go on for a long, long time, I'm afraid. This is the biggest RFC of amethyst, I think.

Velfi commented Oct 27, 2018

Re: layout algorithms, I'm quite partial to the way that iOS apps control layout. Their system is based on the Cassowary algorithm, which has been implemented in Rust.

For an idea of how that kind of layout works, check out this tutorial.


jojolepro commented Oct 27, 2018

(annotated layout screenshot)

I analysed the layout of one of the pictures. While most of it can be represented as non-overlapping boxes, some parts involve arbitrary shapes and object placements (triangle, circle). There are also the lines that link different elements. I'm not sure how those would work here.

I was thinking about using either cassowary or flexbox, but I'm still trying to understand how some of the ui layouts work and if those layouting algorithms would restrict what it is possible to do.

In the past, I had a hard time using cassowary, so I'm biased against it. I started using flexbox recently, but I can't remember all the css classes for it yet, so it's not going too well either ;)

Let me know what you think about the comments I left on the picture.


randomPoison commented Oct 27, 2018

In my experience, UIs like that circular tech tree don't use a conventional layout system: there's either a minimal custom layout system, or everything is manually positioned and there's no layout system at all. The menu in the box on the left side of the image could be done with a standard layout system, but the circular UI stuff is likely completely custom. Similarly, the lines linking different elements are likely drawn manually using some line-drawing primitive.

This sort of thing is pretty common in game development, and I don't think it necessarily makes sense to try to cover that case with amethyst's built-in UI system. Neither cassowary nor flexbox would be able to make this UI, and I doubt there's anything that would handle this kind of thing easily. Rather than trying to find one UI layout system that can handle every game's UI, we should make it easy for developers to manually position elements on the screen so that they can create any exotic layout they want.


azriel91 commented Oct 28, 2018

Random tidbit before I forget:

  • events as a higher level abstraction over device events (e.g. Selection can encompass mouse click, keyboard enter, wii controller thrown at tv button press; SecondaryAction can encompass right click, touchscreen long press)

ab0v3g4me commented Oct 28, 2018

Perhaps adding a couple of methods to Transform, like setting the relative origin and rotating relative to that origin, would allow us to make these kinds of circular layouts?

Not sure if you will keep the current Transform component or write one specifically for UI; nonetheless, this is an interesting subject. I'll be lurking around.


Xaeroxe commented Oct 29, 2018

re: The rendering should also be divided into multiple parts.
There is:

  • The text
  • The vertical cursor or the horizontal bar at the bottom (insert mode)
  • The selected text overlay
  • Each of those should be managed by a specific system render pass. [edit by Xaeroxe]

This isn't quite reasonable, as draw order matters a lot for these. First you need to draw the overlay on the lowest layer, then the text, then the cursor. We could make each of these render passes dependent on each other, but then it'd probably be easier to simplify this into a single render pass.

Maybe we can have the pass call three separate functions instead.

UiTransform::local + global positions should be decomposed to use Transform+GlobalTransform instead and
GlobalTransform should have its matrix4 decomposed into translation, rotation, scale, cached_matrix.

Hold on, we separated these for a reason. Transform+GlobalTransform is in world space while UiTransform is in screen space. It'd be weird to have Transform+GlobalTransform conditionally in screen space, would probably create some unexpected results for end users. Additionally though some UI elements do need to be in world space. So perhaps we should rename UiTransform to ScreenTransform and use a hybrid approach where Transform+GlobalTransform is used for 3D UI and ScreenTransform is used for 2D UI. That way if we need other things in screen space we have an easy to re-use component for them. Alternatively we could make our transform component an enum with Screen and World variants.

How to make circular filling animations?

Here's my first attempt. Have an ImageArcRender component sort of like this:

pub struct ImageArcRender {
    pub start: f32,
    pub radian_distance: f32,
} 

start is an angle expressed in radians, while radian_distance describes the length of the arc. Positive moves counterclockwise while negative moves clockwise. In the draw pass, an arc of the image is drawn based on this description; start is interpreted as though it resided on a standard mathematical unit circle. So if I wanted quadrant 1 drawn, I would provide start: 0 and radian_distance: PI/2.0. If I wanted to animate the arc filling counterclockwise over time starting from PI/2.0, I would start with

start: PI/2.0,
radian_distance: 0.0,

and over time increase radian_distance until it equaled 2PI. If start is greater than 2PI, it'll be treated as equivalent to angle % 2PI. A negative angle describes clockwise motion over the unit circle and would instead be treated as equivalent to angle % -2PI.
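
That wrapping rule can be written down directly. A sketch: `normalize_start` is an invented name, not a proposed API, and it relies on Rust's `%` keeping the sign of the dividend:

```rust
use std::f32::consts::PI;

// Sketch of the angle wrapping described above: angles at or beyond one
// full turn are reduced with `angle % 2PI`; negative angles (clockwise
// motion) are reduced with `angle % -2PI`, keeping their sign.
pub fn normalize_start(angle: f32) -> f32 {
    if angle >= 0.0 {
        angle % (2.0 * PI)
    } else {
        angle % (-2.0 * PI)
    }
}

fn main() {
    // One and a half turns counterclockwise is the same start as half a turn.
    assert!((normalize_start(3.0 * PI) - PI).abs() < 1e-4);
    // One and a half turns clockwise is the same start as minus half a turn.
    assert!((normalize_start(-3.0 * PI) + PI).abs() < 1e-4);
    // Angles inside the first turn are untouched.
    assert!((normalize_start(PI / 2.0) - PI / 2.0).abs() < 1e-6);
    println!("angle wrapping ok");
}
```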


Xaeroxe commented Oct 29, 2018

Also, if we're going with a monolithic RFC approach, one thing I've always wanted for the UI is a kind of "layer mask" support. This would be useful for rendering a circular map on screen, like the one in the bottom right of this screenshot: (Zelda screenshot)

A layer mask would work very similarly to the feature of the same name in Adobe Photoshop, where we provide a greyscale image used to "mask" a rendered part of the image. The alpha channel in the rendered texture is multiplied by how bright the mask is at that location. So if I wanted to render an image as a circle rather than a square, I would provide a texture of a white circle on a black background as my layer mask.


jojolepro commented Oct 29, 2018

This isn't quite reasonable, as draw order matters a lot for these. First you need to draw the overlay on the lowest layer, then the text, then the cursor. We could make each of these render passes dependent on each other, but then it'd probably be easier to simplify this into a single render pass.

That's why we have a Z value in Transform ;)

re: UiTransform::local + global positions should be decomposed to use Transform+GlobalTransform instead and
GlobalTransform should have its matrix4 decomposed into translation, rotation, scale, cached_matrix.

We could have a ScreenSpace component indicating that a Transform should be mapped to the screen coordinates instead of using the Camera.

ImageArcRender

Probably the way to go.

Mask

I was thinking of the same solution ^^
That seems like the easier solution.

The fun part will be combining ImageArcRender and a mask to get the filling effect for something like this: (screenshot omitted)

I'm thinking of a data layout like this one:

root_entity
  - Transform
  - ScreenSpace
  - Sprite (white square, can be generated from color; background of the slider)
  - Mask (also a sprite, but containing the full filling-slider shape)

child_entity
  - Transform (z + 0.001)
  - ScreenSpace
  - Sprite (blue square, can be generated from color; fill of the slider)
  - Mask (same mask as root_entity, except the end-circle things for this specific example)
  - ImageArcRender (controlled by a system to fill the slider shape)
  - Parent (root_entity)

Xaeroxe commented Oct 29, 2018

We could have a ScreenSpace component indicating that a Transform should be mapped to the screen coordinates instead of using the Camera.

That still makes the Transform interpretation conditional and dependent on factors external to the Transform component. So now instead of

(&transform, ...).join() we need (&transform, !&screen_space, ...).join() to gather all world-space transforms. It's also hard to imagine a scenario where we'd want to apply the same operation to world-space and screen-space coordinates unconditionally; you'd want one or the other. So why not make them separate components?


Xaeroxe commented Oct 29, 2018

Furthermore, the current Z-order rendering works because it's all in the same DrawUI pass. That pass does the ordering itself; we're not using GLSL's depth buffer for that, because otherwise we couldn't blend as we need to.


jojolepro commented Oct 29, 2018

I do see your point.
I wanted them to be the same because that way we can have shared logic for rotations and scaling too. Also you get useful methods like look_at.

3D objects:

  • could be in screenspace by using an orthographic default camera (that could be a way of making icons from 3d objects; however, this might require lighting to look good. Traditionally, people make a background scene with those items inside to generate a texture to show on the ui, e.g. hurtworld icons)
  • are usually in worldspace.

ui:

  • can be a flat texture with 3d coordinates
  • is usually in screenspace.

Both have different defaults, but the behaviour is shared for both cases.

Maybe I am stretching the re-use a bit too far, however. I'm trying to avoid having a UiTransform and a Transform that are essentially the same. Also, we often get requests for sprites to act as if they were ui (react to clicks), and requests for ui to behave like sprites (be drawn on screen, be a child of a sprite, have tint effects).

Trying to find solutions so we don't re-code the same logic for both ui and sprites ^^

Also, I'm not sure I understand why the blending doesn't happen correctly?

(ps: I'm happy to have some discussions going on for this rfc :) )


Xaeroxe commented Oct 29, 2018

Trying to find solutions so we don't re-code the same logic for both ui and sprites ^^

That's a noble goal and I agree, which is why I think we should stop calling it UiTransform and instead go to ScreenTransform. Wherein we use ScreenTransform for screen space sprites as well.

Also, I'm not sure I understand what blending doesn't happen correctly?

Objective: Render a PNG with some pixels whose alpha channel is not equal to 1.0 and have it blend correctly with the pixels below it. This allows us to render non-rectangular elements.

Approach 1: Render this using GLSL depth buffer

Benefit: Massively parallel, takes full advantage of CPU and GPU parallel power.

Drawback: If multiple UI elements are layered on top of each other, as tends to happen with more complex UI such as an inventory, the elements beneath the topmost image appear to be invisible because their pixels were discarded by depth-buffer culling.

Basically, we can't optimize out the rendering of the lower elements because their pixels are still an important part of the final image, even if they are partially covered.

Approach 2: Render all elements one after another, starting with the elements furthest from the screen.

Benefit: Renders correctly, we now can see all UI elements without weird holes in them.

Drawback: Extremely single threaded. Rendering an element can't proceed until the operation before it is complete because the output of rendering the further elements is an input into the rendering of nearer elements.


jojolepro commented Oct 29, 2018

I'm not sure which solution is the best concerning the rendering.

When I was refactoring the selection logic (click to select an element, tab goes to next, etc), I put CachedSelectionOrder as a Resource. (previously called CacheTabOrder)

UiTransform approaches

Approach 1:
We could probably do the same with the z value if we had a Transform + ScreenSpace component.
The ordering would still happen on a single thread like it currently does, but the pass wouldn't have to do it. It would happen during the game frame, in parallel with the rest of the game (assuming everything the user wanted to change in the Transforms is done, for entities with both ScreenSpace and Transform).

Approach 2:
If we go with the other solution of having Transform and ScreenTransform, the end result will be approximately the same. We'll be duplicating most of Transform in the process, but might gain a tiny bit of parallelism by not joining over our holy master component Transform.
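
To illustrate Approach 1's cached ordering, a minimal sketch. `draw_order` is an invented name; in practice this would live next to something like CachedSelectionOrder as a resource:

```rust
use std::cmp::Ordering;

// Sketch of the per-frame ordering from Approach 1: sort screen-space
// entities by their z value once during the game frame, so the draw pass
// can just walk the cached list back-to-front.
pub fn draw_order(mut entities: Vec<(u32, f32)>) -> Vec<u32> {
    // (entity_id, z); lower z is further from the screen and draws first.
    entities.sort_by(|a, b| a.1.partial_cmp(&b.1).unwrap_or(Ordering::Equal));
    entities.into_iter().map(|(id, _)| id).collect()
}

fn main() {
    // background (z = 0.0), text (z = 0.5), cursor (z = 1.0), shuffled.
    let order = draw_order(vec![(3, 1.0), (1, 0.0), (2, 0.5)]);
    assert_eq!(order, vec![1, 2, 3]);
    println!("draw order ok");
}
```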

Edit:

re: UiText, in case I didn't explain it properly, I intend to have edit text be 3 separate entities:

TextEntity
  - Text
  - EditableText (tag)

CursorEntity (child)
  - Blink
  - Sprite
  - Transform* (managed by a system)

SelectedTextEntity (child)
  - Sprite
  - Transform*

*Sprites, if I manage to separate Sprite from SpriteSheet at a data-layout level. Otherwise UiImage.


Xaeroxe commented Oct 29, 2018

It would happen during the game frame in parallel with the rest of the game (assuming everything the user wanted to change in the Transforms is done)

Cool idea, I like it! Assuming we can make sure this happens after all writes to our transform components. The easiest way to do that with the current system graph was to make it part of the render pass. (Maybe we can improve on this with some upcoming RFCs in specs @torkleyy)

The biggest reason I'd prefer approach 2 is because I'm not completely convinced there's actually that much overlap between screen space handling and world space handling. Let's investigate what aspects of each transform would need differing implementations:

x, y, z: f32. That works, although z is technically unit-less for 2D and has a unit for 3D. That's just semantics though; let's see whether or not it actually impacts usage.

fn look_at: given that 3D rotation is a very different beast from 2D rotation, these implementations wouldn't be very similar.

fn matrix: great for 3D, mostly not applicable to 2D.

fn orientation: has a different output for 3D than for 2D.

fn move_global and fn move_local: this is where the semantics of the Z value get kind of weird for 2D. By including Z in the same input vector we're implying Z has the same units as X and Y, but given that we don't actually perform any projection distortion on 2D elements, the units are meaningless.

fn move_forward, fn move_backward, fn move_left, fn move_right: these make a lot of sense for both! However, the implementations will differ a lot. Where things get weird is fn move_up and fn move_down. Are these supposed to move along the Z order? If so, why can't we just say that?

pitch, yaw, and roll: exactly one of these is applicable to 2D.

Basically I'm of the opinion 3D math is an unnecessary burden on the 2D ecosystem. If I want to lay things out in 2D the last thing I want is to be thinking about quaternions, because my rotation can be expressed as a single f32.
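As a rough illustration of that point, here is a minimal, dependency-free sketch of what a dedicated 2D transform could look like. `Transform2D` and its single-angle rotation are hypothetical names for this comment, not existing Amethyst types:

```rust
// Hypothetical sketch: a dedicated 2D transform whose rotation is a single
// f32 angle, as argued above, instead of a quaternion.

#[derive(Debug, Clone, Copy)]
struct Transform2D {
    x: f32,
    y: f32,
    /// Draw order only: no projection distortion is applied along Z in 2D.
    z_order: f32,
    /// Rotation in radians around the screen normal; no quaternion needed.
    rotation: f32,
}

impl Transform2D {
    /// Move along the local "forward" direction (+X when rotation = 0).
    fn move_forward(&mut self, amount: f32) {
        self.x += amount * self.rotation.cos();
        self.y += amount * self.rotation.sin();
    }
}

fn main() {
    let mut t = Transform2D { x: 0.0, y: 0.0, z_order: 0.0, rotation: 0.0 };
    t.move_forward(2.0);
    println!("{:?}", t);
}
```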

@happenslol


happenslol commented Nov 20, 2018

I'm currently working on refactoring the Button component into multiple components, so that functionality like doing things (playing sounds, changing textures) on click or hover can be reused by other components. I wanted to sketch out a few ideas for this:

The basic idea is to make a component for each type of interaction. This way, any UI widget that has a system supporting this can react to its different kinds of interactions:

```rust
world.create_entity()
    .with(button_transform)
    .with(button_texture)
    .with(OnHover(...))
    .with(OnClick(...))
    .build();
```

This would make handling these in the respective systems very easy. The question is what is passed into those. The most basic solution would be to have a simple action enum, and for every event you want you can add an additional reaction component:

```rust
.with(OnHover(PlaySound(sound_handle)))
.with(OnHover(ChangeTexture(tex_handle)))
// and so on
```

Implementing it like this could cause more complex actions to result in a lot of boilerplate, though, and there is no way to control the order in which these happen or to add any delay between them. My idea is to have some kind of action chain, which would enable all of the above. A builder pattern can be used to make the syntax very expressive:

```rust
.with(OnHover(
    Chain::new()
        .then(PlaySound(sound_handle))
        .then(Delay(100.0))
        .then(ChangeTexture(tex_handle))
        .build(),
))
// and so on
```

The chain would just be pushed into a vec behind the scenes, and a system would keep track of running actions. You would also be able to create reusable actions and attach them to all your buttons, for example.
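To make that concrete, here is a dependency-free sketch of how a system could advance one running chain per tick, firing actions and waiting out `Delay` steps. All names (`UiAction`, `RunningChain`) are illustrative, not a proposed final API:

```rust
// Sketch of the action-chain idea above: a chain is just a Vec of actions,
// and a system advances it each tick, pausing on Delay entries.

#[derive(Debug, Clone, PartialEq)]
enum UiAction {
    PlaySound(&'static str),
    Delay(f32), // milliseconds
    ChangeTexture(&'static str),
}

struct RunningChain {
    actions: Vec<UiAction>,
    index: usize,
    wait_left: f32, // remaining delay in ms
}

impl RunningChain {
    fn new(actions: Vec<UiAction>) -> Self {
        RunningChain { actions, index: 0, wait_left: 0.0 }
    }

    /// Advance the chain by `dt` ms, returning the actions fired this tick.
    /// (For simplicity, a Delay resets the timer instead of carrying overshoot.)
    fn tick(&mut self, dt: f32) -> Vec<UiAction> {
        let mut fired = Vec::new();
        self.wait_left -= dt;
        while self.wait_left <= 0.0 && self.index < self.actions.len() {
            let action = self.actions[self.index].clone();
            self.index += 1;
            match action {
                UiAction::Delay(ms) => self.wait_left = ms,
                other => fired.push(other),
            }
        }
        fired
    }
}

fn main() {
    let mut chain = RunningChain::new(vec![
        UiAction::PlaySound("click.ogg"),
        UiAction::Delay(100.0),
        UiAction::ChangeTexture("hover.png"),
    ]);
    println!("{:?}", chain.tick(16.0));  // PlaySound fires immediately
    println!("{:?}", chain.tick(16.0));  // still inside the 100 ms delay
    println!("{:?}", chain.tick(100.0)); // delay elapsed, texture change fires
}
```

The reusability mentioned above would come from cloning the same `Vec<UiAction>` onto many buttons.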

Here are some things I'm still unsure about:

  • Could the syntax be made nicer somehow?
  • What should the naming for all these be?
  • Some behaviour will be specific to components, some won't. E.g. all delayed actions or played sounds could be handled by the same systems, but changing the texture would differ from buttons to say, containers. Is there a better way to do this or do we just have to be careful to generalize as much as possible?

Would love to hear some comments on this, I'll probably implement a simple example and report back.

Edit: As for naming, I'm feeling UiTrigger for OnHover, OnClick, etc and UiAction for PlaySound, ChangeTexture, etc right now

@happenslol

This comment has been minimized.

Copy link

happenslol commented Nov 22, 2018

Alright, so I've scrapped the idea of action chains for now, as they would require the system handling them to keep around a large amount of state, which is probably not desirable for now and way out of the scope of the button refactor.

The current idea is that you just pass the UiTrigger component an array of the UiActions that you want to happen, so it would look a little like this (simplified, of course):

```rust
.with(OnHover::new(&[PlaySound, ChangeTexture]))
```

There can be a central system that receives the events, plays sounds, adds the necessary components to the entities for which events have been triggered, and keeps track of them. This would actually be very similar to the EventRetriggerSystem that @jojolepro proposed, and could implement that functionality in the future.
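A minimal sketch of that central-system idea, with the trigger data flattened into one component. `UiTrigger`, `UiEventKind`, and `retrigger` are hypothetical names for illustration only, not the EventRetriggerSystem API:

```rust
// Sketch: one component lists the actions to emit per kind of UI event,
// and a single central function/system maps an incoming event to them.

#[derive(Debug, Clone, Copy, PartialEq)]
enum UiEventKind { Hover, Click }

#[derive(Debug, Clone, PartialEq)]
enum UiAction { PlaySound, ChangeTexture }

/// The OnHover/OnClick data, flattened into one component.
struct UiTrigger {
    on_hover: Vec<UiAction>,
    on_click: Vec<UiAction>,
}

/// What the central system would do for one event on one entity.
fn retrigger(trigger: &UiTrigger, event: UiEventKind) -> &[UiAction] {
    match event {
        UiEventKind::Hover => &trigger.on_hover,
        UiEventKind::Click => &trigger.on_click,
    }
}

fn main() {
    let trigger = UiTrigger {
        on_hover: vec![UiAction::PlaySound, UiAction::ChangeTexture],
        on_click: vec![UiAction::PlaySound],
    };
    println!("{:?}", retrigger(&trigger, UiEventKind::Hover));
}
```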

@petvas


petvas commented Dec 10, 2018

For positioning elements, CSS has some (maybe) related solutions:
For fixed positioning (health bar, map, fixed icons, etc.) it has CSS Grid.
For dynamic content (item lists, active buffs, menu items, etc.) it has CSS Flexbox.

@derekdreery


derekdreery commented Dec 20, 2018

This sort of thing is pretty common in game development, and I don't think it necessarily makes sense to try to cover that case with amethyst's built-in UI system.

What you can do is provide support for 2D vector graphics in your layout, so you get a scalable interface. Those lines, circles, etc. are described as paths, and then something like lyon is used to generate the rendering primitives for them. You should also allow the developer to say when they want an aspect ratio preserved, and when not, so they control how the UI resizes.
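As a sketch of that resize-control idea (letting the developer choose between stretching and preserving the authored aspect ratio), assuming a hypothetical `ResizeMode` setting:

```rust
// Sketch: compute the UI scale factors for a given resize policy.
// Names (ResizeMode, ui_scale) are illustrative, not an existing API.

#[derive(Debug, Clone, Copy, PartialEq)]
enum ResizeMode {
    /// Fill the screen, distorting the aspect ratio if needed.
    Stretch,
    /// Scale uniformly so the authored aspect ratio is kept (letterboxed).
    PreserveAspect,
}

/// Returns (scale_x, scale_y) to apply to a UI authored at `design`
/// resolution when rendered at `actual` resolution.
fn ui_scale(design: (f32, f32), actual: (f32, f32), mode: ResizeMode) -> (f32, f32) {
    let sx = actual.0 / design.0;
    let sy = actual.1 / design.1;
    match mode {
        ResizeMode::Stretch => (sx, sy),
        ResizeMode::PreserveAspect => {
            let s = sx.min(sy); // fit the whole UI on screen
            (s, s)
        }
    }
}

fn main() {
    // A 16:9 design shown on a 4:3 screen.
    println!("{:?}", ui_scale((1920.0, 1080.0), (1024.0, 768.0), ResizeMode::PreserveAspect));
}
```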

@derekdreery


derekdreery commented Dec 20, 2018

I think we can make two really powerful demonstrations of the UI framework once it's done:

  1. A kitchen sink demonstration - the amethyst editor
  2. A more minimal demonstration that still uses some advanced features: a standard debug overlay that shows FPS, GPU statistics like verts and % culled, and some 3D stuff like an orientation widget

bors bot referenced this issue in amethyst/amethyst Dec 22, 2018

Merge #1189
1189: Implement EventRetriggers and refactor UiButton r=jojolepro a=happenslol

I'm opening this up for preliminary review, since this introduces a lot of changes and I'd like some feedback on it before I actually implement the new patterns for all use cases. Here's what this PR basically does:

* Introduce a new generic `System` called `EventRetriggerSystem`. It works very similarly to the one proposed in #1072, allowing you to basically trigger follow-up events for any events.
* Refactor `UiButton` and its builder to use the new system to trigger its click and hover actions (This is only implemented for `SetTextColor` so far)
* Refactor the UI prefab builder to also use these (Haven't started on this, shouldn't be too much work though).

Here are things I'm still unsure about/not satisfied with:

* How to keep track of the state the buttons are in/what has changed due to which event. At the moment, I'm basically keeping a stack of changes, where the first element is the original state. This feels a little hacky, and might be nicer to read with an additional struct that contains the original state as a field, plus a `Vec` of changes. Let me know if you see a better way to do this.
* How to structure the `UiButtonRetrigger` component. Currently it's a struct containing `Vec`s for all event types (`on_hover`, `on_click`, etc.) and I've been told it's not a good idea to keep `Vec`s in components. It might be a good idea here to use `SmallVec` to keep the memory contiguous, but I'm not sure.
We definitely need some dynamically sized array here, since with a builder or at any point at runtime, the user can add/remove events from this. Since there's no variant typing and multiple events of the same type might occur for one action, it's also not feasible to add a separate component for every one.

If you want to see what's working so far in action, please take a look at the `simple_ui` example. I'm still not sure if I want to flesh out this example a bit while I develop this, but I think it might be a good idea to have an example around that programmatically constructs a UI with some simple effects.



Co-authored-by: Hilmar Wiegand <me@hwgnd.de>

bors bot referenced this issue in amethyst/amethyst Dec 23, 2018

Merge #1189
1189: Implement EventRetriggers and refactor UiButton r=happenslol a=happenslol

I'm opening this up for preliminary review, since this introduces a lot of changes and I'd like some feedback on it before I actually implement the new patterns for all use cases. Here's what this PR basically does:

* Introduce a new generic `System` called `EventRetriggerSystem`. It works very similarly to the one proposed in #1072, allowing you to basically trigger follow-up events for any events.
* Refactor `UiButton` and its builder to use the new system to trigger its click and hover actions (This is only implemented for `SetTextColor` so far)
* Refactor the UI prefab builder to also use these (Haven't started on this, shouldn't be too much work though).

Here's things I'm still unsure about/not satisfied with:

* How to keep track of the state the buttons are in/what has changed due to which event. At the moment, I'm basically keeping a stack of changes, where the first element is the original state. This feels a little hacky, and might be nicer to read with an additional struct that contains the original state as a field, plus a `Vec` of changes. Let me know if you see a better way to do this.
* How to structure the `UiButtonRetrigger` component. Currently it's a struct containing `Vec`s for all event types (`on_hover`, `on_click`, etc.) and I've been told it's not a good idea to keep `Vec`s in components. It might be a good idea here to use `SmallVec` to keep the memory contiguous, but I'm not sure.
We definitely need some dynamically sized array here, since with a builder or at any point at runtime, the user can add/remove events from this. Since there's no variant typing and multiple events of the same type might occur for one action, it's also not feasible to add a separate component for every one.

If you want to see what's working so far in action, please take a look at the `simple_ui` example. I'm still not sure if I want to flesh out this example a bit while I develop this, but I think it might be a good idea to have an example around that programmatically constructs a UI with some simple effects.

<!-- Reviewable:start -->
---
This change is [<img src="https://reviewable.io/review_button.svg" height="34" align="absmiddle" alt="Reviewable"/>](https://reviewable.io/reviews/amethyst/amethyst/1189)
<!-- Reviewable:end -->


Co-authored-by: Hilmar Wiegand <me@hwgnd.de>
@fhaynes

Member

fhaynes commented Jan 8, 2019

I am moving this to the RFC repo with this nifty transfer beta feature! I totally have a backup!

@fhaynes fhaynes transferred this issue from amethyst/amethyst Jan 8, 2019
