GUI subsystem refactor #855

Open
Vtec234 opened this issue Jun 11, 2017 · 13 comments
Labels
area: ui Related to the graphical user interface (Qt) lang: c++ Done in C++ code to-discuss Idea or suggestion that needs some discussion before implementation

Comments

@Vtec234
Contributor

Vtec234 commented Jun 11, 2017

As part of both the renderer rewrite (#850) and the architectural rework of the engine, we need to decouple subsystems. Without this, proper support for a client/server split and job-based parallelism with communication through channels will be practically impossible. I'm working on the graphics side of things, so I'd like to work on decoupling the GUI first.

To achieve this design goal, we unfortunately have to get rid of all the GuiLinks as they stand, as well as any other links to non-GUI engine components (e.g. EngineQMLInfo). From a very high-level viewpoint, the initial idea of how we could replace this is a callback system, where all communication proceeds solely through the Gui class, with an API of something like:

class Gui {
public:
  Gui(SDL_Window *window, util::Path const &source, util::Path const &root_dir); // no EngineQMLInfo

  // registers a function to be called when X happens in the GUI
  void register_on_X_cb(std::function<...> callback);

  void enqueue_event(SDL_Event *event); // stores an event to be processed
  void process_events(); // actually processes events and calls callbacks

private:
  // internal state
};

Obviously, we will need more than just the above in practice, but the general idea is that the GUI is an event-based component like any other. By itself it doesn't do anything; the user registers functions that they want called on GUI state changes.

To give an example: currently, changing properties of a game involves a GameControlLink that directly manipulates Engine members. This is unfortunate, because it will result in completely unpredictable behaviour in a concurrent execution context. Instead, what we would have is an on_game_property_changed callback which is called from process_events whenever the GUI detects that one of the enqueued events is a mouse click on something that triggers a game property change. It is up to the user to ensure thread safety for that callback, and what we would in fact do is either make that callback send an event on the WorldUpdater concurrent input channel or directly modify some property of the PresentationState, which is local to the Presenter class - the user of the GUI.
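
To make this concrete, here is a rough usage sketch under the proposed API. The names register_on_game_property_changed_cb, GamePropertyChange, world_updater and the surrounding variables are hypothetical illustrations, not existing engine code:

// hypothetical wiring done by the Presenter, the owner of the Gui
Gui gui{window, source_path, root_dir};

gui.register_on_game_property_changed_cb(
  [&world_updater](GamePropertyChange change) {
    // never touch Engine members directly; instead, forward the
    // change to the simulation through its concurrent input channel
    world_updater.input_channel().send(std::move(change));
  });

// per frame: feed raw SDL events in, then let the Gui turn them
// into callback invocations
SDL_Event event;
while (SDL_PollEvent(&event)) {
  gui.enqueue_event(&event);
}
gui.process_events();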

I'd be happy to hear thoughts on this, especially from @ChipmunkV, who I understand is the principal author of the GUI.

@TheJJ TheJJ added lang: c++ Done in C++ code to-discuss Idea or suggestion that needs some discussion before implementation labels Jun 11, 2017
@VelorumS
Contributor

VelorumS commented Jun 11, 2017

The only problem with the GUI is that the coupling is hard to write. And that's precisely because of the thread safety you're talking about.

The current GUI is okay with concurrent execution. GuiLinks aren't manipulating the game objects directly. Their purpose is to enqueue functors onto the queue that's consumed by the event loop of the thread where the engine and everything else currently lives.

The GUI recognizes four threads: render, game logic, GUI logic, and raw input. The GuiLink things handle communication between game logic and GUI logic. The GuiLink objects that live in the GUI logic thread keep a copy of the data in their members (so there is less blocking and more copying through the queues).
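
For readers unfamiliar with the pattern, a minimal self-contained sketch of such a functor queue (not the actual GuiLink code) could look like this:

#include <functional>
#include <mutex>
#include <queue>
#include <utility>

// the GUI-logic thread enqueues functors; the engine thread drains them
// in its event loop, so game objects are only touched from one thread
class FunctorQueue {
public:
  void push(std::function<void()> fn) {
    std::lock_guard<std::mutex> lock{this->mutex};
    this->queue.push(std::move(fn));
  }

  // called from the consuming (engine) thread once per loop iteration
  void drain() {
    std::queue<std::function<void()>> pending;
    {
      std::lock_guard<std::mutex> lock{this->mutex};
      std::swap(pending, this->queue);
    }
    while (!pending.empty()) {
      pending.front()();
      pending.pop();
    }
  }

private:
  std::mutex mutex;
  std::queue<std::function<void()>> queue;
};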

@Vtec234
Contributor Author

Vtec234 commented Jun 11, 2017

I missed that, sorry! Since the GUI enqueues events in a threadsafe manner, concurrent execution might not in fact be a problem.

There are, however, other reasons for why I proposed this change of the interface, which are just as important.

Firstly, on the server, there is no GUI, but the Engine, GameMain, and other classes which manage the world contain GuiLinks. To be able to cleanly reuse them in a server binary without using hacks like if (is_server) { link = nullptr; }, we have to have classes which manage the world and the world only, without user interaction. To be clear, the GUI is just a first step. We'll also have to move all game drawing and input functionality out of the world simulation code.

Secondly, a decoupled design would be more easily maintainable due to the separation of concerns: writing game simulation code wouldn't involve having to think about user and graphics interactions. In general, it's also good software development practice.

@VelorumS
Contributor

VelorumS commented Jun 11, 2017

Secondly, a decoupled design would be more easily maintainable due to the separation of concerns: writing game simulation code wouldn't involve having to think about user and graphics interactions.

Delete the libopenage/gui directory and comment out the code until it compiles. You'll see that the decoupling is perfect. (Maybe the data structure in generator.h isn't there, but it's a generic key-value map.)

@Vtec234
Contributor Author

Vtec234 commented Jun 11, 2017

I went ahead with your suggestion and deleted the mentions of the GUI. Except for the generator problem you mentioned, the game does in fact compile and run, which is indeed impressive. However, this doesn't mean that all the GUI code is gone. Game components still contain Qt classes everywhere, and it is difficult to write purely simulation-related code without thinking about graphics. My proposal is more about making code maintenance and new contributions easier than, so to speak, making the engine compile.

@VelorumS
Contributor

VelorumS commented Jun 12, 2017

Game components still contain Qt classes everywhere

The GuiItemLink *gui_link; things? They are tags and aren't doing anything. They could probably be hidden with some casts and inheritance, an unordered_map, or some casts/struct/composition scheme; see the sketch below.
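
For illustration, one way the unordered_map variant could hide the tags (a sketch with stand-in types, not actual engine code):

#include <unordered_map>

// hypothetical sketch: instead of a GuiItemLink *gui_link member inside
// each game class, the GUI layer keeps the association externally
struct GameObject;   // stand-in for an engine class
struct GuiItemLink;  // the existing tag type

std::unordered_map<const GameObject *, GuiItemLink *> gui_links;

// lookup happens only where the GUI needs it; game code never sees the tag
GuiItemLink *link_of(const GameObject &obj) {
  auto it = gui_links.find(&obj);
  return (it != gui_links.end()) ? it->second : nullptr;
}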

The second kind of thing is classes like GameSpecSignals. They are definitions of events that can occur in the game-logic objects. They are used for one-way communication and end up not being attached to anything when there is no GUI.

It's probably something else that causes problems?

@Vtec234
Contributor Author

Vtec234 commented Jun 12, 2017

My apologies, it seems that I formulated my stance somewhat ambiguously. First of all, I don't think there is anything wrong with the GUI subsystem as it stands. It fits the current design of the engine quite well and works fine.

This issue is, instead, a single step on the road to redesigning the communication between the components of the engine. Naturally, as we change this design, the design of the individual components will have to change as well. An initial draft, largely incomplete but clear on the general picture, is available here. I decided to start with the GUI because I am working on the graphics subsystem (the renderer), and I am willing to do it, so there won't be a maintenance burden for you.

The reason why I want to modify the way QML accesses other engine components is that, as part of the redesign, some of these components will have their internals changed drastically or even cease to exist at all. Moreover, we will add some new components, which ideally should plug easily into the interaction handling of the graphics subsystem.

Consider for example GeneratorParametersConfiguration.qml. This file contains controls for manipulating the parameters for world generation. However, the world generator need not be part of the client. It is likely that during the redesign it will be moved out completely to only be present in servers, and for singleplayer games there will be an embedded server. Proceeding with such a change would require modifying QML - this is what I meant by coupling: changing a world generation component also requires modifying the corresponding QML controls. We could in fact carry out the redesign in this way, but it would be quite burdensome. On the other hand, with a modified GUI component that has an additional layer of indirection, it is enough to modify only the world generator component.

So what I'm asking is: would you be okay with me and possibly other people carrying out these changes to the GUI subsystem?

@VelorumS
Contributor

From the document I see that there is an intention to implement a state machine in C++ for the states that the player sees on the screen (Presenter). That's actually something that should be pushed into QML scripts as much as possible.

The Generator is used in QML because currently the user has to interact with it. When the user doesn't have to (when the thing is used only on the server), delete the QML and the Link; no changes to the generator itself.

Returning to the initial post of this thread, if the key idea is to be the "event-based component", then we're out of luck: QtQuick is model-view-based to the core. In an event-based stack like "actual game objects - event layer - virtual game object models for the GUI" there is no 1-to-1 relation between the actual objects and the models that the user controls.

The current GUI is a subsystem when it comes to rendering and input. But for the actual operation and logic, it's a scripting-language binding: you take your classes and make them usable from a script.

Decoupling of a language binding isn't something that people do.

@TheJJ
Member

TheJJ commented Jun 30, 2017

I don't think the MVC-based QtQuick can't be made friends with event-based triggers: we just update the model via events, and derive events from the view or controller when appropriate.
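
As an illustration of that direction, a hypothetical sketch (not existing openage code) of a Qt model whose contents only ever change by applying events from the game logic:

#include <QAbstractListModel>
#include <vector>

// the view (QtQuick) observes this model; the game logic never touches
// the view, it only emits events that end up in apply_hp_event()
class UnitListModel : public QAbstractListModel {
  Q_OBJECT

public:
  int rowCount(const QModelIndex &parent = QModelIndex()) const override {
    return parent.isValid() ? 0 : static_cast<int>(this->hp_values.size());
  }

  QVariant data(const QModelIndex &index, int role = Qt::DisplayRole) const override {
    if (!index.isValid() || role != Qt::DisplayRole) {
      return QVariant{};
    }
    return this->hp_values.at(index.row());
  }

  // called when a "unit hp changed" event arrives from the game logic
  void apply_hp_event(int row, int new_hp) {
    this->hp_values.at(row) = new_hp;
    emit this->dataChanged(this->index(row), this->index(row));
  }

private:
  std::vector<int> hp_values;
};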

I totally agree that the UI mode state machine should be in QML.

What are the virtual game object models you talk about? Are they the copy of the data displayed by the GUI system? If yes, couldn't we just update that copy whenever the game logic decides to update some values in the GUI?

@VelorumS
Contributor

Refactorings are done only if there is some visible feature that desperately needs them (that justifies another level of indirection). I don't know what it is, so I'll probably go and actually read the commits of #850 to find out.

@TheJJ
Member

TheJJ commented Aug 6, 2017

I think we agree that the GUI is part of the so-called Presenter.
The presenter is everything that represents the "now" and is presented to the user, concretely:

  • Renderer
  • UI
  • Sound

So I wouldn't say we need to refactor the GUI subsystem, but rather integrate it into that model, which might require some refactoring.

The UI (and partially the renderer) are special as they have to feed information into the simulation.
This means that actions done by the user that influence the simulation have to be transferred to it.

The current model is that the simulation pushes into the presenter the keyframes that are sufficient to describe the current presenter state. The presenter then shows the content at the time the simulation says so.

In order to proceed, we need to define a clear interface to and from the presenter. The GUI is then updated whenever the state at the requested time is different.

Another goal should be the possibility of extending the UI with mods through Python. I have barely an idea of what is possible and elegant there.

@Tomatower I think you need to clarify or correct me 😁

@Tomatower
Contributor

Reminder: "curves" are basically lines drawn between (time1, value1) and (time2, value2), with the option to calculate time1 <= time <= time2 as ((time2-time1)/(time-time1)* (value2 - value1)) + value1 - in other words: linear interpolation between two points identified by their time.
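
In code, the evaluation is a one-liner (a generic sketch, not the engine's actual curve classes):

// value of a curve at `time`, given the two surrounding keyframes
// (time1, value1) and (time2, value2), with time1 <= time <= time2
double curve_value(double time1, double value1,
                   double time2, double value2,
                   double time) {
  // linear interpolation between the two keyframes
  return value1 + (time - time1) / (time2 - time1) * (value2 - value1);
}

// example: a unit at x=0 at t=10s and x=100 at t=20s is at x=50 at t=15s:
// curve_value(10.0, 0.0, 20.0, 100.0, 15.0) == 50.0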

My suggestion, running on top of the curve-concept is the following:

The simulation keeps its own state, having all the curves, having all events, having all segments of path-planning.

The renderer also keeps its own state, having only the curves containing the two keyframes relevant for rendering at the moment. (and maybe the future - two keyframes may be as far apart as seconds, maybe even minutes!)

The simulation changes its state by executing at a lower rate than the presenter, keeping track of the changes it performs. For that, the simulation needs a current "now" before which changes must not happen.
After it is done executing a "physics-frame", it will apply all changes that happened to the keyframes that are currently in the presenter state (both cases are sketched below):

  1. By changing the second keyframe (for example the time of arrival of a unit, or the destination of a unit)
  2. By inserting the next keyframe into the buffer (because time has passed the second keyframe, and rendering cannot continue since interpolation needs two keyframes)
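
A minimal sketch of that two-keyframe buffer on the presenter side (hypothetical types; the real curves live in the simulation):

// two keyframes per curve are enough for the presenter to interpolate
struct PresenterCurve {
  double time1, value1; // keyframe the render time has already passed
  double time2, value2; // keyframe the render time moves towards

  // case 1: the simulation revises the upcoming keyframe in place,
  // e.g. a unit's arrival time or destination changed
  void replace_second(double time, double value) {
    this->time2 = time;
    this->value2 = value;
  }

  // case 2: the render time has passed time2; shift and append the
  // next keyframe so interpolation always has two points
  void advance(double next_time, double next_value) {
    this->time1 = this->time2;
    this->value1 = this->value2;
    this->time2 = next_time;
    this->value2 = next_value;
  }
};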

Inputs/actions shall be stored in an action queue from which the simulation picks them up and executes them at the appropriate times. The simulation logic is currently built as an event-driven system, so user interactions can be implemented as "just another" event that happens within the game logic. That way it would not matter to the logic whether the event came from a keyboard or from the network.
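
A sketch of such an action queue, assuming actions carry the simulation time at which they should run (hypothetical types, not existing engine code):

#include <functional>
#include <queue>
#include <vector>

// an input becomes a timestamped action; keyboard and network inputs
// end up in the same queue and are indistinguishable to the logic
struct Action {
  double time;                   // simulation time to execute at
  std::function<void()> execute; // the "just another event" payload
};

struct ActionLater {
  bool operator()(const Action &a, const Action &b) const {
    return a.time > b.time; // earliest action on top
  }
};

using ActionQueue =
  std::priority_queue<Action, std::vector<Action>, ActionLater>;

// inside a simulation step, with `now` the current physics time
void run_due_actions(ActionQueue &queue, double now) {
  while (!queue.empty() && queue.top().time <= now) {
    queue.top().execute();
    queue.pop();
  }
}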

As for the actual code between the presenter and the simulation, I think we have multiple options. One is to reuse the gamestate used in the simulation, so we do not have to duplicate the code, and just change the mode of operation of the containing types; the second would be to create an optimized subset containing only the information that is relevant for user interface rendering - but this depends mostly on what the presenter needs to perform best.

@TheJJ TheJJ added this to the Architecture restructuring milestone Aug 7, 2017
@Vtec234
Contributor Author

Vtec234 commented Aug 8, 2017

The renderer also keeps its own state, having only the curves containing the two keyframes relevant for rendering at the moment. (and maybe the future - two keyframes may be as far apart as seconds, maybe even minutes!)
As for the actual code between the presenter and the simulation, I think we have multiple options. One is to reuse the gamestate used in the simulation, so we do not have to duplicate the code, and just change the mode of operation of the containing types; the second would be to create an optimized subset containing only the information that is relevant for user interface rendering - but this depends mostly on what the presenter needs to perform best.

The simplest solution with respect to synchronisation issues (as in, this one has almost none) is to have a concurrent (thread-safe) channel on which the simulation sends keyframe/curve updates to the presenter, with the presenter keeping a complete replica of the simulation state. This does duplicate state in a memory-wasting manner, but the state is simple enough for this not to be an issue in practice. This version is also the one I would favour.
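
A minimal sketch of such a channel, assuming a plain mutex-guarded queue is fast enough (hypothetical type, not an existing engine class):

#include <mutex>
#include <optional>
#include <queue>
#include <utility>

// the simulation thread send()s keyframe/curve updates; the presenter
// thread drains the channel with try_recv() before rendering a frame
template <typename T>
class Channel {
public:
  void send(T msg) {
    std::lock_guard<std::mutex> lock{this->mutex};
    this->queue.push(std::move(msg));
  }

  // non-blocking receive, so the render loop never stalls
  std::optional<T> try_recv() {
    std::lock_guard<std::mutex> lock{this->mutex};
    if (this->queue.empty()) {
      return std::nullopt;
    }
    T msg = std::move(this->queue.front());
    this->queue.pop();
    return msg;
  }

private:
  std::mutex mutex;
  std::queue<T> queue;
};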

@TheJJ
Member

TheJJ commented Aug 17, 2017

The curve in the renderer can be "cleaned up" much earlier, that is, unnecessary keyframes from the past can be removed quickly. And adding new keyframes can be done pretty efficiently as well, by just appending the few new ones.
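
For example (a sketch with a hypothetical keyframe container):

#include <deque>

struct Keyframe {
  double time;
  double value;
};

// drop keyframes that interpolation no longer needs: keep the last
// keyframe at or before `now` plus everything after it
void clean_up(std::deque<Keyframe> &curve, double now) {
  while (curve.size() >= 2 && curve[1].time <= now) {
    curve.pop_front();
  }
}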

@TheJJ TheJJ added the area: ui Related to the graphical user interface (Qt) label May 2, 2018