GUI subsystem refactor #855
The only problem with the GUI is that the coupling is hard to write. And that's precisely because of the thread safety you're talking about. The current GUI is okay with concurrent execution. The GUI recognizes four threads: render, game logic, GUI logic, raw input. The …
I missed that, sorry! Since the GUI enqueues events in a threadsafe manner, concurrent execution might not in fact be a problem. There are, however, other reasons why I proposed this change of the interface, which are just as important. Firstly, on the server there is no GUI, but the Engine, GameMain, and other classes which manage the world contain GuiLinks. To be able to cleanly reuse them in a server binary without using hacks like … Secondly, a decoupled design would be more easily maintainable, due to the separation of concerns: writing game simulation code wouldn't involve having to think about user and graphics interactions. In general, it's also good software development practice.
Delete the …
I went forward with your suggestion and deleted the mentions of GUI. Except the problem with …
The … The second type of thing is classes like … It's probably something else that causes problems?
My apologies, it seems that I formulated my stance somewhat ambiguously. First of all, I don't think that there is anything wrong with the GUI subsystem as it stands or that there are any problems with it. The GUI fits the current design of the engine quite well and works fine. This issue is, instead, a single step on the road to redesigning the communication between components of the engine. Naturally, as we change this design, the design of the individual components will have to change as well. An initial draft — largely incomplete, but clear about the general picture — is available here. I decided to go with modifying the GUI first, because I am working on the graphics subsystem (the renderer), and I am willing to do it, so there won't be a maintenance burden for you. The reason why I want to modify the way QML accesses other engine components is that, as part of the redesign, some of these components will have their internals changed drastically or will even cease to exist altogether. Moreover, we will add some new components, which ideally should be easily pluggable into interaction handling with the graphics subsystem. Consider, for example, …
From the document I see that there is an intention to implement a state machine in C++ for the states that the player sees on the screen (Presenter). That's actually a thing that should be pushed into QML scripts as much as possible. The Generator is used in QML because currently the user has to interact with it. When the user doesn't have to (when the thing is used only on the server), delete the QML and the Link; no changes to the generator itself. Returning to the initial post of this thread: if the key idea is to be the "event-based component", then we're out of luck: QtQuick is model-view-based to the core. In an event-based stack like "actual game objects - events layer - virtual game object models for the GUI" there is no 1-to-1 relation between the actual objects and the models that the user controls. The current GUI is a subsystem when it comes to rendering and input. But for the actual operation and logic, it's a scripting language binding: you take your classes and make them usable from a script. Decoupling a language binding isn't something that people do.
I don't think the model-view-based QtQuick can't be made to work with event-based triggers: we just update the model via events, and derive events from the view or controller, when appropriate. I totally agree that the UI mode state machine should be in QML. What are the virtual game object models you talk about? Are they the copy of the data displayed by the GUI system? If yes, couldn't we just update that copy whenever the game logic decides to update some values in the GUI?
Refactorings are done only if there is some visible feature that desperately needs them (one that justifies another level of indirection). I don't know what that feature is here, so I'll probably go and actually read the commits of #850 to find out.
I think we agree that the GUI is part of the so-called Presenter.
So I wouldn't say we need to refactor the GUI subsystem, but rather integrate it into that model, which might require some refactoring. The UI (and partially the renderer) are special in that they have to feed information into the simulation. The current model is that the simulation pushes the keyframes that are sufficient to describe the current presenter state into it. The presenter then shows the content at the time the simulation says so. In order to proceed, we need to define a clear interface to and from the presenter. The GUI is then updated whenever the state at the requested time is different. Another goal should be the possibility of extending the UI with mods through Python. I barely have an idea of what is possible and elegant there. @Tomatower I think you need to clarify or correct me 😁
Reminder: "curves" are basically lines drawn between keyframes. My suggestion, building on top of the curve concept, is the following: the simulation keeps its own state, having all the curves, all events, and all segments of path planning. The renderer also keeps its own state, having only the curves containing the two keyframes relevant for rendering at the moment (and maybe the future — two keyframes may be as far apart as seconds, maybe even minutes!). The simulation changes its state by executing at a lower rate than the presenter, keeping track of the changes it performed. For that, the simulation needs a current "now" before which changes must not happen.
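To illustrate the curve concept, here is a minimal sketch of a curve that stores keyframes and linearly interpolates between the two keyframes surrounding a query time. The names (`Curve`, `set`, `get`) are illustrative only, not openage's actual API:

```cpp
#include <iterator>
#include <map>

// Hypothetical curve: keyframes map a time to a value, and reads
// interpolate linearly between the surrounding two keyframes.
struct Curve {
	std::map<double, double> keyframes; // time -> value

	void set(double t, double v) {
		this->keyframes[t] = v;
	}

	// Value at time t, interpolated between the keyframe at or before t
	// and the next keyframe after it. Clamps outside the keyframe range.
	double get(double t) const {
		auto next = this->keyframes.upper_bound(t);
		if (next == this->keyframes.begin()) {
			return next->second; // before the first keyframe
		}
		auto prev = std::prev(next);
		if (next == this->keyframes.end()) {
			return prev->second; // after the last keyframe
		}
		double w = (t - prev->first) / (next->first - prev->first);
		return prev->second + w * (next->second - prev->second);
	}
};
```

Because the renderer only ever reads between two adjacent keyframes, it can hold just the small relevant slice of such a curve while the simulation keeps the full history.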
Inputs/actions shall be stored in an action queue, where the simulation picks them up and executes them at the appropriate times in the simulation. The simulation logic is currently built as an event-driven system, so user interactions can be implemented as "just another" event that happens within the game logic. That way, it would not matter to the logic whether the event came from a keyboard or from a network input. As for the actual code between the presenter and the simulation, I think we have multiple options. One is to re-use the gamestate used in the simulation, so we do not have to duplicate the code, and just change the mode of operation for the containing types; the second would be to create an optimized subset containing only the information that is relevant for the user interface rendering — but here it depends mostly on what the presenter needs to perform best.
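The action-queue idea above could be sketched like this: inputs are queued with a target simulation time, and the simulation drains everything due at or before its current "now". All names here are hypothetical, and the origin of an action (keyboard or network) is deliberately invisible to the consumer:

```cpp
#include <deque>
#include <string>
#include <utility>
#include <vector>

// An action scheduled for a point in simulation time. Whether it came
// from a keyboard or the network does not matter to the game logic.
struct Action {
	double time;
	std::string what; // e.g. "move_unit"; illustrative payload
};

struct ActionQueue {
	std::deque<Action> pending; // assumed to be filled in time order

	void push(Action a) {
		this->pending.push_back(std::move(a));
	}

	// Called from the simulation loop: pick up every action whose time
	// has come and hand it over for execution as ordinary game events.
	std::vector<Action> drain_until(double now) {
		std::vector<Action> due;
		while (!this->pending.empty() && this->pending.front().time <= now) {
			due.push_back(std::move(this->pending.front()));
			this->pending.pop_front();
		}
		return due;
	}
};
```

In a real engine the queue would need to be thread-safe (the presenter pushes, the simulation drains), which the channel sketch further down addresses.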
The simplest solution with respect to synchronisation issues (as in, this one has almost none) is to have a concurrent (thread-safe) channel on which the simulation sends keyframe/curve updates to the presenter, with the presenter keeping a complete replica of the simulation state. This does duplicate state in a memory-wasting manner, but the state is simple enough for this not to be an issue in practice. This version is also the one I would favour.
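A minimal sketch of such a thread-safe channel, assuming a generic queue guarded by a mutex rather than any actual openage type. The simulation thread calls `send`, and the presenter polls `try_recv` once per frame to update its replica:

```cpp
#include <condition_variable>
#include <mutex>
#include <optional>
#include <queue>
#include <utility>

// Generic thread-safe channel sketch: the simulation sends keyframe/curve
// updates, the presenter receives them and applies them to its replica.
template <typename T>
class Channel {
	std::queue<T> queue;
	std::mutex mutex;
	std::condition_variable cv;

public:
	void send(T msg) {
		{
			std::lock_guard<std::mutex> lock(this->mutex);
			this->queue.push(std::move(msg));
		}
		this->cv.notify_one();
	}

	// Non-blocking receive, suitable for polling once per rendered frame.
	std::optional<T> try_recv() {
		std::lock_guard<std::mutex> lock(this->mutex);
		if (this->queue.empty()) {
			return std::nullopt;
		}
		T msg = std::move(this->queue.front());
		this->queue.pop();
		return msg;
	}
};
```

A blocking `recv` waiting on the condition variable would also be straightforward, but polling fits a render loop better.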
The curve in the renderer can be "cleaned up" much earlier, that is, unnecessary keyframes from the past can be removed quickly. And adding new keyframes can be done pretty efficiently as well, by just appending the few new ones.
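The cleanup step could look roughly like this: since interpolation only needs the keyframe at or before "now" plus everything after it, older keyframes can be erased cheaply. This is a hypothetical helper over a plain keyframe map, not openage's actual curve code:

```cpp
#include <iterator>
#include <map>

// Drop keyframes strictly older than the last keyframe at or before `now`.
// That last keyframe must be kept, since interpolation at `now` needs it.
void trim_before(std::map<double, double> &keyframes, double now) {
	auto it = keyframes.upper_bound(now);
	if (it == keyframes.begin()) {
		return; // nothing lies before `now`
	}
	// erase everything up to (but not including) the keyframe before `now`
	keyframes.erase(keyframes.begin(), std::prev(it));
}
```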
As part of both the renderer rewrite (#850) and the architectural rework of the engine, we need to decouple subsystems. Without this, proper support for a client/server split and job-based parallelism with communication through channels will be practically impossible. I'm working on the graphics side of things, so I'd like to work on decoupling the GUI first.
To achieve this design goal, we unfortunately have to get rid of all the `GuiLink`s as they stand, as well as any other links to non-GUI engine components (e.g. `EngineQMLInfo`). From a very high-level viewpoint, the initial idea of how we could replace this is a callback system, where all communication proceeds solely through the `Gui` class, with an API of something like: …

Obviously, we will need more than just the above in practice, but the general idea is that the GUI is an event-based component like any other. By itself it doesn't do anything. The user registers functions that they want called on GUI state changes. To give an example, currently changing properties of a game involves a `GameControlLink` that directly manipulates `Engine` members. This is unfortunate, because it will result in completely unpredictable behaviour in a concurrent execution context. Instead, what we would have is an `on_game_property_changed` callback which is called from `process_events` whenever the GUI detects that one of the enqueued events is a mouse click on something that triggers a game property change. It is up to the user to ensure thread safety for that callback, and what we would in fact do is either make that callback send an event on the `WorldUpdater` concurrent input channel or directly modify some property of the `PresentationState`, which is local to the `Presenter` class — the user of the GUI.

I'd be happy to hear thoughts on this, especially from @ChipmunkV, who I understand is the principal author of the GUI.