
Schematics - An abstraction layer for stable scene representation and exposing ECS configuration in the editor #3877

Open · SamPruden opened this issue Feb 6, 2022 · 94 comments
Labels: A-Editor (Graphical tools to make Bevy games), A-Scenes (Serialized ECS data stored on the disk), C-Feature (A new feature, making something new possible), S-Needs-Design-Doc (This issue or PR is particularly complex, and needs an approved design doc before it can be merged)

Comments

@SamPruden

SamPruden commented Feb 6, 2022

Problem

Scene representation and the ECS runtime have different and conflicting data representation requirements.

ECS data should be:

  • Optimised for performance and engineering concerns
  • Generally split into the smallest components that may be used independently
  • Implementation of desired behaviour
  • An implementation detail that can be refactored during development

Scene representation data should be:

  • Convenient and intuitive for designers working in the editor
  • Grouped into logical units that are likely quite large, e.g. mesh setup or rigidbody setup
  • Declaration of designer intent, not of technical implementation
  • A stable data format, so that refactorings to the ECS data don't invalidate scenes composed in the editor
  • Where practical, stable enough to transfer composed assets between projects

Proposed Solution

This isn't a detailed technical proposal as I'm not yet experienced enough with Bevy's internals to go there, but I'd like to propose that Bevy's architecture adopt this distinction. Inspiration and proof of concept are provided by Unity's DOTS, although Bevy can do better.

Schematics

These were called Authors in the first version of this proposal.

I propose a new first class feature which I will call "schematics", with the following properties:

  • In the editor, objects are composed from schematics
  • Represents a high level, convenient logical grouping of information, such as Rigidbody or RenderedMesh
  • Schematics are the basis of the serialisation format for scenes and assets
  • In the editor, each schematic is displayed as a panel in the object inspector
    • The editor can provide default UI here based on the schematic's fields
    • Optionally, custom UI can be provided instead
  • A schematic is a declarative description of the functionality being added, and dynamically sets up the ECS data to achieve that goal. E.g. a Receive Shadows checkbox on the Mesh schematic may conditionally add a ShadowReceiver marker component
  • Arbitrary logic converts schematics into arbitrary representations in the ECS - typically one-to-many components, but potentially additional entities too - e.g. a Rigidbody schematic may create entities representing joints
  • Engineers should think of schematics as the stable public interface to the functionality implemented in ECS. ECS data layout may be changed during development, but as long as the schematic remains stable, this won't break compatibility with assets that have already been composed.
  • Conversion from schematics to ECS happens live in the editor - schematic and ECS scenes are kept in sync
  • Can also be used directly from code - if you're composing an entity in code, you may sometimes choose to do this using schematics for interface stability

I can e.g. rewrite my physics engine to use completely different components, but the Rigidbody schematic will remain untouched, and scenes that feature it will continue to work seamlessly! I think this is very important for allowing designers and engineers to iterate on development side-by-side. If these data representations are tied too closely together, changes will be blocked by breaking compatibility with composed assets. More trivially, I can change some component field from f32 to f16.
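To make that concrete, here's a minimal sketch of how this might look. The Schematic trait, the RigidbodySchematic struct, and the Mass/Kinematic components are hypothetical placeholders, not existing Bevy API - the point is only that the serialised struct stays stable while the body of apply is free to change.

use bevy::ecs::system::EntityCommands;
use bevy::prelude::*;

// Hypothetical trait: a schematic is designer-facing data plus a conversion step.
trait Schematic {
    fn apply(&self, entity: &mut EntityCommands);
}

// Stable, serialised, designer-facing data.
#[derive(Clone, serde::Serialize, serde::Deserialize)]
struct RigidbodySchematic {
    mass: f32,
    kinematic: bool,
}

// Implementation details that the physics code is free to refactor.
#[derive(Component)]
struct Mass(f32);

#[derive(Component)]
struct Kinematic;

impl Schematic for RigidbodySchematic {
    fn apply(&self, entity: &mut EntityCommands) {
        // Only this body needs to change when the ECS layout changes;
        // serialised RigidbodySchematic data in scenes stays valid.
        entity.insert(Mass(self.mass));
        if self.kinematic {
            entity.insert(Kinematic);
        }
    }
}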

Unity provides inspiration here. They have a Data Oriented Technology Stack (Unity DOTS) in development which builds ECS into the engine. They have a similar separation between editor data and ECS data, with a live conversion and synchronisation mechanism (Unity LiveLink). They use their classic GameObjects and MonoBehaviours as schematics to convert into ECS data and it's all rather messy and clumsy, but it validates the core idea of having this abstraction layer. Bevy has the opportunity to do this right, instead of building on legacy jank.

Joachim Ante, Unity's CTO, said on the forum:

The concept of runtime data and editing data being the same is simply a bad idea. It took me 14 years to figure that out (Sorry it took so long...)

He's a voice of experience to listen to, and a reason to take this idea seriously.

Open questions

Two way synchronisation

In the editor, can a change to an entity reflect back to the schematic?

  • We probably want this - e.g. if we can play the game live in the editor, we probably want the Transform widget, backed by a schematic, to update when it moves
  • This sounds tricky to do - lots of bookkeeping and conversion logic - but Unity does it successfully, so it's possible
  • Should probably be an optional opt-in for specific fields on specific schematics

Should schematics be dev only, or shipped in production?

One option would be to have schematics be used only in the editor, and compiled into a more compact binary format for production. This would likely be optimal for loading times, as in the best case data can be loaded into the ECS via a straight memory copy. Unity does this. A disadvantage is that if we change the ECS layout, we need to recompile and reship that asset data. There's a middle ground, where assets are shipped in schematic format, but the compiled form is cached on disk for fast loading.

This could even be taken to the extreme, and used to automatically implement game saving in a way that's reasonably robust across updates which may change data layouts. It would require full serialisation of ECS data back into schematics. I have a hunch that this would end up being impractical and cumbersome, but it's worth thinking about. This is probably a bad idea.

Schematics as crates?

In a healthy ecosystem of Bevy assets, would it make sense to have schematic-only crates? They could provide a standard interface to certain units of functionality that could be implemented independently by other crates. For example, perhaps a standardised Rigidbody schematic could be used that would allow physics engines to be swapped out, each interpreting the schematic data as they see fit.

Pros

  • Allows the ECS data and scene representations to each be designed optimally without conflict or compromise
  • Allows engineers and designers to work simultaneously without "merge conflicts" and breaking changes
  • Separates implementation from interface - usually good practice on sizable projects
  • Solves the stable asset serialisation problem
  • Solves the problem of organising and presenting configuration neatly in the editor - e.g. Marker component discoverability #3833

Cons

  • An extra abstraction layer might be seen as boilerplate
  • Implementation of the live conversion in the editor may get complex - there's some amount of performance overhead
  • Extra complexity in the engine

If this is better served by a discussion than an issue, then feel free to convert it.

@SamPruden added the C-Feature and S-Needs-Triage labels on Feb 6, 2022
@alice-i-cecile added the A-Editor, A-Scenes, and S-Needs-Design-Doc labels and removed the S-Needs-Triage label on Feb 6, 2022
@alice-i-cecile
Member

This is a great write-up, and I like keeping it as an issue. Eventually, this will need to be an RFC, but I think we're too far out for that to be feasible: there's still a lot of related work that needs to be fleshed out.

@alice-i-cecile
Member

alice-i-cecile commented Feb 6, 2022

I propose a new first class feature which I will call "authors", with the following properties:

IMO this badly needs a new name: it will deeply confuse users, as "authors" in the sense of "users, particularly on the art side" is a natural interpretation of that term.

So, what to call them?

  • class: Maybe? It has OOP connotations, but these are very OOP-flavored. They don't inherit and can be composed.
  • bundle: No. The whole point is that this representation is distinct from the raw engine representation.
  • prototype: No. There's no default behavior associated.
  • module: No, has an existing meaning. Right sort of flavor though: these are composable units of functionality.
  • prefab: No. These are sort of the right flavor, but there's no default behavior associated.
  • template: Pretty good! These lay out the structure that should be followed.
  • blueprint: Very good, but Unity uses this term for their visual scripting tool.
  • model: No. I love it, but it's overloaded with "3D model".
  • pattern: Maybe? Like template, but very abstract.
  • schematic: Love it! Captures the idea, Googleable, not overloaded.

@alice-i-cecile
Member

As for the idea itself: I like this quite a bit. Certainly, I like it much, much more than overloading bundles to incorporate these use cases.

I think it does a good job solving the stable serialization problem, and decoupling the "designer / artist representation" from the "programmer representation". There will be some serious technical and UX hurdles to overcome, but I think they're worth it.

@SamPruden
Author

SamPruden commented Feb 6, 2022

Agreed on the need for a name change! I stole that from Unity, which calls theirs "Authoring components", but that's clumsy.

I think definitely not "prefab". If Bevy has prefabs, they would be composed of objects, which are in turn composed of authors - or whatever we call them.

Blueprint is actually Unreal not Unity, not that that's relevant. :)

I think I like "schematic"!

Another option might be "widget", or "editor". If there's a one-to-one correspondence between "schematics" and editor widgets, maybe they're just the same thing with one name.

I've updated the proposal to use "schematic" instead of "author". It's a simple find/replace, so hopefully I didn't break anything.

@SamPruden
Author

I should mention the existence of the Unity patent. I haven't read it and I'm not qualified to interpret it, but I understand that it gets close to this area and touches on their GameObject to ECS conversion system. Hopefully it's not a blocker. It would be rather evil if it were, as I think this is a fairly straightforward and obvious design direction that falls out of basic engineering principles.

@SamPruden
Author

If for some reason one wants to be lazy and not deal with the boilerplate of schematics, a simple component could perhaps be its own schematic by simply throwing a #[derive(Schematic)] on it. It would then show up in the editor, and conversion would be done by .clone() or something.

This would forgo most of the benefits and generally be against Best Practice, but maybe people would want this to lower friction during rapid prototyping. It might be a good idea to make sure this works, even if we maybe lint against it.
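For illustration, a rough sketch of what this trivial path might look like, reusing the hypothetical Schematic trait from the proposal sketch - the derive would just generate an identity conversion:

use bevy::ecs::system::EntityCommands;
use bevy::prelude::*;

// Same hypothetical trait as in the proposal sketches above.
trait Schematic {
    fn apply(&self, entity: &mut EntityCommands);
}

// A plain component acting as its own schematic.
#[derive(Component, Clone, serde::Serialize, serde::Deserialize)]
struct Health {
    current: f32,
    max: f32,
}

// Roughly what #[derive(Schematic)] could expand to: clone straight into the ECS.
impl Schematic for Health {
    fn apply(&self, entity: &mut EntityCommands) {
        entity.insert(self.clone());
    }
}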

@alice-i-cecile
Member

alice-i-cecile commented Feb 6, 2022

If for some reason one wants to be lazy and not deal with the boilerplate of schematics, a simple component could perhaps be its own schematic by simply throwing a #[derive(Schematic)] on it. It would then show up in the editor, and conversion would be done by .clone() or something.

This would forgo most of the benefits and generally be against Best Practice, but maybe people would want this to lower friction during rapid prototyping. It might be a good idea to make sure this works, even if we maybe lint against it.

Doing bad things should be hard. They can manually impl Schematic if they really want. We may want a helper method between Schematic and Bundle though?

@sixfold-origami
Contributor

In general, I like this a lot!

Conversion from schematics to ECS happens live in the editor - schematic and ECS scenes are kept in sync

I might be misinterpreting a typo here, but IMO this should happen in the code, not the editor (although the editor should provide a visual for what the schematic "breaks down" into.)

Can also be used directly from code - if you're composing an entity in code, you may sometimes choose to do this using schematics for interface stability

This is arguably an implementation detail, but IMO this shouldn't be enforced. Not all schematics will make sense to use as pseudo-bundles, especially if they contain entities. I think the better design here is to allow and encourage that schematics impl the Bundle trait. This way they can easily hook into the existing Bundle functionality without adding additional complexity to the user.

Re: two-way synchronization

Yes, this should probably be borne out in the editor, at least visually. This means we need a bijective mapping between schematics and their internals. This is likely more work, but probably worth it.

Re: schematics being dev-only

This is probably a bad idea.

Yeah. Unfortunately, I think I'm with you on this one, even though it does sound really cool. Regardless, I think this can be considered separately, probably in its own RFC down the line. (In the meanwhile, I think we can/should act under the assumption that schematics are dev only.)

Re: schematics as crates

I like this idea a lot. It hits very similar notes to, for example, the log crate, which exists purely to provide a logging facade. My only hesitation is that different teams/internal architectures may want different schematic divisions, but I think we should try to support this regardless.

@IceSentry
Contributor

This idea sounds interesting, but I don't personally know of any popular game engine that actually exposes an editor that lets you play with the ECS directly. I feel like this is an unexplored space and we might be missing some cool concepts if we don't at least try to have an ECS-first editor.

I get the point that Unity isn't great because it merges runtime and editing-time data, but I feel the issue is also based on the fact that Unity doesn't really have a data-first model, at least not currently. It's still a good concern to have, and making it easier for artists to use is a good idea. I just don't want to do this at the expense of tooling for people who want to work with the ECS more directly.

@SamPruden
Author

Conversion from schematics to ECS happens live in the editor - schematic and ECS scenes are kept in sync

I might be misinterpreting a typo here, but IMO this should happen in the code, not the editor (although the editor should provide a visual for what the schematic "breaks down" into.)

Assuming that the editor provides a live preview of the game, that preview will be powered by the ECS. This would basically be the actual game, but not running the gameplay systems.

When we change data on the schematics, this needs to update in realtime in that preview. There needs to be a live ECS world converted from the schematic world. The editor would be responsible for doing incremental updates to keep this in sync by detecting changes to the schematics and reinvoking the conversion for only those relevant entities.

This is roughly how Unity does it.

Can also be used directly from code - if you're composing an entity in code, you may sometimes choose to do this using schematics for interface stability

This is arguably an implementation detail, but IMO this shouldn't be enforced. Not all schematics will make sense to use as pseudo-bundles, especially if they contain entities. I think the better design here is to allow and encourage that schematics impl the Bundle trait. This way they can easily hook into the existing Bundle functionality without adding additional complexity to the user.

I see these as having different purposes. A bundle represents a specific collection of ECS components and is tied to the ECS layout. A schematic is more like a declarative factory, where the same schematic may later map to different components after a refactoring. I'm excited about Bevy but have very little experience with it, so I'm not confident in my understanding of the role of bundles and can easily be persuaded on this point.

Re: two-way synchronization

Yes, this should probably be borne out in the editor, at least visually. This means we need a bijective mapping between schematics and their internals. This is likely more work, but probably worth it.

Agreed, although I think that full bijection is probably impossible because it's not guaranteed to be one-to-one, so fields would need to be either bijected, or enter a desynchronised state.

Re: schematics as crates

I like this idea a lot. It hits very similar notes to, for example, the log crate, which exists purely to provide a logging facade. My only hesitation is that different teams/internal architectures may want different schematic divisions, but I think we should try to support this regardless.

The open question here is how many different implementations would be comfortable working from the same stable schematic interface, or whether they would all have slightly different requirements that make this impractical.

@SamPruden
Author

This idea sounds interesting, but I don't personally know of any popular game engine that actually exposes an editor that lets you play with the ECS directly. I feel like this is an unexplored space and we might be missing some cool concepts if we don't at least try to have an ECS-first editor.

My take on this would be that a good editor design would still expose the ECS in a mostly readonly form. As we edit schematics, we get a live visualisation of what they're being converted into, as well as visualisations of which queries they feature in and all of that. ECS data could even be edited directly in the editor for debugging purposes.

Any well designed schematic will give intuitive and complete control over the ECS data anyway.

My intent is not to hide the ECS, just to provide a safe, mediated, and well designed way of interacting with it.

@james7132
Member

Having dealt with this in Unity's ECS implementation, I would like to say that this introduces quite a few interesting and unintuitive edge cases. Trying to explain to a game designer that two distinct authoring components X and Y cannot be used together because they share an underlying component is horrible UX. For programmers, this is sort of workable since the current Bundles have this issue, but the code and bundle definitions are directly visible to them from the API.

@SamPruden
Author

Having dealt with this in Unity's ECS implementation, I would like to say that this introduces quite a few interesting and unintuitive edge cases. Trying to explain to a game designer that two distinct authoring components X and Y cannot be used together because they share an underlying component is horrible UX. For programmers, this is sort of workable since the current Bundles have this issue, but the code and bundle definitions are directly visible to them from the API.

Agreed. Unity's current version of this is clunky, but I think those are solvable problems. The editor should be able to detect things like conflicts and give clear explanations.

Probably the best that can be done here is to establish best practices for schematic design that don't lead to conflicts.

There can be relations like a schematic requiring a component to be present without setting its value, or setting a marker component in ways that automatically resolve conflicts. This is a little tricky and will require some design work, but I'd say it's doable.

Can you give some examples of times when you've had multiple Unity author components that conflict like that?

@sixfold-origami
Contributor

sixfold-origami commented Feb 8, 2022

Conversion from schematics to ECS happens live in the editor - schematic and ECS scenes are kept in sync

I might be misinterpreting a typo here, but IMO this should happen in the code, not the editor (although the editor should provide a visual for what the schematic "breaks down" into.)

Assuming that the editor provides a live preview of the game, that preview will be powered by the ECS. This would basically be the actual game, but not running the gameplay systems.

When we change data on the schematics, this needs to update in realtime in that preview. There needs to be a live ECS world converted from the schematic world. The editor would be responsible for doing incremental updates to keep this in sync by detecting changes to the schematics and reinvoking the conversion for only those relevant entities.

This is roughly how Unity does it.

Ah, that makes sense. I thought you were saying that the mapping layer would be defined by the editor. This is a lot more clear to me now. Thanks!

Can also be used directly from code - if you're composing an entity in code, you may sometimes choose to do this using schematics for interface stability

This is arguably an implementation detail, but IMO this shouldn't be enforced. Not all schematics will make sense to use as pseudo-bundles, especially if they contain entities. I think the better design here is to allow and encourage that schematics impl the Bundle trait. This way they can easily hook into the existing Bundle functionality without adding additional complexity to the user.

I see these as having different purposes. A bundle represents a specific collection of ECS components and is tied to the ECS layout. A schematic is more like a declarative factory, where the same schematic may later map to different components after a refactoring. I'm excited about Bevy but have very little experience with it, so I'm not confident in my understanding of the role of bundles and can easily be persuaded on this point.

Bundles are mostly used to add a group of components together to an entity. They're often used when multiple components are required together, or when a plugin wants to put many components on certain entities (AFAIK at least). The idea of the schematic being a factory is interesting. Do you imagine that, when used in this way, multiple schematics would be composed or combined together? If not, then I can see the argument that they should be used as primitives. If, however, they are to be composed with other components, then I think hooking them into the Bundle trait provides a cleaner API. This distinction might be best left for the RFC though.

I think the refactoring point is moot here- if, during a refactor, the schematic mapping changes, then the Bundle implementation would also need to change in turn. This is more work, but I think that tradeoff is worth it.

Re: two-way synchronization
Yes, this should probably be borne out in the editor, at least visually. This means we need a bijective mapping between schematics and their internals. This is likely more work, but probably worth it.

Agreed, although I think that full bijection is probably impossible because it's not guaranteed to be one-to-one, so fields would need to be either bijected, or enter a desynchronised state.

Hm. Could you give an example where the fields would become desynchronized?

Re: schematics as crates
I like this idea a lot. It hits very similar notes to, for example, the log crate, which exists purely to provide a logging facade. My only hesitation is that different teams/internal architectures may want different schematic divisions, but I think we should try to support this regardless.

The open question here is how many different implementations would be comfortable working from the same stable schematic interface, or whether they would all have slightly different requirements that make this impractical.

This is an excellent point. I'm not entirely sure what the right answer is. I think it's entirely reasonable that, in some cases, the backing implementation of a schematic is a black box. For example, using different collision detection engines for realtime games vs scientific simulation. The main point of divergence at the schematic level would come from, I think, configuration parameters. If one physics library has additional options/features, then those would need to be exposed with either a different schematic, or a second schematic working in tandem.

Unfortunately, I don't know enough about different implementation strategies to confidently say how common either of those cases will be.

@SamPruden
Author

Bundles are mostly used to add a group of components together to an entity. They're often used when multiple components are required together, or when a plugin wants to put many components on certain entities (AFAIK at least). The idea of the schematic being a factory is interesting. Do you imagine that, when used in this way, multiple schematics would be composed or combined together? If not, then I can see the argument that they should be used as primitives. If, however, they are to be composed with other components, then I think hooking them into the Bundle trait provides a cleaner API. This distinction might be best left for the RFC though.

I think the refactoring point is moot here- if, during a refactor, the schematic mapping changes, then the Bundle implementation would also need to change in turn. This is more work, but I think that tradeoff is worth it.

Perhaps it makes sense to think of schematics like parameterised bundles. That isn't a perfect analogue, because schematics can also add additional entities, which bundles can't do.

In the editor, we would add a Mesh Renderer schematic to associate a mesh with an entity and set up all of the appropriate components in the ECS. Here's Unity's Mesh Renderer component, as an approximate example of what that might look like.

[screenshot: Unity's Mesh Renderer inspector panel]

In the ECS, this maps to different components depending on what settings are used. For example, the receive shadows bool on the schematic probably conditionally adds a ShadowReceiver marker component. When I'm setting up this schematic in the editor, I don't have to know or care about that marker component, I just care about that bool/checkbox. Maybe during development it's decided that there should be a NotShadowReceiver marker component instead, and the schematic conversion logic can be updated for this, but the schematic's data doesn't have to change, and remains compatible with work already done in the editor.

A schematic takes in some declarative description of the functionality being added, and dynamically sets up the ECS data to achieve that goal, for example conditionally adding marker components according to a bool value.

People could simply have a fn setup_mesh(mesh: Handle<Mesh>, [...], receive_shadows: bool) that does this in their code, but that would be duplicating functionality from the schematic, so we might as well allow the schematic to be used from code in this same way.
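As a sketch of that checkbox-to-marker mapping (hypothetical names throughout, reusing the placeholder Schematic trait from earlier; the mesh and material fields are elided to keep it short):

use bevy::ecs::system::EntityCommands;
use bevy::prelude::*;

trait Schematic {
    fn apply(&self, entity: &mut EntityCommands);
}

// Marker component owned by the rendering implementation.
#[derive(Component)]
struct ShadowReceiver;

// Designer-facing data: just a checkbox, no knowledge of marker components.
#[derive(Clone, serde::Serialize, serde::Deserialize)]
struct MeshRendererSchematic {
    receive_shadows: bool,
}

impl Schematic for MeshRendererSchematic {
    fn apply(&self, entity: &mut EntityCommands) {
        // If the engine later switches to a NotShadowReceiver convention,
        // only this body changes; serialised schematic data stays valid.
        if self.receive_shadows {
            entity.insert(ShadowReceiver);
        }
    }
}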

Hm. Could you give an example where the fields would become desynchronized?

Well, schematics can be converted into ECS data by arbitrary conversion logic, so that process will only be bijective if the user writes a bijective conversion. We can't guarantee that, so we need to have a fallback.

I think most conversions would be bijective, but anything that does a many-to-one mapping wouldn't be. Maybe a schematic is set up to allow a high/medium/low enum config, but in the end medium and low get mapped to the exact same ECS data. Now it's impossible to reconstruct the schematic from the ECS.

It's even possible that one of the components added by a schematic gets dynamically removed during gameplay. In this case, there's nothing to sync back, but the editor still has to do something.

There's probably some clever design work to do here, but I think that the obvious and general approach is to just sync back when we can and when it's helpful, and gracefully fail when we can't.

This is an excellent point. I'm not entirely sure what the right answer is. I think it's entirely reasonable that, in some cases, the backing implementation of a schematic is a black box. For example, using different collision detection engines for realtime games vs scientific simulation. The main point of divergence at the schematic level would come from, I think, configuration parameters. If one physics library has additional options/features, then those would need to be exposed with either a different schematic, or a second schematic working in tandem.

Unfortunately, I don't know enough about different implementation strategies to confidently say how common either of those cases will be.

Agreed. I think supporting schematic crates should be fairly trivial - it's just a struct - but whether they will prove valuable is an open question.

@therocode

Just a thought as I'm reading this very promising discussion: If schematics are about a stable API for scenes/designers, maybe the schematic data format and system could automatically include version awareness?

If I do need to break compatibility in the API of a certain schematic, maybe that should automatically lead to a version bump for that schematic so that the editor/code can detect when an older schematic is loaded. This could lead to an error instead of undefined behaviour, and perhaps a mechanism for writing migrations on a per-schematic basis could be provided.

Maybe I'm getting ahead of myself here but these situations will definitely arise so maybe worth thinking about.
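To sketch what I mean (everything here is hypothetical, and serde_json is used purely for illustration):

use serde::{Deserialize, Serialize};

// Serialized envelope: the data always records the version it was written with.
#[derive(Serialize, Deserialize)]
struct VersionedSchematic<T> {
    version: u32,
    data: T,
}

#[derive(Serialize, Deserialize)]
struct RigidbodySchematic {
    mass: f32,
    kinematic: bool,
}

impl RigidbodySchematic {
    const VERSION: u32 = 2;

    // Upgrade older serialized data, or produce an error the editor can surface
    // instead of silently misloading the scene.
    fn migrate(version: u32, raw: serde_json::Value) -> Result<Self, String> {
        match version {
            2 => serde_json::from_value(raw).map_err(|e| e.to_string()),
            // Version 1 had no kinematic flag; default it to false.
            1 => {
                let mass = raw.get("mass").and_then(|m| m.as_f64()).ok_or("missing mass")? as f32;
                Ok(Self { mass, kinematic: false })
            }
            other => Err(format!("unknown RigidbodySchematic version {other}")),
        }
    }
}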

@alice-i-cecile
Member

If schematics are about a stable API for scenes/designers, maybe the schematic data format and system could automatically include version awareness?

I like this a lot. IMO embedding a semver'd version number is a critical part of the design here.

@SamPruden
Author

Agreed on the version point, although relying on devs to semver each schematic might be a little brittle. This could be at least somewhat automated by also tracking descriptions of the structs, but I haven't put much thought into that beyond assuming that it's a good idea.

Although the aim is to have schematics be as stable as possible, we should also make them as robust to changes as possible. For example, if a field were removed from a schematic, that probably shouldn't break assets built with it. That data can simply be ignored. When there's an incompatibility that can't be resolved automatically, we should make sure that we can give good errors. The serialisation format needs to have enough information to do that.

Another area where automated compatibility checking is key would be cached conversion. If we assume that we would like to avoid running the full conversion process every time an asset is loaded, then we would want to cache it into a compact representation for fast loading into the ECS. That cache would need to be automatically invalidated when the conversion logic changed. Detecting that automatically sounds... maybe hard.

Serialisation should probably be in a fairly simple plaintext format rather than any type of packed binary situation. Firstly because being less sensitive to precise layout is a good thing, and secondly because it plays much more nicely with version control.

I'm getting way ahead of myself here, but I can even imagine a future where the editor is version control aware, encourages and helps designers to use a version control workflow, and can provide a nice interface over the git history of a level design. Like IDE features, but for level editing. I don't know if this is a good idea, but it sounds cool in my head. I think compatibility with this possibility would be good to keep in mind when designing schematics.

I realise this is suddenly becoming a very large feature that touches all areas of editor design. Oops.

@alice-i-cecile
Member

@SamPruden Can you update the title of this issue to include "Schematics" so it's easier to find / more clear what it's referring to? :)

@SamPruden changed the title from "An abstraction layer for stable scene representation and exposing ECS configuration in the editor" to "Schematics - An abstraction layer for stable scene representation and exposing ECS configuration in the editor" on Feb 17, 2022
@SamPruden
Author

@SamPruden Can you update the title of this issue to include "Schematics" so it's easier to find / more clear what it's referring to? :)

Done. That good?

@AlphaModder

I was thinking about how we might do schematic conversion and deconversion API-wise, and had to write down this snippet to get it out of my mind:

Code snippet

use std::marker::PhantomData;

use bevy::ecs::world::{EntityMut, EntityRef};
use bevy::prelude::{Bundle, Component, Entity, World};

pub trait Schematic {
    // Instantiate the schematic's entities, and optionally return a closure
    // that can later update the schematic's settings from those entities.
    fn build<'b, 'e>(
        &'b self,
        builder: EntityBuilder<'b, 'e>,
    ) -> Option<Box<dyn for<'u> Fn(&'u mut Self, SchematicEntities<'u, 'e>) + 'e>>;
}

// Invariant lifetime marker, used to stop SchematicEntity ids being smuggled out.
type Invariant<'e> = PhantomData<&'e mut &'e ()>;

#[derive(Copy, Clone)]
pub struct SchematicEntity<'e>(Entity, Invariant<'e>);

pub struct EntityBuilder<'b, 'e>(EntityMut<'b>, Invariant<'e>);

impl<'b, 'e> EntityBuilder<'b, 'e> {
    fn insert<T: Component>(&mut self, value: T) -> &mut Self {
        self.0.insert(value);
        self
    }

    fn insert_bundle<T: Bundle>(&mut self, value: T) -> &mut Self {
        self.0.insert_bundle(value);
        self
    }

    // ... etc.

    fn id(&self) -> SchematicEntity<'e> {
        SchematicEntity(self.0.id(), PhantomData)
    }

    fn secondary_entity<'a>(&'a mut self) -> EntityBuilder<'a, 'e> {
        // hand-waving: spawning really needs mutable access back to the World
        EntityBuilder(self.0.world().spawn(), PhantomData)
    }
}

pub struct SchematicEntities<'u, 'e> {
    primary_id: SchematicEntity<'e>, // for convenience, not strictly necessary
    world: &'u World,
}

// can't expose EntityRef directly since it has `EntityRef::world` (and, to a lesser extent, `EntityRef::get_mut_unchecked`)
pub struct SchematicEntityRef<'a>(EntityRef<'a>);

impl<'a> SchematicEntityRef<'a> {
    fn get<T: Component>(&self) -> Option<&'a T> {
        // uh, is this sound? i copied it from EntityRef...
        self.0.get::<T>()
    }

    // etc...
}

impl<'u, 'e> SchematicEntities<'u, 'e> {
    fn primary(&self) -> Option<SchematicEntityRef<'u>> {
        self.secondary(self.primary_id)
    }

    fn secondary(&self, entity: SchematicEntity<'e>) -> Option<SchematicEntityRef<'u>> {
        self.world.get_entity(entity.0).map(SchematicEntityRef)
    }
}

The basic idea here is that calling build on a schematic allows it to instantiate some number of entities, and it can optionally return a function used to update its settings from those entities later. The main trick is using the invariant/generative lifetime 'e to ensure statically that the returned update closure can only read entities that belong to it. If the lifetimes are seen as too confusing, this could be made a runtime check (or simply ignored), but I wanted to demonstrate that it is possible. To be clear, the parameter 'e only exists to prevent smuggling SchematicEntitys outside of the function or the closure it returns. Its actual value when calling Schematic::build is irrelevant, and internally we would just pass 'static so we can actually store the closure returned.

One extension that could be made to this API is for the schematic to be able to reserve entity ids ahead of time (i.e. stored in its settings) and spawn secondary entities with those IDs. That way, one schematic could create components which reference the not-yet-created entities of another schematic by ID if desired. Such AOT IDs would just be Entity (or maybe another type convertible to EntityId with an EntityBuilder?), but importantly not SchematicEntity, since they do not belong to schematics they were not created by.

@SamPruden
Author

I don't think anything like that is in scope for what Schematics are intended to be. Their purpose is to be an abstraction of Bevy's ECS, not to be a universal format. They're a Bevy feature for solving Bevy problems.

The purpose of the schematics as crates question was aimed at the standard engineering thing of separating interface from implementation, e.g. extracting a common schematic for Rigidbody independent from the physics engine being used. Whether that actually makes sense to do is just a question of whether multiple physics engines have enough in common that they would want to share a schematic.

I don't think this question actually needs to be answered when designing schematics as a feature. If schematics are just structs, it should be possible to ship them as crates. It's up to the community whether they want to do that.

If hypothetically some standard interchange format for assets were implemented someday, it's possible that would be implemented as an abstraction over schematics, i.e. the format would be loaded by decoding into schematics. But that's not the concern of schematics themselves.

@jncornett

I don't think anything like that is in scope for what Schematics are intended to be. Their purpose is to be an abstraction of Bevy's ECS, not to be a universal format. They're a Bevy feature for solving Bevy problems.

My mistake! I commented after my first pass of skimming over the issue and realized that the scope was different than my understanding after I commented.

@SamPruden
Author

Actually there is one design question that arises from the schematics as crates idea: where does the schematic interpretation logic go?

The most straightforward idea is that schematics are responsible for decoding themselves into ECS. However, if we want to support a shared schematic with multiple backends, maybe those backends want to be responsible for their own interpretation of the schematic. There's probably a reasonable trait-based solution here; I don't think it's a problem.

@SamPruden
Author

SamPruden commented Apr 29, 2022

Having spent some time away from this, I have some fresh thoughts and an updated proposal.

I believe I have the outline of a design that achieves the following. Please tell me all of the ways in which I'm wrong.

  • "Editor objects" (which commonly map 1:1 to entities) are composed from schematics and interpreted into ECS data
  • Conflicts are handled in a reasonable manner
  • Schematics provide the inspector UI panels for editing objects
  • This same UI can be used for viewing and editing the properties of entities while the game is playing live in the editor - even when the entities are dynamically created by gameplay code, without using schematics
  • Arbitrary animations can be piped through the schematics in the editor
  • Could theoretically be used to build more complex prefab systems
  • Hopefully not a boilerplate nightmare...

A schematic is an arbitrary serializable Rust type. An attribute is used to expose it to the editor at particular locations in "Add schematic" menus etc.

Trivial identity function schematics can be implemented for a component by a derive macro, and upgraded to a manual implementation later if and when required.

A schematic optionally implements the EcsInterpretableSchematic trait (which needs a better name). This trait provides an interpret function, in which a declarative, constraint-based API is used to specify the entities and components the schematic can be interpreted into. Constraints would include the basic Exact component value constraint, but also constraints like Component exists or default and Component exists or default matching predicate. During interpretation of an object, constraints from all schematics are gathered first, then checked for compatibility with each other, and the whole object is converted into ECS as a transaction if the process succeeds; otherwise it fails.

A schematic optionally implements the ObjectInterpretableSchematic trait (which needs a better name). This trait provides an interpret function, in which the schematic can produce sub-objects, which are in turn built from schematics. This allows a schematic to act as a complex prefab root, building arbitrary objects underneath it. (More on this later.)

A schematic optionally implements the SchematicUi trait to specify custom inspector UI. If this isn't implemented, a default is generated by reflecting the fields. This is where validation logic would go. For example, if an integer needs to be within a range, then that is something that would be set up on the input UI field. (Maybe this is a bad idea.) This may also be responsible for creating UI handles in the scene view.

A schematic optionally implements the InferableSchematic trait. This trait provides a Query associated type, and an infer function. Whilst the game is being played live in the editor (and the appropriate entities are selected in the inspector), entities that match the query have a value for the schematic inferred from the ECS data by the infer function.

Inferred schematics are never serialised, they're just used to provide the same inspector UI during gameplay as during editing. Changes made to the inferred schematic trigger re-interpretation, meaning they can be used to edit entities while the game is running. There's some trickiness around inferring a new schematic every frame, but having the UI interaction last longer than a frame. I'll cheat by leaving that problem to somebody else's clever UI binding system design.

It's valid for a schematic to not implement either of the *Interpretable traits, serving only as a neat form of readonly UI via InferableSchematic.
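A very rough Rust-ification of those four optional capabilities, just to make the shape concrete. None of these traits exist; the names, signatures, and the ConstraintSet/ObjectBuilder/InspectorUi stand-ins are placeholders for machinery that would need its own design work.

// A schematic is, at minimum, serializable data.
trait Schematic: serde::Serialize + for<'de> serde::Deserialize<'de> {}

// Declarative, constraint-based interpretation into ECS data.
trait EcsInterpretableSchematic: Schematic {
    fn interpret(&self, constraints: &mut ConstraintSet);
}

// Interpretation into further editor objects (prefab-like recursion).
trait ObjectInterpretableSchematic: Schematic {
    fn interpret(&self, objects: &mut ObjectBuilder);
}

// Custom inspector UI; a reflection-generated default is used otherwise.
trait SchematicUi: Schematic {
    fn draw(&mut self, ui: &mut InspectorUi);
}

// Inference of a schematic value from live ECS data while the game runs.
trait InferableSchematic: Schematic {
    type Query;
    fn infer(item: &Self::Query) -> Self;
}

// Opaque stand-ins.
struct ConstraintSet;
struct ObjectBuilder;
struct InspectorUi;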

Prefabs

@therocode's well-argued case for a more complex prefab system could theoretically be supported via ObjectInterpretableSchematic. Directly, ObjectInterpretableSchematic only allows prefabs with their logic built from code, so it doesn't meet those needs. However, one could build an entire prefab feature as a schematic. That is to say, SchematicUi is used to build a very complex UI capable of assembling arbitrary prefabs, and ObjectInterpretableSchematic is used to realise them. I'm not sure whether this would end up being the right approach or too limiting, but it would have the nice advantage that schematics would still end up being the sole thing that gets serialised.

Injecting dynamic inputs (like an integer in a random range) into schematic fields shouldn't be too hard. I cheekily defer this to being a UI feature. The schematic can just have a plain primitive i32 field on the struct, and the injection can work by mimicking typing a value into the UI field. Handling it this way avoids adding any extra complexity to schematics themselves. (I'm not sure if this is quite the right way to do it, but at the moment I'm saying the UI is responsible for validation, and injected values would need to go through that validation path.)

Animation

Now the animation issue raised by @james7132. When I first saw this requirement it scared me, because it appears to conflict with the separation between schematics and ECS. Tying schematic fields directly to component fields felt far too brittle, and still does. Furthermore, I dislike the idea of animations directly driving the underlying ECS data without some mediation layer, because it means that designers are directly controlling ECS data during runtime, which feels very prone to causing unexpected violations of engineering assumptions.

I would propose instead that there's some kind of AnimationRequestBuffer acting as a mediation layer. Each frame, an animation may insert a request corresponding to e.g. "Set position of Object X to Value Y". It's up to a gameplay system to read and act on this request from the buffer. This means animation is a bit less automatic, but it's not doing anything that the code doesn't expect and handle explicitly.

This should be relatively easy to integrate with schematics, which just need to provide some mapping from UI fields to animation requests.

The simple/common case of choosing to directly expose a field could probably be handled by a derive macro, so animation should still be trivial to implement.
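A loose sketch of that mediation layer. The request enum and the system are illustrative only, and this assumes a Bevy version where any plain Send + Sync type can be used as a resource.

use bevy::prelude::*;

enum AnimationRequest {
    SetTranslation { target: Entity, value: Vec3 },
    // ...other request kinds as needed
}

#[derive(Default)]
struct AnimationRequestBuffer {
    requests: Vec<AnimationRequest>,
}

// A gameplay system that explicitly opts in to honouring translation requests;
// nothing touches the ECS behind the code's back.
fn apply_translation_requests(
    buffer: Res<AnimationRequestBuffer>,
    mut transforms: Query<&mut Transform>,
) {
    for request in &buffer.requests {
        match request {
            AnimationRequest::SetTranslation { target, value } => {
                if let Ok(mut transform) = transforms.get_mut(*target) {
                    transform.translation = *value;
                }
            }
        }
    }
}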


I don't know if this is all of the way to a workable idea yet, but it feels like progress to me. In particular, inferred schematics and the animation abstraction seem like progress on the two hardest parts. Now tell me all of the ways in which this is broken.

@alice-i-cecile
Member

Overall, I really rather like this. I think this is ready for an RFC and/or prototype.

This really wants some serious worked examples of common use cases in pseudocode.

Outstanding questions / thoughts on this:

  1. "Conflicts are handled in a reasonable manner": uncomfortably hand-wavey
  2. Needs to be more explicit about the exact mapping between schematics and entities. It seems that schematics to entities are one-to-many under this model?
  3. How do bundles fit in, if at all?
  4. Derive macro -> manual implementation is 100% the right tool for progressive disclosure.
  5. EcsInterpretableSchematic and InferableSchematic seem to be dual to each other? Perhaps we can use that to rename the former. Effectively they're the translation protocols. I suspect we should look closely at serde's design here.
  6. ObjectInterpretableSchematic seems like it could be renamed to RecursiveSchematic, which IIUC would explain the concepts much more directly.
  7. It's not clear to me that ObjectInterpretableSchematic needs to be its own trait. Why can't we bake this in directly?
  8. SchematicUi seems like a nice little addition: reminds me of some of the macro attributes in bevy_inspector_egui.
  9. An intermediate event layer for animations seems fine by me.
  10. I'm not thrilled with the design you have in mind for @therocode's concerns. The classical approach here would be to use inheritance. I'm not thrilled by the prospect, but it could work. My ideal solution would probably be something parallel to what I laid out in the Styles RFC: just use a local, flat list of property overrides that are applied one at a time.
  11. I would very much like to be able to use this for UI styling and widget building.

My instinct is that we should have a SchematicToEcs trait, and a EcsToSchematic trait, parallel to Deserialize and Serialize respectively. Recursive building can be packed into those, and needs to work in both directions.
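That pair might look roughly like this (hypothetical trait names, just mirroring the serde analogy):

use bevy::prelude::*;

// Build ECS data from a schematic: the analogue of Deserialize.
trait SchematicToEcs {
    fn to_ecs(&self, world: &mut World, primary: Entity);
}

// Recover a schematic from live ECS data: the analogue of Serialize.
// May fail, since the mapping is not guaranteed to be invertible.
trait EcsToSchematic: Sized {
    fn from_ecs(world: &World, primary: Entity) -> Option<Self>;
}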

SchematicUI seems like a solid model, although you need to be careful of orphan rules.

@SamPruden
Author

Thanks for the feedback!

Yes I agree with the need for worked examples. I've been keeping these ideas very abstract so far, but I think it's time to start being a little more concrete. I think starting a prototype implementation (or even just a mocked up API) and implementing some interesting example schematics in it is probably a good next step.

"Conflicts are handled in a reasonable manner": uncomfortably hand-wavey

If multiple schematics want to put constraints on the same component, that's okay as long as their constraints are compatible with each other, and a value can be deterministically found. See my previous comment for a first pass at what that might look like. #3877 (comment)

If there's an actual conflict, that's a logic error we can't automatically resolve. We report the conflict as a design time error as cleanly as we can, making use of the constraints for good error messages. Just think of this as a validation check failure.
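As a toy illustration of how constraint gathering and conflict detection might work (entirely hypothetical types; real constraints would apply to component values inside the ECS):

#[derive(Clone, Debug)]
enum Constraint<T> {
    Exact(T),
    ExistsOrDefault,
}

// Gather every schematic's constraint on one component and either produce a
// single value or report a conflict the editor can show to the designer.
fn resolve<T: Clone + PartialEq + Default + std::fmt::Debug>(
    constraints: &[Constraint<T>],
) -> Result<T, String> {
    let mut value: Option<T> = None;
    for constraint in constraints {
        match constraint {
            Constraint::Exact(v) => match &value {
                Some(existing) if existing != v => {
                    return Err(format!("conflicting exact values: {existing:?} vs {v:?}"));
                }
                _ => value = Some(v.clone()),
            },
            // "Exists or default" is satisfied by whatever value is chosen.
            Constraint::ExistsOrDefault => {}
        }
    }
    Ok(value.unwrap_or_default())
}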

Needs to be more explicit about the exact mapping between schematics and entities. It seems that schematics to entities are one-to-many under this model?

I think probably a single editor "object" maps to a single primary entity, but the schematic interpretation logic can also create additional secondary entities if desired. My go-to example is a Rigidbody creating secondary joint entities, but I believe you also gave an example on Discord to do with MOBA abilities or something.

I've been avoiding the question of whether multiple schematics can share a secondary entity. I'm avoiding it because I don't know how to deal with it. This area needs work, and will depend a lot on what requirements we turn up when going over worked examples.

How do bundles fit in, if at all?

I have no idea. I don't see a direct need for them in this model, but I also don't have a good understanding of the longterm vision for bundles.

EcsInterpretableSchematic and InferableSchematic seem to be dual to each other? Perhaps we can use that to rename the former. Effectively they're the translation protocols. I suspect we should look closely at serde's design here.

They're approximately duals, although maybe not exactly. This is where we get to the weakest and most broken part of the design.

The SchematicToEcs, EcsToSchematic process (I like your naming) is not reliably bijective/roundtripable. The most trivial example is having a schematic field that is unused in the current implementation of SchematicToEcs, in which case it definitely won't be recoverable by EcsToSchematic. A more complex example might be that one of the expected components isn't present on the entity, but enough of the components are there that we'd still quite like to display our nice custom UI panel...

I've been handwaving this by saying that you would just infer some default value on a best effort basis, and it doesn't matter that much because this is only for viewing/tweaking values while the game is playing. I think that's bad UX and I'm not satisfied with it.

One way forward would be to aggressively use Option in schematic fields so that we can infer things as None if we need to. I don't like this, and think it would be clumsy to program against. One slight improvement on this would be a custom option type that's forced to be Some in the serialized data but is allowed to be None in the inferred data.
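That custom option type might be as small as this (hypothetical; the interesting part would be the serde/UI integration around it):

// Like Option<T>, but by convention scene serialization always requires Known,
// while a live inference pass is allowed to produce Unknown when the ECS no
// longer carries enough information to reconstruct the field.
enum Inferred<T> {
    Known(T),
    Unknown,
}

impl<T> Inferred<T> {
    fn known(&self) -> Option<&T> {
        match self {
            Inferred::Known(value) => Some(value),
            Inferred::Unknown => None,
        }
    }
}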

Both the inference and UI logic would then have to deal with branching on those option fields all over the place. I don't like it.

Reinterpreting the inferred schematic back to ECS as a way of applying edits during gameplay also feels very brittle because of this.

There's also a clean and simple solution: We simply give up on sharing a UI implementation between editing while the game is not playing, and editing during gameplay. EcsToSchematic is gone. We have two separate UI implementations; one is a UI for the schematic, the other is a UI for a set of queried components. If you want to support both, you have to implement it twice. But all of this design complexity is gone.

I'm trying desperately not to accept the conclusion that that's what we should do, because I dislike the boilerplate of the double implementation. But I think maybe that's what we need to do... Clever ideas invited. Please.

ObjectInterpretableSchematic seems like it could be renamed to RecursiveSchematic, which IIUC would explain the concepts much more directly.

Lovely.

It's not clear to me that ObjectInterpretableSchematic needs to be its own trait. Why can't we bake this in directly?

For some reason I convinced myself that it was cleaner to keep it separate, but I expect that you're right. I think this is just an API ergonomics question, although combining them may allow for better code reuse.

I'm not thrilled with the design you have in mind for @therocode's concerns.

Yeah, me neither. I think all we have to say for now is that it should be possible to implement any arbitrary prefabs system on top of schematics, so we don't have to worry about it now, that can be its own thing.

SchematicUI seems like a solid model, although you need to be careful of orphan rules.

Hm. What's the orphan rule scenario you're concerned about?

@SamPruden
Author

SamPruden commented Apr 30, 2022

There's also one other (potentially gnarly) scenario that I'd like to handle, and that's being able to mutate sibling schematics on the same object.

Imagine an object has a Transform schematic. We would like this object to be locked to a grid, or clamped to the terrain. We add a LockedToGrid schematic, and now dragging the object around in the editor snaps to the grid. That locking schematic would need to read and mutate the Transform schematic.

I don't think this should be hard to implement, but we would need to think carefully about who has permission to mutate what, and the footguns involved in that.

@alice-i-cecile
Member

but I also don't have a good understanding of the longterm vision for bundles.

You and the rest of the team ;) I really like that they're not impacted here: it means we can keep them as simple heterogeneously typed lists.

Hm. What's the orphan rule scenario you're concerned about?

I'm nervous about users importing some third party schematic, and then being discontent with the UI solution provided by that struct's author. If we use traits, the orphan rule will prevent them from tweaking it.

For the rest of this, I think I should defer further comments until the RFC: I'm nervous about misinterpreting something and leading you down the wrong path.

@SamPruden
Author

SamPruden commented May 5, 2022

I've been doing some further thinking, and my ideas have evolved once again. I think I'm getting close to being ready to put together an RFC, however I have a practical question about how best to do that process.

To give a preview of where I'm going:

  • We introduce a separate "schematic" ECS world. What we have been calling Schematics are just components on entities in this world, with a requirement that they be (de)serializable. This schematic world is what the editor directly sees and edits, and what gets saved to disk. This also opens up "schematic resources" as an option.
  • The schematic world is interpreted into the "runtime world" (the main gameplay world) by arbitrary systems ("schematic systems"); a rough sketch follows this list. A declarative constraint-based API is provided to aid in this process, allowing runtime entities to be built using operations like "'Entity A' must have a Transform with any arbitrary value". This allows multiple schematic systems to cooperate smoothly without fear of poorly defined behaviour when they're combined in a way that's uninterpretable.
  • An inspector UI drawing API is exposed to allow inspectors for selected entities (including multiselection) to be drawn from "inspector systems", which are query-based in the normal way. This same inspector API is used to draw inspectors in the schematic world during editing, and in the runtime world whilst the game is playing. A common and encouraged pattern is to share a UI implementation between the schematic component and the runtime components that it gets interpreted into. This does currently require an immediate mode style UI API, which is perhaps contentious...
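For the second bullet, a loose sketch of a "schematic system", assuming (hypothetically) that the schematic world is an ordinary Bevy World and that the editor calls the conversion whenever schematic data changes. Real code would track which runtime entity corresponds to which schematic entity rather than spawning fresh ones, and the exact World calls vary across Bevy versions.

use bevy::prelude::*;

// A schematic is just a (de)serializable component living in the schematic world.
#[derive(Component, Clone, serde::Serialize, serde::Deserialize)]
struct RigidbodySchematic {
    mass: f32,
}

// Runtime-world implementation detail.
#[derive(Component)]
struct Mass(f32);

// Called by the editor (or on scene load) to interpret schematic data into the
// runtime world.
fn interpret_rigidbodies(schematic_world: &mut World, runtime_world: &mut World) {
    let mut query = schematic_world.query::<&RigidbodySchematic>();
    for schematic in query.iter(schematic_world) {
        runtime_world.spawn().insert(Mass(schematic.mass));
    }
}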

My questions:

  1. Is this a reasonable direction worth pursuing an RFC in?
  2. Should this be one RFC, or three separate RFCs? Or maybe two, with the first two bullets combined into one?

@alice-i-cecile
Member

The first point reminds me of my ideas in #1446 :)

  1. Yes, I like the direction, and it's worth exploring in an RFC.
  2. I would split this into two RFCs. The first two bullet points can be combined to provide useful functionality, while the last one can be added on top down the line.

This will obviously need real multiple world support; if you're interested I'd love a co-author on the many worlds RFC.

@SamPruden
Author

Oh yes #1446 does seem similar! You beat me to it.

I'll take a more detailed look at the many worlds RFC too, as there's certainly some interaction between these designs. I'm certainly happy to give thoughts there, although I'm cautious about committing to too much at once. I haven't actually used Bevy for anything substantial yet, and should probably get a bit more familiar with it as a user before I try to drive the design too heavily.

This might be bikeshedding, but one thing that's causing me a little trouble at the moment is how we would go about requiring that components in the schematic world implement Serialize. I'd like to enforce that with the typesystem, but I'm not sure whether per-world component trait requirements are possible. I'll take a look at that in more detail in bevyengine/rfcs#43

I'll keep tinkering on my prototyping and see if I can get some draft RFCs thrown together, probably focusing on the schematics one first, and a looser draft of the inspector stuff.

@alice-i-cecile
Member

one thing that's causing me a little trouble at the moment is how we would go about requiring that components in the schematic world implement Serialize

Yep, perhaps we could put a trait bound of Serialize on the APIs used to load components / entities into the prefab world? It reminds me a lot of the discussion around #1515; we should probably share a solution with that.
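
Roughly this shape, purely as a sketch (SchematicWorld and insert_schematic are made-up names here):

use bevy_ecs::prelude::*;
use serde::{de::DeserializeOwned, Serialize};

struct SchematicWorld {
    world: World,
}

impl SchematicWorld {
    // The Serialize/Deserialize bound lives on the API that loads components into
    // the schematic/prefab world, rather than on Component itself.
    fn insert_schematic<T: Component + Serialize + DeserializeOwned>(
        &mut self,
        entity: Entity,
        schematic: T,
    ) {
        self.world.entity_mut(entity).insert(schematic);
    }
}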

@SamPruden
Copy link
Author

I've continued the Many Worlds specific discussion over in bevyengine/rfcs#43.

@SamPruden
Copy link
Author

SamPruden commented May 6, 2022

An alternative might be to simply require that every single Component and resource in any world is Serialize, so that entire worlds can be serialized. I'm not sure how impractical that is, but it might end up having additional uses. Savegames are still an area to think about, and I'm toying with whether or not the ideas in this thread may be applicable to that somehow. Having all entities be serializable may be desirable for that purpose.

@alice-i-cecile
Copy link
Member

I definitely think that constraint is going to be too harsh. I couldn't even get consensus on Clone or Debug, which are dramatically simpler and more core.

@MDeiml
Copy link
Contributor

MDeiml commented Oct 17, 2022

Since this seemed stale I started working on an RFC. I haven't invested a lot of work yet, so I'm happy to leave this to @SamPruden if they already started work on it.

https://github.com/MDeiml/rfcs/blob/schematics/rfcs/-schematics.md

So far I've started writing down the schematic world -> main world process, but the other direction will be quite a bit more complex to define, as this is mainly where conflicts between schematics come up, and where the requirements of schematics have to be defined.

@SamPruden
Copy link
Author

Thanks for working on this @MDeiml! I'm sorry that I let it go stale - I moved away from evaluating Bevy for a current project at this stage, and this slipped through the cracks. I'd like to come back and take a second crack at it, and I'd be happy to work with you on that if you're up for it. I'm also still relatively inexperienced with Rust's specific quirks and features, and was somewhat delaying until I could take a more informed run at the details. Working closely with somebody who has that experience would be a good way forward for me.

If I remember where I got to with it, you're absolutely right that one direction is easy, and the other is hard. I was going back and forth on whether the schematic world should in fact be a world, or whether it's a different data structure. The former seems more elegant and fits nicely with the existing infrastructure, but the question is whether it's a bad compromise in terms of the intricate dependency tracking and serialization stuff.

I've taken a brief look over your RFC so far. I have a couple of notes.

At the same time this solves the problem of a scene format, that is stable with regard to small changes to the underlying components.

My view was that we should aim to be stable with regard to large changes to the underlying ECS data, up to completely different entity hierarchies with different components but equivalent behaviour. The motivation behind the feature is to decouple the technical implementation from designer intention, and we might as well do that to the maximum degree possible.

As a consequence of that, I concluded that a build function on the Schematic trait was the wrong way forward. It's too limited. I was looking instead at conversion systems that would perform arbitrary queries in the schematic world and produce arbitrary outputs in the target world(s). The dependency tracking gets a bit complicated, and we probably need something like a TrackedQuery, but it should be possible.

I'm happy this is moving again, I'll dig up my notes and thoughts and get back into it.

@MDeiml
Copy link
Contributor

MDeiml commented Oct 17, 2022

I agree that pretty much any conversion between schematic and "target" components should be possible to implement. The API should push users to simpler conversions though, since those are easier to understand and maintain.

Even then, my take so far is that complex dependencies don't have to be covered by schematics, since nothing is hindering people from writing their own conversion functions that are just normal systems running on the schematic world. See ExtractComponent vs adding systems to RenderStage::Extract for comparison. The latter is already possible and will be made easier with the many worlds proposal. So I had intended to focus on the equivalent of writing a more multi-purpose ExtractComponent trait.
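
As a rough illustration of what I mean, loosely modelled on ExtractComponent (the Schematic trait, its items, and the example components are all invented here, not a concrete proposal):

use bevy_ecs::prelude::*;

// A "simple" schematic: one schematic component expands into a bundle of runtime
// components via a pure conversion function.
trait Schematic: Component {
    type Output: Bundle;
    fn to_runtime(&self) -> Self::Output;
}

#[derive(Component)]
struct RigidbodySchematic {
    mass: f32,
}

#[derive(Component)]
struct Mass(f32);

impl Schematic for RigidbodySchematic {
    type Output = Mass;
    fn to_runtime(&self) -> Self::Output {
        Mass(self.mass)
    }
}

// A generic conversion pass over the schematic world (simplified: it spawns fresh
// runtime entities instead of keeping them in sync).
fn convert<S: Schematic>(schematic_world: &mut World, runtime_world: &mut World) {
    let mut query = schematic_world.query::<&S>();
    for schematic in query.iter(schematic_world) {
        runtime_world.spawn(schematic.to_runtime());
    }
}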

What are your thoughts on proceeding like this? It would definitely help us to make the design more understandable and easier to implement. And we could still give good examples for the "complex case" approach I just (tried to) describe, for when the more restrictive system fails.

The versioning part should be made accessible for both cases though. But the versioning and the conversions aspects can be implemented quite separately.

@SamPruden
Copy link
Author

SamPruden commented Oct 17, 2022

The API should push users to simpler conversions though, since those are easier to understand and maintain.

My goal was always to design a single powerful method where writing the simple cases would still be very simple. This is where I ran into some of the complexity, so perhaps you're right to simplify.

Even then, my take so far is that complex dependencies don't have to be covered by schematics, since nothing is hindering people from writing their own conversion functions that are just normal systems running on the schematic world.

My take was that all schematics are "just normal systems running on the schematic world", with slightly modified queries for automated dependency tracking. I think there's a world where a TrackedQuery tracks which entities/components are currently being iterated over and automagically sets them up as dependencies of the target entities/components.

If we want to provide even simpler APIs for the simple cases, those can "compile down" into plain systems.

One bit of complexity that we do need in every schematic is conflict resolution. There's two levels to this:

  1. If two schematics try to do conflicting things, we need to robustly prevent undefined or unintended behaviour in all cases. A collision must always be caught and become some type of conversion error that can be presented to the designer. Ideally, this would involve the collision checker having a deep enough understanding of the semantics of the conflict to give the designer a nice and accessible error message. Even the simplest case absolutely must do this.

  2. We actually want to allow multiple schematics to touch the same component without that always being an error. The simplest case would be that two schematics both add the same tag component. A more complex case might be that one schematic requires a component to be present, and another schematic requires it to be present with a specific value. We would like the schematics to be able to negotiate this in a clear, well defined, and intuitive way. This could potentially be reserved for a more complex path, although I think it's neater and more robust to have all ways of implementing schematics do this part too.

My approach to this was to build a smart version of SchematicCommands. It would do conversion in three passes: first collecting the commands from schematics, then looking for conflicts and resolving them where possible, then applying the changes to the target worlds. It would have a declarative API, with commands like RequireComponent, RequireComponentWithValue, etc. to accommodate the conflict resolution. It would automagically pick up dependencies from the TrackedQuery.
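
Very roughly, the collect-then-check shape I have in mind looks something like this (all of these names are made up, and I'm using serialized strings via ron where a real design would use reflection):

use bevy_ecs::prelude::*;
use std::any::TypeId;
use std::collections::HashMap;

// What a schematic asked for on a given runtime entity.
enum Requirement {
    // The component must be present; any value is acceptable.
    Present,
    // The component must be present with exactly this value (serialized here for simplicity).
    Exact(String),
}

#[derive(Default)]
struct SchematicCommands {
    // Pass 1: requirements collected per (target entity, component type).
    requirements: HashMap<(Entity, TypeId), Vec<Requirement>>,
}

impl SchematicCommands {
    fn require_component<T: Component>(&mut self, target: Entity) {
        self.requirements
            .entry((target, TypeId::of::<T>()))
            .or_default()
            .push(Requirement::Present);
    }

    fn require_component_with_value<T: Component + serde::Serialize>(&mut self, target: Entity, value: &T) {
        let serialized = ron::to_string(value).unwrap();
        self.requirements
            .entry((target, TypeId::of::<T>()))
            .or_default()
            .push(Requirement::Exact(serialized));
    }

    // Pass 2: two Exact requirements with different values on the same component are
    // a conflict to surface as a designer-facing error rather than resolving silently.
    fn conflicts(&self) -> Vec<(Entity, TypeId)> {
        self.requirements
            .iter()
            .filter(|(_, reqs)| {
                let mut exact = reqs.iter().filter_map(|r| match r {
                    Requirement::Exact(v) => Some(v),
                    Requirement::Present => None,
                });
                match exact.next() {
                    Some(first) => exact.any(|v| v != first),
                    None => false,
                }
            })
            .map(|(key, _)| *key)
            .collect()
    }

    // Pass 3 (not shown) would apply the merged, conflict-free requirements to the
    // runtime world.
}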

There are two big questions:

  1. Is this technically feasible?
  2. Can the API for this be made clean enough that doing simple things isn't burdensome?

And none of this even gets into the nasty issue of reflecting changes back into schematics somehow... That was what had me running scared.

For me personally, I would prefer a robust, complex base, with optional simplified APIs built on top of it for the simple cases. I think what you're advocating is starting with simple cases and then progressively adding more complex ones. They're probably both valid ways forward. My view is that my way will lead to a better long-term outcome, but your way is much easier to get started on!

@SamPruden
Copy link
Author

SamPruden commented Oct 17, 2022

Pseudocode for how I'd like a trivial schematic system to look:

// Assume hypothetical single-field components along the lines of:
//   #[derive(Component)] struct SchematicA(f32);
//   #[derive(Component)] struct B(f32);
fn schematic_a_system(query: TrackedQuery<&SchematicA>, mut commands: SchematicCommands) {
    for a in query.iter() {
        // TrackedQuery and SchematicCommands coordinate to automatically work out
        // the dependencies here. This is nearly straightforward, but there are ways
        // the user could cheat it, e.g. by extracting some numeric data from `a` and
        // using it later. I don't know whether there's a way to address that, or
        // whether we have to accept that no automated dependency tracking will ever
        // be perfect, and trust users not to be too crazy.

        // TODO: Do we allow an implicit, unspecified target entity like this?
        commands.require_exact(B(a.0));
    }
}

@MDeiml
Copy link
Contributor

MDeiml commented Oct 17, 2022

Ah, so we're mainly talking about different terminologies. For you a "schematic" (roughly) is a system in the schematic world, while for me a "schematic" was the simpler representation that compiles down to systems. I guess the exact naming is not important at this point.

@SamPruden
Copy link
Author

Ah, I think I've been unclear with terminology then. As I've been using the term, a "schematic" is the component in the schematic world, along with its user facing conversion behaviour. "Schematic" is the term a designer would use for a unit/component of functionality that they're manipulating in the editor. They're not concerned with the systems implementing it, but they're concerned with what it does.

Schematics get "interpreted" into the game world by "schematic systems" or "schematic interpreter systems". At least, that's the terminology I've been using casually. They're part of the schematic in the sense that they define its behaviour, but they can be implemented in a different place in the code if the programmer so chooses. Typically they'd be implemented side by side, in the simple case probably automatically using a derive.

Not much thought has gone into this terminology, so it's all up in the air.

@MDeiml
Copy link
Contributor

MDeiml commented Oct 18, 2022

For now I'll write down the consensus of this thread (in the form of an RFC, why not), adding my opinion where there is no consensus yet. I'll submit it as a draft PR and link it here. I think that's the best way of collaborating on this, as you can easily make comments on specific parts / make pull requests to my branch.

@MDeiml
Copy link
Contributor

MDeiml commented Oct 19, 2022

bevyengine/rfcs#64

It's not everything yet, but I like what I have so far. It's mainly missing:

  • How to synchronize the other direction
  • How to be able to have "requirements" in schematics. I propose something like Archetype Invariants (#1481), which could also be used outside this RFC
  • How to serialize the schematic world
  • How to check if:
    • Every component gets converted
    • Conversions don't conflict

But I think we can move discussion there now.

@viridia
Copy link
Contributor

viridia commented Mar 15, 2024

I'm working on an alternate design for schematics, which I'm calling "exemplars". A description is available here: https://github.com/viridia/panoply/blob/main/docs/Exemplars.md
