daily-bevy

Learn Bevy by exploring a small example (almost) every day.

Bevy is a free, open-source, cross-platform (Windows, macOS, Linux, Web, iOS, Android) game engine written in Rust.

This README shows the first entry in this series. All other entries can be found at daily-bevy/branches.

Hello, Bevy!

Today is the first day of Daily Bevy.

An Introduction to Daily Bevy

Daily Bevy is a kind of programming kata journal, where I will dissect a small, self-contained Bevy example (nearly) every day. The official Bevy docs say that exploring the examples in the repo "is currently the best way to learn Bevy's features and how to use them." So that's what I'll do!

The goal is to break each example down and understand its constituent parts, working toward a complete understanding of the Bevy game engine. The goal is not to write examples from scratch or to explain how to write Rust / Bevy code from the ground up.

Each day, I will explore one of the examples from the bevy repo, an example I find on the Internet, or an example I write myself.

Each day will start fresh from the initial empty commit to this repo (rather than accumulating code from every example), so that readers don't need to inspect a diff or work out which code is (or isn't) relevant to a particular example.

Each day will exist on a new branch of this repository so that earlier examples can be updated* without affecting later examples.

* Updates may be required because "Bevy is still in the early stages of development" and may introduce breaking changes in new releases.

Today's Kata

Okay, so let's get into it.

Today, I will be dissecting the standard hello world example found in the Bevy repo.

The Code

Here is the entirety of the main.rs file for this example, as of today

use bevy::prelude::*;

fn main() {
    App::new().add_systems(Update, hello_world_system).run();
}

fn hello_world_system() {
    println!("hello world");
}

This code requires only the base bevy crate in Cargo.toml

[dependencies]
bevy = "0.12.1"

Discussion

I have done a bit of Bevy exploration already, and I've found that importing bevy::prelude::* at the top of main.rs is usually a good idea. Chances are you're going to want a good number of items from this module anyway, and I have already found a few copy-and-paste examples where name conflicts, introduced by importing items one-by-one using IDE code hints, left the code not working quite right. So, until further notice, I'll put use bevy::prelude::*; at the top of every Bevy source file.
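For comparison, here's roughly what this particular example would need without the prelude (assuming I have the bevy 0.12.1 paths right); every new feature used later would want its own line.

// Explicit imports covering just this hello-world example; the prelude
// re-exports these (and many more items) in a single line.
use bevy::app::{App, Update};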

App::new()

This instantiates a new App by delegating to App::default(). App::default() looks like this

impl Default for App {
    fn default() -> Self {
        let mut app = App::empty();
        #[cfg(feature = "bevy_reflect")]
        app.init_resource::<AppTypeRegistry>();

        app.add_plugins(MainSchedulePlugin);

        app.add_event::<AppExit>();

        #[cfg(feature = "bevy_ci_testing")]
        {
            crate::ci_testing::setup_app(&mut app);
        }

        app
    }
}

The bevy_ci_testing block looks like something related to the Bevy team's GitHub CI pipeline testing, so I'm happy to ignore that for now

impl Default for App {
    fn default() -> Self {
        let mut app = App::empty();
        #[cfg(feature = "bevy_reflect")]
        app.init_resource::<AppTypeRegistry>();

        app.add_plugins(MainSchedulePlugin);

        app.add_event::<AppExit>();

        app
    }
}

There's another attribute here, though: bevy_reflect. What does this do?

Well, bevy_reflect is a whole separate crate in the Bevy workspace. My IDE (RustRover) helpfully shows that this feature is enabled by default (unlike bevy_ci_testing). But where? Since App is defined within the bevy_app crate, this feature is enabled in that crate's Cargo.toml file. So

fn main() {
    App::new().add_systems(Update, hello_world_system).run();
}

...is equivalent to...

fn main() {
    let mut app = App::empty();

    app.init_resource::<AppTypeRegistry>();
    app.add_plugins(MainSchedulePlugin);
    app.add_event::<AppExit>();
    app.add_systems(Update, hello_world_system);

    app.run();
}

We create an App and run it, but the interesting stuff happens in the middle.


We first initialize an AppTypeRegistry...

#[derive(Resource, Clone, Default)]
pub struct AppTypeRegistry(pub TypeRegistryArc);

...which is a Resource and a tuple struct, and contains only a single field, a TypeRegistryArc

// TODO:  remove this wrapper once we migrate to Atelier Assets and the Scene AssetLoader doesn't
// need a TypeRegistry ref
/// A synchronized wrapper around a [`TypeRegistry`].
#[derive(Clone, Default)]
pub struct TypeRegistryArc {
    pub internal: Arc<RwLock<TypeRegistry>>,
}

It looks like this might be removed in the near future. (But Atelier Assets? This one? With 1 GitHub star?) Anyway, finally, we get to the actual TypeRegistry, which is where I'm happy to stop digging for now. This seems to hold lots of type information to enable some reflection in Bevy.
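To make this a bit more concrete, here's a hypothetical sketch (Score is a made-up type, and I'm assuming bevy 0.12.1 with default features) of how register_type ends up writing into this same AppTypeRegistry resource

use bevy::prelude::*;
use std::any::TypeId;

// A made-up reflectable type, just for illustration.
#[derive(Reflect)]
struct Score(u32);

fn main() {
    let mut app = App::new();

    // register_type stores Score's type information in the AppTypeRegistry resource.
    app.register_type::<Score>();

    // Read the registry back out of the World and confirm Score is now in it.
    let registry = app.world.resource::<AppTypeRegistry>();
    assert!(registry.read().get(TypeId::of::<Score>()).is_some());
}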


The next line is app.add_plugins(MainSchedulePlugin)

/// Initializes the [`Main`] schedule, sub schedules,  and resources for a given [`App`].
pub struct MainSchedulePlugin;

I was surprised to learn that the Main schedule could so easily be disabled (by using App::empty() instead of App::new()). Anyway, here's where I start to get a bit confused...

First, we instantiate a Schedule with the Main label, and set it to use a SingleThreaded executor

// simple "facilitator" schedules benefit from simpler single threaded scheduling
let mut main_schedule = Schedule::new(Main);
main_schedule.set_executor_kind(ExecutorKind::SingleThreaded);

The comment explains why we use a single-threaded executor for the Main schedule: multithreading has overhead and sometimes the increased complexity is not worth it. Nowhere in the Bevy codebase is it explained what a "facilitator schedule" is, though. Maybe it will become clear in later katas.

The Main schedule is very simple...

#[derive(ScheduleLabel, Clone, Debug, PartialEq, Eq, Hash)]
pub struct Main;

...it derives a bunch of standard traits, plus the Bevy-specific ScheduleLabel trait. I won't reproduce that derive macro's implementation here; it seems to just turn the type into a label that the App can use to identify a schedule. Maybe I'll dig into this more later, as well.
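In the meantime, here's a hypothetical sketch (MyLabel is made up; assuming bevy 0.12.1) of defining a schedule label of our own, the same way Main is defined, and wiring it into an App by hand

use bevy::prelude::*;
use bevy::ecs::schedule::{ExecutorKind, ScheduleLabel};

// A made-up label, derived just like Main.
#[derive(ScheduleLabel, Clone, Debug, PartialEq, Eq, Hash)]
struct MyLabel;

fn main() {
    // Build a Schedule for that label and configure it the way MainSchedulePlugin does.
    let mut schedule = Schedule::new(MyLabel);
    schedule.set_executor_kind(ExecutorKind::SingleThreaded);

    let mut app = App::new();
    app.add_schedule(schedule)
        .add_systems(MyLabel, || println!("running MyLabel"));

    // Main only runs the labels listed in MainScheduleOrder, so this schedule
    // won't run on its own; run it by hand to show the label is wired up.
    app.world.run_schedule(MyLabel);
}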

As for the ExecutorKinds...

/// Specifies how a [`Schedule`](super::Schedule) will be run.
///
/// The default depends on the target platform:
///  - [`SingleThreaded`](ExecutorKind::SingleThreaded) on WASM.
///  - [`MultiThreaded`](ExecutorKind::MultiThreaded) everywhere else.
#[derive(PartialEq, Eq, Default, Debug, Copy, Clone)]
pub enum ExecutorKind {
    /// Runs the schedule using a single thread.
    ///
    /// Useful if you're dealing with a single-threaded environment, saving your threads for
    /// other things, or just trying minimize overhead.
    #[cfg_attr(any(target_arch = "wasm32", not(feature = "multi-threaded")), default)]
    SingleThreaded,
    /// Like [`SingleThreaded`](ExecutorKind::SingleThreaded) but calls [`apply_deferred`](crate::system::System::apply_deferred)
    /// immediately after running each system.
    Simple,
    /// Runs the schedule using a thread pool. Non-conflicting systems can run in parallel.
    #[cfg_attr(all(not(target_arch = "wasm32"), feature = "multi-threaded"), default)]
    MultiThreaded,
}

...there are three: SingleThreaded, Simple, and MultiThreaded. Interestingly, MultiThreaded is not the default for WASM targets (SingleThreaded is). Rust / WASM multithreading is possible, but still experimental, so perhaps Bevy will default to it on the web in a few years' time.

Another thing to note above is that Simple executors are just SingleThreaded executors which "[call] apply_deferred immediately after running each system." Deferred refers to operations which mutate the World of your Bevy application. By deferring all of that mutation to specific points in the schedule, you leave the World in a read-only state for longer, which means more systems can run in parallel.
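Commands are the classic example of this kind of deferred work. A hypothetical sketch (Enemy and spawn_enemy are made up):

use bevy::prelude::*;

// A made-up marker component.
#[derive(Component)]
struct Enemy;

// The spawn below is only queued here; it is applied to the World at a later
// apply_deferred point in the schedule, not immediately.
fn spawn_enemy(mut commands: Commands) {
    commands.spawn(Enemy);
}

fn main() {
    App::new().add_systems(Update, spawn_enemy).run();
}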

As I said, this is where I begin to get confused, because app.add_plugins(MainSchedulePlugin) creates a Main Schedule, assigns it an ExecutorKind, adds it to the App, and adds a System (Main::run_main) for that Schedule...

impl Plugin for MainSchedulePlugin {
    fn build(&self, app: &mut App) {
        // simple "facilitator" schedules benefit from simpler single threaded scheduling
        let mut main_schedule = Schedule::new(Main);
        main_schedule.set_executor_kind(ExecutorKind::SingleThreaded);
        let mut fixed_update_loop_schedule = Schedule::new(RunFixedUpdateLoop);
        fixed_update_loop_schedule.set_executor_kind(ExecutorKind::SingleThreaded);

        app.add_schedule(main_schedule)
            .add_schedule(fixed_update_loop_schedule)
            .init_resource::<MainScheduleOrder>()
            .add_systems(Main, Main::run_main);
    }
}

...that all seems to make sense to me, but then what is RunFixedUpdateLoop doing? There is no System associated with that Schedule. Eventually, though, I found this in the bevy_time crate

impl Plugin for TimePlugin {
    fn build(&self, app: &mut App) {
        app.init_resource::<Time>()
            // -- snip --
            .add_systems(RunFixedUpdateLoop, run_fixed_update_schedule);

        // -- snip --
    }
}

I can look into this later; this entry is already getting a bit long.
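For now, it's enough to know that this plumbing is what eventually drives FixedUpdate systems. A hypothetical sketch (fixed_tick is made up; this assumes DefaultPlugins, which include TimePlugin):

use bevy::prelude::*;

// Without TimePlugin (pulled in here via DefaultPlugins), nothing would run
// the fixed-update machinery, and this system would never fire.
fn fixed_tick() {
    println!("fixed timestep tick");
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_systems(FixedUpdate, fixed_tick)
        .run();
}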


After we add the MainSchedulePlugin, we add the AppExit event: app.add_event::<AppExit>();.

The documentation above add_event says

    /// Setup the application to manage events of type `T`.
    ///
    /// This is done by adding a [`Resource`] of type [`Events::<T>`],
    /// and inserting an [`event_update_system`] into [`First`].

which I initially read as a list of things I needed to do myself, but it is just describing the implementation of the add_event method itself

    pub fn add_event<T>(&mut self) -> &mut Self
    where
        T: Event,
    {
        if !self.world.contains_resource::<Events<T>>() {
            self.init_resource::<Events<T>>().add_systems(
                First,
                bevy_ecs::event::event_update_system::<T>
                    .run_if(bevy_ecs::event::event_update_condition::<T>),
            );
        }
        self
    }

First, used above, is one of the sub-schedules run by the Main schedule, in the order defined by MainScheduleOrder

impl Default for MainScheduleOrder {
    fn default() -> Self {
        Self {
            labels: vec![
                First.intern(),
                PreUpdate.intern(),
                StateTransition.intern(),
                RunFixedUpdateLoop.intern(),
                Update.intern(),
                SpawnScene.intern(),
                PostUpdate.intern(),
                Last.intern(),
            ],
        }
    }
}

so it's good that we added that Plugin to our App.

So, from my understanding, at the start of each Main schedule loop, systems tagged First will be executed, well, first. A system which maintains the queue of AppExit events is now in that list of systems, and so if an AppExit event was sent since the previous iteration of the Main schedule, Bevy can see it at the start of the next iteration and begin the process of unwinding things and closing the App.
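Going the other direction, any system can request a shutdown by sending an AppExit event. A hypothetical sketch (quit_immediately is made up):

use bevy::app::AppExit;
use bevy::prelude::*;

// Sending AppExit signals that the App should exit at the start of the next tick.
fn quit_immediately(mut exit: EventWriter<AppExit>) {
    exit.send(AppExit);
}

fn main() {
    App::new().add_systems(Update, quit_immediately).run();
}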

Note that there doesn't seem to be an obvious way to remove events, if we no longer want to listen for them.

The documentation above AppExit is interesting, as well

/// An event that indicates the [`App`] should exit. This will fully exit the app process at the
/// start of the next tick of the schedule.
///
/// You can also use this event to detect that an exit was requested. In order to receive it, systems
/// subscribing to this event should run after it was emitted and before the schedule of the same
/// frame is over. This is important since [`App::run()`] might never return.
///
/// If you don't require access to other components or resources, consider implementing the [`Drop`]
/// trait on components/resources for code that runs on exit. That saves you from worrying about
/// system schedule ordering, and is idiomatic Rust.
#[derive(Event, Debug, Clone, Default)]
pub struct AppExit;

"You can also use this event to detect that an exit was requested" implies that you might want to do some cleanup before the app is closed, but the "idiomatic Rust" way of doing this is to just implement Drop on any complex resources. Good to know.


Finally, we add our system with app.add_systems(Update, hello_world_system);.

What is a "system" anyway?

    pub fn add_systems<M>(
        &mut self,
        schedule: impl ScheduleLabel,
        systems: impl IntoSystemConfigs<M>,
    ) -> &mut Self {
        // -- snip --
    }

A system is anything that implements IntoSystemConfigs. So how does

fn hello_world_system() {
    println!("hello world");
}

implement this trait? IntoSystemConfigs is implemented by this macro

macro_rules! impl_system_collection {
    ($(($param: ident, $sys: ident)),*) => {
        impl<$($param, $sys),*> IntoSystemConfigs<(SystemConfigTupleMarker, $($param,)*)> for ($($sys,)*)
        where
            $($sys: IntoSystemConfigs<$param>),*
        {
            #[allow(non_snake_case)]
            fn into_configs(self) -> SystemConfigs {
                let ($($sys,)*) = self;
                SystemConfigs::Configs {
                    configs: vec![$($sys.into_configs(),)*],
                    collective_conditions: Vec::new(),
                    chained: false,
                }
            }
        }
    }
}

all_tuples!(impl_system_collection, 1, 20, P, S);

and then also in three other places without using macros directly

impl IntoSystemConfigs<()> for BoxedSystem<(), ()>
impl<Marker, F> IntoSystemConfigs<Marker> for F
where
    F: IntoSystem<(), (), Marker>
impl IntoSystemConfigs<()> for SystemConfigs

The IntoSystem implementation is interesting, because its documentation says

"Use this to get a system from a function."

Replacing add_systems(Update, hello_world_system) with add_systems(Update, IntoSystem::into_system(hello_world_system)) does work, but is that what's actually being done implicitly?

After some more poking around, I found that IntoSystem is implemented for any type F such that F implements SystemParamFunction

impl<Marker, F> IntoSystem<F::In, F::Out, (IsFunctionSystem, Marker)> for F
where
    Marker: 'static,
    F: SystemParamFunction<Marker>,
{

and SystemParamFunction is implemented by another macro, covering plain functions (and closures) whose arguments are all system parameters

macro_rules! impl_system_function

// -- snip --

all_tuples!(impl_system_function, 0, 16, F);

SystemParamFunction's documentation describes it as

"A trait implemented for all functions that can be used as [System]s."

So I think this is probably the path by which an arbitrary function actually gets translated into a System. But I can dig into this more in later katas.
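Those blanket impls are why plain functions with zero or more system parameters, and even tuples of such functions, can be handed straight to add_systems. A hypothetical sketch (greet and count_entities are made up):

use bevy::prelude::*;

// A zero-parameter system, like hello_world_system.
fn greet() {
    println!("hello");
}

// Functions whose arguments are all system parameters (like Query) also qualify.
fn count_entities(query: Query<Entity>) {
    println!("{} entities", query.iter().count());
}

fn main() {
    App::new()
        // A tuple of systems is itself IntoSystemConfigs, via the macro above.
        .add_systems(Update, (greet, count_entities))
        .run();
}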


Finally, we are near the end with .run().

"Starts the application by calling the app's runner function."

But we haven't set a runner, at least not explicitly. As it turns out, this was hidden in App::empty, which is not really empty

    /// Creates a new empty [`App`] with minimal default configuration.
    ///
    /// This constructor should be used if you wish to provide custom scheduling, exit handling, cleanup, etc.
    pub fn empty() -> App {
        let mut world = World::new();
        world.init_resource::<Schedules>();
        Self {
            world,
            runner: Box::new(run_once),
            sub_apps: HashMap::default(),
            plugin_registry: Vec::default(),
            plugin_name_added: Default::default(),
            main_schedule_label: Main.intern(),
            building_plugin_depth: 0,
            plugins_state: PluginsState::Adding,
        }
    }

The default runner here is run_once, though it's possible to set a custom runner with

    pub fn set_runner(&mut self, run_fn: impl FnOnce(App) + 'static + Send) -> &mut Self {
        self.runner = Box::new(run_fn);
        self
    }
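For example, here's a hypothetical custom runner that drives the App by hand for a few updates instead of relying on run_once (assuming bevy 0.12.1):

use bevy::prelude::*;

fn main() {
    App::new()
        .add_systems(Update, || println!("tick"))
        // The runner receives the App by value and is free to call update()
        // however it likes; this one just runs three updates and returns.
        .set_runner(|mut app: App| {
            for _ in 0..3 {
                app.update();
            }
        })
        .run();
}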

But the implementation of .run() itself is quite confusing to me

    pub fn run(&mut self) {
        #[cfg(feature = "trace")]
        let _bevy_app_run_span = info_span!("bevy_app").entered();

        let mut app = std::mem::replace(self, App::empty());
        if app.building_plugin_depth > 0 {
            panic!("App::run() was called from within Plugin::build(), which is not allowed.");
        }

        let runner = std::mem::replace(&mut app.runner, Box::new(run_once));
        (runner)(app);
    }

This line

let mut app = std::mem::replace(self, App::empty());

appears to replace self with an empty App...? std::mem::replace returns the original self, which we then use, but why are we swapping a new App::empty into memory where the old one was stored?

And I have the same question for

let runner = std::mem::replace(&mut app.runner, Box::new(run_once));

runner contains the original runner, which was maybe set by the user. Why are we rewriting the memory in that location with the default run_once runner?
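My guess is that this is about ownership: run() only has &mut self, but the runner consumes an owned App (and the boxed runner closure has to be moved out of the App before the App itself can be handed to it), so something valid must be left behind in each case. Here's a small, Bevy-free sketch of what std::mem::replace itself does

fn main() {
    let mut slot = String::from("original");

    // Move the old value out of `slot` (so we can consume it), leaving a
    // placeholder behind so that `slot` remains valid afterwards.
    let taken = std::mem::replace(&mut slot, String::from("placeholder"));

    assert_eq!(taken, "original");
    assert_eq!(slot, "placeholder");
}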

Maybe these are questions for another time.


So this original example

use bevy::prelude::*;

fn main() {
    App::new().add_systems(Update, hello_world_system).run();
}

fn hello_world_system() {
    println!("hello world");
}

is much more complex than it seems. We know that App::new actually calls App::default, which builds on App::empty. The App contains an AppTypeRegistry resource for reflection within Bevy. We add the single-threaded Main schedule as well as the RunFixedUpdateLoop schedule, listen for AppExit events in the First sub-schedule of Main, and add our hello_world_system, which is converted from a plain Rust function into a System through IntoSystem-style traits and macros. We run the App using the default run_once runner, AppExit events give systems a way to request shutdown, and Drop implementations provide idiomatic cleanup when the App is torn down.

So much complexity is hidden behind this simple "hello world" example. I hope to dig into more of this in the coming weeks, and understand all of this a bit better.

Learn More

If you found this first kata interesting, head over to daily-bevy/branches to see the rest of them.

If you have questions, comments, or corrections, please head over to daily-bevy/discussions to join the conversation.

If you like what you've read above, you can follow me on Bluesky or Mastodon.

All Katas

  1. Hello, Bevy!
  2. File Drag and Drop
  3. Keyboard Input
  4. Clear Color
  5. Camera2dBundle (bonus!)
  6. Camera2dBundle 2 (bonus!)
  7. Camera2dBundle 3 (bonus!)
  8. Text 2D
  9. 3D Shapes
  10. Button
  11. WASM (bonus!)
  12. Asset Loading
  13. Scene
  14. Reflection
  15. Game Menu Part 1
  16. Game Menu Part 2
  17. Game Menu Part 3
  18. v0.13.0 (bonus!)
  19. WASM Persistence (bonus!)
  20. 2D Gizmos
  21. 2D Viewport to World
  22. Low-Power Windows
  23. Sprite Sheet
  24. Bounding Box 2D
  25. Virtual Time
  26. Events

...more coming soon!

Putting the Above into Practice

After katas 1-20, above, I was able to build this Tic-Tac-Toe game.
