AssetsInTools

djewsbury edited this page Mar 21, 2015 · 1 revision

Assets in tools; modification and update path

Normally, we think of "assets" as being immutable things. They are an in-memory mirror of some static data on the disk (for example, a texture or model file).

However, in tools we want to be able to author assets. For example, we might have a tool for authoring material information. When making changes to the material, we want to be able to see a real-time preview of those changes. That preview should be rendered via the normal engine path.

Not only that, but we also want a structure for undo & redo. That requires the ability to record changes to assets, and revert and re-play them.

Finally, we also want to be able to commit them to disk (and/or any source control system).

In these cases, assets are very dynamic things. They can change in memory, and their in-memory state can disagree with the static copy on disk.

So how can we mix these two models of assets? One static, one dynamic? What's the best model for updating preview windows when assets change?

Virtual file-system model

One possibility involves using a kind of virtual file system. In this model, tools interact with a separate "dynamic" form of each asset type. After each change, we serialise that form into a memory buffer (in the same format it would take on disk).

When the static asset form needs to be deserialised for the real-time preview, file operations are redirected to the virtual file system (instead of disk).

Committing changes to disk is easy: we simply flush the virtual file system data out to disk.
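The redirection described above can be sketched roughly as follows. This is a minimal illustration, not the engine's actual implementation; the class and method names are invented for the example. The key idea is just a lookup table of in-memory "files" that is checked before falling back to the real filesystem:

```cpp
#include <fstream>
#include <iterator>
#include <optional>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical sketch: editors serialise dynamic assets into this table;
// engine-side loads check it first and fall back to the real filesystem
class VirtualFileSystem {
public:
    // Editor-side: overwrite the virtual copy after each change
    void Write(const std::string& name, std::vector<char> data) {
        _files[name] = std::move(data);
    }

    // Engine-side read: prefer the virtual copy, else fall back to disk
    std::optional<std::vector<char>> Read(const std::string& name) const {
        auto i = _files.find(name);
        if (i != _files.end()) return i->second;
        std::ifstream file(name, std::ios::binary);
        if (!file) return std::nullopt;
        return std::vector<char>(
            std::istreambuf_iterator<char>(file),
            std::istreambuf_iterator<char>());
    }

    // Committing is just flushing every virtual entry out to disk
    void CommitAll() const {
        for (auto& [name, data] : _files)
            std::ofstream(name, std::ios::binary)
                .write(data.data(), (std::streamsize)data.size());
    }

private:
    std::unordered_map<std::string, std::vector<char>> _files;
};
```

Because the engine only ever sees bytes in the on-disk format, it cannot tell whether they came from disk or from an editor's in-memory buffer.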

This model is pretty simple, but also powerful. Consider the advantages:

  • real-time engine code is unchanged, and kept isolated from editor-only behaviour
      • there is a clear dividing line between code used by assets for the real-time preview, and code used by the editor for authoring. This means that authoring functionality (set property methods, OnChange event callbacks, etc) will be kept out of the game code. That behaviour only needs to exist in the editor code
  • convenient for cross-process communication
      • the virtual file system can be exposed to other processes on the same PC (or even other PCs)
      • this means that changes made in an editor can be immediately reflected in a separate running instance of the game
  • most code for refreshing assets is built around file system events already (eg, when a file changes on disk, the asset is reloaded)
      • so a virtual filesystem fits into that model very cleanly

It's handy to separate editor-only functionality from the core engine. I've seen some engines do this by using an "EDITOR_MODE" #define (or something similar). This define might add some extra functionality to certain base engine classes, functionality that is intended for use only in editors. But that path ends up with 2 complete sets of engine .lib files -- one set for the game, one set for the editors. Other engines attempt to deal with the same thing using polymorphism and virtual methods... But that method can also end up quite complex, and adds extra overhead in the game.

Cross-process communication is pretty handy, as well... Imagine you are playing through the game you are working on. You come across a problem in the data. It would be convenient to be able to just open up the editor, change the data to fix the problem, and see the result reflected in-game immediately. Of course, even without cross-process communication the changes will get reflected after they are committed to disk; cross-process communication just makes that loop immediate.

However, there are some disadvantages to this model:

  • quite a long and complex path to making any change
      • even a very small change needs to go through serialise, refresh and deserialise steps
  • some assets may be extremely large
      • terrain height map data (in particular) needs special code, because it can be too huge to entirely fit in memory
  • in some cases we might not want to go through the full deserialise step for small changes
      • for example, consider moving a single placement in a placement file; it seems redundant to have to serialise and deserialise the entire placement file

Divergent assets model

When an asset differs from its copy on disk, we can call that asset "divergent."

Modifying assets should take place via packets of changes called "transactions." Each transaction is a single undo step. The type of transaction should vary depending on the asset type. For basic assets, each transaction must store a "before" and "after" copy.

For more complex assets (like a placements file, or terrain height data) each transaction will only store a part of the full asset. Just enough to do undo and redo operations.

We must be able to query the asset system for divergent assets. We can serialise these assets out to disk (for example, when saving in the editor).

When a divergent asset changes, we must trigger OnChange events. This should trigger rebuilds of higher-level dependent assets.

There are some problems however. Consider the "-resmat" asset type (this is the "MaterialScaffold" containing resolved materials).

"-resmat" assets are dependent on ".material" assets. So we list the ".material" files as one of their dependencies. "-resmat" files can be loaded as MaterialScaffold objects, and those objects inherit the dependency on the ".material" files. ".material" files can also be loaded using GetAsset<RawMaterial>(). So, our material editor will create divergent RawMaterial assets.

The problem is, MaterialScaffold doesn't have a dependency on the RawMaterial assets. It only has a dependency on the actual ".material" files. So, an OnChanged() event on RawMaterial won't propagate through to the MaterialScaffold object. To do that, we need to fire a fake OnChanged() event on the actual file.

This is a little strange, because normally we only get OnChanged() events on files when the actual file on disk changes.

To handle these cases, it might be handy to have a DivergentAsset class. This class should own a copy of the current state of the asset; it should also know the associated output file, and it should have a pointer to the undo/redo queue. It also needs interfaces for dealing with "transactions."

When an editor needs a changeable version of an asset, it should explicitly ask for a DivergentAsset version.
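Putting the pieces above together, a DivergentAsset might be sketched like this. Again, this is a hypothetical illustration (the names and the callback shape are invented); the point is that every transaction also publishes a synthetic change event on the output file, so that dependents like MaterialScaffold rebuild as if the file on disk had changed:

```cpp
#include <functional>
#include <string>
#include <utility>

// Hypothetical sketch: owns a working copy of the asset, remembers the file
// it will eventually be written to, and fires a synthetic file-change event
// so that higher-level assets depending on that file get rebuilt
template<typename Asset>
class DivergentAsset {
public:
    DivergentAsset(Asset initial, std::string outputFile,
                   std::function<void(const std::string&)> onFileChanged)
    : _working(std::move(initial))
    , _outputFile(std::move(outputFile))
    , _onFileChanged(std::move(onFileChanged)) {}

    const Asset& GetWorkingCopy() const { return _working; }
    bool HasChanges() const { return _hasChanges; }

    // A transaction here is a mutation applied to the working copy; after
    // applying it we publish a fake OnChanged() on the output file
    void Transaction(const std::function<void(Asset&)>& change) {
        change(_working);
        _hasChanges = true;
        if (_onFileChanged) _onFileChanged(_outputFile);
    }

private:
    Asset _working;
    std::string _outputFile;
    std::function<void(const std::string&)> _onFileChanged;
    bool _hasChanges = false;
};
```

In a real implementation the transaction interface would feed the undo/redo queue as well; it's omitted here to keep the sketch short.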

Using this model, it would be cool to be able to show a list of all assets that have changed, and also a list of all transactions grouped by asset. So, when we save, we can see not only which files will change, but also what the changes are.

This method might add some overhead in the game, because when we query for assets we need to check for divergent assets first. But the extra overhead is probably not excessive.

Shadowing changes on disk

Imagine we have a material file "galleon.material." We open an editor for this material, and so create a DivergentAsset for it.

Now, what happens if we change "galleon.material" on disk (using notepad)? Normally, when we detect a change on disk, we reload the asset. But our DivergentAsset might have changes. We don't want to reload it, because that would discard our changes.

But if we overwrite the file later, we will lose the changes we made with notepad.

Effectively, the DivergentAsset "shadows" changes on disk.

In this kind of case, doing an automatic merge is impractical. But this functionality would be nice:

  • if our DivergentAsset has no changes, we should do a reload
      • this should also refresh our editor dialog with changes from disk
  • if our DivergentAsset has changes, we should ignore the changes on disk
      • but when we attempt to save, we inform the user with a warning message
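The policy above is simple enough to express directly. A minimal sketch, with invented names ("WarnOnSave" stands in for whatever mechanism the editor uses to warn the user at save time):

```cpp
// Hypothetical sketch of the reload-vs-shadow decision made when the file
// watcher reports that the underlying file changed on disk
enum class DiskChangeResponse { Reload, ShadowAndWarnOnSave };

inline DiskChangeResponse OnUnderlyingFileChanged(bool divergentHasChanges) {
    // No local edits: safe to reload and refresh the editor dialog.
    // Local edits: keep them, but warn when the user attempts to save.
    return divergentHasChanges
        ? DiskChangeResponse::ShadowAndWarnOnSave
        : DiskChangeResponse::Reload;
}
```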

Conclusion

It feels like the DivergentAsset method is a better match for what we need. Inefficiencies associated with the virtual filesystem method are a big problem for certain assets. But it feels like DivergentAsset and transactions can be a powerful model for this situation.