VideoCommon: add a graphics mod action that allows you to modify the game's base rendering #11300
Conversation
This is looking really promising. I look forward to it. If you need textures with Material Maps for testing, I am happy to provide some if needed.

Where does the limitation of 8 textures come from? If it's from the render backends, that could be adjusted.

I think 8 was chosen because that matches the GC/Wii hardware. But yes, I looked at increasing it. 12 worked for my machine, but increasing to 16 caused my GPU to spaz out. From what I understand, graphics cards support a very small number of samplers but many textures, hence my comment about improving our sampler handling. Of course, for my usage, the texture array approach works fine.
Force-pushed cc8acb7 to f39ab4f
Watching with great interest
Force-pushed 8617304 to 24a9c42
Ubershader support is implemented, and the Vulkan black textures are fixed. The code/commits are in a pretty good state; I see one more thing to do there. Sadly, Vulkan is still being Vulkan: there are validation errors when trying to copy the base texture and load the additional data. No visible impact as far as I can tell, but it should probably still be resolved; I'm not sure yet how. Finally, I have been debugging some flickering I see in Final Fantasy Crystal Chronicles: The Crystal Bearers. Turns out, every other frame the game draws the floor with two textures instead of one. In my testing I used both textures, which means the second frame's draw actually draws the second texture (when the floor is drawn with the first). While I can simply ignore the second texture in this case, I think it may be beneficial to have some sort of tev stage API. I was planning on doing that in a follow-up PR, but I may see if I can implement it here.
Force-pushed 24a9c42 to 8e334df
Force-pushed 8e334df to 87e9e29
I've added a minimal tev stage API for both the specialized and uber shaders. It allows you to get the input and output of each stage, along with analyzing the input type. For instance, say you wanted to remove character shading from Tales of Symphonia in order to add your own. First you'd isolate just the character texture. So change this: to this: A shader would look like this: Which would then allow you to apply your own shading. Here's Tales of Symphonia with more realistic lighting:
The coveted parallax occlusion is also possible. Thanks to @phire for working through this with me.

Default:

With parallax occlusion on (and metal details):
Force-pushed 7abd1ef to 62e4669
Calling on the graphics masters. @K0bin / @TellowKrinkle / @Pokechu22 - would you all be willing to start looking this over? It's mostly ready. Some things to call out:

There is still an open issue with Vulkan where there is a validation error. The error is:

I do not know how to resolve it. It happens at the point where the game texture data is copied into a new texture, so that the additional textures specified by the shader can be added as extra layers. The final thing I'll say is that the API was meant to be all-encompassing, so users wouldn't use anything in the shader header that we could change. However, it doesn't include the variable `samp`; apparently copying `samp` causes some issues in SPIRV-Cross. I can't recall the error offhand.

EDIT: Here is the user documentation if interested.
Force-pushed e1e94c0 to 86ad7bf
It would be nice if you could post an example of a custom shader and the GLSL (uber and non-uber) it gets converted to by the graphics mod, for reviewers who are having trouble reading the shader generation code.
```cpp
{
  for (int i = 0; i < 8; ++i)
  {
    if ((light_mask & (1 << (i + 8 * j))) != 0)
```
Do you think it would be more or less confusing to do `light_count += popcount(light_mask & (0xff << (8 * j)));`?
(Or you could use the for loop to mask channels from light_mask, then just `light_count = popcount(light_mask)`)
```cpp
for (u32 j = 0; j < NUM_XF_COLOR_CHANNELS; j++)
{
  if ((enablelighting & (1 << j)) == 0)
    light_mask &= ~(0xff << (8 * j));
  if ((enablelighting & (1 << (j + 2))) == 0)
    light_mask &= ~(0xff << (8 * (j + 2)));
}
u32 light_count = std::popcount(light_mask);
```
Was this comment resolved at some point? It's still marked as open on GitHub.
Whoops, I had seen this but then kept going back and forth on what to do and never commented. I like the conciseness but also felt like the current code was a little clearer and matched some of the logic the emulator already had. Does anyone feel strongly one way or the other?
@TellowKrinkle - great idea. For the user shader outlined in the opening (the Prince of Persia red shader), I did what you requested. I introduced an error to get it to dump (removed a semicolon), but you should be able to follow it; let me know if you have any questions. Here is the user shader: Here is the output: specialized_shader.txt. If you want, I can get a lighting/texture example as well.
Force-pushed 86ad7bf to 6bbb6fe
I forgot that one of the original reasons I wanted to implement this was to support animation. Well, that's now possible: the above is a three-frame blink animation (hastily drawn by myself, sorry for the poor quality). Here's a naive way to implement an animated texture:
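A naive flip-book sketch along those lines (the time uniform's name, `data.time_ms`, the `BLINK*_TEX_UNIT` defines, and the layer-0 texture access are illustrative rather than the exact API):

```glsl
vec4 custom_main( in CustomShaderData data )
{
  // Cycle through three frames, roughly 150 ms each (time value assumed to be
  // in milliseconds; the uniform name is a placeholder).
  uint frame = (uint(data.time_ms) / 150u) % 3u;
  vec3 uvw = vec3(data.texcoord[0].xy, 0.0);  // layer 0 of the texture array assumed
  if (frame == 0u)
    return texture(samp[BLINK0_TEX_UNIT], uvw);
  if (frame == 1u)
    return texture(samp[BLINK1_TEX_UNIT], uvw);
  return texture(samp[BLINK2_TEX_UNIT], uvw);
}
```

Otherwise, I cleaned up the code a bit more in the CustomShaderAction.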
Force-pushed 6bbb6fe to 79587bf
Minor cleanup, and fixed a possible crash (null pixel shader in the pipeline when the shader fails to compile).
Force-pushed 79587bf to 4c86d1b
How do I reproduce the Vulkan validation error? You don't do self-copies, do you? (Copying from one layer of an image to another, for example.)

Also, is it at all possible to maybe split this PR into smaller ones? It's practically impossible to review.

EDIT: I think the validation error is related to the fact that Dolphin tracks the texture layout but can optionally put the resulting barriers into either the main command buffer or the init command buffer, which is executed before the main one. So if you call … This patch should fix it. Alternatively, you could create a new texture.
Force-pushed 4c86d1b to 7b410bd
OK, I did a pass over all of the shader-related code (other than UberShaderPixel which I didn't look at in depth - I assume it's basically the same as PixelShaderGen).
One thing I recommend you do is watch a few videos by Jasper, which discuss some more interesting rendering techniques used by Super Mario Galaxy and Wind Waker:
There are also two longer follow-up videos which discuss some other aspects of rendering:
- A Deeper Dive into Wind Waker's Lighting - a Conversation with ScruffyMusic, part 1
- The Charm Behind Super Mario Galaxy 2 - a Conversation with ScruffyMusic, part 2
Although these aren't really guides to doing GameCube graphics, they do give examples of more niche situations you might run into (and scenes that are worth experimenting with in games). On the other hand, it probably isn't as important to try to replace rendering of scenes like these if they already look good.
```cpp
for (u32 i = 0; i < uid_data->genMode_numindstages; ++i)
{
  if ((uid_data->nIndirectStagesUsed & (1U << i)) != 0)
  {
    u32 texcoord = uid_data->GetTevindirefCoord(i);

    // Quirk: when the tex coord is not less than the number of tex gens (i.e. the tex coord
    // does not exist), then tex coord 0 is used (though sometimes glitchy effects happen on
    // console). This affects the Mario portrait in Luigi's Mansion, where the developers forgot
    // to set the number of tex gens to 2 (bug 11462).
    if (texcoord >= uid_data->genMode_numtexgens)
      texcoord = 0;

    out->Write("\t{{\n");
    out->Write("\t\tint2 fixpoint_uv = int2(custom_data.texcoord[{}].xy", texcoord);
    out->Write(" * float2(" I_TEXDIMS "[{}].zw * 128));\n", texcoord);
    out->Write("\t\tint2 tempcoord = fixpoint_uv >> " I_INDTEXSCALE "[{}].{};\n", i / 2,
               (i & 1) ? "zw" : "xy");
    out->Write("\t\tcustom_data.texcoord[{0}] = float3(tempcoord / float2(" I_TEXDIMS
               "[{0}].zw * 128), 0);\n",
               texcoord);
    out->Write("\t}}\n");
  }
}
```
I'm not sure if this is something that makes sense to do. It looks like your goal is to apply the indirect texture scale, but you also need to convert into and then out of fixed point for that. But, more annoyingly, I don't think there's any rule that forces each indirect stage to use a distinct texture coordinate; I'm pretty sure you can use the same texture coordinate for multiple indirect stages (e.g. both of them use texture coordinate 0 for different textures). In particular I'm pretty sure that applies to the Luigi's Mansion portrait (albeit by accident).
It might be better to instead have indtexcoord as a field in custom_data, and store it as something like:

```
custom_data.indtexcoord[ind_num] = custom_data.texcoord[ind_iref] / float2(1 << cindscale[ind_num/2].xy_or_zw)
```

(i.e. ignore the fixed-point stuff, and replace `x >> y` with `x / (1 << y)`)
I don't know if that would be enough to replicate indirect texture behavior (I don't think you currently expose the actual indirect texture lookup, or any of the ways it modifies the texture coordinates (both from the texture and from wrapping and the matrix)), though. Maybe it'd be better to just get rid of this entirely for now.
Yeah. Essentially the goal is to get the texture samplers and the texture coordinates used by the game for a given draw call. At the moment draw-call = texture, which of course isn't really right (ex: multiple textures defined for a draw) but it's hard for users to point at a draw call without some sort of editor.
The goal of this block was to grab the texture coordinates, regardless of whether it is an indirect or direct call. This normalizes the data, so the user doesn't have to care (they probably don't).
I'm pretty sure you can use the same texture coordinate for multiple indirect stages (e.g. both of them use texture coordinate 0 for different textures)
Yes, this is fine. Just want the user to say for this texture, give me the sampler and texture coordinates.
Maybe it'd be better to just get rid of this entirely for now.
I'm fine doing that, but I'm still not clear on what the long-term solution would be. If draw-call = texture is the issue, I can certainly fix that in the future. I vaguely recall that before I had this, lots of textures just weren't mapping right.
(Will address this more fully later - I don't entirely understand what you mean by draw-call = texture, and will need to check more to understand better.)
Yes, this is fine. Just want the user to say for this texture, give me the sampler and texture coordinates.
It's fine in the case of #11300 (comment), but not here where the same texture coordinate will get scaled multiple times.
The goal of this block was to grab the texture coordinates, regardless of whether it is an indirect or direct call. This normalizes the data, so the user doesn't have to care (they probably don't).
Explanation attempt 1
This isn't something that makes sense to do. Based on libogc's GX_SetIndTexCoordScale, the main use of the indirect scale is so that you can use the same texture coordinate for both a regular texture and an indirect texture, but have different sizes for the two.
For each texture coordinate, bpmem.texcoords[i].s.scale_minus_1 gives a scale on the horizontal axis, and t on the vertical axis, which are generally set to the size of the corresponding texture in texels (libogc's GX_SetTexCoordScaleManually says that this happens automatically in most cases, but can be overridden if needed). So for a 256 by 256 texture, you would supply input coordinates between (0, 0) and (1, 1), and then these would be scaled to (0, 0) to (256, 256) when actually sampling the texture. GLSL's texture function doesn't expect this multiplication by the texture size, so we have to divide it by the texture's actual size ahead of time; the texture's actual size is stored in I_TEXDIMS[i].xy while this scale is I_TEXDIMS[i].zw. (Manual texture sampling uses texelFetch which does require multiplication by the texture size, and also manual wrapping; there's an extra complication here about the texture size the game uses being stored in I_TEXDIMS[i].xy, but custom textures possibly having a different size that needs to be used instead, which involves querying the size in GLSL.)
... OK, that's a bit of a confusing explanation. I'm going to try it in a different way:
Games usually specify texture coordinates as UV coordinates, where (0, 0) is the top-left of the texture and (1, 1) is the bottom-right. This includes the GameCube/Wii; modern graphics APIs have functions such as texture where you can just provide UVs and everything works. This is probably the coordinate system you want to expose for shaders.
But for the GameCube/Wii, sampling textures is done with ST coordinates. For instance, for a 256 by 256 texture, (0, 0) is the top-left and (255, 255) is the bottom-right (... ish, there may be an off-by-one here). I've seen this UV versus ST distinction in graphics textbooks, but Dolphin doesn't really use it currently. There's a scale factor that UV coordinates get multiplied by to convert to ST coordinates. That said, this is mostly hidden from the GX API: unless GX_SetTexCoordScaleManually is used, the scale (in bpmem.texcoords[texcoord].s.scale_minus_1 for s, and a similar value for t, and in shaders I_TEXDIMS[texcoord].zw) will match the one for the texture being used (in AllTexUnits.GetUnit(texmap).texImage0 or I_TEXDIMS[texmap].xy - note the difference in indexing here).
This does pose an issue when using the same texture coordinate for multiple textures, if those textures are of different sizes (for instance one is 256 by 256 and the second is 128 by 128); the scale will only match one of those. If the scale factor is 256, then the 128 by 128 texture will be drawn at half-scale horizontally (as in it'll be drawn twice for every time the bigger texture is drawn, or rather 4 times since this applies on both axes). I don't think there's any mechanism to deal with this for most textures that end up getting drawn to the screen, but there is a mechanism for doing this for indirect textures: GX_SetTexCoordScaleManually, which modifies bpmem.texscale[indstage/2]. This lets you scale down the texture coordinates when sampling a texture for an indirect texture operation (i.e. when you're reading a texture so that you can then add the values read from that texture to another texture coordinate, possibly transformed by a matrix). So, if you have a 256 by 256 main texture, and also a 128 by 128 texture you want to use as an indirect texture, you can set bpmem.texscale[0].ss0 and ts0 to 1, and thus the ST coordinates will be at appropriate scales for both textures.
The key issue here is that this kind of scaling is meaningless for UV coordinates, where the texture's size isn't a factor at all. I'd expect most custom shaders to use texture (and not e.g. texelFetch), so applying the scaling wouldn't make sense. Furthermore, even if ST coordinates were used, it would be more useful to provide ST coordinates on the main texture's scale, since that's what ends up on the screen eventually. (There's also a secondary issue of multiple indirect stages where all of the stages use the same texture coord; if you have two 128 by 128 indirect textures, the current code would divide s and t by 4, instead of 2.)
This is a lot of text, but hopefully it explains the point of all of these well enough.
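To make the two spaces concrete, here's a tiny GLSL-style sketch using the I_TEXDIMS layout above (zw = the game's texcoord scale, xy = the texture's real size); everything else is placeholder naming:

```glsl
// uv: normalized coordinates in [0, 1] x [0, 1], as a modern API expects
vec2 st = uv * I_TEXDIMS[texcoord].zw;   // unnormalized ST, what the GX pipeline works in
vec2 norm = st / I_TEXDIMS[texmap].xy;   // renormalized for GLSL's texture()
vec4 color = texture(samp[texmap], vec3(norm, 0.0));  // layer 0 assumed
```

Note the texcoord vs texmap indexing difference, matching the scale/size distinction described above.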
I just want to say thank you so much for taking the time to write this in-depth explanation. I've read it a couple times and your reasoning makes a lot of sense now. I had heard of ST coordinates but always assumed they were the same as UV coordinates. Maybe this distinction is special to the Cube/Wii?
Regardless, I see now why I don't need the indirect texture uv transformation and only need to capture those texture map ids.
Thanks again for the explanation. I have it all sorted now I think!
I tried to dig up more information regarding ST versus UV and can't find anything concrete about the differences between them (either online or in various GC patents or in my graphics textbook), just that both are used by convention for textures. It would be convenient if there was a commonly used set of terminology for the difference between ranging between 0 and 1 versus 0 and width, but it doesn't seem like there actually is one. This doesn't change the actual GameCube/Wii behavior though (which should be as I described above); I'm just clarifying in case you see the terms elsewhere.
The original difference was that UV were surface coordinates (the range is whatever you want), and ST are texture coordinates (texture matrices applied, wrap mode is applied to fit within 0-1, sometimes the coordinates are unnormalized), but this hasn't been true in practice for many many years as hardware advancements happened and as shaders took off.
UV is now the standard, ST died out as it isn't a meaningful unit unless you make GPUs.
This does pose an issue when using the same texture coordinate for multiple textures, if those textures are of different sizes (for instance one is 256 by 256 and the second is 128 by 128); the scale will only match one of those. If the scale factor is 256, then the 128 by 128 texture will be drawn at half-scale horizontally (as in it'll be drawn twice for every time the bigger texture is drawn, or rather 4 times since this applies on both axes).
The standard way to handle this is to just eat up multiple texgens from the same texture coordinate.
As mentioned, this is secretly handled for you with the GX API -- right before a display list or a draw begins, it sets the texcoord scale registers to be the size of the texture. The GX's per-pixel pipeline is all unnormalized, the per-vertex XF unit generates unnormalized coordinates as one of the last things it does.
This means that indirect texture offsets are unnormalized, too. If I read +128 from an indirect texture, that's going to move +128 pixels in the final unnormalized coordinate no matter the original size. But the IndTexMtx can be used to adjust for this.
I don't think there's any mechanism to deal with this for most textures that end up getting drawn to the screen, but there is a mechanism for doing this for indirect textures: GX_SetTexCoordScaleManually, which modifies bpmem.texscale[indstage/2]
I think you meant to link GXSetIndTexCoordScale here?
Thanks for the info on UV versus ST.
It also sounds like the distinction I wanted was normalized versus unnormalized, where normalized is for ranges of 0 to 1 while unnormalized is for ranges of 0 to width/height. (... or at least can be, I guess, and you'd also have to consider wrapping at some point, but eh.)
I think you meant to link GXSetIndTexCoordScale here?
Yes, I meant to copy the link to GX_SetIndTexCoordScale from my first abandoned explanation, but I must not have been paying attention and copied the wrong one.
This means that indirect texture offsets are unnormalized, too. If I read +128 from an indirect texture, that's going to move +128 pixels in the final unnormalized coordinate no matter the original size. But the IndTexMtx can be used to adjust for this.
There's also an indtexbias parameter to GX_SetTevIndirect (which ends up in bpmem.tevind[stage].bias), which will subtract 128 from it in most cases, prior to the matrix being applied. (The exception is when the indirect format is not ITF_8, which is used in the Skyward Sword map (see #9876 and my writeup.)
The standard way to handle this is to just eat up multiple texgens from the same texture coordinate.
Yeah, that's what I've generally seen. If a game does this, then there's no special handling needed for graphics mods; an array listing the texture coordinate corresponding to each texture will work fine. (PixelShaderGen.cpp gets those generated texture coordinates as an input, so even if the model itself has only one texture coordinate or if the texture coordinates are generated from positions, at this point everything will be fine.)
The GX's per-pixel pipeline is all unnormalized, the per-vertex XF unit generates unnormalized coordinates as one of the last things it does.
It's worth noting that Dolphin does this multiplication by texture size in the pixel shader. This is probably suboptimal since the texture size won't change, but it does mean that (apart from when per-pixel lighting is enabled) everything in the vertex shader is based on xfmem and everything in the pixel shader is based on bpmem.
YAGCD says that the relevant variables are SU_SSIZE0 and SU_TSIZE0, which I think refers to the setup unit that sits between the transform unit and the TEV (I think it's responsible for making triangles, although the rasterizer is also listed separately).
Handling both the multiplication by the scale in the vertex shader would be nice, but we'd still need to divide by the texture size for GLSL's texture and similar to work properly, both for Dolphin's use and for graphics mods. (And that division couldn't be done in the vertex shader if we want fixed-point math and indirect textures to work).
Force-pushed 5cf8124 to c2d2482
Force-pushed ef1c303 to 844e6f5
@Pokechu22 - thank you again for the review. I've addressed the points raised. I hope you'll find some value in this feature for your debugging in the future; at least I'd imagine it might be useful for visualizing certain scenarios. Thanks again!
Force-pushed 844e6f5 to 91c8cc9
Nothing major in this last push; I just noticed a lighting shader compile error (caused by a refactor request) while testing a separate feature. Still ready for re-review.
Apart from a minor documentation issue, this looks good to me now. I haven't done any kind of testing though.
…der to shadergen common as it will be used by both the special and uber shader variant of pixel shaders
…g code for a custom pixel shader
…shaders in graphics mods
…ixel shaders in graphics mods
… custom pixel shaders by replacing the existing pipeline with a modified one
…a uniform to be able to support animation effects in custom shaders
…d pass it to graphics actions
Force-pushed 91c8cc9 to 5506121
To @Pokechu22 and the rest of the reviewers who gave their time to this feature, I just want to say thank you!






Graphics mods are pretty basic right now: disable bloom or stretch a HUD element. Not very exciting! But graphics mods were meant to be so much more.

With this feature, Dolphin lets users directly modify the way games draw, by exposing hook points where they can provide their own pixel shaders.
A Basic Example
If you wanted to turn a character red, you'd provide a graphics mod that targets that character's texture. For example, take Prince of Persia: The Forgotten Sands' main character texture; here's how we might target it with our new graphics mod:
with the color shader metadata (`color.shader.json`) looking like:

the material json (`material.json`) looking like:

And finally the actual shader `color.glsl` that the user writes:
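Something along these lines (the `custom_main` entry point and `CustomShaderData` parameter follow the interface described in the user documentation; treat the exact signature as illustrative):

```glsl
vec4 custom_main( in CustomShaderData data )
{
  // Ignore the game's computed color entirely and output solid red.
  return vec4(1.0, 0.0, 0.0, 1.0);
}
```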
What's going on here? Well, Dolphin is taking this piece of code and returning the color red. Astute readers might notice the `CustomShaderData`; more on that later. If you're curious, here's the result:
Game lighting
While it's up to the shader creator to decide what to do, the most obvious use case is to modify the game's lighting. Let's talk about that!
Games provide two things for lighting: normals (a direction describing the slope of the surface) and lights. In Dolphin, when Per-Pixel Lighting is turned on, Dolphin provides the normal to its pixel shader. This normal is then passed to the user through the `CustomShaderData`. So if you wanted to display a game's normals, you'd simply do:
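A sketch of such a shader (`data.normal` is an assumed name for the field carrying the per-pixel normal):

```glsl
vec4 custom_main( in CustomShaderData data )
{
  // Remap the normal from [-1, 1] into displayable [0, 1] colors.
  return vec4(data.normal * 0.5 + 0.5, 1.0);
}
```

Applied to a handful of Fragile Dreams: Farewell Ruins to the Moon's textures, you'd see: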
Normals are wonderful, but we really want to do something with lighting. In order to do that we need lights. Dolphin provides all the scene's lights in the `CustomShaderData`, including their type, color, and position. You also get the pixel's position, meaning with a little math you have all you need to compute some lighting.
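For instance, a rough diffuse-lighting sketch, treating every light as a point light for simplicity (the field names - `light_count`, `lights`, `position`, `normal` - are illustrative, not the exact API):

```glsl
vec4 custom_main( in CustomShaderData data )
{
  vec3 result = vec3(0.0);
  for (int i = 0; i < data.light_count; i++)
  {
    // Direction from this pixel to the light, then a simple Lambert term.
    vec3 to_light = normalize(data.lights[i].position.xyz - data.position.xyz);
    float n_dot_l = max(dot(data.normal, to_light), 0.0);
    result += data.lights[i].color.rgb * n_dot_l;
  }
  return vec4(result, 1.0);
}
```

Viewing lighting by itself might look like this (Little King's Story):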
Game Quirks
Alas. Games don't have to give you normals. Arc Rise Fantasia fails to provide any:
Some games give you normals but the view direction can modify them. The Last Story is an example of that:
Some games give you normals but don't actually provide lights on those objects. An example is Rune Factory: Tides of Destiny, which provides normals on all objects but only provides lighting on the characters. In those situations, you have the option to bake the lighting directly into the shader.
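For instance, a baked-lighting sketch, where the light direction and color are values the mod author picks rather than anything from the game (`data.normal` and `data.final_color` are illustrative field names):

```glsl
vec4 custom_main( in CustomShaderData data )
{
  // A fixed, mod-author-chosen light; nothing here comes from the game.
  const vec3 light_dir = normalize(vec3(0.3, 1.0, 0.2));
  const vec3 light_color = vec3(1.0, 0.95, 0.8);
  float n_dot_l = max(dot(data.normal, light_dir), 0.0);
  return vec4(data.final_color.rgb * light_color * n_dot_l, data.final_color.a);
}
```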
I've only tested 20-30 games at the moment, but a majority of them have lighting/normals! Users will have to test their games to see what they can do with them.
Leveraging Textures
We have the game's normals and we may have lights, but the game's data is rather limited. What we really want is to provide textures to our shaders to give them more information. Luckily, we can!

Adding a texture to a shader requires us to use assets. You might have noticed this in the Prince of Persia example above: a material, a shader, and (that's right) even textures are all assets!

To leverage a texture, we need to define it as an asset, reference it from the material, and declare a sampler for it in the shader metadata.

Whew!
Just showing the `assets` and `features` sections, it might look like this:

where `output_normal.material.json` would look like:

and `output_normal.shader.json` would look like:

Finally, with all that in place, `output_normal.glsl` might look like the sketch below. This shader will either display the draw with the normal data (instead of the game's computed color) or will use the normal texture if available.
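A sketch of that shader (the `#ifdef` availability check, the `samp` array access, and `data.normal` are assumptions; `NORMAL_TEX_UNIT` / `NORMAL_TEX_COORD` are the generated defines discussed below):

```glsl
vec4 custom_main( in CustomShaderData data )
{
#ifdef NORMAL_TEX_UNIT
  // A normal-map asset was provided: sample it (layer 0 of the array assumed).
  return texture(samp[NORMAL_TEX_UNIT], vec3(data.texcoord[NORMAL_TEX_COORD].xy, 0.0));
#else
  // No texture available: visualize the game's own normal data instead.
  return vec4(data.normal * 0.5 + 0.5, 1.0);
#endif
}
```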
Where do `NORMAL_TEX_UNIT` and `NORMAL_TEX_COORD` come from? Well, in the shader metadata you specified an input sampler of `NORMAL_TEX`, and Dolphin automatically generates `NORMAL_TEX_UNIT` and `NORMAL_TEX_COORD` for you.

You aren't restricted to one texture, however. You can provide as many as you want following the same process. This allows you to do interesting features like physically based rendering (PBR). Here's Mobile Suit Gundam MS Sensen 0079 with a Metallic/Roughness/Normal map:
Parallax occlusion is also possible.
Default
With parallax occlusion on (and metal details):
A time value is exposed, so basic animation is possible:
This all might be quite complicated, and it is definitely very technical at the moment. Creating the boilerplate data for assets, materials, and shaders is tedious. I've written some scripts to make this simpler, but all this complexity is expected to be alleviated by some sort of editor. Even leaving the assets aside, it will take time for the community to get up to speed writing shaders that take advantage of this.
Future work
In the future we can support even more features:
Go out and make some pretty games!
Turn this:

[screenshot]

Into this:

[screenshot]