Add proper and complete support for multi-pass shaders #496

Open
samdze opened this issue Feb 20, 2020 · 29 comments

@samdze

samdze commented Feb 20, 2020

Describe the project you are working on:
A top-down 2D RPG with heavy use of visual effects, post-processing, and custom depth tests.

Describe the problem or limitation you are having in your project:
I'll often have to create shaders that read from a certain buffer or texture and then write to it, or vice versa.
This is not possible in a single-pass shader, as graphics hardware does not support reading from and writing to the same buffer within one pass.
So a multi-pass shader is needed.
Multi-pass material rendering is also currently not supported for canvas_item shaders, which is another related limitation.

This is, however, a proposal for multi-pass shaders. Here's an example of how they are defined in Unity (just for reference):

Shader "ShaderName"
{
    Properties
    {
        // define shader properties
        _TextureName("Inspector Description", Type) = "white" {}
        _FloatSlider("Float Slider Description", Range(0, 1)) = 0.5
    }
    SubShader
    {
        // Tags, Identifiers, etc...
        // first pass
        Pass
        {
            // Tags, Identifiers, etc...
            // define blend mode, ztest mode, zwrite mode etc.

            // define used properties/uniforms
            // define input and/or output structs (appdata, s2f, FragmentOutput...)
           
            v2f vert (appdata v)
            {
                // vertex function
            }
           
            FragmentOutput frag (v2f i)
            {
                // fragment function
            }
        }
 
        // this second pass is performed after the first one
        Pass
        {
            // Tags, Identifiers, etc...
            // define blend mode, ztest mode, zwrite mode etc.

            // define used properties/uniforms
            // define input and/or output structs (appdata, s2f, FragmentOutput...)
           
            v2f vert (appdata v)
            {
                // vertex function
            }
 
            FragmentOutput frag (v2f i)
            {
                // fragment function
            }
        }
    }
    // define optional fallback
}

The FragmentOutput struct can also contain render targets (even multiple ones, allowing MRT) and/or the depth buffer target, to do depth writing/checks in fragment shaders.

Describe the feature / enhancement and how it helps to overcome the problem or limitation:
Being able to define multi-pass shaders everywhere lets users do things they simply can't do right now: reading and writing to the same buffer can't be performed in a single pass, and multiple Viewports or post-processing passes can't access the rendered object's data.

At the optional performance cost of an additional pass, the user gets maximum flexibility.
I think the Vulkan rendering rewrite is the perfect opportunity to add this feature, together with #495.

Describe how your proposal will work, with code, pseudocode, mockups, and/or diagrams:
A simpler version of the Unity "ShaderLab" code above, closer in style to the Godot shading language, would be a good example of how multi-pass shaders could be exposed at the user level.

Unity also permits defining tags on shaders and on individual passes to better manage the rendering of specific objects and to gain more granular control over the pipeline (very useful in custom rendering pipelines).

If this enhancement will not be used often, can it be worked around with a few lines of script?:
No.

Is there a reason why this should be core and not an add-on in the asset library?:
Multi-pass shaders cannot be added with an add-on or asset library.

@reduz
Member

reduz commented Feb 20, 2020

You can already do this in Godot, but it's different than in Unity. In Godot, you just set the "next pass" property of a material and assign another material to it.
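
For reference, a minimal GDScript sketch of that workflow (the resource paths and the MeshInstance3D node are placeholders); the material assigned to next_pass is rendered in an additional pass after the first one:

var base_material: ShaderMaterial = load("res://materials/base.tres")
var second_pass: ShaderMaterial = load("res://materials/second_pass.tres")
base_material.next_pass = second_pass  # rendered after base_material
$MeshInstance3D.material_override = base_material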

@samdze
Author

samdze commented Feb 20, 2020

@reduz Multi-pass materials currently only work with spatial shaders (and I don't know why, since they would be useful for the other types too).
The editor doesn't let you add a next pass material to a ShaderMaterial if the assigned shader is not a spatial shader.
It does if the material is a CanvasItemMaterial, but it doesn't actually work:
godotengine/godot#17461
godotengine/godot#27726

Adding next pass support for canvas_item shaders etc. could mitigate the limitation for now.

However, the idea of multi-pass shaders is to group all the passes in a single shader definition, because it makes sense to have them in one place: it keeps the rendering flow clear, lets passes share parameters more easily, etc.

@MartinGrignard

This feature would be so useful! I struggle to make some shaders in one pass and often end up not making them at all... The Viewport alternative seems hacky.
The best example is a blurred drop shadow for a Control. For now, you can only add a hard drop shadow but not blur it, as blurring requires reusing the result of the hard-shadow step as input for the blur step.

@samdze
Author

samdze commented Feb 24, 2020

It must be said that multi-pass shaders serve less purpose if there's no MRT support and custom render targets cannot be freely defined.

@Calinou
Member

Calinou commented Mar 12, 2020

The best example is a blurred drop shadow for a Control. For now, you can only add a hard drop shadow but not blur it, as blurring requires reusing the result of the hard-shadow step as input for the blur step.

Note that StyleBoxFlat offers blurred drop shadows out of the box since Godot 3.1. These shadows can also be offset since 3.2.
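A quick sketch of that in GDScript (Godot 4 naming; the Panel node and the values are arbitrary):

var style := StyleBoxFlat.new()
style.bg_color = Color(0.15, 0.15, 0.2)
style.shadow_color = Color(0, 0, 0, 0.5)
style.shadow_size = 8  # larger values give a softer, blurrier shadow
style.shadow_offset = Vector2(4, 4)
$Panel.add_theme_stylebox_override("panel", style)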

@semickolon

This feature would be very useful. There are times when I need to render something both transparent and opaque (for screen-space effects). It has to be in two passes. Having to set the same parameters in two shaders feels a bit inconvenient. It would be nice to access the same uniforms for multiple passes.

I can say I've also come across @MartinGrignard's problem; using Viewports as passes does seem hacky to me. The same situation came up on my end: I needed a way to share some uniforms across multiple shaders. Syncing them through a script is one workaround, but something like Unity's approach would make things easier and more natural.

@Calinou
Member

Calinou commented Jun 23, 2020

I needed a way to share some uniforms across multiple shaders

Global shader uniforms will be usable for this purpose in 4.0.
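
As a rough sketch of how that could look (4.x API; the parameter name is a placeholder): each shader declares the uniform once, e.g. `global uniform float wind_strength;`, the parameter is registered under Project Settings > Shader Globals, and it can then be updated for all shaders at once from script:

RenderingServer.global_shader_parameter_set("wind_strength", 0.75)  # picked up by every shader declaring it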

@Reneator

Reneator commented Jun 23, 2020

This would be useful to me, as it would let me add multiple visual effects to the item icons in my RPG's inventory: an outline shader with a color and a glint shader on the same sprite (ideally not influencing each other, so the glint doesn't run over the outline, though that may just depend on the order of the shaders), without requiring any kind of workaround.

(Maybe I can already do this in a single shader by combining the code of both shaders, but that might be finicky and non-modular.)

@semickolon

@Calinou I see. Global uniforms would work for cases affecting the entire game. As for individual meshes, per-instance uniforms would solve my problem, but unfortunately they don't support textures, which I'd need to share across multiple passes.

@SomeDumName

SomeDumName commented Sep 19, 2021

That's a lot of syntax.
Here's a version with less syntax that I'd like to be considered.

// The render mode should be able to change between passes.
shader_type spatial;
// Perhaps the number of passes should be set in the project settings,
// so that objects don't need to worry about defining all the passes.
passes 5; // limit the number of passes to something reasonable
// If a pass doesn't exist in the shader, draw the default color.
pass 0;
// render_mode should be set per pass.
render_mode specular_schlick_ggx;
void vertex() {
    // anything goes
}
// Don't clamp the color for the pre-finale passes.
// Color components should be either int or floating point.
void fragment() {
    OUTPUT = color(0, 0, 0, 0);
}
// Maybe there should be a limited number of light passes?
void light() {
    // Output: 0
}
pass 1;
render_mode specular_toon;
void vertex() {
    // anything goes
}
// Don't clamp the color for the pre-finale passes.
void fragment() {
    OUTPUT = PREV[0];
    // Passes that happen before the current pass are readable at the current screen position,
    // or as a texture sampled at a custom UV:
    OUTPUT = texture(PREV[0], vec2(SCREEN_UV.x + 0.0, SCREEN_UV.y + 0.0));
    // Attempting to read a non-existent pass should be caught before anything bad happens.
}
// Maybe there should be a limited number of light passes?
void light() {
    // Output: 0
}
...and so on for the intermediate passes.

The last pass I call the finale pass.

pass 5; // the finale!
render_mode specular_schlick_ggx;
void vertex() {
    // anything goes
}
// Start clamping the color in this pass.
void fragment() {
    OUTPUT = color(0, 0, 0, 0) * 1.0 / PREV[0].x;
    // As a joke, I'm dividing by a pass that output 0.
}
void light() {
    // Output: 0
}

Under the hood, the renderer would draw all the objects for the first pass, then redraw all the objects for the next pass, and so on.
Perhaps a way of preventing objects from showing up in certain passes should also be considered.

@Calinou Calinou added this to the 4.x milestone Sep 19, 2021
@SomeDumName

My version of this concept doesn't handle objects that need separate shaders very well.
Instead of putting all the passes in one file, objects could pick and choose which passes they use for the final render.
Passes could be defined at the scene root or in the project settings.

@Reapetitive

For me, it would be enough if the next pass feature existed for canvas shaders and worked with CanvasItemMaterial. It would solve a lot of problems for me. :)

@markusneg

I think multi-pass shaders would greatly extend Godot's shading capabilities for both 2D and 3D. As far as I understand, they would (together with custom FBOs, #495) enable direct access to the results of previous shader passes and thus allow arbitrary levels of indirection in shader design and GPU usage in general.
Currently, Viewports can be used to emulate additional shader passes, but with severe limitations: the number of active Viewports (= simultaneously rendered objects) is very limited, and the setup (manual as well as script-based) is complicated.

My simple use case: in a 2D top-down pixel-art style, the lighting has to snap to the pixel raster, which has a different orientation for each object (forum post). For a prototype I am using Viewports in 3D (using Viewports in 2D is not possible due to the missing cull_mask). A second shader pass, by contrast, could implement that effect easily.

@Chaosus Chaosus assigned Chaosus and unassigned Chaosus Feb 23, 2022
@Maaack

Maaack commented Jun 5, 2022

This example https://paveldogreat.github.io/WebGL-Fluid-Simulation/ has motivated me to learn a lot about shaders and push some of their limits. It's also something that can only be achieved with multi-pass shaders. I've done what I could using multiple Viewports in my attempt https://maaack.itch.io/2d-fluid-simulator-in-godot, but as @markusneg mentioned, these are very limited. I'm only able to have about 16 passes run reliably and smoothly, where the recommended number for this simulation is 50-200.

@Masterpoda

Just throwing my support behind this. I currently have this fire simulation working, and it would be a huge benefit to computational efficiency and code simplicity if I could do some of these steps with successive passes instead of passing data around between Viewports.

@h0lley

h0lley commented Jul 3, 2022

I like how the Godot shading language is pretty much identical to GLSL in terms of syntax, which makes conversion trivial, and I'm not a fan of the Unity syntax presented in the opening post.

So I'd vote for either making the next pass property work with canvas item materials (which should probably be supported anyway) or going with a syntax akin to what SomeDumName proposed.

Edit: I just remembered, wasn't there something about being able to use includes in shaders now? That could be relevant for these multi-pass shaders; it would be really cool if they could be modular rather than copying and slightly adjusting code for all the various orderings of effects.
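
For what it's worth, the 4.x shader preprocessor does support an #include directive with .gdshaderinc files, so shared pass code could live in one place. A minimal sketch (the include file and the shared_effect() function are hypothetical):

var shader := Shader.new()
shader.code = """
shader_type canvas_item;
#include "res://common_pass.gdshaderinc"

void fragment() {
    COLOR = shared_effect(COLOR, UV); // shared_effect() would be defined in the include file
}
"""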

@samdze
Author

samdze commented Jul 3, 2022

The Unity ShaderLab code in the OP is just for reference; something similar to SomeDumName's syntax would be a much better fit for the Godot shading language.

@GithubPrankster

Very much in support of this one, and I agree with h0lley that the syntax for multi-pass should play to the Godot shading language's strengths. I'm also of the opinion that just adding next pass to canvas materials alone would help many people! I want to write complicated screen effects for my games, and this would lend me quite a hand.

@michaelbraae

Just leaving my support for this addition too; adding multi-pass support would make a huge difference to this workflow. The Viewport workaround seems really hacky and I'd like to avoid it as much as I can. Currently I'm using scripts to "juggle" shaders, which is also not ideal for my use case.

@FroggEater

This comment was marked as off-topic.

@Calinou
Member

Calinou commented Aug 10, 2022

@FroggEater Please don't bump issues without contributing significant new information. Use the 👍 reaction button on the first post instead.

@dsmtE

dsmtE commented Feb 2, 2023

Hello,
I'm trying to implement an "x-ray"/"occlusion" effect to reveal sprites when they are hidden behind others.

Multi-pass material rendering is missing for canvas_item shaders. Is there any workaround to make this work?

My problem is how to change the depth test so that my elements are only drawn when they are behind other elements
(a kind of ZTest Greater).
Here is what I want to achieve: https://forum.unity.com/threads/render-object-behind-others-with-ztest-greater-but-ignore-self.429493/#post-4211521

Thanks in advance.

@henriksod

henriksod commented Jul 29, 2023

Hi, here is a use-case where this would be really valuable as a feature in the engine:
https://www.youtube.com/watch?v=dzcFB_9xHtg&t=35s

This is for 3D rendering.

@marcelb

marcelb commented Sep 25, 2023

I also need this so badly. I'm creating a water shader with a smooth transition from above water to underwater. Right now it's very hard: since the water surface can have any shape, the transition always has a moment in which the underwater effect isn't applied correctly and the underwater world is visible without the shader.

It's a common problem, and I would need a way to correctly mark pixels as being above or below the water.

@Calinou

This comment was marked as outdated.

@Calinou Calinou closed this as completed Sep 26, 2023
@Calinou Calinou removed this from the 4.x milestone Sep 26, 2023
@markusneg

@Calinou, I don't think #7870 can replace this. #7870 is about defining multiple material passes in one file instead of having multiple shader files chained by the next_pass property; the author of this proposal refers to that as "multi-pass materials" in a side note. As far as I understand, this proposal is about multi-pass shaders in the sense that subsequent shader steps can access the results of previous steps. AFAIK, the only way to do this in current Godot is (chained) Viewports, which is a hard limitation due to the computational (and fiddling) overhead.

@Calinou Calinou reopened this Sep 28, 2023
@Calinou Calinou removed the archived label Sep 28, 2023
@andy-noisyduck

It would go some way towards solving it, though. With a combination of multi-pass steps (from the proposal) and shader includes, you could emulate that reasonably well. Whilst that's still not the friendliest of solutions, it's still miles better than the Viewport stacks we have to build currently.

@markusneg

It would go some way towards solving it, though. With a combination of multi-pass steps (from the proposal) and shader includes, you could emulate that reasonably well. Whilst that's still not the friendliest of solutions, it's still miles better than the Viewport stacks we have to build currently.

Sorry, I don't understand how the current multi-pass approach, even when using only one shader file instead of the next_pass linked list, can replace the Viewport approach. Accessing intermediate results from previous passes at the shader level, instead of just forward-rendering / blending into the render target in each pass, seems like it would require modifications to the render pipeline itself.

@IAmTraffic

Sorry, I don't understand how the current multi-pass approach, even when using only one shader file instead of the next_pass linked list, can replace the Viewport approach. Accessing intermediate results from previous passes at the shader level, instead of just forward-rendering / blending into the render target in each pass, seems like it would require modifications to the render pipeline itself.

I believe that there are some situations where, under the current Godot system, you do need to use the Viewport approach. For example, if you want to run multiple convolutional steps, you will need to have access to the results of previous passes on neighboring pixels, so you can't just run the whole thing in one shader pass.
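
For anyone who needs that today, a rough GDScript sketch of the ping-pong Viewport workaround (node names, the shader, and the "previous_pass" uniform are placeholders); each SubViewport contains a full-rect ColorRect whose ShaderMaterial reads the other viewport's texture:

@onready var viewport_a: SubViewport = $SubViewportA
@onready var viewport_b: SubViewport = $SubViewportB

func run_passes(iterations: int) -> void:
    var src := viewport_a
    var dst := viewport_b
    for i in iterations:
        var mat := (dst.get_node("ColorRect") as ColorRect).material as ShaderMaterial
        mat.set_shader_parameter("previous_pass", src.get_texture())
        dst.render_target_update_mode = SubViewport.UPDATE_ONCE
        await RenderingServer.frame_post_draw  # let this pass render before starting the next
        var tmp := src
        src = dst
        dst = tmp

Each iteration is effectively one extra "pass", which is exactly the kind of bookkeeping a native multi-pass shader would remove.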
