Support Multipass shaders (buffer A ...) #30
base: main
Conversation
Little update:
New breaking examples found that might be unrelated to this PR, but I will note them down for later reference:
I think I finally fixed the compatibility issue. There are some small visual issues that look like precision problems to me (not sure yet). And the performance seems horrible... Please let me know if you find any shaders that are broken (not due to missing features or wgpu bugs). Will work on tests, examples and documentation to hopefully get this ready for next week. Edit: found this one that seems broken:
I think this is finally ready for review - and I welcome some feedback.
Cool stuff!
part of #4
Approximately 17.5% of public Shadertoys are multipass. Multipass allows up to four buffers (A through D) to be rendered as textures. These can also be used to store data, enabling many more kinds of experiences.
Some of the challenges include timing as well as cross-buffer inputs.
Buffer passes can seemingly take the exact same inputs as the main "Image" renderpass, including other buffers (and themselves?).
This PR is starting to bloat a little and contains some refactoring of the whole channel input concept... still in flux.
Instead, I will try to implement BufferTexture as a ShadertoyChannel subclass so it can hold, for example, the sampler settings. Additionally, there will likely be a RenderPass base class with subclasses for Image, Buffer (A-D) and later Cube and Sound.
So the main Shadertoy class contains several render passes, and all of these get their inputs (channels) attached.
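A rough sketch of what that hierarchy could look like. The names ShadertoyChannel, BufferTexture, RenderPass and Shadertoy come from this discussion, but every attribute and signature below is an assumption for illustration, not the actual API:

```python
# Hypothetical class hierarchy sketch; all signatures are assumptions.

class ShadertoyChannel:
    """Base class for anything bindable as a channel input."""
    def __init__(self, *, filter="linear", wrap="repeat"):
        self.filter = filter  # sampler filtering mode (assumed default)
        self.wrap = wrap      # sampler wrap mode (assumed default)

class BufferTexture(ShadertoyChannel):
    """A buffer pass's output texture, usable as a channel input."""
    def __init__(self, buffer_id, **sampler_kwargs):
        super().__init__(**sampler_kwargs)
        self.buffer_id = buffer_id  # "a" through "d"

class RenderPass:
    """Base class for a single render pass with channel inputs attached."""
    def __init__(self, shader_code, inputs=()):
        self.shader_code = shader_code
        self.inputs = list(inputs)  # ShadertoyChannel instances (up to 4)

class BufferRenderPass(RenderPass):
    """One of the Buffer A-D passes, rendering to a texture."""
    def __init__(self, buffer_id, shader_code, inputs=()):
        super().__init__(shader_code, inputs)
        self.buffer_id = buffer_id

class ImageRenderPass(RenderPass):
    """The final 'Image' pass that draws to the canvas."""
```

This keeps sampler settings on the channel object itself, so a BufferTexture can carry its own filter/wrap configuration wherever it is attached.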
I even started to try and sketch it out, but will have to sleep on this for a few more days... my concepts change every day, but I need to just try and work on the ideas for a bit.
The render order should be Buffer A through D and then Image. So you can keep temporal data by using a buffer as its own input.
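The frame loop described above could be sketched as follows. The fixed pass order (A-D, then Image) is from this discussion; the double-buffered texture swap, which lets a buffer sample its own previous output, is my assumption about how the self-input could work, and `draw` is a hypothetical callback standing in for the actual GPU dispatch:

```python
# Hypothetical frame loop sketch: buffer passes A-D first, Image last.
# `textures` maps buffer id -> (front, back) texture pair; inputs sample
# the front (previous frame) texture while the pass writes to the back,
# then the pair is swapped so next frame's reads see the new data.

def render_frame(buffer_ids, textures, draw):
    for bid in buffer_ids:              # fixed order: "a", "b", "c", "d"
        front, back = textures[bid]
        draw(bid, target=back)          # write this frame's buffer output
        textures[bid] = (back, front)   # swap: enables temporal feedback
    draw("image", target="canvas")      # the Image pass always runs last
```

Swapping per pass (rather than once per frame) would also mean Buffer B already sees Buffer A's output from the current frame, matching the A-through-D ordering.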
TODOs:
additional test cases for inferred input types, empty channels (caching conflict with pytest)
test coverage for examples in readme! (different PR)
(maybe) some debug mode where you can render the buffers to canvas? (you can use RenderDoc with "capture child processes")