Feature request: Easy way to use shaders to process images #82
Yep, render passes are designed to provide the full power of wgpu. I can provide some extra context for your questions below, but I might not have the best answers:
|
Thanks for the feedback! I'm prototyping this with just wgpu right now. Once I get it working, I'll start adding it to a fork of pixels. I believe I've figured out how to get compute shaders to work, so we can use those. That should be simpler (for the user) and faster than render passes. I'm going ahead with making them work in a chain; agreed that a dependency system is too complicated for now. The order things should end up running in is:
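The chaining described above can be sketched without any GPU code. This is a minimal, hypothetical model (plain Rust, no wgpu; `run_chain` and the string "textures" are illustrative stand-ins): each pass consumes the previous pass's output and produces a new one, and passes run strictly in insertion order.

```rust
// Hypothetical sketch of pass chaining: each pass reads the previous
// pass's output and writes a new one. "Textures" are just Strings here.
fn run_chain(initial: String, passes: &[fn(&str) -> String]) -> String {
    let mut current = initial;
    for pass in passes {
        // Each pass consumes the previous output and produces a new "texture".
        current = pass(&current);
    }
    current
}

fn main() {
    let passes: [fn(&str) -> String; 2] = [
        |input| format!("{input}->blur"),
        |input| format!("{input}->sharpen"),
    ];
    // Passes run in insertion order, forming the chain described above.
    println!("{}", run_chain("frame".to_string(), &passes)); // frame->blur->sharpen
}
```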
|
A simple way to get more performance on systems with multiple GPUs is to request the faster one instead of the default:

```rust
use pixels::wgpu::{PowerPreference, RequestAdapterOptions};

builder.request_adapter_options(RequestAdapterOptions {
    power_preference: PowerPreference::HighPerformance,
    compatible_surface: None,
});
```
|
Here's the API I've come up with (somewhat pseudo-code) to add a simpler render pass method and support chaining of render passes. Note that it's a breaking change to the RenderPass API. What do you think?

If there are no extra render passes, DefaultRenderer renders directly to the screen. Otherwise each render pass is set up to write to a texture, forming a chain, and then a final CopyPass runs at the end to copy the last texture to the screen (needed because compute passes cannot write directly to the swapchain AFAIK, or at least not the way I'm doing it with simple render passes).

```rust
struct DefaultRenderer {
    output_texture: Option<TextureView>,
}

struct RenderPass {
    pixels_default_renderer_output_texture: TextureView,
    previous_pass_texture: TextureView,
    output_texture: TextureView,
}

struct CopyPass {
    previous_pass_texture: TextureView,
}

fn create_pixels(simple_render_passes: Vec<SimpleRenderPass>, render_passes: Vec<RenderPass>) {
    let has_no_additional_render_passes =
        simple_render_passes.is_empty() && render_passes.is_empty();

    // Added to Pixels: only allocate an intermediate texture when there
    // are passes that need to read from it.
    let default_renderer = DefaultRenderer {
        output_texture: if has_no_additional_render_passes {
            None
        } else {
            Some(TextureView::new())
        },
    };

    let pixels_default_renderer_output_texture =
        default_renderer.output_texture.clone().unwrap();
    let mut previous_pass_texture = pixels_default_renderer_output_texture.clone();

    // Pseudo-code: simple render passes run first, then full render passes.
    for render_pass in simple_render_passes.into_iter().chain(render_passes) {
        let output_texture = TextureView::new();

        // Added to Pixels
        let render_pass = RenderPass {
            pixels_default_renderer_output_texture: pixels_default_renderer_output_texture.clone(),
            previous_pass_texture,
            output_texture: output_texture.clone(),
        };
        previous_pass_texture = output_texture;
    }

    if !has_no_additional_render_passes {
        // Added to Pixels
        let copy_pass = Some(CopyPass {
            previous_pass_texture,
        });
    }
}

fn render() {
    let swapchain_texture = get_swapchain_texture();
    match self.default_renderer.output_texture {
        None => self.default_renderer.render(swapchain_texture),
        Some(output_texture) => {
            self.default_renderer.render(output_texture);
            for render_pass in self.render_passes {
                render_pass.render();
            }
            self.copy_pass.unwrap().render(swapchain_texture);
        }
    };
}
```
|
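To make the texture wiring in the pseudo-code above concrete, here's a small runnable model (plain Rust, hypothetical names, no wgpu): texture IDs stand in for `TextureView`s, the default renderer writes texture 0, each pass reads the previous output and writes a fresh texture, and the copy pass reads whatever the last pass wrote.

```rust
// Model textures as integer IDs to check the chain wiring.
#[derive(Clone, Copy, PartialEq, Debug)]
struct TextureId(u32);

struct PassDesc {
    input: TextureId,
    output: TextureId,
}

// Build the chain: the default renderer writes TextureId(0), each pass
// reads the previous output and writes a fresh texture, and the copy
// pass's source is the last pass's output (or TextureId(0) if there
// are no passes at all).
fn build_chain(pass_count: u32) -> (Vec<PassDesc>, TextureId) {
    let mut next_id = 1; // 0 is the default renderer's output
    let mut previous = TextureId(0);
    let mut passes = Vec::new();
    for _ in 0..pass_count {
        let output = TextureId(next_id);
        next_id += 1;
        passes.push(PassDesc { input: previous, output });
        previous = output;
    }
    (passes, previous)
}

fn main() {
    let (passes, copy_source) = build_chain(3);
    for p in &passes {
        println!("pass reads {:?}, writes {:?}", p.input, p.output);
    }
    println!("copy pass reads {:?}", copy_source);
}
```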
I was originally thinking this would be more of a way to just specify a compute shader built off of a template that pixels would provide, and have pixels take care of the rest of the setup, like creating textures and copying to/from them. I don't think that's very realistic anymore, and the new render API is much better, so I'm fine with closing this now.
|
Say you wanted to add a blur effect to your game. This is easy enough to do: Before you submit your frame to pixels for rendering, calculate the blur, and modify the frame. You have direct access to an array of pixels representing your image, and it's simple.
But wait, it's really slow! You could try adding rayon, but indexing an array in parallel is somewhat tricky, and it's still not quite the speed you wanted. Plus, you're operating on the original image, and not the scaled version that pixels generates, so you will end up with a lower quality result.
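For concreteness, the CPU-side approach described above might look like this. This is a naive single-channel 3x3 box blur sketch (pixels' actual frame is RGBA bytes, so a real version would blur each channel); the nested loops over every pixel and its neighborhood are exactly the per-frame cost that makes this slow.

```rust
// Naive 3x3 box blur on a single-channel image: the per-frame CPU work
// described above. O(width * height * 9) per frame, which is why it
// gets slow at larger resolutions.
fn box_blur(src: &[u8], width: usize, height: usize) -> Vec<u8> {
    let mut dst = vec![0u8; src.len()];
    for y in 0..height {
        for x in 0..width {
            let mut sum = 0u32;
            let mut count = 0u32;
            // Average over the in-bounds part of the 3x3 neighborhood.
            for dy in -1i32..=1 {
                for dx in -1i32..=1 {
                    let nx = x as i32 + dx;
                    let ny = y as i32 + dy;
                    if nx >= 0 && ny >= 0 && (nx as usize) < width && (ny as usize) < height {
                        sum += src[ny as usize * width + nx as usize] as u32;
                        count += 1;
                    }
                }
            }
            dst[y * width + x] = (sum / count) as u8;
        }
    }
    dst
}

fn main() {
    // A 3x1 image: each output pixel is the mean of its 1D neighborhood.
    println!("{:?}", box_blur(&[0, 90, 0], 3, 1)); // [45, 30, 45]
}
```

Parallelizing those loops with rayon helps, but the indexing gets fiddly, and it still runs on the unscaled source image.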
Instead, since pixels already uses wgpu, why not do this on the GPU? The problem is that pixels::RenderPass involves learning a ton about wgpu. So I propose a simpler API:
Unanswered questions: