Possibility of gpu-side pixel buffer #170

Open
Tastaturtaste opened this issue Jun 1, 2021 · 8 comments
Labels: enhancement (New feature or request)

Comments

@Tastaturtaste

I am currently working on implementing cellular automata and similar things while learning Rust and wgpu.
Simulation on the CPU is already working for me, and pixels was a big help since I didn't need to give any consideration whatsoever to the graphics side. So big thanks for this awesome crate!
But the next step is doing the computation on the GPU to enable large simulations like these.

At the moment it is necessary to keep separate buffers on the GPU for the computation, map those buffers so the CPU can read them, and use the results to update the CPU-side pixel buffer of pixels, which then copies that buffer back to the GPU-side texture on rendering.
Updating the texture directly is not possible either, since the render_with() method (and by extension the render() method) overwrites the texture with the CPU-side pixel buffer.

It would be awesome if it were possible either to specify a pixel buffer on the GPU which pixels then uses, or to have pixels itself allocate a GPU-side pixel buffer and expose it to the user.
This would allow users to use pixels for what it does best, rendering pixel-perfect graphics, while choosing the most appropriate buffer location for their specific application.
As an alternative, pixels could also expose a method which renders the texture without first copying the pixel buffer to it. In that case it would be convenient to also be able to query the necessary TextureView, so that the copy_buffer_to_texture method of wgpu's CommandEncoder can be used.
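For illustration, here is a minimal sketch of the kind of direct copy this would enable, written against the wgpu 0.8-era API (struct and field names differ between wgpu versions). `device`, `queue`, `sim_output`, `pixels_texture`, `width`, and `height` are hypothetical names standing in for the surrounding application code:

```rust
// Sketch only: copy a compute-shader output buffer straight into the texture
// that pixels samples from, instead of mapping it back to the CPU first.
// The buffer needs COPY_SRC usage, the texture needs COPY_DST usage, and
// bytes_per_row must be a multiple of wgpu::COPY_BYTES_PER_ROW_ALIGNMENT (256).
let mut encoder =
    device.create_command_encoder(&wgpu::CommandEncoderDescriptor { label: None });
encoder.copy_buffer_to_texture(
    wgpu::ImageCopyBuffer {
        buffer: &sim_output,
        layout: wgpu::ImageDataLayout {
            offset: 0,
            bytes_per_row: std::num::NonZeroU32::new(4 * width), // 4 bytes per RGBA8 pixel
            rows_per_image: std::num::NonZeroU32::new(height),
        },
    },
    wgpu::ImageCopyTexture {
        texture: &pixels_texture,
        mip_level: 0,
        origin: wgpu::Origin3d::ZERO,
    },
    wgpu::Extent3d {
        width,
        height,
        depth_or_array_layers: 1,
    },
);
queue.submit(Some(encoder.finish()));
```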

Thank you for your work!

@parasyte added the question (Usability question) label on Jun 3, 2021
@parasyte
Owner

parasyte commented Jun 3, 2021

This is something I need to look into more closely. I have another side project that I was not planning to use pixels for, but it seems like a similar use case: I want a dumb pixel buffer/texture and not have to worry about how it gets to the display, while keeping the flexibility to render to the texture with the GPU.

The biggest issue, AFAIK, is that the texture usage is currently hardcoded:

usage: wgpu::TextureUsage::SAMPLED | wgpu::TextureUsage::COPY_DST,

E.g. one could use the STORAGE flag so a compute shader has access to write to it directly without copying through a separate buffer. (Although that buffer and copy may be necessary for reasons I do not understand?)
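A sketch of that idea (hypothetical, against the wgpu 0.8-era TextureUsage flags shown above; `device`, `width`, and `height` come from the surrounding code):

```rust
// Sketch only: create the backing texture with STORAGE in addition to the
// hardcoded SAMPLED | COPY_DST, so a compute shader could bind it as a
// storage texture and write pixels directly.
let texture = device.create_texture(&wgpu::TextureDescriptor {
    label: Some("pixels_gpu_buffer"),
    size: wgpu::Extent3d { width, height, depth_or_array_layers: 1 },
    mip_level_count: 1,
    sample_count: 1,
    dimension: wgpu::TextureDimension::D2,
    // Note: sRGB formats are generally not allowed as storage textures,
    // so this would need a non-sRGB format such as Rgba8Unorm.
    format: wgpu::TextureFormat::Rgba8Unorm,
    usage: wgpu::TextureUsage::SAMPLED
        | wgpu::TextureUsage::COPY_DST
        | wgpu::TextureUsage::STORAGE,
});
```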

The other question about skipping or ignoring the write_texture call seems like it should be architected as a separate Pixels type; one with a CPU-accessible pixel buffer and one without. If they are not separate, I feel it would open the door to using the API incorrectly. Things like trying to access an inaccessible pixel buffer from the CPU-side could get ugly. A separate type wouldn't have those concerns because a GPU-only Pixels type could just choose not to provide methods like get_frame().

@parasyte added the enhancement (New feature or request) label on Jun 3, 2021
@Tastaturtaste
Author

I also think having a separate type would probably be the most straightforward way to expose the functionality to the user. The Pixels type is advertised as a simple pixel buffer, but changing from an internal Vec to a GPU-side texture would fundamentally change how it works. I guess much of the functionality could be shared internally anyway.

@Tastaturtaste
Author

Could this maybe be solved by making Pixels generic over the storage it uses (perhaps backed internally by an enum)? Then the implementation could be split only for the methods where it matters.
The type using CPU storage could still be the default, and the GPU version would be constructible with the builder.
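A rough sketch of what "generic over storage" could look like (none of these types exist in pixels today; names are purely illustrative):

```rust
// Sketch only: Pixels<S> picks its behaviour from the storage parameter, and
// CPU-only methods are implemented only for the CPU-backed variant.
pub struct CpuStorage {
    frame: Vec<u8>,
}

pub struct GpuStorage {
    texture: wgpu::Texture,
}

pub struct Pixels<S = CpuStorage> {
    storage: S,
    // ...shared wgpu state (device, queue, render pipeline, etc.)
}

impl Pixels<CpuStorage> {
    /// Only the CPU-backed variant can hand out a mutable frame.
    pub fn get_frame(&mut self) -> &mut [u8] {
        &mut self.storage.frame
    }
}
```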

@parasyte
Owner

parasyte commented Jun 3, 2021

My current thought is using a trait to define the common functionality, and concrete structs for the implementation-specific parts. Pretty standard stuff, no frills.
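A minimal sketch of that shape (hypothetical names, not actual pixels API):

```rust
// Sketch only: shared behaviour in a trait, implementation-specific parts in
// concrete structs. The GPU-only type simply has no get_frame(), so CPU
// access to a nonexistent pixel buffer cannot even be expressed.
pub trait PixelsBackend {
    /// Render the backing texture to the surface.
    fn render(&mut self);
    /// Resize the backing texture and scaling matrix.
    fn resize(&mut self, width: u32, height: u32);
}

/// Today's behaviour: a CPU-side frame uploaded with write_texture every render.
pub struct CpuPixels {
    frame: Vec<u8>,
    // texture, pipeline, ...
}

/// GPU-only variant: exposes the texture for compute shaders instead.
pub struct GpuPixels {
    texture: wgpu::Texture,
    // pipeline, ...
}

impl CpuPixels {
    pub fn get_frame(&mut self) -> &mut [u8] {
        &mut self.frame
    }
}

impl GpuPixels {
    pub fn texture(&self) -> &wgpu::Texture {
        &self.texture
    }
}
```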

Did you have a specific reason to use copy_buffer_to_texture? If not, I think a trait would be superior to an enum in this case.

@Tastaturtaste
Author

I certainly have a tendency to overcomplicate things 😅. I am hyped this seems to be moving forward!

> Did you have a specific reason to use copy_buffer_to_texture? If not, I think a trait would be superior to an enum in this case.

Not really. I could also directly set the color in the texture without any copying.

@ARez2

ARez2 commented Jul 8, 2023

Hey! I had a similar use case where I just used the glium crate and compute shaders to do things. The only problem was that I found GPU shader development very tedious due to the lack of proper profiling and debugging (I don't have an NVIDIA GPU, which complicates things). I was wondering whether I should switch back to using pixels, and while checking out the repo for new things I found this issue. So, have there been any updates on this?

@parasyte
Owner

parasyte commented Jul 9, 2023

Nothing new to report here. Even if this were addressed, it would not make the debugging experience with shaders any better. It would just bring that experience to pixels.

@ARez2

ARez2 commented Jul 11, 2023

Yep. Very true. Thanks for the response! Love your project ^^
