Don't panic on dropped frames #18
Conversation
On some lower-end systems it's possible, in high-pressure moments, to hit an edge case where a frame is dropped while buffers are still waiting to be mapped, causing wgpu to signal an error in the mapping callback and the profiler to panic. This is fixed by tracking whether the current frame was dropped and ignoring buffer mappings if it was.
excellent catch, thank you for reporting and coming up with a first fix! :) I'm wondering if the deeper problem isn't that we're doing an unwrap here without handling the error:

```rust
pool.resolved_buffer_slice().map_async(wgpu::MapMode::Read, move |res| {
    res.unwrap();
```

.. presuming I understood correctly and that `unwrap` is where we bail out with wgpu telling us things went sideways? Sadly wgpu doesn't distinguish any errors there so far :/. If so, I think it would be reasonable to assume that the respective error means that we already discarded the frame.

Another way to look at this is that we shouldn't ever throw away buffers with pending maps. Which makes me realize that this is problematic already:

```rust
// Make sure we don't overflow
if self.pending_frames.len() == self.max_num_pending_frames {
    // Drop previous frame.
    let dropped_frame = self.pending_frames.pop().unwrap();
    self.cache_unused_query_pools(dropped_frame.query_pools);
}
```

If we recycle the query pools from a dropped frame, we still have a `map_async` going on, so we can't actually reuse them :O |
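A minimal, self-contained sketch of one way the overflow path could avoid recycling pools that still have an in-flight mapping. `Profiler`, `PendingFrame`, and `has_pending_map` are hypothetical stand-ins for illustration, not the crate's actual types:

```rust
use std::collections::VecDeque;

// Hypothetical, simplified stand-ins for the profiler's types. In the
// real crate, "pending map" state would come from wgpu's map_async
// callback rather than a plain bool.
struct PendingFrame {
    query_pools: Vec<u32>, // placeholder for real query pool handles
    has_pending_map: bool, // true while a map_async is still in flight
}

struct Profiler {
    pending_frames: VecDeque<PendingFrame>,
    unused_query_pools: Vec<u32>,
    max_num_pending_frames: usize,
}

impl Profiler {
    // Sketch of the overflow path: drop the oldest frame, but only
    // recycle its query pools if no map_async is still outstanding;
    // otherwise the pools must not be handed back out yet.
    fn end_frame(&mut self, frame: PendingFrame) {
        if self.pending_frames.len() == self.max_num_pending_frames {
            let dropped = self.pending_frames.pop_front().unwrap();
            if !dropped.has_pending_map {
                self.unused_query_pools.extend(dropped.query_pools);
            }
            // else: a mapping callback still references these pools,
            // so they are intentionally not recycled here.
        }
        self.pending_frames.push_back(frame);
    }
}
```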
Yes, I don't think we can assume that if it's an error it's because the frame was already discarded, but we could just ignore that buffer (mark it as tainted or something), or discard that frame.
This could prove problematic on slower systems, since it would increase memory pressure and possibly pressure on the GPU.
No idea, seems like a bug |
I like the idea of getting rid of that
Good point. We mustn't let these stack up indefinitely. Are you okay with not merging your PR until then, or would it make your life easier somehow if it was merged as a transitional fix? |
I don't need it, I'm patching it in my cargo dependencies and that works for me. |
got a clean repro here now \o/ |
I'm opening this PR to gather opinions on how this should be done. Right now it's implemented using an `AtomicBool`, but it could possibly be done through the already existing `mapped_buffers`, using a sentinel value like `usize::MAX`, or another method.
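A minimal sketch of the `AtomicBool` approach described above, using only the standard library. `FrameState`, `on_buffer_mapped`, and `mapped_buffers` are hypothetical names for illustration, not the profiler's actual API:

```rust
use std::sync::{
    atomic::{AtomicBool, AtomicUsize, Ordering},
    Arc,
};

// Hypothetical per-frame state shared with the mapping callback.
struct FrameState {
    dropped: Arc<AtomicBool>,      // set when the frame is discarded
    mapped_buffers: Arc<AtomicUsize>, // count of successfully mapped buffers
}

// Sketch of a mapping callback that ignores errors on dropped frames
// instead of panicking. The Result stands in for wgpu's map_async result.
fn on_buffer_mapped(state: &FrameState, result: Result<(), &str>) {
    if state.dropped.load(Ordering::Acquire) {
        // The frame was already dropped; wgpu may report an error for the
        // outstanding mapping, and we deliberately ignore it.
        return;
    }
    result.expect("buffer mapping failed on a live frame");
    state.mapped_buffers.fetch_add(1, Ordering::AcqRel);
}
```

The sentinel alternative mentioned above would instead store `usize::MAX` in the existing counter to mark the frame as dropped, avoiding the extra atomic.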