
[core] Ensure all lock guards are properly nested. #5646

Merged

merged 4 commits into gfx-rs:trunk from core-lock-nesting on May 3, 2024

Conversation

@jimblandy (Member) commented May 2, 2024

The lock analyzers in the `wgpu_core::lock` module can be a bit
simpler if they can assume that locks are acquired and released in a
stack-like order: a guard is only dropped when it is the most
recently acquired lock guard still held (see the first sketch after
this list). So:

- Change `Device::maintain` to take a `RwLockReadGuard` for the
  device's hal fence, rather than just a reference to it (see the
  signature sketch below).

- Adjust the order in which guards are dropped in `Device::maintain`
  and `Queue::submit`.

- Implement `downgrade` for the `lock` module's `RwLock`s.

  Implement `RwLockWriteGuard::downgrade` for `lock::vanilla` and
  `lock::ranked`, to downgrade a write lock to a read lock (see the
  `downgrade` sketch below).

- Refactor `lock::ranked` to use `LockStateGuard`.

  Rather than implementing `Drop` for all three lock guard types to
  restore the lock analysis' per-thread state, let lock guards own
  values of a new type, `LockStateGuard`, with the appropriate `Drop`
  implementation. This is cleaner and shorter, and helps us implement
  `RwLock::downgrade` in a later commit (see the `LockStateGuard`
  sketch below).

Fixes #5610.
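
To make the stack-like discipline concrete, here is a minimal, self-contained illustration using standard-library mutexes rather than wgpu's own lock wrappers; the locks `a` and `b` are hypothetical stand-ins:

```rust
use std::sync::Mutex;

fn main() {
    let a = Mutex::new(0);
    let b = Mutex::new(0);

    let guard_a = a.lock().unwrap(); // acquired first
    let guard_b = b.lock().unwrap(); // acquired second

    // Stack-like release: the most recently acquired guard is dropped
    // first. After this PR, the analyzers may assume this order.
    drop(guard_b);
    drop(guard_a);

    // Dropping `guard_a` while `guard_b` was still held would be a
    // non-stack-like release, which the analyzers no longer need to model.
}
```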
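A heavily simplified sketch of the `Device::maintain` signature change, using `std::sync::RwLock` in place of wgpu's lock wrappers; the real method takes more parameters, and `Fence` here is just a stand-in type:

```rust
use std::sync::{RwLock, RwLockReadGuard};

struct Fence;

struct Device {
    fence: RwLock<Fence>,
}

impl Device {
    // Before (sketch): the caller holds the fence read guard, so
    // `maintain` cannot control when it is released relative to the
    // locks `maintain` acquires itself.
    fn maintain_old(&self, _fence: &Fence) {
        // ... poll the fence, free finished resources ...
    }

    // After (sketch): `maintain` owns the guard, so it can drop it at
    // exactly the point that keeps acquisition and release stack-like.
    fn maintain_new(&self, fence: RwLockReadGuard<'_, Fence>) {
        // ... use the fence ...
        drop(fence); // explicitly released at the point that preserves stack order
    }
}

fn main() {
    let device = Device { fence: RwLock::new(Fence) };
    device.maintain_old(&device.fence.read().unwrap());
    device.maintain_new(device.fence.read().unwrap());
}
```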
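For the `downgrade` work, here is a minimal sketch of the non-analyzing (`vanilla`-style) case, assuming the wrapper delegates to `parking_lot`, whose write guards support an atomic downgrade; the wrapper bodies are illustrative, not wgpu's actual code:

```rust
// Illustrative wrapper types; wgpu's actual lock module differs.
pub struct RwLock<T>(parking_lot::RwLock<T>);
pub struct RwLockReadGuard<'a, T>(parking_lot::RwLockReadGuard<'a, T>);
pub struct RwLockWriteGuard<'a, T>(parking_lot::RwLockWriteGuard<'a, T>);

impl<T> RwLock<T> {
    pub fn new(value: T) -> Self {
        Self(parking_lot::RwLock::new(value))
    }

    pub fn write(&self) -> RwLockWriteGuard<'_, T> {
        RwLockWriteGuard(self.0.write())
    }
}

impl<'a, T> RwLockWriteGuard<'a, T> {
    /// Atomically convert this write guard into a read guard; the lock
    /// is never released in between, so no other writer can intervene.
    pub fn downgrade(this: Self) -> RwLockReadGuard<'a, T> {
        RwLockReadGuard(parking_lot::RwLockWriteGuard::downgrade(this.0))
    }
}
```

In the ranked variant, downgrading can plausibly leave the analysis state untouched, since the same lock remains held by the same thread.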
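And a sketch of the `LockStateGuard` idea: the per-thread analysis state is saved in a value whose `Drop` impl restores it, so every guard type gets correct cleanup just by owning one. The state fields and the `acquire` helper are hypothetical, not wgpu's actual analyzer:

```rust
use std::cell::Cell;

/// Per-thread state of a (hypothetical) lock-order analysis.
#[derive(Clone, Copy)]
struct LockState {
    /// Rank of the most recently acquired lock still held, if any.
    last_rank: Option<u32>,
}

thread_local! {
    static LOCK_STATE: Cell<LockState> = Cell::new(LockState { last_rank: None });
}

/// Saved analysis state, restored when the owning guard is dropped.
/// Because guards are released in stack-like order, restoring the
/// saved state is all the cleanup a guard ever needs.
struct LockStateGuard(LockState);

impl Drop for LockStateGuard {
    fn drop(&mut self) {
        LOCK_STATE.with(|s| s.set(self.0));
    }
}

/// Record the acquisition of a lock with the given rank, returning a
/// value that undoes the recording when dropped.
fn acquire(rank: u32) -> LockStateGuard {
    LOCK_STATE.with(|s| {
        let saved = s.get();
        // A real analyzer would verify `rank` against `saved` here and
        // panic on a rank violation; omitted from this sketch.
        s.set(LockState { last_rank: Some(rank) });
        LockStateGuard(saved)
    })
}

/// A guard type then simply owns a `LockStateGuard` next to the inner
/// guard; no per-type `Drop` impl is needed.
struct MutexGuard<'a, T> {
    inner: std::sync::MutexGuard<'a, T>,
    _state: LockStateGuard,
}

fn main() {
    let m = std::sync::Mutex::new(0);
    let _guard = MutexGuard {
        _state: acquire(1), // record the acquisition before taking the lock
        inner: m.lock().unwrap(),
    };
    // Dropping `_guard` drops `inner` first (releasing the lock), then
    // `_state` (restoring the saved analysis state).
}
```

This pattern also helps with `downgrade` in `lock::ranked`: the write guard's saved state can plausibly be moved into the read guard that replaces it.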

@jimblandy requested a review from a team as a code owner May 2, 2024 01:37
@jimblandy (Member, Author) commented:

The interesting thing for reviewers to consider here is that the code that assigns to `pending_writes.temp_resources` has moved from after the `maintain` call to before it. I believe this is safe because, at this point, `pending_writes` should be entirely empty, and it's fine if some other thread starts queuing up new work.

@ErichDonGubler (Member) commented May 2, 2024

It looks like this will be in my queue until Monday. Sorry for the wait! 😖 EDIT: Plot twist, I have review bandwidth today!

@ErichDonGubler (Member) left a review comment:

shippit

Resolved review threads: wgpu-core/src/lock/ranked.rs, wgpu-core/src/device/queue.rs
@ErichDonGubler enabled auto-merge (rebase) May 3, 2024 16:45
@ErichDonGubler merged commit 4a07ab2 into gfx-rs:trunk May 3, 2024
25 checks passed
@jimblandy deleted the core-lock-nesting branch May 3, 2024 20:41
Successfully merging this pull request may close these issues:

- Lock analysis is not prepared to handle SnatchLock usage (#5610)