Kernel: Allow acquiring the same LockRank multiple times #25932

Open
wants to merge 4 commits into base: master

Conversation

Hendiadyoin1
Contributor

Revival of #20619

Will likely conflict with #25931

@github-actions github-actions bot added the 👀 pr-needs-review PR needs review from a maintainer or community member label May 17, 2025
@Hendiadyoin1 Hendiadyoin1 force-pushed the Make-lockranks-great-again branch 2 times, most recently from 080c7a7 to 1c2574b Compare June 5, 2025 21:21
@Hendiadyoin1
Contributor Author

Hendiadyoin1 commented Jun 5, 2025

New this iteration:
Random KUBSAN fixes; will move those to a separate PR soon™, as well as stop lying to userspace about it (hopefully)

@Hendiadyoin1 Hendiadyoin1 marked this pull request as draft June 5, 2025 23:19
@github-actions github-actions bot removed the 👀 pr-needs-review PR needs review from a maintainer or community member label Jun 5, 2025
@Hendiadyoin1 Hendiadyoin1 force-pushed the Make-lockranks-great-again branch from a6a7a26 to adcd48b Compare June 6, 2025 18:29
@Hendiadyoin1 Hendiadyoin1 marked this pull request as ready for review June 6, 2025 18:32
@github-actions github-actions bot added the 👀 pr-needs-review PR needs review from a maintainer or community member label Jun 6, 2025
@Hendiadyoin1 Hendiadyoin1 force-pushed the Make-lockranks-great-again branch from adcd48b to 1c2574b Compare June 6, 2025 18:32
Collaborator

@kleinesfilmroellchen kleinesfilmroellchen left a comment


Just notes, LGTM

@@ -34,6 +34,9 @@ enum class LockRank : int {
// Process locks are the highest rank, as they normally are taken
// first thing when processing syscalls.
Process = 0x010,

// Mutexes need to be taken before spinlocks, so they get their own lock-rank
Mutex = 0x020,
Collaborator

Note to self: We should reorganize this bitfield such that lowest 16 bits are reserved for spinlocks (and 0x8000 is the Process lock) and highest 16 bits are reserved for mutexes. If you want you can already make this 0x1'0000 even though the circular dependency prevents any rank differentiation on Mutex right now.
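The split sketched in this note could look roughly like the following. This is a minimal illustration, not the PR's actual enum: only `Process` and `Mutex` appear in the source, the `0x8000`/`0x1'0000` values come from this comment, and `None` plus the helper functions are assumptions.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical layout: low 16 bits reserved for spinlock ranks
// (0x8000 being the Process lock), high 16 bits reserved for mutexes.
enum class LockRank : uint32_t {
    None = 0x0000'0000,
    // ... other spinlock ranks would live in the low half ...
    Process = 0x0000'8000,
    // Mutex ranks occupy the high half.
    Mutex = 0x0001'0000,
};

constexpr uint32_t spinlock_rank_bits = 0x0000'FFFF;

// With the halves reserved, classifying a rank is a simple mask test.
constexpr bool is_spinlock_rank(LockRank rank)
{
    return (static_cast<uint32_t>(rank) & spinlock_rank_bits) != 0;
}

constexpr bool is_mutex_rank(LockRank rank)
{
    return (static_cast<uint32_t>(rank) & ~spinlock_rank_bits) != 0;
}
```

Such a split would let later code distinguish mutex ranks from spinlock ranks without enumerating them.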

Contributor Author

Currently Mutexes don't have any lasting effect on the lock rank, as this technically is the lock rank of the internal spinlock of the mutex...

@@ -1401,6 +1401,10 @@ bool Thread::track_lock_acquire(LockRank rank)
// Nothing to do for locks without a rank.
if (rank == LockRank::None)
return false;
if (rank == LockRank::Mutex && m_lock_rank_mask != LockRank::None) {
Collaborator

This check could be generalized if we had a notion in the bitfield about which ranks are for mutexes and which are for spinlocks :^) (not necessary for now)
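The generalized form of that check might look like the sketch below, assuming the reserved-bit-range split from the earlier note (the split itself and the function name are hypothetical, not part of the PR):

```cpp
#include <cassert>
#include <cstdint>

// Assumed split: low 16 bits are spinlock ranks, high 16 bits are mutex ranks.
constexpr uint32_t spinlock_rank_bits = 0x0000'FFFFu;

// Returns false (i.e. the kernel would PANIC) if a mutex-ranked lock is
// being acquired while any spinlock rank is already held; no special case
// for a single LockRank::Mutex value is needed.
constexpr bool may_acquire(uint32_t held_rank_mask, uint32_t new_rank)
{
    bool acquiring_mutex = (new_rank & ~spinlock_rank_bits) != 0;
    bool holding_spinlock = (held_rank_mask & spinlock_rank_bits) != 0;
    return !(acquiring_mutex && holding_spinlock);
}
```

Taking a spinlock while holding a mutex remains fine under this rule; only the reverse order is rejected.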

@kleinesfilmroellchen kleinesfilmroellchen added ✅ pr-community-approved PR has been approved by a community member and removed 👀 pr-needs-review PR needs review from a maintainer or community member labels Jun 9, 2025

stale bot commented Jul 2, 2025

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions!

@stale stale bot added the stale label Jul 2, 2025
@@ -336,7 +339,7 @@ void Scheduler::leave_on_first_switch(InterruptsState previous_interrupts_state)
// At this point, enter_current has already been called, but because
// Scheduler::context_switch is not in the call stack we need to
// clean up and release locks manually here
g_scheduler_lock.unlock(previous_interrupts_state);
g_scheduler_lock.unlock({ .interrupts_state = previous_interrupts_state, .affect_lock_rank = true });
Member

Are we guaranteed to always be on a different lock rank here?

Contributor Author

The scheduler lock currently has no rank, so certainty is out the window, but it should only be acquired once

// Verify we are only attempting to take a lock of a higher rank.
VERIFY(m_lock_rank_mask > rank);
if (m_lock_rank_mask > rank)
PANIC("Acquiring lock of higher rank than already held, currently held: {:#05b}, to-be-acquired: {:#05b}", (int)m_lock_rank_mask, (int)rank);
Member

nit: "Attempting to acquire a lock of higher rank than the one currently held. Currently held: {:#b}, to be acquired: {:#b}."

The maximum lock rank might change, so I wouldn't use a hard-coded minimum length for these fields. These hyphens also feel out of place.

Member

nit: commit message: "They does" should be "This does", I think.
Also s/SpinLock/Spinlock.

@@ -34,6 +34,9 @@ enum class LockRank : int {
// Process locks are the highest rank, as they normally are taken
// first thing when processing syscalls.
Process = 0x010,

// Mutexes need to be taken before spinlocks, so they get their own lock-rank
Member

nit: s/lock-rank/LockRank. The hyphen looks weird.
Also please end all of your comments with a period (https://github.com/SerenityOS/serenity/blob/master/Documentation/CodingStyle.md#comments).

@@ -1401,6 +1401,10 @@ bool Thread::track_lock_acquire(LockRank rank)
// Nothing to do for locks without a rank.
if (rank == LockRank::None)
return false;
if (rank == LockRank::Mutex && m_lock_rank_mask != LockRank::None) {
// If we already hold any Spinlock, we can't acquire a mutex.
PANIC("Trying to acquire a mutex while holding a lock of higher rank: {:#05b}", (int)m_lock_rank_mask);
Member

s/:#05b/:#b

Comment on lines 50 to 51
// FIXME: This should have a lasting effect on the threads lock-rank
// and therefore return some sort of key for unlocking
Member

So do you want to give Mutexes multiple lock ranks as well?

Contributor Author

Would probably make sense in the future

Comment on lines 32 to 34
// Note: The internal Spinlock might have a different Rank than the Mutex
// Note: Making this a template would currently mess with Threads and our implementation
// As we currently include Thread.h and Thread uses Mutexes in its API
Member

1st note: Does this refer to something that should be done in the future? Because currently the spinlocks can't have a different rank.
2nd note: Does this cause an include cycle, or what do you mean?

Contributor Author

IIRC Mutexes are part of the out-of-line API of Thread, so slapping a template on them would make that harder

@stale stale bot removed stale labels Jul 3, 2025
@Hendiadyoin1 Hendiadyoin1 force-pushed the Make-lockranks-great-again branch from 1c2574b to 3e8b93e Compare July 13, 2025 20:15
@github-actions github-actions bot added 👀 pr-needs-review PR needs review from a maintainer or community member and removed ✅ pr-community-approved PR has been approved by a community member labels Jul 13, 2025
@Hendiadyoin1 Hendiadyoin1 force-pushed the Make-lockranks-great-again branch from 3e8b93e to 69acf28 Compare July 13, 2025 21:02
Otherwise we'd fail on acquiring the process level LockRank twice during
jail propagation in `sys$fork`.

Also while we are at it, make the VERIFYs neat little PANICs with good
debug messages.
This delays its destruction until the end of the function, which is
necessary to avoid taking a mutex during said destruction while under
a spinlock.
This helps to enforce that we do not take Mutexes under Spinlocks.
They currently do not have a lasting effect on the LockRank while a
Mutex is held, but only during acquisition and release.
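The behaviour these commit messages describe, allowing the same rank to be taken twice (as `sys$fork` needs for the Process rank during jail propagation), can be modeled as a per-rank depth counter layered on the existing bitmask. This is a standalone sketch, not the PR's actual Thread code: the struct, `rank_count`, the bit numbering, and the assumption that higher-valued ranks are taken first are all made up for illustration.

```cpp
#include <array>
#include <cassert>
#include <cstdint>

struct LockRankTracker {
    static constexpr int rank_count = 6; // hypothetical number of ranks
    std::array<uint16_t, rank_count> depth {};
    uint32_t held_mask { 0 };

    // Re-acquiring an already-held rank is now permitted and just bumps
    // its depth; otherwise the usual ordering check applies (assuming
    // higher-valued ranks, like Process, are taken first).
    bool acquire(int rank_bit)
    {
        uint32_t rank = 1u << rank_bit;
        bool already_held = (held_mask & rank) != 0;
        if (!already_held && held_mask != 0 && held_mask <= rank)
            return false; // the kernel would PANIC here
        depth[rank_bit]++;
        held_mask |= rank;
        return true;
    }

    void release(int rank_bit)
    {
        assert(depth[rank_bit] > 0);
        // Only clear the rank bit once every nested acquisition is released.
        if (--depth[rank_bit] == 0)
            held_mask &= ~(1u << rank_bit);
    }
};
```

With a pure bitmask, the second acquisition of an already-set rank would trip the ordering check; the depth counter is what makes the double acquisition in `sys$fork` survivable.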
@Hendiadyoin1 Hendiadyoin1 force-pushed the Make-lockranks-great-again branch from 69acf28 to e1fe34d Compare July 14, 2025 09:19
Labels
👀 pr-needs-review PR needs review from a maintainer or community member

3 participants