
Pro4 1023 #1

Open

herocbn wants to merge 4 commits into master from pro4_1023

Conversation

@herocbn (Owner) commented Nov 28, 2022

DO NOT PUSH PROJECT SOLUTIONS PUBLICLY.

And especially do NOT open pull requests with project solutions!

Please be considerate of this free educational resource.



herocbn added 4 commits October 23, 2022 10:57

@devin-ai-integration (Bot) left a comment


Devin Review found 5 potential issues.


Comment on lines +73 to +75
if (txn->GetState() == TransactionState::ABORTED) {
  return false;
}

🔴 LockShared does not remove its own request from the queue when transaction is aborted while waiting

When LockShared detects that the transaction has been aborted (line 73), it returns false but does not remove its own LockRequest from the lock_list. The request was added at line 50, and if the transaction gets wounded (aborted by an older transaction) while waiting in the loop at lines 65-71, the stale request remains in the queue.

Root Cause and Impact

The LockRequest for this transaction was emplaced at src/concurrency/lock_manager.cpp:50 but when the abort check at line 73 triggers, the code returns false without erasing the request. This leaves a dangling entry in the lock queue that references an aborted transaction. Other transactions checking the queue (e.g., in LockExclusive at line 120 checking lock_list->size() != 1) will see this stale entry and may wait indefinitely or make incorrect decisions about lock availability.

Impact: This can cause deadlocks or indefinite blocking because other transactions see a non-empty queue due to the orphaned request and keep waiting for it to be removed.
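
For reference, the cleanup would mirror the fix suggested below for LockExclusive. A minimal sketch, assuming LockShared's abort check sees the same local names (lock_list, val, txn_id) as its exclusive counterpart:

if (txn->GetState() == TransactionState::ABORTED) {
  // Sketch only: erase our own (now aborted) request so no stale
  // entry lingers in the queue, then wake the other waiters so they
  // can re-evaluate the queue without the orphaned entry.
  for (auto it = lock_list->begin(); it != lock_list->end(); ++it) {
    if (it->txn_id_ == txn_id) {
      lock_list->erase(it);
      break;
    }
  }
  val->cv_.notify_all();
  return false;
}

The notify_all matters here: blocked transactions are waiting on the queue shrinking, and without a wakeup they could sleep through the very size change they depend on.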


Comment on lines +123 to +125
if (txn->GetState() == TransactionState::ABORTED) {
  return false;
}

🔴 LockExclusive does not remove its own request from the queue when transaction is aborted while waiting

Same issue as in LockShared: when LockExclusive detects the transaction is aborted at line 123, it returns false without removing its own LockRequest that was added at line 108.

Root Cause and Impact

The request was emplaced at src/concurrency/lock_manager.cpp:108. When another older transaction wounds this one (sets it to ABORTED), the while-loop at line 120 exits, but the code at line 123 just returns false without cleaning up. The orphaned entry in the queue will prevent other transactions from ever seeing lock_list->size() == 1, causing them to block forever.

Impact: Causes indefinite blocking / deadlock for subsequent transactions trying to acquire exclusive locks on the same RID.

Suggested change
-  if (txn->GetState() == TransactionState::ABORTED) {
-    return false;
-  }
+  if (txn->GetState() == TransactionState::ABORTED) {
+    // Remove our own request from the queue before returning
+    for (auto it = lock_list->begin(); it != lock_list->end(); ++it) {
+      if (it->txn_id_ == txn_id) {
+        lock_list->erase(it);
+        break;
+      }
+    }
+    val->cv_.notify_all();
+    return false;
+  }

Comment on lines +143 to +153
auto iter = lock_table_.find(rid);
auto val = &(iter->second);
if (val->upgrading_ != INVALID_TXN_ID) {
  assert(false);
  txn->SetState(TransactionState::ABORTED);
  return false;
}
if (iter == lock_table_.end()) {
  assert(false);
  return false;
}

🔴 LockUpgrade dereferences iterator from find() before checking if it equals end()

In LockUpgrade, the code at line 144 dereferences iter (auto val = &(iter->second)) before checking if iter == lock_table_.end() at line 150. If the RID is not found in the lock table, this is undefined behavior.

Root Cause

At src/concurrency/lock_manager.cpp:143-150:

auto iter = lock_table_.find(rid);
auto val = &(iter->second);              // line 144: dereference BEFORE check
if (val->upgrading_ != INVALID_TXN_ID) { // line 145: uses val
  ...
}
if (iter == lock_table_.end()) {         // line 150: check AFTER dereference
  ...
}

The end() check at line 150 comes after the dereference at line 144. If the RID doesn't exist in lock_table_, iter is the past-the-end iterator, and evaluating iter->second is undefined behavior (likely a crash).

Impact: Crash/undefined behavior if LockUpgrade is called on an RID that doesn't exist in the lock table.

Suggested change
-  auto iter = lock_table_.find(rid);
-  auto val = &(iter->second);
-  if (val->upgrading_ != INVALID_TXN_ID) {
-    assert(false);
-    txn->SetState(TransactionState::ABORTED);
-    return false;
-  }
-  if (iter == lock_table_.end()) {
-    assert(false);
-    return false;
-  }
+  std::unique_lock<std::mutex> guard(latch_);
+  auto iter = lock_table_.find(rid);
+  if (iter == lock_table_.end()) {
+    assert(false);
+    return false;
+  }
+  auto val = &(iter->second);
+  if (val->upgrading_ != INVALID_TXN_ID) {
+    assert(false);
+    txn->SetState(TransactionState::ABORTED);
+    return false;
+  }

Comment on lines 67 to 73
    *tuple = Tuple(values, opt);
    *rid = tp->GetRid();
  }
  if (txn->GetIsolationLevel() == IsolationLevel::READ_COMMITTED) {
    lock_mgr->Unlock(txn, tp->GetRid());
  }
}

🔴 SeqScanExecutor does not unlock shared lock for tuples that fail the predicate under READ_COMMITTED

In SeqScanExecutor::Next, when the isolation level is READ_COMMITTED, a shared lock is acquired at line 50 for every tuple. However, the unlock at line 71 only happens inside the if (res) block, meaning tuples that fail the predicate are never unlocked.

Root Cause

At src/execution/seq_scan_executor.cpp:49-73:

if (txn->GetIsolationLevel() == IsolationLevel::READ_COMMITTED) {
  lock_mgr->LockShared(txn, tp->GetRid());  // line 50: lock acquired for ALL tuples
}
...
if (res) {  // line 56: only true if predicate passes
  ...
  if (txn->GetIsolationLevel() == IsolationLevel::READ_COMMITTED) {
    lock_mgr->Unlock(txn, tp->GetRid());  // line 71: unlock ONLY for matching tuples
  }
}

When res is false (predicate doesn't match), the lock is acquired but never released. This causes shared locks to accumulate on non-matching tuples, potentially blocking writers indefinitely.

Impact: Resource leak of shared locks under READ_COMMITTED isolation. Writers will be blocked by locks that are never released, potentially causing deadlocks or severe performance degradation.

(Refers to lines 49-73)

Prompt for agents
In src/execution/seq_scan_executor.cpp, the Unlock call for READ_COMMITTED isolation at line 71 is inside the `if (res)` block, so it only unlocks tuples that match the predicate. Tuples that fail the predicate (res is false) are locked at line 50 but never unlocked. Move the READ_COMMITTED unlock to happen unconditionally after the predicate evaluation, regardless of whether the tuple matched. One approach: add an else branch after the `if (res)` block at line 73 that also calls `lock_mgr->Unlock(txn, tp->GetRid())` when the isolation level is READ_COMMITTED.
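
A minimal sketch of that restructuring, assuming res, values, opt, txn, lock_mgr, and tp are the local names visible at this point in Next() (as in the snippet above); the unlock moves after the if (res) block so it runs for every scanned tuple:

if (res) {
  *tuple = Tuple(values, opt);
  *rid = tp->GetRid();
}
// Sketch only: under READ_COMMITTED, release the shared lock for every
// scanned tuple, whether or not it matched the predicate.
if (txn->GetIsolationLevel() == IsolationLevel::READ_COMMITTED) {
  lock_mgr->Unlock(txn, tp->GetRid());
}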

Comment on lines +120 to +121
while (txn->GetState() != TransactionState::ABORTED && lock_list->size() != 1) {
  val->cv_.wait(guard);

🟡 LockExclusive wounds itself by erasing its own request from the queue

In LockExclusive, the wound-wait loop at lines 110-118 iterates over the entire lock_list and erases any request with txn_id_ > txn_id. The current transaction's own request, added just before at line 108, survives only because it->txn_id_ > txn_id is false for the transaction itself; the loop never explicitly skips it. A related pattern exists in LockShared at lines 51-63, where the loop kills only EXCLUSIVE requests with higher IDs while the current transaction's own SHARED request sits in the list and can interact unexpectedly with the size check.

More critically, in LockExclusive at line 120, the condition lock_list->size() != 1 assumes that when the list has exactly 1 element, it must be the current transaction's request. But if the current transaction was wounded by another thread between lines 108 and 120, and its entry was erased by that other thread, the list could reach size 1 with someone else's request, and the current transaction would incorrectly proceed to grant itself the lock.

Detailed Explanation

Consider this race in LockExclusive:

  1. Transaction T2 adds its request to the queue at line 108
  2. Transaction T1 (older, lower ID) calls LockExclusive on the same RID
  3. T1's wound-wait loop erases T2's request from the queue and sets T2 to ABORTED
  4. T2's while-loop at line 120 wakes up, sees lock_list->size() == 1 (T1's request), but T2 is ABORTED
  5. The abort check at line 123 catches this, but T2's request is already gone from the list — this is correct

However, the reverse scenario cannot occur: T2 (younger) in the wound loop at line 110 will never erase T1's request, because T1 has the lower ID. The real issue is that the size() != 1 check is fragile: it never verifies that the remaining element is actually the current transaction's request.

This is a latent correctness issue rather than a guaranteed crash, but under concurrent access patterns it could lead to incorrect lock grants.
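
One way to harden the check, sketched under the assumption that lock_list is a std::list<LockRequest> and txn_id holds the current transaction's id, is to make the wait predicate verify ownership as well as size:

// Sketch only: wait until we are wounded, or until the single remaining
// request is verifiably our own, before treating the lock as granted.
while (txn->GetState() != TransactionState::ABORTED &&
       !(lock_list->size() == 1 && lock_list->front().txn_id_ == txn_id)) {
  val->cv_.wait(guard);
}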

