RFC: Enhanced pessimistic lock queueing #100

Open
MyonKeminta wants to merge 8 commits into `master`.
Conversation

MyonKeminta (Contributor)

- Put them in the lock table in `ConcurrencyManager`, which means we can use the key to index both the memory lock and the lock waiting queue (though of course they should not share the same mutex). In this way, we try to keep the concurrency-managing structures together, which is logically reasonable. However, it makes cleaning up entries in the lock table more complicated, and it risks degrading the performance of scanning memory locks.
- Put them in a separate concurrent hashmap indexed by key, as sketched below. This should be simpler and more efficient than putting them into the lock table.
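
For illustration, here is a minimal sketch of the second option. The `Waiter` type, the ordering by `start_ts`, and `LockWaitQueues` are all hypothetical names, and a plain `Mutex<HashMap<..>>` stands in for whatever concurrent hashmap would actually be used:

```rust
use std::collections::{BinaryHeap, HashMap};
use std::sync::{Arc, Mutex};

type Key = Vec<u8>;

/// A waiting pessimistic-lock request; the ordering decides who is woken up first.
#[derive(Eq, PartialEq)]
struct Waiter {
    start_ts: u64,
}

impl Ord for Waiter {
    fn cmp(&self, other: &Self) -> std::cmp::Ordering {
        // `BinaryHeap` is a max-heap; reverse the comparison so the waiter
        // with the smallest start_ts (the oldest transaction) pops first.
        other.start_ts.cmp(&self.start_ts)
    }
}

impl PartialOrd for Waiter {
    fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
        Some(self.cmp(other))
    }
}

/// Lock waiting queues kept separately from the lock table, indexed by key.
#[derive(Default)]
struct LockWaitQueues {
    // A single global mutex is only for brevity; a sharded or lock-free
    // concurrent hashmap would be used to avoid contention.
    queues: Mutex<HashMap<Key, BinaryHeap<Arc<Waiter>>>>,
}

impl LockWaitQueues {
    fn push(&self, key: Key, waiter: Arc<Waiter>) {
        self.queues
            .lock()
            .unwrap()
            .entry(key)
            .or_default()
            .push(waiter);
    }

    fn pop(&self, key: &Key) -> Option<Arc<Waiter>> {
        let mut map = self.queues.lock().unwrap();
        let (waiter, now_empty) = {
            let queue = map.get_mut(key)?;
            (queue.pop(), queue.is_empty())
        };
        if now_empty {
            // Drop empty queues so the map does not keep growing.
            map.remove(key);
        }
        waiter
    }
}
```

Removing a queue as soon as it becomes empty keeps the map from accumulating entries for keys that no longer have any waiters, which is one of the cleanup concerns raised for the lock-table option.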

> As we will explain later, when a request is waiting in queue, it's possible that it exceeds its timeout. `WaiterManager` will be responsible to cancel it. However,

Contributor: Is it unfinished?


#### Lazy cleanup

As we will explain later, when a request is waiting in the queue, it's possible that it exceeds its timeout or encounters a deadlock. `WaiterManager` will be responsible for cancelling it. However, the corresponding entry can't be removed from the priority queue efficiently. Therefore, we can use a lazy-cleanup approach: when popping an entry from a queue, if the entry has already been canceled (which can be indicated by an atomic flag), drop it and continue popping the next one.
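
A minimal sketch of the lazy-cleanup pop, assuming a hypothetical `Waiter` that carries an atomic `canceled` flag set by `WaiterManager`; the field names and the ordering by `start_ts` are illustrative only:

```rust
use std::collections::BinaryHeap;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

/// A queued lock-waiting request. `canceled` is set by `WaiterManager` when
/// the request times out or is chosen as a deadlock victim; the heap entry
/// itself is left untouched at that point.
struct Waiter {
    start_ts: u64,
    canceled: AtomicBool,
}

// Order waiters by start_ts, oldest first (BinaryHeap is a max-heap, so the
// comparison is reversed).
impl PartialEq for Waiter {
    fn eq(&self, other: &Self) -> bool {
        self.start_ts == other.start_ts
    }
}
impl Eq for Waiter {}
impl PartialOrd for Waiter {
    fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
        Some(self.cmp(other))
    }
}
impl Ord for Waiter {
    fn cmp(&self, other: &Self) -> std::cmp::Ordering {
        other.start_ts.cmp(&self.start_ts)
    }
}

/// Pop entries until a live one is found; canceled entries are simply dropped
/// here instead of being removed from the heap eagerly.
fn pop_next_valid(queue: &mut BinaryHeap<Arc<Waiter>>) -> Option<Arc<Waiter>> {
    while let Some(waiter) = queue.pop() {
        if !waiter.canceled.load(Ordering::Acquire) {
            return Some(waiter);
        }
        // Canceled while waiting: skip it and try the next entry.
    }
    None
}
```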
> when popping an entry from a queue, if the entry is cancelled but it is canceled

Contributor: Typo?


We put the queues in a concurrent hashmap indexed by key, and put the hashmap inside the `Scheduler`.

Contributor: Removal of an element is O(log n), whether lazily or not. Considering the problem mentioned, would it be better to just remove elements immediately when cancelling?

MyonKeminta (Contributor, Author) on Sep 30, 2022: When a request is canceled, it might not be at the head of the queue. It's possible to remove an arbitrary element from a priority queue in O(log n) time, but that doesn't seem to be supported by the std `BinaryHeap`. I'm actually considering introducing a third-party priority queue implementation (I haven't found a proper one yet) or implementing one myself.
