This repository has been archived by the owner on Nov 17, 2023. It is now read-only.
I think it would be useful to support RMW (read-modify-write) for testing purposes. The API could look like this:
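A hypothetical sketch of such an API, using a minimal in-memory stand-in for the real tree (the `rmw` and `f` names come from this issue; the exact signature, the `Tree` class, and the use of a lock to model atomicity are assumptions):

```python
import threading

class Tree:
    """Minimal in-memory stand-in for the real tree, for illustration only."""

    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()  # models the atomicity that rmw must provide

    def rmw(self, key, lsn, f):
        """Atomically apply f to the old value of `key` and store the result
        for this key+lsn. `f` takes the old value (or None) and returns the
        new value."""
        with self._lock:
            old = self._data.get(key)
            new = f(old)
            self._data[key] = new  # the lsn would tag this version in the real tree
            return new

    def get(self, key):
        with self._lock:
            return self._data.get(key)
```

For example, `tree.rmw("k", lsn, lambda old: (old or 0) + 1)` implements an atomic counter on key `"k"`.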
`f` is a function that takes the old value of the key and returns the new value; `rmw` will set the new value for this key+lsn. The main purpose of this API is to test the atomicity of tree operations. We can enable it under a special feature flag/test mode.
To test the atomicity of concurrent tree operations, we can devise a client using this API. For example, we can have concurrent writers to the tree, all using the same key and lsn, with the "value" laid out/encoded as follows.

Each thread issues an `rmw` to the same key and modifies a different part (`thread_id:checksum`) of the value. The `f` for each thread:

- calculates the checksum of the whole value and puts it into its own slot (`[thread_id:checksum]`), and
- changes the `last_written_thread_id` to itself.

Another thread reads this key and:

- uses the `last_written_thread_id` to find the `thread_id:checksum`,
- calculates the checksum `checker_checksum` using the same `f`, and
- verifies that `checker_checksum` matches the checksum stored in `thread_id:checksum`.

We can also add some random garbage to the value to make the page more likely to split (the current implementation seems to split a page based on the size of the page rather than the number of entries in it).