LOCK properly, everywhere, where necessary #94

Closed
dexX7 opened this issue Jun 25, 2015 · 1 comment

dexX7 (Member) commented Jun 25, 2015

Locks are used to ensure that no more than one thread at a time enters a "locked" region, for example to prevent thread A from writing some data while another thread B (for example the UI or RPC layer) reads it at that moment.

Currently there are a few places where LOCKs are likely missing.

The following parts should be guarded:

  • global state objects
  • pending transaction objects
  • caches

The lock we currently use is cs_tally. I think it would make sense to use a separate one for the pending transaction objects, and probably also for the caches, especially the UI-only caches.
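To illustrate, a minimal sketch of what separate locks might look like, assuming the CCriticalSection/LOCK primitives from Bitcoin Core's sync.h; the names cs_pending, cs_cache and forgetPendingTransaction are hypothetical:

#include "sync.h"
#include "uint256.h"

CCriticalSection cs_pending; // would guard the pending transaction objects
CCriticalSection cs_cache;   // would guard the (UI-only) caches

void forgetPendingTransaction(const uint256& hash)
{
    LOCK(cs_pending); // exclusive access while the pending map is modified
    // ... erase the entry for hash from the pending map ...
}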

dexX7 (Member, Author) commented Jun 26, 2015

Hmm... #100 locks cs_tally in all handlers, so there are actually a few too many locks now, given that mastercore_save_state(), for example, is only called from within a handler.

Is the following statement correct?

  • "Only one thread shall enter any of the handlers at a time."

I'm wondering about the UI and RPC though.

Ideally we would have something like Qt's QReadWriteLock: a "write lock" is set when entering the handlers, and "read locks" are set when reading via RPC or the UI.

If a read lock is set, then no write lock shall be set, and if the write lock is set, no read lock shall be set. More than one read lock may be set at a time, but no more than one write lock.

Or in other words:

  • "Don't read or write data, when data is currently written. Multiple readers are allowed."

The Boost equivalents seem to be:

// T is a placeholder for the type of the guarded data
#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>

// QReadWriteLock lock;
boost::shared_mutex lock;

void writeData(const T& data)
{
    // QWriteLocker locker(&lock);
    boost::unique_lock<boost::shared_mutex> locker(lock);
    // ... write the shared data, with exclusive access ...
}

T readData()
{
    // QReadLocker locker(&lock);
    boost::shared_lock<boost::shared_mutex> locker(lock);
    // ... read the shared data, concurrently with other readers ...
}

Currently I see the following problems:

  1. I have no way to test all this...
  2. CMPTally isn't really great for read-only access at the moment, due to its internal iterator, which is modified via init() and next(), even when reading. A const_iterator could be added as a second iterator for read-only access (see the sketch after this list), or concurrent reads could be restricted, at least for a start.
  3. It's not clear to me what might happen when synchronizing the chain. While scanning for Omni transactions there shouldn't necessarily be any reads via the UI or RPC, but when synchronizing, I'd expect responsiveness. As far as I understand, the keyword in this context is "starvation".
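
To make item 2 more concrete, here is a minimal sketch of the const_iterator idea; the container layout and names are assumptions, not the actual CMPTally internals:

#include <stdint.h>
#include <map>

class Tally
{
    std::map<uint32_t, int64_t> mp;
    std::map<uint32_t, int64_t>::iterator my_it; // shared, mutated by init()/next()

public:
    void init() { my_it = mp.begin(); } // existing traversal, modifies state

    // read-only traversal over a local const_iterator, so concurrent
    // readers never touch the shared my_it
    template <typename Visitor>
    void forEach(Visitor visit) const
    {
        for (std::map<uint32_t, int64_t>::const_iterator it = mp.begin();
             it != mp.end(); ++it) {
            visit(it->first, it->second);
        }
    }
};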

Since Boost 1.52, the following lines were added to the documentation:

Note the lack of reader-writer priority policies in shared_mutex. This is due to an algorithm credited to Alexander Terekhov which lets the OS decide which thread is the next to get the lock without caring whether a unique lock or shared lock is being sought. This results in a complete lack of reader or writer starvation. It is simply fair.

It's great to know that it's "fair", but this isn't really tangible for me.

I think I'm going to try to implement a PoC, but I really need at least a good idea of how to test it. The locking test works as follows:

  • There are 4 active threads
  • Each thread reads a global number n
  • Then it sleeps for a random amount of time
  • And increments its local copy to n + 1, which is then written back into the global n
  • The test passes if n equals (number of threads) * (iterations per thread)

Without a lock, the local copy of n may already be outdated, because other threads updated n in the meantime, and then an outdated value gets written back into the global one.
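
A sketch of that test with Boost threads (all names and constants are illustrative):

#include <boost/thread.hpp>
#include <cassert>
#include <cstdlib>

static const int nThreads = 4;
static const int nIterations = 1000;

static int n = 0;         // the global number
static boost::mutex cs_n; // guards n

static void worker()
{
    for (int i = 0; i < nIterations; ++i) {
        boost::mutex::scoped_lock lock(cs_n); // remove this line and the assert fails
        int nLocal = n;                       // read the global number
        boost::this_thread::sleep(boost::posix_time::microseconds(std::rand() % 50));
        n = nLocal + 1;                       // write the incremented copy back
    }
}

int main()
{
    boost::thread_group threads;
    for (int i = 0; i < nThreads; ++i)
        threads.create_thread(&worker);
    threads.join_all();
    assert(n == nThreads * nIterations); // 4 * 1000 updates, none lost
    return 0;
}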

To test a read-write scheme, there could be:

  • A global vector of numbers
  • One thread that appends incrementing numbers to the vector, with random sleeps
  • Other threads, which iterate over the vector and read its numbers, with random sleeps
  • The test fails if any thread reads numbers which don't increase strictly by one (n' = n + 1)

That should at least show whether the reader threads really observe strictly incrementing numbers.
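
And a corresponding sketch for the read-write variant, using the boost::shared_mutex scheme from above (again, all names are illustrative):

#include <boost/thread.hpp>
#include <boost/thread/shared_mutex.hpp>
#include <cassert>
#include <cstdlib>
#include <vector>

static const int nWrites = 1000;

static std::vector<int> vNumbers;  // the global vector of numbers
static boost::shared_mutex rwLock; // one writer, multiple readers

static void writer()
{
    for (int i = 0; i < nWrites; ++i) {
        {
            boost::unique_lock<boost::shared_mutex> lock(rwLock);
            vNumbers.push_back(i); // values increase strictly by one
        }
        boost::this_thread::sleep(boost::posix_time::microseconds(std::rand() % 50));
    }
}

static void reader()
{
    size_t nSeen = 0;
    while (nSeen < static_cast<size_t>(nWrites)) {
        {
            boost::shared_lock<boost::shared_mutex> lock(rwLock);
            // every number must be exactly one greater than its predecessor
            for (size_t i = 1; i < vNumbers.size(); ++i)
                assert(vNumbers[i] == vNumbers[i - 1] + 1);
            nSeen = vNumbers.size();
        }
        boost::this_thread::sleep(boost::posix_time::microseconds(std::rand() % 50));
    }
}

int main()
{
    boost::thread_group threads;
    threads.create_thread(&writer);
    for (int i = 0; i < 3; ++i)
        threads.create_thread(&reader);
    threads.join_all();
    return 0;
}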
