
Use read-write lock in Hierarchy #316

Open · leventov opened this issue Jun 14, 2018 · 9 comments

leventov commented Jun 14, 2018

Please use read-write lock, such as std::shared_mutex, here:

thread::Mutex hashtable_mutex;
It guards mostly read-only operations, yet using an ordinary mutex induces noticeable contention.

@wilx wilx self-assigned this Jun 14, 2018

wilx commented Jun 14, 2018

@leventov commented on Jun 14, 2018, 7:56 PM GMT+2:

Please use read-write lock, such as std::shared_mutex, here:

thread::Mutex hashtable_mutex;
It guards mostly read-only operations, yet using an ordinary mutex induces noticeable contention.

Do you have any numbers from a benchmark?

std::shared_mutex is C++17, while log4cplus currently requires only C++11. I could use SharedMutex, but that has not been used anywhere yet and is rather heavyweight. I am not sure it would help.

I also could bump the requirement up to C++17 and call it a 2.1 version or such. But without any numbers as a good argument, I cannot justify it to myself.


wilx commented Jun 14, 2018

I have changed the mutex type on branch master-make-hashtable-mutex-rw-lock to SharedMutex, if you want to benchmark it.

leventov commented Jun 15, 2018

Do you have any numbers from a benchmark?

I did mutex contention profiling of my application with the mutrace tool and this mutex appeared to be the most contentious one. I don't think specific numbers matter because it depends entirely on how heavily the application is calling the logger and from how many threads.

In my application this was likely exacerbated by suboptimally written code, with Logger::getInstance() called each time a logger instance was required.

You could also use some sort of concurrent hash map.


wilx commented Jun 15, 2018

@leventov commented on Jun 15, 2018, 2:45 PM GMT+2:

Do you have any numbers from a benchmark?

I did mutex contention profiling of my application with the mutrace tool and this mutex appeared to be the most contentious one. I don't think specific numbers matter because it depends entirely on how heavily the application is calling the logger and from how many threads.

In my application this was likely due to suboptimally written code, Logger::getInstance() called each time a logger instance is required.

I have pushed a changeset to the aforementioned branch that splits Logger instance retrieval into a shared-locked read-only path and an exclusive-locked read-write path. It does not use std::shared_mutex, though, but my own RW lock implementation. Please test whether it helps your workload.

You could also use some sort of concurrent hash map.

Implementing concurrent hash maps is hard.


Keynib commented Jun 6, 2023

@leventov @wilx
I have tested the patches from the branch master-make-hashtable-mutex-rw-lock; as a result, the main thread froze, waiting indefinitely.

Thread 1 (Thread 0x7ffff675ac80 (LWP 31095)):
#0  futex_wait_cancelable (private=0, expected=0, futex_word=0x5555564143f0) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
#1  __pthread_cond_wait_common (abstime=0x0, mutex=0x5555564143a0, cond=0x5555564143c8) at pthread_cond_wait.c:502
#2  __pthread_cond_wait (cond=0x5555564143c8, mutex=0x5555564143a0) at pthread_cond_wait.c:655
#3  0x00007ffff6b913bc in std::condition_variable::wait(std::unique_lock<std::mutex>&) ()
   from /build/libstdc++.so.6
#4  0x00007ffff7d4696b in log4cplus::thread::Semaphore::lock() const ()
   from /build/liblog4cplus.so.8
#5  0x00007ffff7d46a38 in log4cplus::thread::SharedMutex::rdlock() const ()
   from /build/liblog4cplus.so.8
#6  0x00007ffff7d1e484 in log4cplus::Hierarchy::getInstanceImpl(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, log4cplus::spi::LoggerFactory&) () from /build/liblog4cplus.so.8
#7  0x00007ffff7d1e543 in log4cplus::Hierarchy::getInstance(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) ()
   from /build/liblog4cplus.so.8
#8  0x00007ffff7cb133e in log4cplus::PropertyConfigurator::getLogger(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
    () from /build/liblog4cplus.so.8
#9  0x00007ffff7cb3750 in log4cplus::PropertyConfigurator::configureLoggers() ()
   from /build/liblog4cplus.so.8
#10 0x00007ffff7cb39ac in log4cplus::PropertyConfigurator::configure() ()
   from /build/liblog4cplus.so.8

I suppose the problem is inside the SharedMutex class; it probably relates to the counter handling, but I'm not sure. Could you please check one more time?


Keynib commented Jun 7, 2023

I have found the root cause of the problem above: if your application tries to reset the configuration with the following code, you will get a deadlock.

    {
        log4cplus::HierarchyLocker lock(log4cplus::Logger::getDefaultHierarchy());
        lock.resetConfiguration();

        log4cplus::PropertyConfigurator config(ss);
        config.configure();
    }

The mutex is locked a first time inside the HierarchyLocker constructor, and the thread then tries to lock it a second time inside PropertyConfigurator::configure(). I assume you need to update PropertyConfigurator::getLogger, for example by replacing getInstance with getInstanceImplUnlocked:

Logger
PropertyConfigurator::getLogger(const tstring& name)
{
    return h.getInstanceImplUnlocked(name, *h.getLoggerFactory());
}

Also, I suggest trying the classical pthread_rwlock_t instead of SharedMutex, because in my benchmarks SharedMutex was slower than pthread_rwlock_t. It would be good if you could try it in your environment.

@wilx wilx added this to the v3.0.0 milestone Jun 7, 2023

wilx commented Jun 7, 2023

If we do this, it should go to the master branch, for a potential 3.x release. There we can use C++20 features and drop the poor man's shared mutex in favour of the standard C++ shared mutex.

It would be a good reason to release 3.x. So far I have only been accumulating small tweaks compared to 2.x branches.


wilx commented Jun 7, 2023

Well, now that I look at it, there are more changes worth releasing: 2.1.x...master


Keynib commented Jun 7, 2023

There we can use C++20 features and drop the poor man's shared mutex in favour of standard C++ shared mutex.

Yes, agree with you.

Also, I want to mention one more thing: the problem with this code

        log4cplus::HierarchyLocker lock(log4cplus::Logger::getDefaultHierarchy());
        lock.resetConfiguration();

        log4cplus::PropertyConfigurator config(ss);
        config.configure();

can be resolved not by updating PropertyConfigurator::getLogger(const tstring& name), but by overriding that method in a user class, as is done in ConfigurationWatchDogThread:

Logger
ConfigurationWatchDogThread::getLogger(const tstring& name)
{
    if(lock)
        return lock->getInstance(name);
    else
        return PropertyConfigurator::getLogger(name);
}

So the user has to determine whether the lock is already held. I just want to mention that while this is a proper approach, it is not an obvious one; I think it would be good either to check for this somehow inside log4cplus or to add this information to the documentation.
