This repository has been archived by the owner on Apr 6, 2019. It is now read-only.

CPU pegged at 100% when using redis_subscriber.connect #75

Closed
RobWillis opened this issue May 3, 2017 · 3 comments

Comments

@RobWillis

Just wanted to start with - great library!!

I'm currently using the redis_subscriber to receive notifications. My use case is very simple - I get a notification, I record a timestamp. Notifications are very rare.
However, I noticed my program was often consuming 100% of one CPU (kernel space) - after it started and before I put any load on it or actually received any notifications.

The program otherwise worked fine - just the one CPU was pegged. After much narrowing, I've traced the trigger down to redis_subscriber.connect.

Also, the issue appears to always happen when I run on a single-CPU Ubuntu 16 VM. On an Ubuntu VM with more than one CPU, the issue is much less common (though it does happen).

The code is pretty straightforward:

void
CacheSubscriber::State::initConnection()
{
    this->doConnInit = false;
    updateCacheTime();

    // The callback passed to connect is the disconnection handler:
    // flag that the connection must be re-initialized, unless we are
    // shutting down.
    this->subscriber.connect( this->hostname, this->port, [this](redis_subscriber& sub)
    {
        if ( !this->shutdown )   {  this->doConnInit = true;    }
    });

    // Record a timestamp whenever a message arrives on our channel.
    this->subscriber.subscribe( this->channel, [this](const std::string& channel, const std::string& msg)
    {
        if ( this->channel == channel )   {  updateCacheTime();   }
    });

    // Flush the pending subscribe request to the server.
    this->subscriber.commit();
}

If there is any additional info I can provide - let me know...

@sedapsfognik

I have exactly the same issue. Everything works fine on Windows on 4 cores, but I get 100% CPU usage on a single-CPU Ubuntu 16 too. It looks like the subscriber waits in an infinite loop instead of using a condition variable.

@Cylix
Owner

Cylix commented Jun 8, 2017

Hi,

Sorry for the lack of response and thanks for the update.

I'll set up a 1-core VM and try to investigate this issue as soon as possible.

Basically, right now, there should be two worker threads: one that does the select/poll and another that processes the read and write callbacks.
Technically, both of them should sleep most of the time.
Having multiple threads on one core should not be an issue.
But my guess is that maybe the kernel schedules the threads in such a way that the select/poll thread is forced awake while there is no action to process, resulting in high CPU usage.

This is only a guess, so I still have to investigate.

At the moment, the timeout of the select/poll is set to -1, meaning unlimited sleep until an action occurs. If my guess turns out to be true, I will consider either allowing the client to customize this timeout, or detecting the number of cores and switching to a defined timeout (> 0) on single-core machines.
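
To illustrate what I mean, here is a minimal sketch of the mechanism (POSIX poll(), illustrative names, not the actual tacopie code). With a -1 timeout the thread sleeps in the kernel until a socket becomes ready, so an idle subscriber should cost no CPU; a pegged core means something keeps waking this loop up.

#include <poll.h>

// Simplified I/O worker loop watching a single socket.
void io_loop(int fd)
{
    for (;;)
    {
        pollfd pfd{ fd, POLLIN, 0 };

        // Timeout of -1: block indefinitely until the socket is readable.
        // A timeout of 0 would return immediately and spin the CPU.
        int ready = poll( &pfd, 1, -1 );

        if ( ready > 0 && (pfd.revents & POLLIN) )
        {
            // hand the readable socket over to the callback worker...
        }
    }
}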

I'm not able to give you any deadline concerning this issue as I have to deal with multiple things these days, but I'll do my best to solve it as fast as I can.

Best :)

@Cylix
Owner

Cylix commented Jun 21, 2017

Hi,

I submitted a fix on tacopie (v2.4.3) that aims to solve that issue.

As explained in this tacopie ticket, I tried to reproduce the high CPU usage on a virtual machine configured with 1 core and running Debian.
However, after changing the example to wait on a condition variable instead of spinning in a while (!should_exit) loop until we hit ctrl-c, I could not observe any high CPU usage.
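
For reference, here is a minimal sketch of that change (illustrative names, not the exact example code):

#include <condition_variable>
#include <csignal>
#include <mutex>

std::mutex mtx;
std::condition_variable cv;
bool should_exit = false;

// Simplified; notifying from a signal handler is not strictly
// async-signal-safe, but it keeps the example short.
void on_sigint(int)
{
    { std::lock_guard<std::mutex> lock(mtx); should_exit = true; }
    cv.notify_one();
}

int main()
{
    std::signal(SIGINT, on_sigint);

    // ... connect / subscribe / commit as usual ...

    // Busy version (pegs a full core):
    //   while (!should_exit);
    // Blocking version (thread sleeps until notified):
    std::unique_lock<std::mutex> lock(mtx);
    cv.wait(lock, [] { return should_exit; });
}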

So if you happen to test again with the fix, I would be glad to know whether it improves the behavior.

In case it doesn't, I provided a new CMake variable to configure the underlying timeout: SELECT_TIMEOUT. It takes the timeout in nanoseconds. By default, the timeout is null (infinite wait until an event occurs).
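
Assuming it is used like a regular CMake cache variable, it would be set at configure time, e.g. (the value below is an arbitrary 100 ms expressed in nanoseconds):

    cmake -DSELECT_TIMEOUT=100000000 ..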

I'll leave this issue open and will close it in a few weeks if this post does not receive any reply in the meantime.

Hope this will solve the situation :)

Best!
