
Poller Service #1405

Closed · wants to merge 95 commits

Conversation

@clinta (Contributor) commented Jul 7, 2015

This is a rewrite of poller-wrapper.py, designed to run as a continuous service rather than a cron job. It works on a best-effort basis, using the configured number of threads to poll the devices with the oldest data first, polling each device no more frequently than the user-configured poller_service_poll_frequency (default 5 minutes).
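The "oldest data first, no more often than poll_frequency" selection described above can be sketched in a few lines. This is a minimal illustration, not the PR's actual code; the function name and the (device_id, last_polled) tuple shape are my own:

```python
import time

def devices_due(devices, poll_frequency=300, now=None):
    """Return devices whose data is stale, oldest-polled first.

    `devices` is a list of (device_id, last_polled_epoch) tuples.
    A device is due when at least `poll_frequency` seconds have
    passed since it was last polled.
    """
    now = time.time() if now is None else now
    due = [(dev, last) for dev, last in devices
           if now - last >= poll_frequency]
    # Sort so the most out-of-date device is handed to a worker first.
    due.sort(key=lambda pair: pair[1])
    return [dev for dev, _ in due]
```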

It utilizes MySQL's GET_LOCK to ensure that a device is only being polled by one poller at a time, allowing distributed polling without memcached.
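The GET_LOCK approach can be sketched as follows. This is a minimal illustration over any DB-API cursor; the function names and the "polldev.&lt;id&gt;" lock-name scheme are my own, not necessarily what the PR uses:

```python
def try_device_lock(cursor, device_id, timeout=0):
    """Try to take a cluster-wide, per-device MySQL advisory lock.

    GET_LOCK returns 1 if the lock was acquired and 0 if another
    connection already holds it. With timeout=0 the call returns
    immediately, so a device being polled elsewhere is simply skipped.
    """
    cursor.execute("SELECT GET_LOCK(%s, %s)",
                   ("polldev.%s" % device_id, timeout))
    return cursor.fetchone()[0] == 1

def release_device_lock(cursor, device_id):
    """Release the advisory lock once polling of the device finishes."""
    cursor.execute("SELECT RELEASE_LOCK(%s)", ("polldev.%s" % device_id,))
```

Because the lock lives in MySQL itself, every poller sharing the database sees it, which is what removes the need for memcached as a coordination layer.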

Configuration parameters are described in doc/Extensions/Poller-Service.md.

I'm currently testing this in an environment of just over 200 devices, with two pollers configured to attempt to poll each device every 60 seconds using up to 32 threads, retrying down devices every 15 seconds.

I plan to continue testing and monitoring the service over the next week and am opening this PR to request comments or feedback from anyone else willing to test.

If you would like to test this, add a new remote and checkout the poller-service branch.

cd /opt/librenms
git remote add clinta https://github.com/clinta/librenms.git
git remote update clinta
git checkout poller-service

The default configuration is listed below and can be adjusted in config.php. A more detailed description of the configuration options is in doc/Extensions/Poller-Service.md.

// Poller-Service settings
$config['poller_service_loglevel']                       = "INFO";
$config['poller_service_workers']                        = 16;
$config['poller_service_poll_frequency']                 = 300;
$config['poller_service_discover_frequency']             = 21600;
$config['poller_service_down_retry']                     = 60;
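The interaction between poll_frequency and down_retry can be illustrated like this: an up device waits the full poll interval, while a down device is retried on the shorter down_retry interval. This is a sketch with hypothetical helper names, using the config keys listed above:

```python
def next_poll_due(last_polled, device_up, config):
    """Return the epoch time at which a device is next due for polling.

    Up devices wait poller_service_poll_frequency seconds between polls;
    down devices are retried every poller_service_down_retry seconds.
    """
    interval = (config["poller_service_poll_frequency"] if device_up
                else config["poller_service_down_retry"])
    return last_polled + interval

def is_due(last_polled, device_up, config, now):
    """True if the device should be handed to a worker now."""
    return now >= next_poll_due(last_polled, device_up, config)
```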

If you are using Ubuntu, or another distribution that uses Upstart, copy /opt/librenms/poller-service.conf to /etc/init/poller-service.conf and run start poller-service to start the service.

@LibreNMS-CI
Auto-Deploy finished, Test PR at http://1405.ci.librenms.org or https://1405.ci.librenms.org


@spinza (Contributor) commented Jul 29, 2015

@clinta keen to get this moving forward. Do you need the $timeout = 0 in your lock functions in dbFacile.php? It's not being used at the moment.

@f0o (Member) commented Jul 29, 2015

@spinza AFAIK he has already resolved the conflict and removed the issues Scrutinizer complained about. However, due to an issue with GH's fork network, it hasn't yet shown up here.

We're working with GH to resolve the fork-network issue so everything can go on as usual again.

@spinza (Contributor) commented Jul 29, 2015

Yes, I had to redo my local repo to do a pull request.

@clinta (Contributor, Author) commented Jul 29, 2015

Yup, these are all fixed. I considered deleting my fork and re-creating it; I'm just not sure if the PR comments will be preserved if I do that.

@f0o (Member) commented Jul 29, 2015

@clinta if you choose to do that, please mention this PR in your new one so we've got a 'glue' :)

@clinta clinta mentioned this pull request Jul 29, 2015
@clinta (Contributor, Author) commented Jul 29, 2015

Replaced with #1561

@clinta clinta closed this Jul 29, 2015
@lock lock bot locked as resolved and limited conversation to collaborators Jan 23, 2019

7 participants