Bug 1920481: Decrease CPU usage of Prometheus exporter #437
Conversation
@dulek: This pull request references Bugzilla bug 1920481, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker. 3 validation(s) were run on this bug
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
pod_name = metric['name']
labels = metric['labels']
duration = metric['duration']
with lockutils.lock(pod_name):
Why do we need to lock here?
Hm, maybe not. I left it here thinking that prometheus_exporter might not be thread safe, or that it's called from various places. But it seems it can only be called from this thread, and we're guaranteed it's sequential.
Right, no strong opinion. If leaving it here is safer, that's fine.
    self.prometheus_exporter.update_metric(labels, duration)
    del self.metrics[pod_name]
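For readers unfamiliar with oslo.concurrency, lockutils.lock(name) is a context manager that serializes all sections entered under the same lock name. A minimal sketch of what the guarded update above amounts to (the function and variable names here are illustrative, not the actual kuryr-kubernetes code):

```python
from oslo_concurrency import lockutils


def update_for_pod(exporter, metric):
    # Threads entering with the same lock name serialize here. With a
    # single consumer thread (as discussed above) this is effectively
    # redundant, but harmless.
    with lockutils.lock(metric['name']):
        exporter.update_metric(metric['labels'], metric['duration'])
```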
try:
    metric = self.metrics.get(timeout=1)
Does the get() here block the addition of new metrics while removing from the queue?
Not sure if I understand. This queue module should be thread safe, meaning that get() blocks until there's an item added using put(), or until the timeout happens. We use the timeout to check self.is_running, to make sure the thread is able to stop correctly even if no new CNI requests are being handled at the moment.
Excellent, thanks for explaining.
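To make the explanation above concrete, here is a hedged sketch of the consumer-loop pattern being described. The queue.Queue, the 1-second timeout, and the is_running flag come from the diff and the comments; the class and method names are illustrative, not the actual kuryr-kubernetes code:

```python
import queue
import threading


class MetricsConsumer:
    """Illustrative sketch of the consumer-thread pattern."""

    def __init__(self, prometheus_exporter):
        self.metrics = queue.Queue()
        self.prometheus_exporter = prometheus_exporter
        self.is_running = True
        self._thread = threading.Thread(target=self._loop, daemon=True)

    def start(self):
        self._thread.start()

    def stop(self):
        self.is_running = False
        self._thread.join()

    def _loop(self):
        while self.is_running:
            try:
                # Blocks until a producer put()s an item, so the thread
                # sleeps instead of busy-waiting and burning CPU.
                metric = self.metrics.get(timeout=1)
            except queue.Empty:
                # Timed out: loop around to re-check is_running so the
                # thread can stop even when no CNI requests arrive.
                continue
            self.prometheus_exporter.update_metric(
                metric['labels'], metric['duration'])
```

The CPU fix is that get(timeout=1) parks the thread on the queue's internal condition variable instead of spinning in a tight loop.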
The branch was force-pushed from 8a20561 to 6239a30.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: dulek, MaysaMacedo. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
@dulek: All pull requests linked via external trackers have merged: Bugzilla bug 1920481 has been moved to the MODIFIED state.
/cherry-pick release-4.6
@dulek: new pull request created: #440
It seems the thread moving the metrics data into the service exposing
them in the Prometheus format was not constrained by any sleep. This
was causing increased CPU usage of kuryr-cni pods without any real
reason to do so.
This commit solves that by rewriting the thread to use the common
multiprocessing queue pattern.
Change-Id: I0eacc37022fbf214c361dbc52b42281ffa5301fd
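For completeness, the producer side of the same pattern is just a put() from the request-handling path, which never blocks for an unbounded queue. Again a sketch under the same assumptions as above, with hypothetical names:

```python
import time


def handle_cni_request(consumer, pod_name, labels):
    """Hypothetical CNI request handler that records its duration."""
    start = time.monotonic()
    # ... handle the actual CNI request here ...
    consumer.metrics.put({
        'name': pod_name,
        'labels': labels,
        'duration': time.monotonic() - start,
    })
```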