Add max_latency to BackgroundThreadTransport #4762
Conversation
Thanks for your pull request. It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please visit https://cla.developers.google.com/ to sign. Once you've signed, please reply here (e.g. "I signed it!").

CLAs look good, thanks!
```python
items.append(queue_.get_nowait())
elapsed = time.time() - start
timeout = max(0, min_wait_time - elapsed)
items.append(queue_.get(timeout=timeout))
```
When reviewing this PR, I would be particularly interested in your opinion on how safe it is when a Python process terminates due to an exception: is there a possibility of log messages getting lost?

I've looked at the code a bit further. There is a method called Thoughts?
```python
:type max_latency: float
:param max_latency: The maximum number of seconds to wait for more than one
    item from a queue. This number includes the time required to retrieve
    the first item.
```
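The docstring's key point is that the `max_latency` budget covers the wait for the *first* item too, not just subsequent ones. A deadline-based sketch of those semantics (`get_batch` is an illustrative name, not the PR's API):

```python
import queue
import time

def get_batch(q, max_latency):
    """Collect items until the total elapsed time, including the wait
    for the first item, reaches max_latency seconds."""
    deadline = time.monotonic() + max_latency
    items = []
    while True:
        timeout = deadline - time.monotonic()
        if timeout <= 0:
            return items  # budget exhausted
        try:
            items.append(q.get(timeout=timeout))
        except queue.Empty:
            return items  # waited out the remaining budget

q = queue.Queue()
q.put("a")
q.put("b")
print(get_batch(q, max_latency=0.05))  # → ['a', 'b']
```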
```diff
@@ -264,6 +272,34 @@ def test__thread_main_batches(self):
         self.assertFalse(worker._cloud_logger._batch.commit_called)
         self.assertEqual(worker._queue.qsize(), 0)
 
+    def test__thread_main_max_latency(self):
```
The call to
This is always possible, yes; however, an exception on the main thread (or any thread other than this background thread) will not cause it to lose logs. Lost logs can happen for one of three reasons:
Force-pushed 64d0524 to 7af017e.
@tcwalther I rewrote the test to use monotonic time to keep our unit tests deterministic and fast; I totally understand not wanting to do it, as it's not an obvious way to write the test. This looks good, I'm merging. Thank you for doing all of this (and enduring my review). @dpebot will you merge when tests pass?

Okay! I'll merge when all statuses are green and all reviewers approve.
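The comment above mentions making the test deterministic via a controllable clock instead of real sleeping. A sketch of that testing technique, under the assumption that the drain loop accepts an injectable clock (the `clock` parameter and `drain` name are hypothetical; the PR's real code patches `time.time` rather than taking a parameter):

```python
import queue
from unittest import mock

def drain(queue_, max_latency, clock):
    """Drain loop with an injectable clock for deterministic tests."""
    items = [queue_.get_nowait()]  # first item: no wait
    start = clock()
    while True:
        timeout = max(0, max_latency - (clock() - start))
        try:
            items.append(queue_.get(timeout=timeout))
        except queue.Empty:
            break
    return items

# Fake clock: first call returns 0, second returns 10, so the 5-second
# budget is "exhausted" instantly with no real waiting.
fake_clock = mock.Mock(side_effect=[0, 10])
q = queue.Queue()
q.put("first")
print(drain(q, max_latency=5, clock=fake_clock))  # → ['first']
```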
This PR adds a new parameter `max_latency` to `BackgroundThreadTransport` in logging. It can be used to enforce batching by asking the transport to wait for new log messages for a specified amount of time. This is useful to avoid hitting the Stackdriver Logging rate limit.

This PR is written on behalf of Spotify, who, as a company, has signed a corporate CLA. The idea for this PR is based on a support case with Google.