[celery] add retry reason metadata to spans #630
Conversation
Related to #603
eq_(span.get_tag('celery.action'), 'run')
eq_(span.get_tag('celery.state'), 'RETRY')

# TODO: these should be failing
The reason these aren't failing is that you are explicitly raising a Retry exception.
This is something that Celery will catch and handle on its own.
The other case we need to be able to support is when you set a retry policy for the task and another type of exception is raised:
@self.app.task(autoretry_for=(Exception,))
def fn_exception():
    raise Exception('oh no, what happened?!')
I think the cases we need to handle are:

# Success \o/
@app.task
def fn_success():
    pass

# This should be reported as an error
@app.task
def fn_exception():
    raise Exception()

# This should be reported as success (the desired action of retrying occurred).
# We should add metadata to the span to show that it will be retried.
# We need to make sure we handle when the max number of
# retries has occurred (that should then be an error).
@app.task
def fn_retry():
    raise Retry()

# Same as above
@app.task(bind=True)
def fn_retry(self):
    raise self.retry()

# This feels like it should fall under the same as the previous:
# mark as success unless max retries have occurred.
@app.task(autoretry_for=(Exception,))
def fn_retry_exception():
    raise Exception()
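A minimal sketch of the classification those cases imply, assuming the instrumentation can see the exception that finally propagates out of the task. The helper name and return values here are illustrative, not the tracer's API:

from celery.exceptions import Retry, MaxRetriesExceededError

def classify_outcome(exc):
    # Hypothetical helper: decide how the tracer should report a task run.
    if exc is None:
        return 'success'
    if isinstance(exc, Retry):
        # Retrying is the desired behavior: report success and attach
        # retry metadata to the span.
        return 'retry'
    if isinstance(exc, MaxRetriesExceededError):
        # Retries exhausted: this should now be an error.
        return 'error'
    # Any other exception is a genuine failure.
    return 'error'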
@Kyle-Verhoog stealing this PR from you :) Changed the base of this PR; it is now focused on adding the celery task retry reason as metadata to the span.
# Add retry reason metadata to span
# DEV: Use `str(reason)` instead of `reason.message` in case we get something that isn't an `Exception`
span.set_tag(c.TASK_RETRY_REASON_KEY, str(reason))
We don't want this added as an exception; retrying isn't necessarily an error.
We could potentially add other metadata here, like the type of exception that was raised and traceback info, but the reason alone should be good to get started?
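For example, a hedged sketch of what that extra metadata could look like. Only the retry-reason tag mirrors this PR; the other tag names are illustrative:

def tag_retry_metadata(span, reason, einfo=None):
    # DEV: use str(reason) in case the reason isn't an Exception instance.
    span.set_tag('celery.retry.reason', str(reason))
    if isinstance(reason, Exception):
        # Illustrative extras discussed above, not part of this PR.
        span.set_tag('celery.retry.exception_type', type(reason).__name__)
    if einfo is not None:
        # einfo is the ExceptionInfo Celery passes to signal handlers;
        # its string form includes the traceback.
        span.set_tag('celery.retry.traceback', str(einfo))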
I can't officially review the PR because I created it, but this looks good to me 👍
LGTM 👍
Background
Retrying a task in celery works by raising a retry exception. These exceptions appear to be getting caught and mishandled as errors by the tracer.
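For context, a minimal sketch of such a retrying task (do_work is a hypothetical unit of work that may fail):

from celery import Celery

app = Celery('demo')

@app.task(bind=True, max_retries=3)
def flaky(self):
    try:
        do_work()  # hypothetical unit of work that may fail
    except Exception as exc:
        # self.retry() raises celery.exceptions.Retry; Celery catches it
        # and reschedules the task. A tracer that treats every raised
        # exception as a failure will mis-mark this span as an error.
        raise self.retry(exc=exc)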
Overview
This PR aims to gracefully handle celery retry exceptions.
Current Status
Some investigation has been done, and there is a signal API provided to trace retries. However, I have been unable to replicate the retry exception being marked as an error in the tracer.
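For reference, the signal in question is celery.signals.task_retry. A sketch of hooking it, where get_span_for_task is a hypothetical lookup into the tracer's own bookkeeping (the real integration keys spans off the task request):

from celery.signals import task_retry

@task_retry.connect
def trace_retry(sender=None, request=None, reason=None, einfo=None, **kwargs):
    # Fires whenever a task is about to be retried.
    span = get_span_for_task(sender, request)  # hypothetical lookup
    if span is not None:
        # Record why the task is retrying without flagging the span as an error.
        span.set_tag('celery.retry.reason', str(reason))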