This repository has been archived by the owner on May 23, 2023. It is now read-only.

Better error handling in context managers for Span/Scope. #101

Merged
merged 6 commits into opentracing:master on Oct 19, 2018

Conversation

carlosalberto
Contributor

Would address #72

cc @yurishkuro @palazzem

@carlosalberto
Contributor Author

Probably the 'important' bit here is that we would also be setting the error tag/log for Span from Scope ;)

self.set_tag(tags.ERROR, True)
self.log_kv({
    'event': tags.ERROR,
    'error.object': exc_type,
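As a rough sketch of the behavior being discussed (a stub recording `Span` stands in for the real opentracing one, and the `ERROR` constant mirrors `opentracing.ext.tags.ERROR`; this is an illustration, not the merged code):

```python
ERROR = 'error'  # mirrors opentracing.ext.tags.ERROR

class Span:
    """Stub span that records tags/logs, for illustration only."""

    def __init__(self):
        self.tags = {}
        self.logs = []

    def set_tag(self, key, value):
        self.tags[key] = value

    def log_kv(self, key_values):
        self.logs.append(key_values)

    def finish(self):
        pass

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        if exc_type is not None:
            # On error: mark the span and log the exception details.
            self.set_tag(ERROR, True)
            self.log_kv({
                'event': ERROR,
                'error.object': exc_val,
            })
        self.finish()
        return False  # do not swallow the exception

span = Span()
try:
    with span:
        raise ValueError('boom')
except ValueError:
    pass  # the exception still propagates; the span is now marked
```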
Member

this duplicates the previous log_kv, would it not make sense to combine?

Contributor Author

@carlosalberto carlosalberto Aug 14, 2018

Personally I don't have any preference there - so we can definitely put all that info in a single place ;) Will update.

Member

I believe the OT spec has a field name for the stack trace, which takes care of python.exception.tb. I don't know what python.exception.val is about.

My preference is to converge to OT standard log fields instead of these custom python strings (which predate official OT fields). But it will be a breaking change, in some way. Worth checking with other tracing vendors if they have explicit dependency on these strings in the backend (Jaeger doesn't).

Contributor Author

Oh, definitely. I had wondered about it too (that said, it looks like logging the error through error.object should be enough for us).

Will double check and ask away ;)

@carlosalberto
Contributor Author

Hey @palazzem @pglombardo

Wondering if any of your backends consume these custom python.exception.* logs? If not, we will replace them with the standard ones ('error' as a tag and 'error.object' as a log field).

PS - Neither Jaeger nor LightStep consumes these fields on the backend ;)

@pglombardo

Hi @carlosalberto - sorry for the late reply (vacation). We don't have specific processing for the python.exception.* logs at Instana. No problem here on switching to the standard tags.

@carlosalberto
Contributor Author

Hey @yurishkuro think this could go finally in? Let me know ;)

@yurishkuro
Member

didn't we say we want to dedupe the logging?

@carlosalberto
Contributor Author

Oh my bad, thought I had already committed that :/ Will update it

@carlosalberto
Contributor Author

@yurishkuro Done (removed the custom fields). Let me know ;)

self.set_tag(tags.ERROR, True)
self.log_kv({
    logs.EVENT: tags.ERROR,
    logs.ERROR_OBJECT: exc_type,
Member

I don't have a good understanding of Python exceptions, but this seems odd. Wouldn't this make more sense?

  • error.kind = exc_type
  • error.object = exc_val (??)
  • stack = exc_tb
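For context, these three names map directly onto what Python hands to `__exit__`: the exception class, the exception instance, and the traceback object, in that order. A quick probe (illustrative, not part of the PR):

```python
import types

captured = {}

class Probe:
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # Python passes the exception class, the instance, and the traceback.
        captured['type'] = exc_type
        captured['val'] = exc_val
        captured['tb'] = exc_tb
        return True  # swallow the exception for this demo

with Probe():
    raise KeyError('missing')

# captured['type'] is the class KeyError, captured['val'] is the
# KeyError instance, and captured['tb'] is a traceback object.
```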

Contributor Author

@carlosalberto carlosalberto Oct 15, 2018

Oh, I should win a prize for being so stupidly careless here, as it should have been exc_val - and also, I was trying to follow the advised behavior from https://github.com/opentracing/specification/blob/master/semantic_conventions.md, so it should be enough to have the error object.

Updated ;)

Member

can the tracer implementation still extract the stack trace if you only pass exc_val? What's the reason that Python splits those three?

Contributor Author

Oh, good point - so, no, it's not possible in Python 2 at least. My plan was to append it as another log field, but I also see we don't have a stack field constant, even though the specification declares one (the same happens in Java).

In any case, I think I will go ahead and add the traceback anyway. Let me know - the log field name constant can be added later on ;)
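For reference, on Python 3 the traceback is reachable from the exception instance itself via `__traceback__` (the link Python 2 lacked), so a tracer could in principle recover it from exc_val alone:

```python
import traceback

captured = None
try:
    raise RuntimeError('oops')
except RuntimeError as err:
    captured = err  # Python 3 deletes `err` when the except block ends

# The same traceback object that __exit__ would receive as exc_tb:
tb = captured.__traceback__

# Rendering can be deferred until the tracer actually needs it:
text = ''.join(traceback.format_exception(type(captured), captured, tb))
```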

self.log_kv({
    logs.EVENT: tags.ERROR,
    logs.ERROR_OBJECT: exc_val,
    'stack': exc_tb,
Member

why not add stack constant as well? it's in the spec.

what about error.type tag, wouldn't it make sense to be set to exc_type?

@carlosalberto
Contributor Author

@yurishkuro Oh my, I had forgotten we had already merged the opentracing.logs PR (and thus didn't want to mix them up).

So error.kind could be derived from exc_val, but I'm appending it anyway - I'm also exposing a user-friendly traceback of the error (exactly the one you'd get when running the code). exc_tb is still exposed here, but it's meant for slightly more advanced usage, so we fall back to traceback.format_exc().

logs.ERROR_OBJECT: exc_val,
logs.ERROR_KIND: exc_type.__name__,
logs.MESSAGE: str(exc_val),
logs.STACK: traceback.format_exc(),
Member

I have a slight preference to passing all 3 values above as is, without any additional work. If implementation wants to call .format_exc(), it can do so, but doing it explicitly in the API introduces a lot of unavoidable overhead, especially when the trace is not sampled.

Contributor Author

In that case, we might need to use different log names, since per the spec, strings are expected. What's your take on this? Or should we simply put the objects there directly anyway? (Personally I have mixed feelings about this, but I'd guess the latter option is better ;) )

Member

I think the spec is wrong about that. Forcing to-string prematurely is the biggest no-no of monitoring APIs. If we take the Jaeger tracer, it will do to-string anyway, but only if the trace is actually sampled and the conversion is necessary. In some cases to-string is not even appropriate, e.g. if a tracing backend supports stack traces via a proper structure, then I would much rather get the language-specific data structure and pack it more efficiently than a to-string.
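The deferred-formatting idea can be sketched as follows; `ReportingSpan` and its `sampled` flag are illustrative names for this note, not part of the OpenTracing API. Raw objects go into `log_kv`, and stringification happens only for sampled spans:

```python
import traceback

class ReportingSpan:
    """Illustrative span that defers to-string until reporting time."""

    def __init__(self, sampled):
        self.sampled = sampled
        self.reported_logs = []

    def log_kv(self, key_values):
        if not self.sampled:
            return  # unsampled: raw objects are dropped, no formatting cost
        rendered = {}
        for key, value in key_values.items():
            if isinstance(value, BaseException):
                # Stringify the exception only now, when it will be reported.
                rendered[key] = ''.join(
                    traceback.format_exception_only(type(value), value)
                ).strip()
            else:
                rendered[key] = str(value)
        self.reported_logs.append(rendered)

err = ValueError('bad input')
span = ReportingSpan(sampled=True)
span.log_kv({'event': 'error', 'error.object': err})  # formatted lazily
```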

Contributor Author

All right, let's do that. I just updated the PR ;)

@carlosalberto carlosalberto merged commit 5dda368 into opentracing:master Oct 19, 2018
@carlosalberto
Contributor Author

Merged ;) Thanks for the feedback!
