Unhelpful error when job fails #305
Comments
@anthonator Do you know what your job is returning that may cause this error? This looks like the result of returning an `{:error, reason}` tuple. In the meantime you can improve the error reporting with an explicit match or by raising an error rather than returning a tuple.
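A minimal sketch of that suggestion, assuming an Oban 2.0 worker (`MyApp.SomeWorker` and `do_work/1` are hypothetical stand-ins, not from this issue): raising instead of returning a bare tuple puts a real exception and stacktrace into the telemetry event.

```elixir
defmodule MyApp.SomeWorker do
  use Oban.Worker, queue: :default

  @impl Oban.Worker
  def perform(%Oban.Job{args: args}) do
    case do_work(args) do
      :ok ->
        :ok

      {:error, reason} ->
        # Raising, rather than returning {:error, reason}, means the
        # [:oban, :job, :exception] event carries a real exception.
        raise "job failed: #{inspect(reason)}"
    end
  end

  # Stand-in for the job's real work.
  defp do_work(_args), do: :ok
end
```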
I guess the reason this was confusing is that it looks like the error is coming from Oban itself rather than from our application code.
Here's our instrumenter if that helps.
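A representative shape for such an instrumenter under Oban 2.0 (the module name and handler id here are illustrative; the key parts are the `[:oban, :job, :exception]` event and the call to `Sentry.capture_exception/2`):

```elixir
defmodule MyApp.ObanReporter do
  def attach do
    :telemetry.attach(
      "oban-errors",
      [:oban, :job, :exception],
      &__MODULE__.handle_event/4,
      nil
    )
  end

  def handle_event([:oban, :job, :exception], measure, meta, _config) do
    extra =
      meta
      |> Map.take([:args, :id, :queue, :worker])
      |> Map.merge(measure)

    # Note: meta.error may not be an exception struct, which turns out
    # to be the crux of this issue.
    Sentry.capture_exception(meta.error, stacktrace: meta.stacktrace, extra: extra)
  end
end
```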
The default stacktrace is pretty much useless in this case. It is even potentially harmful because it doesn't come from application code at all and it looks like Oban itself is erroring. Do you receive Sentry notices for other failures?
Yes, we receive Sentry errors in other places in the app.
I meant do you receive Sentry notices for other job failures. Though, looking at your config, it seems like this is the only job you'd be reporting for.
Sorry, I'm not sure what you mean. Given my instrumenter above, I'd expect any job that raises an error to get reported to Sentry. At the moment we don't receive any failures in Sentry from Oban.
You're absolutely right. I don't see any issue with the instrumenter above; it looks correct for Oban 2.0 events. I have a hunch that something is causing your handler to detach and that's preventing the error reports. It's hard to say without doing some debugging.
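For context, telemetry permanently detaches a handler the first time it raises, so one way to check that theory is to list what is still attached (a sketch against the stock `:telemetry` API):

```elixir
# An empty list under this prefix would mean the handler raised at some
# point and telemetry detached it.
:telemetry.list_handlers([:oban, :job])
```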
@anthonator It seems like this is the result of changes in Sentry, getsentry/sentry-elixir#403. It no longer accepts non-exception values for `Sentry.capture_exception/2`.
Is there a way to get the exception from Oban so I can pass it to Sentry? |
In this case (and any other where the job returns `{:error, reason}`), the error reaching the handler isn't an exception. Until the next release (and safely after that) you can use:

```elixir
def handle_event([:oban, :job, :exception], measure, meta, _) do
  extra = Map.take(meta, [:args, :id, :queue, :worker])
  extra = Map.merge(extra, measure)

  meta.kind
  |> Exception.normalize(meta.error, meta.stacktrace)
  |> Sentry.capture_exception(stacktrace: meta.stacktrace, extra: extra)
end
```
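(For what it's worth, `Exception.normalize/3` wraps a non-exception `:error` payload in an exception, an `ErlangError` in the generic case, so Sentry always receives a genuine exception struct.)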
Ah, ok. That makes much more sense. Thanks for the clarification. I'm not sure I want to report failed jobs that return `{:error, reason}` though.
You can check if `meta.error` is an exception struct before reporting it:

```elixir
case {meta.kind, meta.error} do
  {_, %{__exception__: _}} ->
    Sentry.capture_exception(meta.error, stacktrace: meta.stacktrace, extra: extra)

  {:exit, _} ->
    Sentry.capture_exception(meta.error, stacktrace: meta.stacktrace, extra: extra)

  _ ->
    :do_nothing
end
```
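(The `%{__exception__: _}` clause matches because every exception struct carries an `__exception__: true` field.)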
This makes several significant changes to how crashes, errors and timeouts are reported from `perform/1` calls:

* Timeouts are wrapped in `Oban.TimeoutError`
* Error and discard tuples are wrapped in `Oban.PerformError`
* Exits and throws are wrapped in `Oban.CrashError`
* Stacktraces are only included from code that is rescued or caught, not from error tuples or timeouts.

The goal is to improve error formatting within a job's error array and to make error reporting to external services like Sentry entirely consistent. Fixes #305
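With those wrappers in place, every failure mode reaches the `[:oban, :job, :exception]` handler as an exception struct, either the exception the job raised or one of the `Oban.PerformError` / `Oban.CrashError` / `Oban.TimeoutError` wrappers, so an instrumenter like the one sketched near the top of this thread can pass `meta.error` straight to `Sentry.capture_exception/2` without the kind/exception matching above.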
Environment
Oban version: 2.0.0
PostgreSQL version: 12.3
Elixir & Erlang/OTP versions (elixir --version):
Erlang/OTP 23 [erts-11.0.2] [source] [64-bit] [smp:12:12] [ds:12:12:10] [async-threads:1] [hipe]
Elixir 1.10.4 (compiled with Erlang/OTP 21)
Current Behavior
We're receiving an unhelpful error when a job fails to execute.
We attempt to send errors to Sentry using a telemetry instrumenter but whenever we see this error nothing gets sent to Sentry.
Expected Behavior
Error messages/stacktraces are helpful and get sent to external error services via the telemetry integration.
Additional Information
Job record