Failures handling best practice #6
Comments
Hi @bguban

Yeah, this is an area where taskinator isn't fully fledged. I've only ever used the notifications to
Originally I planned to build support for inspecting processes and their tasks via
I'll look into your suggestions to have a built-in retry mechanism and
And in response to your specific questions:
Yes, typically the wire-up for
Each event carries a payload, which includes the process class, the UUID, and other metadata.
Hi @virtualstaticvoid.
One more question: is there any best practice for testing flows? I stubbed all my jobs, but I see that there is a 3-second delay between the execution of the jobs, so the test takes a very long time. Another problem I faced is that I can't stub tasks. I would like to be able to stub a task to check that the flow calls it, and then write a separate test for each of the jobs. As I can see in the source code, taskinator includes my flow into instances of

Thanks
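The instance-mixing behaviour described above is plain Ruby: when a module's methods are mixed into an object at runtime, stubs applied to the defining class or module may never be hit. As a hedged sketch (module and method names invented here, not taskinator's API), one workaround is to stub on the instance's singleton class:

```ruby
# Generic Ruby illustration, no taskinator specifics.
module MyFlow            # hypothetical stand-in for a flow definition
  def heavy_task
    sleep 3              # the slow work we want to skip in tests
    :real_result
  end
end

runner = Object.new.extend(MyFlow)

# Replace the method only on this one instance:
runner.define_singleton_method(:heavy_task) { :stubbed }

STUBBED_RESULT = runner.heavy_task
```

With RSpec the equivalent instance-level stub would be `allow(runner).to receive(:heavy_task)`; the key is targeting the instance, not the module.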
Hi,

I have a Ruby on Rails app with `Resque` as a background jobs engine. I need to notify developers when some of the `taskinator` flows fail. As I can see, `taskinator` uses `ActiveSupport::Notifications`. Does that mean I should subscribe to failure events in an initializer? If I have several `taskinator` processes, how do I determine which process failed? I have neither the process class nor the file name in the arguments passed into `ActiveSupport::Notifications`. It would also be good to know the task/job name.

Is it possible to continue execution from the failed task/job after the problem is fixed, or to retry the failed task/job automatically?

Is it possible to set up `on_failure` hooks?