So I am using Honeydew + Ecto, and I can't find a way to restart / re-run a job.
In my case, I have a job that I want to run every time a record is created in my database, and this works great.
However, I would like to also schedule a job once the record has been updated.
If I understand correctly, there is currently no way to do it short of manually resetting the record's honeydew fields to their default values, as they were when the record was created.
But this puts me at risk of a race condition if the record is updated while the initial job is running, or when multiple updates happen in a short period of time.
What is the correct way of handling the situation of running a job on record updates?
My current thinking is to create an associated record that represents the update, with its own job. So if I have a Photo schema and I want to run a ClassifyPhoto job after each update, then for each photo update I insert a record, say PhotoUpdated, with a timestamp and honeydew fields, and hook the ClassifyPhoto job up to the PhotoUpdated schema rather than to Photo.
Is my thinking correct, or am I missing some smart functionality Honeydew provides to do this?
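For concreteness, here is a minimal sketch of that PhotoUpdated idea. All module and function names (`MyApp.Photos`, `Photo.changeset/2`, the `photo_updates` table) are hypothetical, and the Honeydew poll-queue fields are only noted in a comment since their exact schema/migration helpers depend on the library version. The update and the job row go into one transaction via `Ecto.Multi`, which is also what avoids the race:

```elixir
defmodule MyApp.PhotoUpdated do
  use Ecto.Schema

  schema "photo_updates" do
    belongs_to :photo, MyApp.Photo
    # Honeydew's poll-queue fields would be declared here (via its schema
    # helpers), so each PhotoUpdated row is an independent job.
    timestamps()
  end
end

defmodule MyApp.Photos do
  alias Ecto.Multi
  alias MyApp.{Repo, Photo, PhotoUpdated}

  # Updating the photo and inserting its job row happen in one transaction,
  # so there is never an updated photo without a pending ClassifyPhoto job,
  # and no honeydew fields on Photo itself are ever reset.
  def update_photo(%Photo{} = photo, attrs) do
    Multi.new()
    |> Multi.update(:photo, Photo.changeset(photo, attrs))
    |> Multi.insert(:job, fn %{photo: photo} ->
      %PhotoUpdated{photo_id: photo.id}
    end)
    |> Repo.transaction()
  end
end
```

Multiple rapid updates then simply produce multiple PhotoUpdated rows, each classified in turn, instead of fighting over one set of honeydew fields.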
Hey there, this isn't a use case that the Ecto queue was designed to accommodate. It's really meant for simple one-row-one-job scenarios. You're certainly right about resetting the fields to their default values, that'd be bad.
Your solution would work just fine, but maybe your situation would be a better fit for a different job queue, so you don't have to resort to hacks?
A lot of the more featureful Elixir job queues require Postgres, which is sort of a bummer, though. My use case can't tolerate single-point-of-failure (SPOF) nodes.