[Feature] Having identifiers for limiting the enqueuing of the same jobs #236
Comments
This feature would also be very useful for my case. My product has a similar need: we want a job to fire exactly once, after a certain amount of time has expired. The event that triggers that job may occur a single time, a few times, or thousands of times, but regardless we want the job to fire once. At the moment, the only way to avoid creating extra jobs is to use the same UUID for each job. But then we need to update the job contents for each event, and to extend the time period before the job fires so that we can wait for any additional events, and that doesn't seem to be possible from what I can tell.
A similar thing can be accomplished in the current version in the following way:
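The code snippet that originally followed this comment did not survive extraction. A plausible sketch of the UUID workaround described in the comment above (an illustration, not the original snippet): derive a deterministic job id from a business key, then pass it to the id-accepting overloads such as `BackgroundJob.enqueue(UUID, JobLambda)`, assuming those overloads exist in your JobRunr version.

```java
import java.nio.charset.StandardCharsets;
import java.util.UUID;

// Derive a stable, name-based (version 3) UUID from a business key, so that
// repeated enqueues of "the same" logical job all target one job id.
public class JobIds {
    static UUID forIdentifier(String identifier) {
        return UUID.nameUUIDFromBytes(identifier.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        UUID first = forIdentifier("my identifier");
        UUID second = forIdentifier("my identifier");
        // Same business key, same job id: a second enqueue with this id can be
        // detected (or rejected by the storage provider) instead of duplicated.
        System.out.println(first.equals(second)); // true
    }
}
```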
JobRunr Pro now has an actual API for replacing jobs.
Closing this as it is now possible to replace jobs in JobRunr Pro.
Still requested: replacing jobs is not the same as not enqueuing again when a job is already scheduled, enqueued, or processing. The goal is to limit certain jobs and not recreate them again.
Hey guys, I would like to confirm if the problem I have is related to this issue.
Well, @sergiocerqueiraactabl, it's unrelated to this issue, but what you're trying to achieve is part of JobRunr Pro...
Thanks for your response @rdehuyss!
This is all related to #448. There you can find the reason why this is happening.
Hi @sergiocerqueiraactabl - I don't know if my earlier comment was clear, but the feature you are looking for (pausing and resuming RecurringJobs) is part of JobRunr Pro. I hope you understand that some premium features like this are only available in the Pro version. As you may (or may not) know, doing sustainable open-source development is hard :-).
🎉 This feature will be part of JobRunr Pro v7! 🎉 Here is the integration test that shows the usage:

```java
@Test
void testEnqueueWithIdentifier() {
    BackgroundJob.enqueue(JobProId.fromIdentifier("my identifier"), () -> testService.doWork());
    BackgroundJob.enqueue(JobProId.fromIdentifier("my identifier"), () -> testService.doWork());

    await().atMost(FIVE_SECONDS).untilAsserted(() -> assertThat(storageProvider.countJobs(aJobSearchRequest().build())).isEqualTo(1));
    await().atMost(FIVE_SECONDS).untilAsserted(() -> assertThat(storageProvider.getJob(aJobSearchRequest().build())).hasStates(ENQUEUED, PROCESSING, SUCCEEDED));
}
```
Thanks for your reply @rdehuyss. Yes, I understand the amount of work an open-source tool takes. You are doing great work here.
Description of the problem.
Typical use case: you have a display of financial markets. Each time the price of a stock changes, you want to write it to disk, but you can have up to 150 changes for the same stock in one second (ex: APPLE CORP). Then I just want to apply the last one, not do 150 updates.
The idea behind this is to limit the work on the machine (shall we discuss Green IT?) when there is no need to multiply identical calls.
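The coalescing behaviour this use case asks for can be sketched independently of JobRunr: keep only the latest value per key, so one flush performs a single write per stock however many ticks arrived. A minimal illustration (the class and method names below are invented for this sketch, not JobRunr API; prices are in integer cents, and the example is single-threaded for simplicity):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative coalescing buffer: later ticks overwrite earlier ones for the
// same symbol, so a flush sees at most one pending write per symbol.
public class LatestPriceBuffer {
    private final Map<String, Long> latestCents = new ConcurrentHashMap<>();

    // Called for every price tick.
    public void onPriceChange(String symbol, long priceCents) {
        latestCents.put(symbol, priceCents);
    }

    // Called once per flush interval; returns pending writes and resets.
    public Map<String, Long> drain() {
        Map<String, Long> snapshot = new ConcurrentHashMap<>(latestCents);
        latestCents.clear();
        return snapshot;
    }

    public static void main(String[] args) {
        LatestPriceBuffer buffer = new LatestPriceBuffer();
        for (int i = 0; i < 150; i++) {
            buffer.onPriceChange("AAPL", 19000 + i); // 150 ticks in one second
        }
        // Only one write is pending, carrying the last price seen.
        System.out.println(buffer.drain().size()); // 1
    }
}
```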
Other use cases:
- A file (ex: myfile.txt, saved as myfile.txt.bak) is backed up each time it is modified. If we have 15 changes in one second, only the last one has to be stored.

The proposal
I would like a unique identifier to identify the job when I enqueue it. If this identifier already exists on a job still in the queue (basically AWAITING or ENQUEUED, I believe), I would like JobRunr to ignore the new job and return either the previous JobId or null.

Add-on capability

Depending on the need, the best would be to allow 2 methods when an existing identifier already exists:
- BackgroundJob.enqueueIfNotExists to not enqueue the new job.
- BackgroundJob.enqueueReplace to replace the previous job with the new one.

Note the programmer can handle this quite easily himself, so providing the 2 methods is really a gold-plated extra.
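The semantics of the two proposed methods can be sketched against a plain in-memory map keyed by the caller-supplied identifier (purely illustrative; enqueueIfNotExists and enqueueReplace are the names proposed above, not an existing JobRunr API):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of the proposed semantics, modelled on a map of pending jobs.
public class IdentifiedQueue {
    private final ConcurrentMap<String, Runnable> pending = new ConcurrentHashMap<>();

    // enqueueIfNotExists: keep the earlier job; report whether this call queued one.
    public boolean enqueueIfNotExists(String identifier, Runnable job) {
        return pending.putIfAbsent(identifier, job) == null;
    }

    // enqueueReplace: discard the earlier job and keep the new one.
    public void enqueueReplace(String identifier, Runnable job) {
        pending.put(identifier, job);
    }

    public int size() {
        return pending.size();
    }

    public static void main(String[] args) {
        IdentifiedQueue queue = new IdentifiedQueue();
        System.out.println(queue.enqueueIfNotExists("backup-myfile", () -> {})); // true: newly queued
        System.out.println(queue.enqueueIfNotExists("backup-myfile", () -> {})); // false: already queued
        queue.enqueueReplace("backup-myfile", () -> {});
        System.out.println(queue.size()); // 1: replace kept a single pending job
    }
}
```

Either way, at most one pending job exists per identifier; the two methods only differ in whether the old or the new payload wins.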