job queue worries (was: deal with too big a queues) #34
some users are reporting huge queues (I suppose they're on slow hosts and more items are added than are processed; to be looked into).
but to keep things manageable (or to give users some control over this), we could look into:
Hundreds to thousands…

On Mon, 18 Jun 2018, 07:49, Deny Dias wrote: First things first: define 'huge'. How many jobs do you consider a huge queue?
Hummm... I think the solution for hundreds or thousands of jobs in the queue is to decrease the time between queue runs. Maybe ask those users with huge queues to install something like wp-control so they can change the default queue interval from 10 minutes to, say, five, three or even one minute.
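For what it's worth, a shorter interval doesn't strictly need an extra plugin; a minimal sketch using the standard `cron_schedules` filter could look like this (the hook name `my_plugin_process_queue` is a placeholder, not the plugin's real hook):

```php
<?php
// Sketch, assuming the plugin runs its queue off a WP-Cron hook.
// 'my_plugin_process_queue' is a hypothetical stand-in for that hook.

// Register a 5-minute interval (WP ships hourly/twicedaily/daily by default).
add_filter( 'cron_schedules', function ( $schedules ) {
    $schedules['every_five_minutes'] = array(
        'interval' => 5 * MINUTE_IN_SECONDS,
        'display'  => 'Every 5 Minutes',
    );
    return $schedules;
} );

// Move the queue runner from its default schedule onto the shorter one.
add_action( 'init', function () {
    $hook = 'my_plugin_process_queue'; // assumption: the plugin's queue-runner hook
    $next = wp_next_scheduled( $hook );
    if ( $next ) {
        wp_unschedule_event( $next, $hook ); // drop the old (e.g. 10-minute) event
    }
    wp_schedule_event( time(), 'every_five_minutes', $hook );
} );
```

Note that WP-Cron only fires on page loads, so on a low-traffic site even a one-minute schedule won't drain the queue faster unless a real system cron pings `wp-cron.php`.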
A button to clear the queue or purge N jobs falls into the category of inefficiency: the plugin will recreate those jobs anyway, so it would just be a tool for users to fool themselves. Pure eye candy.

If the size of the queue is bothersome, we should just hide the queue somewhere in the UI. Jobs are far less important than rules.
I'm reconsidering auto-purging the oldest N jobs, as I saw a user with 1111 jobs, some of which were already 5 days old, even though cron is working (given that he does have AUTO rules).
Even if the jobs panel is hidden and the psychological impact somewhat mitigated, that many jobs means the plugin is constantly "running behind the facts": the hash stored in a job may no longer be correct by the time the CCSS is actually requested, so the job is re-created with a new hash, which may in turn be outdated again by the time that job executes ... ad infinitum.
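As a sketch of the auto-purge idea: drop queued jobs past a cutoff age so stale hashes never get executed. The table and column names below are pure assumptions (the real plugin may well store its queue in an option instead of a table):

```php
<?php
// Hypothetical sketch of auto-purging stale jobs; $wpdb table name
// ('ccss_jobs') and column ('created_at', a Unix timestamp) are assumed.
function my_plugin_purge_stale_jobs( $max_age_days = 5 ) {
    global $wpdb;

    $cutoff = time() - $max_age_days * DAY_IN_SECONDS;
    $table  = $wpdb->prefix . 'ccss_jobs'; // assumption, not the plugin's real table

    // Returns the number of rows deleted, or false on error.
    return $wpdb->query(
        $wpdb->prepare( "DELETE FROM {$table} WHERE created_at < %d", $cutoff )
    );
}
```

This could run from the same cron hook as the queue processor, so the purge keeps pace with whatever interval the queue runs on.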