Duplicate cavalcade job entries #89
Comments
I noticed the same thing, which is why I opened #88 to help clean up job entries via WP-CLI, but it would be great to know what is causing these duplicate entries to occur.
Does this project have multiple webservers? My approach was going to be to put a unique key constraint on hook + args + site so that a second instance of the same event couldn't get into the database. The issue with that is that the args column is a longtext rather than a varchar, and longtext columns don't support indexes. (https://github.com/humanmade/Cavalcade/blob/master/inc/namespace.php#L70-L73)

The issue can also occur when using intervals, because each worker checks whether an event needs to be rescheduled. The acquire_lock method of the runner isn't adequate: https://github.com/humanmade/Cavalcade-Runner/blob/master/inc/class-job.php#L55-L66

When 4 workers get started, they call https://github.com/humanmade/Cavalcade-Runner/blob/master/inc/class-runner.php#L236-L239 at roughly the same time and then each try to update, which is where we see the race condition.

Last time I spoke with @rmccue about this he wanted to look at database locking, if I recall correctly. Looking into this, something like https://dev.mysql.com/doc/refman/5.7/en/innodb-locking-reads.html might be an option.
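To make the locking-read idea concrete, here is a rough sketch of how a worker could claim a job inside a transaction using `SELECT ... FOR UPDATE`, so two runners can't pick up the same row. This assumes a PDO connection in `$db` and a `wp_cavalcade_jobs` table; the column and status names are based on the linked code but should be treated as assumptions, not Cavalcade's actual implementation.

```php
<?php
// Illustrative sketch only: claim the next waiting job with a locking read
// so that two workers cannot grab the same row between "check" and "update".

$db->beginTransaction();

// FOR UPDATE holds a row lock until commit, closing the race window.
$stmt = $db->prepare(
    "SELECT id FROM wp_cavalcade_jobs
     WHERE status = 'waiting' AND nextrun <= NOW()
     ORDER BY nextrun ASC
     LIMIT 1
     FOR UPDATE"
);
$stmt->execute();
$job_id = $stmt->fetchColumn();

if ( $job_id ) {
    $update = $db->prepare(
        "UPDATE wp_cavalcade_jobs SET status = 'running' WHERE id = :id"
    );
    $update->execute( [ 'id' => $job_id ] );
}

$db->commit();
```

A unique index would be a complementary guard; since args is a longtext, it would likely have to be built over hook, site, and a fixed-width digest of args (e.g. an extra MD5 column), which is a workaround suggestion rather than anything Cavalcade currently ships with.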
Would love to hear if there's a solution, as we're looking to implement Cavalcade right now. If duplicates are being created, Cavalcade would be (at the moment) no better than WP's broken built-in cron.
@archon810 you could try Cavalcade 2.0. Worth noting this doesn't happen all the time; Cavalcade is running on all our production sites and on wordpress.org too.
WordPress.org has a daily cron task running to solve this issue.
It's not ideal, but it's been working for us for quite some time. Duplicate jobs are a little more common with Cavalcade than with the usual WP cron storage, as Cavalcade inserts multiple rows, whereas WP cron just overwrites the previous cron array with a new one (which will only include one cron entry). Using a table lock would help, but Cavalcade would also have to reload the DB cron entries for the current site after locking, which could cause a lot of table locking on a high-usage site like WordPress.org. For reference, Cavalcade on WordPress.org in numbers:
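This is not the actual WordPress.org code, but a hypothetical sketch of what such a daily dedup task could look like: it marks all but the lowest-id copy of each waiting job as completed, grouped by site, hook, args, and nextrun. The table, column, and status names ('waiting', 'completed') are assumptions based on the Cavalcade schema linked earlier.

```php
<?php
// Hypothetical daily dedup task (sketch, not WordPress.org's real code).
add_action( 'cavalcade_dedup_daily', function () {
    global $wpdb;

    $wpdb->query(
        "UPDATE {$wpdb->base_prefix}cavalcade_jobs j
         JOIN (
             SELECT MIN(id) AS keep_id, site, hook, args, nextrun
             FROM {$wpdb->base_prefix}cavalcade_jobs
             WHERE status = 'waiting'
             GROUP BY site, hook, args, nextrun
             HAVING COUNT(*) > 1
         ) d ON j.site = d.site
            AND j.hook = d.hook
            AND j.args = d.args
            AND j.nextrun = d.nextrun
            AND j.id <> d.keep_id
         SET j.status = 'completed'"
    );
} );
```

Scheduling that hook on a daily recurrence (e.g. via `wp_schedule_event`) is left out for brevity.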
Maybe change the logic to confirm 100% that the query that looks for existing jobs returned correctly, and if it errors, don't schedule a potential dupe? In other words, add an error check?
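A minimal sketch of that error-check idea, assuming a WordPress context with `$wpdb`; the `maybe_schedule_job` function, the column names, and the serialized-args comparison are illustrative, not Cavalcade's actual API.

```php
<?php
// Sketch: only schedule when the existing-job lookup definitively
// returned zero rows; on any query error, bail rather than risk a dupe.
function maybe_schedule_job( $hook, $args, $site ) {
    global $wpdb;

    $existing = $wpdb->get_var( $wpdb->prepare(
        "SELECT COUNT(*) FROM {$wpdb->base_prefix}cavalcade_jobs
         WHERE hook = %s AND args = %s AND site = %d AND status = 'waiting'",
        $hook,
        serialize( $args ),
        $site
    ) );

    // If the query errored (or returned nothing), we can't be sure there is
    // no existing job, so skip scheduling instead of creating a duplicate.
    if ( null === $existing || ! empty( $wpdb->last_error ) ) {
        return false;
    }

    if ( (int) $existing > 0 ) {
        return false; // A matching job already exists.
    }

    // ... insert the new job here.
    return true;
}
```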
Hey, thanks for these recent updates. Yes, we do want to understand and resolve this issue for good. Thanks @dd32 for the code that checks for duplicates too. We did some initial discovery work internally, but we haven't reached any direct conclusions yet. We have a few QA sprints coming up next month, so I will try to get this addressed then.
Hello, any news regarding this issue? We implemented @dd32's duplicate-marking solution as a cron job and it's working, but we still see a lot of duplicates (around 30,000 currently). It's a multisite network with 400+ sites and three workers (runners on AWS instances) with 25 max-workers each. All three are at 99% CPU all the time, and we think it is due to the underlying duplicates problem. We modified the runner a bit: for events with an interval of less than 15 minutes, we set the nextrun to the current execution time plus the interval so they never get stuck, and we also changed the nextrun of events with a one-minute interval to two minutes instead (a rough sketch of that change is below). Hopefully you can sort this out!
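This is not the poster's actual patch, only a minimal sketch of the reschedule tweak described above; the `adjust_next_run` function name is hypothetical and the thresholds simply restate the comment.

```php
<?php
// Sketch of the described reschedule tweak for short-interval events.
function adjust_next_run( int $interval, int $stored_nextrun ): int {
    // Stretch one-minute intervals to two minutes.
    if ( $interval <= 60 ) {
        $interval = 120;
    }

    if ( $interval < 15 * 60 ) {
        // For short intervals, schedule relative to the actual execution
        // time so the job can never get stuck behind a nextrun in the past.
        return time() + $interval;
    }

    return $stored_nextrun + $interval;
}
```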
Seeing a potential issue where (in this example) scheduled-post Cavalcade jobs are duplicated, causing a race condition, and posts don't publish.
Will update upon further investigation.