Preload adaptative batch #6427
base: develop
Conversation
…ious batch and errors
- Conservative approach by initializing batch size to min value
- Check duration once per batch for the average
- Remove unused transient
- Remove transient on uninstall
…g actions as develop
…e/preload-adaptative-batch
… avoid filling the AS queue at all cost.
@Khadreal Hey, I just pushed a small change to enforce the max batch size after the min batch size. This covers the case where there are already 45 actions in the AS queue: no matter what, we don't want to add more to the AS queue, so the max has to be enforced after.
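The ordering concern in this comment can be illustrated with a small, language-agnostic sketch (Python here; the function name, constants, and the 50-slot queue limit are invented for illustration, not the plugin's actual values):

```python
# Hypothetical sketch: clamp the adaptive batch size so the maximum is
# enforced AFTER the minimum, ensuring a nearly full queue never overflows.
def next_batch_size(adaptive_size: int, queued: int,
                    min_size: int = 5, max_size: int = 50) -> int:
    # Raise to the minimum first...
    size = max(adaptive_size, min_size)
    # ...then cap by the remaining queue capacity. If e.g. 45 of 50 slots
    # are already taken, we add at most 5, regardless of the minimum.
    return min(size, max_size - queued)
```

Because the cap is applied last, the minimum batch size can never push an almost-full queue past its limit.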
```php
/**
 * Filter the delay between each preload request.
 *
 * @param float $delay_between The defined delay.
 * @return float
 */
$delay_between = apply_filters( 'rocket_preload_delay_between_requests', 500000 );
```
I know it wasn't added by this PR, but shouldn't we add a safeguard here?
Description
When preparing preload batches, the batch size will be adapted based on the average duration of a request to avoid flooding servers with requests if they take too long.
To estimate the request duration, we make a blocking request from time to time when triggering the preload and measure its timing.
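As a rough illustration of the adaptive idea described above (Python sketch; the time budget, bounds, and names are assumptions for illustration, not the plugin's actual values):

```python
# Hypothetical sketch: derive the next batch size from the measured average
# request duration, fitting as many requests as a fixed time budget allows,
# clamped to a [min, max] range.
def batch_size_for(avg_duration_s: float, min_size: int = 5,
                   max_size: int = 50, budget_s: float = 10.0) -> int:
    if avg_duration_s <= 0:
        # No measurement yet (or a degenerate one): allow the maximum.
        return max_size
    return max(min_size, min(max_size, int(budget_s / avg_duration_s)))
```

Slow responses (e.g. 5 s per request) shrink the batch toward the minimum, so the server is not flooded; fast responses let it grow toward the maximum.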
Fixes #6396
Type of change
Is the solution different from the one proposed during the grooming?
Yes. On top of the original idea, we added a mechanism that keeps a running average of the request time in transients.
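The running-average bookkeeping could look roughly like this (Python sketch with a plain dict standing in for WordPress transients; the keys and structure are invented, not the plugin's actual option names):

```python
# Hypothetical sketch of an incremental running average persisted in a
# key-value store, as a stand-in for WordPress transients.
store = {}

def record_request_time(duration_s: float) -> float:
    count = store.get("preload_request_count", 0)
    avg = store.get("preload_request_avg", 0.0)
    # Incremental mean: new_avg = avg + (x - avg) / (count + 1),
    # so we never need to keep the full history of measurements.
    new_avg = avg + (duration_s - avg) / (count + 1)
    store["preload_request_count"] = count + 1
    store["preload_request_avg"] = new_avg
    return new_avg
```

Storing only the count and the current average keeps the transient small while still reflecting every sampled request.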
Checklists
Generic development checklist
Test summary
Tested locally and on the gamma website by checking transients and scheduled actions.