Varnish "Connection reset by peer" error when large catalog is reindexed on schedule Issue #8815 #8919
Conversation
/**
 * @var int
 */
protected $requestSize = 7680;
This class is not designed for extending. Please make it private.
fixed
// Send the request once adding this tag (plus its '|' separators) would exceed the batch size
if ($tagsBatchSize + strlen($formattedTag) > $this->requestSize - count($tags) - 1) {
    $this->purgeCache->sendPurgeRequest(implode('|', array_unique($tags)));
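To make the arithmetic in that condition easier to follow, here is a language-agnostic sketch in Python of the same batching idea: accumulate tags, reserve room for the '|' separators, and flush a purge request whenever the next tag would overflow the budget. The names `purge_in_batches` and `send_purge_request` are illustrative, not Magento APIs, and 7680 mirrors the `$requestSize` default above.

```python
def purge_in_batches(all_tags, send_purge_request, request_size=7680):
    """Send '|'-joined tag patterns in chunks that fit the header budget.

    Sketch only: the real logic lives in the PHP observer shown above.
    """
    batch = []
    batch_len = 0
    for tag in all_tags:
        # +1 accounts for the '|' separator this tag adds to the payload
        extra = len(tag) + (1 if batch else 0)
        if batch and batch_len + extra > request_size:
            # flush the current batch, deduplicated, joined with pipes
            send_purge_request("|".join(dict.fromkeys(batch)))
            batch, batch_len = [], 0
            extra = len(tag)
        batch.append(tag)
        batch_len += extra
    if batch:  # flush the leftovers (the "line 91" send in the diff)
        send_purge_request("|".join(dict.fromkeys(batch)))
```

This also shows why a second send is needed after the loop: the final partial batch never triggers the overflow branch on its own.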
Why do we need to call sendPurgeRequest twice, on lines 82 and 91?
I understand that line 91 purges the leftovers, but I would like to keep the request configuration (payloads) and the requests themselves separate.
Let's implement a service for batch generation and cover it with tests.
Use the new service in the observer: the observer should simply send the purge requests prepared by the new service.
This will allow 3rd-party developers to customize the service in case they need some other behavior.
Make it configurable and customizable.
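A minimal sketch of the separation the reviewer is asking for, shown in Python for illustration: a generator service owns the payload construction, and the observer only iterates and sends. The class name `PurgeBatchGenerator` and its methods are hypothetical, not Magento APIs.

```python
class PurgeBatchGenerator:
    """Yields '|'-joined tag payloads, each within the configured size limit."""

    def __init__(self, request_size=7680):
        self.request_size = request_size

    def batches(self, tags):
        batch, length = [], 0
        for tag in dict.fromkeys(tags):  # dedupe while keeping order
            # +1 for the '|' separator when the batch is non-empty
            extra = len(tag) + (1 if batch else 0)
            if batch and length + extra > self.request_size:
                yield "|".join(batch)
                batch, length = [tag], len(tag)
            else:
                batch.append(tag)
                length += extra
        if batch:
            yield "|".join(batch)


# The observer then contains no batching logic at all:
#
#     for payload in generator.batches(tags):
#         purge_cache.send_purge_request(payload)
```

Because the generator is a separate, injectable service, a 3rd-party module could replace it (e.g. with a different size policy) without touching the observer, which is the customization point the comment suggests.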
This will allow 3rd-party developers to customize the service in case they need some other behavior.
Do you have an example in mind?
As a reminder, the point of this PR is to fix tag purging with Varnish, which does not work on big catalogs; I kept the change as close as possible to the existing behavior to avoid any regression.
I am not against your idea; it is a nice enhancement, but not at the cost of postponing this fix. This PR is a bugfix and your suggestion is an enhancement, and they should be on different roadmaps.
Hi @Vedrillan
Hi @Vedrillan, I can't reproduce the issue. Steps which were executed:
Note: I also tried importing products into a clean DB. Could you please provide more details about your environment?
Closing this PR due to inactivity. @Vedrillan, feel free to reopen this once ready. Thank you.
Description
Currently the Varnish purge is made with one big request containing all the tags. This does not scale, because Varnish limits the accepted request header size.
Raising the Varnish http_req_hdr_len parameter increases Varnish's memory footprint and still does not scale to an arbitrary number of tags: if your catalog continues to grow, you will eventually hit the limit again.
The goal of this change is to send purge requests in fixed-size batches, based on the request size, so that purging scales to any number of tags.
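For context, the workaround the description rejects would look something like the following. `http_req_hdr_len` is a varnishd runtime parameter; the value 65536 and the listen address/VCL path are arbitrary example values, shown only to illustrate why this approach merely postpones the problem.

```shell
# Raise the maximum request header length at varnishd startup.
# A large enough catalog will still overflow whatever limit is chosen,
# and the larger workspace increases Varnish's memory footprint.
varnishd -a :80 -f /etc/varnish/default.vcl -p http_req_hdr_len=65536
```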
Fixed Issues (if relevant)
Manual testing scenarios
Contribution checklist