
Varnish "Connection reset by peer" error when large catalog is reindexed on schedule Issue #8815 #8919

Closed
wants to merge 1 commit into from

Conversation


@Vedrillan Vedrillan commented Mar 16, 2017

Description

Currently the Varnish purge request is sent as one big request containing all the tags. This does not scale, because Varnish limits the accepted request header size.

Raising the Varnish http_req_hdr_len setting increases Varnish's memory footprint and still does not scale to an arbitrary number of tags: if the catalog keeps growing, you will eventually hit the limit again.

The goal of this change is to send the purge requests in fixed-size batches, based on request size, so that cache purging scales to any number of tags.
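
For illustration, here is a minimal sketch of the batching idea described above. The sendPurgeRequest() call and the 7680-byte limit follow the diff in this PR; the helper function and everything else are hypothetical, not the actual patch.

<?php
// Sketch only (assumed helper, not the actual patch): send purge requests in
// fixed-size batches so the imploded tag pattern stays under $requestSize
// bytes instead of growing with the catalog.
function sendBatchedPurge($purgeCache, array $formattedTags, $requestSize = 7680)
{
    $batch = [];
    $batchSize = 0;

    foreach (array_unique($formattedTags) as $tag) {
        // Flush the current batch if adding this tag (plus its '|' separator)
        // would push the imploded pattern over the size limit.
        if ($batch !== [] && $batchSize + strlen($tag) + 1 > $requestSize) {
            $purgeCache->sendPurgeRequest(implode('|', $batch));
            $batch = [];
            $batchSize = 0;
        }
        $batch[] = $tag;
        $batchSize += strlen($tag) + 1;
    }

    // Purge whatever is left after the loop.
    if ($batch !== []) {
        $purgeCache->sendPurgeRequest(implode('|', $batch));
    }
}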

Fixed Issues (if relevant)

  1. Varnish "Connection reset by peer" error when large catalog is reindexed on schedule #8815: Varnish "Connection reset by peer" error when large catalog is reindexed on schedule
  2. ...

Manual testing scenarios

  1. ...
  2. ...

Contribution checklist

  • Pull request has a meaningful description of its purpose
  • All commits are accompanied by meaningful commit messages
  • All new or changed code is covered with unit/integration tests (if applicable)
  • All automated tests passed successfully (all builds on Travis CI are green)

@magento-cicd2
Contributor

magento-cicd2 commented Mar 16, 2017

CLA assistant check
All committers have signed the CLA.

/**
 * @var int
 */
protected $requestSize = 7680;
Contributor

This class is not designed for extending. Please make it private.

Author

fixed

 * Send multiple small purge requests instead of one big request to avoid reaching the Varnish http_req_hdr_len threshold.
@okorshenko okorshenko dismissed their stale review March 17, 2017 21:06

thank you for the fix

@okorshenko okorshenko changed the title Issue #8815 Varnish "Connection reset by peer" error when large catalog is reindexed on schedule Issue #8815 Jul 12, 2017
@okorshenko okorshenko self-assigned this Jul 12, 2017
@okorshenko okorshenko added this to the July 2017 milestone Jul 12, 2017
@okorshenko okorshenko modified the milestones: July 2017, August 2017 Aug 1, 2017

// Send the request once the batch size is reached; the subtraction accounts
// for the '|' separators added by the implode.
if ($tagsBatchSize + strlen($formattedTag) > $this->requestSize - count($tags) - 1) {
    $this->purgeCache->sendPurgeRequest(implode('|', array_unique($tags)));
Contributor

Why do we need to call sendPurgeRequest twice (lines 82 and 91)?

I understand that line 91 purges the leftovers, but I would like to keep the request configuration (payloads) and the requests themselves separate.

Let's implement a service for batch generation and cover it with tests.
Then use the new service in the observer: the observer should simply send the purge requests prepared by the new service.

This would also allow 3rd-party developers to customize the service in case they need some other behavior.
Make it configurable and customizable.
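
For context, a rough sketch of the suggested separation: a standalone batch-generation service that the observer only consumes. The class and method names are hypothetical, not code from this PR.

<?php
// Sketch only: batch generation lives in its own testable service; the
// observer just iterates over the prepared patterns and sends them.
class PurgeBatchGenerator
{
    /** @var int Maximum size in bytes of one pattern, including '|' separators. */
    private $maxBatchSize;

    public function __construct($maxBatchSize = 7680)
    {
        $this->maxBatchSize = $maxBatchSize;
    }

    /**
     * Split formatted tags into pipe-joined patterns no longer than $maxBatchSize.
     *
     * @param string[] $formattedTags
     * @return string[] one ready-to-send pattern per batch
     */
    public function generate(array $formattedTags)
    {
        $batches = [];
        $current = [];
        $size = 0;
        foreach (array_unique($formattedTags) as $tag) {
            if ($current !== [] && $size + strlen($tag) + 1 > $this->maxBatchSize) {
                $batches[] = implode('|', $current);
                $current = [];
                $size = 0;
            }
            $current[] = $tag;
            $size += strlen($tag) + 1; // +1 for the '|' separator
        }
        if ($current !== []) {
            $batches[] = implode('|', $current);
        }
        return $batches;
    }
}

// In the observer (sketch), sending becomes a simple loop:
// foreach ($this->batchGenerator->generate($formattedTags) as $pattern) {
//     $this->purgeCache->sendPurgeRequest($pattern);
// }

Passing the size limit through the constructor is one way to make the behavior configurable and customizable for 3rd-party code, as requested above.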

Author

This would also allow 3rd-party developers to customize the service in case they need some other behavior.

Do you have an example in mind?

As a reminder, the point of this PR is to fix tag purging with Varnish, because it does not work on big catalogs. I kept the change as close as possible to the existing code to avoid regressions as far as possible.

I am not against your idea; it is a nice enhancement, but not at the cost of postponing this fix. This PR is a bugfix, your suggestion is an enhancement, and the two should be on different roadmaps.

@okorshenko
Contributor

Hi @Vedrillan,
Thank you for your pull request. I'm not closing this PR for now, but it should be refactored. Please see my review comments.

@vherasymenko

Hi @Vedrillan

I can't reproduce the issue.

Steps executed:

  1. Install Magento from the develop branch
  2. Configure the Varnish cache
  3. Create 10k products using the command php bin/magento setup:perf:genereate..
  4. Export these products to a CSV file
  5. Run the command varnishlog -g request -q 'ReqMethod eq "PURGE"' (to watch all PURGE requests)
  6. Click "System > Import" and choose the previously exported file
  7. Choose the "Add/Update" behavior
  8. Click "Check Data"
  9. Then click "Import"
  10. A few PURGE requests appeared in the command line, and after a few seconds the products were imported successfully.

Note: I also tried importing the products into a clean DB.

Could you please provide more details about your environment?
If you can, please attach your CSV file with the products.

@okorshenko
Contributor

Closing this PR due to inactivity. @Vedrillan, feel free to reopen it once it is ready. Thank you.
