Data transfer between web server and redis cache is too high #33685
Hi @onlinebizsoft. Thank you for your report.
Please make sure that the issue is reproducible on the vanilla Magento instance following Steps to reproduce. To deploy vanilla Magento instance on our environment, please, add a comment to the issue:
For more details, please, review the Magento Contributor Assistant documentation. Please, add a comment to assign the issue:
🕙 You can find the schedule on the Magento Community Calendar page. 📞 The triage of issues happens in the queue order. If you want to speed up the delivery of your contribution, please join the Community Contributions Triage session to discuss the appropriate ticket. 🎥 You can find the recording of the previous Community Contributions Triage on the Magento Youtube Channel. ✏️ Feel free to post questions/proposals/feedback related to the Community Contributions Triage process to the corresponding Slack Channel.
P.S. I'm on 2.4.2, so it is not the same as #32118.
^^^That issue is still present on 2.4.2-p1.
This seems more complex with a multi-domain use case? Sorry, but I don't have much experience with this scenario. You should add some details: which version are you experiencing this problem on, and are any customized modules in use?
@mrtuvn yes, multiple domains, multiple websites. I'm on 2.4.2 with many customizations; however, I believe none of them is causing this. Our Magento installation is around 20GB (without media and without databases).
@onlinebizsoft Have you configured FPC (full page cache) to utilise Redis?
@IbrahimS2 no, we ended up using a single zone for the system now, so all EC2 and ElastiCache instances are in the same zone (cutting the bill by 50-60%).
@onlinebizsoft Please share the Redis configuration section from your env.php. |
@IbrahimS2
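For readers following along, a typical Redis cache section of `env.php` looks roughly like the sketch below. All values here are illustrative placeholders, not the reporter's actual configuration (in an AWS setup, `server` would be the ElastiCache endpoint):

```php
<?php
// app/etc/env.php (excerpt) -- illustrative values only
return [
    'cache' => [
        'frontend' => [
            'default' => [
                'backend' => 'Cm_Cache_Backend_Redis',
                'backend_options' => [
                    'server' => '127.0.0.1',   // ElastiCache endpoint in practice
                    'port' => '6379',
                    'database' => '0',
                    'compress_data' => '1',    // compressing values reduces transfer volume
                ],
            ],
            'page_cache' => [
                'backend' => 'Cm_Cache_Backend_Redis',
                'backend_options' => [
                    'server' => '127.0.0.1',
                    'port' => '6379',
                    'database' => '1',
                ],
            ],
        ],
    ],
];
```

Note that `compress_data` trades CPU for network volume, which is directly relevant to the transfer costs discussed in this thread.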
cc @vzabaznov, who may know this area better.
Please keep in mind that data transfer between EC2 and Redis is much, much bigger than the response data served to traffic (from both nginx and Varnish). I'm thinking it is possible that some private-data AJAX actions cause this data transfer: these AJAX actions may serve very small responses, but they still request and transfer large amounts of data from Redis.
About the AJAX requests, it seems we have fixed this case here.
@mrtuvn not really; it may help a bit, but it doesn't fix the whole problem. There are also more AJAX cases on any customized system. The root of the problem is how (and which) cache Magento 2 stores and fetches. Does anyone have experience with an L2 caching setup? I'm not sure it is effective, because each web server would have only a very small memory storage.
Yep, that's why I tagged someone in my previous reply; he is the performance team lead.
This approach doesn't work, because writes go only to the primary instance while reads happen on all instances. So I'm thinking we could have a workaround if we separate the read and write Redis connections in env.php. What do you think?
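The Redis cache backend Magento ships with (colinmollenhour/cache-backend-redis) documents a `load_from_slave` option that implements roughly this split: writes go to the primary endpoint while reads are served from a replica. A sketch, with hypothetical ElastiCache hostnames:

```php
<?php
// app/etc/env.php (excerpt) -- hypothetical endpoints, per the
// colinmollenhour/cache-backend-redis README's load_from_slave option
return [
    'cache' => [
        'frontend' => [
            'default' => [
                'backend' => 'Cm_Cache_Backend_Redis',
                'backend_options' => [
                    // primary endpoint: receives all writes
                    'server' => 'primary.example-cache.amazonaws.com',
                    'port' => '6379',
                    // reader endpoint: serves reads, keeping them zone-local
                    'load_from_slave' => 'replica.example-cache.amazonaws.com:6379',
                ],
            ],
        ],
    ],
];
```

If each web server reads from a replica in its own availability zone, cross-zone transfer should be limited to replication traffic; verify against the backend's README before relying on this.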
It's a bit concerning that the Magento team has not made an announcement related to this issue. We are seeing a significant increase in outgoing network usage from the Redis instance used for cache in most of our client environments running 2.3.7 or 2.4.2. Sites that previously had a maximum output of 200Mbps from the Redis cache instance are now at times exceeding 1Gbps since upgrading, and these are relatively small sites. One of our larger clients now goes above 5Gbps on most days. We have been able to reduce the impact by disabling the Magento_CSP module in some cases; however, the overall outgoing throughput is still significantly higher than prior to the upgrades. It would be great if someone from Magento/Adobe could acknowledge this issue and confirm that it is being worked on. While this may not have a major impact on Adobe Cloud customers, the impact is significant for AWS clients due to the increased billing associated with network usage. I can only imagine how many bare-metal 2.4.2 environments are in the wild with a NIC that only supports 1Gbps.
Hi there, we finally feel not alone anymore in this scenario! Thanks for opening an issue! As we have decided to be resilient, we run Adobe Commerce on AWS EC2 across several zones of a region. We have seen a huge impact on the famous Data Transfer (intra-regional) AWS cost line. We have 20 store views (and growing) and run FPC with Redis. We have decided to dig into the topic, checking for improvements:
Maybe some improvements for the future for Adobe Commerce?
I hope you guys won't tell us to use Varnish in order to decrease this Data Transfer chatter between Adobe Commerce and Redis.
Not sure, but Magento has already updated the Redis dependencies in composer.json (latest 2.4-develop code):

"colinmollenhour/cache-backend-file": "~1.4.1",
"colinmollenhour/cache-backend-redis": "^1.14",
"colinmollenhour/credis": "1.12.1",
"colinmollenhour/php-redis-session-abstract": "~1.4.0",

Not sure how much this affects or relates to this issue. For comparison, version 2.4.3:

"colinmollenhour/cache-backend-file": "~1.4.1",
"colinmollenhour/cache-backend-redis": "1.11.0",
"colinmollenhour/credis": "1.11.1",
"colinmollenhour/php-redis-session-abstract": "~1.4.0",

https://github.com/magento/magento2/blob/2.4-develop/composer.json
Is this issue related to a single Redis instance or a cluster? https://devdocs.magento.com/guides/v2.3/release-notes/release-notes-2-3-5-open-source.html#performance-boosts
I have also noticed that, if we assume custom configurations for passwords are already encrypted (payments, ...), I don't understand why we encrypt the whole thing again. We lose precious time serializing/encrypting on write and then decrypting every time we read those keys... Do you guys know the reason for this? With a key of such a size, it may explain issues with parallel generation...
Hey guys, thank you for reporting. Please consider using the L2 cache: https://devdocs.magento.com/guides/v2.4/config-guide/cache/two-level-cache.html
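Per the devdocs page linked above, the two-level setup wraps the remote Redis backend in `RemoteSynchronizedCache` with a local file backend (typically on tmpfs), so most reads stay on the web server. A sketch along the lines of the documented example (endpoint and paths are placeholders):

```php
<?php
// app/etc/env.php (excerpt) -- L2 cache sketch following the devdocs example
return [
    'cache' => [
        'frontend' => [
            'default' => [
                'backend' => '\\Magento\\Framework\\Cache\\Backend\\RemoteSynchronizedCache',
                'backend_options' => [
                    // remote (shared) backend: Redis/ElastiCache
                    'remote_backend' => '\\Magento\\Framework\\Cache\\Backend\\Redis',
                    'remote_backend_options' => [
                        'server' => '127.0.0.1',
                        'port' => '6379',
                        'database' => '0',
                        'compress_data' => '1',
                    ],
                    // local (per-server) backend: files on shared memory
                    'local_backend' => 'Cm_Cache_Backend_File',
                    'local_backend_options' => [
                        'cache_dir' => '/dev/shm/',
                    ],
                ],
                'frontend_options' => [
                    'write_control' => false,
                ],
            ],
        ],
    ],
];
```

The local copy is revalidated against Redis by hash, so only stale entries are re-fetched; whether this helps depends on how much local memory each web node can spare, as noted later in this thread.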
L2 cache was a disaster on our Kubernetes cluster, really bad performance.
@jonathanribas any update on your issue / pain point? Thanks for your answer.
Hi @theozzz, unfortunately we haven't had time to test L2 cache again.
@jonathanribas thanks for your answer. We are also experiencing some Redis transfer slowness (we noticed it in New Relic), especially when traffic is high. The platform has 32 stores and 22 websites. Preloading keys seems not to have any "big" impact for us.
Hi @engcom-Hotel. Thank you for working on this issue.
Hello @onlinebizsoft, Are you still facing this issue? Can you please try to reproduce the issue in the latest 2.4-develop branch and let us know if the issue is still reproducible for you? Thanks |
Dear @onlinebizsoft, We have noticed that this issue has not been updated for a period of 14 Days. Hence we assume that this issue is fixed now, so we are closing it. Please raise a fresh ticket or reopen this ticket if you need more assistance on this. Regards
@mrtuvn @engcom-Hotel @vzabaznov can you get this reopened?
Reopened, since the ticket author responded. Can you confirm whether the problem is still reproducible? @onlinebizsoft
The problem still exists, but we don't have any deeper information. This is confirmed by quite a few users. Remember that in our case we have up to 100 stores (but not the traffic of 100 busy websites); here is NetworkBytesOut from Redis. P.S.: Again, we are always on the latest Magento version, and we use Varnish for full page cache.
Another related issue: #21334
On our side, we have removed encryption/decryption of the cached config, and the results are really good! We have reduced our Data Transfer bill by around 30 to 40%!
@jonathanribas so it looks like most of the data transfer is because of the SYSTEM Redis cache key?
@onlinebizsoft yes it is.
@igorwulff did you make any separate measurements for the preload-keys feature in Redis? From what I can see, it brings no improvement for me, and it seems rather useless to collect small Redis keys one by one just to save a few Redis calls out of the 100-200 total per page. What do you think? Or is there something I don't understand about this feature?
@onlinebizsoft, if you use AWS and more than one zone inside your region, take a look at this AWS notification; it should help reduce data transfer between zones of the same region: "We have observed that your Amazon VPC resources are using a shared NAT Gateway across multiple Availability Zones (AZ). To ensure high availability and minimize inter-AZ data transfer costs, we recommend utilizing separate NAT Gateways in each AZ and routing traffic locally within the same AZ. Each NAT Gateway operates within a designated AZ and is built with redundancy in that zone only. As a result, if the NAT Gateway or AZ experiences failure, resources utilizing that NAT Gateway in other AZ(s) also get impacted. Additionally, routing traffic from one AZ to a NAT Gateway in a different AZ incurs additional inter-AZ data transfer charges. We recommend choosing a maintenance window for architecture changes in your Amazon VPC."
I use this and it works well:
@Nuranto @jonathanribas @igorwulff Just FYI, I realized that Magento loads the config cache for all stores, and the config cache is one of the biggest entries in the cache storage, which means the total transfer size is multiplied many times over what it should be.
@onlinebizsoft, yes I know about this.
@onlinebizsoft I would also look into block cache, as each review and price block in Magento has its own cache entry. By using https://github.com/EcomDev/magento2-product-preloader and disabling cache for those blocks via a plugin, I usually drop HMGET calls from 2000+ to 110 max.
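One way such a plugin might look: `AbstractBlock` skips the block-cache save/load when no `cache_lifetime` is set, so a before-plugin on `toHtml()` that unsets it effectively disables caching for the intercepted block type. This is only a sketch; `Vendor_Module` and the choice of target block are hypothetical, and which blocks are safe to intercept depends on your theme and catalog setup:

```php
<?php
// etc/di.xml (hypothetical Vendor_Module) would declare something like:
// <type name="Magento\Framework\Pricing\Render\PriceBox">
//     <plugin name="vendor_disable_block_cache"
//             type="Vendor\Module\Plugin\DisableBlockCache"/>
// </type>

namespace Vendor\Module\Plugin;

use Magento\Framework\View\Element\AbstractBlock;

class DisableBlockCache
{
    /**
     * Unset cache_lifetime before rendering so the block is rendered
     * fresh from the (preloaded) data instead of being read from /
     * written to the Redis block cache.
     */
    public function beforeToHtml(AbstractBlock $subject)
    {
        $subject->setData('cache_lifetime', null);
        return null; // arguments unchanged
    }
}
```

Combined with a product preloader like the one linked above, the data those blocks need is already in memory, so dropping their cache entries removes Redis round-trips rather than adding database load.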
Summary (*)
We run a Magento site on multiple servers in multiple AWS regions; the website has multiple domains and multiple languages, and the catalog has many products as well.
We realized that the data transfer cost on the AWS bill is much higher than normal; it accounts for up to 50-60% of the total.
We figured out that most of the data transfer was from ElastiCache (one region) to EC2.
Examples (*)
Proposed solution
From our side, we are looking at two things:
https://aws.amazon.com/blogs/database/reduce-cost-and-boost-throughput-with-global-datastore-for-amazon-elasticache-for-redis/
https://devdocs.magento.com/guides/v2.4/config-guide/cache/two-level-cache.html
I think the core code needs to be reworked.
Please provide Severity assessment for the Issue as Reporter. This information will help during Confirmation and Issue triage processes.