Elasticsearch version: 5.0.0
Plugins installed: none
JVM version: 1.8.0_77
OS version: CentOS release 6.4 (Final), Linux 2.6.32-358.el6.x86_64 #1 SMP Fri Feb 22 00:31:26 UTC 2013 x86_64 GNU/Linux

Description of the problem including expected versus actual behavior:

After upgrading from v2.4.0 to v5.0.0 last weekend (Nov 11th), we saw a significant increase in two resource metrics:

1. Network traffic increased by about 7x.

![image](https://cloud.githubusercontent.com/assets/10510416/20377518/1dd0b61e-accb-11e6-81d8-b1d70aa8ddb0.png)
![image](https://cloud.githubusercontent.com/assets/10510416/20377523/25ab8544-accb-11e6-9b03-a1fdb6e36121.png)
According to the documentation, `transport.tcp.compress` defaults to false in both v2.4.0 and v5.0.0, so we are not sure why there is such a big difference. Our architecture uses several dedicated client nodes as data entry points, and NIC bandwidth has now become a bottleneck, which was never a concern on v2.4.0. For now we are working around it by enabling TCP compression on the client nodes.
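For reference, the workaround described above is a one-line setting in each client node's `elasticsearch.yml` (a sketch of what we applied; whether it helps depends on having CPU headroom, since every inter-node transport message then gets compressed):

```yaml
# elasticsearch.yml on the client (coordinating) nodes.
# Compresses all node-to-node transport traffic; defaults to false
# in both 2.x and 5.x. Trades CPU time for NIC bandwidth.
transport.tcp.compress: true
```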
2. Virtual memory on the data nodes increased by about 5x, while the total on-disk index size is roughly unchanged after the upgrade.

![image](https://cloud.githubusercontent.com/assets/10510416/20377670/1bb2bb2e-accc-11e6-84ea-bde0beff0668.png)

I understand that virtual memory is not the physical memory actually required, but it raises the concern that the data nodes may need to map more data from disk into memory than the previous version did, and that search performance could suffer when queries need to touch a large amount of on-disk data. Given that we are also seeing the occasional node wear-out issue reported in #21611, I suspect these could be related.
I am not sure how to explain the increase in network traffic, but the increase of virtual memory usage is expected since we now read all parts of the index using mmap. As you already noticed, this does not necessarily mean that physical memory usage should increase. Moreover, the access patterns to the index did not change so I would not expect a performance degradation. There is more information about this change at #17616.
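The mmap point can be demonstrated outside Elasticsearch with a small sketch (plain Python, nothing ES-specific; Elasticsearch does the equivalent through Lucene's memory-mapped directory): mapping a file grows the process's virtual address space by the full file size immediately, but physical pages are only faulted into RAM when they are actually read.

```python
# Minimal illustration of why mmap inflates virtual memory (VSZ) without
# necessarily consuming physical memory (RSS): the mapping reserves address
# space up front; pages become resident only when touched.
import mmap
import os
import tempfile

# A 16 MiB file standing in for an index segment file.
path = os.path.join(tempfile.mkdtemp(), "segment.dat")
with open(path, "wb") as f:
    f.seek(16 * 1024 * 1024 - 1)
    f.write(b"\x00")

with open(path, "rb") as f:
    # The whole file now counts toward this process's virtual memory,
    # even though almost none of it is resident in RAM yet.
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # Reading a byte faults exactly one page into physical memory.
    first_byte = mm[0]
    mm.close()

print(first_byte)  # prints 0
```

Comparing the `VSZ` and `RSS` columns of `ps` for the Elasticsearch process before and after a query-heavy period shows the same effect: only `RSS` reflects memory the node is really using.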