Memory management: do not enforce the BigArrays limit on the network layer and the translog. #6332
Comments
Would this significantly reduce the memory usage monitored by this breaker? I am concerned that our default of 20% for the BigArrays breaker is too low. I'm not sure whether this change is enough to address that, or whether we need to change the default separately.
I'm not sure how much the monitored memory usage would be reduced, but I agree that 20% might be too low. To me, the issue is this: if you have very large field data, the limit makes sense as a way to ensure you never run out of memory; on the other hand, if you rely on doc values, it makes you waste memory. I heard @dakrone is working on a way to share memory across several breakers; I think that would help with this issue?
Yes, currently I have a POC I'm working on for a CircuitBreakerService that has child circuit breakers (for example, one for fielddata and one for BigArrays/requests); when a circuit break happens on one breaker, it can "borrow" space from another breaker if memory is available within configurable minimums and maximums.
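The borrowing idea above can be sketched roughly as follows. This is a hypothetical illustration, not Elasticsearch's actual CircuitBreakerService API: the class names, the single hard parent limit, and the soft per-child limits are all assumptions made for the sake of the example.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical child breaker with a soft, per-consumer limit.
class ChildBreaker {
    final String name;
    final long limit; // soft limit; may be exceeded by borrowing
    long used;

    ChildBreaker(String name, long limit) {
        this.name = name;
        this.limit = limit;
    }
}

// Hypothetical parent service: the total budget is hard, child limits are soft.
class HierarchicalBreakerService {
    private final Map<String, ChildBreaker> children = new HashMap<>();
    private final long parentLimit; // hard budget shared by all children

    HierarchicalBreakerService(long parentLimit) {
        this.parentLimit = parentLimit;
    }

    void register(String name, long limit) {
        children.put(name, new ChildBreaker(name, limit));
    }

    /** Try to reserve bytes; returns false when the circuit breaks. */
    boolean addEstimate(String name, long bytes) {
        ChildBreaker child = children.get(name);
        long totalUsed = children.values().stream().mapToLong(c -> c.used).sum();
        if (totalUsed + bytes > parentLimit) {
            return false; // overall budget exhausted: break
        }
        // Within the parent budget: allow it, even past the child's own soft
        // limit -- that overshoot is the "borrowing" from siblings' headroom.
        child.used += bytes;
        return true;
    }
}
```

A real implementation would also need the configurable borrowing minimums/maximums mentioned above, and thread safety; both are omitted here to keep the sketch short.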
…layer and the translog. This commit disables memory circuit breaking in BigArrays for FsTranslog, NettyHttpServerTransport and NettyTransport so that these components are not impacted by heavy search requests. Close elastic#6332
@kevinkluge I opened a separate pull request to deal with the default breaker value: #6375
The BigArrays limit is currently shared by the translog, netty, http and some queries/aggregations. If any of these consumers starts taking a lot of memory, the others might fail to allocate memory, which could have bad consequences, e.g. if ping requests can't be sent. The plan is to come up with a better solution in 1.3. Close elastic#6332
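The fix described above (exempting infrastructure components from the shared limit) can be sketched like this. The `Allocator` class and its method names are hypothetical, invented for illustration; the point is only that search-path allocations keep a breaking limit while translog/network allocations get a non-breaking one.

```java
// Hypothetical allocator: a finite limit enables circuit breaking,
// Long.MAX_VALUE effectively disables it.
class Allocator {
    private final long limit;
    private long allocated;

    private Allocator(long limitBytes) {
        this.limit = limitBytes;
    }

    // For consumers that should break under memory pressure (e.g. search).
    static Allocator withBreaking(long limit) {
        return new Allocator(limit);
    }

    // For infrastructure that must never break (e.g. translog, transport).
    static Allocator nonBreaking() {
        return new Allocator(Long.MAX_VALUE);
    }

    byte[] newByteArray(int size) {
        if (allocated + size > limit) {
            throw new IllegalStateException("circuit break: limit exceeded");
        }
        allocated += size;
        return new byte[size];
    }
}
```

With this split, a heavy search request can trip its own breaker without starving the translog or the network layer of allocations.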
BigArrays byte accounting (#6050) applies all the time. However, we might want to disable it for cluster-management-related operations so that they are not impacted, e.g. by heavy search requests.