Release event pointers faster as they go through the pipeline #38166
Conversation
Pinging @elastic/elastic-agent (Team:Elastic-Agent)
This pull request does not have a backport label. To fix this, add the backport labels for the needed branches.
Changes LGTM. Would it be possible to add some unit tests, basically ensuring the appropriate event fields are nil'd out after calls to the methods changed in this PR?
LGTM
💚 Build Succeeded
cc @faec
Make two changes to allow event data to be freed from memory faster:
- Clear event pointers from the memory queue buffer when they are vended instead of when they're acknowledged. (The data will be preserved in the event batch structure until acknowledgment.)
- Clear event pointers from the batch structure immediately after it is acknowledged instead of waiting for the batch to be freed naturally.

In benchmarks of Filebeat with saturated Filestream input going to an Elasticsearch output, this lowered average memory footprint by ~10%.

(cherry picked from commit 2e4cbdb)
Delete the proxy queue, a prototype written to reduce memory use in the old shipper project. Recent improvements to the memory queue (#37795, #38166) added support for the same early-free mechanisms as the proxy queue, so it is now redundant. The proxy queue was never used or exposed in a public release, so there are no compatibility concerns. (This is pre-cleanup for adding early-encoding support, to avoid implementing new functionality in a queue that is no longer used.)
Proposed commit message
Make two changes to allow event data to be freed from memory faster:
- Clear event pointers from the memory queue buffer when they are vended instead of when they're acknowledged. (The data will be preserved in the event batch structure until acknowledgment.)
- Clear event pointers from the batch structure immediately after it is acknowledged instead of waiting for the batch to be freed naturally.
In benchmarks of Filebeat with saturated Filestream input going to an Elasticsearch output, this lowered average memory footprint by ~10%.
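For context, here is a minimal Go sketch of the two changes. The types and method names below are hypothetical stand-ins, not the actual memory queue or pipeline APIs in Beats; they only illustrate where the pointers are cleared.

```go
package main

import "fmt"

// Event and Batch are simplified, hypothetical stand-ins for the real
// pipeline types.
type Event struct {
	Fields map[string]interface{}
}

type Batch struct {
	events []*Event
}

// memQueue is a much-simplified memory queue holding event pointers.
type memQueue struct {
	buf []*Event
}

// get vends up to n events as a Batch. The queue clears its own buffer
// slots as soon as the pointers are handed off, so the only remaining
// reference lives in the Batch until acknowledgment.
func (q *memQueue) get(n int) *Batch {
	if n > len(q.buf) {
		n = len(q.buf)
	}
	events := make([]*Event, n)
	copy(events, q.buf[:n])
	for i := 0; i < n; i++ {
		q.buf[i] = nil // release the queue buffer's reference immediately
	}
	q.buf = q.buf[n:]
	return &Batch{events: events}
}

// ACK is invoked once the output has delivered the batch. Clearing the
// event pointers here lets the garbage collector reclaim the event data
// right away, instead of waiting for the Batch struct itself to be freed.
func (b *Batch) ACK() {
	for i := range b.events {
		b.events[i] = nil
	}
}

func main() {
	q := &memQueue{buf: []*Event{
		{Fields: map[string]interface{}{"msg": "a"}},
		{Fields: map[string]interface{}{"msg": "b"}},
	}}
	batch := q.get(2)
	fmt.Println("vended", len(batch.events), "events; queue references cleared")
	batch.ACK() // event data is now eligible for garbage collection
}
```

The net effect in both places is the same: each stage drops its pointer to an event as soon as responsibility passes to the next stage, so the event's memory becomes collectible at the earliest safe moment.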
Checklist
- I have made corresponding changes to the documentation
- I have made corresponding changes to the default configuration files
- I have added an entry in CHANGELOG.next.asciidoc or CHANGELOG-developer.next.asciidoc.