mining: add getMemoryLoad() and track template non-mempool memory footprint #33922
base: master
Conversation
Prepare template destruction handling for a later commit that checks memory management:
- add a destroy_template helper which awaits the result and avoids calling destroy() if we never received a template
- reverse the order and prevent template override, so that template and template2 (which don't have transactions) are destroyed last
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Code Coverage & Benchmarks
For details see: https://corecheck.dev/bitcoin/bitcoin/pulls/33922.

Reviews
See the guideline for information on the review process. If your review is incorrectly listed, please react with 👎 to this comment and the bot will ignore it on the next update.

Conflicts
Reviewers, this pull request conflicts with the following ones:
If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.
I haven't benchmarked this yet on mainnet, so I'm not sure if checking every (unique) transaction for mempool presence is unacceptably expensive. If people prefer, I could also add a way for the …
IPC clients can hold on to block templates indefinitely, which has the same impact as when the node holds a shared pointer to the CBlockTemplate. Because each template in turn tracks CTransactionRefs, transactions that are removed from the mempool will not have their memory cleared. This commit adds bookkeeping to the block template constructor and destructor that will let us track the resulting memory footprint.
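A minimal sketch of the kind of bookkeeping this describes, assuming Bitcoin Core's CTransactionRef, Txid, and RecursiveDynamicUsage; the TxUsage struct, the map layout, and the Track/Untrack helpers are made up for illustration and are not the PR's actual code.

```cpp
#include <core_memusage.h>          // RecursiveDynamicUsage (dynamic memory of a tx)
#include <primitives/transaction.h> // CTransactionRef, Txid
#include <cstddef>
#include <map>
#include <vector>

// Hypothetical: count how many live templates reference each transaction and
// how much dynamic memory each of those transactions uses.
struct TxUsage {
    size_t ref_count{0};
    size_t usage{0};
};
using TxTemplateMap = std::map<Txid, TxUsage>;

// Constructor-side bookkeeping: register every transaction except the dummy
// coinbase at index 0, which submitSolution() may rewrite in place.
void TrackTemplateTxs(TxTemplateMap& tx_refs, const std::vector<CTransactionRef>& vtx)
{
    for (size_t i = 1; i < vtx.size(); ++i) {
        auto& entry = tx_refs[vtx[i]->GetHash()];
        if (entry.ref_count++ == 0) entry.usage = RecursiveDynamicUsage(vtx[i]);
    }
}

// Destructor-side bookkeeping: drop the counts again, so totals only reflect
// transactions kept alive by templates that still exist.
void UntrackTemplateTxs(TxTemplateMap& tx_refs, const std::vector<CTransactionRef>& vtx)
{
    for (size_t i = 1; i < vtx.size(); ++i) {
        auto it = tx_refs.find(vtx[i]->GetHash());
        if (it != tx_refs.end() && --it->second.ref_count == 0) tx_refs.erase(it);
    }
}
```

How the totals are then filtered for mempool presence and summed isn't visible in this excerpt; the sketch only shows the constructor/destructor pairing.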
21ad8c1 to f22413f (force-pushed)
🚧 At least one of the CI tasks failed.

Hints
Try to run the tests locally, according to the documentation. However, a CI failure may still …
Leave a comment here, if you need help tracking down a confusing failure.
TxTemplateMap& tx_refs{*Assert(m_tx_template_refs)};
// Don't track the dummy coinbase, because it can be modified in-place
// by submitSolution()
Allow IPC clients to inspect the amount of memory consumed by non-mempool transactions in blocks. Returns a MemoryLoad struct which can later be expanded to e.g. include a limit. Expand the interface_ipc.py test to demonstrate the behavior and to illustrate how clients can call destroy() to reduce memory pressure. Add bench logging to collect data on whether caching or simplified heuristics are needed, such as not checking for mempool presence.
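The MemoryLoad fields aren't shown in this excerpt, so the following is only a sketch of the shape such an interface could take; the field names and the placement on the Mining class are assumptions, not the PR's actual definition.

```cpp
#include <cstddef>

// Guessed fields: the commit message only says a MemoryLoad struct is
// returned and could later grow e.g. a limit field.
struct MemoryLoad {
    size_t non_mempool_txs{0};   // unique template transactions missing from the mempool
    size_t non_mempool_bytes{0}; // memory those transactions keep alive via template refs
};

// Assumed to sit on the Mining IPC interface alongside createNewBlock();
// clients poll it and call destroy() on stale templates to reduce pressure.
class Mining
{
public:
    virtual ~Mining() = default;
    virtual MemoryLoad getMemoryLoad() = 0;
};
```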
f22413f to 3b77529 (force-pushed)
Concept ACK
I think it would be better if we had internal memory management for the mining interface IPC, since we hold on to the block templates.
I would suggest the following approach:
- Add memory budget for the mining interface.
- Introduce a tracking list of recently built block templates and total memory usage.
- Add templates to the list and increment the memory usage after every createnewblock or waitnext return.
- Whenever the memory budget is exhausted, we should release templates in FIFO order (see the sketch after this list).
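A rough sketch of the suggested budget/FIFO mechanism, for illustration only: the class, the handle type, and the budget handling below are hypothetical and not part of the PR.

```cpp
#include <cstddef>
#include <deque>
#include <memory>

// Illustrative only: a FIFO list of node-side template handles plus a byte budget.
struct TrackedTemplate {
    std::shared_ptr<void> handle; // stands in for the node-side template reference
    size_t bytes{0};
};

class TemplateBudget
{
    std::deque<TrackedTemplate> m_fifo; // oldest template at the front
    size_t m_used{0};
    const size_t m_budget;

public:
    explicit TemplateBudget(size_t budget) : m_budget{budget} {}

    // Called after every createnewblock / waitnext return (per the suggestion above).
    void Add(TrackedTemplate t)
    {
        m_used += t.bytes;
        m_fifo.push_back(std::move(t));
        // Once the budget is exhausted, release the oldest templates first;
        // dropping the reference is what lets the memory be freed.
        while (m_used > m_budget && !m_fifo.empty()) {
            m_used -= m_fifo.front().bytes;
            m_fifo.pop_front();
        }
    }
};
```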
Since we create a new template after a time interval elapses even if fees increase, and that interval is usually enough for the client to receive and distribute the template to miners, I think this mechanism should be safe: by the time the budget is exhausted, miners will have long since switched to the most recent template, because of the time interval between waitnext returns.
Mining interface clients should also handle their own memory internally.
Currently, I don’t see much use for the exposed getMemoryLoad method. In my opinion, we should not rely on the IPC client to manage our memory.
It seems counterintuitive, but from a memory management perspective IPC clients are treated no differently than our own code. And if we started FIFO-deleting templates that are used by our own code, we'd crash. So I think FIFO deletion should be a last resort (not implemented here). There's another reason why we should give clients an opportunity to gracefully release templates in whatever order they prefer. Maybe there are 100 downstream ASICs, one of which is very slow at loading templates, so it's only given a new template when the tip changes, not when there's a fee change. In that scenario you have a specific template that the client wants to "defend" at all costs. In practice I'm hoping none of this matters and we can pick and recommend defaults that make it unlikely to get close to a memory limit, other than during some weird token launch.
IMHO we should separate that and treat clients differently from our own code, because they are different codebases and separate applications with their own memory.
I see your point but I don’t think that’s a realistic scenario, and I think we shouldn’t design software to be one-size-fits-all.
Delegating template eviction responsibility to the client can put us in a situation where they handle it poorly and cause us to OOM (but I guess your argument is that we'd rather take that chance than be in a situation where we make miners potentially lose out on rewards).
Implements the template memory footprint tracking discussed in #33899, but does not yet impose a limit.