[p2p] Add Peer struct for per-peer data in net processing #19607
So even apart from not using `cs_main`, there's another design change happening here in how we use these objects, when compared with `CNodeState` -- no lock on the map is maintained in order to have a `shared_ptr` to the peer object. That means that our code needs to be able to handle having the entry in the map erased out from under it, correct? That might be worth mentioning somewhere as a design consideration for future code.
Yes, that's possible. Do you think the comment on the map needs expanding? It currently says "Once a shared pointer reference is taken, the lock may be released. Individual fields are protected by their own locks."
This is mostly a question for future stuff that gets moved in but... What if the refcount is more than 1 here? Do we continue tearing it down?
As a contrived (bad) example: what if someone bumps the misbehavior score after it's been evaluated here?
We'll remove the `Peer` shared ptr from `g_peer_map`, but we don't destruct the object until the refcount drops to 0. Your contrived example is fine. There would have to be another thread currently in `Misbehaving()` holding a shared ptr to the `Peer` object. We'd remove it from the map, the other thread would increment the misbehaving score, and then the `Peer` object would be destructed when that thread left the `Misbehaving()` function.

I've taken a look at the future commits in https://github.com/jnewbery/bitcoin/tree/2020-06-cs-main-split and I can't see any problems with this style of teardown.
My point here was that calls to `Misbehaving()` might go unobserved, because the score may have already been evaluated before the other thread bumped it. This is not possible with the current behavior because of `cs_main`: once we're in `FinalizeNode`, we're guaranteed that no other thread is accessing it (unless they've cached the `CNodeState*`, which would be a bug).

Obviously we don't have any threads doing that currently, so I'm wondering if we want to continue to enforce that invariant with a:
I'm not too bothered if there's a rare misbehaving bump that gets missed. My concern is that Peer's destructor may eventually gain some functionality, and we'll want to know if we need to keep track of which thread deletes it.
Edit: Trying my point one more time. As I see it, this PR removes the guarantee that every call into `Misbehaving()` will be reflected upon evaluation in `FinalizeNode()`. That's not a huge deal, but it should at least be noted in case more important guarantees are removed by future moves into `Peer`.
Thanks @theuni. This is a very good point. I'll add a comment about not relying on this being the final state of `Peer` in a future commit.
Should this and `mapNodeState.erase(nodeid)` happen at the same time, under the same `cs_main` lock?

What are the consequences of them being out of sync? (Still reachable by `State()` but not by `GetPeerRef()`.)
No bad consequences, but I've moved the `Peer` cleanup underneath the `cs_main` lock anyway to make this simpler to think about.