force-pushed e106259 to c0ca201
I've looked into it. Still, I liked that they choose an address based on distance https://github.com/btcsuite/btcd/blob/master/addrmgr/addrmanager.go#L1046 (we choose randomly).
force-pushed c0ca201 to 51efd40
// Try to pick numToDial addresses to dial.
// TODO: improve logic.
@jaekwon what did you have in mind? Something concrete? I've managed to get rid of the `alreadyConnected` flag by using the addrbook's old group. I will try to think of something else in the meantime.
I don't understand how `alreadyConnected` can be removed. An address's membership in the addrbook (old or new) has no bearing on whether the address is already connected.
If we mark the peer good after a successful connection and pick addresses with a bias of 100% (always from the new buckets), we don't need to check `alreadyConnected`, since all the peers we already have a connection with will be in the old buckets. (b898bc3)
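For concreteness, a minimal Go sketch of this (later reverted) idea; the types are trimmed stand-ins for the real ones in this package, and `MarkGood` is assumed to be the call that promotes an address into the old buckets:

```go
package main

// Trimmed stand-ins for the real p2p types; illustration only.
type NetAddress struct{ DialString string }

type AddrBook interface {
	MarkGood(addr *NetAddress)           // assumed: promotes addr into the old buckets
	PickAddress(newBias int) *NetAddress // bias (0-100) toward the new buckets
}

type PEXReactor struct{ book AddrBook }

// On a successful connection, vet the address immediately...
func (r *PEXReactor) onPeerConnected(addr *NetAddress) {
	r.book.MarkGood(addr)
}

// ...so dialing can draw from the new buckets only: every address we are
// already connected to now lives in the old buckets, and the explicit
// alreadyConnected check becomes unnecessary.
func (r *PEXReactor) pickAddressToDial() *NetAddress {
	return r.book.PickAddress(100) // 100% bias toward the new buckets
}

func main() {}
```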
Sure, but that's not what we want to do.
Old bucket / New bucket are arbitrary categories denoting whether an address is vetted or not; that needs to be determined over time via a heuristic we haven't perfected yet, or perhaps edited manually by the node operator. It should not be used to compute which addresses are already connected.
The old code may have been buggy, but these modifications are definitely bad.
If we mark the peer good after successful connection...
Basically, we need to work harder on our good-peer/bad-peer marking. What we're currently doing in terms of marking good/bad peers is just a placeholder.
It should not be the case that an address becomes old/vetted upon a single successful connection. That's not the intent of the old/new system.
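Purely as an illustration of what "determined over time via a heuristic" could mean, promotion might require several distinct successful connections rather than a single one; the threshold and every name below are invented for the sketch, not from the PR:

```go
package main

// Illustrative promotion heuristic: an address becomes old/vetted only
// after repeated successful connections.
type NetAddress struct{ DialString string }

type AddrBook interface {
	MarkGood(addr *NetAddress) // assumed: promotes addr into the old buckets
}

const promoteAfter = 3 // invented threshold

type vetTracker struct {
	book      AddrBook
	successes map[string]int // keyed by dial string
}

func newVetTracker(book AddrBook) *vetTracker {
	return &vetTracker{book: book, successes: make(map[string]int)}
}

func (v *vetTracker) onSuccessfulConnection(addr *NetAddress) {
	v.successes[addr.DialString]++
	if v.successes[addr.DialString] == promoteAfter {
		v.book.MarkGood(addr) // vetted only after repeated success
	}
}

func main() {}
```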
Thank you for explaining this to me; it wasn't clear to me before. I will revert the change and add a comment to the source code so we can refer to it in the future.
force-pushed 51efd40 to b898bc3
pex_reactor.go (outdated)
func (r *PEXReactor) RemovePeer(p *Peer, reason interface{}) {
	addr := NewNetAddressString(p.ListenAddr)
	// addr will be ejected from the book
	r.book.MarkBad(addr)
I think this will need to depend on the reason. If the peer just goes offline, we probably don't want to remove them.
Are you sure?
NOTE: The peer will be proposed to us by other peers (PexAddrsMessage) and we will add it again upon successful connection.
But if the peer actually went offline, won't all the other peers remove him too? Granted, the PEX should enable everyone to find him when he next connects.
Yes. The peer will need to send the first requests to others by itself (he will have an addrbook or the seeds). Is that bad?
No, I guess it's ok. Down the road I think we will want to preserve "MarkBad" for peers that actually misbehave.
Sounds right. I will add `RemoveAddress` to addrbook then.
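A sketch of the split being agreed on here, with `RemoveAddress` as the assumed new addrbook method and a hypothetical reason type to tell misbehavior apart from a plain disconnect:

```go
package main

// Trimmed stand-ins for the surrounding p2p types; illustration only.
type NetAddress struct{ DialString string }

type AddrBook interface {
	MarkBad(addr *NetAddress)       // existing: ejects and blacklists the address
	RemoveAddress(addr *NetAddress) // assumed new method: just forgets the address
}

type Peer struct{ ListenAddr string }

type PEXReactor struct{ book AddrBook }

// PeerMisbehaved is a hypothetical reason value carrying the cause of the
// disconnect; the real reason type is not pinned down in this thread.
type PeerMisbehaved struct{ Info string }

func (r *PEXReactor) RemovePeer(p *Peer, reason interface{}) {
	addr := &NetAddress{DialString: p.ListenAddr}
	if _, misbehaved := reason.(PeerMisbehaved); misbehaved {
		r.book.MarkBad(addr) // reserve MarkBad for actual misbehavior
		return
	}
	// The peer just went offline or the connection dropped: forget the
	// address without blacklisting it.
	r.book.RemoveAddress(addr)
}

func main() {}
```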
Looks good. I think limiting to e.g. 1000 msgs/hour is fine for now. Maybe down the road we want to keep track of the quality of peer messages, so if peerA keeps telling us about peers we can't connect to, then maybe we should care less about peerA. But I don't think that kind of complexity is a priority right now.
OK. I will add the comment.
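For reference, a sketch of the cap itself; the 1000/hour figure comes from the comment above, while the method and constant names are invented (the hourly reset appears in a later diff in this thread):

```go
package main

// Trimmed reactor holding just the rate-limit state; illustration only.
type PEXReactor struct {
	msgCountByPeer map[string]uint16
}

func NewPEXReactor() *PEXReactor {
	return &PEXReactor{msgCountByPeer: make(map[string]uint16)}
}

// Assumed cap, per the discussion above: 1000 messages per hour per peer.
const maxMsgCountByPeer uint16 = 1000

// receiveFrom counts one message from peerKey and reports whether that
// peer is still within its hourly budget; callers would drop the message
// when this returns false. A ticker elsewhere resets the counters.
func (r *PEXReactor) receiveFrom(peerKey string) bool {
	r.msgCountByPeer[peerKey]++
	return r.msgCountByPeer[peerKey] <= maxMsgCountByPeer
}

func main() {}
```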
force-pushed 9481c05 to 4d04534
after discussion with @ebuchman (#10 (comment))
pex_reactor.go (outdated)
// will remove him too. The peer will need to send first requests to others by
// himself (he will have an addrbook or the seeds).
func (r *PEXReactor) RemovePeer(p *Peer, reason interface{}) {
	addr := NewNetAddressString(p.ListenAddr)
AddPeer/RemovePeer are just housekeeping methods, we shouldn't remove a peer from the addrbook just because they got disconnected.
We could rename AddPeer/RemovePeer to "OnPeerConnect" and "OnPeerDisconnect". If we aren't keeping track of local temp data for each peer here, then we don't have to do anything.
I thought that if we add the peer to the book upon connection (`AddPeer`), we should correspondingly remove him upon losing the connection (`RemovePeer`). It just sounds logical. In addition, the peer will reach us once he is up again (or the network healed; though I'm not sure about the network case, need to test it with `iptables`). #10 (comment)
Ok, I will revert this too.
force-pushed 6562d9b to 24b8716
after discussion with @ebuchman (#10 (comment))
pex_reactor.go (outdated)
-	try := pexR.book.PickAddress(newBias)
+	// NOTE always picking from the new group because old one stores already
+	// connected peers.
+	try := r.book.PickAddress(100)
This is bad. The purpose of newBias is to first prioritize old (more vetted) peers when we have few connections, but to allow for new (less vetted) peers if we already have many connections. This algorithm isn't perfect, but it somewhat ensures that we prioritize connecting to more-vetted peers. Please revert.
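One way to realize that intent is to derive the bias from how many outbound connections we already have, so a poorly-connected node sticks to vetted addresses and a well-connected one explores; the formula and constants below are illustrative, not the PR's:

```go
package main

// Illustrative only: bias toward new (unvetted) addresses grows with the
// number of outbound peers we already hold.
const maxOutboundPeers = 10 // invented capacity

// newBias returns a value in [10, 100] to pass to book.PickAddress:
// with 0 peers we mostly pick old/vetted addresses (bias 10); once we are
// at capacity we are happy to try new ones (bias 100).
func newBias(numOutPeers int) int {
	if numOutPeers > maxOutboundPeers {
		numOutPeers = maxOutboundPeers
	}
	return 10 + (90*numOutPeers)/maxOutboundPeers
}

func main() {}
```

ensurePeers would then call something like `try := r.book.PickAddress(newBias(numOutPeers))` instead of hard-coding the bias.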
force-pushed b489774 to 32661a0
after discussion with @ebuchman (#10 (comment))
Rebased and reverted some commits as per Jae's comments. Done here.
no need for a repeat timer here (no need for goroutine safety)
optimizations: if we move the peer to the old bucket as soon as it's connected and pick only from the new group, we can skip the alreadyConnected check
This is better than waiting, because while we wait anything could happen (a crash, a timeout in the code that's using the addrbook, ...). If we save immediately, we have a much greater chance of success.
after discussion with @ebuchman (#10 (comment))
force-pushed 1720c41 to 8655e24
pex_reactor.go (outdated)
for {
	select {
	case <-ticker.C:
		r.msgCountByPeer = make(map[string]uint16)
this doesn't seem thread safe
Does it have to be? A peer has only one MConn; MConn calls `onReceive` in the same thread (blocking).
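For what it's worth, there is one receive thread per connected peer, so several receive paths plus this ticker goroutine can touch the map at once. If that turns out to matter, a guarded variant of the reset could look like the sketch below; the mutex is an addition for illustration, not what the diff does:

```go
package main

import (
	"sync"
	"time"
)

// Sketch: the hourly reset guarded by a mutex, since the ticker goroutine
// and the per-peer receive threads may otherwise access msgCountByPeer
// concurrently. Field names mirror the diff; the lock is an assumption.
type PEXReactor struct {
	mtx            sync.Mutex
	msgCountByPeer map[string]uint16
	quit           chan struct{}
}

func (r *PEXReactor) flushMsgCountByPeerRoutine() {
	ticker := time.NewTicker(time.Hour)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			r.mtx.Lock()
			r.msgCountByPeer = make(map[string]uint16) // hourly reset
			r.mtx.Unlock()
		case <-r.quit:
			return
		}
	}
}

func main() {}
```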
Started to work on PEX reactor issues (Refs #9).
What's done / left:
- `ensurePeers` logic