
[autopilot] penalize small channels in preferential attachment heuristic #2797

Conversation

@halseth (Collaborator) commented Mar 18, 2019

To avoid assigning a high score to nodes with a large number of small
channels, we only count channels at least as large as a given fraction of
the graph's median channel size.
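
Roughly, the idea is the following (a minimal, self-contained Go sketch; the names chanEdge, nodeScore, and minChanSizeDivisor are illustrative and not lnd's actual autopilot API):

```go
package main

import "fmt"

// chanEdge is a stand-in for a channel edge in the graph; only its
// capacity matters for this sketch.
type chanEdge struct {
	Capacity int64 // channel capacity in satoshis
}

// minChanSizeDivisor is an illustrative fraction: a channel must be at
// least median/minChanSizeDivisor (a quarter of the median here) to be
// counted towards a node's score.
const minChanSizeDivisor = 4

// nodeScore counts only channels at least as large as a fraction of the
// graph-wide median, so a node with many tiny channels does not get an
// inflated preferential-attachment score.
func nodeScore(channels []chanEdge, medianChanSize int64) int {
	minSize := medianChanSize / minChanSizeDivisor
	score := 0
	for _, e := range channels {
		if e.Capacity < minSize {
			// Too small: skip it rather than rewarding the node.
			continue
		}
		score++
	}
	return score
}

func main() {
	median := int64(1_000_000)
	chans := []chanEdge{
		{Capacity: 100_000},   // ignored: below median/4
		{Capacity: 500_000},   // counted
		{Capacity: 2_000_000}, // counted
	}
	fmt.Println(nodeScore(chans, median)) // prints 2
}
```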

err := n.ForEachChannel(func(e ChannelEdge) error {
// Since connecting to nodes with a lot of small
// channels actually worsens our connectivity in the
// graph (we will potentially waste time tryng to use


@alexbosworth (Contributor) commented Mar 19, 2019

Suggested change:
-// graph (we will potentially waste time tryng to use
+// graph (we will potentially waste time trying to use


@halseth (Author, Collaborator) commented Mar 27, 2019

Fixed.

@@ -3961,6 +3967,20 @@ func (r *rpcServer) GetNetworkInfo(ctx context.Context,
return nil, err
}

// Sort the channels by capacity, and find the median.


@alexbosworth (Contributor) commented Mar 19, 2019

Maybe there should be a cap here in case the median is super high for some reason?


@halseth (Author, Collaborator) commented Mar 19, 2019

You mean cap the number of channels we'll sort?


@alexbosworth (Contributor) commented Mar 19, 2019

Cap the median value amount


@Roasbeef (Member) commented Mar 22, 2019

This is just for the RPC call, why should we cap it?


@halseth (Author, Collaborator) commented Mar 27, 2019

The question is whether to cap it for prefattach.
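
For reference, computing the median over the sorted capacities, with the optional cap floated above, could look like the sketch below. This is illustrative only: the capLimit parameter reflects alexbosworth's suggestion rather than what the merged code does, and the even-length case is simplified to the upper middle element.

```go
package main

import (
	"fmt"
	"sort"
)

// medianChanSize sorts the capacities and returns the median. Pass
// capLimit <= 0 to disable the cap; a positive capLimit clamps the result,
// mirroring the "cap the median value" idea discussed in this thread.
func medianChanSize(capacities []int64, capLimit int64) int64 {
	if len(capacities) == 0 {
		return 0
	}
	sort.Slice(capacities, func(i, j int) bool {
		return capacities[i] < capacities[j]
	})

	// Simplification: for an even number of elements we take the upper
	// middle value instead of averaging the two middle values.
	median := capacities[len(capacities)/2]

	if capLimit > 0 && median > capLimit {
		median = capLimit
	}
	return median
}

func main() {
	caps := []int64{50_000, 200_000, 16_777_215}
	fmt.Println(medianChanSize(caps, 0))       // 200000 (no cap)
	fmt.Println(medianChanSize(caps, 100_000)) // hypothetical cap kicks in: 100000
}
```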

@Roasbeef (Member) left a comment

First-pass review complete; I still need to reason about the implications and possible loopholes... though nothing is perfect.


Outdated review threads (resolved): rpcserver.go, autopilot/prefattach.go (×2)
@wpaulino added this to the 0.6 milestone on Mar 27, 2019
@halseth force-pushed the halseth:autopilot-prefattach-small-chan-penalize branch from 5fda627 to 07eae24 on Mar 27, 2019
@halseth force-pushed the halseth:autopilot-prefattach-small-chan-penalize branch from 07eae24 to 9d8e67d on Mar 27, 2019
@halseth (Author, Collaborator) commented Mar 27, 2019

seenChans = make(map[uint64]struct{})
)
if err := g.ForEachNode(func(n Node) error {
err := n.ForEachChannel(func(e ChannelEdge) error {


@alexbosworth (Contributor) commented Mar 27, 2019

It may be safer to only consider channels that have recently been updated
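
If one wanted to act on this suggestion, a freshness filter could look roughly like the following. The names staleChanCutoff and isFresh are hypothetical, and this PR does not necessarily add such a filter:

```go
package main

import (
	"fmt"
	"time"
)

// staleChanCutoff is an illustrative threshold: channels whose last policy
// update is older than this would be ignored by the heuristic.
const staleChanCutoff = 14 * 24 * time.Hour

// isFresh reports whether a channel's last update is recent enough for the
// channel to be counted.
func isFresh(lastUpdate, now time.Time) bool {
	return now.Sub(lastUpdate) <= staleChanCutoff
}

func main() {
	now := time.Now()
	fmt.Println(isFresh(now.Add(-7*24*time.Hour), now))  // true: updated a week ago
	fmt.Println(isFresh(now.Add(-30*24*time.Hour), now)) // false: considered stale
}
```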

@alexbosworth (Contributor) commented Mar 27, 2019

LGTM, although I still feel iffy about situations where the median could yield unexpected results. The comments in the code say "large", but the median doesn't necessarily imply large.

@Roasbeef (Member) left a comment

LGTM 💎

@Roasbeef merged commit a069e78 into lightningnetwork:master on Mar 28, 2019
2 checks passed:
continuous-integration/travis-ci/pr: The Travis CI build passed
coverage/coveralls: Coverage increased (+0.004%) to 59.733%
@halseth (Author, Collaborator) commented Mar 28, 2019

@alexbosworth "large-ish channels" 😛
