nMaxConnections issues #11

Closed
orweinberger opened this issue Jan 25, 2015 · 4 comments

@orweinberger

I was wondering if you have any solution for the edge case where a full node has maxed out its connections (nMaxConnections, usually 125). Are you able to validate it in any other way, or do you simply ignore that edge case?

@ayeowch (Owner) commented Jan 25, 2015

A node that has reached its max connections is treated as unreachable, since the response the crawler gets from such a node is usually indistinguishable from, e.g., a firewall rule blocking the incoming connection.
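
For illustration, a minimal reachability probe along these lines might look like the sketch below (illustrative only, not the actual Bitnodes code; version_msg is assumed to be a pre-serialized Bitcoin VERSION message). Every failure mode collapses into the same negative result:

import socket

def is_reachable(host, port, version_msg, timeout=15):
    """Treat any failed handshake as unreachable, whatever the cause."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            sock.sendall(version_msg)  # assumed pre-serialized VERSION message
            reply = sock.recv(1024)
            # An empty read means the peer accepted the TCP connection and
            # then hung up -- exactly what a maxed-out node does.
            return len(reply) > 0
    except OSError:
        # Refused, reset, timed out, or silently dropped by a firewall:
        # from the probe's point of view, these all look the same.
        return False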

ayeowch self-assigned this on Jan 26, 2015
@ayeowch (Owner) commented Jan 26, 2015

For the validation done at https://getaddr.bitnodes.io/#join-the-network, it makes sense to report a maxed-out node as unreachable. For estimating the size of the network, the crawler takes a snapshot of the network roughly every 5 minutes. If your node is not already connected to the crawler and is currently maxed out, it will be considered unreachable. If your node is already connected to the crawler and maxes out before the next snapshot, it will still be counted in the estimated number of reachable nodes.
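
A short sketch of this counting rule (illustrative only; open_connections and probe are hypothetical stand-ins, not identifiers from the Bitnodes source):

def snapshot_reachable(candidates, open_connections, probe):
    """Count a node as reachable if it is already connected to the
    crawler, or if it accepts a fresh connection right now."""
    reachable = set()
    for node in candidates:
        if node in open_connections:
            # Connection established in an earlier snapshot: still counted,
            # even if the node has since maxed out its inbound slots.
            reachable.add(node)
        elif probe(node):
            # No existing connection: the node must accept one now, so a
            # maxed-out node drops out of this snapshot.
            reachable.add(node)
    return reachable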

ayeowch closed this as completed on Jan 26, 2015
@stevenroose

@ayeowch it's not possible to be permanently connected to the crawler, right? The crawler connects and then disconnects immediately afterwards.

Also, this is the log I get when Bitnodes connects while I'm maxed out; shouldn't it be able to detect that the node is still alive?

13:51:56 2017-05-30 [INF] BMGR: New valid peer 136.243.139.96:47319 (inbound) (/bitnodes.21.co:0.1/)
13:51:56 2017-05-30 [INF] SRVR: Max peers reached [125] - disconnecting peer 136.243.139.96:47319 (inbound)

@ayeowch (Owner) commented May 30, 2017

@stevenroose crawl.py (source address 136.243.139.96) is used to exhaustively (DFS) look for all currently reachable nodes, starting from a list of seed nodes. Each of these nodes is then handed to ping.py (source address 136.243.139.120), which connects to it if it hasn't already done so and maintains the connection to gather network and tx/block inv latency data through periodic pings within the established connection.
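
A rough sketch of that exhaustive crawl (illustrative only; get_peers is a hypothetical stand-in for the version handshake plus getaddr exchange, returning None when a node is unreachable):

def crawl(seeds, get_peers):
    """Depth-first search of the reachable network from a seed list."""
    seen = set(seeds)        # guards against probing the same node twice
    reachable = set()
    stack = list(seeds)      # LIFO stack -> depth-first traversal
    while stack:
        node = stack.pop()
        peers = get_peers(node)   # peer addresses, or None if unreachable
        if peers is None:
            continue              # refused, timed out, or maxed out
        reachable.add(node)
        for peer in peers:
            if peer not in seen:
                seen.add(peer)
                stack.append(peer)
    return reachable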
