Integration tests are sending requests to 0.0.0.0
#1051
Comments
Here is a stack trace:
Better trace:
Urgh, this is because of … This means that we register …
Copying my comments from Slack: It was a workaround, not a permanent solution. Actually it was only a workaround because the quorum sets were not known. We could remove it entirely now that we know which nodes listen to each other.
Actually, is there a way to get the connecting address in vibe.d? Then it could just become …
Although for localrest it's a different story.
Yeah I think we should move to "establish connection" instead of "register listener".
Yeah it just works.
How does it "just work"? There's no way of getting the "address" of the connecting node.
I mean we will still need some way of feeding that into Agora.
This might actually be simpler than I originally thought. Shouldn't it be possible to do this:
router.post("/register_listener",
    (scope HTTPServerRequest req, scope HTTPServerResponse res)
    {
        import std.format;
        // Derive the caller's address from the incoming connection itself
        string addr = format("%s:%s", req.clientAddress.toAddressString(),
            req.clientAddress.port());
        node.registerListener(addr);
        res.statusCode = 200;
        res.writeVoidBody();
    });

Calling this before …
But it was just an experiment, it's likely missing a lot of stuff.
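For what it's worth, the calling side of that experiment could be as small as this. This is just a sketch, not actual Agora client code; the peer URL and port are made up, and requestHTTP is the stock vibe.d client call:

import vibe.http.client : requestHTTP;
import vibe.http.common : HTTPMethod;

void registerWithPeer ()
{
    // Hypothetical peer URL; in Agora this would come from the node's configured network
    requestHTTP("http://127.0.0.1:2826/register_listener",
        (scope req) {
            // No body needed: the server derives our address from the connection itself
            req.method = HTTPMethod.POST;
        },
        (scope res) {
            // 200 means the peer has recorded this node as a listener
            assert(res.statusCode == 200);
        });
}

The point being that the caller no longer has to know or send its own externally visible address; the receiving side infers it.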
I can also see the logs as follows with the command: …
The goal of this issue is that the … I think @AndrejMitrovic's suggestion, which is using …
It would be good to get feedback from @Geod24 because he's used and worked on vibe.d for much longer than I have. He probably knows a better way to do this.
My preference would be to have a connection established between nodes that is kept alive, and have all requests go through it. That would completely remove the need for register listener, but it needs quite some work on the Vibe.d side.

The first step is to establish a connection, keep it alive, and have it registered on both sides. That should be enough for a node to "establish" a listener, and whenever it gets something it needs to gossip, it can do so on the connection directly. There is no need to use the same connection for queries at the beginning.

The second step is to separate the validator and full node connections into two pools. We always need to communicate with (all) validators, but full nodes are common, and we might want a smarter strategy to talk to them (e.g. round robin). In order to do this separation, we need to add an optional handshake phase, where nodes send their public key, if any. Remember that a pair could be 2 validators, 2 full nodes, or 1 full node and 1 validator. To take this further, we might want to add the ability to promote / demote a connection from one pool to the other, e.g. when a validator's enrollment starts / expires. We surely need to remove those who expire from our set, or at least demote them.

The last step would be to change things so that we can use a connection with a …

At the moment we can just get away with step 1 (for this very issue).
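To make the second step a bit more concrete, here is roughly what the two-pool bookkeeping could look like. Everything in it except TCPConnection (PeerConnection, ConnectionPools, add, demote) is hypothetical, not an existing Agora or vibe.d type:

import std.typecons : Nullable;
import vibe.core.net : TCPConnection;

/// Hypothetical wrapper around one long-lived connection to a peer
struct PeerConnection
{
    TCPConnection conn;         // the kept-alive transport (step 1)
    Nullable!string publicKey;  // filled in by the handshake if the peer is a validator
}

/// Hypothetical bookkeeping for step 2: validators and full nodes in separate pools
struct ConnectionPools
{
    PeerConnection[string] validators;  // keyed by public key
    PeerConnection[] fullNodes;

    /// Classify a freshly handshaken connection into the right pool
    void add (PeerConnection peer)
    {
        if (!peer.publicKey.isNull)
            this.validators[peer.publicKey.get] = peer;
        else
            this.fullNodes ~= peer;
    }

    /// Demote a validator whose enrollment expired back to the full node pool
    void demote (string publicKey)
    {
        if (auto peer = publicKey in this.validators)
        {
            this.fullNodes ~= *peer;
            this.validators.remove(publicKey);
        }
    }
}

A handshake would then fill in publicKey before add is called, and a connection could move between pools when an enrollment starts or expires.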
I think that's what vibe.d already does under the hood? At least that's my impression from reading through … We probably need to change …
However vibe.d does keep its own internal pool for all connections, and I'm not sure how easy it is to override this with our own connection pools. There are lots of things in the client which are …
Yes but we have no way to tap into that. And we can't rely on it, because the pool has an eviction strategy. You don't want to lose a connection with a validator because suddenly an attacker is sending you a burst of connections / messages and you just evicted this other connection which wasn't as active.
Yes indeed.
From reading your comments, we need to manage connections in two types of pools, like …
I found out that the …
- Calculating the m_keepAliveLimit from the m_keepAliveTimeout in HTTPClientSettings
- Using the m_keepAliveLimit in HTTPClientSettings
- Using the m_keepAliveTimeout in HTTPServerSettings

I think we should think of other approaches.
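For reference, the public knobs that correspond to those fields look roughly like this. keepAliveTimeout on HTTPServerSettings is part of the vibe.d API; defaultKeepAliveTimeout on HTTPClientSettings is assumed from recent vibe.d versions, so double check it:

import core.time : seconds;
import vibe.http.client : HTTPClientSettings;
import vibe.http.server : HTTPServerSettings;

void configureKeepAlive ()
{
    // Server side: how long an idle connection is kept open before the server closes it
    auto serverSettings = new HTTPServerSettings;
    serverSettings.keepAliveTimeout = 60.seconds;

    // Client side: fallback keep-alive timeout when the server does not advertise one
    // (field name assumed from recent vibe.d versions)
    auto clientSettings = new HTTPClientSettings;
    clientSettings.defaultKeepAliveTimeout = 60.seconds;
}

Neither setting gives us a connection that is pinned and never evicted, which is presumably why other approaches are needed.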
Note: This is a pretty good resource: https://tldp.org/HOWTO/TCP-Keepalive-HOWTO/index.html
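At the transport level, the SO_KEEPALIVE switch described in that HOWTO is exposed by vibe.d as TCPConnection.keepAlive. A minimal sketch, assuming it runs inside a vibe.d task (the helper itself is hypothetical):

import vibe.core.net : TCPConnection, connectTCP;

/// Establish a long-lived connection to a peer with OS-level keep-alive enabled
TCPConnection connectToPeer (string host, ushort port)
{
    auto conn = connectTCP(host, port);
    // SO_KEEPALIVE: the OS probes the idle connection periodically, so a dead
    // peer is eventually detected even when no application data is flowing
    conn.keepAlive = true;
    return conn;
}

The actual probe intervals (tcp_keepalive_time and friends from the HOWTO) are kernel parameters, so they would still have to be tuned at the OS level.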
Just do a
docker-compose up
and see the logs: I don't see why we would ever send a message to 0.0.0.0?