@mplsgrant
Collaborator

Issue

I cannot connect two nodes using the Test Framework's connect_nodes.

Cause

connect_nodes requires IP address resolution, and bitcoin nodes internally track peers using a mix of IP addresses and DNS-style addresses. In our case, the DNS-style address exists on Node A when we manually connect Node A to Node B, while an IP address is recorded on any node that receives a connection. So when Node A connects to Node B, Node B keeps a record of Node A's IP address, not its DNS-style address.
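To illustrate the asymmetry (getpeerinfo excerpts hypothetical; the service name follows this project's naming convention), after A dials B by name:

    # Node A (initiator) recorded the DNS-style address it dialed:
    #   "addr": "my-network-tank-000001-service:18444"
    # Node B (receiver) recorded only the initiator's bare IP:
    #   "addr": "10.244.0.4:48372"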

Solution

Create a method that maps DNS-style addresses to the IP addresses associated with the pods. This method can then be used alongside get_peer_info to query a node's connections.
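A minimal sketch of such a mapping with the Kubernetes Python client (the function name, default namespace, and service name here are assumptions, not the final API):

    from kubernetes import client, config

    def resolve_service_to_pod_ips(service_name: str, namespace: str = "warnet") -> list[str]:
        """Map a DNS-style service name (e.g. 'my-network-tank-000001-service')
        to the pod IPs backing that service, via its Endpoints object."""
        config.load_kube_config()  # use load_incluster_config() when running in-cluster
        v1 = client.CoreV1Api()
        endpoints = v1.read_namespaced_endpoints(name=service_name, namespace=namespace)
        return [addr.ip for subset in endpoints.subsets for addr in subset.addresses]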

@mplsgrant
Collaborator Author

Actually, I should write a test for this. Putting on draft until then...

@mplsgrant mplsgrant marked this pull request as draft May 30, 2024 21:48
@mplsgrant mplsgrant force-pushed the 2024-05-get-service-ip branch 4 times, most recently from 6fc1b07 to 93bfcd6 on May 31, 2024 13:56
@mplsgrant mplsgrant marked this pull request as ready for review May 31, 2024 14:35
@mplsgrant
Collaborator Author

I added a test and some error checking around this code.

I created a small but complete DAG to make sure that nothing out of the ordinary happens when looking up internal IP addresses. The test does not cover external IP addresses (cluster IPs).

@mplsgrant
Collaborator Author

Also, my test has yet to run on GitHub Actions. It seems that GitHub Actions picks up the existing YAML workflow file instead of my proposed file, which means my test is not included in the current green check mark.
I think this is caused by pull_request vs pull_request_target, similar to what we saw in #350 and #351.

@mplsgrant mplsgrant force-pushed the 2024-05-get-service-ip branch 2 times, most recently from 421da8e to bf87ee0 on June 3, 2024 04:39
@mplsgrant mplsgrant force-pushed the 2024-05-get-service-ip branch from bf87ee0 to 04a3a23 on June 3, 2024 04:57
@willcl-ark
Contributor

I plan to review this today!

> Also, my test has yet to run on GitHub Actions. It seems that GitHub Actions picks up the existing YAML workflow file instead of my proposed file, which means my test is not included in the current green check mark. I think this is caused by pull_request vs pull_request_target, similar to what we saw in #350 and #351.

I think I also have a fix for this incoming. Having spent hours again on GHA on Friday, I now understand (a bit) better what is likely going on.

@willcl-ark willcl-ark left a comment (Contributor)

Sweet, this LGTM, thanks!

I added some prints to the test just to see for myself what range the IP addresses were in:

2024-06-03T21:29:31+0000 - INFO - Node 0: External IP: 10.97.234.141, Internal IP: 10.244.0.4
2024-06-03T21:29:31+0000 - INFO - Node 1: External IP: 10.103.159.221, Internal IP: 10.244.0.5
2024-06-03T21:29:31+0000 - INFO - Node 2: External IP: 10.105.19.244, Internal IP: 10.244.0.6
2024-06-03T21:29:31+0000 - INFO - Node 3: External IP: 10.100.135.235, Internal IP: 10.244.0.7
2024-06-03T21:29:31+0000 - INFO - Node 5: External IP: 10.102.120.231, Internal IP: 10.244.0.8
2024-06-03T21:29:31+0000 - INFO - Node 6: External IP: 10.100.105.162, Internal IP: 10.244.0.9

Still not totally sure what the external IP is for though...

Regarding IP address usage in general, which IIRC there was a little contention about... it seems totally reasonable to me to use IP addrs (well, we need them in some form), especially via a function that dynamically fetches them from the control plane 👍🏼

# https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1Endpoints.md
from kubernetes import client, config

# Resolve the tank's service object; its spec carries the cluster IP.
config.load_incluster_config()
v1 = client.CoreV1Api()
service = v1.read_namespaced_service(name=service_name, namespace=namespace)
Contributor

What exactly are we (planning on) using the external IP address for?

Collaborator Author

I want to be able to get the bare IP address that belongs to the service, sometimes called the cluster IP address. Maybe a separate function would be a nicer way to do this?

For context, and to show you what my mental model is on this issue, here's some of what I know about these IP addresses:

external should be the IP address of the service. So, when we do kubectl get all and we look down at the services listed, the IP addresses listed next to each service should match the external address. These addresses are often called the cluster IP addresses, I believe.

A service provides a permanent IP (aka "cluster" IP) address. Pods have their own IP addresses too, and the pods "go to work" behind the service's cluster IP.
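As a hedged illustration of that distinction with the Python client (pod/service names and namespace are hypothetical; the sample values mirror Node 0 in the log output above):

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # The service's permanent cluster IP (what `kubectl get svc` lists):
    svc = v1.read_namespaced_service(name="warnet-tank-000000-service", namespace="warnet")
    print(svc.spec.cluster_ip)  # e.g. 10.97.234.141

    # The pod's own IP, which can change if the pod is rescheduled:
    pod = v1.read_namespaced_pod(name="warnet-tank-000000", namespace="warnet")
    print(pod.status.pod_ip)    # e.g. 10.244.0.4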

Contributor

Gotcha, thanks, that's helpful. Also, because I'm still a k8s noob, I don't know what some of these things do...

@willcl-ark
Contributor

@pinheadmz or @m3dwards you want to take a quick look here before merge?

@willcl-ark willcl-ark merged commit cab0be3 into bitcoin-dev-project:main Jun 4, 2024
@mplsgrant mplsgrant deleted the 2024-05-get-service-ip branch June 4, 2024 14:38
@@ -1,5 +1,29 @@
from ipaddress import IPv4Address, IPv6Address, ip_address
from kubernetes import client, config
Contributor

I don't think the utils file should be importing backend stuff. I feel like this dependency indicates that get_service_ip() is not a utility and should probably be implemented in kubernetes_backend.py

Collaborator Author

We could make get_service_ip into an abstract method, similar to get_tank_ipv4, and then rename it to resolve_tank_dns to make it less k8s-specific.
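A rough sketch of that shape (the base-class name, namespace handling, and the k8s lookup details are assumptions):

    from abc import ABC, abstractmethod
    from kubernetes import client

    class BackendInterface(ABC):
        @abstractmethod
        def resolve_tank_dns(self, dns_name: str) -> str:
            """Map a DNS-style tank address to an IP address."""

    class KubernetesBackend(BackendInterface):
        def resolve_tank_dns(self, dns_name: str) -> str:
            # Hypothetical: resolve the service's cluster IP via the k8s API
            # (assumes kube config was already loaded elsewhere).
            v1 = client.CoreV1Api()
            service = v1.read_namespaced_service(name=dns_name, namespace=self.namespace)
            return service.spec.cluster_ip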

Contributor

yeah, or even use @property to get (and then cache) the IP:

    @property
    def ipv4(self):
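        # Lazily generate and cache the tank's IPv4 address.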
        if self._ipv4 is None:
            self._ipv4 = generate_ipv4_addr(self.warnet.subnet)
        return self._ipv4

@pinheadmz
Contributor

Sorry for being so late to the review on this. I think if the motivation is to get the external static IP address for a bitcoin node, there are other approaches that make more sense to me:

  • Tank has an ipv4 property that we've been deprecating since last year
  • kubernetes_backend has get_tank_ipv4(), but that returns the pod IP, not the associated service IP

So I think it may make more sense to implement Tank.get_external_ip() which gets the tank's associated service's cluster IP from the kubernetes backend.

There could also be a new RPC, warcli gettankinfo <id#> or something, that returns a JSON object including this external IP. That should be usable in scenarios and tests to connect nodes.
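A minimal sketch of the first suggestion (the backend accessor name and attribute layout are assumptions):

    class Tank:
        def get_external_ip(self) -> str:
            # Hypothetical accessor: fetch the cluster IP of this tank's
            # associated service from the kubernetes backend.
            return self.backend.get_service_cluster_ip(self.index)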

@mplsgrant
Collaborator Author

We currently have these addresses: internal (pod), external (cluster), and DNS (*-tank-000000-service).

So I think it may make more sense to implement Tank.get_external_ip() which gets the tank's associated service's cluster IP from the kubernetes backend.

I'm fine with getting the IP from a tank, but just as a note, bitcoind's getpeerinfo lists DNS strings such as "my-bespoke-network-tank-000002-service". In that situation, I'm reluctant to construct a Tank out of the DNS string just to call get_external_ip(); I would prefer resolve_tank_dns there.
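For instance, a connectivity check might look like this (hedged: resolve_tank_dns is assumed to pass bare IPs through unchanged, and the RPC wrapper and helper signatures are hypothetical):

    def assert_connected(node_a, node_b, backend):
        # getpeerinfo "addr" fields may be DNS-style service names or bare IPs;
        # normalize both forms to IPs before comparing.
        peer_ips = {
            backend.resolve_tank_dns(peer["addr"].rsplit(":", 1)[0])
            for peer in node_a.rpc("getpeerinfo")
        }
        assert backend.get_tank_ipv4(node_b.index) in peer_ips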

> There could also be a new RPC warcli gettankinfo <id#>

This seems like a good idea.
