
Pods can't reach pods from other subnetworks #4123

Closed
ErGallo23 opened this issue Feb 25, 2024 · 9 comments
@ErGallo23

Hi,

I'm trying to deploy a hybrid cluster, but pods deployed on different nodes (and therefore on different subnets) can't communicate with each other unless the nodes are on the same private network.

I currently have one node deployed on AWS, and two nodes deployed on a private network inside VMs.

To test it, I deployed the Prometheus stack. When I try to access the metrics endpoint from a test pod on one of the private-network nodes, I can reach the other private-network node but not the AWS one. From the AWS node, I can't reach either of the other two nodes.

I have deployed k0s with kube-router, node-local load balancing (NLLB; in the future I want an HA control plane), and kine. All other options are the defaults.
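For reference, the relevant part of my k0s config looks roughly like this (a sketch; the kine dataSource value here is illustrative):

apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  network:
    provider: kuberouter
    nodeLocalLoadBalancing:
      enabled: true
      type: EnvoyProxy
  storage:
    type: kine
    kine:
      dataSource: sqlite:///var/lib/k0s/db/state.db   # illustrative; any kine-supported backend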

It seems that nodes that are not in the same network cannot communicate with each other. I don't see any errors in the konnectivity-agent, kube-router, kube-proxy, or NLLB pods.

Do I need any additional configuration to enable communication between nodes on different private networks, or to build this kind of hybrid cloud? The goal is to deploy a cluster spanning different cloud providers, plus some nodes in VMs on private networks.

I have seen the following issues, but none of them match my specific case.

#3784
#1240
#3024
#2410

I attach the output of the connection tests and the deployed nodes:

The endpoints:

NAME                                  ENDPOINTS                                          AGE
prometheus-kube-state-metrics         10.244.0.8:8080                                    4m57s
prometheus-server                     10.244.0.10:9090                                   4m57s
prometheus-prometheus-node-exporter   10.244.0.11:9100,10.244.1.4:9100,10.244.2.2:9100   4m57s

The curl calls from a test pod on one of the private-network nodes:

pod-test:~# curl http://10.244.0.11:9100/metrics --connect-timeout 5 > metrics.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0
curl: (28) Failed to connect to 10.244.0.11 port 9100 after 5002 ms: Timeout was reached
pod-test:~# curl http://10.244.2.2:9100/metrics --connect-timeout 5 > metrics.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 83858    0 83858    0     0  2622k      0 --:--:-- --:--:-- --:--:-- 2641k
pod-test:~# curl http://10.244.1.4:9100/metrics --connect-timeout 5 > metrics.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 89034    0 89034    0     0  2276k      0 --:--:-- --:--:-- --:--:-- 2229k
pod-test:~#

The nodes and the node-exporter pods:

NAME               STATUS   ROLES    AGE   VERSION       INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                        KERNEL-VERSION           CONTAINER-RUNTIME
applications       Ready    <none>   40m   v1.29.1+k0s   10.0.18.132     <none>        Fedora CoreOS 39.20240210.2.0   6.7.4-200.fc39.x86_64    containerd://1.7.13
adparts-postgres   Ready    <none>   23m   v1.29.2+k0s   192.168.0.172   <none>        Fedora CoreOS 39.20240128.3.0   6.6.13-200.fc39.x86_64   containerd://1.7.13
applications-aux   Ready    <none>   27m   v1.29.1+k0s   192.168.0.170   <none>        Fedora CoreOS 39.20240128.3.0   6.6.13-200.fc39.x86_64   containerd://1.7.13

prometheus-prometheus-node-exporter-hqql8       1/1     Running   0          3h59m   10.244.2.2    adparts-postgres   <none>           <none>
prometheus-prometheus-node-exporter-dkxkc       1/1     Running   0          3h59m   10.244.0.11   applications       <none>           <none>
prometheus-prometheus-node-exporter-v9smr       1/1     Running   0          3h59m   10.244.1.4    applications-aux   <none>           <none>

Thanks in advance

@twz123
Member

twz123 commented Feb 26, 2024

Related: #4121

@ncopa
Collaborator

ncopa commented Feb 26, 2024

I don't see any errors in ..., kube-router, ...

I was about to respond here that kube-router operates on layer 2 (link level), which means the nodes need to be in the same layer 2 network. However, while looking for this in the documentation, I found that kube-router can apparently do IPIP overlay tunneling when nodes are not in the same subnet.

Last time I tried this I ended up using Calico with VXLAN, which worked nicely.
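For reference, the Calico/VXLAN variant is a small change in the k0s config (a sketch based on the k0s ClusterConfig schema; kube-router's overlay is controlled by its own upstream flags, --enable-overlay and --overlay-type, and how to pass those through depends on the k0s version):

apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  network:
    provider: calico
    calico:
      mode: vxlan   # encapsulates pod traffic, so nodes only need plain L3 reachability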

@ncopa
Collaborator

ncopa commented Feb 26, 2024

Can the nodes communicate over their internal IPs? E.g., can you ping 10.0.18.132 from 192.168.0.170?

@ErGallo23
Author

No, they can't. The AWS EC2 node is behind the public IP 63.32.124.37, and the other two instances are behind the public IP 91.126.192.82.

I don't use these public IPs as node IPs because, as I understand it, I can't use the same public IP for more than one node.

One of the reasons for opening this issue is to find out whether this configuration is possible at all (connecting nodes with different public IPs/networks, some of them behind the same public IP 🤔).

Anyway, tomorrow I'll try what you suggest: IPIP overlay tunneling with kube-router, or Calico.

Thanks!

@ncopa
Collaborator

ncopa commented Mar 4, 2024

I don't think that will work at all. It will not be possible for the worker nodes to establish any tunnel if they cannot talk to each other. You need to have some sort of routing working between the nodes.
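To verify, you can test the internal IPs directly between the nodes, e.g. from 192.168.0.170 (illustrative commands; 10250 is the kubelet port):

ping -c 3 10.0.18.132
nc -vz 10.0.18.132 10250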

@ErGallo23
Author

Hello, sorry for the delay in answering; I have been testing different configurations these days.

Right now I am trying to configure the cluster to use WireGuard for the networking.

I managed to build the cluster as I wanted by following this document:

https://www.inovex.de/de/blog/how-to-set-up-a-k3s-cluster-on-wireguard/

As you can see, it is for k3s, but I want to use k0s. Once I manage to replicate the behavior, I will post my final k0s configuration here in case someone else has a similar use case, and close the issue.
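The gist of that post is to join all the nodes to a WireGuard mesh and use the WireGuard addresses as node IPs. A sketch of what I'm building (keys, addresses, and ports are illustrative; the two nodes behind 91.126.192.82 each need their own forwarded UDP port):

# /etc/wireguard/wg0.conf on the AWS node
[Interface]
Address = 10.100.0.1/24
PrivateKey = <aws-node-private-key>
ListenPort = 51820

[Peer]
# applications-aux
PublicKey = <applications-aux-public-key>
Endpoint = 91.126.192.82:51820
AllowedIPs = 10.100.0.2/32
PersistentKeepalive = 25

[Peer]
# adparts-postgres
PublicKey = <adparts-postgres-public-key>
Endpoint = 91.126.192.82:51821
AllowedIPs = 10.100.0.3/32
PersistentKeepalive = 25

Each worker would then be started with its WireGuard address as the node IP, e.g. something like k0s worker --kubelet-extra-args="--node-ip=10.100.0.2" plus the join token.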

Many thanks for everything


github-actions bot commented Apr 4, 2024

The issue is marked as stale since no activity has been recorded in 30 days

@github-actions github-actions bot added the Stale label Apr 4, 2024
@makhov
Contributor

makhov commented Apr 8, 2024

@ErGallo23 you can try Kilo as the CNI, which is based on WireGuard. I think it does exactly what you need. To do it, specify spec.network.provider: custom in your k0s config and install Kilo manually:

apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  network:
    provider: custom

kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/crds.yaml
kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-k3s.yaml

The manifest mentions k3s, but I've tried it with k0s and it worked well in my tests. Basically, there are some paths that need to be changed from k3s-specific to k0s-specific ones; see the sketch below.
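For example, the kilo DaemonSet in kilo-k3s.yaml reads its kubeconfig from a k3s-specific hostPath; on a k0s worker that volume has to point at a kubeconfig that actually exists on the node (the k0s path below is an assumption for illustration; check your own nodes):

volumes:
  - name: kubeconfig
    hostPath:
      # k3s default: /etc/rancher/k3s/k3s.yaml
      path: /var/lib/k0s/kubelet.conf   # assumed kubeconfig location on a k0s worker

Any --kubeconfig argument or KUBECONFIG env var in the DaemonSet has to be updated to match.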

@github-actions github-actions bot removed the Stale label Apr 8, 2024
@ErGallo23
Author

Hi! Thanks @makhov for the answer.

Now I'm using Cilium with WireGuard installed on each node, and it works fine. But, as you said, after looking at the Kilo documentation, it may be a better approach than running wg on the nodes myself.

I'll do some tests with Kilo and decide between k0s+Kilo and k0s+WireGuard on the nodes themselves.
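In case it helps someone, the current setup is essentially the same as for Kilo: spec.network.provider: custom in the k0s config, the WireGuard mesh up between the nodes first, and then Cilium installed on top (a sketch assuming the Cilium CLI; a Helm install works just as well):

cilium install
cilium status --wait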

Thanks again!
