
Cluster should have the external IP of the public load balancer? #23

Closed
lucacalcaterra opened this issue Sep 23, 2022 · 5 comments
Labels
question Further information is requested

Comments

@lucacalcaterra

@garutilorenzo services exposed with a LoadBalancer show private IPs, not the external IP of the public load balancer. I suppose k3s should be initialized with --node-external-ip and so on...

Am I wrong?

[image]

@garutilorenzo
Owner

garutilorenzo commented Sep 23, 2022

@lucacalcaterra this is the output of what? kubectl get nodes -o wide?
I think --node-external-ip is for a different use case (k3s behind NAT, or with a public network interface); I found an example here.

For the services it exposes, k3s takes a different approach than standard k8s. You can find more info here, in the Service Load Balancer chapter.
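As an illustration (a minimal sketch with hypothetical names, not a manifest from this repo): with k3s's bundled ServiceLB (klipper-lb), a plain LoadBalancer service like this gets the node IPs as its external address, because no cloud controller allocates a real load balancer.

```yaml
# Hypothetical example: under k3s's built-in ServiceLB, EXTERNAL-IP
# for this service will show the k3s node IPs (private on OCI),
# not a cloud load balancer's public IP.
apiVersion: v1
kind: Service
metadata:
  name: demo-svc        # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: demo           # hypothetical label
  ports:
    - port: 80
      targetPort: 8080
```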

And if you want to integrate k3s services with the OCI Load Balancer you have to install the OCI CCM. With the OCI CCM installed, if you expose a service of type LoadBalancer you get a public IP address for your service (the OCI CCM will create a Load Balancer for you).

The OCI CCM integration is a work in progress task, see PR #16
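For reference, once the OCI CCM is installed, the same kind of manifest is picked up by the cloud controller, which provisions an OCI Load Balancer and publishes its public IP as the service's EXTERNAL-IP. This is a hedged sketch: the annotation shown is the one documented by the OCI CCM project, but verify it against the OCI CCM docs for your version.

```yaml
# Hypothetical example: with the OCI CCM installed, this service
# triggers creation of an OCI Load Balancer; EXTERNAL-IP then shows
# the LB's public IP instead of the node IPs.
apiVersion: v1
kind: Service
metadata:
  name: demo-svc-oci                         # hypothetical name
  annotations:
    # Selects an OCI Load Balancer (see the OCI CCM documentation).
    oci.oraclecloud.com/load-balancer-type: "lb"
spec:
  type: LoadBalancer
  selector:
    app: demo                                # hypothetical label
  ports:
    - port: 80
      targetPort: 8080
```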

@lucacalcaterra
Author

@garutilorenzo this is the output of kubectl get svc -A.
I noticed this behaviour because I'm trying to use skupper.io to link a site, and the private cluster links to a private IP which is not reachable remotely.

So probably nothing is wrong with this repo, and I should use the OCI CCM as you suggest.

Thanks !

@lucacalcaterra
Author

lucacalcaterra commented Sep 23, 2022

Anyway, I think you should see the load balancer's public IP as the external address, not the backends' IPs.

@garutilorenzo
Owner

Dear @lucacalcaterra, the answer is no: with this module you can't see LB public IPs (not at the moment).
If you want to see LB public IPs you have to use a managed K8s (OKE for Oracle Cloud Infrastructure, EKS for AWS, GKE for Google). Managed Kubernetes offerings ship with the respective CCM (OCI, AWS, Google) installed by default, and the CCM does the "magic".

This module installs k3s as if it were an on-prem installation, with no CCM support, so you can't see LB public IPs.
All the traffic (HTTP, HTTPS) from the internet is redirected from the public LB (a layer 4 LB) to the k3s workers, where the ingress controller is listening.
All the services exposed by k3s are available here:

output "public_lb_ip" {
  value = module.k3s_cluster.public_lb_ip
}

You can't see this public IP with kubectl, since there is no CCM installed.
So if you want to use skupper.io with this module, you have to expose the skupper service (I haven't read the docs, but I think there is a svc for this application) with the nginx ingress controller.
Since skupper.io seems to be an L7 service, you're all set: install skupper.io and expose it with the ingress controller.
The public IP address of skupper.io will be the public_lb_ip from Terraform.
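As a sketch of that approach (the service name, port, and host below are hypothetical; check the skupper docs for the real service name), an Ingress for the nginx ingress controller would look like:

```yaml
# Hypothetical sketch: route HTTP traffic arriving at the public LB
# through the nginx ingress controller to a skupper backend service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: skupper-ingress                 # hypothetical name
spec:
  ingressClassName: nginx
  rules:
    - host: skupper.example.com         # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: skupper           # hypothetical; see skupper docs
                port:
                  number: 8080          # hypothetical port
```

Point a DNS record for the chosen host at the public_lb_ip Terraform output, and the ingress controller will route requests to the service.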

I hope this is clearer now.

@garutilorenzo garutilorenzo added the question Further information is requested label Sep 26, 2022
@lucacalcaterra
Author


While waiting, I had started to work it out myself... and your reply clarifies everything. Thanks!
