Vanilla Hetzner Kubeone build does not support load balancers #1110

Closed
eddiewang opened this issue Sep 23, 2020 · 3 comments

eddiewang commented Sep 23, 2020

What happened:

Once a vanilla KubeOne cluster is built with the example Terraform config and the kubeone scripts, load balancers cannot be created automatically via the hcloud controller. Inside the pod, the hcloud controller logs this message:

ERROR: logging before flag.Parse: E0923 14:15:19.086458 1 controllermanager.go:240] Failed to start service controller: the cloud provider does not support external load balancers

As a result, LoadBalancer-type services are stuck in a pending state forever.
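
For reference, a quick way to check the controller directly (the deployment name below is the one used by the upstream hcloud CCM manifests; adjust if yours differs):

$ kubectl -n kube-system logs deployment/hcloud-cloud-controller-manager --tail=50   # shows the error above
$ kubectl get svc -A | grep LoadBalancer   # affected services stay <pending> under EXTERNAL-IP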

What is the expected behavior:
It should be possible to create the sample service here and get a load balancer attached to the cluster.

How to reproduce the issue:
Install a simple cluster with 2 worker nodes and 3 control plane nodes in the FSN1 region.
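
Roughly, the provisioning steps look like this (a sketch only; the path and flags are assumed from the KubeOne 1.0.x example workflow for Hetzner):

$ cd examples/terraform/hetzner            # example Terraform config shipped with KubeOne (path assumed)
$ terraform init && terraform apply
$ terraform output -json > tf.json
$ kubeone apply --manifest kubeone.yaml --tfjson tf.json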

Anything else we need to know?

Information about the environment:
KubeOne version (kubeone version): 1.0.2
Operating system: Ubuntu
Provider you're deploying cluster on: Hetzner
Operating system you're deploying on: Ubuntu

@eddiewang eddiewang added the kind/bug Categorizes issue or PR as related to a bug. label Sep 23, 2020

kron4eg commented Sep 23, 2020

Please keep in mind that the default datacenter our terraform config deploys to is nbg1. With that in mind, here are the manifests I used.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
  annotations:
    load-balancer.hetzner.cloud/location: "nbg1" ###### <<<<<< ----- IMPORTANT
    load-balancer.hetzner.cloud/use-private-ip: "true"
spec:
  selector:
    app: nginx
  ports:
    - port: 8080
      targetPort: 80
  type: LoadBalancer
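
To try it out, applying both manifests and watching the service should be enough (a minimal sketch; the file name is just an example):

$ kubectl apply -f nginx-lb.yaml   # file containing the two manifests above
$ kubectl get svc nginx -w         # wait for EXTERNAL-IP to switch from <pending> to a real address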

Here are the logs from hcloud-cloud-controller-manager:

I0923 16:09:57.011958       1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx", UID:"12de68af-1c6f-4ba6-a6d2-2f231711b7c7", APIVersion:"v1", ResourceVersion:"3549", FieldPath:""}): type: 'Normal' reason: 'EnsuringLoadBalancer' Ensuring load balancer
I0923 16:09:57.035928       1 load_balancers.go:81] "ensure Load Balancer" op="hcloud/loadBalancers.EnsureLoadBalancer" service="nginx" nodes=[artioms-pool1-6f95dd447c-ml2qp]
I0923 16:09:59.497585       1 load_balancer.go:390] "add target" op="hcops/LoadBalancerOps.ReconcileHCLBTargets" service="nginx" targetName="artioms-pool1-6f95dd447c-ml2qp"
I0923 16:10:00.469209       1 load_balancer.go:450] "add service" op="hcops/LoadBalancerOps.ReconcileHCLBServices" port=8080 loadBalancerID=96546
I0923 16:10:01.378008       1 load_balancers.go:117] "reload HC Load Balancer" op="hcloud/loadBalancers.EnsureLoadBalancer" loadBalancerID=96546
I0923 16:10:02.048105       1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx", UID:"12de68af-1c6f-4ba6-a6d2-2f231711b7c7", APIVersion:"v1", ResourceVersion:"3549", FieldPath:""}): type: 'Normal' reason: 'EnsuredLoadBalancer' Ensured load balancer
I0923 16:10:27.051708       1 load_balancers.go:175] "update Load Balancer" op="hcloud/loadBalancers.UpdateLoadBalancer" service="nginx" nodes=[artioms-pool1-6f95dd447c-ml2qp]
I0923 16:10:27.350981       1 load_balancer.go:439] "update service" op="hcops/LoadBalancerOps.ReconcileHCLBServices" port=8080 loadBalancerID=96546
I0923 16:10:28.261472       1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx", UID:"12de68af-1c6f-4ba6-a6d2-2f231711b7c7", APIVersion:"v1", ResourceVersion:"3549", FieldPath:""}): type: 'Normal' reason: 'UpdatedLoadBalancer' Updated load balancer with new hosts

Here's the resulting LB service:

$ k get svc nginx
NAME    TYPE           CLUSTER-IP       EXTERNAL-IP                 PORT(S)          AGE
nginx   LoadBalancer   10.109.183.114   167.233.10.72,192.168.0.8   8080:32560/TCP   74s

Here's the nginx output from behind this LB:

$ curl 167.233.10.72:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>


kron4eg commented Sep 23, 2020

My kubeone config:

apiVersion: kubeone.io/v1beta1
kind: KubeOneCluster

versions:
  kubernetes: "1.19.2"

cloudProvider:
  hetzner: {}
  external: true

eddiewang (Author) commented

Hi @kron4eg, I decided to try nbg1 and can confirm it does indeed work. Interesting that other DCs may cause compatibility issues.
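
For anyone else on FSN1: it may be worth pointing the location annotation at the cluster's own region before recreating the service (fsn1 is Hetzner's location name for Falkenstein; untested sketch):

$ kubectl annotate service nginx "load-balancer.hetzner.cloud/location=fsn1" --overwrite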

@kron4eg kron4eg added kind/question Categorizes issue or PR as a question. and removed kind/bug Categorizes issue or PR as related to a bug. labels Sep 23, 2020
@kubermatic kubermatic locked and limited conversation to collaborators Apr 26, 2021

This issue was moved to a discussion.

