
Kube-lego works but connection gives default backend - 404 #44

Closed
johnparn opened this issue Oct 15, 2016 · 9 comments

@johnparn

johnparn commented Oct 15, 2016

Hi!

To start with, thanks for your good work with kube-lego!

I've set up kube-lego with GCE and it works fine. The certificates are requested and deployed for two sites, one mobile and one desktop site. However, only the mobile site is reachable; the desktop site returns default backend - 404.

The setup files I've used are https://gist.github.com/johnparn/ce0e025e8c015de812c0b84ef8b1faf9

Containers for both mobile and desktop are exposed on port 80. The only difference I've spotted is that the GCE load balancer for the mobile service has a path rule with All unmatched (default) for that particular host name.

(screenshot: gce-lb-mobile)

This rule is obviously missing from the GCE LB for desktop, and I believe this is the problem.

(screenshot: gce-lb-desktop)

However, I tried creating a corresponding rule for the desktop LB, but I don't seem to be able to create an All unmatched (default) rule for the desktop host, at least not through the GUI. And I want to make sure, in case I have to rerun the scripts, that the rule actually gets created.
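
For context, the kind of ingress I'd expect to produce that default rule looks roughly like this (a sketch with hypothetical names; the real manifests are in the gist above). As far as I understand, the All unmatched (default) rule comes from the ingress default backend (spec.backend):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: desktop-web                    # hypothetical name
  annotations:
    kubernetes.io/tls-acme: "true"
spec:
  backend:                             # default backend: GCE turns this into
    serviceName: desktop-web-svc       # the "All unmatched (default)" path rule
    servicePort: 80
  tls:
  - hosts:
    - desktop.example.com              # hypothetical host
    secretName: desktop-web-tls
  rules:
  - host: desktop.example.com
    http:
      paths:
      - path: /*                       # GCE path matching uses the /* wildcard
        backend:
          serviceName: desktop-web-svc
          servicePort: 80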

Any insights appreciated!
// John

@johnparn
Author

johnparn commented Oct 18, 2016

I've now tried several approaches to getting kube-lego working for multiple web sites, all without success:

  • Putting kube-lego in one namespace and the web sites in separate namespaces.
  • Putting all sites and kube-lego in the same namespace.
  • Assigning each web site its own namespace, with a separate kube-lego in each of these namespaces (this shouldn't be necessary, as kube-lego should listen for ingresses in all namespaces).

Has anyone successfully run multiple sites in the same cluster with kube-lego?

At best, one site gets the cert and the other returns a 404.

@jackzampolin
Contributor

jackzampolin commented Oct 18, 2016

I've gotten the nginx example working and updated the docs: #49

I have gotten multiple sites working on the same cluster. If you are using GKE, the nginx solution might be better (faster, since there is no waiting for the Google LB to warm up) and cheaper (no paying for it!), so I would encourage you to check it out.

@johnparn
Author

@jackzampolin Well done! I will give it a try soon. Then I'll likely skip the GCE load balancer and go straight for the nginx solution.

@johnparn
Author

johnparn commented Oct 21, 2016

@jackzampolin How did you solve the multisite setup?

I've set up a namespace production that will contain two deployments and services, mobile-web and desktop-web. How do you route the traffic from nginx-ingress to the right service in the production namespace, or do you have to have multiple nginx-ingress installations (unless SNI is used)?

It would be nice to be able to just point at the service of each site - for example desktop-web-svc and mobile-web-svc.

@simonswine Simon, what is the proper way to serve multiple services using nginx-ingress?

@jackzampolin
Contributor

jackzampolin commented Oct 21, 2016

@simonswine I use a single nginx service that looks like this:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: nginx-ingress
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: http
  - port: 443
    name: https
  selector:
    app: nginx
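
For completeness, that Service just selects the controller pods; the Deployment behind it might look roughly like this (a sketch assuming the stock nginx-ingress-controller image, with an illustrative tag and default backend name):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
  namespace: nginx-ingress
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx                     # must match the Service selector above
    spec:
      containers:
      - name: nginx-ingress-controller
        image: gcr.io/google_containers/nginx-ingress-controller:0.8.3  # illustrative tag
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 80
        - containerPort: 443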

Then for each app I make a service and ingress like this:

apiVersion: v1
kind: Service
metadata:
  name: service
  namespace: service
spec:
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
  type: NodePort
  selector:
    app: service
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: service
  namespace: service
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: service-tls
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service
          servicePort: 80

And have a namespace for each service/application. Works pretty well.
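
One tip: once kube-lego picks up the tls-acme annotation, you can verify that it created the certificate Secret and that the ingress got its TLS config (using the names from the example above):

kubectl get secret service-tls --namespace service
kubectl describe ingress service --namespace service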

@johnparn
Author

@jackzampolin thanks for your feedback, I really appreciate it!

That is exactly the way I started, but what I don't understand is how the nginx-ingress knows whether to route to the foo-service or the bar-service. I was thinking about using host headers to route traffic for foo.com to the foo-service and for bar.com to the bar-service.

I may have misunderstood how it works - the ingresses for each site do not expose these directly, right? It is the job of nginx-ingress to route all traffic and terminate TLS for each and every site. They are upstreams.

One solution then is to use a separate nginx-ingress in each namespace - that is, foo and bar. However, I started out with the namespace production, hoping to gather both the foo and bar sites within it. But perhaps I have to reconsider that.

@aledbf
Contributor

aledbf commented Oct 21, 2016

@johnparn the ingress controller is aware of the mapping between services -> endpoints.
To see how it's done, please execute kubectl exec <nginx ingress pod> -- cat /etc/nginx/nginx.conf
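
Roughly, you will see one upstream per service endpoint set and one server block per ingress host, something like this (trimmed and illustrative; the real file is much longer and the names/IPs will differ):

upstream production-mobile-web-svc-80 {
    # endpoints (pod IPs) of the mobile-web-svc Service
    server 10.244.1.5:80;
    server 10.244.2.7:80;
}

server {
    listen 80;
    listen 443 ssl;
    server_name mobile.example.com;    # host from the Ingress rule

    ssl_certificate     /etc/nginx-ssl/production-mobile-web-tls.pem;
    ssl_certificate_key /etc/nginx-ssl/production-mobile-web-tls.pem;

    location / {
        proxy_pass http://production-mobile-web-svc-80;
    }
}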

@johnparn
Author

johnparn commented Oct 22, 2016

@aledbf I wasn't able to run the command you mentioned. But that means the nginx-ingress listens for all other ingresses and registers the new domain names as they appear in the ingresses of the web sites?

By the way, it's working just fine with the two sites. They are up and running. Thanks @jackzampolin

@aledbf
Contributor

aledbf commented Oct 22, 2016

But that means the nginx-ingress listens for all other ingresses and registers the new domain names as they appear in the ingresses of the web sites?

Yes
