[Federation] unable to create a federated ingress #39087

Closed
lguminski opened this issue Dec 21, 2016 · 12 comments
@lguminski

Kubernetes version (use kubectl version):
Kubernetes 1.5.1

Environment:

  • GKE

What happened:
After I federated several clusters, all federated resources (replica sets, services) work fine, except for ingress, which only shows up in one underlying cluster.

What you expected to happen:
The ingress should be propagated to all underlying clusters, but it is not:

$ for c in $(kubectl config view -o jsonpath='{.contexts[*].name}'); do echo $c; kubectl --context=$c get ingress; done

federation
NAME         HOSTS     ADDRESS          PORTS     AGE
k8shserver   *         130.211.40.125   80        44m
gke_container-solutions_asia-east1-a_gce-asia-east1-a
No resources found.
gke_container-solutions_asia-northeast1-a_gce-asia-northeast1-a
No resources found.
gke_container-solutions_europe-west1-b_gce-europe-west1-b
No resources found.
gke_container-solutions_us-central1-a_gce-us-central1-a
No resources found.
gke_container-solutions_us-east1-b_gce-us-east1-b
NAME         HOSTS     ADDRESS          PORTS     AGE
k8shserver   *         130.211.40.125   80        45m
gke_container-solutions_us-west1-a_gce-us-west1-a
No resources found.

Here are the controller's logs (run with the -v=4 flag):
https://gist.github.com/anonymous/a51ce3266f97a4103c6d47aabf76c228

How to reproduce it (as minimally and precisely as possible):
(all commands used to set it up are in https://github.com/ContainerSolutions/k8shserver/tree/master/scripts)

  1. create a few GKE clusters
  2. initialize federation
    kubefed init federation --image=gcr.io/google_containers/hyperkube-amd64:v1.5.1 --host-cluster-context=gke_container-solutions_us-east1-b_gce-us-east1-b --dns-zone-name=infra.container-solutions.com
  3. applied a firewall workaround suggested by @madhusudancs (Federated ingress creates flapping backends and health checks #36327 (comment))
    gcloud  compute firewall-rules create my-federated-ingress-firewall-rule --source-ranges 130.211.0.0/22 --allow tcp:80 --network default
    
  4. verified that clusters are in Ready state
    $ kubectl --context=federation get clusters
    NAME                        STATUS    AGE
    cluster-asia-east1-a        Ready     55m
    cluster-asia-northeast1-a   Ready     55m
    cluster-europe-west1-b      Ready     55m
    cluster-us-central1-a       Ready     55m
    cluster-us-east1-b          Ready     55m
    cluster-us-west1-a          Ready     55m
    
  5. successfully deployed a federated service
    $ for c in $(kubectl config view -o jsonpath='{.contexts[*].name}'); do echo $c; kubectl --context=$c get services; done

    federation
    NAME         CLUSTER-IP   EXTERNAL-IP       PORT(S)   AGE
    k8shserver                104.196.209.210   80/TCP    39m
    gke_container-solutions_asia-east1-a_gce-asia-east1-a
    NAME         CLUSTER-IP       EXTERNAL-IP       PORT(S)        AGE
    k8shserver   10.195.248.129   107.167.190.124   80:31957/TCP   39m
    kubernetes   10.195.240.1     <none>            443/TCP        3h
    gke_container-solutions_asia-northeast1-a_gce-asia-northeast1-a
    NAME         CLUSTER-IP       EXTERNAL-IP       PORT(S)        AGE
    k8shserver   10.199.251.248   104.198.126.154   80:31367/TCP   39m
    kubernetes   10.199.240.1     <none>            443/TCP        3h
    gke_container-solutions_europe-west1-b_gce-europe-west1-b
    NAME         CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
    k8shserver   10.115.252.33   104.199.88.10   80:32418/TCP   39m
    kubernetes   10.115.240.1    <none>          443/TCP        3h
    gke_container-solutions_us-central1-a_gce-us-central1-a
    NAME         CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
    k8shserver   10.215.251.137   104.154.23.67   80:32695/TCP   39m
    kubernetes   10.215.240.1     <none>          443/TCP        3h
    gke_container-solutions_us-east1-b_gce-us-east1-b
    NAME         CLUSTER-IP       EXTERNAL-IP       PORT(S)        AGE
    k8shserver   10.211.244.237   104.196.209.210   80:32665/TCP   39m
    kubernetes   10.211.240.1     <none>            443/TCP        3h
    gke_container-solutions_us-west1-a_gce-us-west1-a
    NAME         CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
    k8shserver   10.35.246.12   104.198.7.151   80:32714/TCP   39m
    kubernetes   10.35.240.1    <none>          443/TCP        3h
@madhusudancs madhusudancs self-assigned this Dec 21, 2016
@madhusudancs madhusudancs added this to the v1.6 milestone Dec 21, 2016
@madhusudancs
Contributor

cc @kubernetes/sig-federation-misc

@madhusudancs
Contributor

Federated Ingress currently only works for HTTPS Ingresses. Also, you will need the workaround described here for now - http://kubernetes.io/docs/user-guide/federation/federated-ingress/#known-issue. We plan to fix the firewall issue and that work is being tracked in Issue #37306.

@thesandlord
Contributor

@madhusudancs even with the workaround, the Ingress controller only gets created in one cluster, which means the HTTPS LB only has one available backend. How do you make it create Ingress controllers in every cluster and have the HTTPS LB treat each cluster as a backend?

Federated Services create a service in each cluster today, but Federated Ingress does not propagate the ingress the same way.

@madhusudancs
Contributor

@thesandlord is it possible to share your manifests?

@thesandlord
Contributor

thesandlord commented Dec 28, 2016

I'm basically following this blog post.

Both server and client are Kubernetes 1.5.1

The issue could be related to #34291, since I didn't specify a nodePort for the service. However, if I DO specify a nodePort, the service does not get propagated to the clusters. (At first I used a port number that was too low, but even with a valid NodePort, the Ingress still does not propagate.) Also, the blog post does not specify a nodePort in its YAML.

Deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 6
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
      - name: web
        image: gcr.io/<PROJECT_ID>/web:v1
        imagePullPolicy: Always
        ports:
        - containerPort: 3000

Service:

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  ports:
    - port: 80
      targetPort: 3000
      protocol: TCP
      nodePort: 30036
  selector:
    name: web
  type: LoadBalancer

Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  backend:
    serviceName: web
    servicePort: 80

@thesandlord
Contributor

Quick Update:

I am able to manually create an L7 load balancer through the Google Cloud Console. I created the health check and backends myself (using the unmanaged instance groups that GKE creates by default), and everything seems to work.
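For anyone wanting to reproduce the Console steps from the CLI, they correspond roughly to the following gcloud sketch. This is an outline under assumptions, not a verified recipe: all resource names (web-hc, web-backend, web-map, web-proxy, web-fw) are examples, port 30036 matches the NodePort from the manifest above, and the instance group and zone must match what GKE actually created.

```shell
# Sketch only: names are illustrative, and add-backend must be
# repeated once per cluster using the unmanaged instance groups
# that GKE creates by default.
gcloud compute health-checks create http web-hc --port 30036 --request-path /
gcloud compute backend-services create web-backend \
    --global --health-checks web-hc --port-name http
gcloud compute backend-services add-backend web-backend \
    --global --instance-group <GKE_INSTANCE_GROUP> --instance-group-zone <ZONE>
gcloud compute url-maps create web-map --default-service web-backend
gcloud compute target-http-proxies create web-proxy --url-map web-map
gcloud compute forwarding-rules create web-fw \
    --global --target-http-proxy web-proxy --ports 80
```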

@ualtinok

ualtinok commented Jan 6, 2017

Related to the issue of the ingress being created in only one of the underlying clusters, I see these errors in the controller logs:

E0106 10:50:08.309545 1 ingress_controller.go:725] Failed to ensure delete object from underlying clusters finalizer in ingress aning: failed to add finalizer orphan to ingress : Operation cannot be fulfilled on ingresses.extensions "aning": the object has been modified; please apply your changes to the latest version and try again

E0106 10:50:08.314478 1 ingress_controller.go:672] Failed to update annotation ingress.federation.kubernetes.io/first-cluster:acluster on federated ingress "default/aning", will try again later: Operation cannot be fulfilled on ingresses.extensions "aning": the object has been modified; please apply your changes to the latest version and try again

Hope this helps.

@ualtinok

ualtinok commented Jan 8, 2017

@lguminski @thesandlord
While running on GCP, creating a static global IP and annotating the Ingress with:
kubernetes.io/ingress.global-static-ip-name

solves the problem.
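For reference, applied to the `web` Ingress from the manifest above, the workaround would look something like this. This is a minimal sketch: the address name `web-ip` is an example, and it is assumed to match a global static IP reserved beforehand (e.g. with `gcloud compute addresses create web-ip --global`).

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
  annotations:
    # Must name a pre-reserved *global* static IP; "web-ip" is an example,
    # e.g. created with: gcloud compute addresses create web-ip --global
    kubernetes.io/ingress.global-static-ip-name: web-ip
spec:
  backend:
    serviceName: web
    servicePort: 80
```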

@thesandlord
Contributor

@ualtinok that did it! This is awesome!

I'm going to try and get this into the docs.

@ciokan

ciokan commented Jan 24, 2017

I tried all these tricks. I have 3 clusters, and only one appears in the ingress, with all its health checks failing. In the ingress I can also see the backends being switched between clusters, probably for health checking, but everything fails. I also created the ingress with a reserved global IP address... no cake.

@ethernetdan
Contributor

@madhusudancs is this a release blocker for v1.6?

@madhusudancs
Contributor

This is now fixed in v1.6.
