
looking for migration guide from nginx ingress to istio #7776

Closed · Reasno opened this issue Aug 9, 2018 · 21 comments

@Reasno commented Aug 9, 2018

We have been using the nginx ingress controller in production and are looking to migrate to Istio. I couldn't find a handy guide.

From what I've learned so far, I need to split my ingress rules into a Gateway and a VirtualService. The general rule of thumb is quite straightforward, but in production we have a ton of nginx annotations and certificates, which makes me hesitate to adopt Istio.

Can anyone with similar experience give me some pointers? Much appreciated.

@Stono (Contributor) commented Aug 9, 2018

Out of curiosity, why do you want to move?
I use the nginx ingress controller with Istio because I find ingress-nginx has more of the capabilities I need at the ingress point.

@Reasno (Author) commented Aug 10, 2018

Hi @Stono. My impression is that in the Istio world, traffic routing should be done with a VirtualService. With nginx ingress sitting in front, I wonder:

  1. When the ingress rule does all the L7 routing to the service, can a VirtualService still manipulate subsets, timeouts, etc.?

  2. If I opt out of mTLS, do I still need to do the iptables magic for the nginx controller?

  3. Do I need to patch every ingress rule with the service-upstream and upstream-vhost annotations?

@Stono (Contributor) commented Aug 10, 2018

Hey @Reasno, my thoughts:

  1. I don't use a VirtualService or Ingress rules for ingress into the mesh. I use ingress-nginx configured with traffic.sidecar.istio.io/includeInboundPorts: "", which basically says "don't intercept any inbound connections". That way, nginx is the entrypoint/edge and handles everything from L3 up. It then acts as a reverse proxy to my upstream service; it's at that point that Envoy comes into play, with DestinationRules applying to the upstream services within the mesh.

  2. Not sure what you mean here, iptables magic? I use mTLS with ingress-nginx just fine.

  3. Yes, if you're using ingress-nginx then your Ingress object should have these annotations (see the sketch below):

nginx.ingress.kubernetes.io/service-upstream: "true"
nginx.ingress.kubernetes.io/upstream-vhost: your-app.the-namespace.svc.cluster.local

Hope this helps.
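For illustration, a minimal sketch of an Ingress carrying those two annotations. The object name, namespace, ingress class, hostname, and port are placeholders rather than values from this thread; the upstream-vhost value simply mirrors the example above:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: your-app
  namespace: the-namespace
  annotations:
    kubernetes.io/ingress.class: nginx
    # Send traffic to the Service's cluster IP rather than the pod endpoints,
    # so routing decisions are left to the mesh.
    nginx.ingress.kubernetes.io/service-upstream: "true"
    # Rewrite the Host header to the in-cluster FQDN so Envoy can match it
    # to the right upstream service.
    nginx.ingress.kubernetes.io/upstream-vhost: your-app.the-namespace.svc.cluster.local
spec:
  rules:
  - host: app.example.com        # placeholder external hostname
    http:
      paths:
      - backend:
          serviceName: your-app
          servicePort: 80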

@Reasno (Author) commented Aug 11, 2018

Thanks for the help @Stono.

By iptables magic I mean the one you brought up in kubernetes/ingress-nginx#2126. But I just realized traffic.sidecar.istio.io/includeInboundPorts: "" has taken its place.

Say I want to install Istio into my existing environment. I could:

  1. Add traffic.sidecar.istio.io/includeInboundPorts: "" to the ingress controller deployment, and mark the application and nginx namespaces as injectable (sketch below).
  2. Add nginx.ingress.kubernetes.io/service-upstream: "true" and nginx.ingress.kubernetes.io/upstream-vhost: your-app.the-namespace.svc.cluster.local to all Ingress objects.
  3. Install Istio with the Helm chart.
  4. Rolling-update all pods in the injectable namespaces.

Is this sufficient to test the waters with Istio without breaking anything?
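A minimal sketch of the namespace marking in step 1, assuming the standard istio-injection label read by Istio's automatic sidecar injector (the namespace name is a placeholder):

apiVersion: v1
kind: Namespace
metadata:
  name: the-namespace            # placeholder; repeat for the application and nginx namespaces
  labels:
    # Tells Istio's automatic sidecar injector to inject Envoy into pods
    # created in this namespace.
    istio-injection: enabled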

@Stono (Contributor) commented Aug 11, 2018

You'll probably break some stuff :-) There's probably something I've forgotten, but that is the main gist of it.

@csyszf commented Aug 14, 2018

Hi @Stono, I've tried using the nginx ingress controller with traffic.sidecar.istio.io/includeInboundPorts: ""; everything works fine, but no tracing data is being recorded.
Is this expected behavior?

@Stono (Contributor) commented Aug 14, 2018

@csyszf what do you mean no tracing data?

@Stono (Contributor) commented Aug 14, 2018

There is a problem with ingress-nginx and zipkin traces/spans: kubernetes/ingress-nginx#2940

@csyszf commented Aug 15, 2018

@Stono I'm using Jaeger for tracing. There should be some tracing records like "nginx-ingress:serviceA.my-namespace.svc.cluster.local:80/*", but there aren't any.

@Stono (Contributor) commented Aug 16, 2018

Have you enabled opentracing (zipkin) in ingress-nginx?
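Roughly what enabling that looks like in the ingress-nginx configuration ConfigMap, using the controller's OpenTracing settings. The collector host assumes a zipkin-compatible collector reachable at zipkin.istio-system, which is an assumption about the cluster; the ConfigMap name and namespace are placeholders and must match the controller's --configmap flag:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration      # placeholder; must match the controller's --configmap flag
  namespace: ingress-nginx       # placeholder
data:
  # Turn on the OpenTracing module so nginx emits spans for proxied requests.
  enable-opentracing: "true"
  # Point nginx at the same collector the mesh reports to (assumed location).
  zipkin-collector-host: zipkin.istio-system
  zipkin-service-name: nginx-ingress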

@csyszf commented Aug 17, 2018

Nope, I thought the proxy sidecar would be able to do the tracing work.

@zowiehi commented Oct 10, 2018

Hi everyone 👋
I am looking to use an nginx ingress with Istio as well; however, the solution outlined here has an important drawback: the annotation nginx.ingress.kubernetes.io/service-upstream: "true" does not permit session affinity, which is crucial to my application. I am curious whether you have any advice, @Stono; my first guess would be to use a headless service to achieve this, but my attempts so far have not been fruitful.

@bzon commented Dec 4, 2018

I was able to get this working. Ensure all nginx controller Ingress objects have the annotations nginx.ingress.kubernetes.io/service-upstream: "true" and nginx.ingress.kubernetes.io/upstream-vhost: your-app.the-namespace.svc.cluster.local.

Disable sidecar injection for the nginx ingress controller pod. See https://istio.io/docs/setup/kubernetes/sidecar-injection/#policy

One important thing to note: ensure that all your k8s Service ports have a name (see the sketch below)! See https://istio.io/docs/setup/kubernetes/spec-requirements/.
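For reference, a minimal sketch of a Service with a named port. The names and ports are placeholders; per the Istio spec requirements, the port name should be the protocol or protocol-prefixed (e.g. http or http-web):

apiVersion: v1
kind: Service
metadata:
  name: your-app                 # placeholder
  namespace: the-namespace       # placeholder
spec:
  selector:
    app: your-app
  ports:
  - name: http                   # the protocol-prefixed name Istio needs for routing
    port: 80
    targetPort: 8080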

@darkrasid commented Dec 13, 2018

Hi, everyone.
I have a question :) I am also trying to use both the nginx controller and Istio.

I was able to connect to my service using nginx and Envoy, but the VirtualService couldn't apply any of its configuration. Can anyone give me a hint? Much appreciated.

First, my environment:

  • istio: 1.0.3 (installed with Helm, NodePort enabled)
  • nginx-ingress-controller: 0.18.0 (installed with Helm, host port mapped)

And I followed all the instructions in the comments above.

  1. Set traffic.sidecar.istio.io/includeInboundPorts: "" on my nginx controller deployment, and don't inject Envoy into the nginx pod.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "8"
  labels:
    app: nginx-ingress
    chart: nginx-ingress-0.26.0
    component: controller
    heritage: Tiller
    release: my-nginx
  name: my-nginx-nginx-ingress-controller
  namespace: pilsner
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx-ingress
      component: controller
      release: my-nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"
        traffic.sidecar.istio.io/includeInboundPorts: ""
... 
  2. Set nginx.ingress.kubernetes.io/service-upstream: "true" and nginx.ingress.kubernetes.io/upstream-vhost: your-app.the-namespace.svc.cluster.local on my Ingress.
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: pilsner
      nginx.ingress.kubernetes.io/service-upstream: "true"
      nginx.ingress.kubernetes.io/upstream-vhost: sandbox-test.pilsner.svc.cluster.local
    name: sandbox-test
    namespace: pilsner
  spec:
    rules:
    - host: sandbox-sampledns.myhost.com
      http:
        paths:
        - backend:
            serviceName: sandbox-test
            servicePort: 8080
    tls:
    - hosts:
      - sandbox-sampledns.myhost.com
      secretName: myhost.com
  3. All Services have their port names set.
apiVersion: v1
kind: Service
metadata:
  labels:
    heritage: Tiller
    release: sandbox-test
  name: sandbox-test
  namespace: pilsner
spec:
  clusterIP: 10.250.112.161
  ports:
  - name: sandbox
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: sandbox-test
    version: JEN-7
  sessionAffinity: None
  type: ClusterIP

And then I can access my service. There are actually two deployments behind the service; one returns aaaaaaaaaaaaaa and the other returns ccccccccccccc.

➜  curl https://sandbox-sampledns.myhost.com/api/ping
aaaaaaaaaaaaaa%
➜  curl https://sandbox-sampledns.myhost.com/api/ping
ccccccccccccc%
➜  curl https://sandbox-sampledns.myhost.com/api/ping
aaaaaaaaaaaaaa%
➜  curl https://sandbox-sampledns.myhost.com/api/ping
aaaaaaaaaaaaaa%
➜  curl https://sandbox-sampledns.myhost.com/api/ping
ccccccccccccc%
➜  curl https://sandbox-sampledns.myhost.com/api/ping
aaaaaaaaaaaaaa%
➜  curl https://sandbox-sampledns.myhost.com/api/ping
ccccccccccccc%
➜  curl https://sandbox-sampledns.myhost.com/api/ping
ccccccccccccc%
➜  curl https://sandbox-sampledns.myhost.com/api/ping
aaaaaaaaaaaaaa%

Then I created a DestinationRule and a VirtualService (the DestinationRule isn't pasted here; a hypothetical sketch of it follows the VirtualService below).

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sandbox-test-virtualservice
  namespace: pilsner
spec:
  hosts:
  - sandbox-test.pilsner.svc.cluster.local
  http:
  - match:
    - uri:
        exact: /api/ping
    route:
    - destination:
        host: sandbox-test.pilsner.svc.cluster.local
        port:
          number: 8080
        subset: aa
      weight: 0
    - destination:
        host: sandbox-test.pilsner.svc.cluster.local
        port:
          number: 8080
        subset: bb
      weight: 100
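
The DestinationRule itself isn't shown above; one that defines subsets aa and bb would look roughly like the following sketch (the object name and the version label values are hypothetical, not taken from this thread):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: sandbox-test-destinationrule   # hypothetical name
  namespace: pilsner
spec:
  host: sandbox-test.pilsner.svc.cluster.local
  subsets:
  - name: aa
    labels:
      version: aa-version    # hypothetical: whatever version label the first deployment's pods carry
  - name: bb
    labels:
      version: bb-version    # hypothetical: the second deployment's version label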

However, it has no effect. If I instead access my service through the Istio ingress gateway, with something like this:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sandbox-test-virtualservice
  namespace: pilsner
spec:
  gateways:
  - sandbox-test-gateway
  hosts:
  - '*'
  http:
  - match:
    - uri:
        exact: /api/ping
    route:
    - destination:
        host: sandbox-test.pilsner.svc.cluster.local
        port:
          number: 8080
        subset: aa
      weight: 0
    - destination:
        host: sandbox-test.pilsner.svc.cluster.local
        port:
          number: 8080
        subset: bb
      weight: 100

And the Gateway:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sandbox-test-gateway
  namespace: pilsner
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - 'sandbox-sampledns.myhost.com'
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      privateKey: /etc/istio/ingressgateway-certs/tls.key
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt

It works well.

➜  curl https://sandbox-sampledns.myhost.com/api/ping
aaaaaaaaaaaaaa%
➜  curl https://sandbox-sampledns.myhost.com/api/ping
aaaaaaaaaaaaaa%
➜  curl https://sandbox-sampledns.myhost.com/api/ping
aaaaaaaaaaaaaa%
➜  curl https://sandbox-sampledns.myhost.com/api/ping
aaaaaaaaaaaaaa%
➜  curl https://sandbox-sampledns.myhost.com/api/ping
aaaaaaaaaaaaaa%
➜  curl https://sandbox-sampledns.myhost.com/api/ping
aaaaaaaaaaaaaa%
➜  curl https://sandbox-sampledns.myhost.com/api/ping
aaaaaaaaaaaaaa%
➜  curl https://sandbox-sampledns.myhost.com/api/ping
aaaaaaaaaaaaaa%

Am I missing something? Please help :(

@jbialy commented Jan 2, 2019

For the most recent releases of Istio, are the annotations for nginx-ingress deployment and ingresses still required in order to properly route traffic from nginx into the mesh?

@ravigude commented Jan 20, 2019

@Stono, @darkrasid, I am having the same issue with the VirtualService that you mentioned. An answer would help us too. #7776 (comment)

@stale bot commented Apr 20, 2019

This issue has been automatically marked as stale because it has not had activity in the last 90 days. It will be closed in the next 30 days unless it is tagged "help wanted" or other activity occurs. Thank you for your contributions.

The stale bot added the stale label on Apr 20, 2019.

@stale bot commented May 20, 2019

This issue has been automatically closed because it has not had activity in the last month and a half. If this issue is still valid, please ping a maintainer and ask them to label it as "help wanted". Thank you for your contributions.

The stale bot closed this on May 20, 2019.

@bitva77 commented Jul 9, 2019

@darkrasid - did you ever figure out how to use Nginx ingress to route to the Istio Virtual Service?

@memorais commented Aug 31, 2019

Just an update on this: I was able to make ingress-nginx work with Istio. My use case was specifically about Istio authentication and authorization (RBAC), so I didn't want to use Istio's gateway components.

The trick: if you are using the latest ingress-nginx (0.25.1), the default-http-backend component no longer exists, so the Envoy sidecar on the nginx-controller pod is mandatory if you want mTLS to work properly (a rough sketch follows).
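A rough sketch of the pod-template annotations such a setup might use on the nginx controller Deployment; this is an assumption about the configuration, not code shared by @memorais, and it reuses annotations that appear earlier in this thread:

template:
  metadata:
    annotations:
      # Inject the Envoy sidecar so nginx can speak mTLS to meshed upstreams ...
      sidecar.istio.io/inject: "true"
      # ... but keep inbound edge traffic to nginx itself out of Envoy's
      # iptables interception.
      traffic.sidecar.istio.io/includeInboundPorts: ""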

@istio-policy-bot removed the stale label on Aug 31, 2019.

@bernardoVale commented Dec 13, 2019

@memorais could you please share the code? I'm struggling to get this working. I have the exact same use case
