
Istio 1.9 integration with virtual machine (AWS EC2): generated hosts file is empty #31624

Closed
jithinjks opened this issue Mar 22, 2021 · 8 comments
Labels: lifecycle/automatically-closed, lifecycle/stale

@jithinjks

I have installed MySQL on a VM and want my EKS cluster with Istio 1.9 installed to talk to it. I am following https://istio.io/latest/docs/setup/install/virtual-machine/, but when I do the step that generates the VM files, the hosts file it produces is empty. I tried with this empty hosts file anyway, but I get errors when starting the VM with this command:

> sudo systemctl start istio
When I tail this file:

/var/log/istio/istio.log

2021-03-22T18:44:02.332421Z     info    Proxy role      ips=[10.8.1.179 fe80::dc:36ff:fed3:9eea] type=sidecar id=ip-10-8-1-179.vm domain=vm.svc.cluster.local
2021-03-22T18:44:02.332429Z     info    JWT policy is third-party-jwt
2021-03-22T18:44:02.332438Z     info    Pilot SAN: [istiod.istio-system.svc]
2021-03-22T18:44:02.332443Z     info    CA Endpoint istiod.istio-system.svc:15012, provider Citadel
2021-03-22T18:44:02.332997Z     info    Using CA istiod.istio-system.svc:15012 cert with certs: /etc/certs/root-cert.pem
2021-03-22T18:44:02.333093Z     info    citadelclient   Citadel client using custom root cert: istiod.istio-system.svc:15012
2021-03-22T18:44:02.410934Z     info    ads     All caches have been synced up in 82.7974ms, marking server ready
2021-03-22T18:44:02.411247Z     info    sds     SDS server for workload certificates started, listening on "./etc/istio/proxy/SDS"
2021-03-22T18:44:02.424855Z     info    sds     Start SDS grpc server
2021-03-22T18:44:02.425044Z     info    xdsproxy        Initializing with upstream address "istiod.istio-system.svc:15012" and cluster "Kubernetes"
2021-03-22T18:44:02.425341Z     info    Starting proxy agent
2021-03-22T18:44:02.425483Z     info    dns     Starting local udp DNS server at localhost:15053
2021-03-22T18:44:02.427627Z     info    dns     Starting local tcp DNS server at localhost:15053
2021-03-22T18:44:02.427683Z     info    Opening status port 15020
2021-03-22T18:44:02.432407Z     info    Received new config, creating new Envoy epoch 0
2021-03-22T18:44:02.433999Z     info    Epoch 0 starting
2021-03-22T18:44:02.690764Z     warn    ca      ca request failed, starting attempt 1 in 91.93939ms
2021-03-22T18:44:02.693579Z     info    Envoy command: [-c etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster istio-proxy --service-node sidecar~10.8.1.179~ip-10-8-1-179.vm~vm.svc.cluster.local --local-address-ip-version v4 --bootstrap-version 3 --log-format %Y-%m-%dT%T.%fZ       %l      envoy %n        %v -l warning --component-log-level misc:error --concurrency 2]
2021-03-22T18:44:02.782817Z     warn    ca      ca request failed, starting attempt 2 in 195.226287ms
2021-03-22T18:44:02.978344Z     warn    ca      ca request failed, starting attempt 3 in 414.326774ms
2021-03-22T18:44:03.392946Z     warn    ca      ca request failed, starting attempt 4 in 857.998629ms
2021-03-22T18:44:04.251227Z     warn    sds     failed to warm certificate: failed to generate workload certificate: create certificate: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp: lookup istiod.istio-system.svc on 10.8.0.2:53: no such host"
2021-03-22T18:44:04.849207Z     warn    ca      ca request failed, starting attempt 1 in 91.182413ms
2021-03-22T18:44:04.940652Z     warn    ca      ca request failed, starting attempt 2 in 207.680983ms
2021-03-22T18:44:05.148598Z     warn    ca      ca request failed, starting attempt 3 in 384.121814ms
2021-03-22T18:44:05.533019Z     warn    ca      ca request failed, starting attempt 4 in 787.704352ms
2021-03-22T18:44:06.321042Z     warn    sds     failed to warm certificate: failed 
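
For reference, the hosts file generated by the file-preparation step in the linked guide (the istioctl x workload command discussed further down) is expected to map istiod.istio-system.svc to an IP address the VM can reach. A minimal sketch of what a non-empty file would contain, with a placeholder IP:

# Sketch of the expected generated hosts entries (appended to /etc/hosts on the VM).
# 192.0.2.10 is a placeholder for the IP through which the VM reaches istiod.
192.0.2.10 istiod.istio-system.svc

With an empty file there is nothing to resolve that hostname against, which matches the "lookup istiod.istio-system.svc ... no such host" failure in the log above.
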
@howardjohn
Member

Can you show kubectl get svc -n istio-system -oyaml?

@jithinjks
Author

Thanks for the response @howardjohn, please find the details below. I would appreciate your help, as I have been stuck with this for 3 days...

apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"istio-ingressgateway","install.operator.istio.io/owning-resource":"unknown","install.operator.istio.io/owning-resource-namespace":"istio-system","istio":"ingressgateway","istio.io/rev":"default","operator.istio.io/component":"IngressGateways","operator.istio.io/managed":"Reconcile","operator.istio.io/version":"1.9.1","release":"istio"},"name":"istio-ingressgateway","namespace":"istio-system"},"spec":{"ports":[{"name":"status-port","port":15021,"protocol":"TCP","targetPort":15021},{"name":"http2","port":80,"protocol":"TCP","targetPort":8080},{"name":"https","port":443,"protocol":"TCP","targetPort":8443},{"name":"tcp-istiod","port":15012,"protocol":"TCP","targetPort":15012},{"name":"tls","port":15443,"protocol":"TCP","targetPort":15443}],"selector":{"app":"istio-ingressgateway","istio":"ingressgateway"},"type":"LoadBalancer"}}
    creationTimestamp: "2021-03-22T09:20:57Z"
    finalizers:
    - service.kubernetes.io/load-balancer-cleanup
    labels:
      app: istio-ingressgateway
      install.operator.istio.io/owning-resource: unknown
      install.operator.istio.io/owning-resource-namespace: istio-system
      istio: ingressgateway
      istio.io/rev: default
      operator.istio.io/component: IngressGateways
      operator.istio.io/managed: Reconcile
      operator.istio.io/version: 1.9.1
      release: istio
    name: istio-ingressgateway
    namespace: istio-system
    resourceVersion: "113576"
    selfLink: /api/v1/namespaces/istio-system/services/istio-ingressgateway
    uid: 5f301d7b-24ca-4ae6-ab74-21f78196412b
  spec:
    clusterIP: 172.20.113.152
    externalTrafficPolicy: Cluster
    ports:
    - name: status-port
      nodePort: 32589
      port: 15021
      protocol: TCP
      targetPort: 15021
    - name: http2
      nodePort: 30500
      port: 80
      protocol: TCP
      targetPort: 8080
    - name: https
      nodePort: 30885
      port: 443
      protocol: TCP
      targetPort: 8443
    - name: tcp-istiod
      nodePort: 31932
      port: 15012
      protocol: TCP
      targetPort: 15012
    - name: tls
      nodePort: 31676
      port: 15443
      protocol: TCP
      targetPort: 15443
    selector:
      app: istio-ingressgateway
      istio: ingressgateway
    sessionAffinity: None
    type: LoadBalancer
  status:
    loadBalancer:
      ingress:
      - hostname: a5f301d7b24ca4ae6ab7421f78196412-462867769.ap-south-1.elb.amazonaws.com
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"istiod","install.operator.istio.io/owning-resource":"unknown","install.operator.istio.io/owning-resource-namespace":"istio-system","istio":"pilot","istio.io/rev":"default","operator.istio.io/component":"Pilot","operator.istio.io/managed":"Reconcile","operator.istio.io/version":"1.9.1","release":"istio"},"name":"istiod","namespace":"istio-system"},"spec":{"ports":[{"name":"grpc-xds","port":15010,"protocol":"TCP"},{"name":"https-dns","port":15012,"protocol":"TCP"},{"name":"https-webhook","port":443,"protocol":"TCP","targetPort":15017},{"name":"http-monitoring","port":15014,"protocol":"TCP"}],"selector":{"app":"istiod","istio":"pilot"}}}
    creationTimestamp: "2021-03-22T09:20:44Z"
    labels:
      app: istiod
      install.operator.istio.io/owning-resource: unknown
      install.operator.istio.io/owning-resource-namespace: istio-system
      istio: pilot
      istio.io/rev: default
      operator.istio.io/component: Pilot
      operator.istio.io/managed: Reconcile
      operator.istio.io/version: 1.9.1
      release: istio
    name: istiod
    namespace: istio-system
    resourceVersion: "113545"
    selfLink: /api/v1/namespaces/istio-system/services/istiod
    uid: e22720ac-191c-4803-b7fc-160a974a8443
  spec:
    clusterIP: 172.20.82.95
    ports:
    - name: grpc-xds
      port: 15010
      protocol: TCP
      targetPort: 15010
    - name: https-dns
      port: 15012
      protocol: TCP
      targetPort: 15012
    - name: https-webhook
      port: 443
      protocol: TCP
      targetPort: 15017
    - name: http-monitoring
      port: 15014
      protocol: TCP
      targetPort: 15014
    selector:
      app: istiod
      istio: pilot
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

@howardjohn
Member

@stevenctl I assume we do not handle hostname-style load balancers? I don't think /etc/hosts can do a "DNS alias" type thing, so that would make sense.

@howardjohn
Member

But we should probably warn if the discovery address is istiod.istio-system.svc and there is no LB IP?

@stevenctl
Contributor

> we do not handle the hostname style load balancers

Right, it may be possible to do this for multi-network (watch the TTL and re-resolve in the control plane, push EDS), but for VMs we'd have to somehow have the VM update this /etc/hosts value.

@jithinjks just to get things working, you can resolve the hostname in .Status.LoadBalancer.Ingress to get an IP, then specify --ingressIP when using the istioctl x workload command, or just edit /etc/hosts manually.
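
A minimal sketch of that workaround as a shell snippet; the --ingressIP flag and the .status.loadBalancer.ingress hostname are taken from this thread, while the istioctl x workload entry configure flags and file names are assumptions based on the linked VM guide:

# Resolve the ELB hostname reported under .status.loadBalancer.ingress to an IP.
INGRESS_HOST=$(kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
INGRESS_IP=$(dig +short "$INGRESS_HOST" | head -n1)

# Regenerate the VM files with an explicit ingress IP (flags assumed from the VM guide)...
istioctl x workload entry configure -f workloadgroup.yaml -o "${WORK_DIR}" --ingressIP "$INGRESS_IP"

# ...or simply add the mapping to /etc/hosts on the VM by hand.
echo "$INGRESS_IP istiod.istio-system.svc" | sudo tee -a /etc/hosts
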

@nmnellis
Contributor

nmnellis commented May 27, 2021

> we do not handle the hostname style load balancers
>
> Right, it may be possible to do this for multi-network (watch the TTL and re-resolve in the control plane, push EDS), but for VMs we'd have to somehow have the VM update this /etc/hosts value.
>
> @jithinjks just to get things working, you can resolve the hostname in .Status.LoadBalancer.Ingress to get an IP, then specify --ingressIP when using the istioctl x workload command, or just edit /etc/hosts manually.

We just ran into this issue.

The issue with this in AWS is that LBs resolve to multiple IP addresses. Yes, you could simply grab one of them, but they are known to change.

https://stackoverflow.com/questions/3821333/amazon-ec2-elastic-load-balancer-does-its-ip-ever-change
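
For illustration, resolving that ELB hostname typically returns more than one A record, and the set rotates over time (hypothetical output with placeholder addresses):

dig +short a5f301d7b24ca4ae6ab7421f78196412-462867769.ap-south-1.elb.amazonaws.com
203.0.113.10
203.0.113.24
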

@istio-policy-bot added the lifecycle/stale label on Aug 26, 2021
@stevenctl
Contributor

stevenctl commented Aug 27, 2021

@nmnellis sounds like an extension of #29359. For that issue we must send IPs to Envoy. We don't have that requirement here.

Currently Envoy gets configured to talk to istiod.istio-system... and then we use /etc/hosts to resolve that hostname to the static IP.

To solve the dynamic IP case, we can either:

  1. Configure Envoy to talk to the AWS LB directly, with no /etc/hosts or DNS trickery
  2. Configure DNS for the VM to make istiod an alias for the AWS LB

Option 2 sounds like much more work on the user's side and may not be compatible with many environments, but I'm not sure off the top of my head what it would take to do option 1.
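
As a rough sketch of the "have the VM update this /etc/hosts value" idea from above, the VM could periodically re-resolve the LB hostname and rewrite its hosts entry (run as root via cron or a systemd timer); the ELB hostname is the one from this thread, everything else is an assumption:

#!/bin/sh
# Re-resolve the LB hostname and refresh the istiod entry in /etc/hosts.
LB_HOSTNAME="a5f301d7b24ca4ae6ab7421f78196412-462867769.ap-south-1.elb.amazonaws.com"
ISTIOD_HOST="istiod.istio-system.svc"

IP=$(dig +short "$LB_HOSTNAME" | head -n1)
[ -n "$IP" ] || exit 1   # keep the existing entry if resolution fails

# Drop any stale istiod line, then append the current mapping.
sed -i "/[[:space:]]${ISTIOD_HOST}\$/d" /etc/hosts
echo "$IP $ISTIOD_HOST" >> /etc/hosts
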

@istio-policy-bot removed the lifecycle/stale label on Aug 27, 2021
@istio-policy-bot added the lifecycle/stale label on Nov 25, 2021
@istio-policy-bot

🚧 This issue or pull request has been closed due to not having had activity from an Istio team member since 2021-08-27. If you feel this issue or pull request deserves attention, please reopen the issue. Please see this wiki page for more information. Thank you for your contributions.

Created by the issue and PR lifecycle manager.

@istio-policy-bot added the lifecycle/automatically-closed label on Dec 10, 2021