
Sidecar Mode detection logic (IPv4, IPv6, Dual) #41271

Closed
abasitt opened this issue Oct 5, 2022 · 9 comments
Labels: area/networking, kind/need more info, lifecycle/staleproof

Comments

abasitt (Member) commented Oct 5, 2022

Bug Description

We have a single-stack IPv6 cluster running 5G workloads, aka CNFs (cloud-native network functions). Some CNFs have more than one network interface. Since Kubernetes doesn't natively support multiple networks, we use Multus to assign multiple network interfaces to pods. The primary interface is always IPv6 because the cluster is single-stack IPv6. A secondary interface can be either IPv4 or IPv6.
If all the interfaces, primary and secondary, are IPv6, everything works fine.
If any secondary interface is IPv4, the proxy mode is detected as dual and the proxy starts in IPv4 mode.

This may work on a branch with dual-stack support, but it is still a bug and the behavior shouldn't be like this: the proxy mode should be determined from the primary interface.

Digging deeper into the code, I can see that the function here scans all of the interfaces.
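
As a rough illustration of the behavior (a minimal sketch, not the actual Istio source), a scan of this shape treats a single stray IPv4 address on any interface as grounds for dual-stack detection:

package main

import (
    "fmt"
    "net"
)

// detectProxyMode mimics the shape of the detection described above: it
// walks every interface on the pod and classifies the stack as ipv4,
// ipv6, or dual from all global unicast addresses found.
func detectProxyMode() (string, error) {
    ifaces, err := net.Interfaces()
    if err != nil {
        return "", err
    }
    var hasV4, hasV6 bool
    for _, iface := range ifaces {
        addrs, err := iface.Addrs()
        if err != nil {
            continue
        }
        for _, addr := range addrs {
            ipNet, ok := addr.(*net.IPNet)
            if !ok || !ipNet.IP.IsGlobalUnicast() {
                continue // ignore loopback and link-local addresses
            }
            if ipNet.IP.To4() != nil {
                hasV4 = true // one secondary IPv4 address is enough to flip this
            } else {
                hasV6 = true
            }
        }
    }
    switch {
    case hasV4 && hasV6:
        return "dual", nil
    case hasV6:
        return "ipv6", nil
    default:
        return "ipv4", nil
    }
}

func main() {
    mode, err := detectProxyMode()
    if err != nil {
        panic(err)
    }
    fmt.Println("detected mode:", mode)
}

In the sleep pod shown below, a scan like this sees both the fd74:... address on the primary interface and 192.168.40.192 on net1, and so reports dual.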

Possible solutions:

  1. The excludeInterfaces annotation here should exclude interfaces from the IP detection logic as well (see the sketch after this list).
  2. Detect the primary interface somehow and use it to determine the proxy mode. In most cases the default interface is eth0, but it should be a dynamic value in case someone uses something other than eth0 as the default interface.
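
A hypothetical sketch of option 1 (the function name and exclude-list plumbing are illustrative, not Istio's actual API): the same scan as the sketch above, same package and imports, but skipping any interface named in an exclude list, e.g. one populated from the excludeInterfaces annotation.

// detectProxyModeExcluding repeats the scan from the sketch above,
// except that interfaces named in the exclude list are skipped before
// their addresses are inspected.
func detectProxyModeExcluding(excluded map[string]bool) (string, error) {
    ifaces, err := net.Interfaces()
    if err != nil {
        return "", err
    }
    var hasV4, hasV6 bool
    for _, iface := range ifaces {
        if excluded[iface.Name] {
            continue // e.g. "net1", the secondary macvlan interface
        }
        addrs, err := iface.Addrs()
        if err != nil {
            continue
        }
        for _, addr := range addrs {
            ipNet, ok := addr.(*net.IPNet)
            if !ok || !ipNet.IP.IsGlobalUnicast() {
                continue
            }
            if ipNet.IP.To4() != nil {
                hasV4 = true
            } else {
                hasV6 = true
            }
        }
    }
    switch {
    case hasV4 && hasV6:
        return "dual", nil
    case hasV6:
        return "ipv6", nil
    default:
        return "ipv4", nil
    }
}

With detectProxyModeExcluding(map[string]bool{"net1": true}), the sleep pod below would be classified as ipv6 rather than dual.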

Version

client version: 1.15.1
control plane version: 1.15.1
data plane version: 1.15.1 (5 proxies)
---
kubectl version --short
Client Version: v1.21.5
Server Version: v1.21.5

Additional Information

sleep deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sleep
  template:
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: "[{\"namespace\": \"kube-system\", \"name\": \"macvlan-test\"}]"
      labels:
        app: sleep
    spec:
      terminationGracePeriodSeconds: 0
      serviceAccountName: sleep
      containers:
      - name: sleep
        image: curlimages/curl
        command: ["/bin/sleep", "infinity"]
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /etc/sleep/tls
          name: secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: sleep-secret
          optional: true

Two pods are running normally; sleep, which has a secondary macvlan interface, is not ready:

k get pods -owide
NAME                            READY   STATUS    RESTARTS   AGE   IP                                  NODE         NOMINATED NODE   READINESS GATES
httpbin-54f4d97f8f-6pgr7        2/2     Running   0          38m   fd74:ca9b:3a09:868c:172:18:0:1ba8   c7robin2m1   <none>           <none>
liveness-http-8db4d644b-tfft2   2/2     Running   0          38m   fd74:ca9b:3a09:868c:172:18:0:1bae   c7robin2m1   <none>           <none>
sleep-8d9c6c885-v96gc           1/2     Running   0          38m   fd74:ca9b:3a09:868c:172:18:0:1b92   c7robin2m1   <none>           <none>

Describe output for the sleep pod (note the secondary net1 interface with an IPv4 address in the network-status annotation):

k describe pod sleep-8d9c6c885-v96gc
Name:         sleep-8d9c6c885-v96gc
Namespace:    default
Priority:     0
Node:         c7robin2m1/2001:470:ee86:4::106
Start Time:   Wed, 05 Oct 2022 04:41:54 -0400
Labels:       app=sleep
              pod-template-hash=8d9c6c885
              security.istio.io/tlsMode=istio
              service.istio.io/canonical-name=sleep
              service.istio.io/canonical-revision=latest
Annotations:  cni.projectcalico.org/containerID: 07efcb9612322f9cb3044a6b46d86a5a51dadb690288eba8a3b9fe94868af367
              cni.projectcalico.org/podIP: fd74:ca9b:3a09:868c:172:18:0:1b92/128
              cni.projectcalico.org/podIPs: fd74:ca9b:3a09:868c:172:18:0:1b92/128
              k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "",
                    "ips": [
                        "fd74:ca9b:3a09:868c:172:18:0:1b92"
                    ],
                    "default": true,
                    "dns": {}
                },{
                    "name": "kube-system/macvlan-test",
                    "interface": "net1",
                    "ips": [
                        "192.168.40.192"
                    ],
                    "mac": "ea:b6:2d:a0:6e:fa",
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks: [{"namespace": "kube-system", "name": "macvlan-test"}]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "",
                    "ips": [
                        "fd74:ca9b:3a09:868c:172:18:0:1b92"
                    ],
                    "default": true,
                    "dns": {}
                },{
                    "name": "kube-system/macvlan-test",
                    "interface": "net1",
                    "ips": [
                        "192.168.40.192"
                    ],
                    "mac": "ea:b6:2d:a0:6e:fa",
                    "dns": {}
                }]
              kubectl.kubernetes.io/default-container: sleep
              kubectl.kubernetes.io/default-logs-container: sleep
              prometheus.io/path: /stats/prometheus
              prometheus.io/port: 15020
              prometheus.io/scrape: true
              sidecar.istio.io/status:
                {"initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["workload-socket","credential-socket","workload-certs","istio-env...
Status:       Running
IP:           fd74:ca9b:3a09:868c:172:18:0:1b92
IPs:
  IP:           fd74:ca9b:3a09:868c:172:18:0:1b92
Controlled By:  ReplicaSet/sleep-8d9c6c885
Init Containers:
  istio-init:
    Container ID:  robin://e3acbefdcfdcf4bba584ed919602727895631494e7d7c6717d5b364304409f82
    Image:         docker.io/istio/proxyv2:1.15.1
    Image ID:      docker-pullable://istio/proxyv2@sha256:7698e960a43b280b99081a505f86f2c73616112a57f21a8a9b6ab91c5ce3a682
    Port:          <none>
    Host Port:     <none>
    Args:
      istio-iptables
      -p
      15001
      -z
      15006
      -u
      1337
      -m
      REDIRECT
      -i
      *
      -x
      
      -b
      *
      -d
      15090,15021,15020
      --log_output_level=default:info
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 05 Oct 2022 04:41:56 -0400
      Finished:     Wed, 05 Oct 2022 04:41:56 -0400
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:        10m
      memory:     40Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jmbbb (ro)
Containers:
  sleep:
    Container ID:  robin://c6188d16f56bbde4d8bca485037405784b9827a98eaf404c70658eaa21a3a5c2
    Image:         curlimages/curl
    Image ID:      docker-pullable://curlimages/curl@sha256:5a2a25d96aa941ea2fc47acc50122f7c3d007399a075df61a82d6d2c3a567a2b
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sleep
      infinity
    State:          Running
      Started:      Wed, 05 Oct 2022 04:41:57 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/sleep/tls from secret-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jmbbb (ro)
  istio-proxy:
    Container ID:  robin://77a4ec5bd71da55dcfa2c47d8c0096ef0ff15ea2a7c6f35d2d82d4dfdbeecf28
    Image:         docker.io/istio/proxyv2:1.15.1
    Image ID:      docker-pullable://istio/proxyv2@sha256:7698e960a43b280b99081a505f86f2c73616112a57f21a8a9b6ab91c5ce3a682
    Port:          15090/TCP
    Host Port:     0/TCP
    Args:
      proxy
      sidecar
      --domain
      $(POD_NAMESPACE).svc.cluster.local
      --proxyLogLevel=warning
      --proxyComponentLogLevel=misc:error
      --log_output_level=default:info
      --concurrency
      2
    State:          Running
      Started:      Wed, 05 Oct 2022 04:41:57 -0400
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:      10m
      memory:   40Mi
    Readiness:  http-get http://:15021/healthz/ready delay=1s timeout=3s period=2s #success=1 #failure=30
    Environment:
      JWT_POLICY:                    third-party-jwt
      PILOT_CERT_PROVIDER:           istiod
      CA_ADDR:                       istiod.istio-system.svc:15012
      POD_NAME:                      sleep-8d9c6c885-v96gc (v1:metadata.name)
      POD_NAMESPACE:                 default (v1:metadata.namespace)
      INSTANCE_IP:                    (v1:status.podIP)
      SERVICE_ACCOUNT:                (v1:spec.serviceAccountName)
      HOST_IP:                        (v1:status.hostIP)
      PROXY_CONFIG:                  {}
                                     
      ISTIO_META_POD_PORTS:          [
                                     ]
      ISTIO_META_APP_CONTAINERS:     sleep
      ISTIO_META_CLUSTER_ID:         Kubernetes
      ISTIO_META_INTERCEPTION_MODE:  REDIRECT
      ISTIO_META_WORKLOAD_NAME:      sleep
      ISTIO_META_OWNER:              kubernetes://apis/apps/v1/namespaces/default/deployments/sleep
      ISTIO_META_MESH_ID:            cluster.local
      TRUST_DOMAIN:                  cluster.local
    Mounts:
      /etc/istio/pod from istio-podinfo (rw)
      /etc/istio/proxy from istio-envoy (rw)
      /var/lib/istio/data from istio-data (rw)
      /var/run/secrets/credential-uds from credential-socket (rw)
      /var/run/secrets/istio from istiod-ca-cert (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jmbbb (ro)
      /var/run/secrets/tokens from istio-token (rw)
      /var/run/secrets/workload-spiffe-credentials from workload-certs (rw)
      /var/run/secrets/workload-spiffe-uds from workload-socket (rw)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  workload-socket:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  credential-socket:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  workload-certs:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  istio-envoy:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  istio-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  istio-podinfo:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.labels -> labels
      metadata.annotations -> annotations
  istio-token:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  43200
  istiod-ca-cert:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio-ca-root-cert
    Optional:  false
  secret-volume:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  sleep-secret
    Optional:    true
  kube-api-access-jmbbb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  32m                    default-scheduler  Successfully assigned default/sleep-8d9c6c885-v96gc to c7robin2m1
  Normal   Pulled     32m                    kubelet            Container image "docker.io/istio/proxyv2:1.15.1" already present on machine
  Normal   Created    32m                    kubelet            Created container istio-init
  Normal   Started    32m                    kubelet            Started container istio-init
  Normal   Pulled     32m                    kubelet            Container image "curlimages/curl" already present on machine
  Normal   Created    32m                    kubelet            Created container sleep
  Normal   Started    32m                    kubelet            Started container sleep
  Normal   Pulled     32m                    kubelet            Container image "docker.io/istio/proxyv2:1.15.1" already present on machine
  Normal   Created    32m                    kubelet            Created container istio-proxy
  Normal   Started    32m                    kubelet            Started container istio-proxy
  Warning  Unhealthy  2m12s (x901 over 32m)  kubelet            Readiness probe failed: Get "http://[fd74:ca9b:3a09:868c:172:18:0:1b92]:15021/healthz/ready": dial tcp [fd74:ca9b:3a09:868c:172:18:0:1b92]:15021: connect: connection refused

The proxy logs show the IPv4 address picked up from the secondary interface:

k logs sleep-8d9c6c885-v96gc -c istio-proxy
2022-10-05T08:41:57.420231Z     info    FLAG: --concurrency="2"
2022-10-05T08:41:57.420258Z     info    FLAG: --domain="default.svc.cluster.local"
2022-10-05T08:41:57.420263Z     info    FLAG: --help="false"
2022-10-05T08:41:57.420265Z     info    FLAG: --log_as_json="false"
2022-10-05T08:41:57.420267Z     info    FLAG: --log_caller=""
2022-10-05T08:41:57.420269Z     info    FLAG: --log_output_level="default:info"
2022-10-05T08:41:57.420270Z     info    FLAG: --log_rotate=""
2022-10-05T08:41:57.420272Z     info    FLAG: --log_rotate_max_age="30"
2022-10-05T08:41:57.420274Z     info    FLAG: --log_rotate_max_backups="1000"
2022-10-05T08:41:57.420276Z     info    FLAG: --log_rotate_max_size="104857600"
2022-10-05T08:41:57.420278Z     info    FLAG: --log_stacktrace_level="default:none"
2022-10-05T08:41:57.420285Z     info    FLAG: --log_target="[stdout]"
2022-10-05T08:41:57.420288Z     info    FLAG: --meshConfig="./etc/istio/config/mesh"
2022-10-05T08:41:57.420290Z     info    FLAG: --outlierLogPath=""
2022-10-05T08:41:57.420292Z     info    FLAG: --proxyComponentLogLevel="misc:error"
2022-10-05T08:41:57.420294Z     info    FLAG: --proxyLogLevel="warning"
2022-10-05T08:41:57.420304Z     info    FLAG: --serviceCluster="istio-proxy"
2022-10-05T08:41:57.420306Z     info    FLAG: --stsPort="0"
2022-10-05T08:41:57.420308Z     info    FLAG: --templateFile=""
2022-10-05T08:41:57.420318Z     info    FLAG: --tokenManagerPlugin="GoogleTokenExchange"
2022-10-05T08:41:57.420324Z     info    FLAG: --vklog="0"
2022-10-05T08:41:57.420327Z     info    Version 1.15.1-bf836f0be536b0adcef68f93c405994769e767cb-Clean
2022-10-05T08:41:57.421756Z     info    Maximum file descriptors (ulimit -n): 1048576
2022-10-05T08:41:57.421926Z     info    Proxy role      ips=[fd74:ca9b:3a09:868c:172:18:0:1b92 192.168.40.192] type=sidecar id=sleep-8d9c6c885-v96gc.default domain=default.svc.cluster.local
2022-10-05T08:41:57.421988Z     info    Apply proxy config from env {}

2022-10-05T08:41:57.425968Z     info    Effective config: binaryPath: /usr/local/bin/envoy
concurrency: 2
configPath: ./etc/istio/proxy
controlPlaneAuthPolicy: MUTUAL_TLS
discoveryAddress: istiod.istio-system.svc:15012
drainDuration: 45s
parentShutdownDuration: 60s
proxyAdminPort: 15000
serviceCluster: istio-proxy
statNameLength: 189
statusPort: 15020
terminationDrainDuration: 5s
tracing:
  zipkin:
    address: zipkin.istio-system:9411

2022-10-05T08:41:57.426008Z     info    JWT policy is third-party-jwt
2022-10-05T08:41:57.426014Z     info    using credential fetcher of JWT type in cluster.local trust domain
2022-10-05T08:41:57.526810Z     info    Workload SDS socket not found. Starting Istio SDS Server
2022-10-05T08:41:57.526835Z     info    CA Endpoint istiod.istio-system.svc:15012, provider Citadel
2022-10-05T08:41:57.526848Z     info    Using CA istiod.istio-system.svc:15012 cert with certs: var/run/secrets/istio/root-cert.pem
2022-10-05T08:41:57.526841Z     info    Opening status port 15020
2022-10-05T08:41:57.526955Z     info    citadelclient   Citadel client using custom root cert: var/run/secrets/istio/root-cert.pem
2022-10-05T08:41:57.540637Z     info    ads     All caches have been synced up in 123.069054ms, marking server ready
2022-10-05T08:41:57.541083Z     info    xdsproxy        Initializing with upstream address "istiod.istio-system.svc:15012" and cluster "Kubernetes"
2022-10-05T08:41:57.541273Z     info    sds     Starting SDS grpc server
2022-10-05T08:41:57.542240Z     info    starting Http service at 127.0.0.1:15004
2022-10-05T08:41:57.548928Z     info    Pilot SAN: [istiod.istio-system.svc]
2022-10-05T08:41:57.550388Z     info    Starting proxy agent
2022-10-05T08:41:57.550410Z     info    starting
2022-10-05T08:41:57.550433Z     info    Envoy command: [-c etc/istio/proxy/envoy-rev.json --drain-time-s 45 --drain-strategy immediate --parent-shutdown-time-s 60 --local-address-ip-version v4 --file-flush-interval-msec 1000 --disable-hot-restart --log-format %Y-%m-%dT%T.%fZ %l      envoy %n        %v -l warning --component-log-level misc:error --concurrency 2]
2022-10-05T08:41:57.627050Z     info    xdsproxy        connected to upstream XDS server: istiod.istio-system.svc:15012
2022-10-05T08:41:57.663769Z     info    ads     ADS: new connection for node:sleep-8d9c6c885-v96gc.default-1
2022-10-05T08:41:57.703964Z     info    ads     ADS: new connection for node:sleep-8d9c6c885-v96gc.default-2
2022-10-05T08:41:57.752351Z     info    cache   generated new workload certificate      latency=211.165125ms ttl=23h59m59.247657319s
2022-10-05T08:41:57.752377Z     info    cache   Root cert has changed, start rotating root cert
2022-10-05T08:41:57.752391Z     info    ads     XDS: Incremental Pushing:0 ConnectedEndpoints:2 Version:
2022-10-05T08:41:57.752444Z     info    cache   returned workload trust anchor from cache       ttl=23h59m59.247558659s
2022-10-05T08:41:57.752458Z     info    cache   returned workload trust anchor from cache       ttl=23h59m59.247542642s
2022-10-05T08:41:57.757240Z     info    ads     SDS: PUSH request for node:sleep-8d9c6c885-v96gc.default resources:1 size:1.8kB resource:ROOTCA
2022-10-05T08:41:57.757523Z     info    cache   returned workload trust anchor from cache       ttl=23h59m59.242481866s
2022-10-05T08:41:57.757585Z     info    cache   returned workload certificate from cache        ttl=23h59m59.242416797s
2022-10-05T08:41:57.757967Z     info    ads     SDS: PUSH request for node:sleep-8d9c6c885-v96gc.default resources:1 size:8.8kB resource:default
2022-10-05T09:12:50.120544Z     info    xdsproxy        connected to upstream XDS server: istiod.istio-system.svc:15012

The listeners are started on the IPv4 wildcard address (0.0.0.0), which is why the readiness probe against the pod's IPv6 address is refused:

istioctl pc l sleep-8d9c6c885-v96gc --port 15021
ADDRESS                           PORT  MATCH                                DESTINATION
0.0.0.0                           15021 ALL                                  Inline Route: /healthz/ready*
fd74:ca9b:3a09:868c:172:18:0:bf5a 15021 Trans: raw_buffer; App: http/1.1,h2c Route: istio-ingressgateway.istio-system.svc.cluster.local:15021
fd74:ca9b:3a09:868c:172:18:0:bf5a 15021 ALL                                  Cluster: outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local
zhlsunshine (Contributor) commented:

Hi @abasitt, basically the proxy-mode detection is correct per the current Istio code here; however, there would still be issues if the proxy mode is dual, because current Istio does not yet support dual stack.

abasitt (Member, Author) commented Oct 8, 2022

@zhlsunshine yeah, that part is totally fine. What I am talking about is the original function, where discovery happens across all the interfaces. It should be based only on the primary interface, not on all the interfaces the pod has. Here is the function; there is a similar function in the iptables root.go as well.

zhlsunshine (Contributor) commented:

@abasitt, yeah, I understand what you are talking about. In fact, in the current implementation, IPv4 has higher priority than IPv6 in a dual-stack environment in Istio. Moreover, how to identify the primary interface in Istio is unclear. Any idea?
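
That priority shows up in the Envoy command line logged above: --local-address-ip-version v4, even though the pod IP is IPv6. A one-line sketch of the preference being described (illustrative, not Istio's actual code):

// When both address families are detected, IPv4 wins; this matches the
// "--local-address-ip-version v4" flag seen in the proxy logs above.
func preferredIPVersion(hasV4, hasV6 bool) string {
    if hasV4 {
        return "v4"
    }
    return "v6"
}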

abasitt (Member, Author) commented Oct 10, 2022

Thank you @zhlsunshine. Since it works for many people, and the issue only arises in the unusual case where pods use multiple interfaces, we should at least have an option via annotations to exclude interfaces. There is an option today to exclude an interface from iptables, but that still doesn't change the IP-address detection logic.

zhlsunshine (Contributor) commented:

Hi @abasitt, I think the dual-stack solution will help handle your cases, so I will reference the dual-stack issue here for your issue.

@istio-policy-bot added the lifecycle/stale label Jan 8, 2023
@jacob-delgado added lifecycle/staleproof and removed lifecycle/stale labels Jan 12, 2023
howardjohn (Member) commented:

Does #41271 (comment) resolve this?

@howardjohn added the kind/need more info label Feb 20, 2023
abasitt (Member, Author) commented Mar 6, 2023

It doesn't, unless the future roadmap is to make dual stack the default mode. Even then, I am not sure whether there would be issues if the primary interface is single-stack IPv4, the secondary interface is single-stack IPv6, and the proxy operates in dual-stack mode on both interfaces.

ssuvasanth pushed a commit to ssuvasanth/istio that referenced this issue May 5, 2023
Signed-off-by: Vasanth Sundaravelu <vasanth.sundaravelu@rakuten.com>
ssuvasanth pushed a commit to ssuvasanth/istio that referenced this issue May 5, 2023
…stio#41271)

Signed-off-by: Vasanth Sundaravelu <vasanth.sundaravelu@rakuten.com>
istio-testing pushed a commit that referenced this issue May 8, 2023
* Add excludeInterfaces support in pilot-agent (#41271)

Signed-off-by: Vasanth Sundaravelu <vasanth.sundaravelu@rakuten.com>

* [PR feedback changes] Add excludeInterfaces support in pilot-agent (#41271)

Signed-off-by: Vasanth Sundaravelu <vasanth.sundaravelu@rakuten.com>

* Create 44777.yaml

[release notes] Add excludeInterfaces support in pilot-agent

Signed-off-by: Vasanth Sundaravelu <vasanth.sundaravelu@rakuten.com>

---------

Signed-off-by: Vasanth Sundaravelu <vasanth.sundaravelu@rakuten.com>
ssuvasanth (Contributor) commented:

Support for the excludeInterfaces annotation was added by #44777.
The reported issue can be worked around by using excludeInterfaces to exclude additional interfaces that do not need to be managed by the proxy. This issue can be closed.
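
For example, a sketch of the workaround (assuming the annotation now applies to pilot-agent detection per #44777, and that the secondary interface is net1, as in the describe output above): add traffic.sidecar.istio.io/excludeInterfaces alongside the Multus annotation in the pod template.

  template:
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: "[{\"namespace\": \"kube-system\", \"name\": \"macvlan-test\"}]"
        # Keep net1 out of the proxy's interface scan so only the
        # primary IPv6 interface determines the proxy mode.
        traffic.sidecar.istio.io/excludeInterfaces: "net1"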

abasitt (Member, Author) commented May 8, 2023

@ssuvasanth thank you so much. Tested it, and it works now with this PR.

@abasitt abasitt closed this as completed May 8, 2023