clustermesh: traffic continues to be routed to unresponsive cluster #20729

Closed · shadowspore opened this issue Aug 1, 2022 · 2 comments
Labels: kind/bug, needs/triage, sig/datapath, stale
Comments


shadowspore commented Aug 1, 2022

Is there an existing issue for this?

  • I have searched the existing issues

What happened?

Two clusters, created with the kind tool (repo), are joined in a mesh; the Cilium clustermesh connectivity tests passed. Both clusters run an HTTP echo server deployment, exposed through a Kubernetes service carrying the io.cilium/global-service: "true" annotation. Clients run inside each cluster and send requests to the service, similar to how the x-wings access the rebel-base in the examples. A sketch of the setup follows.
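
Roughly how each cluster was set up (a minimal sketch; the echo-server image, service name, and kind context names are illustrative assumptions, not values taken from this environment):

# repeat in both clusters; 'kubectl create deployment' labels pods app=echo-server
for ctx in kind-mesh-1 kind-mesh-2; do
  kubectl --context "$ctx" create deployment echo-server \
    --image=ealen/echo-server --port=80
  kubectl --context "$ctx" apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echo-server
  annotations:
    io.cilium/global-service: "true"   # merge backends from all meshed clusters
spec:
  selector:
    app: echo-server
  ports:
  - port: 80
EOF
done

# the clustermesh connectivity test that passed (cilium-cli):
cilium connectivity test --context kind-mesh-1 --multi-cluster kind-mesh-2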

Cluster1 cilium status --verbose output

root@mesh-1-control-plane:/home/cilium# cilium status --verbose
KVStore:                Ok   Disabled
Kubernetes:             Ok   1.24 (v1.24.0) [linux/amd64]
Kubernetes APIs:        ["cilium/v2::CiliumClusterwideEnvoyConfig", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumEnvoyConfig", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Secrets", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:   Strict   [eth0 172.18.0.3]
Host firewall:          Disabled
CNI Chaining:           none
Cilium:                 Ok   1.12.0 (v1.12.0-9447cd1)
NodeMonitor:            Listening for events on 8 CPUs with 64x4096 of shared memory
Cilium health daemon:   Ok   
IPAM:                   IPv4: 3/254 allocated from 10.1.0.0/24, 
Allocated addresses:
  10.1.0.103 (ingress)
  10.1.0.108 (health)
  10.1.0.237 (router)
ClusterMesh:   1/1 clusters ready, 9 global-services
   mesh-2: ready, 2 nodes, 8 identities, 9 services, 0 failures (last: never)
   └  etcd: 1/1 connected, lease-ID=7c02824532923d40, lock lease-ID=7c02824532923d42, has-quorum=true: https://mesh-2.mesh.cilium.io:31868 - 3.4.13 (Leader)
BandwidthManager:       Disabled
Host Routing:           Legacy
Masquerading:           IPTables [IPv4: Enabled, IPv6: Disabled]
Clock Source for BPF:   ktime
Controller Status:      26/26 healthy
  Name                                                 Last success    Last error      Count   Message
  bpf-map-sync-cilium_lxc                              1s ago          never           0       no error   
  cilium-health-ep                                     55s ago         never           0       no error   
  dns-garbage-collector-job                            12s ago         never           0       no error   
  endpoint-2077-regeneration-recovery                  never           never           0       no error   
  endpoint-733-regeneration-recovery                   never           never           0       no error   
  endpoint-gc                                          2m15s ago       never           0       no error   
  ipcache-bpf-garbage-collection                       2m14s ago       never           0       no error   
  ipcache-inject-labels                                92h22m13s ago   92h22m15s ago   0       no error   
  k8s-heartbeat                                        28s ago         never           0       no error   
  kvstore-etcd-lock-session-renew                      never           never           0       no error   
  kvstore-etcd-session-renew                           never           never           0       no error   
  kvstore-sync-store-cilium/state/nodes/v1/mesh-2      11s ago         never           0       no error   
  kvstore-sync-store-cilium/state/services/v1/mesh-2   11s ago         never           0       no error   
  link-cache                                           13s ago         never           0       no error   
  metricsmap-bpf-prom-sync                             2s ago          never           0       no error   
  remote-etcd-mesh-2                                   92h22m15s ago   never           0       no error   
  resolve-identity-733                                 2m1s ago        never           0       no error   
  restoring-ep-identity (2077)                         92h22m3s ago    never           0       no error   
  sync-endpoints-and-host-ips                          56s ago         never           0       no error   
  sync-lb-maps-with-k8s-services                       7m29s ago       never           0       no error   
  sync-node-with-ciliumnode (mesh-1-control-plane)     92h22m14s ago   92h22m15s ago   0       no error   
  sync-policymap-2077                                  26s ago         never           0       no error   
  sync-policymap-733                                   26s ago         never           0       no error   
  sync-to-k8s-ciliumendpoint (2077)                    2s ago          never           0       no error   
  sync-to-k8s-ciliumendpoint (733)                     11s ago         never           0       no error   
  template-dir-watcher                                 never           never           0       no error   
Proxy Status:            OK, ip 10.1.0.237, 0 redirects active on ports 10000-20000
Global Identity Range:   min 256, max 65535
Hubble:                  Ok   Current/Max Flows: 4095/4095 (100.00%), Flows/s: 3.79   Metrics: Ok
KubeProxyReplacement Details:
  Status:                 Strict
  Socket LB:              Enabled
  Socket LB Protocols:    TCP, UDP
  Devices:                eth0 172.18.0.3
  Mode:                   SNAT
  Backend Selection:      Random
  Session Affinity:       Enabled
  Graceful Termination:   Enabled
  NAT46/64 Support:       Disabled
  XDP Acceleration:       Disabled
  Services:
  - ClusterIP:      Enabled
  - NodePort:       Enabled (Range: 30000-32767) 
  - LoadBalancer:   Enabled 
  - externalIPs:    Enabled 
  - HostPort:       Enabled
BPF Maps:   dynamic sizing: on (ratio: 0.002500)
  Name                          Size
  Non-TCP connection tracking   73594
  TCP connection tracking       147189
  Endpoint policy               65535
  Events                        8
  IP cache                      512000
  IP masquerading agent         16384
  IPv4 fragmentation            8192
  IPv4 service                  65536
  IPv6 service                  65536
  IPv4 service backend          65536
  IPv6 service backend          65536
  IPv4 service reverse NAT      65536
  IPv6 service reverse NAT      65536
  Metrics                       1024
  NAT                           147189
  Neighbor table                147189
  Global policy                 16384
  Per endpoint policy           65536
  Session affinity              65536
  Signal                        8
  Sockmap                       65535
  Sock reverse NAT              73594
  Tunnel                        65536
Encryption:                                 Disabled
Cluster health:                             4/4 reachable   (2022-08-01T10:46:58Z)
  Name                                      IP              Node        Endpoints
  mesh-1/mesh-1-control-plane (localhost)   172.18.0.3      reachable   reachable
  mesh-1/mesh-1-worker                      172.18.0.2      reachable   reachable
  mesh-2/mesh-2-control-plane               172.18.0.5      reachable   reachable
  mesh-2/mesh-2-worker                      172.18.0.4      reachable   reachable

Cluster1 cilium-health status --probe output

root@mesh-1-control-plane:/home/cilium# cilium-health status --probe
Probe time:   2022-08-01T10:54:39Z
Nodes:
  mesh-1/mesh-1-control-plane (localhost):
    Host connectivity to 172.18.0.3:
      ICMP to stack:   OK, RTT=1.167103ms
      HTTP to agent:   OK, RTT=146.093µs
    Endpoint connectivity to 10.1.0.108:
      ICMP to stack:   OK, RTT=1.187763ms
      HTTP to agent:   OK, RTT=281.742µs
  mesh-1/mesh-1-worker:
    Host connectivity to 172.18.0.2:
      ICMP to stack:   OK, RTT=1.18302ms
      HTTP to agent:   OK, RTT=313.354µs
    Endpoint connectivity to 10.1.1.72:
      ICMP to stack:   OK, RTT=1.210518ms
      HTTP to agent:   OK, RTT=476.553µs
  mesh-2/mesh-2-control-plane:
    Host connectivity to 172.18.0.5:
      ICMP to stack:   OK, RTT=1.214629ms
      HTTP to agent:   OK, RTT=467.258µs
    Endpoint connectivity to 10.2.0.55:
      ICMP to stack:   OK, RTT=1.140983ms
      HTTP to agent:   OK, RTT=687.659µs
  mesh-2/mesh-2-worker:
    Host connectivity to 172.18.0.4:
      ICMP to stack:   OK, RTT=1.225698ms
      HTTP to agent:   OK, RTT=331.337µs
    Endpoint connectivity to 10.2.1.86:
      ICMP to stack:   OK, RTT=1.183777ms
      HTTP to agent:   OK, RTT=461.408µs

Cluster2 cilium status --verbose output

root@mesh-2-control-plane:/home/cilium# cilium status --verbose
KVStore:                Ok   Disabled
Kubernetes:             Ok   1.24 (v1.24.0) [linux/amd64]
Kubernetes APIs:        ["cilium/v2::CiliumClusterwideEnvoyConfig", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumEnvoyConfig", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Secrets", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:   Strict   [eth0 172.18.0.5]
Host firewall:          Disabled
CNI Chaining:           none
Cilium:                 Ok   1.12.0 (v1.12.0-9447cd1)
NodeMonitor:            Listening for events on 8 CPUs with 64x4096 of shared memory
Cilium health daemon:   Ok   
IPAM:                   IPv4: 3/254 allocated from 10.2.0.0/24, 
Allocated addresses:
  10.2.0.241 (ingress)
  10.2.0.249 (router)
  10.2.0.55 (health)
ClusterMesh:   1/1 clusters ready, 10 global-services
   mesh-1: ready, 2 nodes, 10 identities, 10 services, 0 failures (last: never)
   └  etcd: 1/1 connected, lease-ID=7c028245325d7b47, lock lease-ID=7c028245325d7b49, has-quorum=true: https://mesh-1.mesh.cilium.io:31321 - 3.4.13 (Leader)
BandwidthManager:       Disabled
Host Routing:           Legacy
Masquerading:           IPTables [IPv4: Enabled, IPv6: Disabled]
Clock Source for BPF:   ktime
Controller Status:      25/25 healthy
  Name                                                 Last success    Last error      Count   Message
  bpf-map-sync-cilium_lxc                              2s ago          never           0       no error   
  cilium-health-ep                                     55s ago         never           0       no error   
  dns-garbage-collector-job                            13s ago         never           0       no error   
  endpoint-2892-regeneration-recovery                  never           never           0       no error   
  endpoint-3806-regeneration-recovery                  never           never           0       no error   
  endpoint-gc                                          16s ago         never           0       no error   
  ipcache-bpf-garbage-collection                       15s ago         never           0       no error   
  ipcache-inject-labels                                92h30m10s ago   92h30m15s ago   0       no error   
  k8s-heartbeat                                        29s ago         never           0       no error   
  kvstore-etcd-lock-session-renew                      never           never           0       no error   
  kvstore-etcd-session-renew                           never           never           0       no error   
  kvstore-sync-store-cilium/state/nodes/v1/mesh-1      12s ago         never           0       no error   
  kvstore-sync-store-cilium/state/services/v1/mesh-1   12s ago         never           0       no error   
  link-cache                                           14s ago         never           0       no error   
  metricsmap-bpf-prom-sync                             2s ago          never           0       no error   
  remote-etcd-mesh-1                                   92h30m15s ago   never           0       no error   
  resolve-identity-3806                                2s ago          never           0       no error   
  restoring-ep-identity (2892)                         92h30m4s ago    never           0       no error   
  sync-endpoints-and-host-ips                          56s ago         never           0       no error   
  sync-node-with-ciliumnode (mesh-2-control-plane)     92h30m15s ago   92h30m16s ago   0       no error   
  sync-policymap-2892                                  29s ago         never           0       no error   
  sync-policymap-3806                                  29s ago         never           0       no error   
  sync-to-k8s-ciliumendpoint (2892)                    3s ago          never           0       no error   
  sync-to-k8s-ciliumendpoint (3806)                    11s ago         never           0       no error   
  template-dir-watcher                                 never           never           0       no error   
Proxy Status:            OK, ip 10.2.0.249, 0 redirects active on ports 10000-20000
Global Identity Range:   min 256, max 65535
Hubble:                  Ok   Current/Max Flows: 4095/4095 (100.00%), Flows/s: 3.78   Metrics: Ok
KubeProxyReplacement Details:
  Status:                 Strict
  Socket LB:              Enabled
  Socket LB Protocols:    TCP, UDP
  Devices:                eth0 172.18.0.5
  Mode:                   SNAT
  Backend Selection:      Random
  Session Affinity:       Enabled
  Graceful Termination:   Enabled
  NAT46/64 Support:       Disabled
  XDP Acceleration:       Disabled
  Services:
  - ClusterIP:      Enabled
  - NodePort:       Enabled (Range: 30000-32767) 
  - LoadBalancer:   Enabled 
  - externalIPs:    Enabled 
  - HostPort:       Enabled
BPF Maps:   dynamic sizing: on (ratio: 0.002500)
  Name                          Size
  Non-TCP connection tracking   73594
  TCP connection tracking       147189
  Endpoint policy               65535
  Events                        8
  IP cache                      512000
  IP masquerading agent         16384
  IPv4 fragmentation            8192
  IPv4 service                  65536
  IPv6 service                  65536
  IPv4 service backend          65536
  IPv6 service backend          65536
  IPv4 service reverse NAT      65536
  IPv6 service reverse NAT      65536
  Metrics                       1024
  NAT                           147189
  Neighbor table                147189
  Global policy                 16384
  Per endpoint policy           65536
  Session affinity              65536
  Signal                        8
  Sockmap                       65535
  Sock reverse NAT              73594
  Tunnel                        65536
Encryption:                                 Disabled
Cluster health:                             4/4 reachable   (2022-08-01T10:54:56Z)
  Name                                      IP              Node        Endpoints
  mesh-2/mesh-2-control-plane (localhost)   172.18.0.5      reachable   reachable
  mesh-1/mesh-1-control-plane               172.18.0.3      reachable   reachable
  mesh-1/mesh-1-worker                      172.18.0.2      reachable   reachable
  mesh-2/mesh-2-worker                      172.18.0.4      reachable   reachable

Cluster2 cilium-health status --probe output

root@mesh-2-control-plane:/home/cilium# cilium-health status --probe
Probe time:   2022-08-01T10:56:38Z
Nodes:
  mesh-2/mesh-2-control-plane (localhost):
    Host connectivity to 172.18.0.5:
      ICMP to stack:   OK, RTT=1.117653ms
      HTTP to agent:   OK, RTT=228.252µs
    Endpoint connectivity to 10.2.0.55:
      ICMP to stack:   OK, RTT=1.079569ms
      HTTP to agent:   OK, RTT=425.034µs
  mesh-1/mesh-1-control-plane:
    Host connectivity to 172.18.0.3:
      ICMP to stack:   OK, RTT=1.03381ms
      HTTP to agent:   OK, RTT=469.295µs
    Endpoint connectivity to 10.1.0.108:
      ICMP to stack:   OK, RTT=1.082403ms
      HTTP to agent:   OK, RTT=671.207µs
  mesh-1/mesh-1-worker:
    Host connectivity to 172.18.0.2:
      ICMP to stack:   OK, RTT=1.059099ms
      HTTP to agent:   OK, RTT=370.719µs
    Endpoint connectivity to 10.1.1.72:
      ICMP to stack:   OK, RTT=1.080848ms
      HTTP to agent:   OK, RTT=671.118µs
  mesh-2/mesh-2-worker:
    Host connectivity to 172.18.0.4:
      ICMP to stack:   OK, RTT=1.114142ms
      HTTP to agent:   OK, RTT=331.615µs
    Endpoint connectivity to 10.2.1.86:
      ICMP to stack:   OK, RTT=1.124719ms
      HTTP to agent:   OK, RTT=535.21µs

The problem: if one of the clusters becomes unresponsive (docker pause on the cluster1 nodes, in this case), Cilium continues routing traffic to that cluster, and clients start getting HTTP timeout errors. A reproduction sketch is below.
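
To reproduce (a sketch; the client deployment and service names are the illustrative ones assumed above):

# pause every node of cluster1
docker pause mesh-1-control-plane mesh-1-worker

# from a client pod in cluster2, requests to the global service now
# intermittently land on dead cluster1 backends and time out
kubectl --context kind-mesh-2 exec deploy/client -- \
  sh -c 'while true; do curl -s -m 3 http://echo-server || echo TIMEOUT; sleep 1; done'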
The health probe stops working as well:

# cluster1 is down, trying to probe from cluster2
root@mesh-2-worker:/home/cilium# cilium-health status --probe
Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": context deadline exceeded
Cluster2 cilium status --verbose output after cluster1 outage

# cluster2
root@mesh-2-control-plane:/home/cilium# cilium status --verbose
KVStore:                Ok   Disabled
Kubernetes:             Ok   1.24 (v1.24.0) [linux/amd64]
Kubernetes APIs:        ["cilium/v2::CiliumClusterwideEnvoyConfig", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumEnvoyConfig", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Secrets", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:   Strict   [eth0 172.18.0.5]
Host firewall:          Disabled
CNI Chaining:           none
Cilium:                 Ok   1.12.0 (v1.12.0-9447cd1)
NodeMonitor:            Listening for events on 8 CPUs with 64x4096 of shared memory
Cilium health daemon:   Ok   
IPAM:                   IPv4: 3/254 allocated from 10.2.0.0/24, 
Allocated addresses:
  10.2.0.241 (ingress)
  10.2.0.249 (router)
  10.2.0.55 (health)
ClusterMesh:   0/1 clusters ready, 10 global-services
   mesh-1: not-ready, 0 nodes, 0 identities, 0 services, 1 failures (last: 2m52s ago)
   └  Waiting for initial connection to be established
BandwidthManager:       Disabled
Host Routing:           Legacy
Masquerading:           IPTables [IPv4: Enabled, IPv6: Disabled]
Clock Source for BPF:   ktime
Controller Status:      23/23 healthy
  Name                                               Last success    Last error      Count   Message
  bpf-map-sync-cilium_lxc                            5s ago          never           0       no error   
  cilium-health-ep                                   53s ago         never           0       no error   
  dns-garbage-collector-job                          11s ago         never           0       no error   
  endpoint-2892-regeneration-recovery                never           never           0       no error   
  endpoint-3806-regeneration-recovery                never           never           0       no error   
  endpoint-gc                                        3m14s ago       never           0       no error   
  ipcache-bpf-garbage-collection                     3m12s ago       never           0       no error   
  ipcache-inject-labels                              92h38m8s ago    92h38m13s ago   0       no error   
  k8s-heartbeat                                      27s ago         never           0       no error   
  kvstore-etcd-lock-session-renew                    never           never           0       no error   
  kvstore-etcd-session-renew                         never           never           0       no error   
  link-cache                                         11s ago         never           0       no error   
  metricsmap-bpf-prom-sync                           5s ago          never           0       no error   
  remote-etcd-mesh-1                                 92h38m13s ago   never           0       no error   
  resolve-identity-3806                              3m0s ago        never           0       no error   
  restoring-ep-identity (2892)                       92h38m2s ago    never           0       no error   
  sync-endpoints-and-host-ips                        54s ago         never           0       no error   
  sync-node-with-ciliumnode (mesh-2-control-plane)   92h38m12s ago   92h38m13s ago   0       no error   
  sync-policymap-2892                                27s ago         never           0       no error   
  sync-policymap-3806                                27s ago         never           0       no error   
  sync-to-k8s-ciliumendpoint (2892)                  10s ago         never           0       no error   
  sync-to-k8s-ciliumendpoint (3806)                  9s ago          never           0       no error   
  template-dir-watcher                               never           never           0       no error   
Proxy Status:            OK, ip 10.2.0.249, 0 redirects active on ports 10000-20000
Global Identity Range:   min 256, max 65535
Hubble:                  Ok   Current/Max Flows: 4095/4095 (100.00%), Flows/s: 3.78   Metrics: Ok
KubeProxyReplacement Details:
  Status:                 Strict
  Socket LB:              Enabled
  Socket LB Protocols:    TCP, UDP
  Devices:                eth0 172.18.0.5
  Mode:                   SNAT
  Backend Selection:      Random
  Session Affinity:       Enabled
  Graceful Termination:   Enabled
  NAT46/64 Support:       Disabled
  XDP Acceleration:       Disabled
  Services:
  - ClusterIP:      Enabled
  - NodePort:       Enabled (Range: 30000-32767) 
  - LoadBalancer:   Enabled 
  - externalIPs:    Enabled 
  - HostPort:       Enabled
BPF Maps:   dynamic sizing: on (ratio: 0.002500)
  Name                          Size
  Non-TCP connection tracking   73594
  TCP connection tracking       147189
  Endpoint policy               65535
  Events                        8
  IP cache                      512000
  IP masquerading agent         16384
  IPv4 fragmentation            8192
  IPv4 service                  65536
  IPv6 service                  65536
  IPv4 service backend          65536
  IPv6 service backend          65536
  IPv4 service reverse NAT      65536
  IPv6 service reverse NAT      65536
  Metrics                       1024
  NAT                           147189
  Neighbor table                147189
  Global policy                 16384
  Per endpoint policy           65536
  Session affinity              65536
  Signal                        8
  Sockmap                       65535
  Sock reverse NAT              73594
  Tunnel                        65536
Encryption:                                 Disabled
Cluster health:                             4/4 reachable   (2022-08-01T11:00:56Z)
  Name                                      IP              Node        Endpoints
  mesh-2/mesh-2-control-plane (localhost)   172.18.0.5      reachable   reachable
  mesh-1/mesh-1-control-plane               172.18.0.3      reachable   reachable
  mesh-1/mesh-1-worker                      172.18.0.2      reachable   reachable
  mesh-2/mesh-2-worker                      172.18.0.4      reachable   reachable
 
# eventually 'Cluster health' discovered that cluster1 is down,
# but that did not fix the routing problem
Cluster health:                             2/4 reachable   (2022-08-01T11:06:56Z)
  Name                                      IP              Node          Endpoints
  mesh-2/mesh-2-control-plane (localhost)   172.18.0.5      reachable     reachable
  mesh-1/mesh-1-control-plane               172.18.0.3      unreachable   reachable
  mesh-1/mesh-1-worker                      172.18.0.2      unreachable   reachable
  mesh-2/mesh-2-worker                      172.18.0.4      reachable     reachable
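
One way to confirm the stale state (a sketch; the commands assume the standard cilium DaemonSet in kube-system): the agent's service table on a cluster2 node would be expected to still list the cluster1 pod IPs as backends of the global service.

# list Cilium's service table from a cluster2 agent
kubectl --context kind-mesh-2 -n kube-system exec ds/cilium -- cilium service list

# same information at the BPF load-balancer map level
kubectl --context kind-mesh-2 -n kube-system exec ds/cilium -- cilium bpf lb list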

Cilium Version

cilium-cli: v0.12.0 compiled with go1.18.4 on linux/amd64
cilium image (default): v1.12.0
cilium image (stable): v1.12.0
cilium image (running): v1.12.0

Kernel Version

Linux 5.15.0-41-generic

Kubernetes Version

Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3", GitCommit:"aef86a93758dc3cb2c658dd9657ab4ad4afc21cb", GitTreeState:"clean", BuildDate:"2022-07-13T14:30:46Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-19T15:39:43Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}

Sysdump

cilium-sysdump-20220801-111040.zip

Relevant log output

No response

Anything else?

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct
shadowspore added the kind/bug and needs/triage labels on Aug 1, 2022
aanm added the sig/datapath label on Aug 2, 2022

github-actions bot commented Oct 3, 2022

This issue has been automatically marked as stale because it has not
had recent activity. It will be closed if no further activity occurs.

github-actions bot added the stale label on Oct 3, 2022
github-actions bot commented

This issue has not seen any activity since it was marked stale.
Closing.
