
Hubble Relay: Failed to create peer client for peers synchronization #20130

Closed
nmnellis opened this issue Jun 8, 2022 · 18 comments
Labels
kind/community-report This was reported by a user in the Cilium community, eg via Slack. kind/question Frequently asked questions & answers. This issue will be linked from the documentation's FAQ. sig/agent Cilium agent related. sig/hubble Impacts hubble server or relay

Comments

@nmnellis

nmnellis commented Jun 8, 2022

I'm running k3d on Ubuntu. Cilium installs and works as expected, but hubble-relay does not seem to connect.

  • How I installed it
  helm install cilium cilium/cilium --version 1.11.5 \
      --namespace kube-system \
      --set hubble.relay.enabled=true \
      --set hubble.metrics.enabled="{dns,drop,tcp,flow,icmp,http}" \
      --set hubble.ui.enabled=true \
      --set hubble.relay.dialTimeout=5s \
      --set hubble.relay.retryTimeout=5s \
      --set monitor.enabled=true \
      --set cluster.name="cluster1" \
      --kube-context $name \
      --wait
  • log errors
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:4254"
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:4254"
  • I am able to hit this endpoint from other pods though
▶ k run --context cluster1 -it t14 --rm=true --image=nginx --restart=Never -- bash
+ kubectl run --context cluster1 -it t14 --rm=true --image=nginx --restart=Never -- bash
If you don't see a command prompt, try pressing enter.
root@t14:/# curl hubble-peer.kube-system.svc.cluster.local:4254
<!doctype html><html><head><meta charset="utf-8"/><title>Hubble UI</title><meta http-equiv="X-UA-Compatible" content="IE=edge"/><meta name="viewport" content="width=device-width,user-scalable=0,initial-scale=1,minimum-scale=1,maximum-scale=1"/><link rel="icon" type="image/png" sizes="32x32" href="favicon-32x32.png"/><link rel="icon" type="image/png" sizes="16x16" href="favicon-16x16.png"/><link rel="shortcut icon" href="favicon.ico"/><script defer="defer" src="/bundle.main.3b2369adf2e0c02229aa.js"></script><link href="/bundle.main.9cd671817b2cf4a1a838.css" rel="stylesheet"></head><body><div id="app"></div></body></html>
  • K3d setup
apiVersion: k3d.io/v1alpha4
kind: Simple
metadata:
  name: cluster1 # name that you want to give to your cluster (will still be prefixed with `k3d-`)
servers: 1 # same as `--servers 1`
agents: 1 # same as `--agents 1`
image: rancher/k3s:v1.21.3-k3s1
network: k3d-cluster-network
ports:
  - port: 8080:80 # same as `--port '8080:80@loadbalancer'`
    nodeFilters:
      - loadbalancer
  - port: 8443:443 # same as `--port '8443:443@loadbalancer'`
    nodeFilters:
      - loadbalancer
  # hubble port
  - port: 4244:4244 # same as `--port '4244:4244@loadbalancer'`
    nodeFilters:
      - loadbalancer
registries: # define how registries should be created or used
  use:
    - k3d-registry.localhost:12345
options:
  k3d: # k3d runtime settings
    wait: true # wait for cluster to be usable before returning; same as `--wait` (default: true)
    timeout: "60s" # wait timeout before aborting; same as `--timeout 60s`
    disableLoadbalancer: false # same as `--no-lb`
  k3s: # options passed on to K3s itself
    extraArgs: # additional arguments passed to the `k3s server` command; same as `--k3s-server-arg`
      - arg: --disable=traefik
        nodeFilters:
          - server:*
      # https://sandstorm.de/de/blog/post/running-cilium-in-k3s-and-k3d-lightweight-kubernetes-on-mac-os-for-development.html
      - arg: --disable-network-policy
        nodeFilters:
          - server:*
      - arg: --flannel-backend=none
        nodeFilters:
          - server:*
      - arg: --node-taint=node.cilium.io/agent-not-ready=true:NoSchedule
        nodeFilters:
          - server:*
    nodeLabels:
      - label: topology.kubernetes.io/region=us-east-1 # same as `--k3s-node-label 'foo=bar@agent:1'` -> this results in a Kubernetes node label
        nodeFilters:
          - agent:*
      - label: topology.kubernetes.io/zone=us-east-1a # same as `--k3s-node-label 'foo=bar@agent:1'` -> this results in a Kubernetes node label
        nodeFilters:
          - agent:*
  kubeconfig:
    updateDefaultKubeconfig: true # add new cluster to your default Kubeconfig; same as `--kubeconfig-update-default` (default: true)
    switchCurrentContext: false # also set current-context to the new cluster's context; same as `--kubeconfig-switch-context` (default: true)
  k3d cluster create --wait --config cluster1.yaml

  # https://github.com/cilium/cilium/issues/18675
  # Mount the BPF file system in the k3s docker containers
  docker exec -it k3d-$name-agent-0 mount bpffs /sys/fs/bpf -t bpf
  docker exec -it k3d-$name-agent-0 mount --make-shared /sys/fs/bpf

  docker exec -it k3d-$name-agent-0 mkdir -p /run/cilium/cgroupv2
  docker exec -it k3d-$name-agent-0 mount none /run/cilium/cgroupv2 -t cgroup2
  docker exec -it k3d-$name-agent-0 mount --make-shared /run/cilium/cgroupv2/

  # this needs to be done for every container (every agent and every server)
  docker exec -it k3d-$name-server-0 mount bpffs /sys/fs/bpf -t bpf
  docker exec -it k3d-$name-server-0 mount --make-shared /sys/fs/bpf

  docker exec -it k3d-$name-server-0 mkdir -p /run/cilium/cgroupv2
  docker exec -it k3d-$name-server-0 mount none /run/cilium/cgroupv2 -t cgroup2
  docker exec -it k3d-$name-server-0 mount --make-shared /run/cilium/cgroupv2/
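  Since these mounts need to run on every node container, a small loop avoids repeating them per node (a sketch, assuming the default k3d-$name-{server,agent}-N container naming used above):

  # apply the BPF and cgroup2 mounts to every server/agent container of this cluster
  for node in $(docker ps --format '{{.Names}}' | grep -E "^k3d-$name-(server|agent)-"); do
    docker exec "$node" mount bpffs /sys/fs/bpf -t bpf
    docker exec "$node" mount --make-shared /sys/fs/bpf
    docker exec "$node" mkdir -p /run/cilium/cgroupv2
    docker exec "$node" mount none /run/cilium/cgroupv2 -t cgroup2
    docker exec "$node" mount --make-shared /run/cilium/cgroupv2/
  done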
@gandro
Member

gandro commented Jun 9, 2022

Thanks for the report.

# curl hubble-peer.kube-system.svc.cluster.local:4254
<!doctype html><html><head><meta charset="utf-8"/><title>Hubble UI</title><meta http-equiv="X-UA-Compatible" content="IE=edge"/><meta name="viewport" content="width=device-width,user-scalable=0,initial-scale=1,minimum-scale=1,maximum-scale=1"/><link rel="icon" type="image/png" sizes="32x32" href="favicon-32x32.png"/><link rel="icon" type="image/png" sizes="16x16" href="favicon-16x16.png"/><link rel="shortcut icon" href="favicon.ico"/><script defer="defer" src="/bundle.main.3b2369adf2e0c02229aa.js"></script><link href="/bundle.main.9cd671817b2cf4a1a838.css" rel="stylesheet"></head><body><div id="app"></div></body></html>

This looks off to me. The peer service should provide a gRPC interface implemented by the cilium-agent pods, not the Hubble UI frontend. Are you able to provide a sysdump?

https://docs.cilium.io/en/v1.11/operations/troubleshooting/#automatic-log-state-collection
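
A quick way to check where the hubble-peer Service actually points (a sketch; the service and namespace names are taken from the Relay log above):

  kubectl -n kube-system get service hubble-peer -o wide
  kubectl -n kube-system get endpoints hubble-peer
  # the endpoints should be the cilium-agent (node) IPs on port 4244,
  # not the hubble-ui pod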

@rolinh rolinh changed the title hubble-relay Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target= "hubble-peer.kube-system.svc.cluster.local:4254" Hubble Relay: Failed to create peer client for peers synchronization Jun 9, 2022
@rolinh rolinh transferred this issue from cilium/hubble Jun 9, 2022
@rolinh rolinh added kind/community-report This was reported by a user in the Cilium community, eg via Slack. sig/hubble Impacts hubble server or relay and removed 📊 kind/community-report labels Jun 9, 2022
@nmnellis
Author

nmnellis commented Jun 9, 2022

@aanm aanm added the sig/agent Cilium agent related. label Jun 10, 2022
@echupriyanov

I've got the same issue installing Cilium with Hubble on k0s.
Here is the config I used to deploy the cluster with Cilium:

apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
  - ssh:
      address: 172.16.68.201
      user: vagrant
      port: 22
      keyPath: /Users/eric/.vagrant.d/insecure_private_key
    role: controller
    installFlags:
      - --disable-components kube-proxy
  - ssh:
      address: 172.16.68.199
      user: vagrant
      port: 22
      keyPath: /Users/eric/.vagrant.d/insecure_private_key
    role: worker
  - ssh:
      address: 172.16.68.198
      user: vagrant
      port: 22
      keyPath: /Users/eric/.vagrant.d/insecure_private_key
    role: worker
  - ssh:
      address: 172.16.68.200
      user: vagrant
      port: 22
      keyPath: /Users/eric/.vagrant.d/insecure_private_key
    role: worker
  k0s:
    version: 1.23.8+k0s.0
    config:
      apiVersion: k0s.k0sproject.io/v1beta1
      kind: Cluster
      metadata:
        name: my-k0s-cluster
      spec:
        network:
          provider: custom
        extensions:
          helm:
            repositories:
              - name: cilium
                url: https://helm.cilium.io/
            charts:
              - name: cilium
                chartname: cilium/cilium
                version: "1.11.6"
                namespace: kube-system
                values: |
                  kubeProxyReplacement: strict
                  k8sServiceHost: "172.16.68.201"
                  k8sServicePort: "6443"
                  hubble:
                    enabled: true
                    relay:
                      enabled: true
                    ui: 
                      enabled: true
                  monitor:
                    enabled: true

The cluster started successfully and networking works fine, but hubble-relay has issues getting peer information:

eric@makaka ~/W/D/p/r/k0s> kubectl logs -n kube-system hubble-relay-cd85c8f55-f2mgb 
level=info msg="Starting server..." options="{peerTarget:hubble-peer.kube-system.svc.cluster.local:443 dialTimeout:5000000000 retryTimeout:30000000000 listenAddress::4245 log:0x40002a2150 serverTLSConfig:<nil> insecureServer:true clientTLSConfig:0x400000c0a8 clusterName:default insecureClient:false observerOptions:[0xc44c80 0xc44da0]}" subsys=hubble-relay
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:443"
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:443"
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:443"

Here is my sysdump

cilium-sysdump-20220627-122338.zip

@gandro gandro added the needs/triage This issue requires triaging to establish severity and next steps. label Jun 27, 2022
@shlande

shlande commented Aug 12, 2022

I also encountered this problem, but after reinstalling Cilium and Hubble using Helm only, Hubble seems to work again. I'm guessing that mixing cilium-cli and Helm broke Hubble. cilium/hubble#599 (comment)

Here is the command I used to install Cilium using cilium-cli (which caused the problem)

cilium install --helm-values cilium.yaml
# Hubble didn't show up so I manually enabled it
cilium hubble enable --ui

using helm (works)

helm install cilium cilium/cilium --namespace=kube-system -f .kube/custom/cilium.yaml

cilium.yaml.zip
cilium-sysdump-20220812-152334.zip

@superbrothers
Contributor

superbrothers commented Aug 28, 2022

I faced the same problem.

level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:443"

In my case, I use cert-manager to issue certificates for Hubble, and the CA certificate had expired.

$ date
Sun 28 Aug 2022 10:26:07 PM JST
$ sudo cat /proc/3005003/root/var/lib/hubble-relay/tls/hubble-server-ca.crt | openssl x509 -text
        Validity
            Not Before: Mar 22 08:41:23 2022 GMT
            Not After : Jun 20 08:41:23 2022 GMT

I solved this problem by reissuing all certificates related to hubble, including the CA certificate, after setting a longer expiration date for the CA certificate as follows:

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: cilium-selfsigned
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cilium-selfsigned-ca
spec:
  isCA: true
  commonName: cilium-selfsigned-ca
  duration: 438000h # 50y
  secretName: cilium-selfsigned-ca
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: cilium-selfsigned
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: cilium-selfsigned-ca
spec:
  ca:
    secretName: cilium-selfsigned-ca
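
If the certificates come from cert-manager like this, the chart can also be pointed at that issuer directly (a sketch, assuming the hubble.tls.auto.certManagerIssuerRef Helm values and the cilium-selfsigned-ca Issuer defined above):

  helm upgrade cilium cilium/cilium -n kube-system --reuse-values \
    --set hubble.tls.auto.enabled=true \
    --set hubble.tls.auto.method=certmanager \
    --set hubble.tls.auto.certManagerIssuerRef.group=cert-manager.io \
    --set hubble.tls.auto.certManagerIssuerRef.kind=Issuer \
    --set hubble.tls.auto.certManagerIssuerRef.name=cilium-selfsigned-ca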

@msarti

msarti commented Oct 4, 2022

I faced the same problem.

In my case the problem was that the cluster domain was not the default "cluster.local", so Relay did not find the peer service. Solved by setting the Helm values:

hubble:
  peerService:
    clusterDomain: {{clusterDomain}}
  relay:
    enabled: true
  ui:
    enabled: true
    frontend:
      server:
        ipv6:
          enabled: false
  tls:
    auto:
      enabled: true
      method: helm
      certValidityDuration: 1095
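
The same fix as one-off Helm flags (a sketch; the domain below is a placeholder for the cluster's actual domain):

  helm upgrade cilium cilium/cilium -n kube-system --reuse-values \
    --set hubble.peerService.clusterDomain=my-cluster.example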

@AlexisHW

AlexisHW commented Oct 7, 2022

For me it was a hubble-relay certificate issue. I was using cert-manager with a self-signed issuer, and it didn't allow the connection to hubble-peer.kube-system.svc.cluster.local.
Once I switched to the native Helm-based certificates, the connection was established.

@rolinh
Member

rolinh commented Oct 7, 2022

@AlexisHW Note that how to configure cert-manager to generate certificates for Hubble is documented here, so it is an option that should work.

@YutaroHayakawa
Member

YutaroHayakawa commented Jan 19, 2023

I guess this problem #20130 (comment) reported by @shlande was caused by this cilium/cilium-cli#1347.

@ensonic

ensonic commented Feb 14, 2023

We install Cilium via Helm with hubble.enabled=false. Now, if we'd like to debug something, we run:

cilium hubble enable --ui
cilium hubble ui

The Hubble web UI opens and lists namespaces, but clicking one gives the above error. Cilium is 1.12.7, installed today, and the cilium + hubble binaries are the latest from today.

cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:         OK
 \__/¯¯\__/    Operator:       OK
 /¯¯\__/¯¯\    Hubble:         OK
 \__/¯¯\__/    ClusterMesh:    disabled
    \__/

DaemonSet         cilium             Desired: 1, Ready: 1/1, Available: 1/1
Deployment        hubble-ui          Desired: 1, Ready: 1/1, Available: 1/1
Deployment        hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
Deployment        cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
Containers:       cilium             Running: 1
                  hubble-ui          Running: 1
                  hubble-relay       Running: 1
                  cilium-operator    Running: 1
Cluster Pods:     37/37 managed by Cilium
Image versions    cilium             quay.io/cilium/cilium:v1.12.7@sha256:8cb6b4742cc27b39e4f789d282a1fc2041decb6f5698bfe09112085a07b1fd61: 1
                  hubble-ui          quay.io/cilium/hubble-ui:v0.9.2@sha256:d3596efc94a41c6b772b9afe6fe47c17417658956e04c3e2a28d293f2670663e: 1
                  hubble-ui          quay.io/cilium/hubble-ui-backend:v0.9.2@sha256:a3ac4d5b87889c9f7cc6323e86d3126b0d382933bd64f44382a92778b0cde5d7: 1
                  hubble-relay       quay.io/cilium/hubble-relay:v1.12.7@sha256:edf491e362b52e2b5461b2bff346a79c76365c9595b675146edd01f9c28ae942: 1
                  cilium-operator    quay.io/cilium/operator-generic:v1.12.7@sha256:80f24810bf8484974c757382eb2c7408c9c024e5cb0719f4a56fba3f47695c72: 1

Hubble relay complains about:

level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:443"

We've been using Cilium for ~2.5 years and have never managed to get Hubble running. To be clear, using Helm with "--set hubble.enabled=true --set hubble.relay.enabled=true --set hubble.ui.enabled=true" it works. But if you start with "--set hubble.enabled=false" there is no known way to enable it later on. Even if you start with the default "hubble.enabled=true" and then run cilium hubble enable --ui, you get Error: Unable to enable Hubble: services "hubble-peer" already exists, and you need to first disable and then enable again to get it working (see cilium/cilium-cli#1397 on the asymmetry of cilium hubble enable/disable).

It is totally fine if developers need to run some extra commands to make it work, but not having this feature work at all is really sad.

To maybe resolve this, could the documentation explain why one would disable Hubble, and what the overhead is if one keeps it enabled?

@rolinh
Member

rolinh commented Feb 20, 2023

@ensonic It is currently not possible to mix Cilium CLI and Helm install methods. We understand this is not ideal and creates issues for users. This is something we will address (see cilium/cilium-cli#1396).

To maybe resolve this, could the documentation explain why one would disable hubble and if one keeps it enabled what the overhead is?

I don't think we have precise numbers about the overhead of running Hubble. However, we will be working on optimizing Hubble to reduce the overhead even more and possibly create new modes where e.g. Hubble doesn't do anything unless a client runs a query.
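
For an install that is already managed by Helm, Hubble can also be switched on later without the CLI (a sketch; --reuse-values keeps the rest of the existing configuration):

  helm upgrade cilium cilium/cilium -n kube-system --reuse-values \
    --set hubble.enabled=true \
    --set hubble.relay.enabled=true \
    --set hubble.ui.enabled=true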

@rolinh
Member

rolinh commented Feb 20, 2023

If you're here because you've seen the following Hubble Relay error in the logs

Failed to create peer client for peers synchronization

Please, check the list below for common root causes for this issue:

  • You have mixed helm install and cilium install (or cilium hubble enable). The Cilium CLI and Helm-based installation methods cannot be mixed as of today (until Seamless Cilium CLI + Helm cilium-cli#1396 is closed). Either always use Helm or always use the Cilium CLI.
  • The TLS certificates or the CA certificate have expired. If that's the case, renew the certificates; Hubble and Hubble Relay will pick up the new ones without requiring a restart. Please also check Hubble's documentation about TLS certificates.
  • The CA certificate made available to Hubble Relay is not the one that corresponds to the CA which issued certificates for Hubble.
  • You have a cluster domain different than the default cluster.local. If that's the case, make sure to update hubble.peerService.clusterDomain accordingly.

If you have checked all of the above and still hit this problem, please open a new issue so we can investigate the problem.
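
To check the certificate-related items above, one option is to inspect the CA that Relay actually mounts (a sketch, assuming the default Helm-managed secret name hubble-relay-client-certs):

  kubectl -n kube-system get secret hubble-relay-client-certs \
    -o jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -noout -dates -issuer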

@rolinh rolinh closed this as completed Feb 20, 2023
@gandro gandro added kind/question Frequently asked questions & answers. This issue will be linked from the documentation's FAQ. and removed needs/triage This issue requires triaging to establish severity and next steps. labels Feb 27, 2023
@WoodyWoodsta

I'd like to add a point to the above list of things to check while troubleshooting which I think may not be immediately obvious when setting up Cilium for the first time.

I have a restrictive firewall on my node machines. The hubble relay deployment attempts to connect to the service hubble-peer, which points to the Cilium node pods. Those pods are running on the host network, so the DNS for the hubble-peer service resolves to node machine IPs. This means you have to make sure that at least the hubble peer port (4244 default) is allowed in the firewall rules.

Hopefully this helps!
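
A quick way to test this from inside the cluster (a sketch; the node IP below is hypothetical, substitute one of the hubble-peer endpoint addresses):

  # the endpoints are node IPs because the agents listen on the host network
  kubectl -n kube-system get endpoints hubble-peer
  # test whether TCP 4244 on one of those nodes is reachable from a pod
  kubectl run peer-test --rm -it --image=busybox --restart=Never -- \
    nc -zv -w 3 10.0.0.11 4244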

@tthoudam

> I'd like to add a point to the above list of things to check while troubleshooting which I think may not be immediately obvious when setting up Cilium for the first time.
>
> I have a restrictive firewall on my node machines. The hubble relay deployment attempts to connect to the service hubble-peer, which points to the Cilium node pods. Those pods are running on the host network, so the DNS for the hubble-peer service resolves to node machine IPs. This means you have to make sure that at least the hubble peer port (4244 default) is allowed in the firewall rules.
>
> Hopefully this helps!

This is indeed the reason. We use hardened AMIs for our clusters, and since the Cilium pods run on the host network, we had to open peer port 4244 to resolve this issue.

@debanjanbasu

> I'd like to add a point to the above list of things to check while troubleshooting which I think may not be immediately obvious when setting up Cilium for the first time.
> I have a restrictive firewall on my node machines. The hubble relay deployment attempts to connect to the service hubble-peer, which points to the Cilium node pods. Those pods are running on the host network, so the DNS for the hubble-peer service resolves to node machine IPs. This means you have to make sure that at least the hubble peer port (4244 default) is allowed in the firewall rules.
> Hopefully this helps!
>
> This is indeed the reason. We use hardened AMIs for our clusters, and since the Cilium pods run on the host network, we had to open peer port 4244 to resolve this issue.

This finally helped to solve the issue. Most cloud vendors' stock images are hardened with basic iptables rules blocking this.

@gandro
Member

gandro commented Mar 11, 2024

Glad to hear this was resolved by opening the port. For future reference, the necessary ports are documented here: https://docs.cilium.io/en/stable/operations/system_requirements/#firewall-requirements

@Jeansen

Jeansen commented Apr 29, 2024

I have the same problem, and from what I see, the pod's binding on the host network is IPv6 only. Here is what it looks like on one of my nodes (nothing more is installed than Kubernetes 1.29, cert-manager, and Cilium):

systemd       1            root  114u  IPv4  13541      0t0  TCP *:111 (LISTEN)
systemd       1            root  118u  IPv6  17718      0t0  TCP *:111 (LISTEN)
rpcbind     662            _rpc    4u  IPv4  13541      0t0  TCP *:111 (LISTEN)
rpcbind     662            _rpc    6u  IPv6  17718      0t0  TCP *:111 (LISTEN)
sshd        736            root    3u  IPv4  16885      0t0  TCP *:22 (LISTEN)
sshd        736            root    4u  IPv6  16887      0t0  TCP *:22 (LISTEN)
crio        745            root   11u  IPv4  16928      0t0  TCP 127.0.0.1:40829 (LISTEN)
crio        745            root   12u  IPv6  12557      0t0  TCP *:9090 (LISTEN)
crio        745            root   35u  IPv4  56373      0t0  TCP *:4000 (LISTEN)
kubelet    1868            root   15u  IPv6  12913      0t0  TCP *:10250 (LISTEN)
kubelet    1868            root   20u  IPv4  17355      0t0  TCP 127.0.0.1:10248 (LISTEN)
cilium-ag  8563            root    3u  IPv4  52256      0t0  TCP 127.0.0.1:9890 (LISTEN)
cilium-ag  8563            root   45u  IPv4  48868      0t0  TCP 127.0.0.1:37789 (LISTEN)
cilium-ag  8563            root   50u  IPv4  46922      0t0  TCP 127.0.0.1:9879 (LISTEN)
cilium-ag  8563            root   58u  IPv6  49752      0t0  TCP *:4244 (LISTEN)
cilium-ag  8563            root   74u  IPv4  48050      0t0  TCP 192.168.178.181:4240 (LISTEN)
cilium-en 21988            root   58u  IPv4 105712      0t0  TCP *:9964 (LISTEN)
cilium-en 21988            root   59u  IPv4 105713      0t0  TCP *:9964 (LISTEN)
cilium-en 21988            root   60u  IPv4 105714      0t0  TCP *:9964 (LISTEN)
cilium-en 21988            root   61u  IPv4 105715      0t0  TCP *:9964 (LISTEN)
cilium-en 21988            root   62u  IPv4 105716      0t0  TCP *:9964 (LISTEN)
cilium-en 21988            root   63u  IPv4 105717      0t0  TCP *:9964 (LISTEN)
cilium-en 21988            root   64u  IPv4 105718      0t0  TCP *:9964 (LISTEN)
cilium-en 21988            root   65u  IPv4 105719      0t0  TCP *:9964 (LISTEN)

For some reason, port 4244 on the pod, which the peer service behind 443 points to, only listens on IPv6.

When I curl it, the DNS gets resolved, but I get an empty reply. Also, I did not enable IPv6 in the Cilium values when deploying with Helm.

Here's the relevant log line:

time="2024-04-29T08:54:28Z" level=info msg="Starting gRPC server..." options="{peerTarget:hubble-peer.kube-system.svc.cluster.local:443 dialTimeout:5000000000 retryTimeout:30000000000 listenAddress::4245 healthListenAddress::4222 metricsListenAddress: log:0xc00034a2a0 serverTLSConfig:<nil> insecureServer:true clientTLSConfig:0xc0008971d0 clusterName:default insecureClient:false observerOptions:[0x1f0ed60 0x1f0ee40] grpcMetrics:<nil> grpcUnaryInterceptors:[] grpcStreamInterceptors:[]}" subsys=hubble-relay

@Jeansen

Jeansen commented Apr 29, 2024

So, I reinstalled from scratch, once with the CLI and another time with Helm. It seems something is not OK with my cert-manager deployment, because it only fails in that case. Unfortunately, there is not much information in the logs. Is there an option for more detailed logs? And no, my CA is fine; for instance, I have no problem with Ingress, etc. Anyway, the problem is obviously on my side.
