Error connecting to kubernetes-proxy #758

Closed
atoulme opened this issue Apr 27, 2023 · 16 comments
Labels: bug (Something isn't working), Splunk Observability (Issue related to Splunk Observability destination)

Comments

@atoulme
Contributor

atoulme commented Apr 27, 2023

The change implemented in https://github.com/signalfx/splunk-otel-collector-chart/pull/711/files fixes the kubernetes-scheduler issue; however, it does not fix the kubernetes-proxy issue.

From original post:

2023-03-16T15:53:22.550Z error prometheusexporter/prometheus.go:139 Could not get prometheus metrics {"kind": "receiver", "name": "receiver_creator", "pipeline": "metrics", "name": "smartagent/kubernetes-proxy/receiver_creator{endpoint=\"xxx.xxx.xxx.xxx\"}/k8s_observer/d742b91e-685e-4c35-8197-0b56ebc88e39", "monitorID": "smartagentkubernetesproxyreceiver_creatorendpoint2071306682k8s_observerd742b91e685e4c3581970b56ebc88e39", "monitorType": "kubernetes-proxy", "error": "Get \"http://xxx.xxx.xxx.xxx:29101/metrics\": dial tcp xxx.xxx.xxx.xxx:29101: connect: connection refused"}

Originally posted by @kishah-lilly in #697 (comment)

atoulme added the "Splunk Observability" and "bug" labels on Apr 27, 2023
@jvoravong
Contributor

jvoravong commented Apr 27, 2023

I didn't notice this issue when testing and validating OpenShift 4.12 with the Splunk OTel Collector Chart v0.72.0.
If this is still an issue, could any reporters please post in-depth details about your cluster setup?

@kishah-lilly

@jvoravong
Helm chart version 0.75.0 on OpenShift 4.10 using Splunk Observability configuration only (not Splunk Cloud / Splunk Enterprise configuration):

agent:
  enabled: true

[...]
  
    proxy:
      # Specifies whether to collect proxy metrics.
      enabled: true
    scheduler:
      # Specifies whether to collect scheduler metrics.
      enabled: true

Setting proxy.enabled to false stops the errors.

Errors:

2023-04-27T19:01:43.114Z error prometheusexporter/prometheus.go:139 Could not get prometheus metrics {"kind": "receiver", "name": "receiver_creator", "data_type": "metrics", "name": "smartagent/kubernetes-proxy/receiver_creator{endpoint=\"X.X.X.X\"}/k8s_observer/857b9fdc-3241-4ad7-9c01-91de8cc87021", "monitorType": "kubernetes-proxy", "monitorID": "smartagentkubernetesproxyreceiver_creatorendpointXXXXXXXk8s_observer857b9fdc32414ad79c0191de8cc87021", "error": "Get \"http://X.X.X.X:29101/metrics\": dial tcp X.X.X.X:29101: connect: connection refused"}

@matthewmodestino

0.75.0 on docker-desktop k8s also triggers it; I can confirm it can be silenced by disabling the proxy.

Still digging to confirm whether it's simply because the proxy is not exposing its port.

@dloucasfx
Contributor

@matthewmodestino docker desktop is different; you might not have the k8s proxy configured to expose metrics correctly.

@kishah-lilly did you guys change the default port?

https://<openshift_console>/k8s/ns/openshift-sdn/configmaps/sdn-config

kind: KubeProxyConfiguration
metricsBindAddress: 0.0.0.0:29101
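
For reference, an equivalent check from the CLI (a sketch; assumes the oc client and permission to read the openshift-sdn namespace, with the configmap name taken from the console URL above):

  # Print the kube-proxy metrics bind address from the SDN config
  oc -n openshift-sdn get configmap sdn-config -o yaml | grep metricsBindAddress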

@kishah-lilly

@matthewmodestino docker desktop is different; you might not have the k8s proxy configured to expose metrics correctly.

@kishah-lilly did you guys change the default port?

https://<openshift_console>/k8s/ns/openshift-sdn/configmaps/sdn-config

kind: KubeProxyConfiguration
metricsBindAddress: 0.0.0.0:29101

@dloucasfx we did not. It is currently set to 29101.

Thanks

@jvoravong
Contributor

Would you be able to post more debug info like the labels or pod yaml so we can verify what is happening? It looks like we cover port 29101.
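
For anyone gathering that debug info, a sketch of the relevant commands (assumes oc/kubectl access; the openshift-sdn and splunk-opentelemetry namespaces come from this thread, and <agent-pod-name> is a placeholder):

  # Show the SDN (kube-proxy) pods that the receiver_creator rule targets, with their labels
  oc -n openshift-sdn get pods --show-labels

  # Dump the collector agent pod spec for review
  oc -n splunk-opentelemetry get pod <agent-pod-name> -o yaml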

@kishah-lilly

Would you be able to post more debug info like the labels or pod yaml so we can verify what is happening? It looks like we cover port 29101.

Pod yaml:

kind: Pod
apiVersion: v1
metadata:
  generateName: splunk-otel-collector-chart-agent-
  annotations:
    checksum/config: [REDACTED]
    kubectl.kubernetes.io/default-container: otel-collector
    openshift.io/scc: splunk-otel-collector-chart
  resourceVersion: [REDACTED]
  name: splunk-otel-collector-chart-agent-2tmwk
  uid: [REDACTED]
  creationTimestamp: '2023-04-27T19:01:41Z'
  managedFields:
    - manager: kube-controller-manager
      operation: Update
      apiVersion: v1
      time: '2023-04-27T19:01:41Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:checksum/config': {}
            'f:kubectl.kubernetes.io/default-container': {}
          'f:generateName': {}
          'f:labels':
            .: {}
            'f:app': {}
            'f:controller-revision-hash': {}
            'f:pod-template-generation': {}
            'f:release': {}
          'f:ownerReferences':
            .: {}
            'k:{"uid":[REDACTED]}': {}
        'f:spec':
          'f:volumes':
            .: {}
            'k:{"name":"host-dev"}':
              .: {}
              'f:hostPath':
                .: {}
                'f:path': {}
                'f:type': {}
              'f:name': {}
            'k:{"name":"host-etc"}':
              .: {}
              'f:hostPath':
                .: {}
                'f:path': {}
                'f:type': {}
              'f:name': {}
            'k:{"name":"host-proc"}':
              .: {}
              'f:hostPath':
                .: {}
                'f:path': {}
                'f:type': {}
              'f:name': {}
            'k:{"name":"host-run-udev-data"}':
              .: {}
              'f:hostPath':
                .: {}
                'f:path': {}
                'f:type': {}
              'f:name': {}
            'k:{"name":"host-sys"}':
              .: {}
              'f:hostPath':
                .: {}
                'f:path': {}
                'f:type': {}
              'f:name': {}
            'k:{"name":"host-var-run-utmp"}':
              .: {}
              'f:hostPath':
                .: {}
                'f:path': {}
                'f:type': {}
              'f:name': {}
            'k:{"name":"otel-configmap"}':
              .: {}
              'f:configMap':
                .: {}
                'f:defaultMode': {}
                'f:items': {}
                'f:name': {}
              'f:name': {}
          'f:containers':
            'k:{"name":"otel-collector"}':
              'f:image': {}
              'f:volumeMounts':
                .: {}
                'k:{"mountPath":"/conf"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                'k:{"mountPath":"/hostfs/dev"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                  'f:readOnly': {}
                'k:{"mountPath":"/hostfs/etc"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                  'f:readOnly': {}
                'k:{"mountPath":"/hostfs/proc"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                  'f:readOnly': {}
                'k:{"mountPath":"/hostfs/run/udev/data"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                  'f:readOnly': {}
                'k:{"mountPath":"/hostfs/sys"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                  'f:readOnly': {}
                'k:{"mountPath":"/hostfs/var/run/utmp"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                  'f:readOnly': {}
              'f:terminationMessagePolicy': {}
              .: {}
              'f:resources':
                .: {}
                'f:limits':
                  .: {}
                  'f:cpu': {}
                  'f:memory': {}
                'f:requests':
                  .: {}
                  'f:cpu': {}
                  'f:memory': {}
              'f:command': {}
              'f:livenessProbe':
                .: {}
                'f:failureThreshold': {}
                'f:httpGet':
                  .: {}
                  'f:path': {}
                  'f:port': {}
                  'f:scheme': {}
                'f:periodSeconds': {}
                'f:successThreshold': {}
                'f:timeoutSeconds': {}
              'f:env':
                'k:{"name":"K8S_NODE_NAME"}':
                  .: {}
                  'f:name': {}
                  'f:valueFrom':
                    .: {}
                    'f:fieldRef': {}
                'k:{"name":"HOST_ETC"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"K8S_POD_NAME"}':
                  .: {}
                  'f:name': {}
                  'f:valueFrom':
                    .: {}
                    'f:fieldRef': {}
                'k:{"name":"HOST_DEV"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"HOST_PROC_MOUNTINFO"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"SPLUNK_OBSERVABILITY_ACCESS_TOKEN"}':
                  .: {}
                  'f:name': {}
                  'f:valueFrom':
                    .: {}
                    'f:secretKeyRef': {}
                .: {}
                'k:{"name":"K8S_POD_UID"}':
                  .: {}
                  'f:name': {}
                  'f:valueFrom':
                    .: {}
                    'f:fieldRef': {}
                'k:{"name":"HOST_VAR"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"K8S_NAMESPACE"}':
                  .: {}
                  'f:name': {}
                  'f:valueFrom':
                    .: {}
                    'f:fieldRef': {}
                'k:{"name":"HOST_RUN"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"HOST_SYS"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"SPLUNK_MEMORY_TOTAL_MIB"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"K8S_NODE_IP"}':
                  .: {}
                  'f:name': {}
                  'f:valueFrom':
                    .: {}
                    'f:fieldRef': {}
                'k:{"name":"K8S_POD_IP"}':
                  .: {}
                  'f:name': {}
                  'f:valueFrom':
                    .: {}
                    'f:fieldRef': {}
                'k:{"name":"HOST_PROC"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
              'f:readinessProbe':
                .: {}
                'f:failureThreshold': {}
                'f:httpGet':
                  .: {}
                  'f:path': {}
                  'f:port': {}
                  'f:scheme': {}
                'f:periodSeconds': {}
                'f:successThreshold': {}
                'f:timeoutSeconds': {}
              'f:terminationMessagePath': {}
              'f:imagePullPolicy': {}
              'f:ports':
                .: {}
                'k:{"containerPort":4317,"protocol":"TCP"}':
                  .: {}
                  'f:containerPort': {}
                  'f:hostPort': {}
                  'f:name': {}
                  'f:protocol': {}
                'k:{"containerPort":4318,"protocol":"TCP"}':
                  .: {}
                  'f:containerPort': {}
                  'f:hostPort': {}
                  'f:name': {}
                  'f:protocol': {}
                'k:{"containerPort":9943,"protocol":"TCP"}':
                  .: {}
                  'f:containerPort': {}
                  'f:hostPort': {}
                  'f:name': {}
                  'f:protocol': {}
                'k:{"containerPort":55681,"protocol":"TCP"}':
                  .: {}
                  'f:containerPort': {}
                  'f:hostPort': {}
                  'f:name': {}
                  'f:protocol': {}
              'f:name': {}
          'f:dnsPolicy': {}
          'f:tolerations': {}
          'f:serviceAccount': {}
          'f:restartPolicy': {}
          'f:schedulerName': {}
          'f:hostNetwork': {}
          'f:nodeSelector': {}
          'f:terminationGracePeriodSeconds': {}
          'f:serviceAccountName': {}
          'f:enableServiceLinks': {}
          'f:securityContext': {}
          'f:affinity':
            .: {}
            'f:nodeAffinity':
              .: {}
              'f:requiredDuringSchedulingIgnoredDuringExecution': {}
    - manager: kubelet
      operation: Update
      apiVersion: v1
      time: '2023-05-01T15:50:56Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:status':
          'f:conditions':
            'k:{"type":"ContainersReady"}':
              .: {}
              'f:lastProbeTime': {}
              'f:lastTransitionTime': {}
              'f:status': {}
              'f:type': {}
            'k:{"type":"Initialized"}':
              .: {}
              'f:lastProbeTime': {}
              'f:lastTransitionTime': {}
              'f:status': {}
              'f:type': {}
            'k:{"type":"Ready"}':
              .: {}
              'f:lastProbeTime': {}
              'f:lastTransitionTime': {}
              'f:status': {}
              'f:type': {}
          'f:containerStatuses': {}
          'f:hostIP': {}
          'f:phase': {}
          'f:podIP': {}
          'f:podIPs':
            .: {}
            'k:{"ip":[REDACTED]}':
              .: {}
              'f:ip': {}
          'f:startTime': {}
      subresource: status
  namespace: splunk-opentelemetry
  ownerReferences:
    - apiVersion: apps/v1
      kind: DaemonSet
      name: splunk-otel-collector-chart-agent
      uid: [REDACTED]
      controller: true
      blockOwnerDeletion: true
  labels:
    app: splunk-otel-collector
    controller-revision-hash: [REDACTED]
    pod-template-generation: '34'
    release: splunk-otel-collector-chart
spec:
  nodeSelector:
    kubernetes.io/os: linux
  restartPolicy: Always
  serviceAccountName: splunk-otel-collector-chart
  imagePullSecrets:
    - name: splunk-otel-collector-chart-dockercfg-j8wv2
  priority: 0
  schedulerName: default-scheduler
  hostNetwork: true
  enableServiceLinks: true
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchFields:
              - key: metadata.name
                operator: In
                values:
                  - [REDACTED]
  terminationGracePeriodSeconds: 600
  preemptionPolicy: PreemptLowerPriority
  nodeName: [REDACTED]
  securityContext:
    seLinuxOptions:
      user: system_u
      role: system_r
      type: spc_t
      level: s0
    fsGroup: 1002050000
  containers:
    - resources:
        limits:
          cpu: 200m
          memory: 500Mi
        requests:
          cpu: 200m
          memory: 500Mi
      readinessProbe:
        httpGet:
          path: /
          port: 13133
          scheme: HTTP
        timeoutSeconds: 1
        periodSeconds: 10
        successThreshold: 1
        failureThreshold: 3
      terminationMessagePath: /dev/termination-log
      name: otel-collector
      command:
        - /otelcol
        - '--config=/conf/relay.yaml'
      livenessProbe:
        httpGet:
          path: /
          port: 13133
          scheme: HTTP
        timeoutSeconds: 1
        periodSeconds: 10
        successThreshold: 1
        failureThreshold: 3
      env:
        - name: SPLUNK_MEMORY_TOTAL_MIB
          value: '500'
        - name: K8S_NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: K8S_NODE_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.hostIP
        - name: K8S_POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: K8S_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: K8S_POD_UID
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.uid
        - name: K8S_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: SPLUNK_OBSERVABILITY_ACCESS_TOKEN
          valueFrom:
            secretKeyRef:
              name: splunk-otel-collector
              key: splunk_observability_access_token
        - name: HOST_PROC
          value: /hostfs/proc
        - name: HOST_SYS
          value: /hostfs/sys
        - name: HOST_ETC
          value: /hostfs/etc
        - name: HOST_VAR
          value: /hostfs/var
        - name: HOST_RUN
          value: /hostfs/run
        - name: HOST_DEV
          value: /hostfs/dev
        - name: HOST_PROC_MOUNTINFO
          value: /proc/self/mountinfo
      securityContext:
        capabilities:
          drop:
            - ALL
        readOnlyRootFilesystem: true
      ports:
        - name: otlp
          hostPort: 4317
          containerPort: 4317
          protocol: TCP
        - name: otlp-http
          hostPort: 4318
          containerPort: 4318
          protocol: TCP
        - name: otlp-http-old
          hostPort: 55681
          containerPort: 55681
          protocol: TCP
        - name: signalfx
          hostPort: 9943
          containerPort: 9943
          protocol: TCP
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: otel-configmap
          mountPath: /conf
        - name: host-dev
          readOnly: true
          mountPath: /hostfs/dev
        - name: host-etc
          readOnly: true
          mountPath: /hostfs/etc
        - name: host-proc
          readOnly: true
          mountPath: /hostfs/proc
        - name: host-run-udev-data
          readOnly: true
          mountPath: /hostfs/run/udev/data
        - name: host-sys
          readOnly: true
          mountPath: /hostfs/sys
        - name: host-var-run-utmp
          readOnly: true
          mountPath: /hostfs/var/run/utmp
        - name: kube-api-access-wnbhj
          readOnly: true
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      terminationMessagePolicy: File
      image: 'quay.io/signalfx/splunk-otel-collector:0.75.0'
  serviceAccount: splunk-otel-collector-chart
  volumes:
    - name: host-dev
      hostPath:
        path: /dev
        type: ''
    - name: host-etc
      hostPath:
        path: /etc
        type: ''
    - name: host-proc
      hostPath:
        path: /proc
        type: ''
    - name: host-run-udev-data
      hostPath:
        path: /run/udev/data
        type: ''
    - name: host-sys
      hostPath:
        path: /sys
        type: ''
    - name: host-var-run-utmp
      hostPath:
        path: /var/run/utmp
        type: ''
    - name: otel-configmap
      configMap:
        name: splunk-otel-collector-chart-otel-agent
        items:
          - key: relay
            path: relay.yaml
        defaultMode: 420
    - name: kube-api-access-wnbhj
      projected:
        sources:
          - serviceAccountToken:
              expirationSeconds: 3607
              path: token
          - configMap:
              name: kube-root-ca.crt
              items:
                - key: ca.crt
                  path: ca.crt
          - downwardAPI:
              items:
                - path: namespace
                  fieldRef:
                    apiVersion: v1
                    fieldPath: metadata.namespace
          - configMap:
              name: openshift-service-ca.crt
              items:
                - key: service-ca.crt
                  path: service-ca.crt
        defaultMode: 420
  dnsPolicy: ClusterFirstWithHostNet
  tolerations:
    - operator: Exists

@dloucasfx
Contributor

@kishah-lilly
Thanks for confirming

I wonder if the service is only allowing HTTPS, as we are currently trying HTTP.
Can you check that? You can also edit the agent configmap and add useHTTPS: true to the smartagent/kubernetes-proxy config, restart the pod, and see if that takes care of the issue.

If it's not caused by https, then can you verify that port 29101 is listening? netstat -anpe | grep "29101" | grep "LISTEN"

Thanks
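
A sketch of that configmap edit (the configmap name and pod label come from the pod spec posted above; the exact location of the useHTTPS setting is an assumption based on the receiver name in the error logs):

  # Edit the rendered agent config and add `useHTTPS: true` under
  # receivers -> receiver_creator -> receivers -> smartagent/kubernetes-proxy -> config
  oc -n splunk-opentelemetry edit configmap splunk-otel-collector-chart-otel-agent

  # Restart the agent pods so they pick up the change
  oc -n splunk-opentelemetry delete pod -l app=splunk-otel-collector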

@kishah-lilly

@kishah-lilly Thanks for confirming

I wonder if the service is only allowing HTTPS, as we are currently trying HTTP. Can you check that? You can also edit the agent configmap and add useHTTPS: true to the smartagent/kubernetes-proxy config, restart the pod, and see if that takes care of the issue.

If it's not caused by https, then can you verify that port 29101 is listening? netstat -anpe | grep "29101" | grep "LISTEN"

Thanks

I tried adding useHTTPS: true to smartagent/kubernetes-proxy, then deleted the pod and checked the logs of the new pod that spun up, but no luck:

         smartagent/kubernetes-proxy:
            config:
              extraDimensions:
                metric_source: kubernetes-proxy
              port: 29101
              type: kubernetes-proxy
              useHTTPS: true
            rule: type == "pod" && labels["app"] == "sdn"

Also tried adding useServiceAccount: true just to see but that didn't work either.

Here is the output using ss, so I believe 29101 is listening:

  $ ss -anpe | grep "29101" | grep "LISTEN"
  tcp   LISTEN   0   128   127.0.0.1:29101   0.0.0.0:*   ino:42391 sk:11f <->

Thanks

@sveno1990

sveno1990 commented May 30, 2023

We ran into the same issue. The problem seems to be that the service is listening on port 29101 only on 127.0.0.1, not on the IP of the node itself, while the OTel Collector pod is trying to connect to node-ip:29101 instead of localhost. I think this is the issue, at least for us. Do you agree that this is the general issue? If so, should the solution be to make the OTel Collector connect to localhost instead of the node IP, since it runs on hostNetwork anyway?
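
A quick way to test this from a node (or from the agent pod, since it runs with hostNetwork) is to hit both addresses directly; a sketch, assuming curl is available:

  # Succeeds if kube-proxy binds its metrics endpoint to loopback only
  curl -s http://127.0.0.1:29101/metrics | head

  # Connection refused if the endpoint is not bound to the node IP (replace <node-ip>)
  curl -s http://<node-ip>:29101/metrics | head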

@dloucasfx
Contributor

We ran into the same issue. The problem seems to be that the service is listening on port 29101 only on 127.0.0.1, not on the IP of the node itself, while the OTel Collector pod is trying to connect to node-ip:29101 instead of localhost. I think this is the issue, at least for us. Do you agree that this is the general issue? If so, should the solution be to make the OTel Collector connect to localhost instead of the node IP, since it runs on hostNetwork anyway?

@sveno1990 I have seen this happening before in an OpenShift cluster. The receiver is configured to use the discovered pod IP (rule: type == "pod" && labels["app"] == "sdn"); it's possible that OpenShift is tightening security so only pods with hostNetwork access can reach the metrics endpoint.

@jvoravong have you seen this in our lab cluster?

@kishah-lilly can you try the loopback address; if it's still not working, please share the errors you are getting.

@jvoravong
Contributor

jvoravong commented May 30, 2023

OpenShift Notes:

  • v4.9: The kubernetes-proxy receiver was integrating successfully.
  • v4.10: Haven't been able to successfully create and test this cluster version in our lab.
    • Release notes do state "monitoring stack components have been updated to use TLS authentication for metrics collection".
  • v4.11: Haven't been able to successfully create and test this cluster version in our lab.
  • v4.12: The kubernetes-proxy receiver in our latest release (v0.76.0) doesn't work at all because SDN is no longer the default (a quick check for the active network plugin is sketched below).
    • Beginning with OpenShift 4.12, new clusters will be installed with the OVN-Kubernetes network plugin as the default networking plugin across all supported platforms and topologies. All prior OpenShift releases will continue to use OpenShift SDN as the default networking plugin.
    • A service named "network-metrics-service" is now available.
    • Still validating whether our receivers and dashboard content are compatible with OVN metrics.
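
A sketch of that check (assumes the oc client; network.config/cluster is the cluster-scoped OpenShift network resource):

  # Prints "OpenShiftSDN" or "OVNKubernetes" depending on the installed plugin
  oc get network.config/cluster -o jsonpath='{.status.networkType}'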

@sveno1990

We encounter the issue on OpenShift 4.10. For troubleshooting purposes I manually added "host: localhost" to the config below in the configmap splunk-otel-collector-otel-agent. That solved the issue; however, it is not a sustainable solution.

  smartagent/kubernetes-proxy:
    config:
      extraDimensions:
        metric_source: kubernetes-proxy
      port: 29101
      type: kubernetes-proxy
    rule: type == "pod" && labels["app"] == "sdn"
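
For reference, the edited receiver config would then look roughly like this (a sketch; placing host under config follows the comment above and the host values jvoravong confirms below):

  smartagent/kubernetes-proxy:
    config:
      extraDimensions:
        metric_source: kubernetes-proxy
      host: localhost
      port: 29101
      type: kubernetes-proxy
    rule: type == "pod" && labels["app"] == "sdn"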

@jvoravong
Contributor

Confirmed that you can set host to localhost, 0.0.0.0, or 127.0.0.1 with v4.10 and the monitor works. However, if some of these values are used, warnings like the following can populate the logs:
"warn internal/warning.go:51 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks"

Still looking into other solutions.

@jvoravong
Contributor

A fix for this issue on OpenShift v4.10 was released with "Update the Kubernetes Proxy monitor for OpenShift clusters" (#810).

@jvoravong
Contributor

We've implemented fixes for supported Kubernetes distributions. The collector agent's logging configurations have been adjusted to prevent excessive errors related to kubernetes-proxy connections on untested or unsupported distributions. Additionally, we've expanded the documentation section on known kube-proxy issues for clarity.

This ticket is now closed.
