Add support for monitor tab in Jaeger console #470

Merged — 2 commits merged into grafana:main from the span-metrics branch on Aug 23, 2023

Conversation

@pavolloffay (Collaborator) commented on Jun 19, 2023

Resolves #466

Based on https://docs.google.com/document/d/17TC4VPaRgK1SeP9JNFlYk17gGjJ1dtQh1ENa9wO9rVw/edit#heading=h.rqzf9iedmg7o

Blocked by #525

Depends on:

Todos

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true 
EOF
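Applying this enables the user workload monitoring stack. A quick sanity check (a sketch, not part of the PR) is to wait for the Prometheus pods to appear in the openshift-user-workload-monitoring namespace:

# Sketch of a verification step: the user workload Prometheus pods should become Ready.
kubectl get pods -n openshift-user-workload-monitoring
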
kubectl apply -f - <<EOF
apiVersion: tempo.grafana.com/v1alpha1
kind:  TempoStack
metadata:
  name: simplest
spec:
  storage:
    secret:
      name: minio-test
      type: s3
  storageSize: 1Gi
  template:
    gateway:
      enabled: false
    queryFrontend:
      jaegerQuery:
        enabled: true
        monitorTab:
          enabled: true
          prometheusEndpoint: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091
        ingress:
          type: route
EOF
 --query.bearer-token-propagation=true
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jaeger-cluster-monitoring-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-monitoring-view
subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: system:authenticated
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: system:unauthenticated  
EOF
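To verify the binding actually grants access to the Thanos querier, one option (a rough sketch, assuming the default service account token of a throwaway pod is covered by the system:authenticated group bound above) is an in-cluster curl:

# Sketch: query Thanos from inside the cluster using the pod's service account token.
kubectl run thanos-check --rm -it --restart=Never --image=curlimages/curl --command -- sh -c \
  'TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token); curl -sk -H "Authorization: Bearer $TOKEN" "https://thanos-querier.openshift-monitoring.svc.cluster.local:9091/api/v1/query?query=up"'
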
kubectl apply -f - <<EOF
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  image: ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:0.79.0
  mode: deployment
  ports:
    - name: promexporter
      port: 8889
      protocol: TCP
  config: |
    connectors:
      spanmetrics:
        histogram:
          explicit:
            buckets: [100us, 1ms, 2ms, 6ms, 10ms, 100ms, 250ms]
        dimensions:
        - name: http.method
          default: GET
        - name: http.status_code
        dimensions_cache_size: 1000
        aggregation_temporality: "AGGREGATION_TEMPORALITY_CUMULATIVE"    
        metrics_flush_interval: 15s

    receivers:
      jaeger:
        protocols:
          thrift_http:
          grpc: 
      otlp:
        protocols:
          grpc:
          http:
    
    exporters:
      prometheus:
        endpoint: 0.0.0.0:8889
        resource_to_telemetry_conversion: 
          enabled: true # by default resource attributes are dropped
      logging:
      otlp:
        endpoint: tempo-simplest-distributor.ploffay.svc.cluster.local:4317
        tls:
          insecure: true
    
    service:
      # Expose internal telemetry of the collector
      # It exposes Prometheus /metrics endpoint for internal telemetry
      telemetry:
        metrics:
          address: 0.0.0.0:8888
      pipelines:
        traces:
          receivers: [otlp, jaeger]
          exporters: [otlp, spanmetrics]
        metrics:
          receivers: [spanmetrics]
          exporters: [prometheus, logging]
EOF
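Before wiring up the PodMonitor, the collector's Prometheus exporter can be checked directly (a sketch; it assumes the OpenTelemetry operator exposes the configured promexporter port on an otel-collector Service, and the exact metric names depend on the spanmetrics connector version):

# Sketch: inspect the exporter endpoint for span metrics after sending some traces.
kubectl port-forward svc/otel-collector 8889:8889 &
curl -s http://localhost:8889/metrics | grep -iE 'calls|duration' | head
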
kubectl apply -f - <<EOF
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: otel-collector
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: otel-collector
  podMetricsEndpoints:
  - port: metrics
  - port: promexporter
    relabelings:
    - action: labeldrop
      regex: pod
    - action: labeldrop
      regex: container
    - action: labeldrop
      regex: endpoint
    metricRelabelings:
    - action: labeldrop
      regex: instance
    - action: labeldrop
      regex: job
EOF

@CLAassistant commented on Jun 19, 2023

CLA assistant check: all committers have signed the CLA.

@codecov-commenter commented on Jul 4, 2023

Codecov Report

Merging #470 (46eeb33) into main (e665d83) will increase coverage by 0.10%.
The diff coverage is 88.23%.

@@            Coverage Diff             @@
##             main     #470      +/-   ##
==========================================
+ Coverage   78.11%   78.22%   +0.10%     
==========================================
  Files          64       64              
  Lines        4707     4758      +51     
==========================================
+ Hits         3677     3722      +45     
- Misses        857      861       +4     
- Partials      173      175       +2     
Flag Coverage Δ
unittests 78.22% <88.23%> (+0.10%) ⬆️

Flags with carried forward coverage won't be shown.

Files Changed Coverage Δ
apis/tempo/v1alpha1/tempostack_types.go 100.00% <ø> (ø)
internal/manifests/queryfrontend/query_frontend.go 90.38% <85.71%> (-0.53%) ⬇️
apis/tempo/v1alpha1/tempostack_webhook.go 81.09% <100.00%> (+0.53%) ⬆️

Value: tempo.Spec.Template.QueryFrontend.JaegerQuery.MonitorTab.PrometheusEndpoint,
},
},
Args: []string{
pavolloffay (Collaborator, Author) commented:

I need to add a condition here to set the TLS/token file only when deployed on OCP

@@ -0,0 +1,13 @@
#!/bin/bash
A collaborator commented:

Could you move this shell script into a container? This would give us more deterministic results (e.g., curl might not be installed on every test host, or may be a different version).

Something like here: https://github.com/grafana/tempo-operator/blob/3b402384c9e78b4cd0ae075548f4c6cebab82505/tests/e2e/smoketest-with-jaeger/03-verify-traces.yaml
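A rough sketch of that suggestion, i.e. running the verification as a Job in a container instead of a host-side shell script (the image, endpoint, and query below are placeholders, not the actual test content):

kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: verify-metrics
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: verify-metrics
          image: curlimages/curl  # a pinned container image gives a deterministic curl
          command:
            - /bin/sh
            - -c
            - |
              # placeholder check; the real test would query the monitoring endpoint
              curl -sf http://example-metrics-endpoint:8889/metrics | grep calls
EOF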

pavolloffay (Collaborator, Author) replied:

+1 I will change it

@andreasgerstmayr (Collaborator) left a comment:

Looks great!
Do we plan to support this also with upstream Kubernetes?
If so, should we add an e2e test for the upstream suite?

@pavolloffay force-pushed the span-metrics branch 2 times, most recently from f39c775 to 9057fc5 on July 7, 2023 at 10:07
@pavolloffay (Collaborator, Author) commented:

I am seeing the following error even when the CA bundle is provisioned via the annotation (see the ConfigMap below):

{"level":"info","ts":1688752189.7239306,"caller":"fswatcher/fswatcher.go:117","msg":"Received event","event":"CHMOD         \"/ca-bundle/..2023_07_07_17_49_31.629761843\""}
{"level":"info","ts":1688752189.7243323,"caller":"fswatcher/fswatcher.go:117","msg":"Received event","event":"CHMOD         \"/ca-bundle/..data\""}
{"level":"info","ts":1688752189.7245862,"caller":"fswatcher/fswatcher.go:117","msg":"Received event","event":"CHMOD         \"/ca-bundle/service-ca.crt\""}
{"level":"info","ts":1688752189.7246938,"caller":"fswatcher/fswatcher.go:117","msg":"Received event","event":"CHMOD         \"/ca-bundle\""}
{"level":"info","ts":1688752190.846611,"caller":"fswatcher/fswatcher.go:117","msg":"Received event","event":"CHMOD         \"/ca-bundle/..2023_07_07_17_49_31.629761843\""}
{"level":"info","ts":1688752190.84678,"caller":"fswatcher/fswatcher.go:117","msg":"Received event","event":"CHMOD         \"/ca-bundle/..data\""}
{"level":"info","ts":1688752190.8468564,"caller":"fswatcher/fswatcher.go:117","msg":"Received event","event":"CHMOD         \"/ca-bundle/service-ca.crt\""}
{"level":"info","ts":1688752190.846944,"caller":"fswatcher/fswatcher.go:117","msg":"Received event","event":"CHMOD         \"/ca-bundle\""}
{"level":"info","ts":1688752191.8541636,"caller":"fswatcher/fswatcher.go:117","msg":"Received event","event":"CHMOD         \"/ca-bundle/..2023_07_07_17_49_31.629761843\""}
{"level":"info","ts":1688752191.8543313,"caller":"fswatcher/fswatcher.go:117","msg":"Received event","event":"CHMOD         \"/ca-bundle/..data\""}
{"level":"info","ts":1688752191.8544083,"caller":"fswatcher/fswatcher.go:117","msg":"Received event","event":"CHMOD         \"/ca-bundle/service-ca.crt\""}
{"level":"info","ts":1688752191.854454,"caller":"fswatcher/fswatcher.go:117","msg":"Received event","event":"CHMOD         \"/ca-bundle\""}


kubectl apply -f - <<EOF
kind: ConfigMap
apiVersion: v1
metadata:
  annotations:
    service.beta.openshift.io/inject-cabundle: "true"
  name: serving-certs-ca-bundle
  namespace: ploffay
EOF
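One way to check that the service CA actually gets injected into that ConfigMap (a sketch; the namespace matches the one above):

# Sketch: the service-ca operator should populate data["service-ca.crt"].
kubectl -n ploffay get configmap serving-certs-ca-bundle -o jsonpath='{.data.service-ca\.crt}' | head -n 3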

@pavolloffay (Collaborator, Author) commented:

The e2e test now passes locally (I did 4 runs). However, the solution does not work with OTEL collector 0.80.0 and above. Once Tempo 2.2 is released, I will be able to support it via jaegertracing/jaeger#4555.

}
// If the endpoint matches Prometheus on OpenShift, configure TLS and token based auth
prometheusEndpoint := strings.TrimSpace(tempo.Spec.Template.QueryFrontend.JaegerQuery.MonitorTab.PrometheusEndpoint)
if prometheusEndpoint == "https://thanos-querier.openshift-monitoring.svc.cluster.local:9091" {
A collaborator commented:

IMHO these settings (prometheus.tls.enabled, prometheus.token-file, prometheus.tls.ca) should be part of the monitorTab field in the CR; otherwise this feature will only work with a non-TLS Prometheus or the monitoring stack on OpenShift.

The collaborator added:

And to keep it simple for users, set these new CR fields in the defaulter webhook to the values used here if the endpoint matches the Thanos endpoint.
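
A sketch of how the suggested CR fields could look (field names here are hypothetical, not the final API):

monitorTab:
  enabled: true
  prometheusEndpoint: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091
  # hypothetical fields suggested in the review, for non-OpenShift / custom TLS setups
  prometheusTLS:
    enabled: true
    caFile: /path/to/prometheus-ca.crt  # placeholder path
  prometheusTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token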

@andreasgerstmayr (Collaborator) left a review:

The new red-metrics e2e test also passes for me.
Great work!

Signed-off-by: Pavol Loffay <p.loffay@gmail.com>
Signed-off-by: Pavol Loffay <p.loffay@gmail.com>
@pavolloffay (Collaborator, Author) commented:

@andreasgerstmayr the PR is ready for review.

The e2e test is passing:

/home/ploffay/projects/grafana/tempo-operator/bin/kubectl-kuttl test --config kuttl-test-openshift.yaml --test red-metrics
=== RUN   kuttl
    harness.go:462: starting setup
    harness.go:252: running tests using configured kubeconfig.
I0822 13:07:49.248274  352130 request.go:682] Waited for 1.034038956s due to client-side throttling, not priority and fairness, request: GET:https://api.crc.testing:6443/apis/controlplane.operator.openshift.io/v1alpha1?timeout=32s
    harness.go:275: Successful connection to cluster at: https://api.crc.testing:6443
    harness.go:360: running tests
    harness.go:73: going to run test suite with timeout of 150 seconds for each step
    harness.go:372: testsuite: ./tests/e2e-openshift/ has 4 tests
=== RUN   kuttl/harness
=== RUN   kuttl/harness/red-metrics
=== PAUSE kuttl/harness/red-metrics
=== CONT  kuttl/harness/red-metrics
    logger.go:42: 13:07:56 | red-metrics | Creating namespace: kuttl-test-amazing-hyena
    logger.go:42: 13:07:56 | red-metrics/0-install-workload-monitoring | starting test step 0-install-workload-monitoring
    logger.go:42: 13:07:58 | red-metrics/0-install-workload-monitoring | ConfigMap:openshift-monitoring/cluster-monitoring-config created
    logger.go:42: 13:08:23 | red-metrics/0-install-workload-monitoring | test step completed 0-install-workload-monitoring
    logger.go:42: 13:08:23 | red-metrics/1-install-otel-collector | starting test step 1-install-otel-collector
I0822 13:08:24.340650  352130 request.go:682] Waited for 1.048510164s due to client-side throttling, not priority and fairness, request: GET:https://api.crc.testing:6443/apis/metal3.io/v1alpha1?timeout=32s
    logger.go:42: 13:08:25 | red-metrics/1-install-otel-collector | OpenTelemetryCollector:kuttl-test-amazing-hyena/otel created
    logger.go:42: 13:08:25 | red-metrics/1-install-otel-collector | PodMonitor:kuttl-test-amazing-hyena/otel-collector created
    logger.go:42: 13:08:51 | red-metrics/1-install-otel-collector | test step completed 1-install-otel-collector
    logger.go:42: 13:08:51 | red-metrics/2-install-tempo | starting test step 2-install-tempo
I0822 13:08:52.206408  352130 request.go:682] Waited for 1.04759747s due to client-side throttling, not priority and fairness, request: GET:https://api.crc.testing:6443/apis/metal3.io/v1alpha1?timeout=32s
    logger.go:42: 13:08:53 | red-metrics/2-install-tempo | Secret:kuttl-test-amazing-hyena/minio-test created
    logger.go:42: 13:08:53 | red-metrics/2-install-tempo | TempoStack:kuttl-test-amazing-hyena/redmetrics created
    logger.go:42: 13:08:53 | red-metrics/2-install-tempo | ClusterRoleBinding:/tempo-query-cluster-monitoring-view created
    logger.go:42: 13:10:15 | red-metrics/2-install-tempo | test step completed 2-install-tempo
    logger.go:42: 13:10:15 | red-metrics/3-install-hotrod | starting test step 3-install-hotrod
I0822 13:10:16.306339  352130 request.go:682] Waited for 1.047507298s due to client-side throttling, not priority and fairness, request: GET:https://api.crc.testing:6443/apis/machine.openshift.io/v1beta1?timeout=32s
    logger.go:42: 13:10:17 | red-metrics/3-install-hotrod | Deployment:kuttl-test-amazing-hyena/hotrod created
    logger.go:42: 13:10:17 | red-metrics/3-install-hotrod | Service:kuttl-test-amazing-hyena/hotrod created
    logger.go:42: 13:10:21 | red-metrics/3-install-hotrod | test step completed 3-install-hotrod
    logger.go:42: 13:10:21 | red-metrics/4-install-generate-traces | starting test step 4-install-generate-traces
    logger.go:42: 13:10:23 | red-metrics/4-install-generate-traces | Job:kuttl-test-amazing-hyena/hotrod-curl created
    logger.go:42: 13:10:32 | red-metrics/4-install-generate-traces | test step completed 4-install-generate-traces
    logger.go:42: 13:10:32 | red-metrics/5-intall-assert-job | starting test step 5-intall-assert-job
I0822 13:10:33.814077  352130 request.go:682] Waited for 1.047676091s due to client-side throttling, not priority and fairness, request: GET:https://api.crc.testing:6443/apis/autoscaling.openshift.io/v1beta1?timeout=32s
    logger.go:42: 13:10:35 | red-metrics/5-intall-assert-job | Job:kuttl-test-amazing-hyena/verify-metrics created
    logger.go:42: 13:10:42 | red-metrics/5-intall-assert-job | test step completed 5-intall-assert-job
    logger.go:42: 13:10:42 | red-metrics | red-metrics events from ns kuttl-test-amazing-hyena:
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:26 +0200 CEST	Normal	Pod otel-collector-589c5dffcd-n489n		Scheduled	Successfully assigned kuttl-test-amazing-hyena/otel-collector-589c5dffcd-n489n to crc-pbwlw-master-0 by crc-pbwlw-master-0		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:26 +0200 CEST	Normal	ReplicaSet.apps otel-collector-589c5dffcd		SuccessfulCreate	Created pod: otel-collector-589c5dffcd-n489n		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:26 +0200 CEST	Normal	Deployment.apps otel-collector		ScalingReplicaSet	Scaled up replica set otel-collector-589c5dffcd to 1		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:28 +0200 CEST	Normal	Pod otel-collector-589c5dffcd-n489n		AddedInterface	Add eth0 [10.217.0.154/23] from openshift-sdn		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:28 +0200 CEST	Normal	Pod otel-collector-589c5dffcd-n489n.spec.containers{otc-container}		Pulling	Pulling image "ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:0.83.0"		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:49 +0200 CEST	Normal	Pod otel-collector-589c5dffcd-n489n.spec.containers{otc-container}		Pulled	Successfully pulled image "ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:0.83.0" in 21.739144717s		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:50 +0200 CEST	Normal	Pod otel-collector-589c5dffcd-n489n.spec.containers{otc-container}		Created	Created container otc-container		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:50 +0200 CEST	Normal	Pod otel-collector-589c5dffcd-n489n.spec.containers{otc-container}		Started	Started container otc-container		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:53 +0200 CEST	Normal	Pod tempo-redmetrics-distributor-79df65bc45-f2bgt		Scheduled	Successfully assigned kuttl-test-amazing-hyena/tempo-redmetrics-distributor-79df65bc45-f2bgt to crc-pbwlw-master-0 by crc-pbwlw-master-0		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:53 +0200 CEST	Normal	ReplicaSet.apps tempo-redmetrics-distributor-79df65bc45		SuccessfulCreate	Created pod: tempo-redmetrics-distributor-79df65bc45-f2bgt		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:53 +0200 CEST	Normal	Deployment.apps tempo-redmetrics-distributor		ScalingReplicaSet	Scaled up replica set tempo-redmetrics-distributor-79df65bc45 to 1		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:53 +0200 CEST	Normal	StatefulSet.apps tempo-redmetrics-ingester		SuccessfulCreate	create Claim data-tempo-redmetrics-ingester-0 Pod tempo-redmetrics-ingester-0 in StatefulSet tempo-redmetrics-ingester success		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:54 +0200 CEST	Normal	PersistentVolumeClaim data-tempo-redmetrics-ingester-0		WaitForFirstConsumer	waiting for first consumer to be created before binding		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:54 +0200 CEST	Normal	PersistentVolumeClaim data-tempo-redmetrics-ingester-0		ExternalProvisioning	waiting for a volume to be created, either by external provisioner "kubevirt.io.hostpath-provisioner" or manually created by system administrator		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:54 +0200 CEST	Normal	PersistentVolumeClaim data-tempo-redmetrics-ingester-0		Provisioning	External provisioner is provisioning volume for claim "kuttl-test-amazing-hyena/data-tempo-redmetrics-ingester-0"		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:54 +0200 CEST	Normal	PersistentVolumeClaim data-tempo-redmetrics-ingester-0		ProvisioningSucceeded	Successfully provisioned volume pvc-a85c3954-fa0c-4091-b74f-719905a2e0c5		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:54 +0200 CEST	Normal	Pod tempo-redmetrics-compactor-77546ddf7d-wff29		Scheduled	Successfully assigned kuttl-test-amazing-hyena/tempo-redmetrics-compactor-77546ddf7d-wff29 to crc-pbwlw-master-0 by crc-pbwlw-master-0		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:54 +0200 CEST	Normal	ReplicaSet.apps tempo-redmetrics-compactor-77546ddf7d		SuccessfulCreate	Created pod: tempo-redmetrics-compactor-77546ddf7d-wff29		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:54 +0200 CEST	Normal	Deployment.apps tempo-redmetrics-compactor		ScalingReplicaSet	Scaled up replica set tempo-redmetrics-compactor-77546ddf7d to 1		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:54 +0200 CEST	Normal	StatefulSet.apps tempo-redmetrics-ingester		SuccessfulCreate	create Pod tempo-redmetrics-ingester-0 in StatefulSet tempo-redmetrics-ingester successful		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:54 +0200 CEST	Normal	Pod tempo-redmetrics-querier-f4c9c7b9b-d577b		Scheduled	Successfully assigned kuttl-test-amazing-hyena/tempo-redmetrics-querier-f4c9c7b9b-d577b to crc-pbwlw-master-0 by crc-pbwlw-master-0		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:54 +0200 CEST	Normal	ReplicaSet.apps tempo-redmetrics-querier-f4c9c7b9b		SuccessfulCreate	Created pod: tempo-redmetrics-querier-f4c9c7b9b-d577b		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:54 +0200 CEST	Normal	Deployment.apps tempo-redmetrics-querier		ScalingReplicaSet	Scaled up replica set tempo-redmetrics-querier-f4c9c7b9b to 1		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:54 +0200 CEST	Normal	Pod tempo-redmetrics-query-frontend-b869b4747-8zp2l		Scheduled	Successfully assigned kuttl-test-amazing-hyena/tempo-redmetrics-query-frontend-b869b4747-8zp2l to crc-pbwlw-master-0 by crc-pbwlw-master-0		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:54 +0200 CEST	Normal	ReplicaSet.apps tempo-redmetrics-query-frontend-b869b4747		SuccessfulCreate	Created pod: tempo-redmetrics-query-frontend-b869b4747-8zp2l		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:54 +0200 CEST	Normal	Deployment.apps tempo-redmetrics-query-frontend		ScalingReplicaSet	Scaled up replica set tempo-redmetrics-query-frontend-b869b4747 to 1		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:55 +0200 CEST	Normal	Pod tempo-redmetrics-ingester-0		Scheduled	Successfully assigned kuttl-test-amazing-hyena/tempo-redmetrics-ingester-0 to crc-pbwlw-master-0 by crc-pbwlw-master-0		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:56 +0200 CEST	Normal	Pod tempo-redmetrics-compactor-77546ddf7d-wff29		AddedInterface	Add eth0 [10.217.0.158/23] from openshift-sdn		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:56 +0200 CEST	Normal	Pod tempo-redmetrics-compactor-77546ddf7d-wff29.spec.containers{tempo}		Pulled	Container image "docker.io/grafana/tempo:2.2.1" already present on machine		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:56 +0200 CEST	Normal	Pod tempo-redmetrics-distributor-79df65bc45-f2bgt		AddedInterface	Add eth0 [10.217.0.155/23] from openshift-sdn		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:56 +0200 CEST	Normal	Pod tempo-redmetrics-distributor-79df65bc45-f2bgt.spec.containers{tempo}		Pulled	Container image "docker.io/grafana/tempo:2.2.1" already present on machine		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:56 +0200 CEST	Normal	Pod tempo-redmetrics-distributor-79df65bc45-f2bgt.spec.containers{tempo}		Created	Created container tempo		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:56 +0200 CEST	Normal	Pod tempo-redmetrics-distributor-79df65bc45-f2bgt.spec.containers{tempo}		Started	Started container tempo		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:56 +0200 CEST	Normal	Pod tempo-redmetrics-ingester-0		AddedInterface	Add eth0 [10.217.0.159/23] from openshift-sdn		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:56 +0200 CEST	Normal	Pod tempo-redmetrics-ingester-0.spec.containers{tempo}		Pulled	Container image "docker.io/grafana/tempo:2.2.1" already present on machine		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:56 +0200 CEST	Normal	Pod tempo-redmetrics-querier-f4c9c7b9b-d577b		AddedInterface	Add eth0 [10.217.0.156/23] from openshift-sdn		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:56 +0200 CEST	Normal	Pod tempo-redmetrics-querier-f4c9c7b9b-d577b.spec.containers{tempo}		Pulled	Container image "docker.io/grafana/tempo:2.2.1" already present on machine		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:56 +0200 CEST	Normal	Pod tempo-redmetrics-querier-f4c9c7b9b-d577b.spec.containers{tempo}		Created	Created container tempo		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:56 +0200 CEST	Normal	Pod tempo-redmetrics-querier-f4c9c7b9b-d577b.spec.containers{tempo}		Started	Started container tempo		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:56 +0200 CEST	Normal	Pod tempo-redmetrics-query-frontend-b869b4747-8zp2l		AddedInterface	Add eth0 [10.217.0.157/23] from openshift-sdn		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:56 +0200 CEST	Normal	Pod tempo-redmetrics-query-frontend-b869b4747-8zp2l.spec.containers{tempo}		Pulled	Container image "docker.io/grafana/tempo:2.2.1" already present on machine		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:56 +0200 CEST	Normal	Pod tempo-redmetrics-query-frontend-b869b4747-8zp2l.spec.containers{tempo}		Created	Created container tempo		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:56 +0200 CEST	Normal	Pod tempo-redmetrics-query-frontend-b869b4747-8zp2l.spec.containers{tempo}		Started	Started container tempo		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:56 +0200 CEST	Normal	Pod tempo-redmetrics-query-frontend-b869b4747-8zp2l.spec.containers{tempo-query}		Pulled	Container image "docker.io/grafana/tempo-query:2.2.1" already present on machine		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:57 +0200 CEST	Normal	Pod tempo-redmetrics-compactor-77546ddf7d-wff29.spec.containers{tempo}		Created	Created container tempo		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:57 +0200 CEST	Normal	Pod tempo-redmetrics-compactor-77546ddf7d-wff29.spec.containers{tempo}		Started	Started container tempo		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:57 +0200 CEST	Normal	Pod tempo-redmetrics-ingester-0.spec.containers{tempo}		Created	Created container tempo		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:57 +0200 CEST	Normal	Pod tempo-redmetrics-ingester-0.spec.containers{tempo}		Started	Started container tempo		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:57 +0200 CEST	Normal	Pod tempo-redmetrics-query-frontend-b869b4747-8zp2l.spec.containers{tempo-query}		Created	Created container tempo-query		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:08:57 +0200 CEST	Normal	Pod tempo-redmetrics-query-frontend-b869b4747-8zp2l.spec.containers{tempo-query}		Started	Started container tempo-query		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:09:14 +0200 CEST	Warning	Pod tempo-redmetrics-compactor-77546ddf7d-wff29.spec.containers{tempo}		Unhealthy	Readiness probe failed: HTTP probe failed with statuscode: 503		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:09:15 +0200 CEST	Warning	Pod tempo-redmetrics-ingester-0.spec.containers{tempo}		Unhealthy	Readiness probe failed: HTTP probe failed with statuscode: 503		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:10:17 +0200 CEST	Normal	Deployment.apps hotrod		ScalingReplicaSet	Scaled up replica set hotrod-6cc9cbf64b to 1		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:10:18 +0200 CEST	Normal	Pod hotrod-6cc9cbf64b-pdsqh		Scheduled	Successfully assigned kuttl-test-amazing-hyena/hotrod-6cc9cbf64b-pdsqh to crc-pbwlw-master-0 by crc-pbwlw-master-0		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:10:18 +0200 CEST	Normal	ReplicaSet.apps hotrod-6cc9cbf64b		SuccessfulCreate	Created pod: hotrod-6cc9cbf64b-pdsqh		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:10:19 +0200 CEST	Normal	Pod hotrod-6cc9cbf64b-pdsqh		AddedInterface	Add eth0 [10.217.0.161/23] from openshift-sdn		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:10:19 +0200 CEST	Normal	Pod hotrod-6cc9cbf64b-pdsqh.spec.containers{hotrod}		Pulled	Container image "jaegertracing/example-hotrod:1.46.0" already present on machine		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:10:20 +0200 CEST	Normal	Pod hotrod-6cc9cbf64b-pdsqh.spec.containers{hotrod}		Created	Created container hotrod		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:10:20 +0200 CEST	Normal	Pod hotrod-6cc9cbf64b-pdsqh.spec.containers{hotrod}		Started	Started container hotrod		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:10:23 +0200 CEST	Normal	Pod hotrod-curl-gjtq9		Scheduled	Successfully assigned kuttl-test-amazing-hyena/hotrod-curl-gjtq9 to crc-pbwlw-master-0 by crc-pbwlw-master-0		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:10:23 +0200 CEST	Normal	Job.batch hotrod-curl		SuccessfulCreate	Created pod: hotrod-curl-gjtq9		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:10:26 +0200 CEST	Normal	Pod hotrod-curl-gjtq9		AddedInterface	Add eth0 [10.217.0.162/23] from openshift-sdn		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:10:26 +0200 CEST	Normal	Pod hotrod-curl-gjtq9.spec.containers{hotrod-curl}		Pulling	Pulling image "curlimages/curl"		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:10:27 +0200 CEST	Normal	Pod hotrod-curl-gjtq9.spec.containers{hotrod-curl}		Pulled	Successfully pulled image "curlimages/curl" in 1.920348042s		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:10:28 +0200 CEST	Normal	Pod hotrod-curl-gjtq9.spec.containers{hotrod-curl}		Created	Created container hotrod-curl		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:10:28 +0200 CEST	Normal	Pod hotrod-curl-gjtq9.spec.containers{hotrod-curl}		Started	Started container hotrod-curl		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:10:32 +0200 CEST	Normal	Job.batch hotrod-curl		Completed	Job completed		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:10:35 +0200 CEST	Normal	Pod verify-metrics-klhkc		Scheduled	Successfully assigned kuttl-test-amazing-hyena/verify-metrics-klhkc to crc-pbwlw-master-0 by crc-pbwlw-master-0		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:10:35 +0200 CEST	Normal	Job.batch verify-metrics		SuccessfulCreate	Created pod: verify-metrics-klhkc		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:10:37 +0200 CEST	Normal	Pod verify-metrics-klhkc		AddedInterface	Add eth0 [10.217.0.163/23] from openshift-sdn		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:10:37 +0200 CEST	Normal	Pod verify-metrics-klhkc.spec.containers{verify-metrics}		Pulled	Container image "registry.access.redhat.com/ubi9/ubi:9.1" already present on machine		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:10:38 +0200 CEST	Normal	Pod verify-metrics-klhkc.spec.containers{verify-metrics}		Created	Created container verify-metrics		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:10:38 +0200 CEST	Normal	Pod verify-metrics-klhkc.spec.containers{verify-metrics}		Started	Started container verify-metrics		
    logger.go:42: 13:10:42 | red-metrics | 2023-08-22 13:10:41 +0200 CEST	Normal	Job.batch verify-metrics		Completed	Job completed		
    logger.go:42: 13:10:42 | red-metrics | Deleting namespace: kuttl-test-amazing-hyena
=== CONT  kuttl
    harness.go:405: run tests finished
    harness.go:513: cleaning up
    harness.go:570: removing temp folder: ""
--- PASS: kuttl (209.07s)
    --- PASS: kuttl/harness (0.00s)
        --- PASS: kuttl/harness/red-metrics (201.08s)
PASS

@pavolloffay merged commit 754a9ad into grafana:main on Aug 23, 2023
11 checks passed
Linked issue: Add support for monitor tab to Jaeger-console · 4 participants