
[datasetexporter]: Upgrade library to 0.17.0 #29446

Merged
merged 17 commits into from
Nov 28, 2023

Conversation

martin-majlis-s1
Contributor

@martin-majlis-s1 martin-majlis-s1 commented Nov 22, 2023

Description: Upgrade to a new version of the library.

This PR implements the following issues:

Another change is that fields specified as part of the group_by configuration are now transferred as part of the session info.

Link to tracking Issue: #27650, #27652

Testing:

  1. Build docker image - make docker-otelcontribcol
  2. Checkout https://github.com/open-telemetry/opentelemetry-demo
  3. Update configuration in docker-compose.yaml and in the src/otelcollector/otelcol-config.yml:
  • In docker-compose.yaml switch the image to the one newly built in step 1
  • In docker-compose.yaml enable feature gate for collecting metrics - --feature-gates=telemetry.useOtelForInternalMetrics
  • In src/otelcollector/otelcol-config.yml enable metrics scraping by prometheus
  • In src/otelcollector/otelcol-config.yml add configuration for dataset
diff --git a/docker-compose.yml b/docker-compose.yml
index 001f7c8..d7edd0d 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -646,14 +646,16 @@ services:

   # OpenTelemetry Collector
   otelcol:
-    image: otel/opentelemetry-collector-contrib:0.86.0
+    image: otelcontribcol:latest
     container_name: otel-col
     deploy:
       resources:
         limits:
           memory: 125M
     restart: unless-stopped
-    command: [ "--config=/etc/otelcol-config.yml", "--config=/etc/otelcol-config-extras.yml" ]
+    command: [ "--config=/etc/otelcol-config.yml", "--config=/etc/otelcol-config-extras.yml", "--feature-gates=telemetry.useOtelForInternalMetrics" ]
     volumes:
       - ./src/otelcollector/otelcol-config.yml:/etc/otelcol-config.yml
       - ./src/otelcollector/otelcol-config-extras.yml:/etc/otelcol-config-extras.yml
diff --git a/src/otelcollector/otelcol-config.yml b/src/otelcollector/otelcol-config.yml
index f2568ae..9944562 100644
--- a/src/otelcollector/otelcol-config.yml
+++ b/src/otelcollector/otelcol-config.yml
@@ -15,6 +15,14 @@ receivers:
     targets:
       - endpoint: http://frontendproxy:${env:ENVOY_PORT}

+  prometheus:
+    config:
+      scrape_configs:
+        - job_name: 'otel-collector'
+          scrape_interval: 5s
+          static_configs:
+            - targets: ['0.0.0.0:8888']
+
 exporters:
   debug:
   otlp:
@@ -29,6 +37,22 @@ exporters:
     endpoint: "http://prometheus:9090/api/v1/otlp"
     tls:
       insecure: true
+  logging:
+  dataset:
+    api_key: API_KEY
+    dataset_url: https://SERVER.scalyr.com
+    debug: true
+    buffer:
+      group_by:
+        - resource_name
+        - resource_type
+    logs:
+      export_resource_info_on_event: true
+    server_host:
+      server_host: Martin
+      use_hostname: false
+  dataset/aaa:
+    api_key: API_KEY
+    dataset_url: https://SERVER.scalyr.com
+    debug: true
+    buffer:
+      group_by:
+        - resource_name
+        - resource_type
+    logs:
+      export_resource_info_on_event: true
+    server_host:
+      server_host: MartinAAA
+      use_hostname: false

 processors:
   batch:
@@ -47,6 +71,11 @@ processors:
           - set(description, "") where name == "queueSize"
           # FIXME: remove when this issue is resolved: https://github.com/open-telemetry/opentelemetry-python-contrib/issues/1958
           - set(description, "") where name == "http.client.duration"
+  attributes:
+    actions:
+      - key: otel.demo
+        value: 29446
+        action: upsert

 connectors:
   spanmetrics:
@@ -55,13 +84,13 @@ service:
   pipelines:
     traces:
       receivers: [otlp]
-      processors: [batch]
-      exporters: [otlp, debug, spanmetrics]
+      processors: [batch, attributes]
+      exporters: [otlp, debug, spanmetrics, dataset, dataset/aaa]
     metrics:
-      receivers: [httpcheck/frontendproxy, otlp, spanmetrics]
+      receivers: [httpcheck/frontendproxy, otlp, spanmetrics, prometheus]
       processors: [filter/ottl, transform, batch]
       exporters: [otlphttp/prometheus, debug]
     logs:
       receivers: [otlp]
-      processors: [batch]
-      exporters: [otlp/logs, debug]
+      processors: [batch, attributes]
+      exporters: [otlp/logs, debug, dataset, dataset/aaa]
  4. Run the demo - docker compose up --abort-on-container-exit
  5. Check that metrics are in Grafana - http://localhost:8080/grafana/explore?
     Screenshot 2023-11-27 at 12 29 29
  6. Check some metrics
     ![Screenshot 2023-11-22 at 14 06 56](https://github.com/open-telemetry/opentelemetry-collector-contrib/assets/122797378/81306486-eb5e-49b1-87ed-25d1eb8afcf8)
     Screenshot 2023-11-27 at 12 59 10
  7. Check that the data are available in DataSet
     ![Screenshot 2023-11-22 at 13 33 50](https://github.com/open-telemetry/opentelemetry-collector-contrib/assets/122797378/77cb2f31-be14-463b-91a7-fd10f8dbfe3a)

Documentation:

Library changes:

@andrzej-stencel
Member

andrzej-stencel commented Nov 24, 2023

I'm worried that these metrics don't look like other metrics exposed by other components of the collector:

$ curl localhost:8888/metrics
# HELP otelcol_dataset_eventsEnqueued 
# TYPE otelcol_dataset_eventsEnqueued gauge
otelcol_dataset_eventsEnqueued 31
# HELP otelcol_dataset_eventsProcessed 
# TYPE otelcol_dataset_eventsProcessed gauge
otelcol_dataset_eventsProcessed 31
(...)
  1. The metric names are partially snake_case and partially camelCase, while other Otelcol metrics only use snake_case.
  2. The metrics don't have any attributes. What happens if I have two distinct exporter instances configured?

I wasn't able to find any guidelines on defining metrics in components, but looking at other components' metrics, I think the metrics from the DataSet exporter should have names like:

  • otelcol_exporter_dataset_events_enqueued
  • otelcol_exporter_dataset_events_processed
  • etc.
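The camelCase-to-snake_case rename suggested above can be sketched as a small stdlib-only helper (illustrative only; `toSnakeCase` is a hypothetical function, not part of the exporter):

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// toSnakeCase converts a camelCase metric name segment to snake_case,
// e.g. "eventsEnqueued" -> "events_enqueued".
func toSnakeCase(s string) string {
	var b strings.Builder
	for i, r := range s {
		if unicode.IsUpper(r) {
			if i > 0 {
				b.WriteByte('_')
			}
			b.WriteRune(unicode.ToLower(r))
		} else {
			b.WriteRune(r)
		}
	}
	return b.String()
}

func main() {
	// Suggested rename: otelcol_dataset_eventsEnqueued ->
	// otelcol_exporter_dataset_events_enqueued
	fmt.Println("otelcol_exporter_dataset_" + toSnakeCase("eventsEnqueued"))
}
```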

martin-majlis-s1 and others added 4 commits November 27, 2023 10:05
Co-authored-by: Andrzej Stencel <astencel@sumologic.com>
Co-authored-by: Andrzej Stencel <astencel@sumologic.com>
Co-authored-by: Andrzej Stencel <astencel@sumologic.com>
@martin-majlis-s1
Contributor Author

> I'm worried that these metrics don't look like other metrics exposed by other components of the collector:
>
> $ curl localhost:8888/metrics
> # HELP otelcol_dataset_eventsEnqueued 
> # TYPE otelcol_dataset_eventsEnqueued gauge
> otelcol_dataset_eventsEnqueued 31
> # HELP otelcol_dataset_eventsProcessed 
> # TYPE otelcol_dataset_eventsProcessed gauge
> otelcol_dataset_eventsProcessed 31
> (...)
>   1. The metric names are partially snake_case and partially camelCase, while other Otelcol metrics only use snake_case.
>   2. The metrics don't have any attributes. What happens if I have two distinct exporter instances configured?
>
> I wasn't able to find any guidelines on defining metrics in components, but looking at other components' metrics, I think the metrics from the DataSet exporter should have the names e.g.:
>
>   • otelcol_exporter_dataset_events_enqueued
>   • otelcol_exporter_dataset_events_processed
>   • etc.

Thanks for the review.

Regarding the name otelcol_dataset_eventsEnqueued: the dataset_eventsEnqueued part comes from the library and the otelcol part from the collector. I do not know why the meter name - meter := set.MeterProvider.Meter("datasetexporter") - is ignored - https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/29446/files#diff-13b1a714fd20c78ebacc903ae38d6b3118598c1580b3983c14f3845a4fecfb1fR50

I will solve it differently and introduce those attributes.

@martin-majlis-s1 martin-majlis-s1 changed the title [datasetexporter]: Upgrade library to 0.16.0 [datasetexporter]: Upgrade library to 0.17.0 Nov 27, 2023
@martin-majlis-s1
Contributor Author

@astencel-sumo : I have updated the library - scalyr/dataset-go#63.

I have also updated some screenshots in the description to show the functionality.

@mx-psi
Member

mx-psi commented Nov 27, 2023

datasetexporter/datasetexporter.go:9: File is not `gci`-ed with --skip-generated -s standard -s default -s prefix(github.com/open-telemetry/opentelemetry-collector-contrib) (gci)
	"github.com/scalyr/dataset-go/pkg/meter_config"
datasetexporter/datasetexporter.go:17: File is not `gci`-ed with --skip-generated -s standard -s default -s prefix(github.com/open-telemetry/opentelemetry-collector-contrib) (gci)
	"github.com/scalyr/dataset-go/pkg/client"
datasetexporter/datasetexporter.go:14: File is not `goimports`-ed with -local github.com/open-telemetry/opentelemetry-collector-contrib (goimports)

Contributor

@codeboten codeboten left a comment


Change looks good, just one question

@@ -182,14 +186,15 @@ func (c *Config) Validate() error {
func (c *Config) String() string {
s := ""
s += fmt.Sprintf("%s: %s; ", "DatasetURL", c.DatasetURL)
s += fmt.Sprintf("%s: %s (%d); ", "APIKey", strings.Repeat("*", len(c.APIKey)), len(c.APIKey))
Contributor


a configopaque.String will by default be replaced with [Redacted], is there a reason to override the behaviour here?

Contributor Author


@codeboten : Thanks. But when I tried it out, it returns the real value. I would have to use apiKey, _ := c.APIKey.MarshalText() - which is not much better. I will replace it so it's more consistent with other places.

--- FAIL: TestConfigString (0.00s)
    config_test.go:143:
                Error Trace:    /Users/martin.majlis/development/scalyr-org/opentelemetry-collector-contrib/exporter/datasetexporter/config_test.go:143
                Error:          Not equal:
                                expected: "DatasetURL: https://example.com; APIKey: ****** (6); Debug: true; BufferSettings: {MaxLifetime:123ns GroupBy:[field1 field2] RetryInitialInterval:0s RetryMaxInterval:0s RetryMaxElapsedTime:0s RetryShutdownTimeout:0s}; LogsSettings: {ExportResourceInfo:true ExportResourcePrefix:AAA ExportScopeInfo:true ExportScopePrefix:BBB DecomposeComplexMessageField:true DecomposedComplexMessagePrefix:EEE exportSettings:{ExportSeparator:CCC ExportDistinguishingSuffix:DDD}}; TracesSettings: {exportSettings:{ExportSeparator:TTT ExportDistinguishingSuffix:UUU}}; ServerHostSettings: {UseHostName:false ServerHost:foo-bar}; RetrySettings: {Enabled:true InitialInterval:5s RandomizationFactor:0.5 Multiplier:1.5 MaxInterval:30s MaxElapsedTime:5m0s}; QueueSettings: {Enabled:true NumConsumers:10 QueueSize:1000 StorageID:<nil>}; TimeoutSettings: {Timeout:5s}"
                                actual  : "DatasetURL: https://example.com; APIKey: secret (6); Debug: true; BufferSettings: {MaxLifetime:123ns GroupBy:[field1 field2] RetryInitialInterval:0s RetryMaxInterval:0s RetryMaxElapsedTime:0s RetryShutdownTimeout:0s}; LogsSettings: {ExportResourceInfo:true ExportResourcePrefix:AAA ExportScopeInfo:true ExportScopePrefix:BBB DecomposeComplexMessageField:true DecomposedComplexMessagePrefix:EEE exportSettings:{ExportSeparator:CCC ExportDistinguishingSuffix:DDD}}; TracesSettings: {exportSettings:{ExportSeparator:TTT ExportDistinguishingSuffix:UUU}}; ServerHostSettings: {UseHostName:false ServerHost:foo-bar}; RetrySettings: {Enabled:true InitialInterval:5s RandomizationFactor:0.5 Multiplier:1.5 MaxInterval:30s MaxElapsedTime:5m0s}; QueueSettings: {Enabled:true NumConsumers:10 QueueSize:1000 StorageID:<nil>}; TimeoutSettings: {Timeout:5s}"

                                Diff:
                                --- Expected
                                +++ Actual
                                @@ -1 +1 @@
                                -DatasetURL: https://example.com; APIKey: ****** (6); Debug: true; BufferSettings: {MaxLifetime:123ns GroupBy:[field1 field2] RetryInitialInterval:0s RetryMaxInterval:0s RetryMaxElapsedTime:0s RetryShutdownTimeout:0s}; LogsSettings: {ExportResourceInfo:true ExportResourcePrefix:AAA ExportScopeInfo:true ExportScopePrefix:BBB DecomposeComplexMessageField:true DecomposedComplexMessagePrefix:EEE exportSettings:{ExportSeparator:CCC ExportDistinguishingSuffix:DDD}}; TracesSettings: {exportSettings:{ExportSeparator:TTT ExportDistinguishingSuffix:UUU}}; ServerHostSettings: {UseHostName:false ServerHost:foo-bar}; RetrySettings: {Enabled:true InitialInterval:5s RandomizationFactor:0.5 Multiplier:1.5 MaxInterval:30s MaxElapsedTime:5m0s}; QueueSettings: {Enabled:true NumConsumers:10 QueueSize:1000 StorageID:<nil>}; TimeoutSettings: {Timeout:5s}
                                +DatasetURL: https://example.com; APIKey: secret (6); Debug: true; BufferSettings: {MaxLifetime:123ns GroupBy:[field1 field2] RetryInitialInterval:0s RetryMaxInterval:0s RetryMaxElapsedTime:0s RetryShutdownTimeout:0s}; LogsSettings: {ExportResourceInfo:true ExportResourcePrefix:AAA ExportScopeInfo:true ExportScopePrefix:BBB DecomposeComplexMessageField:true DecomposedComplexMessagePrefix:EEE exportSettings:{ExportSeparator:CCC ExportDistinguishingSuffix:DDD}}; TracesSettings: {exportSettings:{ExportSeparator:TTT ExportDistinguishingSuffix:UUU}}; ServerHostSettings: {UseHostName:false ServerHost:foo-bar}; RetrySettings: {Enabled:true InitialInterval:5s RandomizationFactor:0.5 Multiplier:1.5 MaxInterval:30s MaxElapsedTime:5m0s}; QueueSettings: {Enabled:true NumConsumers:10 QueueSize:1000 StorageID:<nil>}; TimeoutSettings: {Timeout:5s}
                Test:           TestConfigString
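For reference, the `****** (6)` form that the expected test output above uses corresponds to masking like this (a minimal sketch; `maskAPIKey` is a hypothetical helper name, not the exporter's actual function):

```go
package main

import (
	"fmt"
	"strings"
)

// maskAPIKey renders an API key as asterisks plus its length,
// matching the "****** (6)" form in the expected test output.
func maskAPIKey(key string) string {
	return fmt.Sprintf("%s (%d)", strings.Repeat("*", len(key)), len(key))
}

func main() {
	fmt.Println(maskAPIKey("secret")) // prints "****** (6)"
}
```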

Contributor

@codeboten codeboten Nov 28, 2023


But when I have tried it out - it's returning the real value.

This is surprising to me... maybe @mx-psi can help here. Will not block this PR on this though.

@martin-majlis-s1
Contributor Author

The failing test is related to the k8sobserver:

WARNING: DATA RACE
Write at 0x00c0004edbf0 by goroutine 90:
  github.com/open-telemetry/opentelemetry-collector-contrib/internal/k8sconfig.CreateRestConfig.func1()
      /home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/internal/k8sconfig/config.go:117 +0x5c
  k8s.io/client-go/transport.HTTPWrappersForConfig()
      /home/runner/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/round_trippers.go:41 +0xcb
  k8s.io/client-go/transport.New()
      /home/runner/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/transport.go:60 +0x304
  k8s.io/client-go/rest.TransportFor()
      /home/runner/go/pkg/mod/k8s.io/client-go@v0.28.4/rest/transport.go:68 +0x64
  k8s.io/client-go/rest.HTTPClientFor()
      /home/runner/go/pkg/mod/k8s.io/client-go@v0.28.4/rest/transport.go:33 +0x30
  k8s.io/client-go/kubernetes.NewForConfig()
      /home/runner/go/pkg/mod/k8s.io/client-go@v0.28.4/kubernetes/clientset.go:468 +0xa7
  github.com/open-telemetry/opentelemetry-collector-contrib/internal/k8sconfig.MakeClient()
      /home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/internal/k8sconfig/config.go:136 +0x111
  github.com/open-telemetry/opentelemetry-collector-contrib/extension/observer/k8sobserver.newObserver()
      /home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/extension/observer/k8sobserver/extension.go:84 +0xb9
  github.com/open-telemetry/opentelemetry-collector-contrib/extension/observer/k8sobserver.TestExtensionObservePods()
      /home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/extension/observer/k8sobserver/extension_test.go:138 +0x3a4
  testing.tRunner()
      /opt/hostedtoolcache/go/1.20.11/x64/src/testing/testing.go:1576 +0x216
  testing.(*T).Run.func1()
      /opt/hostedtoolcache/go/1.20.11/x64/src/testing/testing.go:1629 +0x47

Previous read at 0x00c0004edbf0 by goroutine 78:
  net/http.(*Transport).connectMethodForRequest()
      /opt/hostedtoolcache/go/1.20.11/x64/src/net/http/transport.go:843 +0xe6
  net/http.(*Transport).roundTrip()
      /opt/hostedtoolcache/go/1.20.11/x64/src/net/http/transport.go:580 +0xc46
  net/http.(*Transport).RoundTrip()
      /opt/hostedtoolcache/go/1.20.11/x64/src/net/http/roundtrip.go:17 +0x36
  k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip()
      /home/runner/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/round_trippers.go:168 +0x4e3
  net/http.send()
      /opt/hostedtoolcache/go/1.20.11/x64/src/net/http/client.go:252 +0x942
  net/http.(*Client).send()
      /opt/hostedtoolcache/go/1.20.11/x64/src/net/http/client.go:176 +0x164
  net/http.(*Client).do()
      /opt/hostedtoolcache/go/1.20.11/x64/src/net/http/client.go:716 +0x10de
  net/http.(*Client).Do()
      /opt/hostedtoolcache/go/1.20.11/x64/src/net/http/client.go:582 +0x566
  k8s.io/client-go/rest.(*Request).request()
      /home/runner/go/pkg/mod/k8s.io/client-go@v0.28.4/rest/request.go:1023 +0x2e1
  k8s.io/client-go/rest.(*Request).Do()
      /home/runner/go/pkg/mod/k8s.io/client-go@v0.28.4/rest/request.go:1063 +0xef
  k8s.io/client-go/tools/cache.NewFilteredListWatchFromClient.func1()
      /home/runner/go/pkg/mod/k8s.io/client-go@v0.28.4/tools/cache/listwatch.go:87 +0x224
  k8s.io/client-go/tools/cache.(*ListWatch).List()
      /home/runner/go/pkg/mod/k8s.io/client-go@v0.28.4/tools/cache/listwatch.go:106 +0xb4
  k8s.io/client-go/tools/cache.(*Reflector).list.func1.2()
      /home/runner/go/pkg/mod/k8s.io/client-go@v0.28.4/tools/cache/reflector.go:488 +0xbc
  k8s.io/client-go/tools/pager.SimplePageFunc.func1()
      /home/runner/go/pkg/mod/k8s.io/client-go@v0.28.4/tools/pager/pager.go:40 +0x74
  k8s.io/client-go/tools/pager.(*ListPager).list()
      /home/runner/go/pkg/mod/k8s.io/client-go@v0.28.4/tools/pager/pager.go:108 +0x207
  k8s.io/client-go/tools/pager.(*ListPager).ListWithAlloc()
      /home/runner/go/pkg/mod/k8s.io/client-go@v0.28.4/tools/pager/pager.go:89 +0x373
  k8s.io/client-go/tools/cache.(*Reflector).list.func1()
      /home/runner/go/pkg/mod/k8s.io/client-go@v0.28.4/tools/cache/reflector.go:513 +0x2f4

Goroutine 90 (running) created at:
  testing.(*T).Run()
      /opt/hostedtoolcache/go/1.20.11/x64/src/testing/testing.go:1629 +0x805
  testing.runTests.func1()
      /opt/hostedtoolcache/go/1.20.11/x64/src/testing/testing.go:2036 +0x8d
  testing.tRunner()
      /opt/hostedtoolcache/go/1.20.11/x64/src/testing/testing.go:1576 +0x216
  testing.runTests()
      /opt/hostedtoolcache/go/1.20.11/x64/src/testing/testing.go:2034 +0x87c
  testing.(*M).Run()
      /opt/hostedtoolcache/go/1.20.11/x64/src/testing/testing.go:1906 +0xb44
  main.main()
      _testmain.go:114 +0x2fc

Goroutine 78 (finished) created at:
  k8s.io/client-go/tools/cache.(*Reflector).list()
      /home/runner/go/pkg/mod/k8s.io/client-go@v0.28.4/tools/cache/reflector.go:479 +0x705
  k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch()
      /home/runner/go/pkg/mod/k8s.io/client-go@v0.28.4/tools/cache/reflector.go:348 +0x338
  k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
      /home/runner/go/pkg/mod/k8s.io/client-go@v0.28.4/tools/cache/reflector.go:291 +0x44
  k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1()
      /home/runner/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:226 +0x48
  k8s.io/apimachinery/pkg/util/wait.BackoffUntil()
      /home/runner/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:227 +0xce
  k8s.io/client-go/tools/cache.(*Reflector).Run()
      /home/runner/go/pkg/mod/k8s.io/client-go@v0.28.4/tools/cache/reflector.go:290 +0x256
  k8s.io/client-go/tools/cache.(*Reflector).Run-fm()
      <autogenerated>:1 +0x44
  k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
      /home/runner/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/wait.go:55 +0x3e
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
      /home/runner/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/wait.go:72 +0x73
==================
--- FAIL: TestExtensionObservePods (3.00s)
    testing.go:1446: race detected during execution of test
W1128 09:10:01.836811   18130 reflector.go:535] pkg/mod/k8s.io/client-go@v0.28.4/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://mock:12345/api/v1/pods?limit=500&resourceVersion=0": dial tcp: lookup mock on 127.0.0.53:53: server misbehaving
E1128 09:10:01.836920   18130 reflector.go:147] pkg/mod/k8s.io/client-go@v0.28.4/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://mock:12345/api/v1/pods?limit=500&resourceVersion=0": dial tcp: lookup mock on 127.0.0.53:53: server misbehaving
W1128 09:10:03.136882   18130 reflector.go:535] pkg/mod/k8s.io/client-go@v0.28.4/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://mock:12345/api/v1/pods?limit=500&resourceVersion=0": dial tcp: lookup mock on 127.0.0.53:53: server misbehaving
E1128 09:10:03.136975   18130 reflector.go:147] pkg/mod/k8s.io/client-go@v0.28.4/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://mock:12345/api/v1/pods?limit=500&resourceVersion=0": dial tcp: lookup mock on 127.0.0.53:53: server misbehaving
FAIL

@andrzej-stencel
Member

> @astencel-sumo : I have updated the library - scalyr/dataset-go#63.
>
> I have also updated some screenshots in the description to show the functionality.

This is what I'm now seeing in the metrics:

otelcol_dataset_events_enqueued{entity="logs",name="ds1"} 31
otelcol_dataset_events_enqueued{entity="logs",name="ds2"} 31
otelcol_dataset_events_processed{entity="logs",name="ds1"} 31
otelcol_dataset_events_processed{entity="logs",name="ds2"} 31

This is when I have two exporters defined like this:

exporters:
  dataset/ds1:
    api_key: abc123456
    dataset_url: http://localhost:1234
  dataset/ds2:
    api_key: def
    dataset_url: http://localhost:5678
service:
  pipelines:
    logs/p1:
      exporters:
        - dataset/ds1
        - dataset/ds2

It's not ideal, but the exporter names are there in some form. If you're prepared to possibly make changes to these metrics in the future, then I guess this is a step in the right direction.

We should probably also make it easier for components to expose metrics in the right format, and have that documented.
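The per-instance `name` label shown in the metrics above can be checked mechanically; a stdlib-only sketch (the label parsing is deliberately simplified and `exporterNames` is a hypothetical helper):

```go
package main

import (
	"fmt"
	"regexp"
)

// nameLabel matches the `name` label in a Prometheus exposition line.
var nameLabel = regexp.MustCompile(`name="([^"]+)"`)

// exporterNames extracts the value of the `name` label from each line,
// so two distinct dataset exporter instances can be told apart.
func exporterNames(lines []string) []string {
	var names []string
	for _, l := range lines {
		if m := nameLabel.FindStringSubmatch(l); m != nil {
			names = append(names, m[1])
		}
	}
	return names
}

func main() {
	sample := []string{
		`otelcol_dataset_events_enqueued{entity="logs",name="ds1"} 31`,
		`otelcol_dataset_events_enqueued{entity="logs",name="ds2"} 31`,
	}
	fmt.Println(exporterNames(sample)) // prints "[ds1 ds2]"
}
```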

@andrzej-stencel andrzej-stencel added the ready to merge Code review completed; ready to merge by maintainers label Nov 28, 2023
Labels
  • cmd/configschema (configschema command)
  • cmd/otelcontribcol (otelcontribcol command)
  • exporter/dataset
  • ready to merge (Code review completed; ready to merge by maintainers)