Histogram metric with cumulative temporality #710

Closed
alexandrachakeres-wk opened this issue Mar 15, 2024 · 19 comments

Comments

@alexandrachakeres-wk commented Mar 15, 2024

We're trying to use the experimental metrics to instrument a histogram metric for http.server.duration. I've got the histogram collecting data and printing to the console nicely (when I call :otel_meter_server.force_flush()), but I can't seem to set cumulative as the temporality for that metric.

Even when I have this config set (based on y'all's OtelMetricTests.setup/0):

config :opentelemetry_experimental, :readers, [
  %{
    module: :otel_metric_reader,
    config: %{
      exporter: {:otel_metric_exporter_console, {:metric, self()}},
      default_temporality_mapping: %{
        # <unrelated keys omitted in this description>
        # this was :temporality_delta in the linked test
        histogram: :temporality_cumulative
      }
    }
  }
]

When I run OpenTelemetryAPIExperimental.Histogram.create/2, I still get the following (notice the :temporality_delta):

{:instrument, :otel_meter_default,
 {:otel_meter_default,
  {:meter, :otel_meter_default,
   {:instrumentation_scope, "myapp", "0.0.1", :undefined},
   :otel_meter_provider_global, :instruments_otel_meter_provider_global,
   :view_aggregations_otel_meter_provider_global,
   :metrics_otel_meter_provider_global}}, :"http.server.duration",
 "measures the duration of the inbound HTTP request", :histogram, :ms,
 :temporality_delta, :undefined, :undefined}

I'm an Elixir person, so parsing the Erlang code is a bit tough for me, but based on the fact that otel_meter:create_histogram calls create_instrument/4 (rather than create_instrument/6), it seems like it'll only ever use delta temporality the way things are currently configured. Am I interpreting that correctly?

Instead of using OpenTelemetryAPIExperimental.Histogram, it looks like I can use the following code to create a cumulative histogram, but I'm not sure what to put for the callback/callback args (these were copied from the test for an ObservableCounter).

{meter_module, _} = meter = :opentelemetry_experimental.get_meter()
meter_module.create_instrument(
  meter,
  :"http.server.duration",
  :histogram, 
  fn _ -> {3, %{}} end,
  [],
  %{description: "measures the duration of the inbound HTTP request", unit: :ms}
)

The callback doesn't really seem relevant to histograms though, based on these docs. And I'm no longer seeing the histogram metrics logged to the console when creating the histogram in this way. I'm wondering if I should even try something like the workaround above to set the histogram to cumulative temporality, or if it's sketchy and I should just give up and switch to Prometheus' package to export metrics for the time being. I did find a test case from this repo that seems to test the export of histograms with cumulative temporality, so I'm having a hard time giving up hope 😅 .

In order for us to import metrics into Prometheus/Grafana for burndown charts and so forth, the temporality of this histogram metric must be cumulative. We haven't had any trouble setting histogram temporality to cumulative in other languages' OpenTelemetry instrumentation (Go, Java, Python), so it seems like Elixir/Erlang should offer that option as well. This is part of a large company-wide initiative, but we're getting stuck on our Elixir repo.

Thanks so much for the work you've put into this so far!

@alexandrachakeres-wk (Author) commented Mar 15, 2024

Looks like the PR that added converting an instrument of delta temporality to cumulative was merged about a month ago, but the last release was about a year ago. So the workaround above might work on a fork (at the current main branch), but not with the actual current Hex releases 😢. I'll give up for now and go the Prometheus route. But I still think the ability to create a histogram with cumulative temporality (via a much less sketchy API than my workaround above) would be a great feature at some point! 😄 Maybe with the latest code it's as simple as setting histogram: :temporality_cumulative in the default_temporality_mapping config and that's already done; please close this out if so! Thanks for listening to me ramble through this exploration.

@tsloughter (Member)

I'd love it if you would try the latest main! I've been meaning to make a hex release but haven't gotten around to it. If that is a blocker to trying it I'll get on it very soon.

@tsloughter (Member)

Getting testers on the latest experimental code is really important. I want to get it approved by the Technical Committee for a GA release soon, but I feel it could use more usage, and feedback on the API, before it gets moved out of experimental into the main SDK application.

@alexandrachakeres-wk (Author) commented Mar 15, 2024

Yeah we're not allowed to use forks so having a hex release would be nice. I took a 5 minute pass at using the main branch locally just to see if I could get cumulative histogram metrics printing to the console, but I'm running up against a couple different errors just trying to start the app.

I changed my deps to include the following:

      {:opentelemetry_exporter,
       git: "https://github.com/open-telemetry/opentelemetry-erlang.git", branch: "main"},
      {:opentelemetry,
       git: "https://github.com/open-telemetry/opentelemetry-erlang.git", branch: "main"},
      {:opentelemetry_api,
       git: "https://github.com/open-telemetry/opentelemetry-erlang.git",
       branch: "main",
       override: true},
      {:opentelemetry_phoenix, "~> 1.2"},
      {:opentelemetry_cowboy, "~> 0.3"},
      {:opentelemetry_api_experimental,
       git: "https://github.com/open-telemetry/opentelemetry-erlang.git", branch: "main"},
      {:opentelemetry_experimental,
       git: "https://github.com/open-telemetry/opentelemetry-erlang.git", branch: "main"},

I needed the override on opentelemetry_api to resolve this mix deps.get error:

Dependencies have diverged:
* opentelemetry_api (https://github.com/open-telemetry/opentelemetry-erlang.git - origin/main)
  the dependency opentelemetry_api in mix.exs is overriding a child dependency:

  > In mix.exs:
    {:opentelemetry_api, [env: :prod, git: "https://github.com/open-telemetry/opentelemetry-erlang.git", branch: "main"]}

  > In deps/opentelemetry_process_propagator/mix.exs:
    {:opentelemetry_api, "~> 1.0", [env: :prod, hex: "opentelemetry_api", repo: "hexpm", optional: false]}

  Ensure they match or specify one of the above in your deps and set "override: true"
** (Mix) Can't continue due to errors on dependencies

But now I get this error when attempting to start my app:

==> otel_elixir_tests
Generated otel_elixir_tests app
==> opentelemetry_telemetry
Compiling 1 file (.erl)
src/otel_telemetry.erl:4:14: can't find include lib "opentelemetry_api/include/opentelemetry.hrl"
%    4| -include_lib("opentelemetry_api/include/opentelemetry.hrl").
%     |              ^

src/otel_telemetry.erl:136:36: undefined macro 'OTEL_STATUS_ERROR'
%  136|     Status = opentelemetry:status(?OTEL_STATUS_ERROR, atom_to_binary(Reason, utf8)),
%     |                                    ^

src/otel_telemetry.erl:6:2: function handle_event/4 undefined
%    6| -export([
%     |  ^

could not compile dependency :opentelemetry_telemetry, "mix compile" failed. Errors may have been logged above. You can recompile this dependency with "mix deps.compile opentelemetry_telemetry", update it with "mix deps.update opentelemetry_telemetry" or clean it with "mix deps.clean opentelemetry_telemetry"

@tsloughter (Member)

I've just published new hex packages for everything.

@tsloughter (Member)

For future reference, you need to specify the subdir the application is in when using a git dep in mix.

So instead of:

{:opentelemetry,
       git: "https://github.com/open-telemetry/opentelemetry-erlang.git", branch: "main"},

You'd need:

{:opentelemetry,
  git: "https://github.com/open-telemetry/opentelemetry-erlang.git",
  sparse: "apps/opentelemetry", branch: "main"},

See: https://github.com/open-telemetry/opentelemetry-erlang?tab=readme-ov-file#git-dependencies
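Putting that together, the full set of git deps from earlier would look something like this (a sketch based on the linked README; the sparse paths match the apps/ directories in the repo, but verify each entry against the README before relying on it):

```elixir
# Sketch: each git dep pointing at its apps/ subdirectory via sparse checkout.
{:opentelemetry_api,
 git: "https://github.com/open-telemetry/opentelemetry-erlang.git",
 sparse: "apps/opentelemetry_api", branch: "main", override: true},
{:opentelemetry,
 git: "https://github.com/open-telemetry/opentelemetry-erlang.git",
 sparse: "apps/opentelemetry", branch: "main"},
{:opentelemetry_exporter,
 git: "https://github.com/open-telemetry/opentelemetry-erlang.git",
 sparse: "apps/opentelemetry_exporter", branch: "main"},
{:opentelemetry_api_experimental,
 git: "https://github.com/open-telemetry/opentelemetry-erlang.git",
 sparse: "apps/opentelemetry_api_experimental", branch: "main"},
{:opentelemetry_experimental,
 git: "https://github.com/open-telemetry/opentelemetry-erlang.git",
 sparse: "apps/opentelemetry_experimental", branch: "main"},
```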

@alexandrachakeres-wk (Author)

Thanks for pushing the hex update!

With these deps...

      {:opentelemetry_exporter, "~> 1.7"},
      {:opentelemetry, "~> 1.4"},
      {:opentelemetry_api, "~> 1.3"},
      {:opentelemetry_phoenix, "~> 1.2"},
      {:opentelemetry_cowboy, "~> 0.3"},
      {:opentelemetry_api_experimental, "~> 0.5"},
      {:opentelemetry_experimental, "~> 0.5"},

Everything starts up fine. But if I create the histogram like this, I'm still getting delta temporality despite setting histogram: :temporality_cumulative in my config (same as shown in the original posting).

    Histogram.create(@instrument_name, %{
      description: "measures the duration of the inbound HTTP request",
      unit: :ms
    })

Returns an instrument with :temporality_delta

If I use this weirdness instead

    {meter_module, _} = meter = :opentelemetry_experimental.get_meter()

    meter_module.create_instrument(
      meter,
      @instrument_name,
      :histogram,
      fn _ -> {3, %{}} end,
      [],
      %{description: "measures the duration of the inbound HTTP request", unit: :ms}
    )

It returns

{:instrument, :otel_meter_default,
 {:otel_meter_default,
  {:meter, :otel_meter_default, :undefined, :otel_meter_provider_global,
   :instruments_otel_meter_provider_global, :streams_otel_meter_provider_global,
   :metrics_otel_meter_provider_global, :exemplars_otel_meter_provider_global}},
 :"http.server.duration", "measures the duration of the inbound HTTP request",
 :histogram, :ms, :temporality_cumulative,
 #Function<1.12047456/1 in MyApp.Telemetry.CustomMetric.HTTPServerRequestDurationHistogram.start/0>,
 [], :undefined}

And when I run :otel_meter_server.force_flush(), it won't print any histogram metrics to the console (but that does work when I use the kosher Histogram.create). So everything from my testing perspective is still working the same as on the last release 😕 .

@tsloughter (Member)

Sorry, I should have taken more time with the original post to mention that there is no Observable Histogram. That's why the create_instrument call isn't getting you anything.

Can you show me what the returned Instrument from Histogram.create is?

@tsloughter (Member)

Again, sorry, hehe, it's early; I see you put that info in the first post :). I'm going to try to recreate this in Elixir. The only test for this right now is in the Erlang suite:

https://github.com/open-telemetry/opentelemetry-erlang/blob/main/apps/opentelemetry_experimental/test/otel_metrics_SUITE.erl#L189-L199

followed by:

https://github.com/open-telemetry/opentelemetry-erlang/blob/main/apps/opentelemetry_experimental/test/otel_metrics_SUITE.erl#L775-L843

@tsloughter (Member)

This test I am adding to the top level Elixir metric tests works:

    Application.put_env(:opentelemetry_experimental, :readers, [
      %{
        module: :otel_metric_reader,
        config: %{
          exporter: {:otel_metric_exporter_pid, {:metric, self()}},
          default_temporality_mapping: %{
            counter: :temporality_delta,
            observable_counter: :temporality_cumulative,
            updown_counter: :temporality_delta,
            observable_updowncounter: :temporality_cumulative,
            histogram: :temporality_cumulative,
            observable_gauge: :temporality_cumulative
          }
        }
      }
    ])
  test "create Histogram with macros" do
    Histogram.create(:histogram_a, %{unit: "1", description: "some histogram_a"})
    Histogram.record(:histogram_a, 1)

    :otel_meter_server.force_flush()

    assert_receive {:metric,
                    metric(
                      name: :histogram_a,
                      data: histogram(aggregation_temporality: :temporality_cumulative,
                        datapoints: [{:histogram_datapoint, %{}, _, _, 1, 1, [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                                      [0.0, 5.0, 10.0, 25.0, 50.0, 75.0, 100.0, 250.0, 500.0, 750.0, 1000.0, 2500.0, 5000.0, 7500.0, 10000.0], [], 0, 1, 1}])
                    )}

    Histogram.record(:histogram_a, 10)

    :otel_meter_server.force_flush()

    assert_receive {:metric,
                    metric(
                      name: :histogram_a,
                      data: histogram(aggregation_temporality: :temporality_cumulative,
                        datapoints: [{:histogram_datapoint, %{}, _, _, 2, 11, [0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                                      [0.0, 5.0, 10.0, 25.0, 50.0, 75.0, 100.0, 250.0, 500.0, 750.0, 1000.0, 2500.0, 5000.0, 7500.0, 10000.0], [], 0, 1, 10}])
                    )}
  end

I'll send a PR with it later this morning as well.

@alexandrachakeres-wk (Author)

Hm, I'm not sure why my config (exactly the same as yours, other than otel_metric_exporter_console instead of otel_metric_exporter_pid) is not leading to a cumulative histogram being created. On the new Hex versions, Histogram.create returns the following (still with temporality_delta):

{:instrument, :otel_meter_default,
 {:otel_meter_default,
  {:meter, :otel_meter_default,
   {:instrumentation_scope, "myapp", "0.0.1", :undefined},
   :otel_meter_provider_global, :instruments_otel_meter_provider_global,
   :streams_otel_meter_provider_global, :metrics_otel_meter_provider_global,
   :exemplars_otel_meter_provider_global}}, :"http.server.duration",
 "measures the duration of the inbound HTTP request", :histogram, :ms,
 :temporality_delta, :undefined, :undefined, :undefined}

@alexandrachakeres-wk (Author)

Oh interesting, with the latest update this behavior has changed though. In the console:

iex(2)> :otel_meter_server.force_flush()
** METRICS FOR DEBUG **
:ok
http.server.duration{net.host.name=localhost, http.status_code=200, http.scheme=http, http.route=/heartbeat, http.method=POST} [0, 0, 2, 2, 0, 1, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0]
iex(2)> :otel_meter_server.force_flush()
** METRICS FOR DEBUG **
:ok
http.server.duration{net.host.name=localhost, http.status_code=200, http.scheme=http, http.route=/heartbeat, http.method=POST} [0, 0, 2, 2, 0, 1, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0]

I flushed 2x in a row, and there were no requests to that endpoint in between (they would have been logged), so it does look like it's exporting in a cumulative fashion! On the previous Hex releases, the second :otel_meter_server.force_flush() would have just printed ** METRICS FOR DEBUG ** with no data below it.

So while the Histogram.create still returns an instrument with temporality_delta, the exporter behavior seems to be cumulative. So maybe everything is working as expected now, even if it's a bit confusing!

@tsloughter (Member)

Oooh, yeah, so that temporality_delta is actually the "instrument temporality". When collecting metrics, the aggregator needs to know the instrument's native temporality in order to support conversion between delta and cumulative temporalities.

The aggregator sometimes has to do a conversion when the instrument's native temporality differs from the view's temporality. For example, an Observable Counter has a native temporality of cumulative, so if the view is configured as delta, then at collection time the previously exported value of the callback has to be subtracted from the current value.

So what you care about is the view (or stream) temporality, and that is being properly configured. This is what default_temporality_mapping configures.

Sorry, I should have noticed that earlier and pointed it out to avoid confusion :)
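To illustrate the cumulative-to-delta conversion described above, here is a minimal sketch (illustrative names only, not the actual otel_metric_reader internals) of how a reader can derive delta values from a cumulative instrument by remembering what it last exported:

```elixir
# Illustrative sketch of cumulative -> delta conversion during collection.
# `last_exported` maps an attribute set to the previously exported cumulative value.
defmodule TemporalitySketch do
  def to_delta(current_value, attributes, last_exported) do
    previous = Map.get(last_exported, attributes, 0)
    # The delta is the change since the last export; remember the new cumulative value.
    {current_value - previous, Map.put(last_exported, attributes, current_value)}
  end
end

# e.g. an observable counter callback returned 10, then 25:
# {10, state} = TemporalitySketch.to_delta(10, %{}, %{})
# {15, _} = TemporalitySketch.to_delta(25, %{}, state)
```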

@alexandrachakeres-wk (Author) commented Mar 19, 2024

I'm attempting the final step of our setup: exporting metrics (and traces) via OTLP. What should I set as the exporter in the config below (instead of otel_metric_exporter_pid, which I assume is just for debugging/testing)? Looking at the current main branch, I only see that and otel_metric_exporter_console satisfying the interface from otel_metric_exporter. Or maybe otel_metric_exporter is the answer?

    Application.put_env(:opentelemetry_experimental, :readers, [
      %{
        module: :otel_metric_reader,
        config: %{
          exporter: {:otel_metric_exporter_pid, {:metric, self()}},
          default_temporality_mapping: %{
            counter: :temporality_delta,
            observable_counter: :temporality_cumulative,
            updown_counter: :temporality_delta,
            observable_updowncounter: :temporality_cumulative,
            histogram: :temporality_cumulative,
            observable_gauge: :temporality_cumulative
          }
        }
      }
    ])

I'll also have a config like this (based on what's documented for traces -- not sure if that'll end up getting used for metrics as well):

# config :opentelemetry,
#   span_processor: :batch

otlp_endpoint = System.fetch_env!("OTLP_ENDPOINT")

# Based on https://fly.io/phoenix-files/opentelemetry-and-the-infamous-n-plus-1/#configuring-opentelemetry-in-elixir
config :opentelemetry_exporter,
  otlp_protocol: :http_protobuf,
  otlp_endpoint: otlp_endpoint

config :opentelemetry, :processors,
  otel_batch_processor: %{
    exporter:
      {:opentelemetry_exporter,
       %{
         endpoints: [otlp_endpoint]
         # headers: [{"x-honeycomb-dataset", "experiments"}]
       }}
  }

Having documentation for how to add the required dependencies, start the required services, and set the necessary configuration, plus docs for the various public functions, would make adopting this waaaay easier FYI. If you're hoping to get adopters, I'd say that'd be a good next step!

@alexandrachakeres-wk (Author) commented Mar 20, 2024

I tried this (with otel_metric_exporter)

config :opentelemetry_experimental, :readers, [
  %{
    module: :otel_metric_reader,
    config: %{
      exporter: {:otel_metric_exporter, {:metric, self()}},
      default_temporality_mapping: %{
...

but ended up with this error

iex(2)> :otel_meter_server.force_flush()
:ok
iex(3)> [error]  GenServer #PID<0.2910.0> terminating
** (UndefinedFunctionError) function :otel_metric_exporter.export/4 is undefined or private
    (opentelemetry_experimental 0.5.0) :otel_metric_exporter.export(:metrics, [{:metric, :"Custom/warned", {:instrumentation_scope, "myapp", "0.0.1", :undefined}, :undefined, :undefined, {:sum, [], :temporality_delta, true}}, {:metric, :"http.server.duration", {:instrumentation_scope, "myapp", "0.0.1", :undefined}, "measures the duration of the inbound HTTP request", :ms, {:histogram, [{:histogram_datapoint, %{"http.method" => "POST", "http.route" => "/myroute/:id", "http.scheme" => "http", "http.status_code" => "400", "net.host.name" => "localhost"}, -576460297011671375, -576458872565067554, 1, 47, [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0.0, 5.0, 10.0, 25.0, 50.0, 75.0, 100.0, 250.0, 500.0, 750.0, 1000.0, 2500.0, 5000.0, 7500.0, 10000.0], [], 0, 47, 47}, {:histogram_datapoint, %{"http.method" => "POST", "http.route" => "/another/endpoint", "http.scheme" => "http", "http.status_code" => "200", "net.host.name" => "localhost"}, -576460729843520949, -576458872565067554, 31, 980, [0, 0, 8, 10, 7, 3, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0], [0.0, 5.0, 10.0, 25.0, 50.0, 75.0, 100.0, 250.0, 500.0, 750.0, 1000.0, 2500.0, 5000.0, 7500.0, 10000.0], [], 0, 6, 101}], :temporality_cumulative}}, {:metric, :"Custom/succeeded", {:instrumentation_scope, "myapp", "0.0.1", :undefined}, :undefined, :undefined, {:sum, [], :temporality_delta, true}}, {:metric, :"Custom/errored", {:instrumentation_scope, "myapp", "0.0.1", :undefined}, :undefined, :undefined, {:sum, [], :temporality_delta, true}}], {:resource, :undefined, {:attributes, 128, 255, 0, %{"deployment.environment": "chak-local", "process.executable.name": "erl", "process.runtime.description": "Erlang/OTP 25 erts-13.0.2", "process.runtime.name": "BEAM", "process.runtime.version": "13.0.2", "service.instance.id": "myapp-deployment-local", "service.name": "myapp-deployment", "telemetry.sdk.language": "erlang", "telemetry.sdk.name": "opentelemetry", "telemetry.sdk.version": "1.4.0"}}}, [])
    (opentelemetry_experimental 0.5.0) /Users/chakeresa/code/myapp/deps/opentelemetry_experimental/src/otel_metric_reader.erl:170: :otel_metric_reader.handle_info/2
    (stdlib 4.0.1) gen_server.erl:1120: :gen_server.try_dispatch/4
    (stdlib 4.0.1) gen_server.erl:1197: :gen_server.handle_msg/6
    (stdlib 4.0.1) proc_lib.erl:240: :proc_lib.init_p_do_apply/3
Last message: :collect
State: {:state, {:otel_metric_exporter, []}, #PID<0.2907.0>, #Reference<0.2183366056.1499725825.239153>, %{counter: :otel_aggregation_sum, histogram: :otel_aaggregation_histogram_explicit, observable_counter: :otel_aggregation_sum, observable_gauge: :otel_aggregation_last_value, observable_updowncounter: :otel_aggregation_sum, updown_counter: :otel_aggregation_sum}, %{counter: :temporality_delta, histogram: :temporality_cumulative, observable_counter: :temporality_cumulative, observable_gauge: :temporality_cumulative, observable_updowncounter: :temporality_cumulative, updown_counter: :temporality_delta}, :undefined, :undefined, :callbacks_otel_meter_provider_global, :streams_otel_meter_provider_global, :metrics_otel_meter_provider_global, :exemplars_otel_meter_provider_global, %{default_temporality_mapping: %{counter: :temporality_delta, histogram: :temporality_cumulative, observable_counter: :temporality_cumulative, observable_gauge: :temporality_cumulative, observable_updowncounter: :temporality_cumulative, updown_counter: :temporality_delta}, exporter: {:otel_metric_exporter, {:metric, #PID<0.105.0>}}}, {:resource, :undefined, {:attributes, 128, 255, 0, %{"deployment.environment": "chak-local", "process.executable.name": "erl", "process.runtime.description": "Erlang/OTP 25 erts-13.0.2", "process.runtime.name": "BEAM", "process.runtime.version": "13.0.2", "service.instance.id": "myapp-deployment-local", "service.name": "myapp-deployment", "telemetry.sdk.language": "erlang", "telemetry.sdk.name": "opentelemetry", "telemetry.sdk.version": "1.4.0"}}}, #Reference<0.2183366056.1499856897.239161>, []}

I also tried

config :opentelemetry_experimental, :readers, [
  %{
    module: :otel_metric_reader,
    config: %{
      exporter: {:opentelemetry_exporter, %{}},
      default_temporality_mapping: %{
...

which seemed to work, but I'm still not seeing any OTel metrics come through our collector to NewRelic.

So yeah, between the lack of docs and being unsure whether everything is built out yet, I think we're really going to have to give up on this spike until the package is more usable. I've burnt the better part of a week trying to set it up and am still struggling.

@tsloughter (Member)

Damn, understandable. Something is definitely wrong, since otel_metric_exporter should not be called. Though I'm glad it was, because it shouldn't even exist anymore; this is handled by opentelemetry_exporter now. It was originally going to be a separate module to keep it in experimental, but that turned out to be too awkward.

I know you need to move on, but can you tell me if you were using the latest release of opentelemetry_exporter, 1.7.0?

@alexandrachakeres-wk (Author) commented Mar 21, 2024

Yep I was using opentelemetry_exporter version 1.7.0.

Sorry if my message above was confusing. The otel_metric_exporter was getting called because I'd tried it in my config (see above). When I swapped that for opentelemetry_exporter in my config (also shown above), I no longer got any errors, but I also wasn't seeing the traces or metrics coming through our company's collector to NewRelic. We don't have a great way to debug where things are getting lost (our collector doesn't have great dev tools right now), so it could very well be exporting nicely but getting dropped on the floor by our collector instead of being sent to NewRelic. Might it have to do with the Erlang package not using the /v1/metrics endpoint?

But we're way past the original timebox allotted for this, so I'm going to have to put a pin in OpenTelemetry until metrics docs (and probably the v1 release of metrics) are available. And maybe until we have some better dev tools on our side for visibility into our collector (since I also couldn't see the traces in NewRelic, downstream from the collector).

@tsloughter (Member)

@alexandrachakeres-wk thanks for giving it this much time. It has been hard to get people to try it out while it's experimental and there hasn't been time to work on docs. You've found multiple issues, like the exporter not appending /v1/metrics. I'll get a fix in for that soon.

Hope you will give it another try once docs are written :)

@tsloughter (Member)

Actually, the issue is different from not appending /v1/metrics: it's that we don't support HTTP for metrics yet. So I'm adding support for that.
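As an interim workaround until HTTP support for metrics lands, switching the exporter to gRPC may work, assuming your collector has a gRPC receiver enabled (default port 4317). A hedged, untested sketch:

```elixir
# Hypothetical workaround: export over gRPC instead of HTTP/protobuf.
# Assumes the collector is listening for OTLP/gRPC on port 4317.
config :opentelemetry_exporter,
  otlp_protocol: :grpc,
  otlp_endpoint: "http://localhost:4317"
```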
