Add OTel tracing configuration to VSAIO #134
base: master
Conversation
I think we should go ahead and get docker working on a vsaio by default, maybe steal the few useful things from #131
I think the cookbook/recipes should go ahead and start the jaeger container and whatever else so that after vagrant up it "just works"
I tested this and got my jaeger container running with reset_jaeger.sh, but in the GUI drop-down under service I only see "jaeger-query" and can't find my swift traces... so I don't really know if it's working.
You will find a helper script in the bin/ directory that you can run on your host to start (or reset) the
running jaeger container. Just run:

    bin/reset_jaeger.sh
do we run this on startup - so that after TRACING=true vagrant up you can make requests and view http://saio:16686/search?
Made this change. It now runs docker internally and you can browse to http://saio:16686/search on your host and view your traces. Works nicely for me. Great idea!
Force-pushed from 5192baf to 46cb7b7 (compare)
i checked out https://review.opendev.org/c/openstack/swift/+/857559/3 and this worked great out of the box!
log 'show jaeger docker info' do
  message %(
    A Jaeger all-in-one has been started in the vagrant environment. It was started with the bin/reset_jaeger tool.
    You can view all your traces at: http://saio:16686/search
this is the only output that I actually wanted
If you want to have a custom config, volume mount in a config:

    docker run -v $(pwd)/config.yaml:/etc/otelcol/config.yaml otel/opentelemetry-collector
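For reference, a minimal config.yaml for that volume mount might look like the following. This is a sketch only: the jaeger exporter shipped with collector builds of that era, and the `jaeger:14250` endpoint assumes the all-in-one container is reachable under that hostname.

```yaml
# Minimal OTel collector pipeline: receive OTLP, export to Jaeger.
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  jaeger:
    endpoint: jaeger:14250   # jaeger collector gRPC port (assumed hostname)
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [jaeger]
```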
Please get rid of all this extra output at the end of vagrant up - either put it in a bin/ script or docs.
@matthewoliver if we can get rid of these extra warnings at the end after provisioning the vagrant VM, you have my approval as well to get this merged. The tool has been helpful in tracing the path of each request and the latency decomposition within each function invoked while interacting with swift using the swift client.
Thanks clay.
Getting docker running in the saio might be a good idea. Sometimes the
traces don't come up if the host vbox nice isn't up, so the traces can't
get to 192.168.8.1.
Also I've seen traces not get to the host Jaeger when serious is in
enforcing mode.
Having the container running in vsaio should solve both these issues.
Matt
On Wed, 28 Sept 2022, 1:44 am clayg wrote:
clayg requested changes on this pull request.
I think we should go ahead and get docker working on a vsaio by default,
maybe steal the few useful things from #131
<#131>
I think the cookbook/recipes should go ahead and start the jaeger container
and whatever else so that after vagrant up it "just works"
I tested this and got my jaeger container running with reset_jaeger.sh, but
in the GUI drop-down under service I only see "jaeger-query" and can't find
my swift traces... so I don't really know if it's working.
------------------------------
In cookbooks/swift/recipes/configs.rb
<#134 (comment)>
:
> 'bench.conf',
'keymaster.conf',
-].each do |filename|
+]
+
+if node['tracing']
+ swift_files.concat [
+ 'jaeger_exporter.json',
this is fine, but I think it'd be ok if the file was always there too - as
soon as you turn on tracing once it'll always be there right?
------------------------------
In cookbooks/swift/recipes/configs.rb
<#134 (comment)>
:
> - group node["username"]
- variables({
- :ssl => node['ssl'],
- :keymaster_pipeline => keymaster_pipeline,
- })
- end
+ keymaster_pipeline = 'keymaster'
+ end
+ template "/#{proxy_conf_dir}/20_settings.conf" do
+ source "#{proxy_conf_dir}/20_settings.conf.erb"
+ owner node["username"]
+ group node["username"]
+ variables({
+ :ssl => node['ssl'],
+ :keymaster_pipeline => keymaster_pipeline,
+ :tracing => node['tracing'],
it looks like previously the no-auth proxy config was a file and now both
proxy configs are templates
... and we can pass in all the vars even if they're not used.
------------------------------
In cookbooks/swift/recipes/default.rb
<#134 (comment)>
:
> @@ -17,3 +17,7 @@
user node['username']
group node["username"]
end
+
+if node["tracing"] then
+ include_recipe "swift::tracing_info"
+end
i think this will get executed *after* we start swift - but it's not
really part of setup, it just prints some info out for the developer.
------------------------------
In cookbooks/swift/recipes/setup.rb
<#134 (comment)>
:
> "s3cmd",
"awscli-plugin-endpoint",
"bandit==1.5.1", # pin bandit to avoid pyyaml issues on bionic (at least)
-].each do |pkg|
+]
+if node['tracing']
+ # Install OpenTelemetry tracing pip packages
+ pip_packages.concat [
+ "opentelemetry-api",
+ "opentelemetry-sdk",
+ "opentelemetry-semantic-conventions",
+ "opentelemetry-exporter-jaeger",
+ ]
+end
maybe it makes sense to only install these dependencies if we asked for
tracing...
------------------------------
In cookbooks/swift/recipes/tracing_info.rb
<#134 (comment)>
:
> +
+ NOTE: We should go via OTel collector, but I haven't implemented that yet. But there are some notes on that below
+
+ For the OTel collector we should be able to run something like:
+
+ docker pull otel/opentelemetry-collector:latest
+ docker run -d otel/opentelemetry-collector:latest
+
+ See: https://opentelemetry.io/docs/collector/getting-started/
+ NOTE: of course you can also specify a version. We should probably pick whatever we use for prod (when we get that far)
+
+ If you want to have a custom config, volume mount in a config:
+
+ docker run -v $(pwd)/config.yaml:/etc/otelcol/config.yaml otel/opentelemetry-collector
+ )
+end
oic, it's just a message - so that's why it makes sense after start main
------------------------------
In bin/reset_jaeger.sh
<#134 (comment)>
:
> @@ -0,0 +1,12 @@
+#!/bin/bash
+docker_id=$(docker ps --noheading |grep jaeger |awk '{print $1}')
+
+if [ -n "$docker_id" ]; then
+ docker stop $docker_id
+ docker rm $docker_id
+fi
+
+docker run -d --name jaeger -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
+ -p 5775:5775/udp -p 6831:6831/udp -p 6832:6832/udp -p 5778:5778 \
+ -p 16686:16686 -p 14268:14268 -p 14250:14250 -p 9411:9411 \
+ jaegertracing/all-in-one:1.27
ok, so there's not really any "setup" for jaeger, we just start up their
all-in-one container
------------------------------
In cookbooks/swift/recipes/tracing_info.rb
<#134 (comment)>
:
> + -p 5775:5775/udp \\
+ -p 6831:6831/udp \\
+ -p 6832:6832/udp \\
+ -p 5778:5778 \\
+ -p 16686:16686 \\
+ -p 14268:14268 \\
+ -p 14250:14250 \\
+ -p 9411:9411 \\
+ jaegertracing/all-in-one:1.27
+
+ See: https://www.jaegertracing.io/docs/1.27/getting-started/
+
+ You will find a helper script in the bin/ directory that you can run on your host to start (or reset) the
+ running jaeger container. Just run:
+
+ bin/reset_jaeger.sh
do we run this on startup - so that after TRACING=true vagrant up you can
make requests and view http://saio:16686/search ?
Sigh, auto correct from phone. That should read "vbox NIC", and "Selinux is in enforcing mode".
On Wed, 28 Sept 2022, 1:44 am clayg, ***@***.***> wrote:
> ***@***.**** requested changes on this pull request.
>
> I think we should go ahead and get docker working on a vsaio by default,
> maybe steal the few useful things from #131
> <#131>
>
> I think the cookbook/recipes should go ahead and start the jager
> container and whatever else so that after vagrant up it "just works"
>
> I tested this and got my jager container running with reset_jager.sh, but
> in the GUI drop down under service I only see "jager-query" and can't find
> my swift traces... so I don't really know if it's working.
> ------------------------------
>
> In cookbooks/swift/recipes/configs.rb
> <#134 (comment)>
> :
>
> > 'bench.conf',
> 'keymaster.conf',
> -].each do |filename|
> +]
> +
> +if node['tracing']
> + swift_files.concat [
> + 'jaeger_exporter.json',
>
> this is fine, but I think it'd be ok if the file was always there too -
> as soon as you turn on tracing once it'll always be there right?
> ------------------------------
>
> In cookbooks/swift/recipes/configs.rb
> <#134 (comment)>
> :
>
> > - group node["username"]
> - variables({
> - :ssl => node['ssl'],
> - :keymaster_pipeline => keymaster_pipeline,
> - })
> - end
> + keymaster_pipeline = 'keymaster'
> + end
> + template "/#{proxy_conf_dir}/20_settings.conf" do
> + source "#{proxy_conf_dir}/20_settings.conf.erb"
> + owner node["username"]
> + group node["username"]
> + variables({
> + :ssl => node['ssl'],
> + :keymaster_pipeline => keymaster_pipeline,
> + :tracing => node['tracing'],
>
> it looks like previously the no-auth proxy config was a file and now both
> proxy config's are templates
>
> ... and we can pass in all the vars even if they're not used.
> ------------------------------
>
> In cookbooks/swift/recipes/default.rb
> <#134 (comment)>
> :
>
> > @@ -17,3 +17,7 @@
> user node['username']
> group node["username"]
> end
> +
> +if node["tracing"] then
> + include_recipe "swift::tracing_info"
> +end
>
> i think this will get executed *after* we start swift - but it's not
> really part of setup, it's just print some info out for the developer.
> ------------------------------
>
> In cookbooks/swift/recipes/setup.rb
> <#134 (comment)>
> :
>
> > "s3cmd",
> "awscli-plugin-endpoint",
> "bandit==1.5.1", # pin bandit to avoid pyyaml issues on bionic (at least)
> -].each do |pkg|
> +]
> +if node['tracing']
> + # Install OpenTelemetry tracing pip packages
> + pip_packages.concat [
> + "opentelemetry-api",
> + "opentelemetry-sdk",
> + "opentelemetry-semantic-conventions",
> + "opentelemetry-exporter-jaeger",
> + ]
> +end
>
> maybe it makes sense to only install these dependencies if we asked for
> tracing...
> ------------------------------
>
> In cookbooks/swift/recipes/tracing_info.rb
> <#134 (comment)>
> :
>
> > +
> + NOTE: We should go via OTel collector, but I havn't implemented that yet. But there are some notes on that below
> +
> + For the OTel collector we should be able to run something like:
> +
> + docker pull otel/opentelemetry-collector:latest
> + docker run -d otel/opentelemetry-collector:latest
> +
> + See: https://opentelemetry.io/docs/collector/getting-started/
> + NOTE: of course you can also specify a version. We should probably pick whatever we use for prod (when we get that far)
> +
> + If you want to have a custom config, volume mount in a config:
> +
> + docker run -v $(pwd)/config.yaml:/etc/otelcol/config.yaml otel/opentelemetry-collector
> + )
> +end
>
> oic, it's just a message - so that's why it makes sense after start main
> ------------------------------
>
> In bin/reset_jaeger.sh
> <#134 (comment)>
> :
>
> > @@ -0,0 +1,12 @@
> +#!/bin/bash
> +docker_id=$(docker ps --noheading |grep jaeger |awk '{print $1}')
> +
> +if [ -n "$docker_id" ]; then
> + docker stop $docker_id
> + docker rm $docker_id
> +fi
> +
> +docker run -d --name jaeger -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
> + -p 5775:5775/udp -p 6831:6831/udp -p 6832:6832/udp -p 5778:5778 \
> + -p 16686:16686 -p 14268:14268 -p 14250:14250 -p 9411:9411 \
> + jaegertracing/all-in-one:1.27
>
> ok, so there's not really any "setup" for jaeger, we just start up their
> all-in-one container
> ------------------------------
>
> In cookbooks/swift/recipes/tracing_info.rb
> <#134 (comment)>
> :
>
> > + -p 5775:5775/udp \\
> + -p 6831:6831/udp \\
> + -p 6832:6832/udp \\
> + -p 5778:5778 \\
> + -p 16686:16686 \\
> + -p 14268:14268 \\
> + -p 14250:14250 \\
> + -p 9411:9411 \\
> + jaegertracing/all-in-one:1.27
> +
> + See: https://www.jaegertracing.io/docs/1.27/getting-started/
> +
> + You will find a helper script in the bin/ directory that'll you can run on your host to start (or reset) the
> + running jaeger container. Just run:
> +
> + bin/reset_jaeger.sh
>
> do we run this on startup - so that after TRACING=true vagrant up you
> can make requests and view http://saio:16686/search ?
>
> —
> Reply to this email directly, view it on GitHub
> <#134 (review)>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AAAK3S5IRLWWT7DI6YLMPALWAMI7NANCNFSM6AAAAAAQWOO4U4>
> .
> You are receiving this because you authored the thread.Message ID:
> ***@***.***>
>
|
i'll work on this some more next time I check out otel - it would be great to land support in vsaio shortly after we merge it upstream: what are the next steps there?
docker run -d --name jaeger -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
    -p 5775:5775/udp -p 6831:6831/udp -p 6832:6832/udp -p 5778:5778 \
    -p 16686:16686 -p 14268:14268 -p 14250:14250 -p 9411:9411 \
    jaegertracing/all-in-one:1.27
IME with docker run we'll want a --rm
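To make the suggestion concrete, here is a sketch of what reset_jaeger.sh could look like with --rm folded in. This is untested against a real daemon; the DOCKER override is not part of the PR, it is only there so the logic can be dry-run (e.g. DOCKER=echo) without docker installed.

```shell
#!/bin/bash
# Sketch: reset_jaeger.sh with --rm so stopped containers don't linger.
# DOCKER is overridable (e.g. DOCKER=echo) purely for dry-running the logic.
DOCKER=${DOCKER:-docker}

reset_jaeger() {
    # -a includes stopped containers, -q prints ids only (no header to grep past)
    local old_id
    old_id=$($DOCKER ps -aq --filter name=jaeger)
    if [ -n "$old_id" ]; then
        $DOCKER rm -f $old_id
    fi
    # --rm: the container removes itself when stopped, so a plain
    # `docker stop jaeger` is enough to clean up next time around
    $DOCKER run -d --rm --name jaeger \
        -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
        -p 5775:5775/udp -p 6831:6831/udp -p 6832:6832/udp -p 5778:5778 \
        -p 16686:16686 -p 14268:14268 -p 14250:14250 -p 9411:9411 \
        jaegertracing/all-in-one:1.27
}
```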
    "opentelemetry-sdk",
    "opentelemetry-semantic-conventions",
    "opentelemetry-exporter-jaeger",
  ]
i wonder if there's a pip install -e .[opentelem=true]
sort of invocation we could get working
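Something like that could work if swift grew an extras group. A hypothetical setup.cfg fragment, with the extra name "otel" and its contents taken from the pip list this PR installs (note swift uses pbr, so the exact section name may differ):

```ini
# Hypothetical extras entry in swift's setup.cfg (illustrative only)
[options.extras_require]
otel =
    opentelemetry-api
    opentelemetry-sdk
    opentelemetry-semantic-conventions
    opentelemetry-exporter-jaeger
```

which would make `pip install -e ".[otel]"` pull in the tracing dependencies only when asked for.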
@@ -183,6 +184,7 @@
  group node["username"]
  variables({
    :disable_encryption => ! node['encryption'],
it seems like for encryption we always create the filters and put them in the pipeline, but just turn them on/off based on the config option - I wonder if that strategy makes more sense after we get otel merged.
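For comparison, the encryption pattern being referenced looks roughly like this in a proxy-server.conf. A sketch only: disable_encryption is a real option on swift's encryption filter, but the pipeline shown here is abbreviated.

```ini
# Filter always present in the pipeline; behaviour toggled by config.
[pipeline:main]
pipeline = catch_errors cache tempauth keymaster encryption proxy-logging proxy-server

[filter:encryption]
use = egg:swift#encryption
# middleware stays installed; encryption of new writes is just switched off
disable_encryption = true
```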
I just found this and the work you're doing here is awesome! Quick question though: why use the Jaeger exporter instead of a generic OTel exporter like opentelemetry-exporter-otlp, which allows sending to any tool supporting OTel over gRPC/HTTP?
Thanks @rshaw1467. Great question, and you're right! I do plan to move it over to the OTel collector. In fact, I now have an OTel collector set up in our dev environment to do just that. Jaeger was the first version because this OTel tracing in swift forked from a slightly older OpenTracing version I was working on before OTel, which was using Jaeger. I still had that infrastructure up, so it just worked.
You are correct, you'll still need a backend to view the trace data. I was just thinking from a productionized sense: where people are using existing observability tools like Dynatrace, Datadog, New Relic, etc., a generic exporter makes sense.
I just added otel collector support to the upstream patch for otel tracing!
So almost there now 😀
Force-pushed from 46cb7b7 to 0d61deb (compare)
Force-pushed from 0d61deb to 4b862cf (compare)
When TRACING=true, the VSAIO will be configured to use tracing. This
involves:
- Adding trace middleware to all the wsgi server pipelines and config
  (not internal client... but hmm)
- Adding a bin/reset_jaeger.sh helper script to start and clear/restart
  the jaeger container
- Starting a jaeger all-in-one in the virtual machine using
  reset_jaeger.sh
- Adding an /etc/swift/jaeger_exporter.json file which points to the
  running docker image
- Setting up some basic configuration for the traces, namely tracing
  every request going through the proxies

Because the proxy spans can get quite big, they can get bigger than the
opentracing UDP max size, so udp_split_oversize_batches is enabled in
the jaeger_exporter; otherwise oversized batches are just dropped.

Also added a bunch of tracing notes to the end of the chef run, which
currently reads:
==> default: Recipe: swift::tracing_info
==> default: * execute[Start jaeger all-in-one docker image] action run
==> default:
==> default: [execute] Unable to find image 'jaegertracing/all-in-one:1.27' locally
==> default: 1.27: Pulling from jaegertracing/all-in-one
==> default: a0d0a0d46f8b: Pulling fs layer
==> default: b45576136ee2: Pulling fs layer
==> default: d22e8500bf73: Pulling fs layer
==> default: 1a972b89c2b0: Pulling fs layer
==> default: 1a972b89c2b0: Waiting
==> default: b45576136ee2: Verifying Checksum
==> default: b45576136ee2: Download complete
==> default: a0d0a0d46f8b: Verifying Checksum
==> default: a0d0a0d46f8b: Download complete
==> default: a0d0a0d46f8b: Pull complete
==> default: b45576136ee2: Pull complete
==> default: 1a972b89c2b0: Verifying Checksum
==> default: 1a972b89c2b0: Download complete
==> default: d22e8500bf73: Verifying Checksum
==> default: d22e8500bf73: Download complete
==> default: d22e8500bf73: Pull complete
==> default: 1a972b89c2b0: Pull complete
==> default: Digest: sha256:8d0bff43db3ce5c528cb6f957520511d263d7cceee012696e4afdc9087919bb9
==> default: Status: Downloaded newer image for jaegertracing/all-in-one:1.27
==> default: 59364e3d2e876234fc1b43d01ddd434d61dbc4bc1e659cd9bfb90d05aa13d15c
==> default: [2022-09-28T05:59:41+00:00] INFO: execute[Start jaeger all-in-one docker image] ran successfully
==> default:
==> default: - execute /vagrant/bin/reset_jaeger.sh
==> default:
==> default: * log[show jaeger docker info] action write
==> default: [2022-09-28T05:59:41+00:00] INFO:
==> default: A Jaeger all-in-one has been started in the vagrant environment. It was started with the bin/reset_jaeger tool.
==> default: You can view all your traces at: http://saio:16686/search
==> default:
==> default: The reset_jaeger.sh script basically runs:
==> default:
==> default: docker run -d --name jaeger \
==> default: -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
==> default: -p 5775:5775/udp \
==> default: -p 6831:6831/udp \
==> default: -p 6832:6832/udp \
==> default: -p 5778:5778 \
==> default: -p 16686:16686 \
==> default: -p 14268:14268 \
==> default: -p 14250:14250 \
==> default: -p 9411:9411 \
==> default: jaegertracing/all-in-one:1.27
==> default:
==> default: See: https://www.jaegertracing.io/docs/1.27/getting-started/
==> default:
==> default: If you want to reset and clear the traces just run:
==> default:
==> default: reset_jaeger.sh
==> default:
==> default: NOTE: We should go via OTel collector, but I haven't implemented that yet. But there are some notes on that below
==> default:
==> default: For the OTel collector we should be able to run something like:
==> default:
==> default: docker pull otel/opentelemetry-collector:latest
==> default: docker run -d otel/opentelemetry-collector:latest
==> default:
==> default: See: https://opentelemetry.io/docs/collector/getting-started/
==> default: NOTE: of course you can also specify a version. We should probably pick whatever we use for prod (when we get that far)
==> default:
==> default: If you want to have a custom config, volume mount in a config:
==> default:
==> default: docker run -v $(pwd)/config.yaml:/etc/otelcol/config.yaml otel/opentelemetry-collector