Fix panic in AWS ECS detector if container ARN is not valid #3583
Conversation
Codecov Report
@@           Coverage Diff           @@
##            main   #3583     +/-  ##
=======================================
- Coverage   80.9%   80.8%    -0.2%
=======================================
  Files        150     150
  Lines      10361   10374     +13
=======================================
  Hits        8388    8388
- Misses      1831    1844     +13
  Partials     142     142
Dismissing my review as changes were applied and I don't have time to review it now.
Hey @pellared @Aneurysm9! Is there any chance of this PR being merged? Some of our users are hitting the issue fixed by this PR, so I have to use a fork of this package and update it every time I update the project's dependencies; this is pretty annoying.
This PR fixes the panic that may occur here: https://github.com/open-telemetry/opentelemetry-go-contrib/blob/main/detectors/aws/ecs/ecs.go#L116

If `containerMetadata.ContainerARN` doesn't include a `:` (that is our client's case), `strings.LastIndex(containerMetadata.ContainerARN, ":")` will return `-1`. This leads to `containerMetadata.ContainerARN[:-1]` and an index-out-of-range panic.
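To make the failure concrete, here is a minimal standalone Go snippet (an illustration only, not code from the detector) showing how a missing `:` leads to the panic:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// A container ARN with no ":" separator, as the ECS metadata
	// endpoint can return in the case this PR addresses.
	containerARN := "not-a-valid-arn"

	// strings.LastIndex returns -1 when ":" is absent.
	idx := strings.LastIndex(containerARN, ":")
	fmt.Println(idx) // -1

	// Slicing with a negative bound panics at runtime:
	// "panic: runtime error: slice bounds out of range [:-1]"
	baseARN := containerARN[:idx]
	fmt.Println(baseARN) // never reached
}
```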
Here's what the changes in this PR do:
- extract `baseArn` from `taskMetadata.TaskARN`, `containerMetadata.ContainerARN`, and `taskMetadata.Cluster`
- if `baseArn` is successfully extracted and some of the ARNs aren't full, fix these ARNs (see the sketch below)
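A minimal sketch of that approach, using hypothetical helper names (`getBaseArn`, `fixArn`); the PR's actual implementation may differ:

```go
package main

import (
	"fmt"
	"strings"
)

// getBaseArn returns everything up to (but not including) the last ":"
// of the first candidate that actually contains one, or "" if none do.
// Hypothetical helper for illustration only.
func getBaseArn(candidates ...string) string {
	for _, arn := range candidates {
		if i := strings.LastIndex(arn, ":"); i >= 0 {
			return arn[:i]
		}
	}
	return ""
}

// fixArn prefixes short ARNs with baseArn; full ARNs pass through.
// Hypothetical helper for illustration only.
func fixArn(baseArn, arn string) string {
	if strings.Contains(arn, ":") {
		return arn // already a full ARN
	}
	if baseArn == "" {
		return arn // no base available; leave the value unchanged
	}
	return baseArn + ":" + arn
}

func main() {
	taskArn := "arn:aws:ecs:us-west-2:111122223333:task/abc"
	containerArn := "container/def" // not a full ARN

	base := getBaseArn(taskArn, containerArn)
	fmt.Println(fixArn(base, containerArn))
	// Output: arn:aws:ecs:us-west-2:111122223333:container/def
}
```

The key point is that every `strings.LastIndex` result is checked for `-1` before it is used as a slice bound, so a malformed ARN degrades gracefully instead of panicking.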