
k8s_container monitored resource type requires cloud.platform: gcp_kubernetes_engine even when running outside of Google Cloud #627

Closed
jsirianni opened this issue Apr 21, 2023 · 5 comments · Fixed by #683

@jsirianni

In the past, we were able to send logs to Cloud Logging by setting the following resource attributes (a configuration sketch follows the list):

  • k8s.container.name
  • k8s.pod.name
  • k8s.namespace.name
  • k8s.cluster.name
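The report doesn't show how these attributes were attached, so the snippet below is only a minimal collector configuration sketch that upserts them with the resource processor; the attribute values, log path, and project ID are placeholders, and in practice these attributes often come from the k8sattributes processor or the SDK instead.

```yaml
# Sketch only: values are placeholders, not taken from the original report.
receivers:
  filelog:
    include: ["/var/log/pods/*/*/*.log"]   # placeholder log path

processors:
  resource:
    attributes:
      - key: k8s.container.name
        value: my-container                # placeholder
        action: upsert
      - key: k8s.pod.name
        value: my-pod                      # placeholder
        action: upsert
      - key: k8s.namespace.name
        value: default                     # placeholder
        action: upsert
      - key: k8s.cluster.name
        value: my-cluster                  # placeholder
        action: upsert

exporters:
  googlecloud:
    project: my-project                    # placeholder GCP project ID

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [resource]
      exporters: [googlecloud]
```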

Recently, I noticed that my logs are no longer shown under the k8s_container monitored resource type; instead, they appear as generic_node.

This is because internal/resourcemapping/resourcemapping.go checks specifically for cloud.platform: gcp_kubernetes_engine.

We work with many Google customers who operate Kubernetes clusters outside of Google Cloud, so the requirement for cloud.platform: gcp_kubernetes_engine is surprising, especially when using OpenTelemetry.

I can reproduce this with Minikube running on my workstation.

With cloud.platform set

Using the resource attribute processor, I can set the cloud.platform value to "trick" the exporter.

Screenshot from 2023-04-21 19-13-48
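For reference, a minimal sketch of that workaround, assuming the collector's resource processor is used to upsert the attribute; the processor name below is made up and would still need to be added to the logs pipeline's processors list.

```yaml
processors:
  # Hypothetical workaround: force the exporter's k8s_container mapping by
  # upserting cloud.platform, even though the cluster runs outside Google Cloud.
  resource/force_gke:
    attributes:
      - key: cloud.platform
        value: gcp_kubernetes_engine
        action: upsert
```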

Without cloud.platform

If I do not add the attribute, my logs still come through, but as generic_node.

Screenshot from 2023-04-21 19-13-22

damemi self-assigned this Apr 24, 2023
damemi (Member) commented Apr 24, 2023

Hi @jsirianni, it looks like the line you're referring to has been part of the resource mapping for at least ~2 years based on the blame: https://github.com/GoogleCloudPlatform/opentelemetry-operations-go/blame/3a2c57209765f4752bd140a007612a81f58489c1/exporter/collector/monitoredresource.go#L131 (it moved from that file to resourcemapping.go in f4bd197 ~1 year ago), so it may be a different change that caused this behavior to show up for you recently. Do you know what version of these modules you were using when this worked, or whether you've recently upgraded them?

I do agree that the implied dependence on the GCP platform attribute is weird, so I'll check if there's a specific reason we have that or if this could be opened up to other k8s platforms.

damemi added the enhancement (New feature or request) and priority: p2 labels Apr 26, 2023
damemi (Member) commented Apr 26, 2023

I talked to the team, and it sounds like this may have been due to us originally writing to gke_container. We can look into this more in an upcoming sprint, but unless a hard reason comes up for why we strictly rely on GKE, we will probably open this up.

dashpole (Contributor) commented

It seems like if cloud.platform is set to any hosted Kubernetes platform (e.g. EKS, AKS, etc.), then we should map to k8s_container.

damemi (Member) commented Jul 21, 2023

Opened #683 to fix this.

damemi (Member) commented Jul 26, 2023

This should be fixed in the next release with #683; anything that sets k8s.cluster.name should, at minimum, be mapped to k8s_cluster. I also opened issues in the js, python, and java repos to make sure the behavior matches.
