DataDog is unable to get its own container.ID #5482

Open · unacceptable opened this issue May 8, 2020 · 11 comments

Comments

@unacceptable commented May 8, 2020

Output of the info page (if this is a bug)

2020-05-08 21:19:34 UTC | PROCESS | WARN | (pkg/tagger/collectors/kubelet_extract.go:212 in parsePods) | Unable to parse container pName: datadog-fzf6k / cName: agent / cId:  / err: can't extract an entity ID from container ID  
2020-05-08 21:19:34 UTC | PROCESS | WARN | (pkg/tagger/collectors/kubelet_extract.go:212 in parsePods) | Unable to parse container pName: datadog-fzf6k / cName: process-agent / cId:  / err: can't extract an entity ID from container ID  
2020-05-08 21:19:34 UTC | PROCESS | WARN | (pkg/tagger/collectors/kubelet_extract.go:212 in parsePods) | Unable to parse container pName: datadog-fzf6k / cName: trace-agent / cId:  / err: can't extract an entity ID from container ID

Describe what happened:
The warnings above were appearing in the agent logs.

Describe what you expected:
I expected the pod to get its own container ID.

Steps to reproduce the issue:
Install Datadog via Helm.

Additional environment details (Operating System, Cloud provider, etc):
Container: datadog/agent:7.19.1
EKS
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.9-eks-502bfb", GitCommit:"502bfb383169b124d87848f89e17a04b9fc1f6f0", GitTreeState:"clean", BuildDate:"2020-02-07T01:31:02Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

Code Locations
https://github.com/DataDog/datadog-agent/blob/7.19.1/pkg/tagger/collectors/kubelet_extract.go#L212
https://github.com/DataDog/datadog-agent/blob/7.19.1/pkg/util/kubernetes/kubelet/kubelet_common.go#L79
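For context on those two locations: the kubelet pod status reports each container ID as "<runtime>://<id>" (e.g. docker://3b4a...), and the tagger builds its entity ID from the part after the separator. When a container has not started yet, the kubelet reports an empty containerID, so the parse fails and the warning above is logged with an empty cId. A minimal sketch of that kind of parsing (illustrative only — the function name toTaggerEntityID and the container_id:// prefix are assumptions here, not the agent's exact code):

```go
package main

import (
	"fmt"
	"strings"
)

// toTaggerEntityID is an illustrative stand-in for the parsing around
// kubelet_common.go#L79: split the "<runtime>://<id>" string reported by the
// kubelet and build a tagger entity ID from the raw ID. An empty input
// (container not started yet) cannot be split, which is what triggers the
// warning in parsePods.
func toTaggerEntityID(ctrID string) (string, error) {
	parts := strings.SplitN(ctrID, "://", 2)
	if len(parts) != 2 || parts[1] == "" {
		return "", fmt.Errorf("can't extract an entity ID from container ID %q", ctrID)
	}
	return "container_id://" + parts[1], nil
}

func main() {
	// A running container: the kubelet reports a runtime-prefixed ID.
	fmt.Println(toTaggerEntityID("docker://3b4a6f1c9d2e"))
	// A container that has not started yet: the kubelet reports an empty ID,
	// reproducing the "can't extract an entity ID" warning above.
	fmt.Println(toTaggerEntityID(""))
}
```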

@Simwar (Contributor) commented Jun 8, 2020

Hi @unacceptable ,

Thanks for reaching out!
Could you please open a support ticket by emailing support@datadoghq.com so our support team can troubleshoot this further?

We will need additional details to understand what's going on here.

Thanks,

Simon

@orong-pp

Recently I was playing with the ndots dnsConfig option. I also added a feature to the Datadog Helm chart to change the ndots dnsConfig: helm/charts#22727 (added on June 15th, 2020, in Helm chart version 2.3.10).

The configuration I added is disabled by default, but once I enable ndots: 1 I get the same error as mentioned here:
2020-06-15 21:39:45 UTC | PROCESS | WARN | (pkg/tagger/collectors/kubelet_extract.go:212 in parsePods) | Unable to parse container pName: mantis-system-datadog-9w8p8 / cName: process-agent / cId: / err: can't extract an entity ID from container ID

@mswezey23

I'm also getting the 3x WARN as the OP reported.

Using the latest versions of the Helm chart + DD with the -jmx tag on K8s 1.16.8.

@kaarolch commented Aug 12, 2020

We've observed the same issue for other containers on DD agent version 6.21.

@christoph-kluge

Also observing it with:

  • datadog/cluster-agent:1.5.2
  • datadog/agent:7

@geidivan commented Dec 2, 2020

Same issue here:
Helm chart: datadog/datadog v2.5.3
DD agent: 7.23.1
Cluster agent: 1.9.1

@sandip750

Any update on this? I am suddenly observing a similar issue for my service: err: can't extract an entity ID from container ID, with DD agent 7.25.0.

@emirot commented Apr 15, 2021

I think it could happen when there is a network issue. On the client side I'm getting:

WARN: DIAGNOSTICS Unable to reach agent Post \"http://10.34.145.44:8126/v0.4/traces\": dial tcp 10.34.145.44:8126: connect: connection refused\n"}

whereas in the agent logs I saw err: can't extract an entity ID from container ID. In my case this happens when the container starts before Envoy is ready.

@tonymayflower

Can we have an update on this issue, please?

@andrei693

I've seen this happening on 7.30.1 when a pod is in "ImagePullBackOff", for example. Not sure if others have pods in some failed state, but getting rid of those can make the warning go away. It's a bit frustrating that we get the container/pod name only at log level DEBUG; I think it would be useful at the WARN level too.
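If it helps anyone confirm that, here is a quick sketch that lists containers whose kubelet status has an empty containerID, i.e. the ones that would trigger this warning. It uses client-go; the kubeconfig path and the error handling are assumptions for illustration, not part of the agent:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List all pods and report containers the kubelet has not assigned an ID to,
	// e.g. pods stuck in ImagePullBackOff or another waiting state.
	pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			if cs.ContainerID == "" {
				fmt.Printf("%s/%s container %q has no containerID yet\n",
					pod.Namespace, pod.Name, cs.Name)
			}
		}
	}
}
```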

@sundeepbhatia1989

I am getting the errors below while deploying Datadog in AKS via Helm:
Unable to extract containerID from cgroup name: containerd.service, err:
2023-06-27 04:37:07 UTC | CORE | WARN | (pkg/collector/corechecks/ebpf/tcp_queue_length.go:109 in Run) | Unable to extract containerID from cgroup name: node-problem-detector.service, err:
2023-06-27 04:37:07 UTC | CORE | WARN | (pkg/collector/corechecks/ebpf/tcp_queue_length.go:109 in Run) | Unable to extract containerID from cgroup name: walinuxagent.service, err:
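Note that these last warnings come from a different code path (the tcp_queue_length eBPF check walking cgroup names, rather than the kubelet pod status), but the failure mode looks similar: names such as containerd.service or walinuxagent.service are systemd unit cgroups, not containers, so no container ID can be extracted from them. A rough sketch of that kind of extraction, under the assumption that container IDs are matched as 64 hex characters inside the cgroup name (not the agent's exact code):

```go
package main

import (
	"fmt"
	"regexp"
)

// containerIDRe matches a 64-character hex container ID inside a cgroup name.
// This pattern is an assumption for illustration; the agent's real matching may differ.
var containerIDRe = regexp.MustCompile("[0-9a-f]{64}")

func containerIDFromCgroup(name string) (string, error) {
	id := containerIDRe.FindString(name)
	if id == "" {
		return "", fmt.Errorf("unable to extract containerID from cgroup name: %s", name)
	}
	return id, nil
}

func main() {
	// A container cgroup name embeds the 64-hex-char container ID.
	fmt.Println(containerIDFromCgroup(
		"kubepods-burstable-pod123.slice:cri-containerd:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"))
	// Systemd unit cgroups such as containerd.service carry no container ID,
	// which is what produces the warnings in the AKS logs above.
	fmt.Println(containerIDFromCgroup("containerd.service"))
}
```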
