AKS CoreDNS + Azure Prometheus - can't see metrics in log analytics #1364

Open
rhollins opened this issue Dec 23, 2019 · 3 comments

rhollins commented Dec 23, 2019

I applied the following OMS config as a ConfigMap and only changed this row:
monitor_kubernetes_pods = true
https://github.com/microsoft/OMS-docker/blob/ci_feature_prod/Kubernetes/container-azm-ms-agentconfig.yaml
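For reference, the relevant part of that ConfigMap looks roughly like this (a sketch based on the linked container-azm-ms-agentconfig.yaml; everything except monitor_kubernetes_pods was left at its default):

apiVersion: v1
kind: ConfigMap
metadata:
  name: container-azm-ms-agentconfig
  namespace: kube-system
data:
  prometheus-data-collection-settings: |-
    # cluster-level scrape settings, read by the omsagent replicaset
    [prometheus_data_collection_settings.cluster]
        interval = "1m"
        # the only row I changed:
        monitor_kubernetes_pods = true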

I can see the metrics when running curl against the CoreDNS pod:
dnstools# curl http://<coredns_pod_ip>:9153/metrics

I also deployed the nginx ingress controller and can see its metrics just fine in Log Analytics.

I'm using the following Kusto query to check for metrics:

InsightsMetrics 
| order by TimeGenerated desc 
| where Namespace contains "prometheus"

Environment:

  • Kubernetes version (use kubectl version):
    kubectl version
    Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.8", GitCommit:"1da9875156ba0ad48e7d09a5d00e41489507f592", GitTreeState:"clean", BuildDate:"2019-11-14T05:19:20Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
  • Size of cluster (how many worker nodes are in the cluster?)
    2
  • General description of workloads in the cluster (e.g. HTTP microservices, Java app, Ruby on Rails, machine learning, etc.)
  • Others:
The triage-new-issues bot added the triage label Dec 23, 2019

esHack commented Dec 31, 2019

I also have the same issue.

I'm seeing these logs on the OMS agent:

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0

****************Start Config Processing********************

Both stdout & stderr log collection are turned off for namespaces: '*_kube-system_*.log'

****************End Config Processing********************

****************Start Prometheus Config Processing********************

config::No configmap mounted for prometheus custom config, using defaults

****************End Prometheus Config Processing********************

Workspace f4aa9866-223b-4a7a-9363-323 already onboarded and agent is running.

It seems like it's not finding the Prometheus config map.

devteng commented Jan 22, 2020

I solved this in AKS for an ad hoc deployment of Prometheus. My problem was that CoreDNS metrics were not being scraped by Prometheus. I was able to get at least the in-cluster deployment of Prometheus scraping by adding the prometheus.io/scrape: "true" annotation to the coredns deployment under .spec.template.metadata.annotations. Adding the annotation causes the deployment to perform a rolling update. This change would be lost on the next CoreDNS upgrade if this is not fixed by Azure.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
spec:
  template:
    metadata:
      annotations:
        prometheus.io/port: "9153"
        prometheus.io/scrape: "true"

Previously, only prometheus.io/port: "9153" was in the annotations. prometheus.io/scrape: "true" was not.
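If you would rather not edit the deployment manifest by hand, the same change can be applied as a strategic merge patch; a minimal sketch (the file name coredns-scrape-patch.yaml is just one I picked):

# coredns-scrape-patch.yaml
# apply with something like:
#   kubectl patch deployment coredns -n kube-system -p "$(cat coredns-scrape-patch.yaml)"
spec:
  template:
    metadata:
      annotations:
        prometheus.io/port: "9153"
        prometheus.io/scrape: "true"

As with editing the deployment directly, this triggers a rolling update and would still be lost on the next CoreDNS upgrade.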

This may or may not work for this issue. I'm putting the solution out there in the hope that it helps, but it is not a permanent solution. The awesome AKS team would have to fix it.

atikhono commented Feb 18, 2020

I have bumped into this as well.

To have metrics sent to Azure Monitor, we need to add the prometheus.io/scrape: "true" annotation to the CoreDNS deployment (ironically, none of the tools we use to deploy on Azure supports setting annotations on existing k8s objects, except kubectl, which is not desirable for us). Alternatively, we could give the OMS agent a list of URLs to scrape from, but the kube-dns service doesn't expose the Prometheus endpoint.
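For what it's worth, that URL/service-based alternative would be expressed in the same container-azm-ms-agentconfig ConfigMap roughly like this (a sketch of the shape only; as said, it doesn't actually help here because the kube-dns service doesn't expose port 9153):

  prometheus-data-collection-settings: |-
    [prometheus_data_collection_settings.cluster]
        interval = "1m"
        # static endpoints and/or Kubernetes services to scrape;
        # the kube-dns entry would only work if the service actually exposed the metrics port
        urls = ["http://<coredns_pod_ip>:9153/metrics"]
        kubernetes_services = ["http://kube-dns.kube-system:9153/metrics"]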

It would be cool to have CoreDNS metrics sent to Container Insights out of the box.
