Grafana initializes dashboard provider before sidecar has completed parsing all dashboard ConfigMaps #527

Open
tringuyen-yw opened this issue Jun 25, 2021 · 10 comments


tringuyen-yw commented Jun 25, 2021

Using kube-prometheus-stack helm chart v16.12.0, which depends on Grafana helm chart v6.13.5.

Maybe I should file this issue in https://github.com/prometheus-community/helm-charts. I hope it is OK to file it here, as I think it pertains to the Grafana helm chart.

I would like to import some dashboards into a specific Grafana folder named KAFKA.

I have customized the helm values in my-helm-values.yaml:

grafana:
  sidecar:
    dashboards:
      enabled: true
      label: grafana_dashboard
      folder: /tmp/dashboards
  
  dashboardProviders:
    dashboardproviders.yaml:
      apiVersion: 1
      providers:
        - name: 'strimzi'
          orgId: 1
          folder: 'KAFKA'
          type: file
          disableDeletion: false
          editable: true
          allowUiUpdates: true
          options:
            path: /tmp/dashboards/strimzi

Then I create a ConfigMap containing the dashboards, in the same namespace as the Grafana helm release:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-strimzi-dashboards
  labels:
    app: strimzi
    grafana_dashboard: "1"
  annotations:
    k8s-sidecar-target-directory: "/tmp/dashboards/strimzi"
data:
  strimzi-kafka.json: |-
    { 
       json code of the Grafana dashboard
    }
  dashboard2.json: |-
    {
       json code
    }

Install the kube-prometheus-stack helm chart with Grafana's custom values:

helm install my-kubepromstack prometheus-community/kube-prometheus-stack -n monitoring -f my-helm-values.yaml

The Grafana UI shows the KAFKA folder as specified in dashboardProviders, but it is empty. All the JSON dashboards in the ConfigMaps are imported, but they end up in the General folder.

I think the reason Grafana fails to provision the custom dashboards is that the custom dashboard provider strimzi was not ready. See the comment below for more diagnostic details.

tringuyen-yw (Author) commented

Could this be a timing issue between the Grafana container and the sidecar container? I.e., Grafana initializes the dashboard provider before the sidecar container has finished parsing all the ConfigMaps?

Logs of the grafana container (notice the timestamp 20:31:13):

t=2021-06-26T20:31:13+0000 lvl=eror msg="Cannot read directory" logger=provisioning.dashboard type=file name=strimzi error="stat /tmp/dashboards/strimzi: no such file or directory"
t=2021-06-26T20:31:13+0000 lvl=eror msg="Failed to read content of symlinked path" logger=provisioning.dashboard type=file name=strimzi path=/tmp/dashboards/strimzi error="lstat /tmp/dashboards/strimzi: no such file or directory"
t=2021-06-26T20:31:13+0000 lvl=info msg="falling back to original path due to EvalSymlink/Abs failure" logger=provisioning.dashboard type=file name=strimzi
t=2021-06-26T20:31:13+0000 lvl=warn msg="Failed to provision config" logger=provisioning.dashboard name=strimzi error="stat /tmp/dashboards/strimzi: no such file or directory"

Logs of the grafana-sc-dashboard container. Notice the timestamp 20:31:18, which is after 20:31:13, when the grafana container began initializing the dashboard provider:

[2021-06-26 20:31:13] Starting collector
[2021-06-26 20:31:13] No folder annotation was provided, defaulting to k8s-sidecar-target-directory
[2021-06-26 20:31:13] Selected resource type: ('secret', 'configmap')
[2021-06-26 20:31:13] Loading incluster config ...
[2021-06-26 20:31:13] Config for cluster api at 'https://172.24.0.1:443' loaded...
[2021-06-26 20:31:13] Unique filenames will not be enforced.
[2021-06-26 20:31:13] 5xx response content will not be enabled.
[2021-06-26 20:31:18] Working on ADDED configmap monitoring/helm-sbox-cc-kubpromstack-node-cluster-rsrc-use
[2021-06-26 20:31:18] Working on ADDED configmap monitoring/helm-sbox-cc-kubpromstack-namespace-by-workload
[2021-06-26 20:31:18] Working on ADDED configmap monitoring/helm-sbox-cc-kubpromstack-node-rsrc-use
[2021-06-26 20:31:18] Working on ADDED configmap monitoring/helm-sbox-cc-kubpromstack-nodes
[2021-06-26 20:31:18] Working on ADDED configmap monitoring/helm-sbox-cc-kubpromstack-k8s-resources-workload
[2021-06-26 20:31:18] Working on ADDED configmap monitoring/cm-strimzi-dashboards
[2021-06-26 20:31:18] Found a folder override annotation, placing the cm-strimzi-dashboards in: /tmp/dashboards/strimzi
[2021-06-26 20:31:18] Working on ADDED configmap monitoring/helm-sbox-cc-kubpromstack-apiserver
... etc ...

When the Grafana pod is in Running status, shelling into the grafana container shows that the path /tmp/dashboards/strimzi does exist:

kubectl exec -it my-kubpromstack-01-grafana-123abc -c grafana -n monitoring -- bash

bash-5.1$ ls -l /tmp/dashboards/strimzi/
total 180
-rw-r--r--    1 grafana  472          34848 Jun 26 20:31 strimzi-kafka-exporter.json
-rw-r--r--    1 grafana  472          69052 Jun 26 20:31 strimzi-kafka.json
-rw-r--r--    1 grafana  472          39388 Jun 26 20:31 strimzi-operators.json
-rw-r--r--    1 grafana  472          35920 Jun 26 20:31 strimzi-zookeeper.json
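
A quick way to confirm the ordering, assuming the pod and container names used in this thread, is to compare the first timestamped log lines of the two containers:

# compare startup timestamps of the grafana and sidecar containers
kubectl logs my-kubpromstack-01-grafana-123abc -c grafana -n monitoring --timestamps | head -n 5
kubectl logs my-kubpromstack-01-grafana-123abc -c grafana-sc-dashboard -n monitoring --timestamps | head -n 5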

@tringuyen-yw tringuyen-yw changed the title [Question] Howto import dashboards in a specific Grafana folder? Grafana initializes dashboard provider before sidecar has completed parsing all dashboard ConfigMaps Jun 26, 2021

ChuckNoxis commented Sep 21, 2021

I have exactly the same issue using kube-prometheus-stack v18.0.9, which uses grafana/grafana:8.1.2.

I found this thread on the Grafana support forum, which leads me to think that the General folder config is overriding the config of the configured folder.

It gets weirder: I also tried switching to /tmp/dashboards-folder1, outside of the /tmp/dashboards/ folder, and also tried a path outside of /tmp. The logs show that the sidecar found the ConfigMap, but with this, the folder1 Grafana folder doesn't appear in the Grafana web UI:

[2021-09-21 13:30:48] Working on ADDED configmap monitoring/elasticsearch-dashboards-configmap
[2021-09-21 13:30:48] Found a folder override annotation, placing the elasticsearch-dashboards-configmap in: /tmp/dashboards-folder1

@tringuyen-yw Did you find a workaround to configure the dashboards' folders automatically? 😃

EDIT: Searching a bit more, I found that if you override the sidecar config with this:

  sidecar:
    dashboards:
      provider:
        foldersFromFilesStructure: true
        updateIntervalSeconds: 30

It creates the right Grafana folders, places each dashboard in the right folder, and you don't need grafana.dashboardProviders anymore! Maybe it doesn't suit your needs, but it may help someone! :) (A fully nested example is sketched below.)
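
For reference, a minimal sketch of how that fragment nests inside kube-prometheus-stack values; the grafana: top-level key is how kube-prometheus-stack passes values down to the Grafana subchart, and the interval value is taken from the snippet above:

# my-helm-values.yaml (sketch, for kube-prometheus-stack)
grafana:
  sidecar:
    dashboards:
      enabled: true
      # label that marks dashboard ConfigMaps for collection
      label: grafana_dashboard
      provider:
        # let Grafana mirror the on-disk folder structure as UI folders
        foldersFromFilesStructure: true
        updateIntervalSeconds: 30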


jwausle commented Nov 29, 2021

I observed the same problem on AKS and found this discussion at the sidecar image quay.io/kiwigrid/k8s-sidecar. The discussion is closed, but there was no solution. One comment suggests using an alternative image as the sidecar.
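
If anyone wants to experiment with a different sidecar image, the Grafana chart exposes it under sidecar.image; a sketch, where the repository and tag below are placeholders rather than a tested recommendation:

grafana:
  sidecar:
    image:
      # placeholder image reference; substitute the alternative sidecar you want to test
      repository: example.io/alternative/k8s-sidecar
      tag: "1.0.0"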

tringuyen-yw (Author) commented

@ChuckNoxis Sorry for taking so long to answer. @jwausle Chuck gave the correct solution. Here is a more complete solution with comments:

grafana:
  # Sidecar that collects the ConfigMaps with the specified label and stores the included files into the respective folders
  sidecar:
    dashboards:
      enabled: true
      # label key that the ConfigMaps containing dashboards must have in order to be collected by the sidecar
      # The value is unused; the ConfigMap could be labelled as:
      # labels:
      #   grafana_dashboard: "1"
      label: grafana_dashboard

      # specific to kube-prometheus-stack:
      # when dashboards.annotations.grafana_folder is UNDEFINED, all the dashboards will be created in the Grafana "General" folder
      # by default the built-in dashboards for kube-prometheus-stack are designed for kube-state-metrics
      # it is more elegant to place those dashboards in a properly named Grafana dashboard folder
      # the annotation below will be added to each dashboard ConfigMap created by the kube-prometheus-stack helm chart
      annotations:
        grafana_folder: "KUBE-STATE-METRICS"

      # folder in the Grafana container where the collected dashboards are stored (unless `defaultFolderName` is set)
      folder: /tmp/dashboards

      # "grafana_folder" is the annotation key which must be set in the ConfigMap defining the dashboards.
      # In the example below the CM has annotations.grafana_folder: "KAFKA"
      # which means all the dashboards defined in the CM would be stored
      # - in /tmp/dashboards/KAFKA on the Filesystem of the Grafana container
      #   "/tmp/dashboards" is defined in grafana.sidecar.dashboards.folder
      #   "KAFKA" is the custom folder defined in the ConfigMap along with dashboard definitions
      # - the dashboards are visible in the Grafana UI under the "KAFKA" dashboard folder
      #
      #   apiVersion: v1
      #   kind: ConfigMap
      #   metadata:
      #     name: ...
      #     namespace: ...
      #     labels:
      #       app: ...
      #       grafana_dashboard: "1"
      #     annotations:
      #       grafana_folder: "KAFKA"
      #   data:
      #      dashboard1.json:
      #         json code of dashboard1
      #      dashboard2.json:
      #         json code of dashboard2
      folderAnnotation: grafana_folder

      provider:
        # allow updating provisioned dashboards from the UI
        allowUiUpdates: true

        # MANDATORY when grafana.sidecar.dashboards.folderAnnotation is defined
        # 'true' = allow Grafana to replicate the dashboard structure from the filesystem, i.e.
        # - create a subdir in the filesystem and store the *.json files of the dashboards there
        #   the json code and the subdir name are defined in the ConfigMap of the dashboards
        #   see example in comment section of grafana.sidecar.dashboards.folderAnnotation
        # AND
        # - In Grafana UI, place the dashboards defined in the ConfigMap (CM)
        #   in a dashboard folder with a name specified in the CM annotation `grafana_folder: ???`
        foldersFromFilesStructure: true

  # valid TZ values: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
  defaultDashboardsTimezone: 'America/Toronto' # default = utc

Example of a ConfigMap defining dashboards that you want to place in a Grafana folder named MY-DASHBOARD-FOLDER:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-my-dashboards
  # if in a different namespace than kube-prometheus-stack,
  # you must set the value `searchNamespace` https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml#L637
  #namespace: <where the kube-prometheus-stack helm release is installed>
  labels:
    # Grafana will automatically pick up all the ConfigMaps labeled grafana_dashboard: "1"
    # the label key `grafana_dashboard` is configured in https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml#L627
    # overridable in https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml#L666
    grafana_dashboard: "1"

  annotations:
    # the dashboards in this ConfigMap will be placed in the Grafana UI dashboard folder
    # named after the value of the `grafana_folder` key below
    # REQUIRES the Grafana helm value to be exactly 'grafana.sidecar.dashboards.folderAnnotation: grafana_folder'
    grafana_folder: "MY-DASHBOARD-FOLDER"

data:
  MyCustom_Dashboards.json: |-
    {
      blabla
    }
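
Once the pod is Running, you can verify the result the same way as earlier in the thread (the pod name is a placeholder; adjust the namespace and release name for your setup):

kubectl exec -it <grafana-pod> -c grafana -n monitoring -- ls -l /tmp/dashboards/MY-DASHBOARD-FOLDER/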

In conclusion, I think the solution is satisfactory; technically there is no issue with Grafana. If there is an issue worth mentioning, it is rather that the Grafana config is insufficiently documented. If the Grafana helm chart maintainers think the additional comments I added in this answer are worth mentioning in the Grafana default helm values, I'll make a pull request.

laxmansoma1903 commented

(quoting @tringuyen-yw's full solution from the comment above)

Thanks man, it worked for me!


ngc4579 commented Aug 17, 2023

I am not quite sure whether this is actually an issue, but:

Following the configuration explained by @tringuyen-yw, the sidecar container shows frequent log entries like these (issued roughly every minute):

...
{"time": "2023-08-17T12:07:31.670417+00:00", "msg": "Found a folder override annotation, placing the <redacted> in: /tmp/dashboards/Logs", "level": "INFO"}
{"time": "2023-08-17T12:08:26.748956+00:00", "msg": "Found a folder override annotation, placing the <redacted> in: /tmp/dashboards/Logs", "level": "INFO"}
{"time": "2023-08-17T12:09:21.748906+00:00", "msg": "Found a folder override annotation, placing the <redacted> in: /tmp/dashboards/Logs", "level": "INFO"}
{"time": "2023-08-17T12:10:16.857739+00:00", "msg": "Found a folder override annotation, placing the <redacted> in: /tmp/dashboards/Logs", "level": "INFO"}
...

alexhatzo commented

Worked for me, thank you @tringuyen-yw! This is the only place I found an explanation this good.

vovka17surnyk commented

@tringuyen-yw Appreciate it! It works for me, and your explanation is the clearest I've ever come across.

AnushaRaoCosnova commented

(quoting @ngc4579's comment above about the repeated "Found a folder override annotation" log entries)

I have the same issue. Were you able to find what configuration causes this? The pod also restarts often, and in addition to the logs you posted, I see:
2024-04-03T07:17:35.5293Z stdout F logger=sqlstore.transactions t=2024-04-03T07:17:35.525432Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"


ehddnko commented Apr 17, 2024

(quoting the exchange above between @ngc4579 and @AnushaRaoCosnova)

@AnushaRaoCosnova Changing the log level in the sidecar container or increasing the watchServerTimeout to more than 60 seconds can help reduce those logs. In my case, I solved the issue by changing the log level.

Change the log level:

sidecar:
  dashboards:
    logLevel: WARN

Increase the watchServerTimeout:

sidecar:
  dashboards:
    watchServerTimeout: 3600
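
For kube-prometheus-stack users, both snippets nest under the grafana: key; a combined sketch using the same values as above:

grafana:
  sidecar:
    dashboards:
      # reduce sidecar log verbosity
      logLevel: WARN
      # keep the watch connection open longer (seconds)
      watchServerTimeout: 3600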
