
Issue type not generated? #134

Closed

beneso opened this issue Nov 2, 2022 · 1 comment

beneso commented Nov 2, 2022

Hi, thanks for jiralert. We're running into a problem where the issue type does not seem to be generated, and we'd appreciate any pointers on where the problem might be. What are we doing wrong?
Thank you!

Logs (potentially sensitive data redacted):

level=debug ts=2022-11-02T07:15:32.548102552Z caller=main.go:87 msg="handling /alert webhook request"
level=debug ts=2022-11-02T07:15:32.548265294Z caller=main.go:102 msg="  matched receiver" receiver=kubernetes_jiralert
level=debug ts=2022-11-02T07:15:32.548287614Z caller=template.go:69 msg="executing template" template=PROJECT
level=debug ts=2022-11-02T07:15:32.548295292Z caller=template.go:71 msg="returning unchanged"
level=debug ts=2022-11-02T07:15:32.548328386Z caller=notify.go:279 msg=search query="project=\"PROJECT\" and labels=\"ALERT{alertname=\\\"KubePodCrashLooping\\\",cluster=\\\"mycluster\\\",service=\\\"kube-state-metrics\\\"}\" order by resolutiondate desc" options="&{StartAt:0 MaxResults:2 Expand: Fields:[summary status resolution resolutiondate] ValidateQuery:}"
level=debug ts=2022-11-02T07:15:32.570109616Z caller=notify.go:287 msg="no results" query="project=\"PROJECT\" and labels=\"ALERT{alertname=\\\"KubePodCrashLooping\\\",cluster=\\\"mycluster\\\",service=\\\"kube-state-metrics\\\"}\" order by resolutiondate desc"
level=debug ts=2022-11-02T07:15:32.570151739Z caller=template.go:69 msg="executing template" template="{{ template \"jira.summary\" . }}"
level=debug ts=2022-11-02T07:15:32.570286858Z caller=template.go:90 msg="template output" output="[FIRING:1] KubePodCrashLooping mycluster kube-state-metrics (stress dev http dev IP:8080 kube-state-metrics namespace stress namespace/prometheus-prometheus warning 3a0721cf-5624-4f11-a565-eb094e27d621)"
level=debug ts=2022-11-02T07:15:32.570310264Z caller=template.go:69 msg="executing template" template="{{ template \"jira.description\" . }}"
level=debug ts=2022-11-02T07:15:32.570392772Z caller=template.go:90 msg="template output" output="Labels:\n - alertname = KubePodCrashLooping\n - cluster = mycluster\n - container = stress\n - country = dev\n - endpoint = http\n - env_type = dev\n - instance = IP:8080\n - job = kube-state-metrics\n - namespace = namespace\n - pod = stress\n - prometheus = namespace/prometheus-prometheus\n - service = kube-state-metrics\n - severity = warning\n - uid = 3a0721cf-5624-4f11-a565-eb094e27d621\n\nAnnotations:\n - description = Pod namespace/stress (stress) is restarting 1.05 times / 10 minutes.\n - runbook_url = https://github.com/nlamirault/monitoring-mixins/tree/master/runbooks/kubernetes-mixin-runbook.md#alert-name-kubepodcrashlooping\n - summary = Pod is crash looping.\n\nSource: https://prometheus.mycluster/graph?g0.expr=rate%28kube_pod_container_status_restarts_total%7Bjob%3D%22kube-state-metrics%22%7D%5B10m%5D%29+%2A+60+%2A+5+%3E+0&g0.tab=1\n"
level=info ts=2022-11-02T07:15:32.570406896Z caller=notify.go:138 msg="no recent matching issue found, creating new issue" label="ALERT{alertname=\"KubePodCrashLooping\",cluster=\"mycluster\",service=\"kube-state-metrics\"}"
level=debug ts=2022-11-02T07:15:32.570416835Z caller=template.go:69 msg="executing template" template=Bug
level=debug ts=2022-11-02T07:15:32.570422244Z caller=template.go:71 msg="returning unchanged"
level=debug ts=2022-11-02T07:15:32.570429892Z caller=template.go:69 msg="executing template" template=Critical
level=debug ts=2022-11-02T07:15:32.570434324Z caller=template.go:71 msg="returning unchanged"
level=debug ts=2022-11-02T07:15:32.570488074Z caller=notify.go:359 msg=create issue="{Expand: Type:{Self: ID: Description: IconURL: Name:Bug Subtask:false AvatarID:0} Project:{Expand: Self: ID: Key:PROJECT Description: Lead:{Self: AccountID: AccountType: Name: Key: Password: EmailAddress: AvatarUrls:{Four8X48: Two4X24: One6X16: Three2X32:} DisplayName: Active:false TimeZone: Locale: ApplicationKeys:[]} Components:[] IssueTypes:[] URL: Email: AssigneeType: Versions:[] Name: Roles:map[] AvatarUrls:{Four8X48: Two4X24: One6X16: Three2X32:} ProjectCategory:{Self: ID: Name: Description:}} Environment: Resolution:<nil> Priority:0xc00040c120 Resolutiondate:{wall:0 ext:0 loc:<nil>} Created:{wall:0 ext:0 loc:<nil>} Duedate:{wall:0 ext:0 loc:<nil>} Watches:<nil> Assignee:<nil> Updated:{wall:0 ext:0 loc:<nil>} Description:Labels:\n - alertname = KubePodCrashLooping\n - cluster = mycluster\n - container = stress\n - country = dev\n - endpoint = http\n - env_type = dev\n - instance = IP:8080\n - job = kube-state-metrics\n - namespace = namespace\n - pod = stress\n - prometheus = namespace/prometheus-prometheus\n - service = kube-state-metrics\n - severity = warning\n - uid = 3a0721cf-5624-4f11-a565-eb094e27d621\n\nAnnotations:\n - description = Pod namespace/stress (stress) is restarting 1.05 times / 10 minutes.\n - runbook_url = https://github.com/nlamirault/monitoring-mixins/tree/master/runbooks/kubernetes-mixin-runbook.md#alert-name-kubepodcrashlooping\n - summary = Pod is crash looping.\n\nSource: https://prometheus.mycluster/graph?g0.expr=rate%28kube_pod_container_status_restarts_total%7Bjob%3D%22kube-state-metrics%22%7D%5B10m%5D%29+%2A+60+%2A+5+%3E+0&g0.tab=1\n Summary:[FIRING:1] KubePodCrashLooping mycluster kube-state-metrics (stress dev http dev IP:8080 kube-state-metrics namespace stress namespace/prometheus-prometheus warning 3a0721cf-5624-4f11-a565-eb094e27d621) Creator:<nil> Reporter:<nil> Components:[] Status:<nil> Progress:<nil> AggregateProgress:<nil> TimeTracking:<nil> TimeSpent:0 TimeEstimate:0 TimeOriginalEstimate:0 Worklog:<nil> IssueLinks:[] Comments:<nil> FixVersions:[] AffectsVersions:[] Labels:[ALERT{alertname=\"KubePodCrashLooping\",cluster=\"mycluster\",service=\"kube-state-metrics\"}] Subtasks:[] Attachments:[] Epic:<nil> Sprint:<nil> Parent:<nil> AggregateTimeOriginalEstimate:0 AggregateTimeSpent:0 AggregateTimeEstimate:0 Unknowns:map[]}"
level=debug ts=2022-11-02T07:15:32.596416435Z caller=notify.go:374 msg=handleJiraErrResponse api=Issue.Create err="request failed. Please analyze the request body for more details. Status code: 400" url=https://jira.example.com/rest/api/2/issue
level=error ts=2022-11-02T07:15:32.596518301Z caller=main.go:174 msg="error handling request" statusCode=500 statusText="Internal Server Error" err="JIRA request https://jira.example.com/rest/api/2/issue returned status 400 Bad Request, body \"{\\\"errorMessages\\\":[],\\\"errors\\\":{\\\"issuetype\\\":\\\"issue type is required\\\"}}\"" receiver=kubernetes_jiralert groupLabels="unsupported value type"
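
The last two log lines carry the actual diagnosis: Jira rejected the create request with a 400 and errors.issuetype: "issue type is required", which generally means the issue type jiralert sent could not be resolved for the target project. One way to check which issue types the project actually accepts is Jira's create metadata endpoint; this is a sketch assuming the v2 REST API, with placeholder credentials and the PROJECT key from the logs:

    curl -u user:password \
      "https://jira.example.com/rest/api/2/issue/createmeta?projectKeys=PROJECT&expand=projects.issuetypes"

If Bug is absent from the returned issuetypes list, a create request with issue_type: Bug will fail exactly as above.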

Kubernetes ConfigMap (jiralert.tmpl and jiralert.yml):

apiVersion: v1
data:
  jiralert.tmpl: |
    {{`{{ define "jira.summary" }}[{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] {{ .GroupLabels.SortedPairs.Values | join " " }} {{ if gt (len .CommonLabels) (len .GroupLabels) }}({{ with .CommonLabels.Remove .GroupLabels.Names }}{{ .Values | join " " }}{{ end }}){{ end }}{{ end }}

    {{ define "jira.description" }}{{ range .Alerts.Firing }}Labels:
    {{ range .Labels.SortedPairs }} - {{ .Name }} = {{ .Value }}
    {{ end }}
    Annotations:
    {{ range .Annotations.SortedPairs }} - {{ .Name }} = {{ .Value }}
    {{ end }}
    Source: {{ .GeneratorURL }}
    {{ end }}{{ end }}`}}
  jiralert.yml: |
    {{`
    defaults:
      api_url: https://jira.example.com
      user: user
      password: 'password'
      issue_type: Bug
      priority: Critical
      summary: '{{ template "jira.summary" . }}'
      description: '{{ template "jira.description" . }}'
      reopen_state: "To Do"
      wont_fix_resolution: "Won't Fix"
      reopen_duration: 0h

    receivers:
      - name: 'kubernetes_jiralert'
        project: PROJECT
        add_group_labels: false

    template: jiralert.tmpl`}}
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: alertmanager-jiralert-webhook-configmap

beneso commented Nov 7, 2022

Closing. The problem was that we were sending issue type Bug to a project that only holds alerts, where the Bug issue type is not available.
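
For anyone landing here with the same 400: the usual fix is to set issue_type to a type that actually exists in the target project's issue type scheme. A minimal sketch of a receiver-level override (jiralert receivers may override fields from defaults; "Alert" is a hypothetical type name, substitute one your project defines):

    receivers:
      - name: 'kubernetes_jiralert'
        project: PROJECT
        # hypothetical type name; must exist in PROJECT's issue type scheme
        issue_type: Alert
        add_group_labels: false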

beneso closed this as completed Nov 7, 2022