
Conversation

@kvaps kvaps commented Oct 3, 2024

  • Rework alerts
  • Add fluxcd alerts

Alerts were processed to specify the correct instance and service fields, and fluxcd alerts were added:

[screenshot: alerta]

Thanks to ChatGPT, we used the following scripts to modify the alerts from victoria-metrics-k8s-stack:

First, get the output from helm template and save it to alerts.yaml.
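A minimal sketch of that step, kept in Python for consistency with the scripts below (the release name "vm" and repo alias "vm-repo" are assumptions; a plain helm template invocation in the shell works just as well):

# Hedged sketch: render the chart and capture the manifests as alerts.yaml.
# "vm" (release name) and "vm-repo" (repo alias) are placeholders.
import subprocess

with open("alerts.yaml", "w") as out:
    subprocess.run(
        ["helm", "template", "vm", "vm-repo/victoria-metrics-k8s-stack"],
        stdout=out,
        check=True,
    )

Then run the following script: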

import yaml

# Define the mapping of alerts to their respective exported_instance
alert_exported_instance_mapping = {
    "etcdMembersDown": "{{ $labels.instance }}",
    "etcdInsufficientMembers": "{{ $labels.instance }}",
    "etcdNoLeader": "{{ $labels.instance }}",
    "etcdHighNumberOfLeaderChanges": "{{ $labels.instance }}",
    "etcdHighNumberOfFailedGRPCRequests": "{{ $labels.instance }}/{{ $labels.grpc_method }}",
    "etcdGRPCRequestsSlow": "{{ $labels.instance }}/{{ $labels.grpc_method }}",
    "etcdMemberCommunicationSlow": "{{ $labels.instance }}/{{ $labels.member }}",
    "etcdHighNumberOfFailedProposals": "{{ $labels.instance }}",
    "etcdHighFsyncDurations": "{{ $labels.instance }}",
    "etcdHighCommitDurations": "{{ $labels.instance }}",
    "etcdDatabaseQuotaLowSpace": "{{ $labels.instance }}",
    "etcdExcessiveDatabaseGrowth": "{{ $labels.instance }}",
    "etcdDatabaseHighFragmentationRatio": "{{ $labels.instance }}",
    "TargetDown": "{{ $labels.instance }}",
    "Watchdog": "global",
    "InfoInhibitor": "global",
    "KubeAPIErrorBudgetBurn": "{{ $labels.namespace }}/{{ $labels.apiserver }}",
    "KubeStateMetricsListErrors": "{{ $labels.cluster }}/kube-state-metrics",
    "KubeStateMetricsWatchErrors": "{{ $labels.cluster }}/kube-state-metrics",
    "KubeStateMetricsShardingMismatch": "{{ $labels.cluster }}/kube-state-metrics",
    "KubeStateMetricsShardsMissing": "{{ $labels.cluster }}/kube-state-metrics",
    "KubePodCrashLooping": "{{ $labels.namespace }}/{{ $labels.pod }}",
    "KubePodNotReady": "{{ $labels.namespace }}/{{ $labels.pod }}",
    "KubeDeploymentGenerationMismatch": "{{ $labels.namespace }}/{{ $labels.deployment }}",
    "KubeDeploymentReplicasMismatch": "{{ $labels.namespace }}/{{ $labels.deployment }}",
    "KubeDeploymentRolloutStuck": "{{ $labels.namespace }}/{{ $labels.deployment }}",
    "KubeStatefulSetReplicasMismatch": "{{ $labels.namespace }}/{{ $labels.statefulset }}",
    "KubeStatefulSetGenerationMismatch": "{{ $labels.namespace }}/{{ $labels.statefulset }}",
    "KubeStatefulSetUpdateNotRolledOut": "{{ $labels.namespace }}/{{ $labels.statefulset }}",
    "KubeDaemonSetRolloutStuck": "{{ $labels.namespace }}/{{ $labels.daemonset }}",
    "KubeContainerWaiting": "{{ $labels.namespace }}/{{ $labels.pod }}/{{ $labels.container }}",
    "KubeDaemonSetNotScheduled": "{{ $labels.namespace }}/{{ $labels.daemonset }}",
    "KubeDaemonSetMisScheduled": "{{ $labels.namespace }}/{{ $labels.daemonset }}",
    "KubeJobNotCompleted": "{{ $labels.namespace }}/{{ $labels.job_name }}",
    "KubeJobFailed": "{{ $labels.namespace }}/{{ $labels.job_name }}",
    "KubeHpaReplicasMismatch": "{{ $labels.namespace }}/{{ $labels.horizontalpodautoscaler }}",
    "KubeHpaMaxedOut": "{{ $labels.namespace }}/{{ $labels.horizontalpodautoscaler }}",
    "KubeCPUOvercommit": "{{ $labels.cluster }}",
    "KubeMemoryOvercommit": "{{ $labels.cluster }}",
    "KubeCPUQuotaOvercommit": "{{ $labels.cluster }}",
    "KubeMemoryQuotaOvercommit": "{{ $labels.cluster }}",
    "KubeQuotaAlmostFull": "{{ $labels.namespace }}",
    "KubeQuotaFullyUsed": "{{ $labels.namespace }}",
    "KubeQuotaExceeded": "{{ $labels.namespace }}",
    "CPUThrottlingHigh": "{{ $labels.namespace }}/{{ $labels.pod }}/{{ $labels.container }}",
    "KubePersistentVolumeFillingUp": "{{ $labels.namespace }}/{{ $labels.persistentvolumeclaim }}",
    "KubePersistentVolumeInodesFillingUp": "{{ $labels.namespace }}/{{ $labels.persistentvolumeclaim }}",
    "KubePersistentVolumeErrors": "{{ $labels.persistentvolume }}",
    "KubeClientCertificateExpiration": "{{ $labels.namespace }}/{{ $labels.pod }}",
    "KubeAggregatedAPIErrors": "{{ $labels.name }}/{{ $labels.namespace }}",
    "KubeAggregatedAPIDown": "{{ $labels.name }}/{{ $labels.namespace }}",
    "KubeAPIDown": "{{ $labels.cluster }}/apiserver",
    "KubeAPITerminatedRequests": "{{ $labels.cluster }}/apiserver",
    "KubeControllerManagerDown": "{{ $labels.instance }}/controller-manager",
    "KubeNodeNotReady": "{{ $labels.node }}",
    "KubeNodeUnreachable": "{{ $labels.node }}",
    "KubeletTooManyPods": "{{ $labels.node }}",
    "KubeNodeReadinessFlapping": "{{ $labels.node }}",
    "KubeletPlegDurationHigh": "{{ $labels.node }}",
    "KubeletPodStartUpLatencyHigh": "{{ $labels.node }}",
    "KubeletClientCertificateExpiration": "{{ $labels.node }}",
    "KubeletServerCertificateExpiration": "{{ $labels.node }}",
    "KubeletClientCertificateRenewalErrors": "{{ $labels.node }}",
    "KubeletServerCertificateRenewalErrors": "{{ $labels.node }}",
    "KubeletDown": "{{ $labels.node }}",
    "KubeSchedulerDown": "{{ $labels.scheduler }}",
    "KubeVersionMismatch": "{{ $labels.cluster }}",
    "NodeFilesystemSpaceFillingUp": "{{ $labels.instance }}/{{ $labels.device }}",
    "NodeFilesystemAlmostOutOfSpace": "{{ $labels.instance }}/{{ $labels.device }}",
    "NodeFilesystemFilesFillingUp": "{{ $labels.instance }}/{{ $labels.device }}",
    "NodeFilesystemAlmostOutOfFiles": "{{ $labels.instance }}/{{ $labels.device }}",
    "NodeNetworkReceiveErrs": "{{ $labels.instance }}/{{ $labels.device }}",
    "NodeNetworkTransmitErrs": "{{ $labels.instance }}/{{ $labels.device }}",
    "NodeHighNumberConntrackEntriesUsed": "{{ $labels.instance }}",
    "NodeTextFileCollectorScrapeError": "{{ $labels.instance }}",
    "NodeClockSkewDetected": "{{ $labels.instance }}",
    "NodeClockNotSynchronising": "{{ $labels.instance }}",
    "NodeRAIDDegraded": "{{ $labels.instance }}/{{ $labels.device }}",
    "NodeRAIDDiskFailure": "{{ $labels.instance }}/{{ $labels.device }}",
    "NodeFileDescriptorLimit": "{{ $labels.instance }}",
    "NodeCPUHighUsage": "{{ $labels.instance }}",
    "NodeSystemSaturation": "{{ $labels.instance }}",
    "NodeMemoryMajorPagesFaults": "{{ $labels.instance }}",
    "NodeMemoryHighUtilization": "{{ $labels.instance }}",
    "NodeDiskIOSaturation": "{{ $labels.instance }}/{{ $labels.device }}",
    "NodeSystemdServiceFailed": "{{ $labels.instance }}/{{ $labels.name }}",
    "NodeBondingDegraded": "{{ $labels.instance }}/{{ $labels.master }}",
    "NodeNetworkInterfaceFlapping": "{{ $labels.instance }}/{{ $labels.device }}",
}

# Function to add the missing labels to an alert
def process_alert(alert, group_name):
    alert_name = alert.get('alert')
    labels = alert.setdefault('labels', {})  # guard against alerts with no labels at all
    # Add exported_instance from the pre-prepared mapping
    if alert_name in alert_exported_instance_mapping:
        labels.setdefault('exported_instance', alert_exported_instance_mapping[alert_name])
    # Default the service label to the enclosing rule group's name
    labels.setdefault('service', group_name)

# Function to preserve multiline format in expr
def literal_presenter(dumper, data):
    if isinstance(data, str) and '\n' in data:  # If the string contains line breaks, use block style
        return dumper.represent_scalar('tag:yaml.org,2002:str', data, style='|')
    return dumper.represent_scalar('tag:yaml.org,2002:str', data)

yaml.add_representer(str, literal_presenter)

# Reading the original YAML file with multiple documents
with open('alerts.yaml', 'r') as file:
    documents = list(yaml.safe_load_all(file))  # Read all documents

# Transform each document
processed_documents = []
for doc in documents:
    if not doc:  # skip empty documents that helm template output may contain
        continue
    # If the document has rule groups, process them
    if 'spec' in doc and 'groups' in doc['spec']:
        for group in doc['spec']['groups']:
            group_name = group['name']
            for rule in group['rules']:
                if 'alert' in rule:
                    process_alert(rule, group_name)
    processed_documents.append(doc)

# Write all processed documents back to the YAML file, preserving the original multiline format
with open('alerts_modified.yaml', 'w') as file:
    yaml.dump_all(processed_documents, file, default_flow_style=False, sort_keys=False)

print("Processing complete! Changes saved to alerts_modified.yaml.")
Next, replace the original file and run a second script, which removes the chart labels from the metadata and splits the rule groups into separate files:

mv alerts_modified.yaml alerts.yaml
import yaml

# Mapping of groups to files
group_file_mapping = {
    "kubernetes-system-controller-manager": "kubernetes-system-controller-manager.yaml",
    "k8s.rules.container_memory_swap": "k8s.rules.container_memory_swap.yaml",
    "kube-prometheus-general.rules": "kube-prometheus-general.rules.yaml",
    "k8s.rules.pod_owner": "k8s.rules.pod_owner.yaml",
    "k8s.rules.container_memory_rss": "k8s.rules.container_memory_rss.yaml",
    "kube-apiserver-slos": "kube-apiserver-slos.yaml",
    "kubernetes-resources": "kubernetes-resources.yaml",
    "kube-state-metrics": "kube-state-metrics.yaml",
    "kube-scheduler.rules": "kube-scheduler.rules.yaml",
    "k8s.rules.container_memory_cache": "k8s.rules.container_memory_cache.yaml",
    "kubernetes-apps": "kubernetes-apps.yaml",
    "kube-apiserver-availability.rules": "kube-apiserver-availability.rules.yaml",
    "etcd": "etcd.yaml",
    "general.rules": "general.rules.yaml",
    "kubelet.rules": "kubelet.rules.yaml",
    "kubernetes-system-scheduler": "kubernetes-system-scheduler.yaml",
    "kube-apiserver-histogram.rules": "kube-apiserver-histogram.rules.yaml",
    "node.rules": "node.rules.yaml",
    "kube-prometheus-node-recording.rules": "kube-prometheus-node-recording.rules.yaml",
    "kubernetes-system": "kubernetes-system.yaml",
    "k8s.rules.container_cpu_usage_seconds_total": "k8s.rules.container_cpu_usage_seconds_total.yaml",
    "node-exporter.rules": "node-exporter.rules.yaml",
    "k8s.rules.container_memory_working_set_bytes": "k8s.rules.container_memory_working_set_bytes.yaml",
    "k8s.rules.container_resource": "k8s.rules.container_resource.yaml",
    "node-network": "node-network.yaml",
    "node-exporter": "node-exporter.yaml",
    "kubernetes-system-apiserver": "kubernetes-system-apiserver.yaml",
    "kubernetes-system-kubelet": "kubernetes-system-kubelet.yaml",
    "kube-apiserver-burnrate.rules": "kube-apiserver-burnrate.rules.yaml",
    "kubernetes-storage": "kubernetes-storage.yaml"
}

# Function to remove labels from metadata
def remove_labels_from_metadata(doc):
    if 'metadata' in doc and 'labels' in doc['metadata']:
        del doc['metadata']['labels']

# Function to preserve multiline format in expr
def literal_presenter(dumper, data):
    if isinstance(data, str) and '\n' in data:  # If the string contains line breaks, use block style
        return dumper.represent_scalar('tag:yaml.org,2002:str', data, style='|')
    return dumper.represent_scalar('tag:yaml.org,2002:str', data)

yaml.add_representer(str, literal_presenter)

# Reading the original YAML file with multiple documents
with open('alerts.yaml', 'r') as file:
    documents = list(yaml.safe_load_all(file))  # Read all documents

# Create a dictionary for files where we'll collect the groups
file_groups = {}

# Transform each document
processed_documents = []
for doc in documents:
    if not doc:  # skip empty documents that helm template output may contain
        continue
    # Remove labels from metadata
    remove_labels_from_metadata(doc)

    if 'spec' in doc and 'groups' in doc['spec']:
        for group in doc['spec']['groups']:
            group_name = group['name']
            # If the group matches a file, add it to the dictionary
            if group_name in group_file_mapping:
                file_name = group_file_mapping[group_name]
                if file_name not in file_groups:
                    # Copy metadata but without labels
                    cleaned_metadata = {k: v for k, v in doc['metadata'].items() if k != 'labels'}
                    file_groups[file_name] = {
                        'apiVersion': doc['apiVersion'],
                        'kind': doc['kind'],
                        'metadata': cleaned_metadata,
                        'spec': {'groups': []}
                    }
                file_groups[file_name]['spec']['groups'].append(group)
    processed_documents.append(doc)

# Create files and write the corresponding groups
for file_name, content in file_groups.items():
    with open(file_name, 'w') as out_file:
        yaml.dump(content, out_file, default_flow_style=False, sort_keys=False)

# Write all processed documents back to the YAML file, preserving the original multiline format
with open('alerts_modified.yaml', 'w') as file:
    yaml.dump_all(processed_documents, file, default_flow_style=False, sort_keys=False)

print("Processing complete! Groups have been distributed to files, 'labels' removed from 'metadata', and saved to 'alerts_modified.yaml'.")

Let's automate this later.
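As a rough, hedged sketch, the automation could fold both passes into a single script; the function bodies would be the logic from the two scripts above, and the input file name is the same one used there:

# Skeleton for the future automation; the bodies are the two scripts' logic.
import yaml

def add_missing_labels(documents):
    """First pass: add exported_instance and service labels in place."""
    ...

def split_groups_into_files(documents):
    """Second pass: drop metadata labels and write one file per rule group."""
    ...

def main():
    with open("alerts.yaml") as f:
        documents = list(yaml.safe_load_all(f))
    add_missing_labels(documents)
    split_groups_into_files(documents)

if __name__ == "__main__":
    main()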

Summary by CodeRabbit

  • New Features

    • Introduced multiple alerting rules for monitoring Kubernetes resources, including etcd clusters and Flux-managed resources.
    • Added alerts for container CPU usage, memory cache, RSS, swap, and working set metrics.
    • Implemented general monitoring alerts for target status and health checks.
  • Bug Fixes

    • Removed deprecated references to the victoria-metrics-k8s-stack Helm chart, streamlining the monitoring setup.

kvaps added 2 commits October 3, 2024 15:52
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

coderabbitai bot commented Oct 3, 2024

Caution: Review failed. The pull request is closed.

Walkthrough

The changes in this pull request primarily involve the removal of the victoria-metrics-k8s-stack Helm chart from the Makefile of the monitoring package. This includes the deletion of related commands and files. Additionally, new alerting rules have been introduced across several YAML configuration files for monitoring various aspects of Kubernetes resources, including etcd clusters, Flux resources, and container metrics. Each new rule is structured under the VMRule kind and includes specific conditions for triggering alerts.

Changes

| File Path | Change Summary |
| --- | --- |
| packages/system/monitoring/Makefile | Removed references to victoria-metrics-k8s-stack, including the Helm repository addition and related commands. |
| packages/system/monitoring/alerts/etcd.yaml | Introduced alerting rules for monitoring etcd cluster health, including multiple specific alerts. |
| packages/system/monitoring/alerts/flux.yaml | Added alerting rules for monitoring Flux resources, including Helm releases and Git repositories. |
| packages/system/monitoring/alerts/general.rules.yaml | Created a new alert configuration with rules for TargetDown, Watchdog, and InfoInhibitor. |
| packages/system/monitoring/alerts/k8s.rules.container_cpu_usage_seconds_total.yaml | New rule added for monitoring container CPU usage in Kubernetes. |
| packages/system/monitoring/alerts/k8s.rules.container_memory_cache.yaml | New rule added for monitoring container memory cache metrics. |
| packages/system/monitoring/alerts/k8s.rules.container_memory_rss.yaml | New rule added for monitoring container memory RSS metrics. |
| packages/system/monitoring/alerts/k8s.rules.container_memory_swap.yaml | New rule added for monitoring container memory swap metrics. |
| packages/system/monitoring/alerts/k8s.rules.container_memory_working_set_bytes.yaml | New rule added for monitoring container working set memory. |
| packages/system/monitoring/alerts/k8s.rules.container_resource.yaml | Introduced rules for monitoring Kubernetes container resource requests and limits. |

Possibly related PRs

  • Add basic alerting system #355: This PR involves the removal of the victoria-metrics-k8s-stack component and the addition of Alerta for alert management, which is directly related to the changes made in the main PR regarding the removal of references to the victoria-metrics-k8s-stack Helm chart.
  • Upgrade grafana operator to the latest available version #356: Although this PR focuses on upgrading the Grafana operator, it is indirectly related as it may involve components that interact with the monitoring setup, including the victoria-metrics-k8s-stack, which was removed in the main PR.

🐰 In the fields where metrics grow,
A rabbit hops to and fro.
With alerts for etcd and Flux in sight,
Monitoring's now a delight!
No more charts from stacks of yore,
Just clean rules to help us score! 🌼



@kvaps kvaps merged commit b605c85 into main Oct 3, 2024
@kvaps kvaps deleted the alerts branch October 3, 2024 13:59
@kvaps kvaps changed the title from "alerts" to "Rework alerts; Add fluxcd alerts" Oct 3, 2024
chumkaska pushed a commit to chumkaska/cozystack that referenced this pull request Oct 15, 2024
- Rework alerts
- Add fluxcd alerts

---------

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>