create kube autoscaler integration
HadhemiDD committed Apr 26, 2024
1 parent 0fa734b commit 9731c6f
Showing 24 changed files with 591 additions and 0 deletions.
4 changes: 4 additions & 0 deletions kubernetes_autoscaler/CHANGELOG.md
@@ -0,0 +1,4 @@
# CHANGELOG - Kubernetes Autoscaler

<!-- towncrier release notes start -->

55 changes: 55 additions & 0 deletions kubernetes_autoscaler/README.md
@@ -0,0 +1,55 @@
# Agent Check: Kubernetes Autoscaler

## Overview

This check monitors [Kubernetes Autoscaler][1] through the Datadog Agent.

## Setup

Follow the instructions below to install and configure this check for an Agent running on a host. For containerized environments, see the [Autodiscovery Integration Templates][3] for guidance on applying these instructions.

### Installation

The Kubernetes Autoscaler check is included in the [Datadog Agent][2] package.
No additional installation is needed on your server.

### Configuration

1. Edit the `kubernetes_autoscaler.d/conf.yaml` file in the `conf.d/` folder at the root of your Agent's configuration directory to start collecting your Kubernetes Autoscaler performance data. See the [sample kubernetes_autoscaler.d/conf.yaml][4] for all available configuration options.

2. [Restart the Agent][5].

### Validation

[Run the Agent's status subcommand][6] and look for `kubernetes_autoscaler` under the Checks section.

## Data Collected

### Metrics

See [metadata.csv][7] for a list of metrics provided by this integration.

### Events

The Kubernetes Autoscaler integration does not include any events.

### Service Checks

The Kubernetes Autoscaler integration does not include any service checks.

See [service_checks.json][8] for a list of service checks provided by this integration.

## Troubleshooting

Need help? Contact [Datadog support][9].


[1]: **LINK_TO_INTEGRATION_SITE**
[2]: https://app.datadoghq.com/account/settings/agent/latest
[3]: https://docs.datadoghq.com/agent/kubernetes/integrations/
[4]: https://github.com/DataDog/integrations-core/blob/master/kubernetes_autoscaler/datadog_checks/kubernetes_autoscaler/data/conf.yaml.example
[5]: https://docs.datadoghq.com/agent/guide/agent-commands/#start-stop-and-restart-the-agent
[6]: https://docs.datadoghq.com/agent/guide/agent-commands/#agent-status-and-information
[7]: https://github.com/DataDog/integrations-core/blob/master/kubernetes_autoscaler/metadata.csv
[8]: https://github.com/DataDog/integrations-core/blob/master/kubernetes_autoscaler/assets/service_checks.json
[9]: https://docs.datadoghq.com/help/
10 changes: 10 additions & 0 deletions kubernetes_autoscaler/assets/configuration/spec.yaml
@@ -0,0 +1,10 @@
name: Kubernetes Autoscaler
files:
- name: kubernetes_autoscaler.yaml
  options:
  - template: init_config
    options:
    - template: init_config/default
  - template: instances
    options:
    - template: instances/default
Empty file.
1 change: 1 addition & 0 deletions kubernetes_autoscaler/assets/service_checks.json
@@ -0,0 +1 @@
[]
1 change: 1 addition & 0 deletions kubernetes_autoscaler/changelog.d/1.added
@@ -0,0 +1 @@
Initial Release

Check failure on line 1 in kubernetes_autoscaler/changelog.d/1.added (GitHub Actions / Check PR):

Your changelog entry has the wrong PR number. To fix this please run: mv kubernetes_autoscaler/changelog.d/1.added kubernetes_autoscaler/changelog.d/17463.added
4 changes: 4 additions & 0 deletions kubernetes_autoscaler/datadog_checks/__init__.py
@@ -0,0 +1,4 @@
# (C) Datadog, Inc. 2024-present
# All rights reserved
# Licensed under a 3-clause BSD style license (see LICENSE)
__path__ = __import__('pkgutil').extend_path(__path__, __name__) # type: ignore
4 changes: 4 additions & 0 deletions kubernetes_autoscaler/datadog_checks/kubernetes_autoscaler/__about__.py
@@ -0,0 +1,4 @@
# (C) Datadog, Inc. 2024-present
# All rights reserved
# Licensed under a 3-clause BSD style license (see LICENSE)
__version__ = '1.0.0'
7 changes: 7 additions & 0 deletions kubernetes_autoscaler/datadog_checks/kubernetes_autoscaler/__init__.py
@@ -0,0 +1,7 @@
# (C) Datadog, Inc. 2024-present
# All rights reserved
# Licensed under a 3-clause BSD style license (see LICENSE)
from .__about__ import __version__
from .check import KubernetesAutoscalerCheck

__all__ = ['__version__', 'KubernetesAutoscalerCheck']
98 changes: 98 additions & 0 deletions kubernetes_autoscaler/datadog_checks/kubernetes_autoscaler/check.py
@@ -0,0 +1,98 @@
# (C) Datadog, Inc. 2024-present
# All rights reserved
# Licensed under a 3-clause BSD style license (see LICENSE)
from typing import Any # noqa: F401

from datadog_checks.base import AgentCheck # noqa: F401

# from datadog_checks.base.utils.db import QueryManager
# from requests.exceptions import ConnectionError, HTTPError, InvalidURL, Timeout
# from json import JSONDecodeError


class KubernetesAutoscalerCheck(AgentCheck):

    # This will be the prefix of every metric and service check the integration sends
    __NAMESPACE__ = 'kubernetes_autoscaler'

    def __init__(self, name, init_config, instances):
        super(KubernetesAutoscalerCheck, self).__init__(name, init_config, instances)

        # Use self.instance to read the check configuration
        # self.url = self.instance.get("url")

        # If the check is going to perform SQL queries you should define a query manager here.
        # More info at
        # https://datadoghq.dev/integrations-core/base/databases/#datadog_checks.base.utils.db.core.QueryManager
        # sample_query = {
        #     "name": "sample",
        #     "query": "SELECT * FROM sample_table",
        #     "columns": [
        #         {"name": "metric", "type": "gauge"}
        #     ],
        # }
        # self._query_manager = QueryManager(self, self.execute_query, queries=[sample_query])
        # self.check_initializations.append(self._query_manager.compile_queries)

    def check(self, _):
        # type: (Any) -> None
        # The following are useful bits of code to help new users get started.

        # Perform HTTP Requests with our HTTP wrapper.
        # More info at https://datadoghq.dev/integrations-core/base/http/
        # try:
        #     response = self.http.get(self.url)
        #     response.raise_for_status()
        #     response_json = response.json()

        # except Timeout as e:
        #     self.service_check(
        #         "can_connect",
        #         AgentCheck.CRITICAL,
        #         message="Request timeout: {}, {}".format(self.url, e),
        #     )
        #     raise

        # except (HTTPError, InvalidURL, ConnectionError) as e:
        #     self.service_check(
        #         "can_connect",
        #         AgentCheck.CRITICAL,
        #         message="Request failed: {}, {}".format(self.url, e),
        #     )
        #     raise

        # except JSONDecodeError as e:
        #     self.service_check(
        #         "can_connect",
        #         AgentCheck.CRITICAL,
        #         message="JSON Parse failed: {}, {}".format(self.url, e),
        #     )
        #     raise

        # except ValueError as e:
        #     self.service_check(
        #         "can_connect", AgentCheck.CRITICAL, message=str(e)
        #     )
        #     raise

        # This is how you submit metrics
        # There are different types of metrics that you can submit (gauge, count, rate, and more).
        # More info at https://datadoghq.dev/integrations-core/base/api/#datadog_checks.base.checks.base.AgentCheck
        # self.gauge("test", 1.23, tags=['foo:bar'])

        # Perform database queries using the Query Manager
        # self._query_manager.execute()

        # This is how you use the persistent cache. This cache is file based and persists across Agent restarts.
        # If you need an in-memory cache that persists across runs,
        # you can define a dictionary in the __init__ method.
        # self.write_persistent_cache("key", "value")
        # value = self.read_persistent_cache("key")

        # If your check ran successfully, you can send the status.
        # More info at
        # https://datadoghq.dev/integrations-core/base/api/#datadog_checks.base.checks.base.AgentCheck.service_check
        # self.service_check("can_connect", AgentCheck.OK)

        # If it didn't, then it should send a critical service check
        self.service_check("can_connect", AgentCheck.CRITICAL)
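
As committed, every line of `check()` is commented out except the final `self.service_check("can_connect", AgentCheck.CRITICAL)`, so the check reports critical on every run until the scaffold is filled in. The following is a minimal sketch of what a fleshed-out version could look like; the `url` instance option and the `replicas` gauge are hypothetical illustrations, not something this commit defines.

```python
# Sketch only: `url` is a hypothetical instance option and `replicas` an
# illustrative metric; neither is part of this commit.
from datadog_checks.base import AgentCheck


class ExampleAutoscalerCheck(AgentCheck):
    __NAMESPACE__ = 'kubernetes_autoscaler'

    def __init__(self, name, init_config, instances):
        super().__init__(name, init_config, instances)
        self.url = self.instance.get('url')
        self.tags = self.instance.get('tags', [])

    def check(self, _):
        try:
            # self.http applies the Agent's shared HTTP options (timeouts, auth, TLS)
            response = self.http.get(self.url)
            response.raise_for_status()
            payload = response.json()
        except Exception as e:
            self.service_check('can_connect', AgentCheck.CRITICAL, message=str(e))
            raise
        # Assumes the endpoint returns JSON such as {"replicas": 3}
        self.gauge('replicas', payload.get('replicas', 0), tags=self.tags)
        self.service_check('can_connect', AgentCheck.OK)
```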
25 changes: 25 additions & 0 deletions kubernetes_autoscaler/datadog_checks/kubernetes_autoscaler/config_models/__init__.py
@@ -0,0 +1,25 @@
# (C) Datadog, Inc. 2024-present
# All rights reserved
# Licensed under a 3-clause BSD style license (see LICENSE)

# This file is autogenerated.
# To change this file you should edit assets/configuration/spec.yaml and then run the following commands:
# ddev -x validate config -s <INTEGRATION_NAME>
# ddev -x validate models -s <INTEGRATION_NAME>


from .instance import InstanceConfig
from .shared import SharedConfig


class ConfigMixin:
    _config_model_instance: InstanceConfig
    _config_model_shared: SharedConfig

    @property
    def config(self) -> InstanceConfig:
        return self._config_model_instance

    @property
    def shared_config(self) -> SharedConfig:
        return self._config_model_shared
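
`ConfigMixin` is how a check class opts into these generated models: combined with `AgentCheck`, it exposes the validated instance and shared configs as typed properties. A sketch of the usage pattern, assuming the convention used by other integrations-core checks (this commit's `check.py` does not adopt it yet):

```python
# Sketch: mirrors the ConfigMixin pattern used elsewhere in integrations-core;
# this commit's check.py does not use it yet.
from datadog_checks.base import AgentCheck

from .config_models import ConfigMixin


class TypedAutoscalerCheck(AgentCheck, ConfigMixin):
    __NAMESPACE__ = 'kubernetes_autoscaler'

    def check(self, _):
        # self.config is the frozen, validated InstanceConfig, so a bad type or
        # value in conf.yaml fails at load time rather than at use time.
        tags = list(self.config.tags or ())
        # 'example.interval' is an illustrative metric name
        self.gauge('example.interval', self.config.min_collection_interval, tags=tags)
```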
16 changes: 16 additions & 0 deletions kubernetes_autoscaler/datadog_checks/kubernetes_autoscaler/config_models/defaults.py
@@ -0,0 +1,16 @@
# (C) Datadog, Inc. 2024-present
# All rights reserved
# Licensed under a 3-clause BSD style license (see LICENSE)

# This file is autogenerated.
# To change this file you should edit assets/configuration/spec.yaml and then run the following commands:
# ddev -x validate config -s <INTEGRATION_NAME>
# ddev -x validate models -s <INTEGRATION_NAME>


def instance_empty_default_hostname():
    return False


def instance_min_collection_interval():
    return 15
51 changes: 51 additions & 0 deletions kubernetes_autoscaler/datadog_checks/kubernetes_autoscaler/config_models/instance.py
@@ -0,0 +1,51 @@
# (C) Datadog, Inc. 2024-present
# All rights reserved
# Licensed under a 3-clause BSD style license (see LICENSE)

# This file is autogenerated.
# To change this file you should edit assets/configuration/spec.yaml and then run the following commands:
# ddev -x validate config -s <INTEGRATION_NAME>
# ddev -x validate models -s <INTEGRATION_NAME>


from __future__ import annotations

from typing import Optional

from pydantic import BaseModel, ConfigDict, field_validator, model_validator

from datadog_checks.base.utils.functions import identity
from datadog_checks.base.utils.models import validation

from . import defaults, validators


class InstanceConfig(BaseModel):
    model_config = ConfigDict(
        validate_default=True,
        arbitrary_types_allowed=True,
        frozen=True,
    )
    empty_default_hostname: Optional[bool] = None
    min_collection_interval: Optional[float] = None
    service: Optional[str] = None
    tags: Optional[tuple[str, ...]] = None

    @model_validator(mode='before')
    def _initial_validation(cls, values):
        return validation.core.initialize_config(getattr(validators, 'initialize_instance', identity)(values))

    @field_validator('*', mode='before')
    def _validate(cls, value, info):
        field = cls.model_fields[info.field_name]
        field_name = field.alias or info.field_name
        if field_name in info.context['configured_fields']:
            value = getattr(validators, f'instance_{info.field_name}', identity)(value, field=field)
        else:
            value = getattr(defaults, f'instance_{info.field_name}', lambda: value)()

        return validation.utils.make_immutable(value)

    @model_validator(mode='after')
    def _final_validation(cls, model):
        return validation.core.check_model(getattr(validators, 'check_instance', identity)(model))
48 changes: 48 additions & 0 deletions kubernetes_autoscaler/datadog_checks/kubernetes_autoscaler/config_models/shared.py
@@ -0,0 +1,48 @@
# (C) Datadog, Inc. 2024-present
# All rights reserved
# Licensed under a 3-clause BSD style license (see LICENSE)

# This file is autogenerated.
# To change this file you should edit assets/configuration/spec.yaml and then run the following commands:
# ddev -x validate config -s <INTEGRATION_NAME>
# ddev -x validate models -s <INTEGRATION_NAME>


from __future__ import annotations

from typing import Optional

from pydantic import BaseModel, ConfigDict, field_validator, model_validator

from datadog_checks.base.utils.functions import identity
from datadog_checks.base.utils.models import validation

from . import defaults, validators


class SharedConfig(BaseModel):
    model_config = ConfigDict(
        validate_default=True,
        arbitrary_types_allowed=True,
        frozen=True,
    )
    service: Optional[str] = None

    @model_validator(mode='before')
    def _initial_validation(cls, values):
        return validation.core.initialize_config(getattr(validators, 'initialize_shared', identity)(values))

    @field_validator('*', mode='before')
    def _validate(cls, value, info):
        field = cls.model_fields[info.field_name]
        field_name = field.alias or info.field_name
        if field_name in info.context['configured_fields']:
            value = getattr(validators, f'shared_{info.field_name}', identity)(value, field=field)
        else:
            value = getattr(defaults, f'shared_{info.field_name}', lambda: value)()

        return validation.utils.make_immutable(value)

    @model_validator(mode='after')
    def _final_validation(cls, model):
        return validation.core.check_model(getattr(validators, 'check_shared', identity)(model))
13 changes: 13 additions & 0 deletions kubernetes_autoscaler/datadog_checks/kubernetes_autoscaler/config_models/validators.py
@@ -0,0 +1,13 @@
# (C) Datadog, Inc. 2024-present
# All rights reserved
# Licensed under a 3-clause BSD style license (see LICENSE)

# Here you can include additional config validators or transformers
#
# def initialize_instance(values, **kwargs):
#     if 'my_option' not in values and 'my_legacy_option' in values:
#         values['my_option'] = values['my_legacy_option']
#     if values.get('my_number') > 10:
#         raise ValueError('my_number max value is 10, got %s' % str(values.get('my_number')))
#
#     return values
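
Because `instance.py` resolves per-field hooks with `getattr(validators, f'instance_{info.field_name}', identity)`, defining a function with that naming pattern in this file attaches custom validation to a single configured option. A hypothetical example for `tags`:

```python
# Hypothetical validator, not part of this commit: runs only when `tags`
# is explicitly set on an instance (see _validate in instance.py).
def instance_tags(value, field=None):
    for tag in value:
        if ':' not in tag:
            raise ValueError(f'tag {tag!r} should be formatted as <KEY>:<VALUE>')
    return value
```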
44 changes: 44 additions & 0 deletions kubernetes_autoscaler/datadog_checks/kubernetes_autoscaler/data/conf.yaml.example
@@ -0,0 +1,44 @@
## All options defined here are available to all instances.
#
init_config:

    ## @param service - string - optional
    ## Attach the tag `service:<SERVICE>` to every metric, event, and service check emitted by this integration.
    ##
    ## Additionally, this sets the default `service` for every log source.
    #
    # service: <SERVICE>

## Every instance is scheduled independently of the others.
#
instances:

  -
    ## @param tags - list of strings - optional
    ## A list of tags to attach to every metric and service check emitted by this instance.
    ##
    ## Learn more about tagging at https://docs.datadoghq.com/tagging
    #
    # tags:
    #   - <KEY_1>:<VALUE_1>
    #   - <KEY_2>:<VALUE_2>

    ## @param service - string - optional
    ## Attach the tag `service:<SERVICE>` to every metric, event, and service check emitted by this integration.
    ##
    ## Overrides any `service` defined in the `init_config` section.
    #
    # service: <SERVICE>

    ## @param min_collection_interval - number - optional - default: 15
    ## This changes the collection interval of the check. For more information, see:
    ## https://docs.datadoghq.com/developers/write_agent_check/#collection-interval
    #
    # min_collection_interval: 15

    ## @param empty_default_hostname - boolean - optional - default: false
    ## This forces the check to send metrics with no hostname.
    ##
    ## This is useful for cluster-level checks.
    #
    # empty_default_hostname: false
