# Prometheus Exporter for Vultr
Metrics are all prefixed `vultr_`:
| Name | Type | Description |
|---|---|---|
| `block_storage_up` | Counter | Number of Block Storage volumes |
| `block_storage_size` | Gauge | Size (GB) of Block Storage volumes |
| `exporter_build_info` | Counter | Build status (1=running) |
| `exporter_start_time` | Gauge | Start time (Unix epoch) of the Exporter |
| `kubernetes_cluster_up` | Counter | Number of Kubernetes clusters |
| `kubernetes_node_pool` | Gauge | Number of Kubernetes cluster Node Pools |
| `kubernetes_node` | Gauge | Number of Kubernetes cluster Nodes |
| `load_balancer_up` | Counter | Number of Load Balancers |
| `load_balancer_instances` | Gauge | Number of Load Balancer instances |
| `reserved_ips_up` | Counter | Number of Reserved IPs |
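Once the Exporter is running (see the run options below), the metric names above can be confirmed by scraping it directly. This is a minimal sketch, assuming the `--endpoint=0.0.0.0:8080` and `--path=/metrics` values used throughout this README:

```bash
# List every Vultr metric the Exporter currently exposes
curl --silent http://localhost:8080/metrics \
| grep "^vultr_"
```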
The container image is:

```
ghcr.io/dazwilkin/vultr-exporter:cd5548f4126d61c6433573468d9b18a3e7e6e130
```

The Exporter needs access to your Vultr API Key:

```bash
export API_KEY="[YOUR-API-KEY]"
```
To run the Exporter from source with Go:

```bash
go run ./cmd/server \
--endpoint=0.0.0.0:8080 \
--path=/metrics
```
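In another terminal, a quick liveness check (assuming the `--endpoint` and `--path` values above):

```bash
# Expect "200"
curl --silent --output /dev/null --write-out "%{http_code}\n" http://localhost:8080/metrics
```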
To run the Exporter with Podman:

```bash
API_KEY="[YOUR-API-KEY]"
IMAGE="ghcr.io/dazwilkin/vultr-exporter:cd5548f4126d61c6433573468d9b18a3e7e6e130"

podman run \
--interactive --tty --rm \
--publish=8080:8080 \
--env=API_KEY=${API_KEY} \
${IMAGE} \
--endpoint=0.0.0.0:8080 \
--path=/metrics
```
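If you use Docker rather than Podman, the same flags should work unchanged:

```bash
docker run \
--interactive --tty --rm \
--publish=8080:8080 \
--env=API_KEY=${API_KEY} \
${IMAGE} \
--endpoint=0.0.0.0:8080 \
--path=/metrics
```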
**NOTE** If running `vultr-exporter` on VKE, ensure that your `API_KEY`'s access control includes the public IP addresses of the cluster's nodes, as these will be originating the Vultr API requests. It appears these access control changes can't be made programmatically.
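One way to list the node addresses that need to be added to the API Key's access control is with `kubectl`; this sketch assumes the nodes' public IPs are reported as their `ExternalIP` addresses:

```bash
# Public (external) IP addresses of the cluster's nodes
kubectl get nodes \
--output=jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'
```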
To deploy the Exporter to Kubernetes:

```bash
API_KEY="[YOUR-API-KEY]"
IMAGE="ghcr.io/dazwilkin/vultr-exporter:cd5548f4126d61c6433573468d9b18a3e7e6e130"
NAMESPACE="exporter"

kubectl create namespace ${NAMESPACE}

kubectl create secret generic vultr \
--namespace=${NAMESPACE} \
--from-literal=apiKey=${API_KEY}

echo "
apiVersion: v1
kind: List
metadata: {}
items:
  - kind: Service
    apiVersion: v1
    metadata:
      labels:
        app: vultr-exporter
      name: vultr-exporter
    spec:
      selector:
        app: vultr-exporter
      ports:
        - name: http
          port: 8080
          targetPort: 8080
  - kind: Deployment
    apiVersion: apps/v1
    metadata:
      labels:
        app: vultr-exporter
      name: vultr-exporter
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: vultr-exporter
      template:
        metadata:
          labels:
            app: vultr-exporter
        spec:
          containers:
            - name: vultr-exporter
              image: ${IMAGE}
              command:
                - /server
              args:
                - --endpoint=0.0.0.0:8080
                - --path=/metrics
              env:
                - name: API_KEY
                  valueFrom:
                    secretKeyRef:
                      name: vultr
                      key: apiKey
                      optional: false
              ports:
                - name: metrics
                  containerPort: 8080
          restartPolicy: Always
" | kubectl apply --filename=- --namespace=${NAMESPACE}
```
```bash
# Use your preferred HTTP Load-balancer
kubectl port-forward deployment/vultr-exporter 8080:8080 \
--namespace=${NAMESPACE}
```
```bash
# To use a Vultr Load Balancer,
# replace the Service created above with a Service of type LoadBalancer
echo "
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/vultr-loadbalancer-protocol: \"http\"
  name: vultr-exporter
spec:
  type: LoadBalancer
  selector:
    app: vultr-exporter
  ports:
    - name: http
      port: 80
      targetPort: 8080
" | kubectl apply --filename=- --namespace=${NAMESPACE}
```
if [ "$(getconf LONG_BIT)" -eq 64 ]
then
# 64-bit Raspian
ARCH="GOARCH=arm64"
TAG="arm64"
else
# 32-bit Raspian
ARCH="GOARCH=arm GOARM=7"
TAG="arm32v7"
fi
podman build \
--build-arg=GOLANG_OPTIONS="CGO_ENABLED=0 GOOS=linux ${ARCH}" \
--build-arg=COMMIT=$(git rev-parse HEAD) \
--build-arg=VERSION=$(uname --kernel-release) \
--tag=ghcr.io/dazwilkin/vultr-exporter:${TAG} \
--file=./Dockerfile \
.
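The resulting image can then be run as shown earlier, substituting the locally-built tag:

```bash
podman run \
--interactive --tty --rm \
--publish=8080:8080 \
--env=API_KEY=${API_KEY} \
ghcr.io/dazwilkin/vultr-exporter:${TAG} \
--endpoint=0.0.0.0:8080 \
--path=/metrics
```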
An example Prometheus scrape configuration:

```yaml
global:
  scrape_interval: 1m
  evaluation_interval: 1m

rule_files:
  - "/etc/alertmanager/rules.yml"

scrape_configs:
  # Vultr Exporter
  - job_name: "vultr-exporter"
    static_configs:
      - targets:
          - "localhost:8080"
```
An example alerting rule (e.g. the `/etc/alertmanager/rules.yml` referenced above):

```yaml
groups:
  - name: vultr_exporter
    rules:
      - alert: vultr_kubernetes_cluster_up
        expr: vultr_kubernetes_cluster_up{} > 0
        for: 6h
        labels:
          severity: page
        annotations:
          summary: Vultr Kubernetes Engine clusters
```
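The rule file can be validated the same way, assuming it is saved at the path referenced by `rule_files` above:

```bash
promtool check rules /etc/alertmanager/rules.yml
```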
`vultr-exporter` container images are signed using Sigstore's `cosign` and may be verified:

```bash
cosign verify \
--key=./cosign.pub \
ghcr.io/dazwilkin/vultr-exporter:cd5548f4126d61c6433573468d9b18a3e7e6e130
```
**NOTE** `cosign.pub` may be downloaded here

To install `cosign`, e.g.:

```bash
go install github.com/sigstore/cosign/cmd/cosign@latest
```
Similar Exporters:

- Prometheus Exporter for Azure
- Prometheus Exporter for Fly.io
- Prometheus Exporter for GCP
- Prometheus Exporter for Koyeb
- Prometheus Exporter for Linode