
Investigate How to Create Custom Metrics for Request Load #41

@ranchodeluxe

Description


depends on #42

Background

Given that all the eoAPI tooling is I/O bound, we can't rely on CPU by default for HPA/VPA solutions. While we can probably still use CPU utilization as one auto-scaling variable, we'll want to figure out how to build our own custom metrics to capture "request load"
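
For context on what the end state could look like, here is a minimal sketch of an `autoscaling/v2` HorizontalPodAutoscaler scaling on a per-pod custom metric. The metric name `requests_per_second` and the Deployment name `raster` are placeholders, not anything eoapi-k8s ships today; whichever exporter/adapter we land on would have to publish something like this through the custom metrics API:

```yaml
# Sketch only: assumes some exporter + metrics adapter already publishes a
# per-pod "requests_per_second" metric via the custom metrics API.
# "raster" is a placeholder Deployment name for one of the eoAPI services.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: raster-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: raster
  minReplicas: 1
  maxReplicas: 10
  metrics:
    # CPU can stay on as one signal even though the services are I/O bound
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75
    # "request load" expressed as a per-pod custom metric
    - type: Pods
      pods:
        metric:
          name: requests_per_second
        target:
          type: AverageValue
          averageValue: "50"
```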

Options

  1. Direct Instrumentation: of course we could fork the core libraries and add application-level metrics with Prometheus or OpenTelemetry hooks, but this should be avoided for now in favor of options at higher levels so we don't have to fork code and build our own images

  2. Exporters: this nginx package seems like it might be able to export request load to Prometheus "out of the box". It looks like there are specific docs about setup with the ingress-nginx-controller. Maybe a good first place to start? (See the sketch after this list.)

  3. Middlewares and Meshes: if the "exporter" above doesn't work, a service mesh such as Istio or Linkerd could be installed in the cluster to help gather this information

  4. Others?
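
For option 2, here is a rough sketch of what enabling metrics on the ingress-nginx controller might look like via its Helm chart values (this assumes the official ingress-nginx chart, and a Prometheus Operator in the cluster if the ServiceMonitor piece is used; both are assumptions about the target cluster, not things eoapi-k8s sets up today):

```yaml
# ingress-nginx Helm values sketch: expose the controller's Prometheus metrics.
# Assumes the official ingress-nginx chart; the "release: prometheus" label is a
# guess at what your Prometheus's serviceMonitorSelector expects.
controller:
  metrics:
    enabled: true              # serves /metrics from the controller (default port 10254)
    serviceMonitor:
      enabled: true            # requires the Prometheus Operator CRDs
      additionalLabels:
        release: prometheus
```

With something like that in place, per-ingress counters such as `nginx_ingress_controller_requests` should show up in Prometheus and can be turned into a request-rate signal.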

AC:

  • investigate some of the options above and choose a path
  • write some documentation about how users of eoapi-k8s can install a solution
  • think about how a solution might be included in the existing helm chart (if possible), or maybe a new helm chart? Bash Script for Different Full Installs #40 is also an option for us to think about here (a rough sketch of the adapter wiring follows this list)
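
On the helm-chart question, one way this could shake out is to ship (or at least document) a prometheus-adapter rule that turns the ingress-nginx request counter into a custom metric an HPA can consume. A sketch of the adapter values follows; the label names (`exported_namespace`, `exported_service`) depend on how Prometheus relabels the scrape, so treat them as assumptions to verify:

```yaml
# prometheus-adapter values sketch: derive a request-rate custom metric from the
# ingress-nginx counter. Verify the label names against what actually lands in
# your Prometheus before relying on this.
rules:
  custom:
    - seriesQuery: 'nginx_ingress_controller_requests{exported_namespace!="",exported_service!=""}'
      resources:
        overrides:
          exported_namespace: {resource: "namespace"}
          exported_service: {resource: "service"}
      name:
        matches: "^(.*)_requests$"
        as: "${1}_requests_per_second"
      metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'
```

Note that this variant attaches the metric to Services, so the HPA would consume it as a `type: Object` metric rather than the per-pod form sketched in the Background section; either shape works, it just changes which resource the adapter maps the series onto.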

Labels

documentation (Improvements or additions to documentation), enhancement (New feature or request)
