Description
depends on #42
Background
Given that all the eoAPI tooling is I/O bound, we can't rely on CPU by default for HPA/VPA solutions. While we can probably still use CPU utilization as one auto-scaling variable, we'll want to figure out how to build our own custom metrics to capture "request load".
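For reference, the end state we're after is an HPA that scales on a request-rate metric rather than CPU. A minimal sketch, assuming a hypothetical `nginx_requests_per_second` custom metric exposed through a metrics adapter and a hypothetical `raster` deployment:

```yaml
# Hypothetical example: scale an eoAPI service on request rate instead of CPU.
# "raster" and "nginx_requests_per_second" are placeholder names; the metric
# would need to be exposed through a metrics adapter (e.g. prometheus-adapter).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: raster
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: raster
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: nginx_requests_per_second
        target:
          type: AverageValue
          averageValue: "50"
```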
Options
- Direct instrumentation: we could of course fork the core libraries and add application-level metrics with Prometheus or OpenTelemetry hooks, but this should be avoided for now in favor of higher-level options so we don't have to fork code and build our own images.
- Exporters: this nginx package seems like it might be able to export request load to Prometheus "out of the box". It looks like there are specific docs about setting this up with the ingress-nginx-controller. Maybe a good first starting place? (See the sketch after this list.)
- Middlewares and meshes: if the exporter option above doesn't work, a service mesh such as Istio or Linkerd could be installed in the cluster to help gather this information.
- Others?
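If we go the exporter route, the ingress-nginx helm chart can already expose controller metrics to Prometheus. A rough sketch of the values involved (the `serviceMonitor` option assumes the Prometheus Operator CRDs are installed; otherwise annotation-based scraping can be used):

```yaml
# Sketch of ingress-nginx chart values turning on the built-in Prometheus metrics.
controller:
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true   # only if the Prometheus Operator / ServiceMonitor CRD is present
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "10254"
```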
AC:
- investigate some of the options above and choose a path
- write some documentation about how users of eoapi-k8s can install a solution
- think about how a solution might be included in the existing helm chart (if possible), or maybe a new helm chart? Bash Script for Different Full Installs #40 is also an option for us to think about here
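On the helm-chart question, one possibility (assuming the exporter path works out) is to ship or document a prometheus-adapter rule that turns the controller's `nginx_ingress_controller_requests` counter into the custom metric the HPA consumes. A rough sketch of the adapter values; the exact label names depend on how Prometheus scrapes the controller, so treat this as illustrative only:

```yaml
# Sketch of prometheus-adapter chart values exposing request rate as a custom metric.
rules:
  custom:
    - seriesQuery: 'nginx_ingress_controller_requests'
      resources:
        overrides:
          namespace: {resource: "namespace"}
          service: {resource: "service"}
      name:
        matches: ""
        as: "nginx_requests_per_second"
      metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'
```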