diff --git a/README.md b/README.md
index a59122a78e..7d7cd9f7fa 100644
--- a/README.md
+++ b/README.md
@@ -83,6 +83,15 @@ It is possible to read Prometheus metrics provided by Insights Operator. For exa
 curl --cert k8s.crt --key k8s.key -k https://localhost:8443/metrics
 ``
+## Accessing Prometheus metrics with a Kubernetes token
+```
+# Get a token for the current user
+oc whoami -t
+# Read metrics from the Pod (substitute the token printed by the previous command)
+oc exec -it deployment/insights-operator -n openshift-insights -- curl -k -H "Authorization: Bearer 8NCUZTV3mvigpHxhdIKer6AyBLce14uehzg9b2R4dPY" 'https://localhost:8443/metrics'
+```
+An example of the metrics exposed by Insights Operator can be found in [metrics.txt](docs/metrics.txt)
+
 ### Certificate and key needed to access Prometheus metrics
 Certificate and key are required to access Prometheus metrics (instead 404 Forbidden is returned). It is possible to generate these two files from Kubernetes config file. Certificate is stored in `users/admin/client-cerfificate-data` and key in `users/admin/client-key-data`. Please note that these values are encoded by using Base64 encoding, so it is needed to decode them, for example by `base64 -d`.
diff --git a/docs/arch.md b/docs/arch.md
new file mode 100644
index 0000000000..6c6d41ba90
--- /dev/null
+++ b/docs/arch.md
@@ -0,0 +1,286 @@
+Insights Operator is an OpenShift cloud-native application based on the [Operator Framework](https://github.com/operator-framework).
+The Operator Framework is a toolkit for managing cloud-native applications.
+
+Tip:
+Try installing operator-sdk and generating a new operator using https://sdk.operatorframework.io/docs/building-operators/golang/quickstart/.
+You will see how much code operator-sdk generates by default and what an operator gets out of the box.
+
+The main goal of Insights Operator is to periodically gather anonymized data from applications in the cluster and upload it
+to cloud.redhat.com for analysis.
+
+Insights Operator itself does not manage any applications; it only runs on top of the Operator Framework infrastructure.
+Following the convention for operator applications, most of the code is structured in the pkg package, and pkg/controller/operator.go
+hosts the Operator controller. Typically an operator controller reads configuration and starts periodic tasks.
+
+## How Insights Operator reads configuration
+The configuration of Insights Operator is a combination of the file [config/pod.yaml](config/pod.yaml) and configuration stored in
+the secret support in the namespace openshift-config. The support secret holds the endpoint and the interval. The secret doesn't exist by default,
+but when it exists it overrides the default settings which IO reads from config/pod.yaml.
+The support secret has:
+- endpoint (where to upload to),
+- interval (baseline for how often to gather and upload),
+- httpProxy, httpsProxy, noProxy to optionally set a custom proxy, which overrides the cluster proxy just for Insights Operator uploads,
+- username, password - if set, the insights client upload is authenticated by basic authorization with this username/password. By default the token from the pull-secret secret is used.
+
+The pull-secret contains .dockerconfigjson with a list of tokens for various docker repositories plus the authentication token for the Insights Operator upload:
+- under the auths object, the property cloud.redhat.com holds the property auth with the Bearer token used for cloud.redhat.com authentication.
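+
+To make the shape of that payload concrete, below is a minimal, illustrative Go sketch of pulling the token out of an already decoded .dockerconfigjson document (this is not the operator's actual code; the type and function names are made up, and the registry key follows the pull-secret example shown below):
+```
+// Illustrative sketch: extract the upload token from a decoded
+// .dockerconfigjson payload. Not the operator's real implementation.
+package main
+
+import (
+	"encoding/json"
+	"fmt"
+)
+
+type dockerConfigJSON struct {
+	Auths map[string]struct {
+		Auth string `json:"auth"`
+	} `json:"auths"`
+}
+
+func tokenFromPullSecret(dockerConfig []byte, registry string) (string, error) {
+	var cfg dockerConfigJSON
+	if err := json.Unmarshal(dockerConfig, &cfg); err != nil {
+		return "", err
+	}
+	entry, ok := cfg.Auths[registry]
+	if !ok {
+		return "", fmt.Errorf("no auth entry for %s", registry)
+	}
+	return entry.Auth, nil
+}
+
+func main() {
+	raw := []byte(`{"auths":{"cloud.openshift.com":{"auth":"TOKEN","email":"user@example.com"}}}`)
+	token, err := tokenFromPullSecret(raw, "cloud.openshift.com")
+	fmt.Println(token, err)
+}
+```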
+
+Content of the openshift-config secret support:
+```
+$ oc get secret support -n openshift-config -o=yaml
+apiVersion: v1
+data:
+  endpoint: aHR0cHM6Ly9jbG91ZC5yZWRoYXQuY29tL2FwaS9pbmdyZXNzL3YxL3VwbG9hZA==
+  interval: Mmg=
+kind: Secret
+metadata:
+  creationTimestamp: "2020-10-05T05:37:34Z"
+  name: support
+  namespace: openshift-config
+  resourceVersion: "823414"
+  selfLink: /api/v1/namespaces/openshift-config/secrets/support
+  uid: 0e522987-4c02-479d-8d10-e4f551e60b65
+type: Opaque
+
+$ oc get secret support -n openshift-config -o=json | jq -r .data.endpoint | base64 -d
+https://cloud.redhat.com/api/ingress/v1/upload
+$ oc get secret support -n openshift-config -o=json | jq -r .data.interval | base64 -d
+2h
+```
+The support secret can also configure an Insights Operator specific HTTP proxy using the keys httpProxy, httpsProxy and noProxy.
+
+To configure authentication to cloud.redhat.com, Insights Operator reads a preconfigured token from the secret pull-secret in the namespace
+openshift-config (where cluster-wide tokens are stored). The token for cloud.redhat.com is stored in .dockerconfigjson, inside the auth section.
+```
+oc get secret/pull-secret -n openshift-config -o json | jq -r ".data | .[]" | base64 --decode | jq
+{
+  "auths": {
+    ...
+    "cloud.openshift.com": {
+      "email": "cee-ops-admins@redhat.com",
+      "auth": "BASE64-ENCODED-JWT-TOKEN-REMOVED"
+    },
+    ...
+  }
+}
+```
+The configuration secrets are periodically refreshed by [configobserver](pkg/config/configobserver/configobserver.go). Any code can register to
+receive a signal through a channel by using config.ConfigChanged(), as for example in insightsuploader.go. It is then notified whenever the config changes.
+```
+configCh, cancelFn := c.configurator.ConfigChanged()
+```
+Internally the configObserver keeps an array of subscribers, so all of them receive the signal.
+
+## How Insights Operator schedules gathering
+A commonly used pattern in Insights Operator is that a task is started as a goroutine and runs its own cycle of periodic actions.
+These actions are mostly started from operator.go.
+They usually use wait.Until, which runs a function periodically after a short delay until the end is signalled.
+The main scheduled tasks are:
+- Gatherer
+- Uploader
+- Config Observer
+- Disk Pruner
+
+### Scheduling of the Gatherer
+The Gatherer starts information gathering from the cluster; its scheduling is handled in [periodic.go](pkg/controller/periodic/periodic.go).
+
+The periodic controller uses a producer/consumer queue and periodically adds the gatherer to it. The queue holds at most one item per gatherer name, and because there is only the gatherer "config", at most this one item is queued.
+Adding to the queue happens in periodic.periodicTrigger, which is started from the periodic.Run function.
+periodicTrigger tries to add a new item (blocking if the queue is full) every interval/2.
+
+Code: periodic.Run starts periodicTrigger:
+```
+go wait.Until(func() { c.periodicTrigger(stopCh) }, time.Second, stopCh)
+```
+periodicTrigger is special in that it spreads out the times when it inserts into the queue.
+wait.Jitter(i, m) returns a time.Duration of i plus a random duration up to i*m.
+Even when it is time to add to the queue, it spreads the inserts with AddAfter between i/4 and i/4 + random(i/4 * 2) (that is, somewhere between 1/4 and 3/4 of the interval).
+
+The consumers of the queue are 4 workers started as goroutines from the periodic.Run function.
+Because there is only one gatherer in the queue, effectively only one worker runs a task from the queue at a time. The actual start of gathering is the call to c.sync from periodic.processNextWorkItem. c.sync is also defined in periodic and calls Gather, then immediately stores the returned data to disk as one file with a timestamp.
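+
+A simplified sketch of this producer/consumer pattern (illustrative only; the real periodic.go adds jitter, status handling and error reporting) could look like this:
+```
+// Sketch of the wait.Until + workqueue pattern described above.
+package main
+
+import (
+	"fmt"
+	"time"
+
+	"k8s.io/apimachinery/pkg/util/wait"
+	"k8s.io/client-go/util/workqueue"
+)
+
+func main() {
+	queue := workqueue.NewNamed("gatherer")
+	stopCh := make(chan struct{})
+
+	// Producer: periodically enqueue the single "config" gatherer.
+	// The queue deduplicates, so at most one "config" item is queued.
+	go wait.Until(func() { queue.Add("config") }, 30*time.Second, stopCh)
+
+	// Consumers: workers taking items off the queue and running a sync.
+	for i := 0; i < 4; i++ {
+		go wait.Until(func() {
+			for {
+				item, shutdown := queue.Get()
+				if shutdown {
+					return
+				}
+				fmt.Println("gathering for", item) // the real code calls c.sync here
+				queue.Done(item)
+			}
+		}, time.Second, stopCh)
+	}
+
+	time.Sleep(2 * time.Minute)
+	close(stopCh)
+	queue.ShutDown()
+}
+```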
+
+### Scheduling and running of the Uploader
+operator.go starts a background task defined in pkg/insights/insightsuploader/insightsuploader.go. The insights uploader periodically checks,
+by calling the summarizer, whether there is any data to upload. If no data is found, the uploader continues with the next cycle.
+The uploader cycle runs the wait.Poll function, which waits until the config changes or until it is time to upload. The time to upload is determined by initialDelay.
+If this is the first upload (the lastReportedTime from the status is not set), the uploader uses interval/8 + random(interval/8 * 2) as the next upload time. This can be reset to 0 if it is safe to upload immediately. If an upload was already reported, the next upload time is based on now - lastReported + interval, with a 1.2 jitter factor applied.
+Code: this line sets the next interval in regular polling:
+```
+wait.Jitter(now.Sub(next), 1.2)
+```
+After the initialDelay is calculated, wait.Until runs a function that waits for either a config change or the initialDelay to elapse, and then continues. Every event is re-triggered by wait.Until every 15 seconds; for example, if ClusterVersion is not populated yet (because the Gatherer hasn't finished the initial gathering), it retries in 15 seconds.
+Eventually the uploader uses insightsclient.Send to run the upload itself, and then reports any errors to its status reporter.
+
+## How the Uploader authenticates to cloud.redhat.com
+insightsclient.Send creates an HTTP client and a request to the upload URL, which can be configured in config/pod.yaml or overridden by the support secret's endpoint value. The transport is encrypted with TLS, which is set up in the clientTransport() method. This method uses pkg/authorizer/clusterauthorizer.go to
+add the Bearer token, which is read from the secret pull-secret, section .auths.cloud.redhat.com.auth. clientTransport also sets the proxy, which
+can come from the proxy settings, from the support secret, or from the (cluster-wide) environment variables.
+
+## Summarising the content before upload
+The summarizer is defined in pkg/record/diskrecorder/diskrecorder.go and merges all existing archives: all archives matching the insights-*.tar.gz pattern which weren't removed and which appeared since the last check are included. mergeReader then takes the files one after another and adds each of them to the archive under its path.
+If the file names are unstable (for example when reading from the API with a Limit and reaching that limit), more files than specified by the API limit could end up merged together.
+
+## Scheduling the ConfigObserver
+Another background task started from the operator is the config observer, defined in pkg/config/configobserver/configobserver.go. The operator creates the configObserver by calling configObserver.New, which sets the default observing interval to 5 minutes.
+The Run method again uses wait.Poll, running every 5 minutes and reading both the support and pull-secret secrets.
+
+## Scheduling the diskpruner and what it does
+By default, the Insights Operator Gather calls the diskrecorder to save newly collected data into a new file, but it doesn't remove old files. That is the task of the diskpruner. The operator calls the recorder.PeriodicallyPrune() function.
+It again uses the wait.Until pattern and runs approximately every second interval.
+Internally it calls diskrecorder.Prune with maxAge = interval*6*24 (with a 2h interval that is 12 days); everything older is removed from the IO archive path (by default /tmp/insights-operator).
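+
+A minimal sketch of that pruning idea (illustrative only, not the actual diskrecorder.Prune code):
+```
+// Sketch: remove archives older than maxAge from a directory.
+package main
+
+import (
+	"fmt"
+	"os"
+	"path/filepath"
+	"time"
+)
+
+func prune(dir string, maxAge time.Duration) error {
+	cutoff := time.Now().Add(-maxAge)
+	entries, err := os.ReadDir(dir)
+	if err != nil {
+		return err
+	}
+	for _, e := range entries {
+		info, err := e.Info()
+		if err != nil {
+			return err
+		}
+		if !e.IsDir() && info.ModTime().Before(cutoff) {
+			if err := os.Remove(filepath.Join(dir, e.Name())); err != nil {
+				return err
+			}
+		}
+	}
+	return nil
+}
+
+func main() {
+	// With a 2h interval, maxAge = interval*6*24 = 12 days.
+	fmt.Println(prune("/tmp/insights-operator", 2*time.Hour*6*24))
+}
+```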
+
+## How Insights Operator sets the operator status
+The operator status is based on K8s [Pod conditions](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions).
+Code: what the Insights Operator status conditions look like:
+```
+$ oc get co insights -o=json | jq '.status.conditions'
+[
+  {
+    "lastTransitionTime": "2020-10-03T04:13:50Z",
+    "status": "False",
+    "type": "Degraded"
+  },
+  {
+    "lastTransitionTime": "2020-10-03T04:13:50Z",
+    "status": "True",
+    "type": "Available"
+  },
+  {
+    "lastTransitionTime": "2020-10-03T04:14:05Z",
+    "message": "Monitoring the cluster",
+    "status": "False",
+    "type": "Progressing"
+  },
+  {
+    "lastTransitionTime": "2020-10-03T04:14:05Z",
+    "status": "False",
+    "type": "Disabled"
+  }
+]
+```
+The status is updated by pkg/controller/status/status.go. Status runs a background task which periodically updates
+the operator status from its internal list of Sources. Any component which wants to contribute to the operator's status adds a
+SimpleReporter, which returns its actual status. The SimpleReporter is defined in controllerstatus.
+
+Code: in operator.go components add their reporters to the status Sources:
+```
+statusReporter.AddSources(uploader)
+```
+The periodic status updater calls updateStatus, which sets the operator status after merging all the provided Sources.
+The uploader's updateStatus determines whether it is safe to upload: the Cluster Operator status and the Pod's last exit code must both be healthy.
+It relies on the fact that updateStatus is called at the start of the status cycle.
+
+## How Insights Operator uses various API clients
+Internally, Insights Operator talks to the Kubernetes API server over HTTP REST queries. Each query is authenticated by a Bearer token.
+To see an actual REST query being used, you can try:
+```
+$ oc get pods -A -v=9
+I1006 12:26:33.972634 66541 loader.go:375] Config loaded from file: /home/mkunc/.kube/config
+I1006 12:26:33.977546 66541 round_trippers.go:423] curl -k -v -XGET -H "Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json" -H "User-Agent: oc/4.5.0 (linux/amd64) kubernetes/9933eb9" -H "Authorization: Bearer Xy9HoVzNdsRifGr3oCIl7pfxwkeqE2u058avw6o969w" 'https://api.sharedocp4upi43.lab.upshift.rdu2.redhat.com:6443/api/v1/pods?limit=500'
+I1006 12:26:36.075230 66541 round_trippers.go:443] GET https://api.sharedocp4upi43.lab.upshift.rdu2.redhat.com:6443/api/v1/pods?limit=500 200 OK in 2097 milliseconds
+I1006 12:26:36.075284 66541 round_trippers.go:449] Response Headers:
+I1006 12:26:36.075300 66541 round_trippers.go:452] Audit-Id: 53ad17b9-c3fe-4166-9693-2bacf60f7dcc
+I1006 12:26:36.075313 66541 round_trippers.go:452] Cache-Control: no-cache, private
+I1006 12:26:36.075326 66541 round_trippers.go:452] Content-Type: application/json
+I1006 12:26:36.075347 66541 round_trippers.go:452] Vary: Accept-Encoding
+I1006 12:26:36.075370 66541 round_trippers.go:452] Date: Tue, 06 Oct 2020 10:26:36 GMT
+I1006 12:26:36.467245 66541 request.go:1068] Response Body: {"kind":"Table","apiVersion":"meta.k8s.io/v1","metadata":{"selfLink":"/api/v1/pods"
+... (output truncated)
+```
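+
+For comparison, the same query issued through a generated, type-safe client looks roughly like this (an illustrative sketch, not code from this repository):
+```
+// Sketch: list pods with the client-go clientset instead of a raw REST call.
+package main
+
+import (
+	"context"
+	"fmt"
+
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/client-go/kubernetes"
+	"k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+	// Assumes a kubeconfig at the default location; in-cluster config works too.
+	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
+	if err != nil {
+		panic(err)
+	}
+	client, err := kubernetes.NewForConfig(config)
+	if err != nil {
+		panic(err)
+	}
+	// Equivalent of the GET /api/v1/pods?limit=500 call shown in the log above.
+	pods, err := client.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{Limit: 500})
+	if err != nil {
+		panic(err)
+	}
+	fmt.Println("pods:", len(pods.Items))
+}
+```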
+
+Adding the Bearer token and building the REST query is handled automatically for us by clients, which are generated, type-safe golang libraries,
+like [github.com/openshift/client-go](github.com/openshift/client-go) or [github.com/kubernetes/client-go](github.com/kubernetes/client-go).
+Both libraries are generated by automation which specifies from which API repo and which API group the client is generated.
+All these clients are created in [operator.go](pkg/controller/operator.go) from the KUBECONFIG envvar defined in the cluster and passed into [clusterconfig.go](pkg/controller/clusterconfig.go).
+
+## The credentials used in clients
+The IO deployment [manifest](manifests/06-deployment.yaml) specifies the service account operator (serviceAccountName: operator). This is the account under which Insights Operator runs, reads its configuration and also reads the metrics.
+Because Insights Operator needs quite powerful credentials to access cluster-wide resources, it has one more service account called gather. It is created
+in the [manifest](manifests/03-clusterrole.yaml).
+Code: to verify that the gather account has the permission to call the verb list on machinesets, you can use:
+```
+kubectl auth can-i list machinesets --as=system:serviceaccount:openshift-insights:gather
+yes
+```
+This account is impersonated by the clients used for the Gather API calls. The impersonated account is set in operator.go:
+Code: in operator.go a specific API client uses the impersonated account:
+```
+	gatherKubeConfig := rest.CopyConfig(controller.KubeConfig)
+	if len(s.Impersonate) > 0 {
+		gatherKubeConfig.Impersonate.UserName = s.Impersonate
+	}
+	// .. and later on this impersonated config is used to create the other clients
+	gatherConfigClient, err := configv1client.NewForConfig(gatherKubeConfig)
+```
+
+Code: the impersonated account is specified in config/pod.yaml (or config/local.yaml) using:
+```
+impersonate: system:serviceaccount:openshift-insights:gather
+```
+To test whether the client has the right permissions, the command mentioned above can be used with the verb, API and service account.
+
+Note: I was only able to test missing permissions on OCP 4.3; higher versions seem to always pass. Maybe higher versions
+don't have RBAC enabled.
+
+Code: example error returned from the API, in this case when getting the config from imageregistry:
+```
+configs.imageregistry.operator.openshift.io "cluster" is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot get resource "configs" in API group "imageregistry.operator.openshift.io" at the cluster scope
+```
+
+## How API extensions work
+If a cloud-native application wants to add a Kubernetes API endpoint, it needs to define it using [K8s API extensions](https://kubernetes.io/docs/concepts/extend-kubernetes/), that is, define a Custom Resource Definition. OpenShift itself defines its types in [github.com/openshift/api](github.com/openshift/api) (ClusterOperators, Proxy, Image, ..). Thus, to use OpenShift's APIs, we need to use OpenShift's generated client-go client.
+If we need to use the API of some other operator, we first have to find out whether that operator defines an API.
+
+Typically, when an operator defines a new CRD type, the type is defined inside its repo (for example [Machine Config Operator's MachineConfig](https://github.com/openshift/machine-config-operator/tree/master/pkg/apis/machineconfiguration.openshift.io)).
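+
+One way to find out whether a cluster actually serves another operator's API group, before deciding which client to use for it, is the discovery client (a hedged sketch, not code from this repository; the group/version below is just an example):
+```
+// Sketch: check whether machineconfiguration.openshift.io/v1 is served.
+package main
+
+import (
+	"fmt"
+
+	"k8s.io/client-go/discovery"
+	"k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
+	if err != nil {
+		panic(err)
+	}
+	dc, err := discovery.NewDiscoveryClientForConfig(config)
+	if err != nil {
+		panic(err)
+	}
+	resources, err := dc.ServerResourcesForGroupVersion("machineconfiguration.openshift.io/v1")
+	if err != nil {
+		fmt.Println("API group/version not available:", err)
+		return
+	}
+	for _, r := range resources.APIResources {
+		fmt.Println(r.Name)
+	}
+}
+```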
+
+To talk to a specific API, we need a generated clientset and generated lister types for the CRD type. There are three possibilities:
+- the operator generates neither clientset nor lister types
+- the operator generates only lister types
+- the operator generates both clientset and lister types
+
+Machine Config Operator defines:
+- its lister types [here](https://github.com/openshift/machine-config-operator/tree/master/pkg/generated/listers/machineconfiguration.openshift.io/v1)
+- its ClientSet [here](https://github.com/openshift/machine-config-operator/blob/master/pkg/generated/clientset/versioned/clientset.go)
+
+Normally such generated code is not intended for other consumers, unless it is published in a separate API library. For example
+[Operator Lifecycle Manager](https://github.com/operator-framework/operator-lifecycle-manager) defines its CRD types [here](https://github.com/operator-framework/api/tree/master/pkg/operators/v1alpha1). The Operator Framework exposes only the CRD and lister types in its api repo, not a ClientSet.
+
+One problem with adding another operator to go.mod is that the other operator usually has its own reference to k8s/api (and the related k8s/library-go), which might differ from what Insights Operator is using; this can cause issues during compilation (when the referenced operator uses an API from a newer k8s api).
+
+If it is impossible to reference the operator, or the operator doesn't expose generated lister or ClientSet types (in all these cases we don't have a type-safe
+API), we can still use the non-type-safe [dynamic client](k8s.io/client-go/dynamic). There are two cases: lister types exist but no ClientSet, or no lister types exist at all; both have examples [here](https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.6.2/pkg/client#example-Client-List).
+Such a client is used in [GatherMachineSet](pkg/gather/clusterconfig/clusterconfig.go).
+
+## Gathering the data
+When periodic.go calls the method Gather of the Gatherer interface, it actually calls pkg/gather/clusterconfig/clusterconfig.go.
+The Gather function calls the gathering functions one by one (each collecting the results of a Kubernetes API call). Each GatherXX method returns a record object,
+which is a list of records with a name (how the file will be called in the archive) and the actual data. The actual data (any struct) has to implement the Marshalable interface,
+which requires a Marshal method.
+
+The structure of the Gather calls is created in the Gather method of pkg/gather/clusterconfig. The method creates the list of gathering functions
+and passes them to the record.Collect method in pkg/record/interface.go. The Collect method works in such a way that any error returned from a gathering function
+is stored, but all the following functions are still called, unless the parent context returns an error (for example on timeout).
+
+Each result is stored into a record.Item as a Marshalable item, using either the golang JSON marshaller or the K8s API serializers; the latter have to be explicitly registered in an init func. The record is written to the archive under its Name, which specifies the full relative path including folders. The extension of a particular record file is defined by the GetExtension() func; most of them are "json" today, except metrics or id.
+
+## Downloading and exposing Archive Analysis
+After the successful upload of an archive, the progress monitoring task starts. By default it waits 1m before it checks whether the results of the archive analysis (done by an external pipeline in cloud.redhat.com) are available.
The report contains LastUpdatedAt timestamp, and verifies if report has changed its state (for this cluster) since the last time. If there was no +update (yet), it retries its download of analysis, because we have uploaded the data, but no analysis was provided back yet. +The successfully downloaded report is then being reported as IO metric health_statuses_insights. +Code: Example of reported metrics: +``` +# HELP health_statuses_insights [ALPHA] Information about the cluster health status as detected by Insights tooling. +# TYPE health_statuses_insights gauge +health_statuses_insights{metric="critical"} 0 +health_statuses_insights{metric="important"} 0 +health_statuses_insights{metric="low"} 1 +health_statuses_insights{metric="moderate"} 1 +health_statuses_insights{metric="total"} 2 +``` \ No newline at end of file diff --git a/docs/metrics.txt b/docs/metrics.txt new file mode 100644 index 0000000000..59bbb292cb --- /dev/null +++ b/docs/metrics.txt @@ -0,0 +1,386 @@ +# HELP apiserver_audit_event_total [ALPHA] Counter of audit events generated and sent to the audit backend. +# TYPE apiserver_audit_event_total counter +apiserver_audit_event_total 0 +# HELP apiserver_audit_requests_rejected_total [ALPHA] Counter of apiserver requests rejected due to an error in audit logging backend. +# TYPE apiserver_audit_requests_rejected_total counter +apiserver_audit_requests_rejected_total 0 +# HELP apiserver_client_certificate_expiration_seconds [ALPHA] Distribution of the remaining lifetime on the certificate used to authenticate a request. +# TYPE apiserver_client_certificate_expiration_seconds histogram +apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0 +apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0 +apiserver_client_certificate_expiration_seconds_bucket{le="3600"} 0 +apiserver_client_certificate_expiration_seconds_bucket{le="7200"} 0 +apiserver_client_certificate_expiration_seconds_bucket{le="21600"} 0 +apiserver_client_certificate_expiration_seconds_bucket{le="43200"} 0 +apiserver_client_certificate_expiration_seconds_bucket{le="86400"} 0 +apiserver_client_certificate_expiration_seconds_bucket{le="172800"} 0 +apiserver_client_certificate_expiration_seconds_bucket{le="345600"} 0 +apiserver_client_certificate_expiration_seconds_bucket{le="604800"} 0 +apiserver_client_certificate_expiration_seconds_bucket{le="2.592e+06"} 0 +apiserver_client_certificate_expiration_seconds_bucket{le="7.776e+06"} 0 +apiserver_client_certificate_expiration_seconds_bucket{le="1.5552e+07"} 0 +apiserver_client_certificate_expiration_seconds_bucket{le="3.1104e+07"} 0 +apiserver_client_certificate_expiration_seconds_bucket{le="+Inf"} 0 +apiserver_client_certificate_expiration_seconds_sum 0 +apiserver_client_certificate_expiration_seconds_count 0 +# HELP apiserver_current_inflight_requests [ALPHA] Maximal number of currently used inflight request limit of this apiserver per request kind in last second. +# TYPE apiserver_current_inflight_requests gauge +apiserver_current_inflight_requests{requestKind="mutating"} 0 +apiserver_current_inflight_requests{requestKind="readOnly"} 0 +# HELP apiserver_envelope_encryption_dek_cache_fill_percent [ALPHA] Percent of the cache slots currently occupied by cached DEKs. +# TYPE apiserver_envelope_encryption_dek_cache_fill_percent gauge +apiserver_envelope_encryption_dek_cache_fill_percent 0 +# HELP apiserver_storage_data_key_generation_duration_seconds [ALPHA] Latencies in seconds of data encryption key(DEK) generation operations. 
+# TYPE apiserver_storage_data_key_generation_duration_seconds histogram +apiserver_storage_data_key_generation_duration_seconds_bucket{le="5e-06"} 0 +apiserver_storage_data_key_generation_duration_seconds_bucket{le="1e-05"} 0 +apiserver_storage_data_key_generation_duration_seconds_bucket{le="2e-05"} 0 +apiserver_storage_data_key_generation_duration_seconds_bucket{le="4e-05"} 0 +apiserver_storage_data_key_generation_duration_seconds_bucket{le="8e-05"} 0 +apiserver_storage_data_key_generation_duration_seconds_bucket{le="0.00016"} 0 +apiserver_storage_data_key_generation_duration_seconds_bucket{le="0.00032"} 0 +apiserver_storage_data_key_generation_duration_seconds_bucket{le="0.00064"} 0 +apiserver_storage_data_key_generation_duration_seconds_bucket{le="0.00128"} 0 +apiserver_storage_data_key_generation_duration_seconds_bucket{le="0.00256"} 0 +apiserver_storage_data_key_generation_duration_seconds_bucket{le="0.00512"} 0 +apiserver_storage_data_key_generation_duration_seconds_bucket{le="0.01024"} 0 +apiserver_storage_data_key_generation_duration_seconds_bucket{le="0.02048"} 0 +apiserver_storage_data_key_generation_duration_seconds_bucket{le="0.04096"} 0 +apiserver_storage_data_key_generation_duration_seconds_bucket{le="+Inf"} 0 +apiserver_storage_data_key_generation_duration_seconds_sum 0 +apiserver_storage_data_key_generation_duration_seconds_count 0 +# HELP apiserver_storage_data_key_generation_failures_total [ALPHA] Total number of failed data encryption key(DEK) generation operations. +# TYPE apiserver_storage_data_key_generation_failures_total counter +apiserver_storage_data_key_generation_failures_total 0 +# HELP apiserver_storage_envelope_transformation_cache_misses_total [ALPHA] Total number of cache misses while accessing key decryption key(KEK). +# TYPE apiserver_storage_envelope_transformation_cache_misses_total counter +apiserver_storage_envelope_transformation_cache_misses_total 0 +# HELP authenticated_user_requests [ALPHA] Counter of authenticated requests broken out by username. +# TYPE authenticated_user_requests counter +authenticated_user_requests{username="other"} 431 +# HELP authentication_attempts [ALPHA] Counter of authenticated attempts. +# TYPE authentication_attempts counter +authentication_attempts{result="success"} 431 +# HELP authentication_duration_seconds [ALPHA] Authentication duration in seconds broken out by result. 
+# TYPE authentication_duration_seconds histogram +authentication_duration_seconds_bucket{result="success",le="0.001"} 0 +authentication_duration_seconds_bucket{result="success",le="0.002"} 0 +authentication_duration_seconds_bucket{result="success",le="0.004"} 0 +authentication_duration_seconds_bucket{result="success",le="0.008"} 60 +authentication_duration_seconds_bucket{result="success",le="0.016"} 413 +authentication_duration_seconds_bucket{result="success",le="0.032"} 424 +authentication_duration_seconds_bucket{result="success",le="0.064"} 427 +authentication_duration_seconds_bucket{result="success",le="0.128"} 430 +authentication_duration_seconds_bucket{result="success",le="0.256"} 430 +authentication_duration_seconds_bucket{result="success",le="0.512"} 431 +authentication_duration_seconds_bucket{result="success",le="1.024"} 431 +authentication_duration_seconds_bucket{result="success",le="2.048"} 431 +authentication_duration_seconds_bucket{result="success",le="4.096"} 431 +authentication_duration_seconds_bucket{result="success",le="8.192"} 431 +authentication_duration_seconds_bucket{result="success",le="16.384"} 431 +authentication_duration_seconds_bucket{result="success",le="+Inf"} 431 +authentication_duration_seconds_sum{result="success"} 4.842085615000003 +authentication_duration_seconds_count{result="success"} 431 +# HELP authentication_token_cache_active_fetch_count [ALPHA] +# TYPE authentication_token_cache_active_fetch_count gauge +authentication_token_cache_active_fetch_count{status="blocked"} 0 +authentication_token_cache_active_fetch_count{status="in_flight"} 0 +# HELP authentication_token_cache_fetch_total [ALPHA] +# TYPE authentication_token_cache_fetch_total counter +authentication_token_cache_fetch_total{status="ok"} 431 +# HELP authentication_token_cache_request_duration_seconds [ALPHA] +# TYPE authentication_token_cache_request_duration_seconds histogram +authentication_token_cache_request_duration_seconds_bucket{status="hit",le="0.005"} 1 +authentication_token_cache_request_duration_seconds_bucket{status="hit",le="0.01"} 1 +authentication_token_cache_request_duration_seconds_bucket{status="hit",le="0.025"} 1 +authentication_token_cache_request_duration_seconds_bucket{status="hit",le="0.05"} 1 +authentication_token_cache_request_duration_seconds_bucket{status="hit",le="0.1"} 1 +authentication_token_cache_request_duration_seconds_bucket{status="hit",le="0.25"} 1 +authentication_token_cache_request_duration_seconds_bucket{status="hit",le="0.5"} 1 +authentication_token_cache_request_duration_seconds_bucket{status="hit",le="1"} 1 +authentication_token_cache_request_duration_seconds_bucket{status="hit",le="2.5"} 1 +authentication_token_cache_request_duration_seconds_bucket{status="hit",le="5"} 1 +authentication_token_cache_request_duration_seconds_bucket{status="hit",le="10"} 1 +authentication_token_cache_request_duration_seconds_bucket{status="hit",le="+Inf"} 1 +authentication_token_cache_request_duration_seconds_sum{status="hit"} 0 +authentication_token_cache_request_duration_seconds_count{status="hit"} 1 +authentication_token_cache_request_duration_seconds_bucket{status="miss",le="0.005"} 394 +authentication_token_cache_request_duration_seconds_bucket{status="miss",le="0.01"} 422 +authentication_token_cache_request_duration_seconds_bucket{status="miss",le="0.025"} 426 +authentication_token_cache_request_duration_seconds_bucket{status="miss",le="0.05"} 427 +authentication_token_cache_request_duration_seconds_bucket{status="miss",le="0.1"} 430 
+authentication_token_cache_request_duration_seconds_bucket{status="miss",le="0.25"} 430 +authentication_token_cache_request_duration_seconds_bucket{status="miss",le="0.5"} 431 +authentication_token_cache_request_duration_seconds_bucket{status="miss",le="1"} 431 +authentication_token_cache_request_duration_seconds_bucket{status="miss",le="2.5"} 431 +authentication_token_cache_request_duration_seconds_bucket{status="miss",le="5"} 431 +authentication_token_cache_request_duration_seconds_bucket{status="miss",le="10"} 431 +authentication_token_cache_request_duration_seconds_bucket{status="miss",le="+Inf"} 431 +authentication_token_cache_request_duration_seconds_sum{status="miss"} 2.2419999999999876 +authentication_token_cache_request_duration_seconds_count{status="miss"} 431 +# HELP authentication_token_cache_request_total [ALPHA] +# TYPE authentication_token_cache_request_total counter +authentication_token_cache_request_total{status="hit"} 1 +authentication_token_cache_request_total{status="miss"} 431 +# HELP go_gc_duration_seconds A summary of the GC invocation durations. +# TYPE go_gc_duration_seconds summary +go_gc_duration_seconds{quantile="0"} 5.2223e-05 +go_gc_duration_seconds{quantile="0.25"} 0.000154673 +go_gc_duration_seconds{quantile="0.5"} 0.000208595 +go_gc_duration_seconds{quantile="0.75"} 0.000255495 +go_gc_duration_seconds{quantile="1"} 0.118492528 +go_gc_duration_seconds_sum 0.138714134 +go_gc_duration_seconds_count 87 +# HELP go_goroutines Number of goroutines that currently exist. +# TYPE go_goroutines gauge +go_goroutines 82 +# HELP go_info Information about the Go environment. +# TYPE go_info gauge +go_info{version="go1.15.0"} 1 +# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use. +# TYPE go_memstats_alloc_bytes gauge +go_memstats_alloc_bytes 1.3456496e+07 +# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed. +# TYPE go_memstats_alloc_bytes_total counter +go_memstats_alloc_bytes_total 6.26118416e+08 +# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table. +# TYPE go_memstats_buck_hash_sys_bytes gauge +go_memstats_buck_hash_sys_bytes 1.584249e+06 +# HELP go_memstats_frees_total Total number of frees. +# TYPE go_memstats_frees_total counter +go_memstats_frees_total 3.181471e+06 +# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started. +# TYPE go_memstats_gc_cpu_fraction gauge +go_memstats_gc_cpu_fraction 6.189388813466055e-05 +# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata. +# TYPE go_memstats_gc_sys_bytes gauge +go_memstats_gc_sys_bytes 6.14472e+06 +# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use. +# TYPE go_memstats_heap_alloc_bytes gauge +go_memstats_heap_alloc_bytes 1.3456496e+07 +# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used. +# TYPE go_memstats_heap_idle_bytes gauge +go_memstats_heap_idle_bytes 4.7685632e+07 +# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use. +# TYPE go_memstats_heap_inuse_bytes gauge +go_memstats_heap_inuse_bytes 1.847296e+07 +# HELP go_memstats_heap_objects Number of allocated objects. +# TYPE go_memstats_heap_objects gauge +go_memstats_heap_objects 66468 +# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS. 
+# TYPE go_memstats_heap_released_bytes gauge +go_memstats_heap_released_bytes 4.3696128e+07 +# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system. +# TYPE go_memstats_heap_sys_bytes gauge +go_memstats_heap_sys_bytes 6.6158592e+07 +# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection. +# TYPE go_memstats_last_gc_time_seconds gauge +go_memstats_last_gc_time_seconds 1.6031741806513464e+09 +# HELP go_memstats_lookups_total Total number of pointer lookups. +# TYPE go_memstats_lookups_total counter +go_memstats_lookups_total 0 +# HELP go_memstats_mallocs_total Total number of mallocs. +# TYPE go_memstats_mallocs_total counter +go_memstats_mallocs_total 3.247939e+06 +# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures. +# TYPE go_memstats_mcache_inuse_bytes gauge +go_memstats_mcache_inuse_bytes 6944 +# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system. +# TYPE go_memstats_mcache_sys_bytes gauge +go_memstats_mcache_sys_bytes 16384 +# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures. +# TYPE go_memstats_mspan_inuse_bytes gauge +go_memstats_mspan_inuse_bytes 215016 +# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system. +# TYPE go_memstats_mspan_sys_bytes gauge +go_memstats_mspan_sys_bytes 360448 +# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place. +# TYPE go_memstats_next_gc_bytes gauge +go_memstats_next_gc_bytes 2.3357472e+07 +# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations. +# TYPE go_memstats_other_sys_bytes gauge +go_memstats_other_sys_bytes 627895 +# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator. +# TYPE go_memstats_stack_inuse_bytes gauge +go_memstats_stack_inuse_bytes 950272 +# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator. +# TYPE go_memstats_stack_sys_bytes gauge +go_memstats_stack_sys_bytes 950272 +# HELP go_memstats_sys_bytes Number of bytes obtained from system. +# TYPE go_memstats_sys_bytes gauge +go_memstats_sys_bytes 7.584256e+07 +# HELP go_threads Number of OS threads created. +# TYPE go_threads gauge +go_threads 9 +# HELP health_statuses_insights [ALPHA] Information about the cluster health status as detected by Insights tooling. +# TYPE health_statuses_insights gauge +health_statuses_insights{metric="critical"} 0 +health_statuses_insights{metric="important"} 0 +health_statuses_insights{metric="low"} 1 +health_statuses_insights{metric="moderate"} 1 +health_statuses_insights{metric="total"} 2 +# HELP insightsclient_request_recvreport_total [ALPHA] Tracks the number of reports requested +# TYPE insightsclient_request_recvreport_total counter +insightsclient_request_recvreport_total{client="default",status_code="200"} 6 +# HELP insightsclient_request_send_total [ALPHA] Tracks the number of metrics sends +# TYPE insightsclient_request_send_total counter +insightsclient_request_send_total{client="default",status_code="202"} 6 +# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds. +# TYPE process_cpu_seconds_total counter +process_cpu_seconds_total 19.18 +# HELP process_max_fds Maximum number of open file descriptors. +# TYPE process_max_fds gauge +process_max_fds 1.048576e+06 +# HELP process_open_fds Number of open file descriptors. 
+# TYPE process_open_fds gauge +process_open_fds 13 +# HELP process_resident_memory_bytes Resident memory size in bytes. +# TYPE process_resident_memory_bytes gauge +process_resident_memory_bytes 8.6491136e+07 +# HELP process_start_time_seconds Start time of the process since unix epoch in seconds. +# TYPE process_start_time_seconds gauge +process_start_time_seconds 1.60316731481e+09 +# HELP process_virtual_memory_bytes Virtual memory size in bytes. +# TYPE process_virtual_memory_bytes gauge +process_virtual_memory_bytes 1.311428608e+09 +# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes. +# TYPE process_virtual_memory_max_bytes gauge +process_virtual_memory_max_bytes -1 +# HELP workqueue_adds_total [ALPHA] Total number of adds handled by workqueue +# TYPE workqueue_adds_total counter +workqueue_adds_total{name="DynamicCABundle-serving-cert"} 115 +workqueue_adds_total{name="DynamicConfigMapCABundle-client-ca"} 117 +workqueue_adds_total{name="DynamicServingCertificateController"} 118 +workqueue_adds_total{name="gatherer"} 13 +# HELP workqueue_depth [ALPHA] Current depth of workqueue +# TYPE workqueue_depth gauge +workqueue_depth{name="DynamicCABundle-serving-cert"} 0 +workqueue_depth{name="DynamicConfigMapCABundle-client-ca"} 0 +workqueue_depth{name="DynamicServingCertificateController"} 0 +workqueue_depth{name="gatherer"} 0 +# HELP workqueue_longest_running_processor_seconds [ALPHA] How many seconds has the longest running processor for workqueue been running. +# TYPE workqueue_longest_running_processor_seconds gauge +workqueue_longest_running_processor_seconds{name="DynamicCABundle-serving-cert"} 0 +workqueue_longest_running_processor_seconds{name="DynamicConfigMapCABundle-client-ca"} 0 +workqueue_longest_running_processor_seconds{name="DynamicServingCertificateController"} 0 +workqueue_longest_running_processor_seconds{name="gatherer"} 0 +# HELP workqueue_queue_duration_seconds [ALPHA] How long in seconds an item stays in workqueue before being requested. 
+# TYPE workqueue_queue_duration_seconds histogram +workqueue_queue_duration_seconds_bucket{name="DynamicCABundle-serving-cert",le="1e-08"} 0 +workqueue_queue_duration_seconds_bucket{name="DynamicCABundle-serving-cert",le="1e-07"} 0 +workqueue_queue_duration_seconds_bucket{name="DynamicCABundle-serving-cert",le="1e-06"} 0 +workqueue_queue_duration_seconds_bucket{name="DynamicCABundle-serving-cert",le="9.999999999999999e-06"} 51 +workqueue_queue_duration_seconds_bucket{name="DynamicCABundle-serving-cert",le="9.999999999999999e-05"} 114 +workqueue_queue_duration_seconds_bucket{name="DynamicCABundle-serving-cert",le="0.001"} 115 +workqueue_queue_duration_seconds_bucket{name="DynamicCABundle-serving-cert",le="0.01"} 115 +workqueue_queue_duration_seconds_bucket{name="DynamicCABundle-serving-cert",le="0.1"} 115 +workqueue_queue_duration_seconds_bucket{name="DynamicCABundle-serving-cert",le="1"} 115 +workqueue_queue_duration_seconds_bucket{name="DynamicCABundle-serving-cert",le="10"} 115 +workqueue_queue_duration_seconds_bucket{name="DynamicCABundle-serving-cert",le="+Inf"} 115 +workqueue_queue_duration_seconds_sum{name="DynamicCABundle-serving-cert"} 0.0019529520000000006 +workqueue_queue_duration_seconds_count{name="DynamicCABundle-serving-cert"} 115 +workqueue_queue_duration_seconds_bucket{name="DynamicConfigMapCABundle-client-ca",le="1e-08"} 0 +workqueue_queue_duration_seconds_bucket{name="DynamicConfigMapCABundle-client-ca",le="1e-07"} 0 +workqueue_queue_duration_seconds_bucket{name="DynamicConfigMapCABundle-client-ca",le="1e-06"} 0 +workqueue_queue_duration_seconds_bucket{name="DynamicConfigMapCABundle-client-ca",le="9.999999999999999e-06"} 76 +workqueue_queue_duration_seconds_bucket{name="DynamicConfigMapCABundle-client-ca",le="9.999999999999999e-05"} 114 +workqueue_queue_duration_seconds_bucket{name="DynamicConfigMapCABundle-client-ca",le="0.001"} 116 +workqueue_queue_duration_seconds_bucket{name="DynamicConfigMapCABundle-client-ca",le="0.01"} 116 +workqueue_queue_duration_seconds_bucket{name="DynamicConfigMapCABundle-client-ca",le="0.1"} 117 +workqueue_queue_duration_seconds_bucket{name="DynamicConfigMapCABundle-client-ca",le="1"} 117 +workqueue_queue_duration_seconds_bucket{name="DynamicConfigMapCABundle-client-ca",le="10"} 117 +workqueue_queue_duration_seconds_bucket{name="DynamicConfigMapCABundle-client-ca",le="+Inf"} 117 +workqueue_queue_duration_seconds_sum{name="DynamicConfigMapCABundle-client-ca"} 0.09290695100000007 +workqueue_queue_duration_seconds_count{name="DynamicConfigMapCABundle-client-ca"} 117 +workqueue_queue_duration_seconds_bucket{name="DynamicServingCertificateController",le="1e-08"} 0 +workqueue_queue_duration_seconds_bucket{name="DynamicServingCertificateController",le="1e-07"} 0 +workqueue_queue_duration_seconds_bucket{name="DynamicServingCertificateController",le="1e-06"} 0 +workqueue_queue_duration_seconds_bucket{name="DynamicServingCertificateController",le="9.999999999999999e-06"} 19 +workqueue_queue_duration_seconds_bucket{name="DynamicServingCertificateController",le="9.999999999999999e-05"} 117 +workqueue_queue_duration_seconds_bucket{name="DynamicServingCertificateController",le="0.001"} 118 +workqueue_queue_duration_seconds_bucket{name="DynamicServingCertificateController",le="0.01"} 118 +workqueue_queue_duration_seconds_bucket{name="DynamicServingCertificateController",le="0.1"} 118 +workqueue_queue_duration_seconds_bucket{name="DynamicServingCertificateController",le="1"} 118 
+workqueue_queue_duration_seconds_bucket{name="DynamicServingCertificateController",le="10"} 118 +workqueue_queue_duration_seconds_bucket{name="DynamicServingCertificateController",le="+Inf"} 118 +workqueue_queue_duration_seconds_sum{name="DynamicServingCertificateController"} 0.0027722009999999997 +workqueue_queue_duration_seconds_count{name="DynamicServingCertificateController"} 118 +workqueue_queue_duration_seconds_bucket{name="gatherer",le="1e-08"} 0 +workqueue_queue_duration_seconds_bucket{name="gatherer",le="1e-07"} 0 +workqueue_queue_duration_seconds_bucket{name="gatherer",le="1e-06"} 0 +workqueue_queue_duration_seconds_bucket{name="gatherer",le="9.999999999999999e-06"} 1 +workqueue_queue_duration_seconds_bucket{name="gatherer",le="9.999999999999999e-05"} 13 +workqueue_queue_duration_seconds_bucket{name="gatherer",le="0.001"} 13 +workqueue_queue_duration_seconds_bucket{name="gatherer",le="0.01"} 13 +workqueue_queue_duration_seconds_bucket{name="gatherer",le="0.1"} 13 +workqueue_queue_duration_seconds_bucket{name="gatherer",le="1"} 13 +workqueue_queue_duration_seconds_bucket{name="gatherer",le="10"} 13 +workqueue_queue_duration_seconds_bucket{name="gatherer",le="+Inf"} 13 +workqueue_queue_duration_seconds_sum{name="gatherer"} 0.000195836 +workqueue_queue_duration_seconds_count{name="gatherer"} 13 +# HELP workqueue_retries_total [ALPHA] Total number of retries handled by workqueue +# TYPE workqueue_retries_total counter +workqueue_retries_total{name="DynamicCABundle-serving-cert"} 0 +workqueue_retries_total{name="DynamicConfigMapCABundle-client-ca"} 0 +workqueue_retries_total{name="DynamicServingCertificateController"} 0 +workqueue_retries_total{name="gatherer"} 13 +# HELP workqueue_unfinished_work_seconds [ALPHA] How many seconds of work has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases. +# TYPE workqueue_unfinished_work_seconds gauge +workqueue_unfinished_work_seconds{name="DynamicCABundle-serving-cert"} 0 +workqueue_unfinished_work_seconds{name="DynamicConfigMapCABundle-client-ca"} 0 +workqueue_unfinished_work_seconds{name="DynamicServingCertificateController"} 0 +workqueue_unfinished_work_seconds{name="gatherer"} 0 +# HELP workqueue_work_duration_seconds [ALPHA] How long in seconds processing an item from workqueue takes. 
+# TYPE workqueue_work_duration_seconds histogram +workqueue_work_duration_seconds_bucket{name="DynamicCABundle-serving-cert",le="1e-08"} 0 +workqueue_work_duration_seconds_bucket{name="DynamicCABundle-serving-cert",le="1e-07"} 0 +workqueue_work_duration_seconds_bucket{name="DynamicCABundle-serving-cert",le="1e-06"} 0 +workqueue_work_duration_seconds_bucket{name="DynamicCABundle-serving-cert",le="9.999999999999999e-06"} 0 +workqueue_work_duration_seconds_bucket{name="DynamicCABundle-serving-cert",le="9.999999999999999e-05"} 0 +workqueue_work_duration_seconds_bucket{name="DynamicCABundle-serving-cert",le="0.001"} 115 +workqueue_work_duration_seconds_bucket{name="DynamicCABundle-serving-cert",le="0.01"} 115 +workqueue_work_duration_seconds_bucket{name="DynamicCABundle-serving-cert",le="0.1"} 115 +workqueue_work_duration_seconds_bucket{name="DynamicCABundle-serving-cert",le="1"} 115 +workqueue_work_duration_seconds_bucket{name="DynamicCABundle-serving-cert",le="10"} 115 +workqueue_work_duration_seconds_bucket{name="DynamicCABundle-serving-cert",le="+Inf"} 115 +workqueue_work_duration_seconds_sum{name="DynamicCABundle-serving-cert"} 0.04687979499999999 +workqueue_work_duration_seconds_count{name="DynamicCABundle-serving-cert"} 115 +workqueue_work_duration_seconds_bucket{name="DynamicConfigMapCABundle-client-ca",le="1e-08"} 0 +workqueue_work_duration_seconds_bucket{name="DynamicConfigMapCABundle-client-ca",le="1e-07"} 0 +workqueue_work_duration_seconds_bucket{name="DynamicConfigMapCABundle-client-ca",le="1e-06"} 0 +workqueue_work_duration_seconds_bucket{name="DynamicConfigMapCABundle-client-ca",le="9.999999999999999e-06"} 8 +workqueue_work_duration_seconds_bucket{name="DynamicConfigMapCABundle-client-ca",le="9.999999999999999e-05"} 115 +workqueue_work_duration_seconds_bucket{name="DynamicConfigMapCABundle-client-ca",le="0.001"} 117 +workqueue_work_duration_seconds_bucket{name="DynamicConfigMapCABundle-client-ca",le="0.01"} 117 +workqueue_work_duration_seconds_bucket{name="DynamicConfigMapCABundle-client-ca",le="0.1"} 117 +workqueue_work_duration_seconds_bucket{name="DynamicConfigMapCABundle-client-ca",le="1"} 117 +workqueue_work_duration_seconds_bucket{name="DynamicConfigMapCABundle-client-ca",le="10"} 117 +workqueue_work_duration_seconds_bucket{name="DynamicConfigMapCABundle-client-ca",le="+Inf"} 117 +workqueue_work_duration_seconds_sum{name="DynamicConfigMapCABundle-client-ca"} 0.0043756729999999975 +workqueue_work_duration_seconds_count{name="DynamicConfigMapCABundle-client-ca"} 117 +workqueue_work_duration_seconds_bucket{name="DynamicServingCertificateController",le="1e-08"} 0 +workqueue_work_duration_seconds_bucket{name="DynamicServingCertificateController",le="1e-07"} 0 +workqueue_work_duration_seconds_bucket{name="DynamicServingCertificateController",le="1e-06"} 0 +workqueue_work_duration_seconds_bucket{name="DynamicServingCertificateController",le="9.999999999999999e-06"} 0 +workqueue_work_duration_seconds_bucket{name="DynamicServingCertificateController",le="9.999999999999999e-05"} 113 +workqueue_work_duration_seconds_bucket{name="DynamicServingCertificateController",le="0.001"} 115 +workqueue_work_duration_seconds_bucket{name="DynamicServingCertificateController",le="0.01"} 118 +workqueue_work_duration_seconds_bucket{name="DynamicServingCertificateController",le="0.1"} 118 +workqueue_work_duration_seconds_bucket{name="DynamicServingCertificateController",le="1"} 118 +workqueue_work_duration_seconds_bucket{name="DynamicServingCertificateController",le="10"} 118 
+workqueue_work_duration_seconds_bucket{name="DynamicServingCertificateController",le="+Inf"} 118 +workqueue_work_duration_seconds_sum{name="DynamicServingCertificateController"} 0.009871749999999997 +workqueue_work_duration_seconds_count{name="DynamicServingCertificateController"} 118 +workqueue_work_duration_seconds_bucket{name="gatherer",le="1e-08"} 0 +workqueue_work_duration_seconds_bucket{name="gatherer",le="1e-07"} 0 +workqueue_work_duration_seconds_bucket{name="gatherer",le="1e-06"} 0 +workqueue_work_duration_seconds_bucket{name="gatherer",le="9.999999999999999e-06"} 0 +workqueue_work_duration_seconds_bucket{name="gatherer",le="9.999999999999999e-05"} 0 +workqueue_work_duration_seconds_bucket{name="gatherer",le="0.001"} 0 +workqueue_work_duration_seconds_bucket{name="gatherer",le="0.01"} 0 +workqueue_work_duration_seconds_bucket{name="gatherer",le="0.1"} 0 +workqueue_work_duration_seconds_bucket{name="gatherer",le="1"} 0 +workqueue_work_duration_seconds_bucket{name="gatherer",le="10"} 3 +workqueue_work_duration_seconds_bucket{name="gatherer",le="+Inf"} 13 +workqueue_work_duration_seconds_sum{name="gatherer"} 236.38745128300002 +workqueue_work_duration_seconds_count{name="gatherer"} 13