
Client-side throttling on prometheus adapter #573

Closed
ashishvaishno opened this issue Mar 31, 2023 · 7 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.


ashishvaishno commented Mar 31, 2023

What happened?:
We are using prometheus-adapter to expose additional metrics from Prometheus for HPA.
Our prometheus-adapter rule:

  rules:
    default: false
    external:
      - seriesQuery: '{__name__=~"^rpc_outbound_request_duration_seconds$"}'
        resources:
          overrides:
            namespace:
              resource: namespace
        name:
          matches: ""
          as: service_name_p95_latency
        metricsQuery: max(<<.Series>>{quantile="0.95", service="service_name",destination_method="method_name"}) by (service,namespace)
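For reference, once the adapter has loaded this rule, a quick way to confirm the metric is served is to query the external metrics API directly (the namespace here is a placeholder for wherever the HPA lives):

  kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/service_name_p95_latency"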

We are running the latest prometheus-adapter Helm chart, version 4.1.1. Everything worked as expected until we installed Crossplane and the Upbound AWS provider on the EKS cluster. Since then, prometheus-adapter has been logging the errors below and ends up in a restart loop:

I0315 11:37:00.832149       1 request.go:533] Waited for 8.262693219s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/appsync.aws.upbound.io/v1beta1?timeout=32s
I0315 11:37:00.930525       1 request.go:533] Waited for 8.351446772s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/athena.aws.upbound.io/v1beta1?timeout=32s
I0315 11:37:01.120526       1 request.go:533] Waited for 8.551033934s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/autoscaling.aws.upbound.io/v1beta1?timeout=32s
I0315 11:37:01.427912       1 request.go:533] Waited for 8.766376003s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/autoscalingplans.aws.upbound.io/v1beta1?timeout=32s
I0315 11:37:01.535077       1 request.go:533] Waited for 8.963432443s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/backup.aws.upbound.io/v1beta1?timeout=32s
I0315 11:37:01.732613       1 request.go:533] Waited for 9.162033956s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/batch.aws.upbound.io/v1beta1?timeout=32s
I0315 11:37:01.732740       1 request.go:601] Waited for 9.162033956s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/batch.aws.upbound.io/v1beta1?timeout=32s
I0315 11:37:01.934231       1 request.go:533] Waited for 9.357336159s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/budgets.aws.upbound.io/v1beta1?timeout=32s
I0315 11:37:02.222204       1 request.go:533] Waited for 9.651356553s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/ce.aws.upbound.io/v1beta1?timeout=32s
I0315 11:37:02.337830       1 request.go:533] Waited for 9.761527171s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/chime.aws.upbound.io/v1beta1?timeout=32s

What did you expect to happen?:
prometheus-adapter works as expected and is not throttled when Crossplane and the Upbound AWS providers are installed on the cluster.

Please provide the prometheus-adapter config:

prometheus-adapter config

rules:
  default: false
  external:
    - seriesQuery: '{__name__=~"^rpc_outbound_request_duration_seconds$"}'
      resources:
        overrides:
          namespace:
            resource: namespace
      name:
        matches: ""
        as: service_name_p95_latency
      metricsQuery: max(<<.Series>>{quantile="0.95", service="service_name",destination_method="method_name"}) by (service,namespace)

Please provide the HPA resource used for autoscaling:

HPA yaml

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: service_name
spec:
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: External
      external:
        metric:
          name: service_name_p95_latency
        target:
          type: Value
          value: 2000m
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: service_name

Please provide the HPA status:

Please provide the prometheus-adapter logs with -v=6 around the time the issue happened:

prometheus-adapter logs

I0316 07:39:26.534928 1 adapter.go:114] successfully using in-cluster auth
I0316 07:39:26.943018 1 request.go:533] Waited for 230.672683ms due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/appconfig.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:27.146559 1 request.go:533] Waited for 434.214493ms due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/appflow.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:27.389778 1 request.go:533] Waited for 671.383357ms due to client-side throttling, not priority and fairness,
I0316 07:39:32.145188 1 request.go:601] Waited for 5.428125777s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/docdb.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:32.343588 1 request.go:533] Waited for 5.626614071s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/ds.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:32.593254 1 request.go:533] Waited for 5.876304051s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/dynamodb.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:32.743660 1 request.go:533] Waited for 6.026726993s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/ec2.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:32.943332 1 request.go:533] Waited for 6.226311388s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/ecr.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:33.142899 1 request.go:533] Waited for 6.425859339s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/ecrpublic.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:33.362509 1 request.go:533] Waited for 6.645461234s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/ecs.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:33.366166 1 request.go:601] Waited for 6.645461234s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/ecs.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:33.542878 1 request.go:533] Waited for 6.825813411s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/efs.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:33.743214 1 request.go:533] Waited for 7.026158155s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/eks.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:38.744611 1 request.go:533] Waited for 12.02699336s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/kinesisanalyticsv2.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:38.942946 1 request.go:533] Waited for 12.225349835s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/kinesisvideo.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:39.144294 1 request.go:533] Waited for 12.426507144s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/kms.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:40.755040 1 request.go:533] Waited for 14.037247945s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/mediaconvert.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:40.943617 1 request.go:533] Waited for 14.225838353s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/medialive.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:41.157050 1 request.go:533] Waited for 14.439127982s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/mediapackage.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:41.364687 1 request.go:533] Waited for 14.626128432s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/mediastore.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:41.364792 1 request.go:601] Waited for 14.626128432s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/mediastore.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:41.543099 1 request.go:533] Waited for 14.825271789s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/memorydb.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:48.753709 1 request.go:533] Waited for 22.034546996s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/vpcresources.k8s.aws/v1beta1?timeout=32s
I0316 07:39:48.944057 1 request.go:533] Waited for 22.224922724s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/waf.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:49.146891 1 request.go:533] Waited for 22.427696462s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/wafregional.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:49.343956 1 request.go:533] Waited for 22.624759377s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/wafv2.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:49.344077 1 request.go:601] Waited for 22.624759377s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/wafv2.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:49.543144 1 request.go:533] Waited for 22.823955974s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/workspaces.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:49.742921 1 request.go:533] Waited for 23.023692953s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/xray.aws.upbound.io/v1beta1?timeout=32s
I0316 07:39:49.943797 1 request.go:533] Waited for 23.224593494s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/external.metrics.k8s.io/v1beta1?timeout=32s
I0316 07:39:50.157126 1 request.go:533] Waited for 211.822888ms due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/api?timeout=32s
I0316 07:39:50.345680 1 request.go:533] Waited for 185.835648ms due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis?timeout=32s
I0316 07:39:50.546364 1 request.go:533] Waited for 164.852552ms due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/metrics.k8s.io/v1beta1?timeout=32s
I0316 07:39:50.743942 1 request.go:533] Waited for 364.114314ms due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/api/v1?timeout=32s
I0316 07:39:50.943360 1 request.go:533] Waited for 563.82385ms due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/apiregistration.k8s.io/v1?timeout=32s
I0316 07:39:51.183869 1 request.go:533] Waited for 804.366772ms due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/apps/v1?timeout=32s
I0316 07:39:51.343784 1 request.go:533] Waited for 964.264914ms due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/events.k8s.io/v1?timeout=32s
I0316 07:39:51.649459 1 request.go:533] Waited for 1.269722293s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/events.k8s.io/v1beta1?timeout=32s
I0316 07:39:51.655898 1 request.go:601] Waited for 1.269722293s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/events.k8s.io/v1beta1?timeout=32s
I0316 07:39:51.743538 1 request.go:533] Waited for 1.363998525s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/authentication.k8s.io/v1?timeout=32s
I0316 07:39:51.944037 1 request.go:533] Waited for 1.564366528s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/authorization.k8s.io/v1?timeout=32s
I0316 07:39:52.146438 1 request.go:533] Waited for 1.766843499s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/autoscaling/v2?timeout=32s
I0316 07:39:52.343356 1 request.go:533] Waited for 1.963768329s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/autoscaling/v1?timeout=32s
I0316 07:39:52.543541 1 request.go:533] Waited for 2.163936957s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/autoscaling/v2beta1?timeout=32s
I0316 07:39:52.743165 1 request.go:533] Waited for 2.363583964s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/autoscaling/v2beta2?timeout=32s
I0316 07:39:52.743185 1 request.go:601] Waited for 2.363583964s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/autoscaling/v2beta2?timeout=32s
I0316 07:39:52.953430 1 request.go:533] Waited for 2.57374835s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/batch/v1?timeout=32s
I0316 07:39:53.154567 1 request.go:533] Waited for 2.774240082s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/batch/v1beta1?timeout=32s
I0316 07:39:53.375952 1 request.go:533] Waited for 2.996246964s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/certificates.k8s.io/v1?timeout=32s
I0316 07:39:53.543097 1 request.go:533] Waited for 3.163394292s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/networking.k8s.io/v1?timeout=32s
I0316 07:39:53.743769 1 request.go:533] Waited for 3.364060419s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/policy/v1?timeout=32s
I0316 07:39:53.743991 1 request.go:601] Waited for 3.364060419s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/policy/v1?timeout=32s
I0316 07:39:53.962269 1 request.go:533] Waited for 3.568400775s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/policy/v1beta1?timeout=32s
I0316 07:39:54.156595 1 request.go:533] Waited for 3.776909237s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/rbac.authorization.k8s.io/v1?timeout=32s
I0316 07:39:54.343120 1 request.go:533] Waited for 3.963426695s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/storage.k8s.io/v1?timeout=32s
I0316 07:39:54.543748 1 request.go:533] Waited for 4.164006634s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/storage.k8s.io/v1beta1?timeout=32s
I0316 07:40:32.292340 1 request.go:601] Waited for 41.791813194s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/medialive.aws.upbound.io/v1beta1?timeout=32s
I0316 07:40:32.391440 1 request.go:533] Waited for 42.002009226s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/mq.aws.upbound.io/v1beta1?timeout=32s
I0316 07:40:32.544313 1 request.go:533] Waited for 42.16028138s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/mediastore.aws.upbound.io/v1beta1?timeout=32s
I0316 07:40:32.743856 1 request.go:533] Waited for 42.359826407s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/memorydb.aws.upbound.io/v1beta1?timeout=32s
I0316 07:40:32.943275 1 request.go:533] Waited for 42.559265741s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/neptune.aws.upbound.io/v1beta1?timeout=32s
I0316 07:40:33.150506 1 request.go:533] Waited for 42.766452611s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/networkfirewall.aws.upbound.io/v1beta1?timeout=32s
I0316 07:40:33.342994 1 request.go:533] Waited for 42.940316482s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/networkmanager.aws.upbound.io/v1beta1?timeout=32s
I0316 07:40:33.343098 1 request.go:601] Waited for 42.940316482s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/networkmanager.aws.upbound.io/v1beta1?timeout=32s
I0316 07:40:33.386092 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/cert/apiserver.crt::/tmp/cert/apiserver.key"
I0316 07:40:35.880867 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
I0316 07:40:36.026561 1 config.go:724] Not requested to run hook priority-and-fairness-config-consumer
I0316 07:40:36.113120 1 genericapiserver.go:412] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete
I0316 07:40:36.113257 1 genericapiserver.go:425] "[graceful-termination] shutdown event" name="ShutdownInitiated"
I0316 07:40:36.113342 1 genericapiserver.go:428] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"
I0316 07:40:36.113443 1 object_count_tracker.go:84] "StorageObjectCountTracker pruner is exiting"
I0316 07:40:36.121030 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/cert/apiserver.crt::/tmp/cert/apiserver.key" certDetail=""localhost@1677667749" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer="localhost-ca@1677667743" (2023-03-01 09:49:00 +0000 UTC to 2024-02-29 09:49:00 +0000 UTC (now=2023-03-16 07:40:36.120984539 +0000 UTC))"
I0316 07:40:36.121640 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail=""apiserver-loopback-client@1678952435" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1678952433" (2023-03-16 06:40:33 +0000 UTC to 2024-03-15 06:40:33 +0000 UTC (now=2023-03-16 07:40:36.121556435 +0000 UTC))"
I0316 07:40:36.121732 1 secure_serving.go:210] Serving securely on [::]:6443
I0316 07:40:36.122562 1 genericapiserver.go:477] [graceful-termination] waiting for shutdown to be initiated
I0316 07:40:36.122579 1 genericapiserver.go:489] [graceful-termination] RunPreShutdownHooks has completed
I0316 07:40:36.122687 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/cert/apiserver.crt::/tmp/cert/apiserver.key"
I0316 07:40:36.133444 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0316 07:40:36.133546 1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
I0316 07:40:36.133626 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0316 07:40:36.133640 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0316 07:40:36.133712 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0316 07:40:36.133725 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0316 07:40:36.133855 1 reflector.go:219] Starting reflector *v1.ConfigMap (12h0m0s) from pkg/mod/k8s.io/client-go@v0.24.3/tools/cache/reflector.go:167
I0316 07:40:36.133867 1 reflector.go:255] Listing and watching *v1.ConfigMap from pkg/mod/k8s.io/client-go@v0.24.3/tools/cache/reflector.go:167
I0316 07:40:36.134159 1 reflector.go:219] Starting reflector *v1.ConfigMap (12h0m0s) from pkg/mod/k8s.io/client-go@v0.24.3/tools/cache/reflector.go:167
I0316 07:40:36.134168 1 reflector.go:255] Listing and watching *v1.ConfigMap from pkg/mod/k8s.io/client-go@v0.24.3/tools/cache/reflector.go:167
I0316 07:40:36.135116 1 reflector.go:219] Starting reflector *v1.ConfigMap (12h0m0s) from pkg/mod/k8s.io/client-go@v0.24.3/tools/cache/reflector.go:167
I0316 07:40:36.135131 1 reflector.go:255] Listing and watching *v1.ConfigMap from pkg/mod/k8s.io/client-go@v0.24.3/tools/cache/reflector.go:167
I0316 07:40:36.143455 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0316 07:40:36.152353 1 genericapiserver.go:475] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"
I0316 07:40:36.152429 1 secure_serving.go:255] Stopped listening on [::]:6443
I0316 07:40:36.152447 1 genericapiserver.go:463] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"
I0316 07:40:36.152458 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I0316 07:40:36.152499 1 shared_informer.go:281] stop requested
E0316 07:40:36.152507 1 shared_informer.go:258] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0316 07:40:36.152517 1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0316 07:40:36.152534 1 reflector.go:225] Stopping reflector *v1.ConfigMap (12h0m0s) from pkg/mod/k8s.io/client-go@v0.24.3/tools/cache/reflector.go:167
I0316 07:40:36.152987 1 shared_informer.go:281] stop requested
E0316 07:40:36.153000 1 shared_informer.go:258] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0316 07:40:36.153082 1 reflector.go:225] Stopping reflector *v1.ConfigMap (12h0m0s) from pkg/mod/k8s.io/client-go@v0.24.3/tools/cache/reflector.go:167
I0316 07:40:36.153137 1 shared_informer.go:281] stop requested
E0316 07:40:36.153149 1 shared_informer.go:258] unable to sync caches for RequestHeaderAuthRequestController
I0316 07:40:36.153177 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/tmp/cert/apiserver.crt::/tmp/cert/apiserver.key"
I0316 07:40:36.153404 1 reflector.go:225] Stopping reflector *v1.ConfigMap (12h0m0s) from pkg/mod/k8s.io/client-go@v0.24.3/tools/cache/reflector.go:167
I0316 07:40:36.153535 1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0316 07:40:36.154636 1 requestheader_controller.go:176] Shutting down RequestHeaderAuthRequestController
I0316 07:40:36.155101 1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="874.121µs" userAgent="kube-probe/1.24+" audit-ID="50569b6f-cf1d-41a8-af93-cdce505890d0" srcIP="172.25.46.250:46540" resp=200
I0316 07:40:36.156879 1 genericapiserver.go:496] [graceful-termination] apiserver is exiting
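Reading the tail of this log, the adapter comes up on :6443 and almost immediately processes a graceful-termination shutdown, which matches the kubelet restarting the pod while discovery is still being throttled. As a stopgap while discovery is slow, the container probes could be relaxed; a minimal sketch (the delay and timeout values are assumptions, not the chart's defaults; the port and path are taken from the log lines above):

  livenessProbe:
    httpGet:
      path: /healthz
      port: 6443
      scheme: HTTPS
    initialDelaySeconds: 120  # assumption: long enough for throttled discovery to finish
    timeoutSeconds: 5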

Anything else we need to know?:

With assistance from @candonov (Christina Andonov), we found that prometheus-adapter uses k8s.io/client-go v0.24.3, which may be a contributing factor to the client-side throttling. Would it be possible to bump the client version to the latest release?
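For context, client-go's default client-side rate limit is QPS=5 with Burst=10, and enumerating the hundreds of API groups registered by the Upbound provider exhausts that budget during discovery. A minimal sketch of raising the limits on a rest.Config (this is not the adapter's actual code, and the numbers are illustrative):

  package main

  import (
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/rest"
  )

  // newClientset builds an in-cluster client with a larger rate-limit budget,
  // so that discovery across many API groups is not throttled for minutes.
  func newClientset() (*kubernetes.Clientset, error) {
      cfg, err := rest.InClusterConfig()
      if err != nil {
          return nil, err
      }
      cfg.QPS = 50    // client-go default is 5; illustrative value
      cfg.Burst = 300 // client-go default is 10; illustrative value
      return kubernetes.NewForConfig(cfg)
  }

Newer client-go releases also raise the discovery burst and cache discovery results, which is why bumping the dependency should help here.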

Regards
Ashish Vaishno

@ashishvaishno ashishvaishno added the kind/bug Categorizes issue or PR as related to a bug. label Mar 31, 2023
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Mar 31, 2023
@dgrisonnet
Member

Hi @ashishvaishno, we should definitely update the Kubernetes dependencies to the latest version. Would you like to try sending a PR?

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Mar 31, 2023
@ashishvaishno
Author

@dgrisonnet #574

@k8s-triage-robot

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

  • Confirm that this issue is still relevant with /triage accepted (org members only)
  • Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. and removed triage/accepted Indicates an issue or PR is ready to be actively worked on. labels Apr 2, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 1, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 31, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) Aug 30, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
