envoy drain duration should adapt to pod spec #34855

Closed
kyessenov opened this issue Aug 24, 2021 · 71 comments
Labels
lifecycle/automatically-closed (indicates a PR or issue that has been closed automatically), lifecycle/stale (indicates a PR or issue hasn't been manipulated by an Istio team member for a while)

Comments

@kyessenov
Contributor

kyessenov commented Aug 24, 2021

There are four (!) settings controlling the sidecar termination duration:

  1. drainDuration is envoy's graceful drain duration (default 45s).
  2. parentShutdownDuration is envoy's parent shutdown delay (default 60s).
  3. terminationDrainDuration is pilot-agent's drain delay after SIGTERM (default 5s).
  4. terminationGracePeriodSeconds is the pod's graceful termination delay after SIGTERM before SIGKILL (default 30s).

This is very confusing and is actually inconsistent since (1)-(3) are normally mesh-level and (4) is pod-level. (4) overrides (3), and (3) overrides (1), while (2) is not used. (3) is also too short to be meaningful since envoy's drain is passive.
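
For orientation, a rough sketch of where these knobs live (not authoritative; (1)-(3) go under meshConfig.defaultConfig or a per-pod proxy.istio.io/config annotation, while (4) is part of the Kubernetes pod spec):

# (1)-(3): Istio proxy config (mesh-wide defaults, overridable per workload)
meshConfig:
  defaultConfig:
    drainDuration: 45s            # (1) envoy graceful drain
    parentShutdownDuration: 60s   # (2) envoy parent shutdown delay
    terminationDrainDuration: 5s  # (3) pilot-agent delay after SIGTERM
---
# (4): Kubernetes pod spec, enforced by kubelet (SIGTERM -> SIGKILL window)
spec:
  terminationGracePeriodSeconds: 30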

The proposal is to use (4) as the main deadline, since it's orchestrated by kubelet irrespective of mesh-level settings. In this world, there is just one graceful deadline value, (4), that dictates how long envoy gracefully drains open connections before shutting down. That way envoy is not terminated abruptly before it can, for example, send final telemetry. The process would look like this (a small sketch follows the steps):

  1. On SIGTERM, the agent initiates a graceful drain of the inbound listeners using (4) as the deadline. GOAWAY and connection: close are sent on all inbound connections, no more inbound connections are accepted, container readiness fails, and the endpoint is removed from the load-balancing pools. The max drain deadline is set by (4), but envoy can be stopped before that once all outstanding requests/connections are finished (including outbound, app-driven ones). It's probably good to have a fixed minimum delay, because detecting whether an app (peer container) has outstanding requests is difficult (alternatively, CNI can undo outbound iptables on termination).
  2. After (4) seconds, SIGKILL is issued if envoy does not stop before that.
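
A minimal sketch of what this would mean for a workload (illustrative only; the value is arbitrary):

spec:
  # Single deadline: the agent uses this as the maximum drain time for envoy,
  # but exits earlier once in-flight connections are finished.
  terminationGracePeriodSeconds: 120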

cc @ramaraochavali @mandarjog

@kyessenov
Contributor Author

Ref envoyproxy/envoy#17056

@howardjohn
Member

We use --drain-strategy immediate, which should somewhat help with the "envoy's drain is passive" point, if you weren't aware - it doesn't work in all cases though.

Overall I agree we should do this. A few concerns I think we will need to work out:

  • What should we do if users set terminationDrainDuration (or just have the old default of 5s)?
  • Will we hit any cases where we get stuck? For example, I have seen some users set terminationGracePeriodSeconds to extreme values (>10 minutes) - if there are cases where their pod would previously have shut down and now it doesn't, we may run into issues.

The benefits largely outweigh the issues, but I think we should see what we can do to address those.

@kyessenov
Contributor Author

kyessenov commented Aug 25, 2021

@howardjohn It's passive because Envoy does not signal clients proactively. On a long-lived connection with few requests, clients simply don't receive any notification. For very long grace periods, we would have to poll for open connections so that if the app exits early, the proxy does, too. If we do pursue this, I'd imagine flags (1)-(3) are no longer useful. 5s is a dangerously low value, more so for apps with very long shutdowns.

@kyessenov
Contributor Author

One complication is that health checks (and some other endpoints) go through Envoy and those should not count as application connections during drain.

@kyessenov
Contributor Author

Tagging @hzxuzhonghu.

@ramaraochavali
Contributor

One complication is that health checks (and some other endpoints) go through Envoy and those should not count as application connections during drain.

This is exactly the same reason why we can not completely depend on Envoy to provide draining connection count.

@kevin-lindsay-1
Contributor

I just ran into this issue; one of my services has a high maximum but low average request time, so it gracefully shuts itself down, but once it has actually exited the sidecar persists and waits around even though there are no active connections.

Hopefully I see this in a release soon, because I'd really like to test it and have even faster resource freeing across our clusters.

@ramaraochavali
Contributor

@kevin-lindsay-1 #35059 has this implementation. You can set EXIT_ON_ZERO_ACTIVE_CONNECTIONS on the agent side and try it. Please let us know how it works.
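
For anyone else trying this, a minimal sketch of one way to set it per workload, via the proxy.istio.io/config annotation's proxyMetadata (the form confirmed to work further down in this thread):

annotations:
  proxy.istio.io/config: |
    proxyMetadata:
      EXIT_ON_ZERO_ACTIVE_CONNECTIONS: 'true'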

@kevin-lindsay-1
Contributor

@ramaraochavali I use helm to install istio, and can change the image tags. Are there image builds that I could try out?

@ramaraochavali
Contributor

https://storage.googleapis.com/istio-build/dev/latest - Please try with this

@kevin-lindsay-1
Contributor

kevin-lindsay-1 commented Oct 11, 2021

@ramaraochavali I use Docker Hub for my images; can you tell me what I'd need to change on the istio-discovery Helm chart to test that image? I'm getting an error when trying to switch to that hub/tag.

@ramaraochavali
Contributor

Sorry, I am not aware of that. Maybe @howardjohn knows about it.

@howardjohn
Member

howardjohn commented Oct 12, 2021 via email

@kevin-lindsay-1
Contributor

@howardjohn Sorry, I understand which values to set, but I don't know what the actual hub and tag values would be, as this appears to be a Google-hosted hub and tag, and the attempts I made all gave me errors.

I'm not experienced with using Google Cloud Storage as a registry, to the point where I'm not even 100% sure it's compatible as a Docker registry implementation.

@howardjohn
Member

howardjohn commented Oct 12, 2021 via email

@kevin-lindsay-1
Contributor

Thanks, I'll give it a shot!

@kevin-lindsay-1
Contributor

kevin-lindsay-1 commented Oct 13, 2021

@ramaraochavali I updated the proxyMetadata annotation to have EXIT_ON_ZERO_ACTIVE_CONNECTIONS: 'true', and I see it set in the istio-proxy container, but even though my service has no active connections it still waits for the terminationDrainDuration.

Should I not set terminationDrainDuration any longer or something?

Pod logs say version 1.12-alpha.27e28972ff1b1229ddd603ba8363827a3f99fad3-27e28972ff1b1229ddd603ba8363827a3f9

@ramaraochavali
Contributor

@kevin-lindsay-1 It is not part of proxy metadata. You need to set that as an env var when starting the proxy.

@kevin-lindsay-1
Contributor

@ramaraochavali how do I set that when using the sidecar injector? I tried going in after the pod initialized, but it looks like it doesn't check for that variable after initialization, and I don't see a way to configure the injector to set environment variables that aren't part of a specific pre-defined API.

@ramaraochavali
Contributor

@kevin-lindsay-1 Sorry, I missed that - since you have already set it in proxyMetadata, it should have been carried into the proxy's env. Can you please share your pod.yaml and logs?

@kevin-lindsay-1
Contributor

kevin-lindsay-1 commented Oct 25, 2021

@ramaraochavali Scratch that. I checked my config, and I must have previously tried something along the lines of:

proxy.istio.io/proxyMetadata: |
  EXIT_ON_ZERO_ACTIVE_CONNECTIONS: 'true'

I just tried it again with:

proxy.istio.io/config: |
  proxyMetadata:
    EXIT_ON_ZERO_ACTIVE_CONNECTIONS: 'true'

and it worked exactly as expected this time.

Sorry for the late response, took a while to get around to testing this again.

@ramaraochavali
Contributor

ramaraochavali commented Oct 26, 2021 via email

@kevin-lindsay-1
Contributor

@ramaraochavali I'd be very interested to know when this feature hits an official release, as it will be very useful for my team. We use EKS with Spot instances, so faster drain times can save us maintenance time and a non-trivial amount of cost over time.

@Samjin

Samjin commented Nov 4, 2021

(3) overrides (1), while (2) is not used.

@kyessenov Are you saying terminationDrainDuration wouldn't override (1) if (2) is specified?
Assuming (3) (terminationDrainDuration) is 100s, would envoy shut down at 100s, or does it shut down at 45s because of drainDuration?
What's the best practice when setting these params? Should (1) and (2) be as close to terminationDrainDuration as possible?

@ramaraochavali
Contributor

It is in 1.12; the release dates are here: https://github.com/istio/istio/wiki/Istio-Release-1.12

@Samjin

Samjin commented Nov 5, 2021

@ramaraochavali Thanks for the quick reply. What's the best practice before 1.12?
It sounds to me that (1) and (2) should be close to terminationDrainDuration, so that envoy doesn't shut down abruptly at (1) (45s) before terminationDrainDuration (100s)?

@ramaraochavali
Contributor

2 is not used now.

@Stono
Contributor

Stono commented Nov 11, 2021

Hey,
Just catching up on this and the changes in 1.12. Super keen to get this sorted because we currently have a wrapper script for pilot-agent (that we've been using for quite some time) that sort of does this: https://gist.github.com/Stono/2cc68c62968f13e3447fc107abbec72f

If I'm reading the changes right, we could template our workloads with an annotation like this:

annotations:
  proxy.istio.io/config: |
    terminationDrainDuration: <whatever the termination grace period is>
    proxyMetadata:
      EXIT_ON_ZERO_ACTIVE_CONNECTIONS: 'true'

And we should be able to remove our custom wrapper script, as that would give us a drain duration equal to the termination grace period, and it would exit in a timely manner.

@ramaraochavali
Contributor

Can you please share Bootstrap's proxyMetadata section if possible?

@yteraoka

jq '.configs[0].bootstrap.node.metadata'
with inclusionRegexps

{
  "ANNOTATIONS": {
    "kubernetes.io/config.seen": "2022-02-26T22:20:30.119706875+09:00",
    "kubernetes.io/psp": "eks.privileged",
    "prometheus.io/scrape": "true",
    "kubectl.kubernetes.io/default-container": "nginx",
    "sidecar.istio.io/status": "{\"initContainers\":[\"istio-init\"],\"containers\":[\"istio-proxy\"],\"volumes\":[\"istio-envoy\",\"istio-data\",\"istio-podinfo\",\"istio-token\",\"istiod-ca-cert\"],\"imagePullSecrets\":null,\"revision\":\"1-12-1\"}",
    "prometheus.io/path": "/stats/prometheus",
    "kubernetes.io/config.source": "api",
    "proxy.istio.io/config": "terminationDrainDuration: 10s\nproxyMetadata:\n  EXIT_ON_ZERO_ACTIVE_CONNECTIONS: 'true'\n  MINIMUM_DRAIN_DURATION: 11s\nproxyStatsMatcher:\n  inclusionRegexps:\n    - \".*downstream_cx_active\"\n",
    "prometheus.io/port": "15020",
    "kubectl.kubernetes.io/default-logs-container": "nginx"
  },
  "WORKLOAD_NAME": "nginx",
  "INTERCEPTION_MODE": "REDIRECT",
  "ENVOY_STATUS_PORT": 15021,
  "NAME": "nginx-55475b47b6-57m8z",
  "INSTANCE_IPS": "10.110.101.240",
  "ISTIO_PROXY_SHA": "istio-proxy:e6f45abcf874983fbff384459d70b28c072f68b5",
  "CLUSTER_ID": "Kubernetes",
  "OWNER": "kubernetes://apis/apps/v1/namespaces/teraoka/deployments/nginx",
  "LABELS": {
    "pod-template-hash": "55475b47b6",
    "service.istio.io/canonical-revision": "latest",
    "app.kubernetes.io/name": "nginx",
    "security.istio.io/tlsMode": "istio",
    "app.kubernetes.io/instance": "nginx",
    "service.istio.io/canonical-name": "nginx"
  },
  "ISTIO_VERSION": "1.12.1",
  "SERVICE_ACCOUNT": "nginx",
  "PILOT_SAN": [
    "istiod-1-12-1.istio-system.svc"
  ],
  "PROXY_CONFIG": {
    "proxyMetadata": {
      "MINIMUM_DRAIN_DURATION": "11s",
      "EXIT_ON_ZERO_ACTIVE_CONNECTIONS": "true"
    },
    "controlPlaneAuthPolicy": "MUTUAL_TLS",
    "binaryPath": "/usr/local/bin/envoy",
    "proxyStatsMatcher": {
      "inclusionRegexps": [
        ".*downstream_cx_active"
      ]
    },
    "configPath": "./etc/istio/proxy",
    "proxyAdminPort": 15000,
    "statNameLength": 189,
    "discoveryAddress": "istiod-1-12-1.istio-system.svc:15012",
    "drainDuration": "45s",
    "statusPort": 15020,
    "terminationDrainDuration": "10s",
    "concurrency": 2,
    "tracing": {
      "zipkin": {
        "address": "zipkin.istio-system:9411"
      }
    },
    "holdApplicationUntilProxyStarts": true,
    "parentShutdownDuration": "60s",
    "serviceCluster": "istio-proxy"
  },
  "MESH_ID": "cluster.local",
  "APP_CONTAINERS": "nginx",
  "NAMESPACE": "teraoka",
  "PROV_CERT": "var/run/secrets/istio/root-cert.pem",
  "ENVOY_PROMETHEUS_PORT": 15090,
  "POD_PORTS": "[{\"name\":\"http\",\"containerPort\":80,\"protocol\":\"TCP\"}]"
}
without inclusionRegexps
{
  "APP_CONTAINERS": "nginx",
  "PROV_CERT": "var/run/secrets/istio/root-cert.pem",
  "NAMESPACE": "teraoka",
  "POD_PORTS": "[{\"name\":\"http\",\"containerPort\":80,\"protocol\":\"TCP\"}]",
  "ANNOTATIONS": {
    "kubernetes.io/config.seen": "2022-02-27T15:32:04.461976770+09:00",
    "proxy.istio.io/config": "terminationDrainDuration: 10s\nproxyMetadata:\n  EXIT_ON_ZERO_ACTIVE_CONNECTIONS: 'true'\n  MINIMUM_DRAIN_DURATION: 11s\n",
    "kubectl.kubernetes.io/default-logs-container": "nginx",
    "kubernetes.io/psp": "eks.privileged",
    "kubernetes.io/config.source": "api",
    "prometheus.io/scrape": "true",
    "sidecar.istio.io/status": "{\"initContainers\":[\"istio-init\"],\"containers\":[\"istio-proxy\"],\"volumes\":[\"istio-envoy\",\"istio-data\",\"istio-podinfo\",\"istio-token\",\"istiod-ca-cert\"],\"imagePullSecrets\":null,\"revision\":\"1-12-1\"}",
    "prometheus.io/port": "15020",
    "kubectl.kubernetes.io/default-container": "nginx",
    "prometheus.io/path": "/stats/prometheus"
  },
  "ISTIO_VERSION": "1.12.1",
  "LABELS": {
    "app.kubernetes.io/instance": "nginx",
    "service.istio.io/canonical-revision": "latest",
    "service.istio.io/canonical-name": "nginx",
    "app.kubernetes.io/name": "nginx",
    "pod-template-hash": "84c877988c",
    "security.istio.io/tlsMode": "istio"
  },
  "CLUSTER_ID": "Kubernetes",
  "PROXY_CONFIG": {
    "drainDuration": "45s",
    "parentShutdownDuration": "60s",
    "configPath": "./etc/istio/proxy",
    "tracing": {
      "zipkin": {
        "address": "zipkin.istio-system:9411"
      }
    },
    "binaryPath": "/usr/local/bin/envoy",
    "concurrency": 2,
    "proxyAdminPort": 15000,
    "serviceCluster": "istio-proxy",
    "statNameLength": 189,
    "statusPort": 15020,
    "discoveryAddress": "istiod-1-12-1.istio-system.svc:15012",
    "holdApplicationUntilProxyStarts": true,
    "terminationDrainDuration": "10s",
    "proxyMetadata": {
      "MINIMUM_DRAIN_DURATION": "11s",
      "EXIT_ON_ZERO_ACTIVE_CONNECTIONS": "true"
    },
    "controlPlaneAuthPolicy": "MUTUAL_TLS"
  },
  "ENVOY_STATUS_PORT": 15021,
  "OWNER": "kubernetes://apis/apps/v1/namespaces/teraoka/deployments/nginx",
  "PILOT_SAN": [
    "istiod-1-12-1.istio-system.svc"
  ],
  "NAME": "nginx-84c877988c-s9vd4",
  "WORKLOAD_NAME": "nginx",
  "INTERCEPTION_MODE": "REDIRECT",
  "ISTIO_PROXY_SHA": "istio-proxy:e6f45abcf874983fbff384459d70b28c072f68b5",
  "MESH_ID": "cluster.local",
  "ENVOY_PROMETHEUS_PORT": 15090,
  "INSTANCE_IPS": "10.110.103.52",
  "SERVICE_ACCOUNT": "nginx"
}
diff
$ diff -u <(jq -S '.configs[0].bootstrap.node.metadata' config.dump.with-inclusion.json) <(jq -S '.configs[0].bootstrap.node.metadata' config.dump.without-inclusion.json)
--- /dev/fd/11	2022-02-27 18:49:32.000000000 +0900
+++ /dev/fd/12	2022-02-27 18:49:32.000000000 +0900
@@ -2,33 +2,33 @@
   "ANNOTATIONS": {
     "kubectl.kubernetes.io/default-container": "nginx",
     "kubectl.kubernetes.io/default-logs-container": "nginx",
-    "kubernetes.io/config.seen": "2022-02-26T22:20:30.119706875+09:00",
+    "kubernetes.io/config.seen": "2022-02-27T15:32:04.461976770+09:00",
     "kubernetes.io/config.source": "api",
     "kubernetes.io/psp": "eks.privileged",
     "prometheus.io/path": "/stats/prometheus",
     "prometheus.io/port": "15020",
     "prometheus.io/scrape": "true",
-    "proxy.istio.io/config": "terminationDrainDuration: 10s\nproxyMetadata:\n  EXIT_ON_ZERO_ACTIVE_CONNECTIONS: 'true'\n  MINIMUM_DRAIN_DURATION: 11s\nproxyStatsMatcher:\n  inclusionRegexps:\n    - \".*downstream_cx_active\"\n",
+    "proxy.istio.io/config": "terminationDrainDuration: 10s\nproxyMetadata:\n  EXIT_ON_ZERO_ACTIVE_CONNECTIONS: 'true'\n  MINIMUM_DRAIN_DURATION: 11s\n",
     "sidecar.istio.io/status": "{\"initContainers\":[\"istio-init\"],\"containers\":[\"istio-proxy\"],\"volumes\":[\"istio-envoy\",\"istio-data\",\"istio-podinfo\",\"istio-token\",\"istiod-ca-cert\"],\"imagePullSecrets\":null,\"revision\":\"1-12-1\"}"
   },
   "APP_CONTAINERS": "nginx",
   "CLUSTER_ID": "Kubernetes",
   "ENVOY_PROMETHEUS_PORT": 15090,
   "ENVOY_STATUS_PORT": 15021,
-  "INSTANCE_IPS": "10.110.101.240",
+  "INSTANCE_IPS": "10.110.103.52",
   "INTERCEPTION_MODE": "REDIRECT",
   "ISTIO_PROXY_SHA": "istio-proxy:e6f45abcf874983fbff384459d70b28c072f68b5",
   "ISTIO_VERSION": "1.12.1",
   "LABELS": {
     "app.kubernetes.io/instance": "nginx",
     "app.kubernetes.io/name": "nginx",
-    "pod-template-hash": "55475b47b6",
+    "pod-template-hash": "84c877988c",
     "security.istio.io/tlsMode": "istio",
     "service.istio.io/canonical-name": "nginx",
     "service.istio.io/canonical-revision": "latest"
   },
   "MESH_ID": "cluster.local",
-  "NAME": "nginx-55475b47b6-57m8z",
+  "NAME": "nginx-84c877988c-s9vd4",
   "NAMESPACE": "teraoka",
   "OWNER": "kubernetes://apis/apps/v1/namespaces/teraoka/deployments/nginx",
   "PILOT_SAN": [
@@ -50,11 +50,6 @@
       "EXIT_ON_ZERO_ACTIVE_CONNECTIONS": "true",
       "MINIMUM_DRAIN_DURATION": "11s"
     },
-    "proxyStatsMatcher": {
-      "inclusionRegexps": [
-        ".*downstream_cx_active"
-      ]
-    },
     "serviceCluster": "istio-proxy",
     "statNameLength": 189,
     "statusPort": 15020,

@ramaraochavali
Contributor

@yteraoka Thank you. I see the problem. Fixed in #37573

@yteraoka

@ramaraochavali Thank you, too.

@kevin-lindsay-1
Contributor

Does this actually work now? If so, when can I test it in a non-dev build?

@liyihuang

@ramaraochavali I saw your PR back in Feb. I'm on 1.13.4 and still don't see that it has been merged; I only see it in the 1.14 beta. Will #37573 be merged to any 1.12 or 1.13 release?

2022-05-23T02:54:09.595581Z     info    Envoy proxy is ready
2022-05-23T02:58:22.079931Z     info    Agent draining Proxy
2022-05-23T02:58:22.079917Z     info    Status server has successfully terminated
2022-05-23T02:58:22.080142Z     error   accept tcp [::]:15020: use of closed network connection
2022-05-23T02:58:22.081261Z     info    Agent draining proxy for 5s, then waiting for active connections to terminate...
2022-05-23T02:58:27.082178Z     info    Checking for active connections...
2022-05-23T02:58:28.085056Z     info    There are still -1 active connections
2022-05-23T02:58:29.084947Z     info    There are still -1 active connections
2022-05-23T02:58:30.084276Z     info    There are still -1 active connections
2022-05-23T02:58:31.085118Z     info    There are still -1 active connections
2022-05-23T02:58:32.084250Z     info    There are still -1 active connections
2022-05-23T02:58:33.084492Z     info    There are still -1 active connections

my IOP spec

meshConfig:
  defaultConfig:
    terminationDrainDuration: 150s
    proxyMetadata:
      EXIT_ON_ZERO_ACTIVE_CONNECTIONS: 'true'
  enablePrometheusMerge: true

@ramaraochavali
Contributor

I have added it for 1.13. Please follow this PR: #39082

@liyihuang

@ramaraochavali thanks. can we also cherry-pick this for 1.12?

@ramaraochavali
Contributor

1.12 needs a manual PR. It will take some time.

@kevin-lindsay-1
Contributor

kevin-lindsay-1 commented May 24, 2022

@ramaraochavali Thanks! Can you let us know when this will be in a release build? I've sometimes seen merges take months to show up in a release build.

@ramaraochavali
Contributor

Sorry, I have been busy with other stuff. The PR for the backport to 1.12 is #39286.

istio-policy-bot added the lifecycle/stale label on Sep 3, 2022
@istio-policy-bot

🚧 This issue or pull request has been closed due to not having had activity from an Istio team member since 2022-06-05. If you feel this issue or pull request deserves attention, please reopen the issue. Please see this wiki page for more information. Thank you for your contributions.

Created by the issue and PR lifecycle manager.

istio-policy-bot added the lifecycle/automatically-closed label on Sep 18, 2022
@batchamalick

Hi @ramaraochavali ,
While testing the setting EXIT_ON_ZERO_ACTIVE_CONNECTIONS: 'true', I deliberately kept an active connection open and killed the pod using kubectl delete. The Envoy logs clearly show one active connection, but after terminationGracePeriodSeconds the istio-proxy and the app are killed with the connection still active. Should there be a doc saying that users need to adjust terminationGracePeriodSeconds if the app takes more time than the default 30s?
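
For illustration, if an app regularly needs more than the default 30s, the pod-level grace period has to be raised along with the drain settings; a sketch with example values (pod template of a Deployment):

spec:
  template:
    metadata:
      annotations:
        proxy.istio.io/config: |
          terminationDrainDuration: 300s
          proxyMetadata:
            EXIT_ON_ZERO_ACTIVE_CONNECTIONS: 'true'
    spec:
      # Must be at least as long as the drain you expect; otherwise kubelet
      # sends SIGKILL while connections are still active.
      terminationGracePeriodSeconds: 300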

@ramaraochavali
Contributor

Sure. We can update the docs

@ldemailly
Member

EXIT_ON_ZERO_ACTIVE_CONNECTIONS isn't mentioned in the latest docs - has the default changed to true, or does it still exist as an option in current Istio? Maybe related: is there an up-to-date doc on how to ensure the proxy doesn't die or break the network before the service ends, while at the same time stopping new traffic from coming in as soon as possible?

@howardjohn
Member

It's there on https://preliminary.istio.io/latest/docs/reference/commands/pilot-agent/. This is an experimental feature, so there are no docs beyond the generated reference.

@jbilliau-rcd

For future readers here, we implemented this on our Kafka app that would throw 5xx's whenever pods scaled down due to Envoy terminating before the app could finish its outstanding requests. Implemented at 3:25, looks to have fixed it!


@jkleckner

We have a service container that runs a background thread that needs to complete in-flight work and then exit.

terminationGracePeriodSeconds is used, and the service container will exit earlier when the in-flight work completes.

Waiting for Istio connections to drain does not help in this situation, since the background thread doesn't have an open connection. One suggestion I have seen floating around is to have the service container open an otherwise unneeded connection to the Istio sidecar, so that Istio will wait until the service container exits. But it feels kind of hacky. Suggestions?

@howardjohn
Member

https://istio.io/latest/blog/2023/native-sidecars/ is the perfect solution. Everything else is pretty rough. Kubernetes 1.29 is available on most platforms these days.
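
For reference, native sidecar support also needs to be enabled on the Istio side; the blog post gates it behind a pilot environment variable, roughly like this (an assumed sketch - verify the exact flag against the blog post):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    pilot:
      env:
        # Opt-in flag for injecting istio-proxy as a native (init) sidecar
        ENABLE_NATIVE_SIDECARS: "true"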

@jkleckner

Thanks! More motivation to get current on k8s. Is there a "least bad" approach in the meantime?

@howardjohn
Member

You can use terminationDrainDuration and set it high (works OK, but it is a static wait), or use EXIT_ON_ZERO_ACTIVE_CONNECTIONS (experimental; doesn't work in some cases).
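
Roughly, the two options as per-workload annotations (example values; pick one approach):

annotations:
  # Option 1: generous static drain window (also raise the pod's
  # terminationGracePeriodSeconds so kubelet doesn't SIGKILL early)
  proxy.istio.io/config: |
    terminationDrainDuration: 300s

annotations:
  # Option 2: exit as soon as there are no active connections (experimental)
  proxy.istio.io/config: |
    proxyMetadata:
      EXIT_ON_ZERO_ACTIVE_CONNECTIONS: 'true'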

@jkleckner

Thank you.
