Using Istio with CronJobs #11659
Associated: kubernetes/kubernetes#25908 |
Same issue for me. |
For those who are interested, we worked around this by adding a livenessProbe to the sidecar injector for istio-proxy:
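A sketch of the shape this takes (the exact injector template fields and the script path are assumptions, not quoted from this thread):

```yaml
# Hedged sketch: an exec liveness probe on the injected istio-proxy
# container; /usr/local/bin/istio-proxy-liveness.sh is a hypothetical
# path for the script shown below.
livenessProbe:
  exec:
    command:
      - /bin/bash
      - /usr/local/bin/istio-proxy-liveness.sh
  initialDelaySeconds: 10
  periodSeconds: 5
```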
And then the script looks like this:
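A minimal sketch, assuming (per the later comments in this thread) the probe checks whether Envoy's admin port 15000 is still listening and stops the proxy when it is not:

```bash
#!/bin/bash
# If Envoy's admin port (15000) no longer answers, the job has finished
# and told Envoy to quit, so terminate pilot-agent (PID 1 inside the
# istio-proxy container) to let the container exit.
if ! curl -sf -o /dev/null http://127.0.0.1:15000/server_info; then
  kill -TERM 1
fi
```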
|
@Stono thanks for sharing your workaround. Are the above script and the livenessProbe added to the cronjob yaml file? I am asking because I could not understand how to add a livenessProbe to the sidecar injector for istio-proxy. Thanks. |
I use the auto sidecar injector. The probes are added to the injected
template for the istio-proxy pod.
Karl
|
Hi, @Stono, it is still unclear to me how the liveness probe can be added to the istio-proxy pod (my understanding is that the istio-proxy image is not managed by the end user). Could you point me to some online resources? Thanks. |
Ohhh I get you.
So in our CI pipeline we build our own istio-proxy image which pulls from
the main image and copies these files in. We then use that in the istio sidecar injector.
Another thing you could do is mount them from a ConfigMap.
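A hedged sketch of that ConfigMap route (all names are hypothetical): ship the probe script in a ConfigMap and mount it into the injected proxy via volumes/volumeMounts in the injector template.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio-proxy-liveness
data:
  liveness.sh: |
    #!/bin/bash
    # Same check as above: exit the proxy once Envoy's admin port is gone.
    if ! curl -sf -o /dev/null http://127.0.0.1:15000/server_info; then
      kill -TERM 1
    fi
```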
Karl
|
I would imagine native Istio support for this would require a similar approach: exposing an additional path on the pilot-agent server that tells it to shut down. This would still require the batch job to call that endpoint after it finishes, though. I wonder if there is something else we could hook into in Kube that would automatically call this endpoint once the job is finished? |
@liamawhite do you know of any way to make istio-proxy exit with a 0 status code? At the moment, if I send a |
Similar issue: #11045 |
I think the latest in release-1.1 should return 0. I will try to find some time to verify. |
Seems to be fine for me in 1.1.1 |
We have started doing this recently btw folks in our pod spec; see the sketch below.
This will make the job wait for the sidecar to be ready, then shut it down when the job finishes, preserving the job's exit code.
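A hedged sketch of that pattern, assuming pilot-agent's status port 15020 serves /healthz/ready and /quitquitquit (ports and paths vary across Istio versions), with /run-my-batch-job standing in for the real job command:

```yaml
command: ["/bin/bash", "-c"]
args:
  - |
    # Wait for the sidecar to report ready before doing any work.
    until curl -fsS -o /dev/null http://127.0.0.1:15020/healthz/ready; do
      sleep 1
    done
    # Run the actual job and remember its exit code.
    /run-my-batch-job
    EXIT_CODE=$?
    # Ask the sidecar to shut down, then propagate the job's status.
    curl -fsS -X POST http://127.0.0.1:15020/quitquitquit || true
    exit "$EXIT_CODE"
```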
|
@Stono Have you observed the sidecar living for a while after receiving quitquitquit? Ours are living for another minute or so before exiting (although 15000 gets closed immediately). We're also getting log spam on Completed jobs until we delete them:
info Envoy proxy is NOT ready: failed retrieving Envoy stats: Get http://127.0.0.1:15000/stats?usedonly: dial tcp 127.0.0.1:15000: connect: connection refused
|
Yeah so we also have a liveness probe that looks for 15000 listening and if
it's missing exits.
It's hacky.
|
This issue has been automatically marked as stale because it has not had activity in the last 90 days. It will be closed in the next 30 days unless it is tagged "help wanted" or other activity occurs. Thank you for your contributions. |
Activity! |
In 1.3 we added a new |
@howardjohn I did some digging and testing and did not find a way to do such a tweak. It looks like k8s takes the displayed pod status from one of the containers; I tested pods with 2 containers and without. Update: I think I got to the bottom of this. The resulting pod status display message is calculated when returning to the client, and in our case it is unstable and sometimes wrong: it returns the termination reason of the last container. The link is for version 1.22.5, but I see the same in master. |
Hello, I've built an "operator" to handle this problem until keystone containers are added to k8s. See: https://gitlab.com/kubitus-project/kubitus-pod-cleaner-operator/-/blob/main/README.md
This seems to maintain the error code from the job:
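One way to get that behaviour (a sketch; /my-job is a placeholder and the 15020 endpoint is an assumption): trap EXIT so the shutdown request is always sent, while the shell still exits with the job's own status.

```sh
trap 'curl -fsS -X POST http://127.0.0.1:15020/quitquitquit || true' EXIT
/my-job  # placeholder for the real job command; its exit status is preserved
```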
|
Seems it is not always the same port. The following works for me:
|
The dollar sign is missing in the above command:
|
If anyone is searching for a simple solution: https://github.com/AOEpeople/kubernetes-sidecar-cleaner. We have developed a simple app to clean up the istio-proxies in completed CronJobs. |
@kschu91 where should this controller be installed? Inside the istio namespace? |
Is there any plan to implement something at the istio level rather than using hacky solutions or a 3rd-party operator? |
https://istio.io/latest/blog/2022/introducing-ambient-mesh/ can't come soon enough. :-) |
There is a Kubernetes enhancement that should allow orderly startup and shutdown of multi-container pods: kubernetes/enhancements#3759. Or ambient :) |
This worked perfectly for me, thank you for your contribution! @ZiaUrRehman-GBI, you can deploy it anywhere. The Helm chart is deployed with a cluster role, so it moderates pods in the entire cluster. At least that's how it works with the Helm chart that @kschu91 is providing. |
So there's no official solution. I prefer to use a script to notify Istio rather than installing an unofficial or hacky operator 😃 Thanks everyone |
This should be addressed by https://kubernetes.io/blog/2023/08/25/native-sidecar-containers/#what-are-sidecar-containers-in-1-28 |
Not a solution by any means, but a reminder that you can opt out of istio when appropriate by using the `sidecar.istio.io/inject: "false"` annotation. |
Sidecar as a first-class citizen is in alpha behind a feature gate in Kubernetes 1.28, and is beta and enabled by default in Kubernetes 1.29. If/when Istio implements restartPolicy, the istio-proxy lifecycle will align with the main container. This will allow running Jobs on Istio. cc: @howardjohn |
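For reference, a minimal sketch of the native-sidecar shape being described (image names are placeholders): the sidecar becomes an init container with restartPolicy: Always, and the Job can complete when the main container finishes.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  template:
    spec:
      restartPolicy: Never
      initContainers:
        - name: sidecar
          image: example/proxy:latest  # placeholder
          restartPolicy: Always        # marks this as a native sidecar
      containers:
        - name: job
          image: example/job:latest    # placeholder
```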
Istio already supports it behind the ENABLE_NATIVE_SIDECARS flag. Note: I highly recommend only using it on K8s 1.29, as some of the lifecycle is broken in 1.28.
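Assuming the pilot environment variable named above, enabling it at install time looks something like:

```sh
istioctl install --set values.pilot.env.ENABLE_NATIVE_SIDECARS=true
```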
This is exactly what I needed, thank you!!
I installed it into the |
I'm going to close this supperrrr old issue. |
@Stono from that linked article they actually recommend not using it with istio:
|
I know, I wrote it :) My point was more that there won't be a "solution" to this multi-year problem without sidecar containers; the actual solution is sidecar containers. So the choices are either:
- keep layering workarounds on top of the current sidecar model, or
- move to native sidecar containers.
Either approach currently has some downsides, but sidecar containers are less hacky and moving in the correct direction, so will only get better with time. |
Hi @Stono, thanks for your article! I’m still a bit confused about the best way to set this up in a cron job. Could you provide some guidance on how to properly configure the startupProbe with Istio Proxy? What would be the best approach for this? |
Use sidecar containers, they start before your app.
|
Hey all,
I have an issue with Istio when used in conjunction with CronJobs or Jobs, in that when the primary pod completes, the "Job" never completes because istio-proxy is still running:
I tried adding the following to the end of the primary pod script as suggested by @costinm in #6324, but that doesn't work (envoy exits, proxy doesn't):
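Presumably the suggestion was along these lines; Envoy's admin interface (port 15000 in the istio-proxy sidecar) accepts a POST to /quitquitquit:

```sh
curl -sf -X POST http://127.0.0.1:15000/quitquitquit
```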
This seems to cause envoy to exit correctly; however, the istio-proxy process is still running:
Despite it no longer listening:
The main pod can't send a SIGTERM to istio-proxy because it doesn't have permission to do so (quite rightly), so I'm a little stuck.
The only hacky thing I can think of is adding a readinessProbe to istio-proxy which checks whether it's listening and, if it isn't, sends the SIGTERM.
Thoughts?