Describe the bug
Kubernetes 1.29 introduces long-lived sidecar containers, but the Infisical secrets-operator fails to restart any deployment that uses one. The sidecar has health checks in place and restartPolicy=always set. When I remove those health checks, the secrets-operator is able to restart the deployments just fine.
To Reproduce
Steps to reproduce the behavior:
Create a Kubernetes deployment with an initContainer (native sidecar) that has health probes configured.
Update the environment variables in Infisical.
Observe the secrets-operator trying, over and over again, to restart the deployment.
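For reference, a minimal Deployment sketch of the setup described above (all names and images are hypothetical; note that the Kubernetes field value is the capitalized `Always`, and it is case-sensitive):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app          # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      initContainers:
        - name: sidecar                    # native sidecar (Kubernetes 1.29+)
          image: example/sidecar:latest    # hypothetical image
          restartPolicy: Always            # required for probes on init containers
          startupProbe:
            httpGet: { path: /healthz, port: 8080 }
          readinessProbe:
            httpGet: { path: /healthz, port: 8080 }
          livenessProbe:
            httpGet: { path: /healthz, port: 8080 }
      containers:
        - name: app
          image: example/app:latest        # hypothetical image
          envFrom:
            - secretRef:
                name: managed-secret       # operator-managed secret (hypothetical name)
```

Per the error in the logs below, the API server rejects any update to this Deployment in which the probes are present but `restartPolicy: Always` is not set on the init container.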
Expected behavior
It should restart the deployment without issues.
Logs
unable to reconcile deployment with [name=******]. Will try next requeue
unable to reconcile deployment with [name=******]. Will try next requeue
Operator will requeue after [5s]
Manual re-sync interval set requeueAfter 5s
Requeue duration set requeueAfter 5s
Workspace ID: *****
TokenName: *****
ReconcileInfisicalSecret: Fetched secrets via service token
No secrets modified so reconcile not needed Etag: W/"**********************" Modified: false
deployment is using outdated managed secret. Starting re-deployment [deploymentName=*****]
Platform you are having the issue on:
Secrets-operator helm chart version 0.3.3 on GKE cluster version 1.29.0-gke.1381000
Additional Context
The initContainer has a startupProbe, readinessProbe, and livenessProbe, as well as restartPolicy=always. I have observed a similar issue with other tools that return this error, and this might be the culprit: even when restartPolicy=always is set, the restart still fails.
time="2024-02-15T01:52:06Z" level=error msg="provider.kubernetes: got error while updating resource" deployment=crm-organizations error="Deployment.apps \"*****\" is invalid: [spec.template.spec.initContainers[0].livenessProbe: Forbidden: may not be set for init containers without restartPolicy=Always, spec.template.spec.initContainers[0].readinessProbe: Forbidden: may not be set for init containers without restartPolicy=Always, spec.template.spec.initContainers[0].startupProbe: Forbidden: may not be set for init containers without restartPolicy=Always]" kind=deployment namespace=*** update="latest->latest"