
secret-operator failed to restart deployment with initContainer using health checks #1405

Open
mainey-cc opened this issue Feb 15, 2024 · 2 comments


@mainey-cc

Describe the bug

Kubernetes 1.29 introduced long-lived sidecar containers, but the Infisical secrets-operator fails to restart any deployment that uses one. The sidecar has health checks in place and restartPolicy: Always set. When I remove those health checks, the secrets-operator restarts the deployments just fine.

To Reproduce

Steps to reproduce the behavior:

  1. Create a Kubernetes deployment with an initContainer (restartPolicy: Always) that uses health checks.
  2. Update environment variables on Infisical.
  3. Observe the secrets-operator trying over and over again to restart the deployment.
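For reference, a minimal manifest matching the setup above (names, image, and probe endpoints are illustrative, not taken from the affected deployment):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      initContainers:
        - name: sidecar
          image: nginx:1.25        # illustrative image
          restartPolicy: Always    # makes this a native sidecar (Kubernetes 1.29+)
          startupProbe:
            httpGet:
              path: /
              port: 80
          readinessProbe:
            httpGet:
              path: /
              port: 80
          livenessProbe:
            httpGet:
              path: /
              port: 80
      containers:
        - name: app
          image: nginx:1.25
```

The API server accepts probes on an initContainer only when restartPolicy: Always is present, which is why the error below appears if that field gets lost on update.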

Expected behavior

It should restart the deployment without issues.

Logs

unable to reconcile deployment with [name=******]. Will try next requeue
unable to reconcile deployment with [name=******]. Will try next requeue
Operator will requeue after [5s]
Manual re-sync interval set requeueAfter 5s
Requeue duration set requeueAfter 5s
Workspace ID: *****
TokenName: *****
ReconcileInfisicalSecret: Fetched secrets via service token
No secrets modified so reconcile not needed Etag: W/"**********************" Modified: false
deployment is using outdated managed secret. Starting re-deployment [deploymentName=*****]

Platform you are having the issue on:

Secrets-operator helm chart version 0.3.3 on GKE cluster version 1.29.0-gke.1381000

Additional Context

The initContainer has a startupProbe, readinessProbe, and livenessProbe defined, together with restartPolicy: Always. I have observed a similar issue in another tool that returns the error below, which might be the culprit here as well: even though restartPolicy: Always is set in the original spec, the restart still fails.

time="2024-02-15T01:52:06Z" level=error msg="provider.kubernetes: got error while updating resource" deployment=crm-organizations error="Deployment.apps \"*****\" is invalid: [spec.template.spec.initContainers[0].livenessProbe: Forbidden: may not be set for init containers without restartPolicy=Always, spec.template.spec.initContainers[0].readinessProbe: Forbidden: may not be set for init containers without restartPolicy=Always, spec.template.spec.initContainers[0].startupProbe: Forbidden: may not be set for init containers without restartPolicy=Always]" kind=deployment namespace=*** update="latest->latest"

@akhilmhdh
Member

CC: @maidul98

@mainey-cc
Author

I pushed a PR to resolve this issue: #1615

@linear linear bot closed this as not planned on Sep 23, 2024
@vmatsiiako vmatsiiako reopened this Sep 29, 2024

3 participants