
Pod configured with projected volume is always in CrashLoopBackOff state #118526

Closed
niranjandarshann opened this issue Jun 7, 2023 · 11 comments
Labels
kind/support: Categorizes issue or PR as a support question.
needs-sig: Indicates an issue or PR lacks a `sig/foo` label and requires one.
needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@niranjandarshann

Page related to the issue: https://kubernetes.io/docs/concepts/storage/projected-volumes/#example-configuration-secrets-nondefault-permission-mode
When I create a projected volume with two secrets, mysecret and mysecret1, projected to /project-volume/my-guide/my-username and /project-volume/my-guide/my-password, the secrets are created perfectly, and the pod is created as well, but it remains in the CrashLoopBackOff state.
Am I missing anything, or is there a problem with the file?
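
For reference, the two secrets described here would look roughly like the following as manifests. This is only a sketch: the key names (username, password) and the values are assumptions, since the actual definitions appear only in the attached screenshots.

  apiVersion: v1
  kind: Secret
  metadata:
    name: mysecret
  type: Opaque
  stringData:
    username: my-app-user        # placeholder value, not from the issue
  ---
  apiVersion: v1
  kind: Secret
  metadata:
    name: mysecret1
  type: Opaque
  stringData:
    password: my-app-password    # placeholder value, not from the issue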

I am attaching the whole process I followed:

[Screenshot: location of the secret files]
[Screenshot: the created secrets]
[Screenshot: pod YAML file (podproj.yaml)]
[Screenshot: final output]

Your opinion and help mean a lot to me.

Proposed solution:
The pod should reach the Ready state.

@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jun 7, 2023
@k8s-ci-robot
Contributor

There are no sig labels on this issue. Please add an appropriate label by using one of the following commands:

  • /sig <group-name>
  • /wg <group-name>
  • /committee <group-name>

Please see the group list for a listing of the SIGs, working groups, and committees available.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Jun 7, 2023
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@niranjandarshann
Author

/language en
/kind support

@k8s-ci-robot
Contributor

@niranjandarshann: The label(s) language/en cannot be applied, because the repository doesn't have them.

In response to this:

/language en
/kind support

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the kind/support Categorizes issue or PR as a support question. label Jun 7, 2023
@tamilselvan1102

tamilselvan1102 commented Jun 7, 2023

By default, a pod's restart policy is Always, meaning the kubelet restarts its containers whenever they exit, regardless of exit code (the other options are Never and OnFailure). Depending on the restart policy defined in the pod template, Kubernetes may try to restart the pod many times.

Every time the pod is restarted, Kubernetes waits for a longer and longer time before the next attempt, known as a "backoff delay". During this process, Kubernetes displays the CrashLoopBackOff error.
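
For reference, the restart policy is set at the pod level, not per container. A minimal sketch (the pod name and the exiting command are illustrative, not taken from this issue):

  apiVersion: v1
  kind: Pod
  metadata:
    name: restart-demo              # hypothetical name
  spec:
    restartPolicy: Always           # the default; Never and OnFailure are the alternatives
    containers:
    - name: demo
      image: busybox
      command: ["sh", "-c", "exit 0"]   # exits immediately; under Always the kubelet
                                        # restarts it with an increasing backoff delay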

@niranjandarshann
Author

By default, a pod's restart policy is Always, meaning the kubelet restarts its containers whenever they exit, regardless of exit code (the other options are Never and OnFailure). Depending on the restart policy defined in the pod template, Kubernetes may try to restart the pod many times.

Every time the pod is restarted, Kubernetes waits for a longer and longer time before the next attempt, known as a "backoff delay". During this process, Kubernetes displays the CrashLoopBackOff error.

@tamilselvan1102 thank you, but here I am looking for what is wrong in the process I am following:
how to create a pod with a projected volume so that it stays in the Ready state instead of CrashLoopBackOff.

@niranjandarshann
Author

I am attaching the description of the pod volume-test.

[Screenshot: pod description output]

@killshotrevival
Contributor

Hey @niranjandarshann, I believe the issue is not with the volumes but with the image. You are using busybox as the image, and since there is no process that keeps the container running, the pod completes, as you can see in your first screenshot, and then goes into the CrashLoopBackOff state.

To keep a container running, you can try something like:

  # goes under spec.containers in the pod manifest
  - name: container-test
    image: busybox
    command: ["sleep", "3600"]

This will start a sleep process in your busybox container that will keep it running for 1 hr. Hope this helps 😃
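
For context, a complete manifest with this fix applied might look like the following. This is a sketch based on the docs example linked in the issue, using the secret names and mount path from the issue description; the key names and the mode value are assumptions.

  apiVersion: v1
  kind: Pod
  metadata:
    name: volume-test
  spec:
    containers:
    - name: container-test
      image: busybox
      command: ["sleep", "3600"]      # keeps the container running for 1 hr
      volumeMounts:
      - name: all-in-one
        mountPath: "/project-volume"
        readOnly: true
    volumes:
    - name: all-in-one
      projected:
        sources:
        - secret:
            name: mysecret
            items:
            - key: username           # assumed key name
              path: my-guide/my-username
        - secret:
            name: mysecret1
            items:
            - key: password           # assumed key name
              path: my-guide/my-password
              mode: 511               # nondefault permission mode (0777 octal), as in the docs example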

@niranjandarshann
Author

Hey @niranjandarshann, I believe the issue is not with the volumes but with the image. You are using busybox as the image, and since there is no process that keeps the container running, the pod completes, as you can see in your first screenshot, and then goes into the CrashLoopBackOff state.

To keep a container running, you can try something like:

  # goes under spec.containers in the pod manifest
  - name: container-test
    image: busybox
    command: ["sleep", "3600"]

This will start a sleep process in your busybox container that will keep it running for 1 hr. Hope this helps 😃

@killshotrevival Thank you for your support; the pod is now running after adding the command field to my YAML file.

@niranjandarshann
Author

Closing the issue, marking it as resolved.
/close

@k8s-ci-robot
Contributor

@niranjandarshann: Closing this issue.

In response to this:

Closing the issue, marking it as resolved.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
