
HostPath PV is stuck at terminating when claimed from StatefulSet #78106

Closed
ganchandrasekaran opened this issue May 20, 2019 · 6 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/storage Categorizes an issue or PR as relevant to SIG Storage.

Comments

@ganchandrasekaran

ganchandrasekaran commented May 20, 2019

What happened:
When I try to delete the deployment, the PV is stuck in Terminating status forever. However, claims made from a Kind: Deployment, and the PVs bound to them, terminate successfully without any problem.

What you expected to happen:
All pods, services and PVs disappear after termination.

How to reproduce it (as minimally and precisely as possible):

# kubectl describe pv elasticsearch-data-001
Name:            elasticsearch-data-001
Labels:          <none>
Annotations:     pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    esdata
Status:          Terminating (lasts <invalid>)
Claim:           abc/data-elasticsearch-data-0
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        2Gi
Node Affinity:   <none>
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /mnt/k8s-pv/es-data-001
    HostPathType:
Events:            <none>
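
For reference, a hostPath PV that would produce the output above might look roughly like this (reconstructed from the describe output, not the original manifest):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data-001
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Delete
  storageClassName: esdata
  hostPath:
    path: /mnt/k8s-pv/es-data-001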

Make a claim for it from a StatefulSet:

kind: StatefulSet
...
  volumeClaimTemplates:
  - metadata:
      name: abc
    spec:
      accessModes:
        - ReadWriteOnce
      volumeMode: Filesystem
      storageClassName: esdata
      resources:
        requests:
          storage: 1Gi
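
The state above can be reached simply by deleting the objects in the usual order. A minimal sketch of the deletion sequence (object names are illustrative, taken from the describe output; the StatefulSet name is an assumption):

kubectl delete statefulset elasticsearch-data -n abc   # StatefulSet name is an assumption
kubectl delete pvc data-elasticsearch-data-0 -n abc
kubectl delete pv elasticsearch-data-001
kubectl get pv elasticsearch-data-001                  # remains in Terminating indefinitely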

Anything else we need to know?:
Of course, removing the finalizers as suggested in the referenced bug report works around the issue:

kubectl patch pvc db-pv-claim -p '{"metadata":{"finalizers":null}}'
kubectl patch pod db-74755f6698-8td72 -p '{"metadata":{"finalizers":null}}'
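
The object names above come from the referenced bug report; for the PV in this issue, the analogous patch that drops the kubernetes.io/pv-protection finalizer would presumably be:

kubectl patch pv elasticsearch-data-001 -p '{"metadata":{"finalizers":null}}'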

But this creates two problems:
1. I am not able to redeploy the same chart without manually deleting these PVs, since the chart produces the same PV names.
2. After I manually delete the PVs, I change the PV names in the chart and redeploy. But Kubernetes assigns the old PV, which no longer exists, to my claim. This means that the above step does not remove the PV completely (see the check below).
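
One thing worth checking (a guess, not verified): the PVC created by the volumeClaimTemplate is not deleted together with the StatefulSet, and its spec.volumeName may still point at the old PV name, which would explain the stale binding. For example:

kubectl get pvc data-elasticsearch-data-0 -n abc -o jsonpath='{.spec.volumeName}'
kubectl delete pvc data-elasticsearch-data-0 -n abc   # then redeploy the chart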

Any insight would be appreciated! Thanks

@saad-ali
@childsb

@ganchandrasekaran ganchandrasekaran added the kind/bug Categorizes issue or PR as related to a bug. label May 20, 2019
@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label May 20, 2019
@athenabot

/sig storage

These SIGs are my best guesses for this issue. Please comment /remove-sig <name> if I am incorrect about one.

🤖 I am a bot run by vllry. 👩‍🔬

@k8s-ci-robot k8s-ci-robot added sig/storage Categorizes an issue or PR as relevant to SIG Storage. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels May 21, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 5, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 5, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@nemobis

nemobis commented Feb 28, 2024

/reopen Still valid
