This repository has been archived by the owner on Feb 22, 2022. It is now read-only.

Kafka PersistentVolumeClaim is not bound in Minikube #2338

Closed
jengo opened this issue Sep 26, 2017 · 12 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@jengo

jengo commented Sep 26, 2017

I am running macOS 10.12.6 with Minikube 0.22.2 and helm 2.6.1.

I tried to set up Kafka using these commands:

helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
helm install --name my-kafka incubator/kafka

The first two pods that try to launch report these errors:

PersistentVolumeClaim is not bound: "datadir-my-kafka-kafka-0"
Back-off restarting failed container
Error syncing pod
PersistentVolumeClaim is not bound: "datadir-my-kafka-zookeeper-0"
Back-off restarting failed container
Error syncing pod

I do see both of the referenced Persistent Volume Claims being created along with both Persistent Volumes. The pods never finish setting up and keep restarting.
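
To confirm what is (and isn't) bound, here is a rough sketch of the checks I ran, using the my-kafka release name from the install command above:

# Claims and volumes created by the chart; STATUS should show Bound
kubectl get pvc
kubectl get pv

# Events and logs on the first Kafka pod, to see why it keeps restarting
kubectl describe pod my-kafka-kafka-0
kubectl logs my-kafka-kafka-0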

@ginkel

ginkel commented Dec 5, 2017

@jengo Hitting the same issue. Have you been able to figure out a workaround?

@sammerry
Contributor

@jellonek @ginkel any luck? Same problem using stable/sentry chart.

@skunkwerk

I'm having the same issue on Google Container Engine; I keep getting these errors with the Helm installation.

@pablorsk

pablorsk commented Jan 6, 2018

I have this error only with Minikube; with GCE it works fine.

@ShahNewazKhan

I am facing the same issue:

Events:
  FirstSeen     LastSeen        Count   From                    SubObjectPath                          Type             Reason                  Message
  ---------     --------        -----   ----                    -------------                          -------- ------                  -------
  7s            7s              2       default-scheduler                                              Warning          FailedScheduling        PersistentVolumeClaim is not bound: "datadir-dev-zookeeper-0"
  6s            6s              1       default-scheduler                                              Normal           Scheduled               Successfully assigned dev-zookeeper-0 to minikube
  6s            6s              1       kubelet, minikube                                              Normal           SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "pvc-4b088ee5-f5a1-11e7-95f1-52540028db7a" 
  6s            6s              1       kubelet, minikube                                              Normal           SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "default-token-bnjcp" 
  5s            5s              2       kubelet, minikube       spec.containers{zookeeper-server}      Normal           Pulled                  Container image "gcr.io/google_samples/k8szk:v2" already present on machine
  5s            5s              2       kubelet, minikube       spec.containers{zookeeper-server}      Warning          Failed                  Error: lstat /tmp/hostpath-provisioner/pvc-4b088ee5-f5a1-11e7-95f1-52540028db7a: no such file or directory
  5s            5s              2       kubelet, minikube                                              Warning          FailedSync              Error syncing pod

This could possibly be a race condition: the volume mount is reported as successful, yet the container fails because the backing hostpath directory does not exist:

Error: lstat /tmp/hostpath-provisioner/pvc-4b088ee5-f5a1-11e7-95f1-52540028db7a: no such file or directory

MountVolume.SetUp succeeded for volume "pvc-4b088ee5-f5a1-11e7-95f1-52540028db7a"
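
A rough way to verify this from the host (just a debugging sketch; the PVC UID is the one from my events above, yours will differ):

# Check whether the hostpath provisioner actually created the backing directory
minikube ssh "ls -la /tmp/hostpath-provisioner/"

# Assumed temporary workaround: create the missing directory by hand and let the pod restart
minikube ssh "sudo mkdir -p /tmp/hostpath-provisioner/pvc-4b088ee5-f5a1-11e7-95f1-52540028db7a"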

@kevinohara80

Has anyone found a workaround to this issue?

@feffi

feffi commented Feb 5, 2018

anyone? BUMP?

@ghost

ghost commented Feb 7, 2018

Also seeing this problem when trying to install into Minikube.

@gexinworks

Hello, has anyone found a solution to this issue?

@AmundsenJunior

Addressed via kubernetes/minikube#2256. I upgraded my Minikube setup to the latest version (v0.25.2), and the whole chart started up successfully.
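
For anyone else hitting this, a rough outline of the upgrade-and-retry steps on macOS (the download URL follows the usual Minikube release naming; adjust for your platform):

# Remove the old cluster so the hostpath provisioner state is recreated
minikube delete

# Install Minikube v0.25.2 (macOS binary; use minikube-linux-amd64 on Linux)
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.25.2/minikube-darwin-amd64
chmod +x minikube && sudo mv minikube /usr/local/bin/

# Start a fresh cluster, reinstall Tiller, and retry the chart
minikube start
helm init
helm install --name my-kafka incubator/kafka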

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 5, 2018
@stale

stale bot commented Aug 8, 2018

This issue is being automatically closed due to inactivity.

@stale stale bot closed this as completed Aug 8, 2018