
Statefulset example: cannot find volume plugin for alpha provisioning #12676

Closed
syffs opened this issue Jan 26, 2017 · 5 comments
Labels
area/examples component/storage kind/bug Categorizes issue or PR as related to a bug. lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. priority/P2

Comments


syffs commented Jan 26, 2017

I'm trying to run this Zookeeper example, but I end up with errors such as:

  FirstSeen     LastSeen        Count   From                    SubObjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                    -------------   --------        ------                  -------
  1h            12s             281     {default-scheduler }                    Warning         FailedScheduling        [SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "datadir-zoo-0", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "datadir-zoo-0", which is unexpected.]

or

error finding provisioning plugin for claim test/datadir-zoo-2: cannot find volume plugin for alpha provisioning

I'm not sure, but I suspect it might be caused by the line volume.alpha.kubernetes.io/storage-class: anything, because I don't think any default StorageClass is defined.
If so, how can I set up the simplest possible StorageClass to get this working? Since I'm self-hosting my OpenShift Origin cluster, none of the cloud storage options (GCE, AWS, Azure, etc.) apply.
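For reference, one common workaround on a self-hosted cluster without any dynamic provisioner is to pre-provision a PersistentVolume by hand so the StatefulSet's claim can bind. This is a hedged sketch, not from this thread: the PV name, capacity, and hostPath are placeholders, and the capacity and access mode must match what the claim template in zookeeper.yaml requests.

```yaml
# Sketch: a manually provisioned hostPath PersistentVolume that the
# PVC "datadir-zoo-0" could bind to. All names/sizes/paths are
# illustrative assumptions, not values taken from the example.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-zoo-0-pv
spec:
  capacity:
    storage: 10Gi            # must be >= the PVC's requested size
  accessModes:
    - ReadWriteOnce          # must match the PVC's access mode
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /var/lib/origin/volumes/datadir-zoo-0
```

You would create one such PV per replica (datadir-zoo-0, datadir-zoo-1, ...) with oc create -f; hostPath only makes sense on single-node or test clusters, since the data lives on whichever node the pod lands on.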

Version
[user@server ~]$ oc version
oc v1.5.0-alpha.2+e4b43ee
kubernetes v1.5.2+43a9be4
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://192.168.1.106:8443
openshift v1.5.0-alpha.2+e4b43ee
kubernetes v1.5.2+43a9be4
Steps To Reproduce
  1. Retrieve file zookeeper.yaml
  2. Run as below:
oc process -f zookeeper.yaml > zookeeper.active
oc create -f zookeeper.active
  3. Run: oc describe pods zoo-0
Current Result
  • Volumes are not allocated

  • Pods are not scheduled

@smarterclayton
Contributor

I wonder if the example was updated upstream to remove the annotation.

@stevekuznetsov
Contributor

When I hit this, I was installing Origin with the Ansible installer and needed to add the following stanza to my OSEv3 vars:

osm_controller_args:
  enable-hostpath-provisioner:
    - true
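If I understand the installer correctly, that inventory stanza should end up passing the flag --enable-hostpath-provisioner=true to the OpenShift controllers via the master configuration. A hedged sketch of the resulting master-config.yaml fragment (an assumption about how openshift-ansible renders it, not verified against every release):

```yaml
# Sketch of the expected master-config.yaml result (assumption):
# osm_controller_args entries become controllerArguments, which enables
# the built-in hostPath dynamic provisioner for testing/single-node use.
kubernetesMasterConfig:
  controllerArguments:
    enable-hostpath-provisioner:
      - "true"
```

With that in place, alpha dynamic provisioning can satisfy the claims on a self-hosted cluster; it is intended for development clusters only, since hostPath volumes are node-local.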

@jorgemoralespou

jorgemoralespou commented Jul 17, 2017

Moved to #15239 and #15240

@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot added the lifecycle/stale label on Feb 14, 2018
@stevekuznetsov
Contributor

Other issues are tracking this.

/close
