
Azure disk: Fail to find storage account if there is no "vhds" container in it #38362

Closed
weherdh opened this issue Dec 8, 2016 · 10 comments · Fixed by #37845

Comments


weherdh commented Dec 8, 2016

Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.1011+42d5a1a9cd4aca", GitCommit:"42d5a1a9cd4acae9c7b258c21bfc4fb4955d7be3", GitTreeState:"clean", BuildDate:"2016-11-30T07:19:12Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.1011+42d5a1a9cd4aca", GitCommit:"42d5a1a9cd4acae9c7b258c21bfc4fb4955d7be3", GitTreeState:"clean", BuildDate:"2016-11-30T07:19:12Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: Azure
  • OS (e.g. from /etc/os-release): RHEL 7.3
  • Install tools: Ansible

What happened:
PVC stays Pending with a newly created storage account

What you expected to happen:
The storage class should find the storage account and then create a dynamic PV for the PVC to bind to

How to reproduce it (as minimally and precisely as possible):

  1. Create a new storage account without creating any containers
  2. Create a storage class as below
[root@wehe-master1 azure]# cat azscw-LRS.yaml 
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: wlrs 
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS
  location: westus 
  storageAccount: qewestus 

[root@wehe-master1 azure]# kubectl describe storageclass wlrs
Name:		wlrs
IsDefaultClass:	No
Annotations:	<none>
Provisioner:	kubernetes.io/azure-disk
Parameters:	location=westus,skuName=Standard_LRS,storageAccount=qewestus
No events.

  3. Create a PVC annotated with this storage class
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: azpvcwest
  annotations:        
    volume.beta.kubernetes.io/storage-class: wlrs 
spec:
  accessModes:
    - ReadWriteOnce 
  resources:
    requests:
      storage: 1Gi
  4. Check that the PVC status is Pending (a minimal command sketch follows below)
[root@wehe-master1 ~]# kubectl describe pvc azpvcwest
Events:
  FirstSeen	LastSeen	Count	From				SubObjectPath	Type		Reason			Message
  ---------	--------	-----	----				-------------	--------	------			-------
  16h		11s		3940	{persistentvolume-controller }			Warning		ProvisioningFailed	Failed to provision volume with StorageClass "wlrs": failed to find a matching storage account
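
For reference, a minimal sketch of creating the claim and checking its status, assuming the PVC manifest above is saved as azpvcwest.yaml (a hypothetical file name):

# Create the claim from the manifest above, then watch it stay Pending.
kubectl create -f azpvcwest.yaml
kubectl get pvc azpvcwest
kubectl describe pvc azpvcwest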

Anything else we need to know:
After I created a new container "vhds" in the "qewestus" storage account, the storage class worked and the PVC was bound:

[root@wehe-master1 ~]# kubectl get pvc
NAME        STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
azpvcwest   Bound     pvc-703fc37c-bc65-11e6-b2be-000d3a194418   1Gi        RWO           16h

rootfs commented Dec 8, 2016

The "vhds" container is a prerequisite. Will update the doc.


kim0 commented Dec 15, 2016

@rootfs, I was about to open this exact bug. Shouldn't the provisioner create the vhds container?


rootfs commented Dec 15, 2016

Yes, there is a TODO for this case (and also for the missing storage account case):
https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/azure/azure_storage.go#L198


kim0 commented Dec 15, 2016

Another minor issue: new volumes take a long time (tens of minutes) to be formatted as ext4. This causes the pod to time out and fail on first use! Any way to make this better?


rootfs commented Jan 5, 2017

@kim0 the fix is in #37845

rootfs added a commit to rootfs/kubernetes that referenced this issue Jan 6, 2017
Signed-off-by: Huamin Chen <hchen@redhat.com>
k8s-github-robot pushed a commit that referenced this issue Jan 9, 2017
Automatic merge from submit-queue

Azure disk volume fixes

fix #36571: Do not report error when deleting an attached volume
fix #38362: create blob vhds container if not exists
jayunit100 pushed a commit to jayunit100/kubernetes that referenced this issue Jan 13, 2017
Signed-off-by: Huamin Chen <hchen@redhat.com>

eggsy84 commented Feb 2, 2017

Can anyone share how to create the vhds container, for those who don't yet have this fix?

@jgeraerts

@eggsy84
Easiest is through the portal: browse to your storage account and create a container called vhds.


eggsy84 commented Feb 2, 2017

That's great, thanks for the quick response. The automator in me now wants to see if I can automate the creation of that container through PowerShell ;)
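
A minimal sketch of that automation, assuming the classic Azure PowerShell storage cmdlets; "<account-key>" is a placeholder (it can be fetched from the portal or with Get-AzureRmStorageAccountKey):

# Sketch only: pre-create the "vhds" container that the provisioner expects.
$ctx = New-AzureStorageContext -StorageAccountName "qewestus" -StorageAccountKey "<account-key>"
# -Permission Off keeps the container private; the provisioner authenticates with the account key.
New-AzureStorageContainer -Name "vhds" -Context $ctx -Permission Off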


rootfs commented Feb 2, 2017


eggsy84 commented Feb 2, 2017

Does the container have to be of type 'Container', i.e. full public read access?

There are three levels of anonymous read access: Off, Blob, and Container. To prevent anonymous access to blobs, set the Permission parameter to Off. By default, the new container is private and can be accessed only by the account owner. To allow anonymous public read access to blob resources, but not to container metadata or to the list of blobs in the container, set the Permission parameter to Blob. To allow full public read access to blob resources, container metadata, and the list of blobs in the container, set the Permission parameter to Container. For more information, see Manage anonymous read access to containers and blobs.
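
It shouldn't need public access: as far as I can tell, the provisioner authenticates with the storage account key rather than anonymously, so a private container (Permission Off, the default) is enough. If the container was already created as public, it can be switched back, e.g. (assuming the same $ctx as in the sketch above):

# Tighten an existing container back to private access.
Set-AzureStorageContainerAcl -Name "vhds" -Permission Off -Context $ctx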

berryjam pushed a commit to berryjam/kubernetes that referenced this issue Aug 18, 2017
Signed-off-by: Huamin Chen <hchen@redhat.com>