
Pods with volumes stuck in ContainerCreating after cluster node is deleted from OpenStack #50200

Closed
kars7e opened this issue Aug 5, 2017 · 20 comments
Labels
area/provider/openstack, kind/bug, sig/storage

Comments

@kars7e

kars7e commented Aug 5, 2017

Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug

What happened:
One of the cluster worker nodes was deleted from OpenStack.
Pods running on that node were rescheduled onto different nodes but got stuck in ContainerCreating. They stayed stuck for 20+ minutes, until an action such as restarting the controller manager was taken (the controller cannot reconcile without manual intervention). See the end of this report for actions that fix it.

What you expected to happen:
The pod should start correctly on a different node, with its volumes attached.

How to reproduce it (as minimally and precisely as possible):

  1. Create a pod that uses a dynamically provisioned Cinder volume (PVC).
  2. Delete the OpenStack instance backing the node the pod is running on.
  3. Watch the rescheduled pod stay stuck in ContainerCreating with Multi-Attach errors.
Anything else we need to know?:
The underlying issue is that Cinder volumes get detached when the instance is deleted, but k8s does not register this fact and keeps throwing:

Multi-Attach error for volume "pvc-7da59477-7a13-11e7-a1c3-fa163ec6b87c" Volume is already exclusively attached to one node and can't be attached to another

It seems that the controller manager attempts the detach, but is unable to handle the fact that the volume is in the available state (k8s-node-2 is the node that was deleted):

{"log":"E0805 19:36:33.196285 6 nestedpendingoperations.go:262] Operation for \"\\\"kubernetes.io/cinder/9dd4110b-e9f2-4eba-a2b5-22b6082b2c1b\\\"\" failed. No retries permitted until 2017-08-05 19:37:37.196173574 +0000 UTC (durationBeforeRetry 1m4s). Error: DetachVolume.Detach failed for volume \"pvc-7da59477-7a13-11e7-a1c3-fa163ec6b87c\" (UniqueName: \"kubernetes.io/cinder/9dd4110b-e9f2-4eba-a2b5-22b6082b2c1b\") on node \"k8s-node-2-31217f04-941c-48f2-b36e-8a97a3bf7515\" : can not detach volume kubernetes-dynamic-pvc-7da59477-7a13-11e7-a1c3-fa163ec6b87c, its status is available.\n","stream":"stderr","time":"2017-08-05T19:36:33.196396348Z"}

Inspecting the pod shows the following events (note: this pod eventually started because the controller manager was restarted):

Events:
  FirstSeen	LastSeen	Count	From								SubObjectPath		Type		Reason		  	Message
  ---------	--------	-----	----								-------------		--------	------			-------
  11m		11m		1	default-scheduler									Normal		Scheduled	  	Successfully assigned webapp-staging-3564021476-tq2dc to k8s-node-1-31217f04-941c-48f2-b36e-8a97a3bf7515
  11m		11m		1	kubelet, k8s-node-1-31217f04-941c-48f2-b36e-8a97a3bf7515				Normal		SuccessfulMountVolume	MountVolume.SetUp succeeded for volume "default-token-bw55f"
  11m		2m		5144	attachdetach										Warning		FailedAttachVolume	Multi-Attach error for volume "pvc-7da59477-7a13-11e7-a1c3-fa163ec6b87c" Volume is already exclusively attached to one node and can't be attached to another
  9m		2m		4	kubelet, k8s-node-1-31217f04-941c-48f2-b36e-8a97a3bf7515				Warning		FailedMount	  	Unable to mount volumes for pod "webapp-staging-3564021476-tq2dc_default(63941f31-7a14-11e7-bd74-fa163eae2160)": timeout expired waiting for volumes to attach/mount for pod "default"/"webapp-staging-3564021476-tq2dc". list of unattached/unmounted volumes=[mypvc]
  9m		2m		4	kubelet, k8s-node-1-31217f04-941c-48f2-b36e-8a97a3bf7515				Warning		FailedSync	  	Error syncing pod
  1m		1m		1	kubelet, k8s-node-1-31217f04-941c-48f2-b36e-8a97a3bf7515				Normal		SuccessfulMountVolume	MountVolume.SetUp succeeded for volume "pvc-7da59477-7a13-11e7-a1c3-fa163ec6b87c"
  1m		1m		1	kubelet, k8s-node-1-31217f04-941c-48f2-b36e-8a97a3bf7515	spec.containers{nginx}	Normal		Pulled		  	Container image "nginx:1.7.9" already present on machine
  1m		1m		1	kubelet, k8s-node-1-31217f04-941c-48f2-b36e-8a97a3bf7515	spec.containers{nginx}	Normal		Created		  	Created container
  1m		1m		1	kubelet, k8s-node-1-31217f04-941c-48f2-b36e-8a97a3bf7515	spec.containers{nginx}	Normal		Started

Also an excerpt from the controller manager log (with --v=4):

{"log":"I0805 19:36:33.156440       6 deployment_controller.go:562] Started syncing deployment \"default/webapp-staging\" (2017-08-05 19:36:33.156430897 +0000 UTC)\n","stream":"stderr","time":"2017-08-05T19:36:33.156586574Z"}
{"log":"I0805 19:36:33.157358       6 progress.go:231] Queueing up deployment \"webapp-staging\" for a progress check now\n","stream":"stderr","time":"2017-08-05T19:36:33.157472884Z"}
{"log":"I0805 19:36:33.157409       6 deployment_controller.go:564] Finished syncing deployment \"default/webapp-staging\" (954.263µs)\n","stream":"stderr","time":"2017-08-05T19:36:33.157484805Z"}
{"log":"E0805 19:36:33.196088       6 openstack_volumes.go:263] can not detach volume kubernetes-dynamic-pvc-7da59477-7a13-11e7-a1c3-fa163ec6b87c, its status is available.\n","stream":"stderr","time":"2017-08-05T19:36:33.196206222Z"}
{"log":"E0805 19:36:33.196104       6 attacher.go:394] Error detaching volume \"9dd4110b-e9f2-4eba-a2b5-22b6082b2c1b\" from node \"k8s-node-2-31217f04-941c-48f2-b36e-8a97a3bf7515\": can not detach volume kubernetes-dynamic-pvc-7da59477-7a13-11e7-a1c3-fa163ec6b87c, its status is available.\n","stream":"stderr","time":"2017-08-05T19:36:33.196226294Z"}
{"log":"I0805 19:36:33.196143       6 actual_state_of_world.go:478] Add new node \"k8s-node-2-31217f04-941c-48f2-b36e-8a97a3bf7515\" to nodesToUpdateStatusFor\n","stream":"stderr","time":"2017-08-05T19:36:33.196230931Z"}
{"log":"I0805 19:36:33.196157       6 actual_state_of_world.go:486] Report volume \"kubernetes.io/cinder/9dd4110b-e9f2-4eba-a2b5-22b6082b2c1b\" as attached to node \"k8s-node-2-31217f04-941c-48f2-b36e-8a97a3bf7515\"\n","stream":"stderr","time":"2017-08-05T19:36:33.196234322Z"}
{"log":"E0805 19:36:33.196285       6 nestedpendingoperations.go:262] Operation for \"\\\"kubernetes.io/cinder/9dd4110b-e9f2-4eba-a2b5-22b6082b2c1b\\\"\" failed. No retries permitted until 2017-08-05 19:37:37.196173574 +0000 UTC (durationBeforeRetry 1m4s). Error: DetachVolume.Detach failed for volume \"pvc-7da59477-7a13-11e7-a1c3-fa163ec6b87c\" (UniqueName: \"kubernetes.io/cinder/9dd4110b-e9f2-4eba-a2b5-22b6082b2c1b\") on node \"k8s-node-2-31217f04-941c-48f2-b36e-8a97a3bf7515\" : can not detach volume kubernetes-dynamic-pvc-7da59477-7a13-11e7-a1c3-fa163ec6b87c, its status is available.\n","stream":"stderr","time":"2017-08-05T19:36:33.196396348Z"}
{"log":"I0805 19:36:33.212211       6 node_status_updater.go:76] Could not update node status. Failed to find node \"k8s-node-2-31217f04-941c-48f2-b36e-8a97a3bf7515\" in NodeInformer cache. Error: 'node \"k8s-node-2-31217f04-941c-48f2-b36e-8a97a3bf7515\" not found'\n","stream":"stderr","time":"2017-08-05T19:36:33.212356492Z"}
{"log":"W0805 19:36:33.212444       6 reconciler.go:262] Multi-Attach error for volume \"pvc-7da59477-7a13-11e7-a1c3-fa163ec6b87c\" (UniqueName: \"kubernetes.io/cinder/9dd4110b-e9f2-4eba-a2b5-22b6082b2c1b\") from node \"k8s-node-1-31217f04-941c-48f2-b36e-8a97a3bf7515\" Volume is already exclusively attached to one node and can't be attached to another\n","stream":"stderr","time":"2017-08-05T19:36:33.2

Note: any of the following operations resolves the issue:

  • Restarting the active controller manager
  • Attaching the volume in question to a random instance
  • Deleting and recreating the pod

Environment:

  • Kubernetes version (use kubectl version):
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d70ffb940d12c648df2b6647580c150b8f113704", GitTreeState:"clean", BuildDate:"2017-08-04T07:14:35Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration:
    OpenStack Mitaka
  • OS (e.g. from /etc/os-release):
    Ubuntu 16.04.2 LTS (Xenial Xerus)
  • Kernel (e.g. uname -a):
    Linux k8s-master-1-31217f04-941c-48f2-b36e-8a97a3bf7515 4.4.0-62-generic #83-Ubuntu SMP Wed Jan 18 14:10:15 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:
    kargo (kubespray)
  • Others:
k8s-ci-robot added the kind/bug label Aug 5, 2017
@k8s-github-robot

@kars7e
There are no sig labels on this issue. Please add a sig label by:

  1. mentioning a sig: @kubernetes/sig-<group-name>-<group-suffix>
    e.g., @kubernetes/sig-contributor-experience-<group-suffix> to notify the contributor experience sig, OR

  2. specifying the label manually: /sig <label>
    e.g., /sig scalability to apply the sig/scalability label

Note: Method 1 will trigger an email to the group. You can find the group list here and label list here.
The <group-suffix> in method 1 has to be replaced with one of these: bugs, feature-requests, pr-reviews, test-failures, proposals

k8s-github-robot added the needs-sig label Aug 5, 2017
@kars7e
Author

kars7e commented Aug 5, 2017

/sig storage
/sig openstack
cc @kubernetes/sig-storage-bugs

k8s-ci-robot added the sig/storage, area/provider/openstack, and kind/bug labels Aug 5, 2017
@k8s-ci-robot
Contributor

@kars7e: Reiterating the mentions to trigger a notification:
@kubernetes/sig-storage-bugs.

In response to this:

/sig storage
/sig openstack
cc @kubernetes/sig-storage-bugs

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-github-robot removed the needs-sig label Aug 5, 2017
@FengyunPan

It seems attachDetachController does not update actualStateOfWorld when a node is deleted. I will check it.
/cc

@jingxu97
Contributor

jingxu97 commented Aug 6, 2017

From the log @kars7e provided, I think the following happened.

  1. The node is deleted and the pod is killed.
  2. The attach/detach controller gets the podDelete event and tries to detach the volume.
  3. The detach fails because the volume status is available (this is a bug in the OpenStack cloud provider code; DetachDisk should return success if the volume status is already available, I think: https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/openstack/openstack_volumes.go#L261).
  4. Due to the feature "Don't try to attach volumes which are already attached to other nodes" (#45346), added in 1.7, if the volume is already attached to another node the reconciler will not try to attach it. This causes the issue because the actual state does not represent the true state.

One thing I am not sure about: syncState should check the volume status and update the actual state periodically. @kars7e, did you set the flag disable-attach-detach-reconcile-sync?

Opened an issue for the OpenStack bug in DetachVolume
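
For reference, the periodic sync mentioned above is controlled by two kube-controller-manager flags. A sketch of how they would appear in the controller-manager manifest args is below; the flags themselves exist, but the default values shown are an assumption and worth double-checking:

    - --disable-attach-detach-reconcile-sync=false   # periodic sync of the actual state of world (assumed default: enabled)
    - --attach-detach-reconcile-sync-period=1m0s     # how often that sync runs (assumed default: 1m)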

@FengyunPan

FengyunPan commented Aug 6, 2017

The detach fails because the volume status is available (this is a bug in the OpenStack cloud provider code; DetachDisk should return success if the volume status is already available)

@jingxu97 Oops, we need to check for "available" in DetachDisk.

@kars7e
Author

kars7e commented Aug 6, 2017

Thanks @jingxu97 for looking into it. I actually had the same theory, so I added the following change to DetachDisk:

        if err != nil {
                return err
        }
+       if volume.Status == VolumeAvailableStatus {
+               // Nothing to do, volume is available
+               return nil
+       }
        if volume.Status != VolumeInUseStatus {
                errmsg := fmt.Sprintf("can not detach volume %s, its status is %s.", volume.Name, volume.Status)
                glog.Errorf(errmsg)

And tried with that. I no longer see errors about detaching the volume, but somehow the ASW (actual state of world) does not get updated, and I'm seeing thousands of Multi-Attach error for volume.... messages until I restart the controller-manager.

I haven't set the disable-attach-detach-reconcile-sync flag; here are my flags from the manifest:

    - controller-manager
    - --master=http://127.0.0.1:8080
    - --leader-elect=true
    - --service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --root-ca-file=/etc/kubernetes/ssl/apica.pem
    - --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem
    - --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem
    - --enable-hostpath-provisioner=false
    - --node-monitor-grace-period=40s
    - --node-monitor-period=5s
    - --pod-eviction-timeout=5m0s
    - --v=4
    - --cloud-provider=openstack
    - --cloud-config=/etc/kubernetes/cloud_config

@kars7e
Author

kars7e commented Aug 6, 2017

If it's of any help, here is the log (with v=4) for the controller manager:
https://gist.github.com/kars7e/b1fe9a8a330f2c3fbb3418b6e7613e11

Note: I removed all lines with Multi-Attach error for volume.... as there were thousands of them. The log is still long, but there are a few crucial lines:

  1. https://gist.github.com/kars7e/b1fe9a8a330f2c3fbb3418b6e7613e11#file-controller_manager-log-L1 - the log starts around the same time the node was deleted
  2. https://gist.github.com/kars7e/b1fe9a8a330f2c3fbb3418b6e7613e11#file-controller_manager-log-L95 - this is where the Multi-Attach error started showing up (thousands of them, removed for brevity)
  3. https://gist.github.com/kars7e/b1fe9a8a330f2c3fbb3418b6e7613e11#file-controller_manager-log-L1822 - the controller was restarted
  4. https://gist.github.com/kars7e/b1fe9a8a330f2c3fbb3418b6e7613e11#file-controller_manager-log-L2433 - the volume is successfully attached to the new node

k8s-node-2 is the deleted node; k8s-node-0 and k8s-node-1 are the remaining worker nodes. The container was rescheduled to k8s-node-1.

@FengyunPan

FengyunPan commented Aug 6, 2017

@jingxu97
UpdateNodeStatuses() will update the actual state when a node has been deleted,
but attachDetachController attaches/detaches volumes before calling UpdateNodeStatuses().

Can we update node status before ensuring volumes that should be attached are attached and ensuring volumes that should be detached are detached?
https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/volume/attachdetach/reconciler/reconciler.go#L283

@FengyunPan

FengyunPan commented Aug 6, 2017

@kars7e Can you test it with the following patch if it is convenient for you? If not, I will test it (sorry, my cluster is down and I will rebuild it the day after tomorrow):
------
--- a/pkg/controller/volume/attachdetach/reconciler/reconciler.go
+++ b/pkg/controller/volume/attachdetach/reconciler/reconciler.go
@@ -170,6 +170,12 @@ func (rc *reconciler) reconcile() {
 	// Detaches are triggered before attaches so that volumes referenced by
 	// pods that are rescheduled to a different node are detached first.
 
+	// Update Node Status
+	err := rc.nodeStatusUpdater.UpdateNodeStatuses()
+	if err != nil {
+		glog.Warningf("UpdateNodeStatuses failed with: %v", err)
+	}
+
 	// Ensure volumes that should be detached are detached.
------

@kars7e
Author

kars7e commented Aug 6, 2017

@FengyunPan Thanks! I will try that tomorrow

@jingxu97
Contributor

jingxu97 commented Aug 7, 2017

I don't think that will work. UpdateNodeStatuses() is not updating the actual state of the reconciler. It updates the node status to communicate with the kubelet, so that the kubelet volume manager knows whether the volume is already attached or not.

The strange thing is that if disable-attach-detach-reconcile-sync is not set, the reconciler's syncState should periodically check whether the volume is still attached to the node and update the actual state. I wonder whether this function also has a bug: https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/openstack/openstack_volumes.go#L409. Even though the volume status is already available, is it possible that volume.AttachedServerId still has the server id?

@FengyunPan

FengyunPan commented Aug 7, 2017

I don't think that will work. UpdateNodeStatuses() is not updating the actual state of the reconciler. It updates the node status to communicate with the kubelet, so that the kubelet volume manager knows whether the volume is already attached or not.

Thanks for your comment, that's right.

Even though the volume status is already available, is it possible that volume.AttachedServerId still has the server id?

I think that's impossible; if the volume's status is available, the volume's attachments will be [].
https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/openstack/openstack_volumes.go#L164
On the other hand, I tested it on v1.6.4 yesterday and it works fine.
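
A rough Go sketch (hypothetical types and names, not the upstream signatures) of the point being made: AttachedServerId is derived from the Cinder attachments list, so an "available" volume has no server id, and an attachment check against it would already report "not attached".

```go
package main

import "fmt"

// cinderVolume mimics what the Cinder API returns; volume mimics the provider's view.
type cinderVolume struct {
	Status      string
	Attachments []map[string]interface{}
}

type volume struct {
	Status           string
	AttachedServerId string
}

// fromCinder populates AttachedServerId only when an attachment exists,
// which can only happen while the volume is in-use.
func fromCinder(cv cinderVolume) volume {
	v := volume{Status: cv.Status}
	if len(cv.Attachments) > 0 {
		if id, ok := cv.Attachments[0]["server_id"].(string); ok {
			v.AttachedServerId = id
		}
	}
	return v
}

func diskIsAttached(v volume, instanceID string) bool {
	return v.AttachedServerId == instanceID
}

func main() {
	// The node was deleted, so Nova released the attachment and Cinder reports "available".
	v := fromCinder(cinderVolume{Status: "available"})
	fmt.Println(diskIsAttached(v, "4a6b413d-5c9d-465c-a621-067afd338e70")) // false
}
```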

@kars7e
Author

kars7e commented Aug 7, 2017

I've tried the UpdateNodeStatuses() change proposed by @FengyunPan. I deployed three pods (each with a single volume). Surprisingly, this time, after deleting the node, one of the volumes got reattached correctly to the new node. The rest, however, did not, and the Multi-Attach.... error started showing up again until I restarted the controller manager. I'm not sure whether this was affected by the patch - I suspect there might be a race condition there. I will grab the logs and post them if I find something interesting.

It was working fine for us in 1.6; this problem surfaced after upgrading to 1.7. My guess is that commit 06baeb3 introduced a check for whether the volume is already attached. In 1.6, the reconciler would just try to attach the volume to the new node in this case, and it would succeed because the volume is in the available state in OpenStack. In 1.7, the internal state is checked first, and it errors out before it even tries to attach the volume.
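
A self-contained toy illustration in Go (hypothetical names, not the actual reconciler code) of the behavior change being described: once the cached actual state claims the volume is attached to the deleted node, the 1.7-style check refuses to attach it anywhere else on every loop, whereas the 1.6-style path would simply attempt the attach and succeed because Cinder reports the volume as available.

```go
package main

import "fmt"

// actualStateOfWorld maps a volume name to the node it is believed attached to.
type actualStateOfWorld map[string]string

func reconcileAttach(asw actualStateOfWorld, volumeName, targetNode string) {
	if attachedNode, ok := asw[volumeName]; ok && attachedNode != targetNode {
		// 1.7-style behavior: trust the cache and refuse to attach elsewhere.
		fmt.Printf("Multi-Attach error for volume %q: already attached to %q\n", volumeName, attachedNode)
		return
	}
	// 1.6-style behavior effectively ended up here: just try the attach.
	fmt.Printf("attaching %q to %q\n", volumeName, targetNode)
	asw[volumeName] = targetNode
}

func main() {
	asw := actualStateOfWorld{
		// Stale entry: k8s-node-2 was deleted, but the failed detach left this in place.
		"pvc-7da59477-7a13-11e7-a1c3-fa163ec6b87c": "k8s-node-2",
	}
	reconcileAttach(asw, "pvc-7da59477-7a13-11e7-a1c3-fa163ec6b87c", "k8s-node-1")
}
```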

@kars7e
Author

kars7e commented Aug 7, 2017

So this time I see the following lines in the log:

I0807 07:14:22.021646       9 attacher.go:205] VolumesAreAttached: check volume "b21057c7-98dd-47f4-acb0-379748c22561" (specName: "pvc-7340212d-7b3f-11e7-a3fc-fa163e85eaa3") is no longer attached
I0807 07:14:22.021686       9 operation_generator.go:169] VerifyVolumesAreAttached determined volume "kubernetes.io/cinder/b21057c7-98dd-47f4-acb0-379748c22561" (spec.Name: "pvc-7340212d-7b3f-11e7-a3fc-fa163e85eaa3") is no longer attached to node "k8s-node-0-93614e2e-091a-476f-830e-7a47b7749f8f", therefore it was marked as detached.
I0807 07:14:22.110882       9 reconciler.go:278] attacherDetacher.AttachVolume started for volume "pvc-7340212d-7b3f-11e7-a3fc-fa163e85eaa3" (UniqueName: "kubernetes.io/cinder/b21057c7-98dd-47f4-acb0-379748c22561") from node "k8s-node-0-93614e2e-091a-476f-830e-7a47b7749f8f"

Then it fails to attach, because it tries to attach to the instance which is being deleted.

E0807 07:14:22.683883       9 openstack_volumes.go:248] Failed to attach b21057c7-98dd-47f4-acb0-379748c22561 volume to 4a6b413d-5c9d-465c-a621-067afd338e70 compute: Expected HTTP response code [200] when accessing [POST https://us11-1-openstack.oc.vmware.com:8774/v2.1/bc0ca978c3124cab94151aeb07b0fddb/servers/4a6b413d-5c9d-465c-a621-067afd338e70/os-volume_attachments], but got 409 instead

But eventually the target node is updated, and it successfully gets attached:

I0807 07:15:12.703075       9 operation_generator.go:271] AttachVolume.Attach succeeded for volume "pvc-7340212d-7b3f-11e7-a3fc-fa163e85eaa3" (UniqueName: "kubernetes.io/cinder/b21057c7-98dd-47f4-acb0-379748c22561") from node "k8s-node-1-93614e2e-091a-476f-830e-7a47b7749f8f"

Now, what about other volumes that are failing? I looked for VolumesAreAttached, and I see plenty of:
E0807 07:15:21.557652 9 operation_generator.go:161] VolumesAreAttached failed for checking on node "k8s-node-0-93614e2e-091a-476f-830e-7a47b7749f8f" with: Failed to find object

k8s-node-0 is the node that was deleted. Shouldn't VolumesAreAttached return false if the target instance does not exist?

@kars7e
Author

kars7e commented Aug 7, 2017

This also shows why one of the volumes was reattached correctly this time. The VolumesAreAttached check for that volume ran at the precise moment when the volumes had already been detached from the instance being deleted, but before the instance disappeared from Nova. Thus it was able to report the volume as available, which caused the reconciler to work properly (the volume is marked as not attached in the internal state).

@FengyunPan

@kars7e I have seen the log (E0806 05:36:32.343037 21 operation_generator.go:161] VolumesAreAttached failed for checking on node "k8s-node-2-cca67ed1-eda7-4988-848c-3222706c2b45" with: Failed to find object), which means VerifyVolumesAreAttachedPerNode checks a node that does not exist. This is a bug.

@FengyunPan

@kars7e Hi, I opened a PR for this bug, can you help me test it?

FengyunPan pushed a commit to FengyunPan/kubernetes that referenced this issue Aug 7, 2017
If node does not exist, node's volumes will be detached
automatically and become available. So mark them detached.
Fix: kubernetes#50200
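
A rough Go sketch of the idea behind that fix (hypothetical helper names, not the exact upstream signatures): when the per-node attachment check finds that the instance no longer exists in Nova, report every volume as detached and return no error, so the attach/detach controller can finally correct its actual state of world.

```go
package main

import (
	"errors"
	"fmt"
)

var errInstanceNotFound = errors.New("Failed to find object")

// getAttachedVolumes stands in for the Nova os-volume_attachments lookup.
// Here it always fails, simulating a node that was deleted from OpenStack.
func getAttachedVolumes(instanceID string) (map[string]bool, error) {
	return nil, errInstanceNotFound
}

func disksAreAttached(instanceID string, volumeIDs []string) (map[string]bool, error) {
	attached := make(map[string]bool, len(volumeIDs))
	for _, id := range volumeIDs {
		attached[id] = false
	}
	volumes, err := getAttachedVolumes(instanceID)
	if errors.Is(err, errInstanceNotFound) {
		// Node is gone: Nova has already released the attachments, so every
		// volume is effectively detached. Do not propagate the error.
		return attached, nil
	}
	if err != nil {
		return attached, err
	}
	for id := range volumes {
		attached[id] = true
	}
	return attached, nil
}

func main() {
	result, err := disksAreAttached("4a6b413d-5c9d-465c-a621-067afd338e70",
		[]string{"b21057c7-98dd-47f4-acb0-379748c22561"})
	fmt.Println(result, err) // volume reported detached, no error returned
}
```
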
@jingxu97
Contributor

jingxu97 commented Aug 7, 2017

@FengyunPan Right, it is a bug. I checked a few volume plugins; currently GCE PD and AWS have the correct behavior: if the node no longer exists according to the cloud provider, we can safely mark the volume as detached. But the rest of the volume plugins, such as Cinder, vSphere, and Photon, do not check for this error and need to be fixed as well. Opened an issue to track this: #50266

@kars7e
Author

kars7e commented Aug 8, 2017

@FengyunPan thanks for the patch, I tested it and it worked! Can you post a PR with it? CC @jingxu97

k8s-github-robot pushed a commit that referenced this issue Aug 23, 2017
Automatic merge from submit-queue (batch tested with PRs 38947, 50239, 51115, 51094, 51116)

Mark the volumes as detached when node does not exist

If node does not exist, node's volumes will be detached
automatically and become available. So mark them detached and do not return err.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
#50200

**Release note**:
```release-note
NONE
```
dims pushed a commit to dims/kubernetes that referenced this issue Feb 8, 2018
If node doesn't exist, OpenStack Nova will assume the volumes
are not attached to it. So mark the volumes as detached and
return false without error.
Fix: kubernetes#50200
dims pushed a commit to dims/kubernetes that referenced this issue Feb 8, 2018
Mark the volumes as detached when node does not exist