storage: deactivate special LVs after detach
On block SD, the LVs and the whole VG are deactivated when the domain
is put into maintenance. However, during SD detach the special LVs are
activated again as a result of creating the BlockStorageDomain object,
and they are never properly deactivated afterwards. After SD detach we
are left with stale DM links, which can cause subsequent issues, e.g.
the multipath device cannot be removed.

The proper solution would be to refactor the BlockStorageDomain
constructor so that it does not activate special LVs, and to activate
them only when needed. However, this would require many changes, with
a high risk of introducing more issues than it solves.

Deactivate the special LVs after detach. With this fix, the multipath
device can be removed after removing the storage domain. The multipath
device is present only if an SD refresh happens before the LUN is
removed from the storage server. If no other storage domain resides on
the given storage server, no multipath device for the removed SD will
appear even after an SD refresh, as the host will log out from the
storage server.

The multipath device can be removed with

    multipath -f <WWID>
    echo 1 > /sys/block/<device-name>/device/delete

which is the proper way to remove a multipath block device.
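
For illustration only (not part of this commit), the same two commands
run from Python; the helper name and the list of backing SCSI device
names are assumptions, not vdsm API:

    import subprocess

    def remove_multipath_device(wwid, slave_names):
        # Flush the multipath map, i.e. `multipath -f <WWID>`.
        subprocess.run(["multipath", "-f", wwid], check=True)
        # Delete each backing SCSI device, i.e.
        # `echo 1 > /sys/block/<device-name>/device/delete`.
        for name in slave_names:
            with open("/sys/block/%s/device/delete" % name, "w") as f:
                f.write("1")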

Add a context manager which tears down the storage domain upon exiting
the code block, and use it during SD detach.
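
A minimal usage sketch of the new tearing_down() context manager,
mirroring the sp.py change below; dom stands for a storage domain
object produced from sdCache, and the body is illustrative:

    with dom.tearing_down():
        # teardown() runs when the block exits, even on error, so
        # special LVs activated by the block SD constructor are not
        # left activated after detach.
        dom.detach(spUUID)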

Change-Id: Ic7b390450ce71571fb651f79566546b94d538f51
Bug-Url: https://bugzilla.redhat.com/1928041
Signed-off-by: Vojtech Juranek <vjuranek@redhat.com>
vjuranek committed Mar 3, 2021
1 parent a8f4e3b commit a80f858
Showing 2 changed files with 40 additions and 20 deletions.
11 changes: 11 additions & 0 deletions lib/vdsm/storage/sd.py
@@ -784,6 +784,17 @@ def teardown(self):
         storage domain object.
         """
 
+    @contextmanager
+    def tearing_down(self):
+        """
+        Context manager which ensures that upon exiting context, storage domain
+        is torn down.
+        """
+        try:
+            yield
+        finally:
+            self.teardown()
+
     # Other
 
     @property
49 changes: 29 additions & 20 deletions lib/vdsm/storage/sp.py
@@ -1074,31 +1074,40 @@ def detachSD(self, sdUUID):
 
         dom = sdCache.produce(sdUUID)
 
-        # Avoid detach domains if not owned by pool
-        self._assert_sd_owned_by_pool(dom)
-
-        if sdUUID == self.masterDomain.sdUUID:
-            raise se.CannotDetachMasterStorageDomain(sdUUID)
+        # To detach SD, it has to be put into maintenance, which stops
+        # monitoring thread. Producing SD will call its constructor, which
+        # in case of block SD, will activate special LVs, but these won't be
+        # deactivate by monitoring thread as it was already stopped. Tear the
+        # domain down once detach is finished.
+        # TODO: remove once SD constructor is refactored so it doesn't do any
+        # actions on the storage, like activating LVs.
+
+        with dom.tearing_down():
+            # Avoid detach domains if not owned by pool
+            self._assert_sd_owned_by_pool(dom)
 
-        # TODO: clusterLock protection should be moved to
-        # StorageDomain.[at,de]tach()
-        detachingISO = dom.isISO()
+            if sdUUID == self.masterDomain.sdUUID:
+                raise se.CannotDetachMasterStorageDomain(sdUUID)
 
-        if detachingISO:
-            # An ISO domain can be shared by multiple pools
-            dom.acquireHostId(self.id)
-            dom.acquireClusterLock(self.id)
+            # TODO: clusterLock protection should be moved to
+            # StorageDomain.[at,de]tach()
+            detachingISO = dom.isISO()
 
-        try:
-            # Remove pool info from domain metadata
-            dom.detach(self.spUUID)
-        finally:
             if detachingISO:
-                dom.releaseClusterLock()
-                dom.releaseHostId(self.id)
+                # An ISO domain can be shared by multiple pools
+                dom.acquireHostId(self.id)
+                dom.acquireClusterLock(self.id)
+
+            try:
+                # Remove pool info from domain metadata
+                dom.detach(self.spUUID)
+            finally:
+                if detachingISO:
+                    dom.releaseClusterLock()
+                    dom.releaseHostId(self.id)
 
-        # Remove domain from pool metadata
-        self.forcedDetachSD(sdUUID)
+            # Remove domain from pool metadata
+            self.forcedDetachSD(sdUUID)
 
     def detachAllDomains(self):
         """
