
Perform LV activation/deactivation #88

Closed
wants to merge 1 commit

Conversation

@kvaps commented Apr 28, 2023

This PR introduces:

  • Deactivation of the volume after lvcreate
  • Activation of the volume before usage
  • Deactivation of the volume after usage
  • sync + lvscan before every LVM operation

This is the first step needed to implement shared VG support (#62).
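Roughly, the order around each operation would look like this (a minimal sketch only, not the code in this PR, assuming the driver shells out to the stock LVM tools; the package and function names are illustrative):

```go
// Sketch only, not the code in this PR: the activation/deactivation order
// described above, assuming the driver shells out to the stock LVM tools.
package lvmops

import (
	"fmt"
	"os/exec"
)

// run executes a command and wraps its combined output into the error.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v failed: %w: %s", name, args, err, out)
	}
	return nil
}

// refresh flushes buffers and rescans LVs so this node sees metadata
// changes made by other nodes that share the VG.
func refresh() error {
	if err := run("sync"); err != nil {
		return err
	}
	return run("lvscan")
}

// createVolume creates the LV and deactivates it right away, so no node
// keeps it active until it is actually needed.
func createVolume(vg, lv, size string) error {
	if err := refresh(); err != nil {
		return err
	}
	if err := run("lvcreate", "-n", lv, "-L", size, vg); err != nil {
		return err
	}
	return run("lvchange", "-an", vg+"/"+lv)
}

// activate brings the LV up on a node right before it is used.
func activate(vg, lv string) error {
	if err := refresh(); err != nil {
		return err
	}
	return run("lvchange", "-ay", vg+"/"+lv)
}

// deactivate releases the LV again once the workload is done with it.
func deactivate(vg, lv string) error {
	if err := refresh(); err != nil {
		return err
	}
	return run("lvchange", "-an", vg+"/"+lv)
}
```

Deactivating right after lvcreate means no node keeps the LV active until it is actually staged, which is the precondition for the shared-VG work.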

@kvaps requested a review from a team as a code owner April 28, 2023 14:12
@majst01 (Contributor) commented May 2, 2023

Hi @kvaps

I still do not understand what an lvm-ha setup would look like. Can you explain a bit more about who would be responsible for syncing data between nodes, and how?

@kvaps (Author) commented May 3, 2023

Hey, sorry for the insufficient description on this PR.
I'm solving the issue of using LVM over shared storage (DAS).

E.g. when you have a shared LUN connected to multiple nodes in your cluster:
[image attachment]

LVM is used to cut this volume into smaller pieces.
I like the design of your driver, and I'm going to develop my own based on it.

I'd like to contribute the changes back, but I'm not sure if you'll agree with these changes.

  • First of all, I want a driver that works in both cases: local LVM and shared LVM.
  • The driver should have a configurable volume group in the StorageClass, so it can work with many VGs.
  • Volumes should be created with nodeAffinity to VGs, not to nodes.
  • For each VG, the driver should add a label with the VG's UUID to the node. Thus, if a VG is shared across multiple nodes, the volume can be used on any of them (see the sketch below).
  • The driver should not use the cLVM or lvmlockd extensions; they are pretty complex and difficult to maintain. Instead it must refresh the LVM metadata before each operation on the VG. Locks should be implemented using the Kubernetes API.
  • Snapshot support.
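As a rough sketch of the labeling idea (illustrative only; the label key example.com/vg-uuid and the helper names are made up, assuming client-go):

```go
// Sketch only, not part of this PR: label a node with the UUID of a VG it
// can see, so volumes can carry nodeAffinity to the VG instead of a node.
// The label key "example.com/vg-uuid" and helper names are hypothetical.
package vglabel

import (
	"context"
	"fmt"
	"os/exec"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// vgUUID asks the local LVM tools for the UUID of the given volume group.
func vgUUID(vg string) (string, error) {
	out, err := exec.Command("vgs", "--noheadings", "-o", "vg_uuid", vg).Output()
	if err != nil {
		return "", fmt.Errorf("vgs %s: %w", vg, err)
	}
	return strings.TrimSpace(string(out)), nil
}

// labelNodeWithVG patches the node object with a label carrying the VG UUID.
// Every node that sees the shared VG ends up with the same label value,
// which lets the scheduler place the volume on any of them.
func labelNodeWithVG(ctx context.Context, client kubernetes.Interface, nodeName, vg string) error {
	uuid, err := vgUUID(vg)
	if err != nil {
		return err
	}
	patch := []byte(fmt.Sprintf(
		`{"metadata":{"labels":{"example.com/vg-uuid":%q}}}`, uuid))
	_, err = client.CoreV1().Nodes().Patch(
		ctx, nodeName, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
```

A PersistentVolume's nodeAffinity could then match on that label instead of kubernetes.io/hostname, so any node exposing the same VG qualifies.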

The design of shared LVM without cLVM and lvmlockd is borrowed from solutions such as OpenNebula and Proxmox.
Neither requires the clustered extensions for LVM:

proxmox:

You don't need CLVM as long as you use the pve tools/API to manage volumes (create/delete).

opennebula:

The LVM datastore does not need CLVM configured in your cluster. The drivers refresh LVM metadata each time an image is needed on another host.

Here are some excerpts from the LVM mailing list:

cLVM does not control concurrent access, it just cares about propagating the lvm metadata to all nodes and locking during changes of the metadata.

All clustered LVM does is make sure that, as LVM things change, all nodes know about the changes immediately.

Unfortunately, I don't have enough time to continue with this right now, so it's better to convert this to a draft.
Feel free to leave any comments on my design.

@kvaps marked this pull request as draft May 3, 2023 08:29
@majst01 (Contributor) commented May 3, 2023

This seems like a lot of effort and does not fit our use case well. It is probably better to take this code and create your own CSI driver. But if you have small improvements for this one, they are all welcome.

@Gerrit91 (Contributor) commented

I hope it is fine to close this; please re-open if necessary.

@Gerrit91 closed this Sep 17, 2024