Add proposal for volume spec reconstruction #650
Closed
contributors/design-proposals/volume-spec-reconstruction.md
# Abstract
Today, kubelet dynamically reconstructs the volume spec at runtime using the volume mount path. For many plugins, however, the mount path alone is not enough to reconstruct all of the fields and options. As a result, plugin APIs that depend on those fields/options are broken, especially in the cleanup code. The goal of this proposal is to store volume-specific metadata and use that information to reconstruct the spec.
# High Level Design
Store volume-specific metadata during the MountDevice/Mount/update logic and use it to reconstruct the volume spec. For an attachable plugin, store the metadata during MountDevice and remove it after a successful UnmountDevice. For plugins that use only the Mount and Unmount APIs, store the metadata during Mount and remove it during Unmount. The metadata should also be updated once support for volume spec updates is added. Per the review discussion, this metadata is stored only on worker nodes, not on the controller-manager.
## Meta-data format
### Option 1:
Store the volume object source in JSON format. Example: for the GCE plugin, store the `GCEPersistentDiskVolumeSource`.
#### Pros:
* Simpler to implement.
#### Cons:
* The volume object source is not versioned, so upgrades will be a problem going forward.
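To illustrate Option 1, the stored file for a GCE volume might look like the following. The field names come from the real `GCEPersistentDiskVolumeSource`; the concrete values are made up for the example:

```json
{
  "pdName": "my-data-disk",
  "fsType": "ext4",
  "partition": 0,
  "readOnly": false
}
```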
### Option 2:
Store the persistent volume spec or the volume source in JSON format: the persistent volume spec if the volume is backed by a PersistentVolume object, and the volume source if it is inlined in the pod spec. We can use different filenames, \<volume name\>~pv.json and \<volume name\>~vs.json, to distinguish these objects.
#### Pros:
* The persistent volume spec is versioned.
#### Cons:
* Complicated naming.
* VolumeSource is still not versioned.
* The persistent volume does not carry the namespace information for the SecretRef.
### Option 3:
Implement a per-plugin API to store the metadata relevant to that plugin. We can provide a sensible default using Option 1 or Option 2 for plugins that do not implement the API.
#### Pros:
* Plugins are free to implement/experiment with the information they want to store.
#### Cons:
* Versioning the metadata is offloaded to the plugin.
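Option 3 could take the shape of an optional interface with a built-in fallback. This is only a sketch: `MetaDataPlugin`, its method names, and the example `lvmPlugin` are all hypothetical, not part of any existing Kubernetes volume plugin API.

```go
package main

import "fmt"

// MetaDataPlugin is a hypothetical optional interface a volume plugin could
// implement to control what reconstruction metadata is persisted (Option 3).
type MetaDataPlugin interface {
	// MarshalVolumeMeta returns the bytes kubelet should store for the volume.
	MarshalVolumeMeta(volumeName string) ([]byte, error)
	// UnmarshalVolumeMeta rebuilds whatever the plugin needs from stored bytes.
	UnmarshalVolumeMeta(data []byte) error
}

// metaFor uses the plugin's own marshaling when available, and otherwise falls
// back to a sensible default (an Option 1/2 style payload).
func metaFor(p interface{}, volumeName string) ([]byte, error) {
	if mp, ok := p.(MetaDataPlugin); ok {
		return mp.MarshalVolumeMeta(volumeName)
	}
	return []byte(`{"volumeName":"` + volumeName + `"}`), nil
}

// lvmPlugin is a made-up plugin that opts in to custom metadata.
type lvmPlugin struct{}

func (lvmPlugin) MarshalVolumeMeta(name string) ([]byte, error) {
	return []byte(`{"vg":"vg0","lv":"` + name + `"}`), nil
}
func (lvmPlugin) UnmarshalVolumeMeta([]byte) error { return nil }

func main() {
	b, _ := metaFor(lvmPlugin{}, "vol1") // plugin-specific format
	fmt.Println(string(b))
	b, _ = metaFor(struct{}{}, "vol2") // no custom API: default format
	fmt.Println(string(b))
}
```

The type assertion keeps the interface optional, which is how kubelet already probes plugins for capabilities such as attachability.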
## Meta-data location:
Store the metadata file \<volume name\>.json in the plugin path.
Ex:
For the plugin ```kubernetes.io/lvm```, store it in the following directory:
```/var/lib/kubelet/plugins/kubernetes.io/flexvolume/kubernetes.io/lvm```
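The path construction amounts to joining the kubelet root, the plugins directory, the plugin name, and the metadata filename. The helper below is an illustrative sketch (`metaPath` is not an existing kubelet function), using the default kubelet root from the example above:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// metaPath builds <kubelet root>/plugins/<plugin name>/<volume name>.json,
// matching the proposed metadata location. Illustrative helper only.
func metaPath(kubeletDir, pluginName, volumeName string) string {
	return filepath.Join(kubeletDir, "plugins", pluginName, volumeName+".json")
}

func main() {
	fmt.Println(metaPath(
		"/var/lib/kubelet",
		"kubernetes.io/flexvolume/kubernetes.io/lvm",
		"vol1",
	))
	// → /var/lib/kubelet/plugins/kubernetes.io/flexvolume/kubernetes.io/lvm/vol1.json
}
```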
controller-manager can run on several machines in hot-standby HA, where one controller-manager is the master (and runs controllers) and the others just wait for the master to die so they can quickly resume. If you store the metadata on the filesystem, it won't be available to a new master on another machine. I am afraid this metadata must be stored in the API server.
The information will be stored as part of mountdevice call, which is only executed on worker nodes. Do we need this information and spec reconstruction in controller-manager code path too?
oh, ok, I think we don't need it in the controller-manager, @tsmetana or @jingxu97 can confirm
and would you please add a note that this will be stored only on nodes and not on controller-manager?
Sure will do.
Can we just introduce a standard 'provider_info' key in JSON format and let the plugin determine what it needs to store there on its own? In addition, it would be up to the driver to reconnect if needed, and it would use that provider_info data.