From 22fc44ff7fc5e19451c76a46828b94388347b517 Mon Sep 17 00:00:00 2001
From: leonwanghui
Date: Sat, 10 Jun 2017 18:38:04 +0800
Subject: [PATCH 1/5] Host-side volume discovery and management library
 proposal

---
 ...volume-discovery-and-management-library.md | 160 ++++++++++++++++++
 1 file changed, 160 insertions(+)
 create mode 100644 contributors/design-proposals/hostside-volume-discovery-and-management-library.md

diff --git a/contributors/design-proposals/hostside-volume-discovery-and-management-library.md b/contributors/design-proposals/hostside-volume-discovery-and-management-library.md
new file mode 100644
index 00000000000..3dc8e36fc4b
--- /dev/null
+++ b/contributors/design-proposals/hostside-volume-discovery-and-management-library.md
@@ -0,0 +1,160 @@
+# Host-side Volume Discovery and Management Library
+
+## Motivation
+
+As we all know, Kubernetes can be deployed anywhere, including various cloud
+platforms and bare metals. But as for storage resources, you have to choose
+a specific storage backend according to the environment where Kubernetes is deployed.
+There are some examples:
+1. if you deploy Kubernetes on AWS cloud platform, then you can only use
+EBS to provide storage resources for Kubernetes.
+2. if your Kubernetes cluster is deployed on OpenStack, then you can only
+use Cinder to provide storage resources.
+3. if you want to deploy Kubernetes on bare metals, right now you can only
+choose SCSI or RBD device (maybe one or two more).
+
+Maybe you are wondering what the problem is. Honestly it's ok if you show
+it to customers, but something interesting is that if you change the format
+of these sentences like below:
+1. if you want to use EBS to provide storage resource for Kubernetes, you
+have to deploy Kubernetes on AWS cloud platform.
+2. if you want to use Cinder, then you have to make sure Kubernetes is
+deployed on OpenStack.
+3. if you want to use SCSI, RBD or FC device, then you can only deploy
+Kubernetes on bare metals. (except the storage bakend has supported to
+manage these devices)
+
+Now if you show it to customers, then customers would get confused and they
+might ask why can not you provide Cinder storage for Kubernetes deployed on
+bare metals. And we also get some feedbacks from users, and they suggested
+we enrich Kubernetes storage drivers since they were using some storage
+systems Kubernetes doesn't support.
+
+All in all, our motivation is to remove the coupling between storage systems
+and the Kubernetes deployment environment, and to support any storage backend
+regardless of deployment environment (cloud platforms and bare metals).
+
+## Goal
+
+To slove this problem, we plan to create a standalone project in Kubernetes
+that acts as a library providing volume discovery and local management. Any
+in-tree volume plugin that wants to provide storage resources for
+bare metals can call this library to finish host-side volume discovery
+and then mount the device path into the container. The reason we set it up
+as a standalone project is that this way it can also serve CSI plugins
+and even other storage systems written in Go.
+
+## Proposed Design
+
+As we know, there are a lot of storage protocols, such as iscsi, rbd, fc,
+smbfs and so forth, and some of them are implemented in different ways
+according to different system types (x86, s390, ppc64) and OS types
+(linux, windows), so it is a quite complicated work if we add these device
+drivers directly into volume plugins. But what we can do is to create a
+library to communicate with the kernel and expose a unified interface to
+volume plugins.
+
+### API Object
+
+The `Host-side Volume Discovery and Management Library` API object will
+have the following structure:
+
+```go
+const (
+    // Platform type
+    PLATFORM_ALL   = "ALL"
+    PLATFORM_X86   = "X86"
+    PLATFORM_S390  = "S390"
+    PLATFORM_PPC64 = "PPC64"
+
+    // Operating system type
+    OS_TYPE_ALL     = "ALL"
+    OS_TYPE_LINUX   = "LINUX"
+    OS_TYPE_WINDOWS = "WIN"
+
+    // Device driver type
+    ISCSI               = "ISCSI"
+    ISER                = "ISER"
+    FIBRE_CHANNEL       = "FIBRE_CHANNEL"
+    AOE                 = "AOE"
+    DRBD                = "DRBD"
+    NFS                 = "NFS"
+    GLUSTERFS           = "GLUSTERFS"
+    LOCAL               = "LOCAL"
+    GPFS                = "GPFS"
+    HUAWEISDSHYPERVISOR = "HUAWEISDSHYPERVISOR"
+    HGST                = "HGST"
+    RBD                 = "RBD"
+    SCALEIO             = "SCALEIO"
+    SCALITY             = "SCALITY"
+    QUOBYTE             = "QUOBYTE"
+    DISCO               = "DISCO"
+    VZSTORAGE           = "VZSTORAGE"
+
+    // A unified device path prefix
+    VOLUME_LINK_DIR = "/dev/disk/by-id/"
+)
+
+// Connector is an interface indicating what the outside world can do with this
+// library; note that it is at a very early stage right now.
+type Connector interface {
+    GetConnectorProperties(multiPath string, doLocalAttach bool) (*ConnectorProperties, error)
+
+    ConnectVolume(conn *ConnectionInfo) (string, error)
+
+    DisconnectVolume(conn *ConnectionInfo) (string, error)
+
+    GetDevicePath(volumeId string) (string, error)
+}
+
+// ConnectorProperties is a struct used to tell the storage backend how to
+// initialize the connection of a volume. Please note that it is OPTIONAL.
+type ConnectorProperties struct {
+    DoLocalAttach bool   `json:"doLocalAttach"`
+    Platform      string `json:"platform"`
+    OsType        string `json:"osType"`
+    Ip            string `json:"ip"`
+    Host          string `json:"host"`
+    MultiPath     bool   `json:"multipath"`
+    Initiator     string `json:"initiator"`
+}
+
+// ConnectionInfo is a structure for all properties of
+// a connection when connecting a volume
+type ConnectionInfo struct {
+    // the type of driver type, such as iscsi, rbd and so on
+    DriverVolumeType string `json:"driverVolumeType"`
+
+    // Required parameters to connect volume and differ from DriverVolumeType.
+    // For example, for the iscsi driver, see struct IscsiConnectionData below.
+    // NOTICE that you have to convert it into a map.
+    ConnectionData map[string]interface{} `json:"data"`
+}
+
+type IscsiConnectionData struct {
+    // boolean indicating whether discovery was used
+    TargetDiscovered bool `json:"targetDiscovered"`
+
+    // the IQN of the iSCSI target
+    TargetIqn string `json:"targetIqn"`
+
+    // the portal of the iSCSI target
+    TargetPortal string `json:"targetPortal"`
+
+    // the lun of the iSCSI target
+    TargetLun string `json:"targetLun"`
+
+    // the uuid of the volume
+    VolumeId string `json:"volumeId"`
+
+    // the authentication details
+    AuthUsername string `json:"authUsername"`
+    AuthPassword string `json:"authPassword"`
+}
+```
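+
+### Example Usage
+
+To make the intended workflow concrete, here is a minimal sketch of how an
+in-tree volume plugin might call this library to attach an iSCSI volume.
+The import path and the `NewConnector` factory used below are hypothetical
+placeholders, since only the `Connector` interface above has been defined
+so far:
+
+```go
+package main
+
+import (
+    "fmt"
+    "log"
+
+    // Hypothetical import path; the standalone project does not exist yet.
+    brick "k8s.io/hostside-volume-lib"
+)
+
+func main() {
+    // The storage backend (e.g. Cinder) returns the connection details;
+    // they are hard-coded here for illustration. The iSCSI-specific fields
+    // (see IscsiConnectionData above) have to be converted into a generic map.
+    conn := &brick.ConnectionInfo{
+        DriverVolumeType: brick.ISCSI,
+        ConnectionData: map[string]interface{}{
+            "targetDiscovered": true,
+            "targetIqn":        "iqn.2017-06.io.k8s:example-volume",
+            "targetPortal":     "192.168.0.10:3260",
+            "targetLun":        "1",
+            "volumeId":         "volume-uuid-example",
+        },
+    }
+
+    // NewConnector is a hypothetical factory that would pick the device
+    // driver matching the given driver type.
+    connector, err := brick.NewConnector(brick.ISCSI)
+    if err != nil {
+        log.Fatal(err)
+    }
+
+    // ConnectVolume performs the host-side discovery (e.g. the iscsiadm
+    // login) and returns a local device path under /dev/disk/by-id/,
+    // which the volume plugin can then mount into the container. During
+    // teardown the plugin would call DisconnectVolume with the same
+    // ConnectionInfo.
+    devicePath, err := connector.ConnectVolume(conn)
+    if err != nil {
+        log.Fatal(err)
+    }
+    fmt.Println("volume attached at", devicePath)
+}
+```
+
+`GetConnectorProperties` would be called earlier, when the connection is
+initialized on the storage backend, so that the backend knows how to export
+the volume to this host.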
+
+## References
+
+- For more information, please refer to https://github.com/openstack/os-brick,
+which is a similar library implemented in Python.
\ No newline at end of file
From 589bd608046fa05dd3a167c6dbdc0e64cecd3b74 Mon Sep 17 00:00:00 2001
From: leonwanghui
Date: Sat, 10 Jun 2017 18:46:12 +0800
Subject: [PATCH 2/5] Update the proposal

---
 .../hostside-volume-discovery-and-management-library.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/contributors/design-proposals/hostside-volume-discovery-and-management-library.md b/contributors/design-proposals/hostside-volume-discovery-and-management-library.md
index 3dc8e36fc4b..3c3900f7381 100644
--- a/contributors/design-proposals/hostside-volume-discovery-and-management-library.md
+++ b/contributors/design-proposals/hostside-volume-discovery-and-management-library.md
@@ -122,10 +122,10 @@ type ConnectorProperties struct {
 // ConnectionInfo is a structure for all properties of
 // a connection when connecting a volume
 type ConnectionInfo struct {
-    // the type of driver type, such as iscsi, rbd and so on
+    // the type of driver volume, such as iscsi, rbd and so on
     DriverVolumeType string `json:"driverVolumeType"`
 
-    // Required parameters to connect volume and differ from DriverVolumeType.
+    // Required parameters to connect the volume; they differ per DriverVolumeType.
     // For example, for the iscsi driver, see struct IscsiConnectionData below.
     // NOTICE that you have to convert it into a map.
     ConnectionData map[string]interface{} `json:"data"`

From 03033690caa7718f0a9feb89e22a95ec4faad85c Mon Sep 17 00:00:00 2001
From: leonwanghui
Date: Mon, 12 Jun 2017 08:51:05 +0800
Subject: [PATCH 3/5] Update the proposal

---
 .../hostside-volume-discovery-and-management-library.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/contributors/design-proposals/hostside-volume-discovery-and-management-library.md b/contributors/design-proposals/hostside-volume-discovery-and-management-library.md
index 3c3900f7381..d89a2caaeef 100644
--- a/contributors/design-proposals/hostside-volume-discovery-and-management-library.md
+++ b/contributors/design-proposals/hostside-volume-discovery-and-management-library.md
@@ -49,7 +49,7 @@
 As we know, there are a lot of storage protocols, such as iscsi, rbd, fc,
 smbfs and so forth, and some of them are implemented in different ways
 according to different system types (x86, s390, ppc64) and OS types
-(linux, windows), so it is a quite complicated work if we add these device
+(linux, windows), so it would be quite complicated if we added these device
 drivers directly into volume plugins. But what we can do is to create a
 library to communicate with the kernel and expose a unified interface to
 volume plugins.
@@ -98,7 +98,7 @@ const (
 // Connector is an interface indicating what the outside world can do with this
 // library; note that it is at a very early stage right now.
 type Connector interface {
-    GetConnectorProperties(multiPath string, doLocalAttach bool) (*ConnectorProperties, error)
+    GetConnectorProperties(multiPath, doLocalAttach bool) (*ConnectorProperties, error)
 
     ConnectVolume(conn *ConnectionInfo) (string, error)
 
From 6ee5a5d27104874a611ee3641feed2a4661f777c Mon Sep 17 00:00:00 2001
From: leonwanghui
Date: Thu, 6 Jul 2017 22:42:59 +0800
Subject: [PATCH 4/5] Update the proposal

---
 ...volume-discovery-and-management-library.md | 36 +++++++++----------
 1 file changed, 16 insertions(+), 20 deletions(-)

diff --git a/contributors/design-proposals/hostside-volume-discovery-and-management-library.md b/contributors/design-proposals/hostside-volume-discovery-and-management-library.md
index d89a2caaeef..6f82fceb723 100644
--- a/contributors/design-proposals/hostside-volume-discovery-and-management-library.md
+++ b/contributors/design-proposals/hostside-volume-discovery-and-management-library.md
@@ -5,30 +5,26 @@
 As we all know, Kubernetes can be deployed anywhere, including various cloud
 platforms and bare metals. But as for storage resources, you have to choose
 a specific storage backend according to the environment where Kubernetes is deployed.
-There are some examples:
-1. if you deploy Kubernetes on AWS cloud platform, then you can only use
-EBS to provide storage resources for Kubernetes.
-2. if your Kubernetes cluster is deployed on OpenStack, then you can only
-use Cinder to provide storage resources.
-3. if you want to deploy Kubernetes on bare metals, right now you can only
-choose SCSI or RBD device (maybe one or two more).
-
-Maybe you are wondering what the problem is. Honestly it's ok if you show
-it to customers, but something interesting is that if you change the format
-of these sentences like below:
+Here are some examples:
 1. if you want to use EBS to provide storage resource for Kubernetes, you
 have to deploy Kubernetes on AWS cloud platform.
 2. if you want to use Cinder, then you have to make sure Kubernetes is
 deployed on OpenStack.
-3. if you want to use SCSI, RBD or FC device, then you can only deploy
-Kubernetes on bare metals. (except the storage bakend has supported to
-manage these devices)
+3. if you want to deploy Kubernetes on bare metals, right now you can only
+choose SCSI or RBD device (maybe one or two more).
+
+If you show it to users, they would get confused and ask why can not you
+provide Cinder storage for Kubernetes deployed on bare metals. Since some
+storage systems like Cinder and ScaleIO can provide volume resources for bare metals
+directly, how can users use these resources if they deploy their cluster on
+these bare metals?
 
-Now if you show it to customers, then customers would get confused and they
-might ask why can not you provide Cinder storage for Kubernetes deployed on
-bare metals. And we also get some feedbacks from users, and they suggested
-we enrich Kubernetes storage drivers since they were using some storage
-systems Kubernetes doesn't support.
+Another use case is that we have received suggestions from users to enrich
+Kubernetes storage drivers, since they were using some storage systems Kubernetes
+doesn't support. Right now we have an option, which is flexvolume. But it would
+be hard work for them to develop every storage driver because the operation of
+attaching a volume to a host is quite complicated. Can we hide the complex
+implementation and provide a simple interface for them to build on?
 
 All in all, our motivation is to remove the coupling between storage systems
 and the Kubernetes deployment environment, and to support any storage backend
@@ -36,7 +32,7 @@ regardless of deployment environment (cloud platforms and bare metals).
 
 ## Goal
 
-To slove this problem, we plan to create a standalone project in Kubernetes
+To slove the problem, we plan to create a standalone project in Kubernetes
 that acts as a library providing volume discovery and local management. Any
 in-tree volume plugin that wants to provide storage resources for
 bare metals can call this library to finish host-side volume discovery
From 826869bbe1751c791ed2623a1ec43faf1ed168fc Mon Sep 17 00:00:00 2001
From: leonwanghui
Date: Tue, 11 Jul 2017 14:19:31 +0800
Subject: [PATCH 5/5] Update the proposal and fix some word spelling issues

---
 .../hostside-volume-discovery-and-management-library.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/contributors/design-proposals/hostside-volume-discovery-and-management-library.md b/contributors/design-proposals/hostside-volume-discovery-and-management-library.md
index 6f82fceb723..c05e6c821cd 100644
--- a/contributors/design-proposals/hostside-volume-discovery-and-management-library.md
+++ b/contributors/design-proposals/hostside-volume-discovery-and-management-library.md
@@ -13,7 +13,7 @@ deployed on OpenStack.
 3. if you want to deploy Kubernetes on bare metals, right now you can only
 choose SCSI or RBD device (maybe one or two more).
 
-If you show it to users, they would get confused and ask why can not you
+If you show it to users, they would get confused and ask why you cannot
 provide Cinder storage for Kubernetes deployed on bare metals. Since some
 storage systems like Cinder and ScaleIO can provide volume resources for bare metals
 directly, how can users use these resources if they deploy their cluster on
 these bare metals?
@@ -32,7 +32,7 @@ regardless of deployment environment (cloud platforms and bare metals).
 
 ## Goal
 
-To slove the problem, we plan to create a standalone project in Kubernetes
+To solve the problem, we plan to create a standalone project in Kubernetes
 that acts as a library providing volume discovery and local management. Any
 in-tree volume plugin that wants to provide storage resources for
 bare metals can call this library to finish host-side volume discovery