Kubernetes and libStorage #133

Closed · clintkitson opened this issue May 9, 2016 · 20 comments

clintkitson commented May 9, 2016

Targeting 05/20/2016 for a review of the libStorage driver for K8s.

Configured against REX-Ray (RR) as a libStorage server:

  • Create/Remove Volumes
  • Claim/Unclaim Volumes
  • Mount/Unmount Volumes

@vladimirvivien

@clintonskitson I think you meant 5/20/2016 on the target date. Please confirm.

@vladimirvivien

Source of inspiration - clintkitson/kubernetes@7ecd87e

@clintkitson

Confirmed @vladimirvivien, thanks

@clintkitson

The client should be instantiated from here https://github.com/emccode/libstorage/blob/master/libstorage.go#L81-L89

@clintkitson

This config would be fed in as a config object. To get this working you need a libStorage server instance running (REX-Ray 0.4, not running in embedded mode). The libStorage client points at this central server.

rexray:
  blank:
libstorage:
  host: tcp://127.0.0.1:7981
  service: scaleio
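
For illustration, a minimal Go sketch of how the libstorage section of a config like this could be decoded into a struct before being handed to the client referenced above; the struct and field names are assumptions made for this sketch, since the actual client is configured through its own config object.

// Sketch only: decode the libstorage section of the config above into a Go
// struct. The type and field names are assumptions for this sketch; the
// actual client is configured through its own config object.
package main

import (
	"fmt"
	"log"

	yaml "gopkg.in/yaml.v2"
)

type config struct {
	LibStorage struct {
		Host    string `yaml:"host"`    // e.g. tcp://127.0.0.1:7981
		Service string `yaml:"service"` // e.g. scaleio
	} `yaml:"libstorage"`
}

func main() {
	raw := []byte("libstorage:\n  host: tcp://127.0.0.1:7981\n  service: scaleio\n")
	var cfg config
	if err := yaml.Unmarshal(raw, &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println(cfg.LibStorage.Host, cfg.LibStorage.Service)
}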

@vladimirvivien

@clintonskitson : Update

Clarification

  • The code in pkg/cloudprovider/rexray/rexray.go (your branch) seems to be rexray-specific code to bootstrap the drivers, etc. Since libstorage does that (bootstrap itself, from what I can see), do I still need similar code?

TO DO

  • Add changes to Dockerfile for libstorage changes
  • Add changes to k8s Makefile to build container for libstorage
  • Changes to yaml files to support libstorage

Possible Issues (later when submitting PR)
The following may become issues when submitting later to the Kubernetes project.

  • Had to update Sirupsen/logrus to latest to support akutz/gofig stuff
  • There may be an issue with pre-commit checks because of the version diff against master
  • Also need to keep an eye on go-bindata; I think Andrew is using a personal fork.

@clintkitson

@vladimirvivien Thanks for the update. You are right about the REX-Ray-specific config. That code had to do with configuring the modules that were defined, since that functionality was not bubbled up through the top level of RR.

The equivalent for you would be the libstorage config laid out here with services. Passing this into libStorage would not require anything special.

  libstorage:
    host: tcp://127.0.0.1:7981
    embedded: false
    server:
      endpoints:
        localhost:
          address: tcp://:7981
      services:
        swarm_scaleio:
          libstorage:
            storage:
              driver: scaleio
        swarm_isilon:
          libstorage:
            storage:
              driver: isilon
        swarm_virtualbox:
          libstorage:
            storage:
              driver: virtualbox

clintkitson modified the milestone: 16Q2 Google on Jun 8, 2016
clintkitson changed the title from "Prototype of Kubernetes and libStorage" to "Kubernetes and libStorage" on Jun 8, 2016
@vladimirvivien

@clintonskitson Update status:

> cluster/kubectl.sh describe pv
Name:       kube-test
Labels:     <none>
Status:     Available
Claim:
Reclaim Policy: Retain
Access Modes:   RWO
Capacity:   3Gi
Message:
Source:
    Type:   LibStorage (a persistent disk resource in libStorage)
    VolumeName: kube-volume
    Service:    kubernetes

PVC is not working yet (see following). No obvious cause showing up in log files.

> cluster/kubectl.sh describe pvc
Name:       kube-volume
Namespace:  default
Status:     Pending
Volume:
Labels:     <none>
Capacity:
Access Modes:
Events:
  FirstSeen LastSeen    Count   From                SubobjectPath   Type        Reason          Message
  --------- --------    -----   ----                -------------   --------    ------          -------
  6m        2s      26  {persistentvolume-controller }          Warning     ProvisioningFailed  No provisioner plugin found for the claim!

@vladimirvivien

@clintonskitson Update

  • Switched to a Linux build environment due to the lengthy build time on OS X; builds now take seconds since I don't have to rebuild all these Docker images.
  • Deploying a PersistentVolumeClaim is now able to find a provisioner plugin (when CLOUD_PROVIDER=libstorage)
  • However, still investigating why the PVC shows "no volume found" in the log; the PVC is stuck in Pending:
I0615 01:17:58.900254   29405 controller_base.go:521] storeObjectUpdate: adding claim "default/myclaim", version 23
I0615 01:17:58.900295   29405 controller.go:158] synchronizing PersistentVolumeClaim[default/myclaim]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0615 01:17:58.900314   29405 controller.go:181] synchronizing unbound PersistentVolumeClaim[default/myclaim]: no volume found
I0615 01:17:58.900318   29405 controller.go:1069] provisionClaim[default/myclaim]: started
I0615 01:17:58.900344   29405 controller.go:1221] scheduleOperation[provision-default/myclaim[863e6f2a-32b8-11e6-8ba4-0800275f7ed0]]
I0615 01:17:58.900363   29405 controller.go:1247] scheduleOperation[provision-default/myclaim[863e6f2a-32b8-11e6-8ba4-0800275f7ed0]]: running the operation
I0615 01:17:58.900389   29405 controller.go:1083] provisionClaimOperation [default/myclaim] started

@vladimirvivien

@clintonskitson Update
Good News: Kubernetes is provisioning the volume and RexRay is creating the requested volume.

Not So Good: The issue, however, is that the k8s API is generating a value of the PersistentVolume type that is different from what is being returned by the provisioner.Provision() method. This is causing a validation failure.

Volume Created

# rexray volume get | grep kubernetes
  name: kubernetes-dynamic-pvc-9830f2ef-3352-11e6-a89d-0800275f7ed0
  status: /Users/vladimir/VirtualBox Volumes/kubernetes-dynamic-pvc-9830f2ef-3352-11e6-a89d-0800275f7ed0
  name: kubernetes-dynamic-pvc-ed8544bf-334c-11e6-834b-0800275f7ed0
  status: /Users/vladimir/VirtualBox Volumes/kubernetes-dynamic-pvc-ed8544bf-334c-11e6-834b-0800275f7ed0
  name: kubernetes-dynamic-pvc-538a34f1-3354-11e6-abed-0800275f7ed0
  status: /Users/vladimir/VirtualBox Volumes/kubernetes-dynamic-pvc-538a34f1-3354-11e6-abed-0800275f7ed0
  name: kubernetes-dynamic-pvc-fffdec0b-3352-11e6-b56b-0800275f7ed0 ...

Current Issue Investigated

The following is the PersistentVolume created by the Provision() method in my code:

&api.PersistentVolume{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"pvc-66880435-335a-11e6-8062-0800275f7ed0", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{}, Annotations:map[string]string{"kubernetes.io/createdby":"libstorage-dynamic-provisioner"}, OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil)}, Spec:api.PersistentVolumeSpec{Capacity:api.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, PersistentVolumeSource:api.PersistentVolumeSource{GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)(nil), HostPath:(*api.HostPathVolumeSource)(nil), Glusterfs:(*api.GlusterfsVolumeSource)(nil), NFS:(*api.NFSVolumeSource)(nil), RBD:(*api.RBDVolumeSource)(nil), ISCSI:(*api.ISCSIVolumeSource)(nil), FlexVolume:(*api.FlexVolumeSource)(nil), Cinder:(*api.CinderVolumeSource)(nil), CephFS:(*api.CephFSVolumeSource)(nil), FC:(*api.FCVolumeSource)(nil), Flocker:(*api.FlockerVolumeSource)(nil), AzureFile:(*api.AzureFileVolumeSource)(nil), VsphereVolume:(*api.VsphereVirtualDiskVolumeSource)(nil), LibStorage:(*api.LibStorageVolumeSource)(0xc820f93290)}, AccessModes:[]api.PersistentVolumeAccessMode{"ReadWriteOnce"}, ClaimRef:(*api.ObjectReference)(nil), PersistentVolumeReclaimPolicy:"Delete"}, Status:api.PersistentVolumeStatus{Phase:"", Message:"", Reason:""}}

However, during validation, this is the PersistentVolume value that is presented to the API:

&api.PersistentVolume{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"pvc-66880435-335a-11e6-8062-0800275f7ed0", GenerateName:"", Namespace:"", SelfLink:"", UID:"8117af6c-335a-11e6-8062-0800275f7ed0", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:63601634248, nsec:726512341, loc:(*time.Location)(0x5552b00)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"kubernetes.io/createdby":"libstorage-dynamic-provisioner", "pv.kubernetes.io/bound-by-controller":"yes", "pv.kubernetes.io/provisioned-by":"kubernetes.io/libstorage"}, OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil)}, Spec:api.PersistentVolumeSpec{Capacity:api.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, PersistentVolumeSource:api.PersistentVolumeSource{GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)(nil), HostPath:(*api.HostPathVolumeSource)(nil), Glusterfs:(*api.GlusterfsVolumeSource)(nil), NFS:(*api.NFSVolumeSource)(nil), RBD:(*api.RBDVolumeSource)(nil), ISCSI:(*api.ISCSIVolumeSource)(nil), FlexVolume:(*api.FlexVolumeSource)(nil), Cinder:(*api.CinderVolumeSource)(nil), CephFS:(*api.CephFSVolumeSource)(nil), FC:(*api.FCVolumeSource)(nil), Flocker:(*api.FlockerVolumeSource)(nil), AzureFile:(*api.AzureFileVolumeSource)(nil), VsphereVolume:(*api.VsphereVirtualDiskVolumeSource)(nil), LibStorage:(*api.LibStorageVolumeSource)(nil)}, AccessModes:[]api.PersistentVolumeAccessMode{"ReadWriteOnce"}, ClaimRef:(*api.ObjectReference)(0xc821153b20), PersistentVolumeReclaimPolicy:"Delete"}, Status:api.PersistentVolumeStatus{Phase:"", Message:"", Reason:""}}

The problem is that the version the API is validating has LibStorage = nil, which is causing the validation failure. Continuing to look into this for a quick resolution to keep things moving.
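
To make the failure concrete, here is a self-contained Go sketch using simplified stand-in types, not the actual Kubernetes API or validation code: when the LibStorage pointer is dropped between Provision() and validation, the spec carries no volume source at all, so the one-of-volume-source check fails.

// Simplified stand-in types, not the actual Kubernetes API or validation code.
package main

import "fmt"

type LibStorageVolumeSource struct {
	VolumeName string
	Service    string
}

// Only the LibStorage source is modeled here; the real struct has one field
// per supported volume type (GCEPersistentDisk, NFS, ...).
type PersistentVolumeSource struct {
	LibStorage *LibStorageVolumeSource
}

// validateSource mimics the rule that a PV must carry a volume source:
// at least one (here, the only) source field must be non-nil.
func validateSource(src PersistentVolumeSource) error {
	if src.LibStorage == nil {
		return fmt.Errorf("must specify a volume source")
	}
	return nil
}

func main() {
	// What Provision() returns: LibStorage is a non-nil pointer.
	fromProvision := PersistentVolumeSource{
		LibStorage: &LibStorageVolumeSource{VolumeName: "kubernetes-dynamic-pvc-...", Service: "kubernetes"},
	}
	// What the API ends up validating: the LibStorage field has been lost.
	seenByAPI := PersistentVolumeSource{}

	fmt.Println("PV from Provision():", validateSource(fromProvision)) // <nil>
	fmt.Println("PV seen by the API: ", validateSource(seenByAPI))     // error
}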

@vladimirvivien

@clintonskitson Update
I was able to deploy a PersistentVolumeClaim onto a local setup:

PersistentVolume Claim

Claim

kind: Pod
apiVersion: v1
metadata:
  name: testpod
spec:
  containers:
    - name: mydb
      image: kubernetes/pause
      volumeMounts:
      - mountPath: /test
        name: test-data
  volumes:
    - name: test-data
      persistentVolumeClaim:
        claimName: myclaim

Result
After deployment, the claim is created and bound to a volume as shown:

cluster/kubectl.sh describe pvc
Name:       myclaim
Namespace:  default
Status:     Bound
Volume:     pvc-1012405e-348c-11e6-98f8-0800275f7ed0
Labels:     <none>
Capacity:   0
Access Modes:   
No events.

The volume information:

cluster/kubectl.sh describe pv pvc-1012405e-348c-11e6-98f8-0800275f7ed0
Name:       pvc-1012405e-348c-11e6-98f8-0800275f7ed0
Labels:     <none>
Status:     Bound
Claim:      default/myclaim
Reclaim Policy: Delete
Access Modes:   RWO
Capacity:   1Gi
Message:    
Source:
    Type:   LibStorage (a persistent disk resource in libStorage)
    VolumeName: kubernetes-dynamic-pvc-1012405e-348c-11e6-98f8-0800275f7ed0
    VolumeID:   e3c4f558-3664-45c9-bfc7-ce4b5abe867e
    Service:    

RexRay Verification

rexray volume get --volumeid=e3c4f558-3664-45c9-bfc7-ce4b5abe867e
attachments: []
availabilityzone: ""
iops: 0
name: kubernetes-dynamic-pvc-1012405e-348c-11e6-98f8-0800275f7ed0
networkname: ""
size: 1
status: /Users/vladimir/VirtualBox Volumes/kubernetes-dynamic-pvc-1012405e-348c-11e6-98f8-0800275f7ed0
id: e3c4f558-3664-45c9-bfc7-ce4b5abe867e
type: ""
fields: {}

Issue (Big One)

I got stuck for a day unable to figure out why pods were not deploying. It turns out when CLOUD_PROVIDER=libstorage is set, the cluster does not create a node (needed for scheduling) by default. This is because Kubernetes expects the specified cloud provider to handle node orchestration. This means there is no pod deployment until a node is created.

Being Investigated (Workarounds)

  • Manually create node (may not work, but worth investigation)
  • Implement all Cloud API interfaces (very involved and may not make sense)
  • Investigate whether PVCs can be created without a cloud provider specified (ultimate goal)

@clintkitson

Makes sense. It is probably best to simulate real production environments here. If I am interpreting this correctly, yes, a node needs to be there. This is where the mount operation is going to occur.

@vladimirvivien

@clintonskitson I am investigating a code change to create dynamic PVCs with or without a cloud provider specified. Stay tuned; I will send an update later today.

@vladimirvivien

@clintonskitson To actually answer your question: yes, the dynamic provisioning of volumes (creating the volume automatically when the PVC is deployed) only works when a cloud provider is specified. But, as I stated, I am investigating some code changes that may make it work with or without a cloud provider. Stay tuned!

@vladimirvivien

Update
I made some code changes to the k8s codebase that resolved the issue mentioned in the earlier comment. I added a new kube-controller-manager flag that enables (or disables) libStorage as the persistent volume provisioner regardless of the cloud or Kubernetes provider selected.

Let's do a walkthrough of the Kubernetes + libStorage integration.

Prerequisites

  • Kubernetes 1.3 or above
  • RexRay 0.4.0 or above

Case 1: PersistentVolumes Bound to Pre-Defined Volume

Define RexRay Volume

#> rexray volume create --volumename="vol-0001" --size=1
attachments: []
availabilityzone: ""
iops: 0
name: vol-0001
networkname: ""
size: 1
status: ""
id: af76dab6-ba1c-4788-bdb8-9f63e0cd62db
type: HardDisk
fields: {}

Persistent Volume

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-0001
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  libstorage:
    volumeName: vol-0001
    service: kubernetes

Persistent Volume Claim

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-0001
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Result of Deployment

get pv
NAME      CAPACITY   ACCESSMODES   STATUS    CLAIM              REASON    AGE
pv-0001   1Gi        RWO           Bound     default/pvc-0001             14m

Deploy Pod with Claim

kind: Pod
apiVersion: v1
metadata:
  name: pod0002
spec:
  containers:
    - name: pod0002-container
      image: gcr.io/google_containers/test-webserver
      volumeMounts:
      - mountPath: /test
        name: test-data
  volumes:
    - name: test-data
      persistentVolumeClaim:
        claimName: pvc-0001

Result
cluster/kubectl.sh describe pod pod0002 shows pod info including volume information (see below).

. . .
Conditions:
  Type      Status
  Initialized   True 
  Ready     True 
  PodScheduled  True 
Volumes:
  test-data:
    Type:   PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-0001
    ReadOnly:   false
. . .

Case 2: Dynamic Persistent Volume Provisioner

In this scenario, the volume is defined in one Kubernetes claim file. The file uses the experimental annotation volume.alpha.kubernetes.io/storage-class to indicate it wants the volume to be dynamically provisioned. The libStorage volume provisioner will then create the volume using RexRay automatically.

Activating LibStorage Provisioning

New Kube-Controller-Manager Flag Added
The code now supports a new flag, --enable-libstorage-provisioner, for the kube-controller-manager binary. It activates the libStorage persistent volume provisioner to handle the automatic provisioning of the volume defined in a PersistentVolumeClaim YAML.
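
For context, a hypothetical Go sketch of the flag wiring, not the actual kube-controller-manager code: a boolean flag gates whether the libStorage provisioner is registered at all, independent of the configured cloud provider. The plugin name matches the pv.kubernetes.io/provisioned-by value seen earlier; the helper names are placeholders.

// Hypothetical wiring, not the actual kube-controller-manager code.
package main

import (
	"flag"
	"fmt"
)

// volumePlugin is a stand-in for the controller manager's provisioner plugin type.
type volumePlugin struct{ name string }

// probeLibStoragePlugin is a placeholder for whatever constructs the plugin.
func probeLibStoragePlugin() volumePlugin {
	return volumePlugin{name: "kubernetes.io/libstorage"}
}

func main() {
	enable := flag.Bool("enable-libstorage-provisioner", false,
		"Enable libStorage as the persistent volume provisioner")
	flag.Parse()

	var plugins []volumePlugin
	if *enable {
		// Only register the libStorage provisioner when the flag is set,
		// independent of which cloud provider (if any) is configured.
		plugins = append(plugins, probeLibStoragePlugin())
	}
	fmt.Printf("registered %d provisioner plugin(s)\n", len(plugins))
}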

Launching Local Cluster with Provisioner
The following activates the libStorage provisioning flag when using the hack/local-up-cluster.sh script that comes with Kubernetes.

KUBERNETES_PROVIDER=local ENABLE_LIBSTORAGE_PROVISIONER=true LOG_LEVEL=99 hack/local-up-cluster.sh

The following snippet shows how the kube-controller-manager is launched in the bash script with the new flag:

sudo -E "${GO_OUT}/hyperkube" controller-manager \
      --v=${LOG_LEVEL} \
...
      --enable-libstorage-provisioner="${ENABLE_LIBSTORAGE_PROVISIONER}" \
...

Deploy a Persistent Volume Claim

Define PersistentVolumeClaim

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-0002
  annotations:
    volume.experimental.kubernetes.io/provisioning-required: "true"
    volume.alpha.kubernetes.io/storage-class: kubernetes
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Description of PVC Created

#> cluster/kubectl.sh describe pvc pvc-0002
Name:       pvc-0002
Namespace:  default
Status:     Bound
Volume:     pvc-850a3832-3548-11e6-ba16-0800275f7ed0
Labels:     <none>
Capacity:   0
Access Modes:   
No events.

PersistentVolume Description
Notice there is a volume ID reported (awesome!)

#> cluster/kubectl.sh describe pv pvc-850a3832-3548-11e6-ba16-0800275f7ed0
Name:       pvc-850a3832-3548-11e6-ba16-0800275f7ed0
Labels:     <none>
Status:     Bound
Claim:      default/pvc-0002
Reclaim Policy: Delete
Access Modes:   RWO
Capacity:   1Gi
Message:    
Source:
    Type:   LibStorage (a persistent disk resource in libStorage)
    VolumeName: kubernetes-dynamic-pvc-850a3832-3548-11e6-ba16-0800275f7ed0
    VolumeID:   6699738b-17d1-41a3-8cce-b390dcab09ed

Validate Volume with RexRay

#> rexray volume get --volumeid=6699738b-17d1-41a3-8cce-b390dcab09ed
attachments: []
availabilityzone: ""
iops: 0
name: kubernetes-dynamic-pvc-850a3832-3548-11e6-ba16-0800275f7ed0
networkname: ""
size: 1
status: /Users/vladimir/VirtualBox Volumes/kubernetes-dynamic-pvc-850a3832-3548-11e6-ba16-0800275f7ed0
id: 6699738b-17d1-41a3-8cce-b390dcab09ed
type: ""
fields: {}

Launch Pod Using PVC
We can launch a Pod that uses the claim defined above.

kind: Pod
apiVersion: v1
metadata:
  name: pod0003
spec:
  containers:
    - name: pod0003-container
      image: gcr.io/google_containers/test-webserver
      volumeMounts:
      - mountPath: /test
        name: test-data
  volumes:
    - name: test-data
      persistentVolumeClaim:
        claimName: pvc-0002

PVC Bound
You can see that pvc-0002 is now bound.

#> cluster/kubectl.sh describe pvc pvc-0002
Name:       pvc-0002
Namespace:  default
Status:     Bound
Volume:     pvc-850a3832-3548-11e6-ba16-0800275f7ed0
...

vladimirvivien commented Jun 28, 2016

Tasks

The following is a list of tasks I am working on to get the code ready for a PR:

  • Rebase to the latest Kubernetes to pick up the latest API changes
  • Investigate the new kube-controller-manager flag --enable-dynamic-provisioning. This may supplant the flag that I added for a similar purpose.
    • The --enable-dynamic-provisioning flag does not negate the need for the --enable-libstorage-provisioner flag.
  • Clean up the code and remove cloudprovider/libstorage as this is not needed.
    • LibStorage is not a cloud provider, since it does not set up a cloud environment
    • The flag --enable-libstorage-provisioner triggers libStorage as the dynamic provisioner
  • Add a kube-controller-manager flag --libstorage-opts to pass a semicolon-delimited list of key/value pairs to libstorage for internal configuration (see the parsing sketch after this list).
  • Update LibStorageVolume: remove extraneous fields; use the command-line flag (see above) for additional params.
  • Revisit Unit tests to ensure proper coverage of newly added methods
  • Retest code with updated REX-Ray 0.4.0 GA
  • Fix any bug discovered during re-test
  • Test PV and PVC using the Vagrant setup
  • Test PV and PVC in a local/single-node setup
  • Capture instructions in document examples/libstorage/README.md
  • Revisit libStorage as a RecyclableVolumePlugin and its implications
    • Ensure libStorage handles being recycled
  • Create / Review PR
  • Submit PR
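
Sketch of one plausible way to parse the proposed --libstorage-opts value, assuming a format like host=tcp://127.0.0.1:7981;service=scaleio. Only the flag name and the semicolon-delimited key/value idea come from the task list above; the parsing details are assumptions.

// Sketch only: parse a semicolon-delimited key/value string such as the
// proposed --libstorage-opts flag value. Format details are assumptions.
package main

import (
	"fmt"
	"strings"
)

func parseLibStorageOpts(opts string) (map[string]string, error) {
	parsed := map[string]string{}
	for _, pair := range strings.Split(opts, ";") {
		pair = strings.TrimSpace(pair)
		if pair == "" {
			continue // tolerate trailing or doubled semicolons
		}
		kv := strings.SplitN(pair, "=", 2)
		if len(kv) != 2 || kv[0] == "" {
			return nil, fmt.Errorf("invalid key/value pair %q", pair)
		}
		parsed[kv[0]] = kv[1]
	}
	return parsed, nil
}

func main() {
	opts, err := parseLibStorageOpts("host=tcp://127.0.0.1:7981;service=scaleio")
	if err != nil {
		panic(err)
	}
	fmt.Println(opts["host"], opts["service"])
}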

@vladimirvivien

PR - kubernetes/kubernetes#28599

clintkitson modified the milestones: 16Q3 Google, 16Q2 Google on Aug 15, 2016
@vladimirvivien

The PR stalled due to a toml library license issue. That has now been resolved. However, the volume plugin API has progressed and invalidated many of the assumptions made earlier. These parts now have to be refactored to use the built-in dynamic provisioning mechanism.

@vladimirvivien

vladimirvivien commented Nov 3, 2016

It has been a while since this thread was updated. The latest version of the code has been submitted to the PR. This version of the code implements the following:

  • Automatic volume attachment for Persistent Volumes (PV)
  • Support for PV & PersistentVolumeClaim (PVC) binding
  • Consumption of storage from a PVC
  • Support for Storage Class
  • Dynamic provisioning of volumes using Storage classes
  • Binding of PVC to dynamically created volumes
  • Consumption of dynamically created volumes

The updated Kubernetes libStorage volume plugin code also introduces a clean implementation of the libStorage API client baked into Kubernetes. This allowed us to further reduce direct dependencies on Go packages from the libStorage project. As a benefit, this reduces the build friction that arises when introducing new dependencies to Kubernetes (see #133 (comment)). The new client gives us the freedom to implement only the libStorage API calls needed to achieve the storage operations.
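
A minimal sketch of that idea: a small, purpose-built HTTP client that calls the libStorage server directly instead of vendoring its Go packages. The endpoint path and response shape below are illustrative assumptions, not the documented libStorage REST API.

// Sketch only: a minimal, hand-rolled client for a libStorage server. The
// /volumes/{service} path and the response shape are illustrative assumptions.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

type volume struct {
	ID   string `json:"id"`
	Name string `json:"name"`
	Size int64  `json:"size"`
}

// listVolumes issues a single GET against the libStorage server and decodes
// the JSON response into a slice of volumes.
func listVolumes(host, service string) ([]volume, error) {
	url := fmt.Sprintf("%s/volumes/%s", host, service)
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var vols []volume
	if err := json.NewDecoder(resp.Body).Decode(&vols); err != nil {
		return nil, err
	}
	return vols, nil
}

func main() {
	vols, err := listVolumes("http://127.0.0.1:7981", "scaleio")
	if err != nil {
		log.Fatal(err)
	}
	for _, v := range vols {
		fmt.Println(v.ID, v.Name, v.Size)
	}
}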

See PR - kubernetes/kubernetes#28599
