implement glusterfs volume plugin #6174

Merged (1 commit, Apr 7, 2015)
3 changes: 3 additions & 0 deletions cmd/kubelet/app/plugins.go
@@ -28,6 +28,7 @@ import (
"github.com/GoogleCloudPlatform/kubernetes/pkg/volume/empty_dir"
"github.com/GoogleCloudPlatform/kubernetes/pkg/volume/gce_pd"
"github.com/GoogleCloudPlatform/kubernetes/pkg/volume/git_repo"
"github.com/GoogleCloudPlatform/kubernetes/pkg/volume/glusterfs"
"github.com/GoogleCloudPlatform/kubernetes/pkg/volume/host_path"
"github.com/GoogleCloudPlatform/kubernetes/pkg/volume/iscsi"
"github.com/GoogleCloudPlatform/kubernetes/pkg/volume/nfs"
@@ -55,6 +56,8 @@ func ProbeVolumePlugins() []volume.VolumePlugin {
allPlugins = append(allPlugins, nfs.ProbeVolumePlugins()...)
allPlugins = append(allPlugins, secret.ProbeVolumePlugins()...)
allPlugins = append(allPlugins, iscsi.ProbeVolumePlugins()...)
allPlugins = append(allPlugins, glusterfs.ProbeVolumePlugins()...)

return allPlugins
}
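The probe convention used above can be sketched in isolation. This is a simplified model, not the real kubelet code: the `VolumePlugin` interface is reduced to one method, and the plugin name `kubernetes.io/glusterfs` is an assumption for illustration.

```go
// Simplified sketch of the plugin-probe convention: each volume
// package exposes a ProbeVolumePlugins function, and the kubelet
// concatenates the results into one plugin registry.
package main

import "fmt"

// VolumePlugin is reduced here to a single method; the real
// interface has more (Init, CanSupport, NewBuilder, ...).
type VolumePlugin interface {
	Name() string
}

type glusterfsPlugin struct{}

// Name returns an assumed plugin identifier for illustration.
func (p *glusterfsPlugin) Name() string { return "kubernetes.io/glusterfs" }

// ProbeVolumePlugins mirrors the per-package convention: return the
// plugins this package provides.
func ProbeVolumePlugins() []VolumePlugin {
	return []VolumePlugin{&glusterfsPlugin{}}
}

func main() {
	allPlugins := []VolumePlugin{}
	allPlugins = append(allPlugins, ProbeVolumePlugins()...)
	for _, p := range allPlugins {
		fmt.Println(p.Name())
	}
}
```

The one-line `append(allPlugins, glusterfs.ProbeVolumePlugins()...)` in the diff is exactly this pattern: adding a package's plugins to the shared slice.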

3 changes: 3 additions & 0 deletions examples/examples_test.go
@@ -181,6 +181,9 @@ func TestExampleObjectSchemas(t *testing.T) {
"../examples/iscsi/v1beta3": {
"iscsi": &api.Pod{},
},
"../examples/glusterfs/v1beta3": {
"glusterfs": &api.Pod{},
},
}

for path, expected := range cases {
47 changes: 47 additions & 0 deletions examples/glusterfs/README.md
@@ -0,0 +1,47 @@
## Glusterfs

[Glusterfs](http://www.gluster.org) is an open source scale-out filesystem. These examples show how to allow containers to use Glusterfs volumes.

The example assumes that the Glusterfs client package is installed on all nodes.

### Prerequisites

Install the Glusterfs client package on the Kubernetes hosts.

### Create a POD

The following *volume* spec illustrates a sample configuration.

Comment (Contributor): nit: Clarify that this is the volume spec for the POD.

```js
{
"name": "glusterfsvol",
"glusterfs": {
"endpoints": "glusterfs-cluster",
"path": "kube_vol",
"readOnly": true
}
}
```

The parameters are explained as follows.

- **endpoints** is the name of the Endpoints object that represents a Gluster cluster configuration. To avoid a mount storm, *kubelet* randomly picks one host from the endpoints to mount; if that host is unresponsive, the next Gluster host in the endpoints is automatically selected.
- **path** is the Glusterfs volume name.
- **readOnly** is a boolean that determines whether the mountpoint is read-only or read-write.

Detailed POD and Gluster cluster endpoints examples can be found in [v1beta3/](v1beta3/) and [endpoints/](endpoints/).

```shell
# create gluster cluster endpoints
$ kubectl create -f examples/glusterfs/endpoints/glusterfs-endpoints.json
# create a container using gluster volume
$ kubectl create -f examples/glusterfs/v1beta3/glusterfs.json
```
Once that's up, you can list the pods and endpoints in the cluster to verify that they were created:

```shell
$ kubectl get endpoints
$ kubectl get pods
```

If you ssh to that machine, you can run `docker ps` to see the actual pod and `mount` to see if the Glusterfs volume is mounted.
13 changes: 13 additions & 0 deletions examples/glusterfs/endpoints/glusterfs-endpoints.json
@@ -0,0 +1,13 @@
{
"apiVersion": "v1beta1",
"id": "glusterfs-cluster",
"kind": "Endpoints",
"metadata": {
"name": "glusterfs-cluster"
},
"Endpoints": [
"10.16.154.81:0",
"10.16.154.82:0",
"10.16.154.83:0"
]
}
32 changes: 32 additions & 0 deletions examples/glusterfs/v1beta3/glusterfs.json
@@ -0,0 +1,32 @@
{
"apiVersion": "v1beta3",
"id": "glusterfs",
"kind": "Pod",
"metadata": {
"name": "glusterfs"
},
"spec": {
"containers": [
{
"name": "glusterfs",
"image": "kubernetes/pause",
"volumeMounts": [
{
"mountPath": "/mnt/glusterfs",
"name": "glusterfsvol"
}
]
}
],
"volumes": [
{
"name": "glusterfsvol",
"glusterfs": {
"endpoints": "glusterfs-cluster",
"path": "kube_vol",
"readOnly": true
}
}
]
}
}
1 change: 1 addition & 0 deletions pkg/api/testing/fuzzer.go
@@ -174,6 +174,7 @@ func FuzzerFor(t *testing.T, version string, src rand.Source) *fuzz.Fuzzer {
// Exactly one of the fields should be set.
//FIXME: the fuzz can still end up nil. What if fuzz allowed me to say that?
fuzzOneOf(c, &vs.HostPath, &vs.EmptyDir, &vs.GCEPersistentDisk, &vs.GitRepo, &vs.Secret, &vs.NFS, &vs.ISCSI)
fuzzOneOf(c, &vs.HostPath, &vs.EmptyDir, &vs.GCEPersistentDisk, &vs.GitRepo, &vs.Secret, &vs.NFS, &vs.ISCSI, &vs.Glusterfs)
},
func(d *api.DNSPolicy, c fuzz.Continue) {
policies := []api.DNSPolicy{api.DNSClusterFirst, api.DNSDefault}
17 changes: 17 additions & 0 deletions pkg/api/types.go
@@ -198,6 +198,8 @@ type VolumeSource struct {
// ISCSIVolumeSource represents an ISCSI Disk resource that is attached to a
// kubelet's host machine and then exposed to the pod.
ISCSI *ISCSIVolumeSource `json:"iscsi"`
// Glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime
Glusterfs *GlusterfsVolumeSource `json:"glusterfs"`
Comment (Contributor):
You'll also want to add this to PersistentVolumeSource to make it a provisionable resource.

If Gluster is not to be exposed to the end user (only the admin provisions it, users claim it), then after the PV framework is fully merged you can remove GFS from VolumeSource and leave it in PVS. This hides it completely from the pod author.

}

// Similar to VolumeSource but meant for the administrator who creates PVs.
@@ -210,6 +212,8 @@ type PersistentVolumeSource struct {
// This is useful for development and testing only.
// on-host storage is not supported in any way
HostPath *HostPathVolumeSource `json:"hostPath"`
// Glusterfs represents a Glusterfs volume that is attached to a host and exposed to the pod
Glusterfs *GlusterfsVolumeSource `json:"glusterfs"`
}

type PersistentVolume struct {
@@ -421,6 +425,19 @@ type NFSVolumeSource struct {
ReadOnly bool `json:"readOnly,omitempty"`
}

// GlusterfsVolumeSource represents a Glusterfs Mount that lasts the lifetime of a pod
type GlusterfsVolumeSource struct {
// Required: EndpointsName is the endpoint name that details Glusterfs topology
EndpointsName string `json:"endpoints"`
Comment (Member):
Is the assumption that endpoints for gluster lie outside the kubernetes cluster? I am a bit anxious about direct creation of endpoints without a service (now that we have headless services) since it lays a trap for a later collision that won't be detected.

Comment (Contributor, Author):
the gluster cluster lies outside the kube cluster. what's the collision case?

Comment (Contributor):
It's not unreasonable to force someone to create an external headless service if they want this behavior.


Comment (Member):
The collision case is that you create endpoints called "foo" then I create
a service called "foo" and the endpoints controller fails.

Comment (Contributor, Author):
got it, thanks. would a special namespace for storage help?

Comment (Member):
Perhaps the real question is whether you expect an external-to-kubernetes gluster cluster to be namespace-scoped or not?

A) The set of gluster endpoints is namespaced - use a headless service
B) The set of gluster endpoints is not namespaced - use an object reference to a headless service
C ?

Comment (Contributor):
At least for persistent volumes (not namespaced), b) is required.

Only thing I can think of for C is "a DNS address" or "a list of ips".



Comment (Member):
We do not currently have any concept of non-namespaced endpoints or services. Are we going to accumulate these things in a random namespace and then violate the cross-namespace principles?

Comment (Contributor):
So hypothetically, an admin might run gluster in namespace "foo" and have a real service "gluster" (headless or no). They then want to use volumes from that gluster service in other namespaces. So an admin would automate / manually create persistent volumes that point to that gluster cluster. The volume settings would be "use the gluster cluster in namespace foo with name gluster". When a volume source is created for that persistent volume, it would be referencing that service.


Comment (Member):
I can buy that argument for persistent volumes, where it is an admin that is crossing the namespace boundary. It's a bit less shiny when it's a user's pod that is referencing Endpoints or Services in another namespace.

As it stands, the endpoints must be in the same namespace as the pod. I don't think this is sufficient to handle what you are describing. This should probably become an ObjectRef, or else we should make it target a multi-record DNS name and treat that as an endpoints set (or something).

Then we have to decide if it is kosher to write an Endpoints object that does not have an associated Service object.


// Required: Path is the Glusterfs volume path
Path string `json:"path"`

// Optional: Defaults to false (read/write). ReadOnly here will force
// the Glusterfs to be mounted with read-only permissions
ReadOnly bool `json:"readOnly,omitempty"`
}

// ContainerPort represents a network port in a single container
type ContainerPort struct {
// Optional: If specified, this must be a DNS_LABEL. Each named port
6 changes: 6 additions & 0 deletions pkg/api/v1beta1/conversion.go
@@ -1179,6 +1179,9 @@ func init() {
if err := s.Convert(&in.NFS, &out.NFS, 0); err != nil {
return err
}
if err := s.Convert(&in.Glusterfs, &out.Glusterfs, 0); err != nil {
return err
}
return nil
},
func(in *VolumeSource, out *newer.VolumeSource, s conversion.Scope) error {
@@ -1203,6 +1206,9 @@ func init() {
if err := s.Convert(&in.NFS, &out.NFS, 0); err != nil {
return err
}
if err := s.Convert(&in.Glusterfs, &out.Glusterfs, 0); err != nil {
return err
}
return nil
},

17 changes: 17 additions & 0 deletions pkg/api/v1beta1/types.go
@@ -114,6 +114,8 @@ type VolumeSource struct {
// ISCSI represents an ISCSI Disk resource that is attached to a
// kubelet's host machine and then exposed to the pod.
ISCSI *ISCSIVolumeSource `json:"iscsi" description:"iSCSI disk attached to host machine on demand"`
// Glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime
Glusterfs *GlusterfsVolumeSource `json:"glusterfs" description:"Glusterfs volume that will be mounted on the host machine "`
}

// Similar to VolumeSource but meant for the administrator who creates PVs.
@@ -126,6 +128,8 @@ type PersistentVolumeSource struct {
// This is useful for development and testing only.
// on-host storage is not supported in any way.
HostPath *HostPathVolumeSource `json:"hostPath" description:"a HostPath provisioned by a developer or tester; for develment use only"`
// Glusterfs represents a Glusterfs volume that is attached to a host and exposed to the pod
Glusterfs *GlusterfsVolumeSource `json:"glusterfs" description:"Glusterfs volume resource provisioned by an admin"`
}

type PersistentVolume struct {
@@ -1493,3 +1497,16 @@ type SecretList struct {

Items []Secret `json:"items" description:"items is a list of secret objects"`
}

// GlusterfsVolumeSource represents a Glusterfs Mount that lasts the lifetime of a pod
type GlusterfsVolumeSource struct {
// Required: EndpointsName is the endpoint name that details Glusterfs topology
EndpointsName string `json:"endpoints" description:"gluster hosts endpoints name"`

// Required: Path is the Glusterfs volume path
Path string `json:"path" description:"path to gluster volume"`

// Optional: Defaults to false (read/write). ReadOnly here will force
// the Glusterfs volume to be mounted with read-only permissions
ReadOnly bool `json:"readOnly,omitempty" description:"Glusterfs volume to be mounted with read-only permissions"`
}
6 changes: 6 additions & 0 deletions pkg/api/v1beta2/conversion.go
@@ -1106,6 +1106,9 @@ func init() {
if err := s.Convert(&in.NFS, &out.NFS, 0); err != nil {
return err
}
if err := s.Convert(&in.Glusterfs, &out.Glusterfs, 0); err != nil {
return err
}
return nil
},
func(in *VolumeSource, out *newer.VolumeSource, s conversion.Scope) error {
@@ -1130,6 +1133,9 @@ func init() {
if err := s.Convert(&in.NFS, &out.NFS, 0); err != nil {
return err
}
if err := s.Convert(&in.Glusterfs, &out.Glusterfs, 0); err != nil {
return err
}
return nil
},

17 changes: 17 additions & 0 deletions pkg/api/v1beta2/types.go
@@ -83,6 +83,8 @@ type VolumeSource struct {
// ISCSI represents an ISCSI Disk resource that is attached to a
// kubelet's host machine and then exposed to the pod.
ISCSI *ISCSIVolumeSource `json:"iscsi" description:"iSCSI disk attached to host machine on demand"`
// Glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime
Glusterfs *GlusterfsVolumeSource `json:"glusterfs" description:"Glusterfs volume that will be mounted on the host machine "`
}

// Similar to VolumeSource but meant for the administrator who creates PVs.
@@ -95,6 +97,8 @@ type PersistentVolumeSource struct {
// This is useful for development and testing only.
// on-host storage is not supported in any way.
HostPath *HostPathVolumeSource `json:"hostPath" description:"a HostPath provisioned by a developer or tester; for develment use only"`
// Glusterfs represents a Glusterfs volume that is attached to a host and exposed to the pod
Glusterfs *GlusterfsVolumeSource `json:"glusterfs" description:"Glusterfs volume resource provisioned by an admin"`
}

type PersistentVolume struct {
@@ -307,6 +311,19 @@ type ISCSIVolumeSource struct {
ReadOnly bool `json:"readOnly,omitempty" description:"read-only if true, read-write otherwise (false or unspecified)"`
}

// GlusterfsVolumeSource represents a Glusterfs Mount that lasts the lifetime of a pod
type GlusterfsVolumeSource struct {
// Required: EndpointsName is the endpoint name that details Glusterfs topology
EndpointsName string `json:"endpoints" description:"gluster hosts endpoints name"`

// Required: Path is the Glusterfs volume path
Path string `json:"path" description:"path to gluster volume"`

// Optional: Defaults to false (read/write). ReadOnly here will force
// the Glusterfs volume to be mounted with read-only permissions
ReadOnly bool `json:"readOnly,omitempty" description:"glusterfs volume to be mounted with read-only permissions"`
}

// VolumeMount describes a mounting of a Volume within a container.
//
// https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/volumes.md
17 changes: 17 additions & 0 deletions pkg/api/v1beta3/types.go
@@ -215,6 +215,8 @@ type VolumeSource struct {
// ISCSI represents an ISCSI Disk resource that is attached to a
// kubelet's host machine and then exposed to the pod.
ISCSI *ISCSIVolumeSource `json:"iscsi" description:"iSCSI disk attached to host machine on demand"`
// Glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime
Glusterfs *GlusterfsVolumeSource `json:"glusterfs" description:"Glusterfs volume that will be mounted on the host machine "`
}

// Similar to VolumeSource but meant for the administrator who creates PVs.
@@ -227,6 +229,8 @@ type PersistentVolumeSource struct {
// This is useful for development and testing only.
// on-host storage is not supported in any way.
HostPath *HostPathVolumeSource `json:"hostPath" description:"a HostPath provisioned by a developer or tester; for develment use only"`
// Glusterfs represents a Glusterfs volume that is attached to a host and exposed to the pod
Glusterfs *GlusterfsVolumeSource `json:"glusterfs" description:"Glusterfs volume resource provisioned by an admin"`
}

type PersistentVolume struct {
@@ -343,6 +347,19 @@ type EmptyDirVolumeSource struct {
Medium StorageType `json:"medium" description:"type of storage used to back the volume; must be an empty string (default) or Memory"`
}

// GlusterfsVolumeSource represents a Glusterfs Mount that lasts the lifetime of a pod
type GlusterfsVolumeSource struct {
// Required: EndpointsName is the endpoint name that details Glusterfs topology
EndpointsName string `json:"endpoints" description:"gluster hosts endpoints name"`

// Required: Path is the Glusterfs volume path
Path string `json:"path" description:"path to gluster volume"`

// Optional: Defaults to false (read/write). ReadOnly here will force
// the Glusterfs volume to be mounted with read-only permissions
ReadOnly bool `json:"readOnly,omitempty" description:"glusterfs volume to be mounted with read-only permissions"`
}

// StorageType defines ways that storage can be allocated to a volume.
type StorageType string

15 changes: 15 additions & 0 deletions pkg/api/validation/validation.go
@@ -311,6 +311,10 @@ func validateSource(source *api.VolumeSource) errs.ValidationErrorList {
numVolumes++
allErrs = append(allErrs, validateISCSIVolumeSource(source.ISCSI).Prefix("iscsi")...)
}
if source.Glusterfs != nil {
numVolumes++
allErrs = append(allErrs, validateGlusterfs(source.Glusterfs).Prefix("glusterfs")...)
}
if numVolumes != 1 {
allErrs = append(allErrs, errs.NewFieldInvalid("", source, "exactly 1 volume type is required"))
}
@@ -386,6 +390,17 @@ func validateNFS(nfs *api.NFSVolumeSource) errs.ValidationErrorList {
return allErrs
}

func validateGlusterfs(glusterfs *api.GlusterfsVolumeSource) errs.ValidationErrorList {
allErrs := errs.ValidationErrorList{}
if glusterfs.EndpointsName == "" {
allErrs = append(allErrs, errs.NewFieldRequired("endpoints"))
}
if glusterfs.Path == "" {
allErrs = append(allErrs, errs.NewFieldRequired("path"))
}
return allErrs
}
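The two required-field checks above can be exercised in isolation. This sketch swaps the `ValidationErrorList` machinery for plain strings so it stands alone; the rule it demonstrates is the one the diff adds: `endpoints` and `path` must both be non-empty.

```go
// Standalone sketch of the required-field validation added in
// validateGlusterfs (error type simplified to strings for brevity).
package main

import "fmt"

type GlusterfsVolumeSource struct {
	EndpointsName string
	Path          string
	ReadOnly      bool
}

// validateGlusterfs reports one error per missing required field.
func validateGlusterfs(g *GlusterfsVolumeSource) []string {
	var errs []string
	if g.EndpointsName == "" {
		errs = append(errs, "endpoints: required")
	}
	if g.Path == "" {
		errs = append(errs, "path: required")
	}
	return errs
}

func main() {
	ok := &GlusterfsVolumeSource{EndpointsName: "glusterfs-cluster", Path: "kube_vol"}
	bad := &GlusterfsVolumeSource{Path: "kube_vol"}
	fmt.Println(len(validateGlusterfs(ok)), len(validateGlusterfs(bad)))
}
```

This mirrors the "empty hosts" and "empty path" error cases in the validation test below.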

func ValidatePersistentVolumeName(name string, prefix bool) (bool, string) {
return nameIsDNSSubdomain(name, prefix)
}
5 changes: 5 additions & 0 deletions pkg/api/validation/validation_test.go
@@ -519,6 +519,7 @@ func TestValidateVolumes(t *testing.T) {
{Name: "gitrepo", VolumeSource: api.VolumeSource{GitRepo: &api.GitRepoVolumeSource{"my-repo", "hashstring"}}},
{Name: "iscsidisk", VolumeSource: api.VolumeSource{ISCSI: &api.ISCSIVolumeSource{"127.0.0.1", "iqn.2015-02.example.com:test", 1, "ext4", false}}},
{Name: "secret", VolumeSource: api.VolumeSource{Secret: &api.SecretVolumeSource{"my-secret"}}},
{Name: "glusterfs", VolumeSource: api.VolumeSource{Glusterfs: &api.GlusterfsVolumeSource{"host1", "path", false}}},
}
names, errs := validateVolumes(successCase)
if len(errs) != 0 {
@@ -530,6 +531,8 @@ func TestValidateVolumes(t *testing.T) {
emptyVS := api.VolumeSource{EmptyDir: &api.EmptyDirVolumeSource{}}
emptyPortal := api.VolumeSource{ISCSI: &api.ISCSIVolumeSource{"", "iqn.2015-02.example.com:test", 1, "ext4", false}}
emptyIQN := api.VolumeSource{ISCSI: &api.ISCSIVolumeSource{"127.0.0.1", "", 1, "ext4", false}}
emptyHosts := api.VolumeSource{Glusterfs: &api.GlusterfsVolumeSource{"", "path", false}}
emptyPath := api.VolumeSource{Glusterfs: &api.GlusterfsVolumeSource{"host", "", false}}
errorCases := map[string]struct {
V []api.Volume
T errors.ValidationErrorType
@@ -541,6 +544,8 @@ func TestValidateVolumes(t *testing.T) {
"name not unique": {[]api.Volume{{Name: "abc", VolumeSource: emptyVS}, {Name: "abc", VolumeSource: emptyVS}}, errors.ValidationErrorTypeDuplicate, "[1].name"},
"empty portal": {[]api.Volume{{Name: "badportal", VolumeSource: emptyPortal}}, errors.ValidationErrorTypeRequired, "[0].source.iscsi.targetPortal"},
"empty iqn": {[]api.Volume{{Name: "badiqn", VolumeSource: emptyIQN}}, errors.ValidationErrorTypeRequired, "[0].source.iscsi.iqn"},
"empty hosts": {[]api.Volume{{Name: "badhost", VolumeSource: emptyHosts}}, errors.ValidationErrorTypeRequired, "[0].source.glusterfs.endpoints"},
"empty path": {[]api.Volume{{Name: "badpath", VolumeSource: emptyPath}}, errors.ValidationErrorTypeRequired, "[0].source.glusterfs.path"},
}
for k, v := range errorCases {
_, errs := validateVolumes(v.V)