CSI-ScaleIO is a Container Storage Interface (CSI) plugin that provides ScaleIO support.
This project may be compiled as a stand-alone Golang binary that, when run, provides a valid CSI endpoint. It can also be vendored or built as a Go plug-in in order to extend the functionality of other programs.
The Node portion of the plugin can be run on any node that is configured as a
ScaleIO SDC. This means that the
scini kernel module must be loaded. Also, if the
X_CSI_SCALEIO_SDCGUID environment variable is not set, the plugin will
try to query the SDC GUID by executing the binary
/opt/emc/scaleio/sdc/bin/drv_cfg. If that binary is not present, the Node
Service cannot be run.
CSI-ScaleIO can be installed with Go and the following command:
$ go get github.com/thecodeteam/csi-scaleio
The resulting binary will be installed to $GOPATH/bin.
If you want to build
csi-scaleio with accurate version information, you'll
need to run the
go generate command and build again:
$ go get github.com/thecodeteam/csi-scaleio
$ cd $GOPATH/src/github.com/thecodeteam/csi-scaleio
$ go generate && go install
The binary will once again be installed to $GOPATH/bin.
Before starting the plugin please set the environment variable
CSI_ENDPOINT to a valid Go network address such as csi.sock:
$ CSI_ENDPOINT=csi.sock csi-scaleio
INFO configured com.thecodeteam.scaleio endpoint="https://10.50.10.100:443" insecure=true password="******" privatedir=/dev/disk/csi-scaleio sdcGUID= systemname=democluster thickprovision=false user=admin
INFO identity service registered
INFO controller service registered
INFO node service registered
INFO serving endpoint="unix:///csi.sock"
The server can be shut down by using
Ctrl-C or sending the process
any of the standard exit signals.
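A minimal sketch of that lifecycle, parsing CSI_ENDPOINT into a listener and waiting for an exit signal, might look like the following. The parseEndpoint helper is a hypothetical illustration, and a real SP would serve gRPC on the listener instead of merely waiting; the short timeout exists only so the demo terminates on its own.

```go
package main

import (
	"fmt"
	"net"
	"os"
	"os/signal"
	"strings"
	"syscall"
	"time"
)

// parseEndpoint turns a CSI_ENDPOINT value such as "csi.sock" or
// "unix:///csi.sock" into a (network, address) pair for net.Listen.
func parseEndpoint(ep string) (string, string) {
	if strings.HasPrefix(ep, "unix://") {
		return "unix", strings.TrimPrefix(ep, "unix://")
	}
	if strings.HasPrefix(ep, "tcp://") {
		return "tcp", strings.TrimPrefix(ep, "tcp://")
	}
	// A bare path is treated as a unix socket.
	return "unix", ep
}

func main() {
	ep := os.Getenv("CSI_ENDPOINT")
	if ep == "" {
		ep = "csi.sock" // demo default; the real plugin requires CSI_ENDPOINT
	}
	network, addr := parseEndpoint(ep)
	if network == "unix" {
		os.Remove(addr) // clear a stale socket file from a previous run
	}
	l, err := net.Listen(network, addr)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer l.Close()

	// Shut down on Ctrl-C (SIGINT) or a standard exit signal.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)

	fmt.Printf("serving endpoint=%q\n", network+"://"+addr)
	select {
	case s := <-sigs:
		fmt.Println("received", s, "- shutting down")
	case <-time.After(100 * time.Millisecond):
		// Demo only: exit after a short wait instead of serving forever.
		fmt.Println("demo timeout - shutting down")
	}
}
```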
The CSI specification uses the gRPC protocol for plug-in communication.
The easiest way to interact with a CSI plugin is via the Container
Storage Client (csc) program provided via the GoCSI project:

$ go get github.com/rexray/gocsi
$ go install github.com/rexray/gocsi/csc
csc uses the same
CSI_ENDPOINT environment variable, so once it is set you can issue commands
to the plugin. Some examples:
Get the plugin's supported versions and plugin info:
$ ./csc -e csi.sock identity supported-versions
0.1.0

$ ./csc -v 0.1.0 -e csi.sock identity plugin-info
"com.thecodeteam.scaleio"  "0.0.1+1"
"commit"="cd9c538b596db926a3a747c6c219a2ace8f1890b"
"formed"="Fri, 01 Dec 2017 08:33:28 PST"
"semver"="0.0.1+1"
"url"="https://github.com/thecodeteam/csi-scaleio"
When using the plugin, some commands accept additional parameters, some of which may be required for the command to work, or may change the behavior of the command. Those parameters are listed here.
storagepool: the name of a storage pool. This parameter is required, and must be passed in the
CreateVolume parameters when creating a volume.

storagepool may also be passed in the
GetCapacity command. If it is, the returned capacity is the available capacity for creation within the given storage pool. Otherwise, it is the capacity for creation within the storage cluster.
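The parameter semantics above can be sketched with two small Go helpers. Both functions are hypothetical illustrations of how the opaque CSI parameters map might be built, not code from the plugin itself.

```go
package main

import "fmt"

// createVolumeParams builds the opaque CSI parameters map for
// CreateVolume; the storage pool name is required.
func createVolumeParams(storagePool string) (map[string]string, error) {
	if storagePool == "" {
		return nil, fmt.Errorf("storagepool is required for CreateVolume")
	}
	return map[string]string{"storagepool": storagePool}, nil
}

// getCapacityParams builds the parameters map for GetCapacity; when
// the pool is omitted, the plugin reports cluster-wide capacity.
func getCapacityParams(storagePool string) map[string]string {
	p := map[string]string{}
	if storagePool != "" {
		p["storagepool"] = storagePool
	}
	return p
}

func main() {
	p, _ := createVolumeParams("pd1pool1")
	fmt.Println(p["storagepool"])           // pd1pool1
	fmt.Println(len(getCapacityParams(""))) // 0: cluster-wide capacity
}
```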
Passing parameters with
csc is demonstrated in this example:

$ ./csc -v 0.1.0 c create --cap 1,mount,xfs --params storagepool=pd1pool1 myvol
"6757e7d300000000"
The CSI-ScaleIO SP is built using the GoCSI CSP package. Please see its configuration section for a complete list of the environment variables that may be used to configure this SP.
The following table lists the configuration values that are specific to ScaleIO and their default values:
|| Description || Default ||
|| ScaleIO Gateway HTTP endpoint || "" ||
|| Username for authenticating to the Gateway || "admin" ||
|| Password of the Gateway user || "" ||
|| Whether the ScaleIO Gateway's certificate chain and host name should not be verified || ||
|| The name of the ScaleIO cluster || "" ||
|| The GUID of the SDC. This is only used by the Node Service, and removes the need for calling an external binary to retrieve the GUID || "" ||
|| Whether to use thick provisioning when creating new volumes || ||
Capable operational modes
The CSI spec defines a set of AccessModes that a volume can have. CSI-ScaleIO supports the following modes for volumes that will be mounted as a filesystem:
// Can only be published once as read/write on a single node,
// at any given time.
SINGLE_NODE_WRITER = 1;

// Can only be published once as readonly on a single node,
// at any given time.
SINGLE_NODE_READER_ONLY = 2;

// Can be published as readonly at multiple nodes simultaneously.
MULTI_NODE_READER_ONLY = 3;
This means that a volume can be mounted on a single node at a time with read-write or read-only access, or mounted on multiple nodes simultaneously, provided every mount is read-only.
For volumes that are used as block devices, only the following are supported:
// Can only be published once as read/write on a single node, at
// any given time.
SINGLE_NODE_WRITER = 1;

// Can be published as read/write at multiple nodes
// simultaneously.
MULTI_NODE_MULTI_WRITER = 5;
This means that giving a workload read-only access to a block device is not supported.
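The two mode lists above can be collapsed into a small validation helper. The AccessMode values mirror the CSI spec constants quoted above, while the supported function itself is a sketch of the rule, not the plugin's actual implementation.

```go
package main

import "fmt"

// AccessMode values from the CSI 0.1 specification.
type AccessMode int32

const (
	SINGLE_NODE_WRITER      AccessMode = 1
	SINGLE_NODE_READER_ONLY AccessMode = 2
	MULTI_NODE_READER_ONLY  AccessMode = 3
	MULTI_NODE_MULTI_WRITER AccessMode = 5
)

// supported reports whether CSI-ScaleIO accepts the requested access
// mode, which depends on whether the volume is used as a mounted
// filesystem or a raw block device (per the lists above).
func supported(mode AccessMode, block bool) bool {
	if block {
		return mode == SINGLE_NODE_WRITER || mode == MULTI_NODE_MULTI_WRITER
	}
	return mode == SINGLE_NODE_WRITER ||
		mode == SINGLE_NODE_READER_ONLY ||
		mode == MULTI_NODE_READER_ONLY
}

func main() {
	fmt.Println(supported(MULTI_NODE_READER_ONLY, false)) // true: multi-node read-only mount
	fmt.Println(supported(MULTI_NODE_READER_ONLY, true))  // false: read-only block is not supported
}
```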
In general, volumes should be formatted with xfs or ext4.
For any questions or concerns, please file an issue with the csi-scaleio project or join the #project-rexray channel on codecommunity.slack.com.