synology-csi

A Container Storage Interface Driver for Synology NAS

Platforms supported

The driver supports Linux only, since it requires iscsid to be running on the host. It is currently tested on Ubuntu 16.04 and 18.04.
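
On Ubuntu, iscsid is provided by the open-iscsi package. A quick setup sketch (the package and service names assume a systemd-based Ubuntu host):

# install and start iscsid (assumption: Ubuntu with systemd)
sudo apt-get install open-iscsi
sudo systemctl enable --now iscsid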

Build

Build package

make

Build docker image

# e.g. docker build -t jparklab/synology-csi .
docker build [-f Dockerfile] -t <repo>[:<tag>] .

Test

Here we use gocsi to test the driver.
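
If you do not have the csc tool yet, it can be built from the gocsi repository (a sketch; the import path assumes gocsi's csc subpackage lives at github.com/rexray/gocsi):

go get github.com/rexray/gocsi/csc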

Create a config file for testing

You need to create a config file that contains the information needed to connect to the Synology NAS API. See Create a config file below.

Start plugin driver

# You can specify any name for nodeid
$ go run cmd/syno-csi-plugin/main.go \
    --nodeid CSINode \
    --endpoint tcp://127.0.0.1:10000 \
    --synology-config syno-config.yml 

Get plugin info

$ csc identity plugin-info -e tcp://127.0.0.1:10000

Create a volume

$ csc controller create-volume \
    --req-bytes 2147483648 \
    -e tcp://127.0.0.1:10000 \
    test-volume 
"8.1" 2147483648 "iqn"="iqn.2000-01.com.synology:kube-csi-test-volume" "mappingIndex"="1" "targetID"="8"

List volumes

The first column in the output is the volume ID.

$ csc controller list-volumes -e tcp://127.0.0.1:10000 
"8.1" 2147483648 "iqn"="iqn.2000-01.com.synology:kube-csi-test-volume" "mappingIndex"="1" "targetID"="8"

Delete the volume

# e.g.
# csc controller delete-volume  -e tcp://127.0.0.1:10000 8.1
$ csc controller delete-volume  -e tcp://127.0.0.1:10000 <volume id>

Deploy

Ensure the Kubernetes cluster is configured for CSI drivers

For Kubernetes v1.12 and v1.13, feature gates need to be enabled to use CSI drivers. Follow the instructions at https://kubernetes-csi.github.io/docs/csi-driver-object.html and https://kubernetes-csi.github.io/docs/csi-node-object.html to set up your Kubernetes cluster.
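
For example, on v1.12/v1.13 this typically means passing a flag like the following to kube-apiserver and kubelet (gate names taken from the kubernetes-csi docs; verify them against your cluster version):

--feature-gates=CSIDriverRegistry=true,CSINodeInfo=true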

Create a config file

---
# syno-config.yml file
host: <hostname>        # ip address or hostname of the Synology NAS
port: 5000              # change this if you use a port other than the default one
username: <login>       # username
password: <password>    # password
sessionName: Core       # You won't need to touch this value
sslVerify: false        # set this to true to use https

Create a k8s secret from the config file

kubectl create secret generic synology-config --from-file=syno-config.yml

Deploy to Kubernetes

kubectl apply -f deploy/kubernetes/v1.15

(v1.12 has also been tested; v1.13 has not been tested.)

NOTE:

 synology-csi-attacher and synology-csi-provisioner most likely need to run on the same node.
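
To verify the deployment, check that the driver pods come up (the attacher and provisioner names appear above; the node plugin pod name is an assumption based on the manifests in deploy/kubernetes/v1.15):

$ kubectl get pods
# expect the synology-csi-attacher, synology-csi-provisioner, and node plugin pods to be Running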

Parameters for volumes

By default, the iSCSI LUN will be created on Volume 1 (/volume1) with thin provisioning. You can set parameters in storage_class.yml to choose a different location or volume type.

e.g.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: synology-iscsi-storage
provisioner: csi.synology.com
parameters:
  location: '/volume2'
  type: 'FILE'          # if the location uses the ext4 file system, use FILE for thick provisioning and THIN for thin provisioning;
                        # for the btrfs file system, use BLUN_THICK for thick provisioning and BLUN for thin provisioning.
reclaimPolicy: Delete

NOTE: if you have already created the storage class, you will need to delete and recreate it for parameter changes to take effect.
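
Once the storage class exists, a PersistentVolumeClaim can request a volume from it. A minimal sketch (the claim name and size are arbitrary):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: synology-iscsi-pvc            # arbitrary name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: synology-iscsi-storage
  resources:
    requests:
      storage: 2Gi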
