This repository has been archived by the owner on Jul 22, 2024. It is now read-only.

Commit

Merge pull request #4 from IBM/dev
Updating README and merging systemd scripts
midoblgsm authored Apr 5, 2017
2 parents 89098eb + 394a68f commit e3f487c
Showing 5 changed files with 175 additions and 41 deletions.
167 changes: 127 additions & 40 deletions README.md
@@ -1,7 +1,9 @@
# Ubiquity Storage Service for Container Ecosystems
Ubiquity provides access to persistent storage for Docker containers in Docker or Kubernetes ecosystems. The REST service can be run on one or more nodes in the cluster to create, manage, and delete storage volumes.

Ubiquity is a pluggable framework that can support a variety of storage backends. See 'Available Storage Systems' for more details.

This code is provided "AS IS" and without warranty of any kind. Any issues will be handled on a best effort basis.

## Sample Deployment Options
The service can be deployed in a variety of ways. In all options though, Ubiquity must be
@@ -24,12 +26,17 @@ This deployment shows a Kubernetes pod or cluster as well as a Docker Swarm cluster

This is identical to the previous deployment example except that the Kubernetes or Docker Swarm hosts are using NFS to access their volumes. Note that a typical Spectrum Scale deployment would have several CES NFS servers (protocol servers) and the Ubiquity service could be installed on one of those servers or on a separate management server (such as the node collecting Zimon stats or where the GUI service is installed).

#### Multi-node using Native GPFS (POSIX) and Docker Swarm

In this deployment, the Ubiquity service is installed and running on a single Spectrum Scale server. [Ubiquity Docker Plugin](https://github.com/IBM/ubiquity-docker-plugin) is installed and running on all nodes (Docker Hosts that are acting as clients to the Spectrum Scale Storage Cluster) that are part of the Docker Swarm cluster, including the Swarm Manager and the Worker Nodes. The Ubiquity Docker Plugin, running on all the Swarm Nodes, must be configured to point to the single instance of Ubiquity service running on the Spectrum Scale server.

## Installation
### Build Prerequisites
* Install [golang](https://golang.org/) (>=1.6)
* Install [git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
* Install gcc

* Configure Go - the GOPATH environment variable must be set correctly before starting the build process. Create a new directory and set it as GOPATH (see the build steps below).

### Deployment Prerequisites
Once the Ubiquity binary is built, the only requirement on the node where it is deployed is that the Ubiquity service has access to a deployed storage service that will be used by the containers. The type of access Ubiquity needs to the storage service depends on the storage backend that is being used. See 'Available Storage Systems' for more details.

@@ -52,7 +59,7 @@
```
Defaults:%ubiquity secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/lpp/mmfs/bin
```

### Download and Build Source Code

```bash
mkdir -p $HOME/workspace
export GOPATH=$HOME/workspace
mkdir -p $GOPATH/src/github.com/IBM   # create the directory if it does not already exist
cd $GOPATH/src/github.com/IBM
git clone git@github.com:IBM/ubiquity.git
cd ubiquity
./scripts/build

```

### Configuring the Ubiquity Service

Unless otherwise specified by the `configFile` command line parameter, the Ubiquity service will
look for a file named `ubiquity-server.conf` for its configuration.

The following snippet shows a sample configuration file for the case where the Ubiquity service is deployed on a system with native (CLI) access to the Spectrum Scale storage system.

Note that the file system chosen for storing the DB that tracks volumes is important. Ubiquity uses a SQLite DB, and so can support any storage location that SQLite supports. This can be a local file system such as ext4, NFS (if exclusive access from a single host is ensured), or a parallel file system such as Spectrum Scale. In the example below, the DB is stored in Spectrum Scale to support failover as well as to provide availability and durability of the DB data.


```toml
port = 9999 # The TCP port to listen on
logPath = "/tmp/ubiquity" # The Ubiquity service will write logs to file "ubiquity.log" in this path. This path must already exist.
defaultBackend = "spectrum-scale" # The "spectrum-scale" backend will be the default backend if none is specified in the request

[SpectrumScaleConfig] # If this section is specified, the "spectrum-scale" backend will be enabled.
defaultFilesystem = "gold" # Default name of Spectrum Scale file system to use if user does not specify one during creation of a volume. This file system must already exist.
configPath = "/gpfs/gold/config" # Path in an existing filesystem where Ubiquity can create/store volume DB.
nfsServerAddr = "CESClusterHost" # IP/hostname of Spectrum Scale CES NFS cluster. This is the hostname that NFS clients will use to mount NFS volumes. (required for creation of NFS accessible volumes)

# Controls the behavior of volume deletion. If set to true, the data in the storage system (e.g., fileset, directory) will be deleted upon volume deletion. If set to false, the volume will be removed from the local database, but the data will not be deleted from the storage system. Note that volumes created from existing data in the storage system should never have their data deleted upon volume deletion (although this may not be true for Kubernetes volumes with a recycle reclaim policy).
forceDelete = false
```

To support running the Ubiquity service on a host (or VM or container) that doesn't have direct access to the Spectrum Scale CLI, also add the following items to the config file to have Ubiquity use password-less SSH access to the Spectrum Scale Storage system:

```toml
[SpectrumScaleConfig.SshConfig] # If this section is specified, then the "spectrum-scale" backend will be accessed via SSH connection
user = "ubiquity" # username to login as on the Spectrum Scale storage system
host = "my_ss_host" # hostname of the Spectrum Scale storage system
port = "22" # port to connect to on the Spectrum Scale storage system
```
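Before the SSH configuration above can work, password-less SSH must already be set up for that user and host. The following is a minimal sketch, assuming the sample values `ubiquity` and `my_ss_host` from the snippet above and that the Spectrum Scale commands are on the remote user's PATH:

```bash
# Run as the account the Ubiquity service runs under (user/host values are the samples above).
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa   # create a key pair (skip if one already exists)
ssh-copy-id ubiquity@my_ss_host            # install the public key on the Spectrum Scale host
ssh ubiquity@my_ss_host mmlsfs all         # verify the CLI is reachable with no password prompt
```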

### Two Options to Install and Run

#### Option 1: systemd
This option assumes that the system you are using supports systemd (e.g., Ubuntu 14.04 does not have native systemd support, while Ubuntu 16.04 does).

1) Inside the ubiquity/scripts directory of the cloned repository, execute the following command:
```bash
./setup
```

This command copies the ubiquity binary to /usr/bin, and copies ubiquity-server.conf and ubiquity-server.env to /etc/ubiquity. It also installs the systemd unit file and enables the Ubiquity service using "systemctl enable".

2) Make appropriate changes to /etc/ubiquity/ubiquity-server.conf

3) Edit /etc/ubiquity/ubiquity-server.env to add or remove command-line options passed to the Ubiquity server

4) Once the above steps are done, you can start, stop, or restart the Ubiquity server using systemctl:
```bash
systemctl start/stop/restart ubiquity
```
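To check the result, standard systemd tooling can be used; this is a quick sketch, and the log-file path assumes the sample `logPath` shown earlier:

```bash
systemctl status ubiquity            # confirm the unit is active
journalctl -u ubiquity -f            # follow the service's journal output
tail -f /tmp/ubiquity/ubiquity.log   # or follow the log file under the configured logPath
```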

#### Option 2: Manual
```bash
./bin/ubiquity [-configFile <configFile>]
```
where:
* configFile: Configuration file to use (defaults to `./ubiquity-server.conf`)
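For example, to run the service in the foreground against the configuration installed by the systemd setup above (any readable configuration path works):

```bash
./bin/ubiquity -configFile /etc/ubiquity/ubiquity-server.conf
```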

### Scalability
Running the Ubiquity service on a single server will most likely provide sufficient performance. If it does not, the service can be run on multiple nodes, with load balancing achieved through an HTTP load balancer or a round-robin DNS service.

### Next Steps - Install a plugin for Docker or Kubernetes
To use Ubiquity, please install the appropriate storage-specific plugin ([docker](https://github.com/IBM/ubiquity-docker-plugin), [kubernetes](https://github.com/IBM/ubiquity-flexvolume)).

## Available Storage Systems

### IBM Spectrum Scale
With IBM Spectrum Scale, containers can have shared file system access across any number of hosts, from small clusters of a few hosts up to very large clusters with thousands of hosts.

The current plugin supports the following protocols:
* Native POSIX Client (backend=spectrum-scale)
* CES NFS (Scalable and Clustered NFS Exports) (backend=spectrum-scale-nfs)

**Note** that if the backend option is not specified to Docker as an opt parameter, or to Kubernetes in the yaml file, the backend defaults to the server-side default (`defaultBackend` in the Ubiquity configuration file).

Spectrum Scale supports the following options for creating a volume. Whether the native or NFS driver is used, the set of options is exactly the same. They are passed to Docker via the 'opt' option on the command line as a set of key-value pairs; a combined example is shown after the options list below.

Note that POSIX volumes are not accessible via NFS, but NFS volumes are accessible via POSIX. This is because NFS requires the additional step of exporting the dataset on the storage server. To make a POSIX volume accessible via NFS, simply create the volume using the 'spectrum-scale-nfs' backend using the same path or fileset name.


#### Supported Volume Types

The volume driver supports creation of two types of volumes in Spectrum Scale:

***1. Fileset Volume (Default)***

A Fileset Volume is a volume which maps to a fileset in Spectrum Scale. By default, this will create a dependent Spectrum Scale fileset, which supports quotas and other policies, but does not support snapshots. If snapshots are required, an independent fileset volume can be requested instead. Note that there are limits on the number of filesets that can be created; please see the Spectrum Scale docs for more info.

Usage: type=fileset

***2. Independent Fileset Volume***

Independent Fileset Volume is a volume which maps to an independent fileset, with its own inode space, in Spectrum Scale.

Usage: type=fileset, fileset-type=independent

***3. Lightweight Volume***

Lightweight Volume is a volume which maps to a sub-directory within an existing fileset in Spectrum Scale. The fileset could be a previously created 'Fileset Volume'. Lightweight volumes allow users to create unlimited numbers of volumes, but lack the ability to set quotas, perform individual volume snapshots, etc.

To use Lightweight volumes while still taking advantage of Spectrum Scale features such as encryption, simply create the Lightweight volume in a Spectrum Scale fileset that has the desired features enabled.

[**Note**: Support for Lightweight volume with NFS is experimental]

Usage: type=lightweight

#### Supported Volume Creation Options

**Features**
* Quotas (optional) - Fileset Volumes can have a max quota limit set. Quota support for filesets must be already enabled on the file system.
* Usage: quota=(numeric value)
* Docker usage example: --opt quota=100M
* Ownership (optional) - Specify the userid and groupid that should own the volume. Note that this only controls Linux permissions at this time; ACLs are not currently set (but could be set manually by the admin).
* Usage: uid=(userid), gid=(groupid)
* Docker usage example: --opt uid=1002 --opt gid=1002

**Type and Location**
* File System (optional) - Select a file system in which the volume will exist. By default the file system set in ubiquity-server.conf is used.
* Usage: filesystem=(name)
* Fileset - This option selects the fileset that will be used for the volume. This can be used to create a volume from an existing fileset, or choose the fileset in which a lightweight volume will be created.
* Usage: fileset=modelingData
* Directory (lightweight volumes only): This option sets the name of the directory to be created for a lightweight volume. It can also be used to create a lightweight volume from an existing directory. The directory can be a relative path starting at the root of the path at which the fileset is linked in the file system namespace.
* Usage: directory=dir1
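
As an illustration, the options above can be combined in `docker volume create` calls. This is only a sketch: the volume names are invented, the driver name `ubiquity` is an assumption (use whatever name the Ubiquity Docker plugin registers on your hosts), and the backend is selected with the `backend` opt as noted earlier.

```bash
# Fileset volume with a quota and explicit ownership
docker volume create -d ubiquity --name modelingData \
    --opt backend=spectrum-scale --opt type=fileset \
    --opt quota=100M --opt uid=1002 --opt gid=1002

# Lightweight volume as a sub-directory inside the fileset created above
docker volume create -d ubiquity --name modelingScratch \
    --opt backend=spectrum-scale --opt type=lightweight \
    --opt fileset=modelingData --opt directory=dir1

# The same fileset exposed over CES NFS, making its data NFS-accessible
docker volume create -d ubiquity --name modelingDataNFS \
    --opt backend=spectrum-scale-nfs --opt fileset=modelingData
```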



## Additional Considerations
### High-Availability
Ubiquity supports an Active-Passive model of availability. Currently, handling failures of the Ubiquity service must be done manually, although there are several possible options.

The Ubiquity service can be safely run on multiple nodes in an active-passive manner. Failover can then be achieved manually by switching the Ubiquity service hostname, or automatically through an HTTP load balancer such as HAProxy (which could be run in containers by K8s or Docker).

Moving forward, we will leverage Docker- or K8s-specific mechanisms to achieve high availability by running the Ubiquity service in containers or a pod.



### Ubiquity Service Access to IBM Spectrum Scale CLI
Currently there are 2 different ways for Ubiquity to manage volumes in IBM Spectrum Scale.
* Direct access - In this setup, Ubiquity will directly call the IBM Spectrum Scale CLI (e.g., 'mm' commands). This means that Ubiquity must be deployed on a node that can directly call the CLI.
* ssh - In this setup, Ubiquity uses ssh to call the IBM Spectrum Scale CLI that is deployed on another node. This avoids the need to run Ubiquity on a node that is part of the IBM Spectrum Scale cluster. For example, this would also allow Ubiquity to run in a container.

## Roadmap

* Support OpenStack Manila storage back-end
* Add explicit instructions on use of certificates to secure communication between plugins and Ubiquity service
* API for updating volumes
* Additional options to explore more features of Spectrum Scale, including use of the Spectrum Scale REST API.
* Containerization of Ubiquity for Docker and Kubernetes
* Kubernetes dynamic provisioning support
* Support for additional IBM storage systems
* Support for CloudFoundry

## Support

(TBD)



## Suggestions and Questions
For any questions, suggestions, or issues, please open an issue on GitHub.

21 changes: 21 additions & 0 deletions scripts/setup
@@ -0,0 +1,21 @@
#!/bin/bash

set -e
scripts=$(dirname $0)

mkdir -p /etc/ubiquity

cp $scripts/../bin/ubiquity /usr/bin/ubiquity

cp $scripts/../ubiquity-server.conf \
/etc/ubiquity/ubiquity-server.conf

cp $scripts/ubiquity-server.env /etc/ubiquity/ubiquity-server.env

dist=`lsb_release -a 2>/dev/null |grep "Distributor ID" |cut -d: -f2 |xargs`
if [ "$dist" == "Ubuntu" ]; then
cp $scripts/ubiquity.service /lib/systemd/system/
else
cp $scripts/ubiquity.service /usr/lib/systemd/system/
fi
systemctl enable ubiquity.service
10 changes: 10 additions & 0 deletions scripts/ubiquity-server.env
@@ -0,0 +1,10 @@
###
# Ubiquity server options

# Config file of ubiquity server
UBIQUITY_SERVER_CONFIG="--config /etc/ubiquity/ubiquity-server.conf"

# Add your own arguments

UBIQUITY_SERVER_ARGS=

16 changes: 16 additions & 0 deletions scripts/ubiquity.service
@@ -0,0 +1,16 @@
[Unit]
Description=ubiquity Service
Documentation=https://github.com/IBM/ubiquity
After=network.target

[Service]
Type=simple
User=ubiquity
EnvironmentFile=/etc/ubiquity/ubiquity-server.env
ExecStart=/usr/bin/ubiquity \
$UBIQUITY_SERVER_CONFIG \
$UBIQUITY_SERVER_ARGS
Restart=on-abort

[Install]
WantedBy=multi-user.target
2 changes: 1 addition & 1 deletion utils/utils.go
@@ -218,7 +218,7 @@ func SetupLogger(logPath string, loggerName string) (*log.Logger, *os.File) {
return nil, nil
}
log.SetOutput(logFile)
-logger := log.New(io.MultiWriter(logFile, os.Stdout), fmt.Sprintf("%s: ", loggerName), log.Lshortfile|log.LstdFlags)
+logger := log.New(io.MultiWriter(logFile), fmt.Sprintf("%s: ", loggerName), log.Lshortfile|log.LstdFlags)
return logger, logFile
}


