
Swift Installation and configuration guide

Set-up

  1. proxy-server installed on the controller node node01
  2. account, container, object servers installed on node02, node03, node04

Installation

Create a swift user that the Object Storage Service can use to authenticate with the Identity Service. Choose a password (replace $SWIFT_PASS with it) and specify an email address for the swift user. Use the service tenant and give the user the admin role:

$ keystone user-create --name=swift --pass=$SWIFT_PASS --email=swift@example.com
$ keystone user-role-add --user=swift --tenant=service --role=admin
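
You can confirm that the user exists and has the admin role on the service tenant (a quick check, assuming the same keystone CLI as above):

$ keystone user-list
$ keystone user-role-list --user=swift --tenant=service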

Create a service entry for the Object Storage Service:

$ keystone service-create --name=swift --type=object-store --description="OpenStack Object Storage"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Object Storage     |
|      id     | eede9296683e4b5ebfa13f5166375ef6 |
|     name    |              swift               |
|     type    |           object-store           |
+-------------+----------------------------------+

Specify an API endpoint for the Object Storage Service by using the returned service ID. When you specify an endpoint, you provide URLs for the public API, internal API, and admin API. In this guide, the controller host name is used for the internal and admin URLs; replace CONTROLLER_PUBLIC_IP in the public URL with the controller's public IP address:

$ keystone endpoint-create --service-id=$(keystone service-list | awk '/ object-store / {print $2}') --publicurl='http://CONTROLLER_PUBLIC_IP:8080/v1/AUTH_%(tenant_id)s' --internalurl='http://controller:8080/v1/AUTH_%(tenant_id)s' --adminurl=http://controller:8080
+-------------+---------------------------------------------------+
|   Property  |                       Value                       |
+-------------+---------------------------------------------------+
|   adminurl  |            http://controller:8080/                |
|      id     |          9e3ce428f82b40d38922f242c095982e         |
| internalurl | http://controller:8080/v1/AUTH_%(tenant_id)s      |
|  publicurl  | http://controller:8080/v1/AUTH_%(tenant_id)s      |
|    region   |                     regionOne                     |
|  service_id |          eede9296683e4b5ebfa13f5166375ef6         |
+-------------+---------------------------------------------------+
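
To verify that the endpoint was registered correctly, you can list it:

$ keystone endpoint-list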

Create the configuration directory on all nodes:

# mkdir -p /etc/swift

Create /etc/swift/swift.conf on all nodes:

[swift-hash]
# random unique string that can never change (DO NOT LOSE)
swift_hash_path_prefix = xrfuniounenqjnw
swift_hash_path_suffix = fLIbertYgibbitZ

[Note] The prefix and suffix values in /etc/swift/swift.conf should be set to random strings of text, used as salts when hashing to determine mappings in the ring. This file must be the same on every node in the cluster!
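
Any random-string generator will do for these values; one possible way to produce them (assuming openssl is installed):

# openssl rand -hex 10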

Next, set up your storage nodes and proxy node. This example uses the Identity Service for the common authentication piece.

Install and configure storage nodes

On the storage nodes (node02, node03 and node04), install the storage node packages:

# apt-get install swift swift-account swift-container swift-object xfsprogs

For each device on the node that you want to use for storage, set up the XFS volume (/dev/sdc is used as an example). Use a single partition per drive. For example, in a server with 12 disks you might use one or two disks for the operating system; do not touch those in this step. Partition each of the remaining 10 or 11 disks with a single partition, then format it with XFS.

# fdisk /dev/sdc
# mkfs.xfs /dev/sdc1
# echo "/dev/sdc1 /srv/node/sdc1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
# mkdir -p /srv/node/sdc1
# mount /srv/node/sdc1
# chown -R swift:swift /srv/node
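
As a quick sanity check, you can verify that the device is mounted and owned by the swift user:

# mount | grep /srv/node/sdc1
# ls -ld /srv/node/sdc1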

Create /etc/rsyncd.conf (replace $STORAGE_LOCAL_NET_IP with the IP address of the node):

uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = $STORAGE_LOCAL_NET_IP
 
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
 
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
 
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock

(Optional) If you want to separate rsync and replication traffic onto a dedicated replication network, set $STORAGE_REPLICATION_NET_IP instead of $STORAGE_LOCAL_NET_IP:

address = $STORAGE_REPLICATION_NET_IP

Edit the following line in /etc/default/rsync:

RSYNC_ENABLE=true

Start the rsync service:

# service rsync start

[Note] The rsync service requires no authentication, so run it on a local, private network.
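
To check that the three rsync modules are exported, you can list them from the storage node itself; the command should print account, container and object:

# rsync rsync://$STORAGE_LOCAL_NET_IP/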

Create the swift recon cache directory and set its permissions:

# mkdir -p /var/swift/recon
# chown -R swift:swift /var/swift/recon

Install and configure the proxy node

The proxy server takes each request, looks up the location of the account, container, or object, and routes the request accordingly. The proxy server also handles API requests. You enable account management by configuring it in the /etc/swift/proxy-server.conf file.

[Note] The Object Storage processes run under a separate user and group, set by configuration options, and referred to as swift:swift. The default user is swift.

On the controller node, install swift-proxy service:

# apt-get install swift swift-proxy memcached python-keystoneclient python-swiftclient python-webob

Modify memcached to listen on the default interface on a local, non-public network. Edit this line in the /etc/memcached.conf file:

-l 127.0.0.1

Change it to:

-l $PROXY_LOCAL_NET_IP

Restart the memcached service:

# service memcached restart

Note: if you have modified /etc/memcached.conf and the dashboard is installed on the same node, you must update the value of CACHES in /etc/openstack-dashboard/local_settings.py to match the new settings in /etc/memcached.conf (see Step 7 in Controller and Network node installation).
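
For reference, the relevant block in /etc/openstack-dashboard/local_settings.py would look similar to the following (the exact backend name depends on your dashboard version):

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '$PROXY_LOCAL_NET_IP:11211',
    }
}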

Create /etc/swift/proxy-server.conf (replace $SWIFT_PASS with a suitable password):

[DEFAULT]
bind_port = 8080
user = swift
 
[pipeline:main]
pipeline = healthcheck cache authtoken keystoneauth proxy-server
 
[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true
 
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = Member,admin,swiftoperator
 
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
 
# Delaying the auth decision is required to support token-less
# usage for anonymous referrers ('.r:*').
delay_auth_decision = true
 
# auth_* settings refer to the Keystone server
auth_protocol = http
auth_host = controller
auth_port = 35357
 
# the service tenant and swift username and password created in Keystone
admin_tenant_name = service
admin_user = swift
admin_password = $SWIFT_PASS
 
[filter:cache]
use = egg:swift#memcache
memcache_servers = $PROXY_LOCAL_NET_IP:11211
 
[filter:catch_errors]
use = egg:swift#catch_errors
 
[filter:healthcheck]
use = egg:swift#healthcheck

[Note] If you run multiple memcache servers, put the multiple IP:port listings in the [filter:cache] section of the /etc/swift/proxy-server.conf file:

memcache_servers = 10.1.2.3:11211,10.1.2.4:11211

Only the proxy server uses memcache.

Create the account, container, and object rings. The builder command creates a builder file with a few parameters. The first value, 18, is the partition power: the ring is sized to 2^18 (262,144) partitions. Choose this value based on the total amount of storage you expect your entire ring to use. The second value, 3, is the number of replicas of each object, and the last value, 1, is the minimum number of hours before a partition can be moved more than once.

# cd /etc/swift
# swift-ring-builder account.builder create 18 3 1
# swift-ring-builder container.builder create 18 3 1
# swift-ring-builder object.builder create 18 3 1
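
As a rough guide from the Swift documentation, the partition power should be the base-2 logarithm of the maximum expected number of drives multiplied by 100, rounded up; a partition power of 18 therefore accommodates roughly 2,600 drives:

# python -c 'import math; print int(math.ceil(math.log(2600 * 100, 2)))'
18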

For every storage device on each node add entries to each ring:

# swift-ring-builder account.builder add z${ZONE}-$STORAGE_LOCAL_NET_IP:6002[R$STORAGE_REPLICATION_NET_IP:6005]/DEVICE 100
# swift-ring-builder container.builder add z${ZONE}-$STORAGE_LOCAL_NET_IP:6001[R$STORAGE_REPLICATION_NET_IP:6004]/DEVICE 100
# swift-ring-builder object.builder add z${ZONE}-$STORAGE_LOCAL_NET_IP:6000[R$STORAGE_REPLICATION_NET_IP:6003]/DEVICE 100

[Note] Omit the optional [R$STORAGE_REPLICATION_NET_IP:port] part if you do not want to use a dedicated network for replication.

In our case, we want to deploy the account, container and object servers on the three hosts node02, node03 and node04. Therefore we run the following commands (assuming that the device to be added to the cluster is /dev/sdc1 on all three nodes):

# swift-ring-builder account.builder add z1-10.10.10.12:6002R10.10.20.12:6005/sdc1 100
# swift-ring-builder container.builder add z1-10.10.10.12:6001R10.10.20.12:6004/sdc1 100
# swift-ring-builder object.builder add z1-10.10.10.12:6000R10.10.20.12:6003/sdc1 100

# swift-ring-builder account.builder add z1-10.10.10.13:6002R10.10.20.13:6005/sdc1 100
# swift-ring-builder container.builder add z1-10.10.10.13:6001R10.10.20.13:6004/sdc1 100
# swift-ring-builder object.builder add z1-10.10.10.13:6000R10.10.20.13:6003/sdc1 100

# swift-ring-builder account.builder add z1-10.10.10.14:6002R10.10.20.14:6005/sdc1 100
# swift-ring-builder container.builder add z1-10.10.10.14:6001R10.10.20.14:6004/sdc1 100
# swift-ring-builder object.builder add z1-10.10.10.14:6000R10.10.20.14:6003/sdc1 100

Verify the ring contents for each ring:

# swift-ring-builder account.builder
# swift-ring-builder container.builder
# swift-ring-builder object.builder

Rebalance the rings:

# swift-ring-builder account.builder rebalance
# swift-ring-builder container.builder rebalance
# swift-ring-builder object.builder rebalance

[Note] Rebalancing rings can take some time.

Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to each of the Proxy and Storage nodes in /etc/swift.
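
For example, from the controller node (this assumes root SSH access from the controller to the storage nodes):

# for node in node02 node03 node04; do scp /etc/swift/*.ring.gz $node:/etc/swift/; done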

Make sure the swift user owns all configuration files:

# chown -R swift:swift /etc/swift

Restart the Proxy service:

# service swift-proxy restart
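
Because the healthcheck filter is in the proxy pipeline configured above, you can verify that the proxy is up; it should answer with 200 OK:

# curl -i http://controller:8080/healthcheck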

Start services on the storage nodes

Now that the ring files are on each storage node, you can start the services. On each storage node, run the following command:

# for service in swift-object swift-object-replicator swift-object-updater swift-object-auditor swift-container swift-container-replicator swift-container-updater swift-container-auditor swift-account swift-account-replicator swift-account-reaper swift-account-auditor; do service $service start; done

[Note] To start all swift services at once, run the command:

# swift-init all start
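
You can then check that all services are actually running:

# swift-init all status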

Usage

Load credentials:

# source admin-openrc.sh

Create a container:

# swift post testcontainer

Upload a file (use --object-name to set the object's name in the container):

# swift upload --object-name testfile testcontainer test.txt 
testfile

Display information for the account, container, or object:

# swift stat
       Account: AUTH_5d2a076b4cbd463a95408a461772612e
    Containers: 1
       Objects: 1
         Bytes: 166
 Accept-Ranges: bytes
   X-Timestamp: 1403474802.26481
    X-Trans-Id: txde503d323f8d467ca860b-0053a99687
  Content-Type: text/plain; charset=utf-8
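
The same command accepts a container or object name if you want information on a specific item:

# swift stat testcontainer
# swift stat testcontainer testfile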

List the containers, then the objects in a container:

# swift list
testcontainer
# swift list testcontainer
testfile
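
Download the object back (it is saved to a local file named testfile):

# swift download testcontainer testfile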

The same operations can be performed through the dashboard, as shown in figure swift-dashboard.