Use the following command to build the server and client:
make build
If the build is successful, cfs-server and cfs-client can be found in the build/bin directory.
Start master
nohup ./cfs-server -c master.json &
Sample master.json is shown as follows,
{
"role": "master",
"ip": "192.168.31.173",
"port": "80",
"prof":"10088",
"id":"1",
"peers": "1:192.168.31.173:80,2:192.168.31.141:80,3:192.168.30.200:80",
"retainLogs":"20000",
"logDir": "/export/Logs/master",
"logLevel":"info",
"walDir":"/export/Data/master/raft",
"storeDir":"/export/Data/master/rocksdbstore",
"consulAddr": "http://consul.prometheus-cfs.local",
"exporterPort": 9510,
"clusterName":"cfs",
"metaNodeReservedMem": "134217728"
}
For detailed explanations of master.json, please refer to :doc:`user-guide/master`.
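The metaNodeReservedMem value in master.json is specified in bytes. As a quick sanity check, the sample value works out to 128 MiB:

```shell
# metaNodeReservedMem in the sample master.json is given in bytes.
# 134217728 bytes / 1024 / 1024 = 128 MiB:
echo $((134217728 / 1024 / 1024))  # prints 128
```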
Start metanode
nohup ./cfs-server -c meta.json &
Sample meta.json is shown as follows,
{
"role": "metanode",
"listen": "9021",
"prof": "9092",
"logLevel": "info",
"metadataDir": "/export/Data/metanode",
"logDir": "/export/Logs/metanode",
"raftDir": "/export/Data/metanode/raft",
"raftHeartbeatPort": "9093",
"raftReplicaPort": "9094",
"totalMem": "17179869184",
"consulAddr": "http://consul.prometheus-cfs.local",
"exporterPort": 9511,
"masterAddr": [
"192.168.31.173:80",
"192.168.31.141:80",
"192.168.30.200:80"
]
}
For detailed explanations of meta.json, please refer to :doc:`user-guide/metanode`.
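The totalMem value in meta.json, which bounds the metanode's memory usage, is also specified in bytes. The sample value corresponds to 16 GiB:

```shell
# totalMem in the sample meta.json is given in bytes.
# 17179869184 bytes / 1024 / 1024 / 1024 = 16 GiB:
echo $((17179869184 / 1024 / 1024 / 1024))  # prints 16
```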
Prepare data directories
Recommendation: using independent disks achieves better performance.
Disk preparation
1.1 Check available disks
fdisk -l
1.2 Build local Linux file system on the selected devices
mkfs.xfs -f /dev/sdx
1.3 Make mount point
mkdir /data0
1.4 Mount the device on mount point
mount /dev/sdx /data0
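The mount from step 1.4 does not survive a reboot. A sketch of persisting it with an fstab entry follows; /dev/sdx and /data0 are the example device and mount point from the steps above, and a scratch file stands in for the real /etc/fstab:

```shell
# fstab entry for the example device and mount point from steps 1.1-1.4.
FSTAB_LINE="/dev/sdx /data0 xfs defaults 0 0"
# In production, append this line to /etc/fstab as root;
# a scratch file is used here for illustration.
echo "$FSTAB_LINE" >> ./fstab.demo
grep -q "^/dev/sdx /data0 xfs" ./fstab.demo && echo "entry present"
```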
Start datanode
nohup ./cfs-server -c datanode.json &
Sample datanode.json is shown as follows,
{
"role": "datanode",
"port": "6000",
"prof": "6001",
"logDir": "/export/Logs/datanode",
"raftDir": "/export/Data/datanode/raft",
"logLevel": "info",
"raftHeartbeat": "9095",
"raftReplica": "9096",
"consulAddr": "http://consul.prometheus-cfs.local",
"exporterPort": 9512,
"masterAddr": [
"192.168.31.173:80",
"192.168.31.141:80",
"192.168.30.200:80"
],
"disks": [
"/data0:21474836480",
"/data1:21474836480"
]
}
For detailed explanations of datanode.json, please refer to :doc:`user-guide/datanode`.
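Each entry in the "disks" array has the form "mount-path:reserved-bytes", where the second field is the disk space the datanode keeps reserved. The sample value reserves 20 GiB per disk:

```shell
# Each "disks" entry is "<mount-path>:<reserved-bytes>".
# 21474836480 bytes / 1024 / 1024 / 1024 = 20 GiB reserved per disk:
echo $((21474836480 / 1024 / 1024 / 1024))  # prints 20
```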
Start objectnode
nohup ./cfs-server -c objectnode.json &
Sample objectnode.json is shown as follows,
{
"role": "objectnode",
"domains": [
"object.cfs.local"
],
"listen": 80,
"masters": [
"master1.cfs.local:80",
"master2.cfs.local:80",
"master3.cfs.local:80"
],
"logLevel": "info",
"logDir": "/export/Logs/objectnode",
"region": "cn_bj"
}
For detailed explanations of objectnode.json, please refer to :doc:`user-guide/objectnode`.
By default, only a few data partitions are allocated upon volume creation, and more will be added dynamically according to actual usage.
curl -v "http://192.168.31.173/admin/createVol?name=test&capacity=10000&owner=cfs"
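Some later volume admin operations (for example, deleting the volume) require an authKey parameter. Assuming the usual ChubaoFS convention, this is the MD5 hex digest of the owner string used at creation:

```shell
# authKey for volume admin APIs (assumption: the 32-hex-character
# MD5 of the "owner" value used at creation, here "cfs").
AUTHKEY=$(printf '%s' "cfs" | md5sum | cut -d' ' -f1)
echo "$AUTHKEY"
```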
For performance evaluation, extra data partitions should be pre-created according to the number of data nodes and disks to reach maximum performance.
curl -v "http://192.168.31.173/dataPartition/create?name=test&count=120"
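The count can be sized from the cluster layout. A hypothetical sizing sketch follows; the per-disk target is an assumption to tune for your own workload:

```shell
# Hypothetical sizing: pre-create a fixed number of partitions per disk.
DATANODES=3        # datanodes in the example cluster
DISKS_PER_NODE=2   # disks per datanode, matching the sample "disks" array
PER_DISK=20        # target partitions per disk (assumption; tune as needed)
echo $((DATANODES * DISKS_PER_NODE * PER_DISK))  # prints 120
```

With these example values the result matches the count=120 used in the curl command above.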
Run
modprobe fuse
to insert the FUSE kernel module. Run
yum install -y fuse
to install libfuse. Run
cfs-client -c fuse.json
to start a client daemon. Sample fuse.json is shown as follows,
{
"mountPoint": "/mnt/fuse",
"volName": "test",
"owner": "cfs",
"masterAddr": "192.168.31.173:80,192.168.31.141:80,192.168.30.200:80",
"logDir": "/export/Logs/client",
"profPort": "10094",
"logLevel": "info"
}
For detailed explanations of fuse.json, please refer to :doc:`user-guide/client`.
Note that an end user can start more than one client on a single machine, as long as the mount points are different.
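After the client starts, a quick way to confirm the mount succeeded is to check /proc/mounts for the mountPoint from the fuse.json sample (on a machine without a running client this prints "not mounted"):

```shell
# Check whether the fuse.json mountPoint is an active mount.
MOUNTPOINT="/mnt/fuse"
if grep -qs " ${MOUNTPOINT} " /proc/mounts; then
  echo "mounted"
else
  echo "not mounted"
fi
```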
- freeze the cluster
curl -v "http://192.168.31.173/cluster/freeze?enable=true"
- upgrade each module
- unfreeze the cluster
curl -v "http://192.168.31.173/cluster/freeze?enable=false"