Bluzelle brings together the sharing economy and the token economy: people rent out their spare computer storage to earn tokens, while dApp developers pay tokens to have their data stored and managed efficiently.
If you want to deploy your swarm immediately, you can use our docker-compose quickstart instructions:
- Set up a local docker-compose swarm with the instructions found here
- Run docker-compose up in the same directory as your docker-compose.yml. This command will initialize the swarm within your local docker-machine. Full docker-compose documentation can be found here
- Nodes are available on localhost ports 51010-51012
- Connect a test websocket client
- Create a node server application using our node.js library
- Press CTRL-C to terminate the docker-compose swarm
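Once the quickstart swarm is running, you can check that the nodes are actually listening on ports 51010-51012. The helper below is an illustrative sketch (not part of the Bluzelle tooling) that assumes python3 is available:

```shell
# probe_port: report whether anything is listening on a local TCP port.
# Illustrative helper, assumes python3; not part of the Bluzelle tooling.
probe_port() {
  python3 -c 'import socket, sys
s = socket.socket()
s.settimeout(2)
try:
    s.connect(("127.0.0.1", int(sys.argv[1])))
    print("open")
except OSError:
    print("closed")' "$1"
}

# Check the three quickstart nodes:
for p in 51010 51011 51012; do echo "$p: $(probe_port "$p")"; done
```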
Boost
$ export BOOST_VERSION="1.68.0"
$ export BOOST_INSTALL_DIR="$HOME/myboost"
$ mkdir -p ~/myboost
$ toolchain/install-boost.sh
This will result in a custom Boost install at ~/myboost/1_68_0/ that will not collide with your system's Boost.
Other dependencies (Protobuf, CMake)
$ brew update && brew install protobuf && brew install snappy && brew install lz4 && brew upgrade cmake
ccache (Optional)
If available, cmake will attempt to use ccache (https://ccache.samba.org) to drastically speed up compilation.
$ brew install ccache
Boost
Open up a console and install the compatible version of Boost:
$ export BOOST_VERSION="1.68.0"
$ export BOOST_INSTALL_DIR="$HOME/myboost"
$ mkdir -p ~/myboost
$ toolchain/install-boost.sh
This will result in a custom Boost install at ~/myboost/1_68_0/ that will not overwrite your system's Boost.
CMake
$ mkdir -p ~/mycmake
$ curl -L http://cmake.org/files/v3.11/cmake-3.11.0-Linux-x86_64.tar.gz | tar -xz -C ~/mycmake --strip-components=1
Again, this will result in a custom cmake install into ~/mycmake/ and will not overwrite your system's cmake.
Protobuf (Ver. 3 or greater) etc.
$ sudo apt-get install pkg-config protobuf-compiler libprotobuf-dev libsnappy-dev
ccache (Optional)
If available, cmake will attempt to use ccache (https://ccache.samba.org) to drastically speed up compilation.
$ sudo apt-get install ccache
Ensure that you set your cmake args to pass in:
-DBOOST_ROOT:PATHNAME=$HOME/myboost/1_68_0/
The project root can be directly imported into CLion.
Here are the steps to build the Daemon and unit test application from the command line. With the system cmake (macOS):
$ mkdir build
$ cd build
$ cmake -DBOOST_ROOT:PATHNAME=$HOME/myboost/1_68_0/ ..
$ sudo make install
With the custom cmake install (Linux):
$ mkdir build
$ cd build
$ ~/mycmake/bin/cmake -DBOOST_ROOT:PATHNAME=$HOME/myboost/1_68_0/ ..
$ sudo make install
The Bluzelle daemon is configured by setting the properties of a JSON configuration file provided by the user. This file is usually called bluzelle.json and resides in the current working directory. To specify a different configuration file, execute the daemon with the -c command line argument:
$ swarm -c peer0.json
The configuration file is a JSON format file, as seen in the following example:
{
  "bootstrap_file": "./peers.json",
  "ethereum": "0xddbd<...>121a",
  "ethereum_io_api_token": "53IW57FSZSZS3QXJUEBYT8F4YZ9IZFXBPQ",
  "listener_address": "127.0.0.1",
  "listener_port": 49152,
  "http_port": 8080,
  "log_to_stdout": true,
  "uuid": "d6707510-8ac6-43c1-b9a5-160cf54c99f5",
  "max_storage": "2GB",
  "logfile_dir": "logs/",
  "logfile_rotation_size": "64K",
  "logfile_max_size": "640K",
  "debug_logging": false,
  "peer_validation_enabled": false,
  "signed_key": "LjMrLq8pw3 <...> +QbThXaQ="
}
where the properties are:
- "bootstrap_file" - the path to a file containing the list of peers in the swarm that this node will participate in. See below.
- "debug_logging" (optional) - set this value to true to include debug-level log messages in the logs
- "ethereum" - your Ethereum blockchain address, used to pay for transactions.
- "ethereum_io_api_token" - identifies the SwarmDB daemon to the Etherscan Developer API (see https://etherscan.io/apis). Use the given value for now; this property may be moved out of the config file in the future.
- "listener_address" - the IP address that SwarmDB will use
- "listener_port" - the socket port on which SwarmDB will listen for protobuf and web socket requests.
- "http_port" - the listen port on which an HTTP API is exposed for blockchain integration
- "log_to_stdout" (optional) - directs SwarmDB to log output to stdout when true.
- "logfile_dir" (optional; if running multiple nodes on the same machine, the log dir MUST be unique for each node) - location of log files (default: logs/)
- "logfile_max_size" (optional) - approximate maximum combined size of the logs before deletion occurs (default: 512K)
- "logfile_rotation_size" (optional) - approximate size a log file must reach before rotation (default: 64K)
- "max_storage" (optional) - the approximate maximum limit for the storage that SwarmDB will use in the current instance (default: 2G)
- "uuid" - the universally unique identifier that this instance of SwarmDB will use to uniquely identify itself.
- "peer_validation_enabled" (optional) - set this to true to enable blacklisting and uuid signature verification
- "signed_key" (required if peer_validation_enabled is true) - a key generated from the node's UUID and the Bluzelle private key. If peer_validation_enabled is set to true, the node owner must provide the node's uuid to a Bluzelle representative, who will generate the signed_key. The key must be added to the config file as a single line of text with no carriage returns or line feeds.
All size entries use the same notation as max_storage: B, K, M, G & T, or no suffix (bytes).
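For reference, the size notation can be converted to bytes as sketched below. This is an illustrative helper assuming a POSIX awk, not part of SwarmDB:

```shell
# to_bytes: convert a SwarmDB-style size string ("64K", "2GB", "512") to bytes.
# Illustrative helper assuming a POSIX awk; not part of SwarmDB itself.
to_bytes() {
  echo "$1" | awk '
    /[Bb]$/ { sub(/[Bb]$/, "") }   # a trailing B just means bytes
    { mult = 1 }                   # no suffix: plain bytes
    /[Kk]$/ { mult = 1024 }
    /[Mm]$/ { mult = 1024 ^ 2 }
    /[Gg]$/ { mult = 1024 ^ 3 }
    /[Tt]$/ { mult = 1024 ^ 4 }
    { sub(/[KkMmGgTt]$/, ""); printf "%.0f\n", $0 * mult }'
}
```

For example, to_bytes 64K prints 65536 and to_bytes 2GB prints 2147483648.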
The bootstrap file, identified in the config file by the "bootstrap_file" parameter (see above), provides a list of other nodes in the swarm that the local instance of the SwarmDB daemon can communicate with. Note that this may not represent the current quorum, so if it is sufficiently out of date, you may need to find the current list of nodes.
The bootstrap file format is a JSON array containing JSON objects describing nodes, as seen in the following example:
[
  {
    "host": "127.0.0.1",
    "http_port": 9082,
    "name": "peer0",
    "port": 49152,
    "uuid": "d6707510-8ac6-43c1-b9a5-160cf54c99f5"
  },
  {
    "host": "127.0.0.1",
    "http_port": 9083,
    "name": "peer1",
    "port": 49153,
    "uuid": "5c63dfdc-e251-4b9c-8c36-404972c9b4ec"
  },
  ...
  {
    "host": "127.0.0.1",
    "http_port": 9084,
    "name": "peer2",
    "port": 49154,
    "uuid": "ce4bfdc-63c7-5b9d-1c37-567978e9b893a"
  }
]
where the Peer object parameters are (ALL PARAMETERS MUST MATCH THE PEER CONFIGURATION):
- "host" - the IP address of the external node
- "http_port" - the HTTP port on which the external node listens for HTTP client requests.
- "name" - the human-readable name of the external node
- "port" - the socket port on which the external node listens for protobuf and web socket requests (listener_port in the config file)
- "uuid" - the universally unique identifier that the external node uses to uniquely identify itself. This is required to be unique per node and consistent between the peer list and the config.
Please ensure that a JSON object representing the local node is also included in the array of peers.
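Since duplicate uuids in the peer list violate the uniqueness requirement above, a quick sanity check before deploying can help. check_peer_uuids below is a hypothetical helper (assumes python3 is available), not part of SwarmDB:

```shell
# check_peer_uuids: verify that every peer uuid in a bootstrap file is unique.
# Hypothetical helper (assumes python3 is available); not part of SwarmDB.
check_peer_uuids() {
  python3 - "$1" <<'EOF'
import json, sys
peers = json.load(open(sys.argv[1]))
uuids = [p["uuid"] for p in peers]
dupes = {u for u in uuids if uuids.count(u) > 1}
print("duplicates: " + ", ".join(sorted(dupes)) if dupes else "all uuids unique")
sys.exit(1 if dupes else 0)
EOF
}

# usage: check_peer_uuids ./peers.json
```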
- Create each of the JSON files below in swarmDB/build/output/, where the swarm executable resides (bluzelle.json, bluzelle2.json, bluzelle3.json, peers.json).
- Create an account with Etherscan: https://etherscan.io/register
- Create an Etherscan API KEY by clicking Developers -> API-KEYs.
- Add your Etherscan API KEY Token to the configuration files.
- Modify the listener_address to use your local interface IP:
$ ifconfig en1
en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
    inet6 fe80::1837:c97f:df86:c36f%en1 prefixlen 64 secured scopeid 0xa
    inet 192.168.0.34 netmask 0xffffff00 broadcast 192.168.0.255
    nd6 options=201<PERFORMNUD,DAD>
    media: autoselect
    status: active
In the above case, the IP address of the local interface is 192.168.0.34. If you do not see inet <ipaddress>, run ifconfig without arguments and comb through manually to find your local IP address.
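If you prefer not to comb through the output manually, the first non-loopback IPv4 address can be pulled out with awk. This is an illustrative sketch; the interface name (en1) and the exact ifconfig output format vary by OS:

```shell
# local_ip: print the IPv4 address of the given interface.
# Sketch for BSD/macOS-style ifconfig output ("inet 192.168.0.34 netmask ...");
# older Linux net-tools print "inet addr:192.168.0.34" and need a different match.
local_ip() {
  ifconfig "$1" | awk '/inet / && $2 != "127.0.0.1" { print $2; exit }'
}

# usage: local_ip en1
```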
- Modify the ethereum address to be an Ethereum mainnet address that contains tokens, or use the sample address provided below.
Requirements:
- You must provide a valid Ethereum address with a balance > 0 and an Etherscan API key.
- A unique ID (uuid) must be specified for each daemon.
- Listener address and port must match peer list.
Configuration files for Daemon:
// debug_logging is an optional setting (default is false)
// bluzelle.json
{
  "listener_address" : "127.0.0.1",
  "listener_port" : 50000,
  "ethereum" : "0xddbd2b932c763ba5b1b7ae3b362eac3e8d40121a",
  "ethereum_io_api_token" : "**********************************",
  "bootstrap_file" : "./peers.json",
  "uuid" : "60ba0788-9992-4cdb-b1f7-9f68eef52ab9",
  "debug_logging" : true,
  "log_to_stdout" : false
}
// bluzelle2.json
{
  "listener_address" : "127.0.0.1",
  "listener_port" : 50001,
  "ethereum" : "0xddbd2b932c763ba5b1b7ae3b362eac3e8d40121a",
  "ethereum_io_api_token" : "**********************************",
  "bootstrap_file" : "./peers.json",
  "uuid" : "c7044c76-135b-452d-858a-f789d82c7eb7",
  "debug_logging" : true,
  "log_to_stdout" : true
}
// bluzelle3.json
{
  "listener_address" : "127.0.0.1",
  "listener_port" : 50002,
  "ethereum" : "0xddbd2b932c763ba5b1b7ae3b362eac3e8d40121a",
  "ethereum_io_api_token" : "**********************************",
  "bootstrap_file" : "./peers.json",
  "uuid" : "3726ec5f-72b4-4ce6-9e60-f5c47f619a41",
  "debug_logging" : true,
  "log_to_stdout" : true
}
// peers.json
[
{"name": "peer1", "host": "127.0.0.1", "port": 50000, "http_port" : 8080, "uuid" : "60ba0788-9992-4cdb-b1f7-9f68eef52ab9"},
{"name": "peer2", "host": "127.0.0.1", "port": 50001, "http_port" : 8081, "uuid" : "c7044c76-135b-452d-858a-f789d82c7eb7"},
{"name": "peer3", "host": "127.0.0.1", "port": 50002, "http_port" : 8082, "uuid" : "3726ec5f-72b4-4ce6-9e60-f5c47f619a41"}
]
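Since the three daemon configuration files differ only in listener_port, uuid, and log_to_stdout, they can be generated with a short loop. This is an illustrative sketch, not part of swarmDB; substitute your real Etherscan API token for the asterisks, and adjust log_to_stdout per node if desired:

```shell
# Generate bluzelle.json, bluzelle2.json, and bluzelle3.json from one template.
# Illustrative sketch: substitute your real Etherscan API token for the
# asterisks; log_to_stdout is set to true for all three nodes here.
port=50000
i=1
for uuid in 60ba0788-9992-4cdb-b1f7-9f68eef52ab9 \
            c7044c76-135b-452d-858a-f789d82c7eb7 \
            3726ec5f-72b4-4ce6-9e60-f5c47f619a41; do
  if [ "$i" -eq 1 ]; then file=bluzelle.json; else file="bluzelle$i.json"; fi
  cat > "$file" <<EOF
{
  "listener_address" : "127.0.0.1",
  "listener_port" : $port,
  "ethereum" : "0xddbd2b932c763ba5b1b7ae3b362eac3e8d40121a",
  "ethereum_io_api_token" : "**********************************",
  "bootstrap_file" : "./peers.json",
  "uuid" : "$uuid",
  "debug_logging" : true,
  "log_to_stdout" : true
}
EOF
  port=$((port + 1))
  i=$((i + 1))
done
```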
- Deploy your swarm of Daemons. From the swarmDB/build/output/ directory, run:
$ ./swarm -c bluzelle.json
$ ./swarm -c bluzelle2.json
$ ./swarm -c bluzelle3.json
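Before launching, each configuration file can be sanity-checked for valid JSON and the required keys. check_config below is a hypothetical pre-flight helper (assumes python3 is available), not part of swarmDB:

```shell
# check_config: verify a daemon config file parses as JSON and contains the
# required keys before launching. Hypothetical helper assuming python3.
check_config() {
  python3 - "$1" <<'EOF'
import json, sys
cfg = json.load(open(sys.argv[1]))
required = ["bootstrap_file", "ethereum", "ethereum_io_api_token",
            "listener_address", "listener_port", "uuid"]
missing = [k for k in required if k not in cfg]
if missing:
    sys.exit("missing keys: " + ", ".join(missing))
print("ok")
EOF
}

# usage: check_config bluzelle.json
```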
$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
$ brew install node
$ brew install yarn
$ sudo apt-get install npm
$ sudo npm install npm@latest -g
$ curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
$ echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
$ sudo apt-get update && sudo apt-get install yarn
Script to clone the bluzelle-js repository and copy template configuration files, or run tests with your configuration files:
$ qa/integration-tests setup   # Sets up template configuration files
$ qa/integration-tests         # Runs tests with the configuration files you've created
$ cd scripts
Follow the instructions in readme.md
$ ./crud -p -n localhost:50000 status
Client: crud-script-0
Sending:
sender: "crud-script-0"
status_request: ""
------------------------------------------------------------
Response:
swarm_version: "0.3.1443"
swarm_git_commit: "0.3.1096-41-g91cef89"
uptime: "1 days, 17 hours, 29 minutes"
module_status_json: ...
pbft_enabled: true
Response:
{
  "module" :
  [
    {
      "name" : "pbft",
      "status" :
      {
        "is_primary" : false,
        "latest_checkpoint" :
        {
          "hash" : "",
          "sequence_number" : 3800
        },
        "latest_stable_checkpoint" :
        {
          "hash" : "",
          "sequence_number" : 3800
        },
        "next_issued_sequence_number" : 1,
        "outstanding_operations_count" : 98,
        "peer_index" :
        [
          {
            "host" : "127.0.0.1",
            "name" : "node_0",
            "port" : 50000,
            "uuid" : "MFYwEAYHKoZIzj0CAQYFK4EEAAoDQgAE/HIPqL97zXbPN8CW609Dddu4vSKx/xnS1sle0FTgyzaDil1UmmQkrlTsQQqpU7N/kVMbAY+/la3Rawfw6VjVpA=="
          },
          {
            "host" : "127.0.0.1",
            "name" : "node_1",
            "port" : 50001,
            "uuid" : "MFYwEAYHKoZIzj0CAQYFK4EEAAoDQgAELUJ3AivScRn6sfBgBsBi3I18mpOC5NZ552ma0QTFSHVdPGj98OBMhxMkyKRI6UhAeuUTDf/mCFM5EqsSRelSQw=="
          },
          {
            "host" : "127.0.0.1",
            "name" : "node_2",
            "port" : 50002,
            "uuid" : "MFYwEAYHKoZIzj0CAQYFK4EEAAoDQgAEg+lS+GZNEOqhftj041jCjLabPrOxkkpTHSWgf6RNjyGKenwlsdYF9Xg1UH1FZCpNVkHhCLi2PZGk6EYMQDXqUg=="
          }
        ],
        "primary" :
        {
          "host" : "127.0.0.1",
          "host_port" : 50001,
          "name" : "node_1",
          "uuid" : "MFYwEAYHKoZIzj0CAQYFK4EEAAoDQgAELUJ3AivScRn6sfBgBsBi3I18mpOC5NZ552ma0QTFSHVdPGj98OBMhxMkyKRI6UhAeuUTDf/mCFM5EqsSRelSQw=="
        },
        "unstable_checkpoints_count" : 0,
        "view" : 1
      }
    }
  ]
}
------------------------------------------------------------
$ ./crud -p -n localhost:50000 create-db -u myuuid
Client: crud-script-0
------------------------------------------------------------
Response:
header {
  db_uuid: "myuuid"
  nonce: 2998754133578549919
}
------------------------------------------------------------
$ ./crud -p -n localhost:50000 create -u myuuid -k mykey -v myvalue
Client: crud-script-0
------------------------------------------------------------
Response:
header {
  db_uuid: "myuuid"
  nonce: 9167923913779064632
}
------------------------------------------------------------
$ ./crud -p -n localhost:50000 read -u myuuid -k mykey
Client: crud-script-0
------------------------------------------------------------
Response:
header {
  db_uuid: "myuuid"
  nonce: 1298794800698891064
}
read {
  key: "mykey"
  value: "myvalue"
}
------------------------------------------------------------
$ ./crud -p -n localhost:50000 update -u myuuid -k mykey -v mynewvalue
Client: crud-script-0
------------------------------------------------------------
Response:
header {
  db_uuid: "myuuid"
  nonce: 9006453024945657757
}
------------------------------------------------------------
$ ./crud -p -n localhost:50000 delete -u myuuid -k mykey
Client: crud-script-0
------------------------------------------------------------
Response:
header {
  db_uuid: "myuuid"
  nonce: 7190311901863172254
}
------------------------------------------------------------
$ ./crud -p -n localhost:50000 subscribe -u myuuid -k mykey
Client: crud-script-0
------------------------------------------------------------
Response:
header {
  db_uuid: "myuuid"
  nonce: 8777225851310409007
}
------------------------------------------------------------
Waiting....
Response:
header {
  db_uuid: "myuuid"
  nonce: 8777225851310409007
}
subscription_update {
  key: "mykey"
  value: "myvalue"
}
------------------------------------------------------------
Waiting....
$ ./crud -p -n localhost:50000 delete-db -u myuuid
Client: crud-script-0
------------------------------------------------------------
Response:
header {
  db_uuid: "myuuid"
  nonce: 1540670102065057350
}
------------------------------------------------------------
Dynamically adding and removing peers is not supported in this release. This functionality will be available in a subsequent version of swarmDB.
$ ./crud --help
usage: crud [-h] [-p] [-i ID] -n NODE
{status,create-db,delete-db,has-db,writers,add-writer,remove-writer,create,read,update,delete,has,keys,size,subscribe}
...
crud
positional arguments:
{status,create-db,delete-db,has-db,writers,add-writer,remove-writer,create,read,update,delete,has,keys,size,subscribe}
status Status
create-db Create database
delete-db Delete database
has-db Has database
writers Database writers
add-writer Add database writers
remove-writer Remove database writers
create Create k/v
read Read k/v
update Update k/v
delete Delete k/v
has Determine whether a key exists within a DB by UUID
keys Get all keys for a DB by UUID
size Determine the size of the DB by UUID
subscribe Subscribe and monitor changes for a key
optional arguments:
-h, --help show this help message and exit
-p, --use_pbft Direct message to pbft instead of raft
-i ID, --id ID Crud script sender id (default 0)
required arguments:
-n NODE, --node NODE node's address (ex. 127.0.0.1:51010)
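For repeated operations, the crud script can be wrapped in small shell helpers. crud_create_many below is a hypothetical wrapper (it assumes ./crud from the steps above is in the current directory) that creates several key/value pairs in one call:

```shell
# crud_create_many: create several key/value pairs in one database.
# Hypothetical wrapper around the crud script shown above; assumes ./crud
# is in the current directory.
crud_create_many() {
  node="$1"; db="$2"; shift 2
  for pair in "$@"; do
    key=${pair%%=*}
    value=${pair#*=}
    ./crud -p -n "$node" create -u "$db" -k "$key" -v "$value"
  done
}

# usage: crud_create_many localhost:50000 myuuid a=1 b=2 c=3
```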