[Doc] Cluster deployment (#746)

* cluster depoly doc

* fix sherman-the-tank's comment

* fix graph port
darionyaphet authored and dutor committed Aug 12, 2019
1 parent 088d012 commit 587d7ea825470616651cb1ff78fd03e199f8ca58
@@ -0,0 +1,151 @@
---

This tutorial provides an introduction to deploying a `Nebula` cluster.

---

#### Download and install package

First of all, download the rpm or deb package from [here](https://github.com/vesoft-inc/nebula/releases).

Currently, installation packages are offered for `CentOS 7.5`, `CentOS 6.5`, `Ubuntu 1604` and `Ubuntu 1804`.

For `CentOS`:

```
rpm -ivh nebula-{VERSION}.{SYSTEM_VERSION}.x86_64.rpm
```

For `Ubuntu`:

```
dpkg -i nebula-{VERSION}.{SYSTEM_VERSION}.amd64.deb
```

By default, the config files are under `/usr/local/nebula/etc`. You should modify `meta_server_addrs` to set the Meta Server's addresses.

To enable a multi-replica Meta service, set all the meta server addresses, separated by commas, in `meta_server_addrs`.

Use `data_path` to set the underlying storage directory for `Meta` and `Storage`.
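For example, a three-replica Meta setup could be configured as follows (the IP addresses below are placeholders, not defaults):

```
--meta_server_addrs=192.168.8.1:45500,192.168.8.2:45500,192.168.8.3:45500
--local_ip=192.168.8.1
--data_path=data/meta
```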

***

#### Start Up the Nebula Cluster

Currently, the `scripts/services.sh` script is provided to manage the Nebula cluster.

You can `start`, `stop` and `restart` the cluster using this script.

Its usage looks like the following:

```
scripts/services.sh <start|stop|restart|status|kill>
```

The `meta.hosts`, `storage.hosts` and `graph.hosts` files contain the hosts of the meta, storage and graph services respectively.
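As an illustration, each of the hosts files used by the script holds one host address per line (the addresses below are placeholders):

```
192.168.8.1
192.168.8.2
192.168.8.3
```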

***

#### Config reference

**Meta Service** supports the following config properties.

Property Name | Default Value | Description
--------------------------- | ------------------------ | -----------
`port` | 45500 | Meta daemon listening port.
`reuse_port` | true | Whether to turn on the SO_REUSEPORT option.
`data_path` | "" | Root data path.
`peers` | "" | A comma-separated list of IPs; the number of IPs equals the replica number. If empty, the replica number is 1.
`local_ip` | "" | Local IP specified for NetworkUtils::getLocalIP.
`num_io_threads` | 16 | Number of IO threads.
`meta_http_thread_num` | 3 | Number of meta daemon's http threads.
`num_worker_threads` | 32 | Number of workers.
`part_man_type` | memory | Options: memory, meta.
`pid_file` | "pids/nebula-metad.pid" | File to hold the process id.
`daemonize` | true | Whether to run as a daemon process.
`cluster_id` | 0 | A unique id for each cluster.
`putTryNum` | 10 | Number of attempts to generate cluster ID.
`load_config_interval_secs` | 2 * 60 | Interval in seconds to load config.
`meta_ingest_thread_num` | 3 | Number of meta daemon's ingest threads.

**Storage Service** supports the following config properties.

Property Name | Default Value | Description
----------------------------------- | -------------------------- | -----------
`port` | 44500 | Storage daemon listening port.
`reuse_port` | true | Whether to turn on the SO_REUSEPORT option.
`data_path` | "" | Root data path, multi paths should be split by comma. For rocksdb engine, one path one instance.
`local_ip` | "" | IP address which is used to identify this server, combined with the listen port.
`daemonize` | true | Whether to run the process as a daemon.
`pid_file` | "pids/nebula-storaged.pid" | File to hold the process id.
`meta_server_addrs` | "" | List of meta server addresses, the format looks like ip1:port1, ip2:port2, ip3:port3.
`store_type` | "nebula" | Which type of KVStore to be used by the storage daemon. Options can be "nebula", "hbase", etc.
`num_io_threads` | 16 | Number of IO threads.
`storage_http_thread_num` | 3 | Number of storage daemon's http thread.
`num_worker_threads` | 32 | Number of workers.
`engine_type` | rocksdb | rocksdb, memory...
`custom_filter_interval_secs` | 24 * 3600 | Interval to trigger custom compaction.
`num_workers` | 4 | Number of worker threads.
`rocksdb_disable_wal` | false | Whether to disable the WAL in rocksdb.
`rocksdb_db_options` | "" | DBOptions, each option will be given as <option_name>:<option_value>, separated by `;`.
`rocksdb_column_family_options` | "" | ColumnFamilyOptions, each option will be given as <option_name>:<option_value>, separated by `;`.
`rocksdb_block_based_table_options` | "" | BlockBasedTableOptions, each option will be given as <option_name>:<option_value>, separated by `;`.
`rocksdb_batch_size` | 4 * 1024 | Default reserved bytes for one batch operation.
`rocksdb_block_cache` | 4 | The default block cache size used in BlockBasedTable. The unit is MB.
`download_thread_num` | 3 | Download thread number.
`min_vertices_per_bucket` | 3 | The min vertices number in one bucket.
`max_appendlog_batch_size` | 128 | The max number of logs in each appendLog request batch.
`max_outstanding_requests` | 1024 | The max number of outstanding appendLog requests.
`raft_rpc_timeout_ms` | 500 | RPC timeout for raft client.
`accept_log_append_during_pulling` | false | Whether to accept new logs during pulling the snapshot.
`raft_heartbeat_interval_secs` | 5 | Seconds between each heartbeat.
`max_batch_size` | 256 | The max number of logs in a batch.
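The `meta_server_addrs` value described above is a comma-separated list of `host:port` endpoints. It can be split into individual endpoints with standard tools; for example (the addresses below are placeholders):

```
# Print one meta server endpoint per line by replacing commas with newlines.
addrs="192.168.8.1:45500,192.168.8.2:45500,192.168.8.3:45500"
echo "$addrs" | tr ',' '\n'
# Prints:
# 192.168.8.1:45500
# 192.168.8.2:45500
# 192.168.8.3:45500
```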

**Graph Service** supports the following config properties.

Property Name | Default Value | Description
------------------------------- | ------------------------ | -----------
`port` | 3699 | Nebula Graph daemon's listen port.
`client_idle_timeout_secs` | 0 | Seconds before we close the idle connections, 0 for infinite.
`session_idle_timeout_secs` | 600 | Seconds before we expire the idle sessions, 0 for infinite.
`session_reclaim_interval_secs` | 10 | Period in seconds at which we try to reclaim expired sessions.
`num_netio_threads` | 0 | Number of networking threads, 0 for number of physical CPU cores.
`num_accept_threads` | 1 | Number of threads to accept incoming connections.
`num_worker_threads` | 1 | Number of threads to execute user queries.
`reuse_port` | true | Whether to turn on the SO_REUSEPORT option.
`listen_backlog` | 1024 | Backlog of the listen socket.
`listen_netdev` | "any" | The network device to listen on.
`pid_file` | "pids/nebula-graphd.pid" | File to hold the process id.
`redirect_stdout` | true | Whether to redirect stdout and stderr to separate files.
`stdout_log_file` | "graphd-stdout.log" | Destination filename of stdout.
`stderr_log_file` | "graphd-stderr.log" | Destination filename of stderr.
`daemonize` | true | Whether to run as a daemon process.
`meta_server_addrs` | "" | List of meta server addresses, the format looks like ip1:port1, ip2:port2, ip3:port3.



**Web Service** supports the following config properties.

Property Name | Default Value | Description
---------------------- | ------------- | -----------
`ws_http_port` | 11000 | Port to listen on with HTTP protocol.
`ws_h2_port` | 11002 | Port to listen on with HTTP/2 protocol.
`ws_ip` | "127.0.0.1" | IP/Hostname to bind to.
`ws_threads` | 4 | Number of threads for the web service.
`ws_meta_http_port` | 11000 | Port to listen on Meta with HTTP protocol.
`ws_meta_h2_port` | 11002 | Port to listen on Meta with HTTP/2 protocol.
`ws_storage_http_port` | 12000 | Port to listen on Storage with HTTP protocol.
`ws_storage_h2_port` | 12002 | Port to listen on Storage with HTTP/2 protocol.

**Console** supports the following config properties.

Property Name | Default Value | Description
------------------------ | ------------- | -----------
`addr` | "127.0.0.1" | Nebula daemon IP address.
`port` | 0 | Nebula daemon listening port.
`u` | "" | Username used to authenticate.
`p` | "" | Password used to authenticate.
`enable_history` | false | Whether to force saving the command history.
`server_conn_timeout_ms` | 1000 | Connection timeout in milliseconds.
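As an illustration, these properties could be passed on the console's command line (the binary path, username and password below are assumptions, not defaults):

```
bin/nebula --addr=127.0.0.1 --port=3699 --u=user --p=password
```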


@@ -30,12 +30,13 @@
# Root data path, multiple paths should be splitted by comma.
# One path per instance, if --engine_type is `rocksdb'
--data_path=data/storage
# To start mock server
--mock_server=true
# The type of part manager, [memory | meta]
--part_man_type=memory
# The default reserved bytes for one batch operation
---batch_reserved_bytes=4096
+--rocksdb_batch_size=4096
# The default block cache size used in BlockBasedTable.
# The unit is MB.
--rocksdb_block_cache=4
# The type of storage engine, `rocksdb', `memory', etc.
--engine_type=rocksdb
# The type of part, `simple', `consensus'...
@@ -17,7 +17,7 @@ if [ -z $NEBULA_HOME ]; then
NEBULA_HOME=/usr/local
fi

-echo "Start Meta Service ..."
+echo "Processing Meta Service ..."

if [ ! -f ${SCRIPT_PATH}"/meta.hosts" ]; then
echo "${SCRIPT_PATH}/metas not exist"
@@ -34,7 +34,7 @@ do
fi
done

-echo "Start Storage Service ..."
+echo "Processing Storage Service ..."

if [ ! -f ${SCRIPT_PATH}"/storage.hosts" ]; then
echo "${SCRIPT_PATH}/storage.hosts not exist"
@@ -51,7 +51,7 @@ do
fi
done

-echo "Start Graph Service ..."
+echo "Processing Graph Service ..."

if [ ! -f ${SCRIPT_PATH}"/graph.hosts" ]; then
echo "${SCRIPT_PATH}/graph.hosts not exist"
@@ -7,7 +7,7 @@
#include "base/Base.h"
#include "graph/GraphFlags.h"

-DEFINE_int32(port, 34500, "Nebula Graph daemon's listen port");
+DEFINE_int32(port, 3699, "Nebula Graph daemon's listen port");
DEFINE_int32(client_idle_timeout_secs, 0,
"Seconds before we close the idle connections, 0 for infinite");
DEFINE_int32(session_idle_timeout_secs, 600,
@@ -214,7 +214,7 @@ ResultCode RocksEngine::put(std::string key, std::string value) {


ResultCode RocksEngine::multiPut(std::vector<KV> keyValues) {
-rocksdb::WriteBatch updates(FLAGS_batch_reserved_bytes);
+rocksdb::WriteBatch updates(FLAGS_rocksdb_batch_size);
for (size_t i = 0; i < keyValues.size(); i++) {
updates.Put(keyValues[i].first, keyValues[i].second);
}
@@ -244,7 +244,7 @@ ResultCode RocksEngine::remove(const std::string& key) {


ResultCode RocksEngine::multiRemove(std::vector<std::string> keys) {
-rocksdb::WriteBatch deletes(FLAGS_batch_reserved_bytes);
+rocksdb::WriteBatch deletes(FLAGS_rocksdb_batch_size);
for (size_t i = 0; i < keys.size(); i++) {
deletes.Delete(keys[i]);
}
@@ -35,7 +35,7 @@ DEFINE_string(rocksdb_block_based_table_options,
"BlockBasedTableOptions, each option will be given "
"as <option_name>:<option_value> separated by ;");

-DEFINE_int32(batch_reserved_bytes,
+DEFINE_int32(rocksdb_batch_size,
4 * 1024,
"default reserved bytes for one batch operation");

@@ -47,8 +47,8 @@ DEFINE_string(part_man_type,
*/

// BlockBasedTable block_cache
-DEFINE_int64(block_cache, 4,
-"BlockBasedTable:block_cache : MB");
+DEFINE_int64(rocksdb_block_cache, 4,
+"The default block cache size used in BlockBasedTable. The unit is MB");


namespace nebula {
@@ -81,7 +81,7 @@ rocksdb::Status initRocksdbOptions(rocksdb::Options &baseOpts) {
return s;
}

-bbtOpts.block_cache = rocksdb::NewLRUCache(FLAGS_block_cache * 1024 * 1024);
+bbtOpts.block_cache = rocksdb::NewLRUCache(FLAGS_rocksdb_block_cache * 1024 * 1024);
baseOpts.table_factory.reset(NewBlockBasedTableFactory(bbtOpts));
baseOpts.create_if_missing = true;
return s;
@@ -29,9 +29,9 @@ DECLARE_string(memtable_factory);
DECLARE_bool(rocksdb_disable_wal);

// BlockBasedTable block_cache
-DECLARE_int64(block_cache);
+DECLARE_int64(rocksdb_block_cache);

-DECLARE_int32(batch_reserved_bytes);
+DECLARE_int32(rocksdb_batch_size);

DECLARE_string(part_man_type);

@@ -22,7 +22,7 @@

DEFINE_bool(accept_log_append_during_pulling, false,
"Whether to accept new logs during pulling the snapshot");
-DEFINE_uint32(heartbeat_interval, 5,
+DEFINE_uint32(raft_heartbeat_interval_secs, 5,
"Seconds between each heartbeat");
DEFINE_uint32(max_batch_size, 256, "The max number of logs in a batch");

@@ -766,15 +766,15 @@ bool RaftPart::needToSendHeartbeat() {
std::lock_guard<std::mutex> g(raftLock_);
return status_ == Status::RUNNING &&
role_ == Role::LEADER &&
-lastMsgSentDur_.elapsedInSec() >= FLAGS_heartbeat_interval * 2 / 5;
+lastMsgSentDur_.elapsedInSec() >= FLAGS_raft_heartbeat_interval_secs * 2 / 5;
}


bool RaftPart::needToStartElection() {
std::lock_guard<std::mutex> g(raftLock_);
if (status_ == Status::RUNNING &&
role_ == Role::FOLLOWER &&
-(lastMsgRecvDur_.elapsedInSec() >= FLAGS_heartbeat_interval ||
+(lastMsgRecvDur_.elapsedInSec() >= FLAGS_raft_heartbeat_interval_secs ||
term_ == 0)) {
role_ = Role::CANDIDATE;
}
@@ -955,7 +955,7 @@ bool RaftPart::leaderElection() {


void RaftPart::statusPolling() {
-size_t delay = FLAGS_heartbeat_interval * 1000 / 3;
+size_t delay = FLAGS_raft_heartbeat_interval_secs * 1000 / 3;
if (needToStartElection()) {
LOG(INFO) << idStr_ << "Need to start leader election";
if (leaderElection()) {
@@ -16,7 +16,7 @@
#include "kvstore/raftex/test/RaftexTestBase.h"
#include "kvstore/raftex/test/TestShard.h"

-DECLARE_uint32(heartbeat_interval);
+DECLARE_uint32(raft_heartbeat_interval_secs);


namespace nebula {
@@ -101,7 +101,7 @@ TEST(LearnerTest, CatchUpDataTest) {
std::vector<std::string> msgs;
appendLogs(0, 99, leader, msgs);
// Sleep a while to make sure the last log has been committed on followers
-sleep(FLAGS_heartbeat_interval);
+sleep(FLAGS_raft_heartbeat_interval_secs);

// Check every copy
for (int i = 0; i < 3; i++) {
@@ -16,7 +16,7 @@
#include "kvstore/raftex/test/RaftexTestBase.h"
#include "kvstore/raftex/test/TestShard.h"

-DECLARE_uint32(heartbeat_interval);
+DECLARE_uint32(raft_heartbeat_interval_secs);
DECLARE_uint32(max_batch_size);

namespace nebula {
@@ -114,7 +114,7 @@ TEST(LogAppend, MultiThreadAppend) {

// Sleep a while to make sure the lat log has been committed on
// followers
-sleep(FLAGS_heartbeat_interval);
+sleep(FLAGS_raft_heartbeat_interval_secs);

// Check every copy
for (auto& c : copies) {
@@ -16,7 +16,7 @@
#include "kvstore/raftex/test/RaftexTestBase.h"
#include "kvstore/raftex/test/TestShard.h"

-DECLARE_uint32(heartbeat_interval);
+DECLARE_uint32(raft_heartbeat_interval_secs);


namespace nebula {
@@ -146,7 +146,7 @@ TEST_F(LogCASTest, AllInvalidCAS) {
LOG(INFO) << "<===== Finish appending logs";

// Sleep a while to make sure the last log has been committed on followers
-sleep(FLAGS_heartbeat_interval);
+sleep(FLAGS_raft_heartbeat_interval_secs);

// Check every copy
for (auto& c : copies_) {
@@ -16,7 +16,7 @@
#include "kvstore/raftex/test/RaftexTestBase.h"
#include "kvstore/raftex/test/TestShard.h"

-DECLARE_uint32(heartbeat_interval);
+DECLARE_uint32(raft_heartbeat_interval_secs);


namespace nebula {
@@ -41,7 +41,7 @@ TEST_F(LogCommandTest, StartWithCommandLog) {
}

TEST_F(LogCommandTest, CommandInMiddle) {
-FLAGS_heartbeat_interval = 1;
+FLAGS_raft_heartbeat_interval_secs = 1;
// Append logs
LOG(INFO) << "=====> Start appending logs";
std::vector<std::string> msgs;
