Update guides for dbsync, and version number for node (#716)
* Update guides for dbsync, and version number for node

* Add dbsync.json to prereqs
rdlrt committed Jan 14, 2021
1 parent e5f34c6 commit c3d4183
Showing 4 changed files with 185 additions and 51 deletions.
82 changes: 51 additions & 31 deletions docs/Build/dbsync.md
@@ -1,7 +1,7 @@
!> - An average pool operator may not require cardano-db-sync at all. Please verify whether it is required for your use case, as mentioned [here](build.md#components)

> Ensure the [Pre-Requisites](basics.md#pre-requisites) are in place before you proceed.
>- Cardano DB Sync relies on an existing PostgreSQL server. To keep the focus on building the dbsync tool rather than on setting up Postgres itself, you can refer to the [Sample Local PostgreSQL Server Deployment instructions](Appendix/postgres.md) for setting up a Postgres instance.
>- Cardano DB Sync relies on an existing PostgreSQL server. To keep the focus on building the dbsync tool rather than on setting up Postgres itself, you can refer to the [Sample Local PostgreSQL Server Deployment instructions](Appendix/postgres.md) for setting up a Postgres instance. Specifically, we expect the PGPASSFILE environment variable to be set as per the instructions in the sample guide, so that dbsync can connect.
>- These instructions are not updated daily, but they are with major releases (expect a short delay after a new release before they are updated)
#### Build Instructions {docsify-ignore}
@@ -27,13 +27,16 @@ git pull
# On CentOS 7 (GCC 4.8.5) we should also do
# echo -e "package cryptonite\n flags: -use_target_attributes" >> cabal.project.local
echo -e "package cardano-crypto-praos\n flags: -external-libsodium-vrf" > cabal.project.local
# Replace master with appropriate tag if you'd like to avoid compiling against master
git checkout 5.0.1
# Replace the tag in the checkout below if you do not want to build the latest released version
git checkout $(curl -s https://api.github.com/repos/input-output-hk/cardano-db-sync/releases/latest | jq -r .tag_name)
$CNODE_HOME/scripts/cabal-build-all.sh
```
The above would copy the binaries into the `~/.cabal/bin` folder.

##### Prepare DB for cardano-db-sync:

Now that the binaries are available, let's create our database (when going through breaking changes, you may need to use `--recreatedb` instead of the `--createdb` used the first time). Again, we expect the PGPASSFILE environment variable to be set already (refer to the top of this guide for sample instructions):

``` bash
cd ~/git/cardano-db-sync
# scripts/postgresql-setup.sh --dropdb  # if the DB exists already; will fail if it doesn't - that's OK
@@ -47,7 +50,7 @@ scripts/postgresql-setup.sh --createdb
##### Start cardano-db-sync tool
``` bash
cd ~/git/cardano-db-sync
PGPASSFILE=$CNODE_HOME/priv/.pgpass cardano-db-sync-extended --config $CNODE_HOME/files/config.json --socket-path $CNODE_HOME/sockets/node0.socket --schema-dir schema/
cardano-db-sync-extended --config $CNODE_HOME/files/dbsync.json --socket-path $CNODE_HOME/sockets/node0.socket --state-dir $CNODE_HOME/guild-db/ledger-state --schema-dir schema/
```

You can use the same instructions above to build and execute `cardano-db-sync` as well, but since [cardano-graphql](Build/graphql.md) uses `cardano-db-sync-extended`, we'll stick to the extended version.
@@ -58,39 +61,56 @@ To validate, connect to postgres instance and execute commands as per below:

``` bash
export PGPASSFILE=$CNODE_HOME/priv/.pgpass
psql cexplorer_phtn
psql cexplorer
```

You should now be at the psql prompt; you can list the tables and verify they're populated:

``` sql
\dt
# List of relations
# Schema | Name | Type | Owner
#--------+----------------+-------+-------
# public | block | table | <username>
# public | epoch | table | <username>
# public | meta | table | <username>
# public | schema_version | table | <username>
# public | slot_leader | table | <username>
# public | tx | table | <username>
# public | tx_in | table | <username>
# public | tx_out | table | <username>
#(8 rows)
select * from meta;
```

A sample output of the above two commands may look like below:

```
List of relations
Schema | Name | Type | Owner
--------+----------------------+-------+--------
public | block | table | centos
public | delegation | table | centos
public | epoch | table | centos
public | epoch_param | table | centos
public | epoch_stake | table | centos
public | ma_tx_mint | table | centos
public | ma_tx_out | table | centos
public | meta | table | centos
public | orphaned_reward | table | centos
public | param_proposal | table | centos
public | pool_hash | table | centos
public | pool_meta_data | table | centos
public | pool_owner | table | centos
public | pool_relay | table | centos
public | pool_retire | table | centos
public | pool_update | table | centos
public | reserve | table | centos
public | reward | table | centos
public | schema_version | table | centos
public | slot_leader | table | centos
public | stake_address | table | centos
public | stake_deregistration | table | centos
public | stake_registration | table | centos
public | treasury | table | centos
public | tx | table | centos
public | tx_in | table | centos
public | tx_metadata | table | centos
public | tx_out | table | centos
public | withdrawal | table | centos
(29 rows)
select * from meta;
# id | protocol_const | slot_duration | start_time | network_name
#----+----------------+---------------+---------------------+--------------
# 1 | 43200 | 20000 | 2020-04-12 13:55:37 | pHTN
#(1 row)

select * from tx;
# id | hash | block | fee | out_sum | size
#----+--------------------------------------------------------------------+-------+-----+------------------+------
# 1 | \x26b63ce785b16fc53ba3ab882ac0e5342a77b33f355ba82982e3e2d5e05500df | 1 | 0 | 1000000000 | 0
# 2 | \xbd8f661658dabbb557d4b5e23264d34fda2a2304daccdac283e337581a88c479 | 1 | 0 | 62499975000000 | 0
# 3 | \x17fbf571b7d091e9cfb6853cd5fb603031831ce7e5e3acbb4b842960e90ba419 | 1 | 0 | 62499975000000 | 0
# 4 | \x3e7e3c1105d3bd76a2b5ae897e1b79b86c7834e68409e533afc318112405ff69 | 1 | 0 | 62499975000000 | 0
# ...
# (36 rows)
id | start_time | network_name
----+---------------------+--------------
1 | 2017-09-23 21:44:51 | mainnet
(1 row)
```
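Beyond listing tables, a useful sanity check is to confirm that the sync is actually progressing — for example (query sketch against the `block` table listed above; re-run it and the tip should advance):

```sql
-- How far has the chain synced? (block_no and time are columns of the block table)
select max(block_no) as tip_block, max(time) as tip_time from block;
```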
29 changes: 10 additions & 19 deletions docs/Build/node-cli.md
@@ -18,9 +18,9 @@ You can use the instructions below to build the cardano-node, same steps can be

``` bash
git fetch --tags --all
# Replace release 1.19.0 with the version/branch/tag you'd like to build
# Replace the tag in the checkout below if you do not want to build the latest released version
git pull
git checkout 1.19.0
git checkout $(curl -s https://api.github.com/repos/input-output-hk/cardano-node/releases/latest | jq -r .tag_name)

# The "-o" flag against script below will download cabal.project.local to depend on system libSodium package, and include cardano-address and bech32 binaries to your build
$CNODE_HOME/scripts/cabal-build-all.sh -o
@@ -34,21 +34,18 @@ Execute cardano-cli and cardano-node to verify output as below:

```bash
cardano-cli version
# cardano-cli 1.19.0 - linux-x86_64 - ghc-8.6
# git rev 4814003f14340d5a1fc02f3ac15437387a7ada9f
# cardano-cli 1.24.2 - linux-x86_64 - ghc-8.10
# git rev 196ba716c425a0b7c75741c168f6a6d7edaee1fc
cardano-node version
# cardano-node 1.19.0 - linux-x86_64 - ghc-8.6
# git rev 4814003f14340d5a1fc02f3ac15437387a7ada9f
# cardano-node 1.24.2 - linux-x86_64 - ghc-8.10
# git rev 400d18092ce604352cf36fe5f105b0d7c78be074
```

##### Update port number or pool name for relative paths

Before you go ahead with starting your node, you may want to update values for CNODE_PORT in `$CNODE_HOME/scripts/cnode.sh`. Note that it is imperative for operational relays and pools to ensure that the port mentioned is opened via firewall to the destination your node is supposed to connect from. Update your network/firewall configuration accordingly. Future executions of prereqs.sh will preserve and not overwrite these values.
Before you go ahead with starting your node, you may want to update the value of CNODE_PORT in `$CNODE_HOME/scripts/env`. Note that it is imperative for operational relays and pools to ensure that the port mentioned is open through the firewall to the hosts your node is supposed to accept connections from. Update your network/firewall configuration accordingly. Future executions of prereqs.sh will preserve these values and not overwrite them.

```bash
## Static (content that will not be overwritten by prereqs.sh)
## Begin

POOL_NAME="GUILD"
CNODE_PORT=6000
POOL_DIR="$CNODE_HOME/priv/pool/$POOL_NAME"
@@ -58,13 +55,15 @@ POOL_DIR="$CNODE_HOME/priv/pool/$POOL_NAME"
##### Start the node

To test starting the node in interactive mode, you can use the pre-built script below (note that the config now uses `SimpleView` so you may not see much output):
To test starting the node in interactive mode, you can use the pre-built script below (note that your node logs are written to the $CNODE_HOME/logs folder, so you may not see much output beyond `Listening on http://127.0.0.1:12798`):

```bash
cd $CNODE_HOME/scripts
./cnode.sh
```

Stop the node by hitting Ctrl-C.

##### Run as systemd service

The preferred way to run the node is through a service manager like systemd. This section explains how to set up a systemd service file.
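For orientation, a minimal unit file for such a service might look like the sketch below — the paths, user, and script name here are assumptions for illustration; use the service file provided by the repository's scripts for an actual deployment:

```ini
# /etc/systemd/system/cnode.service - hypothetical sketch, not the repo's actual file
[Unit]
Description=Cardano Node
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
User=cnode
ExecStart=/opt/cardano/cnode/scripts/cnode.sh
Restart=on-failure
RestartSec=5
KillSignal=SIGINT

[Install]
WantedBy=multi-user.target
```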
@@ -92,11 +91,3 @@ sudo systemctl status cnode.service

If you miss the LiveView functionality, you can use [gLiveView](Scripts/gliveview.md) to monitor a pool that was started via systemd.

##### Steps to transition from LiveView in tmux to systemd setup

If you've followed guide from this repo previously and would like to transfer to systemd usage, please checkout the steps below:

1. Stop previous instance of node if already running (eg: in tmux)
2. Run `prereqs.sh`, but remember to preserve your customisations to the cnode.sh, topology.json and env files (you can also compare and update the cnode.sh and env files from the GitHub repo).
3. Follow the instructions [above](#run-as-systemd-service) to setup your node as a service and start it using systemctl as directed.
4. If you need to monitor via interactive terminal as before, use [gLiveView](Scripts/gliveview.md).
117 changes: 117 additions & 0 deletions files/dbsync.json
@@ -0,0 +1,117 @@
{
"EnableLogMetrics": false,
"EnableLogging": true,
"NetworkName": "mainnet",
"NodeConfigFile": "/opt/cardano/cnode/files/config.json",
"RequiresNetworkMagic": "RequiresNoMagic",
"defaultBackends": [
"KatipBK"
],
"defaultScribes": [
[
"FileSK",
"/opt/cardano/cnode/logs/dbsync.json"
]
],
"hasPrometheus": [
"127.0.0.1",
12698
],
"minSeverity": "Info",
"options": {
"cfokey": {
"value": "Release-1.0.0"
},
"mapBackends": {},
"mapSeverity": {
"db-sync-node": "Info",
"db-sync-node.Mux": "Error",
"db-sync-node.Subscription": "Error"
},
"mapSubtrace": {
"#ekgview": {
"contents": [
[
{
"contents": "cardano.epoch-validation.benchmark",
"tag": "Contains"
},
[
{
"contents": ".monoclock.basic.",
"tag": "Contains"
}
]
],
[
{
"contents": "cardano.epoch-validation.benchmark",
"tag": "Contains"
},
[
{
"contents": "diff.RTS.cpuNs.timed.",
"tag": "Contains"
}
]
],
[
{
"contents": "#ekgview.#aggregation.cardano.epoch-validation.benchmark",
"tag": "StartsWith"
},
[
{
"contents": "diff.RTS.gcNum.timed.",
"tag": "Contains"
}
]
]
],
"subtrace": "FilterTrace"
},
"#messagecounters.aggregation": {
"subtrace": "NoTrace"
},
"#messagecounters.ekgview": {
"subtrace": "NoTrace"
},
"#messagecounters.katip": {
"subtrace": "NoTrace"
},
"#messagecounters.monitoring": {
"subtrace": "NoTrace"
},
"#messagecounters.switchboard": {
"subtrace": "NoTrace"
},
"benchmark": {
"contents": [
"GhcRtsStats",
"MonotonicClock"
],
"subtrace": "ObservableTrace"
},
"cardano.epoch-validation.utxo-stats": {
"subtrace": "NoTrace"
}
}
},
"rotation": {
"rpKeepFilesNum": 10,
"rpLogLimitBytes": 5000000,
"rpMaxAgeHours": 24
},
"setupBackends": [
"AggregationBK",
"KatipBK"
],
"setupScribes": [
{
"scKind": "FileSK",
"scName": "/opt/cardano/cnode/logs/dbsync.json",
"scFormat": "ScJson",
"scRotation": null
}
]
}
8 changes: 7 additions & 1 deletion scripts/cnode-helper-scripts/prereqs.sh
@@ -377,6 +377,10 @@ else
echo "${BRANCH}" > "${CNODE_HOME}"/scripts/.env_branch
fi

# Download dbsync config
curl -sL -m ${CURL_TIMEOUT} -o dbsync.json.tmp ${URL_RAW}/files/dbsync.json

# Download node config, genesis and topology from template
if [[ ${NETWORK} = "testnet" ]]; then
curl -sL -m ${CURL_TIMEOUT} -o byron-genesis.json.tmp https://hydra.iohk.io/job/Cardano/iohk-nix/cardano-deployment/latest-finished/download/1/testnet-byron-genesis.json
curl -sL -m ${CURL_TIMEOUT} -o genesis.json.tmp https://hydra.iohk.io/job/Cardano/iohk-nix/cardano-deployment/latest-finished/download/1/testnet-shelley-genesis.json
@@ -401,10 +405,12 @@ fi
sed -e "s@/opt/cardano/cnode@${CNODE_HOME}@g" -i ./*.json.tmp
[[ ${FORCE_OVERWRITE} = 'Y' && -f topology.json ]] && cp -f topology.json "topology.json_bkp$(date +%s)"
[[ ${FORCE_OVERWRITE} = 'Y' && -f config.json ]] && cp -f config.json "config.json_bkp$(date +%s)"
[[ ${FORCE_OVERWRITE} = 'Y' && -f dbsync.json ]] && cp -f dbsync.json "dbsync.json_bkp$(date +%s)"
if [[ ${FORCE_OVERWRITE} = 'Y' || ! -f byron-genesis.json ]]; then mv -f byron-genesis.json.tmp byron-genesis.json; else rm -f byron-genesis.json.tmp; fi
if [[ ${FORCE_OVERWRITE} = 'Y' || ! -f genesis.json ]]; then mv -f genesis.json.tmp genesis.json; else rm -f genesis.json.tmp; fi
if [[ ${FORCE_OVERWRITE} = 'Y' || ! -f topology.json ]]; then mv -f topology.json.tmp topology.json; else rm -f topology.json.tmp; fi
if [[ ${FORCE_OVERWRITE} = 'Y' || ! -f config.json ]]; then mv -f config.json.tmp config.json; else rm -f config.json.tmp; fi
if [[ ${FORCE_OVERWRITE} = 'Y' || ! -f dbsync.json ]]; then mv -f dbsync.json.tmp dbsync.json; else rm -f dbsync.json.tmp; fi

pushd "${CNODE_HOME}"/scripts >/dev/null || err_exit
curl -s -m ${CURL_TIMEOUT} -o env.tmp ${URL_RAW}/scripts/cnode-helper-scripts/env
@@ -446,7 +452,7 @@ updateWithCustomConfig() {
mv -f ${file}.tmp ${file}
}

[[ ${FORCE_OVERWRITE} = 'Y' ]] && echo "Forced full upgrade! Please edit scripts/env, scripts/cnode.sh, scripts/gLiveView.sh and scripts/topologyUpdater.sh (alongwith files/topology.json, files/config.json) as required/"
[[ ${FORCE_OVERWRITE} = 'Y' ]] && echo "Forced full upgrade! Please edit scripts/env, scripts/cnode.sh, scripts/gLiveView.sh and scripts/topologyUpdater.sh (along with files/topology.json, files/config.json, files/dbsync.json) as required!"

updateWithCustomConfig "env"
updateWithCustomConfig "cnode.sh"
