
Commit

Merge branch 'alpha' into cncli-patch-master
rdlrt committed Aug 8, 2022
2 parents b10e6d7 + b270bd1 commit 1adfdf1
Showing 67 changed files with 1,572 additions and 748 deletions.
12 changes: 6 additions & 6 deletions docs/Appendix/postgres.md
@@ -58,34 +58,34 @@ export PGPASSFILE=$CNODE_HOME/priv/.pgpass
echo "/var/run/postgresql:5432:cexplorer:*:*" > $PGPASSFILE
chmod 0600 $PGPASSFILE
psql postgres
# psql (13.4)
# psql (14.0)
# Type "help" for help.
#
# postgres=#
```
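Note that `psql` silently ignores a password file whose permissions are wider than `0600`, so it is worth confirming the `chmod` above took effect. A minimal sketch, using a temporary file in place of the real `$PGPASSFILE`:

```shell
# Sketch: verify a pgpass-style file carries the strict 0600 mode psql requires.
# A temporary file stands in for $CNODE_HOME/priv/.pgpass here.
PGPASSFILE="$(mktemp)"
echo "/var/run/postgresql:5432:cexplorer:*:*" > "$PGPASSFILE"
chmod 0600 "$PGPASSFILE"
mode="$(stat -c '%a' "$PGPASSFILE")"
echo "$mode"   # expect 600
```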

#### Tuning your instance

Before you start populating your DB instance with dbsync data, now is a good time to give some thought to the baseline configuration of your postgres instance by editing `/etc/postgresql/13/main/postgresql.conf`.
Before you start populating your DB instance with dbsync data, now is a good time to give some thought to the baseline configuration of your postgres instance by editing `/etc/postgresql/14/main/postgresql.conf`.
Tuning guides typically cover many standard-practice parameters. As a baseline for our purposes, we will use inputs from the example [here](https://pgtune.leopard.in.ua/#/).
You might want to fill in the form with sample information as per below:

| Option | Value |
|----------------|-------|
| DB Version | 13 |
| DB Version | 14 |
| OS Type | Linux |
| DB Type | Online Transaction Processing System|
| Total RAM | 32 (or as per your server) |
| Total RAM | 64 (or as per your server) |
| Number of CPUs | 8 (or as per your server) |
| Number of Connections | 200 |
| Data Storage | HDD Storage |
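
As a rough illustration of what pgtune derives from those inputs, two common rules of thumb are `shared_buffers` at about 25% of RAM and `effective_cache_size` at about 75%. This is a sketch of the heuristic only, not pgtune's exact output:

```shell
# Sketch of common pgtune-style heuristics; the ratios are approximations,
# not pgtune's exact recommendations.
total_ram_gb=64
shared_buffers_gb=$(( total_ram_gb / 4 ))        # ~25% of RAM
effective_cache_gb=$(( total_ram_gb * 3 / 4 ))   # ~75% of RAM
echo "shared_buffers = ${shared_buffers_gb}GB"
echo "effective_cache_size = ${effective_cache_gb}GB"
```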

In addition to the above, due to the nature of dbsync usage (a restart of the instance rolls back to the start of the epoch) and the blockchain's own data retention, we are not affected by the loss of volatile information when the instance restarts. Thus, we can relax some of the durability and corruption-protection settings, sparing the instance the IOPS/CPU load they would otherwise cost. We'd recommend setting the 3 parameters below in your `/etc/postgresql/13/main/postgresql.conf`:
In addition to the above, due to the nature of dbsync usage (a restart of the instance rolls back to the start of the epoch) and the blockchain's own data retention, we are not affected by the loss of volatile information when the instance restarts. Thus, we can relax some of the durability and corruption-protection settings, sparing the instance the IOPS/CPU load they would otherwise cost. We'd recommend setting the 3 parameters below in your `/etc/postgresql/14/main/postgresql.conf`:

| Parameter | Value |
|--------------------|---------|
| wal_level | minimal |
| max_wal_senders | 0 |
| synchronous_commit | off |

Once your changes are done, ensure to restart postgres service using `sudo systemctl restart postgresql`.
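Applying the three settings can also be scripted. This is a minimal sketch operating on a temporary copy of the config; on a real host, point `CONF` at `/etc/postgresql/14/main/postgresql.conf` and follow up with the `systemctl restart`:

```shell
# Sketch: enforce the three relaxed-durability settings in postgresql.conf.
# Operates on a temp file here; point CONF at the real config in practice.
CONF="$(mktemp)"
printf '#wal_level = replica\nmax_wal_senders = 10\n' > "$CONF"
for kv in 'wal_level = minimal' 'max_wal_senders = 0' 'synchronous_commit = off'; do
  key="${kv%% *}"
  if grep -qE "^#?${key}\b" "$CONF"; then
    sed -i -E "s|^#?${key}\b.*|${kv}|" "$CONF"   # uncomment/override an existing entry
  else
    echo "$kv" >> "$CONF"                        # append if the key is absent
  fi
done
grep -E '^(wal_level|max_wal_senders|synchronous_commit)' "$CONF"
```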
8 changes: 4 additions & 4 deletions docs/Build/node-cli.md
@@ -35,11 +35,11 @@ Execute `cardano-cli` and `cardano-node` to verify output as below (the exact ve

```bash
cardano-cli version
# cardano-cli 1.32.1 - linux-x86_64 - ghc-8.10
# git rev 4f65fb9a27aa7e3a1873ab4211e412af780a3648
# cardano-cli 1.35.0 - linux-x86_64 - ghc-8.10
# git rev <...>
cardano-node version
# cardano-node 1.32.1 - linux-x86_64 - ghc-8.10
# git rev 4f65fb9a27aa7e3a1873ab4211e412af780a3648
# cardano-node 1.35.0 - linux-x86_64 - ghc-8.10
# git rev <...>
```

#### Update port number or pool name for relative paths
2 changes: 1 addition & 1 deletion docs/Scripts/cncli.md
@@ -1,7 +1,7 @@
!!! info "Reminder !!"
Ensure the [Pre-Requisites](../basics.md#pre-requisites) are in place before you proceed.

`cncli.sh` is a script to download and deploy [CNCLI](https://github.com/AndrewWestberg/cncli) created and maintained by Andrew Westberg. It's a community-based CLI tool written in Rust for low-level `cardano-node` communication. Usage is **optional** and no script is dependent on it. The main features include:
`cncli.sh` is a script to download and deploy [CNCLI](https://github.com/cardano-community/cncli) created and maintained by Andrew Westberg. It's a community-based CLI tool written in Rust for low-level `cardano-node` communication. Usage is **optional** and no script is dependent on it. The main features include:

- **PING** - Validates that the remote server is on the given network and returns its response time. Utilized by `gLiveView` for peer analysis if available.
- **SYNC** - Connects to a node (local or remote) and synchronizes blocks to a local `sqlite` database.
18 changes: 18 additions & 0 deletions docs/Scripts/cntools-changelog.md
@@ -6,6 +6,24 @@ All notable changes to this tool will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [10.0.1] - 2022-07-14
#### Changed
- Transactions now built using cddl-format to ensure that the formatting of transactions adheres to the ledger specs.
- Default to mary era transaction building format for now.
#### Fixed
- Cold signing fix for pool registration / update. Last key was added twice when assembling witnesses.

## [10.0.0] - 2022-06-28
#### Added
- Support for Vasil Fork
- Preliminary support for Post HF updates (a short release will follow post fork in coming days)
- Minimum version for Node bumped to 1.35.0

#### Changed
- Pool > Rotate code now uses kes-periodinfo CLI query to get counter from node (fallback for Koios)
- Pool > Show Info updated to include current KES counter
- Update getEraIdentifier to include Babbage era

## [9.1.0] - 2022-05-11
#### Changed
- Harmonize flow for reusing old wallet configuration on pool modification vs setting new wallets.
6 changes: 6 additions & 0 deletions docs/Scripts/gliveview.md
@@ -56,6 +56,12 @@ Displays live metrics from cardano-node gathered through the nodes EKG/Prometheu
- **Tip (diff) / Status** - Will either show node status as `starting|sync xx.x%` or, if close to the reference tip, the tip difference `Tip (ref) - Tip (node)` to see how far off the tip (diff value) the node is. With current parameters, a slot diff up to 40 from the reference tip is considered good, but it should usually stay below 30. It's perfectly normal to see big differences in slots between blocks; that's the built-in randomness at play. To see if a node is really healthy and staying on tip, you would need to compare the tip between multiple nodes.
- **Forks** - The number of forks since node start. Each fork means the blockchain evolved in a different direction, thereby discarding blocks. A high number of forks means there is a higher chance of orphaned blocks.
- **Peers In / Out** - Shows how many connections the node has established in and out. See [Peer analysis](#peer-analysis) section for how to get more details of incoming and outgoing connections.
- **P2P Mode**
    - `Cold` peers indicate the number of inactive but known peers to the node.
    - `Warm` peers show how many established connections the node has.
    - `Hot` peers show how many established connections are actually active.
    - `Bi-Dir` (bidirectional) and `Uni-Dir` (unidirectional) indicate how the handshake protocol negotiated the connection. The connection between two p2p nodes will always be bidirectional, but it will be unidirectional between p2p nodes and non-p2p nodes.
    - `Duplex` shows the connections that are actually used in both directions; only bidirectional connections have this potential.
- **Mem (RSS)** - RSS is the Resident Set Size: how much of the memory allocated to cardano-node is currently resident in RAM. It does not include memory that is swapped out. It does include memory from shared libraries, as long as the pages from those libraries are actually in memory, and it includes all stack and heap memory.
- **Mem (Live) / (Heap)** - GC (Garbage Collector) values that show how much memory is used for live/heap data. A large difference between them (or the heap approaching the physical memory limit) means the node is struggling with the garbage collector and/or may begin swapping.
- **GC Minor / Major** - Collecting garbage from the "Young space" is called a Minor GC. Major (Full) GC is done more rarely and is a more expensive operation. Explaining garbage collection is a topic outside the scope of this documentation, and google is your friend for this.
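
The RSS figure reported above comes from the kernel accounting exposed under `/proc`. As a sketch (Linux-specific), it can be read directly for any PID; here the reader process stands in for cardano-node:

```shell
# Sketch: read resident set size (RSS) for a process straight from /proc,
# the same per-process figure (in kB) that RSS-based monitors report.
rss_kb=$(awk '/^VmRSS:/ {print $2}' /proc/self/status)
echo "RSS: ${rss_kb} kB"
```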
2 changes: 1 addition & 1 deletion docs/basics.md
@@ -39,7 +39,7 @@ Install pre-requisites for building cardano-node and using CNTools
-f Force overwrite of all files including normally saved user config sections in env, cnode.sh and gLiveView.sh
topology.json, config.json and genesis files normally saved will also be overwritten
-s Skip installing OS level dependencies (Default: will check and install any missing OS level prerequisites)
-n Connect to specified network instead of mainnet network (Default: connect to cardano mainnet network)
-n Connect to specified network (mainnet | guild | testnet | staging) (Default: mainnet)
eg: -n testnet
-t Alternate name for top level folder, non alpha-numeric chars will be replaced with underscore (Default: cnode)
-m Maximum time in seconds that you allow the file download operation to take before aborting (Default: 60s)
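
The `-t` replacement rule above ("non alpha-numeric chars will be replaced with underscore") can be sketched as a one-liner; the helper name below is illustrative, not taken from the actual installer script:

```shell
# Sketch of the -t folder-name sanitization described above; the function
# name is illustrative, not from the installer itself.
sanitize_folder() { printf '%s' "$1" | sed 's/[^[:alnum:]]/_/g'; }
sanitize_folder 'my-node 2.0'   # -> my_node_2_0
```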
2 changes: 1 addition & 1 deletion files/docker/grest/scripts/docker-getmetrics.sh
@@ -119,7 +119,7 @@ function get-metrics() {
export METRIC_grestschsize="${grestschsize}"
export METRIC_dbsize="${dbsize}"
#export METRIC_cnodeversion="$(echo $(cardano-node --version) | awk '{print $2 "-" $9}')"
#export METRIC_dbsyncversion="$(echo $(cardano-db-sync-extended --version) | awk '{print $2 "-" $9}')"
#export METRIC_dbsyncversion="$(echo $(cardano-db-sync --version) | awk '{print $2 "-" $9}')"
#export METRIC_psqlversion="$(echo "" | psql -U "${POSTGRES_USER}" -d "${POSTGRES_DB}" -c "SELECT version();" | grep PostgreSQL | awk '{print $2}')"

for metric_var_name in $(env | grep ^METRIC | sort | awk -F= '{print $1}')
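The commented-out `awk '{print $2 "-" $9}'` extraction above relies on `echo $(...)` flattening the multi-line `--version` output onto one line before awk splits it into fields. A sketch with a sample string (the version and revision values are illustrative):

```shell
# Sketch: how the awk field extraction in the commented metric lines works.
# The sample string mimics `echo $(cardano-db-sync --version)` flattening the
# output to a single line; version and revision values are illustrative.
sample='cardano-db-sync 12.0.2 - linux-x86_64 - ghc-8.10 git revision deadbeef'
combined=$(echo "$sample" | awk '{print $2 "-" $9}')
echo "$combined"   # -> 12.0.2-deadbeef
```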
2 changes: 1 addition & 1 deletion files/docker/node/dockerfile_stage3
@@ -36,7 +36,7 @@ RUN sed -i 's/^# *\(en_US.UTF-8\)/\1/' /etc/locale.gen \
&& echo "export LANGUAGE=en_US.UTF-8" >> ~/.bashrc

# PREREQ
RUN apt-get update && apt-get install -y libcap2 libselinux1 libc6 libsodium-dev ncurses-bin iproute2 curl wget apt-utils xz-utils netbase sudo coreutils dnsutils net-tools procps tcptraceroute bc usbip sqlite3 python3 tmux jq ncurses-base libtool autoconf git gnupg tcptraceroute util-linux less openssl bsdmainutils dialog \
RUN apt-get update && apt-get install -y libsecp256k1-0 libcap2 libselinux1 libc6 libsodium-dev ncurses-bin iproute2 curl wget apt-utils xz-utils netbase sudo coreutils dnsutils net-tools procps tcptraceroute bc usbip sqlite3 python3 tmux jq ncurses-base libtool autoconf git gnupg tcptraceroute util-linux less openssl bsdmainutils dialog \
&& apt-get install -y --no-install-recommends cron \
&& sudo apt-get -y purge && sudo apt-get -y clean && sudo apt-get -y autoremove && sudo rm -rf /var/lib/apt/lists/* # && sudo rm -rf /usr/bin/apt*

14 changes: 7 additions & 7 deletions files/grest/cron/jobs/active-stake-cache-update.sh
@@ -6,31 +6,31 @@ echo "$(date +%F_%H:%M:%S) Running active stake cache update..."
# High level check in db to see if update needed at all (should be updated only once on epoch transition)
[[ $(psql ${DB_NAME} -qbt -c "SELECT grest.active_stake_cache_update_check();" | tail -2 | tr -cd '[:alnum:]') != 't' ]] &&
echo "No update needed, exiting..." &&
exit 0;
exit 0

# This could break due to upstream changes on db-sync (based on log format)
last_epoch_stakes_log=$(grep -r 'Handling.*.stakes for epoch ' "$(dirname "$0")"/../../logs/dbsync-*.json "$(dirname "$0")"/../../logs/archive/dbsync-*.json 2>/dev/null | sed -e 's#.*.Handling ##' -e 's#stakes for epoch##' -e 's# slot .*.$##' | sort -k2 -n | tail -1)
last_epoch_stakes_log=$(grep -r 'Inserted.*.EpochStake for EpochNo ' "$(dirname "$0")"/../../logs/dbsync-*.json "$(dirname "$0")"/../../logs/archive/dbsync-*.json 2>/dev/null | sed -e 's#.*.Inserted ##' -e 's#EpochStake for EpochNo##' -e 's#\"}.*.$##' | sort -k2 -n | tail -1)
[[ -z ${last_epoch_stakes_log} ]] &&
echo "Could not find any 'Handling stakes' log entries, exiting..." &&
exit 1;
exit 1

logs_last_epoch_stakes_count=$(echo "${last_epoch_stakes_log}" | cut -d\ -f1)
logs_last_epoch_no=$(echo "${last_epoch_stakes_log}" | cut -d\ -f3)
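
The two `cut` calls above pick fields 1 and 3 because the preceding `sed` leaves a double space where the matched text was removed; with a space delimiter, the empty string between the two spaces counts as field 2. A sketch with an illustrative value:

```shell
# Sketch: why the script reads fields 1 and 3. The sed pipeline leaves a
# double space behind, so field 2 is empty; the values are illustrative.
last_epoch_stakes_log='1285942  350'
logs_last_epoch_stakes_count=$(echo "$last_epoch_stakes_log" | cut -d' ' -f1)
logs_last_epoch_no=$(echo "$last_epoch_stakes_log" | cut -d' ' -f3)
echo "count=${logs_last_epoch_stakes_count} epoch=${logs_last_epoch_no}"
```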

db_last_epoch_no=$(psql ${DB_NAME} -qbt -c "SELECT grest.get_current_epoch();" | tr -cd '[:alnum:]')
db_last_epoch_no=$(psql ${DB_NAME} -qbt -c "SELECT MAX(NO) from EPOCH;" | tr -cd '[:alnum:]')
[[ "${db_last_epoch_no}" != "${logs_last_epoch_no}" ]] &&
echo "Mismatch between last epoch in logs and database, exiting..." &&
exit 1;
exit 1

# Count current epoch entries processed by db-sync
db_epoch_stakes_count=$(psql ${DB_NAME} -qbt -c "SELECT grest.get_epoch_stakes_count(${db_last_epoch_no});" | tr -cd '[:alnum:]')
db_epoch_stakes_count=$(psql ${DB_NAME} -qbt -c "SELECT COUNT(1) FROM EPOCH_STAKE WHERE epoch_no = ${db_last_epoch_no};" | tr -cd '[:alnum:]')

# Check if db-sync completed handling stakes
[[ "${db_epoch_stakes_count}" != "${logs_last_epoch_stakes_count}" ]] &&
echo "Logs last epoch stakes count: ${logs_last_epoch_stakes_count}" &&
echo "DB last epoch stakes count: ${db_epoch_stakes_count}" &&
echo "db-sync stakes handling still incomplete, exiting..." &&
exit 0;
exit 0

# Stakes have been validated, run the cache update
psql ${DB_NAME} -qbt -c "SELECT GREST.active_stake_cache_update(${db_last_epoch_no});" 2>&1 1>/dev/null
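The `2>&1 1>/dev/null` tail on the `psql` call above keeps only stderr: redirections apply left to right, so stderr is first duplicated onto the current stdout, then stdout alone is discarded. A self-contained sketch:

```shell
# Sketch: left-to-right redirection order, as used on the psql call above.
# 2>&1 duplicates stderr onto the current stdout first; 1>/dev/null then
# discards stdout only, so just the stderr line survives.
captured=$( { echo 'on stdout'; echo 'on stderr' >&2; } 2>&1 1>/dev/null )
echo "$captured"   # -> on stderr
```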
6 changes: 6 additions & 0 deletions files/grest/cron/jobs/stake-snapshot-cache.sh
@@ -0,0 +1,6 @@
#!/bin/bash
DB_NAME=cexplorer

echo "$(date +%F_%H:%M:%S) Capturing last epochs' snapshot..."
psql ${DB_NAME} -qbt -c "CALL GREST.CAPTURE_LAST_EPOCH_SNAPSHOT();" 2>&1 1>/dev/null
echo "$(date +%F_%H:%M:%S) Job done!"
36 changes: 36 additions & 0 deletions files/grest/rpc/00_blockchain/genesis.sql
@@ -0,0 +1,36 @@
CREATE FUNCTION grest.genesis ()
RETURNS TABLE (
NETWORKMAGIC varchar,
NETWORKID varchar,
ACTIVESLOTCOEFF varchar,
UPDATEQUORUM varchar,
MAXLOVELACESUPPLY varchar,
EPOCHLENGTH varchar,
SYSTEMSTART integer,
SLOTSPERKESPERIOD varchar,
SLOTLENGTH varchar,
MAXKESREVOLUTIONS varchar,
SECURITYPARAM varchar,
ALONZOGENESIS varchar
)
LANGUAGE PLPGSQL
AS $$
BEGIN
RETURN QUERY
SELECT
g.NETWORKMAGIC,
g.NETWORKID,
g.ACTIVESLOTCOEFF,
g.UPDATEQUORUM,
g.MAXLOVELACESUPPLY,
g.EPOCHLENGTH,
EXTRACT(epoch from g.SYSTEMSTART::timestamp)::integer,
g.SLOTSPERKESPERIOD,
g.SLOTLENGTH,
g.MAXKESREVOLUTIONS,
g.SECURITYPARAM,
g.ALONZOGENESIS
FROM
grest.genesis g;
END;
$$;
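The `EXTRACT(epoch from g.SYSTEMSTART::timestamp)::integer` cast above turns the genesis `systemStart` timestamp into Unix seconds. The same conversion in shell, using Cardano mainnet's well-known start time (an assumption for illustration; it is not stated in this diff):

```shell
# Sketch: the EXTRACT(epoch ...) conversion done in shell. The timestamp is
# Cardano mainnet's systemStart (assumed here for illustration).
system_start='2017-09-23T21:44:51Z'
unix_seconds=$(date -u -d "$system_start" +%s)
echo "$unix_seconds"   # -> 1506203091
```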
12 changes: 6 additions & 6 deletions files/grest/rpc/00_blockchain/tip.sql
@@ -1,11 +1,11 @@
CREATE FUNCTION grest.tip ()
RETURNS TABLE (
hash text,
epoch_no uinteger,
abs_slot uinteger,
epoch_slot uinteger,
block_no uinteger,
block_time double precision
epoch_no word31type,
abs_slot word63type,
epoch_slot word31type,
block_no word31type,
block_time integer
)
LANGUAGE PLPGSQL
AS $$
@@ -17,7 +17,7 @@ BEGIN
b.SLOT_NO AS ABS_SLOT,
b.EPOCH_SLOT_NO AS EPOCH_SLOT,
b.BLOCK_NO,
EXTRACT(EPOCH from b.TIME)
EXTRACT(EPOCH from b.TIME)::integer
FROM
BLOCK B
ORDER BY
2 changes: 1 addition & 1 deletion files/grest/rpc/00_blockchain/totals.sql
@@ -1,6 +1,6 @@
CREATE FUNCTION grest.totals (_epoch_no numeric DEFAULT NULL)
RETURNS TABLE (
epoch_no uinteger,
epoch_no word31type,
circulation text,
treasury text,
reward text,
76 changes: 23 additions & 53 deletions files/grest/rpc/01_cached_tables/active_stake_cache.sql
@@ -1,6 +1,3 @@
--------------------------------------------------------------------------------
-- Pool active stake cache setup
--------------------------------------------------------------------------------
CREATE TABLE IF NOT EXISTS GREST.POOL_ACTIVE_STAKE_CACHE (
POOL_ID varchar NOT NULL,
EPOCH_NO bigint NOT NULL,
@@ -22,39 +19,6 @@ CREATE TABLE IF NOT EXISTS GREST.ACCOUNT_ACTIVE_STAKE_CACHE (
PRIMARY KEY (STAKE_ADDRESS, POOL_ID, EPOCH_NO)
);

/* HELPER FUNCTIONS */

CREATE FUNCTION grest.get_last_active_stake_validated_epoch ()
RETURNS INTEGER
LANGUAGE plpgsql
AS
$$
BEGIN
RETURN (
SELECT
last_value -- coalesce() doesn't work if empty set
FROM
grest.control_table
WHERE
key = 'last_active_stake_validated_epoch'
);
END;
$$;

/* POSSIBLE VALIDATION FOR CACHE (COUNTING ENTRIES) INSTEAD OF JUST DB-SYNC PART (EPOCH_STAKE)
CREATE FUNCTION grest.get_last_active_stake_cache_address_count ()
RETURNS INTEGER
LANGUAGE plpgsql
AS $$
BEGIN
RETURN (
SELECT count(*) from cache...
)
END;
$$;
*/

CREATE FUNCTION grest.active_stake_cache_update_check ()
RETURNS BOOLEAN
LANGUAGE plpgsql
@@ -64,15 +28,19 @@ $$
_current_epoch_no integer;
_last_active_stake_validated_epoch text;
BEGIN
SELECT
grest.get_last_active_stake_validated_epoch()
INTO
_last_active_stake_validated_epoch;

SELECT
grest.get_current_epoch()
INTO
_current_epoch_no;
-- Get Last Active Stake Validated Epoch
SELECT last_value
INTO _last_active_stake_validated_epoch
FROM
grest.control_table
WHERE
key = 'last_active_stake_validated_epoch';

-- Get Current Epoch
SELECT MAX(NO)
INTO _current_epoch_no
FROM epoch;

RAISE NOTICE 'Current epoch: %',
_current_epoch_no;
Expand All @@ -92,7 +60,6 @@ $$;
COMMENT ON FUNCTION grest.active_stake_cache_update_check
IS 'Internal function to determine whether active stake cache should be updated';

/* UPDATE FUNCTION */
CREATE FUNCTION grest.active_stake_cache_update (_epoch_no integer)
RETURNS VOID
LANGUAGE plpgsql
@@ -127,10 +94,10 @@ $$
/* POOL ACTIVE STAKE CACHE */
SELECT
COALESCE(MAX(epoch_no), 0)
FROM
GREST.POOL_ACTIVE_STAKE_CACHE
INTO
_last_pool_active_stake_cache_epoch_no;
_last_pool_active_stake_cache_epoch_no
FROM
GREST.POOL_ACTIVE_STAKE_CACHE;

INSERT INTO GREST.POOL_ACTIVE_STAKE_CACHE
SELECT
@@ -157,9 +124,9 @@ $$
/* EPOCH ACTIVE STAKE CACHE */
SELECT
COALESCE(MAX(epoch_no), 0)
INTO _last_epoch_active_stake_cache_epoch_no
FROM
GREST.EPOCH_ACTIVE_STAKE_CACHE
INTO _last_epoch_active_stake_cache_epoch_no;
GREST.EPOCH_ACTIVE_STAKE_CACHE;

INSERT INTO GREST.EPOCH_ACTIVE_STAKE_CACHE
SELECT
@@ -180,10 +147,10 @@ $$

/* ACCOUNT ACTIVE STAKE CACHE */
SELECT
COALESCE(MAX(epoch_no), 0)
COALESCE(MAX(epoch_no), (_epoch_no - 4) )
INTO _last_account_active_stake_cache_epoch_no
FROM
GREST.ACCOUNT_ACTIVE_STAKE_CACHE
INTO _last_account_active_stake_cache_epoch_no;
GREST.ACCOUNT_ACTIVE_STAKE_CACHE;

INSERT INTO GREST.ACCOUNT_ACTIVE_STAKE_CACHE
SELECT
@@ -210,6 +177,9 @@ $$
) DO UPDATE
SET AMOUNT = EXCLUDED.AMOUNT;

DELETE FROM GREST.ACCOUNT_ACTIVE_STAKE_CACHE
WHERE EPOCH_NO <= (_epoch_no - 4);

/* CONTROL TABLE ENTRY */
PERFORM grest.update_control_table(
'last_active_stake_validated_epoch',
2 changes: 1 addition & 1 deletion files/grest/rpc/01_cached_tables/asset_registry_cache.sql
@@ -21,7 +21,7 @@ CREATE FUNCTION grest.asset_registry_cache_update (
_ticker text DEFAULT NULL,
_url text DEFAULT NULL,
_logo text DEFAULT NULL,
_decimals uinteger DEFAULT 0
_decimals word31type DEFAULT 0
)
RETURNS void
LANGUAGE plpgsql
