diff --git a/docs/Appendix/postgres.md b/docs/Appendix/postgres.md index b2b343d95..fc2a20c19 100644 --- a/docs/Appendix/postgres.md +++ b/docs/Appendix/postgres.md @@ -58,7 +58,7 @@ export PGPASSFILE=$CNODE_HOME/priv/.pgpass echo "/var/run/postgresql:5432:cexplorer:*:*" > $PGPASSFILE chmod 0600 $PGPASSFILE psql postgres -# psql (13.4) +# psql (14.0) # Type "help" for help. # # postgres=# @@ -66,21 +66,21 @@ psql postgres #### Tuning your instance -Before you start populating your DB instance using dbsync data, now might be a good time to put some thought on to baseline configuration of your postgres instance by editing `/etc/postgresql/13/main/postgresql.conf`. +Before you start populating your DB instance using dbsync data, now is a good time to give some thought to the baseline configuration of your postgres instance by editing `/etc/postgresql/14/main/postgresql.conf`. Typically, you will find many standard-practice parameters in tuning guides. To establish a baseline, we will use inputs from the example [here](https://pgtune.leopard.in.ua/#/). You might want to fill in the form with sample information as per below: | Option | Value | |----------------|-------| -| DB Version | 13 | +| DB Version | 14 | | OS Type | Linux | | DB Type | Online Transaction Processing System| -| Total RAM | 32 (or as per your server) | +| Total RAM | 64 (or as per your server) | | Number of CPUs | 8 (or as per your server) | | Number of Connections | 200 | | Data Storage | HDD Storage | -In addition to above, due to the nature of usage by dbsync (restart of instance does a rollback to start of epoch), and data retention on blockchain - we're not affected by loss of volatile information upon a restart of instance. Thus, we can relax some of the data retention and protection against corruption related settings, as those are IOPs/CPU Load Average impacts that the instance does not need to spend. We'd recommend setting 3 of those below in your `/etc/postgresql/13/main/postgresql.conf`: +In addition to the above, due to the nature of dbsync usage (a restart of the instance rolls back to the start of the epoch) and data retention on the blockchain, we are not affected by the loss of volatile information when the instance restarts. We can therefore relax some of the data-retention and corruption-protection settings, as these cost IOPs/CPU load that the instance does not need to spend. We'd recommend setting the 3 parameters below in your `/etc/postgresql/14/main/postgresql.conf`: | Parameter | Value | |--------------------|---------| @@ -88,4 +88,4 @@ In addition to above, due to the nature of usage by dbsync (restart of instance | max_wal_senders | 0 | | synchronous_commit | off | -Once your changes are done, ensure to restart postgres service using `sudo systemctl restart postgresql`. \ No newline at end of file +Once your changes are done, make sure to restart the postgres service using `sudo systemctl restart postgresql`.
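For convenience, the visible parameters can also be applied without hand-editing the config file. A minimal sketch, assuming a local superuser connection and a systemd-managed `postgresql` service (the table's remaining parameter is elided by the hunk boundary above, so it is not repeated here):

```bash
# Write the relaxed settings to postgresql.auto.conf via ALTER SYSTEM,
# then restart so max_wal_senders (a restart-only parameter) takes effect.
psql postgres <<'EOF'
ALTER SYSTEM SET max_wal_senders = 0;
ALTER SYSTEM SET synchronous_commit = off;
EOF
sudo systemctl restart postgresql
```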
diff --git a/docs/Build/node-cli.md b/docs/Build/node-cli.md index 8bc8a7add..88d532bf9 100644 --- a/docs/Build/node-cli.md +++ b/docs/Build/node-cli.md @@ -35,11 +35,11 @@ Execute `cardano-cli` and `cardano-node` to verify output as below (the exact ve ```bash cardano-cli version -# cardano-cli 1.32.1 - linux-x86_64 - ghc-8.10 -# git rev 4f65fb9a27aa7e3a1873ab4211e412af780a3648 +# cardano-cli 1.35.0 - linux-x86_64 - ghc-8.10 +# git rev <...> cardano-node version -# cardano-node 1.32.1 - linux-x86_64 - ghc-8.10 -# git rev 4f65fb9a27aa7e3a1873ab4211e412af780a3648 +# cardano-node 1.35.0 - linux-x86_64 - ghc-8.10 +# git rev <...> ``` #### Update port number or pool name for relative paths diff --git a/docs/Scripts/cncli.md b/docs/Scripts/cncli.md index 6ffd34310..37370fcf5 100644 --- a/docs/Scripts/cncli.md +++ b/docs/Scripts/cncli.md @@ -1,7 +1,7 @@ !!! info "Reminder !!" Ensure the [Pre-Requisites](../basics.md#pre-requisites) are in place before you proceed. -`cncli.sh` is a script to download and deploy [CNCLI](https://github.com/AndrewWestberg/cncli) created and maintained by Andrew Westberg. It's a community-based CLI tool written in RUST for low-level `cardano-node` communication. Usage is **optional** and no script is dependent on it. The main features include: +`cncli.sh` is a script to download and deploy [CNCLI](https://github.com/cardano-community/cncli) created and maintained by Andrew Westberg. It's a community-based CLI tool written in Rust for low-level `cardano-node` communication. Usage is **optional** and no script depends on it. The main features include: - **PING** - Validates that the remote server is on the given network and returns its response time. Utilized by `gLiveView` for peer analysis if available. - **SYNC** - Connects to a node (local or remote) and synchronizes blocks to a local `sqlite` database. diff --git a/docs/Scripts/cntools-changelog.md b/docs/Scripts/cntools-changelog.md index 0d63550e4..f87869379 100644 --- a/docs/Scripts/cntools-changelog.md +++ b/docs/Scripts/cntools-changelog.md @@ -6,6 +6,24 @@ All notable changes to this tool will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). +## [10.0.1] - 2022-07-14 +#### Changed +- Transactions are now built using the cddl format to ensure that transaction formatting adheres to the ledger specs. +- Default to Mary-era transaction building format for now. +#### Fixed +- Cold signing fix for pool registration / update. The last key was added twice when assembling witnesses. + +## [10.0.0] - 2022-06-28 +#### Added +- Support for Vasil Fork +- Preliminary support for post-HF updates (a short release will follow post-fork in the coming days) +- Minimum version for Node bumped to 1.35.0 + +#### Changed +- Pool > Rotate code now uses the kes-periodinfo CLI query to get the counter from the node (fallback for Koios) +- Pool > Show Info updated to include current KES counter +- Update getEraIdentifier to include Babbage era + ## [9.1.0] - 2022-05-11 #### Changed - Harmonize flow for reusing old wallet configuration on pool modification vs setting new wallets.
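To illustrate the PING feature described in the cncli.md hunk above, here is a hedged invocation sketch; the relay host and port are placeholders, and the flag names follow upstream CNCLI usage rather than anything in this diff:

```bash
# Validate that a relay is on mainnet and measure its response time.
# relay1.example.com:3001 is a placeholder; 764824073 is the mainnet magic.
cncli ping --host relay1.example.com --port 3001 --network-magic 764824073
```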
diff --git a/docs/Scripts/gliveview.md b/docs/Scripts/gliveview.md index af5e0d3e3..9e92bf392 100644 --- a/docs/Scripts/gliveview.md +++ b/docs/Scripts/gliveview.md @@ -56,6 +56,12 @@ Displays live metrics from cardano-node gathered through the nodes EKG/Prometheu - **Tip (diff) / Status** - Will either show node status as `starting|sync xx.x%` or, if close to the reference tip, the tip difference `Tip (ref) - Tip (node)` to see how far off the tip (diff value) the node is. With current parameters a slot diff up to 40 from reference tip is considered good, but it should usually stay below 30. It's perfectly normal to see big differences in slots between blocks; it's the built-in randomness at play. To see if a node is really healthy and staying on tip, you would need to compare the tip between multiple nodes. - **Forks** - The number of forks since node start. Each fork means the blockchain evolved in a different direction, thereby discarding blocks. A high number of forks means there is a higher chance of orphaned blocks. - **Peers In / Out** - Shows how many connections the node has established in and out. See [Peer analysis](#peer-analysis) section for how to get more details of incoming and outgoing connections. +- **P2P Mode** + - `Cold` peers indicate the number of inactive but known peers to the node. + - `Warm` peers show how many established connections the node has. + - `Hot` peers show how many established connections are actually active. + - `Bi-Dir` (bidirectional) and `Uni-Dir` (unidirectional) indicate how the handshake protocol negotiated the connection. The connection between p2p nodes will always be bidirectional, but it will be unidirectional between p2p nodes and non-p2p nodes. + - `Duplex` shows the connections that are actually used in both directions; only bidirectional connections have this potential. - **Mem (RSS)** - RSS is the Resident Set Size and shows how much memory is allocated to cardano-node and that is in RAM. It does not include memory that is swapped out. It does include memory from shared libraries as long as the pages from those libraries are actually in memory. It does include all stack and heap memory. - **Mem (Live) / (Heap)** - GC (Garbage Collector) values that show how much memory is used for live/heap data. A large difference between them (or the heap approaching the physical memory limit) means the node is struggling with the garbage collector and/or may begin swapping. - **GC Minor / Major** - Collecting garbage from "Young space" is called a Minor GC. Major (Full) GC is done more rarely and is a more expensive operation. Explaining garbage collection is a topic outside the scope of this documentation and google is your friend for this.
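gLiveView gathers the metrics above from cardano-node's EKG and Prometheus endpoints, and the same raw values can be inspected by hand. A rough sketch, assuming the common default ports (12788 for EKG, 12798 for Prometheus); your `config.json` may differ:

```bash
# Dump the EKG metrics tree as JSON (the Accept header is required).
curl -s -H 'Accept: application/json' http://127.0.0.1:12788 | jq .

# Pull the Prometheus endpoint and filter for peer/memory/GC metrics
# (exact metric names vary across node versions).
curl -s http://127.0.0.1:12798/metrics | grep -Ei 'peers|rss|gc'
```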
diff --git a/docs/basics.md b/docs/basics.md index 4e2fda794..5accd8bd1 100644 --- a/docs/basics.md +++ b/docs/basics.md @@ -39,7 +39,7 @@ Install pre-requisites for building cardano-node and using CNTools -f Force overwrite of all files including normally saved user config sections in env, cnode.sh and gLiveView.sh topology.json, config.json and genesis files normally saved will also be overwritten -s Skip installing OS level dependencies (Default: will check and install any missing OS level prerequisites) --n Connect to specified network instead of mainnet network (Default: connect to cardano mainnet network) +-n Connect to specified network (mainnet | guild | testnet | staging) (Default: mainnet) eg: -n testnet -t Alternate name for top level folder, non alpha-numeric chars will be replaced with underscore (Default: cnode) -m Maximum time in seconds that you allow the file download operation to take before aborting (Default: 60s) diff --git a/files/docker/grest/scripts/docker-getmetrics.sh b/files/docker/grest/scripts/docker-getmetrics.sh index 4b156aa6d..b9b0f5300 100755 --- a/files/docker/grest/scripts/docker-getmetrics.sh +++ b/files/docker/grest/scripts/docker-getmetrics.sh @@ -119,7 +119,7 @@ function get-metrics() { export METRIC_grestschsize="${grestschsize}" export METRIC_dbsize="${dbsize}" #export METRIC_cnodeversion="$(echo $(cardano-node --version) | awk '{print $2 "-" $9}')" - #export METRIC_dbsyncversion="$(echo $(cardano-db-sync-extended --version) | awk '{print $2 "-" $9}')" + #export METRIC_dbsyncversion="$(echo $(cardano-db-sync --version) | awk '{print $2 "-" $9}')" #export METRIC_psqlversion="$(echo "" | psql -U "${POSTGRES_USER}" -d "${POSTGRES_DB}" -c "SELECT version();" | grep PostgreSQL | awk '{print $2}')" for metric_var_name in $(env | grep ^METRIC | sort | awk -F= '{print $1}') diff --git a/files/docker/node/dockerfile_stage3 b/files/docker/node/dockerfile_stage3 index cbf7ef0b1..e5fb1caa1 100644 --- a/files/docker/node/dockerfile_stage3 +++ b/files/docker/node/dockerfile_stage3 @@ -36,7 +36,7 @@ RUN sed -i 's/^# *\(en_US.UTF-8\)/\1/' /etc/locale.gen \ && echo "export LANGUAGE=en_US.UTF-8" >> ~/.bashrc # PREREQ -RUN apt-get update && apt-get install -y libcap2 libselinux1 libc6 libsodium-dev ncurses-bin iproute2 curl wget apt-utils xz-utils netbase sudo coreutils dnsutils net-tools procps tcptraceroute bc usbip sqlite3 python3 tmux jq ncurses-base libtool autoconf git gnupg tcptraceroute util-linux less openssl bsdmainutils dialog \ +RUN apt-get update && apt-get install -y libsecp256k1-0 libcap2 libselinux1 libc6 libsodium-dev ncurses-bin iproute2 curl wget apt-utils xz-utils netbase sudo coreutils dnsutils net-tools procps tcptraceroute bc usbip sqlite3 python3 tmux jq ncurses-base libtool autoconf git gnupg tcptraceroute util-linux less openssl bsdmainutils dialog \ && apt-get install -y --no-install-recommends cron \ && sudo apt-get -y purge && sudo apt-get -y clean && sudo apt-get -y autoremove && sudo rm -rf /var/lib/apt/lists/* # && sudo rm -rf /usr/bin/apt* diff --git a/files/grest/cron/jobs/active-stake-cache-update.sh b/files/grest/cron/jobs/active-stake-cache-update.sh index ed5ba2733..2cab81b63 100644 --- a/files/grest/cron/jobs/active-stake-cache-update.sh +++ b/files/grest/cron/jobs/active-stake-cache-update.sh @@ -6,31 +6,31 @@ echo "$(date +%F_%H:%M:%S) Running active stake cache update..." 
# High level check in db to see if update needed at all (should be updated only once on epoch transition) [[ $(psql ${DB_NAME} -qbt -c "SELECT grest.active_stake_cache_update_check();" | tail -2 | tr -cd '[:alnum:]') != 't' ]] && echo "No update needed, exiting..." && - exit 0; + exit 0 # This could break due to upstream changes on db-sync (based on log format) -last_epoch_stakes_log=$(grep -r 'Handling.*.stakes for epoch ' "$(dirname "$0")"/../../logs/dbsync-*.json "$(dirname "$0")"/../../logs/archive/dbsync-*.json 2>/dev/null | sed -e 's#.*.Handling ##' -e 's#stakes for epoch##' -e 's# slot .*.$##' | sort -k2 -n | tail -1) +last_epoch_stakes_log=$(grep -r 'Inserted.*.EpochStake for EpochNo ' "$(dirname "$0")"/../../logs/dbsync-*.json "$(dirname "$0")"/../../logs/archive/dbsync-*.json 2>/dev/null | sed -e 's#.*.Inserted ##' -e 's#EpochStake for EpochNo##' -e 's#\"}.*.$##' | sort -k2 -n | tail -1) -[[ -z ${last_epoch_stakes_log} ]] && echo "Could not find any 'Handling stakes' log entries, exiting..." && - exit 1; +[[ -z ${last_epoch_stakes_log} ]] && echo "Could not find any 'Inserted EpochStake' log entries, exiting..." && + exit 1 logs_last_epoch_stakes_count=$(echo "${last_epoch_stakes_log}" | cut -d\ -f1) logs_last_epoch_no=$(echo "${last_epoch_stakes_log}" | cut -d\ -f3) -db_last_epoch_no=$(psql ${DB_NAME} -qbt -c "SELECT grest.get_current_epoch();" | tr -cd '[:alnum:]') +db_last_epoch_no=$(psql ${DB_NAME} -qbt -c "SELECT MAX(NO) from EPOCH;" | tr -cd '[:alnum:]') [[ "${db_last_epoch_no}" != "${logs_last_epoch_no}" ]] && echo "Mismatch between last epoch in logs and database, exiting..." && - exit 1; + exit 1 # Count current epoch entries processed by db-sync -db_epoch_stakes_count=$(psql ${DB_NAME} -qbt -c "SELECT grest.get_epoch_stakes_count(${db_last_epoch_no});" | tr -cd '[:alnum:]') +db_epoch_stakes_count=$(psql ${DB_NAME} -qbt -c "SELECT COUNT(1) FROM EPOCH_STAKE WHERE epoch_no = ${db_last_epoch_no};" | tr -cd '[:alnum:]') # Check if db-sync completed handling stakes [[ "${db_epoch_stakes_count}" != "${logs_last_epoch_stakes_count}" ]] && echo "Logs last epoch stakes count: ${logs_last_epoch_stakes_count}" && echo "DB last epoch stakes count: ${db_epoch_stakes_count}" && echo "db-sync stakes handling still incomplete, exiting..." && - exit 0; + exit 0 # Stakes have been validated, run the cache update psql ${DB_NAME} -qbt -c "SELECT GREST.active_stake_cache_update(${db_last_epoch_no});" 2>&1 1>/dev/null diff --git a/files/grest/cron/jobs/stake-snapshot-cache.sh b/files/grest/cron/jobs/stake-snapshot-cache.sh new file mode 100644 index 000000000..3b0f9b02e --- /dev/null +++ b/files/grest/cron/jobs/stake-snapshot-cache.sh @@ -0,0 +1,6 @@ +#!/bin/bash +DB_NAME=cexplorer + +echo "$(date +%F_%H:%M:%S) Capturing last epoch's snapshot..." +psql ${DB_NAME} -qbt -c "CALL GREST.CAPTURE_LAST_EPOCH_SNAPSHOT();" 2>&1 1>/dev/null +echo "$(date +%F_%H:%M:%S) Job done!"
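The new `stake-snapshot-cache.sh` job above is safe to schedule repeatedly, since `CAPTURE_LAST_EPOCH_SNAPSHOT()` returns early when the previous epoch's snapshot already exists. A deployment sketch follows; the install path and schedule are illustrative, not taken from this diff:

```bash
# One-off manual run, mirroring what the cron job executes:
psql cexplorer -qbt -c "CALL GREST.CAPTURE_LAST_EPOCH_SNAPSHOT();"

# Hypothetical crontab entry to retry every 15 minutes around epoch boundaries:
# */15 * * * * /opt/cardano/cnode/files/grest/cron/jobs/stake-snapshot-cache.sh >> /var/log/stake-snapshot-cache.log 2>&1
```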
diff --git a/files/grest/rpc/00_blockchain/genesis.sql b/files/grest/rpc/00_blockchain/genesis.sql new file mode 100644 index 000000000..9746e282b --- /dev/null +++ b/files/grest/rpc/00_blockchain/genesis.sql @@ -0,0 +1,36 @@ +CREATE FUNCTION grest.genesis () + RETURNS TABLE ( + NETWORKMAGIC varchar, + NETWORKID varchar, + ACTIVESLOTCOEFF varchar, + UPDATEQUORUM varchar, + MAXLOVELACESUPPLY varchar, + EPOCHLENGTH varchar, + SYSTEMSTART integer, + SLOTSPERKESPERIOD varchar, + SLOTLENGTH varchar, + MAXKESREVOLUTIONS varchar, + SECURITYPARAM varchar, + ALONZOGENESIS varchar + ) + LANGUAGE PLPGSQL + AS $$ +BEGIN + RETURN QUERY + SELECT + g.NETWORKMAGIC, + g.NETWORKID, + g.ACTIVESLOTCOEFF, + g.UPDATEQUORUM, + g.MAXLOVELACESUPPLY, + g.EPOCHLENGTH, + EXTRACT(epoch from g.SYSTEMSTART::timestamp)::integer, + g.SLOTSPERKESPERIOD, + g.SLOTLENGTH, + g.MAXKESREVOLUTIONS, + g.SECURITYPARAM, + g.ALONZOGENESIS + FROM + grest.genesis g; +END; +$$; diff --git a/files/grest/rpc/00_blockchain/tip.sql b/files/grest/rpc/00_blockchain/tip.sql index 885b6ee0b..eb4539c5a 100644 --- a/files/grest/rpc/00_blockchain/tip.sql +++ b/files/grest/rpc/00_blockchain/tip.sql @@ -1,11 +1,11 @@ CREATE FUNCTION grest.tip () RETURNS TABLE ( hash text, - epoch_no uinteger, - abs_slot uinteger, - epoch_slot uinteger, - block_no uinteger, - block_time double precision + epoch_no word31type, + abs_slot word63type, + epoch_slot word31type, + block_no word31type, + block_time integer ) LANGUAGE PLPGSQL AS $$ @@ -17,7 +17,7 @@ BEGIN b.SLOT_NO AS ABS_SLOT, b.EPOCH_SLOT_NO AS EPOCH_SLOT, b.BLOCK_NO, - EXTRACT(EPOCH from b.TIME) + EXTRACT(EPOCH from b.TIME)::integer FROM BLOCK B ORDER BY diff --git a/files/grest/rpc/00_blockchain/totals.sql b/files/grest/rpc/00_blockchain/totals.sql index f616f2805..bb562f085 100644 --- a/files/grest/rpc/00_blockchain/totals.sql +++ b/files/grest/rpc/00_blockchain/totals.sql @@ -1,6 +1,6 @@ CREATE FUNCTION grest.totals (_epoch_no numeric DEFAULT NULL) RETURNS TABLE ( - epoch_no uinteger, + epoch_no word31type, circulation text, treasury text, reward text, diff --git a/files/grest/rpc/01_cached_tables/active_stake_cache.sql b/files/grest/rpc/01_cached_tables/active_stake_cache.sql index 384e8cf15..c8135f4a7 100644 --- a/files/grest/rpc/01_cached_tables/active_stake_cache.sql +++ b/files/grest/rpc/01_cached_tables/active_stake_cache.sql @@ -1,6 +1,3 @@ --------------------------------------------------------------------------------- --- Pool active stake cache setup --------------------------------------------------------------------------------- CREATE TABLE IF NOT EXISTS GREST.POOL_ACTIVE_STAKE_CACHE ( POOL_ID varchar NOT NULL, EPOCH_NO bigint NOT NULL, @@ -22,39 +19,6 @@ CREATE TABLE IF NOT EXISTS GREST.ACCOUNT_ACTIVE_STAKE_CACHE ( PRIMARY KEY (STAKE_ADDRESS, POOL_ID, EPOCH_NO) ); -/* HELPER FUNCTIONS */ - -CREATE FUNCTION grest.get_last_active_stake_validated_epoch () - RETURNS INTEGER - LANGUAGE plpgsql - AS -$$ - BEGIN - RETURN ( - SELECT - last_value -- coalesce() doesn't work if empty set - FROM - grest.control_table - WHERE - key = 'last_active_stake_validated_epoch' - ); - END; -$$; - -/* POSSIBLE VALIDATION FOR CACHE (COUNTING ENTRIES) INSTEAD OF JUST DB-SYNC PART (EPOCH_STAKE) - -CREATE FUNCTION grest.get_last_active_stake_cache_address_count () - RETURNS INTEGER - LANGUAGE plpgsql - AS $$ - BEGIN - RETURN ( - SELECT count(*) from cache... 
- ) - END; - $$; - */ - CREATE FUNCTION grest.active_stake_cache_update_check () RETURNS BOOLEAN LANGUAGE plpgsql @@ -64,15 +28,19 @@ $$ _current_epoch_no integer; _last_active_stake_validated_epoch text; BEGIN - SELECT - grest.get_last_active_stake_validated_epoch() - INTO - _last_active_stake_validated_epoch; - SELECT - grest.get_current_epoch() - INTO - _current_epoch_no; + -- Get Last Active Stake Validated Epoch + SELECT last_value + INTO _last_active_stake_validated_epoch + FROM + grest.control_table + WHERE + key = 'last_active_stake_validated_epoch'; + + -- Get Current Epoch + SELECT MAX(NO) + INTO _current_epoch_no + FROM epoch; RAISE NOTICE 'Current epoch: %', _current_epoch_no; @@ -92,7 +60,6 @@ $$; COMMENT ON FUNCTION grest.active_stake_cache_update_check IS 'Internal function to determine whether active stake cache should be updated'; -/* UPDATE FUNCTION */ CREATE FUNCTION grest.active_stake_cache_update (_epoch_no integer) RETURNS VOID LANGUAGE plpgsql @@ -127,10 +94,10 @@ $$ /* POOL ACTIVE STAKE CACHE */ SELECT COALESCE(MAX(epoch_no), 0) - FROM - GREST.POOL_ACTIVE_STAKE_CACHE INTO - _last_pool_active_stake_cache_epoch_no; + _last_pool_active_stake_cache_epoch_no + FROM + GREST.POOL_ACTIVE_STAKE_CACHE; INSERT INTO GREST.POOL_ACTIVE_STAKE_CACHE SELECT @@ -157,9 +124,9 @@ $$ /* EPOCH ACTIVE STAKE CACHE */ SELECT COALESCE(MAX(epoch_no), 0) + INTO _last_epoch_active_stake_cache_epoch_no FROM - GREST.EPOCH_ACTIVE_STAKE_CACHE - INTO _last_epoch_active_stake_cache_epoch_no; + GREST.EPOCH_ACTIVE_STAKE_CACHE; INSERT INTO GREST.EPOCH_ACTIVE_STAKE_CACHE SELECT @@ -180,10 +147,10 @@ $$ /* ACCOUNT ACTIVE STAKE CACHE */ SELECT - COALESCE(MAX(epoch_no), 0) + COALESCE(MAX(epoch_no), (_epoch_no - 4) ) + INTO _last_account_active_stake_cache_epoch_no FROM - GREST.ACCOUNT_ACTIVE_STAKE_CACHE - INTO _last_account_active_stake_cache_epoch_no; + GREST.ACCOUNT_ACTIVE_STAKE_CACHE; INSERT INTO GREST.ACCOUNT_ACTIVE_STAKE_CACHE SELECT @@ -210,6 +177,9 @@ $$ ) DO UPDATE SET AMOUNT = EXCLUDED.AMOUNT; + DELETE FROM GREST.ACCOUNT_ACTIVE_STAKE_CACHE + WHERE EPOCH_NO <= (_epoch_no - 4); + /* CONTROL TABLE ENTRY */ PERFORM grest.update_control_table( 'last_active_stake_validated_epoch', diff --git a/files/grest/rpc/01_cached_tables/asset_registry_cache.sql b/files/grest/rpc/01_cached_tables/asset_registry_cache.sql index e9b91c46a..e54d23d42 100644 --- a/files/grest/rpc/01_cached_tables/asset_registry_cache.sql +++ b/files/grest/rpc/01_cached_tables/asset_registry_cache.sql @@ -21,7 +21,7 @@ CREATE FUNCTION grest.asset_registry_cache_update ( _ticker text DEFAULT NULL, _url text DEFAULT NULL, _logo text DEFAULT NULL, - _decimals uinteger DEFAULT 0 + _decimals word31type DEFAULT 0 ) RETURNS void LANGUAGE plpgsql diff --git a/files/grest/rpc/01_cached_tables/epoch_info_cache.sql b/files/grest/rpc/01_cached_tables/epoch_info_cache.sql index 955588b88..0c513f0e0 100644 --- a/files/grest/rpc/01_cached_tables/epoch_info_cache.sql +++ b/files/grest/rpc/01_cached_tables/epoch_info_cache.sql @@ -1,29 +1,30 @@ CREATE TABLE IF NOT EXISTS grest.epoch_info_cache ( - epoch_no uinteger PRIMARY KEY NOT NULL, + epoch_no word31type PRIMARY KEY NOT NULL, i_out_sum word128type NOT NULL, i_fees lovelace NOT NULL, - i_tx_count uinteger NOT NULL, - i_blk_count uinteger NOT NULL, - i_first_block_time double precision UNIQUE NOT NULL, - i_last_block_time double precision UNIQUE NOT NULL, + i_tx_count word31type NOT NULL, + i_blk_count word31type NOT NULL, + i_first_block_time numeric UNIQUE NOT NULL, + i_last_block_time numeric UNIQUE 
NOT NULL, i_total_rewards lovelace DEFAULT NULL, - i_avg_blk_reward uinteger DEFAULT NULL, - p_min_fee_a uinteger NULL, - p_min_fee_b uinteger NULL, - p_max_block_size uinteger NULL, - p_max_tx_size uinteger NULL, - p_max_bh_size uinteger NULL, + i_avg_blk_reward lovelace DEFAULT NULL, + i_last_tx_id bigint DEFAULT NULL, + p_min_fee_a word31type NULL, + p_min_fee_b word31type NULL, + p_max_block_size word31type NULL, + p_max_tx_size word31type NULL, + p_max_bh_size word31type NULL, p_key_deposit lovelace NULL, p_pool_deposit lovelace NULL, - p_max_epoch uinteger NULL, - p_optimal_pool_count uinteger NULL, + p_max_epoch word31type NULL, + p_optimal_pool_count word31type NULL, p_influence double precision NULL, p_monetary_expand_rate double precision NULL, p_treasury_growth_rate double precision NULL, p_decentralisation double precision NULL, - p_entropy text, - p_protocol_major uinteger NULL, - p_protocol_minor uinteger NULL, + p_extra_entropy text, + p_protocol_major word31type NULL, + p_protocol_minor word31type NULL, p_min_utxo_value lovelace NULL, p_min_pool_cost lovelace NULL, p_nonce text, @@ -36,9 +37,9 @@ CREATE TABLE IF NOT EXISTS grest.epoch_info_cache ( p_max_block_ex_mem word64type, p_max_block_ex_steps word64type, p_max_val_size word64type, - p_collateral_percent uinteger, - p_max_collateral_inputs uinteger, - p_coins_per_utxo_word lovelace + p_collateral_percent word31type, + p_max_collateral_inputs word31type, + p_coins_per_utxo_size lovelace ); COMMENT ON TABLE grest.epoch_info_cache IS 'Contains detailed info for epochs including protocol parameters'; @@ -93,7 +94,7 @@ BEGIN IF _curr_epoch = _latest_epoch_no_in_cache THEN RAISE NOTICE 'Updating latest epoch info in cache...'; - PERFORM grest.UPDATE_LATEST_EPOCH_INFO_CACHE(_latest_epoch_no_in_cache); + PERFORM grest.UPDATE_LATEST_EPOCH_INFO_CACHE(_curr_epoch, _latest_epoch_no_in_cache); RETURN; END IF; @@ -104,7 +105,7 @@ BEGIN RAISE NOTICE 'Updating cache with new epoch(s) data...'; -- We need to update last epoch one last time before going to new one - PERFORM grest.UPDATE_LATEST_EPOCH_INFO_CACHE(_latest_epoch_no_in_cache); + PERFORM grest.UPDATE_LATEST_EPOCH_INFO_CACHE(_curr_epoch, _latest_epoch_no_in_cache); -- Populate rewards data for epoch n - 2 PERFORM grest.UPDATE_TOTAL_REWARDS_EPOCH_INFO_CACHE(_latest_epoch_no_in_cache - 1); -- Continue new epoch data insert @@ -132,6 +133,7 @@ BEGIN WHEN e.no <= _curr_epoch THEN ROUND(reward_pot.amount / e.blk_count) ELSE NULL END AS i_avg_blk_reward, + last_tx.tx_id AS i_last_tx_id, ep.min_fee_a AS p_min_fee_a, ep.min_fee_b AS p_min_fee_b, ep.max_block_size AS p_max_block_size, @@ -145,7 +147,7 @@ BEGIN ep.monetary_expand_rate AS p_monetary_expand_rate, ep.treasury_growth_rate AS p_treasury_growth_rate, ep.decentralisation AS p_decentralisation, - ENCODE(ep.entropy, 'hex') AS p_entropy, + ENCODE(ep.extra_entropy, 'hex') AS p_extra_entropy, ep.protocol_major AS p_protocol_major, ep.protocol_minor AS p_protocol_minor, ep.min_utxo_value AS p_min_utxo_value, @@ -162,7 +164,7 @@ BEGIN ep.max_val_size AS p_max_val_size, ep.collateral_percent AS p_collateral_percent, ep.max_collateral_inputs AS p_max_collateral_inputs, - ep.coins_per_utxo_word AS p_coins_per_utxo_word + ep.coins_per_utxo_size AS p_coins_per_utxo_size FROM epoch e LEFT JOIN epoch_param ep ON ep.epoch_no = e.no @@ -179,6 +181,16 @@ BEGIN GROUP BY e.no ) reward_pot ON TRUE + LEFT JOIN LATERAL ( + SELECT + MAX(tx.id) AS tx_id + FROM + block b + INNER JOIN tx ON tx.block_id = b.id + WHERE + b.epoch_no = e.no + AND b.epoch_no 
<> _curr_epoch + ) last_tx ON TRUE WHERE e.no >= _epoch_no_to_insert_from ORDER BY @@ -189,11 +201,29 @@ END; $$; -- Helper function for updating current epoch data -CREATE FUNCTION grest.UPDATE_LATEST_EPOCH_INFO_CACHE (_epoch_no_to_update bigint) +CREATE FUNCTION grest.UPDATE_LATEST_EPOCH_INFO_CACHE (_curr_epoch bigint, _epoch_no_to_update bigint) RETURNS void LANGUAGE plpgsql AS $$ BEGIN + + -- only update last tx id in case of new epoch + IF _curr_epoch <> _epoch_no_to_update THEN + UPDATE + grest.epoch_info_cache + SET + i_last_tx_id = last_tx.tx_id + FROM ( + SELECT + MAX(tx.id) AS tx_id + FROM + block b + INNER JOIN tx ON tx.block_id = b.id + WHERE + b.epoch_no = _epoch_no_to_update + ) last_tx; + END IF; + UPDATE grest.epoch_info_cache SET diff --git a/files/grest/rpc/01_cached_tables/pool_history_cache.sql b/files/grest/rpc/01_cached_tables/pool_history_cache.sql index 069382961..e065ca18b 100644 --- a/files/grest/rpc/01_cached_tables/pool_history_cache.sql +++ b/files/grest/rpc/01_cached_tables/pool_history_cache.sql @@ -26,7 +26,7 @@ declare _curr_epoch bigint; _latest_epoch_no_in_cache bigint; begin - IF ( + IF ( SELECT COUNT(pid) > 1 FROM diff --git a/files/grest/rpc/01_cached_tables/pool_info_cache.sql b/files/grest/rpc/01_cached_tables/pool_info_cache.sql index ca5861f12..b3726644a 100644 --- a/files/grest/rpc/01_cached_tables/pool_info_cache.sql +++ b/files/grest/rpc/01_cached_tables/pool_info_cache.sql @@ -3,8 +3,9 @@ DROP TABLE IF EXISTS grest.pool_info_cache; CREATE TABLE grest.pool_info_cache ( id SERIAL PRIMARY KEY, tx_id bigint NOT NULL, + update_id bigint NOT NULL, tx_hash text, - block_time double precision, + block_time numeric, pool_hash_id bigint NOT NULL, pool_id_bech32 character varying NOT NULL, pool_id_hex text NOT NULL, @@ -20,7 +21,7 @@ CREATE TABLE grest.pool_info_cache ( meta_url character varying, meta_hash text, pool_status text, - retiring_epoch uinteger + retiring_epoch word31type ); COMMENT ON TABLE grest.pool_info_cache IS 'A summary of all pool parameters and updates'; @@ -34,15 +35,15 @@ CREATE FUNCTION grest.pool_info_insert ( _margin double precision, _fixed_cost lovelace, _pledge lovelace, - _reward_addr addr29type, + _reward_addr_id bigint, _meta_id bigint ) RETURNS void LANGUAGE plpgsql AS $$ DECLARE - _current_epoch_no uinteger; - _retiring_epoch uinteger; + _current_epoch_no word31type; + _retiring_epoch word31type; _pool_status text; BEGIN SELECT COALESCE(MAX(no), 0) INTO _current_epoch_no FROM public.epoch; @@ -64,6 +65,7 @@ BEGIN INSERT INTO grest.pool_info_cache ( tx_id, + update_id, tx_hash, block_time, pool_hash_id, @@ -85,6 +87,7 @@ BEGIN ) SELECT _tx_id, + _update_id, encode(tx.hash::bytea, 'hex'), EXTRACT(epoch from b.time), _hash_id, @@ -101,7 +104,7 @@ BEGIN sa.view FROM public.pool_owner AS po INNER JOIN public.stake_address AS sa ON sa.id = po.addr_id - WHERE po.registered_tx_id = _tx_id + WHERE po.pool_update_id = _update_id ), ARRAY( SELECT json_build_object( @@ -123,7 +126,7 @@ BEGIN INNER JOIN public.tx ON tx.id = _tx_id INNER JOIN public.block AS b ON b.id = tx.block_id LEFT JOIN public.pool_metadata_ref AS pmr ON pmr.id = _meta_id - LEFT JOIN public.stake_address AS sa ON sa.hash_raw = _reward_addr + LEFT JOIN public.stake_address AS sa ON sa.id = _reward_addr_id WHERE ph.id = _hash_id; END; $$; @@ -153,10 +156,10 @@ CREATE FUNCTION grest.pool_info_retire_update () LANGUAGE plpgsql AS $$ DECLARE - _current_epoch_no uinteger; + _current_epoch_no word31type; _pool_hash_id bigint; _latest_pool_update_tx_id bigint; - 
_retiring_epoch uinteger; + _retiring_epoch word31type; _pool_status text; BEGIN SELECT COALESCE(MAX(no), 0) INTO _current_epoch_no FROM public.epoch; @@ -216,7 +219,7 @@ BEGIN NEW.margin, NEW.fixed_cost, NEW.pledge, - NEW.reward_addr, + NEW.reward_addr_id, NEW.meta_id ); ELSIF (TG_OP = 'DELETE') THEN @@ -251,9 +254,7 @@ BEGIN SET owners = owners || (SELECT sa.view FROM public.stake_address AS sa WHERE sa.id = NEW.addr_id) WHERE - pool_hash_id = NEW.pool_hash_id - AND - tx_id = NEW.registered_tx_id; + update_id = NEW.pool_update_id; END IF; RETURN NULL; @@ -312,7 +313,9 @@ DECLARE BEGIN SELECT COALESCE(MAX(tx_id), 0) INTO _latest_pool_info_tx_id FROM grest.pool_info_cache; - FOR rec IN (SELECT * FROM public.pool_update AS pu WHERE pu.registered_tx_id > _latest_pool_info_tx_id) LOOP + FOR rec IN ( + SELECT * FROM public.pool_update AS pu WHERE pu.registered_tx_id > _latest_pool_info_tx_id + ) LOOP PERFORM grest.pool_info_insert( rec.id, rec.registered_tx_id, @@ -322,7 +325,7 @@ BEGIN rec.margin, rec.fixed_cost, rec.pledge, - rec.reward_addr, + rec.reward_addr_id, rec.meta_id ); END LOOP; diff --git a/files/grest/rpc/01_cached_tables/stake_distribution_cache.sql b/files/grest/rpc/01_cached_tables/stake_distribution_cache.sql index 7b6a7f3ed..4f33942e9 100644 --- a/files/grest/rpc/01_cached_tables/stake_distribution_cache.sql +++ b/files/grest/rpc/01_cached_tables/stake_distribution_cache.sql @@ -23,12 +23,12 @@ BEGIN SELECT (last_value::integer - 2)::integer INTO _active_stake_epoch FROM GREST.CONTROL_TABLE WHERE key = 'last_active_stake_validated_epoch'; - SELECT id INTO _last_active_stake_blockid FROM PUBLIC.BLOCK - WHERE epoch_no = _active_stake_epoch - AND block_no IS NOT NULL - ORDER BY block_no DESC LIMIT 1; - - SELECT MAX(id) INTO _last_account_tx_id FROM PUBLIC.TX WHERE block_id = _last_active_stake_blockid; + SELECT MAX(tx.id) INTO _last_account_tx_id + FROM PUBLIC.TX + INNER JOIN BLOCK b ON b.id = tx.block_id + WHERE b.epoch_no <= _active_stake_epoch + AND b.block_no IS NOT NULL + AND b.tx_count != 0; SELECT MAX(no) INTO _latest_epoch FROM PUBLIC.EPOCH WHERE NO IS NOT NULL; @@ -168,7 +168,7 @@ BEGIN END; $$; - + -- HELPER FUNCTION: GREST.STAKE_DISTRIBUTION_CACHE_UPDATE_CHECK -- Determines whether or not the stake distribution cache should be updated -- based on the time rule (max once in 60 mins), and ensures previous run completed. 
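The comment above references the helper `GREST.STAKE_DISTRIBUTION_CACHE_UPDATE_CHECK`, which gates updates by a time rule (at most once per 60 minutes) and a completed-previous-run check. A minimal sketch of exercising it from a shell, assuming the `cexplorer` database name used elsewhere in this diff:

```bash
# Invoke the gating helper; based on the time rule and the previous run's
# completion it decides whether a stake distribution update should proceed.
psql cexplorer -qbt -c "SELECT grest.stake_distribution_cache_update_check();"
```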
diff --git a/files/grest/rpc/01_cached_tables/stake_snapshot_cache.sql b/files/grest/rpc/01_cached_tables/stake_snapshot_cache.sql new file mode 100644 index 000000000..23f8ab45c --- /dev/null +++ b/files/grest/rpc/01_cached_tables/stake_snapshot_cache.sql @@ -0,0 +1,367 @@ +/* Keeps track of stake snapshots taken at the end of epochs n - 1 and n - 2 */ +CREATE TABLE IF NOT EXISTS GREST.stake_snapshot_cache ( + addr_id integer, + pool_id integer, + amount numeric, + epoch_no bigint, + PRIMARY KEY (addr_id, epoch_no) +); + +CREATE INDEX IF NOT EXISTS _idx_pool_id ON grest.stake_snapshot_cache (pool_id); +CREATE INDEX IF NOT EXISTS _idx_addr_id ON grest.stake_snapshot_cache (addr_id); + +CREATE OR REPLACE PROCEDURE GREST.CAPTURE_LAST_EPOCH_SNAPSHOT () +LANGUAGE PLPGSQL +AS $$ +DECLARE + _previous_epoch_no bigint; + _lower_bound_account_tx_id bigint; + _upper_bound_account_tx_id bigint; + _newly_registered_account_ids bigint[]; +BEGIN + IF ( + -- If checking query with the same name there will be 2 results + SELECT COUNT(pid) > 1 + FROM pg_stat_activity + WHERE state = 'active' + AND query ILIKE '%GREST.CAPTURE_LAST_EPOCH_SNAPSHOT(%' + AND datname = ( + SELECT current_database() + ) + ) THEN + RAISE EXCEPTION 'Previous query still running but should have completed! Exiting...'; + END IF; + + SELECT MAX(NO) - 1 INTO _previous_epoch_no FROM PUBLIC.EPOCH; + + IF EXISTS ( + SELECT FROM grest.stake_snapshot_cache + WHERE epoch_no = _previous_epoch_no + ) THEN + RETURN; + END IF; + + -- Set-up interval limits for previous epoch + SELECT MAX(tx.id) INTO _lower_bound_account_tx_id + FROM PUBLIC.TX + INNER JOIN BLOCK b ON b.id = tx.block_id + WHERE b.epoch_no <= _previous_epoch_no - 2 + AND b.block_no IS NOT NULL + AND b.tx_count != 0; + + SELECT MAX(tx.id) INTO _upper_bound_account_tx_id + FROM PUBLIC.TX + INNER JOIN BLOCK b ON b.id = tx.block_id + WHERE b.epoch_no <= _previous_epoch_no + AND b.block_no IS NOT NULL + AND b.tx_count != 0; + + /* Temporary table to figure out valid delegations ending up in active stake in case of pool retires */ + DROP TABLE IF EXISTS minimum_pool_delegation_tx_ids; + CREATE TEMP TABLE minimum_pool_delegation_tx_ids ( + pool_hash_id integer PRIMARY KEY, + latest_registered_tx_id integer, + latest_registered_tx_cert_index integer + ); + + DROP TABLE IF EXISTS latest_accounts_delegation_txs; + CREATE TEMP TABLE latest_accounts_delegation_txs ( + addr_id integer PRIMARY KEY, + tx_id integer, + cert_index integer, + pool_hash_id integer + ); + + DROP TABLE IF EXISTS rewards_subset; + CREATE TEMP TABLE rewards_subset ( + stake_address_id bigint, + type rewardtype, + spendable_epoch bigint, + amount lovelace + ); + + INSERT INTO rewards_subset + SELECT addr_id, type, spendable_epoch, amount + FROM reward + WHERE spendable_epoch BETWEEN _previous_epoch_no - 1 AND _previous_epoch_no + 1; + +/* Registered and delegated accounts to be captured (have epoch_stake entries for baseline) */ + WITH + latest_non_cancelled_pool_retire as ( + SELECT DISTINCT ON (pr.hash_id) + pr.hash_id, + pr.retiring_epoch + FROM pool_retire pr + WHERE + pr.announced_tx_id <= _upper_bound_account_tx_id + AND pr.retiring_epoch <= _previous_epoch_no + AND NOT EXISTS ( + SELECT TRUE + FROM pool_update pu + WHERE pu.hash_id = pr.hash_id + AND ( + pu.registered_tx_id > pr.announced_tx_id + OR ( + pu.registered_tx_id = pr.announced_tx_id + AND pu.cert_index > pr.cert_index + ) + ) + AND pu.registered_tx_id <= _upper_bound_account_tx_id + AND pu.registered_tx_id <= ( + SELECT i_last_tx_id + FROM 
grest.epoch_info_cache eic + WHERE eic.epoch_no = pr.retiring_epoch - 1 + ) + ) + AND NOT EXISTS ( + SELECT TRUE + FROM pool_retire sub_pr + WHERE pr.hash_id = sub_pr.hash_id + AND ( + sub_pr.announced_tx_id > pr.announced_tx_id + OR ( + sub_pr.announced_tx_id = pr.announced_tx_id + AND sub_pr.cert_index > pr.cert_index + ) + ) + AND sub_pr.announced_tx_id <= _upper_bound_account_tx_id + AND sub_pr.announced_tx_id <= ( + SELECT i_last_tx_id + FROM grest.epoch_info_cache eic + WHERE eic.epoch_no = pr.retiring_epoch - 1 + ) + ) + ORDER BY + pr.hash_id, pr.retiring_epoch DESC + ) + + INSERT INTO minimum_pool_delegation_tx_ids + SELECT DISTINCT ON (pu.hash_id) + pu.hash_id, + pu.registered_tx_id as min_tx_id, + pu.cert_index + FROM pool_update pu + LEFT JOIN latest_non_cancelled_pool_retire lncpr ON lncpr.hash_id = pu.hash_id + WHERE pu.registered_tx_id <= _upper_bound_account_tx_id + AND + CASE WHEN lncpr.retiring_epoch IS NOT NULL + THEN + pu.registered_tx_id > ( + SELECT i_last_tx_id + FROM grest.epoch_info_cache eic + WHERE eic.epoch_no = lncpr.retiring_epoch - 1 + ) + ELSE TRUE + END + ORDER BY + pu.hash_id, pu.registered_tx_id ASC; + + INSERT INTO latest_accounts_delegation_txs + SELECT distinct on (d.addr_id) + d.addr_id, + d.tx_id, + d.cert_index, + d.pool_hash_id + FROM DELEGATION D + WHERE + d.tx_id <= _upper_bound_account_tx_id + AND NOT EXISTS ( + SELECT TRUE FROM STAKE_DEREGISTRATION + WHERE STAKE_DEREGISTRATION.ADDR_ID = D.ADDR_ID + AND ( + STAKE_DEREGISTRATION.TX_ID > D.TX_ID + OR ( + STAKE_DEREGISTRATION.TX_ID = D.TX_ID + AND STAKE_DEREGISTRATION.CERT_INDEX > D.CERT_INDEX + ) + ) + AND STAKE_DEREGISTRATION.TX_ID <= _upper_bound_account_tx_id + ) + ORDER BY + d.addr_id, d.tx_id DESC; + + CREATE INDEX _idx_pool_hash_id ON latest_accounts_delegation_txs (pool_hash_id); + + + /* Registered and delegated accounts to be captured (have epoch_stake entries for baseline) */ + WITH + accounts_with_delegated_pools AS ( + SELECT DISTINCT ON (ladt.addr_id) + ladt.addr_id as stake_address_id, + ladt.pool_hash_id + FROM latest_accounts_delegation_txs ladt + INNER JOIN minimum_pool_delegation_tx_ids mpdtx ON mpdtx.pool_hash_id = ladt.pool_hash_id + WHERE + ( + ladt.tx_id > mpdtx.latest_registered_tx_id + OR ( + ladt.tx_id = mpdtx.latest_registered_tx_id + AND ladt.cert_index > mpdtx.latest_registered_tx_cert_index + ) + ) + -- Account must be present in epoch_stake table for the previous epoch + AND EXISTS ( + SELECT TRUE FROM epoch_stake es + WHERE es.epoch_no = _previous_epoch_no + AND es.addr_id = ladt.addr_id + ) + ), + account_active_stake AS ( + SELECT awdp.stake_address_id, es.amount + FROM public.epoch_stake es + INNER JOIN accounts_with_delegated_pools awdp ON awdp.stake_address_id = es.addr_id + WHERE epoch_no = _previous_epoch_no + ), + account_delta_tx_ins AS ( + SELECT awdp.stake_address_id, tx_in.tx_out_id AS txoid, tx_in.tx_out_index AS txoidx FROM tx_in + LEFT JOIN tx_out ON tx_in.tx_out_id = tx_out.tx_id AND tx_in.tx_out_index::smallint = tx_out.index::smallint + INNER JOIN accounts_with_delegated_pools awdp ON awdp.stake_address_id = tx_out.stake_address_id + WHERE tx_in.tx_in_id > _lower_bound_account_tx_id + AND tx_in.tx_in_id <= _upper_bound_account_tx_id + ), + account_delta_input AS ( + SELECT tx_out.stake_address_id, COALESCE(SUM(tx_out.value), 0) AS amount + FROM account_delta_tx_ins + LEFT JOIN tx_out ON account_delta_tx_ins.txoid=tx_out.tx_id AND account_delta_tx_ins.txoidx = tx_out.index + INNER JOIN accounts_with_delegated_pools awdp ON awdp.stake_address_id = 
tx_out.stake_address_id + GROUP BY tx_out.stake_address_id + ), + account_delta_output AS ( + SELECT awdp.stake_address_id, COALESCE(SUM(tx_out.value), 0) AS amount + FROM tx_out + INNER JOIN accounts_with_delegated_pools awdp ON awdp.stake_address_id = tx_out.stake_address_id + WHERE TX_OUT.TX_ID > _lower_bound_account_tx_id + AND TX_OUT.TX_ID <= _upper_bound_account_tx_id + GROUP BY awdp.stake_address_id + ), + account_delta_rewards AS ( + SELECT awdp.stake_address_id, COALESCE(SUM(rs.amount), 0) AS REWARDS + FROM rewards_subset rs + INNER JOIN accounts_with_delegated_pools awdp ON awdp.stake_address_id = rs.stake_address_id + WHERE + CASE WHEN rs.type = 'refund' + THEN rs.spendable_epoch IN (_previous_epoch_no - 1, _previous_epoch_no) + ELSE rs.spendable_epoch IN (_previous_epoch_no, _previous_epoch_no + 1) + END + GROUP BY awdp.stake_address_id + ), + account_delta_withdrawals AS ( + SELECT accounts_with_delegated_pools.stake_address_id, COALESCE(SUM(withdrawal.amount), 0) AS withdrawals + FROM withdrawal + INNER JOIN accounts_with_delegated_pools ON accounts_with_delegated_pools.stake_address_id = withdrawal.addr_id + WHERE withdrawal.tx_id > _lower_bound_account_tx_id + AND withdrawal.tx_id <= _upper_bound_account_tx_id + GROUP BY accounts_with_delegated_pools.stake_address_id + ) + + INSERT INTO GREST.stake_snapshot_cache + SELECT + awdp.stake_address_id as addr_id, + awdp.pool_hash_id, + COALESCE(aas.amount, 0) + COALESCE(ado.amount, 0) - COALESCE(adi.amount, 0) + COALESCE(adr.rewards, 0) - COALESCE(adw.withdrawals, 0) as AMOUNT, + _previous_epoch_no as epoch_no + from accounts_with_delegated_pools awdp + LEFT JOIN account_active_stake aas ON aas.stake_address_id = awdp.stake_address_id + LEFT JOIN account_delta_input adi ON adi.stake_address_id = awdp.stake_address_id + LEFT JOIN account_delta_output ado ON ado.stake_address_id = awdp.stake_address_id + LEFT JOIN account_delta_rewards adr ON adr.stake_address_id = awdp.stake_address_id + LEFT JOIN account_delta_withdrawals adw ON adw.stake_address_id = awdp.stake_address_id + ON CONFLICT (addr_id, EPOCH_NO) DO + UPDATE + SET + POOL_ID = EXCLUDED.POOL_ID, + AMOUNT = EXCLUDED.AMOUNT; + + /* Newly registered accounts to be captured (they don't have epoch_stake entries for baseline) */ + SELECT INTO _newly_registered_account_ids ARRAY_AGG(addr_id) + FROM ( + SELECT DISTINCT ladt.addr_id + FROM latest_accounts_delegation_txs ladt + INNER JOIN minimum_pool_delegation_tx_ids mpdtx ON mpdtx.pool_hash_id = ladt.pool_hash_id + WHERE + ( + ladt.tx_id > mpdtx.latest_registered_tx_id + OR ( + ladt.tx_id = mpdtx.latest_registered_tx_id + AND ladt.cert_index > mpdtx.latest_registered_tx_cert_index + ) + ) + -- Account must NOT be present in epoch_stake table for the previous epoch + AND NOT EXISTS ( + SELECT TRUE FROM epoch_stake es + WHERE es.epoch_no = _previous_epoch_no + AND es.addr_id = ladt.addr_id + ) + ) AS tmp; + WITH + account_delta_tx_ins AS ( + SELECT tx_out.stake_address_id, tx_in.tx_out_id AS txoid, tx_in.tx_out_index AS txoidx + FROM tx_in + LEFT JOIN tx_out ON tx_in.tx_out_id = tx_out.tx_id AND tx_in.tx_out_index::smallint = tx_out.index::smallint + WHERE tx_in.tx_in_id <= _upper_bound_account_tx_id + AND tx_out.stake_address_id = ANY(_newly_registered_account_ids) + ), + account_delta_input AS ( + SELECT tx_out.stake_address_id, COALESCE(SUM(tx_out.value), 0) AS amount + FROM account_delta_tx_ins + LEFT JOIN tx_out ON account_delta_tx_ins.txoid=tx_out.tx_id AND account_delta_tx_ins.txoidx = tx_out.index + WHERE 
tx_out.stake_address_id = ANY(_newly_registered_account_ids) + GROUP BY tx_out.stake_address_id + ), + account_delta_output AS ( + SELECT tx_out.stake_address_id, COALESCE(SUM(tx_out.value), 0) AS amount + FROM tx_out + WHERE TX_OUT.TX_ID <= _upper_bound_account_tx_id + AND tx_out.stake_address_id = ANY(_newly_registered_account_ids) + GROUP BY tx_out.stake_address_id + ), + account_delta_rewards AS ( + SELECT r.addr_id as stake_address_id, COALESCE(SUM(r.amount), 0) AS REWARDS + FROM REWARD r + WHERE r.addr_id = ANY(_newly_registered_account_ids) + AND + CASE WHEN r.type = 'refund' + THEN r.spendable_epoch <= _previous_epoch_no + ELSE r.spendable_epoch <= _previous_epoch_no + 1 + END + GROUP BY r.addr_id + ), + account_delta_withdrawals AS ( + SELECT withdrawal.addr_id as stake_address_id, COALESCE(SUM(withdrawal.amount), 0) AS withdrawals + FROM withdrawal + WHERE withdrawal.tx_id <= _upper_bound_account_tx_id + AND withdrawal.addr_id = ANY(_newly_registered_account_ids) + GROUP BY withdrawal.addr_id + ) + + INSERT INTO GREST.stake_snapshot_cache + SELECT + ladt.addr_id, + ladt.pool_hash_id, + COALESCE(ado.amount, 0) - COALESCE(adi.amount, 0) + COALESCE(adr.rewards, 0) - COALESCE(adw.withdrawals, 0) as amount, + _previous_epoch_no as epoch_no + FROM latest_accounts_delegation_txs ladt + LEFT JOIN account_delta_input adi ON adi.stake_address_id = ladt.addr_id + LEFT JOIN account_delta_output ado ON ado.stake_address_id = ladt.addr_id + LEFT JOIN account_delta_rewards adr ON adr.stake_address_id = ladt.addr_id + LEFT JOIN account_delta_withdrawals adw ON adw.stake_address_id = ladt.addr_id + WHERE + ladt.addr_id = ANY(_newly_registered_account_ids) + ON CONFLICT (addr_id, epoch_no) DO + UPDATE + SET + pool_id = EXCLUDED.pool_id, + amount = EXCLUDED.amount; + + INSERT INTO GREST.CONTROL_TABLE (key, last_value) + VALUES ( + 'last_stake_snapshot_epoch', + _previous_epoch_no + ) ON CONFLICT (key) + DO UPDATE + SET last_value = _previous_epoch_no; + + DELETE FROM grest.stake_snapshot_cache + WHERE epoch_no <= _previous_epoch_no - 2; +END; +$$; diff --git a/files/grest/rpc/account/account_history.sql b/files/grest/rpc/account/account_history.sql index d0e8e1f9c..f261f94a0 100644 --- a/files/grest/rpc/account/account_history.sql +++ b/files/grest/rpc/account/account_history.sql @@ -33,27 +33,31 @@ $$ THEN RETURN QUERY SELECT - ACCOUNT_ACTIVE_STAKE_CACHE.stake_address, - ACCOUNT_ACTIVE_STAKE_CACHE.pool_id, - ACCOUNT_ACTIVE_STAKE_CACHE.epoch_no, - ACCOUNT_ACTIVE_STAKE_CACHE.amount::text as active_stake + sa.view as stake_address, + ph.view as pool_id, + es.epoch_no::bigint, + es.amount::text as active_stake FROM - GREST.ACCOUNT_ACTIVE_STAKE_CACHE + EPOCH_STAKE es + LEFT JOIN stake_address sa ON sa.id = es.addr_id + LEFT JOIN pool_hash ph ON ph.id = es.pool_id WHERE - ACCOUNT_ACTIVE_STAKE_CACHE.epoch_no = _epoch_no + es.epoch_no = _epoch_no AND - ACCOUNT_ACTIVE_STAKE_CACHE.stake_address = _address; + sa.view = _address; ELSE RETURN QUERY SELECT - ACCOUNT_ACTIVE_STAKE_CACHE.stake_address, - ACCOUNT_ACTIVE_STAKE_CACHE.pool_id, - ACCOUNT_ACTIVE_STAKE_CACHE.epoch_no, - ACCOUNT_ACTIVE_STAKE_CACHE.amount::text as active_stake + sa.view as stake_address, + ph.view as pool_id, + es.epoch_no::bigint, + es.amount::text as active_stake FROM - GREST.ACCOUNT_ACTIVE_STAKE_CACHE + EPOCH_STAKE es + LEFT JOIN stake_address sa ON sa.id = es.addr_id + LEFT JOIN pool_hash ph ON ph.id = es.pool_id WHERE - ACCOUNT_ACTIVE_STAKE_CACHE.stake_address = _address; + sa.view = _address; END IF; END; $$; diff --git 
a/files/grest/rpc/account/account_info.sql b/files/grest/rpc/account/account_info.sql index 4f1166772..1bbe17e92 100644 --- a/files/grest/rpc/account/account_info.sql +++ b/files/grest/rpc/account/account_info.sql @@ -14,7 +14,7 @@ CREATE FUNCTION grest.account_info (_address text) DECLARE SA_ID integer DEFAULT NULL; LATEST_WITHDRAWAL_TX bigint DEFAULT NULL; - LATEST_WITHDRAWAL_EPOCH uinteger DEFAULT NULL; + LATEST_WITHDRAWAL_EPOCH word31type DEFAULT NULL; BEGIN IF _address LIKE 'stake%' THEN -- Shelley stake address diff --git a/files/grest/rpc/address/address_info.sql b/files/grest/rpc/address/address_info.sql index 5af5981f4..dc2eed10e 100644 --- a/files/grest/rpc/address/address_info.sql +++ b/files/grest/rpc/address/address_info.sql @@ -31,7 +31,7 @@ BEGIN 'tx_hash', ENCODE(tx.hash, 'hex'), 'tx_index', tx_out.index, 'block_height', block.block_no, - 'block_time', EXTRACT(epoch from block.time), + 'block_time', EXTRACT(epoch from block.time)::integer, 'value', tx_out.value::text, 'datum_hash', ENCODE(tx_out.data_hash, 'hex'), 'asset_list', COALESCE( diff --git a/files/grest/rpc/address/address_txs.sql b/files/grest/rpc/address/address_txs.sql index 87836b0c9..caedc9bb0 100644 --- a/files/grest/rpc/address/address_txs.sql +++ b/files/grest/rpc/address/address_txs.sql @@ -1,9 +1,9 @@ CREATE FUNCTION grest.address_txs (_addresses text[], _after_block_height integer DEFAULT 0) RETURNS TABLE ( tx_hash text, - epoch_no uinteger, - block_height uinteger, - block_time double precision + epoch_no word31type, + block_height word31type, + block_time integer ) LANGUAGE PLPGSQL AS $$ @@ -38,7 +38,7 @@ BEGIN DISTINCT(ENCODE(tx.hash, 'hex')) as tx_hash, block.epoch_no, block.block_no, - EXTRACT(epoch from block.time) + EXTRACT(epoch from block.time)::integer FROM public.tx INNER JOIN public.block ON block.id = tx.block_id diff --git a/files/grest/rpc/address/credential_txs.sql b/files/grest/rpc/address/credential_txs.sql index 987d6c740..c9ea8e166 100644 --- a/files/grest/rpc/address/credential_txs.sql +++ b/files/grest/rpc/address/credential_txs.sql @@ -1,9 +1,9 @@ CREATE FUNCTION grest.credential_txs (_payment_credentials text[], _after_block_height integer DEFAULT 0) RETURNS TABLE ( tx_hash text, - epoch_no uinteger, - block_height uinteger, - block_time double precision + epoch_no word31type, + block_height word31type, + block_time integer ) LANGUAGE PLPGSQL AS $$ @@ -48,7 +48,7 @@ BEGIN DISTINCT(ENCODE(tx.hash, 'hex')) as tx_hash, block.epoch_no, block.block_no, - EXTRACT(epoch from block.time) + EXTRACT(epoch from block.time)::integer FROM public.tx INNER JOIN public.block ON block.id = tx.block_id diff --git a/files/grest/rpc/assets/asset_history.sql b/files/grest/rpc/assets/asset_history.sql index 0659f49e1..54a9919db 100644 --- a/files/grest/rpc/assets/asset_history.sql +++ b/files/grest/rpc/assets/asset_history.sql @@ -12,6 +12,7 @@ DECLARE _asset_id int; BEGIN SELECT DECODE(_asset_policy, 'hex') INTO _asset_policy_decoded; + SELECT DECODE( CASE WHEN _asset_name IS NULL THEN '' @@ -20,6 +21,7 @@ BEGIN END, 'hex' ) INTO _asset_name_decoded; + SELECT id INTO @@ -36,7 +38,9 @@ BEGIN ARRAY_AGG( JSON_BUILD_OBJECT( 'tx_hash', minting_data.tx_hash, - 'quantity', minting_data.quantity + 'block_time', minting_data.block_time, + 'quantity', minting_data.quantity, + 'metadata', minting_data.metadata ) ORDER BY minting_data.id DESC ) @@ -44,12 +48,28 @@ BEGIN SELECT tx.id, ENCODE(tx.hash, 'hex') AS tx_hash, - mtm.quantity::text + EXTRACT(epoch from b.time)::integer as block_time, + mtm.quantity::text, 
+ COALESCE( + JSON_AGG( + JSON_BUILD_OBJECT( + 'key', TM.key::text, + 'json', TM.json + ) + ), + JSON_BUILD_ARRAY() + ) AS metadata FROM ma_tx_mint mtm INNER JOIN tx ON tx.id = MTM.tx_id + INNER JOIN block b ON b.id = tx.block_id + LEFT JOIN tx_metadata TM ON TM.tx_id = tx.id WHERE mtm.ident = _asset_id + GROUP BY + tx.id, + b.time, + mtm.quantity ) minting_data; END; $$; diff --git a/files/grest/rpc/assets/asset_info.sql b/files/grest/rpc/assets/asset_info.sql index 4d03c47b3..d277794e5 100644 --- a/files/grest/rpc/assets/asset_info.sql +++ b/files/grest/rpc/assets/asset_info.sql @@ -8,7 +8,7 @@ CREATE FUNCTION grest.asset_info (_asset_policy text, _asset_name text default ' total_supply text, mint_cnt bigint, burn_cnt bigint, - creation_time double precision, + creation_time integer, minting_tx_metadata json, token_registry_metadata json ) @@ -20,6 +20,7 @@ DECLARE _asset_id int; BEGIN SELECT DECODE(_asset_policy, 'hex') INTO _asset_policy_decoded; + SELECT DECODE( CASE WHEN _asset_name IS NULL THEN '' @@ -28,80 +29,86 @@ BEGIN END, 'hex' ) INTO _asset_name_decoded; + SELECT id INTO _asset_id FROM multi_asset MA WHERE MA.policy = _asset_policy_decoded AND MA.name = _asset_name_decoded; RETURN QUERY - SELECT - _asset_policy, - _asset_name, - ENCODE(_asset_name_decoded, 'escape'), - MA.fingerprint, - ( - SELECT - ENCODE(tx.hash, 'hex') - FROM - ma_tx_mint MTM - INNER JOIN tx ON tx.id = MTM.tx_id - WHERE - MTM.ident = _asset_id - ORDER BY - MTM.tx_id ASC - LIMIT 1 - ) AS tx_hash, - minting_data.total_supply, - minting_data.mint_cnt, - minting_data.burn_cnt, - EXTRACT(epoch from minting_data.date), - ( - SELECT - JSON_BUILD_OBJECT( - 'key', TM.key::text, - 'json', TM.json - ) - FROM - tx_metadata TM - INNER JOIN ma_tx_mint MTM on MTM.tx_id = TM.tx_id - WHERE - MTM.ident = _asset_id - ORDER BY - TM.tx_id ASC - LIMIT 1 - ) AS minting_tx_metadata, - ( - SELECT - JSON_BUILD_OBJECT( - 'name', ARC.name, - 'description', ARC.description, - 'ticker', ARC.ticker, - 'url', ARC.url, - 'logo', ARC.logo, - 'decimals', ARC.decimals - ) as metadata - FROM - grest.asset_registry_cache ARC - WHERE - ARC.asset_policy = _asset_policy - AND - ARC.asset_name = _asset_name - LIMIT 1 - ) AS token_registry_metadata - FROM - multi_asset MA - LEFT JOIN LATERAL ( - SELECT - MIN(B.time) AS date, - SUM(MTM.quantity)::text AS total_supply, - SUM(CASE WHEN quantity > 0 then 1 else 0 end) AS mint_cnt, - SUM(CASE WHEN quantity < 0 then 1 else 0 end) AS burn_cnt - FROM - ma_tx_mint MTM - INNER JOIN tx ON tx.id = MTM.tx_id - INNER JOIN block B ON B.id = tx.block_id - WHERE - MTM.ident = MA.id - ) minting_data ON TRUE - WHERE - MA.id = _asset_id; + SELECT + _asset_policy, + _asset_name, + ENCODE(_asset_name_decoded, 'escape'), + MA.fingerprint, + ( + SELECT + ENCODE(tx.hash, 'hex') + FROM + ma_tx_mint MTM + INNER JOIN tx ON tx.id = MTM.tx_id + WHERE + MTM.ident = _asset_id + ORDER BY + MTM.tx_id ASC + LIMIT 1 + ) AS tx_hash, + minting_data.total_supply, + minting_data.mint_cnt, + minting_data.burn_cnt, + EXTRACT(epoch from minting_data.date)::integer, + ( + SELECT + COALESCE( + JSON_AGG( + JSON_BUILD_OBJECT( + 'key', TM.key::text, + 'json', TM.json + ) + ), + JSON_BUILD_ARRAY() + ) + FROM + tx_metadata TM + WHERE + TM.tx_id = ( + SELECT MAX(MTM.tx_id) + FROM ma_tx_mint MTM + WHERE MTM.ident = _asset_id + ) + ) AS minting_tx_metadata, + ( + SELECT + JSON_BUILD_OBJECT( + 'name', ARC.name, + 'description', ARC.description, + 'ticker', ARC.ticker, + 'url', ARC.url, + 'logo', ARC.logo, + 'decimals', ARC.decimals + ) as metadata + 
FROM + grest.asset_registry_cache ARC + WHERE + ARC.asset_policy = _asset_policy + AND + ARC.asset_name = _asset_name + LIMIT 1 + ) AS token_registry_metadata + FROM + multi_asset MA + LEFT JOIN LATERAL ( + SELECT + MIN(B.time) AS date, + SUM(MTM.quantity)::text AS total_supply, + SUM(CASE WHEN quantity > 0 then 1 else 0 end) AS mint_cnt, + SUM(CASE WHEN quantity < 0 then 1 else 0 end) AS burn_cnt + FROM + ma_tx_mint MTM + INNER JOIN tx ON tx.id = MTM.tx_id + INNER JOIN block B ON B.id = tx.block_id + WHERE + MTM.ident = MA.id + ) minting_data ON TRUE + WHERE + MA.id = _asset_id; END; $$; diff --git a/files/grest/rpc/assets/asset_policy_info.sql b/files/grest/rpc/assets/asset_policy_info.sql index 7991e209b..fc54dbad4 100644 --- a/files/grest/rpc/assets/asset_policy_info.sql +++ b/files/grest/rpc/assets/asset_policy_info.sql @@ -6,7 +6,7 @@ CREATE FUNCTION grest.asset_policy_info (_asset_policy text) minting_tx_metadata jsonb, token_registry_metadata jsonb, total_supply text, - creation_time double precision + creation_time integer ) LANGUAGE PLPGSQL AS $$ @@ -87,7 +87,7 @@ BEGIN mtm.metadata as minting_tx_metadata, trm.metadata as token_registry_metadata, ts.amount::text as total_supply, - EXTRACT(epoch from ct.date) + EXTRACT(epoch from ct.date)::integer FROM multi_asset MA LEFT JOIN minting_tx_metadatas mtm ON mtm.ident = MA.id diff --git a/files/grest/rpc/assets/asset_txs.sql b/files/grest/rpc/assets/asset_txs.sql index 5e9e8e734..fe2580c89 100644 --- a/files/grest/rpc/assets/asset_txs.sql +++ b/files/grest/rpc/assets/asset_txs.sql @@ -1,9 +1,9 @@ CREATE FUNCTION grest.asset_txs (_asset_policy text, _asset_name text default '') RETURNS TABLE ( tx_hash text, - epoch_no uinteger, - block_height uinteger, - block_time double precision + epoch_no word31type, + block_height word31type, + block_time integer ) LANGUAGE PLPGSQL AS $$ @@ -28,7 +28,7 @@ BEGIN ENCODE(tx_hashes.hash, 'hex') as tx_hash, tx_hashes.epoch_no, tx_hashes.block_no, - EXTRACT(epoch from tx_hashes.time) + EXTRACT(epoch from tx_hashes.time)::integer FROM ( SELECT DISTINCT ON (tx.hash) tx.hash, diff --git a/files/grest/rpc/blocks/block_info.sql b/files/grest/rpc/blocks/block_info.sql index 1e121c9a4..132df2252 100644 --- a/files/grest/rpc/blocks/block_info.sql +++ b/files/grest/rpc/blocks/block_info.sql @@ -1,17 +1,19 @@ CREATE FUNCTION grest.block_info (_block_hashes text[]) RETURNS TABLE ( hash text, - epoch_no uinteger, - abs_slot uinteger, - epoch_slot uinteger, - block_height uinteger, - block_size uinteger, - block_time double precision, + epoch_no word31type, + abs_slot word63type, + epoch_slot word31type, + block_height word31type, + block_size word31type, + block_time integer, tx_count bigint, vrf_key varchar, op_cert text, op_cert_counter word63type, pool varchar, + proto_major word31type, + proto_minor word31type, total_output text, total_fees text, num_confirmations integer, @@ -23,7 +25,7 @@ CREATE FUNCTION grest.block_info (_block_hashes text[]) DECLARE _block_hashes_bytea bytea[]; _block_id_list bigint[]; - _curr_block_no uinteger; + _curr_block_no word31type; BEGIN SELECT max(block_no) INTO _curr_block_no FROM block b; @@ -54,12 +56,14 @@ BEGIN B.epoch_slot_no AS epoch_slot, B.block_no AS block_height, B.size AS block_size, - EXTRACT(epoch from B.time) AS block_time, + EXTRACT(epoch from B.time)::integer AS block_time, B.tx_count, B.vrf_key, ENCODE(B.op_cert::bytea, 'hex') as op_cert, B.op_cert_counter, PH.view AS pool, + B.proto_major, + B.proto_minor, block_data.total_output::text, 
block_data.total_fees::text, (_curr_block_no - B.block_no) AS num_confirmations, diff --git a/files/grest/rpc/epoch/epoch_info.sql b/files/grest/rpc/epoch/epoch_info.sql index f8a7aee56..757c7fa5e 100644 --- a/files/grest/rpc/epoch/epoch_info.sql +++ b/files/grest/rpc/epoch/epoch_info.sql @@ -1,14 +1,14 @@ CREATE FUNCTION grest.epoch_info (_epoch_no numeric DEFAULT NULL) RETURNS TABLE ( - epoch_no uinteger, + epoch_no word31type, out_sum text, fees text, - tx_count uinteger, - blk_count uinteger, - start_time double precision, - end_time double precision, - first_block_time double precision, - last_block_time double precision, + tx_count word31type, + blk_count word31type, + start_time integer, + end_time integer, + first_block_time integer, + last_block_time integer, active_stake text, total_rewards text, avg_blk_reward text @@ -18,7 +18,7 @@ CREATE FUNCTION grest.epoch_info (_epoch_no numeric DEFAULT NULL) DECLARE shelley_epoch_duration numeric := (select epochlength::numeric * slotlength::numeric as epochduration from grest.genesis); shelley_ref_epoch numeric := (select (ep.epoch_no::numeric + 1) from epoch_param ep ORDER BY ep.epoch_no LIMIT 1); - shelley_ref_time double precision := (select ei.i_first_block_time from grest.epoch_info_cache ei where ei.epoch_no = shelley_ref_epoch); + shelley_ref_time numeric := (select ei.i_first_block_time from grest.epoch_info_cache ei where ei.epoch_no = shelley_ref_epoch); BEGIN RETURN QUERY SELECT @@ -28,17 +28,17 @@ BEGIN ei.i_tx_count AS tx_count, ei.i_blk_count AS blk_count, CASE WHEN ei.epoch_no < shelley_ref_epoch THEN - ei.i_first_block_time + ei.i_first_block_time::integer ELSE - shelley_ref_time + (ei.epoch_no - shelley_ref_epoch) * shelley_epoch_duration + (shelley_ref_time + (ei.epoch_no - shelley_ref_epoch) * shelley_epoch_duration)::integer END AS start_time, CASE WHEN ei.epoch_no < shelley_ref_epoch THEN - ei.i_first_block_time + shelley_epoch_duration + (ei.i_first_block_time + shelley_epoch_duration)::integer ELSE - shelley_ref_time + ((ei.epoch_no + 1) - shelley_ref_epoch) * shelley_epoch_duration + (shelley_ref_time + ((ei.epoch_no + 1) - shelley_ref_epoch) * shelley_epoch_duration)::integer END AS end_time, - ei.i_first_block_time AS first_block_time, - ei.i_last_block_time AS last_block_time, + ei.i_first_block_time::integer AS first_block_time, + ei.i_last_block_time::integer AS last_block_time, eas.amount::text AS active_stake, ei.i_total_rewards::text AS total_rewards, ei.i_avg_blk_reward::text AS avg_blk_reward diff --git a/files/grest/rpc/epoch/epoch_params.sql b/files/grest/rpc/epoch/epoch_params.sql index f9a51f986..47372630e 100644 --- a/files/grest/rpc/epoch/epoch_params.sql +++ b/files/grest/rpc/epoch/epoch_params.sql @@ -1,22 +1,22 @@ CREATE FUNCTION grest.epoch_params (_epoch_no numeric DEFAULT NULL) RETURNS TABLE ( - epoch_no uinteger, - min_fee_a uinteger, - min_fee_b uinteger, - max_block_size uinteger, - max_tx_size uinteger, - max_bh_size uinteger, + epoch_no word31type, + min_fee_a word31type, + min_fee_b word31type, + max_block_size word31type, + max_tx_size word31type, + max_bh_size word31type, key_deposit lovelace, pool_deposit lovelace, - max_epoch uinteger, - optimal_pool_count uinteger, + max_epoch word31type, + optimal_pool_count word31type, influence double precision, monetary_expand_rate double precision, treasury_growth_rate double precision, decentralisation double precision, - entropy text, - protocol_major uinteger, - protocol_minor uinteger, + extra_entropy text, + protocol_major word31type, + 
protocol_minor word31type, min_utxo_value lovelace, min_pool_cost lovelace, nonce text, @@ -29,9 +29,9 @@ CREATE FUNCTION grest.epoch_params (_epoch_no numeric DEFAULT NULL) max_block_ex_mem word64type, max_block_ex_steps word64type, max_val_size word64type, - collateral_percent uinteger, - max_collateral_inputs uinteger, - coins_per_utxo_word lovelace) + collateral_percent word31type, + max_collateral_inputs word31type, + coins_per_utxo_size lovelace) LANGUAGE PLPGSQL AS $$ BEGIN @@ -52,7 +52,7 @@ BEGIN ei.p_monetary_expand_rate AS monetary_expand_rate, ei.p_treasury_growth_rate AS treasury_growth_rate, ei.p_decentralisation AS decentralisation, - ei.p_entropy AS entropy, + ei.p_extra_entropy AS extra_entropy, ei.p_protocol_major AS protocol_major, ei.p_protocol_minor AS protocol_minor, ei.p_min_utxo_value AS min_utxo_value, @@ -69,7 +69,7 @@ BEGIN ei.p_max_val_size AS max_val_size, ei.p_collateral_percent AS collateral_percent, ei.p_max_collateral_inputs AS max_collateral_inputs, - ei.p_coins_per_utxo_word AS coins_per_utxo_word + ei.p_coins_per_utxo_size AS coins_per_utxo_size FROM grest.epoch_info_cache ei ORDER BY @@ -91,7 +91,7 @@ BEGIN ei.p_monetary_expand_rate AS monetary_expand_rate, ei.p_treasury_growth_rate AS treasury_growth_rate, ei.p_decentralisation AS decentralisation, - ei.p_entropy AS entropy, + ei.p_extra_entropy AS extra_entropy, ei.p_protocol_major AS protocol_major, ei.p_protocol_minor AS protocol_minor, ei.p_min_utxo_value AS min_utxo_value, @@ -108,7 +108,7 @@ BEGIN ei.p_max_val_size AS max_val_size, ei.p_collateral_percent AS collateral_percent, ei.p_max_collateral_inputs AS max_collateral_inputs, - ei.p_coins_per_utxo_word AS coins_per_utxo_word + ei.p_coins_per_utxo_size AS coins_per_utxo_size FROM grest.epoch_info_cache ei WHERE diff --git a/files/grest/rpc/pool/pool_blocks.sql b/files/grest/rpc/pool/pool_blocks.sql index 3c7ed073f..1f3836d54 100644 --- a/files/grest/rpc/pool/pool_blocks.sql +++ b/files/grest/rpc/pool/pool_blocks.sql @@ -1,11 +1,11 @@ -CREATE FUNCTION grest.pool_blocks (_pool_bech32 text, _epoch_no uinteger DEFAULT NULL) +CREATE FUNCTION grest.pool_blocks (_pool_bech32 text, _epoch_no word31type DEFAULT NULL) RETURNS TABLE ( - epoch_no uinteger, - epoch_slot uinteger, - abs_slot uinteger, - block_height uinteger, + epoch_no word31type, + epoch_slot word31type, + abs_slot word63type, + block_height word31type, block_hash text, - block_time double precision + block_time integer ) LANGUAGE plpgsql AS $$ @@ -17,7 +17,7 @@ BEGIN b.slot_no as abs_slot, b.block_no as block_height, encode(b.hash::bytea, 'hex'), - EXTRACT(epoch from b.time) + EXTRACT(epoch from b.time)::integer FROM public.block b INNER JOIN diff --git a/files/grest/rpc/pool/pool_delegators.sql b/files/grest/rpc/pool/pool_delegators.sql index 1b5eb47af..5f635ab47 100644 --- a/files/grest/rpc/pool/pool_delegators.sql +++ b/files/grest/rpc/pool/pool_delegators.sql @@ -1,8 +1,8 @@ -CREATE FUNCTION grest.pool_delegators (_pool_bech32 text, _epoch_no uinteger DEFAULT NULL) +CREATE FUNCTION grest.pool_delegators (_pool_bech32 text, _epoch_no word31type DEFAULT NULL) RETURNS TABLE ( stake_address character varying, amount text, - epoch_no uinteger + active_epoch_no bigint ) LANGUAGE plpgsql AS $$ @@ -10,40 +10,62 @@ CREATE FUNCTION grest.pool_delegators (_pool_bech32 text, _epoch_no uinteger DEF DECLARE _pool_id bigint; BEGIN - SELECT id INTO _pool_id FROM pool_hash WHERE pool_hash.view = _pool_bech32; IF _epoch_no IS NULL THEN - RETURN QUERY + + RETURN QUERY + WITH + _all_delegations AS ( + 
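-- Note: the pool_delegators rewrite below takes two paths. With no epoch supplied, live
-- balances come from grest.stake_distribution_cache and each delegator's most recent
-- delegation row supplies active_epoch_no; with an epoch supplied, the epoch_stake
-- snapshot for that epoch is used instead. Either way the function now returns
-- active_epoch_no (bigint) rather than a constant epoch_no.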
SELECT + SA.id AS stake_address_id, + SDC.stake_address, + ( + CASE WHEN SDC.total_balance >= 0 + THEN SDC.total_balance + ELSE 0 + END + ) AS total_balance + FROM + grest.stake_distribution_cache AS SDC + INNER JOIN public.stake_address SA ON SA.view = SDC.stake_address + WHERE + SDC.pool_id = _pool_bech32 + ) + SELECT - stake_address, - ( - CASE WHEN total_balance >= 0 - THEN total_balance - ELSE 0 - END - )::text, - (SELECT MAX(no) FROM public.epoch)::uinteger + AD.stake_address, + AD.total_balance::text, + max(D.active_epoch_no) FROM - grest.stake_distribution_cache AS sdc - WHERE - sdc.pool_id = _pool_bech32 + _all_delegations AS AD + INNER JOIN public.delegation D ON D.addr_id = AD.stake_address_id + GROUP BY + AD.stake_address, AD.total_balance ORDER BY - sdc.total_balance DESC; + AD.total_balance DESC; + ELSE + + SELECT id INTO _pool_id FROM pool_hash WHERE pool_hash.view = _pool_bech32; + RETURN QUERY SELECT SA.view, ES.amount::text, - _epoch_no + max(D.active_epoch_no) FROM public.epoch_stake ES INNER JOIN public.stake_address SA ON ES.addr_id = SA.id + INNER JOIN public.delegation D ON D.addr_id = SA.id WHERE ES.pool_id = _pool_id AND ES.epoch_no = _epoch_no + GROUP BY + SA.view, ES.amount ORDER BY ES.amount DESC; + END IF; END; $$; diff --git a/files/grest/rpc/pool/pool_history.sql b/files/grest/rpc/pool/pool_history.sql index 36146e2d3..982de3f0e 100644 --- a/files/grest/rpc/pool/pool_history.sql +++ b/files/grest/rpc/pool/pool_history.sql @@ -1,4 +1,4 @@ -CREATE FUNCTION grest.pool_history (_pool_bech32 text, _epoch_no uinteger DEFAULT NULL) +CREATE FUNCTION grest.pool_history (_pool_bech32 text, _epoch_no word31type DEFAULT NULL) RETURNS TABLE ( epoch_no bigint, active_stake text, diff --git a/files/grest/rpc/pool/pool_info.sql b/files/grest/rpc/pool/pool_info.sql index 35b8be8a2..e6d61d5d9 100644 --- a/files/grest/rpc/pool/pool_info.sql +++ b/files/grest/rpc/pool/pool_info.sql @@ -14,7 +14,7 @@ CREATE FUNCTION grest.pool_info (_pool_bech32_ids text[]) meta_hash text, meta_json jsonb, pool_status text, - retiring_epoch uinteger, + retiring_epoch word31type, op_cert text, op_cert_counter word63type, active_stake text, diff --git a/files/grest/rpc/pool/pool_updates.sql b/files/grest/rpc/pool/pool_updates.sql index de505f853..353fcce3b 100644 --- a/files/grest/rpc/pool/pool_updates.sql +++ b/files/grest/rpc/pool/pool_updates.sql @@ -1,7 +1,7 @@ CREATE FUNCTION grest.pool_updates (_pool_bech32 text DEFAULT NULL) RETURNS TABLE ( tx_hash text, - block_time double precision, + block_time integer, pool_id_bech32 character varying, pool_id_hex text, active_epoch_no bigint, @@ -15,7 +15,7 @@ CREATE FUNCTION grest.pool_updates (_pool_bech32 text DEFAULT NULL) meta_url character varying, meta_hash text, pool_status text, - retiring_epoch uinteger + retiring_epoch word31type ) LANGUAGE plpgsql AS $$ @@ -24,7 +24,7 @@ BEGIN RETURN QUERY SELECT tx_hash, - block_time, + block_time::integer, pool_id_bech32, pool_id_hex, active_epoch_no, diff --git a/files/grest/rpc/script/plutus_script_list.sql b/files/grest/rpc/script/plutus_script_list.sql index c5ab85348..9ea3aee7f 100644 --- a/files/grest/rpc/script/plutus_script_list.sql +++ b/files/grest/rpc/script/plutus_script_list.sql @@ -12,7 +12,7 @@ BEGIN ENCODE(tx.hash, 'hex') as creation_tx_hash FROM script INNER JOIN tx ON tx.id = script.tx_id - WHERE script.type = 'plutus'; + WHERE script.type IN ('plutusV1', 'plutusV2'); END; $$; diff --git a/files/grest/rpc/script/script_redeemers.sql b/files/grest/rpc/script/script_redeemers.sql index 
337fd7725..460119b33 100644 --- a/files/grest/rpc/script/script_redeemers.sql +++ b/files/grest/rpc/script/script_redeemers.sql @@ -25,14 +25,15 @@ select _script_hash, 'purpose', redeemer.purpose, 'datum_hash', - ENCODE(datum.hash, 'hex'), + ENCODE(rd.hash, 'hex'), 'datum_value', - datum.value + rd.value + -- an extra 'bytes' field is also available on the redeemer_data (rd) table here ) ) as redeemers FROM redeemer INNER JOIN TX ON tx.id = redeemer.tx_id - INNER JOIN DATUM on datum.id = redeemer.datum_id + INNER JOIN REDEEMER_DATA rd on rd.id = redeemer.redeemer_data_id WHERE redeemer.script_hash = _script_hash_bytea GROUP BY redeemer.script_hash; END; diff --git a/files/grest/rpc/transactions/tx_info.sql b/files/grest/rpc/transactions/tx_info.sql index b5b018fb9..60906c9eb 100644 --- a/files/grest/rpc/transactions/tx_info.sql +++ b/files/grest/rpc/transactions/tx_info.sql @@ -2,27 +2,29 @@ CREATE FUNCTION grest.tx_info (_tx_hashes text[]) RETURNS TABLE ( tx_hash text, block_hash text, - block_height uinteger, - epoch uinteger, - epoch_slot uinteger, - absolute_slot uinteger, - tx_timestamp double precision, - tx_block_index uinteger, - tx_size uinteger, + block_height word31type, + epoch word31type, + epoch_slot word31type, + absolute_slot word63type, + tx_timestamp integer, + tx_block_index word31type, + tx_size word31type, total_output text, fee text, deposit text, invalid_before word64type, invalid_after word64type, - collaterals json, - inputs json, - outputs json, - withdrawals json, - assets_minted json, - metadata json, - certificates json, - native_scripts json, - plutus_contracts json + collateral_inputs jsonb, + collateral_outputs jsonb, + reference_inputs jsonb, + inputs jsonb, + outputs jsonb, + withdrawals jsonb, + assets_minted jsonb, + metadata jsonb, + certificates jsonb, + native_scripts jsonb, + plutus_contracts jsonb ) LANGUAGE PLPGSQL AS $$ @@ -57,20 +59,20 @@ BEGIN _all_tx AS ( SELECT tx.id, - tx.hash as tx_hash, - b.hash as block_hash, - b.block_no AS block_height, - b.epoch_no AS epoch, - b.epoch_slot_no AS epoch_slot, - b.slot_no AS absolute_slot, - b.time AS tx_timestamp, - tx.block_index AS tx_block_index, - tx.size AS tx_size, - tx.out_sum AS total_output, + tx.hash AS tx_hash, + b.hash AS block_hash, + b.block_no AS block_height, + b.epoch_no AS epoch, + b.epoch_slot_no AS epoch_slot, + b.slot_no AS absolute_slot, + b.time AS tx_timestamp, + tx.block_index AS tx_block_index, + tx.size AS tx_size, + tx.out_sum AS total_output, tx.fee, tx.deposit, tx.invalid_before, - tx.invalid_hereafter AS invalid_after + tx.invalid_hereafter AS invalid_after FROM tx INNER JOIN block b ON tx.block_id = b.id @@ -79,22 +81,43 @@ BEGIN _all_collateral_inputs AS ( SELECT - collateral_tx_in.tx_in_id AS tx_id, - tx_out.address AS payment_addr_bech32, - ENCODE(tx_out.payment_cred, 'hex') AS payment_addr_cred, - SA.view AS stake_addr, - ENCODE(tx.hash, 'hex') AS tx_hash, - tx_out.index AS tx_index, - tx_out.value::text AS value, + collateral_tx_in.tx_in_id AS tx_id, + tx_out.address AS payment_addr_bech32, + ENCODE(tx_out.payment_cred, 'hex') AS payment_addr_cred, + SA.view AS stake_addr, + ENCODE(tx.hash, 'hex') AS tx_hash, + tx_out.index AS tx_index, + tx_out.value::text AS value, + ENCODE(tx_out.data_hash, 'hex') AS datum_hash, ( CASE WHEN MA.policy IS NULL THEN NULL ELSE - JSON_BUILD_OBJECT( + JSONB_BUILD_OBJECT( 'policy_id', ENCODE(MA.policy, 'hex'), 'asset_name', ENCODE(MA.name, 'hex'), + 'fingerprint', MA.fingerprint, 'quantity', MTO.quantity::text ) END - ) AS asset_list + ) AS asset_list, + ( CASE WHEN 
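-- Note: datum_hash above, and the inline_datum / reference_script objects built just
-- below, expose the Babbage (Vasil) additions to tx_out (inline_datum_id and
-- reference_script_id). The same trio is repeated for collateral inputs, reference
-- inputs, collateral outputs and regular inputs/outputs in the CTEs that follow.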
tx_out.inline_datum_id IS NULL THEN NULL + ELSE + JSONB_BUILD_OBJECT( + 'bytes', ENCODE(datum.bytes, 'hex'), + 'value', datum.value + ) + END + ) AS inline_datum, + ( CASE WHEN tx_out.reference_script_id IS NULL THEN NULL + ELSE + JSONB_BUILD_OBJECT( + 'hash', ENCODE(script.hash, 'hex'), + 'bytes', ENCODE(script.bytes, 'hex'), + 'value', script.json, + 'type', script.type::text, + 'size', script.serialised_size + ) + END + ) AS reference_script FROM collateral_tx_in INNER JOIN tx_out ON tx_out.tx_id = collateral_tx_in.tx_out_id @@ -103,28 +126,104 @@ BEGIN LEFT JOIN stake_address SA ON tx_out.stake_address_id = SA.id LEFT JOIN ma_tx_out MTO ON MTO.tx_out_id = tx_out.id LEFT JOIN multi_asset MA ON MA.id = MTO.ident + LEFT JOIN datum ON datum.id = tx_out.inline_datum_id + LEFT JOIN script ON script.id = tx_out.reference_script_id WHERE collateral_tx_in.tx_in_id = ANY (_tx_id_list) ), + _all_reference_inputs AS ( + SELECT + reference_tx_in.tx_in_id AS tx_id, + tx_out.address AS payment_addr_bech32, + ENCODE(tx_out.payment_cred, 'hex') AS payment_addr_cred, + SA.view AS stake_addr, + ENCODE(tx.hash, 'hex') AS tx_hash, + tx_out.index AS tx_index, + tx_out.value::text AS value, + ENCODE(tx_out.data_hash, 'hex') AS datum_hash, + ( CASE WHEN MA.policy IS NULL THEN NULL + ELSE + JSONB_BUILD_OBJECT( + 'policy_id', ENCODE(MA.policy, 'hex'), + 'asset_name', ENCODE(MA.name, 'hex'), + 'fingerprint', MA.fingerprint, + 'quantity', MTO.quantity::text + ) + END + ) AS asset_list, + ( CASE WHEN tx_out.inline_datum_id IS NULL THEN NULL + ELSE + JSONB_BUILD_OBJECT( + 'bytes', ENCODE(datum.bytes, 'hex'), + 'value', datum.value + ) + END + ) AS inline_datum, + ( CASE WHEN tx_out.reference_script_id IS NULL THEN NULL + ELSE + JSONB_BUILD_OBJECT( + 'hash', ENCODE(script.hash, 'hex'), + 'bytes', ENCODE(script.bytes, 'hex'), + 'value', script.json, + 'type', script.type::text, + 'size', script.serialised_size + ) + END + ) AS reference_script + FROM + reference_tx_in + INNER JOIN tx_out ON tx_out.tx_id = reference_tx_in.tx_out_id + AND tx_out.index = reference_tx_in.tx_out_index + INNER JOIN tx ON tx_out.tx_id = tx.id + LEFT JOIN stake_address SA ON tx_out.stake_address_id = SA.id + LEFT JOIN ma_tx_out MTO ON MTO.tx_out_id = tx_out.id + LEFT JOIN multi_asset MA ON MA.id = MTO.ident + LEFT JOIN datum ON datum.id = tx_out.inline_datum_id + LEFT JOIN script ON script.id = tx_out.reference_script_id + WHERE + reference_tx_in.tx_in_id = ANY (_tx_id_list) + ), + _all_inputs AS ( SELECT - tx_in.tx_in_id AS tx_id, - tx_out.address AS payment_addr_bech32, - ENCODE(tx_out.payment_cred, 'hex') AS payment_addr_cred, - SA.view AS stake_addr, - ENCODE(tx.hash, 'hex') AS tx_hash, - tx_out.index AS tx_index, - tx_out.value::text AS value, + tx_in.tx_in_id AS tx_id, + tx_out.address AS payment_addr_bech32, + ENCODE(tx_out.payment_cred, 'hex') AS payment_addr_cred, + SA.view AS stake_addr, + ENCODE(tx.hash, 'hex') AS tx_hash, + tx_out.index AS tx_index, + tx_out.value::text AS value, + ENCODE(tx_out.data_hash, 'hex') AS datum_hash, ( CASE WHEN MA.policy IS NULL THEN NULL ELSE - JSON_BUILD_OBJECT( + JSONB_BUILD_OBJECT( 'policy_id', ENCODE(MA.policy, 'hex'), 'asset_name', ENCODE(MA.name, 'hex'), + 'fingerprint', MA.fingerprint, 'quantity', MTO.quantity::text ) END - ) AS asset_list + ) AS asset_list, + ( CASE WHEN tx_out.inline_datum_id IS NULL THEN NULL + ELSE + JSONB_BUILD_OBJECT( + 'bytes', ENCODE(datum.bytes, 'hex'), + 'value', datum.value + ) + END + ) AS inline_datum, + ( CASE WHEN tx_out.reference_script_id IS NULL THEN NULL + 
ELSE + JSONB_BUILD_OBJECT( + 'hash', ENCODE(script.hash, 'hex'), + 'bytes', ENCODE(script.bytes, 'hex'), + 'value', script.json, + 'type', script.type::text, + 'size', script.serialised_size + ) + END + ) AS reference_script FROM tx_in INNER JOIN tx_out ON tx_out.tx_id = tx_in.tx_out_id @@ -133,34 +232,110 @@ BEGIN LEFT JOIN stake_address SA ON tx_out.stake_address_id = SA.id LEFT JOIN ma_tx_out MTO ON MTO.tx_out_id = tx_out.id LEFT JOIN multi_asset MA ON MA.id = MTO.ident + LEFT JOIN datum ON datum.id = tx_out.inline_datum_id + LEFT JOIN script ON script.id = tx_out.reference_script_id WHERE tx_in.tx_in_id = ANY (_tx_id_list) ), + _all_collateral_outputs AS ( + SELECT + tx_out.tx_id, + tx_out.address AS payment_addr_bech32, + ENCODE(tx_out.payment_cred, 'hex') AS payment_addr_cred, + SA.view AS stake_addr, + ENCODE(tx.hash, 'hex') AS tx_hash, + tx_out.index AS tx_index, + tx_out.value::text AS value, + ENCODE(tx_out.data_hash, 'hex') AS datum_hash, + ( CASE WHEN MA.policy IS NULL THEN NULL + ELSE + JSONB_BUILD_OBJECT( + 'policy_id', ENCODE(MA.policy, 'hex'), + 'asset_name', ENCODE(MA.name, 'hex'), + 'fingerprint', MA.fingerprint, + 'quantity', MTO.quantity::text + ) + END + ) AS asset_list, + ( CASE WHEN tx_out.inline_datum_id IS NULL THEN NULL + ELSE + JSONB_BUILD_OBJECT( + 'bytes', ENCODE(datum.bytes, 'hex'), + 'value', datum.value + ) + END + ) AS inline_datum, + ( CASE WHEN tx_out.reference_script_id IS NULL THEN NULL + ELSE + JSONB_BUILD_OBJECT( + 'hash', ENCODE(script.hash, 'hex'), + 'bytes', ENCODE(script.bytes, 'hex'), + 'value', script.json, + 'type', script.type::text, + 'size', script.serialised_size + ) + END + ) AS reference_script + FROM + collateral_tx_out tx_out + INNER JOIN tx on tx_out.tx_id = tx.id + LEFT JOIN stake_address SA ON tx_out.stake_address_id = SA.id + LEFT JOIN ma_tx_out MTO ON MTO.tx_out_id = tx_out.id + LEFT JOIN multi_asset MA ON MA.id = MTO.ident + LEFT JOIN datum ON datum.id = tx_out.inline_datum_id + LEFT JOIN script ON script.id = tx_out.reference_script_id + WHERE + tx_out.tx_id = ANY (_tx_id_list) + ), + _all_outputs AS ( SELECT tx_out.tx_id, - tx_out.address AS payment_addr_bech32, - ENCODE(tx_out.payment_cred, 'hex') AS payment_addr_cred, - SA.view AS stake_addr, - ENCODE(tx.hash, 'hex') AS tx_hash, - tx_out.index AS tx_index, - tx_out.value::text AS value, + tx_out.address AS payment_addr_bech32, + ENCODE(tx_out.payment_cred, 'hex') AS payment_addr_cred, + SA.view AS stake_addr, + ENCODE(tx.hash, 'hex') AS tx_hash, + tx_out.index AS tx_index, + tx_out.value::text AS value, + ENCODE(tx_out.data_hash, 'hex') AS datum_hash, ( CASE WHEN MA.policy IS NULL THEN NULL ELSE - JSON_BUILD_OBJECT( + JSONB_BUILD_OBJECT( 'policy_id', ENCODE(MA.policy, 'hex'), 'asset_name', ENCODE(MA.name, 'hex'), + 'fingerprint', MA.fingerprint, 'quantity', MTO.quantity::text ) END - ) AS asset_list + ) AS asset_list, + ( CASE WHEN tx_out.inline_datum_id IS NULL THEN NULL + ELSE + JSONB_BUILD_OBJECT( + 'bytes', ENCODE(datum.bytes, 'hex'), + 'value', datum.value + ) + END + ) AS inline_datum, + ( CASE WHEN tx_out.reference_script_id IS NULL THEN NULL + ELSE + JSONB_BUILD_OBJECT( + 'hash', ENCODE(script.hash, 'hex'), + 'bytes', ENCODE(script.bytes, 'hex'), + 'value', script.json, + 'type', script.type::text, + 'size', script.serialised_size + ) + END + ) AS reference_script FROM tx_out INNER JOIN tx ON tx_out.tx_id = tx.id LEFT JOIN stake_address SA ON tx_out.stake_address_id = SA.id LEFT JOIN ma_tx_out MTO ON MTO.tx_out_id = tx_out.id LEFT JOIN multi_asset MA ON MA.id = 
MTO.ident + LEFT JOIN datum ON datum.id = tx_out.inline_datum_id + LEFT JOIN script ON script.id = tx_out.reference_script_id WHERE tx_out.tx_id = ANY (_tx_id_list) ), @@ -168,11 +343,11 @@ BEGIN _all_withdrawals AS ( SELECT tx_id, - JSON_AGG(data) AS list + JSONB_AGG(data) AS list FROM ( SELECT W.tx_id, - JSON_BUILD_OBJECT( + JSONB_BUILD_OBJECT( 'amount', W.amount::text, 'stake_addr', SA.view ) AS data @@ -189,11 +364,11 @@ BEGIN _all_mints AS ( SELECT tx_id, - JSON_AGG(data) AS list + JSONB_AGG(data) AS list FROM ( SELECT MTM.tx_id, - JSON_BUILD_OBJECT( + JSONB_BUILD_OBJECT( 'policy_id', ENCODE(MA.policy, 'hex'), 'asset_name', ENCODE(MA.name, 'hex'), 'quantity', MTM.quantity::text @@ -211,11 +386,11 @@ BEGIN _all_metadata AS ( SELECT tx_id, - JSON_AGG(data) AS list + JSONB_AGG(data) AS list FROM ( SELECT TM.tx_id, - JSON_BUILD_OBJECT( + JSONB_BUILD_OBJECT( 'key', TM.key::text, 'json', TM.json ) AS data @@ -231,14 +406,14 @@ BEGIN _all_certs AS ( SELECT tx_id, - JSON_AGG(data) AS list + JSONB_AGG(data) AS list FROM ( SELECT SR.tx_id, - JSON_BUILD_OBJECT( + JSONB_BUILD_OBJECT( 'index', SR.cert_index, 'type', 'stake_registration', - 'info', JSON_BUILD_OBJECT( + 'info', JSONB_BUILD_OBJECT( 'stake_address', SA.view ) ) AS data @@ -252,10 +427,10 @@ BEGIN -- SELECT SD.tx_id, - JSON_BUILD_OBJECT( + JSONB_BUILD_OBJECT( 'index', SD.cert_index, 'type', 'stake_deregistration', - 'info', JSON_BUILD_OBJECT( + 'info', JSONB_BUILD_OBJECT( 'stake_address', SA.view ) ) AS data @@ -269,10 +444,10 @@ BEGIN -- SELECT D.tx_id, - JSON_BUILD_OBJECT( + JSONB_BUILD_OBJECT( 'index', D.cert_index, 'type', 'delegation', - 'info', JSON_BUILD_OBJECT( + 'info', JSONB_BUILD_OBJECT( 'stake_address', SA.view, 'pool_id_bech32', PH.view, 'pool_id_hex', ENCODE(PH.hash_raw, 'hex') @@ -289,10 +464,10 @@ BEGIN -- SELECT T.tx_id, - JSON_BUILD_OBJECT( + JSONB_BUILD_OBJECT( 'index', T.cert_index, 'type', 'treasury_MIR', - 'info', JSON_BUILD_OBJECT( + 'info', JSONB_BUILD_OBJECT( 'stake_address', SA.view, 'amount', T.amount::text ) @@ -307,10 +482,10 @@ BEGIN -- SELECT R.tx_id, - JSON_BUILD_OBJECT( + JSONB_BUILD_OBJECT( 'index', R.cert_index, 'type', 'reserve_MIR', - 'info', JSON_BUILD_OBJECT( + 'info', JSONB_BUILD_OBJECT( 'stake_address', SA.view, 'amount', R.amount::text ) @@ -325,10 +500,10 @@ BEGIN -- SELECT PT.tx_id, - JSON_BUILD_OBJECT( + JSONB_BUILD_OBJECT( 'index', PT.cert_index, 'type', 'pot_transfer', - 'info', JSON_BUILD_OBJECT( + 'info', JSONB_BUILD_OBJECT( 'treasury', PT.treasury::text, 'reserves', PT.reserves::text ) @@ -343,10 +518,10 @@ BEGIN SELECT -- SELECT DISTINCT below because there are multiple entries for each signing key of a given transaction DISTINCT ON (PP.registered_tx_id) PP.registered_tx_id AS tx_id, - JSON_BUILD_OBJECT( + JSONB_BUILD_OBJECT( 'index', null, -- cert_index not stored in param_proposal table 'type', 'param_proposal', - 'info', JSON_STRIP_NULLS(JSON_BUILD_OBJECT( + 'info', JSONB_STRIP_NULLS(JSONB_BUILD_OBJECT( 'min_fee_a', PP.min_fee_a, 'min_fee_b', PP.min_fee_b, 'max_block_size', PP.max_block_size, @@ -365,7 +540,7 @@ BEGIN 'protocol_minor', PP.protocol_minor, 'min_utxo_value', PP.min_utxo_value, 'min_pool_cost', PP.min_pool_cost, - 'cost_model_id', PP.cost_model_id, + 'cost_model', CM.costs, 'price_mem', PP.price_mem, 'price_step', PP.price_step, 'max_tx_ex_mem', PP.max_tx_ex_mem, @@ -375,11 +550,12 @@ BEGIN 'max_val_size', PP.max_val_size, 'collateral_percent', PP.collateral_percent, 'max_collateral_inputs', PP.max_collateral_inputs, - 'coins_per_utxo_word', PP.coins_per_utxo_word + 
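-- Note: Babbage re-prices UTxO storage per byte (utxoCostPerByte) instead of per
-- 8-byte word, which db-sync exposes as coins_per_utxo_size; hence the rename below.
-- The new cost_model join also inlines the full cost model JSON (CM.costs) in place
-- of the bare cost_model_id foreign key.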
'coins_per_utxo_size', PP.coins_per_utxo_size )) ) AS data FROM public.param_proposal PP + INNER JOIN cost_model CM ON CM.id = PP.cost_model_id WHERE PP.registered_tx_id = ANY (_tx_id_list) -- @@ -387,10 +563,10 @@ BEGIN -- SELECT PR.announced_tx_id AS tx_id, - JSON_BUILD_OBJECT( + JSONB_BUILD_OBJECT( 'index', PR.cert_index, 'type', 'pool_retire', - 'info', JSON_BUILD_OBJECT( + 'info', JSONB_BUILD_OBJECT( 'pool_id_bech32', PH.view, 'pool_id_hex', ENCODE(PH.hash_raw, 'hex'), 'retiring epoch', PR.retiring_epoch @@ -406,10 +582,10 @@ BEGIN -- SELECT PIC.tx_id, - JSON_BUILD_OBJECT( + JSONB_BUILD_OBJECT( 'index', PU.cert_index, 'type', 'pool_update', - 'info', JSON_BUILD_OBJECT( + 'info', JSONB_BUILD_OBJECT( 'pool_id_bech32', PIC.pool_id_bech32, 'pool_id_hex', PIC.pool_id_hex, 'active_epoch_no', PIC.active_epoch_no, @@ -437,11 +613,11 @@ BEGIN _all_native_scripts AS ( SELECT tx_id, - JSON_AGG(data) AS list + JSONB_AGG(data) AS list FROM ( SELECT script.tx_id, - JSON_BUILD_OBJECT( + JSONB_BUILD_OBJECT( 'script_hash', ENCODE(script.hash, 'hex'), 'script_json', script.json ) AS data @@ -459,37 +635,37 @@ BEGIN _all_plutus_contracts AS ( SELECT tx_id, - JSON_AGG(data) AS list + JSONB_AGG(data) AS list FROM ( SELECT redeemer.tx_id, - JSON_BUILD_OBJECT( + JSONB_BUILD_OBJECT( 'address', INUTXO.address, 'script_hash', ENCODE(script.hash, 'hex'), 'bytecode', ENCODE(script.bytes, 'hex'), 'size', script.serialised_size, 'valid_contract', tx.valid_contract, - 'input', JSON_BUILD_OBJECT( - 'redeemer', JSON_BUILD_OBJECT( + 'input', JSONB_BUILD_OBJECT( + 'redeemer', JSONB_BUILD_OBJECT( 'purpose', redeemer.purpose, 'fee', redeemer.fee::text, - 'unit', JSON_BUILD_OBJECT( + 'unit', JSONB_BUILD_OBJECT( 'steps', redeemer.unit_steps::text, 'mem', redeemer.unit_mem::text ), - 'datum', JSON_BUILD_OBJECT( + 'datum', JSONB_BUILD_OBJECT( 'hash', ENCODE(rd.hash, 'hex'), 'value', rd.value ) ), - 'datum', JSON_BUILD_OBJECT( + 'datum', JSONB_BUILD_OBJECT( 'hash', ENCODE(ind.hash, 'hex'), 'value', ind.value ) ), 'output', CASE WHEN outd.hash IS NULL THEN NULL ELSE - JSON_BUILD_OBJECT( + JSONB_BUILD_OBJECT( 'hash', ENCODE(outd.hash, 'hex'), 'value', outd.value ) @@ -498,7 +674,7 @@ BEGIN FROM redeemer INNER JOIN tx ON redeemer.tx_id = tx.id - INNER JOIN datum RD ON RD.id = redeemer.datum_id + INNER JOIN redeemer_data RD ON RD.id = redeemer.redeemer_data_id INNER JOIN script ON redeemer.script_hash = script.hash INNER JOIN tx_in ON tx_in.redeemer_id = redeemer.id INNER JOIN tx_out INUTXO ON INUTXO.tx_id = tx_in.tx_out_id AND INUTXO.index = tx_in.tx_out_index @@ -519,7 +695,7 @@ BEGIN ATX.epoch AS epoch_no, ATX.epoch_slot, ATX.absolute_slot, - EXTRACT(epoch from ATX.tx_timestamp), + EXTRACT(epoch from ATX.tx_timestamp)::integer, ATX.tx_block_index, ATX.tx_size, ATX.total_output::text, @@ -528,11 +704,11 @@ BEGIN ATX.invalid_before, ATX.invalid_after, COALESCE(( - SELECT JSON_AGG(tx_collateral) + SELECT JSONB_AGG(tx_collateral_inputs) FROM ( SELECT - JSON_BUILD_OBJECT( - 'payment_addr', JSON_BUILD_OBJECT( + JSONB_BUILD_OBJECT( + 'payment_addr', JSONB_BUILD_OBJECT( 'bech32', payment_addr_bech32, 'cred', payment_addr_cred ), @@ -540,19 +716,68 @@ BEGIN 'tx_hash', ACI.tx_hash, 'tx_index', tx_index, 'value', value, - 'asset_list', COALESCE(JSON_AGG(asset_list) FILTER (WHERE asset_list IS NOT NULL), JSON_BUILD_ARRAY()) - ) AS tx_collateral + 'datum_hash', datum_hash, + 'inline_datum', inline_datum, + 'reference_script', reference_script, + 'asset_list', COALESCE(JSONB_AGG(asset_list) FILTER (WHERE asset_list IS NOT NULL), 
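-- Note: this COALESCE(JSONB_AGG(...) FILTER (...), JSONB_BUILD_ARRAY()) idiom, repeated
-- for every asset_list in this function, drops the NULL rows that the multi_asset
-- LEFT JOIN yields for asset-less UTxOs and falls back to an empty JSON array (rather
-- than NULL) when nothing survives the filter.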
JSONB_BUILD_ARRAY()) + ) AS tx_collateral_inputs FROM _all_collateral_inputs ACI WHERE ACI.tx_id = ATX.id - GROUP BY payment_addr_bech32, payment_addr_cred, stake_addr, ACI.tx_hash, tx_index, value + GROUP BY payment_addr_bech32, payment_addr_cred, stake_addr, ACI.tx_hash, tx_index, value, datum_hash, inline_datum, reference_script + ) AS tmp + ), JSONB_BUILD_ARRAY()), + COALESCE(( + SELECT JSONB_AGG(tx_collateral_outputs) + FROM ( + SELECT + JSONB_BUILD_OBJECT( + 'payment_addr', JSONB_BUILD_OBJECT( + 'bech32', payment_addr_bech32, + 'cred', payment_addr_cred + ), + 'stake_addr', stake_addr, + 'tx_hash', ACO.tx_hash, + 'tx_index', tx_index, + 'value', value, + 'datum_hash', datum_hash, + 'inline_datum', inline_datum, + 'reference_script', reference_script, + 'asset_list', COALESCE(JSONB_AGG(asset_list) FILTER (WHERE asset_list IS NOT NULL), JSONB_BUILD_ARRAY()) + ) AS tx_collateral_outputs + FROM _all_collateral_outputs ACO + WHERE ACO.tx_id = ATX.id + GROUP BY payment_addr_bech32, payment_addr_cred, stake_addr, ACO.tx_hash, tx_index, value, datum_hash, inline_datum, reference_script + ) AS tmp + ), JSONB_BUILD_ARRAY()), + COALESCE(( + SELECT JSONB_AGG(tx_reference_inputs) + FROM ( + SELECT + JSONB_BUILD_OBJECT( + 'payment_addr', JSONB_BUILD_OBJECT( + 'bech32', payment_addr_bech32, + 'cred', payment_addr_cred + ), + 'stake_addr', stake_addr, + 'tx_hash', ARI.tx_hash, + 'tx_index', tx_index, + 'value', value, + 'datum_hash', datum_hash, + 'inline_datum', inline_datum, + 'reference_script', reference_script, + 'asset_list', COALESCE(JSONB_AGG(asset_list) FILTER (WHERE asset_list IS NOT NULL), JSONB_BUILD_ARRAY()) + ) AS tx_reference_inputs + FROM _all_reference_inputs ARI + WHERE ARI.tx_id = ATX.id + GROUP BY payment_addr_bech32, payment_addr_cred, stake_addr, ARI.tx_hash, tx_index, value, datum_hash, inline_datum, reference_script ) AS tmp - ), JSON_BUILD_ARRAY()), + ), JSONB_BUILD_ARRAY()), COALESCE(( - SELECT JSON_AGG(tx_inputs) + SELECT JSONB_AGG(tx_inputs) FROM ( SELECT - JSON_BUILD_OBJECT( - 'payment_addr', JSON_BUILD_OBJECT( + JSONB_BUILD_OBJECT( + 'payment_addr', JSONB_BUILD_OBJECT( 'bech32', payment_addr_bech32, 'cred', payment_addr_cred ), @@ -560,19 +785,22 @@ BEGIN 'tx_hash', AI.tx_hash, 'tx_index', tx_index, 'value', value, - 'asset_list', COALESCE(JSON_AGG(asset_list) FILTER (WHERE asset_list IS NOT NULL), JSON_BUILD_ARRAY()) + 'datum_hash', datum_hash, + 'inline_datum', inline_datum, + 'reference_script', reference_script, + 'asset_list', COALESCE(JSONB_AGG(asset_list) FILTER (WHERE asset_list IS NOT NULL), JSONB_BUILD_ARRAY()) ) AS tx_inputs FROM _all_inputs AI WHERE AI.tx_id = ATX.id - GROUP BY payment_addr_bech32, payment_addr_cred, stake_addr, AI.tx_hash, tx_index, value + GROUP BY payment_addr_bech32, payment_addr_cred, stake_addr, AI.tx_hash, tx_index, value, datum_hash, inline_datum, reference_script ) AS tmp - ), JSON_BUILD_ARRAY()), + ), JSONB_BUILD_ARRAY()), COALESCE(( - SELECT JSON_AGG(tx_outputs) + SELECT JSONB_AGG(tx_outputs) FROM ( SELECT - JSON_BUILD_OBJECT( - 'payment_addr', JSON_BUILD_OBJECT( + JSONB_BUILD_OBJECT( + 'payment_addr', JSONB_BUILD_OBJECT( 'bech32', payment_addr_bech32, 'cred', payment_addr_cred ), @@ -580,19 +808,22 @@ BEGIN 'tx_hash', AO.tx_hash, 'tx_index', tx_index, 'value', value, - 'asset_list', COALESCE(JSON_AGG(asset_list) FILTER (WHERE asset_list IS NOT NULL), JSON_BUILD_ARRAY()) + 'datum_hash', datum_hash, + 'inline_datum', inline_datum, + 'reference_script', reference_script, + 'asset_list', COALESCE(JSONB_AGG(asset_list) FILTER (WHERE 
asset_list IS NOT NULL), JSONB_BUILD_ARRAY()) ) AS tx_outputs FROM _all_outputs AO WHERE AO.tx_id = ATX.id - GROUP BY payment_addr_bech32, payment_addr_cred, stake_addr, AO.tx_hash, tx_index, value + GROUP BY payment_addr_bech32, payment_addr_cred, stake_addr, AO.tx_hash, tx_index, value, datum_hash, inline_datum, reference_script ) AS tmp - ), JSON_BUILD_ARRAY()), - COALESCE((SELECT AW.list FROM _all_withdrawals AW WHERE AW.tx_id = ATX.id), JSON_BUILD_ARRAY()), - COALESCE((SELECT AMI.list FROM _all_mints AMI WHERE AMI.tx_id = ATX.id), JSON_BUILD_ARRAY()), - COALESCE((SELECT AME.list FROM _all_metadata AME WHERE AME.tx_id = ATX.id), JSON_BUILD_ARRAY()), - COALESCE((SELECT AC.list FROM _all_certs AC WHERE AC.tx_id = ATX.id), JSON_BUILD_ARRAY()), - COALESCE((SELECT ANS.list FROM _all_native_scripts ANS WHERE ANS.tx_id = ATX.id), JSON_BUILD_ARRAY()), - COALESCE((SELECT APC.list FROM _all_plutus_contracts APC WHERE APC.tx_id = ATX.id), JSON_BUILD_ARRAY()) + ), JSONB_BUILD_ARRAY()), + COALESCE((SELECT AW.list FROM _all_withdrawals AW WHERE AW.tx_id = ATX.id), JSONB_BUILD_ARRAY()), + COALESCE((SELECT AMI.list FROM _all_mints AMI WHERE AMI.tx_id = ATX.id), JSONB_BUILD_ARRAY()), + COALESCE((SELECT AME.list FROM _all_metadata AME WHERE AME.tx_id = ATX.id), JSONB_BUILD_ARRAY()), + COALESCE((SELECT AC.list FROM _all_certs AC WHERE AC.tx_id = ATX.id), JSONB_BUILD_ARRAY()), + COALESCE((SELECT ANS.list FROM _all_native_scripts ANS WHERE ANS.tx_id = ATX.id), JSONB_BUILD_ARRAY()), + COALESCE((SELECT APC.list FROM _all_plutus_contracts APC WHERE APC.tx_id = ATX.id), JSONB_BUILD_ARRAY()) FROM _all_tx ATX WHERE ATX.tx_hash = ANY (_tx_hashes_bytea) diff --git a/files/grest/rpc/transactions/tx_metalabels.sql b/files/grest/rpc/transactions/tx_metalabels.sql new file mode 100644 index 000000000..237c65c85 --- /dev/null +++ b/files/grest/rpc/transactions/tx_metalabels.sql @@ -0,0 +1,21 @@ +DROP FUNCTION IF EXISTS grest.tx_metalabels; + +CREATE FUNCTION grest.tx_metalabels() + RETURNS TABLE (key word64type) + LANGUAGE PLPGSQL + AS $$ +BEGIN + RETURN QUERY + WITH RECURSIVE t AS ( + (SELECT tm.key FROM public.tx_metadata tm ORDER BY key LIMIT 1) + UNION ALL + SELECT (SELECT tm.key FROM tx_metadata tm WHERE tm.key > t.key ORDER BY key LIMIT 1) + FROM t + WHERE t.key IS NOT NULL + ) + SELECT t.key FROM t WHERE t.key IS NOT NULL; +END; +$$; + +COMMENT ON FUNCTION grest.tx_metalabels IS 'Get a list of all transaction metalabels'; + diff --git a/files/grest/rpc/transactions/tx_status.sql b/files/grest/rpc/transactions/tx_status.sql index b0c0b2d1b..617ebb18f 100644 --- a/files/grest/rpc/transactions/tx_status.sql +++ b/files/grest/rpc/transactions/tx_status.sql @@ -5,7 +5,7 @@ CREATE FUNCTION grest.tx_status (_tx_hashes text[]) LANGUAGE plpgsql AS $$ DECLARE - _curr_block_no uinteger; + _curr_block_no word31type; BEGIN SELECT max(block_no) INTO _curr_block_no diff --git a/files/grest/rpc/transactions/tx_utxos.sql b/files/grest/rpc/transactions/tx_utxos.sql index 1532c4b73..6c3514ef4 100644 --- a/files/grest/rpc/transactions/tx_utxos.sql +++ b/files/grest/rpc/transactions/tx_utxos.sql @@ -42,100 +42,110 @@ BEGIN WHERE tx.id = ANY (_tx_id_list) ), + _all_inputs AS ( + SELECT + tx_in.tx_in_id AS tx_id, + tx_out.address AS payment_addr_bech32, + ENCODE(tx_out.payment_cred, 'hex') AS payment_addr_cred, + SA.view AS stake_addr, + ENCODE(tx.hash, 'hex') AS tx_hash, + tx_out.index AS tx_index, + tx_out.value::text AS value, + ( CASE WHEN MA.policy IS NULL THEN NULL + ELSE + JSON_BUILD_OBJECT( + 'policy_id', ENCODE(MA.policy, 
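-- Note on grest.tx_metalabels (the new function above): its recursive CTE emulates a
-- loose index scan. Each iteration fetches only the smallest tx_metadata.key greater
-- than the one found so far, via the index on (key), so listing distinct metadata
-- labels costs roughly one index probe per distinct label instead of the full
-- DISTINCT scan performed by the grest.tx_metalabels view it replaces (dropped below).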
'hex'), + 'asset_name', ENCODE(MA.name, 'hex'), + 'fingerprint', MA.fingerprint, + 'quantity', MTO.quantity::text + ) + END + ) AS asset_list + FROM + tx_in + INNER JOIN tx_out ON tx_out.tx_id = tx_in.tx_out_id + AND tx_out.index = tx_in.tx_out_index + INNER JOIN tx on tx_out.tx_id = tx.id + LEFT JOIN stake_address SA ON tx_out.stake_address_id = SA.id + LEFT JOIN ma_tx_out MTO ON MTO.tx_out_id = tx_out.id + LEFT JOIN multi_asset MA ON MA.id = MTO.ident + WHERE + tx_in.tx_in_id = ANY (_tx_id_list) + ), + _all_outputs AS ( SELECT - tx_id, - JSON_AGG(t_outputs) AS list + tx_out.tx_id, + tx_out.address AS payment_addr_bech32, + ENCODE(tx_out.payment_cred, 'hex') AS payment_addr_cred, + SA.view AS stake_addr, + ENCODE(tx.hash, 'hex') AS tx_hash, + tx_out.index AS tx_index, + tx_out.value::text AS value, + ( CASE WHEN MA.policy IS NULL THEN NULL + ELSE + JSON_BUILD_OBJECT( + 'policy_id', ENCODE(MA.policy, 'hex'), + 'asset_name', ENCODE(MA.name, 'hex'), + 'fingerprint', MA.fingerprint, + 'quantity', MTO.quantity::text + ) + END + ) AS asset_list + FROM + tx_out + INNER JOIN tx ON tx_out.tx_id = tx.id + LEFT JOIN stake_address SA ON tx_out.stake_address_id = SA.id + LEFT JOIN ma_tx_out MTO ON MTO.tx_out_id = tx_out.id + LEFT JOIN multi_asset MA ON MA.id = MTO.ident + WHERE + tx_out.tx_id = ANY (_tx_id_list) + ) + + SELECT + ENCODE(ATX.tx_hash, 'hex'), + COALESCE(( + SELECT JSON_AGG(tx_inputs) FROM ( - SELECT - tx_out.tx_id, + SELECT JSON_BUILD_OBJECT( 'payment_addr', JSON_BUILD_OBJECT( - 'bech32', tx_out.address, - 'cred', ENCODE(tx_out.payment_cred, 'hex') + 'bech32', payment_addr_bech32, + 'cred', payment_addr_cred ), - 'stake_addr', SA.view, - 'tx_hash', ENCODE(_all_tx.tx_hash, 'hex'), - 'tx_index', tx_out.index, - 'value', tx_out.value::text, - 'asset_list', COALESCE(( - SELECT - JSON_AGG(JSON_BUILD_OBJECT( - 'policy_id', ENCODE(MA.policy, 'hex'), - 'asset_name', ENCODE(MA.name, 'hex'), - 'quantity', MTX.quantity::text - )) - FROM - ma_tx_out MTX - INNER JOIN MULTI_ASSET MA ON MA.id = MTX.ident - WHERE - MTX.tx_out_id = tx_out.id - ), JSON_BUILD_ARRAY()) - ) AS t_outputs - FROM - tx_out - INNER JOIN _all_tx ON tx_out.tx_id = _all_tx.tx_id - LEFT JOIN stake_address SA on tx_out.stake_address_id = SA.id - WHERE - tx_out.tx_id = ANY (_tx_id_list) + 'stake_addr', stake_addr, + 'tx_hash', AI.tx_hash, + 'tx_index', tx_index, + 'value', value, + 'asset_list', COALESCE(JSON_AGG(asset_list) FILTER (WHERE asset_list IS NOT NULL), JSON_BUILD_ARRAY()) + ) AS tx_inputs + FROM _all_inputs AI + WHERE AI.tx_id = ATX.tx_id + GROUP BY payment_addr_bech32, payment_addr_cred, stake_addr, AI.tx_hash, tx_index, value ) AS tmp - - GROUP BY tx_id - ORDER BY tx_id - ), - - _all_inputs AS ( - SELECT - tx_id, - JSON_AGG(t_inputs) AS list + ), JSON_BUILD_ARRAY()), + COALESCE(( + SELECT JSON_AGG(tx_outputs) FROM ( - SELECT - tx_in.tx_in_id AS tx_id, + SELECT JSON_BUILD_OBJECT( 'payment_addr', JSON_BUILD_OBJECT( - 'bech32', tx_out.address, - 'cred', ENCODE(tx_out.payment_cred, 'hex') + 'bech32', payment_addr_bech32, + 'cred', payment_addr_cred ), - 'stake_addr', SA.view, - 'tx_hash', ENCODE(tx.hash, 'hex'), - 'tx_index', tx_out.index, - 'value', tx_out.value::text, - 'asset_list', COALESCE(( - SELECT - JSON_AGG(JSON_BUILD_OBJECT( - 'policy_id', ENCODE(MA.policy, 'hex'), - 'asset_name', ENCODE(MA.name, 'hex'), - 'quantity', MTX.quantity::text - )) - FROM - ma_tx_out MTX - INNER JOIN MULTI_ASSET MA ON MA.id = MTX.ident - WHERE - MTX.tx_out_id = tx_out.id - ), JSON_BUILD_ARRAY()) - ) AS t_inputs - FROM - tx_in - INNER JOIN 
tx_out ON tx_out.tx_id = tx_in.tx_out_id - AND tx_out.index = tx_in.tx_out_index - INNER JOIN tx on tx_out.tx_id = tx.id - LEFT JOIN stake_address SA on tx_out.stake_address_id = SA.id - WHERE - tx_in.tx_in_id = ANY (_tx_id_list) + 'stake_addr', stake_addr, + 'tx_hash', AO.tx_hash, + 'tx_index', tx_index, + 'value', value, + 'asset_list', COALESCE(JSON_AGG(asset_list) FILTER (WHERE asset_list IS NOT NULL), JSON_BUILD_ARRAY()) + ) AS tx_outputs + FROM _all_outputs AO + WHERE AO.tx_id = ATX.tx_id + GROUP BY payment_addr_bech32, payment_addr_cred, stake_addr, AO.tx_hash, tx_index, value ) AS tmp - - GROUP BY tx_id - ORDER BY tx_id - ) - - SELECT - ENCODE(ATX.tx_hash, 'hex'), - COALESCE(AI.list, JSON_BUILD_ARRAY()), - COALESCE(AO.list, JSON_BUILD_ARRAY()) + ), JSON_BUILD_ARRAY()) FROM _all_tx ATX - LEFT JOIN _all_inputs AI ON AI.tx_id = ATX.tx_id - LEFT JOIN _all_outputs AO ON AO.tx_id = ATX.tx_id WHERE ATX.tx_hash = ANY (_tx_hashes_bytea) ); diff --git a/files/grest/rpc/views/blocks.sql b/files/grest/rpc/views/blocks.sql index 038caba90..805c66c3d 100644 --- a/files/grest/rpc/views/blocks.sql +++ b/files/grest/rpc/views/blocks.sql @@ -8,10 +8,12 @@ CREATE VIEW grest.blocks AS b.EPOCH_SLOT_NO AS EPOCH_SLOT, b.BLOCK_NO AS BLOCK_HEIGHT, b.SIZE AS BLOCK_SIZE, - EXTRACT(epoch from b.TIME) AS BLOCK_TIME, + EXTRACT(epoch from b.TIME)::integer AS BLOCK_TIME, b.TX_COUNT, b.VRF_KEY, ph.VIEW AS POOL, + b.PROTO_MAJOR, + b.PROTO_MINOR, b.OP_CERT_COUNTER FROM BLOCK B diff --git a/files/grest/rpc/views/tx_metalabels.sql b/files/grest/rpc/views/tx_metalabels.sql deleted file mode 100644 index b84eeee9a..000000000 --- a/files/grest/rpc/views/tx_metalabels.sql +++ /dev/null @@ -1,9 +0,0 @@ -DROP VIEW IF EXISTS grest.tx_metalabels; - -CREATE VIEW grest.tx_metalabels AS SELECT DISTINCT - key::text as metalabel -FROM - public.tx_metadata; - -COMMENT ON VIEW grest.tx_metalabels IS 'Get a list of all transaction metalabels'; - diff --git a/files/tests/pre-merge/amazonlinux2-cabal.containerfile b/files/tests/pre-merge/amazonlinux2-cabal.containerfile index 5ac1b01d6..cf6e1935a 100644 --- a/files/tests/pre-merge/amazonlinux2-cabal.containerfile +++ b/files/tests/pre-merge/amazonlinux2-cabal.containerfile @@ -6,10 +6,11 @@ ARG CNODE_HOME=/opt/cardano/cnode ENV \ LANG=C.UTF-8 \ USER=root \ - PATH=$CNODE_HOME/scripts:/root/.cabal/bin:/root/.ghcup/bin:$PATH + PATH=$CNODE_HOME/scripts:/root/.cabal/bin:/root/.ghcup/bin:$PATH \ + LD_LIBRARY_PATH=/usr/lib:/usr/lib64:/usr/local/lib:$LD_LIBRARY_PATH RUN git clone https://github.com/input-output-hk/cardano-node &&\ - pwd ; ls -l + pwd WORKDIR /cardano-node @@ -21,4 +22,4 @@ RUN curl -o cardano-node-latest.txt "https://raw.githubusercontent.com/cardano-c git status &&\ /opt/cardano/cnode/scripts/cabal-build-all.sh &&\ cabal install cardano-ping &&\ - /root/.cabal/bin/cardano-cli version ; /root/.cabal/bin/cardano-node version \ No newline at end of file + /root/.cabal/bin/cardano-cli version ; /root/.cabal/bin/cardano-node version diff --git a/files/tests/pre-merge/amazonlinux2-cabal_-l.containerfile b/files/tests/pre-merge/amazonlinux2-cabal_-l.containerfile index ec5e092ef..dfec000ec 100644 --- a/files/tests/pre-merge/amazonlinux2-cabal_-l.containerfile +++ b/files/tests/pre-merge/amazonlinux2-cabal_-l.containerfile @@ -6,10 +6,11 @@ ARG CNODE_HOME=/opt/cardano/cnode ENV \ LANG=C.UTF-8 \ USER=root \ - PATH=$CNODE_HOME/scripts:/root/.cabal/bin:/root/.ghcup/bin:$PATH + PATH=$CNODE_HOME/scripts:/root/.cabal/bin:/root/.ghcup/bin:$PATH \ + 
LD_LIBRARY_PATH=/usr/lib:/usr/lib64:/usr/local/lib:$LD_LIBRARY_PATH RUN git clone https://github.com/input-output-hk/cardano-node &&\ - pwd ; ls -l + pwd WORKDIR /cardano-node @@ -21,4 +22,4 @@ RUN curl -o cardano-node-latest.txt "https://raw.githubusercontent.com/cardano-c git status &&\ /opt/cardano/cnode/scripts/cabal-build-all.sh -l &&\ cabal install cardano-ping &&\ - /root/.cabal/bin/cardano-cli version ; /root/.cabal/bin/cardano-node version \ No newline at end of file + /root/.cabal/bin/cardano-cli version ; /root/.cabal/bin/cardano-node version diff --git a/files/tests/pre-merge/debian-cabal.containerfile b/files/tests/pre-merge/debian-cabal.containerfile index a7a1a6445..d87f8b40d 100644 --- a/files/tests/pre-merge/debian-cabal.containerfile +++ b/files/tests/pre-merge/debian-cabal.containerfile @@ -10,7 +10,7 @@ ENV \ PATH=$CNODE_HOME/scripts:/root/.cabal/bin:/root/.ghcup/bin:$PATH RUN git clone https://github.com/input-output-hk/cardano-node &&\ - pwd ; ls -l + pwd WORKDIR /cardano-node diff --git a/files/tests/pre-merge/debian-cabal_-l.containerfile b/files/tests/pre-merge/debian-cabal_-l.containerfile index b813563ca..f6bd03abc 100644 --- a/files/tests/pre-merge/debian-cabal_-l.containerfile +++ b/files/tests/pre-merge/debian-cabal_-l.containerfile @@ -10,7 +10,7 @@ ENV \ PATH=$CNODE_HOME/scripts:/root/.cabal/bin:/root/.ghcup/bin:$PATH RUN git clone https://github.com/input-output-hk/cardano-node &&\ - pwd ; ls -l + pwd WORKDIR /cardano-node diff --git a/files/tests/pre-merge/rockylinux8-cabal.containerfile b/files/tests/pre-merge/rockylinux8-cabal.containerfile index 27504bb22..a79fb14f5 100644 --- a/files/tests/pre-merge/rockylinux8-cabal.containerfile +++ b/files/tests/pre-merge/rockylinux8-cabal.containerfile @@ -9,7 +9,7 @@ ENV \ PATH=$CNODE_HOME/scripts:/root/.cabal/bin:/root/.ghcup/bin:$PATH RUN git clone https://github.com/input-output-hk/cardano-node &&\ - pwd ; ls -l + pwd WORKDIR /cardano-node @@ -21,4 +21,4 @@ RUN curl -o cardano-node-latest.txt "https://raw.githubusercontent.com/cardano-c git status &&\ /opt/cardano/cnode/scripts/cabal-build-all.sh &&\ cabal install cardano-ping &&\ - /root/.cabal/bin/cardano-cli version ; /root/.cabal/bin/cardano-node version \ No newline at end of file + /root/.cabal/bin/cardano-cli version ; /root/.cabal/bin/cardano-node version diff --git a/files/tests/pre-merge/rockylinux8-cabal_-l.containerfile b/files/tests/pre-merge/rockylinux8-cabal_-l.containerfile index b6d1f634f..7c5a46a6b 100644 --- a/files/tests/pre-merge/rockylinux8-cabal_-l.containerfile +++ b/files/tests/pre-merge/rockylinux8-cabal_-l.containerfile @@ -9,7 +9,7 @@ ENV \ PATH=$CNODE_HOME/scripts:/root/.cabal/bin:/root/.ghcup/bin:$PATH RUN git clone https://github.com/input-output-hk/cardano-node &&\ - pwd ; ls -l + pwd WORKDIR /cardano-node @@ -21,4 +21,4 @@ RUN curl -o cardano-node-latest.txt "https://raw.githubusercontent.com/cardano-c git status &&\ /opt/cardano/cnode/scripts/cabal-build-all.sh -l &&\ cabal install cardano-ping &&\ - /root/.cabal/bin/cardano-cli version ; /root/.cabal/bin/cardano-node version \ No newline at end of file + /root/.cabal/bin/cardano-cli version ; /root/.cabal/bin/cardano-node version diff --git a/files/tests/pre-merge/ubuntu20-cabal.containerfile b/files/tests/pre-merge/ubuntu20-cabal.containerfile index 54f5c5bf0..e85fd4fdf 100644 --- a/files/tests/pre-merge/ubuntu20-cabal.containerfile +++ b/files/tests/pre-merge/ubuntu20-cabal.containerfile @@ -10,7 +10,7 @@ ENV \ PATH=$CNODE_HOME/scripts:/root/.cabal/bin:/root/.ghcup/bin:$PATH 
RUN git clone https://github.com/input-output-hk/cardano-node &&\ - pwd ; ls -l + pwd WORKDIR /cardano-node diff --git a/files/tests/pre-merge/ubuntu20-cabal_-l.containerfile b/files/tests/pre-merge/ubuntu20-cabal_-l.containerfile index dcdab6b37..958ef81ed 100644 --- a/files/tests/pre-merge/ubuntu20-cabal_-l.containerfile +++ b/files/tests/pre-merge/ubuntu20-cabal_-l.containerfile @@ -10,7 +10,7 @@ ENV \ PATH=$CNODE_HOME/scripts:/root/.cabal/bin:/root/.ghcup/bin:$PATH RUN git clone https://github.com/input-output-hk/cardano-node &&\ - pwd ; ls -l + pwd WORKDIR /cardano-node diff --git a/files/topology-guild.json b/files/topology-guild.json index 9f29b4e64..5ad07b520 100644 --- a/files/topology-guild.json +++ b/files/topology-guild.json @@ -11,7 +11,7 @@ {"addr": "148.72.153.168","port": 18000,"valency": 1,"name": "redo" }, {"addr": "relay.guild.cryptobounty.org","port": 9198,"valency": 1,"name": "bnty1" }, {"addr": "54.93.228.113","port": 4322,"valency": 1,"name": "titan" }, - {"addr": "95.216.160.145", "port": 6000, "valency": 1, "name": "damjan"}, + {"addr": "eden-guildnet.koios.rest", "port": 6000, "valency": 1, "name": "damjan"}, {"addr": "relay-guild.adaplus.io", "port": 6000, "valency": 1, "name": "adaplus"}, {"addr": "relays-guild.poolunder.com", "port": 8900, "valency": 1, "name": "TUNDR"}, {"addr": "guild.digitalsyndicate.io", "port": 6001, "valency": 1, "name": "BUDZ"} diff --git a/scripts/cnode-helper-scripts/cabal-build-all.sh b/scripts/cnode-helper-scripts/cabal-build-all.sh index e4a5789af..c7104fd3c 100755 --- a/scripts/cnode-helper-scripts/cabal-build-all.sh +++ b/scripts/cnode-helper-scripts/cabal-build-all.sh @@ -6,22 +6,42 @@ echo "Deleting build config artifact to remove cached version, this prevents invalid Git Rev" find dist-newstyle/build/x86_64-linux/ghc-8.10.?/cardano-config-* >/dev/null 2>&1 && rm -rf "dist-newstyle/build/x86_64-linux/ghc-8.*/cardano-config-*" +[[ -f /usr/lib/libsecp256k1.so ]] && export LD_LIBRARY_PATH=/usr/lib:"${LD_LIBRARY_PATH}" +[[ -f /usr/lib64/libsecp256k1.so ]] && export LD_LIBRARY_PATH=/usr/lib64:"${LD_LIBRARY_PATH}" +[[ -f /usr/local/lib/libsecp256k1.so ]] && export LD_LIBRARY_PATH=/usr/local/lib:"${LD_LIBRARY_PATH}" +[[ -d /usr/lib/pkgconfig ]] && export PKG_CONFIG_PATH=/usr/lib/pkgconfig:"${PKG_CONFIG_PATH}" +[[ -d /usr/lib64/pkgconfig ]] && export PKG_CONFIG_PATH=/usr/lib64/pkgconfig:"${PKG_CONFIG_PATH}" +[[ -d /usr/local/lib/pkgconfig ]] && export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:"${PKG_CONFIG_PATH}" + if [[ "$1" == "-l" ]] ; then USE_SYSTEM_LIBSODIUM="package cardano-crypto-praos flags: -external-libsodium-vrf" - # In case Custom libsodium module is present, exclude it from Load Library Path - [[ -f /usr/local/lib/libsodium.so ]] && export LD_LIBRARY_PATH=${LD_LIBRARY_PATH/\/usr\/local\/lib:/} else unset USE_SYSTEM_LIBSODIUM - source "${HOME}"/.bashrc - [[ -d /usr/local/lib/pkgconfig ]] && export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:"${PKG_CONFIG_PATH}" - [[ -f /usr/local/lib/libsodium.so ]] && export LD_LIBRARY_PATH=/usr/local/lib:"${LD_LIBRARY_PATH}" fi [[ -f cabal.project.local ]] && mv cabal.project.local cabal.project.local.bkp_"$(date +%s)" cat <<-EOF > .tmp.cabal.project.local ${USE_SYSTEM_LIBSODIUM} + source-repository-package + type: git + location: https://github.com/input-output-hk/hjsonpointer + tag: bb99294424e0c5b3c2942c743b545e4b01c12ce8 + --sha256: 11z5s4xmm6cxy6sdcf9ly4lr0qh3c811hpm0bmlp4c3yq8v3m9rk + + source-repository-package + type: git + location: https://github.com/input-output-hk/hjsonschema + 
tag: 1546af7fc267d5eea805ef223dd2b59ac120b784 + --sha256: 0sdikhsq6nnhmmvcpwzzrwfn9cn7rg75629qnrx01i5vm5ci4314 + + source-repository-package + type: git + location: https://github.com/haskell-works/hw-aeson + tag: d99d2f3e39a287607418ae605b132a3deb2b753f + --sha256: 1vxqcwjg9q37wbwi27y9ba5163lzfz51f1swbi0rp681yg63zvn4 + source-repository-package type: git location: https://github.com/input-output-hk/bech32 @@ -31,34 +51,40 @@ cat <<-EOF > .tmp.cabal.project.local source-repository-package type: git location: https://github.com/input-output-hk/cardano-addresses - tag: 71006f9eb956b0004022e80aadd4ad50d837b621 + tag: b6f2f3cef01a399376064194fd96711a5bdba4a7 subdir: command-line core + allow-newer: + *:aeson + EOF chmod 640 .tmp.cabal.project.local -if [[ -z "${USE_SYSTEM_LIBSODIUM}" ]] ; then - echo "Running cabal update to ensure you're on latest dependencies.." - cabal update 2>&1 | tee /tmp/cabal-update.log - echo "Building.." - cabal build all 2>&1 | tee tee /tmp/build.log +echo "Running cabal update to ensure you're on latest dependencies.." +cabal update 2>&1 | tee /tmp/cabal-update.log +echo "Building.." + +if [[ -z "${USE_SYSTEM_LIBSODIUM}" ]] ; then # Build using default cabal.project first and then add cabal.project.local for additional packages if [[ "${PWD##*/}" == "cardano-node" ]] || [[ "${PWD##*/}" == "cardano-db-sync" ]]; then - echo "Overwriting cabal.project.local to include cardano-addresses and bech32 .." + #cabal install cardano-crypto-class --disable-tests --disable-profiling | tee /tmp/build.log + [[ "${PWD##*/}" == "cardano-node" ]] && cabal build cardano-node cardano-cli cardano-submit-api --disable-tests --disable-profiling | tee /tmp/build.log + [[ "${PWD##*/}" == "cardano-db-sync" ]] && cabal build cardano-db-sync --disable-tests --disable-profiling | tee /tmp/build.log mv .tmp.cabal.project.local cabal.project.local - cabal install bech32 cardano-addresses-cli --overwrite-policy=always 2>&1 | tee /tmp/build-b32-caddr.log + cabal install bech32 cardano-addresses-cli cardano-ping --overwrite-policy=always 2>&1 | tee /tmp/build-b32-caddr.log + else + cabal build all --disable-tests --disable-profiling 2>&1 | tee /tmp/build.log fi -else +else # Add cabal.project.local customisations first before building if [[ "${PWD##*/}" == "cardano-node" ]] || [[ "${PWD##*/}" == "cardano-db-sync" ]]; then - echo "Overwriting cabal.project.local to include cardano-addresses and bech32 .." mv .tmp.cabal.project.local cabal.project.local + [[ "${PWD##*/}" == "cardano-node" ]] && cabal build cardano-node cardano-cli cardano-submit-api --disable-tests --disable-profiling | tee /tmp/build.log + [[ "${PWD##*/}" == "cardano-db-sync" ]] && cabal build cardano-db-sync --disable-tests --disable-profiling | tee /tmp/build.log + else + cabal build all --disable-tests --disable-profiling 2>&1 | tee /tmp/build.log fi - echo "Running cabal update to ensure you're on latest dependencies.." - cabal update 2>&1 | tee /tmp/cabal-update.log - echo "Building.." 
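# Note: the restructure above hoists `cabal update` and the build invocation out of the
# libsodium conditional so both modes share them. For cardano-node and cardano-db-sync
# checkouts only the required executables are built (with --disable-tests
# --disable-profiling); cabal.project.local is applied after the main build by default,
# but before it when -l (system libsodium) is passed. The pinned
# source-repository-package stanzas (hjsonpointer, hjsonschema, hw-aeson,
# cardano-addresses) and `allow-newer: *:aeson` match the constraint set the 1.35.x
# node series expects.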
- cabal build all 2>&1 | tee tee /tmp/build.log - [[ -f cabal.project.local ]] && cabal install bech32 cardano-addresses-cli --overwrite-policy=always 2>&1 | tee /tmp/build-b32-caddr.log + [[ -f cabal.project.local ]] && cabal install bech32 cardano-ping cardano-addresses-cli --overwrite-policy=always 2>&1 | tee /tmp/build-b32-caddr.log fi grep "^Linking" /tmp/build.log | grep -Ev 'test|golden|demo|chairman|locli|ledger|topology' | while read -r line ; do diff --git a/scripts/cnode-helper-scripts/cnode.sh b/scripts/cnode-helper-scripts/cnode.sh index ceb11875e..ecafb7d4b 100755 --- a/scripts/cnode-helper-scripts/cnode.sh +++ b/scripts/cnode-helper-scripts/cnode.sh @@ -29,6 +29,7 @@ usage() { Cardano Node wrapper script !! -d Deploy cnode as a systemd service + -s Stop cnode using SIGINT EOF exit 1 @@ -60,6 +61,13 @@ pre_startup_sanity() { [[ $(find "${LOG_DIR}"/node*.json 2>/dev/null | wc -l) -gt 0 ]] && mv "${LOG_DIR}"/node*.json "${LOG_DIR}"/archive/ } +stop_node() { + CNODE_PID=$(pgrep -fn "$(basename ${CNODEBIN}).*.--port ${CNODE_PORT}" 2>/dev/null) # env was only called in offline mode + kill -2 ${CNODE_PID} 2>/dev/null + # touch clean "${CNODE_HOME}"/db/clean # Disabled as it's a bit hacky, but only runs when SIGINT is passed to node process. Should not be needed if node does its job + exit 0 +} + deploy_systemd() { echo "Deploying ${CNODE_VNAME} as systemd service.." sudo bash -c "cat <<-'EOF' > /etc/systemd/system/${CNODE_VNAME}.service @@ -67,23 +75,22 @@ deploy_systemd() { Description=Cardano Node Wants=network-online.target After=network-online.target + StartLimitIntervalSec=600 + StartLimitBurst=5 [Service] Type=simple - Restart=always + Restart=on-failure RestartSec=60 User=${USER} LimitNOFILE=1048576 WorkingDirectory=${CNODE_HOME}/scripts ExecStart=/bin/bash -l -c \"exec ${CNODE_HOME}/scripts/cnode.sh\" - ExecStop=/bin/bash -l -c \"exec kill -2 \$(ps -ef | grep ${CNODEBIN}.*.--port\\ ${CNODE_PORT} | tr -s ' ' | cut -d ' ' -f2) &>/dev/null\" + ExecStop=/bin/bash -l -c \"exec ${CNODE_HOME}/scripts/cnode.sh -s\" KillSignal=SIGINT SuccessExitStatus=143 - StandardOutput=syslog - StandardError=syslog SyslogIdentifier=${CNODE_VNAME} TimeoutStopSec=60 - KillMode=mixed [Install] WantedBy=multi-user.target @@ -95,9 +102,10 @@ deploy_systemd() { ################### # Parse command line options -while getopts :d opt; do +while getopts :ds opt; do case ${opt} in d ) DEPLOY_SYSTEMD="Y" ;; + s ) STOP_NODE="Y" ;; \? ) usage ;; esac done @@ -110,6 +118,8 @@ case $? in 2) clear ;; esac +[[ "${STOP_NODE}" == "Y" ]] && stop_node + # Set defaults and do basic sanity checks set_defaults #Deploy systemd if -d argument was specified diff --git a/scripts/cnode-helper-scripts/cntools.library b/scripts/cnode-helper-scripts/cntools.library index fb79650bc..baf06b252 100644 --- a/scripts/cnode-helper-scripts/cntools.library +++ b/scripts/cnode-helper-scripts/cntools.library @@ -7,11 +7,11 @@ # The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) # and this adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html) # Major: Any considerable change in the code base, big feature, workflow or breaking change from previous version -CNTOOLS_MAJOR_VERSION=9 +CNTOOLS_MAJOR_VERSION=10 # Minor: Changes and features of minor character that can be applied without breaking existing functionality or workflow -CNTOOLS_MINOR_VERSION=1 +CNTOOLS_MINOR_VERSION=0 # Patch: Backwards compatible bug fixes. 
No additional functionality or major changes -CNTOOLS_PATCH_VERSION=0 +CNTOOLS_PATCH_VERSION=1 CNTOOLS_VERSION="${CNTOOLS_MAJOR_VERSION}.${CNTOOLS_MINOR_VERSION}.${CNTOOLS_PATCH_VERSION}" @@ -590,9 +590,10 @@ isPoolRegistered() { .[0].pool_status //"-", .[0].retiring_epoch //"-", .[0].op_cert //"-", - .[0].op_cert_counter //0, + .[0].op_cert_counter //"null", .[0].active_stake //0, - .[0].epoch_block_cnt //0, + .[0].block_count //0, + .[0].live_pledge //0, .[0].live_stake //0, .[0].live_delegators //0, .[0].live_saturation //0 @@ -616,10 +617,11 @@ isPoolRegistered() { p_op_cert=${pool_info_arr[13]} p_op_cert_counter=${pool_info_arr[14]} p_active_stake=${pool_info_arr[15]} - p_epoch_block_cnt=${pool_info_arr[16]} - p_live_stake=${pool_info_arr[17]} - p_live_delegators=${pool_info_arr[18]} - p_live_saturation=${pool_info_arr[19]} + p_block_count=${pool_info_arr[16]} + p_live_pledge=${pool_info_arr[17]} + p_live_stake=${pool_info_arr[18]} + p_live_delegators=${pool_info_arr[19]} + p_live_saturation=${pool_info_arr[20]} [[ ${p_pool_status} = 'registered' ]] && return 2 || return 3 fi @@ -1167,17 +1169,16 @@ registerStakeWallet() { getAssetsTxOut - dummy_build_args=( + build_args=( ${tx_in} --tx-out "${base_addr}+0${assets_tx_out}" --invalid-hereafter ${ttl} --fee 0 --certificate-file "${stake_cert_file}" - ${ERA_IDENTIFIER} --out-file "${TMP_DIR}"/tx0.tmp ) - println ACTION "${CCLI} transaction build-raw ${dummy_build_args[*]}" - if ! ${CCLI} transaction build-raw "${dummy_build_args[@]}"; then return 1; fi + + if ! buildTx; then return 1; fi min_fee_args=( transaction calculate-min-fee @@ -1218,6 +1219,7 @@ registerStakeWallet() { --certificate-file "${stake_cert_file}" --out-file "${TMP_DIR}"/tx.raw ) + if ! buildTx; then return 1; fi if [[ ${op_mode} = "hybrid" ]]; then @@ -1265,17 +1267,16 @@ deregisterStakeWallet() { getAssetsTxOut - dummy_build_args=( + build_args=( ${tx_in} --tx-out "${base_addr}+0${assets_tx_out}" --invalid-hereafter ${ttl} --fee 0 --certificate-file "${stake_dereg_file}" - ${ERA_IDENTIFIER} --out-file "${TMP_DIR}"/tx0.tmp ) - println ACTION "${CCLI} transaction build-raw ${dummy_build_args[*]}" - if ! ${CCLI} transaction build-raw "${dummy_build_args[@]}"; then return 1; fi + + if ! buildTx; then return 1; fi min_fee_args=( transaction calculate-min-fee @@ -1316,6 +1317,7 @@ deregisterStakeWallet() { --certificate-file "${stake_dereg_file}" --out-file "${TMP_DIR}"/tx.raw ) + if ! buildTx; then return 1; fi if [[ ${op_mode} = "hybrid" ]]; then @@ -1373,21 +1375,20 @@ sendAssets() { [[ ${assets_to_send[${idx}]} -gt 0 ]] && assets_tx_out_d+="+${assets_to_send[${idx}]} ${idx}" done - dummy_build_args=( + build_args=( ${tx_in} --invalid-hereafter ${ttl} --fee 0 ${metafile_param} - ${ERA_IDENTIFIER} --out-file "${TMP_DIR}"/tx0.tmp ) if [[ ${outCount} -eq 1 ]]; then - dummy_build_args+=( --tx-out "${d_addr}+0${assets_tx_out_d}" ) + build_args+=( --tx-out "${d_addr}+0${assets_tx_out_d}" ) else - dummy_build_args+=( --tx-out "${s_addr}+0${assets_tx_out_s}" --tx-out "${d_addr}+0${assets_tx_out_d}" ) + build_args+=( --tx-out "${s_addr}+0${assets_tx_out_s}" --tx-out "${d_addr}+0${assets_tx_out_d}" ) fi - println ACTION "${CCLI} transaction build-raw ${dummy_build_args[*]}" - if ! ${CCLI} transaction build-raw "${dummy_build_args[@]}"; then return 1; fi + + if ! 
buildTx; then return 1; fi min_fee_args=( transaction calculate-min-fee @@ -1490,17 +1491,16 @@ delegate() { getAssetsTxOut - dummy_build_args=( + build_args=( ${tx_in} --tx-out "${base_addr}+0${assets_tx_out}" --invalid-hereafter ${ttl} --fee 0 --certificate-file "${pool_delegcert_file}" - ${ERA_IDENTIFIER} --out-file "${TMP_DIR}"/tx0.tmp ) - println ACTION "${CCLI} transaction build-raw ${dummy_build_args[*]}" - if ! ${CCLI} transaction build-raw "${dummy_build_args[@]}"; then return 1; fi + + if ! buildTx; then return 1; fi min_fee_args=( transaction calculate-min-fee @@ -1541,6 +1541,7 @@ delegate() { --certificate-file "${pool_delegcert_file}" --out-file "${TMP_DIR}"/tx.raw ) + if ! buildTx; then return 1; fi if [[ ${op_mode} = "hybrid" ]]; then @@ -1580,16 +1581,15 @@ withdrawRewards() { getAssetsTxOut - dummy_build_args=( + build_args=( ${tx_in} --tx-out "${base_addr}+0${assets_tx_out}" --invalid-hereafter ${ttl} --fee 0 - ${ERA_IDENTIFIER} --out-file "${TMP_DIR}"/tx0.tmp ) - println ACTION "${CCLI} transaction build-raw ${dummy_build_args[*]}" - if ! ${CCLI} transaction build-raw "${dummy_build_args[@]}"; then return 1; fi + + if ! buildTx; then return 1; fi min_fee_args=( transaction calculate-min-fee @@ -1630,6 +1630,7 @@ withdrawRewards() { --fee ${min_fee} --out-file "${TMP_DIR}"/tx.raw ) + if ! buildTx; then return 1; fi if [[ ${op_mode} = "hybrid" ]]; then @@ -1680,18 +1681,17 @@ registerPool() { getAssetsTxOut - dummy_build_args=( + build_args=( ${tx_in} --tx-out "${base_addr}+0${assets_tx_out}" --invalid-hereafter ${ttl} --fee 0 --certificate-file "${pool_regcert_file}" - ${ERA_IDENTIFIER} --out-file "${TMP_DIR}"/tx0.tmp ) - [[ -n ${owner_delegation_cert} ]] && dummy_build_args+=( --certificate-file "${owner_delegation_cert}" ) - println ACTION "${CCLI} transaction build-raw ${dummy_build_args[*]}" - if ! ${CCLI} transaction build-raw "${dummy_build_args[@]}"; then return 1; fi + [[ -n ${owner_delegation_cert} ]] && build_args+=( --certificate-file "${owner_delegation_cert}" ) + + if ! buildTx; then return 1; fi min_fee_args=( transaction calculate-min-fee @@ -1733,6 +1733,7 @@ registerPool() { --out-file "${TMP_DIR}"/tx.raw ) [[ -n ${owner_delegation_cert} ]] && build_args+=( --certificate-file "${owner_delegation_cert}" ) + if ! buildTx; then return 1; fi if [[ ${op_mode} = "hybrid" ]]; then @@ -1796,17 +1797,16 @@ modifyPool() { getAssetsTxOut - dummy_build_args=( + build_args=( ${tx_in} --tx-out "${base_addr}+0${assets_tx_out}" --invalid-hereafter ${ttl} --fee 0 --certificate-file "${pool_regcert_file}" - ${ERA_IDENTIFIER} --out-file "${TMP_DIR}"/tx0.tmp ) - println ACTION "${CCLI} transaction build-raw ${dummy_build_args[*]}" - if ! ${CCLI} transaction build-raw "${dummy_build_args[@]}"; then return 1; fi + + if ! buildTx; then return 1; fi min_fee_args=( transaction calculate-min-fee @@ -1847,6 +1847,7 @@ modifyPool() { --certificate-file "${pool_regcert_file}" --out-file "${TMP_DIR}"/tx.raw ) + if ! buildTx; then return 1; fi if [[ ${op_mode} = "hybrid" ]]; then @@ -1907,17 +1908,16 @@ deRegisterPool() { getAssetsTxOut - dummy_build_args=( + build_args=( ${tx_in} --tx-out "${addr}+0${assets_tx_out}" --invalid-hereafter ${ttl} --fee 0 --certificate-file "${pool_deregcert_file}" - ${ERA_IDENTIFIER} --out-file "${TMP_DIR}"/tx0.tmp ) - println ACTION "${CCLI} transaction build-raw ${dummy_build_args[*]}" - if ! ${CCLI} transaction build-raw "${dummy_build_args[@]}"; then return 1; fi + + if ! 
@@ -1907,17 +1908,16 @@ deRegisterPool() {
 
   getAssetsTxOut
 
-  dummy_build_args=(
+  build_args=(
     ${tx_in}
     --tx-out "${addr}+0${assets_tx_out}"
     --invalid-hereafter ${ttl}
     --fee 0
     --certificate-file "${pool_deregcert_file}"
-    ${ERA_IDENTIFIER}
     --out-file "${TMP_DIR}"/tx0.tmp
   )
-  println ACTION "${CCLI} transaction build-raw ${dummy_build_args[*]}"
-  if ! ${CCLI} transaction build-raw "${dummy_build_args[@]}"; then return 1; fi
+
+  if ! buildTx; then return 1; fi
 
   min_fee_args=(
     transaction calculate-min-fee
@@ -1958,6 +1958,7 @@ deRegisterPool() {
     --certificate-file "${pool_deregcert_file}"
     --out-file "${TMP_DIR}"/tx.raw
   )
+  if ! buildTx; then return 1; fi
 
   if [[ ${op_mode} = "hybrid" ]]; then
@@ -1993,7 +1994,7 @@ deRegisterPool() {
 
 # Command    : rotatePoolKeys
 # Description: Rotate pool's KES keys
-# Parameters : pool name > the pool name to rotate KES keys for
+# Parameters : $1 = cold counter (offline mode)
 rotatePoolKeys() {
   # cold keys
   pool_coldkey_sk_file="${POOL_FOLDER}/${pool_name}/${POOL_COLDKEY_SK_FILENAME}"
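With the new `$1` parameter, offline rotation no longer depends on a pre-existing counter file; the operator feeds in the counter value read from an online node. The underlying CLI call in isolation (file names and the counter value are illustrative):

```bash
# Write a fresh issue-counter file with an explicit value, signed off the
# cold verification key; issue-op-cert then consumes this counter file.
cardano-cli node new-counter \
  --cold-verification-key-file cold.vkey \
  --counter-value 3 \
  --operational-certificate-issue-counter-file cold.counter
```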
@@ -2019,18 +2020,34 @@ rotatePoolKeys() {
   ! ${CCLI} node key-gen-KES --verification-key-file "${pool_hotkey_vk_file}" --signing-key-file "${pool_hotkey_sk_file}" && return 1
 
   p_opcert=""
-  if [[ -n ${KOIOS_API} ]]; then
+  if [[ $# -eq 1 ]]; then
+    println ACTION "${CCLI} node new-counter --cold-verification-key-file ${pool_coldkey_vk_file} --counter-value $1 --operational-certificate-issue-counter-file ${pool_opcert_counter_file}"
+    ! ${CCLI} node new-counter --cold-verification-key-file "${pool_coldkey_vk_file}" --counter-value $1 --operational-certificate-issue-counter-file "${pool_opcert_counter_file}" && return 1
+  elif [[ -n ${KOIOS_API} ]]; then
     ! getPoolID "${pool_name}" && println "ERROR" "\n${FG_RED}ERROR${NC}: failed to get pool ID!\n" && return 1
     println ACTION "curl -sSL -f -X POST -H \"Content-Type: application/json\" -d '{\"_pool_bech32_ids\":[\"${pool_id_bech32}\"]}' ${KOIOS_API}/pool_info"
     ! pool_info=$(curl -sSL -f -X POST -H "Content-Type: application/json" -d '{"_pool_bech32_ids":["'${pool_id_bech32}'"]}' "${KOIOS_API}/pool_info" 2>&1) && println "ERROR" "\n${FG_RED}KOIOS_API ERROR${NC}: ${pool_info}\n" && p_opcert="" # print error but ignore
     if old_counter_nbr=$(jq -er '.[0].op_cert_counter' <<< "${pool_info}" 2>/dev/null); then
-      println ACTION "${CCLI} node new-counter --cold-verification-key-file ${pool_coldkey_vk_file} --counter-value $(( old_counter_nbr + 1 )) --operational-certificate-issue-counter-file ${pool_opcert_counter_file}"
-      ! ${CCLI} node new-counter --cold-verification-key-file "${pool_coldkey_vk_file}" --counter-value $(( old_counter_nbr + 1 )) --operational-certificate-issue-counter-file "${pool_opcert_counter_file}" && return 1
-    elif [[ ! -f ${pool_opcert_counter_file} ]]; then
-      println "ERROR" "\n${FG_RED}ERROR${NC}: op cert counter file missing and unable to get previous counter value!\n" && return 1
+      new_counter_nbr=$(( old_counter_nbr + 1 ))
+    else
+      new_counter_nbr=0 # null returned = no block on chain for this pool
+    fi
+    println ACTION "${CCLI} node new-counter --cold-verification-key-file ${pool_coldkey_vk_file} --counter-value ${new_counter_nbr} --operational-certificate-issue-counter-file ${pool_opcert_counter_file}"
+    ! ${CCLI} node new-counter --cold-verification-key-file "${pool_coldkey_vk_file}" --counter-value ${new_counter_nbr} --operational-certificate-issue-counter-file "${pool_opcert_counter_file}" && return 1
+  elif [[ -f ${pool_opcert_file} ]]; then
+    println ACTION "${CCLI} query kes-period-info --op-cert-file ${pool_opcert_file} ${NETWORK_IDENTIFIER}"
+    if ! kes_period_info=$(${CCLI} query kes-period-info --op-cert-file "${pool_opcert_file}" ${NETWORK_IDENTIFIER}); then
+      println "ERROR" "\n${FG_RED}ERROR${NC}: failed to grab counter from node: [${kes_period_info}]\n" && return 1
     fi
-  elif [[ ! -f ${pool_opcert_counter_file} ]]; then
-    println "ERROR" "\n${FG_RED}ERROR${NC}: op cert counter file missing and unable to get previous counter value!\n" && return 1
+    if old_counter_nbr=$(awk '/{/,0' <<< "${kes_period_info}" | jq -er '.qKesNodeStateOperationalCertificateNumber' 2>/dev/null); then
+      new_counter_nbr=$(( old_counter_nbr + 1 ))
+    else
+      new_counter_nbr=0 # null returned = no block on chain for this pool
+    fi
+    println ACTION "${CCLI} node new-counter --cold-verification-key-file ${pool_coldkey_vk_file} --counter-value ${new_counter_nbr} --operational-certificate-issue-counter-file ${pool_opcert_counter_file}"
+    ! ${CCLI} node new-counter --cold-verification-key-file "${pool_coldkey_vk_file}" --counter-value ${new_counter_nbr} --operational-certificate-issue-counter-file "${pool_opcert_counter_file}" && return 1
+  else
+    println "ERROR" "\n${FG_RED}ERROR${NC}: op cert file missing and Koios disabled/unavailable. Unable to get current on-chain counter value!\n" && return 1
   fi
 
   println ACTION "${CCLI} node issue-op-cert --kes-verification-key-file ${pool_hotkey_vk_file} --cold-signing-key-file ${pool_coldkey_sk_file} --operational-certificate-issue-counter-file ${pool_opcert_counter_file} --kes-period ${current_kes_period} --out-file ${pool_opcert_file}"
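The `kes-period-info` branch above is the node-local way to recover the counter when Koios is unavailable. A standalone sketch of the extraction (network flag and file name are illustrative; `awk '/{/,0'` drops the human-readable preamble so only the trailing JSON reaches jq):

```bash
kes_period_info=$(cardano-cli query kes-period-info --op-cert-file node.cert --mainnet)
if old=$(awk '/{/,0' <<< "${kes_period_info}" | jq -er '.qKesNodeStateOperationalCertificateNumber' 2>/dev/null); then
  new_counter=$(( old + 1 ))   # blocks were minted with the current cert
else
  new_counter=0                # null: no block on chain for this pool yet
fi
echo "next counter: ${new_counter}"
```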
@@ -2063,17 +2080,16 @@ sendMetadata() {
 
   getAssetsTxOut
 
-  dummy_build_args=(
+  build_args=(
     ${tx_in}
     --tx-out "${addr}+0${assets_tx_out}"
     --invalid-hereafter ${ttl}
     --fee 0
     ${metafile_param}
-    ${ERA_IDENTIFIER}
     --out-file "${TMP_DIR}"/tx0.tmp
   )
-  println ACTION "${CCLI} transaction build-raw ${dummy_build_args[*]}"
-  if ! ${CCLI} transaction build-raw "${dummy_build_args[@]}"; then return 1; fi
+
+  if ! buildTx; then return 1; fi
 
   min_fee_args=(
     transaction calculate-min-fee
@@ -2114,6 +2130,7 @@ sendMetadata() {
     --fee ${min_fee}
     --out-file "${TMP_DIR}"/tx.raw
   )
+  if ! buildTx; then return 1; fi
 
   if [[ ${op_mode} = "hybrid" ]]; then
@@ -2157,7 +2174,7 @@ mintAsset() {
   [[ -z ${asset_name} ]] && asset_name_out="" || asset_name_out=".$(asciiToHex "${asset_name}")"
   getAssetsTxOut "${policy_id}${asset_name_out}" "${asset_amount}"
 
-  dummy_build_args=(
+  build_args=(
     ${tx_in}
     --tx-out "${addr}+0${assets_tx_out}"
     --mint "${asset_amount} ${policy_id}${asset_name_out}"
@@ -2165,11 +2182,10 @@ mintAsset() {
     ${metafile_param}
     --invalid-hereafter ${ttl}
     --fee 0
-    ${ERA_IDENTIFIER}
     --out-file "${TMP_DIR}"/tx0.tmp
   )
-  println ACTION "${CCLI} transaction build-raw ${dummy_build_args[*]}"
-  if ! ${CCLI} transaction build-raw "${dummy_build_args[@]}"; then return 1; fi
+
+  if ! buildTx; then return 1; fi
 
   min_fee_args=(
     transaction calculate-min-fee
@@ -2212,6 +2228,7 @@ mintAsset() {
     --fee ${min_fee}
     --out-file "${TMP_DIR}"/tx.raw
   )
+  if ! buildTx; then return 1; fi
 
   if [[ ${op_mode} = "hybrid" ]]; then
@@ -2262,7 +2279,7 @@ burnAsset() {
   [[ -z ${asset_name} ]] && asset_name_out="" || asset_name_out=".${asset_name}"
   getAssetsTxOut "${policy_id}${asset_name_out}" "-${assets_to_burn}"
 
-  dummy_build_args=(
+  build_args=(
     ${tx_in}
     --tx-out "${addr}+0${assets_tx_out}"
     --mint "-${assets_to_burn} ${policy_id}${asset_name_out}"
@@ -2270,11 +2287,10 @@ burnAsset() {
     ${metafile_param}
     --invalid-hereafter ${ttl}
     --fee 0
-    ${ERA_IDENTIFIER}
     --out-file "${TMP_DIR}"/tx0.tmp
   )
-  println ACTION "${CCLI} transaction build-raw ${dummy_build_args[*]}"
-  if ! ${CCLI} transaction build-raw "${dummy_build_args[@]}"; then return 1; fi
+
+  if ! buildTx; then return 1; fi
 
   min_fee_args=(
     transaction calculate-min-fee
@@ -2317,6 +2333,7 @@ burnAsset() {
     --fee ${min_fee}
     --out-file "${TMP_DIR}"/tx.raw
   )
+  if ! buildTx; then return 1; fi
 
   if [[ ${op_mode} = "hybrid" ]]; then
@@ -2356,8 +2373,8 @@ burnAsset() {
 #            : populate an array variable called 'build_args' with all data
 # Parameters : build_args > an array with all the arguments to assemble the transaction
 buildTx() {
-  println ACTION "${CCLI} transaction build-raw ${ERA_IDENTIFIER} ${build_args[*]}"
-  ${CCLI} transaction build-raw ${ERA_IDENTIFIER} "${build_args[@]}"
+  println ACTION "${CCLI} transaction build-raw ${ERA_IDENTIFIER} --cddl-format ${build_args[*]}"
+  ${CCLI} transaction build-raw ${ERA_IDENTIFIER} --cddl-format "${build_args[@]}"
 }
 
 # Command    : witnessTx [raw tx file] [signing keys ...]
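`buildTx` is the single place where the era flag and the new `--cddl-format` switch are applied, so that transaction bodies witnessed on separate machines assemble consistently. For orientation, the witness/assemble round trip behind the `witnessTx` command referenced above looks roughly like this (file names are illustrative):

```bash
# Witness the raw body with each required key on its own machine...
cardano-cli transaction witness --tx-body-file tx.raw \
  --signing-key-file payment.skey --out-file payment.witness
cardano-cli transaction witness --tx-body-file tx.raw \
  --signing-key-file stake.skey --out-file stake.witness
# ...then combine body and witnesses into the final signed transaction.
cardano-cli transaction assemble --tx-body-file tx.raw \
  --witness-file payment.witness --witness-file stake.witness \
  --out-file tx.signed
```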
diff --git a/scripts/cnode-helper-scripts/cntools.sh b/scripts/cnode-helper-scripts/cntools.sh
index b79d3c274..e15b1950c 100755
--- a/scripts/cnode-helper-scripts/cntools.sh
+++ b/scripts/cnode-helper-scripts/cntools.sh
@@ -2606,6 +2606,7 @@ function main {
       else
         println "$(printf "%-15s (${FG_YELLOW}%s${NC}) : ${FG_LBLUE}%s${NC} Ada" "Pledge" "new" "$(formatAsset "${fPParams_pledge::-6}")" )"
       fi
+      [[ -n ${KOIOS_API} ]] && println "$(printf "%-21s : ${FG_LBLUE}%s${NC} Ada" "Live Pledge" "$(formatLovelace "${p_live_pledge}")")"
 
       # get margin
       if [[ -z ${KOIOS_API} ]]; then
@@ -2738,14 +2739,29 @@ function main {
       else # get active/live stake/block info
         println "$(printf "%-21s : ${FG_LBLUE}%s${NC} Ada" "Active Stake" "$(formatLovelace "${p_active_stake}")")"
-        println "$(printf "%-21s : ${FG_LBLUE}%s${NC}" "Epoch Blocks" "${p_epoch_block_cnt}")"
+        println "$(printf "%-21s : ${FG_LBLUE}%s${NC}" "Epoch Blocks" "${p_block_count}")"
         println "$(printf "%-21s : ${FG_LBLUE}%s${NC} Ada" "Live Stake" "$(formatLovelace "${p_live_stake}")")"
         println "$(printf "%-21s : ${FG_LBLUE}%s${NC} (incl owners)" "Delegators" "${p_live_delegators}")"
         println "$(printf "%-21s : ${FG_LBLUE}%s${NC} %%" "Saturation" "${p_live_saturation}")"
       fi
       unset pool_kes_start
-      if [[ ${CNTOOLS_MODE} = "CONNECTED" ]]; then
+      if [[ -n ${KOIOS_API} ]]; then
+        [[ ${p_op_cert_counter} != null ]] && kes_counter_str="${FG_LBLUE}${p_op_cert_counter}${FG_LGRAY} - use counter ${FG_LBLUE}$((p_op_cert_counter+1))${FG_LGRAY} for rotation in offline mode.${NC}" || kes_counter_str="${FG_LGRAY}No blocks minted so far with active operational certificate. Use counter ${FG_LBLUE}0${FG_LGRAY} for rotation in offline mode.${NC}"
+        println "$(printf "%-21s : %s" "KES counter" "${kes_counter_str}")"
+      elif [[ ${CNTOOLS_MODE} = "CONNECTED" ]]; then
+        pool_opcert_file="${POOL_FOLDER}/${pool_name}/${POOL_OPCERT_FILENAME}"
+        println ACTION "${CCLI} query kes-period-info --op-cert-file ${pool_opcert_file} ${NETWORK_IDENTIFIER}"
+        if ! kes_period_info=$(${CCLI} query kes-period-info --op-cert-file "${pool_opcert_file}" ${NETWORK_IDENTIFIER}); then
+          kes_counter_str="${FG_RED}ERROR${NC}: failed to grab counter from node: [${FG_LGRAY}${kes_period_info}${NC}]"
+        else
+          if op_cert_counter=$(awk '/{/,0' <<< "${kes_period_info}" | jq -er '.qKesNodeStateOperationalCertificateNumber' 2>/dev/null); then
+            kes_counter_str="${FG_LBLUE}${op_cert_counter}${FG_LGRAY} - use counter ${FG_LBLUE}$((op_cert_counter+1))${FG_LGRAY} for rotation in offline mode.${NC}"
+          else
+            kes_counter_str="${FG_LGRAY}No blocks minted so far with active operational certificate. Use counter ${FG_LBLUE}0${FG_LGRAY} for rotation in offline mode.${NC}"
+          fi
+        fi
+        println "$(printf "%-21s : %s" "KES counter" "${kes_counter_str}")"
         getNodeMetrics
       else
         [[ -f "${POOL_FOLDER}/${pool_name}/${POOL_CURRENT_KES_START}" ]] && pool_kes_start="$(cat "${POOL_FOLDER}/${pool_name}/${POOL_CURRENT_KES_START}")"
@@ -2774,13 +2790,27 @@ function main {
         println " >> POOL >> ROTATE KES"
         println DEBUG "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
         echo
+        if [[ ${CNTOOLS_MODE} = "OFFLINE" ]]; then
+          println DEBUG "${FG_LGRAY}OFFLINE MODE${NC}: CNTools started in offline mode, please grab the correct counter value from an online node using pool info!\n"
+        fi
         [[ ! $(ls -A "${POOL_FOLDER}" 2>/dev/null) ]] && println "${FG_YELLOW}No pools available!${NC}" && waitForInput && continue
         println DEBUG "# Select pool to rotate KES keys on"
         if ! selectPool "all" "${POOL_COLDKEY_SK_FILENAME}"; then # ${pool_name} populated by selectPool function
           waitForInput && continue
         fi
-        if ! rotatePoolKeys; then
-          waitForInput && continue
+        if [[ ${CNTOOLS_MODE} = "OFFLINE" ]]; then
+          getAnswerAnyCust new_counter "Enter new counter number"
+          if ! isNumber ${new_counter}; then
+            println ERROR "\n${FG_RED}ERROR${NC}: not a number"
+            waitForInput && continue
+          fi
+          if ! rotatePoolKeys ${new_counter}; then
+            waitForInput && continue
+          fi
+        else
+          if ! rotatePoolKeys; then
+            waitForInput && continue
+          fi
         fi
         echo
         println "Pool KES keys successfully updated"
@@ -3171,6 +3201,7 @@ function main {
       done
       echo
       if [[ $(jq -r '."signing-file" | length' <<< "${offlineJSON}") -eq $(jq -r '.witness | length' <<< "${offlineJSON}") ]]; then # witnessed by all signing keys
+        tx_witness_files=()
        for otx_witness in $(jq -r '.witness[] | @base64' <<< "${offlineJSON}"); do
          _jq() { base64 -d <<< ${otx_witness} | jq -r "${1}"; }
          tx_witness="$(mktemp "${TMP_DIR}/tx.witness_XXXXXXXXXX")"
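One small but consequential fix in the last hunk is `tx_witness_files=()`: CNTools runs as a long-lived shell session, so without the reset, witnesses collected for one offline transaction would carry over into the next one signed in the same session. The failure mode in isolation:

```bash
# Arrays persist for the lifetime of the shell session; start clean per run.
sign_one() {
  tx_witness_files=()          # the fix: reset before collecting
  local w
  for w in "$@"; do tx_witness_files+=( "${w}" ); done
}
sign_one a.witness b.witness
sign_one c.witness
echo "${#tx_witness_files[@]}" # 1; without the reset it would be 3
```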
diff --git a/scripts/cnode-helper-scripts/dbsync.sh b/scripts/cnode-helper-scripts/dbsync.sh
index a1068a5d9..24a570ab2 100755
--- a/scripts/cnode-helper-scripts/dbsync.sh
+++ b/scripts/cnode-helper-scripts/dbsync.sh
@@ -9,7 +9,7 @@
 ######################################
 
 #PGPASSFILE="${CNODE_HOME}/priv/.pgpass" # PGPass file containing connection information for the postgres instance
-#DBSYNCBIN="${HOME}/.cabal/bin/cardano-db-sync-extended" # Path for cardano-db-sync-extended binary, assumed to be available in $PATH
+#DBSYNCBIN="${HOME}/.cabal/bin/cardano-db-sync" # Path for cardano-db-sync binary, assumed to be available in $PATH
 #DBSYNC_STATE_DIR="${CNODE_HOME}/guild-db/ledger-state" # Folder where DBSync instance will dump ledger-state files
 #DBSYNC_SCHEMA_DIR="${CNODE_HOME}/guild-db/schema" # Path to DBSync repository's schema folder
 #DBSYNC_CONFIG="${CNODE_HOME}/files/dbsync.json" # Config file for dbsync instance
@@ -36,7 +36,7 @@ usage() {
 }
 
 set_defaults() {
-  [[ -z "${DBSYNCBIN}" ]] && DBSYNCBIN="${HOME}/.cabal/bin/cardano-db-sync-extended"
+  [[ -z "${DBSYNCBIN}" ]] && DBSYNCBIN="${HOME}/.cabal/bin/cardano-db-sync"
   [[ -z "${PGPASSFILE}" ]] && PGPASSFILE="${CNODE_HOME}/priv/.pgpass"
   [[ -z "${DBSYNC_CONFIG}" ]] && DBSYNC_CONFIG="${CNODE_HOME}/files/dbsync.json"
   [[ -z "${DBSYNC_SCHEMA_DIR}" ]] && DBSYNC_SCHEMA_DIR="${CNODE_HOME}/guild-db/schema"
@@ -45,8 +45,8 @@ set_defaults() {
 
 check_defaults() {
-  if [[ ! -f "${DBSYNCBIN}" ]] && [[ ! $(command -v cardano-db-sync-extended &>/dev/null) ]]; then
-    echo "ERROR: cardano-db-sync-extended seems to be absent in PATH, please investigate \$PATH environment variable!" && exit 1
+  if [[ ! -f "${DBSYNCBIN}" ]] && [[ ! $(command -v cardano-db-sync &>/dev/null) ]]; then
+    echo "ERROR: cardano-db-sync seems to be absent in PATH, please investigate \$PATH environment variable!" && exit 1
   elif [[ ! -f "${PGPASSFILE}" ]]; then
     echo "ERROR: The PGPASSFILE (${PGPASSFILE}) not found, please ensure you've followed the instructions on guild-operators website!" && exit 1
     exit 1
@@ -68,7 +68,7 @@ check_config_sanity() {
   if [[ "${BYGENHASH}" != "${BYGENHASHCFG}" ]] || [[ "${SHGENHASH}" != "${SHGENHASHCFG}" ]] || [[ "${ALGENHASH}" != "${ALGENHASHCFG}" ]]; then
     cp "${CONFIG}" "${CONFIG}".tmp
     jq --arg BYGENHASH ${BYGENHASH} --arg SHGENHASH ${SHGENHASH} --arg ALGENHASH ${ALGENHASH} '.ByronGenesisHash = $BYGENHASH | .ShelleyGenesisHash = $SHGENHASH | .AlonzoGenesisHash = $ALGENHASH' <"${CONFIG}" >"${CONFIG}".tmp
-    mv -f "${CONFIG}".tmp "${CONFIG}"
+    [[ -s "${CONFIG}".tmp ]] && mv -f "${CONFIG}".tmp "${CONFIG}"
   fi
 }
 
diff --git a/scripts/cnode-helper-scripts/env b/scripts/cnode-helper-scripts/env
index 22556234a..d7434d083 100644
--- a/scripts/cnode-helper-scripts/env
+++ b/scripts/cnode-helper-scripts/env
@@ -481,7 +481,7 @@ createDistanceToBottom() {
 # Description : Query cardano-node for current metrics
 getNodeMetrics() {
   CNODE_PID=$(pgrep -fn "$(basename ${CNODEBIN}).*.--port ${CNODE_PORT}") # Define again - as node could be restarted since last attempt of sourcing env
-  [[ -n ${CNODE_PID} ]] && uptimes=$(ps -p ${CNODE_PID} -o etimes=) || uptimes=0
+  [[ -n ${CNODE_PID} ]] && uptimes=$(( $(date -u +%s) - $(date -d "$(ps -p ${CNODE_PID} -o lstart=)" +%s) )) || uptimes=0
   if [[ ${USE_EKG} = 'Y' ]]; then
     node_metrics=$(curl -s -m ${EKG_TIMEOUT} -H 'Accept: application/json' "http://${EKG_HOST}:${EKG_PORT}/" 2>/dev/null)
     node_metrics_tsv=$(jq -r '[
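The `uptimes` change swaps `ps -o etimes=` for arithmetic on the process start time, which also works on `ps` implementations that lack `etimes`. The computation on its own:

```bash
# Uptime in seconds for a PID, derived from its start time (lstart).
pid=$(pgrep -fn cardano-node)   # assumption: a single matching process
if [[ -n ${pid} ]]; then
  uptimes=$(( $(date -u +%s) - $(date -d "$(ps -p ${pid} -o lstart=)" +%s) ))
else
  uptimes=0
fi
echo "node uptime: ${uptimes}s"
```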
-f "${GENESIS_JSON}" ]] && echo "Shelley genesis file not found: ${GENESIS_JSON}" && return 1 GENESIS_HASH="$(${CCLI} genesis hash --genesis "${GENESIS_JSON}")" PROTOCOL="${CONFIG_CONTENTS[3]}" + P2P_ENABLED="${CONFIG_CONTENTS[5]}" fi [[ -z ${EKG_TIMEOUT} ]] && EKG_TIMEOUT=3 @@ -975,8 +977,8 @@ if [[ -n "${CNODE_PID}" ]]; then fi node_version="$(${CCLI} version | head -1 | cut -d' ' -f2)" -if ! versionCheck "1.32.1" "${node_version}"; then - echo -e "\nGuild scripts has now been upgraded to support cardano-node 1.32.1 or higher (${node_version} found).\nPlease update cardano-node (note that you should ideally update your config too) or use tagged branches for older node version.\n\n" +if ! versionCheck "1.35.0" "${node_version}"; then + echo -e "\nGuild scripts has now been upgraded to support cardano-node 1.35.0 or higher (${node_version} found).\nPlease update cardano-node (note that you should ideally update your config too) or use tagged branches for older node version.\n\n" return 1 fi diff --git a/scripts/cnode-helper-scripts/gLiveView.sh b/scripts/cnode-helper-scripts/gLiveView.sh index ccdf210e4..91da33f1a 100755 --- a/scripts/cnode-helper-scripts/gLiveView.sh +++ b/scripts/cnode-helper-scripts/gLiveView.sh @@ -57,7 +57,7 @@ setTheme() { # Do NOT modify code below # ###################################### -GLV_VERSION=v1.26.5 +GLV_VERSION=v1.27.0 PARENT="$(dirname $0)" @@ -603,7 +603,6 @@ checkPeers() { ##################################### check_peers="false" show_peers="false" -p2p_enabled=$(jq -r '.EnableP2P //false' ${CONFIG} 2>/dev/null) getNodeMetrics curr_epoch=${epochnum} getShelleyTransitionEpoch @@ -687,7 +686,7 @@ while true; do if [[ ${show_peers} = "false" ]]; then - if [[ ${p2p_enabled} != true ]]; then + if [[ ${P2P_ENABLED} != true ]]; then if [[ ${use_lsof} = 'Y' ]]; then peers_in=$(lsof -Pnl +M | grep ESTABLISHED | awk -v pid="${CNODE_PID}" -v port=":${CNODE_PORT}->" '$2 == pid && $9 ~ port {print $9}' | awk -F "->" '{print $2}' | wc -l) peers_out=$(lsof -Pnl +M | grep ESTABLISHED | awk -v pid="${CNODE_PID}" -v port=":(${CNODE_PORT}|${EKG_PORT}|${PROM_PORT})->" '$2 == pid && $9 !~ port {print $9}' | awk -F "->" '{print $2}' | wc -l) @@ -970,7 +969,7 @@ while true; do echo "${conndivider}" && ((line++)) - if [[ ${p2p_enabled} = true ]]; then + if [[ ${P2P_ENABLED} = true ]]; then # row 1 printf "${VL} P2P : ${style_status_1}%-${three_col_2_value_width}s${NC}" "enabled" diff --git a/scripts/cnode-helper-scripts/prereqs.sh b/scripts/cnode-helper-scripts/prereqs.sh index fc9dacfa5..c07dd26e1 100755 --- a/scripts/cnode-helper-scripts/prereqs.sh +++ b/scripts/cnode-helper-scripts/prereqs.sh @@ -151,6 +151,8 @@ if [[ ${UPDATE_CHECK} = 'Y' ]] && curl -s -f -m ${CURL_TIMEOUT} -o "${PARENT}"/p fi rm -f "${PARENT}"/prereqs.sh.tmp +mkdir -p "${HOME}"/git > /dev/null 2>&1 # To hold git repositories that will be used for building binaries + if [[ "${INTERACTIVE}" = 'Y' ]]; then clear CNODE_PATH=$(get_input "Please enter the project path" ${CNODE_PATH}) @@ -206,7 +208,7 @@ if [ "$WANT_BUILD_DEPS" = 'Y' ]; then elif [[ "${VERSION_ID}" == "7" ]]; then #RHEL/CentOS7 pkg_list="${pkg_list} libusb pkgconfig srm" - elif [[ "${VERSION_ID}" =~ "8" ]]; then + elif [[ "${VERSION_ID}" =~ "8" ]] || [[ "${VERSION_ID}" =~ "9" ]]; then #RHEL/CentOS/RockyLinux8 pkg_opts="${pkg_opts} --allowerasing" pkg_list="${pkg_list} libusbx ncurses-compat-libs pkgconf-pkg-config" @@ -215,7 +217,7 @@ if [ "$WANT_BUILD_DEPS" = 'Y' ]; then pkg_opts="${pkg_opts} --allowerasing" pkg_list="${pkg_list} libusbx 
diff --git a/scripts/cnode-helper-scripts/prereqs.sh b/scripts/cnode-helper-scripts/prereqs.sh
index fc9dacfa5..c07dd26e1 100755
--- a/scripts/cnode-helper-scripts/prereqs.sh
+++ b/scripts/cnode-helper-scripts/prereqs.sh
@@ -151,6 +151,8 @@ if [[ ${UPDATE_CHECK} = 'Y' ]] && curl -s -f -m ${CURL_TIMEOUT} -o "${PARENT}"/prereqs.sh.tmp
 fi
 rm -f "${PARENT}"/prereqs.sh.tmp
 
+mkdir -p "${HOME}"/git > /dev/null 2>&1 # To hold git repositories that will be used for building binaries
+
 if [[ "${INTERACTIVE}" = 'Y' ]]; then
   clear
   CNODE_PATH=$(get_input "Please enter the project path" ${CNODE_PATH})
@@ -206,7 +208,7 @@ if [ "$WANT_BUILD_DEPS" = 'Y' ]; then
     elif [[ "${VERSION_ID}" == "7" ]]; then
       #RHEL/CentOS7
       pkg_list="${pkg_list} libusb pkgconfig srm"
-    elif [[ "${VERSION_ID}" =~ "8" ]]; then
+    elif [[ "${VERSION_ID}" =~ "8" ]] || [[ "${VERSION_ID}" =~ "9" ]]; then
       #RHEL/CentOS/RockyLinux8
       pkg_opts="${pkg_opts} --allowerasing"
       pkg_list="${pkg_list} libusbx ncurses-compat-libs pkgconf-pkg-config"
@@ -215,7 +217,7 @@ if [ "$WANT_BUILD_DEPS" = 'Y' ]; then
       pkg_opts="${pkg_opts} --allowerasing"
       pkg_list="${pkg_list} libusbx ncurses-compat-libs pkgconf-pkg-config srm"
     fi
-    ! grep -q ^epel <<< "$(yum repolist)" && $sudo yum ${pkg_opts} install https://dl.fedoraproject.org/pub/epel/epel-release-latest-"$(grep ^VERSION_ID /etc/os-release | cut -d\" -f2)".noarch.rpm > /dev/null
+    ! grep -q ^epel <<< "$(yum repolist)" && $sudo yum ${pkg_opts} install https://dl.fedoraproject.org/pub/epel/epel-release-latest-"${VERSION_ID}".noarch.rpm > /dev/null
     $sudo yum ${pkg_opts} install ${pkg_list} > /dev/null;rc=$?
     if [ $rc != 0 ]; then
       echo "An error occurred while installing the prerequisite packages, please investigate by using the command below:"
@@ -249,6 +251,20 @@ if [ "$WANT_BUILD_DEPS" = 'Y' ]; then
     echo "CentOS: curl pkgconfig libffi-devel gmp-devel openssl-devel ncurses-libs ncurses-compat-libs systemd-devel zlib-devel tmux procps-ng"
     err_exit
   fi
+  echo "Install libsecp256k1 ... "
+  if ! grep -q "/usr/local/lib:\$LD_LIBRARY_PATH" "${HOME}"/.bashrc; then
+    echo "export LD_LIBRARY_PATH=/usr/local/lib:\$LD_LIBRARY_PATH" >> "${HOME}"/.bashrc
+    export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
+  fi
+  pushd "${HOME}"/git >/dev/null || err_exit
+  [[ ! -d "./secp256k1" ]] && git clone https://github.com/bitcoin-core/secp256k1 &>/dev/null
+  pushd secp256k1 >/dev/null || err_exit
+  git checkout ac83be33 &>/dev/null
+  ./autogen.sh > autogen.log 2>&1
+  ./configure --prefix=/usr --enable-module-schnorrsig --enable-experimental > configure.log 2>&1
+  make > make.log 2>&1
+  make check >> make.log 2>&1
+  $sudo make install > install.log 2>&1
 
   export BOOTSTRAP_HASKELL_NO_UPGRADE=1
   export BOOTSTRAP_HASKELL_GHC_VERSION=8.10.7
   export BOOTSTRAP_HASKELL_CABAL_VERSION=3.6.2.0
@@ -295,8 +311,6 @@ else
 
 fi
 
-mkdir -p "${HOME}"/git > /dev/null 2>&1 # To hold git repositories that will be used for building binaries
-
 if [[ "${LIBSODIUM_FORK}" = "Y" ]]; then
   if ! grep -q "/usr/local/lib:\$LD_LIBRARY_PATH" "${HOME}"/.bashrc; then
     echo "export LD_LIBRARY_PATH=/usr/local/lib:\$LD_LIBRARY_PATH" >> "${HOME}"/.bashrc
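Both the new libsecp256k1 step and the existing libsodium step guard the `~/.bashrc` edit with a `grep -q` check, so repeated runs of `prereqs.sh` do not stack duplicate export lines. The pattern in isolation:

```bash
# Append an env var export to ~/.bashrc only once, and apply it to the
# current shell as well.
if ! grep -q "/usr/local/lib:\$LD_LIBRARY_PATH" "${HOME}"/.bashrc; then
  echo "export LD_LIBRARY_PATH=/usr/local/lib:\$LD_LIBRARY_PATH" >> "${HOME}"/.bashrc
  export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
fi
```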
@@ -324,6 +338,7 @@ if [[ "${INSTALL_CNCLI}" = "Y" ]]; then
     if ! output=$(git clone https://github.com/cardano-community/cncli.git 2>&1); then echo -e "${output}" && err_exit; fi
   fi
   pushd ./cncli >/dev/null || err_exit
+  git remote set-url origin https://github.com/cardano-community/cncli >/dev/null
   if ! output=$(git fetch --all --prune 2>&1); then echo -e "${output}" && err_exit; fi
   cncli_git_latestTag=$(git describe --tags "$(git rev-list --tags --max-count=1)")
   if ! output=$(git checkout ${cncli_git_latestTag} 2>&1 && git submodule update --init --recursive --force 2>&1); then echo -e "${output}" && err_exit; fi
@@ -445,6 +460,7 @@ fi
 
 # Download dbsync config
 curl -sL -f -m ${CURL_TIMEOUT} -o dbsync.json.tmp ${URL_RAW}/files/config-dbsync.json
+[[ "${NETWORK}" != "mainnet" ]] && sed -i 's#NetworkName": "mainnet"#NetworkName": "testnet"#g' dbsync.json.tmp
 
 # Download node config, genesis and topology from template
 if [[ ${NETWORK} = "guild" ]]; then
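The CNCLI deployment above pins to the most recent release tag rather than a branch head (and re-points the remote, since the repository moved from AndrewWestberg to cardano-community). The pin-to-latest-tag pattern in isolation (repository path is illustrative):

```bash
cd "${HOME}"/git/cncli || exit 1
git fetch --all --prune
# resolve the newest tag by commit date, then check it out with submodules
latest_tag=$(git describe --tags "$(git rev-list --tags --max-count=1)")
git checkout "${latest_tag}" && git submodule update --init --recursive --force
```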
diff --git a/scripts/cnode-helper-scripts/topologyUpdater.sh b/scripts/cnode-helper-scripts/topologyUpdater.sh
index fa31a8730..0c555c230 100755
--- a/scripts/cnode-helper-scripts/topologyUpdater.sh
+++ b/scripts/cnode-helper-scripts/topologyUpdater.sh
@@ -129,39 +129,44 @@ if [[ ${TU_PUSH} = "Y" ]]; then
     curl -s -f -6 "https://api.clio.one/htopology/v1/?port=${CNODE_PORT}&blockNo=${blockNo}&valency=${CNODE_VALENCY}&magic=${NWMAGIC}${T_HOSTNAME}" | tee -a "${LOG_DIR}"/topologyUpdater_lastresult.json
   fi
 fi
+
 if [[ ${TU_FETCH} = "Y" ]]; then
-  if [[ ${IP_VERSION} = "4" || ${IP_VERSION} = "mix" ]]; then
-    curl -s -f -4 -o "${TOPOLOGY}".tmp "https://api.clio.one/htopology/v1/fetch/?max=${MAX_PEERS}&magic=${NWMAGIC}&ipv=${IP_VERSION}"
+  if [[ ${P2P_ENABLED} = "true" ]]; then
+    echo "INFO: Skipping the TU fetch request because the node is running in P2P mode"
   else
-    curl -s -f -6 -o "${TOPOLOGY}".tmp "https://api.clio.one/htopology/v1/fetch/?max=${MAX_PEERS}&magic=${NWMAGIC}&ipv=${IP_VERSION}"
-  fi
-  [[ ! -s "${TOPOLOGY}".tmp ]] && echo "ERROR: The downloaded file is empty!" && exit 1
-  if [[ -n "${CUSTOM_PEERS}" ]]; then
-    topo="$(cat "${TOPOLOGY}".tmp)"
-    IFS='|' read -ra cpeers <<< "${CUSTOM_PEERS}"
-    for cpeer in "${cpeers[@]}"; do
-      IFS=',' read -ra cpeer_attr <<< "${cpeer}"
-      case ${#cpeer_attr[@]} in
-        2) addr="${cpeer_attr[0]}"
-           port=${cpeer_attr[1]}
-           valency=1 ;;
-        3) addr="${cpeer_attr[0]}"
-           port=${cpeer_attr[1]}
-           valency=${cpeer_attr[2]} ;;
-        *) echo "ERROR: Invalid Custom Peer definition '${cpeer}'. Please double check CUSTOM_PEERS definition"
-           exit 1 ;;
-      esac
-      if [[ ${addr} = *.* ]]; then
-        ! isValidIPv4 "${addr}" && echo "ERROR: Invalid IPv4 address or hostname '${addr}'. Please check CUSTOM_PEERS definition" && continue
-      elif [[ ${addr} = *:* ]]; then
-        ! isValidIPv6 "${addr}" && echo "ERROR: Invalid IPv6 address '${addr}'. Please check CUSTOM_PEERS definition" && continue
-      fi
-      ! isNumber ${port} && echo "ERROR: Invalid port number '${port}'. Please check CUSTOM_PEERS definition" && continue
-      ! isNumber ${valency} && echo "ERROR: Invalid valency number '${valency}'. Please check CUSTOM_PEERS definition" && continue
-      topo=$(jq '.Producers += [{"addr": $addr, "port": $port|tonumber, "valency": $valency|tonumber}]' --arg addr "${addr}" --arg port ${port} --arg valency ${valency} <<< "${topo}")
-    done
-    echo "${topo}" | jq -r . >/dev/null 2>&1 && echo "${topo}" > "${TOPOLOGY}".tmp
+    if [[ ${IP_VERSION} = "4" || ${IP_VERSION} = "mix" ]]; then
+      curl -s -f -4 -o "${TOPOLOGY}".tmp "https://api.clio.one/htopology/v1/fetch/?max=${MAX_PEERS}&magic=${NWMAGIC}&ipv=${IP_VERSION}"
+    else
+      curl -s -f -6 -o "${TOPOLOGY}".tmp "https://api.clio.one/htopology/v1/fetch/?max=${MAX_PEERS}&magic=${NWMAGIC}&ipv=${IP_VERSION}"
+    fi
+    [[ ! -s "${TOPOLOGY}".tmp ]] && echo "ERROR: The downloaded file is empty!" && exit 1
+    if [[ -n "${CUSTOM_PEERS}" ]]; then
+      topo="$(cat "${TOPOLOGY}".tmp)"
+      IFS='|' read -ra cpeers <<< "${CUSTOM_PEERS}"
+      for cpeer in "${cpeers[@]}"; do
+        IFS=',' read -ra cpeer_attr <<< "${cpeer}"
+        case ${#cpeer_attr[@]} in
+          2) addr="${cpeer_attr[0]}"
+             port=${cpeer_attr[1]}
+             valency=1 ;;
+          3) addr="${cpeer_attr[0]}"
+             port=${cpeer_attr[1]}
+             valency=${cpeer_attr[2]} ;;
+          *) echo "ERROR: Invalid Custom Peer definition '${cpeer}'. Please double check CUSTOM_PEERS definition"
+             exit 1 ;;
+        esac
+        if [[ ${addr} = *.* ]]; then
+          ! isValidIPv4 "${addr}" && echo "ERROR: Invalid IPv4 address or hostname '${addr}'. Please check CUSTOM_PEERS definition" && continue
+        elif [[ ${addr} = *:* ]]; then
+          ! isValidIPv6 "${addr}" && echo "ERROR: Invalid IPv6 address '${addr}'. Please check CUSTOM_PEERS definition" && continue
+        fi
+        ! isNumber ${port} && echo "ERROR: Invalid port number '${port}'. Please check CUSTOM_PEERS definition" && continue
+        ! isNumber ${valency} && echo "ERROR: Invalid valency number '${valency}'. Please check CUSTOM_PEERS definition" && continue
+        topo=$(jq '.Producers += [{"addr": $addr, "port": $port|tonumber, "valency": $valency|tonumber}]' --arg addr "${addr}" --arg port ${port} --arg valency ${valency} <<< "${topo}")
+      done
+      echo "${topo}" | jq -r . >/dev/null 2>&1 && echo "${topo}" > "${TOPOLOGY}".tmp
+    fi
+    mv "${TOPOLOGY}".tmp "${TOPOLOGY}"
   fi
-  mv "${TOPOLOGY}".tmp "${TOPOLOGY}"
 fi
 
 exit 0
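The CUSTOM_PEERS handling is unchanged in substance, only re-indented under the new P2P guard: each `addr,port[,valency]` triplet is validated and appended to `.Producers` via jq. One iteration of that append, standalone (peer values are illustrative):

```bash
topo="$(cat "${TOPOLOGY}".tmp)"
topo=$(jq '.Producers += [{"addr": $addr, "port": $port|tonumber, "valency": $valency|tonumber}]' \
  --arg addr "relay.example.com" --arg port 6000 --arg valency 1 <<< "${topo}")
# only overwrite the temp file if the result is still valid JSON
echo "${topo}" | jq -r . >/dev/null 2>&1 && echo "${topo}" > "${TOPOLOGY}".tmp
```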
diff --git a/scripts/grest-helper-scripts/db-scripts/basics.sql b/scripts/grest-helper-scripts/db-scripts/basics.sql
index 8e460661f..09347ec4d 100644
--- a/scripts/grest-helper-scripts/db-scripts/basics.sql
+++ b/scripts/grest-helper-scripts/db-scripts/basics.sql
@@ -138,37 +138,6 @@ BEGIN
 END;
 $$;
 
-CREATE FUNCTION grest.get_current_epoch ()
-  RETURNS integer
-  LANGUAGE plpgsql
-  AS
-$$
-  BEGIN
-    RETURN (
-      SELECT MAX(no) FROM public.epoch
-    );
-  END;
-$$;
-
-CREATE FUNCTION grest.get_epoch_stakes_count (_epoch_no integer)
-  RETURNS integer
-  LANGUAGE plpgsql
-  AS
-$$
-  BEGIN
-    RETURN (
-      SELECT
-        count(*)
-      FROM
-        public.epoch_stake
-      WHERE
-        epoch_no = _epoch_no
-      GROUP BY
-        epoch_no
-    );
-  END;
-$$;
-
 CREATE FUNCTION grest.update_control_table (_key text, _last_value text, _artifacts text default null)
   RETURNS void
   LANGUAGE plpgsql
diff --git a/scripts/grest-helper-scripts/getmetrics.sh b/scripts/grest-helper-scripts/getmetrics.sh
index cec740721..634b1180d 100755
--- a/scripts/grest-helper-scripts/getmetrics.sh
+++ b/scripts/grest-helper-scripts/getmetrics.sh
@@ -46,6 +46,7 @@ function get-metrics() {
     # in Bytes
     pubschsize=$(psql -t --csv -d cexplorer -c "SELECT sum(pg_total_relation_size(quote_ident(schemaname) || '.' || quote_ident(tablename))::bigint) FROM pg_tables WHERE schemaname = 'public'" | grep "^[0-9]")
     grestschsize=$(psql -t --csv -d cexplorer -c "SELECT sum(pg_total_relation_size(quote_ident(schemaname) || '.' || quote_ident(tablename))::bigint) FROM pg_tables WHERE schemaname = 'grest'" | grep "^[0-9]")
+    grestconns=$(psql -t --csv -d cexplorer -c "select count(1) from pg_stat_activity where state='active' or state='idle';" | awk '{print $1}')
     dbsize=$(psql -t --csv -d cexplorer -c "SELECT pg_database_size ('cexplorer');" | grep "^[0-9]")
 
     # Metrics
@@ -61,9 +62,10 @@ function get-metrics() {
     export METRIC_load1m="$(( load1m ))"
     export METRIC_pubschsize="${pubschsize}"
     export METRIC_grestschsize="${grestschsize}"
+    export METRIC_grestconns="${grestconns}"
     export METRIC_dbsize="${dbsize}"
     #export METRIC_cnodeversion="$(echo $(cardano-node --version) | awk '{print $2 "-" $9}')"
-    #export METRIC_dbsyncversion="$(echo $(cardano-db-sync-extended --version) | awk '{print $2 "-" $9}')"
+    #export METRIC_dbsyncversion="$(echo $(cardano-db-sync --version) | awk '{print $2 "-" $9}')"
     #export METRIC_psqlversion="$(echo "" | psql cexplorer -c "SELECT version();" | grep PostgreSQL | awk '{print $2}')"
 
     for metric_var_name in $(env | grep ^METRIC | sort | awk -F= '{print $1}')
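The new `grestconns` metric is a plain `pg_stat_activity` count. Runnable on its own against the dbsync database (database name as per the docs' setup):

```bash
# Count active plus idle backend connections on the cexplorer database.
psql -t --csv -d cexplorer \
  -c "select count(1) from pg_stat_activity where state='active' or state='idle';" \
  | awk '{print $1}'
```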
diff --git a/scripts/grest-helper-scripts/grest-poll.sh b/scripts/grest-helper-scripts/grest-poll.sh
index 0463fc143..e2f69f808 100755
--- a/scripts/grest-helper-scripts/grest-poll.sh
+++ b/scripts/grest-helper-scripts/grest-poll.sh
@@ -73,8 +73,8 @@ function usage() {
 }
 
 function chk_version() {
-  instance_vr=$(curl -sfkL "${GURL}/control_table?key=eq.version&select=last_value" 2>/dev/null)
-  monitor_vr=$(curl -sfkL "${API_COMPARE}/control_table?key=eq.version&select=last_value" 2>/dev/null)
+  instance_vr=$(curl -sfkL "${GURL}/control_table?key=eq.version&select=last_value" | jq -r '.[0].last_value' 2>/dev/null)
+  monitor_vr=$(curl -sf "${API_STRUCT_DEFINITION}" | grep ^\ \ version|awk '{print $2}' 2>/dev/null)
 
   if [[ -z "${instance_vr}" ]] || [[ "${instance_vr}" == "[]" ]]; then
     [[ "${DEBUG_MODE}" == "1" ]] && echo "Response received for ${GURL} version: ${instance_vr}"
@@ -103,10 +103,10 @@ function chk_tip() {
     .[0].block_no //0,
     .[0].block_time // 0
   ] | @tsv' )"
-  currtip=$(TZ='UTC' date "+%Y-%m-%d %H:%M:%S")
-  dbtip=${tip[4]}
-  if [[ -z "${dbtip}" ]] || [[ $(( $(date -d "${currtip}" +%s) - $(date -d "${dbtip}" +%s) )) -gt ${TIP_DIFF} ]] ; then
-    log_err "${URLRPC}/tip endpoint did not provide a timestamp that's within ${TIP_DIFF} seconds - Tip: ${currtip}, DB Tip: ${dbtip}, Difference: $(( $(date -d "${currtip}" +%s) - $(date -d "${dbtip}" +%s) ))"
+  currtip=$(date +%s)
+  [[ ${tip[4]} =~ ^[0-9.]+$ ]] && dbtip=$(cut -d. -f1 <<< "${tip[4]}") || dbtip=$(date --date "${tip[4]}+0" +%s)
+  if [[ -z "${dbtip}" ]] || [[ $(( currtip - dbtip )) -gt ${TIP_DIFF} ]] ; then
+    log_err "${URLRPC}/tip endpoint did not provide a timestamp that's within ${TIP_DIFF} seconds - Tip: ${currtip}, DB Tip: ${dbtip}, Difference: $(( currtip - dbtip ))"
     optexit
   else
     epoch=${tip[0]}
@@ -151,7 +151,8 @@ function chk_cache_status() {
       optexit
     else
       if [[ "${last_actvstake_epoch}" != "${epoch}" ]]; then
-        epoch_length=$(curl -s "${GURL}"/genesis?select=epochlength | jq -r .[0].epochlength)
+        [[ -z "${GENESIS_JSON}" ]] && GENESIS_JSON="${PARENT}"/../shelley-genesis.json
+        epoch_length=$(jq -r .epochLength "${GENESIS_JSON}" 2>/dev/null)
         if [[ ${epoch_slot} -ge $(( epoch_length / 12 )) ]]; then
           log_err "Active Stake cache for epoch ${epoch} still not populated as of ${epoch_slot} slot, maximum tolerance was $(( epoch_length / 12 )) !!"
           optexit
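`chk_tip` now compares plain epoch seconds instead of formatted date strings, and it accepts either shape of `block_time` from the endpoint. The normalization step, factored out (sample values illustrative):

```bash
# Normalize a tip timestamp to epoch seconds: the endpoint may return an
# epoch number (possibly fractional) or a date string, depending on version.
normalize_tip() {
  local raw=$1
  if [[ ${raw} =~ ^[0-9.]+$ ]]; then
    cut -d. -f1 <<< "${raw}"        # numeric epoch: drop the fractional part
  else
    date --date "${raw}+0" +%s      # date string: parse as UTC (+0 offset)
  fi
}
normalize_tip "1657802096.123"      # -> 1657802096
normalize_tip "2022-07-14 12:34:56" # -> epoch seconds for that UTC time
```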
- remove_cron_job "stake-distribution-update" - remove_cron_job "pool-history-cache-update" + remove_cron_job "active-stake-cache-update" remove_cron_job "asset-registry-update" - kill_running_cron_jobs + remove_cron_job "epoch-info-cache-update" + remove_cron_job "pool-history-cache-update" + remove_cron_job "stake-distribution-new-accounts-update" + remove_cron_job "stake-distribution-update" + remove_cron_job "stake-snapshot-cache" } # Description : Set default env values if not user-specified. @@ -226,6 +220,11 @@ SGVERSION=1.0.1 DOCS_URL="https://cardano-community.github.io/guild-operators" API_DOCS_URL="https://api.koios.rest" [[ -z "${PGPASSFILE}" ]] && export PGPASSFILE="${CNODE_HOME}"/priv/.pgpass + case ${NWMAGIC} in + 1097911063) KOIOS_SRV="testnet.koios.rest" ;; + 764824073) KOIOS_SRV="api.koios.rest" ;; + *) KOIOS_SRV="guild.koios.rest" ;; + esac } parse_args() { @@ -305,7 +304,7 @@ SGVERSION=1.0.1 deploy_haproxy() { echo "[Re]Installing HAProxy.." pushd ~/tmp >/dev/null || err_exit - haproxy_url="http://www.haproxy.org/download/2.6/src/haproxy-2.6.0.tar.gz" + haproxy_url="http://www.haproxy.org/download/2.6/src/haproxy-2.6.1.tar.gz" if curl -sL -f -m ${CURL_TIMEOUT} -o haproxy.tar.gz "${haproxy_url}"; then tar xf haproxy.tar.gz &>/dev/null && rm -f haproxy.tar.gz if command -v apt-get >/dev/null; then @@ -314,7 +313,7 @@ SGVERSION=1.0.1 if command -v yum >/dev/null; then sudo yum -y install pcre-devel >/dev/null || err_exit "'sudo yum -y install prce-devel' failed!" fi - cd haproxy-2.6.0 || return + cd haproxy-2.6.1 || return make clean >/dev/null make -j $(nproc) TARGET=linux-glibc USE_ZLIB=1 USE_LIBCRYPT=1 USE_OPENSSL=1 USE_PCRE=1 USE_SYSTEMD=1 USE_PROMEX=1 >/dev/null sudo make install-bin >/dev/null @@ -339,6 +338,7 @@ SGVERSION=1.0.1 fi pushd "${CNODE_HOME}"/scripts >/dev/null || err_exit checkUpdate getmetrics.sh Y N N grest-helper-scripts >/dev/null + # script not available at first load sed -e "s@cexplorer@${PGDATABASE}@g" -i "${CNODE_HOME}"/scripts/getmetrics.sh echo -e "[Re]Installing Monitoring Agent.." e=! @@ -368,11 +368,6 @@ SGVERSION=1.0.1 EOF # Create HAProxy config template [[ -f "${HAPROXY_CFG}" ]] && cp "${HAPROXY_CFG}" "${HAPROXY_CFG}".bkp_$(date +%s) - case ${NWMAGIC} in - 1097911063) KOIOS_SRV="testnet.koios.rest" ;; - 764824073) KOIOS_SRV="api.koios.rest" ;; - *) KOIOS_SRV="guild.koios.rest" ;; - esac if grep 'koios.rest:8443' ${HAPROXY_CFG}; then echo " Skipping update of ${HAPROXY_CFG} as this instance is a monitoring instance" @@ -477,12 +472,12 @@ SGVERSION=1.0.1 common_update() { # Create skeleton whitelist URL file if one does not already exist using most common option if [[ ! 
-f "${CNODE_HOME}"/files/grestrpcs ]]; then - # Not network dependent, as the URL patterns followed will default to monitoring instance from koios - it will anyways be overwritten as per user preference based on variables in grest-poll.sh - curl -sfkL "https://api.koios.rest/koiosapi.yaml" -o "${CNODE_HOME}"/files/koiosapi.yaml 2>/dev/null + curl -sfkL "https://${KOIOS_SRV}/koiosapi.yaml" -o "${CNODE_HOME}"/files/koiosapi.yaml 2>/dev/null grep " #RPC" "${CNODE_HOME}"/files/koiosapi.yaml | sed -e 's#^ /#/#' | cut -d: -f1 | sort > "${CNODE_HOME}"/files/grestrpcs 2>/dev/null fi [[ "${SKIP_UPDATE}" == "Y" ]] && return 0 checkUpdate grest-poll.sh Y N N grest-helper-scripts >/dev/null + sed -i "s# API_STRUCT_DEFINITION=\"https://api.koios.rest/koiosapi.yaml\"# API_STRUCT_DEFINITION=\"https://${KOIOS_SRV}/koiosapi.yaml\"#g" grest-poll.sh checkUpdate checkstatus.sh Y N N grest-helper-scripts >/dev/null checkUpdate getmetrics.sh Y N N grest-helper-scripts >/dev/null }