"value for domain lovelace violates check constraint lovelace_check" when calling grest.pool_info() #155

Closed
agaffney opened this issue Jan 16, 2023 · 6 comments

@agaffney

I'm encountering a weird error when using the grest.pool_info() function with a particular pool ID against mainnet. This doesn't happen with some other pool IDs that I tested, and it's only happening in 2 of our 3 Koios setups. These setups were initially provisioned using Koios v1.0.9rc and have not been upgraded since then.

cexplorer=> SELECT grest.pool_info(ARRAY ['pool1p3chxyfv5cfk95hwq5zqjuljrpyk3uk5ua5alpn8rjgkkvvy8ne']);
ERROR:  value for domain lovelace violates check constraint "lovelace_check"

This is the lovelace type from the DB sync schema:

 Schema |   Name   |     Type      | Collation | Nullable | Default |                                   Check
--------+----------+---------------+-----------+----------+---------+---------------------------------------------------------------------------
 public | lovelace | numeric(20,0) |           |          |         | CHECK (VALUE >= 0::numeric AND VALUE <= '18446744073709551615'::numeric)
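
(For illustration, not part of the original report: any value cast to this domain that falls outside the 0 .. 18446744073709551615 range trips the same constraint, e.g.)

cexplorer=> SELECT '18446744073709551616'::numeric::lovelace;
ERROR:  value for domain lovelace violates check constraint "lovelace_check"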

I've also found this somewhat relevant upstream DB sync issue:

IntersectMBO/cardano-db-sync#351

The full SQL error context is:

cexplorer=> SELECT grest.pool_info(ARRAY ['pool1p3chxyfv5cfk95hwq5zqjuljrpyk3uk5ua5alpn8rjgkkvvy8ne']);
ERROR:  value for domain lovelace violates check constraint "lovelace_check"
CONTEXT:  SQL statement "WITH
      _all_pool_info AS (
        SELECT DISTINCT ON (pic.pool_id_bech32)
          *
        FROM
          grest.pool_info_cache AS pic
        WHERE
          pic.pool_id_bech32 = ANY(SELECT UNNEST(_pool_bech32_ids))
        ORDER BY
          pic.pool_id_bech32, pic.tx_id DESC
      )

    SELECT
      api.pool_id_bech32,
      api.pool_id_hex,
      api.active_epoch_no,
      api.vrf_key_hash,
      api.margin,
      api.fixed_cost::text,
      api.pledge::text,
      api.reward_addr,
      api.owners,
      api.relays,
      api.meta_url,
      api.meta_hash,
      offline_data.json,
      api.pool_status,
      api.retiring_epoch,
      ENCODE(block_data.op_cert::bytea, 'hex'),
      block_data.op_cert_counter,
      active_stake.as_sum::text,
      active_stake.as_sum / epoch_stake.es_sum,
      block_data.cnt,
      live.pledge::text,
      live.stake::text,
      live.delegators,
      ROUND((live.stake / _saturation_limit) * 100, 2)
    FROM
      _all_pool_info AS api
    LEFT JOIN LATERAL (
      (
        SELECT
          pod.json
        FROM
          public.pool_offline_data AS pod
        WHERE
          pod.pool_id = api.pool_hash_id
          AND
          pod.pmr_id = api.meta_id
      )
      UNION ALL
      (
        SELECT
          pod.json
        FROM
          public.pool_offline_data AS pod
        WHERE
          pod.pool_id = api.pool_hash_id
          AND
          pod.json IS NOT NULL
        ORDER BY
          pod.pmr_id DESC
      )
      LIMIT 1
    ) offline_data ON TRUE
    LEFT JOIN LATERAL (
      SELECT
        SUM(COUNT(b.id)) OVER () AS cnt,
        b.op_cert,
        b.op_cert_counter
      FROM 
        public.block AS b
      INNER JOIN 
        public.slot_leader AS sl ON b.slot_leader_id = sl.id
      WHERE
        sl.pool_hash_id = api.pool_hash_id
      GROUP BY
        b.op_cert,
        b.op_cert_counter
      ORDER BY
        b.op_cert_counter DESC
      LIMIT 1
    ) block_data ON TRUE
    LEFT JOIN LATERAL(
      SELECT
        amount::lovelace AS as_sum
      FROM
        grest.pool_active_stake_cache AS pasc
      WHERE 
        pasc.pool_id = api.pool_id_bech32
        AND
        pasc.epoch_no = _epoch_no
    ) active_stake ON TRUE
    LEFT JOIN LATERAL(
      SELECT
        amount::lovelace AS es_sum
      FROM
        grest.epoch_active_stake_cache AS easc
      WHERE 
        easc.epoch_no = _epoch_no
    ) epoch_stake ON TRUE
    LEFT JOIN LATERAL(
      SELECT
        CASE WHEN api.pool_status = 'retired'
          THEN NULL
        ELSE
          SUM (
            CASE WHEN total_balance >= 0
              THEN total_balance
              ELSE 0
            END
          )::lovelace
        END AS stake,
        COUNT (stake_address) AS delegators,
        CASE WHEN api.pool_status = 'retired'
          THEN NULL
        ELSE
          SUM (CASE WHEN sdc.stake_address = ANY (api.owners) THEN total_balance ELSE 0 END)::lovelace
        END AS pledge
      FROM
        grest.stake_distribution_cache AS sdc
      WHERE
        sdc.pool_id = api.pool_id_bech32
    ) live ON TRUE"
PL/pgSQL function grest.pool_info(text[]) line 15 at RETURN QUERY
@rdlrt commented Jan 16, 2023

Strange indeed. I checked all the registered Koios nodes and none of them seem to have trouble with the mentioned pool ID.

  1. Could you confirm your dbsync version and the method used to upgrade dbsync last time (for instance, when going from dbsync v12.x to v13.0.5, was the DB reset?)
  2. Since other instances do not have the error, my hunch points towards the cache tables (koios-1.0.9[rc] required a reset of the cache tables, as it no longer needs to maintain a separate history for epoch-wise stake information). Was setup-grest for koios-1.0.9rc run with the -r flag?
  3. Drilling down into the lovelace fields used within the RPC, can you run the queries below?
SELECT
  amount::lovelace AS as_sum
FROM
  grest.pool_active_stake_cache AS pasc
WHERE 
  pasc.pool_id = 'pool1p3chxyfv5cfk95hwq5zqjuljrpyk3uk5ua5alpn8rjgkkvvy8ne'
  AND
  pasc.epoch_no = 388;

#      as_sum
# ----------------
#  31775639067379
# (1 row)

SELECT
  amount::lovelace AS es_sum
FROM
  grest.epoch_active_stake_cache AS easc
WHERE 
  easc.epoch_no = 388;

#       es_sum
# -------------------
#  25427056571856294
# (1 row)

SELECT
  SUM(total_balance)::lovelace as stake
FROM
  grest.stake_distribution_cache AS sdc
WHERE
  sdc.pool_id = 'pool1p3chxyfv5cfk95hwq5zqjuljrpyk3uk5ua5alpn8rjgkkvvy8ne';

#      stake
# ----------------
#  32527930705976
# (1 row)

@agaffney (Author)

It seems that there's no data in the grest.pool_active_stake_cache or grest.epoch_active_stake_cache tables for epoch 388. That's something that I can investigate separately, but it's weird that those tables being empty resulted in this error.
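
(An illustrative check, not from the original thread, showing what epoch range each cache actually holds; it assumes only the epoch_no column already referenced in the pool_info function:)

cexplorer=> SELECT 'pool_active_stake_cache' AS cache, MIN(epoch_no) AS min_epoch, MAX(epoch_no) AS max_epoch, COUNT(*) AS row_count
            FROM grest.pool_active_stake_cache
            UNION ALL
            SELECT 'epoch_active_stake_cache', MIN(epoch_no), MAX(epoch_no), COUNT(*)
            FROM grest.epoch_active_stake_cache;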

@agaffney (Author)

Our Koios setup isn't exactly standard. This is running in k8s with manifests adapted from Dandelion, so we don't have the benefit of being able to run the setup script. The missing epoch 388 data is probably due to some manual cache repopulation work I did last week after we realized that we were missing all data from prior to epoch 375 (which is partly related to the k8s deployment).

@rdlrt commented Jan 17, 2023

Thanks for the background.

I think the part that might be related to k8s is the dbsync JSON log files: there is currently no better way to detect that the epoch-wide sum of epoch stake is complete and ready, so that stake for the new epoch can be processed (the calculation of epoch_stake for a new epoch occurs in small delta steps within dbsync - that's still pending rework upstream in cardano-ledger).

I am not sure if Dandelion has upgraded its support for koios-1.0.9 yet (their current cron job equivalents unfortunately do not cover all of the cron job deployments, and instead of looping over the jobs folder, each cron job has to be added manually).

Coming back to the current issue, my questions headed towards the caches because one of the larger improvements in koios-1.0.9 was the way some cache entries are calculated (rather than querying the entire transaction history, it leverages the epoch stake information and only processes transactions since that snapshot). If that snapshot is inaccurate, it could have brought in a much larger amount of stake additions for something like the stake distribution cache.
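
(A rough way to confirm that theory, not from the thread, reusing only the columns pool_info already reads from grest.stake_distribution_cache: look for per-pool sums that fall outside the lovelace domain and would therefore fail the ::lovelace cast.)

cexplorer=> SELECT sdc.pool_id, SUM(sdc.total_balance) AS raw_sum
            FROM grest.stake_distribution_cache AS sdc
            GROUP BY sdc.pool_id
            HAVING SUM(sdc.total_balance) < 0
                OR SUM(sdc.total_balance) > '18446744073709551615'::numeric;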

@agaffney (Author)

I think we got into this situation due to a combination of factors:

  • DB sync wasn't fully caught up when Koios was first deployed, which for some reason caused the active stake cache data to only go back to epoch 375 (when Koios was originally deployed)
  • I performed some manual operations to try to backfill those cache tables, and it prevented the automatic job from running afterward for some reason

Instead of trying to figure out exactly what's broken and fix it, I've just dropped the grest schema from the affected databases and redeployed Koios.
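
(For anyone landing here later, the reset described above boils down to something like the statement below, on the assumption that redeploying Koios recreates the grest schema and repopulates the caches.)

cexplorer=> DROP SCHEMA IF EXISTS grest CASCADE;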

@rdlrt commented May 18, 2023

The crux of the pain point is having to parse logs post epoch transition, for which there currently isn't an alternate method. An issue has been opened upstream, and this will be tracked via #195 on the Koios end.

rdlrt closed this as completed on May 18, 2023