chore(minibf): Implement /pools/pool_id endpoint #976
Conversation
📝 Walkthrough

This PR adds CIP-151 transaction metadata parsing for pool registrations, extends pool certificate indexing infrastructure across the codebase, implements a new pool-details query endpoint, and introduces related helpers for Bech32 Calidus key encoding and pool retirement certificates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant MiniBF as MiniBF Handler
    participant CardanoIndex as CardanoIndex
    participant Domain as Domain/DB
    participant Metadata as Metadata Query
    Client->>MiniBF: GET /pools/{id}
    MiniBF->>MiniBF: Decode pool ID
    MiniBF->>CardanoIndex: Query live snapshot by pool
    CardanoIndex->>Domain: Look up delegators (parallel scan)
    Domain-->>CardanoIndex: Delegator accounts
    CardanoIndex->>CardanoIndex: Compute live/active stake via Rayon
    CardanoIndex-->>MiniBF: Stake metrics
    MiniBF->>CardanoIndex: Query pool registration blocks
    CardanoIndex->>Domain: Lookup pool-certs index
    Domain-->>CardanoIndex: Registration block slots
    CardanoIndex-->>MiniBF: Registration metadata
    MiniBF->>Metadata: Parse CIP-151 metadata
    Metadata->>Metadata: Extract Calidus key
    Metadata-->>MiniBF: Pool details + optional key
    MiniBF-->>Client: JSON Pool response
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
```rust
    owners,
    registration,
    retirement,
    calidus_key: None,
```
`calidus_key` lives on tx metadata, on label 867. We have two options:
- Search for it using our tx metadata index. We scan all blocks with the 867 label on a transaction and try to find the Calidus key for the particular pool. If there is no Calidus key, we go through every single block with that label. Probably not that expensive, but still ugly.
- Add it to `PoolState` in the same way that we are handling the asset metadata, with something like `#[cbor(default)] calidus_key_slot: Option<BlockSlot>`. We implement a visitor that extracts this value, validates its format, and if everything is right we put it in the field. Then at runtime we just go get that block.

@scarmuega what do you think is better? IMO the second one is clearly superior, but it implies modifying the state. BUT the first one is easier, and we are already paying a high price on this endpoint because we have to scan all accounts to calculate live metrics.
we'll go for the scan approach and we'll revisit these performance optimizations as a cross-cutting concern at a later stage.
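For reference, the shape of the chosen scan approach can be sketched with simplified in-memory stand-ins. The `MetadataEntry` type and `load_pool_calidus_key` helper below are hypothetical illustrations, not the real index API: iterate label-867 entries in descending slot order and return the first Calidus key matching the pool.

```rust
// Hypothetical simplified stand-ins for the real index types.
type PoolId = [u8; 4];
type BlockSlot = u64;

struct MetadataEntry {
    slot: BlockSlot,
    pool: PoolId,
    calidus_key: Option<Vec<u8>>,
}

/// Scan all label-867 entries in descending slot order (newest first)
/// and return the most recent Calidus key registered for `target`.
fn load_pool_calidus_key(index: &[MetadataEntry], target: PoolId) -> Option<Vec<u8>> {
    let mut entries: Vec<&MetadataEntry> = index.iter().collect();
    // Desc order: the latest registration wins.
    entries.sort_by(|a, b| b.slot.cmp(&a.slot));
    entries
        .into_iter()
        .filter(|e| e.pool == target)
        .find_map(|e| e.calidus_key.clone())
}
```

Note the worst case this thread is discussing: for a pool with no registration, the `filter` visits every label-867 entry before returning `None`.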
```
# Conflicts:
#	Cargo.lock
#	crates/cardano/src/indexes/dimensions.rs
#	crates/cardano/src/indexes/ext.rs
#	crates/cardano/src/indexes/query.rs
#	crates/minibf/src/routes/pools.rs
#	crates/redb3/src/archive/indexes.rs
#	crates/redb3/src/indexes/mod.rs
```
Actionable comments posted: 1
♻️ Duplicate comments (1)
crates/minibf/src/routes/pools.rs (1)
329-399: ⚠️ Potential issue | 🟡 Minor: Calidus key lookup performs an unbounded scan across all CIP-151-tagged blocks.
`load_pool_calidus_key` walks the entire `METADATA` index for label 867 from genesis to tip in `Desc` order and returns only on the first match for this pool. For any pool that has never published a CIP-151 registration, this scans every CIP-151 block on the chain on every request, and request cost will grow unboundedly with chain length. This is already flagged in a prior discussion on the approach (scanning via the metadata index vs. materializing the last CIP-151 registration into `PoolState`), so just surfacing it again so it doesn't get lost before shipping.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/minibf/src/routes/pools.rs` around lines 329 - 399, load_pool_calidus_key currently scans the entire CIP-151 metadata index (blocks_by_metadata_stream) on every call, causing unbounded work for pools without registrations; stop doing that by reading a materialized latest registration stored on the pool state instead. Update the data model to add calidus fields to PoolState (e.g., calidus_id, calidus_pub_key, calidus_nonce, calidus_tx_hash, calidus_block_height, calidus_block_time, calidus_epoch), update the CIP-151 ingestion path that parses registrations to persist/update those fields when a registration or revocation is seen, and then change load_pool_calidus_key to return the PoolState's stored calidus values for the given pool (and only fall back to a bounded scan or None if necessary). Locate changes around load_pool_calidus_key, blocks_by_metadata_stream usage, and the CIP-151 registration parsing/ingestion code to implement this.
🧹 Nitpick comments (4)
crates/minibf/src/lib.rs (1)
472-473: LGTM: route placement is fine in Axum 0.8.

Literal segments like `/pools/extended` still take precedence over the `{id}` parameter in axum's matchit-based router, so co-existence with `/pools/extended` and the `/pools/{id}/...` sub-routes works regardless of declaration order. Worth adding an integration test asserting `/pools/extended` still hits `all_extended` (not `by_id` with `id="extended"`) to prevent future regressions if the router is ever swapped.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/minibf/src/lib.rs` around lines 472 - 473, Add an integration test that ensures the literal path "/pools/extended" is routed to routes::pools::all_extended::<D> rather than routes::pools::by_id::<D> with id="extended": write a test that boots the axum app (the same router used in lib.rs), issues an HTTP GET to "/pools/extended", and asserts the response and/or marker indicating all_extended was called (e.g., status code, response body, or a test-only header) and not the by_id handler; use the existing router setup so the test will fail if literal-segment precedence is lost.

crates/cardano/src/cip151.rs (1)
141-146: Map lookup silently ignores non-integer keys.
`lookup` only matches `Metadatum::Int` entries. If the on-chain map contains the same field encoded as `Bytes`/`Text`, the helper returns `None`, and the caller reports a generic `MissingField`, making misformatted submissions harder to debug. Since CIP-151 mandates integer keys this is probably fine, but consider returning a dedicated `InvalidType { field, expected: "int-keyed map" }` for non-int keys to aid diagnostics.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/cardano/src/cip151.rs` around lines 141 - 146, The lookup helper currently ignores non-Int map keys and returns None, masking cases where the on-chain map contains the same field encoded as Bytes/Text; change the lookup function (lookup) to return a Result<&Metadatum, Error> (or Result<Option<&Metadatum>, Error> if you prefer to keep "missing" distinct) and detect entries where the key exists but is the wrong Metadatum variant, returning a dedicated InvalidType { field, expected: "int-keyed map" } error instead of silently returning None; update callers to handle the new Result return so they can surface InvalidType diagnostics when a key is present but not encoded as Metadatum::Int.

crates/minibf/src/routes/pools.rs (2)
119-146: Collapse the duplicated `live_stake_pool == target_pool` branches.

Lines 126-128 and 130-132 evaluate the exact same predicate; they can be merged into a single branch to avoid recomputing `live_stake_pool(account)` twice per account and to make the aggregation easier to read.

♻️ Proposed refactor
```diff
 accounts
     .par_iter()
     .fold(PoolMetrics::default, |mut metrics, account| {
         let live_stake = account.live_stake();

         if live_stake_denominator_pool(account, active_pools).is_some() {
             metrics.total_live_stake += live_stake;
         }

-        if live_stake_pool(account).is_some_and(|hash| hash == target_pool) {
-            metrics.live_stake += live_stake;
-        }
-
-        if live_stake_pool(account).is_some_and(|hash| hash == target_pool) {
-            metrics.live_delegators += 1;
-        }
+        if live_stake_pool(account).is_some_and(|hash| hash == target_pool) {
+            metrics.live_stake += live_stake;
+            metrics.live_delegators += 1;
+        }

         if let Some(hash) = active_stake_pool(account) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/minibf/src/routes/pools.rs` around lines 119 - 146, The two duplicated branches calling live_stake_pool(account) should be collapsed to avoid recomputing the predicate: compute live_stake_pool once (e.g., let opt_hash = live_stake_pool(account)) and if let Some(hash) = opt_hash and hash == target_pool then add both metrics.live_stake += live_stake and metrics.live_delegators += 1; keep the existing live_stake_denominator_pool(account) and active_stake_pool(account) logic and return metrics as before; update references in the fold over accounts that produce PoolMetrics so the merge/reduce logic remains unchanged.
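Reduced to a std-only sketch, the merged branch looks like the following. The minimal `Account` type is hypothetical, and a sequential `fold` stands in for the real code's parallel fold over Rayon's `par_iter`:

```rust
#[derive(Debug, Default, PartialEq)]
struct PoolMetrics {
    live_stake: u64,
    live_delegators: u64,
}

// Hypothetical minimal account: delegation target plus live stake.
struct Account {
    pool: Option<u8>,
    stake: u64,
}

fn pool_metrics(accounts: &[Account], target_pool: u8) -> PoolMetrics {
    accounts.iter().fold(PoolMetrics::default(), |mut metrics, account| {
        // Evaluate the delegation predicate once per account and update
        // both counters in the same branch.
        if account.pool == Some(target_pool) {
            metrics.live_stake += account.stake;
            metrics.live_delegators += 1;
        }
        metrics
    })
}
```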
410-418: Consider returning 404 instead of 500 when the pool has no live snapshot.
`PoolState` was successfully read but `snapshot.live()` returned `None`; that's a valid state (e.g., a pool that is known but doesn't yet have live params materialized) rather than a server fault. Returning `INTERNAL_SERVER_ERROR` here makes the endpoint look broken for a data-state condition. `NOT_FOUND` (or a dedicated error) would better match the existing "pool not found" contract used a few lines above.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/minibf/src/routes/pools.rs` around lines 410 - 418, The code treats a missing live snapshot as an internal server error; change the handling so that when pool.snapshot.live() returns None it maps to StatusCode::NOT_FOUND instead of StatusCode::INTERNAL_SERVER_ERROR. Locate the block around decode_pool_id, PoolState read, and the snapshot assignment (the variables operator, pool, snapshot and the call snapshot.live()) and replace the ok_or(StatusCode::INTERNAL_SERVER_ERROR)? with ok_or(StatusCode::NOT_FOUND)? so the endpoint returns 404 for pools without a live snapshot.
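The status-code distinction can be illustrated with a tiny stand-in. The `live_params` helper and numeric codes below are hypothetical; the real handler maps the `Option` with `ok_or` onto axum's `StatusCode`:

```rust
const NOT_FOUND: u16 = 404;

/// A pool that exists but has no materialized live snapshot is a
/// data-state condition, not a server fault: surface it as 404.
fn live_params(snapshot_live: Option<u64>) -> Result<u64, u16> {
    snapshot_live.ok_or(NOT_FOUND)
}
```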
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9f3c4dd3-733d-4922-b855-b89a6c5607c4
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (15)
- crates/cardano/src/cip151.rs
- crates/cardano/src/indexes/delta.rs
- crates/cardano/src/indexes/dimensions.rs
- crates/cardano/src/indexes/ext.rs
- crates/cardano/src/indexes/query.rs
- crates/cardano/src/lib.rs
- crates/cardano/src/pallas_extras.rs
- crates/minibf/Cargo.toml
- crates/minibf/src/error.rs
- crates/minibf/src/lib.rs
- crates/minibf/src/mapping.rs
- crates/minibf/src/routes/pools.rs
- crates/redb3/src/archive/indexes.rs
- crates/redb3/src/indexes/mod.rs
- crates/testing/src/synthetic.rs
```rust
let metadata_entries = if cfg.metadata_entries.is_empty() {
    vec![(
        cfg.metadata_label,
        alonzo::Metadatum::Text(cfg.metadata_value.clone()),
    )]
} else {
    cfg.metadata_entries.clone()
};
let metadata_label = metadata_entries
    .first()
    .map(|(label, _)| *label)
    .unwrap_or(cfg.metadata_label);
```
Minor: `cfg.metadata_label` is silently ignored when `metadata_entries` is provided.

When a caller sets `metadata_entries`, both the block auxiliary data and `SyntheticVectors::metadata_label` are driven off the first entry's label; `cfg.metadata_label` becomes unreachable (the `unwrap_or(cfg.metadata_label)` fallback at Line 197 can't fire because `metadata_entries` is non-empty in that branch). If a caller populates both expecting the old `metadata_label` to still appear in vectors, their assertion will fail confusingly. Either document this precedence on `SyntheticBlockConfig` or assert/warn when both are set.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/testing/src/synthetic.rs` around lines 186 - 197, The code silently
ignores cfg.metadata_label when cfg.metadata_entries is provided; add a clear
check after building metadata_entries that if cfg.metadata_entries is non-empty
and the first entry's label differs from cfg.metadata_label you either emit a
warning or assert to surface the precedence to callers. Locate the
metadata_entries/metadata_label logic (symbols: cfg.metadata_entries,
cfg.metadata_label, metadata_entries, SyntheticVectors::metadata_label,
SyntheticBlockConfig) and add a branch that logs a warning (e.g., tracing::warn!
or similar) or returns an error/asserts when both sources are set with
conflicting labels so callers aren’t surprised.
Summary by CodeRabbit

Release Notes

- New `/pools/{id}` endpoint returning detailed pool information, including live/active stake metrics, registration/retirement history, and CIP-151 Calidus key support