chore: Update backend to use blockfrost instead of mumak #139
gonzalezzfelipe merged 1 commit into main from
Conversation
📝 Walkthrough
The application's data source transitions from PostgreSQL-backed SQL queries to Blockfrost API-driven on-chain data fetching. Dependencies shift from sqlx/database libraries to Blockfrost and Cardano-related crates, with core GraphQL resolvers refactored to fetch UTXOs via the API and perform in-memory transformations into domain objects.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client as GraphQL Client
    participant Resolver as Query Resolver
    participant Blockfrost as Blockfrost API
    participant Transform as Domain Transformer
    participant Response as GraphQL Response
    Client->>Resolver: Query objects_in_radius(lat, lon)
    Resolver->>Blockfrost: fetch_utxos_by_policy(policy_id)
    Blockfrost-->>Resolver: [AddressUtxoContentInner]
    loop For each UTXO
        Resolver->>Transform: TryFrom<AddressUtxoContentInner>
        Transform->>Transform: Parse datum, extract position
        Transform->>Transform: Calculate distance from center
        Transform-->>Resolver: Ship/Pellet/Asteria
    end
    Resolver->>Resolver: Filter by radius
    Resolver-->>Response: [Ship/Pellet/Asteria]
    Response-->>Client: GraphQL Response
```
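The transform-and-filter steps at the end of the diagram can be sketched as below. This is an illustrative standalone sketch: the `GameObject` type, its integer coordinate fields, and `objects_in_radius` are assumptions, not the backend's actual domain types or resolver signature.

```rust
// Illustrative sketch of the in-memory "calculate distance, filter by
// radius" step from the walkthrough. All names here are hypothetical.

#[derive(Debug, PartialEq)]
struct GameObject {
    name: String,
    x: i64,
    y: i64,
}

impl GameObject {
    /// Squared Euclidean distance from the map center (0, 0).
    fn distance_squared(&self) -> i64 {
        self.x * self.x + self.y * self.y
    }
}

/// Keep only objects within `radius` of the center, mirroring the
/// "Filter by radius" step in the diagram. Comparing squared values
/// avoids floating-point math on integer coordinates.
fn objects_in_radius(objects: Vec<GameObject>, radius: i64) -> Vec<GameObject> {
    objects
        .into_iter()
        .filter(|o| o.distance_squared() <= radius * radius)
        .collect()
}

fn main() {
    let objects = vec![
        GameObject { name: "ship".into(), x: 3, y: 4 },   // distance 5
        GameObject { name: "pellet".into(), x: 10, y: 10 }, // distance > 14
    ];
    let nearby = objects_in_radius(objects, 5);
    assert_eq!(nearby.len(), 1);
    assert_eq!(nearby[0].name, "ship");
    println!("{:?}", nearby);
}
```

In the real resolver the objects come from `TryFrom<AddressUtxoContentInner>` conversions rather than literals, but the filtering shape is the same.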
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/Cargo.toml`:
- Line 16: The Cargo.toml currently pins pallas = "1.0.0-alpha.4" with unstable
features; change the dependency to the stable release pallas = "0.34.0" (remove
alpha and unstable features) unless you explicitly need alpha-only
functionality—if so, add a short note in the project README and a new module
(e.g., pallas_adapter or pallas_shim) that isolates all uses of pallas behind a
narrow API so future breakage is contained and document the exact alpha
requirement and compatibility guarantees; update Cargo.toml and note the adapter
module name (pallas_adapter) and README entry accordingly.
In `@backend/src/main.rs`:
- Around line 599-603: The token iteration performs an unbounded external call
per token; dedupe tokens by policy_id and cap the number of outbound calls to
prevent amplification. In the block that uses tokens and calls
fetch_utxos_by_policy (refer to the tokens variable and fetch_utxos_by_policy
call), first collect unique policy_ids (e.g., via a HashSet) then limit to a
safe MAX_TOKENS (choose a constant like MAX_TOKENS = 20), and only iterate over
the deduped, truncated list; if the original list exceeds MAX_TOKENS, either log
a warning or return an error indicating the request is too large. Ensure the
deduping and truncation happen before any network calls so fetch_utxos_by_policy
is only invoked for the bounded set.
- Around line 44-45: The current CORS headers use Access-Control-Allow-Origin: *
together with Access-Control-Allow-Credentials: true which is invalid; update
the logic around the response.set_header calls so that when you enable
credentials (Access-Control-Allow-Credentials via
response.set_header(Header::new("Access-Control-Allow-Credentials", "true"))),
you do not set a wildcard origin; instead read the incoming request Origin
header and echo that value into Access-Control-Allow-Origin (or set a specific
allowed origin string) using the same
response.set_header(Header::new("Access-Control-Allow-Origin", origin_value)),
or else remove/disable the credentials header; change the code around the
response.set_header calls to implement this conditional behavior.
- Around line 809-811: Replace the current fallback behavior on
std::env::var("BLOCKFROST_PROJECT_ID") with a fail-fast check: call
std::env::var("BLOCKFROST_PROJECT_ID") and .expect(...) with a clear error
message so the process exits at startup if the variable is missing, and assign
the resulting String to a local binding (e.g., blockfrost_project_id) before
calling .as_str() when passing it to the client to avoid taking &str from a
temporary; update the code around the existing std::env::var(...) usage
accordingly.
- Around line 335-343: fetch_utxos_by_policy currently calls
api.addresses_utxos_asset(..., Pagination::all()) which triggers concurrent
fetching of all pages and can exceed Blockfrost rate limits; replace
Pagination::all() with a bounded paginator (e.g., Pagination::new(Order::Asc, 1,
100)) and change fetch_utxos_by_policy to perform sequential page iteration
(looping over pages and awaiting each request) or integrate an application-level
rate limiter (~10 req/s) to throttle calls; apply the same replacement for other
call sites that use Pagination::all() so all API paginated requests (the same
addresses_utxos_asset usage patterns) use bounded, sequential pagination or the
shared rate limiter.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 460023eb-18e1-4cee-82f1-f268cc2d37b8
⛔ Files ignored due to path filters (1)
Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (2)
backend/Cargo.toml, backend/src/main.rs
```toml
hex = "0.4.3"
blockfrost = "1.2.1"
blockfrost-openapi = "0.1.83"
pallas = { version = "1.0.0-alpha.4", features = ["hardano", "phase2", "unstable"] }
```
🧩 Analysis chain
🌐 Web query:
pallas Rust crate latest stable release version 2025 2026
💡 Result:
As of March 4, 2026, the latest stable (non–pre-release) version of the pallas Rust crate is 0.34.0, released 2025-12-16. [1]
Notes:
- There are 1.0.0-alpha.* releases (latest shown: 1.0.0-alpha.3, 2025-11-18), but those are pre-releases, not stable. [1]
Source(s):
- [1] docs.rs “pallas 0.34.0” (version list + release dates) (docs.rs)
Citations:
🌐 Web query:
pallas Rust crate hardano phase2 unstable features stability
💡 Result:
- Pallas (Rust) is a workspace of crates that re-implement Cardano/Ouroboros building blocks in native Rust; the top-level pallas crate re-exports the subcrates and uses Cargo feature flags to tailor what you pull in. [1][2]
- pallas-hardano ("hardano") is the interoperability crate for Haskell cardano-node artifacts / implementation-specific formats. [1][3]
- "phase2" (in this context) refers to the optional phase2 Cargo feature on pallas-validate (the transaction-validation crate). It is disabled by default and is wired through the pallas-uplc dependency (i.e., Plutus/UPLC-related validation). [4]
- Stability / "unstable features": here, "unstable" primarily means opt-in Cargo features and pre-release crates (e.g., pallas-validate has 1.0.0-alpha.* releases), so APIs/behavior behind those flags should be treated as not stability-guaranteed and pinned carefully (lockfile / exact versions). [3][4]
Sources: [1] Cardano Developer Portal (Pallas Rust SDK page) • [2] txpipe/pallas GitHub README • [3] pallas-hardano docs.rs • [4] pallas-validate docs.rs feature flags page
🏁 Script executed:
# Check if pallas is used in the backend code and how
find . -name "*.rs" -type f | head -20
Repository: txpipe/asteria
Length of output: 342
🏁 Script executed:
# Search for pallas usage patterns in Rust files
rg "pallas" --type rust -A 3 -B 1
Repository: txpipe/asteria
Length of output: 396
🏁 Script executed:
# Check the full Cargo.toml context
cat -n backend/Cargo.toml | head -30
Repository: txpipe/asteria
Length of output: 665
🏁 Script executed:
# Check for any comments or documentation about pallas choice
rg "pallas|alpha|1.0.0" backend/ -A 2 -B 2
Repository: txpipe/asteria
Length of output: 1112
🏁 Script executed:
# Look for any migration or changelog documentation
find . -name "*.md" -type f | xargs grep -l "pallas\|1.0.0\|migration" 2>/dev/null | head -5Repository: txpipe/asteria
Length of output: 40
Replace pallas 1.0.0-alpha.4 with stable 0.34.0 or document the alpha requirement.
A stable version (0.34.0, released December 2025) is available. Using an alpha pre-release with unstable features in a core data path introduces unnecessary upgrade and runtime stability risk. If 1.0.0-alpha.4 is required for specific functionality not available in 0.34.0, document this explicitly. Otherwise, migrate to the stable release.
If the alpha version must remain, isolate pallas usage behind a small adapter module to contain future breakage, and pin exact compatibility in documentation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/Cargo.toml` at line 16, The Cargo.toml currently pins pallas =
"1.0.0-alpha.4" with unstable features; change the dependency to the stable
release pallas = "0.34.0" (remove alpha and unstable features) unless you
explicitly need alpha-only functionality—if so, add a short note in the project
README and a new module (e.g., pallas_adapter or pallas_shim) that isolates all
uses of pallas behind a narrow API so future breakage is contained and document
the exact alpha requirement and compatibility guarantees; update Cargo.toml and
note the adapter module name (pallas_adapter) and README entry accordingly.
```rust
response.set_header(Header::new("Access-Control-Allow-Origin", "*"));
response.set_header(Header::new("Access-Control-Allow-Credentials", "true"));
```
Invalid CORS policy: wildcard origin with credentials.
At Line 44 and Line 45, Access-Control-Allow-Origin: * cannot be combined with Access-Control-Allow-Credentials: true. Browsers will reject/ignore credentialed cross-origin behavior.
Suggested fix
```diff
- response.set_header(Header::new("Access-Control-Allow-Origin", "*"));
- response.set_header(Header::new("Access-Control-Allow-Credentials", "true"));
+ if let Some(origin) = request.headers().get_one("Origin") {
+     response.set_header(Header::new("Access-Control-Allow-Origin", origin));
+     response.set_header(Header::new("Vary", "Origin"));
+     response.set_header(Header::new("Access-Control-Allow-Credentials", "true"));
+ }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
if let Some(origin) = request.headers().get_one("Origin") {
    response.set_header(Header::new("Access-Control-Allow-Origin", origin));
    response.set_header(Header::new("Vary", "Origin"));
    response.set_header(Header::new("Access-Control-Allow-Credentials", "true"));
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/src/main.rs` around lines 44 - 45, The current CORS headers use
Access-Control-Allow-Origin: * together with Access-Control-Allow-Credentials:
true which is invalid; update the logic around the response.set_header calls so
that when you enable credentials (Access-Control-Allow-Credentials via
response.set_header(Header::new("Access-Control-Allow-Credentials", "true"))),
you do not set a wildcard origin; instead read the incoming request Origin
header and echo that value into Access-Control-Allow-Origin (or set a specific
allowed origin string) using the same
response.set_header(Header::new("Access-Control-Allow-Origin", origin_value)),
or else remove/disable the credentials header; change the code around the
response.set_header calls to implement this conditional behavior.
```rust
async fn fetch_utxos_by_policy(
    api: &BlockfrostAPI,
    address: &str,
    policy_id: &str,
) -> Result<Vec<AddressUtxoContentInner>, Error> {
    api.addresses_utxos_asset(address, policy_id, Pagination::all())
        .await
        .map_err(|e| Error::new(e.to_string()))
}
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify all unbounded pagination callsites and their resolver fan-out paths
rg -nP --type rust -C3 'Pagination::all\(|addresses_utxos_asset\('
Repository: txpipe/asteria
Length of output: 450
🏁 Script executed:
#!/bin/bash
# Find fetch_utxos_by_policy call sites to understand usage patterns
rg -nP --type rust -C2 'fetch_utxos_by_policy\s*\('
Repository: txpipe/asteria
Length of output: 2788
🏁 Script executed:
#!/bin/bash
# Check if there's any rate limiting or memory safeguards around pagination
rg -nP --type rust -C5 'Pagination\s*::'
Repository: txpipe/asteria
Length of output: 622
🏁 Script executed:
#!/bin/bash
# Look at the complete file to understand context
wc -l backend/src/main.rs
head -350 backend/src/main.rs | tail -40
Repository: txpipe/asteria
Length of output: 1243
🏁 Script executed:
#!/bin/bash
# Find BlockfrostAPI definition or dependency
fd -e rs -e toml | head -20
rg -l 'BlockfrostAPI' --type rust
Repository: txpipe/asteria
Length of output: 495
🏁 Script executed:
#!/bin/bash
# Check Cargo.toml for blockfrost dependency
cat Cargo.toml | grep -A 5 'blockfrost\|pagination'
Repository: txpipe/asteria
Length of output: 40
🏁 Script executed:
#!/bin/bash
# Look for any rate-limiting, timeout, or pagination constants
rg -i 'rate.limit|max.result|page.size|timeout' --type rust
Repository: txpipe/asteria
Length of output: 40
🏁 Script executed:
#!/bin/bash
# Examine the broader loop context around line 600-605 and earlier
sed -n '595,610p' backend/src/main.rs
Repository: txpipe/asteria
Length of output: 688
🏁 Script executed:
#!/bin/bash
# Check the Cargo.toml for blockfrost dependency details
cat backend/Cargo.toml
Repository: txpipe/asteria
Length of output: 539
🏁 Script executed:
#!/bin/bash
# Look for any async/concurrent request limits or semaphores
rg -n 'semaphore|Semaphore|concurrent|pool|limit' --type rust backend/src/main.rs | head -20
Repository: txpipe/asteria
Length of output: 40
🏁 Script executed:
#!/bin/bash
# Check context around line 554-565 to see the full call pattern
sed -n '550,575p' backend/src/main.rs
Repository: txpipe/asteria
Length of output: 1072
🏁 Script executed:
#!/bin/bash
# Look for the complete token loop context
sed -n '595,620p' backend/src/main.rs
Repository: txpipe/asteria
Length of output: 1017
🏁 Script executed:
#!/bin/bash
# Let's understand the blockfrost crate's Pagination implementation
rg -n 'Pagination' --type rust backend/src/main.rs
Repository: txpipe/asteria
Length of output: 197
🏁 Script executed:
#!/bin/bash
# Check if there are any imports or usage patterns for the blockfrost crate
grep -n 'use.*blockfrost\|use.*Pagination' backend/src/main.rs | head -20
Repository: txpipe/asteria
Length of output: 305
🌐 Web query:
blockfrost rust crate Pagination::all() API rate limit
💡 Result:
- In blockfrost (Rust SDK), Pagination::all() just sets fetch_all = true (it does not do any throttling by itself). [1]
- When "fetch all pages" behavior is used, the crate's fetch_all_pages() helper generates a batch of page URLs and fetches them concurrently via try_join_all(...) (so a larger batch_size can create request bursts that trigger 429). [2]
- Blockfrost's documented rate limit is 10 requests/second per IP, with an allowed burst of 500 requests that "cools off" at 10 req/s (i.e., after a full burst you need ~50s to fully recover). [3]
- The crate retries 429 Too Many Requests (and a few other status codes) in send_request_unprocessed(), but it uses a fixed delay from RetrySettings.delay and does not appear to honor a Retry-After header. [2][3]
Practical implication: if you combine Pagination::all() with a non-trivial batch_size (or other concurrency in your app), you can easily exceed 10 rps and hit 429; mitigate by lowering concurrency (e.g., batch_size = 1), adding your own rate limiter (~10 rps), and/or increasing RetrySettings.delay. [2][3]
Sources
[1] https://docs.rs/blockfrost/latest/src/blockfrost/pagination.rs.html
[2] https://docs.rs/crate/blockfrost/latest/source/src/request.rs
[3] https://docs.rs/blockfrost-openapi/latest/src/blockfrost_openapi/models/script.rs.html
🌐 Web query:
blockfrost-rs addresses_utxos_asset pagination documentation
💡 Result:
In blockfrost-rs, addresses_utxos_asset is explicitly a paginated endpoint:
```rust
pub async fn addresses_utxos_asset(
    &self,
    address: &str,
    asset: &str,
    pagination: Pagination,
) -> BlockfrostResult<Vec<AddressUtxoContentInner>>
```
(docs.rs)
How pagination is represented (blockfrost-rs)
Pagination has these fields:
- page: usize
- count: usize
- order: Order (Asc/Desc)
- fetch_all: bool
and helpers:
- Pagination::new(order, page, count)
- Pagination::all() (sets fetch_all: true)
(docs.rs)
The SDK turns that into query params ?page=...&count=...&order=.... (docs.rs)
Blockfrost endpoint parameters (server-side)
The underlying HTTP endpoint is:
GET /addresses/{address}/utxos/{asset}?count=...&page=...&order=... (postman.com)
Example usage
```rust
use blockfrost::{BlockfrostAPI, Pagination, Order};

let utxos_page_1 = api
    .addresses_utxos_asset(address, asset, Pagination::new(Order::Asc, 1, 100))
    .await?;
```
To request "all pages" via the crate's flag:
```rust
let all_utxos = api
    .addresses_utxos_asset(address, asset, Pagination::all())
    .await?;
```
(docs.rs)
Citations:
- 1: https://docs.rs/blockfrost/latest/src/blockfrost/api/endpoints/addresses.rs.html
- 2: https://docs.rs/blockfrost/latest/src/blockfrost/pagination.rs.html
- 3: https://docs.rs/blockfrost/latest/src/blockfrost/url.rs.html?utm_source=openai
- 4: https://www.postman.com/blockfrost-io/workspace/my-workspace/request/18202599-e3e4910a-afc4-4696-b7c3-66c502460c6e?utm_source=openai
- 5: https://docs.rs/blockfrost/latest/src/blockfrost/pagination.rs.html
Pagination::all() in loop context creates unbounded concurrent requests that can exceed rate limits.
At line 602, fetch_utxos_by_policy is called in a loop over tokens, and Pagination::all() internally fetches all pages concurrently. This can generate many simultaneous API requests, easily exceeding Blockfrost's 10 req/s rate limit and triggering 429 Too Many Requests responses. Additional calls at lines 554, 565, 583, 638, 689, and 737 compound this risk. Replace Pagination::all() with bounded pagination (e.g., Pagination::new(Order::Asc, 1, 100)) and either add explicit sequential fetching or implement application-level rate limiting (~10 rps).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/src/main.rs` around lines 335 - 343, fetch_utxos_by_policy currently
calls api.addresses_utxos_asset(..., Pagination::all()) which triggers
concurrent fetching of all pages and can exceed Blockfrost rate limits; replace
Pagination::all() with a bounded paginator (e.g., Pagination::new(Order::Asc, 1,
100)) and change fetch_utxos_by_policy to perform sequential page iteration
(looping over pages and awaiting each request) or integrate an application-level
rate limiter (~10 req/s) to throttle calls; apply the same replacement for other
call sites that use Pagination::all() so all API paginated requests (the same
addresses_utxos_asset usage patterns) use bounded, sequential pagination or the
shared rate limiter.
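The sequential page loop suggested in this prompt can be sketched as follows. This is a simplified synchronous sketch: the `fetch_page` closure stands in for a single bounded API call such as `addresses_utxos_asset(address, asset, Pagination::new(Order::Asc, page, 100))`, and the stop-on-short-page condition and `PAGE_SIZE` constant are assumptions, not blockfrost-rs API.

```rust
// One bounded request per iteration; a page shorter than PAGE_SIZE
// signals the end of the data, so no unbounded concurrent fan-out occurs.
const PAGE_SIZE: usize = 100;

fn fetch_all_sequential<T, E>(
    mut fetch_page: impl FnMut(usize) -> Result<Vec<T>, E>,
) -> Result<Vec<T>, E> {
    let mut all = Vec::new();
    let mut page = 1;
    loop {
        let batch = fetch_page(page)?; // await one bounded request at a time
        let last = batch.len() < PAGE_SIZE; // short page => no more pages
        all.extend(batch);
        if last {
            break;
        }
        page += 1;
    }
    Ok(all)
}

fn main() {
    // Simulate 250 UTXOs served in pages of 100 (pages 1-2 full, page 3 short).
    let data: Vec<u32> = (0..250).collect();
    let fetched = fetch_all_sequential::<u32, ()>(|page| {
        Ok(data
            .iter()
            .skip((page - 1) * PAGE_SIZE)
            .take(PAGE_SIZE)
            .copied()
            .collect())
    })
    .unwrap();
    assert_eq!(fetched, data);
    println!("fetched {} items sequentially", fetched.len());
}
```

In the async backend the loop body would `.await` each call, which naturally keeps the request rate at one in-flight request per loop; a shared rate limiter is only needed if several resolvers paginate concurrently.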
```rust
if let Some(tokens) = tokens {
    for token in tokens {
        let token_utxos =
            fetch_utxos_by_policy(api, &pellet_address, &token.policy_id).await?;
```
User-controlled token list can trigger unbounded external call fan-out.
At Line 599-Line 603, each token causes a separate network call. Add a max token count (and ideally dedupe by policy_id) to prevent query-based API amplification.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/src/main.rs` around lines 599 - 603, The token iteration performs an
unbounded external call per token; dedupe tokens by policy_id and cap the number
of outbound calls to prevent amplification. In the block that uses tokens and
calls fetch_utxos_by_policy (refer to the tokens variable and
fetch_utxos_by_policy call), first collect unique policy_ids (e.g., via a
HashSet) then limit to a safe MAX_TOKENS (choose a constant like MAX_TOKENS =
20), and only iterate over the deduped, truncated list; if the original list
exceeds MAX_TOKENS, either log a warning or return an error indicating the
request is too large. Ensure the deduping and truncation happen before any
network calls so fetch_utxos_by_policy is only invoked for the bounded set.
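The dedupe-and-cap step described above could look like this sketch; `MAX_TOKENS`, `bounded_policy_ids`, and the error message are illustrative names chosen here, not existing code in the backend.

```rust
use std::collections::HashSet;

const MAX_TOKENS: usize = 20; // illustrative cap, per the review suggestion

/// Dedupe policy IDs and enforce a cap BEFORE any network calls, so
/// `fetch_utxos_by_policy` is only invoked for a bounded set.
fn bounded_policy_ids(tokens: &[&str]) -> Result<Vec<String>, String> {
    let mut seen = HashSet::new();
    let unique: Vec<String> = tokens
        .iter()
        .filter(|t| seen.insert(**t)) // keep first occurrence of each policy_id
        .map(|t| t.to_string())
        .collect();
    if unique.len() > MAX_TOKENS {
        return Err(format!(
            "too many distinct policy ids: {} (max {})",
            unique.len(),
            MAX_TOKENS
        ));
    }
    Ok(unique)
}

fn main() {
    let tokens = ["aaa", "bbb", "aaa", "ccc"];
    let bounded = bounded_policy_ids(&tokens).unwrap();
    assert_eq!(bounded, vec!["aaa", "bbb", "ccc"]);
    println!("{} unique policy ids", bounded.len());
}
```

The resolver would then loop over `bounded_policy_ids(...)?` instead of the raw token list, turning a query-amplification vector into at most `MAX_TOKENS` outbound calls.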
```rust
std::env::var("BLOCKFROST_PROJECT_ID")
    .unwrap_or("asteria-backend".to_string())
    .as_str(),
```
Fail fast if BLOCKFROST_PROJECT_ID is missing.
At Line 809-Line 811, defaulting to "asteria-backend" turns misconfiguration into runtime API failures. Prefer startup-time failure with a clear message.
Suggested fix
```diff
- let client = BlockfrostAPI::new(
-     std::env::var("BLOCKFROST_PROJECT_ID")
-         .unwrap_or("asteria-backend".to_string())
-         .as_str(),
-     settings,
- );
+ let project_id = std::env::var("BLOCKFROST_PROJECT_ID")
+     .expect("BLOCKFROST_PROJECT_ID must be set in the environment");
+ let client = BlockfrostAPI::new(project_id.as_str(), settings);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
let project_id = std::env::var("BLOCKFROST_PROJECT_ID")
    .expect("BLOCKFROST_PROJECT_ID must be set in the environment");
let client = BlockfrostAPI::new(project_id.as_str(), settings);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/src/main.rs` around lines 809 - 811, Replace the current fallback
behavior on std::env::var("BLOCKFROST_PROJECT_ID") with a fail-fast check: call
std::env::var("BLOCKFROST_PROJECT_ID") and .expect(...) with a clear error
message so the process exits at startup if the variable is missing, and assign
the resulting String to a local binding (e.g., blockfrost_project_id) before
calling .as_str() when passing it to the client to avoid taking &str from a
temporary; update the code around the existing std::env::var(...) usage
accordingly.
Summary by CodeRabbit