Conversation
Caution: Review failed — the pull request is closed. Recent review info: default configuration, review profile CHILL, plan Pro. 15 files selected for processing.
📝 Walkthrough
Adds CI workflow and Justfile; introduces admin API key enforcement and a `require_admin` helper; many backend formatting and small refactors (including `FetchResult` boxing and select handler signature changes); indexer concurrency and startup adjustments; frontend type-safety, error-handling, and UI rendering refinements.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: 1 passed, 2 failed (1 warning, 1 inconclusive).
Summary of Changes
Hello @tac0turtle, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request primarily focuses on developer experience and code quality. It introduces a Justfile to streamline common development and CI operations, making it easier to manage both frontend and backend tasks. Significant effort also went into reformatting and cleaning up the Rust backend code for better readability and adherence to coding standards. Minor frontend adjustments were made to improve type safety and remove technical debt.
Code Review
This pull request introduces a Justfile for CI and development tasks, applies extensive code formatting to the Rust backend via cargo fmt, and brings minor improvements to the frontend, including enhanced type definitions. However, two significant security vulnerabilities were identified: a path traversal flaw in the contract verification logic that allows execution of arbitrary binaries, and a lack of authorization on sensitive state-changing endpoints for label management and proxy detection. These critical issues must be addressed before merging. While these security concerns are paramount, the PR otherwise positively contributes to the codebase's quality and maintainability, with a specific suggestion to further improve type safety in the frontend API client.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
frontend/src/pages/BlocksPage.tsx (1)
97-116: ⚠️ Potential issue | 🟡 Minor — Minor race: a cancelled RAF leaves orphaned timeouts for never-highlighted blocks.
If rapid block updates cause the RAF on line 101 to be cancelled before it fires (lines 98–100), the `setHighlightBlocks` call never executes. However, the blocks are already added to `seenBlocksRef` (line 106) and their removal timeouts are already scheduled (lines 107–115). This means those blocks silently skip their highlight animation and can never be retried. In practice this is unlikely to be noticeable (it requires two block batches arriving within a single animation frame), but you could fix it by moving `seenBlocksRef.add(n)` and the timeout scheduling inside the RAF callback alongside `setHighlightBlocks`.
Proposed fix: move side effects inside the RAF callback
```diff
 if (newlyAdded.length) {
   if (highlightRafRef.current !== null) {
     window.cancelAnimationFrame(highlightRafRef.current);
   }
   highlightRafRef.current = window.requestAnimationFrame(() => {
     setHighlightBlocks((prev) => new Set([...prev, ...newlyAdded]));
     highlightRafRef.current = null;
+    for (const n of newlyAdded) {
+      seenBlocksRef.current.add(n);
+      const t = window.setTimeout(() => {
+        setHighlightBlocks((prev) => {
+          const next = new Set(prev);
+          next.delete(n);
+          return next;
+        });
+        timeoutsRef.current.delete(n);
+      }, 1600);
+      timeoutsRef.current.set(n, t);
+    }
   });
-  for (const n of newlyAdded) {
-    seenBlocksRef.current.add(n);
-    const t = window.setTimeout(() => {
-      setHighlightBlocks((prev) => {
-        const next = new Set(prev);
-        next.delete(n);
-        return next;
-      });
-      timeoutsRef.current.delete(n);
-    }, 1600);
-    timeoutsRef.current.set(n, t);
-  }
 }
```
Note: if you move the `seenBlocksRef.add` inside the RAF, you'll also need to guard against the same block number appearing in two consecutive `newlyAdded` arrays before the first RAF fires. A quick `seenBlocksRef.current.add(n)` outside the RAF (keeping only the timeouts inside) would be a simpler compromise.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/pages/BlocksPage.tsx` around lines 97-116: the RAF cancellation can leave orphaned timeouts because `seenBlocksRef.add` and the timeout scheduling occur outside the `requestAnimationFrame` callback. Move the side effects that mark blocks as seen and schedule their removal timeouts into the `highlightRafRef.current` callback that calls `setHighlightBlocks`, so those actions only run when the RAF actually fires. Concretely, update the block where `newlyAdded` is processed so the RAF callback performs `seenBlocksRef.current.add(n)` and `timeoutsRef.current.set(n, timeout)` (plus the timeout's `setHighlightBlocks` removal) for each `n`; or, as a simpler compromise, keep a quick `seenBlocksRef.current.add(n)` outside to dedupe duplicates while moving only the timeout creation inside the RAF.

backend/crates/atlas-api/src/handlers/transactions.rs (1)
40-59: ⚠️ Potential issue | 🟡 Minor — Normalize hash to lowercase for consistent lookups.
The current logic only adds the `0x` prefix; mixed-case input can miss rows if hashes are stored lowercase. Consider reusing `normalize_hash` for consistency with other handlers.
🔧 Suggested fix
```diff
-    let hash = if hash.starts_with("0x") {
-        hash
-    } else {
-        format!("0x{}", hash)
-    };
+    let hash = normalize_hash(&hash);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-api/src/handlers/transactions.rs` around lines 40-59: normalize the incoming transaction hash to the canonical form used elsewhere by calling the existing `normalize_hash` helper instead of only adding `0x`. Replace the inline normalization in `get_transaction` with `let hash = normalize_hash(&hash);` so the hash gets the `0x` prefix and is lowercased, then bind the normalized hash when querying the transactions table; import or reference `normalize_hash` consistent with the other handlers.

backend/crates/atlas-api/src/handlers/nfts.rs (1)
327-338: ⚠️ Potential issue | 🟡 Minor — Handle the `ipfs://ipfs/` prefix to avoid a double `ipfs/` path.
Some NFTs use `ipfs://ipfs/<cid>` URIs. The current logic would produce `https://ipfs.io/ipfs/ipfs/<cid>`. Trim the optional `ipfs/` segment after the scheme.
🔧 Suggested fix
```diff
-    if let Some(stripped) = uri.strip_prefix("ipfs://") {
-        // Convert ipfs://QmXxx... to https://ipfs.io/ipfs/QmXxx...
-        format!("https://ipfs.io/ipfs/{}", stripped)
+    if let Some(stripped) = uri.strip_prefix("ipfs://") {
+        let stripped = stripped.strip_prefix("ipfs/").unwrap_or(stripped);
+        // Convert ipfs://QmXxx... or ipfs://ipfs/QmXxx... to https://ipfs.io/ipfs/QmXxx...
+        format!("https://ipfs.io/ipfs/{}", stripped)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-api/src/handlers/nfts.rs` around lines 327-338: the `resolve_ipfs_url` function converts `ipfs://` URIs by stripping the `ipfs://` prefix but does not handle the case where the remaining string begins with `ipfs/`, producing a double `ipfs/ipfs` path. After `uri.strip_prefix("ipfs://")` returns `stripped`, check whether `stripped` starts with `ipfs/` and, if so, drop that leading segment before formatting `https://ipfs.io/ipfs/{}`, keeping the existing `ar://` handling intact.

backend/crates/atlas-api/src/handlers/mod.rs (1)
21-57: ⚠️ Potential issue | 🟡 Minor — The `format!` query construction is unsafe for untrusted input, but currently safe in practice.
The function uses `format!("SELECT COUNT(*) FROM {}", table_name)`, which is vulnerable to SQL injection if `table_name` is user-controlled. However, the only call site passes the hardcoded constant `"transactions"`, so there is no immediate risk. Since table names cannot be parameterized in SQL, add explicit documentation or validation to ensure `table_name` remains trusted — e.g., a whitelist of allowed tables or a comment clarifying this function is internal-only.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-api/src/handlers/mod.rs` around lines 21-57: `get_table_count` currently builds an SQL string with `format!("SELECT COUNT(*) FROM {}", table_name)`, which is unsafe for untrusted input. Either (a) implement a whitelist check of allowed table names before calling `format!`, or (b) add a clear doc comment on `get_table_count` stating it is internal-only and that `table_name` must be a trusted constant — ideally asserting against a constant list (e.g., only `"transactions"`) to prevent SQL injection at runtime. Update references to the function accordingly.

backend/crates/atlas-api/src/handlers/etherscan.rs (1)
504-512: ⚠️ Potential issue | 🟡 Minor — Confirmations calculation can go negative due to a TOCTOU race.
`current_block.0` is fetched in a separate DB query before iterating transactions. If a new block is indexed between the two queries — or if a stale transaction row exceeds the MAX block value — `current_block.0 - tx.block_number` produces a negative `i64`. In release mode this silently serialises as a negative confirmations string; in debug mode it panics on overflow. The same issue is present in `get_token_tx_list` at line 629.
🛡️ Proposed fix (both sites)
```diff
-        let confirmations = current_block.0 - tx.block_number;
+        let confirmations = (current_block.0 - tx.block_number).max(0);
```
```diff
-        let confirmations = current_block.0 - transfer.block_number;
+        let confirmations = (current_block.0 - transfer.block_number).max(0);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-api/src/handlers/etherscan.rs` around lines 504 - 512, The current confirmations calculation uses `current_block.0 - tx.block_number` which can go negative due to TOCTOU races; update the mapping in the closure that builds `EtherscanTransaction` (the place where `let confirmations = current_block.0 - tx.block_number;` is computed) to use saturating subtraction to never produce negative values (e.g., `current_block.0.saturating_sub(tx.block_number)` or otherwise clamp to zero) and apply the same change in the analogous computation inside `get_token_tx_list` to ensure confirmations are always >= 0.
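The clamped subtraction above can be exercised in isolation. A standalone sketch (`confirmations_for` is a hypothetical helper, not code from this PR):

```rust
/// Hypothetical helper mirroring the proposed fix: clamp at zero so a stale
/// transaction row (or a race with the indexer) can never yield a negative count.
fn confirmations_for(current_block: i64, tx_block: i64) -> i64 {
    (current_block - tx_block).max(0)
}

fn main() {
    assert_eq!(confirmations_for(100, 90), 10);
    // TOCTOU case: the transaction's block is ahead of the cached tip.
    assert_eq!(confirmations_for(100, 105), 0);
    println!("ok");
}
```

Note that on unsigned block numbers `saturating_sub` clamps at zero, but on `i64` it saturates at `i64::MIN`, so the explicit `.max(0)` is the safer form for signed columns.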
♻️ Duplicate comments (1)
frontend/src/pages/AddressesPage.tsx (1)
48-63: Same RAF race condition as BlocksPage — `seenRef.add` and timeout scheduling run before the RAF fires.
Same issue flagged in `BlocksPage.tsx` lines 97–116: if the RAF is cancelled by a rapid subsequent update, `seenRef` already marks the addresses as seen and removal timeouts are already ticking, but highlights are never added to state.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/pages/AddressesPage.tsx` around lines 48 - 63, The RAF race causes seenRef.current.add and timersRef scheduling to run before the highlight actually gets applied (so if highlightRafRef is cancelled the addresses are marked seen but never highlighted); move the seenRef.current.add(...) loop and timersRef.current.set(...) scheduling inside the requestAnimationFrame callback that calls setHighlight (i.e., co-locate the for (const h of newOnes) { ... } logic into the highlightRafRef.current = requestAnimationFrame(() => { ... }) block), preserve the existing cancelAnimationFrame behavior with highlightRafRef, and ensure timersRef.current.delete(h) and timeout creation happen only after the highlight has been added so cancelled RAFs won’t leave stale seen/timers for functions/refs: highlightRafRef, setHighlight, seenRef, timersRef.
🧹 Nitpick comments (13)
frontend/src/api/transactions.ts (1)
40-47: Extract a shared response-normalization helper.
Both transfer fetchers duplicate the same “array or `{ data }`” logic, which is easy to drift over time. A small helper keeps this consistent and easier to maintain.
♻️ Example refactor
```diff
+function unwrapArray<T>(body: unknown): T[] {
+  if (Array.isArray(body)) return body as T[];
+  if (body && typeof body === 'object' && Array.isArray((body as { data?: unknown }).data)) {
+    return (body as { data: T[] }).data;
+  }
+  return [];
+}
+
 export async function getTxErc20Transfers(txHash: string): Promise<TxErc20Transfer[]> {
   const response = await client.get(`/transactions/${txHash}/erc20-transfers`);
-  const body = response.data as unknown;
-  if (Array.isArray(body)) return body as TxErc20Transfer[];
-  if (typeof body === 'object' && body !== null && 'data' in body) {
-    const data = (body as { data?: unknown }).data;
-    if (Array.isArray(data)) return data as TxErc20Transfer[];
-  }
-  return [];
+  return unwrapArray<TxErc20Transfer>(response.data);
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/api/transactions.ts` around lines 40-47: add a small response-normalization helper (e.g., `normalizeArrayResponse<T>(body: unknown): T[]`) that encapsulates the existing logic — if `Array.isArray(body)` return `body`; if it is an object whose `data` field is an array, return `data`; otherwise return `[]`. Keep the generic type so callers retain `TxErc20Transfer` typing, and use it from `getTxErc20Transfers` and the other transfer fetchers that duplicate this logic, so each call becomes `normalizeArrayResponse<TxErc20Transfer>(response.data)`.

README.md (1)
9-12: Consider specifying a minimum version for `just`.
`Bun 1.0+` and `Rust 1.75+` both have version floors, but `just` has none. A minimum version is useful if the Justfile uses features not available in older releases (e.g., the `set shell` directive requires just ≥ 1.0).
📝 Suggested fix
```diff
-- `just`
+- `just` 1.0+
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 9-12: the README lists `just` without a minimum version. Update the dependency list to specify one (e.g., `just` 1.0+, or whatever the project requires) so users know which Justfile features are supported: check the project's Justfile, pick the smallest `just` release that supports the features used (for example, the `set shell` directive requires just ≥ 1.0), then change the bullet to include that minimum version.

.github/workflows/ci.yml (3)
1-61: Add a `concurrency` group to cancel stale runs.
Without a concurrency group, every push to an open PR queues a new run while previous ones are still running, wasting CI minutes. A standard pattern:
🔧 Proposed addition (top-level, after `on:`)
```diff
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
+
 jobs:
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci.yml around lines 1 - 61, Add a top-level concurrency block (placed directly after the existing on: key) to cancel stale runs for this workflow; set concurrency.group to a stable identifier like "${{ github.workflow }}-${{ github.ref }}" (or "${{ github.head_ref || github.ref }}" for PRs) and concurrency.cancel-in-progress to true so jobs such as the backend and frontend workflows (job names "Backend (Rust)" and "Frontend (Bun)") will cancel previous runs when a new push to the same PR/branch occurs.
17-18: Pin third-party actions to commit SHAs to guard against supply-chain attacks.
All third-party actions (`actions/checkout`, `dtolnay/rust-toolchain`, `Swatinem/rust-cache`, `oven-sh/setup-bun`) currently use floating tag references (`@v4`, `@v2`, `@stable`). A tag can be silently force-pushed to a malicious commit.
Example for `actions/checkout@v4`:
```diff
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
```
Apply the same pattern to all four action references, using a tool such as `pinact` or `pin-github-action`.
Also applies to: 20-21, 25-26, 46-47, 49-50
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci.yml around lines 17 - 18, The workflow uses floating tags for third-party actions (actions/checkout, dtolnay/rust-toolchain, Swatinem/rust-cache, oven-sh/setup-bun); replace each tag reference (e.g., actions/checkout@v4) with the corresponding commit SHA to pin the action to an immutable ref—locate occurrences of those action strings in the workflow and update all instances (lines referenced around the checkout, rust-toolchain, rust-cache, and setup-bun steps) by running a pinning tool like pinact or pin-github-action or manually lookup the commit SHAs on the action repos and substitute the tag with the full SHA.
50-52: Pin `bun-version` to a specific version instead of `latest`.
Using `bun-version: latest` means CI will silently upgrade to any future Bun major release, which could introduce breaking changes in `--frozen-lockfile` semantics or build/lint output and cause spurious failures. The `setup-bun` action supports reading the version from `package.json`'s `packageManager` field or a `bun-version-file` as alternatives to a hardcoded version string.
🔧 Suggested options
Option A — semver range (matches README prerequisite):
```diff
-          bun-version: latest
+          bun-version: "1.x"
```
Option B — delegate to a `.bun-version` file checked into the repo:
```diff
-          bun-version: latest
+          bun-version-file: "frontend/.bun-version"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@.github/workflows/ci.yml` around lines 50-52: the CI step currently sets the setup-bun action with `bun-version: latest`, which can auto-upgrade Bun and cause flakes. Update that action invocation (the `uses: oven-sh/setup-bun@v2` step) to pin `bun-version` to a specific version or semver range, or change it to read the version from `package.json` via the `packageManager` field or from a checked-in `.bun-version` file using `bun-version-file` — replace the literal `latest` value to ensure reproducible CI.

frontend/src/hooks/useLatestBlockHeight.ts (1)
13-13: Optional: consider deprecating the unused parameter.
If `_windowBlocks` is no longer part of behavior, a follow-up to remove or document it would reduce API ambiguity.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/hooks/useLatestBlockHeight.ts` at line 13: the parameter `_windowBlocks` in the `useLatestBlockHeight` hook is unused and should be deprecated or removed to avoid API ambiguity. Either remove the `_windowBlocks` parameter entirely (and adjust any callers) or add a JSDoc `@deprecated` notice above the hook documenting that it is no longer used; then update any tests/call sites to stop passing it and run the build to ensure no references remain.

backend/crates/atlas-api/src/handlers/logs.rs (1)
201-215: `normalize_hash` and `normalize_address` are duplicated across multiple handler files.
logs.rs,transactions.rs,etherscan.rs,contracts.rs,labels.rs, andtokens.rs. Consider extracting them into a shared module (e.g., ahandlers/util.rsor intoatlas_common) to reduce duplication.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-api/src/handlers/logs.rs` around lines 201-215: extract the duplicated `normalize_hash` and `normalize_address` functions into a shared utility module and update each handler to call the shared helpers. Create a new module (e.g., `handlers::util` or `atlas_common::hex_utils`) containing `pub fn normalize_hash(hash: &str) -> String` and `pub fn normalize_address(address: &str) -> String` with the existing behavior, remove the duplicate implementations from `logs.rs`, `transactions.rs`, `etherscan.rs`, `contracts.rs`, `labels.rs`, and `tokens.rs`, replace them with imports of the new module, and run `cargo build` to verify no references to the removed local functions remain.

backend/crates/atlas-api/src/handlers/contracts.rs (2)
382-406: Consider adding a timeout to the solc child process.
The `compile_contract` function spawns `solc` and waits indefinitely via `wait_with_output()`. A malicious or pathological input could cause `solc` to hang, blocking the request thread forever. Wrapping the wait in `tokio::time::timeout` would prevent this.
⏱️ Proposed fix: add a timeout
```diff
+    use tokio::time::{timeout, Duration};
+
-    let output = child
-        .wait_with_output()
-        .await
-        .map_err(|e| AtlasError::Compilation(format!("Failed to wait for solc: {}", e)))?;
+    let output = timeout(Duration::from_secs(120), child.wait_with_output())
+        .await
+        .map_err(|_| AtlasError::Compilation("solc compilation timed out after 120s".to_string()))?
+        .map_err(|e| AtlasError::Compilation(format!("Failed to wait for solc: {}", e)))?;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-api/src/handlers/contracts.rs` around lines 382 - 406, The solc child spawned in compile_contract currently waits indefinitely via child.wait_with_output(); wrap that wait in tokio::time::timeout (e.g., tokio::time::Duration::from_secs configurable) and if the timeout elapses, attempt to terminate the child (call child.kill().await or child.kill() and then await child.wait() or wait_with_output()) and return an AtlasError::Compilation indicating a timeout; ensure you reference the Command/child created in this block and replace the direct child.wait_with_output().await call with a tokio::time::timeout wrapper that maps timeout errors into the new AtlasError path.
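The same bounded-wait idea can be sketched with only the standard library — a hypothetical `wait_with_deadline` polling `try_wait` instead of tokio's `timeout`, with `true`/`sleep` as stand-ins for `solc`:

```rust
use std::process::{Command, Stdio};
use std::time::{Duration, Instant};

/// Hypothetical bounded wait: poll `try_wait` until the child exits or the
/// deadline passes, then kill and reap it so no zombie process is left behind.
fn wait_with_deadline(cmd: &mut Command, limit: Duration) -> Result<i32, String> {
    let mut child = cmd
        .stdout(Stdio::null())
        .spawn()
        .map_err(|e| format!("spawn failed: {}", e))?;
    let deadline = Instant::now() + limit;
    loop {
        match child.try_wait().map_err(|e| e.to_string())? {
            Some(status) => return Ok(status.code().unwrap_or(-1)),
            None if Instant::now() >= deadline => {
                let _ = child.kill();
                let _ = child.wait(); // reap the killed process
                return Err("timed out".into());
            }
            None => std::thread::sleep(Duration::from_millis(10)),
        }
    }
}

fn main() {
    // "true" exits immediately; a hung compiler would take the Err branch instead.
    let code = wait_with_deadline(&mut Command::new("true"), Duration::from_secs(2)).unwrap();
    assert_eq!(code, 0);
    // "sleep 5" against a 100 ms budget demonstrates the timeout path.
    let slow = wait_with_deadline(Command::new("sleep").arg("5"), Duration::from_millis(100));
    assert!(slow.is_err());
    println!("ok");
}
```

In the async handler itself, `tokio::time::timeout` as in the suggested diff is the better fit; this sketch only illustrates the kill-and-reap discipline a timeout needs.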
131-135: `_constructor_args` is accepted but ignored in `bytecodes_match`.
The `bytecodes_match` function receives `constructor_args` but never uses it (prefixed with `_`). The call site at lines 131-135 passes it through. If constructor args should influence the comparison (e.g., stripping appended constructor args from deployed bytecode), this is a gap. Otherwise, consider removing the parameter to avoid confusion.
Also applies to: 528-551
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-api/src/handlers/contracts.rs` around lines 131-135: the call to `bytecodes_match(&deployed_stripped, &compiled_stripped, &request.constructor_args)` passes `constructor_args`, but the `bytecodes_match` implementation ignores it (the parameter is named `_constructor_args`). Either update `bytecodes_match` to actually use `constructor_args` when comparing (e.g., strip appended constructor args from `deployed_stripped` before comparison), or remove the parameter from both the signature and all call sites to avoid confusion; apply the chosen change consistently across the other occurrences mentioned.

backend/crates/atlas-api/src/handlers/etherscan.rs (4)
888-902: `normalize_address` and `normalize_hash` are duplicated with `transactions.rs`.
The `normalize_hash` implementation here (lines 896-902) is byte-for-byte identical to the one visible in the `transactions.rs` snippet; `normalize_address` (lines 888-894) likely has the same duplication. Both should live in a shared `utils` (or `helpers`) module and be imported where needed.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-api/src/handlers/etherscan.rs` around lines 888 - 902, Extract the duplicated functions normalize_address and normalize_hash into a shared utility module (e.g., a utils or helpers module) and export them; then remove the local definitions in this file and in transactions.rs and replace them with imports (use crate::utils::normalize_address; use crate::utils::normalize_hash; or equivalent). Ensure the utility functions keep the same signatures (&str -> String) and behavior, update any module paths or visibility (pub) so both etherscan.rs and transactions.rs compile, and run cargo build/tests to verify no references remain to the removed local functions.
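A minimal sketch of what the shared helpers could look like, assuming the `&str -> String` shape described above (the real signatures and canonical form in this repo may differ):

```rust
/// Canonical form assumed in this sketch: 0x-prefixed, lowercased hex.
fn normalize_hash(hash: &str) -> String {
    let h = hash.strip_prefix("0x").unwrap_or(hash);
    format!("0x{}", h.to_lowercase())
}

/// Addresses follow the same convention in this sketch, so they can delegate.
fn normalize_address(address: &str) -> String {
    normalize_hash(address)
}

fn main() {
    assert_eq!(normalize_hash("0xABCD"), "0xabcd");
    assert_eq!(normalize_hash("abCD"), "0xabcd");
    assert_eq!(normalize_address("0xDeAdBeEf"), "0xdeadbeef");
    println!("ok");
}
```

Centralising the pair this way also gives one obvious place to add unit tests for the mixed-case and missing-prefix inputs flagged in the `transactions.rs` comment above.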
198-221: Extract business logic from `contracts::verify_contract` instead of calling the axum handler directly.
Manually constructing `axum::extract::State(state)` and `Json(request)` to invoke a handler as a plain function is fragile: any future extractor added to `contracts::verify_contract`'s signature (e.g., an `Extension` or `TypedHeader`) will silently break this call site without a compiler error at the definition. The right pattern is to move the core logic to a standalone `async fn verify_contract_impl(state, request) -> ...` that both the axum handler and `verify_source_code_etherscan` call.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-api/src/handlers/etherscan.rs` around lines 198 - 221, The call site should not construct axum extractors to invoke the handler directly; instead extract the core business logic from contracts::verify_contract into a new standalone async function (e.g., verify_contract_impl(state: SharedStateType, request: VerifyRequestType) -> Result<VerifyResponseType, ErrorType>) and have the axum handler contracts::verify_contract simply call that impl; update this etherscan handler to call verify_contract_impl with the actual state inner type and the deserialized request rather than axum::extract::State(state) and Json(request), and adjust return handling to match the impl's Result/response types.
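The extraction pattern, reduced to a framework-free sketch (all types here are stand-ins, not the project's real `State`/`Json` wrappers):

```rust
struct AppState {
    rpc_url: String,
}

struct VerifyRequest {
    address: String,
}

/// Owns the business logic; callable from anywhere without framework extractors.
fn verify_contract_impl(state: &AppState, req: &VerifyRequest) -> Result<String, String> {
    if req.address.is_empty() {
        return Err("missing address".into());
    }
    Ok(format!("verified {} via {}", req.address, state.rpc_url))
}

/// The framework handler stays a thin wrapper that only unwraps its extractors.
/// Adding a new extractor later changes this wrapper, not the other call sites.
fn verify_contract_handler(state: &AppState, req: VerifyRequest) -> Result<String, String> {
    verify_contract_impl(state, &req)
}

fn main() {
    let state = AppState { rpc_url: "http://localhost:8545".into() };
    let res = verify_contract_handler(&state, VerifyRequest { address: "0xabc".into() });
    assert_eq!(res.unwrap(), "verified 0xabc via http://localhost:8545");
    println!("ok");
}
```

In the real code both functions would be `async` and the wrapper's parameters would be axum extractors; the shape of the delegation is the same.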
861-868: `hash` is fetched from the DB but never used.
The query `SELECT number, hash, timestamp FROM blocks` retrieves the hash column, yet it is immediately shadowed as `_hash` in the destructuring pattern and not referenced anywhere in the response. Drop it from the SELECT to avoid the unnecessary data transfer.
♻️ Proposed fix
```diff
-    let block: Option<(i64, String, i64)> =
-        sqlx::query_as("SELECT number, hash, timestamp FROM blocks WHERE number = $1")
+    let block: Option<(i64, i64)> =
+        sqlx::query_as("SELECT number, timestamp FROM blocks WHERE number = $1")
             .bind(block_number)
             .fetch_optional(&state.pool)
             .await?;

     match block {
-        Some((number, _hash, timestamp)) => {
+        Some((number, timestamp)) => {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-api/src/handlers/etherscan.rs` around lines 861 - 868, The SELECT currently fetches (number, hash, timestamp) into an Option<(i64, String, i64)> but the hash is unused (shadowed as _hash); remove the hash column from the SQL, change the query_as tuple type to Option<(i64, i64)> (or Option<(i64, TimestampType)> matching timestamp's Rust type), and update the match destructuring from Some((number, _hash, timestamp)) to Some((number, timestamp)) so only number and timestamp are selected and used.
312-317: Repeated provider construction — extract a helper.
`ProviderBuilder::new().on_http(state.rpc_url.parse()...)` appears verbatim in three handler functions (`handle_proxy_module`, `get_balance`, `get_balance_multi`). Extract it to a small helper to reduce duplication and make future transport changes (e.g., adding a timeout or retry layer) a one-line edit.
♻️ Suggested helper
```rust
fn build_provider(rpc_url: &str) -> Result<impl Provider, AtlasError> {
    let url = rpc_url
        .parse()
        .map_err(|e| AtlasError::Config(format!("Invalid RPC URL: {}", e)))?;
    Ok(ProviderBuilder::new().on_http(url))
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-api/src/handlers/etherscan.rs` around lines 312 - 317, The three handlers (handle_proxy_module, get_balance, get_balance_multi) duplicate ProviderBuilder::new().on_http(state.rpc_url.parse()...) — extract a helper (e.g., build_provider) that takes &str (rpc_url), parses it returning Result<ProviderBuilder or impl Provider, AtlasError> and encapsulates the parse error mapping to AtlasError::Config; then replace the three inline constructions with a call to build_provider(&state.rpc_url) so future transport layers (timeout/retry) can be added in one place.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/crates/atlas-api/src/handlers/etherscan.rs`:
- Around lines 70-73: The `EtherscanQuery` struct's `_apikey` field is missing the serde rename, so the query parameter `apikey` won't deserialize into it. Add `#[serde(rename = "apikey")]` on the `_apikey` field (similar to the existing renames on `_startblock`/`_endblock`) so incoming `apikey=...` query params populate the `_apikey` `Option<String>`.
In `@backend/crates/atlas-api/src/handlers/tokens.rs`:
- Around lines 109-113: The existence check using `sqlx::query_as("SELECT COUNT(*) ...").fetch_one(...)` is ineffective because COUNT always returns a row. Instead, query for the actual contract row and use `fetch_optional` to detect absence: replace the COUNT query with a SELECT of a real column (e.g., `id` or `address`) from `erc20_contracts` bound to `address`, call `fetch_optional(&state.pool).await`, map `None` to `AtlasError::NotFound(format!("Token {} not found", address))`, and proceed when `Some(row)` is returned, using the retrieved row or ignoring it if only existence is needed.
In `@Justfile`:
- Line 36: The ci recipe currently lists frontend-lint and frontend-build but
omits frontend-install, causing just ci to fail on a clean checkout; update the
Justfile so the ci target's prerequisites include frontend-install (alongside
backend-fmt, backend-clippy, backend-test, frontend-lint, frontend-build) so
that the frontend dependencies are installed before running frontend-lint and
frontend-build.
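A sketch of the fixed recipe; the recipe names are taken from the finding above, and the exact dependency order beyond placing frontend-install before the frontend steps is an assumption:

```just
# Run everything CI runs. frontend-install must precede frontend-lint and
# frontend-build so `just ci` succeeds on a clean checkout.
ci: backend-fmt backend-clippy backend-test frontend-install frontend-lint frontend-build
```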
In `@README.md`:
- Around line 31-36: The README currently places both commands ("just
backend-indexer" and "just backend-api") in one code block despite saying "run
in separate terminals"; split them into two separate bash code blocks, update
the prose to something like "Start backend services (each in its own terminal):"
and label them "Terminal 1" and "Terminal 2" so a contributor's copy/paste won't
run them sequentially in a single shell; locate the section containing the two
commands and replace the single block with two distinct blocks for the commands
"just backend-indexer" and "just backend-api".
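The suggested README shape, using the two commands quoted in the finding:

```bash
# Terminal 1
just backend-indexer
```

```bash
# Terminal 2
just backend-api
```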
---
Outside diff comments:
In `@backend/crates/atlas-api/src/handlers/etherscan.rs`:
- Around line 504-512: The current confirmations calculation uses
`current_block.0 - tx.block_number` which can go negative due to TOCTOU races;
update the mapping in the closure that builds `EtherscanTransaction` (the place
where `let confirmations = current_block.0 - tx.block_number;` is computed) to
use saturating subtraction to never produce negative values (e.g.,
`current_block.0.saturating_sub(tx.block_number)` or otherwise clamp to zero)
and apply the same change in the analogous computation inside
`get_token_tx_list` to ensure confirmations are always >= 0.
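A minimal sketch of the clamped computation; the u64 block-number type is an assumption based on the `current_block.0` access quoted above:

```rust
/// Compute confirmations without underflowing when the chain head moves
/// between the two reads (the TOCTOU race noted above).
fn confirmations(current_block: u64, tx_block: u64) -> u64 {
    // saturating_sub clamps at zero instead of wrapping or panicking
    current_block.saturating_sub(tx_block)
}
```

The same one-line change applies to the analogous computation in `get_token_tx_list`.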
In `@backend/crates/atlas-api/src/handlers/mod.rs`:
- Around line 21-57: get_table_count currently builds an SQL string with
format!("SELECT COUNT(*) FROM {}", table_name) which is unsafe for untrusted
input; ensure table_name is validated or documented as trusted by either: (a)
implement a whitelist check of allowed table names before calling format!
(validate in get_table_count), or (b) add a clear doc comment on get_table_count
stating it is internal-only and table_name must be a trusted constant, and
ideally assert/unwrap against a constant list (e.g., only "transactions") to
prevent SQL injection at runtime; update references to the function accordingly.
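One way to sketch option (a), a whitelist check before the `format!`; the table names listed here are illustrative, not the project's actual set:

```rust
/// Tables get_table_count may legitimately be called with (illustrative).
const ALLOWED_TABLES: &[&str] = &["transactions", "blocks"];

/// Build the COUNT query only for whitelisted table names, so untrusted
/// input can never reach the format! interpolation.
fn table_count_query(table_name: &str) -> Option<String> {
    if ALLOWED_TABLES.contains(&table_name) {
        Some(format!("SELECT COUNT(*) FROM {table_name}"))
    } else {
        None
    }
}
```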
In `@backend/crates/atlas-api/src/handlers/nfts.rs`:
- Around line 327-338: The resolve_ipfs_url function currently converts ipfs://
URIs by stripping the "ipfs://" prefix but does not handle the case where the
remaining string begins with "ipfs/", producing double "ipfs/ipfs" in the path;
update resolve_ipfs_url to, after uri.strip_prefix("ipfs://") returns stripped,
check if stripped starts_with "ipfs/" and if so trim that segment (e.g., drop
the leading "ipfs/") before formatting "https://ipfs.io/ipfs/{}", keeping the
existing ar:// handling intact.
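A sketch of the trimmed-prefix handling described above; the gateway URL matches the one quoted in the finding, and the ar:// branch is omitted for brevity:

```rust
/// Resolve ipfs:// URIs to an HTTP gateway URL, dropping a redundant
/// leading "ipfs/" segment so we never emit ".../ipfs/ipfs/<cid>".
fn resolve_ipfs_url(uri: &str) -> String {
    if let Some(stripped) = uri.strip_prefix("ipfs://") {
        let path = stripped.strip_prefix("ipfs/").unwrap_or(stripped);
        return format!("https://ipfs.io/ipfs/{path}");
    }
    uri.to_string()
}
```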
In `@backend/crates/atlas-api/src/handlers/transactions.rs`:
- Around line 40-59: Normalize the incoming transaction hash to the canonical
form used elsewhere by calling the existing normalize_hash helper instead of
only adding "0x": replace the inline normalization in get_transaction with a
call like let hash = normalize_hash(&hash); so the hash has the 0x prefix and is
lowercased, then bind that normalized hash when querying the transactions table;
ensure you import or reference normalize_hash consistent with other handlers.
In `@frontend/src/pages/BlocksPage.tsx`:
- Around line 97-116: The RAF cancellation can leave orphaned timeouts because
seenBlocksRef.add and scheduling of timeouts occur outside the
requestAnimationFrame callback; move the side-effects that mark blocks as seen
and schedule their removal timeouts into the highlightRafRef.current callback
that calls setHighlightBlocks so those actions only run when the RAF actually
fires. Concretely, update the block where newlyAdded is processed so the
requestAnimationFrame callback performs seenBlocksRef.current.add(n) and
timeoutsRef.current.set(n, timeout) (and the timeout's setHighlightBlocks
removal) for each n, and if you prefer the simpler compromise keep only a quick
seenBlocksRef.current.add(n) outside to dedupe duplicates while moving the
timeout creation inside the RAF; reference highlightRafRef, seenBlocksRef,
timeoutsRef, setHighlightBlocks and newlyAdded when locating code to change.
---
Duplicate comments:
In `@frontend/src/pages/AddressesPage.tsx`:
- Around line 48-63: The RAF race causes seenRef.current.add and timersRef
scheduling to run before the highlight actually gets applied (so if
highlightRafRef is cancelled the addresses are marked seen but never
highlighted); move the seenRef.current.add(...) loop and
timersRef.current.set(...) scheduling inside the requestAnimationFrame callback
that calls setHighlight (i.e., co-locate the for (const h of newOnes) { ... }
logic into the highlightRafRef.current = requestAnimationFrame(() => { ... })
block), preserve the existing cancelAnimationFrame behavior with
highlightRafRef, and ensure timersRef.current.delete(h) and timeout creation
happen only after the highlight has been added so cancelled RAFs won’t leave
stale seen/timers for functions/refs: highlightRafRef, setHighlight, seenRef,
timersRef.
---
Nitpick comments:
In @.github/workflows/ci.yml:
- Around line 1-61: Add a top-level concurrency block (placed directly after the
existing on: key) to cancel stale runs for this workflow; set concurrency.group
to a stable identifier like "${{ github.workflow }}-${{ github.ref }}" (or "${{
github.head_ref || github.ref }}" for PRs) and concurrency.cancel-in-progress to
true so jobs such as the backend and frontend workflows (job names "Backend
(Rust)" and "Frontend (Bun)") will cancel previous runs when a new push to the
same PR/branch occurs.
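The suggested top-level block might look like this (the `on:` trigger shown is a placeholder for the workflow's existing one):

```yaml
on: [push, pull_request]

concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref || github.ref }}
  cancel-in-progress: true
```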
- Around line 17-18: The workflow uses floating tags for third-party actions
(actions/checkout, dtolnay/rust-toolchain, Swatinem/rust-cache,
oven-sh/setup-bun); replace each tag reference (e.g., actions/checkout@v4) with
the corresponding commit SHA to pin the action to an immutable ref—locate
occurrences of those action strings in the workflow and update all instances
(lines referenced around the checkout, rust-toolchain, rust-cache, and setup-bun
steps) by running a pinning tool like pinact or pin-github-action or manually
lookup the commit SHAs on the action repos and substitute the tag with the full
SHA.
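The pinned form keeps the tag as a trailing comment for readability; `<full-commit-sha>` below is a placeholder to be resolved against the action repo, not a real SHA:

```yaml
- uses: actions/checkout@<full-commit-sha> # v4
```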
- Around line 50-52: The CI step currently sets the setup-bun action with
bun-version: latest which can auto-upgrade Bun and cause flakes; update that
action invocation (the uses: oven-sh/setup-bun@v2 step) to pin bun-version to a
specific version or semver range (e.g. "0.x" or an exact "0.y.z"), or change it
to read the version from package.json via the packageManager field or from a
checked-in .bun-version file by using bun-version-file—replace the literal
"latest" value for bun-version accordingly to ensure reproducible CI.
In `@backend/crates/atlas-api/src/handlers/contracts.rs`:
- Around line 382-406: The solc child spawned in compile_contract currently
waits indefinitely via child.wait_with_output(); wrap that wait in
tokio::time::timeout (e.g., tokio::time::Duration::from_secs configurable) and
if the timeout elapses, attempt to terminate the child (call child.kill().await
or child.kill() and then await child.wait() or wait_with_output()) and return an
AtlasError::Compilation indicating a timeout; ensure you reference the
Command/child created in this block and replace the direct
child.wait_with_output().await call with a tokio::time::timeout wrapper that
maps timeout errors into the new AtlasError path.
- Around line 131-135: The call to bytecodes_match(&deployed_stripped,
&compiled_stripped, &request.constructor_args) passes constructor_args but the
bytecodes_match implementation ignores it (parameter named _constructor_args);
either update bytecodes_match to actually use constructor_args when comparing
(e.g., strip appended constructor args from deployed_stripped before comparison
or incorporate them in the matching logic) or remove the constructor_args
parameter from both the bytecodes_match signature and all call sites (including
where deployed_stripped and compiled_stripped are passed) to avoid confusion;
locate the bytecodes_match function and the call sites (symbols:
bytecodes_match, deployed_stripped, compiled_stripped, request.constructor_args)
and apply the chosen change consistently across the other occurrences mentioned.
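If the first option is chosen, the constructor-args handling might look like this sketch over stripped hex strings; the signature is assumed from the call site above, and the real code may operate on bytes rather than strings:

```rust
/// Compare bytecode after stripping ABI-encoded constructor args that
/// deployment appends to the creation code (per the finding above).
fn bytecodes_match(deployed: &str, compiled: &str, constructor_args: &str) -> bool {
    let deployed = if constructor_args.is_empty() {
        deployed
    } else {
        // If the deployed code ends with the encoded args, drop them first.
        deployed.strip_suffix(constructor_args).unwrap_or(deployed)
    };
    deployed == compiled
}
```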
In `@backend/crates/atlas-api/src/handlers/etherscan.rs`:
- Around line 888-902: Extract the duplicated functions normalize_address and
normalize_hash into a shared utility module (e.g., a utils or helpers module)
and export them; then remove the local definitions in this file and in
transactions.rs and replace them with imports (use
crate::utils::normalize_address; use crate::utils::normalize_hash; or
equivalent). Ensure the utility functions keep the same signatures (&str ->
String) and behavior, update any module paths or visibility (pub) so both
etherscan.rs and transactions.rs compile, and run cargo build/tests to verify no
references remain to the removed local functions.
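The shared module could start as something like this; the module path is illustrative, and the lowercase-plus-0x-prefix behavior is assumed from how the handlers are described as using these helpers:

```rust
// e.g. backend/crates/atlas-api/src/utils.rs (path is illustrative)

/// Lowercase a hash and ensure it carries a 0x prefix.
pub fn normalize_hash(hash: &str) -> String {
    let h = hash.trim().to_lowercase();
    if h.starts_with("0x") { h } else { format!("0x{h}") }
}

/// Addresses are assumed to use the same canonical form.
pub fn normalize_address(address: &str) -> String {
    normalize_hash(address)
}
```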
- Around line 198-221: The call site should not construct axum extractors to
invoke the handler directly; instead extract the core business logic from
contracts::verify_contract into a new standalone async function (e.g.,
verify_contract_impl(state: SharedStateType, request: VerifyRequestType) ->
Result<VerifyResponseType, ErrorType>) and have the axum handler
contracts::verify_contract simply call that impl; update this etherscan handler
to call verify_contract_impl with the actual state inner type and the
deserialized request rather than axum::extract::State(state) and Json(request),
and adjust return handling to match the impl's Result/response types.
- Around line 861-868: The SELECT currently fetches (number, hash, timestamp)
into an Option<(i64, String, i64)> but the hash is unused (shadowed as _hash);
remove the hash column from the SQL, change the query_as tuple type to
Option<(i64, i64)> (or Option<(i64, TimestampType)> matching timestamp's Rust
type), and update the match destructuring from Some((number, _hash, timestamp))
to Some((number, timestamp)) so only number and timestamp are selected and used.
- Around line 312-317: The three handlers (handle_proxy_module, get_balance,
get_balance_multi) duplicate
ProviderBuilder::new().on_http(state.rpc_url.parse()...) — extract a helper
(e.g., build_provider) that takes &str (rpc_url), parses it returning
Result<ProviderBuilder or impl Provider, AtlasError> and encapsulates the parse
error mapping to AtlasError::Config; then replace the three inline constructions
with a call to build_provider(&state.rpc_url) so future transport layers
(timeout/retry) can be added in one place.
In `@backend/crates/atlas-api/src/handlers/logs.rs`:
- Around line 201-215: Extract the duplicated normalize_hash and
normalize_address functions into a shared helper module and update each handler
to call the shared helpers: create a new module (e.g., handlers::util or
atlas_common::hex_utils) containing pub fn normalize_hash(hash: &str) -> String
and pub fn normalize_address(address: &str) -> String implementing the existing
behavior, then remove the duplicate implementations from logs.rs,
transactions.rs, etherscan.rs, contracts.rs, labels.rs, tokens.rs and replace
them with use/import of the new module and calls to util::normalize_hash and
util::normalize_address; ensure functions are public and imported where needed
and run cargo build to verify no visibility or name conflicts remain.
In `@frontend/src/api/transactions.ts`:
- Around line 40-47: Add a small response-normalization helper (e.g.,
normalizeArrayResponse<T>(body: unknown): T[]) that encapsulates the existing
“if Array.isArray(body) return body; if object with 'data' and data is array
return data; otherwise return []” logic, keep the generic type so callers
maintain TxErc20Transfer typing, and use it from getTxErc20Transfers (replace
the inline checks) and the other transfer fetcher(s) that duplicate this logic
so both call normalizeArrayResponse<TxErc20Transfer>(response.data) and return
the resulting array.
In `@frontend/src/hooks/useLatestBlockHeight.ts`:
- Line 13: The parameter _windowBlocks in the useLatestBlockHeight hook is
unused and should be deprecated or removed to avoid API ambiguity; update the
useLatestBlockHeight function signature to either remove the _windowBlocks
parameter entirely (and adjust any callers) or add a JSDoc `@deprecated` notice
above the hook documenting that _windowBlocks is no longer used, then update any
tests/call sites to stop passing it and run the build to ensure no references
remain.
In `@README.md`:
- Around line 9-12: The README currently lists "just" without a minimum version;
update the dependency list to specify a minimum Just version (e.g., "just 1.0+"
or whatever the project requires) so users know which Justfile features are
supported; locate the README entry for "just" (and the project's Justfile) and
pick the smallest Just release that supports used features (for example the "set
shell" directive requires just ≥ 1.0), then change the bullet to include that
minimum version.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (48)
- .github/workflows/ci.yml
- Justfile
- README.md
- backend/crates/atlas-api/src/error.rs
- backend/crates/atlas-api/src/handlers/addresses.rs
- backend/crates/atlas-api/src/handlers/blocks.rs
- backend/crates/atlas-api/src/handlers/contracts.rs
- backend/crates/atlas-api/src/handlers/etherscan.rs
- backend/crates/atlas-api/src/handlers/labels.rs
- backend/crates/atlas-api/src/handlers/logs.rs
- backend/crates/atlas-api/src/handlers/mod.rs
- backend/crates/atlas-api/src/handlers/nfts.rs
- backend/crates/atlas-api/src/handlers/proxy.rs
- backend/crates/atlas-api/src/handlers/search.rs
- backend/crates/atlas-api/src/handlers/status.rs
- backend/crates/atlas-api/src/handlers/tokens.rs
- backend/crates/atlas-api/src/handlers/transactions.rs
- backend/crates/atlas-api/src/main.rs
- backend/crates/atlas-common/src/lib.rs
- backend/crates/atlas-common/src/types.rs
- backend/crates/atlas-indexer/src/batch.rs
- backend/crates/atlas-indexer/src/config.rs
- backend/crates/atlas-indexer/src/copy.rs
- backend/crates/atlas-indexer/src/fetcher.rs
- backend/crates/atlas-indexer/src/indexer.rs
- backend/crates/atlas-indexer/src/main.rs
- backend/crates/atlas-indexer/src/metadata.rs
- frontend/src/App.tsx
- frontend/src/api/addresses.ts
- frontend/src/api/transactions.ts
- frontend/src/components/ImageIpfs.tsx
- frontend/src/components/Layout.tsx
- frontend/src/components/SearchBar.tsx
- frontend/src/components/SmoothCounter.tsx
- frontend/src/hooks/index.ts
- frontend/src/hooks/useEthBalance.ts
- frontend/src/hooks/useEthPrice.ts
- frontend/src/hooks/useLatestBlockHeight.ts
- frontend/src/hooks/useStats.ts
- frontend/src/pages/AddressPage.tsx
- frontend/src/pages/AddressesPage.tsx
- frontend/src/pages/BlockDetailPage.tsx
- frontend/src/pages/BlocksPage.tsx
- frontend/src/pages/NFTsPage.tsx
- frontend/src/pages/SearchResultsPage.tsx
- frontend/src/pages/TokensPage.tsx
- frontend/src/pages/TransactionDetailPage.tsx
- frontend/src/pages/TransactionsPage.tsx
💤 Files with no reviewable changes (2)
- frontend/src/components/ImageIpfs.tsx
- frontend/src/components/SmoothCounter.tsx
Overview
This PR adds CI, fixes fmt, lint, clippy, and build issues so CI passes, and adds a Justfile providing root-level commands for both the backend and the frontend.
Summary by CodeRabbit
- New Features
- Bug Fixes
- Documentation
- Style & Refactoring