
ci and fixes #14

Merged
tac0turtle merged 5 commits into main from marko/ci_fixes
Feb 24, 2026

Conversation


@tac0turtle tac0turtle commented Feb 24, 2026

Overview

This PR adds CI; fixes fmt, lint, clippy, and build issues so CI passes; and adds a Justfile that provides root-level commands for the backend and frontend.

Summary by CodeRabbit

  • New Features

    • Added CI pipeline and a Just-based task runner for local workflows.
    • Admin API key support and admin-only enforcement for sensitive endpoints.
  • Bug Fixes

    • Hardened error handling across frontend hooks and API clients.
    • Safer deserialization/typing to reduce runtime errors.
  • Documentation

    • README updated with new prerequisites and Just-based developer commands.
  • Style & Refactoring

    • UI rendering/animation improvements and broad code readability cleanups.


coderabbitai bot commented Feb 24, 2026

Caution

Review failed

The pull request is closed.

ℹ️ Recent review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 143b087 and ba45d79.

📒 Files selected for processing (15)
  • .github/workflows/ci.yml
  • Justfile
  • README.md
  • backend/crates/atlas-api/src/handlers/auth.rs
  • backend/crates/atlas-api/src/handlers/etherscan.rs
  • backend/crates/atlas-api/src/handlers/labels.rs
  • backend/crates/atlas-api/src/handlers/mod.rs
  • backend/crates/atlas-api/src/handlers/nfts.rs
  • backend/crates/atlas-api/src/handlers/proxy.rs
  • backend/crates/atlas-api/src/handlers/tokens.rs
  • backend/crates/atlas-api/src/handlers/transactions.rs
  • backend/crates/atlas-api/src/main.rs
  • backend/crates/atlas-common/src/error.rs
  • frontend/src/pages/AddressesPage.tsx
  • frontend/src/pages/BlocksPage.tsx

📝 Walkthrough

Adds CI workflow and Justfile; introduces admin API key enforcement and require_admin helper; many backend formatting and small refactors (including FetchResult boxing and select handler signature changes); indexer concurrency and startup adjustments; frontend type-safety, error-handling, and UI rendering refinements.

Changes

Cohort / File(s) Summary
CI & Tasks
.github/workflows/ci.yml, Justfile, README.md
Adds GitHub Actions CI with Backend (Rust) and Frontend (Bun) jobs; introduces Just tasks for frontend/backend workflows and updates README to document prerequisites and just usage.
Auth + Admin Gate
backend/crates/atlas-api/src/handlers/auth.rs, backend/crates/atlas-api/src/main.rs, backend/crates/atlas-api/src/handlers/...
Adds require_admin helper, wires admin_api_key into AppState, and enforces admin checks (HeaderMap) on probe endpoints (labels, proxy, etc.).
Backend handlers (formatting & small refactors)
backend/crates/atlas-api/src/handlers/... (addresses, blocks, contracts, etherscan, labels, logs, nfts, proxy, search, status, tokens, transactions, error.rs)
Widespread formatting, import reordering, SQL string reflow, small behavioral tweaks (struct field renames in etherscan, pagination/count changes, normalize_hash helper, label and proxy handlers now accept headers).
Handler module & API surface
backend/crates/atlas-api/src/handlers/mod.rs
Adds multiple public modules (auth, blocks, contracts, logs, nfts, search, tokens, transactions); removes get_filtered_count; changes get_table_count signature to fixed-table query.
Common crate changes
backend/crates/atlas-common/src/lib.rs, backend/crates/atlas-common/src/error.rs, backend/crates/atlas-common/src/types.rs
Reorders module/re-exports; adds AtlasError::Unauthorized(String) and maps it to HTTP 401.
Indexer - concurrency, API, startup
backend/crates/atlas-indexer/src/{batch.rs,fetcher.rs,indexer.rs,metadata.rs,main.rs,config.rs,copy.rs}
Refactors worker loops, boxes FetchedBlock in FetchResult::Success, adjusts metadata fetch interfaces (token_key tuple), tightens task draining, adds dotenv/migration/startup steps and background metadata fetcher.
Frontend type-safety & error handling
frontend/src/{api/*.ts,hooks/*.tsx,pages/*.tsx}
Replaces any with unknown in catches/response handling, improves error-message extraction, removes unsafe casts, adds type guards, and refines derived hasLoaded patterns across pages.
Frontend UI & hooks
frontend/src/components/{Layout,SearchBar,ImageIpfs,SmoothCounter}.tsx, frontend/src/hooks/index.ts
Layout: RAF-based height/display sync and live clock; SearchBar: type guards and helper render functions; adds useProxies re-export; minor cleanup (removed unused imports / ESLint disables).
Small frontend fixes
frontend/src/App.tsx, other small files
Minor import removals and small logic cleanups (e.g., getEthBalance return, response handling for transfers).
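The admin gate summarized in the "Auth + Admin Gate" row can be sketched as a plain comparison against a configured key. The function below is a hypothetical simplification: the real `require_admin` works on an axum `HeaderMap` and returns `AtlasError::Unauthorized`, neither of which is reproduced here, and the `x-admin-key` header name is an assumption.

```rust
/// Hypothetical sketch of an admin gate like the PR's `require_admin`.
/// Assumes the caller has already extracted the `x-admin-key` header value;
/// the real helper takes a `HeaderMap` and maps failures to HTTP 401.
fn require_admin(provided: Option<&str>, expected: &str) -> Result<(), String> {
    match provided {
        Some(key) if key == expected => Ok(()),
        _ => Err("Unauthorized".to_string()),
    }
}
```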

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

  • feat: batch insert #8 — Overlapping changes to indexer batch processing and fetcher/indexer logic (batching and COPY-related work).

Poem

🐰 I hopped through diffs both wide and deep,
CI now wakes while the indexers keep,
Admin keys guarded behind a gate,
Frontend types lean, no casts to hate,
Metadata fetches bound and fleet,
A rabbit’s cheer for code complete! 🥕✨

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage (⚠️ Warning): docstring coverage is 41.74%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
  • Title Check (❓ Inconclusive): the title 'ci and fixes' is vague and generic, and doesn't convey the changeset's main objectives. Resolution: use a more descriptive title, such as 'Add CI workflow and formatting/code quality fixes' or 'Introduce GitHub Actions workflow and refactor handler logic'.

✅ Passed checks (1 passed)
  • Description Check (✅ Passed): check skipped because CodeRabbit's high-level summary is enabled.


@gemini-code-assist

Summary of Changes

Hello @tac0turtle, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request primarily focuses on enhancing developer experience and code quality. It introduces a 'Justfile' to streamline common development and CI operations, making it easier to manage both frontend and backend tasks. Concurrently, significant effort was put into reformatting and cleaning up the Rust backend code, ensuring better readability and adherence to coding standards. Minor frontend adjustments were also made to improve type safety and remove technical debt.

Highlights

  • CI/CD Automation: Introduced a new 'Justfile' to centralize and simplify common development and CI tasks for both the frontend and backend, including installation, development, linting, building, formatting, testing, and running services.
  • Code Formatting and Readability: Applied extensive Rustfmt and general code reformatting across the backend crates (atlas-api and atlas-indexer) to improve consistency, readability, and maintainability. This includes reordering imports, breaking long lines, and consistent indentation.
  • API Handler Improvements: Refactored several API handlers in atlas-api to improve query formatting, error handling, and overall code structure, making the API more robust and easier to understand.
  • Frontend Code Quality: Made minor adjustments in the frontend to enhance type safety and remove unnecessary ESLint disable comments, contributing to a cleaner and more reliable codebase.


Changelog
  • Justfile
    • Added a new Justfile to define common development and CI commands for both frontend (bun) and backend (cargo).
  • backend/crates/atlas-api/src/error.rs
Reformatted a line for improved readability.
  • backend/crates/atlas-api/src/handlers/addresses.rs
    • Reordered imports for consistency.
    • Applied minor formatting changes to function definitions and SQL queries.
  • backend/crates/atlas-api/src/handlers/blocks.rs
    • Reordered imports for consistency.
    • Applied minor formatting changes to function definitions and SQL queries.
  • backend/crates/atlas-api/src/handlers/contracts.rs
    • Reordered imports for consistency.
    • Applied extensive formatting changes to function definitions, struct fields, and SQL queries for improved readability.
  • backend/crates/atlas-api/src/handlers/etherscan.rs
    • Reordered imports for consistency.
    • Applied extensive formatting changes to function definitions, RPC calls, and SQL queries for improved readability.
  • backend/crates/atlas-api/src/handlers/labels.rs
    • Reordered imports for consistency.
    • Applied minor formatting changes to function definitions and SQL queries.
  • backend/crates/atlas-api/src/handlers/logs.rs
    • Reordered imports for consistency.
    • Applied minor formatting changes to function definitions and SQL queries.
  • backend/crates/atlas-api/src/handlers/mod.rs
    • Reordered module imports for consistency.
    • Applied minor formatting to SQL query string.
  • backend/crates/atlas-api/src/handlers/nfts.rs
    • Reordered imports for consistency.
    • Applied extensive formatting changes to function definitions, RPC calls, and SQL queries for improved readability.
  • backend/crates/atlas-api/src/handlers/proxy.rs
    • Reordered imports for consistency.
    • Applied extensive formatting changes to function definitions, RPC calls, and SQL queries for improved readability.
  • backend/crates/atlas-api/src/handlers/search.rs
    • Reordered imports for consistency.
    • Applied minor formatting changes to function definitions and SQL queries.
  • backend/crates/atlas-api/src/handlers/status.rs
    • Reordered imports for consistency.
    • Applied minor formatting changes to SQL query string.
  • backend/crates/atlas-api/src/handlers/tokens.rs
    • Reordered imports for consistency.
    • Applied minor formatting changes to function definitions and SQL queries.
  • backend/crates/atlas-api/src/handlers/transactions.rs
    • Reordered imports for consistency.
    • Applied minor formatting changes to function definitions and SQL queries.
  • backend/crates/atlas-api/src/main.rs
    • Reordered module imports for consistency.
    • Applied minor formatting changes to tracing initialization and router definitions.
  • backend/crates/atlas-common/src/lib.rs
    • Reordered module imports for consistency.
  • backend/crates/atlas-common/src/types.rs
    • Applied minor formatting changes to default_page and default_limit functions.
  • backend/crates/atlas-indexer/src/batch.rs
    • Applied minor formatting changes to comments and function signatures.
  • backend/crates/atlas-indexer/src/config.rs
    • Applied minor formatting changes to environment variable parsing.
  • backend/crates/atlas-indexer/src/copy.rs
    • Reordered tokio_postgres::types imports for consistency.
  • backend/crates/atlas-indexer/src/fetcher.rs
    • Applied minor formatting changes to type aliases and debug logging.
  • backend/crates/atlas-indexer/src/indexer.rs
    • Reordered imports for consistency.
    • Applied minor formatting to constants, debug logging, and function signatures.
    • Improved SQL query formatting.
  • backend/crates/atlas-indexer/src/main.rs
    • Reordered module imports for consistency.
    • Applied minor formatting changes to tracing initialization and database pool creation.
  • backend/crates/atlas-indexer/src/metadata.rs
    • Applied minor formatting changes to debug logging, SQL queries, and function signatures.
  • frontend/src/App.tsx
    • Removed unused Navigate import.
  • frontend/src/api/addresses.ts
    • Removed a @ts-expect-error comment and explicit type cast for response.data.result.
  • frontend/src/components/ImageIpfs.tsx
    • Removed an eslint-disable comment.
  • frontend/src/components/SmoothCounter.tsx
    • Removed unused React import.
  • frontend/src/hooks/index.ts
    • Added useProxies export.
  • frontend/src/pages/BlockDetailPage.tsx
    • Added ReactNode type import.
    • Updated DetailRow value type to ReactNode.
  • frontend/src/pages/BlocksPage.tsx
    • Added a hidden span element for tick value.
  • frontend/src/pages/TransactionDetailPage.tsx
    • Removed unused useMemo import.
    • Added ReactNode type import.
    • Updated DetailRow value type to ReactNode.
  • frontend/src/pages/TransactionsPage.tsx
    • Added a hidden span element for tick value.
Ignored Files
  • Ignored by pattern: .github/workflows/** (1)
    • .github/workflows/ci.yml


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a Justfile for CI and development tasks, applies extensive code formatting to the Rust backend via cargo fmt, and brings minor improvements to the frontend, including enhanced type definitions. However, two significant security vulnerabilities were identified: a path traversal flaw in the contract verification logic that allows execution of arbitrary binaries, and a lack of authorization on sensitive state-changing endpoints for label management and proxy detection. These critical issues must be addressed before merging. While these security concerns are paramount, the PR otherwise positively contributes to the codebase's quality and maintainability, with a specific suggestion to further improve type safety in the frontend API client.

@tac0turtle tac0turtle marked this pull request as ready for review February 24, 2026 10:58
@tac0turtle tac0turtle requested a review from pthmas February 24, 2026 10:59

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (5)
frontend/src/pages/BlocksPage.tsx (1)

97-116: ⚠️ Potential issue | 🟡 Minor

Minor race: cancelled RAF leaves orphaned timeouts for never-highlighted blocks.

If rapid block updates cause the RAF on line 101 to be cancelled before it fires (line 98–100), the setHighlightBlocks call never executes. However, the blocks are already added to seenBlocksRef (line 106) and their removal timeouts are already scheduled (lines 107–115). This means those blocks silently skip their highlight animation and can never be retried.

In practice this is unlikely to be noticeable (requires two block batches arriving within a single animation frame), but you could fix it by moving seenBlocksRef.add(n) and the timeout scheduling inside the RAF callback alongside setHighlightBlocks.

Proposed fix: move side effects inside the RAF callback
     if (newlyAdded.length) {
       if (highlightRafRef.current !== null) {
         window.cancelAnimationFrame(highlightRafRef.current);
       }
       highlightRafRef.current = window.requestAnimationFrame(() => {
         setHighlightBlocks((prev) => new Set([...prev, ...newlyAdded]));
         highlightRafRef.current = null;
+        for (const n of newlyAdded) {
+          seenBlocksRef.current.add(n);
+          const t = window.setTimeout(() => {
+            setHighlightBlocks((prev) => {
+              const next = new Set(prev);
+              next.delete(n);
+              return next;
+            });
+            timeoutsRef.current.delete(n);
+          }, 1600);
+          timeoutsRef.current.set(n, t);
+        }
       });
-      for (const n of newlyAdded) {
-        seenBlocksRef.current.add(n);
-        const t = window.setTimeout(() => {
-          setHighlightBlocks((prev) => {
-            const next = new Set(prev);
-            next.delete(n);
-            return next;
-          });
-          timeoutsRef.current.delete(n);
-        }, 1600);
-        timeoutsRef.current.set(n, t);
-      }
     }

Note: if you move the seenBlocksRef.add inside the RAF, you'll also need to guard against the same block number appearing in two consecutive newlyAdded arrays before the first RAF fires. A quick seenBlocksRef.current.add(n) outside the RAF (keeping only timeouts inside) would be a simpler compromise.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/pages/BlocksPage.tsx` around lines 97 - 116, The RAF
cancellation can leave orphaned timeouts because seenBlocksRef.add and
scheduling of timeouts occur outside the requestAnimationFrame callback; move
the side-effects that mark blocks as seen and schedule their removal timeouts
into the highlightRafRef.current callback that calls setHighlightBlocks so those
actions only run when the RAF actually fires. Concretely, update the block where
newlyAdded is processed so the requestAnimationFrame callback performs
seenBlocksRef.current.add(n) and timeoutsRef.current.set(n, timeout) (and the
timeout's setHighlightBlocks removal) for each n, and if you prefer the simpler
compromise keep only a quick seenBlocksRef.current.add(n) outside to dedupe
duplicates while moving the timeout creation inside the RAF; reference
highlightRafRef, seenBlocksRef, timeoutsRef, setHighlightBlocks and newlyAdded
when locating code to change.
backend/crates/atlas-api/src/handlers/transactions.rs (1)

40-59: ⚠️ Potential issue | 🟡 Minor

Normalize hash to lowercase for consistent lookups.

Current logic only adds the 0x prefix; mixed‑case input can miss rows if hashes are stored lowercase. Consider reusing normalize_hash for consistency with other handlers.

🔧 Suggested fix
-    let hash = if hash.starts_with("0x") {
-        hash
-    } else {
-        format!("0x{}", hash)
-    };
+    let hash = normalize_hash(&hash);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/crates/atlas-api/src/handlers/transactions.rs` around lines 40 - 59,
Normalize the incoming transaction hash to the canonical form used elsewhere by
calling the existing normalize_hash helper instead of only adding "0x": replace
the inline normalization in get_transaction with a call like let hash =
normalize_hash(&hash); so the hash has the 0x prefix and is lowercased, then
bind that normalized hash when querying the transactions table; ensure you
import or reference normalize_hash consistent with other handlers.
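A minimal version of the normalization the comment asks for might look like the sketch below. The repository's actual `normalize_hash` is not shown in the diff, so this body is an assumption about its behavior (ensure a `0x` prefix and lowercase the rest):

```rust
/// Hypothetical normalize_hash: lowercase the hash and ensure a 0x prefix,
/// so lookups match rows stored in canonical lowercase form.
fn normalize_hash(hash: &str) -> String {
    let bare = hash
        .strip_prefix("0x")
        .or_else(|| hash.strip_prefix("0X"))
        .unwrap_or(hash);
    format!("0x{}", bare.to_lowercase())
}
```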
backend/crates/atlas-api/src/handlers/nfts.rs (1)

327-338: ⚠️ Potential issue | 🟡 Minor

Handle ipfs://ipfs/ prefix to avoid double ipfs/ path.

Some NFTs use ipfs://ipfs/<cid> URIs. The current logic would produce https://ipfs.io/ipfs/ipfs/<cid>. Trim the optional ipfs/ segment after the scheme.

🔧 Suggested fix
-    if let Some(stripped) = uri.strip_prefix("ipfs://") {
-        // Convert ipfs://QmXxx... to https://ipfs.io/ipfs/QmXxx...
-        format!("https://ipfs.io/ipfs/{}", stripped)
+    if let Some(stripped) = uri.strip_prefix("ipfs://") {
+        let stripped = stripped.strip_prefix("ipfs/").unwrap_or(stripped);
+        // Convert ipfs://QmXxx... or ipfs://ipfs/QmXxx... to https://ipfs.io/ipfs/QmXxx...
+        format!("https://ipfs.io/ipfs/{}", stripped)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/crates/atlas-api/src/handlers/nfts.rs` around lines 327 - 338, The
resolve_ipfs_url function currently converts ipfs:// URIs by stripping the
"ipfs://" prefix but does not handle the case where the remaining string begins
with "ipfs/", producing double "ipfs/ipfs" in the path; update resolve_ipfs_url
to, after uri.strip_prefix("ipfs://") returns stripped, check if stripped
starts_with "ipfs/" and if so trim that segment (e.g., drop the leading "ipfs/")
before formatting "https://ipfs.io/ipfs/{}", keeping the existing ar:// handling
intact.
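The suggested trim is easy to exercise in isolation. This standalone sketch mirrors the proposed fix; the function shape and gateway URL follow the diff above, while the `ar://` branch the comment mentions is omitted for brevity:

```rust
/// Convert ipfs:// and ipfs://ipfs/ URIs to an HTTPS gateway URL,
/// passing anything else through unchanged.
fn resolve_ipfs_url(uri: &str) -> String {
    if let Some(stripped) = uri.strip_prefix("ipfs://") {
        // Trim an optional extra "ipfs/" segment so ipfs://ipfs/<cid>
        // does not become .../ipfs/ipfs/<cid>.
        let stripped = stripped.strip_prefix("ipfs/").unwrap_or(stripped);
        format!("https://ipfs.io/ipfs/{}", stripped)
    } else {
        uri.to_string()
    }
}
```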
backend/crates/atlas-api/src/handlers/mod.rs (1)

21-57: ⚠️ Potential issue | 🟡 Minor

The format! query construction is unsafe for untrusted input, but currently safe in practice.

The function uses format!("SELECT COUNT(*) FROM {}", table_name) which is vulnerable to SQL injection if table_name is user-controlled. However, the only call site passes the hardcoded constant "transactions", so there is no immediate risk.

Since table names cannot be parameterized in SQL, add explicit documentation or validation to ensure table_name remains trusted—e.g., a whitelist of allowed tables or a comment clarifying this function is internal-only.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/crates/atlas-api/src/handlers/mod.rs` around lines 21 - 57,
get_table_count currently builds an SQL string with format!("SELECT COUNT(*)
FROM {}", table_name) which is unsafe for untrusted input; ensure table_name is
validated or documented as trusted by either: (a) implement a whitelist check of
allowed table names before calling format! (validate in get_table_count), or (b)
add a clear doc comment on get_table_count stating it is internal-only and
table_name must be a trusted constant, and ideally assert/unwrap against a
constant list (e.g., only "transactions") to prevent SQL injection at runtime;
update references to the function accordingly.
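One way to keep the dynamic table name trusted is the whitelist option the comment suggests. The guard below is a hypothetical sketch; the allowed-table list is an assumption (the source only confirms "transactions" as a call site):

```rust
/// Hypothetical whitelist guard for count queries. Table names cannot be
/// bound as SQL parameters, so only known constants may reach format!.
const ALLOWED_TABLES: &[&str] = &["transactions"];

fn count_query(table_name: &str) -> Option<String> {
    if ALLOWED_TABLES.contains(&table_name) {
        Some(format!("SELECT COUNT(*) FROM {}", table_name))
    } else {
        // Reject anything not in the whitelist instead of interpolating it.
        None
    }
}
```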
backend/crates/atlas-api/src/handlers/etherscan.rs (1)

504-512: ⚠️ Potential issue | 🟡 Minor

Confirmations calculation can go negative due to a TOCTOU race

current_block.0 is fetched in a separate DB query before iterating transactions. If a new block is indexed between the two queries — or if a stale transaction row exceeds the MAX block value — current_block.0 - tx.block_number produces a negative i64. In release mode this silently serialises as a negative confirmations string; in debug mode it panics on overflow. The same issue is present in get_token_tx_list at line 629.

🛡️ Proposed fix (both sites)
-            let confirmations = current_block.0 - tx.block_number;
+            let confirmations = (current_block.0 - tx.block_number).max(0);
-            let confirmations = current_block.0 - transfer.block_number;
+            let confirmations = (current_block.0 - transfer.block_number).max(0);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/crates/atlas-api/src/handlers/etherscan.rs` around lines 504 - 512,
The current confirmations calculation uses `current_block.0 - tx.block_number`
which can go negative due to TOCTOU races; update the mapping in the closure
that builds `EtherscanTransaction` (the place where `let confirmations =
current_block.0 - tx.block_number;` is computed) to use saturating subtraction
to never produce negative values (e.g.,
`current_block.0.saturating_sub(tx.block_number)` or otherwise clamp to zero)
and apply the same change in the analogous computation inside
`get_token_tx_list` to ensure confirmations are always >= 0.
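The clamp proposed above can be verified standalone. This sketch assumes both values are i64, as in the handler, and combines the two suggested variants (saturating subtraction plus a floor at zero):

```rust
/// Clamp confirmations at zero so a stale current-block value can never
/// yield a negative count (the TOCTOU race described above).
fn confirmations(current_block: i64, tx_block: i64) -> i64 {
    current_block.saturating_sub(tx_block).max(0)
}
```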
♻️ Duplicate comments (1)
frontend/src/pages/AddressesPage.tsx (1)

48-63: Same RAF race condition as BlocksPage — seenRef.add and timeout scheduling run before the RAF fires.

Same issue flagged in BlocksPage.tsx lines 97–116: if the RAF is cancelled by a rapid subsequent update, seenRef already marks the addresses as seen and removal timeouts are already ticking, but highlights are never added to state.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/pages/AddressesPage.tsx` around lines 48 - 63, The RAF race
causes seenRef.current.add and timersRef scheduling to run before the highlight
actually gets applied (so if highlightRafRef is cancelled the addresses are
marked seen but never highlighted); move the seenRef.current.add(...) loop and
timersRef.current.set(...) scheduling inside the requestAnimationFrame callback
that calls setHighlight (i.e., co-locate the for (const h of newOnes) { ... }
logic into the highlightRafRef.current = requestAnimationFrame(() => { ... })
block), preserve the existing cancelAnimationFrame behavior with
highlightRafRef, and ensure timersRef.current.delete(h) and timeout creation
happen only after the highlight has been added so cancelled RAFs won’t leave
stale seen/timers for functions/refs: highlightRafRef, setHighlight, seenRef,
timersRef.
🧹 Nitpick comments (13)
frontend/src/api/transactions.ts (1)

40-47: Extract a shared response-normalization helper.

Both transfer fetchers duplicate the same “array or { data }” logic, which is easy to drift over time. A small helper keeps this consistent and easier to maintain.

♻️ Example refactor
+function unwrapArray<T>(body: unknown): T[] {
+  if (Array.isArray(body)) return body as T[];
+  if (body && typeof body === 'object' && Array.isArray((body as { data?: unknown }).data)) {
+    return (body as { data: T[] }).data;
+  }
+  return [];
+}
+
 export async function getTxErc20Transfers(txHash: string): Promise<TxErc20Transfer[]> {
   const response = await client.get(`/transactions/${txHash}/erc20-transfers`);
-  const body = response.data as unknown;
-  if (Array.isArray(body)) return body as TxErc20Transfer[];
-  if (typeof body === 'object' && body !== null && 'data' in body) {
-    const data = (body as { data?: unknown }).data;
-    if (Array.isArray(data)) return data as TxErc20Transfer[];
-  }
-  return [];
+  return unwrapArray<TxErc20Transfer>(response.data);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/api/transactions.ts` around lines 40 - 47, Add a small
response-normalization helper (e.g., normalizeArrayResponse<T>(body: unknown):
T[]) that encapsulates the existing “if Array.isArray(body) return body; if
object with 'data' and data is array return data; otherwise return []” logic,
keep the generic type so callers maintain TxErc20Transfer typing, and use it
from getTxErc20Transfers (replace the inline checks) and the other transfer
fetcher(s) that duplicate this logic so both call
normalizeArrayResponse<TxErc20Transfer>(response.data) and return the resulting
array.
README.md (1)

9-12: Consider specifying a minimum version for just.

Bun 1.0+ and Rust 1.75+ both have version floors, but just has none. A minimum version is useful if the Justfile uses features not available in older releases (e.g., the set shell directive requires just ≥ 1.0).

📝 Suggested fix
-  - `just`
+  - `just` 1.0+
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 9 - 12, The README currently lists "just" without a
minimum version; update the dependency list to specify a minimum Just version
(e.g., "just 1.0+" or whatever the project requires) so users know which
Justfile features are supported; locate the README entry for "just" (and the
project's Justfile) and pick the smallest Just release that supports used
features (for example the "set shell" directive requires just ≥ 1.0), then
change the bullet to include that minimum version.
.github/workflows/ci.yml (3)

1-61: Add a concurrency group to cancel stale runs.

Without a concurrency group, every push to an open PR queues a new run while previous ones are still running, wasting CI minutes. A standard pattern:

🔧 Proposed addition (top-level, after `on:`)
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
+
 jobs:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci.yml around lines 1 - 61, Add a top-level concurrency
block (placed directly after the existing on: key) to cancel stale runs for this
workflow; set concurrency.group to a stable identifier like "${{ github.workflow
}}-${{ github.ref }}" (or "${{ github.head_ref || github.ref }}" for PRs) and
concurrency.cancel-in-progress to true so jobs such as the backend and frontend
workflows (job names "Backend (Rust)" and "Frontend (Bun)") will cancel previous
runs when a new push to the same PR/branch occurs.

17-18: Pin third-party actions to commit SHAs to guard against supply-chain attacks.

All third-party actions (actions/checkout, dtolnay/rust-toolchain, Swatinem/rust-cache, oven-sh/setup-bun) currently use floating tag references (@v4, @v2, @stable). A tag can be silently force-pushed to a malicious commit.

Example for actions/checkout@v4:

-        uses: actions/checkout@v4
+        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2

Apply the same pattern to all four action references using a tool such as pinact or pin-github-action.

Also applies to: 20-21, 25-26, 46-47, 49-50

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci.yml around lines 17 - 18, The workflow uses floating
tags for third-party actions (actions/checkout, dtolnay/rust-toolchain,
Swatinem/rust-cache, oven-sh/setup-bun); replace each tag reference (e.g.,
actions/checkout@v4) with the corresponding commit SHA to pin the action to an
immutable ref—locate occurrences of those action strings in the workflow and
update all instances (lines referenced around the checkout, rust-toolchain,
rust-cache, and setup-bun steps) by running a pinning tool like pinact or
pin-github-action or manually lookup the commit SHAs on the action repos and
substitute the tag with the full SHA.

50-52: Pin bun-version to a specific version instead of latest.

Using bun-version: latest means CI will silently upgrade to any future Bun major release, which could introduce breaking changes in --frozen-lockfile semantics or build/lint output and cause spurious failures. The setup-bun action supports reading the version from package.json's packageManager field or a bun-version-file as alternatives to a hardcoded version string.

🔧 Suggested options

Option A — semver range (matches README prerequisite):

-        bun-version: latest
+        bun-version: "1.x"

Option B — delegate to a .bun-version file checked into the repo:

-        bun-version: latest
+        bun-version-file: "frontend/.bun-version"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci.yml around lines 50 - 52, The CI step currently sets
the setup-bun action with bun-version: latest which can auto-upgrade Bun and
cause flakes; update that action invocation (the uses: oven-sh/setup-bun@v2
step) to pin bun-version to a specific version or semver range (e.g. "0.x" or an
exact "0.y.z"), or change it to read the version from package.json via the
packageManager field or from a checked-in .bun-version file by using
bun-version-file—replace the literal "latest" value for bun-version accordingly
to ensure reproducible CI.
frontend/src/hooks/useLatestBlockHeight.ts (1)

13-13: Optional: consider deprecating the unused parameter.
If _windowBlocks is no longer part of behavior, a follow‑up to remove or document it would reduce API ambiguity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/hooks/useLatestBlockHeight.ts` at line 13, The parameter
_windowBlocks in the useLatestBlockHeight hook is unused and should be
deprecated or removed to avoid API ambiguity; update the useLatestBlockHeight
function signature to either remove the _windowBlocks parameter entirely (and
adjust any callers) or add a JSDoc `@deprecated` notice above the hook documenting
that _windowBlocks is no longer used, then update any tests/call sites to stop
passing it and run the build to ensure no references remain.
backend/crates/atlas-api/src/handlers/logs.rs (1)

201-215: normalize_hash and normalize_address are duplicated across multiple handler files.

These identical helper functions appear in at least logs.rs, transactions.rs, etherscan.rs, contracts.rs, labels.rs, and tokens.rs. Consider extracting them into a shared module (e.g., a handlers/util.rs or into atlas_common) to reduce duplication.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/crates/atlas-api/src/handlers/logs.rs` around lines 201 - 215,
Extract the duplicated normalize_hash and normalize_address functions into a
shared helper module and update each handler to call the shared helpers: create
a new module (e.g., handlers::util or atlas_common::hex_utils) containing pub fn
normalize_hash(hash: &str) -> String and pub fn normalize_address(address: &str)
-> String implementing the existing behavior, then remove the duplicate
implementations from logs.rs, transactions.rs, etherscan.rs, contracts.rs,
labels.rs, tokens.rs and replace them with use/import of the new module and
calls to util::normalize_hash and util::normalize_address; ensure functions are
public and imported where needed and run cargo build to verify no visibility or
name conflicts remain.
backend/crates/atlas-api/src/handlers/contracts.rs (2)

382-406: Consider adding a timeout to the solc child process.

The compile_contract function spawns solc and waits indefinitely via wait_with_output(). A malicious or pathological input could cause solc to hang, leaving the request task stuck forever. Wrapping the wait in tokio::time::timeout would prevent this.

This is pre-existing behavior, but worth flagging given the user-facing nature of this endpoint.

⏱️ Proposed fix: add a timeout
+    use tokio::time::{timeout, Duration};
+
-    let output = child
-        .wait_with_output()
-        .await
-        .map_err(|e| AtlasError::Compilation(format!("Failed to wait for solc: {}", e)))?;
+    let output = timeout(Duration::from_secs(120), child.wait_with_output())
+        .await
+        .map_err(|_| AtlasError::Compilation("solc compilation timed out after 120s".to_string()))?
+        .map_err(|e| AtlasError::Compilation(format!("Failed to wait for solc: {}", e)))?;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/crates/atlas-api/src/handlers/contracts.rs` around lines 382 - 406,
The solc child spawned in compile_contract currently waits indefinitely via
child.wait_with_output(); wrap that wait in tokio::time::timeout (e.g.,
tokio::time::Duration::from_secs configurable) and if the timeout elapses,
attempt to terminate the child (call child.kill().await or child.kill() and then
await child.wait() or wait_with_output()) and return an AtlasError::Compilation
indicating a timeout; ensure you reference the Command/child created in this
block and replace the direct child.wait_with_output().await call with a
tokio::time::timeout wrapper that maps timeout errors into the new AtlasError
path.

131-135: _constructor_args is accepted but ignored in bytecodes_match.

The bytecodes_match function receives constructor_args but never uses it (prefixed with _). The call site at Line 131-135 passes it through. If constructor args should influence the comparison (e.g., stripping appended constructor args from deployed bytecode), this is a gap. Otherwise, consider removing the parameter to avoid confusion.

Also applies to: 528-551

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/crates/atlas-api/src/handlers/contracts.rs` around lines 131 - 135,
The call to bytecodes_match(&deployed_stripped, &compiled_stripped,
&request.constructor_args) passes constructor_args but the bytecodes_match
implementation ignores it (parameter named _constructor_args); either update
bytecodes_match to actually use constructor_args when comparing (e.g., strip
appended constructor args from deployed_stripped before comparison or
incorporate them in the matching logic) or remove the constructor_args parameter
from both the bytecodes_match signature and all call sites (including where
deployed_stripped and compiled_stripped are passed) to avoid confusion; locate
the bytecodes_match function and the call sites (symbols: bytecodes_match,
deployed_stripped, compiled_stripped, request.constructor_args) and apply the
chosen change consistently across the other occurrences mentioned.
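If the first option (actually using the args) is chosen, the core of it might look like the sketch below. The function names follow the comment, but the suffix-stripping rule is an assumption about how the verifier compares bytecode; ABI-encoded constructor args are commonly appended to creation bytecode:

```rust
/// Hypothetical helper: drop trailing constructor args (hex-encoded,
/// with or without a 0x prefix) from the deployed bytecode string
/// before comparing against the compiled output.
fn strip_constructor_args<'a>(deployed: &'a str, constructor_args: &str) -> &'a str {
    let args = constructor_args.trim_start_matches("0x");
    if !args.is_empty() && deployed.len() > args.len() && deployed.ends_with(args) {
        &deployed[..deployed.len() - args.len()]
    } else {
        deployed
    }
}

/// bytecodes_match would then use the args instead of ignoring them.
fn bytecodes_match(deployed: &str, compiled: &str, constructor_args: &str) -> bool {
    strip_constructor_args(deployed, constructor_args) == compiled
}
```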
backend/crates/atlas-api/src/handlers/etherscan.rs (4)

888-902: normalize_address and normalize_hash are duplicated with transactions.rs

The normalize_hash implementation here (lines 896-902) is byte-for-byte identical to the one visible in the transactions.rs snippet. normalize_address (lines 888-894) likely has the same duplication. Both should live in a shared utils (or helpers) module and be imported where needed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/crates/atlas-api/src/handlers/etherscan.rs` around lines 888 - 902,
Extract the duplicated functions normalize_address and normalize_hash into a
shared utility module (e.g., a utils or helpers module) and export them; then
remove the local definitions in this file and in transactions.rs and replace
them with imports (use crate::utils::normalize_address; use
crate::utils::normalize_hash; or equivalent). Ensure the utility functions keep
the same signatures (&str -> String) and behavior, update any module paths or
visibility (pub) so both etherscan.rs and transactions.rs compile, and run cargo
build/tests to verify no references remain to the removed local functions.

198-221: Extract business logic from contracts::verify_contract instead of calling the axum handler directly

Manually constructing axum::extract::State(state) and Json(request) to invoke a handler as a plain function is fragile: any future extractor added to contracts::verify_contract's signature (e.g., an Extension or TypedHeader) will silently break this call site without a compiler error at the definition. The right pattern is to move the core logic to a standalone async fn verify_contract_impl(state, request) -> ... that both the axum handler and verify_source_code_etherscan call.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/crates/atlas-api/src/handlers/etherscan.rs` around lines 198 - 221,
The call site should not construct axum extractors to invoke the handler
directly; instead extract the core business logic from
contracts::verify_contract into a new standalone async function (e.g.,
verify_contract_impl(state: SharedStateType, request: VerifyRequestType) ->
Result<VerifyResponseType, ErrorType>) and have the axum handler
contracts::verify_contract simply call that impl; update this etherscan handler
to call verify_contract_impl with the actual state inner type and the
deserialized request rather than axum::extract::State(state) and Json(request),
and adjust return handling to match the impl's Result/response types.
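The shape of that refactor, with hypothetical stand-in types in place of axum's real extractors and the project's actual state/request/response types (the real handlers are async; that is omitted here to keep the sketch dependency-free):

```rust
// Stand-ins for axum::extract::State and axum::Json (illustration only).
struct State<T>(T);
struct Json<T>(T);

// Hypothetical simplified state and request types.
struct AppState {
    rpc_url: String,
}
struct VerifyRequest {
    address: String,
}

// Core logic takes plain values, so any module can call it directly.
fn verify_contract_impl(state: &AppState, req: &VerifyRequest) -> Result<String, String> {
    Ok(format!("verifying {} via {}", req.address, state.rpc_url))
}

// The axum-style handler only unwraps extractors and delegates.
fn verify_contract(
    State(state): State<AppState>,
    Json(req): Json<VerifyRequest>,
) -> Result<String, String> {
    verify_contract_impl(&state, &req)
}

// The etherscan handler calls the impl without faking extractors.
fn verify_source_code_etherscan(state: &AppState, req: &VerifyRequest) -> Result<String, String> {
    verify_contract_impl(state, req)
}
```

With this split, adding an extractor to the axum handler later cannot silently change the semantics of the etherscan call path.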

861-868: hash is fetched from DB but never used

The query SELECT number, hash, timestamp FROM blocks retrieves the hash column, yet it is immediately shadowed as _hash in the destructuring pattern and not referenced anywhere in the response. Drop it from the SELECT to avoid the unnecessary data transfer.

♻️ Proposed fix
-    let block: Option<(i64, String, i64)> =
-        sqlx::query_as("SELECT number, hash, timestamp FROM blocks WHERE number = $1")
+    let block: Option<(i64, i64)> =
+        sqlx::query_as("SELECT number, timestamp FROM blocks WHERE number = $1")
            .bind(block_number)
            .fetch_optional(&state.pool)
            .await?;
 
     match block {
-        Some((number, _hash, timestamp)) => {
+        Some((number, timestamp)) => {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/crates/atlas-api/src/handlers/etherscan.rs` around lines 861 - 868,
The SELECT currently fetches (number, hash, timestamp) into an Option<(i64,
String, i64)> but the hash is unused (shadowed as _hash); remove the hash column
from the SQL, change the query_as tuple type to Option<(i64, i64)> (or
Option<(i64, TimestampType)> matching timestamp's Rust type), and update the
match destructuring from Some((number, _hash, timestamp)) to Some((number,
timestamp)) so only number and timestamp are selected and used.

312-317: Repeated provider construction — extract a helper

ProviderBuilder::new().on_http(state.rpc_url.parse()...) appears verbatim in three handler functions (handle_proxy_module, get_balance, get_balance_multi). Extract it to a small helper to reduce duplication and make future transport changes (e.g., adding a timeout or retry layer) a one-line edit.

♻️ Suggested helper
fn build_provider(
    rpc_url: &str,
) -> Result<impl Provider, AtlasError> {
    let url = rpc_url
        .parse()
        .map_err(|e| AtlasError::Config(format!("Invalid RPC URL: {}", e)))?;
    Ok(ProviderBuilder::new().on_http(url))
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/crates/atlas-api/src/handlers/etherscan.rs` around lines 312 - 317,
The three handlers (handle_proxy_module, get_balance, get_balance_multi)
duplicate ProviderBuilder::new().on_http(state.rpc_url.parse()...) — extract a
helper (e.g., build_provider) that takes &str (rpc_url), parses it returning
Result<ProviderBuilder or impl Provider, AtlasError> and encapsulates the parse
error mapping to AtlasError::Config; then replace the three inline constructions
with a call to build_provider(&state.rpc_url) so future transport layers
(timeout/retry) can be added in one place.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/crates/atlas-api/src/handlers/etherscan.rs`:
- Around line 70-73: The EtherscanQuery struct's field _apikey is missing the
serde rename so query parameter apikey won't deserialize into it; update the
EtherscanQuery definition to add #[serde(rename = "apikey")] on the _apikey
field (similar to the existing serde(rename) on _startblock/_endblock) so
incoming apikey=... query params populate the _apikey Option<String>.

In `@backend/crates/atlas-api/src/handlers/tokens.rs`:
- Around line 109-113: The current existence check using sqlx::query_as("SELECT
COUNT(*) ...").fetch_one(...) is ineffective because COUNT always returns a row;
instead query for the actual contract row and use fetch_optional to detect
absence: in tokens.rs replace the COUNT query with a SELECT of a real column
(e.g. id or address) from erc20_contracts bound to address and call
fetch_optional(&state.pool).await, then map None to
AtlasError::NotFound(format!("Token {} not found", address)) and proceed when
Some(row) is returned; update the surrounding code to use the retrieved row or
ignore it if only existence is needed.

In `@Justfile`:
- Line 36: The ci recipe currently lists frontend-lint and frontend-build but
omits frontend-install, causing just ci to fail on a clean checkout; update the
Justfile so the ci target's prerequisites include frontend-install (alongside
backend-fmt, backend-clippy, backend-test, frontend-lint, frontend-build) so
that the frontend dependencies are installed before running frontend-lint and
frontend-build.

In `@README.md`:
- Around line 31-36: The README currently places both commands ("just
backend-indexer" and "just backend-api") in one code block despite saying "run
in separate terminals"; split them into two separate bash code blocks, update
the prose to something like "Start backend services (each in its own terminal):"
and label them "Terminal 1" and "Terminal 2" so contributors copy/paste won't
run them sequentially in a single shell; locate the section containing the two
commands and replace the single block with two distinct blocks for the commands
"just backend-indexer" and "just backend-api".

---

Outside diff comments:
In `@backend/crates/atlas-api/src/handlers/etherscan.rs`:
- Around line 504-512: The current confirmations calculation uses
`current_block.0 - tx.block_number` which can go negative due to TOCTOU races;
update the mapping in the closure that builds `EtherscanTransaction` (the place
where `let confirmations = current_block.0 - tx.block_number;` is computed) to
use saturating subtraction to never produce negative values (e.g.,
`current_block.0.saturating_sub(tx.block_number)` or otherwise clamp to zero)
and apply the same change in the analogous computation inside
`get_token_tx_list` to ensure confirmations are always >= 0.
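A minimal version of the clamped computation (i64 mirrors the handler's column types; note that i64::saturating_sub saturates at i64::MIN rather than zero, so an explicit clamp is what actually guarantees a non-negative count):

```rust
/// Confirmations for a transaction, clamped so a stale tip read can
/// never produce a negative count.
fn confirmations(current_block: i64, tx_block_number: i64) -> i64 {
    (current_block - tx_block_number).max(0)
}
```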

In `@backend/crates/atlas-api/src/handlers/mod.rs`:
- Around line 21-57: get_table_count currently builds an SQL string with
format!("SELECT COUNT(*) FROM {}", table_name) which is unsafe for untrusted
input; ensure table_name is validated or documented as trusted by either: (a)
implement a whitelist check of allowed table names before calling format!
(validate in get_table_count), or (b) add a clear doc comment on get_table_count
stating it is internal-only and table_name must be a trusted constant, and
ideally assert/unwrap against a constant list (e.g., only "transactions") to
prevent SQL injection at runtime; update references to the function accordingly.
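Option (a) can be a few lines; the allowed-table list below is hypothetical:

```rust
/// Hypothetical whitelist of table names this function may count.
const ALLOWED_TABLES: &[&str] = &["transactions", "blocks", "logs"];

/// Build the COUNT query only for whitelisted names, so format! can
/// never interpolate attacker-controlled SQL.
fn table_count_sql(table_name: &str) -> Result<String, String> {
    if ALLOWED_TABLES.contains(&table_name) {
        Ok(format!("SELECT COUNT(*) FROM {}", table_name))
    } else {
        Err(format!("refusing to count unknown table {:?}", table_name))
    }
}
```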

In `@backend/crates/atlas-api/src/handlers/nfts.rs`:
- Around line 327-338: The resolve_ipfs_url function currently converts ipfs://
URIs by stripping the "ipfs://" prefix but does not handle the case where the
remaining string begins with "ipfs/", producing double "ipfs/ipfs" in the path;
update resolve_ipfs_url to, after uri.strip_prefix("ipfs://") returns stripped,
check if stripped starts_with "ipfs/" and if so trim that segment (e.g., drop
the leading "ipfs/") before formatting "https://ipfs.io/ipfs/{}", keeping the
existing ar:// handling intact.
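A sketch of the adjusted helper, covering only the ipfs:// branch (the ar:// handling would stay as-is); the gateway URL matches the one in the comment:

```rust
/// Resolve an ipfs:// URI to a gateway URL, tolerating URIs that
/// already contain a leading "ipfs/" path segment so the result is
/// never ".../ipfs/ipfs/<cid>".
fn resolve_ipfs_url(uri: &str) -> String {
    if let Some(stripped) = uri.strip_prefix("ipfs://") {
        let path = stripped.strip_prefix("ipfs/").unwrap_or(stripped);
        format!("https://ipfs.io/ipfs/{}", path)
    } else {
        uri.to_string()
    }
}
```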

In `@backend/crates/atlas-api/src/handlers/transactions.rs`:
- Around line 40-59: Normalize the incoming transaction hash to the canonical
form used elsewhere by calling the existing normalize_hash helper instead of
only adding "0x": replace the inline normalization in get_transaction with a
call like let hash = normalize_hash(&hash); so the hash has the 0x prefix and is
lowercased, then bind that normalized hash when querying the transactions table;
ensure you import or reference normalize_hash consistent with other handlers.

In `@frontend/src/pages/BlocksPage.tsx`:
- Around line 97-116: The RAF cancellation can leave orphaned timeouts because
seenBlocksRef.add and scheduling of timeouts occur outside the
requestAnimationFrame callback; move the side-effects that mark blocks as seen
and schedule their removal timeouts into the highlightRafRef.current callback
that calls setHighlightBlocks so those actions only run when the RAF actually
fires. Concretely, update the block where newlyAdded is processed so the
requestAnimationFrame callback performs seenBlocksRef.current.add(n) and
timeoutsRef.current.set(n, timeout) (and the timeout's setHighlightBlocks
removal) for each n, and if you prefer the simpler compromise keep only a quick
seenBlocksRef.current.add(n) outside to dedupe duplicates while moving the
timeout creation inside the RAF; reference highlightRafRef, seenBlocksRef,
timeoutsRef, setHighlightBlocks and newlyAdded when locating code to change.

---

Duplicate comments:
In `@frontend/src/pages/AddressesPage.tsx`:
- Around line 48-63: The RAF race causes seenRef.current.add and timersRef
scheduling to run before the highlight actually gets applied (so if
highlightRafRef is cancelled the addresses are marked seen but never
highlighted); move the seenRef.current.add(...) loop and
timersRef.current.set(...) scheduling inside the requestAnimationFrame callback
that calls setHighlight (i.e., co-locate the for (const h of newOnes) { ... }
logic into the highlightRafRef.current = requestAnimationFrame(() => { ... })
block), preserve the existing cancelAnimationFrame behavior with
highlightRafRef, and ensure timersRef.current.delete(h) and timeout creation
happen only after the highlight has been added so cancelled RAFs won’t leave
stale seen/timers for functions/refs: highlightRafRef, setHighlight, seenRef,
timersRef.

---

Nitpick comments:

In `@frontend/src/api/transactions.ts`:
- Around line 40-47: Add a small response-normalization helper (e.g.,
normalizeArrayResponse<T>(body: unknown): T[]) that encapsulates the existing
“if Array.isArray(body) return body; if object with 'data' and data is array
return data; otherwise return []” logic, keep the generic type so callers
maintain TxErc20Transfer typing, and use it from getTxErc20Transfers (replace
the inline checks) and the other transfer fetcher(s) that duplicate this logic
so both call normalizeArrayResponse<TxErc20Transfer>(response.data) and return
the resulting array.


ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5864a0d and 143b087.

📒 Files selected for processing (48)
  • .github/workflows/ci.yml
  • Justfile
  • README.md
  • backend/crates/atlas-api/src/error.rs
  • backend/crates/atlas-api/src/handlers/addresses.rs
  • backend/crates/atlas-api/src/handlers/blocks.rs
  • backend/crates/atlas-api/src/handlers/contracts.rs
  • backend/crates/atlas-api/src/handlers/etherscan.rs
  • backend/crates/atlas-api/src/handlers/labels.rs
  • backend/crates/atlas-api/src/handlers/logs.rs
  • backend/crates/atlas-api/src/handlers/mod.rs
  • backend/crates/atlas-api/src/handlers/nfts.rs
  • backend/crates/atlas-api/src/handlers/proxy.rs
  • backend/crates/atlas-api/src/handlers/search.rs
  • backend/crates/atlas-api/src/handlers/status.rs
  • backend/crates/atlas-api/src/handlers/tokens.rs
  • backend/crates/atlas-api/src/handlers/transactions.rs
  • backend/crates/atlas-api/src/main.rs
  • backend/crates/atlas-common/src/lib.rs
  • backend/crates/atlas-common/src/types.rs
  • backend/crates/atlas-indexer/src/batch.rs
  • backend/crates/atlas-indexer/src/config.rs
  • backend/crates/atlas-indexer/src/copy.rs
  • backend/crates/atlas-indexer/src/fetcher.rs
  • backend/crates/atlas-indexer/src/indexer.rs
  • backend/crates/atlas-indexer/src/main.rs
  • backend/crates/atlas-indexer/src/metadata.rs
  • frontend/src/App.tsx
  • frontend/src/api/addresses.ts
  • frontend/src/api/transactions.ts
  • frontend/src/components/ImageIpfs.tsx
  • frontend/src/components/Layout.tsx
  • frontend/src/components/SearchBar.tsx
  • frontend/src/components/SmoothCounter.tsx
  • frontend/src/hooks/index.ts
  • frontend/src/hooks/useEthBalance.ts
  • frontend/src/hooks/useEthPrice.ts
  • frontend/src/hooks/useLatestBlockHeight.ts
  • frontend/src/hooks/useStats.ts
  • frontend/src/pages/AddressPage.tsx
  • frontend/src/pages/AddressesPage.tsx
  • frontend/src/pages/BlockDetailPage.tsx
  • frontend/src/pages/BlocksPage.tsx
  • frontend/src/pages/NFTsPage.tsx
  • frontend/src/pages/SearchResultsPage.tsx
  • frontend/src/pages/TokensPage.tsx
  • frontend/src/pages/TransactionDetailPage.tsx
  • frontend/src/pages/TransactionsPage.tsx
💤 Files with no reviewable changes (2)
  • frontend/src/components/ImageIpfs.tsx
  • frontend/src/components/SmoothCounter.tsx

@tac0turtle tac0turtle merged commit 5afc683 into main Feb 24, 2026
2 of 3 checks passed
@tac0turtle tac0turtle deleted the marko/ci_fixes branch February 24, 2026 11:14
@coderabbitai coderabbitai bot mentioned this pull request Feb 24, 2026