
Add local db table schemas and insert command generation #2101

Merged
findolor merged 12 commits into main from local-db-insert-commands
Oct 2, 2025

Conversation

Collaborator

@findolor findolor commented Aug 27, 2025

Caution

Chained PR - Do not merge before #2100

Motivation

See issues #2071 and #2072

After fetching and decoding orderbook contract event data, we need a way to generate the SQL insert commands for the SQLite database. This PR implements that for the decoded events.

Solution

Checks

By submitting this for review, I'm confirming I've done the following:

  • made this PR as small as possible
  • unit-tested any new functionality
  • linked any relevant issues or PRs
  • included screenshots (if this involves a front-end change)

fix #2071
fix #2072

Summary by CodeRabbit

  • New Features

    • Introduces SQLite support for raindex web data, including tables for deposits, withdrawals, order events, order IOs, take orders, clear/after-clear events, and meta events.
    • Adds comprehensive indexing to speed up common queries (by block, owner, order hash, vaults, tokens, and subjects).
    • Enables transactional generation of SQL from decoded events to populate the database consistently.
  • Chores

    • Exposes the new SQLite insert capability via the module surface.
    • Cleans up an obsolete error type variant.

@findolor findolor self-assigned this Aug 27, 2025
Contributor

coderabbitai Bot commented Aug 27, 2025

Walkthrough

Adds a new SQLite insert pipeline that converts decoded events into SQL statements and transaction batches, introduces a public InsertError and decoded_events_to_sql API, exposes the insert module, defines the SQLite schema for all related tables and indexes, and removes the OrderHashComputation variant from DecodeError.
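The transaction assembly described in this walkthrough can be sketched roughly as follows; the function name and signature are illustrative, not the PR's actual code:

```rust
// Hedged sketch of the transaction assembly described above; the function
// name and signature are hypothetical, not code from this PR.
fn wrap_in_transaction(statements: &[String], end_block: u64) -> String {
    let mut sql = String::from("BEGIN TRANSACTION;\n");
    for stmt in statements {
        sql.push_str(stmt); // each per-event generator emits `INSERT …;\n` strings
    }
    // Record sync progress inside the same transaction so the inserts and the
    // watermark commit (or roll back) together.
    sql.push_str(&format!(
        "UPDATE sync_status SET last_synced_block = {end_block} WHERE id = 1;\n"
    ));
    sql.push_str("COMMIT;\n");
    sql
}

fn main() {
    let stmts = vec!["INSERT INTO deposits VALUES (1, 2);\n".to_string()];
    let sql = wrap_in_transaction(&stmts, 42);
    assert!(sql.starts_with("BEGIN TRANSACTION;"));
    assert!(sql.contains("last_synced_block = 42"));
    assert!(sql.trim_end().ends_with("COMMIT;"));
}
```

Batching everything into one transaction string is what makes the "populate the database consistently" claim hold: either every event's rows land along with the sync watermark, or none do.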

Changes

  • SQLite Web Insert Pipeline — crates/common/src/raindex_client/sqlite_web/insert.rs, crates/common/src/raindex_client/sqlite_web/mod.rs: Introduces the decoded_events_to_sql entry point with per-event SQL generators, transaction assembly, and the InsertError enum. Exposes the module via pub mod insert.
  • Decode Error Cleanup — crates/common/src/raindex_client/sqlite_web/decode.rs: Removes the DecodeError::OrderHashComputation(String) variant.
  • SQLite Schema — crates/common/src/raindex_client/sqlite_web/sql/tables.sql: Adds tables for deposits, withdrawals, order events/IOs, take orders and contexts, clear_v3, after_clear_v2, and meta, plus related indexes.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor Caller
  participant Insert as sqlite_web::insert
  participant Gen as Per-Event Generators
  participant SQL as SQL Buffer

  Caller->>Insert: decoded_events_to_sql(events, end_block)
  Insert->>Insert: Build EventContext per event
  loop For each event
    Insert->>Gen: Dispatch generate_*_sql(event, ctx)
    Gen->>Gen: Compute order hashes / convert fields
    Gen->>SQL: Append INSERTs (+ child rows)
  end
  Insert->>SQL: Append sync_status update + COMMIT
  Insert-->>Caller: Ok(SQL string) or Err(InsertError)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested reviewers

  • 0xgleb
  • hardyjosh

Pre-merge checks and finishing touches

❌ Failed checks (3 warnings)
  • Linked Issues Check — ⚠️ Warning: The PR fully implements the table definitions and schema additions required by issue #2071 but does not address the event-fetching query logic described in issue #2072, so not all primary objectives from the linked issues are met. Resolution: implement or reference the event-fetching query logic required by issue #2072, or split the insert command generation into its own PR and address the query logic in a follow-up.
  • Out of Scope Changes Check — ⚠️ Warning: The removal of the OrderHashComputation variant from DecodeError in decode.rs is unrelated to the local database schema additions and insert command generation, making it an out-of-scope change. Resolution: extract the variant removal into a separate follow-up PR, or restore the variant to keep this PR focused on the schema and insert command generation.
  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 21.43%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Description Check — ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: The title accurately reflects the main changes, highlighting the addition of local database table schemas and insert command generation, matching the changeset without extraneous detail or noise.

@findolor findolor changed the title add table schemas Add local db table schemas and insert command generation Aug 28, 2025
Base automatically changed from hyperrpc-decode to main October 2, 2025 08:52
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 7

📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3e11215 and c85cd0c.

📒 Files selected for processing (4)
  • crates/common/src/raindex_client/sqlite_web/decode.rs (0 hunks)
  • crates/common/src/raindex_client/sqlite_web/insert.rs (1 hunks)
  • crates/common/src/raindex_client/sqlite_web/mod.rs (1 hunks)
  • crates/common/src/raindex_client/sqlite_web/sql/tables.sql (1 hunks)
💤 Files with no reviewable changes (1)
  • crates/common/src/raindex_client/sqlite_web/decode.rs
🧰 Additional context used
📓 Path-based instructions (1)
crates/**/{src,tests,benches,examples}/**/*.rs

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

crates/**/{src,tests,benches,examples}/**/*.rs: Rust crates: lint with cargo clippy --workspace --all-targets --all-features -D warnings
Rust crates: ensure cargo test --workspace passes
Rust crates: build with cargo build during development/readiness checks

Files:

  • crates/common/src/raindex_client/sqlite_web/mod.rs
  • crates/common/src/raindex_client/sqlite_web/insert.rs
🧬 Code graph analysis (1)
crates/common/src/raindex_client/sqlite_web/insert.rs (2)
crates/common/src/raindex_client/sqlite_web/decode.rs (2)
  • U256 (225-225)
  • event (169-178)
subgraph/tests/event-mocks.test.ts (1)
  • IOV2 (115-123)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
  • GitHub Check: standard-tests (ubuntu-latest, test-js-bindings)
  • GitHub Check: build-tauri (ubuntu-22.04, true)
  • GitHub Check: standard-tests (ubuntu-latest, rainix-wasm-test)
  • GitHub Check: standard-tests (ubuntu-latest, rainix-sol-artifacts)
  • GitHub Check: standard-tests (ubuntu-latest, rainix-rs-static)
  • GitHub Check: standard-tests (ubuntu-latest, rainix-rs-artifacts, true)
  • GitHub Check: Deploy-Docs-Preview
  • GitHub Check: Deploy-Preview-Push
  • GitHub Check: copilot-setup-steps
🔇 Additional comments (5)
crates/common/src/raindex_client/sqlite_web/mod.rs (1)

3-3: Expose module looks good

pub mod insert; correctly surfaces the new API. No further changes needed here.

crates/common/src/raindex_client/sqlite_web/insert.rs (3)

73-82: Guard against SQLite INTEGER range

SQLite INTEGER is signed 64-bit. If future block_number/log_index exceed i64::MAX, writes may misbehave. Add a range check or store as TEXT.

Would you like a small guard that errors when value > i64::MAX?
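Such a guard could look roughly like this; `checked_sqlite_int` is a hypothetical helper, not code from the PR:

```rust
// Hedged sketch of the suggested guard: SQLite INTEGER is a signed 64-bit
// value, so u64 inputs above i64::MAX must be rejected (or stored as TEXT).
// `checked_sqlite_int` is a hypothetical name, not the PR's actual code.
fn checked_sqlite_int(value: u64, field: &str) -> Result<i64, String> {
    i64::try_from(value)
        .map_err(|_| format!("{field} {value} exceeds SQLite INTEGER range (i64::MAX)"))
}

fn main() {
    assert_eq!(checked_sqlite_int(42, "block_number"), Ok(42));
    assert!(checked_sqlite_int(u64::MAX, "log_index").is_err());
}
```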


384-417: Order IOs generation is solid

Correctly splits inputs/outputs, resets indices per io_type, and batches into a single INSERT.


419-424: Hex parsing helper is fine

Handles 0x-optional prefix and errors cleanly.

crates/common/src/raindex_client/sqlite_web/sql/tables.sql (1)

121-130: Keep BLOB for meta; ensure insertion uses X'…'

Schema is fine as BLOB. After adjusting the insert to X'…', this will be consistent.
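A minimal sketch of rendering bytes as an SQLite BLOB literal (X'…') rather than a quoted '0x…' TEXT value; `blob_literal` is illustrative, not the PR's code:

```rust
// Hedged sketch: format bytes as an SQLite BLOB literal X'…' so the BLOB
// column receives binary data, not a '0x…' TEXT string. Illustrative only.
fn blob_literal(bytes: &[u8]) -> String {
    let hex: String = bytes.iter().map(|b| format!("{b:02x}")).collect();
    format!("X'{hex}'")
}

fn main() {
    assert_eq!(blob_literal(&[0xde, 0xad, 0xbe, 0xef]), "X'deadbeef'");
    assert_eq!(blob_literal(&[]), "X''"); // an empty blob is a valid literal
}
```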

Comment on lines +125 to +131
DecodedEvent::Unknown(decoded) => {
eprintln!(
"Warning: Unknown event type for transaction {}: {}",
event.transaction_hash, decoded.note
);
}
}

🧹 Nitpick | 🔵 Trivial

Use structured logging instead of eprintln

Replace eprintln! with tracing/log for production observability and tests.

-                eprintln!(
+                tracing::warn!(
                     "Warning: Unknown event type for transaction {}: {}",
                     event.transaction_hash, decoded.note
                 );
🤖 Prompt for AI Agents
In crates/common/src/raindex_client/sqlite_web/insert.rs around lines 125 to
131, replace the eprintln! call in the DecodedEvent::Unknown arm with structured
logging (e.g., tracing::warn! or log::warn!) and include structured fields for
easier observability: pass the transaction hash and the decoded.note as named
fields instead of formatting into a string. Ensure the tracing (or log) crate is
in scope (add use tracing::warn; or use log::warn;) and that the crate
features/dependencies and tests are updated to initialize the logger/tracing
subscriber where needed so tests still run.

Comment on lines +134 to +140
sql.push_str(&format!(
"\nUPDATE sync_status SET last_synced_block = {}, updated_at = CURRENT_TIMESTAMP WHERE id = 1;\n",
end_block
));

sql.push_str("\nCOMMIT;\n");


⚠️ Potential issue | 🟠 Major

sync_status update will fail if table/row doesn’t exist; use UPSERT

The UPDATE assumes a pre-existing row id=1. Prefer an UPSERT to initialize and update atomically.

Apply this diff:

-    sql.push_str(&format!(
-        "\nUPDATE sync_status SET last_synced_block = {}, updated_at = CURRENT_TIMESTAMP WHERE id = 1;\n",
-        end_block
-    ));
+    sql.push_str(&format!(
+        "\nINSERT INTO sync_status (id, last_synced_block, updated_at) VALUES (1, {}, CURRENT_TIMESTAMP)\n\
+ON CONFLICT(id) DO UPDATE SET last_synced_block = {}, updated_at = CURRENT_TIMESTAMP;\n",
+        end_block, end_block
+    ));
🤖 Prompt for AI Agents
In crates/common/src/raindex_client/sqlite_web/insert.rs around lines 134 to
140, the code issues an UPDATE on sync_status row id=1 which will fail if the
table or row doesn't exist; replace the UPDATE with an UPSERT that atomically
creates or updates the row. Concretely, remove the UPDATE statement and append
an INSERT ... ON CONFLICT(id) DO UPDATE (or SQLite's INSERT OR REPLACE/INSERT
... ON CONFLICT) that sets id=1, last_synced_block to end_block and updated_at
to CURRENT_TIMESTAMP so the row is initialized when missing and updated
otherwise while remaining inside the same transaction.

Comment on lines +232 to +285
fn generate_take_order_sql(
context: &EventContext<'_>,
decoded: &TakeOrderV3,
) -> Result<String, InsertError> {
let input_io_index_u64 = u256_to_u64(&decoded.config.inputIOIndex, "inputIOIndex")?;
let output_io_index_u64 = u256_to_u64(&decoded.config.outputIOIndex, "outputIOIndex")?;

let mut sql = String::new();

sql.push_str(&format!(
"INSERT INTO take_orders (block_number, block_timestamp, transaction_hash, log_index, sender, order_owner, order_nonce, input_io_index, output_io_index, input, output) VALUES ({}, {}, '{}', {}, '{}', '{}', '{}', {}, {}, '{}', '{}');\n",
context.block_number,
context.block_timestamp,
context.transaction_hash,
context.log_index,
hex::encode_prefixed(decoded.sender),
hex::encode_prefixed(decoded.config.order.owner),
hex::encode_prefixed(decoded.config.order.nonce),
input_io_index_u64,
output_io_index_u64,
hex::encode_prefixed(decoded.input),
hex::encode_prefixed(decoded.output)
));

for (context_index, signed_context) in decoded.config.signedContext.iter().enumerate() {
let context_value = format!(
"signer:{},signature:{}",
hex::encode_prefixed(signed_context.signer),
hex::encode_prefixed(&signed_context.signature)
);

sql.push_str(&format!(
"INSERT INTO take_order_contexts (transaction_hash, log_index, context_index, context_value) VALUES ('{}', {}, {}, '{}');\n",
context.transaction_hash,
context.log_index,
context_index,
context_value
));

for (value_index, value) in signed_context.context.iter().enumerate() {
sql.push_str(&format!(
"INSERT INTO context_values (transaction_hash, log_index, context_index, value_index, value) VALUES ('{}', {}, {}, {}, '{}');\n",
context.transaction_hash,
context.log_index,
context_index,
value_index,
hex::encode_prefixed(value)
));
}
}

Ok(sql)
}


🧹 Nitpick | 🔵 Trivial

Context serialization: consider first-class columns for signer/signature

Packing signer/signature into a single TEXT forces downstream parsing. Adding columns (signer, signature) to take_order_contexts would be cleaner and indexable.

🤖 Prompt for AI Agents
In crates/common/src/raindex_client/sqlite_web/insert.rs around lines 232 to
285, the current code packs signer and signature into a single TEXT
context_value which forces downstream parsing; change the INSERT into
take_order_contexts to use two first-class columns (signer, signature) instead
of a single context_value, inserting hex::encode_prefixed(signed_context.signer)
and hex::encode_prefixed(&signed_context.signature) into those columns, remove
the combined context_value insertion, update any schema/migration to add signer
and signature columns (and drop/mark context_value as deprecated if desired),
and update any downstream consumers/tests to read the new columns; keep the
existing loop for signed_context.context values unchanged but ensure
transaction_hash/log_index/context_index semantics remain the same.
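A revised table along these lines might look like the following sketch (not the PR's actual schema; the index name is hypothetical):

```sql
-- Sketch only: take_order_contexts with first-class, indexable signer and
-- signature columns instead of a packed context_value string.
CREATE TABLE take_order_contexts (
    transaction_hash TEXT NOT NULL,
    log_index INTEGER NOT NULL,
    context_index INTEGER NOT NULL,
    signer TEXT NOT NULL,
    signature TEXT NOT NULL,
    PRIMARY KEY (transaction_hash, log_index, context_index),
    FOREIGN KEY (transaction_hash, log_index) REFERENCES take_orders(transaction_hash, log_index)
);
CREATE INDEX idx_take_order_contexts_signer ON take_order_contexts(signer);
```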

Comment on lines +368 to +382
fn generate_meta_sql(
context: &EventContext<'_>,
decoded: &MetaV1_2,
) -> Result<String, InsertError> {
Ok(format!(
"INSERT INTO meta_events (block_number, block_timestamp, transaction_hash, log_index, sender, subject, meta) VALUES ({}, {}, '{}', {}, '{}', '{}', '{}');\n",
context.block_number,
context.block_timestamp,
context.transaction_hash,
context.log_index,
hex::encode_prefixed(decoded.sender),
hex::encode_prefixed(decoded.subject),
hex::encode_prefixed(&decoded.meta)
))
}

⚠️ Potential issue | 🟠 Major

Insert BLOB correctly (use X'…' literal), not a quoted 0x hex string

meta_events.meta is BLOB, but the current SQL inserts TEXT ('0x…'). Use an SQLite BLOB literal.

Apply this diff:

-    Ok(format!(
-        "INSERT INTO meta_events (block_number, block_timestamp, transaction_hash, log_index, sender, subject, meta) VALUES ({}, {}, '{}', {}, '{}', '{}', '{}');\n",
-        context.block_number,
-        context.block_timestamp,
-        context.transaction_hash,
-        context.log_index,
-        hex::encode_prefixed(decoded.sender),
-        hex::encode_prefixed(decoded.subject),
-        hex::encode_prefixed(&decoded.meta)
-    ))
+    let meta_hex = hex::encode(&decoded.meta); // no 0x prefix
+    Ok(format!(
+        "INSERT INTO meta_events (block_number, block_timestamp, transaction_hash, log_index, sender, subject, meta) VALUES ({}, {}, '{}', {}, '{}', '{}', X'{}');\n",
+        context.block_number,
+        context.block_timestamp,
+        context.transaction_hash,
+        context.log_index,
+        hex::encode_prefixed(decoded.sender),
+        hex::encode_prefixed(decoded.subject),
+        meta_hex
+    ))
🤖 Prompt for AI Agents
In crates/common/src/raindex_client/sqlite_web/insert.rs around lines 368 to
382, meta_events.meta is a BLOB but the SQL currently inserts it as a quoted 0x
hex string; change the INSERT to use an SQLite BLOB literal X'hex' instead of a
quoted '0x…' value. Concretely, stop using hex::encode_prefixed(...) for
decoded.meta (which yields a "0x" prefix) and instead use
hex::encode(&decoded.meta) and embed it in the SQL as X'{hex}' (no surrounding
single quotes) so the meta column receives a true BLOB literal; update the
format string to remove the single quotes around the meta placeholder and wrap
that placeholder with X'{}' while leaving sender/subject handling unchanged.

Comment on lines +1 to +156
CREATE TABLE deposits (
transaction_hash TEXT NOT NULL,
log_index INTEGER NOT NULL,
block_number INTEGER NOT NULL,
block_timestamp INTEGER NOT NULL,
sender TEXT NOT NULL,
token TEXT NOT NULL,
vault_id TEXT NOT NULL,
deposit_amount_uint256 TEXT NOT NULL,
PRIMARY KEY (transaction_hash, log_index)
);

CREATE TABLE withdrawals (
transaction_hash TEXT NOT NULL,
log_index INTEGER NOT NULL,
block_number INTEGER NOT NULL,
block_timestamp INTEGER NOT NULL,
sender TEXT NOT NULL,
token TEXT NOT NULL,
vault_id TEXT NOT NULL,
target_amount TEXT NOT NULL,
withdraw_amount TEXT NOT NULL,
withdraw_amount_uint256 TEXT NOT NULL,
PRIMARY KEY (transaction_hash, log_index)
);

CREATE TABLE order_events (
transaction_hash TEXT NOT NULL,
log_index INTEGER NOT NULL,
block_number INTEGER NOT NULL,
block_timestamp INTEGER NOT NULL,
sender TEXT NOT NULL,
order_hash TEXT NOT NULL,
event_type TEXT NOT NULL,
order_owner TEXT NOT NULL,
order_nonce TEXT NOT NULL,
PRIMARY KEY (transaction_hash, log_index)
);

CREATE TABLE order_ios (
transaction_hash TEXT NOT NULL,
log_index INTEGER NOT NULL,
io_index INTEGER NOT NULL,
io_type TEXT NOT NULL,
token TEXT NOT NULL,
vault_id TEXT NOT NULL,
PRIMARY KEY (transaction_hash, log_index, io_index, io_type),
FOREIGN KEY (transaction_hash, log_index) REFERENCES order_events(transaction_hash, log_index)
);

CREATE TABLE take_orders (
transaction_hash TEXT NOT NULL,
log_index INTEGER NOT NULL,
block_number INTEGER NOT NULL,
block_timestamp INTEGER NOT NULL,
sender TEXT NOT NULL,
order_owner TEXT NOT NULL,
order_nonce TEXT NOT NULL,
input_io_index INTEGER NOT NULL,
output_io_index INTEGER NOT NULL,
input TEXT NOT NULL,
output TEXT NOT NULL,
PRIMARY KEY (transaction_hash, log_index)
);

CREATE TABLE take_order_contexts (
transaction_hash TEXT NOT NULL,
log_index INTEGER NOT NULL,
context_index INTEGER NOT NULL,
context_value TEXT NOT NULL,
PRIMARY KEY (transaction_hash, log_index, context_index),
FOREIGN KEY (transaction_hash, log_index) REFERENCES take_orders(transaction_hash, log_index)
);

CREATE TABLE context_values (
transaction_hash TEXT NOT NULL,
log_index INTEGER NOT NULL,
context_index INTEGER NOT NULL,
value_index INTEGER NOT NULL,
value TEXT NOT NULL,
PRIMARY KEY (transaction_hash, log_index, context_index, value_index),
FOREIGN KEY (transaction_hash, log_index, context_index) REFERENCES take_order_contexts(transaction_hash, log_index, context_index)
);

CREATE TABLE clear_v3_events (
transaction_hash TEXT NOT NULL,
log_index INTEGER NOT NULL,
block_number INTEGER NOT NULL,
block_timestamp INTEGER NOT NULL,
sender TEXT NOT NULL,
alice_order_hash TEXT NOT NULL,
alice_order_owner TEXT NOT NULL,
alice_input_io_index INTEGER NOT NULL,
alice_output_io_index INTEGER NOT NULL,
alice_bounty_vault_id TEXT NOT NULL,
alice_input_vault_id TEXT NOT NULL,
alice_output_vault_id TEXT NOT NULL,
bob_order_hash TEXT NOT NULL,
bob_order_owner TEXT NOT NULL,
bob_input_io_index INTEGER NOT NULL,
bob_output_io_index INTEGER NOT NULL,
bob_bounty_vault_id TEXT NOT NULL,
bob_input_vault_id TEXT NOT NULL,
bob_output_vault_id TEXT NOT NULL,
PRIMARY KEY (transaction_hash, log_index)
);

CREATE TABLE after_clear_v2_events (
transaction_hash TEXT NOT NULL,
log_index INTEGER NOT NULL,
block_number INTEGER NOT NULL,
block_timestamp INTEGER NOT NULL,
sender TEXT NOT NULL,
alice_output TEXT NOT NULL,
bob_output TEXT NOT NULL,
alice_input TEXT NOT NULL,
bob_input TEXT NOT NULL,
PRIMARY KEY (transaction_hash, log_index)
);

CREATE TABLE meta_events (
transaction_hash TEXT NOT NULL,
log_index INTEGER NOT NULL,
block_number INTEGER NOT NULL,
block_timestamp INTEGER NOT NULL,
sender TEXT NOT NULL,
subject TEXT NOT NULL,
meta BLOB NOT NULL,
PRIMARY KEY (transaction_hash, log_index)
);
CREATE INDEX idx_deposits_vault ON deposits(sender, token, vault_id);
CREATE INDEX idx_deposits_block ON deposits(block_number);
CREATE INDEX idx_deposits_token ON deposits(token);

CREATE INDEX idx_withdrawals_vault ON withdrawals(sender, token, vault_id);
CREATE INDEX idx_withdrawals_block ON withdrawals(block_number);
CREATE INDEX idx_withdrawals_token ON withdrawals(token);

CREATE INDEX idx_order_events_hash ON order_events(order_hash);
CREATE INDEX idx_order_events_owner ON order_events(order_owner);
CREATE INDEX idx_order_events_block ON order_events(block_number);

CREATE INDEX idx_order_ios_token ON order_ios(token);

CREATE INDEX idx_take_orders_owner ON take_orders(order_owner);
CREATE INDEX idx_take_orders_block ON take_orders(block_number);

CREATE INDEX idx_clear_events_alice_bob ON clear_v3_events(alice_order_hash, bob_order_hash);
CREATE INDEX idx_clear_events_block ON clear_v3_events(block_number);
CREATE INDEX idx_clear_alice_vaults ON clear_v3_events(alice_input_vault_id, alice_output_vault_id);
CREATE INDEX idx_clear_bob_vaults ON clear_v3_events(bob_input_vault_id, bob_output_vault_id);

CREATE INDEX idx_after_clear_block ON after_clear_v2_events(block_number);

CREATE INDEX idx_meta_subject ON meta_events(subject);
CREATE INDEX idx_meta_block ON meta_events(block_number);

⚠️ Potential issue | 🟠 Major

Add sync_status table to match generated SQL

decoded_events_to_sql updates sync_status, but schema lacks it. Add a small table with singleton id=1.

Append this near the top:

+CREATE TABLE IF NOT EXISTS sync_status (
+    id INTEGER PRIMARY KEY CHECK (id = 1),
+    last_synced_block INTEGER NOT NULL DEFAULT 0,
+    updated_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
+);
bob_input_vault_id TEXT NOT NULL,
bob_output_vault_id TEXT NOT NULL,
PRIMARY KEY (transaction_hash, log_index)
);
CREATE TABLE after_clear_v2_events (
transaction_hash TEXT NOT NULL,
log_index INTEGER NOT NULL,
block_number INTEGER NOT NULL,
block_timestamp INTEGER NOT NULL,
sender TEXT NOT NULL,
alice_output TEXT NOT NULL,
bob_output TEXT NOT NULL,
alice_input TEXT NOT NULL,
bob_input TEXT NOT NULL,
PRIMARY KEY (transaction_hash, log_index)
);
CREATE TABLE meta_events (
transaction_hash TEXT NOT NULL,
log_index INTEGER NOT NULL,
block_number INTEGER NOT NULL,
block_timestamp INTEGER NOT NULL,
sender TEXT NOT NULL,
subject TEXT NOT NULL,
meta BLOB NOT NULL,
PRIMARY KEY (transaction_hash, log_index)
);
CREATE INDEX idx_deposits_vault ON deposits(sender, token, vault_id);
CREATE INDEX idx_deposits_block ON deposits(block_number);
CREATE INDEX idx_deposits_token ON deposits(token);
CREATE INDEX idx_withdrawals_vault ON withdrawals(sender, token, vault_id);
CREATE INDEX idx_withdrawals_block ON withdrawals(block_number);
CREATE INDEX idx_withdrawals_token ON withdrawals(token);
CREATE INDEX idx_order_events_hash ON order_events(order_hash);
CREATE INDEX idx_order_events_owner ON order_events(order_owner);
CREATE INDEX idx_order_events_block ON order_events(block_number);
CREATE INDEX idx_order_ios_token ON order_ios(token);
CREATE INDEX idx_take_orders_owner ON take_orders(order_owner);
CREATE INDEX idx_take_orders_block ON take_orders(block_number);
CREATE INDEX idx_clear_events_alice_bob ON clear_v3_events(alice_order_hash, bob_order_hash);
CREATE INDEX idx_clear_events_block ON clear_v3_events(block_number);
CREATE INDEX idx_clear_alice_vaults ON clear_v3_events(alice_input_vault_id, alice_output_vault_id);
CREATE INDEX idx_clear_bob_vaults ON clear_v3_events(bob_input_vault_id, bob_output_vault_id);
CREATE INDEX idx_after_clear_block ON after_clear_v2_events(block_number);
CREATE INDEX idx_meta_subject ON meta_events(subject);
CREATE INDEX idx_meta_block ON meta_events(block_number);
🤖 Prompt for AI Agents
In crates/common/src/raindex_client/sqlite_web/sql/tables.sql around lines 1 to
156, the schema is missing the sync_status table referenced by
decoded_events_to_sql; add a small singleton table named sync_status (with id
constrained to 1 as the primary key and columns to store the last synced block
number and a timestamp/updated_at) near the top of the file so the application
can read/update sync progress; ensure id is defined as INTEGER PRIMARY KEY with
a CHECK(id=1) (or equivalent) to enforce the single-row invariant and include
NOT NULL on the status fields.
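The single-row invariant described above can be exercised directly with Python's built-in `sqlite3` module. This is an illustrative sketch, not code from the PR; the table definition is copied from the schema in this review, while the upsert statement and the sample block number `12345` are assumptions for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE IF NOT EXISTS sync_status (
    id INTEGER PRIMARY KEY CHECK (id = 1),
    last_synced_block INTEGER NOT NULL DEFAULT 0,
    updated_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
);
""")

# Upsert pattern: always target the singleton row with id = 1.
conn.execute(
    "INSERT INTO sync_status (id, last_synced_block) VALUES (1, ?) "
    "ON CONFLICT(id) DO UPDATE SET "
    "last_synced_block = excluded.last_synced_block, "
    "updated_at = CURRENT_TIMESTAMP",
    (12345,),
)

# The CHECK (id = 1) rejects any attempt to create a second row.
try:
    conn.execute("INSERT INTO sync_status (id, last_synced_block) VALUES (2, 99)")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```

Note the `ON CONFLICT ... DO UPDATE` clause requires SQLite 3.24+, which ships with any recent Python.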

Comment on lines +44 to +49
io_type TEXT NOT NULL,
token TEXT NOT NULL,
vault_id TEXT NOT NULL,
PRIMARY KEY (transaction_hash, log_index, io_index, io_type),
FOREIGN KEY (transaction_hash, log_index) REFERENCES order_events(transaction_hash, log_index)
);
Contributor

🧹 Nitpick | 🔵 Trivial

Constrain io_type values

Add a CHECK to ensure only 'input' or 'output' are stored.

-    io_type TEXT NOT NULL,
+    io_type TEXT NOT NULL CHECK (io_type IN ('input','output')),
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
io_type TEXT NOT NULL,
token TEXT NOT NULL,
vault_id TEXT NOT NULL,
PRIMARY KEY (transaction_hash, log_index, io_index, io_type),
FOREIGN KEY (transaction_hash, log_index) REFERENCES order_events(transaction_hash, log_index)
);
io_type TEXT NOT NULL CHECK (io_type IN ('input','output')),
token TEXT NOT NULL,
vault_id TEXT NOT NULL,
PRIMARY KEY (transaction_hash, log_index, io_index, io_type),
FOREIGN KEY (transaction_hash, log_index) REFERENCES order_events(transaction_hash, log_index)
);
🤖 Prompt for AI Agents
In crates/common/src/raindex_client/sqlite_web/sql/tables.sql around lines 44 to
49, the io_type column currently allows any text; add a CHECK constraint to
restrict values to 'input' or 'output'. Modify the table definition so io_type
is declared as NOT NULL with a CHECK (io_type IN ('input','output')) (or add a
separate table-level CHECK line) to enforce allowed values; update any relevant
migration or schema version if you maintain migrations.

Comment on lines +131 to +156
CREATE INDEX idx_deposits_vault ON deposits(sender, token, vault_id);
CREATE INDEX idx_deposits_block ON deposits(block_number);
CREATE INDEX idx_deposits_token ON deposits(token);

CREATE INDEX idx_withdrawals_vault ON withdrawals(sender, token, vault_id);
CREATE INDEX idx_withdrawals_block ON withdrawals(block_number);
CREATE INDEX idx_withdrawals_token ON withdrawals(token);

CREATE INDEX idx_order_events_hash ON order_events(order_hash);
CREATE INDEX idx_order_events_owner ON order_events(order_owner);
CREATE INDEX idx_order_events_block ON order_events(block_number);

CREATE INDEX idx_order_ios_token ON order_ios(token);

CREATE INDEX idx_take_orders_owner ON take_orders(order_owner);
CREATE INDEX idx_take_orders_block ON take_orders(block_number);

CREATE INDEX idx_clear_events_alice_bob ON clear_v3_events(alice_order_hash, bob_order_hash);
CREATE INDEX idx_clear_events_block ON clear_v3_events(block_number);
CREATE INDEX idx_clear_alice_vaults ON clear_v3_events(alice_input_vault_id, alice_output_vault_id);
CREATE INDEX idx_clear_bob_vaults ON clear_v3_events(bob_input_vault_id, bob_output_vault_id);

CREATE INDEX idx_after_clear_block ON after_clear_v2_events(block_number);

CREATE INDEX idx_meta_subject ON meta_events(subject);
CREATE INDEX idx_meta_block ON meta_events(block_number);
Contributor

🧹 Nitpick | 🔵 Trivial

Optional: idempotency and ergonomics

  • Consider IF NOT EXISTS on all CREATE TABLE/INDEX to ease repeated bootstrap.
  • Consider indexes on frequently filtered columns (e.g., order_ios.vault_id).
🤖 Prompt for AI Agents
In crates/common/src/raindex_client/sqlite_web/sql/tables.sql around lines 131
to 156, the CREATE INDEX statements are not idempotent and are missing an index
on a frequently filtered column; update each CREATE INDEX to use "IF NOT EXISTS"
so repeated bootstraps won't error, and add a new index definition for
order_ios(vault_id) (also with IF NOT EXISTS) to cover queries filtering by that
column. Ensure naming follows existing conventions (idx_<table>_<columns>) and
place the new index near the other order_ios index for readability.
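The idempotency point is easy to demonstrate: with `IF NOT EXISTS` on every statement, the bootstrap script can be run repeatedly without error. A minimal sketch against a single table (the proposed `idx_order_ios_vault` name is an assumption, following the existing `idx_<table>_<columns>` convention):

```python
import sqlite3

BOOTSTRAP = """
CREATE TABLE IF NOT EXISTS order_ios (
    transaction_hash TEXT NOT NULL,
    log_index INTEGER NOT NULL,
    io_index INTEGER NOT NULL,
    io_type TEXT NOT NULL,
    token TEXT NOT NULL,
    vault_id TEXT NOT NULL,
    PRIMARY KEY (transaction_hash, log_index, io_index, io_type)
);
CREATE INDEX IF NOT EXISTS idx_order_ios_token ON order_ios(token);
-- Suggested extra index on a frequently filtered column.
CREATE INDEX IF NOT EXISTS idx_order_ios_vault ON order_ios(vault_id);
"""

conn = sqlite3.connect(":memory:")
# Running the bootstrap twice must not raise: IF NOT EXISTS makes it idempotent.
conn.executescript(BOOTSTRAP)
conn.executescript(BOOTSTRAP)

indexes = [row[1] for row in conn.execute("PRAGMA index_list('order_ios')")]
print(indexes)
```

Without `IF NOT EXISTS`, the second `executescript` call would fail with "table order_ios already exists".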

@findolor findolor merged commit 1348347 into main Oct 2, 2025
20 checks passed
@github-actions
Contributor

github-actions Bot commented Oct 2, 2025

@coderabbitai assess this PR size classification for the totality of the PR with the following criteria and report it in your comment:

S/M/L PR Classification Guidelines:

This guide helps classify merged pull requests by effort and complexity rather than just line count. The goal is to assess the difficulty and scope of changes after they have been completed.

Small (S)

Characteristics:

  • Simple bug fixes, typos, or minor refactoring
  • Single-purpose changes affecting 1-2 files
  • Documentation updates
  • Configuration tweaks
  • Changes that require minimal context to review

Review Effort: Would have taken 5-10 minutes

Examples:

  • Fix typo in variable name
  • Update README with new instructions
  • Adjust configuration values
  • Simple one-line bug fixes
  • Import statement cleanup

Medium (M)

Characteristics:

  • Feature additions or enhancements
  • Refactoring that touches multiple files but maintains existing behavior
  • Breaking changes with backward compatibility
  • Changes requiring some domain knowledge to review

Review Effort: Would have taken 15-30 minutes

Examples:

  • Add new feature or component
  • Refactor common utility functions
  • Update dependencies with minor breaking changes
  • Add new component with tests
  • Performance optimizations
  • More complex bug fixes

Large (L)

Characteristics:

  • Major feature implementations
  • Breaking changes or API redesigns
  • Complex refactoring across multiple modules
  • New architectural patterns or significant design changes
  • Changes requiring deep context and multiple review rounds

Review Effort: Would have taken 45+ minutes

Examples:

  • Complete new feature with frontend/backend changes
  • Protocol upgrades or breaking changes
  • Major architectural refactoring
  • Framework or technology upgrades

Additional Factors to Consider

When deciding between sizes, also consider:

  • Test coverage impact: More comprehensive test changes lean toward larger classification
  • Risk level: Changes to critical systems bump up a size category
  • Team familiarity: Novel patterns or technologies increase complexity

Notes:

  • the assessment must be for the totality of the PR, that means comparing the base branch to the last commit of the PR
  • the assessment output must be exactly one of: S, M or L (single-line comment) in format of: SIZE={S/M/L}
  • do not include any additional text, only the size classification
  • your assessment comment must not include tips or additional sections
  • do NOT tag me or anyone else on your comment

@findolor findolor deleted the local-db-insert-commands branch October 2, 2025 10:16
@coderabbitai
Contributor

coderabbitai Bot commented Oct 2, 2025


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Get the total diff statistics for the PR
# Compare base branch (main) to current branch HEAD

# First, fetch the latest from origin to ensure we have up-to-date refs
git fetch origin main >/dev/null 2>&1

# Get the merge-base to find the common ancestor
MERGE_BASE=$(git merge-base origin/main HEAD)

# Get detailed statistics
echo "=== Detailed file changes ==="
git diff --stat $MERGE_BASE HEAD

echo -e "\n=== Summary statistics ==="
git diff --shortstat $MERGE_BASE HEAD

echo -e "\n=== Number of files changed ==="
git diff --name-only $MERGE_BASE HEAD | wc -l

echo -e "\n=== List of changed files ==="
git diff --name-only $MERGE_BASE HEAD

Length of output: 123


🏁 Script executed:

#!/bin/bash
# Use GitHub CLI to get PR statistics
gh pr view 2101 --json additions,deletions,changedFiles,files

# Also get detailed file information
echo -e "\n=== File details ==="
gh pr diff 2101 --patch | head -100

Length of output: 3860


SIZE=L


🧠 Learnings used
Learnt from: rouzwelt
PR: rainlanguage/rain.orderbook#0
File: :0-0
Timestamp: 2025-05-21T21:24:42.275Z
Learning: For PR #1884, after each commit review, I should provide a new comment with a PR size report in the format: `TOTAL=number ADDITIONS=number DELETIONS=number`, excluding irrelevant files such as lock files (package-lock.json, cargo.lock, etc.).


Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Build query logic for each of the events we need on orderbook contract
Think about table definitions for the local db

3 participants