**Walkthrough**

This PR introduces a new `request-details` command.
**Estimated code review effort:** 🎯 3 (Moderate) | ⏱️ ~25 minutes

**Pre-merge checks:** ✅ 2 passed | ❌ 1 failed (1 warning)
> ⚠️ **Caution:** Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/request_logs.rs (1)
**11-39: ⚠️ Potential issue | 🟠 Major**

**Add schema migration or validation for legacy `request_logs` tables.**

`CREATE TABLE IF NOT EXISTS` does not modify existing databases. Any DuckDB file created before this change will retain its old schema with `STRUCT("1","2")[]` columns, while this code now writes `STRUCT(name, value)[]`. The staging table at lines 88–95 handles Arrow IPC conversion temporarily but does not address persistent schema compatibility. A database created with the older schema will conflict with the new insert statements. Add either a migration that renames the legacy columns or a fail-fast schema check before writes to catch version mismatches. Current tests only cover fresh databases.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/request_logs.rs` around lines 11 - 39, The current ensure_request_logs_table creates the new request_logs schema but doesn't handle preexisting legacy schemas with STRUCT("1","2")[] column types; add a migration or validation step (e.g., new function migrate_or_validate_request_logs_schema called from ensure_request_logs_table) that (1) queries the existing schema (via PRAGMA table_info or information_schema) for request_headers/response_headers column types and names, (2) if legacy STRUCT("1","2")[] columns are present, perform an in-place migration (ALTER TABLE ... RENAME COLUMN from the legacy names to the new names or create a temporary staging table, copy data converting STRUCT("1","2")[] to STRUCT(name VARCHAR, value VARCHAR)[] then swap tables), and (3) if migration cannot be performed, fail-fast with a clear error indicating schema version mismatch; ensure the migration touches request_headers and response_headers and is run before any inserts that assume STRUCT(name,value)[].
🧹 Nitpick comments (3)
src/request_details.rs (2)
**239-246: Consider wrapping DB writes in a transaction for atomicity.**

The three write operations (`write_request_details_to_db`, `write_application_logs_to_db`, `write_spans_to_db`) are executed sequentially without transaction wrapping. If an error occurs during `write_spans_to_db`, the request_logs and application_logs rows would already be committed while spans remain in an inconsistent state (deleted but not re-inserted). Given that this operates on a single request's data, the practical impact is low, but wrapping in a transaction would ensure all-or-nothing semantics.
♻️ Optional: Wrap writes in a transaction

```diff
 if let Some((db_path, conn)) = &db {
     ensure_request_logs_table(conn)?;
     ensure_application_logs_table(conn)?;
     ensure_spans_table(conn)?;
+    conn.execute("BEGIN TRANSACTION", [])?;
-    write_request_details_to_db(conn, app_id, &data)?;
-    write_application_logs_to_db(conn, app_id, &data.request_uuid, &data.logs)?;
-    write_spans_to_db(conn, app_id, &data.request_uuid, &data.spans)?;
+    let result = (|| {
+        write_request_details_to_db(conn, app_id, &data)?;
+        write_application_logs_to_db(conn, app_id, &data.request_uuid, &data.logs)?;
+        write_spans_to_db(conn, app_id, &data.request_uuid, &data.spans)?;
+        Ok::<_, anyhow::Error>(())
+    })();
+    if result.is_err() {
+        let _ = conn.execute("ROLLBACK", []);
+        return result;
+    }
+    conn.execute("COMMIT", [])?;
     eprintln!(
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/request_details.rs` around lines 239 - 246, The three DB writes (write_request_details_to_db, write_application_logs_to_db, write_spans_to_db) must be executed inside a single DB transaction to provide all-or-nothing semantics: create a transaction from the existing connection (e.g., conn.transaction() or equivalent Transaction API), perform the three writes using the Transaction (modify the write_* helpers to accept &Transaction or a trait-compatible reference if needed), and call commit() only after all three succeed so failures automatically roll back; keep the ensure_request_logs_table/ensure_application_logs_table/ensure_spans_table calls outside or run schema creation before starting the transaction.
**67-100: Consider adding unique constraints for query efficiency.**

The `application_logs` and `spans` tables lack unique constraints. While the delete-then-insert pattern at lines 161-164 and 192-195 handles idempotency, adding a composite unique constraint on `(app_id, request_uuid, timestamp)` for logs and `(app_id, request_uuid, span_id)` for spans could:
- Prevent accidental duplicates if delete fails silently
- Enable efficient lookups by these key combinations
This is optional since the current implementation works correctly for the single-request use case.
♻️ Optional: Add unique constraints
```diff
 fn ensure_application_logs_table(conn: &duckdb::Connection) -> Result<()> {
     conn.execute_batch(
         "CREATE TABLE IF NOT EXISTS application_logs (
             app_id INTEGER NOT NULL,
             request_uuid VARCHAR NOT NULL,
             timestamp TIMESTAMPTZ NOT NULL,
             message VARCHAR NOT NULL,
             level VARCHAR,
             logger VARCHAR,
             file VARCHAR,
-            line INTEGER
+            line INTEGER,
+            UNIQUE (app_id, request_uuid, timestamp, message)
         )",
     )?;
     Ok(())
 }

 fn ensure_spans_table(conn: &duckdb::Connection) -> Result<()> {
     conn.execute_batch(
         "CREATE TABLE IF NOT EXISTS spans (
             app_id INTEGER NOT NULL,
             request_uuid VARCHAR NOT NULL,
             span_id VARCHAR NOT NULL,
             parent_span_id VARCHAR,
             name VARCHAR NOT NULL,
             kind VARCHAR NOT NULL,
             start_time_ns BIGINT NOT NULL,
             end_time_ns BIGINT NOT NULL,
             duration_ns BIGINT NOT NULL,
             status VARCHAR NOT NULL,
-            attributes JSON
+            attributes JSON,
+            UNIQUE (app_id, request_uuid, span_id)
         )",
     )?;
     Ok(())
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/request_details.rs` around lines 67 - 100, The CREATE TABLE statements in ensure_application_logs_table and ensure_spans_table lack composite uniqueness, so add UNIQUE constraints for idempotency and faster lookups: modify the SQL in ensure_application_logs_table to include a composite UNIQUE(app_id, request_uuid, timestamp) (or create a separate CREATE UNIQUE INDEX on those columns) and modify the SQL in ensure_spans_table to include UNIQUE(app_id, request_uuid, span_id) (or create a unique index) so duplicate inserts are prevented and lookups by those key combinations are efficient; ensure the chosen approach is compatible with duckdb (use UNIQUE in the table DDL or CREATE UNIQUE INDEX immediately after table creation).

src/request_logs.rs (1)
**297-305: Assert the remapped header value too.**

This round-trip test only checks `.name`, so a broken `"2" -> value` mapping would still pass. It's worth asserting `request_headers[1].value` here as well.

Proposed test tightening:
```diff
-        let (url, header_name): (String, Option<String>) = conn
+        let (url, header_name, header_value): (String, Option<String>, Option<String>) = conn
             .query_row(
-                "SELECT url, request_headers[1].name FROM request_logs WHERE app_id = 1",
+                "SELECT url, request_headers[1].name, request_headers[1].value \
+                 FROM request_logs WHERE app_id = 1",
                 [],
-                |row| Ok((row.get(0)?, row.get(1)?)),
+                |row| Ok((row.get(0)?, row.get(1)?, row.get(2)?)),
             )
             .unwrap();
         assert_eq!(url, "https://api.example.com/test");
         assert_eq!(header_name.as_deref(), Some("content-type"));
+        assert_eq!(header_value.as_deref(), Some("application/json"));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/request_logs.rs` around lines 297 - 305, The test in src/request_logs.rs currently only asserts the remapped header name via header_name/request_headers[1].name; extend the check to also assert the remapped header value (request_headers[1].value) so that the "2" -> value mapping is validated; after the query_row that binds (url, header_name) add an assertion that the corresponding header value equals the expected string (e.g., "application/json" or the value used in the test) by selecting or retrieving request_headers[1].value (or expanding the tuple returned from the closure) and asserting equality.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3c7487d6-08a1-4182-b548-1c8491f36079
📒 Files selected for processing (10)

- AGENTS.md
- README.md
- skills/apitally-cli/SKILL.md
- skills/apitally-cli/references/commands.md
- skills/apitally-cli/references/tables.md
- src/apps.rs
- src/consumers.rs
- src/main.rs
- src/request_details.rs
- src/request_logs.rs
Summary by CodeRabbit

**New Features**
- New `request-details` command to retrieve comprehensive request data including headers, body, exceptions, application logs, and distributed traces.

**Documentation**
- Documentation updated for the new tables (`application_logs`, `spans`).