
Add request-details command #5

Merged
itssimon merged 7 commits into main from request-details-command on Mar 26, 2026
Conversation

Member

@itssimon itssimon commented Mar 26, 2026

Summary by CodeRabbit

  • New Features

    • Added request-details command to retrieve comprehensive request data including headers, body, exceptions, application logs, and distributed traces.
  • Documentation

    • Updated CLI documentation and table schemas to reflect the new command and DuckDB tables (application_logs, spans).
    • Improved DuckDB header schema with clearer field naming.

@itssimon itssimon self-assigned this Mar 26, 2026

coderabbitai bot commented Mar 26, 2026

Walkthrough

This PR introduces a new request-details CLI command to fetch comprehensive request-specific data (headers, bodies, logs, spans) from an API and persist it to DuckDB. It refactors database insert statements to use explicit column lists and updates the header struct schema from positional fields to named fields (name and value).

Changes

Cohort / File(s) / Summary

  • Documentation Updates (AGENTS.md, README.md): Added new request-details subcommand entries to the command tables and reformatted column alignment.
  • Skills References (skills/apitally-cli/SKILL.md, skills/apitally-cli/references/commands.md, skills/apitally-cli/references/tables.md): Documented the new request-details command, renamed the header struct fields from numeric positions to name/value, and added definitions for the application_logs and spans tables.
  • Database Schema & Queries (src/request_logs.rs): Renamed the header struct fields from ("1", "2") to ("name", "value"), made ensure_request_logs_table public, and updated the Arrow IPC import logic to use an intermediate staging table with field remapping during insert.
  • Database Inserts (src/apps.rs, src/consumers.rs): Refactored INSERT OR REPLACE statements to use explicit column lists instead of relying on positional value ordering.
  • CLI & New Module (src/main.rs, src/request_details.rs): Added a RequestDetails command variant to the CLI dispatcher, integrated parsing and validation, and implemented a new 407-line module for API calls, JSON/DuckDB output routing, and structured data persistence.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Check name | Status | Explanation | Resolution
Docstring Coverage | ⚠️ Warning | Docstring coverage is 28.57%, which is below the required threshold of 80.00%. | Write docstrings for the functions that are missing them.
✅ Passed checks (2 passed)
Check name | Status | Explanation
Description Check | ✅ Passed | Check skipped: CodeRabbit’s high-level summary is enabled.
Title Check | ✅ Passed | The title accurately summarizes the main change: adding a new request-details command to the CLI.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch request-details-command

Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/request_logs.rs (1)

11-39: ⚠️ Potential issue | 🟠 Major

Add schema migration or validation for legacy request_logs tables.

CREATE TABLE IF NOT EXISTS does not modify existing databases. Any DuckDB file created before this change will retain its old schema with STRUCT("1","2")[] columns, while this code now writes STRUCT(name,value)[]. The staging table at lines 88–95 handles Arrow IPC conversion temporarily but does not address persistent schema compatibility. A database created with the older schema will conflict with the new insert statements. Add either a migration that renames the legacy columns or a fail-fast schema check before writes to catch version mismatches. Current tests only cover fresh databases.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/request_logs.rs` around lines 11-39: the current
ensure_request_logs_table creates the new request_logs schema but doesn't handle
preexisting legacy schemas with STRUCT("1","2")[] column types; add a migration
or validation step (e.g., new function migrate_or_validate_request_logs_schema
called from ensure_request_logs_table) that (1) queries the existing schema (via
PRAGMA table_info or information_schema) for request_headers/response_headers
column types and names, (2) if legacy STRUCT("1","2")[] columns are present,
perform an in-place migration (ALTER TABLE ... RENAME COLUMN from the legacy
names to the new names or create a temporary staging table, copy data converting
STRUCT("1","2")[] to STRUCT(name VARCHAR, value VARCHAR)[] then swap tables),
and (3) if migration cannot be performed, fail-fast with a clear error
indicating schema version mismatch; ensure the migration touches request_headers
and response_headers and is run before any inserts that assume
STRUCT(name,value)[].
🧹 Nitpick comments (3)
src/request_details.rs (2)

239-246: Consider wrapping DB writes in a transaction for atomicity.

The three write operations (write_request_details_to_db, write_application_logs_to_db, write_spans_to_db) are executed sequentially without transaction wrapping. If an error occurs during write_spans_to_db, the request_logs and application_logs would already be committed while spans remain in an inconsistent state (deleted but not re-inserted).

Given that this operates on a single request's data, the practical impact is low, but wrapping in a transaction would ensure all-or-nothing semantics.

♻️ Optional: Wrap writes in a transaction
     if let Some((db_path, conn)) = &db {
         ensure_request_logs_table(conn)?;
         ensure_application_logs_table(conn)?;
         ensure_spans_table(conn)?;

+        conn.execute("BEGIN TRANSACTION", [])?;
-        write_request_details_to_db(conn, app_id, &data)?;
-        write_application_logs_to_db(conn, app_id, &data.request_uuid, &data.logs)?;
-        write_spans_to_db(conn, app_id, &data.request_uuid, &data.spans)?;
+        let result = (|| {
+            write_request_details_to_db(conn, app_id, &data)?;
+            write_application_logs_to_db(conn, app_id, &data.request_uuid, &data.logs)?;
+            write_spans_to_db(conn, app_id, &data.request_uuid, &data.spans)?;
+            Ok::<_, anyhow::Error>(())
+        })();
+        if result.is_err() {
+            let _ = conn.execute("ROLLBACK", []);
+            return result;
+        }
+        conn.execute("COMMIT", [])?;

         eprintln!(
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/request_details.rs` around lines 239-246: the three DB writes
(write_request_details_to_db, write_application_logs_to_db, write_spans_to_db)
must be executed inside a single DB transaction to provide all-or-nothing
semantics: create a transaction from the existing connection (e.g.,
conn.transaction() or equivalent Transaction API), perform the three writes
using the Transaction (modify the write_* helpers to accept &Transaction or a
trait-compatible reference if needed), and call commit() only after all three
succeed so failures automatically roll back; keep the
ensure_request_logs_table/ensure_application_logs_table/ensure_spans_table calls
outside or run schema creation before starting the transaction.

67-100: Consider adding unique constraints for query efficiency.

The application_logs and spans tables lack unique constraints. While the delete-then-insert pattern at lines 161-164 and 192-195 handles idempotency, adding a composite unique constraint on (app_id, request_uuid, timestamp) for logs and (app_id, request_uuid, span_id) for spans could:

  1. Prevent accidental duplicates if delete fails silently
  2. Enable efficient lookups by these key combinations

This is optional since the current implementation works correctly for the single-request use case.

♻️ Optional: Add unique constraints
 fn ensure_application_logs_table(conn: &duckdb::Connection) -> Result<()> {
     conn.execute_batch(
         "CREATE TABLE IF NOT EXISTS application_logs (
             app_id INTEGER NOT NULL,
             request_uuid VARCHAR NOT NULL,
             timestamp TIMESTAMPTZ NOT NULL,
             message VARCHAR NOT NULL,
             level VARCHAR,
             logger VARCHAR,
             file VARCHAR,
-            line INTEGER
+            line INTEGER,
+            UNIQUE (app_id, request_uuid, timestamp, message)
         )",
     )?;
     Ok(())
 }

 fn ensure_spans_table(conn: &duckdb::Connection) -> Result<()> {
     conn.execute_batch(
         "CREATE TABLE IF NOT EXISTS spans (
             app_id INTEGER NOT NULL,
             request_uuid VARCHAR NOT NULL,
             span_id VARCHAR NOT NULL,
             parent_span_id VARCHAR,
             name VARCHAR NOT NULL,
             kind VARCHAR NOT NULL,
             start_time_ns BIGINT NOT NULL,
             end_time_ns BIGINT NOT NULL,
             duration_ns BIGINT NOT NULL,
             status VARCHAR NOT NULL,
-            attributes JSON
+            attributes JSON,
+            UNIQUE (app_id, request_uuid, span_id)
         )",
     )?;
     Ok(())
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/request_details.rs` around lines 67-100: the CREATE TABLE statements in
ensure_application_logs_table and ensure_spans_table lack composite uniqueness,
so add UNIQUE constraints for idempotency and faster lookups: modify the SQL in
ensure_application_logs_table to include a composite UNIQUE(app_id,
request_uuid, timestamp) (or create a separate CREATE UNIQUE INDEX on those
columns) and modify the SQL in ensure_spans_table to include UNIQUE(app_id,
request_uuid, span_id) (or create a unique index) so duplicate inserts are
prevented and lookups by those key combinations are efficient; ensure the chosen
approach is compatible with duckdb (use UNIQUE in the table DDL or CREATE UNIQUE
INDEX immediately after table creation).
src/request_logs.rs (1)

297-305: Assert the remapped header value too.

This round-trip test only checks .name, so a broken "2" -> value mapping would still pass. It's worth asserting request_headers[1].value here as well.

Proposed test tightening
-        let (url, header_name): (String, Option<String>) = conn
+        let (url, header_name, header_value): (String, Option<String>, Option<String>) = conn
             .query_row(
-                "SELECT url, request_headers[1].name FROM request_logs WHERE app_id = 1",
+                "SELECT url, request_headers[1].name, request_headers[1].value \
+                 FROM request_logs WHERE app_id = 1",
                 [],
-                |row| Ok((row.get(0)?, row.get(1)?)),
+                |row| Ok((row.get(0)?, row.get(1)?, row.get(2)?)),
             )
             .unwrap();
         assert_eq!(url, "https://api.example.com/test");
         assert_eq!(header_name.as_deref(), Some("content-type"));
+        assert_eq!(header_value.as_deref(), Some("application/json"));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/request_logs.rs` around lines 297-305: the test in src/request_logs.rs
currently only asserts the remapped header name via
header_name/request_headers[1].name; extend the check to also assert the
remapped header value (request_headers[1].value) so that the "2" -> value
mapping is validated; after the query_row that binds (url, header_name) add an
assertion that the corresponding header value equals the expected string (e.g.,
"application/json" or the value used in the test) by selecting or retrieving
request_headers[1].value (or expanding the tuple returned from the closure) and
asserting equality.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 3c7487d6-08a1-4182-b548-1c8491f36079

📥 Commits

Reviewing files that changed from the base of the PR and between 7a01dc8 and d26f4a0.

📒 Files selected for processing (10)
  • AGENTS.md
  • README.md
  • skills/apitally-cli/SKILL.md
  • skills/apitally-cli/references/commands.md
  • skills/apitally-cli/references/tables.md
  • src/apps.rs
  • src/consumers.rs
  • src/main.rs
  • src/request_details.rs
  • src/request_logs.rs

@itssimon itssimon merged commit eb005a6 into main on Mar 26, 2026, with 4 of 6 checks passed.
@itssimon itssimon deleted the request-details-command branch on March 26, 2026 at 00:55.

Labels: none yet · Projects: none yet · 1 participant