Add decode-events CLI command for processing JSON event files #2104

findolor merged 8 commits into local-db-cli
Conversation
Walkthrough

Adds a new CLI subcommand, LocalDb::DecodeEvents, that reads LogEntryResponse JSON from a file, decodes the events, and writes pretty-printed output. Wires the subcommand into the CLI execution flow, exposes it via the local_db module, and updates .gitignore to ignore decoded output files.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor U as User
    participant CLI as CLI (LocalDb)
    participant DE as DecodeEvents::execute
    participant FS as Filesystem
    participant JSON as JSON Parser
    participant DEC as decode_events (decoder)
    U->>CLI: run local-db decode-events --input --output
    CLI->>DE: execute()
    DE->>FS: read input file
    FS-->>DE: raw JSON
    DE->>JSON: parse Vec<LogEntryResponse>
    JSON-->>DE: parsed entries
    DE->>DEC: decode(entries)
    DEC-->>DE: decoded events
    DE->>FS: write pretty JSON (output or default)
    FS-->>DE: write OK
    DE-->>CLI: Result<()>
    CLI-->>U: command completes
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 2
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (4)

- crates/cli/.gitignore (1 hunk)
- crates/cli/src/commands/local_db/decode_events.rs (1 hunk)
- crates/cli/src/commands/local_db/mod.rs (1 hunk)
- crates/cli/src/lib.rs (2 hunks)
🔇 Additional comments (8)
crates/cli/.gitignore (1)
2-4: LGTM! The ignore patterns appropriately exclude generated event output files from version control.
crates/cli/src/commands/local_db/mod.rs (1)
1-5: LGTM! The module declaration and public re-export follow the existing pattern established by fetch_events, maintaining consistency in the codebase structure.

crates/cli/src/lib.rs (3)

1-1: LGTM! The import correctly adds DecodeEvents alongside the existing FetchEvents.

13-14: LGTM! The new DecodeEvents variant is properly integrated into the LocalDb enum with appropriate clap command naming.

21-21: LGTM! The execution routing for DecodeEvents follows the established pattern and correctly delegates to the execute() method.

crates/cli/src/commands/local_db/decode_events.rs (3)

1-8: LGTM! The imports are appropriate and necessary for the implementation. Good use of BufReader for efficient file reading.

9-16: LGTM! The struct design is clean and follows clap conventions. Using Option<String> for output_file appropriately enables default fallback behavior.
61-122: LGTM! The test helper functions are well-designed and appropriate:

- sample_order_v4() creates consistent test data
- add_order_event() correctly constructs LogEntryResponse with proper hex encoding
- write_events() and decoded_output() provide clean file I/O abstractions for tests
```rust
pub async fn execute(self) -> Result<()> {
    println!("Reading events from: {}", self.input_file);

    let file = File::open(&self.input_file)
        .with_context(|| format!("Failed to open {}", self.input_file))?;
    let reader = BufReader::new(file);
    let events: Vec<LogEntryResponse> = serde_json::from_reader(reader)
        .with_context(|| format!("Failed to parse {} as log entries", self.input_file))?;

    println!("Processing {} events...", events.len());

    let decoded_result = decode_events(&events)
        .map_err(|e| anyhow::anyhow!("Failed to decode events: {}", e))?;

    let output_filename = self
        .output_file
        .unwrap_or_else(|| "decoded_events.json".to_string());

    let mut file = File::create(&output_filename)
        .with_context(|| format!("Failed to create {}", output_filename))?;
    serde_json::to_writer_pretty(&mut file, &decoded_result)
        .with_context(|| format!("Failed to write decoded events to {}", output_filename))?;
    writeln!(file)?;

    println!("Decoded events saved to: {}", output_filename);
    Ok(())
}
```
🧹 Nitpick | 🔵 Trivial
LGTM!
The implementation is well-structured with comprehensive error handling using .with_context() at each I/O boundary. The logic correctly:
- Reads and parses input events
- Decodes them via the shared library function
- Writes pretty-printed output with a trailing newline (good practice for JSON files)
- Falls back to a sensible default output filename
The use of BufReader for parsing is efficient and appropriate.
Optional consideration: Blocking I/O in async function
While the async signature is required by the Execute trait, the function uses blocking std::fs operations. For a CLI tool that runs a single command and exits, this trade-off is pragmatic and acceptable. However, if this command were ever called in a long-running async server context, consider switching to tokio::fs to avoid blocking the executor thread.
🤖 Prompt for AI Agents
In crates/cli/src/commands/local_db/decode_events.rs around lines 19 to 45, the
async function uses blocking std::fs operations (File::open, File::create,
serde_json::from_reader/to_writer) which can block the async executor; replace
these with non-blocking equivalents (e.g., use tokio::fs::File and
tokio::io::{AsyncReadExt, AsyncWriteExt} plus serde_json::from_slice/to_vec or
perform the blocking work inside tokio::task::spawn_blocking) so the function
remains async-friendly — choose either to fully migrate I/O to tokio async APIs
or wrap the blocking sections in spawn_blocking to avoid blocking the runtime.
```rust
#[tokio::test]
async fn test_execute_with_custom_output_file() -> Result<()> {
    let temp_dir = TempDir::new()?;
    let input_file = temp_dir.path().join("input.json");
    let output_file = temp_dir.path().join("custom_output.json");

    let test_events = vec![add_order_event(7, 10, 0)];

    write_events(&input_file, &test_events)?;

    let cmd = DecodeEvents {
        input_file: input_file.to_string_lossy().to_string(),
        output_file: Some(output_file.to_string_lossy().to_string()),
    };

    cmd.execute().await?;

    assert!(output_file.exists());
    let parsed_output = decoded_output(&output_file);
    assert_eq!(parsed_output.as_array().map(|arr| arr.len()), Some(1));
    let event = &parsed_output[0];
    assert_eq!(event["event_type"], "AddOrderV3");
    assert_eq!(
        event["decoded_data"]["sender"],
        serde_json::Value::String(format!("0x{}", hex::encode([7u8; 20])))
    );

    Ok(())
}

#[tokio::test]
async fn test_execute_with_default_output_file() -> Result<()> {
    let temp_dir = TempDir::new()?;
    let input_file = temp_dir.path().join("input.json");
    let expected_output = temp_dir.path().join("decoded_events.json");

    let test_events = vec![add_order_event(9, 11, 1)];

    write_events(&input_file, &test_events)?;

    let cmd = DecodeEvents {
        input_file: input_file.to_string_lossy().to_string(),
        output_file: None,
    };

    let original_dir = std::env::current_dir()?;
    std::env::set_current_dir(&temp_dir)?;

    let result = cmd.execute().await;

    std::env::set_current_dir(original_dir)?;

    result?;

    assert!(expected_output.exists());
    let parsed_output = decoded_output(&expected_output);
    assert_eq!(parsed_output.as_array().map(|arr| arr.len()), Some(1));
    assert_eq!(parsed_output[0]["event_type"], "AddOrderV3");

    Ok(())
}

#[tokio::test]
async fn test_execute_with_empty_events() -> Result<()> {
    let temp_dir = TempDir::new()?;
    let input_file = temp_dir.path().join("empty_input.json");
    let output_file = temp_dir.path().join("empty_output.json");

    let empty_events: Vec<LogEntryResponse> = vec![];
    write_events(&input_file, &empty_events)?;

    let cmd = DecodeEvents {
        input_file: input_file.to_string_lossy().to_string(),
        output_file: Some(output_file.to_string_lossy().to_string()),
    };

    cmd.execute().await?;

    assert!(output_file.exists());
    let parsed_output = decoded_output(&output_file);
    assert_eq!(parsed_output.as_array().map(Vec::len), Some(0));

    Ok(())
}

#[tokio::test]
async fn test_execute_with_multiple_events() -> Result<()> {
    let temp_dir = TempDir::new()?;
    let input_file = temp_dir.path().join("multi_input.json");
    let output_file = temp_dir.path().join("multi_output.json");

    let test_events = vec![
        add_order_event(10, 12, 1),
        add_order_event(11, 13, 2),
        add_order_event(12, 14, 3),
    ];

    write_events(&input_file, &test_events)?;

    let cmd = DecodeEvents {
        input_file: input_file.to_string_lossy().to_string(),
        output_file: Some(output_file.to_string_lossy().to_string()),
    };

    cmd.execute().await?;

    assert!(output_file.exists());
    let parsed_output = decoded_output(&output_file);
    assert_eq!(parsed_output.as_array().map(|arr| arr.len()), Some(3));
    assert!(parsed_output
        .as_array()
        .unwrap()
        .iter()
        .all(|event| event["event_type"] == "AddOrderV3"));

    Ok(())
}

#[tokio::test]
async fn test_execute_with_nonexistent_input_file() {
    let cmd = DecodeEvents {
        input_file: "/path/that/does/not/exist.json".to_string(),
        output_file: Some("output.json".to_string()),
    };

    let result = cmd.execute().await;
    assert!(result.is_err());

    let error = result.unwrap_err();
    let error_msg = error.to_string();
    assert!(
        error_msg.contains("Failed to open"),
        "unexpected error message: {}",
        error_msg
    );
}

#[tokio::test]
async fn test_execute_with_invalid_json() -> Result<()> {
    let temp_dir = TempDir::new()?;
    let input_file = temp_dir.path().join("invalid.json");
    let output_file = temp_dir.path().join("output.json");

    fs::write(&input_file, "{ invalid json content")?;

    let cmd = DecodeEvents {
        input_file: input_file.to_string_lossy().to_string(),
        output_file: Some(output_file.to_string_lossy().to_string()),
    };

    let result = cmd.execute().await;
    assert!(result.is_err());

    let error_msg = result.unwrap_err().to_string();
    assert!(!error_msg.is_empty());

    Ok(())
}

#[tokio::test]
async fn test_execute_with_json_not_array() -> Result<()> {
    let temp_dir = TempDir::new()?;
    let input_file = temp_dir.path().join("not_array.json");
    let output_file = temp_dir.path().join("output.json");

    fs::write(&input_file, r#"{"events": "not an array"}"#)?;

    let cmd = DecodeEvents {
        input_file: input_file.to_string_lossy().to_string(),
        output_file: Some(output_file.to_string_lossy().to_string()),
    };

    let result = cmd.execute().await;
    assert!(result.is_err());

    let error_msg = result.unwrap_err().to_string();
    assert!(!error_msg.is_empty());

    Ok(())
}
```
🧹 Nitpick | 🔵 Trivial
Excellent test coverage!
The test suite comprehensively covers:
- Happy paths: custom output, default output, empty events, multiple events
- Error paths: nonexistent input file, malformed JSON, incorrect JSON structure
- Proper cleanup using TempDir
- Verification of both file existence and decoded content structure

The tests correctly use tokio::test for async execution and include appropriate assertions.
Optional enhancement: Test decode_events error path
Consider adding a test case where decode_events() itself returns an error (e.g., by providing a LogEntryResponse with invalid event data). This would verify the error handling at line 30-31 works as expected. Currently, all tests use valid AddOrderV3 events.
🤖 Prompt for AI Agents
In crates/cli/src/commands/local_db/decode_events.rs around lines 124 to 303,
add one more tokio::test that constructs a TempDir, writes an input JSON
containing a LogEntryResponse whose event data/payload is intentionally
malformed or contains invalid fields to force decode_events() to return an
error, creates a DecodeEvents with that input (and optional output file), runs
cmd.execute().await, and asserts the result is Err (and optionally inspects the
error string is non-empty or contains a decoding-related phrase); this verifies
the decode_events error path at the earlier lines.
@coderabbitai assess this PR size classification for the totality of the PR with the following criteria and report it in your comment: S/M/L

PR Classification Guidelines: This guide helps classify merged pull requests by effort and complexity rather than just line count. The goal is to assess the difficulty and scope of changes after they have been completed.

Small (S)
Characteristics:
Review Effort: Would have taken 5-10 minutes
Examples:

Medium (M)
Characteristics:
Review Effort: Would have taken 15-30 minutes
Examples:

Large (L)
Characteristics:
Review Effort: Would have taken 45+ minutes
Examples:

Additional Factors to Consider
When deciding between sizes, also consider:

Notes:
SIZE=M
Caution: Chained PR - do not merge before #2102
Motivation
The CLI needed a way to decode events that have been previously fetched and stored in JSON format. This
functionality allows users to take raw event data and decode it into a more usable format, completing the
workflow of fetching and then processing events locally.
Solution
Checks
By submitting this for review, I'm confirming I've done the following:
Summary by CodeRabbit
New Features
Chores