api: enforce single default ledger, JWT env secret, export_stream scaffold #40
Conversation
Summary of Changes (gemini-code-assist): I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the API's robustness, security, and performance. It introduces a database-level constraint to guarantee a single default ledger per family, improves JWT secret management by sourcing the secret from an environment variable with an appropriate fallback and warning, and implements a feature-gated streaming mechanism for CSV transaction exports to handle large datasets more efficiently.
Pull Request Overview
This PR implements several security and data integrity improvements along with a new streaming CSV export feature. The changes enforce single default ledger per family, secure JWT secret handling, and add streaming capabilities for CSV exports.
- Database migration adds unique constraint ensuring only one default ledger per family
- JWT secret configuration moved from a hardcoded constant to an environment variable lookup with a secure fallback (see the sketch after this list)
- Added a streaming CSV export feature behind the `export_stream` feature flag for better memory efficiency
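As a rough illustration of the environment-variable change, here is a minimal sketch of a secret lookup with an insecure development fallback. The helper name and fallback value are assumptions for illustration only, not the code from this PR; the warning message mirrors the one visible in the review excerpt below.

```rust
use std::env;

// Hypothetical helper (name and fallback value are illustrative only):
// resolve the JWT signing secret from the environment, warning loudly when
// the insecure development default is used outside of tests.
fn jwt_secret() -> String {
    env::var("JWT_SECRET").unwrap_or_else(|_| {
        if !cfg!(test) {
            eprintln!("WARNING: JWT_SECRET not set; using insecure default key");
        }
        "dev-insecure-jwt-secret".to_string()
    })
}
```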
Reviewed Changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 4 comments.
Show a summary per file
| File | Description |
|---|---|
| jive-api/migrations/028_add_unique_default_ledger_index.sql | Creates unique index to enforce single default ledger per family |
| jive-api/tests/integration/family_default_ledger_test.rs | Added test to verify unique constraint enforcement (see the sketch below this table) |
| jive-api/src/auth.rs | Replaced hardcoded JWT secret with environment variable lookup |
| jive-api/src/handlers/transactions.rs | Added streaming CSV export implementation behind feature flag |
| jive-api/Cargo.toml | Added export_stream feature flag |
| README.md | Added JWT secret configuration documentation |
| .env.example | Added JWT_SECRET environment variable example |
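For context on the constraint test, here is a minimal sketch of what such a check could look like. The table and column names (`ledgers`, `family_id`, `is_default`) and the use of the `#[sqlx::test]` harness are assumptions, not taken from this PR's actual test file.

```rust
use sqlx::PgPool;
use uuid::Uuid;

// Hypothetical test sketch: with a unique partial index on (family_id) where
// is_default is true, a second default ledger for the same family must fail.
#[sqlx::test]
async fn second_default_ledger_is_rejected(pool: PgPool) {
    let family_id = Uuid::new_v4();

    // First default ledger for the family inserts cleanly.
    sqlx::query("INSERT INTO ledgers (id, family_id, name, is_default) VALUES ($1, $2, $3, true)")
        .bind(Uuid::new_v4())
        .bind(family_id)
        .bind("Main")
        .execute(&pool)
        .await
        .expect("first default ledger should insert");

    // A second default ledger for the same family should violate the unique index.
    let err = sqlx::query("INSERT INTO ledgers (id, family_id, name, is_default) VALUES ($1, $2, $3, true)")
        .bind(Uuid::new_v4())
        .bind(family_id)
        .bind("Duplicate")
        .execute(&pool)
        .await
        .expect_err("second default ledger should be rejected");

    assert!(err.to_string().to_lowercase().contains("unique"));
}
```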
```rust
    }
});
let byte_stream = ReceiverStream::new(rx).map(|r| match r { Ok(b)=>Ok::<_,ApiError>(b), Err(e)=>Err(e) });
let body = Body::from_stream(byte_stream.map(|res| res.map_err(|_| std::io::Error::new(std::io::ErrorKind::Other, "stream error"))));
```
Copilot (AI) commented on Sep 25, 2025:
The error message "stream error" is too generic and loses the original error information. Consider preserving the original error details for better debugging.
Suggested change:

```diff
-let body = Body::from_stream(byte_stream.map(|res| res.map_err(|_| std::io::Error::new(std::io::ErrorKind::Other, "stream error"))));
+let body = Body::from_stream(byte_stream.map(|res| res.map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, format!("stream error: {}", e)))));
```
```rust
csv_escape_cell(category.clone().unwrap_or_default(), ','),
account_id,
csv_escape_cell(payee.clone().unwrap_or_default(), ','),
```
Copilot (AI) commented on Sep 25, 2025:
Unnecessary cloning of category and payee values. Since these are Option<String> values already extracted, you can use category.unwrap_or_default() and payee.unwrap_or_default() directly without clone().
Suggested change:

```diff
-csv_escape_cell(category.clone().unwrap_or_default(), ','),
+csv_escape_cell(category.unwrap_or_default(), ','),
 account_id,
-csv_escape_cell(payee.clone().unwrap_or_default(), ','),
+csv_escape_cell(payee.unwrap_or_default(), ','),
```
```rust
csv_escape_cell(category.clone().unwrap_or_default(), ','),
account_id,
csv_escape_cell(payee.clone().unwrap_or_default(), ','),
```
Copilot (AI) commented on Sep 25, 2025:
Unnecessary cloning of category and payee values. Since these are Option<String> values already extracted, you can use category.unwrap_or_default() and payee.unwrap_or_default() directly without clone().
Suggested change:

```diff
-csv_escape_cell(category.clone().unwrap_or_default(), ','),
+csv_escape_cell(category.unwrap_or_default(), ','),
 account_id,
-csv_escape_cell(payee.clone().unwrap_or_default(), ','),
+csv_escape_cell(payee.unwrap_or_default(), ','),
```
```rust
if !cfg!(test) {
    eprintln!("WARNING: JWT_SECRET not set; using insecure default key");
}
```
Copilot (AI) commented on Sep 25, 2025:
Using eprintln! for warnings bypasses proper logging infrastructure. Consider using a proper logging framework (like tracing or log) to ensure warnings are captured appropriately in production environments.
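A minimal sketch of the suggested alternative, assuming the crate already depends on `tracing` and has a subscriber installed at startup; the wrapper function name is illustrative only.

```rust
// Hypothetical replacement for the eprintln! warning, assuming the `tracing`
// crate is already a dependency and a subscriber is initialized elsewhere.
fn warn_insecure_jwt_default() {
    if !cfg!(test) {
        tracing::warn!("JWT_SECRET not set; using insecure default key");
    }
}
```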
Code Review (gemini-code-assist)
This pull request introduces several valuable improvements, including enforcing a single default ledger via a database constraint, sourcing the JWT secret from an environment variable for better security, and adding a feature-gated streaming CSV export. The changes are generally well-implemented. My review focuses on the new streaming export functionality, pointing out a critical build issue related to feature flags, a high-severity issue with error handling that leads to information loss, and a few medium-severity suggestions to improve code clarity, reduce redundancy, and fix inconsistencies. The database migration and associated test for the unique ledger constraint are well-executed.
```rust
#[cfg(feature = "export_stream")]
{
    use futures::{StreamExt};
    use tokio_stream::wrappers::ReceiverStream;
    use tokio::sync::mpsc;
    let include_header = q.include_header.unwrap_or(true);
    let (tx, rx) = mpsc::channel::<Result<bytes::Bytes, ApiError>>(8);
    let built = query.build();
    let pool_clone = pool.clone();
    tokio::spawn(async move {
        let mut stream = built.fetch_many(&pool_clone);
        // Header
        if include_header {
            if tx.send(Ok(bytes::Bytes::from_static(b"Date,Description,Amount,Category,Account,Payee,Type\n"))).await.is_err() { return; }
        }
        while let Some(item) = stream.next().await {
            match item {
                Ok(sqlx::Either::Right(row)) => {
                    use sqlx::Row;
                    let date: NaiveDate = row.get("transaction_date");
                    let desc: String = row.try_get::<String,_>("description").unwrap_or_default();
                    let amount: Decimal = row.get("amount");
                    let category: Option<String> = row.try_get::<String,_>("category_name").ok().filter(|s| !s.is_empty());
                    let account_id: Uuid = row.get("account_id");
                    let payee: Option<String> = row.try_get::<String,_>("payee_name").ok().filter(|s| !s.is_empty());
                    let ttype: String = row.get("transaction_type");
                    let line = format!("{},{},{},{},{},{},{}\n",
                        date,
                        csv_escape_cell(desc, ','),
                        amount,
                        csv_escape_cell(category.clone().unwrap_or_default(), ','),
                        account_id,
                        csv_escape_cell(payee.clone().unwrap_or_default(), ','),
                        csv_escape_cell(ttype, ','));
                    if tx.send(Ok(bytes::Bytes::from(line))).await.is_err() { return; }
                }
                Ok(sqlx::Either::Left(_)) => { /* ignore query result count */ }
                Err(e) => { let _ = tx.send(Err(ApiError::DatabaseError(e.to_string()))).await; return; }
            }
        }
    });
    let byte_stream = ReceiverStream::new(rx).map(|r| match r { Ok(b)=>Ok::<_,ApiError>(b), Err(e)=>Err(e) });
    let body = Body::from_stream(byte_stream.map(|res| res.map_err(|_| std::io::Error::new(std::io::ErrorKind::Other, "stream error"))));
    // Build headers & return early (skip buffered path below)
    let mut headers_map = header::HeaderMap::new();
    headers_map.insert(header::CONTENT_TYPE, "text/csv; charset=utf-8".parse().unwrap());
    let filename = format!("transactions_export_{}.csv", Utc::now().format("%Y%m%d%H%M%S"));
    headers_map.insert(header::CONTENT_DISPOSITION, format!("attachment; filename=\"{}\"", filename).parse().unwrap());
    return Ok((headers_map, body));
}
```
The new streaming implementation for CSV export under the export_stream feature flag has a dependency on the csv_escape_cell function. However, csv_escape_cell is conditionally compiled only when the core_export feature is not enabled. If a user enables both export_stream and core_export features, the build will fail because csv_escape_cell will not be defined. To fix this, the csv_escape_cell function should be available whenever export_stream is enabled, regardless of the core_export feature. Consider adjusting the cfg flags or moving the shared helper function to a location where it's always available when needed.
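One possible shape of such a cfg adjustment is sketched below. The current gate on `csv_escape_cell` is inferred from the comment above (compiled only when `core_export` is disabled), and the function body shown is a generic CSV-escaping placeholder, not the project's actual implementation.

```rust
// Hypothetical: keep the helper available whenever either export path needs it,
// i.e. when `export_stream` is enabled or `core_export` is disabled.
#[cfg(any(feature = "export_stream", not(feature = "core_export")))]
fn csv_escape_cell(cell: String, delimiter: char) -> String {
    // Quote the cell if it contains the delimiter, a quote, or a newline,
    // doubling embedded quotes (standard CSV escaping).
    if cell.contains(delimiter) || cell.contains('"') || cell.contains('\n') {
        format!("\"{}\"", cell.replace('"', "\"\""))
    } else {
        cell
    }
}
```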
```rust
    }
});
let byte_stream = ReceiverStream::new(rx).map(|r| match r { Ok(b)=>Ok::<_,ApiError>(b), Err(e)=>Err(e) });
let body = Body::from_stream(byte_stream.map(|res| res.map_err(|_| std::io::Error::new(std::io::ErrorKind::Other, "stream error"))));
```
In the streaming implementation, when an error occurs while fetching from the database, the original error information is lost. The map_err call replaces the detailed ApiError with a generic std::io::Error containing only the message "stream error". This makes debugging issues in production very difficult. The original error message should be preserved when converting to std::io::Error.
Suggested change:

```diff
-let body = Body::from_stream(byte_stream.map(|res| res.map_err(|_| std::io::Error::new(std::io::ErrorKind::Other, "stream error"))));
+let body = Body::from_stream(byte_stream.map(|res| res.map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e.to_string()))));
```
(Code excerpt: the same `export_stream` block shown above.)
The CSV generation logic inside the new export_stream block duplicates logic from the non-streaming implementation. This duplication can lead to maintenance issues where one implementation is updated but the other is not. For instance, the streaming version hardcodes the delimiter as `,`, whereas the non-streaming version uses a configurable delimiter. It would be better to refactor the CSV row generation into a shared helper function to ensure consistency and reduce code duplication.
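One possible shape for such a shared helper, sketched under the assumption that both export paths read the same columns as in the hunk above and reuse the existing `csv_escape_cell`; the helper name and exact signature are illustrative, not part of this PR.

```rust
// Hypothetical shared row formatter so the buffered and streaming export paths
// produce identical CSV lines with a configurable delimiter.
fn format_transaction_row(
    date: chrono::NaiveDate,
    desc: String,
    amount: rust_decimal::Decimal,
    category: Option<String>,
    account_id: uuid::Uuid,
    payee: Option<String>,
    ttype: String,
    delimiter: char,
) -> String {
    let cells = [
        date.to_string(),
        csv_escape_cell(desc, delimiter),
        amount.to_string(),
        csv_escape_cell(category.unwrap_or_default(), delimiter),
        account_id.to_string(),
        csv_escape_cell(payee.unwrap_or_default(), delimiter),
        csv_escape_cell(ttype, delimiter),
    ];
    let mut line = cells.join(delimiter.to_string().as_str());
    line.push('\n');
    line
}
```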
```rust
csv_escape_cell(category.clone().unwrap_or_default(), ','),
account_id,
csv_escape_cell(payee.clone().unwrap_or_default(), ','),
```
The .clone() calls on category and payee are unnecessary because the variables are not used again after being passed to csv_escape_cell. The unwrap_or_default() method consumes the Option, so you can pass ownership directly without cloning. This is slightly more efficient and idiomatic.
Suggested change:

```diff
-csv_escape_cell(category.clone().unwrap_or_default(), ','),
+csv_escape_cell(category.unwrap_or_default(), ','),
 account_id,
-csv_escape_cell(payee.clone().unwrap_or_default(), ','),
+csv_escape_cell(payee.unwrap_or_default(), ','),
```
```rust
        }
    }
});
let byte_stream = ReceiverStream::new(rx).map(|r| match r { Ok(b)=>Ok::<_,ApiError>(b), Err(e)=>Err(e) });
```
The .map() call on ReceiverStream is redundant. ReceiverStream::new(rx) already produces a stream of Result<bytes::Bytes, ApiError>, which is what byte_stream is expected to be. The map call just unwraps and re-wraps the Result without changing it. This line can be simplified for better readability.
Suggested change:

```diff
-let byte_stream = ReceiverStream::new(rx).map(|r| match r { Ok(b)=>Ok::<_,ApiError>(b), Err(e)=>Err(e) });
+let byte_stream = ReceiverStream::new(rx);
```
Summary:
Testing: `SQLX_OFFLINE=true cargo test --tests` => all pass.