
Conversation


@zensgit zensgit commented Sep 25, 2025

Summary:

  • Migration 028 (unique index) for single default ledger integrity.
  • Extended ledger test asserts constraint.
  • JWT secret now from JWT_SECRET env (dev fallback + warning).
  • Added export_stream feature flag and streaming CSV implementation (feature gated).
  • Updated README and .env.example.

Testing: SQLX_OFFLINE=true cargo test --tests => all pass.

@Copilot Copilot AI review requested due to automatic review settings September 25, 2025 06:26
@zensgit zensgit enabled auto-merge (squash) September 25, 2025 06:26
@gemini-code-assist

Summary of Changes

Hello @zensgit, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the API's robustness, security, and performance. It introduces a database-level constraint to guarantee a single default ledger per family, improves JWT secret management by leveraging environment variables with appropriate fallbacks and warnings, and implements a feature-gated streaming mechanism for CSV transaction exports to handle large datasets more efficiently.

Highlights

  • Enforced Single Default Ledger: A new database migration (028) introduces a unique index to ensure that each family can have at most one default ledger, preventing data inconsistencies and race conditions. This is verified by an extended integration test.
  • JWT Secret from Environment Variable: The JWT secret is now securely loaded from the JWT_SECRET environment variable. A development fallback is provided, but a warning is issued if the secret is not explicitly set in non-test environments, promoting better security practices.
  • Feature-Gated Streaming CSV Export: A new export_stream feature flag has been added to Cargo.toml. When enabled, the transaction CSV export endpoint will stream data incrementally, rather than buffering the entire response, which improves performance and reduces memory usage for large exports.
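Migration 028 is most likely a partial unique index of the following shape; the table and column names here are assumptions for illustration, not the actual schema:

```sql
-- Hypothetical sketch of the migration: at most one default ledger per family.
CREATE UNIQUE INDEX idx_ledgers_one_default_per_family
    ON ledgers (family_id)
    WHERE is_default = true;
```

A partial unique index constrains only rows where `is_default` is true, so any number of non-default ledgers per family remains allowed.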

Copilot AI left a comment
Pull Request Overview

This PR implements several security and data integrity improvements along with a new streaming CSV export feature. The changes enforce a single default ledger per family, secure JWT secret handling, and add streaming capabilities for CSV exports.

  • Database migration adds unique constraint ensuring only one default ledger per family
  • JWT secret configuration moved from hardcoded constant to environment variable with secure fallback
  • Added streaming CSV export feature behind export_stream feature flag for better memory efficiency

Reviewed Changes

Copilot reviewed 7 out of 7 changed files in this pull request and generated 4 comments.

Show a summary per file

  • jive-api/migrations/028_add_unique_default_ledger_index.sql: creates a unique index to enforce a single default ledger per family
  • jive-api/tests/integration/family_default_ledger_test.rs: adds a test to verify unique constraint enforcement
  • jive-api/src/auth.rs: replaces the hardcoded JWT secret with an environment variable lookup
  • jive-api/src/handlers/transactions.rs: adds a streaming CSV export implementation behind a feature flag
  • jive-api/Cargo.toml: adds the export_stream feature flag
  • README.md: adds JWT secret configuration documentation
  • .env.example: adds a JWT_SECRET environment variable example


}
});
let byte_stream = ReceiverStream::new(rx).map(|r| match r { Ok(b)=>Ok::<_,ApiError>(b), Err(e)=>Err(e) });
let body = Body::from_stream(byte_stream.map(|res| res.map_err(|_| std::io::Error::new(std::io::ErrorKind::Other, "stream error"))));
Copilot AI Sep 25, 2025
The error message "stream error" is too generic and loses the original error information. Consider preserving the original error details for better debugging.

Suggested change
let body = Body::from_stream(byte_stream.map(|res| res.map_err(|_| std::io::Error::new(std::io::ErrorKind::Other, "stream error"))));
let body = Body::from_stream(byte_stream.map(|res| res.map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, format!("stream error: {}", e)))));


Comment on lines +480 to +482
csv_escape_cell(category.clone().unwrap_or_default(), ','),
account_id,
csv_escape_cell(payee.clone().unwrap_or_default(), ','),
Copilot AI Sep 25, 2025

Unnecessary cloning of category and payee values. Since these are Option<String> values already extracted, you can use category.unwrap_or_default() and payee.unwrap_or_default() directly without clone().

Suggested change
csv_escape_cell(category.clone().unwrap_or_default(), ','),
account_id,
csv_escape_cell(payee.clone().unwrap_or_default(), ','),
csv_escape_cell(category.unwrap_or_default(), ','),
account_id,
csv_escape_cell(payee.unwrap_or_default(), ','),



Comment on lines +25 to +27
if !cfg!(test) {
eprintln!("WARNING: JWT_SECRET not set; using insecure default key");
}
Copilot AI Sep 25, 2025

Using eprintln! for warnings bypasses proper logging infrastructure. Consider using a proper logging framework (like tracing or log) to ensure warnings are captured appropriately in production environments.


@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces several valuable improvements, including enforcing a single default ledger via a database constraint, sourcing the JWT secret from an environment variable for better security, and adding a feature-gated streaming CSV export. The changes are generally well-implemented. My review focuses on the new streaming export functionality, pointing out a critical build issue related to feature flags, a high-severity issue with error handling that leads to information loss, and a few medium-severity suggestions to improve code clarity, reduce redundancy, and fix inconsistencies. The database migration and associated test for the unique ledger constraint are well-executed.

Comment on lines 450 to 499
#[cfg(feature = "export_stream")]
{
    use futures::StreamExt;
    use tokio_stream::wrappers::ReceiverStream;
    use tokio::sync::mpsc;

    let include_header = q.include_header.unwrap_or(true);
    let (tx, rx) = mpsc::channel::<Result<bytes::Bytes, ApiError>>(8);
    let built = query.build();
    let pool_clone = pool.clone();
    tokio::spawn(async move {
        let mut stream = built.fetch_many(&pool_clone);
        // Header
        if include_header {
            if tx.send(Ok(bytes::Bytes::from_static(b"Date,Description,Amount,Category,Account,Payee,Type\n"))).await.is_err() { return; }
        }
        while let Some(item) = stream.next().await {
            match item {
                Ok(sqlx::Either::Right(row)) => {
                    use sqlx::Row;
                    let date: NaiveDate = row.get("transaction_date");
                    let desc: String = row.try_get::<String, _>("description").unwrap_or_default();
                    let amount: Decimal = row.get("amount");
                    let category: Option<String> = row.try_get::<String, _>("category_name").ok().filter(|s| !s.is_empty());
                    let account_id: Uuid = row.get("account_id");
                    let payee: Option<String> = row.try_get::<String, _>("payee_name").ok().filter(|s| !s.is_empty());
                    let ttype: String = row.get("transaction_type");
                    let line = format!("{},{},{},{},{},{},{}\n",
                        date,
                        csv_escape_cell(desc, ','),
                        amount,
                        csv_escape_cell(category.clone().unwrap_or_default(), ','),
                        account_id,
                        csv_escape_cell(payee.clone().unwrap_or_default(), ','),
                        csv_escape_cell(ttype, ','));
                    if tx.send(Ok(bytes::Bytes::from(line))).await.is_err() { return; }
                }
                Ok(sqlx::Either::Left(_)) => { /* ignore query result count */ }
                Err(e) => { let _ = tx.send(Err(ApiError::DatabaseError(e.to_string()))).await; return; }
            }
        }
    });
    let byte_stream = ReceiverStream::new(rx).map(|r| match r { Ok(b) => Ok::<_, ApiError>(b), Err(e) => Err(e) });
    let body = Body::from_stream(byte_stream.map(|res| res.map_err(|_| std::io::Error::new(std::io::ErrorKind::Other, "stream error"))));
    // Build headers & return early (skip buffered path below)
    let mut headers_map = header::HeaderMap::new();
    headers_map.insert(header::CONTENT_TYPE, "text/csv; charset=utf-8".parse().unwrap());
    let filename = format!("transactions_export_{}.csv", Utc::now().format("%Y%m%d%H%M%S"));
    headers_map.insert(header::CONTENT_DISPOSITION, format!("attachment; filename=\"{}\"", filename).parse().unwrap());
    return Ok((headers_map, body));
}


critical

The new streaming implementation for CSV export under the export_stream feature flag has a dependency on the csv_escape_cell function. However, csv_escape_cell is conditionally compiled only when the core_export feature is not enabled. If a user enables both export_stream and core_export features, the build will fail because csv_escape_cell will not be defined. To fix this, the csv_escape_cell function should be available whenever export_stream is enabled, regardless of the core_export feature. Consider adjusting the cfg flags or moving the shared helper function to a location where it's always available when needed.

}
});
let byte_stream = ReceiverStream::new(rx).map(|r| match r { Ok(b)=>Ok::<_,ApiError>(b), Err(e)=>Err(e) });
let body = Body::from_stream(byte_stream.map(|res| res.map_err(|_| std::io::Error::new(std::io::ErrorKind::Other, "stream error"))));


high

In the streaming implementation, when an error occurs while fetching from the database, the original error information is lost. The map_err call replaces the detailed ApiError with a generic std::io::Error containing only the message "stream error". This makes debugging issues in production very difficult. The original error message should be preserved when converting to std::io::Error.

Suggested change
let body = Body::from_stream(byte_stream.map(|res| res.map_err(|_| std::io::Error::new(std::io::ErrorKind::Other, "stream error"))));
let body = Body::from_stream(byte_stream.map(|res| res.map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e.to_string()))));

Comment on lines 450 to 499 (same code hunk as quoted above)


medium

The CSV generation logic inside the new export_stream block duplicates logic from the non-streaming implementation. This duplication can lead to maintenance issues where one implementation is updated but the other is not. For instance, the streaming version hardcodes the delimiter as ',', whereas the non-streaming version uses a configurable delimiter. It would be better to refactor the CSV row generation into a shared helper function to ensure consistency and reduce code duplication.

Comment on lines +480 to +482
csv_escape_cell(category.clone().unwrap_or_default(), ','),
account_id,
csv_escape_cell(payee.clone().unwrap_or_default(), ','),


medium

The .clone() calls on category and payee are unnecessary because the variables are not used again after being passed to csv_escape_cell. The unwrap_or_default() method consumes the Option, so you can pass ownership directly without cloning. This is slightly more efficient and idiomatic.

Suggested change
csv_escape_cell(category.clone().unwrap_or_default(), ','),
account_id,
csv_escape_cell(payee.clone().unwrap_or_default(), ','),
csv_escape_cell(category.unwrap_or_default(), ','),
account_id,
csv_escape_cell(payee.unwrap_or_default(), ','),

}
}
});
let byte_stream = ReceiverStream::new(rx).map(|r| match r { Ok(b)=>Ok::<_,ApiError>(b), Err(e)=>Err(e) });


medium

The .map() call on ReceiverStream is redundant. ReceiverStream::new(rx) already produces a stream of Result<bytes::Bytes, ApiError>, which is what byte_stream is expected to be. The map call just unwraps and re-wraps the Result without changing it. This line can be simplified for better readability.

Suggested change
let byte_stream = ReceiverStream::new(rx).map(|r| match r { Ok(b)=>Ok::<_,ApiError>(b), Err(e)=>Err(e) });
let byte_stream = ReceiverStream::new(rx);

@zensgit zensgit merged commit 5b770e5 into main Sep 25, 2025
10 checks passed
@zensgit zensgit deleted the feat/ledger-unique-jwt-stream branch September 25, 2025 06:37
@zensgit zensgit mentioned this pull request Oct 15, 2025
