Skip to content
Merged
Show file tree
Hide file tree
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension

Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
58 changes: 58 additions & 0 deletions PRODUCTION_PREFLIGHT_CHECKLIST.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,58 @@
# Production Preflight Checklist

Use this list before promoting a build to production.

## 1. Configuration
- [ ] `JWT_SECRET` set (>=32 random bytes, not placeholder)
- [ ] Database URL / credentials use production secrets (no local `.env` leakage)
- [ ] `RUST_LOG` level appropriate (no `debug` in prod unless temporarily troubleshooting)

## 2. Database & Migrations
- [ ] All migrations applied up to latest (028 unique default ledger index)
- [ ] No duplicate default ledgers:
```sql
SELECT family_id, COUNT(*) FILTER (WHERE is_default) AS defaults
FROM ledgers GROUP BY family_id HAVING COUNT(*) FILTER (WHERE is_default) > 1;
```
- [ ] (Optional) Pending rehash plan for bcrypt users:
```sql
SELECT COUNT(*) FROM users WHERE password_hash LIKE '$2%';
```

## 3. Security
- [ ] Superadmin password rotated from baseline (not `admin123` / `SuperAdmin@123`)
- [ ] No hardcoded tokens or secrets committed
- [ ] HTTPS termination configured (proxy / ingress)

## 4. Logging & Monitoring
- [ ] Sensitive data not logged (spot-check recent logs)
- [ ] Health endpoint returns `status=healthy`
- [ ] Alerting / log retention configured (if applicable)

## 5. Features & Flags
- [ ] `export_stream` feature decision documented (enabled or deferred)
- [ ] `core_export` feature aligns with export requirements

## 6. Performance (Optional but Recommended)
- [ ] Benchmark run with expected production dataset size (see `scripts/benchmark_export_streaming.rs`)
- [ ] Latency and memory within targets

## 7. CI / CD
- [ ] Required checks enforced on protected branches
- [ ] Cargo Deny passes (no newly introduced high-severity advisories)

## 8. Backup & Recovery
- [ ] Automated DB backups configured & tested restore path
- [ ] Rollback plan documented for latest migrations

## 9. Documentation
- [ ] README environment section up to date
- [ ] Addendum report corrections merged (`PR_MERGE_REPORT_2025_09_25_ADDENDUM.md`)

## 10. Final Sanity
- [ ] Smoke test: register → login → create family → create transaction → export CSV
- [ ] Error rate in logs acceptable (< agreed threshold)

---
Status: Template ready for team adoption.

64 changes: 64 additions & 0 deletions docs/PR_MERGE_REPORT_2025_09_25_ADDENDUM.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,64 @@
## PR Merge Report Addendum (2025-09-25)

This addendum corrects and extends the original `PR_MERGE_REPORT_2025_09_25.md`.

### 1. Correction: Default Ledger Unique Index

Original report showed an example:
```sql
CREATE UNIQUE INDEX idx_family_ledgers_default_unique ON family_ledgers (family_id) WHERE is_default = true;
```

Actual merged migration (028):
```sql
-- File: jive-api/migrations/028_add_unique_default_ledger_index.sql
CREATE UNIQUE INDEX IF NOT EXISTS idx_ledgers_one_default
ON ledgers(family_id)
WHERE is_default = true;
```

Key differences:
- Table name is `ledgers` (not `family_ledgers`).
- Index name: `idx_ledgers_one_default`.
- Uses `IF NOT EXISTS` to remain idempotent.

### 2. Streaming Export Feature Flag

- New cargo feature: `export_stream` (disabled by default).
- Enables incremental CSV response for `GET /api/v1/transactions/export.csv`.
- Activation example:
```bash
cd jive-api
cargo run --features export_stream --bin jive-api
```
- Fallback: Without the feature the endpoint buffers entire CSV in memory.

### 3. JWT Secret Management

Changes recap:
- Hardcoded secret removed; now resolved via environment variable `JWT_SECRET`.
- Dev/test fallback: `insecure-dev-jwt-secret-change-me` (warning logged unless tests).
- Production requirement: non-empty, high entropy (≥32 bytes random) value.

### 4. Added Negative & Integrity Tests

| Test File | Purpose |
|-----------|---------|
| `auth_login_negative_test.rs` | Wrong bcrypt password → 401; inactive user refresh → 403 |
| `family_default_ledger_test.rs` | Ensures only one default ledger; duplicate insert fails (migration enforced) |

### 5. Recommended Follow-up Benchmark

Suggested script (added separately) seeds N transactions and measures export latency for buffered vs streaming modes.

### 6. Production Preflight (See `PRODUCTION_PREFLIGHT_CHECKLIST.md`)

- One default ledger per family (query check)
- No bcrypt hashes remaining (or plan opportunistic rehash)
- `JWT_SECRET` set & not fallback
- Migrations up to 028 applied
- Optional: streaming feature withheld until benchmarks accepted

---
Status: All corrections applied. No further action required for already merged PRs.

94 changes: 94 additions & 0 deletions scripts/benchmark_export_streaming.rs
Original file line number Diff line number Diff line change
@@ -0,0 +1,94 @@
use std::time::Instant;
use rand::Rng;
use sqlx::{PgPool, postgres::PgPoolOptions};
use chrono::{NaiveDate, Utc};
use rust_decimal::Decimal;
use rust_decimal::prelude::FromPrimitive;

// Simple benchmark harness for export CSV (buffered vs streaming).
// NOT part of CI. Run manually:
// cargo run --bin benchmark_export_streaming --features export_stream -- \
// --rows 5000 --database-url postgresql://.../jive_bench

#[tokio::main]
async fn main() -> anyhow::Result<()> {
let mut rows: i64 = 5000;
let mut db_url = std::env::var("DATABASE_URL").unwrap_or_default();
let mut args = std::env::args().skip(1);
while let Some(a) = args.next() {
match a.as_str() {
"--rows" => if let Some(v) = args.next() { rows = v.parse().unwrap_or(rows); },

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

The current argument parsing for --rows silently ignores invalid input by using unwrap_or. This can lead to the script running with unexpected parameters without any warning, which can be confusing for the user. It's better to fail fast and inform the user of the incorrect input. For more robust and user-friendly CLI argument parsing in the future, you might consider using a crate like clap.

Suggested change
"--rows" => if let Some(v) = args.next() { rows = v.parse().unwrap_or(rows); },
"--rows" => if let Some(v) = args.next() { rows = v.parse().expect("Invalid value for --rows: must be an integer"); },

"--database-url" => if let Some(v) = args.next() { db_url = v; },
_ => {}
}
}
if db_url.is_empty() { eprintln!("Set --database-url or DATABASE_URL"); std::process::exit(1); }
let pool = PgPoolOptions::new().max_connections(5).connect(&db_url).await?;

println!("Preparing benchmark data: {} rows", rows);
seed(&pool, rows).await?;

// NOTE: For real benchmark, call HTTP endpoints (buffered vs streaming) via reqwest.
// Here we only simulate the query cost to establish a baseline.
let start = Instant::now();
let count: (i64,) = sqlx::query_as("SELECT COUNT(*) FROM transactions")
.fetch_one(&pool).await?;
let dur = start.elapsed();
println!("Query COUNT(*) took {:?}, total rows {}", dur, count.0);
println!("(Next step manually: compare curl timings for /export.csv with and without --features export_stream)");
Ok(())
}

async fn seed(pool: &PgPool, rows: i64) -> anyhow::Result<()> {
// Expect at least one ledger & account; if not, create minimal scaffolding.
let ledger_id: Option<(uuid::Uuid,)> = sqlx::query_as("SELECT id FROM ledgers LIMIT 1")
.fetch_optional(pool).await?;
let ledger_id = if let Some((id,)) = ledger_id { id } else {
let user_id = uuid::Uuid::new_v4();
sqlx::query("INSERT INTO users (id,email,password_hash,name,is_active,created_at,updated_at) VALUES ($1,$2,'placeholder','Bench',true,NOW(),NOW())")
.bind(user_id).bind(format!("bench_{}@example.com", user_id))
.execute(pool).await?;
let fam_id = uuid::Uuid::new_v4();
sqlx::query("INSERT INTO families (id,name,owner_id,invite_code,member_count,created_at,updated_at) VALUES ($1,'Bench Family',$2,'BENCH',1,NOW(),NOW())")
.bind(fam_id).bind(user_id).execute(pool).await?;
sqlx::query("INSERT INTO family_members (family_id,user_id,role,permissions,joined_at) VALUES ($1,$2,'owner','{}',NOW())")
.bind(fam_id).bind(user_id).execute(pool).await?;
let l_id = uuid::Uuid::new_v4();
sqlx::query("INSERT INTO ledgers (id,family_id,name,currency,is_default,is_active,created_by,created_at,updated_at) VALUES ($1,$2,'Bench Ledger','CNY',true,true,$3,NOW(),NOW())")
.bind(l_id).bind(fam_id).bind(user_id).execute(pool).await?;
l_id
};
let account_id: Option<(uuid::Uuid,)> = sqlx::query_as("SELECT id FROM accounts WHERE ledger_id=$1 LIMIT 1")
.bind(ledger_id).fetch_optional(pool).await?;
let account_id = if let Some((id,)) = account_id { id } else {
let id = uuid::Uuid::new_v4();
sqlx::query("INSERT INTO accounts (id,ledger_id,name,account_type,currency,current_balance,created_at,updated_at) VALUES ($1,$2,'Bench Account','cash','CNY',0,NOW(),NOW())")
.bind(id).bind(ledger_id).execute(pool).await?; id
};
// Insert transactions
let mut rng = rand::thread_rng();
let batch = 1000;
let mut inserted = 0;
while inserted < rows {
let take = std::cmp::min(batch, (rows - inserted) as i64);
Copy link

Copilot AI Sep 25, 2025

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Type mismatch: rows - inserted is already i64 (since rows is i64 and inserted is i64), so the cast to i64 is unnecessary and could mask potential issues. Remove the cast.

Suggested change
let take = std::cmp::min(batch, (rows - inserted) as i64);
let take = std::cmp::min(batch, rows - inserted);

Copilot uses AI. Check for mistakes.
let mut qb = sqlx::QueryBuilder::new("INSERT INTO transactions (id,ledger_id,account_id,transaction_type,amount,currency,transaction_date,description,created_at,updated_at) VALUES ");
let mut sep = qb.separated(",");
for _ in 0..take {
Copy link

Copilot AI Sep 25, 2025

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Type mismatch in range: take is i64 but range expects usize. Use take as usize in the range or change the loop to avoid the type conversion.

Suggested change
for _ in 0..take {
for _ in 0..take as usize {

Copilot uses AI. Check for mistakes.
let id = uuid::Uuid::new_v4();
let amount = Decimal::from_f64(rng.gen_range(1.0..500.0)).unwrap();

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

Using Decimal::from_f64 followed by .unwrap() can be risky. from_f64 returns None for non-finite values (like NaN or infinity), which would cause a panic. While rng.gen_range is unlikely to produce these, it's more robust to avoid floating-point conversions when creating Decimal values. You can construct the Decimal from integers to ensure precision and safety.

Suggested change
let amount = Decimal::from_f64(rng.gen_range(1.0..500.0)).unwrap();
let amount = Decimal::new(rng.gen_range(100..50000), 2);

let date = NaiveDate::from_ymd_opt(2025, 9, rng.gen_range(1..=25)).unwrap();
sep.push("(").push_bind(id)
.push(",").push_bind(ledger_id)
.push(",").push_bind(account_id)
.push(",'expense',").push_bind(amount)
.push(",'CNY',").push_bind(date)
.push(",").push_bind(format!("Bench txn {}", inserted))
.push(",NOW(),NOW())");
inserted += 1;
}
qb.build().execute(pool).await?;
}
println!("Seeded {} transactions (ledger_id={})", rows, ledger_id);
Ok(())
}

Loading