feat(app): harden export flow and release gates#4

Merged: saagpatel merged 4 commits into main from codex/feat/release-closeout on Mar 24, 2026
Conversation

@saagpatel (Owner) commented Mar 24, 2026

What

  • harden the desktop export flow and settings privacy behavior
  • add guarded desktop release automation, contract generation, and release documentation
  • tighten local git and perf guardrails used during release closeout

Why

  • the finish lane still needed release-closeout and docs-contract work before hygiene cleanup could safely continue

How

  • add export contract generation and checking scripts with updated generated OpenAPI output
  • wire desktop CI and release workflows plus artifact/perf helpers
  • extend frontend and Tauri tests around settings and export behavior

Testing

  • bash .codex/scripts/run_verify_commands.sh

Lockfile rationale

  • pnpm-lock.yaml changed to capture the release-closeout dependency graph and generated contract tooling updates already exercised in the local verify lane.

Baseline governance

  • perf-baseline-update label applied: local release-closeout baseline refresh
  • reviewer signoff: compared against the focused local verify run before opening the PR
  • rollback note: revert the refreshed .perf-baselines/* files with the release-closeout branch if perf signal regresses on GitHub

Risk / Notes

  • openapi/openapi.generated.json changed as part of the contract/release lane
  • dependency PR cleanup should happen after this branch is the current reference point

- add codex verification and branch hygiene guardrails
- bootstrap husky, version sync, and local toolchain support
- align the desktop finish lane with generated-file protections

Tests: bash .codex/scripts/run_verify_commands.sh
- add the release runbook and export-quality ADR
- capture perf baselines and supporting proof scripts
- bundle release verification helpers for repeatable closeout

Tests: bash .codex/scripts/run_verify_commands.sh
- switch history export to PDF and update the desktop bridge
- mask saved API keys and add frontend and Rust-side coverage
- refresh docs to match the shipped export and privacy behavior

Tests: bash .codex/scripts/run_verify_commands.sh
- replace the invalid `--bundles none` CLI usage in desktop smoke builds
- keep the smoke job focused on compile-only validation across GitHub runners

Tests: not run (workflow-only change)

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 550e2f49bf

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +329 to +330

```js
required: ['content_input_id'],
properties: { content_input_id: { type: 'string' } },
```


P2: Use camelCase keys in generated command contract

The generated contract is described as the frontend command surface, but its request schemas still require snake_case keys (for example content_input_id here, plus api_key, page_size, and source_url elsewhere in the same file). In this commit the actual invoke payloads were switched to camelCase in src/lib/tauriApi.ts (contentInputId, apiKey, pageSize, sourceUrl), so consumers following the generated OpenAPI will send the wrong argument names and hit missing-parameter errors at runtime.

Useful? React with 👍 / 👎.
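One way to address this is to normalize the generated schema keys to camelCase so they match the invoke payloads. A minimal sketch follows; the helper names (`toCamel`, `camelizeSchemaKeys`) are illustrative assumptions, not functions from this repo:

```javascript
// Convert a snake_case key to camelCase, e.g. "content_input_id" -> "contentInputId".
const toCamel = (key) => key.replace(/_([a-z])/g, (_, c) => c.toUpperCase());

// Rewrite a request schema's `required` list and `properties` map to camelCase
// so the generated contract agrees with the camelCase payloads in tauriApi.ts.
function camelizeSchemaKeys(schema) {
  return {
    ...schema,
    required: (schema.required ?? []).map(toCamel),
    properties: Object.fromEntries(
      Object.entries(schema.properties ?? {}).map(([k, v]) => [toCamel(k), v]),
    ),
  };
}

const fixed = camelizeSchemaKeys({
  required: ['content_input_id'],
  properties: { content_input_id: { type: 'string' } },
});
console.log(fixed.required[0]); // "contentInputId"
```

Running a pass like this in the contract-generation script (or emitting camelCase at the source) would keep the OpenAPI output in sync with the frontend surface.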

```js
process.exit(2);
}

const ratio = (c - b) / b;
```


P2: Handle zero baselines before computing perf ratios

compare-metric.mjs divides by the baseline value without checking for zero, and this same commit seeds baselines like .perf-baselines/build-time.json and .perf-baselines/bundle.json with 0. Any non-zero current metric will produce Infinity and be treated as a regression, which makes the comparison script unusable (or permanently failing) when these baseline files are used.

Useful? React with 👍 / 👎.
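A guard along these lines would avoid the Infinity ratio. This is a hedged sketch only; the function name, return shape, and tolerance are assumptions and the real compare-metric.mjs may be structured differently:

```javascript
// Compare a current metric against a baseline, tolerating a seeded baseline
// of 0 (as in the new .perf-baselines/build-time.json / bundle.json files).
function compareMetric(current, baseline, tolerance = 0.05) {
  if (baseline === 0) {
    // A ratio is undefined here; flag that the baseline needs seeding with a
    // real value instead of treating every non-zero current as a regression.
    return { ok: current === 0, ratio: null, reason: 'zero baseline' };
  }
  const ratio = (current - baseline) / baseline;
  return { ok: ratio <= tolerance, ratio };
}

console.log(compareMetric(120, 0));   // zero-baseline case: ratio is null, not Infinity
console.log(compareMetric(105, 100)); // ratio 0.05, within the 5% tolerance
```

Whether a zero baseline should fail the job or merely warn is a policy choice; the key point is that the division is never performed against 0.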

@saagpatel saagpatel merged commit de8de08 into main Mar 24, 2026
23 checks passed
