feat: add sync storage and config foundation #76
Conversation
Prevents directory navigation issues when the user is inside the cache folder being deleted. Uses a subshell with `cd ~` and relative paths.
…s (v2.1.2)

This release includes comprehensive OpenAI ToS compliance updates and fixes 4 critical bugs that prevented per-model options and multi-turn conversations from working correctly.

## 🔒 Compliance & Legal Updates

### Terms of Service & Usage Guidelines
- Added comprehensive ToS & Usage Notice to README emphasizing personal use only
- Added Rate Limits & Responsible Use section with best practices
- Added extensive FAQ section addressing ToS compliance questions
- Updated LICENSE with MIT + Usage Disclaimer for personal development use
- Added compliance header to index.ts documenting intended use cases

### New Documentation
- **CONTRIBUTING.md**: Complete contribution guidelines with compliance requirements
- **SECURITY.md**: Security policy, vulnerability reporting, best practices
- **docs/privacy.md**: Comprehensive privacy & data handling documentation
- **Issue Templates**: Bug reports & feature requests with compliance checklists

### Documentation Updates
- Updated docs/index.md with usage notice and trademark disclaimers
- Updated docs/getting-started.md with "Before You Begin" compliance notice
- Updated docs/troubleshooting.md with subscription & compliance sections
- All references updated to "Plus/Pro" (removed Team/Enterprise mentions)

### Key Compliance Principles
- ✅ Emphasizes personal development use with own ChatGPT Plus/Pro subscription
- ✅ Uses OpenAI's official OAuth authentication (same as Codex CLI)
- ✅ Clear prohibition of commercial resale, multi-user services
- ✅ Not a "free API alternative" - uses existing ChatGPT subscription
- ✅ Proper trademark notices and attribution to OpenAI
- ✅ Transparent about data handling and privacy

## 🐛 Critical Bug Fixes

### Bug #1: Per-Model Options Ignored (Config Lookup Mismatch)
**Severity:** 🔴 HIGH

Users configured different reasoning levels for each model variant, but all variants behaved identically because the plugin was looking up normalized model names instead of the original config keys.

**Fix:** lib/request/request-transformer.ts:277
- Use original model name for config lookup before normalization
- Allows per-model options to work correctly
- Impact: Different reasoning levels now properly applied per variant

### Bug #2: Multi-Turn Conversations Fail (AI SDK Compatibility)
**Severity:** 🔴 CRITICAL

Multi-turn conversations failed with "Item not found" errors because:
1. AI SDK sends `item_reference` (not in Codex API spec)
2. IDs weren't completely stripped for stateless mode (store: false)

**Fix:** lib/request/request-transformer.ts:114-135
- Filter out AI SDK `item_reference` items
- Strip ALL IDs from remaining items (not just rs_* prefix)
- Confirmed ChatGPT backend requires store: false via testing
- Full message history preserved for LLM context

### Bug #3: Case-Sensitive Normalization
**Severity:** 🟡 MEDIUM

**Fix:** lib/request/request-transformer.ts:22-27
- Added toLowerCase() for case-insensitive matching
- Backwards compatible with old verbose config names
- Handles uppercase/mixed case user input

### Bug #4: GitHub API Rate Limiting
**Severity:** 🟡 MEDIUM

The plugin checked GitHub on every request, exhausting the 60 req/hour limit. A 15-minute cache timestamp was stored but never checked.

**Fix:** lib/prompts/codex.ts:50-53, lib/prompts/opencode-codex.ts:47-50
- Check cache TTL before GitHub API call
- Only fetch if cache >15min old
- Prevents rate limit exhaustion during testing

## ✨ Enhancements

### Debug Logging System
- New environment variable: `DEBUG_CODEX_PLUGIN=1`
- Added logDebug() and logWarn() functions
- Shows config lookups, ID filtering, options resolution
- File: lib/logger.ts

### Optimized Config Structure
- Changed to Codex CLI preset names (gpt-5-codex-low, etc.)
- Removed redundant `id` field (not used by OpenAI provider)
- Added `name` field for friendly TUI display
- File: config/full-opencode.json

### GitHub Pages Documentation
- Complete documentation restructure for GitHub Pages
- User guides: getting-started.md, configuration.md, troubleshooting.md
- Developer guides: ARCHITECTURE.md, CONFIG_FLOW.md, CONFIG_FIELDS.md, TESTING.md
- Privacy policy, security policy, contribution guidelines

## 🧪 Testing

### Test Coverage
- ✅ 159 unit tests passing (30+ new tests)
- ✅ 14 integration tests passing (actual API verification)
- ✅ All documented scenarios have test coverage

### Integration Tests (scripts/test-all-models.sh)
- Tests all 9 custom model configurations
- Tests all 4 default OpenCode models
- Verifies backwards compatibility with old config format
- Uses ENABLE_PLUGIN_REQUEST_LOGGING to verify actual API requests

### Test Results
- 14/14 model configuration tests passing
- All per-model options verified correct
- Multi-turn conversations work without errors
- Backwards compatibility confirmed

## 📝 Files Changed

### Modified (7 files)
- lib/request/request-transformer.ts: Config lookup & ID filtering fixes
- lib/prompts/codex.ts: 15-minute cache TTL
- lib/prompts/opencode-codex.ts: 15-minute cache TTL
- lib/logger.ts: Debug logging system
- config/full-opencode.json: Optimized structure
- test/request-transformer.test.ts: 30+ new tests
- LICENSE: Added usage disclaimer
- index.ts: Added compliance header
- README.md: Added ToS, Rate Limits, FAQ sections

### Created (20+ files)
- CONTRIBUTING.md: Contribution guidelines
- SECURITY.md: Security policy
- docs/index.md: GitHub Pages landing
- docs/getting-started.md: Installation guide
- docs/configuration.md: Advanced config
- docs/troubleshooting.md: Debug guide
- docs/privacy.md: Privacy policy
- docs/development/ARCHITECTURE.md: Technical architecture
- docs/development/CONFIG_FLOW.md: Config system internals
- docs/development/CONFIG_FIELDS.md: Field usage guide
- docs/development/TESTING.md: Testing guide
- .github/ISSUE_TEMPLATE/bug_report.md
- .github/ISSUE_TEMPLATE/feature_request.md
- .github/ISSUE_TEMPLATE/config.yml
- scripts/test-all-models.sh: Automated testing

## 🎯 Verification

All scenarios tested and verified:
- ✅ Default models work without config
- ✅ Custom models with per-variant options
- ✅ Old config format (backwards compatible)
- ✅ Mixed default + custom models
- ✅ Multi-turn conversations (no "item not found" errors)
- ✅ Model switching with different options
- ✅ All normalization patterns
- ✅ Complete ID filtering

## 📚 Documentation

Complete documentation suite (100+ KB):
- User guides for installation, configuration, troubleshooting
- Developer guides for architecture, config system, testing
- Privacy policy and security policy
- Compliance guidelines and contribution requirements
- Release notes in tmp/release-notes/

## 🔗 References
- OpenAI Terms of Use: https://openai.com/policies/terms-of-use/
- OpenAI Usage Policies: https://openai.com/policies/usage-policies/
- OpenAI Platform API: https://platform.openai.com/
- OpenAI Codex CLI: https://github.com/openai/codex

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
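The case-insensitive matching fix in Bug #3 boils down to lowercasing the model name before the alias lookup. A minimal sketch (the alias table here is illustrative, not the plugin's actual mapping in lib/request/request-transformer.ts):

```typescript
// Hypothetical alias table; the real plugin maps its own preset names.
const MODEL_ALIASES: Record<string, string> = {
  "gpt-5-codex-low": "gpt-5-codex",
  "gpt-5-codex-high": "gpt-5-codex",
};

function normalizeModelName(model: string): string {
  // Lowercase first so "GPT-5-Codex-Low" and "gpt-5-codex-low" resolve identically.
  const key = model.toLowerCase();
  return MODEL_ALIASES[key] ?? key;
}
```

Unmatched names fall through unchanged (apart from casing), which keeps old verbose config names working.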
…2.0.50 compatibility
This PR standardizes all documentation and configuration on GPT 5.1 model identifiers, removes deprecated GPT 5.0 models, and enforces full-opencode.json as the only officially supported configuration. The changes address the temperamental behavior of GPT 5 models and ensure users have the proper setup for reliable operation with OpenCode.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
docs: Enforce GPT 5.1 full-opencode.json as only supported configuration
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
BREAKING CHANGE: Major prompt engineering overhaul
This release brings full parity with Codex CLI's prompt selection:
Model-Specific Prompts:
- gpt-5.1-codex-max* → gpt-5.1-codex-max_prompt.md (117 lines, frontend design)
- gpt-5.1-codex*, codex-* → gpt_5_codex_prompt.md (105 lines, focused coding)
- gpt-5.1* → gpt_5_1_prompt.md (368 lines, full behavioral guidance)
Legacy GPT-5.0 → GPT-5.1 Migration:
- gpt-5-codex → gpt-5.1-codex
- gpt-5 → gpt-5.1
- gpt-5-mini, gpt-5-nano → gpt-5.1
- codex-mini-latest → gpt-5.1-codex-mini
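The migration list above can be expressed as a simple alias map; a sketch under the assumption that unrecognized names pass through unchanged (the plugin's real migration code may differ):

```typescript
// Legacy GPT-5.0 → GPT-5.1 aliases, exactly as listed in the release notes.
const LEGACY_MODEL_ALIASES: Record<string, string> = {
  "gpt-5-codex": "gpt-5.1-codex",
  "gpt-5": "gpt-5.1",
  "gpt-5-mini": "gpt-5.1",
  "gpt-5-nano": "gpt-5.1",
  "codex-mini-latest": "gpt-5.1-codex-mini",
};

function migrateModel(model: string): string {
  // Already-current identifiers are returned as-is.
  return LEGACY_MODEL_ALIASES[model] ?? model;
}
```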
New Features:
- ModelFamily type for prompt selection ("codex-max" | "codex" | "gpt-5.1")
- getModelFamily() function for model family detection
- Lazy instruction loading per model family
- Separate caching per model family
- Model family logged in request logs
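A minimal sketch of the family detection these features describe, with matching rules inferred from the prompt table above (most specific prefix first; the plugin's real getModelFamily may differ):

```typescript
type ModelFamily = "codex-max" | "codex" | "gpt-5.1";

function getModelFamily(model: string): ModelFamily {
  const m = model.toLowerCase();
  // Order matters: "gpt-5.1-codex-max" also matches the "gpt-5.1-codex" prefix.
  if (m.startsWith("gpt-5.1-codex-max")) return "codex-max";
  if (m.startsWith("gpt-5.1-codex") || m.startsWith("codex-")) return "codex";
  return "gpt-5.1";
}
```

With lazy loading, each family's instruction file is fetched on first use and cached separately, so switching models mid-session does not evict another family's prompt.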
Fixes:
- OpenCode prompt cache URL (main → dev branch)
- Multi-model session log detection in test script
Test Coverage:
- 191 unit tests (16 new for model family detection)
- 13 integration tests with family verification
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Changed config/full-opencode.json plugin from local file path to npm package name
- Added Highlights section to v4.0.0 changelog emphasizing Codex Max support

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Align config rename retry behavior with storage, log backup permission failures, and add regression coverage for the new foundation storage and config APIs. Co-authored-by: Codex <noreply@openai.com>
1 issue found across 4 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="test/plugin-config.test.ts">
<violation number="1" location="test/plugin-config.test.ts:773">
P2: The malformed-config mutation test leaves `fs.promises.mkdir` unstubbed, so it touches the real filesystem before asserting parse failure.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Document the non-reentrant flagged transaction helper and stub mkdir in the malformed config test. Co-authored-by: Codex <noreply@openai.com>
Avoid reopening the accounts file while building the combined snapshot and cover the single-read path with a regression test. Co-authored-by: Codex <noreply@openai.com>
1 issue found across 2 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="lib/storage.ts">
<violation number="1" location="lib/storage.ts:1045">
P2: `accountsSnapshot ?? ...` incorrectly re-reads storage when the provided snapshot is `null`; treat only `undefined` as “no snapshot provided.”</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Preserve null account snapshots, retry raw backup copies on Windows lock errors, and constrain prepare() to filtered imports with regression coverage. Co-authored-by: Codex <noreply@openai.com>
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@lib/config.ts`:
- Around line 538-576: The savePluginConfigMutation flow
(savePluginConfigMutation, withConfigMutationLock, CONFIG_PATH, tempPath,
renameConfigWithWindowsRetry) is racy across processes because the in-memory
mutex doesn't prevent other processes from reading/modifying CONFIG_PATH; fix by
adding an OS-visible coordination step: either acquire a filesystem
lock/lockfile (e.g., create a .lock using O_EXCL or use a cross-process lock
library) around the read/modify/write, or implement optimistic concurrency by
after computing next and before rename re-reading CONFIG_PATH (or checking
mtime/checksum) and if it changed, recompute next from the fresh contents and
retry a bounded number of times; ensure temp file cleanup and that rename still
uses renameConfigWithWindowsRetry.
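The lockfile option suggested above can be sketched with Node's `"wx"` open flag, which fails with `EEXIST` if the file already exists, making acquisition atomic at the OS level. This is an illustrative helper under assumed names (LOCK_PATH, withFileLock), not the project's implementation:

```typescript
import { promises as fs } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Assumed lock location for illustration only.
const LOCK_PATH = join(tmpdir(), "example-config.lock");

// Run fn while holding an exclusive lockfile visible to other processes.
async function withFileLock<T>(fn: () => Promise<T>): Promise<T> {
  // "wx" = create exclusively; throws EEXIST if another process holds the lock.
  const handle = await fs.open(LOCK_PATH, "wx", 0o600);
  try {
    return await fn();
  } finally {
    await handle.close();
    // Best-effort cleanup so the next acquirer is not blocked.
    await fs.unlink(LOCK_PATH).catch(() => undefined);
  }
}
```

A real version would add bounded retries with backoff on `EEXIST` and stale-lock detection, as the review thread goes on to discuss.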
In `@lib/storage.ts`:
- Around line 1335-1341: importAccounts() currently trusts the prepare hook to
only filter the incoming normalized snapshot, but the hook may add/modify
accounts; enforce the subset invariant by validating after calling
prepare(normalized, existing) and normalizeAccountStorage(prepared) that every
account in preparedNormalized.accounts exists in the original
normalized.accounts (compare by stable unique key/identity used by
normalizeAccountStorage, e.g. account.id or the canonical identity field) and
that no account identities were rewritten; if any account is new or mismatched,
throw an error explaining prepare() must return a subset (alternatively narrow
the hook API to a filter-only callback), ensuring skippedByPrepare remains
accurate.
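The subset check this comment asks for could look roughly like the following sketch, where using `id` as the stable identity key is an assumption about the account shape:

```typescript
interface Account {
  id: string;
  label?: string;
}

// Throw if prepare() returned any account that was not in the original import.
function assertPrepareReturnsSubset(original: Account[], prepared: Account[]): void {
  const known = new Set(original.map((a) => a.id));
  for (const account of prepared) {
    if (!known.has(account.id)) {
      throw new Error(`prepare() must return a subset of imported accounts; unexpected id: ${account.id}`);
    }
  }
}
```

Because only identities already present can survive, a hook that adds or rewrites accounts fails fast instead of silently corrupting the import.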
- Around line 1042-1058: The current removeOrphanedFlaggedAccounts uses the
nullish coalescing (accountsSnapshot ?? loadAccountsInternal(...)) which treats
an explicit null snapshot the same as "no snapshot provided" and causes an
unnecessary second read; change the logic to check for undefined specifically:
if accountsSnapshot === undefined then call
loadAccountsInternal(saveAccountsUnlocked), if accountsSnapshot === null then
treat as "accounts file absent" and return the original storage without loading,
and otherwise use the provided snapshot; update references in
removeOrphanedFlaggedAccounts and any call site logic around
loadAccountAndFlaggedStorageSnapshot, loadAccountsInternal, accountsSnapshot and
saveFlaggedAccountsUnlocked accordingly so explicit null stops further reads and
leaves flagged entries untouched.
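The undefined-versus-null handling described above can be sketched as follows (hypothetical helper; the real signatures in lib/storage.ts may differ):

```typescript
interface AccountsSnapshot {
  accounts: { id: string }[];
}

function resolveAccountsSnapshot(
  snapshot: AccountsSnapshot | null | undefined,
  load: () => AccountsSnapshot,
): AccountsSnapshot | null {
  if (snapshot === undefined) return load(); // no snapshot provided: read storage
  if (snapshot === null) return null;        // file known to be absent: do NOT re-read
  return snapshot;                           // reuse the caller's snapshot
}
```

The key difference from `snapshot ?? load()` is that `??` collapses `null` into the "load again" branch, which is exactly the unnecessary second read the review flags.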
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 2ab3f5ea-a385-4e66-b430-1e37bad82fd9
📒 Files selected for processing (7)
lib/config.ts, lib/schemas.ts, lib/storage.ts, lib/sync-prune-backup.ts, test/plugin-config.test.ts, test/storage.test.ts, test/sync-prune-backup.test.ts
Guard config mutations with a lockfile and enforce prepare() as a true subset of imported accounts. Co-authored-by: Codex <noreply@openai.com>
Reuse the loaded accounts snapshot in flagged transactions and cover the single-read path with a regression test. Co-authored-by: Codex <noreply@openai.com>
@coderabbitai review
@ndycode I have started the AI code review. It will take a few minutes to complete.
✅ Actions performed: Review triggered.
4 issues found across 7 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="lib/config.ts">
<violation number="1" location="lib/config.ts:135">
P1: Opening the lock file before ensuring the config directory exists can throw `ENOENT` on first run, so config writes fail on fresh environments.</violation>
</file>
<file name="test/plugin-config.test.ts">
<violation number="1" location="test/plugin-config.test.ts:784">
P2: The malformed-config test does not mock the lockfile `open`/`unlink` calls, so it depends on real `~/.opencode` state and may fail before hitting the invalid-JSON path.</violation>
</file>
<file name="test/sync-prune-backup.test.ts">
<violation number="1" location="test/sync-prune-backup.test.ts:82">
P2: This line reassigns the nested object instead of mutating it, so the deep-clone check can pass even when only a shallow clone is used.</violation>
</file>
<file name="lib/storage.ts">
<violation number="1" location="lib/storage.ts:1367">
P1: `prepare()` is called with live mutable storage objects, so it can mutate inputs before validation and bypass the subset constraint.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Bound config lock retries, create the config directory before locking, and isolate tests around the lockfile and prune backup clone checks. Co-authored-by: Codex <noreply@openai.com>
🧹 Nitpick comments (2)
lib/storage.ts (1)
1468-1469: Verify the skipped calculation accounts for all scenarios.

The calculation `skipped = skippedByPrepare + Math.max(0, preparedNormalized.accounts.length - imported)` may double-count in edge cases. When `prepare()` filters accounts and those filtered accounts would also have been deduplicated, the total `skipped` could exceed `normalized.accounts.length - imported`. Consider verifying the calculation against the original import file's account count:

```diff
- const skipped = skippedByPrepare + Math.max(0, preparedNormalized.accounts.length - imported);
+ const skipped = Math.max(0, normalized.accounts.length - imported);
```

Alternatively, if the intent is to separately track prepare-filtered vs dedupe-skipped, the current approach works but may benefit from a comment explaining the semantics.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/storage.ts` around lines 1468-1469: The skipped count can double-count; replace the current line computing skipped with a single-source calculation using the original import count (e.g., use `normalized.accounts.length`) so `skipped = Math.max(0, normalized.accounts.length - imported)` rather than adding `skippedByPrepare`; or, if you intentionally want two separate metrics, keep the current math but add a clarifying comment next to `skippedByPrepare` and the skipped computation. Update the line that currently references `preparedNormalized.accounts.length` and `imported` (and the variables `deduplicatedAccounts`, `existingAccounts`) to use `normalized.accounts.length` to accurately reflect the total skipped from the original file.

lib/config.ts (1)
133-164: Stale lockfile may block subsequent mutations if the process crashes.

If a process crashes or is killed between creating the lockfile and unlinking it (lines 140-146), the lockfile will remain and block all future config mutations until manually removed.
Consider adding stale lock detection (e.g., check lockfile age or write PID to it):
♻️ Proposed enhancement for stale lock handling
```diff
 async function withConfigProcessLock<T>(fn: () => Promise<T>): Promise<T> {
   let lastError: NodeJS.ErrnoException | null = null;
   await fs.mkdir(dirname(CONFIG_PATH), { recursive: true });
   for (let attempt = 0; attempt < CONFIG_LOCK_RETRY_ATTEMPTS; attempt += 1) {
     try {
       const handle = await fs.open(CONFIG_LOCK_PATH, "wx", 0o600);
+      // Write PID to help identify stale locks
+      await handle.write(`${process.pid}\n`);
       try {
         return await fn();
       } finally {
         await handle.close();
         await fs.unlink(CONFIG_LOCK_PATH).catch(() => undefined);
       }
     } catch (error) {
       const code = (error as NodeJS.ErrnoException).code;
       if (code === "EEXIST") {
+        // Check if lock is stale (e.g., > 30 seconds old)
+        try {
+          const stat = await fs.stat(CONFIG_LOCK_PATH);
+          if (Date.now() - stat.mtimeMs > 30_000) {
+            await fs.unlink(CONFIG_LOCK_PATH).catch(() => undefined);
+            continue; // Retry immediately after removing stale lock
+          }
+        } catch {
+          // Lock may have been released, retry
+        }
         lastError = error as NodeJS.ErrnoException;
         await new Promise((resolve) =>
           setTimeout(
             resolve,
             Math.min(CONFIG_LOCK_RETRY_BASE_DELAY_MS * 2 ** attempt, CONFIG_LOCK_RETRY_MAX_DELAY_MS),
           ),
         );
         continue;
       }
       throw error;
     }
   }
```
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@lib/storage.ts`:
- Around line 1468-1469: The skipped count can double-count; replace the current
line computing skipped with a single-source calculation using the original
import count (e.g., use normalized.accounts.length) so skipped = Math.max(0,
normalized.accounts.length - imported) rather than adding skippedByPrepare, or
if you intentionally want two separate metrics keep the current math but add a
clarifying comment next to skippedByPrepare and the skipped computation; update
the line that currently references preparedNormalized.accounts.length and
imported (and the variables deduplicatedAccounts, existingAccounts) to use
normalized.accounts.length to accurately reflect total skipped from the original
file.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 44407809-a551-40e2-9aca-1ced35cb2243
📒 Files selected for processing (5)
lib/config.ts, lib/storage.ts, test/plugin-config.test.ts, test/storage.test.ts, test/sync-prune-backup.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- test/sync-prune-backup.test.ts
Retry through stale config lockfiles and cover first-run config-guard paths with focused tests. Co-authored-by: Codex <noreply@openai.com>
* chore: add anti-slop PR quality workflow
* docs: add required PR template and screening guidance
* chore: pin anti-slop action to full SHA
* chore: document PR workflow guardrails
* docs: clarify anti-slop maintainer guardrails
* chore: tighten anti-slop canary handling
* fix: avoid canary token in PR template body
* fix: rerun PR quality on label changes
* fix: remove PR template canary hint
* fix: protect PR screening files

---------

Co-authored-by: ndycode <ndycode@users.noreply.github.com>
Pull request overview
This PR adds the foundational storage/config plumbing needed for upcoming codex-multi-auth sync work, including safer cross-process config mutation, flagged-account transaction/snapshot helpers, import preview support, and backup utilities (raw and token-redacted).
Changes:
- Added process-visible + in-process locking for config mutations and introduced a sync feature toggle in config/schema.
- Added flagged-account transaction/snapshot helpers, import preview against an arbitrary existing snapshot, and raw accounts backup with Windows retry handling.
- Added prune-backup payload generation that deep-clones data and redacts token fields, plus accompanying tests.
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 4 comments.
Show a summary per file
| File | Description |
|---|---|
| lib/config.ts | Adds config mutation locking + persisted sync toggle; introduces Windows rename retry helper. |
| lib/schemas.ts | Extends plugin config schema with experimental.syncFromCodexMultiAuth.enabled. |
| lib/storage.ts | Adds flagged account helpers, import preview helper, and raw backup helper with Windows retry. |
| lib/sync-prune-backup.ts | New helper to create token-redacted, deep-cloned prune-backup payloads. |
| test/plugin-config.test.ts | Adds tests for config mutation lock/rename behavior and persisted sync toggle reads. |
| test/storage.test.ts | Adds tests for import prepare/subset enforcement, raw backup, and snapshot reuse. |
| test/sync-prune-backup.test.ts | Adds tests for token redaction and deep-clone isolation of prune payload. |
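The token-redacting, deep-cloning payload described for lib/sync-prune-backup.ts can be sketched as follows. The field names and the recursive walk are assumptions for illustration, not the module's actual code:

```typescript
// Assumed set of secret-bearing fields; the real module may redact more.
const TOKEN_FIELDS = new Set(["accessToken", "refreshToken", "idToken"]);

function redactTokens<T>(value: T): T {
  // Deep clone first so the caller's objects are never mutated.
  const clone = structuredClone(value);
  const walk = (node: unknown): void => {
    if (node === null || typeof node !== "object") return;
    for (const [key, child] of Object.entries(node as Record<string, unknown>)) {
      if (TOKEN_FIELDS.has(key) && typeof child === "string") {
        (node as Record<string, unknown>)[key] = "[REDACTED]";
      } else {
        walk(child); // recurse into nested objects and arrays
      }
    }
  };
  walk(clone);
  return clone;
}
```

Cloning before redaction is what the deep-clone isolation tests verify: mutating the returned payload must never leak back into live storage objects.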
```typescript
          await fs.unlink(CONFIG_LOCK_PATH).catch(() => undefined);
          continue;
        }
      } catch (statError) {
        const statCode = (statError as NodeJS.ErrnoException).code;
        if (statCode === "ENOENT") {
```

```typescript
async function renameConfigWithWindowsRetry(sourcePath: string, destinationPath: string): Promise<void> {
  let lastError: NodeJS.ErrnoException | null = null;
  for (let attempt = 0; attempt < 5; attempt += 1) {
    try {
      await fs.rename(sourcePath, destinationPath);
      return;
    } catch (error) {
      const code = (error as NodeJS.ErrnoException).code;
      if (code === "EPERM" || code === "EBUSY") {
        lastError = error as NodeJS.ErrnoException;
        await new Promise((resolve) => setTimeout(resolve, 10 * 2 ** attempt));
        continue;
      }
      throw error;
    }
  }
```
lib/storage.ts (Outdated)

```typescript
export async function saveFlaggedAccounts(storage: FlaggedAccountStorageV1): Promise<void> {
export async function loadFlaggedAccounts(): Promise<FlaggedAccountStorageV1> {
  return withStorageLock(async () => loadFlaggedAccountsUnlocked());
```

```typescript
export async function backupRawAccountsFile(filePath: string, force = true): Promise<void> {
  await withStorageLock(async () => {
    const resolvedPath = resolvePath(filePath);

    if (!force && existsSync(resolvedPath)) {
      throw new Error(`File already exists: ${resolvedPath}`);
    }

    await migrateLegacyProjectStorageIfNeeded(saveAccountsUnlocked);
    const storagePath = getStoragePath();
    if (!existsSync(storagePath)) {
      throw new Error("No accounts to back up");
    }
```

```typescript
    const deduplicatedAccounts = deduplicateAccountsForStorage(merged);
    const imported = deduplicatedAccounts.length - existingAccounts.length;
    const skipped = normalized.accounts.length - imported;
    return Promise.resolve({
      imported,
      total: deduplicatedAccounts.length,
      skipped,
```

```typescript
    await fs.mkdir(dirname(resolvedPath), { recursive: true });
    await copyFileWithWindowsRetry(storagePath, resolvedPath);
    await fs.chmod(resolvedPath, 0o600).catch((chmodErr) => {
```
Summary

- `codex-multi-auth` sync

Safety Notes

- `EBUSY`/`EPERM` and backup permission-restriction failures are logged instead of being swallowed
- `prepare()` is enforced as a true subset of the imported accounts, including identity preservation, and is evaluated against cloned inputs so caller mutation cannot bypass validation

Validation

- npm run typecheck
- npm run lint
- npx vitest run test/storage.test.ts test/plugin-config.test.ts test/sync-prune-backup.test.ts --reporter=dot

Notes

- #75, #77

Summary by CodeRabbit

New Features

Improvements
note: greptile review for oc-chatgpt-multi-auth. cite files like `lib/foo.ts:123`. confirm regression tests + windows concurrency/token redaction coverage.

Greptile Summary
this PR lays the config/storage foundation for the `codex-multi-auth` sync engine landing in #77. it adds a two-layer config mutation path (in-process promise-chain mutex + process-visible lockfile with stale-lock cleanup and bounded backoff), flagged-account transaction and snapshot helpers that reuse a single accounts read inside the storage lock, import preview support, and raw/token-redacted backup helpers. the `prepare()` subset guard and deep-clone isolation in the import path are solid.

issues found:

- `exportAccounts` TOCTOU race on `force=false` — `existsSync` is called outside any exclusive write, then `fs.writeFile` uses flag `w` (not `wx`). a concurrent exporter or any process creating the file between the check and the write silently clobbers it with no error. the fix is to use `fs.open(path, 'wx')` for atomic exclusive creation.
- `writePreImportBackupFile` missing post-rename chmod + warn log — `backupRawAccountsFile` calls `fs.chmod(0o600)` after the write and emits a `log.warn` when chmod fails (visible on Windows where `mode: 0o600` in `writeFile` is a no-op). `writePreImportBackupFile` does neither, so token-bearing pre-import backups land world-readable on Windows with no diagnostic log entry.
- missing vitest coverage for concurrent `withConfigMutationLock` serialization — the process-lock paths are well-exercised, but no test fires two concurrent `setSyncFromCodexMultiAuthEnabled` calls to verify the in-process mutex queues them correctly without a torn write.

Confidence Score: 3/5

the `exportAccounts` TOCTOU should be resolved before the sync engine lands on top. `prepare()` subset enforcement and structuredClone isolation are correct. score is reduced because the `exportAccounts` TOCTOU is a logic bug (broken contract for `force=false`) and the missing chmod+log in `writePreImportBackupFile` leaves token-bearing backup files silently world-readable on Windows — the exact risk class called out in the repo's own safety notes.

Important Files Changed

- `savePluginConfigMutation` with two-layer locking (in-process promise mutex + process-visible lockfile with stale-lock cleanup), temp-file rename, and `setSyncFromCodexMultiAuthEnabled`; the locking mechanism is solid but the in-process mutex lacks a concurrency regression test
- `exportAccounts` has a TOCTOU race on the `force=false` guard, and `writePreImportBackupFile` omits the post-rename chmod + warn logging that `backupRawAccountsFile` already has for Windows token safety
- `experimental.syncFromCodexMultiAuth.enabled` added to `PluginConfigSchema`; minimal and correct
- `loadAccountAndFlaggedStorageSnapshot` and `withFlaggedAccountsTransaction`; no coverage for `exportAccounts` TOCTOU or pre-import backup chmod on Windows
- `setSyncFromCodexMultiAuthEnabled` process-lock paths; missing concurrent in-process mutex serialization test

Sequence Diagram

```mermaid
sequenceDiagram
    participant Caller
    participant withConfigMutationLock
    participant withConfigProcessLock
    participant ConfigFile
    Caller->>withConfigMutationLock: setSyncFromCodexMultiAuthEnabled(enabled)
    withConfigMutationLock->>withConfigMutationLock: queue behind previous promise
    withConfigMutationLock->>withConfigProcessLock: fn()
    withConfigProcessLock->>withConfigProcessLock: fs.mkdir (config dir)
    loop retry up to 5x
        withConfigProcessLock->>ConfigFile: open(.lock, "wx")
        alt EEXIST + stale
            withConfigProcessLock->>ConfigFile: unlink(.lock)
        else EEXIST + fresh
            withConfigProcessLock->>withConfigProcessLock: exponential backoff delay
        end
    end
    withConfigProcessLock->>ConfigFile: readFile (current config)
    withConfigProcessLock->>ConfigFile: writeFile (.pid.ts.tmp, mode 0o600)
    withConfigProcessLock->>ConfigFile: rename(.tmp → config)
    withConfigProcessLock->>ConfigFile: unlink(.lock)
    withConfigMutationLock->>withConfigMutationLock: release in-process mutex
    Note over Caller,ConfigFile: importAccounts flow
    Caller->>withAccountStorageTransaction: importAccounts(filePath, options, prepare?)
    withAccountStorageTransaction->>withAccountStorageTransaction: withStorageLock acquired
    withAccountStorageTransaction->>withAccountStorageTransaction: prepare() subset validation
    withAccountStorageTransaction->>ConfigFile: writePreImportBackupFile (mode 0o600, NO chmod post-rename)
    withAccountStorageTransaction->>ConfigFile: persist merged storage
    withAccountStorageTransaction->>withAccountStorageTransaction: release lock
```

Comments Outside Diff (4)
lib/config.ts, line 153-155 (link)

stale-lock `continue` eats retry budget

each stale-lock removal path does `continue`, which still increments `attempt` in the for-loop. with `CONFIG_LOCK_RETRY_ATTEMPTS = 5`, five consecutive stale-lock detections exhaust all retries without ever acquiring the lock — the caller gets a timeout error even though no live contention existed.

the fix is to avoid incrementing the attempt counter after a successful stale cleanup — either by not counting the attempt on a successful unlink/continue, or by resetting the attempt counter:

lib/storage.ts, line 1138-1139 (link)

`loadFlaggedAccounts` still triggers double-read of accounts file

`loadFlaggedAccountsUnlocked()` is called here without an `accountsSnapshot`. inside `removeOrphanedFlaggedAccounts`, `accountsSnapshot === undefined` causes a second `loadAccountsInternal(saveAccountsUnlocked)` — a second open of the token-bearing accounts file within the same storage-lock window.

this is the identical race fixed for `withFlaggedAccountsTransaction` (line 1155) and `loadAccountAndFlaggedStorageSnapshot` (line 1168), but not carried over here. on windows with av scanning, the second rapid open risks `EBUSY` with no retry guard.

fix matches what the other two callers already do:

a regression test mirroring the `accountReadCount === 1` check in "reuses the loaded accounts snapshot inside withFlaggedAccountsTransaction" should be added for this path too.

lib/storage.ts, line 1295-1297 (link)

TOCTOU race on `existsSync` when `force=false`

`withStorageLock` is a single-process promise-chain mutex; it does not prevent a separate OS process from creating `resolvedPath` between the sync `existsSync` check and the `copyFileWithWindowsRetry` call. on windows, a second backup process (or av move of a temp file) can create the destination in that window, and the raw token-bearing backup is silently overwritten even though `force=false` was explicitly requested.

use `fs.open` with the `wx` flag to make the "create only if absent" check atomic:

a vitest covering the cross-process race (two concurrent `backupRawAccountsFile(path, false)` calls) is missing.

lib/storage.ts, line 1337-1349 (link)

TOCTOU race on `force=false` guard

`existsSync` at line 1337 is checked outside any exclusive write, then `withAccountStorageTransaction` acquires the storage lock, then `fs.writeFile` is called. between the `existsSync` and the final `writeFile` there is a visible window (at minimum, the full duration of the storage transaction) where another process or concurrent call can create the file. when that happens, `fs.writeFile` with its default `w` flag silently clobbers the file, breaking the `force=false` contract with no error.

on Windows this race is worse because antivirus or indexing can hold the file long enough for a second exporter to slip through.

the correct fix is to replace the `existsSync` check + `writeFile` pair with an exclusive `open('wx')` call, which is atomic at the OS level:

Prompt To Fix All With AI
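The atomic create-only-if-absent pattern being recommended can be sketched in isolation like this (writeExclusive is a hypothetical helper, not the project's actual exportAccounts wiring):

```typescript
import { promises as fs } from "node:fs";

// Write `data` to `path`; with force=false, "wx" makes creation atomic:
// the open fails with EEXIST instead of clobbering a concurrently created file.
async function writeExclusive(path: string, data: string, force: boolean): Promise<void> {
  const handle = await fs.open(path, force ? "w" : "wx", 0o600);
  try {
    await handle.writeFile(data);
  } finally {
    await handle.close();
  }
}
```

Because the existence check and the file creation happen in a single `open` call, there is no window for another process to slip in between them.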
Last reviewed commit: 2114491
Context used: