[codex] Add CLI workspace monorepo context #1977
git-subtree-dir: cli git-subtree-split: d73aa0ca9295ffb664221d26d59608d7c154da80
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough
Adds a full Capgo CLI workspace: package manifest and build script, extensive CLI commands wired via Commander, bundle zip/encrypt/decrypt/upload pipelines, cloud build and onboarding flows (including Apple App Store Connect integration and onboarding UI), Supabase-backed APIs/permission gates, docs/skill artifacts, IDE/tooling, and CI workflow to run/select CLI tests.
Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant User as "User / CLI"
  participant CLI as "Capgo CLI"
  participant Zip as "Local Zipper/Encryptor"
  participant Supabase as "Supabase (DB/API)"
  participant Uploader as "Upload Service (TUS/PUT)"
  participant Build as "Cloud Build Service"
  participant WS as "WebSocket Logs"
  User->>CLI: run upload/request (apikey, appId, path)
  CLI->>Supabase: validate API key, permissions, 2FA, channel
  CLI->>Zip: zip project, compute checksum, optionally encrypt
  Zip-->>CLI: return bundle file + checksum
  CLI->>Uploader: create upload job & upload bundle (TUS/PUT)
  Uploader-->>Build: notify upload complete (r2_path)
  CLI->>Build: start build request (metadata, credentials)
  Build->>WS: open websocket for logs
  WS-->>CLI: stream build logs/status
  Build->>Supabase: persist build status/result
  CLI->>Supabase: poll/fetch final status and report outcome
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: bbbda4fff4
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Actionable comments posted: 3
Note: Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟠 Major comments (38)
cli/LICENCE-662-665 (1)
662-665: ⚠️ Potential issue | 🟠 Major
Remove non-license trailing text from the license file.
Lines 662-665 are unrelated to AGPLv3 and should not be in a legal license document.
Proposed fix
```diff
-Toucan
-Over 100 word limit
-We’re working to increase this limit and keep load times short. In the meantime, try highlighting up to 100 words at one time to translate.
-Don’t show again
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/LICENCE` around lines 662 - 665, The LICENCE file contains non-license trailing UI text ("Toucan\nOver 100 word limit\nWe’re working to increase this limit..." etc.) appended after the AGPLv3 text; remove that extraneous block so the file contains only the official AGPLv3 license content (delete the trailing Toucan/Over 100 word limit/Don’t show again lines), ensure no other non-license text remains at the end, and save the cleaned LICENCE file.

cli/src/build/platform-paths.ts-15-39 (1)
15-39: ⚠️ Potential issue | 🟠 Major
Reject parent/absolute paths during normalization.
`normalizeRelPath` currently preserves values like `../ios` or `/tmp/ios`. That can route packaging outside the intended project subtree.
🛡️ Proposed guardrails
```diff
 export function normalizeRelPath(input: string): string {
 @@
   // Strip trailing "/" (repeatable)
   while (s.endsWith('/')) s = s.slice(0, -1)
+  // Reject absolute paths and parent traversal
+  if (s.startsWith('/') || /^[A-Za-z]:\//.test(s))
+    return ''
+  if (s.split('/').includes('..'))
+    return ''
+  if (s === '.') return ''
```

Also applies to: 47-54
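For illustration, the suggested guard can be sketched as a standalone pure function. This is a simplified stand-in written for this review, not the real helper; the name `normalizeRelPathSketch` and the exact normalization order are assumptions:

```typescript
// Simplified stand-in for the suggested normalizeRelPath behavior:
// normalize separators, strip trailing slashes, then reject absolute
// paths and parent traversal by returning '' (the "invalid" sentinel).
function normalizeRelPathSketch(input: string): string {
  let s = input.replace(/\\/g, '/').trim()
  while (s.endsWith('/'))
    s = s.slice(0, -1)
  // Absolute POSIX path or Windows drive prefix escapes the project tree.
  if (s.startsWith('/') || /^[A-Za-z]:\//.test(s))
    return ''
  // Any '..' segment can climb out of the intended subtree.
  if (s.split('/').includes('..'))
    return ''
  if (s === '.')
    return ''
  return s
}

console.log(normalizeRelPathSketch('ios/App/')) // 'ios/App'
console.log(normalizeRelPathSketch('../ios')) // ''
```

The checks run after backslash conversion, as the prompt above recommends, so `C:\proj\ios` is caught by the drive-prefix test.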
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/build/platform-paths.ts` around lines 15 - 39, normalizeRelPath currently allows absolute (e.g. "/tmp/ios" or "C:\...") and parent-traversal segments ("../ios"), which can escape the project tree; update normalizeRelPath to reject any absolute paths (detect leading "/" or Windows drive prefixes like "C:" after separator normalization) and any parent-segment usages (reject if any path segment === ".." or if it starts with "../") by returning an empty string or a clear error value, and apply the same guard to the other normalization block with identical logic in this file (the repeat at lines ~47-54), ensuring checks run after converting backslashes to "/" but before collapsing or trimming segments.

cli/src/build/platform-paths.ts-47-48 (1)
47-48: 🛠️ Refactor suggestion | 🟠 Major
Replace `any` with an explicit config interface.
Line 47 loses type-safety for the shape this helper expects. Define focused interfaces for the platform config structure:
♻️ Proposed typing
```diff
+interface CapacitorPlatformConfig {
+  path?: string
+}
+
+interface CapacitorConfigLike {
+  ios?: CapacitorPlatformConfig
+  android?: CapacitorPlatformConfig
+}
 @@
-export function getPlatformDirFromCapacitorConfig(capConfig: any, platform: 'ios' | 'android'): string {
+export function getPlatformDirFromCapacitorConfig(capConfig: CapacitorConfigLike, platform: 'ios' | 'android'): string {
```

As per coding guidelines: "`**/*.{ts,tsx}`: Prefer `interface` for defining object shapes in TypeScript".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/build/platform-paths.ts` around lines 47 - 48, The function getPlatformDirFromCapacitorConfig currently uses a loose any for capConfig; declare explicit interfaces (e.g., an interface CapacitorPlatformConfig { path?: string } and an interface CapacitorConfig { ios?: CapacitorPlatformConfig; android?: CapacitorPlatformConfig }) and replace the parameter type any with the new CapacitorConfig type on getPlatformDirFromCapacitorConfig(capConfig: CapacitorConfig, platform: 'ios' | 'android'): string; update any internal accesses (configured = capConfig?.[platform]?.path) to use the typed properties so TypeScript enforces the expected shape.

cli/src/bundle/check.ts-13-17 (1)
13-17: ⚠️ Potential issue | 🟠 Major
Avoid symlink-following recursion in directory traversal.
At line 13, `statSync` follows symlinks; recursive descent at line 16 can loop on cyclic symlink chains and hang or crash the CLI.
🛡️ Proposed fix
```diff
-import { readdirSync, readFileSync, statSync } from 'node:fs'
+import { lstatSync, readdirSync, readFileSync } from 'node:fs'
 @@
-      const stats = statSync(filePath)
+      const stats = lstatSync(filePath)
+      if (stats.isSymbolicLink())
+        continue
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/bundle/check.ts` around lines 13 - 17, The code currently uses statSync when inspecting filePath inside searchInDirectory which follows symlinks and allows recursive descent into cyclic links; change the check to use lstatSync and skip entries where stats.isSymbolicLink() is true (or otherwise avoid recursing into symlinked directories) so searchInDirectory does not follow symlinks and cannot get stuck in symlink loops.

cli/.vscode/tasks.json-15-23 (1)
15-23: ⚠️ Potential issue | 🟠 Major
Use `bunx` instead of `npx` for the CLI workspace task, but verify the webpack dev task is still needed.
Line 15 invokes npm tooling in a Bun monorepo. However, the webpack task references `webpack` and `webpack.config.js`, which do not exist in the `cli/` workspace (webpack is not in devDependencies and the config file is absent). Either remove this task if it's no longer used, or add the missing webpack dependency and config file if it should be functional.
♻️ Proposed change (if keeping the task)
```diff
-      "command": "npx",
+      "command": "bunx",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/.vscode/tasks.json` around lines 15 - 23, The tasks.json entry currently runs "npx" with args ["webpack", "--config webpack.config.js"] but in the cli workspace there is no webpack devDependency or webpack.config.js; either replace the "command": "npx" with "bunx" to use Bun tooling and ensure webpack is installed, or remove the whole webpack dev task if it’s unused; if you intend to keep the task, add webpack to cli package.json devDependencies and create the missing webpack.config.js (refer to the "command" field, the "args" array containing "webpack" and "--config webpack.config.js", and the surrounding isBackground task definition).

cli/src/config/index.ts-17-18 (1)
17-18: ⚠️ Potential issue | 🟠 Major
Initialize `extConfig` instead of silently dropping the write.
When `oldConfig.app.extConfig` is missing, this function exits without persisting anything. Callers like `setSetting` can still report success, so fresh projects will look updated even though no config file changed.
🛠️ Suggested fix
```diff
-  let { extConfig } = oldConfig.app
-  if (extConfig) {
+  let extConfig = oldConfig.app.extConfig ?? { plugins: {} }
   if (!extConfig.plugins) {
     extConfig.plugins = {
       [key]: {},
     }
   }
 @@
-    writeConfigCap(extConfig, oldConfig.app.extConfigFilePath)
-  }
+  writeConfigCap(extConfig, oldConfig.app.extConfigFilePath)
```

Also applies to: 32-33
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/config/index.ts` around lines 17 - 18, The code currently reads let { extConfig } = oldConfig.app and bails out if extConfig is falsy, causing writes to be dropped; update the logic in the function that merges/writes config (referencing oldConfig, oldConfig.app, and extConfig) to initialize extConfig to an empty object when missing (e.g., assign extConfig = oldConfig.app.extConfig || {} or set oldConfig.app.extConfig = {}) before applying updates so subsequent operations like setSetting actually persist changes; ensure the same initialization is added to the other occurrence flagged (lines 32-33) so both code paths always write a valid extConfig object.

cli/src/config/index.ts-19-23 (1)
19-23: ⚠️ Potential issue | 🟠 Major
Don't persist a synthetic `plugins.extConfig` entry.
This initializer creates a real plugin key named `extConfig`, so the next write will serialize `plugins.extConfig` into the project config even though it is not the target plugin.
🧹 Suggested fix
```diff
   if (!extConfig.plugins) {
     extConfig.plugins = {
-      extConfig: {},
       [key]: {},
     }
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/config/index.ts` around lines 19 - 23, The initializer currently writes a synthetic plugins.extConfig entry; instead ensure you only create the actual plugin key under extConfig.plugins. Change the logic so that when extConfig.plugins is missing you set extConfig.plugins = { [key]: {} } (not { extConfig: {}, [key]: {} }), and if extConfig.plugins exists but extConfig.plugins[key] is missing, initialize only extConfig.plugins[key] = {}; update the code paths that reference extConfig, plugins and key accordingly.

cli/src/app/setting.ts-31-47 (1)
31-47: ⚠️ Potential issue | 🟠 Major
Reject empty path segments before mutating the config.
`setting.split('.')` turns `''` into `['']` and `foo..bar` into `['foo', '', 'bar']`, so both inputs currently write to an empty-string property instead of failing fast.
🛡️ Suggested fix
```diff
   let baseObj = config.config as any
   const pathElements = setting.split('.')
-  if (pathElements.length === 0) {
+  if (!setting.trim() || pathElements.some(segment => segment.length === 0)) {
     if (!silent)
       log.error('Invalid path')
     throw new Error('Invalid config path')
   }
```
Verify each finding against the current code and only fix it if needed. In `@cli/src/app/setting.ts` around lines 31 - 47, The split of setting into pathElements can produce empty segments (e.g., '' or 'foo..bar') which currently creates an empty-string property; before mutating baseObj, validate the result of setting.split('.') by rejecting any empty or all-whitespace segment (e.g., if pathElements.length === 0 or pathElements.some(seg => seg.trim() === '')), log the error when !silent and throw an Error('Invalid config path'), and only proceed to the existing for-loop and assignment to baseObj[pathElements.at(-1)!] when validation passes; update the code around the setting.split('.') / pathElements usage and keep references to baseObj, pathElements, options.bool and options.string intact.

cli/src/app/info.ts-11-16 (1)
11-16: ⚠️ Potential issue | 🟠 Major
Don't collapse lookup failures into "dependencies are outdated".
`getLatestDependencies()` silently drops packages whose registry lookup fails, then the exact-object comparison turns those missing keys into `Some dependencies are not up to date`. A transient network failure will fail `doctor` with the wrong diagnosis.
🩺 Suggested direction
```diff
 async function getLatestDependencies(installedDependencies: Record<string, string>) {
   const latestDependencies: Record<string, string> = {}
+  const unresolved: string[] = []
   const keys = Object.keys(installedDependencies)
   const versions = await Promise.all(keys.map(dependency => getLatestVersion(dependency)))
   versions.forEach((v, index) => {
     if (v)
       latestDependencies[keys[index]] = v
+    else
+      unresolved.push(keys[index])
   })
-  return latestDependencies
+  return { latestDependencies, unresolved }
 }
```

Then fail with a distinct "could not resolve latest version for ..." error, or compare only the keys that were resolved successfully.
Also applies to: 88-91
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/app/info.ts` around lines 11 - 16, getLatestDependencies currently ignores failed lookups from getLatestVersion by dropping undefined results into latestDependencies, which causes the overall comparison to incorrectly report "Some dependencies are not up to date"; update the logic in getLatestDependencies (the block that builds latestDependencies using keys, getLatestVersion, and versions) to detect failures: collect which keys returned undefined or errored and either (A) throw or return a distinct error listing "could not resolve latest version for X,Y" so callers can surface a separate diagnosis, or (B) only include successfully resolved keys in latestDependencies and document/return the unresolved keys so the comparison excludes them; ensure you reference getLatestVersion, getLatestDependencies, latestDependencies and keys when changing the behavior.

cli/src/api/channels.ts-197-214 (1)
197-214: ⚠️ Potential issue | 🟠 Major
The channel table headers and row values are out of sync.
`headers` declares 18 columns, but each row only appends 12 values and the tail values no longer match the header labels. The rendered table will be misaligned and several booleans will appear under the wrong headings.
📋 Suggested fix
```diff
-  t.headers = ['Name', 'Version', 'Public', 'iOS', 'Android', 'Auto Update', 'Native Auto Update', 'Device Self Set', 'Progressive Deploy', 'Secondary Version', 'Secondary Version Percentage', 'AB Testing', 'AB Testing Version', 'AB Testing Percentage', 'Emulator', 'Device', 'Dev', 'Prod']
+  t.headers = ['Name', 'Version', 'Public', 'iOS', 'Android', 'Auto Update', 'Native Auto Update', 'Device Self Set', 'Emulator', 'Device', 'Dev', 'Prod']
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/api/channels.ts` around lines 197 - 214, The table headers (t.headers) list 18 columns but each pushed row only contains 12 values, causing misalignment; update the row construction in the loop (the t.rows.push call iterating over data.toReversed()) to output 18 values matching each header in order — e.g. row.name, row.version?.name, row.public ? '✅':'❌', row.ios ? '✅':'❌', row.android ? '✅':'❌', (invert) !row.disable_auto_update ? '✅':'❌' for "Auto Update", (invert) !row.disable_auto_update_under_native ? '✅':'❌' for "Native Auto Update", row.allow_device_self_set ? '✅':'❌' for "Device Self Set", row.progressive_deploy ? '✅':'❌' (or the correct field) for "Progressive Deploy", row.secondary?.name for "Secondary Version", row.secondary_percentage + '%' (or formatted number) for "Secondary Version Percentage", row.ab_testing ? '✅':'❌' for "AB Testing", row.ab_testing_version for "AB Testing Version", row.ab_testing_percentage + '%' for "AB Testing Percentage", and finally row.allow_emulator, row.allow_device, row.allow_dev, row.allow_prod as '✅'/'❌'; ensure each header has a corresponding entry and adjust boolean inversions/formatting where noted.

cli/src/build/mobileprovision-parser.ts-53-58 (1)
53-58: ⚠️ Potential issue | 🟠 Major
Nested `<dict>` blocks will truncate valid entitlement payloads.
`extractNestedPlistValue()` stops at the first `</dict>`, so an `Entitlements` section containing any nested dictionary before `application-identifier` will be parsed as incomplete and return `null`. That breaks valid provisioning profiles.
🛠️ Suggested fix
```diff
 function extractNestedPlistValue(xml: string, dictKey: string, valueKey: string): string | null {
-  const dictKeyRegex = new RegExp(`<key>${escapeRegex(dictKey)}</key>\\s*<dict>([\\s\\S]*?)</dict>`)
-  const dictMatch = xml.match(dictKeyRegex)
-  if (!dictMatch)
+  const marker = `<key>${dictKey}</key>`
+  const markerIndex = xml.indexOf(marker)
+  if (markerIndex === -1)
     return null
-  return extractPlistValue(dictMatch[1], valueKey)
+
+  const dictStart = xml.indexOf('<dict>', markerIndex)
+  if (dictStart === -1)
+    return null
+
+  let depth = 0
+  for (let i = dictStart; i < xml.length; i++) {
+    if (xml.startsWith('<dict>', i)) {
+      depth++
+      i += '<dict>'.length - 1
+      continue
+    }
+    if (xml.startsWith('</dict>', i)) {
+      depth--
+      if (depth === 0)
+        return extractPlistValue(xml.slice(dictStart, i + '</dict>'.length), valueKey)
+      i += '</dict>'.length - 1
+    }
+  }
+
+  return null
 }
```
Verify each finding against the current code and only fix it if needed. In `@cli/src/build/mobileprovision-parser.ts` around lines 53 - 58, extractNestedPlistValue currently uses a single regex (dictKeyRegex) that stops at the first </dict>, which breaks when the target dict contains nested <dict> blocks; replace the regex-based capture with a manual tag-balance scan: locate the <key>{dictKey}</key> occurrence, find the immediately following <dict> start, then iterate forward counting nested <dict> and </dict> tags until the matching closing </dict> is found, extract that balanced substring and pass it to extractPlistValue; update the function extractNestedPlistValue to use this balanced-parse approach instead of dictKeyRegex so nested dictionaries are handled correctly.

cli/src/api/channels.ts-87-116 (1)
87-116: ⚠️ Potential issue | 🟠 Major
Use `.maybeSingle()` to distinguish "not found" from other errors.
Supabase `.single()` returns a 406 error for both zero rows and multiple rows. This masks three failure modes:
- Row not found (create should be attempted)
- Multiple `unknown` versions exist (conflicting state that should not trigger another create)
- Transient query errors (should not fall through to insert)
Use `.maybeSingle()` instead, which returns `{ data: null, error: null }` only when zero rows match. Then check both `!findError && data` to distinguish "not found" from other errors.
🧭 Suggested fix
```diff
   const { data, error: findError } = await supabase
     .from('app_versions')
     .select('id')
     .eq('app_id', appId)
     .eq('name', 'unknown')
-    .single()
+    .maybeSingle()

-  if (!findError) {
+  if (!findError && data) {
     return data
   }
+
+  if (findError) {
+    if (!silent)
+      log.error(`Cannot query unknown version for ${appId}: ${formatError(findError)}`)
+    throw new Error(`Cannot query unknown version for app ${appId}: ${formatError(findError)}`)
+  }

   // Not found - try to create it silently
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/api/channels.ts` around lines 87 - 116, The query using .single() on the app_versions select masks "not found" vs other errors; change the initial lookup to use .maybeSingle() and then treat a successful find only when !findError && data exists, so absence of data triggers the silent create branch; if findError is set (including multiple-row/other errors) log/throw using formatError (preserve existing behavior), and ensure the create branch still calls getOrganizationId and handles createError as before (refer to the supabase .from('app_versions') select, the getOrganizationId call, and the existing formatError/log/silent handling).

cli/src/api/app.ts-6-10 (1)
6-10: ⚠️ Potential issue | 🟠 Major
Don't collapse RPC failures into "app does not exist."
`checkAppExists()` ignores the `exist_app_v2` error path, so auth/network/RPC failures become `false` and downstream commands can attempt duplicate creates or return the wrong "not found" message. Throw on `error` here and let callers handle the failure.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/api/app.ts` around lines 6 - 10, The function checkAppExists collapses any RPC error from supabase.rpc('exist_app_v2', { appid }).single() into a false result; change it to check the RPC response for an error and throw that error (or a wrapped error) instead of returning false so callers can handle auth/network/RPC failures. Specifically, in checkAppExists, inspect the returned error object from the Supabase call and if present throw it (or throw a new Error including error.message and context about exist_app_v2/appid) otherwise continue to return !!app.

cli/src/app/set.ts-49-63 (1)
49-63: ⚠️ Potential issue | 🟠 Major
Retention handling mixes units and makes `0` impossible to set.
The guards treat `retention` as truthy/falsy, so `0` is skipped and later written as `undefined`. The max check also compares the pre-conversion value against a seconds constant, so values well above 2 years in days still pass.
Suggested fix
```diff
-  if (retention && Number.isNaN(Number(retention))) {
+  if (retention !== undefined && Number.isNaN(Number(retention))) {
     if (!silent)
       log.error('retention value must be a number')
     throw new Error('Retention value must be a number')
   }
-  else if (retention && retention < 0) {
+  else if (retention !== undefined && Number(retention) < 0) {
     if (!silent)
       log.error('retention value cannot be less than 0')
     throw new Error('Retention value cannot be less than 0')
   }
-  else if (retention && retention >= 63113904) {
+  else if (retention !== undefined && Number(retention) * 24 * 60 * 60 >= 63113904) {
     if (!silent)
       log.error('retention value cannot be greater than 63113904 seconds (2 years)')
     throw new Error('Retention value cannot be greater than 63113904 seconds (2 years)')
   }
 @@
-    retention: !retention ? undefined : retention * 24 * 60 * 60,
+    retention: retention === undefined ? undefined : Number(retention) * 24 * 60 * 60,
```

Also applies to: 110-110
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/app/set.ts` around lines 49 - 63, The validation treats retention as truthy so 0 is ignored and comparisons use the pre-conversion value against a seconds constant; fix by converting retention once (e.g. const retentionNum = retention == null ? undefined : Number(retention)), use Number.isNaN(retentionNum) to detect invalid numbers, perform explicit null/undefined check (allow 0), and compare retentionNum against the seconds max constant (63113904) and >=0 using retentionNum; update both occurrences of the retention guards (the checks around the retention variable and the duplicate at the later occurrence) to use retentionNum and explicit null/undefined checks.

cli/src/app/set.ts-68-69 (1)
68-69: ⚠️ Potential issue | 🟠 Major
Updating any field will reset the icon to the default asset.
`iconUrl` starts as `defaultAppIconPath` and is always included in the update payload. If the user changes only the name/retention and no icon file is found, the existing custom icon is overwritten.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/app/set.ts` around lines 68 - 69, The code always sets iconUrl to defaultAppIconPath and includes it in the update payload, which overwrites any existing custom icon when the user updates other fields; change the logic so iconUrl is initialized undefined (not defaultAppIconPath), only assign it when an icon file is explicitly provided or when the user requests resetting to default, and modify the payload construction (where the update is assembled and sent) to include iconUrl only if it is explicitly set; update any helper/flags you use (e.g., the iconFile variable or an explicit resetIcon boolean) so existing custom icons remain unchanged when the user only updates name/retention.

cli/src/app/add.ts-54-67 (1)
54-67: ⚠️ Potential issue | 🟠 Major
Block the reserved app ID before returning on the existence check.
The `io.ionic.starter` guard is currently inside the "already exists" branch. In a fresh org/database, `checkAppExists()` returns false and this reserved ID is accepted.
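The suggested ordering can be sketched with the existence result factored out. This is an illustrative sketch, not the real `ensureAppDoesNotExist`; `assertAppIdUsable` and `appAlreadyExists` are invented names standing in for the real flow around `checkAppExists()`:

```typescript
// Sketch: the reserved-ID guard runs unconditionally, before the
// existence result is consulted, so a fresh database cannot accept
// the Ionic starter template's default app ID.
function assertAppIdUsable(appId: string, appAlreadyExists: boolean): void {
  if (appId === 'io.ionic.starter')
    throw new Error('Reserved appId, please change it in capacitor config')
  if (appAlreadyExists)
    throw new Error(`App ${appId} already exists`)
}
```

Putting the cheap, unconditional check first also avoids a needless database round trip for an ID that can never be accepted.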
Verify each finding against the current code and only fix it if needed. In `@cli/src/app/add.ts` around lines 54 - 67, The reserved appId 'io.ionic.starter' is only checked after checkAppExists returns true, so a new org can still use it; in ensureAppDoesNotExist move the guard for appId === 'io.ionic.starter' to the top (before awaiting/returning on checkAppExists) and keep the same behavior: log via log.error when !silent and throw the Error('Reserved appId, please change it in capacitor config'); ensure you still call checkAppExists(supabase, appId) afterwards to detect real existing apps.

cli/src/app/set.ts-43-45 (1)
43-45: ⚠️ Potential issue | 🟠 Major
The final update is scoped to the creator, not the authorized admin.
Line 45 already accepts any org admin, but the update still filters by `.eq('user_id', userId)`. For admins who did not create the app, this matches zero rows and the command reports success anyway because no affected-row check is performed.
Also applies to: 105-115
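The permission-aware filter selection can be sketched as pure logic. This is an invented illustration (`QueryFilter`, `buildUpdateFilters`, `isOrgAdmin` are hypothetical names), not the actual Supabase query builder code:

```typescript
// Sketch: drop the creator scope when the caller is an org admin, so the
// update can match apps the admin did not create. The caller should still
// verify the affected-row count afterwards instead of assuming success.
interface QueryFilter {
  column: string
  value: string
}

function buildUpdateFilters(appId: string, userId: string, isOrgAdmin: boolean): QueryFilter[] {
  const filters: QueryFilter[] = [{ column: 'app_id', value: appId }]
  // Only non-admin creators need the user_id scope.
  if (!isOrgAdmin)
    filters.push({ column: 'user_id', value: userId })
  return filters
}
```

Pairing this with a zero-rows check (for example via `.select()` on the update and inspecting the returned rows) closes the silent-success gap the comment describes.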
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/app/set.ts` around lines 43 - 45, The update is incorrectly always scoped to the creator by using .eq('user_id', userId) even when the caller is an org admin; change the update logic in the command so that if checkAppExistsAndHasPermissionOrgErr confirms OrganizationPerm.admin (i.e., caller is org admin) you do not add the .eq('user_id', userId) filter, otherwise keep it for normal creators; also after performing the update query (the same pattern around lines 105-115) check the number of affected rows and return an error if zero so the CLI does not silently succeed when nothing was updated.

cli/src/app/delete.ts-110-118 (1)
110-118: ⚠️ Potential issue | 🟠 Major
`getOrganizationId` is called after the app is deleted.
The app record is deleted at lines 99-102, but `getOrganizationId(supabase, appId)` is called at line 110. Since the app no longer exists in the database, this query will likely fail or return incorrect results, causing the analytics event to be sent with an invalid `user_id`.
Consider fetching the organization ID before deletion (you already have
`appOwner?.owner_org.id` from line 51).
Proposed fix
```diff
+  // Capture orgId before deletion
+  const orgId = appOwner?.owner_org?.id
+
   const { error: dbError } = await supabase
     .from('apps')
     .delete()
     .eq('app_id', appId)
   if (dbError) {
     if (!silent)
       log.error('Could not delete app')
     throw new Error(`Could not delete app: ${formatError(dbError)}`)
   }
-  const orgId = await getOrganizationId(supabase, appId)
   await sendEvent(options.apikey, {
     channel: 'app',
     event: 'App Deleted',
     icon: '🗑️',
-    user_id: orgId,
+    user_id: orgId ?? '',
     tags: { 'app-id': appId },
     notify: false,
   }).catch(() => {})
```
Verify each finding against the current code and only fix it if needed. In `@cli/src/app/delete.ts` around lines 110 - 118, getOrganizationId is being called after the app row is deleted, so the analytics event may receive an invalid user_id; use the already-available org id (appOwner?.owner_org.id) or fetch the organization ID before you perform the deletion and pass that value into sendEvent (the call sites to getOrganizationId and sendEvent should be reordered or replaced so sendEvent uses the pre-deletion org id). Specifically, obtain orgId from appOwner?.owner_org.id (or call getOrganizationId(supabase, appId) before deleteApp/delete query) and then call sendEvent(options.apikey, {... user_id: orgId ...}). Ensure sendEvent remains after deletion if you want post-delete notification but supply the pre-fetched orgId.

cli/src/bundle/cleanup.ts-46-50 (1)
46-50: ⚠️ Potential issue | 🟠 Major
Use `tryParse` to safely handle invalid version strings without crashing.
`parse(row.name)` will throw if any `row.name` in the database is not a valid semver string, crashing the entire cleanup operation. Use `tryParse` with a null check instead, following the pattern already used elsewhere in the codebase.
Proposed fix
```diff
+import { tryParse } from '@std/semver'
+
 function getRemovableVersionsInSemverRange(
   data: Database['public']['Tables']['app_versions']['Row'][],
   bundleVersion: SemVer,
   nextMajorVersion: SemVer,
 ) {
   const toRemove: Database['public']['Tables']['app_versions']['Row'][] = []
   for (const row of data ?? []) {
-    const rowVersion = parse(row.name)
-    if (greaterThan(rowVersion, bundleVersion) && lessThan(rowVersion, nextMajorVersion))
-      toRemove.push(row)
+    const rowVersion = tryParse(row.name)
+    if (rowVersion && greaterThan(rowVersion, bundleVersion) && lessThan(rowVersion, nextMajorVersion))
+      toRemove.push(row)
   }
   return toRemove
 }
```
Verify each finding against the current code and only fix it if needed. In `@cli/src/bundle/cleanup.ts` around lines 46 - 50, Replace the unsafe parse(row.name) call with semver.tryParse(row.name) and skip invalid versions: in the loop over data (the block that sets rowVersion and compares it to bundleVersion and nextMajorVersion), call tryParse(row.name), check for null/undefined and continue the loop when it returns null, and only perform greaterThan/lessThan comparisons and push toRemove when rowVersion is a valid SemVer object.

cli/src/build/credentials-command.ts-503-507 (1)
503-507: ⚠️ Potential issue | 🟠 Major
Default listing misses local-only credential sets.
The comment says the default path should show both local and global credentials, but when `--local` is omitted `localAppIds` is forced to `[]`. That means local-only apps never make it into `allAppIds`, and anything shown in the mixed path is labeled as global.
Also applies to: 529-531
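The suggested default listing can be sketched as a merge over pre-fetched ID lists. This is an illustrative stand-in (`mergeCredentialAppIds` is an invented name; `localIds`/`globalIds` stand in for the results of `listAllApps(true)`/`listAllApps(false)`):

```typescript
// Sketch: local credential sets are always gathered; global ones are added
// unless --local restricts the view. Local entries win, so a local override
// is never mislabeled as global.
function mergeCredentialAppIds(
  localIds: string[],
  globalIds: string[],
  localOnly: boolean,
): { id: string, scope: 'local' | 'global' }[] {
  const merged: { id: string, scope: 'local' | 'global' }[] = []
  const seen = new Set<string>()
  for (const id of localIds) {
    seen.add(id)
    merged.push({ id, scope: 'local' })
  }
  if (!localOnly) {
    for (const id of globalIds) {
      if (!seen.has(id))
        merged.push({ id, scope: 'global' })
    }
  }
  return merged
}
```

The key change versus the flagged code is that the local list is fetched unconditionally; only the global list is gated on the `--local` flag.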
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/build/credentials-command.ts` around lines 503 - 507, The default path incorrectly skips fetching local app IDs (localAppIds is set to [] when options?.local is falsy), so local-only credential sets never appear; change the logic to always fetch local app IDs (call listAllApps(true) for localAppIds) and only skip global app IDs when options?.local is true (keep globalAppIds = options?.local ? [] : await listAllApps(false)); apply the same fix to the analogous block around the listAllApps usage at the later occurrence (lines handling localAppIds/globalAppIds there) so both places gather local apps unless explicitly restricted by --local.

.sonarcloud.properties-1-2 (1)
1-2: ⚠️ Potential issue | 🟠 Major
Don't exclude the new CLI workspace from Sonar entirely.
Adding `cli/**` to `sonar.exclusions` drops all first-party CLI code from bug/security analysis just as this PR makes it a core workspace. If the goal is to mute imported or generated noise, narrow the exclusions to those specific paths under `cli/` instead of removing the whole workspace.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.sonarcloud.properties around lines 1 - 2, the Sonar exclusions currently remove the entire CLI workspace via the pattern "cli/**" in sonar.exclusions and sonar.cpd.exclusions; remove those broad "cli/**" entries and instead restrict exclusions to generated or noisy subpaths (e.g., build artifacts, node_modules, generated files) by replacing "cli/**" with explicit patterns like the specific generated directories under the CLI (for example "cli/build/**" or "cli/node_modules/**") so the CLI source remains included in analysis; update the sonar.exclusions and sonar.cpd.exclusions entries accordingly (refer to the sonar.exclusions and sonar.cpd.exclusions lines in the diff).

cli/src/build/credentials-command.ts (1)
88-97: ⚠️ Potential issue | 🟠 Major

Validate explicit `bundleId=path` assignments against the provisioning profile.

For explicit entries, the left-hand bundle ID is accepted without checking whether the parsed `.mobileprovision` actually authorizes that app ID. A typo here saves unusable signing data and only fails much later during the build.

Possible guard
```diff
   const profileData = readFileSync(resolvedPath)
   const base64 = profileData.toString('base64')
   const info = parseMobileprovision(resolvedPath)
+  if (equalsIdx !== -1) {
+    const exactMatch = info.bundleId === bundleId
+    const wildcardPrefix = info.bundleId.endsWith('.*') ? info.bundleId.slice(0, -2) : undefined
+    const wildcardMatch = wildcardPrefix ? bundleId.startsWith(`${wildcardPrefix}.`) : false
+    if (!exactMatch && !wildcardMatch) {
+      throw new Error(`Provisioning profile ${resolvedPath} is for "${info.bundleId}", not "${bundleId}"`)
+    }
+  }
```

Also applies to: 138-145
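The exact/wildcard check in the suggested guard can be isolated into a small predicate. A self-contained sketch of that matching rule (same logic as the guard, not the CLI's actual helper):

```typescript
// Returns true when a profile's bundle ID authorizes the requested app ID.
// Wildcard profiles end in ".*" and cover every bundle under that prefix.
function profileAuthorizes(profileBundleId: string, bundleId: string): boolean {
  if (profileBundleId === bundleId)
    return true // explicit App ID profile, exact match
  if (profileBundleId.endsWith('.*'))
    return bundleId.startsWith(`${profileBundleId.slice(0, -2)}.`)
  return false
}
```

Rejecting a mismatch here, at save time, surfaces the typo immediately instead of during a later cloud build.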
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/build/credentials-command.ts` around lines 88 - 97, when handling explicit entries of the form bundleId=path (variables equalsIdx, bundleId, profilePath, entry), after trimming and before accepting them, open and parse the .mobileprovision at profilePath and validate that the provisioning profile actually authorizes the provided bundleId (check the profile's AppID/Entitlements, e.g., application-identifier and the provisioning ProfileName/AppIDName/wildcard entries). If the parsed profile does not include or match the left-hand bundleId, throw a clear Error indicating the mismatch and the profile path so the user can fix the typo; apply the same validation logic to the other explicit-entry branch referenced in the review (lines ~138-145) so both explicit assignment paths reject invalid bundleId/profile combinations.

cli/src/build/qr.ts (1)
38-40: ⚠️ Potential issue | 🟠 Major

Avoid logging raw unknown `custom_msg` payloads.

Printing full unknown payloads risks leaking secrets/PII in terminal logs and copied support output. Use a redacted/keys-only fallback.
🔒 Safer fallback example
```diff
-      warn(`Unknown message type "${kind}" — you may need to update the CLI`)
-      log(JSON.stringify(data, null, 2))
+      warn(`Unknown message type "${kind}" — you may need to update the CLI`)
+      const keys = Object.keys(data)
+      log(`payload keys: ${keys.length ? keys.join(', ') : '(none)'}`)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/build/qr.ts` around lines 38 - 40, the current fallback prints the entire unknown `custom_msg` payload (variables `kind` and `data`) which may leak secrets/PII; change the behavior in the unknown-message branch (where `warn(...)` and `log(JSON.stringify(data, null, 2))` are called) to redact values and/or emit keys-only instead. Replace the raw dump with a redacted summary: produce a sorted list of top-level keys from `data` and log either those keys or a JSON object where each value is replaced with a placeholder like "[REDACTED]" (optionally redact values for keys matching sensitive names), then call `warn` with the same contextual message and log the redacted summary instead of the full `data`.

cli/src/build/onboarding/ui/app.tsx (1)
660-666: ⚠️ Potential issue | 🟠 Major

The completion screen exits before users can read it.

As soon as this state is reached, line 663 schedules `exit()` after 100ms. In practice that hides the final build URL and follow-up command almost immediately, especially on the failure path where the user needs the retry instruction most.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/build/onboarding/ui/app.tsx` around lines 660 - 666, the auto-exit in the 'build-complete' branch is too aggressive and hides the final output; update the handler around step === 'build-complete' (the code that calls setBuildOutput([]) and schedules exit() using timer) so it does not call exit() after 100ms. Either remove the automatic exit and wait for explicit user action/keypress to call exit(), or substantially increase the timeout (e.g., several seconds) and only auto-exit on the success path while never auto-exiting on the failure path (use the cancelled flag and the build result to decide). Ensure you update the logic that references timer, cancelled and exit so users can read the final build URL and retry instructions.

cli/src/docs.ts (1)
225-227: ⚠️ Potential issue | 🟠 Major

Skip `generate-docs`; don't abort the entire file-generation pass.

`return` on line 227 exits `generateDocs()` completely, so any commands after `generate-docs` never get an `.mdx` file.

🩹 Suggested fix
```diff
   for (const cmd of commands) {
     if (cmd.name === 'generate-docs')
-      return // Skip documenting this command
+      continue // Skip documenting this command
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/docs.ts` around lines 225 - 227, the loop inside generateDocs() is using "return" when encountering a cmd with cmd.name === 'generate-docs', which aborts the entire docs generation; change that "return" to "continue" so only the current iteration is skipped and the rest of the commands array still produces .mdx files (locate the for (const cmd of commands) loop and the conditional checking cmd.name === 'generate-docs' in generateDocs()).

cli/src/bundle/zip.ts (1)
56-77: ⚠️ Potential issue | 🟠 Major

Centralize failure output in one place.

These validation branches emit an error and then throw, and the `catch` block emits again. That duplicates human output, and in `--json` mode it gives callers multiple error records for a single failure.

Also applies to: 88-105, 189-196
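One way to get single emission is to make the validation branches throw coded errors and leave all rendering to the one catch block. A sketch of that pattern; the `CliError` shape and the `invalid_semver` code are illustrative, not the CLI's real types:

```typescript
// Illustrative error type carrying a machine-readable code.
class CliError extends Error {
  constructor(public code: string, message: string) {
    super(message)
    this.name = 'CliError'
  }
}

// Validation only throws; it never prints anything itself.
function validateBundle(bundle: string): void {
  const semverLike = /^\d+\.\d+\.\d+$/ // simplified stand-in for regexSemver
  if (!semverLike.test(bundle))
    throw new CliError('invalid_semver', `"${bundle}" is not a valid version`)
}

// The single catch renders the failure exactly once, in the requested format.
function run(bundle: string, json: boolean): string {
  try {
    validateBundle(bundle)
    return 'ok'
  }
  catch (error) {
    if (error instanceof CliError) {
      return json
        ? JSON.stringify({ error: error.code, message: error.message })
        : `Error: ${error.message}`
    }
    throw error
  }
}
```

Because only the catch emits, `--json` callers get exactly one error record per failure instead of duplicates.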
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/bundle/zip.ts` around lines 56 - 77, both validation branches (checking bundle against regexSemver and checking resolvedAppId/bundle/path) currently emit/log errors then throw, causing duplicate output; instead remove the local emitJsonError/log.error calls and throw Errors that carry a machine-readable code (e.g., set error.name or error.code like 'invalid_semver' and 'missing_argument') so the single catch handler can produce the human or JSON output. Update the checks around variables bundle, regexSemver, silent, json, emitJsonError, log.error, path, extConfig?.config?.webDir and resolvedAppId to only throw with the specific error code/message, and apply the same change to the similar branches referenced (lines ~88-105 and ~189-196) so the centralized catch emits once based on error.code.

cli/package.json (1)
30-39: ⚠️ Potential issue | 🟠 Major

Align `require` condition with official guidance: ship a CommonJS build or remove dual-package exports.

The `"require"` condition here points to an ES module, but Node.js doesn't reliably support this across your declared engine versions. Node.js v20.19.0+ can require ESM files with synchronous module graphs, but your package declares `"engines": { "node": ">=20.0.0" }`, creating a gap where versions 20.0.0–20.18.x cannot properly resolve `require('@capgo/cli')`. Additionally, official Node.js guidance explicitly recommends shipping CommonJS (e.g., `.cjs` files) for `require` conditions to avoid namespace object interop issues and ensure broad compatibility. If you intend dual-package support, either (a) tighten `engines.node` to match the capability (≥20.19.0), (b) build and ship a CommonJS entrypoint, or (c) remove the `require` condition entirely to signal ESM-only.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/package.json` around lines 30 - 39, the exports "require" fields in package.json point at ESM files (./dist/index.js and ./dist/src/sdk.js), which breaks CommonJS consumers; either tighten engines.node to ">=20.19.0", or produce CJS bundles and update the "require" conditions to those CJS outputs (e.g., ./dist/index.cjs and ./dist/src/sdk.cjs), or remove the "require" keys to be ESM-only. Pick one approach and update the exports mapping accordingly; look for the "exports" object and the "engines.node" entry to change, and the build pipeline that emits ./dist/index.js and ./dist/src/sdk.js to add CJS outputs if you choose the CJS route.

cli/src/build/onboarding/types.ts (1)
48-64: ⚠️ Potential issue | 🟠 Major

Avoid persisting `_privateKeyPem` in onboarding progress.

This type is the resume payload, so `_privateKeyPem` turns the Apple distribution private key into plain-text recoverable state on disk. A crash before the cleanup step leaves signing material behind indefinitely.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/build/onboarding/types.ts` around lines 48 - 64, OnboardingProgress currently includes a sensitive plain-text field _privateKeyPem that must not be persisted; remove _privateKeyPem from the persisted OnboardingProgress type and stop serializing/deserializing it, and instead keep the private key in an ephemeral in-memory or secure runtime container (e.g. a transient runtime-only object used by the onboarding flow) until .p12 creation, and ensure explicit cleanup after CertificateData/p12 creation; update any functions that read/write OnboardingProgress (search for OnboardingProgress, load/save, serialize/deserialize) to drop the _privateKeyPem field and adjust callers to obtain the private key from the new transient store during the signing step.

cli/src/bundle/upload.ts (1)
611-615: ⚠️ Potential issue | 🟠 Major

Guard channels that have no linked version yet.

`data[0].version` can be `null` for an existing channel with nothing assigned. Accessing `version.deleted` here will throw before the upload finishes.

Suggested fix
```diff
   const version = data[0].version
+  if (!version) {
+    log.warn('No linked bundle found in the channel you are trying to upload to')
+    return
+  }
   if (version.deleted) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/bundle/upload.ts` around lines 611 - 615, the code assumes data[0].version is non-null before checking version.deleted, which throws when a channel exists but has no linked version; update the block around data[0] to first check if data[0].version is null/undefined and bail (e.g., log a warning like "No version linked to channel" and return) before accessing version.deleted, then proceed to check version.deleted for the existing deletion case using the same variable name (version) so callers like the upload flow safely handle both "no version" and "deleted version" cases.

cli/src/build/onboarding/csr.ts (1)
65-82: ⚠️ Potential issue | 🟠 Major

Don't default every exported `.p12` to the same password.

Using a repo-wide constant (`'capgo'`) means any leaked archive is immediately decryptable. Please require an explicit password or generate a per-credential secret and store it alongside the credential metadata.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/build/onboarding/csr.ts` around lines 65 - 82, DEFAULT_P12_PASSWORD and createP12 are currently using a hardcoded default ('capgo') which makes all exported .p12 files vulnerable; change createP12 to require an explicit password argument (remove or deprecate DEFAULT_P12_PASSWORD usage) or generate a strong random per-credential password inside createP12 and return it in P12Result so callers can persist it with the credential metadata; update callers of createP12 to supply or store the returned password and remove any implicit fallback to DEFAULT_P12_PASSWORD to ensure no repo-wide default is used.

cli/src/bundle/decrypt.ts (1)
53-58: ⚠️ Potential issue | 🟠 Major

Honor `keyData` in the missing-key preflight.

This guard ignores `options.keyData`, so `bundle decrypt --key-data ...` still fails with "Public Key not found" whenever neither `baseKeyPubV2` nor config is present, even though `resolvePublicKey()` accepts inline key data a few lines later.

Suggested fix
```diff
-  if (!options.key && !existsSync(baseKeyPubV2) && !extConfig.config.plugins?.CapacitorUpdater?.publicKey) {
+  if (!options.key && !options.keyData && !existsSync(baseKeyPubV2) && !extConfig.config.plugins?.CapacitorUpdater?.publicKey) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/bundle/decrypt.ts` around lines 53 - 58, the preflight guard incorrectly omits inline key data (options.keyData), causing a false "Public Key not found" error; update the conditional that checks for key presence to also consider options.keyData (i.e., treat options.keyData as a valid key source alongside options.key, the file at baseKeyPubV2, and extConfig.config.plugins?.CapacitorUpdater?.publicKey) so the block only throws when none of these are present, and keep the existing logging behavior using silent and log.error; reference resolvePublicKey(), baseKeyPubV2, options.key, options.keyData, extConfig and CapacitorUpdater.publicKey to locate where to change the guard.

cli/src/bundle/decrypt.ts (1)
83-105: ⚠️ Potential issue | 🟠 Major

Don't force V2 checksum verification for local/workspace updater specs.

`prepareBundleFile()` already treats local `link:`/`file:` updater versions as "latest", but this path falls back to `supportsV3Checksum = false` for any non-semver value. In the new workspace setup that can make checksum verification pick V2 against a current updater and reject valid bundles.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/bundle/decrypt.ts` around lines 83 - 105, the code treats any non-semver updaterVersion as V2 by leaving supportsV3Checksum false when parse(updaterVersion) fails; instead, detect local/workspace specs (e.g., values starting with "link:" or "file:" or any non-semver that indicates a local/latest install) and treat them as "latest" (i.e., enable V3). Update the logic around getInstalledVersion/parse/isDeprecatedPluginVersion in decrypt.ts so that when parse throws or returns undefined but updaterVersion exists and matches a local spec or is non-semver, set supportsV3Checksum = true; otherwise keep the existing isDeprecatedPluginVersion check to determine V3 vs V2.

cli/src/api/crypto.ts (1)
152-162: ⚠️ Potential issue | 🟠 Major

Normalize both public-key PEM header variants.

`calcKeyId()` only strips `BEGIN/END RSA PUBLIC KEY`. If the config stores the public key in the common SPKI form (`BEGIN PUBLIC KEY`), the returned ID still includes header text and won't match the device-reported key ID.

Suggested fix
```diff
   const cleanedKey = publicKey
-    .replace(/-----BEGIN RSA PUBLIC KEY-----/g, '')
-    .replace(/-----END RSA PUBLIC KEY-----/g, '')
-    .replace(/\n/g, '')
-    .replace(/\r/g, '')
-    .replace(/ /g, '')
+    .replace(/-----BEGIN (RSA )?PUBLIC KEY-----/g, '')
+    .replace(/-----END (RSA )?PUBLIC KEY-----/g, '')
+    .replace(/\s+/g, '')
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/api/crypto.ts` around lines 152 - 162, the calcKeyId() logic only strips "-----BEGIN RSA PUBLIC KEY-----"/"-----END RSA PUBLIC KEY-----" headers so keys in SPKI form ("-----BEGIN PUBLIC KEY-----"/"-----END PUBLIC KEY-----") remain polluted; update the cleanup in calcKeyId (or the function that computes the key id) to also remove the SPKI header/footer variants (both "BEGIN PUBLIC KEY" and "END PUBLIC KEY"), and continue normalizing newlines/whitespace before returning the first 20 characters so the produced ID matches device-reported key IDs regardless of PEM variant.

cli/src/bundle/partial.ts (1)
43-60: ⚠️ Potential issue | 🟠 Major

Treat local/workspace updater specs as supported instead of hard-failing delta uploads.

This helper returns `{ version: null, supportsBrotliV2: false }` for any non-semver updater string. In the new Bun workspace setup that can happen for local `link:`/`file:` installs, which means delta uploads get rejected even when the checked-out updater is current. `prepareBundleFile()` already has a local-install fallback; reusing it here would keep workspace tests aligned.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/bundle/partial.ts` around lines 43 - 60, getUpdaterVersion currently returns {version: null, supportsBrotliV2: false} whenever parse(updaterVersion) fails, which causes delta uploads to be rejected for local/workspace specs; instead, detect local/workspace installs and treat them as supported by reusing the local-install fallback used by prepareBundleFile. Update getUpdaterVersion so that when parse(updaterVersion) throws or returns undefined, call the same local resolution used by prepareBundleFile (or otherwise detect link:/file:/workspace: style specs from getInstalledVersion output) and, if it resolves to a local updater, return a non-null version string and supportsBrotliV2=true (or compute via isDeprecatedPluginVersion using the resolved coerced version); keep using getInstalledVersion, parse, isDeprecatedPluginVersion and BROTLI_MIN_UPDATER_VERSION_V7 to determine semantics for non-local cases.

cli/src/bundle/encrypt.ts (1)
98-117: ⚠️ Potential issue | 🟠 Major

Mirror the local-updater fallback before choosing V2 vs V3.

When `getInstalledVersion()` returns a `link:`/`file:` spec, `parse()` fails and this branch always emits a V2 checksum. `prepareBundleFile()` already special-cases local installs; without the same fallback here, workspace-based projects can generate checksums the current updater won't accept.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/bundle/encrypt.ts` around lines 98 - 117, the code fails to handle local installs like "link:"/"file:" from getInstalledVersion, causing parse() to throw and defaulting to V2 checksums; mirror the prepareBundleFile fallback by detecting updaterVersion that starts with "link:" or "file:", resolve the referenced path (strip the prefix, resolve relative to root), read that package.json to obtain its "version" string, set updaterVersion to that resolved version before calling parse(), and then use isDeprecatedPluginVersion/encryptChecksumV3/encryptChecksum as before so workspace-local updater installs choose the correct V3 vs V2 encryption.

cli/src/build/request.ts (1)
892-895: ⚠️ Potential issue | 🟠 Major

Don't descend into nested package `node_modules` trees unconditionally.

This branch fires for every `node_modules` directory, not just the project root. That defeats the earlier Cordova-plugin exclusion and can walk large pnpm/package-local dependency trees while zipping, which is a real performance/reliability risk for build uploads. Restrict the unconditional recursion to the top-level project `node_modules`, then opt nested ones back in only when they are explicitly needed.

Suggested fix
```diff
-    if (item === 'node_modules') {
+    if (item === 'node_modules' && zipPath === '') {
       addDirectoryToZip(zip, itemPath, itemZipPath, platform, nativeDeps, platformDir)
       continue
     }
+
+    if (item === 'node_modules') {
+      continue
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/build/request.ts` around lines 892 - 895, the code unconditionally recurses into every directory named 'node_modules' inside addDirectoryToZip, causing traversal of nested package-local dependency trees; change the condition so it only auto-recurses for the top-level project node_modules and for nested node_modules only when explicitly required. Concretely, in addDirectoryToZip check whether the node_modules path equals the top-level platformDir/node_modules (or resolve itemPath against the project root) before calling addDirectoryToZip recursively, and for nested node_modules gate recursion on an explicit inclusion test (e.g., a lookup in nativeDeps or a provided whitelist) so only needed nested module trees are zipped.

cli/src/build/request.ts (1)
1657-1688: ⚠️ Potential issue | 🟠 Major

Keep the SIGINT handler installed through the polling fallback.

The listener is removed as soon as `streamBuildLogs()` returns. If streaming gives up and execution falls back to `pollBuildStatus()`, Ctrl+C no longer triggers `cancelBuild()`/`abortController.abort()`, so the fallback path becomes non-cancellable.

Suggested fix
```diff
   let streamStatus: string | null = null
   try {
     streamStatus = await streamBuildLogs(
       silent,
       verbose,
@@
       silent && !logger ? undefined : log,
     )
-  }
-  finally {
-    process.removeListener('SIGINT', onSigint)
-  }
-
-  // Only poll if we didn't get the final status from the stream
-  if (streamStatus) {
-    finalStatus = streamStatus
-    // Persist terminal status to the database via /build/status.
-    // The WebSocket only delivers status to the CLI — calling the API
-    // endpoint triggers the backend to write status + last_error into build_requests.
-    if (TERMINAL_STATUS_SET.has(streamStatus))
-      await statusCheck().catch(() => {})
-  }
-  else {
-    // Fall back to polling if stream ended without final status
-    finalStatus = await pollBuildStatus(host, buildRequest.job_id, appId, options.platform, options.apikey, silent, showStatusChecks, abortController.signal, log)
-  }
+    // Only poll if we didn't get the final status from the stream
+    if (streamStatus) {
+      finalStatus = streamStatus
+      if (TERMINAL_STATUS_SET.has(streamStatus))
+        await statusCheck().catch(() => {})
+    }
+    else {
+      finalStatus = await pollBuildStatus(host, buildRequest.job_id, appId, options.platform, options.apikey, silent, showStatusChecks, abortController.signal, log)
+    }
+  }
+  finally {
+    process.removeListener('SIGINT', onSigint)
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/build/request.ts` around lines 1657 - 1688, the SIGINT handler is removed in the finally block immediately after streamBuildLogs returns, which makes the polling fallback (pollBuildStatus) uncancellable; retain the SIGINT handler until both streaming and any fallback polling complete by moving or deferring process.removeListener('SIGINT', onSigint) so it runs only after you finish handling finalStatus/pollBuildStatus (i.e., after the if/else that sets finalStatus), or re-register the handler before calling pollBuildStatus; ensure onSigint still calls cancelBuild()/abortController.abort() so both streamBuildLogs and pollBuildStatus remain cancellable.

cli/src/build/request.ts (1)
1589-1622: ⚠️ Potential issue | 🟠 Major

Wait for a successful cancel acknowledgement before aborting local monitoring.

`cancelBuild()` ignores non-2xx responses, so a failed `/build/cancel/...` request still leads to `abortController.abort()` and the CLI can report the build as cancelled while the remote job keeps running. Treat non-OK responses as a cancellation failure and keep the monitor alive until the server actually accepts the cancel.

Suggested fix
```diff
   const cancelBuild = async () => {
     if (cancelRequested)
       return
     cancelRequested = true
     const cancelAbort = new AbortController()
     const timeout = setTimeout(() => cancelAbort.abort(), 4000)
     try {
-      await fetch(`${host}/build/cancel/${buildRequest.job_id}`, {
+      const response = await fetch(`${host}/build/cancel/${buildRequest.job_id}`, {
         method: 'POST',
         headers: {
           'Content-Type': 'application/json',
           'authorization': options.apikey,
         },
         body: JSON.stringify({ app_id: appId }),
         signal: cancelAbort.signal,
       })
+      if (!response.ok) {
+        const body = await response.text().catch(() => '')
+        throw new Error(`Failed to cancel build: ${response.status}${body ? ` - ${body}` : ''}`)
+      }
     }
-    catch {
-      // ignore cancellation errors
+    catch (error) {
+      cancelRequested = false
+      throw error
     }
     finally {
       clearTimeout(timeout)
     }
   }
@@
     try {
       if (cancelRequested) {
         process.exit(1)
       }
       log.warn('Canceling build... (press Ctrl+C again to force quit)')
       await cancelBuild()
       abortController.abort()
     }
-    catch {
-      // Prevent unhandled rejection from crashing the process
+    catch (error) {
+      log.error(error instanceof Error ? error.message : String(error))
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/build/request.ts` around lines 1589 - 1622, cancelBuild currently swallows all errors and ignores non-2xx HTTP responses, so onSigint calls abortController.abort() even if the server rejected the cancel; change cancelBuild (used by onSigint) to treat non-OK fetch responses as failures by checking response.ok and throwing or returning a rejection when !response.ok (include response.status/text in the error), ensure the abort timeout and AbortController handling remain, and have onSigint await cancelBuild and only call abortController.abort() (and allow process.exit(1) on second Ctrl+C) when cancelBuild resolved successfully; reference: cancelBuild, onSigint, abortController, fetch(`${host}/build/cancel/${buildRequest.job_id}`).
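The acknowledged-cancel ordering can be modeled independently of the CLI: abort the local monitor only after the server accepts the cancel. In this sketch `fetchImpl` is injected so the example stays self-contained; the real code posts to the `/build/cancel/...` endpoint with an auth header:

```typescript
type FetchLike = (url: string) => Promise<{ ok: boolean, status: number }>

// Returns true only when the server acknowledged the cancel. The local
// AbortController is aborted only in that case, so log streaming/polling
// survives a failed cancel request instead of silently stopping.
async function cancelThenAbort(
  fetchImpl: FetchLike,
  url: string,
  controller: AbortController,
): Promise<boolean> {
  const response = await fetchImpl(url)
  if (!response.ok)
    return false // server refused: keep the monitor alive
  controller.abort() // only abort once the cancel is acknowledged
  return true
}
```

This keeps the user-visible state honest: "cancelled" is only reported when the remote job was actually told to stop.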
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 291738328a
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Actionable comments posted: 7
🧹 Nitpick comments (1)
cli/README.md (1)
216-241: Confirm "--comment" wording for clarity ("could be" → "e.g.").

Line 229 says: "Comment about this version, could be a release note…". This is a style/wording nit but it affects readability; changing to "e.g." makes it smoother and grammatically correct.
✅ Proposed fix
```diff
-| **--comment** | <code>string</code> | Comment about this version, could be a release note, a commit hash, a commit message, etc. |
+| **--comment** | <code>string</code> | Comment about this version (e.g., a release note, a commit hash, a commit message, etc.) |
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/README.md` around lines 216 - 241, Update the README table row for the CLI flag **--comment**: replace the phrase "Comment about this version, could be a release note, a commit hash, a commit message, etc." with clearer wording using "e.g." such as "Comment about this version, e.g. a release note, a commit hash, a commit message, etc." Locate the table entry for the flag **--comment** and edit the description string accordingly to improve grammar and readability.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cli/README.md`:
- Around line 66-70: The TOC and surrounding prose use mixed UK/US spelling;
change prose occurrences of "organisation" to US English "organization" (e.g.,
update the TOC entry from "🔹 [Organisation](`#organisation`)" to "🔹
[Organization](`#organization`)" and any prose sentences), but do NOT rename
actual CLI command names or code examples that intentionally use the deprecated
"organisation" token—leave those headings/command usages unchanged; ensure
anchor links match the updated prose anchor (`#organization`) or add an explicit
anchor for the original command heading if needed so links work.
- Around line 10-12: Replace the phrase "most up to date" in the README.md text
with the hyphenated form "most up-to-date"; locate the sentence that currently
reads "You can find the most up to date version of this doc in our web doc:" and
update it to "You can find the most up-to-date version of this doc in our web
doc:" to satisfy markdownlint/LanguageTool.
In `@cli/src/api/channels.ts`:
- Around line 211-228: The table headers (t.headers) contain 18 columns but each
pushed row only has 12 values, causing misalignment; update the array pushed
into t.rows in the loop so it has 18 elements in the exact same order as
t.headers. Specifically, ensure you include values for: Version
(row.version?.name), Public, iOS, Android, Auto Update
(row.disable_auto_update), Native Auto Update (invert
row.disable_auto_update_under_native), Device Self Set
(row.allow_device_self_set), Progressive Deploy (e.g. row.progressive_deploy or
the actual property on Channel), Secondary Version
(row.secondary_version?.name), Secondary Version Percentage
(row.secondary_version_percentage or actual property), AB Testing
(row.ab_testing or boolean flag), AB Testing Version
(row.ab_testing_version?.name), AB Testing Percentage
(row.ab_testing_percentage), Emulator (row.allow_emulator), Device
(row.allow_device), Dev (row.allow_dev), Prod (row.allow_prod); verify property
names against the Channel type and adjust names accordingly so
t.rows.push([...]) produces exactly 18 values in header order.
- Around line 100-113: The lookup currently treats any findError as "not found"
and proceeds to insert; instead call .maybeSingle() on
supabase.from('app_versions').select(...).eq(...).maybeSingle() so the query
returns null without an error when no row exists, then only fall through to the
insert path when data is null and findError is falsy; keep the current insert
logic (the try block) untouched but gate it behind a check like "if (!data &&
!findError)" using the existing variables data and findError to avoid masking
transient errors or triggering duplicate inserts.
- Around line 48-56: The current guard throws when silent is true even if
autoUnlink is requested, preventing non-interactive unlinking; update the check
so it only throws when silent && !autoUnlink (or otherwise let the auto-unlink
path execute without prompting). Locate the block around the variables silent,
autoUnlink, appid, versionData and channelFound (the intro(...) and
confirmC(...) logic) and change the condition to honor autoUnlink so that when
autoUnlink === true the code skips prompting and proceeds to unlink in
non-interactive mode.
In `@cli/src/bundle/unlink.ts`:
- Around line 35-38: Don't call getConfig() unconditionally; only load the
Capacitor config as a silent fallback when the caller didn't provide necessary
values. Change the block so you compute packVersion via getBundleVersion('',
options.packageJson) and set bundle = enrichedOptions.bundle || packVersion
first, then if either appId is missing or bundle is still empty call getConfig({
silent: true }) (or getConfig(true) depending on the function signature) and use
its config when computing resolvedAppId via getAppId(appId, extConfig?.config)
and to fill any missing bundle fallback; ensure getConfig is invoked only when
needed and with the silent flag.
In `@cli/src/bundle/upload.ts`:
- Around line 1048-1115: The S3 upload path can fail after the version row is
already created (persistVersionData), so wrap the S3 upload (the
s3Client.putObject call and the subsequent versionData updates +
persistVersionData('update')) in a try/catch; on error call
deletedFailedVersion(...) with the same parameters used elsewhere to remove the
previously inserted version row (pass the supabase client and the version
identifier from versionData), log the failure, and rethrow or return a failure
result so the caller doesn't leave a live version pointing to no artifact;
update the code around s3Client.putObject,
versionData.external_url/storage_provider, and the following persistVersionData
call accordingly.
---
Nitpick comments:
In `@cli/README.md`:
- Around line 216-241: Update the README table row for the CLI flag
**--comment**: replace the phrase "Comment about this version, could be a
release note, a commit hash, a commit message, etc." with clearer wording using
"e.g." such as "Comment about this version, e.g. a release note, a commit hash,
a commit message, etc." Locate the table entry for the flag **--comment** and
edit the description string accordingly to improve grammar and readability.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2c1c51b8-d1bc-4a61-a686-7c8660ce5386
📒 Files selected for processing (7)
- cli/README.md
- cli/package.json
- cli/src/api/channels.ts
- cli/src/bundle/unlink.ts
- cli/src/bundle/upload.ts
- cli/src/index.ts
- cli/webdocs/bundle.mdx
✅ Files skipped from review due to trivial changes (2)
- cli/package.json
- cli/src/index.ts
Actionable comments posted: 8
♻️ Duplicate comments (1)
cli/src/bundle/upload.ts (1)
1048-1068: ⚠️ Potential issue | 🔴 Critical

`--dry-upload` still reserves a real version row. Line 1048 persists `app_versions` before the dry-run return, so a dry upload can block the next real upload of the same bundle via `checkVersionExists()` even though no artifact or channel link was created. Short-circuit before persisting, or mark/delete the row as non-live.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/bundle/upload.ts` around lines 1048 - 1068, The code currently calls persistVersionData(...) before honoring options.dryUpload, which creates a real app_versions row and can block subsequent real uploads; either short-circuit the dry-run path before calling persistVersionData (return the dry-upload result immediately when options.dryUpload is true) or, if the flow must persist, persist a non-live/demo row by adding a durable flag (e.g., set a "dry_run" or "live" column) and ensure checkVersionExists(...) and related readers ignore rows marked as dry/non-live; update the logic around persistVersionData, options.dryUpload, and checkVersionExists to implement one of these fixes so dry uploads do not reserve a blocking live version row.
🧹 Nitpick comments (2)
cli/README.md (2)
1324-1324: Verify `./src/sdk.ts` link works on the rendered docs site.

The "TypeScript types" link is a repo-relative path (`./src/sdk.ts`). If this README is served from a website build (rather than directly from the repo), that relative path may 404. Please verify the deployed docs location and confirm the link resolves, or adjust it to a stable published path (e.g., generated types/docs output or a GitHub URL).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/README.md` at line 1324, Check that the "TypeScript types" link targeting "./src/sdk.ts" in README.md resolves on the published docs site; if it 404s when rendered, replace the repo-relative path with a stable published location (for example the generated docs/types output or the raw GitHub URL to src/sdk.ts) so the "TypeScript types" link reliably resolves in deployed docs.
10-12: Render the overview URL as a markdown link instead of a raw URL line.

Currently the overview link is split across text + a standalone raw URL line. Switching to a markdown link improves readability and avoids formatting inconsistencies.
🛠️ Proposed fix
```diff
-You can find the most up-to-date version of this doc in our web doc:
-https://capgo.app/docs/cli/overview/
+You can find the most up-to-date version of this doc in our [web docs](https://capgo.app/docs/cli/overview/).
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/README.md` around lines 10 - 12, Replace the split text + raw URL in the README (the line "You can find the most up-to-date version of this doc in our web doc:" and the raw URL "https://capgo.app/docs/cli/overview/") with a single markdown link: embed the URL as a markdown link label (e.g., "[CLI overview](https://capgo.app/docs/cli/overview/)") so the sentence reads using the linked text instead of leaving the raw URL on its own; ensure the visible link text matches the surrounding sentence.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cli/README.md`:
- Around line 125-133: The README uses duplicate HTML anchor ids in headings
like the line '## <a id="options"></a> Options' which causes invalid HTML and
markdownlint duplicate-heading errors; update each repeated heading to use a
unique id and distinct anchor name (e.g., change '<a id="options"></a>' to '<a
id="options-login"></a>' or '<a id="options-doctor"></a>' and include the
command name in the heading text where appropriate) and apply the same pattern
to other occurrences noted (e.g., options-doctor, options-probe) or update the
generator/template producing these headings so each section has a unique id and
heading.
In `@cli/src/api/channels.ts`:
- Line 51: The intro(...) call that prints "❌ Version ..." is executed during
unlinkDeviceInternal(..., silent) when autoUnlink is true, breaking silent mode;
wrap the intro(...) invocation with a guard so it only runs when not silent
(e.g., if (!silent) intro(...)), ensuring unlinkDeviceInternal's silent flag is
respected while retaining the existing message logic that uses channelFound and
versionData.
- Around line 119-147: The insert of the 'unknown' app_version in channels.ts is
vulnerable to a race: replace the blind insert into app_versions (the supabase
.insert({...}) that returns newVersion/createError) with either an upsert (so
duplicate-key wins and returns the existing row) or, if using insert semantics,
catch a unique-violation error in the createErr/createError branch and then
refetch the row (select where app_id = appId and name = 'unknown') and return
that instead of failing; update the error logging branches around
createError/createErr to handle the refetch path and still respect the
silent/log variables (log.error, silent, formatError).
In `@cli/src/bundle/upload.ts`:
- Around line 1296-1297: The condition controlling the final outro() is
inverted: change the check that currently uses "if (silent && !result.skipped)
outro(...)" so that it only prints when not silent and not skipped; update the
expression to use "!silent && !result.skipped" to match the guarded intro()
behavior and prevent silent uploads from emitting outro(). Ensure you modify the
branch that calls outro(...) and leave the message text and result.skipped logic
unchanged.
- Around line 477-496: Replace the early returns that use "return
Promise.reject(new Error(...))" with thrown errors so the surrounding async
function's catch block runs and cleanup (the version-row deletion around the
cleanup at lines ~566-568) executes; specifically update the branch handling "if
(changeError) { ... return Promise.reject(new Error('Cannot finish TUS upload'))
}" and the branch "if (!url) { ... return Promise.reject(new Error('Cannot get
upload url')) }" (and any other similar occurrences in the upload flow around
the 531-568 region) to "throw new Error(...)" so failures propagate to the outer
catch instead of short-circuiting the function.
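The difference is easy to miss: inside an async function, `return Promise.reject(...)` hands the rejected promise straight to the caller without running that same function's catch block, while `throw` does run it. A minimal demonstration (not the project's code):

```typescript
// `return Promise.reject(...)`: the function returns immediately; its own
// catch (and any cleanup there) never executes.
async function withReject(cleanup: () => void): Promise<void> {
  try {
    return Promise.reject(new Error('fail'))
  }
  catch {
    cleanup() // never reached: the function already returned
  }
}

// `throw`: the rejection is caught by the enclosing try/catch in the same
// function, so cleanup runs before the function resolves.
async function withThrow(cleanup: () => void): Promise<void> {
  try {
    throw new Error('fail')
  }
  catch {
    cleanup() // reached: throw propagates to this catch
  }
}
```

This is the same pitfall the `no-return-await` family of lint rules exists for: without `await`, the returned promise's rejection bypasses local error handling.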
- Around line 1095-1115: The persisted external_url is missing the configured S3
port: when constructing endPoint (used to set versionData.external_url) include
the s3Port if provided so the URL points to the correct host:port and scheme;
update the logic that defines endPoint (and/or build the external URL directly
before assigning versionData.external_url) to append `:${s3Port}` when s3Port is
set (respecting s3SSL for the scheme) so that the final URL uses the same host,
port and scheme as the S3Client (s3Client, endPoint, s3Port, s3SSL,
versionData.external_url, encodeFileName).
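A sketch of the URL construction with a hypothetical helper name; only the scheme/host/port logic is the point, the real code assembles `versionData.external_url` inline:

```typescript
// Build the external URL with the same scheme, host and optional port the
// S3 client was configured with (endPoint / s3Port / s3SSL as above).
function buildExternalUrl(endPoint: string, s3SSL: boolean, s3Port?: number): string {
  const scheme = s3SSL ? 'https' : 'http'
  const portPart = s3Port ? `:${s3Port}` : ''
  return `${scheme}://${endPoint}${portPart}`
}
```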
- Around line 237-242: The current check mixes a successful existence result
with RPC/query failures by using appVersion || appVersionError; change the logic
to only treat real existence as a skip by branching on versionExistsOk or
appVersion (whichever represents a positive existence) for the "version already
exists / --silent-fail" path (referencing versionExistsOk, appVersion, bundle,
and the --silent-fail handling). Separately handle appVersionError in its own
block: do not treat it as a successful skip—log or surface the RPC error (and in
non-silent mode continue to interactive suggestions), and in silent-fail mode do
not return a graceful success but return/propagate a failure result so RPC
failures are not masked. Apply the same separation to the later block covering
lines 244-287 where appVersionError is currently folded into the existence
logic.
- Around line 138-145: The code currently ignores channelError from the Supabase
query and treats any failure as a missing row; update the logic around the
supabase .from('channels') .select(...).single() call to explicitly handle
query/auth errors: if channelError is truthy and not a true "row not found"
result, abort/return the error (fail the upload) and log the error details
instead of falling back to the "new channel / first upload" path; only treat the
absence of channelData as the legitimate fallback. Apply the same check to the
later compatibility branch that uses channelData/channelError around the "new
channel / first upload" logic so real query errors don't disable compatibility
enforcement.
---
Duplicate comments:
In `@cli/src/bundle/upload.ts`:
- Around line 1048-1068: The code currently calls persistVersionData(...) before
honoring options.dryUpload, which creates a real app_versions row and can block
subsequent real uploads; either short-circuit the dry-run path before calling
persistVersionData (return the dry-upload result immediately when
options.dryUpload is true) or, if the flow must persist, persist a non-live/demo
row by adding a durable flag (e.g., set a "dry_run" or "live" column) and ensure
checkVersionExists(...) and related readers ignore rows marked as dry/non-live;
update the logic around persistVersionData, options.dryUpload, and
checkVersionExists to implement one of these fixes so dry uploads do not reserve
a blocking live version row.
---
Nitpick comments:
In `@cli/README.md`:
- Line 1324: Check that the "TypeScript types" link targeting "./src/sdk.ts" in
README.md resolves on the published docs site; if it 404s when rendered, replace
the repo-relative path with a stable published location (for example the
generated docs/types output or the raw GitHub URL to src/sdk.ts) so the
"TypeScript types" link reliably resolves in deployed docs.
- Around line 10-12: Replace the split text + raw URL in the README (the line
"You can find the most up-to-date version of this doc in our web doc:" and the
raw URL "https://capgo.app/docs/cli/overview/") with a single markdown link:
embed the URL as a markdown link label (e.g., "[CLI
overview](https://capgo.app/docs/cli/overview/)") so the sentence reads using
the linked text instead of leaving the raw URL on its own; ensure the visible
link text matches the surrounding sentence.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 09cea9f1-3717-4f89-840f-1c5bed8dd516
📒 Files selected for processing (6)
- cli/README.md
- cli/src/api/channels.ts
- cli/src/bundle/unlink.ts
- cli/src/bundle/upload.ts
- package.json
- scripts/serve-backend-playwright.ts
Actionable comments posted: 8
🧹 Nitpick comments (2)
cli/README.md (1)
1252-1261: Fence the Claude Desktop config as JSON for readability.

The config snippet is currently plain text; wrapping it in a fenced `json` block will render correctly and be easier to copy/paste.

🧩 Proposed doc formatting update

Example usage with Claude Desktop: Add to claude_desktop_config.json:

````diff
-{
-  "mcpServers": {
-    "capgo": {
-      "command": "npx",
-      "args": ["@capgo/cli", "mcp"]
-    }
-  }
-}
+```json
+{
+  "mcpServers": {
+    "capgo": {
+      "command": "npx",
+      "args": ["@capgo/cli", "mcp"]
+    }
+  }
+}
+```
````

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/README.md` around lines 1252 - 1261, Wrap the claude_desktop_config.json snippet in a fenced code block annotated with json so it renders and is copy/pasteable; update the README example around the JSON containing the "mcpServers" -> "capgo" object (with "command" and "args" keys) to be enclosed in ```json ... ``` instead of plain text.

cli/src/docs.ts (1)
14-16: Avoid coupling to Commander internals like `_actionHandler`.

`_actionHandler` is a private implementation detail of Commander.js and is not part of the public API. This creates a coupling risk: your code can break on library upgrades even if the public API stays compatible. Consider deriving the needed metadata explicitly (e.g., by tracking which commands have action handlers during registration) or find an alternative approach that doesn't depend on private fields.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/docs.ts` around lines 14 - 16, The code currently depends on Commander private internals via the CommandWithInternals interface and its _actionHandler field; instead stop reading _actionHandler and record action registration explicitly when wiring commands: during command registration (the place that calls command.action(...)) set a flag or add an entry to a small map/metadata object (e.g., hasActionHandlerByName or commandMetadata keyed by command.name()) or attach a public property you control (e.g., command.meta = { hasAction: true }), then read that metadata elsewhere instead of accessing _actionHandler; update any logic that references CommandWithInternals/_actionHandler to use the new metadata API.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cli/README.md`:
- Line 17: Update the onboarding sentence in the README: replace "Then go to
your account in `apikey` section and click in the `all` key to copy it." with
clearer wording such as "Then go to your account's `apikey` section and click
the `all` key to copy it." so it reads naturally and removes the awkward "click
in" phrasing; ensure surrounding sentences flow with the change.
- Line 242: Update the user-facing docs entry for the --delta flag by
hyphenating the compound adjective: change "super fast updates" to "super-fast
updates" in the description for the --delta boolean flag so the line reads
"...for instant, super-fast updates instead of big zip downloads"; locate this
text by searching for the "--delta" flag entry or the phrase "super fast
updates" and make the hyphenation edit.
In `@cli/src/api/channels.ts`:
- Around line 136-150: The error messages inside the catch blocks still include
a stale reference to findError (which is null/undefined there); update the
log.error and thrown Error messages in the function that returns newVersion so
they do not reference findError — use only the current error (createError or
createErr) and contextual text (e.g., "Create error" or "Cannot retrieve or
create unknown version for app ${appId}") when building messages for the
log.error and the thrown Error, leaving the rest of the control flow (return
newVersion / throw) unchanged.
In `@cli/src/bundle/upload.ts`:
- Line 293: The message passed to uploadFail references a stale appVersionError
which is guaranteed to be null/undefined; remove that reference and supply a
clean message (e.g. call uploadFail(`Version ${bundle} already exists`) ), or
conditionally append formatError(appVersionError) only when appVersionError is
non-null; update the uploadFail call in upload.ts where `uploadFail(\`Version
${bundle} already exists ${formatError(appVersionError)}\`)` appears to either
drop formatError(appVersionError) or guard it with a null check.
In `@cli/src/docs.ts`:
- Around line 252-259: The parent command's documentation is skipped when
cmd.subcommands.length > 0; modify the logic to always include the parent's
formatted docs by calling formatCommand(cmd, /*isSub*/ false, cmd.name,
/*skipMainHeading?*/ true/false as appropriate) and appending that to
cmdMarkdown before iterating subcommands, then loop over cmd.subcommands and
append formatCommand(subCmd, true, cmd.name) for each subCmd; update references
where cmdMarkdown is initialized/used so parent aliases, description, and flags
are preserved (look for cmd, cmdMarkdown, formatCommand, subcommands).
- Around line 230-232: In generateDocs(), the loop over commands currently uses
"return" when cmd.name === 'generate-docs', which exits generateDocs() early;
change that to "continue" so the loop skips only that command and continues
processing the rest (locate the for (const cmd of commands) loop and replace the
return with continue).
- Around line 330-341: The current fallback writes the entire file when marker
indices don't both exist, which can wipe content if only one marker is present;
update the logic around startIndex/endIndex to detect a partial marker pair (one
index === -1 and the other !== -1) and fail fast (throw or log error and abort)
instead of calling writeFileSync(filePath, markdown,...). Only allow the
full-file write when both startIndex and endIndex are -1 (no markers present).
Reference startTag, endTag, startIndex, endIndex, fileContent, writeFileSync,
and filePath when making the change.
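The three marker states can be made explicit before any write happens (names are illustrative, matching the `startIndex`/`endIndex` variables in the comment):

```typescript
type MarkerMode = 'replace-between' | 'write-whole-file' | 'error-partial-pair'

// Classify the marker state before touching the file: only a fully absent
// marker pair may fall back to a whole-file write; a partial pair must
// abort instead of wiping existing content.
function markerMode(startIndex: number, endIndex: number): MarkerMode {
  if (startIndex !== -1 && endIndex !== -1)
    return 'replace-between'
  if (startIndex === -1 && endIndex === -1)
    return 'write-whole-file'
  return 'error-partial-pair'
}
```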
- Around line 212-214: The docs generator misclassifies options because it
infers type by splitting opt.flags on spaces; instead extract the displayed
param as the first alias (e.g., take opt.flags.split(',')[0].trim()) and
determine type using the option metadata: if opt.required || opt.optional then
set type = 'string' else type = 'boolean'; update the variables used in the
template (param, type) accordingly in the code that builds section so boolean
flags with short aliases like `-f, --force` are documented correctly.
---
Nitpick comments:
In `@cli/README.md`:
- Around line 1252-1261: Wrap the claude_desktop_config.json snippet in a fenced
code block annotated with json so it renders and is copy/pasteable; update the
README example around the JSON containing the "mcpServers" -> "capgo" object
(with "command" and "args" keys) to be enclosed in ```json ... ``` instead of
plain text.
In `@cli/src/docs.ts`:
- Around line 14-16: The code currently depends on Commander private internals
via the CommandWithInternals interface and its _actionHandler field; instead
stop reading _actionHandler and record action registration explicitly when
wiring commands: during command registration (the place that calls
command.action(...)) set a flag or add an entry to a small map/metadata object
(e.g., hasActionHandlerByName or commandMetadata keyed by command.name()) or
attach a public property you control (e.g., command.meta = { hasAction: true }),
then read that metadata elsewhere instead of accessing _actionHandler; update
any logic that references CommandWithInternals/_actionHandler to use the new
metadata API.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 498ab2ea-395a-4e5a-8e42-33792e231c6b
📒 Files selected for processing (10)
- cli/README.md
- cli/src/api/channels.ts
- cli/src/bundle/upload.ts
- cli/src/docs.ts
- cli/webdocs/bundle.mdx
- cli/webdocs/doctor.mdx
- cli/webdocs/init.mdx
- cli/webdocs/login.mdx
- cli/webdocs/probe.mdx
- cli/webdocs/star-all.mdx
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 4228385284
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Actionable comments posted: 4
♻️ Duplicate comments (1)
cli/src/api/channels.ts (1)
123-134: ⚠️ Potential issue | 🟠 Major

Pass the unique conflict target to this upsert. This still doesn't fully close the concurrent-create race. `upsert()` only helps if it conflicts on the same unique key the duplicate row would hit; here the payload has no `id`, so two callers can still collide on the `unknown` version row and one side will fail instead of reusing the existing record. Please set the conflict target explicitly (for example, the `(app_id, name)` unique key) or keep the unique-violation refetch path.

Suggested fix

```diff
 const { data: newVersion, error: createError } = await supabase
   .from('app_versions')
-  .upsert({
-    owner_org: orgId,
-    deleted: true,
-    name: 'unknown',
-    app_id: appId,
-  })
+  .upsert({
+    owner_org: orgId,
+    deleted: true,
+    name: 'unknown',
+    app_id: appId,
+  }, {
+    onConflict: 'app_id,name',
+  })
   .select('id')
   .eq('app_id', appId)
   .eq('name', 'unknown')
   .single()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/api/channels.ts` around lines 123 - 134, The upsert against supabase.from('app_versions') is missing an explicit conflict target so concurrent creators can still race on the (app_id, name) unique key; update the upsert call that builds the 'unknown' version to pass the conflict target for the unique key (app_id, name) so the DB will reuse the existing row on conflict, or alternatively retain the unique-violation refetch path after upsert to fetch the existing record when a duplicate error occurs; locate the upsert invocation in channels.ts (the supabase.upsert(...) that inserts owner_org, deleted, name, app_id) and add the on-conflict / conflict target for (app_id, name) or implement a retry/refetch on unique-violation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cli/src/bundle/upload.ts`:
- Around line 1216-1229: Currently deleteLinkedBundleOnUpload runs before
setVersionInChannel which can leave the channel pointing at a deleted bundle if
relink fails; change the flow so that after verifying
hasOrganizationPerm(permissions, OrganizationPerm.write) and successfully
awaiting setVersionInChannel(...) you then, only on success and if
options.deleteLinkedBundleOnUpload is true, call
deleteLinkedBundleOnUpload(supabase, appid, channel); ensure verbose logs
reflect the new order and preserve the permission check (use the same
hasOrganizationPerm and options flags) so deletion happens post-relink (or
consider batching both ops into a single backend call if atomicity is required).
- Around line 695-703: The current supabase lookup treats any query error as "no
default_upload_channel" and lets uploadBundleInternal fall back to 'production';
change this so that if the database query via
supabase.from('apps').select(...).eq('app_id', appId).single() returns an error
you do NOT substitute 'production' but instead propagate a null/throw (i.e.,
return null or surface the error) so uploadBundleInternal can abort; only apply
the 'production' fallback when the row was successfully fetched and
data.default_upload_channel is explicitly null. In practice, update the error
handling around the select/.single() call (the block checking error and
data.default_upload_channel and the variable used by uploadBundleInternal) to
return on any query/auth error and only default to 'production' when data exists
but default_upload_channel === null.
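The decision table above reduces to a small resolver; the shapes mirror a Supabase `.single()` result (assumed, not the project's types), and a `null` return signals the caller to abort:

```typescript
interface AppRow { default_upload_channel: string | null }

// Only fall back to 'production' when the row was fetched successfully and
// the column is explicitly null; any query/auth error propagates as null
// so uploadBundleInternal can abort instead of silently defaulting.
function resolveDefaultChannel(
  data: AppRow | null,
  error: { message: string } | null,
): string | null {
  if (error || !data)
    return null
  return data.default_upload_channel ?? 'production'
}
```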
- Around line 611-629: The code assumes a linked version exists but
data[0].version can be null; update the block after retrieving const version =
data[0].version to first check if version is null/undefined and handle it (e.g.,
log.warn('No linked bundle found in the channel you are trying to upload to')
and return) before accessing version.deleted; then keep the existing
version.deleted check unchanged. This prevents a runtime crash when
channel.version is null.
In `@cli/src/docs.ts`:
- Around line 109-127: The current mapping in the cmd.commands.map block forces
subcommands: [] and truncates the tree; modify it to recursively map child
commands instead. Replace the hard-coded subcommands: [] in the mapping for
subCmd (within cli/src/docs.ts) with a recursive call (e.g., a helper mapCommand
or mapSubcommands) that inspects subCmd.commands, casts to CommandWithInternals,
and maps each child into MappedCommand (preserving name, alias, description,
options built from Option.flags/description/short/long, hasAction from
_actionHandler, and isCommandGroup) so the nested structure (build credentials
save/list/…) is preserved for the generator.
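The recursion this comment asks for, sketched against a simplified stand-in for Commander's `Command` (the real `MappedCommand` carries options, aliases and `hasAction` too):

```typescript
interface CommandLike {
  name: string
  description: string
  commands: CommandLike[]
}

interface MappedCommand {
  name: string
  description: string
  subcommands: MappedCommand[]
}

// Recurse instead of hard-coding `subcommands: []`, so nested groups like
// `build credentials save` survive into the generated docs.
function mapCommand(cmd: CommandLike): MappedCommand {
  return {
    name: cmd.name,
    description: cmd.description,
    subcommands: cmd.commands.map(mapCommand),
  }
}
```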
---
Duplicate comments:
In `@cli/src/api/channels.ts`:
- Around line 123-134: The upsert against supabase.from('app_versions') is
missing an explicit conflict target so concurrent creators can still race on the
(app_id, name) unique key; update the upsert call that builds the 'unknown'
version to pass the conflict target for the unique key (app_id, name) so the DB
will reuse the existing row on conflict, or alternatively retain the
unique-violation refetch path after upsert to fetch the existing record when a
duplicate error occurs; locate the upsert invocation in channels.ts (the
supabase.upsert(...) that inserts owner_org, deleted, name, app_id) and add the
on-conflict / conflict target for (app_id, name) or implement a retry/refetch on
unique-violation.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a460f8e8-a50f-4be4-aac4-f8dc5f4a2c8a
📒 Files selected for processing (15)
- cli/README.md
- cli/src/api/channels.ts
- cli/src/bundle/upload.ts
- cli/src/docs.ts
- cli/src/index.ts
- cli/webdocs/account.mdx
- cli/webdocs/app.mdx
- cli/webdocs/build.mdx
- cli/webdocs/bundle.mdx
- cli/webdocs/channel.mdx
- cli/webdocs/init.mdx
- cli/webdocs/key.mdx
- cli/webdocs/mcp.mdx
- cli/webdocs/organisation.mdx
- cli/webdocs/organization.mdx
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 0b764a134f
56be82c20 chore(release): 7.93.4
9c0e7c101 docs: update README.md
9cfb4a7f3 fix(init): refresh iOS run device selection (#594)
bc6367ff0 chore(release): 7.93.3
e9811058b fix onboarding updater sync recovery (#596)
2cfae2516 chore(release): 7.93.2
0598118de docs: update README.md
7959e4921 [codex] fix init existing app conflict (#595)

git-subtree-dir: cli
git-subtree-split: 56be82c2018b7077545f47c34bfaedebe1d943ee
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 795f27df80
Summary (AI generated)
Move the CLI into `cli/` and turn the repo into a Bun workspace monorepo

Motivation (AI generated)
Capgo and the CLI need to evolve in the same repository so backend, SDK, and CLI changes can be validated together without cross-repo choreography. This also lets CI avoid running the full matrix when a change only affects one side of the monorepo.
Business Impact (AI generated)
This reduces coordination cost between product and CLI changes, lowers CI waste on unrelated pull requests, and makes it safer to ship CLI-dependent backend work because the integration path is validated from one repo.
Test Plan (AI generated)
- `bun typecheck`
- `bun run cli:typecheck`
- `bun run cli:build`
- `bun run cli:lint`
- `bun run --cwd cli test:version-detection:setup`
- `bun run cli:test`
- `bun run supabase:start`
- `bun run supabase:functions:serve`
- `bun run test:cli`
- `bun run supabase:stop`

Generated with AI
Summary by CodeRabbit