fix: add RLS policy for service_role on stripe_info table #1370
Conversation
Stripe webhooks were failing to update customer subscription data because the stripe_info table has RLS enabled but no policy allowing service_role (used by supabaseAdmin) to perform UPDATE operations. This adds a policy granting service_role full access to stripe_info.
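Assuming the standard Supabase pattern for such policies, the added migration would look roughly like this (a sketch only; the policy name and exact shape in the actual migration may differ):

```sql
-- Sketch of the fix described above; names are illustrative, not the actual migration.
-- supabaseAdmin connects with the service_role key, so granting service_role an
-- explicit ALL policy lets webhook handlers UPDATE rows on the RLS-enabled table.
CREATE POLICY "Allow service_role full access"
ON public.stripe_info
FOR ALL
TO service_role
USING (true)
WITH CHECK (true);
```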
📝 Walkthrough

Adds bundle-preview translation keys across many locales, migrates numerous Supabase functions from admin clients to per-request authenticated clients (apikey/JWT) to rely on RLS, updates webhook RLS policies, adds org_id to device creation, and adjusts tests and some type exports.
Sequence Diagram(s): (omitted)

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: feb5ac3a70
```sql
CREATE POLICY "Allow admin to insert webhooks"
ON public.webhooks
FOR INSERT
TO authenticated, anon
WITH CHECK (
  public.check_min_rights(
    'admin'::public.user_min_right,
    (SELECT public.get_identity_org_allowed('{all}'::public.key_mode[], webhooks.org_id)),
    org_id,
```
Allow write API keys to pass webhook RLS
The new webhook RLS policies only allow key_mode '{all}' for INSERT/UPDATE/DELETE, but the webhook routes are still gated by middlewareKey(['all','write']), so existing clients using write API keys will now hit RLS errors when creating/updating/deleting webhooks or retrying deliveries. This is a behavioral regression from the pre-migration supabaseAdmin usage and will break integrations that legitimately use write-scoped keys; consider changing the policy key_mode list to '{write,all}' (matching other tables like channels) or narrowing the middleware to all only. See supabase/functions/_backend/public/webhooks/index.ts for the write-key allowance.
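If the middleware gate is kept as middlewareKey(['all','write']), the fix is confined to the key_mode array in each policy's check. A sketch of just the changed expression (hypothetical, to be applied to the INSERT/UPDATE/DELETE policies alike):

```sql
-- Sketch: widen the accepted key modes so write-scoped API keys pass RLS,
-- matching the '{write,all}' pattern used on tables like channels.
(SELECT public.get_identity_org_allowed('{write,all}'::public.key_mode[], webhooks.org_id))
```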
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (15)
supabase/functions/_backend/private/download_link.ts (3)
16-16: Use `createHono` instead of `new Hono()` per coding guidelines.

As per coding guidelines, all Hono framework application initialization should use `createHono` from `utils/hono.ts`.

🔎 Proposed fix
```diff
+import { createHono } from '../utils/hono.ts'
-export const app = new Hono<MiddlewareKeyVariables>()
+export const app = createHono()
```
38-46: Invalid manifest table schema query: the `app_id` column does not exist in the manifest table.

The query at lines 59–63 filters the manifest table by `.eq('app_id', body.app_id)`, but the manifest table schema only contains:

- `id` (serial primary key)
- `app_version_id` (bigint, foreign key to app_versions)
- `file_name`, `s3_path`, `file_hash`, `file_size`

The `app_id` column does not exist. The correct filter should use `app_version_id` with the bundle's id value (from line 45):

```ts
.eq('app_version_id', bundle.id)
.eq('id', body.id)
```

This will cause a runtime error when `body.isManifest` is true.
59-67: Fix manifest table query: use `app_version_id` instead of the non-existent `app_id` column.

The manifest table does not have an `app_id` column; it only has `app_version_id` as a foreign key to `app_versions.id`. Line 62 attempts to filter by `.eq('app_id', body.app_id)`, which will fail since that column doesn't exist in the schema. Use `bundle.id` (the `app_version_id` from the app_versions query above) as the filter criteria instead: `.eq('app_version_id', bundle.id)`.

supabase/functions/_backend/private/validate_password_compliance.ts (1)
140-164: Remove overly-permissive GRANT ALL statements from prod.sql that contradict the migration's security model.

The migration file properly restricts operations: authenticated users can only SELECT their own records, and only `service_role` can write to `user_password_compliance`. However, `prod.sql` (lines 12477-12485) grants `ALL` privileges to authenticated users, and `get_password_policy_hash` is granted to authenticated users (lines 11413-11415) despite the migration granting only to `postgres` and `service_role`.

This creates a critical security mismatch. The backend code relies on service-role isolation, but the prod schema allows authenticated users full table and function access. Align `prod.sql` with the migration's intent: revoke INSERT/UPDATE/DELETE from authenticated users on `user_password_compliance`, and revoke EXECUTE from authenticated users on `get_password_policy_hash`. Only the `service_role` should perform password compliance record modifications.

supabase/functions/_backend/public/organization/members/delete.ts (1)
27-42: RLS policy on the `users` table will block this endpoint: SELECT by email is not permitted.

The `users` table has a SELECT RLS policy that restricts queries to the authenticated user's own record (`id = get_identity(...)`). However, lines 27–31 attempt to query any user by email via `.eq('email', body.email)`. This query will fail for all cases except when the authenticated user's own email matches the requested email, which defeats the purpose of an admin member deletion endpoint.

Add an RLS policy on the `users` table that allows organization admins to SELECT users by email within their organization context, or use a `service_role` client for this operation (bypassing RLS when admin rights are already verified at the application level).

supabase/functions/_backend/public/organization/members/get.ts (3)
31-31: Use typed Context for improved type safety.

Per the coding guidelines, all Hono endpoint handlers must accept `Context<MiddlewareKeyVariables>` instead of the plain `Context` type. This ensures proper type checking for `c.get('requestId')`, `c.get('apikey')`, `c.get('auth')`, and other middleware-injected values.

🔎 Proposed fix

```diff
+import type { MiddlewareKeyVariables } from '../../../utils/hono.ts'
+
-export async function get(c: Context, bodyRaw: any, apikey: Database['public']['Tables']['apikeys']['Row']): Promise<Response> {
+export async function get(c: Context<MiddlewareKeyVariables>, bodyRaw: any, apikey: Database['public']['Tables']['apikeys']['Row']): Promise<Response> {
```

As per coding guidelines, Hono endpoint handlers should use the typed Context.
38-40: Verify capgkey availability before type assertion.

The type assertion `c.get('capgkey') as string` could hide runtime errors if the `capgkey` is not set in the middleware context. Since `supabaseApikey` already handles the fallback logic (`apikey.key ?? c.get('capgkey')`), consider validating that at least one is available before the authorization check.

🔎 Suggested defensive check

```diff
+  const effectiveKey = apikey.key ?? c.get('capgkey')
+  if (!effectiveKey) {
+    throw simpleError('missing_api_key', 'API key not available')
+  }
+
-  if (!(await hasOrgRightApikey(c, body.orgId, apikey.user_id, 'read', c.get('capgkey') as string)) || !(apikeyHasOrgRight(apikey, body.orgId))) {
+  if (!(await hasOrgRightApikey(c, body.orgId, apikey.user_id, 'read', effectiveKey)) || !(apikeyHasOrgRight(apikey, body.orgId))) {
```
44-48: Use the 1-argument wrapper function instead of passing an explicit `user_id`.

The RPC call should use `get_org_members(guild_id)` instead of `get_org_members(user_id, guild_id)`. The migration that added API key support (20251228160000) explicitly revoked public access to the 2-argument inner function and established a 1-argument wrapper that handles authentication and authorization internally using `get_identity()`. Calling the inner function directly with an explicit `user_id` parameter bypasses this security layer.

Change lines 45-48 to:

```ts
const { data, error } = await supabase
  .rpc('get_org_members', {
    guild_id: body.orgId,
  })
```

supabase/functions/_backend/public/organization/post.ts (1)
25-31: Consider using a RETURNING clause and a transaction for atomicity.

The insert operation doesn't use a `RETURNING` clause, requiring a separate SELECT query (lines 33-38). Additionally, the insert and select are not wrapped in a transaction. This creates two issues:
- Unnecessary round-trip: A separate query is needed to retrieve the created org's ID
- Potential orphan records: If the subsequent SELECT fails, the org is already created but an error is returned to the user, who may retry and encounter a duplicate error
🔎 Proposed fix using RETURNING clause
```diff
-  const { error: errorOrg } = await supabase
+  const { data: dataOrg, error: errorOrg } = await supabase
     .from('orgs')
     .insert(newOrg)
+    .select('id')
+    .single()
   if (errorOrg) {
     throw simpleError('cannot_create_org', 'Cannot create org', { error: errorOrg?.message })
   }
-  // Read the created org - the insert trigger creates org_users so RLS should allow access
-  const { data: dataOrg, error: errorOrg2 } = await supabase
-    .from('orgs')
-    .select('id')
-    .eq('created_by', apikey.user_id)
-    .eq('name', body.name)
-    .single()
-  if (errorOrg2 || !dataOrg) {
+  if (!dataOrg) {
     throw simpleError('cannot_get_org', 'Cannot get created org', { error: errorOrg2?.message })
   }
```

This eliminates the race condition window and ensures the correct org ID is returned atomically.
supabase/functions/_backend/files/preview.ts (2)
102-111: Critical: Double "Bearer " prefix breaks Authorization header authentication.

Line 105 extracts the token after `'Bearer '` from the authorization header, then line 111 adds the `'Bearer '` prefix again. This results in `'Bearer Bearer <token>'` for Authorization header requests, which will fail authentication.

🔎 Proposed fix

```diff
 // Accept token from: query param (first request), cookie (subsequent requests), or Authorization header
 const authorization = c.req.header('authorization')
 const tokenFromQuery = c.req.query('token')
 const tokenFromCookie = getCookie(c, TOKEN_COOKIE_NAME)
-const token = authorization?.split('Bearer ')[1] || tokenFromQuery || tokenFromCookie
+const tokenWithoutBearer = authorization?.split('Bearer ')[1]
+const token = tokenWithoutBearer || tokenFromQuery || tokenFromCookie
 if (!token)
   return simpleError('cannot_find_authorization', 'Cannot find authorization. Pass token as query param on first request.')
 // Use authenticated client - RLS will enforce access based on JWT
-const supabase = supabaseClient(c, `Bearer ${token}`)
+// tokenWithoutBearer already has no prefix, query/cookie tokens never had prefix
+const jwt = authorization ? authorization : `Bearer ${token}`
+const supabase = supabaseClient(c, jwt)
```
113-185: Fix manifest table SELECT RLS policy to enforce organization access control.

The manifest table has a critical security vulnerability: the SELECT policy permits any authenticated user to read any manifest entry, inconsistent with the DELETE and INSERT policies, which properly enforce organization membership checks.
The SELECT policy should validate that the requesting user belongs to the organization that owns the app_version, matching the logic in the DELETE and INSERT policies:
```sql
CREATE POLICY "Allow users to read any manifest entry"
ON "public"."manifest"
FOR SELECT
TO "authenticated"
USING (
  EXISTS (
    SELECT 1
    FROM "public"."app_versions" "av"
    JOIN "public"."apps" "a" ON (("av"."app_id")::"text" = ("a"."app_id")::"text")
    WHERE ("av"."id" = "manifest"."app_version_id")
      AND ("a"."owner_org" IN (
        SELECT "ou"."org_id"
        FROM "public"."org_users" "ou"
        WHERE "ou"."user_id" = (SELECT "auth"."uid"())
      ))
  )
);
```

Currently, the SELECT policy allows information disclosure: any authenticated user can query manifest entries belonging to other organizations. While preview.ts and download_link.ts mitigate this through application-level checks, RLS should enforce access control at the database boundary rather than relying on application logic.
supabase/functions/_backend/private/credits.ts (1)
304-355: `/complete-top-up`: idempotency flow is solid; fix regex to satisfy ESLint

The complete-top-up handler correctly:

- Re-resolves `customerId` via RLS (`resolveOrgStripeContext`).
- Validates that the Stripe session belongs to the org, is a paid `payment` session, and accumulates `creditQuantity` only from the configured credit product.
- Checks for existing `usage_credit_transactions` to enforce idempotency before invoking the `top_up_usage_credits` RPC, and re-reads balances if a previous transaction exists.

The additional `sessionId` format guard is a good protection before interpolating into the PostgREST `.or(...)` filter.

However, ESLint is flagging the regex:

```ts
if (!/^cs_(test|live)_[a-zA-Z0-9]+$/.test(body.sessionId))
```

because the capturing group `(test|live)` is never used (`regexp/no-unused-capturing-group` in the pipeline). You can fix the lint error (and keep the same semantics) by using a non-capturing group:

```ts
// Validate sessionId format to prevent injection (Stripe session IDs: cs_test_* or cs_live_*)
if (!/^cs_(?:test|live)_[a-zA-Z0-9]+$/.test(body.sessionId))
  throw simpleError('invalid_session_id', 'Invalid session ID format')
```

Everything else in this block looks good and remains RLS-safe via the token-scoped `supabaseClient`.

Also applies to: 360-425
supabase/functions/_backend/public/webhooks/test.ts (1)
28-37: Missing error handling for the `supabaseWithAuth` return value.

`supabaseWithAuth` can return a `Response` object (from `simpleError`) when auth is invalid. The code assigns the result directly to `supabase` and uses it as a Supabase client without checking whether it is an error response. This will cause a runtime error when calling `.from()` on a `Response` object.

🔎 Proposed fix

```diff
 // Use authenticated client for data queries - RLS will enforce access
 const supabase = supabaseWithAuth(c, auth)
+if (supabase instanceof Response) {
+  return supabase
+}
 // Get webhook
```

supabase/functions/_backend/private/invite_new_user_to_org.ts (1)
149-155: Replace direct insert with a call to the `invite_user_to_org()` database function.

The `tmp_users` table RLS policy has `WITH CHECK (false)`, which blocks all INSERT operations. The code at lines 149-155 performs a direct insert that will fail at runtime. Use the existing `public.invite_user_to_org(email, org_id, invite_type)` SECURITY DEFINER function instead, which properly handles authentication checks and permissions.

supabase/functions/_backend/public/build/start.ts (1)
109-117: Add an RLS UPDATE policy for API key-based access to the `build_requests` table.

The code uses `supabaseApikey`, which authenticates as the `anon` role with a `capgkey` header, but the `build_requests` table has no UPDATE policy for this authentication method. Current policies only allow authenticated users (org membership check) for SELECT and service_role for all operations. An UPDATE policy using `is_allowed_capgkey()` is required, similar to patterns used elsewhere in the codebase:

```sql
CREATE POLICY "Allow apikey to update build requests"
ON public.build_requests
FOR UPDATE
TO anon
USING (public.is_allowed_capgkey(public.get_apikey_header(), '{all,write}'::public.key_mode[], app_id))
WITH CHECK (public.is_allowed_capgkey(public.get_apikey_header(), '{all,write}'::public.key_mode[], app_id));
```
🤖 Fix all issues with AI Agents
In @src/auto-imports.d.ts:
- Line 335: The code references a missing type Webhook in stores/webhooks.ts
used by several functions; add and export the type alias Webhook =
Database['public']['Tables']['webhooks']['Row'] inside stores/webhooks.ts (and
update any exports so it is available where imported, e.g., re-export or ensure
auto-imports.d.ts picks it up) so the function signatures that reference Webhook
resolve correctly.
In @supabase/functions/_backend/private/create_device.ts:
- Line 7: The code currently assumes supabaseWithAuth returns a Supabase client;
guard the error path by checking the returned value for an error (e.g., if the
result equals simpleError('not_authorized', ...) or has an .error flag) before
calling supabase.rpc and return an HTTP 401/appropriate response if auth failed;
also add org_id to bodySchema and the CreateDeviceBody type (use z.uuid() if
org_id is a UUID) so validation and typings match the new parameter, and ensure
callers of this private endpoint are migrated or handle invalid_json_body
accordingly.
In @supabase/functions/_backend/private/set_org_email.ts:
- Around line 29-30: The code assumes supabaseWithAuth(c, auth) always returns a
Supabase client but it can return a simpleError object; update the handler to
check the return of supabaseWithAuth (variable supabase) and detect an error
response (e.g., presence of error/status/message or whatever shape simpleError
uses) before calling supabase.from(); if it is an error, short-circuit the
function by returning or throwing that error response (preserving its
status/message) so you never call .from() on a non-client value.
In @supabase/functions/_backend/private/upload_link.ts:
- Around line 26-27: Replace all calls that use supabaseApikey(...) to run DB
RPCs/queries with the D1 clients from utils/pg.ts: call getPgClient() or
getDrizzleClient() and use that client to execute the same RPC/queries (e.g.,
the rpc('get_user_id', { apikey: capgkey, app_id: body.app_id }) call and the
other RPCs/queries currently invoked via supabaseApikey). Locate every
supabaseApikey usage (the rpc('get_user_id') call and the subsequent
RPCs/queries in this file) and refactor to obtain a client via getPgClient() or
getDrizzleClient(), then run the RPC or equivalent query through that client,
preserving parameters and error handling and returning the same data shape.
In @supabase/functions/_backend/utils/stripe.ts:
- Line 389: The inline comment in supabase/functions/_backend/utils/stripe.ts
contains a Supabase dashboard URL exposing a project ID and customer filter;
remove that specific comment line and any other hardcoded dashboard or customer
links, and if a note is needed replace it with a generic comment (e.g.,
"reference Supabase dashboard") or an internal doc pointer; ensure no real
project IDs or customer identifiers remain in comments or commits.
🧹 Nitpick comments (17)
supabase/functions/_backend/public/app/delete.ts (1)
57-144: Consider adding error logging for failed deletions.

The `Promise.all` executes 14 delete operations but none check the returned `error` property. The Supabase client doesn't throw on delete failures; it returns `{ data, error }`. If any operation fails silently (e.g., FK constraint violation, unexpected RLS block), orphaned data may remain.

Since this is cleanup code and the critical deletions at the end do check errors, this may be acceptable as-is. However, logging failures would help diagnose incomplete cleanups.
🔎 Optional: Log errors from parallel deletions
```diff
-  // Run most deletions in parallel
-  await Promise.all([
+  // Run most deletions in parallel
+  const deleteResults = await Promise.all([
     // Delete version related data (user-facing, has RLS)
     supabase
       .from('app_versions_meta')
       .delete()
       .eq('app_id', appId),
     // ... rest of deletions
   ])
+
+  // Log any deletion failures for debugging
+  deleteResults.forEach((result, index) => {
+    if (result.error) {
+      cloudlog({ requestId: c.get('requestId'), message: 'deletion failed during app cleanup', index, error: result.error, app_id: appId })
+    }
+  })
```

supabase/functions/_backend/private/set_org_email.ts (1)
18-75: Add structured logging for observability.

The function handles critical operations (organization email updates with Stripe integration) but lacks structured logging. Adding logging would improve debugging and monitoring, especially for the error paths and the rollback scenario.
🔎 Suggested logging additions
Consider adding structured logging at key points:
```ts
// At the start of the handler
cloudlog({ requestId: c.get('requestId'), message: 'Setting org email', orgId: safeBody.org_id, newEmail: safeBody.email })

// On successful completion (before line 74)
cloudlog({ requestId: c.get('requestId'), message: 'Org email updated successfully', orgId: safeBody.org_id })

// On rollback (around line 70)
cloudlog({ requestId: c.get('requestId'), message: 'Rolling back Stripe email update due to database error', orgId: safeBody.org_id, error: updateOrgErr })
```

Note: You'll need to import `cloudlog` from the appropriate utility module. As per coding guidelines, structured logging with `cloudlog({ requestId: c.get('requestId'), message: '...' })` should be used for all backend logging.

supabase/functions/_backend/private/stripe_checkout.ts (2)
40-40: Defensive check for authorization token.

The non-null assertion operator on `authorization!` bypasses TypeScript's safety checks. Although `middlewareAuth` should ensure `authorization` exists, consider adding an explicit check for defensive programming.

🔎 Suggested defensive check

```diff
 // Use authenticated client for data queries - RLS will enforce access
-const supabase = supabaseClient(c, authorization!)
+if (!authorization)
+  return simpleError('not_authorize', 'Not authorize')
+const supabase = supabaseClient(c, authorization)
```
41-45: Consider migrating to `getPgClient()` or `getDrizzleClient()`.

According to the coding guidelines, all database operations should use `getPgClient()` or `getDrizzleClient()` from `utils/pg.ts` during the active migration to Cloudflare D1. This code currently uses the Supabase client for querying the orgs table.

supabase/functions/_backend/private/upload_link.ts (1)
26-27: Consider user ID consistency.

The code uses two different user IDs:

- `userId` from the `get_user_id` RPC (line 26)
- `apikey.user_id` from the context (line 52)

If both represent the same user, consider using `userId` consistently throughout for clarity. If they serve different purposes (e.g., requesting user vs. API key owner), consider adding a comment to clarify the distinction.

Also applies to: 52-52
supabase/functions/_backend/public/statistics/index.ts (1)
43-54: Consider using `auth.jwt` directly instead of re-reading the authorization header.

The `AuthInfo` interface already provides `auth.jwt: string | null`. Re-reading from `c.req.header('authorization')` duplicates the header lookup that middleware already performed. Using `auth.jwt` would be more consistent with the pattern of using the `auth` object throughout.

Additionally, if `auth.authType === 'apikey'` but `auth.apikey` is somehow `null`, the code falls through to the JWT branch, which may produce unexpected behavior.

🔎 Suggested refactor

```diff
 function getAuthenticatedSupabase(c: Context, auth: AuthInfo) {
   if (auth.authType === 'apikey' && auth.apikey) {
     return supabaseApikey(c, auth.apikey.key)
   }
+  if (auth.authType === 'apikey') {
+    // apikey auth but apikey object is null - fallback to capgkey header
+    return supabaseApikey(c, null)
+  }
   // JWT auth
-  const authorization = c.req.header('authorization')
-  if (!authorization) {
+  if (!auth.jwt) {
     throw quickError(401, 'no_authorization', 'No authorization header')
   }
-  return supabaseClient(c, authorization)
+  return supabaseClient(c, auth.jwt)
 }
```

supabase/functions/_backend/private/delete_failed_version.ts (1)
21-21: Consider removing the type assertion.

The `as string` cast assumes `capgkey` is always present. While `middlewareKey` should ensure this, and `supabaseApikey` will throw a clear error if the key is missing, removing the assertion would be more type-safe.

🔎 Optional refactor to improve type safety

```diff
-  const capgkey = c.get('capgkey') as string
+  const capgkey = c.get('capgkey')
+  if (!capgkey) {
+    return quickError(401, 'missing_api_key', 'API key is required')
+  }
```
74-74: Optional: Consider consistent code formatting.

The formatting changes (opening braces on new lines at 74, 353) and the removal of the type assertion at line 365 are minor stylistic adjustments with no behavioral impact. If these changes are part of a broader formatting standardization effort, ensure they're applied consistently across the codebase.
Also applies to: 353-353, 365-365
supabase/functions/_backend/private/create_device.ts (1)
61-68: Differentiate app "not found" from other DB errors in the new existence check

The new apps query is useful to fail fast when an `app_id` is invalid, but currently any error from:

```ts
const { error: appError } = await supabase
  .from('apps')
  .select('owner_org')
  .eq('app_id', safeBody.app_id)
  .single()
```

is mapped to `404 app_not_found`. That will also treat permission issues, transient DB failures, or other PostgREST errors as "not found".

Consider distinguishing "no rows found" from other errors (e.g. by checking `appError?.code` or `details`) and returning a 5xx or internal error in the latter case, so operational problems don't look like missing apps. This can be done later as a small refactor.

messages/pl.json (1)
678-681: Consider keeping “Pixel” as an untranslated device brand label
`"device-iphone"` correctly keeps the brand name, but `"device-pixel": "Piksel"` localizes the Pixel device name. For consistency with other locales and to match the actual device brand, consider leaving it as "Pixel".

Optional tweak

```diff
-  "device-pixel": "Piksel",
+  "device-pixel": "Pixel",
```

Also applies to: 738-738, 766-766, 1199-1199
supabase/functions/_backend/private/credits.ts (3)
7-8: RLS-aware `supabaseClient` usage in `getCreditTopUpProductId`

Switching to `supabaseClient(c, token)` for `stripe_info` and `plans` lookups is a good move: RLS now governs access, and the `.single()` calls keep the intent clear. Error logging is structured and error messages are specific. Just ensure the `token` you pass is exactly what Supabase expects in the `Authorization` header (typically `Bearer <jwt>`), which you appear to preserve via `rawAuthHeader`.

You might later consider normalizing the auth header (e.g. stripping whitespace / enforcing the `Bearer` prefix) before passing it into `supabaseClient`, but that's not a blocker here.

Also applies to: 72-108
110-132: `resolveOrgStripeContext`: enforcing JWT/RLS for org-scoped Stripe context

Deriving `rawAuthHeader` and using `supabaseClient` to fetch `orgs.customer_id` means access to the Stripe customer is now RLS-gated, which is desirable. The combined error message for "no Stripe customer or no access" keeps leakage minimal.

To better align with the backend auth conventions, you could:

- Check `c.get('auth')?.authType` and explicitly require `authType === 'jwt'` here (or handle `apikey` separately) before using the JWT-backed client.
- If you ever support API-key based flows for these endpoints, branch to an API-key–scoped client rather than relying solely on the `authorization` header.
266-282: `/start-top-up`: quantity clamping and token plumbing look correct

The quantity parsing/clamping logic is sane (`DEFAULT_TOP_UP_QUANTITY`, `MAX_TOP_UP_QUANTITY`), and wiring `customerId`/`token` from `resolveOrgStripeContext` into `getCreditTopUpProductId` keeps all Stripe/Supabase lookups scoped to the caller.

If you expect API key callers here, consider explicitly rejecting `authType === 'apikey'` (via `c.get('auth')`) with a clearer error than a generic "not_authorized", or extend `resolveOrgStripeContext` to support that path.

src/pages/app/[package].channel.[channel].devices.vue (1)
127-135: Empty string fallback for org_id may cause unclear backend errors.

If `channel.value?.owner_org` is undefined, an empty string is sent as `org_id`. This could pass initial request construction but fail with a confusing backend error. Consider failing early or showing a user-friendly error when the organization context is missing.

🔎 Suggested improvement

```diff
+  if (!channel.value?.owner_org) {
+    toast.error(t('no-organization-selected'))
+    return
+  }
+
   const { error: addDeviceError } = await supabase.functions.invoke('private/create_device', {
     body: {
       device_id: deviceId,
       app_id: route.params.package as string,
-      org_id: channel.value?.owner_org ?? '',
+      org_id: channel.value.owner_org,
       platform,
       version_name: channel.value?.version.name ?? 'unknown',
     },
   })
```
217-222: Good test isolation - cleanup restores original state.

Adding the afterAll cleanup to reset `user_right` back to `'read'` prevents test pollution. Consider adding error handling to catch and log cleanup failures, which could help debug intermittent test issues.

🔎 Optional: Add error handling to cleanup

```diff
 afterAll(async () => {
   await resetAppData(APPNAME)
   await resetAppDataStats(APPNAME)
   // Restore user rights back to 'read' to avoid polluting other tests
   const supabase = getSupabaseClient()
-  await supabase.from('org_users').update({
+  const { error } = await supabase.from('org_users').update({
     user_right: 'read',
   }).eq('org_id', NON_OWNER_ORG_ID).eq('user_id', USER_ID)
+  if (error) {
+    console.warn('Failed to reset user rights in afterAll:', error)
+  }
 })
```
40-49: Authenticated client with typed query.

The `supabaseApikey` client is properly instantiated. Note that the webhook query here doesn't use a type assertion (`as any`) unlike other webhook files, which suggests the types may be partially available. Consider aligning the typing approach across all webhook files for consistency.
53-54: Consider reusing the Supabase client instance.

The authenticated Supabase client is created twice: once in `validateInvite` (line 54) and again in the POST handler (line 114). Consider returning the client from `validateInvite` or extracting the client creation to a shared location to avoid redundancy.

🔎 Suggested approach

Return the client from `validateInvite` alongside other data:

```diff
-  return { inviteCreatorUser, org, body, authorization }
+  return { inviteCreatorUser, org, body, supabase }
```

Then in the handler:

```diff
-  // Use authenticated client for data queries - RLS will enforce access
-  const supabase = supabaseClient(c, res.authorization!)
+  const supabase = res.supabase
```

Also applies to: 113-114
```ts
import { middlewareV2 } from '../utils/hono_middleware.ts'
import { createStatsDevices } from '../utils/stats.ts'
import { supabaseAdmin as useSupabaseAdmin } from '../utils/supabase.ts'
import { supabaseWithAuth } from '../utils/supabase.ts'
```
Guard supabaseWithAuth error path and validate new org_id usage
`supabaseWithAuth` can fall back to `simpleError('not_authorized', 'Not authorized')`; in that case `supabase` will not be a Supabase client, and `supabase.rpc(...)` will likely throw at runtime instead of short-circuiting the request as an HTTP error. It's safer to explicitly handle that case before using it as a client, e.g.:
Proposed guard around supabaseWithAuth
```diff
-  const supabase = supabaseWithAuth(c, auth)
+  const supabase = supabaseWithAuth(c, auth)
+  if (supabase instanceof Response)
+    return supabase
```

- Adding `org_id` to both `bodySchema` and `CreateDeviceBody` keeps validation and typing in sync; just ensure all callers of this private endpoint already send `org_id` or have been migrated, otherwise they'll now get `invalid_json_body`.
- If `org_id` is a UUID in your schema (like `device_id`), consider tightening `bodySchema` to `z.uuid()` for earlier validation, but that's optional.

Also applies to: 12-15, 21-27, 40-42, 45-51
🤖 Prompt for AI Agents
In @supabase/functions/_backend/private/create_device.ts at line 7, the code
currently assumes supabaseWithAuth returns a Supabase client; guard the error
path by checking the returned value for an error (e.g., if the result equals
simpleError('not_authorized', ...) or has an .error flag) before calling
supabase.rpc and return an HTTP 401/appropriate response if auth failed; also
add org_id to bodySchema and the CreateDeviceBody type (use z.uuid() if org_id
is a UUID) so validation and typings match the new parameter, and ensure callers
of this private endpoint are migrated or handle invalid_json_body accordingly.
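The union-return pattern the review describes can be illustrated with stand-in types. The `ErrorResponse` class below substitutes for the real `Response` that `simpleError` produces, so the snippet is self-contained:

```typescript
// Sketch of the guard: supabaseWithAuth() may return an error response
// instead of a client, so callers must narrow the union before using it.

type SupabaseLike = { rpc: (fn: string) => { error: null } }

// Stand-in for the HTTP error Response that simpleError() produces.
class ErrorResponse {
  constructor(public status: number, public body: string) {}
}

function supabaseWithAuth(auth: string | undefined): SupabaseLike | ErrorResponse {
  if (!auth)
    return new ErrorResponse(401, 'not_authorized')
  return { rpc: (_fn: string) => ({ error: null }) }
}

function handler(auth: string | undefined): ErrorResponse | { error: null } {
  const supabase = supabaseWithAuth(auth)
  if (supabase instanceof ErrorResponse)
    return supabase // short-circuit with the HTTP error; never call .rpc()
  return supabase.rpc('create_device')
}
```

With the `instanceof` check in place, TypeScript narrows the union, so the `.rpc()` call is only reachable on a real client.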
```ts
const { data: userId, error: _errorUserId } = await supabaseApikey(c, capgkey)
  .rpc('get_user_id', { apikey: capgkey, app_id: body.app_id })
```
🧩 Analysis chain
🏁 Script executed:
# First, let's examine the actual file to understand the context
cat -n supabase/functions/_backend/private/delete_failed_version.ts | head -80
Repository: Cap-go/capgo
Length of output: 3533
🏁 Script executed:
# Search for RLS policies on apps and app_versions tables
echo "=== RLS policies for apps table ==="
rg -n "CREATE POLICY.*apps" -A 8 -g "*.sql"
echo -e "\n=== RLS policies for app_versions table ==="
rg -n "CREATE POLICY.*app_versions" -A 8 -g "*.sql"
Repository: Cap-go/capgo
Length of output: 27243
🏁 Script executed:
# Search for get_user_id RPC function
echo "=== get_user_id RPC function definition ==="
rg -n "CREATE.*FUNCTION.*get_user_id" -A 15 -g "*.sql"
Repository: Cap-go/capgo
Length of output: 7736
🏁 Script executed:
# Search for API key authentication in policies and functions
echo "=== API key authentication context ==="
rg -n "apikey|capgkey" -g "*.sql" -C 2 | head -100
Repository: Cap-go/capgo
Length of output: 8388
🏁 Script executed:
# Check for any newer migrations with DELETE policies for API keys
rg -n "CREATE POLICY.*DELETE.*anon" -g "*.sql" | grep -i "app_versions"
# Check for migrations after 20250913
fd "202510|202511|202512" -g "*.sql" supabase/migrations | head -20
# Look for DELETE policies with API key support on app_versions
rg -n "DELETE.*anon.*app_versions|app_versions.*DELETE.*anon" -g "*.sql"
Repository: Cap-go/capgo
Length of output: 107
🏁 Script executed:
# Check the actual DELETE policy more carefully
rg -n "CREATE POLICY.*app_versions.*DELETE" -A 10 -g "*.sql"
Repository: Cap-go/capgo
Length of output: 1909
🏁 Script executed:
# Verify the middleware and authentication setup
cat -n supabase/functions/_backend/utils/hono_middleware.ts | grep -A 20 "middlewareKey"
Repository: Cap-go/capgo
Length of output: 2605
🏁 Script executed:
# Check all migrations after base migration for app_versions DELETE policies
fd -type f -name "*.sql" supabase/migrations | sort | tail -30
Repository: Cap-go/capgo
Length of output: 226
🏁 Script executed:
# Search for any DELETE policy that allows anon role on app_versions
rg "DELETE.*TO.*anon|TO.*anon.*DELETE" -A 5 -g "*.sql" supabase/migrations | grep -i "app_versions" -A 5 -B 5
Repository: Cap-go/capgo
Length of output: 38
🏁 Script executed:
# List all policies on app_versions to see complete picture
rg "CREATE POLICY.*app_versions" -A 2 -g "*.sql" subabase/schemas/prod.sql 2>/dev/null | head -50
Repository: Cap-go/capgo
Length of output: 38
🏁 Script executed:
# Check the most recent migrations for any DELETE policy updates
ls -ltr supabase/migrations | tail -20
Repository: Cap-go/capgo
Length of output: 2094
🏁 Script executed:
# Check the recent migration that might add API key support
cat -n supabase/migrations/20260104110000_add_apikey_policy_to_get_orgs_v7.sql
Repository: Cap-go/capgo
Length of output: 11695
🏁 Script executed:
# Search comprehensively for all DELETE policies on app_versions across all migrations
rg "CREATE POLICY.*DELETE.*app_versions|app_versions.*DELETE" -B 2 -A 15 subabase/migrations/*.sql
Repository: Cap-go/capgo
Length of output: 121
🏁 Script executed:
# Check if there's a DELETE policy with anon role
rg "DELETE.*TO.*anon|TO.*anon" -B 5 -A 10 subabase/migrations/*.sql | grep -i "policy\|delete\|anon" | head -80
Repository: Cap-go/capgo
Length of output: 121
🏁 Script executed:
# Get all policies for app_versions in the latest prod schema
rg "CREATE POLICY.*app_versions" subabase/schemas/prod.sql
Repository: Cap-go/capgo
Length of output: 121
🏁 Script executed:
# Search for all DELETE policies on app_versions
rg "CREATE POLICY.*DELETE" supabase/schemas/prod.sql | grep -i "app_versions"
Repository: Cap-go/capgo
Length of output: 305
🏁 Script executed:
# Check how supabaseApikey is configured
cat -n supabase/functions/_backend/utils/supabase.ts | head -100
Repository: Cap-go/capgo
Length of output: 4175
🏁 Script executed:
# Search for the supabaseApikey function definition
rg -n "export.*supabaseApikey|function.*supabaseApikey" -A 20 supabase/functions/_backend/utils/supabase.ts
Repository: Cap-go/capgo
Length of output: 883
🏁 Script executed:
# Check if there's an updated DELETE policy for app_versions with anon or API key support
rg "DELETE.*app_versions|app_versions.*DELETE" supabase/schemas/prod.sql -A 3
Repository: Cap-go/capgo
Length of output: 1891
🏁 Script executed:
# Verify that supabaseApikey uses anon role by checking the complete function
cat -n supabase/functions/_backend/utils/supabase.ts | sed -n '86,103p'
Repository: Cap-go/capgo
Length of output: 848
🏁 Script executed:
# Look at the most recent migrations to see if DELETE policy was added for API keys
ls -tr supabase/migrations/*.sql | tail -5 | xargs -I {} sh -c 'echo "=== {} ===" && head -50 {}'
Repository: Cap-go/capgo
Length of output: 6127
🏁 Script executed:
# Confirm there's no other DELETE policy for app_versions supporting anon
rg "CREATE POLICY.*app_versions.*DELETE|DELETE.*app_versions" supabase/schemas/prod.sql
Repository: Cap-go/capgo
Length of output: 813
🏁 Script executed:
# Check if there's a recent migration that might have added API key DELETE support
grep -r "DELETE.*anon.*app_versions\|app_versions.*DELETE.*anon" supabase/migrations/
Repository: Cap-go/capgo
Length of output: 38
DELETE operation will fail due to missing RLS policy for API key authentication.
The refactor from `supabaseAdmin` to `supabaseApikey` at lines 70-74 will cause runtime failures. The DELETE policy on `app_versions` requires the authenticated role with the super_admin privilege level:
```sql
CREATE POLICY "Allow all for auth (super_admin+)" ON "public"."app_versions"
  FOR DELETE TO "authenticated" USING (check_min_rights('super_admin', ...))
```

However, `supabaseApikey()` connects as the anon role, and API keys with modes 'all', 'write', and 'upload' cannot have super_admin privileges. No DELETE policy exists for the anon role on this table, so all delete attempts will be rejected with a permission-denied error.
Add an RLS policy for app_versions DELETE that supports API key authentication (similar to the SELECT policies which use TO authenticated, anon).
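Until such a policy exists, the function could also fail fast instead of surfacing an opaque RLS "permission denied". The sketch below models the review's claim that no API-key mode maps to super_admin; the mode names come from the review, but the rights model is a simplification for illustration only:

```typescript
// Illustrative pre-check: API-key modes can never satisfy a DELETE policy
// that requires super_admin via the `authenticated` role, so we can reject
// the request with a clear error before hitting RLS.

type ApiKeyMode = 'all' | 'write' | 'upload' | 'read'

// Per the review, no API-key mode maps to super_admin today, so an
// anon-role DELETE on app_versions can never pass the existing policy.
function canDeleteVersion(mode: ApiKeyMode): boolean {
  const superAdminModes: ApiKeyMode[] = [] // intentionally empty today
  return superAdminModes.includes(mode)
}

function deleteFailedVersion(mode: ApiKeyMode): { ok: boolean, reason?: string } {
  if (!canDeleteVersion(mode))
    return { ok: false, reason: 'apikey mode cannot satisfy super_admin DELETE policy' }
  return { ok: true }
}
```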
```ts
// Use authenticated client for data queries - RLS will enforce access
const supabase = supabaseWithAuth(c, auth)
```
Missing error handling for the `supabaseWithAuth` return value.

The `supabaseWithAuth` function can return a `simpleError` response when authentication is invalid, but the code assumes it always returns a Supabase client. This will cause a runtime error when trying to call `.from()` on an error response.
🔎 Proposed fix to handle error return value:

```diff
  // Use authenticated client for data queries - RLS will enforce access
  const supabase = supabaseWithAuth(c, auth)
+ if (supabase instanceof Response) {
+   return supabase
+ }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
// Use authenticated client for data queries - RLS will enforce access
const supabase = supabaseWithAuth(c, auth)
if (supabase instanceof Response) {
  return supabase
}
```
🤖 Prompt for AI Agents
In @supabase/functions/_backend/private/set_org_email.ts around lines 29-30, the code assumes supabaseWithAuth(c, auth) always returns a Supabase client but it can return a simpleError object; update the handler to check the return of
it can return a simpleError object; update the handler to check the return of
supabaseWithAuth (variable supabase) and detect an error response (e.g.,
presence of error/status/message or whatever shape simpleError uses) before
calling supabase.from(); if it is an error, short-circuit the function by
returning or throwing that error response (preserving its status/message) so you
never call .from() on a non-client value.
```ts
const { data: userId, error: _errorUserId } = await supabaseApikey(c, capgkey)
  .rpc('get_user_id', { apikey: capgkey, app_id: body.app_id })
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check the actual file content
echo "=== Content of upload_link.ts ==="
head -100 supabase/functions/_backend/private/upload_link.ts | cat -n
Repository: Cap-go/capgo
Length of output: 4330
🏁 Script executed:
#!/bin/bash
# Search for examples of getPgClient/getDrizzleClient usage in backend files
echo "=== Examples of getPgClient usage in backend files ==="
rg "getPgClient|getDrizzleClient" supabase/functions/_backend/ -A 3 -B 1 | head -50
Repository: Cap-go/capgo
Length of output: 3622
🏁 Script executed:
#!/bin/bash
# Check if supabaseApikey is used elsewhere in backend files and how
echo "=== Usage of supabaseApikey in backend files ==="
rg "supabaseApikey" supabase/functions/_backend/ -B 2 -A 2 | head -60
Repository: Cap-go/capgo
Length of output: 4868
🏁 Script executed:
#!/bin/bash
# Check if there are comments or documentation about supabaseApikey usage patterns
echo "=== Check for comments explaining supabaseApikey usage ==="
rg -B 3 "supabaseApikey\(c," supabase/functions/_backend/ | grep -A 3 "Use authenticated" | head -30
Repository: Cap-go/capgo
Length of output: 2253
🏁 Script executed:
#!/bin/bash
# Check the PR or migration context for D1 migration status
echo "=== Check for migration status or context ==="
find . -name "AGENTS.md" -o -name "MIGRATION*.md" -o -name ".github/copilot*" | xargs cat 2>/dev/null | head -100
Repository: Cap-go/capgo
Length of output: 3011
Refactor to use `getPgClient()` and `getDrizzleClient()` instead of `supabaseApikey`.

All database operations in `supabase/functions/_backend` must use `getPgClient()` or `getDrizzleClient()` from `utils/pg.ts` as part of the active Cloudflare D1 migration. Replace `supabaseApikey(c, capgkey)` with the prescribed clients at lines 26–27, 36–41, 46–53, and 88–91.
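A sketch of that direction follows. `getPgClient` and the SQL shape are assumptions about the `utils/pg.ts` module, and the client is stubbed here so the example is self-contained:

```typescript
// Sketch: run the same lookup through a Postgres client instead of
// supabaseApikey().rpc(). The RPC becomes a plain SQL function call.

interface PgClient {
  query: (sql: string, params: string[]) => Promise<{ rows: { user_id: string }[] }>
}

// Stub standing in for getPgClient() from utils/pg.ts.
function getPgClient(): PgClient {
  return {
    query: async (_sql, params) =>
      ({ rows: [{ user_id: `user-for-${params[0]}` }] }),
  }
}

// Equivalent of: supabaseApikey(c, capgkey).rpc('get_user_id', {...})
async function getUserId(capgkey: string, appId: string): Promise<string | null> {
  const pg = getPgClient()
  const { rows } = await pg.query(
    'SELECT get_user_id($1, $2) AS user_id', // same RPC, called as a SQL function
    [capgkey, appId],
  )
  return rows[0]?.user_id ?? null
}
```

Note that calling the SQL function directly bypasses RLS entirely, so any access checks the RPC relied on would need to be preserved in the query or the function itself.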
🤖 Prompt for AI Agents
In @supabase/functions/_backend/private/upload_link.ts around lines 26-27, replace all calls that use supabaseApikey(...) to run DB RPCs/queries with the
D1 clients from utils/pg.ts: call getPgClient() or getDrizzleClient() and use
that client to execute the same RPC/queries (e.g., the rpc('get_user_id', {
apikey: capgkey, app_id: body.app_id }) call and the other RPCs/queries
currently invoked via supabaseApikey). Locate every supabaseApikey usage (the
rpc('get_user_id') call and the subsequent RPCs/queries in this file) and
refactor to obtain a client via getPgClient() or getDrizzleClient(), then run
the RPC or equivalent query through that client, preserving parameters and error
handling and returning the same data shape.
```ts
}
if (baseConsoleUrl) {
  metadata.log_as = `${baseConsoleUrl}/log-as/${userId}`
  // https://supabase.com/dashboard/project/xvwzpoazmxkqosrdewyv/editor/445780?schema=public&filter=customer_id%3Aeq%3Acus_LR8PMu6exnGSuZ
```
Remove this comment - it exposes internal project details and customer data.
This comment contains a direct link to your Supabase dashboard with a specific project ID and customer ID filter. This exposes internal infrastructure details and potentially real customer data that should not be committed to the repository.
🔎 Proposed fix

```diff
- // https://supabase.com/dashboard/project/xvwzpoazmxkqosrdewyv/editor/445780?schema=public&filter=customer_id%3Aeq%3Acus_LR8PMu6exnGSuZ
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
// https://supabase.com/dashboard/project/xvwzpoazmxkqosrdewyv/editor/445780?schema=public&filter=customer_id%3Aeq%3Acus_LR8PMu6exnGSuZ
```
🤖 Prompt for AI Agents
In @supabase/functions/_backend/utils/stripe.ts at line 389, the inline comment
in supabase/functions/_backend/utils/stripe.ts contains a Supabase dashboard URL
exposing a project ID and customer filter; remove that specific comment line and
any other hardcoded dashboard or customer links, and if a note is needed replace
it with a generic comment (e.g., "reference Supabase dashboard") or an internal
doc pointer; ensure no real project IDs or customer identifiers remain in
comments or commits.
Closing - PR contained unrelated changes mixed in. Will create a clean PR with only the Stripe webhook fix. |
Hey, thanks for reaching out. This is not used in prod, so it's not critical.
That's interesting. We do test with many tools and could use it. How do we set it up?
Martin
Maker of Capgo <https://capgo.app/> Capacitor Live updates
Podcast host at SOLOS <https://solos.ventures/>
Maker of Captime <https://captime.app> Crossfit timer app
Mobile app Builder at Forgr <https://forgr.ee/>
…On January 12, 2026, Martin DONADIEU ***@***.***> wrote:
RomirJ left a comment (Cap-go/capgo#1370) <https://github.com/Cap-go/capgo/pull/1370#issuecomment-3737174493>
Saw you closed this PR, but I wanted to flag a critical security issue
that was part of the commit history. The file test_db.py contained
hardcoded production database credentials (password included) for the
neondb instance. Even though you deleted the file in the cleanup,
those credentials are now permanently in the git history and can be
scraped by bots.
You need to rotate those database credentials immediately in
Neon/Supabase.
I’m building Hikaflow, an AI engineering assistant that scans full
repo history and commit diffs to catch exactly these kinds of secret leaks and configuration risks before they merge. It builds a map of
your infrastructure to spot dangerous patterns like hardcoded
connection strings.
I’d love to give you free access to run a scan on CapGo. We're early
and strictly looking for feedback from engineers managing sensitive
production data. Access is manual right now, but if you book a quick
time, I can get you set up: [https://calendly.com/romirjain/30min]
Quick question—for the build-worker network isolation, are you
blocking outbound traffic entirely, or just relying on the internal
network config? If it can reach the public internet, a compromised
build script could still exfiltrate data.
Summary
- Adds an RLS policy granting `service_role` full access to the `stripe_info` table

Problem
The `stripe_info` table has RLS enabled but only had a SELECT policy for org members. When Stripe webhooks attempt to update subscription information via `supabaseAdmin()` (which uses `service_role`), the UPDATE operations were being blocked by RLS.

Solution
Added a new policy, `Allow service_role full access to stripe_info`, that grants service_role full access (SELECT, INSERT, UPDATE, DELETE) to the `stripe_info` table, allowing Stripe webhook handlers to properly update subscription data.
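The effect of the new policy can be modeled with a toy RLS evaluator. This is a deliberate simplification of Postgres policy semantics, for illustration only:

```typescript
// Toy model of the fix: with only the org-member SELECT policy, an UPDATE
// from service_role is blocked; adding a service_role ALL policy lets the
// Stripe webhook write succeed.

type Role = 'authenticated' | 'anon' | 'service_role'
type Command = 'SELECT' | 'INSERT' | 'UPDATE' | 'DELETE'

interface Policy { roles: Role[], commands: Command[] }

function allowed(policies: Policy[], role: Role, cmd: Command): boolean {
  return policies.some(p => p.roles.includes(role) && p.commands.includes(cmd))
}

// Before the fix: only org members (authenticated) may SELECT stripe_info.
const before: Policy[] = [
  { roles: ['authenticated'], commands: ['SELECT'] },
]

// After the fix: service_role additionally gets full access.
const after: Policy[] = [
  ...before,
  { roles: ['service_role'], commands: ['SELECT', 'INSERT', 'UPDATE', 'DELETE'] },
]
```

(In real Postgres, `service_role` typically bypasses RLS only when a policy or the `BYPASSRLS` attribute allows it; the explicit policy added here is what makes the webhook's UPDATE pass.)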
New Features
Bug Fixes