[Refactor][Feat] Implement Plan Limits for Hard-and-Soft Item Caps#1215
…ipe for subscription: the Stripe mock gives us invalid subscription dates, which means we don't get the item, so we sanitize it into a valid range. In prod this is unlikely to happen, but it will serve as a guard.
Stripe metadata has a 500-character limit. We used to pass product info (including the number of items) into the metadata of the Stripe object, so if the metadata was over 500 characters, invoking Stripe would return an error. This was done because when the Stripe webhook event fired, it sent the metadata along with it so our handler could pick it up. We rework this to pass only an ID, which the handler uses for a lookup in a new table. This decouples the product info from the webhook event. We keep it backwards compatible because existing subscriptions still have the product in the metadata, the same way we kept the offer-parsing code for subscriptions that had the offer in the metadata. The productVersionId is a hash of the productJson, to dedupe it.
One bug: if a subscription was cancelled but its period hadn't ended, we were getting the items for both it and the default/free plan. Another bug: the alreadyOwnsProduct flag was checking all subscriptions, not just active ones.
📝 Walkthrough
This PR implements a team-wide billing quota system that enforces plan-based limits for analytics events, session replays, and emails. It adds entitlement tracking, quota debit logic at API endpoints, plan-based timeout clamping for analytics queries, auth-user soft-limit checks, and dashboard banners displaying usage warnings against plan allocations.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 4 passed, ❌ 1 failed (1 warning)
(force-pushed 71b3666 to 660dded)
We just call captureError when you exceed the limit.
We check in processSingleEmail since that maps to the sending state. The sending -> server_error state transition already exists, so it is cleaner to hook into that transition pathway. Since we early-return before the captureError calls, this should not clog up Sentry. We use the payments items list as the source of truth.
We also switch Growth to 4 seats by default. Note that extra seats can only be purchased as an add-on to the Team or Growth plans. We keep the add-on purchase button in the team dialog for now, though the UI is subject to change.
We update the schema to enforce a max of the growth plan limit.
(force-pushed ba8e724 to 72c2829)
This applies to both client-side and server-side events. There are two write points for events to ClickHouse: the batch route for client-side events, and logEvent for server-side. The max batch size for client-side events is 500.

We decrease the item quantity after the batch is uploaded to ClickHouse. The alternative is either to check eventsItem.quantity - batchSize <= 0 before the upload and block the upsert based on it (which would result in users not getting their full limit), or to do some awkward partial batch upload. At most, with this approach, we would give users ~500 extra events. That is 0.5% extra on the free plan, so we judge it acceptable.

Frontend: banners pop up on analytics pages at 80%+ usage and when you hit your limit, with an option to upgrade your plan.

We considered using tryDecreaseQuantity instead to both check and debit items, but with that approach the debit could happen even when ClickHouse is down. That would be worse for users, so we accept the slight race condition.

Manual testing: on local, changed the event limits for the free and team plans to smaller numbers, then checked the analytics tables to verify no new events were added after the limit was reached.
We block creation of new session replays when the limit is hit. Session replay quota refreshes monthly.
(force-pushed 72c2829 to d7081a8)
We defer implementation until new onboarding flow is done.
Greptile Summary
This PR implements hard and soft plan limits across analytics events, session replays, analytics query timeouts, email sending, and auth users. It introduces
Most of the previously-flagged concerns have been addressed: the batch analytics check now uses
Confidence Score: 5/5
Safe to merge — all previously-raised P1 concerns have been addressed and only minor P2 style suggestions remain. The key correctness concerns from prior review rounds (batch quota mismatch, zero-timeout ClickHouse bypass, email retry bypass, fetchInvitations double-error) are resolved. Remaining findings are P2: a hardcoded price string and a question about equal email/replay limits between team and growth plans. packages/stack-shared/src/plans.ts — confirm growth emailsPerMonth and sessionReplays limits are intentionally equal to team's.
Important Files Changed
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[Request arrives] --> B{Which path?}
    B --> C[Client batch events /analytics/events/batch]
    B --> D[Session replay batch /session-replays/batch]
    B --> E[Analytics query /internal/analytics/query]
    B --> F[Send test email /internal/send-test-email]
    B --> G[Email queue worker processSingleEmail]
    B --> H[User signup / anon upgrade /users CRUD]
    C --> C1{tryDecreaseQuantity batch.length}
    C1 -- insufficient --> C2[400 ITEM_QUANTITY_INSUFFICIENT_AMOUNT]
    C1 -- ok --> C3[Write to ClickHouse]
    D --> D1{New session?}
    D1 -- existing session --> D3[Append chunk - no debit]
    D1 -- new session --> D2{tryDecreaseQuantity 1}
    D2 -- insufficient --> D4[400 ITEM_QUANTITY_INSUFFICIENT_AMOUNT]
    D2 -- ok --> D5[Create session row + S3 upload]
    E --> E1{"timeout quota > 0?"}
    E1 -- no --> E2[400 ITEM_QUANTITY_INSUFFICIENT_AMOUNT]
    E1 -- yes --> E3["effectiveTimeout = min(request, plan_limit)"]
    E3 --> E4[ClickHouse query with clamped timeout]
    F --> F1{tryDecreaseQuantity 1}
    F1 -- insufficient --> F2[400 ITEM_QUANTITY_INSUFFICIENT_AMOUNT]
    F1 -- ok --> F3[SMTP attempt]
    F3 -- error --> F4[increaseQuantity 1 refund]
    F3 -- success --> F5[Sent - debit kept]
    G --> G1{"sendRetries === 0 AND billingTeamId?"}
    G1 -- no --> G2[Skip quota check - SMTP retry]
    G1 -- yes --> G3{tryDecreaseQuantity 1}
    G3 -- insufficient --> G4[Mark SERVER_ERROR, keep sendRetries=0]
    G3 -- ok --> G5[SMTP send]
    H --> H1[runAsynchronouslyAndWaitUntil checkAuthUsersSoftLimit]
    H1 --> H2{"usage > capacity?"}
    H2 -- yes --> H3[captureError to Sentry - user still created]
    H2 -- no --> H4[No-op]
```
Prompt To Fix All With AI
This is a comment left during a code review.
Path: apps/dashboard/src/app/(main)/(protected)/(outside-dashboard)/projects/page-client.tsx
Line: 477-481
Comment:
**Hardcoded price may drift from seed config**
The strings `"$29/month"` and `"$29/mo"` are hardcoded in the UI copy and the button label, but the actual price is defined in `seed.ts` (`USD: "29"`). If the extra-seats price is updated in the config, the UI copy will silently become stale without a TS/build error to catch it.
How can I resolve this? If you propose a fix, please make it concise.
---
This is a comment left during a code review.
Path: packages/stack-shared/src/plans.ts
Line: 56-67
Comment:
**Growth and team plans share the same email and session replay limits**
`emailsPerMonth` and `sessionReplays` are identical for the team plan ($49/mo) and the growth plan ($299/mo, ~6x the price). The only differentiated limits are `analyticsTimeoutSeconds` (60 s vs 300 s) and `analyticsEvents` (500k vs 1M). If this is intentional product positioning, a brief comment here would make the intent clear and prevent future reviewers from treating it as an oversight.
How can I resolve this? If you propose a fix, please make it concise.

Reviews (9). Last reviewed commit: "refactor: reorg free plan regrant to be ..."
Pull request overview
This PR expands the plan/entitlements system to cover additional monthly quotas (analytics events, session replays, emails), enforces those quotas in backend ingestion paths, and adds dashboard UX + E2E coverage around approaching/exhausted limits.
Changes:
- Added new plan item IDs/limits (e.g., session replays) and updated seeded plan configs to use monthly repeating quotas.
- Implemented quota enforcement/clamping in backend endpoints (analytics query timeout, analytics event ingestion, session replay ingestion, email queue sending) plus team-wide entitlement helpers.
- Added dashboard warning banners and E2E tests for quota behavior (limits, debits, and timeout clamping).
Reviewed changes
Copilot reviewed 22 out of 22 changed files in this pull request and generated 11 comments.
Show a summary per file
| File | Description |
|---|---|
| packages/stack-shared/src/plans.ts | Adds new item IDs and plan limits (session replays/onboarding call). |
| apps/backend/prisma/seed.ts | Seeds monthly repeating quotas for events/emails/replays and new items. |
| apps/backend/src/app/api/latest/analytics/events/batch/route.tsx | Enforces/debits analytics event quota during batch ingestion. |
| apps/backend/src/app/api/latest/session-replays/batch/route.tsx | Enforces/debits session replay quota for new replays. |
| apps/backend/src/app/api/latest/internal/analytics/query/route.ts | Clamps analytics query timeout to plan entitlement and updates schema max. |
| apps/backend/src/lib/events.tsx | Adds analytics event quota enforcement/debit for server-side event logging. |
| apps/backend/src/lib/email-queue-step.tsx | Adds email quota enforcement/debit in queue sending path. |
| apps/backend/src/lib/plan-entitlements.ts | New helpers for billing-team resolution and team-wide capacity/usage aggregation. |
| apps/backend/src/lib/plan-entitlements.test.ts | Unit tests for new plan entitlement helpers. |
| apps/backend/src/app/api/latest/users/crud.tsx | Adds soft-limit monitoring for auth user capacity. |
| apps/backend/src/lib/payments.tsx | Clarifies subscription-fetch behavior via doc comment. |
| apps/backend/src/lib/payments.test.tsx | Adds regression tests (some marked it.fails) for subscription cancellation/renewal edge cases and add-on behavior. |
| apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/analytics/shared.tsx | Introduces limit warning banners for analytics events and session replays. |
| apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/analytics/{tables,queries,replays}/page-client.tsx | Renders the new analytics quota banners in analytics pages. |
| apps/dashboard/src/app/(main)/(protected)/(outside-dashboard)/projects/page-client.tsx | Updates admin seat invite UX and adds “extra seats” checkout flow. |
| apps/e2e/tests/backend/endpoints/api/v1/session-replays.test.ts | Adds E2E tests for session replay quota enforcement/debit behavior. |
| apps/e2e/tests/backend/endpoints/api/v1/analytics-events-batch.test.ts | Adds E2E tests for analytics event quota enforcement and allocation. |
| apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts | Adds E2E tests for timeout clamping by plan and updates schema max timeout test. |
| apps/e2e/tests/js/payments.test.ts | Adds a TODO block describing missing E2E coverage for renewal-after-cancel flow. |
| apps/backend/src/lib/stripe.tsx | Minor whitespace-only adjustment. |
Actionable comments posted: 16
🧹 Nitpick comments (2)
apps/backend/prisma/seed.ts (1)
133-136: Type the new monthly intervals instead of casting them.
`DayInterval` is already available in this file, so these new quota windows can share a typed `monthlyInterval` constant instead of bypassing the checker with `as any`. That keeps the billing seed schema-checked end to end.
♻️ Suggested cleanup

```diff
+const monthlyInterval: DayInterval = [1, "month"];
 ...
-  [ITEM_IDS.emailsPerMonth]: { quantity: PLAN_LIMITS.free.emailsPerMonth, repeat: [1, "month"] as any, expires: "when-repeated" as const },
+  [ITEM_IDS.emailsPerMonth]: { quantity: PLAN_LIMITS.free.emailsPerMonth, repeat: monthlyInterval, expires: "when-repeated" as const },
 ...
-  [ITEM_IDS.analyticsEvents]: { quantity: PLAN_LIMITS.free.analyticsEvents, repeat: [1, "month"] as any, expires: "when-repeated" as const },
+  [ITEM_IDS.analyticsEvents]: { quantity: PLAN_LIMITS.free.analyticsEvents, repeat: monthlyInterval, expires: "when-repeated" as const },
 ...
-  [ITEM_IDS.sessionReplays]: { quantity: PLAN_LIMITS.free.sessionReplays, repeat: [1, "month"] as any, expires: "when-repeated" as const },
+  [ITEM_IDS.sessionReplays]: { quantity: PLAN_LIMITS.free.sessionReplays, repeat: monthlyInterval, expires: "when-repeated" as const },
```

As per coding guidelines: "Do NOT use `as`/`any`/type casts or anything else to bypass the type system unless you specifically asked the user about it."
Also applies to: 155-158, 178-181
packages/stack-shared/src/plans.ts (1)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/prisma/seed.ts` around lines 133-136, replace the untyped monthly quota repeats that use `as any` with a properly typed `DayInterval` constant: create a `const monthlyInterval: DayInterval = [1, "month"]` and reuse it for all monthly quota entries (e.g., for ITEM_IDS.emailsPerMonth, ITEM_IDS.analyticsEvents, ITEM_IDS.sessionReplays and the other occurrences at the mentioned blocks) so you remove the casts and keep PLAN_LIMITS.free.* quotas schema-checked; update each place that currently uses `[1, "month"] as any` to reference `monthlyInterval` instead.
13-20: Keep `onboardingCall` in the shared plan model.
`ITEM_IDS` now exposes `onboardingCall`, but `PlanProductOfferings`/`PLAN_LIMITS` still omit it, so `apps/backend/prisma/seed.ts` has to hardcode that entitlement separately. That splits the source of truth for plan placement and makes tier changes easy to drift.
Also applies to: 28-35
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/stack-shared/src/plans.ts` around lines 13 - 20, ITEM_IDS added onboardingCall but the shared plan model structs are missing it; update PlanProductOfferings and the PLAN_LIMITS object to include an onboardingCall entry (use ITEM_IDS.onboardingCall as the key) and set the appropriate per-plan values consistent with other entitlements so apps/backend/prisma/seed.ts no longer needs to hardcode this entitlement. Locate PlanProductOfferings and PLAN_LIMITS in this file, add the onboardingCall property to the type/shape and include it in each plan record in PLAN_LIMITS, and then remove the special-case from the seed code so the seed reads the entitlement from the shared model.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/backend/src/app/api/latest/analytics/events/batch/route.tsx`:
- Around line 69-75: Preflight currently only checks eventsItem.quantity <= 0
which allows overshoot when a large or concurrent batch is written before debit;
update the logic around getBillingTeamId / app.getItem
(ITEM_IDS.analyticsEvents) to verify and reserve/debit the full batch count
atomically before performing the ClickHouse insert — at minimum reject when
eventsItem.quantity < body.events.length prior to write, and if you perform a
debit after write ensure you refund the decremented amount on insert failure
(tie to decreaseQuantity() or a new atomic reserve/debit helper to avoid race
conditions).
In `@apps/backend/src/app/api/latest/session-replays/batch/route.tsx`:
- Around line 114-121: The cap check is racey: instead of only reading
replaysItem.quantity (app.getItem for ITEM_IDS.sessionReplays) and later calling
decreaseQuantity, perform a reserve-first operation (atomically decrement
quantity by 1 only if quantity > 0) before creating the replay/session (i.e.
call the existing decreaseQuantity(1) or a conditional DB update for
ITEM_IDS.sessionReplays while checking result), then proceed to create
S3/chunk/session rows; if any subsequent write fails, immediately compensate by
incrementing the reserved slot (call increaseQuantity(1) or reverse the
conditional decrement) to release the reservation; apply the same pattern where
you currently check and debit (including the other occurrence around lines
214-217) to ensure race-safe reservations.
In `@apps/backend/src/lib/email-queue-step.tsx`:
- Around line 706-745: The quota check is happening after the success path which
allows races; modify the flow to atomically reserve one monthly email credit
before calling lowLevelSendEmailDirectWithoutRetries(): call the stack/store
method that decrements or reserves ITEM_IDS.emailsPerMonth (via
getStackServerApp()/emailItem or the service that exposes decreaseQuantity)
inside a single atomic operation and only proceed to
lowLevelSendEmailDirectWithoutRetries() if that reserve succeeds; on any send
failure (or if you abort before final success) refund the reserved credit
(increment/undo the decrement) and record the same SendAttemptError/updates to
globalPrismaClient.emailOutbox (same fields you currently set) so the database
reflects a quota-reserved-but-failed attempt; apply the same change to the other
occurrence around lines 854-858.
In `@apps/backend/src/lib/events.tsx`:
- Around line 282-293: The pre-check against eventsItem.quantity in
getStackServerApp()/app.getItem({ itemId: ITEM_IDS.analyticsEvents, teamId: ...
}) is insufficient for a hard cap because writes happen before quota is
consumed; change the flow to atomically reserve/consume the quota before
persisting events by invoking the quota debit (e.g., decreaseQuantity(1) or a
dedicated reserve method) using a conditional update that ensures quantity >= 1
(DB-level WHERE/optimistic lock) so concurrent writers cannot exceed the cap,
then perform the Postgres+ClickHouse writes; if any write fails, immediately
compensate by refunding the quota (e.g., increaseQuantity(1)) and log the
failure with captureError/StackAssertionError, and ensure the same pattern is
applied to the other affected blocks (the regions referenced at 295-316 and
408-413).
In `@apps/backend/src/lib/payments.test.tsx`:
- Around line 741-807: getSubscriptions currently injects the default (free)
plan as soon as a subscription is not "active", and getCustomerPurchaseContext
still treats canceled-but-not-expired subscriptions as non-owning; update the
ownership logic so that a subscription with status 'canceled' but
currentPeriodEnd > now is treated as still active for capacity/ownership
purposes (i.e., do not inject the default plan in getSubscriptions for those
cases and make getCustomerPurchaseContext's alreadyOwnsProduct consider such
subscriptions as owned until currentPeriodEnd). Locate and modify
getSubscriptions() and getCustomerPurchaseContext() in payments.tsx (and any
helpers those call) to check currentPeriodEnd against the current time before
injecting the default plan or excluding the canceled subscription from
alreadyOwnsProduct logic.
- Around line 752-807: The test using vi.setSystemTime in the it.fails block
leaves fake timers active because vi.useRealTimers() is after an expect that can
throw; change the cleanup so timers are always restored by wrapping the test
body in try/finally (call vi.useRealTimers() in finally) or remove the per-test
cleanup and add a global afterEach that calls vi.useRealTimers(); update the
failing test that calls getItemQuantityForCustomer and uses
vi.setSystemTime/vi.useRealTimers to use the try/finally pattern (or rely on the
shared afterEach) so fake timers are always reverted even if expect throws.
In `@apps/backend/src/lib/payments.tsx`:
- Around line 315-318: getCustomerPurchaseContext() is treating every
subscription from getSubscriptions() (which returns canceled too) as ownership
and triggers ProductAlreadyGranted; change the ownership derivation to consider
only active subscriptions by filtering subscriptions with
isActiveSubscription(subscription) (or otherwise excluding canceled/ended
statuses) before computing alreadyOwnsProduct so renewals/reactivations are
allowed. Update any logic in getCustomerPurchaseContext() that builds
alreadyOwnsProduct to iterate only over the filtered activeSubscriptions list.
In `@apps/dashboard/src/app/(main)/(protected)/(outside-dashboard)/projects/page-client.tsx`:
- Around line 202-205: The modal gates inputs and CTAs on a single boolean
invitationsLoaded, so if listInvitations() fails the dialog stays permanently
disabled; change the invitations state handling to track loading and error
separately (e.g., invitationsLoading and invitationsError alongside
invitations), default invitations to [] so activeSeats calculation (users.length
+ (invitations?.length ?? 0)) works even on error, compute atCapacity using
seatLimit but only while invitationsLoading is true (or treat error as
non-blocking), and surface invitationsError in the UI with a retry action that
re-calls listInvitations(); update references to invitationsLoaded, activeSeats,
seatLimit, atCapacity and ensure inputs/footer CTA enable when not loading (even
if invitationsError is set) and show an inline error + retry button.
In `@apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/analytics/replays/page-client.tsx`:
- Around line 1388-1390: The PanelGroup is using a fixed height
(!h-[calc(100vh-180px)]) which causes the replay workspace to overflow when
AnalyticsEventLimitBanner or SessionReplayLimitBanner render; change the layout
so the container is a flex column and make PanelGroup consume remaining space
(e.g., remove the fixed calc height and replace it with a flex-grow class like
flex-1 and keep min-h-[520px] and existing border/rounded classes) so the
banners can expand without pushing the bottom controls offscreen; update the JSX
around PanelGroup and ensure parent containers use flex flex-col if needed and
references to AnalyticsEventLimitBanner and SessionReplayLimitBanner remain
unchanged.
In `@apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/analytics/shared.tsx`:
- Around line 324-327: The resolvePlanId function currently collapses any
non-team/growth product into "free", which misclassifies unknown paid plans;
update resolvePlanId(products) to detect three outcomes—explicit "growth",
explicit "team", and a distinct "unknownPaid" (or null/undefined) when a paid
subscription exists but its id isn't recognized—and only return "free" when no
paid subscription is present; adjust the PlanId type signature accordingly
(e.g., include "unknownPaid" or make it nullable) and ensure callers of
resolvePlanId (the banner/upgrade logic) handle the new unknown-paid result
instead of treating it as free.
- Around line 334-349: The banner components (AnalyticsEventLimitBanner and
SessionReplayLimitBanner) incorrectly try to locate the project owner team by
searching the dashboard user's teams (user.useTeams()), which returns internal
dashboard memberships and will often not contain the project's owner team;
change each to fetch the team from the admin app context by calling
adminApp.useTeam(project.ownerTeamId) (and remove the useMemo with teams.find),
then pass that returned team to
AnalyticsEventLimitBannerInner/SessionReplayLimitBannerInner and handle null the
same way as before.
In `@apps/e2e/tests/backend/endpoints/api/v1/analytics-events-batch.test.ts`:
- Around line 544-562: The test currently sets the project's remaining quota to
0 which only validates the exhausted-quota path; change the setup in the test
(use setupProjectWithPlan, Auth.Otp.signIn, setEventItemQuantity,
uploadEventBatch) so setEventItemQuantity(ownerTeamId, 1) instead of 0 and then
send the same 2-event batch (the uploadEventBatch call) so the test exercises
the "remaining quota less than batch size" condition while keeping the same
assertions (expect res.status 400 and res.body.code
"ITEM_QUANTITY_INSUFFICIENT_AMOUNT").
- Around line 442-454: The current code sets backendContext to
InternalProjectKeys before calling niceBackendFetch and only restores savedKeys
after success; move the assignment and grant call into a try block and restore
the original keys in a finally block so backendContext.set({ projectKeys:
savedKeys }) always runs even if niceBackendFetch throws or grantResponse.status
!== 200; update the block that references planId, savedKeys, backendContext.set,
InternalProjectKeys, grantResponse and niceBackendFetch accordingly.
In `@apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts`:
- Around line 23-35: The test temporarily replaces backendContext.projectKeys
with InternalProjectKeys before calling niceBackendFetch; if the grant request
fails or throws the savedKeys are never restored, so wrap the grant request and
its status check in a try/finally (using backendContext.set({ projectKeys:
savedKeys }) in the finally) to guarantee restoration of the original keys (and
rethrow the error or preserve failing behavior after restoring); reference
backendContext, InternalProjectKeys, niceBackendFetch, grantResponse,
ownerTeamId, and planId when locating where to apply the try/finally.
In `@apps/e2e/tests/backend/endpoints/api/v1/session-replays.test.ts`:
- Around line 1484-1515: The test "does not debit quota when appending chunks to
an existing session replay" currently never verifies the "append after cap"
contract; after the first successful upload you should set the team's quota for
session_replays to 0 (use Project.updateConfig or the helper that alters
apps.installed.analytics.quota/session_replays), then call uploadBatch again
with the same session_replay flow (reuse firstBatch.body.session_replay_id via
uploadBatch inputs or assert equality) and assert the second response.status is
200 and returns the same session_replay_id and that
getSessionReplayItemQuantity(ownerTeamId) did not decrease; update the
assertions around uploadBatch, getSessionReplayItemQuantity,
Project.updateConfig and any usage of randomUUID to keep the test deterministic.
In `@apps/e2e/tests/js/payments.test.ts`:
- Around line 414-431: Add a real E2E that simulates canceling a subscription
then re-purchasing before the billing period ends to guard the change: write a
test in payments.test.ts that creates a subscription, calls the cancel API to
set cancel_at_period_end: true, then runs the client purchase/reactivate flow
(the same codepath used by the JS client) and assert the system reactivates the
existing subscription (cancel_at_period_end becomes false), does not create a
new subscription (same subscription id), and that the client-side helper
alreadyOwnsProduct returns false for the canceled-but-not-expired state so the
reactivate path is taken; tie assertions to the existing functions/flows
alreadyOwnsProduct and the purchase/switch code path to ensure regression
coverage.
---
Nitpick comments:
In `@apps/backend/prisma/seed.ts`:
- Around line 133-136: Replace the untyped monthly quota repeats that use "as
any" with a properly typed DayInterval constant: create a const monthlyInterval:
DayInterval = [1, "month"] and reuse it for all monthly quota entries (e.g., for
ITEM_IDS.emailsPerMonth, ITEM_IDS.analyticsEvents, ITEM_IDS.sessionReplays and
the other occurrences at the mentioned blocks) so you remove the casts and keep
PLAN_LIMITS.free.* quotas schema-checked; update each place that currently uses
[1, "month"] as any to reference monthlyInterval instead.
In `@packages/stack-shared/src/plans.ts`:
- Around line 13-20: ITEM_IDS added onboardingCall but the shared plan model
structs are missing it; update PlanProductOfferings and the PLAN_LIMITS object
to include an onboardingCall entry (use ITEM_IDS.onboardingCall as the key) and
set the appropriate per-plan values consistent with other entitlements so
apps/backend/prisma/seed.ts no longer needs to hardcode this entitlement. Locate
PlanProductOfferings and PLAN_LIMITS in this file, add the onboardingCall
property to the type/shape and include it in each plan record in PLAN_LIMITS,
and then remove the special-case from the seed code so the seed reads the
entitlement from the shared model.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2044892f-4171-45ef-af2b-d5be4ca1bbe3
📒 Files selected for processing (22)
- apps/backend/prisma/seed.ts
- apps/backend/src/app/api/latest/analytics/events/batch/route.tsx
- apps/backend/src/app/api/latest/internal/analytics/query/route.ts
- apps/backend/src/app/api/latest/session-replays/batch/route.tsx
- apps/backend/src/app/api/latest/users/crud.tsx
- apps/backend/src/lib/email-queue-step.tsx
- apps/backend/src/lib/events.tsx
- apps/backend/src/lib/payments.test.tsx
- apps/backend/src/lib/payments.tsx
- apps/backend/src/lib/plan-entitlements.test.ts
- apps/backend/src/lib/plan-entitlements.ts
- apps/backend/src/lib/stripe.tsx
- apps/dashboard/src/app/(main)/(protected)/(outside-dashboard)/projects/page-client.tsx
- apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/analytics/queries/page-client.tsx
- apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/analytics/replays/page-client.tsx
- apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/analytics/shared.tsx
- apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/analytics/tables/page-client.tsx
- apps/e2e/tests/backend/endpoints/api/v1/analytics-events-batch.test.ts
- apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
- apps/e2e/tests/backend/endpoints/api/v1/session-replays.test.ts
- apps/e2e/tests/js/payments.test.ts
- packages/stack-shared/src/plans.ts
💤 Files with no reviewable changes (1)
- apps/backend/src/lib/stripe.tsx
🧹 Nitpick comments (1)
apps/backend/src/lib/payments/ensure-free-plan.test.ts (1)
16-189: Good coverage of the fast/slow/race/regression matrix. Test design is deliberate and matches the production code's mental model:
- Seeding via `bulldozerWriteSubscription` only (not Prisma) is the right choice for the fast-path assertions — the fast path reads Bulldozer, so Bulldozer-only seeds exercise exactly the "already owns a base plan" branch. The slow-path test with no seed correctly finds an empty Prisma and creates the free sub.
- The `incomplete` regression test pins the behaviour the endedAt-based predicate was written to preserve; this is the bug that would silently re-open if someone ever "simplified" the predicate back to a status filter. Worth keeping.
- The `Promise.all` concurrency test validates the SERIALIZABLE retry path end-to-end.
- `describe.sequential` is appropriate given the shared internal tenancy.

Optional nit: since `randomUUID()` billing-team IDs leave real rows in the internal tenancy's Subscription table after each run, a bulk cleanup in an `afterAll` (e.g. `deleteMany` for the UUIDs created during the suite) would keep the dev DB tidy across reruns — not a correctness concern, just housekeeping.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/lib/payments/ensure-free-plan.test.ts` around lines 16 - 189, Tests leave real Subscription rows in the internal tenancy because seedSub uses randomUUID() for billingTeam IDs; add teardown to delete those rows after the suite: collect created billingTeamId values (or subscription IDs) when calling seedSub/when generating randomUUIDs, then in an afterAll hook call getPrismaClientForTenancy(tenancy) (use getInternal() to obtain tenancy/prisma) and run prisma.subscription.deleteMany({ where: { tenancyId: tenancy.id, customerId: { in: [/* collected billingTeamIds */] } } }) to remove the test rows; ensure cleanup references the seedSub/randomUUID identifiers and runs regardless of test outcomes.
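A minimal sketch of the suggested teardown pattern, without a real test runner or Prisma client. `seedSub`, `cleanupSubscriptions`, and the `Map` standing in for the Subscription table are all hypothetical; the real cleanup would call `prisma.subscription.deleteMany` inside an `afterAll` hook.

```typescript
// Track every billing-team ID the suite creates so a single bulk delete can
// remove the corresponding rows afterwards, regardless of test outcomes.
const createdBillingTeamIds: string[] = [];

// Stand-in for the suite's seedSub helper: records the ID before writing.
function seedSub(db: Map<string, string>, billingTeamId: string): void {
  createdBillingTeamIds.push(billingTeamId);
  db.set(billingTeamId, "subscription-row");
}

// Stand-in for the afterAll body: bulk-delete everything that was seeded,
// analogous to deleteMany({ where: { customerId: { in: [...] } } }).
function cleanupSubscriptions(db: Map<string, string>): number {
  let deleted = 0;
  for (const id of createdBillingTeamIds) {
    if (db.delete(id)) deleted += 1;
  }
  createdBillingTeamIds.length = 0; // reset so reruns start clean
  return deleted;
}
```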
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4325336c-2080-470a-a4cb-05c2cdb384a9
📒 Files selected for processing (21)
- apps/backend/prisma/seed.ts
- apps/backend/src/app/api/latest/analytics/events/batch/route.tsx
- apps/backend/src/app/api/latest/integrations/stripe/webhooks/route.tsx
- apps/backend/src/app/api/latest/internal/analytics/query/route.tsx
- apps/backend/src/app/api/latest/internal/send-test-email/route.tsx
- apps/backend/src/app/api/latest/payments/products/[customer_type]/[customer_id]/[product_id]/route.ts
- apps/backend/src/app/api/latest/payments/products/[customer_type]/[customer_id]/route.ts
- apps/backend/src/app/api/latest/payments/products/[customer_type]/[customer_id]/switch/route.ts
- apps/backend/src/app/api/latest/session-replays/batch/route.tsx
- apps/backend/src/app/api/latest/teams/crud.tsx
- apps/backend/src/app/api/latest/users/crud.tsx
- apps/backend/src/lib/email-queue-step.tsx
- apps/backend/src/lib/events.tsx
- apps/backend/src/lib/payments.tsx
- apps/backend/src/lib/payments/ensure-free-plan.test.ts
- apps/backend/src/lib/payments/ensure-free-plan.ts
- apps/backend/src/lib/plan-entitlements.test.ts
- apps/backend/src/lib/plan-entitlements.ts
- apps/backend/src/lib/sign-up-rules.ts
- apps/backend/src/lib/stripe.tsx
- apps/backend/src/lib/tokens.tsx
✅ Files skipped from review due to trivial changes (1)
- apps/backend/src/lib/tokens.tsx
🚧 Files skipped from review as they are similar to previous changes (11)
- apps/backend/src/app/api/latest/integrations/stripe/webhooks/route.tsx
- apps/backend/src/lib/stripe.tsx
- apps/backend/src/app/api/latest/internal/send-test-email/route.tsx
- apps/backend/src/lib/payments.tsx
- apps/backend/src/app/api/latest/payments/products/[customer_type]/[customer_id]/[product_id]/route.ts
- apps/backend/src/lib/email-queue-step.tsx
- apps/backend/src/app/api/latest/users/crud.tsx
- apps/backend/src/app/api/latest/internal/analytics/query/route.ts
- apps/backend/prisma/seed.ts
- apps/backend/src/lib/plan-entitlements.ts
- apps/backend/src/lib/plan-entitlements.test.ts
Force-pushed from f8533a8 to 7e6492c
They seem to be failing because the fallback path was not exercised by the backend `StackServerApp`. The fallback path is `_withFallback()`, which lets the value default when there is an issue.
Suggested Review Areas
Please see `plans.ts` and `seed.ts` to verify whether the item caps are where they should be. Outside of that, each commit should be atomic, so stepping through the commits should give you an idea of how I implemented each limit.

Discussion
Something to discuss: when a user cancels Team/Growth, we re-grant Free just fine, but any extra seats they had just keep billing. So they end up paying ~$29/mo per extra seat on top of Free's 1 seat, which is strictly worse than just staying on Team. This surfaced while manually testing this PR: we only enforce the add-on base requirement at purchase time, and nothing cascades on cancel. Should we cascade-cancel add-ons?
Context
Now that we have a stable suite of products for stack-auth, we want to limit the items under each product that a customer has access to, based on their plan. For example, a free-plan user can send only a certain number of emails each month, and so on. This PR implements those limits.
Summary of Changes
Implemented hard limits for dashboard admins, analytics per-query timeouts, monthly sent-email capacity, events, and session replays. Implemented a soft cap for auth users (where, if there's a sign-up beyond the limit, we log it to Sentry so we can manually choose to email that user/team).
For auth users, we do not block new user sign-ups once the plan limit has been hit, and we don't degrade or otherwise impact the customer experience. The overflow is logged to Sentry, and it is up to us to take manual action and email the user to upgrade the plan. Implementation-wise, we count all the users across all the projects for this team and compare the total to their plan item limit, rather than debiting items as we do for the other limits. For a soft cap this should be fine, and the live count is a better source of truth.
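The count-and-compare soft cap described above can be sketched as follows. `checkAuthUserSoftCap` and the injected `report` callback are hypothetical names, not the production code; the real implementation reports to Sentry.

```typescript
// Map of project ID -> number of auth users in that project, for one team.
type ProjectUserCounts = Map<string, number>;

// Soft cap: never blocks the sign-up, only reports when the team-wide total
// exceeds the plan's auth-user limit. Returns true when within the limit.
function checkAuthUserSoftCap(
  counts: ProjectUserCounts,
  planUserLimit: number,
  report: (msg: string) => void,
): boolean {
  // Sum users across every project owned by the team.
  let total = 0;
  for (const c of counts.values()) total += c;
  if (total > planUserLimit) {
    // Over the limit: log for manual follow-up instead of failing the sign-up.
    report(`team exceeded auth user limit: ${total}/${planUserLimit}`);
    return false;
  }
  return true;
}
```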
For email capacity, we operate a monthly limit of emails. Once this is hit, no more emails can be sent until the next month or a plan upgrade. These emails are treated as a send error, so they can be manually resent once the capacity is reset. With respect to the `email-queue` state engine, they go from `SENDING` -> `SERVER_ERROR`, hooking into the existing state-engine flow, with an external error that shows it's because of the rate limit. This is cleaner than inventing a new state that is, for all intents and purposes, identical to `SERVER_ERROR`. We check in `processSingleEmail` since that maps to the sending state.

For analytics query timeouts, the backend route accepts a timeout parameter with the request. We implement the timeout for each query by taking `min(request_timeout, plan_timeout)` and using that; this determines how long a query can run.

For analytics events, there are server-side events (like refresh-token refreshes or sign-up rule triggers) and client-side events (like page views or clicks). When these events occur, they are written to the events table in ClickHouse. We chose to implement a hard cap for the total events, not just server-side or client-side. Once the cap is hit, we stop storing the events and display a banner on the analytics page. A different banner renders when we are at >=80% of total plan capacity.
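Two of the mechanics above, the per-query timeout clamp and the capacity banners, can be sketched with assumed names. The 80% warning threshold comes from the description; everything else here is illustrative, not the dashboard's real API.

```typescript
type BannerState = "none" | "approaching-limit" | "at-limit";

// A query may never run longer than the plan allows, even if the request
// asks for more: effective timeout = min(request_timeout, plan_timeout).
function effectiveTimeoutMs(requestTimeoutMs: number, planTimeoutMs: number): number {
  return Math.min(requestTimeoutMs, planTimeoutMs);
}

// Banner selection for the analytics page: a hard-cap banner at 100% of the
// plan's event capacity, a warning banner from 80% upwards.
function eventBannerState(used: number, planLimit: number): BannerState {
  if (used >= planLimit) return "at-limit"; // cap hit: new events are dropped
  if (used >= 0.8 * planLimit) return "approaching-limit"; // warning banner
  return "none";
}
```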
For session replays, we stop creating new session replays when the limit is hit. Old replays can still have chunks appended to them. The source of truth here is the session-replay table: a new replay corresponds to a new row in the table. We have banners similar to the events ones.
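The replay-cap semantics can be sketched as follows, with all names hypothetical: creating a new replay (a new row) consumes quota and can be blocked, while appending chunks to an existing replay never is.

```typescript
class ReplayStore {
  private replays = new Map<string, string[]>();
  private remainingQuota: number;

  constructor(quota: number) {
    this.remainingQuota = quota;
  }

  // A new replay is a new row: debit quota first, refuse when the cap is hit.
  createReplay(id: string): boolean {
    if (this.remainingQuota <= 0) return false;
    this.remainingQuota -= 1;
    this.replays.set(id, []);
    return true;
  }

  // Existing replays keep accepting chunks regardless of the cap.
  appendChunk(id: string, chunk: string): boolean {
    const chunks = this.replays.get(id);
    if (!chunks) return false; // unknown replay: nothing to append to
    chunks.push(chunk);
    return true;
  }
}
```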
The dashboard-admin cap should be 4 for both the Team plan and the unlimited plan.
Implementation Caveats
For debiting items across these limits, we now use `tryDecreaseQuantity` at the beginning. This means we debit first, if possible, before conducting the action (like writing events to ClickHouse). In practice, this means that if ClickHouse fails, the user is debited for something that didn't happen. However, building a refund workaround would be very clunky, and ClickHouse is reliable. For debits that are very small in the order of things (say, 200 items on a 100k plan), it doesn't mean much.

For emails, we don't debit items if it's a retry. This prevents the user from being charged multiple times for effectively one email.
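The debit-first pattern with the retry exemption might look like the sketch below. `makeQuota` and `sendEmail` are illustrative stand-ins; `tryDecreaseQuantity` here mirrors only the described semantics of the real helper.

```typescript
// A tiny in-memory quota with the described try-to-debit semantics.
function makeQuota(initial: number) {
  let remaining = initial;
  return {
    tryDecreaseQuantity(n: number): boolean {
      if (remaining < n) return false; // not enough quota: reject up front
      remaining -= n;
      return true;
    },
    get remaining() { return remaining; },
  };
}

function sendEmail(
  quota: ReturnType<typeof makeQuota>,
  isRetry: boolean,
  send: () => void,
): "sent" | "quota-exceeded" {
  // Debit first; retries were already debited on the original attempt,
  // so one email never charges the customer twice.
  if (!isRetry && !quota.tryDecreaseQuantity(1)) return "quota-exceeded";
  send(); // if this fails, the debit is not refunded (accepted trade-off)
  return "sent";
}
```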
UI Changes
The only UI changes in this PR are banners that render in analytics when a customer is approaching, or is at, their monthly limit of session replays or events.
Out of Scope for this PR
We do not have metered pricing yet, so events, session replays, and email use beyond the limits cannot be charged yet. This is why this implementation relies on hard and soft caps.
We do not implement per-transaction payment pricing yet. That is deferred to a follow-up PR.
The UI for the onboarding call will be set up as part of the overall onboarding flow which doesn't exist yet, so it has been deferred.
Since the UI for the dashboard home page and project/account settings is currently being reworked, finding a better spot for plan upgrades is not handled in this PR.
Summary by CodeRabbit
New Features
Behavior Changes
Tests