
feat: team + operator + seed + health endpoints#28

Merged
Gradata merged 7 commits into main from worktree-dashboard-nextjs-rebuild
Apr 13, 2026

Conversation

Owner

@Gradata Gradata commented Apr 13, 2026

Summary

Wave 3 of the dashboard rebuild — fills the remaining backend gaps so dashboard widgets are backed by real data, plus polish (404 page, error boundary, /ready endpoint).

Backend (115 tests, up from 78 → +37)

  • Team management (/api/v1/workspaces/{id}/members, /invites, role updates) — 11 tests, RLS-aware, EmailStr validation, secrets-based invite tokens with 7-day expiry
  • Operator god-mode (/api/v1/admin/{global-kpis,customers,alerts}) — 15 tests, gated by OPERATOR_DOMAINS = {gradata.ai, sprites.ai} via JWT email claim with DB fallback. Pulls MRR/ARR/churn/NRR from workspaces; alerts derived from inactivity + Stripe sub status + correction spike heuristics
  • Demo seed on signup (migration 004 + POST /brains/{id}/clear-demo) — 7 tests. Function seed_demo_brain(workspace_id, user_id) creates 1 brain, 8 lessons (6-dim taxonomy across 3 states), 25 corrections (Wozniak decay shape), 4 meta-rules, 6 events, all flagged is_demo
  • /ready endpoint with DB ping + Railway SHA + uptime — 4 tests. Pairs with /health (cheap liveness)

Dashboard

  • Branded app/not-found.tsx (V3-themed 404)
  • Top-level app/error.tsx error boundary that captures to Sentry + shows digest id

Migrations applied to live Supabase

  • 003_workspace_invites — table + RLS policies for team invites
  • 004 (partial) — schema columns (brains.metadata, corrections.data, lessons.data, meta_rules.data)
  • Migration 004's seed_demo_brain function + handle_new_user replacement intentionally NOT applied yet (changes signup behavior — apply when ready to roll out demo seeding for new signups)

Manual data seed (live now)

Inserted 8 lessons / 25 corrections / 4 meta-rules / 6 events into Oliver's existing brain (afd6fe39-...) so the live dashboard at gradata-dashboard.pages.dev renders populated widgets when he signs in.

Test plan

  • 115/115 backend tests pass (78 → 115, +37)
  • Dashboard build passes (25 routes)
  • Schema migration 003 applied to live Supabase
  • Schema additions from 004 applied to live Supabase
  • Existing user's brain seeded with demo data (verified via SQL counts)
  • Deploy this PR's backend (Railway auto-deploys on merge)
  • Apply seed_demo_brain function + handle_new_user trigger replacement when ready

Generated with Gradata

Oliver Le and others added 6 commits April 13, 2026 06:38
- /health stays minimal liveness probe (no DB hit, always 200 if alive)
- /ready new readiness endpoint pings Supabase, reports release SHA from
  RAILWAY_GIT_COMMIT_SHA, uptime, environment, db_latency_ms
- Status field reflects truth: 'ready' on db ok, 'unavailable' on db
  failure (HTTP 200 either way so monitors can read the body)
- 4 new tests (82/82 backend pass)

Useful for: external uptime monitoring (Better Stack / UptimeRobot point
at /ready, alert on body.status != 'ready'), debugging deploys (curl to
verify which SHA is live), readiness-aware load balancers later.
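As a rough illustration of that contract (not the actual route code), a /ready handler along these lines matches the description — the DB ping is stubbed here since the real app goes through its Supabase wrapper:

```python
import os
import time

from fastapi import APIRouter

router = APIRouter()
_started = time.monotonic()

async def _ping_db() -> None:
    """Stub for the real Supabase ping the app performs via its db wrapper."""
    return None

@router.get("/ready")
async def ready() -> dict:
    """Readiness probe: DB ping, latency, release SHA, uptime. Always HTTP 200,
    so monitors alert on body.status rather than on transport errors."""
    t0 = time.monotonic()
    try:
        await _ping_db()
        status = "ready"
    except Exception:
        status = "unavailable"
    return {
        "status": status,
        "db_latency_ms": round((time.monotonic() - t0) * 1000, 1),
        "release": os.getenv("RAILWAY_GIT_COMMIT_SHA", "unknown"),
        "uptime_s": round(time.monotonic() - _started, 1),
        "environment": os.getenv("ENVIRONMENT", "unknown"),
    }
```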

Co-Authored-By: Gradata <noreply@gradata.ai>
Adds /workspaces/{id}/members (list), /workspaces/{id}/invites (create),
DELETE/PATCH /workspaces/{id}/members/{user_id} so the dashboard team page
can replace mockTeam fixtures with real data.

- New route file cloud/app/routes/team.py with member/admin/owner role gating
- New migration 003_workspace_invites.sql (table + RLS + indexes)
- Pydantic MemberResponse/InviteRequest/InviteResponse/UpdateRoleRequest models
- SupabaseClient gains async delete()
- 11 new tests (48 -> 59) covering happy paths, 403 for non-members and
  member role, 400 when removing the owner or assigning owner via PATCH,
  and pydantic email validation

Permission decisions: owner cannot be removed and owner role cannot be
assigned through PATCH (separate ownership-transfer flow needed). Mock
SupabaseClient also gains a delete() stub so other tests stay green.
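For illustration, the token and expiry mechanics described above can be sketched like this; column names beyond token, expires_at, and invited_by are assumptions, not the exact schema:

```python
import secrets
from datetime import datetime, timedelta, timezone

INVITE_TTL = timedelta(days=7)

def build_invite_row(workspace_id: str, email: str, role: str, invited_by: str) -> dict:
    """Assemble a workspace_invites row: unguessable token, 7-day expiry."""
    return {
        "workspace_id": workspace_id,
        "email": email,              # validated upstream by pydantic's EmailStr
        "role": role,                # 'admin' or 'member'; 'owner' is never assignable here
        "invited_by": invited_by,
        "token": secrets.token_urlsafe(32),  # 32 bytes, ~256 bits of entropy
        "expires_at": (datetime.now(timezone.utc) + INVITE_TTL).isoformat(),
    }
```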
Adds /admin/global-kpis, /admin/customers, /admin/alerts gated behind an
email-domain allowlist (@gradata.ai, @sprites.ai). Powers the dashboard
operator page (currently mock-backed).

- New route: cloud/app/routes/operator.py
- New auth dep: require_operator resolves email from JWT claim first,
  falls back to auth.users lookup
- New Pydantic models: GlobalKpis, AdminCustomer, AdminAlert
- 15 tests covering access control, KPI aggregation, health buckets,
  sort + pagination, and derived alerts

Stripe subscription status is read from the workspaces.subscription_status
column populated by the existing billing webhook — no direct Stripe API call
yet (TODO noted in code).
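A minimal sketch of the domain gate; the real dependency also wires in verify_jwt_claims and the auth.users DB fallback, both elided here:

```python
from fastapi import HTTPException

OPERATOR_DOMAINS = {"gradata.ai", "sprites.ai"}

async def require_operator(claims: dict) -> dict:
    """Reject callers whose resolved email is missing or outside the allowlist."""
    email = claims.get("email")  # real code falls back to an auth.users lookup
    if not email or email.rsplit("@", 1)[-1].lower() not in OPERATOR_DOMAINS:
        raise HTTPException(status_code=403, detail="Operator access required")
    return claims
```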
New signups drop off when the dashboard renders empty. Migration 004 adds
seed_demo_brain(workspace_id, user_id) that creates 1 demo brain with 8
lessons across the 6-dim taxonomy (2 INSTINCT / 3 PATTERN / 3 RULE), 25
corrections in a Wozniak decay shape over 30 days, 4 meta-rules, and 6
events. handle_new_user is replaced to call it. Every row carries an
is_demo marker so users can purge via POST /brains/{id}/clear-demo.
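The decay shape is easy to picture with a toy generator. This is an assumed interpretation (correction frequency tapering off over the 30-day window), not the SQL the migration actually uses:

```python
import random
from datetime import datetime, timedelta, timezone

def demo_correction_times(n: int = 25, days: int = 30, mean_day: float = 8.0) -> list[datetime]:
    """Front-loaded timestamps: many corrections early in the window, few late."""
    start = datetime.now(timezone.utc) - timedelta(days=days)
    out: list[datetime] = []
    while len(out) < n:
        offset = random.expovariate(1.0 / mean_day)  # exponentially decaying frequency
        if offset < days:  # resample the rare draws that fall past the window
            out.append(start + timedelta(days=offset))
    return sorted(out)
```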

Co-Authored-By: Gradata <noreply@gradata.ai>
- app/not-found.tsx: V3-themed 404 with CTAs to /dashboard or /login
- app/error.tsx: client error boundary that captures to Sentry, shows
  the digest id (matches Sentry event for support), and offers retry +
  back-to-overview

Co-Authored-By: Gradata <noreply@gradata.ai>
Schema (001_initial_schema.sql line 51) uses brain_name; agent referenced
the wrong column. Spotted before applying migration to live Supabase.

Co-Authored-By: Gradata <noreply@gradata.ai>



coderabbitai Bot commented Apr 13, 2026

📝 Walkthrough

Summary

Backend Features:

  • Team management API: 4 new endpoints (GET /members, POST /invites, DELETE /members/{id}, PATCH /members/{id}) with RLS-aware authorization, EmailStr validation, and 7-day expiring invite tokens
  • Operator admin endpoints: 3 new "god-mode" endpoints (GET /admin/global-kpis, /admin/customers, /admin/alerts) restricted to gradata.ai and sprites.ai domains via JWT email claim, with MRR/ARR aggregation, churn/NRR calculations, and inactivity/payment-status alerts
  • Demo brain seeding: Migration 004 adds seed_demo_brain() function creating 1 demo brain, 8 lessons, 25 corrections (with Wozniak decay), 4 meta-rules, and 6 events on new user signup; POST /brains/{id}/clear-demo endpoint purges demo data
  • Health/readiness endpoints: Enhanced /health with typed response (status, service, version); new /ready endpoint reporting DB latency, Railway SHA (RAILWAY_GIT_COMMIT_SHA), uptime, and environment
  • Supabase schema updates: Migration 003 creates workspace_invites table with token-based invite flow and RLS policies; Migration 004 adds metadata/data JSONB columns to brains, corrections, lessons, and meta_rules for demo tracking

Dashboard:

  • Branded 404 page (not-found.tsx) and global error boundary (error.tsx) capturing to Sentry with digest ID display

Database:

  • New workspace_invites table with 7-day expiring tokens, role constraints (admin/member only), and RLS enforcement
  • Schema extensions: brains.metadata, corrections.data, lessons.data, meta_rules.data (all JSONB for is_demo flagging)

Tests & Validation:

  • 115 backend tests total (+37 new): team endpoints (11), operator endpoints (15), health/ready (4), demo seed (7)
  • Security: JWT email allowlist verification for operators with DB fallback; RLS policies on invites; EmailStr validation on invitations

Breaking Changes:

  • None to existing API contracts; verify_jwt() wrapper maintained with same return type

Walkthrough

This pull request introduces comprehensive workspace team management, operator/admin analytics features, and demo brain seeding functionality. It extends authentication to support operator domains, adds database migrations for workspace invites and demo brain initialization, implements multiple new API routes for team member management and admin analytics, enhances health/readiness probing, and includes extensive test coverage alongside documentation.

Changes

  • Auth & Access Control (cloud/app/auth.py, cloud/app/db.py): Added operator domain allowlist (OPERATOR_DOMAINS), a require_operator dependency for email-based authorization, refactored JWT verification to separate verify_jwt_claims(...) returning full claims from the wrapper verify_jwt(...) (see the sketch after this list), and implemented _resolve_user_email(...) for email lookup. Added a SupabaseClient.delete(...) method for issuing DELETE requests with equality filters.
  • Data Models (cloud/app/models.py): Introduced role enums (MemberRole, InviteRole), member/invite request/response models (MemberResponse, InviteRequest, InviteResponse, UpdateRoleRequest), and admin dashboard aggregates (GlobalKpis, AdminCustomer, AdminAlert) for KPI tracking and customer analytics.
  • Database Schema (cloud/migrations/003_workspace_invites.sql, cloud/migrations/004_seed_demo_brain.sql): Created the workspace_invites table with token-based invite tracking and RLS policies. Extended the brains, corrections, lessons, meta_rules, and events tables with JSON metadata/data columns. Implemented the seed_demo_brain(...) PL/pgSQL function to populate demo content on signup and refactored the handle_new_user() trigger to auto-seed demos.
  • Routes & Endpoints (cloud/app/routes/team.py, cloud/app/routes/operator.py, cloud/app/routes/brains.py, cloud/app/routes/__init__.py): Added team routes for member listing, invite creation, member removal, and role updates. Implemented operator-gated admin endpoints for global KPIs, a customer list with health classification, and churn/payment/usage-spike alerts. Added the /brains/{brain_id}/clear-demo endpoint with per-table deletion tracking.
  • Health & Readiness (cloud/app/routes/health.py): Replaced the untyped /health endpoint with a HealthResponse model and added a /ready endpoint with database probing, latency measurement, and readiness status (ready/degraded/unavailable).
  • Dashboard UI (cloud/dashboard/app/error.tsx, cloud/dashboard/app/not-found.tsx): Added a global error boundary component with Sentry integration and a 404 page with navigation links to dashboard/login.
  • Testing (cloud/tests/conftest.py, cloud/tests/test_health.py, cloud/tests/test_operator.py, cloud/tests/test_seed_demo.py, cloud/tests/test_team.py): Added a MockSupabaseClient.delete() method, plus tests for health/ready endpoints, operator access control and KPI logic, demo brain seeding and clear-demo functionality, and team member management workflows.
  • Documentation (cloud/SUPABASE-SETUP.md): Added operational documentation for the demo brain migration, clear-demo endpoint usage, local verification steps, and teardown procedures.
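The sketch referenced in the Auth & Access Control item above: a minimal illustration of how the verify_jwt(...) wrapper can keep its original return type while verify_jwt_claims(...) exposes the full payload. The HS256 path and the shared secret are assumptions; the real module also supports JWKS.

```python
from jose import jwt as jose_jwt  # python-jose, which the app already uses

SECRET = "dev-secret"  # assumption: HS256 shared secret for the sketch

async def verify_jwt_claims(token: str) -> dict:
    """Decode and validate the JWT, returning the full claims dict."""
    return jose_jwt.decode(token, SECRET, algorithms=["HS256"])

async def verify_jwt(token: str) -> str:
    """Back-compat wrapper: existing callers keep receiving just the subject id."""
    claims = await verify_jwt_claims(token)
    return claims["sub"]
```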

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested labels

feature

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 59.77%, below the required 80.00% threshold. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (2 passed)

  • Title check (✅ Passed): the title 'feat: team + operator + seed + health endpoints' accurately summarizes the main changes: four major feature additions. It is specific and concise.
  • Description check (✅ Passed): the description is comprehensive and directly related to the changeset. It explains the Wave 3 rebuild, details backend endpoints, dashboard improvements, migrations, and the test plan with clear context.


@coderabbitai coderabbitai Bot added the feature label Apr 13, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 15

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cloud/app/auth.py (1)

57-77: ⚠️ Potential issue | 🔴 Critical

Catch JWKS transport failures before they become 500s.

_get_jwks() and raise_for_status() raise httpx transport/status exceptions, but this block only catches JWTError. A transient Supabase/JWKS outage will take down JWT auth instead of falling back or returning a controlled auth failure.

Suggested hardening
-    except JWTError:
+    except (JWTError, httpx.HTTPError):
         pass  # Fall through to HS256
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cloud/app/auth.py` around lines 57 - 77, The JWKS retrieval block only
catches JWTError but network/HTTP errors from _get_jwks() (and its
raise_for_status()) can raise httpx exceptions and cause a 500; update the
try/except so transport/status errors are caught and handled the same as JWT
parsing failures: either wrap the _get_jwks() call in its own try/except that
catches httpx.RequestError and httpx.HTTPStatusError (or add them to the
existing except) and log the error before falling through to the HS256 fallback,
ensuring httpx is imported; keep the existing behavior of continuing to attempt
HS256 verification using signed_jwt and jose_jwt.decode if JWKS fetch fails.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cloud/app/auth.py`:
- Around line 156-172: The fallback in _resolve_user_email currently calls
get_db() and db.select("users", ...) which hits PostgREST and cannot access
auth.users; replace that fallback with a Supabase Admin API call (e.g., use your
Supabase client method supabase.auth.admin.get_user(user_id) or equivalent
client helper) to fetch the user and return user.data.user.email, handling
errors and logging similarly; remove or stop using get_db()/db.select for auth
users, ensure you import/obtain the admin-capable Supabase client (symbol name
you use for the client) and catch exceptions to _log.warning(...) before
returning None.

In `@cloud/app/db.py`:
- Around line 69-82: The delete method currently allows unfiltered table-wide
deletes; modify the async delete(self, table: str, filters: dict[str, Any] |
None = None) to disallow nil filters by default: if filters is None raise a
ValueError (or custom exception) with a clear message instructing the caller to
provide filters, and add an explicit opt-in parameter (e.g., allow_all: bool =
False) so callers can pass allow_all=True to intentionally perform an unfiltered
delete. Update the method docstring to state the new behavior and mention the
allow_all escape hatch, and ensure the early check runs before making the HTTP
DELETE request in _http.delete inside delete.

In `@cloud/app/routes/brains.py`:
- Around line 174-183: The current code deletes the brain row whenever
_is_demo_metadata(metadata) is true, which can cascade-delete non-demo child
rows; change the logic in the block handling brain_rows (the select -> delete
sequence) to first query each child table that can reference brains (e.g.,
lessons, corrections, events, meta_rules) for records with this brain_id that
are NOT demo, and only call db.delete("brains", filters={"id": brain_id}) if all
those queries return zero non-demo rows; otherwise do not delete the brain and
set by_table["brains"]=0 (or always preserve the brain), ensuring
_is_demo_metadata is still checked before considering deletion.

In `@cloud/app/routes/operator.py`:
- Around line 294-325: Replace the N+1 loop that calls db.select("corrections",
...) for each brain in brains_by_ws with a single query that fetches all
corrections for the relevant brain ids within the two_weeks_ago window, then
aggregate counts in-memory by brain_id and date-range; use those aggregated
counts to compute this_week and prior_week per workspace and then construct
AdminAlert entries as before (keep using _parse_ts to parse created_at, use
now/week_ago/two_weeks_ago for ranges, and reference brains_by_ws/workspaces to
map brain_id→workspace). Ensure the single db.select uses a filter like brain_id
IN (...) and created_at >= two_weeks_ago to limit rows returned and remove the
per-brain db.select calls.
- Around line 229-234: The descending sort currently flips the tuple order and
brings None last_active values to the top; fix this by doing a stable two-step
sort: first sort rows by the actual timestamp using rows.sort(key=lambda r:
r.last_active or "", reverse=reverse) and then stable-sort to push None to the
end with rows.sort(key=lambda r: r.last_active is None) so that r.last_active is
None always sorts to the end regardless of the reverse flag.
- Around line 105-108: The SELECTs against the workspaces table are requesting
non-existent columns (deleted_at and subscription_status) and will raise runtime
errors; locate the db.select calls that populate the workspaces variable (the
assignment using db.select("workspaces", columns=...)) and the later/select that
requests subscription_status, and either (A) remove those non-existent column
names from the columns list so queries only request id, name, owner_id, plan,
stripe_customer_id, created_at, or (B) add a proper DB migration to create
deleted_at and subscription_status and update the schema before querying; after
changing the queries, update any downstream code that reads deleted_at or
subscription_status to handle their absence (e.g., default values or optional
access) so no runtime KeyError/AttributeError occurs.

In `@cloud/app/routes/team.py`:
- Around line 70-95: Replace the per-member brains query and string comparison
with a single workspace-level query and datetime-aware grouping: call db.select
once for table "brains" with filters {"workspace_id": workspace_id} (instead of
calling db.select inside the loop), parse each brain's "last_sync_at" into a
timezone-aware datetime, keep a dict mapping user_id -> latest datetime (compare
datetimes, not raw strings), then when building MemberResponse (the existing
MemberResponse construction that uses user_id, email, display_name, role,
joined_at), set last_sync_at from the grouped/latest datetime (format back to
the original string format or ISO) for that member_id; update variable names
around brains, last_sync_by_user, and member_user_id accordingly.
- Around line 132-138: The info-level logs in the invite/team actions currently
emit PII (body.email and user_id) — update the logging calls (e.g., the
_log.info that logs workspace_id, body.email, invite_data["role"], user_id and
the similar calls at the other locations) to redact or pseudonymize identifiers:
log workspace_id and role as-is but replace email and user id with a hashed or
masked value (e.g., compute a stable SHA-256 or mask like ****@domain) or move
full identifiers into a lower-severity/specialized audit sink; ensure the code
that constructs the log message uses the same redaction utility to produce the
redacted_email and redacted_user_id so all three sites use the identical helper.

In `@cloud/dashboard/app/error.tsx`:
- Around line 30-32: The user-facing copy in cloud/dashboard/app/error.tsx
incorrectly claims "The error has been logged" even when Sentry is not
initialized; update the component to conditionally render the message based on
whether Sentry is enabled: import a boolean flag or check NEXT_PUBLIC_SENTRY_DSN
from cloud/dashboard/sentry.client.config.ts (or export one there, e.g.,
isSentryEnabled) and use it in the error UI to show "The error has been logged"
only when true, otherwise show a neutral message like "We couldn't automatically
log this error — please try again or contact support." Ensure the change targets
the JSX around the paragraph in error.tsx so the copy correctly reflects Sentry
state.

In `@cloud/migrations/003_workspace_invites.sql`:
- Around line 22-23: The current RLS policy
"workspace_invites_read_own_workspace" on table workspace_invites allows any
workspace member to SELECT pending invites, which can leak invitee emails;
tighten the USING clause so only workspace members with admin/owner privileges
can read invites by checking the role (e.g., require workspace_members.role IN
('owner','admin') or an equivalent admin flag) in the subquery that compares
workspace_id and auth.uid() against workspace_members; update the policy
condition accordingly to reference workspace_invites, workspace_members and
auth.uid().
- Around line 25-29: The INSERT policy workspace_invites_create_admin currently
only verifies the caller's workspace role; tighten it so inserts are bound to
the authenticated inviter and disallow setting lifecycle fields by adding to the
WITH CHECK: workspace_id is in the existing SELECT from workspace_members (role
IN ('owner','admin')) AND invited_by = auth.uid() AND accepted_at IS NULL AND
expires_at IS NULL (or expires_at IS DEFAULT/NULL if you use a default); update
the policy on table workspace_invites to enforce these column checks so clients
cannot spoof invited_by or pre-populate lifecycle timestamps.

In `@cloud/migrations/004_seed_demo_brain.sql`:
- Around line 36-41: The SECURITY DEFINER functions
seed_demo_brain(p_workspace_id UUID, p_user_id UUID) and
handle_new_user_test(...) are callable by any client unless you explicitly
revoke public execute and grant it only to the trusted service role; update the
migration to revoke EXECUTE on these function objects from public and then GRANT
EXECUTE only to the internal DB role you use for server-side operations (e.g.,
the service_role or an admin DB role), ensuring no anonymous/public role can RPC
them.

In `@cloud/SUPABASE-SETUP.md`:
- Around line 126-128: The fenced code block containing "POST
/api/v1/brains/{brain_id}/clear-demo" is missing a language specifier and
triggers markdownlint MD040; update the fence from ``` to ```text (or another
appropriate language) so the block reads ```text followed by the POST line and
closing ```, ensuring the fenced block is annotated to satisfy the linter.

In `@cloud/tests/conftest.py`:
- Around line 52-54: The mock delete(self, table: str, filters: dict | None =
None) currently ignores filters and always returns the seeded "delete" list;
change it to apply filters against the mock table rows (e.g.,
self._responses[table]["rows"] or equivalent) to find matching rows, remove
those rows from the stored mock state (mutate self._responses[table]["rows"] to
reflect deletion), and return the list of deleted rows; if filters is None treat
it as deleting all rows and ensure the stored rows become empty. Use the
existing symbols delete, table, filters and self._responses to locate and update
the logic.

In `@cloud/tests/test_operator.py`:
- Around line 64-67: Add a positive test that exercises require_operator()'s DB
email fallback: create a new test (e.g.,
test_global_kpis_allows_db_email_fallback) that calls
client.get("/api/v1/admin/global-kpis", headers=_headers(email=None)) but sets
up mock_supabase to return a user record whose email is allowlisted; assert the
response is successful (200) so the DB lookup path is covered; use the same
_headers helper and mock_supabase fixture used in the existing test to locate
the behavior in require_operator().

---

Outside diff comments:
In `@cloud/app/auth.py`:
- Around line 57-77: The JWKS retrieval block only catches JWTError but
network/HTTP errors from _get_jwks() (and its raise_for_status()) can raise
httpx exceptions and cause a 500; update the try/except so transport/status
errors are caught and handled the same as JWT parsing failures: either wrap the
_get_jwks() call in its own try/except that catches httpx.RequestError and
httpx.HTTPStatusError (or add them to the existing except) and log the error
before falling through to the HS256 fallback, ensuring httpx is imported; keep
the existing behavior of continuing to attempt HS256 verification using
signed_jwt and jose_jwt.decode if JWKS fetch fails.
ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 2ef9214b-68e6-436a-aca0-7d51c5a0d94d

📥 Commits

Reviewing files that changed from the base of the PR and between 229e7d9 and 639393b.

📒 Files selected for processing (18)
  • cloud/SUPABASE-SETUP.md
  • cloud/app/auth.py
  • cloud/app/db.py
  • cloud/app/models.py
  • cloud/app/routes/__init__.py
  • cloud/app/routes/brains.py
  • cloud/app/routes/health.py
  • cloud/app/routes/operator.py
  • cloud/app/routes/team.py
  • cloud/dashboard/app/error.tsx
  • cloud/dashboard/app/not-found.tsx
  • cloud/migrations/003_workspace_invites.sql
  • cloud/migrations/004_seed_demo_brain.sql
  • cloud/tests/conftest.py
  • cloud/tests/test_health.py
  • cloud/tests/test_operator.py
  • cloud/tests/test_seed_demo.py
  • cloud/tests/test_team.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Cloudflare Pages
🧰 Additional context used
🪛 markdownlint-cli2 (0.22.0)
cloud/SUPABASE-SETUP.md

[warning] 126-126: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🔇 Additional comments (3)
cloud/dashboard/app/error.tsx (1)

19-21: Solid error-boundary behavior and recovery flow.

Sentry.captureException(error) plus the reset retry path is cleanly implemented, and showing error.digest when available is useful for support/debug handoff.

Also applies to: 33-44

cloud/dashboard/app/not-found.tsx (2)

5-7: Metadata declaration looks correct.

The metadata export is concise and aligned with the route’s purpose.


11-33: 404 UX is clear and well-structured.

Centered layout, concise guidance text, and Link-based recovery actions (/dashboard, /login) make this flow practical and consistent.

Comment thread cloud/app/auth.py
Comment on lines +156 to +172
async def _resolve_user_email(user_id: str, claims: dict) -> str | None:
    """Resolve the caller's email — prefer JWT claim, fall back to auth.users lookup."""
    email = claims.get("email")
    if email:
        return email

    # Fallback: query auth.users via the Supabase service-role client.
    # Supabase exposes auth users through the admin REST endpoint, not PostgREST.
    # If your db wrapper doesn't expose that, the JWT claim path is the primary route.
    try:
        db = get_db()
        rows = await db.select("users", columns="email", filters={"id": user_id})
        if rows:
            return rows[0].get("email")
    except Exception as exc:  # pragma: no cover - defensive
        _log.warning("Failed to resolve email for user=%s: %s", user_id, exc)
    return None

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Searching for users-table definitions and auth.users access..."
rg -n -C2 'CREATE TABLE .*users|auth\.users|select\("users"|from\("users"' cloud

echo
echo "Inspecting the DB wrapper select() API..."
rg -n -C3 'async def select\(|def select\(' cloud/app

Repository: Gradata/gradata

Length of output: 3541


🏁 Script executed:

# Find the DB wrapper implementation and get_db() function
rg -n 'def get_db|class.*DB|async def select' cloud/app --type py -B2 -A5

Repository: Gradata/gradata

Length of output: 1027


🏁 Script executed:

# Check if there's a public "users" table definition in migrations
rg -n 'CREATE TABLE.*users|CREATE TABLE "users"' cloud/migrations

Repository: Gradata/gradata

Length of output: 167


🏁 Script executed:

# Look for service-role or admin API configuration
rg -n 'service.role|admin_api|supabase.*key' cloud --type py

Repository: Gradata/gradata

Length of output: 1736


🏁 Script executed:

# Read the complete select() method implementation
sed -n '32,80p' cloud/app/db.py

Repository: Gradata/gradata

Length of output: 2104


🏁 Script executed:

# Check if SupabaseClient is external or custom, and how it's imported
head -n 30 cloud/app/db.py

Repository: Gradata/gradata

Length of output: 901


🏁 Script executed:

# Look for any special "users" table handling or mapping in the codebase
rg -n 'users.*table|table.*users|auth\.users' cloud/app --type py

Repository: Gradata/gradata

Length of output: 485


Fix the email fallback: db.select("users") won't reach auth.users via PostgREST.

The code calls db.select("users", ...), which routes to PostgREST /rest/v1/users, but there is no public users table in the migrations. The auth.users table exists in Supabase's auth schema and is not exposed through PostgREST—it requires the Admin API instead. As written, the fallback will fail silently and always return None for JWTs without an email claim.

Either:

  • Create a public users view that mirrors auth.users email data, or
  • Replace the PostgREST call with direct Admin API access via Supabase client's auth methods (e.g., supabase.auth.admin.get_user(user_id)) — see the sketch after this list, or
  • Remove the fallback and require the email claim in JWTs.
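A sketch of that second option, hitting GoTrue's admin endpoint directly with the service-role key; the env var names are assumptions:

```python
import os

import httpx

async def fetch_auth_user_email(user_id: str) -> str | None:
    """Look up auth.users via the GoTrue Admin API instead of PostgREST."""
    base = os.environ["SUPABASE_URL"]
    key = os.environ["SUPABASE_SERVICE_ROLE_KEY"]  # never ship this key to clients
    async with httpx.AsyncClient() as http:
        resp = await http.get(
            f"{base}/auth/v1/admin/users/{user_id}",
            headers={"apikey": key, "Authorization": f"Bearer {key}"},
        )
    if resp.status_code != 200:
        return None
    return resp.json().get("email")
```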
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cloud/app/auth.py` around lines 156 - 172, The fallback in
_resolve_user_email currently calls get_db() and db.select("users", ...) which
hits PostgREST and cannot access auth.users; replace that fallback with a
Supabase Admin API call (e.g., use your Supabase client method
supabase.auth.admin.get_user(user_id) or equivalent client helper) to fetch the
user and return user.data.user.email, handling errors and logging similarly;
remove or stop using get_db()/db.select for auth users, ensure you import/obtain
the admin-capable Supabase client (symbol name you use for the client) and catch
exceptions to _log.warning(...) before returning None.

Comment thread cloud/app/db.py
Comment on lines +69 to +82
    async def delete(
        self, table: str, filters: dict[str, Any] | None = None,
    ) -> list[dict]:
        """DELETE rows matching eq filters. Returns deleted rows when PostgREST sends them back."""
        params: dict[str, str] = {}
        if filters:
            for key, val in filters.items():
                params[key] = f"eq.{val}"
        resp = await self._http.delete(f"/{table}", params=params)
        resp.raise_for_status()
        try:
            return resp.json()
        except ValueError:
            return []

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

In PostgREST / Supabase REST, what happens when you send DELETE /rest/v1/<table> without any filter query parameters?

💡 Result:

In PostgREST (and thus Supabase REST API), sending DELETE /rest/v1/<table> without any filter query parameters generates a SQL DELETE statement with no WHERE clause, which deletes all rows in the table that the authenticated database role has DELETE permission on. This behavior follows PostgreSQL semantics: without a WHERE clause, DELETE removes all visible rows. If Row Level Security (RLS) is enabled, only rows passing the relevant DELETE policies are deleted (silently; no error if zero rows affected). The response is typically HTTP 204 No Content. Official docs warn: "Beware of accidentally deleting all rows in a table." (PostgREST v14 docs). To prevent this, use the pg-safeupdate extension (blocks WHERE-less DELETE/UPDATE), database permissions/RLS, or a reverse proxy like Nginx. Supabase docs emphasize: "Always include a filter when deleting to avoid deleting all rows in the table." Community discussions confirm users must add filters (e.g., ?id=eq.0 or ?limit=0 workaround discouraged). No built-in block in recent PostgREST v14.x (2026); a prior "limited delete" feature was removed.


Reject unfiltered deletes on the service-role client.

If a caller omits filters, the method issues a table-wide DELETE—PostgREST sends no WHERE clause, deleting all rows the service-role has permission to delete. This is a single call-site bug causing full data loss. Supabase and PostgREST docs explicitly warn: "Always include a filter when deleting to avoid deleting all rows in the table." Require at least one filter, or force explicit opt-in with an allow_all escape hatch.

🛡️ Safer API shape
     async def delete(
-        self, table: str, filters: dict[str, Any] | None = None,
+        self,
+        table: str,
+        filters: dict[str, Any] | None = None,
+        *,
+        allow_all: bool = False,
     ) -> list[dict]:
         """DELETE rows matching eq filters. Returns deleted rows when PostgREST sends them back."""
+        if not filters and not allow_all:
+            raise ValueError("Refusing unfiltered delete on service-role client")
         params: dict[str, str] = {}
         if filters:
             for key, val in filters.items():
                 params[key] = f"eq.{val}"
         resp = await self._http.delete(f"/{table}", params=params)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cloud/app/db.py` around lines 69 - 82, The delete method currently allows
unfiltered table-wide deletes; modify the async delete(self, table: str,
filters: dict[str, Any] | None = None) to disallow nil filters by default: if
filters is None raise a ValueError (or custom exception) with a clear message
instructing the caller to provide filters, and add an explicit opt-in parameter
(e.g., allow_all: bool = False) so callers can pass allow_all=True to
intentionally perform an unfiltered delete. Update the method docstring to state
the new behavior and mention the allow_all escape hatch, and ensure the early
check runs before making the HTTP DELETE request in _http.delete inside delete.

Comment thread cloud/app/routes/brains.py
Comment on lines +174 to +183
    # Finally the brain itself — only if it was flagged is_demo in metadata.
    brain_rows = await db.select(
        "brains", columns="id,metadata", filters={"id": brain_id}
    )
    if brain_rows and _is_demo_metadata(brain_rows[0].get("metadata")):
        await db.delete("brains", filters={"id": brain_id})
        by_table["brains"] = 1
        total += 1
    else:
        by_table["brains"] = 0

⚠️ Potential issue | 🔴 Critical

Don't delete the parent brain in a demo-only cleanup.

Line 179 drops the brains row whenever metadata.is_demo is set. If the user has created any non-demo lessons, corrections, events, or meta-rules under that brain, the FK cascade will remove those real records too. That violates the endpoint's "clear demo data" contract and can lose user data.

🛡️ Safer direction
-    if brain_rows and _is_demo_metadata(brain_rows[0].get("metadata")):
-        await db.delete("brains", filters={"id": brain_id})
-        by_table["brains"] = 1
-        total += 1
-    else:
-        by_table["brains"] = 0
+    by_table["brains"] = 0
+    if brain_rows and isinstance(brain_rows[0].get("metadata"), dict):
+        metadata = dict(brain_rows[0]["metadata"])
+        metadata.pop("is_demo", None)
+        await db.update("brains", data={"metadata": metadata}, filters={"id": brain_id})

If preserving the brain is not the intended UX, only delete it after proving there are no non-demo child rows left.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
     # Finally the brain itself — only if it was flagged is_demo in metadata.
     brain_rows = await db.select(
         "brains", columns="id,metadata", filters={"id": brain_id}
     )
-    if brain_rows and _is_demo_metadata(brain_rows[0].get("metadata")):
-        await db.delete("brains", filters={"id": brain_id})
-        by_table["brains"] = 1
-        total += 1
-    else:
-        by_table["brains"] = 0
+    by_table["brains"] = 0
+    if brain_rows and isinstance(brain_rows[0].get("metadata"), dict):
+        metadata = dict(brain_rows[0]["metadata"])
+        metadata.pop("is_demo", None)
+        await db.update("brains", data={"metadata": metadata}, filters={"id": brain_id})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cloud/app/routes/brains.py` around lines 174 - 183, The current code deletes
the brain row whenever _is_demo_metadata(metadata) is true, which can
cascade-delete non-demo child rows; change the logic in the block handling
brain_rows (the select -> delete sequence) to first query each child table that
can reference brains (e.g., lessons, corrections, events, meta_rules) for
records with this brain_id that are NOT demo, and only call db.delete("brains",
filters={"id": brain_id}) if all those queries return zero non-demo rows;
otherwise do not delete the brain and set by_table["brains"]=0 (or always
preserve the brain), ensuring _is_demo_metadata is still checked before
considering deletion.

Comment thread cloud/app/routes/operator.py
Comment on lines +105 to +108
    workspaces = await db.select(
        "workspaces",
        columns="id,plan,created_at,deleted_at",
    )

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Searching migrations for workspace soft-delete and subscription columns..."
rg -n -C2 'deleted_at|subscription_status' cloud/migrations

Repository: Gradata/gradata

Length of output: 259


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# First, confirm the actual code at the specified lines
echo "=== Code at lines 105-108 in operator.py ==="
sed -n '105,108p' cloud/app/routes/operator.py

echo ""
echo "=== Code at lines 251-254 in operator.py ==="
sed -n '251,254p' cloud/app/routes/operator.py

echo ""
echo "=== Looking for workspace model/schema definition ==="
fd -e py -e sql | xargs rg -l "workspace" | head -20

Repository: Gradata/gradata

Length of output: 979


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check migration directory structure
echo "=== Migration files structure ==="
find cloud/migrations -type f 2>/dev/null | head -30

echo ""
echo "=== Looking for any file mentioning workspace schema ==="
rg -l "CREATE TABLE.*workspace|class Workspace|workspace.*schema" cloud/ 2>/dev/null | head -20

Repository: Gradata/gradata

Length of output: 401


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Try a broader search for workspace table definitions and column mentions
echo "=== Looking for workspace table/schema anywhere in codebase ==="
rg -n "deleted_at" cloud/app --type py | head -20

Repository: Gradata/gradata

Length of output: 716


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check the initial schema for workspace table definition
echo "=== Workspace table definition in 001_initial_schema.sql ==="
grep -A 30 "CREATE TABLE.*workspace" cloud/migrations/001_initial_schema.sql

echo ""
echo "=== Check all migration files for workspace table mentions ==="
for file in cloud/migrations/*.sql; do
  echo "--- $file ---"
  grep -i "workspace" "$file" || echo "(no workspace references)"
done

Repository: Gradata/gradata

Length of output: 6313


These workspace columns do not exist in the database schema and will cause runtime failures.

Lines 105–108 select deleted_at and lines 251–254 select subscription_status, but neither column is defined in the workspaces table. The initial schema contains only: id, name, owner_id, plan, stripe_customer_id, and created_at. None of the four migration files add these missing columns.

The code at line 142 includes a defensive comment ("If deleted_at isn't tracked (column may not exist), this stays 0.0") but the SELECT itself will still error before that code executes. Similarly, subscription_status has no defensive handling. These queries will return 500 errors immediately when the endpoints are hit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cloud/app/routes/operator.py` around lines 105 - 108, The SELECTs against the
workspaces table are requesting non-existent columns (deleted_at and
subscription_status) and will raise runtime errors; locate the db.select calls
that populate the workspaces variable (the assignment using
db.select("workspaces", columns=...)) and the later/select that requests
subscription_status, and either (A) remove those non-existent column names from
the columns list so queries only request id, name, owner_id, plan,
stripe_customer_id, created_at, or (B) add a proper DB migration to create
deleted_at and subscription_status and update the schema before querying; after
changing the queries, update any downstream code that reads deleted_at or
subscription_status to handle their absence (e.g., default values or optional
access) so no runtime KeyError/AttributeError occurs.

Comment thread cloud/app/routes/operator.py
Comment on lines +229 to +234
    elif sort == "last_active":
        # None sorts to the end regardless of order direction.
        rows.sort(
            key=lambda r: (r.last_active is None, r.last_active or ""),
            reverse=reverse,
        )

⚠️ Potential issue | 🟠 Major

last_active=desc currently pulls never-active customers to the top.

With reverse=True, the key (r.last_active is None, r.last_active or "") sorts None ahead of real timestamps. That contradicts the inline comment and makes descending sort output misleading.

Suggested fix
     elif sort == "last_active":
-        # None sorts to the end regardless of order direction.
-        rows.sort(
-            key=lambda r: (r.last_active is None, r.last_active or ""),
-            reverse=reverse,
-        )
+        active = [r for r in rows if r.last_active is not None]
+        inactive = [r for r in rows if r.last_active is None]
+        active.sort(key=lambda r: r.last_active, reverse=reverse)
+        rows = active + inactive
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cloud/app/routes/operator.py` around lines 229 - 234, The descending sort
currently flips the tuple order and brings None last_active values to the top;
fix this by doing a stable two-step sort: first sort rows by the actual
timestamp using rows.sort(key=lambda r: r.last_active or "", reverse=reverse)
and then stable-sort to push None to the end with rows.sort(key=lambda r:
r.last_active is None) so that r.last_active is None always sorts to the end
regardless of the reverse flag.

Comment thread cloud/migrations/003_workspace_invites.sql
Comment on lines +25 to +29
CREATE POLICY "workspace_invites_create_admin" ON workspace_invites FOR INSERT
  WITH CHECK (workspace_id IN (
    SELECT workspace_id FROM workspace_members
    WHERE user_id = auth.uid() AND role IN ('owner', 'admin')
  ));

⚠️ Potential issue | 🟠 Major

Bind invite inserts to the authenticated inviter.

This WITH CHECK only proves the caller is an owner/admin of the workspace. A direct client insert can still spoof invited_by and lifecycle fields like accepted_at/expires_at, which weakens auditability and invite semantics.

Suggested policy tightening
 CREATE POLICY "workspace_invites_create_admin" ON workspace_invites FOR INSERT
   WITH CHECK (workspace_id IN (
     SELECT workspace_id FROM workspace_members
     WHERE user_id = auth.uid() AND role IN ('owner', 'admin')
-  ));
+  )
+  AND invited_by = auth.uid()
+  AND accepted_at IS NULL);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 CREATE POLICY "workspace_invites_create_admin" ON workspace_invites FOR INSERT
   WITH CHECK (workspace_id IN (
     SELECT workspace_id FROM workspace_members
     WHERE user_id = auth.uid() AND role IN ('owner', 'admin')
-  ));
+  )
+  AND invited_by = auth.uid()
+  AND accepted_at IS NULL);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cloud/migrations/003_workspace_invites.sql` around lines 25 - 29, The INSERT
policy workspace_invites_create_admin currently only verifies the caller's
workspace role; tighten it so inserts are bound to the authenticated inviter and
disallow setting lifecycle fields by adding to the WITH CHECK: workspace_id is
in the existing SELECT from workspace_members (role IN ('owner','admin')) AND
invited_by = auth.uid() AND accepted_at IS NULL AND expires_at IS NULL (or
expires_at IS DEFAULT/NULL if you use a default); update the policy on table
workspace_invites to enforce these column checks so clients cannot spoof
invited_by or pre-populate lifecycle timestamps.

Comment thread cloud/migrations/004_seed_demo_brain.sql
Comment on lines +36 to +41
CREATE OR REPLACE FUNCTION seed_demo_brain(p_workspace_id UUID, p_user_id UUID)
RETURNS UUID
LANGUAGE plpgsql
SECURITY DEFINER
SET search_path = public
AS $$

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

rg -n -C2 'seed_demo_brain|handle_new_user_test|REVOKE .*FUNCTION|GRANT EXECUTE .*FUNCTION|ALTER DEFAULT PRIVILEGES' cloud/migrations

Repository: Gradata/gradata

Length of output: 2621


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Read the migration file to see full function definitions
echo "=== Checking full function definitions ==="
cat -n cloud/migrations/004_seed_demo_brain.sql | sed -n '36,50p'
echo "---"
cat -n cloud/migrations/004_seed_demo_brain.sql | sed -n '396,405p'

Repository: Gradata/gradata

Length of output: 972


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Search entire repo for REVOKE/GRANT/ALTER DEFAULT PRIVILEGES related to these functions
echo "=== Searching for privilege management across entire repo ==="
rg -i 'REVOKE|GRANT.*EXECUTE|ALTER DEFAULT PRIVILEGES' --type sql

Repository: Gradata/gradata

Length of output: 229


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check if there's any explicit privilege management at all
echo "=== Looking for RLS policies on related tables ==="
rg -A 5 'CREATE POLICY|ALTER POLICY' cloud/migrations/ --type sql | head -50

Repository: Gradata/gradata

Length of output: 3932


Lock down these SECURITY DEFINER functions.

seed_demo_brain() and handle_new_user_test() are defined as SECURITY DEFINER and accept caller-controlled UUIDs. Without explicit privilege revocation, Supabase clients can invoke them via RPC and write demo data across tenants. No REVOKE or GRANT EXECUTE statements protect these functions in the codebase.

Suggested hardening
+REVOKE ALL ON FUNCTION public.seed_demo_brain(UUID, UUID) FROM PUBLIC, anon, authenticated;
+REVOKE ALL ON FUNCTION public.handle_new_user_test(UUID, TEXT) FROM PUBLIC, anon, authenticated;
+-- Grant only to the narrow role that actually needs these RPCs.
+-- Example:
+-- GRANT EXECUTE ON FUNCTION public.seed_demo_brain(UUID, UUID) TO service_role;

Also applies to: 396-414

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cloud/migrations/004_seed_demo_brain.sql` around lines 36 - 41, The SECURITY
DEFINER functions seed_demo_brain(p_workspace_id UUID, p_user_id UUID) and
handle_new_user_test(...) are callable by any client unless you explicitly
revoke public execute and grant it only to the trusted service role; update the
migration to revoke EXECUTE on these function objects from public and then GRANT
EXECUTE only to the internal DB role you use for server-side operations (e.g.,
the service_role or an admin DB role), ensuring no anonymous/public role can RPC
them.

Comment thread cloud/SUPABASE-SETUP.md
Comment on lines +126 to +128
```
POST /api/v1/brains/{brain_id}/clear-demo
```

⚠️ Potential issue | 🟡 Minor

Add a language to this fenced block.

This is already tripping markdownlint (MD040), so the runbook will keep the docs check noisy until the fence is annotated.

📝 Minimal fix

-```
+```text
 POST /api/v1/brains/{brain_id}/clear-demo
 ```
🧰 Tools
🪛 markdownlint-cli2 (0.22.0)

[warning] 126-126: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cloud/SUPABASE-SETUP.md` around lines 126 - 128, The fenced code block
containing "POST /api/v1/brains/{brain_id}/clear-demo" is missing a language
specifier and triggers markdownlint MD040; update the fence from ``` to ```text
(or another appropriate language) so the block reads ```text followed by the
POST line and closing ```, ensuring the fenced block is annotated to satisfy the
linter.

Comment thread cloud/tests/conftest.py
Comment on lines +52 to +54
    async def delete(self, table: str, filters: dict | None = None) -> list[dict]:
        """Mock delete: returns pre-seeded delete response rows (treat as 'deleted rows')."""
        return list(self._responses[table].get("delete", []))

⚠️ Potential issue | 🟠 Major

Apply filters and mutate mock state in delete().

Line 54 returns the same seeded rows for every call and ignores filters, so the /clear-demo tests will still pass if the handler deletes the wrong ids or double-deletes a row. That weakens coverage on a destructive path.

🧪 Suggested mock behavior
     async def delete(self, table: str, filters: dict | None = None) -> list[dict]:
         """Mock delete: returns pre-seeded delete response rows (treat as 'deleted rows')."""
-        return list(self._responses[table].get("delete", []))
+        rows = list(self._responses[table].get("delete", []))
+        if filters:
+            rows = [r for r in rows if all(r.get(k) == v for k, v in filters.items())]
+        self._responses[table]["delete"] = [
+            r for r in self._responses[table].get("delete", [])
+            if r not in rows
+        ]
+        return rows
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cloud/tests/conftest.py` around lines 52 - 54, The mock delete(self, table:
str, filters: dict | None = None) currently ignores filters and always returns
the seeded "delete" list; change it to apply filters against the mock table rows
(e.g., self._responses[table]["rows"] or equivalent) to find matching rows,
remove those rows from the stored mock state (mutate
self._responses[table]["rows"] to reflect deletion), and return the list of
deleted rows; if filters is None treat it as deleting all rows and ensure the
stored rows become empty. Use the existing symbols delete, table, filters and
self._responses to locate and update the logic.

Comment thread cloud/tests/test_operator.py
Comment on lines +64 to +67
def test_global_kpis_rejects_missing_email(client, mock_supabase):
    """No email claim + no DB match -> 403, not 500."""
    resp = client.get("/api/v1/admin/global-kpis", headers=_headers(email=None))
    assert resp.status_code == 403

🧹 Nitpick | 🔵 Trivial

Cover the DB email fallback with a positive test.

This suite only proves email-missing requests get rejected. Since require_operator() is supposed to fall back to a users lookup, add one case where the JWT omits email but the DB resolves an allowlisted address; otherwise that path can break silently.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cloud/tests/test_operator.py` around lines 64 - 67, Add a positive test that
exercises require_operator()'s DB email fallback: create a new test (e.g.,
test_global_kpis_allows_db_email_fallback) that calls
client.get("/api/v1/admin/global-kpis", headers=_headers(email=None)) but sets
up mock_supabase to return a user record whose email is allowlisted; assert the
response is successful (200) so the DB lookup path is covered; use the same
_headers helper and mock_supabase fixture used in the existing test to locate
the behavior in require_operator().
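The requested test might look roughly like this; the mock-priming call is hypothetical, since the fixture's seeding API isn't shown in this thread:

```python
def test_global_kpis_allows_db_email_fallback(client, mock_supabase):
    """JWT omits the email claim, but the DB lookup resolves an allowlisted address."""
    # Hypothetical priming call; use whatever mechanism the fixture actually exposes.
    mock_supabase.seed("users", [{"id": "user-1", "email": "ops@gradata.ai"}])
    resp = client.get("/api/v1/admin/global-kpis", headers=_headers(email=None))
    assert resp.status_code == 200
```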

Gradata pushed a commit that referenced this pull request Apr 13, 2026
- RuleCache: typed as dict[str, list] but stored str (brain.py caches format_rules_for_prompt output). Change to dict[str, str].
- Lesson: add _contradiction_streak field (was set dynamically, failing attribute check in self_improvement + rule_evolution).
- behavioral_extractor: sorted(counts, key=counts.get) fails overload resolution; wrap with lambda.
- rule_engine: assign example_draft/example_corrected to locals after None guard so pyright narrows the Optional.

Pre-existing errors blocking CI on main for multiple sessions. Unblocks PR #28.
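For context, the sorted-overload fix mentioned above boils down to this pattern (toy data, not the SDK's actual counts):

```python
counts: dict[str, int] = {"a": 2, "b": 5, "c": 1}
# pyright cannot resolve dict.get's overloads when passed bare as a key function;
# a lambda with a concrete signature type-checks cleanly:
ordered = sorted(counts, key=lambda k: counts[k])  # ['c', 'a', 'b']
```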
Gradata added a commit that referenced this pull request Apr 13, 2026
* fix(sdk): resolve 10 pyright type errors in basic mode

- RuleCache: typed as dict[str, list] but stored str (brain.py caches format_rules_for_prompt output). Change to dict[str, str].
- Lesson: add _contradiction_streak field (was set dynamically, failing attribute check in self_improvement + rule_evolution).
- behavioral_extractor: sorted(counts, key=counts.get) fails overload resolution; wrap with lambda.
- rule_engine: assign example_draft/example_corrected to locals after None guard so pyright narrows the Optional.

Pre-existing errors blocking CI on main for multiple sessions. Unblocks PR #28.

* fix(sdk): clear 17 ruff lint errors blocking SDK CI

Auto-fixed 12 (datetime.UTC alias, unused vars, zip strict, comprehensions)
plus 5 manual fixes:
- _embed.py: zip(a, b, strict=False) for cosine_distance
- rule_graph.py: collapse nested ifs with 'and'
- rule_tree.py: rename unused loop var path->_path, collapse nested contract check

Unblocks sdk-ci.yml workflow on main.

---------

Co-authored-by: Oliver Le <oliver@gradata.com>
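The zip-strict fix in _embed.py amounts to making the truncation policy explicit. A sketch under an assumed function shape, not the SDK's actual code:

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """1 - cosine similarity; strict=False states that silent truncation
    of unequal-length vectors is intentional (satisfies ruff B905)."""
    dot = sum(x * y for x, y in zip(a, b, strict=False))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb) if na and nb else 1.0
```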

cloudflare-workers-and-pages Bot commented Apr 13, 2026

Deploying gradata-dashboard with Cloudflare Pages

Latest commit: c938357
Status: 🚫 Build failed.

View logs



@Gradata Gradata merged commit 9fdab9e into main Apr 13, 2026
10 of 11 checks passed
Gradata pushed a commit that referenced this pull request Apr 13, 2026
PR #28 added WorkspaceInvite with email: EmailStr, but pydantic's
optional email-validator is not installed unless you request the
pydantic[email] extra. Boot crashed on schema generation:

  ImportError: email-validator is not installed, run
  `pip install 'pydantic[email]'`

Switch dependency spec to `pydantic[email]>=2.10.0`.
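For reference, this is the construct that triggered the crash: EmailStr pulls in email-validator when the model's schema is built.

```python
from pydantic import BaseModel, EmailStr  # EmailStr requires the pydantic[email] extra

class WorkspaceInvite(BaseModel):
    email: EmailStr  # without email-validator installed, schema generation raises ImportError

WorkspaceInvite(email="teammate@example.com")  # validates the address on construction
```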
@Gradata Gradata deleted the worktree-dashboard-nextjs-rebuild branch April 17, 2026 19:46
