Conversation
Adds the resource lifecycle layer:
- ResourceDal with org-scoped CRUD in dal.ts
- ResourceManager domain class with policy enforcement (domain/resource-manager.ts, domain/policy.ts, domain/errors.ts)
- Provider adapter interface + mock adapter (src/adapters/)
- REST routes and Zod schemas for /resources (src/api/routes/resources.ts, src/api/schemas/resources.ts)
- resourceServicesPlugin wiring (src/plugins/resource-services.ts)
- Unit and integration tests (tests/resources.test.ts, tests/resources.integration.ts)
…ent test
- agentmail-adapter: wrap JSON.parse in parseWebhook with try/catch to avoid crashing on malformed webhook payloads
- actions: replace crypto.randomUUID() fallback with empty string so message_id in email.sent event reflects actual adapter output rather than a fake ID
- resource-manager: document adapter-over-user config merge precedence
- email.test: add missing test for send_email returning 404 on archived agent
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: a6b48da30d
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```ts
and(
  eq(resources.provider, provider),
  eq(resources.providerRef, providerRef),
  ne(resources.state, 'deleted'),
```
Scope webhook resource lookup to tenant
The webhook ingestion path resolves resources only by provider + providerRef, then writes events with whatever orgId that first match has. Because resources are unique per org (resources_org_provider_provider_ref_unique), two tenants can legitimately have the same provider ref; in that case this lookup can attach a webhook event to the wrong tenant/agent and violate data isolation. Pass tenant context through parsing/processing and include it in this query before writing events.
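One way to scope the lookup, sketched here against an in-memory list rather than the real Drizzle query (the field and parameter names follow the schema described above, but the exact shapes are assumptions; the actual fix would add an `eq(resources.orgId, orgId)` clause to the query):

```typescript
// In-memory sketch of a tenant-scoped webhook resource lookup.
interface ResourceRow {
  id: string;
  orgId: string;
  provider: string;
  providerRef: string;
  state: 'provisioning' | 'active' | 'deleted';
}

function findResourceForWebhook(
  rows: ResourceRow[],
  orgId: string,
  provider: string,
  providerRef: string,
): ResourceRow | undefined {
  // Including orgId keeps two tenants with the same providerRef apart.
  return rows.find(
    (r) =>
      r.orgId === orgId &&
      r.provider === provider &&
      r.providerRef === providerRef &&
      r.state !== 'deleted',
  );
}
```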
```ts
const updated = await dal.resources.updateById(id, {
  providerRef: result.providerRef,
  config: parsedConfig.data,
  state: 'active',
});
```
Clean up DB state when resource activation update fails
After adapter.provision succeeds, the code updates the row to set providerRef/state: 'active', but failures in this update path are not compensated. For example, if the providerRef conflicts with an existing row, the update throws and leaves the newly inserted resource stuck in provisioning (and the external resource may already exist), creating orphaned/stale resources. Wrap this post-provision update in cleanup logic similar to the adapter call failure path.
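A hedged sketch of that compensation, using simplified stand-ins for the DAL and adapter (the method names mirror the excerpt above, but the signatures and the `'failed'` terminal state are assumptions):

```typescript
// Compensate when the post-provision activation update fails: tear down
// the external resource and mark the row failed rather than leaving it
// stuck in 'provisioning'.
interface Adapter {
  deprovision(ref: string): Promise<void>;
}
interface ResourceDal {
  updateById(id: string, patch: Record<string, unknown>): Promise<void>;
}

async function activateOrCompensate(
  dal: ResourceDal,
  adapter: Adapter,
  id: string,
  providerRef: string,
): Promise<'active' | 'failed'> {
  try {
    await dal.updateById(id, { providerRef, state: 'active' });
    return 'active';
  } catch {
    // e.g. providerRef uniqueness conflict on the activation update
    await adapter.deprovision(providerRef);
    await dal.updateById(id, { state: 'failed' });
    return 'failed';
  }
}
```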
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 88eb4ed8a1
```ts
if (err instanceof AppError) {
  return reply.code(404).send({ message: err.message });
```
Use AppError status codes in resource route handlers
Both resource mutation handlers coerce every AppError to HTTP 404, which masks real server faults as not-found responses. In ResourceManager, some AppErrors are intentionally 500-level (for example invalid adapter config or failed state transitions), so returning 404 here will mislead clients and monitoring and can trigger incorrect retry/backoff behavior; the handler should return err.httpStatus instead of a hardcoded 404.
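A minimal sketch of the suggested mapping, with a simplified stand-in for the project's AppError (the `httpStatus` field is taken from the review's description; the real class may differ):

```typescript
// Preserve each AppError's intended status instead of coercing to 404.
class AppError extends Error {
  constructor(message: string, public readonly httpStatus: number) {
    super(message);
  }
}

function statusFor(err: unknown): number {
  if (err instanceof AppError) return err.httpStatus;
  return 500; // unknown errors stay server faults
}
```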
```ts
eventType: EVENT_TYPES.EMAIL_SENT,
idempotencyKey: idempotency_key,
data: {
  message_id: (actionResult['message_id'] as string) || '',
```
Avoid writing empty message_id for email.sent events
This fallback writes message_id: '' when the adapter response lacks message_id, but EMAIL_SENT validation requires message_id to be non-empty (message_id: nonEmptyStringSchema in src/domain/events.ts). In that case the email may already be sent by the provider, then writeEvent throws and the API returns 500, which can cause clients to retry and send duplicates. Handle missing IDs explicitly (or make schema/contract consistent) instead of defaulting to an invalid empty string.
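One hedged way to handle the missing ID explicitly, failing fast with a distinct error instead of defaulting to a value the EMAIL_SENT schema will reject (the function name is illustrative):

```typescript
// Reject adapter responses that lack a usable message_id up front, so
// callers see a distinct error instead of a late schema failure.
function extractMessageId(actionResult: Record<string, unknown>): string {
  const id = actionResult['message_id'];
  if (typeof id !== 'string' || id.length === 0) {
    // The provider may have already sent the email; surfacing a typed
    // error here lets callers avoid blind retries that double-send.
    throw new Error('adapter returned no message_id for sent email');
  }
  return id;
}
```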
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 91e0b3f77f
```ts
  return reply.code(500).send({ message: 'AgentMail adapter not configured' });
}

const actionResult = await adapter.performAction(emailResource, 'send_email', {
```
Enforce idempotency before sending provider email
When idempotency_key is provided, this handler still calls adapter.performAction(...) before any dedupe check, and only then writes the event with that key. If a client retries the same request after a timeout/network retry, the provider send runs again and can deliver duplicate emails even though writeEvent collapses the event record; idempotency needs to be checked/locked before the outbound side effect.
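A minimal sketch of claiming the key before the outbound call, using an in-memory promise map as a stand-in for a DB row with a unique constraint (all names here are illustrative, not the project's API):

```typescript
// Claim the idempotency key before the provider send, so a retried or
// concurrent request reuses the original send instead of a second one.
const inFlightOrDone = new Map<string, Promise<string>>();

function sendEmailIdempotent(
  key: string,
  send: () => Promise<string>, // returns the provider message id
): Promise<string> {
  let p = inFlightOrDone.get(key);
  if (!p) {
    p = send();
    inFlightOrDone.set(key, p);
  }
  return p;
}
```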
```ts
const refProviderOrgId = new Map(eventsWithRef.map((e) => [e.resourceRef, e.providerOrgId]));
const uniqueRefs = [...refProviderOrgId.keys()];
```
Preserve providerOrgId when batching webhook lookups
This map is keyed only by resourceRef, so if processEvents receives multiple events sharing a ref but with different providerOrgId values, later entries overwrite earlier ones and all matching events are resolved through a single resource mapping. That can attach at least one event to the wrong tenant/agent in mixed-org batches; use a composite key (resourceRef + providerOrgId) or per-event lookup to keep tenant scoping intact.
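A sketch of the composite-key variant: two events sharing a resourceRef but belonging to different provider orgs must yield two lookup keys, not one (field names follow the excerpt above):

```typescript
// Key the batch lookup by providerOrgId + resourceRef so mixed-org
// batches do not collapse onto a single resource mapping.
interface InboundEvent {
  resourceRef: string;
  providerOrgId: string;
}

function refLookupKeys(events: InboundEvent[]): string[] {
  const keys = new Set(events.map((e) => `${e.providerOrgId}:${e.resourceRef}`));
  return [...keys];
}
```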
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 6697458cc2
```ts
const adapter = this.getAdapter(resource.provider);
await adapter.deprovision(resource);

const updated = await dal.resources.updateById(resourceId, { state: 'deleted' });
```
Clear providerRef when deprovisioning a resource
This update only sets state: 'deleted', but the unique index resources_org_provider_provider_ref_unique still applies to deleted rows (src/db/schema.ts lines 81-83), so the old providerRef remains reserved forever. Re-provisioning a resource that gets the same provider ref (deterministic with the mock adapter’s default mock_ref_123) will fail on the activation update with a uniqueness error and leave the new row deleted, effectively breaking reprovisioning for that agent/org.
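A hedged sketch of releasing the ref on deletion, simulated against in-memory rows (this assumes the providerRef column can be made nullable; a partial unique index that excludes deleted rows would be an alternative fix):

```typescript
// Release providerRef alongside the state change so the
// org/provider/providerRef unique index stops reserving the old ref.
interface Row {
  orgId: string;
  provider: string;
  providerRef: string | null;
  state: string;
}

// Mirrors resources_org_provider_provider_ref_unique: a ref is taken if
// any row still holds it for that org/provider, regardless of state.
function refIsFree(rows: Row[], orgId: string, provider: string, ref: string): boolean {
  return !rows.some((r) => r.orgId === orgId && r.provider === provider && r.providerRef === ref);
}

function markDeleted(row: Row): void {
  row.state = 'deleted';
  row.providerRef = null; // free the ref for future re-provisioning
}
```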
```ts
const allowedSet = allowedDomains && allowedDomains.length > 0 ? new Set(allowedDomains) : null;
const blockedSet = blockedDomains && blockedDomains.length > 0 ? new Set(blockedDomains) : null;
```
Normalize policy domain lists before membership checks
Recipient domains are normalized to lowercase, but allowed_domains/blocked_domains are inserted into sets without normalization, so mixed-case config values produce incorrect policy outcomes (for example allowed_domains: ['Trusted.com'] blocks user@trusted.com, and blocked_domains: ['Spam.com'] does not block bad@spam.com). Because DNS domains are case-insensitive, this creates avoidable false allows/denies in enforcement.
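A minimal sketch of normalizing the configured lists before building the sets, matching the lowercase normalization already applied to recipient domains (the helper name is illustrative):

```typescript
// Lowercase (and trim) configured domains so membership checks against
// lowercased recipient domains behave case-insensitively, as DNS does.
function buildDomainSet(domains: string[] | undefined): Set<string> | null {
  if (!domains || domains.length === 0) return null;
  return new Set(domains.map((d) => d.trim().toLowerCase()));
}
```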
All of Phase C from @plan.md is completed.