feat(webhooks): on-chain event webhooks with batch delivery + dashboard #767
orgy272 wants to merge 0 commits into SolFoundry:main from
Conversation
📝 Walkthrough

Registers new on-chain webhook routers in the FastAPI app and starts/stops a background on-chain webhook batch worker. Adds a contributor webhook DB model, Pydantic schemas, an Alembic migration for webhook tables, a ContributorWebhookService (CRUD, signing, retry/backoff, delivery recording), an onchain_webhook_service (in-memory queue, batch worker, emit/handler functions), API endpoints for emitting events, a dashboard, testing, and an event catalog, and a comprehensive async test suite for the subsystem.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ Passed checks (5 passed)
⚠️ Important: merge conflicts detected (Beta)
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/main.py`:
- Around line 55-63: Remove the literal Git conflict markers and consolidate the
duplicate import block in main.py so the module parses; specifically delete the
lines starting with <<<<<<<, =======, and >>>>>>> and ensure a single import
section that imports contributor_webhooks_router, onchain_webhooks_router,
onchain_dashboard_router, and siws_router (referencing the symbols
contributor_webhooks_router, onchain_webhooks_router, onchain_dashboard_router,
siws_router) from their respective modules so each router is imported exactly
once.
- Around line 402-408: There is an unresolved git merge conflict block left in
the router registration section—remove the conflict markers (<<<<<<<, =======,
>>>>>>>) and restore the intended router registrations by keeping the valid
include_router calls (app.include_router(contributor_webhooks_router,
prefix="/api"), app.include_router(onchain_webhooks_router, prefix="/api"),
app.include_router(onchain_dashboard_router, prefix="/api"), and
app.include_router(siws_router, prefix="/api")); ensure these router variables
are correctly imported/defined so there are no duplicates or missing imports
after removing the markers.
In `@backend/app/models/contributor_webhook.py`:
- Around line 50-76: The model defines a duplicate index on the user_id column:
remove one of the two index declarations to avoid redundant indexes; either
delete the inline index flag on the Column(user_id) (user_id = Column(...,
index=True)) or remove the explicit Index("ix_contributor_webhooks_user_id",
"user_id") from __table_args__ in the ContributorWebhook model so only a single
index on user_id remains.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 8a83077f-6512-43f0-8a32-a1bc48c5dbe4
📒 Files selected for processing (6)
backend/alembic/versions/005_webhook_delivery_log.py
backend/app/api/onchain_webhooks.py
backend/app/main.py
backend/app/models/contributor_webhook.py
backend/app/services/onchain_webhook_service.py
backend/tests/test_onchain_webhooks.py
Actionable comments posted: 13
♻️ Duplicate comments (1)
backend/app/models/contributor_webhook.py (1)
51-75: ⚠️ Potential issue | 🟡 Minor
`user_id` is indexed twice. Line 51 sets `index=True`, and lines 73-75 add `Index("ix_contributor_webhooks_user_id", "user_id")` again. That redundant index increases write cost and storage without improving query coverage.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/models/contributor_webhook.py` around lines 51 - 75, The model defines the user_id column with index=True and also creates an explicit Index in __table_args__ (Index("ix_contributor_webhooks_user_id", "user_id")), creating a duplicate index; remove the redundancy by dropping index=True from the Column(...) for user_id or by removing the explicit Index(...) in __table_args__ so only one index remains (update the user_id Column declaration or the __table_args__ tuple accordingly), keeping the Index name ix_contributor_webhooks_user_id if you prefer the explicit index.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/api/onchain_webhooks.py`:
- Around line 61-90: The emit_event POST handler (emit_event) currently allows
unauthenticated callers to enqueue arbitrary events (via emit_onchain_event and
OnChainEmitRequest) and set arbitrary user_id, enabling spoofed webhook traffic
and injection of non-on-chain events from WEBHOOK_EVENTS; fix by adding source
authentication/authorization (e.g., a FastAPI Depends that verifies an
indexer/admin API key or HMAC/signature on the payload), validate the incoming
event against a hardened allowlist (reject events not provably from on-chain
sources), and forbid or validate client-supplied user_id (derive it from
verified payload or reject if missing/invalid); also log and return 401/403 for
unauthorized attempts and ensure emit_onchain_event only processes
authenticated/verified inputs.
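As a sketch of the source-authentication piece, the indexer could HMAC-sign the raw request body and the emit handler verify it in constant time before enqueueing anything. Everything below (the secret source, function names, and header convention) is hypothetical and stdlib-only, not the project's actual implementation:

```python
import hashlib
import hmac

# Hypothetical shared secret; in the real app this would come from
# settings or a KMS, never a hard-coded literal.
INDEXER_SECRET = b"indexer-shared-secret"

def sign_body(body: bytes, secret: bytes = INDEXER_SECRET) -> str:
    """Hex HMAC-SHA256 over the raw request body, suitable for a
    signature header sent by the indexer."""
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_indexer_signature(body: bytes, provided: str,
                             secret: bytes = INDEXER_SECRET) -> bool:
    """Constant-time comparison so the check does not leak timing info."""
    return hmac.compare_digest(sign_body(body, secret), provided)
```

A FastAPI dependency wrapping `verify_indexer_signature` would read the raw body and signature header, then raise 401/403 before `emit_onchain_event` ever runs.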
- Around line 108-130: The dashboard currently uses
ContributorWebhookService.list_for_user (which returns only active webhooks)
inside webhook_dashboard, making total_webhooks == active_webhooks and
miscomputing deliveries_last_24h and failure_rate; fix by having
webhook_dashboard call a service that returns all webhooks (or add
ContributorWebhookService.list_all_for_user) and separately query delivery
attempts (e.g., a WebhookDeliveryAttemptService or the WebhookDeliveryAttempt
model) to compute deliveries in the last 24 hours, total_failures, and
total_attempts; then compute deliveries_last_24h as count(attempts where
attempted_at >= now-24h), total_failures as count(attempts where status ==
"failure"), and failure_rate as total_failures / max(total_attempts, 1) * 100
before returning WebhookDashboardResponse (update or add service methods:
ContributorWebhookService.list_all_for_user and a delivery-attempt query method
to locate relevant code).
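The corrected metric computation can be sketched against per-attempt records; the attempt dict shape below is an assumption standing in for the WebhookDeliveryAttempt model:

```python
from datetime import datetime, timedelta, timezone

def dashboard_metrics(attempts: list, now: datetime) -> dict:
    """Compute delivery stats from attempt-level rows of the assumed shape
    {"attempted_at": datetime, "status": "success" | "failure"}."""
    cutoff = now - timedelta(hours=24)
    total_attempts = len(attempts)
    total_failures = sum(1 for a in attempts if a["status"] == "failure")
    return {
        # Attempts in the last 24h, not "webhooks with any last_delivery_at"
        "deliveries_last_24h": sum(1 for a in attempts if a["attempted_at"] >= cutoff),
        # Denominator is total attempts, guarded against division by zero
        "failure_rate": round(total_failures / max(total_attempts, 1) * 100, 1),
    }
```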
In `@backend/app/main.py`:
- Around line 396-398: The new on-chain webhook routes register event intake but
the background delivery loop is never started; modify the FastAPI app startup
(lifespan()) to call start_batch_worker() so the _batch_worker/_flush_batch
delivery path processes _pending_events appended by emit_onchain_event();
specifically ensure lifespan() (or app startup event) invokes
start_batch_worker() and that start_batch_worker() schedules/starts the
_batch_worker coroutine/task so events received via /api/webhooks/onchain/emit
are flushed after the expected interval.
In `@backend/app/models/contributor_webhook.py`:
- Around line 85-98: The must_be_https field validator on the "url" field in
contributor_webhook.py currently only checks the scheme; extend it to block SSRF
by rejecting URLs whose host is an IP literal in
private/loopback/link-local/reserved ranges or that resolve (via DNS lookup) to
any such addresses, and also disallow localhost and known internal hostnames;
use Python's ipaddress to classify IPs and perform a DNS resolve of v.host
(checking all A/AAAA results) before accepting, and keep the existing HTTPS
requirement and error messages in the same validator (must_be_https) so the
validation is atomic and centralized.
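A stdlib-only sketch of that host check: it classifies IP literals with `ipaddress` and leaves a marked gap where DNS resolution of hostnames (checking every A/AAAA answer) would go. The denylist contents are illustrative:

```python
import ipaddress

BLOCKED_HOSTNAMES = {"localhost"}  # extend with known internal hostnames

def is_host_allowed(host: str) -> bool:
    """Reject loopback/private/link-local/reserved/multicast IP literals
    and denylisted hostnames before accepting a webhook URL."""
    if host.lower() in BLOCKED_HOSTNAMES:
        return False
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        # Not an IP literal: a production validator would resolve DNS here
        # and re-run the same classification on every A/AAAA result.
        return True
    return not (ip.is_private or ip.is_loopback or ip.is_link_local
                or ip.is_reserved or ip.is_multicast)
```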
- Around line 53-55: The secret is stored plaintext on the ContributorWebhook
model (secret = Column(...)) and persisted unchanged by
ContributorWebhookService.register(); instead encrypt the secret at rest: change
the model field to something like encrypted_secret (or keep secret but store
ciphertext), update ContributorWebhookService.register() to encrypt req.secret
with the app's envelope key or KMS before saving, and add a secure decrypt path
(e.g., a helper in ContributorWebhookService like decrypt_secret_for_signing())
to decrypt only when signing outgoing payloads; ensure encryption/decryption
functions and key usage are centralized and tested so raw secrets are never
stored in DB or logs.
- Around line 145-149: The five on-chain event strings were added as plain class
attributes instead of Enum members; update the WebhookEvent enum so
ESCROW_LOCKED, ESCROW_RELEASED, REPUTATION_UPDATED, STAKE_DEPOSITED, and
STAKE_WITHDRAWN are declared as actual Enum members inside the WebhookEvent
class (not assigned after the class), remove the post-class assignments
(WebhookEvent.ESCROW_LOCKED = ...), and ensure WEBHOOK_EVENTS (the tuple used
for validation) still references these enum members (or their values) so
WebhookEvent.__members__ includes the on-chain events.
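The fix can be sketched as a regular `Enum` declaration; only the five on-chain members come from the PR, and the contributor events shown are placeholders:

```python
from enum import Enum

class WebhookEvent(str, Enum):
    # Pre-existing contributor events (illustrative subset)
    BOUNTY_CLAIMED = "bounty.claimed"
    REVIEW_SUBMITTED = "review.submitted"
    # On-chain events declared as real members inside the class body,
    # not assigned after class creation
    ESCROW_LOCKED = "escrow.locked"
    ESCROW_RELEASED = "escrow.released"
    REPUTATION_UPDATED = "reputation.updated"
    STAKE_DEPOSITED = "stake.deposited"
    STAKE_WITHDRAWN = "stake.withdrawn"

# The validation tuple now derives from the members themselves.
WEBHOOK_EVENTS = tuple(e.value for e in WebhookEvent)
```

Declared this way, `WebhookEvent.__members__` includes the on-chain events and `WebhookEvent("escrow.locked")` round-trips from the wire value.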
In `@backend/app/services/contributor_webhook_service.py`:
- Around line 56-64: The payload builder _build_payload is wrapping on-chain
fields (tx_signature, slot, block_time) inside data which violates the
documented OnChainWebhookPayload; update _build_payload (or add a new builder
used by emit_onchain_event) to emit those fields at the top level instead of
inside data: detect on-chain events in emit_onchain_event and
construct/serialize an OnChainWebhookPayload (or extend WebhookPayload to
include optional tx_signature, slot, block_time) so the final JSON has top-level
tx_signature, slot and block_time along with event, bounty_id, timestamp and
data; ensure you reference the WebhookPayload and OnChainWebhookPayload models
and change the serialization call in _build_payload/emit_onchain_event
accordingly.
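The intended wire shape can be illustrated with a plain dataclass standing in for the Pydantic `OnChainWebhookPayload`; every value below is made up:

```python
import json
from dataclasses import asdict, dataclass
from typing import Any, Optional

@dataclass
class OnChainWebhookPayload:
    """Provenance fields sit at the top level, alongside data, not inside it."""
    event: str
    bounty_id: str
    timestamp: str
    tx_signature: Optional[str]
    slot: Optional[int]
    block_time: Optional[int]
    data: dict

payload = OnChainWebhookPayload(
    event="escrow.released",
    bounty_id="bounty-123",
    timestamp="2024-01-01T00:00:00Z",
    tx_signature="5VERv8NMvzbJM-example-signature",  # placeholder, not a real tx
    slot=250_000_000,
    block_time=1_700_000_000,
    data={"winner_wallet": "placeholder-wallet"},
)
body = json.dumps(asdict(payload))
```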
- Around line 173-175: The concurrent gather of _deliver_with_retry tasks uses
the shared AsyncSession self._db causing unsafe concurrent session use; modify
_deliver_with_retry (or the helper that calls _record_delivery) to create and
use its own AsyncSession instance from self._sessionmaker() (or otherwise obtain
a per-task session/context) and pass that session into _record_delivery instead
of using self._db, ensuring each delivery task manages its own session lifecycle
(open/commit/close) so asyncio.gather can run safely.
In `@backend/app/services/onchain_webhook_service.py`:
- Around line 174-195: The handlers (e.g., handle_escrow_released) are passing
base58 wallet addresses or non-UUID contributor identifiers into the
emit_onchain_event user_id parameter, which later gets converted with
UUID(user_id) in ContributorWebhookService.dispatch_event and will raise; change
the calls to emit_onchain_event (e.g., in handle_escrow_released) to stop using
winner_wallet/wallet/contributor_id as user_id — instead, omit user_id or pass
None and include the wallet or contributor identifier inside the event payload
under a distinct key (e.g., "wallet" or "contributor_id"); apply the same fix to
the other affected handlers (stake.deposited, stake.withdrawn,
reputation.updated) so that only real UUIDs are passed to user_id and non-UUID
identifiers live in the payload.
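One defensive sketch: validate `user_id` as a UUID up front and route wallet-style identifiers into the payload instead. `safe_user_id`, the event shape, and the wallet string are all hypothetical:

```python
import uuid
from typing import Optional

def safe_user_id(candidate: Optional[str]) -> Optional[str]:
    """Return candidate only if it parses as a UUID; base58 wallets and
    other non-UUID identifiers come back as None."""
    if candidate is None:
        return None
    try:
        uuid.UUID(candidate)
    except ValueError:
        return None
    return candidate

# The wallet travels inside the payload, never in user_id.
wallet = "9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin"  # made-up address
event = {
    "event": "escrow.released",
    "user_id": safe_user_id(wallet),  # None: a wallet is not a UUID
    "data": {"winner_wallet": wallet},
}
```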
- Around line 53-61: The loop in onchain_webhook_service.py currently calls
ContributorWebhookService.dispatch_event once per event (for ev in events ...
await svc.dispatch_event(...)), so the debounce window only delays individual
calls rather than delivering a single batched webhook; change this to aggregate
the collected events and invoke a batch delivery method instead—either add/use a
ContributorWebhookService.dispatch_events (or dispatch_batch) that accepts the
full events list (or grouped by bounty_id/webhook target) and sends one POST per
webhook with all relevant events, or modify dispatch_event to accept a list and
update its signature and callers accordingly; ensure events are grouped
appropriately (e.g., by bounty_id or webhook URL) before calling the new batch
method so outbound calls are reduced to a single POST per group.
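The grouping step before delivery can be sketched as below; `dispatch_batch` in the usage comment is the proposed batch method, not an existing API:

```python
from collections import defaultdict

def group_events(events: list) -> dict:
    """Group queued events by bounty_id so each webhook target receives one
    batched POST per group instead of one POST per event."""
    groups = defaultdict(list)
    for ev in events:
        groups[ev["bounty_id"]].append(ev)
    return dict(groups)

# Sketch of the batched delivery loop (dispatch_batch is hypothetical):
# for bounty_id, batch in group_events(pending).items():
#     await svc.dispatch_batch(bounty_id=bounty_id, events=batch)
```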
- Around line 43-45: The batch flush currently clears `_pending_events` inside
the `async with _batch_lock:` block before persistence or delivery, risking
permanent loss if `db_factory()` or `dispatch_event()` fail; fix by not dropping
the global list prematurely: under `_batch_lock` copy events =
list(_pending_events) but do not clear `_pending_events` until after successful
durable handoff/delivery, or persist the copied batch first via
`db_factory()`/your persistence method and only then remove those events from
`_pending_events`; if delivery fails, re-append or leave `_pending_events`
intact and use try/except around `dispatch_event()` to restore/requeue the
copied events before returning. Ensure error handling around `db_factory()` and
`dispatch_event()` (with references to `_batch_lock`, `_pending_events`,
`db_factory()`, and `dispatch_event()`) so failures do not permanently drop
events.
In `@backend/tests/test_onchain_webhooks.py`:
- Around line 39-41: The tests mount a standalone FastAPI app (app with
onchain_router and dashboard_router) so they skip the real application
entrypoint, `/api` prefix, and lifespan hooks; update the test setup to import
and use the app instance from backend.app.main (the real FastAPI app) instead of
creating a new FastAPI() and including onchain_router/dashboard_router, so the
tests exercise the `/api` prefix, application lifespan (startup/shutdown) hooks,
batch-worker startup, real flush behavior, outbound payload shape, and internal
emit auth; also adjust endpoint paths in the tests to include the `/api` prefix
and remove assumptions that bypass startup tasks used in the queue-only and
static catalog checks referenced around the test blocks that asserted queue
contents (Lines ~67-106), batch behavior (~164-200), and catalog assertions
(~315-340).
---
Duplicate comments:
In `@backend/app/models/contributor_webhook.py`:
- Around line 51-75: The model defines the user_id column with index=True and
also creates an explicit Index in __table_args__
(Index("ix_contributor_webhooks_user_id", "user_id")), creating a duplicate
index; remove the redundancy by dropping index=True from the Column(...) for
user_id or by removing the explicit Index(...) in __table_args__ so only one
index remains (update the user_id Column declaration or the __table_args__ tuple
accordingly), keeping the Index name ix_contributor_webhooks_user_id if you
prefer the explicit index.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 5f0c4e47-d4fb-4898-a113-b8de552490dc
📒 Files selected for processing (6)
backend/app/api/onchain_webhooks.py
backend/app/main.py
backend/app/models/contributor_webhook.py
backend/app/services/contributor_webhook_service.py
backend/app/services/onchain_webhook_service.py
backend/tests/test_onchain_webhooks.py
backend/app/api/onchain_webhooks.py (Outdated)
```python
async def webhook_dashboard(
    user_id: str = Depends(get_current_user_id),
    db: AsyncSession = Depends(get_db),
) -> WebhookDashboardResponse:
    """Return webhook delivery dashboard for the authenticated user."""
    svc = ContributorWebhookService(db)
    webhooks = await svc.list_for_user(user_id)

    active = [w for w in webhooks if w.active]
    total_failures = sum(w.failure_count for w in webhooks)
    total_deliveries = sum(1 for w in webhooks if w.last_delivery_at is not None)
    failure_rate = (
        round(total_failures / max(total_deliveries + total_failures, 1) * 100, 1)
        if webhooks
        else 0.0
    )

    return WebhookDashboardResponse(
        total_webhooks=len(webhooks),
        active_webhooks=len(active),
        deliveries_last_24h=total_deliveries,
        failure_rate=failure_rate,
        webhooks=webhooks,
    )
```
Dashboard metrics are derived from the wrong data model.
Lines 113-130 compute total_webhooks, active_webhooks, deliveries_last_24h, and failure_rate from list_for_user(), but backend/app/services/contributor_webhook_service.py Lines 127-137 only return active webhooks. That makes total_webhooks and active_webhooks effectively the same, and deliveries_last_24h is really “count of active webhooks with any last_delivery_at”, not deliveries in the last 24 hours. The failure-rate denominator is also not total attempts. This endpoint cannot provide the advertised delivery stats or retry history without querying per-attempt delivery logs.
As per coding guidelines, backend/**: Python FastAPI backend. Analyze thoroughly: API contract consistency with spec.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/api/onchain_webhooks.py` around lines 108 - 130, The dashboard
currently uses ContributorWebhookService.list_for_user (which returns only
active webhooks) inside webhook_dashboard, making total_webhooks ==
active_webhooks and miscomputing deliveries_last_24h and failure_rate; fix by
having webhook_dashboard call a service that returns all webhooks (or add
ContributorWebhookService.list_all_for_user) and separately query delivery
attempts (e.g., a WebhookDeliveryAttemptService or the WebhookDeliveryAttempt
model) to compute deliveries in the last 24 hours, total_failures, and
total_attempts; then compute deliveries_last_24h as count(attempts where
attempted_at >= now-24h), total_failures as count(attempts where status ==
"failure"), and failure_rate as total_failures / max(total_attempts, 1) * 100
before returning WebhookDashboardResponse (update or add service methods:
ContributorWebhookService.list_all_for_user and a delivery-attempt query method
to locate relevant code).
```python
# HMAC-SHA256 secret supplied by the contributor at registration time.
# Stored as plaintext (contributor's choice); used to sign outgoing payloads.
secret = Column(String(256), nullable=False)
```
Webhook signing secrets are stored in plaintext.
Lines 53-55 explicitly document raw secret storage, and ContributorWebhookService.register() persists req.secret unchanged in backend/app/services/contributor_webhook_service.py Lines 96-100. A database disclosure would expose all webhook-signing keys and let an attacker forge X-SolFoundry-Signature values to downstream consumers.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/models/contributor_webhook.py` around lines 53 - 55, The secret
is stored plaintext on the ContributorWebhook model (secret = Column(...)) and
persisted unchanged by ContributorWebhookService.register(); instead encrypt the
secret at rest: change the model field to something like encrypted_secret (or
keep secret but store ciphertext), update ContributorWebhookService.register()
to encrypt req.secret with the app's envelope key or KMS before saving, and add
a secure decrypt path (e.g., a helper in ContributorWebhookService like
decrypt_secret_for_signing()) to decrypt only when signing outgoing payloads;
ensure encryption/decryption functions and key usage are centralized and tested
so raw secrets are never stored in DB or logs.
```python
async def _record_delivery(self, webhook_id: UUID, *, success: bool) -> None:
    """Update last_delivery stats; increment failure_count on failure."""
    values: dict[str, Any] = {
        "last_delivery_at": datetime.now(timezone.utc),
        "last_delivery_status": "success" if success else "failed",
    }
    if not success:
        # Increment via SQL expression to avoid race conditions
        await self._db.execute(
            update(ContributorWebhookDB)
            .where(ContributorWebhookDB.id == webhook_id)
            .values(
                last_delivery_at=values["last_delivery_at"],
                last_delivery_status=values["last_delivery_status"],
                failure_count=ContributorWebhookDB.failure_count + 1,
            )
        )
    else:
        await self._db.execute(
            update(ContributorWebhookDB)
            .where(ContributorWebhookDB.id == webhook_id)
            .values(**values)
        )
    await self._db.commit()
```
Per-attempt delivery history is never persisted.
Lines 247-271 only update summary columns on contributor_webhooks. There is no insert of attempt-level status, HTTP code, retry number, or error details anywhere in this service, so the promised audit trail / retry history and accurate 24-hour delivery metrics cannot be generated from this implementation.
As per coding guidelines, backend/**: Python FastAPI backend. Analyze thoroughly: API contract consistency with spec.
```python
async with _batch_lock:
    events = list(_pending_events)
    _pending_events.clear()
```
Batch flush drops events before any durable handoff.
Lines 43-45 clear _pending_events before DB acquisition and before any delivery succeeds. If db_factory() fails, dispatch_event() fails, or the process dies mid-flush, the batch is gone permanently because there is no persistence or requeue path here. A transient backend failure becomes a permanent missed webhook.
As per coding guidelines, backend/**: Python FastAPI backend. Analyze thoroughly: Error handling and edge case coverage.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/services/onchain_webhook_service.py` around lines 43 - 45, The
batch flush currently clears `_pending_events` inside the `async with
_batch_lock:` block before persistence or delivery, risking permanent loss if
`db_factory()` or `dispatch_event()` fail; fix by not dropping the global list
prematurely: under `_batch_lock` copy events = list(_pending_events) but do not
clear `_pending_events` until after successful durable handoff/delivery, or
persist the copied batch first via `db_factory()`/your persistence method and
only then remove those events from `_pending_events`; if delivery fails,
re-append or leave `_pending_events` intact and use try/except around
`dispatch_event()` to restore/requeue the copied events before returning. Ensure
error handling around `db_factory()` and `dispatch_event()` (with references to
`_batch_lock`, `_pending_events`, `db_factory()`, and `dispatch_event()`) so
failures do not permanently drop events.
```python
svc = ContributorWebhookService(db)
for ev in events:
    try:
        await svc.dispatch_event(
            event=ev["event"],
            bounty_id=ev["bounty_id"],
            data=ev["data"],
            user_id=ev.get("user_id"),
        )
```
This is a debounce window, not batch delivery.
Lines 53-61 loop over the collected events and call dispatch_event() once per event. A burst of 100 events still produces 100 outbound webhook POSTs after a 5-second delay, so this does not implement the promised batch delivery behavior or reduce webhook call volume in the way the PR/API description claims.
As per coding guidelines, backend/**: Python FastAPI backend. Analyze thoroughly: API contract consistency with spec.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/services/onchain_webhook_service.py` around lines 53 - 61, The
loop in onchain_webhook_service.py currently calls
ContributorWebhookService.dispatch_event once per event (for ev in events ...
await svc.dispatch_event(...)), so the debounce window only delays individual
calls rather than delivering a single batched webhook; change this to aggregate
the collected events and invoke a batch delivery method instead—either add/use a
ContributorWebhookService.dispatch_events (or dispatch_batch) that accepts the
full events list (or grouped by bounty_id/webhook target) and sends one POST per
webhook with all relevant events, or modify dispatch_event to accept a list and
update its signature and callers accordingly; ensure events are grouped
appropriately (e.g., by bounty_id or webhook URL) before calling the new batch
method so outbound calls are reduced to a single POST per group.
```python
app = FastAPI()
app.include_router(onchain_router)
app.include_router(dashboard_router)
```
The suite bypasses the production integration path.
Lines 39-41 mount the routers on a standalone FastAPI app instead of backend/app/main.py, so the tests never exercise the /api prefix or the application lifespan hooks. Combined with the queue-only assertions in Lines 67-106 and the static catalog checks in Lines 315-340, this means the current suite does not cover batch-worker startup, real flush behavior, the actual outbound payload shape, or auth on the internal emit endpoint. Those are exactly the areas where the implementation is currently broken.
Also applies to: 67-106, 164-200, 315-340
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/tests/test_onchain_webhooks.py` around lines 39 - 41, The tests mount
a standalone FastAPI app (app with onchain_router and dashboard_router) so they
skip the real application entrypoint, `/api` prefix, and lifespan hooks; update
the test setup to import and use the app instance from backend.app.main (the
real FastAPI app) instead of creating a new FastAPI() and including
onchain_router/dashboard_router, so the tests exercise the `/api` prefix,
application lifespan (startup/shutdown) hooks, batch-worker startup, real flush
behavior, outbound payload shape, and internal emit auth; also adjust endpoint
paths in the tests to include the `/api` prefix and remove assumptions that
bypass startup tasks used in the queue-only and static catalog checks referenced
around the test blocks that asserted queue contents (Lines ~67-106), batch
behavior (~164-200), and catalog assertions (~315-340).
Actionable comments posted: 4
♻️ Duplicate comments (8)
backend/app/models/contributor_webhook.py (1)
71-73: ⚠️ Potential issue | 🟠 Major
Webhook signing secrets are still stored plaintext at rest.
`ContributorWebhookDB.secret` is explicitly modeled as the raw HMAC key, and backend/app/services/contributor_webhook_service.py persists `req.secret` unchanged during registration. A database disclosure would immediately let an attacker forge `X-SolFoundry-Signature` for every registered subscriber endpoint.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/models/contributor_webhook.py` around lines 71 - 73, The ContributorWebhookDB model currently stores the raw HMAC key in the secret Column and contributor_webhook_service.py persists req.secret directly; change this to never store plaintext secrets by replacing direct storage of ContributorWebhookDB.secret with a secure derived/opaque value (e.g., store an encrypted blob or a keyed hash/derived key using a KMS-wrapped encryption or HMAC key-derivation and only store the wrapped/hashed output and any nonce) and update registration code that assigns req.secret to instead persist the derived/wrapped value and keep the raw secret transient (return it only once to the caller). Locate the model field named secret on class ContributorWebhookDB and the registration code that uses req.secret in contributor_webhook_service.py, implement encryption/derivation using your platform KMS or a strong KDF, and ensure verification/signing uses the unwrapped/derived key at runtime without ever writing plaintext to the DB.
backend/app/api/onchain_webhooks.py (1)
147-170: ⚠️ Potential issue | 🟠 Major
Dashboard metrics are still derived from per-webhook summary fields, not delivery attempts.
`deliveries_last_24h` is currently just "number of webhooks with any `last_delivery_at`", and `failure_rate` mixes cumulative `failure_count` with that per-webhook count. Since backend/app/services/contributor_webhook_service.py is not reading `webhook_delivery_log` here, this endpoint still cannot return the retry history or failure-rate semantics described by the PR.
As per coding guidelines, backend/**: Python FastAPI backend. Analyze thoroughly: API contract consistency with spec.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/api/onchain_webhooks.py` around lines 147 - 170, The dashboard currently uses per-webhook summary fields (last_delivery_at and failure_count) instead of real delivery attempts; update webhook_dashboard to compute deliveries_last_24h and failure_rate from webhook_delivery_log records. Call or add a service method on ContributorWebhookService (e.g., get_delivery_attempts_for_user or get_delivery_stats_for_user) that queries webhook_delivery_log for attempts in the last 24 hours and for all-time attempts/failures, then compute total_deliveries (count of successful attempts in 24h), total_attempts (all attempts), and total_failures (failed attempts) from that data and use those numbers to build the WebhookDashboardResponse instead of using last_delivery_at and failure_count on each Webhook object. Ensure the service method returns per-webhook and aggregated attempt counts so webhook_dashboard can still return the webhooks list plus accurate metrics.
backend/app/services/onchain_webhook_service.py (3)
58-70: ⚠️ Potential issue | 🟠 Major
This is still delayed per-event delivery, not batched delivery.
After the 5-second window, the worker iterates `events` and calls `ContributorWebhookService.dispatch_event()` once per event. That preserves one outbound POST per event/webhook pair, so the implementation does not meet the PR's advertised batch-delivery behavior or reduce webhook call volume in the way the API documentation claims.
As per coding guidelines, backend/**: Python FastAPI backend. Analyze thoroughly: API contract consistency with spec.
131-135: ⚠️ Potential issue | 🟠 Major
`emit_onchain_event()` still accepts off-chain event names.
Validation is against `WEBHOOK_EVENTS`, not the five `ONCHAIN_EVENTS`, so the internal indexer path can enqueue contributor events like `bounty.claimed` and `review.*` even though backend/app/api/onchain_webhooks.py documents this path as on-chain-only. That expands what the indexer credential is allowed to trigger beyond the intended boundary.
As per coding guidelines, backend/**: Python FastAPI backend. Analyze thoroughly: Authentication/authorization gaps.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/services/onchain_webhook_service.py` around lines 131 - 135, The validation in emit_onchain_event currently checks WEBHOOK_EVENTS allowing off-chain events; change it to validate against the on-chain whitelist ONCHAIN_EVENTS (use the ONCHAIN_EVENTS symbol instead of WEBHOOK_EVENTS) and update the raised ValueError message to list the allowed ONCHAIN_EVENTS so only the five on-chain events can be enqueued by the indexer credential; locate emit_onchain_event in onchain_webhook_service.py and replace the membership check and error text accordingly (add/update tests if present to assert that off-chain names like "bounty.claimed" are rejected).
44-70: ⚠️ Potential issue | 🔴 Critical
Queued events are still dropped before any durable handoff, including on normal shutdown.
`_flush_batch()` clears `_pending_events` before DB acquisition or successful dispatch, and `stop_batch_worker()` only cancels the worker without a final drain. A DB/session failure, process exit, or graceful restart inside the 5-second window will permanently lose already accepted on-chain events.
As per coding guidelines, backend/**: Python FastAPI backend. Analyze thoroughly: Error handling and edge case coverage.
Also applies to: 94-99
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/services/onchain_webhook_service.py` around lines 44 - 70, The current _flush_batch clears _pending_events before acquiring a DB or successfully dispatching, risking permanent loss; change the flow so that you snapshot but do NOT remove items from _pending_events until each event is durably handed off: acquire the DB via db_factory and/or ensure dispatch succeeds (or is retried/persisted) before removing events from _pending_events, and update stop_batch_worker to perform a final safe drain that waits for dispatch completion (using _batch_lock) or preserves remaining events on failure; use ContributorWebhookService.dispatch_event for delivery and only clear or pop events after successful dispatch (or persist failures) so events aren’t lost on DB/session errors or shutdown.backend/tests/test_onchain_webhooks.py (1)
44-46: ⚠️ Potential issue | 🟠 Major

The suite still bypasses the production app path and misses the real webhook pipeline.

The HTTP tests keep mounting standalone `FastAPI()` instances with routers attached directly, so they never exercise `backend/app/main.py`'s `/api` prefixing or lifespan hooks that start and stop the batch worker. The remaining assertions are mostly against `_pending_events`, the static catalog, or queue clearing, which means the suite still does not cover the serialized outbound webhook body or the delivery-log path that the dashboard/retry features depend on.

As per coding guidelines, CONTRIBUTING.md: keep changes cleanly covered by tests and consistent with existing backend style/linting expectations so the webhook pipeline (emit/batch/dashboard/retry) remains integration-safe.

Also applies to: 73-112, 190-243, 290-358, 370-425
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/tests/test_onchain_webhooks.py` around lines 44 - 46, Tests mount a fresh FastAPI() and attach onchain_router/dashboard_router, bypassing the real production app and its lifespan hooks (batch worker) and /api prefix; replace the standalone app with the real application instance used in production (import the app or factory from backend.app.main — e.g., the exported app or create_app/get_app function) so the test client triggers lifespan hooks that start/stop the batch worker and preserves routing prefix, use TestClient as a context manager to run lifespan events, update request paths to include the /api prefix, and adjust assertions to exercise the serialized outbound webhook body, delivery-log/retry paths and not rely solely on _pending_events or static queue clearing.

backend/app/services/contributor_webhook_service.py (2)
56-64: ⚠️ Potential issue | 🟠 Major

Outbound on-chain payloads still do not match the published schema.

`_build_payload()` always emits `WebhookPayload(event, bounty_id, timestamp, data)`. Because `backend/app/services/onchain_webhook_service.py` puts `tx_signature`, `slot`, and `block_time` inside `data`, subscribers never receive the top-level provenance fields documented by `OnChainWebhookPayload` and `/api/webhooks/events`.

As per coding guidelines, backend/**: Python FastAPI backend. Analyze thoroughly: API contract consistency with spec.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/services/contributor_webhook_service.py` around lines 56 - 64, _build_payload currently always serialises only event, bounty_id, timestamp and data, which causes on-chain fields (tx_signature, slot, block_time) to be nested inside data instead of emitted as top-level fields per the OnChainWebhookPayload spec; update _build_payload to accept (or extract) tx_signature, slot, and block_time and set them as top-level attributes on the WebhookPayload instance (removing them from the data dict), then update the caller in onchain_webhook_service.py to pass those values through to _build_payload so outgoing webhook JSON matches the published OnChainWebhookPayload schema.
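Built to the published schema, the payload construction would look roughly like this. The dict shape follows the On-Chain Payload Schema in the PR description; the function name and signature are illustrative:

```python
from datetime import datetime, timezone

def build_onchain_payload(event, bounty_id, data, tx_signature, slot, block_time):
    """Emit provenance fields at the top level, matching the published schema."""
    return {
        "event": event,
        "bounty_id": bounty_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tx_signature": tx_signature,   # top level, not nested inside data
        "slot": slot,
        "block_time": block_time,
        "data": data,
    }
```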
264-293: ⚠️ Potential issue | 🟠 Major

The new `webhook_delivery_log` table is still never written.

`_record_delivery()` only updates `contributor_webhooks.last_delivery_*` and `failure_count`; it never inserts attempt rows with event, HTTP status, attempt number, or error details. That leaves the migration-added audit table unused and prevents accurate retry history and 24-hour delivery metrics.

As per coding guidelines, backend/**: Python FastAPI backend. Analyze thoroughly: API contract consistency with spec.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/alembic/versions/005_webhook_delivery_log.py`:
- Around line 34-61: The contributor_webhooks table creation currently passes
postgresql_if_not_exists=True (invalid for Alembic) and the webhook_delivery_log
table lacks any idempotency flag; replace/remove postgresql_if_not_exists and
instead pass if_not_exists=True into the op.create_table() calls for both
contributor_webhooks and webhook_delivery_log so Alembic emits CREATE TABLE IF
NOT EXISTS (i.e., add if_not_exists=True to the op.create_table(...) invocation
that constructs contributor_webhooks and to the op.create_table(...) that
constructs webhook_delivery_log, and remove the incorrect
postgresql_if_not_exists kwarg).
In `@backend/app/api/onchain_webhooks.py`:
- Around line 203-210: The test payload currently allows callers to override
sentinel fields because req.payload_override is merged into test_data last; to
fix, ensure the test sentinel(s) cannot be replaced by forcing those fields
after applying overrides — locate test_data and req.payload_override in
onchain_webhooks.py and merge overrides first (or apply them) but then
explicitly set test_data["test"] = True and re-assign any provenance markers
(e.g., tx_signature set to the TestTxSig value and slot/block_time as intended)
so that /webhooks/test payloads always carry the documented test sentinel
values.
- Around line 106-110: The OpenAPI responses on onchain_webhooks.py incorrectly
declare ErrorResponse while your global exception handlers in
backend/app/main.py return a different shape (message/code/request_id) and
FastAPI defaults to {"detail":...} for other routers; fix by making the contract
consistent: either update the routes in onchain_webhooks.py (where responses
currently reference ErrorResponse) to declare the actual error schema returned
by your handlers (create/use a new ErrorEnvelope model with fields message,
code, request_id and use that for 400/403, and keep a fallback response schema
matching FastAPI's {"detail": ...} for other cases), or alter the exception
handlers in backend/app/main.py to return the existing ErrorResponse model;
locate and modify the ErrorResponse usage in onchain_webhooks.py and the
exception handlers in backend/app/main.py to ensure both use the same error
model.
---
Duplicate comments:
In `@backend/app/api/onchain_webhooks.py`:
- Around line 147-170: The dashboard currently uses per-webhook summary fields
(last_delivery_at and failure_count) instead of real delivery attempts; update
webhook_dashboard to compute deliveries_last_24h and failure_rate from
webhook_delivery_log records. Call or add a service method on
ContributorWebhookService (e.g., get_delivery_attempts_for_user or
get_delivery_stats_for_user) that queries webhook_delivery_log for attempts in
the last 24 hours and for all-time attempts/failures, then compute
total_deliveries (count of successful attempts in 24h), total_attempts (all
attempts), and total_failures (failed attempts) from that data and use those
numbers to build the WebhookDashboardResponse instead of using last_delivery_at
and failure_count on each Webhook object. Ensure the service method returns
per-webhook and aggregated attempt counts so webhook_dashboard can still return
the webhooks list plus accurate metrics.
In `@backend/app/models/contributor_webhook.py`:
- Around line 71-73: The ContributorWebhookDB model currently stores the raw
HMAC key in the secret Column and contributor_webhook_service.py persists
req.secret directly; change this to never store plaintext secrets by replacing
direct storage of ContributorWebhookDB.secret with a secure derived/opaque value
(e.g., store an encrypted blob or a keyed hash/derived key using a KMS-wrapped
encryption or HMAC key-derivation and only store the wrapped/hashed output and
any nonce) and update registration code that assigns req.secret to instead
persist the derived/wrapped value and keep the raw secret transient (return it
only once to the caller). Locate the model field named secret on class
ContributorWebhookDB and the registration code that uses req.secret in
contributor_webhook_service.py, implement encryption/derivation using your
platform KMS or a strong KDF, and ensure verification/signing uses the
unwrapped/derived key at runtime without ever writing plaintext to the DB.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: bb639432-792a-46fb-9c84-a83c6155bf5a
📒 Files selected for processing (7)
- backend/alembic/versions/005_webhook_delivery_log.py
- backend/app/api/onchain_webhooks.py
- backend/app/main.py
- backend/app/models/contributor_webhook.py
- backend/app/services/contributor_webhook_service.py
- backend/app/services/onchain_webhook_service.py
- backend/tests/test_onchain_webhooks.py
```python
op.create_table(
    "contributor_webhooks",
    sa.Column(
        "id",
        postgresql.UUID(as_uuid=True),
        primary_key=True,
        server_default=sa.text("gen_random_uuid()"),
    ),
    sa.Column("user_id", postgresql.UUID(as_uuid=True), nullable=False),
    sa.Column("url", sa.Text(), nullable=False),
    sa.Column("secret", sa.String(256), nullable=False),
    sa.Column("active", sa.Boolean(), nullable=False, server_default=sa.text("true")),
    sa.Column(
        "created_at",
        sa.DateTime(timezone=True),
        nullable=False,
        server_default=sa.text("now()"),
    ),
    sa.Column(
        "updated_at",
        sa.DateTime(timezone=True),
        nullable=False,
        server_default=sa.text("now()"),
    ),
    sa.Column("last_delivery_at", sa.DateTime(timezone=True), nullable=True),
    sa.Column("last_delivery_status", sa.String(20), nullable=True),
    sa.Column("failure_count", sa.Integer(), nullable=False, server_default="0"),
    postgresql_if_not_exists=True,
```
🧩 Analysis chain
🌐 Web query:
In Alembic `Operations.create_table()`, is `if_not_exists=True` the documented flag, and are extra kwargs forwarded to the underlying SQLAlchemy `Table` rather than treated as Alembic create-table options?
💡 Result:
Yes.
- `Operations.create_table()` has a documented keyword parameter `if_not_exists` (added in Alembic 1.13.3) which, when true, emits `CREATE TABLE IF NOT EXISTS` on dialects that support it. [1]
- Aside from Alembic’s explicit parameters (like `if_not_exists` and `schema`), the remaining `**kw` are documented as being passed through to the underlying SQLAlchemy `Table` object created for the operation (i.e., they are not treated as additional Alembic-specific create-table options). [1]
Source
[1] Alembic “Operation Reference” docs for Operations.create_table() (signature, if_not_exists docs, and **kw pass-through). (alembic.sqlalchemy.org)
🏁 Script executed:
```shell
# Check if the migration file exists and inspect its content
find . -name "005_webhook_delivery_log.py" -type f | head -5
```

Repository: SolFoundry/solfoundry
Length of output: 119
🏁 Script executed:
```shell
# Look for project dependencies/requirements
find . -name "pyproject.toml" -o -name "requirements.txt" -o -name "setup.py" | grep -E "(pyproject|requirements|setup)" | head -10
```

Repository: SolFoundry/solfoundry
Length of output: 91
🏁 Script executed:
```shell
# Search for postgresql_if_not_exists usage in the codebase
rg "postgresql_if_not_exists" --type py
```

Repository: SolFoundry/solfoundry
Length of output: 156
🏁 Script executed:
```shell
cat -n ./backend/alembic/versions/005_webhook_delivery_log.py
```

Repository: SolFoundry/solfoundry
Length of output: 6116
🏁 Script executed:
```shell
cat -n ./backend/requirements.txt | grep -i alembic
```

Repository: SolFoundry/solfoundry
Length of output: 94
Fix migration idempotency: use if_not_exists=True flag on both table creations.
Line 61 uses postgresql_if_not_exists=True which is not a documented Alembic parameter. With Alembic ≥1.13.0 (your constraint), the correct flag is if_not_exists=True passed to op.create_table(), not as a separate kwarg. As written, this kwarg gets forwarded to SQLAlchemy's Table object and provides no CREATE TABLE IF NOT EXISTS protection.
Additionally, the webhook_delivery_log table (lines 79–110) lacks any idempotency flag entirely. Both tables need if_not_exists=True at their respective op.create_table() calls to match the intent expressed in the docstring ("if it doesn't already exist") and comply with idempotent migration requirements.
Note: The index creations correctly use if_not_exists=True (lines 68, 75), so apply the same pattern to the table operations.
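Applied to this migration, the fix would look roughly like the following fragment (not runnable standalone; the elided columns are the ones already defined in the migration):

```python
op.create_table(
    "contributor_webhooks",
    # ... existing sa.Column(...) definitions unchanged ...
    if_not_exists=True,  # documented Alembic flag (>= 1.13.3); not postgresql_if_not_exists
)

op.create_table(
    "webhook_delivery_log",
    # ... existing sa.Column(...) definitions unchanged ...
    if_not_exists=True,
)
```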
backend/app/api/onchain_webhooks.py (outdated)
```python
responses={
    202: {"description": "Event queued for delivery"},
    400: {"model": ErrorResponse, "description": "Unsupported event type"},
    403: {"model": ErrorResponse, "description": "Invalid or missing indexer token"},
},
```
The declared error model does not match the actual responses these routes produce.
These route definitions advertise ErrorResponse, but the real app-level handlers in backend/app/main.py return message/code/request_id for HTTPException and ValueError, while a router mounted outside backend/app/main.py falls back to FastAPI’s default detail shape. The OpenAPI emitted for these endpoints is therefore publishing the wrong error contract.
As per coding guidelines, backend/**: Python FastAPI backend. Analyze thoroughly: API contract consistency with spec.
Also applies to: 143-145, 185-189
backend/app/api/onchain_webhooks.py (outdated)
```python
test_data: dict[str, Any] = {
    "test": True,
    "message": "This is a test event from SolFoundry.",
    "tx_signature": "TestTxSig1111111111111111111111111111111111111111",
    "slot": 0,
    "block_time": None,
    **(req.payload_override or {}),
}
```
payload_override can remove the test sentinel and make test deliveries look live.
Because payload_override is merged after the fixed fields, callers can set "test": false or replace the provenance markers that are supposed to distinguish /webhooks/test payloads from real events. That breaks the endpoint’s documented safety signal for downstream consumers.
As per coding guidelines, backend/**: Python FastAPI backend. Analyze thoroughly: API contract consistency with spec.
```python
@field_validator("url")
@classmethod
def must_be_https_and_public(cls, v: AnyHttpUrl) -> AnyHttpUrl:
    """Reject non-HTTPS and SSRF-risky targets (localhost, private ranges)."""
    s = str(v)
    if s.startswith("http://"):
        raise ValueError("Webhook URL must use HTTPS")
    host = v.host or ""
    if not host or host.lower() in ("localhost", "127.0.0.1", "::1"):
        raise ValueError("Webhook URL must not target localhost")
    if _is_private_ip(host):
        raise ValueError(
            "Webhook URL must not target private or reserved IP ranges"
        )
    return v
```
The SSRF hostname check is still vulnerable to DNS rebinding.
must_be_https_and_public() resolves the hostname once during registration, but backend/app/services/contributor_webhook_service.py later delivers to the stored hostname verbatim. A domain that resolves to a public IP when registered can later rebind to loopback or private space and turn webhook delivery into SSRF.
As per coding guidelines, backend/**: Python FastAPI backend. Analyze thoroughly: Input validation and SQL injection vectors.
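A delivery-time re-check is the usual mitigation: resolve the hostname again just before connecting and reject non-public answers (ideally also pinning the connection to the validated IP). A minimal sketch using only the standard library — the function name and port assumption are illustrative:

```python
import ipaddress
import socket

def resolve_public_address(host: str) -> str:
    """Re-resolve at delivery time; reject loopback/private/reserved answers.

    A registration-time check alone cannot stop DNS rebinding: the delivery
    path must re-validate (and ideally pin) the address it actually connects to.
    """
    infos = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
    if not infos:
        raise ValueError(f"{host} did not resolve")
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_loopback or ip.is_private or ip.is_link_local or ip.is_reserved:
            raise ValueError(f"{host} resolved to non-public address {ip}")
    return infos[0][4][0]
```

Pinning means handing the validated IP (with the original hostname for SNI/Host) to the HTTP client, so a second lookup cannot rebind between check and connect.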
Force-pushed from 4687dfb to 25fca5f (compare)
Closes #508
Summary
Extends the contributor webhook system with 5 new on-chain event types, 5-second batch delivery windows, a delivery dashboard endpoint, a test-fire endpoint, and a full event catalog.
New Event Types
- `escrow.locked`
- `escrow.released`
- `reputation.updated`
- `stake.deposited`
- `stake.withdrawn`

Acceptance Criteria
New Endpoints
- `/api/webhooks/onchain/emit`
- `/api/webhooks/dashboard`
- `/api/webhooks/test`
- `/api/webhooks/events`

On-Chain Payload Schema
```json
{
  "event": "escrow.locked",
  "bounty_id": "<uuid>",
  "timestamp": "<ISO-8601>",
  "tx_signature": "<base58-tx-sig>",
  "slot": 123456789,
  "block_time": 1742686800,
  "data": {
    "creator_wallet": "<pubkey>",
    "amount": 50000.0,
    "currency": "FNDRY",
    "indexed_at": "<ISO-8601>"
  }
}
```

All deliveries signed: `X-SolFoundry-Signature: sha256=<hmac-hex>`

DB Migration

`005_webhook_delivery_log` — adds `webhook_delivery_log` table for audit/retry history.

Wallet
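A subscriber-side sketch of verifying the `sha256=<hmac-hex>` header above; exactly which bytes the server signs (raw request body vs. canonicalized JSON) is an assumption here:

```python
import hashlib
import hmac
import json

def sign_payload(secret: str, body: bytes) -> str:
    """HMAC-SHA256 over the request body, hex-encoded with the documented prefix."""
    digest = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return f"sha256={digest}"

def verify_signature(secret: str, body: bytes, header_value: str) -> bool:
    """Constant-time comparison against the X-SolFoundry-Signature header."""
    return hmac.compare_digest(sign_payload(secret, body), header_value)
```

Receivers should verify against the raw bytes they received, before any JSON parsing, and always use a constant-time comparison.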
Solana:
61FYMEPXMe73ypR53wMAR7PYAWHhZWKFJMNKnG9NwoW