feat(webhooks): outbound contributor webhook notification system #729
Implements bounty SolFoundry#475: contributors can register HTTPS webhook URLs to receive POST notifications on bounty lifecycle events.

## New files

- `backend/app/models/contributor_webhook.py`: SQLAlchemy ORM (`ContributorWebhookDB`) + Pydantic schemas (`WebhookRegisterRequest`, `WebhookResponse`, `WebhookListResponse`, `WebhookPayload`, `WebhookEvent` enum)
- `backend/app/services/contributor_webhook_service.py`: full CRUD + dispatch service:
  - `register()`: enforces 10-webhook-per-user limit
  - `unregister()`: soft-delete (`active=False`)
  - `list_for_user()`: returns active subscriptions
  - `dispatch_event()`: fan-out to all matching webhooks
  - `_deliver_with_retry()`: 3 attempts, exponential backoff (2s/4s/8s)
  - `_sign_payload()`: HMAC-SHA256 `sha256=` header
  - Delivery stats updated after each attempt
- `backend/app/api/contributor_webhooks.py`: REST endpoints:
  - POST `/api/webhooks/register`
  - DELETE `/api/webhooks/{id}`
  - GET `/api/webhooks`
- `backend/tests/test_contributor_webhooks.py`: 19 tests covering signing, payload structure, register/unregister/list, retry-on-503, retry-on-network-error, success-on-second-attempt, limit enforcement, API 201/204/400/404, and HTTPS-only validation

## Wire-up

- `app/main.py`: include `contributor_webhooks_router` at the `/api` prefix

## Events supported

`bounty.claimed` | `review.started` | `review.passed` | `review.failed` | `bounty.paid`
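For consumers of these webhooks, the `_sign_payload()` scheme described above (HMAC-SHA256 with a `sha256=` header prefix) can be verified with the Python standard library alone. A minimal sketch: the raw request body must be hashed, and anything beyond the `sha256=` prefix format (function names, header handling) is an assumption, not taken from the PR:

```python
import hashlib
import hmac


def sign_payload(secret: str, payload: bytes) -> str:
    """Build the signature header value: 'sha256=' + HMAC-SHA256 hex digest."""
    digest = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return f"sha256={digest}"


def verify_signature(secret: str, payload: bytes, header_value: str) -> bool:
    """Receiver-side check; compare_digest avoids timing side channels."""
    return hmac.compare_digest(sign_payload(secret, payload), header_value)
```

A receiver would recompute the signature over the exact bytes it received and reject the request when `verify_signature` returns `False`.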
## Cleanup

- `contributor_webhooks.py`: remove unused `BackgroundTasks` import
- `contributor_webhook_service.py`: remove unused `json`, `time`, `sqlalchemy.text` imports
- `test_contributor_webhooks.py`: remove unused `asyncio` import
> ⚠️ Caution: review failed; the pull request was closed or merged during review.

📝 Walkthrough

Adds a contributor outbound webhook subsystem: a new FastAPI router mounted under `/api/webhooks` with endpoints to register (POST `/register`), unregister (DELETE `/{webhook_id}`), and list (GET `/`) webhooks; a SQLAlchemy model and Alembic migration for …

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: 4 passed, 1 failed (warning)
Actionable comments posted: 11
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/api/contributor_webhooks.py`:
- Around line 83-96: The unregister_webhook endpoint currently types webhook_id
as str which lets malformed UUIDs reach ContributorWebhookService.unregister and
raise ValueError; update the API-level validation by changing the webhook_id
parameter in unregister_webhook to a UUID-typed path parameter (e.g.,
webhook_id: UUID = Path(...)) or explicitly parse UUID(webhook_id) inside
unregister_webhook and catch ValueError to raise an HTTPException (400/422) with
a clear "invalid webhook id" message so malformed IDs never produce a 500 from
ContributorWebhookService.unregister.
In `@backend/app/models/contributor_webhook.py`:
- Around line 93-98: The must_be_https field validator for "url" currently only
rejects values that start with "http://", which can miss other non-HTTPS
schemes; update the must_be_https(cls, v: AnyHttpUrl) implementation to
explicitly check v.scheme == "https" (or use v.url.scheme) and raise
ValueError("Webhook URL must use HTTPS") for any other scheme, ensuring the
validator uses the parsed URL's scheme rather than a string prefix check; keep
the validator attached to the "url" field and return v when the scheme is
"https".
- Around line 23-39: Remove the duplicated literal tuple and derive
WEBHOOK_EVENTS from the WebhookEvent enum to avoid drift: replace the current
WEBHOOK_EVENTS definition with a tuple built from WebhookEvent values (e.g.,
iterate over WebhookEvent members and collect their .value), keeping the enum
WebhookEvent as the single source of truth.
- Around line 50-76: The contributor_webhook model defines two indexes for
user_id: the Column user_id has index=True and there is also an explicit
Index("ix_contributor_webhooks_user_id", "user_id"); remove the duplicate by
deleting the column-level index (remove index=True from the user_id Column) so
the named Index("ix_contributor_webhooks_user_id", "user_id") remains as the
single index, keeping the explicit index name and avoiding redundant indexes.
In `@backend/app/services/contributor_webhook_service.py`:
- Around line 186-194: The retry loop in dispatch_event is creating a new
aiohttp.ClientSession on every attempt which is inefficient; modify
ContributorWebhookService.dispatch_event (or the method that contains the for
attempt in range loop) to create a single ClientSession per dispatch (or accept
an injected/shared session parameter) outside the retry loop, use that session
for all session.post calls inside the attempts, and ensure the session is
properly closed (use async with around the single session or close it in a
finally block) so resources are reused across retries and not leaked.
- Around line 141-147: dispatch_event currently accepts any string for event and
may dispatch invalid events; validate the event parameter against the allowed
set (WEBHOOK_EVENTS or the WebhookEvent enum) at the start of the dispatch_event
method and reject invalid values (e.g., raise ValueError or return early and log
an error). Locate dispatch_event in contributor_webhook_service.py and add a
guard that checks if event is in WEBHOOK_EVENTS or is a member of WebhookEvent
(convert enums to their string value if needed) before proceeding to subscriber
lookup and dispatching.
- Around line 141-168: dispatch_event is never called from the bounty lifecycle,
so webhook deliveries never run; integrate
ContributorWebhookService.dispatch_event into the bounty status transition flow
by invoking it from the lifecycle handlers that call transition_status() and
lifecycle_service.log_event(), passing the appropriate event name, bounty_id,
payload data (e.g., summary of status change or review info) and user_id when
relevant. Update the lifecycle transition points in bounty_lifecycle_service
where events are logged to also call await
contributor_webhook_service.dispatch_event(...) (or schedule it as a background
task consistent with existing async patterns), ensuring you map lifecycle
states/actions to the correct event strings and include any review/bounty
details in the data argument so webhooks receive useful context.
In `@backend/tests/test_contributor_webhooks.py`:
- Around line 356-358: Remove the unused pytest fixture named anyio_backend (the
function defined as anyio_backend returning "asyncio") since no tests reference
it; delete the anyio_backend fixture definition to eliminate dead/test
scaffolding or, if you intended to use a different async backend, update tests
to reference this fixture instead (e.g., use it in tests or change test markers
to require it).
- Around line 285-286: Remove the redundant local imports of MAX_ATTEMPTS inside
the test functions and use the module‑scope MAX_ATTEMPTS already imported at the
top of the test file; specifically, delete the in-function "from
app.services.contributor_webhook_service import MAX_ATTEMPTS" statements and
leave the existing assertions (e.g., "assert mock_session.post.call_count ==
MAX_ATTEMPTS") to reference the already-imported MAX_ATTEMPTS.
- Around line 42-56: Remove the unused module-level FastAPI instance named app
and the empty assignment to app.dependency_overrides; locate the top-level
symbols app and dependency_overrides in this test module and delete their
declarations (the FastAPI() creation and the empty overrides mapping). Leave or
relocate the helper coroutines _fake_user_id and _fake_db only if they are
referenced by individual tests—if they are unused, remove them as well to
eliminate dead code. Ensure tests continue to instantiate their own FastAPI
instances and set dependency overrides locally within each test.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 667211a2-ab67-47e2-848f-a01ad1c96efd
📒 Files selected for processing (5)
- backend/app/api/contributor_webhooks.py
- backend/app/main.py
- backend/app/models/contributor_webhook.py
- backend/app/services/contributor_webhook_service.py
- backend/tests/test_contributor_webhooks.py
```python
async def unregister_webhook(
    webhook_id: str = Path(..., description="Webhook UUID"),
    user_id: str = Depends(get_current_user_id),
    db: AsyncSession = Depends(get_db),
) -> None:
    """Unregister (soft-delete) an existing webhook."""
    service = ContributorWebhookService(db)
    try:
        await service.unregister(user_id, webhook_id)
    except WebhookNotFoundError:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail="Webhook not found",
        )
```
Missing UUID format validation for webhook_id path parameter.
The webhook_id parameter is declared as str without UUID format validation. If a malformed UUID is passed, UUID(webhook_id) in the service layer raises ValueError, which may result in an unhandled 500 or leak internal error details.
🛡️ Proposed fix — validate UUID format at API layer

```diff
+from uuid import UUID as PyUUID
+
 @router.delete(
     "/{webhook_id}",
     status_code=status.HTTP_204_NO_CONTENT,
     summary="Unregister a contributor webhook",
     responses={
+        422: {"model": ErrorResponse, "description": "Invalid webhook ID format"},
     },
 )
 async def unregister_webhook(
-    webhook_id: str = Path(..., description="Webhook UUID"),
+    webhook_id: PyUUID = Path(..., description="Webhook UUID"),
     user_id: str = Depends(get_current_user_id),
     db: AsyncSession = Depends(get_db),
 ) -> None:
     """Unregister (soft-delete) an existing webhook."""
     service = ContributorWebhookService(db)
     try:
-        await service.unregister(user_id, webhook_id)
+        await service.unregister(user_id, str(webhook_id))
     except WebhookNotFoundError:
```
```python
WEBHOOK_EVENTS = (
    "bounty.claimed",
    "review.started",
    "review.passed",
    "review.failed",
    "bounty.paid",
)


class WebhookEvent(str, Enum):
    """Supported outbound webhook event types."""

    BOUNTY_CLAIMED = "bounty.claimed"
    REVIEW_STARTED = "review.started"
    REVIEW_PASSED = "review.passed"
    REVIEW_FAILED = "review.failed"
    BOUNTY_PAID = "bounty.paid"
```
🧹 Nitpick | 🔵 Trivial
Redundant event definitions: WEBHOOK_EVENTS tuple and WebhookEvent enum.
Both WEBHOOK_EVENTS (tuple) and WebhookEvent (enum) define the same event strings. This duplication can lead to drift if one is updated without the other. Consider deriving the tuple from the enum or vice versa.
♻️ Proposed fix — derive tuple from enum

```diff
 class WebhookEvent(str, Enum):
     """Supported outbound webhook event types."""

     BOUNTY_CLAIMED = "bounty.claimed"
     REVIEW_STARTED = "review.started"
     REVIEW_PASSED = "review.passed"
     REVIEW_FAILED = "review.failed"
     BOUNTY_PAID = "bounty.paid"


-WEBHOOK_EVENTS = (
-    "bounty.claimed",
-    "review.started",
-    "review.passed",
-    "review.failed",
-    "bounty.paid",
-)
+WEBHOOK_EVENTS: tuple[str, ...] = tuple(e.value for e in WebhookEvent)
```
```python
    id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    user_id = Column(UUID(as_uuid=True), nullable=False, index=True)
    url = Column(Text, nullable=False)
    # HMAC-SHA256 secret supplied by the contributor at registration time.
    # Stored as plaintext (contributor's choice); used to sign outgoing payloads.
    secret = Column(String(256), nullable=False)
    active = Column(Boolean, default=True, nullable=False)
    created_at = Column(
        DateTime(timezone=True),
        default=lambda: datetime.now(timezone.utc),
        nullable=False,
    )
    updated_at = Column(
        DateTime(timezone=True),
        default=lambda: datetime.now(timezone.utc),
        onupdate=lambda: datetime.now(timezone.utc),
        nullable=False,
    )
    # Delivery stats (updated on each dispatch attempt)
    last_delivery_at = Column(DateTime(timezone=True), nullable=True)
    last_delivery_status = Column(String(20), nullable=True)  # success | failed
    failure_count = Column(Integer, default=0, nullable=False)

    __table_args__ = (
        Index("ix_contributor_webhooks_user_id", "user_id"),
        Index("ix_contributor_webhooks_active", "active"),
    )
```
Duplicate index definition for user_id column.
Line 51 declares index=True on the user_id column, and line 74 creates an explicit Index("ix_contributor_webhooks_user_id", "user_id"). This results in two indexes on the same column, wasting storage and slowing writes.
Remove one of the duplicate index definitions:
♻️ Proposed fix

```diff
-    user_id = Column(UUID(as_uuid=True), nullable=False, index=True)
+    user_id = Column(UUID(as_uuid=True), nullable=False)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/models/contributor_webhook.py` around lines 50 - 76, The
contributor_webhook model defines two indexes for user_id: the Column user_id
has index=True and there is also an explicit
Index("ix_contributor_webhooks_user_id", "user_id"); remove the duplicate by
deleting the column-level index (remove index=True from the user_id Column) so
the named Index("ix_contributor_webhooks_user_id", "user_id") remains as the
single index, keeping the explicit index name and avoiding redundant indexes.
```python
    # HMAC-SHA256 secret supplied by the contributor at registration time.
    # Stored as plaintext (contributor's choice); used to sign outgoing payloads.
    secret = Column(String(256), nullable=False)
```
🧩 Analysis chain
🌐 Web query:
Best practices for storing webhook secrets at rest encryption
💡 Result: best practices for storing webhook secrets (with strong encryption at rest)

- **Prefer a dedicated secrets manager (don't roll your own).** Store webhook signing secrets in a managed vault/service (e.g., AWS Secrets Manager, HashiCorp Vault, Google Secret Manager) instead of app config files or databases. These systems are designed for encryption at rest plus access controls and auditability. [1][2][3]
- **Use envelope encryption (DEK/KEK model).** Encrypt the secret value with a data-encryption key (DEK), and protect that DEK with a key-encryption key (KEK) in a KMS/HSM (typical vault/KMS design). This reduces blast radius and simplifies rotation. [4][3]
- **Control and protect encryption keys.** Use KMS/HSM-backed keys and tightly scope which services/principals can decrypt. For stronger governance, use customer-managed encryption keys (CMEK) so you control rotation, access policies, and audit logs for key usage. [2][3]
- **Lock down access (least privilege + separation of duties).** Limit decrypt/read permissions to only the webhook verification runtime (the exact service account / workload identity). Avoid broad "read secrets" roles in dev/test; separate "manage secrets" from "use secrets." (This is a core OWASP theme for protecting key material.) [1][5]
- **Never hardcode or commit webhook secrets.** Don't embed secrets in source code, images, or repo config; store them in a secure location your server can access. [6]
- **Plan for rotation and multi-key validation.** Have a documented rotation process; update secrets without downtime by supporting an overlap window (validate against current + previous secret during cutover). [7][1]
- **Encrypt backups and replicas too.** Ensure the storage layer used by the secrets system (snapshots/backups/replicas) is encrypted and access-controlled; at-rest encryption must apply end-to-end, not just in the primary store. (Vault and cloud secret managers assume encrypted-at-rest storage, but you still must secure underlying backups/policies.) [1][3]
- **Minimize plaintext exposure at runtime.** Retrieve secrets just-in-time, keep them in memory only as long as needed, and avoid logging/printing them (including error traces). Vault explicitly notes secrets must exist in memory to function, so operational controls matter. [3]
- **Audit and monitor secret access.** Turn on audit logs for secret reads/decrypt operations (KMS/Secret Manager/Vault) and alert on anomalies (new principals, unusual regions, spikes). [2][3]
- **Use modern authenticated encryption where you encrypt yourself.** If you must encrypt webhook secrets in your own datastore (not recommended), use an AEAD mode and sound key management (don't store encryption keys alongside ciphertext). OWASP guidance emphasizes protecting secret keys and using proper secret vaults. [1][5]
Sources:
[1] OWASP Developer Guide (crypto practices; recommends vault/KMS/HSM for key storage & lifecycle) (owasp.org)
[2] Google Secret Manager CMEK (default encryption at rest; CMEK for control, rotation, auditing) (cloud.google.com)
[3] HashiCorp Vault production hardening (encrypts at rest/in transit; seal/KMS/HSM considerations) (developer.hashicorp.com)
[4] NIST SP 800-57 Part 1 Rev.5 (key management concepts incl. DEKs protecting data) (nvlpubs.nist.gov)
[5] OWASP Proactive Controls (encrypt data at rest; protect secret keys; use proper vault) (wiki.owasp.org)
[6] GitHub: securing your webhooks (store webhook secret securely; never hardcode/push to repo) (docs.github.com)
[7] AWS Secrets Manager best practices (encrypts secrets at rest with KMS; rotation guidance) (docs.aws.amazon.com)
Citations:
- 1: https://owasp.org/www-project-developer-guide/assets/exports/OWASP_Developer_Guide.pdf?utm_source=openai
- 2: https://cloud.google.com/secret-manager/docs/cmek?utm_source=openai
- 3: https://developer.hashicorp.com/vault/docs/concepts/production-hardening?utm_source=openai
- 4: https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-57pt1r5.pdf?utm_source=openai
- 5: https://wiki.owasp.org/images/b/bc/OWASP_Top_10_Proactive_Controls_V3.pdf?utm_source=openai
- 6: https://docs.github.com/developers/webhooks-and-events/webhooks/securing-your-webhooks?utm_source=openai
- 7: https://docs.aws.amazon.com/secretsmanager/latest/userguide/best-practices.html?utm_source=openai
🏁 Scripts executed (against SolFoundry/solfoundry):

```shell
# Examine the webhook model for full context
cat -n backend/app/models/contributor_webhook.py
# Check for any encryption utilities or KMS integration in the codebase
rg -i "encrypt|decrypt|kms|vault|secret" --type py -A 2 -B 2
# Search for how the webhook secret is used/retrieved
rg "secret" backend/app/models/contributor_webhook.py -A 3 -B 3
# Check for application-level crypto libraries (fd uses -t f, not -type f)
fd -e py -t f | xargs rg -l "cryptography|fernet|aes" | head -20
```
Webhook signing secrets stored unencrypted in database.
The secret column stores HMAC-SHA256 signing secrets in plaintext. If the database is compromised, all webhook credentials are exposed. The comment citing "contributor's choice" does not justify this practice. Follow industry best practice: encrypt webhook secrets at rest using KMS (AWS Secrets Manager, HashiCorp Vault, or similar) or application-level encryption with proper key management. Support secret rotation with an overlap window to allow gradual key transitions.
```python
    @field_validator("url")
    @classmethod
    def must_be_https(cls, v: AnyHttpUrl) -> AnyHttpUrl:
        if str(v).startswith("http://"):
            raise ValueError("Webhook URL must use HTTPS")
        return v
```
🧹 Nitpick | 🔵 Trivial
HTTPS validator may not reject all non-HTTPS schemes.
The validator only checks for http:// prefix but AnyHttpUrl could theoretically allow other schemes in edge cases. A stricter approach is to assert the scheme directly.
♻️ Proposed fix

```diff
 @field_validator("url")
 @classmethod
 def must_be_https(cls, v: AnyHttpUrl) -> AnyHttpUrl:
-    if str(v).startswith("http://"):
+    if v.scheme != "https":
         raise ValueError("Webhook URL must use HTTPS")
     return v
```
```python
    async def dispatch_event(
        self,
        event: str,
        bounty_id: str,
        data: dict[str, Any],
        user_id: str | None = None,
    ) -> None:
        """Dispatch an event to all matching active webhooks.

        If *user_id* is given, only that user's webhooks are notified.
        Delivery runs in the background and failures do not propagate.
        """
        query = select(ContributorWebhookDB).where(
            ContributorWebhookDB.active.is_(True)
        )
        if user_id:
            query = query.where(ContributorWebhookDB.user_id == UUID(user_id))

        result = await self._db.execute(query)
        webhooks = result.scalars().all()

        payload_bytes = _build_payload(event, bounty_id, data)

        tasks = [
            self._deliver_with_retry(wh, event, payload_bytes) for wh in webhooks
        ]
        if tasks:
            await asyncio.gather(*tasks, return_exceptions=True)
```
🧩 Analysis chain
🏁 Scripts executed (verification against SolFoundry/solfoundry):

```shell
# dispatch_event is defined only in the service; no production callers
rg -n 'dispatch_event' --type=py -g '!*test*' -g '!contributor_webhook_service.py'   # no matches
rg -n 'def dispatch_event' backend/app/services/contributor_webhook_service.py
# Neither the bounty lifecycle nor the review service invokes it
rg -n 'dispatch_event' backend/app/services/bounty_lifecycle_service.py   # no matches
rg -n 'dispatch_event' backend/app/services/review_service.py             # no matches
rg -n 'dispatch_event' backend/app/services/ --type py
rg -n 'dispatch_event' backend/tests/ --type py                           # no matches
# Inspect the lifecycle service and the dispatch_event implementation
head -100 backend/app/services/bounty_lifecycle_service.py
sed -n '135,170p' backend/app/services/contributor_webhook_service.py
```
dispatch_event is not integrated with bounty/review lifecycle handlers.
The dispatch_event method is implemented but never called from any state transition handlers. bounty_lifecycle_service.py handles bounty status transitions via transition_status() and logs events via lifecycle_service.log_event(), but does not trigger webhook dispatch. Webhook events will not be sent until dispatch_event() is invoked from the appropriate lifecycle transition points.
```python
        for attempt in range(1, MAX_ATTEMPTS + 1):
            try:
                async with aiohttp.ClientSession() as session:
                    async with session.post(
                        webhook.url,
                        data=payload_bytes,
                        headers=headers,
                        timeout=aiohttp.ClientTimeout(total=DISPATCH_TIMEOUT_SECONDS),
                    ) as resp:
```
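The PR's 2s/4s/8s schedule corresponds to a `base * 2**(attempt - 1)` delay. A minimal sketch of that retry shape, with the HTTP call abstracted to an injected coroutine since the aiohttp specifics appear in the snippet above; where exactly the final 8s wait falls is not visible in this excerpt, so this sketch only sleeps between attempts:

```python
import asyncio
from typing import Awaitable, Callable

MAX_ATTEMPTS = 3          # as stated in the PR description
BASE_DELAY_SECONDS = 2.0  # doubled after each failed attempt


async def deliver_with_retry(post: Callable[[], Awaitable[int]],
                             base_delay: float = BASE_DELAY_SECONDS) -> bool:
    """Retry `post` with exponential backoff; return True on a 2xx status."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            status = await post()
            if 200 <= status < 300:
                return True
        except OSError:
            pass  # treat network errors like a failed attempt
        if attempt < MAX_ATTEMPTS:
            # 2s after the first failure, 4s after the second
            await asyncio.sleep(base_delay * 2 ** (attempt - 1))
    return False
```

Injecting `post` also makes the backoff logic trivially testable without a mock HTTP server.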
Inefficient: New aiohttp.ClientSession created per delivery attempt.
Creating a new ClientSession for each attempt (and each webhook) is resource-intensive. aiohttp recommends reusing a single session for multiple requests. Consider accepting a shared session or creating one per dispatch_event call.
♻️ Proposed fix — create session once per dispatch
async def dispatch_event(
self,
event: str,
bounty_id: str,
data: dict[str, Any],
user_id: str | None = None,
) -> None:
...
payload_bytes = _build_payload(event, bounty_id, data)
- tasks = [
- self._deliver_with_retry(wh, event, payload_bytes) for wh in webhooks
- ]
- if tasks:
- await asyncio.gather(*tasks, return_exceptions=True)
+ if webhooks:
+ async with aiohttp.ClientSession() as session:
+ tasks = [
+ self._deliver_with_retry(wh, event, payload_bytes, session)
+ for wh in webhooks
+ ]
+ await asyncio.gather(*tasks, return_exceptions=True)
async def _deliver_with_retry(
self,
webhook: ContributorWebhookDB,
event: str,
payload_bytes: bytes,
+ session: aiohttp.ClientSession,
) -> None:
...
for attempt in range(1, MAX_ATTEMPTS + 1):
try:
- async with aiohttp.ClientSession() as session:
- async with session.post(
+ async with session.post(
webhook.url,
...
- ) as resp:
+            ) as resp:

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/services/contributor_webhook_service.py` around lines 186 - 194,
The retry loop in dispatch_event is creating a new aiohttp.ClientSession on
every attempt which is inefficient; modify
ContributorWebhookService.dispatch_event (or the method that contains the for
attempt in range loop) to create a single ClientSession per dispatch (or accept
an injected/shared session parameter) outside the retry loop, use that session
for all session.post calls inside the attempts, and ensure the session is
properly closed (use async with around the single session or close it in a
finally block) so resources are reused across retries and not leaked.
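The retry shape the review targets can be sketched independently of aiohttp: one client object created per dispatch and reused across attempts, with 2s/4s/8s backoff. Everything here except the `MAX_ATTEMPTS` constant is a hypothetical stand-in (the `send` and `sleep` callables replace `session.post` and `asyncio.sleep` so the logic is testable offline):

```python
import asyncio
from typing import Awaitable, Callable

MAX_ATTEMPTS = 3  # mirrors the service constant

async def deliver_with_retry(
    send: Callable[[], Awaitable[int]],
    sleep: Callable[[float], Awaitable[None]] = asyncio.sleep,
) -> bool:
    """Retry up to MAX_ATTEMPTS times, backing off 2s/4s/8s after failures."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            status = await send()
            if 200 <= status < 300:
                return True
        except OSError:
            pass  # network errors count as a failed attempt
        await sleep(2 ** attempt)  # 2s, 4s, 8s
    return False

# A 503 followed by a 200: delivery succeeds on the second attempt,
# after a single 2-second backoff.
responses = iter([503, 200])

async def fake_send() -> int:
    return next(responses)

delays: list = []

async def fake_sleep(d: float) -> None:
    delays.append(d)

ok = asyncio.run(deliver_with_retry(fake_send, fake_sleep))
assert ok is True and delays == [2]
```

Because `send` is injected, the same function works whether the caller holds one `ClientSession` per dispatch (the proposed fix) or per attempt.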
```python
app = FastAPI()
app.include_router(webhook_router)

# Override auth to always return TEST_USER_ID
app.dependency_overrides = {}


async def _fake_user_id() -> str:
    return TEST_USER_ID


async def _fake_db():
    """Yield a mock DB session."""
    yield MagicMock()
```
🧹 Nitpick | 🔵 Trivial
Module-level app and dependency_overrides are unused by API tests.
The module-level app (lines 42-43) and the empty dependency_overrides assignment (line 46) are not used — API tests create their own FastAPI instances with proper overrides. This dead code can be removed.
♻️ Proposed cleanup
-app = FastAPI()
-app.include_router(webhook_router)
-
-# Override auth to always return TEST_USER_ID
-app.dependency_overrides = {}

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
async def _fake_user_id() -> str:
    return TEST_USER_ID


async def _fake_db():
    """Yield a mock DB session."""
    yield MagicMock()
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/tests/test_contributor_webhooks.py` around lines 42 - 56, Remove the
unused module-level FastAPI instance named app and the empty assignment to
app.dependency_overrides; locate the top-level symbols app and
dependency_overrides in this test module and delete their declarations (the
FastAPI() creation and the empty overrides mapping). Leave or relocate the
helper coroutines _fake_user_id and _fake_db only if they are referenced by
individual tests—if they are unused, remove them as well to eliminate dead code.
Ensure tests continue to instantiate their own FastAPI instances and set
dependency overrides locally within each test.
```python
from app.services.contributor_webhook_service import MAX_ATTEMPTS
assert mock_session.post.call_count == MAX_ATTEMPTS
```
🧹 Nitpick | 🔵 Trivial
Redundant re-import of MAX_ATTEMPTS inside test functions.
MAX_ATTEMPTS is already imported at module scope (line 35). The re-imports at lines 285 and 310 are unnecessary.
♻️ Proposed fix
- from app.services.contributor_webhook_service import MAX_ATTEMPTS
  assert mock_session.post.call_count == MAX_ATTEMPTS

Also applies to: 310-311
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/tests/test_contributor_webhooks.py` around lines 285 - 286, Remove
the redundant local imports of MAX_ATTEMPTS inside the test functions and use
the module‑scope MAX_ATTEMPTS already imported at the top of the test file;
specifically, delete the in-function "from
app.services.contributor_webhook_service import MAX_ATTEMPTS" statements and
leave the existing assertions (e.g., "assert mock_session.post.call_count ==
MAX_ATTEMPTS") to reference the already-imported MAX_ATTEMPTS.
```python
@pytest.fixture
def anyio_backend():
    return "asyncio"
```
🧹 Nitpick | 🔵 Trivial
Unused anyio_backend fixture.
The anyio_backend fixture is defined but not used by any test. pytest-asyncio with @pytest.mark.asyncio doesn't require this fixture. Consider removing it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/tests/test_contributor_webhooks.py` around lines 356 - 358, Remove
the unused pytest fixture named anyio_backend (the function defined as
anyio_backend returning "asyncio") since no tests reference it; delete the
anyio_backend fixture definition to eliminate dead/test scaffolding or, if you
intended to use a different async backend, update tests to reference this
fixture instead (e.g., use it in tests or change test markers to require it).
- Add aiohttp>=3.9,<4.0 to requirements.txt (CI ModuleNotFoundError fix)
- Add Alembic migration 004_contributor_webhooks.py for contributor_webhooks table
- Validate event string in dispatch_event() against WEBHOOK_EVENTS tuple
- Wire dispatch_event() into bounty lifecycle:
  - bounties.py/claim_bounty: dispatches 'bounty.claimed'
  - bounties.py/submit_solution: dispatches 'review.started'
  - bounties.py/record_review_score: dispatches 'review.passed' or 'review.failed'
  - payouts.py/execute_payout: dispatches 'bounty.paid'
- Apply ruff format to all changed files
- All 19 tests still passing
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/services/contributor_webhook_service.py`:
- Around line 82-94: The count-then-insert in register() using
ContributorWebhookDB and MAX_WEBHOOKS_PER_USER is vulnerable to a race where
concurrent registrations both pass the count check; make the check-and-create
atomic by performing the count inside the same transaction with a row-level lock
(e.g., SELECT ... FOR UPDATE on a user-locked row or a per-user counter row) or,
preferably, add a DB-level enforcement (unique constraint/counter with CHECK or
trigger) and handle the DB error in register(); ensure the transaction is
started/committed around the locked select and insert so concurrent register()
calls cannot exceed MAX_WEBHOOKS_PER_USER.
- Around line 153-160: The dispatch_event path currently loads all active
ContributorWebhookDB entries when user_id is omitted, risking cross-user data
leaks; update the dispatch_event method in ContributorWebhookService to require
an explicit user_id (or add a boolean flag like broadcast_global) and refuse to
load or send webhooks system-wide by default. Modify the query logic around
ContributorWebhookDB.active and ContributorWebhookDB.user_id so that if user_id
is None the method raises/returns an error (or only proceeds when
broadcast_global is true), and document/validate the new parameter in the method
signature to make global dispatch an explicit action.
- Around line 238-262: dispatch_event concurrently runs _deliver_with_retry
tasks that all use the shared self._db AsyncSession, causing unsafe concurrent
commits; change the delivery-recording to use a dedicated session per task by
creating an AsyncSession from the existing async_session_factory (or by
accepting a session parameter) instead of using self._db in _record_delivery
(and propagate that session into _deliver_with_retry/dispatch_event), perform
execute/commit on that per-task session, and ensure the session is closed (or
rolled back on exceptions) in a finally block to avoid session-sharing
concurrency violations.
In `@backend/tests/test_contributor_webhooks.py`:
- Around line 476-524: Add a new async test (e.g.,
test_unregister_endpoint_returns_400_for_malformed_uuid) that calls DELETE
/webhooks/{id} with a non-UUID string (e.g., "not-a-uuid") and asserts a 400
response; set up the FastAPI app the same way as the existing tests by including
webhook_router and overriding dependencies get_current_user_id and get_db (use
TEST_USER_ID and _fake_db), use AsyncClient with ASGITransport to send the
request, and you may patch ContributorWebhookService to an AsyncMock like in the
other tests but the malformed UUID should trigger the global ValueError handler
before the service is invoked.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 3f0de254-865d-4553-a5a9-1497aee868f7
📒 Files selected for processing (2)
backend/app/services/contributor_webhook_service.py
backend/tests/test_contributor_webhooks.py
```python
count_result = await self._db.execute(
    select(func.count())
    .select_from(ContributorWebhookDB)
    .where(
        ContributorWebhookDB.user_id == UUID(user_id),
        ContributorWebhookDB.active.is_(True),
    )
)
count = count_result.scalar_one()
if count >= MAX_WEBHOOKS_PER_USER:
    raise WebhookLimitExceededError(
        f"Maximum {MAX_WEBHOOKS_PER_USER} active webhooks per user"
    )
```
Race condition: concurrent registrations could exceed the webhook limit.
The count check and insert are not atomic. Two concurrent register() calls could both pass the count >= MAX_WEBHOOKS_PER_USER check before either commits, allowing a user to exceed the limit by one. While the limit is a soft cap (10 webhooks), consider using SELECT ... FOR UPDATE or a database constraint to enforce strict limits.
🛡️ Proposed fix using SELECT FOR UPDATE
count_result = await self._db.execute(
select(func.count())
.select_from(ContributorWebhookDB)
.where(
ContributorWebhookDB.user_id == UUID(user_id),
ContributorWebhookDB.active.is_(True),
)
+ .with_for_update()
)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/services/contributor_webhook_service.py` around lines 82 - 94,
The count-then-insert in register() using ContributorWebhookDB and
MAX_WEBHOOKS_PER_USER is vulnerable to a race where concurrent registrations
both pass the count check; make the check-and-create atomic by performing the
count inside the same transaction with a row-level lock (e.g., SELECT ... FOR
UPDATE on a user-locked row or a per-user counter row) or, preferably, add a
DB-level enforcement (unique constraint/counter with CHECK or trigger) and
handle the DB error in register(); ensure the transaction is started/committed
around the locked select and insert so concurrent register() calls cannot exceed
MAX_WEBHOOKS_PER_USER.
```python
query = select(ContributorWebhookDB).where(
    ContributorWebhookDB.active.is_(True)
)
if user_id:
    query = query.where(ContributorWebhookDB.user_id == UUID(user_id))

result = await self._db.execute(query)
webhooks = result.scalars().all()
```
Security concern: user_id=None dispatches to ALL active webhooks across all users.
When user_id is not provided, dispatch_event sends the event to every active webhook in the system. This could leak bounty information to unrelated users. Consider requiring user_id or explicitly documenting the intended use case for system-wide broadcasts.
🛡️ Proposed fix — require user_id or make global dispatch explicit
async def dispatch_event(
self,
event: str,
bounty_id: str,
data: dict[str, Any],
- user_id: str | None = None,
+ user_id: str,
) -> None:
- """Dispatch an event to all matching active webhooks.
-
- If *user_id* is given, only that user's webhooks are notified.
- Delivery runs in the background and failures do not propagate.
- """
+ """Dispatch an event to the specified user's active webhooks."""
query = select(ContributorWebhookDB).where(
ContributorWebhookDB.active.is_(True)
)
- if user_id:
- query = query.where(ContributorWebhookDB.user_id == UUID(user_id))
+ query = query.where(ContributorWebhookDB.user_id == UUID(user_id))🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/services/contributor_webhook_service.py` around lines 153 - 160,
The dispatch_event path currently loads all active ContributorWebhookDB entries
when user_id is omitted, risking cross-user data leaks; update the
dispatch_event method in ContributorWebhookService to require an explicit
user_id (or add a boolean flag like broadcast_global) and refuse to load or send
webhooks system-wide by default. Modify the query logic around
ContributorWebhookDB.active and ContributorWebhookDB.user_id so that if user_id
is None the method raises/returns an error (or only proceeds when
broadcast_global is true), and document/validate the new parameter in the method
signature to make global dispatch an explicit action.
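The guard the prompt asks for can be sketched as pure filter-building logic. The `broadcast_global` flag is the reviewer's suggested name, and the dict-shaped criteria here are a stand-in for the real SQLAlchemy query:

```python
from typing import Any, Optional

def build_webhook_filter(user_id: Optional[str],
                         broadcast_global: bool = False) -> dict:
    """Return filter criteria, refusing an implicit system-wide dispatch."""
    if user_id is None and not broadcast_global:
        raise ValueError(
            "dispatch_event without user_id would notify every user's "
            "webhooks; pass broadcast_global=True to do that explicitly"
        )
    criteria: dict = {"active": True}
    if user_id is not None:
        criteria["user_id"] = user_id  # scope the dispatch to one user
    return criteria

# Per-user dispatch is the default; global dispatch must be opted into.
assert build_webhook_filter("u-1") == {"active": True, "user_id": "u-1"}
assert build_webhook_filter(None, broadcast_global=True) == {"active": True}
```

The same check could live at the top of `dispatch_event` itself; the point is that a forgotten `user_id` fails loudly instead of silently broadcasting.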
```python
async def _record_delivery(self, webhook_id: UUID, *, success: bool) -> None:
    """Update last_delivery stats; increment failure_count on failure."""
    values: dict[str, Any] = {
        "last_delivery_at": datetime.now(timezone.utc),
        "last_delivery_status": "success" if success else "failed",
    }
    if not success:
        # Increment via SQL expression to avoid race conditions
        await self._db.execute(
            update(ContributorWebhookDB)
            .where(ContributorWebhookDB.id == webhook_id)
            .values(
                last_delivery_at=values["last_delivery_at"],
                last_delivery_status=values["last_delivery_status"],
                failure_count=ContributorWebhookDB.failure_count + 1,
            )
        )
    else:
        await self._db.execute(
            update(ContributorWebhookDB)
            .where(ContributorWebhookDB.id == webhook_id)
            .values(**values)
        )
    await self._db.commit()
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Verify if there's a session factory that could be used to create per-task sessions
rg -n 'async_session_factory\|AsyncSession' backend/app/database.py -A 3 -B 1
```

🏁 Script executed:

```shell
find backend -name "*.py" -type f | grep -E "(database|db)" | head -20
```

🏁 Script executed:

```shell
rg -n "dispatch_event|_deliver_with_retry" backend/app/services/contributor_webhook_service.py -B 2 -A 15
```

🏁 Script executed:

```shell
rg -n "AsyncSession\|async_session_factory\|Session" backend/app -A 2 -B 1 | head -100
```

🏁 Script executed:

```shell
cat -n backend/app/database.py
```

🏁 Script executed:

```shell
rg -n "ContributorWebhookService\|__init__" backend/app/services/contributor_webhook_service.py -A 10 | head -50
```

🏁 Script executed:

```shell
rg -n "def _record_delivery" backend/app/services/contributor_webhook_service.py -A 30
```

🏁 Script executed:

```shell
rg -n "ContributorWebhookService\(" backend -B 5 -A 5 | head -100
```

🏁 Script executed:

```shell
grep -n "class ContributorWebhookService" backend/app/services/contributor_webhook_service.py -A 20
```

🏁 Script executed:

```shell
rg -n "def __init__" backend/app/services/contributor_webhook_service.py -A 10
```

🏁 Script executed:

```shell
rg -n "async def dispatch_event" backend/app/services/contributor_webhook_service.py -A 40 | head -80
```

🏁 Script executed:

```shell
rg -n "_deliver_with_retry" backend/app/services/contributor_webhook_service.py -A 50 | head -100
```

🏁 Script executed:

```shell
grep -rn "dispatch_event\|ContributorWebhookService" backend/app --include="*.py" | grep -E "(await|call)" | head -20
```

🏁 Script executed:

```shell
rg -n "gather\|concurrent\|lock\|Lock\|Semaphore" backend/app/services/contributor_webhook_service.py
```

🏁 Script executed:

```shell
cat -n backend/app/services/contributor_webhook_service.py | sed -n '195,245p'
```

🏁 Script executed:

```shell
rg -n "AsyncSession\|concurrency\|thread" backend/app -A 2 -B 1 | head -60
```

🏁 Script executed:

```shell
rg -n "ContributorWebhookService\(" backend/app/api -B 5 -A 5
```

🏁 Script executed:

```shell
cat -n backend/app/api/bounties.py | sed -n '400,410p'
```

🏁 Script executed:

```shell
rg -n "def.*bounce\|def.*submit" backend/app/api/bounties.py | head -5
```

🏁 Script executed:

```shell
rg -n "expire_on_commit\|autoflush\|autocommit" backend/app/database.py
```

🏁 Script executed:

```shell
python3 << 'EOF'
# Quick verification of SQLAlchemy AsyncSession concurrency constraints
import sys
print("SQLAlchemy AsyncSession is designed for single-coroutine sequential access.")
print("Using the same AsyncSession instance across multiple concurrent coroutines")
print("can cause connection state corruption and race conditions.")
print("\nRelevant constraints:")
print("- AsyncSession.execute() is awaitable but not concurrent-safe on same instance")
print("- AsyncSession.commit() must not be called concurrently from multiple tasks")
print("- Each concurrent task should use its own session instance")
EOF
```

🏁 Script executed:

```shell
rg -n "asyncio.gather" backend/app/services/contributor_webhook_service.py -B 5 -A 2
```

🏁 Script executed:

```shell
cat -n backend/app/services/contributor_webhook_service.py | sed -n '237,272p'
```
Shared AsyncSession across concurrent asyncio.gather tasks creates concurrency violations.
dispatch_event (lines 141-175) concurrently delivers to multiple webhooks using asyncio.gather(*tasks, return_exceptions=True), but all concurrent tasks share the same self._db AsyncSession instance. Each _deliver_with_retry task calls _record_delivery (lines 247-271), which executes await self._db.execute(...) and then await self._db.commit(). SQLAlchemy's AsyncSession is not designed for concurrent use by multiple coroutines—multiple concurrent commit() calls on the same session instance can cause connection state corruption, lost updates, and race conditions.
Create a separate session instance per delivery task (via async_session_factory()) or serialize the delivery recording to ensure each database operation uses its own session.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/services/contributor_webhook_service.py` around lines 238 - 262,
dispatch_event concurrently runs _deliver_with_retry tasks that all use the
shared self._db AsyncSession, causing unsafe concurrent commits; change the
delivery-recording to use a dedicated session per task by creating an
AsyncSession from the existing async_session_factory (or by accepting a session
parameter) instead of using self._db in _record_delivery (and propagate that
session into _deliver_with_retry/dispatch_event), perform execute/commit on that
per-task session, and ensure the session is closed (or rolled back on
exceptions) in a finally block to avoid session-sharing concurrency violations.
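The one-session-per-task pattern the prompt describes can be sketched with stand-ins. `FakeSession` and `session_factory` below are hypothetical substitutes for `AsyncSession` and the project's `async_session_factory`; the `in_use` flag models the "not concurrent-safe" property:

```python
import asyncio
import contextlib

class FakeSession:
    """Stand-in for an AsyncSession; trips if two tasks commit it at once."""
    def __init__(self):
        self.in_use = False
    async def commit(self):
        assert not self.in_use, "same session committed from two tasks at once"
        self.in_use = True
        await asyncio.sleep(0)  # yield, giving other tasks a chance to collide
        self.in_use = False

@contextlib.asynccontextmanager
async def session_factory():  # stand-in for async_session_factory
    session = FakeSession()
    try:
        yield session
    finally:
        pass  # a real factory would close/rollback here

async def record_delivery(webhook_id: int) -> None:
    # One session per task: no cross-task sharing, so commits never interleave.
    async with session_factory() as session:
        await session.commit()

async def dispatch():
    await asyncio.gather(*(record_delivery(i) for i in range(5)))

asyncio.run(dispatch())  # completes without tripping the in_use assertion
```

Sharing a single `FakeSession` across the gathered tasks would trip the assertion, which is exactly the hazard the review flags for the shared `self._db`.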
```python
@pytest.mark.asyncio
async def test_unregister_endpoint_returns_204():
    """DELETE /webhooks/{id} returns 204 on success."""
    from app.auth import get_current_user_id
    from app.database import get_db

    wh_id = str(uuid.uuid4())
    mock_service = AsyncMock()
    mock_service.unregister.return_value = None

    with patch(
        "app.api.contributor_webhooks.ContributorWebhookService",
        return_value=mock_service,
    ):
        test_app = FastAPI()
        test_app.include_router(webhook_router)
        test_app.dependency_overrides[get_current_user_id] = lambda: TEST_USER_ID
        test_app.dependency_overrides[get_db] = _fake_db

        async with AsyncClient(
            transport=ASGITransport(app=test_app), base_url="http://test"
        ) as client:
            response = await client.delete(f"/webhooks/{wh_id}")
            assert response.status_code == 204


@pytest.mark.asyncio
async def test_unregister_endpoint_returns_404():
    """DELETE /webhooks/{id} returns 404 when webhook not found."""
    from app.auth import get_current_user_id
    from app.database import get_db

    mock_service = AsyncMock()
    mock_service.unregister.side_effect = WebhookNotFoundError("not found")

    with patch(
        "app.api.contributor_webhooks.ContributorWebhookService",
        return_value=mock_service,
    ):
        test_app = FastAPI()
        test_app.include_router(webhook_router)
        test_app.dependency_overrides[get_current_user_id] = lambda: TEST_USER_ID
        test_app.dependency_overrides[get_db] = _fake_db

        async with AsyncClient(
            transport=ASGITransport(app=test_app), base_url="http://test"
        ) as client:
            response = await client.delete(f"/webhooks/{uuid.uuid4()}")
            assert response.status_code == 404
```
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Check how the API endpoint handles webhook_id path parameter conversion
rg -n 'webhook_id' backend/app/api/contributor_webhooks.py -A 5 -B 2
```

🏁 Script executed:

```shell
#!/bin/bash
# Find the ContributorWebhookService implementation
fd -t f 'contributor_webhooks' backend/app --type f | head -20
```

🏁 Script executed:

```shell
#!/bin/bash
# Look for the unregister method in the service
rg -n 'def unregister' backend/app -A 10
```

🏁 Script executed:

```shell
#!/bin/bash
# Check if there's UUID validation or conversion in the service or endpoint
rg -n 'UUID|uuid' backend/app/api/contributor_webhooks.py -A 2 -B 2
```

🏁 Script executed:

```shell
#!/bin/bash
# Search for exception handlers in the app
rg -n 'exception_handler|@app\.' backend/app -A 5 | head -50
```

🏁 Script executed:

```shell
#!/bin/bash
# Check if there's a Pydantic UUID validator or FastAPI Path UUID validation
rg -n 'Path.*UUID|validators.*UUID' backend/app -B 2 -A 2
```

🏁 Script executed:

```shell
#!/bin/bash
# Check the test file to see if malformed UUID test already exists
rg -n 'malformed|invalid.*uuid|400|422' backend/tests/test_contributor_webhooks.py -i
```

🏁 Script executed:

```shell
#!/bin/bash
# Get full details of the ValueError exception handler
sed -n '340,350p' backend/app/main.py
```

🏁 Script executed:

```shell
#!/bin/bash
# Check if there are tests for malformed UUID in other endpoints
rg -n 'invalid.*uuid|uuid.*invalid|malformed' backend/tests -i -A 3
```
Add test for malformed webhook_id in DELETE endpoint.
The DELETE endpoint lacks a test for invalid UUID format in the path parameter. While the API correctly handles this case via the global ValueError exception handler (returns 400 status), adding an explicit test would improve coverage and align with the pattern used in other endpoints (e.g., test_get_bounty_invalid_uuid in test_bounty_edge_cases.py).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/tests/test_contributor_webhooks.py` around lines 476 - 524, Add a new
async test (e.g., test_unregister_endpoint_returns_400_for_malformed_uuid) that
calls DELETE /webhooks/{id} with a non-UUID string (e.g., "not-a-uuid") and
asserts a 400 response; set up the FastAPI app the same way as the existing
tests by including webhook_router and overriding dependencies
get_current_user_id and get_db (use TEST_USER_ID and _fake_db), use AsyncClient
with ASGITransport to send the request, and you may patch
ContributorWebhookService to an AsyncMock like in the other tests but the
malformed UUID should trigger the global ValueError handler before the service
is invoked.
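The mechanism behind the suggested 400 test can be shown without spinning up FastAPI: a malformed id makes `uuid.UUID` raise `ValueError`, which (per the review's findings) the app's global handler maps to a 400. `parse_webhook_id` is an illustrative helper, not code from the PR:

```python
import uuid

def parse_webhook_id(raw: str) -> uuid.UUID:
    """Mirror what the endpoint does implicitly: UUID(raw) raises
    ValueError on malformed input before the service is ever invoked."""
    return uuid.UUID(raw)

# Well-formed ids round-trip cleanly.
wh = "12345678-1234-5678-1234-567812345678"
assert str(parse_webhook_id(wh)) == wh

# Malformed ids raise ValueError -> global handler -> 400.
status = None
try:
    parse_webhook_id("not-a-uuid")
except ValueError:
    status = 400  # what the global ValueError handler would return
assert status == 400
```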
Closes #475
Summary
Implements a complete outbound webhook notification system so contributors can receive HTTP POST callbacks when bounty lifecycle events occur.
New Files
backend/app/models/contributor_webhook.py
backend/app/services/contributor_webhook_service.py
backend/app/api/contributor_webhooks.py
backend/tests/test_contributor_webhooks.py

Endpoints

POST /api/webhooks/register
DELETE /api/webhooks/{id}
GET /api/webhooks

Features

- Events: bounty.claimed, review.started, review.passed, review.failed, bounty.paid
- X-SolFoundry-Signature: sha256=<hex> sent on every request
- http:// URLs rejected at validation (422)
- last_delivery_at, last_delivery_status, failure_count tracked per webhook
- Unregister is a soft delete (active=False), preserving history

Webhook Payload
{ "event": "bounty.paid", "bounty_id": "<uuid>", "timestamp": "2026-03-23T00:00:00Z", "data": {} }

Signature header:

X-SolFoundry-Signature: sha256=<hmac-hex>

Wallet

Solana: 61FYMEPXMe73ypR53wMAR7PYAWHhZWKFJMNKnG9NwoW
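A receiver can verify the signature header with the standard library alone. This is a minimal sketch assuming the service signs the raw request body with HMAC-SHA256 as described above; the secret value is hypothetical and would come from webhook registration:

```python
import hashlib
import hmac
import json

def sign_payload(secret: bytes, payload: bytes) -> str:
    """Produce the header value the service sends: sha256=<hmac-hex>."""
    digest = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return f"sha256={digest}"

def verify_signature(secret: bytes, payload: bytes, header: str) -> bool:
    """Receiver side: recompute and compare in constant time."""
    return hmac.compare_digest(sign_payload(secret, payload), header)

secret = b"example-webhook-secret"  # hypothetical registration secret
body = json.dumps({
    "event": "bounty.paid",
    "bounty_id": "00000000-0000-0000-0000-000000000000",
    "timestamp": "2026-03-23T00:00:00Z",
    "data": {},
}).encode()

header = sign_payload(secret, body)
assert header.startswith("sha256=")
assert verify_signature(secret, body, header)
assert not verify_signature(secret, b"tampered", header)
```

`hmac.compare_digest` avoids leaking the match position through timing, which a plain `==` comparison would.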