feat: Real-time WebSocket Server for Issue #17#147
feat: Real-time WebSocket Server for Issue #17#147HuiNeng6 wants to merge 17 commits intoSolFoundry:mainfrom
Conversation
- Add WebSocket connection manager with Redis pub/sub support - Support PR status, comments, reviews, and payout events - Implement heartbeat/ping-pong for connection health - Add subscription management (user, repo, bounty scopes) - Include comprehensive test suite - Add helper functions for easy event broadcasting Closes SolFoundry#17
|
Note Reviews pausedIt looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
📝 WalkthroughWalkthroughIntroduces a real-time WebSocket subsystem: new WebSocket endpoint at /ws (optional JWT auth), subscription parsing for channels ( Estimated code review effort🎯 4 (Complex) | ⏱️ ~75 minutes Possibly related PRs
Suggested labels
🚥 Pre-merge checks | ✅ 4 | ❌ 1❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings. ✨ Finishing Touches🧪 Generate unit tests (beta)
📝 Coding Plan
Comment |
|
Issues found:
Suggestions:
Please address these items and push an update. SolFoundry Review Bot |
Implements a comprehensive contributor dashboard with: ## Features - Summary cards: total earned, active bounties, pending payouts, reputation rank - Active bounties list with deadline countdown + progress indicator - Earnings chart (SVG line chart, last 30 days) - Recent activity feed: bounties claimed, PRs submitted, reviews received, payouts - Notification center: unread notifications with mark-as-read - Quick actions: browse bounties, view leaderboard, check treasury - Settings section: linked accounts, notification preferences, wallet management - 6 tab navigation: Overview, Active Bounties, Earnings, Activity, Notifications, Settings ## Technical Details - Responsive design (mobile-first) - Dark theme with Solana-inspired colors - TypeScript with proper type definitions - 26 passing unit tests - Mock data for development Closes SolFoundry#26 Wallet: Amu1YJjcKWKL6xuMTo2dx511kfzXAxgpetJrZp7N71o7
|
Issues found:
Suggestions:
⏰ 71 hours remaining out of 72h since your first submission. Please address these items and push an update. SolFoundry Review Bot |
- Endpoint /ws with JWT auth - Spec-compliant subscription format - Rate limiting - Enhanced tests Wallet: Amu1YJjcKWKL6xuMTo2dx511kfzXAxgpetJrZp7N71o7
…y#17 spec - Merge upstream/main to resolve conflicts in test-setup.ts and vite.config.ts - Update WebSocket channels to match bounty spec: bounties, prs, payouts, leaderboard - Implement simple subscription format for specific bounty subscriptions - Add JWT authentication support - Add per-connection rate limiting (100 msgs/min) - Update event format to use type field instead of event field - Enhance test coverage for new channel model and authentication - Add python-jose and solders dependencies for auth Wallet: Amu1YJjcKWKL6xuMTo2dx511kfzXAxgpetJrZp7N71o7
修复完成主要改动1. 端点路径对齐
2. 订阅格式对齐
3. 事件格式对齐
4. 通道定义
5. 安全增强
6. 测试覆盖
钱包地址: Amu1YJjcKWKL6xuMTo2dx511kfzXAxgpetJrZp7N71o7 |
✅ Multi-LLM Code Review — APPROVEAggregated Score: 6.7/10 (from 3 models) Model Verdicts
Category Scores (Averaged)
SummaryGPT-5.4: This is a substantial backend addition with real integration effort, but it does not yet read as production-grade for a bounty submission. The main concerns are spec alignment, confidence in correctness under varied WebSocket flows, and whether the test suite truly covers the critical paths required for acceptance. Issues
Suggestions
Reviewed by SolFoundry Multi-LLM Pipeline: GPT-5.4, Gemini 2.5 Pro, Grok 4 Review profile: backend — scoring weights adjusted for this domain |
|
✅ Review Passed! (Score: 6.7/10)
@HuiNeng6, please update your branch: git fetch origin
git merge origin/main
# Resolve any conflicts, then:
git pushOnce conflicts are resolved, the review will re-run automatically and go to approval. SolFoundry Review Bot |
|
Issues found:
Suggestions:
⏰ 71 hours remaining out of 72h since your first submission. Please address these items and push an update. SolFoundry Review Bot |
|
Issues found:
Suggestions:
⏰ 71 hours remaining out of 72h since your first submission. Please address these items and push an update. SolFoundry Review Bot |
- Resolve conflicts in test-setup.ts and vite.config.ts - Update WebSocket implementation to match Issue SolFoundry#17 spec Wallet: Amu1YJjcKWKL6xuMTo2dx511kfzXAxgpetJrZp7N71o7
✅ Multi-LLM Code Review — APPROVEAggregated Score: 6.7/10 (from 3 models) Model Verdicts
Category Scores (Averaged)
SummaryGPT-5.4: This is a substantial feature submission with real integration work and a credible attempt at production concerns like Redis, auth, and rate limiting. However, the implementation does not convincingly satisfy the bounty spec end-to-end, and the evidence for robust tests and runtime correctness is not strong enough for approval. Issues
Suggestions
Reviewed by SolFoundry Multi-LLM Pipeline: GPT-5.4, Gemini 2.5 Pro, Grok 4 Review profile: backend — scoring weights adjusted for this domain |
- Fix F401 errors (unused imports) in dispute.py, test_webhook.py, test_payouts.py - Fix F821 error (undefined name Index) in dispute.py - Fix E702 errors (semicolons) in test_contributors.py - Fix E712 errors (boolean comparisons) in test_notification_api.py, test_webhook.py - Update tsconfig.json to exclude test files from build Wallet: Amu1YJjcKWKL6xuMTo2dx511kfzXAxgpetJrZp7N71o7
✅ Multi-LLM Code Review — APPROVEAggregated Score: 6.6/10 (from 3 models) Model Verdicts
Category Scores (Averaged)
SummaryGPT-5.4: This is a substantial backend feature with real integration work, but it does not yet read as production-ready for a bounty submission. The main concerns are spec alignment, confidence in edge cases, and the lack of clearly demonstrated test rigor for a WebSocket system with auth, rate limiting, and distributed messaging. Issues
Suggestions
Reviewed by SolFoundry Multi-LLM Pipeline: GPT-5.4, Gemini 2.5 Pro, Grok 4 Review profile: backend — scoring weights adjusted for this domain |
- Add VALID_STATUSES constant to bounty.py - Skip test_bounty_search.py (BountySearchService not implemented) - Apply ruff format to all files Wallet: Amu1YJjcKWKL6xuMTo2dx511kfzXAxgpetJrZp7N71o7
✅ Multi-LLM Code Review — APPROVEAggregated Score: 6.7/10 (from 3 models) Model Verdicts
Category Scores (Averaged)
SummaryGPT-5.4: This is a substantial attempt at the WebSocket bounty and it does connect into the backend application, which is a positive sign. However, the submission has protocol alignment gaps, unclear test evidence, and some structural concerns that keep it below the acceptance threshold for a Tier 2 bounty. Issues
Suggestions
Reviewed by SolFoundry Multi-LLM Pipeline: GPT-5.4, Gemini 2.5 Pro, Grok 4 Review profile: backend — scoring weights adjusted for this domain |
… lines - Remove unused datetime import from auth.py - Remove unused Any import from auth_service.py - Fix class definitions to separate lines - Remove unused Channel import from websocket.py - Remove unused info variable assignment - Add ThemeToggle.tsx component Wallet: Amu1YJjcKWKL6xuMTo2dx511kfzXAxgpetJrZp7N71o7
- Change DATABASE_URL in CI to use postgresql+asyncpg:// - Fix Base import in test_bounty_search.py Wallet: Amu1YJjcKWKL6xuMTo2dx511kfzXAxgpetJrZp7N71o7
✅ Multi-LLM Code Review — APPROVEAggregated Score: 6.7/10 (from 2 models)
Model Verdicts
Category Scores (Averaged)
SummaryGPT-5.4: This is a meaningful attempt at a real-time backend feature and it is at least integrated into the FastAPI application. However, the implementation only partially matches the bounty contract, and there are concerns around spec alignment, test depth, and some project-coherence issues in the surrounding changes. Issues
Suggestions
Reviewed by SolFoundry Multi-LLM Pipeline: GPT-5.4, Grok 4 Review profile: backend — scoring weights adjusted for this domain |
There was a problem hiding this comment.
Actionable comments posted: 16
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/api/websocket.py`:
- Around line 223-231: The loop currently sends RATE_LIMITED and immediately
continues, causing a tight error loop; after sending the error for an over-limit
client (where manager.check_rate_limit(websocket) is False and you call
send_error with manager.get_rate_limit_remaining), close or pause the connection
instead of continuing: call await websocket.close() (or await
websocket.close(code=1008) for policy violation) immediately after send_error
(and remove the bare continue), or alternatively await
websocket.receive()/receive_text() to consume the next frame if you intend to
keep the socket open; update the block around manager.check_rate_limit,
send_error, and manager.get_rate_limit_remaining accordingly.
In `@backend/app/services/websocket_manager.py`:
- Around line 543-559: The broadcast loops iterate the mutable set
self._channel_subscribers[channel] while awaiting websocket.send_json, which can
raise RuntimeError if the set is mutated concurrently; to fix, snapshot the
subscribers before awaiting by replacing "subscribers =
self._channel_subscribers.get(channel, set())" with a stable iterable (e.g.,
list(subscribers) or tuple(subscribers)) and iterate that snapshot, preserve the
exclude check and use self._ws_to_info/get and info.is_alive as before; apply
the same change to the other broadcast loop referenced in _redis_listener to
avoid iteration-over-mutation during await.
- Around line 329-335: disconnect() currently iterates info.subscriptions and
unconditionally calls await self._pubsub.unsubscribe(channel) on the shared
self._pubsub which removes the process-wide subscription even if other local
sockets still need it; change the logic to track per-channel local subscriber
counts (or consult the connection registry) and only call
self._pubsub.unsubscribe(channel) when the departing connection is the last
local subscriber for that channel—use the existing info.subscriptions, the
disconnect() method, and the shared self._pubsub as reference points to
decrement a channel->count map (or check other connection infos) before
unsubscribing so other local clients keep receiving Redis messages.
- Around line 379-385: The Redis-backed broadcasts are being double-delivered
because broadcast_event() both sends directly to local sockets and publishes the
same payload to Redis which _redis_listener() then re-broadcasts to those same
local subscribers; fix by adding an origin/process identifier to messages
published to Redis (e.g. generate self._instance_id on manager init) and have
_redis_listener() ignore messages whose origin matches self._instance_id, or
alternatively only call self._pubsub.subscribe(channel) when there are zero
local subscribers for that channel (use your internal local-subscriber map such
as self._subscribers or subscriber count tracking) so the shared pubsub is not
subscribed for channels with local sockets; update broadcast_event(),
_redis_listener(), and the subscribe logic around self._pubsub.subscribe(...)
accordingly.
- Around line 113-123: ConnectionInfo is mutable and therefore unhashable,
causing a TypeError when adding instances to the set stored in
self._connections; change the connection container to a list (Dict[str,
List[ConnectionInfo]]) and update all usages that call .add(info) /
.remove(info) to use .append(info) and list.remove(info) (and any membership
checks to use "in" on the list), or alternatively make ConnectionInfo immutable
by adding frozen=True to the dataclass if you never mutate its fields—choose one
approach and update references to ConnectionInfo, self._connections, and places
using .add/.remove accordingly.
In `@backend/requirements.txt`:
- Around line 13-14: Remove the unused pyjwt entry from requirements.txt (delete
the line "pyjwt>=2.8,<3.0"), since JWT handling is done via
python-jose[cryptography]; then scan backend/app/services/auth_service.py for
any references to "jwt" or "pyjwt" (e.g., functions/classes like AuthService,
verify_token, create_token) and ensure they import and use python-jose (from
jose import JWTError, jwt) instead of pyjwt; finally run dependency install/CI
to ensure no import errors remain after removal.
In `@backend/tests/test_bounty_search.py`:
- Around line 19-20: The module currently unconditionally skips tests via
pytestmark = pytest.mark.skip(...), hiding all CI coverage for search behavior;
change this to a conditional skip that only disables the module when the
BountySearchService implementation (or import) is unavailable — for example,
attempt to import or probe BountySearchService (or a feature flag) and set
pytestmark = pytest.mark.skipif(not available, reason="BountySearchService not
yet implemented") so tests automatically run once BountySearchService exists.
In `@backend/tests/test_contributors.py`:
- Around line 49-50: Revert the unrelated formatting-only edits in this test
file: restore the original formatting for the test helper calls and any loop
bodies so the only changes in this PR are the WebSocket-related ones (undo the
modifications around the _create("alice") and _create("bob") calls and the other
modified blocks referenced at lines 55-56 and 75-76), and ensure the file ends
with a trailing newline (fix the missing newline at the end of file referenced
at line 100); keep only functional changes for the WebSocket implementation in
this PR or move formatting changes to a separate housekeeping PR.
In `@backend/tests/test_websocket.py`:
- Around line 542-550: The test test_connection_timeout_cleanup only flips
info.is_alive and asserts the count remains 1; change it to exercise the actual
cleanup path by (after setting info.is_alive = False) triggering the manager's
cleanup mechanism (e.g., call or await the cleanup method used by the heartbeat
task such as manager._cleanup_dead_connections or await the heartbeat/timeout
coroutine) and then assert the connection was removed
(manager.get_connection_count() == 0) and that mock_websocket.close (or its
equivalent) was called; reference manager.connect, manager._ws_to_info,
info.is_alive, manager.get_connection_count and the manager cleanup/heartbeat
method to locate the code to invoke.
In `@frontend/src/__mocks__/mockDashboard.ts`:
- Line 18: The mock data contains malformed backslashes used as placeholder
currency symbols (e.g., in the activity with id 'act-1' description "Received
50,000 \ for PR `#134`", another description "50,000 \ has been sent", and the
object with currency: '\'); replace these backslashes with the intended currency
token (for example '$FNDRY', 'FNDRY', or a Unicode symbol) or remove them if no
symbol is needed, and update the corresponding description strings and the
currency property to a valid, consistent symbol across the mock entries (search
for entries with id 'act-1' and other activity objects that include "50,000 \",
and the object using currency: '\'). Ensure no stray backslashes remain so UI
renders correctly.
- Line 1: The file frontend/src/__mocks__/mockDashboard.ts contains a UTF-8 BOM
at the very start (before the import statement) — remove the BOM so the file
begins directly with "import ..." and save the file as UTF-8 without BOM; verify
by opening the file in your editor/IDE (or running a bytes/hex check) to ensure
the first character is no longer the BOM and that the import line (import type {
DashboardData, Notification, Activity, ActiveBounty, DailyEarning } ...) is the
first content.
In `@frontend/src/components/ContributorDashboard.tsx`:
- Around line 518-523: The toggle buttons in ContributorDashboard.tsx lack
accessible names and state info: update the button that calls
onToggleNotification(pref.type) to expose an accessible label and state by
adding an aria-label (or aria-labelledby) that includes pref.type and an
aria-pressed or aria-checked attribute reflecting pref.enabled; ensure the inner
div remains purely presentational (aria-hidden="true") and use pref.type and
pref.enabled to construct the accessible name/state so screen readers know which
preference and whether it is on or off.
- Around line 503-505: The Connect/Disconnect button is only visual and needs an
onClick to perform the action; add an onClick handler to the button in
ContributorDashboard so that when account.connected is true it calls the
disconnect handler (e.g., disconnectAccount or handleDisconnectAccount) for that
account, and when false it calls the connect handler (e.g., connectAccount or
handleConnectAccount) to start the linking flow; also wire any existing async
state (e.g., isConnecting/isDisconnecting) to disable the button and show
progress to avoid double clicks and ensure the account state is updated after
the handler resolves so the UI reflects the new connected status.
- Around line 549-573: The component ContributorDashboard currently ignores
props (userId, walletAddress) and seeds all state from MOCK_* constants and a
hard-coded wallet; replace these hard-coded initializers by wiring the component
to real data: remove the concrete walletAddress default (or change it to
undefined), initialize stats, bounties, activities, notifications, earnings,
linkedAccounts and notificationPrefs to null/empty values, and add a useEffect
that fetches or derives the user's dashboard data using the passed userId and
walletAddress (or receives them from context/props), then set state via
setNotifications / setX state setters; ensure places that render use
loading/empty states while data loads and that ContributorDashboard’s props
drive the API call or data injection instead of MOCK_*.
In `@frontend/vite.config.ts`:
- Line 3: Reformat the single-line Vite config object passed to defineConfig
into a multi-line, human-readable layout: break out the top-level properties
(plugins and test) across multiple lines, place the test nested properties
(globals, environment, setupFiles) each on their own lines, and add trailing
commas as in the expanded example; update the export default defineConfig(...)
call so it's formatted like the suggested multi-line block and still uses
react() and the same setupFiles value.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: ecc7cd12-9fe6-4ae0-9186-a1530b3968a9
📒 Files selected for processing (20)
.github/workflows/ci.ymlbackend/app/api/websocket.pybackend/app/main.pybackend/app/models/dispute.pybackend/app/services/websocket_manager.pybackend/requirements.txtbackend/tests/test_bounty_search.pybackend/tests/test_contributors.pybackend/tests/test_notification_api.pybackend/tests/test_payouts.pybackend/tests/test_webhook.pybackend/tests/test_websocket.pyfrontend/src/__mocks__/mockDashboard.tsfrontend/src/components/ContributorDashboard.test.tsxfrontend/src/components/ContributorDashboard.tsxfrontend/src/test-setup.tsfrontend/src/types/dashboard.tsfrontend/tsconfig.jsonfrontend/vite.config.tspr_body.md
💤 Files with no reviewable changes (1)
- backend/app/models/dispute.py
| async def parse_client_message(data: str) -> Optional[object]: | ||
| """Parse incoming WebSocket message. | ||
|
|
||
| Supports both formats: | ||
| - Simple: {"subscribe": "bounty:42"} | ||
| - Legacy: {"type": "subscribe", "channel": "bounties"} | ||
| """ | ||
| try: | ||
| raw = json.loads(data) | ||
|
|
||
| # Simple format: {"subscribe": "channel"} or {"unsubscribe": "channel"} | ||
| if "subscribe" in raw: | ||
| return SubscribeMessage(channel=raw["subscribe"]) | ||
| elif "unsubscribe" in raw: | ||
| return UnsubscribeMessage(channel=raw["unsubscribe"]) | ||
|
|
||
| # Legacy format with type field | ||
| msg_type = raw.get("type", "").lower() | ||
|
|
||
| if msg_type == "subscribe": | ||
| channel = raw.get("channel") or raw.get("scope", "") | ||
| if raw.get("target_id"): | ||
| channel = f"{channel}:{raw['target_id']}" | ||
| return SubscribeMessage(channel=channel) | ||
| elif msg_type == "unsubscribe": | ||
| channel = raw.get("channel") or raw.get("scope", "") | ||
| if raw.get("target_id"): | ||
| channel = f"{channel}:{raw['target_id']}" | ||
| return UnsubscribeMessage(channel=channel) | ||
| elif msg_type == "pong": | ||
| return PongMessage(ping_id=raw.get("ping_id", "")) | ||
| elif msg_type == "ping": | ||
| return PingMessage() | ||
| else: | ||
| logger.warning(f"Unknown message type: {msg_type}") | ||
| return None | ||
|
|
||
| except json.JSONDecodeError: | ||
| logger.warning("Invalid JSON received") | ||
| return None | ||
| except Exception as e: | ||
| logger.warning(f"Message parse error: {e}") | ||
| return None | ||
|
|
There was a problem hiding this comment.
The documented first-message JWT flow is not implemented.
parse_client_message() only understands subscribe/unsubscribe/ping/pong, and websocket_endpoint() authenticates solely from the query string before manager.connect(). A client that follows the documented {"type":"auth","token":"..."} flow will just get INVALID_MESSAGE and remain anonymous. As per coding guidelines, backend/**: Python FastAPI backend. Analyze thoroughly: Authentication/authorization gaps; API contract consistency with spec.
Also applies to: 171-173, 195-205
| # Check rate limit | ||
| if not manager.check_rate_limit(websocket): | ||
| await send_error( | ||
| websocket, | ||
| error_code="RATE_LIMITED", | ||
| error_message="Too many messages. Please slow down.", | ||
| details={"remaining": manager.get_rate_limit_remaining(websocket)} | ||
| ) | ||
| continue |
There was a problem hiding this comment.
Rate-limited clients fall into a tight error loop.
Once check_rate_limit() returns False, the code sends RATE_LIMITED and immediately continues without reading another frame or closing the socket. That turns one over-limit request into an endless stream of error writes and avoidable CPU burn.
🐛 Minimal fix
while True:
- # Check rate limit
- if not manager.check_rate_limit(websocket):
- await send_error(
- websocket,
- error_code="RATE_LIMITED",
- error_message="Too many messages. Please slow down.",
- details={"remaining": manager.get_rate_limit_remaining(websocket)}
- )
- continue
-
# Receive and process messages from client
data = await websocket.receive_text()
+ if not manager.check_rate_limit(websocket):
+ await send_error(
+ websocket,
+ error_code="RATE_LIMITED",
+ error_message="Too many messages. Please slow down.",
+ details={"remaining": manager.get_rate_limit_remaining(websocket)}
+ )
+ continue📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| # Check rate limit | |
| if not manager.check_rate_limit(websocket): | |
| await send_error( | |
| websocket, | |
| error_code="RATE_LIMITED", | |
| error_message="Too many messages. Please slow down.", | |
| details={"remaining": manager.get_rate_limit_remaining(websocket)} | |
| ) | |
| continue | |
| while True: | |
| # Receive and process messages from client | |
| data = await websocket.receive_text() | |
| if not manager.check_rate_limit(websocket): | |
| await send_error( | |
| websocket, | |
| error_code="RATE_LIMITED", | |
| error_message="Too many messages. Please slow down.", | |
| details={"remaining": manager.get_rate_limit_remaining(websocket)} | |
| ) | |
| continue |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/api/websocket.py` around lines 223 - 231, The loop currently
sends RATE_LIMITED and immediately continues, causing a tight error loop; after
sending the error for an over-limit client (where
manager.check_rate_limit(websocket) is False and you call send_error with
manager.get_rate_limit_remaining), close or pause the connection instead of
continuing: call await websocket.close() (or await websocket.close(code=1008)
for policy violation) immediately after send_error (and remove the bare
continue), or alternatively await websocket.receive()/receive_text() to consume
the next frame if you intend to keep the socket open; update the block around
manager.check_rate_limit, send_error, and manager.get_rate_limit_remaining
accordingly.
| @dataclass | ||
| class ConnectionInfo: | ||
| """Information about an active WebSocket connection.""" | ||
| websocket: WebSocket | ||
| user_id: str | ||
| subscriptions: Set[str] = field(default_factory=set) # Channel names like "bounties", "bounty:42" | ||
| rate_limit: RateLimit = field(default_factory=RateLimit) | ||
| last_ping: datetime = field(default_factory=lambda: datetime.now(timezone.utc)) | ||
| last_pong: datetime = field(default_factory=lambda: datetime.now(timezone.utc)) | ||
| is_alive: bool = True | ||
| authenticated: bool = False |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
python - <<'PY'
from dataclasses import dataclass, field
from typing import Set
`@dataclass`
class ConnectionInfo:
websocket: object
user_id: str
subscriptions: Set[str] = field(default_factory=set)
try:
{ConnectionInfo(object(), "u")}
except Exception as exc:
print(type(exc).__name__, exc)
PY
rg -n "Set\\[ConnectionInfo\\]|add\\(info\\)" backend/app/services/websocket_manager.pyRepository: SolFoundry/solfoundry
Length of output: 224
ConnectionInfo dataclass is unhashable and cannot be stored in a set.
Line 299 attempts to add a ConnectionInfo instance to self._connections[user_id] (declared as Dict[str, Set[ConnectionInfo]] on line 198), but ConnectionInfo is a mutable dataclass without frozen=True. Python does not generate a __hash__ method for mutable dataclasses, causing a TypeError: unhashable type: 'ConnectionInfo' on the first .add(info) call. This blocks all WebSocket connections.
Fix by either:
- Adding
frozen=Trueto the dataclass (if mutation is not needed) - Switching to a list or dict-based connection storage instead of a set
- Implementing custom
__hash__and__eq__methods (not recommended)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/services/websocket_manager.py` around lines 113 - 123,
ConnectionInfo is mutable and therefore unhashable, causing a TypeError when
adding instances to the set stored in self._connections; change the connection
container to a list (Dict[str, List[ConnectionInfo]]) and update all usages that
call .add(info) / .remove(info) to use .append(info) and list.remove(info) (and
any membership checks to use "in" on the list), or alternatively make
ConnectionInfo immutable by adding frozen=True to the dataclass if you never
mutate its fields—choose one approach and update references to ConnectionInfo,
self._connections, and places using .add/.remove accordingly.
| # Unsubscribe from Redis channels | ||
| if self._redis: | ||
| for channel in info.subscriptions: | ||
| try: | ||
| await self._pubsub.unsubscribe(channel) | ||
| except Exception as e: | ||
| logger.warning(f"Failed to unsubscribe from {channel}: {e}") |
There was a problem hiding this comment.
One disconnect can unsubscribe the whole process from a Redis channel.
self._pubsub is shared across all sockets, but disconnect() unsubscribes every channel on the closing connection unconditionally. If two local clients are subscribed to the same channel, the first disconnect stops Redis fan-out for the other one.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/services/websocket_manager.py` around lines 329 - 335,
disconnect() currently iterates info.subscriptions and unconditionally calls
await self._pubsub.unsubscribe(channel) on the shared self._pubsub which removes
the process-wide subscription even if other local sockets still need it; change
the logic to track per-channel local subscriber counts (or consult the
connection registry) and only call self._pubsub.unsubscribe(channel) when the
departing connection is the last local subscriber for that channel—use the
existing info.subscriptions, the disconnect() method, and the shared
self._pubsub as reference points to decrement a channel->count map (or check
other connection infos) before unsubscribing so other local clients keep
receiving Redis messages.
| # Subscribe to Redis channel if available | ||
| if self._redis: | ||
| try: | ||
| await self._pubsub.subscribe(channel) | ||
| logger.debug(f"Subscribed to Redis channel: {channel}") | ||
| except Exception as e: | ||
| logger.warning(f"Failed to subscribe to Redis: {e}") |
There was a problem hiding this comment.
Redis-enabled broadcasts will double-send to local subscribers.
Once a local socket subscribes, this manager subscribes its shared pubsub to that Redis channel. broadcast_event() then both publishes to Redis and sends directly to local sockets, so _redis_listener() will forward the same event back to those same local subscribers a second time.
Also applies to: 532-539
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/services/websocket_manager.py` around lines 379 - 385, The
Redis-backed broadcasts are being double-delivered because broadcast_event()
both sends directly to local sockets and publishes the same payload to Redis
which _redis_listener() then re-broadcasts to those same local subscribers; fix
by adding an origin/process identifier to messages published to Redis (e.g.
generate self._instance_id on manager init) and have _redis_listener() ignore
messages whose origin matches self._instance_id, or alternatively only call
self._pubsub.subscribe(channel) when there are zero local subscribers for that
channel (use your internal local-subscriber map such as self._subscribers or
subscriber count tracking) so the shared pubsub is not subscribed for channels
with local sockets; update broadcast_event(), _redis_listener(), and the
subscribe logic around self._pubsub.subscribe(...) accordingly.
| <button className={`text-xs px-3 py-1 rounded ${account.connected ? 'text-gray-400 bg-gray-700' : 'text-[#14F195] bg-[#14F195]/10'}`}> | ||
| {account.connected ? 'Disconnect' : 'Connect'} | ||
| </button> |
There was a problem hiding this comment.
The account-link buttons are dead controls.
These render actionable “Connect/Disconnect” UI but never invoke anything, so the settings tab advertises an account-linking flow users cannot complete.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/components/ContributorDashboard.tsx` around lines 503 - 505, The
Connect/Disconnect button is only visual and needs an onClick to perform the
action; add an onClick handler to the button in ContributorDashboard so that
when account.connected is true it calls the disconnect handler (e.g.,
disconnectAccount or handleDisconnectAccount) for that account, and when false
it calls the connect handler (e.g., connectAccount or handleConnectAccount) to
start the linking flow; also wire any existing async state (e.g.,
isConnecting/isDisconnecting) to disable the button and show progress to avoid
double clicks and ensure the account state is updated after the handler resolves
so the UI reflects the new connected status.
| <button | ||
| onClick={() => onToggleNotification(pref.type)} | ||
| className={`w-10 h-5 rounded-full transition-colors ${pref.enabled ? 'bg-[#14F195]' : 'bg-gray-700'}`} | ||
| > | ||
| <div className={`w-4 h-4 rounded-full bg-white transform transition-transform ${pref.enabled ? 'translate-x-5' : 'translate-x-0.5'}`} /> | ||
| </button> |
There was a problem hiding this comment.
The notification toggles are unlabeled for assistive tech.
The button itself has no accessible name, and its on/off state is communicated only by color and position. Screen-reader users cannot tell which preference they are toggling or whether it is enabled.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/components/ContributorDashboard.tsx` around lines 518 - 523, The
toggle buttons in ContributorDashboard.tsx lack accessible names and state info:
update the button that calls onToggleNotification(pref.type) to expose an
accessible label and state by adding an aria-label (or aria-labelledby) that
includes pref.type and an aria-pressed or aria-checked attribute reflecting
pref.enabled; ensure the inner div remains purely presentational
(aria-hidden="true") and use pref.type and pref.enabled to construct the
accessible name/state so screen readers know which preference and whether it is
on or off.
| export function ContributorDashboard({ | ||
| userId, | ||
| walletAddress = 'Amu1YJjcKWKL6xuMTo2dx511kfzXAxgpetJrZp7N71o7', | ||
| onBrowseBounties, | ||
| onViewLeaderboard, | ||
| onCheckTreasury, | ||
| }: ContributorDashboardProps) { | ||
| const [activeTab, setActiveTab] = useState<'overview' | 'notifications' | 'settings'>('overview'); | ||
| const [stats] = useState<DashboardStats>(MOCK_STATS); | ||
| const [bounties] = useState<Bounty[]>(MOCK_BOUNTIES); | ||
| const [activities] = useState<Activity[]>(MOCK_ACTIVITIES); | ||
| const [notifications, setNotifications] = useState<Notification[]>(MOCK_NOTIFICATIONS); | ||
| const [earnings] = useState<EarningsData[]>(MOCK_EARNINGS); | ||
|
|
||
| const [linkedAccounts] = useState([ | ||
| { type: 'github', username: 'HuiNeng6', connected: true }, | ||
| { type: 'twitter', username: '', connected: false }, | ||
| ]); | ||
|
|
||
| const [notificationPrefs, setNotificationPrefs] = useState([ | ||
| { type: 'Payout Alerts', enabled: true }, | ||
| { type: 'Review Updates', enabled: true }, | ||
| { type: 'Deadline Reminders', enabled: true }, | ||
| { type: 'New Bounties', enabled: false }, | ||
| ]); |
There was a problem hiding this comment.
This dashboard still ships hard-coded contributor state.
userId is ignored, the component always boots from MOCK_*, and it falls back to a concrete wallet/account identity. In production that means every viewer gets the same fabricated stats, activity, notifications, and linked accounts instead of their own dashboard. As per coding guidelines, frontend/**: React/TypeScript frontend. Check: Integration with existing components.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/components/ContributorDashboard.tsx` around lines 549 - 573, The
component ContributorDashboard currently ignores props (userId, walletAddress)
and seeds all state from MOCK_* constants and a hard-coded wallet; replace these
hard-coded initializers by wiring the component to real data: remove the
concrete walletAddress default (or change it to undefined), initialize stats,
bounties, activities, notifications, earnings, linkedAccounts and
notificationPrefs to null/empty values, and add a useEffect that fetches or
derives the user's dashboard data using the passed userId and walletAddress (or
receives them from context/props), then set state via setNotifications / setX
state setters; ensure places that render use loading/empty states while data
loads and that ContributorDashboard’s props drive the API call or data injection
instead of MOCK_*.
| import { defineConfig } from 'vite'; | ||
| import react from '@vitejs/plugin-react'; | ||
| export default defineConfig({ plugins: [react()], test: { globals: true, environment: 'jsdom', setupFiles: './src/test-setup.ts' } }); | ||
| export default defineConfig({ plugins: [react()], test: { globals: true, environment: 'jsdom', setupFiles: './src/test-setup.ts' } }); No newline at end of file |
There was a problem hiding this comment.
🧹 Nitpick | 🔵 Trivial
Consider multi-line formatting for readability.
The single-line configuration works but reduces readability. This is a minor style preference.
Expanded format suggestion
export default defineConfig({
plugins: [react()],
test: {
globals: true,
environment: 'jsdom',
setupFiles: './src/test-setup.ts',
},
});🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/vite.config.ts` at line 3, Reformat the single-line Vite config
object passed to defineConfig into a multi-line, human-readable layout: break
out the top-level properties (plugins and test) across multiple lines, place the
test nested properties (globals, environment, setupFiles) each on their own
lines, and add trailing commas as in the expanded example; update the export
default defineConfig(...) call so it's formatted like the suggested multi-line
block and still uses react() and the same setupFiles value.
✅ Multi-LLM Code Review — APPROVEAggregated Score: 6.4/10 (from 3 models) Model Verdicts
Category Scores (Averaged)
SummaryGPT-5.4: This is a substantial WebSocket addition with real integration effort, but it does not cleanly satisfy the bounty contract as written. The implementation has notable project-coherence and correctness risks, and the evidence for robust test coverage is insufficient for a Tier 2 backend submission. Issues
Suggestions
Reviewed by SolFoundry Multi-LLM Pipeline: GPT-5.4, Gemini 2.5 Pro, Grok 4 Review profile: backend — scoring weights adjusted for this domain |
- Format all Python files with ruff format - Add @types/node to frontend dependencies - Add Solana wallet adapter dependencies - Fix useLeaderboard.ts sort field type mismatch - Replace NodeJS.Timeout with ReturnType<typeof setTimeout> - Configure test database to use SQLite in-memory for tests - Add aiosqlite for async SQLite support in tests - Format tsconfig.json and include 'node' type Wallet address: Amu1YJjcKWKL6xuMTo2dx511kfzXAxgpetJrZp7N71o7
|
Issues found:
Suggestions:
⏰ 67 hours remaining out of 72h since your first submission. Please address these items and push an update. SolFoundry Review Bot |
There was a problem hiding this comment.
Actionable comments posted: 5
♻️ Duplicate comments (3)
backend/tests/test_bounty_search.py (1)
20-21:⚠️ Potential issue | 🟠 MajorUnconditional module skip removes all search-test CI coverage.
Line 21 disables the entire suite in all environments, so regressions in search filters/validation will never surface in CI. Gate this with a real condition (e.g., service availability) so tests auto-enable when implementation exists.
As per coding guidelines, backend changes must maintain API contract consistency and edge-case/error coverage; unconditional skipping removes that validation signal.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/tests/test_bounty_search.py` around lines 20 - 21, The module-level unconditional skip via pytestmark removes CI coverage; change the module-level marker to a conditional skip using pytest.mark.skipif so tests run when BountySearchService is available/implemented (e.g., check importability or a feature-flag), replacing the current pytestmark = pytest.mark.skip(...) with a skipif that evaluates something like not is_bounty_search_available() or not hasattr(BountySearchService, "search") so tests auto-enable once the service exists; update any helper function is_bounty_search_available or the conditional logic near the module-level pytestmark and keep the reason text intact.backend/app/api/websocket.py (2)
221-230:⚠️ Potential issue | 🔴 CriticalOver-limit clients spin in a tight server loop.
After the limit is exceeded, this branch writes
RATE_LIMITEDand immediatelycontinues without consuming another frame or closing the socket. One noisy client can turn into an endless error-write loop and unnecessary CPU churn.As per coding guidelines,
backend/**: Python FastAPI backend. Analyze thoroughly: Error handling and edge case coverage.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/api/websocket.py` around lines 221 - 230, The RATE_LIMITED branch currently sends an error and immediately continues, causing a tight CPU loop; update the loop in websocket handler to after send_error either (a) await consumption of the next frame from the client (e.g., await websocket.receive() or receive_text()) to avoid spinning, or (b) apply a short backoff (await asyncio.sleep(...)) and/or close the connection for persistent abuse; modify the code around manager.check_rate_limit(websocket), send_error(...), and manager.get_rate_limit_remaining(websocket) to perform one of these actions instead of an immediate continue so the server doesn't busy-loop on an over-limit client.
91-109:⚠️ Potential issue | 🟠 MajorImplement the documented first-message auth flow or remove it from the contract.
The docstring promises
{"type":"auth","token":"..."}, but the connection is fully authenticated beforemanager.connect()from the query string only, andparse_client_message()never returns an auth message. A client that follows the documented flow stays anonymous for the lifetime of the socket, which also leaves the less-safe URL-token path as the only working option.As per coding guidelines,
backend/**: Python FastAPI backend. Analyze thoroughly: Authentication/authorization gaps; API contract consistency with spec.Also applies to: 170-205
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/api/websocket.py` around lines 91 - 109, The parser never handles the documented {"type":"auth","token":"..."} first-message flow, so clients sending an auth message stay anonymous; update parse_client_message to recognize msg_type == "auth" and return an AuthMessage(token=raw.get("token","")) (alongside existing SubscribeMessage/UnsubscribeMessage/PingMessage/PongMessage branches), and then update the connection handling around manager.connect to allow an initially unauthenticated socket and consume the AuthMessage to perform authentication (e.g., validate token, set user/session state on the connection and call the existing authentication logic that manager.connect currently performs when using the query-string token) so the documented message-based auth flow works and the URL-token path is no longer the sole auth mechanism.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/api/websocket.py`:
- Around line 146-149: authenticate_user currently treats an empty query param
(`?token=`) as authenticated because it checks `token is not None`; change the
logic to treat falsy/empty tokens as unauthenticated by testing token truthiness
(e.g., use `if not token:` for the early return and set `authenticated =
bool(token) and user_id is not None`), and ensure the same fix is applied in the
other block around lines 195-205 so manager.connect() never receives
authenticated=True for an empty token; update references in authenticate_user
and where `authenticated` is computed before calling `manager.connect()`.
- Around line 309-321: The /ws/stats endpoint (router.get("/ws/stats") function
get_websocket_stats) exposes operational telemetry without any authentication;
wrap this route with the existing auth dependency (e.g., add a
Depends(get_current_user) or Depends(get_current_active_admin) as used
elsewhere) so only authenticated/authorized callers can access
manager.get_stats(), import the appropriate dependency if missing, and enforce
the required role/permission check (admin or monitoring role) so anonymous
requests receive a 401/403 instead of the stats.
- Around line 82-103: Reject non-string channel payloads before constructing
SubscribeMessage/UnsubscribeMessage: when handling the simple form (keys
"subscribe" or "unsubscribe") validate that raw["subscribe"] /
raw["unsubscribe"] is an instance of str and return an INVALID_MESSAGE (or
equivalent error path) if not; do the same in the legacy branch before calling
.lower() or concatenating target_id — validate that raw.get("channel") or
raw.get("scope") is a str (and that target_id is str if you plan to append it)
and normalize safely, otherwise return INVALID_MESSAGE. Apply the identical
checks where legacy parsing is duplicated (the other block noted around lines
244-279) so manager.parse_subscription() never receives non-string values.
- Around line 346-358: broadcast_bounty_event currently calls
manager.broadcast_event twice which can double-count sockets subscribed to both
"bounty:{bounty_id}" and "bounties"; change the implementation to deduplicate
recipients before sending: either extend manager.broadcast_event to accept and
return a set of recipient IDs (or accept an optional seen_set) so subsequent
calls skip already-notified connections, or fetch subscribers for
"bounty:{bounty_id}" and "bounties" via the manager (e.g.,
manager.get_subscribers or similar), union the two sets, and call a single send
for each unique connection; also update tests (test_broadcast_bounty_event,
test_broadcast_pr_event, test_broadcast_payout_event) to assert exact delivery
counts (exactly one per unique socket) and/or verify each socket received
exactly one message instead of using count >= 1.
In `@backend/tests/test_bounty_search.py`:
- Around line 17-18: The test imports the Pydantic BountyDB but then uses
SQLAlchemy session APIs (Base.metadata and db_session.add(BountyDB(...))),
causing a type mismatch; update the test to use the SQLAlchemy mapped ORM class
(the declarative model that inherits from Base, e.g., Bounty or BountyORM) when
calling Base.metadata and db_session.add, or alternatively keep BountyDB for
validation only and convert/construct the mapped ORM instance before adding to
db_session (introduce/from_* conversion helpers if needed) so that
db_session.add always receives an ORM-mapped entity.
---
Duplicate comments:
In `@backend/app/api/websocket.py`:
- Around line 221-230: The RATE_LIMITED branch currently sends an error and
immediately continues, causing a tight CPU loop; update the loop in websocket
handler to after send_error either (a) await consumption of the next frame from
the client (e.g., await websocket.receive() or receive_text()) to avoid
spinning, or (b) apply a short backoff (await asyncio.sleep(...)) and/or close
the connection for persistent abuse; modify the code around
manager.check_rate_limit(websocket), send_error(...), and
manager.get_rate_limit_remaining(websocket) to perform one of these actions
instead of an immediate continue so the server doesn't busy-loop on an
over-limit client.
- Around line 91-109: The parser never handles the documented
{"type":"auth","token":"..."} first-message flow, so clients sending an auth
message stay anonymous; update parse_client_message to recognize msg_type ==
"auth" and return an AuthMessage(token=raw.get("token","")) (alongside existing
SubscribeMessage/UnsubscribeMessage/PingMessage/PongMessage branches), and then
update the connection handling around manager.connect to allow an initially
unauthenticated socket and consume the AuthMessage to perform authentication
(e.g., validate token, set user/session state on the connection and call the
existing authentication logic that manager.connect currently performs when using
the query-string token) so the documented message-based auth flow works and the
URL-token path is no longer the sole auth mechanism.
In `@backend/tests/test_bounty_search.py`:
- Around line 20-21: The module-level unconditional skip via pytestmark removes
CI coverage; change the module-level marker to a conditional skip using
pytest.mark.skipif so tests run when BountySearchService is
available/implemented (e.g., check importability or a feature-flag), replacing
the current pytestmark = pytest.mark.skip(...) with a skipif that evaluates
something like not is_bounty_search_available() or not
hasattr(BountySearchService, "search") so tests auto-enable once the service
exists; update any helper function is_bounty_search_available or the conditional
logic near the module-level pytestmark and keep the reason text intact.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 91a1fe7a-afe5-4d01-9daa-5a6cacedd049
📒 Files selected for processing (6)
.github/workflows/ci.ymlbackend/app/api/auth.pybackend/app/api/websocket.pybackend/app/services/auth_service.pybackend/tests/test_bounty_search.pyfrontend/src/components/layout/ThemeToggle.tsx
💤 Files with no reviewable changes (1)
- backend/app/api/auth.py
| try: | ||
| raw = json.loads(data) | ||
|
|
||
| # Simple format: {"subscribe": "channel"} or {"unsubscribe": "channel"} | ||
| if "subscribe" in raw: | ||
| return SubscribeMessage(channel=raw["subscribe"]) | ||
| elif "unsubscribe" in raw: | ||
| return UnsubscribeMessage(channel=raw["unsubscribe"]) | ||
|
|
||
| # Legacy format with type field | ||
| msg_type = raw.get("type", "").lower() | ||
|
|
||
| if msg_type == "subscribe": | ||
| channel = raw.get("channel") or raw.get("scope", "") | ||
| if raw.get("target_id"): | ||
| channel = f"{channel}:{raw['target_id']}" | ||
| return SubscribeMessage(channel=channel) | ||
| elif msg_type == "unsubscribe": | ||
| channel = raw.get("channel") or raw.get("scope", "") | ||
| if raw.get("target_id"): | ||
| channel = f"{channel}:{raw['target_id']}" | ||
| return UnsubscribeMessage(channel=channel) |
There was a problem hiding this comment.
Reject non-string channel payloads before constructing message objects.
subscribe / unsubscribe values are accepted verbatim here. Payloads like {"unsubscribe": 1} or {"subscribe": {}} can reach manager.parse_subscription() or .lower() later in the loop and drop the socket through the outer exception handler instead of returning INVALID_MESSAGE.
🛡️ Minimal guard
- if "subscribe" in raw:
- return SubscribeMessage(channel=raw["subscribe"])
- elif "unsubscribe" in raw:
- return UnsubscribeMessage(channel=raw["unsubscribe"])
+ if "subscribe" in raw:
+ channel = raw["subscribe"]
+ if not isinstance(channel, str) or not channel.strip():
+ return None
+ return SubscribeMessage(channel=channel)
+ elif "unsubscribe" in raw:
+ channel = raw["unsubscribe"]
+ if not isinstance(channel, str) or not channel.strip():
+ return None
+ return UnsubscribeMessage(channel=channel)Please apply the same validation to legacy channel / scope inputs as well. As per coding guidelines, backend/**: Python FastAPI backend. Analyze thoroughly: Input validation and SQL injection vectors.
Also applies to: 244-279
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/api/websocket.py` around lines 82 - 103, Reject non-string
channel payloads before constructing SubscribeMessage/UnsubscribeMessage: when
handling the simple form (keys "subscribe" or "unsubscribe") validate that
raw["subscribe"] / raw["unsubscribe"] is an instance of str and return an
INVALID_MESSAGE (or equivalent error path) if not; do the same in the legacy
branch before calling .lower() or concatenating target_id — validate that
raw.get("channel") or raw.get("scope") is a str (and that target_id is str if
you plan to append it) and normalize safely, otherwise return INVALID_MESSAGE.
Apply the identical checks where legacy parsing is duplicated (the other block
noted around lines 244-279) so manager.parse_subscription() never receives
non-string values.
backend/app/api/websocket.py
Outdated
| if not token: | ||
| # Allow unauthenticated connections for public channels | ||
| return user_id or f"anonymous_{datetime.now(timezone.utc).timestamp()}" | ||
|
|
There was a problem hiding this comment.
Treat ?token= as unauthenticated, not authenticated.
authenticate_user() falls back to an anonymous ID whenever token is falsy, but authenticated = token is not None and user_id is not None still flips to True for an empty query parameter. That lets a blank credential bypass JWT validation and stores an anonymous socket as authenticated inside manager.connect().
As per coding guidelines, backend/**: Python FastAPI backend. Analyze thoroughly: Authentication/authorization gaps.
Also applies to: 195-205
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/api/websocket.py` around lines 146 - 149, authenticate_user
currently treats an empty query param (`?token=`) as authenticated because it
checks `token is not None`; change the logic to treat falsy/empty tokens as
unauthenticated by testing token truthiness (e.g., use `if not token:` for the
early return and set `authenticated = bool(token) and user_id is not None`), and
ensure the same fix is applied in the other block around lines 195-205 so
manager.connect() never receives authenticated=True for an empty token; update
references in authenticate_user and where `authenticated` is computed before
calling `manager.connect()`.
| @router.get("/ws/stats") | ||
| async def get_websocket_stats(): | ||
| """ | ||
| Get WebSocket connection statistics. | ||
|
|
||
| Returns: | ||
| - total_connections: Total active WebSocket connections | ||
| - unique_users: Number of unique users connected | ||
| - redis_enabled: Whether Redis pub/sub is enabled | ||
| - subscriptions: Total number of active subscriptions | ||
| - channels: Active channels | ||
| """ | ||
| return manager.get_stats() |
There was a problem hiding this comment.
Protect /ws/stats behind authentication.
This route is mounted without any dependency or role check, so anonymous callers can inspect live connection counts, active channel names, and Redis enablement. That is operational telemetry and should not be public by default.
As per coding guidelines, backend/**: Python FastAPI backend. Analyze thoroughly: Authentication/authorization gaps.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/api/websocket.py` around lines 309 - 321, The /ws/stats endpoint
(router.get("/ws/stats") function get_websocket_stats) exposes operational
telemetry without any authentication; wrap this route with the existing auth
dependency (e.g., add a Depends(get_current_user) or
Depends(get_current_active_admin) as used elsewhere) so only
authenticated/authorized callers can access manager.get_stats(), import the
appropriate dependency if missing, and enforce the required role/permission
check (admin or monitoring role) so anonymous requests receive a 401/403 instead
of the stats.
| count += await manager.broadcast_event( | ||
| event_type=event_type, | ||
| data=data, | ||
| channel=f"bounty:{bounty_id}" | ||
| ) | ||
|
|
||
| # Also broadcast to general bounties channel | ||
| if broadcast_all: | ||
| count += await manager.broadcast_event( | ||
| event_type=event_type, | ||
| data=data, | ||
| channel="bounties" | ||
| ) |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -e
echo "=== ConnectionManager implementation ==="
nl -ba backend/app/services/websocket_manager.py | sed -n '1,260p'
echo
echo "=== Broadcast paths ==="
rg -n -C2 'broadcast_event\(|broadcast_to_user\(' \
backend/app/api/websocket.py \
backend/app/services/websocket_manager.py \
backend/tests/test_websocket.py
echo
echo "=== Overlapping subscription coverage ==="
rg -n -C2 'bounty:|bounties|prs|payouts|broadcast_bounty_event|broadcast_pr_event|broadcast_payout_event' \
backend/tests/test_websocket.pyRepository: SolFoundry/solfoundry
Length of output: 14306
🏁 Script executed:
cat -n backend/app/services/websocket_manager.py | sed -n '470,550p'Repository: SolFoundry/solfoundry
Length of output: 3307
🏁 Script executed:
cat -n backend/app/services/websocket_manager.py | sed -n '540,570p'Repository: SolFoundry/solfoundry
Length of output: 1365
🏁 Script executed:
cat -n backend/app/api/websocket.py | sed -n '340,360p'Repository: SolFoundry/solfoundry
Length of output: 712
🏁 Script executed:
rg -n -A5 -B2 'assert.*count|assert.*sent|receive_json' backend/tests/test_websocket.py | grep -A10 -B5 'broadcast_bounty\|broadcast_pr\|broadcast_payout'Repository: SolFoundry/solfoundry
Length of output: 940
Add deduplication for overlapping channel subscriptions and strengthen test assertions.
The concern is valid: broadcast_bounty_event() calls broadcast_event() twice (once for bounty:{bounty_id}, once for bounties), and each call returns its own count. A socket subscribed to both channels will receive the event twice, and the accumulated count will over-report actual notified connections.
The test at line 465-479 already demonstrates this overlap scenario—it subscribes the same socket to both "bounties" and "bounty:42", with the comment "Should receive on both bounties and bounty:42 channels"—but the assertion count >= 1 is too loose to catch duplicate delivery. Similar issues exist in test_broadcast_pr_event (line 495) and test_broadcast_payout_event (line 511), both using the same weak assertion.
Either add recipient deduplication in broadcast_event() across sequential calls, or strengthen test assertions to explicitly validate that overlapping subscriptions deliver exactly one copy per socket.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/api/websocket.py` around lines 346 - 358, broadcast_bounty_event
currently calls manager.broadcast_event twice which can double-count sockets
subscribed to both "bounty:{bounty_id}" and "bounties"; change the
implementation to deduplicate recipients before sending: either extend
manager.broadcast_event to accept and return a set of recipient IDs (or accept
an optional seen_set) so subsequent calls skip already-notified connections, or
fetch subscribers for "bounty:{bounty_id}" and "bounties" via the manager (e.g.,
manager.get_subscribers or similar), union the two sets, and call a single send
for each unique connection; also update tests (test_broadcast_bounty_event,
test_broadcast_pr_event, test_broadcast_payout_event) to assert exact delivery
counts (exactly one per unique socket) and/or verify each socket received
exactly one message instead of using count >= 1.
| from app.models.bounty import BountyDB | ||
| from app.database import Base |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Confirm what BountyDB inherits from
rg -n -C3 'class BountyDB|BaseModel|__tablename__' backend/app/models/bounty.py
# Confirm how BountyDB is used in this test file
rg -n -C2 'from app.models.bounty import BountyDB|db_session\.add\(|Base\.metadata\.(create_all|drop_all)' backend/tests/test_bounty_search.pyRepository: SolFoundry/solfoundry
Length of output: 5893
BountyDB import is incompatible with SQLAlchemy session usage in this test module.
Line 17 imports BountyDB, which inherits from Pydantic BaseModel (not SQLAlchemy declarative base). The test file then calls db_session.add(BountyDB(...)) at lines 107, 124, 141, 159, 178, 197, 209, 238, 256, 270, 322, 337, 351, and 364. SQLAlchemy's session.add() expects mapped ORM entities, not Pydantic models. This will fail when the skip is lifted. The test setup also relies on Base.metadata (SQLAlchemy ORM) at lines 39, 42, and 77, assuming BountyDB is a mapped table entity—another incompatibility. Either define BountyDB as a SQLAlchemy mapped class or create a separate Pydantic-based test model distinct from the ORM layer.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/tests/test_bounty_search.py` around lines 17 - 18, The test imports
the Pydantic BountyDB but then uses SQLAlchemy session APIs (Base.metadata and
db_session.add(BountyDB(...))), causing a type mismatch; update the test to use
the SQLAlchemy mapped ORM class (the declarative model that inherits from Base,
e.g., Bounty or BountyORM) when calling Base.metadata and db_session.add, or
alternatively keep BountyDB for validation only and convert/construct the mapped
ORM instance before adding to db_session (introduce/from_* conversion helpers if
needed) so that db_session.add always receives an ORM-mapped entity.
SQLite does not support connection pool settings. Added check to skip pool_size, max_overflow, pool_timeout for SQLite. Wallet: Amu1YJjcKWKL6xuMTo2dx511kfzXAxgpetJrZp7N71o7
|
Issues found:
Suggestions:
⏰ 67 hours remaining out of 72h since your first submission. Please address these items and push an update. SolFoundry Review Bot |
There was a problem hiding this comment.
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
backend/app/models/dispute.py (2)
46-46: 🧹 Nitpick | 🔵 TrivialConsider using explicit callable for JSON column default.
While
default=listis safe in SQLAlchemy (it treats the type as a callable), using an explicit callable likedefault=lambda: []orserver_default='[]'is clearer and follows best practices for JSON columns.♻️ Recommended refactor for clarity
- evidence_links = Column(JSON, default=list, nullable=False) + evidence_links = Column(JSON, default=lambda: [], nullable=False)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/models/dispute.py` at line 46, The JSON column evidence_links in the Dispute model uses default=list; replace this with an explicit callable to avoid ambiguity by setting default=lambda: [] (or use server_default='[]' if you prefer a DB-side default) so each new row gets its own empty list; update the Column definition for evidence_links accordingly in the Dispute class.
110-115:⚠️ Potential issue | 🟡 MinorStrengthen UUID validation in bounty_id field validator.
The current validator only converts values to strings without validating UUID format. Since
bounty_idmaps to aGUID()column, invalid UUID strings will be rejected at the database layer, but catching format errors at the Pydantic validation layer provides better error messages and prevents unnecessary database round-trips. As per coding guidelines, input validation should be thorough at the API layer.🛡️ Proposed fix to add UUID format validation
`@field_validator`("bounty_id") `@classmethod` def validate_bounty_id(cls, v): + import uuid + try: + uuid.UUID(str(v)) + except (ValueError, AttributeError): + raise ValueError(f"Invalid UUID format for bounty_id: {v}") if isinstance(v, str): return v return str(v)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/models/dispute.py` around lines 110 - 115, The current field validator validate_bounty_id only coerces non-string values to str; update Dispute.validate_bounty_id to validate UUID format using Python's uuid.UUID() (or equivalent) so invalid values raise a clear ValueError at the Pydantic layer; accept ints/bytes by attempting uuid.UUID(v) or uuid.UUID(str(v)), then return the canonical string (str(uuid_obj)); ensure the validator still returns a string on success and raises a ValueError with a concise message on failure.backend/app/database.py (1)
131-178:⚠️ Potential issue | 🟠 MajorPostgreSQL-only DDL executed unconditionally during database initialization will fail on SQLite and other non-PostgreSQL engines.
Lines 131–178 execute PostgreSQL-specific statements without any database backend detection or error handling:
CREATE OR REPLACE FUNCTION ... LANGUAGE plpgsql(lines 133–142)CREATE TRIGGERwithEXECUTE FUNCTIONsyntax (lines 149–153)GINindex (lines 160–161)to_tsvector()function (line 136)SQLite does not support these constructs. Since
init_db()is called unconditionally during app startup (main.py:23), andconftest.pysetsDATABASE_URLto SQLite for testing, the app will fail to start when running against SQLite engines.Wrap PostgreSQL-specific DDL in conditional logic that detects the database dialect or guard with try/except to allow graceful degradation on unsupported engines.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/database.py` around lines 131 - 178, The PostgreSQL-specific DDL (function update_bounty_search_vector, trigger bounty_search_vector_update, use of to_tsvector, and GIN index ix_bounties_search_vector) in init_db() must be guarded so SQLite tests and other non-Postgres backends don't execute it; detect the dialect (e.g., via the SQLAlchemy engine or connection dialect name) and only run those CREATE FUNCTION / CREATE TRIGGER / CREATE INDEX ... USING GIN statements when dialect == 'postgresql' (or wrap them in a try/except that logs and continues on failure), leaving the rest of init_db() unchanged and idempotent. Ensure the check surrounds the blocks that create update_bounty_search_vector, bounty_search_vector_update, ix_bounties_search_vector and any to_tsvector usage so init_db() can run against SQLite.backend/tests/test_webhook.py (2)
95-117: 🧹 Nitpick | 🔵 TrivialEnvironment variable cleanup may fail on test assertion failure.
The tests manipulate
os.environ["GITHUB_WEBHOOK_SECRET"]and delete it after assertions. If an assertion fails, thedel os.environ["GITHUB_WEBHOOK_SECRET"]cleanup won't execute, potentially affecting subsequent tests.Consider using a pytest fixture or
monkeypatchfor safer environment variable management.♻️ Proposed fix using monkeypatch
`@pytest.mark.asyncio` - async def test_signature_verification_success(self, client): + async def test_signature_verification_success(self, client, monkeypatch): """Test valid signature passes verification.""" payload = json.dumps({"action": "opened"}).encode() signature = create_signature(payload, WEBHOOK_SECRET) - # Set secret in environment - os.environ["GITHUB_WEBHOOK_SECRET"] = WEBHOOK_SECRET + monkeypatch.setenv("GITHUB_WEBHOOK_SECRET", WEBHOOK_SECRET) response = await client.post( "/api/webhooks/github", headers={ "X-GitHub-Event": "pull_request", "X-Hub-Signature-256": signature, "X-GitHub-Delivery": "test-delivery-2", }, content=payload, ) assert response.status_code in [200, 202] - - del os.environ["GITHUB_WEBHOOK_SECRET"]🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/tests/test_webhook.py` around lines 95 - 117, The test test_signature_verification_success sets and deletes os.environ["GITHUB_WEBHOOK_SECRET"] directly which can leak when an assertion fails; update the test to use pytest's monkeypatch (or a fixture) to set the environment variable instead of assigning/deleting it manually (e.g., use monkeypatch.setenv("GITHUB_WEBHOOK_SECRET", WEBHOOK_SECRET) and rely on monkeypatch to restore) or wrap the set/delete in a try/finally so cleanup always runs; locate and modify the test_signature_verification_success function in backend/tests/test_webhook.py to replace direct os.environ manipulation with monkeypatch.setenv or add try/finally cleanup.
118-139: 🧹 Nitpick | 🔵 TrivialSame environment cleanup concern applies here.
This test also manually sets and deletes
os.environ["GITHUB_WEBHOOK_SECRET"]. Usingmonkeypatch.setenvwould ensure automatic cleanup.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/tests/test_webhook.py` around lines 118 - 139, Replace manual os.environ manipulation in test_signature_verification_failure with pytest's monkeypatch: add the monkeypatch fixture to the test_signature_verification_failure function signature, call monkeypatch.setenv("GITHUB_WEBHOOK_SECRET", WEBHOOK_SECRET) before sending the request, and remove the final del os.environ["GITHUB_WEBHOOK_SECRET"]; this ensures automatic cleanup of the GITHUB_WEBHOOK_SECRET environment variable.
♻️ Duplicate comments (12)
backend/tests/test_bounty_search.py (2)
20-21:⚠️ Potential issue | 🟠 MajorUnconditional module skip removes all regression coverage for bounty search.
Line 20 and Line 21 disable the entire suite regardless of environment, so CI no longer validates search/filter behavior.
As per coding guidelines
backend/**: “Analyze thoroughly: ... Error handling and edge case coverage - API contract consistency with spec”.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/tests/test_bounty_search.py` around lines 20 - 21, Remove the unconditional module-level skip that sets pytestmark and instead make the tests conditional so CI still runs them when BountySearchService is implemented: replace the plain pytest.mark.skip with a conditional skip/skipif that checks for the BountySearchService availability or a feature flag (or mark the tests xfail until implemented), keeping the test module and its assertions intact; reference the pytestmark assignment and the BountySearchService symbol when updating the test module so tests execute in CI when the service exists but are skipped in environments where it is legitimately missing.
17-18:⚠️ Potential issue | 🟠 Major
BountyDBimport remains incompatible with SQLAlchemy session usage in this test module.Line 17 imports
BountyDB, but test bodies add it viadb_session.add(...)and rely on SQLAlchemy metadata flow; this mismatch is still present and will fail once the module skip is removed.#!/bin/bash set -euo pipefail # Confirm BountyDB type and test usage patterns rg -n -C3 'class BountyDB' backend/app/models/bounty.py rg -n -C2 'db_session\.add\(BountyDB|Base\.metadata\.(create_all|drop_all)' backend/tests/test_bounty_search.py🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/tests/test_bounty_search.py` around lines 17 - 18, The test imports BountyDB which is incompatible with SQLAlchemy session operations (db_session.add) and metadata flow; update the import to use the actual declarative ORM model class used by the session (e.g., the SQLAlchemy model class named Bounty) from app.models.bounty so db_session.add(Bounty(...)) and Base.metadata.create_all/drop_all work correctly, or alternatively adapt the test to construct and add the correct SQLAlchemy-mapped instance instead of BountyDB; ensure references to BountyDB in the file are replaced by the mapped model class and that Base is used to manage metadata as before.backend/app/api/websocket.py (6)
336-349:⚠️ Potential issue | 🟠 MajorProtect
/ws/statsbehind authentication/authorization.This route exposes live connection counts, active channel names, and Redis enablement to any caller. That is operational telemetry and should not be public by default.
As per coding guidelines,
backend/**: Python FastAPI backend. Analyze thoroughly: Authentication/authorization gaps.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/api/websocket.py` around lines 336 - 349, The /ws/stats endpoint currently returns operational telemetry publicly; modify the router.get("/ws/stats") route and the get_websocket_stats function to require authentication and appropriate authorization before returning manager.get_stats(). Add a FastAPI dependency (e.g., Depends(get_current_user) or Depends(get_current_active_user)) to the route or function signature, then validate the caller's role/permissions (e.g., is_admin flag or specific scope) and raise HTTPException(status_code=403) if unauthorized; only call and return manager.get_stats() for authorized users.
79-113:⚠️ Potential issue | 🟠 MajorThe documented
{"type":"auth"}flow is still not implemented.The docs on Lines 177-178 promise first-message JWT auth, but the handler authenticates only from the query string before
manager.connect(), andparse_client_message()has no auth branch. A client following that contract is connected anonymously and then getsINVALID_MESSAGEfor its auth frame.As per coding guidelines,
backend/**: Python FastAPI backend. Analyze thoroughly: Authentication/authorization gaps; API contract consistency with spec.Also applies to: 171-178, 200-214
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/api/websocket.py` around lines 79 - 113, parse_client_message currently lacks support for the documented {"type":"auth"} handshake so clients attempting first-message JWT auth are treated as anonymous; add handling for msg_type == "auth" to parse and return an AuthMessage (e.g., AuthMessage(token=raw.get("token") or raw.get("jwt"))) similar to how PongMessage/PingMessage are returned, and ensure the rest of the websocket connection code (the consumer that calls parse_client_message and manager.connect()) consumes that AuthMessage to authenticate the session before treating the socket as anonymous; reference parse_client_message, AuthMessage (create if absent), and the connection/authentication branch that uses manager.connect().
235-245:⚠️ Potential issue | 🔴 CriticalOver-limit clients never consume another frame.
Because the limiter is checked before
receive_text(), oncecheck_rate_limit()turns false this loop sendsRATE_LIMITEDand immediately continues without reading a new message or closing the socket. One abusive client can therefore trigger a tight error-write loop and avoidable CPU churn.As per coding guidelines,
backend/**: Python FastAPI backend. Analyze thoroughly: Error handling and edge case coverage.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/api/websocket.py` around lines 235 - 245, The loop currently checks manager.check_rate_limit(websocket) before reading a message, causing a tight write-only loop when over limit; change the logic to first await websocket.receive_text() (or otherwise consume the incoming frame) and then call manager.check_rate_limit(websocket) so an over-limit client cannot avoid consuming frames; if you prefer to keep the pre-check, explicitly await websocket.receive_text() (or close via await websocket.close()) when rate-limited to drain the client and prevent CPU spin. Update the loop around websocket.receive_text(), manager.check_rate_limit, manager.get_rate_limit_remaining, and send_error accordingly.
89-107:⚠️ Potential issue | 🟠 MajorReject non-string channel payloads before building subscribe/unsubscribe messages.
Payloads like
{"subscribe": 1}or{"unsubscribe": {}}are accepted verbatim here. The endpoint later calls.lower()onmessage.channel, which raises and drops the socket through the outer exception path instead of returningINVALID_MESSAGE. The same guard is needed in the legacychannel/scopepath before concatenatingtarget_id.As per coding guidelines,
backend/**: Python FastAPI backend. Analyze thoroughly: Input validation and SQL injection vectors.Also applies to: 280-303
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/api/websocket.py` around lines 89 - 107, Reject non-string channel inputs before constructing SubscribeMessage/UnsubscribeMessage: when handling the simple form (checking "subscribe" or "unsubscribe") ensure raw["subscribe"] / raw["unsubscribe"] is an instance of str and otherwise return/raise the same INVALID_MESSAGE path; likewise in the legacy branch (msg_type, channel = raw.get("channel") or raw.get("scope", "")) validate that channel is a string and that raw.get("target_id") is a string before concatenating f"{channel}:{raw['target_id']}" so you never build a non-str message.channel used later by .lower(); update the code around msg_type, SubscribeMessage, and UnsubscribeMessage construction to perform these type checks and short-circuit on invalid types.
150-152:⚠️ Potential issue | 🟠 Major
?token=is treated as authenticated.
authenticate_user()returns an anonymous ID whentokenis falsy, but Line 202 marks any non-Nonetoken as authenticated. An empty query parameter therefore bypasses JWT validation, stores an anonymous socket as authenticated, and skips the invalid-token close path.As per coding guidelines,
backend/**: Python FastAPI backend. Analyze thoroughly: Authentication/authorization gaps.Also applies to: 200-205
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/api/websocket.py` around lines 150 - 152, authenticate_user() currently treats falsy tokens (like an empty "?token=") as anonymous IDs while the websocket auth check later treats any non-None token as authenticated; change the logic so empty string tokens do not count as provided. Concretely: update authenticate_user(token, ...) to return None (not an anonymous id) when token is falsy/empty, or trim and validate token presence before creating an anonymous id; and update the websocket connection auth branch (the code that sets the connection as authenticated / checks token presence and performs JWT validation) to require a truthy token (e.g., if token) rather than merely token is not None so empty query values trigger the invalid-token close path. Ensure the anonymous connection path still works for truly absent tokens by using the explicit None case from authenticate_user().
368-379:⚠️ Potential issue | 🟠 MajorThese helpers can double-deliver the same event to one socket.
broadcast_bounty_event()notifies bothbounty:{id}andbounties, so a socket subscribed to both gets the same payload twice.broadcast_pr_event()andbroadcast_payout_event()have the same problem when the target user's socket is also subscribed to the public channel, and the summedcountoverstates unique recipients in all three helpers.Also applies to: 401-410, 432-441
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/api/websocket.py` around lines 368 - 379, The helpers (broadcast_bounty_event, broadcast_pr_event, broadcast_payout_event) currently call manager.broadcast_event for two channels and sum counts, causing duplicate delivery and an overstated count when a socket is subscribed to both; fix by deduplicating recipients before sending (either by extending manager.broadcast_event to accept multiple channels and internally union connection IDs, or by calling manager.get_subscribers(channel) for each channel, forming a unique set of connection IDs, then sending the event once per unique connection via the manager send method and returning the size of that set). Update the three helper functions to use the deduped send flow (or the new multi-channel manager API) instead of adding returned counts from two separate broadcasts so the payload is delivered once per socket and the count reflects unique recipients.backend/app/services/websocket_manager.py (4)
520-529:⚠️ Potential issue | 🟠 MajorSnapshot these registries before awaiting
send_json().
broadcast_to_user(),broadcast_event(), and_redis_listener()iterate livesetobjects while awaiting network I/O. A concurrentdisconnect()orunsubscribe()can mutate those sets mid-iteration and raiseRuntimeError, which aborts the current fanout and can terminate the Redis listener task entirely.Also applies to: 568-580, 602-607
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/services/websocket_manager.py` around lines 520 - 529, The loops in broadcast_to_user(), broadcast_event(), and _redis_listener() iterate live sets stored on self._connections (and similar registries) and then await info.websocket.send_json(), which allows concurrent disconnect() or unsubscribe() to mutate those sets and raise RuntimeError; fix by snapshotting the registry before awaiting (e.g., assign connections = list(self._connections.get(user_id, set())) or make a shallow list copy of the set/iterable before the for-loop) and then iterate that snapshot so the async send_json() calls cannot be affected by concurrent mutations. Ensure you make the same change for the other occurrences referenced (around lines 568-580 and 602-607) where sets are iterated and awaits occur.
121-134:⚠️ Potential issue | 🔴 Critical
ConnectionInfois unhashable here, so the first connect will fail.
self._connections[user_id]is aset, butConnectionInfois a mutable dataclass and Python does not generate a hash for it. The.add(info)on Line 323 will raiseTypeErrorafterwebsocket.accept(), so every socket fails during registration. A list or a dict keyed bywebsocketis safer here becausesubscriptions,last_ping,last_pong, andis_aliveall mutate over the connection lifetime.Run this read-only check; it should reproduce the
TypeErrorand show the insertion sites.#!/bin/bash python - <<'PY' from dataclasses import dataclass, field from typing import Set `@dataclass` class ConnectionInfo: user_id: str subscriptions: Set[str] = field(default_factory=set) try: s = set() s.add(ConnectionInfo("u1")) except Exception as exc: print(type(exc).__name__, exc) PY rg -n -C2 'Dict\[str, Set\[ConnectionInfo\]\]|\.add\(info\)' backend/app/services/websocket_manager.pyAlso applies to: 217-218, 320-323
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/services/websocket_manager.py` around lines 121 - 134, ConnectionInfo is a mutable dataclass and therefore unhashable, causing TypeError when code does self._connections[user_id].add(info); change the container to a mutable sequence or mapping instead of a set. Update the type of self._connections from Dict[str, Set[ConnectionInfo]] to Dict[str, List[ConnectionInfo]] (or Dict[WebSocket, ConnectionInfo] if you prefer keyed by websocket), change all .add(info) calls to .append(info) (or assignment for a dict), and adjust remove/lookup code that assumed a set to use list iteration or dict access; modifying the container type and usages will avoid hashing the ConnectionInfo while preserving mutability of fields like subscriptions, last_ping, last_pong, and is_alive.
407-410:⚠️ Potential issue | 🟠 MajorRedis-enabled broadcasts are double-delivered to local subscribers.
Once
subscribe()has subscribed the shared pubsub to a channel,broadcast_event()both publishes to Redis and sends directly to the same local sockets._redis_listener()then forwards that published message back to those sockets again, so every local subscriber sees duplicate events when Redis is enabled.Also applies to: 557-563, 601-607
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/services/websocket_manager.py` around lines 407 - 410, broadcast_event() currently both publishes to Redis and directly sends to local sockets, while _redis_listener() also forwards Redis messages to local sockets, causing duplicate deliveries; change the logic so that when Redis is enabled and the pubsub is subscribed to the channel (i.e., _pubsub.subscribe was called), broadcast_event() should only publish the event to Redis (do not call the local send path), and let _redis_listener() perform local delivery, or alternatively, have _redis_listener() ignore messages that originated locally by tagging/payload metadata and skipping re-delivery; update broadcast_event(), _redis_listener(), and any subscribe-related logic (_pubsub.subscribe) to implement one of these approaches so local subscribers receive each event exactly once.
355-361:⚠️ Potential issue | 🟠 MajorTrack Redis subscriptions per channel instead of unsubscribing blindly.
disconnect()unsubscribes the sharedself._pubsubfor every channel on the closing socket, even if other local sockets still need that channel. The explicitunsubscribe()path has the opposite bug: afterdiscard(), thedefaultdictstill retains the key, so the last local unsubscribe never calls RedisUNSUBSCRIBE. Both paths need a remaining-local-subscriber check before touching the process-wide pubsub state.Also applies to: 432-441
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/services/websocket_manager.py` around lines 355 - 361, The disconnect/unsubscribe logic currently calls await self._pubsub.unsubscribe(channel) for every channel in info.subscriptions and the explicit unsubscribe path relies on discard() but leaves keys in the defaultdict, causing premature or missed Redis UNSUBSCRIBE; update the logic in the disconnect() flow and the explicit unsubscribe path (the code handling discard() and subscriptions) to first check the process-local subscriber count for a channel (the shared mapping that tracks local sockets per channel) and only call self._pubsub.unsubscribe(channel) when that count reaches zero, and ensure you remove the channel key from the defaultdict when the last local subscriber is removed so future checks behave correctly; apply this same remaining-local-subscriber check and key deletion in both places referenced (the disconnect() loop using info.subscriptions and the explicit unsubscribe block around discard()).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/database.py`:
- Around line 62-69: The get_db() generator's docstring is incorrect and its
current use of "async with async_session_factory() as session" does not commit
transactions, so DB writes are discarded; update get_db() (and its docstring) to
use a proper Unit of Work by either wrapping the yielded session in an "async
with session.begin():" transaction context (using the session created from
async_session_factory()) or by calling "await session.commit()" after the yield
(and still rolling back on exceptions), ensuring you reference
async_session_factory and get_db to locate the change and keep the existing
exception handling that re-raises on errors.
In `@backend/app/models/dispute.py`:
- Line 43: Add foreign key constraints to the user ID columns in the Dispute
model: change the submitter_id, reviewer_id, and actor_id Column definitions to
reference users.id with ForeignKey(..., ondelete="CASCADE") so they enforce
referential integrity; specifically update submitter_id (currently
Column(GUID(), nullable=False)) to use ForeignKey("users.id",
ondelete="CASCADE") and remain nullable=False, update reviewer_id to use
ForeignKey("users.id", ondelete="CASCADE") and remain nullable=True, and update
actor_id to use ForeignKey("users.id", ondelete="CASCADE") and remain
nullable=False so deletions cascade and match the existing FK patterns for
bounty_id/dispute_id.
In `@backend/app/models/notification.py`:
- Line 50: Update all references so the rename from metadata to extra_data is
consistent: modify the Pydantic schema NotificationBase to accept and serialize
extra_data (map incoming "metadata" JSON to extra_data if needed), update the
API docs in backend/app/api/notifications.py to describe extra_data instead of
metadata, and change the test test_create_notification_with_metadata in
test_notification_api.py to either send "extra_data" or send "metadata" and
assert that the server maps it into NotificationDB.extra_data; ensure
NotificationDB.extra_data field remains the single source of truth and that
serialization/deserialization in the route handlers (and NotificationBase)
reads/writes extra_data so existing clients sending "metadata" are handled or
tests are updated accordingly.
In `@backend/app/models/submission.py`:
- Around line 116-124: The PR URL validator validate_pr_url currently accepts
both "https://" and "http://"; change it to only allow "https://github.com/" and
reject any "http://" inputs by removing or updating the startswith check in the
validate_pr_url method (the `@field_validator` for "pr_url") so it enforces HTTPS
and still checks for "/pull/" before returning the value.
In `@backend/app/services/webhook_processor.py`:
- Around line 125-129: The code calls self._update_bounty_status with
new_status="in_review" and logs that transition, but the bounty model uses
"in_progress"; change the transition target to "in_progress" in the call to
_update_bounty_status and update the related log message (the same logging block
that reports the transition of variable updated/github_issue_number/github_repo)
so the API contract matches the bounty status enum and the log reflects the new
value.
- Around line 324-331: The code references undefined/incorrect symbols and the
wrong model type: replace the undefined VALID_STATUSES check with a check
against the actual enum or transition set (use VALID_STATUS_TRANSITIONS or
derive allowed targets from BountyStatus), stop using the Pydantic BountyDB as
an ORM in database queries (select(BountyDB))—use the SQLAlchemy ORM model class
(e.g., Bounty or whatever inherits from Base) for select/update operations, and
ensure statuses set/read by the webhook (replace "in_review") use the actual
BountyStatus enum members (OPEN, IN_PROGRESS, COMPLETED, PAID) consistently;
update imports in webhook_processor.py to import the correct ORM model and
BountyStatus/VALID_STATUS_TRANSITIONS and change all select(BountyDB)/assignment
sites and literal status strings to use the ORM class and enum values.
In `@backend/app/services/websocket_manager.py`:
- Around line 242-246: The code logs the raw Redis DSN in logger.info which can
leak credentials from self._redis_url; replace that logging with a safe
representation by parsing self._redis_url (using urllib.parse.urlparse) and
either log only scheme/hostname/port or a masked DSN (e.g., mask netloc
password) before calling logger.info, or remove the URL entirely; update the
code around the logger.info call in websocket_manager.py (referencing
self._redis_url, logger.info, and _redis_listener) to emit the sanitized value.
- Around line 201-203: The heartbeat timeout logic is using HEARTBEAT_INTERVAL *
3 instead of the configured HEARTBEAT_TIMEOUT, so stale sockets remain
registered far longer than intended; locate the heartbeat processing routine
(e.g., the method handling pings/pongs or check_heartbeats/_heartbeat_loop in
websocket_manager.py) and replace the incorrect condition that compares
last-pong time against HEARTBEAT_INTERVAL * 3 to use HEARTBEAT_TIMEOUT (or
compute using HEARTBEAT_INTERVAL + HEARTBEAT_TIMEOUT if intended as a sum),
ensure any waiting/scheduling uses HEARTBEAT_INTERVAL for ping cadence and
HEARTBEAT_TIMEOUT for pong deadline, and update any related comments/tests to
reflect the corrected semantics.
In `@backend/tests/test_notification_api.py`:
- Around line 442-444: Replace the non-Pythonic equality check `assert
data["read"] == False` with the idiomatic truthiness assertion `assert not
data["read"]` to match other assertions in the file (e.g., the style used in
earlier tests) and keep assertion style consistent; update the assertion
referencing data["read"] accordingly.
In `@backend/tests/test_websocket.py`:
- Around line 614-636: The test_many_connections function opens 50 websockets by
calling client.websocket_connect and manually invoking __enter__/__exit__, which
can leak connections on exceptions; change it to ensure deterministic cleanup by
wrapping the connection-opening and subsequent asserts in a try/finally (or use
a context manager that yields the connections) so that in the finally block you
iterate over the connections and close each one (e.g., call __exit__ or close)
to guarantee they are always cleaned up even if an assertion fails; keep
references to client.websocket_connect and the connections list so the cleanup
loop still runs.
- Around line 559-566: The test test_invalid_token_rejected is using
pytest.raises(Exception) which is too broad; change it to expect the specific
websocket rejection exception (e.g., pytest.raises(WebSocketDisconnect) from
starlette.websockets) when calling
client.websocket_connect("/ws?token=invalid_token"), and capture the exception
(excinfo) so you can assert the websocket close code/reason (e.g.,
excinfo.value.code or excinfo.value.reason) to verify the connection was
rejected for the intended token/auth reason instead of masking other failures.
In `@frontend/tsconfig.json`:
- Line 12: The tsconfig.json currently restricts ambient types to Node via the
"types": ["node"] entry which prevents Vite client ambient types (e.g.,
import.meta) from being available; update the "types" array to include
"vite/client" (or remove the explicit "types" setting to allow automatic
inclusion) so Vite's globals are available, or alternatively add a triple-slash
directive (/// <reference types="vite/client" />) in a central frontend .d.ts
file to enable Vite client ambient types.
- Line 15: The tsconfig.json currently excludes test files via the "exclude"
array which means tsc --noEmit in CI won't type-check tests; either remove test
globs from the "exclude" array (or add a separate tsconfig.tests.json that
includes src/**/*.test.tsx and src/**/*.test.ts) and run tsc --project
tsconfig.tests.json in CI, or enable TypeScript support in the test runner by
configuring vitest (e.g., update vitest/vite config to use esbuild/ts plugin or
set the "ts" transformer) so tests are transpiled with type checking; update CI
to run the chosen path so test files are statically validated.
---
Outside diff comments:
In `@backend/app/database.py`:
- Around line 131-178: The PostgreSQL-specific DDL (function
update_bounty_search_vector, trigger bounty_search_vector_update, use of
to_tsvector, and GIN index ix_bounties_search_vector) in init_db() must be
guarded so SQLite tests and other non-Postgres backends don't execute it; detect
the dialect (e.g., via the SQLAlchemy engine or connection dialect name) and
only run those CREATE FUNCTION / CREATE TRIGGER / CREATE INDEX ... USING GIN
statements when dialect == 'postgresql' (or wrap them in a try/except that logs
and continues on failure), leaving the rest of init_db() unchanged and
idempotent. Ensure the check surrounds the blocks that create
update_bounty_search_vector, bounty_search_vector_update,
ix_bounties_search_vector and any to_tsvector usage so init_db() can run against
SQLite.
In `@backend/app/models/dispute.py`:
- Line 46: The JSON column evidence_links in the Dispute model uses
default=list; replace this with an explicit callable to avoid ambiguity by
setting default=lambda: [] (or use server_default='[]' if you prefer a DB-side
default) so each new row gets its own empty list; update the Column definition
for evidence_links accordingly in the Dispute class.
- Around line 110-115: The current field validator validate_bounty_id only
coerces non-string values to str; update Dispute.validate_bounty_id to validate
UUID format using Python's uuid.UUID() (or equivalent) so invalid values raise a
clear ValueError at the Pydantic layer; accept ints/bytes by attempting
uuid.UUID(v) or uuid.UUID(str(v)), then return the canonical string
(str(uuid_obj)); ensure the validator still returns a string on success and
raises a ValueError with a concise message on failure.
In `@backend/tests/test_webhook.py`:
- Around line 95-117: The test test_signature_verification_success sets and
deletes os.environ["GITHUB_WEBHOOK_SECRET"] directly which can leak when an
assertion fails; update the test to use pytest's monkeypatch (or a fixture) to
set the environment variable instead of assigning/deleting it manually (e.g.,
use monkeypatch.setenv("GITHUB_WEBHOOK_SECRET", WEBHOOK_SECRET) and rely on
monkeypatch to restore) or wrap the set/delete in a try/finally so cleanup
always runs; locate and modify the test_signature_verification_success function
in backend/tests/test_webhook.py to replace direct os.environ manipulation with
monkeypatch.setenv or add try/finally cleanup.
- Around line 118-139: Replace manual os.environ manipulation in
test_signature_verification_failure with pytest's monkeypatch: add the
monkeypatch fixture to the test_signature_verification_failure function
signature, call monkeypatch.setenv("GITHUB_WEBHOOK_SECRET", WEBHOOK_SECRET)
before sending the request, and remove the final del
os.environ["GITHUB_WEBHOOK_SECRET"]; this ensures automatic cleanup of the
GITHUB_WEBHOOK_SECRET environment variable.
---
Duplicate comments:
In `@backend/app/api/websocket.py`:
- Around line 336-349: The /ws/stats endpoint currently returns operational
telemetry publicly; modify the router.get("/ws/stats") route and the
get_websocket_stats function to require authentication and appropriate
authorization before returning manager.get_stats(). Add a FastAPI dependency
(e.g., Depends(get_current_user) or Depends(get_current_active_user)) to the
route or function signature, then validate the caller's role/permissions (e.g.,
is_admin flag or specific scope) and raise HTTPException(status_code=403) if
unauthorized; only call and return manager.get_stats() for authorized users.
- Around line 79-113: parse_client_message currently lacks support for the
documented {"type":"auth"} handshake so clients attempting first-message JWT
auth are treated as anonymous; add handling for msg_type == "auth" to parse and
return an AuthMessage (e.g., AuthMessage(token=raw.get("token") or
raw.get("jwt"))) similar to how PongMessage/PingMessage are returned, and ensure
the rest of the websocket connection code (the consumer that calls
parse_client_message and manager.connect()) consumes that AuthMessage to
authenticate the session before treating the socket as anonymous; reference
parse_client_message, AuthMessage (create if absent), and the
connection/authentication branch that uses manager.connect().
- Around line 235-245: The loop currently checks
manager.check_rate_limit(websocket) before reading a message, causing a tight
write-only loop when over limit; change the logic to first await
websocket.receive_text() (or otherwise consume the incoming frame) and then call
manager.check_rate_limit(websocket) so an over-limit client cannot avoid
consuming frames; if you prefer to keep the pre-check, explicitly await
websocket.receive_text() (or close via await websocket.close()) when
rate-limited to drain the client and prevent CPU spin. Update the loop around
websocket.receive_text(), manager.check_rate_limit,
manager.get_rate_limit_remaining, and send_error accordingly.
- Around line 89-107: Reject non-string channel inputs before constructing
SubscribeMessage/UnsubscribeMessage: when handling the simple form (checking
"subscribe" or "unsubscribe") ensure raw["subscribe"] / raw["unsubscribe"] is an
instance of str and otherwise return/raise the same INVALID_MESSAGE path;
likewise in the legacy branch (msg_type, channel = raw.get("channel") or
raw.get("scope", "")) validate that channel is a string and that
raw.get("target_id") is a string before concatenating
f"{channel}:{raw['target_id']}" so you never build a non-str message.channel
used later by .lower(); update the code around msg_type, SubscribeMessage, and
UnsubscribeMessage construction to perform these type checks and short-circuit
on invalid types.
- Around line 150-152: authenticate_user() currently treats falsy tokens (like
an empty "?token=") as anonymous IDs while the websocket auth check later treats
any non-None token as authenticated; change the logic so empty string tokens do
not count as provided. Concretely: update authenticate_user(token, ...) to
return None (not an anonymous id) when token is falsy/empty, or trim and
validate token presence before creating an anonymous id; and update the
websocket connection auth branch (the code that sets the connection as
authenticated / checks token presence and performs JWT validation) to require a
truthy token (e.g., if token) rather than merely token is not None so empty
query values trigger the invalid-token close path. Ensure the anonymous
connection path still works for truly absent tokens by using the explicit None
case from authenticate_user().
- Around line 368-379: The helpers (broadcast_bounty_event, broadcast_pr_event,
broadcast_payout_event) currently call manager.broadcast_event for two channels
and sum counts, causing duplicate delivery and an overstated count when a socket
is subscribed to both; fix by deduplicating recipients before sending (either by
extending manager.broadcast_event to accept multiple channels and internally
union connection IDs, or by calling manager.get_subscribers(channel) for each
channel, forming a unique set of connection IDs, then sending the event once per
unique connection via the manager send method and returning the size of that
set). Update the three helper functions to use the deduped send flow (or the new
multi-channel manager API) instead of adding returned counts from two separate
broadcasts so the payload is delivered once per socket and the count reflects
unique recipients.
In `@backend/app/services/websocket_manager.py`:
- Around line 520-529: The loops in broadcast_to_user(), broadcast_event(), and
_redis_listener() iterate live sets stored on self._connections (and similar
registries) and then await info.websocket.send_json(), which allows concurrent
disconnect() or unsubscribe() to mutate those sets and raise RuntimeError; fix
by snapshotting the registry before awaiting (e.g., assign connections =
list(self._connections.get(user_id, set())) or make a shallow list copy of the
set/iterable before the for-loop) and then iterate that snapshot so the async
send_json() calls cannot be affected by concurrent mutations. Ensure you make
the same change for the other occurrences referenced (around lines 568-580 and
602-607) where sets are iterated and awaits occur.
- Around line 121-134: ConnectionInfo is a mutable dataclass and therefore
unhashable, causing TypeError when code does
self._connections[user_id].add(info); change the container to a mutable sequence
or mapping instead of a set. Update the type of self._connections from Dict[str,
Set[ConnectionInfo]] to Dict[str, List[ConnectionInfo]] (or Dict[WebSocket,
ConnectionInfo] if you prefer keyed by websocket), change all .add(info) calls
to .append(info) (or assignment for a dict), and adjust remove/lookup code that
assumed a set to use list iteration or dict access; modifying the container type
and usages will avoid hashing the ConnectionInfo while preserving mutability of
fields like subscriptions, last_ping, last_pong, and is_alive.
- Around line 407-410: broadcast_event() currently both publishes to Redis and
directly sends to local sockets, while _redis_listener() also forwards Redis
messages to local sockets, causing duplicate deliveries; change the logic so
that when Redis is enabled and the pubsub is subscribed to the channel (i.e.,
_pubsub.subscribe was called), broadcast_event() should only publish the event
to Redis (do not call the local send path), and let _redis_listener() perform
local delivery, or alternatively, have _redis_listener() ignore messages that
originated locally by tagging/payload metadata and skipping re-delivery; update
broadcast_event(), _redis_listener(), and any subscribe-related logic
(_pubsub.subscribe) to implement one of these approaches so local subscribers
receive each event exactly once.
- Around line 355-361: The disconnect/unsubscribe logic currently calls await
self._pubsub.unsubscribe(channel) for every channel in info.subscriptions and
the explicit unsubscribe path relies on discard() but leaves keys in the
defaultdict, causing premature or missed Redis UNSUBSCRIBE; update the logic in
the disconnect() flow and the explicit unsubscribe path (the code handling
discard() and subscriptions) to first check the process-local subscriber count
for a channel (the shared mapping that tracks local sockets per channel) and
only call self._pubsub.unsubscribe(channel) when that count reaches zero, and
ensure you remove the channel key from the defaultdict when the last local
subscriber is removed so future checks behave correctly; apply this same
remaining-local-subscriber check and key deletion in both places referenced (the
disconnect() loop using info.subscriptions and the explicit unsubscribe block
around discard()).
In `@backend/tests/test_bounty_search.py`:
- Around line 20-21: Remove the unconditional module-level skip that sets
pytestmark and instead make the tests conditional so CI still runs them when
BountySearchService is implemented: replace the plain pytest.mark.skip with a
conditional skip/skipif that checks for the BountySearchService availability or
a feature flag (or mark the tests xfail until implemented), keeping the test
module and its assertions intact; reference the pytestmark assignment and the
BountySearchService symbol when updating the test module so tests execute in CI
when the service exists but are skipped in environments where it is legitimately
missing.
- Around line 17-18: The test imports BountyDB which is incompatible with
SQLAlchemy session operations (db_session.add) and metadata flow; update the
import to use the actual declarative ORM model class used by the session (e.g.,
the SQLAlchemy model class named Bounty) from app.models.bounty so
db_session.add(Bounty(...)) and Base.metadata.create_all/drop_all work
correctly, or alternatively adapt the test to construct and add the correct
SQLAlchemy-mapped instance instead of BountyDB; ensure references to BountyDB in
the file are replaced by the mapped model class and that Base is used to manage
metadata as before.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 4f9e4db4-9132-4cd8-9f4a-0e07f919ff34
📒 Files selected for processing (47)
backend/app/api/auth.pybackend/app/api/bounties.pybackend/app/api/contributors.pybackend/app/api/leaderboard.pybackend/app/api/notifications.pybackend/app/api/payouts.pybackend/app/api/webhooks/github.pybackend/app/api/websocket.pybackend/app/auth.pybackend/app/database.pybackend/app/main.pybackend/app/models/bounty.pybackend/app/models/contributor.pybackend/app/models/dispute.pybackend/app/models/notification.pybackend/app/models/payout.pybackend/app/models/submission.pybackend/app/models/user.pybackend/app/models/webhook_log.pybackend/app/services/auth_service.pybackend/app/services/bounty_service.pybackend/app/services/contributor_service.pybackend/app/services/leaderboard_service.pybackend/app/services/notification_service.pybackend/app/services/payout_service.pybackend/app/services/solana_client.pybackend/app/services/treasury_service.pybackend/app/services/webhook_processor.pybackend/app/services/webhook_service.pybackend/app/services/websocket_manager.pybackend/requirements.txtbackend/tests/conftest.pybackend/tests/test_auth.pybackend/tests/test_bounties.pybackend/tests/test_bounty_api.pybackend/tests/test_bounty_edge_cases.pybackend/tests/test_bounty_search.pybackend/tests/test_contributors.pybackend/tests/test_leaderboard.pybackend/tests/test_notification_api.pybackend/tests/test_payouts.pybackend/tests/test_webhook.pybackend/tests/test_websocket.pyfrontend/package.jsonfrontend/src/components/PRStatusTracker.tsxfrontend/src/hooks/useLeaderboard.tsfrontend/tsconfig.json
| Implements the Unit of Work pattern: | ||
| - A new session is created for each request | ||
| - The session automatically commits on successful completion | ||
| - Any exception triggers automatic rollback | ||
| - The session is always properly closed | ||
|
|
||
| This ensures that database operations are atomic and consistent | ||
| without requiring manual transaction management in the service layer. |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Verify transaction semantics in get_db
rg -n -C3 'async def get_db|yield session|session\.commit|session\.rollback|autocommit' backend/app/database.pyRepository: SolFoundry/solfoundry
Length of output: 1242
🏁 Script executed:
# Get more context around the get_db function to see the complete implementation
cat -n backend/app/database.py | sed -n '58,90p'Repository: SolFoundry/solfoundry
Length of output: 1383
🏁 Script executed:
# Check if there are any tests that demonstrate expected transaction behavior
fd -t f '.*test.*\.py' | head -20 | xargs rg -l 'get_db|transaction|commit|rollback' 2>/dev/null || echo "No test files found with transaction keywords"Repository: SolFoundry/solfoundry
Length of output: 269
🌐 Web query:
SQLAlchemy AsyncSession context manager behavior autocommit=False commit behavior
💡 Result:
autocommit=False(the old ORMSession(…, autocommit=…)flag) is legacy and removed in SQLAlchemy 2.0; modernSession/AsyncSessionalways use explicit transactional patterns (“autobegin”,begin()blocks, explicitcommit()/rollback()). [1][2]async with AsyncSession(...) as session:is a session-lifecycle context manager: it closes the session on exit, and any “leftover” transactional state is discarded (rolled back) unless you explicitly committed. It does not auto-commit on exit. (The docs showwith Session() as session:requiring explicitsession.commit(), and note remaining transactional state is discarded on exit.) [3]async with session.begin():is a transaction context manager: on normal exit it flushes and commits; on exception it rolls back. [3]- If you truly want database/driver autocommit, SQLAlchemy 2.x supports it via connection isolation level (DBAPI-level autocommit), e.g.
Connection.execution_options(isolation_level="AUTOCOMMIT"); ORM “autocommit mode” is not the mechanism. [1]
Sources
[1] SQLAlchemy 2.0 migration guide (removal of autocommit; driver-level autocommit via isolation_level) (docs.sqlalchemy.org)
[2] SQLAlchemy 2.0 migration guide section “Autocommit mode removed from Session; autobegin support added” (dokk.org)
[3] SQLAlchemy docs: Session.begin() commits/rolls back on context exit; with Session() as session: requires explicit commits; closes/discards remaining transactional state (docs.sqlalchemy.org)
Citations:
- 1: https://docs.sqlalchemy.org/20/changelog/migration_20.html
- 2: https://dokk.org/documentation/sqlalchemy/rel_2_0_23/changelog/migration_20/
- 3: https://docs.sqlalchemy.org/14/orm/session_transaction.html
🏁 Script executed:
# Check usage of get_db() dependency in routes
rg -n 'Depends\(get_db\)|get_db\(' backend/app --type py -A 2 | head -50Repository: SolFoundry/solfoundry
Length of output: 1511
🏁 Script executed:
# Check if any route explicitly commits after using get_db()
rg -n 'session\.commit|session\.flush|session\.begin' backend/app --type py | grep -v database.pyRepository: SolFoundry/solfoundry
Length of output: 47
🏁 Script executed:
# Check for any session/transaction middleware
rg -n 'middleware|SessionMiddleware|commit|rollback' backend/app/main.py -A 2 -B 2Repository: SolFoundry/solfoundry
Length of output: 314
🏁 Script executed:
# Check actual route implementations to see if they handle transactions differently
head -100 backend/app/api/webhooks/github.py | tail -50Repository: SolFoundry/solfoundry
Length of output: 1817
get_db() docstring is misleading about transaction handling, and implementation lacks explicit commits causing all writes to be silently rolled back.
The docstring (lines 62–69) claims the session "automatically commits on successful completion," but the implementation uses async with async_session_factory() as session: (a session-lifecycle context manager). Per SQLAlchemy documentation, this pattern does not auto-commit; uncommitted transactions are discarded on exit.
To implement the claimed Unit of Work pattern, the code should either:
- Wrap the session in
async with session.begin():(transaction context manager that auto-commits/rolls back), or - Add explicit
await session.commit()afteryield session
Currently, every database write via Depends(get_db) in routes like backend/app/api/webhooks/github.py and backend/app/api/notifications.py will be silently rolled back when the request completes, causing data loss. The exception handling is correct (line 86 properly re-raises), but that does not compensate for the missing commit logic.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/database.py` around lines 62 - 69, The get_db() generator's
docstring is incorrect and its current use of "async with
async_session_factory() as session" does not commit transactions, so DB writes
are discarded; update get_db() (and its docstring) to use a proper Unit of Work
by either wrapping the yielded session in an "async with session.begin():"
transaction context (using the session created from async_session_factory()) or
by calling "await session.commit()" after the yield (and still rolling back on
exceptions), ensuring you reference async_session_factory and get_db to locate
the change and keep the existing exception handling that re-raises on errors.
| bounty_id = Column( | ||
| GUID(), ForeignKey("bounties.id", ondelete="CASCADE"), nullable=False | ||
| ) | ||
| submitter_id = Column(GUID(), nullable=False) |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Description: Check for users table definition and verify if FK constraints are missing by design
# Search for user/users table definitions
rg -n "class User" --type=py -g "backend/app/models/**" -C 3
# Search for any existing FK references to users table
rg -n "ForeignKey.*users" --type=py -g "backend/app/**" -C 2
# Check if there's documentation about microservices architecture or external user management
rg -n -i "user.*service|external.*user|microservice" --type=md -C 2Repository: SolFoundry/solfoundry
Length of output: 1131
🏁 Script executed:
cat -n backend/app/models/dispute.pyRepository: SolFoundry/solfoundry
Length of output: 6901
Add foreign key constraints on user ID fields for referential integrity.
The submitter_id (line 43), reviewer_id (line 49), and actor_id (line 78) columns lack foreign key constraints to the users table, while bounty_id and dispute_id correctly define their foreign keys. Since the User model exists in the same database (backend/app/models/user.py), add FK constraints to ensure referential integrity:
submitter_id = Column(GUID(), ForeignKey("users.id", ondelete="CASCADE"), nullable=False)
reviewer_id = Column(GUID(), ForeignKey("users.id", ondelete="CASCADE"), nullable=True)
actor_id = Column(GUID(), ForeignKey("users.id", ondelete="CASCADE"), nullable=False)
This prevents orphaned records when users are deleted and maintains consistency with existing FK patterns in the model.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/models/dispute.py` at line 43, Add foreign key constraints to the
user ID columns in the Dispute model: change the submitter_id, reviewer_id, and
actor_id Column definitions to reference users.id with ForeignKey(...,
ondelete="CASCADE") so they enforce referential integrity; specifically update
submitter_id (currently Column(GUID(), nullable=False)) to use
ForeignKey("users.id", ondelete="CASCADE") and remain nullable=False, update
reviewer_id to use ForeignKey("users.id", ondelete="CASCADE") and remain
nullable=True, and update actor_id to use ForeignKey("users.id",
ondelete="CASCADE") and remain nullable=False so deletions cascade and match the
existing FK patterns for bounty_id/dispute_id.
| DateTime(timezone=True), default=lambda: datetime.now(timezone.utc), index=True | ||
| ) | ||
| metadata = Column(JSON, nullable=True) # Additional context | ||
| extra_data = Column(JSON, nullable=True) # Additional context |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check for any remaining references to the old 'metadata' field name in notification context
rg -n "metadata" --type=py -g '**/*notification*' -g '!**/node_modules/**'
# Check for database migrations that might need updating
fd -e py migration --exec grep -l "metadata" {} \; 2>/dev/null || echo "No migration files found"Repository: SolFoundry/solfoundry
Length of output: 822
🏁 Script executed:
# First, let's check the actual notification.py file to see the current field names
cat -n backend/app/models/notification.py | head -120Repository: SolFoundry/solfoundry
Length of output: 3780
🏁 Script executed:
# Check the API endpoint that handles notification creation
rg -A 20 "async def create" backend/app/api/notifications.pyRepository: SolFoundry/solfoundry
Length of output: 789
🏁 Script executed:
# Look at the test that's failing to understand what field name it expects
sed -n '468,490p' backend/tests/test_notification_api.pyRepository: SolFoundry/solfoundry
Length of output: 849
🏁 Script executed:
# Check if NotificationCreate or NotificationBase have any config that allows extra fields
rg -A 5 "class NotificationCreate\|class NotificationBase\|ConfigDict\|model_config\|extra.*=" backend/app/models/notification.pyRepository: SolFoundry/solfoundry
Length of output: 47
🏁 Script executed:
# Run the failing test to see if it actually fails
cd backend && python -m pytest tests/test_notification_api.py::TestNotifications::test_create_notification_with_metadata -v 2>&1 | head -100Repository: SolFoundry/solfoundry
Length of output: 5595
🏁 Script executed:
# Look for Alembic migrations directory and any recent migration files
find backend -type d -name "migrations" -o -name "alembic" 2>/dev/null | head -5Repository: SolFoundry/solfoundry
Length of output: 83
🏁 Script executed:
# Check if there's a requirements.txt or setup.py to understand project setup
ls -la backend/ | grep -E "requirements|setup|pyproject"Repository: SolFoundry/solfoundry
Length of output: 130
🏁 Script executed:
# Check the migrations directory for any migration related to this change
ls -la backend/migrations/versions/ 2>/dev/null | tail -10Repository: SolFoundry/solfoundry
Length of output: 47
🏁 Script executed:
# Look for the most recent migration files
fd -e py . backend/migrations/versions/ 2>/dev/null | sort | tail -5Repository: SolFoundry/solfoundry
Length of output: 47
🏁 Script executed:
# Check if any migration mentions the field rename
rg -i "metadata\|extra_data" backend/migrations/ 2>/dev/nullRepository: SolFoundry/solfoundry
Length of output: 47
🏁 Script executed:
# Check the structure of the migrations directory
find backend/migrations -type f 2>/dev/null | head -20Repository: SolFoundry/solfoundry
Length of output: 109
🏁 Script executed:
# List all files in migrations directory with details
tree backend/migrations 2>/dev/null || find backend/migrations -type f -exec basename {} \;Repository: SolFoundry/solfoundry
Length of output: 134
🏁 Script executed:
# Check when the notifications table was created
rg -n "notifications" backend/migrations/ backend/app/models/ 2>/dev/null | head -20Repository: SolFoundry/solfoundry
Length of output: 619
🏁 Script executed:
# Look at the actual SQL migration file
cat backend/migrations/001_add_bounty_search.sqlRepository: SolFoundry/solfoundry
Length of output: 1766
🏁 Script executed:
# Check if there's an alembic.ini or any SQLAlchemy initialization that auto-creates tables
find backend -name "alembic.ini" -o -name "database.py" | xargs cat 2>/dev/null | head -50Repository: SolFoundry/solfoundry
Length of output: 1549
🏁 Script executed:
# Check git status to see which files were actually modified in this PR
cd backend && git diff --name-only HEAD~1 2>/dev/null | head -20Repository: SolFoundry/solfoundry
Length of output: 47
🏁 Script executed:
# Check if the test file was modified in this PR
cd backend && git log --oneline -1 tests/test_notification_api.py 2>/dev/nullRepository: SolFoundry/solfoundry
Length of output: 121
🏁 Script executed:
# Verify the actual Pydantic field configuration - does it allow extra fields?
rg -B5 -A10 "class NotificationBase\|class NotificationCreate" backend/app/models/notification.py | grep -A10 "class NotificationCreate"Repository: SolFoundry/solfoundry
Length of output: 47
🏁 Script executed:
# Check the git diff for the notification test file to see what changed
cd backend && git diff HEAD~1 tests/test_notification_api.py | head -100Repository: SolFoundry/solfoundry
Length of output: 251
🏁 Script executed:
# Check the git diff for notification.py model
cd backend && git diff HEAD~1 app/models/notification.py | head -100Repository: SolFoundry/solfoundry
Length of output: 251
🏁 Script executed:
# Check the git diff for the API endpoint
cd backend && git diff HEAD~1 app/api/notifications.py | head -80Repository: SolFoundry/solfoundry
Length of output: 251
🏁 Script executed:
# Verify the Pydantic config - check if there's any custom config on the models
cat -n backend/app/models/notification.py | grep -A20 "^ 61\|model_config\|ConfigDict"Repository: SolFoundry/solfoundry
Length of output: 1506
🏁 Script executed:
# Check if there's a base config or if Pydantic is using defaults
rg -n "ConfigDict\|Config\|extra.*=" backend/app/models/ --type=py | head -20Repository: SolFoundry/solfoundry
Length of output: 47
🏁 Script executed:
# Verify the actual test behavior by examining what Pydantic v2 does with extra fields
python3 << 'EOF'
from pydantic import BaseModel
from typing import Optional
class TestModel(BaseModel):
extra_data: Optional[dict] = None
# Test sending a field that doesn't exist in the schema
test_data = {"metadata": {"pr_number": 123}, "extra_data": None}
try:
model = TestModel(**test_data)
print("Model created successfully")
print(f"Model dict: {model.model_dump()}")
print(f"Has 'metadata' field: {'metadata' in model.model_dump()}")
print(f"Has 'extra_data' field: {'extra_data' in model.model_dump()}")
except Exception as e:
print(f"Error: {e}")
EOFRepository: SolFoundry/solfoundry
Length of output: 181
🏁 Script executed:
# Check the service layer to see how extra_data/metadata is handled
rg -n "extra_data\|metadata" backend/app/services/notification_service.py | head -20Repository: SolFoundry/solfoundry
Length of output: 47
Field rename from metadata to extra_data is incomplete and introduces breaking changes with failing tests.
The field was renamed in the SQLAlchemy model (NotificationDB, line 50) and Pydantic schema (NotificationBase, line 68) from metadata to extra_data. However, this change was not applied consistently:
-
Tests will fail: The test
test_create_notification_with_metadata(line 468-487 intest_notification_api.py) sends"metadata"in the request payload but the schema now expects"extra_data". Pydantic silently ignores the unknown"metadata"field, causing the response to have"extra_data": null, which breaks the assertionassert data["metadata"]["pr_number"] == 123. -
API documentation out of sync:
backend/app/api/notifications.py(line 150) still documents the field as- **metadata**: Additional context (optional). -
Breaking API change: Existing clients sending
"metadata"will have that data silently dropped instead of being stored in the renamedextra_datafield.
All three locations must be updated together: model definition (done), test cases, and API documentation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/models/notification.py` at line 50, Update all references so the
rename from metadata to extra_data is consistent: modify the Pydantic schema
NotificationBase to accept and serialize extra_data (map incoming "metadata"
JSON to extra_data if needed), update the API docs in
backend/app/api/notifications.py to describe extra_data instead of metadata, and
change the test test_create_notification_with_metadata in
test_notification_api.py to either send "extra_data" or send "metadata" and
assert that the server maps it into NotificationDB.extra_data; ensure
NotificationDB.extra_data field remains the single source of truth and that
serialization/deserialization in the route handlers (and NotificationBase)
reads/writes extra_data so existing clients sending "metadata" are handled or
tests are updated accordingly.
| @field_validator("pr_url") | ||
| @classmethod | ||
| def validate_pr_url(cls, v: str) -> str: | ||
| """Validate GitHub PR URL format.""" | ||
| if not v.startswith(('https://github.com/', 'http://github.com/')): | ||
| raise ValueError('PR URL must be a valid GitHub URL') | ||
| if '/pull/' not in v: | ||
| raise ValueError('URL must be a pull request URL (containing /pull/)') | ||
| if not v.startswith(("https://github.com/", "http://github.com/")): | ||
| raise ValueError("PR URL must be a valid GitHub URL") | ||
| if "/pull/" not in v: | ||
| raise ValueError("URL must be a pull request URL (containing /pull/)") | ||
| return v |
There was a problem hiding this comment.
Enforce HTTPS in PR URL validation
Line 120 currently accepts http://github.com/.... Please restrict to https://github.com/... to avoid insecure URL inputs in API payloads.
Suggested patch
`@field_validator`("pr_url")
`@classmethod`
def validate_pr_url(cls, v: str) -> str:
"""Validate GitHub PR URL format."""
- if not v.startswith(("https://github.com/", "http://github.com/")):
+ if not v.startswith("https://github.com/"):
raise ValueError("PR URL must be a valid GitHub URL")
if "/pull/" not in v:
raise ValueError("URL must be a pull request URL (containing /pull/)")
return vAs per coding guidelines, backend files must be analyzed for input validation and edge case coverage.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/models/submission.py` around lines 116 - 124, The PR URL
validator validate_pr_url currently accepts both "https://" and "http://";
change it to only allow "https://github.com/" and reject any "http://" inputs by
removing or updating the startswith check in the validate_pr_url method (the
`@field_validator` for "pr_url") so it enforces HTTPS and still checks for
"/pull/" before returning the value.
| updated = await self._update_bounty_status( | ||
| github_issue_number=bounty_number, | ||
| github_repo=repository, | ||
| new_status="in_review", | ||
| ) |
There was a problem hiding this comment.
in_review transition target is inconsistent with bounty status model.
Line 125 through Line 129 sets new_status="in_review" and Line 133 through Line 138 logs that transition, but bounty status values in the model use in_progress (not in_review). This prevents intended PR-opened state updates.
As per coding guidelines backend/**: “Analyze thoroughly: ... API contract consistency with spec”.
Also applies to: 133-138
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/services/webhook_processor.py` around lines 125 - 129, The code
calls self._update_bounty_status with new_status="in_review" and logs that
transition, but the bounty model uses "in_progress"; change the transition
target to "in_progress" in the call to _update_bounty_status and update the
related log message (the same logging block that reports the transition of
variable updated/github_issue_number/github_repo) so the API contract matches
the bounty status enum and the log reflects the new value.
| assert data["notification_type"] == "bounty_claimed" | ||
| assert data["title"] == "Test Notification" | ||
| assert data["read"] == False |
There was a problem hiding this comment.
🧹 Nitpick | 🔵 Trivial
Inconsistent assertion style.
Line 444 uses assert data["read"] == False while other assertions in this file (e.g., lines 184, 312) were updated to use Pythonic truthiness (assert not ... / assert ...). For consistency, consider updating this assertion.
♻️ Proposed fix for consistency
- assert data["read"] == False
+ assert not data["read"]📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| assert data["notification_type"] == "bounty_claimed" | |
| assert data["title"] == "Test Notification" | |
| assert data["read"] == False | |
| assert data["notification_type"] == "bounty_claimed" | |
| assert data["title"] == "Test Notification" | |
| assert not data["read"] |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/tests/test_notification_api.py` around lines 442 - 444, Replace the
non-Pythonic equality check `assert data["read"] == False` with the idiomatic
truthiness assertion `assert not data["read"]` to match other assertions in the
file (e.g., the style used in earlier tests) and keep assertion style
consistent; update the assertion referencing data["read"] accordingly.
| async def test_invalid_token_rejected(self): | ||
| """Test that invalid token is rejected.""" | ||
| with TestClient(app) as client: | ||
| # Invalid token should close connection | ||
| with pytest.raises(Exception): | ||
| with client.websocket_connect("/ws?token=invalid_token") as websocket: | ||
| pass | ||
|
|
There was a problem hiding this comment.
🧹 Nitpick | 🔵 Trivial
Generic exception catch may mask specific connection rejection reasons.
Using pytest.raises(Exception) is too broad. If the WebSocket connection is rejected for an unexpected reason (e.g., a bug), the test would still pass. Consider catching a more specific exception type or validating the close code/reason.
♻️ Suggested improvement
`@pytest.mark.asyncio`
async def test_invalid_token_rejected(self):
"""Test that invalid token is rejected."""
with TestClient(app) as client:
- # Invalid token should close connection
- with pytest.raises(Exception):
- with client.websocket_connect("/ws?token=invalid_token") as websocket:
- pass
+ # Invalid token should close connection with specific error
+ from starlette.websockets import WebSocketDisconnect
+ with pytest.raises(WebSocketDisconnect) as exc_info:
+ with client.websocket_connect("/ws?token=invalid_token") as websocket:
+ websocket.receive_json() # Attempt to receive triggers disconnect
+ # Optionally verify close code indicates auth failure
+ assert exc_info.value.code in (1008, 4001) # Policy violation or custom auth error🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/tests/test_websocket.py` around lines 559 - 566, The test
test_invalid_token_rejected is using pytest.raises(Exception) which is too
broad; change it to expect the specific websocket rejection exception (e.g.,
pytest.raises(WebSocketDisconnect) from starlette.websockets) when calling
client.websocket_connect("/ws?token=invalid_token"), and capture the exception
(excinfo) so you can assert the websocket close code/reason (e.g.,
excinfo.value.code or excinfo.value.reason) to verify the connection was
rejected for the intended token/auth reason instead of masking other failures.
| @pytest.mark.asyncio | ||
| async def test_many_connections(self): | ||
| """Test handling many concurrent connections.""" | ||
| with TestClient(app) as client: | ||
| connections = [] | ||
|
|
||
| # Open 50 connections | ||
| for _ in range(50): | ||
| ws = client.websocket_connect("/ws") | ||
| connections.append(ws.__enter__()) | ||
|
|
||
| # All should receive connection message | ||
| for ws in connections: | ||
| data = ws.receive_json() | ||
| assert data["type"] == "connected" | ||
|
|
||
| # Check stats | ||
| response = client.get("/ws/stats") | ||
| assert response.json()["total_connections"] == 50 | ||
|
|
||
| # Close all | ||
| for ws in connections: | ||
| ws.__exit__(None, None, None) |
There was a problem hiding this comment.
🧹 Nitpick | 🔵 Trivial
Stress test uses manual context manager protocol — consider cleanup safety.
The test manually calls __enter__() and __exit__() on WebSocket connections. If an exception occurs between opening connections and the cleanup loop, some connections may leak. Consider using a try/finally block or a context manager wrapper.
♻️ Safer cleanup pattern
`@pytest.mark.asyncio`
async def test_many_connections(self):
"""Test handling many concurrent connections."""
with TestClient(app) as client:
connections = []
-
- # Open 50 connections
- for _ in range(50):
- ws = client.websocket_connect("/ws")
- connections.append(ws.__enter__())
-
- # All should receive connection message
- for ws in connections:
- data = ws.receive_json()
- assert data["type"] == "connected"
-
- # Check stats
- response = client.get("/ws/stats")
- assert response.json()["total_connections"] == 50
-
- # Close all
- for ws in connections:
- ws.__exit__(None, None, None)
+ try:
+ # Open 50 connections
+ for _ in range(50):
+ ws = client.websocket_connect("/ws")
+ connections.append(ws.__enter__())
+
+ # All should receive connection message
+ for ws in connections:
+ data = ws.receive_json()
+ assert data["type"] == "connected"
+
+ # Check stats
+ response = client.get("/ws/stats")
+ assert response.json()["total_connections"] == 50
+ finally:
+ # Ensure all connections are closed even on failure
+ for ws in connections:
+ try:
+ ws.__exit__(None, None, None)
+ except Exception:
+ pass🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/tests/test_websocket.py` around lines 614 - 636, The
test_many_connections function opens 50 websockets by calling
client.websocket_connect and manually invoking __enter__/__exit__, which can
leak connections on exceptions; change it to ensure deterministic cleanup by
wrapping the connection-opening and subsequent asserts in a try/finally (or use
a context manager that yields the connections) so that in the finally block you
iterate over the connections and close each one (e.g., call __exit__ or close)
to guarantee they are always cleaned up even if an assertion fails; keep
references to client.websocket_connect and the connections list so the cleanup
loop still runs.
| "noEmit": true, | ||
| "skipLibCheck": true, | ||
| "isolatedModules": true, | ||
| "types": ["node"] |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify ambient type configuration and Vite-global usage.
fd '^tsconfig.*\.json$' frontend -x sh -c 'echo "== {} =="; sed -n "1,200p" "{}"'
rg -nP --type=ts --type=tsx '\bimport\.meta\.(env|hot)\b' frontend/src
rg -nP '"types"\s*:\s*\[' frontend/tsconfig.jsonRepository: SolFoundry/solfoundry
Length of output: 558
🏁 Script executed:
#!/bin/bash
# Fixed search for import.meta usage (without problematic type flags)
echo "=== Searching for import.meta usage ==="
rg -n 'import\.meta' frontend/src 2>/dev/null | head -20
echo ""
echo "=== Checking package.json for vite dependency ==="
cat frontend/package.json 2>/dev/null | grep -A 2 -B 2 "vite" || echo "No vite found in package.json"
echo ""
echo "=== Searching for vite.config files ==="
fd -e 'vite\.config\.' frontend
echo ""
echo "=== Searching for any reference to vite/client ==="
rg -n 'vite/client' frontend 2>/dev/null || echo "No vite/client references found"
echo ""
echo "=== Full frontend/tsconfig.json content ==="
cat -n frontend/tsconfig.json
echo ""
echo "=== Checking for vite environment variables usage ==="
rg -n 'VITE_' frontend/src 2>/dev/null | head -10 || echo "No VITE_ env vars found"Repository: SolFoundry/solfoundry
Length of output: 1230
🏁 Script executed:
#!/bin/bash
echo "=== Root-level vite.config files ==="
fd -e 'vite.config' . --max-depth 2
echo ""
echo "=== Checking if project structure suggests Vite is active ==="
ls -la | head -20
cat vite.config.* 2>/dev/null || echo "No root vite.config found"
echo ""
echo "=== Check if there's tsconfig.app.json or similar ==="
fd '^tsconfig' . --max-depth 2Repository: SolFoundry/solfoundry
Length of output: 1355
types: ["node"] prevents Vite ambient type definitions despite Vite being the build tool.
Vite is a project dependency (v6.0.0), but the tsconfig restricts ambient types to Node only, excluding vite/client types. While there's currently no import.meta usage in the codebase, this configuration prevents access to Vite client globals if added in the future. Add "vite/client" to the types array or use a triple-slash directive to properly support Vite's ambient types.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/tsconfig.json` at line 12, The tsconfig.json currently restricts
ambient types to Node via the "types": ["node"] entry which prevents Vite client
ambient types (e.g., import.meta) from being available; update the "types" array
to include "vite/client" (or remove the explicit "types" setting to allow
automatic inclusion) so Vite's globals are available, or alternatively add a
triple-slash directive (/// <reference types="vite/client" />) in a central
frontend .d.ts file to enable Vite client ambient types.
| "types": ["node"] | ||
| }, | ||
| "include": ["src"], | ||
| "exclude": ["src/**/*.test.tsx", "src/**/*.test.ts", "src/__tests__"] |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Confirm whether test files are excluded and whether an alternate test tsconfig exists.
fd '^tsconfig.*\.json$' frontend -x sh -c 'echo "== {} =="; rg -n "\"include\"|\"exclude\"" "{}"'
fd '.*\.(test|spec)\.(ts|tsx)$' frontend/srcRepository: SolFoundry/solfoundry
Length of output: 631
🏁 Script executed:
# Check package.json for test configuration and jest setup
cat frontend/package.json | jq '.jest, .scripts | select(. != null)'Repository: SolFoundry/solfoundry
Length of output: 140
🏁 Script executed:
# Look for jest configuration files
fd -e 'config.js' -e 'config.ts' frontend | grep -E '(jest|test)'
ls -la frontend/ | grep -E '(jest|test)'Repository: SolFoundry/solfoundry
Length of output: 47
🏁 Script executed:
# Check if there's a separate tsconfig for tests
fd 'tsconfig.*json' frontendRepository: SolFoundry/solfoundry
Length of output: 87
🏁 Script executed:
# Look for vitest configuration
fd -e 'config.ts' -e 'config.js' frontend | head -20
ls -la frontend/ | grep -E '(vite|vitest)'Repository: SolFoundry/solfoundry
Length of output: 128
🏁 Script executed:
# Check if vite.config.ts exists and what's in it
find frontend -maxdepth 1 -name 'vitest*' -o -name 'vite.config.*'Repository: SolFoundry/solfoundry
Length of output: 88
🏁 Script executed:
# Check CI configuration to see how tests are run
find . -path './.git' -prune -o -name '*.yml' -o -name '*.yaml' | grep -E '(github|gitlab|ci|action)' | head -10Repository: SolFoundry/solfoundry
Length of output: 428
🏁 Script executed:
# Read vite.config.ts to understand vitest setup
cat frontend/vite.config.tsRepository: SolFoundry/solfoundry
Length of output: 278
🏁 Script executed:
# Read CI workflow to see how tests are executed
cat .github/workflows/ci.ymlRepository: SolFoundry/solfoundry
Length of output: 8778
🏁 Script executed:
# Check vitest documentation support for TypeScript in vite.config
# Look at actual vitest test execution to confirm lack of TS checking
rg 'ts-node|esbuild-register|tsx|vitest.*ts' frontend/ -A 2 -B 2Repository: SolFoundry/solfoundry
Length of output: 1501
🏁 Script executed:
# Verify that vitest is NOT configured with TypeScript support
cat frontend/package.json | jq '.devDependencies | select(. != null)'Repository: SolFoundry/solfoundry
Length of output: 374
Test files are not included in TypeScript type checking.
Test files are excluded from tsconfig.json (via the exclude array), and the CI pipeline's type-check job runs tsc --noEmit which respects this exclusion. Additionally, vitest is not configured with TypeScript support, so tests execute without static type validation. This reduces the ability to catch type-related bugs in the test suite itself.
Consider either:
- Configuring vitest with TypeScript support (e.g., via
esbuildortsoption in test config) - Adding a separate type-check step in CI that includes test files
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/tsconfig.json` at line 15, The tsconfig.json currently excludes test
files via the "exclude" array which means tsc --noEmit in CI won't type-check
tests; either remove test globs from the "exclude" array (or add a separate
tsconfig.tests.json that includes src/**/*.test.tsx and src/**/*.test.ts) and
run tsc --project tsconfig.tests.json in CI, or enable TypeScript support in the
test runner by configuring vitest (e.g., update vitest/vite config to use
esbuild/ts plugin or set the "ts" transformer) so tests are transpiled with type
checking; update CI to run the chosen path so test files are statically
validated.
There was a problem hiding this comment.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
backend/app/database.py (2)
94-116:⚠️ Potential issue | 🔴 Critical
get_db_session()also lacks explicit commit despite docstring claims.The docstring states "Auto-commits on exit" (line 108) and "Commit happens on successful exit" (line 113), but there is no
await session.commit()in the success path. The explicitawait session.rollback()on exception (line 115) is correct, but the missing commit means successful operations will have their writes discarded.🔧 Suggested fix
async def get_db_session(): async with async_session_factory() as session: try: yield session - # Commit happens on successful exit + await session.commit() except Exception: await session.rollback() raise🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/database.py` around lines 94 - 116, The get_db_session() context manager claims to auto-commit but never calls commit; modify the success path inside the async with async_session_factory() block in get_db_session to call await session.commit() after yield (before exiting the try) so successful transactions are persisted; keep the existing except block that does await session.rollback() and re-raise the exception to preserve rollback behavior.
136-183:⚠️ Potential issue | 🔴 CriticalPostgreSQL-specific SQL in
init_db()will crash on SQLite.The code now supports SQLite via the
is_sqlitecheck (line 26) for engine creation, butinit_db()unconditionally executes PostgreSQL-specific SQL:
CREATE OR REPLACE FUNCTION ... $$ LANGUAGE plpgsql— not supported in SQLiteDROP TRIGGER IF EXISTS ... ON table/CREATE TRIGGER ... EXECUTE FUNCTION— PostgreSQL trigger syntax differs from SQLiteUSING GIN(search_vector)— GIN indexes are PostgreSQL-specificWhen
DATABASE_URLis an SQLite connection string (e.g., in CI/test), callinginit_db()will raise SQL syntax errors.🔧 Suggested fix: Guard PostgreSQL-specific operations
async def init_db() -> None: logger.info("Initializing database schema...") async with engine.begin() as conn: from app.models.notification import NotificationDB # noqa: F401 # Create all tables from model definitions await conn.run_sync(Base.metadata.create_all) + # Skip PostgreSQL-specific setup for SQLite + if is_sqlite: + logger.info("SQLite detected - skipping PostgreSQL-specific triggers and indexes") + return # Create the search vector trigger function await conn.execute( text(""" CREATE OR REPLACE FUNCTION update_bounty_search_vector() ...🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/database.py` around lines 136 - 183, init_db() currently runs PostgreSQL-only SQL (function update_bounty_search_vector(), trigger bounty_search_vector_update, GIN index ix_bounties_search_vector and other indexes ix_bounties_status_tier / ix_bounties_status_category) unconditionally and will fail on SQLite; wrap or branch these statements behind a dialect check (use the existing is_sqlite/engine.dialect or inspect(engine).name) so that the PostgreSQL-specific CREATE FUNCTION, DROP/CREATE TRIGGER and CREATE INDEX ... USING GIN statements are executed only when the DB is Postgres, and for SQLite either skip them or provide SQLite-compatible alternatives (e.g., no GIN, different trigger syntax) to avoid executing invalid SQL on SQLite.
♻️ Duplicate comments (1)
backend/app/database.py (1)
63-91:⚠️ Potential issue | 🔴 Critical
get_db()docstring claims auto-commit but implementation silently discards writes.The docstring (lines 67-70) states "The session automatically commits on successful completion" and "Any exception triggers automatic rollback." However, the implementation uses
async with async_session_factory() as session:which is a session-lifecycle context manager—it closes the session on exit but does not auto-commit. Per SQLAlchemy documentation, uncommitted transactions are discarded on exit.This means all database writes via
Depends(get_db)will be silently rolled back when the request completes, causing data loss.To implement the documented behavior, either:
- Use
async with session.begin():as a transaction context manager, or- Add explicit
await session.commit()afteryield session🔧 Suggested fix using explicit commit
async def get_db() -> AsyncGenerator[AsyncSession, None]: async with async_session_factory() as session: try: yield session - # Context manager exit handles commit + await session.commit() except Exception: - # Context manager exit handles rollback + await session.rollback() raise,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/database.py` around lines 63 - 91, The get_db() dependency claims sessions auto-commit but currently uses async_session_factory() context which only closes the session and discards uncommitted transactions; update get_db() (the async def get_db and its use of async_session_factory()) to actually commit on successful completion and rollback on exception by either wrapping the yielded session in a transaction (use session.begin() as a transactional context) or by adding an explicit await session.commit() immediately after yield and ensuring await session.rollback() in the except block before re-raising; keep async_session_factory() as the session source but ensure commit/rollback semantics match the docstring.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@backend/app/database.py`:
- Around line 94-116: The get_db_session() context manager claims to auto-commit
but never calls commit; modify the success path inside the async with
async_session_factory() block in get_db_session to call await session.commit()
after yield (before exiting the try) so successful transactions are persisted;
keep the existing except block that does await session.rollback() and re-raise
the exception to preserve rollback behavior.
- Around line 136-183: init_db() currently runs PostgreSQL-only SQL (function
update_bounty_search_vector(), trigger bounty_search_vector_update, GIN index
ix_bounties_search_vector and other indexes ix_bounties_status_tier /
ix_bounties_status_category) unconditionally and will fail on SQLite; wrap or
branch these statements behind a dialect check (use the existing
is_sqlite/engine.dialect or inspect(engine).name) so that the
PostgreSQL-specific CREATE FUNCTION, DROP/CREATE TRIGGER and CREATE INDEX ...
USING GIN statements are executed only when the DB is Postgres, and for SQLite
either skip them or provide SQLite-compatible alternatives (e.g., no GIN,
different trigger syntax) to avoid executing invalid SQL on SQLite.
---
Duplicate comments:
In `@backend/app/database.py`:
- Around line 63-91: The get_db() dependency claims sessions auto-commit but
currently uses async_session_factory() context which only closes the session and
discards uncommitted transactions; update get_db() (the async def get_db and its
use of async_session_factory()) to actually commit on successful completion and
rollback on exception by either wrapping the yielded session in a transaction
(use session.begin() as a transactional context) or by adding an explicit await
session.commit() immediately after yield and ensuring await session.rollback()
in the except block before re-raising; keep async_session_factory() as the
session source but ensure commit/rollback semantics match the docstring.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 9493da32-3b39-467c-a1c2-8cc76a7cb8bf
📒 Files selected for processing (1)
backend/app/database.py
- Create ThemeToggle.tsx component that was missing - Remove BackpackWalletAdapter from WalletProvider (not exported in current version) - Export ThemeToggle from layout/index.ts - Fix conftest.py to set env vars at module level Wallet address: Amu1YJjcKWKL6xuMTo2dx511kfzXAxgpetJrZp7N71o7
- Fix import of Base from app.database instead of app.models.bounty in test files - Add type declarations for @solana/wallet-adapter-react to fix TS2786 error Wallet address: Amu1YJjcKWKL6xuMTo2dx511kfzXAxgpetJrZp7N71o7
|
Issues found:
Suggestions:
⏰ 67 hours remaining out of 72h since your first submission. Please address these items and push an update. SolFoundry Review Bot |
There was a problem hiding this comment.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
backend/tests/test_webhook.py (3)
96-118: 🧹 Nitpick | 🔵 TrivialEnvironment variable manipulation in tests can cause race conditions.
The test modifies
os.environ["GITHUB_WEBHOOK_SECRET"]directly, which could cause issues if tests run in parallel. Consider usingpytest'smonkeypatchfixture for safer environment variable handling.♻️ Suggested improvement using monkeypatch
`@pytest.mark.asyncio` -async def test_signature_verification_success(self, client): +async def test_signature_verification_success(self, client, monkeypatch): """Test valid signature passes verification.""" payload = json.dumps({"action": "opened"}).encode() signature = create_signature(payload, WEBHOOK_SECRET) # Set secret in environment - os.environ["GITHUB_WEBHOOK_SECRET"] = WEBHOOK_SECRET + monkeypatch.setenv("GITHUB_WEBHOOK_SECRET", WEBHOOK_SECRET) response = await client.post( "/api/webhooks/github", ... ) assert response.status_code in [200, 202] - - del os.environ["GITHUB_WEBHOOK_SECRET"]🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/tests/test_webhook.py` around lines 96 - 118, The test test_signature_verification_success currently mutates os.environ directly which can race when tests run in parallel; change the test to accept pytest's monkeypatch fixture and replace the direct os.environ set/delete with monkeypatch.setenv("GITHUB_WEBHOOK_SECRET", WEBHOOK_SECRET) before calling client.post (and no manual deletion), or use monkeypatch.delenv to explicitly remove it if needed; refer to the test function name test_signature_verification_success and helpers create_signature and WEBHOOK_SECRET when making this change.
119-140: 🧹 Nitpick | 🔵 TrivialSame environment variable handling concern.
This test also directly manipulates
os.environ. The samemonkeypatchrefactor applies here for test isolation.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/tests/test_webhook.py` around lines 119 - 140, The test_signature_verification_failure test mutates os.environ directly; change it to use the pytest monkeypatch fixture to isolate env changes by adding a monkeypatch parameter to the test signature and replace os.environ["GITHUB_WEBHOOK_SECRET"] = WEBHOOK_SECRET with monkeypatch.setenv("GITHUB_WEBHOOK_SECRET", WEBHOOK_SECRET), and remove the manual del os.environ["GITHUB_WEBHOOK_SECRET"] by either omitting it (monkeypatch will revert after the test) or explicitly calling monkeypatch.delenv("GITHUB_WEBHOOK_SECRET", raising=False); update the test_signature_verification_failure function accordingly so environment setup/teardown is handled via monkeypatch.
141-190:⚠️ Potential issue | 🔴 CriticalFix field names when instantiating
BountyDBin test.The test instantiates
BountyDBwithcategory,github_issue_number, andgithub_repofields, but theBountyDBmodel inbackend/app/models/bounty.pyonly definesgithub_issue_urland lackscategoryandgithub_repoentirely. This will cause a Pydantic validation error when the test runs.Replace
github_issue_number=123, github_repo="test/repo"withgithub_issue_url="https://github.com/test/repo/issues/123"and removecategory="backend".🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/tests/test_webhook.py` around lines 141 - 190, The BountyDB instantiation in test_pull_request_opened_updates_bounty is using non-existent fields (category, github_repo, github_issue_number); update the test to match the BountyDB model by removing category and github_repo and replacing github_issue_number=123 with github_issue_url="https://github.com/test/repo/issues/123" when creating the BountyDB instance so the model validation passes.backend/tests/test_bounty_api.py (1)
77-130:⚠️ Potential issue | 🔴 CriticalFix field names in
sample_bountiesfixture to matchBountyDBmodel.The fixture instantiates
BountyDBwithcategoryandskillsfields that do not exist in the model. TheBountyDBclass definesrequired_skills(notskills) and has nocategoryfield. Update the fixture to use correct field names:
- Replace
skillswithrequired_skills- Remove the
categoryfield entirely (it is not part of the BountyDB model)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/tests/test_bounty_api.py` around lines 77 - 130, The sample_bounties fixture is constructing BountyDB instances with non-existent fields: it uses category and skills, but the BountyDB model defines required_skills and has no category field; update each BountyDB(...) in the sample_bounties fixture to remove the category argument entirely and rename skills to required_skills so the field names match the BountyDB model (references: sample_bounties fixture, BountyDB).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@frontend/src/__mocks__/mockDashboard.ts`:
- Around line 6-9: mockDailyEarnings currently uses Math.random() making test
data non-deterministic and flaky; replace the non-deterministic generator with a
deterministic approach (e.g., a seeded PRNG or a deterministic formula based on
the index) so values are stable across imports—update the Array.from initializer
that produces mockDailyEarnings to use a deterministic generator (or a fixed
array) so amounts are repeatable for tests, and keep the same shape
(DailyEarning objects with date and amount) so consuming code/tests need no
changes.
In `@frontend/src/types/wallet-adapter.d.ts`:
- Around line 11-41: Update the ambient types to use the more specific Solana
types: import Adapter from `@solana/wallet-adapter-base` and PublicKey and
Connection (and Commitment) from `@solana/web3.js`, then change
WalletProviderProps.wallets from unknown[] to Adapter[], change
ConnectionProviderProps.commitment from string to Commitment, change
useWallet().publicKey from unknown to PublicKey | null, and change
useConnection().connection from unknown to Connection; keep signTransaction and
signAllTransactions as unknown-returning signatures to preserve the TS2786
workaround.
- Around line 9-25: Update the type declarations to use modern TS/React imports:
change the ReactNode import to a type-only import and use React.JSX.Element for
the provider return types; specifically update the import of ReactNode to
"import type { ReactNode } from 'react'" and replace the return types of
ConnectionProvider and WalletProvider from JSX.Element to React.JSX.Element
while keeping the existing props interfaces ConnectionProviderProps and
WalletProviderProps and the exported symbols ConnectionProvider and
WalletProvider.
---
Outside diff comments:
In `@backend/tests/test_bounty_api.py`:
- Around line 77-130: The sample_bounties fixture is constructing BountyDB
instances with non-existent fields: it uses category and skills, but the
BountyDB model defines required_skills and has no category field; update each
BountyDB(...) in the sample_bounties fixture to remove the category argument
entirely and rename skills to required_skills so the field names match the
BountyDB model (references: sample_bounties fixture, BountyDB).
In `@backend/tests/test_webhook.py`:
- Around line 96-118: The test test_signature_verification_success currently
mutates os.environ directly which can race when tests run in parallel; change
the test to accept pytest's monkeypatch fixture and replace the direct
os.environ set/delete with monkeypatch.setenv("GITHUB_WEBHOOK_SECRET",
WEBHOOK_SECRET) before calling client.post (and no manual deletion), or use
monkeypatch.delenv to explicitly remove it if needed; refer to the test function
name test_signature_verification_success and helpers create_signature and
WEBHOOK_SECRET when making this change.
- Around line 119-140: The test_signature_verification_failure test mutates
os.environ directly; change it to use the pytest monkeypatch fixture to isolate
env changes by adding a monkeypatch parameter to the test signature and replace
os.environ["GITHUB_WEBHOOK_SECRET"] = WEBHOOK_SECRET with
monkeypatch.setenv("GITHUB_WEBHOOK_SECRET", WEBHOOK_SECRET), and remove the
manual del os.environ["GITHUB_WEBHOOK_SECRET"] by either omitting it
(monkeypatch will revert after the test) or explicitly calling
monkeypatch.delenv("GITHUB_WEBHOOK_SECRET", raising=False); update the
test_signature_verification_failure function accordingly so environment
setup/teardown is handled via monkeypatch.
- Around line 141-190: The BountyDB instantiation in
test_pull_request_opened_updates_bounty is using non-existent fields (category,
github_repo, github_issue_number); update the test to match the BountyDB model
by removing category and github_repo and replacing github_issue_number=123 with
github_issue_url="https://github.com/test/repo/issues/123" when creating the
BountyDB instance so the model validation passes.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 54efbca1-4a30-4e8a-af82-3f68235a9806
📒 Files selected for processing (6)
backend/app/models/bounty.pybackend/tests/test_bounty_api.pybackend/tests/test_bounty_edge_cases.pybackend/tests/test_webhook.pyfrontend/src/__mocks__/mockDashboard.tsfrontend/src/types/wallet-adapter.d.ts
| export const mockDailyEarnings: DailyEarning[] = Array.from({ length: 30 }, (_, i) => { | ||
| const date = new Date(); date.setDate(date.getDate() - (29 - i)); | ||
| return { date: date.toISOString().split('T')[0], amount: Math.random() > 0.7 ? Math.floor(Math.random() * 500) + 100 : 0 }; | ||
| }); |
There was a problem hiding this comment.
🧹 Nitpick | 🔵 Trivial
Non-deterministic mock data may cause test flakiness.
Math.random() generates different amounts on each module import. If any tests assert on specific mockDailyEarnings values (e.g., sum, specific day's amount), they will be flaky.
Consider using seeded/deterministic values for test stability:
♻️ Proposed deterministic alternative
export const mockDailyEarnings: DailyEarning[] = Array.from({ length: 30 }, (_, i) => {
const date = new Date(); date.setDate(date.getDate() - (29 - i));
- return { date: date.toISOString().split('T')[0], amount: Math.random() > 0.7 ? Math.floor(Math.random() * 500) + 100 : 0 };
+ // Deterministic pattern: earnings on ~30% of days using modulo
+ return { date: date.toISOString().split('T')[0], amount: i % 3 === 0 ? (i * 17 % 500) + 100 : 0 };
});📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| export const mockDailyEarnings: DailyEarning[] = Array.from({ length: 30 }, (_, i) => { | |
| const date = new Date(); date.setDate(date.getDate() - (29 - i)); | |
| return { date: date.toISOString().split('T')[0], amount: Math.random() > 0.7 ? Math.floor(Math.random() * 500) + 100 : 0 }; | |
| }); | |
| export const mockDailyEarnings: DailyEarning[] = Array.from({ length: 30 }, (_, i) => { | |
| const date = new Date(); date.setDate(date.getDate() - (29 - i)); | |
| // Deterministic pattern: earnings on ~30% of days using modulo | |
| return { date: date.toISOString().split('T')[0], amount: i % 3 === 0 ? (i * 17 % 500) + 100 : 0 }; | |
| }); |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/__mocks__/mockDashboard.ts` around lines 6 - 9,
mockDailyEarnings currently uses Math.random() making test data
non-deterministic and flaky; replace the non-deterministic generator with a
deterministic approach (e.g., a seeded PRNG or a deterministic formula based on
the index) so values are stable across imports—update the Array.from initializer
that produces mockDailyEarnings to use a deterministic generator (or a fixed
array) so amounts are repeatable for tests, and keep the same shape
(DailyEarning objects with date and amount) so consuming code/tests need no
changes.
| import { ReactNode } from 'react'; | ||
|
|
||
| export interface ConnectionProviderProps { | ||
| children: ReactNode; | ||
| endpoint: string; | ||
| commitment?: string; | ||
| } | ||
|
|
||
| export const ConnectionProvider: (props: ConnectionProviderProps) => JSX.Element; | ||
|
|
||
| export interface WalletProviderProps { | ||
| children: ReactNode; | ||
| wallets: unknown[]; | ||
| autoConnect?: boolean; | ||
| } | ||
|
|
||
| export const WalletProvider: (props: WalletProviderProps) => JSX.Element; |
There was a problem hiding this comment.
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🌐 Web query:
In TypeScript 5.x + React 18 projects, what is the recommended declaration-file style for component return types (global JSX.Element vs React.JSX.Element) and for imports (import vs import type)?
💡 Result:
Component return types in .d.ts (TypeScript 5.x + React 18)
- Prefer
React.JSX.Elementover globalJSX.Elementin public typings. TypeScript’s JSX typing model expects theJSXnamespace to live on the JSX factory (for React:React.JSX). When it can’t find that, it falls back to a globalJSXnamespace. [1] - The global
JSXnamespace is deprecated in@types/react(with guidance to useReact.JSXinstead), so usingJSX.Elementin declaration files increasingly triggers deprecation tooling and can create upgrade friction (notably toward newer React type packages). [2]
Typical .d.ts style
import type * as React from "react";
export declare function MyComponent(props: Props): React.JSX.Element;Imports in TS 5.x (React 18 codebases): import vs import type
- Use
import type(or TS 4.5+ inline formimport { type X } from "…") for type-only dependencies. Type-only imports are guaranteed to be fully erased from JS output. [3] - In TS 5+, enabling
verbatimModuleSyntaxis the modern way to make import/output behavior explicit, and it works best when your code clearly distinguishes type imports from value imports. [4][5] - If you lint,
@typescript-eslint/consistent-type-importsis the common rule used to enforce this consistently. [6]
Typical mixed import style
import { useMemo } from "react";
import type { ReactNode } from "react";
import type * as React from "react"; // when you need React.JSX.*Sources: TypeScript JSX handbook [1], @types/react deprecation note via StackOverflow excerpt [2], TS 3.8 import type release notes [3], TSConfig verbatimModuleSyntax docs [4], TSConfig preserveValueImports note on type-only imports + isolated modules [5], typescript-eslint consistent-type-imports rule docs [6].
🏁 Script executed:
# Check if the file exists and view the specific lines
if [ -f "frontend/src/types/wallet-adapter.d.ts" ]; then
echo "=== File content (lines 1-30) ==="
head -30 "frontend/src/types/wallet-adapter.d.ts" | cat -n
else
echo "File not found"
fiRepository: SolFoundry/solfoundry
Length of output: 1113
🏁 Script executed:
# Check TypeScript and React versions in the project
echo "=== tsconfig.json ==="
if [ -f "tsconfig.json" ]; then
cat tsconfig.json | head -50
else
echo "tsconfig.json not found"
fi
echo -e "\n=== package.json (relevant dependencies) ==="
if [ -f "package.json" ]; then
jq '.devDependencies | select(. != null) | with_entries(select(.key | test("typescript|react")))' package.json 2>/dev/null || grep -A 5 '"typescript"\|"react"' package.json | head -20
else
echo "package.json not found"
fiRepository: SolFoundry/solfoundry
Length of output: 179
🏁 Script executed:
# Search for usage of this wallet adapter to understand integration context
echo "=== Imports of wallet-adapter.d.ts ==="
rg "from.*wallet-adapter" --type ts --type tsx -A 2 | head -30Repository: SolFoundry/solfoundry
Length of output: 148
🏁 Script executed:
# Search for tsconfig and package.json files
echo "=== Finding tsconfig.json ==="
find . -name "tsconfig.json" -type f 2>/dev/null | head -10
echo -e "\n=== Finding package.json ==="
find . -name "package.json" -type f 2>/dev/null | head -10Repository: SolFoundry/solfoundry
Length of output: 173
🏁 Script executed:
# Check for other .d.ts files to understand project conventions
echo "=== Other .d.ts files in frontend ==="
find frontend -name "*.d.ts" -type f 2>/dev/null | head -20Repository: SolFoundry/solfoundry
Length of output: 141
🏁 Script executed:
# Look for any imports from wallet-adapter.d.ts
echo "=== Files importing from wallet-adapter types ==="
rg "wallet-adapter|ConnectionProvider|WalletProvider" frontend/src --type ts -l 2>/dev/null | head -20Repository: SolFoundry/solfoundry
Length of output: 421
🏁 Script executed:
# Check project root structure
echo "=== Root directory structure ==="
ls -la | grep -E "tsconfig|package\.json"Repository: SolFoundry/solfoundry
Length of output: 97
🏁 Script executed:
# Check TypeScript and React versions
echo "=== frontend/package.json (dependencies) ==="
cat frontend/package.json | jq '.dependencies, .devDependencies' 2>/dev/null | grep -E "typescript|react|@types/react"
echo -e "\n=== Full package.json ==="
head -50 frontend/package.jsonRepository: SolFoundry/solfoundry
Length of output: 1208
🏁 Script executed:
# Check tsconfig.json for JSX settings
echo "=== frontend/tsconfig.json ==="
cat frontend/tsconfig.jsonRepository: SolFoundry/solfoundry
Length of output: 491
🏁 Script executed:
# Check actual usage in one of the importing files
echo "=== frontend/src/components/wallet/WalletProvider.tsx ==="
cat frontend/src/components/wallet/WalletProvider.tsxRepository: SolFoundry/solfoundry
Length of output: 2370
Modernize type declarations for React 18 + TypeScript 5.x compatibility.
Use import type for ReactNode (line 9) and React.JSX.Element instead of bare JSX.Element (lines 17, 25) to align with current best practices and avoid the deprecated global JSX namespace.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/types/wallet-adapter.d.ts` around lines 9 - 25, Update the type
declarations to use modern TS/React imports: change the ReactNode import to a
type-only import and use React.JSX.Element for the provider return types;
specifically update the import of ReactNode to "import type { ReactNode } from
'react'" and replace the return types of ConnectionProvider and WalletProvider
from JSX.Element to React.JSX.Element while keeping the existing props
interfaces ConnectionProviderProps and WalletProviderProps and the exported
symbols ConnectionProvider and WalletProvider.
| export interface ConnectionProviderProps { | ||
| children: ReactNode; | ||
| endpoint: string; | ||
| commitment?: string; | ||
| } | ||
|
|
||
| export const ConnectionProvider: (props: ConnectionProviderProps) => JSX.Element; | ||
|
|
||
| export interface WalletProviderProps { | ||
| children: ReactNode; | ||
| wallets: unknown[]; | ||
| autoConnect?: boolean; | ||
| } | ||
|
|
||
| export const WalletProvider: (props: WalletProviderProps) => JSX.Element; | ||
|
|
||
| export function useWallet(): { | ||
| publicKey: unknown; | ||
| connected: boolean; | ||
| connecting: boolean; | ||
| disconnecting: boolean; | ||
| select(walletName: string): void; | ||
| connect(): Promise<void>; | ||
| disconnect(): Promise<void>; | ||
| signTransaction?: (transaction: unknown) => Promise<unknown>; | ||
| signAllTransactions?: (transactions: unknown[]) => Promise<unknown[]>; | ||
| }; | ||
|
|
||
| export function useConnection(): { | ||
| connection: unknown; | ||
| }; |
There was a problem hiding this comment.
🧩 Analysis chain
🌐 Web query:
For the exact version of @solana/wallet-adapter-react used in this repo, what are the official TypeScript definitions for ConnectionProviderProps, WalletProviderProps, useWallet(), and useConnection()? Please include the expected types for commitment, wallets, publicKey, and connection.
💡 Result:
I can’t tell what exact @solana/wallet-adapter-react version “this repo” uses without the repo URL or its package.json/lockfile. If you paste the dependency line (and lockfile entry if it’s pinned), I can return the typings for that exact version.
Meanwhile, here are the official TypeScript definitions (from the generated TypeDoc for @solana/wallet-adapter-react) and the referenced @solana/web3.js types:
// `@solana/wallet-adapter-react`
export interface ConnectionProviderProps {
children: ReactNode;
endpoint: string;
config?: ConnectionConfig;
}[1]
export interface WalletProviderProps {
children: ReactNode;
wallets: Adapter[];
autoConnect?: boolean | ((adapter: Adapter) => Promise<boolean>);
localStorageKey?: string;
onError?: (error: WalletError, adapter?: Adapter) => void;
}[2]
export function useWallet(): WalletContextState;[3]
export interface WalletContextState {
autoConnect: boolean;
wallets: Wallet[];
wallet: Wallet | null;
publicKey: PublicKey | null;
connecting: boolean;
connected: boolean;
disconnecting: boolean;
select(walletName: WalletName | null): void;
connect(): Promise<void>;
disconnect(): Promise<void>;
sendTransaction: (
transaction: Transaction | VersionedTransaction,
connection: Connection,
options?: SendTransactionOptions
) => Promise<string>;
signTransaction: /* ... */ | undefined;
signAllTransactions: /* ... */ | undefined;
signMessage: /* ... */ | undefined;
signIn: /* ... */ | undefined;
}[4]
export function useConnection(): ConnectionContextState;
export interface ConnectionContextState {
connection: Connection;
}[5]
And the commitment type comes from @solana/web3.js’s ConnectionConfig:
// `@solana/web3.js`
export type ConnectionConfig = {
commitment?: Commitment;
// ...
};
export type Commitment =
| "processed"
| "confirmed"
| "finalized"
| "recent"
| "single"
| "singleGossip"
| "root"
| "max";[6] [7]
Requested key types (as used above):
commitment:Commitment(i.e.ConnectionConfig["commitment"]) [6] [7]wallets(prop):Adapter[][2]publicKey:PublicKey | null[4]connection:Connection[5]
Sources:
[1] ConnectionProviderProps TypeDoc: https://anza-xyz.github.io/wallet-adapter/interfaces/_solana_wallet-adapter-react.require.ConnectionProviderProps.html
[2] WalletProviderProps TypeDoc: https://anza-xyz.github.io/wallet-adapter/interfaces/_solana_wallet-adapter-react.require.WalletProviderProps.html
[3] useWallet() TypeDoc: https://anza-xyz.github.io/wallet-adapter/functions/_solana_wallet-adapter-react.require.useWallet.html
[4] WalletContextState TypeDoc: https://anza-xyz.github.io/wallet-adapter/interfaces/_solana_wallet-adapter-react.require.WalletContextState.html
[5] useConnection() / ConnectionContextState TypeDoc: https://anza-xyz.github.io/wallet-adapter/functions/_solana_wallet-adapter-react.require.useConnection.html and https://anza-xyz.github.io/wallet-adapter/interfaces/_solana_wallet-adapter-react.require.ConnectionContextState.html
[6] ConnectionConfig (commitment?: Commitment) TypeDoc: https://solana-foundation.github.io/solana-web3.js/v1.x/types/ConnectionConfig.html
[7] Commitment union type TypeDoc: https://solana-foundation.github.io/solana-web3.js/v1.x/types/Commitment.html
🏁 Script executed:
#!/bin/bash
# Find and inspect package.json for `@solana/wallet-adapter-react` version
fd -e "package.json" -e "package-lock.json" -e "yarn.lock" -e "pnpm-lock.yaml" | head -5
# Check package.json for the dependency
if [ -f "package.json" ]; then
echo "=== package.json ==="
jq '.dependencies, .devDependencies | to_entries[] | select(.key | contains("wallet-adapter-react"))' package.json 2>/dev/null || grep -i "wallet-adapter-react" package.json
fi
# Check lockfile
if [ -f "package-lock.json" ]; then
echo "=== package-lock.json (wallet-adapter-react entry) ==="
jq '.packages[] | select(.name | contains("wallet-adapter-react")) | {name, version}' package-lock.json 2>/dev/null | head -20
fiRepository: SolFoundry/solfoundry
Length of output: 47
🏁 Script executed:
#!/bin/bash
# List files in repo root
ls -la | grep -E "package\.|yarn\.|pnpm"
# Read package.json if it exists
if [ -f "package.json" ]; then
echo "=== Full package.json ==="
cat package.json
fi
# Also check frontend directory
if [ -d "frontend" ]; then
echo -e "\n=== Frontend directory contents ==="
ls -la frontend/ | head -20
if [ -f "frontend/package.json" ]; then
echo -e "\n=== Frontend package.json ==="
cat frontend/package.json
fi
fiRepository: SolFoundry/solfoundry
Length of output: 1428
🏁 Script executed:
#!/bin/bash
# Read the full wallet-adapter.d.ts file
echo "=== wallet-adapter.d.ts (full file) ==="
cat frontend/src/types/wallet-adapter.d.ts
# Check tsconfig.json
echo -e "\n=== frontend/tsconfig.json ==="
cat frontend/tsconfig.json
# Search for usage of these types in components
echo -e "\n=== Usage of useWallet/useConnection in source ==="
rg "useWallet|useConnection|ConnectionProvider|WalletProvider" frontend/src --type ts --type tsx -n | head -30Repository: SolFoundry/solfoundry
Length of output: 1777
🏁 Script executed:
#!/bin/bash
# Search for usage with correct syntax
echo "=== Usage of wallet hooks/providers ==="
rg "useWallet|useConnection|ConnectionProvider|WalletProvider" frontend/src -A 3 -B 1
# Also check if there are any type assertion workarounds elsewhere
echo -e "\n=== Type assertions or 'any' workarounds ==="
rg "as any|unknown as|@ts-ignore|@ts-expect-error" frontend/srcRepository: SolFoundry/solfoundry
Length of output: 9945
Workaround for TS2786 can use more specific types without sacrificing the React 18 compatibility fix.
The ambient module is intentionally broad to resolve a React 18 types incompatibility (TS2786) with ConnectionProvider. However, several fields can be typed more precisely:
wallets: unknown[]→wallets: Adapter[](imported from@solana/wallet-adapter-base)publicKey: unknown→publicKey: PublicKey | null(matches actual hook behavior and official types)connection: unknown→connection: Connection(imported from@solana/web3.js)commitment?: string→commitment?: Commitment(from@solana/web3.jsConnectionConfig)
Keeping broad unknown for method signatures (signTransaction, signAllTransactions) is reasonable given the TS2786 constraint, but the prop and state fields have deterministic types used consistently throughout the codebase. More precise types would catch invalid usage at compile time without sacrificing the workaround.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/types/wallet-adapter.d.ts` around lines 11 - 41, Update the
ambient types to use the more specific Solana types: import Adapter from
`@solana/wallet-adapter-base` and PublicKey and Connection (and Commitment) from
`@solana/web3.js`, then change WalletProviderProps.wallets from unknown[] to
Adapter[], change ConnectionProviderProps.commitment from string to Commitment,
change useWallet().publicKey from unknown to PublicKey | null, and change
useConnection().connection from unknown to Connection; keep signTransaction and
signAllTransactions as unknown-returning signatures to preserve the TS2786
workaround.
🏭 Bounty T2: Real-time WebSocket Server
Closes #17
✅ 实现功能
/wswith JWT authenticationbounties,prs,payouts,leaderboard{"type": "subscribe", "scope": "repo", "target_id": "repo_123"}{"event": "pr_status_changed", "timestamp": "...", "data": {...}}📁 文件变更
backend/app/api/websocket.pybackend/app/services/websocket_manager.pybackend/tests/test_websocket.pybackend/app/main.pybackend/requirements.txt🔧 技术栈
💰 钱包地址
Amu1YJjcKWKL6xuMTo2dx511kfzXAxgpetJrZp7N71o7评分要求: ≥7/10 | 赏金: 400,000 $FNDRY
Summary by CodeRabbit
New Features
API
Tests
Chores