Button and api route for uploading logs #1221
Walkthrough

Adds a logging upload flow: propagates a logs directory into the desktop App, introduces a new logging module with an upload endpoint caller, exposes a tray menu action to trigger uploads, adds an unauthenticated API request helper, and adds a server-side /logs endpoint and webhook wiring.
Sequence Diagram(s)

sequenceDiagram
participant User
participant Tray as Tray Menu
participant App as Desktop App
participant Logging as Logging Module
participant API as API Server
participant Discord as Discord Webhook
User->>Tray: Click "Upload Logs"
Tray->>App: Emit UploadLogs event
App->>Logging: upload_log_file(app)
rect rgb(230, 240, 255)
Note over Logging: Locate latest log file in app.logs_dir
Logging->>Logging: scan logs_dir for "cap-desktop.log"
end
rect rgb(230, 240, 255)
Note over Logging: Read & prepare content (MAX 1 MB)
alt File > 1 MB
Logging->>Logging: Read full file, prepend truncation header, truncate to 1 MB (prefer last newline)
else File ≤ 1 MB
Logging->>Logging: Read full content
end
end
rect rgb(220, 255, 220)
Note over Logging: Build multipart and send
Logging->>API: POST /api/desktop/logs (multipart: log, os, version) via app.api_request
end
API->>Discord: Forward to webhook (if configured)
Discord-->>API: Success
API-->>Logging: 200 OK
Logging-->>App: Ok / Err
App->>User: Show success or error dialog
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Changes span API, runtime init, new file I/O and multipart logic, UI/tray integrations, and server route/webhook handling. Review should verify error paths, size/truncation logic (1 MB), header propagation, and auth semantics across desktop-to-server flow.
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 2
🧹 Nitpick comments (3)
apps/desktop/src-tauri/src/tray.rs (1)

`138-151`: **Consider adding user feedback on success.** The upload handler shows an error dialog on failure but doesn't notify the user on success. Consider adding a success notification or dialog to confirm the logs were uploaded.

Apply this diff to add a success dialog:

 Ok(TrayItem::UploadLogs) => {
     let app = app.clone();
     tokio::spawn(async move {
         match crate::logging::upload_log_file(&app).await {
             Ok(_) => {
                 tracing::info!("Successfully uploaded logs");
+                app.dialog()
+                    .message("Logs uploaded successfully")
+                    .title("Upload Logs")
+                    .show(|_| {});
             }
             Err(e) => {
                 tracing::error!("Failed to upload logs: {e:#}");
-                app.dialog().message("Failed to upload logs").show(|_| {});
+                app.dialog()
+                    .message("Failed to upload logs")
+                    .title("Upload Logs")
+                    .show(|_| {});
             }
         }
     });
 }

apps/desktop/src-tauri/src/web_api.rs (1)
`108-123`: **Consider extracting common header logic.** The `api_request` method duplicates header-setting logic from `do_authed_request` (lines 60-66). Consider extracting this into a helper function to reduce duplication.

Example refactor:

fn add_common_headers(mut req: reqwest::RequestBuilder) -> reqwest::RequestBuilder {
    req = req.header("X-Cap-Desktop-Version", env!("CARGO_PKG_VERSION"));
    if let Ok(s) = std::env::var("VITE_VERCEL_AUTOMATION_BYPASS_SECRET") {
        req = req.header("x-vercel-protection-bypass", s);
    }
    req
}

async fn api_request(...) -> Result<reqwest::Response, reqwest::Error> {
    let url = self.make_app_url(path.into()).await;
    let client = reqwest::Client::new();
    let req = build(client, url);
    let req = add_common_headers(req);
    req.send().await
}

apps/desktop/src-tauri/src/logging.rs (1)
`35-36`: **Consider using tokio::fs for async file operations.** The code uses `std::fs` (synchronous) for file operations inside an async function. While this works, it blocks the async runtime. Consider using `tokio::fs::metadata` and `tokio::fs::read_to_string` for better async performance.

Example:

-    let metadata =
-        fs::metadata(&log_file).map_err(|e| format!("Failed to read log file metadata: {}", e))?;
+    let metadata = tokio::fs::metadata(&log_file)
+        .await
+        .map_err(|e| format!("Failed to read log file metadata: {}", e))?;
     let file_size = metadata.len();

     const MAX_SIZE: u64 = 9 * 1024 * 1024;

     let log_content = if file_size > MAX_SIZE {
-        let content =
-            fs::read_to_string(&log_file).map_err(|e| format!("Failed to read log file: {}", e))?;
+        let content = tokio::fs::read_to_string(&log_file)
+            .await
+            .map_err(|e| format!("Failed to read log file: {}", e))?;

Also applies to: 42-43, 63-63
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)

- apps/desktop/src-tauri/src/lib.rs (4 hunks)
- apps/desktop/src-tauri/src/logging.rs (1 hunks)
- apps/desktop/src-tauri/src/main.rs (1 hunks)
- apps/desktop/src-tauri/src/thumbnails/mod.rs (0 hunks)
- apps/desktop/src-tauri/src/tray.rs (6 hunks)
- apps/desktop/src-tauri/src/web_api.rs (2 hunks)
- apps/web/app/api/desktop/[...route]/root.ts (4 hunks)
- packages/env/server.ts (1 hunks)
💤 Files with no reviewable changes (1)
- apps/desktop/src-tauri/src/thumbnails/mod.rs
🧰 Additional context used
📓 Path-based instructions (7)
**/*.rs

📄 CodeRabbit inference engine (AGENTS.md)

**/*.rs: Format Rust code using `rustfmt` and ensure all Rust code passes workspace-level clippy lints. Rust modules should be named with snake_case, and crate directories should be in kebab-case.

Files:

- apps/desktop/src-tauri/src/tray.rs
- apps/desktop/src-tauri/src/web_api.rs
- apps/desktop/src-tauri/src/logging.rs
- apps/desktop/src-tauri/src/main.rs
- apps/desktop/src-tauri/src/lib.rs
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx}: Use a 2-space indent for TypeScript code. Use Biome for formatting and linting TypeScript/JavaScript files by running `pnpm format`. Use strict TypeScript and avoid `any`; leverage shared types.

Files:

- packages/env/server.ts
- apps/web/app/api/desktop/[...route]/root.ts

**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx,js,jsx}: Use kebab-case for filenames for TypeScript/JavaScript modules (e.g., `user-menu.tsx`). Use PascalCase for React/Solid components.

Files:

- packages/env/server.ts
- apps/web/app/api/desktop/[...route]/root.ts
apps/web/**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

On the client, always use `useEffectQuery` or `useEffectMutation` from `@/lib/EffectRuntime`; never call `EffectRuntime.run*` directly in components.

Files:

- apps/web/app/api/desktop/[...route]/root.ts
apps/web/app/api/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

apps/web/app/api/**/*.{ts,tsx}: Prefer Server Actions for API surface; when routes are necessary, implement under app/api and export only the handler from apiToHandler(ApiLive). Construct API routes with @effect/platform HttpApi/HttpApiBuilder, declare contracts with Schema, and only export the handler. Use HttpAuthMiddleware for required auth and provideOptionalAuth for guests; avoid duplicating session lookups. Map domain errors to transport with HttpApiError.* and keep translation exhaustive (catchTags/tapErrorCause). Inside HttpApiBuilder.group, acquire services with Effect.gen and provide dependencies via Layer.provide instead of manual provideService.

Files:

- apps/web/app/api/desktop/[...route]/root.ts

apps/web/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

apps/web/**/*.{ts,tsx}: Use TanStack Query v5 for all client-side server state and fetching in the web app. Mutations should call Server Actions directly and perform targeted cache updates with setQueryData/setQueriesData. Run server-side effects via the ManagedRuntime from apps/web/lib/server.ts using EffectRuntime.runPromise/runPromiseExit; do not create runtimes ad hoc. Client code should use helpers from apps/web/lib/EffectRuntime.ts (useEffectQuery, useEffectMutation, useRpcClient); never call ManagedRuntime.make inside components.

Files:

- apps/web/app/api/desktop/[...route]/root.ts

apps/web/app/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Server components needing Effect services must call EffectRuntime.runPromise(effect.pipe(provideOptionalAuth)).

Files:

- apps/web/app/api/desktop/[...route]/root.ts
🧬 Code graph analysis (4)

apps/desktop/src-tauri/src/tray.rs (1)
- apps/desktop/src-tauri/src/logging.rs (2): `app` (6-7), `upload_log_file` (32-83)

apps/desktop/src-tauri/src/logging.rs (1)
- apps/desktop/src-tauri/src/lib.rs (6): `app` (1171-1172), `app` (2338-2338), `app` (2369-2369), `app` (2406-2406), `app` (2412-2412), `app` (2612-2613)

apps/desktop/src-tauri/src/main.rs (1)
- apps/desktop/src-tauri/src/lib.rs (1): `run` (1935-2507)

apps/web/app/api/desktop/[...route]/root.ts (2)
- apps/web/app/api/utils.ts (2): `withOptionalAuth` (45-55), `withAuth` (57-68)
- packages/env/server.ts (1): `serverEnv` (83-87)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Clippy
- GitHub Check: Build Desktop (aarch64-apple-darwin, macos-latest)
- GitHub Check: Build Desktop (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Analyze (rust)
🔇 Additional comments (9)
packages/env/server.ts (1)

`43-43`: **LGTM!** The new environment variable follows the same pattern as existing webhook URLs and is appropriately optional.

apps/web/app/api/desktop/[...route]/root.ts (2)

`74-74`: **LGTM!** All endpoints that previously relied on global authentication now have explicit `withAuth` middleware, maintaining the same security posture. Also applies to: 120-120, 152-152, 198-198

`14-14`: **Clarify the authentication requirement for the `/logs` endpoint.** All existing endpoints now have explicit `withAuth` middleware, and the removal of global auth is correct. However, the new `/logs` endpoint uses `withOptionalAuth`, allowing unauthenticated log uploads. Verify this is intentional:

- If logs should be accessible to guests: add rate limiting and verify the Discord webhook isn't abused
- If logs should require authentication: change `withOptionalAuth` to `withAuth` on line 26

apps/desktop/src-tauri/src/main.rs (1)
`137-137`: **LGTM!** The function call correctly passes the newly required `logs_dir` parameter, which was properly initialized earlier in the function.

apps/desktop/src-tauri/src/logging.rs (1)

`5-30`: **LGTM with minor observation.** The function correctly finds the latest log file. The `contains("cap-desktop.log")` check on line 18 is appropriate given the rolling appender pattern used in `main.rs` (which creates files like `cap-desktop.log.YYYY-MM-DD`).

apps/desktop/src-tauri/src/lib.rs (4)

`16-16`: **LGTM!** The new logging module is properly declared.

`124-124`: **LGTM!** The `logs_dir` field is appropriately added to the `App` struct to support the logging subsystem.

`1935-1935`: **LGTM!** The function signature correctly accepts the `logs_dir` parameter to propagate it through the application initialization.

`2253-2253`: **LGTM!** The `logs_dir` is properly initialized in the `App` state with the value passed from `main.rs`.
    let log_content = if file_size > MAX_SIZE {
        let content =
            fs::read_to_string(&log_file).map_err(|e| format!("Failed to read log file: {}", e))?;

        let header = format!(
            "⚠️ Log file truncated (original size: {} bytes, showing last ~9MB)\n\n",
            file_size
        );
        let max_content_size = (MAX_SIZE as usize) - header.len();

        if content.len() > max_content_size {
            let start_pos = content.len() - max_content_size;
            let truncated = &content[start_pos..];
            if let Some(newline_pos) = truncated.find('\n') {
                format!("{}{}", header, &truncated[newline_pos + 1..])
            } else {
                format!("{}{}", header, truncated)
            }
        } else {
            content
        }
    } else {
        fs::read_to_string(&log_file).map_err(|e| format!("Failed to read log file: {}", e))?
    };
Potential UTF-8 boundary issue in truncation logic.
Lines 52-53 truncate the string at an arbitrary byte position, which could split a UTF-8 multi-byte character and cause a panic or invalid UTF-8.
Apply this diff to ensure UTF-8 safety:
if content.len() > max_content_size {
let start_pos = content.len() - max_content_size;
- let truncated = &content[start_pos..];
+ // Find the next valid UTF-8 boundary
+ let truncated = if content.is_char_boundary(start_pos) {
+ &content[start_pos..]
+ } else {
+ // Find the next character boundary after start_pos
+ let next_boundary = (start_pos..content.len())
+ .find(|&i| content.is_char_boundary(i))
+ .unwrap_or(content.len());
+ &content[next_boundary..]
+ };
if let Some(newline_pos) = truncated.find('\n') {
format!("{}{}", header, &truncated[newline_pos + 1..])
} else {
🤖 Prompt for AI Agents
In apps/desktop/src-tauri/src/logging.rs around lines 41 to 64, the truncation
uses &content[start_pos..] which can slice in the middle of a UTF-8 multi-byte
character and panic; change the code to use content.get(start_pos..) (which
returns Option<&str>) and handle the None case safely (e.g., fall back to taking
the entire content or advance to the next valid char boundary), then continue
finding the newline on that &str; ensure all slicing uses .get(...) or other
UTF-8-safe APIs so no byte-index slicing can break UTF-8.
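To make the boundary-handling concrete, here is a small, runnable sketch of UTF-8-safe tail truncation using `is_char_boundary`, the approach this comment recommends. The `safe_tail` helper name is hypothetical and not part of the PR; it simply illustrates why naive byte slicing panics and how advancing to the next boundary avoids it.

```rust
/// Return (at most) the last `max_bytes` bytes of `s`, with the start of the
/// slice adjusted forward so it always lands on a UTF-8 character boundary.
/// (Hypothetical helper, not from the PR under review.)
fn safe_tail(s: &str, max_bytes: usize) -> &str {
    if s.len() <= max_bytes {
        return s;
    }
    let mut start = s.len() - max_bytes;
    // Advance past any continuation bytes; at most 3 steps for valid UTF-8,
    // since a code point is at most 4 bytes long.
    while !s.is_char_boundary(start) {
        start += 1;
    }
    &s[start..]
}

fn main() {
    // 'é' occupies 2 bytes; naive `&s[1..]` on "éb" would panic at runtime.
    assert_eq!(safe_tail("éb", 2), "b"); // start=1 is mid-'é', advances to 2
    assert_eq!(safe_tail("aé", 2), "é"); // start=1 is already a boundary
    assert_eq!(safe_tail("hello world", 5), "world");
    assert_eq!(safe_tail("abc", 10), "abc"); // shorter than the limit: unchanged
    println!("safe_tail works on multi-byte input");
}
```

The same invariant holds for the diff suggested above: whichever boundary-search strategy is used, the resulting slice start must satisfy `is_char_boundary` before indexing.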
app.post(
  "/logs",
  zValidator(
    "form",
    z.object({
      log: z.string(),
      os: z.string().optional(),
      version: z.string().optional(),
    }),
  ),
  withOptionalAuth,
  async (c) => {
    const { log, os, version } = c.req.valid("form");
    const user = c.get("user");

    try {
      const discordWebhookUrl = serverEnv().DISCORD_LOGS_WEBHOOK_URL;
      if (!discordWebhookUrl)
        throw new Error("Discord webhook URL is not configured");

      const formData = new FormData();
      const logBlob = new Blob([log], { type: "text/plain" });
      const fileName = `cap-desktop-${os || "unknown"}-${version || "unknown"}-${Date.now()}.log`;
      formData.append("file", logBlob, fileName);

      const content = [
        "New log file uploaded",
        user && `User: ${user.email} (${user.id})`,
        os && `OS: ${os}`,
        version && `Version: ${version}`,
      ]
        .filter(Boolean)
        .join("\n");

      formData.append("content", content);

      const response = await fetch(discordWebhookUrl, {
        method: "POST",
        body: formData,
      });

      if (!response.ok)
        throw new Error(
          `Failed to send logs to Discord: ${response.statusText}`,
        );

      return c.json({
        success: true,
        message: "Logs uploaded successfully",
      });
    } catch (error) {
      return c.json({ error: "Failed to upload logs" }, { status: 500 });
    }
  },
);
); |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧩 Analysis chain

**Consider adding rate limiting and authentication requirements.** The `/logs` endpoint allows unauthenticated uploads, which could be abused. While it's useful for debugging issues from logged-out users, consider:

- Rate limiting: Add rate limiting by IP to prevent abuse
- File size validation: The 9MB limit is enforced client-side in Rust, but should be validated server-side as well
- Authentication preference: Consider requiring authentication or at least logging anonymous uploads differently
🌐 Web query:
What are common rate limiting strategies for file upload endpoints in Next.js/Hono applications?
💡 Result:
Common strategies for rate-limiting file-upload endpoints (Next.js or Hono):
- Enforce request/size limits at the framework/middleware level (max Content-Length, body-size middleware). Prevents huge uploads from reaching your app. [1]
- Per-client request-rate limits (requests per window) — IP, user ID, API key. Implement with middleware like hono-rate-limiter, express-rate-limit, Upstash Ratelimit, or Cloudflare Worker bindings. Use a shared store (Redis/Upstash) for distributed apps. [2][3][4]
- Endpoint-specific limits and layered rules — e.g., a strict global limit + a looser authenticated-user limit or per-account quota. Useful for tiers and abuse mitigation. [4]
- Offload uploads with presigned/direct-to-storage uploads (S3 presigned POST/PUT). Your backend only issues short-lived upload tokens (rate-limit issuance) and verifies afterward — removes payload bandwidth from your server and avoids serverless payload size limits. [5][6]
- Chunked / resumable uploads (TUS, multipart chunking) with per-chunk and per-session throttling. Allows large files while controlling throughput and retry behavior. Combine with per-session rate limits and limits on concurrent sessions. [7]
- Bandwidth / concurrency throttling — limit simultaneous uploads per client and total concurrent connections (server or proxy level) to avoid saturating I/O. Implement at reverse proxy / Cloud provider (NGINX, CDN, Cloudflare) or application layer.
- Token-based and time-window controls for presigned URLs — limit how often a client may request new presigned URLs, set short expirations, and return 429+Retry-After when abused. [5][6]
- Monitoring + adaptive blocking — log uploads, track spikes, block abusive clients with temporary bans, CAPTCHA, or progressive delays (exponential backoff on retries).
Sources:
- Hono bodyLimit middleware docs. [1]
- hono-rate-limiter (middleware / npm + GitHub). [2][3]
- Cloudflare Worker / Hono rate-limit integrations (bindings middleware). [4]
- Next.js rate-limiting options overview (express-rate-limit, Upstash, etc.). [4]
- Presigned S3 uploads / presigned POST guides and AWS docs. [5][6]
- Resumable / chunked upload patterns (TUS, Cloudflare, Supabase). [7]
If you want, I can give sample code for Next.js or Hono showing one or two of these strategies (e.g., presigned-URL flow + rate-limit issuance).
🏁 Script executed:
#!/bin/bash
# Inspect the actual file and surrounding context
find apps/web -path "*api/desktop*" -name "*.ts" -type f | head -20
Length of output: 228
🏁 Script executed:
#!/bin/bash
# Check for rate-limiting middleware setup in the project
rg -l "rate.*limit|RateLimit|rateLimiter" apps/web/app/api --type ts --type tsx -C 2
Length of output: 27
🏁 Script executed:
#!/bin/bash
# Check for body-size or content-length middleware
rg -n "bodyLimit|Content-Length|body.*size|MAX.*SIZE" apps/web/app/api --type ts --type tsx | head -30
Length of output: 27
🏁 Script executed:
#!/bin/bash
# Check the actual file and surrounding code
cat "apps/web/app/api/desktop/[...route]/root.ts" 2>/dev/null | head -100
Length of output: 2622
🏁 Script executed:
#!/bin/bash
# Check for rate-limiting middleware (fixed command)
rg -n "rate.*limit|RateLimit|rateLimiter" apps/web/app/api --type ts -C 2
Length of output: 356
🏁 Script executed:
#!/bin/bash
# Check for body-size or content-length middleware
rg -n "bodyLimit|Content-Length|body.*size|MAX.*SIZE" apps/web/app/api --type ts -C 2
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Check the route.ts file which exports the app
cat "apps/web/app/api/desktop/[...route]/route.ts"
Length of output: 590
🏁 Script executed:
#!/bin/bash
# Check the utils file for middleware setup
cat "apps/web/app/api/utils.ts" 2>/dev/null || cat "apps/web/app/api/utils/index.ts" 2>/dev/null
Length of output: 2045
🏁 Script executed:
#!/bin/bash
# Check for withOptionalAuth and withAuth implementations
rg -n "withOptionalAuth|withAuth" "apps/web/app/api" --type ts -B 5 -A 10 | head -80
Length of output: 5134
**Add rate limiting, server-side file size validation, and request size limits to the `/logs` endpoint.**

The endpoint currently allows unauthenticated uploads with no rate limiting or server-side size validation, creating DoS and bandwidth abuse risks. Implement:

- Rate limiting per IP: Use middleware like `hono-rate-limiter` or Upstash to limit requests per IP (e.g., 5 requests/minute for unauthenticated clients). Apply at the `route.ts` level or per-handler.
- Server-side file size validation: Add a `maxLength` constraint to the `log` string schema (e.g., `z.string().max(10_000_000)` for 10MB) and enforce it via Hono's `bodyLimit()` middleware at the route level.
- Request size limit: Add `bodyLimit()` middleware to `route.ts` to cap the total request payload (e.g., 15MB) before it reaches handlers.

The 9MB client-side limit in Rust is insufficient without server-side enforcement. Consider requiring authentication, or tracking/alerting on anonymous uploads if you decide to keep the endpoint open.
Actionable comments posted: 2
Caution: Some comments are outside the diff and can't be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)

apps/desktop/src-tauri/src/web_api.rs (1)

`104-109`: **Map upgrade-required statuses to UpgradeRequired.** The `UpgradeRequired` variant is never produced. Consider mapping 402/403 to it.

     if response.status() == StatusCode::UNAUTHORIZED {
         error!("Authentication expired. Please log in again.");
         return Err(AuthedApiError::InvalidAuthentication);
+    } else if response.status() == StatusCode::PAYMENT_REQUIRED
+        || response.status() == StatusCode::FORBIDDEN
+    {
+        warn!("Feature requires upgrade.");
+        return Err(AuthedApiError::UpgradeRequired);
     }
♻️ Duplicate comments (1)

apps/desktop/src-tauri/src/logging.rs (1)

`51-59`: **UTF-8 boundary safety when slicing (duplicate of prior review).** Slicing with `&content[start_pos..]` can split a multibyte char. Use `.get(start_pos..)` and advance to the next boundary.

-    let truncated = &content[start_pos..];
+    let truncated = match content.get(start_pos..) {
+        Some(s) => s,
+        None => {
+            let next = (start_pos..content.len())
+                .find(|&i| content.is_char_boundary(i))
+                .unwrap_or(content.len());
+            &content[next..]
+        }
+    };
🧹 Nitpick comments (4)

apps/desktop/src-tauri/src/web_api.rs (2)

`123-127`: **Avoid string URL concatenation; use Url::join.** Prevents double slashes and simplifies future host checks.

-    async fn make_app_url(&self, pathname: impl AsRef<str>) -> String {
-        let app_state = self.state::<ArcLock<crate::App>>();
-        let server_url = &app_state.read().await.server_url;
-        format!("{}{}", server_url, pathname.as_ref())
-    }
+    async fn make_app_url(&self, pathname: impl AsRef<str>) -> String {
+        let app_state = self.state::<ArcLock<crate::App>>();
+        let server_url = app_state.read().await.server_url.clone();
+        match reqwest::Url::parse(&server_url).and_then(|u| u.join(pathname.as_ref())) {
+            Ok(u) => u.to_string(),
+            Err(_) => format!("{}{}", server_url.trim_end_matches('/'),
+                if pathname.as_ref().starts_with('/') { "" } else { "/" })
+                + pathname.as_ref(),
+        }
+    }

`52-71`: **Reuse a single reqwest::Client and set sane timeouts.** Creating a new Client per call forfeits connection pooling and per-client defaults (timeouts, proxies). Prefer a shared Client in app state, or at minimum set per-request timeouts at call sites.

If desired, I can add a Client to ArcLock and wire it here with a 15–30s default timeout. Based on learnings.

Also applies to: 112-121
apps/desktop/src-tauri/src/logging.rs (2)

`5-30`: **Avoid blocking std::fs in async fns.** These synchronous file ops run on the async runtime thread. Use tokio::fs or wrap in spawn_blocking.

Minimal spawn_blocking example:

-async fn get_latest_log_file(app: &AppHandle) -> Option<PathBuf> {
+async fn get_latest_log_file(app: &AppHandle) -> Option<PathBuf> {
     let logs_dir = app
         .state::<ArcLock<crate::App>>()
         .read()
         .await
         .logs_dir
         .clone();
-
-    let entries = fs::read_dir(&logs_dir).ok()?;
-    let mut log_files: Vec<_> = entries
-        .filter_map(|entry| { /* ... */ })
-        .collect();
-    log_files.sort_by(|a, b| b.1.cmp(&a.1));
-    log_files.first().map(|(path, _)| path.clone())
+    tokio::task::spawn_blocking(move || {
+        let entries = std::fs::read_dir(&logs_dir).ok()?;
+        let mut log_files: Vec<_> = entries
+            .filter_map(|entry| {
+                let entry = entry.ok()?;
+                let path = entry.path();
+                if path.is_file() && path.file_name()?.to_str()?.contains("cap-desktop.log") {
+                    let metadata = std::fs::metadata(&path).ok()?;
+                    let modified = metadata.modified().ok()?;
+                    Some((path, modified))
+                } else { None }
+            })
+            .collect();
+        log_files.sort_by(|a, b| b.1.cmp(&a.1));
+        log_files.first().map(|(p, _)| p.clone())
+    }).await.ok().flatten()
 }

Likewise, prefer tokio::fs::read_to_string or spawn_blocking for the large reads below. Based on learnings.

Also applies to: 35-64

`41-64`: **Be resilient to non-UTF-8 log bytes.** `read_to_string` fails on invalid UTF-8. Fall back to lossy decoding.

-    let content =
-        fs::read_to_string(&log_file).map_err(|e| format!("Failed to read log file: {}", e))?;
+    let content = match fs::read_to_string(&log_file) {
+        Ok(s) => s,
+        Err(_) => {
+            let bytes = fs::read(&log_file)
+                .map_err(|e| format!("Failed to read log file: {}", e))?;
+            String::from_utf8_lossy(&bytes).into_owned()
+        }
+    };
@@
-    fs::read_to_string(&log_file).map_err(|e| format!("Failed to read log file: {}", e))?
+    match fs::read_to_string(&log_file) {
+        Ok(s) => s,
+        Err(_) => {
+            let bytes = fs::read(&log_file)
+                .map_err(|e| format!("Failed to read log file: {}", e))?;
+            String::from_utf8_lossy(&bytes).into_owned()
+        }
+    }
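The lossy-decoding fallback recommended here can be demonstrated with a dependency-free snippet; `decode_log_bytes` is a hypothetical wrapper name used only for illustration.

```rust
use std::borrow::Cow;

// Decode raw log bytes, replacing any invalid UTF-8 sequences with U+FFFD
// instead of failing the whole upload, which is what read_to_string would do.
fn decode_log_bytes(bytes: &[u8]) -> Cow<'_, str> {
    String::from_utf8_lossy(bytes)
}

fn main() {
    // Valid UTF-8 passes through unchanged (and borrows, no allocation).
    assert_eq!(decode_log_bytes(b"all good"), "all good");

    // 0xFF can never appear in valid UTF-8; std::fs::read_to_string would
    // return an error on a file containing it, but lossy decoding recovers.
    let decoded = decode_log_bytes(b"bad \xFF byte");
    assert_eq!(decoded, "bad \u{FFFD} byte");
    println!("{}", decoded);
}
```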
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)

- apps/desktop/src-tauri/src/logging.rs (1 hunks)
- apps/desktop/src-tauri/src/web_api.rs (3 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs

📄 CodeRabbit inference engine (AGENTS.md)

**/*.rs: Format Rust code using `rustfmt` and ensure all Rust code passes workspace-level clippy lints. Rust modules should be named with snake_case, and crate directories should be in kebab-case.

Files:

- apps/desktop/src-tauri/src/web_api.rs
- apps/desktop/src-tauri/src/logging.rs
🧬 Code graph analysis (2)

apps/desktop/src-tauri/src/web_api.rs (1)
- apps/desktop/src/utils/tauri.ts (2): `AuthStore` (352-352), `AuthSecret` (351-351)

apps/desktop/src-tauri/src/logging.rs (1)
- apps/desktop/src-tauri/src/lib.rs (6): `app` (1171-1172), `app` (2338-2338), `app` (2369-2369), `app` (2406-2406), `app` (2412-2412), `app` (2612-2613)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Build Desktop (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Build Desktop (aarch64-apple-darwin, macos-latest)
- GitHub Check: Analyze (rust)
    let form = reqwest::multipart::Form::new()
        .text("log", log_content)
        .text("os", std::env::consts::OS)
        .text("version", env!("CARGO_PKG_VERSION"));
Redact secrets/PII and set a request timeout.
Logs may contain tokens, API keys, emails, etc. Redact before upload. Also set a per-request timeout to avoid hanging.
- let form = reqwest::multipart::Form::new()
- .text("log", log_content)
+ // Simple redaction (expand as needed)
+ let log_content = redact_secrets(&log_content);
+ let form = reqwest::multipart::Form::new()
+ .text("log", log_content)
.text("os", std::env::consts::OS)
.text("version", env!("CARGO_PKG_VERSION"));
@@
- client.post(url).multipart(form)
+ client
+ .post(url)
+ .multipart(form)
+ .timeout(std::time::Duration::from_secs(20))
Helper (place near top of module):
fn redact_secrets(s: &str) -> std::borrow::Cow<'_, str> {
use once_cell::sync::Lazy;
use regex::Regex;
static AUTH: Lazy<Regex> = Lazy::new(|| Regex::new("(?i)(authorization:\\s*bearer\\s+)[A-Za-z0-9._-]+").unwrap());
static KEY: Lazy<Regex> = Lazy::new(|| Regex::new("(?i)(api[_-]?key\\s*[:=]\\s*)[A-Za-z0-9._-]+").unwrap());
let s = AUTH.replace_all(s, "$1[REDACTED]");
KEY.replace_all(&s, "$1[REDACTED]")
}
Note: adds regex + once_cell deps if not already present. Based on learnings.
Also applies to: 71-76
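For illustration only, the redaction idea can also be sketched without the `regex`/`once_cell` dependencies the helper above requires. The `redact_line` function and its marker list are hypothetical; a production version would want the regex-based approach, which tolerates varied spacing and casing.

```rust
// Naive, std-only redaction sketch: blank out the token that follows a few
// known secret markers. Assumes the secret runs until the next whitespace.
fn redact_line(line: &str) -> String {
    let markers = ["Bearer ", "api_key="];
    let mut out = line.to_string();
    for marker in markers {
        if let Some(pos) = out.find(marker) {
            let start = pos + marker.len();
            // End of the secret: next whitespace, or end of line.
            let end = out[start..]
                .find(char::is_whitespace)
                .map(|i| start + i)
                .unwrap_or(out.len());
            out.replace_range(start..end, "[REDACTED]");
        }
    }
    out
}

fn main() {
    let line = "Authorization: Bearer abc123.def token ok";
    assert_eq!(redact_line(line), "Authorization: Bearer [REDACTED] token ok");
    assert_eq!(
        redact_line("GET /v1?api_key=sk_live_999"),
        "GET /v1?api_key=[REDACTED]"
    );
    assert_eq!(redact_line("no secrets here"), "no secrets here");
    println!("{}", redact_line(line));
}
```

This only catches the first occurrence of each marker per line and is case-sensitive, which is exactly the kind of gap the regex version closes.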
fn apply_env_headers(req: reqwest::RequestBuilder) -> reqwest::RequestBuilder {
    let mut req = req.header("X-Cap-Desktop-Version", env!("CARGO_PKG_VERSION"));

    if let Ok(s) = std::env::var("VITE_VERCEL_AUTOMATION_BYPASS_SECRET") {
        req = req.header("x-vercel-protection-bypass", s);
    }

    req
}
🧩 Analysis chain
Scope the Vercel bypass header to trusted hosts only.
Currently, x-vercel-protection-bypass is added to every outbound request if the env var is set. This risks leaking the secret to arbitrary hosts if server_url is misconfigured or user-controlled. Gate the header by hostname (e.g., only *.vercel.app) or an allowlist, and pass the URL to the helper so it can decide.
Proposed minimal change:
-fn apply_env_headers(req: reqwest::RequestBuilder) -> reqwest::RequestBuilder {
+fn apply_env_headers(req: reqwest::RequestBuilder, base_url: &str) -> reqwest::RequestBuilder {
let mut req = req.header("X-Cap-Desktop-Version", env!("CARGO_PKG_VERSION"));
- if let Ok(s) = std::env::var("VITE_VERCEL_AUTOMATION_BYPASS_SECRET") {
- req = req.header("x-vercel-protection-bypass", s);
- }
+ if let Ok(s) = std::env::var("VITE_VERCEL_AUTOMATION_BYPASS_SECRET") {
+ if let Ok(u) = reqwest::Url::parse(base_url) {
+ if matches!(u.host_str(), Some(h) if h.ends_with(".vercel.app")) {
+ req = req.header("x-vercel-protection-bypass", s);
+ }
+ }
+ }
req
}
@@
- apply_env_headers(req).send().await
+ apply_env_headers(req, &url).send().await
@@
- apply_env_headers(build(client, url)).send().await
+ apply_env_headers(build(client.clone(), url.clone()), &url).send().await
Please confirm the exact trusted hostnames/allowlist you want (prod, staging). Based on learnings.
Also applies to: 70-71, 112-121
I'll verify the security concern and the proposed fix by examining the actual codebase.
---
🏁 Script executed:
```shell
#!/bin/bash
# Inspect the apply_env_headers function and surrounding context
cat apps/desktop/src-tauri/src/web_api.rs | head -130 | tail -90
```

Length of output: 2923
🏁 Script executed:
#!/bin/bash
# Find all occurrences of apply_env_headers call sites
rg -n "apply_env_headers" apps/desktop/src-tauri/src/web_api.rs -A 2 -B 2
Length of output: 384
🏁 Script executed:
#!/bin/bash
# Understand function signatures and URL availability
rg -n "fn apply_env_headers|\.send\(\)" apps/desktop/src-tauri/src/web_api.rs -B 5 -A 5
Length of output: 1227
**Add hostname validation to `apply_env_headers` to prevent leaking the Vercel bypass secret to untrusted hosts.**

The security concern is valid: the Vercel bypass header is currently added to every outbound request without verifying the destination host. If `server_url` is misconfigured or user-controlled, the secret leaks to arbitrary hosts.

The proposed fix is feasible: both call sites (lines 70 and 120) have access to the URL and can pass it as a parameter. You should:

- Update `apply_env_headers(req: reqwest::RequestBuilder, base_url: &str)` to accept the URL
- Parse and validate the hostname before adding the bypass header
- Update both call sites to pass `&url`

Clarify which hostnames should be trusted (the proposal suggests `*.vercel.app`; confirm whether staging and prod have distinct domains that should both be allowed).
Summary by CodeRabbit
New Features
Bug Fixes / Improvements