Conversation
lostiv commented on May 11, 2026
- Remove all push-to-backend logic from autoSync.ts, keeping only the 5 s polling pull
- Add updateRepository (PATCH) and deleteRepository (DELETE) methods to backendAdapter
- Replace 18 forceSyncToBackend() call sites with precise API calls
- Preserve existing AI data field-by-field during the backend upsert to avoid overwriting analysis results
- Fix a potential crash in SubscriptionRepoCard where existingRepo could be undefined
- Clean up dead variables and stale comments in autoSync.ts
ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro Plus
Run ID:
⛔ Files ignored due to path filters (1)
📒 Files selected for processing (2)
✅ Files skipped from review due to trivial changes (2)
📝 Walkthrough

Summary by CodeRabbit

Walkthrough: Auto-sync switches from a push/debounce model to interval polling; the backend adapter gains patch/delete repository endpoints; frontend components now make fine-grained, non-blocking backend API calls; and the server-side bulk upsert preserves AI fields to prevent empty overwrites.

Changes: End-to-end sync and API migration
Sequence Diagram

```mermaid
sequenceDiagram
    participant App as App (React)
    participant autoSync as autoSync Service
    participant BackendAdapter as Backend Adapter
    participant Server as Server API
    App->>autoSync: startAutoSync()
    autoSync->>autoSync: setInterval(syncFromBackend)
    autoSync->>BackendAdapter: syncFromBackend()
    BackendAdapter->>Server: GET /api/repositories
    Server-->>BackendAdapter: repos
    BackendAdapter-->>App: Update local store
    Note over App,BackendAdapter: Component-level targeted updates (replacing forceSyncToBackend)
    App->>BackendAdapter: updateRepository(id, fields)
    BackendAdapter->>Server: PATCH /repositories/:id
    Server-->>BackendAdapter: 200 OK
    App->>BackendAdapter: deleteRepository(id)
    BackendAdapter->>Server: DELETE /repositories/:id
    Server-->>BackendAdapter: 204 No Content
```
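The polling flow above can be sketched as a small standalone function. This is an illustrative shape, not the project's actual autoSync.ts; only the 5-second interval, the concurrency guard, and the returned cleanup function come from the review notes, and the immediate initial pull is an assumption.

```typescript
// Minimal polling sketch: start a recurring pull and return a cleanup
// function, matching the React useEffect cleanup pattern.
type SyncFn = () => Promise<void>;

export function startAutoSync(syncFromBackend: SyncFn, intervalMs = 5000): () => void {
  let inFlight = false; // guard: skip a tick if the previous pull is still running

  const tick = async (): Promise<void> => {
    if (inFlight) return;
    inFlight = true;
    try {
      await syncFromBackend();
    } finally {
      inFlight = false;
    }
  };

  void tick(); // assumption: do one pull immediately on start
  const timer = setInterval(() => void tick(), intervalMs);
  return () => clearInterval(timer); // the stopAutoSync equivalent
}
```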
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches
🧪 Generate unit tests (beta)
✨ Simplify code
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/components/ReleaseTimeline.tsx (1)
503-515: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift
Splitting unsubscribe into two writes can leave a "partially succeeded" state.

The flow first marks the repository as unsubscribed, then syncs the release list. If the second step fails, the catch block only rolls back local Zustand state; since the first backend write has already succeeded, the next poll will flip the repository back to "unsubscribed". This flow needs a single-endpoint/transactional treatment, or a compensating write-back on failure.
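The compensating write-back could look roughly like this. The `backend` method names follow the review text; everything else (signatures, return value) is an assumption for illustration.

```typescript
// Compensating unsubscribe: flip the flag, then sync releases; if the
// second step fails, write the flag back so the backend cannot drift
// from local state until the next poll.
interface Backend {
  updateRepository(id: string, fields: Record<string, unknown>): Promise<void>;
  syncReleases(id: string): Promise<void>;
}

export async function unsubscribeWithCompensation(
  backend: Backend,
  repoId: string,
): Promise<boolean> {
  await backend.updateRepository(repoId, { subscribed_to_releases: false });
  try {
    await backend.syncReleases(repoId);
    return true; // both writes landed; safe to update local state
  } catch {
    // Step 2 failed: revert step 1 on the backend before touching local state.
    await backend.updateRepository(repoId, { subscribed_to_releases: true });
    return false;
  }
}
```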
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/components/ReleaseTimeline.tsx` around lines 503 - 515, The current unsubscribe flow performs two backend calls (backend.updateRepository and backend.syncReleases) which can leave the system in a partial state if the second call fails; change this to an atomic/compensating approach: either (A) use a single backend API that both flips subscribed_to_releases and returns updated releases (replace the two calls with one backend.unsubscribeRepository-like call and then update local state from its response), or (B) if a single API isn't available, on error call backend.updateRepository(repo.id, { subscribed_to_releases: true }) to revert the backend change before mutating local state (referencing backend.updateRepository, backend.syncReleases, updateRepository, and useAppStore.setState) so backend and local Zustand remain consistent.src/components/RepositoryList.tsx (1)
560-633: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift
These bulk operations now only mutate the local store and never reach the backend.

The restore / subscribe / unsubscribe / lock-category / unlock-category / bulk-categorize paths all update only Zustand. Since this PR makes the backend the single source of truth, these operations will be overwritten by stale backend values on the next poll, i.e. they are effectively non-persistent. Each bulk action needs a matching backend.updateRepository / backend.syncReleases / backend.syncRepositories call, or a dedicated bulk endpoint.

Also applies to: 857-923, 925-1000, 1014-1061
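A persist-first bulk pattern could be shaped like the sketch below. The helper and its parameters are hypothetical; real handlers would pass in backend.updateRepository and the corresponding Zustand action.

```typescript
// Persist each change to the backend first; mutate the local store only
// for repos the backend confirmed, and return the rest as failures.
interface RepoPatch {
  id: string;
  fields: Record<string, unknown>;
}

export async function bulkUpdateWithPersistence(
  patches: RepoPatch[],
  persist: (id: string, fields: Record<string, unknown>) => Promise<void>,
  applyLocal: (id: string, fields: Record<string, unknown>) => void,
): Promise<string[]> {
  const failedIds: string[] = [];
  for (const { id, fields } of patches) {
    try {
      await persist(id, fields); // backend is the source of truth
      applyLocal(id, fields);    // update the store only after confirmation
    } catch {
      failedIds.push(id);        // surface these to the user
    }
  }
  return failedIds;
}
```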
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/components/RepositoryList.tsx` around lines 560 - 633, handleBulkRestore currently only mutates the local Zustand store via updateRepository, so these changes will be overwritten by the backend on the next poll; call the appropriate backend persistence API (e.g., backend.updateRepository for each changed repo or a bulk backend.syncRepositories/bulkUpdate endpoint) inside the hasChanges branch, await its success, then only call updateRepository to update the local store after the backend confirms success (and handle failures by pushing to failedRepos and logging); apply the same pattern to the other bulk handlers referenced (subscribe/unsubscribe/lock-category/unlock-category/bulk categorize) so each uses backend.updateRepository or a bulk backend API and only updates Zustand on confirmed persistence.
🧹 Nitpick comments (1)
src/services/autoSync.ts (1)
22-24: 💤 Low value
quickHash uses JSON.stringify, which is a potential performance concern.

Serializing the full payload with JSON.stringify every 5 seconds can become costly for large repository lists. The current implementation is acceptable, but if data volume grows, consider a more efficient hashing approach or comparing only key fields/timestamps.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/services/autoSync.ts` around lines 22 - 24, quickHash currently uses JSON.stringify(data) which can be costly for large arrays called every 5s; update quickHash (and call sites) to avoid full serialization by either 1) computing a lightweight stable hash of only the necessary fields/timestamps (e.g., join key properties or map to an object with only ids/updatedAt values) or 2) using a faster streaming hash implementation (e.g., a native crypto hash like createHash('sha256') over a compact canonical representation) so you reduce work per run; modify the quickHash function to take the minimal data structure it needs and ensure callers pass only those key fields rather than the full payload.
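The key-fields variant the nitpick suggests could be as simple as a change signature built from id plus timestamp. The field names `id` and `updated_at` are assumptions about the repo shape, and the function is a stand-in for quickHash, not the actual implementation.

```typescript
// Build a cheap change signature from identity + timestamp instead of
// serializing the full payload, so the 5-second change check only
// touches two small fields per repo as the list grows.
interface RepoStamp {
  id: string;
  updated_at: string;
}

export function quickChangeKey(repos: RepoStamp[]): string {
  // Compact canonical representation: "id:updated_at" pairs joined.
  // Order-sensitive by design; sort first if ordering can vary.
  return repos.map((r) => `${r.id}:${r.updated_at}`).join("|");
}
```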
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@server/src/routes/repositories.ts`:
- Around line 197-225: The current checks treat explicit empty arrays as "not
provided"; change the logic to detect presence of fields using
Object.prototype.hasOwnProperty.call(repo, 'ai_tags') and ...call(repo,
'ai_platforms') and use those for hasIncomingTags/hasIncomingPlatforms (and
include these presence checks in hasAnyIncomingAI), then when the property
exists use JSON.stringify(repo.ai_tags) / JSON.stringify(repo.ai_platforms) even
if empty, otherwise fall back to existing?.ai_tags / existing?.ai_platforms;
update aiTagsJson, aiPlatformsJson, hasAnyIncomingAI, hasIncomingTags, and
hasIncomingPlatforms accordingly.
In `@src/components/CategorySidebar.tsx`:
- Around line 212-217: Before calling deleteCustomCategory(...) and
backend.syncSettings(...), capture a snapshot of
useAppStore.getState().customCategories (e.g., const prev = [...]) so you can
rollback; then call deleteCustomCategory(category.id) and await
backend.syncSettings(...), and in the catch block restore the saved snapshot
into the store (replacing customCategories) before showing the toast (use the
same t(...) message). This mirrors the rollback used for hiding default
categories and ensures local state is reverted on sync failure.
In `@src/components/RepositoryCard.tsx`:
- Around line 581-583: The code currently calls
backend.deleteRepository(repository.id) in RepositoryCard as fire-and-forget and
swallows errors, so when the backend delete fails the local UI has already
removed the card only for polling to restore it; instead, await the
deleteRepository result (or use then/catch), remove the card from local
state/view and show the success toast only after it succeeds (keyed to
repository.id); on failure, restore/keep the local card state and show an error
message or retry button, handle the backend.isAvailable === false branch, and
log/report the error for debugging.
In `@src/components/RepositoryEditModal.tsx`:
- Around line 379-387: The current flow calls updateRepository(updatedRepo)
before awaiting backend.updateRepository and lacks a try/catch, so a failed save
leaves an incorrect optimistic state and an unhandled rejection; wrap the
backend.updateRepository(repository.id, {...}) call in a try/catch, on success
call onClose(), on failure revert the local change by calling updateRepository
with the original repository snapshot (capture original before the optimistic
update or refetch from backend), surface an error to the user (e.g., set an
error state or show a toast) and ensure the promise rejection is handled to
avoid leaving the modal in an inconsistent state.
In `@src/components/RepositoryList.tsx`:
- Around line 686-689: The loop currently deletes items locally and fires
backend.deleteRepository(repoId).catch(() => {}) which swallows failures and
causes repos to reappear on next sync; update the logic around
successIds/deleteRepository/backend.deleteRepository so you either (a) await
backend.deleteRepository for each repo and only call deleteRepository(repoId)
locally after the backend call succeeds, or (b) collect failed repoIds from
backend.deleteRepository and then roll back the local delete for those ids
(e.g., re-add them) and surface a user-visible error/notification listing failed
repoIds; target the loop using symbols successIds, deleteRepository, and
backend.deleteRepository when implementing this change.
In `@src/components/SubscriptionRepoCard.tsx`:
- Around line 118-120: The backend calls in SubscriptionRepoCard are being
treated as best-effort and errors are swallowed (e.g.,
backend.deleteRepository(existingRepo.id)), causing optimistic UI changes to
“bounce” on the next poll; change the logic so you await the backend call
instead of silently catching, and on failure revert the optimistic state update
and show a retry/error message to the user (or disable the control until
success). Specifically, update the code paths around
backend.deleteRepository(existingRepo.id) and the similar branch at the other
location (lines ~178-180) to (1) await the promise, (2) on catch revert whatever
local state/update you applied when optimistically toggling the card, and (3)
surface an error/toast or re-enable retry so the user can retry the action.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro Plus
Run ID: 561f789d-890c-4a24-ba7b-1ca2c51404a7
📒 Files selected for processing (10)
- server/src/routes/repositories.ts
- src/App.tsx
- src/components/CategorySidebar.tsx
- src/components/ReleaseTimeline.tsx
- src/components/RepositoryCard.tsx
- src/components/RepositoryEditModal.tsx
- src/components/RepositoryList.tsx
- src/components/SubscriptionRepoCard.tsx
- src/services/autoSync.ts
- src/services/backendAdapter.ts
📜 Review details
🧰 Additional context used
📓 Path-based instructions (3)
src/services/**
⚙️ CodeRabbit configuration file
src/services/**: API service layer. Review focus: 1. Timeout handling for fetch requests (AbortController) 2. Error handling and user-friendly error message translation 3. Transport security for API keys/tokens 4. Request retry and fallback strategies 5. CORS handling for the WebDAV/backend proxy
Files:
- src/services/backendAdapter.ts
- src/services/autoSync.ts
src/components/**
⚙️ CodeRabbit configuration file
src/components/**: React component directory (TypeScript + Tailwind CSS). Review focus: 1. Compatibility with both dark (dark: prefix) and light themes 2. Complete responsive layout (Tailwind breakpoints: sm/md/lg/xl) 3. Complete props typing (TypeScript interfaces) 4. Single responsibility per component 5. Accessibility (aria-label, role, keyboard navigation) 6. Sensible use of React.memo / useMemo / useCallback
Files:
- src/components/ReleaseTimeline.tsx
- src/components/CategorySidebar.tsx
- src/components/RepositoryCard.tsx
- src/components/SubscriptionRepoCard.tsx
- src/components/RepositoryEditModal.tsx
- src/components/RepositoryList.tsx
server/src/routes/**
⚙️ CodeRabbit configuration file
server/src/routes/**: Express route layer. Review focus: 1. Input validation (type and range checks on req.body/req.params/req.query) 2. SQL injection protection (parameterized queries with ? everywhere; no string concatenation) 3. Correct transaction usage (bulk operations should use db.transaction()) 4. Complete error handling (try-catch + unified error response format) 5. Correct application of the auth middleware (authMiddleware) 6. Unified JSON response format: { data/error, code }
Files:
server/src/routes/repositories.ts
🔇 Additional comments (5)
src/services/autoSync.ts (3)
175-189: LGTM!
The startAutoSync implementation is clean and concise:
- Correctly clears any existing timer to avoid duplicates
- Resets the sync state flag
- Returns stopAutoSync as the cleanup function, matching the React useEffect cleanup pattern

36-39: Concurrency control is correct.
Using the _isSyncingFromBackendActive flag to prevent concurrent pulls is the right approach; it avoids request pile-up within the 5-second polling interval.

44-50: ⚡ Quick win
Timeout handling is already centralized in the backendAdapter layer.
All fetch requests already handle timeouts through backendAdapter.fetchWithTimeout(). The five methods fetchRepositories, fetchReleases, fetchAIConfigs, fetchWebDAVConfigs, and fetchSettings all use it, with a built-in AbortController and a 30-second default timeout. There is no need to duplicate this in autoSync.ts; the service-layer coding guideline on timeout handling is already satisfied.

> Likely an incorrect or invalid review comment.

src/App.tsx (2)
80-110: LGTM! Lifecycle management is correct.
The async initialization and cleanup logic is implemented correctly:
- The cancelled flag prevents state updates after unmount
- unsubscribe is invoked correctly during cleanup
- void backendAnalysis.resumeBatchAnalysis() explicitly signals that the Promise result is ignored

It integrates well with the new startAutoSync API.

17-17: Import update consistent with the new API.
The stopAutoSync import was removed because startAutoSync now returns the cleanup function directly. This simplifies the API and follows a common React subscription pattern.
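The fetchWithTimeout pattern credited to the adapter in the autoSync comments above could be sketched as follows. Only the AbortController and the 30-second default come from the review; the injectable fetch parameter is added here purely to make the sketch testable, and the real adapter need not have it.

```typescript
// Abort a fetch that exceeds the timeout so a stalled backend cannot
// hang the polling loop indefinitely.
type FetchLike = (url: string, init?: RequestInit) => Promise<Response>;

export async function fetchWithTimeout(
  url: string,
  init: RequestInit = {},
  timeoutMs = 30_000,
  fetchImpl: FetchLike = fetch,
): Promise<Response> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetchImpl(url, { ...init, signal: controller.signal });
  } finally {
    clearTimeout(timer); // always clear, success or failure
  }
}
```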
```typescript
// Preserve existing AI data field-by-field when incoming value is empty
const hasAnyIncomingAI =
  (repo.ai_summary != null && repo.ai_summary !== '') ||
  (Array.isArray(repo.ai_tags) && repo.ai_tags.length > 0) ||
  (Array.isArray(repo.ai_platforms) && repo.ai_platforms.length > 0) ||
  repo.analyzed_at != null ||
  repo.analysis_failed === true ||
  repo.analysis_failed === 1;

const hasIncomingSummary = repo.ai_summary != null && repo.ai_summary !== '';
const hasIncomingTags = Array.isArray(repo.ai_tags) && repo.ai_tags.length > 0;
const hasIncomingPlatforms = Array.isArray(repo.ai_platforms) && repo.ai_platforms.length > 0;
const hasIncomingAnalyzedAt = repo.analyzed_at != null;

const aiSummary = hasIncomingSummary
  ? repo.ai_summary
  : (existing?.ai_summary ?? null);
const aiTagsJson = hasIncomingTags
  ? JSON.stringify(repo.ai_tags)
  : (existing?.ai_tags ?? '[]');
const aiPlatformsJson = hasIncomingPlatforms
  ? JSON.stringify(repo.ai_platforms)
  : (existing?.ai_platforms ?? '[]');
const analyzedAt = hasIncomingAnalyzedAt
  ? repo.analyzed_at
  : (existing?.analyzed_at ?? null);
const analysisFailed = hasAnyIncomingAI
  ? ((repo.analysis_failed === true || repo.analysis_failed === 1) ? 1 : 0)
  : (existing?.analysis_failed ?? 0);
```
An explicit empty array is misread as "AI field not provided".

The code uses length > 0 to decide whether ai_tags / ai_platforms carry an incoming value, which treats a legitimate [] as "not provided" and keeps the old database values. So if a real analysis returns empty tags or platforms, the backend still retains the stale analysis result. Use field presence to distinguish omitted from explicitly empty, e.g. Object.prototype.hasOwnProperty.call(repo, 'ai_tags').
♻️ A possible fix direction

```diff
-const hasIncomingTags = Array.isArray(repo.ai_tags) && repo.ai_tags.length > 0;
-const hasIncomingPlatforms = Array.isArray(repo.ai_platforms) && repo.ai_platforms.length > 0;
+const hasIncomingTags = Object.prototype.hasOwnProperty.call(repo, 'ai_tags');
+const hasIncomingPlatforms = Object.prototype.hasOwnProperty.call(repo, 'ai_platforms');
 const aiTagsJson = hasIncomingTags
-  ? JSON.stringify(repo.ai_tags)
+  ? JSON.stringify(Array.isArray(repo.ai_tags) ? repo.ai_tags : [])
   : (existing?.ai_tags ?? '[]');
 const aiPlatformsJson = hasIncomingPlatforms
-  ? JSON.stringify(repo.ai_platforms)
+  ? JSON.stringify(Array.isArray(repo.ai_platforms) ? repo.ai_platforms : [])
   : (existing?.ai_platforms ?? '[]');
```
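The omitted-vs-empty distinction is easiest to see in isolation. The helper below is a standalone restatement of the suggested fix for one field, not the route's actual code.

```typescript
// Distinguish "field omitted" from "field explicitly empty": only an
// omitted field should fall back to the existing row's value.
export function resolveAiTagsJson(
  repo: { ai_tags?: string[] },
  existingJson: string | null,
): string {
  const hasIncomingTags = Object.prototype.hasOwnProperty.call(repo, "ai_tags");
  return hasIncomingTags
    ? JSON.stringify(Array.isArray(repo.ai_tags) ? repo.ai_tags : [])
    : existingJson ?? "[]";
}
```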
```typescript
if (backend.isAvailable) {
  backend.deleteRepository(repository.id).catch(() => { /* non-critical */ });
}
```
Don't swallow backend delete failures.

backend.deleteRepository(...) is treated here as fire-and-forget with the error discarded. If the backend delete fails, the card disappears locally, but the next 5-second poll re-adds it from backend data, so the user sees "unstar succeeded" followed by the repository bouncing back. At minimum, wait for the backend delete to succeed before showing the success toast, or roll back the local delete on failure and prompt a retry.
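A confirm-before-remove shape for the single-card delete might look like this. The callback parameters stand in for the component's actual Zustand action and toast helper.

```typescript
// Remove the card locally only after the backend confirms the delete;
// on failure keep the card and report the error instead of letting the
// next poll silently "undo" the UI.
export async function deleteWithConfirmation(
  id: string,
  backendDelete: (id: string) => Promise<void>,
  removeLocal: (id: string) => void,
  notifyError: (msg: string) => void,
): Promise<boolean> {
  try {
    await backendDelete(id);
    removeLocal(id); // safe: the backend no longer has the repo
    return true;
  } catch (err) {
    notifyError(`Failed to delete ${id}: ${String(err)}`); // keep card, allow retry
    return false;
  }
}
```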
```typescript
for (const repoId of successIds) {
  deleteRepository(repoId);
  backend.deleteRepository(repoId).catch(() => {});
}
```
Bulk unstar must not swallow backend delete failures either.

Same as the single-card flow: if a backend delete fails, the repository is removed from the local list first, but the next poll pulls it back. At minimum, collect the failed items and notify the user, or roll back the corresponding repositories on failure.
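The collect-the-failures variant could be shaped like this sketch; the helper name and callbacks are hypothetical stand-ins for the component's handlers.

```typescript
// Bulk delete: run the backend deletes, remove locally only the
// confirmed ones, and return the ids that failed so the UI can list them.
export async function bulkDeleteRepos(
  ids: string[],
  backendDelete: (id: string) => Promise<void>,
  removeLocal: (id: string) => void,
): Promise<string[]> {
  const results = await Promise.allSettled(ids.map((id) => backendDelete(id)));
  const failedIds: string[] = [];
  results.forEach((result, i) => {
    if (result.status === "fulfilled") removeLocal(ids[i]); // confirmed by backend
    else failedIds.push(ids[i]);                            // roll up for a notification
  });
  return failedIds;
}
```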
```typescript
if (backend.isAvailable) {
  backend.deleteRepository(existingRepo.id).catch(() => {});
}
```
Star/Unstar backend writes must not fail silently.

Both branches treat the backend sync as best-effort. Whenever the request fails, the card toggles according to local state first, but the next poll overwrites it with stale backend data, making Star/Unstar appear to "bounce". At minimum, roll back the optimistic state on failure and prompt a retry, or confirm the operation only after the backend write succeeds.

Also applies to: 178-180
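The revert-on-failure flow for the optimistic star toggle could be sketched as below; the setter and error callback are stand-ins for the component's real state handling.

```typescript
// Optimistic toggle: apply the UI change immediately, await the backend
// write, and revert + surface an error if it fails, so the card does not
// "bounce" on the next poll.
export async function toggleStarOptimistically(
  current: boolean,
  setStarred: (v: boolean) => void,
  persist: (starred: boolean) => Promise<void>,
  onError: (msg: string) => void,
): Promise<void> {
  const next = !current;
  setStarred(next); // optimistic update
  try {
    await persist(next);
  } catch (err) {
    setStarred(current); // revert the optimistic state
    onError(`Star toggle failed: ${String(err)}`);
  }
}
```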
Replies to each review comment below.

Will fix:

Intentionally not fixing (design trade-offs):
Tip: For best results, initiate chat on the files or code changes.
The three items you will fix — fully agreed; these fixes will meaningfully improve robustness:

The four items intentionally left unfixed — design intent understood and recorded in the knowledge base; subsequent reviews will not flag them again:

✏️ Learnings added
- CategorySidebar: roll back local state when deleteCustomCategory fails
- RepositoryEditModal: roll back + toast on backend.updateRepository failure
- ReleaseTimeline: compensating restore of the backend subscribed_to_releases flag when the two-step unsubscribe write fails