
refactor: make the backend the single source of truth and remove bidirectional auto-sync #12

Merged
lostiv merged 3 commits into main from fix/ai
May 11, 2026

Conversation

@lostiv
Owner

@lostiv lostiv commented May 11, 2026

  • Remove all push-to-backend logic from autoSync.ts, keeping only the 5s polling pull
  • Add updateRepository (PATCH) and deleteRepository (DELETE) methods to backendAdapter
  • Replace 18 forceSyncToBackend() call sites with targeted API calls
  • Preserve existing AI data field-by-field during backend upserts so analysis results are not overwritten
  • Fix a potential crash in SubscriptionRepoCard where existingRepo could be undefined
  • Clean up dead variables and stale comments in autoSync.ts

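A rough sketch of the two adapter additions listed above, assuming a `/api` base path and an injectable fetch-style transport; only the HTTP verbs and the (id, fields) signature come from the PR description, and the real code in src/services/backendAdapter.ts may differ:

```typescript
// Hypothetical shapes for the two new adapter methods.
type FetchInit = { method?: string; headers?: Record<string, string>; body?: string };
type FetchLike = (url: string, init?: FetchInit) => Promise<{ ok: boolean }>;

const repoUrl = (id: number): string => `/api/repositories/${id}`;

// PATCH a subset of fields on one repository instead of re-pushing the full list.
async function updateRepository(
  id: number,
  fields: Record<string, unknown>,
  fetchFn: FetchLike,
): Promise<boolean> {
  const res = await fetchFn(repoUrl(id), {
    method: 'PATCH',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(fields),
  });
  return res.ok;
}

// DELETE one repository by id (used when a repo is unstarred).
async function deleteRepository(id: number, fetchFn: FetchLike): Promise<boolean> {
  const res = await fetchFn(repoUrl(id), { method: 'DELETE' });
  return res.ok;
}
```

In the real adapter the transport would presumably be its existing timeout-aware fetch helper; it is injected here only so the sketch stays self-contained.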
@coderabbitai

coderabbitai Bot commented May 11, 2026

Review Change Stack
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro Plus

Run ID: c123f2d9-581c-49a8-944c-56ef8e1643c9

📥 Commits

Reviewing files that changed from the base of the PR and between 383ae46 and ff6df11.

⛔ Files ignored due to path filters (1)
  • versions/version-info.xml is excluded by !versions/**
📒 Files selected for processing (2)
  • package.json
  • server/package.json
✅ Files skipped from review due to trivial changes (2)
  • server/package.json
  • package.json

📝 Walkthrough

Summary by CodeRabbit

  • Improvements
    • Auto-sync now relies on periodic backend polling, reducing full/push-driven syncs
    • Bulk updates preserve existing AI data so it is not overwritten by empty values
    • Per-item backend updates and deletes reduce reliance on full syncs
    • Many UI actions now call the backend adapter directly, with rollback on failure for a better user experience
    • The backend sync endpoint supports a finer-grained sync mode
  • Other
    • Version bumped to 0.6.1

Walkthrough

Auto-sync moves from a push/debounce model to timed polling; the backend adapter gains per-repository PATCH/DELETE endpoints; frontend components now call fine-grained backend APIs (non-blocking); and the server's bulk upsert preserves AI fields to prevent empty-value overwrites.

Change details

End-to-end sync and API migration

Layer / File(s) Summary
Server-side AI field preservation
server/src/routes/repositories.ts
PUT /api/repositories pre-fetches existing AI fields before the bulk upsert and falls back to database values when the incoming ones are empty; INSERT OR REPLACE uses the computed AI variables; the DELETE route has formatting-only changes.
Backend adapter
src/services/backendAdapter.ts
syncRepositories(repos, isFullSync=false) gains an isFullSync parameter; new updateRepository(id, fields) (PATCH) and deleteRepository(id) (DELETE) methods.
Auto-sync service
src/services/autoSync.ts
Push debouncing and store-subscription logic removed; timed pulling kept: a single active-pull flag, startAutoSync uses setInterval(syncFromBackend), and stopAutoSync clears the timer and resets the flag.
App lifecycle
src/App.tsx
stopAutoSync import removed; cleanup now directly calls the unsubscribe function returned by startAutoSync.
Category sidebar
src/components/CategorySidebar.tsx
forceSyncToBackend removed; deleting/hiding a category calls backend.syncSettings(...); dragging a repository calls backend.updateRepository(id, { custom_category, ... }) and rolls back local state on failure.
Release timeline
src/components/ReleaseTimeline.tsx
Global force-sync removed; unsubscribing updates local state first, then calls backend.updateRepository(id, { subscribed_to_releases: false }) and backend.syncReleases(...), attempting to roll back the backend flag on failure.
Single-repository actions
src/components/RepositoryCard.tsx, SubscriptionRepoCard.tsx, RepositoryEditModal.tsx
forceSyncToBackend removed; unstarring calls backend.deleteRepository(id) (conditional, non-blocking); starring or adding calls backend.syncRepositories([repo]); saving an edit calls backend.updateRepository(id, fields), rolling back and notifying on failure.
Bulk actions
src/components/RepositoryList.tsx
The await forceSyncToBackend() after bulk operations is removed; bulk unstar now calls backend.deleteRepository(repoId) per item (fire-and-forget); other bulk flows no longer wait for a full sync.

Sequence Diagram

sequenceDiagram
  participant App as App (React)
  participant autoSync as autoSync Service
  participant BackendAdapter as Backend Adapter
  participant Server as Server API

  App->>autoSync: startAutoSync()
  autoSync->>autoSync: setInterval(syncFromBackend)
  autoSync->>BackendAdapter: syncFromBackend()
  BackendAdapter->>Server: GET /api/repositories
  Server-->>BackendAdapter: repos
  BackendAdapter-->>App: update local store

  Note over App,BackendAdapter: Component-level targeted updates (replacing forceSyncToBackend)
  App->>BackendAdapter: updateRepository(id, fields)
  BackendAdapter->>Server: PATCH /repositories/:id
  Server-->>BackendAdapter: 200 OK

  App->>BackendAdapter: deleteRepository(id)
  BackendAdapter->>Server: DELETE /repositories/:id
  Server-->>BackendAdapter: 204 No Content

Code review estimate

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

  • lostiv/GithubStarsManager#8: related to this PR's server-side AI field writing/analysis flow; may affect how the ai_* fields are written and preserved.
🚥 Pre-merge checks | ✅ 4
✅ Passed checks (4 passed)
Check name Status Explanation
Title check ✅ Passed The PR title fully follows the Conventional Commits format and clearly describes the main change: removing bidirectional auto-sync in favor of the backend as the single source of truth.
Description check ✅ Passed The PR description lists every core change in detail: removing the push logic, adding API methods, replacing 18 forceSyncToBackend call sites, preserving AI fields, fixing a crash, and cleaning up code.
Linked Issues check ✅ Passed Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check ✅ Passed Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch fix/ai
✨ Simplify code
  • Create PR with simplified code
  • Commit simplified code in branch fix/ai

Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai Bot added bug Something isn't working frontend backend labels May 11, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/components/ReleaseTimeline.tsx (1)

503-515: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

Splitting unsubscribe into two writes can produce "partial success".

The flow marks the repository as unsubscribed first, then syncs the release list. If the second step fails, the catch only rolls back the local Zustand state; if the first backend write already succeeded, the next poll will flip the repository back to "unsubscribed". This flow needs a single transactional endpoint, or a compensating write-back on failure.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/components/ReleaseTimeline.tsx` around lines 503 - 515, The current
unsubscribe flow performs two backend calls (backend.updateRepository and
backend.syncReleases) which can leave the system in a partial state if the
second call fails; change this to an atomic/compensating approach: either (A)
use a single backend API that both flips subscribed_to_releases and returns
updated releases (replace the two calls with one
backend.unsubscribeRepository-like call and then update local state from its
response), or (B) if a single API isn't available, on error call
backend.updateRepository(repo.id, { subscribed_to_releases: true }) to revert
the backend change before mutating local state (referencing
backend.updateRepository, backend.syncReleases, updateRepository, and
useAppStore.setState) so backend and local Zustand remain consistent.
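Option (B) from the comment above, a compensating write when the second step fails, might look like the following sketch. `BackendLike` is a hypothetical stand-in for the real adapter, not its actual interface:

```typescript
// Compensating-write pattern: if step two fails, undo step one on the backend
// so the next poll does not propagate a half-applied change.
interface BackendLike {
  updateRepository(id: number, fields: Record<string, unknown>): Promise<void>;
  syncReleases(repoId: number): Promise<void>;
}

async function unsubscribeWithCompensation(
  backend: BackendLike,
  repoId: number,
): Promise<boolean> {
  await backend.updateRepository(repoId, { subscribed_to_releases: false });
  try {
    await backend.syncReleases(repoId);
    return true;
  } catch {
    // Second step failed: flip the backend flag back before touching local state.
    await backend.updateRepository(repoId, { subscribed_to_releases: true });
    return false;
  }
}
```

Note the compensation itself can also fail; a single backend endpoint that does both steps atomically remains the more robust option.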
src/components/RepositoryList.tsx (1)

560-633: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

These bulk operations now only touch the local store and never reach the backend.

The restore / subscribe / unsubscribe / lock-category / unlock-category / bulk-categorize paths all update only Zustand. Since this PR makes the backend the single source of truth, these operations will be overwritten by stale backend values on the next poll, i.e. they are effectively not persisted. Each bulk action needs a matching backend.updateRepository / backend.syncReleases / backend.syncRepositories call, or a bulk endpoint that handles them uniformly.

Also applies to: 857-923, 925-1000, 1014-1061

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/components/RepositoryList.tsx` around lines 560 - 633, handleBulkRestore
currently only mutates the local Zustand store via updateRepository, so these
changes will be overwritten by the backend on the next poll; call the
appropriate backend persistence API (e.g., backend.updateRepository for each
changed repo or a bulk backend.syncRepositories/bulkUpdate endpoint) inside the
hasChanges branch, await its success, then only call updateRepository to update
the local store after the backend confirms success (and handle failures by
pushing to failedRepos and logging); apply the same pattern to the other bulk
handlers referenced (subscribe/unsubscribe/lock-category/unlock-category/bulk
categorize) so each uses backend.updateRepository or a bulk backend API and only
updates Zustand on confirmed persistence.
🧹 Nitpick comments (1)
src/services/autoSync.ts (1)

22-24: 💤 Low value

quickHash uses JSON.stringify, a potential performance concern

For a large repository list, serializing the full payload with JSON.stringify every 5 seconds can become expensive. The current implementation is acceptable, but if data volume grows, consider a more efficient hash algorithm or comparing only key fields/timestamps.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/services/autoSync.ts` around lines 22 - 24, quickHash currently uses
JSON.stringify(data) which can be costly for large arrays called every 5s;
update quickHash (and call sites) to avoid full serialization by either 1)
computing a lightweight stable hash of only the necessary fields/timestamps
(e.g., join key properties or map to an object with only ids/updatedAt values)
or 2) using a faster streaming hash implementation (e.g., a native crypto hash
like createHash('sha256') over a compact canonical representation) so you reduce
work per run; modify the quickHash function to take the minimal data structure
it needs and ensure callers pass only those key fields rather than the full
payload.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@server/src/routes/repositories.ts`:
- Around line 197-225: The current checks treat explicit empty arrays as "not
provided"; change the logic to detect presence of fields using
Object.prototype.hasOwnProperty.call(repo, 'ai_tags') and ...call(repo,
'ai_platforms') and use those for hasIncomingTags/hasIncomingPlatforms (and
include these presence checks in hasAnyIncomingAI), then when the property
exists use JSON.stringify(repo.ai_tags) / JSON.stringify(repo.ai_platforms) even
if empty, otherwise fall back to existing?.ai_tags / existing?.ai_platforms;
update aiTagsJson, aiPlatformsJson, hasAnyIncomingAI, hasIncomingTags, and
hasIncomingPlatforms accordingly.

In `@src/components/CategorySidebar.tsx`:
- Around line 212-217: Before calling deleteCustomCategory(...) and
backend.syncSettings(...), capture a snapshot of
useAppStore.getState().customCategories (e.g., const prev = [...]) so you can
rollback; then call deleteCustomCategory(category.id) and await
backend.syncSettings(...), and in the catch block restore the saved snapshot
into the store (replacing customCategories) before showing the toast (use the
same t(...) message). This mirrors the rollback used for hiding default
categories and ensures local state is reverted on sync failure.

In `@src/components/RepositoryCard.tsx`:
- Around line 581-583: The current code in RepositoryCard calls
backend.deleteRepository(repository.id) fire-and-forget and swallows errors, so
when the backend delete fails the local UI has already removed the card but
polling soon restores it; instead, await the deleteRepository result (await or
then/catch), remove the card from local state and show a success notice (keyed
to repository.id) only after it succeeds; on failure, restore/keep the local
card state and show an error notice or retry button, making sure to handle the
backend.isAvailable === false branch and to log/report errors for debugging.

In `@src/components/RepositoryEditModal.tsx`:
- Around line 379-387: The current flow calls updateRepository(updatedRepo)
before awaiting backend.updateRepository and lacks a try/catch, so a failed save
leaves an incorrect optimistic state and an unhandled rejection; wrap the
backend.updateRepository(repository.id, {...}) call in a try/catch, on success
call onClose(), on failure revert the local change by calling updateRepository
with the original repository snapshot (capture original before the optimistic
update or refetch from backend), surface an error to the user (e.g., set an
error state or show a toast) and ensure the promise rejection is handled to
avoid leaving the modal in an inconsistent state.

In `@src/components/RepositoryList.tsx`:
- Around line 686-689: The loop currently deletes items locally and fires
backend.deleteRepository(repoId).catch(() => {}) which swallows failures and
causes repos to reappear on next sync; update the logic around
successIds/deleteRepository/backend.deleteRepository so you either (a) await
backend.deleteRepository for each repo and only call deleteRepository(repoId)
locally after the backend call succeeds, or (b) collect failed repoIds from
backend.deleteRepository and then roll back the local delete for those ids
(e.g., re-add them) and surface a user-visible error/notification listing failed
repoIds; target the loop using symbols successIds, deleteRepository, and
backend.deleteRepository when implementing this change.

In `@src/components/SubscriptionRepoCard.tsx`:
- Around line 118-120: The backend calls in SubscriptionRepoCard are being
treated as best-effort and errors are swallowed (e.g.,
backend.deleteRepository(existingRepo.id)), causing optimistic UI changes to
“bounce” on the next poll; change the logic so you await the backend call
instead of silently catching, and on failure revert the optimistic state update
and show a retry/error message to the user (or disable the control until
success). Specifically, update the code paths around
backend.deleteRepository(existingRepo.id) and the similar branch at the other
location (lines ~178-180) to (1) await the promise, (2) on catch revert whatever
local state/update you applied when optimistically toggling the card, and (3)
surface an error/toast or re-enable retry so the user can retry the action.

---

Outside diff comments:
In `@src/components/ReleaseTimeline.tsx`:
- Around line 503-515: The current unsubscribe flow performs two backend calls
(backend.updateRepository and backend.syncReleases) which can leave the system
in a partial state if the second call fails; change this to an
atomic/compensating approach: either (A) use a single backend API that both
flips subscribed_to_releases and returns updated releases (replace the two calls
with one backend.unsubscribeRepository-like call and then update local state
from its response), or (B) if a single API isn't available, on error call
backend.updateRepository(repo.id, { subscribed_to_releases: true }) to revert
the backend change before mutating local state (referencing
backend.updateRepository, backend.syncReleases, updateRepository, and
useAppStore.setState) so backend and local Zustand remain consistent.

In `@src/components/RepositoryList.tsx`:
- Around line 560-633: handleBulkRestore currently only mutates the local
Zustand store via updateRepository, so these changes will be overwritten by the
backend on the next poll; call the appropriate backend persistence API (e.g.,
backend.updateRepository for each changed repo or a bulk
backend.syncRepositories/bulkUpdate endpoint) inside the hasChanges branch,
await its success, then only call updateRepository to update the local store
after the backend confirms success (and handle failures by pushing to
failedRepos and logging); apply the same pattern to the other bulk handlers
referenced (subscribe/unsubscribe/lock-category/unlock-category/bulk categorize)
so each uses backend.updateRepository or a bulk backend API and only updates
Zustand on confirmed persistence.

---

Nitpick comments:
In `@src/services/autoSync.ts`:
- Around line 22-24: quickHash currently uses JSON.stringify(data) which can be
costly for large arrays called every 5s; update quickHash (and call sites) to
avoid full serialization by either 1) computing a lightweight stable hash of
only the necessary fields/timestamps (e.g., join key properties or map to an
object with only ids/updatedAt values) or 2) using a faster streaming hash
implementation (e.g., a native crypto hash like createHash('sha256') over a
compact canonical representation) so you reduce work per run; modify the
quickHash function to take the minimal data structure it needs and ensure
callers pass only those key fields rather than the full payload.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro Plus

Run ID: 561f789d-890c-4a24-ba7b-1ca2c51404a7

📥 Commits

Reviewing files that changed from the base of the PR and between 48b7338 and b16a06c.

📒 Files selected for processing (10)
  • server/src/routes/repositories.ts
  • src/App.tsx
  • src/components/CategorySidebar.tsx
  • src/components/ReleaseTimeline.tsx
  • src/components/RepositoryCard.tsx
  • src/components/RepositoryEditModal.tsx
  • src/components/RepositoryList.tsx
  • src/components/SubscriptionRepoCard.tsx
  • src/services/autoSync.ts
  • src/services/backendAdapter.ts
📜 Review details
🧰 Additional context used
📓 Path-based instructions (3)
src/services/**

⚙️ CodeRabbit configuration file

src/services/**: API service layer. During review, focus on: 1. timeout handling for fetch requests (AbortController) 2. error handling and user-friendly error message translation 3. transport security of API keys/tokens 4. request retry and degradation strategy 5. CORS handling for the WebDAV/backend proxy

Files:

  • src/services/backendAdapter.ts
  • src/services/autoSync.ts
src/components/**

⚙️ CodeRabbit configuration file

src/components/**: React component directory (TypeScript + Tailwind CSS). During review, focus on: 1. compatibility with both dark (dark: prefix) and light themes 2. complete responsive layout (Tailwind breakpoints: sm/md/lg/xl) 3. complete prop type definitions (TypeScript interfaces) 4. single responsibility per component 5. accessibility (aria-label, role, keyboard navigation) 6. sensible use of React.memo / useMemo / useCallback

Files:

  • src/components/ReleaseTimeline.tsx
  • src/components/CategorySidebar.tsx
  • src/components/RepositoryCard.tsx
  • src/components/SubscriptionRepoCard.tsx
  • src/components/RepositoryEditModal.tsx
  • src/components/RepositoryList.tsx
server/src/routes/**

⚙️ CodeRabbit configuration file

server/src/routes/**: Express route layer. During review, focus on: 1. input validation (type and range checks on req.body/req.params/req.query) 2. SQL-injection protection (parameterized queries with ? everywhere; no string concatenation) 3. correct transaction use (bulk operations should use db.transaction()) 4. thorough error handling (try-catch plus a unified error response format) 5. correct application of the auth middleware (authMiddleware) 6. a unified JSON response shape: { data/error, code }

Files:

  • server/src/routes/repositories.ts
🔇 Additional comments (5)
src/services/autoSync.ts (3)

175-189: LGTM!

The startAutoSync implementation is clean and clear:

  • Correctly clears any existing timer to avoid duplicates
  • Resets the sync-state flag
  • Returns stopAutoSync as the cleanup function, matching the React useEffect cleanup pattern

36-39: Concurrency control logic is correct

Using the _isSyncingFromBackendActive flag to prevent concurrent pulls is the right approach; it avoids request pile-up within the 5-second polling interval.


44-50: ⚡ Quick win

Timeout handling is already centralized in the backendAdapter layer

All fetch requests already handle timeouts via backendAdapter.fetchWithTimeout(). The five methods fetchRepositories, fetchReleases, fetchAIConfigs, fetchWebDAVConfigs, and fetchSettings all use it, with a built-in AbortController and a 30-second default timeout. There is no need to duplicate this in autoSync.ts; the service-layer timeout guideline is already satisfied.

> Likely an incorrect or invalid review comment.
src/App.tsx (2)

80-110: LGTM! Lifecycle management is correct

The async initialization and cleanup logic is implemented correctly:

  • The cancelled flag prevents state updates after unmount
  • unsubscribe is called correctly during cleanup
  • void backendAnalysis.resumeBatchAnalysis() explicitly signals that the returned Promise is ignored

This integrates well with the new startAutoSync API.


17-17: The import update matches the new API

The stopAutoSync import was removed because startAutoSync now returns its cleanup function directly. This simplifies usage and follows the common React subscription pattern.

Comment on lines +197 to +225
        // Preserve existing AI data field-by-field when incoming value is empty
        const hasAnyIncomingAI =
          (repo.ai_summary != null && repo.ai_summary !== '') ||
          (Array.isArray(repo.ai_tags) && repo.ai_tags.length > 0) ||
          (Array.isArray(repo.ai_platforms) && repo.ai_platforms.length > 0) ||
          repo.analyzed_at != null ||
          repo.analysis_failed === true ||
          repo.analysis_failed === 1;

        const hasIncomingSummary = repo.ai_summary != null && repo.ai_summary !== '';
        const hasIncomingTags = Array.isArray(repo.ai_tags) && repo.ai_tags.length > 0;
        const hasIncomingPlatforms = Array.isArray(repo.ai_platforms) && repo.ai_platforms.length > 0;
        const hasIncomingAnalyzedAt = repo.analyzed_at != null;

        const aiSummary = hasIncomingSummary
          ? repo.ai_summary
          : (existing?.ai_summary ?? null);
        const aiTagsJson = hasIncomingTags
          ? JSON.stringify(repo.ai_tags)
          : (existing?.ai_tags ?? '[]');
        const aiPlatformsJson = hasIncomingPlatforms
          ? JSON.stringify(repo.ai_platforms)
          : (existing?.ai_platforms ?? '[]');
        const analyzedAt = hasIncomingAnalyzedAt
          ? repo.analyzed_at
          : (existing?.analyzed_at ?? null);
        const analysisFailed = hasAnyIncomingAI
          ? ((repo.analysis_failed === true || repo.analysis_failed === 1) ? 1 : 0)
          : (existing?.analysis_failed ?? 0);

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

An explicit empty array is misread as "AI field not provided".

The code uses length > 0 to decide whether ai_tags / ai_platforms carry an incoming value, which treats a legitimate [] as "not provided" and keeps the old database value. As a result, if a real analysis returns empty tags or platforms, the backend still preserves the stale analysis results. Distinguish "omitted" from "explicitly empty" by checking field presence, e.g. Object.prototype.hasOwnProperty.call(repo, 'ai_tags').

♻️ A possible fix direction
-        const hasIncomingTags = Array.isArray(repo.ai_tags) && repo.ai_tags.length > 0;
-        const hasIncomingPlatforms = Array.isArray(repo.ai_platforms) && repo.ai_platforms.length > 0;
+        const hasIncomingTags = Object.prototype.hasOwnProperty.call(repo, 'ai_tags');
+        const hasIncomingPlatforms = Object.prototype.hasOwnProperty.call(repo, 'ai_platforms');

         const aiTagsJson = hasIncomingTags
-          ? JSON.stringify(repo.ai_tags)
+          ? JSON.stringify(Array.isArray(repo.ai_tags) ? repo.ai_tags : [])
           : (existing?.ai_tags ?? '[]');
         const aiPlatformsJson = hasIncomingPlatforms
-          ? JSON.stringify(repo.ai_platforms)
+          ? JSON.stringify(Array.isArray(repo.ai_platforms) ? repo.ai_platforms : [])
           : (existing?.ai_platforms ?? '[]');
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@server/src/routes/repositories.ts` around lines 197 - 225, The current checks
treat explicit empty arrays as "not provided"; change the logic to detect
presence of fields using Object.prototype.hasOwnProperty.call(repo, 'ai_tags')
and ...call(repo, 'ai_platforms') and use those for
hasIncomingTags/hasIncomingPlatforms (and include these presence checks in
hasAnyIncomingAI), then when the property exists use
JSON.stringify(repo.ai_tags) / JSON.stringify(repo.ai_platforms) even if empty,
otherwise fall back to existing?.ai_tags / existing?.ai_platforms; update
aiTagsJson, aiPlatformsJson, hasAnyIncomingAI, hasIncomingTags, and
hasIncomingPlatforms accordingly.

Comment thread src/components/CategorySidebar.tsx
Comment on lines +581 to +583
if (backend.isAvailable) {
backend.deleteRepository(repository.id).catch(() => { /* non-critical */ });
}

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Don't swallow backend delete failures.

backend.deleteRepository(...) is treated as fire-and-forget with errors ignored; if the backend delete fails, the card disappears locally but the next 5-second poll adds it back from backend data, and the user sees the repository bounce back right after an "unstar succeeded" notice. At minimum, show the success notice only after the backend delete completes, or roll back the local delete on failure and prompt a retry.
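The suggested fix (await the backend delete and only then drop the card locally) might be sketched as follows; `deleteRemote`, `removeLocal`, and `notify` are hypothetical stand-ins for backend.deleteRepository, the store removal, and the toast helper:

```typescript
// Confirm-then-commit unstar: the card only leaves local state after the
// backend has acknowledged the delete, so polling cannot resurrect it.
async function unstarRepository(
  id: number,
  deleteRemote: (id: number) => Promise<void>, // stand-in for backend.deleteRepository
  removeLocal: (id: number) => void,
  notify: (msg: string) => void,
): Promise<boolean> {
  try {
    await deleteRemote(id);
    removeLocal(id); // remove only after the backend confirms
    notify('unstarred');
    return true;
  } catch {
    notify('delete failed, please retry'); // keep the card so the user can retry
    return false;
  }
}
```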

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/components/RepositoryCard.tsx` around lines 581 - 583, the current code
calls backend.deleteRepository(repository.id) fire-and-forget and swallows
errors, so when the backend delete fails the local UI has already removed the
card but polling soon restores it; instead, await the deleteRepository result
(await or then/catch), remove the card from local state and show a success
notice (keyed to repository.id) only after it succeeds; on failure restore/keep
the local card state and show an error notice or retry button, handling the
backend.isAvailable === false branch and logging/reporting errors for debugging.

Comment thread src/components/RepositoryEditModal.tsx Outdated
Comment on lines 686 to 689
for (const repoId of successIds) {
deleteRepository(repoId);
backend.deleteRepository(repoId).catch(() => {});
}

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Bulk unstar should not swallow backend delete failures either.

As in the single-card flow, when a backend delete fails the item has already been removed from the local list, but the next poll pulls the repository back. At minimum, collect the failed items and notify the user, or roll back the corresponding repositories on failure.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/components/RepositoryList.tsx` around lines 686 - 689, The loop currently
deletes items locally and fires backend.deleteRepository(repoId).catch(() => {})
which swallows failures and causes repos to reappear on next sync; update the
logic around successIds/deleteRepository/backend.deleteRepository so you either
(a) await backend.deleteRepository for each repo and only call
deleteRepository(repoId) locally after the backend call succeeds, or (b) collect
failed repoIds from backend.deleteRepository and then roll back the local delete
for those ids (e.g., re-add them) and surface a user-visible error/notification
listing failed repoIds; target the loop using symbols successIds,
deleteRepository, and backend.deleteRepository when implementing this change.

Comment on lines +118 to +120
if (backend.isAvailable) {
backend.deleteRepository(existingRepo.id).catch(() => {});
}

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Star/Unstar backend writes must not fail silently.

Both branches treat the backend sync as best-effort. Whenever the request fails, the card toggles according to local state first, but the next poll overwrites it with stale backend data, so Star/Unstar appears to "bounce". At minimum, roll back the optimistic state and prompt a retry on failure, or confirm the action only after the backend write succeeds.

Also applies to: 178-180

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/components/SubscriptionRepoCard.tsx` around lines 118 - 120, The backend
calls in SubscriptionRepoCard are being treated as best-effort and errors are
swallowed (e.g., backend.deleteRepository(existingRepo.id)), causing optimistic
UI changes to “bounce” on the next poll; change the logic so you await the
backend call instead of silently catching, and on failure revert the optimistic
state update and show a retry/error message to the user (or disable the control
until success). Specifically, update the code paths around
backend.deleteRepository(existingRepo.id) and the similar branch at the other
location (lines ~178-180) to (1) await the promise, (2) on catch revert whatever
local state/update you applied when optimistically toggling the card, and (3)
surface an error/toast or re-enable retry so the user can retry the action.
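The optimistic-toggle-with-revert alternative described above can be sketched like this; `setSubscribed` and `persist` are hypothetical stand-ins for the local Zustand update and the backend write:

```typescript
// Optimistic toggle with revert: flip the UI immediately, then undo the flip
// if the backend write fails, so the card does not "bounce" on the next poll.
async function toggleStarOptimistic(
  next: boolean,
  setSubscribed: (v: boolean) => void, // stand-in for the local store update
  persist: (v: boolean) => Promise<void>, // stand-in for the backend call
): Promise<boolean> {
  setSubscribed(next); // optimistic: UI responds instantly
  try {
    await persist(next);
    return true;
  } catch {
    setSubscribed(!next); // revert the optimistic change on failure
    return false;
  }
}
```

This keeps the instant feedback the author wants while still leaving local and backend state consistent when the write fails; the caller would surface a retry toast on a false return.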

@lostiv
Owner Author

lostiv commented May 11, 2026

@coderabbitai

Replies to each review comment:

Will fix:

  1. CategorySidebar.tsx handleDeleteCategory missing rollback: agreed; it should roll back local state the same way handleHideDefaultCategory does.

  2. RepositoryEditModal.tsx optimistic update without try/catch: agreed; add try/catch and roll back local state on failure.

  3. ReleaseTimeline.tsx two-step unsubscribe write: agreed; the catch should compensate by calling backend.updateRepository to set subscribed_to_releases back to true.

Intentionally not fixing (design tradeoffs):

  1. RepositoryList.tsx bulk operations not syncing to the backend: the _lastHash comparison in the polling loop prevents local changes from being immediately overwritten; the backend only overwrites when its data has actually changed. In a single-device scenario these operations persist fine until a refresh. Adding N backend calls per bulk operation would noticeably slow the UI. The plan is a bulk PATCH endpoint that handles this uniformly.

  2. Fire-and-forget error swallowing in RepositoryCard/SubscriptionRepoCard/RepositoryList: intentional best-effort design. If a backend delete/update fails, the optimistic local update gives immediate UI feedback, and the next 5s poll automatically restores the correct state from the backend. Blocking the UI while waiting for backend confirmation is a worse user experience.

  3. server/routes/repositories.ts empty arrays vs hasOwnProperty: the frontend never sends an empty array to clear AI tags; the AI analysis service either omits these fields or includes non-empty arrays. The Array.isArray + length > 0 logic matches the actual data flow.

  4. quickHash JSON.stringify performance: not a bottleneck at current data volumes; no need to optimize prematurely.

@coderabbitai

coderabbitai Bot commented May 11, 2026

Tip

For best results, initiate chat on the files or code changes.

@lostiv Thanks for the detailed explanation! Confirming each point:

The three fixes: fully agreed; these will meaningfully improve robustness:

  1. handleDeleteCategory gains a local rollback, consistent with handleHideDefaultCategory.
  2. RepositoryEditModal adds try/catch plus rollback on failure, preventing stale optimistic state.
  3. ReleaseTimeline compensates in catch with backend.updateRepository({ subscribed_to_releases: true }), keeping state consistent when the two-step write fails.

The four intentional non-fixes: design intent understood and recorded in the knowledge base; future reviews will not flag them again:
4. Bulk-operation performance tradeoff: _lastHash protection plus a future bulk PATCH endpoint is a reasonable path.
5. Fire-and-forget: the 5s poll as an eventual-consistency fallback, with optimistic updates prioritizing UX, is a sound design decision.
6. AI empty-array check: the frontend data flow guarantees empty arrays are never sent to clear AI tags, so the logic matches the real scenario.
7. quickHash performance: not a bottleneck at current data volumes; no need to optimize ahead of time.


✏️ Learnings added
Learnt from: lostiv
URL: https://github.com/lostiv/GithubStarsManager/pull/12

Timestamp: 2026-05-11T09:28:52.227Z
Learning: In GithubStarsManager (React 18 + TypeScript + Zustand frontend, Express + better-sqlite3 backend), bulk operations in RepositoryList.tsx intentionally do NOT call backend APIs after each action. The autoSync polling loop uses _lastHash comparison to prevent local changes from being overwritten by stale backend data. This is a deliberate performance tradeoff for single-device scenarios; a batch PATCH endpoint is planned for future improvement.

Learnt from: lostiv
URL: https://github.com/lostiv/GithubStarsManager/pull/12

Timestamp: 2026-05-11T09:28:52.227Z
Learning: In GithubStarsManager, fire-and-forget backend calls (e.g., backend.deleteRepository, backend.updateRepository) in RepositoryCard.tsx, SubscriptionRepoCard.tsx, and RepositoryList.tsx are intentional best-effort design. The 5-second polling loop acts as an eventual-consistency fallback to restore correct state if backend calls fail. Do not flag these as missing error handling.

Learnt from: lostiv
URL: https://github.com/lostiv/GithubStarsManager/pull/12

Timestamp: 2026-05-11T09:28:52.227Z
Learning: In GithubStarsManager server/src/routes/repositories.ts, the AI field preservation logic uses Array.isArray + length > 0 (rather than hasOwnProperty) because the frontend never sends empty arrays to clear AI tags — AI analysis either omits those fields or includes arrays with content. This logic matches the actual data flow.

Learnt from: lostiv
URL: https://github.com/lostiv/GithubStarsManager/pull/12

Timestamp: 2026-05-11T09:28:52.227Z
Learning: In GithubStarsManager src/services/autoSync.ts, the quickHash function uses JSON.stringify for hashing. This is intentional — at current data volumes it is not a bottleneck and premature optimization is not warranted.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.

lostiv added 2 commits May 11, 2026 17:31

- CategorySidebar: roll back local state when deleteCustomCategory fails
- RepositoryEditModal: roll back and show a toast when backend.updateRepository fails
- ReleaseTimeline: compensate by restoring the backend subscribed_to_releases flag when the two-step unsubscribe write fails
@lostiv lostiv merged commit 193b718 into main May 11, 2026
1 check passed
@lostiv lostiv deleted the fix/ai branch May 11, 2026 09:38

Labels

backend bug Something isn't working frontend

Projects

None yet


1 participant