
Optimize P0-P2 stability, performance, and packaging#58

Merged
ShellMonster merged 48 commits into main from feature/p0-p2-optimizations
Apr 28, 2026
Conversation

@ShellMonster ShellMonster commented Apr 28, 2026

User description

Summary

  • Complete P0-P2 hardening across CI, backend limits/timeouts, provider logging, worker behavior, Docker health checks, and Tauri asset/CSP boundaries.
  • Improve desktop performance and maintainability with template grid virtualization, unified template scrolling, narrower Zustand subscriptions, slimmer history cache, lazy locale loading, and component/API splits.
  • Bump app metadata to v2.8.4 and add a local Tauri packaging script that builds the Go sidecar while skipping updater signing on developer machines.

Verification

  • cd backend && go test ./... && go vet ./...
  • cd desktop && npm run type-check && npm run build
  • cd frontend && npm run type-check && npm run build
  • docker compose config
  • cd desktop && PATH="$HOME/.cargo/bin:$PATH" npm run tauri:build:local

Notes

  • Tag v2.8.4 has been pushed.
  • Local macOS DMG generated at desktop/src-tauri/target/release/bundle/dmg/大香蕉 AI_2.8.4_aarch64.dmg.

CodeAnt-AI Description

Tighten app stability, speed up template browsing, and make image handling safer

What Changed

  • The app now uses a lighter template browser that only renders visible cards, keeping search and filters responsive even with 900+ templates
  • Reference images are limited to 10 files, 20MB each, and 80MB total, with clearer errors when uploads or local image picks go over the limit
  • Provider errors and logs now show safe, shortened previews instead of full response bodies or secret data
  • Desktop startup and language switching avoid loading every language file up front, and the app now switches languages after the needed file is loaded
  • The backend and Docker setup now use clearer health checks and connection settings so long-running generation streams stay connected and deployment checks catch frontend or Nginx failures
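The reference-image limits described above (10 files, 20MB each, 80MB total) can be sketched as a single validation pass. This is a minimal illustration, not the actual desktop/backend code; the function name and error messages are hypothetical.

```typescript
// Limits as stated in the PR description.
const MAX_REF_FILES = 10;
const MAX_REF_FILE_BYTES = 20 * 1024 * 1024;
const MAX_REF_TOTAL_BYTES = 80 * 1024 * 1024;

// Returns null when the batch is acceptable, or a human-readable
// reason for rejection. `sizes` holds each candidate file's byte size.
function validateRefImages(sizes: number[]): string | null {
  if (sizes.length > MAX_REF_FILES) {
    return `at most ${MAX_REF_FILES} reference images are allowed`;
  }
  if (sizes.some((s) => s > MAX_REF_FILE_BYTES)) {
    return "each reference image must be 20MB or less";
  }
  const total = sizes.reduce((a, b) => a + b, 0);
  if (total > MAX_REF_TOTAL_BYTES) {
    return "reference images must total 80MB or less";
  }
  return null;
}
```

Checking all three limits in one pass lets the UI surface the most specific error first (count, then per-file size, then total), which matches the "clearer errors" goal above.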

Impact

✅ Faster template browsing
✅ Fewer upload failures from oversized reference images
✅ Clearer provider error messages


Details

💡 Usage Guide

Checking Your Pull Request

Every pull request is automatically scanned for security issues, infrastructure misconfigurations, and common code problems, helping ensure your changes are solid before they land.

Talking to CodeAnt AI

Got a question or need a hand with something in your pull request? You can easily get in touch with CodeAnt AI right here. Just type the following in a comment on your pull request, and replace "Your question here" with whatever you want to ask:

@codeant-ai ask: Your question here

This lets you have a chat with CodeAnt AI about your pull request, making it easier to understand and improve your code.

Example

@codeant-ai ask: Can you suggest a safer alternative to storing this secret?

Preserve Org Learnings with CodeAnt

You can record team preferences so CodeAnt AI applies them in future reviews. Reply directly to the specific CodeAnt AI suggestion (in the same thread) and replace "Your feedback here" with your input:

@codeant-ai: Your feedback here

This helps CodeAnt AI learn and adapt to your team's coding style and standards.

Example

@codeant-ai: Do not flag unused imports.

Retrigger review

Ask CodeAnt AI to review the PR again by typing:

@codeant-ai: review

Check Your Repository Health

To analyze the health of your code repository, visit our dashboard at https://app.codeant.ai. This tool helps you identify potential issues and areas for improvement in your codebase, ensuring your repository maintains high standards of code health.

ShellMonster and others added 29 commits April 28, 2026 01:37
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
Also adds a local Tauri packaging script that builds the sidecar and skips updater signing for developer machines without release secrets.


codeant-ai Bot commented Apr 28, 2026

CodeAnt AI is reviewing your PR.


Thanks for using CodeAnt! 🎉

We're free for open-source projects. If you're enjoying it, help us grow by sharing.



codeant-ai Bot commented Apr 28, 2026

CodeAnt AI finished reviewing your PR.

ShellMonster and others added 19 commits April 28, 2026 16:37
@ShellMonster ShellMonster merged commit e594c22 into main Apr 28, 2026
5 checks passed
@ShellMonster ShellMonster deleted the feature/p0-p2-optimizations branch April 28, 2026 11:41

codeant-ai Bot commented May 11, 2026

CodeAnt AI is running the review.




codeant-ai Bot commented May 11, 2026

Sequence Diagram

This PR hardens the image generation pipeline by enforcing strict reference image limits in both desktop and backend, validating local file paths, and wrapping provider calls with connection timeouts, panic recovery, and redacted response logging.

sequenceDiagram
    participant User
    participant Desktop as Desktop app
    participant API as Backend API
    participant Worker
    participant Provider

    User->>Desktop: Select or paste reference images
    Desktop->>Desktop: Enforce count and size limits, dedupe and compress
    Desktop->>API: Submit generate with images request with refImages or refPaths
    API->>API: Parse multipart and validate reference images and local paths
    API->>Worker: Enqueue generation task with validated image bytes
    Worker->>Provider: Call provider with task context and prompt
    Provider-->>Worker: Return generation result with redacted response preview
    Worker->>API: Save task result or record timeout or validation error
    API-->>Desktop: Return generation status and image metadata

Generated by CodeAnt AI

typeof window !== 'undefined'
? window.innerWidth || document.documentElement.clientWidth
: width;
const columnCount = width > 0 ? getTemplateColumnCount(width, viewportWidth) : 2;


Suggestion: Replace the hardcoded fallback column count with the existing grid minimum constant so all grid sizing logic is driven by top-level constants. [custom_rule]

Severity Level: Minor ⚠️

Why it matters? 🤔

The file defines TEMPLATE_GRID_COLUMNS.MIN as the shared minimum column count, but this line hardcodes 2 instead of using that constant. The suggestion accurately identifies a real violation of the custom rule to avoid inline numeric literals in grid sizing logic.
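A minimal sketch of the suggested change, assuming the helper signature shown in the excerpt above. The breakpoint thresholds inside `getTemplateColumnCount` are illustrative; only the constants' values come from the review.

```typescript
// Constants as described in the review comment.
const TEMPLATE_GRID_COLUMNS = { MIN: 2, MEDIUM: 3, LARGE: 4 } as const;

// Assumed shape of the existing helper; the real breakpoint logic
// in TemplateMarketDrawer.tsx may differ.
function getTemplateColumnCount(width: number, viewportWidth: number): number {
  const basis = Math.min(width, viewportWidth);
  if (basis >= 1280) return TEMPLATE_GRID_COLUMNS.LARGE;
  if (basis >= 900) return TEMPLATE_GRID_COLUMNS.MEDIUM;
  return TEMPLATE_GRID_COLUMNS.MIN;
}

// The fallback now reads the shared constant instead of a literal 2.
const resolveColumnCount = (width: number, viewportWidth: number): number =>
  width > 0 ? getTemplateColumnCount(width, viewportWidth) : TEMPLATE_GRID_COLUMNS.MIN;
```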


File: desktop/src/components/TemplateMarket/TemplateMarketDrawer.tsx, line 1087

Comment on lines +172 to +173
// SSE task streams must stay alive through long generations; a global write timeout would sever the connection.
WriteTimeout: 0,


Suggestion: Setting the server-wide write timeout to 0 disables response write deadlines for every endpoint, not just SSE. A slow client can keep connections open indefinitely and consume goroutines/file descriptors, which creates a denial-of-service risk under load. Keep a bounded global WriteTimeout and handle long-lived SSE streams with route-specific behavior instead of disabling write timeouts globally. [security]

Severity Level: Critical 🚨
- ❌ Backend HTTP server vulnerable to slow client resource exhaustion.
- ⚠️ Non-SSE endpoints can be held open indefinitely.
Steps of Reproduction ✅
1. Start the backend HTTP server by running the binary that calls `main()` in
`backend/cmd/server/main.go:178`, which constructs the `http.Server` via `newHTTPServer`
at `backend/cmd/server/main.go:166–175` and assigns it to `srv` at line 367 with
`WriteTimeout: 0`.

2. From a client, send a request to any normal JSON/API endpoint served by the router `r`
passed into `newHTTPServer` at `backend/cmd/server/main.go:367` (for example a history or
template endpoint; handlers are mounted on `r` earlier in this file) and have the client
read the response body extremely slowly (e.g., throttling reads to 1 byte every 30–60
seconds).

3. The connection passes the `ReadHeaderTimeout` and `ReadTimeout` checks (lines 170–171),
the handler writes the response, and because `WriteTimeout` is explicitly set to `0` at
`backend/cmd/server/main.go:173`, the `http.Server` does not enforce any write deadline;
the goroutine remains blocked on socket writes to the slow client indefinitely.

4. Repeat step 2 with many concurrent slow clients (hundreds or thousands of connections);
each connection holds open a goroutine and file descriptor without timeout, eventually
exhausting process resources and causing new, legitimate requests to the backend to stall
or fail (effectively a denial-of-service condition).


File: backend/cmd/server/main.go, lines 172-173

const incomingPage = typeof incoming.page === 'number' && incoming.page > 0 ? incoming.page : fallbackState.page;
const page = Math.min(incomingPage, pageFromItems);
const lastLoadedAt = typeof incoming.lastLoadedAt === 'number' ? incoming.lastLoadedAt : fallbackState.lastLoadedAt;
const hasMore = typeof incoming.hasMore === 'boolean' ? items.length < total || incoming.hasMore : items.length < total;


Suggestion: The hasMore recomputation can remain true even when cached items already cover the known total, because it ORs with the previously persisted hasMore flag. If a stale cache saved hasMore: true, the UI can keep trying to load more pages forever and trigger unnecessary/repeated fetches. Recompute hasMore deterministically from current items and total (or trusted API metadata) instead of preserving stale truthy values. [logic error]

Severity Level: Major ⚠️
- ⚠️ History view keeps requesting pages after data exhausted.
- ⚠️ Unnecessary network calls increase backend load and latency.
Steps of Reproduction ✅
1. Open the history view in the desktop app so that `useHistoryStore` (defined in
`desktop/src/store/historyStore.ts:165` and persisted under the key `'history-cache'` at
line 437) loads data until `items.length` reaches `total`, but due to prior logic the
persisted state ends up with `items.length === total` and `hasMore === true`, which is
then written to localStorage.

2. Restart the desktop app so Zustand rehydrates the `'history-cache'` slice; the persist
config at `desktop/src/store/historyStore.ts:437–455` calls `migrate` and `merge`, which
both invoke `normalizePersistedHistoryState(persistedState, currentState)` and compute
`hasMore` inside that function at line 73 as `items.length < total || incoming.hasMore`.

3. On rehydration, with `items.length === total` and `incoming.hasMore === true` from the
stale cache, the expression at `desktop/src/store/historyStore.ts:73` returns `true`, so
the merged in-memory state has `hasMore: true` even though the number of cached items
already equals the known `total`.

4. Any UI component that uses `useHistoryStore` to implement infinite scroll for history
(calling its `loadMore`/`loadHistory` actions when `hasMore` is true) will keep attempting
to fetch additional pages after the dataset is exhausted; the backend returns empty pages,
but because `hasMore` was derived from `items.length < total || incoming.hasMore`, the UI
can continue to believe more data exists until another state update explicitly flips
`hasMore` to `false`, causing unnecessary repeated network requests.
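A deterministic recomputation could look like the sketch below. Field names follow the review comment, not the actual store code, and the interface is an assumption about the persisted slice's shape.

```typescript
// Assumed shape of the rehydrated history slice.
interface PersistedHistorySlice {
  items: unknown[];
  total: number;
  hasMore?: boolean; // possibly stale persisted flag
}

// Ignore the persisted flag entirely: when the cache already holds
// `total` items there is nothing more to fetch, regardless of what
// an older session wrote to localStorage.
function recomputeHasMore(slice: PersistedHistorySlice): boolean {
  return slice.items.length < slice.total;
}
```

Dropping the `|| incoming.hasMore` term is the whole fix: the stale truthy flag can no longer override what the cached data itself proves.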


File: desktop/src/store/historyStore.ts, line 73

Comment on lines +112 to +114
{activeFilters.map((filter) => (
<ActiveFilterChip key={filter.label} label={filter.label} onClear={filter.onClear} />
))}


Suggestion: Using filter.label as the React key is unsafe because labels are not guaranteed to be unique across active filters (for example, two filters can share the same display text after formatting). Duplicate keys will cause React to reuse the wrong component instance, so clearing one chip can remove or visually update the wrong chip. Use a stable unique key (such as a filter id/type+value composite) instead. [missing react key]

Severity Level: Major ⚠️
- ⚠️ Template filter chips can clear the wrong filter.
- ⚠️ Users see inconsistent active filter chip state.
Steps of Reproduction ✅
1. Render the `TemplateMarketFilters` component exported from
`desktop/src/components/TemplateMarket/TemplateMarketFilters.tsx` (lines 73-91) in a
parent React component, passing an `activeFilters` array with at least two entries that
share the same `label` string but have different `onClear` callbacks (type defined at
lines 12-15).

2. During render, `TemplateMarketFilters` maps over `activeFilters` at lines 112-114 and
renders `<ActiveFilterChip>` elements using `key={filter.label}` and
`label={filter.label}`, so both chips get the same React `key` even though their `onClear`
functions differ.

3. From the parent component, remove one of the duplicate-label filters from the
`activeFilters` array (for example, toggle off a filter) while leaving the other in place,
causing React to reconcile the list of chips using the non-unique `label` keys at line
113.

4. Observe that React reuses the wrong chip instance: the remaining chip may keep the
`onClear` callback or visual state from the removed filter, so clicking the chip's clear
button (handled via `onClick={onClear}` in `ActiveFilterChip` at lines 37-48) clears the
wrong filter or fails to clear the intended one.
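A composite-key sketch of the suggested fix. The `type` and `value` fields are assumptions; the real `ActiveFilter` shape in TemplateMarketFilters.tsx only demonstrably exposes `label` and `onClear`, so the actual fix would reuse whatever identifying fields the filter model already carries.

```typescript
// Hypothetical filter shape with identifying fields added.
interface ActiveFilter {
  type: string;  // e.g. "category" or "tag" (assumed)
  value: string; // the raw filter value before label formatting
  label: string;
  onClear: () => void;
}

// Stable key that stays unique even when two filters format to the
// same display label.
const filterKey = (filter: ActiveFilter): string => `${filter.type}:${filter.value}`;
```

In the JSX, `key={filterKey(filter)}` would then replace `key={filter.label}`, so React reconciles each chip against the filter it actually represents.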


File: desktop/src/components/TemplateMarket/TemplateMarketFilters.tsx, lines 112-114

Comment on lines +77 to +81
const TEMPLATE_GRID_COLUMNS = {
MIN: 2,
MEDIUM: 3,
LARGE: 4
} as const;


Suggestion: The grid logic enforces a minimum of 2 columns and a minimum card width of 180px, which requires at least 380px including gap; on narrower containers this overflows horizontally and cards become clipped/unreachable. Allow a 1-column fallback for narrow widths (or reduce min card width responsively) so layout remains usable on small viewports. [css layout issue]

Severity Level: Major ⚠️
- ⚠️ Template cards overflow and clip on narrow drawer widths.
- ⚠️ Second-column templates become partially or fully unusable.
Steps of Reproduction ✅
1. Render `TemplateMarketDrawer` (exported from
`desktop/src/components/TemplateMarket/TemplateMarketDrawer.tsx:1142`) in a narrow window
such that the inner scroll container with `ref={listRef}` (lines 1565–1572) has
`clientWidth < 380` pixels.

2. Ensure there are templates to render so `filteredTemplates` (computed from filters
around lines 1220–1230) is non-empty and passed into `VirtualTemplateGrid` (invocation
around lines 1638–1650, implementation at 1041–1139).

3. Observe that `VirtualTemplateGrid` computes `columnCount` using
`getTemplateColumnCount`, which enforces `TEMPLATE_GRID_COLUMNS.MIN = 2` (constant at
lines 77–81) and uses `TEMPLATE_GRID_MIN_CARD_WIDTH = 180` and `TEMPLATE_GRID_GAP = 20`
(lines 74–76), requiring at least `2 * 180 + 1 * 20 = 380` pixels for two columns.

4. When `containerWidth < 380`, the computed `columnWidth` still clamps to 180 and the
second column is positioned at `transform: translate(columnIndex * (columnWidth +
TEMPLATE_GRID_GAP), ...)`, so part or all of the second column overflows horizontally;
because the list wrapper only has `overflow-y-auto` (lines 1565–1572) and no horizontal
scrolling, the right side of the cards in the second column is clipped and unreachable on
small viewports.
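A 1-column fallback can be sketched with the constants the review cites (180px minimum card width, 20px gap); the function name is illustrative.

```typescript
const MIN_CARD_WIDTH = 180; // TEMPLATE_GRID_MIN_CARD_WIDTH in the review
const GRID_GAP = 20;        // TEMPLATE_GRID_GAP in the review

// How many 180px cards (plus gaps) actually fit, never less than 1.
// n columns need n * 180 + (n - 1) * 20 px, i.e. n * 200 - 20.
function responsiveColumnCount(containerWidth: number): number {
  const fits = Math.floor((containerWidth + GRID_GAP) / (MIN_CARD_WIDTH + GRID_GAP));
  return Math.max(1, fits); // 1-column fallback for narrow drawers
}
```

With this shape a 300px container gets one full-width column instead of a clipped second column, while anything 380px and up keeps the current two-plus-column behavior.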


File: desktop/src/components/TemplateMarket/TemplateMarketDrawer.tsx, lines 77-81

Comment on lines +1344 to +1347
requestAnimationFrame(() => {
container.scrollTop = top;
setListScrollTop(top);
});


Suggestion: The restored scroll state is written back from scrollTopRef without reading the browser-clamped value after container.scrollTop assignment. When filters reduce the result set, this stale oversized scroll position can make virtualization compute a start row past the end row and render an empty grid. Update the state from container.scrollTop after assignment (or clamp against current content height) so virtual rows are computed from the real scroll position. [logic error]

Severity Level: Major ⚠️
- ❌ Template grid can appear empty after changing filters.
- ⚠️ Users may think no templates match their filters.
Steps of Reproduction ✅
1. Render `TemplateMarketDrawer` (exported at
`desktop/src/components/TemplateMarket/TemplateMarketDrawer.tsx:1142`) from any parent
component so that the drawer opens and `isOpen` is true.

2. Scroll far down in the template list so the scroll handler on the list container (`div`
with `ref={listRef}` and `onScroll` handler around lines 1565–1572) sets a large value
into both `scrollTopRef.current` and the `listScrollTop` state.

3. Apply filters (search and filter state computed with `filteredTemplates` in the memo
around lines 1220–1230) that dramatically reduce `filteredTemplates.length` while keeping
the drawer open (`isOpen` true, `isDormant` false), so the templates list becomes much
shorter.

4. When `filteredTemplates.length` changes, the scroll-restore `useLayoutEffect` (around
lines 1338–1347) runs and reads the stale large `top = scrollTopRef.current`, then calls
`requestAnimationFrame(() => { container.scrollTop = top; setListScrollTop(top); })`. If
`top` is now larger than the actual scrollable height, the browser clamps
`container.scrollTop` but `listScrollTop` remains the oversized `top`, causing
`VirtualTemplateGrid` (lines 1041–1139) to compute `startRow > rowCount - 1` and render no
`visibleCells`, so the template grid appears empty even though `filteredTemplates.length >
0` until the user manually scrolls to correct the state.
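The clamp the review asks for can be isolated into a pure helper, sketched below; names are illustrative. Inside the `requestAnimationFrame` callback, the simplest fix is to assign `container.scrollTop = top` first and then call `setListScrollTop(container.scrollTop)`, but an explicit clamp makes the bound testable:

```typescript
// Clamp a saved scroll offset against the current content metrics so
// virtualization never computes a start row past the last row.
function clampScrollTop(
  savedTop: number,
  scrollHeight: number,
  clientHeight: number,
): number {
  const maxTop = Math.max(0, scrollHeight - clientHeight);
  return Math.min(Math.max(0, savedTop), maxTop);
}
```

Either way, the React state ends up mirroring the browser-clamped position rather than the stale pre-filter offset, so the grid renders visible cells immediately after filtering.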


File: desktop/src/components/TemplateMarket/TemplateMarketDrawer.tsx, lines 1344-1347

) return;

// Fallback: in packaged Tauri builds, Web ClipboardData may not expose image data, so try a native read
void tryPasteFromTauriClipboard();


Suggestion: The Tauri clipboard fallback is triggered in both the component paste handler and the global capture listener, so a single paste event in the reference-image area can invoke native clipboard reads twice concurrently. This can cause duplicate add attempts and inconsistent UX; centralize fallback invocation to one path per event. [race condition]

Severity Level: Major ⚠️
- ⚠️ Tauri users may see duplicate reference-image entries.
- ⚠️ Duplicate success toasts degrade paste UX consistency.
- ⚠️ Redundant native clipboard reads waste desktop resources.
Steps of Reproduction ✅
1. Run the Tauri desktop build of the app so that `window.__TAURI_INTERNALS__` is defined
and the `useReferenceImagePaste` hook in
`desktop/src/components/ConfigPanel/useReferenceImagePaste.ts` is mounted, registering
both the `handlePaste` callback (lines 128–162) on the reference-image component and the
global `window` `paste` listener in `useEffect` (lines 165–197).

2. In a Tauri environment where Web `ClipboardData` does not expose image files (the
scenario described in the comment at line 160), copy an image into the system clipboard
and then paste while the focus is on the reference-image area that uses `handlePaste`.

3. For this paste event, the global `onPaste` handler at lines 166–190 runs first (capture
phase): `extractImageFilesFromClipboard` at line 169 returns `[]`, the Tauri check at line
177 passes, the plain-text-and-input guard at lines 180–187 is bypassed, and `void
tryPasteFromTauriClipboard();` at line 190 invokes the native `read_image_from_clipboard`
command once.

4. Because the global handler does not call `stopPropagation()` in this no-files branch,
the same DOM event continues to React's `handlePaste` at lines 128–162; there
`files.length === 0` at line 130, the same Tauri and text/input guards at lines 147–158
pass, and `void tryPasteFromTauriClipboard();` at line 161 is called a second time for the
same paste, causing two concurrent native clipboard reads and two `addRefFiles` calls
(lines 115–121) which can yield duplicate reference entries and duplicated success toasts.

Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** desktop/src/components/ConfigPanel/useReferenceImagePaste.ts
**Line:** 161:161
**Comment:**
	*Race Condition: The Tauri clipboard fallback is triggered in both the component paste handler and the global capture listener, so a single paste event in the reference-image area can invoke native clipboard reads twice concurrently. This can cause duplicate add attempts and inconsistent UX; centralize fallback invocation to one path per event.

Validate the correctness of the flagged issue. If it is correct, how can it be resolved? If you propose a fix, implement it concisely.
Once the fix is implemented, also check the other comments on the same PR and ask the user whether they want the remaining comments fixed as well. If yes, fetch all the comments, validate their correctness, and implement a minimal fix.

if (files.length > 0) {
event.preventDefault();
event.stopPropagation();
void processPastedFiles(files);

Suggestion: The global paste listener calls the async processPastedFiles without error handling, so lock/contention errors (or any rejection inside processing) become unhandled promise rejections and bypass the busy/error toast flow. Wrap this call in a catch (or route through the same guarded handler used in handlePaste) so failures are handled consistently. [possible bug]

Severity Level: Major ⚠️
- ⚠️ Global paste can produce unhandled promise rejections.
- ⚠️ Busy-state toasts skipped for window-level paste events.
Steps of Reproduction ✅
1. Open the desktop app build that includes `useReferenceImagePaste` (file
`desktop/src/components/ConfigPanel/useReferenceImagePaste.ts`) so that the hook is
mounted and its `useEffect` at lines 165–197 registers the global `window` `paste`
listener.

2. Trigger a paste of one or more image files anywhere in the window (not necessarily over
the reference-image area); the global `onPaste` handler at lines 166–175 runs, detects
files via `extractImageFilesFromClipboard`, and calls `void processPastedFiles(files);` at
line 173.

3. Inside `processPastedFiles` (lines 56–88), the call `withProcessingLock(async () => {
... })` at line 64 can reject (for example when a previous paste is still being processed,
matching the `busyErrorMessage` contract used in the guarded `handlePaste` catch at lines
133–141).

4. Because the global `onPaste` path calls `processPastedFiles(files)` without `await` or
a `.catch` (line 173) and the surrounding function is not `async`, any rejection from
`withProcessingLock` becomes an unhandled promise rejection: no `busy` toast
(`t('refImage.toast.busy')`) is shown, and the error surfaces only as a global unhandled
promise error in the runtime.
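A minimal sketch of the suggested guard, assuming `processPastedFiles` may reject with the busy/lock error described above. `safeProcess` and `onError` are illustrative stand-ins for routing the rejection into the hook's toast path, not the real API:

```typescript
// Hypothetical wrapper: route rejections from the fire-and-forget call into
// the same error path handlePaste uses, instead of leaking an unhandled
// promise rejection. Names are illustrative.
async function safeProcess(
  process: (files: unknown[]) => Promise<void>,
  files: unknown[],
  onError: (err: unknown) => void,
): Promise<void> {
  try {
    await process(files);
  } catch (err) {
    onError(err); // e.g. show the refImage.toast.busy message
  }
}
```

In the listener this would be invoked as `void safeProcess(processPastedFiles, files, showBusyToast)`; the even smaller equivalent is `processPastedFiles(files).catch(showBusyToast)` at the call site.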


Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** desktop/src/components/ConfigPanel/useReferenceImagePaste.ts
**Line:** 173:173
**Comment:**
	*Possible Bug: The global `paste` listener calls the async `processPastedFiles` without error handling, so lock/contention errors (or any rejection inside processing) become unhandled promise rejections and bypass the busy/error toast flow. Wrap this call in a `catch` (or route through the same guarded handler used in `handlePaste`) so failures are handled consistently.

Validate the correctness of the flagged issue. If it is correct, how can it be resolved? If you propose a fix, implement it concisely.
Once the fix is implemented, also check the other comments on the same PR and ask the user whether they want the remaining comments fixed as well. If yes, fetch all the comments, validate their correctness, and implement a minimal fix.

@codeant-ai

codeant-ai Bot commented May 11, 2026

CodeAnt AI finished running the review.




Labels

size:XXL This PR changes 1000+ lines, ignoring generated files

