refactor: extract app overlay logic into dedicated module #59
Conversation
📝 Walkthrough
Overlay management (GSPS, SR, Parametric Map) was extracted from `src/app.rs` into a dedicated `src/app/overlay.rs` module.

Changes
Sequence Diagram(s)sequenceDiagram
participant UI as "egui UI"
participant App as "DicomViewerApp"
participant Overlay as "overlay module"
participant History as "History/Cache"
participant Image as "DicomImage"
UI->>App: user toggles overlays / requests next overlay
App->>Overlay: next_overlay_navigation_target()
Overlay-->>App: OverlayNavigationTarget (viewport,frame)
App->>Image: update active frame / viewport
App->>Overlay: ensure overlays visible / attach pending
Overlay->>History: sync authoritative/pending changes
Overlay-->>App: overlay attach results
App->>UI: refresh_active_textures(ctx) / schedule repaint
UI-->>Image: render with overlays
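The handoff in the diagram above can be sketched in Rust. The type and function names (`OverlayNavigationTarget`, `next_overlay_navigation_target`) come from the diagram itself; the struct fields and the frame-cycling logic are assumptions added purely for illustration, not the PR's actual implementation:

```rust
// Hypothetical sketch of the overlay-navigation handshake shown in the
// sequence diagram. Field layout and cycling behavior are assumptions.

#[derive(Debug, PartialEq)]
pub struct OverlayNavigationTarget {
    pub viewport: usize,
    pub frame: usize,
}

/// Returns the next frame carrying an overlay: the first overlay frame
/// strictly after `current_frame`, wrapping back to the first overlay
/// frame when none remains ahead.
pub fn next_overlay_navigation_target(
    overlay_frames: &[usize],
    current_frame: usize,
    viewport: usize,
) -> Option<OverlayNavigationTarget> {
    if overlay_frames.is_empty() {
        return None;
    }
    let frame = overlay_frames
        .iter()
        .copied()
        .find(|&f| f > current_frame)
        .unwrap_or(overlay_frames[0]);
    Some(OverlayNavigationTarget { viewport, frame })
}

fn main() {
    let frames = [2, 5, 9];
    // Advances to the next overlay frame, then wraps around.
    assert_eq!(
        next_overlay_navigation_target(&frames, 5, 0),
        Some(OverlayNavigationTarget { viewport: 0, frame: 9 })
    );
    assert_eq!(
        next_overlay_navigation_target(&frames, 9, 0),
        Some(OverlayNavigationTarget { viewport: 0, frame: 2 })
    );
}
```

Once the app receives the target, it would update the active frame/viewport and call `refresh_active_textures(ctx)` as the diagram describes.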
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
🧹 Nitpick comments (1)
DESIGN.md (1)
77-83: Add the benchmark step for this refactor path.

The new verification bucket covers `app/overlay.rs`, but it still omits the baseline-vs-refactor benchmark run. Since this PR moves a fairly hot chunk of viewer logic, I'd keep that perf gate in this checklist too so regressions are caught alongside the functional checks.

Based on learnings: After large refactors, run `make benchmark` on the baseline and refactor versions with the same `BENCH_*` settings, and report median deltas for `total`, `startup`, `dicom_load`, and `render_ui`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@DESIGN.md` around lines 77 - 83, Add a benchmark step to the verification checklist: when validating changes touching app/overlay.rs (and related app/load.rs and app/history.rs) run baseline vs refactor benchmark runs via the tools/benchmark path (invoke the Makefile target `make benchmark` with identical BENCH_* environment settings) and report median deltas for the named metrics `total`, `startup`, `dicom_load`, and `render_ui`; update the checklist entry (the tooling/benchmark item) to require these paired runs and a short summary of median regressions before approving the PR.
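The "median deltas" the reviewer asks for can be sketched in a few lines of Rust. The metric names (`total`, `startup`, `dicom_load`, `render_ui`) come from the checklist; the helper functions and the sample data are invented for illustration, assuming each benchmark run yields a list of timing samples per metric:

```rust
// Sketch of the paired baseline-vs-refactor comparison: take the median of
// each metric across repeated runs, then report the relative delta.
// Helper names and sample values are illustrative assumptions.

fn median(mut samples: Vec<f64>) -> f64 {
    samples.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let n = samples.len();
    if n % 2 == 1 {
        samples[n / 2]
    } else {
        (samples[n / 2 - 1] + samples[n / 2]) / 2.0
    }
}

/// Relative median delta in percent; positive means the refactor is slower.
fn median_delta_pct(baseline: &[f64], refactor: &[f64]) -> f64 {
    let b = median(baseline.to_vec());
    let r = median(refactor.to_vec());
    (r - b) / b * 100.0
}

fn main() {
    // Paired runs with identical BENCH_* settings, per the checklist.
    let metrics = [
        ("total", vec![10.0, 10.2, 9.8], vec![10.1, 10.3, 9.9]),
        ("startup", vec![1.0, 1.1, 0.9], vec![1.0, 1.0, 1.0]),
    ];
    for (name, base, refac) in metrics {
        println!("{name}: {:+.1}%", median_delta_pct(&base, &refac));
    }
}
```

A short table of these deltas in the PR description would satisfy the "report median regressions before approving" requirement.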
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ea47e9cf-d44e-4f8a-8007-ce2ae8d8d890
📒 Files selected for processing (3)
`DESIGN.md`, `src/app.rs`, `src/app/overlay.rs`
🧹 Nitpick comments (1)
DESIGN.md (1)
90: LGTM! Benchmark requirements consistently applied.

The benchmark verification requirements are appropriately included in the tooling/benchmark changes section, maintaining consistency with the streaming/overlay/history verification scope.
Optional style improvement: Three consecutive sentences begin with "Run" (lines 88-90). Consider rewording for variety, such as:
- "Additionally, run paired baseline vs refactor benchmark runs..."
- "Finally, run paired baseline vs refactor benchmark runs..."
However, this is a minor stylistic point and the current wording is clear and functional.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@DESIGN.md` at line 90, The wording in the DESIGN.md snippet repeats the verb "Run" at the start of three consecutive sentences; please rephrase the sentence that currently reads "Run paired baseline vs refactor benchmark runs via `make benchmark` with identical `BENCH_*` environment settings, and include a short summary of median deltas/regressions for `total`, `startup`, `dicom_load`, and `render_ui`." to avoid repetition (e.g., start with "Additionally," or "Finally,") while keeping the same meaning and requirements so the benchmark instruction remains intact.
Summary by CodeRabbit
- Refactor
- New Features
- Documentation