feat(core): implement archive extractor module #25
Conversation
Native Rust archive extraction for ZIP, RAR v4/v5, 7z, and TAR (gz/bz2/xz/zstd). Follows hexagonal architecture with domain types, an ArchiveExtractor port trait, and format-specific handler adapters.
- Domain: ArchiveFormat, ExtractSummary, ArchiveEntry, ExtractionConfig
- Detector: magic byte detection with extension fallback
- ZIP: path traversal protection, AES password support
- TAR: streaming decompression via flate2/bzip2/xz2/zstd
- RAR: unrar crate wrapper for v4/v5 with graceful error recovery
- 7z: sevenz-rust2 (pure Rust) with password and path safety
- Split archives: detection for RAR parts, 7z segments, ZIP spans
- Composite VortexArchiveExtractor with recursive extraction
- CQRS: ExtractArchiveCommand + ListArchiveContentsQuery handlers
- 425 tests passing (48 new extractor tests)
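The port trait named in the summary could look roughly like this; a minimal sketch, where the field names, the NoopExtractor stand-in, and the String error type (standing in for the crate's DomainError) are all assumptions, not the PR's actual definitions:

```rust
use std::path::Path;

// Hypothetical shapes for the domain types named in the summary.
pub struct ExtractSummary { pub files_written: u64 }
pub struct ArchiveEntry { pub path: String, pub size: u64 }

// Driven port: the composite VortexArchiveExtractor and the per-format
// handlers are routed through a trait of roughly this shape.
pub trait ArchiveExtractor {
    fn extract(&self, file_path: &Path, dest_dir: &Path, password: Option<&str>)
        -> Result<ExtractSummary, String>;
    fn list_contents(&self, file_path: &Path, password: Option<&str>)
        -> Result<Vec<ArchiveEntry>, String>;
}

// Trivial stand-in adapter, just to show the call shape.
struct NoopExtractor;

impl ArchiveExtractor for NoopExtractor {
    fn extract(&self, _: &Path, _: &Path, _: Option<&str>) -> Result<ExtractSummary, String> {
        Ok(ExtractSummary { files_written: 0 })
    }
    fn list_contents(&self, _: &Path, _: Option<&str>) -> Result<Vec<ArchiveEntry>, String> {
        Ok(Vec::new())
    }
}

fn main() {
    let ex = NoopExtractor;
    let summary = ex.extract(Path::new("a.zip"), Path::new("out"), None).unwrap();
    assert_eq!(summary.files_written, 0);
    assert!(ex.list_contents(Path::new("a.zip"), None).unwrap().is_empty());
}
```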
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough
Adds a native archive extraction subsystem: domain models and a driven-port ArchiveExtractor trait with format-specific handler adapters.
Sequence Diagram(s)

sequenceDiagram
participant User as Tauri Client
participant CmdBus as CommandBus
participant Extractor as ArchiveExtractor
participant Detector as Format Detector
participant Handler as Format Handler
participant FS as Filesystem
User->>CmdBus: handle_extract_archive(ExtractArchiveCommand)
CmdBus->>CmdBus: validate download state & emit DownloadExtracting
CmdBus->>CmdBus: spawn_blocking(blocking task)
CmdBus->>Extractor: extract(file_path, dest_dir, password)
Extractor->>Detector: detect_format(file_path)
Detector->>FS: read magic bytes / check extension
Detector-->>Extractor: ArchiveFormat
Extractor->>Handler: route to per-format handler
Handler->>FS: open archive, create dest_dir
Handler->>FS: iterate entries, write files (with path checks)
Handler-->>Extractor: ExtractSummary
Extractor->>Extractor: optionally scan dest_dir for nested archives
Extractor->>Detector: detect nested formats
Extractor->>Handler: extract nested archives (up to max depth)
Extractor-->>CmdBus: ExtractSummary
CmdBus->>CmdBus: mark download Completed/Failed & emit event
CmdBus-->>User: ExtractSummary or error
sequenceDiagram
participant User as Tauri Client
participant QueryBus as QueryBus
participant Extractor as ArchiveExtractor
participant Detector as Format Detector
participant Handler as Format Handler
participant FS as Filesystem
User->>QueryBus: handle_list_archive_contents(ListArchiveContentsQuery)
QueryBus->>QueryBus: spawn_blocking(blocking task)
QueryBus->>Extractor: list_contents(file_path, password)
Extractor->>Detector: detect_format(file_path)
Detector->>FS: read magic bytes / check extension
Detector-->>Extractor: ArchiveFormat
Extractor->>Handler: route to per-format handler
Handler->>FS: open archive
Handler->>Handler: iterate entries, filter unsafe paths
Handler-->>Extractor: Vec<ArchiveEntry>
Extractor-->>QueryBus: Vec<ArchiveEntry>
QueryBus-->>User: Archive entries preview
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Greptile Summary
This PR adds a native Rust archive extraction module (ZIP, RAR v4/v5, 7z, TAR variants) following a hexagonal architecture with domain types, a port trait, format-specific adapters, and CQRS command/query handlers. The structure is clean and well-tested for the common cases, but three fixable bugs must be addressed before this is production-ready.
Confidence Score: 4/5. Safe to merge after fixing three P1 defects: segmentation.rs (two P1 bugs) and extract_archive.rs (missing error-path state reset) need fixes before merge.
Sequence Diagram

sequenceDiagram
participant IPC as IPC Layer (future)
participant CB as CommandBus
participant Repo as DownloadRepository
participant EB as EventBus
participant VE as VortexArchiveExtractor
participant Det as detector.rs
participant H as Format Handler
IPC->>CB: ExtractArchiveCommand(id, password, dest_dir)
CB->>Repo: find_by_id(id)
Repo-->>CB: Download (Completed)
CB->>Repo: save(download → Extracting)
CB->>EB: publish(DownloadExtracting)
CB->>CB: spawn_blocking
CB->>VE: extract(file_path, dest_dir, password)
VE->>Det: detect_format(file_path)
Det-->>VE: ArchiveFormat
VE->>H: extract(file_path, dest_dir, password)
H-->>VE: ExtractSummary
VE-->>CB: ExtractSummary
alt success
CB->>Repo: save(download → Completed)
CB->>EB: publish(DownloadCompleted)
CB-->>IPC: ExtractSummary
else failure (current: no cleanup)
CB-->>IPC: AppError (download stuck in Extracting)
end
Reviews (1): Last reviewed commit: "feat(core): implement archive extractor ..."
let pattern = Regex::new(&format!(r"^{}\.\z\d+$", regex::escape(base_name)))
    .map_err(|e| DomainError::StorageError(format!("Regex error: {}", e)))?;
scan_directory(parent, &mut parts, |name| pattern.is_match(name))?;
.z01 split-ZIP detection always fails — \z is a regex anchor, not a literal z
In the Rust regex crate, \z is the end-of-text anchor. The pattern ^{base}\.\z\d+$ requires the string to simultaneously be at end-of-text and still have digits remaining — an unsatisfiable condition. Every call to pattern.is_match(name) returns false, so split ZIPs in the .z01/.z02 format are silently never detected.
-let pattern = Regex::new(&format!(r"^{}\.\z\d+$", regex::escape(base_name)))
+let pattern = Regex::new(&format!(r"^{}\.z\d+$", regex::escape(base_name)))
     .map_err(|e| DomainError::StorageError(format!("Regex error: {}", e)))?;
 scan_directory(parent, &mut parts, |name| pattern.is_match(name))?;
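To see why the original pattern can never match, it helps to look at the pattern string that format! actually builds. A minimal sketch (the hardcoded base name is illustrative; the real code passes it through regex::escape):

```rust
fn main() {
    // In a raw string the backslash survives into the pattern, so the buggy
    // version hands the regex engine a literal `\z` (end-of-text anchor in
    // the regex crate), not the character `z`.
    let base = "archive"; // illustrative; the real code uses regex::escape(base_name)
    let buggy = format!(r"^{}\.\z\d+$", base);
    let fixed = format!(r"^{}\.z\d+$", base);
    assert_eq!(buggy, r"^archive\.\z\d+$"); // end-of-text anchor followed by \d+ can never match
    assert_eq!(fixed, r"^archive\.z\d+$");  // matches names like archive.z01
    println!("buggy: {}  fixed: {}", buggy, fixed);
}
```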
fn extract_part_number(file_name: &str) -> Option<u32> {
    // Extract number from patterns like:
    // name.part01.rar -> 01
    // name.7z.001 -> 001
    // name.r01 -> 01
    // name.z01 -> 01

    let re = Regex::new(r"(\d+)").ok()?;
    re.find(file_name)
        .and_then(|m| m.as_str().parse::<u32>().ok())
}
extract_part_number grabs the first digit run — wrong when the base name contains numbers
re.find returns the leftmost match. For a file like video2024.part01.rar the function returns 2024 instead of 1. Because sort_parts_numerically calls this for every part, all parts receive the same sort key, leaving the order undefined. Feeding parts in the wrong order to the decompressor produces a corrupt extraction.
Fix — anchor to the last digit run:

fn extract_part_number(file_name: &str) -> Option<u32> {
    let re = Regex::new(r"(\d+)[^0-9]*$").ok()?;
    re.captures(file_name)
        .and_then(|c| c[1].parse::<u32>().ok())
}

let summary = tokio::task::spawn_blocking(move || {
    extractor.extract(&file_path, &dest_dir, password.as_deref())
})
.await
.map_err(|e| AppError::Storage(format!("extraction task failed: {}", e)))??;

// Mark as completed again after extraction
download.complete()?;
Download stuck in Extracting state on any extraction failure
start_extracting() persists the Extracting state (lines 41–42). If spawn_blocking panics or extractor.extract returns an error, the ?? propagates immediately and the function returns — but the repository still holds the download in Extracting. There is no cleanup path, so the download is permanently stuck and a retry is impossible.
let summary = tokio::task::spawn_blocking(move || {
extractor.extract(&file_path, &dest_dir, password.as_deref())
})
.await
.map_err(|e| AppError::Storage(format!("extraction task failed: {}", e)))?;
let summary = match summary {
Ok(s) => s,
Err(e) => {
let _ = download.complete();
let _ = self.download_repo().save(&download);
return Err(e.into());
}
};

fn scan_directory<F>(
    parent: &Path,
    parts: &mut Vec<PathBuf>,
    predicate: F,
) -> Result<(), DomainError>
where
    F: Fn(&str) -> bool,
{
    std::fs::read_dir(parent)
        .map_err(|e| DomainError::StorageError(format!("Failed to read directory: {}", e)))?;

    for entry in std::fs::read_dir(parent)
        .map_err(|e| DomainError::StorageError(format!("Failed to read directory: {}", e)))?
    {
        let entry = entry.map_err(|e| {
            DomainError::StorageError(format!("Failed to read directory entry: {}", e))
        })?;

        let path = entry.path();
        if let Some(file_name) = path.file_name().and_then(|n| n.to_str())
Directory opened twice in scan_directory
The first read_dir call (line 271) is used only for error propagation — its ReadDir iterator is immediately discarded — and a second identical call (line 274) does the actual iteration. This wastes two syscalls and introduces a minor TOCTOU window. Remove the first call.
 fn scan_directory<F>(
     parent: &Path,
     parts: &mut Vec<PathBuf>,
     predicate: F,
 ) -> Result<(), DomainError>
 where
     F: Fn(&str) -> bool,
 {
-    std::fs::read_dir(parent)
-        .map_err(|e| DomainError::StorageError(format!("Failed to read directory: {}", e)))?;
     for entry in std::fs::read_dir(parent)
         .map_err(|e| DomainError::StorageError(format!("Failed to read directory: {}", e)))?
     {
         let entry = entry.map_err(|e| {
             DomainError::StorageError(format!("Failed to read directory entry: {}", e))
         })?;

         let path = entry.path();
         if let Some(file_name) = path.file_name().and_then(|n| n.to_str())
((year - 1970) * 365 * 86400)
    + (month * 30 * 86400)
    + (day * 86400)
    + (hour * 3600)
    + (minute * 60)
    + second
});
Month offset is off by one in the timestamp approximation
ZIP months are 1-indexed (1 = January), so January should contribute zero days of month offset. The current formula month * 30 * 86400 adds 30 days for January. Using (month - 1) removes the systematic one-month shift:
 ((year - 1970) * 365 * 86400)
-    + (month * 30 * 86400)
+    + ((month - 1) * 30 * 86400)
     + (day * 86400)
     + (hour * 3600)
     + (minute * 60)
     + second
 });
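The size of the resulting shift is easy to check. A minimal sketch using the author's 365-day-year/30-day-month approximation, comparing the buggy and fixed month terms for a January date (the day term is kept 1-indexed, as in the original):

```rust
fn main() {
    // January 1 1970, with the simplified calendar from the snippet above.
    let (year, month, day): (u64, u64, u64) = (1970, 1, 1);
    let base = (year - 1970) * 365 * 86400 + day * 86400;
    let buggy = base + month * 30 * 86400;       // January adds 30 spurious days
    let fixed = base + (month - 1) * 30 * 86400; // January adds no month offset
    assert_eq!(buggy - fixed, 30 * 86400); // every timestamp shifted by 30 days
    println!("shift: {} seconds", buggy - fixed);
}
```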
Actionable comments posted: 11
Note: Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
♻️ Duplicate comments (7)
All seven are ⚠️ Potential issue | 🔴 Critical and flag the same problem: generated coverage artifacts that should not be committed. Each file is part of the generated coverage/ directory, which should be excluded from version control per the comment on coverage/index.html; remove the files from the PR, add coverage/ to .gitignore, and git rm --cached any already-tracked copies:
- coverage/stores/downloadStore.ts.html (lines 1-295)
- coverage/components/ui/button.tsx.html (lines 1-253)
- coverage/layouts/Sidebar.tsx.html (lines 1-217)
- coverage/api/client.ts.html (lines 1-163)
- coverage/prettify.css (line 1). Note: static analysis flagged the deprecated media type projection in the @media print,projection rule, but this is a vendored file from the Istanbul/prettify.js library and should not be manually edited.
- coverage/components/ui/checkbox.tsx.html (lines 1-181)
- coverage/components/ui/index.html (lines 1-206)
🟠 Major comments (16)
src/routes/__tests__/routes.test.tsx-7-26 (1)
7-26: ⚠️ Potential issue | 🟠 Major
Exercise the real app route tree, not a local copy.
AppRoutes redefines routing independently from src/App.tsx, so this suite can stay green even if the real router changes and the helper is not updated. It already diverges structurally by dropping the layout wrapper from src/App.tsx:18-40. Prefer rendering App with lightweight provider mocks, or export a shared route definition consumed by both the app and this test.
🤖 Prompt for AI Agents: Render the real App component with lightweight provider mocks (stores, contexts, wrappers), or extract the route definition into a single export consumed by both App and the test, and remove the local AppRoutes helper.

Ten further Major comments repeat the coverage-artifact issue (⚠️ Potential issue | 🟠 Major): generated Istanbul HTML reports are committed under coverage/ and are unrelated to the archive-extractor changes. Remove them from the PR, add coverage/ to .gitignore, and run git rm -r --cached coverage/ if the files are already tracked:
- coverage/lib/format.ts.html (lines 2-214, includes volatile timestamp content)
- coverage/api/queries.ts.html (lines 1-169)
- coverage/hooks/index.html (lines 1-146)
- coverage/hooks/useDownloadProgress.ts.html (lines 1-124)
- coverage/layouts/StatusBar.tsx.html (lines 1-199)
- coverage/stores/index.html (lines 1-161)
- coverage/lib/index.html (lines 1-131)
- coverage/theme/index.html (lines 1-131)
- coverage/api/index.html (lines 1-146)
- coverage/views/DownloadsView/SearchBar.tsx.html (lines 2-147)

src-tauri/src/application/queries/mod.rs-55-60 (1)
55-60: ⚠️ Potential issue | 🟠 Major
Debug derivation can leak archive passwords.
Line 55 derives Debug and Line 58 stores password. This can expose secrets via logs or panic messages. Prefer redacted debug output (or remove Debug).
🔒 Suggested fix (redacted Debug):

-#[derive(Debug)]
 pub struct ListArchiveContentsQuery {
     pub file_path: String,
     pub password: Option<String>,
 }

 impl Query for ListArchiveContentsQuery {}
+
+impl std::fmt::Debug for ListArchiveContentsQuery {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        f.debug_struct("ListArchiveContentsQuery")
+            .field("file_path", &self.file_path)
+            .field("password", &self.password.as_ref().map(|_| "***redacted***"))
+            .finish()
+    }
+}

🤖 Prompt for AI Agents: Remove #[derive(Debug)] from ListArchiveContentsQuery and write a manual std::fmt::Debug impl that prints file_path but redacts or omits the password field, keeping impl Query as-is; update any tests or usages that relied on the auto-derived Debug.

src-tauri/src/application/commands/extract_archive.rs-41-68 (1)
41-68: ⚠️ Potential issue | 🟠 Major
Failed extractions leave the download stuck in Extracting.
After lines 41-45 persist and publish the extracting state, any error from lines 57-61 returns early before the download is restored to a terminal/recoverable state. Wrong passwords, corrupt archives, or storage failures will strand the record in Extracting and emit no compensating event.

src-tauri/src/adapters/driven/extractor/rar_handler.rs-100-107 (1)
100-107: 🛠️ Refactor suggestion | 🟠 Major
list_contents exposes raw paths without validation.
Similar to extraction, the listing method creates PathBuf directly from entry.filename without path traversal checks. For consistency with ZipHandler and SevenZHandler, consider filtering suspicious paths.
🤖 Prompt for AI Agents: In RarHandler::list_contents, mirror the checks used in ZipHandler/SevenZHandler: validate PathBuf::from(&entry.filename) before the entries.push(...) call, skipping entries that are absolute, contain parent components (".."), have drive prefixes, or normalize outside the target root.

src-tauri/src/adapters/driven/extractor/segmentation.rs-302-313 (1)
302-313: ⚠️ Potential issue | 🟠 Major
extract_part_number extracts the first digit sequence, causing incorrect sorting for numbered base names.
For a filename like video2.part03.rar, this function extracts 2 (from video2) instead of 03 (the actual part number). This causes incorrect sorting when base names contain digits.
Consider extracting the part number more precisely by matching known patterns:
🐛 Proposed fix for accurate part number extraction

 fn extract_part_number(file_name: &str) -> Option<u32> {
-    // Extract number from patterns like:
-    // name.part01.rar -> 01
-    // name.7z.001 -> 001
-    // name.r01 -> 01
-    // name.z01 -> 01
-
-    let re = Regex::new(r"(\d+)").ok()?;
-    re.find(file_name)
-        .and_then(|m| m.as_str().parse::<u32>().ok())
+    // Match specific patterns to extract the part number accurately
+    // Patterns: .part01.rar, .7z.001, .r01, .z01, .zip.001
+    let patterns = [
+        r"\.part(\d+)\.rar$",
+        r"\.7z\.(\d+)$",
+        r"\.r(\d+)$",
+        r"\.z(\d+)$",
+        r"\.zip\.(\d+)$",
+    ];
+
+    for pattern in &patterns {
+        if let Ok(re) = Regex::new(pattern) {
+            if let Some(caps) = re.captures(file_name) {
+                return caps.get(1)?.as_str().parse::<u32>().ok();
+            }
+        }
+    }
+    None
 }
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driven/extractor/segmentation.rs` around lines 302 - 313, extract_part_number currently grabs the first digit sequence in the filename (so "video2.part03.rar" returns 2); change it to match known archive part patterns instead: try a list of regexes in priority order (e.g. r"\.part(\d+)(?:\.[^.]+)?$", r"\.7z\.(\d+)$", r"\.r(\d+)$", r"\.z(\d+)$", and as a last resort r"\.(\d+)\.[^.]+$"), apply them in extract_part_number and parse the first capture group as u32, returning None if none match or parse fails; use these specific patterns (and the function name extract_part_number) so filenames with digits in the base name are handled correctly.

src-tauri/src/adapters/driven/extractor/rar_handler.rs-56-56 (1)
56-56: ⚠️ Potential issue | 🟠 Major

Add path traversal protection to `RarHandler.extract()`.
The `RarHandler` passes entry paths directly to `extract_to()` without validation, unlike `ZipHandler` (which uses `enclosed_name()`) and `SevenZHandler` (which uses `parse_safe_path()`). The unrar crate does not prevent directory traversal attacks internally, making this a security vulnerability. Implement path validation before extraction (e.g., via `parse_safe_path()` similar to `SevenZHandler`) to reject entries with `..` or absolute path components.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driven/extractor/rar_handler.rs` at line 56, RarHandler.extract() currently calls header.extract_to(dest_dir) with unvalidated entry paths; add path traversal protection by validating each entry's path before extraction (similar to SevenZHandler::parse_safe_path). Locate the loop handling archive entries in RarHandler.extract() and for each entry derive its path (the header path or name), reject or skip entries containing absolute paths or any ".." components, and join the validated relative path to dest_dir for extraction; only call header.extract_to() (or equivalent extraction) after the path is confirmed safe. Ensure you reference RarHandler.extract(), the entry header variable (e.g., header), and use the same safe-path logic/predicate used by SevenZHandler::parse_safe_path() or ZipHandler::enclosed_name() to keep behavior consistent.
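A minimal, dependency-free sketch of such a validation helper (assuming a standalone function named `parse_safe_path` that mirrors the behavior described in the finding, not the project's actual implementation) could look like:

```rust
use std::path::{Component, Path, PathBuf};

// Illustrative sketch: reject archive entry paths that could escape the
// destination directory, and drop "./" segments while keeping the rest.
fn parse_safe_path(raw: &str) -> Option<PathBuf> {
    let mut normalized = PathBuf::new();
    for component in Path::new(raw).components() {
        match component {
            // Anything that can climb out of dest_dir is rejected outright.
            Component::ParentDir | Component::RootDir | Component::Prefix(_) => return None,
            // "./" adds nothing; skip it instead of preserving it.
            Component::CurDir => {}
            Component::Normal(part) => normalized.push(part),
        }
    }
    if normalized.as_os_str().is_empty() {
        None
    } else {
        Some(normalized)
    }
}
```

Entries that fail the check would be skipped (with a warning) before `extract_to()` is ever called, which matches the skip-and-continue behavior the other handlers use.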
🟡 Minor comments (11)
CHANGELOG.md-177-177 (1)
177-177: ⚠️ Potential issue | 🟡 Minor

Align the trait method list with the actual public API.

This bullet currently omits `can_extract` and shortens `detect_format` to `detect`, which can drift from the real port surface. Please list the trait methods using their actual names (`detect_format`, `can_extract`, `extract`, `list_contents`, `detect_segments`).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CHANGELOG.md` at line 177, Update the CHANGELOG bullet for ArchiveExtractor to list the actual public trait method names exactly: use ArchiveExtractor and the methods detect_format, can_extract, extract, list_contents, detect_segments (replace the incorrect "detect" and missing "can_extract"); ensure the bullet matches the trait's public API by spelling each method name precisely.

coverage/clover.xml-7-17 (1)
7-17: ⚠️ Potential issue | 🟡 Minor

Use repo-relative paths in committed coverage reports.

These `path` attributes expose a machine-specific absolute path (`/home/matvei/...`). That leaks local filesystem details and makes the artifact non-reproducible across contributors and CI. Please emit repo-relative paths here, or keep Clover output out of git entirely.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@coverage/clover.xml` around lines 7 - 17, The Clover report contains machine-specific absolute paths in the file "path" attributes (e.g., entries for client.ts and hooks.ts) which should be repo-relative; update the coverage generation or post-processing step to emit relative paths (strip project-root prefix) or replace the absolute "path" attributes with repository-relative paths before committing the coverage/clover.xml artifact, or stop committing the generated Clover output altogether; look for the code that writes the Clover XML (the coverage tool config or a post-build script that touches coverage/clover.xml) and change it to compute and write paths relative to the repository root rather than using absolute filesystem paths.

coverage/block-navigation.js-1-23 (1)
1-23: ⚠️ Potential issue | 🟡 Minor

Use strict mode and block-scoped declarations in this JS helper.

This file uses `var` declarations and does not enable strict mode. Please switch to `'use strict';` and `const`/`let`. As per coding guidelines, `**/*.js`: Always use strict mode in JavaScript files, and `**/*.{js,jsx,ts,tsx}`: Prefer const over let, and let over var in JavaScript/TypeScript.

Also applies to: 41-57
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@coverage/block-navigation.js` around lines 1 - 23, Add strict mode and replace function-scoped vars with block-scoped bindings: insert 'use strict'; at the top of the module (or at the top of the IIFE) and change the declaration of jumpToCode from var to const (const jumpToCode = (function init() { ... }) ); inside the IIFE replace var declarations like missingCoverageClasses, fileListingElements, notSelector, selector, missingCoverageElements, and currentIndex with const for values that never change and let for mutable ones (e.g., currentIndex). Ensure any other var usages in the remainder of the function (including the code referenced around lines 41-57) are similarly converted to const/let while preserving behavior.

coverage/block-navigation.js-24-63 (1)
24-63: ⚠️ Potential issue | 🟡 Minor

Guard navigation when no uncovered blocks exist.

When `missingCoverageElements.length` is `0`, calling `goToNext()` or `goToPrevious()` attempts to call `.scrollIntoView()` and `.classList.add()` on a null element returned by `.item(0)` or `.item(-1)`. Add an early return in `makeCurrent()` to prevent this.

Proposed fix
```diff
 function makeCurrent(index) {
+    if (missingCoverageElements.length === 0) return;
     toggleClass(index);
     currentIndex = index;
     missingCoverageElements.item(index).scrollIntoView({
         behavior: 'smooth',
         block: 'center',
         inline: 'center'
     });
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@coverage/block-navigation.js` around lines 24 - 63, Add a guard to avoid operating on missingCoverageElements when there are no items: in makeCurrent(index) early-return if missingCoverageElements.length === 0 or missingCoverageElements.item(index) is null; also make toggleClass(index) resilient by checking that missingCoverageElements.item(currentIndex) and missingCoverageElements.item(index) exist before calling classList.remove/add (and only update currentIndex in makeCurrent after these checks). This prevents .item(...).classList and .scrollIntoView() from being called on null when no uncovered blocks exist.

coverage/sorter.js-1-210 (1)
1-210: ⚠️ Potential issue | 🟡 Minor

Either modernize this tracked file to comply with repository style rules or exclude it from version control.

This file uses `var` (12 instances) throughout and is subject to the repository's JavaScript coding guidelines, which require preferring `const` over `let` and `let` over `var`. Since `coverage/sorter.js` is explicitly tracked in git, it should either be updated to use modern variable declarations or removed from version control as a generated artifact.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@coverage/sorter.js` around lines 1 - 210, The file uses legacy var declarations everywhere; update declarations to use const for variables that never change and let for those that are reassigned to satisfy the repo style rules. In the addSorting IIFE replace top-level var cols and currentSort with appropriate const/let, then update local vars inside functions (loadColumns, loadRowData, loadData, sortByIndex, removeSortIndicators, addSortIndicators, enableUI, addSearchBox, onFilterInput) to use const when a binding is immutable and let when mutated (e.g., loop indices should be let, accumulated arrays can be const, reassigned flags should be let). Ensure semantics are preserved (no hoisting changes) and run a quick smoke test of addSorting, loadColumns, loadRowData, loadData, sortByIndex and enableUI to verify behavior before committing.

src-tauri/src/adapters/driven/extractor/detector.rs-277-281 (1)
277-281: ⚠️ Potential issue | 🟡 Minor

This test doesn't actually cover extension precedence.

`create_file_with_magic()` creates a temp file without the conflicting `.rar` suffix, so Lines 279-281 only re-test ZIP magic detection. Give the temp file a `.rar` suffix here to prove magic bytes beat the extension fallback.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driven/extractor/detector.rs` around lines 277 - 281, The test test_magic_bytes_takes_precedence_over_extension currently calls create_file_with_magic() which returns a temp file without the .rar suffix, so it never verifies extension-vs-magic behavior; modify the test so the temp file created by create_file_with_magic is given a .rar extension (e.g., rename or create the temp path with a ".rar" suffix) before calling detect_format(file.path()), then assert detect_format(...) returns Some(ArchiveFormat::Zip) to prove magic bytes take precedence over the .rar extension; refer to the test function name, create_file_with_magic, detect_format, and ArchiveFormat::Zip when making the change.

src-tauri/src/adapters/driven/extractor/detector.rs-69-71 (1)
69-71: ⚠️ Potential issue | 🟡 Minor

Check the full TAR magic field.

Line 71 compares `buffer[257..262]`, which is only 5 bytes. TAR headers use a 6-byte magic field (`ustar\0` or `ustar `), so this can false-positive on files that merely contain the 5-byte prefix. Read and compare the full field before routing to the TAR handler.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driven/extractor/detector.rs` around lines 69 - 71, The TAR magic check currently compares only 5 bytes (buffer[257..262] == b"ustar") which can false-positive; extend the buffer to cover the full 6-byte magic field (make the buffer at least 263 bytes, e.g., [0u8; 263]) and compare the full slice buffer[257..263] against the correct 6-byte values (accept b"ustar\0" and b"ustar "), updating the read_exact and match branch in detector.rs where buffer and the comparison occur.

src-tauri/src/adapters/driven/extractor/detector.rs-83-108 (1)
83-108: ⚠️ Potential issue | 🟡 Minor

Single-part extensions are still matched case-sensitively.

The function advertises case-insensitive matching, but Lines 102-107 match on `path.extension()` without normalizing it. `.ZIP`, `.RAR`, `.TAR`, and `.7Z` will therefore return `None`. Lowercase the extracted extension before the `match`.

Suggested fix
```diff
 fn extension_matches(path: &Path) -> Option<ArchiveFormat> {
     let path_str = path.to_string_lossy().to_lowercase();
@@
-    match path.extension()?.to_string_lossy().as_ref() {
+    let ext = path.extension()?.to_string_lossy().to_lowercase();
+    match ext.as_str() {
         "tar" => Some(ArchiveFormat::Tar),
         "zip" => Some(ArchiveFormat::Zip),
         "rar" => Some(ArchiveFormat::RarV5), // Default to v5 for .rar extension
         "7z" => Some(ArchiveFormat::SevenZ),
         _ => None,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driven/extractor/detector.rs` around lines 83 - 108, The extension_matches function claims case-insensitive matching but uses path.extension()? directly in a match without normalizing; change the single-part extension branch in extension_matches to lowercase the extension string before matching (e.g., call to_string_lossy().to_lowercase() or similar on path.extension() result) and then match against "tar", "zip", "rar", "7z" so uppercase extensions like .ZIP/.RAR/.7Z are handled; update the match on the normalized extension value in extension_matches.

src-tauri/src/adapters/driven/extractor/mod.rs-122-126 (1)
122-126: ⚠️ Potential issue | 🟡 Minor

`with_extension("")` may not fully strip compound extensions.

For archives like `archive.tar.gz`, `with_extension("")` produces `archive.tar` rather than `archive`. This could lead to confusing nested extraction directories or even incorrect format re-detection on the second pass. Consider stripping all archive-related extensions:
♻️ Proposed fix for compound extensions
```diff
-    let nested_dest = archive_path.with_extension("");
+    let nested_dest = strip_archive_extensions(&archive_path);
```

```rust
// Add helper function:
fn strip_archive_extensions(path: &Path) -> PathBuf {
    let mut result = path.to_path_buf();
    let archive_exts = ["gz", "bz2", "xz", "zst", "zstd", "tar", "zip", "rar", "7z"];
    loop {
        match result.extension().and_then(|e| e.to_str()) {
            Some(ext) if archive_exts.contains(&ext.to_lowercase().as_str()) => {
                result = result.with_extension("");
            }
            _ => break,
        }
    }
    result
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driven/extractor/mod.rs` around lines 122 - 126, The current nested_dest uses archive_path.with_extension("") which leaves compound names like "archive.tar" for "archive.tar.gz"; update the logic that builds nested_dest (where nested_dest and archive_path are used) to strip all archive-related extensions instead of only the last one—e.g., repeatedly remove suffixes while Path::extension() matches a set of archive extensions (gz, gzip, tar, tgz, zip, bz2, xz, 7z, rar, etc.) or loop using file_stem() until the extension is not in that set, then use the resulting stem to create the directory with std::fs::create_dir_all and return DomainError::StorageError on failure as before.

src-tauri/src/adapters/driven/extractor/rar_handler.rs-54-65 (1)
54-65: ⚠️ Potential issue | 🟡 Minor

Extraction stops on first error instead of continuing.

Unlike `ZipHandler` and `SevenZHandler`, which continue extraction after per-entry failures (adding to warnings), `RarHandler` breaks out of the loop on the first error. This inconsistency could lead to incomplete extractions where other handlers would succeed partially.

♻️ Proposed fix to continue on error
```diff
 match header.extract_to(dest_dir) {
     Ok(next) => {
         extracted_files += 1;
         open = next;
     }
     Err(e) => {
         warnings.push(format!("failed to extract entry: {}", e));
-        break;
+        // Try to continue with next entry
+        match header.skip() {
+            Ok(next) => open = next,
+            Err(_) => break, // Can't continue if skip also fails
+        }
     }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driven/extractor/rar_handler.rs` around lines 54 - 65, The RarHandler currently stops extraction on the first per-entry error because the match arm for Err(e) uses break; change this to record the warning and continue the loop instead so extraction proceeds like ZipHandler/SevenZHandler; specifically, in the loop handling header.extract_to(dest_dir) (the match that updates extracted_files and open on Ok and pushes to warnings on Err), replace break with continue so subsequent entries are processed and extracted_bytes/extracted_files/open behavior remains consistent.

src-tauri/src/adapters/driven/extractor/tar_handler.rs-88-106 (1)
88-106: ⚠️ Potential issue | 🟡 Minor

Missing path traversal protection in `list_contents`.

Unlike `ZipHandler` (uses `enclosed_name()`) and `SevenZHandler` (uses `parse_safe_path()`), this handler exposes raw entry paths without validation. While `tar::Archive::unpack()` has built-in traversal prevention for extraction, `list_contents()` returns unsanitized paths that could confuse callers or UI code expecting safe paths. Consider adding the same `parse_safe_path` check used in `seven_z_handler.rs` for consistency:

🛡️ Proposed fix to add path validation
```diff
+use super::seven_z_handler::parse_safe_path; // or extract to shared module

 // In list_contents:
 let path = entry
     .path()
     .map_err(|e| DomainError::StorageError(format!("invalid TAR entry path: {}", e)))?
     .to_path_buf();

+// Skip suspicious paths for consistency with other handlers
+let path = match parse_safe_path(path.to_string_lossy().as_ref()) {
+    Some(p) => p,
+    None => {
+        debug!("skipping entry with path traversal: {}", path.display());
+        continue;
+    }
+};
+
 let is_dir = header.entry_type().is_dir();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driven/extractor/tar_handler.rs` around lines 88 - 106, In list_contents of tar_handler.rs, validate and sanitize each entry path before pushing ArchiveEntry: replace direct use of entry.path().to_path_buf() with a safe conversion using the existing parse_safe_path (as used in seven_z_handler.rs) or an equivalent enclosed_name check; on failure map the error to DomainError::StorageError (or skip the entry consistently) and ensure ArchiveEntry.path contains the sanitized PathBuf so callers receive traversal-safe paths.
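On the detection side, the earlier finding about the 5-byte TAR magic comparison can be sketched as a full 6-byte check. This is an illustrative standalone helper (`looks_like_tar` is a hypothetical name, not the project's detector API), accepting both the POSIX and old-GNU magic variants:

```rust
use std::fs::File;
use std::io::{Read, Seek, SeekFrom};
use std::path::Path;

// Sketch: read the full 6-byte ustar magic at offset 257 instead of
// only the 5-byte "ustar" prefix.
fn looks_like_tar(path: &Path) -> std::io::Result<bool> {
    let mut file = File::open(path)?;
    let mut magic = [0u8; 6];
    // The ustar magic field lives at byte offset 257 of the first header block.
    file.seek(SeekFrom::Start(257))?;
    if file.read_exact(&mut magic).is_err() {
        return Ok(false); // file too small to hold a tar header
    }
    // POSIX ustar: "ustar\0"; old GNU tar: "ustar " (trailing space).
    Ok(&magic == b"ustar\0" || &magic == b"ustar ")
}
```

Comparing all six bytes avoids false positives on files that merely contain the shorter prefix at that offset.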
🧹 Nitpick comments (11)
CHANGELOG.md (1)
185-186: Optional: make command/query runtime notes consistent.

Line 185 calls out `spawn_blocking`, but Line 186 doesn't, even though `ListArchiveContentsQuery` also uses it. Consider mirroring that detail for consistency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CHANGELOG.md` around lines 185 - 186, The changelog mentions that ExtractArchiveCommand uses spawn_blocking for CPU-bound extraction but omits the same runtime note for ListArchiveContentsQuery; update the CHANGELOG entry to mention that ListArchiveContentsQuery also uses spawn_blocking (or equivalent) for CPU-bound work so both entries are consistent, referring to the ExtractArchiveCommand and ListArchiveContentsQuery symbols when adding the runtime note.

coverage/prettify.js (1)
1-2: Keep generated coverage bundles out of the repo.

This vendored/minified file is now part of the review and lint surface, and it already trips Biome on Line 2. Hand-fixing bundled coverage assets is brittle; publishing `coverage/**` as a CI artifact or excluding it from version control/linting is a safer path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@coverage/prettify.js` around lines 1 - 2, This vendored/minified coverage bundle (contains symbols like window.prettyPrint, PR.registerLangHandler and the top "/* eslint-disable */" header) should be removed from version control; delete the file from the commit, add the coverage output path to VCS ignore (e.g. coverage/**) and to your lint/BIOME ignore configuration so it is not part of the lint/review surface, and instead publish the coverage bundle as a CI artifact or store it outside the repo.

coverage/layouts/StatusBar.tsx.html (1)
156-156: Inconsistent string literal usage in embedded source.

The ternary expression uses a template literal for the true branch but a plain string for the false branch. While functionally correct, consistency would improve readability.
♻️ Suggested fix for consistency
```diff
- const limitLabel = speedLimit > 0 ? `${speedLimit} MB/s` : "unlimited";
+ const limitLabel = speedLimit > 0 ? `${speedLimit} MB/s` : `unlimited`;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@coverage/layouts/StatusBar.tsx.html` at line 156, The ternary that assigns limitLabel mixes a template literal for the true branch and a plain string for the false branch; update the ternary expression that sets limitLabel (which uses speedLimit) so both branches use the same string literal style — e.g., change the false branch to a template literal (`unlimited`) or change the true branch to a plain string concatenation (speedLimit + " MB/s") to make the code consistent and readable.

coverage/layouts/AppLayout.tsx.html (1)
2-250: Exclude generated coverage HTML from version control.

This file is generated report output (timestamped + derived source snapshot), so keeping it in the PR adds churn without improving runtime behavior. Prefer publishing coverage as a CI artifact and ignoring `coverage/` in Git.

♻️ Suggested cleanup
```diff
+# .gitignore
+coverage/
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@coverage/layouts/AppLayout.tsx.html` around lines 2 - 250, This generated coverage report (coverage/layouts/AppLayout.tsx.html) should not be committed; remove the file from the PR and add the coverage/ directory to version control ignore (e.g., update .gitignore) so generated HTML reports are not tracked, and ensure CI is configured to publish coverage/ as an artifact instead of committing it.

src-tauri/src/application/commands/resume_download.rs (1)
261-298: Consolidate duplicated `FakeArchiveExtractor` test double.

This same stub is repeated across command tests in this PR. Move it to a shared test helper to avoid drift whenever the `ArchiveExtractor` trait changes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/application/commands/resume_download.rs` around lines 261 - 298, The FakeArchiveExtractor test double is duplicated; extract it into a single shared test helper module so all command tests reuse it. Create a new test helper (e.g., tests/helpers/archive_extractor.rs or a tests::helpers mod) that defines FakeArchiveExtractor implementing crate::domain::ports::driven::ArchiveExtractor with the same methods (detect_format, can_extract, extract, list_contents, detect_segments), export it (pub) and update the command tests to import and use this shared FakeArchiveExtractor so future trait changes only require one update.

src-tauri/src/application/commands/set_priority.rs (1)
224-261: Consolidate the repeated `FakeArchiveExtractor` test double into shared test support.

This stub is now copied across several command/query test modules; centralizing it will reduce drift and maintenance overhead.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/application/commands/set_priority.rs` around lines 224 - 261, Extract the duplicated FakeArchiveExtractor stub into a single shared test support module (e.g., test_helpers or tests::support) and have other command/query tests import it instead of redefining it; move the struct FakeArchiveExtractor and its impl of crate::domain::ports::driven::ArchiveExtractor into that module, ensure the module exposes the needed types (DomainError, crate::domain::model::archive::* via proper use/pub use or fully-qualified paths), and update callers in set_priority (and other tests) to use the shared FakeArchiveExtractor to avoid repeating the stub.

src-tauri/src/application/commands/cancel_download.rs (1)
280-317: Consider centralizing `FakeArchiveExtractor` in a shared test helper.

This exact stub appears across several test modules in this PR. Moving it to a common test utility would reduce duplication and simplify future trait updates.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/application/commands/cancel_download.rs` around lines 280 - 317, The FakeArchiveExtractor stub is duplicated across tests; extract it into a shared test helper module (e.g., tests/helpers or crate::test_utils) and re-export or import it where needed so updates to the ArchiveExtractor trait only require one change; move the struct FakeArchiveExtractor and its impl of crate::domain::ports::driven::ArchiveExtractor (including methods detect_format, can_extract, extract, list_contents, detect_segments) into that helper and update tests to use the centralized FakeArchiveExtractor.

src-tauri/src/application/command_bus.rs (1)
425-462: Move this fake extractor into shared test support.

This `FakeArchiveExtractor` is now copy-pasted across several test modules. Centralizing it behind `#[cfg(test)]` will make future `ArchiveExtractor` trait changes a one-file update instead of a repo-wide mechanical edit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/application/command_bus.rs` around lines 425 - 462, FakeArchiveExtractor is duplicated across tests; extract it into a single shared test-support module behind #[cfg(test)] so changes to the ArchiveExtractor trait propagate once. Create a test-only module (e.g. tests/support or a tests_helpers.rs) that defines pub(crate) struct FakeArchiveExtractor and implements crate::domain::ports::driven::ArchiveExtractor (preserving the same method signatures: detect_format, can_extract, extract, list_contents, detect_segments), then replace local copies in test modules with use of the shared FakeArchiveExtractor to remove duplication.

src-tauri/src/adapters/driven/extractor/seven_z_handler.rs (1)
192-210: `parse_safe_path` returns unsanitized path containing `./` components.

The function skips `Component::CurDir` during validation but returns the original `PathBuf` unchanged, preserving `./` prefixes. While not a security issue, this may cause inconsistent paths in `ArchiveEntry` (e.g., `"./docs/file.txt"` vs `"docs/file.txt"`). Consider normalizing the path by collecting only `Normal` components:

♻️ Proposed fix to normalize paths
```diff
 fn parse_safe_path(path_str: &str) -> Option<PathBuf> {
     let path = PathBuf::from(path_str);
+    let mut normalized = PathBuf::new();

     // Check if path contains .. or absolute components
     for component in path.components() {
         use std::path::Component;
         match component {
             Component::ParentDir | Component::RootDir | Component::Prefix(_) => return None,
             Component::CurDir => {
                 // Skip . components but continue
             }
-            Component::Normal(_) => {
+            Component::Normal(c) => {
                 // Normal component, safe
+                normalized.push(c);
             }
         }
     }

-    Some(path)
+    Some(normalized)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driven/extractor/seven_z_handler.rs` around lines 192 - 210, parse_safe_path currently validates components but returns the original PathBuf, leaving "./" (Component::CurDir) parts intact; change it to build and return a normalized PathBuf composed only of allowed components: iterate path.components(), reject Component::ParentDir | Component::RootDir | Component::Prefix(_) as before, and push only Component::Normal(os_str) into a new PathBuf (skip Component::CurDir), then return Some(normalized_path) so ArchiveEntry gets consistent sanitized paths.

src-tauri/src/adapters/driven/extractor/zip_handler.rs (1)
167-183: Timestamp calculation is significantly inaccurate.

The formula uses `month * 30 * 86400` and `(year - 1970) * 365 * 86400`, which doesn't account for actual month lengths, leap years, or that months are 1-indexed. For a file from 2024-06-15, the error could be off by weeks. Consider using `chrono` or computing a more accurate timestamp. If rough approximation is intentional, document the margin of error.

♻️ Proposed fix using proper date handling
If `chrono` is available:

```rust
let modified_timestamp = entry.last_modified().and_then(|dt| {
    chrono::NaiveDate::from_ymd_opt(dt.year() as i32, dt.month() as u32, dt.day() as u32)
        .and_then(|date| date.and_hms_opt(dt.hour() as u32, dt.minute() as u32, dt.second() as u32))
        .map(|ndt| ndt.and_utc().timestamp())
});
```

Or if keeping the approximation, add a clarifying comment:
```diff
 // Rough approximation (ignoring leap years/seconds for listing purposes)
+// WARNING: Can be off by days/weeks; for display sorting only, not precision
 use ((year - 1970) * 365 * 86400)
```
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driven/extractor/zip_handler.rs` around lines 167 - 183, Replace the manual rough timestamp math for modified_timestamp (currently using entry.last_modified() with year/month/day/hour/minute/second and custom arithmetic) with proper date handling: parse the zip::DateTime into a chrono NaiveDate/NaiveDateTime (using dt.year(), dt.month(), dt.day(), dt.hour(), dt.minute(), dt.second()), then convert to a UTC timestamp (or chrono DateTime<Utc>.timestamp()) and set modified_timestamp accordingly; ensure you handle invalid dates by mapping None when NaiveDate::from_ymd_opt or and_hms_opt returns None.src-tauri/src/adapters/driven/extractor/segmentation.rs (1)
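If pulling in `chrono` is undesirable, an accurate conversion is also possible with only the standard library via the well-known days-from-civil algorithm. The helper below is a hypothetical std-only sketch (`unix_timestamp` is not part of the zip crate's API), shown to illustrate that real month lengths and leap years can be handled without the 30-day approximation:

```rust
// Sketch: convert calendar fields to a Unix timestamp using the
// days-from-civil algorithm (Howard Hinnant), which handles real month
// lengths and Gregorian leap years.
fn unix_timestamp(year: i64, month: i64, day: i64, hour: i64, min: i64, sec: i64) -> i64 {
    // Shift to a March-based year so leap days fall at the end.
    let y = if month <= 2 { year - 1 } else { year };
    let era = (if y >= 0 { y } else { y - 399 }) / 400;
    let yoe = y - era * 400; // year of era, [0, 399]
    let mp = if month > 2 { month - 3 } else { month + 9 }; // March-based month, [0, 11]
    let doy = (153 * mp + 2) / 5 + day - 1; // day of year, [0, 365]
    let doe = yoe * 365 + yoe / 4 - yoe / 100 + doy; // day of era, [0, 146096]
    let days = era * 146097 + doe - 719468; // days since 1970-01-01
    days * 86400 + hour * 3600 + min * 60 + sec
}
```

For example, 2024-06-15 00:00:00 UTC maps to 1718409600, where the 30-day-month formula would be off by more than a week.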
271-276: Redundant `read_dir` call — directory is opened twice.

Lines 271-272 call `read_dir(parent)` to check for errors but discard the result, then immediately call it again on line 274. This doubles filesystem operations.

♻️ Proposed fix to remove duplicate call
```diff
 fn scan_directory<F>(
     parent: &Path,
     parts: &mut Vec<PathBuf>,
     predicate: F,
 ) -> Result<(), DomainError>
 where
     F: Fn(&str) -> bool,
 {
-    std::fs::read_dir(parent)
-        .map_err(|e| DomainError::StorageError(format!("Failed to read directory: {}", e)))?;
-
     for entry in std::fs::read_dir(parent)
         .map_err(|e| DomainError::StorageError(format!("Failed to read directory: {}", e)))?
     {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driven/extractor/segmentation.rs` around lines 271 - 276, The code calls std::fs::read_dir(parent) twice (once to check for errors and again to iterate); remove the duplicate by calling std::fs::read_dir(parent).map_err(|e| DomainError::StorageError(...)) once, store the result in a local (e.g., dir_iter) and then use that iterator in the for entry in dir_iter loop; update references around the loop in segmentation.rs so error mapping happens a single time and the same iterator is consumed.
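A self-contained sketch of the single-pass shape (with `DomainError` replaced by a plain `String` error for illustration; the filter predicate and sorting mirror how multipart scanning is described, not the project's exact code):

```rust
use std::path::{Path, PathBuf};

// Sketch: open the directory exactly once, keep the iterator, and
// collect entries whose file name satisfies the predicate.
fn scan_directory<F>(parent: &Path, parts: &mut Vec<PathBuf>, predicate: F) -> Result<(), String>
where
    F: Fn(&str) -> bool,
{
    let dir_iter = std::fs::read_dir(parent)
        .map_err(|e| format!("Failed to read directory: {}", e))?;
    for entry in dir_iter {
        let entry = entry.map_err(|e| format!("Failed to read entry: {}", e))?;
        if predicate(&entry.file_name().to_string_lossy()) {
            parts.push(entry.path());
        }
    }
    parts.sort(); // deterministic ordering for multipart sequences
    Ok(())
}
```

The `?` on the single `read_dir` call already surfaces the same error the discarded first call was checking for, so nothing is lost by removing it.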
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: a267afd2-09f0-46a0-8c02-ba8f60da165c
⛔ Files ignored due to path filters (3)
- `coverage/favicon.png` is excluded by `!**/*.png`
- `coverage/sort-arrow-sprite.png` is excluded by `!**/*.png`
- `src-tauri/Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (92)
- `AGENT.md`
- `CHANGELOG.md`
- `coverage/api/client.ts.html`
- `coverage/api/hooks.ts.html`
- `coverage/api/index.html`
- `coverage/api/queries.ts.html`
- `coverage/base.css`
- `coverage/block-navigation.js`
- `coverage/clover.xml`
- `coverage/components/ui/badge.tsx.html`
- `coverage/components/ui/button.tsx.html`
- `coverage/components/ui/checkbox.tsx.html`
- `coverage/components/ui/dropdown-menu.tsx.html`
- `coverage/components/ui/index.html`
- `coverage/components/ui/input.tsx.html`
- `coverage/components/ui/progress.tsx.html`
- `coverage/components/ui/tooltip.tsx.html`
- `coverage/coverage-final.json`
- `coverage/hooks/index.html`
- `coverage/hooks/useDownloadEvents.ts.html`
- `coverage/hooks/useDownloadProgress.ts.html`
- `coverage/hooks/useTauriEvent.ts.html`
- `coverage/index.html`
- `coverage/layouts/AppLayout.tsx.html`
- `coverage/layouts/Sidebar.tsx.html`
- `coverage/layouts/StatusBar.tsx.html`
- `coverage/layouts/index.html`
- `coverage/lib/format.ts.html`
- `coverage/lib/index.html`
- `coverage/lib/utils.ts.html`
- `coverage/prettify.css`
- `coverage/prettify.js`
- `coverage/sorter.js`
- `coverage/stores/downloadStore.ts.html`
- `coverage/stores/index.html`
- `coverage/stores/layout-store.ts.html`
- `coverage/stores/settingsStore.ts.html`
- `coverage/stores/uiStore.ts.html`
- `coverage/theme/index.html`
- `coverage/theme/theme-provider.tsx.html`
- `coverage/theme/useTheme.ts.html`
- `coverage/types/index.html`
- `coverage/types/layout.ts.html`
- `coverage/views/DownloadsView/ActionsBar.tsx.html`
- `coverage/views/DownloadsView/DownloadsTable.tsx.html`
- `coverage/views/DownloadsView/DownloadsView.tsx.html`
- `coverage/views/DownloadsView/EtaCell.tsx.html`
- `coverage/views/DownloadsView/FilterBar.tsx.html`
- `coverage/views/DownloadsView/ProgressCell.tsx.html`
- `coverage/views/DownloadsView/SearchBar.tsx.html`
- `coverage/views/DownloadsView/SpeedCell.tsx.html`
- `coverage/views/DownloadsView/StateIndicator.tsx.html`
- `coverage/views/DownloadsView/index.html`
- `src-tauri/Cargo.toml`
- `src-tauri/src/adapters/driven/extractor/detector.rs`
- `src-tauri/src/adapters/driven/extractor/mod.rs`
- `src-tauri/src/adapters/driven/extractor/rar_handler.rs`
- `src-tauri/src/adapters/driven/extractor/segmentation.rs`
- `src-tauri/src/adapters/driven/extractor/seven_z_handler.rs`
- `src-tauri/src/adapters/driven/extractor/tar_handler.rs`
- `src-tauri/src/adapters/driven/extractor/zip_handler.rs`
- `src-tauri/src/adapters/driven/mod.rs`
- `src-tauri/src/application/command_bus.rs`
- `src-tauri/src/application/commands/cancel_download.rs`
- `src-tauri/src/application/commands/extract_archive.rs`
- `src-tauri/src/application/commands/install_plugin.rs`
- `src-tauri/src/application/commands/mod.rs`
- `src-tauri/src/application/commands/pause_all.rs`
- `src-tauri/src/application/commands/pause_download.rs`
- `src-tauri/src/application/commands/remove_download.rs`
- `src-tauri/src/application/commands/resume_all.rs`
- `src-tauri/src/application/commands/resume_download.rs`
- `src-tauri/src/application/commands/retry_download.rs`
- `src-tauri/src/application/commands/set_priority.rs`
- `src-tauri/src/application/commands/start_download.rs`
- `src-tauri/src/application/commands/toggle_clipboard.rs`
- `src-tauri/src/application/commands/toggle_plugin.rs`
- `src-tauri/src/application/commands/uninstall_plugin.rs`
- `src-tauri/src/application/commands/update_config.rs`
- `src-tauri/src/application/queries/count_by_state.rs`
- `src-tauri/src/application/queries/get_download_detail.rs`
- `src-tauri/src/application/queries/get_downloads.rs`
- `src-tauri/src/application/queries/list_archive_contents.rs`
- `src-tauri/src/application/queries/list_plugins.rs`
- `src-tauri/src/application/queries/mod.rs`
- `src-tauri/src/application/query_bus.rs`
- `src-tauri/src/domain/model/archive.rs`
- `src-tauri/src/domain/model/mod.rs`
- `src-tauri/src/domain/ports/driven/archive_extractor.rs`
- `src-tauri/src/domain/ports/driven/mod.rs`
- `src-tauri/src/domain/ports/driven/tests.rs`
- `src/routes/__tests__/routes.test.tsx`
|
|
||
| <!doctype html> | ||
| <html lang="en"> | ||
|
|
||
| <head> | ||
| <title>Code coverage report for components/ui/tooltip.tsx</title> | ||
| <meta charset="utf-8" /> | ||
| <link rel="stylesheet" href="../../prettify.css" /> | ||
| <link rel="stylesheet" href="../../base.css" /> | ||
| <link rel="shortcut icon" type="image/x-icon" href="../../favicon.png" /> | ||
| <meta name="viewport" content="width=device-width, initial-scale=1" /> | ||
| <style type='text/css'> | ||
| .coverage-summary .sorter { | ||
| background-image: url(../../sort-arrow-sprite.png); | ||
| } | ||
| </style> | ||
| </head> | ||
|
|
||
| <body> | ||
| <div class='wrapper'> | ||
| <div class='pad1'> | ||
| <h1><a href="../../index.html">All files</a> / <a href="index.html">components/ui</a> tooltip.tsx</h1> | ||
| <div class='clearfix'> | ||
|
|
||
| <div class='fl pad1y space-right2'> | ||
| <span class="strong">100% </span> | ||
| <span class="quiet">Statements</span> | ||
| <span class='fraction'>4/4</span> | ||
| </div> | ||
|
|
||
|
|
||
| <div class='fl pad1y space-right2'> | ||
| <span class="strong">100% </span> | ||
| <span class="quiet">Branches</span> | ||
| <span class='fraction'>2/2</span> | ||
| </div> | ||
|
|
||
|
|
||
| <div class='fl pad1y space-right2'> | ||
| <span class="strong">100% </span> | ||
| <span class="quiet">Functions</span> | ||
| <span class='fraction'>4/4</span> | ||
| </div> | ||
|
|
||
|
|
||
| <div class='fl pad1y space-right2'> | ||
| <span class="strong">100% </span> | ||
| <span class="quiet">Lines</span> | ||
| <span class='fraction'>4/4</span> | ||
| </div> | ||
|
|
||
|
|
||
| </div> | ||
| <p class="quiet"> | ||
| Press <em>n</em> or <em>j</em> to go to the next uncovered block, <em>b</em>, <em>p</em> or <em>k</em> for the previous block. | ||
| </p> | ||
| <template id="filterTemplate"> | ||
| <div class="quiet"> | ||
| Filter: | ||
| <input type="search" id="fileSearch"> | ||
| </div> | ||
| </template> | ||
| </div> | ||
| <div class='status-line high'></div> | ||
| <pre><table class="coverage"> | ||
| <tr><td class="line-count quiet"><a name='L1'></a><a href='#L1'>1</a> | ||
| <a name='L2'></a><a href='#L2'>2</a> | ||
| <a name='L3'></a><a href='#L3'>3</a> | ||
| <a name='L4'></a><a href='#L4'>4</a> | ||
| <a name='L5'></a><a href='#L5'>5</a> | ||
| <a name='L6'></a><a href='#L6'>6</a> | ||
| <a name='L7'></a><a href='#L7'>7</a> | ||
| <a name='L8'></a><a href='#L8'>8</a> | ||
| <a name='L9'></a><a href='#L9'>9</a> | ||
| <a name='L10'></a><a href='#L10'>10</a> | ||
| <a name='L11'></a><a href='#L11'>11</a> | ||
| <a name='L12'></a><a href='#L12'>12</a> | ||
| <a name='L13'></a><a href='#L13'>13</a> | ||
| <a name='L14'></a><a href='#L14'>14</a> | ||
| <a name='L15'></a><a href='#L15'>15</a> | ||
| <a name='L16'></a><a href='#L16'>16</a> | ||
| <a name='L17'></a><a href='#L17'>17</a> | ||
| <a name='L18'></a><a href='#L18'>18</a> | ||
| <a name='L19'></a><a href='#L19'>19</a> | ||
| <a name='L20'></a><a href='#L20'>20</a> | ||
| <a name='L21'></a><a href='#L21'>21</a> | ||
| <a name='L22'></a><a href='#L22'>22</a> | ||
| <a name='L23'></a><a href='#L23'>23</a> | ||
| <a name='L24'></a><a href='#L24'>24</a> | ||
| <a name='L25'></a><a href='#L25'>25</a> | ||
| <a name='L26'></a><a href='#L26'>26</a> | ||
| <a name='L27'></a><a href='#L27'>27</a> | ||
| <a name='L28'></a><a href='#L28'>28</a> | ||
| <a name='L29'></a><a href='#L29'>29</a> | ||
| <a name='L30'></a><a href='#L30'>30</a> | ||
| <a name='L31'></a><a href='#L31'>31</a> | ||
| <a name='L32'></a><a href='#L32'>32</a> | ||
| <a name='L33'></a><a href='#L33'>33</a> | ||
| <a name='L34'></a><a href='#L34'>34</a> | ||
| <a name='L35'></a><a href='#L35'>35</a> | ||
| <a name='L36'></a><a href='#L36'>36</a> | ||
| <a name='L37'></a><a href='#L37'>37</a> | ||
| <a name='L38'></a><a href='#L38'>38</a> | ||
| <a name='L39'></a><a href='#L39'>39</a> | ||
| <a name='L40'></a><a href='#L40'>40</a> | ||
| <a name='L41'></a><a href='#L41'>41</a> | ||
| <a name='L42'></a><a href='#L42'>42</a> | ||
| <a name='L43'></a><a href='#L43'>43</a> | ||
| <a name='L44'></a><a href='#L44'>44</a> | ||
| <a name='L45'></a><a href='#L45'>45</a> | ||
| <a name='L46'></a><a href='#L46'>46</a> | ||
| <a name='L47'></a><a href='#L47'>47</a> | ||
| <a name='L48'></a><a href='#L48'>48</a> | ||
| <a name='L49'></a><a href='#L49'>49</a> | ||
| <a name='L50'></a><a href='#L50'>50</a> | ||
| <a name='L51'></a><a href='#L51'>51</a> | ||
| <a name='L52'></a><a href='#L52'>52</a> | ||
| <a name='L53'></a><a href='#L53'>53</a> | ||
| <a name='L54'></a><a href='#L54'>54</a> | ||
| <a name='L55'></a><a href='#L55'>55</a> | ||
| <a name='L56'></a><a href='#L56'>56</a></td><td class="line-coverage quiet"><span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-yes">11x</span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-yes">22x</span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-yes">22x</span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-yes">22x</span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span></td><td class="text"><pre class="prettyprint lang-js">import * as React from "react" | ||
| import { Tooltip as TooltipPrimitive } from "radix-ui" | ||
| | ||
| import { cn } from "@/lib/utils" | ||
| | ||
| function TooltipProvider({ | ||
| delayDuration = 0, | ||
| ...props | ||
| }: React.ComponentProps<typeof TooltipPrimitive.Provider>) { | ||
| return ( | ||
| <TooltipPrimitive.Provider | ||
| data-slot="tooltip-provider" | ||
| delayDuration={delayDuration} | ||
| {...props} | ||
| /> | ||
| ) | ||
| } | ||
| | ||
| function Tooltip({ | ||
| ...props | ||
| }: React.ComponentProps<typeof TooltipPrimitive.Root>) { | ||
| return <TooltipPrimitive.Root data-slot="tooltip" {...props} /> | ||
| } | ||
| | ||
| function TooltipTrigger({ | ||
| ...props | ||
| }: React.ComponentProps<typeof TooltipPrimitive.Trigger>) { | ||
| return <TooltipPrimitive.Trigger data-slot="tooltip-trigger" {...props} /> | ||
| } | ||
| | ||
| function TooltipContent({ | ||
| className, | ||
| sideOffset = 0, | ||
| children, | ||
| ...props | ||
| }: React.ComponentProps<typeof TooltipPrimitive.Content>) { | ||
| return ( | ||
| <TooltipPrimitive.Portal> | ||
| <TooltipPrimitive.Content | ||
| data-slot="tooltip-content" | ||
| sideOffset={sideOffset} | ||
| className={cn( | ||
| "z-50 w-fit origin-(--radix-tooltip-content-transform-origin) animate-in rounded-md bg-foreground px-3 py-1.5 text-xs text-balance text-background fade-in-0 zoom-in-95 data-[side=bottom]:slide-in-from-top-2 data-[side=left]:slide-in-from-right-2 data-[side=right]:slide-in-from-left-2 data-[side=top]:slide-in-from-bottom-2 data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=closed]:zoom-out-95", | ||
| className | ||
| )} | ||
| {...props} | ||
| > | ||
| {children} | ||
| <TooltipPrimitive.Arrow className="z-50 size-2.5 translate-y-[calc(-50%_-_2px)] rotate-45 rounded-[2px] bg-foreground fill-foreground" /> | ||
| </TooltipPrimitive.Content> | ||
| </TooltipPrimitive.Portal> | ||
| ) | ||
| } | ||
| | ||
| export { Tooltip, TooltipTrigger, TooltipContent, TooltipProvider } | ||
| </pre></td></tr></table></pre> | ||
|
|
||
| <div class='push'></div><!-- for sticky footer --> | ||
| </div><!-- /wrapper --> | ||
| <div class='footer quiet pad2 space-top1 center small'> | ||
| Code coverage generated by | ||
| <a href="https://istanbul.js.org/" target="_blank" rel="noopener noreferrer">istanbul</a> | ||
| at 2026-04-10T13:55:05.026Z | ||
| </div> | ||
| <script src="../../prettify.js"></script> | ||
| <script> | ||
| window.onload = function () { | ||
| prettyPrint(); | ||
| }; | ||
| </script> | ||
| <script src="../../sorter.js"></script> | ||
| <script src="../../block-navigation.js"></script> | ||
| </body> | ||
| </html> | ||
|
No newline at end of file |
Generated coverage artifact should not be committed.
This Istanbul-generated HTML file belongs in .gitignore, not source control.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@coverage/components/ui/tooltip.tsx.html` around lines 1 - 250: the PR
accidentally commits an Istanbul-generated coverage artifact
(coverage/components/ui/tooltip.tsx.html). Remove the generated HTML from the
commit and keep generated coverage files out of version control going forward:
add the appropriate coverage path (e.g., coverage/ or coverage/components/) to
.gitignore, then update the commit (unstage/remove the file and amend or create
a new commit) so that only source files (such as tooltip.tsx and the Tooltip,
TooltipTrigger, TooltipContent, TooltipProvider symbols) remain in version
control.
coverage/stores/layout-store.ts.html
Outdated
|
|
||
| <!doctype html> | ||
| <html lang="en"> | ||
|
|
||
| <head> | ||
| <title>Code coverage report for stores/layout-store.ts</title> | ||
| <meta charset="utf-8" /> | ||
| <link rel="stylesheet" href="../prettify.css" /> | ||
| <link rel="stylesheet" href="../base.css" /> | ||
| <link rel="shortcut icon" type="image/x-icon" href="../favicon.png" /> | ||
| <meta name="viewport" content="width=device-width, initial-scale=1" /> | ||
| <style type='text/css'> | ||
| .coverage-summary .sorter { | ||
| background-image: url(../sort-arrow-sprite.png); | ||
| } | ||
| </style> | ||
| </head> | ||
|
|
||
| <body> | ||
| <div class='wrapper'> | ||
| <div class='pad1'> | ||
| <h1><a href="../index.html">All files</a> / <a href="index.html">stores</a> layout-store.ts</h1> | ||
| <div class='clearfix'> | ||
|
|
||
| <div class='fl pad1y space-right2'> | ||
| <span class="strong">50% </span> | ||
| <span class="quiet">Statements</span> | ||
| <span class='fraction'>2/4</span> | ||
| </div> | ||
|
|
||
|
|
||
| <div class='fl pad1y space-right2'> | ||
| <span class="strong">100% </span> | ||
| <span class="quiet">Branches</span> | ||
| <span class='fraction'>0/0</span> | ||
| </div> | ||
|
|
||
|
|
||
| <div class='fl pad1y space-right2'> | ||
| <span class="strong">33.33% </span> | ||
| <span class="quiet">Functions</span> | ||
| <span class='fraction'>1/3</span> | ||
| </div> | ||
|
|
||
|
|
||
| <div class='fl pad1y space-right2'> | ||
| <span class="strong">50% </span> | ||
| <span class="quiet">Lines</span> | ||
| <span class='fraction'>1/2</span> | ||
| </div> | ||
|
|
||
|
|
||
| </div> | ||
| <p class="quiet"> | ||
| Press <em>n</em> or <em>j</em> to go to the next uncovered block, <em>b</em>, <em>p</em> or <em>k</em> for the previous block. | ||
| </p> | ||
| <template id="filterTemplate"> | ||
| <div class="quiet"> | ||
| Filter: | ||
| <input type="search" id="fileSearch"> | ||
| </div> | ||
| </template> | ||
| </div> | ||
| <div class='status-line medium'></div> | ||
| <pre><table class="coverage"> | ||
| <tr><td class="line-count quiet"><a name='L1'></a><a href='#L1'>1</a> | ||
| <a name='L2'></a><a href='#L2'>2</a> | ||
| <a name='L3'></a><a href='#L3'>3</a> | ||
| <a name='L4'></a><a href='#L4'>4</a> | ||
| <a name='L5'></a><a href='#L5'>5</a> | ||
| <a name='L6'></a><a href='#L6'>6</a> | ||
| <a name='L7'></a><a href='#L7'>7</a> | ||
| <a name='L8'></a><a href='#L8'>8</a> | ||
| <a name='L9'></a><a href='#L9'>9</a> | ||
| <a name='L10'></a><a href='#L10'>10</a> | ||
| <a name='L11'></a><a href='#L11'>11</a> | ||
| <a name='L12'></a><a href='#L12'>12</a> | ||
| <a name='L13'></a><a href='#L13'>13</a> | ||
| <a name='L14'></a><a href='#L14'>14</a> | ||
| <a name='L15'></a><a href='#L15'>15</a> | ||
| <a name='L16'></a><a href='#L16'>16</a> | ||
| <a name='L17'></a><a href='#L17'>17</a> | ||
| <a name='L18'></a><a href='#L18'>18</a></td><td class="line-coverage quiet"><span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-yes">2x</span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-no"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span></td><td class="text"><pre class="prettyprint lang-js">import { create } from 'zustand'; | ||
| | ||
| interface LayoutState { | ||
| sidebarCollapsed: boolean; | ||
| toggleSidebar: () => void; | ||
| speedLimit: number; | ||
| freeSpace: string; | ||
| appVersion: string; | ||
| } | ||
| | ||
| export const useLayoutStore = create<LayoutState>((set) => ({ | ||
| sidebarCollapsed: false, | ||
| toggleSidebar: <span class="fstat-no" title="function not covered" >() => <span class="cstat-no" title="statement not covered" >s</span>et(<span class="fstat-no" title="function not covered" >(s</span>) => (<span class="cstat-no" title="statement not covered" >{ sidebarCollapsed: !s.sidebarCollapsed }))</span>,</span> | ||
| speedLimit: 0, | ||
| freeSpace: '-- GB', | ||
| appVersion: '0.1.0', | ||
| })); | ||
| </pre></td></tr></table></pre> | ||
|
|
||
| <div class='push'></div><!-- for sticky footer --> | ||
| </div><!-- /wrapper --> | ||
| <div class='footer quiet pad2 space-top1 center small'> | ||
| Code coverage generated by | ||
| <a href="https://istanbul.js.org/" target="_blank" rel="noopener noreferrer">istanbul</a> | ||
| at 2026-04-10T13:55:05.026Z | ||
| </div> | ||
| <script src="../prettify.js"></script> | ||
| <script> | ||
| window.onload = function () { | ||
| prettyPrint(); | ||
| }; | ||
| </script> | ||
| <script src="../sorter.js"></script> | ||
| <script src="../block-navigation.js"></script> | ||
| </body> | ||
| </html> | ||
|
No newline at end of file |
Build artifact should be excluded from repository.
This Istanbul-generated coverage report should not be committed. Add the entire coverage/ directory to .gitignore.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@coverage/stores/layout-store.ts.html` around lines 1 - 136: this coverage
HTML file (the generated report for layout-store.ts, referencing useLayoutStore
and toggleSidebar) is a build artifact and must be removed from source control.
Add the coverage directory name "coverage" to .gitignore, remove the committed
coverage files from the index (git rm --cached ... or equivalent), and commit
the .gitignore update so future Istanbul reports aren't committed.
|
|
||
| <!doctype html> | ||
| <html lang="en"> | ||
|
|
||
| <head> | ||
| <title>Code coverage report for views/DownloadsView/DownloadsView.tsx</title> | ||
| <meta charset="utf-8" /> | ||
| <link rel="stylesheet" href="../../prettify.css" /> | ||
| <link rel="stylesheet" href="../../base.css" /> | ||
| <link rel="shortcut icon" type="image/x-icon" href="../../favicon.png" /> | ||
| <meta name="viewport" content="width=device-width, initial-scale=1" /> | ||
| <style type='text/css'> | ||
| .coverage-summary .sorter { | ||
| background-image: url(../../sort-arrow-sprite.png); | ||
| } | ||
| </style> | ||
| </head> | ||
|
|
||
| <body> | ||
| <div class='wrapper'> | ||
| <div class='pad1'> | ||
| <h1><a href="../../index.html">All files</a> / <a href="index.html">views/DownloadsView</a> DownloadsView.tsx</h1> | ||
| <div class='clearfix'> | ||
|
|
||
| <div class='fl pad1y space-right2'> | ||
| <span class="strong">100% </span> | ||
| <span class="quiet">Statements</span> | ||
| <span class='fraction'>6/6</span> | ||
| </div> | ||
|
|
||
|
|
||
| <div class='fl pad1y space-right2'> | ||
| <span class="strong">100% </span> | ||
| <span class="quiet">Branches</span> | ||
| <span class='fraction'>2/2</span> | ||
| </div> | ||
|
|
||
|
|
||
| <div class='fl pad1y space-right2'> | ||
| <span class="strong">100% </span> | ||
| <span class="quiet">Functions</span> | ||
| <span class='fraction'>1/1</span> | ||
| </div> | ||
|
|
||
|
|
||
| <div class='fl pad1y space-right2'> | ||
| <span class="strong">100% </span> | ||
| <span class="quiet">Lines</span> | ||
| <span class='fraction'>6/6</span> | ||
| </div> | ||
|
|
||
|
|
||
| </div> | ||
| <p class="quiet"> | ||
| Press <em>n</em> or <em>j</em> to go to the next uncovered block, <em>b</em>, <em>p</em> or <em>k</em> for the previous block. | ||
| </p> | ||
| <template id="filterTemplate"> | ||
| <div class="quiet"> | ||
| Filter: | ||
| <input type="search" id="fileSearch"> | ||
| </div> | ||
| </template> | ||
| </div> | ||
| <div class='status-line high'></div> | ||
| <pre><table class="coverage"> | ||
| <tr><td class="line-count quiet"><a name='L1'></a><a href='#L1'>1</a> | ||
| <a name='L2'></a><a href='#L2'>2</a> | ||
| <a name='L3'></a><a href='#L3'>3</a> | ||
| <a name='L4'></a><a href='#L4'>4</a> | ||
| <a name='L5'></a><a href='#L5'>5</a> | ||
| <a name='L6'></a><a href='#L6'>6</a> | ||
| <a name='L7'></a><a href='#L7'>7</a> | ||
| <a name='L8'></a><a href='#L8'>8</a> | ||
| <a name='L9'></a><a href='#L9'>9</a> | ||
| <a name='L10'></a><a href='#L10'>10</a> | ||
| <a name='L11'></a><a href='#L11'>11</a> | ||
| <a name='L12'></a><a href='#L12'>12</a> | ||
| <a name='L13'></a><a href='#L13'>13</a> | ||
| <a name='L14'></a><a href='#L14'>14</a> | ||
| <a name='L15'></a><a href='#L15'>15</a> | ||
| <a name='L16'></a><a href='#L16'>16</a> | ||
| <a name='L17'></a><a href='#L17'>17</a> | ||
| <a name='L18'></a><a href='#L18'>18</a> | ||
| <a name='L19'></a><a href='#L19'>19</a> | ||
| <a name='L20'></a><a href='#L20'>20</a> | ||
| <a name='L21'></a><a href='#L21'>21</a> | ||
| <a name='L22'></a><a href='#L22'>22</a> | ||
| <a name='L23'></a><a href='#L23'>23</a> | ||
| <a name='L24'></a><a href='#L24'>24</a> | ||
| <a name='L25'></a><a href='#L25'>25</a> | ||
| <a name='L26'></a><a href='#L26'>26</a> | ||
| <a name='L27'></a><a href='#L27'>27</a> | ||
| <a name='L28'></a><a href='#L28'>28</a> | ||
| <a name='L29'></a><a href='#L29'>29</a> | ||
| <a name='L30'></a><a href='#L30'>30</a> | ||
| <a name='L31'></a><a href='#L31'>31</a> | ||
| <a name='L32'></a><a href='#L32'>32</a> | ||
| <a name='L33'></a><a href='#L33'>33</a> | ||
| <a name='L34'></a><a href='#L34'>34</a> | ||
| <a name='L35'></a><a href='#L35'>35</a> | ||
| <a name='L36'></a><a href='#L36'>36</a> | ||
| <a name='L37'></a><a href='#L37'>37</a> | ||
| <a name='L38'></a><a href='#L38'>38</a> | ||
| <a name='L39'></a><a href='#L39'>39</a> | ||
| <a name='L40'></a><a href='#L40'>40</a> | ||
| <a name='L41'></a><a href='#L41'>41</a> | ||
| <a name='L42'></a><a href='#L42'>42</a> | ||
| <a name='L43'></a><a href='#L43'>43</a> | ||
| <a name='L44'></a><a href='#L44'>44</a> | ||
| <a name='L45'></a><a href='#L45'>45</a> | ||
| <a name='L46'></a><a href='#L46'>46</a> | ||
| <a name='L47'></a><a href='#L47'>47</a> | ||
| <a name='L48'></a><a href='#L48'>48</a></td><td class="line-coverage quiet"><span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-yes">3x</span> | ||
| <span class="cline-any cline-yes">3x</span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-yes">3x</span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-yes">3x</span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-yes">3x</span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-yes">3x</span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span> | ||
| <span class="cline-any cline-neutral"> </span></td><td class="text"><pre class="prettyprint lang-js">import { useState } from 'react'; | ||
| import { useTauriQuery } from '@/api/hooks'; | ||
| import { downloadQueries } from '@/api/queries'; | ||
| import { useDownloadProgress } from '@/hooks/useDownloadProgress'; | ||
| import type { DownloadView } from '@/types/download'; | ||
| import type { FilterType } from './types'; | ||
| import { SearchBar } from './SearchBar'; | ||
| import { FilterBar } from './FilterBar'; | ||
| import { ActionsBar } from './ActionsBar'; | ||
| import { DownloadsTable } from './DownloadsTable'; | ||
| | ||
| export function DownloadsView() { | ||
| const [filter, setFilter] = useState<FilterType>('all'); | ||
| const [searchQuery, setSearchQuery] = useState(''); | ||
| | ||
| useDownloadProgress(); | ||
| | ||
| const { data: downloads, isLoading } = useTauriQuery<DownloadView[]>( | ||
| 'download_list', | ||
| undefined, | ||
| { queryKey: downloadQueries.lists(), staleTime: 1000 }, | ||
| ); | ||
| | ||
| const { data: countByState } = useTauriQuery<Record<string, number>>( | ||
| 'download_count_by_state', | ||
| undefined, | ||
| { queryKey: downloadQueries.countByState(), staleTime: 2000 }, | ||
| ); | ||
| | ||
| return ( | ||
| <div className="flex h-full flex-col gap-3 p-4"> | ||
| <SearchBar value={searchQuery} onChange={setSearchQuery} /> | ||
| <FilterBar | ||
| activeFilter={filter} | ||
| onFilterChange={setFilter} | ||
| counts={countByState} | ||
| /> | ||
| <ActionsBar /> | ||
| <DownloadsTable | ||
| downloads={downloads ?? []} | ||
| isLoading={isLoading} | ||
| filter={filter} | ||
| searchQuery={searchQuery} | ||
| /> | ||
| </div> | ||
| ); | ||
| } | ||
| </pre></td></tr></table></pre> | ||
|
|
||
| <div class='push'></div><!-- for sticky footer --> | ||
| </div><!-- /wrapper --> | ||
| <div class='footer quiet pad2 space-top1 center small'> | ||
| Code coverage generated by | ||
| <a href="https://istanbul.js.org/" target="_blank" rel="noopener noreferrer">istanbul</a> | ||
| at 2026-04-10T13:55:05.026Z | ||
| </div> | ||
| <script src="../../prettify.js"></script> | ||
| <script> | ||
| window.onload = function () { | ||
| prettyPrint(); | ||
| }; | ||
| </script> | ||
| <script src="../../sorter.js"></script> | ||
| <script src="../../block-navigation.js"></script> | ||
| </body> | ||
| </html> | ||
|
No newline at end of file |
Auto-generated coverage report should be excluded.
This file is a build artifact and should not be committed to the repository. Add coverage/ to .gitignore.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@coverage/views/DownloadsView/DownloadsView.tsx.html` around lines 1 - 226:
the repo contains an auto-generated coverage artifact
(coverage/views/DownloadsView/DownloadsView.tsx.html, referencing the
DownloadsView component) that should not be committed. Add coverage/ to
.gitignore, remove the tracked artifact from the index if already committed
(unstage/remove the cached file), commit the .gitignore change, and ensure
future CI/build outputs under coverage/ are never added (verify that
coverage/views/DownloadsView/DownloadsView.tsx.html no longer appears in git
status).
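The verification step at the end of this prompt can be checked mechanically. A minimal sketch, again in a scratch repo with a hypothetical artifact file, assuming `coverage/` is already listed in `.gitignore`:

```shell
# Verify that an ignored coverage artifact never reaches `git status`.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
printf 'coverage/\n' > .gitignore
mkdir -p coverage/views/DownloadsView
echo '<html></html>' > coverage/views/DownloadsView/DownloadsView.tsx.html

# check-ignore exits 0 when the path matches a .gitignore pattern.
git check-ignore -q coverage/views/DownloadsView/DownloadsView.tsx.html && echo ignored

# Ignored, untracked files produce no porcelain status output.
git status --porcelain coverage/
```

`git check-ignore` is the direct way to confirm a pattern covers a given path; `git status --porcelain` stays empty for ignored files unless `--ignored` is passed.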
|
|
||
| <!doctype html> | ||
| <html lang="en"> | ||
|
|
||
| <head> | ||
| <title>Code coverage report for views/DownloadsView/ProgressCell.tsx</title> | ||
| <meta charset="utf-8" /> | ||
| <link rel="stylesheet" href="../../prettify.css" /> | ||
| <link rel="stylesheet" href="../../base.css" /> | ||
| <link rel="shortcut icon" type="image/x-icon" href="../../favicon.png" /> | ||
| <meta name="viewport" content="width=device-width, initial-scale=1" /> | ||
| <style type='text/css'> | ||
| .coverage-summary .sorter { | ||
| background-image: url(../../sort-arrow-sprite.png); | ||
| } | ||
| </style> | ||
| </head> | ||
|
|
||
| <body> | ||
| <div class='wrapper'> | ||
| <div class='pad1'> | ||
| <h1><a href="../../index.html">All files</a> / <a href="index.html">views/DownloadsView</a> ProgressCell.tsx</h1> | ||
| <div class='clearfix'> | ||
|
|
||
| <div class='fl pad1y space-right2'> | ||
| <span class="strong">100% </span> | ||
| <span class="quiet">Statements</span> | ||
| <span class='fraction'>4/4</span> | ||
| </div> | ||
|
|
||
|
|
||
| <div class='fl pad1y space-right2'> | ||
| <span class="strong">25% </span> | ||
| <span class="quiet">Branches</span> | ||
| <span class='fraction'>1/4</span> | ||
| </div> | ||
|
|
||
|
|
||
| <div class='fl pad1y space-right2'> | ||
| <span class="strong">100% </span> | ||
| <span class="quiet">Functions</span> | ||
| <span class='fraction'>2/2</span> | ||
| </div> | ||
|
|
||
|
|
||
| <div class='fl pad1y space-right2'> | ||
| <span class="strong">100% </span> | ||
| <span class="quiet">Lines</span> | ||
| <span class='fraction'>3/3</span> | ||
| </div> | ||
|
|
||
|
|
||
| </div> | ||
| <p class="quiet"> | ||
| Press <em>n</em> or <em>j</em> to go to the next uncovered block, <em>b</em>, <em>p</em> or <em>k</em> for the previous block. | ||
| </p> | ||
| <template id="filterTemplate"> | ||
| <div class="quiet"> | ||
| Filter: | ||
| <input type="search" id="fileSearch"> | ||
| </div> | ||
| </template> | ||
| </div> | ||
| <div class='status-line high'></div> | ||
| <pre><table class="coverage"> | ||
| <tr><td class="line-count quiet"><a name='L1'></a><a href='#L1'>1</a> | ||
| <a name='L2'></a><a href='#L2'>2</a> | ||
(diff shows the generated Istanbul coverage HTML for ProgressCell.tsx; line-number anchors and per-line hit-count cells, e.g. 66x/22x, omitted here)
import { Progress } from '@/components/ui/progress';
import { useDownloadStore } from '@/stores/downloadStore';
import type { DownloadView } from '@/types/download';

interface ProgressCellProps {
  download: DownloadView;
}

export function ProgressCell({ download }: ProgressCellProps) {
  const progress = useDownloadStore((s) => s.progressMap[download.id]);

  const percent = progress
    ? progress.totalBytes > 0
      ? Math.round((progress.downloadedBytes / progress.totalBytes) * 100)
      : 0
    : download.progressPercent;

  return (
    <div className="flex items-center gap-2">
      <Progress className="flex-1 h-2" value={percent} />
      <span className="text-xs font-mono">{percent}%</span>
    </div>
  );
}
(Istanbul coverage HTML footer: "Code coverage generated by istanbul at 2026-04-10T13:55:05.026Z", prettify/sorter/block-navigation scripts, no newline at end of file)
Generated artifact should not be in version control.
This Istanbul coverage report is auto-generated and should be excluded via .gitignore.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@coverage/views/DownloadsView/ProgressCell.tsx.html` around lines 1 - 157, The
coverage HTML artifact ProgressCell.tsx.html under the generated coverage output
was committed but should be ignored; remove the tracked generated file and add
the coverage output pattern to .gitignore. Specifically: delete/untrack the
generated ProgressCell.tsx.html from git (git rm --cached or remove and commit),
add an entry to .gitignore to ignore the coverage/ output (or the specific
generated HTML pattern), and commit the .gitignore change so future Istanbul
reports (like the one containing ProgressCell.tsx.html) are not added to version
control.
19 issues found across 95 files
Note: This PR contains a large number of files. cubic only reviews up to 75 files per PR, so some files may not have been reviewed. cubic prioritises the most important files to review.
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src-tauri/src/application/commands/extract_archive.rs">
<violation number="1" location="src-tauri/src/application/commands/extract_archive.rs:41">
P0: The command requires `Completed` state but then calls `start_extracting()`, which rejects `Completed`, so extraction always fails with an invalid transition.</violation>
<violation number="2" location="src-tauri/src/application/commands/extract_archive.rs:41">
P0: This transition call does not align with the allowed source state for extraction and will raise `InvalidTransition` for valid extraction requests. Align the command precondition and domain transition states.</violation>
<violation number="3" location="src-tauri/src/application/commands/extract_archive.rs:57">
P1: If extraction fails, the handler returns early and leaves the download persisted in `Extracting` state instead of transitioning to an error state.</violation>
<violation number="4" location="src-tauri/src/application/commands/extract_archive.rs:61">
P1: Avoid propagating extraction errors directly after setting `Extracting`; restore the download to a retryable/terminal state before returning the error, otherwise failed jobs remain stuck in `Extracting`.</violation>
</file>
<file name="src-tauri/src/adapters/driven/extractor/zip_handler.rs">
<violation number="1" location="src-tauri/src/adapters/driven/extractor/zip_handler.rs:168">
P2: `modified_timestamp` is computed with an approximate formula that produces incorrect Unix timestamps. Use an exact date-time conversion (or return `None`) instead of emitting inaccurate values.</violation>
<violation number="2" location="src-tauri/src/adapters/driven/extractor/zip_handler.rs:178">
P3: Subtract 1 from the month component in this approximation; the current formula adds an extra month offset for all ZIP timestamps.</violation>
</file>
<file name="src-tauri/src/adapters/driven/extractor/seven_z_handler.rs">
<violation number="1" location="src-tauri/src/adapters/driven/extractor/seven_z_handler.rs:71">
P1: Path traversal protection is incomplete: writing to `dest_dir.join(safe_path)` can still escape through existing symlinks in the destination tree.</violation>
</file>
<file name="src/routes/__tests__/routes.test.tsx">
<violation number="1" location="src/routes/__tests__/routes.test.tsx:7">
P2: These tests reimplement routing instead of exercising `App.tsx`, so real route wiring regressions can still pass.</violation>
</file>
<file name="src-tauri/src/adapters/driven/extractor/mod.rs">
<violation number="1" location="src-tauri/src/adapters/driven/extractor/mod.rs:127">
P1: `extract_recursive` re-enters recursion via `self.extract` and then recurses again, which duplicates nested extraction and breaks depth limiting.</violation>
</file>
<file name="src-tauri/src/adapters/driven/extractor/detector.rs">
<violation number="1" location="src-tauri/src/adapters/driven/extractor/detector.rs:37">
P2: ZIP magic-byte detection is incomplete and misses valid ZIP signatures such as empty archives (`PK\x05\x06`).</violation>
<violation number="2" location="src-tauri/src/adapters/driven/extractor/detector.rs:102">
P2: Single-part extension matching is case-sensitive, so uppercase/mixed-case extensions like `.ZIP` are not detected.</violation>
<violation number="3" location="src-tauri/src/adapters/driven/extractor/detector.rs:128">
P3: This test does not actually verify extension precedence because the temp file path has no `.rar` extension.</violation>
</file>
<file name="src-tauri/src/adapters/driven/extractor/segmentation.rs">
<violation number="1" location="src-tauri/src/adapters/driven/extractor/segmentation.rs:157">
P2: Legacy RAR detection only processes `.rar` inputs; calling detection on `.r00/.r01` parts returns `None`.</violation>
<violation number="2" location="src-tauri/src/adapters/driven/extractor/segmentation.rs:223">
P1: The `z01/z02` ZIP regex is malformed (`\z` anchor), so split ZIP parts in `.zNN` format will never match.</violation>
<violation number="3" location="src-tauri/src/adapters/driven/extractor/segmentation.rs:223">
P1: Use a literal `z` in the split-ZIP regex; `\z` is an end-of-text anchor here and makes `.z01` segment detection fail.</violation>
<violation number="4" location="src-tauri/src/adapters/driven/extractor/segmentation.rs:311">
P2: Part-number extraction uses the first numeric token in the filename, which misorders segments when base names contain digits.</violation>
<violation number="5" location="src-tauri/src/adapters/driven/extractor/segmentation.rs:311">
P1: Extract the trailing numeric part instead of the first digit run; current parsing can misorder multipart archives when the base filename contains digits.</violation>
</file>
<file name="src-tauri/src/adapters/driven/extractor/tar_handler.rs">
<violation number="1" location="src-tauri/src/adapters/driven/extractor/tar_handler.rs:55">
P2: Post-extraction stats are computed from the whole destination folder, which can overcount unrelated files and add significant overhead on large directories.</violation>
<violation number="2" location="src-tauri/src/adapters/driven/extractor/tar_handler.rs:158">
P1: `count_extracted` follows symlinked directories, which can recurse indefinitely on cyclic links and hang/overflow on crafted archives.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
| } | ||
|
|
||
| // Transition to Extracting | ||
| download.start_extracting()?; |
P0: The command requires Completed state but then calls start_extracting(), which rejects Completed, so extraction always fails with an invalid transition.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/application/commands/extract_archive.rs, line 41:
<comment>The command requires `Completed` state but then calls `start_extracting()`, which rejects `Completed`, so extraction always fails with an invalid transition.</comment>
<file context>
@@ -0,0 +1,72 @@
+ }
+
+ // Transition to Extracting
+ download.start_extracting()?;
+ self.download_repo().save(&download)?;
+ self.event_bus().publish(DomainEvent::DownloadExtracting {
</file context>
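Both P0 findings reduce to the same state-machine mismatch: the command precondition accepts `Completed`, but the transition rejects it. A minimal sketch of the reconciled transition, with a pared-down `DownloadState` and a free-standing `start_extracting` standing in for the real domain method (the match arms are an assumption about the real body, not the actual code):

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum DownloadState {
    Checking,
    Completed,
    Extracting,
}

#[derive(Debug, PartialEq)]
enum TransitionError {
    InvalidTransition { from: DownloadState },
}

// Sketch: extraction may start from Checking (post-download verification)
// or from Completed (user-triggered extract). Any other state is rejected.
fn start_extracting(state: DownloadState) -> Result<DownloadState, TransitionError> {
    match state {
        DownloadState::Checking | DownloadState::Completed => Ok(DownloadState::Extracting),
        from => Err(TransitionError::InvalidTransition { from }),
    }
}

fn main() {
    // Completed -> Extracting is exactly the transition the P0 finding requires.
    assert_eq!(start_extracting(DownloadState::Completed), Ok(DownloadState::Extracting));
    assert_eq!(start_extracting(DownloadState::Checking), Ok(DownloadState::Extracting));
    assert!(start_extracting(DownloadState::Extracting).is_err());
    println!("transitions ok");
}
```

Whichever direction the fix takes (loosen the transition or tighten the precondition), the command's required source state and the domain's allowed source states must name the same set.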
| } | ||
|
|
||
| // Transition to Extracting | ||
| download.start_extracting()?; |
P0: This transition call does not align with the allowed source state for extraction and will raise InvalidTransition for valid extraction requests. Align the command precondition and domain transition states.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/application/commands/extract_archive.rs, line 41:
<comment>This transition call does not align with the allowed source state for extraction and will raise `InvalidTransition` for valid extraction requests. Align the command precondition and domain transition states.</comment>
<file context>
@@ -0,0 +1,72 @@
+ }
+
+ // Transition to Extracting
+ download.start_extracting()?;
+ self.download_repo().save(&download)?;
+ self.event_bus().publish(DomainEvent::DownloadExtracting {
</file context>
| } | ||
|
|
||
| // Count extracted files and bytes by walking the destination directory | ||
| let (extracted_files, extracted_bytes) = Self::count_extracted(dest_dir).unwrap_or((0, 0)); |
P2: Post-extraction stats are computed from the whole destination folder, which can overcount unrelated files and add significant overhead on large directories.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/adapters/driven/extractor/tar_handler.rs, line 55:
<comment>Post-extraction stats are computed from the whole destination folder, which can overcount unrelated files and add significant overhead on large directories.</comment>
<file context>
@@ -0,0 +1,297 @@
+ }
+
+ // Count extracted files and bytes by walking the destination directory
+ let (extracted_files, extracted_bytes) = Self::count_extracted(dest_dir).unwrap_or((0, 0));
+
+ let duration_ms = start.elapsed().as_millis() as u64;
</file context>
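A symlink-safe version of the tallying helper could look like the following sketch. This `count_extracted` is a stand-in for the real tar_handler helper, not the actual code: it uses an explicit stack plus `symlink_metadata`, so a cyclic symlink can neither be followed nor cause unbounded recursion:

```rust
use std::fs;
use std::io;
use std::path::Path;

// Sketch of a symlink-safe directory tally. symlink_metadata never follows
// links, so a symlinked directory is skipped rather than descended into.
fn count_extracted(root: &Path) -> io::Result<(u64, u64)> {
    let mut files = 0u64;
    let mut bytes = 0u64;
    let mut stack = vec![root.to_path_buf()];
    while let Some(dir) = stack.pop() {
        for entry in fs::read_dir(&dir)? {
            let entry = entry?;
            let meta = fs::symlink_metadata(entry.path())?; // does not follow links
            if meta.file_type().is_symlink() {
                continue; // never traverse or size a link target
            } else if meta.is_dir() {
                stack.push(entry.path());
            } else {
                files += 1;
                bytes += meta.len();
            }
        }
    }
    Ok((files, bytes))
}

fn main() {
    let root = std::env::temp_dir().join("count_extracted_demo");
    let _ = fs::remove_dir_all(&root);
    fs::create_dir_all(root.join("sub")).unwrap();
    fs::write(root.join("a.txt"), b"hello").unwrap();
    fs::write(root.join("sub").join("b.txt"), b"world!").unwrap();
    assert_eq!(count_extracted(&root).unwrap(), (2, 11));
    fs::remove_dir_all(&root).unwrap();
    println!("count ok");
}
```

The P2 about overcounting still stands separately: walking only a directory the extraction itself created (or summing sizes as entries are written) avoids attributing pre-existing files to the archive.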
|
|
||
| #[test] | ||
| fn test_detect_zip_by_magic_bytes() { | ||
| let file = create_file_with_magic(&[0x50, 0x4B, 0x03, 0x04, 0xFF, 0xFF, 0xFF, 0xFF]); |
P3: This test does not actually verify extension precedence because the temp file path has no .rar extension.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/adapters/driven/extractor/detector.rs, line 128:
<comment>This test does not actually verify extension precedence because the temp file path has no `.rar` extension.</comment>
<file context>
@@ -0,0 +1,283 @@
+
+ #[test]
+ fn test_detect_zip_by_magic_bytes() {
+ let file = create_file_with_magic(&[0x50, 0x4B, 0x03, 0x04, 0xFF, 0xFF, 0xFF, 0xFF]);
+ let result = detect_format(file.path()).expect("should detect zip");
+ assert_eq!(result, Some(ArchiveFormat::Zip));
</file context>
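The two detector findings (missing ZIP signatures, case-sensitive extensions) can be sketched together. `detect_zip_magic` and `detect_by_extension` are hypothetical names standing in for the real detector functions; the three `PK` signatures are the standard local-file-header, end-of-central-directory (empty archive), and spanned-archive markers:

```rust
#[derive(Debug, PartialEq)]
enum ArchiveFormat {
    Zip,
    Rar,
    SevenZ,
}

// Sketch: accept all three ZIP signatures, not just PK\x03\x04.
fn detect_zip_magic(header: &[u8]) -> bool {
    matches!(
        header.get(..4),
        Some([0x50, 0x4B, 0x03, 0x04])   // local file header
            | Some([0x50, 0x4B, 0x05, 0x06]) // empty archive (EOCD only)
            | Some([0x50, 0x4B, 0x07, 0x08]) // spanned archive marker
    )
}

// Sketch: lowercase the extension before matching so `.ZIP` / `.Rar` work.
fn detect_by_extension(name: &str) -> Option<ArchiveFormat> {
    let ext = name.rsplit('.').next()?.to_ascii_lowercase();
    match ext.as_str() {
        "zip" => Some(ArchiveFormat::Zip),
        "rar" => Some(ArchiveFormat::Rar),
        "7z" => Some(ArchiveFormat::SevenZ),
        _ => None,
    }
}

fn main() {
    assert!(detect_zip_magic(b"PK\x03\x04rest"));
    assert!(detect_zip_magic(b"PK\x05\x06")); // empty archive now recognized
    assert_eq!(detect_by_extension("Game.ZIP"), Some(ArchiveFormat::Zip));
    println!("detector ok");
}
```

The P3 about the precedence test is a test-fixture issue: the temp file needs an actual `.rar` suffix (e.g. via a named temp path) before the assertion says anything about magic bytes winning over extension.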
- State machine: allow Completed → Extracting transition (P0) - Extract handler: recover download to Error state on failure (P1) - Regex: fix \z anchor bug in split-ZIP detection (P1) - extract_part_number: match trailing digits, not first run (P1) - Recursive extraction: call extract_by_format to avoid double recursion (P1) - 7z handler: reject symlink targets to prevent traversal (P1) - TAR count_extracted: use symlink_metadata, skip symlinks (P1) - Segmentation: remove duplicate read_dir call (P2) - ZIP timestamp: return None instead of incorrect approximation (P2) - Detector: recognize empty/spanned ZIP signatures (P2) - Detector: case-insensitive single-part extension matching (P2) - Remove accidentally committed coverage/ and AGENT.md artifacts
Actionable comments posted: 6
🧹 Nitpick comments (1)
src-tauri/src/domain/model/download.rs (1)
484-489: Add a regression test for the new `Completed -> Extracting` transition. The transition is now allowed, but there’s no direct unit test asserting this path. A dedicated test will prevent accidental rollback of this behavior.
✅ Suggested test addition
@@
 fn test_start_extracting_from_checking() {
     let mut d = make_download();
     d.start().unwrap();
     d.start_checking().unwrap();
     assert_eq!(d.state(), DownloadState::Checking);
     let event = d.start_extracting().unwrap();
     assert_eq!(d.state(), DownloadState::Extracting);
     assert_eq!(event, DomainEvent::DownloadExtracting { id: DownloadId(1) });
 }
+
+#[test]
+fn test_start_extracting_from_completed() {
+    let mut d = make_download();
+    d.start().unwrap();
+    d.complete().unwrap();
+    assert_eq!(d.state(), DownloadState::Completed);
+    let event = d.start_extracting().unwrap();
+    assert_eq!(d.state(), DownloadState::Extracting);
+    assert_eq!(event, DomainEvent::DownloadExtracting { id: DownloadId(1) });
+}
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/model/download.rs` around lines 484 - 489, Add a unit test that asserts start_extracting() on a Download in DownloadState::Completed transitions state to DownloadState::Extracting and returns Ok(DomainEvent::DownloadExtracting { id: ... }). Specifically, create a Download instance (or use the existing test helper/factory) with state = DownloadState::Completed, call start_extracting(), assert the result is Ok(DomainEvent::DownloadExtracting { id: download.id }), and assert download.state == DownloadState::Extracting; reference the start_extracting method and the DownloadState::Completed / DomainEvent::DownloadExtracting symbols when locating code to test.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/adapters/driven/extractor/mod.rs`:
- Around line 98-150: extract_recursive currently returns only warnings so
nested successful extractions never increment the top-level
ExtractSummary.extracted_files/extracted_bytes; change extract_recursive to
return rolled-up counts plus warnings (preferably an ExtractSummary or a tuple
like (Vec<String>, u64 /*files*/, u64 /*bytes*/)) and update its signature and
all call sites (including where extract_recursive is invoked around the other
block at lines ~188-197) to merge returned extracted_files and extracted_bytes
into the parent summary; inside extract_recursive, when extract_by_format
returns Ok(summary) add summary.extracted_files and summary.extracted_bytes to
your accumulators and also merge warnings, and when recursing capture and add
the nested recursive counts into the same accumulators before returning the
combined summary and warnings.
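The roll-up described above amounts to merging each nested summary into the parent. A sketch using the field names from the review; the struct shape and the `absorb` helper are illustrative, not the real `ExtractSummary`:

```rust
#[derive(Debug)]
struct ExtractSummary {
    extracted_files: u64,
    extracted_bytes: u64,
    warnings: Vec<String>,
}

impl ExtractSummary {
    // Sketch: fold a nested extraction's results into the parent summary so
    // recursive passes contribute to the top-level counts, not just warnings.
    fn absorb(&mut self, nested: ExtractSummary) {
        self.extracted_files += nested.extracted_files;
        self.extracted_bytes += nested.extracted_bytes;
        self.warnings.extend(nested.warnings);
    }
}

fn main() {
    let mut parent = ExtractSummary {
        extracted_files: 2,
        extracted_bytes: 100,
        warnings: vec![],
    };
    parent.absorb(ExtractSummary {
        extracted_files: 1,
        extracted_bytes: 50,
        warnings: vec!["nested archive skipped".to_string()],
    });
    assert_eq!((parent.extracted_files, parent.extracted_bytes), (3, 150));
    println!("rollup ok");
}
```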
In `@src-tauri/src/adapters/driven/extractor/segmentation.rs`:
- Around line 81-99: verify_all_parts_present currently only checks adjacent
gaps and can miss when the first segment is absent; fix by computing the minimum
part index across all entries (use extract_part_number on each Path's file_name,
fallback 0), and if that minimum is not 0 push the expected initial part into
missing (construct the expected initial filename using the same formatting logic
used elsewhere, e.g. format!("{:02}", 0) joined to the part's parent or the
parent of any part); use the same extract_part_number and parts.iter() helpers
so the check integrates with the existing adjacent-gap logic.
- Around line 220-229: The z-part branch currently collects only .zNN segments
(in the re_z capture using base_name, pattern, scan_directory and parts) but
omits the final base_name.zip file; update the code after scan_directory and
before returning to check for the terminal zip (e.g. let zip_name =
format!("{}.zip", base_name)) using parent.join(&zip_name).exists() (or by
scanning names) and if present add zip_name to parts (ensure ordering so the
numeric .zNN parts come first and the .zip is last—e.g. call
sort_parts_numerically(&mut parts) then push(zip_name) if found) so the returned
Vec includes the final .zip segment for extraction/validation.
- Around line 145-149: The current guard only detects ".rar" or a single-digit
suffix with an 'r' immediately before it and thus misses legacy multi-digit RAR
part names like ".r00" or ".r01"; update the check to detect extensions that are
"rar" or start with 'r' followed by one or more digits. Replace the existing
compound ends_with logic on file_name with an extension check (e.g., get the
substring after the last '.' or use Path::new(file_name).extension()) and treat
the name as a RAR part if ext == "rar" || (ext.starts_with('r') &&
ext[1..].chars().all(|c| c.is_ascii_digit())); keep the surrounding return
Ok(None) behavior the same.
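The extension test spelled out in that last prompt can be sketched as a small predicate (`is_rar_part_ext` is a hypothetical name): accept `rar`, or `r` followed by one or more ASCII digits, case-insensitively.

```rust
// Sketch: a filename extension denotes a legacy RAR volume if it is "rar"
// or "r" followed by one or more digits (.r00, .r01, .r123, ...).
fn is_rar_part_ext(ext: &str) -> bool {
    let ext = ext.to_ascii_lowercase();
    ext == "rar"
        || (ext.len() >= 2
            && ext.starts_with('r')
            && ext[1..].chars().all(|c| c.is_ascii_digit()))
}

fn main() {
    assert!(is_rar_part_ext("rar") && is_rar_part_ext("R00") && is_rar_part_ext("r123"));
    assert!(!is_rar_part_ext("r") && !is_rar_part_ext("rev") && !is_rar_part_ext("zip"));
    println!("rar part extensions ok");
}
```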
In `@src-tauri/src/adapters/driven/extractor/seven_z_handler.rs`:
- Around line 71-105: The current check only calls target_path.is_symlink()
which misses symlinked ancestor directories (e.g., dest_dir/foo -> symlink) so
entries like foo/bar.txt can escape the extraction root; add a helper (e.g.,
reject_symlinked_ancestors(dest_dir: &Path, target_parent: &Path) -> Result<(),
Error>) that walks from dest_dir (exclusive) down to target_path.parent()
(inclusive) using symlink_metadata and returns an error if any ancestor is a
symlink, and call that helper before any fs::create_dir_all or fs::File::create
operations (i.e., before the parent create_dir_all block and before creating the
file), referencing target_path, dest_dir, and entry to locate the insertion
points and ensure extraction aborts/records a warning if an ancestor symlink is
detected.
In `@src-tauri/src/adapters/driven/extractor/zip_handler.rs`:
- Around line 73-109: The code trusts enclosed_name() but doesn't prevent
writing through symlinked ancestors under dest_dir; add a helper (e.g.,
reject_symlink_ancestors or ensure_no_symlink_ancestors) that, given dest_dir
and target_path, walks from target_path's parent up to dest_dir and uses
symlink_metadata to detect any symlinked ancestor and return Err if found; call
this helper before fs::create_dir_all(parent) and before
fs::File::create(&target_path) (and before any write like std::io::copy) so you
skip/expose a warning if any ancestor is a symlink and avoid following links
outside dest_dir.
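Both ancestor-walk prompts describe the same helper, so one shared sketch covers the 7z and ZIP handlers. `reject_symlinked_ancestors` and its error shape are illustrative, not the real adapter code; it also checks `dest_dir` itself, which a later review round flags as a remaining gap:

```rust
use std::fs;
use std::io;
use std::path::Path;

// Sketch: starting at dest_dir, step down each component of the entry's
// relative path and fail if any already-existing directory is a symlink.
// The final component may not exist yet; NotFound means nothing below this
// point has been created, so there is nothing left to check.
fn reject_symlinked_ancestors(dest_dir: &Path, relative: &Path) -> io::Result<()> {
    let symlink_err = |p: &Path| {
        io::Error::new(io::ErrorKind::Other, format!("symlink in path: {}", p.display()))
    };
    if fs::symlink_metadata(dest_dir)?.file_type().is_symlink() {
        return Err(symlink_err(dest_dir)); // destination root itself is a link
    }
    let mut current = dest_dir.to_path_buf();
    for comp in relative.parent().unwrap_or(Path::new("")).components() {
        current.push(comp);
        match fs::symlink_metadata(&current) {
            Ok(meta) if meta.file_type().is_symlink() => return Err(symlink_err(&current)),
            Ok(_) => {}
            Err(e) if e.kind() == io::ErrorKind::NotFound => break, // not created yet
            Err(e) => return Err(e),
        }
    }
    Ok(())
}

fn main() {
    let dest = std::env::temp_dir().join("extract_dest_demo");
    let _ = fs::remove_dir_all(&dest);
    fs::create_dir_all(dest.join("inner")).unwrap();
    // Plain directories pass; a symlinked ancestor would return Err.
    assert!(reject_symlinked_ancestors(&dest, Path::new("inner/file.txt")).is_ok());
    assert!(reject_symlinked_ancestors(&dest, Path::new("fresh/file.txt")).is_ok());
    fs::remove_dir_all(&dest).unwrap();
    println!("ancestor check ok");
}
```

In the handlers this would run before `fs::create_dir_all` and `fs::File::create`, and the leaf path itself should get the same `symlink_metadata` check before any write.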
---
Nitpick comments:
In `@src-tauri/src/domain/model/download.rs`:
- Around line 484-489: Add a unit test that asserts start_extracting() on a
Download in DownloadState::Completed transitions state to
DownloadState::Extracting and returns Ok(DomainEvent::DownloadExtracting { id:
... }). Specifically, create a Download instance (or use the existing test
helper/factory) with state = DownloadState::Completed, call start_extracting(),
assert the result is Ok(DomainEvent::DownloadExtracting { id: download.id }),
and assert download.state == DownloadState::Extracting; reference the
start_extracting method and the DownloadState::Completed /
DomainEvent::DownloadExtracting symbols when locating code to test.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 308e119f-4e03-42e7-923d-3f2c008cb936
📒 Files selected for processing (8)
src-tauri/src/adapters/driven/extractor/detector.rssrc-tauri/src/adapters/driven/extractor/mod.rssrc-tauri/src/adapters/driven/extractor/segmentation.rssrc-tauri/src/adapters/driven/extractor/seven_z_handler.rssrc-tauri/src/adapters/driven/extractor/tar_handler.rssrc-tauri/src/adapters/driven/extractor/zip_handler.rssrc-tauri/src/application/commands/extract_archive.rssrc-tauri/src/domain/model/download.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- src-tauri/src/application/commands/extract_archive.rs
- src-tauri/src/adapters/driven/extractor/tar_handler.rs
5 issues found across 8 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src-tauri/src/adapters/driven/extractor/seven_z_handler.rs">
<violation number="1" location="src-tauri/src/adapters/driven/extractor/seven_z_handler.rs:74">
P1: The symlink check only validates `target_path` itself and misses symlinked parent directories, so extraction can still escape `dest_dir` through a linked ancestor path.</violation>
<violation number="2" location="src-tauri/src/adapters/driven/extractor/seven_z_handler.rs:74">
P0: Checking only `target_path.is_symlink()` is insufficient for extraction safety: symlinked parent directories under `dest_dir` can still redirect writes outside the extraction root. Validate ancestor components before creating directories or files.</violation>
</file>
<file name="src-tauri/src/application/commands/extract_archive.rs">
<violation number="1" location="src-tauri/src/application/commands/extract_archive.rs:61">
P1: `spawn_blocking` join failures return early and bypass the new failure-state handling, leaving downloads stuck in `Extracting`.</violation>
</file>
<file name="src-tauri/src/adapters/driven/extractor/segmentation.rs">
<violation number="1" location="src-tauri/src/adapters/driven/extractor/segmentation.rs:223">
P1: Split ZIP detection should include the terminal `.zip` segment when matching `.zNN` parts; otherwise the returned part list is incomplete and downstream extraction/validation can fail.</violation>
<violation number="2" location="src-tauri/src/adapters/driven/extractor/segmentation.rs:304">
P1: The new regex in `extract_part_number` does not reliably extract the trailing segment number; it can capture earlier digits (e.g., the `7` in `.7z`), causing wrong part sorting and false missing-part detection.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
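The trailing-digit fix called out for `extract_part_number` can be sketched with plain iterator code, no regex: take the last run of ASCII digits in the name, so `archive.7z.001` sorts by `001` rather than by the `7` in `.7z`. The signature is an assumption about the real helper:

```rust
// Sketch: extract the *last* run of ASCII digits from a filename.
fn extract_part_number(file_name: &str) -> Option<u32> {
    // Walk from the end: skip the non-digit tail, then take the digit run.
    let rev_digits: String = file_name
        .chars()
        .rev()
        .skip_while(|c| !c.is_ascii_digit())
        .take_while(|c| c.is_ascii_digit())
        .collect();
    // The run was collected back-to-front; restore its order before parsing.
    let digits: String = rev_digits.chars().rev().collect();
    digits.parse().ok()
}

fn main() {
    assert_eq!(extract_part_number("archive.7z.001"), Some(1));
    assert_eq!(extract_part_number("release7.part02.rar"), Some(2));
    assert_eq!(extract_part_number("plain.rar"), None);
    println!("part numbers ok");
}
```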
Remove coverage/, AGENT.md, and src/routes/ test file that were accidentally included in the initial commit.
- Add regression test for Completed → Extracting transition - Recursive extraction: roll up extracted_files/bytes into parent summary - extract_part_number: use last digit run (fixes .7z.001 → 7 bug) - verify_all_parts_present: detect missing first segment - Split ZIP: include terminal .zip in part list - Legacy RAR: proper multi-digit extension detection (.r00, .r01) - Symlink ancestor traversal check in zip_handler + seven_z_handler - spawn_blocking join failure: recover download to Error state
Actionable comments posted: 3
♻️ Duplicate comments (1)
src-tauri/src/adapters/driven/extractor/segmentation.rs (1)
156-194: ⚠️ Potential issue | 🟠 Major: Handle legacy `.r00`/`.r01` entrypoints too. Line 162 now admits `archive.r00`, but Line 172 only derives `base_name` from `.rar`, so `detect_segments(Path::new("archive.r00"))` still falls through to `Ok(None)`. That makes non-first legacy RAR parts undiscoverable.
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driven/extractor/segmentation.rs` around lines 156 - 194, The current logic only derives base_name when file_name.strip_suffix(".rar") succeeds, so inputs like "archive.r00" are ignored; update detect_segments to also detect and strip legacy part suffixes (match r\d+ like ".r00"/".r01") to compute base_name when the extension is a numbered R part, then reuse the same Regex/scan_directory/has_rar_parts flow. Concretely, in the block using file_name.strip_suffix(".rar"), extend it to attempt stripping a ".rNN" suffix (using the same r\d+ pattern) to produce base_name, and then call the existing Regex creation, scan_directory, has_rar_parts, sort_parts_numerically and return paths as before so non-first RAR parts are discovered; ensure you reference the existing helpers has_rar_parts, scan_directory and sort_parts_numerically when implementing the fix.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/adapters/driven/extractor/mod.rs`:
- Around line 122-149: The recursive scan currently calls
find_archives_in_dir(dest_dir) which walks the whole destination tree and picks
up unrelated sibling archives; change those calls to limit scanning to the
directory produced by the current extraction (use
find_archives_in_dir(&nested_dest) when recursing into nested_dest or otherwise
pass the specific extraction folder/list of extracted paths), and apply the same
change where find_archives_in_dir is used around lines referenced (the other
extract_recursive recursion sites) so only archives under the current extraction
directory are discovered and re-extracted.
In `@src-tauri/src/adapters/driven/extractor/zip_handler.rs`:
- Around line 87-117: The code only checks reject_symlinked_ancestors(dest_dir,
&target_path) but misses rejecting when target_path itself already exists as a
symlink; update the extraction logic around target_path (before calling
fs::create_dir_all(&target_path) for directories and before creating parents or
calling fs::File::create(&target_path) for files) to call
std::fs::symlink_metadata(&target_path) and if metadata.file_type().is_symlink()
then warn, push a warning, and continue (similar to the ancestor rejection),
ensuring both directory branch (entry.is_dir) and file branch perform this check
to avoid following/writing through symlinks.
---
Duplicate comments:
In `@src-tauri/src/adapters/driven/extractor/segmentation.rs`:
- Around line 156-194: The current logic only derives base_name when
file_name.strip_suffix(".rar") succeeds, so inputs like "archive.r00" are
ignored; update detect_segments to also detect and strip legacy part suffixes
(match r\d+ like ".r00"/".r01") to compute base_name when the extension is a
numbered R part, then reuse the same Regex/scan_directory/has_rar_parts flow.
Concretely, in the block using file_name.strip_suffix(".rar"), extend it to
attempt stripping a ".rNN" suffix (using the same r\d+ pattern) to produce
base_name, and then call the existing Regex creation, scan_directory,
has_rar_parts, sort_parts_numerically and return paths as before so non-first
RAR parts are discovered; ensure you reference the existing helpers
has_rar_parts, scan_directory and sort_parts_numerically when implementing the
fix.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: e8a722c5-acdb-4c8d-a8f6-94dae163089b
📒 Files selected for processing (6)
src-tauri/src/adapters/driven/extractor/mod.rssrc-tauri/src/adapters/driven/extractor/segmentation.rssrc-tauri/src/adapters/driven/extractor/seven_z_handler.rssrc-tauri/src/adapters/driven/extractor/zip_handler.rssrc-tauri/src/application/commands/extract_archive.rssrc-tauri/src/domain/model/download.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- src-tauri/src/domain/model/download.rs
There was a problem hiding this comment.
7 issues found across 6 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src-tauri/src/adapters/driven/extractor/segmentation.rs">
<violation number="1" location="src-tauri/src/adapters/driven/extractor/segmentation.rs:161">
P2: Legacy RAR segment files (`.r00`, `.r01`, …) still won’t be detected because only `.rar` filenames enter the scan branch.</violation>
<violation number="2" location="src-tauri/src/adapters/driven/extractor/segmentation.rs:247">
P2: Appending `archive.zip` into the `.zNN` segment vector breaks numeric continuity checks and can trigger false missing-part errors.</violation>
<violation number="3" location="src-tauri/src/adapters/driven/extractor/segmentation.rs:247">
P1: Appending the terminal `.zip` segment here breaks continuity validation for normal split ZIP sets (`.z01`, `.z02`, `.zip`) because the validator interprets the final `.zip` as part `0`. Update part verification to special-case the terminal `.zip` member.</violation>
</file>
<file name="src-tauri/src/adapters/driven/extractor/mod.rs">
<violation number="1" location="src-tauri/src/adapters/driven/extractor/mod.rs:206">
P2: Recursive extraction is starting from the entire destination directory, which can include the source archive and unrelated sibling archives. Restrict recursion to files produced by the current extraction to avoid re-extracting unrelated content and inflating summary totals.</violation>
</file>
<file name="src-tauri/src/adapters/driven/extractor/seven_z_handler.rs">
<violation number="1" location="src-tauri/src/adapters/driven/extractor/seven_z_handler.rs:198">
P1: `reject_symlinked_ancestors` does not validate that `dest_dir` itself is not a symlink, so extraction can still escape the intended root when the destination directory is symlinked.</violation>
</file>
<file name="src-tauri/src/adapters/driven/extractor/zip_handler.rs">
<violation number="1" location="src-tauri/src/adapters/driven/extractor/zip_handler.rs:90">
P1: The new symlink defense skips checking whether `target_path` itself is a symlink, so file extraction can still follow a pre-existing leaf symlink and write outside the destination.</violation>
<violation number="2" location="src-tauri/src/adapters/driven/extractor/zip_handler.rs:90">
P0: Also reject cases where `target_path` itself is a symlink before creating directories or files. Otherwise a pre-existing symlink at the final path can redirect writes outside `dest_dir`.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
- Legacy RAR: detect .r00/.r01 entrypoints, not just .rar - Split ZIP: remove terminal .zip from parts (breaks continuity checks) - Recursive scan: only immediate directory, not full subtree - zip_handler: reject target_path itself as symlink, not just ancestors - seven_z_handler: reject dest_dir itself as symlink before extraction
Actionable comments posted: 3
♻️ Duplicate comments (3)
src-tauri/src/adapters/driven/extractor/mod.rs (1)
203-209: ⚠️ Potential issue | 🟠 Major — Recursive mode still scans the caller-selected destination root.
The first `extract_recursive(dest_dir, ...)` call will pick up the source archive and any unrelated archives already sitting in that directory. That re-extracts files the current operation did not produce and inflates the rolled-up summary. Recursive discovery needs to be limited to paths created by the current extraction.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driven/extractor/mod.rs` around lines 203 - 209, The recursive pass currently calls extract_recursive(dest_dir, ...) which scans the entire destination root and reprocesses unrelated archives; change the flow so recursive discovery is limited to the paths produced by this extraction: have the initial extraction return or record the specific output path(s) (e.g., a created_subdir or Vec<PathBuf>) and pass those to extract_recursive (or add an allowlist parameter like allowed_roots/starting_paths to extract_recursive) instead of the full dest_dir; update the extract_recursive signature and its callers (extract_recursive) and use the returned created paths when updating summary (extracted_files, extracted_bytes, warnings).

src-tauri/src/adapters/driven/extractor/segmentation.rs (2)
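The fix this prompt describes — returning the created paths and recursing only over them — could be sketched like this. All names here are hypothetical, and `looks_like_archive` is a stand-in for the real `detector::detect_format`:

```rust
use std::path::{Path, PathBuf};

// Hypothetical stand-in for the real magic-byte detector: a cheap
// extension check used only for this sketch.
fn looks_like_archive(path: &Path) -> bool {
    matches!(
        path.extension().and_then(|e| e.to_str()),
        Some("zip" | "rar" | "7z" | "tar")
    )
}

// Instead of rescanning dest_dir, recurse only into archives among the
// paths this extraction actually wrote.
fn archives_to_recurse(created_paths: &[PathBuf]) -> Vec<PathBuf> {
    created_paths
        .iter()
        .filter(|p| looks_like_archive(p))
        .cloned()
        .collect()
}
```

With this shape, an archive that was already sitting in `dest_dir` before the extraction never enters the recursion queue, so the summary counts only work the current operation produced.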
220-248: ⚠️ Potential issue | 🟠 Major — Classic split ZIP detection is still incomplete.
This branch only recognizes `name.zNN` inputs and still returns only the `.zNN` siblings. Selecting `name.zip` from the same split set returns `None`, and selecting `name.z01` yields an incomplete segment list because the terminal `name.zip` member is omitted.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driven/extractor/segmentation.rs` around lines 220 - 248, The zNN branch in detect_zip currently only gathers .zNN siblings and omits the terminal base .zip file (and selecting the base .zip doesn't find its .zNN parts); update detect_zip so that when re_z captures (in function detect_zip) you build a parts list that includes both all matching .zNN files and the terminal "{base_name}.zip" (use scan_directory to collect ".z\\d+" matches and then check for parent.join(format!("{}.zip", base_name)) and push it if it exists), and also ensure the earlier .zip.N branch or scan_parts handles cases where the passed file_name is "{base_name}.zip" by scanning for corresponding ".zNN" siblings in addition to ".zip.N" parts (use scan_parts, scan_directory and sort_parts_numerically to produce a complete, numerically sorted Vec<PathBuf>).
86-98: ⚠️ Potential issue | 🟠 Major — Don't coerce the terminal `.zip` segment to part 0.
This continuity check still treats any non-numeric filename as 0. A valid `archive.z01`, `archive.z02`, `archive.zip` set will be rejected once the detector returns the `.zip` tail, because the final segment does not participate in the numeric sequence. This needs a ZIP-specific terminal-segment case instead of `unwrap_or(0)`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driven/extractor/segmentation.rs` around lines 86 - 98, The continuity check in segmentation.rs erroneously treats non-numeric filenames as 0 by using unwrap_or(0) on extract_part_number, which makes a terminal .zip file be evaluated as part 0; change the logic in the block that computes prev_num/curr_num (around the extract_part_number calls) to treat a None from extract_part_number as a ZIP terminal segment: detect when the filename ends with ".zip" (or when extract_part_number returns None and filename matches the expected archive zip tail) and skip the numeric-sequence comparison for that final segment (or allow it as the acceptable terminal without coercing to 0) so only truly numeric parts are compared for continuity and missing.push is only used when a numeric gap is detected.
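The terminal-segment rule could look roughly like this minimal sketch (hypothetical names, not the project's code). Note that taking the last digit run would still misread digits in an unnumbered basename — the pitfall later review rounds address:

```rust
// Last run of digits anywhere in the file name, e.g. "a.z01" -> Some(1),
// "a.zip" -> None.
fn last_number(name: &str) -> Option<u32> {
    let mut best = None;
    let mut digits = String::new();
    for c in name.chars() {
        if c.is_ascii_digit() {
            digits.push(c);
        } else {
            if !digits.is_empty() {
                best = digits.parse().ok();
            }
            digits.clear();
        }
    }
    if !digits.is_empty() {
        best = digits.parse().ok();
    }
    best
}

// Compare only numbered segments; an unnumbered terminal ".zip" simply
// drops out of the sequence instead of being coerced to part 0.
fn missing_parts(sorted_names: &[&str]) -> Vec<u32> {
    let nums: Vec<u32> = sorted_names.iter().filter_map(|n| last_number(n)).collect();
    let mut missing = Vec::new();
    for pair in nums.windows(2) {
        for gap in (pair[0] + 1)..pair[1] {
            missing.push(gap);
        }
    }
    missing
}
```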
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/adapters/driven/extractor/mod.rs`:
- Around line 243-258: find_archives_in_dir currently only lists immediate files
and therefore misses archives nested in subdirectories created during
extraction; change find_archives_in_dir to perform a bounded recursive scan (or
accept a depth/recursive flag) so it descends into subfolders it is handed (but
avoids infinite symlink loops), using
detector::detect_format(&path).is_ok_and(|f| f.is_some()) to identify archive
files, and ensure extract_recursive calls the updated find_archives_in_dir with
recursion enabled (or passes the correct depth) so nested archives like
docs/inner.zip are discovered and queued for extraction.
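A bounded recursive scan of the kind this prompt asks for might look like the following sketch. It is hypothetical, not the project's `find_archives_in_dir`, and the extension check stands in for the real magic-byte detector:

```rust
use std::path::{Path, PathBuf};

// Descend into subdirectories up to max_depth, never following symlinks
// (which also prevents infinite loops), collecting archive-looking files.
fn find_archives(dir: &Path, max_depth: u32, out: &mut Vec<PathBuf>) -> std::io::Result<()> {
    for entry in std::fs::read_dir(dir)? {
        let entry = entry?;
        let path = entry.path();
        // DirEntry::file_type does not follow symlinks, so links show up
        // as symlinks here rather than as their targets.
        let file_type = entry.file_type()?;
        if file_type.is_symlink() {
            continue;
        }
        if file_type.is_dir() {
            if max_depth > 0 {
                find_archives(&path, max_depth - 1, out)?;
            }
        } else if matches!(
            path.extension().and_then(|e| e.to_str()),
            Some("zip" | "rar" | "7z")
        ) {
            out.push(path);
        }
    }
    Ok(())
}
```

With a depth bound of, say, 3, a nested `docs/inner.zip` is discovered while a symlink cycle cannot recurse forever.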
In `@src-tauri/src/adapters/driven/extractor/segmentation.rs`:
- Around line 188-197: The current regex/collection yields "archive.rar" and
"archive.r00" that both collapse to the same numeric key in
sort_parts_numerically, causing wrong ordering; update the logic so the ".rar"
volume is given an explicit ordinal before any ".rNN" parts. Concretely: when
building the parts list (the Vec parts populated by scan_directory for
base_name), detect entries whose extension is ".rar" and assign them the lowest
ordinal (e.g., 0) while parsing ".rNN" parts to numeric ordinals starting after
that (e.g., r00 => 1, r01 => 2) or change sort_parts_numerically to treat ".rar"
as strictly earlier than any ".rNN"; ensure the code paths using base_name,
parts, and sort_parts_numerically preserve this ordering so
verify_all_parts_present sees the correct sequence.
In `@src-tauri/src/adapters/driven/extractor/zip_handler.rs`:
- Around line 44-50: Reject extraction if the requested extraction root or any
path component of dest_dir is a symlink: in extract (and the code handling
individual entries around the 87-117 block) call std::fs::symlink_metadata on
each component of dest_dir (walk components from the root to dest_dir) and if
any file_type().is_symlink() return an Err(DomainError) before calling
create_dir_all or File::create; apply the same check before creating/writing
each target path to ensure no symlink in the destination path components.
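A sketch of the component walk described above, using an `io::Result` error channel in place of the project's `DomainError` (names hypothetical):

```rust
use std::io;
use std::path::Path;

// Walk dest_dir and every ancestor, failing if any component is a
// symlink, before anything is created or written under it.
fn reject_symlinked_path(dest_dir: &Path) -> io::Result<()> {
    let mut current = dest_dir;
    loop {
        // symlink_metadata does not follow links, unlike fs::metadata;
        // a missing component is fine because it will be created fresh.
        if let Ok(meta) = std::fs::symlink_metadata(current) {
            if meta.file_type().is_symlink() {
                return Err(io::Error::new(
                    io::ErrorKind::InvalidInput,
                    format!("symlink in destination path: {}", current.display()),
                ));
            }
        }
        match current.parent() {
            Some(parent) if !parent.as_os_str().is_empty() => current = parent,
            _ => return Ok(()),
        }
    }
}
```

Calling this once for `dest_dir` and once per entry's `target_path` covers both the symlinked-root case and the pre-existing leaf symlink case the reviews flag.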
---
Duplicate comments:
In `@src-tauri/src/adapters/driven/extractor/mod.rs`:
- Around line 203-209: The recursive pass currently calls
extract_recursive(dest_dir, ...) which scans the entire destination root and
reprocesses unrelated archives; change the flow so recursive discovery is
limited to the paths produced by this extraction: have the initial extraction
return or record the specific output path(s) (e.g., a created_subdir or
Vec<PathBuf>) and pass those to extract_recursive (or add an allowlist parameter
like allowed_roots/starting_paths to extract_recursive) instead of the full
dest_dir; update the extract_recursive signature and its callers
(extract_recursive) and use the returned created paths when updating summary
(extracted_files, extracted_bytes, warnings).
In `@src-tauri/src/adapters/driven/extractor/segmentation.rs`:
- Around line 220-248: The zNN branch in detect_zip currently only gathers .zNN
siblings and omits the terminal base .zip file (and selecting the base .zip
doesn't find its .zNN parts); update detect_zip so that when re_z captures (in
function detect_zip) you build a parts list that includes both all matching .zNN
files and the terminal "{base_name}.zip" (use scan_directory to collect ".z\\d+"
matches and then check for parent.join(format!("{}.zip", base_name)) and push it
if it exists), and also ensure the earlier .zip.N branch or scan_parts handles
cases where the passed file_name is "{base_name}.zip" by scanning for
corresponding ".zNN" siblings in addition to ".zip.N" parts (use scan_parts,
scan_directory and sort_parts_numerically to produce a complete, numerically
sorted Vec<PathBuf>).
- Around line 86-98: The continuity check in segmentation.rs erroneously treats
non-numeric filenames as 0 by using unwrap_or(0) on extract_part_number, which
makes a terminal .zip file be evaluated as part 0; change the logic in the block
that computes prev_num/curr_num (around the extract_part_number calls) to treat
a None from extract_part_number as a ZIP terminal segment: detect when the
filename ends with ".zip" (or when extract_part_number returns None and filename
matches the expected archive zip tail) and skip the numeric-sequence comparison
for that final segment (or allow it as the acceptable terminal without coercing
to 0) so only truly numeric parts are compared for continuity and missing.push
is only used when a numeric gap is detected.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: aef7a6b4-7a85-46bc-9289-2aecdee486ee
📒 Files selected for processing (4)
- src-tauri/src/adapters/driven/extractor/mod.rs
- src-tauri/src/adapters/driven/extractor/segmentation.rs
- src-tauri/src/adapters/driven/extractor/seven_z_handler.rs
- src-tauri/src/adapters/driven/extractor/zip_handler.rs
5 issues found across 4 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src-tauri/src/adapters/driven/extractor/mod.rs">
<violation number="1" location="src-tauri/src/adapters/driven/extractor/mod.rs:256">
P1: This change stops scanning subdirectories, so recursive extraction can miss nested archives that are not at the root of the extracted directory.</violation>
<violation number="2" location="src-tauri/src/adapters/driven/extractor/mod.rs:257">
P1: `find_archives_in_dir` now only checks immediate files, so nested archives inside extracted subdirectories are skipped and won't be processed during recursive extraction.</violation>
</file>
<file name="src-tauri/src/adapters/driven/extractor/segmentation.rs">
<violation number="1" location="src-tauri/src/adapters/driven/extractor/segmentation.rs:196">
P2: Legacy RAR segment sorting is nondeterministic because `.rar` and `.r00` are both sorted as part `0`, which can return parts in the wrong order.</violation>
<violation number="2" location="src-tauri/src/adapters/driven/extractor/segmentation.rs:196">
P2: Give the `.rar` volume an explicit ordinal before `.r00` when sorting legacy RAR parts; relying on generic numeric sorting can produce non-deterministic ordering and fail valid multi-part sets.</violation>
</file>
<file name="src-tauri/src/adapters/driven/extractor/zip_handler.rs">
<violation number="1" location="src-tauri/src/adapters/driven/extractor/zip_handler.rs:90">
P0: Also reject a symlinked destination root before writing entries. Checking only target ancestors and the target path still allows writes outside the intended extraction tree when `dest_dir` itself is a symlink.</violation>
</file>
- find_archives_in_dir: restore subdirectory scan with symlink skip
- zip_handler: reject dest_dir itself as symlink before extraction
- sort_parts_numerically: .rar sorts first (-1), .zip sorts last
- verify_all_parts_present: skip continuity for terminal .rar/.zip
- detect_zip: handle .zip input finding .zNN siblings, include terminal .zip
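The sort-key rule in the commit above (`.rar` sorts first, `.zip` sorts last) can be illustrated with this hypothetical sketch, not the actual `sort_parts_numerically`:

```rust
// Sort key for split-archive parts: numbered parts sort by their last
// digit run; an unnumbered ".rar" entry volume sorts before everything,
// and an unnumbered terminal ".zip" sorts after everything.
fn part_sort_key(name: &str) -> i64 {
    let num = name
        .split(|c: char| !c.is_ascii_digit())
        .filter(|s| !s.is_empty())
        .last()
        .and_then(|s| s.parse::<i64>().ok());
    match num {
        Some(n) => n,
        None if name.ends_with(".rar") => -1,
        None if name.ends_with(".zip") => i64::MAX,
        None => 0,
    }
}
```

This gives legacy RAR sets the order `a.rar`, `a.r00`, `a.r01`, and z-format ZIP sets the order `a.z01`, `a.z02`, `a.zip`.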
♻️ Duplicate comments (1)
src-tauri/src/adapters/driven/extractor/segmentation.rs (1)
116-126: ⚠️ Potential issue | 🟡 Minor — First-part baseline check incomplete for 0-indexed archives.
The condition `first_num > 1` correctly catches missing parts for 1-indexed archives (e.g., a `part01.rar` series). However, for 0-indexed archives like legacy RAR (`r00`, `r01`, ...), if `r00` is missing and the set starts at `r01`, `first_num` equals 1, which is NOT > 1, so the missing `r00` goes undetected.
Consider detecting whether the archive series is 0-indexed vs 1-indexed and adjusting the baseline accordingly:
🐛 Suggested fix
```diff
 // Check if the first part starts at the expected number (0 or 1)
 if let Some(first) = parts.first()
     && let Some(first_num) = first
         .file_name()
         .and_then(|n| n.to_str())
         .and_then(extract_part_number)
-    && first_num > 1
+    && first_num > 1 // TODO: For 0-indexed archives (legacy RAR .rNN), check first_num > 0
     && let Some(parent) = first.parent()
 {
     missing.push(parent.join(format!("{:02}", first_num - 1)));
 }
```

This is related to a past review comment about `verify_all_parts_present` accepting incomplete sets.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driven/extractor/segmentation.rs` around lines 116 - 126, The baseline check should account for 0-indexed series (r00, r01, ...) by deciding baseline = 0 if any part parses to 0, otherwise baseline = 1; replace the hardcoded `first_num > 1` logic around `parts.first()`/`first_num` with a computed baseline (use existing `extract_part_number` when scanning `parts` to detect any zero-valued part), then check `first_num > baseline` and push the missing path using `missing.push(parent.join(format!("{:02}", first_num - 1)))` (or `first_num - 1` relative to the detected baseline) so a missing `r00` is flagged when the set starts at `r01`; refer to `parts.first()`, `extract_part_number`, and `missing.push(...)` to locate and update the code.
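One way to pick the baseline, sketched with hypothetical names. Unlike the prompt's suggestion of scanning the set for a zero-numbered part, this variant decides the index base from the extension style (`.rNN` series are 0-indexed, `.partNN.rar` / `.zip.NNN` series are 1-indexed):

```rust
// Legacy RAR volumes (.rNN) start at r00; other numbered series start at 1.
fn expected_baseline(file_name: &str) -> u32 {
    let is_legacy_rar = file_name
        .rsplit_once('.')
        .map(|(_, ext)| {
            ext.len() >= 2
                && ext.starts_with('r')
                && ext[1..].chars().all(|c| c.is_ascii_digit())
        })
        .unwrap_or(false);
    if is_legacy_rar { 0 } else { 1 }
}

// Flag a missing first segment, e.g. a set that starts at r01 with no r00.
fn first_part_missing(file_name: &str, first_num: u32) -> bool {
    first_num > expected_baseline(file_name)
}
```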
🧹 Nitpick comments (4)
src-tauri/src/adapters/driven/extractor/segmentation.rs (3)
367-387: `has_rar_parts` function appears unused.
This function is defined but not called anywhere in the module. If it's intended for future use, consider adding a `#[allow(dead_code)]` attribute with a TODO comment. Otherwise, consider removing it to reduce maintenance burden.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driven/extractor/segmentation.rs` around lines 367 - 387, The function has_rar_parts is defined but not referenced anywhere; either remove it to avoid dead code or mark it as intentionally unused by adding #[allow(dead_code)] above the function and a TODO comment explaining intended future use; locate the has_rar_parts function (which returns Result<bool, DomainError> and uses Regex::new and std::fs::read_dir) and apply one of these two fixes so the compiler and reviewers know the intent.
360-365: Consider caching the compiled regex for `extract_part_number`.
The regex is compiled on every call. For archive sets with many parts, this could add overhead. Consider using `std::sync::OnceLock` or the `once_cell` crate to compile the pattern once.
♻️ Example using OnceLock

```diff
+use std::sync::OnceLock;
+
+fn part_number_regex() -> &'static Regex {
+    static RE: OnceLock<Regex> = OnceLock::new();
+    RE.get_or_init(|| Regex::new(r"(\d+)").unwrap())
+}
+
 fn extract_part_number(file_name: &str) -> Option<u32> {
-    let re = Regex::new(r"(\d+)").ok()?;
-    re.find_iter(file_name)
+    part_number_regex()
+        .find_iter(file_name)
         .last()
         .and_then(|m| m.as_str().parse::<u32>().ok())
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driven/extractor/segmentation.rs` around lines 360 - 365, The extract_part_number function currently compiles the Regex on every call; change it to use a static cached compiled regex (e.g., with std::sync::OnceLock or once_cell::sync::Lazy) so the Regex is created once and reused; replace the local Regex::new(...) in extract_part_number with a reference to the static (use get_or_init or Lazy initialization) and keep the rest of the function logic (find_iter, last, parse::<u32>()) the same so behavior is unchanged while avoiding repeated compilation.
389-541: Test coverage is good, but consider adding tests for legacy RAR and z-format ZIP.
The existing tests cover modern RAR, 7z, and `.zip.NNN` formats well. Consider adding tests for:

- Legacy RAR format: `.rar`, `.r00`, `.r01`
- ZIP z-format: `.z01`, `.z02`, `.zip`

These would validate the special sorting logic for terminal segments.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driven/extractor/segmentation.rs` around lines 389 - 541, Add unit tests that cover legacy RAR and z-format ZIP segment patterns to exercise detect_segments and verify_all_parts_present: create a test (e.g., test_detect_legacy_rar_segments) that writes files like "archive.r00", "archive.r01", "archive.rar" and asserts detect_segments returns the expected ordered list and verify_all_parts_present passes/fails as appropriate; and another test (e.g., test_detect_z_format_zip_segments) that writes "archive.z01", "archive.z02", "archive.zip" and asserts detection, ordering (terminal .zip last), and that missing parts cause verify_all_parts_present to return Err. Ensure the tests use TempDir, call detect_segments(&file_path).unwrap()/unwrap_err() and compare file_name() strings to validate sorting and presence semantics.

src-tauri/src/adapters/driven/extractor/mod.rs (1)
6-7: Remember to remove `#![allow(dead_code)]` after IPC integration.
The comment explains this is temporary until the module is wired into the IPC layer. Consider adding a TODO comment or tracking this in an issue to ensure it's addressed.
🤖 Prompt for AI Agents
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driven/extractor/mod.rs` around lines 6 - 7, The crate-level attribute #![allow(dead_code)] in src/adapters/driven/extractor/mod.rs is temporary; replace it with a TODO comment referencing an issue/ticket and the work required (wire this module into the IPC layer) and then remove the attribute when the module is integrated; specifically, update the top of mod.rs to keep a clear TODO like "TODO: remove #![allow(dead_code)] when extractor module is wired into IPC (issue `#XYZ`)" and open/link a tracker issue if one doesn't exist so the dead_code suppression is not forgotten.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@src-tauri/src/adapters/driven/extractor/segmentation.rs`:
- Around line 116-126: The baseline check should account for 0-indexed series
(r00, r01, ...) by deciding baseline = 0 if any part parses to 0, otherwise
baseline = 1; replace the hardcoded `first_num > 1` logic around
`parts.first()`/`first_num` with a computed baseline (use existing
`extract_part_number` when scanning `parts` to detect any zero-valued part),
then check `first_num > baseline` and push the missing path using
`missing.push(parent.join(format!("{:02}", first_num - 1)))` (or `first_num - 1`
relative to the detected baseline) so a missing `r00` is flagged when the set
starts at `r01`; refer to `parts.first()`, `extract_part_number`, and
`missing.push(...)` to locate and update the code.
---
Nitpick comments:
In `@src-tauri/src/adapters/driven/extractor/mod.rs`:
- Around line 6-7: The crate-level attribute #![allow(dead_code)] in
src/adapters/driven/extractor/mod.rs is temporary; replace it with a TODO
comment referencing an issue/ticket and the work required (wire this module into
the IPC layer) and then remove the attribute when the module is integrated;
specifically, update the top of mod.rs to keep a clear TODO like "TODO: remove
#![allow(dead_code)] when extractor module is wired into IPC (issue `#XYZ`)" and
open/link a tracker issue if one doesn't exist so the dead_code suppression is
not forgotten.
In `@src-tauri/src/adapters/driven/extractor/segmentation.rs`:
- Around line 367-387: The function has_rar_parts is defined but not referenced
anywhere; either remove it to avoid dead code or mark it as intentionally unused
by adding #[allow(dead_code)] above the function and a TODO comment explaining
intended future use; locate the has_rar_parts function (which returns
Result<bool, DomainError> and uses Regex::new and std::fs::read_dir) and apply
one of these two fixes so the compiler and reviewers know the intent.
- Around line 360-365: The extract_part_number function currently compiles the
Regex on every call; change it to use a static cached compiled regex (e.g., with
std::sync::OnceLock or once_cell::sync::Lazy) so the Regex is created once and
reused; replace the local Regex::new(...) in extract_part_number with a
reference to the static (use get_or_init or Lazy initialization) and keep the
rest of the function logic (find_iter, last, parse::<u32>()) the same so
behavior is unchanged while avoiding repeated compilation.
- Around line 389-541: Add unit tests that cover legacy RAR and z-format ZIP
segment patterns to exercise detect_segments and verify_all_parts_present:
create a test (e.g., test_detect_legacy_rar_segments) that writes files like
"archive.r00", "archive.r01", "archive.rar" and asserts detect_segments returns
the expected ordered list and verify_all_parts_present passes/fails as
appropriate; and another test (e.g., test_detect_z_format_zip_segments) that
writes "archive.z01", "archive.z02", "archive.zip" and asserts detection,
ordering (terminal .zip last), and that missing parts cause
verify_all_parts_present to return Err. Ensure the tests use TempDir, call
detect_segments(&file_path).unwrap()/unwrap_err() and compare file_name()
strings to validate sorting and presence semantics.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 08dd1569-1d5f-4351-9916-0a1bb4d7737b
📒 Files selected for processing (3)
- src-tauri/src/adapters/driven/extractor/mod.rs
- src-tauri/src/adapters/driven/extractor/segmentation.rs
- src-tauri/src/adapters/driven/extractor/zip_handler.rs
2 issues found across 3 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src-tauri/src/adapters/driven/extractor/segmentation.rs">
<violation number="1" location="src-tauri/src/adapters/driven/extractor/segmentation.rs:96">
P2: Terminal `.zip` segments are misclassified when the archive basename contains digits, causing false missing-part errors during verification.</violation>
<violation number="2" location="src-tauri/src/adapters/driven/extractor/segmentation.rs:349">
P2: Sorting unnumbered `.rar` files first breaks continuity checks for valid legacy RAR sets (`.rar`, `.r00`, `.r01`).</violation>
</file>
- verify_all_parts_present: 0-indexed baseline for legacy RAR (r00) - Terminal segment skip: only for unnumbered .rar/.zip, not part01.rar - extract_part_number: cache regex with OnceLock - Remove unused has_rar_parts function - Add TODO to #![allow(dead_code)] for IPC wiring reminder - Add tests: legacy RAR segments, z-format ZIP segments, entrypoints
1 issue found across 2 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src-tauri/src/adapters/driven/extractor/segmentation.rs">
<violation number="1" location="src-tauri/src/adapters/driven/extractor/segmentation.rs:134">
P1: The new first-part baseline check incorrectly skips numeric modern RAR parts (`.partNN.rar`), so missing initial segments (like `part01`) are no longer detected.</violation>
</file>
- first_numeric part check: use extract_part_number presence, not ext

Fixes: .partNN.rar being skipped in baseline detection
1 issue found across 1 file (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src-tauri/src/adapters/driven/extractor/segmentation.rs">
<violation number="1" location="src-tauri/src/adapters/driven/extractor/segmentation.rs:133">
P2: The new `first_numeric` logic can treat terminal `.zip`/`.rar` filenames with digits in the base name as numbered parts, which can hide missing first segments (e.g. missing `z01`).</violation>
</file>
- first_numeric: skip terminal .zip/.rar with basename digits (e.g. game2.zip) but keep .partNN.rar which has digits in the stem extension
1 issue found across 1 file (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src-tauri/src/adapters/driven/extractor/segmentation.rs">
<violation number="1" location="src-tauri/src/adapters/driven/extractor/segmentation.rs:141">
P2: The terminal `.rar/.zip` skip logic is too broad: any digit in the basename makes the file look like a numbered segment, which can falsely report missing parts for valid archives whose base name contains digits.</violation>
</file>
- is_part_pattern helper: distinguish part numbering (part01, 001) from basename digits (game2, v3) in terminal .rar/.zip detection
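The part-pattern distinction this commit describes could look roughly like the following hypothetical sketch (not the actual `is_part_pattern` helper):

```rust
// Digits count as part numbering only when they directly follow ".part"
// (archive.part01.rar) or make up the whole final extension (archive.001);
// digits elsewhere in the basename (game2.zip, mod_v3.rar) do not.
fn is_part_pattern(file_name: &str) -> bool {
    let lower = file_name.to_ascii_lowercase();
    // "name.partNN.rar" style: a digit immediately after ".part"
    if let Some(idx) = lower.find(".part") {
        if lower[idx + 5..].chars().next().is_some_and(|c| c.is_ascii_digit()) {
            return true;
        }
    }
    // "name.NNN" style: the final extension is all digits
    if let Some((_, ext)) = lower.rsplit_once('.') {
        if !ext.is_empty() && ext.chars().all(|c| c.is_ascii_digit()) {
            return true;
        }
    }
    false
}
```

A terminal `.zip`/`.rar` whose basename merely contains digits then fails this check and is treated as unnumbered rather than as a numbered segment.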
Summary
- `VortexArchiveExtractor` with format routing, recursive extraction, and split archive detection
- `ExtractArchiveCommand` (with `spawn_blocking`) + `ListArchiveContentsQuery`

Architecture
Test plan
- `cargo clippy -- -D warnings` clean
- `cargo fmt --check` clean
- integration tests marked `#[ignore]`

Summary by cubic
Adds a native Rust archive extractor for ZIP, RAR v4/v5, 7z, and TAR (gz/bz2/xz/zstd), wired into CQRS for extraction and content listing. Implements the `ArchiveExtractor` port with format-specific handlers and updates the state machine; addresses Linear task 26.

New Features
- Detects split `.zip` with `.zNN` siblings and includes the terminal `.zip`.
- Path safety for `dest_dir`/ancestors/targets; TAR/7z skip symlinks; supports recursive extraction.
- `ExtractArchiveCommand` (runs in `spawn_blocking`) and `ListArchiveContentsQuery`; domain types `ArchiveFormat`, `ArchiveEntry`, `ExtractSummary`, `ExtractionConfig`.

Bug Fixes
- On `spawn_blocking` join failure, transition download to Error.
- Segmentation: `.r00`/`.r01` entrypoints with 0-index baseline; numeric sorting (`.rar` first, `.zip` last); skip continuity only for unnumbered `.rar`/`.zip`; include terminal `.zip` when listing `.zNN`; baseline check uses detected numeric part (keeps `.partNN.rar`); skip terminal `.zip`/`.rar` with digits in basename but keep `.partNN.rar`; add explicit part-pattern check to distinguish part numbering (`partNN`, `001`) from basename digits.
- Symlink checks for `dest_dir` and targets; 7z handler rejects symlinked `dest_dir`; ZIP timestamp returns None.

Written for commit 6e2e389. Summary will update on new commits.
Summary by CodeRabbit
New Features
Security
Changelog