feat(link): auto-group playlist links into one package per playlist #135
Conversation
…task 30)
Add a `PlaylistGrouper` application service that finds-or-creates a
`Package` keyed on a new `external_id` natural key (the playlist URL).
Re-resolving the same YouTube/SoundCloud playlist now reuses the
existing package instead of creating a duplicate (PRD-v2 §P1.11).
- Migration `m20260430_000008` adds `packages.external_id TEXT NULL` +
`idx_packages_external_id` (uniqueness enforced at the application
layer so manual packages can keep `NULL`).
- Domain `Package` gains `external_id: Option<String>` with setter +
accessor; `PackageRepository::find_by_external_id` returns the
oldest matching row deterministically.
- New `GroupPlaylistsCommand` handler routes through the grouper;
new `link_group_playlists` Tauri IPC exposes it to the frontend.
- `MediaGrabberDialog` shows a `PlaylistPackageBanner` ("Will create
package: {name} with {N} items") above playlist items; EN + FR
plural translations under `mediaGrabber.playlistBanner.*`.
- `LinkGrabberView.handleMediaGrabberConfirm` now calls
`link_group_playlists` first, then `download_media_start`, then
attaches every returned download to the package via
`package_add_download` (attach failures non-fatal).
Tests: 7 unit tests on `PlaylistGrouper`, 5 SQLite tests on `find_by_external_id`,
3 handler tests, 4 frontend tests on the banner. `cargo test`: 1327 passing;
`vitest`: 626 passing; clippy + fmt clean.
Note — Reviews paused: this branch is under active development, so CodeRabbit has automatically paused this review to avoid overwhelming you with comments as new commits arrive.
📝 Walkthrough

Adds playlist auto-grouping end-to-end: DB schema (nullable …)

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant FE as Frontend
    participant IPC as Tauri IPC
    participant CB as CommandBus
    participant PG as PlaylistGrouper
    participant Repo as PackageRepository
    participant EB as EventBus
    FE->>IPC: link_group_playlists([PlaylistGroup...])
    IPC->>CB: handle_group_playlists(cmd)
    CB->>PG: group_all(groups, created_at_ms)
    loop each PlaylistGroup
        PG->>Repo: find_by_external_id(trimmed_playlist_id)
        alt exists
            Repo-->>PG: Some(Package)
            PG-->>PG: return result(created=false)
        else not found
            PG->>Repo: save(new Package{external_id})
            PG->>EB: publish(PackageCreated)
            PG-->>PG: return result(created=true)
        end
    end
    PG-->>CB: Vec<PlaylistGroupResult>
    CB-->>IPC: results
    IPC-->>FE: results
    FE->>FE: startMediaDownload(packageId?)
    FE->>IPC: package_add_download(packageId, downloadId)
```
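The find-or-create loop in the diagram can be modeled with a small in-memory sketch. This is illustrative TypeScript, not the actual Rust service: the `Map`-backed repo, the `publish` callback, and the `pkg-N` id scheme are stand-ins; only the field names mirror the PR.

```typescript
// Hypothetical in-memory model of the grouper's find-or-create step.
interface PlaylistGroup {
  playlistId: string;
  playlistName: string;
  itemCount: number;
}

interface PlaylistGroupResult {
  packageId: string;
  packageName: string;
  created: boolean;
  itemCount: number;
}

interface PackageRow {
  id: string;
  name: string;
  externalId: string;
}

function groupAll(
  groups: PlaylistGroup[],
  repo: Map<string, PackageRow>, // keyed by externalId (the natural key)
  publish: (event: { type: "PackageCreated"; id: string; name: string }) => void
): PlaylistGroupResult[] {
  return groups.map((group) => {
    const key = group.playlistId.trim();
    const existing = repo.get(key);
    if (existing) {
      // Reuse: no new row, no PackageCreated event.
      return {
        packageId: existing.id,
        packageName: existing.name,
        created: false,
        itemCount: group.itemCount,
      };
    }
    const row: PackageRow = {
      id: `pkg-${repo.size + 1}`, // stand-in for a UUID
      name: group.playlistName.trim() || "Untitled playlist",
      externalId: key,
    };
    repo.set(key, row);
    publish({ type: "PackageCreated", id: row.id, name: row.name });
    return {
      packageId: row.id,
      packageName: row.name,
      created: true,
      itemCount: group.itemCount,
    };
  });
}
```

Re-running `groupAll` with the same `playlistId` returns the same `packageId` with `created: false` and fires no second event, which is the idempotency property the tests in this PR assert.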
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 5 passed
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 7bb3f5f455
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/application/services/playlist_grouper.rs`:
- Around line 93-121: Concurrent calls can both observe "not found" via
find_by_external_id and create duplicate Package rows and emit PackageCreated;
change the flow to an atomic get-or-create at the repository level (e.g., add a
method like repo.get_or_create_by_external_id(external_id, |create_if_missing|
-> Package) or repo.insert_if_absent returning the existing row), or enforce a
unique constraint on external_id and catch insert conflicts to load and return
the existing Package; update the caller code that currently uses
find_by_external_id, Package::new, repo.save, and
event_bus.publish(DomainEvent::PackageCreated { .. }) so it either (1) invokes
the new atomic repo method which returns the final Package and a flag whether it
was created and only publishes PackageCreated when created==true, or (2)
attempts repo.save and on unique-constraint error fetches the existing Package
and suppresses duplicate PackageCreated emission. Ensure the identifier symbols
referred to (find_by_external_id, repo.save, Package::new, PackageId,
PlaylistGroupResult, DomainEvent::PackageCreated) are updated accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: c678a5b3-9f37-45cc-909d-aa8a4035b58b
📒 Files selected for processing (28)
- CHANGELOG.md
- src-tauri/src/adapters/driven/sqlite/entities/package.rs
- src-tauri/src/adapters/driven/sqlite/migrations/m20260430_000008_add_package_external_id.rs
- src-tauri/src/adapters/driven/sqlite/migrations/mod.rs
- src-tauri/src/adapters/driven/sqlite/package_repo.rs
- src-tauri/src/adapters/driving/tauri_ipc.rs
- src-tauri/src/application/command_bus.rs
- src-tauri/src/application/commands/group_playlists.rs
- src-tauri/src/application/commands/mod.rs
- src-tauri/src/application/commands/move_package_to_folder.rs
- src-tauri/src/application/commands/set_package_password.rs
- src-tauri/src/application/commands/set_package_priority.rs
- src-tauri/src/application/commands/tests_support.rs
- src-tauri/src/application/commands/toggle_package_auto_extract.rs
- src-tauri/src/application/commands/update_package.rs
- src-tauri/src/application/services/mod.rs
- src-tauri/src/application/services/playlist_grouper.rs
- src-tauri/src/domain/model/package.rs
- src-tauri/src/domain/ports/driven/package_repository.rs
- src-tauri/src/domain/ports/driven/tests.rs
- src-tauri/src/lib.rs
- src/i18n/locales/en.json
- src/i18n/locales/fr.json
- src/types/media.ts
- src/views/LinkGrabberView/LinkGrabberView.tsx
- src/views/LinkGrabberView/MediaGrabberDialog/MediaGrabberDialog.tsx
- src/views/LinkGrabberView/MediaGrabberDialog/PlaylistPackageBanner.tsx
- src/views/LinkGrabberView/__tests__/PlaylistPackageBanner.test.tsx
…urrent group_one

Two issues raised on PR #135 review:

1. Codex P1 — `playlistItems.length > 0` skipped auto-grouping when the user clicked Download without manually selecting items. The default list is empty and the backend treats that as "every track", so the common path resolved a playlist batch with no package created. Add an explicit `isPlaylist` flag (and a separate `playlistItemCount`) to MediaGrabberOptions, set from `metadata.isPlaylist`, and gate the IPC on that flag.

2. CodeRabbit / Codex P2 — `find_by_external_id` and `repo.save` are non-atomic, so two concurrent `group_one` calls for the same `playlist_id` could both miss the lookup and insert duplicate rows. Wrap the find-then-save pair in a process-wide `Mutex` (via `OnceLock`) so the race is closed without changing the migration or the repository contract. Recover from a poisoned guard with `into_inner` instead of unwrapping.

Tests: a regression test that spawns 16 threads on the same playlist_id and asserts a single PackageCreated event + a single row + a single package_id across results.

Frontend: a dialog test asserting the new `isPlaylist` / `playlistItemCount` fields are populated when no item is manually selected. Updated existing onConfirm assertions for the new fields.

Refs PR #135.
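The gating fix in item 1 can be sketched as follows. `MediaGrabberOptions` is trimmed to the relevant fields, and `shouldGroupPlaylist` is a hypothetical helper name, not necessarily the one used in the PR:

```typescript
// Sketch of the gating fix: group on an explicit playlist flag, not on
// whether the user happened to select items manually.
interface MediaGrabberOptions {
  isPlaylist: boolean;        // set from metadata.isPlaylist
  playlistItemCount: number;  // total tracks, independent of manual selection
  selectedItems: number[];    // empty means "download every track"
}

function shouldGroupPlaylist(options: MediaGrabberOptions): boolean {
  // Old (buggy) gate was `options.selectedItems.length > 0`, which is false
  // on the common path where the user clicks Download without selecting.
  return options.isPlaylist;
}
```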
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: d8cc6c1444
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/application/services/playlist_grouper.rs`:
- Around line 88-90: The fallback_name function currently embeds the raw
playlist_id (external_id) into the UI, which can leak full URLs; change fn
fallback_name(playlist_id: &str) -> String to return a generic label instead
(e.g., "Playlist" or "Untitled Playlist") rather than interpolating playlist_id,
so the UI uses a stable, non-sensitive fallback that matches the frontend's
generic playlist fallback.
In `@src/views/LinkGrabberView/LinkGrabberView.tsx`:
- Around line 150-161: The current code silently swallows errors from
invoke("package_add_download") by mapping rejections to undefined; change the
attachment step to use Promise.allSettled(result.downloadIds.map(...)) (keep
using invoke("package_add_download") so failures remain non-fatal) and after
allSettled check for any entries with status === "rejected" and trigger a
user-facing toast (e.g., show an error toast) indicating that some downloads
failed to attach to the auto-package while still allowing the downloads to
proceed; reference the invoke call, result.downloadIds, and the
package_add_download operation when adding the allSettled+toast logic.
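The `Promise.allSettled` change suggested in this comment can be sketched like so. `invoke` and `showErrorToast` are stand-ins for the Tauri `invoke` call and the app's toast helper; their signatures here are assumptions for illustration:

```typescript
// Sketch of the suggested attachment step: failures stay non-fatal, but any
// rejected attachment surfaces a user-facing toast instead of vanishing.
async function attachDownloads(
  invoke: (cmd: string, args: Record<string, string>) => Promise<void>,
  showErrorToast: (message: string) => void,
  packageId: string,
  downloadIds: string[]
): Promise<number> {
  const settled = await Promise.allSettled(
    downloadIds.map((downloadId) =>
      invoke("package_add_download", { packageId, downloadId })
    )
  );
  const failed = settled.filter((r) => r.status === "rejected").length;
  if (failed > 0) {
    // Non-fatal: the downloads keep running, but the user is told.
    showErrorToast(`${failed} download(s) failed to attach to the package`);
  }
  return failed;
}
```

Unlike `Promise.all(...map(p => p.catch(() => undefined)))`, `allSettled` preserves each outcome, so the caller can distinguish "all attached" from "some attachments lost".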
In `@src/views/LinkGrabberView/MediaGrabberDialog/MediaGrabberDialog.tsx`:
- Around line 114-123: The banner never receives the reuse flag—update
MediaGrabberDialog to compute a willReuseExisting boolean before rendering
(e.g., by checking the existing-package/grouping lookup for this playlist via
metadata.playlistId or title / a resolvedGrouping result) and pass it into
PlaylistPackageBanner as willReuseExisting={willReuseExisting}; compute the
value using the same lookup used later in the grouping flow (or a synchronous
cached lookup), and ensure the computation happens prior to the JSX that
currently renders PlaylistPackageBanner (which uses metadata,
selectedPlaylistItems and metadata.playlistItems) so the banner reflects the
actual reuse decision.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 75cd1848-817b-4e2f-806e-a5f040dbd87d
📒 Files selected for processing (5)
- src-tauri/src/application/services/playlist_grouper.rs
- src/types/media.ts
- src/views/LinkGrabberView/LinkGrabberView.tsx
- src/views/LinkGrabberView/MediaGrabberDialog/MediaGrabberDialog.tsx
- src/views/LinkGrabberView/__tests__/MediaGrabberDialog.test.tsx
✅ Files skipped from review due to trivial changes (1)
- src/types/media.ts
…llback name

Four PR-review findings on #135:

1. Codex P1 — equivalent YouTube URLs (`watch?v=…&list=…` vs `playlist?list=…`) hashed to different `external_id` values and created separate packages. Add `canonicalPlaylistKey()` in a dedicated module: YouTube hosts collapse to `youtube:playlist:<id>`, other sources keep the raw URL (SoundCloud paths are already stable). Six unit tests for host normalisation, missing list param, malformed URL, and SoundCloud passthrough.

2. Codex P2 — `useTauriMutation("download_media_start")` already surfaces a default error toast on rejection; the local `try/catch` was double-reporting. Drop the redundant toast and unused `downloadFailed` i18n key, keep the early-return so we skip the success path.

3. CodeRabbit Major — `Promise.all(...catch(() => undefined))` swallowed `package_add_download` failures and still showed the success toast. Switch to `Promise.allSettled` and surface a pluralised toast (`playlistAttachFailed`) when any attachment rejects, while keeping downloads themselves running.

4. CodeRabbit Minor — `fallback_name(playlist_id)` persisted names like `Playlist https://youtube.com/...` (or now `Playlist youtube:playlist:PL…`). Replace with the generic `"Untitled playlist"` so the UI never shows the natural key. Test updated.

Refs PR #135.
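Item 1's `canonicalPlaylistKey()` might look roughly like this. This is a minimal sketch of the behavior described in the commit message, not the file's actual contents; later commits in this thread extend the host set (`youtu.be`, SoundCloud):

```typescript
// Minimal sketch: youtube.com host variants collapse to a stable token,
// everything else (including malformed input) keeps the raw URL as the key.
const YOUTUBE_HOSTS = new Set([
  "youtube.com",
  "www.youtube.com",
  "m.youtube.com",
  "music.youtube.com",
]);

function canonicalPlaylistKey(rawUrl: string): string {
  let parsed: URL;
  try {
    parsed = new URL(rawUrl);
  } catch {
    return rawUrl; // malformed URL: fall back to the raw string
  }
  if (YOUTUBE_HOSTS.has(parsed.hostname)) {
    // `list` identifies the playlist in both watch?v=…&list=… and
    // playlist?list=… forms, so both collapse to the same key.
    const list = parsed.searchParams.get("list");
    if (list) return `youtube:playlist:${list}`;
  }
  return rawUrl; // non-YouTube host or missing list param: raw passthrough
}
```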
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 856abe85fa
```ts
const YOUTUBE_HOSTS = new Set([
  "youtube.com",
  "www.youtube.com",
  "m.youtube.com",
  "music.youtube.com",
```
Canonicalize youtu.be playlist URLs before grouping
The canonicalizer only recognizes youtube.com host variants, so playlist links like https://youtu.be/<video>?list=PL... fall back to raw URL keys. That means the same playlist can create different packages depending on whether users paste a youtu.be URL or a youtube.com URL, which breaks the new idempotent “one package per playlist” behavior for a common YouTube share format.
playlist links like `https://youtu.be/<video>?list=PL...` fall back to raw URL keys. That means the same playlist can create different packages depending on whether users paste a `youtu.be` URL or a `youtube.com` URL
Fixed in 5660a37. Added youtu.be to YOUTUBE_HOSTS in canonicalPlaylistKey.ts. New regression test in canonicalPlaylistKey.test.ts asserts https://youtu.be/<v>?list=PL12345 collapses to the same youtube:playlist:PL12345 token as the long-form youtube.com/playlist?list=PL12345, so the dedupe holds across share formats.
Actionable comments posted: 1
🧹 Nitpick comments (1)
src-tauri/src/application/services/playlist_grouper.rs (1)
125-155: ⚡ Quick win — publish `PackageCreated` after releasing the grouper lock.
`event_bus.publish(...)` still runs while `_guard` is live, so any synchronous subscriber work stretches the global critical section and blocks other grouping calls unnecessarily. Narrow the mutex scope to the lookup/save only.

Suggested refactor:
```diff
-        let _guard = acquire_group_lock();
-
-        if let Some(existing) = self.repo.find_by_external_id(trimmed_id)? {
-            return Ok(PlaylistGroupResult {
-                package_id: existing.id().clone(),
-                package_name: existing.name().to_string(),
-                created: false,
-                item_count: group.item_count,
-            });
-        }
-
-        let trimmed_name = group.playlist_name.trim();
-        let name = if trimmed_name.is_empty() {
-            fallback_name()
-        } else {
-            trimmed_name.to_string()
-        };
-
-        let package_id = PackageId::new(Uuid::new_v4().to_string());
-        let mut package = Package::new(
-            package_id.clone(),
-            name.clone(),
-            PackageSourceType::Playlist,
-            created_at_ms,
-        );
-        package.set_external_id(Some(trimmed_id.to_string()));
-        self.repo.save(&package)?;
-        self.event_bus.publish(DomainEvent::PackageCreated {
-            id: package_id.clone(),
-            name: name.clone(),
-        });
+        let (package_id, name) = {
+            let _guard = acquire_group_lock();
+
+            if let Some(existing) = self.repo.find_by_external_id(trimmed_id)? {
+                return Ok(PlaylistGroupResult {
+                    package_id: existing.id().clone(),
+                    package_name: existing.name().to_string(),
+                    created: false,
+                    item_count: group.item_count,
+                });
+            }
+
+            let trimmed_name = group.playlist_name.trim();
+            let name = if trimmed_name.is_empty() {
+                fallback_name()
+            } else {
+                trimmed_name.to_string()
+            };
+
+            let package_id = PackageId::new(Uuid::new_v4().to_string());
+            let mut package = Package::new(
+                package_id.clone(),
+                name.clone(),
+                PackageSourceType::Playlist,
+                created_at_ms,
+            );
+            package.set_external_id(Some(trimmed_id.to_string()));
+            self.repo.save(&package)?;
+            (package_id, name)
+        };
+
+        self.event_bus.publish(DomainEvent::PackageCreated {
+            id: package_id.clone(),
+            name: name.clone(),
+        });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/application/services/playlist_grouper.rs` around lines 125 - 155, The publish call for DomainEvent::PackageCreated is executed while the group lock guard from acquire_group_lock() is still held, which can block other grouping operations; drop or limit the scope of _guard so the mutex is released before calling self.event_bus.publish(...). Concretely, ensure the critical section only covers the find_by_external_id and self.repo.save(&package) operations (the lookup/save and assignment of external id) and move the event_bus.publish(DomainEvent::PackageCreated { ... }) to run after the guard is out of scope (or explicitly drop(_guard)) so publish does not execute inside the locked region.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/views/LinkGrabberView/canonicalPlaylistKey.ts`:
- Around line 1-20: canonicalPlaylistKey currently ignores short-share youtu.be
URLs so links like https://youtu.be/<id>?list=PL123 fall back to raw URL; update
the logic to treat youtu.be as a YouTube host (either add "youtu.be" to the
YOUTUBE_HOSTS set or explicitly handle parsed.hostname === "youtu.be") and
continue to read parsed.searchParams.get("list") so you return
`youtube:playlist:${list}` for those inputs; also add a regression test that
passes a youtu.be URL containing a list query (e.g. youtu.be/<video>?list=PL...)
into canonicalPlaylistKey and asserts the returned key equals
"youtube:playlist:PL...".
---
Nitpick comments:
In `@src-tauri/src/application/services/playlist_grouper.rs`:
- Around line 125-155: The publish call for DomainEvent::PackageCreated is
executed while the group lock guard from acquire_group_lock() is still held,
which can block other grouping operations; drop or limit the scope of _guard so
the mutex is released before calling self.event_bus.publish(...). Concretely,
ensure the critical section only covers the find_by_external_id and
self.repo.save(&package) operations (the lookup/save and assignment of external
id) and move the event_bus.publish(DomainEvent::PackageCreated { ... }) to run
after the guard is out of scope (or explicitly drop(_guard)) so publish does not
execute inside the locked region.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7c69c83e-146f-4cd5-a465-342a07c7f017
📒 Files selected for processing (6)
- src-tauri/src/application/services/playlist_grouper.rs
- src/i18n/locales/en.json
- src/i18n/locales/fr.json
- src/views/LinkGrabberView/LinkGrabberView.tsx
- src/views/LinkGrabberView/__tests__/canonicalPlaylistKey.test.ts
- src/views/LinkGrabberView/canonicalPlaylistKey.ts
✅ Files skipped from review due to trivial changes (1)
- src/i18n/locales/fr.json
🚧 Files skipped from review as they are similar to previous changes (1)
- src/i18n/locales/en.json
…ation

Two follow-up review findings on PR #135:

1. Codex P2 / CodeRabbit Major — `canonicalPlaylistKey()` only recognised `youtube.com` host variants, so `youtu.be/<v>?list=PL…` short-share URLs hashed to a different `external_id` than the equivalent `youtube.com/playlist?list=PL…`. Add `youtu.be` to the recognised host set + a regression test asserting both forms collapse to the same canonical token.

2. Codex P2 — package creation ran before `download_media_start`. A failed start (network, plugin, backend) left an empty playlist package behind, accumulating clutter on every retry. Reorder the flow: `download_media_start` first, then group on success, then attach. The grouping branch already short-circuits on `result.downloadIds.length === 0`, so partial failures still avoid creating empty packages.

Refs PR #135.
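The reordered orchestration in item 2 can be sketched as a plain async function. All three callbacks are stand-ins for the real IPC calls (`download_media_start`, `link_group_playlists`, `package_add_download`); the function name and return shape are illustrative:

```typescript
// Sketch of the reordered confirm flow: start downloads first, group only on
// success, then attach. No package is created if the start fails or yields
// zero downloads, so retries never accumulate empty packages.
async function confirmPlaylistDownload(
  startDownloads: () => Promise<{ downloadIds: string[] }>,
  groupPlaylist: () => Promise<{ packageId: string }>,
  attach: (packageId: string, downloadId: string) => Promise<void>
): Promise<string | null> {
  const result = await startDownloads();        // 1. may reject: nothing created yet
  if (result.downloadIds.length === 0) {
    return null;                                // 2. no downloads: skip grouping
  }
  const { packageId } = await groupPlaylist();  // 3. find-or-create the package
  await Promise.allSettled(                     // 4. attach; failures are non-fatal
    result.downloadIds.map((id) => attach(packageId, id))
  );
  return packageId;
}
```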
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 5660a37498
Codex P1 follow-up on PR #135: the in-process `PlaylistGrouper` mutex only serialises writers within one Tauri process. Concurrent CLI / multi-window / external writers could still insert duplicate `external_id` rows for the same playlist key. Make the index `UNIQUE`. SQLite treats each `NULL` in a UNIQUE index as distinct, so manual packages (which keep `external_id NULL`) are not restricted to a single row — the constraint only fires on duplicate non-NULL keys, which is exactly the invariant we want.

Replaces the obsolete `find_by_external_id_returns_oldest_when_duplicates` test (the storage no longer permits the duplicate it constructed) with:

- `test_save_rejects_duplicate_external_id_via_unique_index` — a second insert with the same `external_id` returns a UNIQUE-constraint error and leaves the first row intact.
- `test_unique_index_allows_multiple_null_external_ids` — two manual packages with NULL `external_id` coexist under the new constraint.

Refs PR #135.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: aae8cf6f84
Codex P2 follow-up on PR #135: `canonicalPlaylistKey()` only handled YouTube hosts, so SoundCloud playlist links pasted with tracking parameters (`?in=…&utm_source=…`), via `m.soundcloud.com`, or with a trailing slash all hashed to different `external_id` values and created duplicate packages.

Add a `SOUNDCLOUD_HOSTS` set covering `soundcloud.com`, `www.soundcloud.com`, `m.soundcloud.com`, and collapse those URLs to `soundcloud:<lowercased-path>` keyed on the URL path (the user/set slug pair is the natural identifier of a playlist). Tracking query params, the trailing slash, and host case differences all drop out on the way to the key. Short-link hosts like `on.soundcloud.com/<id>` need an HTTP round-trip to resolve and stay as raw URLs — out of scope for a sync helper.

New regression tests in `canonicalPlaylistKey.test.ts`:

- canonical SoundCloud playlist token format
- host / tracking-param / trailing-slash equivalence
- case insensitivity on the path
- unrecognised hosts (Vimeo) keep raw URL
- `on.soundcloud.com` short-share fall-through

Refs PR #135.
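The SoundCloud branch described above might look roughly like this, shown here as a standalone helper for clarity (the PR folds it into `canonicalPlaylistKey`; the function name `canonicalSoundcloudKey` is illustrative):

```typescript
// Sketch of the SoundCloud canonicalisation: recognised hosts collapse to a
// path-keyed token; anything else, including on.soundcloud.com short links,
// keeps the raw URL.
const SOUNDCLOUD_HOSTS = new Set([
  "soundcloud.com",
  "www.soundcloud.com",
  "m.soundcloud.com",
]);

function canonicalSoundcloudKey(rawUrl: string): string {
  let parsed: URL;
  try {
    parsed = new URL(rawUrl);
  } catch {
    return rawUrl; // malformed input: raw passthrough
  }
  if (!SOUNDCLOUD_HOSTS.has(parsed.hostname)) {
    return rawUrl; // includes on.soundcloud.com short links
  }
  // The /user/sets/slug path is the playlist's natural identifier; tracking
  // params, trailing slashes, and host case differences all drop out here.
  const path = parsed.pathname.replace(/\/+$/, "").toLowerCase();
  return `soundcloud:${path}`;
}
```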
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 082b040d59
Codex P2 follow-up on PR #135: with the UNIQUE index in place, a cross-process writer can insert the same `external_id` between our `find_by_external_id` and `save` (the in-process mutex only serialises within one process). The SQLite constraint makes our save fail; the grouper used to bubble that error up as a playlist-grouping toast even though the correct package already existed.

Wrap the save in conflict-recovery: on `Err`, re-query `find_by_external_id` once. If a row is now visible, return it as a reuse (`created = false`, no `PackageCreated` event). If still missing, the save failed for a real reason — propagate.

New regression test `test_group_one_recovers_when_save_loses_unique_race` uses a `RacingPackageRepo` mock that simulates the race: `save` seeds the inner store with the "winner" row and rejects the caller's row with a UNIQUE-style error. The grouper must surface the winner as a reuse and fire no extra `PackageCreated`.

Refs PR #135.
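The conflict-recovery flow described above can be modeled language-neutrally. This TypeScript sketch mirrors the Rust logic under assumed repo shapes (the real code uses `Result`, `PackageRepository`, and a UUID generator):

```typescript
// Sketch of save-with-conflict-recovery: if save() loses a UNIQUE race to
// another writer, re-query once and return the winner as a reuse instead of
// surfacing the constraint error.
interface Pkg { id: string; externalId: string; }

interface Repo {
  findByExternalId(key: string): Pkg | undefined;
  save(pkg: Pkg): void; // throws on duplicate externalId (UNIQUE index)
}

function groupOne(
  repo: Repo,
  key: string,
  makeId: () => string
): { packageId: string; created: boolean } {
  const existing = repo.findByExternalId(key);
  if (existing) return { packageId: existing.id, created: false };
  const pkg: Pkg = { id: makeId(), externalId: key };
  try {
    repo.save(pkg);
  } catch (err) {
    // A cross-process writer may have inserted the key between find and save.
    const winner = repo.findByExternalId(key);
    if (winner) return { packageId: winner.id, created: false }; // reuse, no event
    throw err; // genuine failure: propagate
  }
  return { packageId: pkg.id, created: true };
}
```

The test below mirrors the `RacingPackageRepo` idea: `save` seeds the store with the winner's row, then throws a UNIQUE-style error at the caller.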
Summary
Auto-group YouTube/SoundCloud playlist links into one Package per unique playlist. When the same playlist is re-resolved, the existing Package is reused, eliminating duplicates. Implements Task 30 (P1.11) from PRD v2 roadmap.
Why
Resolving playlists multiple times created duplicate packages. This workflow is common in JDownloader and expected by users. Idempotent grouping via `external_id` (playlist URL) ensures deterministic behavior on re-resolution.

Changes
Backend service: New `PlaylistGrouper` (`src-tauri/src/application/services/playlist_grouper.rs`) with `group_one()` / `group_all()` methods. Input: `PlaylistGroup { id, name, item_count }`. Output: `PlaylistGroupResult { package_id, created: bool }`. Deterministic reuse via `find_by_external_id()` returning the oldest match.

Domain & persistence:
- `Package.external_id: Option<String>` stores the playlist URL as a natural key
- `packages.external_id TEXT NULL` column + `idx_packages_external_id` index (nullable: manual packages keep NULL)
- `PackageRepository::find_by_external_id()` trait method + sea_orm implementation

Command & IPC:
- `GroupPlaylistsCommand` handler routes through `PlaylistGrouper`
- `link_group_playlists` Tauri IPC command with DTOs (camelCase conversion)

Frontend:
- `PlaylistPackageBanner` component shows "Will create package: {name} with {N} items" or "Reusing playlist package {name}"
- `LinkGrabberView.handleMediaGrabberConfirm` orchestrates group → download_media_start → package_add_download with `Promise.allSettled()`

Test coverage: 7 unit tests (PlaylistGrouper service), 3 handler tests, 5 SQLite repo tests, 4 component tests. All assertions on idempotency, fallback naming, batch operations, determinism.
Testing
Manual verification
Notes for Reviewer
- `find_by_external_id()` returns the oldest match for consistent reuse across restarts. This is the natural choice for determinism without sequence numbers.
- `mutate` callback changed to `mutateAsync` to control order: group → start → attach. Ensures the package exists before attaching downloads.
- `count` parameter for "1 item" vs "{N} items" in both EN and FR.

Checklist
CHANGELOG.md, inline comments)

Summary by cubic
Auto-group YouTube/SoundCloud playlist links into a single package per playlist and reuse it on re-resolve to avoid duplicates. Adds a backend grouper, a new `link_group_playlists` IPC, and a small UI banner; fulfills PRD v2 P1.11 (Task 30).

New Features

- `PlaylistGrouper` service finds-or-creates a `Package` keyed by `Package.external_id` (playlist URL). Repository adds `find_by_external_id()` and returns the oldest match for deterministic reuse.
- `GroupPlaylistsCommand` routes through the grouper. New Tauri IPC `link_group_playlists` returns `{ packageId, packageName, created, itemCount }` for the UI.
- `PlaylistPackageBanner` shows "will create" vs "will reuse" with item count.
- `LinkGrabberView` flow is group → `download_media_start` → attach each `downloadId` via `package_add_download` (attach failures are non-fatal). EN/FR i18n strings added.

Migration

- `packages.external_id` column and index `idx_packages_external_id` (uniqueness enforced in app to allow manual packages with `NULL`).

Written for commit 7bb3f5f.